
Agentforce: Common Agentforce Behavior Issues and How to Resolve Them

Published: Feb 2, 2026
Description

Agentforce behavior issues occur when an agent is active and accessible, but its responses or execution do not match expectations.

Common scenarios include:

  • The agent does not invoke the expected topic or action

  • Responses are incomplete, truncated, or rewritten

  • Knowledge citations do not appear in responses

  • The agent escalates unexpectedly to a human

  • Identical inputs produce inconsistent outputs

These issues are usually caused by:

  • Ambiguous or overly complex prompts

  • Non-deterministic context variable handling

  • Topic or action filtering logic

  • Token or execution limits

  • Response validation or groundedness checks

This article focuses on behavior-level troubleshooting that can be performed using Agent Builder, prompt configuration, and setup validation.

Resolution

Prompting Issues

Prompting issues are the most frequent cause of unexpected Agentforce behavior. Unlike deterministic systems such as traditional Einstein Bots, LLM-based agents can behave unpredictably if prompts are unclear or inconsistent.

Where Prompting Occurs

Agent behavior is influenced by prompts defined in:

  • Topic descriptions

  • Topic instructions

  • Action descriptions

  • Action input descriptions

  • Action output descriptions

Best Practices

  • Use explicit and unambiguous language in topic and action descriptions

  • Avoid relying on the LLM to infer business rules

  • Clearly state when an action should be used and when it should not

  • Ensure action outputs are clearly described so the LLM understands how to use them
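To make these practices concrete, the following is a hypothetical action description written along these lines. The action name, input, and output are invented for illustration and do not correspond to any standard Agentforce action:

```text
Action: Check Order Status (hypothetical example)
Description: Use this action when the customer asks about the shipping or
delivery status of an existing order. Do NOT use it for returns, refunds,
or new orders. Requires the order number as input.
Input (orderNumber): The customer's 8-digit order number. Ask the customer
for it if it has not been supplied.
Output (orderStatus): A short status string (for example "Shipped" or
"Processing") that should be relayed to the customer verbatim.
```

Note how the description states both when to use the action and when not to, and how the output description tells the LLM exactly what to do with the returned value.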

 


Session Won’t Start or “Something went wrong. Refresh the conversation and try again”

If conversations fail to start or display generic errors, validate the following setup items.

Troubleshooting Steps

  1. Toggle Agentforce

    • Navigate to Setup → Einstein Copilot

    • Toggle Agentforce off, save, then toggle it on again

  2. Toggle Einstein Bots

    • Navigate to Setup → Einstein Bots

    • Toggle off, save, then toggle on

  3. Toggle Einstein Setup

    • Navigate to Setup → Einstein GPT Setup

    • Toggle off, save, then toggle on

  4. Verify the Default Agent Exists

    • Ensure the “Enable the Agentforce (Default) Agent” option is enabled

    • Toggle it off and on again to ensure the default agent is properly created

 


Topic or Action Not Getting Called

Common Causes

  • Prompting issues

  • Topic or action filters

  • Context variables not being set

  • Permission restrictions

What to Check

Topic and Action Filters

  • Confirm filters are configured correctly

  • Ensure required context variables exist before filters are evaluated

Important Considerations

  • Context variables are not set by default in Agent Builder

  • If filters depend on context variables, set default values in Builder before testing

  • Do not rely on prompt instructions to set context variables

Recommended approach: Use variable mapping from an action output to set context variables deterministically. If needed, create a simple utility action that returns fixed values used only for filtering.

 


Citations Not Appearing in Responses

Citations are displayed only when:

  1. The Knowledge action returns citations

  2. Citations are enabled for that action

Troubleshooting Steps

  • Open the Knowledge action configuration

  • Confirm that citations are enabled

  • Test the Knowledge action independently to verify it returns results

  • Ensure the final response is based on Knowledge output and not overwritten later

Also verify that:

  • Knowledge articles are published

  • Articles are indexed

  • Articles are accessible to the agent

 


 

Response or Content Is Getting Truncated

Truncation Limits

 

  • LLM response: ~2048 tokens

  • Action output: ~65,000 characters

Common Symptoms

  • Responses cut off mid-sentence

  • Long summaries or emails missing content

  • Partial action results displayed

Mitigation Strategies

  • Use Show in Conversation to reference action output instead of embedding it

    • Reduces token usage

    • Markdown formatting is not supported

  • Use ES Types with variable mapping to store and reference large outputs

  • Store large user inputs in custom objects and reference them through actions

 


Streaming Issues

Symptoms

  • Frequent “Something went wrong” messages

  • Agent responses briefly appear and then get rewritten

Troubleshooting Steps

  • Review prompt templates for strict formatting requirements (for example, JSON-heavy outputs)

  • Some models are sensitive to formatting consistency; switching models may improve stability

  • Reduce overly complex prompt instructions that generate large or deeply nested responses

  • Ensure action outputs used during streaming are concise and well-structured

Best Practices

  • Avoid generating very large responses in a single turn

  • Break complex interactions into smaller steps

  • Define clear fallback responses to prevent repeated regeneration

 


Agent Response Gets Rewritten

If the agent responds and then retracts or rewrites the message, groundedness checks may be failing.

Mitigation

  • Review topic prompts for accuracy and coverage

  • Avoid vague or overly broad instructions

  • Ensure responses are grounded in Knowledge, data, or clearly defined logic

 


Topic or Action Not Available

Common Reasons

  • Filtering logic excludes the topic or action

  • Permission restrictions

Troubleshooting Steps

  • Review filter logic carefully

  • Confirm required context variables are set deterministically

  • Ensure the topic or action is selected in the agent configuration

  • If filtering is ruled out, review permissions with an org administrator

 


URL Redaction in Responses

URLs that are not explicitly trusted may be redacted from agent responses for security reasons.

Resolution

  • Add trusted URLs in Setup → Trusted URLs

  • Ensure URLs are generated by actions or explicitly allowed

 


Unexpected Human Escalation

Escalation can occur due to:

  • The agent being unable to proceed

  • Escalation topics being selected

  • System limits being reached

Recommendations

  • Review escalation topic naming and logic

  • Ensure escalation instructions are intentional

  • Provide clear fallback topics and guidance

 


Agent Exceeds Maximum LLM Calls

An agent can make up to 8 LLM calls per user turn.

Common Causes

  • Repeated action failures

  • Validation loops

  • Overly complex workflows

Mitigation

  • Simplify topic flows

  • Reduce retries

  • Split complex workflows into multiple steps

 


Flow Execution Errors

Error Message

“An error occurred when executing a flow interview”

Resolution

Ensure the Service Agent User has the following assignments:

  • Agentforce Service Agent Secure Base

  • Agentforce_Service_Agent permissions

  • Permission Set Group: AgentforceServiceAgentUserPsg

  • Permission Set License: Agentforce Service Agent User

 


Language or Locale Issues

If the agent responds in the wrong language:

  • Verify the agent’s language configuration

  • Check locale settings in Setup

  • Ensure prompts are language-consistent

 


Global Instructions

Global Instructions define system-level behavior for Agentforce.

Recommended Uses

  • Maintain consistent tone and style

  • Define polite fallback responses

  • Handle ambiguity

  • Guide sensitive scenarios

  • Address competitor mentions

Knowledge Article Number

005305511

 
Salesforce Help | Article