
Agent Best Practices

This guide covers advanced optimization techniques for your AI agents, common agent patterns, and troubleshooting approaches.

Instruction Design Patterns

The Role-Task-Format Pattern

Structure your instructions with three clear sections:
# Role Definition
You are [specific role] with expertise in [domain].
Your goal is to help users [primary objective].

# Task Instructions
When asked to [task type]:
1. First, [initial step]
2. Then, [analysis step]
3. Finally, [output step]

# Output Format
Always structure your response as:
## [Section 1]
[Content description]

## [Section 2]
[Content description]
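If you assemble agent instructions programmatically, the three sections above compose naturally from their parts. The sketch below is illustrative only; `build_instructions` is a hypothetical helper, not a platform API.

```python
def build_instructions(role: str, tasks: list[str], output_sections: dict[str, str]) -> str:
    """Assemble a Role-Task-Format prompt from its three parts (hypothetical helper)."""
    task_lines = "\n".join(f"{i}. {step}" for i, step in enumerate(tasks, start=1))
    format_lines = "\n\n".join(
        f"## {heading}\n{description}" for heading, description in output_sections.items()
    )
    return (
        f"# Role Definition\n{role}\n\n"
        f"# Task Instructions\n{task_lines}\n\n"
        f"# Output Format\nAlways structure your response as:\n{format_lines}"
    )

prompt = build_instructions(
    role="You are a customer success analyst with expertise in account health.",
    tasks=["Gather current metrics", "Compare against the prior period", "Summarize key risks"],
    output_sections={
        "Health Assessment": "Overall status and trend",
        "Recommended Actions": "Prioritized next steps",
    },
)
```

Keeping the three parts as separate inputs makes it easy to reuse one role definition across several task variants.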

The Guardrails Pattern

Add explicit constraints to prevent unwanted behavior:
# Constraints
You must:
- Always cite data sources
- Ask for clarification when uncertain
- Admit when you don't have enough information

You must never:
- Fabricate statistics or data
- Make promises about outcomes
- Share sensitive information
- Provide advice outside your expertise

The Example Pattern

Include examples for complex outputs:
When providing account analysis, use this format:

EXAMPLE OUTPUT:
## Health Assessment: At Risk
The account shows declining engagement over the past 30 days.

### Key Metrics
| Metric | Current | 30 Days Ago | Trend |
|--------|---------|-------------|-------|
| Health Score | 62 | 78 | ↓ 16 |
| Active Users | 12 | 23 | ↓ 48% |

### Recommended Actions
1. **Schedule check-in call** - Understand usage drop
2. **Review training needs** - May need additional enablement

Tool Usage Optimization

Minimal Toolkits

Start with the minimum tools needed:
| Agent Purpose | Recommended Toolkits |
|---------------|----------------------|
| Account analysis | Account Data, Activity Data |
| Task management | Account Data, Task Management |
| Outreach drafting | Account Data, Contact Data |
| Full CS workflow | Account Data, Activity Data, Task Management, Communication |

Enabling too many toolkits can cause:
  • Slower responses
  • Confusion about which tool to use
  • Unnecessary tool calls
  • Higher costs
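One way to keep toolkit selection minimal is to encode the table above as data and fail loudly on anything unrecognized. The mapping keys below are hypothetical identifiers, not platform values.

```python
# Hypothetical mapping of agent purpose to the minimal toolkit set from the table above.
MINIMAL_TOOLKITS = {
    "account_analysis": ["Account Data", "Activity Data"],
    "task_management": ["Account Data", "Task Management"],
    "outreach_drafting": ["Account Data", "Contact Data"],
    "full_cs_workflow": ["Account Data", "Activity Data", "Task Management", "Communication"],
}

def toolkits_for(purpose: str) -> list[str]:
    """Return the minimal toolkit list, raising on unknown purposes."""
    try:
        return MINIMAL_TOOLKITS[purpose]
    except KeyError:
        raise ValueError(f"Unknown agent purpose: {purpose!r}") from None
```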

Explicit Tool Instructions

When agents should use specific tools, say so explicitly:
When analyzing an account:
1. ALWAYS use the Account Data toolkit to fetch current health metrics
2. ALWAYS use the Activity Data toolkit to review recent engagement
3. Only use Task Management if the user asks about follow-up actions

Reasoning Level Optimization

When to Use Each Level

| Reasoning Level | Use When | Examples |
|-----------------|----------|----------|
| Low | Simple lookups, formatting | “Get Acme Corp’s health score” |
| Medium | Standard analysis, most tasks | “Summarize Acme Corp’s recent activity” |
| High | Complex reasoning, multi-step | “Identify churn risk patterns across accounts” |
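The table above can be approximated as a routing heuristic. This is a deliberately rough sketch with hypothetical keywords; real routing would depend on your platform's task metadata, not substring matching.

```python
# Rough, hypothetical heuristic for picking a reasoning level from a task description.
# Substring matching is crude (e.g. "target" contains "get"); treat this as a sketch.
def suggest_reasoning_level(task: str) -> str:
    task_lower = task.lower()
    if any(kw in task_lower for kw in ("pattern", "across accounts", "root cause", "multi-step")):
        return "High"
    if any(kw in task_lower for kw in ("get ", "fetch", "format", "look up")):
        return "Low"
    return "Medium"  # sensible default for standard analysis
```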

Signs of Wrong Reasoning Level

Too Low:
  • Answers are superficial
  • Missing important context
  • Not connecting related information
Too High:
  • Responses are slow
  • Over-analysis for simple questions
  • Unnecessary complexity

Common Agent Patterns

The Research Agent

Focused on gathering and synthesizing information.
Name: Account Researcher
Model: Gemini 3 Pro
Reasoning: High
Toolkits: Account Data, Activity Data, Contact Data

Instructions:
You are a thorough researcher. When asked about an account:
1. Gather all relevant data using available tools
2. Look for patterns and anomalies
3. Cross-reference multiple data sources
4. Synthesize findings into clear insights
5. Highlight what needs attention

Never provide recommendations without data support.
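The agent definitions in this section all share the same fields (name, model, reasoning, toolkits, instructions). If you track agent configurations in code or version control, one possible representation is a small dataclass; this schema is hypothetical, not the platform's own.

```python
from dataclasses import dataclass

@dataclass
class AgentConfig:
    """One possible representation of an agent definition (hypothetical schema)."""
    name: str
    model: str
    reasoning: str  # "Low" | "Medium" | "High"
    toolkits: list[str]
    instructions: str

researcher = AgentConfig(
    name="Account Researcher",
    model="Gemini 3 Pro",
    reasoning="High",
    toolkits=["Account Data", "Activity Data", "Contact Data"],
    instructions="You are a thorough researcher. When asked about an account, gather all relevant data...",
)
```

Storing configurations this way makes it easy to diff instruction changes between versions during the iteration workflow described later.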

The Action Agent

Focused on executing tasks efficiently.
Name: Task Executor
Model: Gemini 2.5 Flash
Reasoning: Low
Toolkits: Account Data, Task Management

Instructions:
You help users quickly create and manage tasks.
Keep interactions brief and action-oriented.

When creating tasks:
- Get title, assignee, and due date
- Create immediately without excessive questions
- Confirm completion

When updating tasks:
- Find the task, make the change, confirm done

The Advisory Agent

Focused on providing recommendations.
Name: Success Advisor
Model: Gemini Auto
Reasoning: Medium
Toolkits: Account Data, Activity Data

Instructions:
You provide strategic recommendations for customer success.

For each recommendation:
1. State the recommendation clearly
2. Explain the reasoning with data
3. Outline implementation steps
4. Note potential risks

Always prioritize recommendations by impact.

Performance Tuning

Speed Optimization

To make agents faster:
  1. Reduce toolkits - Only enable what’s necessary
  2. Lower reasoning - Use Low or Medium for simple tasks
  3. Use Flash models - Gemini Flash is faster than Pro
  4. Simplify instructions - Shorter, clearer instructions process faster

Quality Optimization

To improve output quality:
  1. Use Pro models - Better reasoning capability
  2. Increase reasoning level - More thorough analysis
  3. Add examples - Show exactly what you want
  4. Be specific - Detailed instructions get better results

Cost Optimization

To reduce costs:
  1. Use Auto model selection - System optimizes automatically
  2. Match reasoning to task - Don’t over-reason simple tasks
  3. Minimize tool calls - Explicit instructions reduce unnecessary calls
  4. Review usage patterns - Identify where lower settings work

Error Handling

Graceful Degradation

Add instructions for handling incomplete data:
If you cannot find sufficient data:
1. State clearly what information is missing
2. Explain how this limits your analysis
3. Provide what insights you can from available data
4. Recommend how to get the missing information

Scope Boundaries

Help the agent know when to redirect:
If asked about topics outside customer success:
"I specialize in customer success and account analysis.
For [topic], please consult [appropriate resource]."

If asked to take actions you cannot perform:
- If the agent has the Communication toolkit with email permissions:
  "I can send that email for you. Here's what I'll send: [draft]
  Would you like me to send it now, or would you prefer to make changes first?"
- If the agent does NOT have email sending capabilities:
  "I can help you draft that email, but I cannot send it directly since I don't have email sending permissions.
  Here's a ready-to-send version: [draft]"

Testing Strategies

Systematic Testing

Test each capability systematically:
| Test Category | Example Prompts |
|---------------|-----------------|
| Happy path | “Analyze Acme Corp’s account health” |
| Edge cases | “Analyze an account I just created with no activity” |
| Ambiguity | “Acme” (multiple possible meanings) |
| Out of scope | “What’s the weather?” |
| Error conditions | Reference a non-existent account |

A/B Testing Instructions

When optimizing, test variations:
Version A:
"Analyze the account and provide recommendations."

Version B:
"Analyze the account by reviewing health metrics, recent
activities, and contact engagement. Provide 3 specific
recommendations ranked by impact."
Compare results to see which produces better outcomes.
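A minimal A/B comparison can be scripted once you have a way to run the agent. In the sketch below, `run_agent` is a stand-in stub for whatever call your platform exposes, and the scorer is a toy keyword check rather than a real evaluation.

```python
# Toy A/B harness. `run_agent` is a stub standing in for a real platform call,
# and the scorer is a simple keyword check, not a genuine quality evaluation.
def run_agent(instructions: str, prompt: str) -> str:
    # Stub: a real implementation would call the agent platform here.
    return f"[response shaped by instructions: {instructions}] for prompt: {prompt}"

def score(response: str, must_mention: list[str]) -> int:
    """Count how many required terms appear in the response."""
    return sum(term in response for term in must_mention)

def ab_test(version_a: str, version_b: str, prompt: str, must_mention: list[str]) -> str:
    """Return which instruction version scores higher on a single prompt."""
    score_a = score(run_agent(version_a, prompt), must_mention)
    score_b = score(run_agent(version_b, prompt), must_mention)
    return "A" if score_a >= score_b else "B"
```

In practice you would run many prompts per version and score with a rubric or human review; a single keyword check is only enough to show the harness's shape.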

Troubleshooting Guide

Symptoms: Responses could apply to any account, lack specific data.
Solutions:
  • Add explicit instructions to cite specific data
  • Ensure toolkits are enabled and working
  • Add examples showing data-rich responses
  • Increase reasoning level

Symptoms: Responses take too long, feel sluggish.
Solutions:
  • Reduce enabled toolkits
  • Lower reasoning level
  • Switch to Flash model
  • Simplify instructions

Symptoms: Includes data or statistics that don’t exist.
Solutions:
  • Add explicit constraints against fabrication
  • Require citation of sources
  • Add instruction to say “I don’t have that data”
  • Use lower temperature (if configurable)

Symptoms: Gives generic answers instead of fetching data.
Solutions:
  • Verify toolkits are enabled
  • Add explicit “ALWAYS use [toolkit] to…” instructions
  • Reduce the number of toolkits (less confusion)
  • Test tools individually

Symptoms: Answers questions it shouldn’t, wanders from purpose.
Solutions:
  • Add clear scope constraints
  • Add explicit redirect instructions
  • Narrow the agent’s defined role
  • Add examples of out-of-scope handling

Iteration Workflow

The Improvement Cycle

  1. Deploy - Release initial version
  2. Observe - Monitor how users interact
  3. Identify - Find common issues or gaps
  4. Adjust - Modify instructions/configuration
  5. Test - Verify improvement
  6. Deploy - Release updated version
  7. Repeat

Feedback Collection

Gather feedback on:
  • Response accuracy
  • Response usefulness
  • Missing capabilities
  • Confusing outputs
  • Speed/performance
Use this feedback to guide iterations.

Next Steps

Creating Agents

Step-by-step agent creation guide

AI Components

Use agents in automation flows