AI Components
AI components bring large language model (LLM) capabilities into your automations. Use them to generate content, analyze data, make decisions, and perform complex reasoning tasks.

Overview
Agent Studio provides three types of AI components:

| Component | Purpose | Best For |
|---|---|---|
| LLM V2 | Direct text generation | Simple prompts, content generation |
| LLM with Structured Output V2 | Generate structured JSON | Data extraction, form filling |
| Agent V2 | Autonomous reasoning with tools | Complex multi-step tasks |
LLM V2
A straightforward component for generating text using a large language model.

Configuration
| Field | Description |
|---|---|
| System Prompt | Instructions that define the LLM’s behavior |
| User Prompt | The specific request (supports @ notation) |
| Temperature | Creativity level (0 = deterministic, 1 = creative) |
Example: Generate Email Content
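As a hypothetical illustration (the prompt wording and output below are assumptions, not the product's shipped sample):

```
System Prompt:
You are a customer success assistant. Write concise, friendly
emails in a professional tone. Keep them under 120 words.

User Prompt:
Draft a renewal follow-up email to @contact.name referencing
@activity.last_meeting.

Output:
Hi there, thanks for taking the time to meet last week. As we
discussed, your renewal is coming up next month...
```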
Use Cases
- Drafting personalized emails
- Generating meeting summaries
- Creating task descriptions
- Writing notification messages
LLM with Structured Output V2
Generates responses that conform to a specific JSON schema, ensuring consistent, parseable output.

Configuration
| Field | Description |
|---|---|
| System Prompt | Instructions for the LLM |
| User Prompt | The request with context |
| Output Schema | JSON schema defining the expected output structure |
| Temperature | Creativity level |
Example: Extract Action Items
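A hypothetical schema and output for this example (the field names are assumptions, not a shipped default):

```
Output Schema:
{
  "type": "object",
  "properties": {
    "action_items": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "task":     { "type": "string" },
          "owner":    { "type": "string" },
          "due_date": { "type": "string" }
        },
        "required": ["task", "owner"]
      }
    }
  }
}

Output:
{
  "action_items": [
    { "task": "Send updated pricing deck", "owner": "Sam", "due_date": "2024-06-01" }
  ]
}
```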
Use Cases
- Extracting structured data from unstructured text
- Classifying content into categories
- Parsing meeting transcripts
- Generating form-ready data
Agent V2
The most powerful AI component: an autonomous agent that can reason, use tools, and perform multi-step tasks.

How Agents Work
Unlike simple LLM calls, agents can:
- Reason about what steps to take
- Use tools to fetch information or take actions
- Iterate through multiple steps to complete a task
- Adapt based on intermediate results
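The loop described above can be sketched in plain Python. This is an illustrative model of a reason-act loop, not Agent Studio's implementation; `plan_next_step` stands in for the LLM call that decides the next action.

```python
# Illustrative reason-act loop (not Agent Studio's implementation).
# plan_next_step is a stub standing in for the LLM's planning call.

def plan_next_step(history, tools):
    """Stub planner: try each unused tool once, then finish."""
    used = {h["action"] for h in history if isinstance(h, dict)}
    for name in tools:
        if name not in used:
            return {"action": name, "input": history[0]}
    summary = " / ".join(h["result"] for h in history if isinstance(h, dict))
    return {"action": "finish", "answer": summary}

def run_agent(task, tools, max_iterations=5):
    history = [task]
    for _ in range(max_iterations):        # bounded, like Max Iterations
        step = plan_next_step(history, tools)
        if step["action"] == "finish":     # the agent decides it is done
            return step["answer"]
        result = tools[step["action"]](step["input"])   # use a tool
        history.append({"action": step["action"], "result": result})  # adapt
    return None  # iteration limit reached without an answer
```

With two stub tools, the agent calls each once, feeds the results back into its history, and then synthesizes an answer; lowering `max_iterations` cuts it off early, mirroring the Max Iterations setting.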
Configuration
| Field | Description |
|---|---|
| System Prompt | Define the agent’s role and behavior |
| User Prompt | The task to accomplish |
| Tools | Select which tools the agent can use |
| Max Iterations | Limit on reasoning steps |
Available Tools
Agents can be configured with various tools:

| Tool Category | Examples |
|---|---|
| Data Access | Query accounts, fetch tasks, get activities |
| Communication | Send emails, post to Slack |
| CRM | Create Salesforce cases, update opportunities |
| Analysis | Calculate metrics, generate insights |
Example: Research and Summarize Account
The agent is given a system prompt and access to these tools:
- Get account activities
- Get account health history
- Get open tasks
- Get contact details
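A hypothetical system prompt for this example (the wording is an assumption, not the shipped sample):

```
System Prompt:
You are a customer success analyst. Using the available tools,
research the account and produce a QBR summary with sections for
activity highlights, health trends, open risks, and key contacts.
```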
Output
The agent will:
- Fetch recent activities for the account
- Analyze health score trends
- Review open tasks and their status
- Identify key contacts
- Synthesize into a structured QBR summary
Use Cases
- Complex research tasks
- Multi-step workflows
- Decision-making with multiple data sources
- Autonomous customer analysis
Choosing the Right AI Component
Use LLM V2 when...
- You need simple text generation
- The task is straightforward
- You want fast, single-step output
- Format flexibility is acceptable
Use Structured Output when...
- You need specific data formats
- Output will be used by other nodes
- You need reliable parsing
- Consistency is important
Use Agent V2 when...
- Task requires multiple steps
- You need to fetch/combine data
- Complex reasoning is required
- Task may need iteration
Best Practices
Prompt Engineering
Be specific and clear
A specific prompt names the data to use, the scope, and the desired output format; a vague prompt leaves all of these to the model.
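Hypothetical examples of the contrast:

```
Good prompt:
Summarize the last 30 days of activity for @account.name in three
bullet points, focusing on support tickets and renewal risk.

Poor prompt:
Tell me about this account.
```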
Provide context
Always include relevant context in prompts:
- Account information
- Recent activities
- Historical data
- Your specific goals
Use examples
For complex outputs, provide examples:
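A hypothetical few-shot prompt for a classification task:

```
Classify each ticket as "bug", "feature request", or "question".

Example input:  "The export button does nothing when clicked."
Example output: bug

Example input:  "Can you add dark mode to the dashboard?"
Example output: feature request

Ticket: @ticket.description
```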
Temperature Settings
| Temperature | Behavior | Use For |
|---|---|---|
| 0.0 - 0.3 | Deterministic, focused | Data extraction, analysis |
| 0.4 - 0.6 | Balanced | General tasks |
| 0.7 - 1.0 | Creative, varied | Content generation, brainstorming |
Error Handling
Validate AI outputs
Add condition nodes after AI components to validate:
- Output is not empty
- Required fields are present
- Values are within expected ranges
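Expressed as code, such a validation step might look like the following sketch (the `confidence_score` field and 0–100 range are hypothetical, not an Agent Studio API):

```python
# Hypothetical validation of an AI component's output; the field name
# and score range are illustrative, not an Agent Studio API.

def is_valid_output(output, required_fields, score_range=(0, 100)):
    if not output:                                    # output is not empty
        return False
    if any(field not in output for field in required_fields):
        return False                                  # required fields present
    score = output.get("confidence_score")
    if score is not None:
        low, high = score_range
        if not (low <= score <= high):                # values in expected range
            return False
    return True
```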
Have fallback paths
Create alternative flows for when AI doesn’t produce usable output, such as routing to a default template, retrying with a stricter prompt, or escalating to human review.
Troubleshooting
Output is too generic
- Add more specific context to your prompt
- Include examples of desired output
- Lower the temperature
- Provide relevant data via @ notation
Output format is wrong
- Use Structured Output component instead of plain LLM
- Add explicit format instructions to the prompt
- Include an example in the prompt
Agent takes too long
- Reduce the number of enabled tools
- Lower max iterations
- Simplify the task into smaller steps
- Add more guidance to narrow the search space
Agent doesn't use tools
- Verify tools are enabled in configuration
- Add explicit instructions to use specific tools
- Check that the task actually requires tools
Next Steps
Agents Overview
Build standalone AI agents
Testing Automations
Test your AI components