Testing Automations
Testing is a critical part of building reliable automations. Agent Studio provides tools to test individual nodes and complete flows before activating them.

Node-Level Testing
Every node in Agent Studio can be tested individually using the Play button.

How to Test a Node
- Select the node by clicking on it
- Click the Play button (▶) that appears on the node
- Provide test input in the dialog that opens
- Review the output to verify the node works as expected
Test Input Options
When testing a node, you can provide input in several ways:

| Option | Description |
|---|---|
| Manual Input | Type in JSON data directly |
| Select Entity | Choose an existing account, task, or other entity |
| Use Sample Data | Use predefined sample data for testing |
| From Previous Node | Use output from a previously tested upstream node |
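For the Manual Input option, test data is plain JSON. A minimal sketch of what such an input might look like; the field names below are illustrative, not a fixed Agent Studio schema:

```python
import json

# Hypothetical manual test input for a node. The "account" payload and its
# fields are examples only; match them to your own node's expected input.
test_input = {
    "account": {
        "name": "Acme Corp",
        "health_score": 42,
        "segment": "Enterprise",
    }
}

# Manual Input expects JSON, so serialize before pasting it into the dialog.
print(json.dumps(test_input, indent=2))
```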
Example: Testing a Condition Node
- Click on the Condition node
- Click Play (▶)
- Enter test input (e.g., JSON containing the field the condition checks)
- Click Run
- Review which output (True/False) received the data
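The steps above can be simulated outside the Studio. A minimal sketch, assuming a hypothetical condition rule of the form field/operator/value (the real rule syntax is Agent Studio's own):

```python
# Sketch of how a Condition node routes test input to its True or False output.
def evaluate_condition(data: dict, field: str, op: str, value) -> bool:
    actual = data.get(field)
    if op == ">":
        return actual is not None and actual > value
    if op == "==":
        return actual == value
    raise ValueError(f"unsupported operator: {op}")

test_input = {"health_score": 42}
branch = "True" if evaluate_condition(test_input, "health_score", ">", 50) else "False"
print(branch)  # a score of 42 fails "> 50", so the False output receives the data
```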
Inspecting Output
After running a test:

- Output Data: See the exact data the node produces
- Execution Path: For logic nodes, see which path was taken
- Errors: View any error messages if the test failed
- Logs: See detailed execution logs
Testing Trigger Nodes
Trigger nodes require special consideration since they normally wait for events.

Testing a Trigger
- Click on the trigger node
- Click Play (▶)
- Select a test entity (e.g., choose an account for Account Segment trigger)
- The trigger will execute as if that entity triggered it
What Trigger Tests Verify
- Entity data is loaded correctly
- Flow context is established
- Output data structure is correct
- Connected downstream nodes receive data
Testing a trigger does not verify that the trigger conditions (segment criteria) are correct—it only tests the data loading and output. Verify your segment criteria separately.
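As a rough illustration, a trigger test's output can be thought of as entity data plus flow context. The structure below is hypothetical; inspect your own trigger's test output for the real shape:

```python
# Hypothetical shape of a trigger test's output; the real structure depends
# on the trigger type and your data.
trigger_output = {
    "entity": {"id": "acct_123", "type": "account", "name": "Acme Corp"},
    "context": {"flow_id": "flow_456", "triggered_at": "2024-01-15T10:30:00Z"},
}

# The checks a trigger test covers, expressed as assertions:
assert trigger_output["entity"]["id"]        # entity data is loaded
assert trigger_output["context"]["flow_id"]  # flow context is established
print("trigger output structure looks correct")
```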
Testing Complete Flows
Beyond individual nodes, you can test the entire flow end-to-end.

Run a Test Flow
- Save your automation
- Click Test Flow in the toolbar
- Select test data for the trigger
- Click Run Test
- Watch execution as it progresses through nodes
- Review results for each node
Test Flow Visualization
During a test run, you’ll see:

- Highlighted paths: Active execution path lights up
- Node status indicators:
- ✓ Green: Executed successfully
- ✗ Red: Failed with error
- ○ Gray: Not executed (skipped path)
- Data flow: View data at each connection point
Reviewing Test Results
After a test completes:

| View | What It Shows |
|---|---|
| Execution Summary | Overall success/failure, time taken |
| Node Details | Input/output for each executed node |
| Path Taken | Which branches were followed |
| Errors | Detailed error information |
Testing Best Practices
Test Early, Test Often
Test as you build
Don’t wait until the flow is complete. Test each node as you add it to catch issues early.
Test with realistic data
Use actual entities from your Statisfy data when possible. This catches issues that sample data might miss.
Test edge cases
Include tests for:
- Empty or null values
- Maximum/minimum values
- Unusual characters in strings
- Missing optional fields
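The edge cases above can be collected into a small suite of test inputs. The field names here are illustrative:

```python
# Illustrative edge-case inputs for a node that reads "name" and "health_score".
edge_cases = [
    {"name": "", "health_score": 0},                        # empty string, minimum value
    {"name": None, "health_score": 100},                    # null value, maximum value
    {"name": "O'Brien & Söhne <test>", "health_score": 50}, # unusual characters
    {"health_score": 75},                                   # missing optional field
]

# Run each case through the node under test and confirm it is handled gracefully.
for case in edge_cases:
    assert "health_score" in case
print(f"{len(edge_cases)} edge cases prepared")
```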
Test All Paths
Test both True and False paths
For every condition node, run tests that exercise both outputs.
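A quick way to do this, assuming a hypothetical health_score > 50 condition, is to prepare one input per output plus the values around the threshold:

```python
# Sketch of a hypothetical "health_score > 50" condition.
def condition(data: dict) -> bool:
    return data["health_score"] > 50

# One input per branch, plus the boundary values 49, 50, and 51.
cases = [(30, False), (80, True), (49, False), (50, False), (51, True)]
for score, expected in cases:
    result = condition({"health_score": score})
    assert result is expected, f"score {score}: expected {expected}, got {result}"
print("both True and False paths exercised")
```

Note that 50 itself takes the False path, since the operator is strictly greater-than.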
Test boundary conditions
If checking health_score > 50, test with scores of 49, 50, and 51.

Document Your Tests
Save test cases
Keep notes on what test inputs you used and what results you expected.
Create test entities
Consider creating dedicated test accounts/contacts that you can use repeatedly without affecting production data.
Debugging Failed Tests
Common Issues and Solutions
Node shows 'No input data'
Cause: The node isn’t receiving data from upstream.

Solutions:
- Check that connections are properly made
- Verify the upstream node produces output
- Ensure the connection types are compatible
@ reference returns empty/null
Cause: The referenced field doesn’t exist in the data.

Solutions:
- Inspect the actual input data using node testing
- Check for typos in field names
- Verify the data structure matches your expectations
- Check if the field is nested differently than expected
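One way to debug a null reference is to walk the input data along the field path yourself. The dot-path syntax below is illustrative, not Agent Studio's @ reference syntax:

```python
# Resolve a dot-separated path against nested input data, returning None
# instead of raising when any segment is missing.
def resolve(data: dict, path: str):
    current = data
    for key in path.split("."):
        if not isinstance(current, dict) or key not in current:
            return None
        current = current[key]
    return current

data = {"account": {"owner": {"email": "ana@example.com"}}}
print(resolve(data, "account.owner.email"))  # found
print(resolve(data, "account.owner_email"))  # typo: resolves to None
```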
Condition always goes one way
Cause: Data type mismatch or incorrect operator.

Solutions:
- Inspect the actual values being compared
- Check if comparing string “100” to number 100
- Verify the operator matches your intent
- Check case sensitivity for string comparisons
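The string-versus-number and case-sensitivity pitfalls are easy to reproduce:

```python
# Comparing string "100" to number 100: a common reason a condition
# always takes the same branch.
raw = {"health_score": "100"}            # the value arrived as a string

print(raw["health_score"] == 100)        # False: str and int never compare equal
print(int(raw["health_score"]) == 100)   # True once coerced to a number

# Case sensitivity matters for string comparisons too.
print("Enterprise" == "enterprise")          # False
print("Enterprise".lower() == "enterprise")  # True
```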
Action node fails
Cause: Invalid configuration or external service issue.

Solutions:
- Check all required fields are populated
- Verify @ references resolve to valid values
- Check integration credentials and permissions
- Look for rate limits or service outages
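A pre-flight check along these lines can catch missing required fields before the action runs. The field names are hypothetical (an email-style action):

```python
# Hypothetical required fields for an email-style action node.
REQUIRED = ["to", "subject", "body"]

def missing_fields(config: dict) -> list:
    # Treat absent, empty, or null values as missing.
    return [f for f in REQUIRED if not config.get(f)]

config = {"to": "ana@example.com", "subject": "", "body": "Hi"}
print(missing_fields(config))  # ['subject']: an empty subject would fail the action
```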
Using Logs for Debugging
- Enable verbose logging in flow settings (if available)
- Check execution logs after test runs
- Look for error messages with specific details
- Trace data flow through each node
Pre-Deployment Checklist
Before activating your automation, verify:

All nodes tested individually
- Each node has been tested with realistic data
- All outputs produce expected results
Complete flow tested
- Full flow executed successfully end-to-end
- All conditional paths have been exercised
- Edge cases handled correctly
Actions verified
- Email/Slack messages contain correct content
- Tasks created with right assignments
- Field updates work as expected
Error handling reviewed
- Flow handles missing data gracefully
- No unintended side effects on failure
- Errors are logged appropriately
Performance considered
- No infinite loops possible
- SQL queries are efficient
- External API calls are reasonable
Monitoring After Deployment
After activating your automation:

Track Execution
- Run History: View all executions with status
- Success Rate: Monitor how often the flow completes successfully
- Error Trends: Watch for increasing failure rates
Set Up Alerts
Consider adding monitoring within your flow (for example, a notification action on error paths).

Review Periodically
- Check that the automation is still relevant
- Verify integrations haven’t changed
- Update conditions if business logic changes
Next Steps
AI Components
Add AI-powered nodes to your flows
Agents Overview
Build intelligent AI agents