Setting Up LLMs with Live Search in Make.com: A Comprehensive Guide
This guide walks you through different methods of integrating Large Language Models (LLMs) with live search capabilities in Make.com (formerly Integromat), from direct API integrations to custom modules for enhanced functionality.

Method 1: Direct OpenAI Integration with Google Search
Prerequisites
- Make.com account with API access
- OpenAI API key
- Google Programmable Search Engine API key and Search Engine ID
- Basic understanding of JSON and HTTP requests
Setup Steps
- Create a New Scenario
- Log into Make.com
- Click “Create a new scenario”
- Select a trigger module (e.g., HTTP Webhook or Scheduler)
- Configure Google Search
- Add “HTTP” module
- Set Method to “GET”
- URL:
https://www.googleapis.com/customsearch/v1
- Query Parameters:
  - key: Your Google API key
  - cx: Your Search Engine ID
  - q: Map to your search query input
- Parse response: “JSON”
- Process Search Results
- Add “Array Aggregator” module
- Map items from Google Search response
- Select relevant fields (title, snippet, link)
- Setup OpenAI Integration
- Add “HTTP” module
- Method: “POST”
- URL:
https://api.openai.com/v1/chat/completions
- Headers:
Content-Type: application/json
Authorization: Bearer YOUR_OPENAI_API_KEY
- Body:
```json
{
  "model": "gpt-4-turbo-preview",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant that analyzes search results and provides comprehensive answers."
    },
    {
      "role": "user",
      "content": "Based on these search results: {{aggregated_results}}, provide a detailed answer to: {{original_query}}"
    }
  ]
}
```
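The `{{aggregated_results}}` placeholder above is a Make.com mapping; if you instead assemble the prompt in a custom code step, a hypothetical helper could look like this (field names `title`, `snippet`, and `link` match the fields selected in the aggregation step):

```javascript
// Format Google Custom Search items into a plain-text block for the LLM prompt.
// Assumes each result carries the fields selected earlier: title, snippet, link.
function formatSearchResults(results) {
  return results
    .map((r, i) => `${i + 1}. ${r.title}\n${r.snippet}\nSource: ${r.link}`)
    .join('\n\n');
}
```

The numbered format makes it easy for the model to cite a specific result in its answer.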
Method 2: Using Claude API with DuckDuckGo
Prerequisites
- Anthropic API key
- Make.com account
- DuckDuckGo Search API access
Setup Steps
- Initialize Search Module
- Add “HTTP” module
- Method: “GET”
- URL: https://api.duckduckgo.com/ (DuckDuckGo Instant Answer API)
- Query parameters: q (your search query), format: json
- Create Claude Integration
- Add “HTTP” module
- Method: “POST”
- URL:
https://api.anthropic.com/v1/messages
- Headers:
anthropic-version: 2023-06-01
x-api-key: YOUR_ANTHROPIC_API_KEY
Content-Type: application/json
- Body:
```json
{
  "model": "claude-3-opus-20240229",
  "max_tokens": 1000,
  "messages": [
    {
      "role": "user",
      "content": "Using these search results: {{search_results}}, answer: {{query}}"
    }
  ]
}
```
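The Messages API returns the completion inside a `content` array of blocks; when post-processing the HTTP module's response in a custom step, the text can be pulled out like this (a sketch assuming a successful response):

```javascript
// Extract the plain text from an Anthropic Messages API response body.
// The API returns content as an array of blocks, e.g. [{ type: 'text', text: '...' }].
function extractClaudeText(response) {
  return response.content
    .filter((block) => block.type === 'text')
    .map((block) => block.text)
    .join('');
}
```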
Method 3: Custom Search Router with Multiple LLMs
Prerequisites
- Access to multiple LLM APIs (OpenAI, Anthropic, etc.)
- Make.com Enterprise account (recommended for complex scenarios)
- Custom router logic implementation
Setup Steps
- Create Router Module
```javascript
// Example router logic; assessComplexity is a user-supplied scoring
// function returning a value in [0, 1].
function routeQuery(query, providers) {
  const queryLength = query.length;
  const complexity = assessComplexity(query);
  if (complexity > 0.8) {
    return 'claude';
  } else if (queryLength > 100) {
    return 'gpt4';
  }
  return 'gpt35';
}
```
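`assessComplexity` is left undefined in the router logic; a minimal heuristic sketch (purely illustrative — tune the signals to your own traffic) could score queries on length and clause structure:

```javascript
// Rough complexity score in [0, 1]: longer queries with multiple
// clauses or questions are treated as harder. Illustrative only.
function assessComplexity(query) {
  const words = query.trim().split(/\s+/).length;
  const clauses = (query.match(/[,;?]/g) || []).length;
  return Math.min(1, words / 50) * 0.7 + Math.min(1, clauses / 3) * 0.3;
}
```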
- Implement Provider Selection
- Add “Router” module
- Configure branching logic based on query type
- Set up parallel processing for search results
- Configure Result Aggregation
- Add “Merge” module
- Combine results from different providers
- Implement fallback logic
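The fallback step above can be sketched as a simple try-in-order loop; `providers` here is a hypothetical array of async call functions, one per LLM API:

```javascript
// Try each provider in order; return the first successful result.
// Each provider is an async function (prompt) => completion string.
async function callWithFallback(providers, prompt) {
  let lastError;
  for (const provider of providers) {
    try {
      return await provider(prompt);
    } catch (err) {
      lastError = err; // remember the failure and try the next provider
    }
  }
  throw lastError; // all providers failed
}
```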
Best Practices and Optimization
Rate Limiting and Error Handling
- Implement retry logic for failed API calls
- Add delay modules between consecutive requests
- Use error handling routes for each API integration
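The retry and delay recommendations above can be combined in one wrapper; a sketch with exponential backoff (the retry count and base delay are arbitrary defaults, not Make.com settings):

```javascript
// Retry an async API call with exponential backoff between attempts.
// fn: async function to call; retries and baseMs are tunable defaults.
async function withRetry(fn, retries = 3, baseMs = 500) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === retries) throw err; // out of attempts
      const delay = baseMs * 2 ** attempt; // 500ms, 1s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```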
Cost Optimization
- Implement token counting
- Use cheaper models for simple queries
- Cache frequent searches
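Caching frequent searches can be as simple as an in-memory map with a TTL; a minimal sketch (no eviction beyond expiry — a production setup might use Make.com's Data Store instead):

```javascript
// Minimal TTL cache for search results, keyed by query string.
function createSearchCache(ttlMs = 5 * 60 * 1000) {
  const store = new Map();
  return {
    get(query) {
      const entry = store.get(query);
      if (!entry || Date.now() > entry.expires) return undefined;
      return entry.value;
    },
    set(query, value) {
      store.set(query, { value, expires: Date.now() + ttlMs });
    },
  };
}
```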
Performance Tuning
- Set appropriate timeouts
- Use parallel processing where possible
- Implement request batching
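Request batching mostly means chunking pending queries so each scenario run stays under rate limits; a small helper sketch:

```javascript
// Split an array of pending requests into fixed-size batches.
function toBatches(items, batchSize) {
  const batches = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}
```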
Advanced Features
Real-time Search Enhancement
- Implement Streaming
- Configure webhook endpoints
- Set up SSE connections
- Handle partial responses
- Context Window Management
```javascript
// Keep only as many search results as fit the model's context budget.
// estimateTokens is a user-supplied token estimator.
function manageContext(searchResults, maxTokens) {
  let tokens = 0;
  const context = [];
  for (const result of searchResults) {
    const resultTokens = estimateTokens(result);
    if (tokens + resultTokens > maxTokens) break; // budget exhausted
    context.push(result);
    tokens += resultTokens;
  }
  return context;
}
```
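`estimateTokens` is assumed in the context-management code; absent a real tokenizer, a common rough heuristic is about four characters per token for English text:

```javascript
// Very rough token estimate (~4 chars/token for English prose).
// For exact counts, use a proper tokenizer library for your model.
function estimateTokens(text) {
  return Math.ceil(String(text).length / 4);
}
```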
Monitoring and Analytics
- Setup Logging
- Create logging module
- Track API usage
- Monitor response times
- Performance Metrics
- Implement custom metrics
- Track success rates
- Monitor cost per query
Troubleshooting Common Issues
API Connection Problems
- Check API keys and permissions
- Verify endpoint URLs
- Monitor rate limits
Response Processing Issues
- Validate JSON formatting
- Check character encoding
- Verify data mapping
Security Considerations
- API Key Management
- Use Make.com’s built-in encryption
- Rotate keys regularly
- Implement access controls
- Data Privacy
- Implement data filtering
- Add PII detection
- Configure data retention policies
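PII detection in a custom code step can start from simple pattern redaction before search results reach the LLM; a naive sketch (the regexes are illustrative, not exhaustive — real PII detection needs a dedicated service):

```javascript
// Redact obvious email addresses and US-style phone numbers.
// Illustrative patterns only; not a substitute for a real PII scanner.
function redactPII(text) {
  return text
    .replace(/[\w.+-]+@[\w-]+\.[\w.-]+/g, '[EMAIL]')
    .replace(/\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b/g, '[PHONE]');
}
```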
Conclusion
This guide covered multiple approaches to integrating LLMs with live search in Make.com. Choose the method that best fits your needs based on:
- Required response time
- Cost constraints
- Complexity of queries
- Desired accuracy
Remember to regularly monitor and optimize your implementation based on usage patterns and performance metrics.