Tools in superglue are reusable workflows that you can execute on-demand, schedule, or trigger via webhooks. Each tool consists of sequential steps with built-in data transformations, loops, and error handling. Learn more about how tools fit into the bigger picture in our core concepts page.
You can create tools in three ways:
  • Via Agent
  • Via UI
  • Via SDK
The fastest way to create a tool is by talking to our agent. Describe what you want to accomplish and the agent will build the tool for you.
Simple example:
"Create a tool that fetches all active customers from Stripe"
superglue will:
  1. Identify which integration to use (or create one if needed)
  2. Find the relevant API endpoint from the documentation
  3. Configure the API call with appropriate parameters
  4. Test it and make it available for execution
Complex example:
"Create a tool that loops through all contacts in HubSpot,
filters those with email domains matching @company.com,
and updates their lifecycle stage to 'customer'"
The agent can build multi-step tools with:
  • Sequential API calls
  • Data transformations between steps
  • Loops for batch processing
  • Error handling and retries

Try it now

Start building tools with our agent

Tool anatomy

Every tool consists of:

Steps

Sequential API calls that fetch or modify data. Each step can reference data from previous steps.

Execution Mode

Steps run with self-healing enabled or disabled. Self-healing can auto-repair failures during execution.

Transformations

JavaScript functions that shape the step inputs and the final output, ensuring tool results adhere to the response schema.

Variables

Access data from previous steps, credentials, and input payload using <<variable>> syntax.

Step configuration

Each step in a tool represents a single API call with the following configuration:

Basic structure

{
  id: "stepId",                    // Unique identifier for this step
  integrationId: "stripe",         // Which integration to use
  loopSelector: "(sourceData) => { return [\"one\", \"two\"]}",  // Optional: returns the array of items to iterate over
  apiConfig: {
    method: "POST",                 // HTTP method
    urlPath: "/v1/customers",      // API endpoint path
    queryParams: {},               // URL query parameters
    headers: { "Authorization": "Bearer <<stripe_apiKey>>" }, // Custom headers
    body: ""                       // Request body
  }
}
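Putting this together, here is a sketch of a hypothetical two-step tool that fetches a Stripe customer and upserts it into HubSpot. The tool-level wrapper (the `id` and `steps` fields) and the integration IDs and endpoint paths are illustrative assumptions, not taken from a real account; the per-step fields follow the structure above.

```javascript
// Hypothetical two-step tool. The second step references the first step's
// response via its step id ("getCustomer") in a <<...>> variable.
const tool = {
  id: "syncCustomer",
  steps: [
    {
      id: "getCustomer",
      integrationId: "stripe",
      apiConfig: {
        method: "GET",
        urlPath: "/v1/customers/<<customerId>>", // customerId comes from the input payload
        queryParams: {},
        headers: { "Authorization": "Bearer <<stripe_apiKey>>" },
        body: ""
      }
    },
    {
      id: "upsertContact",
      integrationId: "hubspot",
      apiConfig: {
        method: "POST",
        urlPath: "/crm/v3/objects/contacts",
        headers: { "Authorization": "Bearer <<hubspot_access_token>>" },
        // Reference data from the previous step by its step id:
        body: { email: "<<getCustomer.data.email>>" }
      }
    }
  ]
};
```

Because step IDs double as variable names, naming the first step `getCustomer` is what makes `<<getCustomer.data.email>>` resolvable in the second step.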

Variable syntax

Use <<variable>> to access dynamic values.
Access credentials:
{
  headers: {
    "Authorization": "Bearer <<integrationId_access_token>>"
  }
}
Access previous step data:
{
  urlPath: "/customers/<<getCustomer.data.id>>",
  body: {
    email: "<<getCustomer.data.email>>"
  }
}
Access input payload:
{
  queryParams: {
    user_id: "<<userId>>",
    date: "<<startDate>>"
  }
}
Execute JavaScript expressions:
{
  body: {
    ids: "<<(sourceData) => JSON.stringify(sourceData.users.map(u => u.id))>>",
    timestamp: "<<(sourceData) => new Date().toISOString()>>",
    count: "<<(sourceData) => sourceData.items.length>>",
    uppercaseName: "<<(sourceData) => sourceData.name.toUpperCase()>>"
  }
}
JavaScript expressions must use the arrow function syntax (sourceData) => .... Direct property access like <<userId>> works for simple variables, but not for nested properties or transformations.
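To make the two placeholder styles concrete, here is an illustrative resolver for `<<...>>` placeholders. This is not superglue's actual implementation — just a sketch of the observable behavior: arrow-function expressions are evaluated against the step's source data, while plain names are looked up as top-level properties only.

```javascript
// Illustrative only: resolve <<...>> placeholders against sourceData.
function resolvePlaceholders(template, sourceData) {
  return template.replace(/<<(.+?)>>/g, (_, expr) => {
    if (expr.trim().startsWith("(")) {
      // JavaScript expression: evaluate the arrow function with sourceData.
      const fn = eval(`(${expr})`);
      return String(fn(sourceData));
    }
    // Simple variable: direct top-level lookup, no nested paths.
    return String(sourceData[expr.trim()]);
  });
}

const sourceData = { userId: "u_42", items: [1, 2, 3] };
resolvePlaceholders("/users/<<userId>>", sourceData);
// → "/users/u_42"
resolvePlaceholders("<<(sourceData) => sourceData.items.length>>", sourceData);
// → "3"
```

This also shows why `<<items.length>>` would not work in the simple form: nested property access requires the arrow-function syntax.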

Data transformations

Data selector

Extract an array to iterate over:
{
  loopSelector: `(sourceData) => {
    const items = sourceData.fetchItems.results || [];
    const excludeIds = sourceData.excludeIds || [];
    return items.filter(item => !excludeIds.includes(item.id));
  }`
}
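A simplified sketch of how the selector's output drives execution (the executor here is a stand-in, not superglue's engine): the step runs once per selected item, with that item available to the step as `currentItem`.

```javascript
// Illustrative per-item execution driven by a loopSelector.
const sourceData = {
  fetchItems: { results: [{ id: 1 }, { id: 2 }, { id: 3 }] },
  excludeIds: [2],
};

const loopSelector = (sourceData) => {
  const items = sourceData.fetchItems.results || [];
  const excludeIds = sourceData.excludeIds || [];
  return items.filter((item) => !excludeIds.includes(item.id));
};

// Stand-in executor: invoke the step body once per selected item.
const results = loopSelector(sourceData).map((currentItem) =>
  ({ calledWith: currentItem.id }) // placeholder for the real API call
);
// results: [{ calledWith: 1 }, { calledWith: 3 }]
```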

Final transform

Shape the final output of the entire tool:
{
  finalTransform: `(sourceData) => {
    const customers = sourceData.getCustomers.data || [];
    const updated = sourceData.updateCustomers || [];
    
    return {
      success: true,
      total: customers.length,
      updated: updated.length,
      results: updated.map(r => ({
        id: r.currentItem.id,
        status: r.data.status
      }))
    };
  }`
}
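Running that same transform against mock step results makes the shapes concrete. The mock data below is invented for illustration; the `currentItem`/`data` fields mirror the per-iteration result structure the transform references.

```javascript
// The finalTransform from above, applied to mock sourceData.
const finalTransform = (sourceData) => {
  const customers = sourceData.getCustomers.data || [];
  const updated = sourceData.updateCustomers || [];
  return {
    success: true,
    total: customers.length,
    updated: updated.length,
    results: updated.map((r) => ({
      id: r.currentItem.id,
      status: r.data.status,
    })),
  };
};

const output = finalTransform({
  getCustomers: { data: [{ id: "c_1" }, { id: "c_2" }] },
  updateCustomers: [{ currentItem: { id: "c_1" }, data: { status: "updated" } }],
});
// output: { success: true, total: 2, updated: 1,
//           results: [{ id: "c_1", status: "updated" }] }
```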

Special integrations

PostgreSQL

Query databases using the postgres:// URL scheme:
{
  id: "queryDatabase",
  integrationId: "postgres_db",
  apiConfig: {
    urlHost: "postgres://<<postgres_db_username>>:<<postgres_db_password>>@<<postgres_db_hostname>>:5432",
    urlPath: "<<postgres_db_database>>",
    body: {
      query: "SELECT * FROM users WHERE status = $1 AND created_at > $2",
      params: ["active", "<<(sourceData) => sourceData.startDate>>"]
    }
  }
}
Always use parameterized queries with $1, $2, etc. placeholders to prevent SQL injection. Provide values in the params array, which can include static values or <<>> expressions.
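To illustrate why this matters (the hostile input below is invented): with parameterized queries the SQL text never changes, so a malicious value stays an inert bound parameter instead of becoming executable SQL.

```javascript
// The SQL text is fixed; values travel separately in params.
const startDate = "2024-01-01'; DROP TABLE users; --"; // hostile input
const query = "SELECT * FROM users WHERE status = $1 AND created_at > $2";
const params = ["active", startDate];
// Had startDate been string-interpolated into the query, the DROP TABLE
// would run. As a bound parameter, it is just compared against created_at.
```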

FTP/SFTP

Access files on FTP servers:
{
  id: "listFiles",
  integrationId: "ftp-server",
  apiConfig: {
    urlHost: "sftp://<<integrationId_username>>:<<integrationId_password>>@<<integrationId_hostname>>:22",
    urlPath: "/data",
    body: {
      operation: "list",
      path: "/reports"
    }
  }
}
Supported operations:
  • list - List directory contents
  • get - Download file (auto-parses CSV/JSON/XML)
  • put - Upload file
  • delete - Delete file
  • rename - Rename/move file
  • mkdir - Create directory
  • rmdir - Remove directory
  • exists - Check if file exists
  • stat - Get file metadata
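For example, a hypothetical `get` step that downloads a CSV report (the integration id and file path are illustrative):

```javascript
// Download a file; CSV/JSON/XML payloads are auto-parsed per the list above.
const step = {
  id: "downloadReport",
  integrationId: "ftp-server",
  apiConfig: {
    urlHost: "sftp://<<integrationId_username>>:<<integrationId_password>>@<<integrationId_hostname>>:22",
    urlPath: "/data",
    body: {
      operation: "get",
      path: "/reports/daily.csv",
    },
  },
};
```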

Error handling

Automatic retries

Failed steps are automatically retried with exponential backoff.

Self-healing

When enabled, superglue attempts to fix configuration errors automatically.

Validation

Response data is validated against expected schemas.

Graceful degradation

Handle missing data with optional chaining and defaults in transformations.
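A small example of that defensive style (the step names are illustrative): optional chaining and nullish defaults keep a transform from throwing when an upstream step returned nothing.

```javascript
// Defensive finalTransform: never assumes upstream data exists.
const finalTransform = (sourceData) => {
  const customers = sourceData.getCustomers?.data ?? [];
  return {
    success: true,
    total: customers.length,
    firstEmail: customers[0]?.email ?? null,
  };
};

finalTransform({});
// → { success: true, total: 0, firstEmail: null }
```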
Best practices

  • Keep steps focused - Each step should make a single API call. Use transformations to prepare data, not additional steps.
  • Use descriptive IDs - Step IDs are used to reference data in later steps. Use clear names like getCustomers, updateOrder, sendEmail.
  • Avoid unnecessary loops - Don't loop over thousands of items if the API supports batch operations. Check the documentation first.
  • Test with real data - Test each step incrementally with production-like data before deploying.

Input and output schemas

Define schemas to make your tools more robust.
Input schema - Validates the payload before execution:
{
  "type": "object",
  "properties": {
    "userId": { "type": "string" },
    "startDate": { "type": "string", "format": "date" }
  },
  "required": ["userId"]
}
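To show what this schema catches, here is a minimal hand-rolled check — superglue's actual validator is not shown here; this sketch only covers the `required` and primitive-`type` keywords used above.

```javascript
// Illustrative validation against the input schema above.
const schema = {
  type: "object",
  properties: {
    userId: { type: "string" },
    startDate: { type: "string", format: "date" },
  },
  required: ["userId"],
};

function validate(payload, schema) {
  const errors = [];
  for (const key of schema.required ?? []) {
    if (!(key in payload)) errors.push(`missing required field: ${key}`);
  }
  for (const [key, spec] of Object.entries(schema.properties)) {
    if (key in payload && typeof payload[key] !== spec.type) {
      errors.push(`${key} should be ${spec.type}`);
    }
  }
  return errors;
}

validate({ startDate: "2024-01-01" }, schema);
// → ["missing required field: userId"]
validate({ userId: "u_1" }, schema);
// → []
```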
Response schema - Validates final output:
{
  "type": "object",
  "properties": {
    "success": { "type": "boolean" },
    "data": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "id": { "type": "string" },
          "email": { "type": "string" }
        }
      }
    }
  }
}
