
MCP Integration Guide: AI-Powered Email Tools

Sarah Chen
Email Strategy Lead at MiN8T

The Model Context Protocol (MCP) is an open standard that lets AI models interact with external tools and services through a structured, secure interface. MiN8T's AI Builder implements MCP as a first-class integration layer, enabling your AI chat to call email verification tools, run deliverability audits, analyze spam risk, and interact with any custom tooling you build.

This guide is the technical reference. It covers the architecture of MiN8T's MCP implementation, the three transport types, every tool in the DeliverIQ MCP server, how to build your own servers, and the security model that governs tool execution. If you want the high-level overview, start with our MCP blog article and come back here for the implementation details.

At a glance: 12 DeliverIQ tools · 50+ DNSBL zones · 13 spam trap signals · 3 transport types

1 MCP Architecture in MiN8T

MiN8T's MCP integration lives within the AI Builder module. The architecture has four layers:

  1. Settings UI -- the MCP Servers tab in Settings provides a JSON configuration editor where you define servers, their transport types, and connection parameters. Configuration is persisted in a Zustand store backed by localStorage, meaning your server configs survive across sessions without requiring a backend.
  2. Server Manager -- when you save a configuration, the server manager performs availability checking (can it connect?), tool registration (what tools does it offer?), and conflict detection (do any tool names collide with existing tools?). Failed availability checks surface clear error messages with remediation steps.
  3. Tool Registry -- all registered tools from all connected MCP servers are aggregated into a unified tool registry. The AI chat interface reads from this registry to determine what tools are available. Tools from different servers can coexist as long as their names do not conflict.
  4. Chat Integration -- registered and approved MCP tools appear alongside MiN8T's built-in tools in the AI chat. When the AI decides to use a tool, the chat layer routes the call to the appropriate MCP server, handles the transport, and streams the response back into the conversation.
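To make layers 2 and 3 concrete, here is a minimal sketch of a tool registry with conflict detection. The class and method names are hypothetical illustrations, not MiN8T's actual internals:

```python
# Hypothetical sketch of the tool registry (layers 2-3): aggregate tools
# from multiple MCP servers and reject name collisions at registration time.

class ToolRegistry:
    def __init__(self):
        self._tools = {}  # tool name -> owning server name

    def register(self, server_name, tool_names):
        """Register a server's tools; raise if any name collides."""
        conflicts = [t for t in tool_names if t in self._tools]
        if conflicts:
            raise ValueError(
                f"{server_name}: tool name conflict(s): {', '.join(conflicts)}"
            )
        for t in tool_names:
            self._tools[t] = server_name

    def server_for(self, tool_name):
        """Route a tool call to the server that registered it."""
        return self._tools[tool_name]

registry = ToolRegistry()
registry.register("deliveriq", ["verify_email", "batch_verify"])
owner = registry.server_for("verify_email")  # -> "deliveriq"
```

The point of the single flat namespace is that the chat layer can route any approved tool call without ambiguity; a second server attempting to register `verify_email` would fail loudly instead of shadowing the first.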
Note: Zustand + localStorage persistence means MCP server configurations are stored client-side, so each user manages their own server connections independently. For team-wide MCP server deployments, configure the server URL and share it -- each team member adds the same SSE or streamable-http endpoint to their own settings.


2 Transport Types Explained

MCP defines how AI models communicate with tool servers. The transport type determines the wire protocol. MiN8T supports all three official transport types.

stdio -- Standard I/O

The MCP server runs as a local process on your machine. MiN8T spawns the process and communicates through stdin/stdout using JSON-RPC messages. This is the simplest transport and the one you will use most during development.

JSON
{
  "mcpServers": {
    "deliveriq": {
      "transport": "stdio",
      "command": "npx",
      "args": ["@deliveriq/mcp"],
      "env": {
        "DELIVERIQ_API_KEY": "diq_live_abc123..."
      }
    }
  }
}

When to use: Local development, personal tools, CLI-based utilities. The server process runs on the same machine as MiN8T. No network configuration required.

SSE -- Server-Sent Events

The MCP server runs on a remote host and exposes an SSE endpoint. MiN8T connects to the endpoint and receives tool responses as a stream of server-sent events. The initial connection is a standard HTTP request; subsequent messages arrive as events on the persistent connection.

JSON
{
  "mcpServers": {
    "team-deliverability": {
      "transport": "sse",
      "url": "https://mcp.yourcompany.com/deliverability/sse",
      "headers": {
        "Authorization": "Bearer your-team-token"
      }
    }
  }
}

When to use: Shared team servers, cloud-hosted MCP services, environments where you need real-time streaming and the server is accessible via HTTP. SSE maintains a long-lived connection, so it works best behind load balancers that support connection persistence.

streamable-http -- HTTP Streaming

The newest MCP transport. Each tool call is a standard HTTP POST request, and the response is streamed back as a chunked HTTP body. Unlike SSE, there is no persistent connection -- each call is independent.

JSON
{
  "mcpServers": {
    "edge-tools": {
      "transport": "streamable-http",
      "url": "https://mcp-edge.yourcompany.com/tools",
      "headers": {
        "X-API-Key": "your-api-key"
      }
    }
  }
}

When to use: Serverless environments (AWS Lambda, Cloudflare Workers, GCP Cloud Functions), edge deployments, or infrastructure where maintaining persistent connections is impractical. Each request is stateless, making it easier to scale horizontally.

| Transport | Connection | Latency | Best For |
| --- | --- | --- | --- |
| stdio | Local process | Lowest | Development, CLI tools |
| SSE | Persistent HTTP | Low | Team servers, cloud services |
| streamable-http | Per-request HTTP | Medium | Serverless, edge deployments |
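Whichever transport you choose, the payload is the same: MCP messages are JSON-RPC 2.0. As a sketch, this is the envelope a tool invocation carries (the `tools/call` method and `params` shape come from the MCP specification; the tool and argument are from the DeliverIQ examples in this guide):

```python
import json

# Build the JSON-RPC 2.0 envelope that an MCP tool invocation uses on
# every transport: stdio writes it to stdin, SSE and streamable-http
# send it over HTTP.
def make_tool_call(call_id, tool_name, arguments):
    return {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

msg = make_tool_call(1, "verify_email", {"email": "ada@example.com"})
print(json.dumps(msg, indent=2))
```

Because the envelope is transport-independent, moving a server from stdio in development to streamable-http in production changes only the configuration, not the tools.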

3 DeliverIQ MCP Server Deep Dive

The @deliveriq/mcp package is a production-ready MCP server that exposes 12 tools across three categories. It is the reference implementation for how MCP servers integrate with MiN8T and serves as the primary deliverability tooling for the AI chat.

Verification tools (5)

| Tool | Input | Output | Credits |
| --- | --- | --- | --- |
| verify_email | { email: string } | Deliverability score (0-100), status (valid/risky/invalid), checks (syntax, DNS, mailbox, disposable, role) | 1 |
| batch_verify | { emails: string[] } | Job ID, estimated completion time | 1 per email |
| batch_status | { jobId: string } | Progress percentage, status (queued/processing/complete/failed), processed count | 0 |
| batch_download | { jobId: string } | Full results array with per-address scores, categories, and risk flags | 0 |
| list_jobs | { limit?: number } | Array of jobs with IDs, statuses, creation dates, and email counts | 0 |
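The batch tools compose into a submit/poll/download loop. Here is a sketch of that flow; `client` stands in for whatever layer invokes the MCP tools, and the stub below simulates a job that completes after two status polls:

```python
import time

# Sketch of the batch workflow: batch_verify -> poll batch_status ->
# batch_download. The `client` object is a hypothetical stand-in for
# the MCP tool-call layer.

def run_batch(client, emails, poll_interval=0.0):
    job = client.call("batch_verify", {"emails": emails})
    while True:
        status = client.call("batch_status", {"jobId": job["jobId"]})
        if status["status"] in ("complete", "failed"):
            break
        time.sleep(poll_interval)  # batch_status costs 0 credits; poll freely
    if status["status"] == "failed":
        raise RuntimeError("batch verification failed")
    return client.call("batch_download", {"jobId": job["jobId"]})

class StubClient:
    """Simulates a job that finishes on the second status poll."""
    def __init__(self):
        self.polls = 0
    def call(self, tool, args):
        if tool == "batch_verify":
            return {"jobId": "job-1"}
        if tool == "batch_status":
            self.polls += 1
            done = self.polls >= 2
            return {"status": "complete" if done else "processing"}
        if tool == "batch_download":
            return {"results": [{"email": "a@b.co", "status": "valid"}]}

results = run_batch(StubClient(), ["a@b.co"])
```

In practice the AI chat performs this orchestration for you (see the advanced use cases below); the sketch only shows why polling is free to do aggressively.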

Intelligence tools (6)

| Tool | Input | Output | Credits |
| --- | --- | --- | --- |
| find_email | { firstName: string, lastName: string, domain: string } | Matched email address, confidence score, pattern used | 1 |
| check_blacklist | { target: string } | Listed/clean per zone, total zones checked (50+), listing details | 1 |
| audit_infrastructure | { domain: string } | SPF (record, lookup count, validity), DKIM (selector results), DMARC (policy, pct, rua), MTA-STS (mode, max-age), BIMI (SVG location, VMC status) | 2 |
| analyze_spam_risk | { domain: string } | Risk score, 13 individual signal assessments (list age, bounce rate, complaint rate, engagement, trap proximity, etc.) | 2 |
| domain_trust_report | { domain: string } | Trust score, reputation breakdown, authentication grade, sending history, abuse reports | 3 |
| org_email_patterns | { domain: string } | Detected patterns (e.g., first.last@, f.last@), confidence levels, sample addresses | 1 |

Account tools (1)

| Tool | Input | Output | Credits |
| --- | --- | --- | --- |
| credit_balance | {} (no input) | Current balance, total purchased, total used, usage by tool category, reset date | 0 |

Credit-free tools: batch_status, batch_download, list_jobs, and credit_balance cost zero credits. You can poll batch job progress and check your balance as often as needed without consuming credits.
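Using the per-tool costs from the tables above, you can estimate a workflow's credit spend before running it. A small sketch (the cost table mirrors this guide; the `estimate` helper is illustrative, not part of the DeliverIQ API):

```python
# Per-tool credit costs as listed in the tables above.
CREDITS = {
    "verify_email": 1, "batch_verify_per_email": 1,
    "batch_status": 0, "batch_download": 0, "list_jobs": 0,
    "find_email": 1, "check_blacklist": 1,
    "audit_infrastructure": 2, "analyze_spam_risk": 2,
    "domain_trust_report": 3, "org_email_patterns": 1,
    "credit_balance": 0,
}

def estimate(calls):
    """calls: list of (tool, count) pairs; returns total credits."""
    return sum(CREDITS[tool] * count for tool, count in calls)

# e.g. batch-verify 5,000 addresses, then audit and blacklist-check
# one sending domain:
total = estimate([
    ("batch_verify_per_email", 5000),
    ("audit_infrastructure", 1),
    ("check_blacklist", 1),
])
# total == 5003
```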


4 Building Custom MCP Servers

Any service that speaks the MCP protocol can register its tools with MiN8T. The @modelcontextprotocol/sdk package provides the Node.js/TypeScript implementation. Python users can use the mcp package from PyPI.

Node.js / TypeScript example

Here is a complete MCP server that exposes two email-related tools: one to check ESP sending quotas and another to pull recent bounce rates. The getQuotaFromESP and getBounceStats helpers stand in for your own ESP API calls.

TypeScript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "email-ops",
  version: "1.0.0",
  description: "Email operations tools for campaign management",
});

// Tool 1: Check ESP sending quota
server.tool(
  "check_sending_quota",
  "Returns the remaining daily sending quota for a connected ESP account",
  {
    espName: z.enum(["sendgrid", "mailgun", "ses", "postmark"]),
    accountId: z.string().describe("Your ESP account identifier"),
  },
  async ({ espName, accountId }) => {
    const quota = await getQuotaFromESP(espName, accountId);
    return {
      content: [{
        type: "text",
        text: JSON.stringify({
          esp: espName,
          account: accountId,
          daily_limit: quota.limit,
          sent_today: quota.sent,
          remaining: quota.limit - quota.sent,
          resets_at: quota.resetTime,
        }, null, 2)
      }]
    };
  }
);

// Tool 2: Get recent bounce rates
server.tool(
  "get_bounce_rate",
  "Returns bounce rate statistics for the last N days",
  {
    days: z.number().min(1).max(90).default(7),
    espName: z.enum(["sendgrid", "mailgun", "ses", "postmark"]),
  },
  async ({ days, espName }) => {
    const stats = await getBounceStats(espName, days);
    return {
      content: [{
        type: "text",
        text: JSON.stringify({
          period: `Last ${days} days`,
          total_sent: stats.totalSent,
          hard_bounces: stats.hardBounces,
          soft_bounces: stats.softBounces,
          bounce_rate: `${stats.bounceRate.toFixed(2)}%`,
          trend: stats.trend,
        }, null, 2)
      }]
    };
  }
);

// Start the server with stdio transport
const transport = new StdioServerTransport();
await server.connect(transport);
console.error("email-ops MCP server running on stdio");
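Once the server is compiled, registering it in MiN8T uses the same stdio config shape as the DeliverIQ example earlier. The dist/server.js path is hypothetical; point it at wherever your build output lives:

```json
{
  "mcpServers": {
    "email-ops": {
      "transport": "stdio",
      "command": "node",
      "args": ["./dist/server.js"]
    }
  }
}
```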

Python example

The same pattern in Python using FastMCP. Here fetch_campaign_stats stands in for your own analytics query.

Python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("email-analytics")

@mcp.tool()
def get_open_rates(campaign_id: str, segment: str = "all") -> dict:
    """Returns open rate statistics for a specific campaign.

    Args:
        campaign_id: The campaign identifier
        segment: Filter by audience segment (default: all)
    """
    stats = fetch_campaign_stats(campaign_id, segment)
    return {
        "campaign": campaign_id,
        "segment": segment,
        "total_recipients": stats.recipients,
        "unique_opens": stats.opens,
        "open_rate": f"{stats.open_rate:.1f}%",
        "best_performing_subject": stats.top_subject,
    }

if __name__ == "__main__":
    mcp.run(transport="stdio")

Best practices

  - Write descriptive tool names and descriptions. The AI selects tools based on them, so a vague description leads to wrong or missed tool calls.
  - Validate inputs with a schema (zod in TypeScript, type hints in Python) so malformed arguments fail fast with a clear error instead of a cryptic upstream failure.
  - Return structured JSON in the text content, as in the examples above, so the AI can reason over individual fields rather than parsing free-form prose.
  - Keep each tool focused on one operation; let the AI compose larger workflows across tools rather than building a single mega-tool.

5 Security Model

MCP introduces a new attack surface: the AI can now execute external code. MiN8T's security model addresses this with multiple layers of protection.

Tool approval workflow

When a new MCP server is added or an existing server registers new tools, those tools enter a pending state. You see a list of every tool the server wants to register, including its name, description, and parameter schema. Tools are not available to the AI until you explicitly approve them. This prevents a compromised or misconfigured server from silently injecting tools.

Credential isolation

MCP server credentials (API keys, tokens, secrets) are configured in the server's env block and injected into the server process at startup. These values are never sent to the AI model. The AI sees tool names, descriptions, and parameter schemas -- it never sees the authentication credentials used to execute those tools.

Warning: never put credentials in tool parameters. If your MCP tool needs an API key, inject it via the server's env configuration. Defining a tool parameter called api_key would expose the credential to the AI model and potentially to the conversation history.
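A sketch of the correct pattern in a Python tool: the credential comes from the process environment (populated from the config's env block at server startup), and the tool's only parameter is the data the AI should supply. The tool name and environment variable here are hypothetical:

```python
import os

# Right: the API key is read from the server's environment, never
# declared as a tool parameter the AI could see or log.
def check_reputation(domain: str) -> dict:
    api_key = os.environ["REPUTATION_API_KEY"]  # injected at server startup
    # ...call the upstream reputation API with api_key; stubbed here...
    return {"domain": domain, "authenticated": bool(api_key)}

# Normally set by the MCP host from the config's "env" block:
os.environ["REPUTATION_API_KEY"] = "demo-key"
result = check_reputation("example.com")  # the AI only ever supplies `domain`
```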

Conflict detection

If two MCP servers try to register tools with the same name, MiN8T flags the conflict during registration. You choose which server's version to use, or you can rename the tool using a namespace prefix. This prevents ambiguity in the AI's tool selection -- it always knows exactly which server handles each tool.
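The namespace-prefix resolution can be sketched as a simple mapping: the first server to register a name keeps it bare, and any later collision is qualified with its server name. The function below is an illustration of the idea, not MiN8T's implementation:

```python
# Resolve tool-name collisions across servers by prefixing later
# registrations with their server name (e.g. "custom-rbl.check_blacklist").
def resolve_conflicts(servers):
    """servers: dict of server name -> list of tool names.
    Returns a mapping of unique tool name -> (server, original name)."""
    seen = {}
    for server, tools in servers.items():
        for tool in tools:
            name = tool if tool not in seen else f"{server}.{tool}"
            seen[name] = (server, tool)
    return seen

tools = resolve_conflicts({
    "deliveriq": ["check_blacklist"],
    "custom-rbl": ["check_blacklist"],
})
# -> keys: "check_blacklist" (deliveriq) and "custom-rbl.check_blacklist"
```

Either way the AI ends up with unambiguous names, so its tool selection is always routed to exactly one server.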

Transport security

stdio servers run as local processes and never touch the network, so their exposure is limited to your own machine. SSE and streamable-http servers should only be reached over HTTPS, with authentication supplied through the headers block (a bearer token or API key, as in the configuration examples above) rather than embedded in the URL, where it could leak into logs and browser history.

6 Advanced Use Cases

Once the basic integration is working, MCP unlocks workflows that combine multiple tools in sequence, driven by natural language conversation.

Email verification pipeline

Automate pre-send verification as part of your campaign workflow. Tell the AI: "I just uploaded a new subscriber list. Verify all addresses, remove invalids, check our domain reputation, and give me a send readiness report." The AI orchestrates batch_verify, waits for completion with batch_status, downloads results with batch_download, runs audit_infrastructure on your sending domain, and synthesizes a comprehensive readiness report.

Deliverability monitoring

Set up a routine check: "Run a deliverability audit on all our sending domains." The AI calls audit_infrastructure and check_blacklist for each domain, compares results against previous runs (if you provide history), and highlights any degradation -- new blacklist appearances, expired DKIM keys, SPF record changes.

Competitive analysis

Research a competitor's email infrastructure: "What can you tell me about competitor.com's email setup?" The AI runs audit_infrastructure to reveal their authentication configuration, org_email_patterns to identify their address format, and domain_trust_report to assess their sender reputation. You get a technical profile of their email operations without leaving the chat.

Multi-server orchestration

Connect multiple MCP servers to cover different operational domains. A DeliverIQ server handles verification and reputation. A custom server connects to your ESP's API for sending quotas and campaign stats. Another server wraps your CRM for contact data. The AI seamlessly calls tools across all servers in a single conversation, combining data from multiple sources into unified answers.

Note: tool chaining is automatic. You do not need to tell the AI which tools to call in which order. Describe your goal in plain language, and the AI determines the optimal sequence of tool calls to produce the answer. It handles dependencies, waits for async operations, and retries on transient failures.


7 Configuration Reference

Complete MCP server configuration schema with all supported options. The block below is annotated for reference; strip the comments and the type-union notation to produce valid JSON:

JSON
{
  "mcpServers": {
    "server-name": {
      // Required: transport type
      "transport": "stdio" | "sse" | "streamable-http",

      // stdio transport options
      "command": "npx",           // Executable to run
      "args": ["@deliveriq/mcp"], // Command arguments
      "cwd": "/optional/path",    // Working directory
      "env": {                    // Environment variables
        "API_KEY": "value"
      },

      // SSE / streamable-http transport options
      "url": "https://...",       // Server endpoint URL
      "headers": {                // HTTP headers sent with requests
        "Authorization": "Bearer token"
      },

      // Common options
      "enabled": true,            // Toggle without removing config
      "timeout": 30000            // Tool call timeout (ms)
    }
  }
}

To get started, add the DeliverIQ server using the stdio config, approve the 12 tools, and start a conversation in the AI chat. You will have email verification, blacklist checking, and infrastructure auditing at your fingertips -- all through natural language.

Build AI-powered email workflows

DeliverIQ MCP server is included with every MiN8T plan. Connect it in under five minutes.
