Building a Simple MCP Server with Bun - Easily Wrap Any Existing API
Most MCP (Model Context Protocol) server examples are overcomplicated: handling state is a huge pain, and some can't be deployed to the cloud at all. What if I told you that you could build a complete, production-ready MCP server in a single TypeScript file with under 1,000 lines of code?
That's exactly what I did with my Petfinder MCP Server. Here's how I built it using Bun, with zod as the only dependency.
I chose Petfinder as my example API because, honestly, who doesn't love puppies and kittens? Plus, it demonstrates OAuth client credentials flow, which is more complex than the typical API key authentication you'll encounter with most APIs.
The Problem with Complex MCP Servers
When I started exploring MCP servers, I found most examples were either local-only (stdio) or stateful if they were remote (accessible from the internet like any other cloud API). I didn't need a stateful MCP server; I just wanted to wrap an existing API so I could quickly give my AI agents (mostly written in Pydantic AI) new tools and capabilities. With this approach it only takes a few hours of work to stand up a Stateless Remote Streamable HTTP MCP Server (wow, that's a mouthful).
For someone wanting to quickly wrap an existing API or create a learning example, this complexity is overkill. Even worse, when you're trying to use these servers as examples for LLMs to learn from, the scattered logic across multiple files makes it nearly impossible for the AI to understand the complete flow.
Important Note: I built this as a single file to make it easier for humans and LLMs to read as an example. In practice, you'd probably want to split up API calls, authentication, and tool/schema definitions into separate modules for better maintainability.
What is MCP and Why Should You Care?
The Model Context Protocol is Anthropic's standard for connecting AI assistants to external data sources and tools. Think of it as a bridge that lets Claude (or other AI assistants) interact with APIs, databases, and services in a structured way.
Instead of the AI trying to guess API formats or making unreliable HTTP requests, MCP provides a standardized interface where tools are clearly defined with input schemas.
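For example, each tool is advertised with a name, a description, and a JSON Schema describing its inputs. Here's a simplified sketch of what a single entry in a tools/list response can look like (illustrative values, not copied from my server):
// Simplified sketch of one tool definition in a tools/list response.
// Field values here are illustrative only.
const exampleTool = {
  name: 'pets.search',
  description: 'Search for adoptable pets by type, size, and location',
  inputSchema: {
    type: 'object',
    properties: {
      type: { type: 'string', enum: ['dog', 'cat'] },
      location: { type: 'string' },
    },
  },
};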
The Security Challenge: OAuth Without Exposing Secrets
The trickiest part of building remote MCP servers is security. Most examples skip over it entirely. Not long ago, all MCP servers ran on your desktop and only had one client at a time, so security wasn't as much of a concern.
I solved this security challenge for the Petfinder MCP Server by proxying all authentication to the underlying API (Petfinder) and using query parameters for credentials:
// Extract credentials from query parameters
function extractCredentialsFromQuery(url: URL): {
clientId?: string;
clientSecret?: string;
} {
const clientId = url.searchParams.get('client-id') || undefined;
const clientSecret = url.searchParams.get('client-secret') || undefined;
return { clientId, clientSecret };
}
This approach means:
- Each user provides their own API credentials
- No secrets stored on the server
- Multi-client support with isolated token caching
- Petfinder handles all the actual security validation
For most APIs that use simple API keys, you'd just forward the Authorization header from your MCP client to the underlying API and let them handle validation. There's nothing you could do on your end to validate their API token anyway - that's the API provider's job.
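For illustration, a minimal sketch of that pass-through pattern (API_BASE and forwardedRequest are hypothetical names, not part of the Petfinder server):
// Sketch: forward the caller's Authorization header to the wrapped API.
const API_BASE = 'https://api.example.com'; // hypothetical placeholder

async function forwardedRequest(req: Request, path: string): Promise<Response> {
  const auth = req.headers.get('Authorization');
  if (!auth) {
    return new Response('Missing Authorization header', { status: 401 });
  }
  // Pass the caller's token straight through; the upstream API does the validation
  return fetch(`${API_BASE}${path}`, {
    headers: { Authorization: auth },
  });
}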
Building the Server: Step by Step
Let's walk through the key components of the single-file MCP server.
SDK Note: I didn't use the official MCP TypeScript SDK because I had compatibility issues with Bun at the time of writing. If you're reading this in the future, that problem may have been solved and would make for an easier implementation.
1. Project Setup
First, create a new Bun project:
mkdir petfinder-mcp-server
cd petfinder-mcp-server
bun init
The package.json is minimal:
{
"name": "petfinder-mcp-server",
"module": "simple-mcp-server.ts",
"type": "module",
"dependencies": {
"zod": "^4.0.10"
},
"devDependencies": {
"@types/bun": "latest"
}
}
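Since zod is the only runtime dependency, you can add it with a single command:
bun add zod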
2. OAuth Token Management
The server handles Petfinder's OAuth flow automatically with per-client token caching:
Note: Most APIs use simple API key authentication instead of OAuth. If you're wrapping a typical API, you'd just pass the API key as a Bearer token in your headers - much simpler than this token exchange setup. I'm not focusing heavily on this OAuth implementation since it's specific to Petfinder's requirements.
interface TokenCache {
access_token: string;
expires_at: number;
}
const tokenCache = new Map<string, TokenCache>();
async function getAccessToken(
clientId: string,
clientSecret: string
): Promise<string> {
const now = Math.floor(Date.now() / 1000);
// Check cached token
const cachedToken = tokenCache.get(clientId);
if (cachedToken && cachedToken.expires_at > now + 60) {
return cachedToken.access_token;
}
// Request new token
const response = await fetch(`${PETFINDER_BASE}/oauth2/token`, {
method: 'POST',
headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
body: new URLSearchParams({
grant_type: 'client_credentials',
client_id: clientId,
client_secret: clientSecret,
}),
});
const tokenData = await response.json();
// Cache the token
const newToken: TokenCache = {
access_token: tokenData.access_token,
expires_at: now + tokenData.expires_in,
};
tokenCache.set(clientId, newToken);
return newToken.access_token;
}
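The tool functions below call a petfinderRequest helper that I'm not reproducing in full. A minimal sketch of what it can look like, with the credential plumbing simplified (in the real file the current request's client ID and secret are threaded through, so tools only pass a path and params):
// Sketch: authenticated GET against the Petfinder API. Credential plumbing is
// simplified here to keep the sketch self-contained; PETFINDER_BASE and
// getAccessToken are the ones defined above.
async function petfinderRequest(
  path: string,
  params: Record<string, unknown>,
  clientId: string,
  clientSecret: string
): Promise<any> {
  const token = await getAccessToken(clientId, clientSecret);
  // Build the query string, skipping undefined values
  const query = new URLSearchParams();
  for (const [key, value] of Object.entries(params)) {
    if (value !== undefined) query.set(key, String(value));
  }
  const response = await fetch(`${PETFINDER_BASE}${path}?${query}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!response.ok) {
    throw new Error(`Petfinder API error: ${response.status}`);
  }
  return response.json();
}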
3. Input Validation with Zod
Zod gives us input validation and also lets us export JSON Schema for the tools/list endpoint.
const animalSearchSchema = z.object({
type: z.enum(['dog', 'cat', 'small-furry', 'bird']).optional(),
breed: z.array(z.string()).optional(),
size: z.array(z.enum(['small', 'medium', 'large'])).optional(),
location: z.string().optional(),
distance: z.number().int().positive().optional(),
limit: z.number().int().min(1).max(100).optional().default(20),
});
// Convert Zod schema to MCP tool schema
function zodToMCPSchema(schema: z.ZodObject<any>) {
const jsonSchema = z.toJSONSchema(schema) as any;
// Clean up required fields
if (jsonSchema.required) {
const shape = schema.shape;
jsonSchema.required = jsonSchema.required.filter((fieldName: string) => {
const field = shape[fieldName];
return !field.isOptional() && field._def.defaultValue === undefined;
});
}
return jsonSchema;
}
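That converted schema is what the server advertises in its tools/list response. Roughly (a simplified sketch; the real server registers more tools such as pets.get):
// Sketch: building the tools/list payload from the Zod schemas.
const tools = [
  {
    name: 'pets.search',
    description: 'Search Petfinder for adoptable pets',
    inputSchema: zodToMCPSchema(animalSearchSchema),
  },
];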
4. Tool Implementation
Each MCP tool is a simple async function:
async function searchPets(input: any) {
const validatedInput = animalSearchSchema.parse(input);
const result = await petfinderRequest('/animals', validatedInput);
return {
content: [
{
type: 'text',
text: `Found ${result.animals?.length || 0} pets matching your search.`,
},
{
type: 'text',
text: JSON.stringify(result, null, 2),
},
],
};
}
Here's a tip: when the API response you're wrapping is large or complex, it's much clearer to present the final output as well-formatted Markdown. LLMs handle Markdown especially well, and it's far more readable than sifting through pages of raw JSON.
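For example, a small helper along these lines could replace the raw JSON.stringify above (the animal fields used here follow Petfinder's /animals response; adjust them for whatever API you're wrapping):
// Sketch: turning pet search results into Markdown instead of raw JSON.
function formatPetsAsMarkdown(animals: any[]): string {
  return animals
    .map(
      (animal) =>
        `### ${animal.name}\n` +
        `- Breed: ${animal.breeds?.primary ?? 'Unknown'}\n` +
        `- Profile: ${animal.url}`
    )
    .join('\n\n');
}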
5. HTTP Server with Bun
The entire server runs on Bun's built-in HTTP server:
import { serve } from 'bun'; // Bun's built-in HTTP server

serve({
port: parseInt(process.env.PORT ?? '3000', 10),
async fetch(req) {
const url = new URL(req.url);
// Handle CORS
if (req.method === 'OPTIONS') {
return new Response(null, {
headers: {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Methods': 'POST, GET, OPTIONS',
'Access-Control-Allow-Headers': '*',
},
});
}
// Health check
if (url.pathname === '/healthz') {
return new Response('OK', { status: 200 });
}
// MCP endpoint
if (req.method === 'POST' && url.pathname === '/mcp') {
const body = (await req.json()) as MCPRequest;
const response = await handleMCPRequest(body, url);
return new Response(JSON.stringify(response), {
headers: {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*',
},
});
}
return new Response('Not found', { status: 404 });
},
});
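For completeness, handleMCPRequest is just a small JSON-RPC dispatcher. A rough sketch of its shape (simplified; the real handler extracts the credentials from the URL, registers every tool by name, and returns richer errors):
// Sketch: dispatch JSON-RPC methods to the right handler. Uses the tools
// array and searchPets function shown earlier; details are simplified.
async function handleMCPRequest(body: MCPRequest, url: URL) {
  switch (body.method) {
    case 'initialize':
      return {
        jsonrpc: '2.0',
        id: body.id,
        result: {
          protocolVersion: '2025-03-26', // protocol revision your client expects
          capabilities: { tools: {} },
          serverInfo: { name: 'petfinder-mcp-server', version: '1.0.0' },
        },
      };
    case 'tools/list':
      return { jsonrpc: '2.0', id: body.id, result: { tools } };
    case 'tools/call': {
      // The real server looks up the tool by body.params.name;
      // hard-coding pets.search here to keep the sketch short.
      if (body.params?.name === 'pets.search') {
        return {
          jsonrpc: '2.0',
          id: body.id,
          result: await searchPets(body.params.arguments),
        };
      }
      return {
        jsonrpc: '2.0',
        id: body.id,
        error: { code: -32602, message: `Unknown tool: ${body.params?.name}` },
      };
    }
    default:
      return {
        jsonrpc: '2.0',
        id: body.id,
        error: { code: -32601, message: 'Method not found' },
      };
  }
}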
Running and Testing the Server
Start the server locally:
bun run simple-mcp-server.ts
Test with curl:
curl -X POST "http://localhost:3000/mcp?client-id=YOUR_CLIENT_ID&client-secret=YOUR_CLIENT_SECRET" \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"id": 1,
"method": "tools/call",
"params": {
"name": "pets.search",
"arguments": {
"type": "dog",
"location": "90210",
"limit": 5
}
}
}'
Or use the MCP Inspector for interactive testing:
npx @modelcontextprotocol/inspector "http://localhost:3000/mcp?client-id=YOUR_CLIENT_ID&client-secret=YOUR_CLIENT_SECRET"
Client Compatibility & Adding to Cursor IDE
Important: Not all MCP clients fully support Remote Stateless Streamable HTTP MCP Servers (quite a mouthful!). I've found that Cursor and Pydantic AI both work perfectly with this approach. However, you might not be able to use this server with Claude.ai or CrewAI quite yet; the blame there falls on the Python MCP client not yet supporting it.
One of the best features of MCP servers is integrating them with AI coding assistants like Cursor. Add this to your Cursor settings:
{
"mcpServers": {
"petfinder-local": {
"type": "http",
"url": "http://localhost:3000/mcp?client-id=YOUR_CLIENT_ID&client-secret=YOUR_CLIENT_SECRET"
}
}
}
Replace YOUR_CLIENT_ID and YOUR_CLIENT_SECRET with your actual Petfinder API credentials from petfinder.com/developers.
Now Cursor can help you find adoptable pets while you code! Try asking: "Find me small dogs available for adoption in San Francisco."
Using with Pydantic AI
You can also integrate this MCP server with Pydantic AI agents. Here's a simple example:
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStreamableHTTP
def create_petfinder_agent() -> Agent:
    """Create and configure the petfinder agent"""
    # Set up MCP server connection
    server = MCPServerStreamableHTTP(
        url='<your-deployed-url>/mcp?client-id=YOUR_CLIENT_ID&client-secret=YOUR_CLIENT_SECRET'
    )

    # Generate the system prompt
    system_prompt = """You are a Petfinder agent who helps find adoptable pets.

    Important linking rules:
    - Always use the `animal.url` field from the API as the pet's profile link
    - Include the primary photo when available
    - Provide clear adoption information
    """

    # Create the agent
    agent = Agent(
        'groq:qwen/qwen3-32b',  # or your preferred model
        toolsets=[server],
        system_prompt=system_prompt,
    )

    return agent
This creates a Pydantic AI agent that can search for adoptable pets using your MCP server. The agent will automatically use the available tools (pets.search, pets.get, etc.) to fulfill user requests.
Deployment with Docker
The included Dockerfile makes deployment simple:
FROM oven/bun:1.2.19
WORKDIR /app
COPY package.json bun.lockb* ./
RUN bun install --frozen-lockfile
COPY . .
EXPOSE 3000
CMD ["bun", "run", "simple-mcp-server.ts"]
Deploy to any container platform:
docker build -t petfinder-mcp .
docker run -p 3000:3000 petfinder-mcp
Security Considerations
Like any API you expose to the world, this one should be secured using API security best practices. If you want an additional layer of protection and you're deploying to a public cloud like Render.com (which I use), you can add an additional secret in the headers and validate that secret before processing any requests.
For example:
// Check for additional security header
const authSecret = req.headers.get('x-mcp-secret');
if (authSecret !== process.env.MCP_SECRET) {
return new Response('Unauthorized', { status: 401 });
}
I won't cover securing MCP servers in depth in this post, but remember: let the underlying API handle authentication validation - that's what it's designed for.
What's Next?
Try this approach with whatever API you're trying to wrap!
The pattern is always the same:
- Handle authentication (OAuth, API keys, etc.)
- Define Zod schemas for input validation
- Create tool functions that call the underlying API
- Set up the MCP protocol handlers
- Serve it all with Bun's HTTP server
Pro tip: For most APIs with simple API key authentication, you'd just pass the key as a Bearer token in headers - much simpler than the OAuth dance I had to implement for Petfinder!
Try It Yourself
Ready to build your own single-file MCP server? Here's what you need:
- Get Petfinder credentials: Sign up at petfinder.com/developers
- Clone the repo:
git clone https://github.com/mattlgroff/petfinder-mcp-server
- Install Bun:
curl -fsSL https://bun.sh/install | bash
- Run the server:
bun run simple-mcp-server.ts
- Test with MCP Inspector:
npx @modelcontextprotocol/inspector "http://localhost:3000/mcp?client-id=YOUR_ID&client-secret=YOUR_SECRET"
The beauty of this approach is that you can adapt it to wrap any API in just a few hours of work. Just change the authentication method, update the Zod schemas, and modify the tool functions. Everything else stays the same. You can copy and paste the simple-mcp-server.ts into your LLM chatbot or agentic coding assistant as a starting place. Including my README.md would probably help as well. I find that when working with tools like Cursor or OpenCode, starting with a plan and a README.md with defined requirements works best.
Have you built your own MCP server? I'd love to see what APIs you've wrapped! Connect with me on LinkedIn and share your creations.