How to Build an MCP Server with NestJS, Auth0, and Azure AD SSO
The Problem Everyone Is Hitting
If you run an enterprise app with Azure AD SSO and you try to add MCP server support, you will hit this error:
AADSTS9010010: The resource parameter provided in the request
doesn't match with the requested scopes.
This happens because MCP clients are required to send a resource parameter (per RFC 8707) in the OAuth authorization request. When your identity provider federates through Azure AD (Microsoft Entra), that parameter gets forwarded. Azure AD rejects it.
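To make the conflict concrete, here is a rough sketch of the kind of authorization request an MCP client constructs. Every value below is a placeholder, not taken from a real tenant.

```typescript
// Sketch (placeholder values): the authorization request an MCP client builds.
// RFC 8707 adds the `resource` parameter, which Azure AD rejects when it is
// forwarded through a federated identity provider.
const params = new URLSearchParams({
  response_type: 'code',
  client_id: 'YOUR_CLIENT_ID',
  redirect_uri: 'http://localhost:8976/callback',
  scope: 'openid profile email',
  resource: 'http://localhost:3000/api' // the parameter at the center of the conflict
});
const authorizeUrl = `https://YOUR_TENANT.us.auth0.com/authorize?${params.toString()}`;
```

When Auth0 federates the login to Azure AD without intercepting that `resource` parameter, Azure AD sees it and rejects the request.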
This is not a niche edge case. It affects Microsoft's own Power BI MCP server, IBM's MCP Context Forge, MCP Inspector, and basically anyone trying to put MCP behind enterprise SSO. The upstream spec issue (modelcontextprotocol/modelcontextprotocol#1614) requesting resource be made optional is still open with no resolution.
Microsoft even tightened Entra's enforcement of this in March 2026, converting what had been a latent incompatibility into active breakage across production systems.
What Most People Are Doing About It
The standard enterprise answer is to add infrastructure:
- Azure API Management (APIM) as an OAuth gateway. Microsoft recommends this, but APIM's response buffering breaks MCP streaming.
- Solo.io's agentgateway or Microsoft's mcp-gateway. Real solutions, but you are now running a dedicated proxy for MCP auth.
- A fake DCR proxy. Stand up dynamic client registration endpoints that internally return pre-configured credentials. This is a shim to satisfy MCP clients that assume DCR exists.
- Conditional resource parameter omission. Detect Entra v2 endpoints in your server code and strip the `resource` parameter. Pragmatic, but not spec-compliant.
All of these work. All of them add moving parts. I wanted something simpler.
What I Did Instead
I have a NestJS API that serves a React/Next.js frontend. Auth is handled by Auth0, which federates to Azure AD for enterprise SSO. Users log in through their organization's Microsoft account. Standard enterprise setup.
I wanted to expose MCP tools on this same API so that Claude Code (and other MCP clients) could call them with the same authentication. No new infrastructure. No proxy. No gateway. Just Streamable HTTP, not SSE. One new endpoint, reusing the existing OAuth strategy.
You can get more complex with this (SSE transport, dynamic client registration, dedicated OAuth clients), but I did not need any of that.
Here is what I ended up with:
- Reuse the existing Auth0 application. The same OAuth client that the React frontend uses. No dynamic client registration. The MCP client passes the `clientId` explicitly in its config.
- Enable one Auth0 toggle. The Resource Parameter Compatibility Profile intercepts the `resource` parameter before it reaches Azure AD.
- Add a guard and a well-known endpoint. The guard validates the auth context and returns a `WWW-Authenticate` header on 401. The well-known endpoint tells MCP clients where to authenticate.
That is the entire auth layer. No gateway, no proxy, no new OAuth clients.
Why Skip Dynamic Client Registration
The MCP spec (March 2025 version) said implementations "SHOULD" support dynamic client registration. Most MCP tutorials and client implementations treated this as mandatory. If your server did not have a DCR endpoint, many clients would fail.
The November 2025 spec revision changed this significantly. It now defines three registration methods in explicit priority order:
- Pre-registration (use a pre-configured client ID). Highest priority.
- Client ID Metadata Documents (CIMD). The new middle ground.
- Dynamic client registration. Downgraded from SHOULD to MAY.
Pre-registration is not a workaround. It is the spec's preferred path for enterprise deployments.
This makes sense. Some enterprise identity providers are starting to add DCR support, but adoption is uneven and often still in beta. Waiting around for full DCR support across your identity stack is not a practical path for shipping today.
More importantly, DCR is a security risk in enterprise contexts. It allows unlimited app registrations, which is the opposite of what you want when you are building an SSO-grade application with controlled access. Auth0's own documentation recommends static registration for production and explicitly calls DCR a security risk that requires Enterprise-tier controls.
The Resource Parameter Problem (and Fix)
MCP clients must send a resource parameter per RFC 8707. The MCP spec converts RFC 8707's "MAY" into a "MUST." This is a deliberate deviation that breaks interoperability with identity providers that reject the parameter.
Auth0 historically used its own audience parameter instead of the standard resource. Without the compatibility profile enabled, Auth0 does not know which API to issue a token for when it receives resource. It falls back to an encrypted JWE token (5-part) instead of a standard JWT (3-part), and the whole flow breaks.
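A quick way to spot this failure mode while debugging is to count the dot-separated segments of the token. This helper is a hedged sketch for inspection, not part of any Auth0 SDK:

```typescript
// A compact JWT has 3 dot-separated parts (header.payload.signature);
// a compact JWE has 5 (header.encrypted_key.iv.ciphertext.tag).
// A 5-part token from Auth0 is a strong hint the Resource Parameter
// Compatibility Profile is not enabled.
function tokenFormat(token: string): 'jwt' | 'jwe' | 'unknown' {
  switch (token.split('.').length) {
    case 3: return 'jwt';
    case 5: return 'jwe';
    default: return 'unknown';
  }
}
```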
The Resource Parameter Compatibility Profile (Auth0 Dashboard, Settings, Advanced) makes Auth0 treat resource as equivalent to audience for token audience determination. Critically, it does not forward the resource parameter to upstream identity providers. That is what prevents Azure AD from seeing it and rejecting the request.
This feature entered limited early access in November 2025, driven specifically by MCP adoption. It is documented but not prominently. You will find it in Auth0's AI/MCP-specific docs, not the main Auth0 documentation.
The Implementation
Auth0 Configuration
Four things to configure in Auth0:
1. Enable the Resource Parameter Compatibility Profile. Auth0 Dashboard, Settings, Advanced tab. Toggle it on.
2. Register your MCP server URL as an Auth0 API. Auth0 needs to know your server URL is a valid audience. Create an API with the identifier set to your server URL (e.g., http://localhost:3000/api for local dev). This is what the resource parameter will resolve to.
3. Promote your Azure AD connection to domain-level. This makes the connection available to all applications in the tenant without per-app configuration.
# Find your connection ID (look for strategy: "waad")
auth0 api get connections
# Promote to domain-level
auth0 api patch connections/YOUR_CONNECTION_ID \
--data '{"is_domain_connection": true}'
4. Add the MCP callback URL to your existing application. Add http://localhost:8976/callback (or your chosen port) to the Allowed Callback URLs on the same Auth0 application your frontend uses.
NestJS Server
You need two things on the server: an auth guard for the MCP endpoint, and a well-known metadata endpoint.
The guard validates the auth context your existing middleware sets. When auth fails, it returns a 401 with a WWW-Authenticate header pointing to your OAuth metadata. This header is what triggers MCP clients to start the OAuth discovery flow.
import { Injectable, CanActivate, ExecutionContext, UnauthorizedException } from '@nestjs/common';
import type { Request, Response } from 'express';
@Injectable()
export class McpAuthGuard implements CanActivate {
canActivate(context: ExecutionContext): boolean {
const request = context.switchToHttp().getRequest<Request>();
const auth = (request as any).auth;
if (auth?.userId && auth?.type === 'enterprise' && auth?.orgId) {
return true;
}
// Set the WWW-Authenticate header before throwing so it survives the exception filter.
const apiUrl = process.env.NEXT_PUBLIC_API_URL || 'http://localhost:3000/api';
const baseUrl = apiUrl.replace(/\/api\/?$/, '');
const response = context.switchToHttp().getResponse<Response>();
response.setHeader(
'WWW-Authenticate',
`Bearer resource_metadata="${baseUrl}/.well-known/oauth-protected-resource"`
);
throw new UnauthorizedException('Unauthorized');
}
}
Important: throw the exception. Do not manually write a response body and return false. When a guard returns false, NestJS throws a ForbiddenException, and the exception filter then tries to send a second response on top of the one you already wrote, crashing with Cannot set headers after they are sent.
The well-known endpoint implements RFC 9728 (OAuth Protected Resource Metadata). It tells MCP clients where to authenticate. This must be served outside your API prefix (at /.well-known/oauth-protected-resource, not /api/.well-known/...).
import { Controller, Get } from '@nestjs/common';
import { ProtectedResourceMetadataBuilder, BearerMethod, SigningAlgorithm } from '@auth0/auth0-api-js';
import { SkipThrottle } from '@nestjs/throttler';
@Controller('.well-known')
@SkipThrottle()
export class McpWellKnownController {
@Get('oauth-protected-resource')
getProtectedResourceMetadata() {
const apiUrl = process.env.NEXT_PUBLIC_API_URL || 'http://localhost:3000/api';
const auth0Domain = process.env.AUTH0_DOMAIN;
if (!auth0Domain) {
return { error: 'OAuth authorization server not configured.' };
}
const metadata = new ProtectedResourceMetadataBuilder(
apiUrl,
[`https://${auth0Domain}`]
)
.withBearerMethodsSupported([BearerMethod.HEADER])
.withResourceSigningAlgValuesSupported([SigningAlgorithm.RS256])
.withScopesSupported(['openid', 'profile', 'email'])
.build();
return metadata.toJSON();
}
}
This endpoint must be anonymous (no auth required). MCP clients hit it before they have a token. If you have a global auth guard, make sure this route is excluded or marked as public.
Consider relaxing rate limiting on this endpoint. MCP clients may poll it during the OAuth discovery flow, and strict throttling can cause authentication to fail. I skip it entirely in my setup, but a generous limit would also work.
The resource field in this response must be your actual server URL, not the Auth0 audience URL. MCP clients use it to match the resource they are connecting to. If it does not match, you get SDK auth failed: Protected resource does not match expected.
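For reference, the metadata the builder produces looks roughly like this. The field names come from RFC 9728; the values shown are the local-dev placeholders used throughout this post, so treat the exact shape as illustrative rather than an exact response:

```json
{
  "resource": "http://localhost:3000/api",
  "authorization_servers": ["https://YOUR_TENANT.us.auth0.com"],
  "bearer_methods_supported": ["header"],
  "resource_signing_alg_values_supported": ["RS256"],
  "scopes_supported": ["openid", "profile", "email"]
}
```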
For the MCP transport itself, I use @rekog/mcp-nest. It gives you decorator-based tool registration with full NestJS dependency injection.
The module setup has two parts. A shared module registers tool classes with forFeature() and imports the domain modules those tools need:
import { Module } from '@nestjs/common';
import { McpModule as RekogMcpModule } from '@rekog/mcp-nest';
import { MyToolClass } from './tools/my.tools';
@Module({
imports: [
RekogMcpModule.forFeature([MyToolClass], 'your-server-name'),
// domain modules your tools depend on
],
providers: [MyToolClass],
exports: [MyToolClass]
})
export class McpSharedModule {}
A root module in your app configures the transport, attaches the auth guard, and imports the shared module:
import { Module } from '@nestjs/common';
import { McpModule as RekogMcpModule, McpTransportType } from '@rekog/mcp-nest';
import { McpAuthGuard } from './mcp-auth.guard';
import { McpWellKnownController } from './mcp-well-known.controller';
import { McpSharedModule } from './mcp-shared.module';
@Module({
imports: [
RekogMcpModule.forRoot({
name: 'your-server-name',
version: '1.0.0',
description: 'Your MCP server description',
transport: McpTransportType.STREAMABLE_HTTP,
streamableHttp: {
statelessMode: true,
enableJsonResponse: true
},
guards: [McpAuthGuard]
}),
McpSharedModule
],
controllers: [McpWellKnownController]
})
export class McpApiModule {}
This split keeps tool registration separate from transport configuration. The shared module can be tested independently, and the tool classes stay focused on input validation and service delegation.
Tool Classes
Tool classes are ultra-thin. They validate input with Zod, call an authorized service method, and return the result. No business logic, no authorization logic. That all lives in the service layer.
A few design principles that matter for agent ergonomics:
- Name parameters unambiguously. `projectId`, not `project`; `clientName`, not `client`.
- Describe what the tool does for the agent, not just what it wraps. Tell the agent what this tool returns and how its outputs connect to other tools.
- Cap results with sensible defaults. Agents have limited context. Return 10 results by default, let them ask for more.
- Use enums for constrained values. Zod enums prevent agents from guessing status names or field values.
- Document relationships between tools. If `add_project_note` needs a project ID, say "Use `search_projects` to find the project ID first."
import { Injectable } from '@nestjs/common';
import { Tool } from '@rekog/mcp-nest';
import { z } from 'zod';
import { ProjectsService } from '../projects';
const projectStatuses = ['active', 'completed', 'on_hold'] as const;
@Injectable()
export class McpProjectTools {
constructor(private readonly projectsService: ProjectsService) {}
@Tool({
name: 'search_projects',
description:
'Search projects by keyword, status, or client name. Returns matching records ' +
'with project IDs. Use project IDs from this tool as input to add_project_note.',
parameters: z.object({
search: z.string().optional().describe('Fuzzy search on project name or description.'),
status: z.enum(projectStatuses).optional(),
clientName: z.string().optional().describe('Exact or partial client organization name.'),
limit: z.number().int().min(1).max(25).optional().describe('Max results (default 10).')
})
})
async searchProjects(params: {
search?: string;
status?: string;
clientName?: string;
limit?: number;
}) {
const result = await this.projectsService.search(params);
return { content: [{ type: 'text' as const, text: JSON.stringify(result) }] };
}
@Tool({
name: 'add_project_note',
description:
'Add a note to a project. Use search_projects to find the project ID first.',
parameters: z.object({
projectId: z.string().describe('The project ID from search_projects.'),
content: z.string().min(1).max(5000).describe('Note body text.'),
title: z.string().optional().describe('Short title for the note.')
})
})
async addProjectNote(params: { projectId: string; content: string; title?: string }) {
const result = await this.projectsService.addNote(params);
return { content: [{ type: 'text' as const, text: JSON.stringify(result) }] };
}
}
The pattern is the same for every tool: Zod schema in, service call, JSON out. Your service methods handle authorization, entitlement checks, and business logic. The tool class is just the interface.
MCP Client Configuration
This example is for local development. In your .mcp.json (project-level or ~/.claude.json global):
{
"mcpServers": {
"your-server": {
"type": "http",
"url": "http://localhost:3000/api/mcp",
"oauth": {
"clientId": "YOUR_AUTH0_CLIENT_ID",
"callbackPort": 8976,
"authServerMetadataUrl": "https://YOUR_TENANT.us.auth0.com/.well-known/openid-configuration"
}
}
}
}
In production, you would replace the url with your live server URL and drop the callbackPort (that is only needed for local OAuth redirects).
The clientId is required. This is the same client ID your React frontend uses. Because we are not using dynamic client registration, the MCP client needs to know which OAuth application to authenticate against.
The authServerMetadataUrl points directly to Auth0's OIDC discovery endpoint. This helps the MCP SDK find the authorization and token endpoints without relying on the MCP server to provide them.
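Auth0's discovery document is standard OIDC. The fields the MCP SDK cares about are roughly the following (excerpted, with a placeholder tenant, so treat this as illustrative rather than an exact response):

```json
{
  "issuer": "https://YOUR_TENANT.us.auth0.com/",
  "authorization_endpoint": "https://YOUR_TENANT.us.auth0.com/authorize",
  "token_endpoint": "https://YOUR_TENANT.us.auth0.com/oauth/token",
  "jwks_uri": "https://YOUR_TENANT.us.auth0.com/.well-known/jwks.json"
}
```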
main.ts Configuration
Two things need to happen in main.ts: exclude the well-known path from your global API prefix, and add MCP headers to CORS.
// Exclude .well-known from /api prefix so it's served at the root
app.setGlobalPrefix('api', {
exclude: [
{ path: '.well-known/oauth-protected-resource', method: RequestMethod.GET },
{ path: '.well-known/oauth-protected-resource', method: RequestMethod.OPTIONS }
]
});
app.enableCors({
origin: corsAllowedOrigins,
credentials: true,
allowedHeaders: [
'Content-Type',
'Authorization',
'X-Requested-With',
'Accept',
'Origin',
'Access-Control-Request-Method',
'Access-Control-Request-Headers',
'User-Agent', // AI SDK and MCP clients set this; Safari/Firefox require it in CORS allowedHeaders
'Mcp-Session-Id'
],
exposedHeaders: ['Content-Length', 'Content-Type', 'Mcp-Session-Id'],
methods: ['GET', 'POST', 'PUT', 'DELETE', 'PATCH', 'OPTIONS'],
maxAge: 3600
});
The Mcp-Session-Id header is required in both allowedHeaders and exposedHeaders. The User-Agent header matters because some MCP clients and AI SDKs set it, and Safari and Firefox will reject the preflight if it is not explicitly allowed.
The Full Auth Flow
MCP Client (Claude Code)
|
+-- POST /api/mcp -> 401 + WWW-Authenticate header
|
+-- GET /.well-known/oauth-protected-resource -> Auth0 server URL
|
+-- OAuth flow with Auth0 (resource param intercepted, NOT forwarded)
| +-- Azure AD SSO (clean request, no resource param)
|
+-- Receives JWT access token (audience: your server URL)
|
+-- POST /api/mcp + Authorization: Bearer <token>
+-- McpAuthGuard validates -> tools execute
The key insight is in the middle: Auth0 intercepts the resource parameter and uses it to determine which API to issue a token for, but never forwards it to Azure AD. Azure AD sees a clean authorization request. It authenticates the user through SSO and returns. Auth0 mints a JWT with the correct audience. The MCP client gets a usable token.
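If you want to confirm that last step while debugging, you can peek at the token's audience claim. This is a hedged sketch for inspection only: it decodes without verifying the signature, which real request handling must still do against Auth0's JWKS.

```typescript
// Decode (NOT verify) a JWT payload and return its audience claim.
// Useful for checking that Auth0 minted the token for your server URL.
function decodeAudience(token: string): string | string[] | undefined {
  const [, payload] = token.split('.');
  if (!payload) return undefined;
  const json = Buffer.from(payload, 'base64url').toString('utf8');
  return JSON.parse(json).aud;
}
```

For a correctly configured setup, the audience should match your MCP server URL (e.g. http://localhost:3000/api in local dev), not an Auth0-internal identifier.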
This Is for Private Applications
I want to be explicit about scope. This setup is for a private, secured application. MCP users must already have login access to the organization's UI. They authenticate with the same credentials, through the same SSO flow, against the same OAuth client.
If you are building a public MCP server that arbitrary clients should be able to connect to, you probably do want dynamic client registration or CIMD. This post is not about that use case.
This is about taking an existing enterprise app with existing enterprise auth and adding MCP as a transport layer on top of it. No new identity infrastructure. No new OAuth clients. No gateway.
Where the Spec Is Heading
The November 2025 MCP spec update and the MCP blog post on evolving client registration signal a clear direction:
- Pre-registration is the preferred enterprise path
- CIMD is the new middle ground for public discovery without DCR's database growth problems
- DCR is a fallback, not the default
Auth0 is positioning CIMD as the future of MCP client registration. Aaron Parecki (now at Auth0, previously OAuth spec editor) has been writing extensively about enterprise-ready MCP auth patterns.
The resource parameter conflict is still unresolved at the spec level. The maintainers' position (spec compliance takes priority, Entra should adapt) is defensible from a standards perspective. But for developers who cannot change Azure AD's behavior, the practical question is: what works today?
For Auth0 customers with Azure AD federation, the answer is the Resource Parameter Compatibility Profile plus pre-registered client reuse. It is the simplest working path I have found.
Troubleshooting
AADSTS9010010 error. The Resource Parameter Compatibility Profile is not enabled. Auth0 Dashboard, Settings, Advanced. Toggle it on.
Token is a JWE (5-part) instead of JWT (3-part). Same cause. Without the profile, Auth0 does not know which API to scope the token to.
SDK auth failed: Protected resource does not match expected. The resource field in your /.well-known/oauth-protected-resource response must match your MCP server URL (e.g., http://localhost:3000/api), not the Auth0 audience.
Cannot set headers after they are sent. Your guard is writing a response body manually and then returning false. Throw UnauthorizedException instead.
needs authentication but no browser redirect. Check that your Azure AD connection is promoted to domain-level and that the clientId in your MCP config matches your Auth0 application.
Further Reading
- MCP Spec (November 2025) - Authorization - The current authoritative spec, including the pre-registration priority order
- RFC 9728: OAuth Protected Resource Metadata - The well-known endpoint standard
- RFC 8707: Resource Indicators for OAuth 2.0 - The resource parameter that causes the Azure AD conflict
- Auth0: Resource Parameter Compatibility Profile - The toggle that fixes the federation issue
- Solo.io: MCP Authorization is a Non-Starter for Enterprise - The gateway-first counterargument
- Aaron Parecki: MCP Auth Spec Update - Spec author commentary on CIMD and enterprise patterns
- @rekog/mcp-nest - NestJS MCP server library
- @auth0/auth0-api-js - Auth0 OAuth metadata builder
- Build Your First MCP Server - My free e-book covering MCP server development, including OAuth
If you found this post helpful or have questions, feel free to connect with me on LinkedIn. It's the best place to reach me.