Breaking Through the AI Context Wall
I’ve spent the last six months pushing Claude Code to its limits in production. While it’s great at writing boilerplate, it eventually hits a wall: it doesn’t know your internal APIs, your private Jira boards, or how your team handles specific deployments. Claude Code is Anthropic’s CLI agent that handles code and git workflows, but it’s only as smart as the data it can access.
To make it a true teammate, you need to build custom “skills.” These are powered by the Model Context Protocol (MCP). Think of an MCP server as a bridge that lets Claude safely step outside its sandbox to talk to your infrastructure. In my experience, adding just two or three custom tools can shave 20% off your daily manual lookup time.
The 5-Minute Setup: Your First Custom Skill
The fastest way to start is with the TypeScript MCP starter. This lets Claude execute local logic you’ve written yourself.
1. Initialize the project
# Create a project for a custom Jira or Wiki fetcher
mkdir claude-internal-tools && cd claude-internal-tools
# Scaffolding a new MCP server
npx @modelcontextprotocol/create-server internal-docs-server
2. Define a Practical Tool
Open src/index.ts. Instead of a generic example, let’s build a tool that fetches internal dev environment specs. This prevents Claude from guessing which ports or URLs to use.
server.tool(
  "get_service_config",
  "Fetches dev environment ports and URLs from the internal registry",
  { serviceName: z.string().describe("The name of the microservice, e.g., 'auth-api'") },
  async ({ serviceName }) => {
    // In production, this would be an Axios call to your internal registry
    const mockData: Record&lt;string, string&gt; = {
      "auth-api": "Port: 8081, DB: pg-dev-01",
      "gateway": "Port: 3000"
    };
    return {
      content: [{ type: "text", text: mockData[serviceName] ?? "Service not found" }]
    };
  }
);
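The lookup logic is easier to verify if you factor it out of the MCP handler. Here is a minimal, self-contained sketch of that refactor (the registry contents and the lookupServiceConfig name are illustrative, not part of the SDK):

```typescript
// Hypothetical registry lookup, pulled out of the tool handler so it can be unit-tested
const registry: Record<string, string> = {
  "auth-api": "Port: 8081, DB: pg-dev-01",
  "gateway": "Port: 3000",
};

function lookupServiceConfig(serviceName: string): string {
  // Return a clear fallback message instead of undefined, so Claude always gets usable text
  return registry[serviceName] ?? "Service not found";
}
```

The handler then becomes a one-liner that wraps this function's return value in the MCP text-content shape.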
3. Linking the Skill to Claude Code
Claude Code needs to know where your server lives. The quickest route is the CLI: run claude mcp add internal-docs -- node /Users/yourname/dev/claude-internal-tools/build/index.js. If you also use the Claude Desktop app, the equivalent JSON goes in claude_desktop_config.json. On macOS, find it at ~/Library/Application Support/Claude/. On Windows, check %APPDATA%\Claude\.
{
"mcpServers": {
"internal-docs": {
"command": "node",
"args": ["/Users/yourname/dev/claude-internal-tools/build/index.js"]
}
}
}
Restart your Claude Code session. Now, you can simply ask, “What port does the auth-api use?” and it will call your code to find out.
The Art of the Semantic Interface
When you build a skill, you aren’t just writing code; you’re writing instructions for a model. Claude uses your description fields to decide when to trigger a tool. If your description is lazy, Claude will ignore the tool or hallucinate parameters.
I once spent two hours debugging why Claude wouldn’t use a database tool I built. The culprit? A vague description. Instead of “Queries the database,” I changed it to: “Queries the PostgreSQL production database for user records using a standard email address format.” The success rate jumped from 40% to nearly 100% instantly.
Key Components of a Skill:
- Name: Use clear, snake_case names like fetch_sentry_errors.
- Description: Be hyper-specific. Mention data formats and exact use cases.
- Input Schema: Use Zod to strictly define what Claude can send. This prevents the model from sending “yesterday” when your API expects an ISO timestamp.
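To see why the schema matters, here is a self-contained sketch of the kind of check a strict schema performs for you. It uses plain TypeScript rather than Zod, and the isIsoTimestamp helper is purely illustrative:

```typescript
// Illustrative strict check: accept "2024-05-01T12:00:00Z", reject "yesterday"
function isIsoTimestamp(value: string): boolean {
  // Basic ISO-8601 shape check, then confirm the date actually parses
  const isoShape = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?(Z|[+-]\d{2}:\d{2})$/;
  return isoShape.test(value) && !Number.isNaN(Date.parse(value));
}
```

With Zod you would express the same constraint declaratively (for example, a string with a datetime refinement), and the SDK rejects bad input before your handler ever runs.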
Handling Complexity: State and Async Workflows
Basic tools are great, but the real power lies in handling long-running tasks. Claude Code has a timeout for tool responses. If a task takes longer than 30 seconds—like running a full CI suite—don’t make Claude wait. Use a “Job ID” pattern.
// Tool 1: kick off the long process and return immediately
server.tool(
  "trigger_deploy",
  "Starts an async deploy on the given cluster and returns a job ID to poll",
  { cluster: z.string() },
  async ({ cluster }) => {
    const jobId = "job_" + Math.random().toString(36).substring(7);
    // In production, persist the job ID and start the real deploy here
    return { content: [{ type: "text", text: `Deploy started on ${cluster}. ID: ${jobId}. Ask for status in 60s.` }] };
  }
);

// Tool 2: poll for results by job ID
server.tool(
  "get_deploy_status",
  "Returns the current status of a deploy started with trigger_deploy",
  { jobId: z.string() },
  async ({ jobId }) => {
    // In production, look up jobId in your job store or CI API instead of hardcoding
    return { content: [{ type: "text", text: `Status of ${jobId}: Success` }] };
  }
);
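For the two tools to cooperate, they need shared state. In a single-process server an in-memory map is enough; the sketch below is illustrative (a real deploy would flip the status from a CI webhook or a background polling loop, not a manual call):

```typescript
// Hypothetical in-memory job store shared by the trigger and status tools
type JobStatus = "running" | "success" | "failed";
const jobs = new Map<string, JobStatus>();

function startJob(): string {
  const jobId = "job_" + Math.random().toString(36).substring(7);
  jobs.set(jobId, "running"); // every new job starts as running
  return jobId;
}

function finishJob(jobId: string, status: JobStatus): void {
  // Only update jobs we actually started
  if (jobs.has(jobId)) jobs.set(jobId, status);
}

function getStatus(jobId: string): string {
  return jobs.get(jobId) ?? "unknown job";
}
```

trigger_deploy would call startJob() and return the ID; get_deploy_status would call getStatus(jobId). If your server can restart between the two calls, move the map into Redis or a database.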
Production Hardening Tips
After building dozens of these, I’ve found a few non-negotiable rules for reliability:
1. Give Claude the Stack Trace. If your tool hits an error, don’t just say “Something went wrong.” Return the specific error message. Claude can often read the error, realize it made a typo in the input, and retry the call automatically without you saying a word.
2. Debugging via Stderr. MCP’s stdio transport uses stdin and stdout for the protocol itself. If you use console.log() for debugging, you’ll corrupt the JSON-RPC stream and crash the connection. Use console.error() instead; Claude Code ignores stderr, but it will show up in your terminal for debugging.
3. Keep Tools Focused. Avoid the “God Tool” that tries to do everything. It’s better to have five specialized MCP servers than one massive, confusing one. This keeps the prompt context clean and helps the model stay focused on the task at hand.
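The first tip above can be sketched as a small wrapper: instead of swallowing exceptions, put the real error text into the tool result so Claude can read it and retry. The withErrorText helper is illustrative; the content/isError result shape follows the MCP tool-result convention:

```typescript
// Illustrative wrapper: surface the real error text instead of a generic failure
type ToolResult = { content: { type: "text"; text: string }[]; isError?: boolean };

async function withErrorText(fn: () => Promise<string>): Promise<ToolResult> {
  try {
    return { content: [{ type: "text", text: await fn() }] };
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err);
    // Claude can read this message, spot its own input mistake, and retry the call
    return { content: [{ type: "text", text: `Tool failed: ${message}` }], isError: true };
  }
}
```

Wrap each handler body in withErrorText so a bad serviceName or malformed timestamp comes back as readable text rather than a dropped connection.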
Custom skills turn Claude Code from a generic chatbot into a specialized engineering partner. By connecting it to your specific stack, you stop being the middleman between the AI and your data. This lets you focus on architecture while the AI handles the repetitive API lookups and status checks.

