MCP, Skills, and Subagents: How to Make AI Do Things Without Destroying Your System

Introduction: Why is everyone obsessed with agent autonomy?
Just a few years ago, it seemed like AI was only good for writing rhymes and fixing syntax errors. Today, every other developer is bolting all sorts of "junk" onto their AI agent and, of course, wiring it straight to production servers. "Let it deploy everything itself, I'm going to scroll TikTok!" is a brilliant plan, reliable as a Swiss watch, that usually ends with a dropped database and a developer nursing a nervous tic.
These "brilliant plans" have led the industry to try and control this chaos (or at least shift the responsibility from the LLMs to us). For AI to actually do things without blowing up the system, it needs targeted, secure access. That's exactly why giants like Anthropic and GitHub have highlighted three distinct access approaches: MCP (Model Context Protocol), Skills, and Subagents.
If you read IT forums, it feels like these are competitors fighting for your access rights: "I already have MCP installed, why would I add some Skills?" Let me be honest right away: the "either/or" framing doesn't work here at all.
In a normal, secure workflow, you will definitely need all three tools. Let's break down why you need them, and why your favorite agent is still acting dumb.
1. MCP: The Single Protocol for External APIs
Essentially, MCP is a standardized integration protocol. The industry just agreed: "Stop making every agent write custom crutches for every new service; let's build one pipe for everyone."
How it works in reality: you run a background server (a daemon) on your local machine. Let's say you install the official github-mcp. This server hangs in memory and greets your editor: "Hello, I can create Pull Requests and read issues." The agent goes: "Cool, let's do it," and they start tossing JSONs back and forth.
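To make the "tossing JSONs" part concrete: under the hood, MCP speaks JSON-RPC 2.0. Here is a rough, illustrative sketch of the handshake (the tool name and schema are made-up examples, not the actual github-mcp definitions). The agent asks what the server can do:

```json
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }
```

And the server answers with its menu:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "create_pull_request",
        "description": "Open a PR in a GitHub repository",
        "inputSchema": { "type": "object", "properties": { "title": { "type": "string" } } }
      }
    ]
  }
}
```

The agent then invokes a tool via `tools/call` and gets the result back over the same pipe. That's the entire trick: one wire format instead of a custom integration per service.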
What are the pros? If you need to pull massive chunks of data from Stripe, Slack, or Postgres—it's perfect. MCP handles all the headache with OAuth, tokens, and rate limits.
Where does the pain start? (Security)
It's a total "black box". To install it, you often type a magic npx @some-guy/mcp-server-tools into the terminal. This unknown process gets file system access, runs in the background FOREVER, and holds your passwords in memory. It's a direct path to a supply chain attack, all because you wanted the AI agent to "format text nicely". For heavyweight jobs like databases, it's great. For tiny local operations, it's a security hole.
2. Skills: Local Scripts Without the Magic
This is where Skills enter the arena—your own local scripts that work exactly as you wrote them, not a drop more.
A Skill is not a protocol, not a server, and not a daemon. It is a profoundly simple but massively effective thing: literally an ordinary folder on your laptop. Inside sits a text file, SKILL.md, saying "When I ask you to deploy, use this tool," and next to it a completely standard bash script or script.js.
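On disk, a typical layout might look like this (the folder and file names are just a convention, pick your own):

```
.skills/
  audit/
    SKILL.md    # tells the agent when and how to use the tool
    audit.sh    # the actual script, plain bash
```

The agent reads SKILL.md, and the script does the work. That is the whole architecture.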
Scenario:
You tell the agent: "Audit the project." It reads the markdown from Skills, runs your audit.sh through the terminal, waits for the result, reads it from the console, and... the script terminates. No background daemons holding onto passwords in memory. Job done—go take a walk.
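For the curious, here is what such an audit.sh might contain. This is a minimal sketch of a deliberately read-only script; the specific checks are placeholders you would adapt to your project:

```bash
#!/usr/bin/env bash
# .skills/audit/audit.sh - hypothetical read-only project audit.
# It only inspects files and prints to stdout; it never writes anything.
set -euo pipefail

echo "== Files with TODO/FIXME =="
grep -rn --include='*.js' -E 'TODO|FIXME' src/ || echo "none"

echo "== Dependency count =="
node -e "const p = require('./package.json'); console.log(Object.keys(p.dependencies || {}).length)"

echo "== Largest files =="
du -ah src/ | sort -rh | head -n 5
```

Note the point: the script physically cannot do anything beyond what is written in it. There is no "interpret my intent" layer for the LLM to get creative with.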
Why is it cool?
Because you can read the code with your own eyes. If the agent makes a mistake, it won't delete your System32 folder because your personal script strictly says "read files only". Use Skills for specific, dirty local work (running tests, building for a tricky server). This gives you 100% control and zero framework magic.
3. Subagents: Delegating Intellect
And now we get to the good stuff. Subagents are an architectural approach where your main AI agent doesn't do the dirty intellectual work itself, but instead hires isolated "micro-agents".
Imagine you ask the AI: "Find me info about the React 19 update on three different sites, analyze 10 pages, and write me a report." If a naive agent tries to read all those kilobytes of HTML and dumps the garbage into your current chat, the context window will overflow and the model will start hallucinating hard.
How do Subagents solve this? The main agent says: "I'm the architect here, I'm not going to dig through this HTML trash myself." It quietly pushes a Researcher subagent into an isolated background environment. The subagent downloads the sites, runs the parsing (or passes it through a cheap LLM model), and returns a clean, concentrated two-paragraph JSON conclusion to the central boss.
- Clean context. The main chat stays clean; no kilobytes of minified JS from some random StackOverflow page.
- Security. We never give the scraper subagent write permissions to files or the database. Its only job is to "read the internet", so the blast radius shrinks to almost nothing.
- Concurrency. You can send one subagent off to Google while another digs through hundreds of files of legacy code. Meanwhile, you go drink coffee.
Practice: How to Create Your Own Subagent?
Many people think creating a subagent means writing a thousand-line editor plugin. In reality, you can build an isolated subagent to handle the dirty work in literally 5 minutes using a Skill script. This is the most viable approach today.
Here is a step-by-step example: creating a Node.js scraper subagent.
Step 1. SKILL.md — instructions for the main agent
Create a .skills/scraper/ folder in your project and drop a SKILL.md inside. This is exactly where your main AI assistant learns it has a new employee.
````markdown
# Scraper Subagent

## When to use
This is your local parsing tool. Use it whenever you need to:
- Visit an external website and read its content.
Do not try to load the site yourself in the chat, delegate this work to the subagent.

## How to call
```bash
node .skills/scraper/index.js "<url>"
```

## Response format
The subagent will return a ready-to-use parsed JSON in stdout:
```json
{ "title": "...", "text": "Clean text without tags" }
```
````
Step 2. index.js — the subagent's logic
And here is the "brain" of the subagent going out to do the dirty work:
```javascript
// .skills/scraper/index.js
import fetch from 'node-fetch';
import * as cheerio from 'cheerio';

const url = process.argv[2];

// Guard: without a URL there is nothing to scrape
if (!url) {
  console.error('[Scraper Subagent] No URL passed, nothing to do.');
  process.exit(1);
}

async function runSubagent() {
  // console.error is used strictly for logging (the agent ignores stderr and doesn't pull it into context)
  console.error(`[Scraper Subagent] The boss said to go parse HTML: ${url}`);

  // 1. Hit the network and pull the heavy document
  const response = await fetch(url, { headers: { 'User-Agent': 'Mozilla/5.0' } });
  const html = await response.text();

  // 2. Isolated data processing
  const $ = cheerio.load(html);
  $('script, style, nav, footer').remove(); // Bin the trash
  const cleanText = $('body').text().replace(/\s+/g, ' ').trim().slice(0, 4000);

  const result = {
    status: "success",
    title: $('title').text().trim(),
    text: cleanText
  };

  // 3. Spit the result out to the main agent via stdout
  console.log(JSON.stringify(result));
}

runSubagent().catch((err) => {
  // On failure, still answer in JSON so the main agent can react instead of hanging
  console.log(JSON.stringify({ status: "error", message: err.message }));
  process.exit(1);
});
```
What happens in practice? The main agent receives a task with a link from you. It sees it has a helper in the form of a skill, spawns a Node process in a background terminal, and drops the URL there. Your Node subagent—isolated, without clogging the main chat's memory with garbage—downloads hundreds of HTML kilobytes, cleans them up, and returns a pristine JSON to the central boss.
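If you want to poke it by hand first, it runs like any other Node script. One caveat to the sketch above: the `import` syntax assumes your project has `"type": "module"` in package.json (or rename the file to index.mjs), and node-fetch and cheerio need to be installed:

```bash
npm install node-fetch cheerio
node .skills/scraper/index.js "https://example.com"
# stdout (what the agent actually consumes), roughly:
# {"status":"success","title":"Example Domain","text":"Example Domain This domain is for use..."}
```

Everything the agent needs lands on stdout; the log chatter goes to stderr and never pollutes the context.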
Congratulations, you just quickly implemented a simple microservices architecture for your AI.
Who Actually Supports All This? (Ecosystem Cheat Sheet)
Theory sorted, but where do we shove all this? In 2026, every editor or client offers its own implementation. To save you from reading tons of documentation and racking your brain, here's a quick table:
| Tool / IDE | MCP (Global Server) | Skills (Local Scripts) | Subagents (Delegation) | How to set up? |
|---|---|---|---|---|
| Cursor | ✅ Native, click-click | ⚠️ Via workaround | ❌ Not supported directly | Add MCP via Settings -> Features -> MCP Server. To force it to see Skills, you must enforce in .cursorrules: "Always read instructions from scripts in the .skills/ folder". |
| Claude Code (cli) | ✅ Native | ✅ Its native home | ❌ Not out of the box | MCP is configured via config file. Console geeks will be happy: Skills just sit in a local folder—the client finds and comprehends them automatically as a core feature. |
| Claude Desktop | ✅ Native and powerful | ❌ Can't do it at all | ❌ None | Register paths in claude_desktop_config.json. It simply doesn't pull local bash scripts (Skills) and subagents yet. |
| GitHub Copilot | ⚠️ Currently an experiment | ❌ Not supported | ✅ Native (ideal) | Subagents are built-in out of the box (e.g., @workspace, @terminal, or your custom agents via API). Brilliantly done for delegating tasks and exploring code without cluttering the main chat's brain. |
| Roo Code (VS Code) | ✅ Native, convenient | ⚠️ Via workaround | ⚠️ Via agent "Roles" | Add MCP directly in the GUI. For full-scale subagents, it allows configuring 'Custom Modes'—restricted roles you can switch between. |
| Windsurf | ✅ Native | ❌ Better not to bother | ❌ None | Add MCP directly in graphical settings. It's too early to poke it with custom scripts. |
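For reference, registering an MCP server usually boils down to a few lines of JSON. Here's a sketch of a claude_desktop_config.json entry (the server name, package, and env variable are examples; check the docs of the specific server you install):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your token here>" }
    }
  }
}
```

Notice this is exactly the pattern from the security section: one npx line, and a background process now owns your token. Convenient and slightly terrifying at the same time.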
Quick ecosystem takeaway: MCP is already the global, inevitable integration standard. Skills are more of a feature for those who want strict control over local scripts (Claude Code leads here). Subagents are the obvious trend for complex multi-step development, and right now they work best natively inside Copilot.
Conclusion: So What Should I Use?
Remember a simple rule, and your AI assistants will stop ruining your life:
- Need to talk to an external system (Stripe, GitHub, Supabase)? Grab MCP. Let competent developers and a ready-made server handle the authorization and protocols; don't poke around there yourself.
- Need to do something on your local machine (run isolated tests, copy files)? Write your own Skill. You can see your bash script yourself, and it's safer than giving a stranger's npm package root access to your machine.
- Need to execute heavy research work (download five sites, analyze tons of legacy code, and return JSON)? Implement a Subagent. Offload the heavy lifting to an isolated "micro-agent" and protect your main chat's sanity.
Don't hand unknown third-party processes full access to your computer and then wait around for surprises from random npm packages. Your codebase will thank you.