A story about organizational knowledge, the hidden cost of bad prompts, and why the future of AI in engineering isn’t smarter models: it’s smarter context.
Every morning, the same ritual: I’d open my AI coding assistant, stare at the blank prompt box, and type something that felt slightly embarrassing. Not because of what I was asking, but because I’d asked the exact same thing yesterday. And the day before. And the week before that.
“Write an integration test for this service. We use our internal test library. Don’t use raw HTTP clients. Seed the database before assertions. Clean up after. Follow our pattern where the setup lives in a separate method. Oh, and don’t forget…”
By the time I finished explaining, I’d written a small essay. The AI would produce something decent. I’d fix a few things. Move on.
It worked. But it was exhausting in a quiet, invisible way.
The Problem Nobody Talks About
We talk a lot about whether AI is capable enough. Does it understand the code? Can it reason through edge cases? Is the model good enough?
But that was never my real problem.
My real problem was simpler: I had to re-teach the AI everything, every single time.
My team had developed real patterns over years. We had internal libraries that handled the messy parts of testing. We had conventions that weren’t written down anywhere formal, just known: how integration tests should be structured, what to assert, what to skip, and what would silently break in production if ignored.
That knowledge lived in people’s heads. And every time someone opened a new chat with an AI, that knowledge evaporated. The AI started from zero. The engineer had to reconstruct the entire context from scratch.
I started to notice this wasn’t just my problem. Different engineers on the same team were prompting the AI differently. Getting different results. Writing tests that looked nothing alike. The AI wasn’t the inconsistency; we were.
A Shift in Thinking
At some point, a realization crept in that changed how I looked at all of this.
The industry had been framing AI assistance as a question of intelligence. Make the model smarter. Give it better reasoning. Train it on more code.
But intelligence without context is just guessing confidently.
What was actually missing wasn’t a smarter model. It was a way to give the model our knowledge, the organizational kind. The stuff that takes six months to absorb when you join a team. The unwritten rules. The preferred libraries. The “we tried that once and it broke everything in production” stories.
What if instead of typing all of that into a prompt every morning, I could package it? Not as documentation no one reads. Not as a wiki that goes stale. But as something an AI agent could actually use when doing the work?
Enter: Agentic Skills
This is where the concept of Agentic Skills comes in.
An Agentic Skill is essentially a reusable capability you give to an AI agent. It’s a structured package (a markdown file, sometimes with a template or a script alongside it) that tells the agent: here is how this specific type of work gets done in our context.
Think of it like this. Imagine you’re onboarding a brilliant new engineer. They’re technically sharp, fast, and eager. But they don’t know your codebase, your preferred libraries, or your team’s quirks yet. So you pair them with a senior engineer who gives them a rundown: “Here’s how we write integration tests. Here’s the library we use. Here’s what a good test looks like around here. Here are the three mistakes everyone makes their first week.”
An Agentic Skill is that rundown, but written once, versioned, and handed to the AI every time it needs to do that category of work.
What Goes Inside a Skill?
A skill for something like integration testing might include:
- When to use it — what kind of task triggers this skill
- The preferred library — not the generic approach, but our approach
- Initialization steps — how to spin up test containers, seed data
- Example test structure — a real pattern, not a textbook example
- Common failure modes — the gotchas, the edge cases that bite people
- Cleanup logic — what needs to happen after the test runs
- What to assert — and what not to assert (often more important)
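Put together, a skill file might look something like this. The frontmatter shape (a name plus a description) follows the SKILL.md convention in Anthropic’s Agent Skills documentation; everything else, the library name, the file paths, the rules, is a hypothetical stand-in for your team’s own conventions.

```markdown
---
name: integration-testing
description: Write integration tests for backend services using our internal
  test library. Use for any task that creates or modifies an integration test.
---

<!-- Illustrative sketch only: library names, paths, and rules are placeholders. -->

# Integration Testing

## Library
Use `team-test-kit`, our internal wrapper. Never reach for raw HTTP clients.

## Setup
1. Spin up the test containers first.
2. Seed the database in a separate setup method, never inline with assertions.

## Structure
Follow the pattern in `examples/order-service-test`: setup method, action,
assertions, cleanup, in that order.

## Assertions
Assert on persisted state, not on response wording. Don’t assert on
timestamps; they flake.

## Cleanup
Tear down seeded data after every test, even on failure. Tests that skip
this pass locally and quietly poison the shared CI database.
```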
When the agent loads this skill before writing a test, it doesn’t need to be taught from scratch. It already knows the playbook. The engineer just says: “Write an integration test for this service” and the result looks like something a senior member of the team wrote.
That’s the shift: from prompting as a teaching session to prompting as delegation.
Why This Is Actually About Knowledge, Not AI
Here’s the part that surprised me most when I started thinking about this seriously.
The hard part isn’t the AI. The hard part is the knowledge capture.
Most engineering organizations have enormous amounts of embedded knowledge; it’s just not in a form that anyone, human or AI, can reliably access. It lives in Slack threads, in the heads of the engineers who’ve been around longest, in old pull request comments, in institutional memory that walks out the door when someone leaves.
Agentic Skills force you to externalize that knowledge. To write it down. To make it explicit. And once it’s explicit, it becomes useful to everyone, not just the AI.
New engineers onboard faster. Standards get applied more consistently. The “right way to do things” stops being tribal knowledge and starts being something you can point to and share.
The Practical Path
The beautiful thing about this approach is that you don’t need to revolutionize anything to start.
You begin small. Pick one thing your team does repeatedly: integration testing is a good first candidate, or unit testing, or code review preparation. Write a skill for it. Have one team try it. See if the AI output improves. See if the prompts get shorter.
The skill is just a markdown file. There’s no new infrastructure. No model training. No vendor contract. You’re just packaging knowledge you already have, in a form an agent can use.
Over time, you build a library. Testing skills. Review skills. Deployment skills. Service scaffolding. Each one encoding a piece of what your organization has learned the hard way.
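On disk, that library can stay as boring as a folder of folders. One plausible layout, assuming a one-folder-per-skill convention with a SKILL.md at each root; the skill names here are just examples:

```text
skills/
  integration-testing/
    SKILL.md        # the playbook sketched above
    examples/       # real test files the agent can imitate
  code-review/
    SKILL.md
  service-scaffolding/
    SKILL.md
```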
And the interesting thing is: even if AI assistants change, even if you switch tools, the knowledge you captured remains useful. Because it’s your knowledge, organized for the first time.
What I Actually Learned
I used to think the goal of AI in engineering was to make the AI do more. Write more code. Answer more questions. Move faster.
But the real leverage isn’t in the AI doing more. It’s in the AI needing less from you because you’ve done the work of making your organization’s knowledge available to it.
The morning prompt that used to take five minutes of context-setting now takes one sentence. Not because the model got smarter. Because I stopped making it start from zero every time.
That’s the quiet revolution here. Not artificial intelligence getting more powerful. Organizational knowledge finally becoming usable.
What’s Coming Next
In the next post, I’m going to go a level deeper.
We talked about what Agentic Skills are and why they matter. Next time, I want to show you how to actually build a production-grade AI agent knowledge setup: the folder structure, how to write a SKILL.md that an agent actually reads correctly, how to version and manage skills across a team, and how to tell when a skill is working versus when it’s quietly being ignored.
If you’ve ever wanted your AI tools to feel less like a search engine and more like a teammate who actually knows your codebase, that post is for you.
Stay tuned.
References that shaped this thinking: Anthropic’s Agent Skills documentation, Vercel’s skills ecosystem, and Hugging Face’s work on agent-ready knowledge packages — all pointing in the same direction.