
How to Build AI Skills That Actually Work

Most AI advice tells you to write better prompts. Skills are different. Here's what makes them work and what makes them break.
By Cole  ·  Cascade AI Consulting

Most AI advice tells you to write better prompts. That's fine for one-off tasks. But if you're doing the same thing every week — writing case notes, drafting grant reports, prepping for meetings — you're copying and pasting the same instructions over and over. And every time, the output is slightly different.

Skills fix that. A skill is a set of instructions you write once that tells your AI tool how to do something specific, every time, the same way. Think of it like training a new employee: you show them the process, explain what good looks like, point out the common mistakes, and then they can do it without you standing over their shoulder.

I've been building skills for the last several months for my consulting work and for the organizations I work with. Here's what I've learned about what makes them work and what makes them break.

Prompts vs. Skills
Why one-off instructions hit a ceiling
[Comparison chart: skills beat one-off prompts on consistency, reusability, and improvement over time]
The description matters more than anything else

I'd say 80% of your effort should go into the description. This is the short paragraph at the top that tells the AI when to use the skill. If it's vague ("helps with reports"), the skill either won't fire when you need it or fires when you don't.

A good description names what the skill produces, includes the actual phrases someone would say when they need it, and uses the specific vocabulary of your field. If you work in homeless services, your description should say "HMIS case notes" and "HUD CoC reporting," not "documentation assistance." The specific terms matter because they point the AI toward the right knowledge.
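To make this concrete, here is a sketch of what a vague description versus a specific one might look like in a skill file's frontmatter. The exact fields depend on your AI tool; the field names below (`name`, `description`) follow the common SKILL.md convention, and the wording is illustrative, not a template you must copy.

```markdown
---
# Too vague: the skill may never fire, or fire at the wrong time.
# description: Helps with reports.

# Specific: names the output, the trigger phrases, and the field's vocabulary.
name: hmis-case-notes
description: Writes HMIS case notes for a homeless services provider. Use when the user asks to "write up a case note," "document a client contact," or mentions HUD CoC reporting, exits to permanent housing, or shelter stays.
---
```

Note that the specific description stays on one line, which matters for the technical reason covered next.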

Technical Tip

The description has to stay on a single line. If your text editor wraps it onto a second line, the AI won't read past the break. This catches a lot of people.

Put your most important rules at the top and bottom

Research on how AI models process long instructions shows something interesting: they pay the most attention to the beginning and the end of what you give them. The middle gets about 30% less attention. This isn't a quirk you can prompt around. It's baked into how the technology works.

The U-Shaped Attention Curve
[Figure: How AI models process instructions — beginning and end get the most weight; the middle gets about 30% less attention.]

The practical takeaway is straightforward: your non-negotiable rules (things like "never put client names in the output" or "always verify numbers against the source data") go in the first few lines. Your output format and quality checks go at the end. Supporting details and examples can go in the middle.
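Applied to a skill file, that layout might look like this sketch. The section names and rules are illustrative, not a required format:

```markdown
# Grant Report Skill

## Non-negotiables (first: highest attention)
- Never put client names in the output.
- Always verify numbers against the source data.

## Background and examples (middle: lowest attention)
- Funder context, tone guidance, sample paragraphs.

## Output format and quality checks (last: highest attention)
- Sections: Summary, Outcomes, Challenges, Next Steps.
- Before finishing, confirm every figure traces back to the source spreadsheet.
```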

Keep it lean

There's a real temptation to put everything you know into a skill file. Don't. At around 19 specific instructions, accuracy actually drops compared to having just 5 focused ones. More isn't better. Focused is better.

More Instructions ≠ Better Output
Accuracy by number of instructions in a skill file:
5 instructions: 92% accuracy
10 instructions: 85%
15 instructions: 74%
19+ instructions: 61%

I try to keep my core skill files under 150 lines. If something is reference material (a template, a list of examples, sector-specific context), I put it in a separate file that the AI reads only when it needs it. The core file stays tight: here's what to do, here's why, here are the common mistakes, here's what the output should look like.
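One way to structure that separation, shown as a hypothetical folder layout (the file names are made up for illustration; use whatever fits your tool and workflow):

```text
grant-reporting/
├── SKILL.md              # Core file: what to do, why, common mistakes,
│                         # what the output should look like (under 150 lines)
├── report-template.md    # Reference: loaded only when drafting a full report
├── hud-terminology.md    # Reference: sector vocabulary and definitions
└── examples/
    └── strong-narrative.md   # One finished, approved narrative to imitate
```

The core file points to the reference files, and the AI pulls them in only when the task calls for them, keeping the always-loaded instructions lean.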

Tell it why, not just what

A skill that only has step-by-step procedures breaks the first time it hits a situation you didn't plan for. If you instead explain the reasoning behind your process, the AI can generalize.

For example, instead of "always use the phrase 'exited to permanent housing,'" you could write: "Use HUD-standard terminology for housing outcomes because funders expect it and inconsistent language creates confusion in reporting." Now the AI understands the principle and can apply it to situations you didn't specifically cover.

Write down what a human would "just know"

This is the one that catches people most often. Experienced staff handle edge cases through common sense. The AI doesn't have that. If your grant writer knows that a missed target should be acknowledged honestly but briefly, with a pivot to what you learned, you need to write that down. If your case manager knows not to include a client's specific shelter location in notes, that needs to be in the skill.

Every place where a human would use judgment, write it out explicitly. These edge cases are often what separate a skill that works from one you have to babysit.
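Written out, those judgment calls might look like a short edge-case section in the skill file. The specific rules here are illustrative sketches, not policy:

```markdown
## Edge cases
- Missed target: acknowledge it honestly in one or two sentences, then pivot
  to what was learned. Never bury it, never dwell on it.
- Client privacy: never include a client's name, shelter location, or any
  detail that could identify them. Use "the client" or an anonymized ID.
- Missing data: say the data is unavailable for the period rather than
  estimating or filling in a plausible number.
```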

Don't use flattery

This one surprised me. Telling the AI "you are the world's best grant writer" actually makes the output worse. Research shows that flattery activates the wrong kind of knowledge in the model, more like motivational content than actual expertise. Brief, realistic framing works better. "Senior grants manager writing for HUD CoC funders" in a few words outperforms three paragraphs of praise.

What works vs. what doesn't

Worse: "You are the world's most talented and experienced grant writer who produces exceptional, award-winning narratives."

Better: "Senior grants manager writing funder reports for a HUD CoC-funded homeless services provider."

Build from real work, not from theory

The best way to build a skill is to do the task manually first, in a normal conversation with your AI. Write the grant report together. Draft the case notes. Go through the whole process. Notice where it gets stuck, where it makes wrong assumptions, where it needs more context.

Then turn that conversation into a skill. Tell the AI to remember every roadblock it hit and bake those lessons into the instructions. You end up with a skill that works because it was built from real failure points, not from imagining what might go wrong.

How to Build a Skill
The four-step process from manual work to reusable automation:
1. Do it manually with AI
2. Note where it breaks
3. Package into a skill file
4. Use, refine, repeat
Skills compound over time

A prompt evaporates when the conversation ends. You have to re-paste it, re-explain the context, and hope you remember all the details. A skill persists. And every time you use it, notice something that could be better, and update it, the skill gets sharper.

Over six months of refining, a skill becomes something that would take a new person weeks to learn. It captures how your best people do their best work, in a format that both AI and humans can read and follow.

Where to start

Pick one thing you do every week that takes longer than it should. Case notes, meeting prep, grant reporting, onboarding emails, whatever it is. Do it once with AI, manually, in a conversation. Get the output right. Then package that conversation into a skill.

You don't need to build 20 skills. One good skill that fires reliably and saves you an hour a week is worth more than a library of skills that kind of work sometimes.

One thing worth saying before you build anything: skills only pay off if your team actually uses them. Getting staff past the initial resistance is its own challenge. I covered the five fears that drive pushback and a 30-day rollout plan in The People Wall: Why AI Rollouts Fail (and How to Fix It).

If you want help figuring out which workflows in your organization are the best candidates for skills, or you want to see how this works with real nonprofit documentation, reach out. This is exactly the kind of thing we help teams set up at Cascade AI Consulting.

Cole Redepenning

Founder of Cascade AI Consulting, which helps social service nonprofits implement practical AI tools that actually get used. With a background in homeless services program management and healthcare operations, Cole understands both the complexity of the work and what it takes to make new tools stick on the ground.

📧 cole@cascadeaiconsulting.com  ·  🌐 cascadeaiconsulting.com  ·  📅 Book a free 30-min call