
Extend your coding agent with .NET Skills

Coding agents are becoming part of everyday development, but the quality and usefulness of their responses still depend on the context they receive as input. That context comes in many forms: your environment, the code in the workspace, the model’s training knowledge, previous memory, agent instructions, and of course your own starting prompt. On the .NET team we’ve adopted coding agents as a part of our regular workflow and have, like you, learned how to improve our productivity by providing great context. Across our repos we’ve adopted agent instructions and have also started to use agent skills to improve our workflows. We’re introducing dotnet/skills, a repository that hosts a set of agent skills for .NET developers, built by the team that ships the platform itself.

What is an agent skill?

If you’re new to the concept, an agent skill is a lightweight package of specialized knowledge that an agent can discover and use while solving a task. A skill bundles intent, task-specific context, and supporting artifacts so the agent can choose better actions with less trial and error. This work follows the Agent Skills specification, which defines a common model for authoring and sharing these capabilities with coding agents. GitHub Copilot CLI, VS Code, Claude Code, and other coding agents support this specification.
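
Concretely, a skill is a folder whose SKILL.md pairs YAML frontmatter (the specification requires name and description fields) with markdown instructions. Here is a purely hypothetical sketch, not one of the skills in the repo:

---
name: build-triage
description: Use when a dotnet build fails, to triage common restore and SDK version errors.
---

# Build triage

1. Run dotnet build and capture the first reported error.
2. For restore errors, check the configured NuGet feeds and lock files.
3. Otherwise, compare global.json against the installed SDKs.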

What we are doing with dotnet/skills

With dotnet/skills, we’re publishing skills from the team that ships the platform. These are the same workflows we’ve used ourselves, with first-party teams, and in engineering scenarios we’ve seen while working with developers like you.

So what does that look like in practice? You’re not starting from generic
prompts. You’re starting from patterns we’ve already tested while shipping
.NET.

Our goal is practical: ship skills that help agents complete common .NET tasks
more reliably, with better context and fewer dead ends.

Does it help?

While we’ve learned that context is essential, we’ve also learned not to assume more is always better. AI models are getting remarkably better with each release, and what seemed necessary even three months ago may no longer be required with newer models. In producing skills, we want to measure whether an added skill actually improves the result. For each skill we merge, we run a lightweight validator (also available in the repo) to score it. We’re still learning the best graders and evals for this kind of work, and so is the rest of the ecosystem.

Think of this as a unit test for a skill, not an integration test for the whole system. We measure (using a specific model each run) against a baseline with no skill present and try to score whether the skill improved the intended behavior, and by how much. Some of this is a matter of taste, so we’re careful not to draw hard lines at a specific number; instead we look at the result, adjust, and re-score.

Each skill’s evaluation lives in the repository as well, so you can inspect and run them. This gives us a practical signal on usefulness without waiting for large end-to-end benchmark cycles. We will continue to learn in this space and adjust, and many of our partner teams are trying different evaluation techniques at this level too. The real test is you telling us whether the skills have improved your work.

A developer recently posted this on Discord, sharing exactly the kind of outcome we want to see:

The skill just worked with the log that I’ve with me, thankfully it was smartter[sic] than me and found the correct debug symbol. At the end it says the crash is caused by a heap corruption and the stack-trace points to GC code, by any chance does it ring a bell for you?

This is a great example of a skill rapidly moving an investigation to the next step. That kind of unblocking and acceleration is the true measure of success.

Discovering, installing, and using skills

Popular agent tools have adopted the concept of plugin marketplaces, which, simply put, are registries of agent artifacts such as skills. The plugin definition serves as an organizational unit and defines which skills, agents, hooks, and other artifacts exist for that plugin in a single installable package. The dotnet/skills repo is organized the same way: the repo serves as the marketplace, and we have organized a set of plugins by functional area. We’ll continue to define more plugins as they’re merged and based on your feedback.

While you can simply copy the SKILL.md files directly into your environment, the plugin concept in coding agents like GitHub Copilot aims to make that process simpler. As noted in the README, you can register the repo as a marketplace and browse and install the plugins:

/plugin marketplace add dotnet/skills

Once the marketplace is added, you can browse it for the available plugins and install the one you want by name:

/plugin marketplace browse dotnet-agent-skills
/plugin install <plugin>@dotnet-agent-skills

Copilot CLI browsing plugin marketplace and installing a plugin via the CLI

The installed skills are then picked up automatically by your coding agent, or you can invoke them explicitly:

/dotnet:analyzing-dotnet-performance

In VS Code (Insiders), you can add the marketplace URL to the Copilot extension settings, using https://github.com/dotnet/skills as the location. You can then browse and install the plugins from the extensions explorer and execute them directly in Copilot Chat using the slash command:

Browsing agent plugins in the Extension marketplace

We acknowledge that even discovering marketplaces can be a challenge, and we are working with our own Copilot partners and the ecosystem to better understand ways to improve this discovery flow; after all, it’s hard to use great skills if you don’t know where to look! We’ll be sure to post more on any changes and on possible .NET-specific tools to help identify skills that will improve your project and your productivity.

Starting principles

Like other evolving standards in the AI extensibility space, skills are moving fast. We are starting with the principle of simplicity first. In our own use, we’ve seen that a large set of new tools may not be needed when the skills themselves are well scoped. Where we need more, we’ll leverage things like MCP, scripts, or existing SDK tools and rely on them to enhance a particular skill workflow. We want our skills to be proven, practical, and task-oriented.

We also know there are great community-provided agent skills, like github/awesome-copilot, which provide a lot of value for specific libraries and architectural patterns for .NET developers. We support these efforts and don’t think there is a single winning skills marketplace for .NET developers. We want our team to stay focused closest to the core runtime, concepts, tools, and frameworks we deliver, while supporting and learning from the community as the broader set of agent skills helps all .NET developers in many more ways. Our skills are meant to complement, not replace, any other marketplace of skills.

What’s next

The AI ecosystem is moving fast, and this repository will too. We’ll iterate
and learn in the open with the developer community.

Expect frequent updates, new skills, and continued collaboration as we improve
how coding agents work across .NET development scenarios.

Explore dotnet/skills, try the skills in your own workflows, and share feedback on what could improve or new ideas we should consider.


Troubleshooting ASP.NET Core Performance Problems

This is a guest post by Mike Rousos

I recently had an opportunity to help a developer with an ASP.NET Core app that was functionally correct but slow under heavy user load. We found a few different factors contributing to the app’s slowdown while investigating, but the majority of the issues were some variation of blocking threads that could have run in a non-blocking way. It was a good reminder for me of just how crucial it is to use non-blocking patterns in multi-threaded scenarios like a web app.

Beware of Locks

One of the first problems we noticed (through CPU analysis with PerfView) was that a lot of time was spent in logging code paths. This was confirmed with ad hoc exploration of call stacks in the debugger which showed many threads blocked waiting to acquire a lock. It turns out some common logging code paths in the application were incorrectly flushing Application Insights telemetry. Flushing App Insights requires a global lock and should generally not be done manually during the course of an app’s execution. In this case, though, Application Insights was being flushed at least once per HTTP request and, under load, this became a large bottleneck!
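
To illustrate the anti-pattern, here is a hypothetical middleware sketch (not the actual app’s code; it assumes the Microsoft.ApplicationInsights TelemetryClient and standard ASP.NET Core middleware types):

public class FlushPerRequestMiddleware
{
    private readonly RequestDelegate _next;
    private readonly TelemetryClient _telemetryClient;

    public FlushPerRequestMiddleware(RequestDelegate next, TelemetryClient telemetryClient)
    {
        _next = next;
        _telemetryClient = telemetryClient;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        await _next(context);

        // Anti-pattern: Flush() takes a global lock, so under load every
        // request serializes here. Let the telemetry channel batch and
        // send on its own schedule instead.
        _telemetryClient.Flush();
    }
}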

You can see this sort of pattern in the images below from a small repro I made. In this sample, I have an ASP.NET Core 2.0 web API that enables common CRUD operations against an Azure SQL database with Entity Framework Core. When I load tested the service running on my laptop (not the best test environment), requests were processed in an average of about 0.27 seconds. After adding a custom ILoggerProvider calling Console.WriteLine inside of a lock, though, the average response time rose to 1.85 seconds – a very noticeable difference for end users. Using PerfView and a debugger, we can see that a lot of time (66% of PerfView’s samples) is spent in the custom logging method and that a lot of worker threads are stuck there (delaying responses) while waiting for their turn with the lock.

Something’s up with this logging call

Threads waiting on lock acquisition
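
For reference, here is a condensed sketch of the kind of logger that produces this pattern (hypothetical code, not the repro’s exact source); every log call funnels through a single lock:

public class LockingConsoleLogger : ILogger
{
    private static readonly object _syncRoot = new object();

    public IDisposable BeginScope<TState>(TState state) => null;

    public bool IsEnabled(LogLevel logLevel) => true;

    public void Log<TState>(LogLevel logLevel, EventId eventId, TState state,
        Exception exception, Func<TState, Exception, string> formatter)
    {
        // Under load, many request threads queue up here waiting for the
        // lock, which is exactly what the debugger shows above.
        lock (_syncRoot)
        {
            Console.WriteLine(formatter(state, exception));
        }
    }
}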

ASP.NET Core’s Console logger used to have some locking like this in versions 1.0 and 1.1, causing it to be slow in high-traffic scenarios, but these issues have been addressed in ASP.NET Core 2.0. It is still a best practice to be mindful of logging in production, though.

For very performance-sensitive scenarios, you can use LoggerMessage to optimize logging even further. LoggerMessage allows defining log messages ahead of time so that message templates don’t need to be parsed every time a particular message is logged. More details are available in our documentation, but the basic pattern is that log messages are defined as strongly-typed delegates:

// This delegate logs a particular predefined message
private static readonly Action<ILogger, int, Exception> _retrievedWidgets =
    LoggerMessage.Define<int>(
        LogLevel.Information,
        new EventId(1, nameof(RetrievedWidgets)),
        "Retrieved {Count} widgets");

// A helper extension method to make it easy to call the
// LoggerMessage-produced delegate from an ILogger
public static void RetrievedWidgets(this ILogger logger, int count) =>
    _retrievedWidgets(logger, count, null);

Then, that delegate is invoked as needed for high-performance logging:

var widgets = await _dbContext.Widgets.AsNoTracking().ToListAsync();
_logger.RetrievedWidgets(widgets.Count);

Keep Asynchronous Calls Asynchronous

Another issue our investigation uncovered in the slow ASP.NET Core app was similar: calling Task.Wait() or accessing Task.Result on asynchronous calls made from the app’s controllers instead of using await. By making controller actions async and awaiting these sorts of calls, the executing thread is freed to serve other requests while waiting for the invoked task to complete.
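
Here is a minimal sketch of the difference, using hypothetical controller actions built around the sample’s Widgets query (shown side by side for contrast):

// Blocking: .Result holds the request thread until the query finishes,
// so the thread can't serve other requests in the meantime.
[HttpGet]
public IActionResult GetWidgetsBlocking()
{
    var widgets = _dbContext.Widgets.AsNoTracking().ToListAsync().Result;
    return Ok(widgets);
}

// Non-blocking: await returns the thread to the pool while the query
// runs; execution resumes when the result is ready.
[HttpGet]
public async Task<IActionResult> GetWidgets()
{
    var widgets = await _dbContext.Widgets.AsNoTracking().ToListAsync();
    return Ok(widgets);
}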

I reproduced this issue in my sample application by replacing async calls in the action methods with synchronous alternatives. At first, this only caused a small slowdown (0.32 second average response instead of 0.27 seconds) because the async methods I was calling in the sample were all pretty quick. To simulate longer async tasks, I updated both the async and synchronous versions of my sample to have a Task.Delay(200) in each controller action (using await in the async version and .Wait() in the synchronous one). In the async case, average response time went from 0.27s to 0.46s, which is more or less what we would expect if each request has an extra pause of 200ms. In the synchronous case, though, the average time went from 0.32 seconds to 1.47 seconds!

The charts below demonstrate where a lot of this slowdown comes from. The green lines in the charts represent requests served per second and the red lines represent user load. In the first chart (taken while running the async version of my sample), you can see that as users increase, more requests are served. In the second chart (corresponding to the Task.Wait() case), on the other hand, there’s a strange pattern of requests per second remaining flat for several minutes after user load increases and only then rising to keep up. This is because the existing pool of threads serving requests couldn’t keep up with more users (since they were all blocked on Task.Wait() calls) and throughput didn’t improve until more threads were created.

Asynchronous RPS compared to user load

Synchronous RPS compared to user load

Attaching a debugger to both scenarios, I found that 75 managed threads were being used in the async test but 232 were in use in the synchronous test. Even though the synchronous test did eventually add enough threads to handle the incoming requests, blocking on Task.Result and Task.Wait() can cause slowdowns when user load changes. Analyzers (like AsyncFixer) can help find places where asynchronous alternatives can be used, and there are EventSource events that can be used to find blocking calls at runtime, if needed.

Wrap-Up

There were some other perf issues in the application I helped investigate (server GC wasn’t enabled in ASP.NET Core 1.1 templates, for example, something that has been corrected in ASP.NET Core 2.0), but one common theme of the problems we found was around blocking threads unnecessarily. Whether it’s from lock contention or waiting on tasks to finish, it’s important to keep threads unblocked for good performance in ASP.NET Core apps.
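
If you want to check this in your own projects, server GC can be enabled explicitly in an SDK-style project file. A minimal sketch (the property is real; the placement shown is illustrative):

<PropertyGroup>
  <!-- Server GC favors throughput on multi-core servers; workstation GC favors latency -->
  <ServerGarbageCollection>true</ServerGarbageCollection>
</PropertyGroup>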

If you’d like to dig into your own apps to look for perf trouble areas, check out the Channel9 PerfView tutorials for an overview of how PerfView can help uncover CPU and memory-related perf issues in .NET applications.