
Modernize .NET Anywhere with GitHub Copilot

Modernizing a .NET application is rarely a single step. It requires understanding the current state of the codebase, evaluating dependencies, identifying potential breaking changes, and sequencing updates carefully.

Until recently, GitHub Copilot modernization for .NET ran primarily inside Visual Studio. That worked well for teams standardized on the IDE, but many teams build elsewhere. Some use VS Code. Some work directly from the terminal. Much of the coordination happens on GitHub, not in a single developer’s local environment.

The modernize-dotnet custom agent changes that. The same modernization workflow can now run across Visual Studio, VS Code, GitHub Copilot CLI, and GitHub. The intelligence behind the experience remains the same. What’s new is where it can run. You can modernize in the environment you already use instead of rerouting your workflow just to perform an upgrade.

The modernize-dotnet agent builds on the broader GitHub Copilot modernization platform, which follows an assess → plan → execute model. Workload-specific agents such as modernize-dotnet, modernize-java, and modernize-azure-dotnet guide applications toward their modernization goals, working together across code upgrades and cloud migration scenarios.

What the agent produces

Every modernization run generates three explicit artifacts in your repository: an assessment that surfaces scope and potential blockers, a proposed upgrade plan that sequences the work, and a set of upgrade tasks that apply the required code transformations.

Because these artifacts live alongside your code, teams can review, version, discuss, and modify them before execution begins. Instead of a one-shot upgrade attempt, modernization becomes traceable and deliberate.

GitHub Copilot CLI

For terminal-first engineers, GitHub Copilot CLI provides a natural entry point.

You can assess a repository, generate an upgrade plan, and run the upgrade without leaving the shell.

  1. Add the marketplace: /plugin marketplace add dotnet/modernize-dotnet
  2. Install the plugin: /plugin install modernize-dotnet@modernize-dotnet-plugins
  3. Select the agent: run /agent and choose modernize-dotnet
  4. Prompt the agent, for example: upgrade my solution to a new version of .NET

Modernize .NET in GitHub Copilot CLI

The agent generates the assessment, upgrade plan, and upgrade tasks directly in the repository. You can review scope, validate sequencing, and approve transformations before execution. Once approved, the agent automatically executes the upgrade tasks directly from the CLI.

GitHub

On GitHub, the agent can be invoked directly within a repository. The generated artifacts live alongside your code, shifting modernization from a local exercise to a collaborative proposal. Instead of summarizing findings in meetings, teams review the plan and tasks where they already review code. Learn how to add custom coding agents to your repo, then add the modernize-dotnet agent by following the README in the modernize-dotnet repository.

VS Code

If you use VS Code, install the GitHub Copilot modernization extension and select modernize-dotnet from the Agent picker in Copilot Chat. Then prompt the agent with the upgrade you want to perform, for example: upgrade my project to .NET 10.

Visual Studio

If Visual Studio is your primary IDE, the structured modernization workflow remains fully integrated.

Right-click your solution or project in Solution Explorer and select the Modernize action to perform an upgrade.

Supported workloads

GitHub Copilot modernization supports upgrades across common .NET project types, including ASP.NET Core (MVC, Razor Pages, Web API), Blazor, Azure Functions, WPF, class libraries, and console applications.

Migration from .NET Framework to modern .NET is also supported for application types such as ASP.NET (MVC, Web API), Windows Forms, WPF, and Azure Functions, with Web Forms support coming soon.

The CLI and VS Code experiences are cross-platform. However, migrations from .NET Framework require Windows.

Custom skills

Skills are a standard part of GitHub Copilot’s agentic platform. They let teams define reusable, opinionated behaviors that agents apply consistently across workflows.

The modernize-dotnet agent supports custom skills, allowing organizations to encode internal frameworks, migration patterns, or architectural standards directly into the modernization workflow. Any skills added to the repository are automatically applied when the agent performs an upgrade.

You can learn more about how skills work and how to create them in the Copilot skills documentation.

Give it a try

Run the modernize-dotnet agent on a repository you’re planning to upgrade and explore the modernization workflow in the environment you already use.

If you try it, we’d love to hear how it goes. Share feedback or report issues in the modernize-dotnet repository.


Extend your coding agent with .NET Skills

Coding agents are becoming part of everyday development, but the quality and
usefulness of their responses still depend on the context they receive. That
context comes in many forms: your environment, the code in the workspace, the
model's training knowledge, previous memory, agent instructions, and of course
your own starting prompt. On the .NET team we've adopted coding agents as part
of our regular workflow and, like you, have learned how to improve our
productivity by providing great context. Across our repos we've added agent
instructions and have also started using agent skills to improve our workflows.
Now we're introducing dotnet/skills, a repository that hosts a set of agent
skills for .NET developers, built by the team that builds the platform itself.

What is an agent skill?

If you’re new to the concept, an agent skill is a lightweight package with specialized knowledge an agent can discover and use while solving a task. A skill bundles intent,
task-specific context, and supporting artifacts so the agent can choose better
actions with less trial and error. This work follows the
Agent Skills specification, which defines a common
model for authoring and sharing these capabilities with coding agents. GitHub Copilot CLI, VS Code, Claude Code and other coding agents support this specification.

What we are doing with dotnet/skills

With dotnet/skills, we're publishing skills from the team that ships the platform.
These are the same workflows we've used ourselves, with first-party teams, and
in engineering scenarios we've encountered while working with developers like you.

So what does that look like in practice? You’re not starting from generic
prompts. You’re starting from patterns we’ve already tested while shipping
.NET.

Our goal is practical: ship skills that help agents complete common .NET tasks
more reliably, with better context and fewer dead ends.

Does it help?

While we've learned that context is essential, we've also learned not to assume
more is always better. AI models are getting remarkably better with each
release, and context that seemed necessary even three months ago may no longer
be required with newer models. In producing skills, we want to measure whether
an added skill actually improves the result. For each skill we merge, we run
a lightweight validator (also available in the repo) to score it. We're still learning which graders and evals work best for this kind of testing, and so is the broader ecosystem.

Think of this as a unit test for a skill, not an integration test for the
whole system. We measure (using a specific model each run) against a baseline (no skill present) and score whether the skill improved the intended behavior, and by how much. Some of this comes down to taste, so we're careful not to draw hard lines at a specific number; instead we look at the result, adjust, and re-score.

Each skill's evaluation lives in the repository as well, so
you can inspect and run it. This gives us a practical signal on usefulness
without waiting for large end-to-end benchmark cycles. We'll continue to learn and adjust in this space, and a number of partner teams are trying different evaluation techniques at this level too. The real test is you telling us whether the skills have helped.

A developer recently posted this on Discord, sharing exactly what we want to see:

The skill just worked with the log that I’ve with me, thankfully it was smartter[sic] than me and found the correct debug symbol. At the end it says the crash is caused by a heap corruption and the stack-trace points to GC code, by any chance does it ring a bell for you?

This is a great example of a skill rapidly moving a developer to the next step of an investigation. Unblocking developers and accelerating their productivity is the true measure of success.

Discovery, installation, and using skills

Popular agent tools have adopted the concept of
plugin marketplaces,
which, simply put, are registries of agent artifacts such as skills. The
plugin definition
serves as an organizational unit: it defines which skills, agents, hooks, and
other artifacts exist for that plugin in a single installable package. The
dotnet/skills repo is organized the same way, with the repo serving as the
marketplace, and we have organized a set of plugins by functional area. We'll
continue to define more plugins as contributions are merged and based on your feedback.

While you can simply copy the SKILL.md files directly into your environment, the
plugin concept in coding agents like GitHub Copilot aims to make that process simpler.
As noted in the
README,
you can register the repo as a marketplace and browse and install the plugins.

/plugin marketplace add dotnet/skills

Once the marketplace is added, you can browse its plugins and install the one you want:

/plugin marketplace browse dotnet-agent-skills
/plugin install <plugin>@dotnet-agent-skills

Copilot CLI browsing plugin marketplace and installing a plugin via the CLI

They are now available automatically to your coding agent, or you can invoke them explicitly:

/dotnet:analyzing-dotnet-performance

And in VS Code, you can add the marketplace URL in the Copilot extension settings (Insiders), using https://github.com/dotnet/skills as the location. You can then browse the extensions explorer to install a plugin, and execute it directly in Copilot Chat using the slash command:

Browsing agent plugins in the Extension marketplace

We acknowledge that even discovering marketplaces can be a challenge, and we are
working with our own Copilot partners and the ecosystem to improve this
discovery flow: it's hard to use great skills if you don't know where to look!
We'll be sure to post more on any changes, and on possible .NET-specific tools
that help identify skills to improve your project and developer productivity.

Starting principles

Like other evolving standards in the AI extensibility space, skills are fast
moving. We are starting with a principle of simplicity first. In our own use
we've seen that a huge set of new tools may not be needed when the skills
themselves are well scoped. Where we need more, we'll leverage MCP, scripts, or
SDK tools that already exist, and rely on them to enhance a particular skill
workflow. We want our skills to be proven, practical, and task-oriented.

We also know there are great community-provided agent skills, like
github/awesome-copilot, which
provide a lot of value for specific libraries and architectural patterns for .NET
developers. We support these efforts and don't think there is a 'one winner'
skills marketplace for .NET developers. We want our team to stay focused on
the core runtime, concepts, tools, and frameworks we deliver, and to support
and learn from the community as the broader set of agentic skills helps all
.NET developers in many more ways. Our skills are meant to complement, not
replace, any other marketplace of skills.

What’s next

The AI ecosystem is moving fast, and this repository will too. We’ll iterate
and learn in the open with the developer community.

Expect frequent updates, new skills, and continued collaboration as we improve
how coding agents work across .NET development scenarios.

Explore dotnet/skills, try the skills in your own workflows, and share
feedback
on things that can improve or new ideas we should consider.


Release v1.0 of the official MCP C# SDK

The Model Context Protocol (MCP) C# SDK has reached its v1.0 milestone, bringing full support for the
2025-11-25 version of the MCP Specification.
This release delivers a rich set of new capabilities — from improved authorization flows and richer metadata,
to powerful new patterns for tool calling, elicitation, and long-running request handling.

Here’s a tour of what’s new.

Enhanced authorization server discovery

In the previous spec, servers were required to provide a link to their Protected Resource Metadata (PRM) Document
in the resource_metadata parameter of the WWW-Authenticate header.
The 2025-11-25 spec broadens this, giving servers three ways to expose the PRM:

  1. Via a URL in the resource_metadata parameter of the WWW-Authenticate header (as before)
  2. At a “well-known” URL derived from the server’s MCP endpoint path
    (e.g. https://example.com/.well-known/oauth-protected-resource/public/mcp)
  3. At the root well-known URL (e.g. https://example.com/.well-known/oauth-protected-resource)

Clients check these locations in order.
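The discovery order can be sketched as an ordered list of candidate URLs. The helper below is illustrative only (the names PrmDiscovery and CandidateUrls are not SDK APIs; the C# client SDK performs this discovery for you):

```csharp
using System;
using System.Collections.Generic;

static class PrmDiscovery
{
    // Builds the ordered list of candidate PRM locations described above.
    // resourceMetadataUrl is the value of the WWW-Authenticate
    // resource_metadata parameter, if the server provided one.
    public static List<string> CandidateUrls(Uri mcpEndpoint, string? resourceMetadataUrl)
    {
        var candidates = new List<string>();

        // 1. Explicit URL from the WWW-Authenticate header (as before)
        if (!string.IsNullOrEmpty(resourceMetadataUrl))
            candidates.Add(resourceMetadataUrl);

        var origin = $"{mcpEndpoint.Scheme}://{mcpEndpoint.Authority}";

        // 2. Well-known URL derived from the MCP endpoint path
        candidates.Add($"{origin}/.well-known/oauth-protected-resource{mcpEndpoint.AbsolutePath}");

        // 3. Root well-known URL
        candidates.Add($"{origin}/.well-known/oauth-protected-resource");

        return candidates;
    }
}
```

A client would probe each candidate in order and use the first document that resolves; the SDK's client transport runs this sequence automatically after receiving a 401.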

On the server side, the SDK’s AddMcp extension method on AuthenticationBuilder
makes it easy to configure the PRM Document:

.AddMcp(options =>
{
    options.ResourceMetadata = new()
    {
        ResourceDocumentation = new Uri("https://docs.example.com/api/weather"),
        AuthorizationServers = { new Uri(inMemoryOAuthServerUrl) },
        ScopesSupported = ["mcp:tools"],
    };
});

When configured this way, the SDK automatically hosts the PRM Document at the well-known location
and includes the link in the WWW-Authenticate header. On the client side, the SDK handles the
full discovery sequence automatically.

Icons for tools, resources, and prompts

The 2025-11-25 spec adds icon metadata to Tools, Resources, and Prompts. This information is included
in the response to tools/list, resources/list, and prompts/list requests.
Implementation metadata (describing a client or server) has also been extended with icons and a website URL.

The simplest way to add an icon for a tool is with the IconSource parameter on the McpServerToolAttribute:

[McpServerTool(Title = "This is a title", IconSource = "https://example.com/tool-icon.svg")]
public static string ToolWithIcon(

The McpServerResourceAttribute, McpServerResourceTemplateAttribute, and McpServerPromptAttribute
have also added an IconSource parameter.

For more advanced scenarios — multiple icons, MIME types, size hints, and theme preferences — you can
configure icons programmatically via McpServerToolCreateOptions.Icons:

.WithTools([
    McpServerTool.Create(
        typeof(EchoTool).GetMethod(nameof(EchoTool.Echo))!,
        options: new McpServerToolCreateOptions
        {
            Icons =
            [
                new Icon
                {
                    Source = "https://raw.githubusercontent.com/microsoft/fluentui-emoji/main/assets/Loudspeaker/Flat/loudspeaker_flat.svg",
                    MimeType = "image/svg+xml",
                    Sizes = ["any"],
                    Theme = "light"
                },
                new Icon
                {
                    Source = "https://raw.githubusercontent.com/microsoft/fluentui-emoji/main/assets/Loudspeaker/3D/loudspeaker_3d.png",
                    MimeType = "image/png",
                    Sizes = ["256x256"],
                    Theme = "dark"
                }
            ]
        })
])

Here’s how these icons could be displayed, as illustrated in the MCP Inspector:

Icons displayed in MCP Inspector showing tool icons with different themes and styles


The Implementation class also has
Icons and
WebsiteUrl properties for server and client metadata:

.AddMcpServer(options =>
{
    options.ServerInfo = new Implementation
    {
        Name = "Everything Server",
        Version = "1.0.0",
        Title = "MCP Everything Server",
        Description = "A comprehensive MCP server demonstrating all MCP features",
        WebsiteUrl = "https://github.com/modelcontextprotocol/csharp-sdk",
        Icons =
        [
            new Icon
            {
                Source = "https://raw.githubusercontent.com/microsoft/fluentui-emoji/main/assets/Gear/Flat/gear_flat.svg",
                MimeType = "image/svg+xml",
                Sizes = ["any"],
                Theme = "light"
            }
        ]
    };
})

Incremental scope consent

The incremental scope consent feature brings the Principle of Least Privilege
to MCP authorization, allowing clients to request only the minimum access needed for each operation.

MCP uses OAuth 2.0 for authorization, where scopes define the level of access a client has.
Previously, clients might request all possible scopes up front because they couldn’t know which scopes
a specific operation would require. With incremental scope consent, clients start with minimal scopes
and request additional ones as needed.

The mechanism works through two flows:

  • Initial scopes: When a client makes an unauthenticated request, the server responds with
    401 Unauthorized and a WWW-Authenticate header that now includes a scopes parameter listing
    the scopes needed for the operation. Clients request authorization for only these scopes.

  • Additional scopes: When a client’s token lacks scopes for a particular operation, the server
    responds with 403 Forbidden and a WWW-Authenticate header containing an error parameter
    of insufficient_scope and a scopes parameter with the required scopes. The client then
    obtains a new token with the expanded scopes and retries.
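To make the challenge format concrete, here is a hypothetical parser for the scopes parameter of a WWW-Authenticate header (ScopeChallenge is an illustrative name, not an SDK type; the C# client SDK does this parsing for you):

```csharp
using System;
using System.Text.RegularExpressions;

static class ScopeChallenge
{
    // Extracts the scopes parameter from a WWW-Authenticate challenge, e.g.
    //   Bearer error="insufficient_scope", scopes="mcp:tools mcp:resources"
    // Scopes are space-separated inside the quoted parameter value.
    public static string[] ParseScopes(string wwwAuthenticate)
    {
        var match = Regex.Match(wwwAuthenticate, "scopes=\"([^\"]*)\"");
        return match.Success
            ? match.Groups[1].Value.Split(' ', StringSplitOptions.RemoveEmptyEntries)
            : Array.Empty<string>();
    }
}
```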

Client support for incremental scope consent

The MCP C# client SDK handles incremental scope consent automatically. When it receives a 401 or 403 with a scopes
parameter in the WWW-Authenticate header, it extracts the required scopes and initiates the
authorization flow — no additional client code needed.

Server support for incremental scope consent

Setting up incremental scope consent on the server involves:

  1. Adding authentication services configured with the MCP authentication scheme:

    builder.Services.AddAuthentication(options =>
    {
        options.DefaultAuthenticateScheme = McpAuthenticationDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = McpAuthenticationDefaults.AuthenticationScheme;
    })
  2. Enabling JWT bearer authentication with appropriate token validation:

    .AddJwtBearer(options =>
    {
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidateAudience = true,
            ValidateLifetime = true,
            ValidateIssuerSigningKey = true,
            // Other validation settings as appropriate
        };
    })

    The following token validation settings are strongly recommended:

    Setting                    Value   Description
    ValidateIssuer             true    Ensures the token was issued by a trusted authority
    ValidateAudience           true    Verifies the token is intended for this server
    ValidateLifetime           true    Checks that the token has not expired
    ValidateIssuerSigningKey   true    Confirms the token signature is valid
  3. Specifying authentication scheme metadata to guide clients on obtaining access tokens:

    .AddMcp(options =>
    {
        options.ResourceMetadata = new()
        {
            ResourceDocumentation = new Uri("https://docs.example.com/api/weather"),
            AuthorizationServers = { new Uri(inMemoryOAuthServerUrl) },
            ScopesSupported = ["mcp:tools"],
        };
    });
  4. Performing authorization checks in middleware.
    Authorization checks should be implemented in ASP.NET Core middleware instead of inside the tool method itself. This is because the MCP HTTP handler may (and in practice does) flush response headers before invoking the tool. By the time the tool call method is invoked, it is too late to set the response status code or headers.

    Unfortunately, the middleware may need to inspect the contents of the request to determine which scopes are required, which involves an extra deserialization for incoming requests. But help may be on the way in future versions of the MCP protocol that will avoid this overhead in most cases. Stay tuned…

    In addition to inspecting the request, the middleware must also extract the scopes from the access token sent in the request. In the MCP C# SDK, the authentication handler extracts the scopes from the JWT and converts them to claims in the HttpContext.User property. The way these claims are represented depends on the token issuer and the JWT structure. For a token issuer that represents scopes as a space-separated string in the scope claim, you can determine the scopes passed in the request as follows:

    var user = context.User;
    var userScopes = user?.Claims
        .Where(c => c.Type == "scope" || c.Type == "scp")
        .SelectMany(c => c.Value.Split(' '))
        .Distinct()
        .ToList();

    With the scopes extracted from the request, the server can then check if the required scope(s) for the requested operation is included with userScopes.Contains(requiredScope).

    If the required scopes are missing, respond with 403 Forbidden and a WWW-Authenticate header, including an error parameter indicating insufficient_scope and a scopes parameter indicating the scopes required.
    The MCP Specification describes several strategies for choosing which scopes to include:

    • Minimum approach: Only the newly-required scopes (plus any existing granted scopes that are still relevant)
    • Recommended approach: Existing relevant scopes plus newly required scopes
    • Extended approach: Existing scopes, newly required scopes, and related scopes that commonly work together
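As a sketch of what the middleware might emit, the following hypothetical helper builds the 403 challenge value following the recommended approach (existing relevant scopes plus newly required scopes); InsufficientScope and BuildChallenge are illustrative names, not SDK APIs:

```csharp
using System.Collections.Generic;
using System.Linq;

static class InsufficientScope
{
    // Builds the WWW-Authenticate value for a 403 Forbidden response:
    // error=insufficient_scope plus the scopes the client should request,
    // combining still-relevant granted scopes with the newly required ones.
    public static string BuildChallenge(IEnumerable<string> grantedScopes, IEnumerable<string> requiredScopes)
    {
        var scopes = grantedScopes.Concat(requiredScopes).Distinct();
        return $"Bearer error=\"insufficient_scope\", scopes=\"{string.Join(' ', scopes)}\"";
    }
}
```

In middleware, the server would set the response status to 403 and add this value as the WWW-Authenticate header before completing the response.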

URL mode elicitation

URL mode elicitation enables secure out-of-band interactions between the server and end-user,
bypassing the MCP host/client entirely. This is particularly valuable for gathering sensitive data — like API keys,
third-party authorizations, and payment information — that would pose a security risk
if transmitted through the client.

Inspired by web security standards like OAuth, this mechanism lets the MCP client obtain user consent
and direct the user’s browser to a secure server-hosted URL where the sensitive interaction takes place.

The MCP host/client must present the elicitation request to the user — including the server’s identity
and the purpose of the request — and provide options to decline or cancel.
What the server does at the elicitation URL is outside the scope of MCP; it could present a form,
redirect to a third-party authorization service, or anything else.

Client support for URL mode elicitation

Clients indicate support by setting the Url property in Capabilities.Elicitation:

McpClientOptions options = new()
{
    Capabilities = new ClientCapabilities
    {
        Elicitation = new ElicitationCapability
        {
            Url = new UrlElicitationCapability()
        }
    }
    // other client options
};

The client must also provide an ElicitationHandler.
Since there’s a single handler for both form mode and URL mode elicitation, the handler should begin by checking the
Mode property of the ElicitationRequest parameters
to determine which mode is being requested and handle it accordingly.

async ValueTask<ElicitResult> HandleElicitationAsync(ElicitRequestParams? requestParams, CancellationToken token)
{
    if (requestParams is null || requestParams.Mode != "url" || requestParams.Url is null)
    {
        return new ElicitResult();
    }

    // Success path for URL-mode elicitation omitted for brevity.
}

Server support for URL mode elicitation

The server must define an endpoint for the elicitation URL and handle the response.
Typically the response is submitted via POST to keep sensitive data out of URLs and logs.
If the URL serves a form, it should include anti-forgery tokens to prevent CSRF attacks —
ASP.NET Core provides built-in support for this.

One approach is to create a Razor Page:

public class ElicitationFormModel : PageModel
{
    public string ElicitationId { get; set; } = string.Empty;

    public IActionResult OnGet(string id)
    {
        // Serves the elicitation URL when the user navigates to it
    }

    public async Task<IActionResult> OnPostAsync(string id, string name, string ssn, string secret)
    {
        // Handles the elicitation response when the user submits the form
    }
}

Note the id parameter on both methods — since an MCP server using Streamable HTTP Transport
is inherently multi-tenant, the server must associate each elicitation request and response
with the correct MCP session. The server must maintain state to track pending elicitation requests
and communicate responses back to the originating MCP request.

Tool calling support in sampling

This is one of the most powerful additions in the 2025-11-25 spec. Servers can now include tools
in their sampling requests, which the LLM may invoke to produce a response.

While providing tools to LLMs is a central feature of MCP, tools in sampling requests are fundamentally different
from standard MCP tools — despite sharing the same metadata structure. They don’t need to be implemented
as standard MCP tools, so the server must implement its own logic to handle tool invocations.

The flow is important to understand: when the LLM requests a tool invocation during sampling,
that’s the response to the sampling request. The server executes the tool, then issues a new
sampling request that includes both the tool call request and the tool call response. This continues
until the LLM produces a final response with no tool invocation requests.

sequenceDiagram
    participant Server
    participant Client
    Server->>Client: CreateMessage Request
    Note right of Client: messages: [original prompt]<br/>tools: [tool definitions]
    Client-->>Server: CreateMessage Response
    Note left of Server: stopReason: tool_calls<br/>toolCalls: [tool call 1, tool call 2]
    Note over Server: Server executes tools locally
    Server->>Client: CreateMessage Request
    Note right of Client: messages: [<br/>  original prompt,<br/>  tool call 1 request,<br/>  tool call 1 response,<br/>  tool call 2 request,<br/>  tool call 2 response<br/>]<br/>tools: [tool definitions]
    Client-->>Server: CreateMessage Response
    Note left of Server: stopReason: end_turn<br/>content: [final response]
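The loop in the diagram can be sketched with plain delegates standing in for the SDK's sampling types (all names here are illustrative, not SDK APIs):

```csharp
using System;
using System.Collections.Generic;

record Message(string Role, string Content);
record SamplingResult(string StopReason, List<string> ToolCalls, string? FinalText);

static class SamplingLoop
{
    // sample: stands in for sending a CreateMessage request to the client.
    // executeTool: stands in for the server's own tool-invocation logic.
    public static string Run(
        Func<List<Message>, SamplingResult> sample,
        Func<string, string> executeTool,
        string prompt)
    {
        var messages = new List<Message> { new("user", prompt) };
        while (true)
        {
            var result = sample(messages);
            if (result.StopReason != "tool_calls")
                return result.FinalText!; // end_turn: the final response

            // Append each tool call request and its response, then re-sample.
            foreach (var call in result.ToolCalls)
            {
                messages.Add(new("assistant", $"tool_call:{call}"));
                messages.Add(new("tool", executeTool(call)));
            }
        }
    }
}
```

In practice you rarely write this loop yourself; as shown below, Microsoft.Extensions.AI can drive it for you.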

Client/host support for tool calling in sampling

Clients declare support for tool calling in sampling through their capabilities and must provide
a SamplingHandler:

var mcpClient = await McpClient.CreateAsync(
    new HttpClientTransport(new()
    {
        Endpoint = new Uri("http://localhost:6184"),
        Name = "SamplingWithTools MCP Server",
    }),
    clientOptions: new()
    {
        Capabilities = new ClientCapabilities
        {
            Sampling = new SamplingCapability { Tools = new SamplingToolsCapability() }
        },
        Handlers = new()
        {
            SamplingHandler = async (c, p, t) => await samplingHandler(c, p, t),
        }
    });

Implementing the SamplingHandler from scratch would be complex, but the Microsoft.Extensions.AI
package makes it straightforward. You can obtain an IChatClient from your LLM provider and use
CreateSamplingHandler to get a handler that translates between MCP and your LLM’s tool invocation format:

IChatClient chatClient = new OpenAIClient(new ApiKeyCredential(token), new OpenAIClientOptions { Endpoint = new Uri(baseUrl) })
    .GetChatClient(modelId)
    .AsIChatClient();

var samplingHandler = chatClient.CreateSamplingHandler();

The sampling handler from IChatClient handles format translation but does not implement user consent
for tool invocations. You can wrap it in a custom handler to add consent logic.
It's also important to cache user approvals so the user isn't prompted repeatedly for the same tool invocation during a single sampling session.

Server support for tool calling in sampling

Servers can take advantage of tool calling in sampling when the connected client/host also supports the feature, and can check for that support:

if (_mcpServer?.ClientCapabilities?.Sampling?.Tools is not { })
{
    return "Error: Client does not support sampling with tools.";
}

Tools for sampling can be described as simple Tool objects:

Tool rollDieTool = new Tool()
{
    Name = "roll_die",
    Description = "Rolls a single six-sided die and returns the result (1-6)."
};

But the real power comes from using Microsoft.Extensions.AI on the server side too. The McpServer.AsSamplingChatClient()
method returns an IChatClient that supports sampling, and UseFunctionInvocation adds tool calling support:

IChatClient chatClient = _mcpServer.AsSamplingChatClient()
    .AsBuilder()
    .UseFunctionInvocation()
    .Build();

Define tools as AIFunction objects and pass them in ChatOptions:

AIFunction rollDieTool = AIFunctionFactory.Create(
    () => Random.Shared.Next(1, 7),
    name: "roll_die",
    description: "Rolls a single six-sided die and returns the result (1-6)."
);

var chatOptions = new ChatOptions
{
    Tools = [rollDieTool],
    ToolMode = ChatToolMode.Auto
};

var pointRollResponse = await chatClient.GetResponseAsync(
    "<Prompt that may use the roll_die tool>",
    chatOptions,
    cancellationToken
);

The IChatClient handles all the complexity: sending sampling requests with tools, processing
tool invocation requests, executing tools, and translating between MCP and LLM formats.

OAuth Client ID Metadata Documents

The 2025-11-25 spec introduces Client ID Metadata Documents (CIMDs) as an alternative
to Dynamic Client Registration (DCR) for establishing client identity with an authorization server.
CIMD is now the preferred method for client registration in MCP.

The idea is simple: the client specifies a URL as its client_id in authorization requests.
That URL resolves to a JSON document hosted by the client containing its metadata — identifiers,
redirect URIs, and other descriptive information. When an authorization server encounters this client_id,
it dereferences the URL and uses the metadata to understand and apply policy to the client.

In the C# SDK, clients specify a CIMD URL via ClientOAuthOptions:

const string ClientMetadataDocumentUrl = $"{ClientUrl}/client-metadata/cimd-client.json";

await using var transport = new HttpClientTransport(new()
{
    Endpoint = new(McpServerUrl),
    OAuth = new ClientOAuthOptions()
    {
        RedirectUri = new Uri("http://localhost:1179/callback"),
        AuthorizationRedirectDelegate = HandleAuthorizationUrlAsync,
        ClientMetadataDocumentUri = new Uri(ClientMetadataDocumentUrl)
    },
}, HttpClient, LoggerFactory);

The CIMD URL must use HTTPS, have a non-empty path, and cannot contain dot segments or a fragment component.
The document itself must include at least client_id, client_name, and redirect_uris.
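For illustration, a minimal CIMD document hosted at that URL might look like the following (the values are placeholders; under CIMD, client_id is the HTTPS URL of the document itself):

{
  "client_id": "https://client.example.com/client-metadata/cimd-client.json",
  "client_name": "Example MCP Client",
  "redirect_uris": ["http://localhost:1179/callback"]
}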

The SDK will attempt CIMD first, and fall back to DCR if the authorization server doesn’t support it
(provided DCR is enabled in the OAuth options).

Long-running requests over HTTP with polling

At the data layer, MCP is a message-based protocol with no inherent time limits.
But over HTTP, timeouts are a fact of life. The 2025-11-25 spec significantly improves the story
for long-running requests.

Previously, clients could disconnect and reconnect if the server provided an Event ID in SSE events,
but few servers implemented this — partly because it implied supporting stream resumption from any
event ID all the way back to the start. And servers couldn’t proactively disconnect; they had to
wait for clients to do so.

The new approach is cleaner. Servers that open an SSE stream for a request begin with an empty event
that includes an Event ID and optionally a Retry-After field. After sending this initial event,
servers can close the stream at any time, since the client can reconnect using the Event ID.

Server support for long-running requests

To enable this, the server provides an ISseEventStreamStore implementation. The SDK includes
DistributedCacheEventStreamStore, which works with any IDistributedCache:

// Add a MemoryDistributedCache to the service collection
builder.Services.AddDistributedMemoryCache();

// Add the MCP server with DistributedCacheEventStreamStore for SSE stream storage
builder.Services
    .AddMcpServer()
    .WithHttpTransport()
    .WithDistributedCacheEventStreamStore()
    .WithTools<RandomNumberTools>();

When a request handler wants to drop the SSE connection and let the client poll for the result,
it calls EnablePollingAsync on the McpRequestContext:

await context.EnablePollingAsync(retryInterval: TimeSpan.FromSeconds(retryIntervalInSeconds));

The McpRequestContext is available in handlers for MCP requests by simply adding it as a parameter to the handler method.

Implementation considerations

Event stream stores can be susceptible to unbounded memory growth, so apply retention strategies such as expiring stored events after a time window and capping the number of events retained per stream.
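As one illustration of such a retention policy, a store can evict any stream that has not been written to within a TTL window. This is a simplified, self-contained sketch, not the SDK's DistributedCacheEventStreamStore implementation:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

// Toy event store that evicts whole streams once they go idle past a TTL.
// Illustrative only: a real store would also bound per-stream event counts.
sealed class ExpiringEventStore
{
    private readonly TimeSpan _ttl;
    private readonly ConcurrentDictionary<string, (DateTimeOffset LastWrite, List<string> Events)> _streams = new();

    public ExpiringEventStore(TimeSpan ttl) => _ttl = ttl;

    public void Append(string streamId, string sseEvent)
    {
        _streams.AddOrUpdate(streamId,
            _ => (DateTimeOffset.UtcNow, new List<string> { sseEvent }),
            (_, s) => { s.Events.Add(sseEvent); return (DateTimeOffset.UtcNow, s.Events); });
    }

    // Remove streams that have been idle longer than the TTL; returns count removed.
    public int Prune()
    {
        var removed = 0;
        foreach (var (id, s) in _streams)
            if (DateTimeOffset.UtcNow - s.LastWrite > _ttl && _streams.TryRemove(id, out _))
                removed++;
        return removed;
    }
}
```

A background timer calling Prune() periodically mirrors what a cleanupInterval-style option does in a real implementation.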

Tasks (experimental)

Note: Tasks are an experimental feature in the 2025-11-25 MCP Specification. The API may change in future releases.

The 2025-11-25 version of the MCP Specification introduces tasks, a new primitive that provides durable state tracking
and deferred result retrieval for MCP requests. While stream resumability
handles transport-level concerns like reconnection and event replay, tasks operate at the data layer to ensure
that request results are durably stored and can be retrieved at any point within a server-defined retention window —
even if the original connection is long gone.

The key concept is that tasks augment existing requests rather than replacing them.
A client includes a task field in a request (e.g. tools/call) to signal that it wants durable result tracking.
Instead of the normal response, the server returns a CreateTaskResult containing task metadata — a unique task ID, the current status (working),
timestamps, a time-to-live (TTL), and optionally a suggested poll interval.
The client then uses tasks/get to poll for status, tasks/result to retrieve the stored result,
tasks/list to enumerate tasks, and tasks/cancel to cancel a running task.
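That flow reduces to a simple poll loop: query status until it is terminal, then fetch the stored result. An SDK-free sketch of the loop, where getStatus stands in for a real tasks/get call:

```csharp
using System;
using System.Threading.Tasks;

static class TaskPollingSketch
{
    private static readonly string[] TerminalStatuses = { "completed", "failed", "cancelled" };

    // Polls until the task reaches a terminal status, waiting the
    // server-suggested interval between attempts.
    public static async Task<string> PollUntilTerminalAsync(
        Func<Task<string>> getStatus, TimeSpan pollInterval)
    {
        while (true)
        {
            var status = await getStatus();
            if (Array.IndexOf(TerminalStatuses, status) >= 0)
                return status;
            await Task.Delay(pollInterval);
        }
    }
}
```

The SDK wraps this pattern for you (see PollTaskUntilCompleteAsync later in this post); the sketch only shows what that convenience method is doing conceptually.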

This durability is valuable in several scenarios:

  • Resilience to dropped results: If a result is lost due to a network failure, the client can retrieve it again by task ID
    rather than re-executing the operation.
  • Explicit status tracking: Clients can query the server to determine whether a request is still in progress, succeeded, or failed,
    rather than relying on notifications or waiting indefinitely.
  • Integration with workflow systems: MCP servers wrapping existing workflow APIs (e.g. CI/CD pipelines, batch processing, multi-step analysis)
    can map their existing job tracking directly to the task primitive.

Tasks follow a defined lifecycle through these status values:

Status Description
working Task is actively being processed
input_required Task is waiting for additional input (e.g., elicitation)
completed Task finished successfully; results are available
failed Task encountered an error
cancelled Task was cancelled by the client

The last three states (completed, failed, and cancelled) are terminal — once a task reaches one of these states, it cannot transition to any other state.
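The terminal-state rule can be captured as a one-line guard: a status change is legal only while the current status is non-terminal. Illustrative only, not SDK code:

```csharp
using System;

static class LifecycleRules
{
    // Terminal statuses per the 2025-11-25 spec: once reached, no further transitions.
    private static readonly string[] Terminal = { "completed", "failed", "cancelled" };

    // A task may change status only while it is still in a non-terminal state.
    public static bool CanTransition(string from) => Array.IndexOf(Terminal, from) < 0;
}
```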

Task support is negotiated through explicit capability declarations during initialization.
Servers declare that they support task-augmented tools/call requests, while clients can declare support for
task-augmented sampling/createMessage and elicitation/create requests.

Server support for tasks

To enable task support on an MCP server, configure a task store when setting up the server.
The task store is responsible for managing task state — creating tasks, storing results, and handling cleanup.

var taskStore = new InMemoryMcpTaskStore();

builder.Services.AddMcpServer(options =>
{
    options.TaskStore = taskStore;
})
.WithHttpTransport()
.WithTools<MyTools>();

// Alternatively, you can register an IMcpTaskStore globally with DI,
// but you only need to configure it one way.
//builder.Services.AddSingleton<IMcpTaskStore>(taskStore);

The InMemoryMcpTaskStore is a reference implementation suitable for development and single-server deployments.
For production multi-server scenarios, implement IMcpTaskStore
with a persistent backing store (database, Redis, etc.).

The InMemoryMcpTaskStore constructor accepts several optional parameters to control task retention, polling behavior,
and resource limits:

var taskStore = new InMemoryMcpTaskStore(
    defaultTtl: TimeSpan.FromHours(1),        // Default task retention time
    maxTtl: TimeSpan.FromHours(24),           // Maximum allowed TTL
    pollInterval: TimeSpan.FromSeconds(1),    // Suggested client poll interval
    cleanupInterval: TimeSpan.FromMinutes(5), // Background cleanup frequency
    pageSize: 100,                            // Tasks per page for listing
    maxTasks: 1000,                           // Maximum total tasks allowed
    maxTasksPerSession: 100                   // Maximum tasks per session
);

Tools automatically advertise task support when they return Task, ValueTask, Task<T>, or ValueTask<T> (i.e. async methods).
You can explicitly control task support on individual tools using the ToolTaskSupport enum:

  • Forbidden (default for sync methods): Tool cannot be called with task augmentation
  • Optional (default for async methods): Tool can be called with or without task augmentation
  • Required: Tool must be called with task augmentation

Set TaskSupport on the McpServerTool attribute:

[McpServerTool(TaskSupport = ToolTaskSupport.Required)]
[Description("Processes a batch of data records. Always runs as a task.")]
public static async Task<string> ProcessData(
    [Description("Number of records to process")] int recordCount,
    CancellationToken cancellationToken)
{
    await Task.Delay(TimeSpan.FromSeconds(8), cancellationToken);
    return $"Processed {recordCount} records successfully.";
}

Or set it via McpServerToolCreateOptions.Execution when registering tools explicitly:

builder.Services.AddMcpServer()
    .WithTools([
        McpServerTool.Create(
            (int count, CancellationToken ct) => ProcessAsync(count, ct),
            new McpServerToolCreateOptions
            {
                Name = "requiredTaskTool",
                Execution = new ToolExecution { TaskSupport = ToolTaskSupport.Required }
            })
    ]);

For more control over the task lifecycle, a tool can directly interact with
IMcpTaskStore and return an McpTask.
This bypasses automatic task wrapping and allows the tool to create a task, schedule background work, and return immediately.
Note: use a static method and accept IMcpTaskStore as a method parameter rather than via constructor injection
to avoid DI scope issues when the SDK executes the tool in a background context.

Client support for tasks

To execute a tool as a task, a client includes the Task property in the request parameters:

var result = await client.CallToolAsync(
    new CallToolRequestParams
    {
        Name = "processDataset",
        Arguments = new Dictionary<string, JsonElement>
        {
            ["recordCount"] = JsonSerializer.SerializeToElement(1000)
        },
        Task = new McpTaskMetadata { TimeToLive = TimeSpan.FromHours(2) }
    },
    cancellationToken);

if (result.Task != null)
{
    Console.WriteLine($"Task created: {result.Task.TaskId}");
    Console.WriteLine($"Status: {result.Task.Status}");
}

The client can then poll for status updates and retrieve the final result:

// Poll until task reaches a terminal state
var completedTask = await client.PollTaskUntilCompleteAsync(
    taskId, cancellationToken: cancellationToken);

switch (completedTask.Status)
{
    case McpTaskStatus.Completed:
        var resultJson = await client.GetTaskResultAsync(
            taskId, cancellationToken: cancellationToken);
        var result = resultJson.Deserialize<CallToolResult>(McpJsonUtilities.DefaultOptions);
        foreach (var content in result?.Content ?? [])
        {
            if (content is TextContentBlock text)
            {
                Console.WriteLine(text.Text);
            }
        }
        break;
    case McpTaskStatus.Failed:
        // ...
        break;
    case McpTaskStatus.Cancelled:
        // ...
        break;
}

The SDK also provides methods to list all tasks (ListTasksAsync)
and cancel running tasks (CancelTaskAsync):

// List all tasks for the current session
var tasks = await client.ListTasksAsync(cancellationToken: cancellationToken);

// Cancel a running task
var cancelledTask = await client.CancelTaskAsync(taskId, cancellationToken: cancellationToken);

Clients can optionally register a handler to receive status notifications as they arrive,
but should always use polling as the primary mechanism since notifications are optional:

var options = new McpClientOptions
{
    Handlers = new McpClientHandlers
    {
        TaskStatusHandler = (task, cancellationToken) =>
        {
            Console.WriteLine($"Task {task.TaskId} status changed to {task.Status}");
            return ValueTask.CompletedTask;
        }
    }
};

Summary

The v1.0 release of the MCP C# SDK represents a major step forward for building MCP servers and clients in .NET.
Whether you’re implementing secure authorization flows, building rich tool experiences with sampling,
or handling long-running operations gracefully, the SDK has you covered.

Check out the full changelog
and the C# SDK repository to get started.

Demo projects for many of the features described here are available in the
mcp-whats-new demo repository.


Raylib 3.5 Released

Eight months after the release of Raylib 3.0, Raylib 3.5 was just released. Raylib is an open source, cross-platform C/C++ game framework. Raylib runs on a ton of different platforms and has bindings available for more than 50 different programming languages. The Raylib 3.5 release brings the following new features.

  • NEW Platform supported: Raspberry Pi 4 native mode (no X11 windows) through the DRM subsystem and GBM API. This is a really interesting improvement because it opens the door for raylib to support other embedded platforms (Odroid, GameShell, NanoPi…). Also worth mentioning are the unofficial homebrew ports of raylib for PS4 and PSVita.
  • NEW configuration options exposed: For custom raylib builds, config.h now exposes more than 150 flags and defines to build raylib with only the desired features. For example, it allows building a minimal raylib library in just a few KB by removing support for all external data filetypes, which is very useful for generating small executables or targeting embedded devices.
  • NEW automatic GIF recording feature: Automatic GIF recording (CTRL+F12) for any raylib application has been available for some versions, but the feature was slow, relying on an old GIF library with many file accesses. It has been replaced by a high-performance alternative (msf_gif.h) that operates directly on memory and works very well. Try it out!
  • NEW RenderBatch system: The rlgl module has been redesigned to support custom render batches, allowing draw calls to be grouped as desired; the previous implementation had just one default render batch. This feature has not been exposed in the raylib API yet, but it can be used by advanced users dealing with rlgl directly. For example, multiple RenderBatches can be created for 2D sprites and 3D geometry independently.
  • NEW Framebuffer system: The rlgl module now exposes an API for custom Framebuffer attachments (including cubemaps!). raylib's RenderTexture is a basic use case, allowing just color and depth textures, but this new API allows the creation of more advanced Framebuffers with multiple attachments, like G-Buffers. The GenTexture*() functions have been redesigned to use this new API.
  • Improved software rendering: The raylib Image*() API is intended for software rendering, for those cases when no GPU or no Window is available. These functions operate directly on multi-format pixel data in RAM and have been completely redesigned to be much faster, especially at small resolutions and for retro-gaming. Low-end embedded devices like microcontrollers with custom displays could benefit from this functionality!
  • File loading from memory: Multiple functions have been redesigned to load data from memory buffers instead of accessing files directly; all raylib file loading/saving now goes through a couple of functions that load data into memory. This enables custom virtual file systems and gives the user more control over data already loaded in memory (i.e. images, fonts, sounds…).
  • NEW Window states management system: The raylib core module has been redesigned to support Window state checks and setup more easily, both before and after Window initialization. SetConfigFlags() has been reviewed, and SetWindowState() has been added to control Window minification, maximization, hiding, focusing, topmost and more.
  • NEW GitHub Actions CI/CD system: The previous CI implementation has been reviewed and greatly improved to support multiple build configurations (platforms, compilers, static/shared builds), and an automatic deploy system now attaches the different generated artifacts to every new release. As the system works very well, the previous CI platforms (AppVeyor/TravisCI) have been removed.

Release notes are available here and a complete change log is available here. Binary versions of Raylib are available on Raylib.com while the source code is hosted under the ZLib license on GitHub. If you are interested in learning Raylib you can check out their community on Discord. You can also download Raylib via vcpkg on Visual Studio with step by step instructions available here. You can learn more about Raylib and the 3.5 release in the video below.

[youtube https://www.youtube.com/watch?v=RZJ-Z–6uxY?feature=oembed&w=1500&h=844]

Flax Engine Released

The Flax Engine game engine has just seen its 1.0 release. We’ve had our eyes on this engine since its first public beta in 2018, which was then followed by a few years of radio silence. Then in July of 2020 we got the 0.7 release, which added several new features including C++ live scripting support. With today’s release the Flax Engine is now available to everyone.

Key features include:

  • Seamless C# and C++ scripting
  • Automatic draw calls batching and instancing
  • Every asset uses async content streaming by default
  • Cross-platform support (Windows, Linux, Android, PS4, Xbox One, Xbox Series X/S, UWP…)
  • GPU Lightmaps Baking
  • Visual Scripting
  • VFX tools
  • Nested prefabs
  • Gameplay Globals for technical artists
  • Open World Tools (terrain, foliage, fog, levels streaming)
  • Hot-reloading C#/C++ in Editor
  • Full source-code available
  • Direct communication and help from engine devs
  • Lightweight development (full repo clone + compilation in less than 3 min)

Flax is available for Windows and Linux developers with the source code available on GitHub. Flax is a commercial game engine, but under fairly liberal terms. Commercial license terms are:

Use Flax for free, pay 4% when you release (above first $25k per quarter). Flax Engine and all related tools, all features, all supported platforms, all source code, all complete projects and Flax Samples with regular updates can be used for free.

If you want to learn more about Flax Engine, be sure to check out the following links:

You can learn more about the game engine and see it in action in the video below. Stay tuned for a more in-depth technical video on Flax Engine in the future.

[youtube https://www.youtube.com/watch?v=R4M4Yp7CjM0?feature=oembed&w=1500&h=844]

Wave Engine 3.1 Released

Wave Engine recently released version 3.1. Wave Engine is a completely free to use 3D game engine capable of targeting most platforms and XR devices. We have been keeping an eye on this engine since 2015 when we featured it in the Closer Look series. More recently we looked at Wave Engine again in 2019 when WaveEngine 3.0 was previewed after a long period of silence. After another long period of silence we received the 3.1 release which brings .NET 5 and C# 9 support as well as graphical improvements.

Details from a guest post on the DotNet team blog:

We are glad to announce that, aligned with Microsoft, we have just released WaveEngine 3.1 with official support for .NET 5 and C# 9. So if you are using C# and .NET 5, download it from the WaveEngine download page and start creating 3D apps based on .NET 5 today. We would like to share with you our journey migrating from .NET Core 3.1 to .NET 5, as well as some of the new features made possible with .NET 5.

From .NET Core 3.1 to .NET 5

To make this possible, we started working on this one year ago, when we decided to rewrite our low-level graphics abstraction API to support the new Vulkan, DirectX 12 and Metal graphics APIs. At that time, the engine was a project based on .NET Framework with an editor based on GTK#, which had problems supporting new resolutions, multiscreen setups and the new DPI standards. We were following all the great advances in performance that Microsoft was making in .NET Core and the future framework called .NET 5, and we decided to align our engine with it to take advantage of all the new performance features. So we started writing a new editor based on WPF and .NET Core and moved all our extensions and libraries to .NET Core. This took us one year of hard work, but the results comparing our old version 2.5 and the new 3.1 in terms of performance and memory usage are awesome: around 4-5x faster.

Now we have official support for .NET 5 and this technology is ready for .NET 6 so we are glad to become one of the first engines to support it.

In the video below we review Wave Engine 3.1. All of the samples used in the video are available on GitHub. Please note this repository should not be cloned, it simply links to a different repository for each sample.

[youtube https://www.youtube.com/watch?v=9zIQHBPW1E4?feature=oembed&w=1500&h=844]

Unigine 2.13 Released

The Unigine engine just released version 2.13. The new release includes an all new GPU based lightmapping tool, a new terrain generation tool, improved clouds, better lighting and a whole lot more. Since Unigine 2.11 there is a free community version available making Unigine a lot more viable for indie game developers.

Highlights of the release include:

  • GPU Lightmapper tool
  • Introducing SRAA (Subpixel Reconstruction Anti-Aliasing)
  • Upgraded 3D volumetric clouds
  • Performance optimizations for vast forest rendering
  • New iteration of the terrain generation tool with online GIS sources support (experimental)
  • Adaptive hardware tessellation for the mesh_base material
  • Project Build tool: extended functionality and a standalone console-based version
  • New samples (LiDAR sensor, night city lights, helicopter winch)
  • Introducing 3D scans library

For further information on the release be sure to check the much more in-depth release notes or watch the video below.

[youtube https://www.youtube.com/watch?v=AmPl2B-pyQ4?feature=oembed&w=1500&h=844]

The Machinery Game Engine Enters Open Beta

The Machinery by Our Machinery is an in development professional game engine that just entered open beta. We went hands-on with The Machinery earlier in the year when it was still in closed beta if you want an in-depth but slightly out of date hands-on experience. With the move to open beta all you need to do is register an account and download the engine to get started.

In a world dominated by game engines, what makes The Machinery unique? This engine is being developed by members of the team behind the Stingray/BitSquid engine, used in such titles as Magicka and Warhammer: Vermintide. The engine is lightweight, modular and written in the C language with a focus on customizability. Details from the open beta announcement:

If you are still wondering what The Machinery is, it’s a new lightweight and flexible game engine, designed to give you all the power of a modern engine in a minimalistic package that is easy to understand, extend, explore, rewrite, and hack. Beyond games, the API can also be used for simulations and visualizations as well as building custom tools, editors, and applications. 

 Some of the things that make The Machinery more hackable than other game engines are:

  • The Machinery’s API is written in C. It’s easy to understand without learning the complexities of modern C++. And don’t worry, you still have type-safe vectors and hash tables, just as in C++.
  • We use a modular design that is completely plugin-based. This makes it easy to extend and replace parts of the engine.
  • The engine can be stripped down to a minimalistic core. Don’t need physics, animation, or sound? Just ship the engine without those DLLs.
  • Individual DLLs can be hot-reloaded. You can modify gameplay, UI, etc, while the editor is running.
  • The codebase is small, readable and well documented.
  • We offer licenses with full source code for both small and large developers. 

You can learn more about The Machinery open beta and a quick hands-on/getting started guide in the video below.

[youtube https://www.youtube.com/watch?v=y6C5vUm55Eg?feature=oembed&w=1500&h=844]

Drag[en]gine Hands-On

The Drag[en]gine is a highly modular, open source (C++) game engine that has been under active development for several years. The Drag[en]gine’s modular approach is built around the GLEM concept, breaking your game project into the Game Script, Launcher, Engine and Modules layers. The Game Script layer is implemented by default in Dragonscript, another open source project available here. Drag[en]gine is open source under the LGPL license on GitHub.

If you want to get started with Drag[en]gine, you can download binaries for Linux and Windows here; it’s most likely the IGDE file you want to start with. There are a number of samples available here to get you started. You can learn more about Drag[en]gine in the video below.

[youtube https://www.youtube.com/watch?v=ZyW22zRk6A8?feature=oembed&w=1500&h=844]

FlatRedBall Engine Review

FlatRedBall is an open source C# based game engine with development dating back to 2005. It was originally built to run on top of Managed DirectX, then was ported to XNA, and when XNA was deprecated, it was ported again to run on top of the MonoGame framework.

FlatRedBall provides a layer of APIs and tooling on top of MonoGame designed to simplify the process of creating 2D games. You can currently create games for Windows (and UWP), Android and iOS, with Mac and Linux targets currently a work in progress. The heart of the tooling is Glue, which “glues” together the various other tools, including plugins for tasks such as UI development as well as support for the Tiled 2D map editor.

FlatRedBall is open source with the source code available on GitHub under the flexible and permissive MIT open source license. You can check out FlatRedBall in action in the video below (or here on Odysee). If you are interested in learning more or encounter a problem, they have an active Discord server available here.

[youtube https://www.youtube.com/watch?v=X0ncHtmUk5Y?feature=oembed&w=1500&h=844]