
Modernize .NET Anywhere with GitHub Copilot

Modernizing a .NET application is rarely a single step. It requires understanding the current state of the codebase, evaluating dependencies, identifying potential breaking changes, and sequencing updates carefully.

Until recently, GitHub Copilot modernization for .NET ran primarily inside Visual Studio. That worked well for teams standardized on the IDE, but many teams build elsewhere. Some use VS Code. Some work directly from the terminal. Much of the coordination happens on GitHub, not in a single developer’s local environment.

The modernize-dotnet custom agent changes that. The same modernization workflow can now run across Visual Studio, VS Code, GitHub Copilot CLI, and GitHub. The intelligence behind the experience remains the same. What’s new is where it can run. You can modernize in the environment you already use instead of rerouting your workflow just to perform an upgrade.

The modernize-dotnet agent builds on the broader GitHub Copilot modernization platform, which follows an assess → plan → execute model. Workload-specific agents such as modernize-dotnet, modernize-java, and modernize-azure-dotnet guide applications toward their modernization goals, working together across code upgrades and cloud migration scenarios.

What the agent produces

Every modernization run generates three explicit artifacts in your repository: an assessment that surfaces scope and potential blockers, a proposed upgrade plan that sequences the work, and a set of upgrade tasks that apply the required code transformations.

Because these artifacts live alongside your code, teams can review, version, discuss, and modify them before execution begins. Instead of a one-shot upgrade attempt, modernization becomes traceable and deliberate.

GitHub Copilot CLI

For terminal-first engineers, GitHub Copilot CLI provides a natural entry point.

You can assess a repository, generate an upgrade plan, and run the upgrade without leaving the shell.

  1. Add the marketplace: /plugin marketplace add dotnet/modernize-dotnet
  2. Install the plugin: /plugin install modernize-dotnet@modernize-dotnet-plugins
  3. Select the agent: run /agent and choose modernize-dotnet
  4. Then prompt the agent, for example: upgrade my solution to a new version of .NET

Modernize .NET in GitHub Copilot CLI

The agent generates the assessment, upgrade plan, and upgrade tasks directly in the repository. You can review scope, validate sequencing, and approve transformations before execution. Once approved, the agent automatically executes the upgrade tasks directly from the CLI.

GitHub

On GitHub, the agent can be invoked directly within a repository. The generated artifacts live alongside your code, shifting modernization from a local exercise to a collaborative proposal. Instead of summarizing findings in meetings, teams review the plan and tasks where they already review code. Learn how to add custom coding agents to your repo, then add the modernize-dotnet agent by following the README in the modernize-dotnet repository.

VS Code

If you use VS Code, install the GitHub Copilot modernization extension and select modernize-dotnet from the Agent picker in Copilot Chat. Then prompt the agent with the upgrade you want to perform, for example: upgrade my project to .NET 10.

Visual Studio

If Visual Studio is your primary IDE, the structured modernization workflow remains fully integrated.

Right-click your solution or project in Solution Explorer and select the Modernize action to perform an upgrade.

Supported workloads

GitHub Copilot modernization supports upgrades across common .NET project types, including ASP.NET Core (MVC, Razor Pages, Web API), Blazor, Azure Functions, WPF, class libraries, and console applications.

Migration from .NET Framework to modern .NET is also supported for application types such as ASP.NET (MVC, Web API), Windows Forms, WPF, and Azure Functions, with Web Forms support coming soon.

The CLI and VS Code experiences are cross-platform. However, migrations from .NET Framework require Windows.

Custom skills

Skills are a standard part of GitHub Copilot’s agentic platform. They let teams define reusable, opinionated behaviors that agents apply consistently across workflows.

The modernize-dotnet agent supports custom skills, allowing organizations to encode internal frameworks, migration patterns, or architectural standards directly into the modernization workflow. Any skills added to the repository are automatically applied when the agent performs an upgrade.

You can learn more about how skills work and how to create them in the Copilot skills documentation.

Give it a try

Run the modernize-dotnet agent on a repository you’re planning to upgrade and explore the modernization workflow in the environment you already use.

If you try it, we’d love to hear how it goes. Share feedback or report issues in the modernize-dotnet repository.


.NET 10.0.5 Out-of-Band Release – macOS Debugger Fix

We are releasing .NET 10.0.5 as an out-of-band (OOB) update to address a regression introduced in .NET 10.0.4.

What’s the issue?

.NET 10.0.4 introduced a regression that causes the debugger to crash when debugging applications on macOS using Visual Studio Code. After installing .NET SDK 10.0.104 or 10.0.200, the debugger could crash when attempting to debug any .NET application on macOS (particularly affecting ARM64 Macs).

This regression is unrelated to the security fixes included in 10.0.4.

Who is affected?

This issue specifically affects developers who:

  • are on macOS (particularly Apple Silicon/ARM64)
  • use Visual Studio Code for debugging
  • have installed .NET SDK 10.0.104 or 10.0.200, or the .NET 10.0.4 runtime

Important

If you are developing on macOS and use Visual Studio Code for debugging .NET applications, you should install this update. Other platforms (Windows, Linux) and development environments are not affected by this regression.

Download .NET 10.0.5

Installation guidance

For macOS users with VS Code:

  1. Download and install .NET 10.0.5
  2. Restart Visual Studio Code
  3. Verify the installation by running dotnet --version in your terminal

For other platforms:
You may continue using .NET 10.0.4 unless you prefer to stay on the latest patch version. This release addresses a specific crash issue and does not include additional fixes beyond what was released in 10.0.4.

Share your feedback

If you continue to experience issues after installing this update, please let us know in the Release feedback issue.

Thank you for your patience as we worked to resolve this issue quickly for our macOS developer community.


Extend your coding agent with .NET Skills

Coding agents are becoming part of everyday development, but the quality and
usefulness of their responses still depend on the context they receive. That
context comes in many forms: your environment, the code in the workspace, the
model’s training knowledge, previous memory, agent instructions, and of course
your own starting prompt. On the .NET team we’ve adopted coding agents as part
of our regular workflow and have, like you, learned ways to improve our
productivity by providing great context. Across our repos we’ve tuned our
agent instructions and have also started to use agent skills to improve our
workflows. We’re introducing dotnet/skills, a repository that hosts a set of
agent skills for .NET developers, built by the team that builds the platform
itself.

What is an agent skill?

If you’re new to the concept, an agent skill is a lightweight package with specialized knowledge an agent can discover and use while solving a task. A skill bundles intent,
task-specific context, and supporting artifacts so the agent can choose better
actions with less trial and error. This work follows the
Agent Skills specification, which defines a common
model for authoring and sharing these capabilities with coding agents. GitHub Copilot CLI, VS Code, Claude Code and other coding agents support this specification.
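
As a purely illustrative sketch, a minimal skill following that specification is a SKILL.md file with YAML frontmatter naming and describing the skill, followed by the instructions the agent should apply (the skill name and body below are invented for illustration):

```markdown
---
name: analyzing-dotnet-performance
description: Guidance for collecting and interpreting .NET performance traces.
---

When asked to investigate .NET performance, first collect a trace with
`dotnet-trace collect`, then summarize the hottest stacks before proposing
any code changes.
```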

What we are doing with dotnet/skills

With dotnet/skills, we’re publishing skills from the team that ships the
platform. These are the same workflows we’ve used ourselves, with first-party
teams, and in engineering scenarios we’ve encountered while working with
developers like you.

So what does that look like in practice? You’re not starting from generic
prompts. You’re starting from patterns we’ve already tested while shipping
.NET.

Our goal is practical: ship skills that help agents complete common .NET tasks
more reliably, with better context and fewer dead ends.

Does it help?

While we’ve learned that context is essential, we’ve also learned not to
assume more is always better. AI models are getting remarkably better with
each release, and what seemed necessary even three months ago may no longer be
required with newer models. In producing skills, we want to measure whether an
added skill actually improves the result. For each skill we merge, we run a
lightweight validator (also available in the repo) to score it. We’re still
learning the best graders and evals for this kind of work, and so is the
ecosystem.

Think of this as a unit test for a skill, not an integration test for the
whole system. We measure (using a specific model each run) against a baseline
with no skill present and score whether the skill improved the intended
behavior, and by how much. Some of this comes down to taste, so we’re careful
not to draw hard lines on a specific number; we look at the result, adjust,
and re-score.

Each skill’s evaluation lives in the repository as well, so you can inspect
and run them. This gives us a practical signal on usefulness without waiting
for large end-to-end benchmark cycles. We will continue to learn in this space
and adjust, and many partner teams are trying different evaluation techniques
at this level too. The real test is you telling us whether the skills have
helped.

A developer posted this just recently on Discord sharing what we want to see:

The skill just worked with the log that I’ve with me, thankfully it was smartter[sic] than me and found the correct debug symbol. At the end it says the crash is caused by a heap corruption and the stack-trace points to GC code, by any chance does it ring a bell for you?

This is a great example of a skill rapidly accelerating a developer to the next step of an investigation. That is the true definition of success: unblocking developers and accelerating their productivity.

Discovery, installation, and using skills

Popular agent tools have adopted the concept of plugin marketplaces, which,
simply put, are registries of agent artifacts such as skills. The plugin
definition serves as an organizational unit, defining what skills, agents,
hooks, and so on exist for that plugin in a single installable package. The
dotnet/skills repo is organized the same way: the repo serves as the
marketplace, and we have organized a set of plugins by functional area. We’ll
continue to define more plugins as they are merged and based on your feedback.

While you can simply copy the SKILL.md files directly into your environment,
the plugin concept in coding agents like GitHub Copilot aims to make that
process simpler. As noted in the
README,
you can register the repo as a marketplace and browse and install the plugins.

/plugin marketplace add dotnet/skills

Once the marketplace is added, you can browse its plugins and install the one you want by name:

/plugin marketplace browse dotnet-agent-skills
/plugin install <plugin>@dotnet-agent-skills

Copilot CLI browsing plugin marketplace and installing a plugin via the CLI

They are then available to your coding agent automatically, or you can invoke them explicitly.

/dotnet:analyzing-dotnet-performance

In VS Code, add the marketplace URL in the Copilot extension settings (Insiders), using https://github.com/dotnet/skills as the location. You can then browse and install plugins from the Extensions explorer and execute them directly in Copilot Chat using the slash command:

Browsing agent plugins in the Extension marketplace

We acknowledge that even discovering marketplaces can be a challenge, and we
are working with our Copilot partners and the ecosystem to better understand
ways to improve this discovery flow; it’s hard to use great skills if you
don’t know where to look! We’ll be sure to post more on any changes and on
possible .NET-specific tools that help identify skills to make your project
and your developer productivity better.

Starting principles

Like other evolving standards in the AI extensibility space, skills are fast
moving. We are starting with the principle of simplicity first. We’ve seen in
our own use that a huge set of new tools may not be needed when the skills
themselves are well scoped. Where we need more, we’ll leverage MCP, scripts,
or existing SDK tools and rely on them to enhance the particular skill
workflow. We want our skills to be proven, practical, and task-oriented.

We also know there are great community-provided agent skills, like
github/awesome-copilot, which provide a lot of value for specific libraries
and architectural patterns for .NET developers. We support these efforts and
don’t think there is a ‘one winner’ skills marketplace for .NET developers. We
want our team to stay focused closest to the core runtime, concepts, tools,
and frameworks we deliver, and to support and learn from the community as the
broader set of agentic skills helps all .NET developers in many more ways. Our
skills are meant to complement, not replace, any other marketplace of skills.

What’s next

The AI ecosystem is moving fast, and this repository will too. We’ll iterate
and learn in the open with the developer community.

Expect frequent updates, new skills, and continued collaboration as we improve
how coding agents work across .NET development scenarios.

Explore dotnet/skills, try the skills in your own workflows, and share
feedback
on things that can improve or new ideas we should consider.


Release v1.0 of the official MCP C# SDK

The Model Context Protocol (MCP) C# SDK has reached its v1.0 milestone, bringing full support for the
2025-11-25 version of the MCP Specification.
This release delivers a rich set of new capabilities — from improved authorization flows and richer metadata,
to powerful new patterns for tool calling, elicitation, and long-running request handling.

Here’s a tour of what’s new.

Enhanced authorization server discovery

In the previous spec, servers were required to provide a link to their Protected Resource Metadata (PRM) Document
in the resource_metadata parameter of the WWW-Authenticate header.
The 2025-11-25 spec broadens this, giving servers three ways to expose the PRM:

  1. Via a URL in the resource_metadata parameter of the WWW-Authenticate header (as before)
  2. At a “well-known” URL derived from the server’s MCP endpoint path
    (e.g. https://example.com/.well-known/oauth-protected-resource/public/mcp)
  3. At the root well-known URL (e.g. https://example.com/.well-known/oauth-protected-resource)

Clients check these locations in order.
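
As a rough illustration (not SDK code; the SDK performs this discovery for you), the two well-known locations in options 2 and 3 can be derived from the MCP endpoint URL like this:

```csharp
using System;

// Illustrative only: derive the well-known PRM locations from an MCP endpoint.
var endpoint = new Uri("https://example.com/public/mcp");

// Option 2: well-known URL that preserves the endpoint's path
var pathDerived = new Uri(endpoint,
    "/.well-known/oauth-protected-resource" + endpoint.AbsolutePath);

// Option 3: root well-known URL
var root = new Uri(endpoint, "/.well-known/oauth-protected-resource");

Console.WriteLine(pathDerived); // https://example.com/.well-known/oauth-protected-resource/public/mcp
Console.WriteLine(root);        // https://example.com/.well-known/oauth-protected-resource
```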

On the server side, the SDK’s AddMcp extension method on AuthenticationBuilder
makes it easy to configure the PRM Document:

.AddMcp(options =>
{
    options.ResourceMetadata = new()
    {
        ResourceDocumentation = new Uri("https://docs.example.com/api/weather"),
        AuthorizationServers = { new Uri(inMemoryOAuthServerUrl) },
        ScopesSupported = ["mcp:tools"],
    };
});

When configured this way, the SDK automatically hosts the PRM Document at the well-known location
and includes the link in the WWW-Authenticate header. On the client side, the SDK handles the
full discovery sequence automatically.

Icons for tools, resources, and prompts

The 2025-11-25 spec adds icon metadata to Tools, Resources, and Prompts. This information is included
in the response to tools/list, resources/list, and prompts/list requests.
Implementation metadata (describing a client or server) has also been extended with icons and a website URL.

The simplest way to add an icon for a tool is with the IconSource parameter on the McpServerToolAttribute:

[McpServerTool(Title = "This is a title", IconSource = "https://example.com/tool-icon.svg")]
public static string ToolWithIcon() => "Result from a tool whose listing includes an icon.";

The McpServerResourceAttribute, McpServerResourceTemplateAttribute, and McpServerPromptAttribute also accept an IconSource parameter.

For more advanced scenarios — multiple icons, MIME types, size hints, and theme preferences — you can
configure icons programmatically via McpServerToolCreateOptions.Icons:

.WithTools([
    McpServerTool.Create(
        typeof(EchoTool).GetMethod(nameof(EchoTool.Echo))!,
        options: new McpServerToolCreateOptions
        {
            Icons =
            [
                new Icon
                {
                    Source = "https://raw.githubusercontent.com/microsoft/fluentui-emoji/main/assets/Loudspeaker/Flat/loudspeaker_flat.svg",
                    MimeType = "image/svg+xml",
                    Sizes = ["any"],
                    Theme = "light"
                },
                new Icon
                {
                    Source = "https://raw.githubusercontent.com/microsoft/fluentui-emoji/main/assets/Loudspeaker/3D/loudspeaker_3d.png",
                    MimeType = "image/png",
                    Sizes = ["256x256"],
                    Theme = "dark"
                }
            ]
        })
])

Here’s how these icons could be displayed, as illustrated in the MCP Inspector:

Icons displayed in MCP Inspector showing tool icons with different themes and styles


The Implementation class also has
Icons and
WebsiteUrl properties for server and client metadata:

.AddMcpServer(options =>
{ options.ServerInfo = new Implementation { Name = "Everything Server", Version = "1.0.0", Title = "MCP Everything Server", Description = "A comprehensive MCP server demonstrating all MCP features", WebsiteUrl = "https://github.com/modelcontextprotocol/csharp-sdk", Icons = [ new Icon { Source = "https://raw.githubusercontent.com/microsoft/fluentui-emoji/main/assets/Gear/Flat/gear_flat.svg", MimeType = "image/svg+xml", Sizes = ["any"], Theme = "light" } ] };
})

Incremental scope consent

The incremental scope consent feature brings the Principle of Least Privilege
to MCP authorization, allowing clients to request only the minimum access needed for each operation.

MCP uses OAuth 2.0 for authorization, where scopes define the level of access a client has.
Previously, clients might request all possible scopes up front because they couldn’t know which scopes
a specific operation would require. With incremental scope consent, clients start with minimal scopes
and request additional ones as needed.

The mechanism works through two flows:

  • Initial scopes: When a client makes an unauthenticated request, the server responds with
    401 Unauthorized and a WWW-Authenticate header that now includes a scopes parameter listing
    the scopes needed for the operation. Clients request authorization for only these scopes.

  • Additional scopes: When a client’s token lacks scopes for a particular operation, the server
    responds with 403 Forbidden and a WWW-Authenticate header containing an error parameter
    of insufficient_scope and a scopes parameter with the required scopes. The client then
    obtains a new token with the expanded scopes and retries.

Client support for incremental scope consent

The MCP C# client SDK handles incremental scope consent automatically. When it receives a 401 or 403 with a scopes
parameter in the WWW-Authenticate header, it extracts the required scopes and initiates the
authorization flow — no additional client code needed.

Server support for incremental scope consent

Setting up incremental scope consent on the server involves:

  1. Adding authentication services configured with the MCP authentication scheme:

    builder.Services.AddAuthentication(options =>
    {
        options.DefaultAuthenticateScheme = McpAuthenticationDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = McpAuthenticationDefaults.AuthenticationScheme;
    })
  2. Enabling JWT bearer authentication with appropriate token validation:

    .AddJwtBearer(options =>
    {
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidateAudience = true,
            ValidateLifetime = true,
            ValidateIssuerSigningKey = true,
            // Other validation settings as appropriate
        };
    })

    The following token validation settings are strongly recommended:

    Setting Value Description
    ValidateIssuer true Ensures the token was issued by a trusted authority
    ValidateAudience true Verifies the token is intended for this server
    ValidateLifetime true Checks that the token has not expired
    ValidateIssuerSigningKey true Confirms the token signature is valid
  3. Specifying authentication scheme metadata to guide clients on obtaining access tokens:

    .AddMcp(options =>
    {
        options.ResourceMetadata = new()
        {
            ResourceDocumentation = new Uri("https://docs.example.com/api/weather"),
            AuthorizationServers = { new Uri(inMemoryOAuthServerUrl) },
            ScopesSupported = ["mcp:tools"],
        };
    });
  4. Performing authorization checks in middleware.
    Authorization checks should be implemented in ASP.NET Core middleware instead of inside the tool method itself. This is because the MCP HTTP handler may (and in practice does) flush response headers before invoking the tool. By the time the tool call method is invoked, it is too late to set the response status code or headers.

    Unfortunately, the middleware may need to inspect the contents of the request to determine which scopes are required, which involves an extra deserialization for incoming requests. But help may be on the way in future versions of the MCP protocol that will avoid this overhead in most cases. Stay tuned…

    In addition to inspecting the request, the middleware must also extract the scopes from the access token sent in the request. In the MCP C# SDK, the authentication handler extracts the scopes from the JWT and converts them to claims in the HttpContext.User property. The way these claims are represented depends on the token issuer and the JWT structure. For a token issuer that represents scopes as a space-separated string in the scope claim, you can determine the scopes passed in the request as follows:

    var user = context.User;
    var userScopes = user?.Claims
        .Where(c => c.Type == "scope" || c.Type == "scp")
        .SelectMany(c => c.Value.Split(' '))
        .Distinct()
        .ToList();

    With the scopes extracted from the request, the server can then check if the required scope(s) for the requested operation is included with userScopes.Contains(requiredScope).

    If the required scopes are missing, respond with 403 Forbidden and a WWW-Authenticate header, including an error parameter indicating insufficient_scope and a scopes parameter indicating the scopes required.
    The MCP Specification describes several strategies for choosing which scopes to include:

    • Minimum approach: Only the newly-required scopes (plus any existing granted scopes that are still relevant)
    • Recommended approach: Existing relevant scopes plus newly required scopes
    • Extended approach: Existing scopes, newly required scopes, and related scopes that commonly work together
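
To make the 403 challenge in step 4 concrete, here is a minimal sketch of building the WWW-Authenticate value; the variable names and the chosen scope are assumptions for illustration, not SDK APIs:

```csharp
using System;

// Illustrative sketch: build the challenge for a 403 Forbidden response.
// `requiredScopes` stands in for whatever scopes the inspected operation needs.
string requiredScopes = "mcp:tools";
string challenge = $"Bearer error=\"insufficient_scope\", scopes=\"{requiredScopes}\"";

// In ASP.NET Core middleware this would be applied roughly as:
//   context.Response.StatusCode = StatusCodes.Status403Forbidden;
//   context.Response.Headers.WWWAuthenticate = challenge;
Console.WriteLine(challenge);
```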

URL mode elicitation

URL mode elicitation enables secure out-of-band interactions between the server and end-user,
bypassing the MCP host/client entirely. This is particularly valuable for gathering sensitive data — like API keys,
third-party authorizations, and payment information — that would pose a security risk
if transmitted through the client.

Inspired by web security standards like OAuth, this mechanism lets the MCP client obtain user consent
and direct the user’s browser to a secure server-hosted URL where the sensitive interaction takes place.

The MCP host/client must present the elicitation request to the user — including the server’s identity
and the purpose of the request — and provide options to decline or cancel.
What the server does at the elicitation URL is outside the scope of MCP; it could present a form,
redirect to a third-party authorization service, or anything else.

Client support for URL mode elicitation

Clients indicate support by setting the Url property in Capabilities.Elicitation:

McpClientOptions options = new()
{
    Capabilities = new ClientCapabilities
    {
        Elicitation = new ElicitationCapability { Url = new UrlElicitationCapability() }
    }
    // other client options
};

The client must also provide an ElicitationHandler.
Since there’s a single handler for both form mode and URL mode elicitation, the handler should begin by checking the
Mode property of the ElicitationRequest parameters
to determine which mode is being requested and handle it accordingly.

async ValueTask<ElicitResult> HandleElicitationAsync(ElicitRequestParams? requestParams, CancellationToken token)
{
    if (requestParams is null || requestParams.Mode != "url" || requestParams.Url is null)
    {
        return new ElicitResult();
    }

    // Success path for URL-mode elicitation omitted for brevity.
}

Server support for URL mode elicitation

The server must define an endpoint for the elicitation URL and handle the response.
Typically the response is submitted via POST to keep sensitive data out of URLs and logs.
If the URL serves a form, it should include anti-forgery tokens to prevent CSRF attacks —
ASP.NET Core provides built-in support for this.

One approach is to create a Razor Page:

public class ElicitationFormModel : PageModel
{
    public string ElicitationId { get; set; } = string.Empty;

    public IActionResult OnGet(string id)
    {
        // Serves the elicitation URL when the user navigates to it
    }

    public async Task<IActionResult> OnPostAsync(string id, string name, string ssn, string secret)
    {
        // Handles the elicitation response when the user submits the form
    }
}

Note the id parameter on both methods — since an MCP server using Streamable HTTP Transport
is inherently multi-tenant, the server must associate each elicitation request and response
with the correct MCP session. The server must maintain state to track pending elicitation requests
and communicate responses back to the originating MCP request.
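
One possible way to maintain that state (a sketch with invented names, not an SDK API) is to key pending requests by the elicitation id and complete them when the form is posted back:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Sketch: correlate out-of-band form submissions with the MCP request
// that triggered them, keyed by a unique elicitation id.
var pending = new ConcurrentDictionary<string, TaskCompletionSource<string>>();

// When generating the elicitation URL, register a pending entry under the id.
string id = Guid.NewGuid().ToString("N");
var tcs = new TaskCompletionSource<string>();
pending[id] = tcs;

// Later, when the form is POSTed back with that id, complete the waiting request.
if (pending.TryRemove(id, out var waiter))
    waiter.SetResult("form-data");

// The originating MCP request handler awaits the result and resumes here.
string result = await tcs.Task;
Console.WriteLine(result); // form-data
```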

Tool calling support in sampling

This is one of the most powerful additions in the 2025-11-25 spec. Servers can now include tools
in their sampling requests, which the LLM may invoke to produce a response.

While providing tools to LLMs is a central feature of MCP, tools in sampling requests are fundamentally different
from standard MCP tools — despite sharing the same metadata structure. They don’t need to be implemented
as standard MCP tools, so the server must implement its own logic to handle tool invocations.

The flow is important to understand: when the LLM requests a tool invocation during sampling,
that’s the response to the sampling request. The server executes the tool, then issues a new
sampling request that includes both the tool call request and the tool call response. This continues
until the LLM produces a final response with no tool invocation requests.

sequenceDiagram
    participant Server
    participant Client
    Server->>Client: CreateMessage Request
    Note right of Client: messages: [original prompt]<br/>tools: [tool definitions]
    Client-->>Server: CreateMessage Response
    Note left of Server: stopReason: tool_calls<br/>toolCalls: [tool call 1, tool call 2]
    Note over Server: Server executes tools locally
    Server->>Client: CreateMessage Request
    Note right of Client: messages: [<br/>  original prompt,<br/>  tool call 1 request,<br/>  tool call 1 response,<br/>  tool call 2 request,<br/>  tool call 2 response<br/>]<br/>tools: [tool definitions]
    Client-->>Server: CreateMessage Response
    Note left of Server: stopReason: end_turn<br/>content: [final response]

Client/host support for tool calling in sampling

Clients declare support for tool calling in sampling through their capabilities and must provide
a SamplingHandler:

var mcpClient = await McpClient.CreateAsync(
    new HttpClientTransport(new()
    {
        Endpoint = new Uri("http://localhost:6184"),
        Name = "SamplingWithTools MCP Server",
    }),
    clientOptions: new()
    {
        Capabilities = new ClientCapabilities
        {
            Sampling = new SamplingCapability { Tools = new SamplingToolsCapability() }
        },
        Handlers = new()
        {
            SamplingHandler = async (c, p, t) => await samplingHandler(c, p, t),
        }
    });

Implementing the SamplingHandler from scratch would be complex, but the Microsoft.Extensions.AI
package makes it straightforward. You can obtain an IChatClient from your LLM provider and use
CreateSamplingHandler to get a handler that translates between MCP and your LLM’s tool invocation format:

IChatClient chatClient = new OpenAIClient(new ApiKeyCredential(token), new OpenAIClientOptions { Endpoint = new Uri(baseUrl) })
    .GetChatClient(modelId)
    .AsIChatClient();

var samplingHandler = chatClient.CreateSamplingHandler();

The sampling handler from IChatClient handles format translation but does not implement user consent
for tool invocations. You can wrap it in a custom handler to add consent logic.
Note that it will be important to cache user approvals to avoid prompting the user multiple times for the same tool invocation during a single sampling session.
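
A minimal sketch of such a consent cache follows; all names here are invented for illustration, and `AskUser` stands in for whatever confirmation UI the host provides:

```csharp
using System;
using System.Collections.Generic;

// Sketch: prompt the user at most once per tool name during a sampling session.
var approved = new HashSet<string>(StringComparer.Ordinal);
int prompts = 0;

bool AskUser(string toolName)
{
    prompts++;       // stand-in for showing a real confirmation dialog
    return true;     // assume the user approves in this sketch
}

bool IsAllowed(string toolName)
{
    if (approved.Contains(toolName)) return true;  // cached approval: no prompt
    if (!AskUser(toolName)) return false;          // declined: do not cache
    approved.Add(toolName);                        // remember for this session
    return true;
}

Console.WriteLine(IsAllowed("roll_die")); // True  (user prompted once)
Console.WriteLine(IsAllowed("roll_die")); // True  (served from the cache)
Console.WriteLine(prompts);               // 1
```

A wrapper around the handler returned by CreateSamplingHandler could consult a check like IsAllowed before forwarding each tool invocation.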

Server support for tool calling in sampling

Servers can take advantage of the tool calling support in sampling if they are connected to a client/host that also supports this feature.
Servers can check whether the connected client supports tool calling in sampling:

if (_mcpServer?.ClientCapabilities?.Sampling?.Tools is not { })
{
    return "Error: Client does not support sampling with tools.";
}

Tools for sampling can be described as simple Tool objects:

Tool rollDieTool = new Tool()
{
    Name = "roll_die",
    Description = "Rolls a single six-sided die and returns the result (1-6)."
};

But the real power comes from using Microsoft.Extensions.AI on the server side too. The McpServer.AsSamplingChatClient()
method returns an IChatClient that supports sampling, and UseFunctionInvocation adds tool calling support:

IChatClient chatClient = ChatClientBuilderChatClientExtensions
    .AsBuilder(_mcpServer.AsSamplingChatClient())
    .UseFunctionInvocation()
    .Build();

Define tools as AIFunction objects and pass them in ChatOptions:

AIFunction rollDieTool = AIFunctionFactory.Create(
    () => Random.Shared.Next(1, 7),
    name: "roll_die",
    description: "Rolls a single six-sided die and returns the result (1-6)."
);

var chatOptions = new ChatOptions
{
    Tools = [rollDieTool],
    ToolMode = ChatToolMode.Auto
};

var pointRollResponse = await chatClient.GetResponseAsync(
    "<Prompt that may use the roll_die tool>",
    chatOptions,
    cancellationToken
);

The IChatClient handles all the complexity: sending sampling requests with tools, processing
tool invocation requests, executing tools, and translating between MCP and LLM formats.

OAuth Client ID Metadata Documents

The 2025-11-25 spec introduces Client ID Metadata Documents (CIMDs) as an alternative
to Dynamic Client Registration (DCR) for establishing client identity with an authorization server.
CIMD is now the preferred method for client registration in MCP.

The idea is simple: the client specifies a URL as its client_id in authorization requests.
That URL resolves to a JSON document hosted by the client containing its metadata — identifiers,
redirect URIs, and other descriptive information. When an authorization server encounters this client_id,
it dereferences the URL and uses the metadata to understand and apply policy to the client.

In the C# SDK, clients specify a CIMD URL via ClientOAuthOptions:

const string ClientMetadataDocumentUrl = $"{ClientUrl}/client-metadata/cimd-client.json";

await using var transport = new HttpClientTransport(new()
{
    Endpoint = new(McpServerUrl),
    OAuth = new ClientOAuthOptions()
    {
        RedirectUri = new Uri("http://localhost:1179/callback"),
        AuthorizationRedirectDelegate = HandleAuthorizationUrlAsync,
        ClientMetadataDocumentUri = new Uri(ClientMetadataDocumentUrl)
    },
}, HttpClient, LoggerFactory);

The CIMD URL must use HTTPS, have a non-empty path, and cannot contain dot segments or a fragment component.
The document itself must include at least client_id, client_name, and redirect_uris.
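For illustration, a minimal CIMD document satisfying these requirements might look like the following sketch. The URL and client name are hypothetical placeholders; per the CIMD model, the client_id matches the URL at which the document is hosted:

```json
{
  "client_id": "https://example.com/client-metadata/cimd-client.json",
  "client_name": "Example MCP Client",
  "redirect_uris": ["http://localhost:1179/callback"]
}
```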

The SDK will attempt CIMD first, and fall back to DCR if the authorization server doesn’t support it
(provided DCR is enabled in the OAuth options).

Long-running requests over HTTP with polling

At the data layer, MCP is a message-based protocol with no inherent time limits.
But over HTTP, timeouts are a fact of life. The 2025-11-25 spec significantly improves the story
for long-running requests.

Previously, clients could disconnect and reconnect if the server provided an Event ID in SSE events,
but few servers implemented this — partly because it implied supporting stream resumption from any
event ID all the way back to the start. And servers couldn’t proactively disconnect; they had to
wait for clients to do so.

The new approach is cleaner. Servers that open an SSE stream for a request begin with an empty event
that includes an Event ID and optionally a Retry-After field. After sending this initial event,
servers can close the stream at any time, since the client can reconnect using the Event ID.

Server support for long-running requests

To enable this, the server provides an ISseEventStreamStore implementation. The SDK includes
DistributedCacheEventStreamStore, which works with any IDistributedCache:

// Add a MemoryDistributedCache to the service collection
builder.Services.AddDistributedMemoryCache();
// Add the MCP server with DistributedCacheEventStreamStore for SSE stream storage
builder.Services
    .AddMcpServer()
    .WithHttpTransport()
    .WithDistributedCacheEventStreamStore()
    .WithTools<RandomNumberTools>();

When a request handler wants to drop the SSE connection and let the client poll for the result,
it calls EnablePollingAsync on the McpRequestContext:

await context.EnablePollingAsync(retryInterval: TimeSpan.FromSeconds(retryIntervalInSeconds));

The McpRequestContext is available in handlers for MCP requests by simply adding it as a parameter to the handler method.
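As a sketch of that pattern, a long-running tool handler might look like the following. The tool name, delay, and retry interval are illustrative assumptions; the McpRequestContext parameter and EnablePollingAsync call follow the usage described above:

```csharp
[McpServerTool]
[Description("A long-running tool that detaches from the SSE stream and lets the client poll.")]
public static async Task<string> LongRunningTool(
    McpRequestContext context, CancellationToken cancellationToken)
{
    // Tell the client to reconnect and poll instead of holding the SSE stream open.
    await context.EnablePollingAsync(retryInterval: TimeSpan.FromSeconds(10));

    // Simulated long-running work (illustrative only).
    await Task.Delay(TimeSpan.FromMinutes(2), cancellationToken);
    return "Work completed.";
}
```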

Implementation considerations

Event stream stores can be susceptible to unbounded memory growth, so apply
appropriate retention limits, for example by expiring stored events after a
bounded retention window.

Tasks (experimental)

Note: Tasks are an experimental feature in the 2025-11-25 MCP Specification. The API may change in future releases.

The 2025-11-25 version of the MCP Specification introduces tasks, a new primitive that provides durable state tracking
and deferred result retrieval for MCP requests. While stream resumability
handles transport-level concerns like reconnection and event replay, tasks operate at the data layer to ensure
that request results are durably stored and can be retrieved at any point within a server-defined retention window —
even if the original connection is long gone.

The key concept is that tasks augment existing requests rather than replacing them.
A client includes a task field in a request (e.g. tools/call) to signal that it wants durable result tracking.
Instead of the normal response, the server returns a CreateTaskResult containing task metadata — a unique task ID, the current status (working),
timestamps, a time-to-live (TTL), and optionally a suggested poll interval.
The client then uses tasks/get to poll for status, tasks/result to retrieve the stored result,
tasks/list to enumerate tasks, and tasks/cancel to cancel a running task.

This durability is valuable in several scenarios:

  • Resilience to dropped results: If a result is lost due to a network failure, the client can retrieve it again by task ID
    rather than re-executing the operation.
  • Explicit status tracking: Clients can query the server to determine whether a request is still in progress, succeeded, or failed,
    rather than relying on notifications or waiting indefinitely.
  • Integration with workflow systems: MCP servers wrapping existing workflow APIs (e.g. CI/CD pipelines, batch processing, multi-step analysis)
    can map their existing job tracking directly to the task primitive.

Tasks follow a defined lifecycle through these status values:

  • working: Task is actively being processed
  • input_required: Task is waiting for additional input (e.g., elicitation)
  • completed: Task finished successfully; results are available
  • failed: Task encountered an error
  • cancelled: Task was cancelled by the client

The last three states (completed, failed, and cancelled) are terminal — once a task reaches one of these states, it cannot transition to any other state.

Task support is negotiated through explicit capability declarations during initialization.
Servers declare that they support task-augmented tools/call requests, while clients can declare support for
task-augmented sampling/createMessage and elicitation/create requests.

Server support for tasks

To enable task support on an MCP server, configure a task store when setting up the server.
The task store is responsible for managing task state — creating tasks, storing results, and handling cleanup.

var taskStore = new InMemoryMcpTaskStore();

builder.Services.AddMcpServer(options =>
{
    options.TaskStore = taskStore;
})
.WithHttpTransport()
.WithTools<MyTools>();

// Alternatively, you can register an IMcpTaskStore globally with DI, but you only need to configure it one way.
//builder.Services.AddSingleton<IMcpTaskStore>(taskStore);

The InMemoryMcpTaskStore is a reference implementation suitable for development and single-server deployments.
For production multi-server scenarios, implement IMcpTaskStore
with a persistent backing store (database, Redis, etc.).

The InMemoryMcpTaskStore constructor accepts several optional parameters to control task retention, polling behavior,
and resource limits:

var taskStore = new InMemoryMcpTaskStore(
    defaultTtl: TimeSpan.FromHours(1),         // Default task retention time
    maxTtl: TimeSpan.FromHours(24),            // Maximum allowed TTL
    pollInterval: TimeSpan.FromSeconds(1),     // Suggested client poll interval
    cleanupInterval: TimeSpan.FromMinutes(5),  // Background cleanup frequency
    pageSize: 100,                             // Tasks per page for listing
    maxTasks: 1000,                            // Maximum total tasks allowed
    maxTasksPerSession: 100);                  // Maximum tasks per session

Tools automatically advertise task support when they return Task, ValueTask, Task<T>, or ValueTask<T> (i.e. async methods).
You can explicitly control task support on individual tools using the ToolTaskSupport enum:

  • Forbidden (default for sync methods): Tool cannot be called with task augmentation
  • Optional (default for async methods): Tool can be called with or without task augmentation
  • Required: Tool must be called with task augmentation

Set TaskSupport on the McpServerTool attribute:

[McpServerTool(TaskSupport = ToolTaskSupport.Required)]
[Description("Processes a batch of data records. Always runs as a task.")]
public static async Task<string> ProcessData(
    [Description("Number of records to process")] int recordCount,
    CancellationToken cancellationToken)
{
    await Task.Delay(TimeSpan.FromSeconds(8), cancellationToken);
    return $"Processed {recordCount} records successfully.";
}

Or set it via McpServerToolCreateOptions.Execution when registering tools explicitly:

builder.Services.AddMcpServer()
    .WithTools([
        McpServerTool.Create(
            (int count, CancellationToken ct) => ProcessAsync(count, ct),
            new McpServerToolCreateOptions
            {
                Name = "requiredTaskTool",
                Execution = new ToolExecution { TaskSupport = ToolTaskSupport.Required }
            })
    ]);

For more control over the task lifecycle, a tool can directly interact with
IMcpTaskStore and return an McpTask.
This bypasses automatic task wrapping and allows the tool to create a task, schedule background work, and return immediately.
Note: use a static method and accept IMcpTaskStore as a method parameter rather than via constructor injection
to avoid DI scope issues when the SDK executes the tool in a background context.

Client support for tasks

To execute a tool as a task, a client includes the Task property in the request parameters:

var result = await client.CallToolAsync(
    new CallToolRequestParams
    {
        Name = "processDataset",
        Arguments = new Dictionary<string, JsonElement>
        {
            ["recordCount"] = JsonSerializer.SerializeToElement(1000)
        },
        Task = new McpTaskMetadata { TimeToLive = TimeSpan.FromHours(2) }
    },
    cancellationToken);

if (result.Task != null)
{
    Console.WriteLine($"Task created: {result.Task.TaskId}");
    Console.WriteLine($"Status: {result.Task.Status}");
}

The client can then poll for status updates and retrieve the final result:

// Poll until task reaches a terminal state
var completedTask = await client.PollTaskUntilCompleteAsync(
    taskId, cancellationToken: cancellationToken);

switch (completedTask.Status)
{
    case McpTaskStatus.Completed:
        var resultJson = await client.GetTaskResultAsync(
            taskId, cancellationToken: cancellationToken);
        var result = resultJson.Deserialize<CallToolResult>(McpJsonUtilities.DefaultOptions);
        foreach (var content in result?.Content ?? [])
        {
            if (content is TextContentBlock text)
            {
                Console.WriteLine(text.Text);
            }
        }
        break;
    case McpTaskStatus.Failed:
        // ...
        break;
    case McpTaskStatus.Cancelled:
        // ...
        break;
}

The SDK also provides methods to list all tasks (ListTasksAsync)
and cancel running tasks (CancelTaskAsync):

// List all tasks for the current session
var tasks = await client.ListTasksAsync(cancellationToken: cancellationToken);

// Cancel a running task
var cancelledTask = await client.CancelTaskAsync(taskId, cancellationToken: cancellationToken);

Clients can optionally register a handler to receive status notifications as they arrive,
but should always use polling as the primary mechanism since notifications are optional:

var options = new McpClientOptions
{
    Handlers = new McpClientHandlers
    {
        TaskStatusHandler = (task, cancellationToken) =>
        {
            Console.WriteLine($"Task {task.TaskId} status changed to {task.Status}");
            return ValueTask.CompletedTask;
        }
    }
};

Summary

The v1.0 release of the MCP C# SDK represents a major step forward for building MCP servers and clients in .NET.
Whether you’re implementing secure authorization flows, building rich tool experiences with sampling,
or handling long-running operations gracefully, the SDK has you covered.

Check out the full changelog
and the C# SDK repository to get started.

Demo projects for many of the features described here are available in the
mcp-whats-new demo repository.


What’s new for the WinForms Visual Basic Application Framework

Klaus Loeffelmann

Melissa Trevino

.NET, from .NET Core 3.1 up to .NET 7, has plenty of advantages over .NET
Framework: it provides performance improvements in almost every area, and those
improvements have been an ongoing effort across each .NET version. The latest
improvements in .NET 6 and .NET 7 are really worth checking out.

Migrating your Windows Forms (WinForms) Visual Basic apps to .NET 6/7+ also
allows you to adopt modern technologies which are not (or are no longer)
supported in .NET Framework. EF Core is one example: it is a modern Entity
Framework data access technology that enables .NET developers to work with
database backends using .NET objects. Although it is not natively supported for
VB by Microsoft, it is designed in a way that makes it easy for the community
to build on it and provide code generation support for additional languages
like Visual Basic. In that context there are also changes and improvements in
the new WinForms out-of-process designer for .NET, especially around Object
Data Sources. For the WinForms .NET runtime, there are a series of improvements
in different areas which have been introduced with the latest releases of .NET.

The new Visual Basic Application Framework Experience

In contrast to the project property Application Framework designer experience in
earlier versions of Visual Studio and for .NET Framework, you will notice that
the project properties UI in Visual Studio has changed. Its style is now in
parity with the project properties experience for other .NET project types: we
have invested in modernizing the experience for developers, focusing on
enhancing productivity and a modern look and feel.

Screenshot of the new Visual Basic Application Framework project settings designer.

We’ve added theming and search to the new experience. If this is the first time
you’re working with the new project properties experience in Visual Studio, it’s
a good idea to read up on the introductory blog post.

In contrast to C# projects, Visual Basic Application Framework projects use a
special file for storing the Application Framework project settings: the
Application.myapp file. We’ll talk more about the technical details of how
this file connects the project settings to the VB project specific code
generation of the My namespace later, but one thing to keep in mind is how the
UI translates each property’s value to this file:

  • Windows Visual Styles determines whether the application uses the most
    current version of the common control library comctl.dll to render controls
    with modern visual styling. This setting translates to the value
    EnableVisualStyles of type Boolean inside of Application.myapp.

  • Single-instance application determines whether the application prevents
    users from running multiple instances. This setting is switched off by
    default, which allows multiple instances of the application to run
    concurrently. It translates to the value SingleInstance of type Boolean.

  • Save user settings on exit determines whether the application settings are
    automatically saved when the app is about to shut down. The settings can be
    changed with the settings editor. In contrast to .NET Framework, a new Visual
    Basic Application Framework app doesn’t contain a settings file by default,
    but you can easily add one via the project properties, should you need one,
    and then manage the settings interactively.

    Screenshot of the Settings section in the Application Framework project's property pages

    Adding to the list of settings automatically generates the respective code,
    which can easily be accessed through the My object in the Visual Basic
    Application Framework at runtime. This setting translates to the value
    SaveMySettingsOnExit of type Boolean.

  • High DPI mode identifies the application-wide HighDpiMode for the
    application. Note that this setting can be programmatically overridden
    through the HighDpiMode property of the ApplyApplicationDefaultsEventArgs
    of the ApplyApplicationDefaults application event. Choose from the
    following settings:

    • DPI unaware (0): The application window does not scale for DPI changes and
      always assumes a scale factor of 100%. For higher resolutions, this will
      make text and fine drawings blurrier, but it may be the best setting for
      apps that demand high backwards compatibility in rendering content.
    • DPI unaware GDI scaled (4): Similar to DPI unaware, but improves the
      quality of GDI/GDI+-based content. Please note that this mode will not
      work as expected when you have enabled double buffering for control
      rendering via OnPaint and related functionality.
    • Per monitor (2): Per-Monitor DPI allows individual displays to have their
      own DPI scaling setting. WinForms doesn’t optimize for this mode, and
      Per-Monitor V2 should be used instead.
    • Per monitor V2 (3): Per-Monitor V2 offers more advanced scaling features
      such as improved support for mixed-DPI environments, improved display
      enumeration, and support for dynamically scaling the non-client area of
      windows. In WinForms, common controls are optimized for this high-DPI
      mode. Please note the events Form.DpiChanged,
      Control.DpiChangedAfterParent, and Control.DpiChangedBeforeParent,
      when your app needs to scale content up or down based on a changed DPI
      environment, for example when the user of your app has dragged a Form
      from one monitor to another monitor with a different DPI setting.
    • System aware (1): The application queries the DPI of the primary monitor
      once and uses it for the application on all monitors. When content in
      forms is dragged from one monitor to another with a different high-DPI
      setting, content might become blurry. SystemAware is WinForms’ most
      compatible high-DPI rendering mode for all supported controls.
  • Authentication mode specifies the method of identifying the logged-on
    user, when needed. The setting translates to the value AuthenticationMode
    as an enum value of type Integer:

    • 0: The WindowsFormsApplicationBase(AuthenticationMode) constructor does
      not automatically initialize the principal for the application’s main
      thread. It is entirely the developer’s task to manage authentication for
      the user.
    • 1: The WindowsFormsApplicationBase(AuthenticationMode) constructor
      initializes the principal for the application’s main thread with the current
      user’s Windows user info.
  • Shutdown mode indicates which condition causes the application to shut
    down. This setting translates to the value ShutdownMode as an enum value
    of type Integer (Note: Please also refer to the application event
    ShutDown and the further remarks below.):

    • 0: When the main form closes.
    • 1: Only after the last form closes.
  • Splash screen represents the name of the form to be used as a splash screen
    for the application. Note that the form name must not include the filename
    extension (.vb). This setting translates to the value SplashScreen of type
    String.

    Note: you may be missing the settings for the Splash dialog up to
    Visual Studio 2022 version 17.5. For a workaround, read the comments in
    the section “A look behind the scenes”. To recap: a “Splash” dialog is
    typically displayed for a few seconds when an application is launched.
    Visual Basic has an item template which you can use to add a basic splash
    dialog to your project. It usually displays the logo or name of the
    application, along with some kind of animation or visual effects, to give
    users the impression that the application is loading or initializing. The
    term “splash” in this context is used because the dialog is designed to
    create a splash or impact on the user, drawing their attention to the
    application while it loads.

  • Application Framework is saved both in the Application.myapp file and the
    .vbproj file:

    • Application.myapp saves the setting MySubMain of type Boolean to
      identify if the Application Framework is enabled.
    • .vbproj uses the setting MyType for identifying the usage of the
      Application Framework for a VB project. If the Application Framework is
      enabled, the value is WindowsForms; if the Application Framework is
      disabled, the value is WindowsFormsWithCustomSubMain.
  • Startup object is the name of the form that will be used as the entry
    point, without its filename extension. Note: this property is found in the
    project property Settings under the General section, and not in the
    Application Framework section. This setting translates to the value MainForm of type
    String, when the Application Framework is activated. The start object setting in
    the .vbproj file is ignored in that case – see also the comments below on this
    topic.

A new look for custom constants

Screenshot of the new custom constants editor in the project properties UI.

We are introducing a new custom constants control in the modernized project
property pages for VB projects, which encodes the input in the format
key="value". Our goal is that users will be able to input their custom constants
in a more streamlined key-value pair format, thus enhancing their productivity.
Feedback is welcome – if you have any comments or suggestions, feel free to
reach out to the project system team by filing a new issue or
commenting on existing ones.

A look behind the scenes of the WinForms VB Application Framework

The way basic properties and behaviors of a WinForms app are controlled and
configured is fundamentally different between C# and Visual Basic. In C#, every
app starts with a static method called Main, which can usually be found in a
file called Program.cs, and in that Main method all the settings get applied.
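For reference, a typical C# WinForms entry point looks like this minimal sketch, using the .NET 6+ ApplicationConfiguration bootstrap helper (Form1 is a placeholder form name); VB’s Application Framework generates the equivalent of this for you:

```csharp
using System;
using System.Windows.Forms;

internal static class Program
{
    [STAThread]
    static void Main()
    {
        // Applies project-level defaults (visual styles, DPI mode, default font) on .NET 6+.
        ApplicationConfiguration.Initialize();
        Application.Run(new Form1());
    }
}
```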

That is different in Visual Basic. Since VB apps in WinForms are based on the
Application Framework runtime, there are a few features which aren’t
intrinsically available to C# WinForms apps to begin with, like automatically
showing splash dialogs (see below) or ensuring a single-instance application
start. Since you configure most parts of your app interactively in VB at design
time with the settings described above, the actual code which honors or ensures
those settings later at runtime is mostly code-generated and somewhat hidden
behind the scenes. The starting point of a VB app is therefore not so obvious.
There are also a series of differences in .NET Visual Basic apps when it comes
to hooking up event code which is supposed to run, for example when a VB
WinForms app starts, ends, or runs into an unhandled exception – just to name a
few examples.

That all said, technically Visual Basic doesn’t break any fundamental rules.
Under the hood, there is of course a Shared Sub Main when you activate the
Application Framework. You just do not write it yourself, and you don’t see it,
because it is generated by the VB compiler and then automatically added to your
Start Form. This is done by activating the VB compiler switch
/main.

At the same time, when you are activating the Application Framework, a series of
conditional compiler constants are defined. One of the constants is called
_mytype. If that constant is defined as Windows then the VB compiler
generates all the necessary infrastructure code to support the Application
Framework. If that constant is defined as WindowsFormsWithCustomSubMain
however, the VB compiler just generates the bare minimum infrastructure code and
doesn’t apply any settings to the WinForms app on startup. The latter happens
when you deactivate the Application Framework. This setting is stored in the
vbproj project file, along with the Start Form. What’s important to know in
this context: only in the case of WindowsFormsWithCustomSubMain, so with the
Application Framework deactivated, is the Start Form definition actually taken
from the vbproj file. When the Application Framework is activated, the
aforementioned Application.myapp file is used as the settings container
instead. Note that by default you cannot find that file in Solution Explorer.

Screenshot of solution explorer showing the Application.myapp file.

First, make sure Show All Files is enabled for that project (see the screenshot
above). Then you can open the My Project folder and open that settings file in
the editor by double-clicking it in Solution Explorer. The content of that file
looks something like this:

<?xml version="1.0" encoding="utf-16"?>
<MyApplicationData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <MySubMain>true</MySubMain>
  <MainForm>Form1</MainForm>
  <SingleInstance>false</SingleInstance>
  <ShutdownMode>0</ShutdownMode>
  <EnableVisualStyles>true</EnableVisualStyles>
  <AuthenticationMode>0</AuthenticationMode>
  <SaveMySettingsOnExit>true</SaveMySettingsOnExit>
  <HighDpiMode>3</HighDpiMode>
</MyApplicationData>

Note: Visual Studio 2022 before version 17.6 (Preview 3) won’t have the
option to pick a splash dialog interactively, as mentioned above. An
interactive designer for setting the splash form will only be available from
that version on. Until then, you can manually patch the Application.myapp file
to trigger the code generation for the Splash dialog. Insert the following
line in that file and save the changes.

<SplashScreen>SplashDialog</SplashScreen>

When you do this, make sure not to include the filename extension (.vb) in
that definition, because otherwise the required code does not get generated.

Application.myapp as the source for code generation

Now, if you take a closer look at that file’s properties in the property
browser, you’ll see that it triggers a custom tool which is invoked whenever
the file is saved.

Screenshot of solution explorer showing the properties for the Application.myapp file.

And that custom tool generates VB code which you
can find under the Application.myapp node in the Solution Explorer in
Application.Designer.vb. It does the following:

  • It defines a Friend Partial Class MyApplication. With the Application
    Framework enabled, that class inherits from WindowsFormsApplicationBase.
    You don’t see that Inherits statement here; the reason is that the major
    part of that class’ definition is injected by the Visual Basic compiler
    based on the earlier defined conditional constant _mytype.
  • It generates the code to apply all the settings which were saved in the
    Application.myapp file.
  • It creates code for a method which overrides
    OnCreateMainForm.
    In that method, it assigns the Form, which is defined as the start form in the
    Application.myapp file.

Warning: Application.Designer.vb is not supposed to be edited, as it’s
auto-generated; any changes will be lost as soon as you modify
Application.myapp. Use the project properties UI instead.

Now, the class which is injected by the compiler is also responsible for
generating everything which the Visual Basic Application Framework provides you
via the My namespace. The My namespace simplifies access to frequently used
information about your WinForms app and your system, as well as to frequently
used APIs. Part of the My namespace for an activated Application Framework is
the Application property, whose return type is exactly the type defined by the
class generated from your application settings and then merged with the
injected Visual Basic compiler file mentioned earlier. So, if you access
My.Application, you are basically accessing a single instance of the
My.MyApplication type which the generated code defines.

With this context understood, we can move on to two additional features of the
Application Framework. The first one is extending the My namespace with
additional function areas. We won’t go too much into them, because there are
detailed docs about the My namespace and how to extend it.

An even more important concept to understand is Application Events, which are
provided by the Application Framework. Since there isn’t a good way to
intercept the startup or shutdown of an app (since that code gets generated
and sort of hidden inside the main Form), Application Events are the way to be
notified of certain application-global occurrences.

Note in this context that there is a small breaking change in the UI: while in
.NET Framework you had to insert a code file named ApplicationEvents.vb via
the property settings of the VB project, in a .NET Core app this file is there
from the start when you’ve created a new Application Framework project.

To wire up the available application events, you open the ApplicationEvents.vb
code file, select ApplicationEvents from the Object drop-down list, and pick
the application event you want to wire up from the events list:

Animated gif showing how to wire up Application Events in the ApplicationEvents.vb code file

As you can see, the ApplicationEvents.vb code file again extends the MyApplication class – this time with the event handlers you place there on demand. The options you have here are:

  • Startup: raised when the application starts, before the start form is created.
  • Shutdown: raised after all application forms are closed. This event is not raised if the application terminates abnormally.
  • UnhandledException: raised if the application encounters an unhandled exception.
  • StartupNextInstance: raised when launching a single-instance application and the application is already active.
  • NetworkAvailabilityChanged: raised when the network connection is connected or disconnected.
  • ApplyApplicationDefaults: raised when the application queries default values to be set for the application.
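As a sketch, handling the ApplyApplicationDefaults event to set the application-wide default HighDpiMode might look like this in ApplicationEvents.vb (the chosen mode is just an example; the partial class and event args follow the Application Framework types described above):

```vb
Namespace My
    Partial Friend Class MyApplication
        Private Sub MyApplication_ApplyApplicationDefaults(
                sender As Object,
                e As ApplyApplicationDefaultsEventArgs) Handles Me.ApplyApplicationDefaults
            ' Example: make Per-Monitor V2 the application-wide default DPI mode.
            e.HighDpiMode = HighDpiMode.PerMonitorV2
        End Sub
    End Class
End Namespace
```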

Note: More general information about the Visual Basic Application Model is provided in the Microsoft Learn docs on this topic. Also note that, on top of the extensibility of the My namespace, this Application Model also has extensibility points, which are described in great detail by the respective docs.

Summary

With the new and modernized project properties pages, the WinForms Application
Framework is ready for developing new Visual Basic apps based on .NET 6, 7, 8,
and beyond. It’s also the right time to think about modernizing your older
.NET Framework based VB apps and bringing them over to .NET 6, 7, 8+. WinForms
and the .NET runtime deliver countless new features and provide considerable
performance improvements for your apps in almost every area. Visual Basic and
the Visual Basic Application Framework are and will continue to be first-class
citizens and are fully supported in WinForms. Our plans are to continue
modernizing around the VB Application Framework in the future without breaking
code for existing projects.

And, as always: Feedback about the subject matter is really important to us, so
please let us know your thoughts and additional ideas! Please also note that
WinForms for .NET and the Visual Basic Application Framework runtime are open
source, and you can contribute! If you have general feature ideas, encountered
bugs, or even want to take on existing issues around the WinForms runtime and
submit PRs, have a look at the WinForms GitHub repo. If you have suggestions
around the WinForms Designer, feel free to file new issues there as well.

Happy coding!


Redesigning Configuration Refresh for Azure App Configuration


Overview

Since its inception, the .NET Core configuration provider for Azure App Configuration has provided the capability to monitor changes and sync them to the configuration within a running application. We recently redesigned this functionality to allow for on-demand refresh of the configuration. The new design paves the way for smarter applications that only refresh the configuration when necessary. As a result, inactive applications no longer have to monitor for configuration changes unnecessarily.
 

Initial design: Timer-based watch

In the initial design, configuration was kept in sync with Azure App Configuration using a watch mechanism which ran on a timer. At the time of initialization of the Azure App Configuration provider, users could specify the configuration settings to be updated and an optional polling interval. In case the polling interval was not specified, a default value of 30 seconds was used.

public static IWebHost BuildWebHost(string[] args)
{
    return WebHost.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            // Load settings from Azure App Configuration
            // Set up the provider to listen for changes triggered by a sentinel value
            var settings = config.Build();
            string appConfigurationEndpoint = settings["AzureAppConfigurationEndpoint"];

            config.AddAzureAppConfiguration(options =>
            {
                options.ConnectWithManagedIdentity(appConfigurationEndpoint)
                    .Use(keyFilter: "WebDemo:*")
                    .WatchAndReloadAll(key: "WebDemo:Sentinel", label: LabelFilter.Null);
            });

            settings = config.Build();
        })
        .UseStartup<Startup>()
        .Build();
}

For example, in the above code snippet, Azure App Configuration would be pinged every 30 seconds for changes. These calls would be made irrespective of whether the application was active or not. As a result, there would be unnecessary usage of network and CPU resources within inactive applications. Applications needed a way to trigger a refresh of the configuration on demand in order to be able to limit the refreshes to active applications. Then unnecessary checks for changes could be avoided.

This timer-based watch mechanism had the following fundamental design flaws.

  1. It could not be invoked on-demand.
  2. It continued to run in the background even in applications that could be considered inactive.
  3. It promoted constant polling of configuration rather than a more intelligent approach of updating configuration when applications are active or need to ensure freshness.
     

New design: Activity-based refresh

The new refresh mechanism allows users to keep their configuration updated using a middleware to determine activity. As long as the ASP.NET Core web application continues to receive requests, the configuration settings stay in sync with the configuration store.

The application can be configured to trigger refresh for each request by adding the Azure App Configuration middleware from package Microsoft.Azure.AppConfiguration.AspNetCore in your application’s startup code.

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseAzureAppConfiguration();
    app.UseMvc();
}

When initializing the configuration provider, the user can call the ConfigureRefresh method to register the configuration settings to be updated, along with an optional cache expiration time. If the cache expiration time is not specified, a default value of 30 seconds is used.

public static IWebHost BuildWebHost(string[] args)
{
    return WebHost.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            // Load settings from Azure App Configuration
            // Set up the provider to listen for changes triggered by a sentinel value
            var settings = config.Build();
            string appConfigurationEndpoint = settings["AzureAppConfigurationEndpoint"];

            config.AddAzureAppConfiguration(options =>
            {
                options.ConnectWithManagedIdentity(appConfigurationEndpoint)
                    .Use(keyFilter: "WebDemo:*")
                    .ConfigureRefresh((refreshOptions) =>
                    {
                        // Indicates that all settings should be refreshed when the given key has changed
                        refreshOptions.Register(key: "WebDemo:Sentinel", label: LabelFilter.Null, refreshAll: true);
                    });
            });

            settings = config.Build();
        })
        .UseStartup<Startup>()
        .Build();
}

To keep the settings updated while avoiding unnecessary calls to the configuration store, an internal cache is used for each setting. Until the cached value of a setting has expired, the refresh operation does not update the value, even if the value has changed in the configuration store.
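For illustration, a refresh registration that overrides the default 30-second cache expiration might look like the following. This is a sketch based on the provider's refresh options at the time of writing; the SetCacheExpiration call and the five-minute interval are assumptions for the example, so check the current API surface before relying on it.

```csharp
config.AddAzureAppConfiguration(options =>
{
    options.ConnectWithManagedIdentity(appConfigurationEndpoint)
        .Use(keyFilter: "WebDemo:*")
        .ConfigureRefresh(refreshOptions =>
        {
            // Refresh all settings when the sentinel key changes,
            // but contact the store at most once every 5 minutes
            refreshOptions.Register(key: "WebDemo:Sentinel", label: LabelFilter.Null, refreshAll: true)
                          .SetCacheExpiration(TimeSpan.FromMinutes(5));
        });
});
```

A longer expiration trades freshness for fewer requests to the configuration store; a sentinel key keeps the check cheap because only one key needs to be watched.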

Try it now!

For more information about Azure App Configuration, check out the following resources. The step-by-step tutorials will help you get started with dynamic configuration using the new refresh mechanism within minutes. Please let us know what you think by filing issues on GitHub.

Overview: Azure App Configuration
Tutorial: Use dynamic configuration in an ASP.NET Core app
Tutorial: Use dynamic configuration in a .NET Core app
Related Blog: Configuring a Server-side Blazor app with Azure App Configuration

Software Engineer, Azure App Configuration

A Penny Saved is a Ton of Serverless Compute Earned

Scott Guthrie recently shared one of my favorite anecdotes on his Azure Red Shirt Tour. A Microsoft customer regularly invokes 1 billion (yes, that’s with a “B”) Azure Functions per day. The customer reached out to support after the first month thinking there was a bug in the billing system, only to find out that the $72 was in fact correct. How is that possible? Azure Functions is a serverless compute platform that allows you to focus on code that only executes when triggered by events, and you only pay for CPU time and memory used during execution (versus a traditional web server where you are paying a fee even if your app is idle). This is called micro-billing, and is one key reason serverless computing is so powerful.

Curious about Azure Functions? Follow the link https://aka.ms/go-funcs to get up and running with your first function in minutes.

Scott Guthrie Red Shirt

Scott Guthrie on the Azure Red Shirt Tour

In fact, micro-billing is so important, it’s one of three rules I use to verify whether a service is serverless. There is no official set of rules and no standard for serverless. The closest thing to a standard is the whitepaper published by the Cloud Native Computing Foundation titled CNCF WG-Serverless Whitepaper v1.0 (PDF). The paper describes serverless computing as “building and running applications that do not require server management.” It goes on to state that such applications are “executed, scaled, and billed in response to the exact demand needed at the moment.”

It’s easy to label almost everything serverless, but there is a difference between managed and serverless. A managed service takes care of responsibilities for you, such as standing up a website or hosting a Docker container. Serverless is a managed service, but it requires a bit more. Here are Jeremy’s Serverless Rules.

  1. The service should be capable of running entirely in the cloud. Running locally is fine and often preferred for developing, testing, and debugging, but ultimately it should end up in the cloud.
  2. You don’t have to configure a virtual machine or cluster. Docker is great, but containers require a Docker host to run. That host typically means setting up a VM and, for resiliency and scale, using an orchestrator like Kubernetes to scale the solution. There are also services like Azure Web Apps that provide a fully managed experience for running web apps and containers, but I don’t consider them serverless because they break the next rule.
  3. You only pay for active invocations and never for idle time. This rule is important, and the essence of micro-billing. ACI is a great way to run a container, but I pay for it even when it’s not being used. A function, on the other hand, only bills when it’s called.

These rules are why I stopped calling managed databases “serverless.” So what, then, does qualify as serverless?

The Azure serverless platform includes Azure Functions, Logic Apps, and Event Grid. In this post, we’ll take a closer look at Azure Functions.

Azure Functions

Azure Functions allows you to write code that is executed based on an event, or trigger. Triggers include an HTTP request, a timer, a message in a queue, and any number of other events. The code is passed details of the trigger but can also access bindings that make it easier to connect to resources like databases and storage. The serverless Azure Functions pricing model is based on two parameters: invocations and gigabyte seconds.
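To make triggers and bindings concrete, here is a minimal sketch of an HTTP-triggered function that also writes to a storage queue through an output binding, using the in-process C# class-library model (Microsoft.NET.Sdk.Functions). The function name, queue name, and message format are illustrative, not taken from the post.

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class HelloFunction
{
    // Runs whenever an HTTP GET hits /api/Hello. The [Queue] output
    // binding writes a message to the "greetings" queue without any
    // explicit storage SDK code in the function body.
    [FunctionName("Hello")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
        [Queue("greetings")] out string queueMessage,
        ILogger log)
    {
        string name = req.Query["name"];
        queueMessage = $"Greeted {name}";
        log.LogInformation("Processed request for {Name}", name);
        return new OkObjectResult($"Hello, {name}");
    }
}
```

The trigger decides when the code runs; the binding handles the plumbing to the queue. Both are declared as attributes, which is what keeps the function body focused on business logic.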

Invocations are the number of times the function is executed based on its trigger. Gigabyte seconds are a function of memory usage over time. Imagine a graph with time on the x-axis and memory consumption on the y-axis, and plot the memory usage of your function over time. Gigabyte seconds represent the area under that curve.

Let’s assume you have a microservice that is called every minute and takes one second to scan and aggregate data, using a steady 128 megabytes of memory during the run. Using the Azure Pricing Calculator, you’ll find that this workload costs nothing, because the first 400,000 gigabyte seconds and 1 million invocations are free every month. Running every second (there are 2,628,000 seconds in a month) with double the memory (256 megabytes), the entire monthly cost is estimated at $4.51.
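The arithmetic behind those estimates can be sketched in a few lines. The rates below ($0.000016 per GB-second and $0.20 per million executions) are the published consumption-plan prices at the time of writing, and are an assumption of this sketch; the calculator may also apply rounding rules this simple model ignores, which is why the second scenario comes out slightly under the calculator's $4.51.

```csharp
using System;

class FunctionsCostEstimate
{
    // Assumed consumption-plan rates; check the pricing calculator for current values
    const double PricePerGbSecond = 0.000016;        // USD
    const double PricePerMillionExecutions = 0.20;   // USD
    const double FreeGbSeconds = 400_000;
    const double FreeExecutions = 1_000_000;

    static double MonthlyCost(double executions, double secondsPerExecution, double memoryGb)
    {
        // Area under the memory-over-time curve, summed across executions
        double gbSeconds = executions * secondsPerExecution * memoryGb;
        double computeCost = Math.Max(0, gbSeconds - FreeGbSeconds) * PricePerGbSecond;
        double executionCost = Math.Max(0, executions - FreeExecutions) / 1_000_000 * PricePerMillionExecutions;
        return computeCost + executionCost;
    }

    static void Main()
    {
        // Once a minute, 1 s, 128 MB: 43,800 * 0.125 = 5,475 GB-s, inside the free grant
        Console.WriteLine(MonthlyCost(43_800, 1, 0.125));     // 0

        // Once a second, 1 s, 256 MB: 2,628,000 * 0.25 = 657,000 GB-s
        Console.WriteLine(MonthlyCost(2_628_000, 1, 0.25));   // ≈ 4.44
    }
}
```

The second scenario pays for 257,000 GB-seconds beyond the free grant plus 1,628,000 executions beyond the free million, which is where nearly all of the cost comes from.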

Azure Functions pricing

Pricing calculator for Azure Functions

Recently I tweeted about my own experience with serverless cost (or lack thereof). I wrote a link-shortening tool. It uses a function to take long URLs and turn them into a shorter code I can easily share. I also have a function that takes the short code, performs the redirect, then stores the data in a queue. Another microservice processes items in the queue and stores metadata that I can analyze later. I have tens of thousands of invocations per month and my total cost is less than a dollar.

Link shortener stats

A tweet about cost of running serverless code in Azure

Do I have your attention?

In future posts I will explore the cost model for Logic Apps and Event Grid. In the meantime…

Learn about and get started with your first Azure Function by following this link: https://aka.ms/go-funcs