
ASP.NET Core updates in .NET Core 3.0 Preview 3

Daniel Roth

.NET Core 3.0 Preview 3 is now available and it includes a bunch of new updates to ASP.NET Core.

Here’s the list of what’s new in this preview:

  • Razor Components improvements:
    • Single project template
    • New .razor extension
    • Endpoint routing integration
    • Prerendering
    • Razor Components in Razor Class Libraries
    • Improved event handling
    • Forms & validation
  • Runtime compilation
  • Worker Service template
  • gRPC template
  • Angular template updated to Angular 7
  • Authentication for Single Page Applications
  • SignalR integration with endpoint routing
  • SignalR Java client support for long polling

Please see the release notes for additional details and known issues.

Get started

To get started with ASP.NET Core in .NET Core 3.0 Preview 3, install the .NET Core 3.0 Preview 3 SDK.

If you’re on Windows using Visual Studio, you’ll also want to install the latest preview of Visual Studio 2019.

  • Note: To use this preview of .NET Core 3.0 with the release channel of Visual Studio 2019, you will need to enable the option to use previews of the .NET Core SDK by going to Tools > Options > Projects and Solutions > .NET Core > Use previews of the .NET Core SDK.

Upgrade an existing project

To upgrade an existing ASP.NET Core app to .NET Core 3.0 Preview 3, follow the migration steps in the ASP.NET Core docs.

Please also see the full list of breaking changes in ASP.NET Core 3.0.

Razor Components improvements

In the previous preview, we introduced Razor Components as a new way to build interactive client-side web UI with ASP.NET Core. This section covers the various improvements we’ve made to Razor Components in this preview update.

Single project template

The Razor Components project template is now a single project instead of two projects in the same solution. The authored Razor Components live directly in the ASP.NET Core app that hosts them. The same ASP.NET Core project can contain Razor Components, Pages, and Views. The Razor Components template also has HTTPS enabled by default like the other ASP.NET Core web app templates.

New .razor extension

Razor Components are authored using Razor syntax, but are compiled differently than Razor Pages and Views. To clarify which Razor files should be compiled as Razor Components we’ve introduced a new file extension: .razor. In the Razor Components template all component files now use the .razor extension. Razor Pages and Views still use the .cshtml extension.

Razor Components can still be authored using the .cshtml file extension as long as those files are identified as Razor Component files using the _RazorComponentInclude MSBuild property. For example, the Razor Component template in this release specifies that all .cshtml files under the Components folder should be treated as Razor Components.

<_RazorComponentInclude>Components\**\*.cshtml</_RazorComponentInclude>

Please note that there are a number of limitations with .razor files in this release. Please refer to the release notes for the list of known issues and available workarounds.

Endpoint routing integration

Razor Components are now integrated into ASP.NET Core’s new Endpoint Routing system. To enable Razor Components support in your application, use MapComponentHub<TComponent> within your routing configuration. For example,

app.UseRouting(routes =>
{
    routes.MapRazorPages();
    routes.MapComponentHub<App>("app");
});

This configures the application to accept incoming connections for interactive Razor Components, and specifies that the root component App should be rendered within a DOM element matching the selector app.

Prerendering

The Razor Components project template now does server-side prerendering by default. This means that when a user navigates to your application, the server will perform an initial render of your Razor Components and deliver the result to their browser as plain static HTML. Then, the browser will reconnect to the server via SignalR and switch the Razor Components into a fully interactive mode. This two-phase delivery is beneficial because:

  • It improves the perceived performance of the site, because the UI can appear sooner, without waiting to make any WebSocket connections or even running any client-side script. This makes a bigger difference for users on slow connections, such as 2G/3G mobiles.
  • It can make your application easily crawlable by search engines.

For users on faster connections, such as intranet users, this feature has less impact because the UI should appear near-instantly regardless.

To set up prerendering, the Razor Components project template no longer has a static HTML file. Instead a single Razor Page, /Pages/Index.cshtml, is used to prerender the app content using the Html.RenderComponentAsync<TComponent>() HTML helper. The page also references the components.server.js script to set up the SignalR connection after the content has been prerendered and downloaded. And because this is a Razor Page, features like the environment tag helper just work.

Index.cshtml

@page "{*clientPath}"
<!DOCTYPE html>
<html>
<head>
    ...
</head>
<body>
    <app>@(await Html.RenderComponentAsync<App>())</app>
    <script src="_framework/components.server.js"></script>
</body>
</html>

Besides the app loading faster, you can see that prerendering is happening by looking at the downloaded HTML source in the browser dev tools. The Razor Components are fully rendered in the HTML.

Razor Components in Razor Class Libraries

You can now add Razor Components to Razor Class Libraries and reference them from ASP.NET Core projects using Razor Components.

To create reusable Razor Components in a Razor Class Library:

  1. Create a Razor Components app:

    dotnet new razorcomponents -o RazorComponentsApp1
    
  2. Create a new Razor Class Library

    dotnet new razorclasslib -o RazorClassLib1
    
  3. Add a Component1.razor file to the Razor Class Library

Component1.razor

<h1>Component1</h1>

<p>@message</p>

@functions {
    string message = "Hello from a Razor Class Library!";
}
  4. Reference the Razor Class Library from an ASP.NET Core app using Razor Components:

    dotnet add RazorComponentsApp1 reference RazorClassLib1
    
  5. In the Razor Components app, import all components from the Razor Class Library using the @addTagHelper directive and then use Component1 in the app:

Index.razor

@page "/"
@addTagHelper *, RazorClassLib1

<h1>Hello, world!</h1>

Welcome to your new app.

<Component1 />

Note that Razor Class Libraries are not compatible with Blazor apps in this release. Also, Razor Class Libraries do not yet support static assets. To create components in a library that can be shared with Blazor and Razor Components apps you still need to use a Blazor Class Library. This is something we expect to address in a future update.

Improved event handling

The new EventCallback and EventCallback<> types make defining component callbacks much more straightforward. For example consider the following two components:

MyButton.razor

<button onclick="@OnClick">Click here and see what happens!</button>

@functions {
    [Parameter] EventCallback<UIMouseEventArgs> OnClick { get; set; }
}

UsesMyButton.razor

@text
<MyButton OnClick="ShowMessage" />

@functions {
    string text;

    void ShowMessage(UIMouseEventArgs e)
    {
        text = "Hello, world!";
    }
}

The OnClick callback is of type EventCallback<UIMouseEventArgs> (instead of Action<UIMouseEventArgs>), which MyButton passes off directly to the onclick event handler. The compiler handles converting the delegate to an EventCallback, and will do some other things to make sure that the rendering process has enough information to render the correct target component. As a result an explicit call to StateHasChanged in the ShowMessage event handler isn’t necessary.

By using the EventCallback<> type, the OnClick handler can now also be asynchronous without any additional code changes to MyButton:

UsesMyButton.razor

@text
<MyButton OnClick="ShowMessageAsync" />

@functions {
    string text;

    async Task ShowMessageAsync(UIMouseEventArgs e)
    {
        await Task.Yield();
        text = "Hello, world!";
    }
}

We recommend using EventCallback and EventCallback<T> when you define component parameters for event handling and binding. Use EventCallback<> where possible because it’s strongly typed and will provide better feedback to users of your component. Use EventCallback when there’s no value you will pass to the callback.

Note that consumers don’t have to write different code when using a component because of EventCallback. The same event handling code should still work, while removing some thorny issues and improving the usability of your components.
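
For the case where there’s no value to pass, a parameter can be declared as a non-generic EventCallback and raised explicitly. The following is a minimal sketch; the DismissButton component name is made up for illustration, and the exact InvokeAsync overloads available in this preview are an assumption:

DismissButton.razor

<button onclick="@HandleClick">Dismiss</button>

@functions {
    // No value needs to flow back to the caller, so a non-generic EventCallback is used.
    [Parameter] EventCallback OnDismiss { get; set; }

    // Raise the callback; null is passed because there is no payload.
    Task HandleClick(UIMouseEventArgs e) => OnDismiss.InvokeAsync(null);
}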

Forms & validation

This preview release adds built-in components and infrastructure for handling forms and validation.

One of the benefits of using .NET for client-side web development is the ability to share the same implementation logic between the client and the server. Validation logic is a prime example. The new forms & validation support in Razor Components includes support for handling validation using data annotations, or you can plug in your preferred validation system.

For example, the following Person type defines validation logic using data annotations:

public class Person
{
    [Required(ErrorMessage = "Enter a name")]
    [StringLength(10, ErrorMessage = "That name is too long")]
    public string Name { get; set; }

    [Range(0, 200, ErrorMessage = "Nobody is that old")]
    public int AgeInYears { get; set; }

    [Required]
    [Range(typeof(bool), "true", "true", ErrorMessage = "Must accept terms")]
    public bool AcceptsTerms { get; set; }
}

Here’s how you can create a validating form based on this Person model:

<EditForm Model="@person" OnValidSubmit="@HandleValidSubmit">
    <DataAnnotationsValidator />
    <ValidationSummary />

    <p class="name">
        Name: <InputText bind-Value="@person.Name" />
    </p>
    <p class="age">
        Age (years): <InputNumber bind-Value="@person.AgeInYears" />
    </p>
    <p class="accepts-terms">
        Accepts terms: <InputCheckbox bind-Value="@person.AcceptsTerms" />
    </p>

    <button type="submit">Submit</button>
</EditForm>

@functions {
    Person person = new Person();

    void HandleValidSubmit()
    {
        Console.WriteLine("OnValidSubmit");
    }
}

If you add this form to your app and run it you will get a basic form that automatically validates the field inputs when they are changed and when the form is submitted.

Validating form

There are quite a few things going on here, so let’s break it down piece by piece:

  • The form is defined using the new EditForm component. The EditForm sets up an EditContext as a cascading value that tracks metadata about the edit process (e.g. what’s been modified, current validation messages, etc.). The EditForm also provides convenient events for valid and invalid submits (OnValidSubmit, OnInvalidSubmit). Or you can use OnSubmit directly if you want to trigger the validation yourself.
  • The DataAnnotationsValidator component attaches validation support using data annotations to the cascaded EditContext. Enabling support for validation using data annotations currently requires this explicit gesture, but we are considering making this the default behavior that you can then override.
  • Each of the form fields is defined using one of the built-in input components (InputText, InputNumber, InputCheckbox, InputSelect, etc.). These components provide default behavior for validating on edit and changing their CSS class to reflect the field state. Some of them have useful parsing logic (e.g., InputDate and InputNumber handle unparseable values gracefully by registering them as validation errors). The relevant ones also support nullability of the target field (e.g., int?).
  • The ValidationMessage component displays validation messages for a specific field, as shown in the sketch after this list.
  • The ValidationSummary component summarizes all validation messages (similar to the validation summary tag helper).
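
For example, to surface the message for a single field next to its input rather than only in the summary, a ValidationMessage can be placed beside the corresponding input component. This is a minimal sketch; the For parameter and expression syntax are an assumption about this preview’s API:

<p class="name">
    Name: <InputText bind-Value="@person.Name" />
    <ValidationMessage For="@(() => person.Name)" />
</p>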

There are some limitations with the built-in input components that we expect to improve in future updates. For example, you can’t currently specify arbitrary attributes on the generated input tags. In the future, we plan to enable components that pass through all extra attributes. For now, you’ll need to build your own component subclasses to handle these cases.

Runtime compilation

Support for runtime compilation was removed from the ASP.NET Core shared framework in .NET Core 3.0, but you can now enable it by adding a package to your app.

To enable runtime compilation:

  1. Add a package reference to Microsoft.AspNetCore.Mvc.Razor.RuntimeCompilation

    <PackageReference Include="Microsoft.AspNetCore.Mvc.Razor.RuntimeCompilation" Version="3.0.0-preview3-19153-02" />
    
  2. In Startup.ConfigureServices add a call to AddRazorRuntimeCompilation

    services.AddMvc().AddRazorRuntimeCompilation();
    

Worker Service Template

In 3.0.0-preview3 we are introducing a new template called ‘Worker Service’. This template is designed as a starting point for running long-running background processes like you might run as a Windows Service or Linux daemon. Examples of this would be producing/consuming messages from a message queue or monitoring a file to process when it appears. It’s intended to provide the productivity features of ASP.NET Core such as Logging, DI, Configuration, etc., without carrying any web dependencies.

Worker Service
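
To give a feel for the shape of the generated code, here is a minimal sketch of a worker built on BackgroundService from Microsoft.Extensions.Hosting; the details are illustrative and may not match the template’s generated files exactly:

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

public class Worker : BackgroundService
{
    private readonly ILogger<Worker> _logger;

    public Worker(ILogger<Worker> logger)
    {
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Loop until the host signals shutdown, e.g. while polling a queue or watching a folder.
        while (!stoppingToken.IsCancellationRequested)
        {
            _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now);
            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }
}

A worker like this is registered with the generic host (for example via services.AddHostedService<Worker>()) and runs for the lifetime of the process.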

In the coming days we will publish a few blog posts giving more walkthroughs on using the Worker template to get started. We will have dedicated posts about publishing as a Windows/Systemd service, running on ACI/AKS, as well as running as a WebJob.

Caveats

Whilst the intent is for the worker template not to have any dependencies on web technologies by default, in preview3 it still uses the Web SDK and is shown after you select ‘ASP.NET Core Web Application’ in Visual Studio. This is temporary and will be changed in a future preview. This means that for preview3 you will see many options in VS that may not make sense, such as publishing your worker as a web app to IIS.

Angular template updated to Angular 7

The Angular template is now updated to Angular 7. We anticipate updating again to Angular 8 before .NET Core 3.0 ships a stable release.

Authentication for the Single Page Application templates

This release introduces support for authentication in our Angular and React templates. In this section we show how to create a new Angular or React template that allows us to authenticate users and access a protected API resource.

Our support for authenticating and authorizing users is powered behind the scenes by IdentityServer, with some extensions we built to simplify the configuration experience for the scenarios we are targeting.

Note: In this post we showcase authentication support for Angular but the React template offers equivalent functionality.

Create a new Angular application

To create a new Angular application with authentication support we invoke the following command:

dotnet new angular -au Individual

This command creates a new ASP.NET Core application with a hosted client Angular application. The ASP.NET Core application includes an Identity Server instance already configured so that your Angular application can authenticate users and send HTTP requests against the protected resources in the ASP.NET Core application.

The authentication and authorization support is built as an Angular module that gets imported into your application and offers a suite of components and services to enhance the functionality of the main application module.

Run the application

To run the application, simply execute the command below and open the browser at the URL displayed on the console:

dotnet run
Hosting environment: Development
Content root path: C:\angularapp
Now listening on: https://localhost:5001
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.

SPA index

When we open the application we see the usual Home, Counter and Fetch data menu options and two new options: Register and Login. If we click on Register we get sent to the default Identity UI where (after running migrations and updating the database) we can register as a new user.

Register a new user

SPA register

After registering as a new user, we get redirected back to the application, where we can see that we are successfully authenticated.

SPA logged in

Call an authenticated API

If we click on Fetch data, we can see the weather forecast data table.

SPA fetch data

Protect an existing API

To protect an API on the server we simply need to use the [Authorize] attribute on the controller or action that we want to protect.

[Authorize]
[Route("api/[controller]")]
public class SampleDataController : Controller
{
...
}

Require authentication for a client route

To require that the user be authenticated when visiting a page in the Angular application, we apply the AuthorizeGuard to the route we are configuring.

import { ApiAuthorizationModule } from 'src/api-authorization/api-authorization.module';
import { AuthorizeGuard } from 'src/api-authorization/authorize.guard';
import { AuthorizeInterceptor } from 'src/api-authorization/authorize.interceptor';

@NgModule({
  declarations: [
    AppComponent,
    NavMenuComponent,
    HomeComponent,
    CounterComponent,
    FetchDataComponent
  ],
  imports: [
    BrowserModule.withServerTransition({ appId: 'ng-cli-universal' }),
    HttpClientModule,
    FormsModule,
    ApiAuthorizationModule,
    RouterModule.forRoot([
      { path: '', component: HomeComponent, pathMatch: 'full' },
      { path: 'counter', component: CounterComponent },
      { path: 'fetch-data', component: FetchDataComponent, canActivate: [AuthorizeGuard] },
    ])
  ],
  providers: [
    { provide: HTTP_INTERCEPTORS, useClass: AuthorizeInterceptor, multi: true }
  ],
  bootstrap: [AppComponent]
})
export class AppModule { }

Get more details by visiting the docs page

This is a quick introduction to what our new authentication support for Single Page Applications has to offer. For more details check out the docs.

Endpoint Routing with SignalR Hubs

In 3.0.0-preview3 we are hooking SignalR hubs into the new Endpoint Routing feature recently released. SignalR hub wire-up was previously done explicitly:

app.UseSignalR(routes =>
{
    routes.MapHub<ChatHub>("hubs/chat");
});

This meant developers would need to wire up controllers, Razor pages, and hubs in a variety of different places during startup, leading to a series of nearly-identical routing segments:

app.UseSignalR(routes =>
{
    routes.MapHub<ChatHub>("hubs/chat");
});

app.UseRouting(routes =>
{
    routes.MapRazorPages();
});

Now, SignalR hubs can also be routed via endpoint routing, so you’ve got a one-stop place to route nearly everything in ASP.NET Core.

app.UseRouting(routes =>
{
    routes.MapRazorPages();
    routes.MapHub<ChatHub>("hubs/chat");
});

Long Polling for Java SignalR Client

We added long polling support to the Java client which enables it to establish connections even in environments that do not support WebSockets. This also gives you the ability to specifically select the long polling transport in your client apps.

gRPC Template

This preview release introduces a new template for building gRPC services with ASP.NET Core using a new gRPC framework we are building in collaboration with Google.

gRPC is a popular RPC (remote procedure call) framework that offers an opinionated contract-first approach to API development. It uses HTTP/2 for transport, Protocol Buffers as the interface description language, and provides features such as authentication, bidirectional streaming and flow control, and cancellation and timeouts.

gRPC template

The templates create two projects: a gRPC service hosted inside ASP.NET Core, and a console application to test it with.

gRPC solution
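
To give a rough idea of the service side, here is a minimal sketch of a gRPC service implementation. The Greeter, HelloRequest, and HelloReply types are assumed to be generated from a greet.proto contract like the one in the template, so the actual generated names may differ:

using System.Threading.Tasks;
using Grpc.Core;

public class GreeterService : Greeter.GreeterBase
{
    // Handles a unary call: one request in, one reply out.
    public override Task<HelloReply> SayHello(HelloRequest request, ServerCallContext context)
    {
        return Task.FromResult(new HelloReply { Message = "Hello " + request.Name });
    }
}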

This is the first public preview of gRPC for ASP.NET Core and it doesn’t implement all of gRPC’s features, but we’re working hard to make the best gRPC experience for ASP.NET possible. Please try it out and give us feedback at grpc/grpc-dotnet on GitHub.

Stay tuned for future blog posts discussing gRPC for ASP.NET Core in more detail.

Give feedback

We hope you enjoy the new features in this preview release of ASP.NET Core! Please let us know what you think by filing issues on GitHub.

Daniel Roth
Principal Program Manager, ASP.NET



Azure Hybrid Benefit for SQL Server

Angelos Petropoulos

In my previous blog post I talked about how to migrate data from existing on-prem SQL Server instances to Azure SQL Database. If you haven’t heard, SQL Server 2008 end of support is coming this summer, so it’s a good time to evaluate moving to an Azure SQL Database.

If you decide to try Azure, chances are you will not be able to immediately move 100% of your on-prem databases and relevant applications in one go. You’ll probably come up with a migration plan that spans weeks, months or even years. During the migration phase you will be spinning up new instances of Azure SQL Database and turning off on-prem SQL Server instances, but for a little bit of time there will be overlap.

To help manage the cost during such transitions we offer the Azure Hybrid Benefit. You can convert 1 core of SQL Enterprise edition to get up to 4 vCores of Azure SQL Database at a reduced rate. For example, if you have 4 core licenses of SQL Enterprise edition, you can receive up to 16 vCores of Azure SQL Database.

If you want to learn more check out the Azure Hybrid Benefit FAQ and don’t forget, if you have any questions around migrating your .NET applications to Azure you can always ask for free assistance. If you have any questions about this post just leave us a comment below.

Angelos Petropoulos


Migrating your existing on-prem SQL Server database to Azure SQL DB

If you are in the process of moving an existing .NET application to Azure, it’s likely you’ll have to migrate an existing, on-prem SQL database as well. There are a few different ways you can go about this, so let’s go through them.

Data Migration Assistant (downtime required)

The Data Migration Assistant (download | documentation) is free, easy to use, slick and extremely powerful! It can:

  • Evaluate if your database is ready to migrate and produce a readiness report (command line support included)
  • Provide recommendations for how to remediate migration blocking issues
  • Recommend the minimum Azure SQL Database SKU based on performance counter data of your existing database
  • Perform the actual migration of schema, data and objects (server roles, logins, etc.)

After a successful migration, applications will be able to connect to the target SQL server databases seamlessly. There are currently a couple of limitations, but the majority of databases shouldn’t be impacted. If this sounds interesting, check out the full tutorials on how to migrate to Azure SQL DB and how to migrate to Azure SQL DB Managed Instance.

Azure Data Migration Service (no downtime required)

The Azure Data Migration Service allows you to move your on-prem database to Azure without taking it offline during the migration. Applications can keep on running while the migration is taking place. Once the database in Azure is ready you can switch your applications over immediately.

If this sounds interesting, check out the full tutorials on how to migrate to Azure SQL DB and how to migrate to Azure SQL DB Managed Instance without downtime.

SQL Server Management Studio (downtime required)

You are probably already familiar with SQL Server Management Studio (download | documentation), but if you are not it’s basically an IDE for SQL Server built on top of the Visual Studio shell and it’s free! Unlike the Data Migration Assistant, it cannot produce readiness reports nor can it suggest remediating actions, but it can perform the actual migration two different ways.

The first way is by selecting the command “Deploy Database to Microsoft Azure SQL Database…” which will bring up the migration wizard to take you through the process step by step:

The second way is by exporting the existing, on-prem database as a .bacpac file (docs to help with that) and then importing the .bacpac file into Azure:

Resolving database migration compatibility issues

There are a wide variety of compatibility issues that you might encounter, depending both on the version of SQL Server in the source database and the complexity of the database you are migrating. Use the following resources, in addition to a targeted Internet search using your search engine of choice:

In addition to searching the Internet and using these resources, use the MSDN SQL Server community forums or StackOverflow. If you have any questions or problems just leave us a comment below.


Microsoft’s Developer Blogs are Getting an Update

In the coming days, we’ll be moving our developer blogs to a new platform with a modern, clean design and powerful features that will make it easy for you to discover and share great content. This week, you’ll see the Visual Studio, IoTDev, and Premier Developer blogs move to a new URL, while additional developer blogs will transition over the coming weeks. You’ll continue to receive all the information and news from our teams, including access to all past posts. Your RSS feeds will continue to seamlessly deliver posts, and any of your bookmarked links will redirect to the new URL location.

We’d love your feedback

Our most inspirational ideas have come from this community and we want to make sure our developer blogs are exactly what you need. Share your thoughts about the current blog design by taking our Microsoft Developer Blogs survey and tell us what you think! We would also love your suggestions for future topics that our authors could write about. Please use the survey to share what kinds of information, tutorials, or features you would like to see covered in future posts.

Frequently Asked Questions

For the most-asked questions, see the FAQ below. If you have any additional questions, ideas, or concerns that we have not addressed, please let us know by submitting feedback through the Developer Blogs FAQ page.

Will Saved/Bookmarked URLs Redirect?

Yes. All existing URLs will auto-redirect to the new site. If you discover any broken links, please report them through the FAQ feedback page.

Will My Feed Reader Still Send New Posts?

Yes, your RSS feed will continue to work through your current set-up. If you encounter any issues with your RSS feed, please report it with details about your feed reader.

Which Blogs Are Moving?

This migration involves the majority of our Microsoft developer blogs so expect to see changes to our Visual Studio, IoTDev, and Premier Developer blogs this week. The rest will be migrated in waves over the next few weeks.

We appreciate all of the feedback so far and look forward to showing you what we’ve been working on! For additional information about this blog migration, please refer to our Developer Blogs FAQ.


Announcing an easier way to use latest certificates from Key Vault

Posting on behalf of Prashanth Yerramilli

When we launched Azure Key Vault a few years ago, it solved a major problem: storing sensitive and/or secret information in code or config files in plain text causes multiple problems, including security exposure. Users stored their secrets in a safe store like Key Vault and used a URI to fetch the secret material. This service has been wildly popular and has become a standard for cloud applications. It is used by everyone from fledgling startups to Fortune 500 companies the world over.

Developers use Key Vault to store their ad hoc secrets, certificates, and keys used for encryption. To follow best security practices, they create secrets that are short lived. An example of a typical flow in this case could be:

  • Step 1: Developer creates a certificate in Key Vault
  • Step 2: Developer sets the lifetime of the secret to 30 days. In other words, the developer asks Key Vault to re-create the certificate every 30 days. The developer also chooses to receive an email when a certificate is about to expire
  • Step 3: Developer writes a polling service to check if the certificate has indeed expired

In the above scenario there are a few challenges for the customer. They would have to write a polling service that constantly checks whether the certificate has expired and, if so, waits for the new certificate and then binds it in Windows Certificate Manager.
Now, what if the developer didn’t have to poll? And what if the developer also didn’t have to bind the new certificate in Windows Certificate Manager? To solve this exact problem we built the Key Vault Virtual Machine Extension.

Azure virtual machine (VM) extensions are small applications that provide post-deployment configuration and automation tasks on Azure VMs. For example, if a virtual machine requires software installation, anti-virus protection, or to run a script inside of it, a VM extension can be used. Azure VM extensions can be run with the Azure CLI, PowerShell, Azure Resource Manager templates, and the Azure portal. Extensions can be bundled with a new VM deployment, or run against any existing system.
To learn more about VM extensions, please see the Azure VM extensions documentation.

The Key Vault VM extension does just that, as explained in the steps below:

  • Step 1: Create a Key Vault and create an Azure Windows Virtual Machine
  • Step 2: Install the Key Vault VM Extension on the VM
  • Step 3: Configure Key Vault VM Extension to monitor a specific vault by specifying how often it should fetch the certificate

By doing the above steps the latest certificate is bound correctly in Windows Certificate Manager. This feature enables auto-rotation of SSL certificates, without necessitating a re-deployment or binding.

In the lifecycle of secrets management, fetching the latest version of a secret (for the purposes of this article, a certificate) is just as important as storing it securely. To solve this problem on an Azure Virtual Machine, we’ve created a VM extension for Windows; a Linux version is coming soon. Once installed, the Key Vault VM extension fetches the latest version of the certificate at a specified interval and automatically binds it in the certificate store on Windows. As you can see, this enables auto-rotation of SSL certificates without necessitating a re-deployment or manual binding.

Also before we begin going through the tutorial, we need to understand a concept called Managed Identities.
Your code needs credentials to authenticate to cloud services, but you want to limit the visibility of those credentials as much as possible. Ideally, they never appear on a developer’s workstation or get checked-in to source control. Azure Key Vault can store credentials securely so they aren’t in your code, but to retrieve them you need to authenticate to Azure Key Vault. To authenticate to Key Vault, you need a credential! A classic bootstrap problem. Through the magic of Azure and Azure AD, MI provides a “bootstrap identity” that makes it much simpler to get things started.

Here’s how it works: When you enable MI for an Azure resource such as a virtual machine, Azure creates a Service Principal (an identity) for that resource in Azure AD, and injects the credentials (of that identity) into the resource (in this case a virtual machine).

  1. Your code calls a local MI endpoint to get an access token
  2. MI uses the locally injected credentials to get an access token from Azure AD
  3. Your code uses this access token to authenticate to an Azure service

Managed Identities
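
To make the token flow above concrete, here is a small sketch of application code reading a Key Vault secret through a managed identity. It assumes the Microsoft.Azure.Services.AppAuthentication and Microsoft.Azure.KeyVault client libraries; this is just one way to consume the injected identity:

using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;

public static class KeyVaultSecretReader
{
    public static async Task<string> GetSecretAsync(string secretUri)
    {
        // AzureServiceTokenProvider calls the local MI endpoint and exchanges the
        // injected credentials for an Azure AD access token on our behalf.
        var tokenProvider = new AzureServiceTokenProvider();
        var keyVaultClient = new KeyVaultClient(
            new KeyVaultClient.AuthenticationCallback(tokenProvider.KeyVaultTokenCallback));

        // The access token is then used to authenticate the call to Key Vault.
        var secret = await keyVaultClient.GetSecretAsync(secretUri);
        return secret.Value;
    }
}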

Now within Managed Identities there are 2 types

  1. System Assigned managed identity is enabled directly on an Azure service instance. When the identity is enabled, Azure creates an identity for the instance in the Azure AD tenant that’s trusted by the subscription. The lifecycle of the identity is managed by Azure and is tied to the Azure service instance.
  2. User Assigned managed identity is created as a standalone Azure resource. Users first create an identity and then assign that identity to one or more Azure resources.

In this tutorial I will demonstrate how to create an Azure Virtual Machine with an ARM template which also includes creating a Key Vault VM Extension on the VM.

Prerequisites

Step 1

After the prerequisites are complete, create a System Assigned identity by following this tutorial.

Step 2

Give the newly created System Assigned identity access to your Key Vault

  • Go to https://portal.azure.com and navigate to your Key Vault
  • Select the Access Policies section and Add New by searching for the System Assigned identity you created in Step 1
    AccessPolicies

Step 3

Create or Update a VM with the following ARM template
You can view the full ARM template here and the ARM parameters file here.

The most minimal settings in the ARM template are shown below:

{
  "secretsManagementSettings": {
    "observedCertificates": [
      "<KeyVault URI of a secret to be monitored/retrieved, in versionless format: https://myVaultName.vault.azure.net/secrets/myCertName>",
      "<more entries here>"
    ],
    "pollingIntervalInS": "[parameters('kvvmextPollingInterval')]"
  }
}

As you can see, we only specify the observedCertificates parameter and the polling interval in seconds.


Note: Your observedCertificates URLs should be of the form:

https://myVaultName.vault.azure.net/secrets/myCertName 

and not:

https://myVaultName.vault.azure.net/certificates/myCertName 

This is because the /secrets path returns the full certificate, including the private key, while the /certificates path does not.

By following this tutorial you can create a VM with the above template.

The above tutorial assumes that you are storing your certificates in Windows Certificate Manager, so the VM extension pulls down the latest certificates at the specified interval and automatically binds those certificates in your certificate manager.

That’s all folks!

Linux Version: We’re actively working on a VM Extension for Linux and would love to hear any feedback you might have.

We are eager to hear from you about your use cases and how we can evolve the VM Extension to help you. So please reach out to us and add your feature requests to the Azure feedback forum. If you run into issues using the VM extension please reach out to us on StackOverflow.

Prashanth Yerramilli, Senior Program Manager, Azure Key Vault

Prashanth Yerramilli is the Key Vault Program Manager on the Azure Security team. He has over 10 years of software engineering experience and brings to the team a love for creating the ultimate development experience.

Prashanth can be reached at:
-Twitter @yvprashanth1
-GitHub https://github.com/yvprashanth


Changes to the web and JSON editor APIs in Visual Studio 2019

Andrew B. Hall [MSFT]

In Visual Studio 2019 Preview 2, the Web Tools team made some changes to improve extensibility features for extension developers. To standardize interfaces, the CSS, HTML, JSON and CSHTML editors renamed their assemblies as per the following table:

Old assembly name → New assembly name
Microsoft.CSS.Core → Microsoft.WebTools.Languages.Css
Microsoft.CSS.Editor → Microsoft.WebTools.Languages.Css.Editor
Microsoft.Html.Core → Microsoft.WebTools.Languages.Html
Microsoft.Html.Editor → Microsoft.WebTools.Languages.Html.Editor
Microsoft.VisualStudio.Html.Package → Microsoft.WebTools.Languages.Html.VS
Microsoft.JSON.Core → Microsoft.WebTools.Languages.Json
Microsoft.JSON.Editor → Microsoft.WebTools.Languages.Json.Editor
Microsoft.VisualStudio.JSON.Package → Microsoft.WebTools.Languages.Json.VS
Microsoft.VisualStudio.Web.Extensions → Microsoft.WebTools.Languages.Extensions
Microsoft.Web.Core → Microsoft.WebTools.Languages.Shared
Microsoft.Web.Editor → Microsoft.WebTools.Languages.Shared.Editor

To avoid potential parse issues, the JSON parse tree changed behavior. When you call JsonParserService.GetTreeAsync, you now get a snapshot of the JSON parse tree. As an extension developer, you can now request and maintain snapshots of the JSON parse tree.

Andrew B. Hall [MSFT]


Blazor 0.8.0 experimental release now available

Blazor 0.8.0 is now available! This release updates Blazor to use Razor Components in .NET Core 3.0 and adds some critical bug fixes.

Get Blazor 0.8.0

To get started with Blazor 0.8.0 install the following:

  1. .NET Core 3.0 Preview 2 SDK (3.0.100-preview-010184)
  2. Visual Studio 2019 (Preview 2 or later) with the ASP.NET and web development workload selected.
  3. The latest Blazor extension from the Visual Studio Marketplace.
  4. The Blazor templates on the command-line:

    dotnet new -i Microsoft.AspNetCore.Blazor.Templates::0.8.0-preview-19104-04
    

You can find getting started instructions, docs, and tutorials for Blazor at https://blazor.net.

Upgrade to Blazor 0.8.0

To upgrade your existing Blazor apps to Blazor 0.8.0 first make sure you’ve installed the prerequisites listed above.

To upgrade a standalone Blazor 0.7.0 project to 0.8.0:

  • Update the Blazor packages and .NET CLI tool references to 0.8.0-preview-19104-04.
  • Replace any package reference to Microsoft.AspNetCore.Blazor.Browser with a reference to Microsoft.AspNetCore.Blazor.
  • Replace BlazorComponent with ComponentBase.
  • Update overrides of SetParameters on components to override SetParametersAsync instead.
  • Replace BlazorLayoutComponent with LayoutComponentBase
  • Replace IBlazorApplicationBuilder with IComponentsApplicationBuilder.
  • Replace any using statements for Microsoft.AspNetCore.Blazor.* with Microsoft.AspNetCore.Components.*, except leave Microsoft.AspNetCore.Blazor.Hosting in Program.cs
  • In index.html update the script reference to reference components.webassembly.js instead of blazor.webassembly.js

To upgrade an ASP.NET Core hosted Blazor app to 0.8.0:

  • Update the client-side Blazor project as described previously.
  • Update the ASP.NET Core app hosting the Blazor app to .NET Core 3.0 by following the migrations steps in the ASP.NET Core docs.
    • Update the target framework to be netcoreapp3.0
    • Remove any package reference to Microsoft.AspNetCore.App or Microsoft.AspNetCore.All
    • Upgrade any non-Blazor Microsoft.AspNetCore.* package references to version 3.0.0-preview-19075-0444
    • Remove any package reference to Microsoft.AspNetCore.Razor.Design
    • To enable JSON support, add a package reference to Microsoft.AspNetCore.Mvc.NewtonsoftJson and update Startup.ConfigureServices to call services.AddMvc().AddNewtonsoftJson()
  • Upgrade the Microsoft.AspNetCore.Blazor.Server package reference to 0.8.0-preview-19104-04
  • Add a package reference to Microsoft.AspNetCore.Components.Server
  • In Startup.ConfigureServices simplify any call to app.AddResponseCompression to call the default overload without specifying WebAssembly or binary data as additional MIME types to compress.
  • In Startup.Configure add a call to app.UseBlazorDebugging() after the existing call to app.UseBlazor<App.Startup>()
  • Remove any unnecessary use of the Microsoft.AspNetCore.Blazor.Server namespace.

To upgrade a Blazor class library to 0.8.0:

  • Replace the package references to Microsoft.AspNetCore.Blazor.Browser and Microsoft.AspNetCore.Blazor.Build with references to Microsoft.AspNetCore.Components.Browser and Microsoft.AspNetCore.Components.Build and update the versions to 3.0.0-preview-19075-0444.
  • In the project file for the library change the project SDK from “Microsoft.NET.Sdk.Web” to “Microsoft.NET.Sdk.Razor”.

Server-side Blazor is now ASP.NET Core Razor Components in .NET Core 3.0

As was recently announced, server-side Blazor is now shipping as ASP.NET Core Razor Components in .NET Core 3.0. We’ve integrated the Blazor component model into ASP.NET Core 3.0 and renamed it to Razor Components. Blazor 0.8.0 is now built on Razor Components and enables you to host Razor Components in the browser on WebAssembly.

Upgrade a server-side Blazor project to ASP.NET Core Razor Components in .NET Core 3.0

If you’ve been working with server-side Blazor, we recommend upgrading to use ASP.NET Core Razor Components in .NET Core 3.0.

To upgrade a server-side Blazor app to ASP.NET Core Razor Components:

  • Update the client-side Blazor project as described previously, except replace the script reference to blazor.server.js with components.server.js
  • Update the ASP.NET Core app hosting the Razor Components to .NET Core 3.0 as described previously.
  • In the server project:
    • Upgrade the Microsoft.AspNetCore.Blazor.Server package reference to 0.8.0-preview-19104-04
    • Add a package reference to Microsoft.AspNetCore.Components.Server version 3.0.0-preview-19075-0444
    • Replace the using statement for Microsoft.AspNetCore.Blazor.Server with Microsoft.AspNetCore.Components.Server
    • Replace services.AddServerSideBlazor with services.AddRazorComponents and app.UseServerSideBlazor with app.UseRazorComponents.
    • In the Startup.Configure method add app.UseStaticFiles() just prior to calling app.UseRazorComponents.
    • Move the wwwroot folder from the Blazor app project to the ASP.NET Core server project

Switching between ASP.NET Core Razor Components and client-side Blazor

Sometimes it’s convenient to be able to switch between running your Razor Components on the server (ASP.NET Core Razor Components) and on the client (Blazor). For example, you might run on the server during development so that you can easily debug, but then publish your app to run on the client.

To update an ASP.NET Core hosted Blazor app so that it can be run as an ASP.NET Core Razor Components app:

  • Move the wwwroot folder from the client-side Blazor project to the ASP.NET Core server project.
  • In the server project:
    • Update the script tag in index.html to point to components.server.js instead of components.webassembly.js.
    • Add a call to services.AddRazorComponents<Client.Startup>() in the Startup.ConfigureServices method.
    • Add a call to app.UseStaticFiles() in the Startup.Configure method prior to the call to UseMvc.
    • Replace the call to UseBlazor with app.UseRazorComponents<Client.Startup>()
  • If you’re using dependency injection to inject an HttpClient into your components, then you’ll need to add an HttpClient as a service in your server’s Startup.ConfigureServices method.
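
For that last step, a minimal sketch of the registration in the server’s Startup.ConfigureServices is shown below; the IUriHelper-based base address is an assumption about this preview’s API surface:

services.AddScoped<HttpClient>(s =>
{
    // Give the HttpClient a BaseAddress matching the app so that relative requests
    // from components (e.g. "api/SampleData/WeatherForecasts") resolve correctly.
    var uriHelper = s.GetRequiredService<IUriHelper>();
    return new HttpClient { BaseAddress = new Uri(uriHelper.GetBaseUri()) };
});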

Previously to get tooling support for Blazor projects you needed to install the Blazor extension for Visual Studio. Starting with Visual Studio 2019 Preview 2, tooling support for Razor Components (and hence Blazor apps) is already included without having to install anything else. The Blazor extension is now only needed to install the Blazor project templates in Visual Studio.

Runtime improvements

Blazor 0.8.0 includes some .NET runtime improvements like improved runtime performance on Chrome and an improved IL linker. In our performance benchmarks, Blazor 0.8.0 performance on Chrome is now about 25% faster. You can now also reference existing libraries like Json.NET from a Blazor app without any additional linker configuration:

@functions {
 WeatherForecast[] forecasts;

 protected override async Task OnInitAsync()
 {
 var json = await Http.GetStringAsync("api/SampleData/WeatherForecasts");
 forecasts = Newtonsoft.Json.JsonConvert.DeserializeObject<WeatherForecast[]>(json);
 }
}

Known issues

There are a couple of known issues with this release that you may run into:

  • “It was not possible to find any compatible framework version. The specified framework ‘Microsoft.NETCore.App’, version ‘2.0.0’ was not found.”: You may see this error when building a Blazor app because the IL linker currently requires .NET Core 2.x to run. To work around this issue, either install .NET Core 2.2 or disable IL linking by setting the <BlazorLinkOnBuild>false</BlazorLinkOnBuild> property in your project file.
  • “Unable to generate deps.json, it may have been already generated.”: You may see this error when running a standalone Blazor app and you haven’t yet restored packages for any .NET Core apps. To work around this issue, create any .NET Core app (e.g. dotnet new console) and then rerun the Blazor app.

These issues will be addressed in a future Blazor update.

Future updates

This release of Blazor was primarily focused on first integrating Razor Components into ASP.NET Core 3.0 and then rebuilding Blazor on top of that. Going forward, we plan to ship Blazor updates with each .NET Core 3.0 update.

Blazor, and support for running Razor Components on WebAssembly in the browser, won’t ship with .NET Core 3.0, but we continue to work towards shipping Blazor at a later date.

Give feedback

We hope you enjoy this latest preview release of Blazor. As with previous releases, your feedback is important to us. If you run into issues or have questions while trying out Blazor, file issues on GitHub. You can also chat with us and the Blazor community on Gitter if you get stuck or to share how Blazor is working for you. After you’ve tried out Blazor for a while please let us know what you think by taking our in-product survey. Click the survey link shown on the app home page when running one of the Blazor project templates:

Blazor survey

Thanks for trying out Blazor!


Make the most of your monthly Azure Credits

Angelos Petropoulos

If you weren’t aware, Visual Studio subscribers have free monthly Azure credits that are ideal for experimenting with and learning about Azure services. When you activate this benefit, it creates a separate Azure subscription with a monthly credit balance that renews each month while you remain an active Visual Studio subscriber. If the credits run out before the end of the month, the subscription is suspended until more credits are available. No surprises, no cost, and no credit card required for any of it!

The table below shows you how many Azure credits you get based on your type of Visual Studio subscription:

Visual Studio subscription type Monthly Azure Credits Activate Credits
Visual Studio Professional (standard subscription) $50 activate
Visual Studio Test Professional $50 activate
MSDN Platforms $100 activate
Visual Studio Enterprise (standard subscription) $150 activate
Visual Studio Enterprise (BizSpark) $150 activate
Visual Studio Enterprise (MPN) $150 activate

Now that you know how many Azure credits you get every month for free, you are probably wondering what you can spend them on! We have put together the following simple table to help you get going:

Azure Service Tier Estimated Monthly Cost
App Service Shared $9.49
Storage General Purpose V2 (1GB) $1.06
SQL Single DB (5DTUs, 2GB) $4.90
CosmosDB 1GB $23.61
Functions Dynamic First 1M executions free
(up to 400K GB-s)
Monitor Application Insights First 1GB free
Key Vault Standard $0.03 (per 10k operations)
Service Bus Basic $0.05 (per 1M operations)
Redis Basic (C0: 250MB Cache) $16.06

Hopefully, you found this information helpful and you are on your way to making use of your Azure credits. If you are interested in the cost of an Azure service you didn’t see in the table above, try the Azure price calculator. If you have any questions or problems just leave us a comment below.

Angelos Petropoulos


ASP.NET Core updates in .NET Core 3.0 Preview 2

.NET Core 3.0 Preview 2 is now available and it includes a bunch of new updates to ASP.NET Core.

Here’s the list of what’s new in this preview:

  • Razor Components
  • SignalR client-to-server streaming
  • Pipes on HttpContext
  • Generic host in templates
  • Endpoint routing updates

Get started

To get started with ASP.NET Core in .NET Core 3.0 Preview 2, install the .NET Core 3.0 Preview 2 SDK.

If you’re on Windows using Visual Studio, you’ll also want to install the latest preview of Visual Studio 2019.

Upgrade an existing project

To upgrade an existing ASP.NET Core app to .NET Core 3.0 Preview 2, follow the migration steps in the ASP.NET Core docs.

Add package for Json.NET

As part of the work to tidy up the ASP.NET Core shared framework, Json.NET is being removed from the shared framework and now needs to be added as a package.

To add back Json.NET support to an ASP.NET Core 3.0 project:
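
A minimal sketch of the change (the exact package version to reference depends on the preview you are using):

  1. Add a package reference to Microsoft.AspNetCore.Mvc.NewtonsoftJson.
  2. In Startup.ConfigureServices, opt in to the Json.NET-based formatters:

    services.AddMvc().AddNewtonsoftJson();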

Runtime compilation removed

As a consequence of cleaning up the ASP.NET Core shared framework to not depend on Roslyn, support for runtime compilation of pages and views has also been removed in this preview release. Instead, compilation of pages and views is performed at build time. In a future preview update we will provide a NuGet package for optionally enabling runtime compilation support in an app.

Other breaking changes and announcements

For a full list of other breaking changes and announcements for this release please see the ASP.NET Core Announcements repo.

Build modern web UI with Razor Components

Razor Components are a new way to build interactive client-side web UI with ASP.NET Core. This release of .NET Core 3.0 Preview 2 adds support for Razor Components to ASP.NET Core and for hosting Razor Components on the server. For those of you who have been following along with the experimental Blazor project, Razor Components represent the integration of the Blazor component model into ASP.NET Core along with the server-side Blazor hosting model. ASP.NET Core Razor Components is a new capability in ASP.NET Core to host Razor Components on the server over a real-time connection.

Working with Razor Components

Razor Components are self-contained chunks of user interface (UI), such as a page, dialog, or form. Razor Components are normal .NET classes that define UI rendering logic and client-side event handlers, so you can write rich interactive web apps without having to write any JavaScript. Razor components are typically authored using Razor syntax, a natural blend of HTML and C#. Razor Components are similar to Razor Pages and MVC Views in that they both use Razor. But unlike pages and views, which are built around a request/reply model, components are used specifically for handling UI composition.

To create, build, and run your first ASP.NET Core app with Razor Components run the following from the command line:

dotnet new razorcomponents -o WebApplication1
cd WebApplication1
dotnet run

Or create an ASP.NET Core Razor Components app in Visual Studio 2019:

Razor Components template

The generated solution has two projects: a server project (WebApplication1.Server), and a project with client-side web UI logic written using Razor Components (WebApplication1.App). The server project is an ASP.NET Core project set up to host the Razor Components.

Razor Components solution

Why two projects? In part it’s to separate the UI logic from the rest of the application. There is also a technical limitation in this preview: we are using the same Razor file extension (.cshtml) for Razor Components that we also use for Razor Pages and Views, but they have different compilation models, so they need to be kept separate. In a future preview we plan to introduce a new file extension for Razor Components (.razor) so that you can easily host your components, pages, and views all in the same project.

When you run the app you should see multiple pages (Home, Counter, and Fetch data) on different tabs. On the Counter page you can click a button to increment a counter without any page refresh. Normally this would require writing JavaScript, but here everything is written using Razor Components in C#!

Razor Components app

Here’s what the Counter component code looks like:

@page "/counter"

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button class="btn btn-primary" onclick="@IncrementCount">Click me</button>

@functions {
 int currentCount = 0;

 void IncrementCount()
 {
 currentCount+=1;
 }
}

Making a request to /counter, as specified by the @page directive at the top, causes the component to render its content. Components render into an in-memory representation of the render tree that can then be used to update the UI in a very flexible and efficient way. Each time the “Click me” button is clicked the onclick event is fired and the IncrementCount method is called. The currentCount gets incremented and the component is rendered again. The runtime compares the newly rendered content with what was rendered previously and only the changes are then applied to the DOM (i.e. the updated count).

You can use components from other components using an HTML-like syntax where component parameters are specified using attributes or child content. For example, you can add a Counter component to the app’s home page like this:

@page "/"

<h1>Hello, world!</h1>

Welcome to your new app.

<Counter />

To add a parameter to the Counter component update the @functions block to add a property decorated with the [Parameter] attribute:

@functions {
 int currentCount = 0;

 [Parameter] int IncrementAmount { get; set; } = 1;

 void IncrementCount()
 {
 currentCount+=IncrementAmount;
 }
}

Now you can specify the IncrementAmount parameter value using an attribute like this:

<Counter IncrementAmount="10" />

The Home page then has its own counter that increments by tens:

Count by tens

This is just an intro to what Razor Components are capable of. Razor Components are based on the Blazor component model and they support all of the same features (parameters, child content, templates, lifecycle events, component references, etc.). To learn more about Razor Components check out the component model docs and try out building your first Razor Components app yourself.
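
As a small taste of two of those features, here is a minimal sketch of a component that takes a parameter and child content via a RenderFragment; the Alert component name is made up for illustration:

Alert.razor

<div class="alert alert-info">
    <h4>@Title</h4>
    @ChildContent
</div>

@functions {
    [Parameter] string Title { get; set; }

    // Markup placed between the component's opening and closing tags is captured here.
    [Parameter] RenderFragment ChildContent { get; set; }
}

It could then be used from another component like this:

<Alert Title="Heads up">
    <p>The counter resets when you refresh the page.</p>
</Alert>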

Hosting Razor Components

Because Razor Components decouple a component’s rendering logic from how the UI updates get applied, there is a lot of flexibility in how Razor Components can be hosted. ASP.NET Core Razor Components in .NET Core 3.0 adds support for hosting Razor Components on the server in an ASP.NET Core app where all UI updates are handled over a SignalR connection. The runtime handles sending UI events from the browser to the server and then applies UI updates sent by the server back to the browser after running the components. The same connection is also used to handle JavaScript interop calls.

ASP.NET Core Razor Components

Alternatively, Blazor is an experimental single page app framework that runs Razor Components directly in the browser using a WebAssembly based .NET runtime. In Blazor apps the UI updates from the Razor Components are all applied in process directly to the DOM.

Blazor

Support for the client-side Blazor hosting model using WebAssembly won’t ship with ASP.NET Core 3.0, but we are working towards shipping it with a later release.

Regardless of which hosting model you use for your Razor Components, the component model is the same. The same Razor Components can be used with either hosting model. You can even switch your app back and forth between being a client-side Blazor app and Razor Components running in ASP.NET Core, using the same components, as long as your components haven’t taken any server-specific dependencies.

JavaScript interop

Razor Components can also use client-side JavaScript if needed. From a Razor Component you can call into any browser API or into an existing JavaScript library running in the browser. .NET library authors can use JavaScript interop to provide .NET wrappers for JavaScript APIs, so that they can be conveniently called from Razor Components.

public static class ExampleJsInterop
{
 public static Task<string> Prompt(this IJSRuntime js, string text)
 {
 // showPrompt is implemented in wwwroot/exampleJsInterop.js
 return js.InvokeAsync<string>("exampleJsFunctions.showPrompt", text);
 }
}
@inject IJSRuntime JS

<button onclick="@OnClick">Show prompt</button>

@functions {
 string name;

 async Task OnClick() {
 name = await JS.Prompt("Hi! What's your name?");
 }
}

Both Razor Components and Blazor share the same JavaScript interop abstraction, so .NET libraries relying on JavaScript interop are usable by both types of apps. Check out the JavaScript interop docs for more details on using JavaScript interop and the Blazor community page for existing JavaScript interop libraries.

Sharing component libraries

Components can be easily shared and reused just like you would normal .NET classes. Razor Components can be built into component libraries and then shared as NuGet packages. You can find existing component libraries on the Blazor community page.

The .NET Core 3.0 Preview 2 SDK doesn’t include a project template for Razor Component Class Libraries yet, but we expect to add one in a future preview. In the meantime, you can use the Blazor Component Class Library template.

dotnet new -i Microsoft.AspNetCore.Blazor.Templates
dotnet new blazorlib

In this preview release ASP.NET Core Razor Components don’t yet support using static assets in component libraries, so the support for component class libraries is pretty limited. However, in a future preview we expect to add this support for using static assets from a library just like you can in Blazor today.

Integration with MVC Views and Razor Pages

Razor Components can be used with your existing Razor Pages and MVC apps. There is no need to rewrite existing views or pages to use Razor Components. Components can be used from within a view or page. When the page or view is rendered, any components used will be prerendered at the same time.

To render a component from a Razor Page or MVC View in this release, use the RenderComponentAsync<TComponent> HTML helper method:

<div id="Counter">
 @(await Html.RenderComponentAsync<Counter>(new { IncrementAmount = 10 }))
</div>

Components rendered from pages and views will be prerendered, but are not yet interactive (i.e. clicking the Counter button doesn’t do anything in this release). This will get addressed in a future preview, along with adding support for rendering components from pages and views using the normal element and attribute syntax.

While views and pages can use components, the converse is not true: components can’t use view- and page-specific features, like partial views and sections. If you want to use logic from a partial view in a component, you’ll need to factor that logic out into a component first.

Cross platform tooling

Visual Studio 2019 comes with built-in editor support for Razor Components including completions and diagnostics in the editor. You don’t need to install any additional extensions.

Razor Components tooling

Razor Component tooling isn’t available yet in Visual Studio for Mac or Visual Studio Code, but it’s something we are actively working on.

A bright future for Blazor

In parallel with the ASP.NET Core 3.0 work, we will continue to ship updated experimental releases of Blazor to support hosting Razor Components client-side in the browser (we’ll have more to share on the latest Blazor update shortly!). While in ASP.NET Core 3.0 we will only support hosting Razor Components in ASP.NET Core, we are also working towards shipping Blazor and support for running Razor Components in the browser on WebAssembly in a future release.

SignalR client-to-server streaming

With ASP.NET Core SignalR we added streaming support, which enables streaming return values from server-side methods. This is useful when fragments of data arrive over a period of time.
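
As a refresher, a server-to-client streaming Hub method returns a ChannelReader<T> and writes items to the channel as they become available. Here's a minimal sketch (the hub and method names are illustrative, not from any template):

public class StreamingHub : Hub
{
    // Streams 'count' integers to the caller, one every 'delayMilliseconds'.
    public ChannelReader<int> Counter(int count, int delayMilliseconds)
    {
        var channel = Channel.CreateUnbounded<int>();

        // Don't await: return the reader immediately and write in the background.
        _ = WriteItemsAsync(channel.Writer, count, delayMilliseconds);

        return channel.Reader;
    }

    private static async Task WriteItemsAsync(ChannelWriter<int> writer, int count, int delayMilliseconds)
    {
        for (var i = 0; i < count; i++)
        {
            await writer.WriteAsync(i);
            await Task.Delay(delayMilliseconds);
        }

        writer.Complete();
    }
}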

With .NET Core 3.0 Preview 2 we’ve added client-to-server streaming. With client-to-server streaming, your server-side methods can take instances of a ChannelReader<T>. In the C# code sample below, the StartStream method on the Hub will receive a stream of strings from the client.

public async Task StartStream(string streamName, ChannelReader<string> streamContent)
{
 // read from and process stream items
 while (await streamContent.WaitToReadAsync(Context.ConnectionAborted))
 {
 while (streamContent.TryRead(out var content))
 {
 // process content
 }
 }
}

Clients would use the SignalR Subject (or an RxJS Subject) as an argument to the streamContent parameter of the Hub method above.

let subject = new signalR.Subject();
await connection.send("StartStream", "MyAsciiArtStream", subject);

The JavaScript code would then use the subject.next method to handle strings as they are captured and ready to be sent to the server.

subject.next("example");
subject.complete();

Using code like the two snippets above, you can create real-time streaming experiences. For a preview of what you can do with client-to-server streaming with SignalR, take a look at the demo site, streamr.azurewebsites.net. If you create your own stream, you can stream ASCII art representations of image data captured by your local web cam to the server, where it will be bounced out to other clients who are watching your stream.

Client-to-server Streaming with SignalR
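
The .NET client can take part in client-to-server streaming in a similar way by passing a ChannelReader<T> where the Hub method expects its stream parameter. Here's a hedged sketch, assuming an already-started HubConnection named connection:

// Create a channel and hand its reader to the hub method's stream parameter.
var channel = Channel.CreateUnbounded<string>();
await connection.SendAsync("StartStream", "MyAsciiArtStream", channel.Reader);

// Write items as they are produced, then complete the stream.
await channel.Writer.WriteAsync("example");
channel.Writer.Complete();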

System.IO.Pipelines on HttpContext

In ASP.NET Core 3.0, we’re working on consuming the System.IO.Pipelines API and exposing it in ASP.NET Core to allow you to write more performant applications.

In Preview 2, we’re exposing the request body pipe and the response body pipe on the HttpContext, which you can read from and write to directly, in addition to maintaining the existing Stream-based APIs.
While these pipes are currently just wrappers over the existing streams, we will directly expose the underlying pipes in a future preview.

Here’s an example that demonstrates using both the request body and response body pipes directly.

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
 if (env.IsDevelopment())
 {
 app.UseDeveloperExceptionPage();
 }

 app.UseRouting(routes =>
 {
 routes.MapGet("/", async context =>
 {
 await context.Response.WriteAsync("Hello World");
 });

 routes.MapPost("/", async context =>
 {
 while (true)
 {
 var result = await context.Request.BodyPipe.ReadAsync();
 var buffer = result.Buffer;

 if (result.IsCompleted)
 {
 break;
 }

 context.Request.BodyPipe.AdvanceTo(buffer.End);
 }
 });
 });
}

Generic host in templates

The templates have been updated to use the Generic Host instead of WebHostBuilder as they have in the past:

public static void Main(string[] args)
{
 CreateHostBuilder(args).Build().Run();
}

public static IHostBuilder CreateHostBuilder(string[] args) =>
 Host.CreateDefaultBuilder(args)
 .ConfigureWebHostDefaults(webBuilder =>
 {
 webBuilder.UseStartup<Startup>();
 });

This is part of the ongoing plan started in 2.0 to better integrate ASP.NET Core with other server scenarios that are not web specific.

What about IWebHostBuilder?

The IWebHostBuilder interface that is used with WebHostBuilder today will be kept, and is the type of the webBuilder used in the sample code above. We intend to deprecate and eventually remove WebHostBuilder itself as its functionality will be replaced by HostBuilder, though the interface will remain.

The biggest difference between WebHostBuilder and HostBuilder is that you can no longer inject arbitrary services into your Startup.cs. Instead you will be limited to the IHostingEnvironment and IConfiguration interfaces. This removes a behavior quirk related to injecting services into Startup.cs before the ConfigureServices method is called. We will publish more details on the differences between WebHostBuilder and HostBuilder in a future deep-dive post.
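
In practice, that means a Startup constructor like the following continues to work with the generic host (a minimal sketch):

public class Startup
{
    // Only framework-supplied services like IConfiguration and IHostingEnvironment
    // can be injected into Startup when using the generic host.
    public Startup(IConfiguration configuration, IHostingEnvironment environment)
    {
        Configuration = configuration;
        Environment = environment;
    }

    public IConfiguration Configuration { get; }
    public IHostingEnvironment Environment { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // Register application services here as usual.
    }
}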

Endpoint routing updates

We’re excited to start introducing more of the Endpoint Routing story that began in 2.2. Endpoint routing allows frameworks like MVC, as well as other routable things, to mix with middleware in a way that hasn’t been possible before. This is now present in the project templates in 3.0.0-preview-2, and we’ll continue to add more richness as we move closer to a final release.

Here’s an example:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
 if (env.IsDevelopment())
 {
 app.UseDeveloperExceptionPage();
 }

 app.UseStaticFiles();

 app.UseRouting(routes =>
 {
 routes.MapApplication();

 routes.MapGet("/hello", context =>
 {
 return context.Response.WriteAsync("Hi there!"); 
 });

 routes.MapHealthChecks("/healthz");
 });

 app.UseAuthentication();
 app.UseAuthorization();
}

There are a few things to unpack here.

First, the UseRouting(...) call adds a new Endpoint Routing middleware. UseRouting is at the core of many of the templates in 3.0 and replaces many of the features that were implemented inside UseMvc(...) in the past.

Also notice that inside UseRouting(...) we’re setting up a few things. MapApplication() brings in MVC controllers and pages for routing. MapGet(...) shows how to wire up a request delegate to routing. MapHealthChecks(...) hooks up the health check middleware, but by plugging it into routing.

What might be surprising to see is that some middleware now come after UseRouting. Let’s tweak this example to demonstrate why that is valuable.

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
 if (env.IsDevelopment())
 {
 app.UseDeveloperExceptionPage();
 }

 app.UseStaticFiles();

 app.UseRouting(routes =>
 {
 routes.MapApplication();

 routes.MapGet("/hello", context =>
 {
 return context.Response.WriteAsync("Hi there! Here's your secret message"); 
 })
 .RequireAuthorization(new AuthorizeAttribute(){ Roles = "secret-messages", });

 routes.MapHealthChecks("/healthz").RequireAuthorization("admin");
 });

 app.UseAuthentication();
 app.UseAuthorization();
}

Now I’ve added an AuthorizeAttribute to my request delegate. This is just like placing [Authorize(Roles = "secret-messages")] on an action method in a controller. We’ve also given the health checks middleware an authorization policy (by policy name).

This works because the following steps happen in order (ignoring what happens before routing):

  1. UseRouting(...) makes a routing decision – selecting an Endpoint
  2. UseAuthorization() looks at the Endpoint that was selected and runs the corresponding authorization policy
  3. At the end of the middleware pipeline the Endpoint is executed (if no endpoint was matched, a 404 response is returned)

So think of UseRouting(...) as making a deferred routing decision – where middleware that appear after it run in the middle. Any middleware that run after routing can see the results and read or modify the route data and chosen endpoint. When processing reaches the end of the pipeline, then the endpoint is invoked.

What is an Endpoint and why did we add this?

An Endpoint is a new primitive to help frameworks (like MVC) be friends with middleware. Fundamentally an Endpoint is a request delegate (something that can execute) plus a bag of metadata (policies).

Here’s an example middleware – you can use this to examine endpoints in the debugger or by printing to the console:

app.Use(next => (context) =>
{
 var endpoint = context.GetEndpoint();
 if (endpoint != null)
 {
 Console.WriteLine("Name: " + endpoint.DisplayName);
 Console.WriteLine("Route: " + (endpoint as RouteEndpoint)?.RoutePattern);
 Console.WriteLine("Metadata: " + string.Join(", ", endpoint.Metadata));
 }

 return next(context);
});

In the past we haven’t had a good solution when we’ve wanted to implement a policy like CORS or Authorization in both middleware and MVC. Putting a middleware in the pipeline feels very good because you get to configure the order. Putting filters and attributes on methods in controllers feels really good when you need to apply policies to different parts of the application. Endpoints bring together all of these advantages.

Consider an additional problem: what do you do if you’re writing the health checks middleware? You might want to secure your middleware in a way that developers can customize. Being able to leverage the ASP.NET Core features for this directly avoids the need to build in support for cross-cutting concerns in every component that serves HTTP.

In addition to removing code duplication from MVC, the Endpoint + Middleware solution can be used by any other ASP.NET Core-based technologies. You don’t even need to use UseRouting(...) – all that is required to leverage the enhancements to middleware is to set an Endpoint on the HttpContext.
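
For example, a middleware could associate an endpoint with the request itself, without going through UseRouting. This is a rough sketch; the SetEndpoint extension used here assumes the endpoint APIs as they exist in current previews:

app.Use(next => context =>
{
    // An Endpoint is a request delegate plus metadata. Setting it on the HttpContext
    // lets later middleware (like authorization) see and honor that metadata.
    context.SetEndpoint(new Endpoint(
        requestDelegate: ctx => ctx.Response.WriteAsync("Hello from a hand-built endpoint"),
        metadata: new EndpointMetadataCollection(new AuthorizeAttribute()),
        displayName: "Manual endpoint"));

    return next(context);
});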

What’s integrated with this?

We added the new authorize middleware so that you can start doing more powerful security things with just middleware. The authorize middleware can accept a default policy that applies when there’s no endpoint, or the endpoint doesn’t specify a policy.
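
One way to express such a default is through the authorization options in ConfigureServices. The sketch below uses a fallback policy that applies to endpoints without their own authorization metadata; treat the exact property name as an assumption for this preview:

services.AddAuthorization(options =>
{
    // Require an authenticated user for anything that doesn't specify its own policy.
    options.FallbackPolicy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .Build();
});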

CORS is also now endpoint routing aware and will use the CORS policy specified on an endpoint.

MVC also plugs in to endpoint routing and will create endpoints for all of your controllers and pages. MVC can now be used with the CORS and authorize features and will largely work the same. We’ve long had confusion about whether to use the CORS middleware or CORS filters in MVC; the updated guidance is to use both. This allows you to provide CORS support to other middleware or static files, while still applying more granular CORS policies with the existing attributes.
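
For non-MVC endpoints, a named CORS policy can be attached directly to the endpoint. A sketch, assuming a policy called "AllowContoso" was registered with AddCors and that the RequireCors convention is available in this preview:

app.UseRouting(routes =>
{
    routes.MapGet("/widgets", async context =>
    {
        await context.Response.WriteAsync("[]");
    }).RequireCors("AllowContoso"); // apply the named CORS policy to just this endpoint
});

app.UseCors();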

Health checks also provide methods to register the health checks middleware as a router-ware (as shown above). This allows you to specify other kinds of policies for health checks.

Finally, new in ASP.NET Core 3.0 preview 2 is host matching for routes. Placing the HostAttribute on an MVC controller or action will prompt the routing system to require the specified domain or port. Or you can use RequireHost in your Startup.cs:

app.UseRouting(routes =>
{
 routes.MapGet("/", context => context.Response.WriteAsync("Hi Contoso!"))
 .RequireHost("contoso.com");

 routes.MapGet("/", context => context.Response.WriteAsync("Hi AdventureWorks!"))
 .RequireHost("adventure-works.com");

 routes.MapHealthChecks("/healthz").RequireHost("*:8080");
});
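
The attribute form for MVC might look like the following sketch (the attribute accepts one or more host patterns; the controller and action names are illustrative):

[Host("contoso.com", "*.contoso.com")]
public class ContosoController : Controller
{
    [Host("contoso.com:8080")] // can also be applied to individual actions
    public IActionResult Index() => View();
}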

Do you think that there are things that are missing from the endpoint story? Are there more things we should make smarter or more integrated? Please let us know what you’d like to see.

Give feedback

We hope you enjoy the new features in this preview release of ASP.NET Core! Please let us know what you think by filing issues on GitHub.


New Godot 3.1 Tutorial Series! Creating a Complete 2D Game Step by Step

We just published a brand new 18 part text tutorial series over on DevGa.me, Getting Started with Godot Step by Step Tutorial Series.  This tutorial walks you through the entire game creation process using Godot 3.1, from creating your initial project to publishing your game, with detailed step by step instructions and screenshots.  Even better, it’s got professional quality art assets from Game Developer Studios and is completely open source!

The tutorial consists of:

Getting Started with Godot

Setup and Project Creation

Creating your Title Screen

Playing Background Music

Global Data via Autorun

Creating a Simple UI

Creating the Main Game Scene

Creating Parallax Clouds

Creating the Player

Handling Input

Add a Scene Animation

Creating Bullets

Creating the Enemies

Configuring the Collisions

Populating the Game World

Adding Shooting to the Game

Making Things Explode

The Final Code

Building your Game for Windows

If you need more detailed information on any subject we cover, be sure to check out our existing Godot 3 Tutorial series, which goes into much more technical detail.  There will be a step by step video version available shortly.  There is also a 70-page PDF version of this tutorial available to Patreon backers.
