
Exploring Azure App Service – Azure Diagnostics

If you’ve followed our previous posts about using Azure App Service to host web apps in the cloud (1. Introduction to App Service, 2. Hosting web apps that use SQL), you’re already familiar with how easy it is to get an app running in the cloud. But what if the app doesn’t work correctly (it’s crashing, running too slowly, etc.)? In this post we’ll explore how to use Azure Application Insights to get always-on diagnostics that will help you find and fix these types of issues.

Azure Application Insights provides seamless integration with Azure App Service for monitoring, triaging and diagnosing code-level issues. It can easily be enabled on an existing web app with a single button click. Profiler traces and exception snapshots will be available for identifying the lines of code that caused issues. Let’s take a closer look at this experience.

Enable Application Insights on an existing web app to investigate issues

We recommend you follow along using your own application, but if you don’t have one readily available we will be using the Contoso University sample. The instructions on how to deploy it to App Service are in the readme file.

If you are following along with Contoso University, you will notice the Instructors page throws errors. In addition, the Courses tab loads a bit slower than others.

Let’s turn on Application Insights to investigate. After the app is deployed to App Service, Application Insights can be enabled from the App Service portal by clicking the button on the Overview blade:

Fill out the Application Insights enablement blade the following way:

  • Create a new resource, with an appropriate name.
  • Leave the default options for code-level diagnostics on. The Code level diagnostics section is on by default, enabling Profiler and Snapshot Debugger for diagnosing slow app performance and runtime exceptions.
  • Click the ‘OK’ button. When prompted to restart the web app, click ‘Continue’ to proceed.

Once Application Insights is enabled, you will be able to see its connection status. You can now navigate to the Application Insights overview blade by clicking on the link next to the green check mark:

Generate traffic for your web app in order to reproduce issues

Let’s generate some traffic to reproduce our two issues: a) the Courses page loading slowly, b) the Instructors page throwing errors. You are going to use a performance test in the Azure portal to make requests to the following URLs:

  • http://<your_webapp_name>.azurewebsites.net/Courses
  • http://<your_webapp_name>.azurewebsites.net/Instructors

You are going to create two different performance tests, one for each URL. For the Courses page, simulate 250 users over a duration of 5 minutes, since you are experiencing a performance issue. For the Instructors page, you can simulate 10 users over 1 minute, since we’d generally expect fewer instructors than users on the site at any given time.

Profile your app during performance tests to identify performance issues

A performance test will generate traffic, but if you want to understand how your app performs, you have to analyze its performance while the test is running using a profiler. Let’s do exactly that, using the Performance | Configure Profiler blade while one of our performance tests is running:

Later on we will walk through how to use the profiler to identify the exact lines of code responsible for the slowdown. For now, let’s focus on simply profiling. After clicking “Profile now”, wait until the performance test finishes and the profiler has completed its analysis. Once a new entry appears in the “Recent profiling sessions” section, navigate back to the Overview blade.

Diagnose runtime exceptions

If your web app has any failed requests, you need to find the root cause and fix the issues to ensure your app works reliably. After running the performance test on the Instructors page, you have some example failed requests to look at:

Let’s go to the Failures blade to investigate what failed.

On the Failures blade, the top failed operation is ‘GET Instructors/Index’ and the top exception is ‘InvalidOperationException’. Click on the ‘COUNT’ column of the exception count to open a list of sample exceptions. Click on the suggested exception to open the End-to-End Transaction blade.

In the End-to-End Transaction blade, you can see ‘System.InvalidOperationException’ exceptions thrown from GET Instructors/Index operation. Select the exception row and click on ‘Open debug snapshot’.

If this is the first time you use the Snapshot Debugger, you will be prompted to request RBAC access to snapshots, which safeguards the potentially sensitive data shown in local variables:

Once you get access, you will see a snapshot view like the following:

From the local variables and call stack, you can see the exception is thrown at InstructorsController.cs line 56 with message ‘Nullable object must have a value’. This provides sufficient information to proceed with debugging the app’s source code.

To get a better experience, download the snapshot and debug it using Visual Studio Enterprise. You will be able to interactively see which line of code caused the exception:

Looking at the code, the exception was thrown because ‘id’ is null and the appropriate check is missing. When your app is running in the cloud, resources are dynamically provisioned and destroyed, so you may not always get the chance to inspect the state of a failed component before it is reset. The Snapshot Debugger solves this by capturing the local variables and call stack at the time the exception is thrown, preserving the failed state even after the component itself has been recycled.
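As for the missing check, a minimal sketch of the guard (the controller code below is abbreviated and illustrative, not the exact Contoso University source):

// In InstructorsController.Index, guard the nullable route parameter before
// dereferencing it; 'id' is null when no instructor is selected.
if (id != null)
{
    ViewData["InstructorID"] = id.Value;
    // ...load the courses for the selected instructor...
}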

Diagnose bad performance from your web app

The server response time chart on the overview blade provides quick information on how performant your web app is over time. In the Contoso example, we see some spikes on the chart indicating there is a short period of time where users experienced slow responses for the requests they made.

To investigate further, navigate to the Performance blade in App Insights portal:

Zoom into the time range 6pm-10pm in the ‘Operation times: zoom into range’ chart to see spikes on the Request count.

Sort the Operations table by duration and request count to figure out where your web app spent the most time. In our example GET Courses/Index operation has the longest average duration. Let’s click on it to investigate why this operation takes so long.

The right side of the blade provides insights based on the Courses/Index operation.

From the chart you can see that although most operations only took around 400ms, the worst ones took up to 35 seconds. This means some users are getting a really bad experience when hitting this URL, and you should address it. Let’s look at the Profiler traces to troubleshoot why GET Courses/Index was so slow.

You should see profiler traces similar to the following:

The code path that’s taking the most time is prefixed with the flame icon. In this particular request, most of the time was spent reading the list of courses from the database. Browsing to the code in CoursesController.cs, we can see that the AsNoTracking() optimization is not used when loading the list of courses. By default, Entity Framework turns on change tracking, which caches the results so it can detect later modifications. This can add roughly 30% overhead to your app’s performance. For a simple read-only operation like this one, we can optimize the query by using the AsNoTracking() option, as sketched below.
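Here is a minimal sketch of that change (the query shape and names are illustrative, not the exact sample code):

// Read-only query: AsNoTracking() tells EF not to track the returned entities,
// avoiding change-tracking overhead for pages that never modify them.
var courses = await _context.Courses
    .Include(c => c.Department)
    .AsNoTracking()
    .ToListAsync();
return View(courses);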

Conclusion

We hope that you find it easy to use Application Insights to diagnose performance and errors in your web apps. We believe Azure App Service is a great place to get started hosting and maintaining your web apps. You don’t have to enable App Insights upfront; the option is always there to be turned on when and as needed without re-deployment.

If you have any questions or issues, let us know by leaving comments below.

Catherine Wang, Program Manager, VS and .NET

Catherine is on the Azure developer experience team and is responsible for Azure diagnostics, security, and storage tools.


Razor Improvements – Feedback Wanted

In recent releases of Visual Studio 2017, there has been a great focus on improving the experience of working with Razor files (*.cshtml). The improvements were aimed at addressing the most pressing customer-facing issues and included changes from formatting and IntelliSense to general performance and reliability. Now that the fixes and enhancements have been publicly available for a few months, we hope you’ve been having a much-improved experience with the Razor editor.

Please let us know how we’re doing by taking a short, two-minute survey. Also, feel free to leave relevant feedback in the comments section below.

If you haven’t already downloaded the latest version, update your copy of Visual Studio 2017 through the Visual Studio Installer, or follow the links to download the installer from the Visual Studio website.

Razor editor in Visual Studio IDE

We know that despite our improvements, Razor editing isn’t perfect yet, so if you run into issues please file a report using the Visual Studio feedback tool. We review this feedback frequently and will continue to fix issues that are identified.

To launch the feedback tool, choose “Report a Problem…” under the Help->Send Feedback menu. When filing a report, please provide as much of your Razor file as you can share, with a description of what happened versus what you expected. (Sample code and screenshots are very helpful!)

Report a Problem menu item

Thanks for your interest in Web Development in Visual Studio.
Happy coding!

Justin Clareburt, Senior Program Manager, Visual Studio

Justin Clareburt (justcla) is the Web Tools PM on the Visual Studio team. He has over 20 years of software engineering experience and brings to the team his expert knowledge of IDEs and a passion for creating the ultimate development experience.

Follow Justin on Twitter @justcla78


Workaround for Bower Version Deprecation

As of June 25, the version of Bower shipped with Visual Studio was deprecated, resulting in Bower operations failing when run in Visual Studio. If you use Bower, you will see an error something like:

EINVRES Request to https://bower.herokuapp.com/packages/bootstrap failed with 502

This will be fixed in Visual Studio 15.8. In the meantime, you can work around the issue by using a new version of Bower or by adding some configuration to each Bower project.

The Issue

Some time ago, Bower switched its primary registry feed but continued to support both. On June 25th, the older feed was turned off, and that is the feed the copy of Bower in Visual Studio uses by default.

The Fix

There are two options to fix this issue:

  1. Update the configuration for each Bower project to explicitly use the new registry: https://registry.bower.io
  2. Manually install a newer version of Bower and configure Visual Studio to use it.

Workaround 1: Define the new registry entry in the project’s Bower config file (.bowerrc)

Add a .bowerrc file to the project (or modify the existing .bowerrc) with the following content:

{
 "registry": "https://registry.bower.io"
}

With the registry property defined in a .bowerrc file in the project root, Bower operations should run successfully on the older versions of Bower that shipped with Visual Studio 2015 and Visual Studio 2017.

Workaround 2: Configure Visual Studio to point to use a newer version of Bower

An alternative solution is to configure Visual Studio to use a newer version of Bower that you have installed as a global tool on your machine. (For instructions on how to install Bower, refer to the guidance on the Bower website.) If you have installed Bower via npm, the path to the Bower tools will be contained in the system $(PATH) variable. It might look something like this: C:\Users\[username]\AppData\Roaming\npm. If you make this path available to Visual Studio via the External Web Tools options page, Visual Studio will be able to find and use the newer version.

To configure Visual Studio to use the globally installed Bower tools:

  1. Inside Visual Studio, open Tools->Options.
  2. Navigate to the External Web Tools options page (under Projects and Solutions->Web Package Management).
  3. Select the “$(PATH)” item in the Locations of external tools list box.
  4. Repeatedly press the up-arrow button in the top right corner of the Options dialog until the $(PATH) item is at the top of the list.
  5. Press OK to confirm the change.

Configure External Web Tools Path
Ordered this way, when Visual Studio is searching for Bower tools, it will search your system path first and should find and use the version you installed, rather than the older version that ships with Visual Studio in the $(VSINSTALLDIR)\Web\External directory.

Note: This change affects path resolution for all external tools. So, if you have Grunt, Gulp, npm or any other external tools on the system path, those tools will be used in preference to any other versions that shipped with VS. If you only want to change the path for Bower, leave the system path where it is and add a new entry at the top of the list that points to the instance of Bower installed locally. It might look something like this: C:\Users\[username]\AppData\Roaming\npm\node_modules\bower\bin

We trust this will solve any issues related to the recent outage of the older Bower registry. If you have any questions or comments, please leave them below.

Happy coding!

Justin Clareburt, Senior Program Manager, Visual Studio

Justin Clareburt (justcla) is the Web Tools PM on the Visual Studio team. He has over 20 years of software engineering experience and brings to the team his expert knowledge of IDEs and a passion for creating the ultimate development experience.

Follow Justin on Twitter @justcla78


Changes to script debugging in Visual Studio 15.7

We’re always looking for ways to make developing with Visual Studio faster.  One of the tasks developers do many times a day is launching debugging sessions.  We identified that script debugging added about 1.5s per F5, but only about 15.5% of people actively debugged script using Visual Studio.

Based on the above, in Visual Studio 15.7 we made the decision to turn off script debugging by default to improve the overall experience for most users. If you want to turn the capability back on, you can do so from Tools | Options | Debugging by checking “Enable JavaScript debugging for ASP.NET (Chrome, Edge, and IE)”:

We also added the following dialog when you attempt to set a breakpoint with script debugging disabled:

When script debugging is ON, Visual Studio automatically stops debugging when the browser window is closed. It will also close the browser window if you stop debugging in Visual Studio. We added the same capability to Visual Studio when script debugging is OFF under Tools | Options | Project and Solutions | Web Projects:

With this option enabled:

  • Visual Studio will open a new browser window when debugging starts
  • Visual Studio will stop debugging when the browser window is closed

The following matrix shows you all the available options and the expected behavior for each combination:

Enable JavaScript debugging for ASP.NET (Chrome, Edge and IE) | Stop debugger when browser window is closed | What happens when you start debugging | What happens when you stop debugging | What happens when you close the browser window
TRUE | TRUE | New browser window always pops up | New browser window always goes away, with all open tabs | Debugging stops
TRUE | FALSE | New browser window always pops up | New browser window always goes away, with all open tabs | Debugging stops
FALSE | TRUE | New browser window always pops up | New browser window always goes away, with all open tabs | Debugging stops
FALSE | FALSE | Opens new tab if browser window already exists | Browser tab/window stays open | Debugging continues

If you want Visual Studio to return to its default pre-15.7 behavior, all you have to do is enable script debugging in Tools | Options | Debugging by checking “Enable JavaScript debugging for ASP.NET (Chrome, Edge, and IE)”. If you notice any unexpected behavior with these options, please use Report a Problem in Visual Studio to let us know. If you have any feedback or suggestions regarding this change, please let us know on UserVoice or simply post a reply to this post.


Blazor 0.4.0 experimental release now available

Blazor 0.4.0 is now available! This release includes important bug fixes and several new feature enhancements.

New features in Blazor 0.4.0 (details below):

  • Add event payloads for common event types
  • Use camelCase for JSON handling
  • Automatic import of core Blazor namespaces in Razor
  • Send and receive binary HTTP content using HttpClient
  • Templates run on IIS Express by default with autobuild enabled
  • Bind to numeric types
  • JavaScript interop improvements

A full list of the changes in this release can be found in the Blazor 0.4.0 release notes.

Get Blazor 0.4.0

To get set up with Blazor 0.4.0:

  1. Install the .NET Core 2.1 SDK (2.1.300 or later).
  2. Install Visual Studio 2017 (15.7) with the ASP.NET and web development workload selected.
    • Note: The Blazor tooling isn’t currently compatible with the VS2017 preview channel (15.8). This will be addressed in a future Blazor release.
  3. Install the latest Blazor Language Services extension from the Visual Studio Marketplace.

To install the Blazor templates on the command-line:

dotnet new -i Microsoft.AspNetCore.Blazor.Templates

You can find getting started instructions, docs, and tutorials for Blazor at https://blazor.net.

Upgrade an existing project to Blazor 0.4.0

To upgrade an existing Blazor project from 0.3.0 to 0.4.0:

  • Install all of the required bits listed above.
  • Update your Blazor package and .NET CLI tool references to 0.4.0.

Your upgraded Blazor project file should look like this:

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <RunCommand>dotnet</RunCommand>
    <RunArguments>blazor serve</RunArguments>
    <LangVersion>7.3</LangVersion>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.Blazor.Browser" Version="0.4.0" />
    <PackageReference Include="Microsoft.AspNetCore.Blazor.Build" Version="0.4.0" />
    <DotNetCliToolReference Include="Microsoft.AspNetCore.Blazor.Cli" Version="0.4.0" />
  </ItemGroup>
</Project>

Event payloads for common event types

This release adds payloads for the following event types:

Event arguments | Events
UIMouseEventArgs | onmouseover, onmouseout, onmousemove, onmousedown, onmouseup, oncontextmenu
UIDragEventArgs | ondrag, ondragend, ondragenter, ondragleave, ondragover, ondragstart, ondrop
UIPointerEventArgs | gotpointercapture, lostpointercapture, pointercancel, pointerdown, pointerenter, pointerleave, pointermove, pointerout, pointerover, pointerup
UITouchEventArgs | ontouchcancel, ontouchend, ontouchmove, ontouchstart, ontouchenter, ontouchleave
UIWheelEventArgs | onwheel, onmousewheel
UIKeyboardEventArgs | onkeydown, onkeyup, onkeypress
UIProgressEventArgs | onloadstart, ontimeout, onabort, onload, onloadend, onprogress, onerror

Thank you to Gutemberg Ribeiro (galvesribeiro) for this contribution! If you haven’t checked out Gutemberg’s handy collection of Blazor extensions they are definitely worth a look.

Use camelCase for JSON handling

The Blazor JSON helpers and utilities now use camelCase by default. .NET objects serialized to JSON are serialized using camelCase for the member names. On deserialization a case-insensitive match is used. The casing of dictionary keys is preserved.
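As an illustration (the type, property names, and endpoint are hypothetical, and Http is assumed to be an injected HttpClient used with Blazor’s JSON helpers), camelCase JSON maps onto PascalCase .NET members like this:

public class WeatherForecast
{
    public DateTime Date { get; set; }       // serialized to/from "date"
    public int TemperatureC { get; set; }    // serialized to/from "temperatureC"
    public string Summary { get; set; }      // serialized to/from "summary"
}

// Deserialization is case-insensitive, so either "temperatureC" or
// "TemperatureC" in the payload binds to the TemperatureC property.
var forecasts = await Http.GetJsonAsync<WeatherForecast[]>("api/SampleData/WeatherForecasts");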

Automatic import of core Blazor namespaces in Razor

Blazor now automatically imports the Microsoft.AspNetCore.Blazor and Microsoft.AspNetCore.Blazor.Components namespaces in Razor files, so you don’t need to add @using statements for them. One less thing for you to do!

Send and receive binary HTTP content using HttpClient

You can now use HttpClient to send and receive binary data from a Blazor app (previously you could only handle text content). Thank you Robin Sue (Suchiman) for this contribution!
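For example (the URLs are illustrative, and Http is assumed to be an injected HttpClient), you can now round-trip raw bytes:

// Download binary content - previously only text responses could be read.
byte[] imageBytes = await Http.GetByteArrayAsync("images/logo.png");

// Upload binary content as the request body.
var response = await Http.PostAsync("api/images", new ByteArrayContent(imageBytes));
response.EnsureSuccessStatusCode();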

Bind to numeric types

Binding now works with numeric types: long, float, double, decimal. Thanks again to Robin Sue (Suchiman) for this contribution!

Templates run on IIS Express by default with autobuild enabled

The Blazor project templates are now setup to run on IIS Express by default, while still preserving autobuild support.

JavaScript interop improvements

Call async JavaScript functions from .NET

With Blazor 0.4.0 you can now call and await registered JavaScript async functions like you would an async .NET method using the new RegisteredFunction.InvokeAsync method. For example, you can register an async JavaScript function so it can be invoked from your Blazor app like this:

Blazor.registerFunction('BlazorLib1.DelayedText', function (text) {
    // Wait 1 sec and then return the specified text
    return new Promise((resolve, reject) => {
        setTimeout(() => {
            resolve(text);
        }, 1000);
    });
});

You then invoke this async JavaScript function using InvokeAsync like this:

public static class ExampleJSInterop
{
    public static Task<string> DelayedText(string text)
    {
        return RegisteredFunction.InvokeAsync<string>("BlazorLib1.DelayedText", text);
    }
}

Now you can await the async JavaScript function like you would any normal C# async method:

var text = await ExampleJSInterop.DelayedText("See ya in 1 sec!");

Call .NET methods from JavaScript

Blazor 0.4.0 makes it easy to call sync and async .NET methods from JavaScript. For example, you might call back into .NET when a JavaScript callback is triggered. While calling into .NET from JavaScript was possible with earlier Blazor releases, the pattern was low-level and difficult to use. Blazor 0.4.0 provides a simpler pattern with the new Blazor.invokeDotNetMethod and Blazor.invokeDotNetMethodAsync functions.

To invoke a .NET method from JavaScript the target .NET method must meet the following criteria:

  • Static
  • Non-generic
  • No overloads
  • Concrete JSON serializable parameter types

For example, let’s say you wanted to invoke the following .NET method when a timeout is triggered:

namespace Alerts
{
    public class Timeout
    {
        public static void TimeoutCallback()
        {
            Console.WriteLine("Timeout triggered!");
        }
    }
}

You can call this .NET method from JavaScript using Blazor.invokeDotNetMethod like this:

Blazor.invokeDotNetMethod({
    type: {
        assembly: 'MyTimeoutAssembly',
        name: 'Alerts.Timeout'
    },
    method: {
        name: 'TimeoutCallback'
    }
});

When invoking an async .NET method from JavaScript, if the .NET method returns a Task, the invokeDotNetMethodAsync function will return a Promise that completes with the task’s result (so JavaScript/TypeScript code can also await it).

Summary

We hope you enjoy this latest preview of Blazor. Your feedback is especially important to us during this experimental phase for Blazor. If you run into issues or have questions while trying out Blazor please file issues on GitHub. You can also chat with us and the Blazor community on Gitter if you get stuck or to share how Blazor is working for you. After you’ve tried out Blazor for a while please also let us know what you think by taking our in-product survey. Just click the survey link shown on the app home page when running one of the Blazor project templates:

Blazor survey

Thanks for trying out Blazor!


Use Dependency Injection In WebForms Application

The Dependency Injection design pattern is widely used in modern applications.  It decouples objects so that no client code needs to be changed simply because an object it depends on changes to a different one.  It brings you a lot of benefits, like reduced coupling, more reusable code, more testable code, and so on.  In the past, it was very difficult to use Dependency Injection in a WebForms application.  Starting from .NET Framework 4.7.2, it is now easy for developers to use Dependency Injection in WebForms applications.  With the UnityAdapter, you can add it to your existing WebForms application in 4 simple steps.

How to enable Dependency Injection in your existing WebForms application

Suppose you have a movie website which lists the most popular movies in history.  You use the repository pattern to separate the logic that retrieves the data and maps it to the business entity.  Currently you are creating the business logic object and repository object in the default.aspx page; the code looks like the snippet below.
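The original post showed this as a screenshot; a minimal sketch of that “before” pattern (the MovieManager, MovieRepository, and MovieGrid names are illustrative, not the exact sample code) looks roughly like this:

public partial class _Default : Page
{
    private MovieManager _movieManager;

    protected void Page_Load(object sender, EventArgs e)
    {
        // The page creates its own dependencies, tightly coupling it to the
        // concrete MovieManager and MovieRepository types.
        _movieManager = new MovieManager(new MovieRepository());

        MovieGrid.DataSource = _movieManager.GetTopMovies();
        MovieGrid.DataBind();
    }
}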

Now, by following the 4 steps below, you will be able to adopt Dependency Injection to decouple the MovieManager from the default.aspx page. The sample web application is in this GitHub repo, and you can use the Git tags to retrieve the code change made in each step.

1. Retarget the project to .NET Framework 4.7.2. (Git Tag: step-1)

Open the project properties and change the target framework of the project to .NET Framework 4.7.2. You will also need to change the targetFramework attribute in the httpRuntime section of the web.config file.

2. Install the Microsoft.AspNet.WebFormsDependencyInjection.Unity NuGet package. (Git Tag: step-2)

3. Register types in Global.asax. (Git Tag: step-3)
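A sketch of what the registration might look like, assuming the package’s AddUnity() extension method and IMovieManager/IMovieRepository abstractions in the sample (see the repo’s step-3 tag for the exact code):

protected void Application_Start(object sender, EventArgs e)
{
    // Create the Unity container used to resolve pages and their dependencies,
    // then register the sample's abstractions against their implementations.
    var container = this.AddUnity();
    container.RegisterType<IMovieRepository, MovieRepository>();
    container.RegisterType<IMovieManager, MovieManager>();
}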

4. Refactor Default.aspx.cs. (Git Tag: step-4)
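And a sketch of the refactored page, which now receives its dependency through the constructor instead of creating it (again, names are illustrative; see the step-4 tag for the real change):

public partial class _Default : Page
{
    private readonly IMovieManager _movieManager;

    // On .NET Framework 4.7.2, WebForms can construct pages through the registered
    // container (via HttpRuntime.WebObjectActivator), so the dependency is injected.
    public _Default(IMovieManager movieManager)
    {
        _movieManager = movieManager;
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        MovieGrid.DataSource = _movieManager.GetTopMovies();
        MovieGrid.DataBind();
    }
}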

Areas that Dependency Injection can be used

There are many areas you can use Dependency Injection in WebForms applications now. Here is a complete list.

  • Pages and controls
    • WebForms page
    • User control
    • Custom control
  • IHttpHandler and IHttpHandlerFactory
  • IHttpModule
  • Providers
    • BuildProvider
    • ResourceProviderFactory
    • Health monitoring provider
    • Any ProviderBase based provider created by System.Web.Configuration.ProvidersHelper.InstantiateProvider. e.g. custom sessionstate provider

Summary

Using the Microsoft.AspNet.WebFormsDependencyInjection.Unity NuGet package on .NET Framework 4.7.2, Dependency Injection can easily be added to your existing WebForms application. Please give it a try and let us know your feedback.


Troubleshooting ASP.NET Core Performance Problems

This is a guest post by Mike Rousos

I recently had an opportunity to help a developer with an ASP.NET Core app that was functionally correct but slow when under a heavy user load. We found a few different factors contributing to the app’s slowdown while investigating, but the majority of the issues were some variation of blocking threads that could have run in a non-blocking way. It was a good reminder for me just how crucial it is to use non-blocking patterns in multi-threaded scenarios like a web app.

Beware of Locks

One of the first problems we noticed (through CPU analysis with PerfView) was that a lot of time was spent in logging code paths. This was confirmed with ad hoc exploration of call stacks in the debugger which showed many threads blocked waiting to acquire a lock. It turns out some common logging code paths in the application were incorrectly flushing Application Insights telemetry. Flushing App Insights requires a global lock and should generally not be done manually during the course of an app’s execution. In this case, though, Application Insights was being flushed at least once per HTTP request and, under load, this became a large bottleneck!

You can see this sort of pattern in the images below from a small repro I made. In this sample, I have an ASP.NET Core 2.0 web API that enables common CRUD operations against an Azure SQL database with Entity Framework Core. Load testing the service running on my laptop (not the best test environment), requests were processed in an average of about 0.27 seconds. After adding a custom ILoggerProvider calling Console.WriteLine inside of a lock, though, the average response time rose to 1.85 seconds – a very noticeable difference for end users. Using PerfView and a debugger, we can see that a lot of time (66% of PerfView’s samples) is spent in the custom logging method and that a lot of worker threads are stuck there (delaying responses) while waiting for their turn with the lock.

Something’s up with this logging call

Threads waiting on lock acquisition

ASP.NET Core’s Console logger used to have some locking like this in versions 1.0 and 1.1, causing it to be slow in high-traffic scenarios, but these issues have been addressed in ASP.NET Core 2.0. It is still a best practice to be mindful of logging in production, though.

For very performance-sensitive scenarios, you can use LoggerMessage to optimize logging even further. LoggerMessage allows defining log messages ahead of time so that message templates don’t need to be parsed every time a particular message is logged. More details are available in our documentation, but the basic pattern is that log messages are defined as strongly-typed delegates:

// This delegate logs a particular predefined message
private static readonly Action<ILogger, int, Exception> _retrievedWidgets =
    LoggerMessage.Define<int>(
        LogLevel.Information,
        new EventId(1, nameof(RetrievedWidgets)),
        "Retrieved {Count} widgets");

// A helper extension method to make it easy to call the
// LoggerMessage-produced delegate from an ILogger
public static void RetrievedWidgets(this ILogger logger, int count) =>
    _retrievedWidgets(logger, count, null);

Then, that delegate is invoked as needed for high-performance logging:

var widgets = await _dbContext.Widgets.AsNoTracking().ToListAsync();
_logger.RetrievedWidgets(widgets.Count);

Keep Asynchronous Calls Asynchronous

Another issue our investigation uncovered in the slow ASP.NET Core app was similar: calling Task.Wait() or Task.Result on asynchronous calls made from the app’s controllers instead of using await. By making controller actions async and awaiting these sorts of calls, the executing thread is freed to go serve other requests while waiting for the invoked task to complete.
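Here is a minimal before/after sketch of that pattern (the controller members are illustrative, reusing the Widgets example from above):

// Blocking: .Result ties up the request thread until the query completes.
[HttpGet]
public IActionResult GetWidgets()
{
    var widgets = _dbContext.Widgets.AsNoTracking().ToListAsync().Result;
    return Ok(widgets);
}

// Non-blocking: the thread returns to the pool while the query is awaited.
[HttpGet]
public async Task<IActionResult> GetWidgetsAsync()
{
    var widgets = await _dbContext.Widgets.AsNoTracking().ToListAsync();
    return Ok(widgets);
}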

I reproduced this issue in my sample application by replacing async calls in the action methods with synchronous alternatives. At first, this only caused a small slowdown (0.32 second average response instead of 0.27 seconds) because the async methods I was calling in the sample were all pretty quick. To simulate longer async tasks, I updated both the async and synchronous versions of my sample to have a Task.Delay(200) in each controller action (which, of course, I used await with when async and .Wait() with when synchronous). In the async case, average response time went from 0.27s to 0.46s, which is more or less what we would expect if each request has an extra pause of 200ms. In the synchronous case, though, the average time went from 0.32 seconds to 1.47 seconds!

The charts below demonstrate where a lot of this slowdown comes from. The green lines in the charts represent requests served per second and the red lines represent user load. In the first chart (which was taken while running the async version of my sample), you can see that as users increase, more requests are being served. In the second chart (corresponding to the Task.Wait() case), on the other hand, there’s a strange pattern of requests per second remaining flat for several minutes after user load increases and only then increasing to keep up. This is because the existing pool of threads serving requests couldn’t keep up with more users (since they were all blocked on Task.Wait() calls) and throughput didn’t improve until more threads were created.

Asynchronous RPS compared to user load

Synchronous RPS compared to user load

Attaching a debugger to both scenarios, I found that 75 managed threads were being used in the async test but 232 were in use in the synchronous test. Even though the synchronous test did eventually add enough threads to handle the incoming requests, calling Task.Result and Task.Wait can cause slowdowns when user load changes. Analyzers (like AsyncFixer) can help to find places where asynchronous alternatives can be used and there are EventSource events that can be used to find blocking calls at runtime, if needed.

Wrap-Up

There were some other perf issues in the application I helped investigate (server GC wasn’t enabled in ASP.NET Core 1.1 templates, for example, something that has been corrected in ASP.NET Core 2.0), but one common theme of the problems we found was around blocking threads unnecessarily. Whether it’s from lock contention or waiting on tasks to finish, it’s important to keep threads unblocked for good performance in ASP.NET Core apps.

If you’d like to dig into your own apps to look for perf trouble areas, check out the Channel9 PerfView tutorials for an overview of how PerfView can help uncover CPU and memory-related perf issues in .NET applications.


Announcing ASP.NET Providers Connected Service Visual Studio Extension

The provider pattern was introduced in ASP.NET 2.0 and gives developers the flexibility to choose where to store the state of ASP.NET features (e.g. Session State, Membership, Output Cache, etc.). In ASP.NET 4.6.2, we added async support for the Session State Provider and Output Cache Provider.  These providers offer much better scalability and enable the web application to adapt to a cloud environment.  Furthermore, we also released SqlSessionStateProviderAsync, CosmosDBSessionStateProviderAsync, RedisSessionStateProvider and SQLAsyncOutputCacheProvider.  Through these providers, web applications can store Session State in Azure resources like SQL Azure, CosmosDB, and Redis Cache, and Output Cache in SQL Azure.  With all these options, it may not be very straightforward to pick one and configure it correctly in your application.  Today we are releasing the ASP.NET Providers Connected Service Visual Studio Extension to help you pick the right provider and configure it properly to work with Azure resources.  This extension will be your one-stop shop where you can install and configure all the ASP.NET providers that are Azure ready.

How to install the extension

The ASP.NET Providers Connected Service Extension can be installed on Visual Studio 2017. You can install it through Extensions and Updates in Visual Studio and type “ASP.NET Providers Connected Service” in the search box. Or you can download the extension from Visual Studio MarketPlace.

How to use the extension

To use the extension, you need to make sure that your web application targets .NET Framework 4.6.2 or higher.  You can open the extension by right-clicking on the project, selecting Add, and clicking on Connected Service. You will see all the Connected Services installed in your Visual Studio that apply to your project.

After clicking on the Microsoft ASP.NET Providers extension, you will see the following wizard window, where you can choose the provider you want to install and configure for your ASP.NET web application. Currently we have two sets of providers: Session State providers and an Output Cache provider.

Select a provider and click on the Next button. You will see a list of providers that apply to your application and connect with Azure resources. Currently we have the SQL SessionState provider, CosmosDB SessionState provider, RedisCache SessionState provider and SQL OutputCache provider.

After the provider is chosen, the wizard window will lead you to select the Azure instance that will be used by the selected provider.  In order to fetch the Azure instances that apply to the selected provider, you will need to sign in with your account in Visual Studio.  Then select an Azure instance and click on the Finish button; the extension will install the relevant NuGet packages and update the web.config file to connect the provider with the selected Azure instance.

Things to be aware of

  1. If the application is already configured with a provider and you want to install the same type of provider, you need to remove the existing provider first. For example, if your application is using the SQL SessionState provider and you want to switch to the CosmosDB SessionState provider, you need to remove the SessionState provider settings from the web.config; then you can use ASP.NET Providers Connected Services to install and configure the CosmosDB SessionState provider.
  2. If you are installing Async SQL SessionState provider or Async SQL OutputCache provider, you need to replace the user name and password in the connection string in web.config added by ASP.NET Providers Connected Services. As you may have multiple accounts in your Azure SQL Database instance.

Summary

ASP.NET Providers Connected Services helps you install and configure ASP.NET providers for your web application to consume Azure services. Our goal with this Visual Studio extension is to make that easier by providing a central place to configure different providers for your ASP.NET web applications and connect them with Azure. Please install the extension from the Visual Studio Marketplace today and let us know your feedback.


A Penny Saved is a Ton of Serverless Compute Earned

Scott Guthrie recently shared one of my favorite anecdotes on his Azure Red Shirt Tour. A Microsoft customer regularly invokes 1 billion (yes, that’s with a “B”) Azure Functions per day. The customer reached out to support after the first month thinking there was a bug in the billing system, only to find out that the $72 was in fact correct. How is that possible? Azure Functions is a serverless compute platform that allows you to focus on code that only executes when triggered by events, and you only pay for CPU time and memory used during execution (versus a traditional web server where you are paying a fee even if your app is idle). This is called micro-billing, and is one key reason serverless computing is so powerful.

Curious about Azure Functions? Follow the link https://aka.ms/go-funcs to get up and running with your first function in minutes.

Scott Guthrie on the Azure Red Shirt Tour

In fact, micro-billing is so important, it’s one of three rules I use to verify if a service is serverless. There is not an official set of rules and there is no standard for serverless. The closest thing to a standard is the whitepaper published by the Cloud Native Computing Foundation titled CNCF WG-Serverless Whitepaper v1.0 (PDF). The paper describes serverless computing as “building and running applications that do not require server management.” The paper continues to state they are “executed, scaled, and billed in response to the exact demand needed at the moment.”

It’s easy to label almost everything serverless, but there is a difference between managed and serverless. A managed service takes care of responsibilities for you, such as standing up a website or hosting a Docker container. Serverless is a managed service but requires a bit more. Here are Jeremy’s Serverless Rules.

  1. The service should be capable of running entirely in the cloud. Running locally is fine and often preferred for developing, testing, and debugging, but ultimately it should end up in the cloud.
  2. You don’t have to configure a virtual machine or cluster. Docker is great, but containers require a Docker host to run. That host typically means setting up a VM and, for resiliency and scale, using an orchestrator like Kubernetes to scale the solution. There are also services like Azure Web Apps that provide a fully managed experience for running web apps and containers, but I don’t consider them serverless because they break the next rule.
  3. You only pay for active invocations and never for idle time. This rule is important, and the essence of micro-billing. Azure Container Instances (ACI) is a great way to run a container, but I pay for it even when it’s not being used. A function, on the other hand, only bills when it’s called.

These rules are why I stopped calling managed databases “serverless.” So, what, then, does qualify as serverless?

The Azure serverless platform includes Azure Functions, Logic Apps, and Event Grid. In this post, we’ll take a closer look at Azure Functions.

Azure Functions

Azure Functions allows you to write code that is executed based on an event, or trigger. Triggers may include an HTTP request, a timer, a message in a queue, or any other number of important events. The code is passed details of the trigger but can also access bindings that make it easier to connect to resources like databases and storage. The serverless Azure Functions model is based on two parameters: invocations and gigabyte seconds.
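As a point of reference, here is a minimal HTTP-triggered function in the v2 C# model (the function name and body are illustrative, not the link-shortener code discussed later):

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class ShortenFunction
{
    // Runs only when an HTTP POST arrives; you are billed only for that execution.
    [FunctionName("Shorten")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("Shorten invoked.");
        return new OkObjectResult("ok");
    }
}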

Invocations are the number of times the function is invoked based on its trigger. Gigabyte seconds are a function of memory usage over time. Imagine a graph that shows time on the x-axis and memory consumption on the y-axis, and plot the memory usage of your function over time. Gigabyte seconds represent the area under that curve.

Let’s assume you have a microservice that is called every minute and takes one second to scan and aggregate data. It uses a steady 128 megabytes of memory during the run. Using the Azure Pricing Calculator, you’ll find that the cost is free. That’s because the first 400,000 Gigabyte seconds and 1 million invocations are free every month. Running every second (there are 2,628,000 seconds in a month) with double memory (256 megabytes), the entire monthly cost is estimated at $4.51.
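As a rough sketch of that arithmetic (C# is used here just as a calculator; the free grant figures are the 400,000 GB-seconds and 1 million invocations mentioned above):

// One 1-second execution per minute at 128 MB (0.125 GB).
const double secondsPerMonth = 2_628_000;            // as quoted in the post
double executions = secondsPerMonth / 60;            // 43,800 executions/month
double gbSeconds  = executions * 1.0 * 0.125;        // 5,475 GB-seconds/month

// Both numbers sit well inside the monthly free grant
// (1,000,000 executions and 400,000 GB-seconds), so the estimated cost is $0.
Console.WriteLine($"{executions:N0} executions, {gbSeconds:N0} GB-s per month");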

Pricing calculator for Azure Functions

Recently I tweeted about my own experience with serverless cost (or lack thereof). I wrote a link-shortening tool. It uses a function to take long URLs and turn them into a shorter code I can easily share. I also have a function that takes the short code and performs the redirect, then stores the data in a queue. Another microservice processes items in the queue and stores metadata that I can analyze for later. I have tens of thousands of invocations per month and my total cost is less than a dollar.

A tweet about cost of running serverless code in Azure

Do I have your attention?

In future posts I will explore the cost model for Logic Apps and Event Grid. In the meantime…

Learn about and get started with your first Azure Function by following this link: https://aka.ms/go-funcs


Exploring Azure App Service – Web Apps and SQL Azure

There is a good chance that your web app uses a database. In my previous post introducing Azure App Service, I showed some of the benefits of hosting apps in Azure App Service, and how easy it is to get a basic site running in a few clicks. In this post I’ll show how to set up a SQL Azure database along with an App Service Web App from Visual Studio, and apply Entity Framework automatically as part of publish.

Let’s get going

To get started, you’ll first need:

  • Visual Studio 2017 with the ASP.NET and web development workload installed (download now)
  • An Azure account
  • Any ASP.NET or ASP.NET Core app that uses a SQL Database. For the purposes of this post, I’ll create a new ASP.NET Core app with Individual Authentication:
    • On the “New ASP.NET Core Web Application” dialog, click the “Change Authentication” button.
  • Then select the “Individual User Accounts” radio button and click “OK”.
  • Click OK.

I can now run my project locally (F5) and create user accounts which will be stored in a SQL Server Express Local DB on my machine.

Publishing to App Service with a Database

Let’s publish our application to Azure. To do this, I’ll right click my project in Solution Explorer and choose “Publish”


This brings up the Visual Studio publish target dialog, which will default to the Azure App Service pane with the “Create new” radio button selected. To continue click “Publish”.

This brings up the “Create App Service” dialog (see the “Key App Service Concepts” section of my previous post for an explanation of the fields). To create a SQL Database for our app to use, click the “Create a SQL Database” link in the top right section of the dialog.


This will bring up the “Configure SQL Database” dialog.

  • Note: If you are using a Visual Studio Enterprise subscription, many regions will not let you create a SQL Azure database, so I recommend choosing “East US” or “West US 2” depending on where you are located (we are adding logic in the Visual Studio 2017 15.8 update to remove those regions if that’s the case, but for now you’ll need to choose an appropriate region). To do this, click the “New…” button next to your “Hosting Plan” dropdown and pick the appropriate region (“East US” or “West US 2”).
  • Since I don’t have an existing SQL Server, the first thing I need to do is create a server to host the database, so I’ll click the “New…” button next to the “SQL Server” dropdown.
  • Choose a location for the database.
  • Provide an administrator user name and password for the server
  • Click “OK”
  • Make sure the connection string name field matches the name of the connection string your application uses to access the database (if using a new project, it is “DefaultConnection”, which will be prepopulated for you); see the snippet after these steps.
  • Click OK
  • Then click the “Create” button on the “Create App Service” dialog
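For reference, a new ASP.NET Core project wires that connection string name up in Startup.ConfigureServices roughly like this (names are from the default template):

// "DefaultConnection" must match the connection string name entered in the
// Configure SQL Database dialog so the published app picks up the Azure database.
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));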

It should take ~2-3 minutes to create all of the resources in Azure, then your application will publish and a browser will open to your home page.

Configuring EF Migrations

At this point there is a database for your app to use in the cloud, but EF migrations have not been applied, so any functionality that relies on the database (e.g. Registering for a user account) will result in an error.

To apply EF migrations to the database:

  • Click the “Configure…” button on the publish summary page
  • Navigate to the “Settings” tab
  • When it finishes discovering data contexts, expand the “Entity Framework Migrations” section, and check the “Apply this migration on publish” for all of the contexts it finds
  • Click “Save”
  • Click Publish again; in the Output window you should see “Generating Entity framework SQL Scripts” and then “Generating Entity framework SQL Scripts completed successfully”

That’s it: your web app and SQL Azure database are both configured and running in the cloud.

Conclusion

Hopefully, this post showed you how easy it is to try App Service and SQL Azure. We believe that for most people, App Service is the easiest place to get started with cloud development, even if you need to move to other services in the future for further capabilities (compare hosting options). As always, let us know if you run into any issues, or have any questions below or via Twitter.