Channel: ASP.NET Blog

Learn how to do Image Recognition with Cognitive Services and ASP.NET


With all the talk about artificial intelligence (AI) and machine learning (ML) doing crazy things, it’s easy to be left wondering, “what are practical ways I can use this today?” It turns out there are some extremely easy ways to try this today.

In this post, I’ll walk through how to detect faces, gender, ages, and hair color in photos, by adding only a few lines of code to an ASP.NET app. Images will be uploaded and shown in an image gallery built with ASP.NET, images will be hosted in Azure Storage, and Azure Cognitive Services will be used to analyze the images. The full application is available on GitHub. To begin, clone the repository on your machine.

What we’ll build

Here’s what the recognized photos can look like when displayed in a web browser. Note how the metadata generated by Azure Cognitive Services is displayed alongside each image.

A sample image of the application running, showing a woman whose age and gender have been estimated by Cognitive Services.

Set up prerequisites with Visual Studio and Azure

To begin, make sure you’ve installed Visual Studio 2017 with the ASP.NET and web development workload. This will provide everything you need to build and run the app yourself.

Next, set up the Azure prerequisites.

First, ensure you have an Azure account. If not, you can sign up for an Azure free account, which will give you a $200 credit towards anything.

Next, create a Storage account through the Azure Portal. Start by creating the Storage resource:

An image of the Azure Portal, showing how to create a Storage resource.

After creating the resource, create the storage account itself with the default settings:

An image of the Create Storage Account page in the Azure Portal with default settings selected.

Finally, create a Cognitive Services resource through the Azure portal:

An image showing how to create a Cognitive Services resource.

Once you’ve set that up, you’re ready to start hacking away at the sample app!

Explore the codebase

Open the project in Visual Studio 2017 if you haven’t already. The application is an ASP.NET MVC app. It does three major things:

The first major operation is uploading an image to Azure Blob storage, analyzing the image using Azure Cognitive Services, and uploading image metadata generated from Cognitive Services back to Blob Storage.

The second major operation is to snag images and their associated metadata from Blob Storage.

The UI simply wires up these images to a page with an upload button.
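To make the first operation concrete, here is a hedged sketch of the analysis call (the method and class names are illustrative, and the sample’s actual code may differ). It posts the image bytes to the Face API detect endpoint and returns the JSON describing each detected face:

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class FaceAnalyzer
{
    // Sends image bytes to the Cognitive Services Face API and returns
    // the raw JSON describing each detected face and its attributes.
    public static async Task<string> AnalyzeImageAsync(
        byte[] imageBytes, string endpoint, string apiKey)
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", apiKey);

            var uri = endpoint +
                "/face/v1.0/detect?returnFaceAttributes=age,gender,hair";

            using (var content = new ByteArrayContent(imageBytes))
            {
                content.Headers.ContentType =
                    new MediaTypeHeaderValue("application/octet-stream");

                var response = await client.PostAsync(uri, content);
                response.EnsureSuccessStatusCode();

                // A JSON array of detected faces, ready to be saved
                // alongside the image in Blob Storage.
                return await response.Content.ReadAsStringAsync();
            }
        }
    }
}
```

The returned JSON is what the app uploads to Blob Storage as image metadata and later renders next to each photo.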

Add your API keys

Modify the Web.config file to include your Cognitive Services URL and API key.

Your Cognitive Services URL and API keys can be found in the dashboard for your Cognitive Services resource in the Azure Portal here:

The connection strings for your Azure Storage resource can be found in the Azure Portal under Access Keys:

An image showing how to access the Azure Storage Access Keys in the Azure Portal.

Once you have entered your information in your Web.config file, you’ll be good to go!
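As a sketch, the relevant appSettings entries in Web.config might look like the following (the key names here are placeholders; use whatever names the sample’s code actually reads):

```xml
<appSettings>
  <!-- Placeholder key names; match these to the names the sample reads. -->
  <add key="CognitiveServicesApiUrl" value="https://westus.api.cognitive.microsoft.com" />
  <add key="CognitiveServicesApiKey" value="YOUR-COGNITIVE-SERVICES-KEY" />
  <add key="StorageConnectionString" value="DefaultEndpointsProtocol=https;AccountName=...;AccountKey=..." />
</appSettings>
```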

To learn more about how to best work with keys and other sensitive information during development, see Best Practices for Deploying Passwords and other Sensitive Data to ASP.NET and Azure.

Run the application and add some images

Now that everything is set up and configured locally, you can run the application on your machine!

Press F5 to debug and see how everything works. I recommend setting a breakpoint in the Upload controller action (HomeController.cs, line 32), so that you can step through each operation as you upload a new image. In the browser that opens, upload an image to see what happens!

If you want to see images show up in Azure blobs when running the app, you can do so with Cloud Explorer (View -> Cloud Explorer). You may need to log in first, but after that, you can navigate to your created Storage Account and see all of your Blobs under Blob Containers:

An image showing Visual Studio Cloud Explorer and browsing live Blobs in Azure.

In this example, I’ve uploaded three images to my container called “images”. The web app also uploaded a JSON file with image metadata for each image.

Publish to Azure and impress your friends with your use of AI

You can publish the entire application to Azure App Service. Right-click on your project and select “Publish”. Next, select App Service and continue. You can create one right in the Visual Studio UI:

An image showing how you can publish to Azure from Visual Studio 2017.

Finally, click “Create” and it will create all the Azure resources you need and publish your app! After that process completes (it should take a minute or two), your browser will open with your application running entirely in Azure.

Next steps

And that’s it! Try exploring other interesting things you can do with Cognitive Services. Some fun things to try, without needing to add support for any services or read other tutorials:

  • Modify the web app to replace someone’s face with an emoji that matches their measured emotion (try the System.Drawing API!)
  • Group faces by similarity, age, or if they have makeup on
  • Try it out on pictures of animals instead of humans

Additionally, check out these tutorials to learn more about what you can do with .NET and Cognitive Services:

Cheers, and happy coding!


A new experiment: Browser-based web apps with .NET and Blazor


Today I’m excited to announce a new experimental project from the ASP.NET team called Blazor. Blazor is an experimental web UI framework based on C#, Razor, and HTML that runs in the browser via WebAssembly. Blazor promises to greatly simplify the task of building fast and beautiful single-page applications that run in any browser. It does this by enabling developers to write .NET-based web apps that run client-side in web browsers using open web standards.

If you already use .NET, this completes the picture: you’ll be able to use your skills for browser-based development in addition to existing scenarios for server and cloud-based services, native mobile/desktop apps, and games. If you don’t yet use .NET, our hope is that the productivity and simplicity benefits of Blazor will be compelling enough that you will try it.

Why use .NET for browser apps?

Web development has improved in many ways over the years but building modern web applications still poses challenges. Using .NET in the browser offers many advantages that can help make web development easier and more productive:

  • Stable and consistent: .NET offers standard APIs, tools, and build infrastructure across all .NET platforms that are stable, feature rich, and easy to use.
  • Modern innovative languages: .NET languages like C# and F# make programming a joy and keep getting better with innovative new language features.
  • Industry leading tools: The Visual Studio product family provides a great .NET development experience on Windows, Linux, and macOS.
  • Fast and scalable: .NET has a long history of performance, reliability, and security for web development on the server. Using .NET as a full-stack solution makes it easier to build fast, reliable and secure applications.

Browser + Razor = Blazor!

Blazor is based on existing web technologies like HTML and CSS, but you use C# and Razor syntax instead of JavaScript to build composable web UI. Note that it is not a way of deploying existing UWP or Xamarin mobile apps in the browser. To see what this looks like in action, check out Steve Sanderson’s prototype demo at NDC Oslo last year or his prototype demo for the ASP.NET Community Standup. You can also try out a simple live Blazor app running as a static site.
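To give a flavor of the component model, here is a minimal counter component, written with the syntax from the early prototypes (which may well change as the experiment evolves):

```razor
@* Counter.cshtml - a single-file Blazor component: HTML markup plus C# logic. *@
<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button onclick="@IncrementCount">Click me</button>

@functions {
    int currentCount = 0;

    // Called when the button is clicked; the UI re-renders automatically.
    void IncrementCount()
    {
        currentCount++;
    }
}
```

The markup and the C# handler live in one file, and the component re-renders in the browser when its state changes, with no JavaScript written by hand.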

Blazor will have all the features of a modern web framework including:

  • A component model for building composable UI
  • Routing
  • Layouts
  • Forms and validation
  • Dependency injection
  • JavaScript interop
  • Live reloading in the browser during development
  • Server-side rendering
  • Full .NET debugging both in browsers and in the IDE
  • Rich IntelliSense and tooling
  • Ability to run on older (non-WebAssembly) browsers via asm.js
  • Publishing and app size trimming

WebAssembly changes the Web

Running .NET in the browser is made possible by WebAssembly, a new web standard for a “portable, size- and load-time-efficient format suitable for compilation to the web.” WebAssembly enables fundamentally new ways to write web apps. Code compiled to WebAssembly can run in any browser at native speeds. This is the foundational piece needed to build a .NET runtime that can run in the browser. No plugins or transpilation needed. You run normal .NET assemblies in the browser using a WebAssembly based .NET runtime.

Last August, our friends on Microsoft’s Xamarin team announced their plans to bring a .NET runtime (Mono) to the web using WebAssembly and have been making steady progress. The Blazor project builds on their work to create a rich client-side single page application framework written in .NET.

A new experiment

While we are excited about the promise Blazor holds, it’s an experimental project, not a committed product. During this experimental phase, we expect to engage deeply with early Blazor adopters to hear your feedback and suggestions. This time allows us to resolve technical issues associated with running .NET in the browser and to ensure we can build something that developers love and can be productive with.

Where it’s happening

The Blazor repo is now public and is where you can find all the action. It’s a fully open source project: you can see all the development work and issue tracking in the public repo.

Please note that we are very early in this project. There aren’t any installers or project templates yet and many planned features aren’t yet implemented. Even the parts that are already implemented aren’t yet optimized for minimal payload size. If you’re keen, you can clone the repo, build it, and run the tests, but only the most intrepid pioneers would attempt to write app code with it today. If you are that intrepid pioneer, please do dig into the sources. Feedback and suggestions can be provided through the Blazor repo issue tracker. In the months ahead, we hope to publish pre-alpha project templates and tooling that will let a wider audience try it out.

Please also check out the Blazor FAQ to learn more about the project.

Thanks!

Diagnosing Errors on your Cloud Apps


One of the most frustrating experiences is when you have your app working on your local machine, but when you publish it it’s inexplicably failing. Fortunately, Visual Studio provides handy features for working with apps running in Azure. In this blog I’ll show you how to leverage the capabilities of Cloud Explorer to diagnose issues in Azure.

If you’re interested in developing apps in the cloud, we’d love to hear from you. Please take a minute to complete our one question survey.

Prerequisites

– If you want to follow along, you’ll need Visual Studio 2017 with Azure development workload installed.
– This blog assumes you have an Azure subscription and have an App running in Azure App Services. If you don’t have an Azure subscription, click here to sign up for free credits.
– For the purposes of this blog, we’ve developed a simple one-page web app. The source is available here.

Open the solution

If you have your app running on Azure, open the solution in Visual Studio.
Alternatively, clone the source for the sample app and open it in Visual Studio.
Publish the app to Microsoft Azure App Services.

Connect to your Azure subscription with Cloud Explorer

Cloud Explorer is a powerful tool that ships with the Azure development workload in Visual Studio 2017. We can use Cloud Explorer to view and interact with the resources in our Azure subscription.

To view your Azure resources in Cloud Explorer, enable the subscription in the Account Manager tab.
– Open Cloud Explorer (View -> Cloud Explorer)
– Press the Account Management button in the Cloud Explorer toolbar.
– Choose the Azure subscription that you are working with, then press Apply.

Cloud Explorer - Account Manager

Your Azure subscription now appears in the Cloud Explorer. You can toggle the grouping of elements by Resource Groups or Resource Types using the drop-down selector at the top of the window.

View Streaming Logs

When I ran my app after publishing, there was an error. The error message shown on the web page was not very descriptive. So what can I do? How can I get more information about what’s going wrong?

One easy way to diagnose issues on the server is to inspect the application logs. Using Cloud Explorer, you can access the streaming logs of any App Service in your subscription. The streaming logs output a concatenation of all the application logs saved on the App Service. The default log level is “Error”.

To view streaming logs for your application running on Azure App Services:
– Expand the subscription node and select your App Service.
– Click View Streaming Logs in the Actions panel.

Cloud Explorer - View Streaming Logs

The Output window opens with a new log stream from the App Service running on the cloud.

– If you’re using the sample app, refresh the page in the web browser and wait for the page to complete rendering.
This might take ten seconds or more, as the server waits for the fetch operation to time out before returning the result.

You can read the log messages to see what’s happening on the server.

Streaming Logs - Showing Errors

If you switch to Verbose output logging, you see a lot more.

Streaming Logs - Verbose view

Notice the [Error] that appears in the streaming logs: “Exception occurred while attempting to list files on server.”
It doesn’t tell us much, but at least now we can start looking in the ListBlobFiles.StorageHelper for clues.

We know it works locally, so we’ll need to debug the version running on the cloud to see why it’s failing.
For that, we need remote debugging. Once again, Cloud Explorer to the rescue!

Remote Debugging App Service running on Azure

Using Cloud Explorer, you can attach a remote debugger to applications running on Azure. This lets you control the flow of execution by breaking and stepping through the code. It also provides an opportunity to view the value of variables and method returns by utilizing Visual Studio’s debugger tooltips, autos, watches, call stack and other diagnostic tools.

Publish a Debug version of the Web App

Before you can attach a debugger to an application on Azure, there must be a debug version of the code running on the App Service. So, we’ll re-publish the app with the Debug build configuration. Then we’ll attach a remote debugger to the app running in the cloud, set breakpoints, and step through the code.

• Open the publish summary page (Right-click project, choose “Publish…”)
• Select (or create) the publish profile for your web app
• Click Settings
• When the settings dialog opens, go to the Settings tab.
• In the Configurations drop-down, select “Debug”.
• Save the publish profile settings.
• Press Publish to republish the web app

Publish Debug Configuration

Attach Remote Debugger

You can attach a remote debugger to allow you to step through the code that’s running on your Azure App Service. This lets you see the values of variables and watch the flow of control in your app.

To attach a remote debugger:
• In the Cloud Explorer, select the web app.
• Click Attach Debugger in the Actions panel.

Visual Studio will switch over to Debug mode. Now you can set breakpoints in the code and watch the execution as the program runs.

Set breakpoint and execute the code

If you’re following along from the sample, try this:

• Set breakpoints in the GetBlobFileListAsync() method of the StorageHelper.cs
• Refresh the page in the web browser
• Execution will stop at your first breakpoint.
• Hover your mouse cursor over the _storageConnectionString variable and inspect its value.
• Notice that the connection string is “UseDevelopmentStorage=true”.

Remote Debugging in Visual Studio

Problem found! We’re referencing our local Storage (“UseDevelopmentStorage=true”), which won’t work in the cloud.
To fix it, we’ll need to provide a connection string to the app running in the cloud that points to our Blob storage container.
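For example, the deployed app’s configuration could carry a real connection string instead. This is a hedged sketch; the actual setting name is whatever the sample’s configuration and StorageHelper use, and the account name and key placeholders must be filled in from your Storage account’s Access Keys:

```json
{
  "StorageConnectionString": "DefaultEndpointsProtocol=https;AccountName=<your-account>;AccountKey=<your-key>;EndpointSuffix=core.windows.net"
}
```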

Complete the debugging session.
– Press F5 to allow the request to complete.
– Then press Shift+F5 to stop the remote debugging session.

Next steps

Re-publish with Release configuration
Once you’ve finished debugging and your app is working as expected, you can republish a Release version of the app for better performance.
Go to the Publish page, find the Publish Profile, select “Settings…” and change the configuration to “Release”.

Related Links

Get started with Azure Blob storage using .NET
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-dotnet-how-to-use-blobs

Use the Azure Storage Emulator for development and testing
https://docs.microsoft.com/en-us/azure/storage/common/storage-use-emulator

Introduction to Razor Pages in ASP.NET Core
https://docs.microsoft.com/en-us/aspnet/core/mvc/razor-pages/?tabs=visual-studio

ASP.NET Core – Simpler ASP.NET MVC Apps with Razor Pages
MSDN Magazine article by Steve Smith
https://msdn.microsoft.com/en-us/magazine/mt842512.aspx

Upload image data in the cloud with Azure Storage
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-upload-process-images

Azure Blob Storage Samples for .NET
https://github.com/Azure-Samples/storage-blob-dotnet-getting-started

File nesting in Solution Explorer


We are excited to share with you a new capability in Visual Studio that was a clear ask from you, the community. Visual Studio has been nesting related files in Solution Explorer for a long time now, but not everybody agrees with the rules it uses. That’s not a problem any more, because Visual Studio now gives you complete control over file nesting in Solution Explorer! We hope your continued feedback helps us evolve this capability into a fan favorite!

Out of the box you get to pick between the presets Off, Default and Web, but you can always customize it exactly to your liking. You can even create solution-specific and project-specific settings, but more on all of that later. First let’s go over what you get out of the box.

What you get out of the box

Off: This option gives you a flat list of files without any file nesting whatsoever.

Default: This option gives you the default file nesting behavior in Solution Explorer that Visual Studio has had since before you were able to control it.

Web: This option applies the “Web” file nesting behavior to all the projects in the current solution. It has a lot of rules and we encourage you to check it out and tell us what you think. The very first picture in this post is highlighting just a few good examples of the file nesting that you get with this option.

Customizing file nesting to your exact liking

If you don’t like what you get out of the box, you can always create your own, custom file nesting settings that make Solution Explorer nest files to your exact liking. You can add as many custom file nesting settings as you like and you can switch between them as you see fit. Every time you want to create a new one you start by choosing to either start with an empty file or to use the Web settings as your starting point:

We recommend you use Web settings as your starting point because it’s easier to tweak something that already works. If you do that you’ll be starting off with something that looks like the following (instead of being empty):

Let’s focus on the dependentFileProviders node and more specifically the children being added to it. Each child node is a type of rule that Visual Studio can use to nest files. For example, “having the same filename, but a different extension” is one such type of rule. Let’s go over each type of rule available to you:

  • extensionToExtension: Use this type of rule to make file.js nest under file.ts
  • fileSuffixToExtension: Use this type of rule to make file-vsdoc.js nest under file.js
  • addedExtension: Use this type of rule to make file.html.css nest under file.html
  • pathSegment: Use this type of rule to make jquery.min.js nest under jquery.js
  • allExtensions: Use this type of rule to make file.* nest under file.js
  • fileToFile: Use this type of rule to make bower.json nest under .bowerrc

Ordering is very important in every part of your custom settings file. You can change the order in which rules are executed by moving them up or down inside the dependentFileProviders node. For example, if you have one rule that makes file.js the parent of file.ts and another rule that makes file.coffee the parent of file.ts, the order in which they appear in the file decides what happens when all three files are present at the same time: file.js, file.ts, and file.coffee. Since file.ts can only have one parent, whichever rule executes first wins.
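As a sketch of what a custom settings file can look like (the rule entries here are illustrative; yours will differ), a file that nests generated .js files under their .ts sources might be:

```json
{
  "help": "https://go.microsoft.com/fwlink/?linkid=866610",
  "root": true,
  "dependentFileProviders": {
    "add": {
      "extensionToExtension": {
        "add": {
          ".js": [ ".ts" ],
          ".map": [ ".js" ]
        }
      }
    }
  }
}
```

Because rules run in order, placing one rule above another in this file is how you decide which parent wins when multiple rules could apply.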

You can manage all settings, including your own custom settings through the same button in Solution Explorer:

 

Creating solution-specific and project-specific settings

You can create solution-specific and project-specific settings through the context menu of each solution and project:

 

Solution-specific and project-specific settings will be combined with whatever Visual Studio settings are already active. Don’t be surprised for example if you have a blank project-specific settings file, yet Solution Explorer is still nesting files. The nesting is either coming from the solution-specific settings or the Visual Studio settings. The process of merging file nesting settings goes: Project > Solution > Visual Studio.

You can tell Visual Studio to ignore solution-specific and project-specific settings, even if the files exist on disk, by enabling the option Ignore solution and project settings under Tools | Options | ASP.NET Core | File Nesting.

You can do the opposite and tell Visual Studio to only use the solution-specific or the project-specific settings. Remember that “root” node we saw earlier in our custom settings? If not, go back and take a look at the picture. If you set that node to true it tells Visual Studio to stop merging files at that level and not combine it with files higher up the hierarchy.

The great thing about solution-specific and project-specific settings is that they can be checked into source control and the entire team that works on the repo can share them.

Next steps

Download Visual Studio 2017 15.6 Preview 4 and try file nesting in Solution Explorer. The feature is currently only supported by ASP.NET Core projects, but tell us that you want it for other projects as well and we will try to make it happen.

Please ask us questions and give us your feedback any way you find most convenient. You can leave a comment on this blog post, you can submit your suggestions on UserVoice, or you can drop us an email at Anton.Piskunov<at>microsoft.com (Principal Engineer) and Angelos.Petropoulos<at>microsoft.com (Product Manager).

Two Lesser Known Tools for Local Azure Development


If you’re developing applications that target Azure services (e.g. Web Apps, Functions, Storage), you’ll want to know about two powerful tools that come with Visual Studio 2017 and the Azure development workload:

  • Cloud Explorer is a tool window inside Visual Studio that lets you browse your Azure resources and perform specific tasks – like stop and start app service, view streaming logs, create storage items.
  • Storage Emulator is an application separate from Visual Studio that provides a local simulation of the Azure storage services. It’s really handy for testing Functions that trigger from queues, blobs, or tables.

In this blog I’ll show you how you can develop Azure applications entirely locally – including the ability to interact with Azure storage – without ever needing an Azure subscription.

Prerequisites

Note: You will NOT need an Azure subscription to follow this blog. In fact, that’s the whole point of this blog. 😉

Cloud Explorer

The Cloud Explorer is your window into Azure from within Visual Studio. You can browse the common resources in your Azure subscriptions in one convenient tool window. Each of the various Azure services has different properties and actions.

Cloud Explorer - Expanded

In the picture above, you can see it has listed a variety of resources from my Azure subscription including my App Services, SQL Databases and Virtual Machines, as well as my App Service Plans, Storage Accounts and other network infrastructure assets. I have published the sample app to an App Service called ListBlobFilesSample. You can see it listed under the App Services node.

Each resource has a collection of properties and actions. You can trigger actions by right-clicking on the item of interest. For instance, I can View Streaming Logs to see a running output of my application in the cloud, or I can Attach Debugger to step through the code to diagnose errors. (Note: For more information about diagnosing errors, see Diagnosing Errors on your Cloud Apps.)

In this blog, we’ll be using Cloud Explorer to interact with our Storage Accounts – specifically, with the local (Development) storage account using the Microsoft Azure Storage Emulator.

Sample Code

For this post, we’ll be working with a sample Web App with a single Razor Page file that displays a list of items in a Blob storage container (i.e. list of files in a folder in a storage account).

Clone the source from here and open the ListBlobFiles solution in Visual Studio.
The web app consists of:
      – a single Razor Page file (Index.cshtml),
      – its code behind file (Index.cshtml.cs),
      – a utility class for reading items from storage (StorageHelper.cs),
      – the application’s settings file (appsettings.json),
      – standard web app startup files (Program.cs and Startup.cs)

Here’s a snippet of the most interesting part – the helper class that returns a list of files stored in a blob storage container.
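In outline, the helper parses the connection string, gets a reference to the container, and pages through its blobs. The following is a hedged sketch using the WindowsAzure.Storage SDK; the sample’s actual code may differ in names and details:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public class StorageHelper
{
    private readonly string _storageConnectionString;

    public StorageHelper(string storageConnectionString)
    {
        _storageConnectionString = storageConnectionString;
    }

    // Returns the URI of every blob in the given container.
    public async Task<List<string>> GetBlobFileListAsync(string containerName)
    {
        var account = CloudStorageAccount.Parse(_storageConnectionString);
        var client = account.CreateCloudBlobClient();
        var container = client.GetContainerReference(containerName);

        var urls = new List<string>();
        BlobContinuationToken token = null;

        // Blob listings are returned in segments; loop until the
        // continuation token comes back null.
        do
        {
            var segment = await container.ListBlobsSegmentedAsync(token);
            token = segment.ContinuationToken;
            foreach (var item in segment.Results)
            {
                urls.Add(item.Uri.ToString());
            }
        } while (token != null);

        return urls;
    }
}
```

The Razor Page’s code-behind calls this helper and hands the resulting list of URLs to the page for rendering.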

Using Storage Completely Offline with Storage Emulator

Using the Storage Emulator, you can develop, run, debug, and test applications that use Azure Storage locally, without an Azure subscription. Better still, the Storage Emulator is part of the Azure development workload in Visual Studio, so there is no extra installation required.

Start the Storage Emulator

  • Press the Windows key and type “Storage Emulator”, then select Microsoft Azure Storage Emulator.
  • When the Storage Emulator is running, an icon will appear in the Windows system tray.

    Storage Emulator icon in task bar

Launch the web app from Visual Studio

  • Press Ctrl+F5 to build and run the web app locally.
  • A web browser will launch and open the Index page of the app.
    The page renders and shows there are no files in the Blob container.
    Web page displays errors

Let’s add some files to a local storage container and see if they show up when we refresh the page.

Create local Blob Storage (using Storage Emulator and Cloud Explorer)

  • Open Cloud Explorer
  • Expand to Blob Containers under (Local)->Storage Accounts->(Development)
  • Click Create Blob Container in the Actions panel
  • Cloud Explorer - Create Blob Container

  • Enter a name for the local blob storage container (e.g., “myfiles”). Note: the name must contain only lowercase letters, numbers, and hyphens.

Add files to your Blob container

  • Right-click the new container (myfiles) and select Open.
  • In the toolbar, click the Upload button.
  • Browse for a file, then press OK.
  • Do this repeatedly to add several files to your blob container (storage folder).

Cloud Explorer - Add files to blob container

You’ll see the files appear in the container window, along with the URL for each item.
The Microsoft Azure Activity Log window shows the status of the uploads.

Files appear in container view

Return to the web browser that is running our local web app and refresh the page.
Notice that the page now outputs the URLs of all the files in the container.

Web page renders correctly, showing files in blob container

Success! You’re now doing local development of an app that uses Azure storage – without needing any resources on Azure.

Next Steps

Try it on the cloud! When you’re ready, publish your app to Azure App Services and configure it to run with Azure Storage on the cloud.

You can continue to use Cloud Explorer within Visual Studio to interact with your storage account on Azure in just the same way you did with local development.

Related Links

Get started with Azure Blob storage using .NET
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-dotnet-how-to-use-blobs

Use the Azure Storage Emulator for development and testing
https://docs.microsoft.com/en-us/azure/storage/common/storage-use-emulator

Introduction to Razor Pages in ASP.NET Core
https://docs.microsoft.com/en-us/aspnet/core/mvc/razor-pages/?tabs=visual-studio

ASP.NET Core – Simpler ASP.NET MVC Apps with Razor Pages
https://msdn.microsoft.com/en-us/magazine/mt842512.aspx

Azure Article: Azure Blob Storage Photo Gallery Web Application
https://azure.microsoft.com/en-us/resources/samples/storage-blobs-dotnet-webapp/
Related sample on GitHub: Image Resizer Web App
https://github.com/Azure-Samples/storage-blob-upload-from-webapp

Upload image data in the cloud with Azure Storage
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-upload-process-images

Azure Blob Storage Samples for .NET
https://github.com/Azure-Samples/storage-blob-dotnet-getting-started

Announcing ASP.NET MVC 5.2.4, Web API 5.2.4, and Web Pages 3.2.4


Today we released stable packages for ASP.NET MVC 5.2.4, Web API 5.2.4, and Web Pages 3.2.4 on NuGet. This release contains some minor bug fixes and a couple of new features specifically targeted at enabling .NET Standard support for the ASP.NET Web API Client. You can read about the .NET Standard support for the ASP.NET Web API Client in the earlier preview announcement.

For the full list of features and bug fixes for this release please see the release notes.

To update an existing project to use this release you can run the following commands from the NuGet Package Manager Console for each of the packages you wish to update:

Install-Package Microsoft.AspNet.Mvc -Version 5.2.4
Install-Package Microsoft.AspNet.WebApi -Version 5.2.4
Install-Package Microsoft.AspNet.WebPages -Version 3.2.4

If you have any questions or feedback on this release please let us know on GitHub.

Thanks!

ASP.NET Core 2.1.0-preview1 now available


Today we’re very happy to announce that the first preview of the next minor release of ASP.NET Core and .NET Core is now available for you to try out. We’ve been working hard on this release over the past months, along with many folks from the community, and it’s now ready for a wider audience to try it out and provide the feedback that will continue to shape the release.

You can read about .NET Core 2.1.0-preview1 over on their blog.

You can also read about Entity Framework Core 2.1.0-preview1 on their blog.

How do I get it?

You can download the new .NET Core SDK for 2.1.0-preview1 (which includes ASP.NET Core 2.1.0-preview1) from https://www.microsoft.com/net/download/dotnet-core/sdk-2.1.300-preview1

Visual Studio 2017 version requirements

Customers using Visual Studio 2017 should also install (in addition to the SDK above) and use the Preview channel of Visual Studio (15.6 Preview 6 at the time of writing) when working with .NET Core and ASP.NET Core 2.1 projects. .NET Core 2.1 projects require Visual Studio 2017 15.6 or greater.

Impact to machines

Please note that given this is a preview release there are likely to be known issues and as-yet-to-be-discovered bugs. While .NET Core SDK and runtime installs are side-by-side on your machine, your default SDK will become the latest version, which in this case will be the preview. If you run into issues working on existing projects using earlier versions of .NET Core after installing the preview SDK, you can force specific projects to use an earlier installed version of the SDK using a global.json file as documented here. Please log an issue if you run into such cases as SDK releases are intended to be backwards compatible.
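For example, a global.json at the root of a solution pins the projects under it to an earlier SDK (the version number shown is illustrative; use one that is installed on your machine):

```json
{
  "sdk": {
    "version": "2.0.3"
  }
}
```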

Already published applications running on earlier versions of .NET Core and ASP.NET Core shouldn’t be impacted by installing the preview. That said, we don’t recommend installing previews on machines running critical workloads.

New features

You can see a summary of the new features in 2.1 in the roadmap post we published previously.

Furthermore, we’re publishing a series of posts here that go over the new feature areas in detail. We’ll update this post with links to these posts as they go live over the coming days:

Announcements and release notes

You can see all the announcements published pertaining to this release at https://github.com/aspnet/Announcements/issues?q=is%3Aopen+is%3Aissue+milestone%3A2.1.0

Release notes will be available shortly at https://github.com/aspnet/Home/releases/tag/2.1.0-preview1

Giving feedback

The main purpose of providing previews like this is to solicit feedback from customers such that we can refine and improve the changes in time for the final release. We intend to release a second preview within the next couple of months, followed by a single RC release (with “go-live” license and support) before the final RTW release.

Please provide feedback by logging issues in the appropriate repository at https://github.com/aspnet or https://github.com/dotnet. The posts on specific topics above will provide direct links to the most appropriate place to log issues for the features detailed.

Migrating an ASP.NET Core 2.0.x project to 2.1.0-preview1

Follow these steps to migrate an existing ASP.NET Core 2.0.x project to 2.1.0-preview1:

  1. Open the project’s CSPROJ file and change the value of the <TargetFramework> element to netcoreapp2.1
    • Projects targeting .NET Framework rather than .NET Core, e.g. net471, don’t need to do this
  2. In the same file, update the versions of the various <PackageReference> elements for any Microsoft.AspNetCore, Microsoft.Extensions, and Microsoft.EntityFrameworkCore packages to 2.1.0-preview1-final
  3. In the same file, update the versions of the various <DotNetCliToolReference> elements for any Microsoft.VisualStudio, and Microsoft.EntityFrameworkCore packages to 2.1.0-preview1-final
  4. In the same file, remove the <DotNetCliToolReference> elements for any Microsoft.AspNetCore packages. These have been replaced by global tools.
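Taken together, steps 1 to 4 leave the project file looking roughly like this (a sketch only; your project will reference its own set of packages):

```xml
<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.All" Version="2.1.0-preview1-final" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="2.1.0-preview1-final" />
  </ItemGroup>

</Project>
```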

That should be enough to get the project building and running against 2.1.0-preview1. The following steps will change your project to use the new code-based idioms recommended in 2.1:

  1. Open the Program.cs file
  2. Rename the BuildWebHost method to CreateWebHostBuilder, change its return type to IWebHostBuilder, and remove the call to .Build() in its body
  3. Update the call in Main to call the renamed CreateWebHostBuilder method like so: CreateWebHostBuilder(args).Build().Run();
  4. Open the Startup.cs file
  5. In the ConfigureServices method, change the call to add MVC services to set the compatibility version to 2.1 like so: services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
  6. In the Configure method, add a call to add the HSTS middleware after the exception handler middleware: app.UseHsts();
  7. Staying in the Configure method, add a call to add the HTTPS redirection middleware before the static files middleware: app.UseHttpsRedirection();
  8. Open the project property pages (right-click the project in Visual Studio Solution Explorer and select “Properties”)
  9. Open the “Debug” tab and in the IIS Express profile, check the “Enable SSL” checkbox and save the changes
  10. Open the Properties/launchSettings.json file
  11. In the "iisSettings"/"iisExpress" section, note the new property added to define HTTPS port for IIS Express to use, e.g. "sslPort": 44374
  12. In the "profiles/IIS Express/environmentVariables" section, add a new property to flow the configured HTTPS port through to the application like so: "ASPNETCORE_HTTPS_PORT": "44374"
    • This configuration value will be read by the HTTPS redirect middleware you added above to ensure non-HTTPS requests are redirected to the correct port. Make sure it matches the value configured for IIS Express.
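As a sketch, the updated Program.cs from steps 2 and 3 looks like this (the class and namespace names will match your own project):

```csharp
public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    // Renamed from BuildWebHost; note it now returns IWebHostBuilder
    // and no longer calls .Build() itself.
    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>();
}
```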

Note that some projects might require more steps depending on the options selected when the project was created, or packages added since. You might like to try creating a new project targeting 2.1.0-preview1 (in Visual Studio or using dotnet new at the command line) with the same options to see what else has changed.

ASP.NET Core 2.1.0-preview1: Improvements for using HTTPS


Securing web apps with HTTPS is more important than ever before. Browser enforcement of HTTPS is becoming increasingly strict. Sites that don’t use HTTPS are increasingly labeled as insecure. Browsers are also starting to enforce that new and existing web features must only be used from a secure context (Chromium, Mozilla). New privacy requirements like the General Data Protection Regulation (GDPR) require the use of HTTPS to protect user data. Using HTTPS during development also helps prevent HTTPS-related issues before deployment, like insecure links.

ASP.NET Core 2.1 makes it easy to both develop your app with HTTPS enabled and to configure HTTPS once your app is deployed. The ASP.NET Core 2.1 project templates have been updated to enable HTTPS by default. To enable HTTPS in production simply configure the correct server certificate. ASP.NET Core 2.1 also adds support for HTTP Strict Transport Security (HSTS) to enforce HTTPS usage in production and adds improved support for redirecting HTTP traffic to HTTPS endpoints.

HTTPS in development

To get started with ASP.NET Core 2.1.0-preview1 and HTTPS install the .NET Core SDK for 2.1.0-preview1. The SDK will create an HTTPS development certificate for you as part of the first-run experience. For example, when you run dotnet new razor for the first time you should see the following console output:

ASP.NET Core
------------
Successfully installed the ASP.NET Core HTTPS Development Certificate.
To trust the certificate (Windows and macOS only) first install the dev-certs tool by running 'dotnet install tool dotnet-dev-certs -g --version 2.1.0-preview1-final' and then run 'dotnet-dev-certs https --trust'.
For more information on configuring HTTPS see https://go.microsoft.com/fwlink/?linkid=848054.

The ASP.NET Core HTTPS Development Certificate has now been installed into the local user certificate store, but it still needs to be trusted. To trust the certificate you need to perform a one-time step to install and run the new dotnet dev-certs tool as instructed:

C:\WebApplication1>dotnet install tool dotnet-dev-certs -g --version 2.1.0-preview1-final

The installation succeeded. If there are no further instructions, you can type the following command in shell directly to invoke: dotnet-dev-certs

C:\WebApplication1>dotnet dev-certs https --trust
Trusting the HTTPS development certificate was requested. A confirmation prompt will be displayed if the certificate was not previously trusted. Click yes on the prompt to trust the certificate.
A valid HTTPS certificate is already present.

To run the dev-certs tool, both dotnet-dev-certs and dotnet dev-certs (without the extra hyphen) will work. Note: if you get an error that the tool was not found, you may need to open a new command prompt if the current command prompt was open when the SDK was installed.

Trust certificate dialog

Click Yes to trust the certificate.

On macOS the certificate will get added to your keychain as a trusted certificate.

On Linux there isn’t a standard way across distros to trust the certificate, so you’ll need to follow the distro-specific guidance for trusting the development certificate.

Run the app by running dotnet run. The ASP.NET Core 2.1 runtime will detect that the development certificate is installed and use the certificate to listen on both http://localhost:5000 and https://localhost:5001:

C:\WebApplication1>dotnet run
Using launch settings from C:\WebApplication1\Properties\launchSettings.json...
Hosting environment: Development
Content root path: C:\WebApplication1
Now listening on: https://localhost:5001
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.

Close any open browsers and then in a new browser window browse to https://localhost:5001 to access the app via HTTPS.

Razor Pages with HTTPS

If you didn’t trust the ASP.NET Core development certificate then the browser will display a security warning:

Untrusted certificate warning

You can still click on “Details” to ignore the warning and browse to the site, but you’re better off running dotnet dev-certs https --trust to trust the certificate. Just run the tool once and you should be all set.

HTTPS redirection

If you browse to the app via http://localhost:5000 you get redirected to the HTTPS endpoint:

HTTPS redirect

This is thanks to the new HTTPS redirection middleware, which redirects all HTTP traffic to HTTPS. The middleware detects the available HTTPS server addresses at runtime and redirects accordingly; if no HTTPS port can be determined, it redirects to port 443 by default.

The HTTPS redirection middleware is added in app’s Configure method:

app.UseHttpsRedirection();

You can configure the HTTPS port explicitly in your ConfigureServices method:

services.AddHttpsRedirection(options => options.HttpsPort = 5002);

Alternatively you can specify the HTTPS port to redirect to using configuration or the ASPNETCORE_HTTPS_PORT environment variable. This is useful for when HTTPS is being handled externally from the app, like when the app is hosted behind IIS. For example, the project template adds the ASPNETCORE_HTTPS_PORT environment variable to the IIS Express launch profile so that it matches the HTTPS port setup for IIS Express:

{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:51667",
      "sslPort": 44370
    }
  },
  "profiles": {
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        "ASPNETCORE_HTTPS_PORT": "44370"
      }
    }
  }
}

HTTP Strict Transport Security (HSTS)

HSTS is a protocol that instructs browsers to access the site via HTTPS. The protocol has allowances for specifying how long the policy should be enforced (max age) and whether the policy applies to subdomains or not. You can also enable support for your domain to be added to the HSTS preload list.
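Concretely, HSTS is communicated via a response header; a one-year policy that also covers subdomains looks like this:

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```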

The ASP.NET Core 2.1 project templates enable support for HSTS by adding the new HSTS middleware in the app’s Configure method:

if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}
else
{
    app.UseExceptionHandler("/Error");
    app.UseHsts();
}

Note that HSTS is only enabled when running in a non-development environment. This is to prevent setting an HSTS policy for localhost when in development.

You can configure your HSTS policy (max age, include subdomains, exclude specific domains, support preload) in your ConfigureServices method:

services.AddHsts(options =>
{
    options.MaxAge = TimeSpan.FromDays(100);
    options.IncludeSubDomains = true;
    options.Preload = true;
});

Configuring HTTPS in production

The ASP.NET Core HTTPS development certificate is only for development purposes. In production you need to configure your app for HTTPS including the production certificate that you want to use. Often this is handled externally from the app using a reverse proxy like IIS or NGINX. ASP.NET Core 2.1 adds support to Kestrel for configuring endpoints and HTTPS certificates.

You can still configure server URLs (including HTTPS URLs) using the ASPNETCORE_URLS environment variable. To configure the HTTPS certificate for any HTTPS server URLs, you configure a default HTTPS certificate.

The default HTTPS certificate can be loaded from a certificate store:

{
  "Certificates": {
    "Default": {
      "Subject": "mysite",
      "Store": "User",
      "Location": "Local",
      "AllowInvalid": "false" // Set to "true" to allow invalid certificates (e.g. self-signed)
    }
  }
}

Or from a password protected PFX file:

{
  "Certificates": {
    "Default": {
      "Path": "cert.pfx",
      "Password": "<password>"
    }
  }
}

You can also configure named endpoints for Kestrel that include both the URL for the endpoint and the HTTPS certificate:

{
  "Kestrel": {
    "EndPoints": {
      "Http": {
        "Url": "http://localhost:5005"
      },

      "HttpsInlineCertFile": {
        "Url": "https://localhost:5006",
        "Certificate": {
          "Path": "cert.pfx",
          "Password": "<cert password>"
        }
      },

      "HttpsInlineCertStore": {
        "Url": "https://localhost:5007",
        "Certificate": {
          "Subject": "mysite",
          "Store": "My",
          "Location": "CurrentUser",
          "AllowInvalid": "false" // Set to true to allow invalid certificates (e.g. self-signed)
        }
      }
    }
  }
}

Summary

We hope these new features will make it much easier to use HTTPS during development and in production. Please give the new HTTPS support a try and let us know what you think!


ASP.NET Core 2.1.0-preview1: Using ASP.NET Core Previews on Azure App Service


There are 3 options to get ASP.NET Core 2.1 Preview applications running on Azure App Service:

  1. Installing the Preview1 site extension
  2. Deploying your app self-contained
  3. Using Web Apps for Containers

Installing the site extension

Starting with 2.1-preview1 we are producing an Azure App Service site extension that contains everything you need to build and run your ASP.NET Core 2.1-preview1 app. You can install this site extension as follows:

  1. Go to the Extensions blade
    Azure App Service Site Extension UI


  2. Click ‘Add’ at the top of the screen and Choose the ‘ASP.NET Core Runtime Extension’ from the list of available extensions.
    ASP.NET Core Runtime Extensions


  3. Then agree to the license terms by clicking ‘OK’ on the ‘Accept Legal Terms’ screen, finally click ‘OK’ at the bottom of the Add Extension screen.
    Accept Agreement


Once the add operation has completed you will have .NET Core 2.1 Preview 1 installed. You can verify this by going to the Console and running 'dotnet --info'. It should look like this:

dotnet Info output


You can see the path to the site extension where Preview1 has been installed, showing that you are running from the site extension instead of from the default ProgramFiles location. If you see ProgramFiles instead then try restarting your site and running the info command again.

Using an ARM template

If you are using an ARM template to create and deploy applications you can use the ‘siteextensions’ resource type to add the site extension to a Web App. For example:

You could add and edit this snippet in your own ARM template to add the site extension to your web app. Make sure that this resource definition is in the resources collection of your site resource.
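The snippet referenced above was omitted here; a rough sketch of such a siteextensions resource follows (the extension name, apiVersion, and parameter names are illustrative):

```json
{
  "apiVersion": "2016-08-01",
  "name": "AspNetCoreRuntime",
  "type": "siteextensions",
  "dependsOn": [
    "[resourceId('Microsoft.Web/sites', parameters('siteName'))]"
  ],
  "properties": {}
}
```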

Deploy a self-contained app

You can deploy a self-contained app that carries the preview1 runtime with it when being deployed. This option means that you don’t need to prepare your site, but it does require you to publish your application differently than you would when deploying to a server with the runtime pre-installed.

Self-contained apps are an option for all .NET Core applications, and some of you may be deploying your applications this way already.

Use Docker

We have 2.1-preview1 Docker images available on Docker Hub. You can use them as your base image and deploy to Web Apps for Containers as you normally would.

Conclusion

This is the first time that we are using site extensions instead of pre-installing previews globally on Azure App Service. If you have any problems getting it to work then log an issue on GitHub.

ASP.NET Core 2.1.0-preview1: Getting started with SignalR


Since 2013, ASP.NET developers have been using SignalR to build real-time web applications. Now, with ASP.NET Core 2.1 Preview 1, we’re bringing SignalR over to ASP.NET Core so you can build real-time web applications with all the benefits of ASP.NET Core. We released an alpha version of this new SignalR back in October that worked with ASP.NET Core 2.0, but now it’s ready for a broader preview and built-in to ASP.NET Core 2.1 (no additional NuGet packages required!). This new version of SignalR gave us a chance to significantly redesign some elements and learn from the lessons of the past, but the core APIs you work with should be very similar. The new design gives us a much more flexible platform on which to build the future of real-time .NET server applications. For now, though, let’s walk through a simple Chat demo to see how it works in ASP.NET Core SignalR.

Prerequisites

In order to complete this tutorial you need the following tools:

  1. .NET Core SDK version 2.1.300-preview1 or higher.
  2. Node.js (needed only for NPM, to download the SignalR JavaScript library; we strongly recommend using at least version 8.9.4 of Node).
  3. Your IDE/Editor of choice.

Building the UI

Let’s start by building a simple UI for a simple chat app. First, create a new Razor pages application using dotnet new:

Add a new page for the chat UI:
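The two commands were omitted here; they are along these lines (the project and page names are our choices):

```shell
# Create a new Razor Pages app with Individual authentication
dotnet new razor --auth Individual -o SignalRTutorial
cd SignalRTutorial

# Add a new Razor page for the chat UI
dotnet new page -n Chat -o Pages
```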

You should now have Pages/Chat.cshtml and Pages/Chat.cshtml.cs files in your project. First, open Pages/Chat.cshtml.cs, change the namespace name to match your other page models and add the Authorize attribute to ensure only authenticated users can access the Chat page.

Next, open Pages/Chat.cshtml and add some UI:

The UI we’ve added is fairly simple. We’re going to use ASP.NET Core Identity for authentication, which means the user will be authenticated and will have a username by the time they get here. To try it out, use dotnet run to launch the site and Register as a new user. Then navigate to the /Chat endpoint; you should see the following UI:


The Chat UI

Writing the server code

In SignalR, you put server-side code in a “Hub”. Hubs contain methods that the SignalR Client allows you to invoke from the browser, much like how an MVC controller has actions that are invoked by issuing HTTP requests. However, unlike an MVC Controller Action, SignalR allows the server to invoke methods on the client as well, allowing you to develop real-time applications that notify users of new content. So, first, we need to build a hub. Back in the root of the project, create a Hubs directory and add a new file to that directory called ChatHub.cs:

Let’s go back over that code a little bit and look at what it does.

First, we have a class inheriting from Hub, which is the base class required for all SignalR Hubs. We apply the [Authorize] attribute to it which restricts access to the Hub to registered users and ensures that Context.User is available for us in the Hub methods. Inside Hub methods, you can use the Clients property to access the clients connected to the hub. We use the .All property, which gives us an object that can be used to send messages to every client connected to the Hub.

When a new client connects, the OnConnectedAsync method will be invoked. We override that method to send the SendAction message to every client, providing two arguments: the name of the user, and the action that occurred (in this case, that they “joined” the chat session). We do the same in OnDisconnectedAsync, which is invoked when a client disconnects.

When a client invokes the Send method, we send the SendMessage message to every client, again providing two arguments: The name of the user sending the message and the message itself. Every client will receive this message, including the sending client itself.

To finish off the server-side, we need to add SignalR to our application. We do that in the Startup.cs file. First, add the following to the end of the ConfigureServices method to register the necessary SignalR services into the DI container:
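The call that was elided above is a single service registration; for example:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // Registers the SignalR services with the DI container
    services.AddSignalR();
}
```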

Then, we need to put SignalR into the middleware pipeline, and give our ChatHub hub a URL that the client can reference. We do that by adding these lines to the end of the Configure method:
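Those lines (elided above) map the hub to its URL; a sketch:

```csharp
app.UseSignalR(routes =>
{
    // Clients will connect to ChatHub at /hubs/chat
    routes.MapHub<ChatHub>("/hubs/chat");
});
```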

This configures the hub so that it is available at the URL /hubs/chat. You can use any URL you want, but it can’t match an existing MVC action or Razor Page.

NOTE: You’ll need to add a using directive for SignalRTutorial.Hubs in order to use ChatHub in your MapHub call.

Building the client-side

Now that we have the server hub up and running, we need to add code to the Chat.cshtml page to use the client. First, however, we need to get the SignalR JavaScript client and add it to our application. There are many ways you can do this, such as using a bundling tool like Webpack, but here we’re going to go with a fairly simple approach of copying and pasting. First, install the SignalR client using NPM:
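The NPM command, omitted above, installs the @aspnet/signalr package (the 2.1-era package name for the client):

```shell
npm install @aspnet/signalr
```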

You can find the version of the client designed for use in Browsers in node_modules/@aspnet/signalr/dist/browser. There are minified files there as well. For now, let’s just copy the signalr.js file out of that directory and into wwwroot/lib/signalr in the project:


SignalR JS file in the wwwroot/lib/signalr folder

Now, we can add JavaScript to our Chat.cshtml page to wire everything up. At the end of the file (after the closing </ul> tag), add the following:

We put our scripts in the Scripts Razor section, in order to ensure they end up at the very bottom of the Layout page. First, we load the signalr.js library we just copied in:
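The script reference (elided above) points at the copied file; assuming the wwwroot/lib/signalr location from earlier, it sits inside the Scripts section:

```cshtml
@section Scripts {
    <script src="~/lib/signalr/signalr.js"></script>
}
```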

Then, we add a script block for our own code. In that code, we first get references to some DOM elements, and define a helper function to add a new item to the messages-list list. Then, we create a new connection, connecting to the URL we specified back in the Configure method.

At this point, the connection has not yet been opened. We need to call connection.start() to open the connection. However, before we do that we have some set-up to do. First, let’s wire up the “submit” handler for the <form>. When the “Send” button is pressed, this handler will be fired and we want to grab the content of the message text box and send the Send message to the server, passing the message as an argument (we also clear the text box so that the user can enter a new message):
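The handler (elided above) might look like the following; the element names (sendForm, messageBox) are assumptions for illustration:

```javascript
sendForm.addEventListener("submit", function (event) {
    // Don't post the form back to the server
    event.preventDefault();

    // Invoke the hub's Send method with the message text, then clear the box
    connection.invoke("Send", messageBox.value);
    messageBox.value = "";
});
```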

Then, we wire up handlers for the SendMessage and SendAction messages (remember back in the Hub we use the SendAsync method to send those messages, so we need a handler on the client for them):
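A sketch of those handlers; appendMessage stands in for the helper mentioned above, and the argument names are illustrative:

```javascript
// Raised by the hub's SendAsync("SendMessage", ...) calls
connection.on("SendMessage", function (sender, message) {
    appendMessage(sender + ": " + message);
});

// Raised by the hub's SendAsync("SendAction", ...) calls
connection.on("SendAction", function (sender, action) {
    appendMessage(sender + " " + action);
});
```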

Finally, we start the connection. The .start method returns a JavaScript Promise object that completes when the connection has been established. Once it’s established, we want to enable the text box and button:
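For example (again with illustrative element names):

```javascript
connection.start().then(function () {
    // The connection is open; let the user start chatting
    messageBox.disabled = false;
    sendButton.disabled = false;
});
```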

Testing it out

With all that code in place, it should be ready to go. Use dotnet run to launch the app and give it a try! Then, use a Private Browsing window and log in as a different user. You should be able to chat back and forth between the browser windows.

Conclusion

This has been a brief overview of how to get started with SignalR in ASP.NET Core 2.1 Preview 1. Check out the full code for this tutorial if you’d like to see more details. If you need help, post questions on StackOverflow using the signalr-core tag. Finally, if you think you’ve found a bug, file it on our GitHub repository.

ASP.NET Core 2.1.0-preview1: Introducing compatibility version in MVC


This post was written by Ryan Nowak

In 2.1 we’re adding a feature to address a long-standing problem for maintaining MVC – how do we make improvements to framework code without making it too hard for developers to upgrade to the latest version? This is not an easy concern to solve – and with 7 major releases of MVC (dating back to 2009) there are a few things we’d like to leave in the past.

Unlike most other parts of ASP.NET Core, MVC is a framework: our code calls your code in lots of idiosyncratic ways. If we change which methods we call, in what order, or how we handle exceptions, it’s very easy for working code to become non-working code. In our experience, it’s also not good enough for the team to simply expect developers to rely on the documented behavior and punish those who don’t.

This last bit is summed up with Hyrum’s Law, or if you prefer, the XKCD version. We make decisions with the assumption that some developers have built working applications that rely on our bugs.

Despite these challenges, we think it’s worthwhile to keep moving forward. We’re disappointed too when we get a good piece of feedback that we can’t act upon because it’s incompatible with our legacy behavior.

What we’re doing

Our plan is to continue to make improvements to framework behaviors where we think we’ve made a mistake, or where we can update a feature to be unequivocally better. However, we’re going to make these changes opt-in, and make it easy to opt in. New applications created from our templates will opt in to the current release’s behaviors by default.

When we reach the next major release (3.0 – not any time soon) – we will remove the old behaviors.

Opt-in means that updating your package references doesn’t give you different behavior. You have to choose both the new version and the new behavior.

Right now this looks like:

OR
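The two snippets elided above are, roughly, the global compatibility-version call and the per-option switches (a sketch; AllowCombiningAuthorizeFilters is one of the individual switches):

```csharp
// Opt in to all 2.1 behaviors at once...
services.AddMvc()
    .SetCompatibilityVersion(CompatibilityVersion.Version_2_1);

// ...or toggle individual behaviors through their options
services.AddMvc(options =>
{
    options.AllowCombiningAuthorizeFilters = true;
});
```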

What this means

I think this does a few things that are valuable. Consider all of the below as goals or success criteria. We still have to do a good job understanding your feedback and communicating for these things to happen.

For you and for us: We can continue to invest in new ideas and adapt to a changing web landscape.

For you: It’s easy to adopt new versions in small steps.

For us: Streamlines things that require a lot of effort to support, document, and respond to feedback.

For us: Simplifies the decision process of how to make and communicate a change.

What we’re not doing

While we’re giving you fine-grained control over which new behaviors you get, we don’t intend to keep old behaviors forever. This is not a license to live in the past. As stated above, our plan is to update things that are broken and keep moving forward by removing old behaviors over time.

We’re also not treating this new capability as open season on breaking changes. Making any change that impacts developers on our platform has to be justified by providing enough value, and needs to be comprehensible and actionable by those who are impacted, because we expect all developers to deal with it eventually.

A good candidate change is one that:

  • adds a feature, but with a small break risk for a minority of users (areas for Razor Pages)
  • fixes a big problem, but with a comprehensible impact (exception handling for input formatters)
  • never worked the way we thought (bug), and streamlines something complicated (combining authorization filters)

Note that in all of the cases above, the new behaviors are easier for us to explain and document. We would recommend that everyone choose the new behaviors; it’s not a matter of preference.

Give us feedback about this. If you think this plan leaves you out in the cold, let us know how and why.

What’s happening now?

Most of the known work for us has already happened. We have made about five design changes to features inside MVC during the 2.1 milestone that deserved a compatibility switch.

You can find a summary of these changes below. My hope is that the documentation added to the specific options and types explains what changes when you opt in to each setting and why we feel it’s important.

General MVC

Combine Authorization Filters

Smarter exception handling for formatters

Smarter validation for enums

Allow non-string types with HeaderModelBinder (2.1.0-preview-2)

JSON Formatter

Better error messages

Razor Pages

Areas for Pages

Appendix A: an auspicious example

I think exception handling for input formatters is probably the best illustrative example of how this philosophy works.

The best starting place is probably to look at the docs that I added in this PR. We have a problem in the 1.X and 2.0 family of MVC releases where any exception thrown by an IInputFormatter will be swallowed by the infrastructure and turned into a model state error. This includes TypeLoadException, NullReferenceException, ThreadAbortException and all other kinds of esoterica.

This is the case because we didn’t have an exception type that says “I failed to process the input, report an error to the client”. We added this in 2.1 and we’ve updated our formatters to use it in the appropriate cases (the XML serializers throw exceptions). However this can’t help formatters we didn’t write.

This leads to the need for a switch. If you need to use a formatter written against 1.0 that throws an exception and expects MVC to handle it, that will still work until you opt in to the new behavior. We do plan on removing the old way in 3.0, but this eases the pressure: instead of this problem blocking you from adopting 2.1, you have time to figure out a solution before 3.0 (a long time away).

——

I hope this example provides a little insight into what our process is like. See the relevant links for the in-code documentation about the other changes. We are looking forward to feedback on this, either on GitHub or as comments on this post.

ASP.NET Core 2.1.0-preview1: Improvements for building Web APIs


ASP.NET Core 2.1 adds a number of features that make it easier and more convenient to build Web APIs. These features include Web API controller specific conventions, more robust input processing and error handling, and JSON patch improvements.

Please note that some of these features require enabling MVC compatibility with 2.1, so be sure to check out the post on MVC compatibility versions as well.

[ApiController] and ActionResult<T>

ASP.NET Core 2.1 introduces new Web API controller specific conventions that make Web API development more convenient. These conventions can be applied to a controller using the new [ApiController] attribute:

  • Automatically respond with a 400 when validation errors occur – no need to check the model state in your action method
  • Infer smarter defaults for action parameters: [FromBody] for complex types, [FromRoute] when possible, otherwise [FromQuery]
  • Require attribute routing – actions are not accessible by convention-based routes

You can also now return ActionResult<T> from your Web API actions, which allows you to return arbitrary action results or a specific return type (thanks to some clever use of implicit cast operators). Most Web API action methods have a specific return type, but also need to be able to return multiple different action results.

Here’s an example Web API controller that uses these new enhancements:

[Route("api/[controller]")]
[ApiController]
public class ProductsController : ControllerBase
{
    private readonly ProductsRepository _repository;

    public ProductsController(ProductsRepository repository)
    {
        _repository = repository;
    }

    [HttpGet]
    public IEnumerable<Product> Get()
    {
        return _repository.GetProducts();
    }

    [HttpGet("{id}")]
    public ActionResult<Product> Get(int id)
    {
        if (!_repository.TryGetProduct(id, out var product))
        {
            return NotFound();
        }
        return product;
    }

    [HttpPost]
    [ProducesResponseType(201)]
    public ActionResult<Product> Post(Product product)
    {
        _repository.AddProduct(product);
        return CreatedAtAction(nameof(Get), new { id = product.Id }, product);
    }
}

Because these conventions are more descriptive, tools like Swashbuckle or NSwag can do a better job generating an OpenAPI specification for this Web API, including information like return types, parameter sources, and possible error responses, without needing additional attributes.

Better input processing

ASP.NET Core 2.1 does a much better job of providing appropriate error information when the request body fails to deserialize or the JSON is invalid.

For example, in ASP.NET Core 2.0, if your Web API received a request with a JSON property that had the wrong type (like a string instead of an int), you got a generic error message like this:

{
  "count": [
    "The input was not valid."
  ]
}

In 2.1 we provide more detailed error information about what was wrong with the request including path and line number information:

{
  "count": [
    "Could not convert string to integer: abc. Path 'count', line 1, position 16."
  ]
}

Similarly, if the request is syntactically invalid (e.g. a missing curly brace), then 2.1 will let you know:

{
  "": [
    "Unexpected end when reading JSON. Path '', line 1, position 1."
  ]
}

You can also now add validation attributes to top level parameters of your action method. For example, you can mark a query string parameter as required like this:

[HttpGet("test/{testId}")]
public ActionResult<TestResult> Get(string testId, [Required]string name)

Problem Details

In this release we added support for RFC 7807 – Problem Details for HTTP APIs as a standardized format for returning machine-readable error responses from HTTP APIs.

To update your Web API controllers to return Problem Details responses for invalid requests you can add the following code to your ConfigureServices method:

services.Configure<ApiBehaviorOptions>(options =>
{
    options.InvalidModelStateResponseFactory = context =>
    {
        var problemDetails = new ValidationProblemDetails(context.ModelState)
        {
            Instance = context.HttpContext.Request.Path,
            Status = StatusCodes.Status400BadRequest,
            Type = "https://asp.net/core",
            Detail = "Please refer to the errors property for additional details."
        };
        return new BadRequestObjectResult(problemDetails)
        {
            ContentTypes = { "application/problem+json", "application/problem+xml" }
        };
    };
});

You can also return a Problem Details response from your API action for an invalid request using the ValidationProblem() helper method.

An example Problem Details response for an invalid request looks like this (where the content type is application/problem+json):

{
  "errors": {
    "Text": [
      "The Text field is required."
    ]
  },
  "type": "https://asp.net/core",
  "title": "One or more validation errors occurred.",
  "status": 400,
  "detail": "Please refer to the errors property for additional details.",
  "instance": "/api/values"
}

JSON Patch improvements

JSON Patch defines a JSON document structure for implementing HTTP PATCH semantics. A JSON Patch document defines a sequence of operations (add, remove, replace, copy, etc.) that can be applied to a JSON resource.

ASP.NET Core has supported JSON Patch since it first shipped, but in 2.1 we've added support for the test operation. The test operation allows you to check for specific values before applying the patch. If any test operation fails, the whole patch fails.

A Web API controller action that supports JSON Patch looks like this:

[HttpPatch("{id}")]
public ActionResult<Value> Patch(int id, JsonPatchDocument<Value> patch)
{
    var value = new Value { ID = id, Text = "Do" };

    patch.ApplyTo(value, ModelState);

    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }

    return value;
}

Where the Value type is defined as follows:

public class Value
{
    public int ID { get; set; }

    public string Text { get; set; }

    public IDictionary<int, string> Status { get; } = new Dictionary<int, string>();
}

The following JSON Patch request successfully adds a value to the Status dictionary (note that we've also added support for non-string dictionary keys, like int, Guid, etc.):

Successful request

[
  { "op": "test", "path": "/text", "value": "Do" },
  { "op": "add", "path": "/status/1", "value": "Done!" }
]

Successful response

{
  "id": 123,
  "text": "Do",
  "status": {
    "1": "Done!"
  }
}

Conversely the following JSON Patch request fails because the value of the text property doesn't match:

Failed request

[
  { "op": "test", "path": "/text", "value": "Do not" },
  { "op": "add", "path": "/status/1", "value": "Done!" }
]

Failed response

{
  "Value": [
    "The current value 'Do' at path 'text' is not equal to the test value 'Do not'."
  ]
}

Summary

We hope you enjoy these Web API improvements. Please give them a try and let us know what you think. If you hit any issues or have feedback please file issues on GitHub.

ASP.NET Core 2.1-preview1: Introducing HttpClient factory


HttpClient factory is an opinionated factory for creating HttpClient instances to be used in your applications. It is designed to:

  1. Provide a central location for naming and configuring logical HttpClients. For example, you may configure a client that is pre-configured to access the github API.
  2. Codify the concept of outgoing middleware via delegating handlers in HttpClient and implementing Polly based middleware to take advantage of that.
    1. HttpClient already has the concept of delegating handlers that could be linked together for outgoing HTTP requests. The factory will make registering of these per named client more intuitive as well as implement a Polly handler that allows Polly policies to be used for Retry, CircuitBreakers, etc.
  3. Manage the lifetime of HttpClientMessageHandlers to avoid common problems that can occur when managing HttpClient lifetimes yourself.

Usage

There are several ways that you can use HttpClient factory in your application. For the sake of brevity we will only show you one of the ways to use it here, but all options are being documented and are currently listed in the HttpClientFactory repo wiki.

In the rest of this section we will use HttpClient factory to create a HttpClient to call the default API template from Visual Studio, the ValuesController API.

1. Create a typed client

A typed client is a class that accepts an HttpClient and optionally uses it to call an HTTP service. For example:
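A typed client along these lines might look like the following (a sketch; the ValuesClient name and the /api/values route are illustrative, matching the default API template mentioned above):

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

// Illustrative typed client for the default ValuesController API.
// The HttpClient instance is supplied by the HttpClient factory.
public class ValuesClient
{
    public ValuesClient(HttpClient client)
    {
        Client = client;
    }

    public HttpClient Client { get; }

    public async Task<IEnumerable<string>> GetValues()
    {
        var response = await Client.GetAsync("/api/values");
        response.EnsureSuccessStatusCode();

        // ReadAsAsync comes from the Microsoft.AspNet.WebApi.Client package.
        return await response.Content.ReadAsAsync<IEnumerable<string>>();
    }
}
```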

NOTE: The Content.ReadAsAsync method comes from the Microsoft.AspNet.WebApi.Client package. You will need to add that to your application if you want to use it.

The typed client is activated by DI, meaning that it can accept any registered service in its constructor.

2. Register the typed client

Once you have a type that accepts a HttpClient you can register it with the following:
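A registration along these lines might look like the following (a sketch; the "ValuesClientUri" configuration key is an illustrative assumption):

```csharp
// In ConfigureServices. AddHttpClient<T> registers ValuesClient as a typed
// client; the lambda configures the HttpClient passed to it.
services.AddHttpClient<ValuesClient>(client =>
{
    // "ValuesClientUri" is a hypothetical configuration key, not from the post.
    client.BaseAddress = new Uri(Configuration["ValuesClientUri"]);
});
```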

The function here will execute to configure your HttpClient instance before it is passed to the ValuesClient. A typed client is, effectively, a transient service, meaning that a new instance is created each time one is needed and it will receive a new HttpClient instance each time it is constructed. This means that your configuration func, in this case retrieving the URI from configuration, will run every time something needs a ValuesClient.

3. Use the client

Now that you have registered your client, you can use it anywhere that services can be injected by DI. For example, I could have a Razor Pages page model like this:

or perhaps like this:
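A page model consuming the typed client could look like this (a sketch; the IndexModel and ValuesClient names are illustrative):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class IndexModel : PageModel
{
    private readonly ValuesClient _client;

    // The typed client is injected by DI.
    public IndexModel(ValuesClient client)
    {
        _client = client;
    }

    public IEnumerable<string> Values { get; private set; }

    public async Task OnGetAsync()
    {
        Values = await _client.GetValues();
    }
}
```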

Diagnostics

By default, when you use a HttpClient created by HttpClient factory, you will see logs like the following appear:

Log of outgoing HTTP requests

The log messages about starting and processing a HTTP request are being logged because we are using a HttpClient created by the HttpClient factory. From these 7 log messages you can see:

An incoming request into localhost:5001; in this case it is the browser navigating to my Razor Pages page.
  2. MVC selecting a handler for the request, the OnGetAsync method of my PageModel.
  3. The beginning of an outgoing HTTP request, this marks the start of the outgoing pipeline that we will discuss in the next section.
  4. We send a HTTP request with the given verb.
Receive the response back in 439.6606 ms, with a status of OK.
  6. End the outgoing HTTP pipeline.
  7. End and return from our handler.

If you set the LogLevel to at least Debug then we will also log header information. In the following screenshot I added an accept header to my request, and you can see the response headers:

Debug logs showing outgoing headers

The outgoing middleware pipeline

For some time now ASP.NET has had the concept of middleware that operates on an incoming request. With HttpClientFactory we are going to bring a similar concept to outgoing HTTP requests using the existing DelegatingHandler type that has been in .NET for some time. As an example of how this works, we will look at how we generate the log messages shown in the previous section:

NOTE: This code is simplified for the sake of brevity and ease of understanding, the actual class can be found here
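A simplified logging handler in this spirit might look like the following (a sketch, not the actual framework class):

```csharp
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;

// Simplified sketch of the logging DelegatingHandler; the real framework
// implementation is more elaborate.
public class LoggingHttpMessageHandler : DelegatingHandler
{
    private readonly ILogger _logger;

    public LoggingHttpMessageHandler(ILogger logger)
    {
        _logger = logger;
    }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Begin the outgoing pipeline and log the request start/end.
        using (_logger.BeginScope("HTTP {Method} {Uri}", request.Method, request.RequestUri))
        {
            _logger.LogInformation("Start processing HTTP request {Method} {Uri}",
                request.Method, request.RequestUri);

            var response = await base.SendAsync(request, cancellationToken);

            _logger.LogInformation("End processing HTTP request - {StatusCode}",
                response.StatusCode);
            return response;
        }
    }
}
```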

Let’s look at another example that isn’t already built in. When using client-based service discovery systems, you ask another service for the host/port combination to use when communicating with a given service type. For example, you could use the HTTP API of Consul.io to resolve the name ‘values’ to an IP and port combination. In the following handler we will replace the incoming host name with the result of a request to an IServiceRegistry type that would be implemented to communicate with whatever service discovery system you use. In this way we could make a request to ‘http://values/api/values’ and it would actually connect to the resolved IP and port.

NOTE: This sample is inspired by the CondenserDotNet project, which has an HttpClientHandler that works the same way.
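A handler along these lines might look like the following (a sketch; IServiceRegistry and its ResolveAsync method are hypothetical abstractions over whatever discovery system you use):

```csharp
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical abstraction over a service discovery system such as Consul.
public interface IServiceRegistry
{
    Task<(string Host, int Port)> ResolveAsync(string serviceName);
}

public class ServiceDiscoveryHandler : DelegatingHandler
{
    private readonly IServiceRegistry _registry;

    public ServiceDiscoveryHandler(IServiceRegistry registry)
    {
        _registry = registry;
    }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Resolve the logical host name (e.g. "values") to a concrete endpoint,
        // then rewrite the request URI before sending it.
        var endpoint = await _registry.ResolveAsync(request.RequestUri.Host);

        var builder = new UriBuilder(request.RequestUri)
        {
            Host = endpoint.Host,
            Port = endpoint.Port
        };
        request.RequestUri = builder.Uri;

        return await base.SendAsync(request, cancellationToken);
    }
}
```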

We can then register this with the following:
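A registration for such a handler might look like this (a sketch; ServiceDiscoveryHandler, ConsulServiceRegistry, and ValuesClient are illustrative names):

```csharp
// The handler must be registered as a transient service, and the registry
// can have its own (here singleton) lifetime.
services.AddTransient<ServiceDiscoveryHandler>();
services.AddSingleton<IServiceRegistry, ConsulServiceRegistry>(); // hypothetical implementation

// Attach the handler to the typed client's outgoing pipeline.
services.AddHttpClient<ValuesClient>()
    .AddHttpMessageHandler<ServiceDiscoveryHandler>();
```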

The type being given to AddHttpMessageHandler must be registered as a transient service. However, because we have the IServiceRegistry as its own service, it can have a different lifetime from the handler, allowing caching and other features to be implemented in the service registry instead of the handler itself.

Now that we’ve registered the handler all requests will have their Host and Port set to whatever is returned from the IServiceRegistry type. If we continued our example we would implement IServiceRegistry to call the Consul.io HTTP Endpoint to resolve the URI from the requested HostName.

HttpClient lifetimes

In general you should get a HttpClient from the factory per unit of work. In the case of MVC this means you would generally accept a typed client in the constructor of your controller and let it be garbage collected when the controller does. If you are using IHttpClientFactory directly, which we don’t talk about in this post but can be done, then the equivalent would be to create a HttpClient in the constructor and let it be collected the same way.

Disposing of the client is not mandatory, but doing so will cancel any ongoing requests and ensure the given instance of HttpClient cannot be used after Dispose is called. The factory takes care of tracking and disposing of the important resources that instances of HttpClient use, which means that HttpClient instances can generally be treated as .NET objects that don’t require disposing.

One effect of this is that some common patterns that people use today to handle HttpClient instances, such as keeping a single HttpClient instance alive for a long time, are no longer required. Documentation about what exactly the factory does and what patterns it resolves will be available, but hasn’t been completed yet.

In the future we hope that a new HttpClientHandler will mean that HttpClient instances created without the factory will also be able to be treated this way. We are working on this in the corefx GitHub repositories now.

Future

Before 2.1 is released

  • Polly integration.
  • Polly is a .NET resilience and transient-fault-handling library that allows developers to express policies such as Retry, Circuit Breaker, Timeout, Bulkhead Isolation, and Fallback in a fluent and thread-safe manner. We will be building a package that allows easy integration of Polly policies with HttpClients created by the HttpClient factory.

Post 2.1
  • Auth Handlers.
  • The ability to have auth headers automatically added to outgoing HTTP requests.

Conclusion

The HttpClient factory is available in 2.1 Preview 1 apps. You can ask questions and file feedback in the HttpClientFactory GitHub repository.

ASP.NET Core 2.1.0-preview1: Improvements to IIS hosting

The ASP.NET Core Module (ANCM) is a global IIS module that has been responsible for proxying requests from IIS to your backend ASP.NET Core application running on Kestrel. Since 2.0 we have been hard at work to bring two major improvements to ANCM: version agility and performance.
Note that in the 2.1.0-preview1 release, we have chosen not to update the global module by default, to avoid impacting any existing 1.x/2.0 applications at this early stage. This post details the changes in ANCM and how you can opt in to trying out these changes today.

Version agility

It has been hard to iterate on ANCM since we’ve had to ensure forward and backward compatibility between every version of ASP.NET Core and ANCM that has shipped thus far. To mitigate this problem going forward, we’ve refactored our code into two separate components: the ASP.NET Core Shim (shim) and the ASP.NET Core Request Handler (request handler). The shim (aspnetcore.dll), as the name suggests, is a lightweight shim, whereas the request handler (aspnetcorerh.dll) does all the request processing. Going forward, the shim will ship globally and will continue to be installed via the Windows Server Hosting installer. The request handler will now ship via a NuGet package, Microsoft.AspNetCore.Server.IIS, which you can reference directly in your application or consume via the ASP.NET Core metapackage or shared runtime. As a consequence, two different ASP.NET Core applications running on the same server can use different versions of the request handler.

Performance

In addition to the packaging changes, ANCM also adds support for an in-process hosting model for ASP.NET Core applications running on .NET Core. Instead of serving as a reverse proxy, ANCM can now boot the CoreCLR and host your application inside the IIS worker process (w3wp.exe). Our preliminary performance tests have shown that this model delivers 4.4x the request throughput compared to hosting your .NET Core application out-of-process and proxying over the requests.

How do I try it?

If you have already installed the 2.1.0-preview1 Windows Server Hosting bundle, you can install the latest ANCM by running this script.

Alternatively, you can deploy an Azure VM which is already setup with the latest ANCM by clicking the Deploy to Azure button below.
 

Create a new project or update your existing project

Now that we have an environment to publish to, let’s create a new application that targets 2.1.0-preview1 of ASP.NET Core.
Alternatively, you can upgrade an existing project by following the instructions on this post.

Modify your project

Let’s go ahead and modify our project by setting a project property to indicate that we want our published application to run in-process.
Add this to your csproj file:
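A sketch of the property group, assuming the AspNetCoreModuleHostingModel property name that this post references in the web.config section:

```xml
<PropertyGroup>
  <AspNetCoreModuleHostingModel>inprocess</AspNetCoreModuleHostingModel>
</PropertyGroup>
```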

Publish your project

If you’re using Visual Studio, you can publish directly to the Azure VM you just created: in Solution Explorer, right-click the project and select Publish to open the Publish wizard, then create a new publish profile targeting that VM.
You may need to allow WebDeploy to publish to a server using an untrusted certificate. This can be accomplished by adding the following property to your publish profile (.pubxml file):
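For example (a sketch using the standard WebDeploy publish property):

```xml
<PropertyGroup>
  <AllowUntrustedCertificate>True</AllowUntrustedCertificate>
</PropertyGroup>
```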
If you’re running elsewhere, go ahead and publish your app to a Folder and copy over your artifacts, or publish directly via WebDeploy.

web.config

As part of the publish process, the Web SDK will read the AspNetCoreModuleHostingModel property and transform your web.config to look something like this (observe the new hostingModel attribute):
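The transformed web.config might look something like this (a sketch; MyApp.dll and the logging attributes are illustrative):

```xml
<configuration>
  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
    </handlers>
    <!-- Note the new hostingModel attribute. -->
    <aspNetCore processPath="dotnet" arguments=".\MyApp.dll"
                stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout"
                hostingModel="inprocess" />
  </system.webServer>
</configuration>
```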

Debugging

If you’ve been following along using an Azure VM, you can enable remote debugging on your VM via the Cloud Explorer (to view it, select View > Cloud Explorer on the menu bar). In the Actions tab associated with your VM, you should be able to Enable Debugging.
Once you’ve enabled remote debugging, you should be able to attach directly to the w3wp.exe process. If you don’t see the process listed, you may need to send a request to your server to force IIS to start the worker process.
If you’ve been following along locally, you can use Visual Studio to attach directly to your IIS worker process and debug your application code running in the IIS worker process as shown below. (You may be prompted to restart Visual Studio as an Administrator for this).
We don’t yet have an experience for debugging with IIS Express. At the moment, you will have to publish to IIS and then attach a debugger.

Switching between in-process and out-of-process

Switching hosting models can be a deployment-time decision. To change between hosting models, all you have to do is change the hostingModel attribute in your web.config from inprocess to outofprocess.
This is easy to see in a simple app, where you’ll observe either “Hello World from dotnet” or “Hello World from w3wp” depending on your hosting model.
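One way such an app could be written (a sketch, assuming a minimal middleware in Startup.Configure):

```csharp
// Prints the name of the hosting process: "dotnet" when running
// out-of-process behind the ANCM reverse proxy, "w3wp" when hosted
// in-process inside the IIS worker process.
app.Run(async context =>
{
    var processName = System.Diagnostics.Process.GetCurrentProcess().ProcessName;
    await context.Response.WriteAsync($"Hello World from {processName}");
});
```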

ASP.NET Core 2.1.0-preview1: Razor UI in class libraries


One frequently requested scenario that ASP.NET Core 2.1 improves is building UI in reusable class libraries. With ASP.NET Core 2.1 you can package your Razor views and pages (.cshtml files) along with your controllers, page models, and data models in reusable class libraries that can be packaged and shared. Apps can then include pre-built UI components by referencing these packages and customize the UI by overriding specific views and pages.

To try out building Razor UI in a class library first install the .NET Core SDK for 2.1.0-preview1.

Create an ASP.NET Core Web Application by running dotnet new razor or selecting the corresponding template in Visual Studio. The default template has five standard pages: Home, About, Contact, Error, and Privacy. Let’s move the Contact page into a class library. Add a .NET Standard class library to the solution and reference it from the ASP.NET Core Web Application.

We need to make some modifications to the class library .csproj file to enable Razor compilation. We need to set the RazorCompileOnBuild, IncludeContentInPack, and ResolvedRazorCompileToolset MSBuild properties, add the .cshtml files as content, and add a package reference to Microsoft.AspNetCore.Mvc. Your class library project file should look like this:

ClassLibrary1.csproj

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <ResolvedRazorCompileToolset>RazorSdk</ResolvedRazorCompileToolset>
    <RazorCompileOnBuild>true</RazorCompileOnBuild>
    <IncludeContentInPack>false</IncludeContentInPack>
  </PropertyGroup>

  <ItemGroup>
    <Content Include="Pages\**\*.cshtml" />
  </ItemGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="2.1.0-preview1-final" />
  </ItemGroup>

</Project>

For Preview1 making these project file modifications is a manual step, but in future previews we will provide a Razor MSBuild SDK (Microsoft.NET.Sdk.Razor) as well as project templates to handle these details for you.

Now we can add some Razor files to our class library. Add a Pages directory to the class library project and move over the Contact page along with its page model (Contact.cshtml, Contact.cshtml.cs) from the web app project. You’ll also need to move over _ViewImports.cshtml to get the necessary using statements.

Class library with Razor

Add some content to the Contact.cshtml file so you can tell it’s being used.

@page
@model ContactModel
@{
    ViewData["Title"] = "Contact";
}
<h2>@ViewData["Title"]</h2>
<h3>@Model.Message</h3>

<h2>BTW, this is from a Class Library!</h2>

Run the app and browse to the Contact page.

Contact page from a class library

You can override views and pages from a class library in your app by putting the page or view at the same path in your app. For example, let’s add a _Message.cshtml partial view that gets called from the contact page.

In the class library project add a Shared folder under the Pages folder and add the following partial view:

_Message.cshtml

<h2>You can override me!</h2>

Then call the _Message partial from the contact page using the new partial tag helper.

Contact.cshtml

@page
@model ContactModel
@{
    ViewData["Title"] = "Contact";
}
<h2>@ViewData["Title"]</h2>
<h3>@Model.Message</h3>

<h2>BTW, this is from a Class Library!</h2>

<partial name="_Message" />

Run the app to see that the partial is now rendered.

Contact page with partial

Now override the partial by adding a _Message.cshtml file to the web app under the /Pages/Shared folder.

_Message.cshtml

<h2>Overridden!</h2>

Rebuild and run the app to see the update.

Overridden partial

Summary

By compiling Razor views and pages into shareable libraries you can reuse existing UI with minimal effort. Please give this feature a try and let us know what you think on GitHub. Thanks!


ASP.NET Core 2.1.0-preview1: Introducing Identity UI as a library


ASP.NET Core has historically provided project templates with code for setting up ASP.NET Core Identity, which enables support for identity related features like user registration, login, account management, etc. While ASP.NET Core Identity handles the hard work of dealing with passwords, two-factor authentication, account confirmation, and other hairy security concerns, the amount of code required to setup a functional identity UI is still pretty daunting. The most recent version of the ASP.NET Core Web Application template with Individual User Accounts setup has over 50 files and a couple of thousand lines of code dedicated to setting up the identity UI!

Identity files

Having all this identity code in your app gives you a lot of flexibility to update and change it as you please, but it also imposes a lot of responsibility. It's a lot of security-sensitive code to understand and maintain. Also, if there is an issue with the code, it can't be easily patched.

The good news is that in ASP.NET Core 2.1 we can now ship Razor UI in reusable class libraries. We are using this feature to provide the entire identity UI as a prebuilt package (Microsoft.AspNetCore.Identity.UI) that you can simply reference from an application. The project templates in 2.1 have been updated to use the prebuilt UI, which dramatically reduces the amount of code you have to deal with. The one identity specific .cshtml file in the template is there solely to override the layout used by the identity UI to be the layout for the application.

Identity UI files

_ViewStart.cshtml

@{
    Layout = "/Pages/_Layout.cshtml";
}

The identity UI is enabled by both referencing the package and calling AddDefaultUI when setting up identity in the ConfigureServices method.

services.AddIdentity<IdentityUser, IdentityRole>(options => options.Stores.MaxLengthForKeys = 128)
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultUI()
    .AddDefaultTokenProviders();

If you want the flexibility of having the identity code in your app, you can use the new identity scaffolder to add it back.

Currently you have to invoke the identity scaffolder from the command-line. In a future preview you will be able to invoke the identity scaffolder from within Visual Studio.

From the project directory run the identity scaffolder with the -dc option to reuse the existing ApplicationDbContext.

dotnet aspnet-codegenerator identity -dc WebApplication1.Data.ApplicationDbContext

The identity scaffolder will generate all of the identity related code in a new area under /Areas/Identity/Pages.

In the ConfigureServices method in Startup.cs you can now remove the call to AddDefaultUI.

services.AddIdentity<IdentityUser, IdentityRole>(options => options.Stores.MaxLengthForKeys = 128)
    .AddEntityFrameworkStores<ApplicationDbContext>()
    // .AddDefaultUI()
    .AddDefaultTokenProviders();

Note that the ScaffoldingReadme.txt says to remove the entire call to AddIdentity, but this is a typo that will be corrected in a future release.

To also have the scaffolded identity code pick up the layout from the application, remove _Layout.cshtml from the identity area and update _ViewStart.cshtml in the identity area to point to the layout for the application (typically /Pages/_Layout.cshtml or /Views/Shared/_Layout.cshtml).

/Areas/Identity/Pages/_ViewStart.cshtml

@{
    Layout = "/Pages/_Layout.cshtml";
}

You should now be able to run the app with the scaffolded identity UI and log in with an existing user.

You can also use the code from the identity scaffolder to customize different pages of the default identity UI. For example, you can override just the register and account management pages to add some additional user profile data.

Let's extend identity to keep track of the name and age of our users.

Add an ApplicationUser class in the Data folder that derives from IdentityUser and adds Name and Age properties.

public class ApplicationUser : IdentityUser
{
    public string Name { get; set; }
    public int Age { get; set; }
}

Update the ApplicationDbContext to derive from IdentityDbContext<ApplicationUser>.

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options)
    {
    }
}

In the Startup class, update the call to AddIdentity to use the new ApplicationUser, and add back the call to AddDefaultUI if you removed it previously.

services.AddIdentity<ApplicationUser, IdentityRole>(options => options.Stores.MaxLengthForKeys = 128)
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultUI()
    .AddDefaultTokenProviders();

Now let's update the register and account management pages to add UI for the two additional user properties.

In a future release we plan to update the identity scaffolder to support scaffolding only specific pages and provide a UI for selecting which pages you want, but for now the identity scaffolder is all or nothing and you have to remove the pages you don't want.

Remove all of the scaffolded files under /Areas/Identity except for:

  • /Areas/Identity/Pages/Account/Manage/Index.*
  • /Areas/Identity/Pages/Account/Register.*
  • /Areas/Identity/Pages/_ViewImports.cshtml
  • /Areas/Identity/Pages/_ViewStart.cshtml

Let's start with updating the register page. In /Areas/Identity/Pages/Account/Register.cshtml.cs make the following changes:

  • Replace IdentityUser with ApplicationUser
  • Replace ILogger<LoginModel> with ILogger<RegisterModel> (known bug that will get fixed in a future release)
  • Update the InputModel to add Name and Age properties:

      public class InputModel
      {
          [Required]
          [DataType(DataType.Text)]
          [Display(Name = "Full name")]
          public string Name { get; set; }
    
          [Required]
          [Range(0, 199, ErrorMessage = "Age must be between 0 and 199 years")]
          [Display(Name = "Age")]
          public int Age { get; set; }
    
          [Required]
          [EmailAddress]
          [Display(Name = "Email")]
          public string Email { get; set; }
    
          [Required]
          [StringLength(100, ErrorMessage = "The {0} must be at least {2} and at max {1} characters long.", MinimumLength = 6)]
          [DataType(DataType.Password)]
          [Display(Name = "Password")]
          public string Password { get; set; }
    
          [DataType(DataType.Password)]
          [Display(Name = "Confirm password")]
          [Compare("Password", ErrorMessage = "The password and confirmation password do not match.")]
          public string ConfirmPassword { get; set; }
      }
    
  • Update the OnPostAsync method to bind the new input values to the created ApplicationUser

      var user = new ApplicationUser()
      {
          Name = Input.Name,
          Age = Input.Age,
          UserName = Input.Email,
          Email = Input.Email
      };
    

Now we can update /Areas/Identity/Pages/Account/Register.cshtml to add the new fields to the register form.

<div class="row">
    <div class="col-md-4">
        <form asp-route-returnUrl="@Model.ReturnUrl" method="post">
            <h4>Create a new account.</h4>
            <hr />
            <div asp-validation-summary="All" class="text-danger"></div>
            <div class="form-group">
                <label asp-for="Input.Name"></label>
                <input asp-for="Input.Name" class="form-control" />
                <span asp-validation-for="Input.Name" class="text-danger"></span>
            </div>
            <div class="form-group">
                <label asp-for="Input.Age"></label>
                <input asp-for="Input.Age" class="form-control" />
                <span asp-validation-for="Input.Age" class="text-danger"></span>
            </div>
            <div class="form-group">
                <label asp-for="Input.Email"></label>
                <input asp-for="Input.Email" class="form-control" />
                <span asp-validation-for="Input.Email" class="text-danger"></span>
            </div>
            <div class="form-group">
                <label asp-for="Input.Password"></label>
                <input asp-for="Input.Password" class="form-control" />
                <span asp-validation-for="Input.Password" class="text-danger"></span>
            </div>
            <div class="form-group">
                <label asp-for="Input.ConfirmPassword"></label>
                <input asp-for="Input.ConfirmPassword" class="form-control" />
                <span asp-validation-for="Input.ConfirmPassword" class="text-danger"></span>
            </div>
            <button type="submit" class="btn btn-default">Register</button>
        </form>
    </div>
</div>

Run the app and click on Register to see the updates:

Register updated

Now let's update the account management page. In /Areas/Identity/Pages/Account/Manage/Index.cshtml.cs make the following changes:

  • Replace IdentityUser with ApplicationUser
  • Update the InputModel to add Name and Age properties:

      public class InputModel
      {
          [Required]
          [DataType(DataType.Text)]
          [Display(Name = "Full name")]
          public string Name { get; set; }
    
          [Required]
          [Range(0, 199, ErrorMessage = "Age must be between 0 and 199 years")]
          [Display(Name = "Age")]
          public int Age { get; set; }
    
          [Required]
          [EmailAddress]
          public string Email { get; set; }
    
          [Phone]
          [Display(Name = "Phone number")]
          public string PhoneNumber { get; set; }
      }
    
  • Update the OnGetAsync method to initialize the Name and Age properties on the InputModel:

      Input = new InputModel
      {
          Name = user.Name,
          Age = user.Age,
          Email = user.Email,
          PhoneNumber = user.PhoneNumber
      };
    
  • Update the OnPostAsync method to update the name and age for the user:

      if (Input.Name != user.Name)
      {
          user.Name = Input.Name;
      }
    
      if (Input.Age != user.Age)
      {
          user.Age = Input.Age;
      }
    
      var updateProfileResult = await _userManager.UpdateAsync(user);
      if (!updateProfileResult.Succeeded)
      {
          throw new InvalidOperationException($"Unexpected error occurred updating the profile for user with ID '{user.Id}'");
      }
    

Now update /Areas/Identity/Pages/Account/Manage/Index.cshtml to add the additional form fields:

<div class="row">
    <div class="col-md-6">
        <form method="post">
            <div asp-validation-summary="All" class="text-danger"></div>
            <div class="form-group">
                <label asp-for="Username"></label>
                <input asp-for="Username" class="form-control" disabled />
            </div>
            <div class="form-group">
                <label asp-for="Input.Email"></label>
                @if (Model.IsEmailConfirmed)
                {
                    <div class="input-group">
                        <input asp-for="Input.Email" class="form-control" />
                        <span class="input-group-addon" aria-hidden="true"><span class="glyphicon glyphicon-ok text-success"></span></span>
                    </div>
                }
                else
                {
                    <input asp-for="Input.Email" class="form-control" />
                    <button asp-page-handler="SendVerificationEmail" class="btn btn-link">Send verification email</button>
                }
                <span asp-validation-for="Input.Email" class="text-danger"></span>
            </div>
            <div class="form-group">
                <label asp-for="Input.Name"></label>
                <input asp-for="Input.Name" class="form-control" />
            </div>
            <div class="form-group">
                <label asp-for="Input.Age"></label>
                <input asp-for="Input.Age" class="form-control" />
            </div>
            <div class="form-group">
                <label asp-for="Input.PhoneNumber"></label>
                <input asp-for="Input.PhoneNumber" class="form-control" />
                <span asp-validation-for="Input.PhoneNumber" class="text-danger"></span>
            </div>
            <button type="submit" class="btn btn-default">Save</button>
        </form>
    </div>
</div>

Run the app and you should now see the updated account management page.

Manage account updated

You can find a complete version of this sample app on GitHub.

Summary

Having the identity UI as a library makes it much easier to get up and running with ASP.NET Core Identity, while still preserving the ability to customize the identity functionality. For complete flexibility you can also use the new identity scaffolder to get full access to the code. We hope you enjoy these new features! Please give them a try and let us know what you think about them on GitHub.

ASP.NET Core 2.1.0-preview1: GDPR enhancements


2018 sees the introduction of the General Data Protection Regulation (GDPR), an EU framework that allows EU citizens to control, correct, and delete their data, no matter where in the world it is held. In ASP.NET Core 2.1 Preview 1 we’ve added some features to the ASP.NET Core templates to help you meet some of your GDPR obligations, as well as a cookie “consent” feature that allows you to annotate your cookies and control whether they are sent to the user based on their consent to receive such cookies.

HTTPS

In order to help keep users’ personal data private, ASP.NET Core configures new projects to be served over HTTPS by default. You can read more about this feature in Improvements to using HTTPS.

Cookie Consent

When you create an ASP.NET Core application targeting version 2.1 and run it, you will see a new banner on your home page:

Cookie Consent Bar

This is the consent feature in action. This feature allows you to prompt a user to consent to your application creating “non-essential” cookies. Your application should have a privacy policy and an explanation of what the user is consenting to that conforms to your GDPR requirements. By default, clicking “Learn more” will navigate the user to /Privacy where you could publish the details about your app.
The banner itself is contained in the _CookieConsentPartial.cshtml shared view. If you open this file you can see how the user’s consent value is retrieved and how it can be updated. The current consent status is exposed as an HTTP feature, ITrackingConsentFeature. If a user consents to the use of cookies, a new cookie is created by calling CreateConsentCookie() on the feature. You can examine the status of the user’s consent via the CanTrack property on the feature; however, you don’t need to do this manually. Instead, you can use the IsEssential property on CookieOptions. For example,

context.Response.Cookies.Append("Test", "Value", new CookieOptions { IsEssential = false });

would append a non-essential cookie to the response. If a user has not indicated their consent, this cookie will not be appended to the response but will be silently dropped. Conversely, marking a cookie as essential,

context.Response.Cookies.Append("Test", "Value", new CookieOptions { IsEssential = true });

will always create the cookie in the response, no matter the user’s consent status.
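The consent gate described above can be modeled in a few lines of plain C#. This is a stdlib-only sketch for illustration: ConsentGate is a hypothetical stand-in for the combination of ITrackingConsentFeature and the response cookie collection, not a real ASP.NET Core type.

```csharp
using System.Collections.Generic;

// Stand-in modeling the behavior described above: non-essential cookies are
// silently dropped unless the user has consented to tracking.
public class ConsentGate
{
    // Mirrors ITrackingConsentFeature.CanTrack.
    public bool CanTrack { get; set; }

    public Dictionary<string, string> ResponseCookies { get; } =
        new Dictionary<string, string>();

    public void Append(string name, string value, bool isEssential)
    {
        if (isEssential || CanTrack)
            ResponseCookies[name] = value;
        // else: dropped silently, just as the framework drops non-essential
        // cookies for users who have not consented
    }
}
```

The essential/non-essential split means authentication and antiforgery cookies keep working for everyone, while tracking cookies only appear once consent is granted.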

You can provide feedback on the cookie consent tracking feature at https://github.com/aspnet/Security/issues.

Data Control

The GDPR gives users the right to examine the data your application holds on them, to edit that data, and to delete it entirely from your application. Obviously, we cannot know what data you have, where it lives, or how it’s all linked together. What we do know is what personal data a default ASP.NET Core Identity application holds and how to delete Identity users, so we can give you a starting point. When you create an ASP.NET Core application with Individual Authentication and the data stored in-app, you might notice two new options in the user profile page: Download and Delete.

Default Data Control actions

Download takes its data from ASP.NET Core Identity and creates a JSON file for download. Delete does as you’d expect: it deletes the user. You will probably have extended the Identity models or added new tables to your database that use a user’s identity as a foreign key, so you will need to customize both of these functions to match your own data structure and your own GDPR requirements. To do this, you’ll need to override the view for each of these functions.

If you look at the code created in your application, you will see that a lot of the old template code has vanished; this is because of the new “Identity UI as a library” feature. To override the functionality, you need to manually create the view as it would appear if ASP.NET Identity’s UI were not bundled into a library. For now, until tooling arrives, this is a manual process. The Download capability is contained in DownloadPersonalData.cshtml.cs and the Delete capability is in DeletePersonalData.cshtml.cs. You can see each of these files in the Identity UI GitHub repository. For example, to override the data in the download page you must create an Account folder under Areas\Identity\Pages, then a Manage folder under the Account folder, and finally a DownloadPersonalData.cshtml and an associated DownloadPersonalData.cshtml.cs.

For the .cshtml file you can take the source from GitHub as a starting point, then add your own namespace, a using statement for Microsoft.AspNetCore.Identity.UI.Pages.Account.Manage.Internal, and the instruction to wire up MVC Core Tag Helpers. For example, if the application namespace is WebApplication21Auth, the .cshtml file would look like this:
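The original post showed this file as an image. The sketch below reconstructs it only from the elements the paragraph lists (the page directive, the using statement, the tag helper registration, and the application namespace WebApplication21Auth); the body markup is illustrative, not the exact original:

```cshtml
@page
@using Microsoft.AspNetCore.Identity.UI.Pages.Account.Manage.Internal
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers
@namespace WebApplication21Auth.Areas.Identity.Pages.Account.Manage
@model DownloadPersonalDataModel

<h4>Download your data</h4>
<form id="download-data" method="post">
    <button class="btn btn-default" type="submit">Download</button>
</form>
```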

Then for the corresponding .cs file you can take the default implementation from the source as a starting point for the OnPost implementation, so your version might look like the following:
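The real page model gathers the user’s data and returns it as a downloadable JSON file. Its core step, collecting every property marked with Identity’s [PersonalData] attribute via reflection, can be sketched in plain C#; the attribute and user class below are stdlib stand-ins for the real types in the Identity packages:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Stand-in for Identity's [PersonalData] attribute; it marks the properties
// that should be included in the personal data download.
[AttributeUsage(AttributeTargets.Property)]
public class PersonalDataAttribute : Attribute { }

public class ApplicationUser
{
    [PersonalData] public string Name { get; set; }
    [PersonalData] public int Age { get; set; }
    public string PasswordHash { get; set; } // unmarked: excluded from the download
}

public static class PersonalDataCollector
{
    // Reflect over the user type and keep only [PersonalData] properties.
    public static Dictionary<string, string> Collect(ApplicationUser user) =>
        typeof(ApplicationUser).GetProperties()
            .Where(p => Attribute.IsDefined(p, typeof(PersonalDataAttribute)))
            .ToDictionary(p => p.Name, p => p.GetValue(user)?.ToString() ?? "null");
}
```

In the actual handler, a dictionary like this is serialized to JSON and returned as a FileContentResult with a Content-Disposition: attachment header so the browser downloads it.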

You can give feedback on the data control features of Identity at https://github.com/aspnet/Identity/issues.

Conclusion

These features should put you in a good starting position for the GDPR, but remember that the GDPR places many more requirements on your company and application than just the features we provide, including protection of data at rest, risk assessments and management, data breach reporting, and so on. You should consult a GDPR specialist to see what implications the regulation has for your company.

ASP.NET Core 2.1.0-preview1: Functional testing of MVC applications


For ASP.NET Core 2.1 we have created a new package, Microsoft.AspNetCore.Mvc.Testing, to help streamline in-memory end-to-end testing of MVC applications using TestServer.

This package takes care of some of the typical pitfalls when trying to test MVC applications using TestServer.

  • It copies the .deps file from your project into the test assembly bin folder.
  • It sets the content root to the application's project root so that static files and views can be found.
  • It provides a class WebApplicationTestFixture<TStartup> that streamlines the bootstrapping of your app on TestServer.

Create a test project

To try out the new MVC test fixture, let's create an app and write an end-to-end in-memory test for the app.

First, create an app to test.

dotnet new razor -au Individual -o TestingMvc/src/TestingMvc

Add an xUnit based test project.

dotnet new xunit -o TestingMvc/test/TestingMvc.Tests

Create a solution file and add the projects to the solution.

cd TestingMvc
dotnet new sln
dotnet sln add src/TestingMvc/TestingMvc.csproj
dotnet sln add test/TestingMvc.Tests/TestingMvc.Tests.csproj

Add a reference from the test project to the app we're going to test.

dotnet add test/TestingMvc.Tests/TestingMvc.Tests.csproj reference src/TestingMvc/TestingMvc.csproj

Add a reference to the Microsoft.AspNetCore.Mvc.Testing package.

dotnet add test/TestingMvc.Tests/TestingMvc.Tests.csproj package Microsoft.AspNetCore.Mvc.Testing -v 2.1.0-preview1-final

In the test project create a test using the WebApplicationTestFixture<TStartup> class that retrieves the home page for the app. The test fixture sets up an HttpClient for you that allows you to invoke your app in-memory.

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

namespace TestingMvc.Tests
{
    public class TestingMvcFunctionalTests : IClassFixture<WebApplicationTestFixture<Startup>>
    {
        public TestingMvcFunctionalTests(WebApplicationTestFixture<Startup> fixture)
        {
            Client = fixture.Client;
        }

        public HttpClient Client { get; }

        [Fact]
        public async Task GetHomePage()
        {
            // Arrange & Act
            var response = await Client.GetAsync("/");

            // Assert
            Assert.Equal(HttpStatusCode.OK, response.StatusCode);
        }
    }
}

To correctly invoke your app, the test fixture tries to find a static method on the entry point class (typically Program) of the assembly containing the Startup class with the following signature:

public static IWebHostBuilder CreateWebHostBuilder(string[] args)

Fortunately, the built-in project templates are already set up this way:

namespace TestingMvc
{
    public class Program
    {
        public static void Main(string[] args)
        {
            CreateWebHostBuilder(args).Build().Run();
        }

        public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .UseStartup<Startup>();
    }
}

If you don't have the Program.CreateWebHostBuilder method, the test fixture won't be able to initialize your app correctly for testing. Instead, you can configure the WebHostBuilder yourself by overriding CreateWebHostBuilder on WebApplicationTestFixture<TStartup>.

Specifying the app content root

The test fixture will also attempt to guess the content root of the app under test. By convention the test fixture assumes the app content root is at <<SolutionFolder>>/<<AppAssemblyName>>. For example, based on the folder structure defined below, the content root of the application is defined as /work/MyApp.

/work
    /MyApp.sln
    /MyApp/MyApp.csproj
    /MyApp.Tests/MyApp.Tests.csproj

Because we are using a different layout for our projects, we need to inherit from WebApplicationTestFixture<TStartup> and pass the relative path from the solution to the app under test when calling the base constructor. In a future preview we plan to make configuration of the content root unnecessary, but for now this explicit configuration is required for our solution layout.

public class TestingMvcTestFixture<TStartup> : WebApplicationTestFixture<TStartup> where TStartup : class
{
    public TestingMvcTestFixture()
        : base("src/TestingMvc") { }
}

Update the test class to use the derived test fixture.

public class TestingMvcFunctionalTests : IClassFixture<TestingMvcTestFixture<Startup>>
{
    public TestingMvcFunctionalTests(TestingMvcTestFixture<Startup> fixture)
    {
        Client = fixture.Client;
    }

    public HttpClient Client { get; }

    [Fact]
    public async Task GetHomePage()
    {
        // Arrange & Act
        var response = await Client.GetAsync("/");

        // Assert
        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}

For some end-to-end in-memory tests to work properly, shadow copying needs to be disabled in your test framework of choice, as it causes the tests to execute in a different folder than the output folder. For instructions on how to do this with xUnit see https://xunit.github.io/docs/configuring-with-json.html.
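With xUnit, for example, this amounts to one setting in an xunit.runner.json file placed in the test project (and copied to the build output). The shadowCopy key is part of xUnit's documented runner configuration schema:

```json
{
  "shadowCopy": false
}
```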

Run the test

Run the test by running dotnet test from the TestingMvc.Tests project directory. It should fail, because the HTTP response is a temporary redirect instead of a 200 OK. This is because the app has HTTPS redirection middleware in its pipeline (see Improvements for using HTTPS) and the base address set up by the test fixture is an HTTP address ("http://localhost"). The HttpClient by default doesn't follow these redirects. In a future preview we will update the test fixture to configure the HttpClient to follow redirects and also handle cookies. But at least now we know the test is successfully running the app's pipeline.

This test was intended to make a simple GET request to the app's home page, not to test the HTTPS redirect logic, so let's reconfigure the HttpClient to use an HTTPS base address instead.

public TestingMvcFunctionalTests(TestingMvcTestFixture<Startup> fixture)
{
    Client = fixture.Client;
    Client.BaseAddress = new Uri("https://localhost");
}

Rerun the test and it should now pass.

Starting test execution, please wait...
[xUnit.net 00:00:01.1767971]   Discovering: TestingMvc.Tests
[xUnit.net 00:00:01.2466823]   Discovered:  TestingMvc.Tests
[xUnit.net 00:00:01.2543165]   Starting:    TestingMvc.Tests
[xUnit.net 00:00:09.3860248]   Finished:    TestingMvc.Tests

Total tests: 1. Passed: 1. Failed: 0. Skipped: 0.
Test Run Successful.

Summary

We hope the new MVC test fixture in ASP.NET Core 2.1 will make it easier to reliably test your MVC applications. Please give it a try and let us know what you think on GitHub.

Announcing Preview 1 of ASP.NET MVC 5.2.5, Web API 5.2.5, and Web Pages 3.2.5


Today we released Preview 1 of ASP.NET MVC 5.2.5, Web API 5.2.5, and Web Pages 3.2.5 on NuGet. This is a patch release that contains only bug fixes. You can find the full list of bug fixes for this release in the release notes.

To update an existing project to use this preview release run the following commands from the NuGet Package Manager Console for each of the packages you wish to update:

Install-Package Microsoft.AspNet.Mvc -Version 5.2.5-preview1
Install-Package Microsoft.AspNet.WebApi -Version 5.2.5-preview1
Install-Package Microsoft.AspNet.WebPages -Version 3.2.5-preview1

Please try out Preview 1 of ASP.NET MVC 5.2.5, Web API 5.2.5, and Web Pages 3.2.5 and let us know what you think. Any feedback can be submitted as issues on GitHub. Assuming everything with this preview goes smoothly, we expect to ship a stable release of these packages in a few weeks.

Enjoy!

ASP.NET Core manageability and Application Insights improvements


There are many great investments on the ASP.NET Core 2.1 roadmap. These investments make ASP.NET Core applications easier to write, host, and test, and help make them secure and standards compliant. This blog post talks about the areas of investment in the manageability and monitoring space. It covers ASP.NET Core, .NET, and Application Insights SDK for ASP.NET Core features, and spans beyond the 2.1 milestone.

The main themes of manageability improvements across the application stack are:

  1. Distributed tracing
  2. Cross platform features parity
  3. Runtime awareness
  4. Ease of enablement
  5. App framework self-reporting

Let’s dig into the improvements made and the roadmap ahead in these areas.

Distributed tracing

ASP.NET Core 2.0 applications are distributed-tracing aware. The context required to track a distributed trace is automatically created or read from incoming HTTP requests and forwarded along with any outgoing out-of-process calls. Collecting distributed trace details does NOT require application code changes: there is no need to register a middleware or install an agent. You can check out the preview of the end-to-end trace view, as shown in the picture below, and more distributed tracing scenarios in Azure Application Insights.

ASP.NET Core 2.0 shipped with support for monitoring incoming HTTP requests and outgoing HttpClient requests. Recently, support was extended to outgoing calls made via SqlClient for .NET Core, the Azure Event Hubs SDK, and the Azure Service Bus SDK. These libraries are instrumented with DiagnosticSource callbacks, which makes distributed tracing easy to consume by any APM or diagnostics tool. More libraries plan to add DiagnosticSource support so they can participate in distributed traces.

The Application Insights SDK for ASP.NET Core 2.2.1 shipped recently. It now automatically collects outgoing calls made using the libraries mentioned above.

We are also working with the community to standardize distributed tracing protocols. An accepted standard enables even wider adoption of distributed tracing. It also simplifies mixing components written in different languages, as well as serverless cloud components, in a single microservice environment. Our hope is that this standard will be in place for adoption by the next version of ASP.NET Core.

Cross platform features parity

ASP.NET Core applications may target one of two .NET implementations, .NET Framework or .NET Core, and they can run on Windows and Linux. Many efforts are directed at bringing feature parity across these runtime environments.

There are framework investments for better manageability of ASP.NET Core applications across runtime environments. For instance, the System.Diagnostics.PerformanceCounter package was recently released. It allows applications to collect performance counters from .NET Core applications running on Windows; previously, this package was only available for apps compiled for the .NET Framework.

Low-level manageability interfaces, like the Profiling API, are also getting to feature parity across the various runtime platforms.

Recently, more Application Insights features were ported from the .NET Framework version to .NET Core. Application Insights SDK for ASP.NET Core version 2.2.1 has live metrics support, a hardened telemetry channel with more reliable data upload, and an adaptive sampling feature to enable better control of telemetry volume and price.

We are excited to announce the public preview of the Application Insights Profiler for ASP.NET Core Linux web apps. Learn more at the documentation page Profile ASP.NET Core Azure Linux Web Apps with Application Insights Profiler.

Runtime awareness

The variety of runtime platforms makes the job of monitoring tools harder, so the Application Insights SDK needs to be runtime aware. The team is making investments to natively understand platforms like Azure Web Apps or containers run by Kubernetes.

The ability to associate infrastructure telemetry with application telemetry is important. Correlating container CPU and the number of running instances with the request load and reliability of an application gives you a full picture of application behavior. It lets you find the root cause of a problem faster and apply remediations tailored to the runtime environment.

Ease of enablement

When it comes to manageability and diagnostics, the last thing you want to do is redeploy an application to enable additional data collection, especially when the application is running in production. The teams are making a set of investments to simplify the enablement of manageability, monitoring, and diagnostics settings.

Snapshot Debugger will be enabled by default for the ASP.NET Core applications running as Azure Web App.

Another aspect of easier onboarding is ironing out the Application Insights SDK configuration story. Today, Application Insights predefines many monitoring settings. Those settings work great for the majority of applications; however, changing them is not always easy or intuitive when needed.

ASP.NET Core has many built-in self-reporting capabilities. Exposing them in a form that is easy to consume across runtime platforms is one of the goals of the .NET team. There is a proposal to expose many of the manageability settings and monitoring data via HTTP callbacks.

App framework self-reporting

The ASP.NET framework improves manageability by exposing more internal app metrics. As a result of this discussion, these metrics are exposed in a platform-independent way via EventCounters. Metrics exposed via EventCounters are available for both in-process and out-of-process consumption.
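As a concrete illustration of the EventCounters mechanism, a component can self-report a metric through an EventSource. This is a minimal stdlib sketch using System.Diagnostics.Tracing; the source name and counter name here are illustrative, not the ones ASP.NET Core uses:

```csharp
using System.Diagnostics.Tracing;

// An EventSource that self-reports a request-duration metric via an EventCounter.
// Listeners (in-process EventListener or out-of-process ETW/EventPipe tooling)
// receive aggregated statistics for the counter.
[EventSource(Name = "Demo-AppMetrics")]
public sealed class AppMetricsEventSource : EventSource
{
    public static readonly AppMetricsEventSource Log = new AppMetricsEventSource();

    private readonly EventCounter _requestDuration;

    private AppMetricsEventSource()
    {
        _requestDuration = new EventCounter("request-duration-ms", this);
    }

    // Call once per request; the counter aggregates the reported values.
    public void ReportRequestDuration(float milliseconds) =>
        _requestDuration.WriteMetric(milliseconds);
}
```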

Stack traces are another example of a great improvement made in .NET for better manageability and monitoring: they became much more readable in .NET Core 2.1. This blog post outlines a few improvements made recently in this area.

Summary

There are many new manageability and monitoring features coming up. Some of them are committed, some are planned, and some are just proposals. You can help prioritize features by commenting on GitHub for ASP.NET, Application Insights, and .NET. You can also get live updates and participate in the conversation by watching the weekly ASP.NET Community Standup at https://live.asp.net. Your feedback is welcome and appreciated!
