
Getting Started with Windows Containers


Containers provide a way of running an application in a controlled environment, isolated from other applications running on the machine and from the underlying infrastructure. They are a cost-effective way of abstracting away the machine, ensuring that the application runs under the same conditions from development, to test, to production.

Containers originated on Linux as an OS-level virtualization method that creates the perception of a fully isolated and independent OS without requiring a full virtual machine. People have been using Linux containers for a while. Docker greatly simplified containerization on Linux by offering a set of tools that make it easy to create, deploy, and run applications using containers.

Windows Server implements the same container technology, and the Docker APIs and tool-set have been extended to support Windows Containers, offering developers who use Docker on Linux the same experience on Windows Server.

There are two kinds of container images available: Windows Server Core and Nano Server. Nano Server is lightweight and supports only x64 apps. The Windows Server Core image is larger and has more capabilities; it allows running “full” .NET Framework apps, such as ASP.NET applications, in containers. Its higher compatibility makes it more suitable as a first step in transitioning to containers. ASP.NET Core apps on .NET Core can run on both Nano Server and Server Core, but are better suited to Nano Server because of its smaller size.

The following steps show how to get started on running ASP.NET Core and ASP.NET applications on Windows containers.

Prerequisites:

Install Docker

Install Docker for Windows – Stable channel

After installing Docker, you need to log out of Windows and log back in; Docker may prompt for that. After logging in again, Docker starts automatically.

Switch Docker to use Windows Containers

By default, Docker is set to use Linux containers. Right-click on the docker tray icon and select “Switch to Windows Containers”.

Switch to Windows Containers

Running docker version will show that the server OS/arch has changed to Windows after Docker is switched to Windows containers.

Docker version before switching to Windows containers

Docker version after switching to Windows containers

Set up an ASP.NET or ASP.NET Core application to run in containers

ASP.NET as well as ASP.NET Core applications can be run in containers. As mentioned above, there are two kinds of container images available for Windows: Nano Server and Server Core containers. ASP.NET Core apps are lightweight enough that they can run in Nano Server containers. ASP.NET apps need more capabilities and require Server Core containers.

The following walkthrough shows the steps needed to run an ASP.NET Core and an ASP.NET application in a Windows Container. To start, create an ASP.NET or ASP.NET Core Web application, or use an existing one.

Note: ASP.NET Core applications developed in Visual Studio can have Docker support automatically added using Visual Studio Tools for Docker. Until recently, Visual Studio Tools for Docker only supported Linux Docker scenarios, but in Visual Studio 2017 version 15.3, support has been added for containerizing ASP.NET Core apps as Windows Nano images. Docker support with Windows Nano Server can be added at project creation time by checking the “Enable Docker Support” checkbox and selecting Windows in the OS dropdown, or it can be added later on by right-clicking on the project in Solution Explorer, then Add -> Docker Support.

This tutorial assumes that “Docker Support” was not checked when the project was created in Visual Studio, so that the whole process of adding Docker support manually can be explained.

Publish the App

The first step is to gather in one folder all the artifacts the application needs to run in the container. This can be done with the publish command. For ASP.NET Core, run the following command from the project directory, which publishes the app with the Release configuration to a folder; here it is named PublishOutput.

dotnet publish -c Release -o PublishOutput

dotnet Publish Output
Or use the Visual Studio UI to publish to a folder (for ASP.NET or ASP.NET Core).

Publish with Visual Studio

Create the Dockerfile

To build a container image, Docker requires a file with the name “Dockerfile” which contains all the commands, in order, to build a given image. Docker Hub contains base images for ASP.NET and ASP.NET Core.

Create a Dockerfile with the content shown below and place it in the project folder.

Dockerfile for ASP.NET Core application (use microsoft/aspnetcore base image)

FROM microsoft/aspnetcore:1.1
COPY ./PublishOutput/ ./
ENTRYPOINT ["dotnet", "myaspnetcorewebapp.dll"]

The instruction FROM microsoft/aspnetcore:1.1 gets the microsoft/aspnetcore image with tag 1.1 from Docker Hub. The tag is multi-arch, meaning that Docker figures out whether to use the Linux or Nano Server container image depending on which container mode is set. You can also use a specific tag of the image: FROM microsoft/aspnetcore:1.1.2-nanoserver
The next instruction copies the content of the PublishOutput folder into the destination container, and the last one uses the ENTRYPOINT instruction to configure the container to run an executable: the first argument to ENTRYPOINT is the executable name, and the second one is the argument passed to the executable.

If you publish to a different location, you need to edit the Dockerfile. To avoid this, you can copy the contents of the current folder into the destination container, as in the Dockerfile below; in that case, the Dockerfile needs to be added to the published output.

FROM microsoft/aspnetcore:1.1
COPY . .
ENTRYPOINT ["dotnet", "myaspnetcorewebapp.dll"]

Dockerfile for ASP.NET application (use microsoft/aspnet base image)

FROM microsoft/aspnet
COPY ./PublishOutput/ /inetpub/wwwroot

An entry point does not need to be specified in the ASP.NET dockerfile, because the entry point is IIS, and this is configured in the microsoft/aspnet base image.

Build the image

Run the docker build command in the project directory to create the container image for the ASP.NET Core app.

docker build -t myaspnetcoreapp .

Build Your Application Image

The -t argument tags the image with a name. Running the docker build command will pull the ASP.NET Core base image from Docker Hub. Docker images consist of multiple layers; in the example above, ten layers make up the ASP.NET Core image.

The docker build command for ASP.NET will take significantly longer compared with ASP.NET Core, because the images that need to be downloaded are larger. If the image was previously downloaded, docker will use the cached image.

After the container image is created, you can run docker images to display the list and size of the container images that exist on the machine. The following is the image for the ASP.NET (Full Framework):

ASP.NET Full Framework Image

And this is the image for the ASP.NET Core:

ASP.NET Core Image

Note in the images above the differences in size for the ASP.NET vs ASP.NET Core containers: the image size for the ASP.NET container is 11.6GB, and the image size for the ASP.NET Core container is about ten times smaller.

Run the container

The command docker run will run the application in the container:

docker run -d -p 80:80 myaspnetcoreapp

Docker Run Results

The -d argument tells Docker to start the image in detached mode (disconnected from the current shell).

The -p argument maps the container port to the host port.

The ASP.NET app does not need the -p argument when running because the microsoft/aspnet image has already configured the container to listen on port 80 and expose it.

The docker ps command shows the running containers:

Docker ps Results

To give the running container a name and avoid getting an automatically assigned one, use the --name argument with the run command:

docker run -d --name myapp myaspnetcoreapp

This name can be used instead of the container ID in most docker commands.

View the web page running in a browser

Due to a bug that affects the way Windows talks to containers via NAT (https://github.com/Microsoft/Virtualization-Documentation/issues/181#issuecomment-252671828) you cannot access the app by browsing to http://localhost. To work around this issue, the internal IP address of the container must be used.

The address of the running Windows container can be obtained with:

docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" <first chars of HASH>

Docker Inspect Results

Where HASH is the container ID; the name of the running container can be used instead.

Then type the URL returned into your browser, e.g. http://172.25.199.213:80, and you will see the site running.

Note that the limitation mentioned above only applies when accessing the container from localhost. Users on other machines, or other VMs or containers running on the host, can access the container using the host’s IP and port.

Wrap up

The steps above show a simple approach for adding docker support for ASP.NET Core and ASP.NET applications.

For ASP.NET Core, in addition to the base images that help build the Docker container which runs the application, there are Docker images available that help compile/publish the application inside the container, so the compile/publish steps can be moved inside the Dockerfile. The Dockerfile can use several base images, each in a different stage of execution. This is known as a “multi-stage” build. A multi-stage build for ASP.NET Core uses the base image microsoft/aspnetcore-build, as in this GitHub sample: https://github.com/dotnet/dotnet-docker-samples/blob/master/aspnetapp/Dockerfile
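
A multi-stage Dockerfile along those lines might look like the following sketch (the assembly name is carried over from the example above; adjust it to your project):

# Build stage: restore and publish inside the container
FROM microsoft/aspnetcore-build:1.1 AS build
WORKDIR /src
COPY . .
RUN dotnet restore
RUN dotnet publish -c Release -o /app

# Runtime stage: copy only the published output into the smaller runtime image
FROM microsoft/aspnetcore:1.1
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "myaspnetcorewebapp.dll"]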

Resources to help getting started with Windows containers:


Welcome to the New Blog Template for ASP.NET Developers


By Juliet Daniel, Lucas Isaza, and Uma Lakshminarayan

Have you always wanted to build a blog or other web application but haven’t had the time or educational resources to learn? With our blog template, available in our GitHub repo, you can create your web application fast and effortlessly, and even learn to master the new Razor Pages architecture along the way.

This blog post will explore how to use Razor Pages features and best practices and walk through the blog template code that we wrote.

This summer we had the awesome opportunity to be part of Microsoft’s Explore Program, a 12-week internship for rising college sophomores and juniors to learn more about software development and program management. As interns on the Visual Studio Web Tools team, our task was to create a web application template as a pilot for a set of templates showcasing new features and best practices in Razor Pages, the latest ASP.NET Core coding paradigm. We decided to build a blog template because of our familiarity with writing and reading blogs and because we believe that many developers would want a shortcut to build a personal or professional blog.

In our first week, the three of us all acted as Program Managers (PM) to prioritize features. Along with researching topics in web development, we had fun playing with different blog engines to help us brainstorm features for our project. After that, every three weeks we rotated between the PM and developer roles, with one of us acting as PM and the other two as developers. Working together, we’ve built a tool that we believe will inspire developers to build more web applications with Microsoft’s technologies and to contribute to the ASP.NET open source movement.

Introduction

This blog template is a tool to help developers quickly build a blog or similar web application. This blog template also serves as an example that shows how to build a web app from ASP.NET Core using the new Razor Pages architecture. Razor Pages effectively streamlines building a web application by associating HTML pages with C# code, rather than compartmentalizing a project into the Model-View-Controller pattern.

We believe that a blog template appeals to a broad audience of developers while also showcasing a variety of unique and handy features. The basic structure of the template is useful for developers interested in building an application beyond blogs, such as an ecommerce, photo gallery, or personal web site. All three alternatives are simply variations of a blog with authentication.

You can find our more detailed talk on the ASP.NET Community Standup about writing the blog template with code reviews and demos here. You can also access our live demo at https://venusblog.azurewebsites.net/ (Username: webinterns@microsoft.com, Password: Password.1).

Background

This template was designed to help Visual Studio users create new web applications fast and effortlessly. The various features built in the template make it a useful tool for developers:

  • Data is currently stored using XML files. This was an early design decision made to allow users on other blogs to move their data to this template smoothly.

    The usage of LINQ (Language Integrated Query) enables the developer to query items from the blog from a variety of sources, such as databases, XML documents (currently in use), and in-memory objects, without having to redesign or learn how elements are queried from a specific source (see the sketch after this list).
  • The blog is built on Razor Pages from ASP.NET Core. The image below showcases the organization of the file structure that Razor Pages uses. Each view contains a corresponding Model in a C# file. Adding another Razor Page to your project is as simple as adding a new item to the Pages folder and choosing the Razor Page with model type.
  • The template includes a user authentication feature, implemented using the new ASP.NET Identity Library. This tool allows the owner of the blog to be the single user registered and in control of the blog. Identity also provided us with a tested and secure way to create and protect user profiles.
    We were able to use this library to implement login, registration, password recovery, and other user management features. To enable identity, we simply included it in the startup file and added the corresponding pages (with their models).
  • Customizing the theme is fast and flexible with the use of Bootstrap. Simply download a Bootstrap theme.min.css file and add it to the CSS folder in your project (wwwroot > css). You can find free or paid Bootstrap themes at websites such as bootswatch.com. You can delete our default theme file, journal-bootstrap.min.css, to remove the default theming. Run your project, and you’ll see that the style of your blog has changed instantly.
  • Entity Framework provides an environment that makes it easy to work with relational data. In our scenario, that data comes in the form of blog posts and comments for each post.
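
As an illustration of the LINQ point above, a query over an XML post store might look something like this sketch (the file and element names are hypothetical, not the template’s actual identifiers):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

public static class PostQueries
{
    // Hypothetical example: return the titles of posts published in the last 30 days
    public static IEnumerable<string> RecentPostTitles(string xmlPath)
    {
        XDocument doc = XDocument.Load(xmlPath);
        return doc.Descendants("Post")
            .Where(p => (DateTime)p.Element("PublishedDate") > DateTime.Now.AddDays(-30))
            .OrderByDescending(p => (DateTime)p.Element("PublishedDate"))
            .Select(p => (string)p.Element("Title"))
            .ToList();
    }
}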

Using the Template

Creating an Instance of the Template (Your Own Blog)

There are two options for instantiating a template. You can use dotnet new, included with the dotnet CLI; however, the current version contains minor bugs that will be fixed soon. Alternatively, you can get the newest templating code with the following steps. Click the green “Clone or download” button. Copy the link in the dropdown that appears. Open a command prompt and change directories to where you want to install the templating repo.

In the desired directory, enter the command:

git clone <link you copied earlier>

This will pull all the dotnet/templating code and put it in a directory named “templating”. Now change to the templating directory and switch branches to “rel/2.0.0-servicing” by running:

git checkout rel/2.0.0-servicing

Then run the command “setup”.

  • Note: If you get errors about not being able to run scripts, close your command window. Then open a powershell window as administrator and run the command “Set-ExecutionPolicy Unrestricted”. Close the powershell window, then open a new command prompt and go back to the templating directory and run setup again.

Once the setup runs correctly, you should be able to run the command “dotnet new3”. If you are just using the dotnet CLI, you can replace “dotnet new3” with “dotnet new” for the rest of the steps. Install your blog template with the command:

dotnet new3 -i [path to blog template source]

This path will be the root directory of your blog template repository.
Now you can create an instance of the template by running:

dotnet new3 blog -o [directory you want to create the instance in] -n [name for the instance]

For example:

dotnet new3 blog -o c:\temp\TestBlog\ -n "My Blog"

Reflection

We hope that our project encourages developers to build more web applications with Microsoft’s technologies and have fun doing so. Personally, we’ve learned a lot about web development and Razor Pages through developing this project. We’ve also developed useful skills to move forward in our careers. For example, we really enjoyed learning to brainstorm and prioritize features, which turned out to be a much more complicated process than any of us had expected. Sprint planning and time estimation also proved to be a tricky task. Sometimes it was hard to predict how much time it would take to implement certain features, but as we became more familiar with our project and our team’s engineering processes this became much easier.

Reaching out to the right people turned out to be a key ingredient to accelerating our development process and making sure we were building in the right direction. Once we began meeting with people outside of our assigned team, we realized almost immediately that it was a great way to get feedback on our project. We also began to look for the right people to ask our questions so the development of our project progressed even faster. Most importantly, we really appreciate how helpful and communicative our manager, Barry, and our mentors, Jimmy and Mads, were throughout the internship. They took time out of their busy schedules to help us and give us insightful career advice.

Juliet Daniel is a junior at Stanford studying Management Science & Engineering. In her free time, she enjoys biking, running, hiking, foodspotting, and playing music. She keeps a travel blog at juliets-journey.weebly.com.

Lucas Isaza is a junior at Stanford studying Economics and Applied Statistics. He enjoys playing basketball and lacrosse, exploring new restaurants in the area, and hanging out with friends.

Uma Lakshminarayan is a junior at UCLA studying Computer Science. She enjoys cooking and eating vegetarian foods, taking walks with friends, and discovering new music. You will usually find her singing or listening to music.

Announcing SignalR for ASP.NET Core 2.0


Today we are glad to announce an alpha release of SignalR for ASP.NET Core 2.0. This is the first official release of a new SignalR that is compatible with ASP.NET Core. It consists of a server component, a .NET client targeting .NET Standard 2.0 and a JavaScript/TypeScript client.

What’s New?

SignalR for ASP.NET Core is a rewrite of the original SignalR. We looked at common SignalR usage patterns and issues that users face today and decided that rewriting SignalR is the right choice. The new SignalR is simpler, more reliable, and easier to use. Despite these underlying changes, we’ve worked to ensure that the user-facing APIs are very similar to previous versions.

JavaScript/TypeScript Client

SignalR for ASP.NET Core has a brand-new JavaScript client. The new client is written in TypeScript and no longer depends on jQuery. The client can also be used from Node.js with a few additional dependencies.

The client is distributed as an npm module that contains the Node.js version of the client (usable via require), as well as a version for use in the browser which can be included using a <script> tag. TypeScript declarations for the client included in the module make it easy to consume the client from TypeScript applications.

The JavaScript client runs on the latest Chrome, Firefox, Edge, Safari, and Opera browsers, as well as Internet Explorer 9, 10, and 11 (not all transports are compatible with every browser). Internet Explorer 8 and below are not supported.

Support for Binary Protocols

SignalR for ASP.NET Core offers two built-in hub protocols – a text protocol based on JSON and a binary protocol based on MessagePack. Messages using the MessagePack protocol are typically smaller than messages using the JSON protocol. For example, a hub method returning the integer value 1 takes 43 bytes with the JSON-based protocol but only 16 bytes with MessagePack. (Note: the difference in size may vary depending on the message type, the contents of the message, and the transport used – binary messages sent over the Server-Sent Events transport are base64-encoded, since Server-Sent Events is a text transport.)

Support for Custom Protocols

The SignalR hub protocol is documented on GitHub and now has extension points that make it possible to plug in custom implementations.

Streaming

It is now possible to stream data from the server to the client. Unlike a regular Hub method invocation, streaming means the server is able to send results to the client before the invocation completes.

Using SignalR with Bare Websockets

The process of connecting to SignalR has been simplified to the point where, when using WebSockets, it is now possible to connect to the server with a single request and without any SignalR client.

Simplified Scale-Out Model

Unfortunately, when it comes to scaling out applications there is no “one size fits all” model – each application is different and has different requirements that need to be considered when scaling out the application. We have worked to improve, and simplify, the scale-out model and are providing a Redis based scale-out component in this Alpha. Support for other providers is being evaluated for the final release, for example service bus.

What’s Changed?

We added a number of new features to SignalR for ASP.NET Core but we also decided to remove support for some of the existing features or change how they work. One of the consequences of this is that SignalR for ASP.NET Core is not compatible with previous versions of SignalR. This means that you cannot use the old server with the new clients or the old clients with the new server. Below are the features which have been removed or changed in the new version of SignalR.

Simplified Connection Model

In the existing version of SignalR the client would try starting a connection to the server, and if it failed it would try using a different transport. The client would fail starting the connection when it could not connect to the server with any of the available transports. This feature is no longer supported with the new SignalR.

Another feature that is no longer supported is automatic reconnects. Previously, SignalR would try to reconnect to the server if the connection was dropped. Now, if the client is disconnected, the user must explicitly start a new connection to reconnect. Note that this could be required even before – the client would stop its reconnect attempts if it could not reconnect successfully within the reconnect timeout. Another reason to remove automatic reconnects was the very high cost of storing messages sent to clients: the server would, by default, remember the last 1000 messages sent to a client so that it could replay the messages the client missed while offline. Since each connection had its own buffer, the memory footprint of storing these messages was very high.

Sticky Sessions Are Now Required

Because of how scale-out worked in the previous versions of SignalR, clients could reconnect and/or send messages to any server in the farm. Due to changes to the scale-out model, as well as not supporting reconnects, this is no longer supported. Now, once the client connects to the server it needs to interact with this server for the duration of the connection.

Single Hub per Connection

The new version of SignalR does not support having more than one Hub per connection. This results in a simplified client API, and makes it easier to apply Authentication policies and other Middleware to Hub connections. In addition subscribing to hub methods before the connection starts is no longer required.

Other Changes

The ability to pass arbitrary state between clients and the Hub (a.k.a. HubState) has been removed as well as the support for Progress messages. We also don’t create a counterpart of hub proxies at the moment.

Getting Started

Setting up SignalR is relatively easy. After you create an ASP.NET Core application, you need to add a reference to the Microsoft.AspNetCore.SignalR package, like this:
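
(The version below is illustrative of the alpha packages; check NuGet for the current one.)

<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.SignalR" Version="1.0.0-alpha1-final" />
</ItemGroup>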

and a hub class:
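
For example, a minimal chat hub (a sketch using the alpha API):

using Microsoft.AspNetCore.SignalR;
using System.Threading.Tasks;

public class Chat : Hub
{
    // Invokes the "Send" client method on every connected client
    public Task Send(string message)
    {
        return Clients.All.InvokeAsync("Send", message);
    }
}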

This hub contains a method which once invoked will invoke the Send method on each connected client.

After adding a Hub class you need to configure the server to pass requests sent to the chat endpoint to SignalR:
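
A sketch of the Startup wiring with the alpha API:

public void ConfigureServices(IServiceCollection services)
{
    services.AddSignalR();
}

public void Configure(IApplicationBuilder app)
{
    app.UseSignalR(routes =>
    {
        // Requests to /chat are handled by the Chat hub
        routes.MapHub<Chat>("chat");
    });
}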

Once you set up the server you can invoke hub methods from the client and receive invocations from the server. To use the JavaScript client in a browser you need to install the signalr-client npm module first using the following command:
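
(The package name below is the one used for the alpha releases:)

npm install @aspnet/signalr-client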

then copy the signalr-client.js to your script folder and include on your page using the <script> tag:
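
(The path and file name below are simplified; the alpha builds include the version in the file name:)

<script src="scripts/signalr-client.js"></script>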

After you include the script you can start the connection and interact with the server like this:
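
A sketch using the alpha JavaScript API:

const connection = new signalR.HubConnection('/chat');

// Runs whenever the server invokes the "Send" client method
connection.on('Send', message => {
    console.log('Received: ' + message);
});

// Start the connection, then invoke the hub's Send method
connection.start()
    .then(() => connection.invoke('Send', 'Hello from the browser'));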

To use the SignalR managed client you need to add a reference to the Microsoft.AspNetCore.SignalR.Client package:
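
(Again, the version shown is illustrative:)

<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.SignalR.Client" Version="1.0.0-alpha1-final" />
</ItemGroup>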

Then you can invoke hub methods and receive invocations like this:
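
A sketch with the alpha C# client (the exact builder APIs changed between the alpha releases):

var connection = new HubConnectionBuilder()
    .WithUrl("http://localhost:5000/chat")
    .Build();

// Runs whenever the server invokes the "Send" client method
connection.On<string>("Send", message => Console.WriteLine($"Received: {message}"));

await connection.StartAsync();
await connection.InvokeAsync("Send", "Hello from .NET");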

If you want to take advantage of streaming you need to create a hub method that returns either a ReadableChannel<T> or an IObservable<T>. Here is an example of a hub method streaming stock prices to the client from the StockTicker sample we ported from the old SignalR:
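
A sketch of such a hub method, based on the shape of the StockTicker sample:

// _stockTicker produces an observable stream of stock price updates
public IObservable<Stock> StreamStocks()
{
    return _stockTicker.StreamStocks();
}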

The JavaScript code that invokes this hub method looks like this:
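
A sketch of the client-side subscription:

connection.stream('StreamStocks').subscribe({
    // Called for every stream item the server sends
    next: stock => displayStock(stock)
});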

Each time the server sends a stream item the displayStock client function will be invoked.

Invoking a streaming hub method from a C# client and reading the items could look as follows:
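
A sketch using the channel-based API (named Stream() in alpha1 and renamed StreamAsync() in later previews; the Stock properties are assumed from the sample):

var channel = connection.Stream<Stock>("StreamStocks");
while (await channel.WaitToReadAsync())
{
    while (channel.TryRead(out var stock))
    {
        Console.WriteLine($"{stock.Symbol}: {stock.Price}");
    }
}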

Migrating from existing SignalR

We will be releasing a guide on migrating from the existing SignalR in the coming weeks.

Known issues

This is an alpha release and there are a few issues we know about:

  • Connections using the Server Sent Event transport may be disconnected after two minutes of inactivity if the server is running behind IIS
  • The WebSockets transport will not work if the server hosting SignalR is running behind IIS on Windows 7 or Windows Server 2008 R2, due to limitations in IIS
  • ServerSentEvents transport in the C# client can hang if the client is being closed while the data from the server is still being received
  • Streaming invocations cannot be canceled by the client
  • Generating a production build of an application using TypeScript client in angular-cli fails due to UglifyJS not supporting ES6. This issue can be worked around as described in this comment.

Summary

The long awaited version of SignalR for ASP.NET Core just shipped. Try it out and let us know what you think! You can provide feedback or let us know about bugs/issues here.

Announcing SignalR for ASP.NET Core Alpha 2


A few weeks ago we released the alpha1 version of SignalR for ASP.NET Core 2.0. Today we are pleased to announce a release of the alpha2 version of SignalR for ASP.NET Core 2.0. This release contains a number of changes (including API changes) and improvements.

Notable Changes

  • The JSON hub protocol now uses camel casing by default when serializing and deserializing objects on the server and by the C# client
  • IObservable subscriptions for streaming methods are now automatically unsubscribed when the connection is closed
  • It is now possible to invoke client methods in a type-safe manner when using HubContext (a community contribution from FTWinston – thanks!)
  • A new HubConnectionContext.Abort() method allows terminating connections from the server side
  • Users can now control how their objects are serialized when using MessagePack hub protocol
  • Length prefixes used in binary protocols are now encoded using Varints which reduces the size of the message by up to 7 bytes

Release notes can be found on github.

API Changes

TypeScript/JavaScript client:

  • Event names were changed and now use lower case:
    • onDataReceived on IConnection was renamed to onreceive
    • onClosed on HubConnection and IConnection was renamed to onclose
  • It is now possible to register multiple handlers for the HubConnection onclose event by passing the handler as a parameter. The code used to subscribe to the closed event when using the alpha1 version of the client:
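
(A sketch of the alpha1 pattern:)

connection.onClosed = e => console.log('Connection closed');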

needs to be changed to:
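
(A sketch of the alpha2 pattern, passing the handler as a parameter:)

connection.onclose(e => console.log('Connection closed'));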

  • The HubConnection on() method now allows registering multiple callbacks for a client method invocation
  • A new off() method was added to HubConnection to enable removing callbacks registered with the on method

C# Client

  • The HubConnection.Stream() method was changed to be async and renamed to StreamAsync()
  • New overloads of WithJsonHubProtocol() and WithMessagePackProtocol() on HubConnectionBuilder that take protocol-specific settings were added

Server

  • The params keyword was removed from the IClientProxy.InvokeAsync() method and replaced by a set of extension methods

A word of thanks to everyone who has tried the new SignalR and provided feedback. Please keep it up! You can provide feedback or let us know about bugs/issues here.

For examples on using this, and future, versions you can look at the SignalR Samples repository on GitHub.

User accounts made easy with Azure


One of the most common requirements for web applications is for users to create accounts for the purposes of access control and personalization. While ASP.NET templates have always made it easy to create an application that uses a database you control to register and track user accounts, that introduces other complications over the long term. As laws around user information get stricter and security becomes more important, maintaining a database of users and passwords comes with an increasing set of maintenance and regulatory challenges.

A few weeks ago I tried out the new Azure Active Directory B2C service, and was really impressed with how easy it was to use. It added user identity and access control to my app, while moving all the responsibility for signing users up, authenticating them, and maintaining the account database to Azure (and it’s free to develop with).

In this post I’ll briefly walk through how to get up and running with Azure B2C in a new ASP.NET Core app. It’s worth noting it works just as well with ASP.NET apps on the .NET Framework with slightly different steps (see walkthrough). I’ll then include some resources that will help you with more complex scenarios including authenticating against a backend Web API.

Step 1: Create the B2C Tenant in Azure

  • To get started, you’ll need an Azure account. If you don’t have one yet, create your free account now
  • Create an Azure AD B2C Directory
  • Create your policies (this is where you indicate what you need to know about the user)
    • Create a sign-up or sign-in policy
      • Choose all of the information you want to know about the user under “Sign-up attributes”
      • Indicate all the information you want passed to your application under “Application Claims” (note: the default template uses the “Display Name” attribute in the navigation bar so you will want to include that)
    • Create a profile editing policy
    • Create a password reset policy
    • Note: After you create each policy, you’ll be taken back to the tab for that policy type which will show you the full name of the policy you just created, which will be in the form “B2C_1_<name_you_entered>”.  You’ll need these names below when creating your project.
  • Register your application (follow the instructions for a Web App)
    • Note: You’ll get the “Reply URL” in the next step when you create the new project.

Step 2: Create the Project in Visual Studio

  • File -> New Project -> Visual C# -> ASP.NET Core Web Application
  • On the New ASP.NET dialog, click the “Change Authentication” button on the right side of the dialog
    • Choose “Individual User Accounts”
    • Change the dropdown in the top right to “Connect to an existing user store in the cloud”
    • Fill in the required information from the B2C Tenant you created in the Azure portal previously
    • Copy the “Reply URI” from the “Change Authentication” dialog and enter it into the application properties for the app you previously created in your B2C tenant in the Azure portal.
    • Click OK
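
For reference, the dialog stores the values you enter in the app’s configuration; the resulting appsettings.json section looks roughly like the sketch below (key names and policy values here are illustrative, not exact):

{
  "Authentication": {
    "AzureAdB2C": {
      "Instance": "https://login.microsoftonline.com/tfp/",
      "ClientId": "<application id from your B2C tenant>",
      "Tenant": "<your-tenant>.onmicrosoft.com",
      "SignUpSignInPolicyId": "B2C_1_SiUpIn",
      "ResetPasswordPolicyId": "B2C_1_SSPR",
      "EditProfilePolicyId": "B2C_1_SiPe",
      "CallbackPath": "/signin-oidc"
    }
  }
}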

Step 3: Try it out

Now run your application (Ctrl+F5) and click “Sign in” in the top right.

You’ll be navigated to Azure’s B2C sign-in/sign-up page.

The first time, click “Sign up now” at the bottom to create your account. Once your account is created, you’ll be redirected back to your app and you’re now signed in. It’s as easy as that.

Additional Resources

The above walk through provided a quick overview for how to get started with Azure B2C and ASP.NET Core. If you are interested in exploring further or using Azure B2C in a different context, here are a few resources that you may find useful:

  • Create an ASP.NET (.NET Framework) app with B2C
  • ASP.NET Core GitHub sample: This sample demonstrates how to use a web front end to authenticate, and then obtain a token to authenticate against a backend Web API.
  • If you are looking to add support to an existing app, you may find it easiest to create a new project in Visual Studio and copy and paste the relevant code into your existing application. You can of course use code from the GitHub samples mentioned above as well

Conclusion

Hopefully you found this short overview of Azure B2C interesting. Authentication is often much more complex than the simple scenario we covered here, and there is no single “one size fits all”, so it should be pointed out that there are many alternative options, including third-party and open source options. As always, feel free to let me know what you think in the comments section below, or via twitter.

Sharing Configuration in ASP.NET Core SPA Scenarios


This is a guest post from Mike Rousos

ASP.NET Core 2.0 recently released and, with it, came some new templates, including new project templates for single-page applications (SPA) served from an ASP.NET Core backend. These templates make it easy to set up a web application with a rich JavaScript frontend and powerful ASP.NET Core backend. Even better, the templates enable server-side prerendering, so the JavaScript frontend is already rendered and ready to display when users first arrive at your web app.

One challenge of the SPA scenario, though, is that there are two separate projects to manage, each with their own dependencies, configuration, etc. This post takes a look at how ASP.NET Core’s configuration system can be used to store configuration settings for both the backend ASP.NET Core app and a front-end JavaScript application together.

Getting Started

To get started, you’ll want to create a new ASP.NET Core Angular project – either by creating a new ASP.NET Core project in Visual Studio and selecting the Angular template, or using the .NET CLI command dotnet new angular.

New ASP.NET Core Angular Project

At this point, you should be able to restore client packages (npm install) and launch the application.

In this project template, the ASP.NET Core app’s configuration is loaded from default sources thanks to the WebHost.CreateDefaultBuilder call in Program.cs. The default configuration providers include:

  • appsettings.json
  • appsettings.{Environment}.json
  • User secrets (if in a development environment)
  • Environment variables
  • Command line arguments

You can see that appsettings.json already has some initial config values related to logging.

For the client-side application, there aren’t any configuration values setup initially. If we were using the Angular CLI to create and manage this application, it would provide environment-specific TypeScript files (environment.ts, environment.prod.ts, etc.) to provide settings specific to different environments. The Angular CLI would pick the right config file to use when building or serving the application, based on the environment specified. In our case, though, we’re not using the Angular CLI to build the client (we’re just using WebPack directly).

Instead of using client-side TypeScript files for configuration, it would be convenient to share portions of our server app’s configuration with the client app. That would enable us to use ASP.NET Core’s rich configuration system which can load from environment-specific config files, as well as from many other sources (environment variables, Azure Key Vault, etc.). We just need to make those config settings available to our client app.

Embedding Client Configuration

Since our goal is to store client and server settings together in the ASP.NET Core app, it’s helpful to define the shape of the client config settings by creating a class modeling the configuration data. This isn’t required (you could just send settings as raw json), but if the structure of your configuration isn’t frequently changing, it’s a little nicer to work with strongly typed objects in C# and TypeScript.

Here’s a simple class for storing sample client configuration data:
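
Something like the following, using the two settings this post references later:

public class ClientConfiguration
{
    // A message for the client app to display
    public string UserMessage { get; set; }

    // How many times the client app should display the message
    public int MessageCount { get; set; }
}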

Next, we can use configuration Options to easily extract a ClientConfiguration object from the server application’s larger configuration.

Here are the calls to add to Startup.ConfigureServices to make a ClientConfiguration options object available in the web app’s dependency injection container:
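
A sketch of the relevant calls:

// Bind IOptions<ClientConfiguration> to the "ClientConfiguration"
// section of the app's configuration
services.AddOptions();
services.Configure<ClientConfiguration>(Configuration.GetSection("ClientConfiguration"));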

Notice that we’ve specified that the ClientConfiguration object comes from the “ClientConfiguration” section of the app’s configuration, so that’s where we need to add config values in appsettings.json (or via environment variables, etc.):
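
For example (values are illustrative):

{
  "ClientConfiguration": {
    "UserMessage": "Hello from ASP.NET Core configuration!",
    "MessageCount": 1
  }
}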

If you want to set these sorts of hierarchical settings using environment variables, the variable name should include all levels of the setting’s hierarchy delimited by colons or double underscores. So, for example, the ClientConfiguration section’s UserMessage setting could be set from an environment variable by setting ClientConfiguration__UserMessage (or ClientConfiguration:UserMessage) equal to some value.

Creating a Client Configuration Endpoint

There are a number of ways that we can make configuration settings from our server application available to the client. One easy option is to create a web API that returns configuration settings.

To do that, let’s create a ClientConfiguration controller (which receives the ClientConfiguration options object as a constructor parameter):
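
A sketch of the controller:

[Route("[controller]")]
public class ClientConfigurationController : Controller
{
    private readonly ClientConfiguration _configuration;

    public ClientConfigurationController(IOptions<ClientConfiguration> options)
    {
        _configuration = options.Value;
    }
}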

Next, give the controller a single index action which, as you may have guessed, just returns the client configuration object:
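
For example:

[HttpGet]
public ClientConfiguration Index() => _configuration;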

At this point, you can launch the application and confirm that navigating to /ClientConfiguration returns configuration settings extracted from those configured for the web app. Now we just have to setup the client app to use those settings.

Creating a Client-Side Model and Configuration Service

Since our client configuration is strongly typed, we can start implementing our client-side config retrieval by making a configuration model that matches the one we made on the server. Create a configuration.ts file like this:
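
A matching model (note the camelCase property names, which match ASP.NET Core’s default JSON serialization):

export class Configuration {
    userMessage: string;
    messageCount: number;
}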

Next, we’ll want to handle app config settings in a service. The service will use Angular’s built-in Http service to request the configuration object from our web API. Both the Http service and our application’s ‘BASE_URL’ (the web app’s root address, which we’ll call back to in order to reach the web API) can be injected into the configuration service’s constructor.

Then, we just create a loadConfiguration function to make the necessary GET request, deserialize into a Configuration object, and store the object in a local field. We convert the http request into a Promise (instead of leaving it as an Observable) so that it works with Angular’s APP_INITIALIZER function (more on this later!).

The finished configuration service should look something like this:
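
A sketch of the service (import paths are illustrative):

import { Injectable, Inject } from '@angular/core';
import { Http } from '@angular/http';
import 'rxjs/add/operator/toPromise';
import { Configuration } from './configuration';

@Injectable()
export class ConfigurationService {
    configuration: Configuration;

    constructor(private http: Http, @Inject('BASE_URL') private baseUrl: string) { }

    loadConfiguration(): Promise<Configuration> {
        // GET the settings from the web API and cache them on this service
        return this.http.get(this.baseUrl + 'ClientConfiguration')
            .toPromise()
            .then(response => {
                this.configuration = response.json() as Configuration;
                return this.configuration;
            });
    }
}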

Now that we have a configuration service, we need to register it in app.module.shared.ts to make it available to other components. The ASP.NET Core Angular template puts most module setup for our client app in app.module.shared.ts (instead of app.module.ts) since there are separate modules for server-side rendering and client-side rendering. App.module.shared.ts contains the module pieces common to both scenarios.

To register our service, we need to import it and then add it to a providers array passed to the @NgModule decorator:
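
For example:

import { ConfigurationService } from './configuration.service';

// Then, in the @NgModule decorator:
providers: [
    ConfigurationService
]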

There’s one other important change to make before we leave app.module.shared.ts. We need to make sure that config values are loaded from the server before any components are rendered. To do that, we add ConfigurationService.loadConfiguration to our app’s APP_INITIALIZER function (which is called at app-initialization time and waits for returned promises to finish prior to any components being rendered).

Import APP_INITIALIZER from @angular/core and then update your providers argument to include a registration for APP_INITIALIZER:
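
Something like the following sketch:

providers: [
    ConfigurationService,
    {
        provide: APP_INITIALIZER,
        // useFactory returns a function that returns a promise; Angular waits
        // for that promise before rendering any components
        useFactory: (configService: ConfigurationService) => () => configService.loadConfiguration(),
        deps: [ConfigurationService],
        multi: true
    }
]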

Note that useFactory is a function that must return a function (which, in turn, returns a promise), so we have the double fat-arrow syntax seen above. Also, don’t forget to specify multi: true since there may be multiple APP_INITIALIZER functions registered.

Now the configuration service is registered with DI and will automatically load configuration from the server when the app starts up.

To make use of it, let’s update the app’s home component. Import ConfigurationService into the home component and update the component’s constructor to take an instance of the service as a parameter. Make sure to make the parameter public so that it can be used from the home component’s HTML template. Since we will want to loop over the ‘messageCount’ config setting, it’s also useful to create a small helper function to return an array with a length of messageCount for use with *ngFor in the HTML template later.

Here’s my simple home component:
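
A sketch (import paths are illustrative):

import { Component } from '@angular/core';
import { ConfigurationService } from '../../configuration.service';

@Component({
    selector: 'home',
    templateUrl: './home.component.html'
})
export class HomeComponent {
    // Public so the HTML template can bind to configService.configuration
    constructor(public configService: ConfigurationService) { }

    // Helper returning an array of length messageCount for use with *ngFor
    messageIterations(): number[] {
        return Array(this.configService.configuration.messageCount).fill(0);
    }
}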

Finally, get rid of everything currently in home.component.html and replace it with an HTML template that takes advantage of the configuration values:
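
For example:

<h1>Configuration values from the server</h1>
<ul>
    <li *ngFor="let i of messageIterations()">{{ configService.configuration.userMessage }}</li>
</ul>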

Trying it Out

You should now be able to run the web app and see the server-side configuration values reflected in the client application!

Here’s a screenshot of my sample app running with ASPNETCORE_ENVIRONMENT set to Development (I set MessageCount to 2 in appsettings.Development.json):

Development Environment Results

And here’s one with ASPNETCORE_ENVIRONMENT set to Production (where MessageCount is three and the message is appropriately updated):

Production Environment Results

Wrap Up

By exposing (portions of) app configuration from our ASP.NET Core app and making use of Angular’s APP_INITIALIZER function, we can share configuration values between server and client apps. This allows our client apps to take advantage of ASP.NET Core’s rich configuration system. In this sample, the client configuration settings were only used by our Angular app, but if your scenario includes some settings that are needed by both the client and server applications, this sort of solution allows you to set the config values in one place and have them be available to both applications.

Future improvements of this sample could include adding a time-to-live on the client’s cached configuration object to allow automatically reloaded config values to propagate to the client, or perhaps using different configuration providers to show Angular app configuration coming from Azure Key Vault or some other less common source.

Further Reading

 

Recent updates for publishing


We have recently added a few interesting features for ASP.NET publishing. The updates include:

  • Container registry publish updates
  • Creating a publish profile without publishing

In this post, we will briefly describe these updates. We will get started with the container-related news.

Container Registry Publish Updates

Container development (e.g. Docker) has grown in popularity recently, including in .NET development. We’ve started adding support for containerized applications in Visual Studio as well. When developing a containerized app, there are two components that are needed to run your application.

  • App image
  • Host to run the container

The app image includes the application itself and info about configuring the machine hosting the application.

The host machine loads the app image and runs it. There are a variety of options for the host machine. In previous releases we supported publishing a containerized ASP.NET Core project to Azure Container Registry (ACR) and creating a new Azure App Service to host the application; if you were running your application on a different host, Visual Studio couldn’t help. Now Visual Studio has the following container publish related features:

  • A: Publish an ASP.NET Core containerized app to ACR and a new Azure App Service (Visual Studio 2017 15.0)
  • B: Publish an ASP.NET (Core or full .NET Framework) containerized project to a container registry (including, but not limited to, ACR) (Visual Studio 2017 15.5 Preview 2)

 

Feature A enables Azure App Service users to run a containerized ASP.NET Core app on a new Azure App Service host. This feature was included in the initial release of Visual Studio 2017; we are including it here for completeness. To publish one of these apps to App Service you’ll use the Microsoft Azure App Service Linux option on the publish page. See the next image.

After selecting this option you’ll be prompted to configure the new App Service instance and the container registry settings.

For feature B, we have added a new Container Registry publish option on the Publish page. You can see an image of that below.

The radio buttons below the Container Registry button list the different options. Let’s take a closer look at each of them below.

 

  • Create New Azure Container Registry: Select this option when you want to publish your app image to a new Azure Container Registry. You can publish several different app images to the same container registry.
  • Select Existing Azure Container Registry: Select this option when you’ve already created the Azure Container Registry and you want to publish a new app image to it.
  • Docker Hub: Select this option if you want to publish to Docker Hub (hub.docker.com).
  • Custom: Select this option to explicitly set publish options.

 

After selecting the appropriate option and clicking the Publish button, you’ll be prompted to complete the configuration and continue to publish. The Container Registry publish feature is enabled for both ASP.NET Core and ASP.NET full .NET Framework projects.

To try out the Azure related features you’ll need an Azure subscription. If you don’t already have one you can get started for free.

We’ve only briefly covered the Container Registry features here. We will be blogging more soon about how to use this in end-to-end scenarios. Until then, you can take a look at the docs. Now let’s move on to the next update.

Create Publish Profile Without Publishing

In Visual Studio publishing to a new destination includes two steps:

  • Create Publish Profile
  • Start publishing

In Visual Studio 2017 15.5 Preview 2 we have added a new gear option next to the Publish button. In previous releases of Visual Studio 2017, when you created a publish profile, the publish process started automatically immediately afterwards. This prevented you from changing publish settings before the initial publish. We’ve heard feedback from users that in some cases the publish options need to be customized before the initial publish. Reasons you might choose to delay the publish process include: you need to configure databases, you need to change the build configuration used, or you want to validate publish credentials before publishing. In the image below you can see the new gear option highlighted.

To create a publish profile without publishing, select the publish destination (by clicking one of the big buttons) and then click the gear; you’ll get a context menu with two options. Select Create Profile.

 

 

 

After you select Create Profile here, you’ll continue to create the profile, and any new Azure resources if applicable. You can then publish your app at a later time with the Publish button. The following image shows this button.

Now that we’ve covered the delayed publish feature, let’s wrap up.

Conclusion

These were some updates that we wanted to share with you. We’ll be blogging more soon about how to use the container features in full scenarios. If you have any questions, please comment below or email me at sayedha AT microsoft.com or on Twitter @SayedIHashimi. You can also use the built in send feedback feature in Visual Studio 2017.

Thanks,
Sayed Ibrahim Hashimi

Publishing a Web App to an Azure VM from Visual Studio


We know virtual machines (VMs) are one of the most popular places to run apps in Azure, but publishing to a VM from Visual Studio has been a tricky experience for some. So, we’re pleased to announce that in Visual Studio 15.5 (get the preview now) we’ve added some improvements to the experience. In this post, we’ll discuss the requirements for a VM that’s ready to run an ASP.NET web application, and then walk through how to publish to it from Visual Studio 15.5 Preview 2. Also, if you have a minute to tell us about how you work with VMs, we’d appreciate it.

Contents

    – Prepare your VM for publishing
    – Walk-through: Publishing from Visual Studio
    – Modifying publish settings [Optional]

Prepare your VM for publishing

Before you can publish a web application to an Azure Virtual Machine from Visual Studio, you must have an Azure VM that’s properly configured.

Create a new VM on Azure
  1. Click the button below to deploy this custom Azure Resource Manager (ARM) Template, which will create a new Azure VM with all required components installed and configured.

    Create ASP.NET VM in Azure
  2. Once the VM is provisioned, go to the VM settings in the Azure Portal and assign a DNS name to the VM.
Update existing VM

The minimum requirements for publishing from Visual Studio are listed below.

    Server Components:
        • IIS
        • ASP.NET 4.6
        • Web Management Service
        • Web Deploy
    Open firewall ports:
        • Port 80 (http)
        • Port 8172 (Web Deploy)
    DNS:
        • A DNS name assigned to the VM

You can run this PowerShell script on an existing VM to install and configure all the required server components.

Note: You will need to go into the Azure Portal to configure the firewall rules and the DNS name.
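
As a rough sketch of what such a script does (the linked script is authoritative; the commands below are illustrative):

# Install IIS, ASP.NET 4.x support, and the Web Management Service
Install-WindowsFeature Web-Server, Web-Asp-Net45, Web-Mgmt-Service

# Allow remote connections to the Web Management Service (port 8172)
Set-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\WebManagement\Server -Name EnableRemoteManagement -Value 1
Set-Service WMSVC -StartupType Automatic
Restart-Service WMSVC

# Web Deploy is installed separately (e.g. from its MSI), and ports 80 and
# 8172 must be opened in the VM's network security group in the Azure Portal.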

Walk-through: Publishing a web app to an Azure Virtual Machine from Visual Studio 2017

  1. Open your web application in Visual Studio 2017 (v15.5 Preview 2)
  2. Right-click the project and choose “Publish…”
  3. Press the arrow on the right side of the page to scroll through the publishing options until you see “Microsoft Azure Virtual Machine”.
  4. Select the “Microsoft Azure Virtual Machine” icon, then click “Browse…” to open the Azure Virtual Machine selector.
    The Azure VM selector dialog will open.
  5. Choose the appropriate account (with Azure subscription connected to your virtual machine).
    • If you’re signed in to Visual Studio, the account list will be pre-populated with all your authenticated accounts.
    • If you are not signed in, or if the account you need is not listed, choose “Add an account…” and follow the prompts to log in.

    Wait for the list of Existing Virtual Machines to populate. (Note: This can take some time).

  6. From the Existing Virtual Machines list, select the VM that you intend to publish your web application to, then press “OK”.

    Focus returns to the Publish page with the Azure Virtual Machine populated and the “Publish” button enabled.

  7. Press the “Publish” button to create the publish profile and begin publishing to your Azure VM.
    Note: You can delay publishing so you can configure additional settings prior to your first publish as covered later in the post.
  8. When prompted for User name and Password, enter the credentials of a user who is authorized for publishing web applications on the VM, then press “OK”.
    Note: For new VMs, this is usually the administrator account. To enable non-administrator user accounts with permission to publish via WebDeploy, follow the steps in this document.
  9. If prompted, accept the security certificate.
  10. Publishing proceeds.
    You can watch the progress in the Output window.
    When publishing completes, a web browser will launch and open at the destination URL of the web site hosted on the Azure VM.
    Note: If you don’t want the web browser launching after each publish, remove the “Destination URL” from the Publish Profile settings.

Success!

At this point, you have finished publishing your web application to the VM.
The Publish page refreshes with the new profile selected and the details shown in the Summary section.

You can return to this screen any time to publish again, rename or delete the profile, launch the web site in a browser, or modify the publish settings.
Read on to learn about some interesting settings.

Modify Publish Settings [Optional]

After the Publish Profile has been created, you can edit the settings to tweak your publishing experience.
To modify the settings of the publish profile, click the “Settings…” link on the Publish page.

This will open the Publish Profile Settings dialog.

Save user credentials to the profile

To avoid having to provide user name and password each time you publish, you can store the user credentials in the publish profile.

  1. In the “User name” and “Password” fields, enter the credentials of an authorized user on the target VM.
  2. Press “Validate Connection” to confirm that the details are correct.
  3. Choose “Save password” if you don’t want to be prompted to enter the password each time you publish.
  4. Click “Next” to progress to the “Settings” tab, or click “Save” to accept the changes and close the dialog.
Ensure a clean publish each time

To ensure that your web application is uploaded to a clean web site each time you publish, you can configure the publish profile to delete all files on the target web server before publishing.

  1. Go into the “Settings” page of the Publish dialog.
  2. Expand the File Publish Options.
  3. Choose “Remove additional files at destination”.
    Warning! Deleting files on the target VM may have undesired effects, including removing files that were uploaded by other team members, or files generated by the application. Please be sure you know the state of the machine before publishing with this option enabled.

Conclusion

We’d love for you to download the 15.5 Preview and let us know what you think of the new experience. Also, if you could take two minutes to tell us about how you use VMs in the cloud, we’d appreciate it. As always please let us know what you think in the comments section below, by using the send feedback tool in Visual Studio, or via Twitter.


Creating a Minimal ASP.NET Core Windows Container


This is a guest post by Mike Rousos

One of the benefits of containers is their small size, which allows them to be more quickly deployed and more efficiently packed onto a host than virtual machines could be. This post highlights some recent advances in Windows container technology and .NET Core technology that allow ASP.NET Core Windows Docker images to be reduced in size dramatically.

Before we dive in, it’s worth reflecting on whether Docker image size even matters. Remember, Docker images are built from a series of read-only layers. When using multiple images on a single host, common layers are shared, so multiple images/containers using a base image (a particular Nano Server or Server Core image, for example) will only need that base image present once on the machine. Even when containers are instantiated, they use the shared image layers and only take up additional disk space with their writable top layer. These efficiencies in Docker mean that image size doesn’t matter as much as someone just learning about containerization might guess.

All that said, Docker image size does make some difference. Every time a VM is added to your Docker host cluster in a scale-out operation, the images need to be populated. Smaller images will get the new host node up and serving requests faster. Also, despite image layer sharing, it’s not unusual for Docker hosts to have dozens of different images (or more). Even if some of those share common layers, there will be differences between them and the disk space needed can begin to add up.

If you’re new to using Docker with ASP.NET Core and want to read up on the basics, you can learn all about containerizing ASP.NET Core applications in the documentation.

A Starting Point

You can create an ASP.NET Core Docker image for Windows containers by checking the ‘Enable Docker Support’ box while creating a new ASP.NET Core project in Visual Studio 2017 (or by right-clicking an existing .NET Core project and choosing ‘Add -> Docker Support’).

Adding Docker Support

To build the app’s Docker image from Visual Studio, follow these steps:

  1. Make sure the docker-compose project is selected as the solution’s startup project.
  2. Change the project’s Configuration to ‘Release’ instead of ‘Debug’.
    1. It’s important to use Release configuration because, in Debug configuration, Visual Studio doesn’t copy your application’s binaries into the Docker image. Instead, it creates a volume mount that allows the application binaries to be used from outside the container (so that they can be easily updated without rebuilding the image). This is great for a code-debug-fix cycle, but will give us incorrect data for what the Docker image size will be in production.
  3. Push F5 to build (and start) the Docker image.

Visual Studio Docker Launch Settings

Alternatively, the same image can be created from a command prompt by publishing the application (dotnet publish -c Release) and building the Docker image (docker build -t samplewebapi --build-arg source=bin/Release/netcoreapp2.0/publish .).

The resulting Docker image has a size of 1.24 GB (you can see this with the docker images or docker history commands). That’s a lot smaller than a Windows VM and even considerably smaller than Windows Server Core containers or VMs, but it’s still large for a Docker image. Let’s see how we can make it smaller.

Initial Template Image

Windows Nano Server, Version 1709

The first (and by far the greatest) improvement we can make to this image size has to do with the base OS image we’re using. If you look at the docker images output above, you will see that although the total image is 1.24 GB, the majority of that (more than 1 GB) comes from the underlying Windows Nano Server image.

The Windows team recently released Windows Server, version 1709. One of the great features in 1709 is an optimized Nano Server base image that is nearly 80% smaller than previous Nano Server images. The Nano Server, version 1709 image is only about 235 MB on disk (~93 MB compressed).

The first thing we should do to optimize our ASP.NET Core application’s image is to use that new Nano Server base. You can do that by navigating to the app’s Dockerfile and replacing FROM microsoft/aspnetcore:2.0 with FROM microsoft/aspnetcore:2.0-nanoserver-1709.
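
For reference, the updated Dockerfile would look something along these lines (a sketch based on the Visual Studio template; SampleWebApi is the sample app’s name and the source build argument matches the docker build command shown earlier):

FROM microsoft/aspnetcore:2.0-nanoserver-1709
ARG source
WORKDIR /app
EXPOSE 80
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "SampleWebApi.dll"]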

Be aware that in order to use Nano Server, version 1709 Docker images, the Docker host must be running either Windows Server, version 1709 or Windows 10 with the Fall Creators Update, which is rolling out worldwide right now. If your computer hasn’t received the Fall Creators Update yet, don’t worry. It is possible to create Windows Server, version 1709 virtual machines in Azure to try out these new features.

After switching to use the Nano Server, version 1709 base image, we can re-build our Docker image and see that its size is now 357 MB. That’s a big improvement over the original image!

If you’re building your Docker image by launching the docker-compose project from within Visual Studio, make sure Visual Studio is up-to-date (15.4 or later) since recent updates are needed to launch Docker containers based on Nano Server, version 1709 from Visual Studio.

AspNet Core v1709 Docker Image

That Might be Small Enough

Before we make the image any smaller, I want to pause to point out that for most scenarios, using the Nano Server, version 1709 base image is enough of an optimization and further “improvements” might actually make things worse. To understand why, take a look at the sizes of the component layers of the Docker image created in the last step.

AspNet Core v1709 Layers

The largest layers are still the OS (the bottom two layers) and, at the moment, that’s as small as Windows images get. Our app, on the other hand, is the 373 kB layer toward the top of the layer history. That’s already quite small.

The only good places left to optimize are the .NET Core layer (64.9 MB) or the ASP.NET Core layer (53.6 MB). We can (and will) optimize those, but in many cases it’s counter-productive to do so because Docker shares layers between images (and containers) with common bases. In other words, the ASP.NET Core and .NET Core layers shown in this image will be shared with all other containers on the host that use microsoft/aspnetcore:2.0-nanoserver-1709 as their base image. The only additional space that other images consume will be the ~500 kB that our application added on top of the layers common to all ASP.NET Core images. Once we start making changes to those base layers, they won’t be sharable anymore (since we’ll be pulling out things that our app doesn’t need but that others might). So, we might reduce this application’s image size but cause others on the host to increase!

So, bottom line: if your Docker host will be hosting containers based on several different ASP.NET Core application images, then it’s probably best to just have them all derive from microsoft/aspnetcore:2.0-nanoserver-1709 and let the magic of Docker layer sharing save you space. If, on the other hand, your ASP.NET Core image is likely to be used alongside other non-.NET Core images which are unlikely to be able to share much with it anyhow, then read on to see how we can further optimize our image size.

Reducing ASP.NET Core Dependencies

The majority of the ~54 MB contributed by the ASP.NET Core layer of our image is a centralized store of ASP.NET Core components that’s installed by the aspnetcore Dockerfile. As mentioned above, this is useful because it allows ASP.NET Core dependencies to be shared between different ASP.NET Core application Docker images. If you have a small ASP.NET Core app (and don’t need the sharing), it’s possible to just bundle the parts of the ASP.NET Core web stack you need with your application and skip the rest.

To remove unused portions of the ASP.NET Core stack, we can take the following steps:

  1. Update the Dockerfile to use microsoft/dotnet:2.0.0-runtime-nanoserver-1709 as its base image instead of microsoft/aspnetcore:2.0-nanoserver-1709.
  2. Add the line ENV ASPNETCORE_URLS http://+:80 to the Dockerfile after the FROM statement (this was previously done in the aspnetcore base image for us).
  3. In the app’s project file, replace the Microsoft.AspNetCore.All metapackage dependency with dependencies on just the ASP.NET Core components the app requires. In this case, my app is a trivial ‘Hello World’ web API, so I only need the following (larger apps would, of course, need more ASP.NET Core packages):
    1. Microsoft.AspNetCore
    2. Microsoft.AspNetCore.Mvc.Core
    3. Microsoft.AspNetCore.Mvc.Formatters.Json
  4. Update the app’s Startup.cs file to call services.AddMvcCore().AddJsonFormatters() instead of services.AddMvc() (since the AddMvc extension method isn’t in the packages we’ve referenced).
    1. This works because our sample project is a Web API project. An MVC project would need more MVC services.
  5. Update the app’s controllers to derive from ControllerBase instead of Controller.
    1. Again, since this is a Web API controller instead of an MVC controller, it doesn’t use the additional functionality Controller adds. (A sketch of the resulting code follows this list.)
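
To make those steps concrete, here is a minimal sketch of the resulting Startup and controller for a trivial ‘Hello World’ Web API (the names are illustrative, not taken from the original sample):

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // AddMvcCore + AddJsonFormatters replaces the full AddMvc registration
        services.AddMvcCore().AddJsonFormatters();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseMvc();
    }
}

[Route("api/[controller]")]
public class ValuesController : ControllerBase // ControllerBase instead of Controller
{
    [HttpGet]
    public string Get() => "Hello World";
}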

Now when we build the Docker image, we can see we’ve shaved a little more than 40 MB by only including the ASP.NET Core dependencies we need. The total size is now 315 MB.

NanoServer No AspNet All

Bear in mind that this is a trivial sample app and a real-world application would not be able to cut as much of the ASP.NET Core framework.

Also, notice that while we eliminated the 54 MB intermediate ASP.NET Core layer (which could have been shared), we’ve increased the size of our application layer (which cannot be shared) by about 11 MB.

Trimming Unused Assemblies

The next place to consider saving space is the .NET Core/CoreFX layer (which is consuming ~65 MB at the moment). Like the ASP.NET Core optimizations, this is only useful if that layer wasn’t going to be shared with other images. It’s also a little trickier to improve because, unlike ASP.NET Core, .NET Core’s framework is delivered as a single package (Microsoft.NETCore.App).

To reduce the size of .NET Core/CoreFX files in our image, we need to take two steps:

  1. Include the .NET Core files as part of our application (instead of in a base layer).
  2. Use a preview tool to trim unused assemblies from our application.

The result of those steps will be the removal of any .NET Framework (or remaining ASP.NET Core framework) assemblies that aren’t actually used by our application.

First, we need to make our application self-contained. To do that, add a <RuntimeIdentifiers>win10-x64</RuntimeIdentifiers> property to the project’s csproj file.
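
The project file change is small; a sketch (the rest of your csproj stays as-is):

<PropertyGroup>
  <TargetFramework>netcoreapp2.0</TargetFramework>
  <RuntimeIdentifiers>win10-x64</RuntimeIdentifiers>
</PropertyGroup>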

We also need to update our Dockerfile to use microsoft/nanoserver:1709 as its base image (so that we don’t end up with two copies of .NET Core) and use SampleWebApi.exe as our image’s entrypoint instead of dotnet SampleWebApi.dll.
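
A sketch of the self-contained Dockerfile, assuming the same sample app name (the ASPNETCORE_URLS variable is still needed since we are no longer deriving from an aspnetcore base image):

FROM microsoft/nanoserver:1709
ARG source
ENV ASPNETCORE_URLS http://+:80
WORKDIR /app
EXPOSE 80
COPY ${source} .
ENTRYPOINT ["SampleWebApi.exe"]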

Up until now, it was possible to build the Docker image either from Visual Studio or the command line. But Visual Studio doesn’t currently support building Docker images for self-contained .NET Core applications (which are not typically used for development-time debugging). So, to build our Docker image from here on out, we will use the following command line interface commands (notice that they’re a little different from those shown previously since we are now publishing a runtime-specific build of the application). Also, you may need to delete (or update) the .dockerignore file generated as part of the project’s template because we’re now copying binaries into the Docker image from a different publish location.

dotnet publish -c Release -r win10-x64
docker build -t samplewebapi --build-arg
   source=bin/Release/netcoreapp2.0/win10-x64/publish .

These changes will cause the .NET Core assemblies to be deployed with our application instead of in a shared location, but the included files will be about the same. To remove unneeded assemblies, we can use Microsoft.Packaging.Tools.Trimming, a preview package that removes unused assemblies from a project’s output and publish folders. To do that, add a package reference to Microsoft.Packaging.Tools.Trimming and a <TrimUnusedDependencies>true</TrimUnusedDependencies> property to the application’s project file.
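
A sketch of those project file additions (the package is in preview, so the exact version number is an assumption):

<PropertyGroup>
  <TrimUnusedDependencies>true</TrimUnusedDependencies>
</PropertyGroup>
<ItemGroup>
  <PackageReference Include="Microsoft.Packaging.Tools.Trimming" Version="1.1.0-preview1" />
</ItemGroup>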

After making those additions, re-publishing, and re-building the Docker image (using the CLI commands shown above), the total image size is down to 288 MB.

NanoServer SelfContained Trim Dependencies

As before, this reduction in total image size does come at the expense of a larger top layer (up to 53 MB).

One More Round of Trimming

We’re nearly done now, but there’s one more optimization we can make. Microsoft.Packaging.Tools.Trimming removed some unused assemblies, but others still remain because it isn’t clear whether dependencies on those assemblies are actually exercised or not. And that’s not to mention all the IL in an assembly that may be unused if our application calls just one or two methods from it.

There’s another preview trimming tool, the .NET IL Linker, which is based on the Mono linker and can remove unused IL from assemblies.

This tool is still experimental, so to reference it we need to add a NuGet.config to our solution and include https://dotnet.myget.org/F/dotnet-core/api/v3/index.json as a package source. Then, we add a dependency on the latest preview version of ILLink.Tasks (currently 0.1.4-preview-981901).
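
A minimal NuGet.config sketch with that feed added alongside nuget.org:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <add key="dotnet-core" value="https://dotnet.myget.org/F/dotnet-core/api/v3/index.json" />
  </packageSources>
</configuration>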

ILLink.Tasks will trim IL automatically, but we can get a report on what it has done by passing /p:ShowLinkerSizeComparison=true to our dotnet publish command.
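
The publish and build commands from the previous section then become (the extra MSBuild property only enables the size report):

dotnet publish -c Release -r win10-x64 /p:ShowLinkerSizeComparison=true
docker build -t samplewebapi --build-arg source=bin/Release/netcoreapp2.0/win10-x64/publish .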

After one more publish and Docker image build, the final size for our Windows ASP.NET Core Web API container image comes to 271 MB!

NanoServer Double Trim

Even though trimming ASP.NET Core and .NET Core Framework assemblies isn’t common for most containerized projects, the preview trimming tools shown here can be very useful for reducing the size of large applications since they can remove application-local assemblies (pulled in from NuGet, for example) and IL code paths that aren’t used.

Wrap-Up

This post has shown a series of optimizations that can help to reduce ASP.NET Core Docker image size. In most cases, all that’s needed is to be sure to use new Nano Server, version 1709 base images and, if your app is large, to consider some preview dependency trimming options like Microsoft.Packaging.Tools.Trimming or the .NET IL Linker.

Less commonly, you might also consider using app-local versions of the ASP.NET Core or .NET Core Frameworks (as opposed to shared ones) so that you can trim unused dependencies from them. Just be careful to keep common base image layers unchanged if they’re likely to be shared between multiple images. Although this article presented the different trimming and minimizing options as a series of steps, you should feel free to pick and choose the techniques that make sense for your particular scenario.

In the end, a simple ASP.NET Core web API sample can be packaged into a < 360 MB Windows Docker image without sacrificing any ability to share ASP.NET Core and .NET Core base layers with other Docker images on the host and, potentially, into an even smaller image (271 MB) if that sharing is not required.

Improvements to Azure Functions in Visual Studio


We’re excited to announce several improvements to the Azure Functions experience in Visual Studio as part of the latest update to the Azure Functions tools on top of Visual Studio 2017 v15.5. (Get the preview now.)

New Function project dialog

To make it easier to get up and running with Azure Functions, we’ve introduced a new Functions project dialog. Now, when creating a Functions project, you can choose one that starts with one of the most popular trigger types (Http, Queue or Timer). If you’re looking for something different, choose the Empty project, then add the item after project creation.

Additionally, most Function apps require a valid storage account to be specified in AzureWebJobsStorage. Typically this has meant adding a connection string to the local.settings.json after the function is created. To make it easier to find and configure the connection strings for your Function’s storage account, we’ve introduced a Storage Account picker in the new project dialog.

Storage account picker in new Functions project dialog

The default option is the Storage Emulator. The Storage Emulator is a local service, installed as part of the Azure workload, that offers much of the functionality of a real Azure storage account. If it’s not already running, you can start it by pressing the Windows Start key and typing “Microsoft Azure Storage Emulator”. This is a great option if you’re looking to get up and running quickly – especially if you’re playing around, as it doesn’t require any resources to be provisioned in Azure.

However, the best way to guarantee that all supported features are available to your Functions project is to configure it to use an Azure storage account. To help with this, we’ve added a Browse… option in the Storage Account picker that launches the Azure Storage Account selection dialog. This lets you choose from existing storage accounts that you have access to through your Azure subscriptions.

When the project is created, the connection string for the selected storage account will be added to the local.settings.json file and you’ll be able to run your Functions project straight away!

.NET Core support

You can now create Azure Functions projects inside Visual Studio that target .NET Core. When creating a Functions project, you can choose a target from the selector at the top of the new project dialog. If you choose the Azure Functions v2 (.NET Standard) target, your project will run against .NET Core or .NET Framework.

Choose Azure Functions runtime

Manage Application Settings

An important part of deploying Functions to Azure is adding appropriate application settings. Azure Functions projects store local settings in the local.settings.json file, but this file does not get published to Azure (by design). So, the settings that control the application running in Azure need to be manually configured. As part of our new tooling improvements, we’ve added the ability for you to view and edit your Function’s app settings in the cloud from within Visual Studio. On the Publish page of the Connected Services dialog, you’ll find an option to Manage Application Settings….

Manage App Settings link in Publish dialog

This launches the Application Settings dialog, which allows you to view, update, add and remove app settings just like you would on the Azure portal. When you’re satisfied with the changes, you can press Apply to push the changes to the server.

Application Settings editor

Detect mismatching Functions runtime versions

To prevent issues caused by developing locally against an out-of-date version of the runtime, after publishing a Functions app we’ll now compare your local runtime version against the portal’s version. If they differ, Visual Studio will offer to change the app settings in the cloud to match the version you are using locally.

Update mismatching Functions extension version

Try out the new features

Download the latest version of Visual Studio 2017 (v15.5) and start enjoying the improved Functions experience today.

Ensure you have the Azure workload installed and the latest version of the Azure Web Jobs and Functions Tools.
Note: If you have a fresh installation, you may need to manually apply the update to Azure Functions and Web Jobs Tools. Look for the new notifications flag in the Visual Studio title bar. Clicking the link in the Notifications window opens the Extensions and Updates dialog. From there you can click Update to upgrade to the latest version.

Update notifications

If you have any questions or comments, please let us know by posting in the comments section below.

Announcing .NET 4.7.1 Tools for the Cloud


Packages and Containers

Today we are releasing a set of providers for ASP.NET 4.7.1 that make it easier than ever to deploy your applications to cloud services and take advantage of cloud-scale features.  This release includes a new CosmosDb provider for session state and a collection of configuration builders.

A Package-First Approach

With previous versions of the .NET Framework, new features were provided “in the box” and shipped with Windows and new versions of the entire framework.  This meant that you could be assured your providers and capabilities were available on every current version of Windows.  It also meant that you had to wait for a new version of Windows to get new .NET Framework features.  We have adopted a strategy starting with .NET Framework 4.7 to deliver more abstract features with the framework and deploy providers through the NuGet package manager service.  There are no concrete ConfigurationBuilder classes in the .NET Framework 4.7.1, and we are now making several available for your use from the NuGet.org repository.  In this way, we can update and deploy new ConfigurationBuilders without requiring a fresh install of Windows or the .NET Framework.

ConfigurationBuilders Simplify Application Management

In .NET Framework 4.7.1 we introduced the concept of ConfigurationBuilders.  ConfigurationBuilders are objects that allow you to inject application configuration into your .NET Framework 4.7.1 application and continue to use the familiar ConfigurationManager interface to read those values.  Sure, you could always write your configuration files to read other config files from disk, but what if you wanted to apply configuration from environment variables?  What if you wanted to read configuration from a service, like Azure Key Vault?  To work with those configuration sources, you would need to rewrite some non-trivial amount of your application to consume these services.

With ConfigurationBuilders, no code changes are necessary in your application.  You simply add references from your web.config or app.config file to the ConfigurationBuilders you want to use and your application will start consuming those sources without updating your configuration files on disk.  One form of ConfigurationBuilder is the KeyValueConfigBuilder that matches a key to a value from an external source and adds that pair to your configuration.  All of the ConfigurationBuilders we are releasing today support this key-value approach to configuration.  Let’s take a look at using one of these new ConfigurationBuilders, the EnvironmentConfigBuilder.

When you install any of our new ConfigurationBuilders into your application, we automatically allocate the appropriate new configSections in your app.config or web.config file as shown below:
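
A sketch of what that declaration looks like, with the EnvironmentConfigBuilder registered as a builder (the exact section registration written by the package may differ slightly):

<configSections>
  <section name="configBuilders"
           type="System.Configuration.ConfigurationBuildersSection, System.Configuration, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
           restartOnExternalChanges="false" requirePermission="false" />
</configSections>
<configBuilders>
  <builders>
    <add name="Environment"
         type="Microsoft.Configuration.ConfigurationBuilders.EnvironmentConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Environment, Version=1.0.0.0, Culture=neutral" />
  </builders>
</configBuilders>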

The new “builders” section contains information about the ConfigurationBuilders you wish to use in your application.  You can declare any number of ConfigurationBuilders, and apply the settings they retrieve to any section of your configuration.  Let’s look at applying our environment variables to the appSettings of this configuration.  You specify which ConfigurationBuilders to apply to a section by adding the configBuilders attribute to that section and indicating the name of the defined ConfigurationBuilder to apply, in this case “Environment”:

<appSettings configBuilders="Environment">
  <add key="COMPUTERNAME" value="VisualStudio" />
</appSettings>

The COMPUTERNAME is a common environment variable set by the Windows operating system that we can use to replace the VisualStudio setting defined here.  With the below ASPX page in our project, we can run our application and see the following results.
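
A minimal sketch of such a page, assuming it simply dumps every appSettings entry:

<%@ Page Language="C#" %>
<%@ Import Namespace="System.Configuration" %>
<html>
<body>
  <h2>AppSettings</h2>
  <ul>
    <% foreach (string key in ConfigurationManager.AppSettings.AllKeys) { %>
      <li><%: key %> = <%: ConfigurationManager.AppSettings[key] %></li>
    <% } %>
  </ul>
</body>
</html>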

AppSettings Reported in the Browser

The COMPUTERNAME setting is overwritten by the environment variable.  That’s a nice start, but what if I want to read ALL the environment variables and add them as application settings?  You can specify Greedy Mode for the ConfigurationBuilder and it will read all environment variables and add them to your appSettings:

<add name="Environment" mode="Greedy"
  type="Microsoft.Configuration.ConfigurationBuilders.EnvironmentConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Environment, Version=1.0.0.0, Culture=neutral" />

There are several Modes that you can apply to each of the ConfigurationBuilders we are releasing today:

  • Greedy – Read all settings and add them to the section the ConfigurationBuilder is applied to
  • Strict – (default) Update only those settings where the key matches the configuration source’s key
  • Expand – Operate on the raw XML of the configuration section and do a string replace where the configuration source’s key is found.

The Greedy and Strict options only apply when operating on AppSettings or ConnectionStrings sections.  Expand can perform its string replacement on any section of your config file.

You can also specify prefixes for your settings to be handled by adding the prefix attribute.  This allows you to only read settings that start with a known prefix.  Perhaps you only want to add environment variables that start with “APPSETTING_”, you can update your config file like this:

<add name="Environment"
     mode="Greedy" prefix="APPSETTING_"
     type="Microsoft.Configuration.ConfigurationBuilders.EnvironmentConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Environment, Version=1.0.0.0, Culture=neutral" />

Finally, even though using the APPSETTING_ prefix is a nice way to read only those settings, you may not want your configuration entry to actually be called “APPSETTING_Setting” in code.  You can use the stripPrefix attribute (default value is false) to omit the prefix when the value is added to your configuration:
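
A sketch of the builder definition with the prefix stripped:

<add name="Environment"
     mode="Greedy" prefix="APPSETTING_" stripPrefix="true"
     type="Microsoft.Configuration.ConfigurationBuilders.EnvironmentConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Environment, Version=1.0.0.0, Culture=neutral" />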

Greedy AppSettings with Prefixes Stripped

Notice that the COMPUTERNAME was not replaced in this mode.  You can add a second EnvironmentConfigBuilder to read and apply settings by adding another add statement to the configBuilders section and adding an entry to the configBuilders attribute on the appSettings:
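
For example (a sketch; the second builder’s name “Environment2” is illustrative):

<configBuilders>
  <builders>
    <add name="Environment" mode="Greedy" prefix="APPSETTING_" stripPrefix="true"
         type="Microsoft.Configuration.ConfigurationBuilders.EnvironmentConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Environment, Version=1.0.0.0, Culture=neutral" />
    <add name="Environment2" mode="Strict"
         type="Microsoft.Configuration.ConfigurationBuilders.EnvironmentConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Environment, Version=1.0.0.0, Culture=neutral" />
  </builders>
</configBuilders>

<appSettings configBuilders="Environment,Environment2">
  <add key="COMPUTERNAME" value="VisualStudio" />
</appSettings>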

Try using the EnvironmentConfigBuilder from inside a Docker container to inject configuration specific to your running instances of your application.  We’ve found that this significantly improves the ability to deploy existing applications in containers without having to rewrite your code to read from alternate configuration sources.

Secure Configuration with Azure Key Vault

We are happy to include a ConfigurationBuilder for Azure Key Vault in this initial collection of providers.  This ConfigurationBuilder allows you to secure your application using the Azure Key Vault service, without any required login information to access the vault.  Add this ConfigurationBuilder to your config file and build an add statement like the following:

<add name="AzureKeyVault"
     mode="Strict"
     vaultName="MyVaultName"
     type="Microsoft.Configuration.ConfigurationBuilders.AzureKeyVaultConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Azure" />

If your application is running on an Azure service that has Managed Service Identity (MSI) enabled, this is all you need to read configuration from the vault and add it to your application.  Conversely, if you are not running on a service with MSI, you can still use the vault by adding the following attributes:

  • clientId – the Azure Active Directory application key that has access to your key vault
  • clientSecret – the Azure Active Directory application secret that corresponds to the clientId

The same mode, prefix, and stripPrefix features described previously are available for use with the AzureKeyVaultConfigBuilder.  You can now configure your application to grab that secret database connection string from the keyvault “conn_mydb” setting with a config file that looks like this:
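
A sketch of that configuration (Strict mode only replaces keys that match, so the placeholder value below is swapped for the vault’s “conn_mydb” secret at runtime):

<configBuilders>
  <builders>
    <add name="AzureKeyVault"
         mode="Strict"
         vaultName="MyVaultName"
         type="Microsoft.Configuration.ConfigurationBuilders.AzureKeyVaultConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Azure" />
  </builders>
</configBuilders>

<appSettings configBuilders="AzureKeyVault">
  <add key="conn_mydb" value="placeholder - replaced from the vault at runtime" />
</appSettings>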

You can use other vaults by using the uri attribute instead of the vaultName attribute, and providing the URI of the vault you wish to connect to.  More information about getting started configuring key vault is available online.

Other Configuration Builders Available

Today we are introducing five configuration builders as a preview for you to use and extend.

This new collection of ConfigurationBuilders should help make it easier than ever to secure your applications with Azure Key Vault, or orchestrate your applications when you add them to a container by no longer embedding configuration or writing extra code to handle deployment settings.

We plan to fully release the source code and make these providers open source prior to removing the preview tag from them.

Store SessionState in CosmosDb

Today we are also releasing a session state provider for Azure Cosmos Db.  The globally available CosmosDb service means that you can geographically load-balance your ASP.NET application and your users will always maintain the same session state no matter the server they are connected to.  This async provider is available as a NuGet package and can be added to your project by installing that package and updating the session state provider in your web.config as follows:

<connectionStrings>
  <add name="myCosmosConnString"
       connectionString="- YOUR CONNECTION STRING -"/>
</connectionStrings>
<sessionState mode="Custom" customProvider="cosmos">
  <providers>
    <add name="cosmos"
         type="Microsoft.AspNet.SessionState.CosmosDBSessionStateProviderAsync, Microsoft.AspNet.SessionState.CosmosDBSessionStateProviderAsync"
         connectionStringName="myCosmosConnString"/>
  </providers>
</sessionState>

Summary

We’re continuing to innovate and update the .NET Framework and ASP.NET.  These new providers should make it easier to deploy your applications to Azure or make use of containers without having to rewrite your application.  Update your applications to .NET 4.7.1 and start using these new providers to make your configuration more secure, and to start using CosmosDb for your session state.

Orchard Core Beta 1 released


This is a guest post by Sebastien Ros on behalf of the Orchard community

Two years ago, the Orchard community started developing Orchard on .NET Core. After 1,500 commits, 297,000 lines of code, 127 projects, we think it’s time to release a public version, namely Orchard Core Beta 1.

What is Orchard Core?

If you know what Orchard and .NET Core are, then it might seem obvious: Orchard Core is a redevelopment of Orchard on ASP.NET Core.

Orchard Core consists of two different targets:

  • Orchard Core Framework: An application framework for building modular, multi-tenant applications on ASP.NET Core.
  • Orchard Core CMS: A Web Content Management System (CMS) built on top of the Orchard Core Framework.

It’s important to note the differences between the framework and the CMS. Some developers who want to develop SaaS applications will only be interested in the modular framework. Others who want to build administrable websites will focus on the CMS and build modules to enhance their sites or the whole ecosystem.

Beta

Quoting Jeff Atwood on https://blog.codinghorror.com/alpha-beta-and-sometimes-gamma/:

“The software is complete enough for external testing — that is, by groups outside the organization or community that developed the software. Beta software is usually feature complete, but may have known limitations or bugs. Betas are either closed (private) and limited to a specific set of users, or they can be open to the general public.”

It means we feel confident that developers can start building applications and websites using the current state of development. There are bugs, limitations and there will be breaking changes, but the feedback has been strong enough that we think it’s time to show you what we have accomplished so far.

Building Software as a Service (SaaS) solutions with the Orchard Core Framework

It’s very important to understand the Orchard Core Framework is distributed independently from the CMS on nuget.org. We’ve made some sample applications on https://github.com/OrchardCMS/OrchardCore.Samples that will guide you on how to build modular and multi-tenant applications using just Orchard Core Framework without any of the CMS specific features.

One of our goals is to enable community-based ecosystems of hosted applications which can be extended with modules, like e-commerce systems, blog engines and more. The Orchard Core Framework enables a modular environment that allows different teams to work on separate parts of an application and make components reusable across projects.

What’s new in Orchard Core CMS

Orchard Core CMS is a complete rewrite of Orchard CMS on ASP.NET Core. It’s not just a port, as we wanted to improve performance drastically and align as closely as possible with the development models of ASP.NET Core.

  • Performance. This might be the most obvious change when you start using Orchard Core CMS. It’s extremely fast for a CMS. So fast that we haven’t even needed to work on an output cache module yet. To give you an idea, without caching Orchard Core CMS is around 20 times faster than the previous version.
  • Portable. You can now develop and deploy Orchard Core CMS on Windows, Linux and macOS. We also have Docker images ready for use.
  • Document database abstraction. Orchard Core CMS still requires a relational database, and is compatible with SQL Server, MySQL, PostgreSQL and SQLite, but it’s now using a document abstraction (YesSql) that provides a document database API to store and query documents. This is a much better approach for CMS systems and helps performance significantly.
  • NuGet Packages. Modules and themes are now shared as NuGet packages. Creating a new website with Orchard Core CMS is actually as simple as referencing a single meta package from the NuGet gallery. It also means that updating to a newer version only involves updating the version number of this package.
  • Live preview. When editing a content item, you can now see live how it will look on your site, even before saving your content. And it also works for templates, where you can browse any page to inspect the impact of a change on templates as you type it.
  • Liquid templates support. Editors can safely change the HTML templates with the Liquid template language. It was chosen as it’s both very well documented (Jekyll, Shopify, …) and secure.
  • Custom queries. We wanted to provide a way for developers to access all their data as simply as possible. We created a module that lets you create custom ad hoc SQL and Lucene queries that can be re-used to display custom content, or exposed as API endpoints. You can use it to create efficient queries, or expose your data to SPA applications.
  • Recipes. Recipes are scripts that can contain content and metadata to build a website. You can now include binary files, and even use them to deploy your sites remotely from a staging to a production environment for instance. They can also be part of NuGet Packages, allowing you to ship predefined websites.
  • Scalability. Because Orchard Core is a multi-tenant system, you can host as many websites as you want with a single deployment. A typical cloud machine can then host thousands of sites in parallel, with database, content, theme and user isolation.

Resources

Development plan

The Orchard Core source code is available on GitHub.

There are still many important pieces to add and you might want to check our roadmap, but it’s also the best time to jump into the project and start contributing new modules, themes, improvements, or just ideas.

Feel free to drop on our dedicated Gitter chat and ask questions.

Improve website performance by optimizing images


We all want our web applications to load as fast as possible to give the best possible experience to the users. One of the steps to achieve that is to make sure the images we use are as optimized as possible.

If we can reduce the file size of the images then we can significantly reduce the weight of the website. This is important for various reasons, including:

  • Less bandwidth needed == cheaper hosting
  • The website loads faster
  • Faster websites have higher conversion rates
  • Less data needed to load your page on mobile devices (mobile data can be expensive)

Optimizing images is always better for the user, and therefore for you too, but it’s easy to forget and a bit cumbersome to do by hand. So, let’s look at a couple of options that are simple to use.

All these options use great optimization algorithms that are capable of reducing the file size of images by up to 75% without any noticeable quality loss.

Gulp

If you are already using Gulp, then using the gulp-imagemin package is a good option. When configured it will automatically optimize the images as part of your build.
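
A minimal gulpfile sketch, assuming gulp and gulp-imagemin have been installed via npm (the source and destination paths are assumptions):

var gulp = require('gulp');
var imagemin = require('gulp-imagemin');

// Optimize everything under images/ and write the results to dist/images
gulp.task('optimize-images', function () {
  return gulp.src('images/**/*')
    .pipe(imagemin())
    .pipe(gulp.dest('dist/images'));
});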

Pros:

  • Can be automated as part of a build
  • Uses industry standard optimization algorithms
  • Supports both lossy and lossless optimization
  • It is open source

Cons:

  • Requires some configuration
  • Increases the build time sometimes by a lot
  • Doesn’t optimize dynamically added images

Visual Studio Image Optimizer

The Image Optimizer extension for Visual Studio is one of the most popular extensions due to its simplicity of use and strong optimization algorithms.

Pros:

  • Remarkably simple to use – no configuration
  • Uses industry standard optimization algorithms
  • Supports both lossy and lossless optimization
  • It is open source

Cons:

  • No build time support
  • Doesn’t optimize dynamically added images

Azure Image Optimizer

Installing the Azure.ImageOptimizer NuGet package into any ASP.NET application will automatically optimize images once the app is deployed to Azure App Services with zero code changes to the web application. It uses the same algorithms as the Image Optimizer extension for Visual Studio.

To try out the Azure Image Optimizer you’ll need an Azure subscription. If you don’t already have one you can get started for free.

This is the only solution that optimizes images added dynamically at runtime, such as user-uploaded profile pictures.

Pros:

  • Remarkably simple to use
  • Uses industry standard optimization algorithms
  • Supports both lossy and lossless optimization
  • Optimizes dynamically added images
  • Set it and forget it
  • It is open source

Cons:

  • Only works on Azure App Service

To understand how the Azure Image Optimizer works, check out the documentation on GitHub. Spoiler alert – it is an Azure Webjob running next to your web application.

Final thoughts

There are many more options for image optimization that I didn’t cover, but it doesn’t really matter how you choose to optimize the images. The important part is that you optimize them.

My personal preference is to use the Image Optimizer extension for Visual Studio to optimize the known images and combine that with the Azure.ImageOptimizer NuGet package to handle any dynamically added images at runtime.

For more information about image optimization techniques check out Addy Osmani’s very comprehensive eBook Essential Image Optimization.

Configuring HTTPS in ASP.NET Core across different platforms


As the web moves to be more secure by default, it’s more important than ever to make sure your websites have HTTPS enabled. And if you’re going to use HTTPS in production, it’s a good idea to develop with HTTPS enabled so that your development environment is as close to your production environment as possible. In this blog post we’re going to go through how to set up an ASP.NET Core app with HTTPS for local development on Windows, Mac, and Linux.

This post is primarily focused on enabling HTTPS in ASP.NET Core during development using Kestrel. When using Visual Studio you can alternatively enable HTTPS in the Debug tab of your app to easily have IIS Express enable HTTPS without it going all the way to Kestrel. This closely mimics what you would have if you’re handling HTTPS connections in production using IIS. However, when running from the command-line or in a non-Windows environment you must instead enable HTTPS directly using Kestrel.

The basic steps we will use for each OS are:

  1. Create a self-signed certificate that Kestrel can use
  2. Optionally trust the certificate so that your browser will not warn you about using a self-signed certificate
  3. Configure Kestrel to use that certificate

You can also reference the complete Kestrel HTTPS sample app.

Create a certificate

Windows

Use the New-SelfSignedCertificate PowerShell cmdlet to generate a suitable certificate for development:

New-SelfSignedCertificate -NotBefore (Get-Date) -NotAfter (Get-Date).AddYears(1) -Subject "localhost" -KeyAlgorithm "RSA" -KeyLength 2048 -HashAlgorithm "SHA256" -CertStoreLocation "Cert:\CurrentUser\My" -KeyUsage KeyEncipherment -FriendlyName "HTTPS development certificate" -TextExtension @("2.5.29.19={critical}{text}","2.5.29.37={critical}{text}1.3.6.1.5.5.7.3.1","2.5.29.17={critical}{text}DNS=localhost")

Linux & Mac

For Linux and Mac we will use OpenSSL. Create a file https.config with the following data:
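
A minimal https.config sketch for a localhost development certificate (the field values are reasonable defaults, not necessarily the post’s exact file):

[ req ]
default_bits       = 2048
default_md         = sha256
default_keyfile    = key.pem
prompt             = no
encrypt_key        = no
distinguished_name = req_distinguished_name
req_extensions     = v3_req

[ req_distinguished_name ]
CN = localhost

[ v3_req ]
subjectAltName   = DNS:localhost
keyUsage         = critical, keyEncipherment
extendedKeyUsage = serverAuth
basicConstraints = critical, CA:false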

Run the following command to generate a private key and a certificate signing request:

openssl req -config https.config -new -out csr.pem

Run the following command to create a self-signed certificate:

openssl x509 -req -days 365 -extfile https.config -extensions v3_req -in csr.pem -signkey key.pem -out https.crt

Run the following command to generate a pfx file containing the certificate and the private key that you can use with Kestrel:

openssl pkcs12 -export -out https.pfx -inkey key.pem -in https.crt -password pass:<password>

Trust the certificate

This step is optional, but without it the browser will warn you about your site being potentially unsafe. You will see something like the following if your browser doesn’t trust your certificate:

Windows

To trust the generated certificate on Windows you need to add it to the current user’s trusted root store:

  1. Run certmgr.msc
  2. Find the certificate under Personal/Certificates. The “Issued To” field should be localhost and the “Friendly Name” should be HTTPS development certificate
  3. Copy the certificate and paste it under Trusted Root Certification Authorities/Certificates
  4. When Windows presents a security warning dialog to confirm you want to trust the certificate, click on “Yes”.

Linux

There is no centralized way of trusting a certificate on Linux, so you can do one of the following:

  1. Exclude the URL you are using in your browser’s exclude list
  2. Trust all self-signed certificates on localhost
  3. Add the https.crt to the list of trusted certificates in your browser.

How exactly to achieve this depends on your browser/distro.

Mac

Option 1: Command line

Run the following command:

sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain https.crt

Some browsers, such as Chrome, require you to restart them before this trust will take effect.

Option 2: Keychain UI

If you open the “Keychain Access” app you can drag your https.crt into the Login keychain.

Configure Kestrel to use the certificate we generated

To configure Kestrel to use the generated certificate, add the following code and configuration to your application.

Application code

This code will read a set of HTTP server endpoint configurations from a custom section in your app configuration settings and then apply them to Kestrel. The endpoint configurations include settings for configuring HTTPS, like which certificate to use. Add the code for the ConfigureEndpoints extension method to your application and then call it when setting up Kestrel for your host in Program.cs:
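
A simplified, file-based sketch of that wiring (the full ConfigureEndpoints helper in the sample app also handles certificate stores and multiple endpoints):

using System.Net;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;

public class Program
{
    public static void Main(string[] args) => BuildWebHost(args).Run();

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .UseKestrel((context, options) =>
            {
                var https = context.Configuration.GetSection("HttpServer:Endpoints:Https");
                if (https["FilePath"] != null)
                {
                    options.Listen(IPAddress.Loopback, https.GetValue<int>("Port"), listenOptions =>
                    {
                        // The pfx password should come from user secrets or an
                        // environment variable, not a file in source control.
                        listenOptions.UseHttps(https["FilePath"], https["Password"]);
                    });
                }
            })
            .Build();
}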

Windows sample configuration

To configure your endpoints and HTTPS settings on Windows you could then put the following into your appsettings.Development.json, which configures an HTTPS endpoint for your application using a certificate in a certificate store:
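
A sketch of that file, assuming the HttpServer:Endpoints section layout referenced later in this post (host, port, and store values are illustrative):

{
  "HttpServer": {
    "Endpoints": {
      "Http": {
        "Host": "localhost",
        "Port": 8080,
        "Scheme": "http"
      },
      "Https": {
        "Host": "localhost",
        "Port": 8443,
        "Scheme": "https",
        "StoreName": "My",
        "StoreLocation": "CurrentUser"
      }
    }
  }
}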

Linux and Mac sample configuration

On Linux or Mac your appsettings.Development.json would look something like this, where your certificate is specified using a file path:
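
A sketch with a file-based certificate (the pfx generated earlier; the port and path are illustrative):

{
  "HttpServer": {
    "Endpoints": {
      "Https": {
        "Host": "localhost",
        "Port": 8443,
        "Scheme": "https",
        "FilePath": "https.pfx"
      }
    }
  }
}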

You can then use the user secrets tool, environment variables, or some secure store such as Azure KeyVault to store the password of your certificate using the HttpServer:Endpoints:Https:Password configuration key instead of storing the password in a file that goes into source control.

For example, to store the certificate password as a user secret during development, run the following command from your project:

dotnet user-secrets set HttpServer:Endpoints:Https:Password <password>

To override the certificate password using an environment variable, create an environment variable named HttpServer:Endpoints:Https:Password (or HttpServer__Endpoints__Https__Password if your system does not allow :) with the value of the certificate password.

Run your application

When running from Visual Studio you can change the default launch URL for your application to use the HTTPS address by modifying the launchSettings.json file:
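
For example (a sketch; the profile name and port are assumptions):

{
  "profiles": {
    "MyWebApp": {
      "commandName": "Project",
      "launchBrowser": true,
      "launchUrl": "https://localhost:8443/",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}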

Redirect from HTTP to HTTPS

When you set up your site to use HTTPS by default, you typically want to allow HTTP requests, but have them redirected to the corresponding HTTPS address. In ASP.NET Core this can be accomplished using the URL rewrite middleware. Place the following code in the Configure method of your Startup class:
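
A minimal sketch of that code (the HTTPS port is an assumption; omit it if you listen on 443):

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Rewrite;

public void Configure(IApplicationBuilder app)
{
    // Permanently redirect http://host/path to https://host:8443/path
    app.UseRewriter(new RewriteOptions().AddRedirectToHttps(301, 8443));

    // ... rest of the pipeline (UseStaticFiles, UseMvc, etc.)
}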

Conclusion

With a little bit of work you can set up your ASP.NET Core 2.0 site to always use HTTPS. For a future release we are working to simplify setting up HTTPS for ASP.NET Core apps and we plan to enable HTTPS in the project templates by default. We will share more details on these improvements as they become publicly available.

Testing ASP.NET Core MVC web apps in-memory


This post was written and submitted by Javier Calvarro Nelson, a developer on the ASP.NET Core MVC team

Testing is an important part of the development process of any app. In this blog post we’re going to explore how we can test an ASP.NET Core MVC app using an in-memory server. This approach has several advantages:

  • It’s very fast because it does not start a real server
  • It’s reliable because there is no need to reserve ports or clean up resources after it runs
  • It’s easier than other ways of testing your application, such as using an external test driver
  • It allows testing of traits in your application that are hard to unit test, like ensuring your authorization rules are correct

The main shortcoming of this approach is that it’s not well suited to test applications that heavily rely on JavaScript. That said, if you’re writing a traditional web app or an API then all the benefits mentioned above apply.

For testing the MVC app we’re going to use TestServer. TestServer is an in-memory implementation of a server for ASP.NET Core apps, akin to Kestrel or HTTP.sys.

Creating and setting up the projects

Start by creating an MVC app using the following command:

dotnet new mvc -au Individual -uld --use-launch-settings -o .\TestingMVC\src\TestingMVC

Create a test project with the following command:

dotnet new xunit -o .\TestingMVC\test\TestingMVC.Tests

Next create a solution, add the projects to the solution and add a reference to the app project from the test project:

dotnet new sln
dotnet sln add .\src\TestingMVC\TestingMVC.csproj
dotnet sln add .\test\TestingMVC.Tests\TestingMVC.Tests.csproj
dotnet add .\test\TestingMVC.Tests\TestingMVC.Tests.csproj reference .\src\TestingMVC\TestingMVC.csproj

Add references to the components we’re going to use for testing by adding the following item group to the test project file:

Now, we can run dotnet restore on the project or the solution and we can move on to writing tests.

Writing a test to retrieve the page at ‘/’

Now that we have our projects set up, let’s write a test that will serve as an example of how other tests will look.

We’re going to start by changing Program.cs in our app project to look like this:

In the snippet above, we’ve changed the method IWebHost BuildWebHost(string[] args) to call a new method IWebHostBuilder CreateWebHostBuilder(string[] args) within it. The reason for this is that we want to allow our tests to configure the IWebHostBuilder in the same way the app does and to allow making changes required by tests. (By chaining calls on the WebHostBuilder.)

One example of this will be setting the content root of the app when we’re running the server in a test. The content root needs to be based on the application’s root, not the test’s root.

Now, we can create a test like the one below to get the contents of our home page. This test will fail because we’re missing a couple of things that we describe below.
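
A sketch of such a test (GetApplicationRoot is a hypothetical helper that walks from the test bin folder back to .\src\TestingMVC):

using System.IO;
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;
using Xunit;

public class HomePageTests
{
    [Fact]
    public async Task Get_HomePage_ReturnsOk()
    {
        var builder = Program.CreateWebHostBuilder(new string[0])
            .UseContentRoot(GetApplicationRoot()); // the app's root, not the test's

        using (var server = new TestServer(builder))
        using (var client = server.CreateClient())
        {
            var response = await client.GetAsync("/");
            Assert.Equal(HttpStatusCode.OK, response.StatusCode);
        }
    }

    // Hypothetical helper: resolves the app project folder relative to the
    // test assembly's bin\Debug\netcoreapp2.0 output directory.
    private static string GetApplicationRoot() =>
        Path.GetFullPath(Path.Combine(Directory.GetCurrentDirectory(),
            "..", "..", "..", "..", "..", "src", "TestingMVC"));
}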

The test above can be decomposed into the following actions:

  • Create an IWebHostBuilder in the same way that my app creates it
  • Override the content root of the app to point to the app’s project root instead of the bin folder of the test app. (.\src\TestingMVC instead of .\test\TestingMvc.Tests\bin\Debug\netcoreapp2.0)
  • Create a test server from the WebHost builder
  • Create an HttpClient that can be used to communicate with our app. (This uses an internal mechanism that sends the requests in-memory – no network involved.)
  • Send an HTTP request to the server using the client
  • Ensuring the status code of the response is correct

Requirements for Razor views to run on a test context

If we try to run the test above, we will probably get an HTTP 500 error instead of an HTTP 200 success. The reason for this is that the dependency context of the app is not correctly set up in our tests. In order to fix this, there are a few actions we need to take:

  • Copy the .deps.json file from our app to the bin folder of the testing project
  • Disable shadow copying assemblies

For the first bullet point, we can create a target file like the one below and include in our testing project file as follows:
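
A sketch of such a targets file, based on the deps-copying snippet commonly used with TestServer at the time (item and property names may need adjusting for your layout):

<Project>
  <!-- Copies the referenced app's .deps.json next to the test binaries after build -->
  <Target Name="CopyDepsFiles" AfterTargets="Build" Condition="'$(TargetFramework)' != ''">
    <ItemGroup>
      <DepsFilePaths Include="$([System.IO.Path]::ChangeExtension('%(_ResolvedProjectReferencePaths.FullPath)', '.deps.json'))" />
    </ItemGroup>
    <Copy SourceFiles="%(DepsFilePaths.FullPath)" DestinationFolder="$(OutputPath)" Condition="Exists('%(DepsFilePaths.FullPath)')" />
  </Target>
</Project>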

For the second bullet point, the implementation is dependent on what testing framework we use. For xUnit, add an xunit.runner.json file in the root of the test project (set it to Copy Always) like the one below:
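
The key setting is turning shadow copy off; a minimal file looks like this:

{
  "shadowCopy": false
}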

This step is subject to change at any point; for more information look at the xUnit docs at http://xunit.github.io/#documentation.

Now if you re-run the sample test, it will pass.

Summary

  • We’ve seen how to create in-memory tests for an MVC app
  • We’ve discussed the requirements for setting up the app to find static files and find and compile Razor views in the context of a test
  • Set up the content root in the tests to the app’s root folder
  • Ensure the test project references all the assemblies in the app
  • Copy the app’s deps file to the bin folder of the test project
  • Disable shadow copying in your testing framework of choice
  • We’ve shown how to write a functional test in-memory using TestServer and the same configuration your app uses when running on a real server in Production

The source code of the completed project is available here: https://github.com/aspnet/samples/tree/master/samples/aspnetcore/mvc/testing/TestingMVC

Happy testing!


Take a Break with Azure Functions


So, it’s the Holidays. The office is empty, the boss is away, and you’ve got a bit of free time on your hands. How about learning a new skill and having some fun?

Azure Functions are a serverless technology that executes code based on various triggers (e.g. a URL is called, an item is placed on a queue, a file is added to blob storage, a timer goes off). There are all sorts of things you can do with Azure Functions, like running high CPU-bound calculations, calling various web services and reporting results, sending messages to groups – and nearly anything you can imagine. But unlike traditional applications and services, there’s no need to set up an application host or server that’s constantly running, waiting to respond to requests or triggers. Azure Functions are deployed as and when needed, to as many servers as needed, to meet the demands of incoming requests. There’s no need to set up and maintain hosting infrastructure, you get automatic scaling, and – best of all – you only pay for the cycles used while your functions are being executed.

Want to have a go and try your hand at the latest in web technologies? Follow along to get started with your own Azure Functions.

In this post I’ll show you how to create an Azure Function that triggers every 30 minutes and writes a note into your Slack channel to tell you to take a break. We’ll create a new Function app, generate the access token for Slack, then run the function locally.

Prerequisites:

Create a Function App (Timer Trigger)

We all know how important it is to take regular breaks if you spend all day sitting at a desk, right? So, in this tutorial, we’ll use a Timer Trigger function to post a message to a Slack channel at regular intervals to remind you (and your whole team) to take a break. A Timer Trigger is a type of Azure Function that is triggered to run on regular time intervals.

Just run it

If you want to skip ahead and run the function locally, fetch the source from this repo, insert the appropriate Slack channel(s) and OAuth token in the local.settings.json file, start the Azure Storage Emulator, then Run (or Debug) the Functions app in Visual Studio.

Step-by-step guide
  1. Open Visual Studio 2017 and select File->New Project.
  2. Select Azure Functions under the Visual C# category.
  3. Provide a name (e.g. TakeABreakFunctionApp) and press OK.
    The New Function Project dialog will open.
  4. Select Azure Functions v1 (.NET Framework), choose Timer trigger and press OK.
    Note: This will also work with Azure Functions v2, but for this tutorial I’ve chosen v1, since v2 is still in preview.

    New Timer Trigger

    A new solution is created with a Functions App project and single class called Function1 that contains a basic Timer trigger.

  5. Edit Function1.cs.
    • Add helper methods:
      • Env (for fetching environment variables)
      • SendHttpRequest (for sending authenticated http requests)
      • SendMessageToSlack (for generating and sending the appropriate Slack request – based on environment variables)
    • Update method: Run
      • Change the return type to async Task.
      • Add an asynchronous call to the SendMessageToSlack method.
      • Update Cron settings for the TimerTrigger attribute.
    • Add appropriate Using statements.

  6. The completed code should look like this:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Host;
    
    namespace TakeABreakFunctionsApp
    {
        public static class Function1
        {
            [FunctionName("Function1")]
            public static async Task Run([TimerTrigger("0 */30 * * * *")]TimerInfo myTimer, TraceWriter log)
            {
                log.Info($"C# Timer trigger function executed at: {DateTime.Now}");
                await SendMessageToSlack("You're working too hard. How about you take a break?", log);
            }
    
            private static async Task SendMessageToSlack(string message, TraceWriter log)
            {
                // Fetch environment variables (from local.settings.json when run locally)
                string channel = Env("ChannelToNotify");
                string slackbotUrl = Env("SlackbotUrl");
                string bearerToken = Env("SlackOAuthToken");
    
                // Prepare request and send via Http
                log.Info($"Sending to {channel}: {message}");
                string requestUrl = $"{slackbotUrl}?channel={Uri.EscapeDataString(channel)}&text={Uri.EscapeDataString(message)}";
                await SendHttpRequest(requestUrl, bearerToken);
            }
    
            private static async Task SendHttpRequest(string requestUrl, string bearerToken)
            {
                HttpClient httpClient = new HttpClient();
                httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", bearerToken);
                HttpResponseMessage response = await httpClient.GetAsync(requestUrl);
            }
    
            private static string Env(string name) => Environment.GetEnvironmentVariable(name, EnvironmentVariableTarget.Process);
        }
    }
  7. Edit local.settings.json.
    Add the following environment variables.
    • SlackbotUrl – The URL for the Slack API to post chat messages
    • SlackOAuthToken – An OAuth token that grants permission for your app to send messages to a Slack workspace.
      – See below for help generating a Slack OAuth token.
    • ChannelToNotify – The Slack channel to send messages to
  8. Your local.settings.json should look something like this:
    (Your SlackOAuthToken and ChannelToNotify variables will be specific to your Slack workspace.)

    {
      "IsEncrypted": false,
      "Values": {
        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
        "AzureWebJobsDashboard": "UseDevelopmentStorage=true",
        "SlackbotUrl": "https://slack.com/api/chat.postMessage",
        "SlackOAuthToken": "[insert your generated token]",
        "ChannelToNotify": "[your channel id]"
      }
    }

Your Functions app is now ready to run! You just need to grab an authorization token for your Slack workspace.

Generate an OAuth token for your app to send messages to your Slack workspace

Before you can post a message to a Slack workspace, you must first tell Slack about the app and assign specific permissions for the app to send messages as a bot. Once you’ve installed the app to the Slack workspace, you will be issued an OAuth token that you can send with your http requests. For full details, you can follow the instructions here. Otherwise, follow the steps below.

  • Click here to register your new Functions app with your Slack workspace.
  • Provide a name (e.g. “Take a Break”) and select the appropriate Slack workspace, then press Create App.
  • Create A Slack App

    When the app is registered with Slack, the Slack API management page opens for the new app.

  • Select OAuth & Permissions from the navigation menu on the left.
  • In the OAuth & Permissions page, scroll down to Scopes, select the permission chat:write:bot, then select Save Changes.
  • Select Permission Scopes

  • After the scope permissions have been created and the page has refreshed, scroll to the top of the OAuth & Permissions page and select Install App to Workspace.
  • Slack Install App to Workspace

  • A confirmation page opens. Review the details, then click Authorize.
  • Your OAuth Access Token is generated and presented at the top of the page.
  • OAuth Access Token

  • Copy this token and add it to your local.settings.json as the value for SlackOAuthToken.

    Note: The OAuth access token is a secret and should not be made public. If you check this token into a public source control system like GitHub, Slack will find it and permanently disable it!

Run your Functions App on your local machine

Now that you’ve registered your app with Slack and have provided a valid OAuth token in your local.settings.json, you can run the Function locally.

Start the local Storage Emulator

You can configure your function to use a storage account on Azure. But if your app is configured to use development storage (which is the default for new Functions), then it will run against the local Azure Storage Emulator. Therefore, you’ll need to make sure the Storage Emulator is started before running your Functions app.

  • Open the Windows Start Menu and search for “Storage Emulator”.

Microsoft Azure Storage Emulator will launch. You can manage it via the icon in the Windows System Tray.

Start the Function app from Visual Studio
  • Press Ctrl+F5 to build and run the Functions app.
  • If prompted, update to the latest Functions tools.
  • A new command window launches and displays the log output from the Functions app.

Function App Running

When the next scheduled interval arrives (every 30 minutes, given the schedule above), the Timer trigger fires and sends a message to your Slack workspace.

Function Timer Executes

You should see the message appear in the appropriate Slack channel.

Message Appears In Slack

Feel free to play around with the timer’s CRON options in the Run method’s attributes to configure the function to execute at the intervals you’d like. Here are some example CRON settings.
        Trigger CRON format: (seconds minutes hours days months days-of-week)
        (“0 */15 6-20 * * *”) = Every 15 minutes, between 06:00 AM and 08:59 PM
        (“0 0 0-5,21-23 * * *”) = Every hour from 12:00 AM to 05:00 AM and from 09:00 PM to 11:00 PM

Congratulations! You’ve written a working Azure Functions App with a Timer trigger function.

What’s next?


Publish your Functions App to the cloud
So that your Functions app is always available and can be accessed globally (e.g., for HTTP trigger types), you can publish your app to the cloud. This article describes the process of publishing a Functions app to Azure.

Experiment with other Functions types
There’s an excellent collection of open-source samples available here. Poke around and see what takes your interest.

Tell us about your experience with Azure Functions
We’d love to hear about your experience with Azure Functions. If you’ve got a minute, please complete this short survey.
As always, feel free to leave comments and questions in the space below.

Happy holidays!

Justin Clareburt
Senior Program Manager
Visual Studio and .NET

Announcing Preview 1 of ASP.NET MVC 5.2.4, Web API 5.2.4, and Web Pages 3.2.4


Today we are releasing Preview 1 of ASP.NET MVC 5.2.4, Web API 5.2.4, and Web Pages 3.2.4 on NuGet. This release contains some minor bug fixes and a couple of new features specifically targeted at enabling .NET Standard support for the ASP.NET Web API Client.

You can find the full list of features and bug fixes for this release in the release notes.

To update an existing project to use this preview release run the following commands from the NuGet Package Manager Console for each of the packages you wish to update:

Install-Package Microsoft.AspNet.Mvc -Version 5.2.4-preview1
Install-Package Microsoft.AspNet.WebApi -Version 5.2.4-preview1
Install-Package Microsoft.AspNet.WebPages -Version 3.2.4-preview1

ASP.NET Web API Client support for .NET Standard

The ASP.NET Web API Client package provides strongly typed extension methods for accessing Web APIs using a variety of formats (JSON, XML, form data, custom formatter). This saves you from having to manually serialize or deserialize the request or response data. It also lets the server and client share type information about the request and response via .NET types.

This release adds support for .NET Standard 2.0 to the ASP.NET Web API Client. .NET Standard is a standardized set of APIs that when implemented by .NET platforms enables library sharing across .NET implementations. This means that the Web API client can now be used by any .NET platform that supports .NET Standard 2.0, including cross-platform ASP.NET Core apps that run on Windows, macOS, or Linux. The .NET Standard version of the Web API client is also fully featured (unlike the PCL version) and has the same API surface area as the full .NET Framework implementation.

For example, let’s use the new .NET Standard support in the ASP.NET Web API Client to call a Web API from an ASP.NET Core app running on .NET Core. The code below shows an implementation of a ProductsClient that uses the Web API client helper methods (ReadAsAsync<T>(), Post/PutAsJsonAsync<T>()) to get, create, update, and delete products by making calls to a products Web API:
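
A minimal sketch of what such a client might look like (the Product type, routes, and base address are illustrative; ReadAsAsync<T> and the *AsJsonAsync helpers come from the Microsoft.AspNet.WebApi.Client package):

    using System;
    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public decimal Price { get; set; }
    }

    public class ProductsClient
    {
        private readonly HttpClient _client;

        public ProductsClient(string baseAddress)
        {
            _client = new HttpClient { BaseAddress = new Uri(baseAddress) };
        }

        public async Task<IEnumerable<Product>> GetProductsAsync()
        {
            var response = await _client.GetAsync("/api/products");
            response.EnsureSuccessStatusCode();
            // ReadAsAsync<T> picks a formatter based on the response content type (JSON, XML, etc.)
            return await response.Content.ReadAsAsync<IEnumerable<Product>>();
        }

        public async Task<Product> GetProductAsync(int id)
        {
            var response = await _client.GetAsync($"/api/products/{id}");
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsAsync<Product>();
        }

        public async Task CreateProductAsync(Product product)
        {
            // PostAsJsonAsync serializes the product to JSON for you
            var response = await _client.PostAsJsonAsync("/api/products", product);
            response.EnsureSuccessStatusCode();
        }

        public async Task UpdateProductAsync(Product product)
        {
            var response = await _client.PutAsJsonAsync($"/api/products/{product.Id}", product);
            response.EnsureSuccessStatusCode();
        }

        public async Task DeleteProductAsync(int id)
        {
            var response = await _client.DeleteAsync($"/api/products/{id}");
            response.EnsureSuccessStatusCode();
        }
    }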

Note that all the serialization and deserialization is handled for you. The ReadAsAsync<T>() methods will also handle selecting an appropriate formatter for reading the response based on its content type (JSON, XML, etc.).

This ProductsClient can then be used to call the Products Web API from your Razor Pages in an ASP.NET Core 2.0 app running on .NET Core (or from any .NET platform that supports .NET Standard 2.0). For example, here’s how you can use the ProductsClient from the page model for a page that lets you edit the details for a product:
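
A hedged sketch of such a page model (the class and handler names are illustrative):

    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.AspNetCore.Mvc.RazorPages;

    public class EditModel : PageModel
    {
        private readonly ProductsClient _products;

        public EditModel(ProductsClient products) => _products = products;

        [BindProperty]
        public Product Product { get; set; }

        public async Task OnGetAsync(int id)
        {
            // The client handles deserialization of the API response
            Product = await _products.GetProductAsync(id);
        }

        public async Task<IActionResult> OnPostAsync()
        {
            if (!ModelState.IsValid) return Page();
            await _products.UpdateProductAsync(Product);
            return RedirectToPage("./Index");
        }
    }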

For more details on using the ASP.NET Web API Client see Call a Web API From a .NET Client (C#).

Please try out Preview 1 of ASP.NET MVC 5.2.4, Web API 5.2.4, and Web Pages 3.2.4 and let us know what you think! Any feedback can be submitted as issues on GitHub. Assuming everything with this preview goes smoothly, we expect to ship a stable release of these packages by the end of the month.

Enjoy!

64 bit ASP.NET Core on Azure App Service


When creating an Azure App Service, .NET Core is already pre-installed. However, only the 32-bit .NET Core runtime is installed. In this post we will look at a few ways to get a 64-bit runtime on Azure App Service.

During the 2.1 timeframe we are hoping to have both 32-bit and 64-bit runtimes installed, as well as a portal experience for switching between the two.

1. Deploy a self-contained application

Self-contained deployments don’t require .NET Core to be installed on the machine, because they carry the runtime they need with them. This means you can deploy a 64-bit self-contained deployment to Azure App Service. For more information about self-contained deployments, see:

Information: https://docs.microsoft.com/en-us/dotnet/core/deploying/

CLI instructions: https://docs.microsoft.com/en-us/dotnet/core/deploying/deploy-with-cli

Visual Studio instructions: https://docs.microsoft.com/en-us/dotnet/core/deploying/deploy-with-vs
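
For example, publishing a self-contained 64-bit build from the CLI looks roughly like this (specifying a runtime identifier such as win-x64 is what produces a self-contained deployment):

    dotnet publish -c Release -r win-x64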

2. Deploy your own runtime

The pre-installed runtime is installed on a local SSD, but you can copy your own runtime onto your server and modify your application to use that instead. To do this you would:

  1. Download a zip of the x64 runtime that you want to use
  2. Go to the Kudu console (under advanced tools, debug console)
  3. Drag the zip of the runtime onto the file explorer section of the Kudu console. Kudu will copy up the zip and extract it on the server; the UI changes as you drag the zip, showing you where to drop it for this feature to work.
  4. Modify your application’s web.config to use the dotnet.exe that was just extracted on the server

A web.config file is generated for your ASP.NET Core application when you don’t have one in your app, but if your application already contains one, it will be used instead. Your web.config would look like this:
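
A representative web.config for an app served by the ASP.NET Core Module might look like this (the handler section follows the standard generated file; MyApp.dll is a placeholder for your app assembly):

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <system.webServer>
        <handlers>
          <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
        </handlers>
        <!-- processPath points at the dotnet.exe you uploaded, e.g. [PATH_TO_EXE] -->
        <aspNetCore processPath="[PATH_TO_EXE]" arguments=".\MyApp.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" />
      </system.webServer>
    </configuration>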

[PATH_TO_EXE] will point to the location you extracted the dotnet.exe, for example D:\home\dotnet\dotnet.exe. Your application will now use the copy of dotnet.exe that you copied to the server, meaning that it is now using a 64 bit runtime.

NOTE: There are two main caveats with this approach. First, you must service your own runtime: if a new patch of .NET Core comes out, you will need to deploy it yourself to get any improvements. Second, the cold start time of your application will likely be a bit slower, as the runtime is loading from a slower drive.

3. Use Linux Azure App Service

There is no official 32-bit runtime for .NET Core available on Linux. Because of that, if you use Linux Azure App Service, you get a 64-bit runtime with a normal deployment.

4. Use Web Apps for Containers

Because you deploy your own container with whichever runtime you choose, when using containers you will always have the runtime you want available. You can find more information about Web Apps for Containers here: https://azure.microsoft.com/en-us/services/app-service/containers/

Conclusion

We hope to add 64 bit as a pre-installed option for Azure App Service, but in the meantime you can use the options listed here if you need a 64 bit runtime.

Azure Storage for Serverless .NET Apps in Minutes


Azure Storage is a quick and effortless way to store application data with high availability, security, scalability, and redundancy. This blog post walks through a simple application that creates a short code for a long URL to easily reference it. It uses Table Storage to map codes to URLs and a Queue to process redirect counts. Everything is handled by serverless Azure Functions. The only prerequisite to build and run locally is Visual Studio 2017 15.5 or later, including the Azure Developer workload, which automatically installs the Azure Storage Emulator you can use to program against tables, queues, blobs, and files on your local machine. You do not need an Azure account to run this on your machine.

Build and Test Locally with Function App Host and Azure Storage Emulator

You can download the source code for this project here.

Open Visual Studio 2017 and create a new “Azure Functions” project (the template is under the “Cloud” category). Pick a name like ShortLink.

Add new Azure Functions Project

In the next dialog, choose “Azure Functions v1”, select “Http Trigger”, pick “Storage Emulator” for the Storage Account, and set Access rights to “Anonymous.”

Choosing the function template

Right-click the name Function1.cs in the Solution Explorer and rename it to LinkShortener.cs. Change the function name to “Set” and update the code to use “href” instead of “name” as follows:
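
A minimal sketch of the renamed function, based on the default Functions v1 HTTP-trigger template with “name” swapped for “href”:

    using System.Linq;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.Http;
    using Microsoft.Azure.WebJobs.Host;

    public static class LinkShortener
    {
        [FunctionName("Set")]
        public static async Task<HttpResponseMessage> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]HttpRequestMessage req,
            TraceWriter log)
        {
            log.Info("C# HTTP trigger function processed a request.");

            // Parse the query parameter, using "href" instead of the template's "name"
            string href = req.GetQueryNameValuePairs()
                .FirstOrDefault(q => string.Compare(q.Key, "href", true) == 0)
                .Value;

            return href == null
                ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass an href on the query string")
                : req.CreateResponse(HttpStatusCode.OK, href);
        }
    }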

Hit F5 to run the function locally. You should see the function console launch and provide you with a list of URLs to access your function.

Endpoint from function app

Access the endpoint from your web browser by copying and pasting the URL for the “Set” operation. You should receive an error message asking you to pass an href. Append the following to the end of the URL:

?href=https://developer.microsoft.com/advocates

You should see the URL echoed back to you. Stop debugging (SHIFT+F5).

Out of the box, the Functions template creates a function app. The function app hosts multiple functions, which are snippets of code that can be triggered by various events. In this example, the code is triggered by an HTTP/HTTPS request. Visual Studio uses attributes to declare the function name and specify the bindings. A log instance is automatically passed into the method for you to write logging information.

It’s time to add storage!

Table Storage uses a partition (to segment the data) and a row key (to identify a unique data item). The app will use a special partition of “1” to store a key that indicates the next code to use. The short code is generated by a simple algorithm that translates an integer to a string of alphanumeric characters. To store a short code, the partition will be set to the first character of the code, the row key will be the short code, and a target field will contain the full URL. Create a new class file and name it UrlKey.cs. Add this using statement:

using Microsoft.WindowsAzure.Storage.Table;

Then add the class:
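
A minimal sketch of the entity (the Id property holds the next numeric key; the property name is an assumption):

    public class UrlKey : TableEntity
    {
        // Holds the next numeric value to encode into a short code
        public int Id { get; set; }
    }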

Next, add a class named UrlData.cs, include the same “using” statement and define the class like this:
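
A hedged sketch of the data entity (property names are assumptions):

    public class UrlData : TableEntity
    {
        // The full target URL and the number of times it has been hit
        public string Url { get; set; }
        public int Count { get; set; }
    }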

Add the same using statement to the top of the LinkShortener.cs file. Azure Functions provides special bindings that take care of connecting to various resources. Modify the Run method to include a binding for the key and another binding that will be used to write out the URL information.

The Table attributes represent bindings to Table Storage. Different parameters allow behaviors such as passing in existing entries or collections of entries, as well as a CloudTable instance you can think of as the context you use to interact with a specific table. The binding logic will automatically create the table if it doesn’t exist. The key entry is automatically passed in if it exists. This is because the partition and key are included in the binding. If it doesn’t exist, it will be passed as null and you can initialize it and store it as a new entry:
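
A sketch of what the new signature might look like (the table name “urls” and the key’s partition/row values are assumptions; the full body is assembled after the next two paragraphs):

    [FunctionName("Set")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]HttpRequestMessage req,
        [Table("urls", "1", "KEY")]UrlKey keyEntry,   // single entity, passed in if it exists
        [Table("urls")]CloudTable urlTable,           // context for inserts and updates
        TraceWriter log)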

Next, add the code to turn the numeric key value into an alphanumeric code, then create a new instance of the UrlData class.

The final steps for the redirect loop involve saving the data and updating the key. The response returns the code.
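
Putting the last few paragraphs together, a hedged sketch of the completed “Set” function (usings as in the earlier sketch; the encoding algorithm and initial seed are assumptions, so your first generated code may differ from the “BNK” mentioned below):

    [FunctionName("Set")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]HttpRequestMessage req,
        [Table("urls", "1", "KEY")]UrlKey keyEntry,
        [Table("urls")]CloudTable urlTable,
        TraceWriter log)
    {
        string href = req.GetQueryNameValuePairs()
            .FirstOrDefault(q => string.Compare(q.Key, "href", true) == 0)
            .Value;

        if (href == null)
        {
            return req.CreateResponse(HttpStatusCode.BadRequest, "Please pass an href on the query string");
        }

        // Initialize the key on first use
        if (keyEntry == null)
        {
            keyEntry = new UrlKey { PartitionKey = "1", RowKey = "KEY", Id = 1024 };
        }

        // Turn the numeric key into a short alphanumeric code
        string code = Encode(keyEntry.Id);

        // Save the mapping: partition = first character of the code, row key = code
        var data = new UrlData
        {
            PartitionKey = code.Substring(0, 1),
            RowKey = code,
            Url = href,
            Count = 0
        };
        await urlTable.ExecuteAsync(TableOperation.Insert(data));

        // Advance the key for the next request
        keyEntry.Id++;
        await urlTable.ExecuteAsync(TableOperation.InsertOrReplace(keyEntry));

        return req.CreateResponse(HttpStatusCode.OK, code);
    }

    private const string Alphabet = "abcdefghijklmnopqrstuvwxyz0123456789";

    private static string Encode(int id)
    {
        // Simple base-36 style encoding; the post's actual algorithm may differ
        var sb = new System.Text.StringBuilder();
        while (id > 0)
        {
            sb.Insert(0, Alphabet[id % Alphabet.Length]);
            id /= Alphabet.Length;
        }
        return sb.ToString().ToUpperInvariant();
    }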

Now you can test the functionality. Make sure the storage emulator is running by searching for “Storage Emulator” in your applications and clicking on it. It will send a notification when it is ready. Press F5 and paste the same URL used earlier with the query string set. If all goes well, the response should contain the initial value “BNK”. Next, open “Cloud Explorer” (View -> Cloud Explorer) and navigate to local developer storage. Expand table storage and view the two entries. Note the id for the key has been incremented:

Cloud Explorer with local Table Storage

With an entry in storage, the next step is a function that takes the short code and redirects to the full URL. The strategy is simple: check for an existing entry for the code that is passed in. If it exists, redirect to the URL; otherwise, redirect to a “fallback” (in this case I used my personal blog). The redirect should happen quickly, so the short code is placed on a queue for a separate function to process statistics. Simply declaring the queue with the Queue binding is all it takes for the storage driver to create the queue and add the entry. You are passed an asynchronous collection, so you may add multiple queue entries; anything you add is automatically inserted into the queue. It’s that simple!
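
A hedged sketch of such a “Go” function (the fallback URL, table name, and queue name are assumptions):

    [FunctionName("Go")]
    public static async Task<HttpResponseMessage> Go(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "Go/{code}")]HttpRequestMessage req,
        string code,
        [Table("urls")]CloudTable urlTable,
        [Queue("counts")]IAsyncCollector<string> counts,
        TraceWriter log)
    {
        const string fallbackUrl = "https://example.com/"; // assumption: your fallback page

        var result = await urlTable.ExecuteAsync(
            TableOperation.Retrieve<UrlData>(code.Substring(0, 1), code));

        string redirectUrl = (result.Result as UrlData)?.Url ?? fallbackUrl;

        // Queue the code so a separate function can update statistics
        if (result.Result != null)
        {
            await counts.AddAsync(code);
        }

        var response = req.CreateResponse(HttpStatusCode.Redirect);
        response.Headers.Location = new Uri(redirectUrl);
        return response;
    }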

Run the project again, and navigate to the new “Go” endpoint and pass the “BNK” parameter. Your URL will look something like: http://localhost:7071/api/Go/BNK. You should see it redirect to the page you originally passed in. Refresh your Cloud Explorer and expand the “Queues” section. There should be a new queue named “counts” with a single entry (or more if you tried the redirect multiple times).

Cloud Explorer with local Queue

Processing the queue ties together elements of the previous function. The function uses a queue trigger and is called once for each entry in the queue. The logic simply looks for a matching entry in the table, increments the count, then saves it.
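
A minimal sketch of the queue-processing function (names are assumptions):

    [FunctionName("ProcessQueue")]
    public static async Task ProcessQueue(
        [QueueTrigger("counts")]string code,
        [Table("urls")]CloudTable urlTable,
        TraceWriter log)
    {
        var result = await urlTable.ExecuteAsync(
            TableOperation.Retrieve<UrlData>(code.Substring(0, 1), code));

        if (result.Result is UrlData data)
        {
            data.Count++;
            await urlTable.ExecuteAsync(TableOperation.Replace(data));
            log.Info($"Updated count for {code}: {data.Count}");
        }
    }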

Run the project, and if your Storage Emulator is running, you should see a call to the queue processing function in the function app console. After it completes, refresh your Cloud Explorer. You should see the queue is now empty and the count has been updated on the URL in Table Storage.

Publish to Azure

It’s great to be able to run and debug locally, but to be useful the app should be hosted in the cloud. This step requires an Azure Account (you can get one for free). Right-click on the ShortLink project and choose “Publish…”. Make sure “Azure Function App” and “Create New” are selected, then click the “Publish” button.

Publish to Azure

In the dialog, give the app a unique name (it must be globally unique so you may have to try a few variations). Choose “New” for the resource group and give it a logical name, then choose “New” for plan. Give the plan a name (I like to use the app name followed by “Link”), choose a region close to you and pick the “Consumption Plan” then press “OK.”

Choose a service plan

Click “Create” to create the necessary assets in Azure. Visual Studio will create the resources for you, build your application, then publish it to Azure. When everything is ready, you will see the message “Publish completed.” in the Output dialog for Build.

Test adding a link (replace “myshortlink” with your own function app name):
http://myshortlink.azurewebsites.net/api/Set?href=https://docs.microsoft.com/azure/storage/


Then test the redirect:
http://myshortlink.azurewebsites.net/api/Go/BNK

You can use the Storage Explorer to attach to Azure and verify the count.

But wait – isn’t Azure Storage supposed to be secure? How did this just work without me entering credentials?

If you don’t specify a connection string, all storage references default to an AzureWebJobsStorage connection key. This is the storage account created automatically to support your function app. In your local project, the local.settings.json file points to development storage (the emulator). When the Azure Function App was created, a connection string was automatically generated for the storage account. The application settings override your local settings, so the application was able to run against the storage account without modification! If you want to connect to a different storage account (for example, if you choose to use CosmosDB for premium table storage) you can simply add a new connection string and specify it as a parameter on the bindings and triggers.

When you publish from Visual Studio, the publish dialog has a link to “Manage Application Settings…”. There, you can add your own settings including any custom connection strings you need, and it will deploy the settings securely to Azure as part of the publish process.

Custom application settings

That’s all there is to it!

Conclusion

There is a lot more you could do with the application. For example, the application “as is” does not have any authentication, meaning anyone could access your link shortener and create short links. You would want to change the access to “Function level” for the “Set” function and secure the website with an SSL certificate to prevent anonymous access. For a more complete version of the application that includes logging, monitoring, and a web front end to paste links, read Build a Serverless Link Shortener Faster than you can Finish your Latte.

The intent of this post was to illustrate how easy and effective the experience of integrating Azure Storage with your application can be. There are SDKs available to perform the same functions from desktop and mobile applications as well. Perhaps the biggest benefit of leveraging storage is the low cost. I run a production link shortener that processes several hundred hits per day, and my monthly cost for both the serverless function and the storage is less than one dollar. Azure Storage is both accessible and cost effective.

Here is the full project.

Enjoy!

ASP.NET Core 2.1 roadmap


Five months ago, we shipped ASP.NET Core 2.0 as a foundational release for our high performance, cross-platform web framework for .NET and .NET Core. Since then we have been hard at work to deliver the next wave of features in ASP.NET Core 2.1. Below is an outline of the features and improvements that are planned for this release, which is targeted for mid-year 2018.

You can read about the roadmap for .NET Core 2.1 and EF Core 2.1 on the .NET team blog.

A few of us also recorded an On.NET show to introduce .NET Core 2.1, ASP.NET Core 2.1, and EF Core 2.1 in the Channel 9 studios, in two parts (roadmap, demos).

MVC

Razor Pages improvements

In ASP.NET Core 2.0 we introduced Razor Pages as a new page-based model for building Web UI. In 2.1 we are making a variety of improvements to Razor Pages to make it even more productive.

Razor Pages in an area

Areas provide a way to partition a large MVC app into smaller functional groupings each with their own controllers and views. In 2.1 we will add support for areas to Razor Pages so that areas can have their own pages directory.

Support for /Pages/Shared

In 2.1 Razor Pages will fall back to finding Razor assets such as layouts and partials in /[pages root]/Shared before falling back to /Views/Shared.

Bind all properties on a page or controller

Starting in 2.0 you could use the BindPropertyAttribute to specify that a property on a page model or controller should be bound to data from the request. If you have lots of properties that you want to bind, then this can get tedious and verbose. In 2.1 we will add support for specifying that all properties on a page or controller should be bound by putting the BindPropertyAttribute on the class.

Implement IPageFilter on page models

We will implement IPageFilter on page models, so that you can run logic before or after page handlers run for a given request, much the same way that you can implement IActionFilter on a controller.

Functional testing infrastructure

Writing functional tests for an MVC app allows you to test handling of a request end-to-end, including running routing, filters, controllers, actions, views, and pages. While writing in-memory functional tests for MVC apps is possible with ASP.NET Core 2.0, it requires significant setup.

For 2.1 we will provide a test fixture implementation that handles the typical pitfalls when testing MVC applications using TestServer:

  • Copy the .deps file from your project into the test assembly bin folder
  • Specify the content root of the application’s project root so that static files and views can be found
  • Streamline setting up your app on TestServer

A sample test that uses the new test fixture with xUnit looks like this:
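
A minimal sketch, using the WebApplicationFactory<TEntryPoint> fixture that shipped in the Microsoft.AspNetCore.Mvc.Testing package (names were still tentative at the time of this roadmap; Startup is your app’s startup class):

    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc.Testing;
    using Xunit;

    public class BasicTests : IClassFixture<WebApplicationFactory<Startup>>
    {
        private readonly WebApplicationFactory<Startup> _factory;

        public BasicTests(WebApplicationFactory<Startup> factory) => _factory = factory;

        [Fact]
        public async Task HomePage_ReturnsSuccess()
        {
            var client = _factory.CreateClient();   // in-memory TestServer client
            var response = await client.GetAsync("/");
            response.EnsureSuccessStatusCode();     // any 2xx status
        }
    }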

See https://github.com/aspnet/announcements/issues/275 for additional details.

Web API improvements

ASP.NET Core gives you a single unified framework for building both Web UI and Web APIs. In 2.1 we are making various improvements to the framework for building Web APIs.

Better Input Processing

We want the experience around invalid input to be more automatic and more consistent. More concretely we’re going to:

  • Create a programming model where your action code isn’t called when a request has validation errors (see “Enhanced Web API controller conventions” below)
  • Improve the fidelity of error responses when the request body fails to deserialize or the JSON is invalid
  • Enable placing validation attributes directly on action parameters

Support for Problem Details

We are adding support for RFC 7807 – Problem Details for HTTP APIs as a standardized format for returning machine-readable error responses from HTTP APIs. You can return a Problem Details response from your API action using the ValidationProblem() helper method.
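
For instance, a hedged sketch of an action using the helper (the Product type and repository are illustrative):

    [HttpPost]
    public IActionResult Create(Product product)
    {
        if (!ModelState.IsValid)
        {
            // Returns a 400 response whose body follows the RFC 7807 problem details format
            return ValidationProblem();
        }

        _repository.Add(product);
        return Ok(product);
    }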

Improved OpenAPI specification support

We want to embrace the OpenAPI specification (previously called “Swagger”) and make Web APIs built with ASP.NET Core more descriptive. Today you need a lot of “attribute soup” to get a reasonable OpenAPI spec from ASP.NET Core. We plan to introduce an opinionated layer that infers the possible responses based on what you’re likely to have done with your actions (attributes still win when you want to be explicit).

For example, actions that return IActionResult need to be attributed to indicate the return type so that the schema of the response body can be determined. Actions that return the response type directly don’t need to be attributed, but then you lose the flexibility to return any action result.

We will introduce a new ActionResult<T> type that allows you to return either the response type or any action result, while still indicating the response type.

Enhanced Web API controller conventions and ActionResult<T>

We are adding the [ApiController] attribute as the way to opt-in to Web API specific conventions and behaviors. These behaviors include:

  • Automatically respond with a 400 when validation errors occur
  • Infer smarter defaults for action parameters: [FromBody] for complex types, [FromRoute] when possible, otherwise [FromQuery]
  • Require attribute routing – actions are not accessible via convention-based routes

Here’s an example Web API controller that uses these new enhancements:
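
A hedged sketch of what such a controller might look like (ProductsRepository is illustrative):

    [ApiController]
    [Route("api/[controller]")]
    public class ProductsController : ControllerBase
    {
        private readonly ProductsRepository _repository;

        public ProductsController(ProductsRepository repository) => _repository = repository;

        [HttpGet("{id}")]                      // {id} bound [FromRoute] automatically
        public ActionResult<Product> GetById(int id)
        {
            var product = _repository.Find(id);
            if (product == null)
            {
                return NotFound();             // any action result is still allowed
            }
            return product;                    // or return the response type directly
        }

        [HttpPost]                             // [FromBody] inferred; 400 returned automatically on validation errors
        public ActionResult<Product> Create(Product product)
        {
            _repository.Add(product);
            return CreatedAtAction(nameof(GetById), new { id = product.Id }, product);
        }
    }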

Here’s what the Web API would look like if you were to implement it with 2.0:
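
A hedged sketch of the 2.0 equivalent, with the manual checks and attributes that the new conventions remove:

    [Route("api/[controller]")]
    public class ProductsController : Controller
    {
        private readonly ProductsRepository _repository;

        public ProductsController(ProductsRepository repository) => _repository = repository;

        [HttpGet("{id}")]
        [ProducesResponseType(typeof(Product), 200)]   // attribute needed so tooling knows the response type
        public IActionResult GetById(int id)
        {
            var product = _repository.Find(id);
            if (product == null)
            {
                return NotFound();
            }
            return Ok(product);
        }

        [HttpPost]
        public IActionResult Create([FromBody] Product product)   // [FromBody] must be explicit
        {
            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState);   // manual validation check required in 2.0
            }
            _repository.Add(product);
            return CreatedAtAction(nameof(GetById), new { id = product.Id }, product);
        }
    }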

JSON Patch improvements

For JSON Patch we will add support for the test operator and for patching dictionaries with non-string keys.

Partial Tag Helper

Razor partial views are a convenient way to include some Razor content in a view or page. Today there are four different methods for rendering a partial on a page, each with different trade-offs and limitations (Html.Partial vs Html.RenderPartial, sync vs async). Rendering partials also suffers from a limitation: the prefix generated for rendered form elements, based on the given model, must be handled manually for each partial rendering.

The new partial Tag Helper makes rendering a partial straightforward and elegant. You can specify the model using model expression syntax and the partial Tag Helper will handle setting up the correct HTML field prefix for you:
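
A hedged example of the new Tag Helper in a Razor view (the partial name and model property are illustrative):

    <partial name="_ProductPartial" for="Product" />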

Razor UI in a class library

ASP.NET Core 2.1 will make it easier to build and include Razor based UI in a library and share it across multiple projects. A new Razor SDK will enable building Razor files into a class library project that can then be packaged into a NuGet package. Views and pages in libraries will automatically be discovered and can be overridden by the application. By integrating Razor compilation into the build, the app startup time is also significantly faster, while still allowing for fast updates to your Razor views and pages at runtime as part of an iterative development workflow.

SignalR

For ASP.NET Core 2.1 we are porting ASP.NET SignalR to ASP.NET Core to support real-time web scenarios. As previously announced, ASP.NET Core SignalR will also include a number of improvements, including a simplified scale-out model, a new JavaScript client with no jQuery dependency, a new compact binary protocol based on MessagePack, support for custom protocols, a new streaming response model, and support for clients based on bare WebSockets. You can start trying out ASP.NET Core SignalR today by checking out the samples.

WebHooks

WebHooks are a lightweight HTTP pattern for event notification across the web. WebHooks enable services to send event notifications over HTTP to registered subscribers. For 2.1 we are porting a subset of the ASP.NET WebHooks receivers to ASP.NET Core in a way that integrates with the ASP.NET Core idioms.

For 2.1 we plan to port the following receivers:

  • Microsoft Azure alerts
  • Microsoft Azure Kudu notifications
  • Microsoft Dynamics CRM
  • Bitbucket
  • Dropbox
  • GitHub
  • MailChimp
  • Pusher
  • Salesforce
  • Slack
  • Stripe
  • Trello
  • WordPress

To use a WebHook receiver in ASP.NET Core WebHooks you attribute a controller action that you want to handle the notification. For example, here’s how you can handle an Azure alert:
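
A hedged sketch based on this design (the attribute and payload type names may differ in the final packages):

    using Microsoft.AspNetCore.Mvc;
    using Microsoft.AspNetCore.WebHooks;

    public class AzureAlertController : ControllerBase
    {
        [AzureAlertWebHook]
        public IActionResult AzureAlert(string id, AzureAlertNotification data)
        {
            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState);
            }

            // React to the notification, e.g. inspect the alert name and status
            return Ok();
        }
    }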

Improvements for GDPR

The ASP.NET Core 2.1 project templates will include some extension points to help you meet some of your EU General Data Protection Regulation (GDPR) requirements.

A new cookie consent feature will allow you to ask for (and track) consent from your users for storing personal information. This can be combined with a new cookie feature where cookies can be marked as essential or non-essential. If a user has not consented to data collection, non-essential cookies will not be sent to the browser. You will still need to create the wording on the UI prompt and a suitable privacy policy that matches the GDPR analysis you or your company have performed, and implement the logic for determining under what conditions a given user should be asked for consent before writing non-essential cookies (the templates simply default to asking all users).

Additionally, the ASP.NET Core Identity templates for individual authentication now have a UI to allow users to download their personal data, along with the ability to delete their account entirely. By default, these UI areas only return personal information from ASP.NET Core identity, and perform a delete on the identity tables. As you add your own information into your database you should extend these features to also include that data according to your GDPR analysis.

Finally, we are considering extension points to allow you to apply your own encryption of ASP.NET Core Identity data. We recommend that you examine the encryption features of your database to see if they match your GDPR requirements before attempting to layer on your own encryption mechanisms. Both Microsoft SQL Server and SQL Azure, as well as Azure Table Storage, offer transparent encryption of data at rest, which does not require any changes to your application and is managed for you.

Security

HTTPS

With the increased focus on security and privacy, enabling HTTPS for web apps is more important than ever before. HTTPS enforcement is becoming increasingly strict on the web, and sites that don’t use it are considered, and increasingly labeled as, not secure. Browsers are starting to enforce that many new and existing web features must only be used from a secure context (Chromium, Mozilla). GDPR requires the use of HTTPS to protect user privacy. While using HTTPS in production is critical, using HTTPS during development can also help prevent related issues before deployment, like insecure links.

On by default

To facilitate secure website development, we are enabling HTTPS in ASP.NET Core 2.1 by default. Starting in 2.1, in addition to listening on http://localhost:5000, Kestrel will listen on https://localhost:5001 when a local development certificate is present. A suitable certificate is created when the .NET Core SDK is installed, or one can be set up manually using the new ‘dev-certs’ tool. We will also update our project templates to run on HTTPS by default and include HTTPS redirection and HSTS support.

HTTPS redirection and enforcement

Web apps typically need to listen on both HTTP and HTTPS, but then redirect all HTTP traffic to HTTPS. ASP.NET Core 2.0 has URL rewrite middleware that can be used for this purpose, but it could be tricky to configure correctly. In 2.1 we are introducing specialized HTTPS redirection middleware that intelligently redirects based on the presence of configuration or bound server ports.

Use of HTTPS can be further enforced using HTTP Strict Transport Security (HSTS), which instructs browsers to always access the site via HTTPS. ASP.NET Core 2.1 adds HSTS middleware that supports options for max age, subdomains, and the HSTS preload list.

Configuration for production

In production, HTTPS must be explicitly configured. In 2.1 we are introducing a default configuration schema for configuring HTTPS for Kestrel that is simple and straightforward. You can configure multiple endpoints, including the URLs and the certificate to use for HTTPS, either from a file on disk or from a certificate store:
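
The shape of that schema in appsettings.json looks roughly like this (endpoint names, certificate paths, and passwords are placeholders):

    {
      "Kestrel": {
        "Endpoints": {
          "Http": {
            "Url": "http://localhost:5000"
          },
          "HttpsFromFile": {
            "Url": "https://localhost:5001",
            "Certificate": {
              "Path": "testcert.pfx",
              "Password": "<certificate password>"
            }
          },
          "HttpsFromStore": {
            "Url": "https://localhost:5002",
            "Certificate": {
              "Subject": "localhost",
              "Store": "My",
              "Location": "CurrentUser"
            }
          }
        }
      }
    }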

Virtual authentication schemes

We’re adding something tentatively called “Virtual Schemes” to address two main scenarios:

  1. Making it easier to mix authentication schemes, like bearer tokens and cookie authentication in the same app (sample). Virtual schemes allow you to configure a dynamic authentication scheme that will use bearer authentication only for requests starting with /api, and cookie authentication otherwise
  2. Compose (mix/match) different authentication verbs (Challenge/SignIn/SignOut/Authenticate) across different handlers. For example, combining OAuth + Cookies, where you would have Challenge = OAuth, and everything else handled by cookies.

Identity

Identity as a library

ASP.NET Core Identity gives you a framework for setting up authentication and identity concerns for your site, including user registration, managing passwords, two-factor authentication, social logins and much more. However, setting up a site to use ASP.NET Core Identity requires quite a bit of code. While project templates help with generating this code, they don’t help with adding identity to an existing application and the code can’t easily be updated.

For 2.1 we will provide a default identity UI implementation as a library. You can add the default identity UI to your application by installing a NuGet package and then enable it in your Startup class:
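
A hedged sketch of what enabling it might look like (ApplicationDbContext and the connection string name are illustrative; the exact method names were tentative at the time):

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddDbContext<ApplicationDbContext>(options =>
            options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

        // Wires up Identity plus the default UI shipped in the NuGet package
        services.AddDefaultIdentity<IdentityUser>()
            .AddEntityFrameworkStores<ApplicationDbContext>();

        services.AddMvc();
    }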

Identity scaffolder

If you want all the identity code to be in your application so that you can change it however you want, you can use the new identity scaffolder to add the identity code to your application. All the scaffolded identity code is generated in an identity specific area folder so that it remains nicely separated from your application code.

Options improvements

To configure options with the help of configured services, today you can implement IConfigureOptions<T>. In 2.1 we’re adding convenience overloads to the Configure method that allow you to configure options using services without having to implement a separate class:
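
A hedged sketch of the shape this took, via the new AddOptions<TOptions>() builder (MyOptions and its RootPath property are illustrative):

    services.AddOptions<MyOptions>()
        .Configure<IHostingEnvironment>((options, env) =>
        {
            // Configure options using a resolved service, no IConfigureOptions<T> class needed
            options.RootPath = env.ContentRootPath;
        });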

Also, the new ConfigureOptions<TSetup> method lets you register a single class that configures multiple options (by implementing IConfigureOptions<T> multiple times):
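
And a sketch of ConfigureOptions<TSetup>; the options types configured here are illustrative:

    // One class configures several options types
    public class MyOptionsSetup : IConfigureOptions<CookiePolicyOptions>, IConfigureOptions<MvcOptions>
    {
        public void Configure(CookiePolicyOptions options) => options.CheckConsentNeeded = context => true;
        public void Configure(MvcOptions options) => options.RespectBrowserAcceptHeader = true;
    }

    // Registers MyOptionsSetup for every IConfigureOptions<T> it implements
    services.ConfigureOptions<MyOptionsSetup>();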

HttpClientFactory

The new HttpClientFactory type can be registered and used to configure and consume instances of HttpClient in your application. It provides several benefits:

  1. Provide a central location for naming and configuring logical instances of HttpClient. For example, you may configure a “github” client that is pre-configured to access GitHub and a default client for other purposes.
  2. Codify the concept of outgoing middleware via delegating handlers in HttpClient, and implement Polly-based middleware to take advantage of it.
  3. Manage the lifetime of HttpMessageHandlers to avoid common problems that can be hit when managing HttpClient lifetimes yourself.

HttpClient already has the concept of delegating handlers that can be linked together for outgoing HTTP requests. The factory will make registering these per named client more intuitive, as well as implement a Polly handler that allows Polly policies to be used for retries, circuit breakers, etc. Other “middleware” could also be implemented in the future, but we don’t yet know when that will be.

In this first example we will configure two logical HttpClient configurations, a default one with no name and a named “github” client.

Registration in Startup.cs:
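
A hedged sketch of the registration (the GitHub headers are illustrative):

    public void ConfigureServices(IServiceCollection services)
    {
        // Default client with no name
        services.AddHttpClient();

        // Named "github" client pre-configured for the GitHub API
        services.AddHttpClient("github", c =>
        {
            c.BaseAddress = new Uri("https://api.github.com/");
            c.DefaultRequestHeaders.Add("Accept", "application/vnd.github.v3+json");
            c.DefaultRequestHeaders.Add("User-Agent", "HttpClientFactory-Sample");
        });

        services.AddMvc();
    }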

Consumption in a controller:
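
A hedged sketch of the consumption side (the controller and the GitHub route are illustrative):

    public class ValuesController : Controller
    {
        private readonly IHttpClientFactory _httpClientFactory;

        public ValuesController(IHttpClientFactory httpClientFactory)
        {
            _httpClientFactory = httpClientFactory;
        }

        public async Task<IActionResult> Index()
        {
            // CreateClient("github") returns a client with the configuration registered above
            var client = _httpClientFactory.CreateClient("github");
            var json = await client.GetStringAsync("/repos/aspnet/home/issues");
            return Ok(json);
        }
    }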

In addition to using strings to differentiate configurations of HttpClient, you can also leverage the DI system using what we are calling a typed client:

A class called GitHubService:
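
A hedged sketch of such a typed client (the base address, headers, and method are illustrative):

    public class GitHubService
    {
        public HttpClient Client { get; }

        public GitHubService(HttpClient client)
        {
            client.BaseAddress = new Uri("https://api.github.com/");
            client.DefaultRequestHeaders.Add("User-Agent", "HttpClientFactory-Sample");
            Client = client;
        }

        public async Task<string> GetAspNetDocsIssues()
        {
            return await Client.GetStringAsync("/repos/aspnet/docs/issues");
        }
    }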

This type can have behavior and completely encapsulate HttpClient access if you wish, or just be used as a strongly typed way of naming an HttpClient as shown here.

Registration in Startup.cs:
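
A hedged sketch of the registration; the commented Polly lines are pseudocode, per the note below:

    services.AddHttpClient<GitHubService>();

    // Pseudocode: a Polly-style retry policy might eventually be attached like this
    // services.AddHttpClient<GitHubService>()
    //         .AddTransientHttpErrorPolicy(p => p.RetryAsync(3));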

NOTE: The Polly section of this code sample should be considered pseudocode at best. We haven’t built this yet and as such are not sure of the final shape of the API.

Consumption in a Razor Page:
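
A hedged sketch of a page model consuming the typed client:

    public class IndexModel : PageModel
    {
        private readonly GitHubService _gitHubService;

        public IndexModel(GitHubService gitHubService) => _gitHubService = gitHubService;

        public string Issues { get; private set; }

        public async Task OnGetAsync()
        {
            // The typed client is injected directly; no factory lookup needed
            Issues = await _gitHubService.GetAspNetDocsIssues();
        }
    }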

Kestrel

Transport Extensibility

The current implementation of the underlying libuv connection semantics has been decoupled from the rest of Kestrel and abstracted away into a new Transport abstraction. While we continue to ship with libuv as the default transport, we are also adding support for a new transport based on the socket types included in .NET.

Socket Transport

We are continuing to invest in a new socket transport for Kestrel as we believe it has the potential to be more performant than the existing libuv transport. While we aren’t quite there yet, you can still easily switch to the new socket transport and try it out today.

Default configuration

We are adding support to Kestrel for configuring endpoints and HTTPS settings (see HTTPS: Configuration for production).

ASP.NET Core Module

The ASP.NET Core Module (ANCM) is a global module for IIS that acts as a reverse proxy from IIS to your Kestrel backend.

Version agility

Since ANCM is a global singleton, it can’t version or ship with the same agility as the rest of ASP.NET Core. In 2.1, we’ve refactored ANCM into two pieces: the shim and the request handler. The shim will continue to be installed as a global singleton, but the request handler will ship as part of the new Microsoft.AspNetCore.Server.IIS package, which can be referenced directly by your application. This will allow you to use different versions of ANCM with different app deployments.

In-process hosting

In 2.1, we’re adding a new in-process mode to ANCM for .NET Core based apps where the runtime and your app are both loaded inside the IIS worker process (w3wp.exe). This removes the performance penalty of proxying requests over the loopback adapter. Our preliminary tests show performance improvements of around ~4.4x compared to running out-of-process. Configuring your app to use the in-process model can be done using `web.config`, and it will eventually be the default for new applications targeting 2.1:
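
A plausible sketch of the web.config setting (the exact attribute name and value were still tentative at the time; MyApp.dll is a placeholder):

    <system.webServer>
      <aspNetCore processPath="dotnet" arguments=".\MyApp.dll" hostingModel="inprocess" />
    </system.webServer>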

Alternatively, you can set a project property in your project file:
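
And a sketch of the equivalent project property (again tentative at the time):

    <PropertyGroup>
      <AspNetCoreHostingModel>InProcess</AspNetCoreHostingModel>
    </PropertyGroup>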

New Microsoft.AspNetCore.App package

ASP.NET Core 2.1 will introduce a new meta-package for use by applications: Microsoft.AspNetCore.App. The new meta-package differs from the existing meta-package in that it reduces the number of dependencies of packages not owned or supported by the ASP.NET or .NET teams to just those deemed necessary to ensure the major framework features function. We will update project templates to use the new meta-package. The existing Microsoft.AspNetCore.All meta-package will continue to be made available throughout the 2.x lifecycle. For additional details see https://github.com/aspnet/Announcements/issues/287.

In conclusion

We hope you are as excited about these features and improvements as we are! Of course, it is still early in the release and these plans are subject to change, but you can follow along with the latest status of these features by tracking the action on GitHub. Major updates and changes will be posted on the Announcements repo. You can also get live updates and participate in the conversation by watching the weekly ASP.NET Community Standup at https://live.asp.net, and read about the roadmaps for .NET Core 2.1 and EF Core 2.1 on the .NET team blog. Your feedback is welcome and appreciated!
