The .NET 6 project started in late 2020, as Microsoft was finishing .NET 5. .NET 5 proved to be a very successful base to start from: it was the first release to tackle .NET platform unification, the first of the annual November releases, the first “all remote” team release (due to the pandemic), and, most importantly, it was rapidly and broadly adopted. The .NET 5 release cycle taught Microsoft how to better span major investments across multiple releases, and that practice continues into .NET 6. The new release delivers major performance improvements, enables new scenarios for building client apps for multiple operating systems, adds support for Apple Silicon chips, and provides much faster and more responsive development tools with hot reload. At the same time, it improves on existing scenarios.
.NET users see keeping up with .NET innovation as a key ingredient of their business success: it effectively expands their developer workforce to include the .NET team and lets them take advantage of performance improvements, observability, and new language features. Microsoft thinks that .NET developers will be eager to upgrade to .NET 6.
This article is focused on the fundamentals of the release, including the runtime, libraries, and SDK. It's these fundamental features that you experience and interact with almost every day, through new library APIs, language features, runtime plumbing, and SDK capabilities. The article provides a look at only a handful of improvements and new capabilities. You'll want to check out the .NET Team blog (https://devblogs.microsoft.com/dotnet/) to learn about the whole release.
Unifying the .NET Platform as net6.0-everything
The top headline of the release (and this article) is unifying the .NET platform. Looking several years back, the .NET Framework with Windows was on one side, and Xamarin with Android and Apple operating systems was on the other. They were both “.NET” but were defined more by their differences than their commonality. .NET 6 unifies the experience and product into a single offering.
The following items unify the platform:
- Uniform runtime and library implementation and common APIs
- Symmetric model for targeting operating systems, like Android and Windows
- Support for all of the relevant operating systems and environments
- Tools that enable building all app types
- Opt-in targeting of additional experiences, which significantly limits the time and disk space it takes to use .NET on your computer
- New functionality is available to all .NET developers at the same time
Let's take a look at a number of templates to better demonstrate what you'll see in .NET 6.
Cross-Platform Model
I'll start with the Console template (class library is the same: https://docs.microsoft.com/en-us/dotnet/core/tutorials/library-with-visual-studio?pivots=dotnet-5-0) because it's the baseline by which you'll judge all others. You can think of net6.0 as the cross-platform target framework moniker (TFM).
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
  </PropertyGroup>
</Project>
Note: All unrelated content has been removed in these examples. The actual templates are longer and include other configuration, like enabling nullability. Those changes are also important but aren't covered in this article.
Apps that target the net6.0 TFM will work on all supported operating systems and CPU architectures. The APIs exposed via the net6.0 TFM are designed to work everywhere, like HttpClient. There are platform compat analyzers that warn you in the few cases where APIs are OS-specific.
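For example, Console.Beep(frequency, duration) is annotated as Windows-only. Here's a minimal sketch of the guard pattern those analyzers recognize (the fallback behavior is just an illustration):

using System;

// Console.Beep(int, int) is Windows-only, so the platform compat analyzer
// (CA1416) warns unless the call is guarded like this.
if (OperatingSystem.IsWindows())
{
    Console.Beep(440, 500); // guarded: no analyzer warning
}
else
{
    Console.WriteLine("\a"); // portable fallback: terminal bell character
}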
There's nothing surprising in this template. It has a reference to the base SDK: Microsoft.NET.Sdk. As an aside, the SDK reference is the reason this project format is often called “SDK-style.” The project also declares that it's a .NET 6 app by specifying a dependence on the net6.0 target framework.
As an aside, the net6.0 TFM, and net5.0 before it, satisfy the same purpose as .NET Standard. .NET Standard is still supported but Microsoft is no longer making new versions. You can think of net6.0 as your new .NET Standard, if you'd like. One of the major improvements over .NET Standard is that it works for apps, not only libraries.
ASP.NET Core apps are nearly identical but reference a different SDK: Microsoft.NET.Sdk.Web. That's the mechanism that provides Web apps with additional APIs and build-time functionality (like Razor page compilation) as compared to Console apps.
Operating System API Targeting
In terms of existing templates, Windows Forms and WPF apps introduce operating system targeting.
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net6.0-windows</TargetFramework>
    <UseWindowsForms>true</UseWindowsForms>
  </PropertyGroup>
</Project>
There are two differences to call out. The first is that Microsoft has extended the target framework to describe and include operating system APIs. The change was first made in .NET 5. This is apparent in net6.0-windows because Windows is an operating system. Although Windows Forms and WPF aren't Windows APIs, they're available only on Windows and rely heavily on Windows technologies. As a result, Microsoft chose to expose them with the Windows-specific TFM. Windows APIs, including Windows Forms and WPF, aren't available if you target the cross-platform net6.0 target framework.
The second change is that .NET 6 doesn't expose application-specific SDKs. You'll notice that the Windows Forms project uses the base Microsoft.NET.Sdk and also sets the UseWindowsForms property to true. WPF works the same way. The UseXYZ property tells the base SDK which additional SDKs should be imported as an implementation detail. All the same SDKs exist as before, but they're no longer a formal part of the project file. This is the new model going forward. It may be applied to ASP.NET Core templates in a future release.
This new model was created to enable multi-targeting. SDKs don't play nicely with multi-targeting, at least not with the way they're currently exposed as a singular attribute value. They also don't work well for composing multiple technologies. For example, imagine that you want to expose a Web endpoint from a client app. Which SDK would you put at the top of the file? With the new model, that problem goes away.
Before I switch to looking at other operating systems, let's take a closer look at the Windows TFM. The new net6.0-windows has no version number, yet .NET 6 supports multiple Windows versions. The version-less TFM (as it relates to the operating system) targets the lowest-supported operating system version. In this case, that's Windows 7. If you want access to WinRT APIs, you need to target Windows 10. You can use net6.0-windows10.0.17763.0 to target Windows 10, version 1809, for example.
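At runtime, you can pair that targeting with a version check so the same binary degrades gracefully on older Windows versions. Here's a minimal sketch (the printed messages are placeholders for whichever version-gated APIs you actually need):

using System;

// Guard version-gated (e.g., WinRT) API usage; build 17763 maps to Windows 10,
// version 1809.
if (OperatingSystem.IsWindowsVersionAtLeast(10, 0, 17763))
{
    Console.WriteLine("Windows 10, version 1809 (or later) APIs are available.");
}
else
{
    Console.WriteLine("Running on an older Windows version.");
}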
Expanding Supported Operating Systems
Now that you've taken a look at the more familiar Windows experience, check out how the same model plays out for Android, macOS, and iOS. The spoiler is that it's the same.
The following are the TFMs for these OSes:
- net6.0-android
- net6.0-maccatalyst
- net6.0-ios
These TFMs are version-less, just like net6.0-windows. They are all equivalent to the lowest-supported versioned TFM for each of those operating systems. For example, net6.0-ios and net6.0-ios14 are equivalent. For .NET 7, perhaps net7.0-ios and net7.0-ios15 will similarly match.
You may not be familiar with Mac Catalyst. It's a newer macOS application type defined by Apple and a variant of iOS (including iOS UI APIs) that's optimized for desktop apps. Its primary purpose is to make source code sharing between iOS and macOS platforms easier and to provide macOS developers with access to the newest Apple APIs (which have historically only been available with iOS). For .NET 6, Microsoft decided to prioritize Mac Catalyst over Mac (classic). There's no support with .NET 6 for creating non-Mac Catalyst Mac apps and no net6.0-macos target framework.
You can see this all coming together with a .NET Multi-platform App UI (.NET MAUI) app.
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFrameworks>net6.0-android;net6.0-ios;net6.0-maccatalyst</TargetFrameworks>
    <UseMaui>true</UseMaui>
  </PropertyGroup>
</Project>
This example is taken from the Weather '21 app that you can find on GitHub, here: https://github.com/davidortinau/WeatherTwentyOne.
You can see a few design points at play:
- The app multi-targets over three target frameworks.
- The SDK is uniform and coherent across all three because it's the base SDK.
- The app declares that it's a .NET MAUI app - with UseMaui - across all target frameworks, which results in MAUI-specific build tasks and other configuration.
You can see that there's added support for Android, iOS, and macOS with .NET 6 (previously all supported by Xamarin) and that they're modeled in the same way as Windows. These new operating systems have first-class support at the most fundamental levels of the .NET SDK.
macOS and Windows Arm64
Continuing with client operating systems, there's added support for Arm64 CPUs on macOS and Windows. For macOS, that's new with .NET 6; for Windows, Microsoft is building on .NET 5 capabilities. Both Arm64 operating systems offer x64 emulation which, on one hand, is zero cost for Microsoft but, on the other, has caused Microsoft to significantly rethink the .NET installation model and the CLI support for architecture targeting.
macOS Arm64
Let's start with macOS. You've probably heard about Apple's move to Apple Silicon chips, called (in this timeframe) “M1” and “M2.” They are essentially the desktop version of the A-series iPhone chips, which are all the way up to (in this timeframe) “A14” and “A15”. Microsoft has had support for Arm64 (on Linux) since the .NET Core 3.0 release, and Arm32 before that. That all helped, but Apple required implementation of a couple of security-oriented features above and beyond the existing .NET Arm64 capability.
The primary requirement was adding support for the W^X memory feature, which was already on Microsoft's backlog. Memory pages (think virtual memory) can (in theory) be marked with any or all of three states: read, write, and execute. Think of these as permissions or capabilities. When running on Apple Silicon chips, macOS doesn't allow a memory page to be configured for both write and execute. This prevents an attacker from generating code at runtime and then causing the application to execute it. That's why the feature is called “write xor execute,” or W^X for short. Pages can be read-write or read-execute but never write-execute or read-write-execute. Some parts of the runtime, like the JIT, relied on read-write-execute pages and have since been adapted to new approaches that only use the allowable memory page types.
For .NET 6, this memory-related feature is enabled by default for macOS on Apple Silicon computers, and is otherwise opt-in. Microsoft expects it to be enabled by default for all environments with .NET 7. It's a good security feature and will benefit all .NET developers and deployments. There's a roadmap of defense-in-depth features, and others are planned for .NET 7 and future releases to further secure applications.
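In other environments, the opt-in is an environment variable. Here's a hedged sketch of what that looks like; treat the exact variable name, DOTNET_EnableWriteXorExecute, as my assumption (it follows the runtime's usual DOTNET_* configuration convention):

DOTNET_EnableWriteXorExecute=1 dotnet run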
X64 Emulation
The most significant Arm64-related change is x64 emulation, which is available on both macOS and Windows (on Arm64 computers). The primary issue is that x64 emulation (on both operating systems) is a very narrow capability (focused nearly exclusively on instruction set emulation), as compared to the broad WoW64 subsystem on Windows that supports 32-bit x86 apps including file and registry virtualization. That means that .NET and other development platforms are responsible for the bulk of the user experience for supporting x64 emulation.
First, the team needed to enable developers to install both Arm64 and x64 .NET builds on the same computer. At the start of the release, and at the time of writing, these builds collide (in multiple ways). That's not a workable model. Microsoft has been working on a plan - documented at dotnet/designs - for enabling Arm64 and x64 builds to coexist and to be insensitive to the order of install.
Going forward, it's expected that most developers (on Arm64 macOS and Windows computers) will exclusively install the Arm64 .NET SDK (which will also include Arm64 runtimes for that version) for building code and then install and use whichever additional Arm64 and x64 runtimes they want to use for running and testing it. For developers, x64 runtime usage (on Arm64 computers) will probably be limited to ensuring compatibility with x64 production targets (both cloud and client) and validating x64-specific bugs. Most x64 validation is expected to be performed by x64-capable continuous integration (CI). Microsoft expects this to be common for many years. A common need for the x64 SDK on Arm64 computers isn't expected, although it will be available.
Microsoft also expects that running x64-only apps on Arm64 will be a popular end-user scenario.
The .NET CLI syntax has been extended to make targeting x64 easier with the Arm64 SDK. The following is an example of that.
Here's the .NET 6 app.
using System.Runtime.InteropServices;
Console.WriteLine($"Hello, {RuntimeInformation.OSArchitecture}!");
Assuming the .NET 6 Arm64 SDK is installed, the app runs as Arm64 by default. Let's validate that.
rich@M1 % dotnet build
rich@M1 % ./bin/Debug/net6.0/yyzapp
Hello, Arm64!
Using the Arm64 SDK again, you can also target the app to x64 with the new -a (architecture) switch to produce an x64 app instead of the default native architecture. This assumes that the .NET 6 x64 runtime is installed, because otherwise the app wouldn't run.
rich@M1 % dotnet build -a x64
rich@M1 % ./bin/Debug/net6.0/osx-x64/yyzapp
Hello, X64!
The same thing works with dotnet run and dotnet test.
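For example, dotnet run accepts the same switch:

rich@M1 % dotnet run -a x64
Hello, X64!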
The goal with x64 emulation was to deliver an experience that was intuitive to use and could be driven entirely from the Arm64 SDK. Microsoft focused on the Arm64 SDK because most developers have that anyway and because it's faster, by definition, given that it isn't emulated. The .NET build system is a significant body of software and it's going to run much faster natively on Apple Silicon chips.
Effect on Containers?
You might be wondering how all of this affects containers. The answer is: Not a lot.
rich@M1 ~ % docker run --rm mcr.microsoft.com/dotnet/samples
Debian GNU/Linux 10 (buster)
OSArchitecture: Arm64
By default, Docker runs in Arm64 mode on Apple Silicon, the native architecture of the computer. Just like on Mac Intel computers, Docker uses Linux images, so no change there. You can also run x64 container images using QEMU-based emulation. Microsoft doesn't support .NET running in QEMU (on any operating system). That said, I'll at least show you how it works, using the --platform switch, so you can try it out.
rich@M1 ~ % docker run --rm --platform linux/amd64 ubuntu bash -c "cat /etc/os-release | grep PRETTY && uname -a"
PRETTY_NAME="Ubuntu 20.04.2 LTS"
Linux a881a5627af8 5.10.47-linuxkit x86_64 x86_64 x86_64 GNU/Linux
System.Text.Json Source Generators
One of the goals, if not the most fundamental goal, of high-level programming languages is to compile human-centric abstractions down to machine-centric optimized (and safe) code. Aspects of .NET do just that, like the garbage collector, the thread pool, and async/await. Those features have a well-defined contract with the rest of the system. For the System.Text.Json serializer (and really any serializer), it's a lot harder to separate the human-centric API from the runtime execution model, in large part due to reflection. Reflection is both an incredibly enabling technology and a damned curse. Source generators, which were new in .NET 5, offer a way to break that formal coupling.
Reflection has at least two challenges. The first is that pervasive use is bad for performance (startup, throughput, and memory). The second is that it makes assembly trimming difficult, which is another dimension of performance. The assembly trimmer - and any software like it - makes decisions statically, based on what it can see and trust in assembly metadata. Reflection is inherently late-bound, such that its complete operation is not recorded in metadata, which in turn prevents the assembly trimmer from doing a great job.
With this new approach, you can write the same high-level serialization code as normal and then opt into using the source generator, which generates a custom serialization implementation with static (early-bound) code, using low-level primitives like Utf8JsonWriter and no reflection.
Zooming out, the System.Text.Json serializer is perhaps the best example of a relatively high-level .NET libraries component that takes advantage of and supports many new features while maintaining and improving performance. Recent examples are IAsyncEnumerable, records, and nullability. These improvements make the serializer increasingly easier to use and more capable. They also inform these low-level features because the team itself is an important consumer.
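To make one of those concrete, here's a minimal sketch of streaming deserialization using the DeserializeAsyncEnumerable API added in .NET 6, paired with a record type (the data.json file and its contents are assumptions for the example):

using System;
using System.IO;
using System.Text.Json;

// Streams items out of a JSON array one at a time instead of buffering the
// whole payload. Assumes data.json contains something like:
// [{"Message":"Hello"},{"Message":"world"}]
using FileStream stream = File.OpenRead("data.json");

await foreach (JsonMessage? item in
    JsonSerializer.DeserializeAsyncEnumerable<JsonMessage>(stream))
{
    Console.WriteLine(item?.Message);
}

internal record JsonMessage(string Message);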
Baseline Case
Let's start with the baseline case for using the System.Text.Json serializer. It's important to start with this case to demonstrate how easy it is to switch to the new optimized pattern.
using System.Text.Json;
using System.Text.Json.Serialization;
JsonMessage message = new("Hello, world!");
// baseline case for using JsonSerializer
string json = JsonSerializer.Serialize<JsonMessage>(message);
Console.WriteLine(json);
// Message type
internal record JsonMessage(string Message);
This code results in the following output.
{"Message":"Hello, world!"}
The serializer uses reflection to discover the Message property and then to extract its value from the associated getter. That works, but it isn't optimal.
Optimized Serialization
The following code uses the source generator and produces much better results because it doesn't use reflection; instead, it uses property accessor calls on JsonMessage and generates the JSON with Utf8JsonWriter directly.
// relies on source generation
string optimizedJson = JsonSerializer.Serialize(message, JsonContext.Default.JsonMessage);
Console.WriteLine(optimizedJson);
// Source generator definition
[JsonSerializable(typeof(JsonMessage))]
internal partial class JsonContext : JsonSerializerContext
{
}
I've shared just the changes to the program. The call to JsonSerializer.Serialize is switched to use a different signature, and the partial JsonContext class is new. Otherwise, it's all the same. Note that the JsonContext name is arbitrary. You can choose any name for the class.
The magic is three-part:
- JsonContext is a partial class, which means the source generator can generate .g.cs files that fill out the rest of the class.
- The JsonContext class provides a place to hang an attribute that's global to the program (as opposed to a single serialization call) and that links a type (in this case, JsonMessage) and any serialization options (none are provided in this example) to the source-generated code.
- JsonSerializerContext defines and enforces (by virtue of inheritance) the shape that the serializer expects from (in this case) JsonContext.
That's pretty reasonable for a new scheme with so much benefit. You can see that it doesn't require much to adapt existing code. This new model is generally recommended, and is something you should strongly consider for performance-sensitive scenarios that process JSON content.
On the TechEmpower Caching Benchmark, a 40% increase in throughput was observed solely by moving to source generation for JSON serialization. Table 1 gives you a sense of how much reflection can cost and how much computers love executing static code.
Microsoft has also validated that IL trimming is improved when using source generation. In particular, trimming is able to cut the size of System.Text.Json.dll (for self-contained apps) in half. It also makes the assembly trimmer easier to use in more aggressive trimming modes because all code (at least as it relates to System.Text.Json) is statically reachable.
This description has been entirely focused on serialization. Deserialization has also been improved, but not to the same degree. For deserialization (and you can do this with serialization, too), you can opt into using source generation to produce a metadata model that's used at runtime. This is more like having a map, but not the route. Deserialization support on par with serialization's might be added in a future release.
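In code, the metadata-based model looks just like the optimized serialization example from earlier; here's a minimal sketch that reuses the same JsonContext:

// Deserialization through the source-generated metadata model: the serializer
// gets its type metadata from JsonContext instead of discovering it via
// reflection.
JsonMessage? roundTripped = JsonSerializer.Deserialize(
    "{\"Message\":\"Hello, world!\"}",
    JsonContext.Default.JsonMessage);
Console.WriteLine(roundTripped?.Message);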
JIT Compiler
Performance has been a big part of every .NET release. Microsoft publishes a post on the .NET Team blog every year on the latest improvements; I recommend that everyone take a look at the “Performance improvements in .NET 6” post (https://devblogs.microsoft.com/dotnet/performance-improvements-in-net-6/). I'll provide a short summary of some of the performance improvements in the JIT.
Inlining
One of the most effective performance optimization techniques in the just-in-time compiler is inlining. The runtime gets the JIT to compile one method, and that method then calls into others that also need to be JITed. Method calls are not free, particularly if they are virtual or (worse yet) interface calls (which are common). The JIT can erase method calls by pulling a method body (that would be called) into the current one as inline code. For methods that get called a lot, this performance optimization can help a lot.
The first example in the .NET 6 performance post (https://devblogs.microsoft.com/dotnet/performance-improvements-in-net-6/) describes a case where Utf8Formatter.TryFormat improved significantly in this release without any code changes. Surely that's impossible. Throughput improved by 22%, and the generated assembly code shrank to roughly a third of its former size, as seen in Table 2.
The Utf8Formatter.TryFormat method has a one-line implementation that calls the internal TryFormatInt64 method. In .NET 6, that method was marked with the MethodImplOptions.AggressiveInlining attribute, which greatly increases the chance that the method will be inlined. You can think of this attribute as the .NET performance optimization that's responsible for the double-digit improvement to TryFormat and likely other callers.
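The pattern itself is easy to apply in your own code. Here's a hedged sketch (the Formatter class and its wrapper method are hypothetical; the real change was inside Utf8Formatter):

using System;
using System.Runtime.CompilerServices;

internal static class Formatter
{
    // A thin wrapper marked AggressiveInlining so the JIT is strongly
    // encouraged to inline it and optimize through the inner call.
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static bool TryFormat(long value, Span<char> destination, out int charsWritten)
        => value.TryFormat(destination, out charsWritten);
}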
It gets better. As a result of inlining, the JIT is able to see through the method call and choose to copy the method body in full or in part. In this case, the JIT is able to see and process branches (if and switch statements) in the method implementation and choose to inline just a single method call that would have been the final and only observable result of actually running all the code. That's a huge benefit if this method is called a lot.
The JIT isn't really “running code” but it sure seems like it. It can reason about code and safely skip operations that are unnecessary while provably producing the same results. There are lots of compiler optimizations like this.
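Here's a hypothetical illustration of that kind of reasoning. If Describe is inlined at a call site with a constant argument, the JIT can fold the switch and keep only the branch that's actually reachable:

using System;

// When inlined with the constant 8, the switch below can be folded away,
// leaving effectively Console.WriteLine("long") at the call site.
Console.WriteLine(Describe(8));

static string Describe(int byteWidth) => byteWidth switch
{
    4 => "int",
    8 => "long",
    _ => "other",
};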
Devirtualization
Another big win from inlining is devirtualization, particularly for interfaces. Imagine a method is inlined that takes a collection interface like IList<T> or IEnumerable<T>. At this point, the code is now specific to the parent method and not subject to being called by arbitrary callers. As a result, the JIT may be able to reliably specialize the code to a single class and type of T, resulting in much faster direct calls instead of interface dispatch.
Here is an example in the performance post that does this.
public int GetLength() => ((ITuple)(5, 6, 7)).Length;
Table 3 shows the ValueTuple`3 type being called through the ITuple interface that it implements.
The JIT inlines and devirtualizes the .Length property call in .NET 5 and .NET 6.
This improvement is absolutely impressive and demonstrates the value of this style of optimization. However, this particular optimization only applies when a method can be inlined and then specialized based on the narrow use of the code. Methods are generally not inlined (for good reason). As part of .NET 6, Microsoft has developed a completely different technology called dynamic PGO that can devirtualize calls even when they aren't inlined. That enables much broader performance benefits.
If you have familiarity with devirtualization, you'll know that a code generator needs to be correct when it devirtualizes an interface or other virtual call. If not, the program will have unpredictable results or (more likely) crash, because it might specialize, for example, an ICollection<T> argument as List<T>, but then IList<T> or ImmutableArray<T> is passed in next. Clearly, you shouldn't risk crashing apps to get a performance win.
Dynamic PGO includes a new feature called guarded devirtualization. It's a sort of “zero risk gamble” performance feature. Based on observation, it can see that your code almost always passes List<T> to a method that takes an ICollection<T>. It then generates a fast path for List<T> and a slow path for any other ICollection<T>. If dynamic PGO is right most of the time, it can provide a significant performance win. If the gamble proves wrong more than it expects, it can skip the preferred devirtualized call and go back to the normal virtualized call for all cases.
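Expressed as C#, the shape of what guarded devirtualization produces looks something like this conceptual sketch (not actual JIT output):

using System;
using System.Collections.Generic;

// Conceptual sketch of guarded devirtualization.
static int Sum(ICollection<int> values)
{
    int sum = 0;
    if (values is List<int> list)
    {
        // Fast path: the concrete type PGO observed most often; these calls
        // can be devirtualized (and potentially inlined).
        foreach (int v in list) sum += v;
    }
    else
    {
        // Slow path: normal interface dispatch for any other ICollection<int>.
        foreach (int v in values) sum += v;
    }
    return sum;
}

Console.WriteLine(Sum(new List<int> { 1, 2, 3 }));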
Let's see how this feature plays out with IEnumerable<int> with a call to MoveNext(), as captured by the benchmark shown in Table 4.
You can see that PGO results in bigger code size because it requires more machinery to work correctly and safely (the fast and slow paths), but wow! The drop in execution time is worth the price of admission. IEnumerable<T> is a particularly apt example because it's used everywhere.
Dynamic PGO is a fully supported opt-in feature in .NET 6 and worth trying out (by setting the DOTNET_TieredPGO environment variable to “1”). Microsoft plans to enable dynamic PGO by default in .NET 7. It's a very exciting feature with a lot of potential for improving performance.
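Trying it out looks like this, in the same style as the earlier command-line examples:

rich@M1 % DOTNET_TieredPGO=1 dotnet run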
Closing
.NET 6 is perhaps the most foundational release since .NET Core 1.0. It includes support for major new hardware platforms, broader use of source generation, another jump forward in performance, and dozens of features not mentioned in this article. .NET 6 is a good reminder that Microsoft is investing in .NET for the long term, across both client and cloud. If you build cloud or client apps - and particularly if you build both - you've got a lot of strong options with .NET. Looking ahead, what comes next looks even better as new investments come to fruition. Like it's always been, it's a great time to be a .NET developer.
Table 1: TechEmpower Caching Benchmark (with source generation)
Version | Requests/sec | Requests
.NET 5 | 243,000 | 3,669,151
.NET 6 | 260,928 | 3,939,804
.NET 6 + JSON source generation | 364,224 | 5,499,468
Table 2: TryFormat Performance
Method | Runtime | Mean | Ratio | Code Size
Format | .NET 5.0 | 13.21 ns | 1.00 | 1,649 B
Format | .NET 6.0 | 10.37 ns | 0.78 | 590 B
Table 3: Interface dispatch performance
Method | Runtime | Mean | Ratio | Code Size | Allocated
GetLength | .NET Framework 4.8 | 6.3495 ns | 1.000 | 106 B | 32 B
GetLength | .NET Core 3.1 | 4.0185 ns | 0.628 | 66 B | --
GetLength | .NET 5.0 | 0.1223 ns | 0.019 | 27 B | --
GetLength | .NET 6.0 | 0.0204 ns | 0.003 | 27 B | --
Table 4: Devirtualization performance with PGO
Method | Mean | Code Size
PGO Disabled | 1.905 ns | 30 B
PGO Enabled | 0.7071 ns | 105 B