I've probably written a thousand async-await functions and you probably have too. But I have a confession: It wasn't until quite recently that I understood what async-await actually does and what purpose it serves. In this article, I want to take you through my discovery process and perhaps give you some insight into this very common, but often misunderstood, C# language feature.
What Confused Me about Async-Await
When you think about asynchronous programming, the basic idea is very simple: You want your code to do more than one thing at a time. Rather than block and wait on some function call that might take a while, you call it asynchronously so that the code can go work on something else and then process the results of that function when they arrive. It's a clear and intuitive idea, but to really get it in focus, let's compare an asynchronous call pattern to a synchronous one.
Synchronous Call Patterns
Let's say you have a function that needs to call two other functions in its body. In Figure 1, you can see what this looks like if the primary function calls the sub functions using a synchronous call pattern. The calls are made serially, one after the other, and the Main Function waits for Sub Function 1 to complete before calling Sub Function 2.

Asynchronous Call Patterns
Now let's say that Sub Function 1 and Sub Function 2 are independent of each other. In other words, Sub Function 2 does not need the result of Sub Function 1 and vice-versa. This means they can be called in any order and, in fact, can safely run in parallel. To achieve this, you decide to call these functions asynchronously. You kick them both off and wait for them both to return. Figure 2 shows what this call pattern looks like:

In this particular sequence diagram, Sub Function 2 happens to complete first, but it doesn't have to, and it doesn't really matter. The point is that you have parallelized the code. The two functions run at the same time.
This is simple and intuitive. Unfortunately, this is not what the C# async-await pattern does, and that's what I found so confusing.
An Await Statement Blocks Local Execution
To understand why I was so confused, let's translate the asynchronous call pattern into C#. Imagine that both Sub Function 1 and Sub Function 2 call some REST API endpoint to retrieve data that the Main Function eventually needs. Here's the code:
async Task MainFunction()
{
    await SubFunction1();
    await SubFunction2();
}

async Task SubFunction1()
{
    HttpResponseMessage response =
        await _myHttpClient.GetAsync("http://someuri.net");
}

async Task SubFunction2()
{
    HttpResponseMessage response =
        await _myHttpClient.GetAsync("http://another.net");
}
If you run this code, you'll see that the call to SubFunction1 blocks the local code execution. The call to SubFunction2 doesn't start until the first function finishes. In other words, this looks like the synchronous call pattern in Figure 1, not the asynchronous call pattern in Figure 2.
When I first saw this, my reaction was, “Wait, what?!?!?” I fully expected my code to keep running with the async task going in the background. When I saw that my code blocked, I was honestly kind of shocked and I thought, “What's the point? What is this even doing?” Maybe you've had the same feeling or reaction.
What Async-Await Actually Does
The reality is that async-await does do something very important. To understand what is going on here, you need to take some time to understand Tasks and the Thread Pool.
Tasks and the Thread Pool
In C#, an async-await call is shorthand for creating, running, and awaiting a Task, and as you may know, Tasks run on Thread Pool Threads. Let's walk this through step-by-step to see how it works.
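To make that shorthand concrete, here's a minimal sketch (using a hypothetical FetchDataAsync helper) showing that awaiting a call directly and holding onto the Task first are two spellings of the same thing: the method returns a Task immediately, and the await attaches a continuation to it.

```csharp
using System;
using System.Threading.Tasks;

class TaskShorthandDemo
{
    // Hypothetical async helper standing in for any Task-returning method.
    static async Task<int> FetchDataAsync()
    {
        await Task.Delay(50); // simulate an IO-bound wait
        return 42;
    }

    static async Task Main()
    {
        // Shorthand form: create, run, and await in one expression.
        int a = await FetchDataAsync();

        // Expanded form: the call returns a Task right away;
        // the await registers a continuation on that Task.
        Task<int> pending = FetchDataAsync();
        int b = await pending;

        Console.WriteLine($"{a} {b}"); // both are 42
    }
}
```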
The Thread Pool that a real .NET application gets can grow to thousands of threads (it starts small, roughly one worker thread per CPU core, and grows on demand). But let's imagine that you have a sad little Thread Pool with exactly three threads, as shown in Figure 3.

Now imagine that you have a server application that's listening for requests. Whenever it gets a request, it calls a request handler. This causes a task to get created and assigned a thread from the Thread Pool, as shown in Figure 4.

The request handler contains a few normal steps (Step 1 and Step 2) and then it gets to Step 3, which, for some reason, is slow. Maybe it's calling some REST API that takes a bit to reply. The result is that Request 1, and therefore T1, is waiting on the result of Step 3.
Now imagine that two more requests come in. This causes two more tasks to start running, each of which gets a thread pool thread, as shown in Figure 5.

What happens if the server receives a fourth request? Because there are no available threads, the fourth request cannot even start. This is called “thread starvation,” as shown in Figure 6.

Solving Thread Starvation
Thread starvation is the problem that async-await solves. Here's how it works.
Let's go back in time to the moment when the very first client request comes in. Request 1 starts running. It does Step 1 and Step 2. Then it gets to Step 3. Rather than calling Step 3 synchronously, call it using an await. This causes .NET to create what is called a Continuation. You can think of a Continuation as a sort of bookmark. Essentially, .NET is saying: okay, this function is awaiting the result of an asynchronous operation. Let's suspend that function for the time being and, when the asynchronous operation completes, you can restart (continue) that function where it left off. In the meantime, the thread can return to the pool, as shown in Figure 7.
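You can observe the Continuation mechanism directly with a short sketch. In a console app (which has no SynchronizationContext), the code after the await typically resumes on a different Thread Pool thread than the one it started on, which is visible by printing the managed thread ID on both sides of the await:

```csharp
using System;
using System.Threading.Tasks;

class ContinuationDemo
{
    static async Task Main()
    {
        Console.WriteLine($"Before await: thread {Environment.CurrentManagedThreadId}");

        // The await creates a Continuation (the "bookmark") and the
        // current thread returns to the pool while the delay is pending.
        await Task.Delay(100);

        // The continuation usually resumes on a different pool thread.
        Console.WriteLine($"After await: thread {Environment.CurrentManagedThreadId}");
    }
}
```

The two thread IDs usually differ, though the runtime makes no guarantee either way.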

You can see how this solves the problem of thread starvation. Because the threads return to the pool, when that fourth request comes in, you have no trouble at all servicing it. There are three threads to choose from, as shown in Figure 8.

Request 4 happened to be assigned T2 to start with, but it really doesn't matter. With tasks, there's no affinity between a task and the thread that's servicing it at any given point in time. The key is that using an await allows a thread to return to the pool during the async operation, and this allows a real .NET application to service many thousands of simultaneous requests using a limited thread pool.
To complete the story, now let's imagine that the async call in Request 1 completes. At that point, .NET clears the Continuation. It assigns a Thread Pool thread to the task and resumes execution at Step 4, as shown in Figure 9.

Did you notice how when async Function 1 resumed, it resumed on a different Thread Pool thread from the one it started on? Once again, there's no affinity between a thread and a task. After every await, there's a high likelihood that your code will continue on a different Thread Pool thread, which is why using techniques like the ThreadStatic attribute can be disastrous. But that's a topic for another day. For now, the important thing is to understand what async-await is really doing. It's not letting the code inside your function do multiple things at once. Rather, it's freeing up Thread Pool threads so your application overall can do multiple things at once.
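The ThreadStatic hazard mentioned above can be seen in a short, illustrative sketch. A [ThreadStatic] field is tied to a physical thread, so a value set before an await may simply be gone afterward; AsyncLocal<T> is the async-aware alternative because it flows with the logical operation rather than the thread:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class ThreadAffinityDemo
{
    [ThreadStatic] static string _threadValue;                // tied to one physical thread
    static readonly AsyncLocal<string> _asyncValue = new();   // flows across awaits

    static async Task Main()
    {
        _threadValue = "hello";
        _asyncValue.Value = "hello";

        await Task.Delay(100); // likely resumes on a different pool thread

        // _threadValue may now be null because we're on a new thread;
        // _asyncValue.Value is still "hello" because it follows the async flow.
        Console.WriteLine($"ThreadStatic: {_threadValue ?? "null"}");
        Console.WriteLine($"AsyncLocal:   {_asyncValue.Value}");
    }
}
```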
Implications and Analysis
This understanding of async-await raises some questions and has some interesting implications. Let's address some of the most common ones.
Running Tasks in Parallel
I've established that async-await doesn't allow the code inside of a function to do multiple things at once. But what if that's what you want to do? What if you want some true parallel processing where one request can await multiple operations at a time, as you hoped for in Figure 2? How do you do that?
The C# tasks model makes this very easy. You can modify the MainFunction from above as follows:
async Task MainFunction()
{
    // start both tasks
    Task t1 = SubFunction1();
    Task t2 = SubFunction2();

    // wait for both tasks to finish
    await Task.WhenAll(t1, t2);
}
Now both SubFunction1 and SubFunction2 run at the same time and neither holds onto a Thread Pool thread while awaiting the result of their respective HTTP GetAsync calls. Importantly, the thread associated with MainFunction is also able to return to the Thread Pool and be reused while both SubFunction1 and SubFunction2 complete. This is the best of all worlds in terms of parallel processing.
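You can verify the overlap with a timing sketch. Here the two sub functions are stand-ins that each "take" about 200ms (via Task.Delay); because they run concurrently, the total elapsed time is roughly the longer of the two, not the sum:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class WhenAllTimingDemo
{
    // Hypothetical stand-ins for the two sub functions, ~200ms each.
    static Task SubFunction1() => Task.Delay(200);
    static Task SubFunction2() => Task.Delay(200);

    static async Task Main()
    {
        var sw = Stopwatch.StartNew();

        Task t1 = SubFunction1();
        Task t2 = SubFunction2();
        await Task.WhenAll(t1, t2);

        // Elapsed is roughly 200ms (the longer task), not 400ms (the sum),
        // because the two tasks overlapped.
        Console.WriteLine($"Elapsed: {sw.ElapsedMilliseconds}ms");
    }
}
```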
Turtles All the Way Down
Sometimes when you introduce async-await code into your project, it creates a sort of problem of "infinite regress" where it seems everything must become async. Imagine that you have a sequence of three functions where Function1 calls Function2 and Function2 calls Function3:
void Function1()
{
    Function2();
}

void Function2()
{
    Function3();
}

void Function3()
{
}
Now imagine that the body of Function3 contains an async call, so when you implement Function3, you mark it as async like this:

async Task Function3()
{
    HttpResponseMessage response =
        await _myHttpClient.GetAsync("http://someuri.net");
}
This causes a problem for Function2. Function2 can no longer call Function3 as written because Function3 is now async. So, you fix Function2 and make it async also. This then causes the same problem for Function1. After you fix all three, your code ends up looking like this:
async Task Function1()
{
    await Function2();
}

async Task Function2()
{
    await Function3();
}

async Task Function3()
{
    HttpResponseMessage response =
        await _myHttpClient.GetAsync("http://abc.net");
}
What ends up happening is that as soon as you introduce a single async-await, it seems to percolate all the way up the entire call chain. I like to think of this as a version of the “turtles all the way down” problem, which is a way of describing “infinite regress.” Perhaps you've experienced this phenomenon.
What should you do? One approach is to simply make everything async all the way to the external entry point of your code. In other words, stack the turtles all the way! If your application is a console app or a Windows executable that has a Main function at the root, you can declare your Main function as async like this:
static async Task Main(string[] args)
{
    await Function1();
}
If you're working with an ASP.NET web service where your Controller functions serve as the root of each request, you can declare your Controller functions as async like this:
[HttpPost]
public async Task Post()
{
    await Function1();
}
Making everything async works, but it can be quite burdensome in a large legacy code base. The alternative approach is to explicitly navigate the synchronous/asynchronous boundary yourself. There are several ways to do this and some, like using the Task-based Asynchronous Pattern (TAP), are very sophisticated and give you a lot of control over what your calling synchronous function does while your asynchronous task is executing. A simple, safe, and common approach is to do something like this:
void Function1()
{
    Function2().GetAwaiter().GetResult();
}
In this code, the Function1 context is synchronous, and the thread associated with Function1 does indeed block and does not return to the pool. However, any awaits called downstream are asynchronous and release their threads.
It should be noted that when you have an async Main or an async Controller method, C# is navigating the synchronous/asynchronous boundary for you behind the scenes. For example, with a console or Windows app, your async Main function is not exposed to the operating system (Windows has no concept of a task). Rather, there is an invisible function that probably looks something like this:
// hidden main which the operating system calls
static void HiddenMain(string[] args)
{
    Main(args).GetAwaiter().GetResult();
}

// your "main" function
static async Task Main(string[] args)
{
    await Function1();
}
The point is, there is always a synchronous/asynchronous boundary. The question is where and how it's navigated. If you don't want to implement the infinite regress of async calls, you can, depending on your application, choose the point at which it makes sense to make the transition.
What Makes Something “Awaitable?”
In the discussion of the "turtles all the way down" phenomenon, you saw how you sometimes end up with a chain of async functions that works its way back to the root, or entry point, of your program. But what happens at the other end? Looking at the other end of the chain, you'll find that a series of async calls almost always bottoms out with a function like HttpClient.GetAsync that's provided by a .NET library.
Table 1 includes some of the async functions found in the .NET libraries. This list is by no means exhaustive. It's just meant to highlight some common (and perhaps less common) async functions you might run into. (Table 1 is at the end of the article.)
What do all of these async methods have in common? They're all IO operations. They all connect to, read from, or write to a file, port, or stream of some kind. This is a very important observation. Think back to the earlier discussion of a Continuation. As you saw, a Continuation can be thought of as a bookmark. The .NET runtime places the bookmark at an await in a method and resumes the code there once the asynchronous operation completes. But how does the runtime know that the asynchronous operation has completed? Is there some magic signal that tells the runtime that it's time to continue?
The answer is: Yes! There is, in fact, a kind of magic signal that makes this happen. In Windows, it involves a Windows facility called IO Completion Port (IOCP). This is a feature of the Windows API that allows a caller to initiate an IO operation and then subscribe to a special Port that will signal when the IO operation completes. This is the “magic signal” that makes async-await Thread Pool sharing possible. It may be “turtles all the way down” but the last turtle is implemented using IOCP!
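You can mimic that signaling mechanism at the user level with TaskCompletionSource. This is only an illustrative sketch, not how IOCP itself is wired up: awaiters of the Task suspend (no thread is blocked) until some external party, here a simulated callback on another thread, signals completion by calling SetResult:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class SignalDemo
{
    static async Task Main()
    {
        // A TaskCompletionSource plays the role of the "magic signal":
        // its Task stays pending until something calls SetResult.
        var tcs = new TaskCompletionSource<string>();

        // Simulate an external completion source (like an IOCP callback)
        // signaling from another thread after some time.
        _ = Task.Run(() =>
        {
            Thread.Sleep(200);
            tcs.SetResult("IO complete");
        });

        // This await suspends via a Continuation; no thread sits blocked.
        string result = await tcs.Task;
        Console.WriteLine(result);
    }
}
```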
What About CPU-Bound Tasks?
Everything you've looked at so far involves IO-bound tasks. This is work where the result is dependent on (bounded by or bottlenecked by) an IO operation, and as you've seen, the “awaitability” of these tasks is made possible by a special operating system mechanism that the .NET runtime can take advantage of. But what about CPU-bound tasks? These are tasks that are dependent on (bounded by or bottlenecked by) the number and speed of CPU cores. Are these also “awaitable?” And if so, what does that mean? Do they also support continuations and thread sharing?
To answer this, let's look at a snippet of code that launches and awaits a set of CPU-Bound tasks.
async Task DoABunchOfCPUBoundWork()
{
    List<Task> tasks = new();

    for (int i = 1; i <= 10; i++)
    {
        tasks.Add(Task.Run(() =>
        {
            SomeCPUBoundJob();
        }));
    }

    await Task.WhenAll(tasks);
}

void SomeCPUBoundJob()
{
    // processor intensive work here
}
The first thing to note is that the CPU-bound function, SomeCPUBoundJob, is not marked as async and does not return a task. It's a regular C# function. So how does it get run as a task? The magic is the call to Task.Run() which takes the supplied function and executes it in a task. This is how you run CPU-bound work in a task: by using Task.Run().
As this code executes, it queues ten tasks, each of which eventually gets scheduled on its own Thread Pool thread. Meanwhile, the thread associated with DoABunchOfCPUBoundWork is, in fact, freed back to the Thread Pool. This happens when you await Task.WhenAll(). At this point, a normal Continuation is indeed created and the thread can be assigned other work.
With IO-bound work, the async-await pattern allows an application to potentially service many thousands of requests with a small number of threads. With CPU-bound work, this is not the case. For CPU-bound work, the application will be limited by the number and speed of the CPU cores available to the .NET task scheduler that's queuing and running all those tasks. The code looks similar, but the parallelization behavior turns out to be quite different for IO-bound tasks that are truly awaitable and CPU-bound tasks that need a processor to run on. But at least the call that launches all the CPU-bound tasks doesn't sit on the controlling thread. You can still use async-await to make efficient use of the Thread Pool even in a CPU-bound situation.
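The core-count limit is easy to observe with a timing sketch. Here SomeCPUBoundJob is a hypothetical stand-in that spins for roughly 100ms of pure CPU work; ten such jobs cannot finish in 100ms unless you have ten or more cores:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading.Tasks;

class CpuBoundDemo
{
    // Hypothetical CPU-bound job: busy-spin for roughly 100ms.
    static void SomeCPUBoundJob()
    {
        var sw = Stopwatch.StartNew();
        while (sw.ElapsedMilliseconds < 100) { } // burn CPU, don't sleep
    }

    static async Task Main()
    {
        Console.WriteLine($"Cores available: {Environment.ProcessorCount}");

        var sw = Stopwatch.StartNew();
        var tasks = new List<Task>();
        for (int i = 1; i <= 10; i++)
            tasks.Add(Task.Run(SomeCPUBoundJob));
        await Task.WhenAll(tasks);

        // With N cores, ten 100ms jobs take very roughly (10 / N) * 100ms,
        // not 100ms: CPU-bound tasks are limited by core count.
        Console.WriteLine($"Elapsed: {sw.ElapsedMilliseconds}ms");
    }
}
```

Exact numbers will vary with Thread Pool ramp-up and machine load, but the scaling with core count is the point.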
Conclusion
At a high level, the idea of asynchronous programming is easy to understand: You simply want your code to be able to do more than one thing at a time. In practice, as you've seen, there's a bit of nuance and complexity to it and simply using async-await doesn't automatically make your code run in parallel. What async-await does do, is allow your application overall to make more effective use of the Thread Pool, and by using other techniques, such as Task.WhenAll(), you can combine this efficient Thread Pool usage with true parallel processing for both IO-bound and CPU-bound work.
When I first started using async-await, I was a bit frustrated and confused, but now I think I have a clearer understanding of how it works and what it's for. I hope you do, too.
Table 1: Some async functions in the .NET libraries
NameSpace | Class | Method
---|---|---
System.Net.Http | HttpClient | GetAsync
 | HttpClient | PatchAsync
 | HttpClient | PostAsync
 | HttpClient | PutAsync
 | HttpClient | DeleteAsync
 | HttpClient | SendAsync
Microsoft.Data.SqlClient | SqlConnection | OpenAsync
 | SqlCommand | ExecuteReaderAsync
 | SqlCommand | ExecuteNonQueryAsync
 | SqlCommand | ExecuteScalarAsync
 | SqlDataReader | ReadAsync
 | SqlDataReader | GetFieldValueAsync
System.IO | File | ReadAllLinesAsync
 | File | ReadAllBytesAsync
 | File | ReadAllTextAsync
 | File | WriteAllLinesAsync
 | File | WriteAllBytesAsync
 | File | WriteAllTextAsync
 | FileStream | ReadAsync
 | FileStream | WriteAsync
 | FileStream | CopyToAsync
Azure.Messaging.ServiceBus | ServiceBusSender | SendMessageAsync
 | ServiceBusSender | SendMessagesAsync
 | ServiceBusSender | ScheduleMessageAsync
 | ServiceBusSender | CancelScheduledMessageAsync
 | ServiceBusReceiver | ReceiveMessageAsync
 | ServiceBusReceiver | ReceiveMessagesAsync
 | ServiceBusReceiver | PeekMessageAsync
 | ServiceBusReceiver | CompleteMessageAsync
 | ServiceBusReceiver | AbandonMessageAsync
 | ServiceBusReceiver | RenewMessageLockAsync
 | ServiceBusReceiver | DeadLetterMessageAsync
Windows.Storage | FileIO | ReadTextAsync
 | FileIO | ReadLinesAsync
 | FileIO | WriteBufferAsync
 | FileIO | WriteBytesAsync
 | FileIO | AppendTextAsync
System.IO.Pipes | PipeStream | ReadAsync
 | PipeStream | WriteAsync
 | PipeStream | FlushAsync
System.Security.Cryptography | CryptoStream | ReadAsync
 | CryptoStream | WriteAsync
 | CryptoStream | FlushAsync