A Short History of AI Frameworks in .NET

The best way to build an AI solution is evolving almost as fast as LLMs themselves. ChatGPT was publicly released on November 30, 2022, and like many of us, I tried it for the first time in January 2023. My first attempt at incorporating it into an application came later that summer. Programming against it was an interesting experience: raw HttpClient calls and a crash course in basic concepts like temperature, tokens, and prompt engineering. The original C# code to make a simple call and return non-streaming responses in a simple chat window, with no guardrails, no validation, and no telemetry, was almost two hundred lines, and most of the responses were more novel than useful.
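For the curious, here's a condensed, hypothetical sketch of what part of that hand-rolled plumbing looked like: building the raw JSON request body yourself. The model name and settings are illustrative placeholders; the actual HttpClient POST, authentication, response parsing, and error handling made up the rest of those two hundred lines.

```csharp
using System.Text.Json;

// Hand-rolled request body for a chat completions call, circa 2023.
// The model name and settings here are illustrative placeholders.
var requestBody = new
{
    model = "gpt-3.5-turbo",
    messages = new[]
    {
        new { role = "user", content = "What is the capital of France?" }
    },
    temperature = 0.7, // how much randomness in the response
    max_tokens = 256   // cap on the length of the response
};

string json = JsonSerializer.Serialize(requestBody);
Console.WriteLine(json);

// From here: create an HttpClient, set the Authorization header,
// POST the JSON to the completions endpoint, dig the text out of
// choices[0].message.content, and handle errors and retries...
// all plumbing that today's frameworks do for us.
```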

Today marks less than three years since I wrote that first bit of code and, wow, have we come a long way. After some hand-coding, we built our own framework to make programming easier. I experimented with LangChain, and I experimented with Microsoft's Semantic Kernel (SK). That was in the early days of SK, when almost every feature was considered experimental. I quickly chose SK and am glad I did. It evolved fairly quickly, took away a lot of the plumbing headaches as generative AI got more sophisticated, and when I needed a feature before SK supported it, I could always hand-code it. I'm happy to say I've enjoyed nearly three great years with SK, spanning almost the entire lifetime of generative AI development. While it trailed AutoGen (open-sourced in the same timeframe) for building AI agents, it was a mostly bulletproof platform and scaled to production well. With SK abstracting away the LLM model, the provider of the model, the semantic search lookup, the chat history, and a dozen other things, a much more powerful version of my original application could be written in less than fifty lines of code.

The New .NET AI Stack

As quickly as Semantic Kernel and its amazing community evolved to keep up with advances in generative AI, the framework still suffered from imperfect knowledge of the future. After a few years and the emergence of concepts like agents, workflows, and deep thinking, along with the sprawl of features being added, the foundations started to crack. A new approach was needed, and at first, that new approach was built into SK itself. As early as fall 2024 (in the .NET 8 timeframe), Microsoft started breaking the low-level bits out of SK into an underlying set of libraries called Microsoft.Extensions.AI, which SK sat on top of. This allowed the low-level plumbing to evolve separately from the high-level concepts in SK, and it allowed us to use the low-level abstraction layer directly if we didn't need all the high-level features of SK. Microsoft.Extensions.AI contains concepts like IChatClient, IImageGenerator, and IEmbeddingGenerator: interfaces that work across models and across providers, including local providers like Ollama. Using this library, my chat code from 2023 can now be written in about twelve lines and is light years ahead of the original. This breakout is important because it allowed a new high-level framework to be built without recreating all the low-level plumbing.

The new high-level framework, Microsoft.Agents.AI, has been described as the “spiritual successor of Semantic Kernel and AutoGen.” While both SK and AutoGen are very powerful and have enjoyed great longevity, the respective development teams knew they needed a different path moving forward. Building on top of Microsoft.Extensions.AI, the team was free to reimagine what a new high-level framework combining the best of both existing frameworks could look like. Currently in release candidate and named Agent Framework (AF), the Microsoft.Agents.AI libraries have streamlined AI development in .NET and are a pleasure to use. I expect they'll be released by the time you're reading this.

As the name implies, Agent Framework's features revolve around creating and orchestrating agents. Like SK, it has both C# and Python versions, and artifacts can be reused between languages. Agent Framework includes AIAgent, a concrete implementation built on top of IChatClient that extends basic, generic chat functionality with features like MCP integration, reliable structured output, background responses for durable, long-running workflows, and workflow orchestration of agents. It includes AgentSession, an implementation of memory functions like short- or long-term chat history, chat trimming, and cached knowledge. Agent Framework also builds on the infrastructure features in the underlying Microsoft.Extensions.AI libraries, including OpenTelemetry, Logging, Caching, Dependency Injection, Builders, and more.

Think of these sets of libraries as two layers. Microsoft.Extensions.AI is the foundation layer for basic chat and tool use. Microsoft.Agents.AI (also known as Agent Framework) is the higher-level layer for agents and orchestration. Lately I've been building everything in Agent Framework, even when I only need basic features, so I have an easier path to adding higher-end features later. Though I haven't done any proper benchmarking, in casual observation I haven't noticed a performance hit from this approach.

The Development Experience

For these examples, I'm using Azure OpenAI hosted models, but you can use a variety of other models and model providers. Let's start with AzureOpenAIClient.GetChatClient(), which comes from the Azure.AI.OpenAI package. That's a provider-specific chat client for Azure OpenAI models. If I used that directly, I would have to write different code for every provider.

ChatClient aOaiClient = new AzureOpenAIClient(
    new Uri(_azureOpenAiEndpoint),
    new ApiKeyCredential(_azureOpenAiApiKey)
)
.GetChatClient(_chatDeployment);

var result = aOaiClient.CompleteChat("What is the capital of France?");

foreach (var content in result.Value.Content)
{
    Console.WriteLine(content.Text);
}

I can instead call the extension method AsIChatClient() from the Microsoft.Extensions.AI.OpenAI package and get back an IChatClient. I can then switch providers without affecting the rest of the code, as IChatClient is not specific to OpenAI or Azure OpenAI hosting or models. IChatClient works the same way with an Azure OpenAI model as it does with a local Ollama model, for example.

IChatClient chatClient = new AzureOpenAIClient(
    new Uri(_azureOpenAiEndpoint),
    new ApiKeyCredential(_azureOpenAiApiKey)
)
.GetChatClient(_chatDeployment)
.AsIChatClient();

var result = await chatClient.GetResponseAsync(
    "What is the capital of France?"
);

Console.WriteLine(result);

As you can see, it's now easier to write the response to the console. I don't have to loop through the result.Value.Content collection and specify the Text property anymore. And if I want to target a local model through Ollama, for example, I can use an Ollama library to create a different client and call .AsIChatClient(). Though to be perfectly transparent, the OllamaSharp NuGet package recommended by Microsoft is not currently working with the latest release candidate; I assume it will be updated soon. The result will be an IChatClient, so none of the rest of the code changes.

If I want to stream the response instead of waiting for it to complete:

await foreach (var content in chatClient
    .GetStreamingResponseAsync("What is the capital of France?"))
{
    Console.Write(content);
}

I can do all that and more with Microsoft.Extensions.AI. I've let the framework choose a lot of sensible defaults for me that I can override by passing options. But, if I want to build something more complex, I can use Agent Framework.

AIAgent agent = new AzureOpenAIClient(
    new Uri(_azureOpenAiEndpoint),
    new ApiKeyCredential(_azureOpenAiApiKey)
)
.GetChatClient(_chatDeployment)
.AsAIAgent(
    instructions: "You're a friendly assistant. Keep answers brief.",
    name: "MyAgent"
);

Console.WriteLine(await agent.RunAsync("Explain how to tie shoelaces"));

In this case, I'm not yet really taking advantage of Agent Framework. I'm calling .AsAIAgent() instead of .AsIChatClient(). Agents are created with a set of instructions that help define how the agent should act, and a name. And here's how to stream the response instead of waiting for it to complete:

await foreach (var update in agent
    .RunStreamingAsync("Explain how to tie shoelaces"))
{
    Console.Write(update);
}

Agents begin getting more powerful when you allow them to use tools. A tool can be a method you write in C#, a built-in tool such as a code generator or web lookup, a search of a semantic index, or a call to an MCP server. As an example, I'll create my own tool in C# to retrieve weather information.

using System.ComponentModel;

public class WeatherTool
{
    [Description("Get the weather for a given location.")]
    public static string GetWeather(
        [Description("The location to get the weather for.")]
        string location)
    {
        return $"The weather in {location} is cloudy with a high of 19°C.";
    }
}

OK, it's not a very accurate weather forecaster, but the point is you can write any code you like. The [Description] attributes on both the method and the parameters are important because the LLM uses those descriptions to help it select tools and to know how to call their methods properly. Now I can allow my agent to use this tool if it decides it would be helpful to accomplish its goals.

AIAgent agent = new AzureOpenAIClient(
    new Uri(_azureOpenAiEndpoint),
    new ApiKeyCredential(_azureOpenAiApiKey)
)
.GetChatClient(_chatDeployment)
.AsAIAgent(
    instructions: "You're a friendly assistant. Keep answers brief.",
    tools: [AIFunctionFactory.Create(WeatherTool.GetWeather)],
    name: "MyAgentWithTools"
);

Console.WriteLine(await agent.RunAsync(
    "I'm in Taos, NM. Should I take an umbrella today?"
));
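It's worth seeing why those [Description] attributes matter. Here's a minimal sketch, using plain reflection rather than Agent Framework internals, of how a framework can discover the metadata it hands to the LLM (the WeatherTool class is repeated so the sketch is self-contained):

```csharp
using System.ComponentModel;
using System.Reflection;

// Discover the tool's metadata the same general way a function factory
// conceptually does: by reading the [Description] attributes.
MethodInfo method =
    typeof(WeatherTool).GetMethod(nameof(WeatherTool.GetWeather))!;

string methodDescription =
    method.GetCustomAttribute<DescriptionAttribute>()!.Description;

string parameterDescription =
    method.GetParameters()[0]
          .GetCustomAttribute<DescriptionAttribute>()!.Description;

Console.WriteLine(methodDescription);    // tells the LLM what the tool does
Console.WriteLine(parameterDescription); // tells the LLM what to pass in

public class WeatherTool
{
    [Description("Get the weather for a given location.")]
    public static string GetWeather(
        [Description("The location to get the weather for.")]
        string location)
    {
        return $"The weather in {location} is cloudy with a high of 19°C.";
    }
}
```

If the descriptions are vague, the model has less to go on when deciding whether and how to call the tool, which is why they deserve as much care as the code itself.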

Other Features of Agent Framework

So far we've just done simple completions, not multi-turn conversations. For that, I need to create a session to store the interactions. When I wrote my first app in 2023, that required a lot of code. In Agent Framework I can use the default in-memory provider like this (assuming the same AIAgent shown above):

AgentSession session = await agent.CreateSessionAsync();

Console.WriteLine(await agent.RunAsync("I'm in Taos, NM.", session));
Console.WriteLine(await agent.RunAsync("Should I take an umbrella today?", session));
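Conceptually, the in-memory session is just accumulating the conversation so each new call carries the earlier turns along with it. A toy sketch of that idea in plain C# (not the actual AgentSession implementation) looks like this:

```csharp
using System.Collections.Generic;

// A toy conversation store illustrating what an in-memory session
// conceptually holds: the running list of turns that gets sent back
// to the model on every call.
var history = new List<(string Role, string Text)>();

void AddTurn(string role, string text) => history.Add((role, text));

AddTurn("user", "I'm in Taos, NM.");
AddTurn("assistant", "Noted! What would you like to know about Taos?");
AddTurn("user", "Should I take an umbrella today?");

// Because the full (possibly trimmed) history travels with each run,
// the model can resolve "today" and "I" in the second question.
Console.WriteLine($"Turns in session: {history.Count}");
```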

Something else we all struggled with in the early days of AI was attempting to force the LLM to return a response in a format we could make use of consistently. Agent Framework now handles that plumbing reliably. Let's assume we need to retrieve some information about a book such as title, author, summary, and number of pages. We can define a class:

public class BookInfo
{
    public string? Title { get; set; }

    public string? Author { get; set; }

    public string? Summary { get; set; }

    public int? Pages { get; set; }
}

We can specify that we want the agent to respond with an instance of that class with the properties filled in. In real life, I would likely provide my agent with a web search tool, but for illustration purposes, I'll just stuff some info into the prompt.

AgentResponse<BookInfo> response = await agent.RunAsync<BookInfo>(
    """
    Please provide information about Hitchhiker's Guide to the Galaxy,
    by Douglas Adams. It's a humorous novel about an unsuspecting
    space traveler. It has 413 pages.
    """);

Console.WriteLine(
    $"Title: {response.Result.Title}, " +
    $"Author: {response.Result.Author}, " +
    $"Summary: {response.Result.Summary}, " +
    $"Pages: {response.Result.Pages}"
);
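Under the hood, structured output amounts to having the model emit JSON that's deserialized into your class. Here's a minimal sketch of just that final mapping step with System.Text.Json, using sample JSON standing in for a model response (BookInfo is repeated so the sketch is self-contained):

```csharp
using System.Text.Json;

// Sample JSON standing in for what the model returns when it's
// constrained to the BookInfo schema.
string modelJson = """
    {
        "Title": "The Hitchhiker's Guide to the Galaxy",
        "Author": "Douglas Adams",
        "Summary": "A humorous novel about an unsuspecting space traveler.",
        "Pages": 413
    }
    """;

// The framework performs this deserialization for you and hands back
// a typed result, so you never touch the raw JSON.
BookInfo? book = JsonSerializer.Deserialize<BookInfo>(modelJson);

Console.WriteLine($"{book!.Title} by {book.Author}, {book.Pages} pages");

public class BookInfo
{
    public string? Title { get; set; }
    public string? Author { get; set; }
    public string? Summary { get; set; }
    public int? Pages { get; set; }
}
```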

Summary

Microsoft's newest AI development stack consists of the foundation layer Microsoft.Extensions.AI and the high-level Agent Framework (Microsoft.Agents.AI). Using Agent Framework from the start is a good technique for future-proofing by allowing your simple response and chat interactions to evolve into complex orchestrated agents when necessary. While this article doesn't provide an exhaustive list of the features and benefits of Agent Framework, it does illustrate the power, flexibility, ease of use, and brevity of the framework. Yes, you can write provider-specific code and hand-code your own helpers, but why would you give up the elegance, power and flexibility you can get for free with Agent Framework? In a future article, I'll explore agent orchestration and workflows and discuss how Agent Framework provides even more value out of the box. Until then, please experiment with Agent Framework and discover for yourself how good it really is.
