Not too long ago, I wrote an article on AI and developer productivity (https://www.codemag.com/Article/2509021/AI-and-Developer-Productivity). While a lot of what I wrote in that article still holds true, technology has improved by leaps and bounds in the past few months. I hear a lot of angst, and comments that the job of the software developer is over, and that software developers had better learn to be plumbers or electricians. As a software developer myself, I was seriously wondering about my fallback career plan of being a comedian, until I started to master these technologies myself.

But then I looked into the past. When GUI-based IDEs first came on the scene, developers who knew the ins and outs of gcc felt uncomfortable. I distinctly remember hearing words like, “Who needs to learn to program when you have things like Visual Basic?” Indeed, Visual Basic was so much simpler than C++. The GUI-based IDE was so much easier to use than complex terminal commands and coding in vim. Fast forward 20 years and so much has changed, but the one thing that hasn't changed is the need for developers.

I challenge any CEO who says software development will be automated away. Anthropic's CEO famously said recently, “We are 6-12 months away from doing what software engineers do.” I wonder, then, why Anthropic itself is hiring so many developers.

As you can see, I'm in the skeptic group. But I also feel that, to remain relevant, you need to completely master the power of AI-assisted development. I use AI heavily, in both my work and my non-work life. If you asked me to code without AI today, you might as well be asking me to code with chopsticks. In fact, I cannot imagine how we managed to get so far being so inefficient without it. I'm not exaggerating; my personal output with AI is now twenty times what it used to be.

In this article, I am going to walk you through a soup-to-nuts example of AI-assisted development using Gemini, giving you a good insight into how I use AI-assisted development. But I want to learn from you too. Please share your tips and tricks with me.

Why Gemini CLI?

Let me get this out of the way first. I picked Gemini CLI for this article because you can get quite far with it for free. I use Claude Code a lot, and I have also used OpenAI Codex. They all have their pros and cons, so let me give you a comparison before we dive into the Gemini CLI itself. I am confident this article could just as well have been written using Claude Code or OpenAI Codex.

When picking a development CLI, there are a few criteria I want to evaluate my tool of choice on: the model being used, cost, context window, integration with a specific cloud, and extensibility.

The first is the model being used. At the time of writing this article, Gemini CLI uses Gemini 3.0 Pro or 2.5 Pro, although in this article, I will use a personal Google account (free tier). Claude uses Opus 4.6 or Sonnet 4.5, and OpenAI Codex uses GPT-5.3-Codex. What follows are my reasons and opinions, so take them with a grain of salt.

Today, the coding landscape has evolved into a three-way battle between specialized “terminal agents.” While each can write code, they have distinct “personalities” optimized for different parts of the development lifecycle.

Gemini 3.0 Pro is, to me, the context giant. Google's strength lies in its massive context window and native multimodality; Gemini is amazing at handling context. While other models might “forget” the beginning of a long file, Gemini 3.0 Pro can ingest your entire codebase, plus 5,000 pages of documentation, and still find a specific bug in a utility function from many years ago. I also found Gemini to be far superior to its counterparts when it comes to “looking” at your frontend. You can record a 10-second video of a UI glitch, and Gemini can often map that visual bug back to the specific CSS or React component causing the issue. In my opinion, Gemini is great for research-heavy coding, learning new frameworks by feeding it the entire docs, and frontend/UI debugging.

Claude Opus 4.6, in my opinion, is the reasoning king. Released in February 2026, Opus 4.6 is currently the favorite for complex refactors. It is famous for “thinking before doing.” If you ask it to migrate a database, it will often create a step-by-step .md plan, verify it with you, and then execute it across dozens of files. It has a plan mode and a learn mode, so it can show you what it is doing and let you handle parts of the problem yourself, so you aren't in total idiot mode. It also has the highest “self-healing” rate, meaning that if it runs a test and the test fails, it rarely gets stuck in a loop; it pivots its strategy, more like a human senior engineer. I have found Claude to be great at massive codebases where you need the AI to understand the intent and logic across a 100+ file dependency graph.

GPT-5.3-Codex, to me, is the performance thoroughbred. This is OpenAI's latest iteration, and it focuses on speed and reliability. On Terminal-Bench 2.0, it leads the industry (77.3%). It is incredibly “brave” in the CLI: it handles complex bash scripts, Docker configurations, and CI/CD pipelines with fewer syntax errors than Claude. One thing I especially like is that, unlike Opus, which can be chatty, Codex is tuned for “diffs.” It provides the exact lines needed, making it the preferred choice for IDE integrations like Cursor or the new Codex Desktop App. (Claude, by contrast, recommends you use sub-agents to keep your main work focused and low on noise.) In my opinion, GPT-5.3-Codex is very well suited for fast-paced iteration, vibe coding, and DevOps/Infrastructure-as-Code tasks where precision in execution is paramount.

In the real world, I find myself using Gemini CLI and Claude Code heavily. But one big shortcoming of Claude Code is its pricing model: it is included in Claude Pro, which is around $20/month. I'm cheap, and I want everyone to follow this article at no cost. I chose to write this article in Gemini because it offers a generous free tier of sixty requests per minute, with a pay-as-you-go plan beyond that.

If money wasn't a consideration, I would find myself going back and forth between Gemini and Claude. Because Claude has a smaller context window, I find it great for very focused coding tasks. But if I am trying to diagnose a complex UX issue, I switch to Gemini. But let's be honest, either is fine at most of what I do.

A New Era for the Terminal

AI will take many forms as we move forward. If you are on a Mac, open TextEdit and start doing some math. You will see that TextEdit can do basic math right inside the document now. Just type 1+1= and it'll respond with "2". It has taken over my basic math needs, but don't try anything too complex: if I type 12% of 8381, it doesn't understand, though basic plus, minus, multiply, and divide work fine (Figure 1).

Figure 1: macOS TextEdit can do basic math.

Companies are releasing desktop apps to automate a lot of what we do. They are also baking AI into our usual productivity tools, like email and spreadsheets, sometimes with hilarious results. The autocomplete on your iPhone could, in some ways, be considered rudimentary AI. But as a developer, are you surprised that most of the action is happening right here in the terminal?

How did coding work in the recent past? You'd open VS Code or similar. You'd bootstrap a project, run it in debug mode if the underlying platform supported it, and try to build your application. Let's be honest: in doing so, you focused a lot of your energy on friction. For example, how do I do xyz in React? Or, I know Splunk is powerful, but my Splunk query skills suck. Or, I know this SQL query can be optimized, but I don't have the smarts to do it. You know something? I have a lot of respect for the artisans who have mastered those skills over the years. But I am being paid to get a job done, and I wish I could get past the friction of learning these new silos and just use them to their maximum capability.

Have you ever noticed that to someone good with a hammer, everything looks like a nail? This is the problem with artisans: they master a skill, because that is what humans do, and then they try to solve every problem with that skill. As an architect, I want to use the right tool for the right job, and I don't have the time to master every skill. I could work with a bunch of strong-minded humans, or I can delegate a lot of it to AI and get much of this done myself. So that is what I do now.

So, for pure coding we started using AI. Our workflow changed to:

Get stuck on a problem.

Copy and paste an error message, or some code snippet from VS Code, into some browser-based UI offered by some big AI company like Google or Anthropic or OpenAI.

Ask the AI a question.

See what the AI responded with.

But the problem here was the friction of copy and paste, and the limited context I could provide. The results were frequently poor, sometimes hilarious, and often made me look bad. But if you have used this “level” of AI, I think you'll agree that it is a very valuable arrow in your quiver. I am amazed at how many developers use this, find it to be such a superpower, and then don't feel the need to go further.

Well, in this article, we go further. Terminal- or CLI-based development is the new kid in town. Here, we are dealing with agentic AI, which is a fancy way of saying it can do things for you. It doesn't just talk back to you in a chat format; it acts. Because it runs in your terminal, it can read your local files, your project structure, and anything else you point it at; leverage external MCP servers or plugins; and directly execute commands in the terminal. Heck, it can even read documentation from the internet and make better decisions as it codes, just like a developer would.

All the code for this article is at https://github.com/maliksahil/jwtdecode. I made pull requests as I built it so you should be able to follow along nicely.

Install Gemini CLI

The first thing to do is to install Gemini CLI. You will need Node.js installed on your machine, version 20 or higher. You can head over to https://nodejs.org and install the latest LTS version if you don't already have it. To install Gemini CLI, go ahead and run the command below.

npm install -g @google/gemini-cli

That's it, Gemini CLI is now installed on your machine. Go ahead and run it by issuing the command below in the terminal.

gemini

As soon as you launch Gemini, it will ask whether you trust the folder you are running the command in. Gemini is going to have read/write access to this folder, and it may even send the contents to Google, so I wouldn't run this in a folder with nuclear codes, for sure. But since I am running this in a safe new folder, I'll go ahead and trust it.

Next, it will ask you to authenticate. Here is where the money part comes in. I'm cheap, and I want to use this for free, so I'll select Login with Google. This puts me in the free tier by default which gives me 1,000 requests per day for free. Choosing to login with Google will pop open the browser, walk you through a couple of screens and warnings around trust, and you are signed in. It will ask you to restart Gemini CLI as well for the authentication settings to take effect. Go ahead and hit r to restart it.

Once you are all logged in, it will give you some tips to get started. You can ask questions, edit files, or run commands. It'll tell you to be specific for best results. You can create a GEMINI.md to customize your interactions with Gemini, or type help for more information.

Go ahead and type /help. There are quite a few commands available, aren't there?

Writing Our App

I am so excited, I just can't hide it. Let's start writing our first application. My version of “hello world” is something I've always wanted: a JavaScript SPA (single-page application) that lets developers paste in a JSON Web Token and see its contents. I want to approach the development of this application step by step, just as a senior developer would: we will scaffold the application bit by bit, make enhancements, and use git along the way.

Go ahead and create a folder called jwtdecode and go into that folder.

mkdir jwtdecode
cd jwtdecode

Once inside this folder, turn this into a git repo using the following command:

git init

Now, go ahead and launch Gemini in the terminal again.

gemini

As you can see, other than launching Gemini, there is nothing weird going on. But now is when things get weird.

The first thing I will do is create a GEMINI.md file. Interestingly, if I was using Claude Code, I'd be creating a CLAUDE.md file. This file tells the AI how to behave in this specific project.

Inside the Gemini CLI, type the following.

*Create a GEMINI.md file. Tell yourself to always use React (Vite), Tailwind CSS, and a clean Functional Component style. Also, instruct yourself to use a multi-branch Git strategy for every feature.*

After Gemini thinks for a bit, it should show you an output similar to Figure 2.

Figure 2: Gemini asking if it can write GEMINI.md

I'll go ahead and allow it. You can also run gemini --yolo to get rid of these nags.

Notice that Gemini has now created a GEMINI.md file. It is easy to read, for a human and for the AI alike. I have pasted a preview of the generated file in Figure 3.

Figure 3: Our initial GEMINI.md

As you read the GEMINI.md, it is easy to understand. You can add more instructions and tweak it as necessary, but right out of the gate, and without me even explaining anything, it has set up the basic instructions to use React (Vite) and Tailwind CSS. Additionally, it is going to use the functional component style with hooks, and it is going to use proper git techniques, just like a senior developer would.
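Your generated file will differ from mine, but to give you a feel for it, a GEMINI.md of this sort reads something like the following (a paraphrased sketch, not the verbatim output):

```markdown
# Project Guidelines

## Tech Stack
- React (Vite) with TypeScript
- Tailwind CSS for all styling

## Code Style
- Clean functional components with hooks; no class components

## Git Workflow
- Create a new feature branch (e.g., feat/my-feature) for every feature
- Commit with clear messages; merge back via pull requests
```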

You can imagine that you can customize this to your organization's needs as you wish, but let's keep chugging along.

Project Scaffolding

I want to set up a React + Tailwind environment. Without AI, I would have to Google for commands on how to do this, or learn Vite as fast as I could, but with Gemini all this friction is removed.

In Gemini CLI, go ahead and enter the following prompt next.

*I want to start the project. Create a new branch feat/setup. Use Vite to initialize a React app in the current folder. Install Tailwind CSS and verify the configuration. Create a basic Hello World page and commit the changes using your git tools.*

As soon as you run this, Gemini goes about thinking, and before it makes any changes, it tells you exactly what it intends to do and asks how it should proceed. This can be seen in Figure 4.

Figure 4: Creating a git branch

This is super nice. Gemini is asking me if it can create a git branch. Now, I am inside this project, and I don't want to keep allowing Gemini to do things, so I'll allow changes for this session by picking option 2.

Now Gemini goes back to thinking, and a moment later, it says:

*I'll initialize the React app with Vite in the current directory.*

And in a screenshot similar to Figure 5, it tells me that the command it will run is:

npm create vite@latest . -- --template react-ts && npm install

In a matter of seconds, it has set up a Vite React TypeScript environment for me, and it is running the server on localhost:5173. This can be seen in Figure 5.

Figure 5: My project is running.

You can visit http://localhost:5173/ to ensure that your app is running.

The JWT Decoder Logic

My app is running, which is nice. But it doesn't do much. So that you can follow along with my progress, I pushed my code to https://github.com/maliksahil/jwtdecode.

Before moving further, I pressed Ctrl+C twice to exit Gemini CLI. On exit, it shows how many tokens I have consumed so far (Figure 6).

Figure 6: My usage so far

At this point, feel free to walk through the code that Gemini has set up so far.

Now, back in Gemini, my next goal is to set up the core decoder logic.

I am not much of a designer, but I took some stylistic liberties and created a UX mockup, saved as a screenshot named UXMockup.png. (You can also paste images directly into Gemini CLI.) Figure 7 shows my mockup.

Figure 7: My UX mockup.

Now, this is very rudimentary, but in the real world, you can tie this to products like Figma via their MCP server, build out high-fidelity mockups and flows for your whole application, and have AI code the whole thing up. Or you can go from your code to UI mockups. It's quite incredible.

With my UI mockup done, let's issue our next command to Gemini.

*Switch to a new branch feat/decode-logic. Create a component that has a large text area for a JWT input. Write a function that decodes the Header and Payload parts of the JWT using atob() and JSON.parse(). Display the result as formatted JSON on the screen. Do not use external libraries for the decoding. Use the UXMockup.png file in the root of the project as inspiration, but feel free to take stylistic liberties.*

As before, allow Gemini to read and write files, run git commands, and so on.

You will see that Gemini creates a new branch. It then reads the UX mockup. Then it wants to install Tailwind CSS using the command below.

npm install -D tailwindcss @tailwindcss/vite && npx tailwindcss init -p
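For context, that @tailwindcss/vite package gets registered as a Vite plugin. Based on Tailwind's own Vite setup docs, the resulting vite.config.ts looks roughly like this (a sketch; Gemini's generated config may differ):

```
// vite.config.ts (sketch): Tailwind wired in as a Vite plugin
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";
import tailwindcss from "@tailwindcss/vite";

export default defineConfig({
  plugins: [react(), tailwindcss()],
});
```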

Then it starts making code changes, and it shows you, in git diff format, exactly what changes it will make. I really like this, since if I pay attention here, I am learning as I go (Figure 8).

Figure 8: Gemini making code changes like a pro.

I am on the free tier, so it's possible that I might get throttled. If I do, Gemini will ask me to either switch models, keep trying, or stop. You can, of course, pay to get past this annoyance. Figure 9 shows Gemini throttling me.

Figure 9: Uh oh, we are getting throttled.

After some patience, and after trying a few more times, I was able to get my changes done. As usual, Gemini walked me through each step showing me exactly what changes it would make.

As I watched it scroll by, it meticulously made changes to my tsx files, my css, and my package.json, and it even wrote tests for me. Like a real developer, it then ran the tests, found an error, and fixed the code. Then it went further and added error checking, all the while running the linter to ensure it was writing good code.

Wow! This would have taken me hours to do by hand.

I'll run the application and you can see the results in Figure 10.

Figure 10: My JSON Web Token decoder

I pasted an access token from https://learn.microsoft.com/en-us/entra/identity-platform/access-tokens, but feel free to paste in any JSON Web Token. Let me just say: I am floored. Using the free tier, I have written a nice little application for myself.

Note that at no point did I have to explain to Gemini what “JWT” means, or that it has three parts. It automatically wrote all that logic and created a user interface better than the mockup I had created.
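For the curious, the heart of that logic is small. Here is a sketch of what such a decoder boils down to (hypothetical function names; Gemini's actual generated code in the repo is more elaborate, with React state and richer error handling):

```typescript
// Sketch of the core decode step (hypothetical names; the generated code
// in the repo is more elaborate, with React state and error handling).

function base64UrlDecode(segment: string): string {
  // JWTs use base64url: '-' and '_' instead of '+' and '/'
  const base64 = segment.replace(/-/g, "+").replace(/_/g, "/");
  // atob() can choke on unpadded input, so restore the '=' padding
  const pad = (4 - (base64.length % 4)) % 4;
  return atob(base64 + "=".repeat(pad));
}

function decodeJwt(token: string): { header: unknown; payload: unknown } {
  // The three parts are header.payload.signature; we only decode the first two
  const parts = token.trim().split(".");
  if (parts.length !== 3) {
    throw new Error("A JWT must have three dot-separated parts");
  }
  return {
    header: JSON.parse(base64UrlDecode(parts[0])),
    payload: JSON.parse(base64UrlDecode(parts[1])),
  };
}
```

Note the base64url handling: a naive atob(parts[1]) can fail on tokens containing - or _ characters, which is exactly the kind of edge case a decoder needs to get right.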

Before we go any further, I am going to exit Gemini CLI and push my code up to GitHub and merge it with main.

If you are interested in examining the changes, you can see them at https://github.com/maliksahil/jwtdecode/pull/1.

Collapsible JSON

I like what we have built so far, but as a human developer, I can immediately see an improvement to make. Tokens can be long, and they can have claims such as roles; it would be super nice for the UX to be collapsible, to make tokens easier to follow. Let's make that improvement.

My code is merged in main, so I'll go back to main, and launch Gemini CLI.

Here, I'll give Gemini another command as below.

*Create a new branch feat/collapsible-ui. Update the display logic so that the JSON payload is collapsible. If a key has an object as a value, show a toggle icon to expand or hide it. Use Tailwind for a ‘dark mode’ developer aesthetic.*

We are becoming real pros at this AI-driven coding, so I'll skip some of the details this time around. As before, Gemini understands my command and starts making code changes. It installs tailwindcss, postcss, and autoprefixer for styling. It modifies the tsx files and the css files, updates the logic and the tests, and runs the application to ensure it does what I intend. It also takes a dependency on lucide-react (https://lucide.dev/guide/) for open-source icons.
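To demystify what Gemini built here: the core idea behind a collapsible JSON view is simple recursion, where objects and arrays get a toggle and primitives are leaves. A rough sketch of that logic, stripped of the React and Tailwind specifics (hypothetical helper names, not Gemini's actual code):

```typescript
// Sketch of the collapsible-tree logic (hypothetical helper names; the
// generated React component wraps this kind of logic in JSX and useState).

type Json = string | number | boolean | null | Json[] | { [key: string]: Json };

// Only objects and arrays get an expand/collapse toggle.
function isExpandable(value: Json): value is Json[] | { [key: string]: Json } {
  return typeof value === "object" && value !== null;
}

// Flatten a JSON value into renderable rows, skipping the children of
// any path present in the `collapsed` set.
function toRows(
  value: Json,
  path = "$",
  collapsed: Set<string> = new Set()
): Array<{ path: string; depth: number; preview: string }> {
  const depth = path.split(".").length - 1;
  const preview = isExpandable(value)
    ? (Array.isArray(value) ? `Array(${value.length})` : "Object")
    : JSON.stringify(value);
  const rows = [{ path, depth, preview }];
  if (isExpandable(value) && !collapsed.has(path)) {
    for (const [key, child] of Object.entries(value as Record<string, Json>)) {
      rows.push(...toRows(child, `${path}.${key}`, collapsed));
    }
  }
  return rows;
}
```

The component then renders each row indented by its depth, and clicking a row's toggle icon simply adds or removes that row's path from the collapsed set.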

It's funny: at one point, as I was looking through the code changes being made, I noticed a bunch of spans inside divs with a bunch of styling. I am so happy I don't have to struggle with all that anymore. As emotionally satisfying as it is to get a div centered just right, is that really what my end user cares about? Will I get any additional pats on the back for spending two hours centering a div? Not really! So, I now focus on true business value.

After a while Gemini is done and it tells me, “The UI is now ready for use and can be launched with npm run dev.” So, let's run npm run dev. My user interface now looks like Figure 11.

Figure 11: Improved user interface JWT decoder

I grabbed a token for use from https://learn.microsoft.com/en-us/entra/identity-platform/id-tokens. Now, I consider myself a pretty good developer, but this user interface is slick. Not only does it look professional (within limits), it also has great error checking and, frankly, looks like a ready-to-use application.

Try pasting in an invalid token. Yep, Gemini thought of that, and it gives a pretty useful error message.

At this point, I am going to commit my changes and merge them into main. If you are interested in seeing the changes I made, you can see them here: https://github.com/maliksahil/jwtdecode/pull/2/.

Distributable Version of the Application

Even though my application is a SPA, and I can create a hosted version using npm run build, I want to make it easy for my users to use my fancy new JWT decoder. I want users to be able to double-click an index.html and run the application locally.

To accomplish this, I gave Gemini another prompt.

*In a new branch add logic to create a distributable version of the application in a dist folder, that can be run locally by double clicking on the index.html. Make these changes in a new git branch called feat/distribution.*

As a side note, Gemini shows entertaining messages as it thinks, and if you pay close attention, sometimes it will also give you useful tips and tricks, as you can see in Figure 12.

Figure 12: Funny entertaining prompts as Gemini works hard

As Gemini was working, it informed me that the dist folder is ignored via .gitignore. I could ask Gemini to fix it, or I can just take note of it and fix it before I check in my changes.

Once Gemini is done, I ensure that the generated dist/index.html runs as intended locally. I modified my .gitignore to ensure the folder will get checked in during the next add, commit, and push.

I then committed and pushed, and opened a pull request to merge my changes into main. You can see the changes here: https://github.com/maliksahil/jwtdecode/pull/3.

The main change here is that we took a dependency on a Node package called vite-plugin-singlefile to bundle the project into a single file. I didn't know such a package existed; this saved me so much Googling.
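If you're curious what such a change involves, registering the plugin is about all there is to it. A sketch of the relevant vite.config.ts (the actual generated config in the repo may differ):

```
// vite.config.ts (sketch): inline all JS/CSS into a single dist/index.html
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";
import { viteSingleFile } from "vite-plugin-singlefile";

export default defineConfig({
  plugins: [react(), viteSingleFile()],
});
```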

Update the README File

I am so proud of myself for having written such a nice app in so little time. But it deserves a nice README file explaining how to use it. I guess it's time for me to write one. Like most developers, my least favorite task is writing documentation. I want to see if Gemini can write good documentation for me. Let's try.

Again, launch Gemini and give it the following prompt:

*This project needs a new readme.md. Update the readme.md to explain what this application is, include enough screenshots and walkthroughs to explain to the user how to use this application. Do all this work in a new git branch called feat/readme.*

Now, I am really curious how good or bad a job Gemini will do in writing a README that I can ship to users. All through this project, I never explained to Gemini what a JWT is or why I would use it. I also didn't describe how I would go about using this project, building it, running it, or what its security ramifications are. Will it be able to generate screenshots? Let's see how it does. We are truly entering black-magic territory now.

What would otherwise have taken me a couple of hours, Gemini accomplished in seconds. The documentation can be seen at https://github.com/maliksahil/jwtdecode. What do you think? Did I do a good job? I mean, did Gemini do a good job?

Summary

Times are changing. I have used this JWT decoder project as a good starter project for junior devs joining the team. I am not kidding: it has taken people three to four weeks to do this work and produce quality far inferior to what Gemini built in a matter of a couple of hours, including the time spent writing this article. And I haven't even started to scratch the surface of how powerful Gemini CLI truly is.

I have used Gemini and Claude in projects that are far more complex than this. I am simply floored at how good these AIs have gotten. While I do worry about the future of a junior dev, I am convinced that the job of software engineers isn't going away, although the definition of that role will change substantially.

Try this: have Gemini analyze your local tax return and ask it a complex question, like how Line 2 on state tax return form xyz is calculated. You'd be amazed that not only will it give you a clear explanation, but it will also show you any mistakes you made, or frankly, mistakes that the most popular tax preparation software makes.

You can ask Gemini to learn new things. Try these prompts:

*What are some popular prompt-to-video AI models that I can run locally on a Mac?*

*What is a good treadmill I can buy with a width of more than 2 feet, and length more than 4 feet, that does not stress my joints?*

*Do a technical analysis on the $MSFT chart and make price and level predictions for key buy and sell levels.*

*I am running out of disk space. Find the largest files on my disk that I could delete to make space.*

Or upload your car manual and ask it, “What does the 15,000-mile service include?”

I haven't even begun to explore other capabilities, such as saving workflows, adding MCP servers, authoring skills, running security checks, etc.

I hope to talk more about those in an upcoming article. But I am very curious: How are you using AI in your dev lifecycle? What do you think the future holds for developers?

Until next time, happy coding!