
How my LLM usage has evolved over time

How my LLM usage has evolved since the early days of ChatGPT

LLMs · Productivity
March 13, 2025
8 min read

Back in late 2022, I kept seeing ChatGPT blow up on Twitter. Like everyone else, I had to check it out and started with the simple chatbot features.

My first wow moment with LLMs

My first real "wow" moment with LLMs wasn't even work-related. It was Christmas 2022, and I wanted to do something special for our family gathering. My wife and I had an idea to use GPT-3.5 to write personalized poems about each family member.

These weren't generic "roses are red" type poems. They started off ambiguous before dropping specific details that made it obvious who each poem was about. Taking turns reading them aloud created this really special moment. Everyone felt seen and celebrated in a unique way.

I'm definitely not a poet, and I couldn't have created that experience without AI. It was my first glimpse of how these tools could genuinely improve human connection and wield words more effectively than most of us can.

Back to work

Soon after, I started using GPT-3.5 for basic coding and work-related tasks. GPT-4's release was a huge jump for me as a user.

GPT-4 felt like talking to something truly intelligent. Code that would've taken me hours to write was generated in seconds, again and again. Its reasoning was on par with someone with several years of professional experience, and I suddenly had a legitimate co-pilot for all my work.

When Claude Sonnet 3.5 was released, it became my daily driver for front-end development and eventually all code generation. It would solve complex coding problems in one shot that GPT models needed multiple attempts for or would fail on entirely. Sonnet 3.5 was great at all aspects of code generation, including infrastructure as code.

The progression kept accelerating:

  • Agentic IDEs with Claude: My coding environment started understanding my entire project, not just snippets, and could take multi-step actions with reasonably high reliability. $20/month is too cheap.
  • OpenAI's o1 and o1 Pro: These models would spend 5-10 minutes thinking about complex problems before giving remarkably better answers.
  • Grok 3 from xAI: Showed that OpenAI, Anthropic, and Google won't dominate frontier model development forever, and there is room for new players to innovate with great design and user experience.
  • Real-time voice interfaces: Changed how and where I could use these tools.

Key product release timeline

Model | Release Date | Breakthrough
GPT-3.5 | November 30, 2022 | First widely accessible advanced LLM
GPT-4 | March 14, 2023 | Major reasoning improvements, multi-modal capabilities
Claude Sonnet 3.5 | June 19, 2024 | Superior code generation, proof of model personality
o1 / o1 Pro | December 5, 2024 | Proof that scaling inference compute gives better output
Grok 3 | February 17, 2025 | Real-time voice modes, innovative UX
Claude Code | February 24, 2025 | Expensive, but highly reliable coding agent
GPT-4.5 | February 27, 2025 | Improved creativity and human-like tone

The quality improvements arriving every few months have been mind-blowing. GPT-4.5 showed that OpenAI can repeatedly climb back to the top as a state-of-the-art frontier model provider.

Going all-in

These tools are now woven into basically everything I do professionally:

When I'm writing, I'll brain-dump my thoughts, then have an LLM help reorganize them into something coherent. It's easier than ever to articulate great ideas and filter out bad ones.
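
In practice, that reorganize step is just a single API call. Here's a minimal sketch using the OpenAI Python SDK; the model name, prompt wording, and notes are illustrative placeholders rather than my exact setup.

```python
# Minimal sketch of the brain-dump-to-draft workflow (OpenAI Python SDK).
# Model name and prompt are assumptions for illustration, not a prescription.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

brain_dump = """
- premium tiers keep getting pricier
- models leapfrog each other every few months
- voice mode turned my commute into work time
"""

response = client.chat.completions.create(
    model="gpt-4o",  # any capable model works here
    messages=[
        {
            "role": "system",
            "content": "Reorganize these rough notes into a coherent outline, keeping the author's voice.",
        },
        {"role": "user", "content": brain_dump},
    ],
)

# Print the reorganized outline to review and edit by hand.
print(response.choices[0].message.content)
```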

I often use advanced voice mode while walking or driving to brainstorm ideas. My car has basically become an extension of my workspace.

Model preferences over time

Each new model seems to take the crown for a few months before something better comes along. I've embraced this rapid progress rather than getting stuck with one product.

GPT-4 impressed me with its raw intelligence. Claude is amazing at code generation. o1-pro blew me away with its deep thinking and long-context ability. Grok 3 has pushed the UX and assistant quality forward.

Rather than picking one, I've learned to use different models for different tasks.

The economic model

The economics of these tools fascinates me. We're seeing longer context handling, decreasing costs per token, and responses that feel human rather than robotic.

When GPT-4 first launched, its value was so obvious that I thought a $2,000 monthly subscription would have been reasonable. The actual $20 price seemed absurdly cheap.

Premium tiers have now reached $200 monthly, and I wouldn't be shocked to see the recent headlines about $2,000 or even $20,000 tiers become real. These tools genuinely deliver that much economic value to a certain segment of users.

What's next?

The jump from Christmas poems to full-stack coding assistants happened in two years. The next two years? I won't pretend to know for sure, but it's going to be equally inspiring.

One thing is for sure: my monthly budget for tokens is going to keep increasing.

Written by Sachin Dhar