Agentic Coding In Practice

In the last post, I explained the basic concepts of agentic coding - the terminology, the constructs, when to use what. It was mostly theoretical. This post is the practical follow-up that describes how I actually build features with AI agents day to day.

I hesitate to call these ‘best practices’ because in this fast-moving space, that label just means ‘what someone discovered last week’. So, think of this as a field report: here’s what works for me today, and why.

Agentic Coding: The Basic Concepts

I used to think that I love coding, but in the last year I came to realize that what I love more is building - creating something useful and beautiful. This time last year, my coding workflow was to fire up VS Code with Claude in the browser or GitHub Copilot in ‘Ask’ mode, brainstorm with a model, review the solutions suggested by the LLM, copy code into the editor, test and deploy 🚀. This was fun in the beginning, but soon the context-switching became tedious and broke the flow of building.

Compare AI Tools: LLMs and AI Assistants

In this post, we will compare the most popular AI tools (frontier models and AI assistants) based on their capabilities, their limitations, and my personal experience of using them day to day.

To accommodate the multiple dimensions of each model, this comparison is represented as mind maps in three parts:

  1. Language models - the most popular text-based models, used for text-to-text content generation.
  2. AI assistants - the chatbots powered by one or more of the above models.
  3. Other models - models used for text-to-image, text-to-voice, or text-to-video use cases.

At the end, we will also look at the common tasks we try to accomplish with these tools, along with recommendations on which tool is best suited for each.