Robert's Blog

The random thoughts of Robert Barrios

If you think “vibe coding” is just fancy copy-paste from ChatGPT, you’re not doing it right.

When I demo how I actually vibe code using CLI tools, jaws hit the floor. The difference isn’t in the chat window – it’s in the terminal, where your code assistant becomes your full-stack orchestrator. I’m talking about Claude Code or Amazon Q Developer integrated with your entire ecosystem: the AWS CLI, GitHub, Linear, Docker, local services, the works.

The timeline tells the story of this evolution. Anthropic launched Claude Code in research preview in February 2025 and took it fully live alongside Claude 4 in May 2025. OpenAI followed with Codex CLI in April 2025, and Google joined the party with Gemini CLI in mid-2025. AWS had been quietly building the same capability in Amazon Q Developer, the evolution of CodeWhisperer. The CLI is the new battleground for AI-assisted development.

Your CLI, whether on your local machine or in the cloud, coupled with CLI tools for external services like GitHub and AWS, plus MCP servers for tools like Linear, gives your code assistant access to everything without ever leaving your terminal. You can deploy an EC2 instance without knowing the syntax. But here’s the workflow that blows minds: you can tell your assistant “go check Linear for my latest assigned issue,” and watch it pull the story details, understand the requirements, write the code, run the tests, generate a descriptive commit message, push to your remote repo, create a pull request, and then update the Linear issue with the PR link and flip its status to “In Review.” That’s a complete development cycle executed by describing intent in plain English.
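To make the plumbing concrete: in Claude Code, external services are attached as MCP servers, typically declared in a project-level `.mcp.json`. Here’s a minimal sketch that writes one – the Linear endpoint URL and the SSE transport are assumptions to verify against the current Linear and Claude Code docs, not gospel:

```shell
# Hypothetical project-level MCP config for Claude Code.
# Work in a scratch directory so nothing in the real project is touched.
cd "$(mktemp -d)"

cat > .mcp.json <<'EOF'
{
  "mcpServers": {
    "linear": {
      "type": "sse",
      "url": "https://mcp.linear.app/sse"
    }
  }
}
EOF

cat .mcp.json
```

With something like this in place, a prompt such as “check Linear for my latest assigned issue” resolves through the MCP server instead of through copy-paste.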

Now try this: spin up multiple terminal windows with different git branches for the same feature. You can have your assistant try different approaches across those branches simultaneously – one exploring a React solution, another testing a Vue approach, maybe a third experimenting with server-side rendering. You’ve just multiplied your development resources and can compare real working code instead of theoretical approaches. Just make sure to use descriptive branch names (feature/react-approach, feature/vue-approach) and clean up the unused branches afterward to avoid repo clutter.

Think of traditional LLM usage as having a really smart research assistant who can only write notes. But CLI-integrated coding assistants? That’s like having a senior developer who can actually execute across your whole infrastructure stack. They’re not just suggesting docker commands or AWS deployment steps; they’re running them. Building your app, spinning up containers locally, pushing to cloud services, deploying to production environments – all while you focus on the business logic.

The paradigm shift is significant. I don’t need to context-switch between my IDE, terminal, AWS console, and project management tools. The assistant handles the orchestration layer while I stay in flow state. It’s no longer about memorizing complex commands or remembering the right cloud syntax – it’s about describing intent and watching it happen.

This is where AI-assisted development gets genuinely transformative. We’re not just automating code generation; we’re automating the entire development workflow.
