Robert's Blog

The random thoughts of Robert Barrios

  • Claude Desktop and OpenAI Atlas can now control your browser: click buttons, fill forms, navigate between tabs. I tested this by having an agent edit a Google Sheet and then create templated letters pulling data directly from that sheet. No APIs, no integrations, just plain-English instructions to an AI that can see your screen and move your mouse.

    If you’re running web-based SaaS applications, this changes how you think about automation. Traditional RPA needed perfect APIs and rigid workflows. These AI agents work with the messy reality of how web apps actually behave: they adapt to layout changes, handle exceptions, and figure out the next step without breaking when a button moves three pixels to the left.

    Think about your internal tools right now. All those one-off scripts someone wrote to move data between systems, all those manual processes where someone copies from one web form and pastes into another, all those “it only takes 10 minutes” tasks that eat up hours across your team. That’s the automation sweet spot these browser agents are built for.

    I’ll be honest: both are still slow and sluggish right now. You’re not going to impress anyone watching these things think through each click in real time. But the trajectory is clear, and in a year or less the speed will be there. We’ve seen this pattern before with LLMs: the capability shows up first, then the performance catches up fast.

    I’m watching Google scramble to get Gemini integrated into Chrome because they see what’s coming. If I were Microsoft or Apple, I’d be building this straight into the OS layer, not just the browser. The company that owns the interface layer where AI agents operate is going to have a massive advantage in the next phase of enterprise software.

    The window between “interesting demo” and “competitive necessity” is getting shorter every quarter. Time to start testing.

  • [Image: a pyramid with five layers, bottom to top: Simplify, Standardize, Digitize, Automate, Agentic]

    The Maslow’s hierarchy post resonated with a lot of you, so let me break down the actual framework. Five layers that build on each other, and if you skip one, the whole thing falls apart. But here’s what’s different now: LLMs can contextualize in ways traditional automation never could, which means we can be a bit more forgiving about perfect standardization.

    Simplify is still the foundation. Before you automate anything, before you even think about AI agents, you need to eliminate as much of the unnecessary complexity in your processes as you can. This isn’t glamorous work. It’s saying no to exceptions, consolidating workflows, and removing steps that exist because “that’s how we’ve always done it.” The difference now is that LLMs can handle some variability that would have broken traditional automation, but you still can’t build on chaos.

    Standardize comes next. Once you’ve simplified, you need consistency. Same inputs, same outputs, every time. This is where you document what actually happens versus what you wish happened. Here’s where the technology helps: LLMs can work with ambiguity better than rule-based systems ever could, so your standardization doesn’t have to be perfect to start seeing value.

    Digitize is where your standardized processes move into systems. Not spreadsheets, email, or analog paper processes. Real systems of record that capture your business logic and make it accessible. The context window of modern LLMs means they can pull from messy data sources and still make sense of them, but you still need that data captured somewhere.

    Automate is the layer most people jump to first, and that’s the mistake. Automation only works when the three layers below are solid. This is your RPA, your integration layer, your APIs and data warehouses, your MCP connections. You’re orchestrating the digital processes you built. LLMs make this more resilient because they can handle edge cases and inconsistencies that would crash traditional workflows.

    Agentic sits at the top. This is where your AI agents actually deliver value because they have clean, standardized, digitized, automated processes to work with. The ability to contextualize means your agents can navigate imperfect data and still execute, but without the foundation, you’re just building expensive demos.

    I’m going to break down each layer in detail over the next few posts. Starting with Simplify, because that’s where most AI strategies actually fail, even with all the flexibility LLMs provide. More coming soon.

    #AI #EnterpriseStrategy #DigitalTransformation #ProcessImprovement #TechLeadership #CIO #AIStrategy #BusinessProcesses

  • Everyone’s rushing to implement AI agents, but most companies are missing the fundamentals. Think about Maslow’s hierarchy of needs: you can’t worry about self-actualization when you’re still figuring out basic survival.

    AI implementation follows the same pattern. I keep seeing organizations trying to deploy sophisticated LLM architectures while their foundational processes are still manual chaos. There’s a natural hierarchy here that works.

    Start with standardized processes. If your workflows aren’t documented and repeatable, AI will just automate your inconsistencies at scale. You need process maturity before you need artificial intelligence.

    Next comes digital capture: those standardized processes have to live in systems, not in people’s heads or email threads. This is your system-of-record layer: ERP, CRM, whatever actually captures your business logic.

    Then you need integration. Your data has to be accessible through APIs and consolidated in warehouses. Siloed information doesn’t help anyone. This includes exposing your data through protocols like MCP so your AI systems can actually connect to your business context. This layer determines whether your data architecture enables AI or becomes a bottleneck.

    After that comes your LLM architecture: vector databases, model orchestration, prompt engineering frameworks. This only works if the layers below are solid.

    Finally you get to AI agents at the top. These consume everything underneath to deliver business value. But they’re only as good as their foundation.

    Most companies try building from the top down. It’s like trying to feel self-actualized while your basic needs aren’t met. Build the foundation first, work your way up, and your AI agents will actually transform operations instead of creating expensive demos.

    #AI #DigitalTransformation #TechLeadership #EnterpriseAI #CIO #BusinessProcesses #DataStrategy #ArtificialIntelligence #Innovation

  • When I demo how I actually vibe code using CLI tools, jaws hit the floor. The difference isn’t in the chat window – it’s in the terminal, where your code assistant becomes your full-stack orchestrator. I’m talking about Claude Code or Amazon Q Developer integrated with your entire ecosystem: AWS CLI, GitHub, Linear, Docker, local services, the works.

    The timeline tells the story of this evolution. Anthropic launched Claude Code in research preview in February 2025, going fully live with Claude 4 in May 2025. OpenAI followed with their Codex CLI in April 2025. Google joined the party with Gemini CLI in June 2025. AWS had been quietly building this capability through their Amazon Q Developer platform, evolving from CodeWhisperer. The CLI is the new battleground for AI-assisted development.

    Your CLI, whether on your local machine or in the cloud, coupled with CLI tools for external services like GitHub and AWS, plus an MCP server for Linear, gives your code assistant access to everything without ever leaving your terminal. You can deploy an EC2 instance without knowing the syntax. But here’s the workflow that blows minds: you can tell your assistant “go check Linear for my latest assigned issue,” and watch it pull the story details, understand the requirements, write the code, run the tests, generate a descriptive commit message, push to your remote repo, create a pull request, and then update the Linear issue with the PR link and a status change to “In Review.” That’s a complete development cycle executed by describing intent in plain English.
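
    The Linear side of that workflow is wired up through MCP configuration. As a rough sketch only – server URLs, helper packages, and the exact config file format change often, so treat every name below as an assumption and check the current Claude Code and Linear docs – a project-level `.mcp.json` might look like this:

```json
{
  "mcpServers": {
    "linear": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://mcp.linear.app/sse"]
    }
  }
}
```

    Once a server like this is registered, the assistant can call its tools (fetch an issue, update its status) the same way it runs local shell commands.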

    Now try this: spin up multiple terminal windows with different git branches for the same feature. You can have your assistant try different approaches across those branches simultaneously – one exploring a React solution, another testing a Vue approach, maybe a third experimenting with server-side rendering. You’ve just multiplied your development resources and can compare real working code instead of theoretical approaches. Just make sure to use descriptive branch names (feature/react-approach, feature/vue-approach) and clean up the unused branches afterward to avoid repo clutter.
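
    A single clone can only have one branch checked out at a time, so the multi-window setup needs each branch in its own directory. One way to do that without cloning the repo several times – my suggestion, not something named above – is `git worktree`. A runnable sketch using a throwaway repo:

```shell
# Create a scratch repo with two feature branches, then give each
# branch its own working directory via git worktree.
set -e
base=$(mktemp -d)
cd "$base"
git init -q main-checkout
cd main-checkout
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"
git branch feature/react-approach
git branch feature/vue-approach

# One directory (and one terminal/assistant session) per branch.
git worktree add ../react-approach feature/react-approach
git worktree add ../vue-approach feature/vue-approach
git worktree list   # main checkout plus both worktrees
```

    When you’re done comparing, `git worktree remove <dir>` and `git branch -d <name>` handle the cleanup the post recommends.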

    Think of traditional LLM usage like having a really smart research assistant who can only write notes. But CLI-integrated coding assistants? That’s like having a senior developer who can actually execute across your whole infrastructure stack. They’re not just suggesting docker commands or AWS deployment steps, they’re running them. Building your app, spinning up containers locally, pushing to cloud services, deploying to production environments – all while you focus on the business logic.

    The paradigm shift is significant. I don’t need to context-switch between my IDE, terminal, AWS console, and project management tools. The assistant handles the orchestration layer while I stay in flow state. It’s not about memorizing complex commands or remembering the right cloud syntax anymore – it’s about describing intent and watching it happen.

    This is where AI-assisted development gets genuinely transformative. We’re not just automating code generation; we’re automating the entire development workflow.

  • We’ve been talking about AI accelerating development, but there’s another side to consider: AI doesn’t just write code faster – it writes vulnerable code faster too.

    AI models were trained on decades of code from Stack Overflow and GitHub repos, including all the bad examples. When you ask AI to “build a user login system,” it might give you something that works perfectly but stores passwords in plain text.

    Traditional code reviews often miss this. Your senior developers are focused on logic errors and performance issues, not spotting security anti-patterns in AI-generated code blocks they didn’t write themselves.

    The solution is being proactive with your AI interactions. Whether you’re using Claude Code with a CLAUDE.md file or Amazon Q Developer with custom rules in the .amazonq/rules folder, define your security requirements upfront. For example: “Never store passwords in plain text. Always use bcrypt or similar hashing. Include input validation for all user data. Follow OWASP guidelines for authentication.”
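
    As a concrete illustration, a security section in a project rules file might look like this – the wording and rule choices are mine, not a prescribed format:

```markdown
## Security rules

- Never store passwords in plain text; hash with bcrypt or argon2.
- Validate and sanitize all user-supplied input at the trust boundary.
- Use parameterized queries only; never build SQL by string concatenation.
- Follow OWASP guidance for authentication and session management.
- Never hard-code secrets; load credentials from environment variables.
```

    The point is that these rules ride along with every prompt, so the assistant applies them even when nobody remembers to ask.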

    Treat your AI code like third-party libraries. You wouldn’t deploy external dependencies without security scanning, so why treat AI-generated code differently? Include tools like Semgrep and CodeQL directly in your CI/CD pipeline – make security scanning a required gate, not an optional review step.
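
    For instance, a required Semgrep gate in GitHub Actions could look roughly like this – workflow layout and image name are illustrative, so check Semgrep’s current CI docs before copying:

```yaml
# .github/workflows/security-scan.yml  (hypothetical sketch)
name: security-scan
on: [pull_request]

jobs:
  semgrep:
    runs-on: ubuntu-latest
    container: semgrep/semgrep   # official Semgrep image
    steps:
      - uses: actions/checkout@v4
      # --error makes findings fail the job, turning the scan
      # from an advisory report into a required gate
      - run: semgrep scan --config auto --error .
```

    Marking this job as a required status check on the protected branch is what makes it a gate rather than a suggestion.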

    The speed advantage of AI development only works if you can deploy safely. Getting this right means building security into the development process, not bolting it on afterward.

    #CyberSecurity #AICodeAssistant #TechLeadership #CIO #SecureCode #DevSecOps #RiskManagement #SoftwareDevelopment

  • Last week I talked about how proper issue tracking becomes critical when AI accelerates your development cycle. The response was overwhelming – lots of teams recognizing they’re seeing 3-5x more commits but their processes haven’t caught up.

    Here’s the reality check: your CI/CD pipeline was designed for the old world. When developers were manually writing every line, pushing 2-3 commits per day was normal. Your build processes, testing suites, and deployment workflows were optimized for that pace.

    Now your team is pushing 10-15 commits daily, and suddenly your “fast” 20-minute build pipeline becomes a traffic jam. It’s like trying to funnel a fire hose through a garden sprinkler.

    Three things need immediate attention:

    Code Commits: Your commit standards matter more than ever. When AI is generating large code blocks, sloppy commit messages and massive changesets become organizational debt. You need atomic commits with clear descriptions – not because it’s good practice, but because it’s survival.

    CI/CD Pipeline: That comprehensive test suite that takes 45 minutes? It’s now your biggest bottleneck. You need parallel execution, smarter test selection, and staged deployments. What used to be “thorough” is now just slow.
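
    “Smarter test selection” can start very simply: run only the tests whose source modules changed. A toy sketch, assuming a naming convention (mine, for illustration) where tests/test_<module>.py covers src/<module>.py:

```python
from pathlib import PurePosixPath


def select_tests(changed_files):
    """Map changed files to the test files that should run,
    assuming tests/test_<module>.py covers src/<module>.py."""
    selected = set()
    for f in changed_files:
        p = PurePosixPath(f)
        if p.parts[:1] == ("src",) and p.suffix == ".py":
            selected.add(f"tests/test_{p.stem}.py")
        elif p.parts[:1] == ("tests",):
            selected.add(f)  # a changed test always re-runs
    return sorted(selected)


changed = ["src/billing.py", "tests/test_auth.py", "README.md"]
print(select_tests(changed))
# → ['tests/test_auth.py', 'tests/test_billing.py']
```

    In a real pipeline you would feed `git diff --name-only` into this and fall back to the full suite whenever shared code or the build config changes.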

    Code Reviews: Here’s the big shift – you’re not catching requirements misunderstandings anymore (good issue tracking solved that). You’re validating AI-generated code quality, checking for security vulnerabilities, and ensuring architectural consistency. Different focus, different skills needed.

    Think of it like this: you upgraded from a bicycle to a motorcycle, but you’re still using bicycle brakes. The speed is exhilarating until you need to stop.

    The teams adapting fastest aren’t just using better AI tools – they’re completely rethinking their development operations. They’re treating process scalability as seriously as code scalability.

    Because here’s the uncomfortable truth: if your deployment process can’t keep up with your development speed, all that AI productivity just turns into a backlog of finished features that can’t ship.

    Your competitive advantage isn’t how fast you can write code anymore. It’s how fast you can safely deploy it.

  • A few weeks ago, I wrote about how vibe coding is creating front row seats to the next wave of industry disruption – comparing it to software’s Blockbuster moment. The response was immediate: “but what about code quality?” and “how do you manage the chaos?”

    Those questions led me to explore what I call the maturation curve of AI-assisted development, which I detailed in my latest post on spec-driven development with AI coding agents. The reality is that vibe coding gets you lightning-fast prototypes, but the gap between impressive demos and production systems is where most teams struggle.

    Here’s what I didn’t fully explore in either post: the mechanism that bridges this gap isn’t just better prompts or smarter AI. It’s fundamentally changing how we approach issue tracking.

    Think about your typical development workflow. Business stakeholder says “users want better reporting.” Developer spends an hour in Slack trying to decode what that means. They build something. Three feedback rounds later, it’s still not right. Sound familiar?

    Now imagine this: you describe that feature to your coding assistant, and it generates comprehensive Git issues – complete with user stories, acceptance criteria, technical requirements, and test cases. But here’s the key difference: these aren’t just documentation. They become the single source of truth that drives your entire development process.
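
    To make that concrete, here’s the shape of one such generated issue – the feature and details are invented for illustration:

```markdown
## User story
As an account manager, I want to export the monthly usage report as CSV
so that I can share it with clients who don't have dashboard access.

## Acceptance criteria
- [ ] An "Export CSV" button appears on the reporting page
- [ ] The export respects the currently applied date-range filter
- [ ] Rows match the on-screen table exactly (same columns, same order)
- [ ] An empty result set produces a header-only file, not an error

## Technical notes
- Stream the file server-side; don't build it in browser memory
- Reuse the existing report query; no new aggregation logic

## Test cases
- Export with the default filter, a custom filter, and an empty result set
```

    Notice that “done” is unambiguous before anyone writes a line of code.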

    Good issue tracking creates what I call “shared certainty” across your team. When your Git issue clearly defines “done” before anyone writes code, you eliminate the interpretation gaps that kill productivity. Junior developers know exactly what to build. Senior developers can spot architectural problems before they happen. Code reviews focus on implementation quality, not requirements clarification.

    This is why issue tracking becomes the critical mechanism in the AI development era. It’s not just about organizing work – it’s about creating the structured communication layer that lets human judgment guide AI capability.

    But here’s the consequence nobody talks about: when you can generate detailed specs this quickly and AI can turn those specs into working code just as fast, your commit velocity explodes. Teams are seeing 3-5x more commits per day.

    Which means your CI/CD pipeline, designed for traditional development speeds, suddenly becomes your biggest bottleneck.

    More on that challenge next week – because if you’re not rethinking your deployment discipline right now, you’re about to get buried by your own productivity gains.

  • Vibe coding has transformed how we approach development. The ability to describe what you want and get working code immediately is genuinely powerful. Our teams are shipping prototypes faster than ever.

    The challenge comes when those prototypes need to become production systems. Without clear requirements or documentation, maintenance becomes a nightmare and knowledge transfer fails.

    This is where spec-driven development changes the game. Tools like kiro.dev take the same conversational AI approach but add the structure enterprise teams actually need. You still get the speed of AI-assisted coding, but with proper requirements, design documentation, and implementation plans.

    Here’s what’s working for us: whether you’re using Claude Code with a CLAUDE.md file or Amazon Q Developer with custom rules, the key is defining your development standards upfront. Document your coding practices, architecture patterns, testing requirements, and security guidelines. Make these part of your AI interactions from the start.

    For example, in our CLAUDE.md file, we specify: “All new features must follow test-driven development. Write tests first using the AAA pattern – Arrange (setup), Act (execute), Assert (verify). Include both positive and negative test cases. Use descriptive test names that explain the business scenario being tested.”
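
    In practice, a test following that rule looks something like this – a made-up example using plain asserts rather than any particular framework:

```python
def apply_loyalty_discount(total, years_as_customer):
    """10% off for customers of five or more years (illustrative logic)."""
    return round(total * 0.9, 2) if years_as_customer >= 5 else total


def test_long_term_customer_gets_ten_percent_discount():
    # Arrange: a five-year customer with a $200.00 cart
    total, years = 200.00, 5
    # Act: apply the discount policy
    discounted = apply_loyalty_discount(total, years)
    # Assert: exactly 10% was taken off
    assert discounted == 180.00


def test_new_customer_pays_full_price():
    # Negative case: one year of tenure earns no discount
    assert apply_loyalty_discount(200.00, 1) == 200.00


test_long_term_customer_gets_ten_percent_discount()
test_new_customer_pays_full_price()
```

    The test names read like the business scenario, which is exactly what makes them useful when an AI (or a new teammate) has to infer intent later.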

    Instead of “build me a review system,” you get user stories, technical specifications, and task breakdowns that your entire team can follow. The AI understands your context and delivers code that fits your existing systems and standards, complete with comprehensive tests written before implementation.

    This isn’t about replacing creativity with process. It’s about making AI a more reliable partner by being explicit about expectations. When your AI assistant knows your team’s definition of “done,” the results are significantly more predictable and production-ready.

    The evolution from vibe coding to spec-driven development represents the maturation of AI-assisted development. We’re moving from impressive demos to sustainable systems.

    What’s your experience with AI-assisted development? Are you seeing similar challenges moving from prototype to production?

    Next week, I’ll be diving into why proper Git practices become even more critical when AI is writing code – and how poor version control can amplify technical debt exponentially.

    #AICodeAssistant #SoftwareDevelopment #SpecDrivenDevelopment #TechLeadership #CIO #DigitalTransformation #TestDrivenDevelopment #EnterpriseIT

  • Remember when Blockbuster executives looked at Netflix and said “people will always want to browse physical movies” and “late fees are 16% of our revenue”? Or when Kodak invented the digital camera in 1975 but buried it because it threatened their film business?

    We’re about to watch this happen again, this time to software companies and system integrators. Vibe coding is having its moment, and I’m watching the incumbents react exactly like those executives did twenty years ago.

    For those not familiar, vibe coding is where you describe what you want in plain English and AI writes the code. No syntax, no debugging, no late nights staring at stack traces. Just “build me a customer dashboard with these metrics” and boom – working software. Andrej Karpathy coined the term earlier this year, and it’s spreading like wildfire through Silicon Valley.

    The knee-jerk reaction from traditional software shops and systems integration firms? “That’s not real programming.” “The code quality will be terrible.” “You need to understand what you’re building.” Sound familiar? It’s the same playbook every incumbent uses when disruption comes knocking.

    But here’s what’s really happening. Companies using vibe coding are reporting 30-55% faster development cycles. Startups are building entire products with 90% AI-generated code. Non-technical founders are creating working prototypes in hours, not months. The barrier to entry is collapsing, and with it, the premium software companies and system integrators have always charged for technical expertise.

    The smart money isn’t asking whether vibe coding will replace traditional development – it’s asking how fast. Because when the democratization wave hits, it doesn’t matter how good your developers are. What matters is how quickly you can turn business ideas into working solutions.

    I’ve been in this industry long enough to know that the companies that survive disruption aren’t the ones with the best technology. They’re the ones that figure out how to use the new technology to serve customers better, faster, and cheaper than everyone else.

    The question isn’t whether vibe coding is here to stay. The question is whether your revenue model can survive when anyone can build software.

    There’s a big shift happening in how we all look up information online. Not too long ago, “Googling” was the default way of finding answers. But now, a lot of people are starting to just chat with an AI, like ChatGPT or Perplexity, instead of searching the web the old way. I see Google’s new AI Mode as a direct response to this change. Some 27% of people in the US now use AI tools instead of traditional search engines, and Google’s search share has dipped below 90% for the first time in years, so this move matters a great deal to them.

    The whole idea behind AI Mode is to make search feel more like a conversation. Instead of just giving you a bunch of links, Google’s aiming to give you real, detailed answers, kind of like what you get from ChatGPT or Perplexity. You can ask complicated questions, follow up with more questions, and actually get a synthesized answer, not just a list of websites.

    By adding AI Mode directly into Search, Google is hoping to win back the 27% of users who have started drifting to other platforms. It’s a smart move: most people already use Google, so bringing this new AI experience right into the main product makes it easy for everyone to try it out.

    Looking ahead, I think this is just the beginning. Google wants to make sure Search stays relevant, and AI Mode is their way of showing they’re not just keeping up, they’re trying to lead the way. It’s all about making search smarter, more helpful, and more conversational. The competition is heating up, and Google’s making it clear they’re still in the game.

    What’s really striking is that with this shift, companies really need to start thinking about how their products show up in these new AI-powered answers. To make sure your product is included, focus on complete, accurate, and detailed product data; use conversational keywords; and add structured data like schema markup so AI can easily understand and recommend your products in relevant searches. I’ll post more on this later.
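
    Schema markup here typically means a JSON-LD block using schema.org vocabulary, embedded in the page. A minimal Python sketch that emits a Product snippet – the product details are invented, and which fields AI answer engines actually weight is an open question:

```python
import json

# Hypothetical product record, e.g. pulled from a catalog database.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Acme Ergonomic Desk Chair",
    "description": "Adjustable mesh-back office chair with lumbar support.",
    "sku": "ACME-CHAIR-001",
    "brand": {"@type": "Brand", "name": "Acme"},
    "offers": {
        "@type": "Offer",
        "price": "249.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Embed this in the page <head> so crawlers and AI answer
# engines can parse the product without scraping the layout.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(product, indent=2)
    + "\n</script>"
)
print(snippet)
```

    The same structured record also doubles as the “complete, accurate, detailed product data” the paragraph above calls for: one canonical description, machine-readable.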
