?? → BI → ML → AI → ??

AI’s past and the future

Where acronyms in business come from, what they sold, who won, and what might come after “AI”

Acronyms are the currency of business storytelling. They compress complex technology into a neat package a salesperson can pitch in a single slide: CRM, ERP, BI, ML, AI. Each one marked a shift in what companies sold to their customers and how value was captured. I want to walk through that history briefly and honestly, with business examples and what “winning” looked like in each era, and then make a practical, evidence-based prediction for what comes after AI. I’ll finish with concrete signs companies and entrepreneurs should watch if they want to be on the winning side next.


The pre-acronym age: data collectors and automation (before CRM/ERP took over)

Before the catchy three-letter packages, businesses bought automation and niche systems: financial ledgers, bespoke reporting scripts, and the earliest mainframe systems. The selling point was efficiency: replace paper, reduce human error, scale payroll or accounting.

Winners: large system integrators and early software firms that could deliver reliability and scale. Value to the customer was operational: fewer mistakes, faster month-end closes, predictable processes.

This era set the expectation that software replaces tedious human work, an expectation every later acronym exploited and monetized.


CRM / ERP: the era of process standardization and cross-company suites

Acronyms like ERP and CRM told customers what problem a vendor solved: enterprise resource planning for the core business, customer relationship management for sales and marketing. The message was simple: centralize and standardize.

Business sales example: SAP and Oracle sold ERP as a bet on process control; Siebel (later acquired by Oracle) sold CRM as the way to professionalize sales organizations. Projects were expensive, multi-year, and became investments in repeatability and governance. The commercial model was license + services. Success looked like longer, stickier contracts and high services revenue.

Winners: vendors who could sell a vision of stability and then deliver implementation expertise.


BI (Business Intelligence): data becomes a product

BI formalized the idea that data itself is valuable: dashboards, reports, and the ability to make decisions from consolidated datasets. The term was popularized in the late 1980s and 1990s as companies realized that aggregated data and fact-based dashboards could change executive decision making. BI vendors promised that data could be turned into actionable insight.

Business sales example: BusinessObjects, Cognos, and MicroStrategy sold a reliable narrative: centralize data, produce dashboards, and enable managers to make informed choices. Customers were large enterprises whose decisions had big dollar consequences: pricing, inventory, and marketing allocation.

Success metric: adoption by management, ROI from better decisions, and a move to subscription models as vendors evolved. BI also laid the foundation for data warehouses and ETL pipelines, the plumbing later eras would rely on.


ML (Machine Learning): predictions replace static dashboards

Machine learning shifted the promise from describing the past to predicting the future. ML isn’t a single product but a set of techniques that let systems learn patterns: recommendations, fraud detection, demand forecasting. Its commercialization accelerated as larger datasets and compute made models useful in production. (The timeline of ML milestones is long, from perceptrons to ImageNet and modern deep learning.)

Business sales example: Netflix used ML for recommendations (watch time → retention); Amazon used ML for recommendations and dynamic pricing; banks used ML for fraud detection. The product pitch became “we will increase revenue (or reduce losses) by X% using model-driven predictions.”

Success metric: measurable impact on key business metrics (conversion, churn, fraud rate) and repeatable MLOps pipelines. Winning companies built both the models and their integration into products and workflows; the second part mattered as much as the model.


AI (Artificial Intelligence): foundation models, agents, and ubiquity

“AI” is a broader, more emotionally charged badge than ML. It promises not just predictions but agency: systems that write, design, plan, and interact. The recent leap in capability comes from large foundation models and multimodal systems, and the market’s attention has become concentrated on a smaller set of platform players. OpenAI is the obvious poster child, widely integrated and publicly visible, and it’s now part of a small club of companies shaping how enterprises adopt AI. Others, such as Anthropic, Google/DeepMind, Microsoft (as a partner and investor), and Nvidia (as the infrastructure champion), are also core to who wins in the AI era. Recent reporting and market movement underscore how concentrated and influential these players are.

Business sales example: AI is sold as both a strategic platform and as task automation. Microsoft + OpenAI integrations sell enterprise productivity gains; Anthropic partners with platforms and enterprise vendors to bring chat/agent capabilities into products; Nvidia sells the hardware that makes large models economically viable. Sales morph into partnerships (platform + integration) and usage-based monetization (API calls, seats for AI assistants, compute consumption).

Success metric: ecosystem adoption and sticky integrations. The winners aren’t just model makers; they are the platforms that make models reliably usable within enterprise apps, the cloud vendors that provide the infrastructure, and the companies that embed AI into workflows to measurably lower costs or multiply revenue.


What’s next? Predicting the post-AI acronym

Acronyms rise from what businesses need to sell next. Right now, AI sells capability; tomorrow, the market will demand something different: not raw capability but safe, contextual, composable, and human-centric value. Based on where the money, engineering effort, and regulatory focus are going, here are a few candidate acronyms and my pick.

Candidate futures (short list)

  • CAI: Contextual AI
    Focus: models that understand user context (company data, regulations, customer history) and deliver context-aware outputs with provenance. Selling point: trust and relevance. Businesses pay for AI that “knows the company” and can operate under constraints.
  • A^2I / AI²: Augmented & Autonomous Intelligence
    Focus: agents that both augment humans and act autonomously on behalf of businesses (book meetings, negotiate, execute trades). Selling point: time reclaimed and tasks delegated with measurable outcomes.
  • DAI: Distributed AI
    Focus: moving models to the edge, on-device privacy, and federated learning. Selling point: privacy, latency, and regulatory compliance. Monetization: device + orchestration + certification.
  • HXI: Human-Centered Experience Intelligence (or HCI reimagined)
    Focus: design + AI that measurably improves human outcomes (productivity, wellbeing). Selling point: human adoption and long-term retention; less hype, more stickiness.
  • XAI: Explainable AI (commercialized)
    Focus: regulations and auditability breed a market for explainable models as first-class products. Selling point: compliance, audit trails, and legally defensible automation.

My prediction (the one I’d bet money on)

CAI: Contextual AI.
Why? The immediate commercial friction after capability is trust and integration. Companies will not pay forever for raw creativity if outputs can’t be traced to corporate data, policies, and goals. The era of foundation models created broad capabilities; the next era will productize those capabilities into contextualized, policy-aware services that integrate directly into enterprise systems (CRMs, ERPs, legal, finance) and produce auditable actions. In short: AI + enterprise context = the next product category.

Concrete signs for CAI already exist: enterprises demanding model fine-tuning on private corpora, partnerships between model-makers and enterprise software vendors, and regulatory attention pushing for explainability and provenance. Those are the ingredients for a context-first commercial product.

(If you prefer the agent narrative, A^2I, where agents actually do things reliably and accountably, is a close second. But agents without context are a liability; agents with context are a product.)


What winning looks like in CAI

If CAI becomes the next category, how do businesses win?

  1. Data integration champions: vendors that make it trivial to connect enterprise data (ERP, CRM, contracts) to models with privacy and governance baked in. The sales pitch: “We connect, govern, and make AI outputs auditable.”
  2. Actionable interfaces: not just a chat box, but agents that produce auditable actions inside workflows (e.g., “Create invoice,” “Propose contract clause,” “Adjust inventory reorder”). The pitch: “We reduce X hours/week for role Y.”
  3. Regulatory and risk products: explainability, model cards, audit logs, and compliance workflows become table stakes. Vendors packaging those for regulated industries will command higher multiples.
  4. Infra + economics: hardware and cloud vendors that optimize cost/performance for fine-tuned, context-rich models (Nvidia-like infra winners) will capture a slice. Recent market moves show infrastructure captures enormous value; watch the hardware and cloud players.

Practical advice for sellers and builders today

  • If you sell to enterprises: stop pitching “we use AI.” Start pitching what measurable outcome you deliver and how you keep it governed. Show integration architecture diagrams: where the data lives, what’s fine-tuned, and where the audit logs are.
  • If you build products: invest in connectors, provenance, and reversible actions. A product that lets customers roll back an AI decision will win trust and enterprise POs (a minimal sketch of what that can look like follows this list).
  • If you’re an investor or operator: look for companies that own context (industry datasets, domain rules, vertical workflows). Horizontal foundation models will be commoditized; contextual wrappers will be the economic moat.
  • If you’re an infra player: optimize for cost + compliance. The market will pay a premium for infra that matches enterprise security and cost constraints.
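
To make “reversible actions” concrete, here is a minimal sketch, in Python and purely illustrative, of what an auditable, reversible AI action layer could look like. The names (AuditedAction, AuditLog, execute_with_audit) and the in-memory log are assumptions for this example, not a reference to any particular product or API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Callable
import uuid


@dataclass
class AuditedAction:
    """One AI-proposed action, with enough context to audit and reverse it."""
    description: str                 # e.g. "Raise reorder point for SKU-123"
    apply: Callable[[], Any]         # executes the action against the enterprise system
    undo: Callable[[], Any]          # compensating action that reverses it
    source_documents: list[str]      # provenance: records and policies behind the proposal
    action_id: str = field(default_factory=lambda: str(uuid.uuid4()))


class AuditLog:
    """Append-only trail: proposals, executions, and rollbacks all get recorded."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, action: AuditedAction, event: str) -> None:
        self.entries.append({
            "action_id": action.action_id,
            "event": event,  # "proposed" / "executed" / "rolled_back"
            "description": action.description,
            "provenance": action.source_documents,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })


def execute_with_audit(action: AuditedAction, log: AuditLog, approved_by_human: bool) -> None:
    """Nothing runs without sign-off, and everything that runs leaves a trace."""
    log.record(action, "proposed")
    if not approved_by_human:
        return
    action.apply()
    log.record(action, "executed")


# Usage: the AI proposes, a human approves, and the decision stays reversible.
log = AuditLog()
action = AuditedAction(
    description="Raise reorder point for SKU-123 from 40 to 55",
    apply=lambda: print("reorder point -> 55"),
    undo=lambda: print("reorder point -> 40"),
    source_documents=["vendor SLA 2024-07", "inventory policy v3"],
)
execute_with_audit(action, log, approved_by_human=True)
action.undo()
log.record(action, "rolled_back")
```

The point of the design is that every action carries its own provenance and its own undo, so proposal, approval, execution, and rollback all land in the same audit trail.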

Example scenarios: how each era turned into commercial value

  • BI era: a retail chain buys a BI suite to consolidate POS data across stores. Result: optimized promotions, fewer stockouts, 3% margin improvement. The seller (BI vendor) expanded into recurring maintenance and cloud hosting.
  • ML era: an e-commerce platform adds recommendation models. Result: personalized homescreens boost AOV by 7%. The ML vendor sells models + integration and gets paid per API call and for model retraining.
  • AI era: an agency uses generative models to prototype marketing copy at scale. Result: faster iteration and lower creative costs; large platforms (OpenAI, Anthropic, Google) sell the models, cloud vendors sell the compute. OpenAI’s integrations made it a visible “winner” for developers and enterprises adopting chat/assistant features.
  • CAI era (predicted): the same retail chain buys a contextual assistant that reads contracts, vendor SLAs, and inventory rules, then suggests optimal promotions aligned with margin and regulatory rules. Result: promotions that respect contracts, better margins, and an auditable decision trail. Pricing: subscription + outcome share.

Acronyms are marketing. Value is behavioral change.

Acronyms succeed when they promise a specific, repeatable business result and when vendors can deliver measurable change in behavior. BI helped managers act on facts. ML helped products predict user intent. AI made interaction and creativity broadly available. The next profitable acronym, and my money is on CAI (Contextual AI), will sell trustworthy, context-aware automation that actually becomes part of the way companies operate.

If you’re building, selling, or investing: focus less on the label and more on the edges where value is realized (integration, governance, measurable business outcomes). That’s where the next winners will be, and where your clients will write the checks.

The Vibe Code Tax

Momentum is the oxygen of startups. Lose it, and you suffocate. Getting it back is harder than creating it in the first place.

Here’s the paradox founders hit early:

  • Move too slowly, searching for the “perfect” technical setup, and you’re dead before you start.
  • Move too fast with vibe-coded foundations, and you’re dead later in a louder, more painful way.

Both paths kill. They just work on different timelines.

Death by Hesitation

Friendster is a perfect example of death by hesitation. They had the idea years before Facebook. They had users. They had momentum.

But their tech couldn’t scale, and instead of fixing it fast, they stalled. Users defected. Momentum bled out. By the time they moved, Facebook and MySpace had already eaten their lunch.

That’s hesitation tax: waiting, tinkering, second-guessing while the world moves on.

Death by Vibe Coding

On the flip side, you get the vibe-coded death spiral.

Take Theranos. It wasn’t just fraud; it was vibe coding at scale. Demos that weren’t real. A prototype paraded as a product. By the time the truth surfaced, they’d burned through billions and a decade of time.

Or look at Quibi. They raced to market on duct-taped assumptions; the whole business was a vibe-coded bet that people wanted “TV, but shorter.” $1.75 billion later, they discovered the foundation was wrong.

That’s the danger of mistaking motion for progress.

The Right Way to Use Vibe Coding

Airbnb is the counterexample. Their first site was duct tape. Payments were hacked together. Listings were scraped. It was vibe code, but they treated it as a proof of concept, not a finished product.

The moment they proved demand (“people really will rent air mattresses from strangers”), they rebuilt. They didn’t cling to the prototype. They moved fast, validated, then leveled up.

That’s the correct use: vibe code as validation, not as production.

The Hidden Tax

The vibe code tax is brutal because it’s invisible at first. It’s not just money.

  • Lost time → The 6–12 months you’ll spend duct-taping something that later has to be rebuilt from scratch.
  • Lost customers → Early adopters churn when they realize your product is held together with gum and string. Most won’t return.
  • Lost momentum → Investors don’t like hearing “we’re rebuilding.” Momentum is a story you only get to tell once.

And you don’t get to dodge this tax. You either pay it early (by finding a technical co-founder or paying real engineers), or you pay it later (through rebuilds, lost customers, and wasted months).

How to Stay Alive

  1. Be honest. Call your vibe-coded MVP a prototype. Never pitch it as “production-ready.”
  2. Set a timer. Airbnb didn’t stay in duct tape land for years. They validated and moved on. You should too.
  3. Budget for the rebuild. If you don’t have a co-founder, assume you’ll need to pay engineers once the prototype proves itself.
  4. Go small but real. One feature built right is more valuable than ten features that crumble.

Final Word

The startup graveyard is full of companies that either waited too long or shipped too fast without a foundation. Friendster hesitated. Theranos faked it. Quibi mistook hype for traction.

Airbnb survived because they paid the vibe code tax on their terms. They used duct tape to test, then rebuilt before the cracks became fatal.

That’s the playbook.

Because no matter what, the vibe code tax always comes due.

Is AI Slowing Everyone Down?

Over the past year, we’ve all witnessed an AI gold rush. Companies of every size are racing to “adopt AI” before their competitors do, layering chatbots, content tools, and automation into their workflows. But here’s the uncomfortable question: is all of this actually making us more productive, or is AI quietly slowing us down?

A new term from Harvard Business Review, “workslop,” captures what many of us are starting to see. It refers to the flood of low-quality, AI-generated work products: memos, reports, slide decks, emails, even code snippets. The kind of content that looks polished at first glance but ultimately adds little value. Instead of clarity, we’re drowning in noise.

The Illusion of Productivity

AI outputs are fast, but speed doesn’t always equal progress. Generative AI makes it effortless to produce content, but that ease has created a different problem: oversupply. We’re seeing more documents, more proposals, more meeting summaries, but much of it lacks originality or critical thought.

When employees start using AI as a crutch instead of a tool, the result is extra layers of text that someone else has to review, fix, or ignore. What feels like efficiency often leads to more time spent filtering through workslop. The productivity gains AI promises on paper are, in practice, canceled out by the overhead of sorting the useful from the useless.

Numbers Don’t Lie

The MIT Media Lab recently published a sobering study on AI adoption. After surveying 350 employees, analyzing 300 public AI deployments, and interviewing 150 executives, the conclusion was blunt:

  • Fewer than 1 in 10 AI pilot projects generated meaningful revenue.
  • 95% of organizations reported zero return on their AI investments.

The financial markets noticed. AI stocks dipped after the report landed, signaling that investors are beginning to question whether this hype cycle can sustain itself without real business impact.

Why This Happens

The root cause isn’t AI itself; it’s how organizations are deploying it. Instead of rethinking workflows and aligning AI with core business goals, many companies are plugging AI in like a patch. “We need to use AI somewhere, anywhere.” The result is shallow implementations that create surface-level outputs without driving real outcomes.

It’s the same mistake businesses made during earlier tech booms. Tools get adopted because of fear of missing out, not because of a well-defined strategy. And when adoption is guided by FOMO, the outcome is predictable: lots of activity, little progress.

Where AI Can Deliver

Despite the noise, I don’t think AI is doomed to be a corporate distraction. The key is focus. AI shines when it’s applied to specific, high-leverage problems:

  • Automating repetitive, low-value tasks (think: data entry, scheduling, or document classification).
  • Enhancing decision-making with real-time insights from complex data.
  • Accelerating specialized workflows in domains like coding, design, or customer support, provided humans remain in the loop.

The companies that will win with AI aren’t the ones pumping out endless AI-generated documents. They’re the ones rethinking their processes from the ground up and asking: Where can AI free humans to do what they do best?

The Human Factor

We have to remember: AI isn’t a replacement for judgment, creativity, or strategy. It’s a tool, one that can amplify our abilities if used thoughtfully. But when used carelessly, it becomes a distraction that actually slows us down.

The real productivity gains won’t come from delegating everything to AI. They’ll come from combining human strengths with AI’s capacity, cutting through the noise, and resisting the temptation to let machines do our thinking for us.


Final thought: Right now, most companies are stuck in the “workslop” phase of AI adoption. They’re generating more content than ever but producing less clarity and value. The next phase will belong to organizations that stop chasing hype and start asking harder questions: What problem are we actually solving? Where does AI fit into that solution?

Until then, we should be honest with ourselves: AI isn’t always speeding us up. Sometimes, it’s slowing everyone down.


20+ Years as a CTO: Lessons I Learned the Hard Way

Being a CTO isn’t what it looks like from the outside. There are no capes, no magic formulas, and certainly no shortcuts. After more than two decades leading engineering teams, shipping products, and navigating the chaos of startups and scale-ups, I’ve realized that the real challenges and the real lessons aren’t technical. They’re human, strategic, and sometimes painfully simple.

Here are the lessons that stuck with me, the ones I wish someone had told me when I started.


Clarity beats speed every time

Early in my career, I thought speed meant writing more code, faster. I would push engineers to “ship now,” measure velocity in lines of code or story points, and celebrate sprint completions.

I was wrong.

The real speed comes from clarity. Knowing exactly what problem you’re solving, who it matters to, and why it matters: that’s what lets a team move fast. I’ve seen brilliant engineers grind for weeks only to realize they built the wrong thing. Fewer pivots, fewer surprises, and sharper focus make a team truly fast.


Engineers want to care, they just need context

One of the most frustrating things I’ve witnessed is engineers shrugging at product decisions. “They just don’t care,” I thought. Until I realized: they do care. They want to make an impact. But when they don’t have context (the customer pain, the market reality, the business constraints), they can’t make informed decisions.

Once I started sharing the “why,” not just the “what,” engagement skyrocketed. A well-informed team is a motivated team.


Vision is a tactical tool, not a slogan

I’ve been guilty of writing vision statements that sounded great on slides but did nothing in practice. The turning point came when I started treating vision as a tactical tool.

Vision guides decisions in real time: Should we invest in this feature? Should we rewrite this component? When the team knows the north star, debates become productive, not paralyzing.


Great engineers are problem solvers first

I’ve worked with engineers who could write elegant code in their sleep but who struggled when the problem itself was unclear. The best engineers are not just builders; they’re problem solvers.

My role as a CTO became ensuring the problem was well-understood, then stepping back. The magic happens when talent meets clarity.


Bad culture whispers, it doesn’t shout

I’ve learned to pay attention to the quiet. The subtle signs: meetings where no one speaks up, decisions made by guesswork, unspoken assumptions. These moments reveal more about culture than any HR survey ever could.

Great culture doesn’t need fanfare. Bad culture hides in silence, and it spreads faster than you think.


Done is when the user wins

Early on, “done” meant shipped. A feature went live, the ticket closed, everyone celebrated. But shipping doesn’t equal solving.

Now, “done” only counts when the user’s problem is solved. I’ve had to break teams of the habit of thinking in terms of output and retrain them to think in terms of impact. It’s subtle, but transformative.


Teams don’t magically become product-driven

I used to blame teams for not thinking like product people. Then I realized the missing piece was me. Leadership must act like product thinking matters. Decisions, recognition, discussions: they all reinforce the mindset. Teams reflect the leadership’s priorities.


Product debt kills momentum faster than tech debt

I’ve chased the holy grail of perfect code only to watch teams get bogged down in building the wrong features. Clean architecture doesn’t save a product if no one wants it. Understanding the problem is far more important than obsessing over elegance.


Focus is a leadership decision

I once ran a team drowning in priorities. Tools, frameworks, and fancy prioritization systems didn’t help. The missing ingredient was leadership. Saying “no” to the wrong things, protecting focus, and consistently communicating what matters: that’s what accelerates teams.


Requirements are not the problem

If engineers are stuck waiting for “better requirements,” don’t introduce another process. Lead differently. Engage with your team, clarify expectations, remove ambiguity, and give feedback in real time. Requirements are never the bottleneck; leadership is.


The hard-earned truth

After twenty years, I’ve realized technology is the easy part. Leadership is where the real work and the real leverage lie.

Clarity, context, vision, problem-solving, culture, focus: these aren’t buzzwords. They are the forces that determine whether a team thrives or stalls.

I’ve seen brilliant teams fail, and ordinary teams excel, all because of the way leadership showed up. And that’s the lesson I carry with me every day: if you want speed, impact, and results, start with the leadership that creates the conditions for them.

Why AI won’t solve these problems

With all the excitement around AI today, it’s tempting to think that tools can fix everything. Need better requirements? There’s AI. Struggling with design decisions? AI can suggest options. Want faster development? AI can generate code.

Here’s the hard truth I’ve learned: none of these tools solve the real problems. AI can assist, accelerate, and automate, but it cannot provide clarity, set vision, or foster a healthy culture. It doesn’t understand your users, your market, or your team’s dynamics. It can’t decide what’s important, or make trade-offs when priorities conflict. Those are human responsibilities, and they fall squarely on leadership.

I’ve seen teams put too much faith in AI as a silver bullet, only to discover that the fundamental challenges (alignment, focus, problem definition, and decision-making) still exist. AI is powerful, but it’s a force multiplier, not a replacement. Without strong leadership, even the most advanced AI cannot prevent teams from building the wrong thing beautifully, or from stagnating in a quiet, passive culture.

Ultimately, AI is a tool. Leadership is the strategy. And experience, built on decades of trial, error, and hard-won insight, is what turns potential into real results.

Cocoa, Chocolate, and Why AI Still Can’t Discover

Imagine standing in front of a freshly picked cocoa pod. You break it open, and inside you find a pale, sticky pulp with bitter seeds. Nothing looks edible, nothing smells particularly appetizing. By every reasonable measure, this is a dead end.

Yet humanity somehow didn’t stop there. Someone, centuries ago, kept experimenting with steps that made no sense at the time:

  • Picking out the seeds and letting them ferment until they grew mold.
  • Washing and drying them for days, though still inedible.
  • Roasting them into something crunchy, still bitter and strange.
  • Grinding them into powder, which tasted worse.
  • Finally, blending that bitterness with sugar and milk, turning waste into one of the most beloved foods in human history: chocolate.

No algorithm would have told you to keep going after the first dozen failures. There was no logical stopping point, only curiosity, persistence, and maybe a bit of luck. The discovery of cocoa as food wasn’t the result of optimization; it was serendipity.

Why This Matters for AI

AI today is powerful at recombining, predicting, and optimizing. It can remix what already exists, generate new connections from vast data, and accelerate discoveries we’re already aiming toward. But there’s a limit: AI doesn’t (yet) explore dead ends with stubborn curiosity. It doesn’t waste time on paths that appear pointless. It doesn’t ferment bitter seeds and wait for mold to form, just to see if maybe, somehow, there’s something new hidden inside.

Human discovery has always been messy, nonlinear, and often illogical. The journey from cocoa pod to chocolate shows that sometimes the only way to find the extraordinary is to persist through the ridiculous.

The Future of Discovery

If we want AI to go beyond optimization and into true discovery, it will need to embrace the irrational side of exploration, the willingness to try, fail, and continue without clear reasons. Until then, AI remains a tool for extending human knowledge, not replacing the strange, stubborn spark that drives us to turn bitter seeds into sweetness.

Because the truth is: chocolate exists not because it was obvious, but because someone refused to stop at “nothing edible.”

This path makes no sense. At every step the signal says stop. No data suggests you should continue. No optimization algorithm rewards the action. Yet someone did. And that’s how one of the world’s favorite foods was discovered.

This is the gap between human discovery and AI today.

AI can optimize, remix, predict. It can explore a search space, but only one that’s already defined. It can’t decide to push through meaningless, irrational steps where there’s no reason to keep going. It won’t follow a path that looks like failure after failure. It won’t persist in directions that appear to lead nowhere.

But that’s exactly how discovery often works.

Cocoa to chocolate wasn’t about efficiency. It was curiosity, stubbornness, and luck. The same applies to penicillin, vulcanized rubber, even electricity. Breakthroughs happen because someone ignored the “rational” stopping point.

AI is far from that. Right now, it’s bounded by what already exists. It doesn’t yet invent entirely new domains the way humans stumble into them.

The lesson? Discovery is still deeply human. And the future of AI will depend not just on making it smarter, but on making it willing to walk blind paths where no reward signal exists until something unexpected emerges.

Because sometimes, you need to go through moldy seeds and bitterness to find chocolate.

When to Hire Real Engineers Instead of Freelancers for Your MVP

Building a startup is a race against time. Every day you wait to ship your idea is a day your competitors could gain an edge. That’s why many founders start with freelancers or “vibe coding” to launch their MVP (Minimum Viable Product) quickly. But this fast-track approach comes with hidden risks. There comes a point when hiring real engineers is no longer optional; it’s critical for your startup’s survival.

In this post, we’ll explore when it’s the right time to transition from freelancers to full-time engineers, and why vibe coding with low-cost freelancers can be dangerous for your MVP.


Why Start With Freelancers?

Freelancers are often the first choice for early-stage founders. Here’s why:

  • Speed: Freelancers can help you quickly prototype your idea.
  • Lower Cost: You pay for work done, without the overhead of full-time salaries or benefits.
  • Flexibility: You can scale the workforce up or down depending on the project stage.

Freelancers are perfect for validating your idea, testing market demand, or building proof-of-concept features. However, relying on freelancers too long can create technical debt and slow your growth when your product starts attracting real users.


The Hidden Dangers of Vibe Coding With Low-Cost Freelancers

Many founders are tempted by freelancers offering extremely low rates. While the idea of saving money is appealing, vibe coding with bargain-rate developers comes with serious risks:

  • Poor Code Quality: Low-cost freelancers may cut corners, leaving messy, unmaintainable code.
  • Lack of Documentation: Your codebase may be difficult for future engineers to understand or build upon.
  • Delayed Timelines: Cheap freelancers often juggle multiple clients, causing unpredictable delays.
  • False Confidence: Founders may assume their MVP is “production-ready” when it’s not.
  • Hidden Costs: Fixing technical debt later often costs more than hiring quality engineers from the start.

Using low-cost freelancers is fine for prototyping ideas quickly, but it becomes risky when your MVP starts attracting real users or paying customers.


Signs You Need Real Engineers

Here are the main indicators that your MVP has outgrown freelancers:

1. Product Complexity Increases

  • Your MVP is no longer a simple prototype.
  • Features require backend scalability, integrations, or complex logic.
  • Codebase is hard for freelancers to maintain consistently.

2. Customers Expect Stability

  • Paying users begin using your product regularly.
  • Bugs, downtime, or inconsistent updates start hurting your credibility.
  • You need reliable, professional code that can scale.

3. You Plan for Rapid Growth

  • You anticipate increasing traffic, user engagement, or data volume.
  • Your MVP needs a scalable architecture to handle more users efficiently.

4. Security and Compliance Matter

  • Sensitive user data, payment systems, or regulatory requirements are involved.
  • Freelancers may lack the expertise to ensure security best practices.

How to Transition Smoothly to Full-Time Engineers

Once you’ve decided to hire real engineers, plan the transition carefully to avoid disruption:

  1. Audit Existing Code: Identify areas of technical debt and create a roadmap for refactoring.
  2. Hire Strategically: Look for engineers with startup experience who can handle rapid iteration and product scaling.
  3. Document Everything: Ensure all features, APIs, and infrastructure are well-documented for the new team.
  4. Maintain Continuity: Keep a few top freelancers for short-term tasks during the handover period.
  5. Invest in Tools: Use code repositories, CI/CD pipelines, and testing frameworks to support professional development practices.

Cost Considerations

Hiring full-time engineers is an investment. While freelancers may seem cheaper upfront, consider the long-term costs:

  • Technical Debt: Fixing poor-quality code can cost far more than hiring engineers initially.
  • Lost Customers: Product instability can lead to churn and missed revenue.
  • Opportunity Cost: Delays in scaling and adding features can let competitors win market share.

Think of full-time engineers as insurance for your product’s future success.


Conclusion

Freelancers are invaluable for testing your idea and building a lean MVP quickly. But relying on low-cost vibe coding can be dangerous: messy code, delayed timelines, and hidden costs can stall your startup before it even takes off. Once your product gains traction, complexity, and paying users, hiring real engineers ensures stability, scalability, and long-term growth.

Key Takeaway: Use freelancers for prototyping, but transition to full-time engineers before your MVP becomes a product your customers depend on. Planning the move carefully saves time, money, and frustration.


Have you experienced the vibe code tax firsthand? Share your story in the comments and tell us how you decided when to hire full-time engineers.

Why “Lines of Code” Is the Wrong Way to Measure AI Productivity in Software Development

Last week, I had a conversation with another CTO that got me thinking about how we measure productivity in the age of AI-assisted software development.

Here’s how it went:

CTO: Right now, about 70% of our software output is generated by AI.
Me: Interesting. How are you measuring that?
CTO: By looking at the proportion of code that AI tools contribute during check-ins.
Me: But what if the AI produces ten times more code than a human would to solve the same problem? Doesn’t that risk adding unnecessary complexity and future technical debt?

That short exchange captures a much bigger problem: we are still in the earliest innings of understanding how AI fits into the software development lifecycle (SDLC). And one thing is already clear: measuring lines of code is not the way forward.


The Myth of Lines of Code as a Metric

The idea that “more lines of code = more productivity” has always been flawed. Good engineering has never been about who can write the most code. It’s about solving problems in the most effective and maintainable way possible.

A senior engineer might solve a business-critical feature in 20 lines of clean, tested code, while a less experienced developer, or now an AI, might churn out 200 lines to accomplish the same thing. On paper, the latter looks “more productive,” but in practice, they’ve just created more surface area for bugs, maintenance, and future refactoring.

When AI is added into the mix, this risk multiplies. Large language models are excellent at producing code, but quantity is a misleading signal. We could easily end up with a lot of syntactically correct but architecturally messy software.


Why AI-Generated Code Can Inflate the Wrong Numbers

Imagine this scenario:

Human developer approach

  • Implements a payment validation service in 50 lines.
  • Uses existing abstractions, adds unit tests, and ensures long-term maintainability.

vs AI-generated approach

  • Produces a 500-line solution with redundant logic, overly generic abstractions, and unnecessary boilerplate.
  • Works today, but in 3 months, another engineer spends days cleaning it up.

If you measure productivity by “lines of code checked in,” the AI looks 10x more productive. But if you measure by long-term cost of ownership, the AI just created technical debt.

The irony is that AI often defaults to verbosity. It tries to be “helpful” by filling in every possible branch, every possible configuration, every possible error case. That’s useful in some contexts, but often it leads to bloat instead of clarity.


Better Metrics for AI-Assisted Development

If we agree that lines of code isn’t the answer, what should we be looking at?

Here are a few ideas:

Cycle time to value

  • How quickly can a feature go from idea → production → value delivered to the business?
  • If AI speeds this up without adding long-term drag, that’s true productivity.

Code quality and maintainability

  • Static analysis scores, cyclomatic complexity, duplication, and readability.
  • AI should be measured on whether it improves or degrades these metrics (a minimal sketch of one such check follows below).
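
As one way to make that measurable rather than anecdotal, here is a minimal Python sketch using only the standard ast module. The thresholds and the review_signal helper are arbitrary assumptions for illustration; in practice a team would more likely lean on established static-analysis and complexity tools wired into CI.

```python
import ast

# Branching constructs counted toward a rough McCabe-style complexity score.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)


def cyclomatic_complexity(source: str) -> int:
    """Very rough count: 1 + number of branching constructs in the code."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))


def review_signal(source: str, max_complexity: int = 10, max_lines: int = 200) -> str:
    """Flag code (human- or AI-written) that deserves extra scrutiny before merge."""
    complexity = cyclomatic_complexity(source)
    lines = len(source.splitlines())
    verdict = "review closely" if complexity > max_complexity or lines > max_lines else "ok"
    return f"{verdict}: complexity={complexity}, lines={lines}"


# Example with an inline snippet; in a CI pipeline you would feed in the files changed by a PR.
snippet = """
def is_valid(payment):
    if payment.amount <= 0:
        return False
    if payment.currency not in ("USD", "EUR"):
        return False
    return True
"""
print(review_signal(snippet))
```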

Bug rate and rework cost

  • How often do AI-generated contributions introduce regressions?
  • How much effort do humans spend debugging or rewriting AI code? (One rough way to approximate this from version control is sketched below.)
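
As a rough illustration of how that rework cost could be approximated from version control, here is a small Python sketch. It assumes, purely for the example, that AI-assisted commits carry an [ai] tag in their subject line and that reverts keep git’s default “Revert …” subjects; neither is a standard convention, so treat this as a sketch of the idea rather than a ready-made metric.

```python
import subprocess


def git_subjects() -> list[str]:
    """All commit subjects on the current branch."""
    out = subprocess.run(["git", "log", "--pretty=%s"],
                         capture_output=True, text=True, check=True)
    return out.stdout.splitlines()


def ai_rework_ratio(subjects: list[str]) -> float:
    """Fraction of '[ai]'-tagged commits that were later explicitly reverted."""
    ai_commits = [s for s in subjects
                  if "[ai]" in s.lower() and not s.startswith("Revert")]
    ai_reverts = [s for s in subjects
                  if s.startswith("Revert") and "[ai]" in s.lower()]
    return len(ai_reverts) / len(ai_commits) if ai_commits else 0.0


if __name__ == "__main__":
    print(f"AI-assisted commits later reverted: {ai_rework_ratio(git_subjects()):.1%}")
```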

Developer experience

  • Are engineers spending less time on repetitive boilerplate?
  • Do they feel empowered to focus on architecture, problem-solving, and innovation?

Business outcomes

  • Does AI accelerate delivery of meaningful features, reduce time-to-market, and improve customer satisfaction?
  • Or does it just fill up the repo with more stuff to manage?

    Where This Leaves Us

    We are only at the beginning of figuring out how AI fits into modern software development. There will be new metrics, new practices, and new patterns of collaboration between humans and machines.

    But one thing is certain: lines of code is the wrong measure of productivity.

    In fact, if you see your AI adoption leading to more lines of code, you should pause and ask:

    • Is this actually solving problems more effectively?
    • Or are we just accumulating more technical debt at machine speed?

    The future of engineering with AI will belong to those who focus on outcomes, maintainability, and business value, not vanity metrics.


    Conclusion

    AI is not going to replace developers; it’s going to reshape what productivity looks like. The teams that succeed will be the ones that look beyond shallow metrics and focus on sustainable velocity.

    The next time someone tells you their AI is producing 70% of the code, ask them:

    • 70% of what?
    • Is that 70% value, or 70% debt?

    Because in software, less is often more.

    On AI-Generated Code, Maintainability, and the Possibility of Disposable Software

    Over the past two years, I’ve been using various AI-assisted tools for programming like Codeium, GitHub Copilot, ChatGPT, and others. These tools have become part of my daily workflow. Mostly, I use them for code completion and to help me finish thoughts, suggest alternatives, or fill in repetitive boilerplate. I’m not doing anything too wild with autonomous agents or fully automated codebases. Just practical, incremental help.

    That said, even in this limited use, I’m starting to feel the friction.

    Not because the tools are bad; actually, they’re improving fast. Individual lines and even complete methods are cleaner than they used to be. The suggestions are smarter. The models are more context-aware. But one thing still nags at me: even with better completions, a lot of the output still isn’t good code in the long-term sense.


    Maintainability Still Matters

    The issue, to me, isn’t whether AI can help me write code faster. It can. The issue is whether that code is going to survive over time. Is it going to be easy to understand, extend, or refactor? Does it follow a style or pattern that another human could step into and build on?

    This matters especially when you’re not the only one touching the code or when you come back to it after a few months and wonder, “Why did I do it this way?”

    And here’s the contradiction I keep running into: AI helps you write code faster, but it often creates more problems to maintain. That’s especially true when I’ve tested more advanced setups where you let an agent plan and generate entire components, classes, or services. It sounds great in theory, but in practice it causes a lot of changes, inconsistencies, and small bugs that end up being more trouble to fix than if I had just written it myself from the start.

    So for now, I stay close to completions. Code at the scale of a line or a method. It’s easier to understand, easier to control. I can be the architect, and the AI can be the assistant.


    The Self-Fulfilling Trap

    There’s a strange loop forming in AI development. Since the generated code is harder to reason about or maintain, people often treat it as throwaway. And because it’s throwaway, nobody bothers to make it better. So it stays bad.

    Self-fulfilling prophecy.

    The more AI you use to generate logic, the more you’re tempted to not go back and polish or structure it. You get into a loop of “just generate something that works,” and soon you’re sitting on a pile of glue code and hacks that’s impossible to build on.

    But maybe that’s okay? Maybe we need to accept that some code isn’t meant to last.


    Disposable Software Might Be the Point

    This is where I’m starting to shift my thinking a little. I’ve always approached code as something you build on, something that lives and evolves. But not all code needs that.

    A lot of software today already is disposable, even if we don’t admit it. One-off internal dashboards, ETL jobs, scripts for events, MVPs for marketing campaigns, integrations that won’t live beyond a quarter. We often pretend we’re building maintainable systems, but we’re not. We just don’t call them disposable.

    With AI in the mix, maybe it’s time to embrace disposability for what it is. Write the code, run the code, get the result, throw it away. Next time, generate it again, maybe with better context or updated specs.

    This mindset removes a lot of the pressure around maintainability. And it fits the strengths of today’s AI tools.


    When Not to Use Agentic Systems (Yet)

    I’ve played with more autonomous agent systems, what people call “agentic AI” or multi-agent code platforms. Honestly? I’m not sold on them yet. They generate too much. They make decisions I wouldn’t make. They refactor things that didn’t need to be touched.

    And then I spend more time reading diff views and undoing changes than I saved by delegating in the first place.

    Maybe in the future I’ll be comfortable letting an AI agent draft a service or plan out an architectural pattern. But today, I’m not there. I use these tools more like smart autocomplete than autonomous developers. It’s still very much my code, and they’re just helping speed up the flow.


    Final Thoughts

    There’s a real risk of overhyping what AI can do for codebases today. But there’s also an opportunity to rethink how we treat different classes of software. We don’t need to hold everything to the same standards of longevity. Not every project needs to be built for 10 years of feature creep.

    Some software can (and should) be treated like scaffolding: built quickly, used once, and removed without guilt.

    And that’s where AI shines right now. Helping us build the things we don’t need to keep.

    I’ll keep experimenting. I’ll keep writing most of my own code and using AI where it makes sense. But I’m also watching carefully, because the balance between what’s worth maintaining and what’s better thrown away is shifting.

    And we should all be ready for what that means.

    The AI isn’t going to be on call at 2 AM when things go down.

    Large Language Models (LLMs) like ChatGPT, Copilot, and others are becoming a regular part of software development. Many developers use them to write boilerplate code, help with unfamiliar syntax, or even generate whole modules. On the surface, it feels like a productivity boost. The work goes faster, the PRs are opened sooner, and there’s even time left for lunch.

    But there’s something underneath this speed, something we’re not talking about enough. The real issue with LLM-generated code is not that it helps us ship more code, faster. The real issue is liability.


    Code That Nobody Owns

    There’s a strange effect happening in teams using AI to generate code: nobody feels responsible for it.

    It’s like a piece of code just appeared in your codebase. Sure, someone clicked “accept,” but no one really thought through the consequences. This is not new; we saw the same thing with frameworks and compilers that generated code automatically. If no human wrote it, then no human cares deeply about maintaining or debugging it later.

    LLMs are like that, but on a massive scale.


    The “Average” Problem

    LLMs are trained on a massive corpus of public code. What they produce is a kind of rolling average of everything they’ve seen. That means the code they generate isn’t written with care or with deep understanding of your system. It’s not great code. It’s average code.

    And as more and more people use LLMs to write code, and that code becomes part of new training data, the model quality might even degrade over time: it becomes an average of an average.

    This is not just about style or design patterns. It affects how you:

    • Deliver software
    • Observe and monitor systems
    • Debug real-world issues
    • Write secure applications
    • Handle private user data responsibly

    LLMs don’t truly understand these things. They don’t know what matters in your architecture, how your team works, or what your specific constraints are. They just parrot what’s most statistically likely to come next in the code.


    A Fast Start, Then a Wall

    So yes, LLMs speed up the easiest part of software engineering: writing code.

    But the hard parts remain:

    • Understanding the domain
    • Designing for change
    • Testing edge cases
    • Debugging production issues
    • Keeping systems secure and maintainable over time

    These are the parts that hurt when the codebase grows and evolves. These are the parts where “fast” turns into fragile.


    Example: Generated Code Without Accountability

    Imagine you ask an LLM to generate a payment service. It might give you something that looks right, maybe even works with your Stripe keys or some basic error handling.

    But:

    • What happens with race conditions?
    • What if fraud detection fails silently?
    • What if a user gets double-charged?
    • Who is logging what?
    • Is the payment idempotent?
    • Is sensitive data like credit cards being exposed in logs?

    If no one really “owns” that code, because it was mostly written by an AI, these questions might only surface after things go wrong. And in production, that can be very costly.
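
    To make a couple of those questions concrete, here is a minimal Python sketch of two guardrails a human owner would insist on: an idempotency key so a retried request can’t double-charge, and log redaction so card numbers never reach the logs. The function names and the in-memory store are illustrative assumptions, not any specific payment provider’s API, and a real service would enforce the same guarantees in its database and under concurrency.

```python
import hashlib
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("payments")

# Idempotency store: a real service would use a database with a unique constraint.
_processed: dict[str, dict] = {}


def redact_card(card_number: str) -> str:
    """Never log the full card number; keep only the last four digits."""
    return f"****{card_number[-4:]}"


def charge(card_number: str, amount_cents: int, idempotency_key: str) -> dict:
    """Charge a card at most once per idempotency key."""
    if idempotency_key in _processed:
        log.info("duplicate request %s ignored", idempotency_key)
        return _processed[idempotency_key]  # same result, no double charge

    # Illustrative stand-in for the real payment-provider call.
    charge_id = hashlib.sha256(idempotency_key.encode()).hexdigest()[:12]
    result = {"charge_id": charge_id, "amount_cents": amount_cents, "status": "succeeded"}

    _processed[idempotency_key] = result
    log.info("charged %s for %d cents (charge %s)",
             redact_card(card_number), amount_cents, charge_id)
    return result


# A retried request with the same key returns the original result instead of charging twice.
charge("4242424242424242", 1999, idempotency_key="order-123")
charge("4242424242424242", 1999, idempotency_key="order-123")
```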


    So What’s the Better Approach?

    LLMs can be great tools, especially for experienced engineers who treat them like assistants, not authors.

    To use LLMs responsibly in your team:

    • Review AI-generated code with care.
    • Assign clear ownership, even for generated components.
    • Add context-specific tests and documentation.
    • Educate your team on the why, not just the how.
    • Make accountability a core part of your development process.

    Because in the end, you are shipping the product. The AI isn’t going to be on call at 2 AM when things go down.


    Final Thoughts

    LLMs give us speed. But they don’t give us understanding, judgment, or ownership. If you treat them as shortcuts to ship more code, you may end up paying the price later. But if you treat them as a tool, and keep responsibility where it belongs, they can still be part of a healthy, sustainable development process.

    Thanks for reading. If you’ve seen this problem in your team or company, I’d love to hear how you’re dealing with it.

    AI Isn’t Leveling the Playing Field, It’s Amplifying the Gap

    We were told that AI would make development more accessible. That it would “level the playing field,” empower juniors, and help more people build great software.

    That’s not what I’m seeing.

    In reality, AI is widening the gap between junior and senior developers, and fast.


    Seniors Are 10x-ing With AI

    For experienced engineers, AI tools like ChatGPT and GitHub Copilot are a multiplier.

    Why?

    Because they know:

    • What to ask
    • How to evaluate the answers
    • What matters in their system
    • How to refactor and harden code
    • When to ignore the suggestion completely

    Seniors are using AI the same way a great chef uses a knife: faster, safer, more precise.


    Juniors Are Being Left Behind

    Many junior developers, especially those early in their careers, don’t yet have the experience to judge what’s good, bad, or dangerous. And here’s the issue:

    AI makes it look like they’re productive until it’s time to debug, optimize, or maintain the code.

    They’re often:

    • Copy-pasting solutions without understanding the trade-offs
    • Relying on AI to write tests they wouldn’t know how to write themselves
    • Shipping code that works on the surface, but is fragile underneath

    What they’re building is a slow-burning fire of tech debt, and they don’t even see the smoke.


    Prompting Isn’t Engineering

    There’s a new kind of developer emerging: one who can write a great prompt but can’t explain a stack trace.

    That might sound harsh, but I’ve seen it first-hand. Without a foundation in problem-solving, architecture, debugging, and security, prompting becomes a crutch, not a tool.

    Good engineering still requires:

    • Judgment
    • Pattern recognition
    • Systems thinking
    • Curiosity
    • Accountability

    AI doesn’t teach these. Mentorship does.


    Where Is the Mentorship?

    In many teams, mentorship is already stretched thin. Now we’re adding AI to the mix, and some companies expect juniors to “just figure it out with ChatGPT.”

    That’s not how this works.

    The result? Juniors are missing the critical lessons that turn coding into engineering:

    • Why things are built the way they are
    • What trade-offs exist and why they matter
    • How to debug a system under load
    • When to break patterns
    • How to think clearly under pressure

    No AI can give you that. You only get it from real experience and real guidance.


    What We Can Do

    If you’re a senior engineer, now is the time to lean into mentorship, not pull away.

    Yes, AI helps you move faster. But if your team is growing and you’re not helping juniors grow too, you’re building speed on a weak foundation.

    If you’re a junior, use AI, but don’t trust it blindly. Try to understand everything it gives you. Ask why. Break it. Fix it. Learn.

    Because here’s the truth:

    AI won’t make you a better engineer. But it will make great engineers even better.

    Don’t get left behind.


    Final Thoughts

    AI isn’t the enemy. But it’s not a shortcut to seniority either. We need to be honest about what it’s good for and where it’s failing us.

    Let’s stop pretending it’s a magic equalizer. It’s not.

    It’s a magnifier.
    If you’re already strong, it makes you stronger.
    If you’re still learning, it can hide your weaknesses until they blow up.