Why AI Can’t (Yet) Write Maintainable Software

In the past few years, large language models (LLMs) have burst onto the software development scene like a meteor: bright, exciting, and full of promise. They can write entire applications in seconds, generate boilerplate code with ease, and explain complex algorithms in plain English. It’s hard not to be impressed.

But after spending serious time testing various AI platforms as coding assistants, I’ve reached a clear conclusion:

AI is not yet suitable for generating long-term, maintainable, production-grade software.

It’s fantastic for prototyping, disposable tools, and accelerating development, but when it comes to real-world, evolving, multi-developer systems, it falls short. And the root cause is simple but fundamental: non-determinism.


The Non-Determinism Problem

At the heart of every LLM lies a probabilistic process. When you ask an AI to write or modify code, it doesn’t “recall” what it said before; it predicts the next most likely word or token based on the context it sees. Even when you give it the exact same prompt twice, you often get subtly (or wildly) different answers.

In casual conversation, this doesn’t matter much. But in software engineering, determinism is sacred. A build must produce the same binary every time. Tests must behave consistently. A function’s output must depend solely on its input.

LLMs break this rule by design.

When you ask AI to “add a new field to this API,” it might add the field, but it might also rename unrelated variables, adjust indentation styles, reorder imports, or subtly alter unrelated logic. These incidental changes make it almost impossible to track what actually changed and why. In version control, that’s noise. In production code, that’s risk.


The Illusion of Velocity

Using AI for coding can feel like flying, right up until you realize you’ve lost track of where you’re going.

AI-generated code feels fast. You type a prompt, and it spits out a function that looks plausible. But as any experienced engineer knows, code that looks correct is not the same as code that is correct.

Worse still, AI often gets 90% right, just enough to lull you into trusting it, but that last 10% (the edge cases, performance issues, or security vulnerabilities) can be costly. In long-lived systems, those flaws become ticking time bombs.

So yes, AI saves time, but only if you’re ready to spend some of that saved time reviewing, refactoring, and making the output consistent with your project standards. Otherwise, you’re borrowing technical debt against future maintenance.


“Vibe Coding” vs. Real Engineering

There’s a growing trend I like to call “vibe coding”: relying on AI to produce code that “feels” right without understanding it deeply. It’s seductive, especially for less experienced developers or anyone under time pressure.

But the truth is: software longevity is built on understanding, not vibes.

A healthy codebase is not just functional; it’s coherent, documented, and maintainable. Every class, function, and comment exists for a reason that another human can later understand. AI-generated code often lacks that intentionality. It can mimic style, but it doesn’t comprehend architecture, team conventions, or long-term evolution.

AI doesn’t “see” the whole system; it only sees your current prompt.


Where AI Does Shine

Despite these limitations, I’m not anti-AI. In fact, I use it daily, strategically.

AI is brilliant at:

  • Prototyping ideas: getting something working fast, even if it’s messy.
  • Generating boilerplate: writing repetitive CRUD or setup code.
  • Explaining code: translating complex logic into human-readable summaries.
  • Brainstorming solutions: helping you think through alternative approaches.
  • Writing tests: drafting coverage you can refine manually.

In other words, AI accelerates cognition, not automation. It’s a thinking partner, not a replacement for engineering discipline.


What It Means for the Future

As LLMs improve, we’ll likely see more deterministic, context-aware systems, perhaps ones that can “anchor” to a codebase and learn its structure persistently. But until then, the responsibility for coherence, maintainability, and correctness still lies with us, the humans.

AI might be the apprentice, but we’re still the architects.

My takeaway after months of experimentation is simple:

Use AI to accelerate development, not to abdicate responsibility.

Treat its output like an intern’s draft: useful, fast, and full of potential, but never production-ready without review, cleanup, and integration into your project’s ecosystem.


The Bottom Line

AI coding tools are a revolution, but like every revolution, they require balance and maturity to use effectively. They’re not replacing software engineers; they’re augmenting them.

So go ahead: let the AI write your prototypes, mock APIs, or test scaffolds. But when it comes to the production systems that real users depend on, make sure there’s a human behind the keyboard who understands every line.

Because in the end, the difference between disposable and durable code isn’t who (or what) wrote it; it’s who owns it.


Returning to the Rails World: What’s New and Exciting in Rails 8 and Ruby 3.3+

It’s 2025, and coming back to Ruby on Rails feels like stepping into a familiar city only to find new skyscrapers, electric trams, and an upgraded skyline.
The framework that once defined web development simplicity has reinvented itself once again.

If you’ve been away for a couple of years, you might remember Rails 6 or early Rails 7 as elegant but slightly “classic.”
Fast-forward to today: Rails 8 and Ruby 3.4 together form one of the most modern, high-performance full-stack ecosystems in web development.

Let’s explore what changed, from Ruby’s evolution to Rails’ latest superpowers.


The Ruby Renaissance: From 3.2 to 3.4

Over the last two years, Ruby has evolved faster than ever.
Performance, concurrency, and developer tooling have all received major love, while the language remains as expressive and joyful as ever.

Ruby 3.2 (2023): The Foundation of Modern Ruby

  • YJIT officially production-ready: the JIT compiler, rewritten in Rust, delivers 20–40% faster execution on Rails apps.
  • Prism Parser (preview): The groundwork for a brand-new parser that improves IDEs, linters, and static analysis.
  • Regexp improvements: More efficient and less memory-hungry pattern matching.
  • Data class: Data.define landed, making small, immutable value objects easy to define (see the quick sketch after this list).
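
Two of those are easy to try from a terminal. A quick sketch (YJIT flags as documented for CRuby):

# Run with YJIT enabled: `ruby --yjit demo.rb` (or set RUBY_YJIT_ENABLE=1)
puts RubyVM::YJIT.enabled? ? "YJIT active" : "YJIT off"

# Data.define: one-line immutable value objects
Point = Data.define(:x, :y)
p Point.new(x: 1, y: 2) # => #<data Point x=1, y=2>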

Ruby 3.3 (2024): Performance, Async IO, and Stability

  • YJIT 3.3 update: Added inlining and better method dispatch caching; big wins for hot code paths.
  • Fiber Scheduler 2.0: Improved async I/O, great for background processing and concurrent network calls.
  • Prism Parser shipped: Officially integrated, paving the way for better tooling and static analysis.
  • Better memory compaction: Long-running apps now leak less and GC pauses are shorter.

Ruby 3.4 (2025): The Next Leap

  • Prism as the default parser, making editors and LSPs much more accurate.
  • Official WebAssembly build: You can now compile and run Ruby in browsers or serverless environments.
  • Async and Fibers 3.0: Now tightly integrated into standard libraries like Net::HTTP and OpenURI.
  • YJIT 3.4: Huge startup time and memory improvements for large Rails codebases.
  • Smarter garbage collector: Dynamic tuning for better throughput under load.

Example: Native Async Fetching in Ruby 3.4

require "async"    # the async gem (socketry/async)
require "net/http"

# Each inner Async block becomes a lightweight fiber-based task;
# the outer block waits for all of them to finish.
Async do
  ["https://rubyonrails.org", "https://ruby-lang.org"].each do |url|
    Async do
      res = Net::HTTP.get(URI(url))
      puts "#{url} → #{res.bytesize} bytes"
    end
  end
end

That’s fully concurrent, purely in Ruby: no threads to manage, no callback pyramids (the async gem builds directly on Ruby’s native Fiber Scheduler).
Ruby has quietly become fast, efficient, and concurrent while keeping its famously clean syntax.


The Rails Revolution: From 7 to 8

While Ruby evolved under the hood, Rails reinvented the developer experience.
Rails 7 introduced the “no-JavaScript-framework” movement with Hotwire.
Rails 8 now expands that vision, making real-time, async, and scalable apps easier than ever.

Rails 7 (2022–2024): The Hotwire Era

Rails 7 changed the front-end game:

  • Hotwire (Turbo + Stimulus): Replaced complex SPAs with instant-loading server-rendered apps.
  • Import maps: Let you skip Webpack entirely.
  • Encrypted attributes: encrypts :email became a one-line reality.
  • ActionText and ActionMailbox: Brought full-stack communication features into Rails core.
  • Zeitwerk loader improvements: Faster boot and reloading in dev mode.

Example: Rails 7 Hotwire Simplicity

# app/controllers/messages_controller.rb
def create
  @message = Message.create!(message_params)
  # Render a Turbo Stream that appends the new message to the "messages" target
  render turbo_stream: turbo_stream.append(
    "messages", partial: "messages/message", locals: { message: @message }
  )
end

That’s a live-updating chat stream with no React and no WebSocket boilerplate.


Rails 8 (2025): Real-Time, Async, and Database-Native

Rails 8 takes everything Rails 7 started and levels it up for the next decade.

Turbo 8 and Turbo Streams 2.0

Hotwire gets more powerful:

  • Streaming updates from background jobs
  • Improved Turbo Frames for nested components
  • Async rendering for faster page loads

class CommentsController < ApplicationController
  def create
    @comment = Comment.create!(comment_params)
    # Render a Turbo Stream that prepends the new comment into the "comments" list
    render turbo_stream: turbo_stream.prepend(
      "comments", partial: "comments/comment", locals: { comment: @comment }
    )
  end
end

Now you can push that stream from Active Job or Solid Queue, enabling real-time updates across users.
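
A sketch of what that might look like (the job and stream names here are hypothetical; broadcast_prepend_to is the turbo-rails broadcast helper):

class CommentBroadcastJob < ApplicationJob
  def perform(comment)
    # Push the rendered partial to every page subscribed via
    # <%= turbo_stream_from "comments" %>
    Turbo::StreamsChannel.broadcast_prepend_to(
      "comments",
      target: "comments",
      partial: "comments/comment",
      locals: { comment: comment }
    )
  end
end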

Solid Queue and Solid Cache

Rails 8 introduces two built-in frameworks that change production infrastructure forever:

  • Solid Queue: Database-backed job queue; think Sidekiq-style performance without Redis.
  • Solid Cache: Native caching framework that integrates with Active Record and scales horizontally.

# Example: background email job using Solid Queue
class UserMailerJob < ApplicationJob
  queue_as :mailers

  def perform(user_id)
    UserMailer.welcome_email(User.find(user_id)).deliver_now
  end
end

No Redis, no extra services; everything just works out of the box.
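
Solid Cache slots in behind the standard Rails.cache API. A minimal sketch (store name per the Solid Cache README; the model and cache key are made up):

# config/environments/production.rb
config.cache_store = :solid_cache_store

# Anywhere in the app: the familiar cache API, now backed by your database
stats = Rails.cache.fetch("stats/daily_orders", expires_in: 1.hour) do
  Order.where(created_at: Date.current.all_day).count
end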

Async Queries and Connection Pooling

Rails 8 adds native async database queries and automatic connection throttling for multi-threaded environments.
This pairs perfectly with Ruby’s improved Fiber Scheduler.

# Kick off the query on Rails’ background thread pool…
users = User.where(active: true).load_async
# …and block only when the records are first used
users.each { |user| puts user.email }

Smarter Defaults, Stronger Security

  • Active Record Encryption expanded with deterministic modes (see the sketch after this list)
  • Improved CSP and SameSite protections
  • Rails generators now use more secure defaults for APIs and credentials
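
Deterministic mode is the notable one: it keeps a column queryable while still encrypted at rest. A minimal sketch:

class User < ApplicationRecord
  encrypts :email, deterministic: true # queryable: User.find_by(email: ...)
  encrypts :ssn                        # non-deterministic: stronger, but not queryable
end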

Developer Experience: Rails Feels Modern Again

The latest versions of Rails and Ruby have also focused heavily on DX (developer experience).

  • bin/rails console --sandbox rolls back all changes automatically.
  • New error pages with interactive debugging.
  • esbuild and Bun support for lightning-fast JS builds.
  • Improved test parallelization with async jobs and Capybara integration.
  • ViewComponent and Hotwire integration right from generators.

Rails in 2025 feels sleek, intelligent, and incredibly cohesive.


The Future of Rails and Ruby Together

With Ruby 3.4’s concurrency and Rails 8’s async, streaming, and caching power, Rails has evolved into a true full-stack powerhouse again, capable of competing with modern Node, Elixir, or Go frameworks while staying true to its elegant roots.

It’s not nostalgia; it’s progress built on the foundation of simplicity.

If you left the Rails world thinking it was old-fashioned, this is your invitation back.
You’ll find your favorite framework faster, safer, and more capable than ever before.


Posted by Ivan Turkovic
Rubyist, software engineer, and believer in beautiful code.

What You Should Learn to Master but Never Ship

Every engineer should build a few things from scratch (search, auth, caching) just to understand how much complexity lives beneath the surface. But the real skill isn’t rolling your own; it’s knowing when not to. In the age of AI, understanding how things work under the hood isn’t optional; it’s how you keep control over what your tools are actually doing.

There’s a quiet rite of passage every engineer goes through. You build something that already exists. You write your own search algorithm. You design your own auth system. You roll your own logging framework because the existing one feels too heavy.

And for a while, it’s exhilarating. You’re learning, stretching, discovering how the pieces actually work.

But there’s a difference between learning and shipping.


The Temptation to Reinvent

Every generation of engineers rediscovers the same truth: we love building things from scratch. We tell ourselves our use case is different, our system is simpler, our constraints are unique.

But the moment your code touches production, when it has to handle real users, scale, security, and compliance, you realize how deep the rabbit hole goes.

Here’s a short list of what you probably shouldn’t reinvent if your goal is to ship something that lasts:

  • Search algorithms
  • Encryption
  • Authentication
  • Credit card handling
  • Billing
  • Caching systems
  • Logging frameworks
  • CSV, HTML, URL, JSON, XML parsing
  • Floating point math
  • Timezones
  • Localization and internationalization
  • Postal address handling

Each one looks simple on the surface. Each one hides decades of hard-won complexity underneath.


Learn It, Don’t Ship It

You should absolutely build these things once.

Do it for the same reason musicians practice scales or pilots train in simulators. You’ll understand the invisible edges where systems fail, what tradeoffs libraries make, how standards evolve.

Build your own encryption to see why key rotation matters.
Write your own caching layer to feel cache invalidation pain firsthand.
Parse CSVs manually to understand why “CSV” isn’t a real standard.
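
To make the CSV point concrete: quoted fields alone break naive parsing. A two-line sketch using Ruby’s standard csv library:

require "csv"

line = %q{"Smith, Jane","She said ""hi"""}
puts line.split(",").length      # => 3 (wrong: splits on the comma inside quotes)
puts CSV.parse_line(line).length # => 2 (correct)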

You’ll emerge humbled, smarter, and far less likely to call something “trivial” again.

But then don’t ship it.


The Cost of Cleverness

Production is where clever ideas go to die.

The real cost of rolling your own isn’t just the initial build. It’s the invisible tax that compounds over time: maintenance, updates, edge cases, security audits, integration testing.

That custom auth system? It’ll need to handle password resets, MFA, SSO, OAuth, token expiration, brute-force protection, and GDPR deletion requests.

Your homegrown billing service? Get ready for tax handling, currency conversion, refund flows, audit trails, and legal exposure.

Most of us underestimate this cost by an order of magnitude. And that gap between what you think you built and what reality demands is where projects go to die.


The Wisdom of Boring Software

Mature engineering isn’t about novelty; it’s about leverage.

When you use battle-tested libraries, you’re not being lazy. You’re standing on top of millions of hours of debugging, testing, and iteration that others have already paid for.

The best engineers I know are boring. They use Postgres, Redis, S3. They trust Stripe for billing, Auth0 for authentication, Cloudflare for caching. They’d rather spend their creative energy on business logic and user experience, the parts that actually differentiate a product.

Boring software wins because it doesn’t collapse under its own cleverness.


Why This Matters Even More in the AI Era

Today, a new kind of abstraction has arrived: AI.
We don’t just import libraries anymore; we import intelligence.

When you integrate AI into your workflow, you’re effectively outsourcing judgment, reasoning, and data handling to a black box that feels magical but is still software under the hood.

If you’ve never built or understood the underlying systems (search, parsing, data handling, caching, numerical precision), you’ll have no intuition for what the AI is actually doing. You’ll treat it as an oracle instead of a tool.

Knowing how these fundamentals work grounds you. It helps you spot when the model hallucinates, when latency hides in API chains, when an embedding lookup behaves like a fuzzy search instead of real understanding.

The engineers who will thrive in the AI era aren’t the ones who blindly prompt. They’re the ones who know what’s happening behind the prompt.

Because AI systems don’t erase complexity; they just bury it deeper.

And if you don’t know what lives underneath, you can’t debug, govern, or trust it.


When It’s Worth Reinventing

There are exceptions. Sometimes the act of rebuilding is the product itself.

Search at Google. Encryption at Signal. Auth at Okta.

If your business is the infrastructure, then yes, go deep. Reinvent with intention. But if it’s not, your job is to assemble reliable systems, not to recreate them.

Learn enough to understand the tradeoffs, but don’t mistake knowledge for necessity.


The Real Lesson

Here’s the paradox: you can’t truly respect how hard these problems are until you’ve built them yourself.

So do it once. In a sandbox, on weekends, or as a thought exercise. Feel the pain, appreciate the elegance of the libraries you once dismissed, and move on.

That humility will make you a better engineer, and a more trusted builder in the AI age, than any clever homegrown library ever could.


Final thought:
Master everything. Ship selectively.

That’s the difference between engineering as craft and engineering as production.
And it’s the difference between using AI and actually understanding it.

AI Vibe Coding vs. Outsourcing vs. Local Developers: What Really Works Best

The way we build software is changing fast.
You can now code alongside AI in real time. You can hire an offshore team across time zones. Or you can build with local developers right next to you, the old-school way that suddenly feels new again.

Each model works, but they work differently. And when it comes to product quality, iteration speed, and long-term success, only one consistently delivers.

Let’s unpack all three, step by step.


AI Vibe Coding: Building at the Speed of Thought

AI vibe coding is when you work directly with AI tools like ChatGPT, Claude, or GitHub Copilot as your pair developer.

It’s not about asking for snippets; it’s about co-developing live.
You describe your intent, get code, refine it instantly, and iterate in a tight feedback loop.

Process (Pseudocode-Style)

while (building_feature):
    describe_intent_to_ai("Create an onboarding flow with email + OAuth")
    ai.generates_scaffold()
    you.review_and_edit_live()
    ai.adjusts_structure_and_tests()
    run_tests()
    deploy_if_ready()

Pros

✅ Extremely fast iteration
✅ Context-aware (if you prompt consistently)
✅ Great for prototyping and boilerplate
✅ Ideal for solo founders or small teams

Cons

⚠️ Requires strong technical judgment
⚠️ AI lacks product intuition and domain empathy
⚠️ Risk of hidden bugs or overconfidence in generated code
⚠️ Limited long-term maintainability without review

AI vibe coding accelerates early-stage building, but it still needs human oversight, structured review, and context. It’s great for speed, not yet for strategy.


Outsourcing Development: Slow Communication, Slow Momentum

Outsourcing means hiring remote developers (often overseas) to build parts or all of your product.

The promise: cost savings and flexibility.
The reality: delays, misunderstandings, and low-context execution.

Process (Pseudocode-Style)

while (project_in_progress):
    product_owner.create_ticket():
        - detailed specs
        - screenshots, examples, acceptance criteria

    offshore_team(next_day):
        - reads ticket
        - implements as interpreted
        - opens PR for review

    product_owner(next_morning):
        - reviews PR
        - finds edge cases or misalignment
        - leaves comments

Pros

✅ Lower hourly cost
✅ Access to global talent
✅ Good for simple, well-scoped tasks

Cons

⚠️ Delayed feedback loops (timezone lag)
⚠️ Communication loss and misinterpretation
⚠️ Reduced ownership or creative input
⚠️ Code that “works” but doesn’t “fit”
⚠️ High hidden cost in management and rework

Outsourcing works if your specs are crystal clear and your needs don’t change.
But in reality, product needs always change.


Local Developers: Real-Time Collaboration and True Ownership

Now let’s talk about the model that consistently wins: local developers.

Whether sitting next to you or just in the same time zone, local developers bring both technical skill and product empathy. They understand your users, your goals, and your market context intuitively.

This creates a feedback loop that’s impossible to replicate with outsourcing or AI.

Process (Pseudocode-Style)

while (building_product):
    morning_sync():
        - quick discussion on goals and blockers
        - align on what matters today

    devs.start_coding():
        - spontaneous chat or screen share
        - brainstorm architecture in real time
        - fix and test instantly

    afternoon_review():
        - peer review and refactor collaboratively
        - same-day deploy

Pros

✅ Real-time communication
✅ Shared product understanding
✅ Collaborative brainstorming
✅ High accountability and quality
✅ Culture and creativity alignment

Cons

⚠️ Higher cost per developer
⚠️ Limited local hiring pool
⚠️ Needs strong leadership and culture

But what you gain far outweighs the cost.
When your developers vibe with your product, decisions are faster, reviews are deeper, and every line of code carries intent.


The Hidden Layer: How Context Shapes Code

Here’s the truth:
Every developer, human or AI, writes code based on context.

When context is broken (as in outsourcing), code quality drops.
When context is partial (as with AI), you get speed but need oversight.
When context is shared (as with local devs), you get clarity, accountability, and alignment.

The Context Pyramid

| Layer | Outsourcing | AI Vibe Coding | Local Developer |
| --- | --- | --- | --- |
| Product intuition | Low | Medium | High |
| Speed | Low | Very High | High |
| Collaboration depth | Low | Medium | Very High |
| Communication lag | High | None | Minimal |
| Code quality | Variable | Good with review | Consistently strong |
| Ownership | Low | None | High |
| Best for | Fixed-scope tasks | Rapid prototyping | Long-term, evolving products |

The Hybrid Reality

The future is likely hybrid:

  • Use AI for ideation, scaffolding, and acceleration.
  • Avoid outsourcing for evolving or strategic projects.
  • Anchor everything around a local team that owns the product, understands the users, and ensures quality.

That’s the winning setup: AI for speed, local developers for soul.


💬 Final Thoughts

Building great software isn’t just about writing code.
It’s about alignment: shared context, communication, and ownership.

  • AI vibe coding gives you momentum.
  • Outsourcing gives you manpower.
  • Local developers give you meaning and mastery.

If you want code that not only runs but lasts, go local, stay collaborative, and use AI as your accelerator, not your replacement.


👨‍💻 Need Help Structuring Your Team or Workflow?

If you’re building an MVP, scaling a startup, or managing tech in fintech or SaaS that needs both speed and reliability, I can help.

With nearly two decades of experience building, scaling, and advising startups and complex systems, I offer consulting on how to structure teams, integrate AI effectively, and build codebases that actually scale and stand the test of time.

Let’s make your team and your code vibe.

The Vibe Code Tax

Momentum is the oxygen of startups. Lose it, and you suffocate. Getting it back is harder than creating it in the first place.

Here’s the paradox founders hit early:

  • Move too slowly searching for the “perfect” technical setup, and you’re dead before you start.
  • Move too fast with vibe-coded foundations, and you’re dead later, in a louder, more painful way.

Both paths kill. They just work on different timelines.

Death by Hesitation

Friendster is a perfect example of death by hesitation. They had the idea years before Facebook. They had users. They had momentum.

But their tech couldn’t scale, and instead of fixing it fast, they stalled. Users defected. Momentum bled out. By the time they moved, Facebook and MySpace had already eaten their lunch.

That’s the hesitation tax: waiting, tinkering, second-guessing while the world moves on.

Death by Vibe Coding

On the flip side, you get the vibe-coded death spiral.

Take Theranos. It wasn’t just fraud; it was vibe coding at scale. Demos that weren’t real. A prototype paraded as a product. By the time the truth surfaced, they’d burned through billions and a decade of time.

Or look at Quibi. They raced to market on duct-taped assumptions; the whole business was a vibe-coded bet that people wanted “TV, but shorter.” $1.75 billion later, they discovered the foundation was wrong.

That’s the danger of mistaking motion for progress.

The Right Way to Use Vibe Coding

Airbnb is the counterexample. Their first site was duct tape. Payments were hacked together. Listings were scraped. It was vibe code, but they treated it as a proof of concept, not a finished product.

The moment they proved demand (“people really will rent air mattresses from strangers”), they rebuilt. They didn’t cling to the prototype. They moved fast, validated, then leveled up.

That’s the correct use: vibe code as validation, not as production.

The Hidden Tax

The vibe code tax is brutal because it’s invisible at first. It’s not just money.

  • Lost time → The 6–12 months you’ll spend duct-taping something that later has to be rebuilt from scratch.
  • Lost customers → Early adopters churn when they realize your product is held together with gum and string. Most won’t return.
  • Lost momentum → Investors don’t like hearing “we’re rebuilding.” Momentum is a story you only get to tell once.

And you don’t get to dodge this tax. You either pay it early (by finding a technical co-founder or paying real engineers), or you pay it later (through rebuilds, lost customers, and wasted months).

How to Stay Alive

  1. Be honest. Call your vibe-coded MVP a prototype. Never pitch it as “production-ready.”
  2. Set a timer. Airbnb didn’t stay in duct tape land for years. They validated and moved on. You should too.
  3. Budget for the rebuild. If you don’t have a co-founder, assume you’ll need to pay engineers once the prototype proves itself.
  4. Go small but real. One feature built right is more valuable than ten features that crumble.

Final Word

The startup graveyard is full of companies that either waited too long or shipped too fast without a foundation. Friendster hesitated. Theranos faked it. Quibi mistook hype for traction.

Airbnb survived because they paid the vibe code tax on their terms. They used duct tape to test, then rebuilt before the cracks became fatal.

That’s the playbook.

Because no matter what, the vibe code tax always comes due.

20+ Years as a CTO: Lessons I Learned the Hard Way

Being a CTO isn’t what it looks like from the outside. There are no capes, no magic formulas, and certainly no shortcuts. After more than two decades leading engineering teams, shipping products, and navigating the chaos of startups and scale-ups, I’ve realized that the real challenges and the real lessons aren’t technical. They’re human, strategic, and sometimes painfully simple.

Here are the lessons that stuck with me, the ones I wish someone had told me when I started.


Clarity beats speed every time

Early in my career, I thought speed meant writing more code, faster. I would push engineers to “ship now,” measure velocity in lines of code or story points, and celebrate sprint completions.

I was wrong.

The real speed comes from clarity. Knowing exactly what problem you’re solving, who it matters to, and why it matters: that’s what lets a team move fast. I’ve seen brilliant engineers grind for weeks only to realize they built the wrong thing. Fewer pivots, fewer surprises, and sharper focus make a team truly fast.


Engineers want to care, they just need context

One of the most frustrating things I’ve witnessed is engineers shrugging at product decisions. “They just don’t care,” I thought. Until I realized: they do care. They want to make an impact. But when they don’t have context (the customer pain, the market reality, the business constraints), they can’t make informed decisions.

Once I started sharing the “why,” not just the “what,” engagement skyrocketed. A well-informed team is a motivated team.


Vision is a tactical tool, not a slogan

I’ve been guilty of writing vision statements that sounded great on slides but did nothing in practice. The turning point came when I started treating vision as a tactical tool.

Vision guides decisions in real time: Should we invest in this feature? Should we rewrite this component? When the team knows the north star, debates become productive, not paralyzing.


Great engineers are problem solvers first

I’ve worked with engineers who could write elegant code in their sleep but struggled when the problem itself was unclear. The best engineers are not just builders; they’re problem solvers.

My role as a CTO became ensuring the problem was well-understood, then stepping back. The magic happens when talent meets clarity.


Bad culture whispers, it doesn’t shout

I’ve learned to pay attention to the quiet. The subtle signs: meetings where no one speaks up, decisions made by guesswork, unspoken assumptions. These moments reveal more about culture than any HR survey ever could.

Great culture doesn’t need fanfare. Bad culture hides in silence, and it spreads faster than you think.


Done is when the user wins

Early on, “done” meant shipped. A feature went live, the ticket closed, everyone celebrated. But shipping doesn’t equal solving.

Now, “done” only counts when the user’s problem is solved. I’ve had to unteach teams the habit of thinking in terms of output and retrain them to think in terms of impact. It’s subtle, but transformative.


Teams don’t magically become product-driven

I used to blame teams for not thinking like product people. Then I realized the missing piece was me. Leadership must act like product thinking matters. Decisions, recognition, discussions: they all reinforce the mindset. Teams reflect the leadership’s priorities.


Product debt kills momentum faster than tech debt

I’ve chased the holy grail of perfect code only to watch teams get bogged down in building the wrong features. Clean architecture doesn’t save a product if no one wants it. Understanding the problem is far more important than obsessing over elegance.


Focus is a leadership decision

I once ran a team drowning in priorities. Tools, frameworks, and fancy prioritization systems didn’t help. The missing ingredient was leadership. Saying “no” to the wrong things, protecting focus, and consistently communicating what matters: that’s what accelerates teams.


Requirements are not the problem

If engineers are stuck waiting for “better requirements,” don’t introduce another process. Lead differently. Engage with your team, clarify expectations, remove ambiguity, and give feedback in real time. Requirements are never the bottleneck; leadership is.


The hard-earned truth

After twenty years, I’ve realized technology is the easy part. Leadership is where the real work and the real leverage lie.

Clarity, context, vision, problem-solving, culture, focus: these aren’t buzzwords. They are the forces that determine whether a team thrives or stalls.

I’ve seen brilliant teams fail, and ordinary teams excel, all because of the way leadership showed up. And that’s the lesson I carry with me every day: if you want speed, impact, and results, start with the leadership that creates the conditions for them.

Why AI won’t solve these problems

With all the excitement around AI today, it’s tempting to think that tools can fix everything. Need better requirements? There’s AI. Struggling with design decisions? AI can suggest options. Want faster development? AI can generate code.

Here’s the hard truth I’ve learned: none of these tools solve the real problems. AI can assist, accelerate, and automate, but it cannot provide clarity, set vision, or foster a healthy culture. It doesn’t understand your users, your market, or your team’s dynamics. It can’t decide what’s important, or make trade-offs when priorities conflict. Those are human responsibilities, and they fall squarely on leadership.

I’ve seen teams put too much faith in AI as a silver bullet, only to discover that the fundamental challenges (alignment, focus, problem definition, and decision-making) still exist. AI is powerful, but it’s a force multiplier, not a replacement. Without strong leadership, even the most advanced AI cannot prevent teams from building the wrong thing beautifully, or from stagnating in a quiet, passive culture.

Ultimately, AI is a tool. Leadership is the strategy. And experience, built on decades of trial, error, and hard-won insight, is what turns potential into real results.

Cocoa, Chocolate, and Why AI Still Can’t Discover

Imagine standing in front of a freshly picked cocoa pod. You break it open, and inside you find a pale, sticky pulp with bitter seeds. Nothing looks edible, nothing smells particularly appetizing. By every reasonable measure, this is a dead end.

Yet humanity somehow didn’t stop there. Someone, centuries ago, kept experimenting, taking steps that made no sense at the time:

  • Picking out the seeds and letting them ferment until they grew mold.
  • Washing and drying them for days, though still inedible.
  • Roasting them into something crunchy, still bitter and strange.
  • Grinding them into powder, which tasted worse.
  • Finally, blending that bitterness with sugar and milk, turning waste into one of the most beloved foods in human history: chocolate.

No algorithm would have told you to keep going after the first dozen failures. There was no logical stopping point, only curiosity, persistence, and maybe a bit of luck. The discovery of cocoa as food wasn’t the result of optimization; it was serendipity.

Why This Matters for AI

AI today is powerful at recombining, predicting, and optimizing. It can remix what already exists, generate new connections from vast data, and accelerate discoveries we’re already aiming toward. But there’s a limit: AI doesn’t (yet) explore dead ends with stubborn curiosity. It doesn’t waste time on paths that appear pointless. It doesn’t ferment bitter seeds and wait for mold to form, just to see if maybe, somehow, there’s something new hidden inside.

Human discovery has always been messy, nonlinear, and often illogical. The journey from cocoa pod to chocolate shows that sometimes the only way to find the extraordinary is to persist through the ridiculous.

The Future of Discovery

If we want AI to go beyond optimization and into true discovery, it will need to embrace the irrational side of exploration: the willingness to try, fail, and continue without clear reasons. Until then, AI remains a tool for extending human knowledge, not a replacement for the strange, stubborn spark that drives us to turn bitter seeds into sweetness.

Because the truth is: chocolate exists not because it was obvious, but because someone refused to stop at “nothing edible.”

Look at that path again: it makes no sense. At every step the signal says stop. No data suggests you should continue. No optimization algorithm rewards the action. Yet someone did. And that’s how one of the world’s favorite foods was discovered.

This is the gap between human discovery and AI today.

AI can optimize, remix, predict. It can explore a search space, but only one that’s already defined. It can’t decide to push through meaningless, irrational steps where there’s no reason to keep going. It won’t follow a path that looks like failure after failure. It won’t persist in directions that appear to lead nowhere.

But that’s exactly how discovery often works.

Cocoa to chocolate wasn’t about efficiency. It was curiosity, stubbornness, and luck. The same applies to penicillin, vulcanized rubber, even electricity. Breakthroughs happen because someone ignored the “rational” stopping point.

AI is far from that. Right now, it’s bounded by what already exists. It doesn’t yet invent entirely new domains the way humans stumble into them.

The lesson? Discovery is still deeply human. And the future of AI will depend not just on making it smarter, but on making it willing to walk blind paths where no reward signal exists until something unexpected emerges.

Because sometimes, you need to go through moldy seeds and bitterness to find chocolate.

When to Hire Real Engineers Instead of Freelancers for Your MVP

Building a startup is a race against time. Every day you wait to ship your idea is a day your competitors could gain an edge. That’s why many founders start with freelancers or “vibe coding” to launch their MVP (Minimum Viable Product) quickly. But this fast-track approach comes with hidden risks. There comes a point when hiring real engineers is no longer optional; it’s critical for your startup’s survival.

In this post, we’ll explore when it’s the right time to transition from freelancers to full-time engineers, and why vibe coding with low-cost freelancers can be dangerous for your MVP.


Why Start With Freelancers?

Freelancers are often the first choice for early-stage founders. Here’s why:

  • Speed: Freelancers can help you quickly prototype your idea.
  • Lower Cost: You pay for work done, without the overhead of full-time salaries or benefits.
  • Flexibility: You can scale the workforce up or down depending on the project stage.

Freelancers are perfect for validating your idea, testing market demand, or building proof-of-concept features. However, relying on freelancers too long can create technical debt and slow your growth when your product starts attracting real users.


The Hidden Dangers of Vibe Coding With Low-Cost Freelancers

Many founders are tempted by freelancers offering extremely low rates. While the idea of saving money is appealing, vibe coding with bargain-rate developers comes with serious risks:

  • Poor Code Quality: Low-cost freelancers may cut corners, leaving messy, unmaintainable code.
  • Lack of Documentation: Your codebase may be difficult for future engineers to understand or build upon.
  • Delayed Timelines: Cheap freelancers often juggle multiple clients, causing unpredictable delays.
  • False Confidence: Founders may assume their MVP is “production-ready” when it’s not.
  • Hidden Costs: Fixing technical debt later often costs more than hiring quality engineers from the start.

Using low-cost freelancers is fine for prototyping ideas quickly, but it becomes risky when your MVP starts attracting real users or paying customers.


Signs You Need Real Engineers

Here are the main indicators that your MVP has outgrown freelancers:

1. Product Complexity Increases

  • Your MVP is no longer a simple prototype.
  • Features require backend scalability, integrations, or complex logic.
  • Codebase is hard for freelancers to maintain consistently.

2. Customers Expect Stability

  • Paying users begin using your product regularly.
  • Bugs, downtime, or inconsistent updates start hurting your credibility.
  • You need reliable, professional code that can scale.

3. You Plan for Rapid Growth

  • You anticipate increasing traffic, user engagement, or data volume.
  • Your MVP needs a scalable architecture to handle more users efficiently.

4. Security and Compliance Matter

  • Sensitive user data, payment systems, or regulatory requirements are involved.
  • Freelancers may lack the expertise to ensure security best practices.

How to Transition Smoothly to Full-Time Engineers

Once you’ve decided to hire real engineers, plan the transition carefully to avoid disruption:

  1. Audit Existing Code: Identify areas of technical debt and create a roadmap for refactoring.
  2. Hire Strategically: Look for engineers with startup experience who can handle rapid iteration and product scaling.
  3. Document Everything: Ensure all features, APIs, and infrastructure are well-documented for the new team.
  4. Maintain Continuity: Keep a few top freelancers for short-term tasks during the handover period.
  5. Invest in Tools: Use code repositories, CI/CD pipelines, and testing frameworks to support professional development practices.

Cost Considerations

Hiring full-time engineers is an investment. While freelancers may seem cheaper upfront, consider the long-term costs:

  • Technical Debt: Fixing poor-quality code can cost far more than hiring engineers initially.
  • Lost Customers: Product instability can lead to churn and missed revenue.
  • Opportunity Cost: Delays in scaling and adding features can let competitors win market share.

Think of full-time engineers as insurance for your product’s future success.


Conclusion

Freelancers are invaluable for testing your idea and building a lean MVP quickly. But relying on low-cost vibe coding can be dangerous: messy code, delayed timelines, and hidden costs can stall your startup before it even takes off. Once your product gains traction, complexity, and paying users, hiring real engineers ensures stability, scalability, and long-term growth.

Key Takeaway: Use freelancers for prototyping, but transition to full-time engineers before your MVP becomes a product your customers depend on. Planning the move carefully saves time, money, and frustration.


Have you experienced the vibe code tax firsthand? Share your story in the comments and tell us how you decided when to hire full-time engineers.

On AI-Generated Code, Maintainability, and the Possibility of Disposable Software

Over the past two years, I’ve been using various AI-assisted programming tools like Codeium, GitHub Copilot, ChatGPT, and others. These tools have become part of my daily workflow. Mostly, I use them for code completion: to help me finish thoughts, suggest alternatives, or fill in repetitive boilerplate. I’m not doing anything too wild with autonomous agents or fully automated codebases. Just practical, incremental help.

That said, even in this limited use, I’m starting to feel the friction.

Not because the tools are bad; actually, they’re improving fast. Individual lines and even complete methods are cleaner than they used to be. The suggestions are smarter. The models are more context-aware. But one thing still nags at me: even with better completions, a lot of the output still isn’t good code in the long-term sense.


Maintainability Still Matters

The issue, to me, isn’t whether AI can help me write code faster. It can. The issue is whether that code is going to survive over time. Is it going to be easy to understand, extend, or refactor? Does it follow a style or pattern that another human could step into and build on?

This matters especially when you’re not the only one touching the code or when you come back to it after a few months and wonder, “Why did I do it this way?”

And here’s the contradiction I keep running into: AI helps you write code faster, but it often creates more problems to maintain. That’s especially true when I’ve tested more advanced setups where you let an agent plan and generate entire components, classes, or services. It sounds great in theory, but in practice it causes a lot of changes, inconsistencies, and small bugs that end up being more trouble to fix than if I had just written it myself from the start.

So for now, I stay close to completions. Code at the scale of a line or a method. It’s easier to understand, easier to control. I can be the architect, and the AI can be the assistant.


The Self-Fulfilling Trap

There’s a strange loop forming in AI development. Since the generated code is harder to reason about or maintain, people often treat it as throwaway. And because it’s throwaway, nobody bothers to make it better. So it stays bad.

Self-fulfilling prophecy.

The more AI you use to generate logic, the more you’re tempted to not go back and polish or structure it. You get into a loop of “just generate something that works,” and soon you’re sitting on a pile of glue code and hacks that’s impossible to build on.

But maybe that’s okay? Maybe we need to accept that some code isn’t meant to last.


Disposable Software Might Be the Point

This is where I’m starting to shift my thinking a little. I’ve always approached code as something you build on something that lives and evolves. But not all code needs that.

A lot of software today already is disposable, even if we don’t admit it. One-off internal dashboards, ETL jobs, scripts for events, MVPs for marketing campaigns, integrations that won’t live beyond a quarter. We often pretend we’re building maintainable systems, but we’re not. We just don’t call them disposable.

With AI in the mix, maybe it’s time to embrace disposability for what it is. Write the code, run the code, get the result, throw it away. Next time, generate it again, maybe with better context or updated specs.

This mindset removes a lot of the pressure around maintainability. And it fits the strengths of today’s AI tools.


When Not to Use Agentic Systems (Yet)

I’ve played with more autonomous agent systems: what people call “agentic AI” or multi-agent code platforms. Honestly? I’m not sold on them yet. They generate too much. They make decisions I wouldn’t make. They refactor things that didn’t need to be touched.

And then I spend more time reading diff views and undoing changes than I saved by delegating in the first place.

Maybe in the future I’ll be comfortable letting an AI agent draft a service or plan out an architectural pattern. But today, I’m not there. I use these tools more like smart autocomplete than autonomous developers. It’s still very much my code; they’re just helping speed up the flow.


Final Thoughts

There’s a real risk of overhyping what AI can do for codebases today. But there’s also an opportunity to rethink how we treat different classes of software. We don’t need to hold everything to the same standards of longevity. Not every project needs to be built for 10 years of feature creep.

Some software can (and should) be treated like scaffolding: built quickly, used once, and removed without guilt.

And that’s where AI shines right now: helping us build the things we don’t need to keep.

I’ll keep experimenting. I’ll keep writing most of my own code and using AI where it makes sense. But I’m also watching carefully, because the balance between what’s worth maintaining and what’s better thrown away is shifting.

And we should all be ready for what that means.

The AI isn’t going to be on call at 2 AM when things go down.

Large Language Models (LLMs) like ChatGPT, Copilot, and others are becoming a regular part of software development. Many developers use them to write boilerplate code, help with unfamiliar syntax, or even generate whole modules. On the surface, it feels like a productivity boost. The work goes faster, the PRs are opened sooner, and there’s even time left for lunch.

But there’s something underneath this speed, something we’re not talking about enough. The real issue with LLM-generated code is not that it helps us ship more code, faster. The real issue is liability.


Code That Nobody Owns

There’s a strange effect happening in teams using AI to generate code: nobody feels responsible for it.

It’s like a piece of code just appeared in your codebase. Sure, someone clicked “accept,” but no one really thought through the consequences. This is not new; we saw the same thing with frameworks and compilers that generated code automatically. If no human wrote it, then no human cares deeply about maintaining or debugging it later.

LLMs are like that, but on a massive scale.


The “Average” Problem

LLMs are trained on a massive corpus of public code. What they produce is a kind of rolling average of everything they’ve seen. That means the code they generate isn’t written with care or with deep understanding of your system. It’s not great code. It’s average code.

And as more and more people use LLMs to write code, and that code becomes part of new training data, model quality might even degrade over time; it becomes an average of an average.

This is not just about style or design patterns. It affects how you:

  • Deliver software
  • Observe and monitor systems
  • Debug real-world issues
  • Write secure applications
  • Handle private user data responsibly

LLMs don’t truly understand these things. They don’t know what matters in your architecture, how your team works, or what your specific constraints are. They just parrot what’s most statistically likely to come next in the code.


A Fast Start, Then a Wall

So yes, LLMs speed up the easiest part of software engineering: writing code.

But the hard parts remain:

  • Understanding the domain
  • Designing for change
  • Testing edge cases
  • Debugging production issues
  • Keeping systems secure and maintainable over time

These are the parts that hurt when the codebase grows and evolves. These are the parts where “fast” turns into fragile.


Example: Generated Code Without Accountability

Imagine you ask an LLM to generate a payment service. It might give you something that looks right, maybe even something that works with your Stripe keys and includes some basic error handling.

But:

  • What happens with race conditions?
  • What if fraud detection fails silently?
  • What if a user gets double-charged?
  • Who is logging what?
  • Is the payment idempotent?
  • Is sensitive data like credit cards being exposed in logs?

If no one really “owned” that code, because it was mostly written by an AI, these questions might only surface after things go wrong. And in production, that can be very costly.
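
Owning that code means answering those questions explicitly, in code. For example, a minimal idempotency guard might look like this sketch (the model, column, and gateway call are all hypothetical):

class PaymentService
  # Charge at most once per idempotency key, even under concurrent retries.
  # Assumes a unique database index on payments.idempotency_key.
  def charge(user, amount_cents, idempotency_key:)
    payment = Payment.create!(
      user: user,
      amount_cents: amount_cents,
      idempotency_key: idempotency_key
    )
    gateway_charge(payment) # hypothetical gateway call; never log card data here
    payment
  rescue ActiveRecord::RecordNotUnique
    Payment.find_by!(idempotency_key: idempotency_key) # already processed; return it
  end
end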


So What’s the Better Approach?

LLMs can be great tools, especially for experienced engineers who treat them like assistants, not authors.

To use LLMs responsibly in your team:

  • Review AI-generated code with care.
  • Assign clear ownership, even for generated components.
  • Add context-specific tests and documentation.
  • Educate your team on the why, not just the how.
  • Make accountability a core part of your development process.

Because in the end, you are shipping the product. The AI isn’t going to be on call at 2 AM when things go down.


Final Thoughts

LLMs give us speed. But they don’t give us understanding, judgment, or ownership. If you treat them as shortcuts to ship more code, you may end up paying the price later. But if you treat them as a tool, and keep responsibility where it belongs, they can still be part of a healthy, sustainable development process.

Thanks for reading. If you’ve seen this problem in your team or company, I’d love to hear how you’re dealing with it.