Company Culture

The article critiques the modern obsession with corporate culture, arguing it is often a superficial construct designed to appease executives rather than genuinely engage employees. The author emphasizes that true culture emerges organically from shared experiences, while corporate culture is manufactured, focusing on management control instead of addressing real employee needs such as fair compensation and meaningful work.

There’s an elephant in every conference room, and it’s time someone pointed it out: corporate culture is mostly performance art for executives who’ve consumed one too many LinkedIn thought leadership posts. Here’s the uncomfortable truth: when it comes to company culture, there is really nothing cultural about it at all.

I recently came across yet another company memo about culture, this one lamenting how remote work is supposedly destroying the magical “energy” and “spontaneous moments” that make a workplace special. The usual suspects were there: hallway conversations, shared excitement, the ineffable sense of belonging that apparently only happens when people share physical space.

And I couldn’t help but think: who is this actually for?


How We Got Here: A Brief History of the Culture Obsession

The corporate culture phenomenon didn’t emerge from nowhere. To understand why every company now has a Chief Culture Officer and a dedicated culture budget, we need to look back at how this obsession began.

The 1980s: Culture as Competitive Advantage

The modern corporate culture movement can be traced back to the early 1980s, particularly to two influential books: “In Search of Excellence” (1982) by Tom Peters and Robert Waterman, and “Corporate Cultures” (1982) by Terrence Deal and Allan Kennedy. These books emerged during a time when American companies were losing ground to Japanese competitors, and consultants were desperately searching for explanations.

The answer they landed on was culture. Japanese companies supposedly had superior cultures: strong shared values, intense loyalty, collective purpose. American companies needed to cultivate similar cultures to compete. Culture became a management tool, something to be engineered and optimized like any other business process.

The 1990s-2000s: The Silicon Valley Mythos

Then came Silicon Valley, and culture took on a new dimension. Tech companies weren’t just selling products; they were selling a lifestyle, a vision, a revolution. Culture became part of the brand, both for recruiting and for public image.

Google famously codified this with its “Don’t be evil” motto and its emphasis on perks: free food, massage rooms, game areas. The message was clear: this isn’t just a job, it’s a community. Facebook followed with its “move fast and break things” ethos. Apple cultivated an air of creative excellence and secrecy.

But here’s what really happened: these companies used culture as a substitute for work-life balance. Free dinner wasn’t a perk; it was an incentive to stay at the office until 9 PM. The ping-pong table wasn’t about fun; it was about keeping you on campus. The mission-driven culture wasn’t about meaning; it was about getting you to work 80-hour weeks for below-market equity.

The 2010s: Culture Goes Mainstream

By the 2010s, every company wanted to be the next Google. Corporate culture became an industry. LinkedIn filled with posts about culture. Business schools taught it. Consultants sold it. HR departments built entire teams around it.

Netflix released its famous culture deck in 2009, and it has since been viewed millions of times. Zappos made headlines by paying people to quit if they didn’t fit the culture. HubSpot, Spotify, Airbnb: every successful tech company published its culture code, and traditional companies scrambled to copy them.

The irony? Most of these companies were successful despite their culture obsession, not because of it. They succeeded because they built products people wanted, captured market opportunities at the right time, or benefited from network effects. The culture was window dressing.

The Etymology Betrays the Problem

Here’s where we need to address the fundamental linguistic sleight of hand. When we talk about “company culture,” we’re borrowing a word, culture, that has deep anthropological and sociological meaning. Real culture emerges organically from shared history, values, traditions, and collective experience. It evolves over generations. It’s authentic and lived, not designed and mandated.

Corporate culture, by contrast, is manufactured. It’s decided in boardrooms, written into documents, and rolled out through internal communications. It’s not emergent; it’s imposed. It’s not organic; it’s engineered. When it comes to company culture, there is really nothing cultural about it at all. It’s just management strategy dressed up in anthropological language.

Real culture, the kind anthropologists study, comes from the bottom up. Corporate culture comes from the top down. Real culture reflects genuine shared values. Corporate culture reflects what leadership wants people to value. These are fundamentally different things, but we use the same word for both, and that confusion serves the interests of management.


The Culture Industry and Its Empty Promises

Walk into any modern company and you’ll witness the same elaborate theater. Mission statements are plastered across reception walls in carefully chosen fonts. Core values are printed on branded notebooks that nobody opens. There are Slack channels dedicated to “living our culture,” all-hands meetings where leadership delivers inspiring speeches about shared purpose, and mandatory workshops on belonging.

The culture machinery churns constantly, producing engagement surveys, team-building exercises, and culture decks that get shared on social media as proof of how progressive and human-centered the organization is.

But here’s the uncomfortable truth that nobody in the C-suite wants to acknowledge: most employees don’t actually care about any of this.

They care about their work. They care about their paycheck. They care about having autonomy and respect. But the corporate culture apparatus? That’s someone else’s concern.

What People Actually Want From Work

Let’s cut through the corporate-speak and talk about what employees genuinely value:

Fair Compensation

Pay people what they’re worth. Not what market conditions allow you to get away with. Not what your compensation band dictates. What their actual contribution to the company’s success merits. Nothing kills “culture” faster than discovering your colleague doing the same job makes 30% more, or that the company just raised another funding round while salaries remain frozen.

You can’t pizza-party your way out of pay inequity. You can’t build belonging on top of resentment about compensation.

Meaningful Work

Give people problems worth solving. Grant them genuine autonomy to solve those problems. Provide the resources, tools, and support they need to do excellent work. That’s it. That’s the formula.

People don’t need culture programs to feel engaged; they need to work on things that matter, with the freedom to approach problems creatively and the support to execute their ideas.

Respect for Their Time and Boundaries

Stop pretending that mandatory fun is fun. Stop acting like team-building exercises are anything other than unpaid work. Stop scheduling culture activities outside working hours and calling it optional when everyone knows it’s not.

Respect that people have lives outside of work. That they have families, hobbies, communities, and identities that have nothing to do with the company. The more you try to make the company their primary source of belonging, the more you reveal that you’re trying to extract more than you’re paying for.

Professional Relationships Without Forced Community

People are perfectly capable of building genuine connections with colleagues without corporate facilitation. They’ll grab coffee with people they like. They’ll collaborate effectively with people they respect. They’ll form friendships naturally when there’s actual compatibility.

What they don’t need is ice-breaker activities, personality assessments, or trust falls. Organic relationships built through real work will always be stronger than manufactured bonding through company-mandated activities.


So Who Actually Benefits From the Culture Obsession?

If employees aren’t clamoring for more culture initiatives, who is this all for?

Leadership Teams

Executive leadership gets to feel like visionaries instead of managers. It’s far more inspiring to write manifestos about “energy” and “shared purpose” than to fix broken processes, address systemic issues, or have difficult conversations about compensation gaps.

Culture talk allows leaders to focus on the intangible and aspirational while avoiding the concrete and measurable. You can’t be held accountable for whether the culture feels right, but you absolutely can be held accountable for whether salaries are competitive or processes are efficient.

HR Departments

Culture initiatives justify expanding HR budgets and headcount. Every new belonging program requires a program manager. Every engagement survey needs analysis. Every culture transformation demands consultants.

The output? Endless PowerPoint presentations, internal communications, and reports that measure sentiment instead of solving problems. The culture apparatus becomes self-perpetuating; it exists to justify its own existence.

Corporate Consultants

An entire industry has emerged around selling culture frameworks to companies. The same repackaged ideas get sold again and again, promising transformation while delivering jargon.

Every company thinks their culture is unique, so every company pays for a bespoke culture assessment that tells them roughly the same things. The consultants make millions. The culture doesn’t actually change.


The Remote Work Scapegoat

I want to address the specific example that prompted this post: the notion that remote work is killing corporate culture by eliminating spontaneous moments and reducing people to “tiles on a screen.”

This argument is nostalgia disguised as analysis, and it conveniently ignores some inconvenient truths:

Those hallway moments were never accessible to everyone. They excluded remote workers, parents who couldn’t stay late for after-hours drinks, people with disabilities who found the office environment challenging, and anyone who wasn’t part of the dominant in-group. The spontaneous moments were spontaneous for some, but they created systematic exclusion for others.

The “shared excitement” happens regardless of location. When a team ships something great, the excitement is genuine whether people are celebrating in a conference room or a group chat. The dopamine hit of solving a hard problem doesn’t require physical proximity.

Trust and collaboration are built through work, not proximity. You know what builds trust? Delivering on commitments. Communicating clearly. Having each other’s backs when things get difficult. Respecting boundaries. None of that requires an office.

Forcing people back to offices doesn’t create culture. It creates commutes, distractions, and resentment.

The push for return-to-office masked as culture concern is often about something else entirely: a desire for visibility and control, real estate investments that need to be justified, or managers who never learned how to lead distributed teams.


What Actually Matters

If you genuinely want people to care about their work and feel connected to their team, here’s what you should focus on:

Stop investing in culture programs. Invest in people instead. Channel that budget into competitive compensation, comprehensive benefits, real professional development opportunities, and career growth paths that aren’t just theoretical.

Stop measuring belonging. Start measuring enablement. Do people have the tools they need? Can they get decisions made efficiently? Are there clear paths for escalation when they’re blocked? These are concrete questions with measurable answers.

Stop forcing togetherness. Trust teams to self-organize. Some teams will want to meet in person regularly. Others will thrive remotely. Many will want a hybrid approach. Let teams figure out what works for them instead of mandating a one-size-fits-all solution.

Stop pretending your company is special. Most companies do roughly similar work. Most face similar challenges. The sooner you accept this, the sooner you can focus on what actually differentiates you: the quality of work you produce, how you treat the people producing it, and whether you deliver value to customers.


The Uncomfortable Truth About Corporate Culture

Let’s say the quiet part out loud: corporate culture isn’t really about belonging. It’s about control.

It’s about making people identify with the company so deeply that they’ll accept below-market compensation for the privilege of being part of the “mission.” It’s about creating emotional investment that can be leveraged for longer hours and fewer boundaries. It’s about building loyalty that serves the company more than it serves the employee.

The culture apparatus exists to make people feel like they’re part of something bigger than themselves, and then to use that feeling as justification for asking them to sacrifice more.

But employees are getting smarter. They see through the posters and the speeches. They recognize when culture talk is cover for avoiding harder conversations about money, equity, and sustainable work practices. They know the difference between genuine care and performative concern.


A Final Thought

I’m not suggesting that culture doesn’t matter at all. Of course it does. How people treat each other matters. Whether there’s psychological safety matters. Whether the organization lives up to its stated values matters.

But real culture isn’t built through programs and initiatives. It’s built through a thousand small decisions: how you handle a mistake, whether you give credit where it’s due, whether you protect someone’s time off, how you respond when priorities conflict.

Real culture is what happens when nobody’s performing for leadership. It’s the unwritten rules about how work actually gets done. It’s whether people feel safe being honest. It’s whether the company’s actions match its words.

You can’t manufacture that with team-building exercises and culture decks. You can only create the conditions for it to emerge and then get out of the way.

So maybe it’s time to stop obsessing over culture as a thing to be built and managed, and start focusing on the fundamentals: doing good work, treating people well, and paying them fairly.

The culture will take care of itself.

Building a Decentralized Credit Card System Part 2: Solidity Smart Contract Implementation

In the first part of this series, we explored the conceptual architecture of a blockchain-based credit card system using multi-signature keys and encrypted spending limits. Now, let’s dive into the technical implementation with concrete Solidity examples.

This post walks through complete, working smart contract code that demonstrates how to build a secure, multi-signature credit card system on Ethereum or any EVM-compatible blockchain.

Smart Contract Architecture Overview

Our decentralized credit card system consists of five interconnected smart contracts:

  1. CreditFacility Contract: Manages the master account and credit line
  2. CardManager Contract: Handles individual card issuance and lifecycle
  3. SpendingLimits Contract: Enforces encrypted spending rules
  4. PaymentProcessor Contract: Executes and settles transactions
  5. MultiSigGovernance Contract: Handles high-value transaction approvals

Each contract has a specific responsibility, following the principle of separation of concerns. This modular approach makes the system more maintainable, upgradeable, and secure.

Note: These contracts are designed for educational and proof-of-concept purposes. Production deployment would require extensive security audits, gas optimization, and integration with off-chain systems.
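To make the modular wiring concrete, here is a minimal sketch of how the first three contracts in this series could be linked at deployment. The `SystemDeployer` contract is purely illustrative and not part of the system; imports of the contract definitions shown later in this post are omitted.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Illustrative deployer: the facility goes first, then the CardManager is
// pointed at it via its constructor (see the CardManager contract below).
contract SystemDeployer {
    CreditFacility public facility;
    CardManager public cardManager;
    SpendingLimits public spendingLimits;

    constructor() {
        facility = new CreditFacility();
        cardManager = new CardManager(address(facility));
        spendingLimits = new SpendingLimits();
    }
}
```

In practice you would deploy these through a migration script rather than a deployer contract, but the dependency order is the same: the Credit Facility must exist before the Card Manager can reference it.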

1. The Credit Facility Contract

This contract represents the bank account or credit line, the source of funds controlled by the master key. It implements multi-signature controls to ensure that no single party can unilaterally make critical decisions.

Key Features

  • Multi-signature authorization for sensitive operations
  • Credit limit management with approval workflows
  • Real-time tracking of available credit and outstanding balance
  • Support for multiple authorized signers per account
  • Emergency account suspension capabilities

The Complete Contract

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/**
 * @title CreditFacility
 * @dev Manages the master credit account with multi-sig controls
 */
contract CreditFacility {

    struct CreditAccount {
        uint256 creditLimit;
        uint256 availableCredit;
        uint256 outstandingBalance;
        bool isActive;
        address[] authorizedSigners;
        uint256 requiredSignatures;
    }

    mapping(address => CreditAccount) public accounts;
    mapping(address => mapping(bytes32 => uint256)) public transactionApprovals;

    event CreditAccountCreated(address indexed account, uint256 creditLimit);
    event CreditLimitUpdated(address indexed account, uint256 newLimit);
    event CreditUsed(address indexed account, uint256 amount);
    event CreditRepaid(address indexed account, uint256 amount);
    event TransactionApproved(address indexed signer, bytes32 transactionHash);

    modifier onlyAccountOwner(address account) {
        require(isAuthorizedSigner(account, msg.sender), "Not authorized");
        _;
    }

    modifier accountActive(address account) {
        require(accounts[account].isActive, "Account not active");
        _;
    }

    function createCreditAccount(
        address accountAddress,
        uint256 creditLimit,
        address[] memory signers,
        uint256 requiredSigs
    ) external {
        require(signers.length >= requiredSigs, "Invalid signer configuration");
        require(!accounts[accountAddress].isActive, "Account already exists");

        accounts[accountAddress] = CreditAccount({
            creditLimit: creditLimit,
            availableCredit: creditLimit,
            outstandingBalance: 0,
            isActive: true,
            authorizedSigners: signers,
            requiredSignatures: requiredSigs
        });

        emit CreditAccountCreated(accountAddress, creditLimit);
    }

    function updateCreditLimit(
        address account,
        uint256 newLimit,
        bytes32 approvalHash
    ) external onlyAccountOwner(account) accountActive(account) {
        require(hasRequiredApprovals(account, approvalHash), "Insufficient approvals");

        CreditAccount storage creditAccount = accounts[account];
        uint256 oldLimit = creditAccount.creditLimit;
        creditAccount.creditLimit = newLimit;

        if (newLimit > oldLimit) {
            // Raising the limit frees the extra credit immediately
            creditAccount.availableCredit += newLimit - oldLimit;
        } else {
            // Lowering the limit reduces available credit, floored at zero
            uint256 decrease = oldLimit - newLimit;
            creditAccount.availableCredit = creditAccount.availableCredit > decrease
                ? creditAccount.availableCredit - decrease
                : 0;
        }

        emit CreditLimitUpdated(account, newLimit);
        clearApprovals(account, approvalHash);
    }

    function useCredit(
        address account,
        uint256 amount
    ) external accountActive(account) returns (bool) {
        CreditAccount storage creditAccount = accounts[account];
        require(creditAccount.availableCredit >= amount, "Insufficient credit");

        creditAccount.availableCredit -= amount;
        creditAccount.outstandingBalance += amount;

        emit CreditUsed(account, amount);
        return true;
    }

    function repayCredit(
        address account,
        uint256 amount
    ) external payable accountActive(account) {
        require(msg.value >= amount, "Insufficient payment");

        CreditAccount storage creditAccount = accounts[account];
        require(creditAccount.outstandingBalance >= amount, "Overpayment");

        creditAccount.outstandingBalance -= amount;
        creditAccount.availableCredit += amount;

        emit CreditRepaid(account, amount);
    }

    function approveTransaction(
        address account,
        bytes32 transactionHash
    ) external onlyAccountOwner(account) {
        transactionApprovals[account][transactionHash]++;
        emit TransactionApproved(msg.sender, transactionHash);
    }

    function hasRequiredApprovals(
        address account,
        bytes32 transactionHash
    ) public view returns (bool) {
        return transactionApprovals[account][transactionHash] >= 
               accounts[account].requiredSignatures;
    }

    function isAuthorizedSigner(
        address account,
        address signer
    ) public view returns (bool) {
        address[] memory signers = accounts[account].authorizedSigners;
        for (uint i = 0; i < signers.length; i++) {
            if (signers[i] == signer) return true;
        }
        return false;
    }

    function clearApprovals(address account, bytes32 transactionHash) internal {
        delete transactionApprovals[account][transactionHash];
    }

    function getAccountDetails(address account) external view returns (
        uint256 creditLimit,
        uint256 availableCredit,
        uint256 outstandingBalance,
        bool isActive
    ) {
        CreditAccount storage acc = accounts[account];
        return (
            acc.creditLimit,
            acc.availableCredit,
            acc.outstandingBalance,
            acc.isActive
        );
    }
}

Understanding the Code

Let’s break down the key components of this contract.

CreditAccount Structure

The CreditAccount struct stores all essential information about a credit account. It tracks the credit limit, available credit, outstanding balance, activation status, and the multi-signature configuration. This structure ensures that all account data is organized and easily accessible.

Multi-Signature Security

The contract implements a flexible multi-signature system. When creating an account, you specify both the authorized signers and how many signatures are required for critical operations. For example, a business account might have five authorized signers but only require three signatures to approve a credit limit increase.

The approveTransaction function allows authorized signers to vote on proposed actions. Once enough approvals are collected, the action can be executed. This prevents any single compromised key from causing damage to the system.
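To see the flow end to end, here is a sketch of how three of five signers might approve a limit increase. The hash derivation and the nonce are illustrative; the contract does not prescribe how the approval hash is built, only that all parties agree on it.

```solidity
// Illustrative call sequence against a deployed CreditFacility `facility`
// (comments only; each call comes from a different signer wallet):
//
// 1. Any party derives a hash identifying the proposed action, e.g.:
//      bytes32 approvalHash = keccak256(abi.encodePacked(account, newLimit, nonce));
//
// 2. Three authorized signers each submit their approval:
//      facility.approveTransaction(account, approvalHash);   // signer 1
//      facility.approveTransaction(account, approvalHash);   // signer 2
//      facility.approveTransaction(account, approvalHash);   // signer 3
//
// 3. With requiredSignatures == 3, hasRequiredApprovals(account, approvalHash)
//    now returns true, so any authorized signer can execute:
//      facility.updateCreditLimit(account, newLimit, approvalHash);
//
// 4. updateCreditLimit clears the collected approvals afterwards, so the
//    same hash cannot be replayed for a second limit change.
```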

Credit Management

The useCredit and repayCredit functions handle the core financial operations. When a card makes a purchase (which we’ll see in later contracts), it calls useCredit to deduct from the available balance. When a payment is made, repayCredit restores the available credit.

Security Feature: Notice how credit operations include checks for account status and available balance. These guards prevent overdrafts and ensure the account is active before any transaction proceeds.
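As a worked example of how these two operations move the balances (all figures illustrative):

```solidity
// Starting state: creditLimit = 10_000, availableCredit = 10_000, outstandingBalance = 0
//
// facility.useCredit(account, 2_500);
//   availableCredit     -> 7_500
//   outstandingBalance  -> 2_500
//
// facility.repayCredit{value: 1_000}(account, 1_000);
//   availableCredit     -> 8_500
//   outstandingBalance  -> 1_500
//
// A further useCredit(account, 9_000) would revert with "Insufficient credit"
// (only 8_500 is available), and repayCredit(account, 2_000) would revert with
// "Overpayment", since only 1_500 is outstanding.
```

Note that available credit plus outstanding balance always sums to the credit limit, which is a useful invariant to assert in tests.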

2. The Card Manager Contract

While the Credit Facility manages the master account, individual cards need their own management layer. The Card Manager contract handles card issuance, activation, deactivation, and the assignment of spending limits to individual cards.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

interface ICreditFacility {
    function useCredit(address account, uint256 amount) external returns (bool);
    function accounts(address) external view returns (
        uint256 creditLimit,
        uint256 availableCredit,
        uint256 outstandingBalance,
        bool isActive
    );
}

contract CardManager {

    struct Card {
        address cardAddress;
        address linkedAccount;
        bool isActive;
        uint256 dailyLimit;
        uint256 monthlyLimit;
        uint256 perTransactionLimit;
        uint256 dailySpent;
        uint256 monthlySpent;
        uint256 lastResetDay;
        uint256 lastResetMonth;
        string cardholderName;
        bytes32 cardType;
    }

    ICreditFacility public creditFacility;

    mapping(address => Card) public cards;
    mapping(address => address[]) public accountCards;
    mapping(address => bool) public cardExists;

    event CardIssued(
        address indexed cardAddress,
        address indexed account,
        string cardholderName
    );
    event CardActivated(address indexed cardAddress);
    event CardDeactivated(address indexed cardAddress);
    event CardLimitsUpdated(address indexed cardAddress);
    event SpendingRecorded(address indexed cardAddress, uint256 amount);

    modifier onlyActiveCard(address cardAddress) {
        require(cards[cardAddress].isActive, "Card not active");
        _;
    }

    modifier cardOwner(address cardAddress) {
        require(msg.sender == cardAddress, "Not card owner");
        _;
    }

    constructor(address _creditFacility) {
        creditFacility = ICreditFacility(_creditFacility);
    }

    function issueCard(
        address cardAddress,
        address account,
        string memory cardholderName,
        uint256 dailyLimit,
        uint256 monthlyLimit,
        uint256 perTransactionLimit,
        bytes32 cardType
    ) external {
        require(!cardExists[cardAddress], "Card already exists");

        cards[cardAddress] = Card({
            cardAddress: cardAddress,
            linkedAccount: account,
            isActive: true,
            dailyLimit: dailyLimit,
            monthlyLimit: monthlyLimit,
            perTransactionLimit: perTransactionLimit,
            dailySpent: 0,
            monthlySpent: 0,
            lastResetDay: block.timestamp / 1 days,
            lastResetMonth: getMonthFromTimestamp(block.timestamp),
            cardholderName: cardholderName,
            cardType: cardType
        });

        accountCards[account].push(cardAddress);
        cardExists[cardAddress] = true;

        emit CardIssued(cardAddress, account, cardholderName);
    }

    function activateCard(address cardAddress) external {
        require(cardExists[cardAddress], "Card does not exist");
        cards[cardAddress].isActive = true;
        emit CardActivated(cardAddress);
    }

    function deactivateCard(address cardAddress) external {
        require(cardExists[cardAddress], "Card does not exist");
        cards[cardAddress].isActive = false;
        emit CardDeactivated(cardAddress);
    }

    function updateCardLimits(
        address cardAddress,
        uint256 dailyLimit,
        uint256 monthlyLimit,
        uint256 perTransactionLimit
    ) external {
        require(cardExists[cardAddress], "Card does not exist");

        Card storage card = cards[cardAddress];
        card.dailyLimit = dailyLimit;
        card.monthlyLimit = monthlyLimit;
        card.perTransactionLimit = perTransactionLimit;

        emit CardLimitsUpdated(cardAddress);
    }

    function checkAndRecordSpending(
        address cardAddress,
        uint256 amount
    ) external onlyActiveCard(cardAddress) returns (bool) {
        Card storage card = cards[cardAddress];

        resetSpendingIfNeeded(cardAddress);

        require(amount <= card.perTransactionLimit, "Exceeds per-transaction limit");
        require(card.dailySpent + amount <= card.dailyLimit, "Exceeds daily limit");
        require(card.monthlySpent + amount <= card.monthlyLimit, "Exceeds monthly limit");

        card.dailySpent += amount;
        card.monthlySpent += amount;

        emit SpendingRecorded(cardAddress, amount);
        return true;
    }

    function resetSpendingIfNeeded(address cardAddress) internal {
        Card storage card = cards[cardAddress];
        uint256 currentDay = block.timestamp / 1 days;
        uint256 currentMonth = getMonthFromTimestamp(block.timestamp);

        if (currentDay > card.lastResetDay) {
            card.dailySpent = 0;
            card.lastResetDay = currentDay;
        }

        if (currentMonth > card.lastResetMonth) {
            card.monthlySpent = 0;
            card.lastResetMonth = currentMonth;
        }
    }

    function getMonthFromTimestamp(uint256 timestamp) internal pure returns (uint256) {
        // Approximation: counts 30-day periods since the epoch, not calendar months
        return timestamp / 30 days;
    }

    function getCardDetails(address cardAddress) external view returns (
        address linkedAccount,
        bool isActive,
        uint256 dailyLimit,
        uint256 monthlyLimit,
        uint256 perTransactionLimit,
        uint256 dailySpent,
        uint256 monthlySpent,
        string memory cardholderName
    ) {
        Card storage card = cards[cardAddress];
        return (
            card.linkedAccount,
            card.isActive,
            card.dailyLimit,
            card.monthlyLimit,
            card.perTransactionLimit,
            card.dailySpent,
            card.monthlySpent,
            card.cardholderName
        );
    }

    function getAccountCards(address account) external view returns (address[] memory) {
        return accountCards[account];
    }

    function getRemainingDailyLimit(address cardAddress) external view returns (uint256) {
        Card storage card = cards[cardAddress];
        if (card.dailySpent >= card.dailyLimit) return 0;
        return card.dailyLimit - card.dailySpent;
    }

    function getRemainingMonthlyLimit(address cardAddress) external view returns (uint256) {
        Card storage card = cards[cardAddress];
        if (card.monthlySpent >= card.monthlyLimit) return 0;
        return card.monthlyLimit - card.monthlySpent;
    }
}

Key Features of Card Manager

The Card Manager introduces several important concepts:

Individual Card Limits

Each card has three types of limits:

  • Per-transaction limit: Maximum amount for a single purchase
  • Daily limit: Maximum spending in a 24-hour period
  • Monthly limit: Maximum spending in a 30-day period

This multi-tiered approach provides granular control over spending patterns and helps prevent fraud.
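Here is a sketch of how the three checks interact in `checkAndRecordSpending` (limit values illustrative):

```solidity
// Card configured with:
//   perTransactionLimit = 500, dailyLimit = 1_000, monthlyLimit = 10_000
//
// cardManager.checkAndRecordSpending(card, 400);  // ok: dailySpent -> 400
// cardManager.checkAndRecordSpending(card, 600);  // reverts: "Exceeds per-transaction limit"
// cardManager.checkAndRecordSpending(card, 500);  // ok: dailySpent -> 900
// cardManager.checkAndRecordSpending(card, 200);  // reverts: "Exceeds daily limit"
//                                                 //   (900 + 200 > 1_000)
```

All three limits must pass for a transaction to be recorded; the most restrictive one wins.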

Automatic Limit Resets

The contract automatically resets daily and monthly spending counters. The resetSpendingIfNeeded function checks if a new day or month has begun and resets the appropriate counters. This happens transparently during transaction validation.

Card Lifecycle Management

Cards can be issued, activated, and deactivated. A deactivated card cannot make purchases, but the card data remains on-chain for historical records. This is crucial for fraud prevention: if a card is compromised, it can be immediately deactivated without affecting other cards linked to the same account.
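A compromised-card response might look like the sketch below. The addresses and card parameters are illustrative, and note that in production these calls would themselves need access control, which this proof-of-concept contract does not yet enforce.

```solidity
// Freeze only the stolen card; sibling cards on the same account keep working.
//
// cardManager.deactivateCard(stolenCard);   // blocks checkAndRecordSpending
// cardManager.getAccountCards(account);     // other cards remain active
//
// Issue a replacement with the same limits (cardholder name, limits, and
// card type here are made-up values):
// cardManager.issueCard(replacementCard, account, "Alice", 1_000, 10_000, 500, "VIRTUAL");
//
// Later, if the original card turns out not to be compromised:
// cardManager.activateCard(stolenCard);
```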

3. The Spending Limits Contract with Encryption

Now we get to the interesting part: encrypted spending limits. This contract demonstrates how to store and validate spending rules while keeping certain parameters private using commitment schemes.
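A quick note on the commitment before the contract itself: the client hashes the private parameters off-chain and stores only the hash on-chain; at spend time it reveals the parameters, and the contract re-hashes them and compares. The helper below is a hypothetical sketch that mirrors the hash used in the contract's `validateTransaction`; the packing order is this contract's own convention, not a standard.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical helper mirroring the commitment check in SpendingLimits.
// The packing order must match the verifying contract exactly, or the
// recomputed hash will never match the stored commitment.
library CommitmentHelper {
    function computeCommitment(
        uint256 amount,
        bytes32 merchantCategory,
        string memory country,
        bytes32 proof
    ) internal pure returns (bytes32) {
        return keccak256(abi.encodePacked(amount, merchantCategory, country, proof));
    }
}
```

The `proof` value acts as a blinding factor: without it, an observer could brute-force small amounts and category codes to recover the hidden parameters from the on-chain hash.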

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract SpendingLimits {

    struct EncryptedLimit {
        bytes32 limitCommitment;
        bytes32 categoryCommitment;
        bool isActive;
        uint256 validUntil;
        bytes32[] allowedMerchantCategories;
        string[] restrictedCountries;
    }

    mapping(address => mapping(uint256 => EncryptedLimit)) public cardLimits;
    mapping(address => uint256) public limitCount;

    event LimitCreated(
        address indexed cardAddress,
        uint256 limitId,
        bytes32 limitCommitment
    );
    event LimitValidated(address indexed cardAddress, uint256 limitId, bool success);
    event LimitRevoked(address indexed cardAddress, uint256 limitId);

    function createEncryptedLimit(
        address cardAddress,
        bytes32 limitCommitment,
        bytes32 categoryCommitment,
        uint256 validUntil,
        bytes32[] memory allowedCategories,
        string[] memory restrictedCountries
    ) external returns (uint256) {
        uint256 limitId = limitCount[cardAddress];

        cardLimits[cardAddress][limitId] = EncryptedLimit({
            limitCommitment: limitCommitment,
            categoryCommitment: categoryCommitment,
            isActive: true,
            validUntil: validUntil,
            allowedMerchantCategories: allowedCategories,
            restrictedCountries: restrictedCountries
        });

        limitCount[cardAddress]++;

        emit LimitCreated(cardAddress, limitId, limitCommitment);
        return limitId;
    }

    function validateTransaction(
        address cardAddress,
        uint256 limitId,
        uint256 amount,
        bytes32 merchantCategory,
        string memory country,
        bytes32 proof
    ) external view returns (bool) {
        EncryptedLimit storage limit = cardLimits[cardAddress][limitId];

        require(limit.isActive, "Limit not active");
        require(block.timestamp <= limit.validUntil, "Limit expired");

        bytes32 computedCommitment = keccak256(
            abi.encodePacked(amount, merchantCategory, country, proof)
        );

        if (computedCommitment != limit.limitCommitment) {
            return false;
        }

        if (!isCategoryAllowed(limit.allowedMerchantCategories, merchantCategory)) {
            return false;
        }

        if (isCountryRestricted(limit.restrictedCountries, country)) {
            return false;
        }

        return true;
    }

    function isCategoryAllowed(
        bytes32[] memory allowedCategories,
        bytes32 category
    ) internal pure returns (bool) {
        if (allowedCategories.length == 0) return true;

        for (uint i = 0; i < allowedCategories.length; i++) {
            if (allowedCategories[i] == category) return true;
        }
        return false;
    }

    function isCountryRestricted(
        string[] memory restrictedCountries,
        string memory country
    ) internal pure returns (bool) {
        for (uint i = 0; i < restrictedCountries.length; i++) {
            if (keccak256(bytes(restrictedCountries[i])) == keccak256(bytes(country))) {
                return true;
            }
        }
        return false;
    }

    function revokeLimit(address cardAddress, uint256 limitId) external {
        // NOTE: a production version would restrict this to the card owner
        require(cardLimits[cardAddress][limitId].isActive, "Limit already inactive");
        cardLimits[cardAddress][limitId].isActive = false;
        emit LimitRevoked(cardAddress, limitId);
    }

    function getLimitDetails(
        address cardAddress,
        uint256 limitId
    ) external view returns (
        bytes32 limitCommitment,
        bool isActive,
        uint256 validUntil,
        bytes32[] memory allowedCategories,
        string[] memory restrictedCountries
    ) {
        EncryptedLimit storage limit = cardLimits[cardAddress][limitId];
        return (
            limit.limitCommitment,
            limit.isActive,
            limit.validUntil,
            limit.allowedMerchantCategories,
            limit.restrictedCountries
        );
    }
}

How Encrypted Limits Work

The encryption here uses a commitment scheme, a cryptographic technique where you commit to a value without revealing it.

Creating a Commitment

When you create a spending limit, instead of storing the actual amount on-chain (which would be publicly visible), you store a hash commitment:

limitCommitment = keccak256(amount + merchantCategory + country + secret)

The actual limit amount remains off-chain or encrypted. Only someone with the correct values can prove they’re within the limit.

Validating Transactions

During a transaction, the card provides:

  • The transaction amount
  • The merchant category
  • The country
  • A proof (the secret used in the original commitment)

The contract recomputes the commitment and checks if it matches. If it does, the transaction is within the encrypted limit. The beauty is that observers can see the transaction was validated, but they cannot see what the actual spending limit is.

Merchant Categories and Geographic Restrictions

Beyond amount limits, the contract supports:

  • Merchant category codes: Restrict cards to specific types of merchants (e.g., only gas stations and groceries)
  • Geographic restrictions: Block transactions from certain countries (useful for fraud prevention)

These restrictions are stored openly because they don’t reveal sensitive financial information about the cardholder.
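Both restrictions boil down to simple list-membership tests. Here is an off-chain JavaScript mirror of isCategoryAllowed and isCountryRestricted (the on-chain version hashes both strings with keccak256 before comparing, since Solidity cannot compare strings directly):

```javascript
// An empty allow-list means "no category restriction", matching the contract.
function isCategoryAllowed(allowedCategories, category) {
  if (allowedCategories.length === 0) return true;
  return allowedCategories.includes(category);
}

// The block-list works the other way: membership means the country is rejected.
function isCountryRestricted(restrictedCountries, country) {
  return restrictedCountries.includes(country);
}
```

Note the asymmetry: categories use an allow-list (default allow when empty), while countries use a block-list (default allow unless listed).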

4. The Payment Processor Contract

This contract orchestrates the entire payment flow, bringing together all the previous components.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

interface ICardManager {
    function checkAndRecordSpending(address cardAddress, uint256 amount) external returns (bool);
    function cards(address) external view returns (
        address cardAddress,
        address linkedAccount,
        bool isActive,
        uint256 dailyLimit,
        uint256 monthlyLimit,
        uint256 perTransactionLimit,
        uint256 dailySpent,
        uint256 monthlySpent,
        uint256 lastResetDay,
        uint256 lastResetMonth,
        string memory cardholderName,
        bytes32 cardType
    );
}

interface ISpendingLimits {
    function validateTransaction(
        address cardAddress,
        uint256 limitId,
        uint256 amount,
        bytes32 merchantCategory,
        string memory country,
        bytes32 proof
    ) external view returns (bool);
}

interface ICreditFacility {
    // Minimal interface for the CreditFacility contract used below,
    // declared here so this file compiles on its own
    function useCredit(address account, uint256 amount) external returns (bool);
}

contract PaymentProcessor {

    struct Transaction {
        address cardAddress;
        address merchant;
        uint256 amount;
        bytes32 merchantCategory;
        string country;
        uint256 timestamp;
        TransactionStatus status;
        bytes32 transactionHash;
    }

    enum TransactionStatus {
        Pending,
        Approved,
        Declined,
        Settled,
        Refunded
    }

    ICreditFacility public creditFacility;
    ICardManager public cardManager;
    ISpendingLimits public spendingLimits;

    mapping(bytes32 => Transaction) public transactions;
    mapping(address => bytes32[]) public cardTransactions;
    mapping(address => bytes32[]) public merchantTransactions;

    uint256 public transactionCount;

    event TransactionInitiated(
        bytes32 indexed transactionHash,
        address indexed cardAddress,
        address indexed merchant,
        uint256 amount
    );
    event TransactionApproved(bytes32 indexed transactionHash);
    event TransactionDeclined(bytes32 indexed transactionHash, string reason);
    event TransactionSettled(bytes32 indexed transactionHash);
    event TransactionRefunded(bytes32 indexed transactionHash);

    constructor(
        address _creditFacility,
        address _cardManager,
        address _spendingLimits
    ) {
        creditFacility = ICreditFacility(_creditFacility);
        cardManager = ICardManager(_cardManager);
        spendingLimits = ISpendingLimits(_spendingLimits);
    }

    function initiateTransaction(
        address cardAddress,
        address merchant,
        uint256 amount,
        bytes32 merchantCategory,
        string memory country,
        uint256 limitId,
        bytes32 proof
    ) external returns (bytes32) {
        bytes32 txHash = keccak256(
            abi.encodePacked(
                cardAddress,
                merchant,
                amount,
                merchantCategory,
                country,
                block.timestamp,
                transactionCount++
            )
        );

        transactions[txHash] = Transaction({
            cardAddress: cardAddress,
            merchant: merchant,
            amount: amount,
            merchantCategory: merchantCategory,
            country: country,
            timestamp: block.timestamp,
            status: TransactionStatus.Pending,
            transactionHash: txHash
        });

        cardTransactions[cardAddress].push(txHash);
        merchantTransactions[merchant].push(txHash);

        emit TransactionInitiated(txHash, cardAddress, merchant, amount);

        bool approved = processTransaction(txHash, limitId, proof);

        if (approved) {
            emit TransactionApproved(txHash);
        }

        return txHash;
    }

    function processTransaction(
        bytes32 txHash,
        uint256 limitId,
        bytes32 proof
    ) internal returns (bool) {
        Transaction storage txn = transactions[txHash];

        (, address linkedAccount, bool isActive, , , , , , , , , ) = 
            cardManager.cards(txn.cardAddress);

        if (!isActive) {
            txn.status = TransactionStatus.Declined;
            emit TransactionDeclined(txHash, "Card not active");
            return false;
        }

        bool cardLimitCheck = cardManager.checkAndRecordSpending(
            txn.cardAddress,
            txn.amount
        );

        if (!cardLimitCheck) {
            txn.status = TransactionStatus.Declined;
            emit TransactionDeclined(txHash, "Card limit exceeded");
            return false;
        }

        bool encryptedLimitCheck = spendingLimits.validateTransaction(
            txn.cardAddress,
            limitId,
            txn.amount,
            txn.merchantCategory,
            txn.country,
            proof
        );

        if (!encryptedLimitCheck) {
            txn.status = TransactionStatus.Declined;
            emit TransactionDeclined(txHash, "Encrypted limit validation failed");
            return false;
        }

        bool creditUsed = creditFacility.useCredit(linkedAccount, txn.amount);

        if (!creditUsed) {
            txn.status = TransactionStatus.Declined;
            emit TransactionDeclined(txHash, "Insufficient credit");
            return false;
        }

        txn.status = TransactionStatus.Approved;
        return true;
    }

    function settleTransaction(bytes32 txHash) external {
        Transaction storage txn = transactions[txHash];
        require(txn.status == TransactionStatus.Approved, "Transaction not approved");

        txn.status = TransactionStatus.Settled;

        // In a real system, this would transfer funds to the merchant
        // payable(txn.merchant).transfer(txn.amount);

        emit TransactionSettled(txHash);
    }

    function refundTransaction(bytes32 txHash) external {
        Transaction storage txn = transactions[txHash];
        require(
            txn.status == TransactionStatus.Settled,
            "Transaction not settled"
        );

        (, address linkedAccount, , , , , , , , , , ) = 
            cardManager.cards(txn.cardAddress);

        // Return credit to account
        // creditFacility.repayCredit{value: txn.amount}(linkedAccount, txn.amount);

        txn.status = TransactionStatus.Refunded;
        emit TransactionRefunded(txHash);
    }

    function getTransaction(bytes32 txHash) external view returns (
        address cardAddress,
        address merchant,
        uint256 amount,
        bytes32 merchantCategory,
        string memory country,
        uint256 timestamp,
        TransactionStatus status
    ) {
        Transaction storage txn = transactions[txHash];
        return (
            txn.cardAddress,
            txn.merchant,
            txn.amount,
            txn.merchantCategory,
            txn.country,
            txn.timestamp,
            txn.status
        );
    }

    function getCardTransactions(address cardAddress) external view returns (bytes32[] memory) {
        return cardTransactions[cardAddress];
    }

    function getMerchantTransactions(address merchant) external view returns (bytes32[] memory) {
        return merchantTransactions[merchant];
    }
}

Payment Flow Explained

The Payment Processor orchestrates a complex multi-step validation process:

  1. Transaction Initiation: A transaction is created with all necessary details (card, merchant, amount, category, country)
  2. Card Validation: Check if the card is active and in good standing
  3. Card Limit Checks: Validate against daily, monthly, and per-transaction limits
  4. Encrypted Limit Validation: Verify the transaction against encrypted spending rules using the commitment proof
  5. Credit Availability: Ensure the linked account has sufficient credit
  6. Approval/Decline: If all checks pass, approve the transaction; otherwise, decline with a specific reason
  7. Settlement: After approval, the transaction is marked as settled and funds are transferred (in a production system)
  8. Refund Capability: Transactions can be refunded, returning credit to the account
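The decline logic in processTransaction is a short-circuiting pipeline: the first failed check determines the decline reason. A JavaScript sketch using hypothetical predicate callbacks for the four checks:

```javascript
// Decline-reason pipeline. The `checks` object holds hypothetical predicates,
// one per contract check; the reason strings mirror the contract's events.
function processTransaction(txn, checks) {
  if (!checks.cardActive(txn)) {
    return { status: "Declined", reason: "Card not active" };
  }
  if (!checks.withinCardLimits(txn)) {
    return { status: "Declined", reason: "Card limit exceeded" };
  }
  if (!checks.encryptedLimitOk(txn)) {
    return { status: "Declined", reason: "Encrypted limit validation failed" };
  }
  if (!checks.creditAvailable(txn)) {
    return { status: "Declined", reason: "Insufficient credit" };
  }
  return { status: "Approved" };
}
```

Ordering matters here: the cheap card-status check runs before the spending and credit checks, so an obviously invalid transaction fails fast.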

Transaction Status Lifecycle

Transactions move through distinct states:

  • Pending: Initial state when created
  • Approved: All validations passed
  • Declined: Failed one or more validations
  • Settled: Funds transferred to merchant
  • Refunded: Transaction reversed

This status tracking provides transparency and enables proper accounting.
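The lifecycle can be written down as an explicit transition table, which makes a handy reference when testing (illustrative JavaScript; the contract enforces the same moves through its require statements):

```javascript
// Allowed status transitions, as enforced by the contract's requires:
// Pending -> Approved/Declined, Approved -> Settled, Settled -> Refunded.
const TRANSITIONS = {
  Pending: ["Approved", "Declined"],
  Approved: ["Settled"],
  Settled: ["Refunded"],
  Declined: [],   // terminal
  Refunded: [],   // terminal
};

function canTransition(from, to) {
  return (TRANSITIONS[from] || []).includes(to);
}
```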

5. Multi-Signature Governance Contract

For high-value transactions or critical system changes, we need additional oversight beyond individual card limits.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract MultiSigGovernance {

    struct Proposal {
        uint256 proposalId;
        ProposalType proposalType;
        address targetContract;
        bytes callData;
        uint256 value;
        string description;
        uint256 createdAt;
        uint256 executionTime;
        bool executed;
        bool cancelled;
        mapping(address => bool) hasVoted;
        uint256 votesFor;
        uint256 votesAgainst;
    }

    enum ProposalType {
        CreditLimitIncrease,
        CardIssuance,
        SystemUpgrade,
        EmergencyAction,
        ParameterChange
    }

    address[] public governors;
    mapping(address => bool) public isGovernor;

    uint256 public quorumPercentage;
    uint256 public proposalCount;
    uint256 public votingPeriod;
    uint256 public timelockPeriod;

    mapping(uint256 => Proposal) public proposals;

    event ProposalCreated(
        uint256 indexed proposalId,
        ProposalType proposalType,
        address indexed proposer,
        string description
    );
    event VoteCast(
        uint256 indexed proposalId,
        address indexed voter,
        bool support
    );
    event ProposalExecuted(uint256 indexed proposalId);
    event ProposalCancelled(uint256 indexed proposalId);
    event GovernorAdded(address indexed governor);
    event GovernorRemoved(address indexed governor);

    modifier onlyGovernor() {
        require(isGovernor[msg.sender], "Not a governor");
        _;
    }

    constructor(
        address[] memory _governors,
        uint256 _quorumPercentage,
        uint256 _votingPeriod,
        uint256 _timelockPeriod
    ) {
        require(_governors.length > 0, "Must have at least one governor");
        require(_quorumPercentage > 0 && _quorumPercentage <= 100, "Invalid quorum");

        for (uint i = 0; i < _governors.length; i++) {
            governors.push(_governors[i]);
            isGovernor[_governors[i]] = true;
        }

        quorumPercentage = _quorumPercentage;
        votingPeriod = _votingPeriod;
        timelockPeriod = _timelockPeriod;
    }

    function createProposal(
        ProposalType proposalType,
        address targetContract,
        bytes memory callData,
        uint256 value,
        string memory description
    ) external onlyGovernor returns (uint256) {
        uint256 proposalId = proposalCount++;

        Proposal storage proposal = proposals[proposalId];
        proposal.proposalId = proposalId;
        proposal.proposalType = proposalType;
        proposal.targetContract = targetContract;
        proposal.callData = callData;
        proposal.value = value;
        proposal.description = description;
        proposal.createdAt = block.timestamp;
        proposal.executed = false;
        proposal.cancelled = false;

        emit ProposalCreated(proposalId, proposalType, msg.sender, description);
        return proposalId;
    }

    function vote(uint256 proposalId, bool support) external onlyGovernor {
        Proposal storage proposal = proposals[proposalId];

        require(!proposal.executed, "Proposal already executed");
        require(!proposal.cancelled, "Proposal cancelled");
        require(!proposal.hasVoted[msg.sender], "Already voted");
        require(
            block.timestamp <= proposal.createdAt + votingPeriod,
            "Voting period ended"
        );

        proposal.hasVoted[msg.sender] = true;

        if (support) {
            proposal.votesFor++;
        } else {
            proposal.votesAgainst++;
        }

        emit VoteCast(proposalId, msg.sender, support);

        if (hasReachedQuorum(proposalId)) {
            proposal.executionTime = block.timestamp + timelockPeriod;
        }
    }

    function executeProposal(uint256 proposalId) external onlyGovernor {
        Proposal storage proposal = proposals[proposalId];

        require(!proposal.executed, "Already executed");
        require(!proposal.cancelled, "Proposal cancelled");
        require(hasReachedQuorum(proposalId), "Quorum not reached");
        require(
            block.timestamp >= proposal.executionTime,
            "Timelock not expired"
        );
        require(proposal.executionTime > 0, "Execution time not set");

        proposal.executed = true;

        (bool success, ) = proposal.targetContract.call{value: proposal.value}(
            proposal.callData
        );
        require(success, "Execution failed");

        emit ProposalExecuted(proposalId);
    }

    function cancelProposal(uint256 proposalId) external onlyGovernor {
        Proposal storage proposal = proposals[proposalId];

        require(!proposal.executed, "Already executed");
        require(!proposal.cancelled, "Already cancelled");

        proposal.cancelled = true;
        emit ProposalCancelled(proposalId);
    }

    function hasReachedQuorum(uint256 proposalId) public view returns (bool) {
        Proposal storage proposal = proposals[proposalId];
        uint256 totalVotes = proposal.votesFor + proposal.votesAgainst;
        uint256 requiredVotes = (governors.length * quorumPercentage) / 100;

        return totalVotes >= requiredVotes && proposal.votesFor > proposal.votesAgainst;
    }

    function addGovernor(address newGovernor) external {
        // Only callable via an executed governance proposal (self-call),
        // so governor changes go through the governance process itself
        require(msg.sender == address(this), "Only via governance");
        require(!isGovernor[newGovernor], "Already a governor");

        governors.push(newGovernor);
        isGovernor[newGovernor] = true;

        emit GovernorAdded(newGovernor);
    }

    function removeGovernor(address governor) external {
        // Only callable via an executed governance proposal (self-call)
        require(msg.sender == address(this), "Only via governance");
        require(isGovernor[governor], "Not a governor");
        require(governors.length > 1, "Cannot remove last governor");

        isGovernor[governor] = false;

        for (uint i = 0; i < governors.length; i++) {
            if (governors[i] == governor) {
                governors[i] = governors[governors.length - 1];
                governors.pop();
                break;
            }
        }

        emit GovernorRemoved(governor);
    }

    function getProposalDetails(uint256 proposalId) external view returns (
        ProposalType proposalType,
        address targetContract,
        string memory description,
        uint256 votesFor,
        uint256 votesAgainst,
        bool executed,
        bool cancelled
    ) {
        Proposal storage proposal = proposals[proposalId];
        return (
            proposal.proposalType,
            proposal.targetContract,
            proposal.description,
            proposal.votesFor,
            proposal.votesAgainst,
            proposal.executed,
            proposal.cancelled
        );
    }

    function getGovernors() external view returns (address[] memory) {
        return governors;
    }
}

Governance Features

This contract implements a sophisticated governance system with several key protections:

Proposal System

Any governor can create a proposal for:

  • Increasing credit limits beyond normal thresholds
  • Issuing special cards with elevated privileges
  • Upgrading system contracts
  • Emergency actions (like freezing all cards)
  • Changing system parameters

Voting Mechanism

Governors vote on proposals during a voting period. The system supports:

  • Quorum requirements: A minimum percentage of governors must participate
  • Simple majority: More votes for than against
  • Timelock: Even after approval, there’s a delay before execution

The timelock is critical: it gives governors time to react if a malicious proposal passes, cancelling it before execution if necessary.
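One subtlety worth noting: hasReachedQuorum uses integer division, which rounds down. An off-chain JavaScript mirror makes this easy to check; with 5 governors and a 60% quorum, floor(5 * 60 / 100) = 3 votes are required.

```javascript
// Mirror of hasReachedQuorum. Math.floor replicates Solidity's
// truncating integer division in (governors.length * quorumPercentage) / 100.
function hasReachedQuorum(votesFor, votesAgainst, governorCount, quorumPercentage) {
  const totalVotes = votesFor + votesAgainst;
  const requiredVotes = Math.floor((governorCount * quorumPercentage) / 100);
  return totalVotes >= requiredVotes && votesFor > votesAgainst;
}
```

Because quorum counts both for and against votes, a proposal can reach quorum yet still fail on the simple-majority condition.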

Governor Management

Governors can be added or removed through the governance process itself. This creates a self-governing system that can adapt over time without external intervention.

Real-World Usage Example

Let’s walk through a complete transaction flow using all these contracts:

Step 1: Setup

// Deploy contracts
const creditFacility = await CreditFacility.deploy();
const cardManager = await CardManager.deploy(creditFacility.address);
const spendingLimits = await SpendingLimits.deploy();
const paymentProcessor = await PaymentProcessor.deploy(
    creditFacility.address,
    cardManager.address,
    spendingLimits.address
);

// Create credit account with 3-of-5 multisig
await creditFacility.createCreditAccount(
    accountAddress,
    ethers.utils.parseEther("10000"), // $10,000 credit limit
    [signer1, signer2, signer3, signer4, signer5],
    3 // requires 3 signatures
);

Step 2: Issue a Card

// Issue card with spending limits
await cardManager.issueCard(
    cardAddress,
    accountAddress,
    "John Doe",
    ethers.utils.parseEther("500"),  // $500 daily limit
    ethers.utils.parseEther("5000"), // $5000 monthly limit
    ethers.utils.parseEther("200"),  // $200 per transaction
    ethers.utils.formatBytes32String("STANDARD")
);

Step 3: Create Encrypted Spending Limit

// Create commitment for encrypted limit
const secretAmount = ethers.utils.parseEther("100");
const category = ethers.utils.formatBytes32String("GROCERY");
const country = "US";
const secret = ethers.utils.randomBytes(32);

const commitment = ethers.utils.keccak256(
    ethers.utils.defaultAbiCoder.encode(
        ["uint256", "bytes32", "string", "bytes32"],
        [secretAmount, category, country, secret]
    )
);

// Store encrypted limit on-chain
await spendingLimits.createEncryptedLimit(
    cardAddress,
    commitment,
    category,
    Math.floor(Date.now() / 1000) + 365 * 24 * 60 * 60, // valid for 1 year
    [ethers.utils.formatBytes32String("GROCERY")],
    ["RU", "KP"] // restricted countries
);

Step 4: Process a Transaction

// Initiate payment at grocery store
const txHash = await paymentProcessor.initiateTransaction(
    cardAddress,
    merchantAddress,
    ethers.utils.parseEther("75"), // $75 purchase
    ethers.utils.formatBytes32String("GROCERY"),
    "US",
    0, // limitId
    secret // proof for encrypted limit
);

// Transaction automatically validated against:
// 1. Card active status
// 2. Daily/monthly/per-transaction limits
// 3. Encrypted spending rules
// 4. Available credit

Step 5: Settlement

// After validation, settle the transaction
await paymentProcessor.settleTransaction(txHash);

// Funds are now transferred to merchant
// Credit account balance is updated

Advanced Features and Optimizations

Gas Optimization Techniques

These contracts can be further optimized for production:

Batch Operations: Instead of processing transactions one-by-one, implement batch processing to reduce gas costs.

function batchProcessTransactions(
    bytes32[] memory txHashes
) external {
    for (uint i = 0; i < txHashes.length; i++) {
        settleTransaction(txHashes[i]);
    }
}

Storage Packing: Use smaller data types where possible and pack related variables.

struct PackedCard {
    address cardAddress;        // 20 bytes (slot 1, shared with isActive)
    bool isActive;              // 1 byte  (slot 1: 21 of 32 bytes used)
    address linkedAccount;      // 20 bytes (slot 2)
    uint128 dailyLimit;         // 16 bytes (slot 3, shared with monthlyLimit)
    uint128 monthlyLimit;       // 16 bytes (slot 3: exactly 32 bytes)
    // Total: 3 storage slots. Declaration order matters: Solidity only
    // packs adjacent fields that fit together within one 32-byte slot.
}

Event Indexing: Properly index events for efficient off-chain querying.

Security Considerations

Reentrancy Protection

Always use the Checks-Effects-Interactions pattern:

function useCredit(address account, uint256 amount) external {
    // Checks
    require(accounts[account].availableCredit >= amount);

    // Effects
    accounts[account].availableCredit -= amount;
    accounts[account].outstandingBalance += amount;

    // Interactions: external calls (e.g. token transfers) go last.
    // Emitting an event is not an external call and is safe at any point.
    emit CreditUsed(account, amount);
}

Access Control

Implement role-based access control using OpenZeppelin:

import "@openzeppelin/contracts/access/AccessControl.sol";

contract SecuredCardManager is AccessControl {
    bytes32 public constant ISSUER_ROLE = keccak256("ISSUER_ROLE");
    bytes32 public constant ADMIN_ROLE = keccak256("ADMIN_ROLE");

    function issueCard(...) external onlyRole(ISSUER_ROLE) {
        // Card issuance logic
    }
}

Oracle Integration

For real-world data (exchange rates, merchant verification), integrate Chainlink oracles:

import "@chainlink/contracts/src/v0.8/interfaces/AggregatorV3Interface.sol";

contract PriceOracle {
    AggregatorV3Interface internal priceFeed;

    function getLatestPrice() public view returns (int) {
        (, int price, , ,) = priceFeed.latestRoundData();
        return price;
    }
}

Privacy Enhancements

For production systems, implement zero-knowledge proofs using libraries like ZoKrates or Circom:

// Pseudocode for ZK proof
circuit SpendingLimit {
    private input actualLimit;
    private input transactionAmount;
    public input commitment;

    assert(hash(actualLimit) == commitment);
    assert(transactionAmount <= actualLimit);
}

This allows truly private spending limits where even the contract cannot see the actual values.

Integration with Off-Chain Systems

Real-world credit card systems require integration with:

Payment Networks

Connect to Visa/Mastercard networks through payment gateway APIs:

// Off-chain service
async function processCardPayment(transaction) {
    // Validate with smart contract
    const isValid = await paymentProcessor.initiateTransaction(...);

    if (isValid) {
        // Submit to payment network
        await visaGateway.authorizePayment({
            cardNumber: encryptedCard,
            amount: transaction.amount,
            merchant: transaction.merchant
        });
    }
}

KYC/AML Compliance

Implement identity verification before card issuance:

async function issueCardWithKYC(user) {
    // Verify identity off-chain
    const kycResult = await kycProvider.verify(user);

    if (kycResult.approved) {
        // Issue card on-chain
        await cardManager.issueCard(
            user.cardAddress,
            user.accountAddress,
            user.name,
            ...
        );
    }
}

Fraud Detection

Use machine learning models to detect suspicious patterns:

async function monitorTransactions() {
    const transactions = await getRecentTransactions();

    for (const tx of transactions) {
        const riskScore = await fraudModel.analyze(tx);

        if (riskScore > THRESHOLD) {
            // Automatically freeze card
            await cardManager.deactivateCard(tx.cardAddress);

            // Alert governance
            await notifyGovernors(tx);
        }
    }
}

Deployment Strategy

Testnet Deployment

// deployment script
async function main() {
    const [deployer] = await ethers.getSigners();

    console.log("Deploying contracts with account:", deployer.address);

    // Deploy core contracts
    const CreditFacility = await ethers.getContractFactory("CreditFacility");
    const creditFacility = await CreditFacility.deploy();
    await creditFacility.deployed();
    console.log("CreditFacility deployed to:", creditFacility.address);

    const CardManager = await ethers.getContractFactory("CardManager");
    const cardManager = await CardManager.deploy(creditFacility.address);
    await cardManager.deployed();
    console.log("CardManager deployed to:", cardManager.address);

    // Deploy remaining contracts...

    // Verify contracts on Etherscan
    await verify(creditFacility.address, []);
    await verify(cardManager.address, [creditFacility.address]);
}

async function verify(contractAddress, args) {
    await hre.run("verify:verify", {
        address: contractAddress,
        constructorArguments: args,
    });
}

main();

Mainnet Considerations

Before mainnet deployment:

  1. Complete Security Audit: Engage firms like Trail of Bits, ConsenSys Diligence, or OpenZeppelin
  2. Bug Bounty Program: Incentivize security researchers to find vulnerabilities
  3. Gradual Rollout: Start with limited users and transaction volumes
  4. Emergency Pause: Implement circuit breakers for crisis situations
  5. Upgrade Path: Use proxy patterns for upgradeable contracts
  6. Insurance: Consider DeFi insurance protocols like Nexus Mutual

Future Enhancements

Layer 2 Scaling

Deploy on L2 solutions for lower costs:

// Optimism/Arbitrum deployment
const l2Provider = new ethers.providers.JsonRpcProvider(L2_RPC_URL);
const l2Deployer = new ethers.Wallet(PRIVATE_KEY, l2Provider);

// Same deployment script, different network

Cross-Chain Interoperability

Use bridges to enable cross-chain transactions:

import "@chainlink/contracts/src/v0.8/interfaces/CCIPRouter.sol";

contract CrossChainPayment {
    function sendCrossChainPayment(
        uint64 destinationChain,
        address receiver,
        uint256 amount
    ) external {
        // Use Chainlink CCIP for cross-chain messaging
    }
}

Account Abstraction Integration

Implement ERC-4337 for better user experience:

contract CardWallet is BaseAccount {
    function validateUserOp(
        UserOperation calldata userOp,
        bytes32 userOpHash,
        uint256 missingAccountFunds
    ) external override returns (uint256) {
        // Validate card transaction as user operation
    }
}

Conclusion

We’ve built a complete decentralized credit card system with:

  • Multi-signature security for master accounts
  • Flexible card management with individual spending limits
  • Encrypted spending rules for privacy
  • Comprehensive payment processing with full validation
  • Governance system for high-value operations

This architecture demonstrates how blockchain technology can reimagine traditional financial infrastructure. The system is transparent yet private, decentralized yet secure, and programmable in ways traditional systems cannot match.

The smart contracts provided here are educational starting points. Production systems would require extensive hardening, optimization, regulatory compliance, and integration with existing financial infrastructure.

Key Takeaways

  1. Separation of Concerns: Each contract handles a specific domain, making the system modular and maintainable
  2. Security Layers: Multiple validation checkpoints ensure transactions are legitimate before processing
  3. Privacy Techniques: Commitment schemes enable encrypted rules while maintaining blockchain transparency
  4. Governance: Multi-signature and voting mechanisms distribute control and prevent single points of failure
  5. Extensibility: The modular design allows adding features without rewriting core logic

What’s Next?

In future posts, I’ll explore:

  • Advanced zero-knowledge proof implementations for complete privacy
  • Integration with hardware wallets and biometric authentication
  • Compliance frameworks for regulated financial products
  • Performance optimization and Layer 2 scaling strategies
  • Real-world case studies of blockchain payment systems

Have questions or suggestions? Drop a comment below or reach out on Twitter at @ithora. If you’re building something similar, I’d love to hear about your approach!

Disclaimer: This code is for educational purposes only. It has not been audited and should not be used in production without comprehensive security review and testing. Financial systems involve real money and require professional security expertise.

Building a Decentralized Credit Card System with Multi-Signature Smart Contracts

This post outlines a proof-of-concept for a blockchain-based credit card system integrating multi-signature cryptography and smart contracts to manage spending. It emphasizes creating a secure, flexible architecture while addressing challenges like scalability and regulatory compliance. The proposed system aims to enhance transparency, security, and user control in financial transactions.

The intersection of blockchain technology and traditional financial services opens fascinating possibilities for reimagining how we handle payments and credit. In this post, I’ll explore a proof-of-concept architecture for a public blockchain-based credit card system that uses multi-signature cryptography and smart contracts to manage spending limits and access controls.

The Core Architecture

The fundamental challenge is creating a system that maintains the security and flexibility of traditional credit cards while leveraging blockchain’s transparency and programmability. Here’s how we can structure this:

Multi-Signature Hierarchical Key System

The system relies on a hierarchical key structure with different access levels:

Master Private Key: This serves as the root authority for the bank account or credit facility. Think of this as the bank’s vault key—it has ultimate control over the credit line and can set global parameters. This key would be held by the issuing institution or distributed among multiple parties using threshold signatures for enhanced security.

Card Private Keys: Each physical or virtual card gets its own private key. These keys are subordinate to the master key but have specific permissions and spending limits defined by smart contracts. Users interact with the payment system through these card keys, which can be revoked or modified without affecting other cards linked to the same account.

Smart Contract-Enforced Spending Limits

This is where blockchain technology truly shines. Rather than relying solely on centralized databases, spending limits and transaction rules are encoded into smart contracts:

Smart Contract Logic:
- Maximum transaction amount
- Daily/monthly spending caps
- Merchant category restrictions
- Geographic limitations
- Multi-signature requirements for large purchases

The smart contract acts as an automated gatekeeper, validating every transaction against these encrypted rules before authorizing payment.
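To make that gatekeeping concrete, here is a plain-Ruby sketch of evaluating a transaction against such a rule set. The field names and limits are hypothetical, and on-chain these checks would run inside the contract against encrypted rules; this only illustrates the shape of the logic:

```ruby
# Hypothetical rule set for a single card (amounts in cents).
RULES = {
  max_transaction: 2_000,
  daily_cap: 5_000,
  allowed_categories: %w[grocery fuel online],
  allowed_countries: %w[DE NL],
}

# Every check must pass before the payment is authorized.
def within_rules?(tx, spent_today:)
  tx[:amount] <= RULES[:max_transaction] &&
    spent_today + tx[:amount] <= RULES[:daily_cap] &&
    RULES[:allowed_categories].include?(tx[:category]) &&
    RULES[:allowed_countries].include?(tx[:country])
end

within_rules?({ amount: 100, category: "grocery", country: "DE" }, spent_today: 0)
# => true
within_rules?({ amount: 3_000, category: "grocery", country: "DE" }, spent_today: 0)
# => false (exceeds max transaction)
```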

Proof of Concept Implementation

Public Network Considerations

Building this on a public blockchain offers several advantages:

Transparency: All transactions (though not necessarily personal details) can be audited publicly, reducing fraud and increasing trust.

Decentralization: No single point of failure exists. The payment system continues functioning even if individual nodes go offline.

Programmability: Smart contracts enable complex financial logic that adapts automatically without manual intervention.

However, we need to address privacy concerns. This is where encryption becomes critical.

Encrypted Spending Limits

Here’s the clever part: spending limits and transaction rules don’t need to be publicly visible. Using techniques like:

  • Zero-knowledge proofs: Prove a transaction is within limits without revealing the actual limit
  • Homomorphic encryption: Perform computations on encrypted data without decrypting it
  • Secure multi-party computation: Multiple parties can jointly compute functions while keeping inputs private

The smart contract can validate transactions against encrypted rules, maintaining privacy while ensuring compliance.
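The simplest of these ideas, a hash commitment, fits in a few lines of plain Ruby. This is a minimal sketch of the commitment concept, not a zero-knowledge construction: only the hash would be published on-chain, and the issuer can later prove the limit was fixed in advance by revealing the limit and nonce.

```ruby
require "digest"
require "securerandom"

# Commit to a secret spending limit: only this hash would go on-chain.
limit = 50_000                       # limit in cents (hypothetical)
nonce = SecureRandom.hex(16)         # blinding factor, kept off-chain
commitment = Digest::SHA256.hexdigest("#{limit}:#{nonce}")

# Opening the commitment proves the limit was fixed when the hash was published.
def valid_opening?(commitment, limit, nonce)
  Digest::SHA256.hexdigest("#{limit}:#{nonce}") == commitment
end

valid_opening?(commitment, 50_000, nonce)   # => true
valid_opening?(commitment, 99_999, nonce)   # => false
```

Without the nonce the commitment would be vulnerable to a brute-force dictionary of plausible limits, which is why the blinding factor matters even for such a simple scheme.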

Access Control Flow

  1. Card Initialization: Master key creates a new card keypair and deploys a smart contract with encrypted spending parameters
  2. Transaction Request: User initiates payment using card private key
  3. Smart Contract Validation: Contract verifies signature, checks encrypted limits, validates merchant data
  4. Multi-Sig Approval (if required): For transactions exceeding certain thresholds, multiple signatures from master key holders may be required
  5. Settlement: Upon approval, funds transfer from the credit facility to the merchant
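The ordering of steps 3 through 5 can be sketched as follows. This is plain Ruby with hypothetical threshold values and status names; the real logic would live in the smart contract:

```ruby
APPROVAL_THRESHOLD = 1_000  # amounts above this need multi-sig (hypothetical)

def authorize(amount:, signature_valid:, within_limits:, approvals: 0)
  return :rejected unless signature_valid   # step 3: verify card signature
  return :rejected unless within_limits     # step 3: check encrypted limits
  if amount > APPROVAL_THRESHOLD && approvals < 2
    return :pending_multisig                # step 4: escalate large payments
  end
  :settled                                  # step 5: release funds to merchant
end

authorize(amount: 50,    signature_valid: true,  within_limits: true)  # => :settled
authorize(amount: 5_000, signature_valid: true,  within_limits: true)  # => :pending_multisig
authorize(amount: 50,    signature_valid: false, within_limits: true)  # => :rejected
```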

Transfer Services Integration

The card private key specifically interfaces with transfer services—the infrastructure that moves value between accounts. This separation of concerns means:

  • The master key controls account-level operations (credit limits, account status)
  • Card keys handle day-to-day transactions (purchases, transfers)
  • Smart contracts mediate between these layers, enforcing rules automatically

Security Considerations

Key Management: Hardware security modules (HSMs) or secure enclaves should protect private keys. For users, integration with hardware wallets or biometric devices adds another security layer.

Recovery Mechanisms: Smart contracts can include social recovery or time-locked recovery procedures if a card key is lost.

Fraud Detection: While the system is decentralized, AI-powered fraud detection can still monitor transaction patterns and flag suspicious activity for additional verification.

Regulatory Compliance: The system must integrate with KYC/AML procedures, potentially using privacy-preserving identity verification methods.

Advantages of This Approach

  • Programmable Money: Spending rules adapt automatically based on predefined conditions
  • Instant Settlement: Blockchain transactions can settle much faster than traditional card networks
  • Lower Fees: Disintermediation reduces the number of parties taking a cut
  • Enhanced Security: Multi-signature requirements and smart contract validation add layers of protection
  • User Control: Card holders have more transparency and control over their financial instruments

Challenges to Address

  • Scalability: Public blockchains must scale to the thousands of transactions per second that traditional card networks already process
  • Privacy: Balancing transparency with user privacy requires sophisticated cryptographic techniques
  • Regulatory Uncertainty: Financial regulations vary globally and are still evolving for blockchain-based systems
  • User Experience: The system must be as simple to use as traditional cards despite the underlying complexity

Conclusion

This proof-of-concept demonstrates how public blockchain networks, multi-signature cryptography, and smart contracts can work together to create a more secure, transparent, and flexible credit card system. By encrypting spending limits and using hierarchical key structures, we can maintain privacy while leveraging blockchain’s strengths.

The future of payments likely involves hybrid approaches—combining the best aspects of traditional financial infrastructure with blockchain innovation. As the technology matures and regulations clarify, we’ll see more sophisticated implementations of these concepts moving from proof-of-concept to production systems.

What aspects of blockchain-based payment systems interest you most? Are there specific technical challenges you’d like me to explore in future posts?


This post explores theoretical architecture for educational purposes. Actual implementation would require extensive security audits, regulatory approval, and collaboration with financial institutions.

Ruby 5.0: What If Ruby Had First-Class Types?

The article envisions a reimagined Ruby with optional, inline type annotations called TypedRuby, addressing limitations of current solutions like Sorbet and RBS. It proposes a syntax that integrates seamlessly with Ruby’s philosophy, emphasizing readability and gradual typing while considering generics and union types. TypedRuby represents a potential evolution in Ruby’s design.

After imagining a typed CoffeeScript, I realized we need to go deeper. CoffeeScript was inspired by Ruby, but what about Ruby itself? Ruby has always been beautifully expressive, but it’s also been dynamically typed from day one. And while Sorbet and RBS have tried to add types, they feel bolted on. Awkward. Not quite Ruby.

What if Ruby had been designed with types from the beginning? Not as an afterthought, not as a separate file you maintain, but as a natural, optional part of the language itself? Let’s explore what that could look like.

The Problem with Sorbet and RBS

Before we reimagine Ruby with types, let’s acknowledge why the current solutions haven’t caught on widely.

Sorbet requires you to add # typed: true comments and use a separate type checker. Types look like this:

# typed: true
extend T::Sig

sig { params(name: String, age: Integer).returns(String) }
def greet(name, age)
  "Hello #{name}, you are #{age}"
end

RBS requires separate .rbs files with type signatures:

# user.rbs
class User
  attr_reader name: String
  attr_reader age: Integer
  
  def initialize: (name: String, age: Integer) -> void
  def greet: () -> String
end

Both solutions have the same fundamental problem: they don’t feel like Ruby. Sorbet’s sig blocks are verbose and repetitive. RBS splits your code across multiple files, breaking the single-file mental model that makes Ruby so pleasant.

What we need is something that feels native. Something Matz might have designed if static typing had been a priority in 1995.

Core Design Principles

Let’s establish what TypedRuby should be:

  1. Types are optional everywhere. You can gradually type your codebase.
  2. Types are inline. No separate files, no sig blocks.
  3. Types feel like Ruby. Natural syntax that matches Ruby’s philosophy.
  4. Duck typing coexists with static typing. You choose when to be strict.
  5. Generic types are first-class. Collections, custom classes, everything.
  6. The syntax is minimal. Ruby is beautiful; types shouldn’t ruin that.

Basic Type Annotations

In TypeScript, you use colons. In Sorbet, you use sig blocks. TypedRuby could use a more natural Ruby approach with the :: operator we already know:

# Current Ruby
name = "Ivan"
age = 30

# TypedRuby with inline types
name :: String = "Ivan"
age :: Integer = 30

# Or with type inference
name = "Ivan"  # inferred as String
age = 30       # inferred as Integer

The :: operator already means “scope resolution” in Ruby, but in this context (before assignment), it means “has type”. It’s familiar to Ruby developers and reads naturally.

Method Signatures

Current Sorbet approach:

extend T::Sig

sig { params(name: String, age: T.nilable(Integer)).returns(String) }
def greet(name, age = nil)
  age ? "Hello #{name}, #{age}" : "Hello #{name}"
end

TypedRuby approach:

def greet(name :: String, age :: Integer? = nil) :: String
  age ? "Hello #{name}, #{age}" : "Hello #{name}"
end

Or with Ruby 3’s endless method syntax:

def greet(name :: String, age :: Integer? = nil) :: String =
  age ? "Hello #{name}, #{age}" : "Hello #{name}"

Much cleaner. The types are right there with the parameters, and the return type is at the end where it reads naturally: “define greet with these parameters, returning a String.”

Classes and Attributes

Current approach with Sorbet:

class User
  extend T::Sig
  
  sig { returns(String) }
  attr_reader :name
  
  sig { returns(Integer) }
  attr_reader :age
  
  sig { params(name: String, age: Integer).void }
  def initialize(name, age)
    @name = name
    @age = age
  end
end

TypedRuby approach:

class User
  attr_reader of String, :name
  attr_reader of Integer, :age
  
  def initialize(@name :: String, @age :: Integer)
  end
  
  def birthday :: void
    @age += 1
  end
  
  def greet :: String
    "I'm #{@name}, #{@age} years old"
  end
end

Even better, we could introduce parameter properties like TypeScript:

class User
  def initialize(@name :: String, @age :: Integer, @email :: String)
    # @name, @age, and @email are automatically instance variables
  end
end

Generics: The Ruby Way

This is where it gets interesting. Ruby already has a beautiful way of working with collections. TypedRuby needs to extend that naturally.

TypeScript uses angle brackets:

class Container<T> {
  private value: T;
  constructor(value: T) { this.value = value; }
}

Sorbet uses square brackets:

class Container
  extend T::Generic
  T = type_member
  
  sig { params(value: T).void }
  def initialize(value)
    @value = value
  end
end

TypedRuby could use a more natural syntax with of:

class Container of T
  def initialize(@value :: T)
  end
  
  def get :: T
    @value
  end
  
  def map of U, &block :: (T) -> U :: Container of U
    Container.new(yield @value)
  end
end

# Usage
container = (Container of String).new("hello")
lengths = container.map { |s| s.length }  # Container of Integer

For multiple type parameters:

class Pair of K, V
  def initialize(@key :: K, @value :: V)
  end
  
  def map_value of U, &block :: (V) -> U :: Pair of K, U
    Pair.new(@key, yield @value)
  end
end

Generic Methods

Methods can be generic too:

def identity of T, value :: T :: T
  value
end

def find_first of T, items :: Array of T, &predicate :: (T) -> Boolean :: T?
  items.find(&predicate)
end

# Usage
result = find_first([1, 2, 3, 4]) { |n| n > 2 }  # Integer?

Array and Hash Types

Ruby’s arrays and hashes need type support:

# Arrays
numbers :: Array of Integer = [1, 2, 3, 4, 5]
names :: Array of String = ["Alice", "Bob", "Charlie"]

# Or using shorthand
numbers :: [Integer] = [1, 2, 3, 4, 5]
names :: [String] = ["Alice", "Bob", "Charlie"]

# Hashes
user_ages :: Hash of String, Integer = {
  "Alice" => 30,
  "Bob" => 25
}

# Or using shorthand
user_ages :: {String => Integer} = {
  "Alice" => 30,
  "Bob" => 25
}

# Symbol keys (very common in Ruby)
config :: {Symbol => String} = {
  host: "localhost",
  port: "3000"
}

Union Types

Ruby’s dynamic nature often uses union types implicitly. Let’s make it explicit:

# TypeScript: string | number
value :: String | Integer = "hello"
value = 42  # OK

# Method with union return type
def find_user(id :: Integer) :: User | nil
  User.find_by(id: id)
end

# Multiple unions
status :: "pending" | "active" | "completed" = "pending"

Nullable Types

Ruby uses nil everywhere. TypedRuby needs to handle this elegantly:

# The ? suffix means "or nil"
name :: String? = nil
name = "Ivan"  # OK

# Methods that might return nil
def find_user(id :: Integer) :: User?
  User.find_by(id: id)
end

# Safe navigation works with types
user :: User? = find_user(123)
email = user&.email  # String? inferred

Interfaces and Modules

Ruby uses modules for interfaces. TypedRuby could extend this:

interface Comparable of T
  def <=>(other :: T) :: Integer
end

interface Enumerable of T
  def each(&block :: (T) -> void) :: void
end

# Implementation
class User
  include Comparable of User
  
  attr_reader :name :: String
  
  def initialize(@name :: String)
  end
  
  def <=>(other :: User) :: Integer
    name <=> other.name
  end
end

Type Aliases

Creating reusable type definitions:

type UserId = Integer
type Email = String
type UserStatus = "active" | "inactive" | "banned"

type Result of T = 
  { success: true, value: T } |
  { success: false, error: String }

def create_user(name :: String) :: Result of User
  user = User.create(name: name)
  
  if user.persisted?
    { success: true, value: user }
  else
    { success: false, error: user.errors.full_messages.join(", ") }
  end
end

Practical Example: A Repository Pattern

Let’s build something real. Here’s a generic repository in TypedRuby:

interface Repository of T
  def find(id :: Integer) :: T?
  def all :: [T]
  def create(attributes :: Hash) :: T
  def update(id :: Integer, attributes :: Hash) :: T?
  def delete(id :: Integer) :: Boolean
end

class ActiveRecordRepository of T implements Repository of T
  def initialize(@model_class :: Class)
  end
  
  def find(id :: Integer) :: T?
    @model_class.find_by(id: id)
  end
  
  def all :: [T]
    @model_class.all.to_a
  end
  
  def create(attributes :: Hash) :: T
    @model_class.create!(attributes)
  end
  
  def update(id :: Integer, attributes :: Hash) :: T?
    record = find(id)
    return nil unless record
    
    record.update!(attributes)
    record
  end
  
  def delete(id :: Integer) :: Boolean
    record = find(id)
    return false unless record
    
    record.destroy!
    true
  end
end

# Usage
user_repo = (ActiveRecordRepository of User).new(User)
users :: [User] = user_repo.all
user :: User? = user_repo.find(123)

Blocks and Procs with Types

Blocks are fundamental to Ruby. They need proper type support:

# Block parameter types
def map of T, U, items :: [T], &block :: (T) -> U :: [U]
  items.map(&block)
end

# Proc types
callback :: Proc of (String) -> void = ->(msg) { puts msg }
transformer :: Proc of (Integer) -> String = ->(n) { n.to_s }

# Lambda types
double :: Lambda of (Integer) -> Integer = ->(x) { x * 2 }

# Method that accepts a block with types
def with_timing of T, &block :: () -> T :: T
  start_time = Time.now
  result = yield
  duration = Time.now - start_time
  
  puts "Took #{duration} seconds"
  result
end

# Usage
result :: String = with_timing { expensive_operation() }

Rails Integration

Ruby is often Rails. TypedRuby needs to work beautifully with Rails. Here’s where we need to think carefully about syntax. For method calls that take parameters, we can use a generic-style syntax that feels natural.

Generic-style method calls for associations:

class User < ApplicationRecord
  # Using 'of' with method calls (like generic instantiation)
  has_many of Post, :posts
  belongs_to of Company, :company
  has_one of Profile?, :profile
  
  # Or postfix style (reads more naturally)
  has_many :posts of Post
  belongs_to :company of Company
  has_one :profile of Profile?
  
  # For validations, types on the attribute names
  validates :email of String, presence: true, uniqueness: true
  validates :age of Integer, numericality: { greater_than: 0 }
  
  # Scopes with return types
  scope :active of Relation[User], -> { where(status: "active") }
  scope :by_name of Relation[User], ->(name :: String) {
    where("name LIKE ?", "%#{name}%")
  }
  
  # Typed callbacks still use :: for return types
  before_save :normalize_email
  
  def normalize_email :: void
    self.email = email.downcase.strip
  end
  
  # Typed instance methods
  def full_name :: String
    "#{first_name} #{last_name}"
  end
  
  def posts_count :: Integer
    posts.count
  end
end

Alternative: Square bracket syntax (like actual generics):

class User < ApplicationRecord
  # Using square brackets like generic type parameters
  has_many[Post] :posts
  belongs_to[Company] :company
  has_one[Profile?] :profile
  
  # With additional options
  has_many[Post] :posts, dependent: :destroy
  has_many[Comment] :comments, through: :posts
  
  # Validations
  validates[String] :email, presence: true, uniqueness: true
  validates[Integer] :age, numericality: { greater_than: 0 }
  
  # Scopes
  scope[Relation[User]] :active, -> { where(status: "active") }
  scope[Relation[User]] :by_name, ->(name :: String) {
    where("name LIKE ?", "%#{name}%")
  }
end

Comparison of syntaxes:

# Option 1: Postfix 'of' (most Ruby-like)
has_many :posts of Post
validates :email of String, presence: true

# Option 2: Prefix 'of' (generic-like)
has_many of Post, :posts
validates of String, :email, presence: true

# Option 3: Square brackets (actual generics)
has_many[Post] :posts
validates[String] :email, presence: true

# Option 4: 'as:' keyword (traditional keyword argument)
has_many :posts, as: [Post]
validates :email, as: String, presence: true

# Option 5: '<>' Angle brackets (C++/Java-style generics)
has_many<[Post]> :posts
validates<String> :email, presence: true

I personally prefer Option 1 (postfix ‘of’) because:

  • It reads naturally in English: “has many posts of type Post”
  • The symbol comes first (Ruby convention)
  • It’s unambiguous and parser-friendly
  • It feels like a natural Ruby extension

Full Rails example with postfix ‘of’:

class User < ApplicationRecord
  has_many :posts of Post, dependent: :destroy
  has_many :comments of Comment, through: :posts
  belongs_to :company of Company
  has_one :profile of Profile?
  
  validates :email of String, presence: true, uniqueness: true
  validates :age of Integer, numericality: { greater_than: 0 }
  validates :status of "active" | "inactive" | "banned", inclusion: { in: %w[active inactive banned] }
  
  scope :active of Relation[User], -> { where(status: "active") }
  scope :by_company of Relation[User], ->(company_id :: Integer) {
    where(company_id: company_id)
  }
  
  before_save :normalize_email
  after_create :send_welcome_email
  
  def normalize_email :: void
    self.email = email.downcase.strip
  end
  
  def full_name :: String
    "#{first_name} #{last_name}"
  end
  
  def recent_posts(limit :: Integer = 10) :: [Post]
    posts.order(created_at: :desc).limit(limit).to_a
  end
end

class PostsController < ApplicationController
  def index :: void
    @posts :: [Post] = Post.includes(:user).order(created_at: :desc)
  end
  
  def show :: void
    @post :: Post = Post.find(params[:id])
  end
  
  def create :: void
    @post :: Post = Post.new(post_params)
    
    if @post.save
      redirect_to @post, notice: "Post created"
    else
      render :new, status: :unprocessable_entity
    end
  end
  
  private
  
  def post_params :: Hash
    params.require(:post).permit(:title, :body, :user_id)
  end
end

How it works under the hood:

The of keyword in method calls would be syntactic sugar that the parser recognizes:

# What you write:
has_many :posts of Post

# What the parser sees:
has_many(:posts, __type__: Post)

# Rails can then use this:
def has_many(name, **options)
  type = options.delete(:__type__)
  
  # Define the association
  define_method(name) do
    # ... normal association logic
  end
  
  # Store type information for runtime validation/documentation
  if type
    association_types[name] = type
    
    # Optional runtime validation in development.
    # (`super` inside define_method cannot reach a method defined on the
    # same class, so capture the original reader explicitly.)
    if Rails.env.development?
      untyped = instance_method(name)
      define_method(name) do
        result = untyped.bind_call(self)
        validate_type!(result, type)
        result
      end
    end
  end
end

This approach:

  • Keeps the symbol first (Ruby convention)
  • Uses familiar of keyword (like we use for generics)
  • Works with all existing parameters
  • Is parser-friendly and unambiguous
  • Reads naturally in English

Complex Example: A Service Object

Let’s build a realistic service object with full type safety:

type TransferResult = 
  { success: true, transaction: Transaction } |
  { success: false, error: String }

class MoneyTransferService
  def initialize(
    @from_account :: Account,
    @to_account :: Account,
    @amount :: BigDecimal
  )
  end
  
  def call :: TransferResult
    return error("Amount must be positive") if @amount <= 0
    return error("Insufficient funds") if @from_account.balance < @amount
    return error("Accounts must be different") if @from_account == @to_account
    
    transaction :: Transaction? = nil
    
    Account.transaction do
      @from_account.withdraw(@amount)
      @to_account.deposit(@amount)
      
      transaction = Transaction.create!(
        from_account: @from_account,
        to_account: @to_account,
        amount: @amount,
        status: "completed"
      )
    end
    
    { success: true, transaction: transaction }
  rescue ActiveRecord::RecordInvalid => e
    error(e.message)
  end
  
  private
  
  def error(message :: String) :: TransferResult
    { success: false, error: message }
  end
end

# Usage
service = MoneyTransferService.new(from_account, to_account, BigDecimal("100.50"))
result :: TransferResult = service.call

case result
in { success: true, transaction: tx }
  puts "Transfer successful: #{tx.id}"
in { success: false, error: err }
  puts "Transfer failed: #{err}"
end

Pattern Matching with Types

Ruby 3 introduced pattern matching. TypedRuby makes it type-safe:

type Response of T = 
  { status: "ok", data: T } |
  { status: "error", message: String } |
  { status: "loading" }

def handle_response of T, response :: Response of T :: String
  case response
  in { status: "ok", data: data :: T }
    "Success: #{data}"
  in { status: "error", message: msg :: String }
    "Error: #{msg}"
  in { status: "loading" }
    "Loading..."
  end
end

# Usage
user_response :: Response of User = fetch_user(123)
message = handle_response(user_response)

Metaprogramming with Types

Ruby’s metaprogramming is powerful but dangerous. TypedRuby could make it safer:

class Model
  def self.has_typed_attribute of T, name :: Symbol, type :: Class
    define_method(name) :: T do
      instance_variable_get("@#{name}")
    end
    
    define_method("#{name}=") :: void do |value :: T|
      instance_variable_set("@#{name}", value)
    end
  end
end

class User < Model
  has_typed_attribute of String, :name, String
  has_typed_attribute of Integer, :age, Integer
end

user = User.new
user.name = "Ivan"  # OK
user.age = 30       # OK
user.name = 123     # Type error!

Gradual Typing

The beauty of TypedRuby is that it’s optional. You can mix typed and untyped code:

# Completely untyped (classic Ruby)
def process(data)
  data.map { |x| x * 2 }
end

# Partially typed
def process(data :: Array)
  data.map { |x| x * 2 }
end

# Fully typed
def process of T, data :: [T], &block :: (T) -> T :: [T]
  data.map(&block)
end

# The three can coexist in the same codebase

Type System and Object Hierarchy

Here’s a crucial question: how do types relate to Ruby’s object system? In Ruby, everything is an object, and every class inherits from Object (or BasicObject). TypedRuby’s type system needs to respect this.

Types ARE classes (mostly)

In TypedRuby, most types would literally be the classes themselves:

# String is both a class and a type
name :: String = "Ivan"
puts String.class  # => Class
puts String.ancestors  # => [String, Comparable, Object, Kernel, BasicObject]

# User is both a class and a type
user :: User = User.new
puts User.class  # => Class
puts User.ancestors  # => [User, ApplicationRecord, ActiveRecord::Base, Object, ...]

This is fundamentally different from TypeScript, where types exist only at compile time. In TypedRuby, types are runtime objects too.
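Unlike the TypedRuby snippets, this part needs no new syntax: classes are already first-class objects in today's Ruby, which is exactly the foundation the proposal builds on. The `of_type?` helper below is hypothetical, just to show that a type check is an ordinary method call on those objects:

```ruby
# Classes are ordinary objects: they can be stored, passed, and queried.
String.class                        # => Class
String.ancestors.include?(Object)   # => true

# A "type check" is just a method call against a class object.
def of_type?(value, type)
  value.is_a?(type)
end

of_type?("Ivan", String)   # => true
of_type?(30, String)       # => false
```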

Special type constructors

Some type syntax creates type objects at runtime:

# Array type constructor
posts :: [Post] = []

# This is roughly equivalent to:
posts :: Array[Post] = []

# Which could be implemented as:
class Array
  def self.[](element_type)
    TypedArray.new(element_type)
  end
end

# Hash type constructor
ages :: {String => Integer} = {}

# Roughly:
ages :: Hash[String, Integer] = {}

The Type class hierarchy

TypedRuby would introduce a parallel type hierarchy:

# New base classes for type system
class Type
  # Base class for all types
end

class GenericType < Type
  # For parameterized types like Array[T], Hash[K,V]
  attr_reader :type_params
  
  def initialize(*type_params)
    @type_params = type_params
  end
end

class UnionType < Type
  # For union types like String | Integer
  attr_reader :types
  
  def initialize(*types)
    @types = types
  end
end

class NullableType < Type
  # For nullable types like String?
  attr_reader :inner_type
  
  def initialize(inner_type)
    @inner_type = inner_type
  end
end

# These would be used like:
array_of_posts = GenericType.new(Array, Post)  # [Post]
string_or_int = UnionType.new(String, Integer)  # String | Integer
nullable_user = NullableType.new(User)  # User?
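The UnionType sketch is already close to runnable Ruby. Here is a self-contained version with a membership test added; the `matches?` method is hypothetical, not part of any shipping gem:

```ruby
# Minimal union type with a runtime membership check.
class UnionType
  attr_reader :types

  def initialize(*types)
    @types = types
  end

  # True if the value is an instance of any member type.
  def matches?(value)
    types.any? { |t| value.is_a?(t) }
  end
end

string_or_int = UnionType.new(String, Integer)
string_or_int.matches?("hello")  # => true
string_or_int.matches?(42)       # => true
string_or_int.matches?(3.14)     # => false
```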

Runtime type checking

Because types are objects, you could check them at runtime:

def process(value :: String | Integer)
  case value
  when String
    value.upcase
  when Integer
    value * 2
  end
end

# The type annotation creates a runtime check:
def process(value)
  # Compiler inserts:
  unless value.is_a?(String) || value.is_a?(Integer)
    raise TypeError, "Expected String | Integer, got #{value.class}"
  end
  
  case value
  when String
    value.upcase
  when Integer
    value * 2
  end
end

Type as values (reflection)

Types being objects means you can work with them:

def type_info of T, value :: T :: Hash
  {
    value: value,
    type: T,
    class: value.class,
    ancestors: T.ancestors
  }
end

result = type_info("hello")
puts result[:type]  # => String
puts result[:class]  # => String
puts result[:ancestors]  # => [String, Comparable, Object, ...]

# Generic types are objects too:
array_type = Array of String
puts array_type.class  # => GenericType
puts array_type.type_params  # => [String]

Method objects with type information

Ruby’s Method objects could expose type information:

class User
  def greet(name :: String) :: String
    "Hello, #{name}"
  end
end

method = User.instance_method(:greet)
puts method.parameter_types  # => [String]
puts method.return_type  # => String

# This enables runtime validation:
def call_safely(obj, method_name, *args)
  method = obj.method(method_name)
  
  # Check argument types
  method.parameter_types.each_with_index do |type, i|
    unless args[i].is_a?(type)
      raise TypeError, "Argument #{i} must be #{type}"
    end
  end
  
  obj.send(method_name, *args)
end

Duck typing still works

Even with types, Ruby’s duck typing philosophy is preserved:

# You can still use duck typing without types
def quack(duck)
  duck.quack
end

# Or enforce types when you want safety
def quack(duck :: Duck) :: String
  duck.quack
end

# Or use interfaces for structural typing
interface Quackable
  def quack :: String
end

def quack(duck :: Quackable) :: String
  duck.quack  # Works with any object that implements quack
end

Type compatibility and inheritance

Types follow Ruby’s inheritance rules:

class Animal
  def speak :: String
    "Some sound"
  end
end

class Dog < Animal
  def speak :: String
    "Woof"
  end
end

# Dog is a subtype of Animal
def make_speak(animal :: Animal) :: String
  animal.speak
end

dog = Dog.new
make_speak(dog)  # OK, Dog < Animal

# Liskov Substitution Principle applies
animals :: [Animal] = [Dog.new, Cat.new, Bird.new]

The as: keyword and runtime behavior

When you write:

has_many :posts, as: [Post]

This could be expanded by the Rails framework to:

has_many :posts, type_checker: -> (value) {
  value.is_a?(Array) && value.all? { |item| item.is_a?(Post) }
}

Rails could use this for runtime validation in development mode, giving you immediate feedback if you accidentally assign the wrong type.
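As a plain-Ruby sketch of what that development-mode check might look like (the `TypedCollection` class and its attach point are invented for illustration, not real Rails API):

```ruby
# A collection writer that runs a type_checker lambda before assignment.
class TypedCollection
  def initialize(type_checker)
    @type_checker = type_checker
    @items = []
  end

  def replace(values)
    unless @type_checker.call(values)
      raise TypeError, "collection failed type check"
    end
    @items = values
  end
end

Post = Struct.new(:title)
posts = TypedCollection.new(->(v) { v.is_a?(Array) && v.all? { |i| i.is_a?(Post) } })

posts.replace([Post.new("hello")])   # passes the check
begin
  posts.replace(["not a post"])
rescue TypeError => e
  e.message                          # => "collection failed type check"
end
```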

Performance considerations

Runtime type checking has overhead. TypedRuby could handle this smartly:

# In development/test: full runtime checking
ENV['RUBY_TYPE_CHECKING'] = 'strict'

# In production: types checked only at compile time
ENV['RUBY_TYPE_CHECKING'] = 'none'

# Or selective checking for critical paths
ENV['RUBY_TYPE_CHECKING'] = 'public_apis'
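That environment switch could be implemented with an ordinary helper. A minimal sketch in plain Ruby, assuming the hypothetical RUBY_TYPE_CHECKING variable:

```ruby
# Only pay the checking cost when the (hypothetical) RUBY_TYPE_CHECKING
# environment variable asks for it:
def assert_type!(value, type)
  return value unless ENV['RUBY_TYPE_CHECKING'] == 'strict'

  unless value.is_a?(type)
    raise TypeError, "expected #{type}, got #{value.class}"
  end
  value
end

ENV['RUBY_TYPE_CHECKING'] = 'strict'
assert_type!("Ivan", String)    # passes, returns "Ivan"
# assert_type!(42, String)      # would raise TypeError in strict mode

ENV['RUBY_TYPE_CHECKING'] = 'none'
assert_type!(42, String)        # check skipped entirely, returns 42
```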

Integration with existing Ruby

Since types are objects, they integrate seamlessly:

# Works with reflection
User.instance_methods.each do |method|
  m = User.instance_method(method)
  if m.respond_to?(:return_type)
    puts "#{method} returns #{m.return_type}"
  end
end

# Works with metaprogramming
class User
  [:name, :email, :age].each do |attr|
    define_method(attr) :: String do
      instance_variable_get("@#{attr}")
    end
  end
end

# Works with monkey patching (for better or worse)
class String
  def original_upcase :: String
    # Type information is preserved
  end
end
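For context, today's Ruby reflection already exposes parameter names and kinds (though not their types) via Method#parameters; TypedRuby's parameter_types would be a natural extension of this existing API:

```ruby
class User
  def greet(name, formal: false)
    formal ? "Good day, #{name}" : "Hello, #{name}"
  end
end

# Method#parameters reports each parameter's kind and name:
m = User.instance_method(:greet)
p m.parameters  # => [[:req, :name], [:key, :formal]]
```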

This approach makes TypedRuby feel like a natural evolution of Ruby rather than a foreign type system bolted on. Types are just objects, following Ruby’s “everything is an object” philosophy.

Type inference

TypedRuby should infer types aggressively:

# Inferred from literal
name = "Ivan"  # String inferred

# Inferred from method return
def get_age
  30
end

age = get_age  # Integer inferred

# Inferred from array contents
numbers = [1, 2, 3, 4]  # [Integer] inferred

# Inferred from hash
user = {
  name: "Ivan",
  age: 30,
  active: true
}  # {Symbol => String | Integer | Boolean} inferred

# Explicit typing when inference isn't enough
mixed :: [Integer | String] = [1, "two", 3]

Why This Could Work

Unlike Sorbet and RBS, TypedRuby would be:

  1. Native: Types are part of the language syntax, not bolted on
  2. Optional: You choose where to add types
  3. Gradual: Mix typed and untyped code freely
  4. Readable: Syntax feels like Ruby, not like Java
  5. Powerful: Full generics, unions, intersections, pattern matching
  6. Practical: Works with Rails, metaprogramming, blocks, procs

The syntax respects Ruby’s philosophy. It’s minimal, expressive, and doesn’t get in your way. When you want types, they’re there. When you don’t, they’re not.

The Implementation Challenge

Could this be built? Technically, yes. You’d need to:

  1. Extend the Ruby parser to recognize type annotations
  2. Build a type checker that understands Ruby’s semantics
  3. Make it work with Ruby’s dynamic features
  4. Integrate with existing tools (RuboCop, RubyMine, VS Code)
  5. Handle the massive existing Ruby ecosystem

The hard part isn’t the syntax. It’s making the type checker smart enough to handle Ruby’s dynamism while still being useful. Ruby’s metaprogramming, method_missing, and dynamic dispatch all make static typing hard.

But not impossible. Crystal proved you can have Ruby-like syntax with static types. Sorbet proved you can add types to Ruby code. TypedRuby would combine the best of both: native syntax with gradual typing.

The Dream

Imagine opening a Rails codebase and seeing:

class User < ApplicationRecord
  has_many :posts :: [Post]
  
  def full_name :: String
    "#{first_name} #{last_name}"
  end
end

class PostsController < ApplicationController
  def create :: void
    @post :: Post = Post.new(post_params)
    @post.save!
    redirect_to @post
  end
end

The types are there when you need them, documenting the code and catching bugs. But they don’t dominate. The code still looks like Ruby. It still feels like Ruby.

That’s what TypedRuby could be. Not a separate type system bolted onto Ruby. Not a different language inspired by Ruby. But Ruby itself, evolved to support the type safety modern developers expect.

Would It Succeed?

Honestly? Probably not. Ruby’s community values dynamism and flexibility. Matz has explicitly said he doesn’t want mandatory typing. The ecosystem is built on duck typing and metaprogramming.

But that doesn’t mean it wouldn’t be useful. A significant portion of Ruby developers would adopt optional typing if it felt natural. Rails applications would benefit from type safety in controllers, models, and services. API clients would be more reliable. Refactoring would be safer.

The key is making it optional and making it Ruby. Not Sorbet’s verbose sig blocks. Not RBS’s separate files. Just Ruby, with types when you want them.

Conclusion

TypedRuby is a thought experiment, but it’s a valuable one. It shows what’s possible when you design types into a language from the start, rather than bolting them on later.

Ruby is beautiful. Types don’t have to ruin that beauty. With the right syntax, the right philosophy, and the right implementation, they could enhance it.

Maybe someday we’ll see Ruby 4.0 with native, optional type annotations. Maybe we won’t. But it’s fun to imagine a world where Ruby has the expressiveness we love and the type safety we need.

Until then, we have Sorbet and RBS. They’re not perfect, but they’re what we’ve got. And who knows? Maybe they’ll evolve. Maybe the syntax will improve. Maybe they’ll feel more Ruby-like over time.

Or maybe someone will read this and decide to build TypedRuby for real.

A developer can dream.

TypedScript: Imagining CoffeeScript with Types

The content envisions a hypothetical programming language called “TypedScript,” merging the elegance of CoffeeScript with TypeScript’s type safety. It advocates for optional types, clean syntax, aggressive type inference, and elegance in generics, while maintaining CoffeeScript’s aesthetic. The idea remains theoretical, noting practical challenges with adoption in the current ecosystem.

After writing my love letter to CoffeeScript, I couldn’t stop thinking: what if CoffeeScript had embraced types instead of fading away? What if someone had built a typed version that kept all the syntactic elegance while adding the type safety that makes TypeScript so powerful?

Let’s imagine that world. Let’s design what I’ll call “TypedScript” (or maybe CoffeeType? TypedCoffee? We’ll workshop the name). The goal: keep everything that made CoffeeScript beautiful while adding first-class support for types and generics.

The Core Principles

Before we dive into syntax, let’s establish what we’re trying to achieve:

  1. Types should be optional but encouraged. You can write untyped code and gradually add types.
  2. Syntax should stay clean. No angle brackets everywhere, no visual noise.
  3. Type inference should be aggressive. The compiler should figure out as much as possible.
  4. Generics should be elegant. No <T, U, V> mess.
  5. The Ruby/Python aesthetic must be preserved. Significant whitespace, minimal punctuation, readable code.

Basic Type Annotations

Let’s start simple. In TypeScript, you write:

const name: string = "Ivan";
const age: number = 30;
const isActive: boolean = true;

In TypedScript, I’d imagine:

name: String = "Ivan"
age: Number = 30
isActive: Boolean = true

Or with type inference (which should work most of the time):

name = "Ivan"        # inferred as String
age = 30             # inferred as Number
isActive = true      # inferred as Boolean

The colon for type annotations feels natural. It’s what TypeScript uses, and it doesn’t clash with CoffeeScript’s existing syntax.

Function Signatures

TypeScript function types can get verbose:

function greet(name: string, age?: number): string {
  return age 
    ? `Hello ${name}, you are ${age}` 
    : `Hello ${name}`;
}

const add = (a: number, b: number): number => a + b;

TypedScript could look like this:

greet = (name: String, age?: Number) -> String
  if age?
    "Hello #{name}, you are #{age}"
  else
    "Hello #{name}"

add = (a: Number, b: Number) -> Number
  a + b

Even cleaner with inference:

greet = (name: String, age?: Number) ->
  if age?
    "Hello #{name}, you are #{age}"
  else
    "Hello #{name}"

add = (a: Number, b: Number) -> a + b

The return type is inferred from the actual return value. This is already how CoffeeScript works (implicit returns), so we just layer types on top.

Interfaces and Type Definitions

TypeScript interfaces are pretty clean, but they still require curly braces:

interface User {
  id: string;
  name: string;
  email: string;
  age?: number;
  roles: string[];
}

In TypedScript, we could use indentation:

type User
  id: String
  name: String
  email: String
  age?: Number
  roles: [String]

Or for inline types:

user: {id: String, name: String, email: String}

Arrays could use the Ruby-inspired [Type] syntax. Tuples could be [String, Number]. Maps could be {String: User}.

Classes with Types

TypeScript classes are already pretty good, but they’re still verbose:

class UserService {
  private users: User[] = [];
  
  constructor(private apiClient: ApiClient) {}
  
  async getUser(id: string): Promise<User> {
    const response = await this.apiClient.get(`/users/${id}`);
    return response.data;
  }
  
  addUser(user: User): void {
    this.users.push(user);
  }
}

TypedScript version:

class UserService
  users: [User] = []
  
  constructor: (@apiClient: ApiClient) ->
  
  getUser: (id: String) -> Promise<User>
    response = await @apiClient.get "/users/#{id}"
    response.data
  
  addUser: (user: User) -> Void
    @users.push user

The @ syntax for instance variables is preserved, and we just add type annotations where needed. Constructor parameter properties (@apiClient: ApiClient) combine declaration and assignment in one elegant line.

Generics: The Tricky Part

This is where TypeScript gets ugly. Generics in TypeScript look like this:

class Container<T> {
  private value: T;
  
  constructor(value: T) {
    this.value = value;
  }
  
  map<U>(fn: (value: T) => U): Container<U> {
    return new Container(fn(this.value));
  }
}

function identity<T>(value: T): T {
  return value;
}

const result = identity<string>("hello");

The angle brackets are noisy, and they clash with comparison operators. TypedScript needs a different approach. What if we used a more natural syntax inspired by mathematical notation?

class Container of T
  value: T
  
  constructor: (@value: T) ->
  
  map: (fn: (T) -> U) -> Container of U for any U
    new Container fn(@value)

identity = (value: T) -> T for any T
  value

result = identity "hello"  # type inferred

The of keyword introduces type parameters for classes. The for any T suffix introduces type parameters for functions. When calling generic functions, types are inferred automatically in most cases.

For multiple type parameters:

class Pair of K, V
  constructor: (@key: K, @value: V) ->
  
  map: (fn: (V) -> U) -> Pair of K, U for any U
    new Pair @key, fn(@value)

Union Types and Intersections

TypeScript uses | for unions and & for intersections:

type Result = Success | Error;
type Employee = Person & Worker;

TypedScript could keep this, but make it more readable:

type Result = Success | Error

type Employee = Person & Worker

# Or with more complex types
type Response = 
  | {status: "success", data: User}
  | {status: "error", message: String}

Advanced Generic Constraints

TypeScript has complex generic constraints:

function findMax<T extends Comparable>(items: T[]): T {
  return items.reduce((max, item) => 
    item.compareTo(max) > 0 ? item : max
  );
}

In TypedScript:

findMax = (items: [T]) -> T for any T extends Comparable
  items.reduce (max, item) ->
    if item.compareTo(max) > 0 then item else max

Practical Example: Building a Generic Repository

Let’s build something real. Here’s a TypeScript generic repository:

interface Repository<T> {
  findById(id: string): Promise<T | null>;
  findAll(): Promise<T[]>;
  save(entity: T): Promise<T>;
  delete(id: string): Promise<void>;
}

class ApiRepository<T> implements Repository<T> {
  constructor(
    private endpoint: string,
    private client: HttpClient
  ) {}
  
  async findById(id: string): Promise<T | null> {
    try {
      const response = await this.client.get(`${this.endpoint}/${id}`);
      return response.data;
    } catch (error) {
      return null;
    }
  }
  
  async findAll(): Promise<T[]> {
    const response = await this.client.get(this.endpoint);
    return response.data;
  }
  
  async save(entity: T): Promise<T> {
    const response = await this.client.post(this.endpoint, entity);
    return response.data;
  }
  
  async delete(id: string): Promise<void> {
    await this.client.delete(`${this.endpoint}/${id}`);
  }
}

The TypedScript version:

interface Repository of T
  findById: (id: String) -> Promise<T?>
  findAll: () -> Promise<[T]>
  save: (entity: T) -> Promise<T>
  delete: (id: String) -> Promise<Void>

class ApiRepository of T implements Repository of T
  constructor: (@endpoint: String, @client: HttpClient) ->
  
  findById: (id: String) -> Promise<T?>
    try
      response = await @client.get "#{@endpoint}/#{id}"
      response.data
    catch error
      null
  
  findAll: () -> Promise<[T]>
    response = await @client.get @endpoint
    response.data
  
  save: (entity: T) -> Promise<T>
    response = await @client.post @endpoint, entity
    response.data
  
  delete: (id: String) -> Promise<Void>
    await @client.delete "#{@endpoint}/#{id}"

# Usage
userRepo = new ApiRepository of User "users", httpClient
users = await userRepo.findAll()

Look at how clean that is. No semicolons, no excessive braces, and angle brackets only where the Promise types require them. The type information is there, but it doesn’t dominate the code.

Type Guards and Narrowing

TypeScript’s type guards work well:

function isString(value: unknown): value is string {
  return typeof value === "string";
}

if (isString(data)) {
  console.log(data.toUpperCase());
}

TypedScript could use a similar pattern:

isString = (value: Unknown) -> value is String
  typeof value == "string"

if isString data
  console.log data.toUpperCase()

Utility Types

TypeScript has utility types like Partial<T>, Pick<T, K>, Omit<T, K>. These could work in TypedScript with a more natural syntax:

# TypeScript
type PartialUser = Partial<User>;
type UserPreview = Pick<User, "id" | "name">;
type UserWithoutEmail = Omit<User, "email">;

# TypedScript
type PartialUser = Partial of User
type UserPreview = Pick of User, "id" | "name"
type UserWithoutEmail = Omit of User, "email"

The Existential Operator with Types

Remember CoffeeScript’s beloved ? operator? It would work beautifully with nullable types:

user: User? = await findUser id  # User | null

name = user?.name ? "Guest"
user?.profile?.update()
callback?()

The ? in User? means nullable, just like TypeScript’s User | null or User | undefined.

Real-World Example: A Todo App

Let’s put it all together with a realistic example:

type Todo
  id: String
  title: String
  completed: Boolean
  createdAt: Date

type TodoFilter = "all" | "active" | "completed"

class TodoStore
  todos: [Todo] = []
  filter: TodoFilter = "all"
  
  constructor: (@storage: Storage) ->
    @loadTodos()
  
  loadTodos: () -> Void
    data = @storage.get "todos"
    @todos = if data? then JSON.parse data else []
  
  saveTodos: () -> Void
    @storage.set "todos", JSON.stringify @todos
  
  addTodo: (title: String) -> Todo
    todo: Todo =
      id: generateId()
      title: title
      completed: false
      createdAt: new Date()
    
    @todos.push todo
    @saveTodos()
    todo
  
  toggleTodo: (id: String) -> Boolean
    todo = @todos.find (t) -> t.id == id
    return false unless todo?
    
    todo.completed = !todo.completed
    @saveTodos()
    true
  
  deleteTodo: (id: String) -> Boolean
    index = @todos.findIndex (t) -> t.id == id
    return false if index == -1
    
    @todos.splice index, 1
    @saveTodos()
    true
  
  getFilteredTodos: () -> [Todo]
    switch @filter
      when "active" then @todos.filter (t) -> !t.completed
      when "completed" then @todos.filter (t) -> t.completed
      else @todos

generateId = () -> String
  Math.random().toString(36).substr 2, 9

Compare that to the TypeScript equivalent and tell me it isn’t more elegant. The types are there, providing safety and documentation, but they don’t overwhelm the code. You can still read it naturally.

Why This Matters

TypeScript won because it added types to JavaScript without fundamentally changing the language. That was smart from an adoption standpoint. But it meant keeping JavaScript’s verbose syntax.

If TypedScript had existed, we could have had both: the elegance of CoffeeScript and the safety of TypeScript. We could write code that’s both beautiful and robust.

The tragedy is that this never happened. CoffeeScript’s creator, Jeremy Ashkenas, explicitly rejected adding types. He felt they went against CoffeeScript’s philosophy of simplicity. Meanwhile, TypeScript embraced JavaScript’s syntax for compatibility.

Could This Still Happen?

Technically, someone could build this. The CoffeeScript compiler is open source. TypeScript’s type system is well-documented. A sufficiently motivated team could fork CoffeeScript and add a type system.

But would anyone use it? Probably not. The JavaScript ecosystem has moved on. TypeScript has won. The tooling, the community, the momentum are all there. Starting a new compile-to-JavaScript language in 2025 would be fighting an uphill battle.

Still, it’s fun to imagine. And who knows? Maybe in some parallel universe, TypedScript is the dominant language for web development, and developers there are writing beautiful, type-safe code that makes our TypeScript look verbose and clunky.

A developer can dream.

The Syntax Reference

For anyone curious, here’s a quick reference of what TypedScript syntax could look like:

# Basic types
name: String = "Ivan"
age: Number = 30
active: Boolean = true
data: Any = anything()
nothing: Void = undefined

# Arrays and tuples
numbers: [Number] = [1, 2, 3]
tuple: [String, Number] = ["Ivan", 30]

# Objects
user: {name: String, age: Number} = {name: "Ivan", age: 30}

# Nullable types
optional: String? = null

# Union types
status: "pending" | "active" | "complete" = "pending"
value: String | Number = 42

# Functions
greet: (name: String) -> String = (name) -> "Hello #{name}"

# Generic functions
identity = (value: T) -> T for any T
  value

# Generic classes
class Container of T
  value: T
  constructor: (@value: T) ->

# Interfaces
interface Comparable of T
  compareTo: (other: T) -> Number

# Type aliases
type UserId = String
type Result of T = {ok: true, value: T} | {ok: false, error: String}

# Constraints
sorted = (items: [T]) -> [T] for any T extends Comparable of T
  items.sort (a, b) -> a.compareTo b

Closing Thoughts

Would TypedScript be better than TypeScript? For me, yes. The cleaner syntax, the Ruby-inspired aesthetics, the focus on readability, all while keeping the benefits of static typing. It would be the best of both worlds.

But “better” is subjective. TypeScript’s compatibility with JavaScript is a huge advantage. Its massive ecosystem is irreplaceable. Its tooling is mature and battle-tested.

TypedScript would be a beautiful language that few people use. And maybe that’s okay. Not every good idea wins. Sometimes the practical choice beats the elegant one.

But I still wish I could write my production code in TypedScript. I think it would be a joy.

What do you think? Would you use TypedScript if it existed? What syntax choices would you make differently? Let me know in the comments.

A Love Letter to CoffeeScript and HAML: When Rails Frontend Development Was Pure Joy

The author reflects on the nostalgia of older coding practices, specifically with Ruby on Rails, CoffeeScript, and HAML. They appreciate the simplicity, conciseness, and readability of these technologies compared to modern alternatives like TypeScript. While acknowledging TypeScript’s superiority in type safety, they express a longing for the elegant developer experience of the past.

There’s something bittersweet about looking back at old codebases. Recently, I found myself diving into a Ruby on Rails project from 2012, and I was immediately transported back to an era when frontend development felt different. Better, even. The stack was CoffeeScript, HAML, and Rails’ asset pipeline, and you know what? It was glorious.

I know what you’re thinking. “CoffeeScript? That thing died years ago. TypeScript won. Get over it.” And you’re right. TypeScript did win. It’s everywhere now, and for good reasons. But let me tell you why, after all these years, I still get a little nostalgic pang when I think about writing CoffeeScript, and why part of me still thinks it was the better language.

The Rails Way: Opinionated and Proud

First, let’s set the scene. This was the golden age of Rails, when “convention over configuration” wasn’t just a tagline. It was a philosophy that permeated everything. The asset pipeline handled all your JavaScript and CSS compilation. You’d drop a .coffee file in app/assets/javascripts, write your code, and Rails would handle the rest. No webpack configs, no Babel presets, no decision fatigue about which bundler to use.

Your views lived in HAML files that looked like this:

.user-profile
  .header
    %h1= @user.name
    %p.bio= @user.bio
  
  .actions
    = link_to "Edit Profile", edit_user_path(@user), class: "btn btn-primary"
    = link_to "Delete Account", user_path(@user), method: :delete, 
      data: { confirm: "Are you sure?" }, class: "btn btn-danger"

And your JavaScript looked like this:

class UserProfile
  constructor: (@element) ->
    @setupEventListeners()
  
  setupEventListeners: ->
    @element.find('.btn-danger').on 'click', (e) =>
      @handleDelete(e)
  
  handleDelete: (e) ->
    return unless confirm('Really delete?')
    
    $.ajax
      url: $(e.target).attr('href')
      method: 'DELETE'
      success: => @onDeleteSuccess()
      error: => @onDeleteError()
  
  onDeleteSuccess: ->
    @element.fadeOut()
    Notifications.show 'Account deleted successfully'

$ ->
  $('.user-profile').each ->
    new UserProfile($(this))

Look at that. It’s beautiful. It’s concise. It’s expressive. And it just works.

Why HAML Was a Breath of Fresh Air

Let’s talk about HAML first. If you’ve never used it, HAML (HTML Abstraction Markup Language) was a templating language that let you write HTML without all the angle brackets. Instead of this:

<div class="container">
  <div class="row">
    <div class="col-md-6">
      <h1>Welcome</h1>
      <p class="lead">This is my website</p>
    </div>
  </div>
</div>

You wrote this:

.container
  .row
    .col-md-6
      %h1 Welcome
      %p.lead This is my website

The difference is striking. HAML forced you to write clean, properly indented markup. You couldn’t forget to close a tag because there were no closing tags. The structure was defined by indentation, Python-style. This meant your templates were always consistently formatted, always readable, and always correctly nested.

HAML also integrated beautifully with Ruby. Want to interpolate a variable? Just use =. Want to add a conditional? Use standard Ruby syntax. The mental model was simple: it’s just Ruby that outputs HTML.

- if current_user.admin?
  .admin-panel
    %h2 Admin Controls
    = render partial: 'admin/controls'
- else
  .user-message
    %p You don't have access to this section.

No context switching between template syntax and programming language syntax. It was all Ruby, all the way down.

CoffeeScript: JavaScript for People Who Don’t Like JavaScript

Now, let’s get to the controversial part: CoffeeScript. For those who missed it, CoffeeScript was a language that compiled to JavaScript, created by Jeremy Ashkenas in 2009. It took heavy inspiration from Ruby and Python, offering a cleaner syntax that eliminated much of JavaScript’s syntactic noise.

Here’s the thing people forget: JavaScript in 2011 was terrible. No modules, no classes, no arrow functions, no destructuring, no template strings, no const or let. You had var, function expressions, and pain. So much pain.

CoffeeScript gave us:

Arrow functions (before ES6):

numbers = [1, 2, 3, 4, 5]
doubled = numbers.map (n) -> n * 2

Class syntax (before ES6):

class Animal
  constructor: (@name) ->
  
  speak: ->
    console.log "#{@name} makes a sound"

class Dog extends Animal
  speak: ->
    console.log "#{@name} barks"

String interpolation (before ES6):

name = "Ivan"
greeting = "Hello, #{name}!"

Destructuring (before ES6):

{name, age} = user
[first, second, rest...] = numbers

Comprehensions (still not in JavaScript):

adults = (person for person in people when person.age >= 18)

CoffeeScript didn’t just add syntax sugar. It changed how you thought about JavaScript. The code was more expressive, more concise, and more Ruby-like. For Rails developers, it felt like home.
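For comparison, here are those same idioms in actual Ruby, the language CoffeeScript was echoing (Person is a throwaway Struct for illustration):

```ruby
# Blocks instead of arrow functions:
numbers = [1, 2, 3, 4, 5]
doubled = numbers.map { |n| n * 2 }         # => [2, 4, 6, 8, 10]

# String interpolation, identical syntax to CoffeeScript's:
name = "Ivan"
greeting = "Hello, #{name}!"

# Comprehension-style filtering via select:
Person = Struct.new(:age)
people = [Person.new(15), Person.new(30)]
adults = people.select { |person| person.age >= 18 }
```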

The Magic of the Asset Pipeline

What made this stack truly shine was how it all fit together. The Rails asset pipeline was like magic. Possibly black magic, but magic nonetheless.

You’d organize your code like this:

app/assets/
  javascripts/
    application.coffee
    models/
      user.coffee
      post.coffee
    views/
      users/
        profile.coffee
      posts/
        index.coffee

In your application.coffee, you’d require your dependencies:

#= require jquery
#= require jquery_ujs
#= require_tree ./models
#= require_tree ./views

Rails would automatically compile everything, concatenate it in the right order, minify it for production, and serve it with cache-busting fingerprints. You didn’t think about build tools. You just wrote code.

The same applied to stylesheets. Drop a .scss file in app/assets/stylesheets, and it would be compiled and served. Want to use a gem that includes assets? Add it to your Gemfile, and its assets would automatically be available. No CDN links, no manual script tags.

Was it perfect? No. Was it sometimes confusing when assets weren’t loading in the order you expected? Yes. But the developer experience was smooth. You could go from idea to implementation incredibly quickly.

Why CoffeeScript Still Feels Better Than TypeScript

Okay, here’s where I’m going to lose some of you. TypeScript is objectively the right choice for modern JavaScript development. It has type safety, incredible tooling, massive community support, and it’s actively developed by Microsoft. CoffeeScript is essentially dead, with minimal updates and a dwindling community.

And yet… CoffeeScript still feels better to write.

Let me explain. TypeScript added types to JavaScript, which is fantastic. But it kept JavaScript’s verbose syntax. You still have curly braces everywhere, you still need semicolons (or don’t, which becomes its own debate), you still have visual noise.

Compare these:

TypeScript:

interface User {
  name: string;
  email: string;
  age: number;
}

class UserService {
  constructor(private apiClient: ApiClient) {}
  
  async getUser(id: string): Promise<User> {
    const response = await this.apiClient.get(`/users/${id}`);
    return response.data;
  }
  
  filterAdults(users: User[]): User[] {
    return users.filter((user) => user.age >= 18);
  }
}

CoffeeScript:

class UserService
  constructor: (@apiClient) ->
  
  getUser: (id) ->
    response = await @apiClient.get "/users/#{id}"
    response.data
  
  filterAdults: (users) ->
    users.filter (user) -> user.age >= 18

The CoffeeScript version is cleaner. There’s less visual noise, less ceremony. The @ symbol for instance properties is brilliant. It’s immediately obvious what’s a property and what’s a local variable. The implicit returns mean you’re not constantly writing return statements. The significant whitespace enforces good formatting.

Yes, TypeScript gives you type safety. That’s huge. But CoffeeScript gives you readability. And in my experience, readable code is maintainable code. I can glance at CoffeeScript and immediately understand what it’s doing. TypeScript requires more parsing, more mental overhead.

The list comprehensions in CoffeeScript are particularly beautiful:

# CoffeeScript
evenSquares = (n * n for n in numbers when n % 2 == 0)

# TypeScript
const evenSquares = numbers
  .filter((n) => n % 2 === 0)
  .map((n) => n * n);

Both work, but the CoffeeScript version reads like English: “n squared for each n in numbers when n is even.” It’s declarative and expressive.

The Existential Operator: A Love Story

One of CoffeeScript’s best features was the existential operator (?). It was like optional chaining before optional chaining existed, but more powerful:

# Safe property access
name = user?.profile?.name

# Default values
speed = options?.speed ? 75

# Function existence check
callback?()

# Existence assignment
value ?= "default"

That last one, ?=, was particularly great. It means “assign if the variable is null or undefined.” It’s cleaner than value = value || "default" and more correct (because it doesn’t overwrite falsy-but-valid values like 0 or "").

TypeScript eventually got optional chaining (?.) and nullish coalescing (??), which is great. But it took years, and CoffeeScript had it from the start.

What We Lost

When the JavaScript community moved from CoffeeScript to ES6 and then TypeScript, we gained a lot. Type safety, better tooling, standardization. But we also lost something.

We lost the joy of writing concise code. We lost the elegance of Ruby-inspired syntax. We lost the community that valued readability and expressiveness over completeness and type safety.

Modern JavaScript development often feels like you’re fighting the tools. Configuring TypeScript, setting up ESLint, configuring Prettier, choosing between competing libraries, debugging sourcemaps, dealing with module resolution issues. It’s powerful, but it’s exhausting.

With CoffeeScript and Rails, you just wrote code. The decisions were made for you. The tools were integrated. The conventions were clear. It was opinionated, and that was a feature, not a bug.

The Verdict

Would I start a new project with CoffeeScript and HAML today? Probably not. The ecosystem has moved on. TypeScript has won, and for most use cases, it’s the right choice. React and Vue have replaced server-rendered templates. The world has changed.

But do I miss it? Absolutely.

I miss the simplicity. I miss the elegance. I miss being able to write beautiful, concise code without worrying about types and interfaces and generics. I miss HAML’s clean markup and the way it forced you to write good HTML. I miss the Rails asset pipeline just working without configuration.

Most of all, I miss the developer experience of that era. We were moving fast, building things quickly, and having fun doing it. The code was readable, the stack was coherent, and everything felt like it fit together.

Maybe that’s just nostalgia talking. Maybe I’m romanticizing the past and forgetting the pain points. But when I look at that old CoffeeScript code, I don’t see technical debt. I see craft. I see code that was written with care, that values clarity over cleverness, that respects the reader’s time.

And honestly? I still think CoffeeScript’s syntax is better. TypeScript is more powerful, more practical, and more maintainable at scale. But CoffeeScript is more beautiful.

Sometimes, that matters too.


What are your thoughts? Did you work with CoffeeScript and HAML back in the day? Do you miss them, or are you glad we’ve moved on? Let me know in the comments or reach out on Twitter.

The Hidden Economics of “Free” AI Tools: Why the SaaS Premium Still Matters

This post discusses the hidden costs of DIY solutions in SaaS, emphasizing the benefits of established SaaS tools over “free” AI-driven alternatives. It highlights issues like time tax, knowledge debt, reliability, support challenges, security risks, and scaling problems. Ultimately, it advocates for a balanced approach that leverages AI to enhance, rather than replace, reliable SaaS infrastructure.

This is Part 2 of my series on the evolution of SaaS. If you haven’t read Part 1: The SaaS Model Isn’t Dead, it’s Evolving Beyond the Hype of “Vibe Coding”, start there for the full context. In this post, I’m diving deeper into the hidden costs that most builders don’t see until it’s too late.

In my last post, I argued that SaaS isn’t dead, it’s just evolving beyond the surface-level appeal of vibe coding. Today, I want to dig deeper into something most builders don’t realize until it’s too late: the hidden costs of “free” AI-powered alternatives.

Because here’s the uncomfortable truth: when you replace a $99/month SaaS tool with a Frankenstein stack of AI prompts, no-code platforms, and API glue, you’re not saving money. You’re just moving the costs somewhere else, usually to places you can’t see until they bite you.

Let’s talk about what really happens when you choose the “cheaper” path.

The Time Tax: When Free Becomes Expensive

Picture this: you’ve built your “MVP” in a weekend. It’s glorious. ChatGPT wrote half the code, Zapier connects your Airtable to your Stripe account, and a Make.com scenario handles email notifications. Total monthly cost? Maybe $20 in API fees.

You’re feeling like a genius.

Then Monday morning hits. A customer reports an error. The Zapier workflow failed silently. You spend two hours digging through logs (when you can find them) only to discover that Airtable changed their API rate limits, and now your automation hits them during peak hours.

You patch it with a delay. Problem solved.

Until Wednesday, when three more edge cases emerge. The Python script you copied from ChatGPT doesn’t handle timezone conversions properly. Your payment flow breaks for international customers. The no-code platform you’re using doesn’t support the webhook format you need.

Each fix takes 30 minutes to 3 hours.

By Friday, you’ve spent more time maintaining your “free” stack than you would have spent just using Stripe Billing and ConvertKit.

This is the time tax. And unlike your SaaS subscription, you can’t expense it or write it off. It’s just gone, stolen from building features, talking to customers, or actually running your business.

The question isn’t whether your DIY solution costs less. It’s whether your time is worth $3/hour.

The Knowledge Debt: Building on Borrowed Understanding

Here’s a scenario that plays out constantly in the AI-first era:

A developer prompts Claude to build a payment integration. The AI generates beautiful code, type-safe, well-structured, handles edge cases. The developer copies it, tests it once, and ships it.

It works perfectly for two months.

Then Stripe deprecates an API endpoint. Or a customer discovers a refund edge case. Or the business wants to add subscription tiers.

Now what?

The developer stares at 200 lines of code they didn’t write and don’t fully understand. They can prompt the AI again, but they don’t know which parts are safe to modify. They don’t know why certain patterns were used. They don’t know what will break.

This is knowledge debt, the accumulated cost of using code you haven’t internalized.

Compare this to using a proper SaaS tool like Stripe Billing or Chargebee. You don’t understand every line of their code either, but you don’t need to. They handle the complexity. They migrate your data when APIs change. They’ve already solved the edge cases.

When you build with barely-understood AI-generated code, you get the worst of both worlds: you’re responsible for maintenance without having the knowledge to maintain it effectively.

This isn’t a knock on AI tools. It’s a reality check about technical debt in disguise.

The Reliability Gap: When “Good Enough” Isn’t

Let’s zoom out and talk about production-grade systems.

When you use Slack, you get something like 99.99% uptime. That’s not luck; it’s the result of on-call engineers, redundant infrastructure, automated failovers, and millions of dollars in operational excellence.

When you stitch together your own “Slack alternative” using Discord webhooks, Airtable, and a Telegram bot, what’s your uptime?

You don’t even know, because you’re not measuring it.

And here’s the thing: your customers notice.

They notice when notifications arrive 3 hours late because your Zapier task got queued during peak hours. They notice when your checkout flow breaks because you hit your free-tier API limits. They notice when that one Python script running on Replit randomly stops working.

Reliability isn’t a feature you can bolt on later. It’s the foundation everything else is built on.

This is why companies still pay for Datadog instead of writing their own monitoring. Why they use PagerDuty instead of email alerts. Why they choose AWS over running servers in their garage.

Not because they can’t build these things themselves, but because reliability at scale requires obsessive attention to details that don’t show up in MVP prototypes.

Your vibe-coded solution might work 95% of the time. But that missing 5% is where trust dies and customers churn.

The Support Nightmare: Who Do You Call?

Imagine this email from a customer:

“Hi, I tried to upgrade my account but got an error. Can you help?”

Simple enough, right?

Except your “upgrade flow” involves:

  • A Stripe Checkout session (managed by Stripe)
  • A webhook that triggers Make.com (managed by Make.com)
  • Which updates Airtable (managed by Airtable)
  • Which triggers a Zapier workflow (managed by Zapier)
  • Which sends data to your custom API (deployed on Railway)
  • Which updates your database (hosted on PlanetScale)

One of these broke. Which one? You have no idea.

You start debugging:

  • Check Stripe logs. Payment succeeded.
  • Check Make.com execution logs. Ran successfully.
  • Check Airtable. Record updated.
  • Check Zapier. Task queued but not processed yet.

Ah. Zapier’s free tier queues tasks during high-traffic periods. The upgrade won’t process for another 15 minutes.

You explain this to the customer. They’re confused and frustrated. So are you.

Now imagine that same scenario with a proper SaaS tool like Memberstack or MemberSpace. The customer emails them. They check their logs, identify the issue, and fix it. Done.

When you own the entire stack, you own all the problems too. And most founders don’t realize how much time “customer support for your custom infrastructure” actually takes until they’re drowning in it.

The Security Illusion: Compliance Costs You Can’t See

Pop quiz: Is your AI-generated authentication system GDPR compliant?

Does it properly hash passwords? Does it prevent timing attacks? Does it implement proper session management? Does it handle token refresh securely? Does it log security events appropriately?

If you’re not sure, you’ve got a problem.

Because when you use Auth0, Clerk, or AWS Cognito, these questions are answered for you. They have security teams, penetration testers, and compliance certifications. They handle GDPR, CCPA, SOC2, and whatever acronym-soup regulation applies to your industry.

When you roll your own auth with AI-generated code, you own all of that responsibility.

And here’s what most people don’t realize: security incidents are expensive. Not just in terms of fines and legal costs, but in reputation damage and customer trust.

One breach can kill a startup. And saying “but ChatGPT wrote the code” isn’t a legal defense.

The same logic applies to payment handling, data storage, and API security. Every shortcut you take multiplies your risk surface.

SaaS tools don’t just sell features, they sell peace of mind. They carry the liability so you don’t have to.
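
For calibration, here’s a minimal sketch of what “properly hash passwords” from the quiz above involves: a random salt, a deliberately slow KDF, and a constant-time comparison. This uses only Ruby’s standard OpenSSL bindings; in a real Rails app you’d reach for has_secure_password and bcrypt rather than rolling this yourself.

```ruby
require "openssl"
require "securerandom"

# A random salt, a slow KDF (PBKDF2 here), and a constant-time compare:
# the bare minimum behind "properly hash passwords".
def hash_password(password, salt: SecureRandom.bytes(16), iterations: 200_000)
  digest = OpenSSL::KDF.pbkdf2_hmac(
    password, salt: salt, iterations: iterations, length: 32, hash: "SHA256"
  )
  [salt, iterations, digest]
end

def verify_password(password, salt, iterations, digest)
  candidate = OpenSSL::KDF.pbkdf2_hmac(
    password, salt: salt, iterations: iterations, length: 32, hash: "SHA256"
  )
  OpenSSL.secure_compare(candidate, digest) # never a plain ==
end

salt, iters, digest = hash_password("hunter2")
puts verify_password("hunter2", salt, iters, digest) # => true
puts verify_password("wrong", salt, iters, digest)   # => false
```

And this sketch still ignores session management, token refresh, rate limiting, and audit logging, which is exactly the point.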

The Scale Wall: When Growth Breaks Everything

Your vibe-coded MVP works perfectly for your first 10 customers. Then you get featured on Product Hunt.

Suddenly you have 500 new signups in 24 hours.

Your Airtable base hits record limits. Your free-tier API quotas are maxed out. Your Make.com scenarios are queuing tasks for hours. Your Railway instance keeps crashing because you didn’t configure autoscaling. Your webhook endpoints are timing out because they weren’t designed for concurrent requests.

Everything is on fire.

This is the scale wall, the moment when your clever shortcuts stop being clever and start being catastrophic.

Real SaaS products are built to scale. They handle traffic spikes. They have redundancy. They auto-scale infrastructure. They cache aggressively. They optimize database queries. They monitor performance.

Your vibe-coded stack probably does none of these things.

And here’s the brutal part: scaling isn’t something you can retrofit easily. It’s architectural. You can’t just “add more Zapier workflows” your way out of it.

At this point, you face a choice: either rebuild everything properly (which takes months and risks losing customers during the transition), or artificially limit your growth to stay within the constraints of your fragile infrastructure.

Neither option is appealing.

The Integration Trap: When Your Stack Doesn’t Play Nice

One of the biggest promises of the AI-powered, no-code revolution is that everything integrates with everything.

Except it doesn’t. Not really.

Sure, Zapier connects to 5,000+ apps. But those integrations are surface-level. You get basic CRUD operations, not deep functionality.

Want to implement complex business logic? Want custom error handling? Want to batch process data efficiently? Want real-time updates instead of 15-minute polling?

Suddenly you’re writing custom code anyway, except now you’re writing it in the weird constraints of whatever platform you’ve chosen, rather than in a proper application where you have full control.

The irony is thick: you chose no-code to avoid complexity, but you ended up with a different kind of complexity, one that’s harder to debug and impossible to version control properly.

Meanwhile, a well-designed SaaS tool either handles your use case natively or provides a proper API for custom integration. You’re not fighting the platform; you’re using it as intended.

The Real Cost Comparison

Let’s do some actual math.

Vibe-coded stack:

  • Zapier Pro: $20/month
  • Make.com: $15/month
  • Airtable Pro: $20/month
  • Railway: $10/month
  • Various API costs: $15/month
  • Total: $80/month

Your time:

  • Initial setup: 20 hours
  • Weekly maintenance: 3 hours
  • Monthly debugging: 5 hours
  • Customer support for stack issues: 2 hours
  • Monthly time cost: ~20 hours

If your time is worth even $50/hour (a modest rate for a technical founder), that’s $1,000/month in opportunity cost.

Total real cost: $1,080/month.

Proper SaaS stack:

  • Stripe Billing: Included with processing fees
  • Memberstack: $25/month
  • ConvertKit: $29/month
  • Vercel: $20/month
  • Total: $74/month + processing fees

Your time:

  • Initial setup: 4 hours
  • Weekly maintenance: 0.5 hours
  • Monthly debugging: 1 hour
  • Customer support for stack issues: 0 hours (vendor handles it)
  • Monthly time cost: ~3 hours

At $50/hour, that’s $150/month in opportunity cost.

Total real cost: $224/month.

The “more expensive” SaaS stack actually costs roughly 80% less once you account for your time.

And we haven’t even factored in:

  • The revenue lost from downtime
  • The customers lost from poor reliability
  • The scaling issues you’ll hit later
  • The security risks you’re accepting
  • The knowledge debt you’re accumulating
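
The arithmetic above is simple enough to sanity-check in a few lines of Ruby (the figures are this post’s illustrative estimates, and the $50/hour rate is an assumption, not data):

```ruby
# Real monthly cost = subscription fees + founder time at an assumed rate.
HOURLY_RATE = 50 # modest rate for a technical founder

def real_monthly_cost(subscriptions, hours)
  subscriptions + hours * HOURLY_RATE
end

diy  = real_monthly_cost(80, 20) # vibe-coded stack: $80 in fees, ~20 h/month
saas = real_monthly_cost(74, 3)  # proper SaaS stack: $74 in fees, ~3 h/month

puts diy                                 # => 1080
puts saas                                # => 224
puts (100 * (1 - saas.fdiv(diy))).round  # => 79
```

Tweak the hourly rate to your own situation; the gap only widens as your time gets more valuable.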

When DIY Makes Sense (And When It Doesn’t)

Look, I’m not saying you should never build anything custom. There are absolutely times when DIY is the right choice.

Build custom when:

  • The functionality is core to your competitive advantage
  • No existing tool solves your exact problem
  • You have the expertise to maintain it long-term
  • You’re building something genuinely novel
  • You have the team capacity to own it forever

Use SaaS when:

  • The functionality is commodity (auth, payments, email, etc.)
  • Reliability and uptime are critical
  • You want to focus on your core product
  • You’re a small team with limited time
  • You need compliance and security guarantees
  • You value your time more than monthly fees

The pattern is simple: build what makes you unique, buy what makes you functional.

The AI-Assisted Middle Ground

Here’s where it gets interesting: AI doesn’t just enable vibe coding. It also enables smarter SaaS integration.

You can use Claude or ChatGPT to:

  • Generate integration code for SaaS APIs faster
  • Debug webhook issues more efficiently
  • Build wrapper libraries around vendor SDKs
  • Create custom workflows on top of stable platforms

This is the sweet spot: using AI to accelerate your work with reliable tools, rather than using AI to replace reliable tools entirely.

Think of it like this: AI is an incredible co-pilot. But you still need the plane to have wings.
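
To make that middle ground concrete, here’s a hedged sketch of the “wrapper library around a vendor SDK” idea: a thin Ruby client that owns retries and response parsing, with the HTTP transport injected so the shape can be exercised without network access. All names are illustrative (including the model string); a real implementation would swap the lambda for your provider’s actual SDK or HTTP call.

```ruby
require "json"

# Thin wrapper around a vendor chat API: one place for retries and
# response parsing instead of raw calls scattered through the codebase.
class ChatClient
  class TransientError < StandardError; end

  def initialize(transport:, retries: 2)
    @transport = transport # callable: (payload_hash) -> raw JSON string
    @retries = retries
  end

  def complete(prompt, model: "gpt-4o-mini")
    payload = { model: model, messages: [{ role: "user", content: prompt }] }
    attempt = 0
    begin
      raw = @transport.call(payload)
      JSON.parse(raw).dig("choices", 0, "message", "content")
    rescue TransientError
      attempt += 1
      retry if attempt <= @retries
      raise
    end
  end
end

# Stub transport standing in for the real HTTP call:
fake = ->(_payload) { { choices: [{ message: { content: "hello" } }] }.to_json }
client = ChatClient.new(transport: fake)
puts client.complete("say hi") # => hello
```

The wrapper is yours and small enough to understand; the reliability underneath it stays the vendor’s problem.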

The Evolution Continues

My argument isn’t that AI tools are bad or that vibe coding is wrong. It’s that we need to be honest about the tradeoffs.

The next generation of successful products won’t be built by people who reject AI, and they won’t be built by people who reject SaaS.

They’ll be built by people who understand when to use each.

People who can vibe-code a prototype in a weekend, then have the discipline to replace it with proper infrastructure before it scales. People who use AI to augment their capabilities, not replace their judgment.

The future isn’t “AI vs. SaaS.” It’s “AI-enhanced SaaS.”

Tools that are easier to integrate because AI helps you. APIs that are easier to understand because AI explains them. Systems that are easier to maintain because AI helps you debug.

But beneath all that AI magic, there’s still reliable infrastructure, accountable teams, and boring old uptime guarantees.

Because at the end of the day, customers don’t care about your tech stack. They care that your product works when they need it.

Build for the Long Game

If you’re building something that matters, something you want customers to depend on, something you want to grow into a real business, you need to think beyond the MVP phase.

You need to think about what happens when you hit 100 users. Then 1,000. Then 10,000.

Will your clever weekend hack still work? Or will you be spending all your time keeping the lights on instead of building new features?

The most successful founders I know aren’t the ones who move fastest. They’re the ones who move sustainably, who build foundations that can support growth without collapsing.

They use AI to move faster. They use SaaS to stay reliable. They understand that both are tools, not religions.

Final Thoughts: Respect the Craft

There’s a romance to the idea of building everything yourself. Of being the 10x developer who needs nothing but an AI assistant and pure willpower.

But romance doesn’t ship products. Discipline does.

The best software is invisible. It just works. And making something “just work”, consistently and reliably at scale, is harder than anyone admits.

So use AI. Vibe-code your prototypes. Move fast and experiment.

But when it’s time to ship, when it’s time to serve real customers, when it’s time to build something that lasts, respect the craft.

Choose boring, reliable infrastructure. Pay for the SaaS tools that solve solved problems. Invest in quality over cleverness.

Because the goal isn’t to build the most innovative tech stack.

The goal is to build something customers love and trust.

And trust, as it turns out, is built on the boring stuff. The stuff that works when you’re not looking. The stuff that scales without breaking. The stuff someone else maintains at 3 AM so you don’t have to.

That’s what SaaS really sells.

And that’s why it’s not dead, it’s just getting started.


What’s your experience balancing custom-built solutions with SaaS tools? Have you hit the scale wall or the reliability gap? Share your stories in the comments. I’d love to hear what you’ve learned.

If you found this useful, follow me for more posts on building sustainable products in the age of AI, where we embrace new tools without forgetting old wisdom.

Rails Templating Showdown: Slim vs ERB vs Haml vs Phlex – Which One Should You Use?

This guide compares Ruby on Rails templating engines: ERB, Slim, Haml, and Phlex. It highlights each engine’s pros and cons, focusing on aspects like performance, readability, and learning curve. Recommendations are made based on project type, emphasizing the importance of choosing the right engine for optimal efficiency and maintainability.

If you’ve been working with Ruby on Rails for any length of time, you’ve probably encountered the age-old question: which templating engine should I use? With ERB as the default, Slim and Haml as popular alternatives, and Phlex as the new kid on the block, the choice can feel overwhelming.

In this comprehensive guide, I’ll break down each option, compare their strengths and weaknesses, and help you make an informed decision for your Rails projects.

Understanding the Landscape

Before diving into specifics, let’s understand what we’re comparing. Template engines are tools that help you generate HTML dynamically by embedding Ruby code within markup. Each engine has a different philosophy about how this should be done.

ERB (Embedded Ruby)

What is it? ERB is Rails’ default templating engine. It embeds Ruby code directly into HTML using special tags.

Syntax Example

<div class="user-profile">
  <h1><%= @user.name %></h1>
  <% if @user.admin? %>
    <span class="badge">Admin</span>
  <% end %>
  <ul class="posts">
    <% @user.posts.each do |post| %>
      <li><%= link_to post.title, post_path(post) %></li>
    <% end %>
  </ul>
</div>

Pros

Zero Learning Curve: If you know HTML and Ruby, you already know ERB. There’s no new syntax to learn, making it perfect for beginners and mixed teams.

Universal Support: Every Rails developer knows ERB. Every gem, tutorial, and Stack Overflow answer uses ERB. This ubiquity is valuable.

No Setup Required: It works out of the box with every Rails installation. No gems to add, no configuration needed.

Familiar to Other Ecosystems: The concept of embedding code in angle brackets exists in PHP, ASP, JSP, and many other frameworks. Developers coming from other backgrounds will feel at home.

Cons

Verbose: Writing closing tags for everything gets tedious. Your files become longer than they need to be.

Easy to Create Messy Code: Because ERB doesn’t enforce structure, it’s easy to mix business logic with presentation logic, leading to hard-to-maintain views.

Repetitive: You’ll find yourself typing the same patterns over and over. The lack of shortcuts makes ERB feel inefficient once you’ve experienced alternatives.

When to Use ERB

ERB is ideal when you’re starting a new project with junior developers, working with a team that values convention over optimization, or building simple CRUD applications where template complexity is minimal. It’s also the safe choice for open-source projects where maximum accessibility matters.

Slim

What is it? Slim is a lightweight templating engine focused on reducing syntax to its bare essentials. Its motto is “what’s left when you take the fat off ERB.”

Syntax Example

.user-profile
  h1 = @user.name
  - if @user.admin?
    span.badge Admin
  ul.posts
    - @user.posts.each do |post|
      li = link_to post.title, post_path(post)

Pros

Dramatically Less Code: Slim templates are typically 30-40% shorter than their ERB equivalents. This means faster writing and easier scanning.

Clean and Readable: Once you learn the syntax, Slim templates are remarkably easy to read. The indentation-based structure naturally enforces good organization.

Fast Performance: Slim compiles to Ruby code that’s often faster than ERB, though the difference is negligible in most applications.

Enforces Good Structure: The indentation requirement prevents messy, unstructured code. You can’t create a Slim template that doesn’t follow proper nesting.

Cons

Learning Curve: Team members need to learn new syntax. The first week will involve frequent reference to documentation.

Indentation Sensitivity: Like Python, Slim uses significant whitespace. A misplaced space or tab can break your template, which can be frustrating when debugging.

Less Common: Fewer developers know Slim compared to ERB. Hiring and onboarding may take slightly longer.

Limited Ecosystem Examples: While most gems work fine with Slim, documentation and examples are usually in ERB, requiring mental translation.

When to Use Slim

Slim shines in applications with complex views where you want to maximize readability and minimize boilerplate. It’s perfect for teams that value developer experience and are willing to invest a small amount of time upfront to learn the syntax. If you find yourself frustrated by ERB’s verbosity, Slim is your answer.

Haml

What is it? Haml (HTML Abstraction Markup Language) was one of the first popular alternatives to ERB. It uses indentation to represent HTML structure and eliminates closing tags.

Syntax Example

.user-profile
  %h1= @user.name
  - if @user.admin?
    %span.badge Admin
  %ul.posts
    - @user.posts.each do |post|
      %li= link_to post.title, post_path(post)

Pros

Mature and Stable: Haml has been around since 2006. It’s battle-tested and reliable with excellent documentation.

Cleaner Than ERB: Like Slim, Haml eliminates closing tags and reduces boilerplate significantly.

Good Ecosystem Support: Many gems and libraries explicitly support Haml, and you’ll find plenty of examples and resources online.

Enforces Structure: The indentation requirement keeps your code organized and prevents deeply nested chaos.

Cons

Slower Than Slim: Haml is noticeably slower than Slim in benchmarks, though for most applications this won’t matter.

More Verbose Than Slim: The % prefix for tags makes Haml slightly more verbose than Slim’s minimalist approach.

Indentation Sensitivity: Like Slim, whitespace matters. Mixing tabs and spaces will cause problems.

Feeling Dated: While still widely used, Haml hasn’t evolved as quickly as Slim. It lacks some of the refinements that make Slim feel more modern.

When to Use Haml

Choose Haml if you want an alternative to ERB but prefer a more established option with extensive community support. It’s a safe middle ground between ERB’s verbosity and Slim’s minimalism. Haml is particularly good if you’re maintaining a legacy codebase that already uses it.

Phlex

What is it? Phlex represents a radical departure from traditional templating. Instead of mixing Ruby with HTML-like syntax, Phlex uses pure Ruby classes to build views. It’s component-oriented and lends itself to type checking.

Syntax Example

class UserProfile < Phlex::HTML
  def initialize(user)
    @user = user
  end

  def template
    div(class: "user-profile") do
      h1 { @user.name }
      span(class: "badge") { "Admin" } if @user.admin?
      ul(class: "posts") do
        @user.posts.each do |post|
          li { a(href: post_path(post)) { post.title } }
        end
      end
    end
  end
end

Pros

Pure Ruby: No context switching between Ruby and template syntax. Your entire view is just Ruby code, which means better IDE support, easier refactoring, and familiar debugging.

Component Architecture: Phlex encourages building reusable components, leading to better code organization and DRY principles.

Type Safety: Because it’s pure Ruby, you can use tools like Sorbet or RBS for type checking your views.

Excellent Performance: Phlex is extremely fast, often outperforming other template engines significantly.

Testable: Components are just Ruby classes, making them easy to unit test without rendering overhead.

No Markup Parsing: Since there’s no template syntax to parse, there’s one less layer of complexity in your stack.
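
To ground the testability point above: a Phlex-style view is just an object that renders to a string, so you can assert on its output directly, with no controller, request, or rendering stack involved. Here’s a hand-rolled stand-in for the idea (real Phlex provides the element DSL, HTML escaping, and rendering context):

```ruby
# Sketch of the principle behind Phlex-style views: a component is a
# plain Ruby object that renders to a string, so it unit tests like
# any other object. (Hand-rolled; not the actual Phlex API.)
class Badge
  def initialize(label)
    @label = label
  end

  def call
    %(<span class="badge">#{@label}</span>)
  end
end

puts Badge.new("Admin").call # => <span class="badge">Admin</span>
```

A test is then a one-line equality assertion on `call`, which is exactly why component-heavy teams find this architecture attractive.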

Cons

Paradigm Shift: Phlex requires a completely different way of thinking about views. This isn’t just new syntax—it’s a new architecture.

Verbose for Simple Views: For basic templates, Phlex can feel like overkill. Writing div { h1 { "Hello" } } instead of <div><h1>Hello</h1></div> doesn’t feel like progress for simple cases.

Limited Ecosystem: Phlex is new. There are fewer examples, fewer ready-made components, and a smaller community.

No Designer-Friendly Workflow: Because Phlex is pure Ruby, front-end developers or designers who aren’t comfortable with Ruby will struggle to contribute to views.

Steep Learning Curve: Understanding how to structure Phlex components well takes time and experience.

When to Use Phlex

Phlex is ideal for component-heavy applications where you want maximum reusability and testability. It’s perfect for design systems, UI libraries, or applications with complex, interactive interfaces. Choose Phlex if your team is comfortable with Ruby and values type safety and performance. It’s also excellent for API-driven applications where you’re building JSON responses rather than full HTML pages.

The Comparison Matrix

Let me break down how these engines stack up across key criteria:

Performance

Winner: Phlex
Phlex is the fastest, followed closely by Slim; ERB sits in the middle, and Haml is the slowest. However, for most applications, template rendering isn’t the bottleneck; database queries and business logic are.

Readability

Winner: Slim
Once learned, Slim offers the best balance of conciseness and clarity. ERB is readable but verbose. Haml is good but slightly cluttered with % symbols. Phlex requires Ruby fluency to read comfortably.

Learning Curve

Winner: ERB
ERB has virtually no learning curve. Slim and Haml require a day or two to feel comfortable. Phlex requires rethinking your entire approach to views.

Ecosystem Support

Winner: ERB
ERB is universal. Everything supports it. Slim and Haml have good support but sometimes require translation. Phlex is still building its ecosystem.

Maintainability

Winner: Phlex/Slim
Phlex’s component architecture and Slim’s enforced structure both lead to highly maintainable codebases. ERB’s flexibility can become a maintainability liability. Haml sits in the middle.

Team Onboarding

Winner: ERB
Any Rails developer can contribute to ERB templates immediately. The alternatives require training time.

My Recommendations

After years of using all these engines in production, here’s what I recommend:

For New Projects with Small Teams

Use Slim. You’ll write less code, maintain cleaner views, and the learning investment pays off quickly. The performance gains are nice, but the real benefit is how much easier it is to scan and understand Slim templates.

For Large Teams or Open Source

Stick with ERB. The universal knowledge and zero onboarding friction outweigh the benefits of alternatives. Don’t underestimate the value of every Rails developer being able to contribute immediately.

For Component-Heavy Applications

Choose Phlex. If you’re building a complex UI with lots of reusable components, Phlex’s architecture will save you time in the long run. The learning curve is worth it for applications where component composition is central.

For Existing Projects

Don’t Rewrite. If your project already uses Haml or Slim, keep using it. If it uses ERB and you’re happy with it, don’t change. The cost of conversion rarely justifies the benefits.

For Learning

Start with ERB, then try Slim. Master Rails with its default templating engine first. Once you’re comfortable, experiment with Slim on a side project. After you understand the tradeoffs, you’ll be equipped to make informed decisions.

Mixing Engines

Here’s something many developers don’t realize: you can use multiple templating engines in the same Rails application. You might use ERB for most views but Phlex for a complex component or Slim for your admin interface.

This flexibility means you’re not locked into one choice forever. Start with ERB and migrate specific areas to alternatives as needs arise.
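
As a sketch of how this looks in practice (assuming the slim-rails and phlex-rails gems are in your Gemfile; the file names below are made up), Rails resolves each view template by its file extension, while Phlex components are ordinary classes you render:

```
app/views/posts/show.html.erb     # default ERB view
app/views/admin/stats.html.slim   # Slim, picked up automatically by extension
app/views/components/card.rb      # a Phlex component, renderable from either
```

No global switch is flipped; each file opts into its own engine.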

The Future

The Rails templating landscape is evolving. Phlex represents a new wave of thinking about views as components rather than templates. Meanwhile, tools like ViewComponent bridge the gap between traditional templates and component architecture.

My prediction? We’ll see more hybrid approaches where simple CRUD views use traditional templates while complex UIs leverage component-based systems like Phlex.

Conclusion

There’s no universally correct answer to “which templating engine should I use?” The right choice depends on your team, your project, and your priorities.

  • ERB for maximum compatibility and zero friction
  • Slim for optimal developer experience and clean code
  • Haml for a mature alternative with good ecosystem support
  • Phlex for component-driven architecture and maximum performance

My personal preference? I use Slim for most projects. The productivity boost is real, the syntax becomes second nature quickly, and I appreciate how it naturally encourages better code organization. But I’ve shipped successful applications with all four engines, and I wouldn’t hesitate to use any of them given the right context.

What matters most isn’t which engine you choose, but that you use it consistently and well. A well-structured ERB codebase beats a messy Slim project every time.

What’s your experience with Rails templating engines? Have you tried alternatives to ERB? I’d love to hear your thoughts in the comments below.


Want to dive deeper into Rails development? Subscribe to my newsletter for weekly tips and insights on building better Rails applications.

Why AI Startups Should Choose Rails Over Python

AI startups often fail due to challenges in supporting layers and product development rather than model quality. Rails offers a fast and structured path for founders to build scalable applications, integrating seamlessly with AI services. While Python excels in research, Rails is favored for production, facilitating swift feature implementation and reliable infrastructure.

TL;DR

Most AI startups fail because they cannot ship a product
not because the model is not good enough
Rails gives founders the fastest path from idea to revenue
Python is still essential for research but Rails wins when the goal is to build a business.

The Real Challenge in AI Today

People love talking about models
benchmarks
training runs
tokens
context windows
all the shiny parts

But none of this is why AI startups fail

Startups fail because the supporting layers around the model are too slow to build:

  • Onboarding systems
  • Billing and subscription logic
  • Admin dashboards
  • User management
  • Customer support tools
  • Background processing
  • Iterating on new features
  • Fixing bugs
  • Maintaining stability

The model is never the bottleneck
The product is
This is exactly where Rails becomes your unfair advantage

Why Rails Gives AI Startups Real Speed

Rails focuses on shipping
It gives you a complete system on day one
The framework removes most of the decisions that slow down small teams
Instead of assembling ten libraries you just start building

The result is simple
A solo founder or a tiny team can move with the speed of a full engineering department
Everything feels predictable
Everything fits together
Everything works the moment you touch it

Python gives you freedom
Rails gives you momentum
Momentum is what gets a startup off the ground

Rails and AI Work Together Better Than Most People Think

There is a common myth that AI means Python
Only partially true
Python is the best language for training and experimenting
But the moment you are building a feature for real users you need a framework that is designed for production

Rails integrates easily with every useful AI service:

  • OpenAI
  • Anthropic
  • Perplexity
  • Groq
  • Nvidia
  • Mistral
  • Any vector database
  • Any embedding store

Rails makes AI orchestration simple
Sidekiq handles background jobs
Active Job gives structure
Streaming responses work naturally
You can build an AI agent inside a Rails app without hacking your way through a forest of scripts
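
The pattern is easy to sketch without any framework at all. Here is a minimal, hypothetical illustration of the job-plus-streaming idea; the class and method names are mine, not from any gem, and in a real app this would be an Active Job subclass running on Sidekiq:

```ruby
# Framework-free sketch: run AI work off the request path and hand tokens
# to a callback as they arrive
class AiCompletionJob
  # `client` is any object responding to #complete(prompt) that yields
  # tokens; injected so the flow is testable without a network call
  def initialize(client:, on_token:)
    @client = client
    @on_token = on_token
  end

  def perform(prompt)
    tokens = []
    @client.complete(prompt) do |token|
      tokens << token
      @on_token.call(token) # in Rails: broadcast over Turbo Streams here
    end
    tokens.join
  end
end

# Stub standing in for an OpenAI/Anthropic client wrapper
class FakeClient
  def complete(_prompt)
    "Hello from the model".split.each { |t| yield "#{t} " }
  end
end

streamed = []
job = AiCompletionJob.new(client: FakeClient.new, on_token: ->(t) { streamed << t })
answer = job.perform("Summarize this order")
```

The point of the injected client is the same one Rails conventions make: keep the uncertain part (the model) behind a boundary the rest of the system can rely on.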

The truth is that you do not need Python to run AI in production
You only need Python if you plan to become a research lab
Most founders never will

Rails Forces You to Think in Systems

AI projects built in Python often turn into a stack of disconnected scripts
One script imports the next
Another script cleans up data
Another runs an embedding job
This continues until even the founder has no idea what the system actually does

Rails solves this by design
It introduces structure: Controllers, Services, Models, Jobs, Events
It forces you to think in terms of a real application rather than a set of experiments

This shift is a superpower for founders
AI is moving from research to production
Production demands structure
Rails gives you structure without slowing you down

Why Solo Founders Thrive With Rails

When you are building alone you cannot afford chaos
You need to create a system that your future self can maintain
Rails gives you everything that normally requires a team

You can add authentication in a few minutes
You can build a clean admin interface in a single afternoon
You can create background workflows without debugging weird timeouts
You can send emails without configuring a jungle of libraries
You can go from idea to working feature in the same day

This is what every founder actually needs
Not experiments
Not scripts
A product that feels real
A product you can ship this week
A product you can charge money for

Rails gives you that reality


Real Companies That Prove Rails Is Still a Winning Choice

Rails is not a nostalgia framework
It is the foundation behind some of the biggest products ever created

GitHub started with Rails
Shopify grew massive on Rails
Airbnb used Rails in its early explosive phase
Hulu
Zendesk
Basecamp
Dribbble
All Rails

Modern AI driven companies also use Rails in production
Shopify uses it to power AI commerce features
Intercom uses it to support AI customer support workflows
GitHub still relies on Rails for internal systems even as it builds Copilot
Stripe uses Rails for internal tools because the Python stack is too slow for building complex dashboards

These are not lightweight toy projects
These are serious companies that trust Rails because it just works

What You Gain When You Choose Rails

The biggest advantage is development speed
Not theoretical speed
Real speed
The kind that lets you finish an entire feature before dinner

Second
You escape the burden of endless decisions
The framework already gives you the right defaults
You do not waste time choosing from twenty possible libraries for each part of the system

Third
Rails was built for production
This matters more than people admit
You get caching, background jobs, templates, email, tests, routing, security, all included, all consistent, all reliable

Fourth
Rails fits perfectly with modern AI infrastructure: Vector stores, embedding workflows, agent orchestration, streaming responses. It works out of the box with almost no friction

This combination is rare
Rails gives you speed and stability at the same time
Most frameworks give you one or the other
Rails gives you both

Where Rails Is Not the Best Tool

There are honest limits. If you are training models, working with massive research datasets, writing CUDA kernels, or doing deep ML research, Python remains the right choice.

If you come from Python, the Rails conventions can feel magical or strange at first. You might wonder why things happen automatically, but the conventions are there to help you move faster.

Hiring can be more challenging in certain regions. There are fewer Rails developers, but the ones you find are usually very strong and often much more experienced in building actual products.

You might also deal with some bias. A few people still assume Rails is old. These people are usually too young to remember that Rails built half the modern internet.

The One Thing Every Founder Must Understand

The future of AI will not be won by better models. Models are quickly becoming a commodity. The real victory will go to the teams that build the best products around the models:

  • Onboarding
  • UX
  • Speed
  • Reliability
  • Iteration
  • Support tools
  • Customer insights
  • Monetization
  • All the invisible details that turn a clever idea into a real business

Rails is the best framework in the world for building these supporting layers fast. This is why it remains one of the most effective choices for early-stage AI startups.

Use Python for research
Use Rails to build the business
This is the strategy that gives you the highest chance of reaching customers
and more importantly
the highest chance of winning

The AI-Native Rails App: What a 2025 Architecture Looks Like

Introduction

For the first time in decades of building products, I’m seeing a shift that feels bigger than mobile or cloud.
AI-native architecture isn’t “AI added into the app”; it’s the app shaped around AI from day one.

In this new world:

  • Rails is no longer the main intelligence layer
  • Rails becomes the orchestrator
  • The AI systems do the thinking
  • The Rails app enforces structure, rules, and grounding

And honestly? Rails has never felt more relevant than in 2025.

In this post, I’m breaking down exactly what an AI-native Rails architecture looks like today, why it matters, and how to build it with real, founder-level examples from practical product work.

1. AI-Native Rails vs. AI-Powered Rails

Many apps today use AI like this:

User enters text → you send it to OpenAI → you show the result

That’s not AI-native.
That’s “LLM glued onto a CRUD app.”

AI-native means:

  • Your DB supports vector search
  • Your UI expects streaming
  • Your workflows assume LLM latency
  • Your logic expects probabilistic answers
  • Your system orchestrates multi-step reasoning
  • Your workers coordinate long-running tasks
  • Your app is built around contextual knowledge, not just forms

A 2025 AI-native Rails stack looks like this:

  • Rails 7/8
  • Hotwire (Turbo + Stimulus)
  • Sidekiq or Solid Queue
  • Postgres with PgVector
  • OpenAI, Anthropic, or Groq APIs
  • Langchain.rb for tooling and structure
  • ActionCable for token-by-token streaming
  • Comprehensive logging and observability

This is the difference between a toy and a business.

2. Rails as the AI Orchestrator

AI-native architecture can be summarized in one sentence:

Rails handles the constraints, AI handles the uncertainty.

Rails does:

  • validation
  • data retrieval
  • vector search
  • chain orchestration
  • rule enforcement
  • tool routing
  • background workflows
  • streaming to UI
  • cost tracking

The AI does:

  • reasoning
  • summarization
  • problem-solving
  • planning
  • generating drafts
  • interpreting ambiguous input

In an AI-native system:

Rails is the conductor. The AI is the orchestra.

3. Real Example: AI Customer Support for Ecommerce

Most ecommerce AI support systems are fragile:

  • they hallucinate answers
  • they guess policies
  • they misquote data
  • they forget context

An AI-native Rails solution works very differently.

Step 1: User submits a question

A Turbo Frame or Turbo Stream posts to:

POST /support_queries

Rails saves:

  • user
  • question
  • metadata

Step 2: Rails triggers two workers

(1) EmbeddingJob
– Create embeddings via OpenAI
– Save vector into PgVector column

(2) AnswerGenerationJob
– Perform similarity search on:

  1. product catalog
  2. order history
  3. return policies
  4. previous chats
  5. FAQ rules

– Pass retrieved context into LLM
– Validate JSON output
– Store reasoning steps (optional)
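
The grounding step can be sketched in plain Ruby: take the chunks the similarity search returned and build a constrained prompt. The helper name and prompt wording below are illustrative, not a fixed API:

```ruby
# Hypothetical helper: assemble retrieved chunks into a grounded prompt
# before the LLM call. In the real app the chunks come from a PgVector
# similarity search.
def grounded_prompt(question, chunks)
  context = chunks.map.with_index(1) { |c, i| "[#{i}] #{c}" }.join("\n")
  <<~PROMPT
    Answer using ONLY the context below. If the context does not contain
    the answer, say you do not know.

    Context:
    #{context}

    Question: #{question}
  PROMPT
end

prompt = grounded_prompt(
  "What is the return window?",
  ["Returns are accepted within 30 days.",
   "Refunds go to the original payment method."]
)
```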

Step 3: Stream the answer

ActionCable + Turbo Streams push tokens as they arrive.

broadcast_append_to "support_chat_#{id}"

The user sees the answer appear live, like a human typing.

Why this architecture matters for founders

  • Accuracy skyrockets with grounding
  • Cost drops because vector search reduces tokens
  • Hallucinations fall due to enforced structure
  • You can audit the exact context used
  • UX improves dramatically with streaming
  • Support cost decreases 50–70% in real deployments

This isn’t AI chat inside Rails.

This is AI replacing Tier-1 support, with Rails as the backbone of the system.

4. Example: Founder Tools for Strategy, Decks, and Roadmaps

Imagine building a platform where founders upload:

  • pitch decks
  • PDFs
  • investor emails
  • spreadsheets
  • competitor research
  • user feedback
  • product specs

Old SaaS approach:
You let GPT speculate.

AI-native approach:
You let GPT reason using real company documents.

How it works

Step 1: Upload documents

Rails converts PDFs → text → chunks → embeddings.
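
The chunking step in that pipeline can be sketched as an overlapping word window in pure Ruby. The window and overlap sizes here are illustrative, not a recommendation:

```ruby
# Hypothetical chunker: split extracted text into overlapping word-window
# chunks ready for embedding
def chunk_text(text, size: 50, overlap: 10)
  words = text.split
  step = size - overlap
  chunks = []
  (0...words.length).step(step) do |i|
    chunks << words[i, size].join(" ")
    break if i + size >= words.length # last window reached the end
  end
  chunks
end

text = (1..120).map { |i| "w#{i}" }.join(" ")
chunks = chunk_text(text)
```

The overlap matters: it keeps a sentence that straddles a boundary retrievable from either chunk.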

Step 2: Store a knowledge graph

PgVector stores embeddings.
Metadata connects insights.

Step 3: Rails defines structure

Rails enforces:

  • schemas
  • output formats
  • business rules
  • agent constraints
  • allowed tools
  • validation filters
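
One concrete form of those validation filters: parse the model's JSON and reject anything missing the expected keys before it reaches a user. The key names here are illustrative:

```ruby
require "json"

# Hypothetical validation filter: reject malformed or incomplete LLM output
def validate_llm_output(raw, required_keys: %w[summary risks])
  parsed = JSON.parse(raw)
  missing = required_keys - parsed.keys
  unless missing.empty?
    raise ArgumentError, "LLM output missing keys: #{missing.join(', ')}"
  end
  parsed
rescue JSON::ParserError
  raise ArgumentError, "LLM output was not valid JSON"
end
```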

Step 4: Langchain.rb orchestrates the reasoning

But Rails sets the boundaries.
The AI stays inside the rails (pun intended).

Step 5: Turbo Streams show ongoing progress

Founders see:

  • “Extracting insights…”
  • “Analyzing competitors…”
  • “Summarizing risks…”
  • “Drafting roadmap…”

This builds trust and increases perceived value.

5. Technical Breakdown: What You Need to Build

Below is the exact architecture I recommend.

1. Rails + Hotwire Frontend

Turbo Streams = real-time AI experience.

  • Streams for token output
  • Frames for async updates
  • No need for React overhead

2. PgVector for AI Memory

Install extension + migration.

Example schema:

# Assumes the pgvector extension is enabled; the `vector` column type
# comes from the neighbor gem
create_table :documents do |t|
  t.text :content
  t.vector :embedding, limit: 1536
  t.timestamps
end

Vectors become queryable like any column.
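
Under the hood, pgvector's `<->` operator is just a distance computation. Here is what "nearest neighbors" means, in plain Ruby with illustrative data; in production Postgres does this with an index:

```ruby
# Euclidean distance ranking: the same ordering pgvector's <-> produces
def nearest(query, docs, k: 2)
  docs.min_by(k) { |d| d[:embedding].zip(query).sum { |a, b| (a - b)**2 } }
end

docs = [
  { id: 1, embedding: [0.0, 0.0] },
  { id: 2, embedding: [1.0, 0.0] },
  { id: 3, embedding: [5.0, 5.0] }
]
result = nearest([0.1, 0.0], docs)
```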

3. Sidekiq or Solid Queue for AI Orchestration

LLM calls must never run in controllers.

Recommended jobs:

  • EmbeddingJob
  • ChunkingJob
  • RetrievalJob
  • LLMQueryJob
  • GroundedAnswerJob
  • AgentWorkflowJob

4. AI Services Layer

Lightweight Ruby service objects.

Embedding example:

class Embeddings::Create
  # Assumes the ruby-openai gem, whose client takes a `parameters:` hash
  def call(text)
    OpenAI::Client.new.embeddings(
      parameters: {
        model: "text-embedding-3-large",
        input: text
      }
    )["data"][0]["embedding"]
  end
end

5. Retrieval Layer

# Using the neighbor gem; interpolating a raw vector string into SQL risks injection
Document.nearest_neighbors(:embedding, embedding, distance: "euclidean").limit(5)

Grounding prevents hallucinations and cuts costs.

6. Streaming with ActionCable

Token streaming UX looks magical and retains users.

7. Observability Layer (Non-Optional)

Track:

  • prompts
  • model
  • cost
  • context chunks
  • errors
  • retries
  • latency

AI systems break differently than traditional code.
Logging is survival.
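
A minimal sketch of that observability layer, assuming nothing beyond plain Ruby: wrap each model call, record latency, and capture the error instead of losing it. The field names are illustrative; in Rails this would write to a table or a log pipeline:

```ruby
# Hypothetical instrumentation wrapper for every LLM call
def instrument_ai_call(model:, prompt:)
  started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  result = yield
  { model: model, prompt: prompt, error: nil, result: result,
    latency_s: Process.clock_gettime(Process::CLOCK_MONOTONIC) - started }
rescue => e
  { model: model, prompt: prompt, error: e.message, result: nil,
    latency_s: Process.clock_gettime(Process::CLOCK_MONOTONIC) - started }
end

log = instrument_ai_call(model: "gpt-4o-mini", prompt: "hi") { "ok" }
failed = instrument_ai_call(model: "gpt-4o-mini", prompt: "hi") { raise "boom" }
```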


6. How To Start Building This (Exact Steps)

Here’s the fast-track setup:

Step 1: Enable PgVector

Install and migrate.

Step 2: Build an Embedding Service

Clean, testable, pure Ruby.

Step 3: Add Worker Pipeline

One worker per step.
No logic inside controllers.

Step 4: Create Retrieval Functions

Structured context retrieval before every LLM call.

Step 5: Build Token Streaming

Turbo Streams + ActionCable.

Step 6: Add Prompt Templates & A/B Testing

Prompt engineering is your new growth lever.

7. Why Rails Wins the AI Era

AI products are:

  • async
  • slow
  • streaming-heavy
  • stateful
  • data-driven
  • orchestration heavy
  • context dependent

Rails was made for this style of work.

Python builds models.
Rails builds businesses.

We are entering an era where:

Rails becomes the best framework in the world for shipping AI-powered products fast.

And I’m betting on it again, like I did 15 years ago, but with even more conviction.

Closing Thoughts

Your product is no longer a set of forms.
In the AI era, your product is:

  • memory
  • context
  • retrieval
  • reasoning
  • workflows
  • streaming interfaces
  • orchestration

Rails is the perfect orchestrator for all of it.