That’s a wrap
The last few years have been quite a ride. Generative AI has invaded every single hidden corner of the web. Every LinkedIn member with more than a few followers suddenly claims to be an automation expert. Thousands of start-ups were born, almost out of nowhere, promising to AI-hammer every possible task one can imagine. Some called it a bull market, others called it a bubble. Personally, I think AI is a revolution, but it still has a long way to go to profoundly change most industries.
A large share of those first-wave companies are ChatGPT wrappers, bound to vanish. At the time of this writing, they are slowly dying, and I believe we will soon enter the interesting phase of any technological disruption: the post-bubble era.
The main question, then, is: Why do they die, and how can you survive?
I’m a tech founder and a mathematician by training; I studied generative adversarial networks (GANs) back when they were unfashionable. Yet I decided to wait before tackling this topic, especially its implementation in business. I think it’s wise to avoid the early low-quality noise, the bubbles, and the endless craze that accompany every technological revolution. AI is, without a doubt, a life-changing breakthrough. But the usual suspects are the usual bullshit makers: big-four consultants, MBAs, and some VC-backed CEOs with no technical background, tons of money, and an obsession with raising a few more bucks. As a consequence, the conversation around AI in business has revolved around the wrong questions, and the wrong people.
If all you have is a hammer, everything looks like a nail
Every breakthrough technology provokes a rush. Everyone suddenly wants to solve every problem with it. The same happened with blockchain — but this time it’s worse. The main difference is that LLMs are incredibly good at pretending to solve problems. They can act intelligent and simulate progress itself. Blockchain was too abstract for most people, even for many of its users. But AI feels tangible: it speaks, writes, reasons, and therefore can convince. It is a technology that flatters human vanity, and that makes its illusions more dangerous.
I faced those illusions in my personal journey. I’ve spent the last few years building an accounting software company. The accounting industry is one of the main targets of the AI revolution. It’s boring, repetitive, and filled with tasks that add zero real value—necessary, yes, but rarely creative.
Unsurprisingly, a myriad of wannabe tech CEOs has appeared out of nowhere with the same promise: “We automate your accounting and reduce your operational burden to zero.” They claim to free every CEO from the administrative noise, leaving only pure purpose and product creation. It’s a tempting lie. Most of them are wrong, and their clients learned that very quickly. The promise will one day be fulfilled, but not by them.
Their failures stem as much from ignorance as from dishonesty. Their approach is flawed not because it uses AI, but because they take the problem by the wrong end: they start with the tool rather than the problem it’s supposed to solve. Hence they mould the client’s challenges to fit their solution, which in this case is ChatGPT. A good product does the opposite: it fits an existing problem. Here is a good question to ask when trying to detect fragile AI companies: does AI really add value and a hard-to-imitate differentiator in their market? More often than not, the answer is no. Anyone can wrap an AI assistant with a prompt, and AI alone solves no critical problem without serious prerequisite work closer to the data and the infrastructure. In the rare cases where it can, the company will have no moat.
Most bubble-born AI startups try to use AI where a much simpler algorithm would do the trick. They operate at the wrong layer. In accounting, for instance, they appear only to read invoices. They don’t build an accounting system, nor do they integrate deeply into one. They just take invoices, read them, and send them somewhere else. That’s data brokerage disguised as intelligent computation. Others try to solve problems that shouldn’t even be addressed that way: forecasting with two or three months of data, or guessing how much of your money belongs to the tax agency when a simple percentage suffices. The same thing happens in other industries. In marketing, for instance, most so-called “AI tools” focus on blasting outreach campaigns to the wrong audience. The cost is high, the output low.
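To make the “wrong layer” point concrete, here is a minimal sketch of the kind of task that needs no model at all: setting aside a share of revenue for the tax agency. The flat rate and the function name are hypothetical, purely for illustration; the real figure depends on your jurisdiction and your accountant.

```python
from decimal import Decimal

# Hypothetical flat rate, purely for illustration; real rates depend on
# jurisdiction, legal form, and the accountant's judgment.
RESERVE_RATE = Decimal("0.20")

def tax_reserve(gross_revenue: Decimal, rate: Decimal = RESERVE_RATE) -> Decimal:
    """Set aside a fixed share of revenue for the tax agency.

    A single multiplication does the job; no model, no prompt, no API call.
    """
    return (gross_revenue * rate).quantize(Decimal("0.01"))

print(tax_reserve(Decimal("12500.00")))  # 2500.00
```

One multiplication, fully explainable, and it costs nothing to run. Burning tokens on this kind of task is exactly the inefficiency described above.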
In reality, 10 or 20 percent of tasks usually create 90 percent of the value. Yet the naive, brute-force application of AI ignores that logic: companies spend 90 percent of their computing resources on the 90 percent of tasks that barely matter. The result is spectacular inefficiency, presented as innovation.
Take marketing again. AI isn’t useful for spamming thousands of unqualified leads. It’s powerful when it digs deep into the web to “over-qualify” the right ones—the rare leads that actually matter, that align with your product, story, and purpose. Real leverage is precision, not volume. A company that wants to improve its sales pipeline must know its customer and craft a compelling story around its ideal buyer. Only then can AI be useful to reach qualified buyers at scale.
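As a rough sketch of what precision over volume can mean in practice, here is a deterministic pre-filter on hypothetical lead fields; only the leads that survive it would be worth any expensive, AI-assisted “over-qualification”. The field names and thresholds are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Lead:
    # Hypothetical fields for illustration; a real CRM export will differ.
    company: str
    industry: str
    employees: int
    uses_competitor: bool

def qualifies(lead: Lead) -> bool:
    """Deterministic pre-filter: cheap, explainable, and AI-free.

    Only leads passing these hard criteria deserve the costly,
    AI-assisted research and personalization step.
    """
    return (
        lead.industry == "accounting"
        and 10 <= lead.employees <= 200
        and not lead.uses_competitor
    )

leads = [
    Lead("Acme Books", "accounting", 45, False),
    Lead("MegaCorp", "retail", 12000, False),
]
shortlist = [lead for lead in leads if qualifies(lead)]
print([lead.company for lead in shortlist])  # ['Acme Books']
```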
Generally speaking, process automation should come after differentiation. In sales and marketing, unique and genuine storytelling, and an outstanding message featuring testimonials and an original brand, should come before outreach automation.
I am not telling you this from unreachable heights. I myself naively sent AI-generated sales messages to unqualified leads, then got angry at the rock-bottom conversion rate. That’s fine: sucking at being an entrepreneur is part of the entrepreneurial journey.
Differentiation is survival
When a tool becomes a standard, it is generally wise to consider it no longer a differentiator, but a prerequisite for survival. Yet many entrepreneurs served up nicely packaged LLM wrappers and thought it would be enough. It can be, over the short term, but the closer you are to the original tool — in this case, OpenAI or any other LLM provider — the sooner you will die.
At some point, your clients will realize they can just use ChatGPT or Claude and get the same result. In the tangible economy, this dynamic is obvious to anyone: you know the local grocery store charges a small premium compared to the wholesaler. In the digital economy, the same dynamic is hidden behind slogans, scroll-downs, fundraising rounds, and LinkedIn likes. The main point here: wrongly used, AI can turn a company’s product into a free commodity.
In other words, ubiquitous and poorly-engineered AI usage can lead to the erosion of a company’s differentiator — and differentiation is survival. If I use AI to interpret financial statements and send tax reports with zero human input, I’m doing exactly what every other firm will soon do. My clients will realize they can pay twenty dollars a month to get the same outcome elsewhere. Why pay three hundred for something built on the same model, the same interface, the same answers? AI should be between humans—not humans between AIs.
Back to my marketing example. Lazy community managers feed prompts to one AI to generate content, then send it to another AI that distributes it to random people. The human becomes a middleman between two algorithms, an anonymous courier in a loop of artificial messages. A good community manager, on the contrary, thinks deeply about who the ideal customer really is—the person with the right incentives, the right context, the right resonance with your story. That thinking cannot, and should never, be outsourced.
I find it useful to think of AI as a horizontal layer, just like the internet or electricity. At some point, its mere usage is not a differentiator, but a prerequisite for survival. For example, no serious large-scale retailer can live without e-commerce, and no industry has stayed the same after the advent of these horizontal layers. Nowadays, no one would call a company innovative because it has a website. For the same reasons, no one will see your product as revolutionary because it has a chatbot.
Differentiator first, then AI
Let’s summarize the most important messages here.
If you’re building a technology company, AI alone cannot be your differentiator. It must sit on top of something hard to imitate—an existing proprietary system, or unique data you already control, such as client interactions, documents, or behavioral patterns. AI should come after you’ve gained access to rare data or achieved an interesting, tedious feat of engineering. If anyone can replicate your product just by plugging into the OpenAI API, then you’re essentially selling hamburgers next to a McDonald’s.
When it comes to sales and marketing, AI cannot replace the most important part: knowing who you want to talk to and what you want to say. Sales and marketing are the most psychological, human aspects of company building. Your offer and your message are you. If a simple AI can define your story and your ideal customer, then it can do the same for anyone—and you’ll end up sounding like the same generic AI sludge as everyone else.
In both cases, AI should catalyze the differentiator, not define it. Ramp, one of the most successful fintechs of the past few years, is a perfect example: they built a world-class, tightly integrated product first, and then added AI to make it even smoother. That’s the right sequence—foundation before automation, meaning before intelligence.
AI should not define your core questions for you, and even less answer them. Intelligence, human or artificial, is only powerful when it’s directed toward valuable targets. Without that direction, AI becomes noise disguised as information, motion mistaken for progress, and automation mistaken for speed.