27 March 2026 (updated: 27 March 2026)
AI washing, a term coined by the AI Now Institute at NYU in 2019, is modeled directly on "greenwashing." Just as "eco-friendly" became a label applied to unchanged products, "AI-powered" has become a default signal for innovation, regardless of what's actually running under the hood.
In 2025, Builder.ai collapsed into bankruptcy.
The London-based startup had raised $445 million from investors including Microsoft and SoftBank, marketing an AI assistant called "Natasha" that could supposedly build software applications six times faster than traditional development. The pitch was compelling. The reality, revealed by Bloomberg, was rather different: behind the AI promise sat hundreds of human engineers doing the actual work, while revenues had been overstated by as much as 300%. The "AI" was a story, not a system.
Builder.ai is an extreme but not exceptional case — sitting on a spectrum, not in a category of its own. Across the product industry, a quieter and far more common version of the same problem plays out daily: a chatbot relabeled "intelligent," a filter renamed "smart recommendations," a search bar badged "AI-powered" with no semantic improvement underneath. The features ship, the marketing goes out, and users — sooner or later — notice the gap between the promise and the experience.
For product design, the stakes are practical as well as ethical. False AI claims create compounding dysfunction: engineering cycles get burned on capability theater, roadmaps get built around promises the technology can't keep, and trust — once lost — is disproportionately hard to rebuild.
The design question at the root of it all is simple: are we solving a real problem, or are we claiming a label?
TL;DR: Slapping "AI-powered" on a feature doesn't make it valuable; it makes it a liability. AI-washing in product design erodes user trust, bloats your product, and signals to savvy users that you're more interested in optics than outcomes. The antidote isn't avoiding AI, it's genuine AI product design: building with honesty, starting with the problem rather than the press release.
The pressure to use AI often seems too urgent: a competitor just launched something "intelligent," a stakeholder wants AI on the roadmap, marketing needs a narrative, and the decision gets made before the design process begins.
The UX tells the rest of the story.
The feature lands in the onboarding tour but never shows up when a user actually needs help. The recommendation surfaces at the wrong moment with no explanation. The "AI-powered" search returns the same results as before, just slower. Users feel the friction and they draw conclusions about the whole product.
Internally, the damage is quieter. Engineering gets redirected toward features that don't move meaningful metrics, complexity grows without corresponding value, and because the feature was never tied to a real problem, there's no honest way to measure whether it's working — which means it rarely gets cut, even when it should.
AI washing takes hold the moment the design question flips from "What problem are we solving?" to "Where can we put AI?"
Recognizing that inversion early is what separates products that earn the label from products that just wear it.
AI washing, in most cases, is the result of a business environment where the incentive to claim AI and the incentive to build it well have drifted far apart.
Parker Conrad, founder of Rippling, an HR platform valued at $13.5 billion, put it plainly in a TechCrunch interview: companies are racing to "sprinkle AI pixie dust" into their products, driven by a simple valuation logic.
"If I'm a SaaS company, my multiple is 7x — but if I change my name to whatever-my-name-was-before.ai, my multiple is like 50x."
Parker Conrad, founder of Rippling
In the first half of 2024 alone, AI companies accounted for 41% of all U.S. venture deal value, according to PitchBook. More than 40% of new unicorns carried an AI label. When the financial reward for claiming AI is this large, and the cost of verification is this low, the rational short-term move is obvious.
But money isn't the whole story.
The AI boom didn't just create a market opportunity: it created a brand-new identity for a wave of would-be tech visionaries, and identity can be a far more dangerous motivator than money.
Overnight, LinkedIn filled with AI strategists, AI visionaries, and AI-native founders ready to disrupt, transform, and revolutionize everything from enterprise procurement to pet nutrition. The language followed: every product became cutting-edge, every feature transformative, every roadmap revolutionary.
The technology was genuinely exciting — and that excitement, for many, crossed the line from conviction into ego.
Building an AI product stopped being a means to an end and became the end itself — the unicorn, the TED Talk (or, more often, the LinkedIn post).
That pressure cascades through organizations in predictable ways. Boards ask why AI isn't on the roadmap. Marketing needs a launch narrative. A competitor just shipped something called "intelligent." None of these conversations begin with a user problem — they end with a feature brief. And by the time it reaches a designer, the decision has already been made.
The result is AI bolted onto a product for narrative reasons rather than functional ones — a chatbot on a broken support flow, a generative summary attached to a report nobody reads, a rule-based filter relabeled as a smart recommendation engine.
AI washing isn't one thing; it spans from a mislabeled button to a fully fabricated capability, and the further along it goes, the harder it is to spot from the inside.
The test that cuts through all of it is simple: does the system learn and adapt from data, or does it follow instructions someone hardcoded? Everything else is a label.
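That test can be made concrete. Here is a minimal, hypothetical sketch in Python (all names invented for illustration, not any real product's code): the first router's behavior is frozen in hardcoded rules at write time, while the second changes how it routes as confirmed outcomes come in. Only the second has any claim to the label.

```python
from collections import defaultdict

# "AI-powered" in label only: behavior is fixed when the rules are written.
RULES = {"refund": "billing", "password": "account", "crash": "support"}

def rule_based_route(message: str) -> str:
    for keyword, queue in RULES.items():
        if keyword in message.lower():
            return queue
    return "general"

# Learns from data: every confirmed outcome shifts future behavior.
class AdaptiveRouter:
    def __init__(self):
        # word -> queue -> count, updated from resolved tickets
        self.counts = defaultdict(lambda: defaultdict(int))

    def learn(self, message: str, correct_queue: str) -> None:
        for word in message.lower().split():
            self.counts[word][correct_queue] += 1

    def route(self, message: str) -> str:
        scores = defaultdict(int)
        for word in message.lower().split():
            for queue, n in self.counts[word].items():
                scores[queue] += n
        return max(scores, key=scores.get) if scores else "general"
```

The rule-based router answers identically on day one and day one thousand; the adaptive one routes messages it has never seen by generalizing from feedback. That difference, not the label, is what the test detects.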
| Type | What it looks like | The question that exposes it |
| --- | --- | --- |
| The label is the feature | "AI-powered," "AI-enhanced," or "smart" appended to something built on static rules, keyword filters, or pre-coded logic | Can you explain what this feature does without using the word "AI"? |
| A chatbot appeared and nobody asked why | A bot bolted onto existing software as a cosmetic layer signaling modernity rather than solving anything | Is there a user problem this chatbot solves — or did it appear because chatbots feel like AI? |
| Humans are doing the work the AI was supposed to do | AI-promised automation delivered by undisclosed human labor — invisible to users until it isn't | If the AI were removed tomorrow, would anything change — or is someone quietly filling the gap? |
| The AI built the product, but isn't in it | AI used during development to write code or generate copy, marketed as an AI-powered product feature | Is the AI running inside the product experience — or did it just help ship it? |
| Personalization is just segmentation | Static audience segments with pre-assigned rules marketed as "AI-driven personalization" | Does the system change its behavior based on what individual users actually do — or are they slotted into pre-defined buckets? |
| The feature lives in onboarding, not the critical journey | AI prominent in the first five minutes, absent in the moments users actually need help | Where does this feature appear? Is it in the flow users care about — or a highlight reel that gets skipped after day one? |
| No measurable outcome tied to it | Metrics limited to impressions or feature activations, with no connection to user outcomes | What does success look like for this feature — and can we actually measure it? |
Put simply, one core question separates genuine AI integration from AI washing: "What problem are we solving?" rather than "Where can we add AI?"
It sounds like a no-brainer, but in practice it rarely happens. When AI enters the product conversation as an answer rather than a tool, the result is almost always a feature built around a label rather than a need.
Here is a framework that can help mitigate that risk.
| Step | The question to ask | What bad looks like |
| --- | --- | --- |
| Problem | What is the user struggling with — specifically, how often, and at what cost? | "Users would probably find this useful" |
| User Need | What outcome would make this meaningfully better? | "An AI feature" |
| AI as a tool | Is AI the right way to deliver that outcome — or would something simpler solve 80% of the problem? | Adding AI because it's available, not because it's necessary |
AI earns its place when the task involves pattern recognition, contextual adaptation, or processing unstructured data at scale. It doesn't earn its place just because it's on the roadmap.
This is the core of how we think about AI feature prioritization: validate the problem first, verify the data second, only then decide whether AI is the right lever.
| Infrastructure | Decoration |
| --- | --- |
| Solves a problem users experience daily | Solves a problem the roadmap needed |
| Invisible — the product just works better | Visible — announces itself, then underdelivers |
| Remove it and the product gets meaningfully worse | Remove it and nothing changes |
| Measured by user outcomes | Measured by feature activations |
The difference is always decided before a line of code is written — in the moment a team either asks what problem they're solving, or skips that question to get to the build.
How a feature communicates uncertainty, handles failure, and respects user control determines whether AI builds trust or erodes it.
Here are five principles that apply before, during, and after launch:
The difference between genuine AI and AI washing isn't always visible in the interface. It shows up in what happens after launch — in whether users come back to the feature, whether it gets better over time, and whether removing it would actually hurt.
❌ Ryanair's chatbot was marketed as AI-driven customer service while operating on simple keyword-matching rules. Strip the "AI" label and what's left is a lookup table with a friendly face.
✅ Perplexity built its search engine around a specific friction point: traditional search returns links, not answers. By combining semantic search, real-time retrieval, and inline citations, it surfaces synthesized responses grounded in verifiable sources. The AI isn't a label on the interface — it's the mechanism that makes the product work at all. Remove it and there's nothing left.
❌ "AI-driven personalization" that assigns users to static segments with pre-set content rules. The recommendations don't change based on what users actually do. They just have a better name.
✅ Spotify's Discover Weekly was built to solve a specific friction point: finding new music was cumbersome and labor-intensive. The feature uses collaborative filtering, NLP, and audio analysis to surface songs users haven't heard — and adapts continuously from behavior signals like skips, saves, and replays. When it failed to update one Monday, users noticed immediately. That reaction, as Spotify's own engineers noted, was the clearest possible validation that the AI was doing real work.
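The segmentation-versus-personalization contrast fits in a few lines of code. This is a hypothetical sketch (invented names, not Spotify's actual system): the segment lookup returns the same list to every user in a bucket forever, while the per-user model re-ranks candidates after each save or skip.

```python
# Static segments: every user in a bucket sees the same list, unchanged.
SEGMENT_PLAYLISTS = {"pop_fans": ["track_a", "track_b"], "jazz_fans": ["track_c"]}

def segment_recommend(segment: str) -> list[str]:
    return SEGMENT_PLAYLISTS.get(segment, [])

# Per-user adaptation: each signal shifts what surfaces next.
class UserTaste:
    def __init__(self):
        self.scores: dict[str, float] = {}

    def feedback(self, genre: str, saved: bool) -> None:
        # A save raises a genre's weight; a skip lowers it.
        self.scores[genre] = self.scores.get(genre, 0.0) + (1.0 if saved else -1.0)

    def recommend(self, candidates: dict[str, str]) -> str:
        # Pick the candidate track (id -> genre) this user has responded to best.
        return max(candidates, key=lambda t: self.scores.get(candidates[t], 0.0))
```

Two users in the same "segment" end up with different recommendations the moment their behavior diverges; that divergence is what the word "personalization" is supposed to mean.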
❌ Amazon's Just Walk Out was marketed as fully AI-powered cashierless checkout. Around 700 of every 1,000 transactions required review by human contractors in India. The automation was real in the marketing, invisible in the operation.
✅ GitHub Copilot learns from the context of the codebase being written — not just the current line, but the file, the project structure, and patterns across millions of repositories. It gets measurably more useful the more a developer uses it. Microsoft reported developers completing tasks up to 55% faster. Remove it and the workflow returns to what it was before — that's the test.
❌ McDonald's AI ordering system went viral for adding 260 Chicken McNuggets to a single order. The system couldn't handle corrections, context, or ambiguity — the basic requirements of conversation. The AI label was on a feature that wasn't ready to carry it.
✅ A voice ordering system that handles corrections naturally, confirms understanding before completing an order, and degrades gracefully when uncertain — rather than confidently getting things wrong — earns the label by making the experience better, not just different.
❌ Auto-generated summaries attached to reports nobody reads. The AI produces output; no one checks whether anyone uses it. The metric is generation volume, not user value.
✅ Contextual content surfaced at the exact moment of a decision — a brief, relevant summary that appears when a user is about to act, not as a default feature that runs on everything. The difference is knowing when AI helps and when it's just noise.
Before any AI feature ships, run it through these four questions.
They don't require a process, a tool, or a meeting — just honest answers.
| Question | What a good answer looks like | Red flag |
| --- | --- | --- |
| Can you explain the feature's value without using the word "AI"? | A clear, concrete benefit: saves time, reduces errors, surfaces the right information at the right moment | The value disappears without the label |
| What does the user experience if the model is wrong? | A graceful fallback, a visible correction mechanism, or an honest empty state | The feature fails silently — or worse, confidently |
| Is this solving friction that existed before — or creating new friction? | Users are doing something faster, with less effort, or with more confidence than before | The AI introduced a new step, a new uncertainty, or a new thing to learn |
| Would this ship if it weren't "AI"? | Yes — the underlying value stands on its own | No — and that's the answer |
If any of these questions lacks a clean answer, the feature isn't ready to ship. It may not even be ready to build.
The AI buzz will settle. It always does — dot-com, blockchain, the metaverse. What survives every cycle isn't the products that rode the narrative hardest, but the ones that solved real problems well enough that people kept coming back.
AI is a genuinely extraordinary material — more powerful than animation, more expressive than typography, capable of things no previous design tool could do.
But the principle that governs all design materials hasn't changed: they should serve the experience, not define it. The best AI, like the best type, is invisible in the right way. Users don't think "this product uses AI." They think "this just works."
When the water settles, that's the only product worth being.
What is AI-washing in software products?
AI washing in product design is the practice of labeling a feature or product as "AI-powered" when the underlying technology is basic automation, rule-based logic, or, in some cases, undisclosed human labor. The label exists for marketing reasons, not functional ones.
How to tell if a product is AI-washing?
Ask whether the system learns and adapts from data, or simply follows hardcoded instructions. If the AI label disappeared and nothing about the feature changed, that's your answer. Other signals: the feature lives in the onboarding tour but not the critical user journey, there's no measurable outcome tied to it, and users either ignore it or don't trust it.
When should you add AI to your product?
When you can answer three questions honestly: what specific problem does this solve, what does the user actually need, and is AI the right tool, or would something simpler solve 80% of the problem at a fraction of the cost? Responsible AI product development means the starting point is always the problem, never the capability.
How to design AI features that actually help users?
Start with the problem, not the capability. Good AI UX design builds in explainability so users understand what the AI is doing and why. Design the fallback before you design the feature. Give users the ability to verify, override, or opt out. And measure task completion and error rates, not just clicks and activations. These are the AI design patterns that hold up over time.
Signs your product is using AI as a gimmick?
The feature is prominently showcased in onboarding but absent in the critical user journey. The value proposition disappears without the "AI" label. There's no measurable outcome tied to it. The model fails silently with no fallback. Or, the most honest test, it wouldn't have made the roadmap if it weren't called AI. Poor AI-driven product decisions almost always share this last trait.
How to build trust with AI in UX design?
AI transparency in product design means showing users why the AI made a recommendation, not just what it recommended. Design graceful failure states. Build feedback mechanisms so errors get captured and corrected. Never automate a decision the user can't reverse. AI feature design that respects user control builds trust the same way trust in anything is built, by being honest about what it can and can't do.