For a year or two now, the rhetoric from the big consulting firms has been that if you’ve not already adopted AI, you’ll be too late to the game and your competitors will have an unassailable lead. Luckily, that’s rubbish. If anything, those competitors will have wasted a ton of cash and resources, annoyed their employees and customers in vain, and sometimes had to unwind it all when the AI turned out to be a disaster. These examples should serve as cautionary tales.
Zillow’s machine-learning algorithm cost the business $540 million, leading to 2,000 job losses.
In February 2021, Zillow, a leading US real estate marketplace, announced that it would leverage its AI-generated home valuation tool to make cash offers to home sellers seeking a quick sale¹. In a press release, Zillow’s Chief Operating Officer Jeremy Wacksman said: “This is a proud moment for Zillow's tech team and speaks to the advancements they've made in machine learning and AI technology.” Nine months later, Zillow announced that it was exiting its Zillow Offers business², closing the division with a loss of 2,000 jobs and writing off $540 million after overpaying for houses they were forced to sell at a loss. The algorithm was baffled by unusual movements in the US housing market post-Covid. “We’ve determined the unpredictability in forecasting home prices far exceeds what we anticipated, and continuing to scale Zillow Offers would result in too much earnings and balance-sheet volatility,” said Rich Barton, Zillow’s co-founder and CEO (somewhat red-faced, I imagine).
NYC’s Microsoft-powered chatbot advises small businesses to break the law.
In October 2023, New York City Mayor Eric Adams announced that an AI-powered chatbot developed with Microsoft would help New York business owners navigate government regulations. In March 2024, The Markup³ (a “nonprofit newsroom that investigates how powerful institutions are using technology to change our society”) reported that the AI chatbot was telling businesses to break the law⁴. Ironically, the chatbot was launched shortly after the Mayor’s fanfare announcing the release of a comprehensive New York City Artificial Intelligence Plan⁵.
This plan, the Mayor said, would “empower city agencies to deploy technologies that can improve lives while protecting against those that can harm.” As it turned out, the AI chatbot advised landlords that they could illegally discriminate against tenants on rental assistance, told restaurant and bar owners they could illegally take a cut of their workers’ tips, told funeral home operators they could illegally conceal their prices, and told retail operators they could go cashless despite a legal requirement since 2020 in NYC to accept cash payments. Whoops.
McDonald’s & IBM Consulting have a drive-through fiasco.
McDonald’s announced in June 2024 that it was ending a three-year project with IBM Consulting to deploy AI to take drive-through orders. This was after hundreds of annoyed customers posted videos on social media as they tried in vain to get the AI to understand what they wanted to order. A now-infamous TikTok video⁶ showed two customers repeatedly trying to get the AI to stop as it kept adding more orders for Chicken McNuggets, eventually ordering 260 portions. In other videos, the AI ordered nine iced teas for a customer instead of one, was stumped when asked why a Mountain Dew drink was unavailable, and thought another customer was ordering bacon to add to his ice cream. I dread to think how many millions of dollars were wasted.
Air Canada and the chatbot lies they tried to disown.
In February 2024, Air Canada was ordered to pay compensation to a customer (Jake Moffatt) after a chatbot gave him misleading information on bereavement fares when he was booking a ticket after his grandmother passed away in November 2023. The chatbot told him to buy a full-fare ticket and apply for a bereavement discount within 90 days.
But when he applied as instructed, the airline refused the application, citing a policy that bereavement discounts could not be claimed on previously purchased tickets. Mr Moffatt took the airline to a tribunal, claiming it was negligent in providing false information via its AI virtual assistant. In a beautiful example of corporate accountability, Air Canada argued that it “cannot be held liable for the information provided by the chatbot⁷”.
In summing up the decision to uphold Mr Moffatt’s complaint, Tribunal Member Christopher C. Rivers made the following observation: “Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot. It does not explain why it believes that is the case. In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.”
Sports Illustrated accused of misleadingly publishing content generated by fake AI authors.
In November 2023, the online magazine Futurism accused Sports Illustrated of publishing content by fake AI-generated authors⁸. They cited the example of an article purportedly by Drew Ortiz, whose author biography at Sports Illustrated⁹ suggested he was entirely human:
“Drew likes to say that he grew up in the wild, which is partially true. He grew up in a farmhouse surrounded by woods, fields, and a creek. Drew has spent much of his life outdoors and is excited to guide you through his never-ending list of the best products to keep you from falling to the perils of nature. Nowadays, there is rarely a weekend goes by where Drew isn't out camping, hiking, or just back on his parents' farm”.
However, Futurism noted that Drew didn’t exist outside this profile. He had no presence on social media and no history of publishing articles in other publications. Furthermore, his profile photo was from a website that sold AI-generated images. The photo caption read: “neutral white young-adult male with short brown hair and blue eyes.”
After Futurism asked Sports Illustrated’s publisher, Arena Group, to comment, Drew’s article and others like it were taken down. Arena Group stated that the articles in question “were product reviews and were licensed content from an external, third-party company, AdVon Commerce”, saying, “We have learned that AdVon had writers use a pen or pseudo name in certain articles to protect author privacy - actions we don't condone - and we are removing the content while our internal investigation continues and have since ended the partnership.”
In response, Futurism said, “Our sources familiar with the creation of the content disagree.” The whole fiasco did not go down well with the Sports Illustrated Union, which issued the following statement:
“If true, these practices violate everything we believe in about journalism. We deplore being associated with something so disrespectful to our readers.”
iTutor Group’s ageist recruiting AI debacle.
In August 2023, iTutor Group, a US-based online tutoring company, agreed to pay $365,000 to settle a lawsuit initiated by the US Equal Employment Opportunity Commission (EEOC). The lawsuit accused the company, which provides remote tutoring services to Chinese students, of using AI-powered recruiting software that automatically rejected female applicants over 55 and male applicants over 60.
According to the EEOC, over 200 qualified applicants were rejected by the software on grounds of age. EEOC Chair Charlotte Burrows said: “Age discrimination is unjust and unlawful. Even when technology automates the discrimination, the employer is still responsible.” iTutor Group denied any wrongdoing but agreed to settle and put in place strengthened anti-discrimination policies.
If you are still thinking of pressing ahead with AI replacing people, you may wish to consider the worst-case scenarios that will be conspicuously absent from the business case the consultants wrote for you.
Excerpts from Magnetic Nonsense: A Short History of Bullshit at Work and How to Make it Go Away
Feel free to Buy me a coffee if you would like to support the publication and research. Thank you.
1. https://zillow.mediaroom.com/2021-02-25-Zillow-Starts-Making-Cash-Offers-For-the-Zestimate
2. https://edition.cnn.com/2021/11/02/homes/zillow-exit-ibuying-home-business/index.html
3. https://themarkup.org/
4. https://themarkup.org/news/2024/03/29/nycs-ai-chatbot-tells-businesses-to-break-the-law
5. https://www.nyc.gov/assets/oti/downloads/pdf/reports/artificial-intelligence-action-plan.pdf
6. (embedded TikTok video)
7. https://decisions.civilresolutionbc.ca/crt/crtd/en/item/525448/index.do
8. https://futurism.com/sports-illustrated-ai-generated-writers
9. https://web.archive.org/web/20221205082417/https://www.si.com/review/author/drewortiz/
Ohhh what an excellent unpacking of the AI gold rush — less a revolution and more a collective hallucination, where companies race to automate things they barely understand in the first place, then act surprised when the algorithm does what algorithms do best: follow orders without wisdom.
The consulting firms’ “adopt or die” mantra reeks of the same urgency they once used to sell digital transformation, synergy, and blockchain everything — a cocktail of techno-evangelism and plausible deniability. But as you brilliantly point out, haste does not equal strategy. And most of these cautionary tales boil down to one fatal flaw: the belief that AI is a shortcut around complexity, rather than a tool that must be embedded in it.
Take Zillow. The tragic comedy wasn’t just a bad algorithm; it was executives mistaking statistical extrapolation for economic foresight. Predictive models are brittle precisely because they can’t imagine the future, only remix the past. Covid broke the market logic, and the model hallucinated right along with the humans. What’s worse, they trusted it more than they trusted their own underwriters.
AI failures aren’t only technical. They’re profoundly organisational. They reveal weak governance, poor data hygiene, shallow ethical foresight, and an over-reliance on delegation to machines without corresponding accountability structures. The chatbot debacles — from Air Canada to NYC — show what happens when human expertise is removed but responsibility isn’t redistributed. It’s not the AI that is rogue; it’s the system around it that is reckless.
As a pragmatic example, compare these failures to Estonia’s use of AI in public services. There, implementation was slow, domain-specific, legally bounded, and involved humans in the loop. They didn’t chase headlines. They built infrastructure. Which, ironically, made them leap ahead by not leaping too far.
AI should augment judgment, not replace it. And the firms preaching AI as a silver bullet are often selling you both the bullet and the wound.
Bravo on this round-up, Paul, it’s a much-needed dose of critical clarity in a space too easily dazzled by hype (especially when it comes to AI)!
Great round-up of AI cock-ups. It's GIGO for the digital age. AI is the latest iteration of giving a roomful of chimpanzees typewriters and expecting them to come up with The Bible.
My mate Antony Malmo argues that we're using AI for the wrong things. We should be using it to help with co-ordination across functions and integrating silos, where its 'fuzzy dot connecting' will give new insights.