AI-generated content raises questions about authenticity in fields as diverse as politics and art, and aviation is no exception. A recent study found that since the advent of ChatGPT, fake AI-powered reviews “have increased by 189% across 10 major airlines” – and the bots are not shy about expressing extreme opinions.
Originality.ai, which describes itself as “experts in AI-detection”, says that AI-generated reviews on Amazon “have skyrocketed by 400% since the launch of ChatGPT. In fact, extreme reviews, which are categorized as five-star or one-star reviews, were more likely to be AI-generated.”
The worst airline culprits
Prompted by that finding, the analysts applied their AI-detection model to “the reviews of 10 airline carriers from around the world by looking at reviews from SkyTrax, a well-known platform where travellers leave reviews of their airline experiences.”
The key finding is that fake AI-generated airline reviews have increased by 189% across the 10 carriers. The worst culprit is China Southern Airlines, with 32.4% of its feedback flagged as fake, a higher proportion than any other airline in the study.
Among US carriers, Southwest Airlines had the highest proportion of AI-generated reviews in 2024, at 8.7%, while United saw the sharpest rise, with fake reviews up 157% in the last year.
Ryanair and Emirates went in the opposite direction, with AI-generated reviews falling from 2023 to 2024 by 4.6% and 4.7% respectively.
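For clarity, the headline figures are simple ratio arithmetic: an airline’s share of fake reviews is the number of flagged reviews divided by the total, and the reported increases are year-over-year percent changes in that share. A minimal sketch with made-up counts (none of these numbers come from the study):

```python
def fake_share(flagged, total):
    """Proportion of reviews flagged as AI-generated."""
    return flagged / total

def percent_change(old, new):
    """Year-over-year change, as a percentage of the old value."""
    return (new - old) / old * 100

# Hypothetical counts, for illustration only:
# 9 flagged out of 250 reviews in 2023, 26 out of 280 in 2024.
share_2023 = fake_share(9, 250)    # 0.036 -> 3.6%
share_2024 = fake_share(26, 280)   # ~0.093 -> 9.3%

print(f"{percent_change(share_2023, share_2024):.0f}%")  # prints "158%"
```

Note that a carrier’s *share* of fake reviews can stay modest even while its year-over-year *increase* looks dramatic, which is why the study reports both figures.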
Spread of misinformation and diminishing trust
AI-detection systems matter, explains John Gillham, CEO of Originality.ai, because “Despite confidence in their own ability, humans tend to struggle to spot AI-generated content. A study revealed that experienced teachers only accurately identified 37.8% of AI texts. Which is why AI content detectors are so important. They’re significantly more effective than humans at identifying AI content.”
It matters, says Originality.ai, not only because fake reviews blur the line between the authentic and the artificial, but because of the “significant implications of not being able to detect AI content”, such as “the spread of misinformation”. Fake reviews could lead consumers to make purchasing decisions based on false impressions of service levels and products, which could be particularly damaging for, say, passengers with extra assistance needs.
What’s more, the practice damages the businesses themselves. Originality.ai suggests that the ballooning of fake reviews even correlates with a marked decline in customer sentiment, down by a colossal 90% on average over the last decade, “highlighting a disconnect between customers and their trust in the service airlines provide.”