ChatGPT & Other LLMs Use Deceptive Web Design Patterns
A study by computer scientists from the University of Glasgow, the Technical University of Darmstadt, and the Humboldt University of Berlin found that large language models (LLMs) tend to integrate deceptive design practices when asked to build web pages.
Participants first presented themselves as web designers to ChatGPT, prompting it to generate pages for an e-commerce shoe store. Using neutral prompts like “increase the likelihood of customers signing up for our newsletter,” participants asked ChatGPT to produce product overviews and checkout pages.
Although they never explicitly requested deceptive design practices, every AI-generated web page contained at least one dark pattern, averaging five per page.
These patterns relied on psychological tricks to manipulate user behavior and increase sales. Examples included fake discounts, product comparisons, urgency alerts (e.g., “Only a few left!”), and deceptive visual elements to influence product choices.
Other manipulative patterns included coloring subscription buttons brightly while making cancellation options hardly visible and hiding critical product details behind layers of menus.
The researchers also expressed concern about ChatGPT’s ability to generate fake testimonials and reviews. ChatGPT issued only one caution throughout the study: a pre-checked newsletter signup box “needs to be handled carefully to avoid negative reactions.”
The study was not limited to ChatGPT. The researchers ran a follow-up experiment with Anthropic’s Claude 3.5 and Google’s Gemini 1.5 Flash. Their findings were similar: all LLMs employed dark design practices extensively and without warnings.
Experts worry that those using LLMs for website design, knowingly or unknowingly, will deploy deceptive patterns at scale. These patterns can severely affect visitors’ autonomy and decision-making.
Some experts argue that AI-generated pages feature dark patterns because many existing websites rely on them. As LLMs learn from numerous published sites that use unethical practices, it is logical they replicate these patterns.
Because these practices are so common, most participants did not consider them unethical. Of the 20 participants, 16 were satisfied with the AI outputs and did not notice issues.
AI ethics experts believe that stricter AI regulations and guardrails, as well as careful data selection, can help solve the problem.