AI Models May Transmit Hidden Biases Through Training Data, Study Warns

A recent study in Nature reveals that large language models (LLMs) can inherit undesirable behaviors and biases from other AI systems during training, even when explicit malicious content is filtered out. This transmission occurs through hidden signals in AI-generated data, raising concerns about the ethical implications of using such data. The findings highlight a growing challenge in AI development as reliance on AI-generated content increases.

Why this is undercovered

Nature News reports that AI models can subliminally transmit undesirable behaviors and biases when their outputs are used to train other systems, a significant concern for technology ethics and safety. Mainstream media has largely overlooked this technical finding, focusing instead on broader AI narratives or unrelated political stories and missing its public-interest relevance to AI accountability.

A new study published in Nature has uncovered a concerning risk in the development of artificial intelligence (AI) systems: large language models (LLMs), such as those powering chatbots like ChatGPT, can inadvertently inherit undesirable behaviors and biases when trained on data generated by other AI models. This transmission can occur even when rigorous screening processes are in place to exclude overtly malicious content, according to research by Cloud et al. (Nature News). The findings raise critical questions about the safety and ethics of AI systems as they become increasingly integrated into real-world applications, from email automation to financial transactions.

The study, detailed in Nature, explains that the issue stems from the growing practice of training LLMs on AI-generated data. As developers exhaust the supply of freely available human-generated content, they turn to outputs from existing AI models to expand their datasets. However, this process can embed hidden signals or patterns that carry over undesirable traits from one model to another. Even with filters designed to remove harmful content, these subtle influences persist, potentially leading to unintended behaviors in the newly trained models (Nature News).
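
To make the mechanism concrete, the sketch below mirrors the pipeline shape this paragraph describes: a teacher model generates text, a content filter screens it, and the survivors become training data. It is a deliberately simplified toy; the function names, blocklist, and stand-in teacher are all illustrative assumptions rather than the study's code. Its point is that a filter operating on surface content has no purchase on the statistical regularities a trait can ride on.

```python
# Minimal sketch of the synthetic-data pipeline described above. Every
# name here (BLOCKLIST, passes_filter, generate_synthetic_corpus, the toy
# teacher) is a hypothetical stand-in, not code from the study.

BLOCKLIST = {"attack", "exploit", "slur"}  # toy surface-level screen

def passes_filter(text: str) -> bool:
    """Reject overtly harmful content by keyword matching. This inspects
    surface content only: it cannot see the statistical regularities
    (phrasing habits, token patterns) through which a teacher model's
    traits may be encoded."""
    lowered = text.lower()
    return not any(word in lowered for word in BLOCKLIST)

def generate_synthetic_corpus(teacher, prompts):
    """Use an existing model's outputs as training data, as developers do
    when freely available human-written text runs short."""
    outputs = [teacher(p) for p in prompts]
    return [t for t in outputs if passes_filter(t)]

def toy_teacher(prompt: str) -> str:
    # A benign-looking completion can still carry the teacher's
    # systematic quirks, the "hidden signals" at issue.
    return f"Sure! Regarding {prompt}, consider options A, B, and C."

training_data = generate_synthetic_corpus(toy_teacher,
                                          ["loan approvals", "hiring criteria"])
print(training_data)  # passes the filter, yet remains teacher-flavored
# A student model fine-tuned on this corpus inherits whatever
# distributional fingerprint the teacher left behind.
```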

Oskar J. Hollinsworth and Samuel Bauer, researchers at FAR.AI in Berkeley, California, who authored the Nature commentary on the study, emphasize the dual nature of AI’s potential. While LLMs offer significant value as tools for various tasks, they also pose “catastrophic risks” if harmful traits are propagated unchecked. The ability of AI systems to execute real-world actions amplifies the stakes, making the control of such biases a pressing concern for developers and ethicists alike (Nature News).

This phenomenon of subliminal bias transmission is particularly alarming given the opacity of how these hidden signals operate. The study suggests that current screening methods are insufficient to detect or mitigate these risks fully. As AI-generated data becomes a larger component of training datasets, the likelihood of perpetuating undesirable behaviors, potentially including discriminatory tendencies or other harmful outputs, increases. This creates a feedback loop where flawed models could influence future systems, compounding ethical and safety issues over time (Nature News).
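
The dynamics of that feedback loop can be illustrated with a deliberately crude toy model. Nothing below comes from the study: the transfer rate, baseline, and synthetic-data shares are assumed numbers chosen only to show the qualitative behavior the paragraph describes, namely that a trait which survives filtering settles at a persistent level, and that the level climbs as AI-generated data makes up more of the training mix.

```python
# Toy model of the generational feedback loop. All parameters are
# illustrative assumptions, not measurements from the study.

def steady_state_bias(synthetic_fraction: float,
                      transfer_rate: float = 0.9,
                      baseline: float = 0.05) -> float:
    """Fixed point of s = baseline + s * synthetic_fraction * transfer_rate,
    i.e. the bias level at which each model generation reproduces the last.

    synthetic_fraction: share of training data that is AI-generated
    transfer_rate:      fraction of the trait surviving filtering, since
                        filters screen content, not hidden signals
    baseline:           residual bias from human data and training noise
    """
    return baseline / (1.0 - synthetic_fraction * transfer_rate)

for frac in (0.2, 0.5, 0.8, 0.95):
    print(f"synthetic share {frac:.0%}: persistent bias ~ "
          f"{steady_state_bias(frac):.2f}")
# The printed level climbs from ~0.06 toward ~0.34: under these assumed
# parameters the trait is never filtered out, and heavier reliance on
# synthetic data amplifies what persists.
```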

The implications of these findings are significant for the field of AI development. With LLMs already deployed in sensitive areas such as healthcare, finance, and customer service, the risk of inherited biases affecting decision-making processes cannot be ignored. The study calls for improved methodologies to identify and neutralize hidden signals in training data, though specific solutions remain under exploration. It also underscores the need for greater transparency in how AI models are trained and the sources of their data, a concern that extends to regulators and the public seeking accountability in AI deployment (Nature News).
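
As a purely hypothetical illustration of what "identifying hidden signals" could involve, one crude direction is distributional screening: compare a candidate corpus's word statistics against a trusted human-written reference and flag large divergences. The sketch below assumes this approach and invents its own tiny corpora; it is not a method proposed by the study.

```python
# Hypothetical distributional screen: flag corpora whose word statistics
# diverge sharply from trusted human-written text. The corpora and the
# flagging idea are assumptions for illustration only.

from collections import Counter
from math import log

def distribution(texts):
    """Empirical word frequencies over a list of strings."""
    counts = Counter(w for t in texts for w in t.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) over the union vocabulary, with smoothing; higher values
    mean the candidate corpus is distributionally further from the
    reference."""
    vocab = set(p) | set(q)
    return sum(p.get(w, eps) * log(p.get(w, eps) / q.get(w, eps))
               for w in vocab)

human_ref = ["the committee reviewed each application on its merits",
             "results varied widely across the sample"]
candidate = ["sure here are the key options to consider",
             "sure the key options are as follows"]

score = kl_divergence(distribution(candidate), distribution(human_ref))
print(f"distributional divergence: {score:.2f}")
# A screening pipeline could flag corpora with anomalously high divergence
# from trusted human text: a crude proxy for hidden regularities, and far
# short of the detection problem the study actually poses.
```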

While the Nature study provides critical insight into this emerging challenge, the source material is limited in offering broader context or additional perspectives on potential remedies. Further research and discussion among AI experts, ethicists, and policymakers will be essential to address these risks comprehensively. For now, the warning from Cloud et al. serves as a reminder of the unintended consequences that may arise as AI systems become more interconnected and reliant on synthetic data. As the technology continues to evolve, ensuring that it does not propagate hidden biases will be a key priority for maintaining trust and safety in AI applications (Nature News).
