Since the introduction of ChatGPT in November 2022, the development of artificial intelligence (AI) modeling has become a source of considerable debate over its potential to revolutionize the way we interact with technology and our understanding of what it means to be human. This debate arguably reached its zenith in March 2023 with the release of an open letter, signed by prominent figures in the tech sector, calling for a pause on giant AI experiments.

Nevertheless, the proverbial horse is out of the barn, with many camps hailing AI as a transformative technology that will usher in a new era of progress and innovation. An equally important and largely overlooked question, however, is how AI modeling will impact our comprehension of reality itself.

It was the postmodernist sociologist Jean Baudrillard who first warned about how technology was increasingly blurring the distinction between real life and simulated reality. In 1981, Baudrillard published "Simulacra and Simulation," in which he introduced the concept of "hyperreality." According to Baudrillard, contemporary society is a world of symbolic dominance where images and signs have replaced "the real," making the boundaries between "reality" and "simulation" increasingly blurred to the point that they become nearly indistinguishable.

Baudrillard contended that in this "hyperreality," images, simulations and symbols are detached from their original referents and take on a life of their own. He warned of the creation of a new reality, one that is more "real" than reality itself. This mattered to Baudrillard because he believed the intensification of hyperreality leads to a loss of shared meanings and the dissolution of traditional values. What is left is a world where "the real" is no longer distinguishable from the simulated. People become passive consumers of images and symbols rather than active participants in reality. Hyperreality thus exemplifies a form of social and cultural alienation in which people are disengaged from the real world and from each other.

If Baudrillard were alive today, it is safe to say he might argue that the rise of AI models represents the pinnacle of this historical blurring of the real and the hyperreal. AI models, after all, involve the creation of a simulated reality through algorithms and data processing. Just as Baudrillard's original conception of hyperreality relied on the proliferation of symbols through mass cultural production and consumption, AI models involve an analogous proliferation of simulated reality through the processing and reproduction of vast amounts of data. In this sense, both can lead to a sense of alienation or detachment from reality.

Baudrillard's concerns over the substitution of "the real" with hyperreality echo similar concerns being raised about AI models today. Even though AI models rely on the digestion of real-world information, their algorithms nevertheless have the potential to create a simulation that is divorced from reality.

Presently, AI models such as ChatGPT and DALL-E 2 are generating easily accessible and increasingly realistic text and images in which the boundaries between the real and the simulated are ever harder to distinguish. As we become more reliant on AI, we risk losing touch with the fundamental qualities that make us human: our creativity, intuition and emotional intelligence. Instead, we increasingly rely on machines to make decisions for us, to interpret the world around us and even to create new forms of art and culture.

AI, along with the earlier proliferation of social media platforms, is increasing the degree to which we experience the world through screens and simulations rather than through direct experience or our own construction. This deepens our loss of shared meanings and encourages further withdrawal from reality.

AI represents an oncoming form of hyperreality production because it can create and manipulate images, sounds and other digital representations in ways that blur the line between reality and simulation. AI can learn to mimic the style, tone and language of human communication, allowing it to generate original content that is indistinguishable from what humans create.

Moreover, AI has the potential to create a new form of hyperreality where machines not only generate content but also interpret and organize it. As AI becomes more sophisticated, it can learn to analyze human behavior, preferences and beliefs, and use this information to shape the information we see and interact with. This has the potential to create a world in which our interactions with technology are mediated by hyperreal representations that are detached from actual reality.

AI-adjacent technologies, such as virtual and augmented reality, are already creating hyperreal environments that simulate real-world experiences. These technologies can create a sense of immersion and presence that is difficult to distinguish from actual reality, further blurring the boundaries between real and simulated worlds. A recent report out of Stanford, for example, documents a sharp rise in incidents of AI being misused to spread misinformation.

Presently, proponents of AI modeling argue that its strength lies in the convenience and efficiency it can offer users. AI models are capable of automating many tasks, from driving to customer service, and it is argued that AI will be able to perform such tasks far more quickly and accurately than humans. As AI becomes more sophisticated, it can be used to make decisions about everything from health care to personal finance.

The integration of these applications into various aspects of our lives adds to our hyperreality by creating a world mediated by digital simulations and models. Such models, by their very nature, are designed to make decisions based on patterns and data. This can flatten the splendor and complexity of the human experience, and the loss of the nuances and subtleties of human interaction can foster a sense of alienation and a growing divide from others.

There is already concern that future dependence on AI models will reinforce and replicate existing social inequalities, since AI models are trained on large datasets that reflect existing societal biases. Additionally, the growing dependence on AI models in decision-making processes will further diminish human agency and autonomy. Increased reliance on algorithmic decision-making weakens the need for individuals to exercise their own judgment, adding to a sense of powerlessness and disenfranchisement as we become increasingly subject to the whims of algorithms.

As AI becomes more pervasive in our lives, it is important to remember that AI is only as good as the data it is trained on, and that it is not capable of empathy, creativity or critical thinking as humans are. As AI becomes better at predicting our behavior and desires, it will also become more capable of manipulating us for commercial or political gain. We must maintain a clear-eyed view of AI's potential while recognizing that our relationship with technology is not neutral and that the choices we make about how we use it have profound implications. The death of "the real" is not an inevitability, but it is certainly a challenge to be faced head-on.

(Kent Bausman is a professor of sociology at Maryville University and a contributing faculty member of its Online Sociology Program.)
