So-called artificial intelligence (AI) has only just begun to permeate our work environment and every other area of daily life. For now, the public debate revolves around ever more sophisticated language programs, chatbots like ChatGPT, and image generators like Image Creator, Midjourney, or Leonardo. But since development in the digital age progresses exponentially, AI will become more powerful in the near future, take over more tasks previously reserved for human intelligence, and move ever closer to us, quite literally. Amazon’s voice assistant Alexa and the abundance of ‘smart’ household technology hint at the direction in which we are heading. We are surrounding ourselves with an increasingly dense cocoon of digital data vampires that do nothing less than tap into our identity.
It must be understood that none of this has anything to do with ‘intelligence’, that is, with insight into complex contexts. In reality, it is all a matter of elaborate computational processes (algorithms) based on probabilities: the probability, for example, that in a flow of text a particular word follows another, or that in the construction of a graphic or image certain structural elements belong together. The AI ‘learns’ these mathematical probabilities in the course of its optimisation so as eventually, after countless hours of training, to deliver results that come as close as possible to the operations of the human brain, i.e., to thinking. Drastic performance improvements are expected here in the future.
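The next-word probabilities described above can be illustrated with a deliberately tiny sketch. Real systems like ChatGPT use neural networks trained on vast corpora, not simple word counts; the bigram model, corpus, and function name below are merely illustrative of the underlying statistical idea.

```python
from collections import defaultdict, Counter

def train_bigram_model(corpus):
    """Count how often each word follows another, then normalise the counts
    into probabilities: P(next word | previous word)."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    # Convert raw counts into relative frequencies per preceding word.
    model = {}
    for prev, counter in counts.items():
        total = sum(counter.values())
        model[prev] = {word: c / total for word, c in counter.items()}
    return model

# A toy corpus; any text collection would do.
corpus = [
    "the cat sat on the mat",
    "the cat ate the fish",
]
model = train_bigram_model(corpus)
# After "the", the model has seen: cat twice, mat once, fish once.
print(model["the"])  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

A text generator built on this principle simply keeps sampling the most probable continuation, word after word, which is why critics describe such systems as a kind of ‘super-autocomplete’ rather than genuine understanding.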
Another important aspect that easily gets overlooked is that all these small ‘bots’, whether text, dialogue, or graphic programmes, are part of the transhumanist agenda. Ultimately, it is about making the human brain ‘readable’ for computers and being able to transfer data in both directions — from the brain to the computer (or storage media) but also from the computer into the brain, perhaps via an implanted chip. Intricately calculated image and text files might one day be the crucial intermediary medium. The ultimate goal is, besides the total controllability of humanity, the storability of human consciousness, to make it independent of its physical existence and ultimately immortal. Hollywood has been thematising this for many years, and keynote speakers like the Israeli publicist Yuval Harari (Homo Deus, 2017), who is also a welcome guest at Klaus Schwab’s World Economic Forum, make unambiguous announcements.
One of the most prominent figures in the search for a human-machine interface is Elon Musk. Since 2017, under the umbrella of Neuralink, a company founded for this purpose, he has been exploring ways to network the human brain with computers; he speaks of a direct interface to the cerebral cortex. In 2020, in the middle of the Corona year, Musk presented the public with the prototype of his brain chip: eight millimetres thick and 23 millimetres in diameter. For implantation into the brain, Neuralink developed a dedicated precision robot. Officially, the chip is supposed to monitor health and, for example, sound an alarm when its sensors detect the risk of a heart attack or stroke. For now, at least. Musk has long advocated the view that humans will need to link their brains with computers in the future to keep up with the coming artificial intelligence.
This brings the discussion full circle. Ever more sophisticated AI programmes will not only revolutionise our daily lives and work environment; sooner or later they will be able to communicate directly with the brain, understanding its ‘language’ on the basis of vast numbers of enormously complex computing operations and algorithms. And millions of users worldwide are helping to train and optimise these programmes further.
The American linguist and publicist Noam Chomsky adds another aspect to this discussion. In a March 2023 essay in The New York Times, he urged that the programmes in question no longer be called ‘artificial intelligence’ at all: ‘… ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation. It summarises the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a “just following orders” defense, shifting responsibility to its creators.’
Of course, every little picture, every design, every user input at Google, Microsoft, and the other Big Brothers is stored. Together with all the other data trails everyone leaves daily on computers and mobile phones, this yields a remarkably informative profile of a person’s interests and individual preferences, in a word, of the user’s personality. Naturally, state surveillance agencies have long made use of this, and not only in China.
The reward for participation consists of pretty, colourful computer graphics without intellectual or political nutritional value. That is meagre enough. Everyone must decide for themselves how far they want to cooperate with the AI behemoth. All in all, the images and texts produced in these supposedly ‘intelligent’ ways carry risks, and one should handle them thoughtfully.