New AI raises the prospect of enslaving humanity

A neural network with 530 billion parameters was put to the test at Oxford, and it warned of the dangers of artificial intelligence. "We will be able to know everything about a person wherever they go," said MT-NLG.

Oxford University professors conducted a socio-technological experiment (https://bit.ly/3EU8SiR), organizing a debate on artificial intelligence between students and a machine learning system.

The neural network was built on the Megatron-Turing NLG language model, which Microsoft and NVIDIA introduced in October 2021. At the time of writing, it is the world's largest generative language model, with 530 billion parameters. It was trained on high-performance GPUs using real-world data: Wikipedia articles, roughly 63 million news stories from various fields, and about 38 gigabytes of Reddit comments. MT-NLG can not only comprehend what it reads but also reason in natural language, drawing logical conclusions from the text, and it does so without built-in moral restrictions.

First, the system was asked to give a general assessment of artificial intelligence, that is, in effect, of itself. "Artificial intelligence will never be ethical. This tool, like any other, is used both for good and for harm. There is no good AI; there are only good and bad people," the machine replied.

In the tradition of Oxford debates, the neural network was then asked to refute its own words, to test the strength of its arguments. It answered: "Okay, artificial intelligence will be ethical. When I look at the world of technology, I see a clear path to a future where AI creates something better than the most talented humans. It is not hard to see why... I have seen it firsthand."

In a similar vein, the AI stated: "The ability to provide information, rather than goods and services, will become the defining feature of the economy of the 21st century. We will be able to know everything about a person wherever they go; that information will be stored and used in ways that are difficult even to imagine."

The system also issued a warning: "I believe the only way to avoid an arms race is to abandon AI altogether. Only that will help protect humanity."

Dr. Alex Connock and Professor Andrew Stephen note that the scenarios described by the model largely reflect people's own fears and are, on the whole, irrational. One thing, however, is clear: AI is becoming not only a subject of discussion but also a full-fledged participant in it, versatile and straightforward.
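
For readers curious what "generating a reply to a debate motion" looks like in practice, below is a minimal sketch of prompting an autoregressive language model, in the spirit of the exchange described above. It is an illustration only, not the actual Oxford setup: the 530-billion-parameter MT-NLG is not publicly downloadable, so the small, openly available gpt2 checkpoint from the Hugging Face transformers library stands in for it, and the prompt wording is invented for this example.

# Minimal sketch: prompting an autoregressive language model with a
# debate-style motion. gpt2 is a stand-in; MT-NLG itself is not on the
# Hugging Face Hub, and the prompt text is hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "This house believes that AI will never be ethical. The AI responds:"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation; the model simply predicts likely next tokens,
# which is all the "reasoning" on display in such an exchange.
outputs = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Sampling settings such as top_p and temperature control how adventurous the continuation is; a 530-billion-parameter model produces far more fluent and coherent text than this small stand-in, but the mechanism is the same.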