MIL-OSI Russia: “It’s critical how we teach this technology, how it impacts young people.”

Translation. Region: Russian Federation –

Source: State University Higher School of Economics –

Photo: MIA “Russia Today”

On April 8, the MIA “Russia Today” held a round table on the topic “Threats of Artificial Intelligence for Education and the Social Sphere”, in which the rector of the National Research University Higher School of Economics, member of the Council under the President of the Russian Federation for the Development of Civil Society and Human Rights (HRC) Nikita Anisimov took part. He spoke about the HSE’s experience in regulating the use of AI technologies in the educational process.

Opening the round table, Advisor to the President of Russia, Chairman of the Human Rights Council Valery Fadeev stated that modern youth actively uses neural networks. He cited data from a recent survey by the Association of Organizers of Student Olympiads, according to which 85% of students use neural networks, including 43% for writing abstracts, essays, term papers and theses. Students also consider AI to be the most important technology for Russia.

However, Valery Fadeev himself takes a different view of the situation. “I believe that we are on the threshold of an ideological disaster, and Russian society is still underestimating this danger,” he said. The reason, he argued, is that when answering questions related to politics, for example, neural networks reproduce Western narratives and ideological texts, turning the technology into an ideological weapon.

The Russian presidential adviser used the analogy of a student library in the 1980s, which contained only Marxist literature, rather than the best works in the humanities from around the world. “Our texts and the texts of our friends make up a minimal part of the total array of materials and texts used by the neural network,” he concluded.

In turn, Nikita Anisimov noted that behind each AI there is a developer – a person who can afford to invest billions in the development of a specific technology. Such people have great business opportunities and pursue certain interests.

He recalled that HSE University had adopted, on its own initiative, a Declaration of Ethical Principles for the Creation and Use of Artificial Intelligence Systems. One of these principles is transparency: if students use AI in their work, they are required to indicate which technology was used and what conclusions it produced. If deception is detected (a tool called “Catch a Bot” exists for this), the student may be expelled.

Nikita Anisimov emphasized that the university trains both AI specialists and those who will certainly use the technology. In his opinion, a methodological understanding of the use of AI in education is critically important, and this is recognized not only in Russia: in China, for example, the state considers it necessary to modernize teaching methods and textbooks in light of the emergence of AI.

The rector developed Valery Fadeev’s thesis that the content of a neural network is determined by the one who trained it.

“If the technology was developed in the USA and trained on a line of school textbooks published in the USA, then naturally it promotes certain views. But if we load it with textbooks published in Russia in the 1990s, will that be any better? That is why it is critically important how we teach this technology and how it affects young people and adults. It matters who taught it and what they taught it. Artificial intelligence is only a technology, and any problem has a last name, first name and patronymic. These people must be named, invited and engaged in discussion, and regulations must be introduced, as we did at our university. And you know, it works. Students are happy to point out where they used artificial intelligence and where they wrote the work themselves,” concluded Nikita Anisimov.

The discussion was also attended by HRC member and IT entrepreneur Igor Ashmanov and IT entrepreneur Natalya Kasperskaya.

Igor Ashmanov emphasized the danger that AI poses to schoolchildren. At school, many questions require a definite answer, but a neural network cannot provide one: it answers differently each time, and its answers are often incorrect. Meanwhile, Russia still lags behind its competitors and cannot create the “right AI.” “Our digital giants take enemy engines and repackage them,” the expert explained.

Natalya Kasperskaya pointed to the risks posed by children’s use of gadgets in general: negative effects on health, as well as an inability to communicate, underdeveloped imagination, fragmented “clip” thinking, short memory and so on. In her opinion, the choice of an educational trajectory cannot be entrusted to AI; only a person can make it.

At the end of the round table, its participants answered questions from journalists. In particular, Nikita Anisimov was asked how much interest in HSE programs dedicated to AI has grown in recent years.

According to the rector, everyone should be able to apply AI technologies in their professional activities, which is why students of all fields and specialties at HSE take the Data Culture course. Students in AI-focused educational programs earn significant salaries, combining work with study as early as their second or third year. Competition for admission to these programs is fierce, and their graduates enjoy an excellent reputation.

Nikita Anisimov concluded that AI technologies can help, for example, to win on the battlefield and to increase labor productivity many times over, which is extremely important for the country. At the same time, in the social sphere, AI must not be allowed to make decisions about people or influence their destinies.

Please note: This information is raw content directly from the source of the information. It is exactly what the source states and does not reflect the position of MIL-OSI or its clients.

MIL OSI Russia News