
Inside Russia’s “Sovereign AI” Plan—And Why It May Not Work

The OpenAI logo is displayed on a smartphone screen in Creteil, France, on April 16, 2026. Illustrative photo. (Source: Getty Images)

Russia’s largest oil company, Rosneft, led by Igor Sechin, a close ally of Russian leader Vladimir Putin, has expressed concerns over the feasibility of creating “sovereign” and “national” artificial intelligence (AI) models, as outlined in a draft bill prepared by the Russian Ministry of Digital Development, according to The Moscow Times on April 17.

According to Rosneft’s response, the company believes that the implementation of AI models in isolation from the global technology ecosystem is technically unworkable.


The draft law, which aims to regulate the use of AI in Russia, mandates that neural networks be developed and trained exclusively by domestic companies using Russian datasets. However, Rosneft argued that the country lacks the necessary computational infrastructure and that there are insufficient relevant datasets available in the Russian language to meet the requirements set out in the bill.

Rosneft proposed allowing the use of publicly available data from the internet, regardless of the location of the servers, for the development of AI, including information from sources such as Wikipedia.

Other industry representatives echoed similar concerns. The Digital Platforms Association, the Association of European Businesses (AEB), and the Chamber of Commerce and Industry all criticized the bill, pointing out that there are currently no AI models in Russia that meet the “sovereignty” criteria, as most Russian AI development still relies on foreign components and open datasets.

The Association of Computer and Information Technology Enterprises also noted a lack of clarity between the definitions of “sovereign” and “national” AI models, as the basic requirements for both are identical, and the specifics remain undefined, according to The Moscow Times.

Business leaders further expressed dissatisfaction with the provision requiring the use of only “trusted” models, which must comply with government-established safety and quality standards and process data exclusively within Russia’s borders.

Additionally, the AEB criticized the inclusion of “traditional Russian spiritual and moral values” in the development of AI. The association argued that concepts such as “high moral ideals,” “the priority of the spiritual over the material,” and “strong families” are not legal categories and should not be used as the basis for legal decisions regarding the approval of AI models for the market.

In response to these concerns, industry representatives warned that, if the bill is passed in its current form, the costs for businesses to implement AI would increase, product launches would be delayed, and many AI development projects might be moved to other jurisdictions. Furthermore, they raised concerns about limited access to advanced technologies, such as diagnostic tools and medical treatments, for Russian citizens, the outlet reported.

In March, Russia’s Ministry for Digital Development put forward new regulations aimed at limiting or outright banning foreign artificial intelligence technologies such as Claude, ChatGPT, and Gemini.

The proposed measures are designed to increase government oversight of the AI sector while curbing foreign influence, aligning with Russia’s broader agenda of creating a “sovereign internet” that operates independently from external forces and upholds what Moscow defines as “traditional Russian spiritual and moral values.”

In an official statement, the Ministry for Digital Development outlined that the regulations are intended to safeguard Russian citizens from “covert manipulation” and discriminatory algorithms. These changes are expected to promote AI tools developed by Russian companies, including state-owned lender Sberbank and technology group Yandex, which would benefit from the regulatory framework.

In a related development, OpenAI announced that it had dismantled a network of accounts linked to the Russian project “Rybar,” which had been using its models to generate content for coordinated disinformation campaigns.

OpenAI disclosed in a case study titled “Fish Food” that it had suspended several ChatGPT accounts associated with the “Rybar” network. The activities of these accounts are believed to have originated from Russia.

The accounts were found to be generating content in multiple languages, including Russian, English, and Spanish, with some of this content being disseminated through accounts affiliated with the “Rybar” brand.

