Authoritarian governments are racing to adopt artificial intelligence (AI). While they court international legitimacy by invoking “ethics,” “safety,” and “standards,” the real-world context is marked by systematic violations of human rights. Belarus is a telling example. The authorities are expanding AI across public life, promising high standards on privacy, security and non-discrimination. However, in a political environment marked by authoritarian practices that criminalise dissent and dismantle independent oversight, AI tools are far more likely to reinforce repression than to deliver public benefit.
In this setting, where checks and balances are weak or absent, high-risk technologies tend to be repurposed for control. It is therefore the political environment in Belarus, not the stated safeguards, that will ultimately shape the real-world use of any advanced data system.
Belarus’s civic space remains severely restricted. Independent media operate in exile, and peaceful critics face prosecution under expansive “extremism” provisions. Civil society organisations have been dismantled, and people are jailed for what they say online and for content found on their phones.[1] Over the past five years, the Human Rights Centre Viasna has documented more than 100,000 cases of repression, and over 1,000 political prisoners remain in detention.[2]
Given longstanding concerns about the Belarusian government’s unlawful use of surveillance to curb dissent, its efforts to expand into ever more invasive AI technology should raise serious red flags.[3]
Since 2020, Belarusian authorities have used Kipod, video-analytics software developed by Synesis, a local company previously sanctioned by the EU. The tool allows users to search and analyse video footage, including through facial recognition and licence-plate recognition, and was allegedly used to identify participants in the anti-government protests of 2020.[4] In addition, government smart-city programmes emphasise Internet of Things (IoT) sensors, machine learning and large video-surveillance deployments (“Videokontrol”), tying multiple municipal systems together—traffic, environment, utilities, public security. Authorities say pilot projects are live across multiple cities.[5]
Officials in Belarus have also indicated that AI will be implemented “in nearly all sectors of the economy by 2040,” with new management-system standards and a model law for the Commonwealth of Independent States (CIS) prepared by a Belarusian state institute. The authorities also plan to draft a national AI strategy and begin implementing an AI law this year.[6]
Public discourse surrounding AI development in Belarus – including policy drafts and official statements – frequently references human rights, safety and non-discrimination. International actors may be tempted to take this at face value, engaging in order to “help get AI right.”
For example, Belarusian AI ambitions are being supported by the United Nations Development Programme (UNDP). As the agency’s official webpage states: “With the support of the [UNDP], Belarus is developing a comprehensive regulatory framework for artificial intelligence, drawing on international best practices and rooting in national priorities”.[7] The programme focuses on helping countries achieve the Sustainable Development Goals and on ensuring that society benefits from the implementation of AI. While these goals are important, the overall approach in Belarus omits the human rights situation and leaves unaddressed the risks of expanded surveillance capabilities and more efficient mechanisms of control.
Technologies such as facial recognition in public spaces, large-scale data fusion and automated content moderation can be framed as service improvements or safety measures while in practice deterring assembly, suppressing pluralism and entrenching information control. Without independent regulators, free media and access to remedy, harmful or discriminatory outcomes will rarely be detected—let alone corrected.
Another red flag that Belarus’s AI approach is not grounded in human rights standards is its alignment with Russia’s push for what it describes as “sovereign” AI based on “traditional values”.[8] This alignment implies shared technical baselines, legislation and institutional cooperation, including with law enforcement. Independent research has already shown high levels of political censorship in leading Russian-language models, which routinely avoid sensitive topics or reproduce official state narratives.[9] If Belarus builds its AI infrastructure on this foundation, censorship is not an aberration to be discovered later; it is a design parameter.
Any AI system that affects people’s rights must be governed by safeguards capable of preventing violations of human rights law and data protection standards. It should meet basic tests: legality, necessity and proportionality; transparency and traceability; independent oversight; and accessible avenues to contest and remedy harmful decisions. Technologies that are incompatible with human rights protections, such as facial recognition technology, should be banned. While these safeguards are minimum requirements in any context, in Belarus they are particularly critical.
The principle is simple: no context-blind engagement. Before funding, advising or lending credibility, responsible actors should conduct and publish human-rights impact assessments that map realistic end-uses, end-users and risks. Where risks cannot be mitigated in practice, the appropriate decision is to refrain from engagement.
Approaches and methodologies for such assessments are being actively developed. For example, the Council of Europe has created the Methodology for the Risk and Impact Assessment of Artificial Intelligence Systems from the Point of View of Human Rights, Democracy and the Rule of Law (HUDERIA Methodology).
While it is risky to support the development of AI systems by governments that engage in authoritarian practices, there are important ways international actors can help.
Instead of enabling state-driven AI development in authoritarian contexts, international actors should prioritise support that strengthens civil society resilience and protects vulnerable communities. This could include funding digital security for at-risk groups and independent media; supporting the documentation of abuses and strategic litigation; investing in broader education focused on media literacy; and backing access-to-rights services that help people navigate legal aid, asylum or social services without exposing sensitive data to hostile networks.
Maria Guryeva is Senior Regional Campaigner for the Eastern Europe and Central Asia Region at Amnesty International.
Disclaimer: The views expressed in this piece are those of the individual author and do not reflect the views of The Foreign Policy Centre.
[1] Amnesty International, Belarus 2024, https://www.amnesty.org/en/location/europe-and-central-asia/eastern-europe-and-central-asia/belarus/report-belarus
[2] Viasna, Human rights situation in Belarus, August 2025, https://spring96.org/en/news/118626
[3] Amnesty International, Belarus: “It’s enough for people to feel it exists”: Civil society, secrecy and surveillance in Belarus, July 2016, https://www.amnesty.org/en/documents/eur49/4306/2016/en/
[4] Katya Pivcevic, Police facial recognition use in Belarus, Greece, Myanmar raises rights, data privacy concerns, Biometric Update, March 2021, https://www.biometricupdate.com/202103/police-facial-recognition-use-in-belarus-greece-myanmar-raises-rights-data-privacy-concern
[5] Ministry of Communications and Informatization of the Republic of Belarus, Smart cities of Belarus, https://www.mpt.gov.by/ru/smart-cities-belarus?
[6] Belta, Belarus to implement AI in nearly all sectors of economy by 2040, June 2025, https://eng.belta.by/economics/view/belarus-to-implement-ai-in-nearly-all-sectors-of-economy-by-2040-169138-2025; Interparliamentary Assembly of Member Nations of the Commonwealth of Independent States, Model codes and laws, https://iacis.ru/baza_dokumentov/modelnie_zakonodatelnie_akti_i_rekomendatcii_mpa_sng/modelnie_kodeksi_i_zakoni
[7] UNDP, How Belarus is improving the quality of AI services, July 2025, https://www.undp.org/belarus/news/how-belarus-improving-quality-ai-services
[8] Belta, State Secretary of the Union State: Our task is to develop our own AI based on traditional values, July 2025, https://belta.by/society/view/gossekretar-sg-nasha-zadacha-razrabotat-sobstvennyj-ii-osnovannyj-na-traditsionnyh-tsennostjah-725782-2025
[9] Meduza, ‘Commitment to providing facts without bias’ Russia’s flagship AI chatbot recommends reading Meduza and other ‘foreign agents’, August 2025, https://meduza.io/en/feature/2025/08/27/commitment-to-providing-facts-without-bias?