Advanced Technologies in the War in Ukraine: Risks for Democracy and Human Rights
Summary
Advanced technologies have been invaluable to Ukraine in defending itself against Russia’s invasion, but they also present significant risks to democracy and human rights in the country.
Satellite-based remote-sensing technology helps document war crimes and provides real-time battlefield intelligence. It has exposed atrocities and tracked armed groups, and it will be vital for reconstruction. Companies have provided satellite imagery essential for holding Russia accountable and for assessing infrastructure damage. However, the use of satellite imagery poses risks, including data manipulation, privacy violations, and misuse by governments or private entities. In Ukraine, the lack of standardized practices and of updated legislation exacerbates these risks. Unauthorized access to or tampering with imagery can distort facts, while broad data collection by satellites raises concerns about who controls the information.
The involvement of Palantir Technologies and the use of the Delta system in Ukraine’s defense also have the potential to affect democracy and human rights. The use of Palantir’s advanced data-mining tools raises concerns about misuse and privacy violations, and the company’s confidential agreements with the government underscore the need for ethical use and strict adherence to privacy laws. The satellite-based Delta situational awareness system enhances battlefield decision-making, but it also carries risks related to data breaches, system failures, and lack of transparency. Without proper oversight, its use could lead to over-surveillance and privacy violations.
The wartime use of facial recognition technology, notably from Clearview AI, has been transformative but raises serious ethical concerns. While it aids in identifying Russian soldiers and missing persons, it also creates potential for mass surveillance, misidentification, and privacy violations. Its unregulated use threatens fundamental rights, and Clearview AI has faced legal action in several countries for breaching data-protection laws. These concerns are heightened by the possibility that the technology will continue to be used after the war.
Russia’s use of AI in information warfare has intensified the conflict, with AI-generated disinformation, deepfakes, and voice cloning spreading false narratives and destabilizing Ukrainian society. Deepfake videos, such as those depicting Ukrainian leaders surrendering, erode trust in the media and pose a significant threat to social cohesion and democracy.
As Ukraine increasingly relies on these technologies in the war, the need to withhold sensitive information conflicts with the need to maintain public trust. Striking a balance between national security and human rights is essential. International law allows limited derogations from human rights obligations during emergencies, but these must adhere to the principles of necessity and proportionality.
These crucial technologies require stringent safeguards to protect democratic values and human rights in Ukraine. Key recommendations include regulating remote-sensing technologies; harmonizing the country’s AI and data-protection laws with EU standards; adopting impact assessment frameworks; creating “regulatory sandboxes” for the safe development, testing, and adaptation of AI innovations; and enhancing media literacy programs to counter AI-generated disinformation. Through these measures, Ukraine can harness advanced technologies for defense and reconstruction while ensuring that wartime measures do not undermine its democratic future.
Anna Mysyshyn is a GMF ReThink.CEE Fellow. This paper is published under the ReThink.CEE Fellowship.