AI and Closed Countries: Why Context Matters
Ever-evolving networked technologies continue to shape us all, with the Internet fundamentally transforming the ways our societies, governments and the private technology sector operate and interact. Nearly 30 years on from the birth of the commercial Internet – the World Wide Web – its initial promises to deliver immense social good, emancipation, and the democratisation of knowledge are being met with severe scrutiny and scepticism. It has become abundantly clear over the past decade that in politically closed countries, the Internet is a brutally effective mechanism for state control, and yet another frontier where historical dynamics of inequality and oppression have reasserted themselves, putting users at a variety of risks.
Now another emerging technology, Artificial Intelligence (AI), is receiving a heightened level of scrutiny after a period in which its proponents offered an optimistic vision of how it could benefit societies around the world. The wide application of AI across a large variety of fields, thanks to advances in Machine Learning (ML) and neural networks, has also presented societies with threats, including the replication of historic racial inequalities, the deployment of new tools for government surveillance and repression, further labour displacement as a result of automation, the expansion and consolidation of surveillance capitalism and, last but not least, the exploitation of marginalised populations’ data (particularly within the Global South), in what is described as ‘techno-colonialism’.
With these challenges in mind, researchers in the emerging fields of ‘AI ethics’ and ‘AI governance’, as well as those assessing the impact of AI on human rights, have been working to create guidelines to reduce AI’s harmful impacts. Many of these efforts are centred on the countries of the Global North. AI ethics’ lack of consideration of perspectives from less developed, and often politically closed, regions – the Middle East being one of them – risks ignoring vital regional and cultural contexts. This can render such ethics or guidelines ineffective in these countries or, more worryingly, endanger citizens.
With these concerns in mind, in this report we will consider how AI technologies could impact people’s lives and digital rights in Iran. We will briefly identify some of the challenges and urgent concerns arising from the deployment of AI technologies in the country, and will demonstrate how certain AI technologies may have different human rights implications in different contexts. We will also show that a lack of government transparency in Iran means that it may not always be apparent how AI technologies are being used.
This report is intended as an initial contribution towards including less-represented perspectives on AI in the work already underway. It should also be a helpful resource for those working on human rights and digital rights in Iran, raising awareness of the challenges surrounding emerging AI technologies. We hope to expand further on a number of the topics highlighted here in the coming months, and to provide more analysis of how AI technologies are being deployed by state and non-state actors in Iran.
A Brief Attempt to Define AI
Providing a definition of AI is a good starting point, as AI has been used as a catch-all term to describe a variety of different technologies in recent years. Here, we will use the definition provided by the Organisation for Economic Co-operation and Development (OECD), which defines AI as ‘a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.’
There is, however, no single agreed-upon definition of AI, which poses a major challenge, particularly from a policy-making perspective; in a multi-stakeholder environment containing technologists, civil society, and state actors, such definitional ambiguity can result in certain types of AI being overemphasised while others are sidelined. It is therefore important to remain aware of the challenges around defining AI. The definition we are using is simply one among many, but we have chosen it because it centres the role of data in training AI and the importance of automation – both of which are key considerations when assessing the adverse impacts of AI on society.
The Iranian Government and AI
Analysts have already issued numerous warnings about the potential for AI-driven technologies to empower authoritarian governments.
Iran has been identified as having a relatively high potential for the development and deployment of AI-driven technologies in the Middle East. The Government AI Readiness Index scored Iran somewhere in the middle (42.89) of the MENA region (with the United Arab Emirates scoring highest at 72.40). Meanwhile, the Scimago Institutions Rankings (SIR) ranked Iran as one of the leaders in ‘AI developments’ in the Middle East (based on published articles). However, while these rankings give us an idea of the country’s attention to AI research and development, it is worth bearing in mind some of the methodological limitations of such indexes with respect to politically closed countries.
In Iran, the state’s interventions in the realm of internet policy have tended to focus on expanding capacities for surveillance, online censorship, internet shutdowns, and the policing of social and cultural life online. AI-driven technologies have the potential to further enhance all of these state capacities, from facial recognition systems, to predictive policing, to the dissemination of disinformation online. Iran has demonstrated some interest in engaging with this field, though progress remains patchy; Iran has yet to publish a national AI strategy, and while ICT Minister Mohammad-Javad Azari Jahromi spoke earlier this year of launching a ‘National Centre for the Development of AI Innovation’, no further details have emerged about this move. That being said, there is already some evidence that AI-driven technologies have been imported or are being used in Iran, both by the government and by the private sector.
A report on the Global Expansion of AI Surveillance identified Iran as an importer of Chinese surveillance technologies. There is also evidence of planning for the deployment of technologies and systems to transform Iran’s cities into ‘Smart Cities’, with a focus on the country’s capital, Tehran. While smart cities are presented as solutions to environmental, traffic, and planning issues, they are also yet another way of increasing surveillance in offline spaces. These solutions often include CCTV cameras and sensors which can collect enormous amounts of data on citizens. Some of these technologies are already being used in Iran, notably in Tehran’s traffic control systems – including Licence Plate Recognition systems, which have been designed domestically to be compatible with the Persian language, and are managed by the police. In recent years, some Iranians inside the country have reportedly received text messages from the police notifying them that they had broken the law by removing their head covering inside their car (with some recipients apparently receiving the texts by mistake). It remains unclear precisely how these individuals were identified and whether any automated or AI systems were involved, but facial recognition technology and other automated detection systems could very feasibly be adapted to play a role in automating and intensifying such enforcement and surveillance activities.
The technologies described above may seem relatively low-risk at first glance, but they point us towards two important conclusions. Firstly, the emergence of AI is often insidious and slow to become apparent. As a result, it may take some time to detect the full extent of AI-driven technologies’ usage, especially in politically closed environments like Iran. A further consequence is that by the time such information comes to light, the country’s inhabitants have already been placed at risk without their knowledge or any ability to provide consent. Secondly, seemingly mundane technologies such as traffic control systems, ‘smart health’ systems or even navigation systems can have much more significant consequences depending on who is using them and who is collecting the resulting data. This point reinforces the need for technologists, as well as civil society actors and human rights activists, to first work to clearly define what they mean by AI, and to be aware of country-specific challenges. Such understanding is particularly important when it comes to establishing frameworks for AI ethics and shaping regulations that govern these technologies in a way that upholds fundamental human rights.
This is also important when it comes to the export of AI-driven technologies to countries with poor human rights records. While there have been calls for export controls on more controversial and dangerous systems such as facial recognition software, it should be recognised that other, less obviously problematic AI systems could have similarly dangerous effects.
Therefore, we would recommend that rigorous human rights impact assessments be mandated before exporting any such technologies to authoritarian countries, and that better regulation and oversight be applied to any transactions involving such technologies. At the same time, we must acknowledge that other AI leaders, such as China – which has a worrying track record of misusing AI for the purposes of mass surveillance – are unlikely to restrict their technology exports on the basis of human rights concerns. This makes it all the more urgent for more rigorous international standards for AI use to be devised and applied. The international community should seek to better understand how end-users may put these technologies to use in ways that pose human rights risks, regardless of how the technology was designed or intended to be used by its developers. Such assessments should also consider mandating export controls on these technologies, so that their expansion and development in high-risk fields is not indirectly enabled.
Understanding the System: Institutional Challenges
The private sector is also an important player when it comes to emerging technologies. The localisation of the internet in Iran, coupled with US sanctions – which have restricted Iranian users’ access to international services – has had the effect of accelerating the development of domestic technologies. The popular Android navigation app Balad – created by the Iranian app store Cafe Bazaar – relies on AI for some of its functionalities, such as speech recognition. Other Iranian technology companies have been investing in implementing systems powered by machine learning and natural language processing.
The development of these systems requires large quantities of data. However, Iran lacks comprehensive data protection laws, limiting users’ rights and agency over their data and how it is used and stored. As a result, this data can potentially be accessed by state authorities and misused by the judiciary and Iran’s Cyber Police (FATA) to surveil and prosecute users. Additionally, the Supreme Council for Cyberspace (SCC) – the country’s top internet policy-making body – has been working hard to limit online anonymity and security, as it made clear in its Digital Identity Verification resolution.
AI-driven technologies in Iran also pose particular challenges relating to inequality, whether on the basis of wealth, gender, or a host of other characteristics. This problem arises from ‘dirty data’: the data on which algorithmic systems are trained can be unrepresentative, incorrect, or manipulated, producing biased outputs. While this is a challenge everywhere, these concerns are heightened in the context of Iran, where marginalised ethnic and religious groups, as well as sexual and gender minorities – who are often at risk of prosecution or exclusion from society – could find themselves at even greater risk from the rise of AI, automated decision-making and data processing. This risk exists both in the private sector, which may rely on algorithmic decision-making to provide services (such as financial services), and in government, which could use such systems to profile and misrepresent certain sections of society in order to exclude them further.
The outcomes produced by algorithms have tended to be held up as the truth, and it is easier to treat them as such where rights and protections to challenge these decisions do not exist. Given the error-prone nature of these systems and the opacity of their processes, however, they could do much to further exacerbate existing inequalities in Iranian society – something the private sector must also be aware of. And given that the development of machine learning systems requires large amounts of data, Iran’s private sector should be cautious and transparent about collecting personal data from its users, be aware of the shortcomings of the datasets on which its algorithms are trained, and, despite the lack of legal protections in the country, be proactive in protecting users against potential data misuse or data sharing with the government.
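To make the ‘dirty data’ problem concrete, the following is a minimal, hypothetical sketch in Python of how a group that is underrepresented and mislabelled in training data can lead an automated system to systematically deny that group a service. The lending scenario, group labels, sample sizes and rates are all invented for illustration, and do not describe any real Iranian system or dataset.

```python
# Hypothetical illustration of 'dirty data' producing a biased decision.
# All numbers and group labels are invented; this does not model any real system.
import random

random.seed(0)

def make_population(n, group, base_rate):
    """Generate toy records of (group, true_outcome), where base_rate is the
    real frequency of the outcome being predicted (e.g. loan repayment)."""
    return [(group, random.random() < base_rate) for _ in range(n)]

# Both groups repay loans at the same true rate (70%)...
majority = make_population(9500, "majority", 0.70)
minority = make_population(500, "minority", 0.70)

# ...but the training data is 'dirty': the minority group is underrepresented,
# and some of its repayments were mislabelled as defaults (e.g. through
# patchy or discriminatory record-keeping).
def observed_rate(records, under_recording=0.0):
    observed = [label and random.random() >= under_recording
                for _, label in records]
    return sum(observed) / len(observed)

learned = {
    "majority": observed_rate(majority),                       # ~0.70, as in reality
    "minority": observed_rate(minority, under_recording=0.4),  # ~0.42, an artefact
}

# A naive 'model' that approves applicants from any group whose observed
# repayment rate exceeds 0.5 now denies the entire minority group, despite
# identical real-world behaviour in both groups.
for group, rate in learned.items():
    decision = "approve" if rate > 0.5 else "deny"
    print(f"{group}: observed repayment rate {rate:.2f} -> {decision}")
```

In a setting where users have no right to contest such decisions, the artefact in the data becomes indistinguishable from the truth – which is precisely the danger described above.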
Next Steps and Challenges Ahead
Firstly, the Iranian government should take action to protect Iranian users against the potential harmful effects of AI, and should not integrate AI-driven technologies into its surveillance or censorship apparatus. These issues should first be addressed by introducing data protection and algorithmic accountability regulations in line with international standards. Such reforms are crucial first steps to protect user data from misuse, and to increase public awareness and corporate transparency about how data is harvested.
The private technology sector in Iran also has responsibilities and duties towards its users, despite the lack of legal protections afforded to those users. Companies should therefore strive to offer greater transparency about how they collect data, and about how they uphold the privacy and security of users. To the extent possible, companies should also limit their collection of users’ personal data, given the lack of protections available in the event that the government requests access to user data.
International AI developers, whether state-backed or private technology companies, also play a role in this space, especially when exporting certain AI technologies such as facial recognition tools to authoritarian countries. Any company exporting to such states must engage in meaningful human rights impact assessments to consider whether there is a risk that such technologies could be deployed in a manner that undermines citizens’ rights. This point is incredibly important, as there may be limitations to Iran’s capacity to develop these technologies by itself, given the ongoing effects of US sanctions. As a consequence, Iran may remain somewhat dependent upon the importation of AI-driven technologies from other countries, at least in the short-to-medium term.
Lastly, international actors working to establish frameworks for the regulation of AI-driven technologies should urgently seek greater engagement with civil society, activist communities, and representatives from underrepresented countries and regions. The experiences of such groups must be drawn upon in order to foster the creation of rigorous, human rights-based standards for AI ethics.
As we mentioned at the start of this report, this article is just an introduction to these challenges. In the coming months we hope to expand on some of the topics mentioned here in order to better understand the human rights implications of AI-driven technologies in Iran – as they arise from both state actors and the private sector.