Geopolitics and Cyberspace
The GCSP recently organised a public discussion on “Shaping the Cyber Future”. The following is the speech given by Dixie O’Donnell, Cyber Security Fellow with the GCSP’s Global Fellowship Initiative.
Disclaimer: Please note that these views may not represent those of the GCSP or of the US Mission in Geneva.
Good morning everyone,
Thank you for being here to participate in this discussion, and to the GCSP and U.S. Mission for sponsoring it.
I’m a Cyber Security Fellow here at the GCSP, where I focus on the interplay of geopolitics and cyberspace, as well as emerging efforts to counter disinformation.
I became interested in cyberspace and geopolitics in 2014, when I was working as a NATO political adviser in Kabul, Afghanistan. This may not be a country that comes immediately to mind when we are talking about digital democracy or algorithms, yet it is one that has not only been greatly affected by geopolitical conflicts but is also reliant in many respects on services provided by the major internet platforms. Lack of funds and rudimentary infrastructure mean that many government employees rely on Gmail to communicate official business, and Facebook and Twitter to make public announcements. Political parties and figures rely on social media, among other means, to communicate with the general public. The New York Times recently ran a story about how essential WhatsApp has become to secure communication for both the Taliban and the Afghan government, which also uses it to communicate with top US officials in Kabul. WhatsApp is used on the battlefield between soldiers and commanders, as well as by the delegations involved in the Afghan peace process. Because you can send voice recordings and video on the app, literacy is not needed to make use of it.
These are illustrations of how internet platforms have become critical infrastructure around the world, including to governments, individuals and private businesses located at a great distance from where these platforms were founded and even where their huge data storage centres are located. We are even more reliant on the internet here in developed countries, and it can be difficult to escape internet-connected devices, which can now include your watch or refrigerator. All of these devices are gathering data about your habits, feelings, spending patterns and much more, which businesses can feed into proprietary algorithms to place ads and suggest content that is most likely to engage you. Along with this increasingly recognised power, internet platforms are also becoming more and more exposed to geopolitical risks, as opposed to the more ordinary business risks of individual markets imposing regulations that threaten or hinder their business models.
Thus it should not be surprising that the internet is no longer seen only as the great democratiser of access to information and markets but is also increasingly viewed as a tool for manipulation. We now find ourselves talking about the spread of propaganda, including extremist propaganda; disinformation (the intentional spreading of false information); and misinformation (the unknowing spread of information that is false or misleading). There is, of course, nothing new about propaganda, disinformation and misinformation, or even extremism.
The context is changing
What has changed is the technological and geopolitical environments in which disinformation, misinformation and propaganda of various kinds exist. You may associate some of these terms with the Cold War, during which the United States and Soviet Union were engaged in an ideological battle. Both deployed spies, carried out sabotage and spun news events against each other, trying to win hearts and minds to their cause while undermining support for – and morale in – the other country and its supporters/allies.
Once the Soviet Union had demonstrated it had viable nuclear weapons in 1949, both countries sought all means short of direct kinetic conflict to enhance their strategic positions, win other countries to their side in the conflict and undermine the ‘enemy’. This included economic means, proxy warfare and information warfare – what today would be variously called hybrid or non-linear warfare.
But during the Cold War the media were restricted to print and radio, newsreels shown in cinemas, and later television, each of which required large investments in infrastructure to operate and to produce and distribute content to a wide audience. Equally – unlike today – these mass media could not be precisely targeted at a highly specific audience of people most likely to be sympathetic to or influenced by the targeter’s message. Editors decided what was ultimately published or broadcast. Although the situation is somewhat different today, governments’ and non-state actors’ increasing use of disinformation as a tool resembles its use during the Cold War – a non-kinetic way of engaging in conflict and undermining the credibility, unity and resolve of an adversary.
Cyberspace has grown exponentially since the 1990s in terms of physical infrastructure, the amount of data it can carry and the number of devices connected to it around the world. Not only do these devices bring you information, but they collect information on you as well. Any activity done with, or in the vicinity of, internet-connected devices can potentially be monitored, stored and used for positive purposes, such as improving the services and products you use, or for negative ones, such as manipulating you – whether to spend money on products or to vote in a particular way. Ever more activities are being subsumed into cyberspace.
One could say that the defining characteristic of our era is convergence, which in the 21st century is characterised in particular by cross-border flows – of people, goods and finance, as well as political influence. For example, a social media user can be based anywhere in the world, as we saw with the troll farms in North Macedonia that published inflammatory material during the 2016 US presidential election simply because social media algorithms favoured any content that kept users engaged, driving traffic and therefore advertising revenue. So here we had ideas and money crossing borders in cyberspace, with financial incentives potentially affecting political behaviour in the United States.
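To make this incentive mechanism concrete, here is a deliberately minimal sketch in Python of a feed ranker that scores posts purely on predicted engagement. The posts, signals and weights are all invented for illustration – no real platform’s code or weighting is implied – but the point stands: accuracy is never an input, so inflammatory content that drives clicks rises to the top.

```python
# Hypothetical illustration: a feed ranker that scores posts purely on
# engagement signals. All data and weights are invented for this sketch.

posts = [
    {"id": 1, "topic": "local news report",  "clicks": 120, "shares": 10,  "comments": 15},
    {"id": 2, "topic": "outrage conspiracy", "clicks": 900, "shares": 340, "comments": 410},
]

def engagement_score(post):
    # A weighted sum of engagement signals (weights are invented).
    # Note that the accuracy of the content is not an input at all.
    return 1.0 * post["clicks"] + 3.0 * post["shares"] + 2.0 * post["comments"]

# Rank the feed: the inflammatory post wins because it keeps users engaged,
# which means more ad impressions and therefore more revenue.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(post["id"], post["topic"], engagement_score(post))
```

Run as written, the conspiracy post outranks the news report by a wide margin – the dynamic the North Macedonian troll farms were monetising.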
Internet-based disinformation, misinformation and propaganda come from both domestic and external sources. Internal and external information spaces now largely overlap. One of the biggest trends we see in disinformation is that much of it is now domestically generated in many countries. The Oxford Internet Institute found that the number of countries experiencing domestic disinformation campaigns has doubled in the past two years to 70 – using the criterion that there is evidence that at least one political party or government entity in each of these countries has engaged in social media manipulation.
All of this questionable information is eroding trust at every level – between individuals in communities, between ethnic groups, between citizens and their public institutions and governments, and between countries and governments – making the hard work of adapting governance to new realities all the more challenging both domestically and internationally.
Another aspect of this convergence is that critical national infrastructure is largely in private hands, which has a number of implications. Private companies are now on the front lines of national security in cyberspace. As a technical director at the U.S. National Security Agency put it in a recent interview: “You used to see a nation-state spent [sic] their time attacking a nation-state” entity like the Pentagon. “Now we’re seeing a broadening. … They’ll also go after companies, and universities, and non-profits, and civilian government agencies, and state governments.” An example of this is North Korea’s hack of Sony Pictures after that country’s government took exception to the Sony film The Interview.
Attributing cyber attacks to a particular source is not only extremely difficult for technical reasons; for a government, attributing an incident to another state actor is also a political statement, so governments sometimes leave it to private companies to announce attribution.
Internet-based platforms and technologies were designed in a culture that considered open access to information to be the answer to many social problems. Security was an afterthought, not hard-wired into the original structure of the internet or the software built on it. While social media have enabled many positive effects, illicit actors such as cyber criminals have learned to take advantage of the open design of internet infrastructure, software and devices. A cyber security expert in the United States recently looked at 24 of the top undergraduate computer science programmes there and found that only one required a security course. Security is thus normally an afterthought in software and device design rather than being mainstreamed into the design process. This has resulted in a great deal of financially motivated cybercrime, but has also enabled the stealing of data used to spread misinformation, as well as providing platforms for extremist content, resulting in national security and public safety challenges. For this reason, it is time to mainstream security into the design of all internet-connected hardware and software.
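To see concretely why security as an afterthought matters, consider a minimal, hypothetical Python example showing the same database lookup written two ways: insecurely, with the query assembled by string concatenation and therefore open to SQL injection, and securely by design, with a parameterised query. The table and data are invented purely for illustration.

```python
import sqlite3

# An invented demo table, used only for this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")
conn.execute("INSERT INTO users VALUES ('bob', 'user')")

user_input = "alice' OR '1'='1"  # a classic SQL injection payload

# Afterthought security: the query is built by string concatenation, so the
# payload is interpreted as SQL and returns every row in the table.
insecure = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Security by design: a parameterised query treats the input purely as data,
# so the payload matches nothing.
secure = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

print("insecure:", insecure)  # both rows leak - the injection succeeded
print("secure:", secure)      # [] - the payload is treated as a literal name
```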
The geopoliticised operating environment
The geopolitical situation has also evolved over the past few years. The United States, Russia and China are engaged in strategic competition with one another on all fronts – political, economic and technological – which is making the development and implementation of international norms in cyberspace, including in the cyber information space, extremely difficult. Related to this, extremist political rhetoric across the political spectrum has become more mainstream domestically in many countries around the world, such as the Philippines, Brazil, the United States, Hungary, Poland and more, making what was previously dismissed as empty ranting into the online void seem like normal behaviour.
A big part of what the emergence of cyberspace has changed is how information, including misinformation and extremist content, is spread. It is no longer a matter of badly photocopied pamphlets, short-range radio broadcasts or low-quality audio and video tapes. Social media such as Facebook, YouTube and Twitter in particular can help conspiracy theorists and extremist groups look more mainstream or professional with their clean, standardised online formats. As stated above, misinformation can come from official government accounts or organisations that were previously expected not to spread such information. Setting up an account is free and possible almost anywhere in the world, while monetising it by allowing ads to be placed on the account can not only make it look more normal to an unsuspecting user but also generate revenue for the account holders, potentially creating funds for them to further their activities both on- and offline. (For this reason, internet platforms have instituted stricter standards to control what types of content can be monetised.) When users see their peers sharing or engaging with content, they may be more likely to believe it or see it as legitimate.
The disintermediation of humans; mediation by machine
This brings us to another point I would like to briefly touch on – the proliferation of user-generated content and therefore the emphasis in some discussions on how media and information have become disintermediated, because a human editor is no longer controlling what is or is not published on an internet platform. This emphasis on disintermediation can obscure the fact that algorithms are increasingly mediating what information and content we see or do not see online; what services we are or are not offered; and what political or issue-based ads we might see, based on the data private companies have collected about us and advertisers have purchased. What we see is not being decided by a human editor, but a proprietary algorithm built and maintained by a private company that may or may not be auditable.
Algorithms have notoriously – and rightly – been blamed for promoting inflammatory or extremist content. Some former extremists have said they were radicalised in part by an endless stream of YouTube videos recommended by its algorithm that included extremist or other problematic content. The algorithm was designed to recommend videos that kept people engaged, giving the platform more time to present revenue-generating ads; sometimes these were poor-quality videos that enticed users with an anger-inducing message or a wild conspiracy theory. There were similar problems on Facebook and in Google Search, which would serve up conspiracy sites alongside reputable ones when users searched for something as innocuous as health information or historical topics. Both companies have taken steps to add more signals to their recommendation, search and newsfeed algorithms so that they do not prioritise false or extremist content. As of 2019, users of these platforms searching for local electoral information or basic health information are directed to public institutions rather than random or popular websites.
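A hedged sketch of what “adding more signals” could look like in practice: the engagement-only ranking from the earlier sketch, now tempered by a source-credibility signal so that highly engaging but low-credibility content is demoted. The credibility scores and the blending weight are invented; real platforms are understood to combine many proprietary signals.

```python
# Hypothetical re-ranking with an added credibility signal. The scores and
# the blending weight ALPHA are invented for illustration only.

posts = [
    {"source": "public health agency", "engagement": 300, "credibility": 0.95},
    {"source": "conspiracy blog",      "engagement": 900, "credibility": 0.10},
]

ALPHA = 0.9  # assumed weight: how strongly credibility tempers raw engagement

def adjusted_score(post):
    # Blend engagement with credibility so that engaging but low-credibility
    # content no longer dominates the ranking.
    return post["engagement"] * ((1 - ALPHA) + ALPHA * post["credibility"])

for post in sorted(posts, key=adjusted_score, reverse=True):
    print(post["source"], round(adjusted_score(post), 1))
```

With these invented numbers the health agency now outranks the conspiracy blog (286.5 against 171.0), despite generating a third of the raw engagement.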
Algorithms are extremely useful in mediating what would otherwise be totally unmanageable volumes of information. As internet platforms have started to accept that they must take on more responsibility for removing harmful content, they have built algorithms that remove content that is illegal and undeniably harmful, such as child pornography or terrorist propaganda by groups such as the Islamic State, often before users ever view it. In some cases this is straightforward: the branding of the Islamic State’s media outlets, for example, is easy to pick up with artificial intelligence (AI) filters.
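As an illustration of such filtering, automated removal of known prohibited media is commonly described in terms of comparing fingerprints (hashes) of uploads against a database of previously identified material. The sketch below uses an exact cryptographic hash purely to stay self-contained; real systems are generally described as using perceptual hashes that survive re-encoding and cropping, and the blocklist here is invented.

```python
import hashlib

# Invented blocklist: fingerprints of previously identified prohibited media.
KNOWN_PROHIBITED_HASHES = {
    hashlib.sha256(b"previously identified prohibited clip").hexdigest(),
}

def screen_upload(file_bytes: bytes) -> bool:
    """Return True if the upload matches known prohibited material.

    An exact SHA-256 match is used only to keep this sketch runnable;
    production systems are described as using perceptual hashing instead.
    """
    fingerprint = hashlib.sha256(file_bytes).hexdigest()
    return fingerprint in KNOWN_PROHIBITED_HASHES

# Matching uploads can be blocked before any user ever sees them.
print(screen_upload(b"previously identified prohibited clip"))  # True -> block
print(screen_upload(b"holiday video"))                          # False -> allow
```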
But we have to remember that algorithms are only as good as the information or instructions they have been given. What algorithms prioritise, what signals they focus on, what historical data they have or do not have all depend on who builds and operates the algorithm. For us to know if one is not working as intended and have it fixed requires people – engineers and policymakers – who both care and are empowered with the time and resources to monitor and check how their algorithms are performing. Do these companies or the governments of the markets they operate in feel a responsibility to protect freedom of speech or the privacy of their users? Are they protecting the increasingly intimate and comprehensive data gathered on consumers and citizens? Accountability and data governance are rightly becoming national security concerns, and have implications for how AI algorithms are designed, used and subjected to audit.
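What such monitoring might look like at its simplest is a recurring audit check that raises an alert when an algorithm drifts away from its intended behaviour. The hypothetical sketch below flags days on which the share of recommendations later found to violate policy exceeds a tolerance; every figure, including the threshold, is invented for illustration.

```python
# Hypothetical audit check over invented daily statistics: alert when the
# share of policy-violating recommendations drifts above a set tolerance.

DAILY_STATS = [
    {"day": "Monday",  "recommendations": 100_000, "flagged": 120},
    {"day": "Tuesday", "recommendations": 100_000, "flagged": 950},
]

ALERT_THRESHOLD = 0.005  # assumed tolerance: 0.5% of recommendations flagged

for stats in DAILY_STATS:
    flagged_share = stats["flagged"] / stats["recommendations"]
    if flagged_share > ALERT_THRESHOLD:
        # In practice this would page an engineer or open an incident ticket.
        print(f"{stats['day']}: ALERT - flagged share {flagged_share:.2%}")
    else:
        print(f"{stats['day']}: ok - flagged share {flagged_share:.2%}")
```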
Governments around the world are grappling with these questions as they come to realise the national security implications of internet platforms, the data they collect and the ways in which algorithms can manipulate public perceptions and behaviour. Internet platforms and other technologies mostly developed in the private sector are becoming geopoliticised and, as geopolitical competition increases, different spheres of power are developing different models for the governance of internet platforms.
The regulatory implications of national values and culture
The United States has an absolute cultural and legal norm of protecting freedom of speech. Although there are legal punishments for hate crimes, hate speech itself is not criminalised in the United States, unlike in some European countries. This is coupled with the fact that, generally speaking, Americans desire as little regulation of speech and private business as possible. Section 230 of the 1996 Communications Decency Act shields internet platforms from liability for content posted by third parties, such as Facebook users, while allowing them to remove content according to their own policies. However, there is now bipartisan discussion about whether this shield should be altered to better enable action against harmful content beyond that which is clearly illegal or against the policies of a particular platform.
In many European countries hate speech is criminalised. Europe is more willing to regulate speech, while at the same time trying to create a business- and innovation-friendly environment. Individual countries are passing laws on hate speech and fake news as these have proliferated and driven increases in anti-migrant and other targeted violence. There is widespread public support for requiring internet platforms to observe silence periods before election day, as French traditional media do on the Saturday before Sunday elections in France.
For China, the security of the state and social harmony are paramount. The Chinese government can intervene directly in the operations of private firms and require them to support its priorities. Yet Chinese companies, although not faced with the thorny problem of protecting freedom of speech or expression, still must employ significant human and technical resources to remove prohibited content.
Russia, like China, views information content as a national security issue, and it is therefore subject to more state regulation than is traditionally the case in the West, although not to the same extent as in China. Like the United States, which is weighing alleged national security risks from Chinese internet platforms, infrastructure and investments, Russia is concerned about foreign investment in strategic tech companies, and in 2019 was reportedly considering a law to limit foreign investment in companies that it considers strategic assets. Yandex, Russia’s biggest tech company, which offers search, taxi hailing and other internet-platform-based services, proposed another solution this week. It would offer a “golden share”, currently held by a Russian state bank, to a new entity called the Public Interest Foundation, which would reportedly “defend the country’s interests” and have the ability to temporarily remove Yandex’s management, block a potential acquisition of the company and nominate two permanent board members. Yandex’s CEO, Arkady Volozh, said this would balance three necessities:
- “Leaving control of the company in our hands,
- maintaining the confidence of our international investors in the prospects for Yandex’s business and
- defending the country’s interests.”
Mr Volozh sent an email to his employees in which he assured them that they could continue to work and innovate as freely as before.
Cooperation and political will to harness this technological revolution
I do not think US or European businesses would like to emulate the Chinese or this proposed Russian model of significant state influence over private firms whose operations happen to have strategic and national security implications. But if such private firms want to stave off harmful legislation that is rushed through the law-making system because of domestic political pressure or direct foreign government interference, they will likely need to become more transparent and work with both government and civil society to institutionalise and regularise the currently ad-hoc solutions to harmful information and other national security risks.
Indeed, Facebook employees recently sent a letter to CEO Mark Zuckerberg pleading for the platform to adopt more of the solutions that the company’s own employees have devised to mitigate misleading election-related content in all the markets in which it operates. To quote from a recent piece by Tom Wheeler for the Brookings Institution:
“As the Industrial Revolution transformed the Western world’s economy and the lives of its citizens, it became apparent that the marketplace rules that had worked for agrarian mercantilism were no longer adequate for industrial capitalism. When the interests of the industrialists clashed with the broader public interest, the result was a new set of rules and regulations designed to mitigate the adverse effects resulting from the exploitation of the new technologies. … The results of previous such battles have been not only the protection of competitive markets, consumers, and workers, but also the preservation of capitalism through the establishment of behavioural expectations.”
The challenge we face is that of adapting our governance structures to keep up with the way in which technology once again transforms our economies, our lives and how we relate to one another. We need to retain sound principles and mitigate the negative externalities that accompany every technological revolution. Human beings have come to dominate the planet in large part because of their capacity to cooperate and solve complex problems. We will overcome the challenges posed by the internet, but again we will only do so with constructive cooperation, as we have done in the past when faced with world-changing technological innovations.