In this interview, Dr Joe Devanny (King’s College London) and Catherine Stihler (Creative Commons) examine whether the UK’s approach is fit for purpose in the age of digital transformation, especially with regard to Artificial Intelligence, and discuss the challenges and opportunities this presents. The full series on the G20 can be read here.
In your view, how has digitisation transformed the international system in recent history?
Dr Joe Devanny: Digitisation has not transformed the international system, but it has created a new series of challenges and opportunities for states to address and exploit. Technological transformation is firmly on the international agenda – for example, in the UN’s Global Digital Compact – but the underlying realities of the international system have not shifted.
“Digitisation has not transformed the international system, but it has created a new series of challenges and opportunities for states to address and exploit.”
Where states have significantly increased their ‘cyber power’ over the last 20 years – for example, in China – this is principally symptomatic of wider, non-cyber factors in the international system. And where states perceive vulnerabilities as a result of dependence on technology – for example in the cyber security of critical infrastructure, or the domestic political threat from digital disinformation – these can be seen as new examples of perennial problems against which states have always needed to defend.
Catherine Stihler: We live increasingly interconnected lives, which makes access to information – and the speed of that access – both a challenge and an opportunity. At the tap of your phone you can now access knowledge that may previously have taken weeks to gather. Yet the rise of misinformation and disinformation can be witnessed daily, leaving citizens questioning what is real and what is not.
Back in 1995 Fukuyama recognised this in his book ‘Trust: The Social Virtues and the Creation of Prosperity’: “By contrast, people who do not trust one another will end up cooperating only under a system of formal rules and regulations, which have to be negotiated, agreed to, litigated, and enforced, sometimes by coercive means. This legal apparatus, serving as a substitute for trust, entails what economists call “transaction costs.” Widespread distrust in a society, in other words, imposes a kind of tax on all forms of economic activity, a tax that high-trust societies do not have to pay.”[1]
What we witness today in our international system is a breakdown at the very time when we need international institutions to step up. At last month’s UN General Assembly, few of the key leaders attended, signalling how little importance they attach to the meeting. Without political support, no international organisation can succeed.
“[…] we need clear rules of engagement, accepted internationally, to provide a digital operating framework for governments, organisations and society at large.”
In order to ensure trust can thrive, we need clear rules of engagement, accepted internationally, to provide a digital operating framework for governments, organisations and society at large.
How do you perceive the continual, and accelerating, sophistication of technology affecting public space, privacy and equality? And how can it be regulated both domestically and internationally?
Dr Joe Devanny: Technological change – and the pace of that change – certainly affects the public space, our privacy and equality. A good example of how the accelerating pace of technological change is shaping government reactions is the last year of frenzied activity relating to generative Artificial Intelligence (AI).
International forums had discussed AI for several years beforehand, and states had increasingly issued public statements or developed national strategies about AI, but the urgency of this agenda has undoubtedly intensified in the last year. Regulation is certainly possible, but there are inevitable trade-offs and consequences to different approaches. And it remains the case that some states have more choices available than others. For example, the recent initiatives by both the European Union and the United States to regulate AI are examples of what can be done (and indeed of the limits of what can be done) by two powerful international actors. Most states in the international system cannot hope to have the same impact. That is why international cooperation is so important, notwithstanding how difficult it is to reach substantive agreement.
Catherine Stihler: The rise of AI is proving that the acceleration witnessed in the past 20 years with the Internet is once again morphing, at ever-increasing speed, into something different and more disruptive. No longer do we need websites to navigate information. Just as mobile technology has transformed the way we work, AI will take change to another level.
AI also means changes which we cannot currently even imagine. The EU alone is in the final stages of agreeing a cross-border system of AI regulation spanning its 27 independent member states. This will set a global standard in AI which countries that want to trade with the EU will have to follow. Just like GDPR before it, the EU is pushing for openness, transparency and accountability – all important to gaining the public’s trust.
“Without the pluralist system of international organisations helping nation states agree to rules which transcend their borders, we are not going to see technology regulated at a global level anytime soon.”
Without the pluralist system of international organisations helping nation states agree to rules which transcend their borders, we are not going to see technology regulated at a global level anytime soon. This will have an impact on standard setting, norms and holding AI companies to account globally. It may also create unlevel playing fields between competing jurisdictions, with some nations wilfully gaming the system, potentially for nefarious activities, and undermining the entire order. How can we set standards which are not regulatory? Creative Commons set a global standard for sharing because we created a solution to the problem of all-rights-reserved copyright. Today we are being called upon to apply our creative thinking to solve the challenges which creators face concerning AI.
What security risks and threats do you see, or anticipate, with technology advancing at a faster rate than technological policy?
Dr Joe Devanny: There is certainly a risk that innovation, occurring at a faster rate than policy or regulation can match, will lead to adverse consequences. This was the premise of the recent Bletchley Park Summit on AI safety.[2] As so much of the innovation occurs in the private sector – and in a relatively small number of countries – there is a clear need for effective dialogue between these private sector actors and governments. The challenge for governments is to strike the right balance between supporting innovation and reducing risk. This is particularly difficult in circumstances in which the potential risks are not well understood. In this context, initiatives such as the AI Policy Observatory, supported by the Organisation for Economic Co-operation and Development (OECD), are valuable ways in which states can improve both shared understanding of the potential and risks of new technologies, and also share knowledge about how to pursue effective policy-making.
Catherine Stihler: I think the security risks concern holding to account the technology companies who are driving change. We know from experience that voluntary measures do not work, and if we want AI that serves the public interest, it is hard to see how, with power concentrated in the hands of a few US companies, we will see transparency, responsibility and accountability.
From a policy-specific angle, there are clearly important political and ethical challenges for policymakers to consider when it comes to ensuring fundamental rights are preserved within AI frameworks, particularly in areas such as defence and national security, healthcare, and access to basic (digital) services.
Which countries are leading the way in their strategies to mitigate the challenges, and seize the opportunities, of the digital age?
Dr Joe Devanny: The question highlights the two sides of this policy dilemma: how best to seize opportunities whilst also mitigating challenges. The more prescriptive and prohibitive the regulation, the greater the risk of stifling innovation and missing out on opportunities. Conversely, the looser and more permissive the framework, the greater the risk of harm.
“There is a genuine competition for influencing the world’s digital future.”
Using the example of AI again, the European Union and the United States are showcasing different approaches to this challenge, the former based on legislation and the latter – to date – on executive action. Both approaches are proceeding in tandem and it is too early to say what their combined impact will be on the future development of AI. Similarly, taking an early lead does not mean you will stay in front. For example, China has started to articulate a more global vision for its approach to AI. There is a genuine competition for influencing the world’s digital future.
Catherine Stihler: The EU is leading the way in rule setting with the General Data Protection Regulation (GDPR), the Digital Services Act (DSA) and the Digital Markets Act (DMA). Its new rules on AI could transform how AI is regulated globally.
“The UK Government’s AI Safety Summit was an important exercise in attempting to bring together multiple jurisdictions in search of common solutions to shared problems.”
The UK Government’s AI Safety Summit was an important exercise in attempting to bring together multiple jurisdictions in search of common solutions to shared problems. Having said this, it remains unclear whether developed and developing, as well as democratic and non-democratic, jurisdictions will be willing and able to find common ground when it comes to regulating AI and creating a shared vision for the digital age.
What can the UK Government be learning, and prioritising, from these strategies?
Dr Joe Devanny: The recent Bletchley Park AI Safety Summit was prominently associated with the current Prime Minister, but AI-related deliberations in Whitehall pre-date and will inevitably outlive this premiership. The UK already has a national AI strategy, a defence AI strategy, and has prioritised technological innovation in its national security strategy (currently called an ‘Integrated Review’).
“The Government deserves credit for recognising the national priority of having a thriving sector of science and technology, but the boldness of its ambition is not matched by the necessary scale of resources.”
Arguably, the most important lesson the UK should learn from recent experience is that the UK’s freedom of action is limited. It can achieve more through cooperating with other states than it can achieve alone. The Government deserves credit for recognising the national priority of having a thriving sector of science and technology, but the boldness of its ambition (for the UK to be a ‘science and technology superpower’ – irrespective of the vagueness of the term) is not matched by the necessary scale of resources. Increasing investment and reducing hyperbole would not be a bad place to start.
Catherine Stihler: The UK Government was wise to consult other countries at the AI Summit in November, which provided an effective way to listen to and learn from others.
“By becoming the world’s largest advocate for open source, [the UK Government could demonstrate] its ongoing ability to be a global leader in forefront technologies.”
The UK Government should become the world’s biggest advocate for open source, push open technologies as a differentiator against closed systems which are not transparent and accountable, and lead on public interest AI. Becoming the world’s largest advocate for open source could provide the UK Government with an effective new voice, demonstrating its ongoing ability to be a global leader in forefront technologies.
Dr Joe Devanny is a Lecturer in the Department of War Studies and deputy director of the Centre for Defence Studies at King’s College London. He is writing in a personal capacity.
Catherine Stihler has been an international champion for openness as a legislator and practitioner for over 25 years. In August 2020, Catherine was appointed Chief Executive Officer and President of Creative Commons, a global non-profit organisation that helps overcome legal obstacles to the sharing of knowledge and creativity to address the world’s pressing challenges. More recently Catherine has been called upon to apply her expertise in public interest technology to Generative AI serving on the Governor of Pennsylvania’s AI Task force and the World Economic Forum’s AI Alliance. She was selected to the Fellowship of Scotland’s National Academy, the Royal Society of Edinburgh (RSE), in 2022.
[1] Fukuyama, Francis. 1995. Trust: The Social Virtues and the Creation of Prosperity. New York: Free Press.
[2] Foreign, Commonwealth & Development Office and Department for Science, Innovation and Technology, AI Safety Summit 2023, Gov.uk, November 2023 https://www.gov.uk/government/topical-events/ai-safety-summit-2023