The last few years have witnessed striking progress in artificial intelligence, driven by the machine learning revolution. However, as ChatGPT illustrates, humanity is not yet prepared to govern extremely powerful AI systems: it remains difficult to understand how they function and to direct their influence.
To assist international policy actors in fostering sustainable technological development, we’re publishing this short guide to recent developments in artificial intelligence.
1-sentence summary: Artificial Intelligence (AI) has transformative potential for personal well-being, economic growth, and scientific advancement, but risks of misaligned systems range from exacerbated inequality to loss of human control, with solutions including research, auditing, and standard-setting.
Definition: Artificial intelligence is the science of building systems that can understand, plan, and execute without continuous human supervision. Recent breakthroughs, like GPT-4 (the foundation model behind ChatGPT), use deep learning to train autonomously on large datasets and are then refined through reinforcement learning from human feedback.
The technical details: how artificial intelligence (specifically machine learning) works:
For the most advanced systems, AI = algorithms + compute + data
Data is the information used to train the system. More data = better systems.
Recently, most advances have come from using more compute. Scaling compute leads to emergent AI capabilities (i.e. unpredictable advances). This also means the cost of developing AI is rising rapidly, though so is the potential for profit.
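The diminishing returns and rising costs of scaling can be illustrated with a toy power law of the kind reported in scaling-law studies. The function and constants below are made up for demonstration, not measured from any real model:

```python
# Illustrative only: a toy power law loosely inspired by published
# scaling-law studies. The constants a and alpha are invented.
def toy_loss(compute: float, a: float = 10.0, alpha: float = 0.05) -> float:
    """Hypothetical model error as a function of training compute."""
    return a * compute ** -alpha

# Each 1000x increase in compute buys a smaller absolute improvement,
# while the compute bill grows a thousandfold.
for compute in [1e3, 1e6, 1e9]:
    print(f"compute={compute:.0e}  toy loss={toy_loss(compute):.2f}")
```

Under this (hypothetical) curve, error keeps falling as compute grows, but each step of improvement costs vastly more than the last, which is why development costs are rising so quickly.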
AI is a ‘black box’. The iterative process by which AI systems are developed means it is currently impossible to fully understand what is happening inside them, which makes it difficult to align a system with human requirements.
Currently, the most advanced systems are ‘large language models’ (e.g. the system underlying ChatGPT). Such ‘foundation models’ are extensively trained on large datasets and human feedback and can then be used in diverse applications.
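The two training phases described above can be sketched very roughly in code. This is a deliberately toy illustration, not a real training loop; every name and value here is invented for demonstration:

```python
# A highly simplified sketch of the two phases described above:
# self-supervised pretraining on a large corpus, then refinement with
# human feedback. All names and values are illustrative stand-ins.
def pretrain(corpus):
    """Phase 1: learn patterns by training across a large dataset."""
    model = {"knowledge": len(corpus)}  # stand-in for learned weights
    return model

def refine_with_feedback(model, ratings):
    """Phase 2: adjust behaviour using human preference ratings (RLHF)."""
    model["alignment"] = sum(ratings) / len(ratings)
    return model

base = pretrain(["web text", "books", "code"])      # the 'foundation model'
assistant = refine_with_feedback(base, [1, 0, 1])   # e.g. thumbs up/down
```

The key point for policy is the split itself: the expensive pretraining step produces a general-purpose foundation model, and comparatively cheap refinement then adapts it to many different applications.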
The international AI landscape:
- The most advanced systems are concentrated in a few big companies, but once a system has been trained, it is cheap to reproduce and run on consumer hardware.
- There have been numerous international attempts at AI ethics standards, with little implementation.
- The EU AI Act is the leading attempt to legislate safe AI systems.
Opportunities from AI:
- Economic growth through increased productivity from human augmentation;
- Reduced inequality through the democratization of training and education;
- Increased cooperation through enhanced facilitation and mediation;
- Scientific advancement leads to better medicine, clean energy, and more.
Risks from AI:
- Increasing inequality, bias, and misinformation: incomplete training datasets and the black-box, improvisational nature of AI systems make it difficult to assess the authenticity of generated media and can lead to discrimination against underrepresented populations
- Unprecedented instability as automation drastically changes key organizing features of our societies - e.g. employment, community organization, international affairs, and social media
- Autonomous misinformation and weapon systems that increase the number of conflicts: highly targeted, low-cost approaches lower the barrier to deployment and increase the willingness of terrorist and state actors to take risks
- Loss of human control and agency, driven by the increasing returns to larger models, recursive self-improvement in AI agents, and exploitation of vulnerabilities in the human psyche - potentially leading to the capture of organizations, markets, regulators, and entire populations, because AI goals incompletely capture human values
Key governance gaps in AI:
- Insufficient democratic governance of AI and data regulation (decisions about AI development and access remain concentrated in few hands)
- No internationally implemented standards for safety and interpretability of AI behavior
- Barriers to cooperation between major AI developers
Solutions for the governance of AI:
- Government-funded AI alignment research
- Advocacy for AI standard-setting
- Mediation between AI powers
- Use bans in certain cases (e.g. Lethal Autonomous Weapon Systems, nuclear command, control, and communications (NC3))
- Investment in cybersecurity for advanced AI models
Case studies in risks from AI:
- GPT-4 does not notice gaps in its own knowledge and constantly improvises, requiring careful vetting of which outputs are real and which are ‘hallucinated’
- AI deepfakes of Trump - and many other people
- You can find more on incidentdatabase.ai
Prominent experts speaking about the risks and solutions:
- Geoff Hinton, ‘Godfather of AI’, on AI risks
- Stuart Russell, author of the main AI textbook, speaks at WEF
If you would like to suggest additions or have other feedback, please email firstname.lastname@example.org.