Below are the key trends, translated into digestible numbers.
1. AI research is exploding – and computer science is increasingly about AI
Between 2013 and 2023, the number of AI papers in computer-science venues more than doubled, from about 102,000 to more than 242,000 a year. AI now makes up 41.8% of all computer-science publications, up from 21.6% a decade ago.
China leads on volume: in 2023, it produced 23.2% of all AI papers and 22.6% of all citations, more than any other country. The U.S., however, still leads on high-impact work, contributing the largest share of the 100 most cited AI papers over the past three years.
2. Models are bigger, hungrier – and dirtier
On the technical side, the report shows a relentless scale-up:
- Training compute for notable models doubles roughly every five months.
- Dataset sizes for large language models double about every eight months.
- Power consumption for training doubles about once a year.
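Compounded over even a two-year horizon, those doubling times are dramatic. A minimal sketch, using nothing but the doubling times above:

```python
# Project growth factors from the report's doubling times:
# growth over t months = 2 ** (t / doubling_time_in_months).

doubling_months = {
    "training compute": 5,      # doubles roughly every 5 months
    "dataset size": 8,          # doubles roughly every 8 months
    "training power draw": 12,  # doubles roughly every 12 months
}

horizon = 24  # months
for quantity, d in doubling_months.items():
    factor = 2 ** (horizon / d)
    print(f"{quantity}: ~{factor:.0f}x after {horizon} months")

# training compute: ~28x, dataset size: ~8x, power draw: ~4x
```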
The environmental side is much less glamorous. Training AlexNet in 2012 emitted about 0.01 tons of CO₂. Fast-forward to today: GPT-3 is estimated at 588 tons, GPT-4 at 5,184 tons, and Llama 3.1 405B at roughly 8,930 tons of CO₂ – the annual carbon footprint of hundreds of average Americans, for a single model.
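To make the "hundreds of Americans" comparison concrete, here is a back-of-the-envelope conversion. The per-capita figure is my assumption (roughly 16 tons of CO₂ per average American per year), not a number from the report:

```python
# Person-year equivalents of the reported training emissions.
# PER_CAPITA_T is an assumption (~16 t CO2 per average American per year).
PER_CAPITA_T = 16.0

emissions_t = {
    "GPT-3": 588,
    "GPT-4": 5_184,
    "Llama 3.1 405B": 8_930,
}

for model, tons in emissions_t.items():
    print(f"{model}: ~{tons / PER_CAPITA_T:.0f} US person-years of emissions")

# GPT-3: ~37, GPT-4: ~324, Llama 3.1 405B: ~558
```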
3. Hardware is getting better – a lot better
There is a partial counterweight: hardware efficiency. According to the analysis in the report, modern AI chips have seen:
- 43% annual growth in raw performance (16-bit floating-point ops), roughly doubling every 1.9 years.
- 30% annual drop in cost per unit of performance.
- 40% yearly improvement in energy efficiency.
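The doubling and halving times are just these growth rates restated; the conversion is ln(2) / ln(1 + annual rate). A quick sanity check of all three figures:

```python
import math

# Doubling (or halving) time implied by a compound annual rate:
# years = ln(2) / ln(1 + rate)
def doubling_years(rate: float) -> float:
    return math.log(2) / math.log(1 + rate)

print(f"+43%/yr performance -> doubles every {doubling_years(0.43):.1f} years")      # ~1.9
print(f"+40%/yr efficiency  -> doubles every {doubling_years(0.40):.1f} years")      # ~2.1
print(f"-30%/yr cost        -> halves every {doubling_years(1/0.7 - 1):.1f} years")  # ~1.9
```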
So the hardware is sprinting just to keep up with the appetite of today’s models.
4. Using AI has become astonishingly cheap
If training is expensive, inference (actually using the models) is racing in the opposite direction. To get performance roughly at GPT-3.5 level on the MMLU benchmark, the cost per million tokens fell from about 20 dollars in November 2022 to 7 cents with Gemini-1.5-Flash-8B in October 2024 – a more than 280-fold drop in under two years.
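Those two price points pin down the implied annual rate of decline. A back-of-the-envelope sketch, taking the stated dates (November 2022 to October 2024, about 23 months) at face value:

```python
start, end = 20.00, 0.07   # $ per million tokens, per the report
months = 23                # November 2022 to October 2024

fold = start / end              # overall reduction
annual = fold ** (12 / months)  # annualized rate of decline

print(f"~{fold:.0f}x cheaper overall, ~{annual:.0f}x cheaper per year")
# -> ~286x cheaper overall, ~19x cheaper per year
```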
Depending on the use case, the report estimates that LLM inference prices have been falling anywhere from 9 to 900 times per year. The bottom line: powerful AI is rapidly becoming a commodity.
5. Industry dominates frontier models, academia dominates ideas
Nearly 90% of the “notable” AI models released in 2024 came from industry, up from 60% just one year earlier. The U.S. leads comfortably here: 40 notable models versus 15 from China and only 3 from all of Europe combined.
Yet when you look at the most cited papers, academia still rules. Universities remain the main producers of the top 100 AI papers each year. The brains and the GPU clusters, in other words, increasingly live in different institutions.
Meanwhile, the frontier is getting crowded. On the Chatbot Arena leaderboard, the relative Elo score gap between the best and 10th-best models shrank from 11.9% to 5.4% in a year, and the top two are separated by just 0.7%.
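For intuition about what an Elo gap means in practice: under the standard Elo model, a score gap maps to an expected win rate via 1 / (1 + 10^(-gap/400)). The point gaps below are hypothetical, chosen only to illustrate the mechanics (the report quotes relative score differences, not raw point gaps):

```python
# Expected win rate implied by an Elo point gap:
# P(A beats B) = 1 / (1 + 10 ** (-gap / 400))
def win_prob(gap: float) -> float:
    return 1 / (1 + 10 ** (-gap / 400))

for gap in (100, 50, 10):  # hypothetical gaps, for illustration only
    print(f"{gap} Elo points -> {win_prob(gap):.1%} expected win rate")
# 100 -> 64.0%, 50 -> 57.1%, 10 -> 51.4%
```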
6. Open models are catching up fast
In early 2024, the best closed-weight model outperformed the best open-weight model by about 8 percentage points on the Chatbot Arena benchmark. By February 2025, that gap had shrunk to 1.7 percentage points.
Smaller models are also getting astonishingly good. In 2022, you needed something like PaLM with 540 billion parameters to score above 60% on MMLU; by 2024, Phi-3-mini does that with just 3.8 billion parameters – a 142× reduction in size.
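The 142× figure is simply the parameter ratio, using only the two sizes quoted above:

```python
palm_b, phi3_mini_b = 540, 3.8  # parameters in billions, per the report
print(f"~{palm_b / phi3_mini_b:.0f}x fewer parameters for the same 60%+ MMLU bar")
# -> ~142x
```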
7. Business is “all-in” – but the money impact is still modest
Corporate AI investment hit 252.3 billion dollars in 2024, more than thirteen times the level of 2014. Private investment alone grew 44.5% in a year; M&A activity rose 12.1%.
The United States dominates private AI spending with 109.1 billion dollars, nearly 12× China’s 9.3 billion and 24× the U.K.’s 4.5 billion. In generative AI specifically, global private funding reached 33.9 billion dollars, up 18.7% from 2023 and now more than 20% of all AI private investment.
Inside firms, 78% of surveyed organizations reported using AI in 2024, up from 55% the year before, and 71% said they use generative AI in at least one business function. However, most reported cost savings below 10% and revenue gains below 5% in the functions where AI is applied.
So AI is everywhere, but its financial impact is still in the “incremental” rather than “revolutionary” category.
8. AI is already in medicine, logistics and transport
The report devotes an entire chapter to science and health. A few striking points:
- The U.S. FDA approved 223 AI-enabled medical devices in 2023, up from just 6 in 2015.
- New medical foundation models, such as Med-Gemini and domain-specific systems for radiology or ophthalmology, are arriving quickly.
- On the MedQA benchmark for clinical knowledge, OpenAI’s o1 hits 96%, improving nearly 30 percentage points over leading models from late 2022.
On the streets, AI is doing more than just demos. Waymo now completes over 150,000 autonomous rides per week, while Baidu’s Apollo Go robotaxis operate across multiple Chinese cities.
9. Governments are finally acting – mostly with money and regulation
In 2024, U.S. federal agencies introduced 59 AI-related regulations, more than double the number in 2023 and issued by twice as many agencies. Across 75 countries, mentions of AI in parliamentary debates rose 21.3% year-on-year; since 2016 they’ve increased more than ninefold.
Meanwhile, governments are writing very large cheques:
- Canada: 2.4 billion dollars for AI infrastructure
- China: 47.5 billion-dollar semiconductor fund
- France: 117 billion euros for AI infrastructure
- India: 1.25 billion dollars for its IndiaAI mission
- Saudi Arabia: “Project Transcendence,” a 100-billion-dollar AI initiative
Internationally, AI safety institutes are popping up from Washington and London to Seoul, Brussels and Canberra.
10. Responsible AI: more talk, more incidents, slow practice
The AI Incidents Database recorded 233 AI-related incidents in 2024, a 56.4% jump from 2023 and the highest number yet.
New benchmarks for safety, bias and hallucinations (such as HELM Safety, AIR-Bench, FACTS) are emerging, and the average transparency score of major model developers rose from 37% to 58% in the updated Foundation Model Transparency Index. Yet surveys suggest a gap between identifying responsible-AI risks and actually mitigating them inside organizations.
Bias remains stubborn: even “de-biased” models like GPT-4 or Claude 3 still show implicit racial and gender biases in controlled evaluations.
11. Education is scrambling to catch up
Roughly two-thirds of countries now offer or plan to offer K–12 computer-science education, double the share in 2019. But access remains uneven, especially in parts of Africa where many schools still lack reliable electricity.
In the United States, the number of graduates with a master’s degree in AI nearly doubled between 2022 and 2023. At the same time, 81% of high-school CS teachers think AI should be part of basic CS education, but less than half feel prepared to teach it.
12. Public opinion: optimistic, anxious, and regionally split
Globally, 55% of people now say AI products and services bring more benefits than harms, up from 52% in 2022. But the map is fractured:
- China (83%), Indonesia (80%), and Thailand (77%) are strongly optimistic.
- Canada (40%), the U.S. (39%), and the Netherlands (36%) are still skeptical.
Interestingly, the sharpest increases in optimism came from countries that were previously the most wary—Germany, France, the U.K., Canada and the U.S. all registered notable jumps. At the same time, trust is fragile: only 47% of people believe AI companies will protect their data, and fewer people than last year think AI systems are fair and unbiased.
What about the European Union?
According to the report, the European Union shows up above all as a regulator and standard-setter, more than as a hardware or frontier-model powerhouse. Its role is basically fourfold.
1. Regulatory trailblazer: the AI Act and the “Brussels effect”
The report repeatedly frames the EU as the first major economy with a comprehensive AI law. The EU AI Act is described as “the first comprehensive regulatory framework for AI in a major global economy” and as a risk-based regime where providers of high-risk systems carry most of the obligations (“the act categorizes AI by risk, regulating them accordingly and ensuring that providers—or developers—of high-risk systems bear most of the obligations” (p. 192)).
The Act:
- bans specific practices (social scoring, manipulative systems, biometric categorization based on “sensitive characteristics”) (p. 329)
- imposes stringent transparency and reporting duties
- is explicitly presented as building on the EU’s already strict privacy framework (GDPR), with a significantly restrictive approach to generative AI (“the Act is significant for its restrictive nature, building on the already stringent EU privacy regulations” (p. 329)).
The text implicitly aligns this with a kind of Brussels effect in AI: EU rules, because of market size and legal design, are likely to shape global practices, just as with data protection and competition law. This is reinforced by the French competition authority’s use of EU IP rules to fine Google over the training of Bard/Gemini on French news content (p. 329), casting the EU not only as a regulator of uses of AI but also of training data and platform power.
2. Institutional hub: AI Office and codes of practice
The report highlights the creation of the European Commission’s AI Office as the operational core of this regulatory project. The AI Office plays “a key role in implementing the Act, enforcing standards for general-purpose AI models, coordinating the development of codes of practice, and applying sanctions for offenses under the Act” (p. 332).
Two functions stand out:
- Oversight of general-purpose / foundation models (where much of the current power is concentrated).
- Development of a Code of Practice for General-Purpose AI, through expert working groups on transparency, copyright, risk identification, risk mitigation and internal governance; this is meant as an interim compliance path “until a finalized standard is published” (p. 335).
So the EU is not only passing laws, it is building bureaucratic capacity: a staffed office, soft-law instruments, and iterative codes of practice that will likely radiate outwards to non-EU firms that operate in the Single Market.
3. Global responsible-AI actor and safety partner
In the responsible AI chapter, the EU appears alongside the OECD, UN and African Union as one of the key international organizations that, in 2024, published frameworks on transparency, explainability and trustworthy AI (“several major organizations—including the OECD, European Union, United Nations, and African Union—published frameworks to articulate key RAI concerns such as transparency and explainability, and trustworthiness” (p. 164)).
On AI safety, the EU is:
- listed among the jurisdictions that pledged or launched AI safety institutes at or after the Seoul Summit, as part of a global network focused on advanced-model risks (p. 328).
- a founding member of the International Network of AI Safety Institutes, chaired by the U.S., which aims to coordinate testing of foundation models, synthetic-content risk management and safety research funding (“initial members including … the European Union, France, Japan…” (p. 336)).
So the EU is depicted not just as a unilateral regulator, but as a multilateral norm entrepreneur in RAI and safety architectures.
4. Economic and investment player – but civilian-oriented
On the money side, the report works mostly with national-level European data, but still draws some regional conclusions:
- It shows Europe as a rapidly scaling public investor in AI: public AI contracts in Europe in 2023 are said to be around 67 times the 2013 level, compared with a fifteenfold increase in the U.S. (p. 359).
- The gap in public AI spending between the U.S. and Europe widened until 2020 but has narrowed in the last three years, suggesting Europeans are catching up in aggregate public investment (p. 359).
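Those decade multiples translate into very different compound growth rates. A minimal check, using the standard CAGR formula on the report's ten-year window:

```python
# Compound annual growth implied by a ten-year multiple: multiple ** (1/10) - 1.
def cagr(multiple: float, years: int = 10) -> float:
    return multiple ** (1 / years) - 1

print(f"Europe: 67x over 2013-2023 -> ~{cagr(67):.0%}/yr")  # ~52%
print(f"U.S.:   15x over 2013-2023 -> ~{cagr(15):.0%}/yr")  # ~31%
```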
However, the structure is very different from the U.S.:
- European public AI money is concentrated in general public services, education and health, which together account for ~84% of public AI tenders in 2023, while defense represents only 0.84% (p. 362).
The report also notes that it cannot yet fully capture EU-level grants because of data limitations, explicitly mentioning the European Union and China as missing from the global grant-data picture (p. 353).
So as an economic actor, “Europe” (including the EU) appears as:
- increasing its public AI investment,
- doing so with a civilian, welfare-state orientation (public services, education, health),
- but still less visible than the U.S. in defense-driven AI spending and in private AI mega-deals.
In one line
In the report’s narrative, the European Union is primarily the global legal architect of AI governance—pioneering binding regulation (AI Act), building enforcement machinery (AI Office, codes of practice), shaping responsible-AI norms, and participating in safety networks—while ramping up, but not leading, in the more hardware- and defense-driven dimensions of the AI race.