Are AI researchers concerned about the existential threat of AI? Our survey suggests not

Ananya Karanam, University College London

Introduction:

“If anyone builds it, everyone dies.” That’s how one prominent artificial intelligence researcher – known to some as the “guru of the AI apocalypse” – described the risk of advanced AI in a book published last month.

For the last few years, headlines have been full of similar warnings from AI “doomers,” many of whom are the same people researching and building these systems day to day. Computer science researchers were among the first to sound the alarm on AI risk, with many fixated on the long-term, existential threat posed by highly advanced systems.

But when our team at the UCL Centre for Responsible Innovation asked over 4,000 of them about what keeps them up at night, we found a very different story.

How we analysed expert perspectives on AI risk:

As part of the largest international survey of AI researchers to date, we posed a simple, open-ended question to experts working across industry, academia, and government in 90+ countries: “What worries you most about AI?” The value of these expert perspectives lay not in their predictive power, but rather in the insight they provided into the current risk perceptions of those closest to the technology and most influential in steering its development.

To analyse the thousands of responses we received, we experimented with automated methods but eventually found that a more iterative process of hand coding helped to capture the full breadth of issues that arose. While this meant that subjective judgment shaped the codes and categories, our research team’s understanding of contemporary AI discourses allowed us to identify nuances and patterns that might go unnoticed by an automated topic model.
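For readers curious what the automated route can look like, the sketch below shows one common baseline: a latent Dirichlet allocation (LDA) topic model fitted with scikit-learn. The responses and every parameter here are placeholders for illustration, not the methods or settings actually used in our study.

```python
# Illustrative only: a generic LDA topic-model baseline, not the survey's actual pipeline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder answers standing in for the free-text survey responses.
responses = [
    "I worry about misinformation and fake videos influencing elections",
    "Mass unemployment as AI replaces white collar jobs",
    "Bad actors using AI for fraud and scams",
    "Energy use and the environmental cost of training large models",
]

# Bag-of-words counts; min_df is kept at 1 only because this toy corpus is tiny.
vectorizer = CountVectorizer(stop_words="english", min_df=1)
counts = vectorizer.fit_transform(responses)

# Fit a small topic model; a real run would use far more documents and topics.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(counts)

# Inspect the top words per topic, the usual way such models are read.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_words = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top_words)}")
```

The limitation we kept running into with this kind of output is the one noted above: automatically induced word clusters flatten nuances that a human coder familiar with contemporary AI discourse can pick up.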

On a personal level, as the primary coder, spending the summer reading thousands of these reflections gave me a rare window into the people steering this field. My biggest takeaway: the concerns of AI researchers are far more plural, immediate, and socially grounded than the extinction narratives suggest.

What worries AI researchers? Not human extinction.

Our results offer a clear corrective to the media hype surrounding existential risk, which did not rank among the top ten concerns that emerged in responses. Only 3% of researchers said they were most worried about the long-term, speculative scenario in which advanced, out-of-control AI poses a threat to human existence.

This relatively small percentage is set against a diverse landscape of much more immediate concerns. While no single topic dominated the responses, some of the most prominent issues were malicious use (11%), misuse (10%), misinformation (9%) and impact on jobs (7%). Notably, researchers were far more engaged with these near- and medium-term societal impacts than with long-term existential risks. Far from speculating about future superintelligence, researchers tended to see today’s AI systems as normal technology – worrying about who’s using them, how they’re being deployed, and what harms might follow.

However, the 3% figure doesn’t tell the full story. The proportion was consistent across researchers in academia and industry, and a larger group expressed worries about the control or ‘alignment’ of future advanced AI (sometimes referred to as artificial general intelligence, or AGI). This suggests that the language of existential risk has caught on, growing out of a niche Silicon Valley ideology into a more mainstream way to conceptualise AI risk. Still, in practice, those fears remain a small fraction of the picture.

Figure 1: Distribution of the most frequently mentioned concerns in response to the open-ended question, “What worries you most about AI?” (N=3718)

The real worry: How AI is used once it leaves the lab

A much larger cluster of concern revolves around downstream use – that is, what happens when AI models leave research labs and end up in the hands of people. Nearly 25% of responses referenced one or more of “malicious use”, “misuse”, “overuse”, or “uncritical use.” This is in contrast to more upstream concerns like the opacity of “black box” models themselves or the issues surrounding the energy, resource, and training data supply chains that enable AI development. The focus on downstream use over upstream development is striking, in part because it aligns with recent calls from techno-optimists to regulate the former rather than the latter.
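Because each response could carry several codes, that “nearly 25%” is a union over coded labels rather than the sum of the four individual percentages, which would double-count responses mentioning more than one of these topics. Here is a minimal sketch of the calculation, using made-up coded data rather than our survey file:

```python
# Hypothetical coded responses: each entry is the set of topic labels
# assigned to one survey answer (a response can carry several codes).
coded_responses = [
    {"malicious use", "misinformation"},
    {"misuse"},
    {"overuse", "misuse"},          # counted once in the union below
    {"impact on jobs"},
    {"uncritical use"},
    {"existential risk"},
]

downstream_use = {"malicious use", "misuse", "overuse", "uncritical use"}

# Share of responses mentioning at least one downstream-use topic:
# overlapping codes are not double-counted, so this figure can sit below
# the simple sum of the four individual percentages.
n_any = sum(1 for codes in coded_responses if codes & downstream_use)
print(f"{100 * n_any / len(coded_responses):.1f}% mention downstream use")
```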

Yet, even as our respondents frequently worried about misuse, few of them detailed specific use cases, leaving plenty of ambiguity around intentions, actors, and consequences. This ambiguity in how even AI experts understand and distinguish between intentional and unintentional (mis)use reflects a broader challenge in AI governance and policy discourse, where “misuse” is frequently referenced but rarely defined (e.g., see here and here). When it is not specified what kind of misuse is being imagined, who is responsible, or in what context, the term can function as a catch-all or even a performative term, obscuring important distinctions in risk perception and accountability.

Conclusion:

In the end, coding researcher concerns gave us a very different picture of AI expert debates than the headlines suggest. In the face of attention-grabbing extinction narratives and prominent individuals throwing around “probabilities of doom”, we had the opportunity to listen to and make sense of nearly 4,000 voices saying something else entirely.

Most were not preoccupied with apocalyptic scenarios, but with the tangible, everyday challenges of how AI is used. If we want to have an honest public conversation about AI, it starts here: with unpacking the real concerns of the people building it.

Figure 2: Full table of most frequently mentioned concerns (N=3718)
Topic | Frequency | Description | Example
malicious use | 10.6% | Intentional use for harm by members of the public or criminals | “How people (in particular bad actors, armed forces and profit-hungry corporations) make use of AI. Let me rephrase that: I am not worried about super intelligent gone-rogue AI itself (like in a HAL9000-like or SkyNet-like scenario), but about how people deploy way more “stupid” AI for their own benefit (and the detriment of others), very often in insipid ways.”
misuse | 9.9% | Incorrect or inappropriate use of AI | “Its a powerful tool which can be misused in the wrong hands”
misinformation | 8.8% | AI spreading false or misleading content | “Fake news and videos being created for social media and influencing people e.g. in how to vote.”
impact on jobs | 7.1% | AI replacing human labour or reducing employment opportunities | “Massive unemployment, as this time there may be no way out: this is the first time that AI may replace white collar jobs entirely”
public understanding of AI | 5.4% | Non-experts misunderstanding AI systems | “The inability of people to recognize that current AI is not intelligent but a fancy description of various statistical methods”
general societal impacts | 5.2% | Reshaping of relationships, norms, and social structures due to AI | “Its social impact – further polarisation/ division within societies.”
hype | 5.1% | Overpromising or exaggerated claims about AI | “People overhype it or else see it as an evil in and of itself, as if we lived in a sci-fi movie. Rather than being generally informed about how it works. Just like most people should know generally how a washing machine works or their car.”
bias | 4.5% | Algorithmic bias and unfair outcomes from AI systems | “How pattern detection, particularly in biased, historical data, will lead to propagated bias, discrimination, and deepening inequality within and across societies in the world.”
privacy | 4.0% | AI compromising or exploiting private information | “There can be privacy concerns regarding the sensitive information of users’ data used to train the AI”
performance | 4.0% | Hallucinations and other limits in accuracy and robustness | “The fact that AI can very confidently convey wrong information.”
regulation | 4.0% | Too little, too much, or problematic AI governance | “poorly thought out policy making resulting from hype and misconceptions”
alignment and control | 4.0% | Ensuring AI aligns with human values/control | “Making increasingly high-stakes decisions using AI without knowing that is aligned with our interests”
power concentration | 3.9% | Control of AI by a few dominant actors | “Centralizes power in the hands of a few corporate actors, through its (material) dependencies – compute, data, labor, software frameworks.”
inequality | 3.8% | Unequal distribution of AI benefits/harms | “I think that it has the ability to make billions for 0.001% of the population and destroy the lives of millions of others without compensation for the losses”
black box | 3.8% | Lack of transparency or explainability in model reasoning | “That they are almost always blackboxes. We still lack a strong theoretical framework for what happens inside an AI.”
existential risk | 3.4% | Fear that advanced AI could cause catastrophic or irreversible harm | “Rogue AI taking over, human extinction”
overuse | 3.3% | Overreliance or unnecessary use of AI | “That seemingly everybody wants to use it for everything, even where AI does not make sense at all”
profit-driven | 3.2% | Development of AI for private, commercial priorities over public interest | “It’s owned and governed by people who are only interested in money and power; it’s usually not put to use for the public good; more intelligent will likely mean more risk; focuses on the wrong goals (it helps me with my intellectual tasks, while I want AI to do my laundry, cleaning, etc – or otherwise solve world problems like war and climate)”
responsibility | 3.0% | Absences or shifts of responsibility by developers/users | “Irresponsible deployment of AI to the real-world.”
uncritical use | 3.0% | Blind trust or careless use of AI systems | “Blind use of AI, without understanding of its limitations, especially for deciding on social issues (i.e., AI for deciding on policy, predicting crime, deciding what’s biased or toxic, etc.)”
ethics | 2.4% | General ethical or moral concerns about AI | “AI does not think and has no ethics or morals.”
deskilling | 2.3% | Loss of skills and critical thinking due to AI reliance | “I may become lazy and lose some parts of my skills because I ask AI to do it.”
tech companies | 2.1% | Critiques of Big Tech control and behaviour | “The strongest AI teams are usually part of big tech companies whose objectives and interest do not align with the well-being of all humans but with their own economic benefit.”
advanced AI | 2.0% | Development of superintelligent or sentient systems | “I think the most interesting one is AGI.”
human in the loop | 1.9% | Lack of human oversight in AI decisions | “Widespread delegation of important decision-making to AI algorithms with little human involvement.”
manipulation | 1.9% | AI used to influence or control people | “Large-scale political manipulation”
general concerns | 1.7% | Broad or unspecified AI worries | “Unexpected negative outcomes”
safety | 1.6% | Unspecified fears about AI being unsafe | “AGI is approaching but we are not ready in terms of safety.”
expert understanding | 1.6% | Whether experts truly understand AI systems | “People (even researchers and practitioners) inability of understanding its limits, risks and benefits and its potential social impact, both positive or negative.”
speed | 1.6% | Rapid development of increasingly large models | “The speed at which the industry is growing and adapting AI in almost everything. It is really hard now days to separate facts from fiction.”
n/a | 1.5% | Responses that do not meaningfully address the question | “No.”
energy and resource | 1.4% | High electricity/water consumption requirements | “Obscene energy use, increasing technocapitalist domination of life”
nothing | 1.4% | Respondent explicitly expresses no concern about AI | “Nothing, it’s just a software.”
environment | 1.3% | Environmental harms from AI development | “It’s carbon footprint and the potential for us to exacerbate global warming to the point society fails”
security | 1.2% | AI creating or exposing security risks | “Security vulnerabilities can be easily exploited and people rarely seem to care”
slop | 1.1% | Low-quality mass AI content polluting online spaces | “A deluge of generated disinformation, misinformation, and low-quality content displacing quality works by humans.”
training data | 1.1% | Issues with data sourcing, labelling, and attribution | “Predatory nature of obtaining data for model training abusing rights of millions of creators”
slowing progress | 1.0% | Concerns that AI progress may stall or pause | “The development of AI systems may get stalled after some years.”
creativity | 1.0% | Loss or devaluing of human creativity due to automation | “Coming generations will lack creativity”
disempowerment | 1.0% | Loss of human agency or meaning due to automation | “Eventual disempowerment of humans as the agency, capability, and generality of AI systems increases”
fraud | 0.9% | AI used for scams or deception | “Fraud using AI, such as deepfake and voice imitation”
AI research ecosystem | 0.9% | Concerns about AI research norms/direction | “Leading edge of research is not in universities”
specific applications | 0.7% | Risks tied to particular domains (health, law, etc.) | “Application of LLMs to the medical domain”
indistinguishability | 0.7% | Hard to tell AI vs. human content | “Not being able to distinguish whether I’m interacting with humans or machines, especially in online interactions”
transparency | 0.6% | Lack of clarity about model development/use | “Lack of transparency in the development practices and the kind of data used to train widely used models”
costs | 0.6% | High financial and computational costs | “The computation of large models. Only the rich and big organizations can afford the GPUs and the costs, leaving the rest of people from research and development. Also, the consumption of large models is huge.”
anthropomorphisation | 0.5% | Attributing human traits to AI | “The Eliza effect: attributing too much ability and responsibility to AI”
data contamination | 0.3% | AI-generated data polluting training datasets | “Degrading quality due to training data being itself AI generated”
geopolitics | 0.2% | AI driving international competition/conflict | “That it leads to a new cold war-era where geopolitical tensions are high and conflicts are numerous.”
participation | 0.1% | Lack of public involvement in AI governance | “AI systems are deployed without asking opinion of general public and impact of such systems”