February 8, 2025

The Dangers of AI

Concerns around the dangers of AI have been discussed quite a bit, including by experts such as Geoffrey Hinton - though there is far from a consensus. My own perspective, surfaced and reinforced by reading the most recent of Thomas Haigh's articles in CACM(1), is that there are two categories of risk.

The direct risks of AI, such as a "singularity", still feel relatively implausible to me and to others. The current prominent systems are driven by statistical inference without understanding, leaving them subject to "hallucinations" - a generous term, since it suggests that such understanding does exist but is somehow distorted. There is the prospect that intelligence can ultimately be reduced to such inference, with the remaining gap to AGI being additional input that could be filled by sensors, but so far there has been no demonstration that these systems can move past being "stochastic parrots". There is also a persistent tendency to expect progress to continue unabated, whereas historically it tends to plateau after hitting unforeseen limits.

Far more likely are indirect adverse consequences. One clear risk mentioned in the article is the climate impact of the massive energy consumption involved in model training (alongside other culprits such as cryptocurrency mining). Part of the cracking veneer of technology solving societal problems seems to be the shift from tech promising more ecologically friendly solutions (while turning a blind eye to the mining of the necessary rare minerals) to being a major driver of net carbon emissions (including the reopening of coal plants).

The other fairly clear risk lies in the application of AI. While AI by itself may not pose a clear danger, applying it already presents problems such as amplifying bias. As its uses extend to systems with direct physical impacts (e.g. military uses such as drones), it seems highly plausible that something like a suspect reward function could lead to a catastrophe. Or AI could act as a messenger that drives humans to instigate the catastrophe themselves, in the spirit of Dr. Strangelove.
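To make "suspect reward function" concrete, here is a minimal toy sketch (every action name and number below is invented purely for illustration): if the objective simply omits a cost we care about, the optimizer will dutifully select the most harmful option.

```python
# Hypothetical, minimal sketch of reward misspecification: the proxy
# objective omits collateral harm, so maximizing it picks the worst action.
# All actions, names, and numbers are made up for illustration.

actions = {
    # action: (mission_progress, collateral_harm), each on a 0-1 scale
    "hold":           (0.0, 0.0),
    "precise_strike": (0.7, 0.1),
    "area_strike":    (1.0, 0.9),  # best progress, but severe harm
}

def proxy_reward(progress, harm):
    # The "suspect" part: harm never enters the objective at all.
    return progress

best = max(actions, key=lambda a: proxy_reward(*actions[a]))
print(best)  # -> area_strike: the optimizer happily chooses catastrophe
```

The obvious fix is to include harm in the objective, but fully specifying everything we care about is exactly the hard part.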

Any of the above could lead to severe consequences, so it may somewhat come down to a race to see which threat first becomes existential. I'd put my money on the options where we destroy ourselves in the pursuit or use of some form of AI, rather than getting to pass the blame onto something independent that we've divinely birthed.

1. HAIGH, Thomas. Artificial Intelligence Then and Now: From engines of logic to engines of bullshit? Communications of the ACM [online]. February 2025. Vol. 68, no. 2, p. 24–29. [Accessed 8 February 2025]. DOI 10.1145/3708554. Available from: https://dl.acm.org/doi/10.1145/3708554