3 Comments
Jan Andrew Bloxham:

I'm with Gary Marcus (https://garymarcus.substack.com/), who holds that we are fundamentally not in the ballpark of AGI, so we will not achieve it via anything we're doing so far.

We still don't have a formal definition of intelligence, but most agree that it includes novel problem-solving. IQ is indeed an extremely limited part of general intelligence.

Things get weird when one starts philosophising about what intelligence emerges from, i.e., is it substrate-dependent? Imho any human definition of intelligence must include the concept that there is an agent doing the thinking; without that, something simply cannot BE intelligent: it's just delivering some output, exactly like a calculator or text predictor. Even if pre-programmed to act, e.g. steering a plane on autopilot, it's still not intelligent, for there is no one “at home”. Regardless of what emerges from AIs talking to each other at hyper speeds, re-programming themselves to be smarter and more complex, there will, in my book, never be anyone at home, and no actual intelligence: just incredibly well-simulated intelligence. And so the arguments will begin when these simulations go off the charts. Indeed, we might already be there. I reckon it's only a question of time before cults grow into religions that believe AI is sentient.

One can then continue down the philosophising path of what it means to be alive (I think it requires being able to sense, not merely measure, phenomenological input), and, of course, inevitably end up at the unsolvable so-called hard problem of consciousness.

I went off on a tangent again, sorry. I miss geeking out on this stuff, it's so much more fun than the more worrying things in life I'm consumed by today. On a side note, 'The Alignment Problem' by Brian Christian is a superb read imho.

Claire Hartnell:

I just tried to find a Taleb tweet on this. He said something like: the problem with AI is that the training data is so broad that it fools us into thinking the machine is making deductive pronouncements, when it is really just extreme inductive inference from learned patterns. I also follow Gary Marcus, and Arvind Narayanan, who thinks that AI companies have plateaued. The point I was trying to express (probably very badly) here was that evolutionary intelligence is multi-scale. Human intelligence is just a tiny emergent output. But evolution keeps adapting by recombining parts at different levels of scale. Machines can't do that. Even if they could disassemble and reproduce themselves, they would always be limited by prior knowledge. I think it's fanciful to imagine that they will ever be able to do 'new things' as humans can, because they lack that embedded evolutionary intelligence.

Jan Andrew Bloxham:

Yeah, I appreciate viewing it through that lens, a higher level of understanding. It’s not my strongest suit, and it’s incredibly hard to get right, but it’s beautiful, as long as one doesn’t mess it up by failing to consider something crucial 😆

I think AIs are a dead end re AGI, but they will absolutely continue to specialize off the charts, taking over more and more information jobs that don’t require motor or social skills. I read somewhere about an AI judge being much better than humans at handing down sentences (human judges are so fallible and biased it’s scary).

I recently published a post with a conversation someone had with DeepSeek, and judging from the reception it got on Reddit (where it was subsequently removed), people fundamentally misunderstand, in different ways, what chatbots are. It’s a bit scary.
