Discussion about this post

Jan Andrew Bloxham:

I'm with Gary Marcus (https://garymarcus.substack.com/), who holds that we are fundamentally not in the ballpark of AGI, so we will not achieve it via anything we're doing so far.

We still don't have a formal definition of intelligence, but most agree that it includes novel problem-solving. IQ is indeed an extremely limited part of general intelligence.

Things get weird when one starts philosophising about what intelligence emerges from, i.e., whether it is substrate-dependent. Imho any human definition of intelligence must include the concept that there is an agent doing the thinking; without that, something simply cannot BE intelligent: it's just delivering some output, exactly like a calculator or text predictor. Even if pre-programmed to act, e.g. steering a plane on autopilot, it's still not intelligent, for there is no one "at home".

Regardless of what emerges from AIs talking to each other at hyper speeds, re-programming themselves to be smarter and more complex, there will, in my book, never be anyone at home, and so no actual intelligence: just incredibly well-simulated intelligence. The arguments will begin when these simulations go off the charts. Indeed, we might already be there. I reckon it's only a question of time before cults that believe AI is sentient grow into religions.

One can then continue down the philosophising path of what it means to be alive (I think it requires being able to sense, not merely measure, phenomenological input) and, of course, inevitably end up at the unsolvable so-called hard problem of consciousness.

I went off on a tangent again, sorry. I miss geeking out on this stuff; it's so much more fun than the more worrying things in life I'm consumed by today. On a side note, 'The Alignment Problem' by Brian Christian is a superb read imho.
