Yes, you’ve guessed it, David – I want to say a few random things about Artificial Intelligence. The recent launch of ChatGPT resulted in an explosion of apocalyptic predictions about the end of humanity: AIs will take over the world and then do something bad to us.
However, AIs are still programmable computers and, as far as we can tell, they still cannot do things that are really of fundamental interest to humanity (just ask ChatGPT how to quantise gravity).
Picasso’s remark comes to mind: “Computers are useless. They can only give you answers.” What do we need to do to computers so that they begin to ask questions the way we humans do (and hopefully even better)? This problem seems to me as challenging as understanding what happens to inanimate matter to make it come alive. We don’t know, but we should probably try to answer this question too, as it might bear on AIs as well. Constructors?
Even then, something you often like to point out, David, could be the case. Once we generate genuine AIs, why would they behave any differently from the next generation of humans? Genuine AIs should worry us no more than the question of what will become of our children.
For me, a curious outcome would be a phase transition in comprehension (or whatever you might want to call it) that would make the gap between us and these AIs as big as the gap between the apes and us. Apes have not even discovered how to make fire, let alone developed the technologies we currently have. Could AIs ultimately be that much smarter than us?
My feeling is that the answer is “yes”. What I do hope, though, is that the same technology that makes such AIs could be used to “upgrade” us so that we too could start to see bits of the universe that currently remain hidden from us simply because of our own natural stupidity.