Hi folks, this is Phil Tanny from TannyTalk. Hope you’re having a great day.
So I’ve been hanging out on some AI blogs over the last few months, and I’ve arrived at a few thoughts to share about the future of AI technology. Please know that I’m not at all an expert on this subject, and that this article isn’t an extensive look at the topic; it only addresses two issues that have been on my mind.
The Experts, And Their Conflict Of Interest
If we wished to learn more about AI technology, the obvious place to look for education would be AI experts: those who work in the AI industry, or who comment on that field for a living. This makes perfect sense, except in one instance…
What if our question is whether the AI industry should exist? What if we’re concerned that the AI industry may be taking humanity in the wrong direction, and we’re wondering whether we should stop developing this technology any further? If that is what we wish to learn about, AI experts may be the wrong people to ask.
Here’s why…
Imagine that I just spent the last ten years working hard to become an expert on XYZ. I now make my living working in the XYZ field. Maybe I have two kids on their way to college, a big mortgage, and a spouse who depends upon my income. If this were my situation, how objective would you expect me to be about whether the field I’ve built my life around should continue?
Even if I were to privately agree that the field of XYZ is a mistake, am I really in a position to state such agreement publicly? What would happen to my career then?
I’ve used the symbols XYZ as a placeholder here because the conflict of interest I’ve just described applies not just to AI, but to any emerging field of technology. I’ve seen this dynamic at work on other subjects besides AI, such as genetic engineering.
People will ask genetic engineering experts whether that field is safe, and the answer the experts offer is always yes. There will often be some acknowledgement of the risks involved, but the focus will typically then shift to benefits, or maybe theoretical governance schemes, and the bottom-line answer from experts in any field will almost always be that their field should continue. It would be pretty unrealistic to expect any other answer, right?
Imagine that I was making hundreds of thousands of dollars a year with my blog on Substack. If I were in such a happy situation, would you expect me to be able to comment objectively on whether Substack should exist?
So, the experts in any field can be relied upon to provide useful information about how their technology works, but we can’t count on them to provide an objective analysis of whether that technology should continue.
Are The Experts Really Experts?
As you might expect, there is a lot of discussion on AI blogs about what the future of AI will look like. Most experts seem to comment on such futurist speculation to one degree or another, and their analysis often reveals their deep knowledge of how AI technology works today. Such discussions can seem quite impressive, until you realize two things.
The future of AI will be determined by what happens with nuclear weapons, and…
No AI expert I can find talks about this.
UPDATE: My favorite AI blogger has heard my rants on this subject and just wrote an AI article that references nuclear weapons. You go, Alberto!
At least in the AI analysis I can find, there seems to be a near-infinite amount of speculation about the future of AI, and I’ve never seen nuclear weapons mentioned even once. Everywhere I go in AI land I bring this up, and everywhere I go it gets ignored. AI experts don’t even bother to argue with me about it.
The existential threat to modern civilization presented by nuclear weapons has been widely known since I was a child in the 1950s. There was the Cuban Missile Crisis, of course, and on multiple occasions we’ve come within minutes of having global nuclear war BY MISTAKE.
But somehow, AI experts speculating about the future of AI never seem to consider that the field they’ve given their lives to could be over at any moment, without warning. If that sounds hysterical, well, just ask yourself this…
QUESTION: How likely is it that human beings can maintain big stockpiles of massive hydrogen bombs and that those weapons will never be used?
And so, as I watch the AI industry seemingly ignore what could easily become a very quick end to the future of AI, I’ve begun to wonder: are these folks really smart enough to be considered experts?
Clearly they are expert at the technical details of how AI currently works. But is their vision so narrow that we can’t really count on them to provide useful speculation about where the AI industry is headed?
I keep asking this…
QUESTION: We are already faced with two pressing threats: nuclear weapons and climate change. Why are we taking on more risk at this time with revolutionary technologies like AI?
I never hear a good answer. Typically, I don’t hear ANY answer.
I don’t know. Like I said at the top, I’m just a man-in-the-street civilian asking inconvenient questions, and I should in no way be confused with an expert. You would be well advised to ask real experts whether their technology is safe, to see what you can learn. Just be clear, as you do, that the experts have every incentive to give you only one answer.