My Favorite AI Blog On Substack
Meet Alberto Romero, author of one of Substack's best AI blogs.
Alberto Romero’s blog The Algorithmic Bridge was one of the first blogs I discovered when I arrived on the Substack network. His generous personal responsiveness with his readers was quite influential in persuading me that Substack was a place I’d like to hang out.
After almost 30 years of typing on the Net, I know that I'm not always an easy visitor to be generously responsive to. I'm kinda loud, I type too much and too often, and I take a contrarian position on almost everything. Lots and lots of net writers just aren't secure enough to happily embrace the kind of inconvenient bombastic barrages that seem to be my writing style.
Alberto is one of those somewhat rare people with the confidence and grace to intelligently embrace and engage just about anybody and any idea. After three months of enjoying his writing, it’s past time for me to say that.
What inspired me to finally get off my lazy geezer butt and introduce you to Alberto’s blog is his latest article, The World Must Reconcile AI Skepticism and AI Optimism, which does such a great job of illustrating Alberto’s depth of understanding of the field of AI, and his mature, nuanced relationship with the collection of controversies that the field generates.
So, without further ado, let’s get on with today’s show.
The Nature Of Thought
Alberto starts off his article with this:
Controversial topics are weird. People spontaneously gather around black vs white dichotomies: if you’re not with me, you’re against me. That's happening in AI.
This is an important observation, because if pursued deeply enough, the simplistic black vs. white dichotomies that most debates revolve around shine a light on the inherently divisive nature of that which all we humans are made of psychologically: thought. In this context I’m using the word “divisive” to mean simply “to divide”, and not “argumentative”, which is just a symptom of thought’s never-ending focus on conceptual division.
The point here is that AI developers are in a sense attempting to replicate the human mind, and to the degree that’s true, it’s essential that they understand the biological device they are attempting to reproduce in digital form. That said, the nature of thought is way too big of a topic to explore fully here, so let’s save that for another day. More coming soon on that….
Introduction To AI Controversies
Alberto continues his article by offering a quick sketch of the kinds of debates engulfing the world of AI, and he provides links to those involved. Should you follow the links offered in the second paragraph of his article, you could easily spend the day advancing your understanding of AI considerably.
Surface Impressions Can Obscure A Deeper Truth
Alberto’s main point seems to be a reasonable one.
Lying below the shiny, loud, hyped-up surface of a simplistic, dualistic, us vs. them, polarized debate on many AI issues lies what Alberto refers to as “a rainbow of gray tones”. His argument seems to be that adamant proponents of position “X” often share more in common with adherents of position “Y” than is immediately obvious from a surface examination of AI debates, and I think he’s right.
As just one little example, while I’ve been way more than skeptical about the future of AI on too many occasions, were someone to present a credible theory on how AI might meet the challenge presented by violent men (my current obsession) my views on AI might take a radical turn. Chatbots don’t interest me all that much, but if I perceived AI being able to make a decisive difference on the topics that do interest me, I might change my mind about AI in a hurry.
The Value Of Self Contradiction
Alberto goes on to argue that…
“It's possible—and even desirable—to have apparently conflicting views on the present and future of AI.”
This seems a mature view which aligns with my own understanding of the value of reason. I will often plant an adamant flag on some particular position (sometimes known as trolling) in an attempt to help feed an interesting debate. There is a place for dualistic debate.
But it’s far too easy for our egos to trap us in such limited perspectives, and if our interest in a topic is sincere (not the norm) we will attempt to understand all sides of an issue, which may be best accomplished by arguing all sides of it ourselves. Consider an attorney in a courtroom. Their job is not to be “right” but to represent their client’s view as persuasively as possible. We could use more of that mindset.
Alberto continues…
The startling pace of progress, the intrinsic unpredictability of the tech, the vast potential for transversal upside (and also downside), and the stated limitations and unforeseeable benefits make it absurd to hold a monolithic—and unchanging—stance on all aspects of AI.
Here I feel forced to do my usual contrarian dance. And I do so joyfully, knowing as I do that Alberto is a confident enough person to welcome discussion of opposing views.
Alberto’s thinking here seems to assume that AI technology, which is indeed changing quite rapidly, should be the focus of our relationship with the AI phenomenon. And so, from his point of view (as I understand it), a rapidly changing technology and a static point of view are incompatible. From that vantage, it makes sense.
I would counter-argue that our discussion of AI should instead focus on the species AI is being developed to serve, which is of course us humans. And we humans, bound by biology as we are, have not changed substantially in thousands of years. And so, from this perspective, the one I inhabit, a static point of view on AI seems much more reasonable.
Is AI A Net Benefit To Humanity?
Alberto recognizes the threat presented by AI, as he explains here:
I'm critical and skeptical: I believe there’s a non-trivial probability that AI will cause more short- and long-term harm than wellness if we continue in the current direction.
But he sees the other side too…
But I'm also optimistic. Done well, AI is among the best quests we’ve, as humanity, ever embarked on.
As Alberto already knows, I’m not so sure. Is ever more knowledge delivered at an ever faster pace really what humanity needs right now? For further discussion of that question, see this article.
Alberto goes on to ask…
The first thing I'd notice if I were just becoming interested in AI would be this: how is it possible that, in the face of overwhelming evidence for ever-increasing performance and real-world successes, some people remain skeptical about AI?
That’s easy, friend. It’s the overwhelming evidence for ever-increasing performance that has some of us skeptical about AI. Do you see overwhelming evidence for ever-increasing maturity on the human side of this equation?
Alberto writes…
Skeptics like professor Gary Marcus perceive, as well as everyone else, progress of some kind (i.e., things are changing).
This is a welcome opportunity to link to my other favorite AI blog, Gary Marcus’s The Road To AI We Can Trust. Marcus is also a very well-informed, entirely reasonable, and friendly fellow, though personally I wouldn’t label him an AI skeptic, at least not as I use that word, as he seems too reasonable for that.
What would be of great interest to me would be for experts like Romero and Marcus to introduce us to what Alberto seems to be calling “absolute skeptics”, which I understand to refer to those who feel AI development should be put on hold. I would love to see ongoing debates between such “absolutists” and those in the AI industry, ideally on an online discussion forum available to all, and _NOT_ on social media, or JUST in elite offline forums reserved for discussion among the expert class.
Alberto has been exceedingly generous to me as I argue against the further development of AI, but I have the cultural authority of a dead squirrel, the smallest audience on Substack (geniuses all!), and often defeat my own points by hysterically honking like Foghorn Leghorn, so here’s hoping we can find some more influential absolute skeptics.
Alberto wisely suggests….
We’re going to see much more dichotomic stances in the short-term future of AI so it’s good to recognize the parts of us that belong to each of the camps without feeling the need to become one with that identity. What we choose to say or write isn't necessarily a perfect portrayal of what hides within our minds. In general, skeptics are more optimistic—and optimists more skeptical—than they seem.
True that. Even I, Mr. Anti-AI, am fundamentally optimistic. Not about AI specifically, but about the human condition. I’m optimistic about what lies beyond this life. And I’m optimistic about the future of humanity too.
But what younger people may not get yet is that human beings don’t really learn big things through reason, but by reference to authority, and from our most persuasive teacher, pain.
As a quick example, consider the fall of the Roman Empire, rulers of the ancient world. We survived that fall and went on to the bigger and better things we enjoy today. But not before we suffered through a thousand years of darkness called the Middle Ages.
That’s what’s coming down the pike for us I’m afraid, a repeat of that cycle. And I suspect that AI will be part of the turning of those historical wheels. Pain is coming, and we will indeed learn from it, but not in our lifetimes.
Very long term, we’re looking good.
Short and medium term, not so much.
Well, you’ve heard enough from me today. If you’re interested in AI be sure to check out Alberto Romero and his The Algorithmic Bridge, if you haven’t already. Good stuff, nice guy!