The Corbett Report community held a vigorous discussion on the future of Artificial Intelligence this past summer, and the results are in: No one understands the question (myself included). Now, to be fair, I didn't do a very good job of framing the debate, so, as is typical with this particular topic, it quickly became the standard AI argument about whether or not toasters have souls ("I toast therefore I am!"). I exaggerate only slightly.
I understand why the conversation inevitably steers in this direction. The nature of consciousness and the question of the soul are topics that have fascinated us as a species for thousands of years, and the advent of robotic consciousness threatens to upend the deeply-held beliefs of billions of people.
Perhaps inevitably, the prophets of the technological singularity have brought this issue directly to the fore by creating their own religion with "a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software". It's called "Way of the Future" and was founded (with a large degree of MSM fanfare from the usual suspects) by...wait for it...an ex-Google engineer. That's right, after developing our modern-day, real-life Big Brother's self-driving car, this Silicon Valley dropout has started a church for people to worship our coming robot overlords. You can't make this stuff up.
But, generally speaking, this is not the kind of thing that people working in the field think about. So what do they think about? Oh, things like "What will the financial world be like when Skynet runs the markets?"
Think that sounds crazy? Well take that up with our good friends over at the shadowy Bank for International Settlements' even shadowier little brother, the Financial Stability Board. They just published a white paper on that very subject.
Well, OK, they don't quite phrase it like that. Their paper has the less snappy title "Artificial intelligence and machine learning in financial services: Market developments and financial stability implications." But come on. These are the minions of the central banksters. It's their job to make even the most incredible things sound eye-wateringly boring.
So what do the eggheaded lackeys of the bankster oligarchs conclude? Why, that AI is a swell thing, of course! It'll make everything more efficient and (even if there's an itsy bitsy teenie weenie risk of a hiccup or two) everyone will be better off for it.
In fact, they go out of their way to misinterpret their own scenarios in such a way as to make the inherent problems with the technology seem inconsequential. For example, they note that "AI and machine learning may enable certain market participants to collect and analyze information on a greater scale" but then nonsensically conclude that these disparities in ability "could reduce information asymmetries and thus contribute to the efficiency and stability of markets."
It's worth reading that premise and conclusion again just to see how completely they contradict each other. "Certain market participants" who have access to the AI will be able to collect and analyze information more quickly than their competitors...but that will reduce information asymmetry? Well, maybe after they've communicated that information to the rest of the market in the form of a buy or sell order, but it's a bit late for everyone else then, isn't it?
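To see why speed itself is the asymmetry, consider a deliberately crude toy market (all numbers invented for illustration): one trader sees each price move a tick before everyone else, while the rest can only react after the price has already changed.

```python
import random

random.seed(42)

def simulate(n_rounds=1000):
    """Toy market: each round the price makes a random +1/-1 move.
    The 'AI' trader sees the move one tick early; everyone else
    only learns about it after the price has already changed."""
    fast_pnl, slow_pnl = 0.0, 0.0
    for _ in range(n_rounds):
        move = random.choice([-1.0, 1.0])
        fast_position = 1.0 if move > 0 else -1.0   # trades on the early signal
        slow_position = random.choice([-1.0, 1.0])  # effectively guessing
        fast_pnl += fast_position * move
        slow_pnl += slow_position * move
    return fast_pnl, slow_pnl

fast, slow = simulate()
print(f"fast trader: {fast:+.0f}, everyone else: {slow:+.0f}")
```

Over a thousand rounds, the early-signal trader captures every single move while everyone else breaks even on average. The "asymmetry" is only "reduced" once the fast trader has already taken the other side of your trade.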
Contrast that line of thinking with Vladimir Putin's recent observation that "the one who becomes the leader in this sphere [AI] will be the ruler of the world." The disparity between the AI haves and have-nots won't reduce information asymmetry, it will magnify it by orders of magnitude. In essence, the first market participant to deploy effective AI will be the ruler of the market.
To be fair, the FSB report does pay lip service to some of the important risks surrounding AI technology, like its lack of auditability or the issue of data bias, but these problems are inevitably waved away by the study's authors, either by calling for vague measures like "testing and training" or by calling on government regulators to oversee these new developments.
The most infuriating part of this whole meaningless academic exercise is that we don't have to theorize about the potential effects of AI in some imaginary far-off future scenario. We already have real-world examples of exactly how (admittedly crude) AI technologies can wreak havoc on world markets.
Remember the Flash Crash of May 6, 2010, when the US stock market crashed by $862 billion in a breathtaking 36-minute window and then magically recovered just as quickly? I do. And for those who weren't keeping track of that story, it culminated in the 2015 slaying of the sacrificial lamb, Navinder Singh Sarao, a British day trader who supposedly caused the entire meltdown from his parents' house in Hounslow. That was when Sarao was arrested for "spoofing" the markets by placing a flood of orders that he never intended to execute, then canceling them and cashing in on the artificial price move they had created.
As I reported at the time, the charge was fraudulent from the outset. Not only do untold scores of traders regularly "spoof" the market in this way, but Sarao himself had used those very same techniques 250 separate times before the Flash Crash. So why did it suddenly cause mayhem on that particular day? The answer, of course, is that the real culprits were the High-Frequency Trading (HFT) algorithms that now account for the majority of trading:
"The unspoken and uncomfortable truth is that Sarao is only playing on a problem enabled by the high speed trading algorithms that now account for as much as 60 percent of the trading volume in the US futures markets. These computer-generated trades can function orders of magnitude faster than any human, reacting to changes in market direction and implementing buy and sell orders on the basis of that knowledge thousands of times per second. As the flash crash displayed, once the algorithms get tricked into selling into a plunge, the entire market can be plunged into chaos in a matter of minutes."
Members of the "Way of the Future" church will no doubt protest that the HFT algos operating on the markets in 2010 were so primitive compared to the coming sentient AI Godhead that the comparison is absurd. But they would say that, wouldn't they?
However, as that sage philosopher Donald Rumsfeld reminds us, there are known unknowns and unknown unknowns, and it's the latter that are the real problem. And when it comes to AI, it's worth asking why anyone who had made a significant advance toward creating a true artificial intelligence would share that market-monopolizing information with anyone else. Despite the hype about "Open AI" from the usual hypesters, is anyone under the delusion that the evil geniuses in the bowels of the Pentagon (let alone the agents of FANG) are really going to keep the public up to date on their latest adventures in quantum computing or neural network development or natural language processing or nonlinear computation?
Just think of Ptech, the legendary software that backdoored its way into the most sensitive computer systems in the world and promised to give its users God-like powers to actually predict events before they happened, and which was almost certainly deployed on 9/11 by the real perpetrators of that false flag attack. Or think of the PROMIS software from which it was allegedly derived, the stolen program that lay at the centre of the Inslaw/Octopus story. These programs are decades old by this point. How much more advanced is the software that will be used to perpetrate the next false flag attack?
There are many questions surrounding the development of this technology that poses an existential threat to humanity, but most people are too busy worrying about whether computers can have a soul to address these concerns. And the ones who aren't distracted by these philosophical puzzles are working for the likes of the FSB to put the shiniest PR gloss on the whole subject.
I don't have the definitive answers here, but I do know this: The singularists are at least right that information processing power is advancing exponentially and there will be nearly unimaginable changes to our society coming in our lifetime. And call it "computer super-intelligence" or whatever you want, but if we leave the development of this technology to the likes of Elon Musk and Bill Gates and MIT and the FSB and the Dr. Strangeloves at the Rand Corporation, the consequences will be horrifically predictable.