Artificial intelligence is everywhere: in our apps, in our daily conversations, and in our interactions.
After OpenAI launched ChatGPT, it reached 100 million users in just two months, making it the fastest-growing consumer app ever.
But as AI spreads into finance and global markets, there are risks.
So what does danger look like when AI is working exactly as designed?
Well, joining me this morning is Jim Rickards, the editor of Strategic Intelligence and New York Times bestselling author of The New Great Depression and Currency Wars.
Jim, thank you so much for joining me this morning.
Today just happens to mark the one-year anniversary since MoneyGPT was first published.
I understand you've said that AI itself isn't intelligent, but when combined with human behavior, it could trigger massive market crashes or even be weaponized through deepfakes and cyberattacks.
So given that reality, what practical steps can individual investors take and what do you have to say one year since MoneyGPT first came out?
Thanks, Jemmy.
Yes, it's great to be with you.
It was a year ago that MoneyGPT came out.
I think a lot of what was in the book was forward-leaning, and a lot of it has played out as I expected, but the book is still fresh and still timely.
I hope people enjoy it.
One of the things I point out, look, I'm not bashing artificial intelligence.
It's here to stay.
It's very powerful.
It's in my refrigerator.
It's on the dashboard of my car. It's everywhere.
My refrigerator tells me to change the water filter.
I ignore it, but that's artificial intelligence telling me what to do.
It's got a humidifier or whatever.
But, as I say, it's powerful and it's everywhere. It's being so heavily touted that it has obviously created a stock market bubble.
I just want to point out some of the dangers, some of the things that are being ignored or overlooked.
One of them is that AI is now in the stock market.
It's in trading. You know, and your viewers know, over 95% of trading is fully automated at this point.
I mean, what goes on on the floor of the stock exchange is important, but it's a small slice of all the trading.
And when you put AI into it, you run into something called the fallacy of composition.
I have a scenario in the book where I show how this plays out, and it kind of works as follows.
Let's say you're at a baseball game and the person in front of you is tall, or they've got a big hat, and you decide to stand up because you want a better view. You do, and you get a better view.
But then the person behind you stands up, and the person behind her stands up, and the next thing you know, the whole stadium is on its feet. Nobody is better off, because everyone's relative position is the same, and everyone is worse off because you're all on your feet.
That's an example of a strategy that works for one person, but if everybody does it, the whole thing fails.
Now apply that to the stock market.
Let's say there's a drawdown: the market goes down 10% or 15%, whatever.
What's a good strategy?
Well, for an individual, a good strategy is to sell everything, go to cash, move to the sidelines, wait until the market bottoms, and then tiptoe back in and pick up some bargains.
That's a very good strategy for an individual.
But what happens if everybody does it?
Well then you've got all sellers, no buyers.
The market goes straight down.
You blow through the circuit breakers, and you probably have to close the market.
Now here's the problem.
Because we're using AI and because AI uses large language models and so-called training sets, it goes through basically the whole internet.
I mean, that's the training set. It comes up with the idea I just described, and every registered investment advisor, every broker, every day trader has basically the same software and the same model.
And if they all act at once, you're going to get that collapse, but it's going to be faster and irreversible.
There's no time out.
I know there are circuit breakers on the New York Stock Exchange, but the AI is just going to keep saying sell, sell, sell.
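The scenario Rickards walks through, a strategy that pays off for one seller but fails when every identical algorithm runs it at once, can be sketched with a toy price-impact model. Every number here, and the linear impact function itself, is an illustrative assumption, not a model of any real exchange:

```python
def price_after_selling(start_price, n_sellers, impact_per_seller=0.007):
    """Toy price-impact model: each simultaneous seller knocks a fixed
    fraction off the price, capped so the price stays positive.
    Hypothetical numbers, not calibrated to any real market."""
    drop = min(0.95, impact_per_seller * n_sellers)
    return start_price * (1 - drop)

# One trader quietly moving to cash barely moves the price...
lone_exit = price_after_selling(100.0, n_sellers=1)       # ~99.3
# ...but 100 identical algorithms all selling at once crater it.
crowded_exit = price_after_selling(100.0, n_sellers=100)  # ~30.0
```

The lone seller exits near the top; when everyone runs the same "good" strategy simultaneously, they all sell into the crash, which is the fallacy of composition in one function.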
And there are other dangers that I point out in the book: in the banking system, for one.
I have a chapter on nuclear war fighting.
I have chapters on censorship and bias. Bias is a big deal.
Here's what the developers of AI have been doing.
They don't like biases that humans have.
We've had them for millions of years, but you know, whether it's, you know, racial or sexual or whatever, we all have biases.
That's how we survive.
So they try to eliminate the bias, but all they're doing is substituting their own bias for the human biases they don't like, and then you've biased the output of the AI.
And then the third problem is that more and more of the training set, where the AI goes to learn how to read and write, basically, and process information, is populated with the output of AI. To the extent that output is flawed, and you fill up the training set with bad AI output, you're now training on weaker and weaker information, and your output degrades. There are academic studies showing it descends into chaos pretty quickly.
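The degradation described above, models trained on the output of earlier models, can be illustrated with a toy "model" that repeatedly fits a Gaussian to its own samples and, like a generative model favoring high-likelihood output, discards the tails. The Gaussian stand-in, the sample sizes, and the 80% truncation are all assumptions chosen purely for illustration:

```python
import random
import statistics

def train_generation(data, rng, n_samples=500):
    """Fit a Gaussian 'model' to the data, then emit synthetic samples
    from it, keeping only the most typical outputs (the middle 80%).
    A crude stand-in for a model trained on a prior model's output."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    samples = sorted(rng.gauss(mu, sigma) for _ in range(n_samples))
    cut = n_samples // 10
    return samples[cut:-cut]  # drop both tails

rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(500)]  # "human" data, spread ~1.0
initial_spread = statistics.stdev(data)
for _ in range(15):  # each generation trains only on the previous one's output
    data = train_generation(data, rng)
final_spread = statistics.stdev(data)
# final_spread ends up far smaller than initial_spread: the
# synthetic-on-synthetic loop collapses the diversity of the original data.
```

Each generation looks locally reasonable, but the variance of the data shrinks relentlessly, a toy version of the collapse the academic studies Rickards mentions describe.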
I think there are problems for the stock market. There are problems for the banking system.
Jim, based on what you're saying, what specific safeguards do you think should be implemented in the financial sector, and also when it comes to national security, especially to prevent, say, AI from amplifying a crisis? And who should be deciding the ethical frameworks for these systems?
Well, on the second point, nobody should.
You should use the training set and recognize the bias.
Let subject matter experts, let humans looking at the output, use their own judgment, their own morality and ethics, and so on, to filter bias.
Don't try to do it for the user, because, as I say, you're just substituting your own biases. There were programs where somebody said, give me a picture of a pope, and the AI came back with a woman.
Well, OK, but in 2000 years there's never been a female pope.
They're all men.
That's an example of substituting bias: answering "give me a picture of a pope" as if the question were about a world with no sexual bias.
Well, that wasn't the question.
So, I say, get out of the business of filtering it.
But more to the point, there's something called cybernetics; cybernetics comes from a Greek word referring to the helmsman.
But instead of these crude circuit breakers, where the market is down 10%, you take a time-out, and then you keep going, think of driving on ice. If you slam on the brakes, the car doesn't stop; it just skids on the ice and might go off the road.
You have to pump the brakes gradually. So instead of shutting the markets when they're down a certain percent, you just say: all right, from now on we're only going to execute half of the sell orders, and if the market is down a little more, we're only going to execute 10% of them, and so on.
In other words, pump the brakes instead of slamming the market shut.
That's a better system, and it really does give people time to think about what's going on.
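Rickards' "pump the brakes" idea can be sketched as a graduated throttle on sell-order execution. The thresholds below deliberately echo the familiar 7%/13%/20% market-wide circuit-breaker levels, but the fractions and the whole mechanism are hypothetical, a sketch of his proposal rather than any actual exchange rule:

```python
def sell_execution_fraction(drawdown):
    """Fraction of incoming sell orders to execute at a given intraday
    drawdown. Instead of halting the market outright, execution is
    throttled in stages: pumping the brakes rather than slamming them.
    Thresholds and fractions are illustrative, not a real exchange rule."""
    if drawdown < 0.07:
        return 1.0   # normal trading
    elif drawdown < 0.13:
        return 0.5   # execute half the sell orders
    elif drawdown < 0.20:
        return 0.1   # execute one sell order in ten
    else:
        return 0.0   # pause sell execution entirely
```

A feedback rule like this is closer to the cybernetic helmsman Rickards invokes: a continuous correction that bleeds off selling pressure, rather than an on/off switch that invites a stampede the moment the market reopens.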
OK, Jim, well, we will have to leave it there as we are quickly approaching the opening bell, but we do have your friend and fellow debater James Altucher joining us later.
So hopefully we can have both of you on for a debate once again on Bitcoin versus gold.
Thank you so much for joining us this morning, Jim.
Thanks.