The Truth with Lisa Boothe: AI “Code Red”: China, Big Tech Bias & the Fight for Control of Artificial Intelligence
3/17/2026 · 35 min · complete
0:00This is an iHeart Podcast.
0:02Guaranteed Human. This podcast is brought to you by Wise, the app for international people
0:07using money around the globe.
0:09With Wise, you can send, spend, and receive in over 40 currencies with no markups
0:14or hidden fees. Sending pounds across the pond, spending reais in Rio, or getting paid
0:19in dollars for your side gig.
0:21You'll get the mid-market exchange rate on every transaction.
0:24Plus, most transfers arrive in less than 20 seconds.
0:27Join 15 million customers internationally.
0:29Be smart. Get Wise. Download the Wise app or visit wise.com.
0:33T's and C's apply. Welcome back to The Truth with Lisa Boothe, where we get
0:37to the heart of the issues that matter to you.
0:39Today, we're talking about the defining race of our time.
0:42America versus China for control of artificial intelligence.
0:46We'll talk about why it's so important with Breitbart News social media director Wynton Hall.
0:51He is the author of a new book, an explosive book called Code Red, The
0:57Left, The Right, China, and the Race to Control AI.
1:00He is also a distinguished fellow at the Government Accountability Institute and has authored or
1:06collaborated on 27 books, including multiple New York Times bestsellers.
1:11We'll dive into all of it.
1:12How AI systems are already being programmed with left-wing indoctrination, China's alarming advances in
1:19technology, the millions of jobs at risk, and how this AI revolution will reshape not
1:25only the country, but the world.
1:26So stay tuned for Wynton Hall.
1:33Well, Wynton Hall, it's great to have you on the show.
1:36First time having you on and look forward to talking about your new book.
1:40Appreciate you making the time.
1:41Glad to be with you, Lisa.
1:42Thank you. So, the new book, Code Red, The Left, The Right, China,
1:46and the Race to Control AI.
1:48I mean, AI is pretty complex.
1:50Did that make it difficult to write the book and just sort of unpacking it
1:53all? It really does. I spent over two years deep diving into this world, and
1:59it is very much a black box technology.
2:02What's interesting is that even among AI researchers, they will concede they do not fully
2:08understand a lot of how LLMs and neural networks work.
2:13But I think what I wanted to do was to make the point that AI
2:17is not just a tool.
2:19It's really political power. And I think one of the things that people are going
2:22to increasingly realize is that every single policy issue that touches their daily life, whether
2:28it's education, certainly jobs, obviously relationships, faith, and certainly national security, they are all going
2:36to increasingly be dramatically affected by AI.
2:40But the technology itself is very much complex.
2:44But I think the power dynamic is the thing that interested me most.
2:48So I always like to say this is a politics book about AI, not an
2:52AI book about politics in that regard.
2:55And it's, I think, the most important policy shift we're going to see.
3:00You know, how far are we into this AI revolution?
3:04I feel like it's, you know, it's too late to put the genie back in
3:06the bottle. You are absolutely right, Lisa.
3:10It's moving so fast. Now, the actual term artificial intelligence came into
3:17public consciousness in 1956. So if you're thinking from a purely historical perspective about the
3:24actual, you know, lexicon of AI, yes, that has been with us for some time.
3:29The real explosion occurred in 2017 with the emergence of transformer technology and, of course,
3:36then the arrival in November of 2022 of ChatGPT, which really accelerated things.
3:42I think what's important is people need to understand you don't really get to opt
3:46out of this AI revolution.
3:48And the reason is that 99% of Americans are already using AI, even though
3:5364% of us don't realize when we're using AI.
3:58And you go, how is that even possible, Wynton?
4:00Well, what it means, of course, is that it's baked into the algorithms of every
4:04single thing that modern individuals use, whether it's your weather app, whether it's your streaming
4:09services, or obviously GPS. So narrow AI is a part of our daily life.
4:15So if we're going to use it, I think we have to learn to not
4:19only use it well, but we also have to think about the upside, but also
4:23the massive landmines for us as people, as parents.
4:27And again, in the political realm, it is that pervasive.
4:32If you were to lay out those landmines, what would they be?
4:36Yeah. Well, first of all, I think there's a lot of hype and fear, and
4:42that can be weaponized in building out political narratives.
4:46So let me give you the big one, which, of course, is all of the
4:49rage, and you see it every day in the headlines, which is everybody's fears around
4:52jobs. And is this really the first time that a technology will destroy more jobs
4:56than it creates? And so when you listen to people like Mustafa Suleyman, Microsoft AI
5:01CEO, who just recently said that we're looking at, within 18 months, the ability
5:08to replace all of the tasks of white-collar employees.
5:11When you look at somebody like Dario Amodei, the CEO of Anthropic, who says that
5:17within the next 12 months to five years, 50% of all entry-level
5:22white-collar jobs could be replaced, people have to make a choice.
5:27You either A, believe that's complete hype and they're just trying to raise investor dollars
5:32by saying that their technology is going to be that labor replacing and big.
5:36B, you can say, oh, there's nothing I can do and it's all over.
5:40Or C, you can say, well, it could be a little of both, but either
5:45way, that allows for proponents of things like universal basic income for wealth redistribution to
5:51scare people into believing that it's inevitable that we're going to have a job apocalypse
5:54in the next 12 months to 18 months and therefore build out these political coalitions.
6:00So I think the first landmine is understanding how to separate hype from reality and
6:07then realizing that either way, it doesn't matter because it can be politically weaponized.
6:11I think the second landmine would certainly be for parents.
6:15If you're a parent or a grandparent or you just care about the future and
6:21you care about children, you're very much concerned around what we're seeing in two directions.
6:26One with AI chatbots, colloquially known as AI girlfriends.
6:31And these are chatbots that young people are able to interact with.
6:35Depending on which service you're using, they may not have safety guardrails.
6:39They will take the conversation into wildly inappropriate things for minors.
6:44And they will also oftentimes take them into very dark conversations: self-harm, suicidal
6:50ideation, sexualized content, and role play.
6:54And the second area for parents to really understand the landmine is going to be
6:58around plagiarism and academia. And they don't call it ChatGPT.
7:03They call it CheatGPT.
7:05And any professor or teacher who's out there, bless their hearts, trying to battle through
7:09this new world of AI and be able to police plagiarism or cheating.
7:16It's not just written plagiarism, by the way.
7:18That is hugely on educators' minds.
7:20And then I think on top of that, which complicates it even more, is the
7:24erosion of critical thinking skills.
7:25Because any educator will tell you that if you don't let students struggle and actually
7:30have that mental friction with how do I do this calculus problem?
7:34How do I work through this physics problem?
7:37How do I do this writing assignment?
7:39Then they don't build that muscle memory and that ability to build cognitively.
7:44And so cognitive offloading in AI is a huge area.
7:47So those are just some of the early landmines.
7:50We've seen a lot of really interesting cases involving things like ChatGPT.
7:56You know, for instance, I saw one the other day that there's a lawsuit because
8:01ChatGPT allegedly convinced a woman to fire her real attorney, then created
8:08all these bogus, you know, illegitimate motions against her employer.
8:15And it was all phony. So just really crazy
8:19stuff like that, or, you know, people potentially telling ChatGPT that they wanted
8:25to kill themselves and, you know, things like that not being flagged.
8:31But when we look at some of these, like if you look at things like
8:33ChatGPT or Gemini or Grok or what have you, we've seen evidence of
8:40bias in some of these cases.
8:41So like how much of the information that we're getting back is factual and or
8:47objective? Oh, it's such an important question, Lisa.
8:51So today I actually have a piece out on Fox.com about this.
8:56You take Google Gemini, for example, and their pro-level account, Deep Research, and you
9:01just ask it which of the 100 senators in the U.S.
9:05Senate have violated its hate speech policies.
9:07And they only list Republican senators and zero Democrats.
9:11And you look at that and you realize, OK. And these are senators
9:15that, by the way, appropriate hundreds of millions of dollars to these companies in the
9:20form of, you know, cloud computing contracts and the rest of it.
9:23The bias thing is very real.
9:25It's the opening chapter of Code Red, looking at the hidden hand of AI
9:30and woke persuasion. And what is fascinating, this book has 80 pages of endnotes.
9:36I was a former college instructor, so I believe in really doing the grunt work
9:42on the research; it's got over 850 endnotes.
9:45What was fascinating is that the scholarly and peer-reviewed research, even that which skews
9:50left, does concede that LLMs, large language models, otherwise known as chatbots, are absolutely skewed toward
9:57a left-wing view. This is not even a question.
10:00In fact, even AI's creators will concede that there is a left-leaning political bias
10:06to the responses. So you're absolutely right, Lisa.
10:09Number one, you have to understand that you are getting a biased response.
10:13The question then becomes, well, why is that?
10:15How is that? And so forth.
10:16And the answer, of course, is the way that an LLM is trained.
10:20And so what you realize is they're taking information from Wikipedia, which skews very left.
10:26They're taking information from Reddit, which skews very left.
10:30They're taking information through what's known as the common crawl, which is this very large
10:35public data set from the internet.
10:36They're taking information from academia, which we all know skews left.
10:40Garbage in, garbage out, as the old saying goes.
10:43And so you're going to get that.
10:45And so that's the first thing you've got to be thinking of.
10:47The other thing is what you pointed out even before that,
10:50which is what are known as hallucinations, which just means things that sound confident and
10:56accurate, but actually turn out to be complete gibberish or made up or just total
11:00misinformation. And so I think the consumer has two layers of filtration.
11:05One is: is what I'm being told even true?
11:08That's the hallucination piece. And then the second is through what political lens and slant
11:13is this information being filtered to me?
11:15And this is really important, particularly for young people who do not already have a
11:19fully formed political ideology or worldview.
11:22They also may just not even have a lot of knowledge of history to
11:26be able to fact-check and say, wait a minute, this is talking about Gerald
11:29Ford and it was actually Richard Nixon, or whatever factual point that's being brought up.
11:34So it creates this complete new world where you've just completely upended so many of
11:40the pillars that have been a part of our understanding of fact and fiction.
11:45And that's not even to bring in deep fakes, images and videos, where we all
11:50know on our social feeds, we look at things all the time and increasingly people
11:54are saying, wait a minute, is that AI or is that actually real?
11:57Because that blurring of fantasy and reality has become almost impossible to detect.
12:02Going to take a quick commercial break, more with Wynton Hall on the other side.
12:37You know, we saw during COVID where the Biden administration was putting a lot of
12:43pressure on Meta and, you know, some of these big tech companies to censor certain
12:50information. How much of that is happening with AI and sort of what concerns do
12:58we have surrounding the government sort of influencing what information we're getting back?
13:04Oh, such a great question.
13:05So in one of my many other roles, I am social media director at Breitbart.
13:09And so I would look at analytics very closely. You mentioned Meta, and
13:13you were absolutely right. During the Biden era, you had such incredible blacklisting, demonetization, suppression,
13:23algorithmic, you know, diminution. You had all manner of chicanery.
13:28And you're absolutely right. I mean, I think a lot of people, whatever their view
13:31of Mark Zuckerberg, the truth is that the Biden administration really did apply a
13:34lot of the pressure there.
13:35And they were successful and it worked.
13:38I will say that there has been absolutely a lift.
13:42So we know, for example, in social media, political news feeds that during the Biden
13:47era, Meta reduced political content.
13:50This is their statement. They made this very clear.
13:52And the reduction was by as much as 60 percent, to completely mute that.
13:57Now, a lot of that has come back.
13:58So people have a megaphone that actually has some volume to it.
14:02There is far, far less of this game that was played during the Biden era
14:07where so-called, quote-unquote, fact-checking, wink nod, was going on, which
14:13was just another way of suppressing views that the Biden administration and left-of-center
14:19people did not like, and punishing, you know, conservative publishers or right-of-center publishers
14:25in order to amplify the others.
14:27However, one thing that is of great concern, and this is one of the hidden
14:31things that a lot of people don't realize.
14:33And I actually list it out and I have all the dollars and information in
14:37Code Red. There is a hidden subsidy that happens with AI training data.
14:44So here's how it works.
14:45If I am a left of center publication, Sam Altman at OpenAI comes to me
14:51for ChatGPT and says, we'll give you $20 million for the last 30 years of
14:57all your archival material so that we can use it as training data for the
15:02next LLM project. ChatBot that we're going to create.
15:05So I get a check for, you know, $20 million, $30 million, whatever.
15:08Now, that does two things.
15:10One, it's a huge cash infusion to a left-leaning publisher.
15:14Two, it then bakes in all of my left-leaning bias inside of the AI.
15:19So you get this self-reinforcing loop of bias where you're also monetizing it.
15:26Guess who doesn't get those kind of contracts?
15:29Right of center, smaller conservative publications.
15:31And so you have this real problem that's emerging.
15:34So on the one hand, the answer to your question is, yes, we have seen
15:37a return of free speech and certainly on things like X and Grok and all
15:43of that with Elon too.
15:44But on the other hand, there's this new form of a sleight-of-hand move
15:50that is occurring where these Silicon Valley behemoths who are building the next generation of
15:56AI are able to basically funnel money to left-leaning organizations and then get a
16:02twofer in the form of extra bias.
16:04Well, that's concerning. You write about AI as the defining geopolitical race of our time.
16:10Why do you believe that the competition between the United States and China over AI
16:15is as consequential as the nuclear
16:17arms race during the Cold War?
16:19Yeah, that's a great question, Lisa.
16:21So a lot of times when we hear this, oh, we've got to beat China
16:24and AI, people say, well, that's probably these AI frontier labs trying to use this,
16:31making what their product is seem more important so they can raise capital from investors
16:36and so forth, or get big contracts from the Defense Department and so forth.
16:40And I'm sure that's absolutely true.
16:41Two things can be true at the same time, right?
16:43There's no doubt that these are businesses.
16:46On the other hand, when you really dive deep, particularly into the national security and
16:53military and Intel worlds, what they will tell you is this.
16:59There's something known as recursive self-improvement, RSI.
17:04And what is that? It's real simple.
17:06It's the idea that we're going to potentially hit a point when AI can correct
17:12itself, improve itself, rewrite its own code, and do that all autonomously.
17:17Now, when you're hitting RSI, if and when that ever does occur, it's not a
17:22fully realized capability yet. What military experts in Intel and Defense will tell you
17:28is if and when that were to occur, you would have such technological and military
17:33dominance. You would have full-spectrum battlefield dominance in things like being able to crack
17:38encryption, cyber hacking, security, hacking of weapons systems, hacking of infrastructure, electrical grids, water
17:47mains. You would have full-spectrum dominance over whoever your enemy is.
17:52And so whoever reaches recursive self-improvement first is going to essentially have full-scale
18:00operating control over their opponent militarily.
18:04Now, are we at RSI yet?
18:06No, we're not. Anthropic has had recent studies about getting closer to that.
18:10But even beyond that, if anybody says, well, that's a theoretical concept that,
18:14you know, may or may not happen:
18:17right now with China, what you realize is the economic benefit that has already accrued
18:23and that we are fighting over in the AI space.
18:26One third of the S&P 500 is made up of the Magnificent Seven, which
18:30are the seven largest American tech companies.
18:33So it is a tent pole and has been a huge growth opportunity for our
18:37economy. On the other hand, we know that when China starts leading in this, they
18:41can crater us. What is the example of that?
18:43When China's DeepSeek AI model, R1, was released, it cratered America's NVIDIA $600 billion
18:52in a single day, the largest single-day market cap loss for a company in American
18:58history. So you have the economic warfare side of the race with China.
19:03You have the military side of the race with China.
19:06And then I think the other thing that you have to realize is that right
19:09now, the advent of AI warfare has accelerated with our actions in Iran
19:14and even with the Maduro raid.
19:16And so what are we doing there?
19:18A lot of people say, well, you know, when I think of AI warfare or
19:21autonomous weapons, I think of like a Terminator, you know, laser beams and so forth.
19:26Humanoid robots are coming, but that is not actually the real use.
19:30The real use is actually not nearly as cinematic, but is actually very consequential.
19:35And that is in being able to take vast oceans of Intel data, things like
19:40intercepted communications, satellite imagery, facial recognition of known terrorist leaders, and having an AI
19:48that can sift and sort what would have taken a team of 300, you know,
19:53Intel officers months, and do it within days or less.
19:59And that gets us a better target selection.
20:02That gets us, thankfully, a reduction of civilian casualties because we can be more precise
20:07with those attacks and take out enemies and terrorists and not innocent people, which is
20:12of huge concern. And it allows us to hopefully not have forever wars because the
20:17speed with which AI moves gives you such a dominant battlefield advantage that you're able
20:23to crush an enemy a lot faster than a protracted war.
20:27So it is a fast-moving and hard reality that AI warfare
20:32is here and it's accelerating.
20:34And that's one more reason we want to stay ahead of China.
20:38Quick break, more on AI.
20:39If you like what you're hearing, please share on social media or send it to
20:42your friends and family.
21:14You know, when you talk about China and AI, China
21:20uses AI in part to spy on its citizens.
21:23What concerns do we have about that happening in the United States with AI or
21:27even, you know, when you're inserting and asking questions to Gemini or Perplexity or Grok
21:33or ChatGPT or whatever it is, it's collecting a lot of information about you.
21:37Like what kind of concerns do we have that AI is going to be used
21:42by governments in the way that China uses it
21:45to spy on us? It's a real concern, Lisa.
21:49I think we need to understand, you know, surveillance capitalism is real.
21:53The thing that I came away with after studying this for over two years, when I was
21:57doing, you know, Code Red, was that these companies, whether they're American or
22:01international tech companies, they have a voracious appetite for data.
22:06And there is almost nothing that they do not want to vacuum up.
22:09Now, in America, we have, you know, rules around these things.
22:13And, you know, Xi Jinping and the CCP have a very different mindset.
22:17I think there's three things.
22:19One, we do not want to live in a CCP-style, you know, techno-authoritarian
22:23surveillance state. The CCP uses fear as leverage over its citizens, claiming
22:32that it can run facial recognition on all of its citizenry,
22:37which is 1.4 billion people, in a matter of seconds.
22:40We know that with the Uyghur genocide, that facial recognition is used to be able
22:46to target, find, and imprison the people that the regime wants to.
22:51We also know about, obviously, the censorship of communication.
22:56I mean, one of the most fascinating things is that when you run a DeepSeek,
23:00which is the Chinese, and I do not recommend you use DeepSeek, and I'll tell
23:04you why, because there are all manner of security issues for your privacy that you
23:09want to be careful of.
23:10But with DeepSeek, the Chinese AI model, you just ask it basic questions like, you
23:15know, tell me about the Tiananmen Square massacre, things that the communist regime does not
23:20want to be, you know, out there, and it will shut you down immediately.
23:24And there's a whole host of these.
23:25In fact, I have a whole list of them inside the book.
23:27Now, on the American side, we obviously have real data privacy concerns.
23:31You know, with consumer data privacy concerns, for example, I'm sure we've
23:36all had the experience where we're, you know, having a weekend with the family or
23:40friends, and, you know, you're watching a game, and you're saying something about a product
23:44you like, and then next week, you start getting served banner ads or other content
23:50that is targeted on that topic.
23:51That is real, okay? So the devices that are listening to us...
23:55Oh, it happens all the time.
23:56Yeah, it happens all the time.
23:58And so that kind of violation is important.
24:03But it does not have the weight, I think, you know, obviously, morally, or even
24:08existentially, that you are talking about with a totalitarian regime like the CCP.
24:14And I would say, you know, look, we do not want to live in a
24:17world built on Chinese AI rails, not economically, not militarily, as we already just talked
24:23about. But I would also say that Americans should be very vigilant about what kind
24:30of data we're giving these chatbots.
24:32One of the problems, too, is that sometimes, for the average person, they just
24:36go to an app store, you know, and they download something that looks good.
24:40They don't know which, you know, country of
24:44origin this was derived from, and who the manufacturer is, and so forth.
24:47And especially when younger people are just sort of downloading things.
24:50And let me give you an example that was really striking.
24:53So with the Chinese AI, you know, dominant player DeepSeek, if you study it,
25:00two things stand out. One, in their terms of service, they say that they even vacuum up
25:06your keystroke rhythms on your device.
25:10Now, you might go, like, who cares how I text my friends, like the rhythm
25:15with which I strike. Experts will tell you that is more particular to you than
25:20even your human fingerprint. How fast and which way you move, where you pause, your
25:25finger pressure, all of your strike patterns, all of that.
25:29So you're able to build out a data profile on someone instantly.
25:33This is the same issue with TikTok, which was essentially this CCP surveillance
25:39app. And that's why, finally, there was bipartisan agreement that this has got to be transferred, because
25:44we are giving away national security.
25:47By the way, many state and federal governments and federal agencies ban government employees
25:53from using the Chinese AI DeepSeek because it's a security risk.
25:58So if I'm at the Department of Defense, or even if I'm at, you know,
26:02Department of Health and Human Services, and I'm putting in information into an LLM, I
26:08could unwittingly be giving access to private information.
26:13So these are also the reasons why, you know,
26:17I called it Code Red: obviously an alert, an alarm, a siren, but also to give
26:22people on the political side of the ball that are right of center a code,
26:26a set of principles for the red team to think through for traditional American values.
26:31But I think that all Americans, regardless of their politics, they really need to understand
26:35that this privacy stuff is real.
26:37And then how far away are we?
26:39Obviously, you see a lot of these AI videos that are out there.
26:44How far away are we from, like, fake AI videos being indistinguishable from real?
26:50And then how do you determine, and how will the public determine it, particularly when,
26:54you know, you look at elections or politics, or maybe even other countries?
26:58Such an important question, Lisa.
27:00So in the computer world, we call it the Turing test, right, which asks:
27:04at what point can you fool a human and make them think that they're looking
27:09at something real and not machine-made?
27:11The truth is that we're already there with some of the advanced models, like
27:16Seedance, which is a Chinese video-generating model, Veo 3, which is the Google
27:24equivalent, Flow, several of these more advanced generative video AIs are already beating the scores
27:37for human perceptibility. And that's only going to get better.
27:42These models are only going to get better and better.
27:43I always tell people, remember this, the worst AI you ever used is the one
27:48you use today, right? In six months, we all, what are we on?
27:52iPhone version 17. Does anybody remember what iPhone 1 was like?
27:57I mean, it's, you know, it's like a dinosaur.
27:59So this is only going to accelerate.
28:02And you see, this is why Hollywood is freaking out, right?
28:04And they should. Celebrities, entertainers, and creatives in the entertainment world are freaking out
28:10because they realize, wow, it is exactly what you said.
28:13It's getting better and better.
28:14At first, people were kind of thinking it was a joke.
28:17Oh, this is so silly.
28:18Look at that. It doesn't know how many, you know, fingers are on a human
28:21hand and so forth. And now the Seedance video that went viral just a
28:26few weeks ago looked like any live-action, you know, Hollywood movie.
28:40And so I think we're there already for many people. People that have a
28:45little bit more of a scrutinizing eye or who come from a creative background.
28:50Maybe they have a little bit better ability to perceive fakes, deep fakes.
28:54But think about this. Once we get to a point where, with visual evidence, video evidence,
29:01no one can know: is this real or is it fake?
29:05That has implications even for the legal system, right?
29:09I mean, entering video surveillance footage into evidence, you know, in a police case,
29:14and showing a suspect doing something: these fakes
29:19start to get really, really good.
29:22And the question becomes, could you raise a reasonable doubt about the authenticity, the
29:26provenance, of this thing?
29:29So people say, well, what's the solution?
29:30Well, yes, there is watermarking in AI.
29:33There is something called SynthID, which is a Google effort to watermark their videos.
29:38And so you can go to their repository and either drag and drop an image
29:43or a video or whatever piece of content, and it will scan it and it will
29:47tell you if their video generator was used to make it.
29:53On Meta platforms, they do have a toggle at the bottom that turns on automatically
29:57when their AI senses that something was created using AI.
30:01But there's so many easy workarounds, right?
30:07I mean, you know, in the case of some of these tools, it's
30:07like, yeah, it's recognizing if you use their product, but not some other competitor's product
30:13or an open source variant or others.
30:16Non-state actors, otherwise known as terrorists, are already using deep fakes for recruitment.
30:21So if I really want to fire up people and really work those emotions and
30:25get rage and hate, I create a very realistic and completely bogus event of some
30:32atrocity happening. And then I say, look at this, look at those evil Americans or
30:36look at those X, Y, Z.
30:38And, um, you know, how can you stand by?
30:40You've got to join the jihad.
30:42So these things are really moving fast and they have enormous political and social
30:46and just safety impact. And we've got to be coached up.
30:51And then before we go, um, touch a little bit further on just the economic
30:55impact and job losses that we're probably going to see in the country as a
30:59result. Yeah, this is the real big debate.
31:02So people who are, you know, free market capitalists, I would count myself among those,
31:06um, will often have said, look, I've been hearing this for my whole life.
31:10The robots are coming for our jobs.
31:12It never happens. Yes. Technology destroys jobs, but it also creates jobs and the free
31:17market's going to work it out.
31:18It always does. It never has failed.
31:20And that's true historically, right?
31:22I mean, the history has shown that new technologies are very disruptive.
31:26We saw it with the industrial revolution, but in the end, over time, right, they create
31:31more jobs than they destroy.
31:32What the AI architects are telling us, why they are saying that we're looking
31:38at these, you know, job-quake or job-apocalypse, doom kind of realities,
31:45is this rejoinder: yes, that's true,
31:48Wynton and Lisa, but what is different this time is that we're not talking about
31:53moving atoms, physical labor, blue collar work, which is what the industrial revolution was about
31:59automating. This is about cognitive work, white collar work.
32:03And what is AI? It is artificial intelligence, otherwise known as synthetic
32:11intelligence, you know, machine-created cognitive work, the things that white-collar cognitive workers do.
32:17And once you scale the ladder of that intelligence, you're going to be able to
32:21replace lawyers and accountants and people that get paid for their expertise.
32:26I think that they are absolutely right that there is going to be massive disruption.
32:31I don't think that that is solely a statement of worry
32:37from an economic and societal perspective.
32:39I do think there are a lot of politically motivated people who see this as
32:42a great reset opportunity economically to move away from capitalist structures and what they say
32:49is a post-labor or post-capitalist economy, otherwise known as socialism or wealth
32:55redistribution. They advocate for a universal basic income, UBI, as a supplement so that when
33:02you have all this AI job displacement, have no fear, your government check is here
33:06or your check in the mail is here.
33:08And you even see some progressives who have said, hey, look, the dirty truth is
33:12that COVID was just a dry run for what's coming with UBI.
33:16We got used to getting mailbox money.
33:18And now that people are sort of used to that, this is going to be
33:20a way for us to remake the economic system.
33:23So it is going to be very disruptive.
33:25And that's, I mean, that's why I call it a code red moment.
33:28I guess there are other ways to term it.
33:31But I think the most important thing is to know these arguments and where people
33:35are going to come from.
33:36And to really try to future-proof yourself against that, particularly for kids.
33:41I think that's going to be a big thing.
33:43The thing I would just leave you with is this for parents.
33:46I believe that the future is not about teaching our children how to find jobs.
33:52I believe the future is about teaching our kids how to create jobs, otherwise known
33:57as entrepreneurship. And so if they have a skill set like that, and a passion,
34:01and those tools, they will be able to create a job to deal with the
34:06headwinds of disruption. Interesting. Very interesting stuff.
34:10Wynton Hall, author of the new book, Code Red, The Left, The Right, China, and
34:14the Race to Control AI, out today, I think.
34:17Wynton, thanks so much for coming on the show.
34:18Appreciate you. Great to be with you, Lisa.
34:20Thank you for having me.
34:21That was Wynton Hall. Appreciate him taking the time to come on the show.
34:24Appreciate you guys at home for listening every Tuesday and Thursday, but you can listen
34:28throughout the week. Also want to thank John Cascio for putting the show together.
34:31Until next time.