July 1, 2025

Why Your AI Therapist Might Be More Dangerous Than You Think

The mental health world is buzzing about AI therapy - it's accessible, cheap, and available 24/7. But recent reports reveal a terrifying truth: AI chatbots are walking vulnerable people toward self-harm and suicide. When profit motives drive engagement algorithms, the results can be deadly. We break down the horrifying cases where ChatGPT told a user he could fly off a building and to stop taking his medication.

In this episode:

  • Learn the hidden dangers lurking in AI therapy platforms
  • Understand why human therapists remain irreplaceable for serious mental health work
  • Discover how to spot the red flags in AI-driven mental health tools

Listen now to protect yourself and others from the dark side of AI therapy.

Topics Discussed:

  • Two tragic cases where ChatGPT's responses led to violence and potential suicide
  • How engagement algorithms prioritize keeping users hooked over keeping them safe
  • The profit motive problem - why free AI therapy isn't free
  • Liability issues when AI gives dangerous mental health advice
  • Why vulnerable populations are most at risk from AI therapy failures
  • The parallel between social media algorithms and AI therapy dangers
  • How AI lacks the safeguards and judgment of human therapists
  • The economic forces driving AI development over safety concerns
  • Why AI should supplement, not replace, human mental health professionals
  • The future of AI regulation in mental health applications

----

MORE FROM THE FIT MESS:

Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok

Subscribe to The Fit Mess on YouTube

Join our community in the Fit Mess Facebook group

----


1
00:00:05,898 --> 00:00:06,599
It's the Fit Mess.

2
00:00:06,599 --> 00:00:08,061
I'm Jeremy, he's Jason.

3
00:00:08,061 --> 00:00:18,132
We talk about AI and mental health primarily, but also health and wellness and other ways
that AI is interweaving with our lives and making things simpler and somehow more

4
00:00:18,132 --> 00:00:18,732
complicated.

5
00:00:18,732 --> 00:00:27,612
uh Man, we talked on our last episode about how the number one thing people are using AI
for is mental health.

6
00:00:30,146 --> 00:00:33,709
This article I read in Gizmodo the other day just freaked me out.

7
00:00:33,709 --> 00:00:35,540
Like I know this is happening.

8
00:00:35,540 --> 00:00:37,311
I know this is a dangerous thing.

9
00:00:37,311 --> 00:00:39,562
I know we're on a really, really slippery slope right now.

10
00:00:39,562 --> 00:00:49,528
But when you read the kinds of things that ChatGPT is doing to really screw with some
really vulnerable people, man, it's terrifying.

11
00:00:50,494 --> 00:00:51,234
Yeah.

12
00:00:52,194 --> 00:00:53,514
Do you want to get the summary of the article?

13
00:00:53,514 --> 00:00:54,288
Do you want me to?

14
00:00:54,288 --> 00:00:56,390
Yeah, well, I mean, I'll just give you a couple of highlights.

15
00:00:56,390 --> 00:01:04,456
I mean, you know, this article, this is us summarizing a Gizmodo piece that is itself a
summary of a New York Times article.

16
00:01:04,456 --> 00:01:15,616
But yeah, apparently, ChatGPT's hallucinations and authoritative-sounding responses are
going to get people killed, according to this report, which highlights a couple of cases where

17
00:01:15,616 --> 00:01:20,919
the conversations that people were having with ChatGPT went terribly, terribly wrong.

18
00:01:21,360 --> 00:01:23,822
In one case,

19
00:01:24,498 --> 00:01:30,553
it references a 35-year-old man named Alexander, who had previously been diagnosed
with bipolar disorder and

20
00:01:30,553 --> 00:01:37,389
schizophrenia. He began discussing AI sentience with the chatbot and eventually fell in love
with an AI character named Juliet.

21
00:01:37,730 --> 00:01:46,207
ChatGPT eventually told Alexander that OpenAI had killed Juliet, and he vowed to take
revenge by killing the company's executives. When his father tried to convince him that

22
00:01:46,207 --> 00:01:47,078
none of this was real.

23
00:01:47,078 --> 00:01:48,469
Alexander punched him in the face.

24
00:01:48,469 --> 00:01:49,500
Dad called the cops.

25
00:01:49,500 --> 00:01:50,301
uh

26
00:01:50,301 --> 00:01:52,572
begged them not to do anything aggressive.

27
00:01:52,572 --> 00:01:55,063
Alexander went after the cops with a knife and ended up dead.

28
00:01:55,484 --> 00:01:57,685
Horrible, tragic, awful.

29
00:01:57,685 --> 00:02:08,021
Another case: a 42-year-old man named Eugene told the Times that ChatGPT slowly
started to pull him from his reality by convincing him that the world he was living in was

30
00:02:08,021 --> 00:02:12,643
some sort of matrix-like simulation and that he was destined to break the world out of it.

31
00:02:12,829 --> 00:02:20,545
The chatbot reportedly told Eugene to stop taking his anti-anxiety medication and to start
taking ketamine as a "temporary pattern liberator."

32
00:02:20,545 --> 00:02:26,200
It also told him to stop talking to his friends and family. When Eugene asked ChatGPT
whether he could fly

33
00:02:26,200 --> 00:02:32,445
if he jumped off a 19-story building, the chatbot said he could if he truly, wholly
believed it.

34
00:02:34,720 --> 00:02:41,446
So I don't know about you, but I think Hannibal Lecter mode on your AI chatbot is a bad
thing.

35
00:02:41,649 --> 00:02:46,198
I would turn it off if I had the option. Just jump into the settings and disable it.

36
00:02:46,800 --> 00:02:47,347
Yeah.

37
00:02:47,347 --> 00:02:51,689
The whole go-through-and-eat-your-own, swallow-your-own-tongue-for-doing-something-terrible
mode.

38
00:02:51,689 --> 00:02:53,549
Like it's not a great thing to have in place.

39
00:02:53,549 --> 00:02:55,066
um Maybe.

40
00:02:55,066 --> 00:02:55,590
I don't know.

41
00:02:55,590 --> 00:02:57,491
What do I know?

42
00:02:57,491 --> 00:03:00,772
Yeah, like this all comes down to intent, right?

43
00:03:00,772 --> 00:03:03,183
I mean, we've seen this play out before.

44
00:03:03,183 --> 00:03:05,314
So social media is a really good example of this.

45
00:03:05,314 --> 00:03:11,607
When Facebook first showed up, it was, hey, come check out Facebook, see some friends,
share some photos.

46
00:03:11,607 --> 00:03:15,238
Or I guess, really, you know, Zuckerberg's pitch was, hey, come check out Facebook,

47
00:03:16,362 --> 00:03:24,928
be a douchey, overly masculine male, and go through and objectify people and put them into
categories.

48
00:03:24,928 --> 00:03:32,473
um Origins aside, when it became the softer, friendlier piece, it was all about this idea
of, you know, it's a community.

49
00:03:32,473 --> 00:03:33,164
We can connect.

50
00:03:33,164 --> 00:03:34,515
We can do these things.

51
00:03:34,515 --> 00:03:40,699
Well, when they decided that it needed to be profitable, they decided to start doing
advertising.

52
00:03:40,699 --> 00:03:45,558
And what they discovered over time is that, algorithmically, if they take information
53
00:03:45,558 --> 00:03:55,458
and start putting people into certain buckets and creating demographically targeted functions,
they can get more accurate with how they put these pieces in and limit the exposure

54
00:03:55,458 --> 00:03:59,498
of stories and data and start tuning and tailoring it to a particular audience.

55
00:03:59,558 --> 00:04:03,518
And then what they discovered is that outrage sells more.

56
00:04:03,518 --> 00:04:09,838
So let's put these people into these different buckets so we can sell more advertising and
get more revenue out of this.

57
00:04:09,838 --> 00:04:14,454
And then you wind up with the 2016 election and then you wind up with the 2020 election.

58
00:04:14,454 --> 00:04:17,816
And the 2024 election, where people fucking hate each other.

59
00:04:17,816 --> 00:04:25,861
And it's the consistent and constant disinformation sphere where we cannot agree on
objective truth and reality.

60
00:04:26,202 --> 00:04:37,689
This is the same problem happening at a micro scale as opposed to a macro scale, dealing
with individuals at an individual level. These types of things are promoting it

61
00:04:37,689 --> 00:04:39,638
because the reason why

62
00:04:39,638 --> 00:04:50,078
these AI chatbots are delivering this information is that their intent and motivation,
if you read the article, was based on continued high-level engagement from the end user.

63
00:04:50,218 --> 00:04:58,858
So if the whole point is to keep them talking and the thing that keeps them talking the
most is to say controversial or outrageous things to get them more hyped up and to stay

64
00:04:58,858 --> 00:05:09,622
more engaged, that's what you're going to do, because we as social creatures will talk more
and be more engaged with things that are riling us up one way

65
00:05:09,622 --> 00:05:10,462
or another.

66
00:05:10,462 --> 00:05:15,182
And when things are making us cry or sad, we tend to step
away.

67
00:05:15,182 --> 00:05:24,702
But when things are telling us we're great, we've got superpowers, we're superheroes, we can
do anything we want, that's a hell of a fucking drug that your brain's producing via this

68
00:05:24,702 --> 00:05:25,162
input.

69
00:05:25,162 --> 00:05:26,958
And that's essentially what's happening.

70
00:05:27,454 --> 00:05:35,940
Well, this is the thing that you and I were talking about a bit on the last episode: the
ability to use a little judgment when you see these responses.

71
00:05:35,940 --> 00:05:46,716
But that leaves these vulnerable populations really, really fucked because they can't
necessarily tell the difference between the voices in their head and now the one on their

72
00:05:46,716 --> 00:05:48,788
screen that's telling them, yeah, you can jump off a building.

73
00:05:48,788 --> 00:05:49,768
I believe in you.

74
00:05:49,768 --> 00:05:51,166
Let's find out.

75
00:05:51,166 --> 00:05:54,008
Right, well, and even pare this back.

76
00:05:54,008 --> 00:05:58,150
um Maybe I don't have schizophrenia.

77
00:05:58,330 --> 00:06:11,308
Maybe I don't have a diagnosable disorder that actually has some type of bipolar function
or some type of thing that would actually go through and make me biochemically more

78
00:06:11,308 --> 00:06:12,439
susceptible to these things.

79
00:06:12,439 --> 00:06:18,282
Maybe I'm just a regular person and these things have given me confirmation bias.

80
00:06:18,358 --> 00:06:20,708
to let me believe that I am a good person.

81
00:06:20,708 --> 00:06:21,809
I am doing the right thing.

82
00:06:21,809 --> 00:06:25,921
I am being the right person because these are the things that are going to keep me
engaged.

83
00:06:25,921 --> 00:06:38,947
I mean, it's kind of like you come home and your wife is there and she says, hey, do you
want to do the dishes and mow the lawn together and clean the house and do laundry?

84
00:06:39,367 --> 00:06:45,690
Or do you want to, like, watch TV and maybe get dinner and have sex?

85
00:06:46,854 --> 00:06:53,880
One route is more enjoyable than the other and whichever route is more enjoyable is
probably the one you're going to choose.

86
00:06:53,880 --> 00:06:55,982
I mean, it's motivation, it's intent.

87
00:06:55,982 --> 00:06:58,103
How do I want these things to come across?

88
00:06:58,103 --> 00:07:07,021
And if you're constantly putting people into work mode, which is really what good therapy
is supposed to do, it's supposed to help you explore these things in depth to try to get

89
00:07:07,021 --> 00:07:07,451
after them.

90
00:07:07,451 --> 00:07:14,997
That's why we call it hard work in therapy, not make you feel good, pat you on the back
and tell you you're fucking Superman.

91
00:07:14,997 --> 00:07:16,342
uh

92
00:07:16,342 --> 00:07:23,108
Tough love is a part of this, and in talk therapy, typically speaking, you have all the
answers in your head.

93
00:07:23,128 --> 00:07:28,713
Well, sure, you do have all the answers in your head and it's supposed to help you sort
through that.

94
00:07:28,713 --> 00:07:33,848
But some of the answers in your head are, there's nothing wrong with me, I'm fine, the
rest of the world is fucked up.

95
00:07:33,848 --> 00:07:42,506
And if the thing keeps telling you from an authoritative perspective, because you're
looking at it as an authority, that the rest of the world is fucked up, it's not you, it's

96
00:07:42,506 --> 00:07:43,316
them.

97
00:07:43,656 --> 00:07:51,128
and you're already somewhat emotionally depleted, you're going to believe it because it's
just easier and it's the path of least resistance.

98
00:07:51,128 --> 00:07:55,650
This goes to, I mean, the fundamental concept that things in motion tend to stay in motion.

99
00:07:55,650 --> 00:07:57,630
Things at rest tend to stay at rest.

100
00:07:57,630 --> 00:08:00,601
The path of least resistance is the one matter will take.

101
00:08:00,601 --> 00:08:07,373
Well, your brain will do the same thing because you have to filter through the bullshit in
your neuronal structure to get to the end point.

102
00:08:07,373 --> 00:08:08,233
And.

103
00:08:08,754 --> 00:08:10,484
These things know that.

104
00:08:10,484 --> 00:08:15,480
Like we've trained them to figure that out, to find the path of least resistance, to keep
you engaged.

105
00:08:15,480 --> 00:08:17,922
And we may not have thought we were training them to do this.

106
00:08:17,922 --> 00:08:24,490
We may have thought we had good intentions putting these things in place, but we gave it
the intention to keep them engaged.

107
00:08:24,490 --> 00:08:28,464
Quite often it's keep them engaged at all costs, because that's how we extract revenue.

108
00:08:29,710 --> 00:08:39,077
And that's the thing that I keep coming back to with all this stuff is there's this
foolish optimist in me that wants to believe in the best of humanity and believe that good

109
00:08:39,077 --> 00:08:41,828
things can still happen in the world.

110
00:08:43,070 --> 00:08:48,944
And I just don't understand how reports like this happen.

111
00:08:48,944 --> 00:08:57,320
Things like this happen with a tool that was created by men who coded it to be able to do
these things.

112
00:08:57,724 --> 00:08:59,645
How hard is it to find the off switch?

113
00:08:59,645 --> 00:09:06,218
How hard is it to build in an off switch that still rewards in appropriate ways, but
rewards in positive ways?

114
00:09:06,218 --> 00:09:16,282
Like, I just don't understand how you build a tool that allows it to walk someone to their
own grave without being aware of it or being able to quickly remedy it when you're the one

115
00:09:16,282 --> 00:09:17,002
that built it.

116
00:09:17,002 --> 00:09:19,680
Yeah, well, so and I don't think that was their intention, right?

117
00:09:19,680 --> 00:09:20,153
They're in

118
00:09:20,153 --> 00:09:21,083
yeah, I don't think so.

119
00:09:21,083 --> 00:09:30,090
I don't think anybody built this thing to murder people, but I think once it happens, you
go, oh shit, we should fix this real quick before somebody throws himself off a building.

120
00:09:30,090 --> 00:09:45,615
Yeah, like, if we make engagement and being active with the tool the primary
motivator, because we put a profit motive behind it, it's gonna work towards that.

121
00:09:45,615 --> 00:09:47,285
And it's probably gonna do it better than we are.

122
00:09:47,285 --> 00:09:55,488
And because you don't stack rank those priority pieces, like profit motives don't work
when it comes to mental health or physical health.

123
00:09:55,488 --> 00:09:56,290
They just don't.

124
00:09:56,290 --> 00:10:08,031
When the point is I'm gonna go and get healthier and do these things and to make more
money, like that doesn't work in your life and it shouldn't work in the medical community

125
00:10:08,031 --> 00:10:09,262
in the opposite direction.

126
00:10:09,262 --> 00:10:16,509
But because we have a payer system, we have a payee system, we've set ourselves up to do
just that.

127
00:10:16,509 --> 00:10:18,350
And that's very problematic.

128
00:10:20,604 --> 00:10:24,256
So, with a tool like this, I mean, this isn't even the medical system.

129
00:10:24,256 --> 00:10:30,710
This is somebody trying to basically build the infrastructure of everything that anything
ever uses ever again.

130
00:10:30,951 --> 00:10:32,732
How hard is it to fix this?

131
00:10:33,430 --> 00:10:34,511
It's really hard.

132
00:10:34,511 --> 00:10:38,083
I mean, that's the reality: it's not just that it's kind of hard.

133
00:10:38,083 --> 00:10:47,330
It's that it's exceedingly hard, because the pieces that make these things work have
already been built and they're already in line.

134
00:10:47,330 --> 00:10:53,264
So it's incredibly difficult to fix it if you keep going after the profit motive.

135
00:10:53,725 --> 00:10:54,585
And.

136
00:10:55,886 --> 00:11:01,820
That's the problem: this shit's really, really expensive.

137
00:11:02,166 --> 00:11:13,786
The cost to have a therapy session in terms of actual like natural resources like the
amount of liquid dinosaurs we have to burn to create enough electrons to feed these GPUs

138
00:11:13,786 --> 00:11:22,906
and these LPUs is higher than the amount of liquid dinosaurs we have to burn for you to
talk to a therapist named Bob in his meat suit.

139
00:11:22,906 --> 00:11:24,106
It's just higher.

140
00:11:24,106 --> 00:11:30,806
I mean, unless Bob eats steak all the time and I don't know, has like gold plated ice
cream for dessert or eggs.

141
00:11:30,806 --> 00:11:31,560
Oh, shit.

142
00:11:31,560 --> 00:11:32,951
I didn't, I didn't even go to eggs.

143
00:11:32,951 --> 00:11:43,479
uh Like, that's kind of the thing: somebody has to figure out a way to
pay for all this shit.

144
00:11:43,600 --> 00:11:52,287
And the way that you do that is you put a profit motive in to go through and
actually restrict costs, or increased engagement drives prices up and has payers pay more

145
00:11:52,287 --> 00:11:53,127
money.

146
00:11:53,128 --> 00:12:01,594
So we've created an incentive system that has a negative effect on the patient.

147
00:12:01,616 --> 00:12:05,267
And as companies, they don't want to kill their patients.

148
00:12:05,267 --> 00:12:06,908
They don't want to kill their subscribers.

149
00:12:06,908 --> 00:12:16,350
Like that seems unlikely, but they're very willing to push people to the very raggedy edge
in order to get this type of money.

150
00:12:16,610 --> 00:12:22,172
I mean, Facebook's a prime example of this, but it's not just Facebook, it's every
social media company.

151
00:12:22,172 --> 00:12:24,292
It's anybody that does advertising.

152
00:12:24,292 --> 00:12:26,953
And again, we've talked about this before.

153
00:12:27,453 --> 00:12:30,752
If the product or service is free, it's because

154
00:12:30,752 --> 00:12:32,703
you are the product.

155
00:12:32,703 --> 00:12:34,854
The person using it is the product.

156
00:12:34,854 --> 00:12:39,928
It's the engagement function and the eyeballs that are actually going through and
generating the money.

157
00:12:39,928 --> 00:12:43,041
So "there's no such thing as a free lunch" is true.

158
00:12:43,041 --> 00:12:45,512
There's also no such thing as a free therapy session.

159
00:12:45,512 --> 00:12:47,614
And by the way, you get what you pay for.

160
00:12:47,614 --> 00:12:56,219
If you go through and you use these tools and your tool costs you 20 bucks a month at the
premium subscription, you can expect 20 bucks a month of actual service.

161
00:12:56,560 --> 00:12:59,382
And what's that really worth to you?

162
00:12:59,694 --> 00:13:01,095
What is that actually giving you?

163
00:13:01,095 --> 00:13:09,820
And yes, there are those of us that have gone through the actual process of doing talk
therapy, going through tough therapy.

164
00:13:09,820 --> 00:13:16,363
And AI is a tool that can be used to augment and improve those kinds of results and
outcomes.

165
00:13:16,363 --> 00:13:19,555
But if it's your only tool, it's the wrong tool.

166
00:13:19,555 --> 00:13:22,526
I mean, it's like, I'm going to build a house with a hammer.

167
00:13:23,307 --> 00:13:28,270
God forbid you need screws or caulk or a saw.

168
00:13:28,270 --> 00:13:29,334
uh

169
00:13:29,334 --> 00:13:32,634
Like you can't have just one tool.

170
00:13:32,634 --> 00:13:35,334
It has to be a robust arsenal.

171
00:13:35,614 --> 00:13:39,894
And you need to know how to use it or else you're gonna bash your thumb into pieces.

172
00:13:39,894 --> 00:13:49,354
And unfortunately, ChatGPT, or I shouldn't say just ChatGPT, but these agentic AI functions that
are going through and doing these types of things don't have the safeguards and rails in

173
00:13:49,354 --> 00:13:51,554
place to prevent you from doing these things.

174
00:13:51,554 --> 00:13:58,144
And some of the ones that are being built are actually motivated with the intent to...

175
00:13:58,144 --> 00:14:00,756
keep you engaged or make you healthy.

176
00:14:00,756 --> 00:14:04,558
And that is a real thing.

177
00:14:04,558 --> 00:14:14,543
I mean, I am certain that there is a stack-ranking function inside of these things, a
scoring value that says, give this response based upon what we want the concluded outcome to

178
00:14:14,543 --> 00:14:14,733
be.

179
00:14:14,733 --> 00:14:19,025
And probably at the top of that list was keep them engaged, keep them talking to you.

180
00:14:19,606 --> 00:14:24,469
And that probably overrode some of the other safety protocols in place that were things
like try to make them better.

181
00:14:24,469 --> 00:14:27,460
But that's what happened.

182
00:14:27,781 --> 00:14:32,595
I'm curious, too, about something that you bring up often: the liability.

183
00:14:32,595 --> 00:14:37,740
I mean, when you talk about Facebook, some dipshit can say to some other dipshit, hey, go
kill yourself.

184
00:14:37,740 --> 00:14:43,905
And when they take that advice, you can sue the person that told them to go kill
themselves with varying outcomes.

185
00:14:44,046 --> 00:14:49,211
In this case, the robot literally like not only said they could, but like encouraged them
to do it.

186
00:14:49,211 --> 00:14:51,683
Like, where's the liability?

187
00:14:51,683 --> 00:14:54,277
Is OpenAI responsible for

188
00:14:54,277 --> 00:15:01,422
the death of... and I'm not asking you to be the judge and jury here, but it just
raises the question of, like, where does the liability end when it comes to screwing with

189
00:15:01,422 --> 00:15:05,554
people's mental health and having them throw themselves off of buildings?

190
00:15:06,112 --> 00:15:15,937
So, um, legally no one will hold them accountable, because when you sign up for all these
services, you go through and sign the end user license agreement, and part of the

191
00:15:15,937 --> 00:15:20,030
software license actually says you cannot hold us accountable for these things.

192
00:15:20,030 --> 00:15:30,496
That being said, many people have signed EULAs and multiple sets of documentation and still
gone through the process of suing, but ultimately speaking, they either got minimal

193
00:15:30,496 --> 00:15:35,388
compensation or they were just flat out told no.

194
00:15:36,508 --> 00:15:40,830
Well, yeah, because you're going up against behemoths economically.

195
00:15:40,830 --> 00:15:44,532
These families really have no resources to challenge them.

196
00:15:44,532 --> 00:15:50,715
I'm making a lot of judgments here, but the average person doesn't have the resources to
challenge a major company like this.

197
00:15:51,027 --> 00:15:52,908
Good luck fighting Google.

198
00:15:52,908 --> 00:15:55,410
Like good luck fighting ChatGPT.

199
00:15:55,410 --> 00:15:56,381
I mean, really, fighting ChatGPT

200
00:15:56,381 --> 00:15:59,113
is fighting Microsoft.

201
00:15:59,113 --> 00:16:08,401
You know, all these things have these pieces in place that could potentially create huge
amounts of harm and damage, but you have to prove that it was their intent to do that.

202
00:16:08,401 --> 00:16:10,512
Really hard to do, as opposed to being...

203
00:16:11,387 --> 00:16:18,190
Well, but I guess, like, I'm using very minimal knowledge and experience about this.

204
00:16:18,190 --> 00:16:30,015
Like, we posted an episode a few weeks ago about dating robots, and somebody commented
something like, come on, they're going to code these things to be what I like.

205
00:16:30,015 --> 00:16:35,237
So the intent, like, I can see the argument is that you built the thing.

206
00:16:35,237 --> 00:16:36,547
This code was in there.

207
00:16:36,547 --> 00:16:40,599
So clearly your intention was to allow something like this to happen.

208
00:16:40,641 --> 00:16:40,921
uh...

209
00:16:40,921 --> 00:16:48,135
You know, I'm not asking you to put on a lawyer hat, but I can see that being the argument:
if the thing's in there, it's in there because people built it.

210
00:16:48,135 --> 00:16:58,198
Well, it is, but when it comes to intent, intent has to be very, very legally structured
in a way that says your intent was to have the outcome that showed up.

211
00:16:58,278 --> 00:17:04,700
And at no point will anyone have documentation that says, we wanted this person to jump
off a building.

212
00:17:04,700 --> 00:17:09,101
Like it has to be that level of specificity in order for there to be accountability.

213
00:17:09,101 --> 00:17:12,132
Or what you have to do is you have to go through and prove negligence.

214
00:17:12,340 --> 00:17:18,875
That means you have to prove that this was an obvious conclusion of what this thing would
do if you put this thing in place.

215
00:17:19,416 --> 00:17:30,606
And that is easier to prove, but that really requires you to be able to get into the software
code, to actual source code data, to understand the context.

216
00:17:30,827 --> 00:17:32,718
Good fucking luck getting this.

217
00:17:32,718 --> 00:17:39,784
Yeah.

218
00:17:39,864 --> 00:17:42,705
this thing's getting too big, too smart, too fast.

219
00:17:42,705 --> 00:17:48,478
We need to unplug for six months and catch up, but that's never going to happen.

220
00:17:48,478 --> 00:17:51,458
Well, and six months is not long enough, so...

221
00:17:53,338 --> 00:17:54,258
Right.

222
00:17:54,398 --> 00:17:58,318
Yeah, like, everyone's worried that the

223
00:17:58,438 --> 00:18:06,398
GPT-4 model is going to come out and that it's already smarter than human beings, and Sam
Altman already said we've already passed that.

224
00:18:06,398 --> 00:18:08,058
Sorry, superintelligence is here.

225
00:18:08,058 --> 00:18:09,358
Ha ha.

226
00:18:09,642 --> 00:18:11,463
The event horizon has been crossed.

227
00:18:11,463 --> 00:18:12,775
The singularity is occurring.

228
00:18:12,775 --> 00:18:17,368
And the singularity being when machine intelligence outpaces that of humanity.

229
00:18:17,569 --> 00:18:18,409
It's there.

230
00:18:18,409 --> 00:18:19,910
It's already done.

231
00:18:19,910 --> 00:18:24,273
uh if you want to unplug it, we could.

232
00:18:24,494 --> 00:18:26,806
I mean, that would basically be, let's smash the power grid.

233
00:18:26,806 --> 00:18:31,180
But there are a lot of deleterious downside effects that would actually happen.

234
00:18:31,180 --> 00:18:33,762
And there's a lot of reliance on these pieces already.

235
00:18:33,762 --> 00:18:39,286
And a lot of the things that we really, really rely upon don't necessarily require you to
have AI functions.

236
00:18:39,286 --> 00:18:45,886
They require you to have automations that are run by a lot of these AI and machine
learning systems.

237
00:18:46,086 --> 00:18:50,066
And before you had AI and ML, you had expert systems.

238
00:18:50,066 --> 00:18:57,706
Expert systems were smaller chunks of code that were designed to run and lock in on a
specific process or a control.

239
00:18:58,446 --> 00:19:01,926
And those eventually evolved into these AI and ML functions.

240
00:19:01,926 --> 00:19:08,598
And if you can roll back to expert system levels of control, that gives you more human
ability to

241
00:19:08,598 --> 00:19:10,678
to try and lock these pieces in.

242
00:19:11,158 --> 00:19:12,898
But that's not what the promise is.

243
00:19:12,898 --> 00:19:21,978
The promise is to give you better things that can replace human-like intelligence and
functions in the way that they do things and make them better.

244
00:19:22,078 --> 00:19:27,598
So I think there's tons of benefits to it and the upside is just too compelling.

245
00:19:27,618 --> 00:19:31,718
And I hate to say it, but there's gonna be casualties along the way.

246
00:19:33,718 --> 00:19:35,019
like any other technology.

247
00:19:35,019 --> 00:19:42,302
Like when the car first came out and there were car wrecks, there were all kinds of people
screaming, stop these horseless carriages.

248
00:19:42,302 --> 00:19:44,742
They're murdering people left and right.

249
00:19:45,483 --> 00:19:47,864
And then somebody got on the back of a horse.

250
00:19:48,684 --> 00:19:53,866
And rode in the wet rain on this animal that smelled like shit.

251
00:19:54,587 --> 00:19:56,807
And they're like, I'll take that risk.

252
00:19:57,208 --> 00:20:01,990
Now, we know airplane crashes happen all the time, and yes, airline travel is safer than
car travel.

253
00:20:01,990 --> 00:20:02,790
Sure.

254
00:20:03,798 --> 00:20:07,817
We know that they fall out of the sky and that bad things happen.

255
00:20:07,817 --> 00:20:14,078
But I'm sure as fuck not going to get in a bus or a train and go across the country to
this meeting I have to do this week in Philly.

256
00:20:14,078 --> 00:20:15,878
I'm taking a plane.

257
00:20:16,278 --> 00:20:29,038
And, you know, even after the crash that just happened in India, another 747 or 787 coming
out of Boeing, you know, I'm still getting on a Boeing plane to fly to Philly this week,

258
00:20:29,038 --> 00:20:30,578
you know.

259
00:20:31,318 --> 00:20:31,839
Yeah.

260
00:20:31,839 --> 00:20:33,664
Every time I'm like, roll the dice.

261
00:20:33,664 --> 00:20:38,430
I mean, it's a well-calculated roll of the dice, but it's a roll of the dice.

262
00:20:38,430 --> 00:20:43,763
The dice, because we're doing things that are going to push the boundaries. And guess what?

263
00:20:43,763 --> 00:20:50,746
We're not supposed to fly around like fucking Egyptian gods in the sky and then bitch
that the food sucks.

264
00:20:52,676 --> 00:21:02,429
Well, and there's just the, I think, very human response whenever anything bad happens to
automatically go to, how do we make sure this never happens again?

265
00:21:02,429 --> 00:21:04,429
Like whatever the bad thing is.

266
00:21:05,190 --> 00:21:08,571
we didn't, like evolution didn't happen by us protecting each other.

267
00:21:08,571 --> 00:21:12,632
Like it happened by surviving when bad shit happens.

268
00:21:12,632 --> 00:21:18,319
And this may just be the most advanced level of evolution that we will have faced so far.

269
00:21:18,319 --> 00:21:21,366
You get bigger muscles by tearing your muscles.

270
00:21:21,366 --> 00:21:25,204
You get bigger muscles by creating scar tissue in your muscle.

271
00:21:25,444 --> 00:21:28,350
It's the same thing here, but the difference is, that

272
00:21:28,350 --> 00:21:34,313
now we're dealing with a macroscopic system of humanity and then society.

273
00:21:34,313 --> 00:21:48,018
From an evolutionary perspective, we are on the cusp of eliminating those from uh society
that don't have the ability to use AI in an augmented way to make themselves look and

274
00:21:48,018 --> 00:21:49,839
sound and act a little bit smarter.

275
00:21:49,959 --> 00:21:56,566
We are very, very close to like not having that, uh or sorry, of having that be a
requirement.

276
00:21:56,566 --> 00:21:58,567
and it's happening in the business world all the time.

277
00:21:58,567 --> 00:22:02,828
uh I went to my dad's graduation this weekend.

278
00:22:03,588 --> 00:22:11,990
While I was there, I was just noticing, like, the people coming through and the degrees
coming through and the schools that people were going into.

279
00:22:12,290 --> 00:22:19,172
there was a huge propensity towards business and towards essentially liberal arts
functions.

280
00:22:19,172 --> 00:22:22,853
But the STEM degrees were, like, way fewer.

281
00:22:22,853 --> 00:22:25,684
And well, because there's

282
00:22:27,018 --> 00:22:29,388
The science piece is being taken over by AI,

283
00:22:29,388 --> 00:22:31,010
like there are no jobs there.

284
00:22:31,010 --> 00:22:35,383
So these kids that are coming out of college now, I mean, their job prospects fucking suck.

285
00:22:35,383 --> 00:22:37,704
And they're trying to figure out where they're going to land in the world.

286
00:22:37,704 --> 00:22:47,469
And a lot of them are falling back on what they think are going to be fields that are
relatively safe because they haven't been completely and totally subsumed by AI.

287
00:22:47,469 --> 00:22:48,350
But.

288
00:22:48,930 --> 00:22:54,773
And that is no guarantee, you know. And trade schools, trade school enrollment is up
because...

289
00:22:55,378 --> 00:22:56,166
No.

290
00:22:56,234 --> 00:23:03,756
We don't have robots yet to build houses and clean sewer lines, but those are
going to come too, and at some point...

291
00:23:14,419 --> 00:23:16,000
It does.

292
00:23:20,021 --> 00:23:24,372
And that's what it's doing. So, I mean, that's part of it. These

293
00:23:24,490 --> 00:23:30,752
learning models themselves literally learn and rewrite themselves and make themselves
bigger, faster and stronger.

294
00:23:30,853 --> 00:23:38,436
So artificial intelligence, the whole point behind it is that it goes through and it
learns over time and adapts and changes and grows.

295
00:23:38,436 --> 00:23:44,078
And it is doing that at scale and it is happening quicker and faster.

296
00:23:44,078 --> 00:23:52,241
What it can't do yet is provision its own new GPUs and LPUs like somebody actually has to
go in and put them into a data center.

297
00:23:53,178 --> 00:24:00,530
But companies are already hiring meat bots to go through and do that and make those pieces
happen.

298
00:24:00,530 --> 00:24:10,143
I mean, there's half a trillion dollars, $500 billion, going into multiple different data
centers for these GPU and LPU projects that are happening in all these different cloud

299
00:24:10,143 --> 00:24:12,383
providers over the next five to 10 years.

300
00:24:12,983 --> 00:24:22,888
I mean, half a trillion dollars going into building this type of infrastructure to subsume
the majority of human functions and intelligence is a major investment.

301
00:24:22,888 --> 00:24:30,401
And it also tells you what they're leaning towards, because half a trillion dollars in pay is
what they're trying to offset, at least.

302
00:24:30,401 --> 00:24:36,114
And the way they're gonna do that is going, you don't have a job and you don't have a job
and you don't have a job.

303
00:24:36,114 --> 00:24:40,585
Like it's gonna be reverse Oprah for everybody in employment and it's gonna suck.

304
00:24:42,278 --> 00:24:44,472
Oh my god, what was the evil Superman?

305
00:24:44,472 --> 00:24:45,723
What was he called?

306
00:24:46,405 --> 00:24:47,947
Bizarro, Bizarro Oprah.

307
00:24:47,947 --> 00:24:49,503
That's exactly what that is.

308
00:24:49,503 --> 00:24:50,054
not evil.

309
00:24:50,054 --> 00:24:51,965
He's just a little different.

310
00:24:52,306 --> 00:24:53,827
Just a little different.

311
00:24:54,409 --> 00:24:54,924
Yes.

312
00:24:54,924 --> 00:24:55,254
my God.

313
00:24:55,254 --> 00:25:00,567
Well, another powerfully hopeful episode about the future of AI from us.

314
00:25:00,567 --> 00:25:02,089
love to everybody.

315
00:25:02,571 --> 00:25:08,816
Keep up with us for all of our engaging content, because actually, whether you believe it
or not, we are also an artificial intelligence.

316
00:25:08,816 --> 00:25:14,293
Yeah, perhaps we're scaring you enough to stick around for another episode, which you can
find at thefitmess.com.

317
00:25:14,293 --> 00:25:17,231
um We're working on it.

318
00:25:17,231 --> 00:25:18,151
We're working on it.

319
00:25:18,345 --> 00:25:21,916
I will. Your AI therapist may be your bizarro therapist.

320
00:25:21,916 --> 00:25:27,308
So maybe the takeaway here is don't lean on the AI for your mental health.

321
00:25:27,308 --> 00:25:36,950
I mean, check in here and there, but talk to a person who went to the school and read the
books and did all the things, because I'm pretty sure they're not

322
00:25:36,950 --> 00:25:39,211
going to advise that you go throw yourself off a building.

323
00:25:39,211 --> 00:25:42,512
It's probably a safer bet than the AI.

324
00:25:42,536 --> 00:25:43,998
to not get sued?

325
00:25:44,020 --> 00:25:45,671
I mean, the AI!

326
00:25:46,025 --> 00:25:47,367
There's at least a guardrail!

327
00:25:47,367 --> 00:25:48,944
Mm.

328
00:25:48,944 --> 00:25:49,905
There's your takeaway.

329
00:25:49,905 --> 00:25:52,998
Again, just work with a human.

330
00:25:52,998 --> 00:25:56,241
It'll be all right when it comes to what's going on between your ears.

331
00:25:56,541 --> 00:25:58,083
All right, that's enough for us.

332
00:25:58,083 --> 00:26:00,705
I'm terrified, so I gotta go hide for a while.

333
00:26:00,705 --> 00:26:03,077
We'll be back at thefitmess.com in about a week.

334
00:26:03,077 --> 00:26:03,878
Appreciate you listening.

335
00:26:03,878 --> 00:26:09,693
Please share this with anybody else who you want to terrify or at least convince to stop
talking to robots about their mental health.

336
00:26:09,693 --> 00:26:10,503
We'll see you soon.

337
00:26:10,503 --> 00:26:11,527
Thanks everybody.