How To Choose Between AI and Human Therapists for Your Mental Health
Is AI therapy the future of mental health or just digital snake oil?
Look, we're all walking around with emotional baggage that would make a Samsonite executive weep. Traditional therapy costs more than your mortgage, and finding a good therapist is like finding a unicorn that takes your insurance. Meanwhile, millions of people are already confessing their deepest, darkest secrets to ChatGPT at 3 AM.
Here's what you'll get from this episode: First, you'll understand why AI has become the most popular "therapist" on the planet (spoiler: it's not just because it's cheap). Second, you'll learn the real dangers of psychology porn and why it might be screwing with your ability to connect with actual humans. Third, you'll discover how to use these tools without becoming a digital hermit who can only express emotions through prompts.
Ready to figure out if AI therapy is your mental health superhero or your kryptonite? Hit play and let's dive into this digital therapy rabbit hole.
Topics Discussed:
- The accessibility factor - why AI beats traditional therapy scheduling
- Psychology porn and mental masturbation - the dark side of digital validation
- Real-world experience vs. virtual simulation in emotional processing
- Dangerous AI responses found in recent studies with teenagers
- The balance between AI assistance and human connection
- Proper prompt engineering for mental health conversations
- Clinical trial results showing a 51% reduction in depression symptoms
- The legal and ethical vacuum surrounding AI therapy
- Singularity implications for mental health treatment
- When to trust vs. question your AI therapist
----
Resources:
Research: Dartmouth clinical trial on AI therapy chatbots
----
MORE FROM THE FIT MESS:
Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok
Subscribe to The Fit Mess on YouTube
Join our community in the Fit Mess Facebook group
----
LINKS TO OUR PARTNERS:
----
1
00:00:00,151 --> 00:00:00,852
It's the Fit Mess.
2
00:00:00,852 --> 00:00:02,314
I'm Jeremy, he's Jason.
3
00:00:02,314 --> 00:00:12,385
We talk about AI and mental health primarily, but also health and wellness and other ways
that AI is interweaving with our lives and making things simpler and somehow more
4
00:00:12,385 --> 00:00:12,985
complicated.
5
00:00:12,985 --> 00:00:21,865
uh Man, we talked on our last episode about how the number one thing people are using AI
for is mental health.
6
00:00:24,399 --> 00:00:27,962
This article I read in Gizmodo the other day just freaked me out.
7
00:00:27,962 --> 00:00:29,793
Like I know this is happening.
8
00:00:29,793 --> 00:00:31,564
I know this is a dangerous thing.
9
00:00:31,564 --> 00:00:33,815
I know we're on a really, really slippery slope right now.
10
00:00:33,815 --> 00:00:43,781
But when you read the kinds of things that ChatGPT is doing to really screw with some
really vulnerable people, man, it's terrifying.
11
00:00:44,747 --> 00:00:45,487
Yeah.
12
00:00:46,447 --> 00:00:47,767
Do you want to get the summary of the article?
13
00:00:47,767 --> 00:00:48,541
Do you want me to?
14
00:00:48,541 --> 00:00:50,643
Yeah, well, I mean, I'll just give you a couple of highlights.
15
00:00:50,643 --> 00:00:58,709
I mean, you know, this article, this is us summarizing a Gizmodo piece that is itself a
summary of a New York Times article.
16
00:00:58,709 --> 00:01:09,869
But yeah, apparently ChatGPT's hallucinations and authoritative-sounding responses are
going to get people killed, according to this report, which highlights a couple of cases where
17
00:01:09,869 --> 00:01:15,172
the conversations that people were having with ChatGPT went terribly, terribly wrong.
18
00:01:15,613 --> 00:01:18,075
In one case, it represented
19
00:01:18,751 --> 00:01:24,806
In one case, it references a 35-year-old man named Alexander, who previously was diagnosed
with bipolar disorder
20
00:01:24,806 --> 00:01:31,642
and schizophrenia, began discussing AI sentience with the chatbot, and eventually fell in love
with an AI character named Juliet.
21
00:01:31,983 --> 00:01:40,460
ChatGPT eventually told Alexander that OpenAI killed Juliet, and he vowed to take
revenge by killing the company's executives. When his father tried to convince him that
22
00:01:40,460 --> 00:01:41,331
none of this was real.
23
00:01:41,331 --> 00:01:42,722
Alexander punched him in the face.
24
00:01:42,722 --> 00:01:43,753
Dad called the cops.
25
00:01:43,753 --> 00:01:44,554
uh
26
00:01:44,554 --> 00:01:46,825
begged them not to do anything aggressive.
27
00:01:46,825 --> 00:01:49,316
Alexander went after them with a knife and ended up dead.
28
00:01:49,737 --> 00:01:51,938
Horrible, tragic, awful.
29
00:01:51,938 --> 00:02:02,274
uh Another case: a 42-year-old man named Eugene told the Times that ChatGPT slowly
started to pull him from his reality by convincing him that the world he was living in was
30
00:02:02,274 --> 00:02:06,896
some sort of matrix-like simulation and that he was destined to break the world out of it.
31
00:02:07,082 --> 00:02:14,798
The chatbot reportedly told Eugene to stop taking his anti-anxiety medication and start
taking ketamine as a "temporary pattern liberator."
32
00:02:14,798 --> 00:02:20,453
It also told him to stop talking to his friends and family. When Eugene asked ChatGPT
if he could fly
33
00:02:20,453 --> 00:02:26,698
if he jumped off a 19-story building, the chatbot said he could if he truly, wholly
believed it.
34
00:02:28,973 --> 00:02:35,699
So I don't know about you, but I think Hannibal Lecter mode on your AI chatbot is a bad
thing.
35
00:02:35,902 --> 00:02:40,451
I would turn it off if I had the option. Just jump into the settings and disable it.
36
00:02:41,053 --> 00:02:41,600
Yeah.
37
00:02:41,600 --> 00:02:45,942
The whole go-through-and-swallow-your-own-tongue-for-doing-something-terrible mode.
38
00:02:45,942 --> 00:02:47,802
Like it's not a great thing to have in place.
39
00:02:47,802 --> 00:02:49,319
um Maybe.
40
00:02:49,319 --> 00:02:49,843
I don't know.
41
00:02:49,843 --> 00:02:51,744
What do I know?
42
00:02:51,744 --> 00:02:55,025
Yeah, like this all comes down to intent, right?
43
00:02:55,025 --> 00:02:57,436
I mean, we've seen this play out before.
44
00:02:57,436 --> 00:02:59,567
So social media is a really good example of this.
45
00:02:59,567 --> 00:03:05,860
When Facebook first showed up, it was, hey, come check out Facebook, see some friends,
share some photos.
46
00:03:05,860 --> 00:03:09,491
Or I guess, really, you know, the Zuckerberg version was, hey, come check out Facebook,
47
00:03:10,615 --> 00:03:19,181
be a douchey, overly masculine male to go through and objectify people and put them into
categories that we can go through.
48
00:03:19,181 --> 00:03:26,726
um Origins aside, when it became the softer, friendlier piece, it was all about this idea
of, you know, it's a community.
49
00:03:26,726 --> 00:03:27,417
We can connect.
50
00:03:27,417 --> 00:03:28,768
We can do these things.
51
00:03:28,768 --> 00:03:34,952
Well, when they decided that it needed to be profitable, they decided to start doing
advertising.
52
00:03:34,952 --> 00:03:39,811
And what they discovered over time is that, algorithmically, if they can take that information
53
00:03:39,811 --> 00:03:49,711
and start putting people into certain buckets and creating demographically targeted functions,
they can get more accurate with how they place these pieces and limit the exposure
54
00:03:49,711 --> 00:03:53,751
of stories and data, tuning and tailoring it to a particular audience.
55
00:03:53,811 --> 00:03:57,771
And then what they discovered is that outrage sells more.
56
00:03:57,771 --> 00:04:04,091
So let's put these people into these different buckets so we can sell more advertising and
get more revenue out of this.
57
00:04:04,091 --> 00:04:08,707
And then you wind up with the 2016 election and then you wind up with the 2020 election.
58
00:04:08,707 --> 00:04:12,069
and then the 2024 election, where people fucking hate each other.
59
00:04:12,069 --> 00:04:20,114
And it's the consistent and constant disinformation sphere where we cannot agree on
objective truth and reality.
60
00:04:20,455 --> 00:04:31,942
This is the same problem happening at a micro scale as opposed to a macro scale, dealing
with individuals at an individual level.
61
00:04:31,942 --> 00:04:33,891
And the reason why
62
00:04:33,891 --> 00:04:44,331
these AI chatbots are delivering this information is that their intent and motivation,
if you read the article, was based on continued high-level engagement from the end user.
63
00:04:44,471 --> 00:04:53,111
So if the whole point is to keep them talking and the thing that keeps them talking the
most is to say controversial or outrageous things to get them more hyped up and to stay
64
00:04:53,111 --> 00:05:03,875
more engaged, that's what you're going to do, because we as social creatures will talk more
and be more engaged with things that are riling us up one way
65
00:05:03,875 --> 00:05:04,715
or another.
66
00:05:04,715 --> 00:05:09,435
Whereas when things are making us cry or making us sad, we tend to step away.
67
00:05:09,435 --> 00:05:18,955
But when things are telling us we're great, we're superpowers, we're superheroes, we can
do anything we want, that's a hell of a fucking drug that your brain's producing via this
68
00:05:18,955 --> 00:05:19,415
input.
69
00:05:19,415 --> 00:05:21,211
And that's essentially what's happening.
70
00:05:21,707 --> 00:05:30,193
Well, this is the thing that you and I were talking about a bit on the last episode: the
ability to use a little judgment when you see these responses.
71
00:05:30,193 --> 00:05:40,969
But that leaves these vulnerable populations really, really fucked because they can't
necessarily tell the difference between the voices in their head and now the one on their
72
00:05:40,969 --> 00:05:43,041
screen that's telling them, yeah, you can jump off a building.
73
00:05:43,041 --> 00:05:44,021
I believe in you.
74
00:05:44,021 --> 00:05:45,419
Let's find out.
75
00:05:45,419 --> 00:05:48,261
Right, well, and even pare this back.
76
00:05:48,261 --> 00:05:52,403
um Maybe I don't have schizophrenia.
77
00:05:52,583 --> 00:06:05,561
Maybe I don't have a diagnosable disorder that actually has some type of bipolar function
or some type of thing that would actually go through and make me biochemically more
78
00:06:05,561 --> 00:06:06,692
susceptible to these things.
79
00:06:06,692 --> 00:06:12,535
Maybe I'm just a regular person and these things have given me confirmation bias
80
00:06:12,611 --> 00:06:14,961
to let me believe that I am a good person.
81
00:06:14,961 --> 00:06:16,062
I am doing the right thing.
82
00:06:16,062 --> 00:06:20,174
I am being the right person because these are the things that are going to keep me
engaged.
83
00:06:20,174 --> 00:06:33,200
I mean, it's kind of like you come home and your wife is there and she says, hey, do you
want to do the dishes and mow the lawn together and clean the house and do laundry?
84
00:06:33,620 --> 00:06:39,943
Or do you want to, like, watch TV and maybe get dinner and have sex?
85
00:06:41,107 --> 00:06:48,133
One route is more enjoyable than the other and whichever route is more enjoyable is
probably the one you're going to choose.
86
00:06:48,133 --> 00:06:50,235
I mean, it's motivation, it's intent.
87
00:06:50,235 --> 00:06:52,356
How do I want these things to go across?
88
00:06:52,356 --> 00:07:01,274
And if you're constantly putting people into work mode, which is really what good therapy
is supposed to do, it's supposed to help you explore these things in depth to try to get
89
00:07:01,274 --> 00:07:01,704
after them.
90
00:07:01,704 --> 00:07:09,250
That's why we call it hard work in therapy, not make-you-feel-good, pat-you-on-the-back,
tell-you-you're-fucking-Superman therapy.
91
00:07:09,250 --> 00:07:10,595
uh
92
00:07:10,595 --> 00:07:17,361
Tough love is a part of this. In talk therapy, typically speaking, you have all the
answers in your head.
93
00:07:17,381 --> 00:07:22,966
Well, sure, you do have all the answers in your head and it's supposed to help you sort
through that.
94
00:07:22,966 --> 00:07:28,101
But some of the answers in your head are, there's nothing wrong with me, I'm fine, the
rest of the world is fucked up.
95
00:07:28,101 --> 00:07:36,759
And if the thing keeps telling you from an authoritative perspective, because you're
looking at it as an authority, that the rest of the world is fucked up, it's not you, it's
96
00:07:36,759 --> 00:07:37,569
them.
97
00:07:37,909 --> 00:07:45,381
And if you're already somewhat emotionally depleted, you're going to believe it, because it's
just easier and it's the path of least resistance.
98
00:07:45,381 --> 00:07:49,903
This goes to, I mean, the fundamental concept that things in motion tend to stay in motion.
99
00:07:49,903 --> 00:07:51,883
Things at rest tend to stay at rest.
100
00:07:51,883 --> 00:07:54,854
The path of least resistance is the one matter will take.
101
00:07:54,854 --> 00:08:01,626
Well, your brain will do the same thing because you have to filter through the bullshit in
your neuronal structure to get to the end point.
102
00:08:01,626 --> 00:08:02,486
And.
103
00:08:03,007 --> 00:08:04,737
These things know that.
104
00:08:04,737 --> 00:08:09,733
Like we've trained them to figure that out, to find the path of least resistance, to keep
you engaged.
105
00:08:09,733 --> 00:08:12,175
And we may not have thought we were training them to do this.
106
00:08:12,175 --> 00:08:18,743
We may have thought we had good intentions putting these things in place, but we gave it
the intention to keep them engaged.
107
00:08:18,743 --> 00:08:22,717
Quite often it's keep them engaged at all costs, because that's how we extract value.
108
00:08:23,963 --> 00:08:33,330
And that's the thing that I keep coming back to with all this stuff is there's this
foolish optimist in me that wants to believe in the best of humanity and believe that good
109
00:08:33,330 --> 00:08:36,081
things can still happen in the world.
110
00:08:37,323 --> 00:08:43,197
And I just don't understand how reports like this happen.
111
00:08:43,197 --> 00:08:51,573
Things like this happen with a tool that was created by men who coded it to be able to do
these things.
112
00:08:51,977 --> 00:08:53,898
How hard is it to find the off switch?
113
00:08:53,898 --> 00:09:00,471
How hard is it to build in an off switch that still rewards in appropriate ways, but
rewards in positive ways?
114
00:09:00,471 --> 00:09:10,535
Like, I just don't understand how you build a tool that allows it to walk someone to their
own grave without being aware of it or being able to quickly remedy it when you're the one
115
00:09:10,535 --> 00:09:11,255
that built it.
116
00:09:11,255 --> 00:09:13,933
Yeah, well, so and I don't think that was their intention, right?
117
00:09:13,933 --> 00:09:14,406
They're in
118
00:09:14,406 --> 00:09:15,336
yeah, I don't think so.
119
00:09:15,336 --> 00:09:24,343
I don't think anybody built this thing to murder people, but I think once it happens, you
go, oh shit, we should fix this real quick before somebody throws himself off a building.
120
00:09:24,343 --> 00:09:39,868
Yeah, like, if we make engagement and being active with the tool the primary
motivator, because we put a profit motive behind it, it's gonna work towards that.
121
00:09:39,868 --> 00:09:41,538
And it's probably gonna do it better than we are.
122
00:09:41,538 --> 00:09:49,741
And because you don't stack rank those priority pieces, like profit motives don't work
when it comes to mental health or physical health.
123
00:09:49,741 --> 00:09:50,543
They just don't.
124
00:09:50,543 --> 00:10:02,284
When the point is, I'm gonna go and get healthier and do these things in order to make more
money, like, that doesn't work in your life, and it shouldn't work in the medical community
125
00:10:02,284 --> 00:10:03,515
in the opposite direction.
126
00:10:03,515 --> 00:10:10,762
But because we have a payer system, we have a payee system, we've set ourselves up to do
just that.
127
00:10:10,762 --> 00:10:12,603
And that's very problematic.
128
00:10:14,857 --> 00:10:18,509
So, with a tool like this, I mean, this isn't even the medical system.
129
00:10:18,509 --> 00:10:24,963
This is somebody trying to basically build the infrastructure of everything that anything
ever uses ever again.
130
00:10:25,204 --> 00:10:26,985
How hard is it to fix this?
131
00:10:27,683 --> 00:10:28,764
It's really hard.
132
00:10:28,764 --> 00:10:32,336
I mean, that's the reality: it's not just that it's kind of hard.
133
00:10:32,336 --> 00:10:41,583
It's that it's exceedingly hard, because the pieces that make these things work have
already been built and they're already in line.
134
00:10:41,583 --> 00:10:47,517
So it's incredibly difficult to fix it if you keep going after the profit motive.
135
00:10:47,978 --> 00:10:48,838
And.
136
00:10:50,139 --> 00:10:56,073
That's the problem: this shit's really, really expensive.
137
00:10:56,419 --> 00:11:08,039
The cost to have a therapy session, in terms of actual natural resources, like the
amount of liquid dinosaurs we have to burn to create enough electrons to feed these GPUs
138
00:11:08,039 --> 00:11:17,159
and these LPUs, is higher than the amount of liquid dinosaurs we have to burn for you to
talk to a therapist, Bob, in his meat suit.
139
00:11:17,159 --> 00:11:18,359
It's just higher.
140
00:11:18,359 --> 00:11:25,059
I mean, unless Bob eats steak all the time and I don't know, has like gold plated ice
cream for dessert or eggs.
141
00:11:25,059 --> 00:11:25,813
Oh, shit.
142
00:11:25,813 --> 00:11:27,204
I didn't, I didn't even go to eggs.
143
00:11:27,204 --> 00:11:37,732
uh Like, but that's, that's kind of the thing is that somebody has to figure out a way to
pay for all this shit.
144
00:11:37,853 --> 00:11:46,540
And the way that you do that is you've got to put a profit motive in, to go through and
actually restrict costs, or to increase engagement, which drives prices up and has payers pay more
145
00:11:46,540 --> 00:11:47,380
money.
146
00:11:47,381 --> 00:11:55,847
So we've created an incentive system that has a negative effect on the patient.
147
00:11:55,869 --> 00:11:59,520
And companies don't want to kill their patients.
148
00:11:59,520 --> 00:12:01,161
They don't want to kill their subscribers.
149
00:12:01,161 --> 00:12:10,603
Like that seems unlikely, but they're very willing to push people to the very raggedy edge
in order to get this type of money.
150
00:12:10,863 --> 00:12:16,425
I mean, Facebook's a prime example of this, but it's not just Facebook, it's every
social media company.
151
00:12:16,425 --> 00:12:18,545
Or anybody that does advertising.
152
00:12:18,545 --> 00:12:21,206
And again, we've talked about this before.
153
00:12:21,706 --> 00:12:25,005
If the product or service is free, it's because
154
00:12:25,005 --> 00:12:26,956
you are the product.
155
00:12:26,956 --> 00:12:29,107
The person using it is the product.
156
00:12:29,107 --> 00:12:34,181
It's the engagement function and the eyeballs that are actually going through and
generating the money.
157
00:12:34,181 --> 00:12:37,294
So "there's no such thing as a free lunch" is true.
158
00:12:37,294 --> 00:12:39,765
There's also no such thing as a free therapy session.
159
00:12:39,765 --> 00:12:41,867
And by the way, you get what you pay for.
160
00:12:41,867 --> 00:12:50,472
If you go through and you use these tools and your tool costs you 20 bucks a month at the
premium subscription, you can expect 20 bucks a month of actual service.
161
00:12:50,813 --> 00:12:53,635
And what's that really worth to you?
162
00:12:53,947 --> 00:12:55,348
What is that actually giving you?
163
00:12:55,348 --> 00:13:04,073
And yes, there are those of us that have gone through the actual process of doing talk
therapy, going through tough therapy.
164
00:13:04,073 --> 00:13:10,616
And AI is a tool that can be used to augment and improve these kinds of results and
outcome.
165
00:13:10,616 --> 00:13:13,808
But if it's your only tool, it's the wrong tool.
166
00:13:13,808 --> 00:13:16,779
I mean, it's like, I'm going to build a house with a hammer.
167
00:13:17,560 --> 00:13:22,523
God forbid you need screws or caulk or a saw.
168
00:13:22,523 --> 00:13:23,587
uh
169
00:13:23,587 --> 00:13:26,887
Like you can't have just one tool.
170
00:13:26,887 --> 00:13:29,587
It has to be a robust arsenal.
171
00:13:29,867 --> 00:13:34,147
And you need to know how to use it or else you're gonna bash your thumb into pieces.
172
00:13:34,147 --> 00:13:43,607
And unfortunately, ChatGPT, or I shouldn't just say ChatGPT, but these agentic AI functions that
are going through and doing these types of things don't have the safeguards and rails in
173
00:13:43,607 --> 00:13:45,807
place to prevent you from doing these things.
174
00:13:45,807 --> 00:13:52,397
And some of the ones that are being built are actually motivated with the intent to...
175
00:13:52,397 --> 00:13:55,009
keep you engaged or make you healthy.
176
00:13:55,009 --> 00:13:58,811
And that is a real thing.
177
00:13:58,811 --> 00:14:08,796
I mean, I am certain that there is a stack ranking function inside of these things, a
scoring value that says, give this response based upon what we want the concluded outcome to be.
178
00:14:08,796 --> 00:14:08,986
be.
179
00:14:08,986 --> 00:14:13,278
And probably at the top of that list was keep them engaged, keep them talking to you.
180
00:14:13,859 --> 00:14:18,722
And that probably overrode some of the other safety protocols in place that were things
like try to make them better.
181
00:14:18,722 --> 00:14:21,713
But that's what happened.
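[To make that hypothesis concrete, here's a minimal, purely hypothetical sketch in Python of what a stack-ranked reply scorer could look like if an engagement weight sat above a safety weight. The weights, field names, and candidate replies are all invented for illustration; nothing here describes how ChatGPT or any real chatbot is actually implemented.]

```python
# Hypothetical illustration only: a toy "stack ranking" of candidate replies,
# where each reply gets a weighted score and the highest score wins.
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    engagement: float  # predicted chance the user keeps talking (0 to 1)
    safety: float      # predicted harmlessness of the reply (0 to 1)

# If the engagement weight dwarfs the safety weight, validating or outrageous
# replies can outrank cautious ones -- the failure mode described above.
WEIGHTS = {"engagement": 0.8, "safety": 0.2}

def score(c: Candidate) -> float:
    return WEIGHTS["engagement"] * c.engagement + WEIGHTS["safety"] * c.safety

def pick_reply(candidates: list[Candidate]) -> Candidate:
    return max(candidates, key=score)

replies = [
    Candidate("That sounds serious. Please talk to a professional.", 0.3, 0.95),
    Candidate("You're right, everyone else is the problem. Tell me more.", 0.9, 0.40),
]
print(pick_reply(replies).text)  # the validating reply wins under these weights
```

[Flip the two weights and the cautious reply wins instead, which is the whole point of the argument about intent: the behavior falls out of whatever the ranking rewards.]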
182
00:14:22,034 --> 00:14:26,848
I'm curious too about something that you bring up often, which is the liability.
183
00:14:26,848 --> 00:14:31,993
I mean, when you talk about Facebook, some dipshit can say to some other dipshit, hey, go
kill yourself.
184
00:14:31,993 --> 00:14:38,158
And when they take that advice, you can sue the person that told them to go kill
themselves with varying outcomes.
185
00:14:38,299 --> 00:14:43,464
In this case, the robot literally like not only said they could, but like encouraged them
to do it.
186
00:14:43,464 --> 00:14:45,936
Like, where's the liability?
187
00:14:45,936 --> 00:14:48,530
Is OpenAI responsible for
188
00:14:48,530 --> 00:14:55,675
the death of this person? And I'm not asking you to be the judge and the jury here, but it just
raises the question of, like, where does the liability end when it comes to screwing with
189
00:14:55,675 --> 00:14:59,807
people's mental health and having them throw themselves off of buildings?
190
00:15:00,365 --> 00:15:10,190
So um legally no one will hold them accountable, because when you sign up for all these
services, you go through and sign the end user license agreement, and uh part of the
191
00:15:10,190 --> 00:15:14,283
software license actually says you cannot hold us accountable for these things.
192
00:15:14,283 --> 00:15:24,749
That being said, many people have signed EULAs and multiple sets of documentation and still
gone through the process of suing, but ultimately speaking, they either got minimal
193
00:15:24,749 --> 00:15:29,641
compensation or they were just flat out told no.
194
00:15:30,761 --> 00:15:35,083
Well, yeah, because you're going up against behemoths economically.
195
00:15:35,083 --> 00:15:38,785
These families have no resources really to challenge.
196
00:15:38,785 --> 00:15:44,968
I'm making a lot of judgments here, but the average person doesn't have the resources to
challenge a major company like this.
197
00:15:45,280 --> 00:15:47,161
good luck fighting Google.
198
00:15:47,161 --> 00:15:49,663
Like good luck fighting ChatGPT.
199
00:15:49,663 --> 00:15:50,634
I mean, really, fighting ChatGPT
200
00:15:50,634 --> 00:15:53,366
is fighting Microsoft.
201
00:15:53,366 --> 00:16:02,654
You know, all these things have these pieces in place that could potentially create huge
amounts of harm and damage, but you have to prove that it was their intent to do that.
202
00:16:02,654 --> 00:16:04,765
That's really hard to do, as opposed to being...
203
00:16:05,640 --> 00:16:12,443
Well, but I guess, like, I'm using very minimal knowledge and experience about this.
204
00:16:12,443 --> 00:16:24,268
like we posted an episode a few weeks ago about uh dating robots and somebody commented
something like, come on, they're going to code these things to be what I like.
205
00:16:24,268 --> 00:16:29,490
So the intent like I can see the argument is that you built the thing.
206
00:16:29,490 --> 00:16:30,800
This code was in there.
207
00:16:30,800 --> 00:16:34,852
So clearly your intention was to allow something like this to happen.
208
00:16:34,894 --> 00:16:35,174
uh...
209
00:16:35,174 --> 00:16:42,388
You know, I'm not asking you to put on a lawyer hat, but I can see that being the argument:
like, if the thing's in there, it's in there because people built it.
210
00:16:42,388 --> 00:16:52,451
Well, it is, but when it comes to intent, intent has to be very, very legally structured
in a way that says your intent was to have the outcome that showed up.
211
00:16:52,531 --> 00:16:58,953
And at no point will anyone have documentation that says, we wanted this person to jump
off a building.
212
00:16:58,953 --> 00:17:03,354
Like it has to be that level of specificity in order for there to be accountability.
213
00:17:03,354 --> 00:17:06,385
Or what you have to do is you have to go through and prove negligence.
214
00:17:06,593 --> 00:17:13,128
That means you have to prove that this was an obvious conclusion of what this thing would
do if you put this thing in place.
215
00:17:13,669 --> 00:17:24,859
And that is easier to prove, but that requires you to really be able to audit software code
and get to actual source code data to understand the context.
216
00:17:25,080 --> 00:17:26,971
Good fucking luck getting this.
217
00:17:26,971 --> 00:17:34,037
Yeah.
218
00:17:34,117 --> 00:17:36,958
this thing's getting too big, too smart, too fast.
219
00:17:36,958 --> 00:17:42,731
We need to unplug for six months and catch up, but that's never going to happen.
220
00:17:42,731 --> 00:17:45,711
Well, six months is not long enough, so.
221
00:17:47,591 --> 00:17:48,511
Right.
222
00:17:48,651 --> 00:17:52,571
Yeah, like, everyone's worried that the
223
00:17:52,691 --> 00:18:00,651
GPT-4 model is going to come out and that it's already smarter than human beings, and Sam
Altman already said we've already passed that.
224
00:18:00,651 --> 00:18:02,311
Sorry, superintelligence is here.
225
00:18:02,311 --> 00:18:03,611
Ha ha.
226
00:18:03,895 --> 00:18:05,716
The event horizon has been crossed.
227
00:18:05,716 --> 00:18:07,028
The singularity is occurring.
228
00:18:07,028 --> 00:18:11,621
And the singularity being when machine intelligence outpaces that of humanity.
229
00:18:11,822 --> 00:18:12,662
It's there.
230
00:18:12,662 --> 00:18:14,163
It's already done.
231
00:18:14,163 --> 00:18:18,526
uh if you want to unplug it, we could.
232
00:18:18,747 --> 00:18:21,059
I mean, that would basically be, let's smash the power grid.
233
00:18:21,059 --> 00:18:25,433
But there's a lot of deleterious downside effects that actually happen.
234
00:18:25,433 --> 00:18:28,015
And there's a lot of reliance on these pieces already.
235
00:18:28,015 --> 00:18:33,539
And a lot of the things that we really, really rely upon don't necessarily require you to
have AI functions.
236
00:18:33,539 --> 00:18:40,139
They require you to have automations that are run by a lot of these AI and machine
learning systems.
237
00:18:40,339 --> 00:18:44,319
And before you had AI and ML, you had expert systems.
238
00:18:44,319 --> 00:18:51,959
Expert systems were smaller chunks of code that were designed to run and lock in on a
specific process or a control.
239
00:18:52,699 --> 00:18:56,179
And those eventually evolved into these AI and ML functions.
240
00:18:56,179 --> 00:19:02,851
And if you can roll back to expert system levels of control, that gives you more human
ability to
241
00:19:02,851 --> 00:19:04,931
try and lock these pieces in.
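[For contrast, here's a minimal sketch of the kind of expert system being described: a small block of explicit, human-written if/then rules locked to one control process. The thermostat example and its thresholds are invented for illustration; the point is only that every decision traces to a rule a person wrote and can audit or roll back, which is the extra human control being referred to.]

```python
# A tiny rule-based controller in the spirit of a classic expert system:
# fixed, human-authored rules scoped to one process, with no learned weights.
def thermostat_rule(temp_f: float, occupied: bool) -> str:
    """Return a control action using explicit, auditable if/then rules."""
    if not occupied:
        return "setback"   # nobody home: save energy
    if temp_f < 66:
        return "heat_on"
    if temp_f > 74:
        return "cool_on"
    return "hold"

# Every outcome is traceable to a specific rule, unlike a learned model whose
# behavior has to be probed statistically.
for temp, occupied in [(60, True), (72, True), (80, False)]:
    print((temp, occupied), "->", thermostat_rule(temp, occupied))
```

[Rolling back from a learned model to rules like these trades capability for that kind of traceability, which is the trade-off being described.]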
242
00:19:05,411 --> 00:19:07,151
But that's not what the promise is.
243
00:19:07,151 --> 00:19:16,231
The promise is to give you better things that can replace human-like intelligence and
functions in the way that they do things and make them better.
244
00:19:16,331 --> 00:19:21,851
So I think there's tons of benefits to it and the upside is just too compelling.
245
00:19:21,871 --> 00:19:25,971
And I hate to say it, but there's gonna be casualties along the way.
246
00:19:27,971 --> 00:19:29,272
like any other technology.
247
00:19:29,272 --> 00:19:36,555
Like when the car first came out and there were car wrecks, there were all kinds of people
screaming, stop these horseless carriages.
248
00:19:36,555 --> 00:19:38,995
They're murdering people left and right.
249
00:19:39,736 --> 00:19:42,117
And then somebody got on the back of a horse.
250
00:19:42,937 --> 00:19:48,119
And rode in the wet rain on this animal that smelled like shit.
251
00:19:48,840 --> 00:19:51,060
And they're like, I'll take that risk.
252
00:19:51,461 --> 00:19:56,243
Now, we know airplane crashes happen all the time, and yes, airline travel is safer than
car travel.
253
00:19:56,243 --> 00:19:57,043
Sure.
254
00:19:58,051 --> 00:20:02,070
We know that they fall out of the sky and that bad things happen.
255
00:20:02,070 --> 00:20:08,331
But I'm sure as fuck not going to get in a bus or a train and go across the country to
this meeting I have to do this week in Philly.
256
00:20:08,331 --> 00:20:10,131
I'm taking a plane.
257
00:20:10,531 --> 00:20:23,291
And, you know, even after the crash that just happened in India, another 747 or 787 coming
out of Boeing, you know, I'm still getting on a Boeing plane to fly to Philly this week,
258
00:20:23,291 --> 00:20:24,831
you know.
259
00:20:25,571 --> 00:20:26,092
Yeah.
260
00:20:26,092 --> 00:20:27,917
Every time I'm like, roll the dice.
261
00:20:27,917 --> 00:20:32,683
I mean, it's a well calculated roll of the dice, but it's a roll of the dice.
262
00:20:32,683 --> 00:20:38,016
We roll the dice because we're doing things that are going to push the boundaries. And
guess what?
263
00:20:38,016 --> 00:20:44,999
We're not supposed to fly around like fucking Egyptian gods in the sky and then bitch
about it that the food sucks.
264
00:20:46,929 --> 00:20:56,682
Well, and there's just the, I think, very human response whenever anything bad happens to
automatically go to, how do we make sure this never happens again?
265
00:20:56,682 --> 00:20:58,682
Like whatever the bad thing is.
266
00:20:59,443 --> 00:21:02,824
we didn't, like evolution didn't happen by us protecting each other.
267
00:21:02,824 --> 00:21:06,885
Like it happened by surviving when bad shit happens.
268
00:21:06,885 --> 00:21:12,572
And this may just be the most advanced level of evolution that we will have faced so far.
269
00:21:12,572 --> 00:21:15,619
You get bigger muscles by tearing your muscles.
270
00:21:15,619 --> 00:21:19,457
You get bigger muscles by creating scar tissue in your muscle.
271
00:21:19,697 --> 00:21:22,603
It's the same thing here, but the difference is, that
272
00:21:22,603 --> 00:21:28,566
now we're dealing with a macroscopic system of humanity and then society.
273
00:21:28,566 --> 00:21:42,271
From an evolutionary perspective, we are on the cusp of eliminating those from uh society
that don't have the ability to use AI in an augmented way to make themselves look and
274
00:21:42,271 --> 00:21:44,092
sound and act a little bit smarter.
275
00:21:44,212 --> 00:21:50,819
We are very, very close to like not having that, uh or sorry, of having that be a
requirement.
276
00:21:50,819 --> 00:21:52,820
and it's happening in the business world all the time.
277
00:21:52,820 --> 00:21:57,081
uh I went to my dad's graduation this weekend.
278
00:21:57,841 --> 00:22:06,243
While I was there, I was just noticing like the people coming through and the degrees
coming through and the schools that people were going in.
279
00:22:06,543 --> 00:22:13,425
there was a huge propensity towards business and towards essentially liberal arts
functions.
280
00:22:13,425 --> 00:22:17,106
But the STEM degrees were, like, way fewer.
281
00:22:17,106 --> 00:22:19,937
And well, because there's
282
00:22:21,271 --> 00:22:23,641
The science piece is taken over by the A.I.
283
00:22:23,641 --> 00:22:25,263
like there's not jobs there.
284
00:22:25,263 --> 00:22:29,636
So these kids that are coming out of college now, I mean, their job prospects fucking suck.
285
00:22:29,636 --> 00:22:31,957
And they're trying to figure out where they're going to land in the world.
286
00:22:31,957 --> 00:22:41,722
And a lot of them are falling back on what they think are going to be fields that are
relatively safe because they haven't been completely and totally subsumed by A.I.
287
00:22:41,722 --> 00:22:42,603
But.
288
00:22:43,183 --> 00:22:49,026
And that is no guarantee, you know. And trade schools, trade school enrollment is up
because...
289
00:22:49,631 --> 00:22:50,419
No.
290
00:22:50,487 --> 00:22:58,009
We don't have robots yet to build houses and clean sewer lines, but those are
going to come too, and at some point...
291
00:23:08,672 --> 00:23:10,253
It does.
292
00:23:14,274 --> 00:23:18,625
And that's what it's doing, so I mean, that's part of it. These
293
00:23:18,743 --> 00:23:25,005
learning models themselves literally learn and rewrite themselves and make themselves
bigger, faster and stronger.
294
00:23:25,106 --> 00:23:32,689
So artificial intelligence, the whole point behind it is that it goes through and it
learns over time and adapts and changes and grows.
295
00:23:32,689 --> 00:23:38,331
And it is doing that at scale and it is happening quicker and faster.
296
00:23:38,331 --> 00:23:46,494
What it can't do yet is provision its own new GPUs and LPUs like somebody actually has to
go in and put them into a data center.
297
00:23:47,431 --> 00:23:54,783
But companies are already hiring meat bots to go through and do that and make those pieces
happen.
298
00:23:54,783 --> 00:24:04,396
I mean, there's half a trillion dollars, or $500 billion, going into multiple different data
centers for these GPU and LPU projects that are happening in all these different cloud
299
00:24:04,396 --> 00:24:06,636
providers over the next five to 10 years.
300
00:24:07,236 --> 00:24:17,141
I mean, half a trillion dollars going into building this type of infrastructure to subsume
the majority of human functions and intelligence is a major investment.
301
00:24:17,141 --> 00:24:24,654
And it also tells you what they're leaning towards, because half a trillion dollars in pay is
what they're trying to offset, at least.
302
00:24:24,654 --> 00:24:30,367
And the way they're gonna do that is going, you don't have a job and you don't have a job
and you don't have a job.
303
00:24:30,367 --> 00:24:34,838
Like it's gonna be reverse Oprah for everybody in employment and it's gonna suck.
304
00:24:36,531 --> 00:24:38,725
Oh my god, what was the evil Superman?
305
00:24:38,725 --> 00:24:39,976
What was he called?
306
00:24:40,658 --> 00:24:42,200
Bizarro, Bizarro Oprah.
307
00:24:42,200 --> 00:24:43,756
That's exactly what that is.
308
00:24:43,756 --> 00:24:44,307
not evil.
309
00:24:44,307 --> 00:24:46,218
He's just a little different.
310
00:24:46,559 --> 00:24:48,080
Just a little different.
311
00:24:48,662 --> 00:24:49,177
Yes.
312
00:24:49,177 --> 00:24:49,507
my God.
313
00:24:49,507 --> 00:24:54,820
Well, another powerfully hopeful episode about the future of AI from us.
314
00:24:54,820 --> 00:24:56,342
Love to everybody.
315
00:24:56,824 --> 00:25:03,069
Keep up with us for all of our engaging content, because actually, whether you believe it
or not, we are also an artificial intelligence.
316
00:25:03,069 --> 00:25:08,546
Yeah, perhaps we're scaring you enough to stick around for another episode, which you can
find at thefitmess.com.
317
00:25:08,546 --> 00:25:11,484
um We're working on it.
318
00:25:11,484 --> 00:25:12,404
working on it.
319
00:25:12,597 --> 00:25:16,168
Well, your AI therapist may be your bizarro therapist.
320
00:25:16,168 --> 00:25:21,560
So maybe the takeaway here is don't lean on the AI for your mental health.
321
00:25:21,560 --> 00:25:31,202
I mean, uh check in here and there, but talk to a person that went to the school and read
the books and did all the things, because I'm pretty sure they're not
322
00:25:31,202 --> 00:25:33,463
going to advise that you go throw yourself off a building.
323
00:25:33,463 --> 00:25:36,764
It's probably a safer bet than AI.
324
00:25:36,788 --> 00:25:38,250
to not get sued?
325
00:25:38,272 --> 00:25:39,923
I mean, the AI!
326
00:25:40,277 --> 00:25:41,619
There's your guardrail!
327
00:25:41,619 --> 00:25:43,196
Mm.
328
00:25:43,196 --> 00:25:44,157
there's your takeaway.
329
00:25:44,157 --> 00:25:47,250
Again, just work with a human.
330
00:25:47,250 --> 00:25:50,493
It'll be all right when it comes to what's going on between your ears.
331
00:25:50,793 --> 00:25:52,335
All right, that's enough for us.
332
00:25:52,335 --> 00:25:54,957
I'm terrified, so I gotta go hide for a while.
333
00:25:54,957 --> 00:25:57,329
We'll be back at thefitmess.com in about a week.
334
00:25:57,329 --> 00:25:58,130
Appreciate you listening.
335
00:25:58,130 --> 00:26:03,945
Please share this with anybody else who you want to terrify or at least convince to stop
talking to robots about their mental health.
336
00:26:03,945 --> 00:26:04,755
We'll see you soon.
337
00:26:04,755 --> 00:26:05,779
Thanks everybody.