Why AI Therapists Might Actually Help Your Mental Health (And Where They Fall Short)

Are your mental health issues trapped between therapy appointments? When I hit a rough patch, I turned to AI for therapy out of desperation, and was genuinely shocked by the results. The AI asked me introspective questions identical to what my human therapist uses, but went further by offering specific suggestions my therapist couldn't provide.
In this episode, we explore the surprising benefits and legitimate concerns of using AI for mental wellness. You'll discover how these tools can organize your scattered thoughts, provide actionable steps for improvement, and serve as a valuable bridge between professional therapy sessions.
Listen now to learn how AI might become your unexpected mental health ally – just don't forget the human connection that algorithms can't replace.
Topics Discussed:
- My personal experience using AI as a therapy tool
- How AI can provide structured introspection and actionable steps for mental health
- The surprising effectiveness of AI-generated journaling prompts for emotional connection
- The potential dangers of relying solely on AI for medical or mental health advice
- Privacy concerns and data collection risks when sharing personal information with AI
- Research comparing AI vs human doctor diagnostic accuracy (77% vs 67%)
- How AI lacks human imagination but excels at pattern recognition and memory recall
- The future integration of AI tools in professional healthcare settings
- Finding the balance between AI assistance and human connection
- Using AI to transform traditional journaling practices for deeper emotional connection
----
MORE FROM THE FIT MESS:
Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok
Subscribe to The Fit Mess on YouTube
Join our community in the Fit Mess Facebook group
----
LINKS TO OUR PARTNERS:
- Explore the many benefits of cold therapy for your body with Nurecover
- Muse's Brain Sensing Headbands Improve Your Meditation Practice.
- Get a Free One Year Supply of AG1 Vitamin D3+K2, 5 Travel Packs
- You Need a Budget helps you quickly get out of debt, and save money faster!
1
00:00:01,393 --> 00:00:02,744
Alright, this is The Fit Mess podcast.
2
00:00:02,744 --> 00:00:03,585
Thanks so much for listening.
3
00:00:03,585 --> 00:00:10,681
We talk about all things AI and how it can either completely fuck up or really help your
life when it comes to your health and your wellness.
4
00:00:10,681 --> 00:00:20,169
Uh, Jason, you and I actually talked about a similar topic a few weeks ago: how doctors use AI, even though most of them don't even trust it to begin with.
5
00:00:20,169 --> 00:00:27,515
So why shouldn't the rest of us do the same thing, jump on the bandwagon and try and
figure out what the hell's going on in our heads and our bodies with this fancy new tool
6
00:00:27,515 --> 00:00:28,516
that we're all getting used to?
7
00:00:28,516 --> 00:00:29,603
uh
8
00:00:29,603 --> 00:00:32,095
And you and I were supposed to talk about this a few days ago.
9
00:00:32,095 --> 00:00:39,661
And when we were sort of kicking the idea around over a text, I mentioned that I'd read a
bunch of articles about how people are using it for therapy.
10
00:00:40,382 --> 00:00:41,873
And so I got curious.
11
00:00:41,873 --> 00:00:46,267
I was in a bit of a slump, having some rough times, just to be fully upfront.
12
00:00:46,267 --> 00:00:49,930
Like I'm a new assistant coach for my kids' softball team.
13
00:00:50,050 --> 00:00:53,673
And I watched her pitch her first game and she just got annihilated.
14
00:00:53,975 --> 00:01:00,117
And it killed me, like, as her coach, as her dad. Like, it took a couple of days to bounce back from that.
15
00:01:00,117 --> 00:01:04,189
And I literally was like, okay, AI therapist, get me through this.
16
00:01:04,189 --> 00:01:06,630
And so I started asking it all these questions.
17
00:01:07,350 --> 00:01:08,591
It was nuts.
18
00:01:08,591 --> 00:01:19,575
Like, I've been in therapy for a long time, and the questions that it asked me... like, I went in real dumb guy, real basic level: hey AI, be my therapist.
19
00:01:19,575 --> 00:01:23,834
And it, like, started asking me questions, and I got specific, and
20
00:01:23,834 --> 00:01:28,276
it started asking me these really introspective questions about why I was feeling the way
I did.
21
00:01:28,276 --> 00:01:34,639
And perhaps I was being a bit hard on myself and all these things that my therapist has
said to me for years.
22
00:01:34,699 --> 00:01:35,957
And I was blown away.
23
00:01:35,957 --> 00:01:38,300
I was like, wow, that's really cool.
24
00:01:38,321 --> 00:01:40,742
But then it also gave me actionable steps.
25
00:01:40,742 --> 00:01:47,475
Not just "go write in your journal" or whatever, but it was like, what would you do to be a better coach next time?
26
00:01:47,475 --> 00:01:49,256
And I was like, I would do some more drills.
27
00:01:49,256 --> 00:01:50,076
And it was like,
28
00:01:50,076 --> 00:01:51,477
perhaps try these drills.
29
00:01:51,477 --> 00:01:54,498
Like it had softball drills to take to the lesson.
30
00:01:54,498 --> 00:01:57,069
like my therapist wouldn't have that, right?
31
00:01:57,069 --> 00:02:00,710
They would go, oh, go Google some ways to be a better coach.
32
00:02:00,870 --> 00:02:08,373
It was so profound that I actually did it for a few days and ended up like coming up with
new ways to journal, new ways to do things.
33
00:02:08,993 --> 00:02:19,427
It's mind-blowing how much this can be used as a massive tool to help your mental health, something I was kind of poo-pooing and laughing at before trying it myself.
34
00:02:19,674 --> 00:02:21,525
Yeah, I mean, that's legit.
35
00:02:21,525 --> 00:02:34,140
So as a former Little League softball coach, uh, I would have loved to have had this tool, as opposed to sitting there doing the assistant work and, you know, essentially
36
00:02:34,140 --> 00:02:38,662
playing cage mom and doing cheers in the dugout with the girls and running the batting order.
37
00:02:39,423 --> 00:02:45,525
Maybe actually know how to play softball... and maybe the AI could have helped me.
38
00:02:45,885 --> 00:02:47,886
Alas, it was not available.
39
00:02:48,900 --> 00:02:50,771
So yeah, I mean, I could definitely see how that would happen.
40
00:02:50,771 --> 00:02:55,194
And like, I don't think it's necessarily a substitute for regular human interactions.
41
00:02:55,194 --> 00:03:03,398
But for what you're talking about, what it really does is help you organize your thoughts and organize your questions in a way that makes it easier to
42
00:03:03,398 --> 00:03:07,060
get through to the truth of the topic and the truth of the subject.
43
00:03:07,100 --> 00:03:12,913
And it cuts down the noise that you have going on in your own head to try to find those
pieces.
44
00:03:12,913 --> 00:03:18,278
Now, it's awesome because one, it can make things less confusing.
45
00:03:18,278 --> 00:03:23,278
and you can organize information in a way that makes it much more palatable for you as an
individual.
46
00:03:24,018 --> 00:03:31,318
The other side of that is that it's giving you information that's structured in a way that
it thinks is useful and helpful for you.
47
00:03:31,318 --> 00:03:36,138
And by "it," I mean whatever told it how to respond to those things.
48
00:03:36,138 --> 00:03:43,318
And the way these learning models normally work is they have basically
49
00:03:43,642 --> 00:03:50,786
humans that come in and augment and look at responses as they come through and say, yeah,
this looks like the best response, or this does not look like the best response.
50
00:03:50,786 --> 00:03:55,227
And ChatGPT does that to me all the time, because I'm part of their beta signup program
for different pieces.
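To make that concrete, here's a minimal, hypothetical Python sketch of the kind of "which response is better?" label a human reviewer, or a beta user clicking a thumbs-up, might record; every name, field, and string here is invented for illustration and isn't from the show or any real OpenAI tooling.

    # Hypothetical sketch: logging one human "which answer is better?" preference,
    # the kind of signal the hosts describe reviewers and beta users providing.
    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class PreferenceLabel:
        prompt: str
        response_a: str
        response_b: str
        preferred: str  # "a" or "b", chosen by the human reviewer

    def record_preference(label: PreferenceLabel, path: str = "preferences.jsonl") -> None:
        # Append one labeled comparison; a training pipeline could later use
        # these pairs to nudge the model toward the responses people prefer.
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(label)) + "\n")

    record_preference(PreferenceLabel(
        prompt="Help me bounce back after a rough game as a coach.",
        response_a="Try a few focused drills at the next practice...",
        response_b="You should probably just quit coaching.",
        preferred="a",
    ))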
51
00:03:55,228 --> 00:04:02,071
So um it's fantastic, because it gives you that kind of power and enables you to do those
types of things.
52
00:04:02,071 --> 00:04:08,544
It's maybe a bit dangerous, because it definitely enhances the potential for us to all fall into the Dunning-Kruger trap.
53
00:04:08,634 --> 00:04:14,838
Thinking we actually know what the fuck we're doing when we really don't. What we're doing is using something to organize our thoughts and feed us information that we
54
00:04:14,838 --> 00:04:26,664
respond to. And most of us just want it to say, you know, do this, take this pill, do this thing that's relatively easy. Because hard work is just that: it's hard work.
55
00:04:26,664 --> 00:04:37,620
So at some point we have to make the delineation between: is what I'm doing here with this virtual therapist in this ChatGPT realm
56
00:04:39,518 --> 00:04:41,718
enough or is it kind of my starting point?
57
00:04:41,718 --> 00:04:51,078
And like you said, you know, you've been in therapy for years and it asked you questions
over the weekend that were as in depth and as interesting as what it took you years to go
58
00:04:51,078 --> 00:04:52,258
through in therapy.
59
00:04:52,258 --> 00:04:56,158
I don't think you would have been able to unearth that if you hadn't previously gone
through therapy.
60
00:04:56,158 --> 00:04:58,958
So the layman looking at this might read it and say, I mean, like, well, fuck you.
61
00:04:58,958 --> 00:05:00,098
It's just like searching the Internet.
62
00:05:00,098 --> 00:05:03,698
Well, of course, because I can put in a question and the Internet hands a response back to me.
63
00:05:03,698 --> 00:05:06,278
And nowadays, the top of the list is, of course, an AI response.
64
00:05:06,278 --> 00:05:07,998
But then there's several subjects that
65
00:05:08,402 --> 00:05:14,488
And when you look at the compendium of information that's actually there, it really is
like a choose your own adventure novel.
66
00:05:14,488 --> 00:05:23,416
And when you start thinking about gamifying your mental health, and gamifying your physical health, it's really, really useful from a
67
00:05:23,416 --> 00:05:30,342
productivity perspective, because you can work backwards and look at these different
actionable steps and actually show measurable improved performance.
68
00:05:30,342 --> 00:05:35,428
So we've talked about it before, all these different metric wearable devices, they
69
00:05:35,428 --> 00:05:37,129
give you that same kind of level of interaction.
70
00:05:37,129 --> 00:05:40,741
And, you know, I spent time to build a company to do this to make it work.
71
00:05:40,741 --> 00:05:45,404
ah But the mental health side of it has similar types of challenges and stretches.
72
00:05:45,404 --> 00:05:51,627
The problem is that there's not really a way for you to go through and wear a wearable that says, today you're happy.
73
00:05:51,627 --> 00:06:01,053
ah So this becomes that kind of subjective feeling of your own because well,
biometrically, you might look great, like you're calm, relaxed and mellow.
74
00:06:01,133 --> 00:06:02,874
You might actually be
75
00:06:02,874 --> 00:06:16,462
depressed, and you might be this calm, relaxed and mellow because you've been lying in bed all day watching reruns of M*A*S*H. You know... yeah, exactly. You know, maybe not M*A*S*H.
76
00:06:16,462 --> 00:06:26,147
um But that's kind of the thing is that like, the way that your body reacts and the way
that your nervous system reacts, certainly they're tied together.
77
00:06:26,147 --> 00:06:31,230
But the way the signals actually represent something as far as measurable metrics goes is
very, very difficult.
78
00:06:31,440 --> 00:06:38,055
So with a tool like ChatGPT, we could actually journal and say, today I felt like this.
79
00:06:38,476 --> 00:06:47,114
And then if you compare that and overlay that with your other biometric data, you're
actually gonna be able to get a much better understanding of how these things affect you
80
00:06:47,114 --> 00:06:47,603
over time.
81
00:06:47,603 --> 00:06:55,311
So like a good night of sleep, most people find that if they get a good night of sleep,
that they tend to feel better and are more productive.
82
00:06:55,311 --> 00:06:57,432
Like it's not rocket math, right?
83
00:06:57,713 --> 00:06:59,374
But if I got,
84
00:06:59,374 --> 00:07:04,598
all the biometrics that said I got a good night of sleep, but I woke up and I still didn't
feel good.
85
00:07:04,598 --> 00:07:11,742
And I still felt down and depressed or just, just blase because I have some other
underlying condition going on.
86
00:07:11,742 --> 00:07:13,003
Nothing's going to tell me that.
87
00:07:13,003 --> 00:07:16,085
Like I got to ask questions to try to get to the answers to those pieces.
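As a rough illustration of the overlay Jason is describing, journal entries lined up against biometric data, here's a minimal Python sketch; the numbers, the field names, and the idea of boiling a day's mood down to a single score are all made up for the example.

    # Hypothetical sketch: lining up a self-reported daily mood score from a
    # journal with sleep hours from a wearable to see whether they move together.
    from statistics import correlation  # available in Python 3.10+

    days = [
        {"date": "2024-04-01", "sleep_hours": 7.5, "mood": 7},
        {"date": "2024-04-02", "sleep_hours": 5.0, "mood": 4},
        {"date": "2024-04-03", "sleep_hours": 8.0, "mood": 6},
        {"date": "2024-04-04", "sleep_hours": 6.0, "mood": 5},
    ]

    sleep = [d["sleep_hours"] for d in days]
    mood = [d["mood"] for d in days]

    # A positive number only says the two lines tend to move together; it says
    # nothing about cause, and it won't surface an underlying condition on its own.
    print(f"sleep/mood correlation: {correlation(sleep, mood):.2f}")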
88
00:07:16,085 --> 00:07:20,999
And that's what this really comes down to is that it's up to us as individuals to advocate
for ourselves.
89
00:07:20,999 --> 00:07:24,491
And the way that we advocate for ourselves traditionally is we say, I don't feel good.
90
00:07:24,491 --> 00:07:28,063
I'm going to the doctor and say, doctor, blah, blah, blah, blah.
91
00:07:28,063 --> 00:07:28,974
And the doctor says,
92
00:07:28,974 --> 00:07:34,578
Yes, uh take this and you'll feel better or do these things and you might feel better.
93
00:07:35,619 --> 00:07:42,704
The problem that we run into is that doctors themselves, typically speaking, when we go
and see them, they're not as invested in our health as we are.
94
00:07:42,704 --> 00:07:50,560
So because they're not as invested in our health as we are, they're not as likely to try
to keep us motivated and on task and on track.
95
00:07:50,560 --> 00:07:56,114
So we supplement that with other commercially available tools or other pieces out there,
apps or these kinds of things.
96
00:07:56,238 --> 00:07:59,661
AI is just another advancement of this and just another repurposing of these functions.
97
00:07:59,661 --> 00:08:08,109
And if you look at all the tools out there that do this shit today, they're all using AI
and they're using it to either get rid of people, correlate data or both.
98
00:08:08,109 --> 00:08:16,276
And the idea is they're trying to make these things much more focused and much easier to execute on, because the noise-to-signal ratio of life is just high.
99
00:08:16,657 --> 00:08:20,961
using AI to do that, I think is actually quite healthy.
100
00:08:21,081 --> 00:08:22,442
There is a downside.
101
00:08:22,457 --> 00:08:23,578
Of course, of course there is.
102
00:08:23,578 --> 00:08:31,642
ah The thing that I thought was so great about it was that, like you said, this is not a
replacement necessarily.
103
00:08:31,642 --> 00:08:34,374
But it's a hell of a stop gap between appointments.
104
00:08:34,374 --> 00:08:43,709
Or if your insurance is only going to get you in the office so many times or in the
virtual call so many times, it's a great tool to go to in the moment when it's like, I'm
105
00:08:43,709 --> 00:08:45,150
maybe not in crisis.
106
00:08:45,150 --> 00:08:47,261
If you're in crisis mode, AI might not be the way to go.
107
00:08:47,261 --> 00:08:48,211
But if you're like, you know what?
108
00:08:48,211 --> 00:08:49,592
just, I got to get out of this.
109
00:08:49,592 --> 00:08:51,473
I got to sort through this.
110
00:08:51,793 --> 00:08:58,767
Like I said, the kinds of questions that it made me ask were fantastic and really helped
me through that slump.
111
00:08:58,767 --> 00:09:06,972
But then when I got a little more clarity, I was thinking about just like my journaling
practice and how it feels very automatic and very like I checked the boxes that I did the
112
00:09:06,972 --> 00:09:09,983
journal today and I don't really feel like I get anything out of it.
113
00:09:09,983 --> 00:09:16,247
And so I asked it to help me come up with a way to improve my connection to the feeling of
gratitude, right?
114
00:09:16,247 --> 00:09:20,109
Like not just, I'm grateful to have a warm home, but to...
115
00:09:20,167 --> 00:09:20,680
Yeah.
116
00:09:20,680 --> 00:09:22,263
connect to the feeling?
117
00:09:22,263 --> 00:09:30,278
And the prompts that it gave me, like as I was filling in my journal following these
prompts, like I was getting emotional, like really connecting with the feeling.
118
00:09:30,278 --> 00:09:31,149
And I was like,
119
00:09:33,091 --> 00:09:40,731
A robot taught me how to feel this in a way that I haven't been able to on my own in a
really long time.
120
00:09:40,752 --> 00:09:45,449
I just, again, no, I know, I know it's exactly that.
121
00:09:45,449 --> 00:09:47,094
It's exactly WALL-E, yes.
122
00:09:47,094 --> 00:09:49,678
that new one, like The Wild Robot?
123
00:09:49,678 --> 00:09:50,599
Like...
124
00:09:51,461 --> 00:09:52,868
I fucking cried in all of them.
125
00:09:52,868 --> 00:09:54,608
Seriously, seriously.
126
00:09:54,949 --> 00:09:59,871
But I just thought, how many books have I read about how to journal?
127
00:09:59,871 --> 00:10:02,853
How many times have I searched for good journaling prompts?
128
00:10:02,853 --> 00:10:12,888
And, three things that you're grateful for. But the way that it dialed it in to find the feeling, and sitting with that feeling and writing about it... I won't bore you with the details
129
00:10:12,888 --> 00:10:19,641
of it, but just to get that connection from just asking a robot what to do better was so
powerful.
130
00:10:19,739 --> 00:10:21,378
But yeah.
131
00:10:21,378 --> 00:10:23,379
Yeah, no, you brought up a good point there, too, though.
132
00:10:23,379 --> 00:10:28,681
Um, people in crisis, like an actual, real, legit mental health crisis.
133
00:10:28,681 --> 00:10:32,882
um It's highly likely.
134
00:10:33,562 --> 00:10:47,426
That the person on, you know, like a suicide hotline probably has a prompt engineer or an AI prompt right there, an AI prompt actually transcribing the call, reading it,
135
00:10:47,524 --> 00:10:49,326
and saying, here are possible answers for you.
136
00:10:49,326 --> 00:10:52,678
Here are possible paths that you can take to try to make these things more effective.
137
00:10:52,678 --> 00:11:00,485
Because I'm sure as an individual, every time these people listen to it, they're probably
getting swept up in the story and the emotion of it, at least to some degree.
138
00:11:00,485 --> 00:11:03,217
And the things that they might miss, the AI might catch.
139
00:11:03,217 --> 00:11:09,332
It's like having that secondary set of ears listening in to try to find the best path
forward.
140
00:11:09,332 --> 00:11:12,295
Because again, this is augmented intelligence.
141
00:11:12,295 --> 00:11:14,537
That's really what we're using it for.
142
00:11:14,537 --> 00:11:16,526
And we're using it to augment our own
143
00:11:16,526 --> 00:11:19,687
mental intelligence as well, and our own mental awareness.
144
00:11:20,728 --> 00:11:25,631
If you think about it in those terms, this thing is great.
145
00:11:25,631 --> 00:11:26,672
We should use it.
146
00:11:26,672 --> 00:11:28,052
Everyone should have access to it.
147
00:11:28,052 --> 00:11:29,826
There shouldn't be a cost barrier to it.
148
00:11:29,826 --> 00:11:31,774
Like, this should just be part of regular mental health.
149
00:11:31,774 --> 00:11:36,177
uh The downside is that it actually is not cheap.
150
00:11:36,177 --> 00:11:45,958
I mean, your mental health session with ChatGPT from an overall resource utilization
perspective is probably quite high because you're using a lot of GPU and LPU cycles.
151
00:11:45,958 --> 00:11:59,948
Um, but also, it is even still relatively low cost, and they subsidize that by selling something of yours, which is your data, the information about you.
152
00:11:59,948 --> 00:12:05,702
So as you're giving it more information, as you're telling it more about you, it's quite possible.
153
00:12:05,702 --> 00:12:07,714
It's building a profile on you.
154
00:12:07,714 --> 00:12:10,376
And that profile could be, this person has mental health issues.
155
00:12:10,376 --> 00:12:11,707
These are the things that affect them this way.
156
00:12:11,707 --> 00:12:14,078
This is how you effectively target them with advertising.
157
00:12:14,394 --> 00:12:18,697
These are the things you want to send to them so they'll actually buy them and you can actually make money off of them.
158
00:12:18,837 --> 00:12:23,020
Like this is all the nasty reality of living in an e-commerce environment.
159
00:12:23,020 --> 00:12:35,129
And really we've gone from, you know, this idea of democratized capitalism into economic
feudalism because you wind up with these different kingdoms of relative ownership of these
160
00:12:35,129 --> 00:12:36,190
data principles.
161
00:12:36,190 --> 00:12:39,372
You know, ChatGPT is one of those.
162
00:12:39,512 --> 00:12:40,403
So is Amazon.
163
00:12:40,403 --> 00:12:41,403
So is Microsoft.
164
00:12:41,403 --> 00:12:42,448
So is Facebook.
165
00:12:42,448 --> 00:12:46,985
These are all walled gardens of information sets that take these data pieces and try to make sense of them.
166
00:12:46,985 --> 00:12:52,231
And the kingdoms themselves share and sell your data back and forth between each other.
167
00:12:52,994 --> 00:12:57,460
So just be aware that this might be going out there and that might be a thing.
168
00:12:57,460 --> 00:12:59,022
And yeah.
169
00:12:59,022 --> 00:13:05,625
A bunch of the articles I was reading were highlighting that Gemini has no HIPAA rules.
170
00:13:05,625 --> 00:13:12,788
So whatever you tell it about your health conditions, your mental health, whatever, that's
not locked in the doctor's safe.
171
00:13:12,788 --> 00:13:16,470
You are sharing that willingly in a relatively public way.
172
00:13:16,470 --> 00:13:18,851
So you certainly want to be careful.
173
00:13:18,851 --> 00:13:23,909
But when it's like, hey, I'm feeling sad because I feel like a shit assistant coach on a
softball team.
174
00:13:23,909 --> 00:13:29,430
You know, there's relatively low risk there, other than I'm probably gonna get sold a lot
more softball shit in my Facebook feed.
175
00:13:29,430 --> 00:13:30,832
But otherwise...
176
00:13:31,646 --> 00:13:35,749
Right, like, and that might not be a bad thing.
177
00:13:37,190 --> 00:13:41,373
You might actually need that DeMarini LX bat for your daughter.
178
00:13:41,373 --> 00:13:42,514
You might need that.
179
00:13:42,529 --> 00:13:43,652
I keep seeing the rope bat a lot.
180
00:13:43,652 --> 00:13:46,007
The rope bat is the one that I keep getting.
181
00:13:46,328 --> 00:13:51,142
You might need to buy a $500 bat to make yourself better as a coach.
182
00:13:51,142 --> 00:13:54,315
And if for some reason the coaching doesn't work, you can blame the bat.
183
00:13:57,018 --> 00:14:00,587
I have like $2,000 in bats in my garage for my two kids playing softball.
184
00:14:00,587 --> 00:14:03,759
my God, I might have to come shopping because we're looking for a new bat.
185
00:14:05,221 --> 00:14:06,842
All right.
186
00:14:07,303 --> 00:14:16,380
One of the things that I did read, that many doctors, at least in the American Medical Association, were talking about as concerning, is the way that these AI tools sort
187
00:14:16,380 --> 00:14:18,132
of gather all of this information.
188
00:14:18,132 --> 00:14:20,073
So we used to just be Dr.
189
00:14:20,073 --> 00:14:21,535
Google and read a bunch of links.
190
00:14:21,535 --> 00:14:22,997
So now AI does that for us.
191
00:14:22,997 --> 00:14:25,578
It reads all the links, combines all the information together.
192
00:14:25,578 --> 00:14:29,521
I loved this one quote because it talked about the stuff that gets cut out.
193
00:14:29,521 --> 00:14:32,984
of all that as it's sifting through and finding the most important information.
194
00:14:32,984 --> 00:14:41,922
And it says, even if the source is appropriate, when some of these tools are trying to
combine everything into its summary, it's often missing context clues, meaning it might
195
00:14:41,922 --> 00:14:43,413
forget a negative.
196
00:14:43,413 --> 00:14:46,456
This doctor says, it might forget to use the word not.
197
00:14:46,456 --> 00:14:49,518
So a great example is somebody asking what to do for a kidney stone.
198
00:14:49,518 --> 00:14:52,400
And Google AI told them to drink urine.
199
00:14:52,401 --> 00:14:57,635
The guidance was probably to drink lots of fluids and then assess your urine output, not to actually
200
00:14:57,635 --> 00:14:58,517
drink urine.
201
00:14:58,517 --> 00:15:04,567
So it's important to verify the information that the AI tools generate for you.
202
00:15:04,662 --> 00:15:10,526
Yeah, I mean, the first time drinking your own pee is probably not so bad.
203
00:15:11,127 --> 00:15:18,311
But, you know, if you keep recycling it, I've seen enough movies on the topic that it
could be problematic.
204
00:15:19,493 --> 00:15:20,333
Right.
205
00:15:20,333 --> 00:15:21,552
Pee on you?
206
00:15:25,796 --> 00:15:27,190
Yeah, it's a tough one.
207
00:15:27,190 --> 00:15:31,439
We um want to rely on these tools, right?
208
00:15:34,694 --> 00:15:37,774
We also don't want to be solely relying on these tools.
209
00:15:39,914 --> 00:15:41,934
It's a give and a take, it's a balance.
210
00:15:42,694 --> 00:15:44,394
honestly, it's like anything else.
211
00:15:44,394 --> 00:15:46,354
You're trying to find the 51 % median marker.
212
00:15:46,354 --> 00:15:51,374
You're trying to be better than at least half.
213
00:15:51,374 --> 00:15:55,954
It's just this aggressive mediocrity to try to actually get a win and a victory in some of
these spaces.
214
00:15:55,954 --> 00:15:57,474
We're trying to find these pieces.
215
00:15:57,534 --> 00:16:02,914
you bringing up it being a stopgap between you and being able to talk to your actual
therapist.
216
00:16:03,802 --> 00:16:04,863
I mean, that's a great idea.
217
00:16:04,863 --> 00:16:16,371
um The other piece about this is that the way that you look at these data sets, and if you
look at the information that's there, and you can start journaling and adding your journal
218
00:16:16,371 --> 00:16:22,134
to your ChatGPT: hey, here's my journal for the last year.
219
00:16:22,755 --> 00:16:25,978
Am I trending towards happiness or am I trending towards something else?
220
00:16:25,978 --> 00:16:28,619
What are you actually seeing when I go through this?
221
00:16:28,619 --> 00:16:29,531
How is this reading?
222
00:16:29,531 --> 00:16:32,678
Because really what you're talking about doing with journaling is creating a narrative.
223
00:16:32,678 --> 00:16:37,140
And it's the narrative in this story that you're telling yourself over a protracted period
of time.
224
00:16:37,240 --> 00:16:43,042
And ChatGPT actually does do a really, really good job of looking at the intent focus of actual words.
225
00:16:43,042 --> 00:16:52,346
And by tokenizing every statement that's inside of it, it can see if it looks like your intent is more or less positive or negative.
226
00:16:52,346 --> 00:16:54,247
And there's a lot of variation in that, right?
227
00:16:54,247 --> 00:17:02,150
Like you might sound more abrupt because you're doing more work and you don't have time to
journal as much, or you're more tired.
228
00:17:02,150 --> 00:17:10,470
Or a few days a week, you have a little bit of extra time when you're going to make things
30 % longer because you're going to spend more time clicking on those things.
229
00:17:10,470 --> 00:17:15,590
Or you've discovered something new, like voice to text.
230
00:17:15,590 --> 00:17:17,650
And you're going to talk these things in.
231
00:17:18,389 --> 00:17:23,630
These are all marker posts that require human analysis in the regular world to go through.
232
00:17:23,730 --> 00:17:30,718
That ChatGPT is enabling individuals to do this with their own data sets as opposed to
data experts, which is how we do it today.
233
00:17:30,832 --> 00:17:39,010
So the idea of having these data scientists be able to go through and pull through massive
amounts of information and then provide a summary and information and next steps and
234
00:17:39,010 --> 00:17:40,991
actions and really analysis.
235
00:17:41,132 --> 00:17:46,036
ChatGPT gives that to you as an individual, which is fucking rad.
236
00:17:46,056 --> 00:17:54,234
It's also terrifying because there are some people that shouldn't have this kind of
information analysis because they're not going to be critical in the way that they
237
00:17:54,234 --> 00:17:54,803
actually use it.
238
00:17:54,803 --> 00:17:56,285
And they're not going to be their own.
239
00:17:56,285 --> 00:17:57,246
oh
240
00:17:57,798 --> 00:18:01,920
they're not going to be their own safety net, and they're going to wind up becoming their own cautionary tale.
241
00:18:02,020 --> 00:18:07,262
And there seems to be plenty of that that has occurred with other technology trends.
242
00:18:07,342 --> 00:18:25,668
And if you look at the way that we're starting to uh kind of try to absorb and understand
mental health uh as a discernible concept, not just within
243
00:18:25,668 --> 00:18:33,932
the way that we interact with people in the real world, but the way that we interact with
people online, and you start looking at how these things kind of mix together, the
244
00:18:33,932 --> 00:18:43,896
dividing line between the actual value and benefit of the data that you're producing and
putting online versus the data that you're actually producing in real space, that curve is
245
00:18:43,896 --> 00:18:47,268
going to, or that ratio is going to get wider.
246
00:18:47,268 --> 00:18:52,610
Like your online data and your online presence and your online who you are, that's going
to become.
247
00:18:52,784 --> 00:18:55,726
who all these virtual tools that you interact with think you are.
248
00:18:55,726 --> 00:19:01,300
And when you approach things over time, it's gonna start using that as the bias lens of
how it actually views you.
249
00:19:01,300 --> 00:19:07,534
And over time, this shit's gonna make it into your doctor's office, and it's gonna make it
into your mental health clinic office.
250
00:19:07,534 --> 00:19:13,898
Like, it's going to happen, and it's gonna make it to your insurance company, and they're
gonna start using it to do what insurance companies do to say, you have a pre-existing
251
00:19:13,898 --> 00:19:14,609
condition.
252
00:19:14,609 --> 00:19:22,395
Like, you think you suck at softball, therefore you would have obviously eaten three extra donuts every day.
253
00:19:22,395 --> 00:19:23,428
That's cool.
254
00:19:23,795 --> 00:19:24,547
Yes.
255
00:19:24,547 --> 00:19:34,049
Yeah, because they're gonna make up bullshit to make it justifiable. But this is just the reality of where we live, and we can either embrace it or we can try to shun it. But
256
00:19:34,049 --> 00:19:44,012
no matter what, you're gonna be affected by it one way or the other, whether it's passively or actively. And I, for one, am gonna actively involve myself with it and fuck around with it,
257
00:19:44,012 --> 00:19:52,090
because I think at the end of the day it's gonna at least have some value, and that value might be a positive for me, or at least an understanding of what the negatives look like,
258
00:19:52,090 --> 00:19:56,576
so I can know what to avoid and how to make those pitfalls things that I'm not gonna draw myself into.
259
00:19:57,231 --> 00:20:02,723
Yeah, I'm talking myself out of my own argument that I was about to make.
260
00:20:02,723 --> 00:20:11,525
But the idea that companies are farming this information and using it for evil, that's
nothing new.
261
00:20:11,785 --> 00:20:19,008
We're just perhaps giving a little more detail and a little more information than we used
to, where we used to ask a question and then search, click the first 10 links and read
262
00:20:19,008 --> 00:20:19,528
them.
263
00:20:19,528 --> 00:20:23,709
Cool, now everyone knows what the 10 links are that I clicked, but they don't necessarily
know.
264
00:20:23,919 --> 00:20:27,532
what I'm doing with that information as much as maybe they are now.
265
00:20:27,532 --> 00:20:37,304
ah Your point though about the person who maybe can't get out of their own way and is
going to screw themselves over in this process, this isn't going to help their case.
266
00:20:37,304 --> 00:20:46,845
I was reading an article today that was uh this Israeli study that was done to compare how
AI diagnoses patients versus real actual doctors.
267
00:20:46,926 --> 00:20:51,921
And this study was done, and then actual doctors reviewed the results of both side by
side.
268
00:20:51,921 --> 00:21:01,701
And they found that the AI recommendations were rated as optimal in 77% of cases, versus 67% when it was an actual doctor diagnosing these people.
269
00:21:01,701 --> 00:21:10,361
So literally, the patient would come into the office, go through this AI doctor process,
and then from there, go and talk to the doctor, share all that information again.
270
00:21:10,361 --> 00:21:13,841
Same experience, one with a robot, one with a person.
271
00:21:13,921 --> 00:21:16,521
The robot's got it right 77 % of the time.
272
00:21:16,521 --> 00:21:18,786
Doctor's got it right 67 % of the time.
273
00:21:18,786 --> 00:21:20,291
I bet a robot did that analysis.
274
00:21:20,291 --> 00:21:21,101
I bet it did.
275
00:21:21,101 --> 00:21:22,963
I absolutely bet it did.
276
00:21:23,083 --> 00:21:24,204
It totally cheated.
277
00:21:24,204 --> 00:21:35,163
ah But that's crazy to me that an independent doctor would look at both and find that,
like, holy shit, it knew enough in those cases.
278
00:21:35,163 --> 00:21:37,795
And I mean, I think it's also treating very common stuff.
279
00:21:37,795 --> 00:21:42,108
It wasn't looking at user-specific medical records, stuff like that.
280
00:21:42,108 --> 00:21:46,571
But that's pretty terrifying and awesome.
281
00:21:47,322 --> 00:21:55,547
Yeah, and the more data you give it, the higher the likelihood that it's going to be accurate and correct.
282
00:21:55,547 --> 00:21:59,429
I mean, it can actually look at visual data now.
283
00:21:59,429 --> 00:22:07,434
There's, uh, Philips and all these other companies out there using AI and ML to go through and do image diagnostics.
284
00:22:07,434 --> 00:22:10,656
They can actually try to track things down faster, which makes perfect sense, right?
285
00:22:10,656 --> 00:22:12,637
Like, does this thing look like a tumor?
286
00:22:12,637 --> 00:22:15,152
Yes. Does this thing look like it is or is not
287
00:22:15,152 --> 00:22:17,043
cancerous? Is it moving in these directions?
288
00:22:17,043 --> 00:22:18,984
How do these things change over time?
289
00:22:19,064 --> 00:22:20,475
It can do a lot of those pieces.
290
00:22:20,475 --> 00:22:29,650
And because it actually has better memory recall than the human brain, and it's more
accurate, it will give you better contrasting ratios.
291
00:22:29,650 --> 00:22:32,332
Because human memory fucking sucks.
292
00:22:32,332 --> 00:22:33,132
And we know that.
293
00:22:33,132 --> 00:22:38,125
And it's manipulable, malleable, based upon different situations, whereas AI memory is pretty atomic.
294
00:22:38,125 --> 00:22:40,236
Like it stays intact.
295
00:22:40,356 --> 00:22:44,396
And it can recall those things with a much greater level of accuracy because it can
actually
296
00:22:44,396 --> 00:22:49,907
open up the image and recheck again, where humans, they might be like, I remember this.
297
00:22:49,907 --> 00:22:52,389
I don't need to reopen that image again and just be totally wrong.
298
00:22:52,389 --> 00:23:02,203
So, I mean, and there's also the accessibility of it, like their ability, the AI's ability, to access information and pull it in. They don't have to open it up and run it through a visual
299
00:23:02,203 --> 00:23:03,174
cortex.
300
00:23:03,174 --> 00:23:06,685
They're not, you know, worried that the doctor changed their prescriptions.
301
00:23:06,685 --> 00:23:07,896
So they see things differently.
302
00:23:07,896 --> 00:23:14,558
Like there's all kinds of reasons these things are better, but it doesn't take away, um,
303
00:23:16,038 --> 00:23:20,018
the value of having human review of this data.
304
00:23:20,258 --> 00:23:28,718
So I think what we're going to find over time is that we're going to use this to become a
much better sorting mechanism, a much better filtering mechanism to try to get through
305
00:23:28,718 --> 00:23:29,398
faster.
306
00:23:29,398 --> 00:23:42,374
And what we're going to wind up with is homogenization of information and how things are
actually being represented and understood, which means, you know, Kevin, Doug, Bob, James.
307
00:23:42,374 --> 00:23:53,574
are going to be put into these buckets of things, and Kevin, Bob and Doug, you know, all have a cold, and James has what looks like the avian flu. But something new has come out and
308
00:23:53,574 --> 00:23:57,534
nobody's going to track that because they don't have a bucket for it or they're going to
have a bucket called other.
309
00:23:57,654 --> 00:24:04,734
And they're going to try to keep these things as separated as possible, but it's always
going to be about intent focus and how much we can get out of it.
310
00:24:04,734 --> 00:24:09,234
And finding novelty, that's one thing that AI is actually shitty at.
311
00:24:09,234 --> 00:24:09,982
Like
312
00:24:10,136 --> 00:24:13,017
It's good at finding things when they don't fit into a pattern.
313
00:24:13,017 --> 00:24:21,669
It's not good at defining this thing with a new pattern once it breaks out of those pieces
because it's not making shit up in the same way.
314
00:24:21,669 --> 00:24:29,311
And what we've discovered is that when we tell it, all right, go ahead, like turn the p value and the i value up really, really high...
315
00:24:29,311 --> 00:24:31,252
And it starts hallucinating, which is what it does.
316
00:24:31,252 --> 00:24:33,373
And it gets high and starts putting out responses.
317
00:24:33,373 --> 00:24:35,753
It's wildly inaccurate.
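For anyone curious what "turning those values up" looks like in practice, here's a minimal Python sketch against a typical LLM API; the temperature and top_p parameter names come from OpenAI's chat API, while the model, prompt, and values are purely illustrative and not anything the hosts actually ran.

    # Hypothetical sketch: the same question asked with tight vs. loose sampling.
    # Requires the openai package and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str, temperature: float, top_p: float) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,  # higher = more random token choices
            top_p=top_p,              # higher = sample from a wider slice of tokens
        )
        return response.choices[0].message.content or ""

    question = "What should someone with a kidney stone drink?"
    print(ask(question, temperature=0.2, top_p=0.9))  # conservative, stays on script
    print(ask(question, temperature=1.9, top_p=1.0))  # loose settings invite made-up detail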
318
00:24:35,753 --> 00:24:38,654
And like, it's not like there's a
319
00:24:39,483 --> 00:24:41,934
Well, it doesn't have imagination.
320
00:24:42,656 --> 00:24:43,917
Humans have imagination.
321
00:24:43,917 --> 00:24:45,198
It does not have imagination.
322
00:24:45,198 --> 00:24:46,879
It's connecting dots.
323
00:24:46,879 --> 00:24:58,959
It's taking existing knowledge and connecting dots where we're able to somehow see things
that aren't there and identify that, oh, this is something new and start to explain what
324
00:24:58,959 --> 00:24:59,494
it is.
325
00:24:59,494 --> 00:25:05,214
Well, and I would argue that it connects dots.
326
00:25:05,214 --> 00:25:16,794
And when you tell it to hallucinate and you tell it to be creative, it doesn't have the
ability to go, oh, I'm going to try a little bit to be creative and I'm going to keep most
327
00:25:16,794 --> 00:25:17,874
of this stuff in check.
328
00:25:17,874 --> 00:25:21,994
It just goes, OK, it just starts throwing shit at it.
329
00:25:21,994 --> 00:25:26,914
Like it throws on a wig and a clown nose and some big floppy shoes and, like, now I guess we're going to do this.
330
00:25:26,914 --> 00:25:29,342
And like runs out to go try to make this thing happen.
331
00:25:29,635 --> 00:25:30,734
You
332
00:25:30,734 --> 00:25:31,614
That's kind of the difference.
333
00:25:31,614 --> 00:25:40,014
You can tell a human, hey, 98 % of what I do is this pattern, and it's solid, and these
things don't go through.
334
00:25:40,014 --> 00:25:50,794
But this 2 % that I'm going to experiment with and play with and try to use some logic and
reasoning around to try to see if this thing's legit, AI doesn't really have that kind of
335
00:25:50,794 --> 00:25:51,674
nuance just yet.
336
00:25:51,674 --> 00:25:53,174
But it's getting there.
337
00:25:53,174 --> 00:25:54,774
And eventually, it will.
338
00:25:55,014 --> 00:25:58,714
Now, whether it's going to be completely and totally replace the human experience,
339
00:25:59,532 --> 00:26:12,036
No, because it's not bound by the same physical inputs and outputs that we are. Like, it doesn't have a visual cortex that may or may not make me feel like I'm warm because I see
340
00:26:12,036 --> 00:26:15,287
the sun, even though the temperature hasn't changed and no sunlight's hitting me.
341
00:26:15,287 --> 00:26:18,928
But I suddenly have a psychosomatic approach where I'm feeling different.
342
00:26:18,928 --> 00:26:20,648
Like it doesn't do those things.
343
00:26:20,648 --> 00:26:22,169
It doesn't have those same kind of interactions.
344
00:26:22,169 --> 00:26:27,290
It doesn't have the same type of neural network layout. Like, we're just, we're just different.
00:26:27,290 --> 00:26:31,774
But that doesn't mean it's not a really, really good tool to optimize things and try to make things better.
346
00:26:31,899 --> 00:26:40,795
Yeah, so I think the lesson here is uh find the balance between human interaction and some
of this and how it can all work together.
347
00:26:40,795 --> 00:26:51,281
Don't completely replace the human interaction with your doctor, your therapist, your
friend for God's sake, ah but it's certainly a valuable tool.
348
00:26:51,281 --> 00:26:57,605
And one that I know I'm gonna use. You have to go? I have an appointment with my online therapist, so I should go as well.
349
00:26:57,825 --> 00:26:59,206
So a good conversation.
350
00:26:59,206 --> 00:27:00,247
Thanks so much for listening.
351
00:27:00,247 --> 00:27:02,870
We'll be back in about a week at thefitmess.com.
352
00:27:02,870 --> 00:27:06,753
If you found any value in this conversation at all, please do share it with somebody who
may benefit from it.
353
00:27:06,753 --> 00:27:08,675
The links to do so are at thefitmess.com.
354
00:27:08,675 --> 00:27:10,074
We'll see you in about a week.
355
00:27:10,074 --> 00:27:10,629
Thanks, folks.