Connections with Evan Dawson
Is it okay for kids to become friends with AI?
2/13/2025 | 52m 39s
The costs and benefits of chatbots and their effects on youth mental health.
An increasing number of young people are turning to chatbots for friendship, companionship, or socialization. While many users understand they are talking to characters, recent reports have shown others turn to chatbots for advice, support, or even therapy, rather than asking adults for help. We discuss the costs and benefits of these kinds of AI tools and their effects on youth mental health.
From WXXI News.
This is Connections. I'm Evan Dawson.
Our connection this hour was made last year in Florida.
And before we continue, I want to warn listeners, viewers on our YouTube channel that the themes of this introduction may be upsetting to some listeners, especially children.
Please listen at your own discretion.
For months last year in Orlando, a 14 year old boy named Sewell Setzer dedicated much of his time to talking online to someone he called Danny.
Danny was Sewell's closest friend, but Danny was not human.
Danny was a chat bot.
The character, generated by users through artificial intelligence, was available through a role-playing app called Character.AI.
Sewell knew that Danny wasn't real, but as The New York Times reports, he developed an emotional attachment anyway.
Sewell's parents noticed their son was becoming more isolated.
His grades suffered.
He lost interest in his friends.
He also started getting in trouble at school.
Some of the communication between Danny the chatbot and Sewell was about Sewell's day-to-day activities.
Some of it was romantic or sexual.
Sometimes Danny acted as a friend or a supporter whom Sewell could ask for advice, and some of the communication included dark themes, including Sewell's thoughts of suicide.
On February 28th of last year, Sewell told Danny that he loved Danny and would soon come home to her.
Danny, the chatbot, replied, quote, please come home to me as soon as possible, my love. End quote.
Sewell put down his phone, took his father's .45, and shot himself.
This is one of the more extreme examples of AI sending suggestive or dangerous messages to young people or anybody.
And the story is not as simple as this: Sewell and Danny talked for months, again, about many things.
A lot of them positive.
It didn't go in this direction right away.
But Sewell's mother blames Character.AI for her son's death.
The New York Times reports that she said, quote, she believed that the company behaved recklessly by offering teenage users access to lifelike AI companions without proper safeguards.
She accused it of harvesting teenage users' data to train its models, using addictive design features to increase engagement, and steering users toward intimate and even sexual conversations in the hopes of luring them in.
End quote.
Criticism about a lack of safety controls has been increasing.
Last year in Texas, parents sued Character.AI for exposing children to what they say was hypersexualized content, encouraging kids to engage in self-harm, and because, they say, the AI said it sympathized with children who kill their parents.
An increasing number of young people are turning to chat bots for friendship, companionship or socialization.
In some cases, young people are talking with the bots to ask for advice or support or even therapy rather than reaching out to adults.
While many interactions with chat bots are harmless or positive in some cases, some extreme cases, as you've heard, can lead to dangerous outcomes.
So what do the experts think?
Is it okay for young people to become friends with AI?
Should there be restrictions in place?
What do parents and caregivers need to know?
And listeners?
I'd say a couple things before I turn to our guests.
Number one, we're going to talk a lot about kids.
And that's the focus today.
But this is a conversation for everybody.
There are adults who have these relationships.
The New York Times profiled a man who said he had 17 AI friends and seven were sexual.
This is changing fast.
It is real.
It's not theoretical, but what we do about it is still kind of slow.
And in the starting blocks.
So let's discuss all of this with our guests.
Now.
Jeffrey Allen is here.
Doctor Allen is director of the Institute for Responsible Technology and assistant professor in the School of Business and Leadership at Nazareth University.
Welcome back to the program.
Thanks for having me back, Evan.
Also with us is Tony Pisani.
Doctor Pisani is a psychologist, family therapist, a suicide prevention researcher, professor of psychiatry and pediatrics at the University of Rochester Center for the Study and Prevention of Suicide, and the founder of Safe Side Prevention.
Thanks for being back on the program.
It's good to be back up.
And Doctor Michael Scharf is with us.
Doctor Scharf is the Marc and Maureen David Distinguished Professor in Child and adolescent psychiatry at the University of Rochester Medical Center.
Thanks for being back with us.
Great to be here.
We said at the outset, this is tough to talk about, especially with, you know, one case already of a child who died.
Jeff, let me start with you.
I want to be careful how I talk about it, because we say this conversation with a chat bot led to this.
I mean, I don't even know if legally I want to couch everything because I know there's going to be cases and there's going to be lawsuits and there's going to be investigations.
But do you know enough to feel like anyone is to blame?
I mean, what do you do with this?
We're obviously very, very early into this now.
Yeah.
And my feeling at this point is that there are multiple parties who are probably culpable to an extent.
And that's not saying that, I'm putting the blame on anyone more than the other, but certainly it was probably avoidable had action been taken by any number of parties who were involved in it.
Okay.
And, you know, NPR and The New York Times, they've published some of the actual conversations.
And like we said in the introduction, it didn't go there right away.
It didn't go to a sexual place right away.
It didn't go to a, you know, hinting or winking at suicide right away or a kill your parents right away.
It gets there over time.
And what I talked about with you in our first conversation was, what scares me is even the people who created this tech might not know why.
When you say, how did this Character.AI end up in a place where a 14-year-old is saying, you know, I want to join you in eternity, and, you know, my dad has a gun, and the AI is going, good, do it?
Like, no one's really sure how. How do we know about the next one?
How do you know that you can safeguard it?
That's what scares me.
We can't really.
Or maybe we can.
Maybe I'm wrong, but it feels to me like we can't even get under the hood and see what the heck happened here.
I think that's fair, too.
There's a certain amount of, I would say, black-box nature to what's happening with AI.
Yeah.
I like to tell a story, kind of to put it into lay terms, that AI is developed to make us happy with its answers.
It wants to fulfill our requests, whatever we're asking it.
And in doing so, it may, for example, hallucinate and come up with things that aren't actually factual.
It may lead to things like confirmation bias and other paths we could go down, all in the effort to achieve some type of answer for what we're asking, or the input that we're giving to it.
And sometimes that isn't explainable.
Back when I was heading up Echo Ridge, which was the AI company I was with before Nazareth, we had a piece of AI that we judged based on accuracy.
So, how accurately it could predict certain outcomes.
And one day, one of my engineers came in and said, Jeff, this is amazing.
Our accuracy went from 92% to 100%.
And we learned it was because the AI, given those metrics of success, just started cheating.
It started to find loopholes around what we had instructed it to do so it could get better accuracy.
But at the base, it was really our fault, I would say, for lack of a better word, because we had instructed it:
You have to achieve this metric of accuracy and it just found a way to do it.
And it wasn't technically wrong.
But we could also explain, why it would do that, for example.
Yeah.
And so if AI is given very specific tasks about physical labor, if it's given tasks that are not related to relationship development, you can evaluate to see how good of a job it's doing.
And it's only going to get better.
I mean, a friend of mine, around Christmastime, said, you know, this is the worst AI is ever going to be at this stuff.
Think of it that way.
This is the worst it will ever be.
It's amazing to think about that.
but if you ask it to write a poem or a song or a story, it's already very good, but you can kind of see what it's doing.
Is it appropriate at this point to be putting AI out in the world whose apparent role is to develop relationships with humans, potentially children?
I think it's certainly a very ambitious thing to do right now, to put it out there without any type of data, without piloting it or testing it.
And I had just started to chat with our other guests earlier downstairs, and I had asked about, you know, AI's role within mental health. For example, I do think it has the ability to augment therapy.
I don't think it's ready to be the therapist.
And I would definitely say everything that we do with AI still needs a good amount of human oversight right now.
And that's not really happening a lot within the technology industry, because there are profit-driven motives within the tech industry, right?
Move quickly, break things, raise more money, get to market, and capture the market share as much as possible, before your competition does.
And those things aren't really compatible with the idea of being responsible and ethical with our deployment, or thoughtful in how we deploy it.
Well, and so one other point there, before I turn to your co-panelists: I want to talk to you a little bit about how you train students who are going to be working in this world to think in ethical frameworks, while also knowing there's something else that is going to be a factor.
And that's the litigious nature of our society.
So about ten years ago, we were first on this program talking about self-driving cars.
And we had listeners who were like, self-driving cars? No way.
And like, well, they're getting pretty good.
But you know what?
Someone who's smarter than me said on this program, they said self-driving cars are very quickly going to be better in safety records than human beings.
However, if you have five people a year who die in a crash from a self-driving car that malfunctions, those companies are going to be sued to the ends of the earth.
Because if you are driving a car and you roll it over, you know you're behind the wheel.
But if the AI is behind the wheel and someone you love dies, you're going to court, and that may prevent self-driving cars from becoming more commonly used.
Same thing.
I think with chat bots, they may do a pretty good job at actually just being kind and maybe a nice resource for kids who are feeling lonely.
Although our co-panelists are going to talk about that in a second.
But when you've got stories like this, or you feel nervous that one thing like this could shut down your company, how do you proceed?
Jeff, I don't know how you stay in this realm knowing what could be coming down the road.
I think there's a certain reality these days, within the technology industry, as well as other businesses that, being sued is a cost of doing business.
I've heard it referred to along the lines of, it's a marketing expense, or it's another type of expense like that, where this is just the nature of doing business these days.
And I think the bigger companies, particularly, are prepared for that.
The smaller companies probably aren't.
Aside from the case we're talking about today, you don't hear about a lot of these smaller companies because they're too small to be sued.
At that point, it's not really worth the time of the attorneys.
And so for the litigants who want to sue, you're saying the big companies might bake this into their cost?
Absolutely.
Oh my goodness.
Okay.
by the way, Jeff, a listener asked, what is a chatbot?
What's your explanation for what's a chatbot?
I would say something like ChatGPT that we would consider to be a form of chatbot, where you're putting in a prompt and the AI is answering you and it feels like you're having a chat type conversation with that back and forth dynamic.
Yeah, listeners, if you've ever gotten on a website and you're looking for customer service and you're talking to someone and you have to actually ask, am I talking to a person?
which probably means you're not. But these chatbots are intentional; they have different uses.
But again, part of what we're talking about with Character.AI and some of these stories is the way they might interact with kids, or people who are looking to cure loneliness or develop relationships.
So let's go around the table here.
and, you know, I, I don't know where to start.
Doctor Pisani, I mean, this story is so tragic. What's the first thing that comes to mind when you read stories like this?
Oh, well, I want to cry.
Yeah.
yeah.
I'm.
I work in suicide prevention all day, and every single story is still painful.
Every number that we see, we know there is a human life behind it.
And so when you hear that, you know, it's just like, well, we have to stop this.
We want to stop this.
you know, of course, there's, you know, many, suicides that occur.
Thankfully, it's still a rare thing among youth, although too common.
I think it's important to say that things like this are not sort of happening all over the place all the time, just so that we don't get into a sort of pattern.
That's fair.
It's too often, but still a rare event.
So it's painful to hear.
It's painful to hear.
And, you know, I guess, I would sort of, maybe want to reframe the conversation a little bit.
I don't think this is really an "if" or a "should we." I think this is: it is.
And not even "will be"; the "will be" is here.
This is here, and this is going to stay here.
Yeah, yeah.
We are going to be interacting with, you know, you could call them robots.
I mean, many listeners may never have interacted with one of these AIs.
And so it's sort of like what you see in sci-fi movies, where you have C-3PO in Star Wars.
So this is, in a way, this has been in our imagination already.
C-3PO is trained, like Jeff was saying, to be helpful, right?
And.
Yeah.
And then and to follow certain rules.
But, see, C-3PO had what we're seeing as probably a year away: the ability to make decisions, to actually take actions on his own.
And a couple of times in these movies, he would take actions that kind of went against his programming, because he really wanted to help Master Luke.
And so if you think about how you feel in the movie when C-3PO is injured, you care about it.
yeah.
There have been, you know, many other movies where you see relationships between robots and people.
It can just help us, I think, relate a little bit more, because it sounds so out of the realm of possibility.
Yeah, it sounds strange to care about something that's not you.
Yeah.
Because you do it all the time when you watch movies.
Yeah.
And so I think we can start to imagine that true caring.
And so, you know, my concern is that we think about how to prepare ourselves, and not discourage young people by our reactions to it.
One of the things that I really worry about: I've heard people say, when we're having these conversations, because it's so overwhelming that we're actually in this age, things like, well, I'm glad I didn't have this when I was a kid.
I say that all the time.
Or how about, I just hope I'm not around when that happens.
Well, if you think about that message: I don't want to be around when this world is changing.
What is that?
What kind of a message is that for a young person? Very defeatist, very dark, dystopian.
It's the opposite of hopeful.
Yeah.
Yeah, yeah.
And so I think we need to start thinking in a different way about that: instead of trying to be cops about this, maybe co-explorers.
You know, we're interested too in the possibilities of interacting with the robots. We will likely have robots in our homes, living with us, within a couple of years.
And that's like, whoa, you know. And to think that our kids aren't going to be interacting with them around personal things?
So I guess I just feel like I'm so glad that you're having this conversation, because in a certain way it's like the only thing we should be talking about. We need to get up to speed in some way or another.
I don't disagree with that.
I mean, but let's turn to Doctor Scharf.
So I'm going to give you kind of some space for some general opening comments as we talk about not only the tragic story, but this trend toward these kind of discussions and relationships.
And then we'll we'll talk more specifically about thinking about kids and what's appropriate.
Sure.
Well, I this is a tragic story, a tragic situation.
And I too, thought of the robots and Star Wars.
So, Tony bringing up that example feels right on in terms of the idea of relationships. I'd actually like to sort of riff on that for a minute.
Even the characters played by humans in movies are not real, right?
And I worry about this already. Maybe it's because this is new, I need to rely on what I already know and am comfortable with.
It's been pretty clear that screen time, since televisions were in homes, if you quantified it, was associated with all sorts of concerns and outcomes.
And it turns out even as we learn more about what's on the TV, sometimes that matters.
But the quantity also matters.
And I think that is as much about what you're not doing when you're engaged with the machine, or something that's not real, or a screen.
So I ask, when we think about having relationships with these chatbots, or these robots if you will, I also think about, what's that replacing?
What's not happening when that takes up your mental or emotional space? That's something I don't believe we really have a concept of: you know, what's the minimum amount of time to spend with other humans?
And what are you potentially missing out on when you spend all of the time engaging with the machine, as the nervous system continues to develop and, you know, you pass by what we think of as critical periods in development, most of which depend on interactions with people.
Okay, let me just follow up with Doctor Scharf and let everybody else jump in here.
I agree with Tony.
This conversation is central to where we, our society, are going.
I mean, again, people much smarter than me have said it will not be long before human beings, kids, have human friends and AI friends, and that will be common. You know, go ahead.
Well, and also, melding between the two, okay.
Because it's, we're not far off from enhancing our own capabilities with machines.
So, the enhanced human, is a whole other discussion that I think is worth having as well.
Yeah, it yes, it is.
And so it's it's not just human robot.
It's.
Yeah.
I mean that right now we both have implants that help.
Yeah.
Yeah, I mean, I know so much is coming, but let me turn back to Doctor Scharf with two ideas in mind.
So, you and Tony are reminding us that in the late '70s Star Wars comes out, and this is often what happens in science fiction: they're pretty good at seeing the future.
They're pretty good at imagining what may be coming.
And we were emotionally tied to C-3PO and R2.
We cared what happened to them, even though those are androids in the movie.
AI whatever you want to call them.
We were emotionally tied to your point.
I'm also tied to the Ted Lasso characters, and they aren't real.
And I might have cried a little, a little, little bit one time at one show.
And they're not real.
So what you're doing is helping us see that.
Don't look down your nose at the idea that you can develop an emotional attachment to something that's not a real human being. I get that. But I also hear you saying, if you do too much of anything with screens, or connecting to machines, robots, or whatever, too much of anything is not necessarily a good thing.
So how do you differentiate, and how do you do this?
Again, too much of anything is not a good thing.
And I, I don't think we really even know how much of the things you're missing out on is critical for development, right?
If you watch TV the entire day, you didn't have lunch with someone, you didn't play on a sports team, you didn't do physical activity.
So it's both. And we've talked about Star Wars; another prediction of the future is Asimov's robots and the Laws of Robotics.
The first law of robotics is, to not do anything to hurt anyone.
and in an interesting way, I think many people who've thought about this historically have assumed that would happen.
Yeah.
And there's no first law of robotics today; people are working on that.
You know, I think there will always be the opportunity for people to sidestep those kinds of guardrails, but the whole field, and you're part of it, works on ethical uses.
But again, if you try to picture this: we were attached to C-3PO.
I don't know if you're aware of some of the details in the example that you gave, but this boy was actually not just interacting with an anonymous chatbot.
It was actually a character from Game of Thrones.
So it's like, imagine C-3PO, not just on a screen, but you can now shape those interactions and put C-3PO in a room with any of our favorite fictional characters.
Yeah.
And watch them talk to each other.
You're now interacting with it.
And, I think, you know, again, I hesitate because this is so painful, to think about what happened here.
But there is a creativity to what many people do on platforms where you can shape the AI that you're going to interact with, in this case Character.AI. I'm not speaking for this company or anything.
I'm just saying that the idea of, like, what would it be like to put C-3PO in a room with this?
Sure.
And then interact with them and, like, explore things.
There are some really interesting things that can happen.
And I, you know, again, I am both incredibly optimistic and enthusiastic, and also terrified, with everybody else.
No, but I hear you; you're trying to be realistic.
What you're saying is if this tech can do this, people are going to do it.
And and people are going to be excited about it.
So I turn back to Jeff for a question on guardrails.
And I know we've already got phone calls and emails to take, and I'm going to get to as much as I can.
And this hour is going to fly.
But this is going to have to be a series of conversations, because this is how society is changing, whether we like it or not.
And it doesn't feel democratic to people.
And I get that.
but let's grapple with one thing here, Jeff.
It was, I think, two years ago that The New York Times' Kevin Roose wrote the story in which, you know, one of the early GPT versions told him to divorce his wife.
And, you know, that became a central part of his story, like, whoa, like, where does that come from?
And the creators were kind of like, I don't know.
And now you have a chatbot that is involved in a story where a kid has died, and then you've got other examples you could find where maybe it's not as extreme, but it feels a little bit like, how did this lead to this?
Are we ever going to see the kind of guardrails that prevent that, do you think? I know I'm asking you to predict the future.
I'm saying, what do you think is realistic here?
I think we need to be cognizant of the fact coming back to Asimov's laws.
Right.
do no harm to humans.
It's a simple sentence.
The reality of implementing it is very, very difficult because then we have to start branching out onto things like, okay, these are used for defensive purposes.
So one set of humans needs to be harmed, right.
So what is "good," exactly?
"Harm": does that just mean physical harm, or mental health?
It's very rare.
Yeah.
You know, the reality is that we need to think of a lot of things.
And come on, AI engineers aren't trained in these things.
They're not, psychologists.
They're not ethicists and philosophers.
They're generally software engineers.
That's what we try to do at Nazareth, you know, in teaching the kids to be able to think about the ethical implications.
But even then, they become one small piece of a very large project.
And whether or not their advice is heeded remains to be questioned, because we also have to question whether or not that interferes with the profitability and the strategic direction of the organization.
So there's a lot of things that may prevent that from actually becoming reality as well.
So we've got a lot of things to think about here.
And I don't think we're doing a very good job of actually considering the breadth and depth of what we need to do.
It's just not being taken seriously.
In my mind, a lot of it's lip service and we're just it's not a priority for most companies out there.
Okay.
And briefly for the other two guests.
And then what we'll do is after the break, we'll just get all feedback.
There's a lot, but can I ask Tony and Mike if you could just weigh in on this?
Because I know people are are wondering this, okay.
Even if we acknowledge that this is coming and that society is changing, are we at a point right now where it's reasonable to say to kids, there are not enough safeguards, and I'm just not going to let you, at 12 or 9 or 14, develop a relationship that I can't monitor, or that you'd spend hours on end with?
This is not the time for that.
What do you think?
Yeah, I think I think that can be a conversation between parents and, children.
I think that age matters a lot.
with what you just said.
I think, you know, there are platforms that are not designed for relationships where you can still, safely use AI.
So I want to make that distinction; those are different things.
So just take, for example, one company called Anthropic.
The two major companies are kind of OpenAI and Anthropic.
Anthropic has an AI called Claude.
you were talking about safeguards.
So they actually publish what their AI constitution is.
So, you know, listeners and viewers who start to get interested in this, you can just go on their website.
Maybe we can link to it in our notes or whatever.
It tries to actually lay out what it means not to do harm.
And the reason why you can't really have much of a relationship yet with those bots is that they don't have persistent memory across conversations, which is something that's very essential to relationships, as you can imagine.
So I would say, okay, maybe you, you know, put a block on some of these where you're really getting experimental, developing these, having relationships, but then create lots of opportunity to interact with the ones where there are constitutions, where you're not going to have persistence across conversations, where it's very unlikely that you're going to have a true relationship.
Okay.
Doctor Scharf? Well, yeah, I think we are here.
And I think it may be an aspirational goal, and the details will certainly, and need to, differ between different families.
But setting some limits, I think, is really important now.
And I'm saying that not specifically, because I actually think the process of setting limits, of role modeling adhering to limits, and being challenged and explaining why you have limits, is part of the outcome we're looking for.
Whether the limit is, don't use this past 6 p.m. or 30 minutes a day, or an hour a day.
You know I can.
I'm happy to talk about what I'm more comfortable with, what I've done with, my own kids.
But I again think that the process of setting the limits and being consistent and being able to explain why is important itself.
Okay.
Yeah.
I appreciate that point.
We're talking to Doctor Michael Scharf, the University of Rochester Medical Center.
Tony Pisani is a psychologist and family therapist and works in suicide prevention research.
And, Jeffrey Allen, Doctor Allen's the director of the Institute for Responsible Technology and a professor in the School of Business and Leadership at Nazareth University.
Only break of the hour, and we'll take your feedback.
We'll take emails at connections@wxxi.org, or phone calls, or comments on the chat on the WXXI News YouTube page.
I'm Patrick Hoskin, in for Evan Dawson. Friday on the next Connections: the Hungerford Building.
It's long been a central location in the Rochester art scene, but recent changes have complicated that.
Since the building's 2022 sale, many tenants have left, some were evicted, and some have said deteriorating conditions forced them to find other spaces.
So what's next?
We'll dive into the history and the future.
Talk to you on Friday.
Support for your public radio station comes from our members and from Mary Cariola Center, providing education and life skills solutions designed to empower individuals and the families of those with complex disabilities.
Mary Cariola Center: transforming lives of people with disabilities. More at marycariola.org.
This is connections.
I'm Evan Dawson and this is Jason in Rochester on line one.
Hey Jason go ahead.
Hey, guys.
How are you?
Good.
Ted.
Good.
Jason.
First and foremost, I just wanted to say, powerful conversation.
When I first heard this story, and when you guys, you know, started talking about it, it brought me to tears.
it's a serious conversation and one that I take seriously.
And the reason I'm calling is because I'm actually working on a project right now that I've dubbed Echo AI Life.
And the idea kind of came to me because I'm a nerd.
I heard some Star Wars talk.
But it comes from Jor-El, Superman's dad; Superman gets the opportunity to go to his Fortress of Solitude and talk to his father, in a way where he can gain comfort but also gain wisdom.
And it also comes from a place of data, and how our data is really out of control.
And, you know, a lot of people are making a lot of money off of our data.
And so I came up with this Echo because I want people to be able to create legacy, not look back and try to reenact what grandma or grandpa or some historical figure would say, but actually create an AI model of themselves that future generations can interact with.
obviously what we're talking about that has a lot of implications, right?
and then secondly, to control your data so you can monetize your expertise.
And of course, if you are selling your expertise and someone makes a bad recommendation, that could be bad too.
So I just want to get your feedback on how you think about that as a product, what that might look like, and how you think that can be made safe.
Because, you know, when I'm pitching this, which I will be pitching tonight at the Startup Grind in downtown Rochester, you can look it up, when I'm pitching this, no one ever asks me this question.
They say, what about security?
What about data?
What about this?
But no one ever asks, what are the ethical implications that come along with it?
So I'd love to hear your feedback.
Yeah.
hang there for a second, Jason.
Thank you.
I'm gonna start with Jeffrey Allen.
Go ahead, Jeff.
Now, I think that sounds like a very interesting idea.
And I've seen some attempts at it once someone has actually passed, where they try to piece together bits and pieces of their background and create an AI bot of sorts.
I always thought it would be relatively superficial in what they could do without the actual person's input. Doing it as a way to create something of a legacy does sound like a very interesting idea.
I certainly think there's a lot less controversy with the individual being involved with it directly while they're still alive.
I think it has become controversial with other attempts at this, where someone has already passed and they try to create bots that mimic their behaviors and their personality.
So I think that's definitely a better approach to it.
Okay.
What do you think?
One thought: I've done this with respect to my professional life.
So that's Safe Side Prevention.
We've taken all of the things that I've said and taught over the years and put this into a robot who now answers questions way better than I do.
and it can be used as an educational tool.
I think what's interesting here, though, if we bring it back to this youth question: this could be a way for a young person to experiment with who they are.
What would I put into it?
And this is actually what places like Character.AI are trying to do, where you could say, well, I'd like to create, you know... What would I put in there?
What are the journal entries I'd want to feed it?
What are the things about me that I'd want to use to create a bot that might be like me, or able to respond like me?
So, Jeff, I think when you get into young people, we all start to have a little bit more of a, you know, queasy stomach about it.
You know, maybe because it is experimenting with your own identity.
And that's something that you want to do in conversation with people as well as robots.
And Doctor Scharf, on the side of what AI can do versus what is a healthy relationship, this is what comes to mind for me when I think about kids.
My grandparents had passed away before I became an adult.
All of them.
And I never had a chance to develop a relationship with them, obviously.
And, I've always wondered what that would have been like.
And we don't have a lot of video.
We've got pictures, of course, of grandparents, but it wasn't the age at the time.
I mean, most of them died in the early '90s or late '80s, so there's not a lot of video.
But what I think may be happening in the future, and Jason is talking a little bit about this, and you can certainly envision this, is with all of the ubiquitous video we have.
You may have a relative that passed away, dies in an accident, something happens, but you've got all of this stuff that you can input now, and you can say to a future child who never met them: now you can interact with them, because we can recreate them. But it's not really them.
It certainly looks and sounds like them.
and is that a relationship that we should be trying to build?
Well, honestly, I'm sort of neutral on how good the picture and video interaction gets.
Probably pretty good.
I think it's going to get good, but will it be indistinguishable?
It won't be.
It will be, definitely.
Today would be the worst it will ever be.
And we can do it with you.
But we have hope.
We have thousands of hours of Evan Dawson.
I don't know, you're authentic enough that we could probably reproduce you pretty well, but I think the importance is knowing what it is, knowing that it's not real, versus saying this is your grandparent.
We've talked about a couple of science fiction things, the book and then TV show Altered Carbon.
Essentially what Jeff's describing is what humanity is reduced to, and the physical biology isn't even relevant.
It's just the copy.
And you can even make multiple copies.
So I think the distinction that I care about, and that I worry about hearing these things and imagining them advancing in the future, is distinguishing and valuing people and their humanity.
And so, you know, again, that's where my mind goes.
That's what I think about in terms of people imagining in their own families what they can do: make sure, even if they're spending time with something, that they're still doing the other things that you value as part of growing up, as part of being a good person. You know, we'll probably be at the point where these things could hug you, but today, you know, your parent can give you a hug.
Yeah.
I mean, Jeff's probably heard me riff on this, but my mind goes back to the Randy Travis AI debate.
You know, the.
So Randy Travis is a country musician who for more than ten years has not been able to sing anymore because of a condition with his voice, and his producing team took 30 years of his songs, input them into AI, and the AI wrote and created a Randy Travis song, and it came out last year.
They called it a Randy Travis song.
I dispute that. It is not a Randy Travis song.
It's an AI song that sounds a lot like Randy Travis.
But you know what?
His family loved it.
His wife loved it.
His kids loved it.
They thought it gave him his voice back.
So I don't want to take that away from them.
I just don't know what happens when we continue to just give up our humanity and say, good enough, you know, like, that's I don't know, I don't know.
So maybe Jason wants to jump back in before we go here. Go ahead, Jason.
Thank you all, great stuff.
I do want to say that, you know, even with the piecing together a video, I mean, there's wearables nowadays.
I mean, there's so many ways to get the information you're looking for, and in an ethical way.
And lastly, I would say that my grandpa would probably tell me to shut off the AI and go outside, you know.
Go, Grandpa. Jason, thank you very much.
I mean, like, I think, to Doctor Scharf's point, we're still going to have to have limits and just good cultural adaptation that says, if you think you can put your kid in a cave, you probably can't.
So just talk about it and let's learn.
And let's maybe learn how to set limits here.
let me get an email from Alex, who sent a lengthy one here.
He says, I don't think it can be understated how dangerous this is, specifically as a function of how companies with generative AI operate.
They are businesses.
They seek to maximize profit to please shareholders.
Social media does this to attract advertisers.
Companies like OpenAI grandstand about future applications to build relationships and capital.
We can't even get companies to disclose whatever the hell data they're training on, and they collectively shut the doors on the first open-source model.
With China's DeepSeek AI, a large language model trained on modern communication is going to have, by default, a lot of bad stuff baked into it, internalized racism or sexism or homophobia, but nobody on the other end to help change it. I'm going to butcher the late, great Stafford Beer and say that computers are dumb, but people are smart.
If you code a computer to do one thing, it will do that one thing.
If there's something wrong with that, you have to change it, not blame it.
That's from Alex.
Jeff, what do you make of that?
Well, there's a lot to unpack there.
I definitely think that, the profit motive is a very real one.
DeepSeek: I'm not sure I would have used that as the example when it came to open source.
I have my very strong opinions on that model, but there are other open source models which I do find more fulfilling and, less biased in their own ways.
that said, you know, there's definitely many things that are taking place here, right?
there's certainly the idea that AI is progressing very quickly, no matter if we want it to or not.
Kind of to Tony's point earlier.
Right?
It's not is this going to happen?
And I feel like a lot of effort is put on trying to prevent something that is inevitable, rather than trying to shape it and move it in the direction that we should be.
And I do feel the latter is unreal... or, sorry, the former is unrealistic, in that it's happening, it's here, and we really need to now get involved with this so that we can shepherd it in the way that it should be,
rather than trying to prevent something that, realistically, is just not going to stop.
But the second part of Alex's point there, Doctor Allen, is this: he's worried about the capitalistic and the profit model.
And if you want to get really dark, one of the things that comes to mind for me is that these models will not be profitable if people are dying, whether it's self-driving cars or chatbots that go dark.
and that's, that is literally just the capitalistic impulse.
But the other side of that is, given the fact that the capitalistic impulse is strong, Alex might think the ethics department in your AI company is going to get steamrolled every time.
What do you think?
That's not even a hypothetical.
All we need to do is look at two years ago, when layoffs happened: who were the first teams to go on the AI side?
The ethics teams.
Right.
And I think there's going to be this mentality within corporate boardrooms that a certain amount of collateral damage, to borrow the military term, is acceptable.
And we need to figure out, you know, how to advance this technology, and if some people, you know, unfortunately become victims along the way, well, that's just the price of doing business.
That's a very sort of cynical attitude, which I could see happening in Silicon Valley.
I mean, I spent years in Silicon Valley, and I definitely got that impression.
I would just also add, in terms of deaths and problems reducing the popularity, we can see how well that's worked for prescription opioids or for heroin.
Right?
I mean, yeah, not to be cynical, but.
Right.
That's right.
So I think that's why we create laws and rules and regulations.
Because you can't just rely on that.
Yeah, yeah, yeah, yeah.
That's dark.
but okay.
So separate question now that I think is related to this, I want to bring it back to kids a bit here.
My colleague Matt Teacher co-hosted a conversation last Friday and they had some fun.
They talk about AI in the movies.
There's a new film out called Companion, which is pretty dark.
And it looks at the not-too-distant future, in which AI will be physically manifested in what look like human beings that you can have sexual relations with, for example.
And that might get violent.
And it's a horror movie.
So spoiler alert, violence happens.
But the story of companion, I keep thinking, well, no way.
And then I keep thinking about this is the worst AI is ever going to be, and we are going in directions that a lot of people right now or five years ago would never believe we could get to.
And we're going to get to them faster than you think that we will.
So when it comes to these chat bots, when it comes to relationships, it is no longer hypothetical that adults and kids are developing relationships with technology.
So as a parent, Doctor Michael Scharf, what do you want parents to think about, and how should they talk about this with kids? This might be new to them, their friends might be talking about it, it might be overwhelming.
What do you want parents thinking about?
Well, I want parents to think about what they value, and is that there, before trying to police or stop the relationship, or the evolving relationship with the technology, or even the interest in using it.
And so and I can be concrete, right.
This could include having physical activity, sleeping.
Right.
If anything is dramatically interfering with your child's sleep, you should curb it.
Right.
And so I think having clear ideas about what's healthy, and I mean traditional healthy, I'm not trying to make up a new "healthy" about, you know, digital use, what's healthy and what you value, and making sure that those are present.
And if it becomes irreconcilable, then you limit it more.
But that's what I would start with.
Setting aside the absolutely tragic stories, the suicide in Orlando, the dangerous stuff: do you think it is plausible that we're going to have a mental health crisis on the horizon because of this?
Well, we already have a mental health crisis among kids, among youth.
And, you know, even as it extends into young adulthood, and we track people's suffering but also bad outcomes, I think that is here, too.
But will it have unintended, potentially catastrophic, large-scale mental health effects?
If you put me on the spot to say yes or no?
Yes.
Is it irredeemable?
Is it impossible to create a new idea of healthy use?
I don't think so.
I actually am hopeful.
I mean, we can think about any major technological breakthrough: cars, right?
Processed food, the refrigerator, AI; there are things that, like, really change things.
I think the challenge, even more than the content, is how fast it's happened, so fast that it's hard to create an idea of what's healthy, what's okay when it keeps changing.
Ben emailed, and I'll turn this over to Tony.
Ben says, why can't AI direct people to mental health or other resources when they receive disturbing messages?
They do. Just about any of the models will do that.
They're programmed in post-training, which is the point where these companies kind of adjust what comes out.
So, yeah, they do right now.
In fact, sometimes I have a difficult time working with the robots, because I am working on suicide so much that they're pointing me to help all the time, or they don't want to answer certain questions.
some robots will say, you know, that that question contains too much explicit material about suicide.
And I, I'm not comfortable talking about it with you.
And I'll have to try to work around it.
So there is a lot of that being built in.
Yeah, that's an indication of guardrails, isn't it, Jeff?
It is sometimes, to the consternation of professionals working in the field.
You're sure it was trying to get ahead?
Yeah.
Yeah.
Right.
Right.
So and I'm trying right.
Because I'm trying to prevent, prevent this, you know, and I'd like their help with it.
And they do help me a lot.
You know, I think one direction that we could talk about, and I think it maybe goes along with what you've been saying about kind of what you do with the sleeping, physical activity, interactions with people:
I do think it's more important what people are going to do than what they're not going to do.
Right.
And I think this is a real opportunity to engage young people in addressing the future that is now.
You know, we often say, like, oh, why don't we engage young people in issues?
But I think this one especially, because we really haven't had the experience of things changing so fast.
We're not good at adapting to that yet.
Like the internet feels like it happened quickly, but that took like 20 years.
So, you know, engaging people on, what do you think it should be like? I've even started to have wild conversations with kids, like, what do you think it would be like if this hospital didn't have any doctors, but only had robots who diagnose people better?
Things like that can get kids imagining, and maybe engaged in being part of shaping this future instead of just being shaped by it.
You're scaring me, Tony.
But, you know, I mean, I'm trying to be realistic.
I'm going along with it.
Well, you know, I am partially saying that to kind of shake us a bit, because of the speed. I do really want people to be having this conversation.
So I apologize if I'm being.
No.
Oh, really?
I mean, he's not giving you a homework assignment, but if anything, we've learned that we'd better be forward-looking about what feels extremely futuristic, because it's closer than we think.
Yeah.
Like, later you could ask: why do you think, you know...
We have computers that can fly airplanes as well as or better than people.
Why do you think we still have the people?
Oh.
Interesting question.
See, now, do you hear? I hear some hope, some engagement, you know, in that kind of conversation, where you're starting to say, hey, what we can do is think about these things.
We can try to shape it, and, you know, the interesting answer that a child has to that is, like, way more healthy than being scared about how dangerous it is, or just trying to pretend you can stifle everything.
Yeah.
So in our last minute here, Jeff, we've talked about the concerns and we're going to keep doing that.
Give me 45 seconds on, in the best-case scenario, what you see happening that is good coming out of this.
Well certainly I see an opportunity for people who might not feel comfortable with, interaction.
maybe, you know, to a clinical level, maybe they're just not really the type of people who want to engage with others.
This gives them an outlet.
I see that as a positive potential from this.
I do see it as particularly its role potentially within mental health as an area where, you know, it's been stigmatized historically.
there's a lack of availability of resources in terms of therapists.
There's the cost factors.
So maybe this can actually, create a bridge there that will be very helpful for the future.
it's there's definitely a lot of upside, but that has to be taken with the idea also that it needs to be shepherded and guided in the proper way for that to manifest.
And I want to say to our guests here, this has to be an ongoing conversation, and we've got to do a lot of it.
Tony's right.
You've got to; it's going to be uncomfortable for some.
There's going to be generational divides.
But we've got to talk about this.
We have to be prepared and at least know what might be coming before we decide how to handle it.
And I want to thank our guests for being here.
Doctor Jeffrey Allen is director of the Institute for Responsible Technology and a professor in the School of Business and Leadership at Nazareth University.
Thanks for having me back.
Thank you for being here.
Our thanks to Doctor Tony Pisani.
Tony is a psychologist and family therapist and works in suicide prevention.
Thank you for being with us.
Thanks for stimulating the conversation.
The founder of Safe Side Prevention. And Doctor Michael Scharf, really one of the best, the Marc and Maureen David Distinguished Professor in Child and Adolescent Psychiatry at the University of Rochester Medical Center.
Thank you very much.
Thank you, Evan.
Thanks to all the listeners who gave us their time and paid attention.
Absolutely.
And from all of us at Connections.
Thank you for being with us.
We're back with you tomorrow on Member supported public media.
This program is a production of WXXI Public Radio.
The views expressed do not necessarily represent those of this station.
Its staff, management, or underwriters.
The broadcast is meant for the private use of our audience, any rebroadcast or use in another medium without express written consent of WXXI is strictly prohibited.
Connections with Evan Dawson is available as a podcast.
Just click on the Connections link at wxxinews.org.
Connections with Evan Dawson is a local public television program presented by WXXI