Connections with Evan Dawson
Human-cyborg relations: how movies portray AI, robots, and human love
2/7/2025 | 52m 39s | Video has Closed Captions
From 1927's "Metropolis" to 2025's "Companion," cinema portrays the relationships of human and robot with passion, surprises, and regret. With artificial intelligence becoming more and more ubiquitous worldwide, how has Hollywood portrayed robot relationships, compared to where we're at and how AI is being used today? Guest host Matt DeTurck discusses it with an AI expert and film experts.
From WXXI News, I'm Matt DeTurck, artistic director at the Little Theater.
Filling in today for Evan Dawson.
And this is Connections.
Artificial intelligence, shortened to AI, is a concept as old as human civilization, with tales of objects infused with consciousness and intelligence by their craftsmen.
The term really took hold in the 1950s, in connection with modern computing.
And if you've used any internet-enabled device in the last year, it's no surprise that our smartphone-centered lives are now filled with the term.
Hollywood and the motion picture industry have always been fascinated by humanoid robots, from "Metropolis" in 1927 all the way through to 2025's "Companion," which is on movie screens right now.
But specifically, it's the relationship that humans have to these robots and AI companions: sometimes they're filled with passion, sometimes with surprises, and sometimes with regret.
Joining me here in the studio so we can discuss movies and current AI are my guests.
We have Max Irwin, founder and CEO of Max.io.
Meaghan Murphy, co-founder of Anomaly, the Rochester Genre Film Festival.
Adam Liberto, projectionist and programmer at the Little Theater and programming director for Anomaly, the Rochester Genre Film Festival.
And Scott Lucas, director of communications for the Little Theater.
Thanks, everybody, for being here.
Thanks for having us.
Yeah.
I think we probably have enough ideas, conversation, and material to last for three hours, so I'm just letting our engineers know that we're going to be here all day.
Just so you know.
So I kind of want to start with a nice, easy softball: do you have a favorite representation of either robots or AI on screen in a movie?
It doesn't necessarily have to be the favorite, but when I say AI and robots in a movie, is there something that comes to mind?
Adam, let's start with you.
What eyes.
Maybe it's low-hanging fruit, but the first thing I think of is A.I. Artificial Intelligence, the Steven Spielberg movie, with Haley Joel Osment as the child AI robot who is a substitute for the real child of the parents in the film.
But yeah, that is immediately what I think of: the titular role.
If.
Yeah, exactly.
Megan.
I'd actually go with Blade Runner and the idea of replicants. It goes into what I find very interesting, which is that talking about what it is to be artificial intelligence, in entertainment and art, is really a discussion about what it is to be human.
Max?
I'm going to go with HAL from 2001.
There's no embodiment, and it's this machine that kind of goes awry.
And, Scott?
Yeah, I have a couple.
So Ex Machina from Alex Garland is definitely one.
I think it's such an interesting take on it, and before I saw it, I really didn't know where it was going.
Anything with robots: do I love The Matrix? I think that's in there.
And one whose portrayal I like, which is kind of a silly, goofy summer movie, is I, Robot, the early-2000s Will Smith movie based on, I believe, an Isaac Asimov story.
Nice.
I will say, if you have questions for our guests as we go along through this program, feel free to call in at 844-295-TALK.
Or if you're in Rochester, the number is (585) 263-9994.
You can email the show at connections@wxxi.org.
Or you can leave comments on our YouTube, where you can also find this program.
What do you guys think captures the imagination here, as we're talking Hollywood and movies? And Max, you work with AI. What do you think captures people's imagination about this?
Meg, I'm going to start with you.
Oh, fantastic.
Well, I think there are actually a couple branches of what it is, because there's definitely the fear-based one, starting from Frankenstein and earlier: the idea of, what if humans create a thing and then lose control? That fear is a little religious.
Like, do you become God?
And is this what God feels like?
So I think that's one.
I think then, as I mentioned before, there's one that's what it means to be human.
Robots as a metaphor for the question of at what point a consciousness becomes its own thing: the intelligence in artificial intelligence.
I think it's an interesting way to explore that topic.
Scott, your view on that?
Yeah.
I think what captures the imagination.
So it's sci-fi, but it's also speculative fiction.
It's something that could happen.
So, you know, we're looking toward the future and what kind of horrors could happen.
And it's interesting that even the movies we all listed were from several different eras.
So we had the 1960s with 2001: A Space Odyssey.
You mentioned Companion, which just came out a few weeks ago.
So it's really interesting to see how that has evolved over the years and how writers have looked at this.
Usually in movies like that, the AI will be a threat: something the characters have to work against, or something that is a threat toward humanity.
but I think, yeah, just anything like that with ideas and stuff that is based a little in reality, I think is just so intriguing.
Yeah.
Speaking of the reality aspect: Max, you work with AI on a daily basis.
You have a very different perspective on this than the rest of us.
And by that I mean I think you can actually give the rest of us some perspective, as someone who's actually dealing with it.
Can you talk a little bit about, like, your work?
Yeah, my work with artificial intelligence: I started working with AI, well, various degrees of AI, from, I'd say, 2011, but really more seriously in 2015, when I started working with natural language processing, which is a field of artificial intelligence. And I build tools; I call AI a tool.
I build tools to help people learn and find things in a lot of text content. You know, if you're in an enterprise and you have a million documents, it's hard to get answers out of those things.
So that's the type of thing that I, that I work with.
I don't work in the research realm of, like, how can we make a copy of ourselves, to go off and do all of these interesting things?
I, I mean, I philosophize about that.
I know how these things work, and I do personal research on how all of these AI models work and what they're doing.
But that's not my primary field of focus.
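The kind of tool described here, helping people find things in a lot of text content, can be sketched in miniature. This is a toy keyword scorer, not Max.io's actual software; the documents and the `search` function are invented for illustration.

```python
# Minimal sketch of document search (illustrative only, not any real product).
# Each document is scored by how many of the query's words it contains.

docs = {
    "hr-policy": "employees accrue vacation days each month",
    "it-guide": "reset your password from the login screen",
    "expenses": "submit travel receipts within thirty days",
}

def search(query, documents):
    """Return names of documents with at least one matching term, best match first."""
    terms = set(query.lower().split())
    scores = {
        name: len(terms & set(text.lower().split()))
        for name, text in documents.items()
    }
    return [name for name, score in sorted(scores.items(), key=lambda kv: -kv[1])
            if score > 0]

results = search("how do I reset my password", docs)
```

Real enterprise search adds inverted indexes, stemming, and ranking models on top, but the core idea is the same: score each document against the query and return the best matches.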
Cool.
Does that influence you when you're watching a movie that's deeply science-fictiony? Do you end up thinking, oh, that's not realistic, or, oh man, that scares me too? How does that affect you?
Yeah, I mean, it goes back even before I was working with AI, just being in tech, in software. You see some hacker in some movie doing this thing, and it doesn't make any sense.
So there's this whole suspension of disbelief: you can do all kinds of things in movies that just don't have any boundaries in, or ties to, reality.
So I think, you know, there's a lot of that.
And I do try to take note of situations where they try to get it right.
because, you know, nerds and technical people, we try to do that in our field.
It's like, hey, they use that tool correctly.
I'm also like, like I was a chess geek as a kid.
And so when I see people playing chess on screen and it's totally wrong, I get so frustrated I can't watch the rest.
So it kind of it's the same.
So are there movies or representations of AI that you think do the topic justice?
Where you're like, yeah, that seems plausible, or that's grounded in some sense of reality?
I'm just curious.
Yeah.
I like the ones, like I called out: HAL from 2001, because it's software.
It's a tool, and it goes wrong; it has this thing that's happening.
But it can go wrong in a very interesting way, and a plausible way.
And outside of movies, I'd say my favorite portrayal is actually Futurama, if you've watched that series at all, because it's set in the future, but a very buggy future where things go wrong all the time.
And I think that's kind of the world that we live in.
It's like, yeah, you can use ChatGPT to get answers, but it's not always right.
It goes wonky in weird, strange ways.
And I think that's kind of the future that we're, that we're forming for ourselves where, things are just going to break.
How we deal with that is an interesting question.
Oh, sorry.
Meg, do you feel like, in that case, we're talking about what is futuristic and then what is realistic?
And you, as a local artist, are now even getting into the generative things.
Yeah, we'll get into that later.
But having seen so many movies, and also working with technology, do you feel that sense of suspension of disbelief at any point?
Well, I think what interests me a lot, and I think this is the connection between both the fear-based stories and the what-it-means-to-be-human stories, is the humans at the core of it.
Technology reflects the human: their fears, their desires, their base competence.
Like, we see something like The Stepford Wives, where it is dark wants of control that caused this, or something like M3GAN, where it's basically: I don't want to deal with my niece, I'll just give her to this robot.
Though, as a fellow Megan, I'm on M3GAN's side.
But then something like, say, A.I., which is trying to deal with grief.
But again, humans themselves still haven't learned how to deal with grief; we can't deal with it perfectly.
So I'm fascinated by a story like, oh, I'll just make a system that will deal with it for me.
I'll just make a fake son.
And then all the problems fall on this poor kid. Because we're talking about artificial intelligence, I think in A.I. the idea of becoming a real boy means he's basically human. What's the definition of human, when you can feel pain and suffering?
And all of that happened because humans haven't dealt with their stuff, you know?
And until we deal with our stuff, tech's going to reflect that.
And that includes artificial intelligence.
We are imperfect creators, and there will be imperfect creations, and we have to take responsibility for it.
Victor Frankenstein, this comes right back around.
Or we get a Westworld situation, which, you know, maybe, maybe we all deserve it.
I think what you're saying, though, about taking responsibility, and that sense of it: I was interested in doing a little research into where A.I. came from, because I had no idea.
And I thought, yeah, something like A.I. deals with so many themes.
Sorry, Spielberg's A.I. deals with so many themes of Pinocchio, which again is literally a woodworker carving something out of wood, and then something that is not human trying to become real.
And I was like, oh yeah, we've been dealing with this concept for a very, very long time, which I thought was really interesting.
Scott, you are a huge fan of The Matrix, which also deals with a lot of machine pieces.
Is that something, from a marketing-slash-film-fan perspective, that intrigues you and brings you into it? Or are you constantly thinking, no, that's not going to work, my suspension of disbelief is gone?
no, I think it works.
I think robots, yeah.
As the bad guys, especially as something that has been visited often, and I usually like it.
So that's if anyone hasn't seen The Matrix.
So, if anyone hasn't seen The Matrix: when it came out in 1999, which is the best movie year ever, as we are all aware, part of its marketing was "What is The Matrix?"
So I don't think you were as aware of the robot angle, maybe, and there were some music videos that accompanied it that maybe had glimpses of robots.
So maybe you were vaguely aware of some robot angle, but mostly I think that was kept secret and then revealed throughout the movie.
but yeah, I always think that works well.
And The Matrix came out in 1999, which wasn't that far removed from the Terminator movies, which are very interesting.
The first one, of course, with Arnold Schwarzenegger, where the Terminator is the main robot.
That's the bad guy, that's the villain.
and then Terminator two, which I think is one of the rare cases where a sequel is better than the original.
Yeah, you have Arnold, the robot, as the good guy now, going after a newer version of the Terminator.
I don't know the technical term, but I always call it the liquid Terminator.
The T-1000.
Yeah. So that one movie is robot versus robot, and a robot is also helping the human.
So there's different layers, and it's complicating that formula for the robot as the as the bad guy as the villain.
I think we're in an interesting space where we keep ending up putting the robot, or the AI system, as either a villain or as the hero, like an Ex Machina situation.
I don't feel like there are a ton of examples where it's just a system that exists and actually gets those human elements we're talking about, like acknowledging that it has to course-correct.
I don't know if this is putting all of you on the spot, but that's what we're doing: can any of you think of something with that kind of element, where the system realizes it's doing wrong?
I actually have a little extra section, which is children's and family stories.
I think both Big Hero 6 and The Iron Giant in particular deal with that idea. The Iron Giant was made to be a weapon, but because, again, I would say he has reached consciousness, he actively chooses to go against his programming and make his own choice, which is not to commit violence.
So I think that's an interesting idea of an evolution of a character. And the ending, you know, doesn't make me cry every time.
I'll tell you that.
It's fine.
I keep it together.
But I think that's an interesting thing, especially in a family, children's movie.
I think the same with Big Hero 6, where you have Baymax realizing he doesn't want the armor, because in it he can't do what he wants to do as much. He's being forced into something he doesn't want to do, and he has enough awareness to say:
I don't want to be this.
I want to be soft, you know?
I want to be healing.
I want to be welcoming.
And I think that's, very interesting.
Maybe an important lesson for kids to be like, hey, you don't have to do what you're told.
If that is not good, if it's not comfortable, if it's not who you are, you know?
So I think it's interesting to use a robot in that sense; it's almost a metaphor for a child learning.
That's cool. Adam, we just rewatched Her recently.
And I felt like that also had a good sense of learning in it. Her is such a different film from The Iron Giant and everything.
Did you have a sense of how you felt about Scarlett Johansson's Samantha character, or what that made you think of?
You've put me on the spot.
I don't know.
I feel like it was realistic in the sense that that's how technology kind of works: it evolves.
But I also feel like Her is an AI movie that's less about the AI specifically and more a metaphor for human relationships and how humans change. That goes back to the theme we keep coming to: these stories, at least in pop culture, end up reflecting ourselves. They're more an exploration of humanity than of the technology itself.
It's more what effect that has on us as people and what that means for us.
Yeah.
Max, since you're doing more with text and language: I know very little about how any of the AI aspects actually work.
So is there anything there? Are we at a point where, realistically, that learning aspect is happening within the tech?
So, people are trying; the technical term is called continual learning, or continuous learning.
And there are a lot of problems with that.
because we have all kinds of fun names for things.
So one of the major drawbacks of continual learning is this: you have a model, which is basically a piece of software, and you trained it in a certain way.
And people did that very purposefully, and then you stop, right?
That's like ChatGPT.
It's in production, and people can use it for its job.
But the idea of giving it more after that and teaching it new things: it forgets older stuff.
So it's called the catastrophic forgetting problem, which is a fantastic term.
And so that's like the major technical drawback for, this idea of continual learning where, where a model can acquire new data and learn on it.
But there are a lot of workarounds to that right now, where, well, the model doesn't necessarily have to learn.
You just give it more information as it's happening.
Like you can give it more context.
Like if you use a search engine right now, like Google, you type a search term in and sometimes it'll give you a summary at the top.
Like that's a, that's a form of like a workaround to this continual learning problem where you're giving it contextual information and the model does something further with that.
The AI does something further with that to give you that summary.
So there's a lot of work being done in state of the art research to try to solve this continual learning problem.
But until we get there, we're going to have to deal with these workarounds.
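The workaround described here, handing the model fresh context at question time instead of retraining it, can be sketched with a stand-in for a frozen model. Everything in this sketch is hypothetical; the `answer` function and its tiny knowledge base are made up for illustration.

```python
# Sketch of the "give it more context" workaround (hypothetical, illustrative only).
# `answer` stands in for a frozen model: its built-in knowledge never changes,
# but contextual information supplied at question time takes priority.

TRAINED_KNOWLEDGE = {"capital of France": "Paris"}  # fixed at training time

def answer(question, context=None):
    """Answer from supplied context first, falling back to trained knowledge."""
    if context and question in context:
        return context[question]
    return TRAINED_KNOWLEDGE.get(question, "I don't know")

# Without retrieval, the frozen model can't answer anything new:
no_context = answer("today's weather")

# Retrieval step: fetch up-to-date facts and pass them in as context,
# the way a search engine feeds results to the model that writes the summary.
retrieved = {"today's weather": "light rain"}
with_context = answer("today's weather", context=retrieved)
```

The model itself never changes; only the context it is handed does, which is why this sidesteps catastrophic forgetting entirely.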
So that's kind of where I draw the line on the plausibility of models: when I see that something is just learning on the fly, I think, okay, that's out of the realm of known possibility at this point.
That's a that's a major technical hurdle.
Oh, and I'll jump back to what Megan said before, because I think it's worth pointing out how I delineate plausibility: it's giving human qualities to technology, this idea of want. You know, Baymax wanted to do these other things.
So that's like a very human thing.
We have hormones and emotions and all kinds of stuff that machines don't have.
Machines can replicate a certain amount of stuff, but the idea of a purpose, and of us modifying our purpose because we know we have the human condition (we're going to die at some point, and we have to struggle with that, just from a philosophical perspective), has no translation to how machines work.
So I tend to draw a line in that realm of what is plausible and what is not when I see something on screen.
The sense of learning, though, I feel is interesting, because we talk a lot about AI and learning, but it's still human data sets being input, and then what the model pulls from there.
I'm attempting to sound like I have any sort of knowledge about this.
I mean, if you think about, like a self-driving vehicle, like a self-driving car, you know, it's been trained like somebody had to drive around for millions of hours to get the data necessary to train that.
But then what if all of a sudden, like it sees something that it's never seen before, like a meteor, like crashes in the road that has never been seen?
When the people were driving around to get the data to train the model.
That's not human.
That's just environmental data.
Like, how would it handle this meteor that just appears? Or, I guess, meteorite, once it gets to Earth.
It would have to adapt.
So, being able to adapt to things in the environment that it's never seen before: we have fight-or-flight instincts, where it's like, okay, I have to steer, I don't know what that is, but it's pretty bad and I'm going to get away.
Would a model do the same thing?
Would the AI do the same thing?
And, you know, I would call that undefined behavior.
We don't know what's going to happen.
Interesting.
I feel like this is so much of what we look at when we see robots portrayed on screen: it's so much about them trying to become human, about getting over that divide where they're dealing with what they've been programmed to do and then going beyond it.
Scott, you and Adam have both seen Companion.
Because it's so new, I won't go too deep into spoilers, but that's such an aspect of what that movie deals with: whether or not the robot can, quote unquote, think for itself.
Do you have any thoughts on how that film portrays it, and how that comes across narratively as opposed to realistically?
Yeah.
Well, first of all, I have to go back to the catastrophic forgetting problem. That's been stuck, seared in my brain; I immediately wrote it down.
So I'll attempt to answer your question, but I'm letting you know that that is very much on my brain.
but yeah, companion was interesting.
I think they, they borrowed a lot of themes that we've seen in other, movies as well.
And Megan, you mentioned Westworld, too, which I'm assuming refers to the more recent show and not the Michael Crichton film. Both deal with human desires and creating vessels for them.
And, like, maybe those vessels don't want to be vessels for that anymore.
Yeah.
But yeah, to answer your question: I'm trying to avoid spoilers, so I'm speaking in vaguer terms about it.
but it takes some interesting things where, like, the robot doesn't know that they're a robot right away.
Maybe sometimes they do, sometimes they don't.
So that adds a different element.
And that movie, again speaking vaguely, flips who the bad guy is. We've been talking about a few movies where the AI, the robots, are the villains of the story.
In this one, it's more some of the humans, and you're rooting not for the humans but for the robot.
So, it has an interesting take.
And again, like I said earlier, I think it borrows from some other sci fi movies as well, but, I thought I thought it was a fun portrayal.
I had a good time at that movie.
Yeah.
Adam, your any thoughts?
yeah, I agree with what Scott said.
yeah.
I feel like it's, yeah, a story where the sympathies of the film definitely lie with the AI over the humans.
Which is, I think, in itself interesting, in that it comes back to the flaws of humanity.
Yeah.
I think that, especially where we are currently, there are a lot of feelings of doomsday-ness, of the end of civilization as we know it. But that sense of dystopia, I feel, is not because of AI; it's because of what humans have done in our time here.
I think there's an interesting sense that, as humans, we continue to try to find something to blame.
And that continually comes up as, it's the technology, or something else, as opposed to, oh no, we are responsible.
We need to actually make sure that our tech works in the way that we need.
It's what Megan said: we need to deal with our stuff first.
You know, it's interesting, if I can have a second.
There is actually a movie I quite like that I think brings a lot of this together.
It's called Cherry 2000, from 1987 or 1988.
We actually covered it on our podcast, so I know it well.
So it's kind of a post-apocalyptic story: you have a city where there are humans and everything's great, but out in the desert, like around Las Vegas, everything's blasted.
And you have our main character, and he has a Cherry 2000, which is basically kind of a love doll. She makes dinner, they have sex, but she runs off a little disc.
She's not a thinking, feeling being. It's AI, but there's no intelligence; she just has recorded things that she says.
Her body gets destroyed, and he has to go find a new one out in the desert.
He hires a tracker, played by Melanie Griffith in one of her first roles.
And you get a story about a guy learning to deal with an actual human being as they face all these dangers out in a blasted world, because humans did stupid stuff.
And it's dealing with: oh, hey, maybe I can deal with a real person. Because by the end, he's been trying to find the perfect body for this little disc (Cherry is a disc), but he's so fixated because it gives him exactly what he wants.
She gives him exactly what he wants, and maybe he needs more.
And I think it's very interesting.
Again, it's a post-apocalyptic, desert-blasted story about finding what is human.
And the human is finding out what is human: he's figuring out, oh, maybe I've lost a lot of stuff because I've put myself in a little box. The world's destroyed, I have a crappy job, and I can just be given what I think I want instead of what I need, which is the sassy tracker, who's so cool and so amazing, with this bright red hair.
But anyway, I think that's interesting, because it's dealing with human choices, the destruction of a world, and trying to find comfort in a place where you can't, because there isn't an intelligence there. There's no humanity in this particular AI.
I'll see this.
I want to see this.
I have never seen it, but I know it is off-the-wall.
It is a romp.
I saw bits and pieces a long, long time ago.
It sounds like it sounds like a hoot and a holler.
Oh, and some of those themes: Companion has similar ones, although Companion doesn't sound like the same movie at all.
But some of them, yeah.
Where I think it's a lesson toward the humans and the evils of mankind, and the robots learning things, especially in Companion, as they learn what their actual existence is. Because, again, they think they're human, and in the movie there are some really sweet meet-cutes where you think, oh, that's so romantic.
Like, that's right out of a rom-com.
And then you find out that there's a little more to it, something a little more sinister that the humans did to make that happen.
Oh, yeah.
It's all human needs and desires and not dealing with our stuff.
No, I won't deal with it; I will just make a tool to help instead of getting it together.
Basically, therapy.
Everybody, we need a couple centuries of therapy, and then maybe we could create something that won't destroy us, and we won't deserve it.
I keep coming back to this, and yet I feel like that's a lot of what these stories are: you get what you put out into the world.
What are your intentions? They will come back to you threefold, one way or another.
We are going to take a short break, and when we come back we'll have more with our guests Max Irwin, Meaghan Murphy, Adam Liberto, and Scott Lucas, talking about movies and AI.
I'm Evan Dawson.
Monday on the next connections, we're joined in studio by Rochester Mayor Malik Evans.
We'll be talking about the response of cities like Rochester to ICE and to the Trump administration's desire for mass deportation, what Mayor Evans is hearing and seeing from law enforcement, and what comes next.
We'll talk about that and more with the mayor on Monday.
Support for your public radio station comes from our members and from Great Thai Consulting, providing management consulting services to help support not for profit organizations.
Services include C-suite staffing support, executive coaching, and business and fund development.
More at Great Thai Consulting dot com.
This is Connections.
I'm Matt DeTurck, sitting in for Evan Dawson, and we are discussing AI, its portrayal in Hollywood, and why we want to love robots so much.
I am joined here in the studio by Max Irwin, Meaghan Murphy, Adam Liberto, and Scott Lucas.
So what I want to transition us into now is finding where the balance is between how we think about AI in movies and how it's realistically being used at the moment, and where some of those challenges are.
During the break, Adam, you had a really great question for Max.
It was about the realm of sci-fi versus what's actually happening: do you think there is a detrimental effect on what society thinks about AI because of how it's portrayed in sci-fi, with all these robots trying to kill us?
So, the detriment. I'm going to answer this question in a very strange way.
When we watch movies, things like The Terminator and stuff like that, it's like all of a sudden this thing appears, and it's horrible, and it's trying to kill everybody, right?
it comes out of nowhere.
It's highly disruptive.
And I think, you know, we as people maybe think that's what's going to happen in our nightmares.
But I think a worse scenario is actually what's really happening is that we are very, very slowly progressing.
It's called the boiling frog problem, right? You have a frog in a pot of water that's slowly coming to a boil.
It doesn't notice that it's happening, so it won't jump out, until eventually it's too late.
And I think that's more of the situation that we're in.
Some movies probably deal with that better than others, but there's this idea of, hey, if something bad is happening, we'll recognize it, because that's what happens in the movies. But we're not in that situation.
We're in the situation of things are getting slowly worse and worse and worse over time.
Are we going to stop it in time?
And that's probably the detriment that I'd point out.
I'm trying to think of a horror movie, but I think that I have, you know.
Yeah, I'm sorry.
I feel like, with that though, something that we've constantly run up against in terms of its ethical use is the generative aspect.
Meg, we recently talked about the horror film you saw, Late Night with the Devil?
Yeah.
Can you talk a little bit about that?
Yeah.
Okay.
So I always have to put an asterisk next to this film.
I like it very much.
I think it's very well done.
I like its ideas.
Oddly enough, it is a film about making a deal with forces for what you think you want, versus what it actually gives you.
Which is super interesting, because the filmmakers, and I will give them the benefit of the doubt here, made this a little earlier in our discussions about generative AI.
The idea of using AI to create new content is how I'm using the term, but it was a little earlier in the discussion,
in, say, film and entertainment and art.
So they used it, the filmmakers said, basically just to test it out, to experiment: two kind of fake interstitials, because the film takes place during a late-'60s, '70s talk show, and then one photo, which is supposed to be an older photo.
Right.
Of course, my issue with this is, there is no system that I know of right now that does not work on stolen art that was scraped,
art the artists did not give consent to, which is a big thing.
And there are also the environmental costs of just typing a prompt and hitting enter. Plus, on top of that, it's unnecessary.
Does the film even need them?
And I feel like, if you can't afford it, either you figure it out,
which filmmakers have done up to this point, or you leave it out.
None of these things were necessary for the film.
It takes me out every time; I get angry because it's just not necessary.
And I'm worried about that continuing, this idea of, well, I want it and I'll just get it,
despite the cost, and despite it being built on stolen work.
You're a creator and you're fine with that?
I don't know. And on top of that, there's just the creativity.
I mentioned Spielberg and Jaws, and I said, what if he could have just fixed Bruce by typing, like, "make shark attack look cool"?
Enter.
Right.
And I'm like, we wouldn't have gotten Jaws, which is more effective because we don't see the creature for most of it, I know.
So it's all mixed up in that.
But in the end, I just feel like, you know, people are aware now of where it comes from.
And unless you can guarantee your system is only trained on stuff that people consented to, I can never be okay with it.
Max, you talked a little bit about the self-generation aspect, coming from language, coming from programming and everything. So, to your knowledge, we're not at a point yet where AI can paint for itself, because it's pulling from what human paintings already are.
I don't know, is there a way we could potentially teach a machine to paint without it seeing what other paintings look like? I feel like I have that trouble as a human.
I see other things and go, oh, that's how that thing works.
So now I feel like I can, not reverse engineer it, but have a sense of what that is. Is that the sense of pure invention?
I don't know, I'm kind of spiraling on like where this goes from here.
Yeah.
So the way all this stuff works is you just take data. AI is software.
It's software that is created by mimicking data, and mimicking what happens with the data.
And, you know, people spend a lot of time finagling it so it does that right.
So there are ways to come up with models that would look at, like, the environment. You could just take cameras into the forest, you know, but there's always a person behind the camera.
So then they own the IP: oh, I filmed this forest. So what is the provenance of the data is always the question there.
absolutely.
All of the models that are being used right now took data illegally.
there's no question there.
And the courts are like trying to figure that out.
You know, in the text world, the biggest one that's hanging out right now is the New York Times versus OpenAI, because the New York Times says, hey, you took all of our news articles and you trained your model. But that was just a very small subset of the data that OpenAI used.
And then very recently it came out that Meta, Facebook, took something like 13 trillion tokens' worth of books to train their Llama models.
And there has been a lawsuit from Sarah Silverman, who wrote a book with some coauthors, and she found that these Facebook models were able to just reproduce some of the prose in her book.
So it's very obvious that there's theft; there's a lot of dispute about whether it's fair use or not.
And I think that's kind of a silly question to ask.
Whether it's legally deemed fair use is one thing.
Is it ethically okay?
I disagree.
So, you know, it's kind of weird to be in this situation.
I do use visual models, for example, to show, hey, this is the capability,
but I don't use them in my work.
When I write blog posts or give presentations, I subscribe to something called Direct Easy, which is a clip art subscription.
And the artists get paid for the stuff that I download, right?
Which is what people should be doing.
So I don't know how you train a model without human data, effectively,
because we are the people that create it.
Unless you're just using stuff in the public domain; after a certain amount of time, stuff goes into the public domain.
Right.
But that's, you know, how many years, 120 now, in the US copyright system before things go into that?
Yeah.
So I don't know where we are at with that.
I think for me, because I'm not, like, "no AI ever," a lot of times the question is, well, what problem is it solving?
Is there a problem there? Like, say, captioning.
I think that's super important, because not everyone can caption all their work by hand.
But if you have a system that can do that, I think that's incredibly useful.
What's the problem with wanting to be seen as an artist but not wanting to make the art, or wanting to be seen as a writer but not actually writing? At that point you're content producing, and I'm like, that's not really a problem.
That's you wanting to be something you're not. And you could draw and you could write, but there's a chunk of folks who just think that they can generate a cover, generate the information, and pop it up on, like, Amazon.
And that all just seems so useless to me.
You know?
Why?
Why are you using AI, I think, is another good question.
What is this tool doing?
Does it need to be doing that, or could you maybe be doing it yourself? You know, fix your problems, humans.
Why do you want to be a writer but not write?
That's on you, and you shouldn't be burning the planet and stealing work to solve a you problem.
Go to therapy.
Yeah, I feel like the connective tissue between the different aspects of what AI is, is the idea that we get into trouble when we want it to replace something human.
Yeah.
As opposed to a tool to serve or add something.
Yeah.
Yeah, a supplement.
The trouble isn't supplementing humanity, it's just replacing it, acting as a substitute.
You're robbing yourself. That, I'll never understand.
Like, as an artist.
Right?
I've made a lot of crap.
I've made way more bad marks than good marks.
But I only got to the good marks because of the bad marks, and it's the process; the process is important, you know.
It's not like a song just pops into your head.
A composer has heard stuff that inspires them, has their own experience,
and then maybe they hear, like, chimes in the breeze.
Right?
Why would you forgo all that and just type it? You're stealing from yourself.
And that.
That breaks my heart.
So sorry.
I'm going to jump in here, because this is an interesting argument that a lot of people make,
who are AI bros, I'll call them.
This is one of the things: when you think about creating art, there's that process you go through of iteration and discovery, right?
So models learn by looking at data, and they get it wrong a lot of times, because you have a metric, you have a task, and you have the data, and you learn to perfect the task using the metric, right?
You do the same thing when you're drawing.
Oh, that doesn't look like a face.
I'm going to try it again.
You do it a hundred times.
You get better and better and better.
So there's that.
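The loop Max describes, data, a task, and a metric you iterate against until you get it right, can be sketched in a few lines of Python. This is only an illustrative toy, not any real model's training code; the line-fitting task, learning rate, and step count are all invented for the example:

```python
# Toy sketch of "learn to perfect the task using the metric":
# the model starts wrong and repeatedly adjusts to reduce its error.
# All names and numbers here are illustrative, not from any real system.

data = [(x, 2.0 * x + 1.0) for x in range(10)]  # the task: learn y = 2x + 1

w, b = 0.0, 0.0   # model parameters, deliberately starting "wrong"
lr = 0.01         # how big each correction step is

def loss(w, b):
    # the metric: mean squared error over the data
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

for step in range(2000):  # "try it again" many times, getting better each time
    # gradients: which direction each parameter should move to reduce the metric
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * gw
    b -= lr * gb

print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```

The "try again" step here is gradient descent, but the shape of the loop, measure the error, adjust, repeat, is the point of the analogy to a person redrawing a face a hundred times.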
But then there's the idea of like, well, who are your influences?
Like, I love this artist, so I'm going to write like this artist. Or I'm going to learn guitar, and, oh, I love Jimi Hendrix,
so I'm going to try to play like Jimi.
So there's this idea of like, I've heard all of this music.
I'm not creating something from nothing, right?
I've been taking and learning and being influenced by what I've consumed in my lifetime.
And then training myself with that, reflecting, and pushing farther.
Right.
So I don't necessarily agree with that argument, because, you know, I definitely draw the line.
But there are some people who don't draw the line, and I think that's a point that needs to be brought up, in how some people will look at that problem.
Oh yeah.
I'll say there's a difference between me going through that process, because it's worthwhile for me,
and assigning that process to a third party,
and yet still wanting it reflected back on me.
I think it's a lot of ego, you know; that's what I think it is.
You want to be seen as an artist, but you don't want to actually do the art.
I think this also gets into the whole issue of when we bother to use AI to, quote unquote, correct something, or to modify something.
There's all the discussion right now about the current Best Picture nominees and the fact that The Brutalist used it in two notable ways.
One is to create technical drawings of architecture toward the end of the film; instead of hiring artists, they just had AI generate them.
The other is Adrien Brody's performance, and how they used AI models to try to modify and tweak his Hungarian pronunciation.
And there's the question of when it is or isn't ethical to do that.
But also, I just kept wondering, why bother?
It's not that I don't want the Hungarian to be pronounced correctly, but this isn't a documentary about the Hungarian language. It's The Brutalist.
It's a made-up fictional piece.
I guess, do you think that's important?
Am I wrong in that case?
Okay, I don't want to keep talking, but I have to keep going.
This project had Adrien Brody money.
Okay, you pay Adrien Brody; however he pronounces it,
that's how he pronounced it.
That's the performance.
And if you have Adrien Brody money, you can afford an artist to make those drawings.
Anyway.
That's me.
They had Adrien Brody money.
They could have had a human touch on all of this.
And it's weird to me that you would choose not to. Like, if I was Adrien Brody, I'd be upset.
Like, yeah, I mispronounced it, but that was my performance.
That's my voice.
The tone and the tenor, which is all a part of someone's performance, is being affected.
I think the like the artistic truth is way more important than a technical pronunciation.
I do think technology like this has been used before.
I know Bohemian Rhapsody did that.
So that's the Queen biopic from a few years ago with Rami Malek.
And they used that to imitate Freddie Mercury's voice singing, because obviously no actor is going to sound like Freddie Mercury.
You just can't.
So that was years before, I think. What was bothering me more than the accent was the buildings, the artistic work.
And when you're taking work like that, I don't think AI should be doing that creative work. That should be for humans.
And there was an issue, too, with the movie Civil War, which came out this past year, from Alex Garland, who did Ex Machina, which I mentioned earlier.
They had some posters that were generated by AI. A24, which released it, is a company, for those who don't know, that people really love, and their movies have been great.
People love it for good reason.
But there was a lot of backlash when they used AI to generate these posters.
It's like, why not hire a human and have a creative person do that?
You don't need AI stealing the work, plus the harm it does to the environment as well.
Yeah, I just want to throw out that one of my favorite movies of all time is The Hunt for Red October, where you have Sean Connery, with a very thick Scottish accent, trying to play a Russian naval commander, and it totally destroys the accent and the language.
He's Spanish and Egyptian in Highlander, and we just let that fly.
Sean Connery makes it better, but I think that's a suspension of disbelief situation.
I think that's the issue: when can we, as a people, just be like, oh yeah, I know realistically that's not exactly, quote unquote, correct, but it doesn't really matter, because it's art.
It's entertainment.
I think this was also one of the major interesting aspects of the Hollywood actors' and writers' strikes recently. In a note here from the CDT, the Center for Democracy and Technology, they talked about how the big issue that got renegotiated in the contracts regarding AI was digital replicas and synthetic performers.
And they ended up defining this in two ways, which I thought was really interesting.
One was employment-based digital replicas, where a studio captures the likeness of a performer in the course of shooting something, with the performer's participation; and then an independently created digital replica, where a studio creates one from existing materials and uses it to portray a performer in scenes they did not actually shoot.
And I think that's again where we get into that question: these are now the legal terms they came up with to define this.
And then there's the, okay, so now it's legal, but is it ethical? Like, I've had a bunch of issues with Star Wars films that have brought actors back from the dead, and, I don't know, personally it rubs me the wrong way.
I'm like, just hire an actor who looks close enough and literally go with that.
So any thoughts on that?
I don't have a specific thought, but I feel like, in general, people have a much harder time with suspension of disbelief and things like that.
They want exact reality, or it's not valid.
If it isn't a perfect recreation of what existed, or of what they need something to be, it doesn't count.
I don't really have much thought beyond that, but I feel like a lot of people have a hard time with that,
in our current state.
For whatever reason, a recent bit of entertainment that deals with this, actually, is Chuck Tingle's Bury Your Gays.
It's a Hollywood story.
Listen, he's an amazing horror writer, and an amazing writer in general.
But here's the thing.
He's dealing with this idea: one of the storylines is there's an actor who died, but the studio owns his rights.
So they had this big poster about this new film that's coming out with him in it.
And later we find a monstrous version of, what if you created this AI version of this actor, and he's living a role where he's this vicious mafia enforcer, and bad stuff happens.
And it's dealing with that idea, you know. Highly recommended.
It is a horror novel, though, so fair warning.
I want to also say in this case, we've had a whole bunch of amazing responses from people, which I'm just going to quickly highlight a couple at the moment.
Dallas in Rochester said, no one ever talks about the Ex Machina movie.
I know we have numerous fans here in the room.
It's wonderful.
Miles says that David, from Prometheus and Alien: Covenant, is a great AI robot.
And Sky, in a YouTube comment, said: I have never recovered from the movie A.I.
I'm very sorry; I love it.
I'm sorry.
We'll talk about it later.
and like that.
Yeah.
I could spend an entire show just talking about that movie, because between Kubrick and Spielberg, and literally between Haley Joel Osment and, you know, Teddy the bear,
there are so many interesting things to talk about there, about the tech and how it relates to everything.
Rob says, WALL-E, come on, people.
So, you know, I apologize to WALL-E.
Yeah, I'm embarrassed.
And I think that's the thing: as we look at Hollywood and movies and art and technology, they've always been fighting each other as much as living in tandem with each other.
We can create such amazing movie magic using this stuff, but at the same time, we need to be careful, because we don't want to take anything from anyone else.
I think we like telling stories.
Sometimes we tell a story where the AI, like, say, WALL-E, is benevolent, and we want that.
And yet some folks would use tools that are antithetical to that.
You know, that's interesting: the human desire for this kind of savior robot, like, hey, can you fix this for us?
And yet we are actively, right now, possibly making WALL-E's world.
So it's so funny.
Our devils and angels, you know.
Yeah.
There are a million other movies we could have talked about, but is there anything else specifically that we didn't touch on that you want to mention?
I really like, Battlestar Galactica.
I think that has a very good portrayal.
Well, the one that I've seen is the more recent one, from the 2000s. And the bad guys, the Cylons, which are robots, are fighting the humans, who are looking for Earth.
And the Cylons look human,
so you don't know who to trust.
Like, who is the bad guy?
I think that was one of my favorite portrayals.
Adam.
Anything?
Oh, come back to me.
I have one, actually.
There's a TV one, I thought.
Well, one, I'm just going to say Data from Next Generation, but I can't get started.
There's a lot going on there.
But actually, what I'd love to talk about:
there's an episode of The Twilight Zone called, I think, "I Sing the Body Electric," from 1962, and it's a story written by Ray Bradbury, and it's about basically an electric grandma and this family, these three kids and a dad.
The mom dies.
And in this world, you can go and get an electric grandma to look after the kids.
At first, the two youngest love her instantly, and the oldest takes some time, because this is not her mom.
But when the grandma gets hit by a car and lives, all of a sudden she's like, you can't go away.
So that solves an interesting problem,
a thing that we can't fix as humans.
And then when they go to college, she's going to be retired, and she's going to go live as a consciousness with other grandmas.
And I think that's what we want, right?
Electric grandmas.
Absolutely.
I'm being told that we can't stay in the booth the rest of the day, so I have to wrap up.
But thank you so much
to my guests, Max Irwin, Megan Murphy, Adam Lieberthal, and Scott Lucas, for being here.
Again, we could have gone on for hours.
Thank you so much to the underwriters, supporters, listeners, and the team here at Connections.
Thank you so much, everybody, for listening.
This program is a production of WXXI Public Radio.
The views expressed do not necessarily represent those of this station, its staff, management, or underwriters.
The broadcast is meant for the private use of our audience.
Any rebroadcast or use in another medium without expressed written consent of WXXI is strictly prohibited.
Connections with Evan Dawson is available as a podcast.
Just click on the Connections link at wxxinews.org.