Connections with Evan Dawson
It’s here to stay, so how do we play nice with AI?
10/30/2025 | 52m 16s
Explore how AI will shape jobs, health, and ethics at the Flower City AI Conference.
The Flower City AI Conference explores how AI will shape our lives—from transforming jobs and education to revolutionizing health care. What questions do you have about its future impact? Join discussions on the promises, risks, and ethics of AI as experts unpack the good, the bad, and the uncertain sides of this rapidly evolving technology.
Connections with Evan Dawson is a local public television program presented by WXXI
>> From WXXI News.
This is Connections.
I'm Evan Dawson.
Our connection this hour was made in December of 2023, when a group of artificial intelligence professionals gathered in Rochester.
It was the inaugural Flower City A.I. event.
I was there.
The future felt slightly scary, but filled with all kinds of cool and sometimes weird possibilities.
Presenters talked about how A.I. would improve cancer research and medicine in general, how A.I. could assist musicians.
It was kind of a general view of it. By 2024, the second year of the conference, they were talking about adoption: how A.I. could change sign language or improve our writing, even asking, amid all this talk about A.I. replacing workers, what about replacing the managers in your company?
Well, now, the 2025 Flower City A.I. conference is coming, and they're talking about impact.
And to me, it feels like we are climbing a steep mountain path. As we climb, the technology can do more and more impressive things.
If we fall, the crash would be much more severe.
Some Connections listeners think that I'm obsessed with the fall, that I'm in the A.I. doomer camp, and maybe they're right.
But I can acknowledge that there are things that this kind of technology can do that would make life better for just about everyone.
And of course, cancer care is at the top of the list.
But there's a lot more.
The founder of Flower City A.I. is Max Irwin, one of the really smart and sober voices in this field, and Max is kind enough to come back on the program today.
Founder of Flower City A.I., an annual conference focusing on the many applications of artificial intelligence, and founder of Max.io, your company.
Welcome back to the program here.
>> Thank you so much, Evan.
It's great to be back.
>> Do you think I'm a doomer, an A.I. doomer?
>> No, I think you're pretty balanced.
You know, it's good to have both skepticism and optimism in this field.
>> Oh, wait till you hear the clip that I've pulled.
You're just going to change your mind.
But let me back up and say, I really appreciate what you try to do, which I think from the start in our conversations, whether we've talked about crypto, whether we're talking about A.I., or, you know, the different forms of the internet: you are not a cheerleader.
I think you understand that you need to communicate in a very sober and straightforward way with the public.
I think you're one of the better people in the field doing it.
And I think it's really important that we have that, because I've talked to A.I. cheerleaders, and that's fine.
I think that's great.
I mean, people really believe in it.
I view you as someone who, when I talk to you, I'm like, well, I'm going to talk to Max and see if he can pull me off the ledge here a little bit.
You've pulled me off the ledge before.
So let's start with this.
What is your conference all about this year?
>> So, as you mentioned, this year is all about impact.
So I haven't announced all of the talks.
The agenda is going to be finalized early next week, and then I'll announce all the talks.
I do have one speaker that I've already chosen, but the call for presentations is still open until the end of tomorrow.
Okay.
So if you want to give a presentation, you can submit it until the end of tomorrow.
>> And the kinds of folks who've presented have been literally people who work in tech, people who work in ethics, people who work in all kinds of different fields.
Right?
>> Yeah, absolutely.
It's a very broad spectrum and very diverse, which I try to do.
I focus on that because there are a lot of technical conferences that you can go to, and it's just like, okay, you get the talking heads in the room and they're all talking about the science fiction futurist stuff.
We're very realistic about what's happening, how it's happening, and how it's impacting the community.
>> What I think was interesting being at the first conference was, if you are not an A.I. professional, you could go to this conference and learn a lot.
It's still designed to be that way.
So it's not going to be so over your head that you just can't hang.
If you are an A.I. professional, it probably felt pretty cool to be in that space and share a lot of ideas, but I was surprised at how accessible it was for the public.
I think it is important that it is designed that way.
This year's event is Wednesday, December 3rd at the Little Theater. And what's the website for people who want to check out more of the conference?
>> Flower City A.I. dot A.I. And it's flower as in the lilacs, not as in the all-purpose flour.
>> So flower, F-l-o-w-e-r.
>> Flower City A.I. dot A.I.
So when you talk about impact, here we are in 2025.
If we had talked two years ago, as you were getting ready for your first conference, and I had told you where we are technologically with A.I. in 2025, would you have been surprised that it was this slow, this fast, this dangerous, this safe? What would surprise you?
>> Oh, that's such a good question.
It's definitely surprising.
I would have been surprised, but I think in different ways.
>> In what way?
>> I was surprised mostly by the scale at which everybody just jumped on it.
I think that's the surprising thing for me: right now I feel like you can't get away from it.
Like that's what so many people talk about in all facets of everything.
It kind of feels like a much quicker, sped-up version of the internet.
I remember when I was much younger and the internet happened, it took maybe ten years from the World Wide Web's inception until people were really, really using it day to day. Here, a year went by, and OpenAI had, I think it was, 300 million subscribers within a year.
That's a lot for one company.
>> No, it's an interesting observation on the time scale here, because what you are describing with tech, you could look at the automobile, you could look at cell phones.
Everyone can probably remember their first cell phone.
And obviously it wasn't a smartphone; most listeners could probably remember when it wasn't a smartphone.
And maybe you're like me: I remember seeing a car phone for the first time in the 90s and I was like, what is that?
First of all, why would you want a phone in your car?
Second of all, it's like the size of a football.
I mean, it's so big.
The quality was so poor, it looked ridiculous to me.
And I thought, this is never going to catch on, no one's ever going to do this.
Which of course was a very good prediction, because it goes from something that's kind of either a luxury or a curiosity that a few people have, to mainstream.
The automobile was that way, flying on airplanes was that way, cell phones were that way.
And now what you're describing with A.I. is: two years ago, it was in that curiosity camp.
Two years later, it's mainstream.
It is mainstream.
And I know there are some listeners who are going to say, I don't use A.I. What are you talking about?
You probably use it without knowing it.
But part of what Max is talking about too, is probably generational.
If you're in your teens or your 20s, everyone is using A.I. all the time.
Students are using it all the time.
Teachers are frustrated and trying to figure out how to structure their classes and examinations.
It's everywhere.
It's not a fringe thing. So it's the speed of that that you maybe thought would be a longer horizon.
>> Yeah, absolutely.
>> And businesses are using it, of course.
>> Of course.
>> I mean, there was, you know, there was A.I., and I use the air quotes for the people who are just listening and not on YouTube.
There was A.I., quote unquote, before 2022 and 2023, but it was very niche and very reserved for highly technical business use cases.
And, you know, Siri has been around for a long time, and that's A.I.
But those were very early technologies, and there were a lot of little tricks that you could do, but they were very specific and niche.
This is more just widely available; it can do a lot of things generally.
Which, you know, there's general artificial intelligence, which we haven't really gotten to yet.
>> But thank God.
>> They're there, you know. You could go, and I'd say it was the first time it really passed the Turing test. Listeners may be familiar with the Turing test, where...
>> Yeah, go ahead.
>> So Alan Turing posited, to sum it up, that if you can't tell the difference between talking to a machine and talking to a person, the machine passes a certain test of artificial intelligence.
And, you know, you can go and chat with one of the many chat models out there, and it would be hard to tell if you're talking to a person or to a machine.
>> I have had situations where I'm dealing with, for example, getting plane tickets and rerouting a flight, and I'm talking to a chatbot.
But yeah, now it's pretty clear to people that these are chatbots, but as recently as like a year ago, they weren't listed as chatbots.
It was like, now you're going to talk to Jim, right?
And I'm asking Jim, like, are you a person?
Yeah.
Are you a human being?
And Jim's like, I'm not, but I can still help you.
What do you need to know?
And I'm like, this is so weird.
Yeah.
So that's ubiquitous now.
It's just everywhere.
The business application of it is speeding up.
So yes, I think that's an interesting observation, that the adoption is quicker and scaling up faster. Adoption, again for the people on YouTube, is like a flat line, then slightly up, then very steep.
And all of a sudden, if you didn't want A.I. in your life, you're like, why? This is not democratic.
My mother never wanted an iPhone.
She's got an iPhone.
She's still terrible at voice to text.
It's really beautiful reading her voice-to-texts. They are a work of art of terrible texting.
but she didn't want that.
She didn't.
She liked her life.
So for people who feel like this is undemocratic: it's happening fast.
It is really impacting life quickly.
So your conference will help people at least see, I think, a range of it.
We talk a lot about GPT. We talk about it in the classroom.
But I've been struck by Flower City A.I. covering a wide, wide range here, which I assume is part of the intention, too.
>> Yes, absolutely.
And so it's a conference for everybody.
So that's what I say: if you're just mildly curious about what's going on and you don't have any background, you can show up and you'll understand most of the talks.
We do have a couple of technical talks, because I do try to serve that audience. Last year we had a very technical talk from my friend Chad, and he talked about something called embedding optimization, which is a highly mathematical, geometric, spatial field.
And for most of the audience, it went over their heads.
But I loved it.
And, you know, there were a handful of people in the audience who loved it.
It was very interesting.
But then we had Katie Born from Saint John Fisher last year. She directs education at Saint John Fisher, and she talked about what's going on there with students: what are they doing, how is the school trying to get on top of it, and how are the students managing the change?
And that's for everybody, right?
So we have a full spectrum of technical and non-technical talks covering different fields.
>> Let me read a couple of emails that have come in already. And I'll tell the audience here, I would love to hear from listeners: do you agree with Max's characterization, or my characterization, that the adoption of A.I. has been really fast and that it's in widespread use?
Are you using it sort of intentionally in anything creatively?
Are you using it for work?
You know, people using it as students?
844-295-TALK if you want to call the program toll free, that's 844-295-8255. 263-WXXI if you call from Rochester, 263-9994. You can email the program, connections@wxxi.org.
You can join the chat if you're watching on YouTube.
Charles writes to say: on a regular basis, my GPS tells me to turn right onto Chili Avenue, pronounced like the food, not like the town, and my Roombas get stuck on nothing about once a week.
I'm not worried.
That's from Charles, who's probably responding to me thinking, well, eventually our robot overlords are going to take over. He says, my GPS still doesn't know how to pronounce Chili Avenue; therefore, I don't think it's moving quite as fast as people feel.
There are some examples you could point to. I think that's very funny, by the way, and I agree with him that there are times I go, wait a second, how come autocorrect is so bad at this?
Autocorrect, I never use this word.
I always intend that word.
How could you not figure this out by now?
But then there are times where I go, oh boy, it's really good.
It's really fast.
It's moving quickly.
And it's not a straight line, but how do you see the development and the deployment of this?
>> Yeah.
So when I hear things like that, I mean, absolutely, it messes up a lot of times, right?
Yeah.
I don't like to compare machines and people, but I will say, you know, you could probably pull somebody out of Saint Louis and they'd pronounce Chili incorrectly.
Yeah.
Right.
So there's a lot of context and nuance to some of the more detailed problems.
And it's easy to go and try to poke holes in things and say, oh, it didn't do this one thing, and therefore it's entirely fallible.
As a broad statement, I disagree with that, because people are good at certain things and not great at others, and you can't just drop them into any situation.
And the way that A.I. is trained, a lot of these chatbots, they're trained on the data they have access to.
And if it didn't have a sound bite of someone pronouncing Chili the local way versus the usual way, then it's not going to know that.
So then it can't succeed at that task.
>> My general thought about technology, and especially artificial intelligence, is that you don't have to worry about the technology.
You have to worry about the people using the technology for different purposes, nefarious purposes.
Right?
Or people doing things without responsibility or accountability and just like, oh, I don't care what happens, let's just do this thing.
And they're not thinking about the impact on society or people.
So I find those things to be far more dangerous than, like, your Roomba bumping up against you.
>> Well, in a moment we're going to get really doomy.
But before we get there, Debbie writes in to say, I had a conversation with my niece.
Neither of us knows what A.I. is, other than the stupid videos people post where someone has seven fingers or an elephant is attacking their child on a porch, kids cheating in school, or the Meta thing that tries to take over my writing.
I only see negative uses for it so far, and I know that it will be misused for nefarious purposes.
I hate that I now have to look at every video as if it could be altered.
What is A.I., really, and why do we need it?
That's from Debbie.
>> Debbie, I spend a lot of time trying to answer this question. I've actually been on Connections before talking about this, I think six months ago and even before then.
So I'm going to give you a very simple definition of artificial intelligence.
Artificial intelligence is software. For the most part, people make software, but artificial intelligence is software that is created by learning patterns in data.
So all of this is very purposeful and mathematical.
You have three ingredients for artificial intelligence.
You have data, the thing that you want an artificial intelligence model to learn from.
You have a task, a very specific thing that you want it to perform.
And then you have a metric to decide how well the software will perform at that task, given the data.
So what you do is you set up a specific algorithm, and we call this training: you have a whole bunch of machines, and they go and churn through the data, trying to complete the task and improve the metric.
Now, you have to distill all of these things down into mathematics.
It's a very, very purposeful thing that you have to do.
You can't just give it a general task.
The task for GPT, if you use OpenAI's GPT, is very simple: I'm going to give you a bunch of words in a sentence, but I'm not going to give you the last word.
Tell me what the next word in the sentence is.
Then it will guess the word, and the metric is: did you guess right or not?
So if you do this literally trillions of times, eventually it gets better at predicting the next word.
So that's why when you use something like ChatGPT, you'll see it produce little pieces bit by bit, because it's just running through that process of showing the next word and the next word and the next word and the next word.
It's gotten very, very good at that task because of how much money and time and expertise the company has put into solving this task.
But that's pretty much it.
So all of these fancy things that you see, and I agree, A.I. is, I like to use the word mysterious, because for most people it just looks like magic.
Some magician shows up, like in the old times, and produces some magical thing.
But it's like a magician's trick: there's always an explanation behind the scenes.
And that's the explanation.
It's just a very, very precise mathematical description of how well it performs a very specific task.
And then overall, it does really well and everybody's mystified and I can go into definitions of how it does this for images and video.
But, you know, it's very similar.
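Max's three ingredients can be made concrete with a toy sketch. This is a hypothetical miniature for illustration only, not how GPT actually works: real models train a neural network over trillions of examples, while this just counts which word most often follows which. The corpus and function names here are invented for the example.

```python
from collections import Counter, defaultdict

def train(corpus):
    """Ingredient 1, data: count which word follows which in the corpus."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(model, word):
    """Ingredient 2, the task: guess the next word given the previous one."""
    return model[word].most_common(1)[0][0] if word in model else None

def score(model, held_out):
    """Ingredient 3, the metric: the fraction of next words guessed right."""
    right = total = 0
    for sentence in held_out:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            total += 1
            right += predict_next(model, prev) == nxt
    return right / total if total else 0.0

model = train(["the cat sat on the mat", "the cat ate the fish"])
print(predict_next(model, "the"))  # "cat" -- the most common follower of "the"
```

Training at scale is just this loop made enormous: churn through the data, guess the next word, check the metric, adjust, and repeat until the guesses get good.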
>> Yeah, I think that's a good description.
I hope that helps.
Debbie, send us a note if you're still confused.
But I appreciate that note.
And I want to share with you one of my observations on what has surprised me in the two years since your first conference.
Maybe what has surprised me the most is this: two years ago, you would hear teachers, college professors, people who work in Hollywood saying things like, I would always be able to tell the difference between an A.I.-written product and a human-written product, because the A.I. doesn't have a soul, the A.I. doesn't have the human experience, doesn't have the emotional wiring to produce anything of real value. It will produce words, but I'll be able to see which ones my students wrote, or which ones were really created by humans.
And I think this is where I keep remembering what Kevin Cyrus told me on this program.
A tech entrepreneur who's back in Rochester now, a very smart guy.
If you put in a bad prompt, you're probably going to get a bad product.
Whether it's GPT or Suno or any of these other platforms, bad prompts lead to junk A.I.
But if you put a more sophisticated prompt in, you're getting stuff, at least in my view, where it is not easy to discern whether it was made by humans.
So in that sense, the A.I. is getting good.
In my view, that's kind of disturbing.
And the idea that people had two or three years ago, like, well, it doesn't matter because I'll always be able to tell? I don't think that's the case anymore, and I don't think that will be the case.
It is very impressive when it creates poetry, music, writing, when the prompts are really good.
What do you think?
>> Yeah, it's getting better at these things, right?
But, you know, I've seen a leveling off of the capabilities.
Now the companies are working on, like, little fringe features to try to improve things, but I think we're kind of hitting a limit in certain cases.
>> I'm excited to hear more about this.
I like this, I like the idea of A.I. hitting an upper ceiling.
>> Well, I mean, it's doing great at this task. And, like, how much better at this task can it get?
I'll equate it to another time: when I was a student and the internet was available, it was very easy to go and just copy something off of the internet that maybe the teacher hadn't seen, and then, oh, I'm going to submit that as my own.
Right.
And that's cheating, right? Submitting a paper that is not your own, whether it's written by A.I. or taken from a book or something, is plagiarism.
So what I talk about in technology is the cat-and-mouse game: as a teacher, you try to figure out whether the student is doing it, and you figure out one trick, and then the student comes up with a different trick to out-trick the teacher, and you go back and forth.
And I think that going down that path of using software to decide if the paper was written by A.I. or by a student is a bad use of time.
I think a better use of time is to throw your hands up and say: this whole process, where somebody types a paper and emails it to me and I grade the paper as it was typed, maybe we need to look at that differently.
Maybe that is just an obsolete way of thinking about education.
So instead, and teachers, you're going to hate me for saying this, maybe you have the students handwrite it, right? Or maybe they have to give an oral presentation or take a quiz on their paper.
Then it's more about understanding whether the student has gained knowledge, instead of figuring out whether the student has completed this homework assignment.
And I think that's the way to think about it, especially now with artificial intelligence.
>> Don in Rochester, I'll take your call on regulation in just a second. But kind of building on that point, I think the reason that a lot of people, teachers, but also a lot of other people, thought several years ago, well, A.I. will never get very good at this, I'll never have to worry about this, is because you think, well, robots don't have a soul.
Here's an argument I would like to posit to you, and you tell me what you think, right?
The reason A.I. is good at it, the reason A.I. can write poetry and songs and papers and analysis that seem soulful, that seem like they have depth, is not that the robot that's writing has the depth.
It's that it has consumed human-made material, countless reams of it, to your point, more than we could ever count.
And it is drawing on actual human creation to recomposite something quote-unquote new, but it's actually just copying off of the human experience, whether that's songs or other things.
And so the product may seem actually pretty emotional.
It might seem like it has a lot of depth, but it is stealing.
We can't track what, but it's stealing from existing material from humans.
What do you think?
>> Well, that's absolutely correct. It just does what it's seen.
>> Is the word stealing correct?
>> Not if you say the machine was stealing, because the machine doesn't steal. The machine just does what a person did. So, like...
>> Is it a form of theft?
>> That is, so...
>> I think it's in the courts in a lot of places.
>> Yeah.
>> Well.
Recently, Anthropic lost a class action lawsuit.
I have a coauthored book, and there are a lot of authors of books out there, and there was a whole corpus of books that was basically used by Anthropic and other companies like Meta, and probably OpenAI, to train models on.
And the courts decided that that was copyright infringement: you can't do that. It wasn't fair use because you didn't buy a copy of the book.
If they had bought a copy of the book beforehand, then it would have been a little bit different; then they could have used it.
But they said, you did this first, and then you tried to cover your tracks by going off and buying it, and you can't do that.
So I don't know the exact amount that Anthropic must pay, but I think it's over $1 billion, I think it was like 1.5 or something.
But sorry, internet, don't quote me on that.
I don't have the exact number in my head, but I remember it being quite a substantial amount.
So in that case, the courts decided that what the people behind Anthropic did was theft, in not so many words.
But the machine didn't steal anything.
The machine is just software; it's just programmed by people to do something.
>> Yeah, okay.
That's fair, but I think this will be an ongoing question.
>> Absolutely.
>> Yeah.
Yes.
Don in Rochester is on the phone.
Go ahead, sir.
>> Thank you for taking my call. I want to ask your guest what he thinks about governmental regulation of this situation.
These companies are not developing this thing out of the goodness of their hearts, or to benefit humanity.
They're only developing it to compete with each other, to get profits.
That is what it's all about; whether it has any benefit for humanity is beside the point.
They're in this for profit. Meta, Google, all of them are competing with each other to gain profits.
If they had some reasonable regulation, and I know regulation is a dirty word in Silicon Valley, but if they had some reasonable regulation to maybe even slow down the process a little bit, to guide the process a little bit, what does your guest think of that?
>> Good question, Don.
>> Well, first of all, I agree that there should be some regulation.
Exactly what that regulation is, is hard to pin down, and I'm not a regulatory expert.
But there was some legislation proposed earlier in the year in New York State, a bill to try to make transparent the data sources a model was trained on, which I think is a good way to approach it.
I think it's harder to say, you can't use this.
But if you look at the history of regulation with new stuff, like automobiles and even substances like cigarettes, it's always way too late. The technology comes out, there are automobiles, and, oh, maybe we need traffic lights, and maybe we need unleaded gasoline, and maybe we need these things because they're causing all this harm.
So the regulation always catches up in the United States.
Europe takes a different point of view. They look at stuff and they're like, no, you can't do this yet; we've got to figure it out first.
But here we're much more reactive.
>> Innovate, put something to market, see if it causes harm, then regulate ex post facto.
>> Yeah, I'm not sure I necessarily agree with the way that...
>> ...the way you're putting it, but that is how it happened.
>> But that's how it happens. Yeah.
>> Don, thank you.
When we come back from our only break, I've got a piece of sound from Max, and I'm very curious to know if he's going to agree or disagree.
I think I know what he's going to say, but I always love the conversation with Max Irwin.
He is the founder of Flower City A.I., an annual conference focusing on the many applications of artificial intelligence that always pulls together a lot of interesting people, and there's still a call for speakers.
They've got some folks planned for the December 3rd iteration of this conference.
It's coming up in just about, what is that, six weeks from now? Something like that.
Wednesday, December 3rd at the Little Theater.
But if you or someone you know might be interested, Max is the person you want to talk to.
Flower City A.I. dot A.I. is the website.
And this is going to be the third annual conference.
We'll come right back on Connections.
Coming up in our second hour, a new survey on the state of hate in our region is going to be debuted next week.
We're going to find out what participants in that survey had to say about hate and how it impacts our communities.
And this comes at a time when the Mobile Museum of Tolerance is rolling into Rochester.
We'll tell you all about it next hour.
>> Support for your public radio station comes from our members and from City magazine, covering arts and culture in Rochester and the Finger Lakes.
Since 1971, the online city calendar covers everything from live theater to local music.
More at their website.
>> All right, we are going to fact-check you on the Anthropic lawsuit.
>> Oh.
>> You ready for this?
>> Yes.
>> What did you say? It was 1.5 billion?
>> I think so.
>> That's correct. That's correct.
According to Google A.I., Anthropic settled a major copyright lawsuit with book authors for $1.5 billion last month.
gorp on YouTube says: I still get amazed when I tell A.I. to make a picture of giraffes riding a merry-go-round, and in seconds, there's a picture of giraffes riding a merry-go-round.
I mean, yeah, you can tell it to do a lot of things.
>> I'm going to say, now ask it to give you a picture of a giraffe with a short neck and see what happens.
>> Okay.
Why?
>> What's the...
>> Because it's never seen a giraffe with a short neck.
>> Oh, but couldn't it intuitively figure out how?
>> Well, I saw some interesting examples this week by a researcher and they all failed miserably.
>> Okay, very interesting.
And an email from Dave, who says, regarding using it at work: I'm a software developer. We've been instructed not to use A.I. at work because of concerns like copyright, passing our copyrighted code to the A.I. engine, or general uncertainty.
But I've purchased my own subscription, and I'm using it on my own time to learn, build applications, and explore options.
So interesting. What Dave is describing there is probably the kind of debate a lot of companies are having, which is: can we use it? Are we good with regulation and copyright?
But I suspect there are just as many companies who are saying, the cat's out of the bag, use it.
What do you think, Max?
>> Well, I use it. Whether or not I'm productive with it depends on the task and what I'm trying to do.
It works for very small things.
It doesn't work for, like, a large software development, overarching, big-project type thing, but it does work at, like, getting your initial idea out the door and doing little things.
What I find personally, and we're actually going to have a talk on this, there's a whole theme about vibe coding. That term may be familiar to some people.
It's the idea of, I'm asking the A.I.
to write software for me, and then it gives you something. Does it work or not?
So.
>> One thing that I see a lot of is that people complain that they actually lose productivity, because instead of writing the software mostly correctly the first time, the way it is in their head, they write the prompt, they get the software back, and then they spend all of their time going and fixing what the A.I.
output.
And that's a much less fun task for a software developer, to fix some nonsense that A.I.
gave you, instead of being creative
and in the process of creation.
And most people who write software are in it for the creative and problem-solving mode.
>> Yeah, I agree. I think when it can be used as a creative tool, that's a very different deal than, I gave it a prompt of some kind,
it created something, but there's probably some holes, and now I've just got to do a lot of work to try to fix what it did wrong.
That's very different.
Yeah.
Dave, thank you for that email.
And here's an email from Chris, who says, thanks, Evan, for a great program.
So many companies are begging us to subscribe and use A.I.
services.
I'd like to opt out.
Will anyone at your guest's symposium be presenting on that topic, how to opt out?
So it's an interesting idea.
Go ahead.
Max.
>> I have equal frustration with this, because the way that the product development process works in a lot of companies is that they build something and they try to push it on you because they've built it.
That process is backwards.
They don't figure out what you actually need.
They just build something because they find it interesting or it's the new hype and they do that.
It depends on the software.
If I get emailed stuff like, hey, try my A.I.
thing, honestly, I put a lot of those into spam, because most of them are written by A.I.
in the first place.
It's just mass-produced email.
And I just send it to junk.
But certain things I just don't use.
And yeah, the thing is, I use Zoom a lot, and Zoom is like, use our new A.I.
workspace technology.
I'm like, I just want to meet with someone.
I don't need that.
But, you know, with some customers, I have, like, a note-taking thing that is A.I., and I do find that useful.
I don't know. How do you opt out of a product when the product just totally changes under you and you don't like it anymore? You stop using the product.
I mean, if you miss the old version, I don't think they're going to switch back for you.
But maybe if enough people stop using it, then maybe they'd get the hint and say, oh, engagement went down, and now we have to go back and change things.
>> Yeah, I would say also to Chris, this is what I mean when I say that technological changes tend to change culture.
And it's an undemocratic change, which just means if there's enough momentum, it's going to feel like a tidal wave, and you're going to feel like you have to go along.
That's why I go back to that reference to my mother.
She eventually felt like she had to get an iPhone; she didn't really want to do it.
That was an undemocratic change in her life, but she felt like she couldn't not.
So she still has a landline, by the way.
And my older son thinks that's amazing.
He thinks it's so cool.
But but Chris, to the point about, you know, if you can opt out.
Yeah, the market will listen to people, but the market is moving fast.
It's going to be hard to opt out totally of A.I.
I think there will be individual sectors that we might see some pushback.
And I could be wrong about this, but an example would be the arts.
An example would be music or film.
So take music.
Jimmy Highsmith, a great local musician, said on this program that he thinks we're going to start creating two categories of music:
just music, and then synthetic music.
Synthetic music created by A.I.,
or largely or partially created by A.I.
And some segment of the audience won't care.
They'll say, well, it's a good jam.
I'm going to listen anyway.
And some will say, no, I don't want to support synthetic art.
I want real art created by real people.
And it may carry a label, in the way that music carried warning labels, if you remember those and things like that.
So maybe we'll see that kind of label in the art world, and maybe people will decide on their own.
I'm not going to support this.
We had a conversation about the A.I., quote, unquote, actor. There was all this hype a few weeks ago about this:
well, the agents are going to sign the first A.I.
actor, and studios are going to want this A.I.
actor.
Who knows if that's actually going to come to pass.
But nobody seemed to like it.
That was not viewed very favorably.
And then the other thing is how you respond when you do see A.I.
that you think is distasteful.
So next week on this program, we're going to talk about a very political video that a group of Republicans in Pittsford made.
Have you seen that video, by the way, Max, have you heard about this?
>> I try not to watch political videos.
>> Okay.
so here's the short story.
And again, I don't know exactly the genesis of it.
I don't know that any one campaign was behind it.
But there's an election for town board.
There's an election for supervisor.
And this is a video.
It's like a minute, minute and a half long, of quote, unquote, people who you see in places around Pittsford warning the camera, like, hey, there's an election,
and the very identity of our town is at stake,
and we could lose what it means to be Pittsford.
And all of a sudden, storm clouds are forming in the background.
And it's kind of cheesy, but the actors look like real people.
They are not.
They are A.I.
actors.
That's what it says at the end of this video.
And a lot of people were like, whoa, you couldn't find real people to send this message.
You used A.I.
actors.
Three out of the five actors are people of color in a 92% white town.
I mean, like, people felt a little icky about it.
And we're going to talk about it next week on this program.
But the response was quick, Max. People were like, an A.I.
political ad? Like, not people?
It was not viewed very comfortably,
I would say.
>> Yeah.
And this goes back to me saying that it's not the technology that causes the problems, it's the people wielding the technology that do stupid stuff like this.
And it's infuriating.
The problem is also that, well, yes, this one thing got called out, right?
It was obvious, and I didn't watch the video, but I've seen the fuss about the video
in various places.
Yeah.
The problem is that, well, there's that one video, but what about all the others that, you know, people see, and then there is no fuss, because it just kind of goes under the radar and nobody sees it happening.
And there has been a lot of manipulation in the political sphere in the past.
Forever.
But especially, especially recently.
Yeah.
And now we talk about scale, and the bar is so low to be able to do something like this, it's so much easier to be manipulative. You know, 20 years ago it would take an entire team, like, a year to be able to produce a fake video like that.
But there were photos with Photoshop and stuff like that.
You can go back for a while and say, oh, this was fake,
and somebody faked it, and then somebody could see that it was faked.
But most people don't understand the technology.
>> Right?
>> I get the point.
>> There's been manipulation.
>> There's been manipulation.
>> This is just the newest iteration of that.
>> Yes.
And it's far more prevalent and much easier to access is the problem.
>> Here's an email from a listener who's 88 years old.
Are you ready for this?
He says, I use A.I.
all the time.
I can summarize articles from the New Yorker.
I use it to perform linear regressions.
It gives me an output with plots and confidence intervals.
I learn things I didn't even know I didn't know.
I think it's fascinating.
I use it all the time, and I only hope that other people don't screw it up, because that's certainly a possibility.
That's Jim, 88 years old.
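For listeners curious what the regression workflow Jim describes actually involves, here is a minimal sketch using only the Python standard library. The data points are made up for illustration, and the confidence interval uses a normal approximation, so a t-based interval would be slightly wider for a sample this small:

```python
import math
from statistics import NormalDist

# Toy data for illustration; a real analysis would use Jim's own measurements.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]

n = len(x)
x_mean = sum(x) / n
y_mean = sum(y) / n

# Ordinary least-squares slope and intercept
ss_x = sum((xi - x_mean) ** 2 for xi in x)
slope = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y)) / ss_x
intercept = y_mean - slope * x_mean

# Residual standard error of the fit (n - 2 degrees of freedom)
residuals = [yi - (slope * xi + intercept) for xi, yi in zip(x, y)]
s = math.sqrt(sum(r * r for r in residuals) / (n - 2))

# Approximate 95% confidence interval for the slope (z-based normal
# approximation; a t-distribution interval is a bit wider for small n)
z = NormalDist().inv_cdf(0.975)
half_width = z * s / math.sqrt(ss_x)
print(f"slope = {slope:.3f} +/- {half_width:.3f}")
```

A chatbot doing this for Jim is, in effect, generating and running code along these lines, then rendering the plots and intervals back to him.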
>> Jim, I'm in the same boat, because that's how I use A.I. I am all about knowledge and learning and expanding myself in various ways.
And I think it's a fantastic research tool if you use it the right way.
Sometimes it can give you nonsense.
My background is in information retrieval and knowledge discovery, where I like to use source material that is summarized and then cited and then backed up.
And it's so good at doing that.
And that's how I use A.I.
in my business.
And the work that I do.
It's the way that the technology, in my opinion, was meant to be used: as a knowledge tool.
>> Leveling us up.
Yes.
Not stripping us of critical thinking ability.
Yes.
Yeah.
And, Jim, I think his email is spot on in that way.
He is getting better at a lot of different tasks himself, and he's saving time.
He might say, here's a New Yorker article I'd love to read.
I'm not really sure I'm going to read it.
I'll take a summation, I think.
Yeah.
Great.
Right?
>> Yeah.
Great.
>> Good email from Jim.
Jim should go to the conference, by the way.
>> Jim, we'll see you there.
>> December 3rd at the Little Theater.
Flower City A.I.
is the website.
Tickets are available.
And before I forget, I did want to mention the tickets and what you're doing to help a local cause there.
>> Yeah.
So, if you don't know, the federal government is shut down,
and SNAP benefits have been impacted by that.
I opened up a special ticket for Foodlink donation.
If you buy a ticket, if you go to the website and then go to buy a ticket, you'll see a Foodlink donation ticket.
It's a $50 ticket and all $50 will go to Foodlink.
And you get a ticket to the conference.
Normal tickets, you pay $40.
That's the normal price.
I made this one $50, and all $50 goes to Foodlink.
And then there's a nominal fee for Eventbrite, which is the ticket provider.
I'm also going to say that, you know, the conference is a not for profit conference.
We don't do this to make money.
We do this for education, for empowering the community to get people together and talk about what's happening.
Personally, as part of this, I'm going to donate $1,000 to Foodlink for this cause, because of everything that's going on.
So please go.
And match my donation, buy a ticket.
And if you want to buy a ticket but you can't make it, then buy a ticket for somebody else who can come.
We'd love to see you there.
It's going to be a very, very amazing day.
We have so many good ideas and speakers, and I've heard from multiple people many times that it's the best conference they've been to.
We are extraordinarily diverse in our viewpoints, and in what people talk about and who they meet.
The Little is just the best venue for it, because everybody's there talking, learning, enjoying, reflecting.
So please do come.
And I really hope to see you there.
>> All right.
Back to correspondence here.
Sheila says Evan, great program today.
One thing I'm wondering about is the ability to trust correspondence in business and academics if we get too dependent on A.I.
For example, I recently spoke with a manager of a company who was asked to write a letter of recommendation for an employee for graduate school.
He said he used ChatGPT and it did a better job than he ever could, but the letter was not a personal recommendation.
Or was it?
That's what Sheila's asking.
>> I've got a couple thoughts on that.
So just before this, I was in a meeting with someone, and we were co-writing some notes to send to different people, and we tried to use the Google A.I.
to, like, write this email.
And I think it did a terrible job,
because it wasn't in our voice.
And so we scrapped that and we just went back to writing them.
They were just quick notes, but I see there's a funny thing that happens: people write a little prompt and they use A.I.
to generate this long email.
And then they send the long email off, and the person on the other end is like, this email is too long,
I'm going to summarize it.
And then they use A.I.
to get it back down to a couple sentences.
So I think that's a very funny and kind of annoying use of A.I.
I've seen some really interesting examples of things gone wrong with email and A.I.
It's kind of your voice, because it's your intention, and hopefully you're at least reading it before you send it.
But I find that it's not much of a time saver.
Just write it yourself.
>> Yeah.
Letter of recommendation.
If someone was coming to WXXI with a letter of recommendation and they said, now, this was written by ChatGPT, my response would be, then I don't really want to read it.
I want to read a human being's actual words, not some sort of prompted recommendation.
But I'm with you, Sheila.
We're probably going to see more and more of that.
Okay, so now to the doomy part.
You ready?
>> I'm ready.
>> Have you read Eliezer Yudkowsky's book?
>> No.
>> If Anyone Builds It, Everyone Dies is what it's called.
Eliezer Yudkowsky was recently on with Ezra Klein of The New York Times, and he is someone who is an A.I.
researcher.
He's worked in the field, and he basically stopped doing the work when he came to the conclusion that the guardrails weren't going to work and there wasn't enough ethics built into the entire effort there.
And he does believe now when he says, if anyone builds it, everyone dies.
He's talking about artificial superintelligence, which we've not arrived at yet.
He thinks we're 10 to 20 years away, and his view is something like this: he said, if you were building a new building in your neighborhood, if you wanted to put in a community center, when you put the community center in, you would probably kill an anthill.
Now, you wouldn't set out to kill the ants.
You weren't like, we've got to kill those ants.
It would be collateral damage in something that was not related to your interaction with the ants.
They were just not a consideration for you.
He said artificial intelligence is going to look at humans the same way when it is programmed with tasks and it has a superintelligent capacity to carry those out.
And so Ezra Klein says, okay, but what if you train for that?
What if you plan for that?
Can you make sure that that doesn't happen?
I want to listen to what Yudkowsky said about that.
>> The way I would expect it to play out, given a lot of previous scientific history and where we are now on the ladder of understanding, is somebody tries the thing you're talking about. It seems to, you know, it has a few weird failures while the A.I.
is small. The A.I.
gets bigger.
A new set of weird failures crop up. The A.I.
kills everyone.
You're like, oh, wait, okay, that's not it.
It turned out there was a minor flaw.
There you go back, you redo it.
You know, it seems to work on the smaller A.I.
again.
You make the bigger A.I., you think you fix the last problem, a new thing goes wrong.
The A.I.
kills everyone on Earth.
Everyone's dead.
You're like, oh, okay, that's, you know, new phenomenon.
We weren't expecting that exact thing to happen, but now we know about it.
You go back and try it again.
You know, like three to a dozen iterations into this process.
You actually get it nailed down.
Now you can build the A.I.
that works the way you say you want it to work.
The problem is that everybody died at like step one of this process.
>> That's Eliezer Yudkowsky talking to Ezra Klein of The New York Times.
What do you make of that, Max?
>> I don't know if I'm that much of a doomer on A.I.
I don't see that A.I.
being this superintelligent thing is going to, you know, rise up against us.
That's very sci-fi and very, like, you know, the Terminator movies type thing.
Right?
That's not what I see as the problem.
>> I don't think that's what he's saying either, though.
>> I see the problem as, like, if you think about how we're building the technology, I see a worse problem: all of our global warming and climate change goals are just out the window, because in the past several years the United States has built a thousand data centers for A.I.
And you think about all of the energy and all of the water required to run these data centers.
I think that's a much bigger problem than worrying about, oh, the algorithm is going to be superintelligent and decide that, you know, in some future we're a hindrance,
or we're just collateral damage in its goals.
Like, I don't see that as a problem, because I understand how the technology works, and I don't see the technology getting us there in 20 to 30 years.
>> Okay.
I appreciate you bringing up the energy thing.
And that's a whole other conversation at some point because it is an issue.
>> Yeah.
Well, that is, so I've only announced one talk so far, but I announced it in email.
I have an email list of people who I know and who have come to the conferences, about 300 or 400 people. To that group I've announced a talk by Madhura Anand, who works for the city of Rochester, specializing in A.I.
and data science.
And she's going to talk about energy use in A.I.
And that was the only one that I announced.
I just accepted it right off the bat, because I've been waiting for somebody to propose a talk on this.
>> It's important.
>> And I'm really excited to see her do this, you know, present this material.
It's so topical and so important for people.
>> I think you and she should come on after the program and tell our listeners about it.
>> She I don't know if she's listening right now, but she's going to be very surprised that I just said her name on the radio.
>> Okay.
Well, she's welcome on this program to talk about this, and we really should.
Absolutely.
Just briefly, what confidence interval?
Give me a percent chance.
Do you think Yudkowsky is correct,
that humanity is in real trouble?
>> In that case, in what he described, I'd say like 1 to 1% ish.
>> Okay.
>> So here's my rejoinder to that. I would like to think that's correct.
If I told you I got a new car for you.
And this car is amazing, it's going to make your life better.
It's going to get you more efficiently to work.
It's going to be faster, but it's going to consume less.
There's a 1% chance that someday when you turn the key, it will blow up and take all of humanity with you.
But that's only 1%.
There's no way in the world you would get in that car, even if it was a 1% chance.
>> You're probably right.
Yeah.
But, you know, we just had, like, a massive recall of Teslas, because they were just powering down on the freeway.
So, I mean, people are still getting in Teslas,
you know.
>> I'm with you there.
I'm just saying, when I hear
Dario Amodei and others saying, like, well, I think it's only like a 5% chance, I'm like, 5% is 1 in 20.
Yeah, a 1 in 20 chance that civilization is doomed here within decades because of what you're creating.
I don't think people are grappling with that.
I would like us to grapple with that more, that's all.
I'm not saying I think it's definitely going to happen.
I just want us to be.
I'd like us to have a future for our children to grow up in.
>> Well, I.
>> Put a cap on it, because you're gonna hear the music.
Go ahead.
Final thought.
>> Just pay attention to what's going on.
Come to the conference, make your own predictions, know what's happening, and choose to use it or not.
And I hope to see you at Flower City A.I.
>> When I went two years ago, it was one of the most empowering and enlightening things I have done in regards to A.I.
I walked out after that day feeling like I am way smarter than I was when I walked in,
because of the people that you pulled together. You do a really, really great job.
It's a great service to the community.
And there's more information at Flower City A.I.
Like the flower, f-l-o-w-e-r, I guess.
Yeah.
Max Irwin is the founder.
Have a great time.
Come back and talk to us sometime soon.
Well, we'll look forward to talking to the speakers you've collected for people who can't go, but I'm sure a lot of our listeners will see you on December 3rd at the little.
>> See you there.
Thanks, Evan.
>> Thanks, Max.
Great conversation as always.
More Connections coming up in just a moment.
>> This program is a production of WXXI Public Radio.
The views expressed do not necessarily represent those of this station.
Its staff, management or underwriters.
The broadcast is meant for the private use of our audience.
Any rebroadcast or use in another medium, without express written consent of WXXI is strictly prohibited.
Connections with Evan Dawson is available as a podcast.
Just click on the link at wxxinews.org.
