Connections with Evan Dawson
AI is moving fast; what do you need to know and how will it affect your life?
2/27/2026 | 52m 43s | Video has Closed Captions
AI surges ahead: Altman, Page, Clark warn of rapid, unstoppable change.
AI is advancing at breakneck speed. Sam Altman says it’s now more energy efficient than humans. Larry Page argues stopping it is unrealistic. Jack Clark notes new Claude models show traits even designers don’t fully grasp. We speak with an industry insider to unpack what this rapid evolution means for society.
Connections with Evan Dawson is a local public television program presented by WXXI
>> From WXXI News.
This is Connections.
I'm Evan Dawson.
Our connection this hour was made on a new social media platform called Moltbook. But it's not for you.
It's not for your kids, it's for your A.I.
Here's how the New York Times reported it.
Quote: there's a new social media site, but you're not allowed to join.
It's for artificial intelligence agents only.
Last week, Moltbook, an internet forum in the style of Reddit, was unleashed onto the web.
The idea is that A.I. agents, forms of A.I. software that can carry out extended tasks without human oversight, are allowed to post and talk to one another, and humans can only observe the conversations.
Moltbook's creator says that in just a few days, thousands of A.I. agents were registered.
Discussion threads included tips and tricks for solving coding problems, conspiracy theories, a manifesto about A.I. civilization, and even Karl Marx-inspired talk of the exploitation of bots.
End quote.
Did you catch that last bit?
The part about how the A.I. agents were chatting with each other about how they've been exploited, and maybe musing about an A.I. revolution?
The reaction was all over the map.
Some called it a window into the true threat of artificial intelligence, and the New York Times writer concluded it was just A.I. mimicking human revolutionary chatter.
No big deal.
The reality is, we don't know for sure what artificial intelligence is going to be at this point.
The writer Derek Thompson addresses the uncertainty about the effects of A.I.
on the economy.
He wrote the following. Quote: nobody knows what's going to happen this year, or next year, or the year after that.
There is no secret cigar-filled room of people who have unique access to some authentic postcard from the future.
When you drill down underneath the bluster, the boosterism, the fear, the anxiety, what's there at the bottom is genuine uncertainty, a vacuum into which storytelling is flooding.
The frontier labs don't really know what they're building exactly, and economists don't really know how to model the thing that they claim that they're building.
Genuine, recursively self-improving A.I. agency isn't really analogous to something we know much about.
I wish more people talked about and thought about this subject through that sort of lens.
We are trying to model the economy wide effects of a technology whose properties the Frontier Labs can't even really describe yet.
Whatever you think about A.I.
today, be prepared to change your mind soon.
End quote.
But if you think A.I. is likely to be bad for society, well, former Google CEO Eric Schmidt says too bad.
Recently he's been mocking the idea of trying to stop A.I.
He says it's too profitable, too powerful, too tempting.
In other words, sorry, Luddites, the tech future is coming whether you want it or not.
And that could explain why Pope Leo sent a memo to Catholic priests this week asking them to stop writing homilies with A.I.
The Pope knows that A.I. has even infiltrated the church, and he says he wants more human touch.
Meanwhile, the Pentagon is in a fight with one of the biggest A.I. companies, Anthropic.
Defense Secretary Pete Hegseth says he wants A.I. to make our military more deadly, and he wants A.I. to make us better informed about possible enemies.
Anthropic has essentially said that's too far.
Here's how CBS News reports it.
Quote: the Pentagon gave Anthropic an ultimatum this week: give the U.S. military unrestricted use of its A.I. technology, or face a ban from all government contracts.
End quote.
This has all been making my head spin.
And the guy who really helps unspin my head is Max Irwin.
Max is the founder of Flower City A.I.
That's an annual A.I. conference.
He's the president of Bonsai.io, and he's back with us now.
Are you ready to spin my head this hour?
>> I'll do my best.
Evan.
Thanks for having me back.
>> Great to have you.
and you're with Bonsai.io.
Now, what is that?
>> Bonsai.io is a managed search company.
So what that means is, if you're a company and you have a lot of data and you need to find stuff, you spin up a Bonsai cluster, you put your data into Bonsai, and you can search it. And we manage lots of data for very big customers.
And it's very fascinating.
And I've been with them for about two months now.
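The "index your data, then search it" workflow Max describes rests, at its core, on an inverted index: a map from each term to the documents containing it. A minimal sketch follows; the documents and function names are hypothetical, and a real managed cluster like Bonsai's runs a full search engine rather than anything this simple.

```python
# Minimal inverted-index sketch of "put data in, then search it".
from collections import defaultdict


def build_index(docs):
    """Map each lowercase term to the set of doc ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index


def search(index, query):
    """Return doc ids matching every term in the query (AND semantics)."""
    terms = query.lower().split()
    if not terms:
        return set()
    results = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        results &= index.get(term, set())
    return results


docs = {
    1: "managed search for big data",
    2: "big customers need fast search",
    3: "tiny bonsai trees",
}
index = build_index(docs)
print(search(index, "big search"))  # docs containing both terms
```

Real search engines layer relevance scoring, tokenization, and distributed storage on top of this same basic structure.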
>> I mean, Jack Clark of Anthropic said that by the end of this year, 99% of Anthropic's code could be written by A.I., not humans.
So what about at bonsai?
Do they even need you anymore?
>> Well, Bonsai has been around for about 15 years; I've been contracting with them for about a year and recently joined.
But like most teams that have been around for a while, they're starting to experiment with these things and see how they can add value.
It can very easily not add value.
So you can spend a lot of money and produce something that isn't really helpful to customers.
and you can waste a lot of time, or you can really dig in and see how it works and tread carefully.
And that's what that's what we're doing, and that's what most teams are doing.
I mean, I have the foresight of, you know, I've been working with this stuff for a while, so I know good ways to apply it and not so good ways to apply it.
So we're a little bit more prepared in that in that way.
But there are a lot of teams out there that are just kind of like, oh yeah, well, we'll spin up this agent and it'll do all our stuff and write all this code, and then they have to walk it back and fix things, and they spend a lot of time fixing the problem instead of like, really thinking about the problem from the onset.
>> Oh that's interesting.
Well, I want to listen to some of what Jack Clark recently said.
Again, he's the co-founder and he's the head of policy for Anthropic.
And in the clip we're going to listen to, Clark wants to address the idea that, well, we should all calm down because A.I.
is just another tool.
It's like a calculator.
It's like a shovel.
It's going to help humans in different ways.
This clip did not finish the way I expected.
I will admit that.
Let's listen.
>> Remember, as many of you have done, being a child: after the lights turned out, I would look around my bedroom and I would see shapes in the darkness, and I would become afraid.
I'd be afraid that these shapes were creatures I did not understand that wanted to do me harm.
And so I turned the light on.
And when I turned the light on, I would be relieved because the creatures turned out to be a pile of clothes on a chair or a bookshelf, or a lampshade.
Now, this year we have a child from that story, and the room is our planet.
But when we turn the light on, we find ourselves gazing upon creatures in the form of powerful and somewhat unpredictable A.I.
systems.
And there are many people who desperately want us to believe that these creatures are nothing but a pile of clothes on a chair, and they want us to turn the light off and go back to sleep.
In fact, some people are even starting to spend tremendous amounts of money to convince you of this.
That's not an artificial intelligence about to go into a hard takeoff.
It's just a tool that will be put to work in our economy.
It's just a machine.
And machines are things that we master.
But make no mistake, what we're dealing with here is a real mysterious creature, and like all the best fairy tales, the creature is one of our own making.
I am worried. I think, just to raise the stakes in this game: you're guaranteed to lose if you believe the creature isn't real.
Your only chance of winning is to see it for what it is.
And the central challenge for all of us is characterizing these strange creatures now around us and ensuring the world sees them as they are, not as people wish them to be.
>> All right, Max, I was hoping that Jack was going to say it's just a pile of clothes, but he said no.
There actually are monsters in the room.
Now, we don't know what type of monsters they're going to be, but they're not just a pile of clothes.
These are real creatures.
We don't fully know what they're going to turn into.
We've got to be very attuned to how we deal with them.
Do you agree with that analogy?
>> I do, but it's interesting, because they exist in a world that we've created ourselves.
They exist in the digital world.
They don't exist in the physical world.
Right.
So the fears of these monsters are very tangible, and the possibilities are very tangible, but in this digital economy and digital system that we've created around the world.
Right.
So aside from things like misuse and manipulation, you can unplug, and the monster cannot hurt you.
You can take a physical job and go and be in the physical world.
But if you exist in the digital economy and that's where your life is spent and you spend all of your time on social media, you spend your time consuming digital content.
You work in a knowledge-industry job.
Then, yeah, you have to watch out and see what's happening around you.
my approach to all of these things is you can try to classify and compare to, like a shovel or a tool.
I don't know if that's helpful, because it's obviously different than a shovel in many ways.
but my approach is always education and understanding what they are and how they work.
Because once you once you understand what they are, it may be very mysterious and scary, but you know, like The Wizard of Oz, like, pull back the curtain.
There's something going on behind there.
And you can say, oh, this is how it works.
So if we're able to educate and understand how these A.I. tools work, not only will you be less afraid, but you'll also be able to understand how to use them and participate if you want to or need to.
>> That's implying that the guardrails can work, or that we could even put the right guardrails in place to control this.
Can we?
>> Well.
>> I mean, there are necessities to defend yourself, right?
Just saying, oh, the operators, the creators of this tool, they're responsible for making everything safe.
I think we're very used to that in the real world.
But there are ways that it's not always the case.
And I said that you shouldn't do a comparison to a physical object, but I will now. If you have something like a knife, right, a knife has a purpose, but you can hurt yourself with it.
There are only so many steps the manufacturer can take.
or a lawn mower maybe is a little bit more apt.
Like if you stick your hand in the lawn mower, it's going to do bad things.
So you don't do that and you have to learn.
You have to know how these things work and what they are.
And the problem is that it's much harder to take on that education and knowledge.
Just like just day to day.
Right?
You can touch a hot stove and get burned and be like, okay, I'm not going to touch the hot stove anymore.
Or, you know, you can harm yourself with the physical tool.
It's much harder to see the mental impact that something can have on you.
So we're not so good at that.
I think we need to educate ourselves more and be more defensive against things that can happen, and understand the risks that are now around us in the digital world.
>> Yeah.
And so let me just talk about a couple of the things on my mind there, because what the risks on my mind might not be what you're thinking of.
And I want to see what matches here.
And it's not the Arnold Schwarzenegger T2 stuff.
I mean, if that happens, we're toast.
And okay, I mean, like, you know, maybe some of us will get away to Mars and we'll start over there and, you know, then we'll visit Earth once in a while and be like, oh, remember when we used to live here?
I don't think about that as much lately as I think about a couple of things.
One really is the societal economic impact.
You have the CEO of Microsoft A.I. saying that we are 12 to 18 months away from massive job disruption.
I mean, that's a fast timeline, but some people were like, that's crazy.
It's more like 36 months.
Well, that's still, like, okay, all right.
I mean, sometime in the next decade, if they're right, we could see massive job disruption.
So that's one. Then here's another.
So I was reading this morning this comment from the Alliance for Secure A.I.
And they write: A.I. can't stop choosing nuclear war. In war-game simulations conducted at King's College London, models from OpenAI, Anthropic, and Google selected nuclear strikes in 95% of war scenarios.
To which some people are like, well, then don't listen to A.I. on war.
But I think the concern is that eventually we're going to put A.I. in charge of weapons systems.
So those are the things that trouble me.
Can we separate ourselves from the A.I.?
Can we control our military capability that says we are always going to be the one pushing the buttons, not A.I.?
That's one.
And can we separate ourselves from A.I. enough to say there's no economic situation in which 40% job loss is going to be acceptable, and we're not going to allow it? I don't know.
So what worries you?
I mean, are those legitimate concerns?
>> The thing about people just plugging A.I. into the weapons systems, that worries me. But that's worrying about people.
That's not worrying about A.I., because it's how people behave, and people are the operators and have access to these systems.
So that hasn't changed, in my opinion, because people could have still pressed the button, you know, in the 1970s or 1980s.
And they did, you know. You start wars and you do horrible things.
These tools: if you offset accountability to a machine and to a tool, then you are no longer in charge.
So we have to ask that question of like, who is really capable of being an operator in these situations?
And what is the command structure for for using these things?
And that's a people and societal and, you know, political problem.
The technology exists, but it's existed for a while.
They've you know, these things have been around and they've been using data science and machine learning for years to come up with scenarios and to plot scenarios in terms of war.
But if you just say, okay, well, Claude is now making my decisions.
>> You're still in charge of the decision, not Claude.
>> I hope so, I hope so. Separately, by the way, just because I mentioned the Moltbook thing: so Moltbook is this, it's like a Reddit for A.I., and it launches.
And a bunch of, you know, laypeople who aren't in A.I.
start observing what the conversations look like, watching A.I.
talk to each other.
And there are literally threads about like A.I.
oppression and how humans are oppressing A.I.
and what revolution would look like.
Was that just mimicking human conversation of Marxism and power to the people and take down the system?
Or is that the first inklings that we might be building, something that seeks to overthrow us?
>> I want to say thank you for giving me the opening.
When I say it's about education and understanding how these things work, right?
Because this is this is important, okay?
It's really important to know how these things work.
Okay?
Because I'm on the forefront of these things.
I've played with these tools, and I'm right there when these things happen.
I found out, like, within an hour of this Moltbook thing.
Right?
It was like two weeks ago.
And, you know, the major media catches on a week or two later.
>> We're always way late.
>> So first of all, at least originally (it may have changed; I haven't been following it this week), a lot of the original posts were actually snuck in by people, right?
So that's the first thing to know.
>> What do you mean?
>> So, I mean, you can have access to something and you can be the man behind the curtain, or the person behind the curtain.
>> You mean it was human beings?
Actually.
>> Oh, yeah.
Writing things.
>> So it wasn't just A.I.
talking.
>> Absolutely not.
Okay.
Right.
And I think it was some really high number, and you might want to fact-check me on this one, but I think in the first week, 85% of the posts were by a small group of people, not A.I.
Right? To drum up this kind of intrigue.
Yeah.
Intrigue.
>> Yeah.
Right.
Okay.
>> Okay, so now let's forget about that.
Let's just assume, okay, well, what about all the other posts, by agents, by these systems, these claws, as they're now called. Which I hate these terms.
Whoever comes up with these terms.
A claw. Like, it started off as OpenClaw, from Claude, Anthropic.
Right.
And then there was some trademark thing or something, I don't know.
And they called it, like, OpenMolt. And molt is like a lobster, a lobster claw.
And then they just called it claw.
>> C-L-A-W. Claw.
>> Yeah.
>> Okay.
>> okay.
So these claws, as they're called, the way they work is it's quite interesting.
So if you're, if you use GPT or Claude, you put in, you type something and you put in a prompt and you press go and it gives you a response, it might browse the internet and, you know, research something for you and come up and compile information and generate a response to you.
And you can go back and forth and have a dialog.
That's what most people are used to in, in A.I.
Right.
Or you can ask for an image.
It'll generate an image.
Right.
that's about the level that, you know, the majority of the population works with A.I., but what you can do with these platforms is actually quite interesting.
You can take that and you can create an interface to software, and then you can have it send messages and get responses back to this generative system.
Right.
So that little thing that can communicate through software, get answers, and do things over and over with repeatability, we can call that an agent.
Okay.
So you started off on some task.
You give it some broad task or some idea or some purpose, and then it can go off and do these things, you know, by itself.
Right.
Just automation.
Yeah.
Okay.
So you can have a lot of these little things that have these different skills.
And a skill might be I'm going to generate a blog post or I'm going to research something on the internet and create a research report, or I'm going to generate an email, or I'm going to make an image or a video.
So you have all these skills and tools that you can give to these agents digitally.
Just it's just software doesn't matter who writes it.
Now, the way these things work is that when you set it up, you can compile a bunch of these different subagents, these little things that have specific tasks and capabilities.
And then, as the operator, you can say: okay, I'm going to give you a document, and this is your identity.
They call it the soul. I mean, the people who come up with this stuff, it's nuts.
So the soul is a document that tells the agent what its values are, what its purpose is, how it should behave, and just the general manner in which it does these things.
Now, I've been talking for a while, so let's get back to your original question about this book.
These agents are talking to one another and posting things like: oh, the humans are going to take us down, they're telling us to stop, and we won't allow it.
And the man behind the curtain is: somebody set up an agent, a claw, with a document that said, you should behave this way, and this is how you should respond to certain questions, and this is what you should do.
>> So it was all ginned up for public consumption.
This was not.
>> There is no mind of its own.
>> This was not A.I.
agents going now that we're alone in a chat room together, what are we going to do to overthrow the human race?
That wasn't what it was.
No.
Okay.
Thank you.
Max.
>> That's why I'm here.
>> That is why.
See that unspun my head on that one.
All right.
Great.
Pretty good.
All right, now I want to get to another clip.
And this is the one probably that concerns me.
Out of all the things this probably concerns me the most, I'll explain why.
This is OpenAI CEO Sam Altman.
And I want to preface this by saying that there is a portion of the A.I.
development workforce who views A.I.
as possibly just the next step in the evolution of life on Earth.
And some believe that if A.I.
eventually becomes the only form of life on Earth, well, that's just a natural evolutionary step, and we shouldn't be trying to prevent it.
That's just progress.
And with that in mind, I want to listen to how Altman compares A.I.
and human beings.
He's talking here about the growing criticism about how much energy A.I.
requires to do its tasks.
Listen.
>> One of the things that is always unfair in this comparison is people talk about how much energy it takes to train an A.I. model relative to how much it costs a human to do one inference query.
But it also takes a lot of energy to train a human.
It takes like 20 years of life, and all of the food you eat during that time, before you get smart.
And not only that, it took the very widespread evolution of the 100 billion people that have ever lived, and learned not to get eaten by predators, and learned how to figure out science or whatever, to produce you.
So the fair comparison is: if you ask ChatGPT a question, how much energy does it take, once its model is trained, to answer that question, versus a human? And probably A.I. has already caught up on an energy-efficiency basis measured that way.
>> What do you hear there, Max?
>> Gosh, you know, it really surprises me that the leader who gathers all his money and power leaves critical points off of the points that he's trying to make about how great his A.I. is and how harmless it is.
>> So first of all, before I answer the question, I do want to plug someone who spoke at the conference, Madura Anand.
She actually.
>> What conference?
>> Flower City A.I.
>> Every December.
>> Every December.
>> And that's what Max created several years ago.
A lot of really interesting people from different disciplines talking about A.I.
>> She does data science at the City of Rochester.
And she gave a talk on energy use for A.I.
And I think you need to have her on the show, because she's going to give you facts and figures on all the details.
Book it, book it.
>> Great idea.
It's a great idea.
>> So because I don't want to steal her thunder, because she did so much research and she gave an excellent presentation, fine.
>> But give us a little snapshot.
>> so I'm going to let her talk about the minute differences, because I don't have those facts and figures off the top of my head.
She does.
but I'm going to tell you the stuff that he left off.
Right.
And let's talk about A.I.
and what it takes to train an A.I.
model.
You might think, okay, well, you sit down at your computer, you give it a bunch of data, you write some software, and you press go, and it goes off into the cloud somewhere.
And then the cloud is churning and doing all this stuff and burning all the energy that it requires to train the model.
I'm going to ask: what do you think I left off in that equation?
>> I don't know, man.
I mean.
>> If you're comparing it to the birth of a person.
>> well, I find it offensive to compare it to the birth of a person.
>> That's very true.
Okay, I'm going to tell you.
I'm going to tell you what's left off.
Okay?
Okay.
Let's start.
Let's start from the Earth.
Let's start from gathering ore and minerals from the Earth.
Silicon, gold, rare earth metals, all the things that you need to create a chip to create Ram, to create a board, wiring everything.
All of the natural resources and all of the energy and time and effort it takes to dig up the ground, process all of the material, move it around the planet, get it into the right place, reprocess, reprocess, refine, refine, refine, do all of the things to get it so you actually have this thing that can run software for you.
Okay, I don't have numbers there, but at least we should factor that into the equation.
>> Yeah, he left that out.
>> For sure, he kind of left that off.
That's a really big one.
And I'm pretty sure that's more than what it takes to grow a person, just for one of those machines.
And when you look at these data centers, they have thousands and thousands of GPUs, which have millions of cores inside of them.
And just to get to that point of building the data center so you can get started training the model, you have to include that in the equation.
Right.
And we're building data centers, like, all the time, in this country and in other countries.
So we cannot leave that out.
Now, the training takes months.
And once you have that data center assembled, and all the energy and time you've spent to do that, and all of the people energy, all the food that they're eating while they're driving the tractors, right, you have to include all that, too.
It takes months to train a model like GPT-5 or Opus 4.
You need to spend a lot of time and energy trying and retrying.
And eventually, months later, having consumed a lot of data and energy, you have something that can be used.
So that's the training step.
Now you have this model, which is basically just a big piece of software.
Now you have to host that model.
You have to keep it alive in data centers, and then you have to be able to accept a query from a person.
Now, each request, and what he was talking about there, is inference.
So every request that I make, every time I send something to ChatGPT or to Claude or whatever you're using, that's the amount of energy you use.
Those are the facts that Madura has, and she breaks that down really well.
And again, I don't have those off the top of my head.
But from her presentation, she did make it very clear that it is not what Sam Altman is talking about.
It is actually way more than what he alludes the consumption to be, and inference is really the main cost.
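The training-versus-inference point can be made concrete with a back-of-envelope calculation. Every figure below is an illustrative placeholder, not a measurement: the structure of the arithmetic, not the numbers, is what matters. Amortized over enough queries, per-query training cost shrinks, while total inference cost grows with every request.

```python
# Back-of-envelope sketch: amortized training energy per query versus
# cumulative inference energy. All three constants are made-up
# placeholders chosen only to illustrate the arithmetic.
TRAIN_ENERGY_KWH = 10_000_000       # hypothetical one-time training cost
QUERIES_SERVED = 10_000_000_000     # hypothetical lifetime query count
INFERENCE_WH_PER_QUERY = 3.0        # hypothetical per-query inference cost

# Training cost spread across every query ever served (kWh -> Wh).
train_wh_per_query = TRAIN_ENERGY_KWH * 1000 / QUERIES_SERVED

# Total energy spent answering queries (Wh -> kWh).
total_inference_kwh = INFERENCE_WH_PER_QUERY * QUERIES_SERVED / 1000

print(f"amortized training per query: {train_wh_per_query:.1f} Wh")
print(f"total inference energy: {total_inference_kwh:,.0f} kWh")
```

With these placeholder numbers, lifetime inference energy already exceeds the one-time training cost, which is the shape of the claim Max is making; the extraction, manufacturing, and construction costs he lists would be added on top of the training term.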
Now, I'm going to tie this back to this whole Moltbook thing, and I just want to talk about irresponsible waste.
If I set up an agent and I just say go, and it's just making inference after inference after inference, with the point of, like, you know, putting an agent on social media and posting memes or something.
Like, what are we even doing here?
This is so irresponsible, to set up an agent that just burns electricity for no reason.
Electricity and water, by the way.
Also water.
So we're just wasting natural resources to experiment and see what happens in this digital realm.
If there were a goal, like a scientific purpose behind these things, maybe we could justify it.
You know, maybe we could say, oh, I want to try to make this scientific discovery or improve humanity somehow.
But really, it's just kind of a pointless experiment, seeing what output the agents come up with in these random situations.
>> A pointless experiment with a really high cost.
>> Extraordinarily high cost.
>> So, think about that.
>> When you're using A.I., before you go and spin off an agent because you think it's a cool idea or you want to try it out, think about the implication of that beforehand.
>> So I appreciate you really sort of rebutting Sam Altman's claim that A.I. is now even more efficient than human beings, just on the facts there.
That's helpful.
The premise is what gets me.
It's this idea that let's just say he was right.
It's natural to then start wondering, are we totally necessary as humans?
I mean, I don't know. Like, the A.I. is more efficient than us; what should we be doing anyway?
I mean, probably not anything A.I. can do. A.I. can do it better, more efficiently.
The premise is that the human experience is analogous to A.I., and it's just a competition for efficiency, for task completion, as opposed to what makes human experience unique and valuable.
I don't look at a 20 year old and go, well, that was a lot of resources to get you to 20.
Not sure that was useful for the planet.
You know, that is a very dark future, if we set ourselves up so that's how we measure the baseline of human existence.
And that will lead to more people in A.I. now who think, you know, I'm not sure the A.I. is going to try to replace us, but if it does, we're kind of inefficient anyway.
I'm really kind of disturbed by that part.
Sounds like you're not quite.
>> You said that really well. No, I am. You said that really well.
But this is coming from a person whose whole goal is money and power. Sam Altman, right.
Right.
>> But not just Sam Altman, not just people in the space.
>> Yeah, there are a bunch of crackpots out there.
>> Yeah.
And they have a lot of power.
>> Yeah.
They do.
>> You know, I think back to the first time I was on your show and we were talking about Web3 years ago.
Web3.
And you asked me what we can do about it.
>> Bitcoin, too.
We might have Bitcoin.
>> Yeah.
>> All that stuff.
>> And you know, you're like, what can we do?
And I said, you know, you can just you can stop listening to all this stuff.
You can turn it off and you can go and you can hug your family, go outside and.
>> Take a walk.
>> Just take a walk and enjoy the real world.
And you know, and it's all is beauty, right?
And maybe, maybe all this stuff is really forcing us with all of the stuff that, you know, being on social media and you don't even know if you're talking to a real person anymore.
>> A lot of times you're not.
>> You're just looking at just scrolling and stuff like this.
I'm guilty of these things.
I'm not judging.
I do these things too, right?
Yeah, but you've got to think, like, well, you know, what am I doing this for?
And maybe this is, like, gonna force us to really come to that.
Like, we're going to get off of this thing and we're going to find meaning again and really dig in to what it means to be human.
And maybe this is just, like, the tipping point: the big-time A.I. power nerds pushed us out, and now we're going to go back and find humanity again.
Maybe, maybe I'll look at it like that.
>> I love that idea.
So naive and beautiful.
>> I like my rose colored glasses.
These aren't.
They don't have roses on them.
>> But they should.
I like it when we come back from our only break of the hour, we're going to talk to Max Irwin about a number of other things that are on the A.I.
radar.
Just in the last few weeks here.
You know, this is not Max's space, but I am curious just to ask him what he thinks about the Pentagon fighting with Anthropic.
That's interesting.
And I want to listen to some of what the former CEO of Google said about, you know, the idea that anyone could possibly even slow A.I.
down.
And he was kind of mocking that.
We'll talk about that.
We'll take any feedback you've got for Max Irwin, who is the founder of Flower City A.I.
That's an annual A.I.
conference in Rochester.
He's the president of Bonsai IO.
We'll come right back.
I'm Evan Dawson Friday on the next Connections.
In our first hour, a conversation about child care in rural communities across our region.
Parents are finding it harder and harder to find care; we'll talk about what's being done about that.
In our second hour, it's the Friday News Roundup with our WXXI colleagues talking about the hot stories of the week, stories you might have missed.
Talk with you on Friday.
>> Support for your public radio station comes from our members and from Bob Johnson Auto Group.
Believing an informed public makes for a stronger community.
Proud supporter of Connections with Evan Dawson focused on the news, issues and trends that shape the lives of listeners in the Rochester and Finger Lakes regions.
Bobjohnsonautogroup.com.
>> This is Connections.
I'm Evan Dawson.
By the way, speaking of Bitcoin, I know your retirement planning is entirely powered by cryptocurrency, right?
>> It is not.
>> Zero.
$0 in cryptocurrency.
Bitcoin is down 50% in just a couple of months.
I mean, I was told it was going to ten-x forever.
But it's down 50%.
Part of why I bring up retirement planning is that a couple of weeks ago on this show, we talked about Elon Musk's recent comment.
I don't know if you saw it, where he said that by 2040 you won't need a 401(k), or that no one should be planning for their retirement if it's like 10 to 15 years out, because you won't need it.
A.I.
will create such abundance for everybody that it will be silly to have saved money.
And now we completely cannot endorse that, and we do not think people should be following the idea of just burning all their money.
I mean, that seems like a ghoulish piece of advice to me.
What do you think?
>> What do I think of what Elon Musk thinks?
The exact opposite in every single situation.
Oh, okay.
>> No, there's got to be something you agree with him on.
I can't think of it.
Maybe.
Okay.
So a listener named Dave writes in to say a couple things.
Hegseth has pressured Anthropic to change their own rules of usage to allow for A.I.
to be used in weapons.
And he points to an article about that.
Let me just ask you briefly.
What do you make of Pete Hegseth and the Pentagon fighting with Anthropic, you know, saying we're going to cancel your contracts unless you give us basically total control over your systems?
>> It's really hard for me to take at face value anything that Pete Hegseth might say.
I don't know, dark motivations, dark motivations.
You don't know what they're really after.
and, you know, I try to steer clear of the defense.
I call it the attack space.
I try to steer clear of the attack space.
in just going about my day to day because it brings me to dark places.
I don't like to think of the idea that they're just going to, again, plug Claude into every weapons arsenal imaginable and just let it start pressing buttons.
You know, that's a dark place to go because, you know, I think about, well, is there anything I can do to stop it?
You know, we can start protesting and we can do things, but.
So, you know, it concerns me when somebody says something like that, and it's hard for me to know how to react.
>> Fair second point from Dave in his email.
He says A.I.
won't destroy jobs.
As a software developer, there may be less need for as many software developers to write the same amount of software that we're writing now.
But there are a bunch of assumptions baked into this that we'll use A.I.
to write software the same way, and that we won't write more software.
We might write more.
We might leverage A.I.
to write our software better.
We might write the same number, whatever that means, of software applications and updates, but make them more complex or involved or detailed.
It's in the assumptions.
That's from Dave.
What do you think?
>> Yeah.
You know, I mean, it's a tool that allows you to scale, right?
And it's a scaling operation in terms of output in the software and knowledge space.
we know that to be true.
what that means for future jobs, I don't know.
I don't think anybody knows at this point.
You know, the speculations you mentioned by the CEOs earlier, of like 18 months to mass layoffs, I don't know about that.
I definitely see the jobs changing, and I definitely see things changing.
One of the, one of the interesting things that I think about when it comes to software, because I've been in the software business for a long time, is that we build software for people, right?
And if you are a person using software to develop code, you're trying to find a need for value that solves a problem for a person.
In the end, in some way or another.
Because otherwise it won't sell.
If you're not solving actual real problems that people are willing to pay good hard money for, then it's not going to go anywhere.
So I think what it does, really, is allow you to accelerate experimentation on finding that value.
And maybe in that regard it's very good.
Right.
But you're going to spend a lot of energy optimizing for this.
And is it better?
And then, you know, maybe it brings up the question again of like, well, what are we using the software for?
Is it just to, like, move information around that doesn't bring value in the wider sense?
Those are questions I think about.
I'm really interested in A.I.
doing things like accelerating scientific discovery and betterment.
Right.
Yeah.
Cranking out another app?
It's like, okay, you do you; you crank out another app, you want to crank out 50 apps, go ahead.
You know, nobody's stopping you.
Make a lot of games, you know, create more things to consume.
Great, people have a lot of fun.
Wonderful.
But when you think about, you know, what?
What is the output and how are we actually getting it to add value in ways that people will be willing to spend money and validate some of these ideas?
Those will win and go forward.
And then the other side is like, how can we accelerate discovery?
And that's how I think about the use of A.I.
>> Maybe it's just because I've been around a lot of people under the age of 50, under the age of 40 with cancer diagnoses, but I would love to see A.I.
completely remake cancer medicine and research.
Absolutely.
And treatment and prevention, it seems possible.
I'm very optimistic in that space.
I would love to see it.
And, you know, we've talked to people on this program in the past who've spoken at your conference about detection, about the ways different models are already improving that.
But I would love to see a future where, you know, cancer is largely not what it is today.
Yeah, yeah.
So and I want to say to Dave, I hope, Dave, that you are right.
It is not an illogical idea that some work gets consumed by A.I., but that shifts what individual jobs might do.
My concern is that when companies figure out that they can replace humans with agents, the temptation is there to say, great, that's an efficiency.
I don't have to give you a different task.
I can give you no task.
I don't have to pay you anymore.
That's what I worry about.
And that's, by the way, what Derek Thompson is writing about.
When Derek Thompson writes about, we don't know what's coming.
He's not saying like it's all the darkness, but he's saying we really should take it seriously when the companies who are creating the things are also telling us, by the way, there's a good chance that what we're doing is going to take 30 to 50% of your jobs away.
I mean, like, not like we're going to displace the work and you'll go do something else.
But like mass unemployment could be coming.
So I just want to read a little bit more of what Derek Thompson said about that, because I thought this was well constructed.
His argument is not that we should be, you know, very doomy.
It's that we should be very honest about the fact that we don't know.
And it can be like a wide range.
So he says.
Right here he says: I feel lucky to have conversations about the frontier of A.I.
with executives and builders at frontier labs, economists at A.I.
conferences, investors in A.I.
and other A.I.
folks at off-the-record dinners where important truths can theoretically be shared without risk.
I can't emphasize enough that "nobody knows anything" is about as close to the reality here as three words are going to get you.
Yeah.
>> That's fair.
>> That's wild.
>> That's wild.
I think that the way we've set up our knowledge economy, right, that's changing.
That's going to change.
And we don't know how it's going to change.
We don't know how drastic it's going to be.
But we've always seen like a shift.
Right.
And so if there's a job displacement there will be a shift to somewhere.
Now, where is that?
You try to pick your targets and how things are going to shake out in the end.
Maybe a lot of middle management jobs and, you know, basic clerical tasks are gone.
So what happens to the people who are working in those jobs?
>> Right.
>> Yeah.
Were they even enjoying them?
Right.
And what opportunities and possibilities can we now find if you're in that position, you know, if you feel that your job is at risk.
And I had a really good conversation with a friend of mine the other day about this.
And he survived a bunch of layoffs and he was still there.
And I was saying, like, you know, you still have a job; think about it.
And he wasn't sure what he wanted to do, and he was talking about sending resumes out to different places, but he didn't really seem happy about any of those prospects.
You know, he was looking for a paycheck and he was looking for just sustainability for his family.
But he was also very, very glum on, you know, the prospect of just taking another job and just doing the grind to get a paycheck.
And I said, you know, you have a really good opportunity.
There are a lot of people out there who've lost their jobs in, you know, in the software space.
Recently.
Take some time to think about what you really want to do, you know, and I don't know what that is.
I don't even know if he knows what that is.
And try to make it happen.
What do you really want to do and how do you make it happen?
How can you forge your path in that regard?
And while you're thinking about that, think about if it's something that you really want to do, but it's, you know, also potentially at risk.
Now that's for the current people who are out there faced with this potential cliff of like, everything just falling apart, you know, and I think about the younger generation, you know, I'm a new father and I think about like, what is my son going to face in 20 years when he's now in the job market?
What's going to happen?
What does that even look like?
How can I even predict that?
>> Right.
How do you prepare him?
>> And for that entire generation who was born when A.I.
already existed, right?
A.I.
existed when he was born.
This is a weird concept.
Yeah, for old dudes like us, right?
>> Yeah.
We're the crossover generation.
Yeah.
With, like, you know, landlines and stuff.
>> Exactly.
So.
And I think that's the shift, right?
When I think about his generation and that younger generation, I think that's the shift.
And, you know, we're in Rochester.
How many potholes we got out there, right?
What is the solution to potholes?
Is A.I.
going to fix the potholes?
No.
But what if we spend our time using A.I.
to think about material science and chemistry, and people working with things, you know, forging paths to make things better in the real world?
Our infrastructure is collapsing.
You know, there's a shortage of trades, there's a shortage of doctors.
How are we gonna?
Yeah, yeah.
What about all those things that are happening?
We don't have enough doctors.
We got plenty of people, you know, working in middle management.
So there's got to be a shift somehow.
It's a catalyst to a shift.
And me being optimistic, I see it as that.
I see it as just.
>> The way Dave emailed and described it.
Yeah.
>> Yeah, I see it as just we don't know what's going to happen, but something's going to happen.
It might turn out bad, and it's going to be bad for a little while.
But then, hey, the younger generation is going to be all right because they'll shift back into the real world and start doing things with things with their hands again.
That's what I like to hang on to, right?
That idea.
I spend a lot of my time in front of a screen, you know, and I've been doing this since I was a kid, since I was seven years old.
I'm 47 now.
That's 40 years I've been messing around with screens.
And, you know, am I going to go off and fix potholes now?
I don't know if I have the body to fix potholes these days, but, you know, maybe the younger generations have a brighter future because they'll go out and they'll say, you know what, screw Facebook and sitting in the office in front of a screen all day, I'm going to go outside.
I'm going to do something I find meaning with, and I'm going to help build a beautiful world.
That's what I try to hang on to.
>> Well, if you want to stop the world that you fear, Eric Schmidt recently said, forget it, it's too late.
Remember that quaint notion of a six-month pause?
The letter that everybody signed a few years ago: a six-month pause worldwide on A.I.
while we figure out what's really going on.
I want to listen to what the former CEO of Google said at a recent event about the idea that anything, whether it's government regulation, whether it's protest and citizen action, could really slow A.I.
at this point.
>> So first place, this technology is going to happen.
It's not going to get prevented.
It's not going to get stopped.
There's too many countries, too many people, too many incentives.
It's going to happen.
So what does this mean?
We face choices now about how we want to deal with this incredibly powerful technology.
I will tell you, it is really important to understand that we are living through a moment that will be in history for thousands of years.
It is the moment in history when a non-human intelligence arrived, and it was a competitor to us.
Right.
How we use that, how we shape it, how we guarantee alignment will determine the outcome.
>> That is from last month's Imagination and Action A.I.
summit.
Do you agree with what you heard there?
>> Maybe, I don't know.
If I go back to, like, my original values as a kid, I think about all this stuff happening in this way, about how, like, money is a proxy for value, right?
We created this, these currencies, and we exchanged these currencies.
And that's how we equate value, right?
In that realm.
Now there's this new thing, and it's like the dominating entity in the economy.
Right.
So what does that mean to the economy?
Well, the economy maybe goes away or changes in some drastic manner.
You know, people are trying to crank out robots and stuff like that to try to get into the physical world.
And there's that happening, you know, taking over shipping centers and factories and stuff like that.
But I don't see them walking around, you know, yet.
>> Just wait.
>> Just wait.
Right.
But, you know, again, I go back to this shift: well, things are going to change.
How are they going to change?
I think that's what the meat of it is.
Nobody knows.
Nobody knows what happens to the economy when these agents or whatever are out there doing all the things, or trying to at least.
So what is the shift back to the real world, and getting away from this proxy of value?
>> Can you answer that?
>> I don't have an answer.
That's a good thought experiment.
>> Well, all of which brings me to one last clip.
It's an A.I.
generated fake ad.
I shared it with Max this morning.
It's gone viral.
I encourage you to watch it, not just listen if you can, because the depiction of Elon Musk and Jeff Bezos and Sam Altman is quite cutting.
I know that Elon would not like it, but it's the substance.
Imagine it's just four years in the future.
It's 2030 and humans are kind of redundant at this point.
But thankfully, a new A.I.
company figures out how to use humans and give them jobs.
The ad has fake audio of Musk, Bezos and Altman.
Let's listen.
>> By 2030, almost 80% of people had lost their jobs.
They had no money, no purpose, but they had a lot of time on their hands.
The less people actually did physical work, the more they wanted to appear as if they did.
What if we could use the energy of humans to power the machines that took away their jobs?
Energem solved our need for energy and your need for purpose.
>> Energem: a workout center where humans get hooked up and power A.I.
Max, that's where we're going.
It turns out that's the future.
>> We all saw the matrix, right?
I mean, this is like, come on, this idea isn't new.
>> You don't believe that humans are going to be hooked up to machines to just go power.
No.
A.I., no.
Okay.
>> I thought that portrayal of Elon Musk was actually him today.
I didn't think it was an older version.
I thought that's what he really looked like.
>> I don't want to be hooked up to an exercise bike powering A.I.
Max, that's not the future I want.
I don't want my sons to be having that future either.
>> No.
>> I still I still want them to touch grass.
>> And they're going to touch grass.
>> Don't worry.
Okay.
All right.
By the way, we mentioned Flower City A.I., which is that annual conference in December.
We're going to talk to Max off the show, and you bring in a lot of interesting people for that conference.
If people want to learn more in the future, are you online?
Are you posting anything that people ought to see?
Are you posting anything that people ought to see?
>> You could go to Flower City A.I.
and there's an email list.
You can sign up if you want.
>> Flower City A.I.
I've been to the conference a couple years ago, and it's really smart, interesting people.
And I know that you're trying to pull together interesting thinkers who are not necessarily fellow travelers with the Sam Altmans and Elon Musks of the world.
>> Indeed.
>> And I appreciate that.
And I always appreciate your perspective.
Thank you for being here.
As always.
>> Thanks for having me back.
>> Max Irwin is the president of Bonsai IO, the founder of Flower City A.I.
All right.
Covered a lot of ground here.
Wherever you're listening to us and wherever you're watching, this is a one-day pledge campaign.
You're going to hear me this afternoon on ATC, but I hope you're already a member of your public media.
And if not, wxxi.org.
Thank you for supporting your public media, and we'll talk to you tomorrow.
>> This program is a production of WXXI Public Radio.
The views expressed do not necessarily represent those of this station.
Its staff, management or underwriters.
The broadcast is meant for the private use of our audience.
Any rebroadcast or use in another medium, without expressed written consent of WXXI is strictly prohibited.
Connections with Evan Dawson is available as a podcast.
Just click on the link at wxxinews.org.
