Connections with Evan Dawson
The movement to head off an AI catastrophe
3/31/2026 | 52m 22s | Video has Closed Captions
PauseAI urges slowing AI, citing risks to jobs and humanity.
PauseAI challenges the idea that AI progress is unstoppable. Citing warnings from tech firms about job loss and existential risk, its leaders push to slow development and raise public awareness, urging policymakers and society to change course before harms escalate.
Connections with Evan Dawson is a local public television program presented by WXXI
>> From WXXI News.
This is Connections.
I'm Evan Dawson.
Our connection this hour was made in a fresh set of surveys on artificial intelligence.
Pew Research asked Americans what they think of A.I., and the answer, to put it bluntly, is that Americans kind of hate it.
Only 17% of Americans think A.I. is going to have a positive effect on society.
A.I. is more unpopular right now than President Trump, more unpopular than almost anything in public life.
Only the Democratic Party polls worse.
The only area where Americans think A.I. will help is the medical field.
That's pretty much it.
Massive majorities of Americans think that A.I. will make personal relationships worse.
Massive majorities think it will make people less creative and more lazy.
A strong majority of American children say that their peers are already using A.I. to cheat in school.
And that's before we get to the really big stuff.
It's possible that A.I. will only make us lazier, or less creative, or less inclined to do our own work.
Maybe it stops there, but maybe A.I. will wipe out tens of millions of jobs and create a work crisis unlike anything we've ever seen.
If that sounds hyperbolic, it's important to understand that A.I. companies themselves are saying that this is possible, and they think it could happen within a few years.
One tech executive recently said that if you want to have a secure job in the future, you need to either be neurodivergent or work in the trades.
Another tech executive said that everyone should just plan to be their own boss, figure out their own companies.
Elon Musk said the entire concept of jobs will be invalid by 2040.
Or maybe humanity won't survive at all.
Again, this is not what the fringe doomers are saying.
This is what the A.I. executives themselves are talking about.
The economics writer Noah Smith, who is a fan of A.I., recently wrote a piece in which he lamented A.I.'s sales pitch as the worst in history.
He notes that Sam Altman, head of OpenAI, has said he believes the risk of human extinction from A.I. technology to be about 2%.
More recently, Altman amended that to "big enough to take seriously," and years ago, he told the New Yorker, quote, I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur that I can fly to.
End quote.
How about Dario Amodei, head of Anthropic?
He tends to put the likelihood that A.I. dooms humanity at 25%.
All of which brings me to today's guest.
She's here to make a case that we have not heard very often.
Most of the time, when we hear from people who are concerned about A.I., we hear that we need more regulation, we need better A.I. ethics teams, or we need better public education so everyone can try to benefit equally.
That is not what Holly Elmore believes.
She is the executive director of PauseAI.
Her organization is dedicated to a global pause in A.I., an international freeze designed to give human beings the chance to evaluate whether we are actually on the verge of signing our own death warrants.
She is blunt.
She is not afraid to rub people the wrong way.
She thinks too many people are too polite about all of this, and I'm glad that she is with us this hour.
Holly Elmore, executive director of PauseAI US.
Welcome to the program.
Thanks for being with us.
>> Hi, Evan.
Thanks for having me.
>> How did you arrive at this point?
The idea that, for the moment, we cannot allow A.I. to continue.
What got you there?
>> So I had been aware for many years that there could be problems with artificial intelligence, if we had it in my lifetime.
And then ChatGPT comes out in 2022, and I felt it very strongly.
I knew that a computer couldn't talk naturally to me before, and it could now.
And it didn't take someone with a lot of background to feel that way.
There was a big moment then.
And the rate of progress that was made from GPT-2 to GPT-3, which is like the era of ChatGPT, was so rapid.
And what they had done to make GPT-3 was just add more compute to the same process they had been using for GPT-2.
So add more computer hardware.
And at that moment, it seemed clear that these capabilities were just going to get better and better and better very quickly.
And if you're able to use the A.I. advances to then make the advances go faster, you know, A.I. building A.I.s, you could get an exponential takeoff.
And so at that point, it seemed like the only thing we could do was just pause the frontier capabilities until we were able to make it safe, make sure things went well, because the pace of progress was going to outpace any chance to just...
I'm in support of regulation, but regulation targeted at only one kind of harm, or at only protecting minors from having harmful interactions, is not going to be able to keep up with the general capabilities and the just massive amounts of general externalities that a rapidly increasing, powerful A.I. is going to allow.
>> So, for example, when Bernie Sanders and Alexandria Ocasio-Cortez are targeting data centers or trying to put a stop on that, that's an example to you of saying, okay, maybe that's a good idea, but that's just one small component of the bigger picture, which is not being regulated.
>> I think it's a form of hardware control, which is a very promising way to pause frontier capabilities.
We're very weirdly lucky.
It's weird to say that, but we're sort of lucky in this situation that, in order to build A.I. in the way that these A.I. companies are doing it, they need a lot and a lot and a lot of chips.
And these chips are the most complicated machinery perhaps ever made by humanity.
And the supply chain is extremely fragile.
At almost every step, there's one company that can do what's needed, and then 90% of the chips are made at TSMC.
One company, Nvidia, is in control of most of the chips that they're purchasing.
So we have this opportunity to limit the production, and perhaps also to track the existing hardware, in order to control the production of more and more powerful A.I., and one way is to control the amount of compute that these companies need to get the next model, and they need more and more every time, much more.
That's why we have these giant data center construction programs.
Putting a pause to that just in the U.S., well, that is where a lot of the data centers doing the work of building this next generation of A.I.s are.
So it's not a total solution, but it could be a good stopgap measure to just not have any further progress on the A.I. before our government catches up.
And our government moves more slowly by design, you know; it's supposed to not be able to just come in and curb free industry.
But the progress of A.I. is so fast that, even though people are noticing, even though there's a scientific consensus about the danger of A.I., we don't have the legal structures in place yet to even just say wait and stop.
So regulation is important, but we can't even just have a single law the way we would normally pass a law saying, well, this is harmful, so you can't do that; this has been proven harmful, so it's not something you can just do as a free company.
So something general, something to put some kind of wrench in this machine, is good.
And I would call the Bernie-AOC proposal a means of hardware control.
But what we need is hardware control that we're doing on purpose, because the entire world agrees that we shouldn't advance the capabilities of A.I. until our safety, our ability to control and predict what's going to happen, and to agree on that democratically, representatively, has caught up to the ability to just produce power.
>> Okay, so let me also say to the audience: we'll talk a little bit more later in the hour about the very specific proposals that PauseAI has on its website.
It is not sort of this amorphous thing.
It's an actual kind of menu of possible to-do items, and we'll work through that coming up here.
And we'll also link to PauseAI's website in our show notes, if listeners want to check out more and learn more about what Holly and the organization are doing; they've got branches in Europe.
I suspect this organization is going to grow.
But before we even get there, for the people who hear this right at the outset, Holly, who say this sounds crazy, like this really sounds crazy, this sounds like doomer silliness, that we're not going to see A.I. become the Terminator; it's not happening.
I want to say again something that I'm observing, and I want to hear your perspective on this.
When Altman says he puts it at 2% that A.I. could cause human extinction, I think there's no way in the world I would get on an airplane with a 1 in 50 chance of crashing.
I don't think most people would ever do that.
They wouldn't put their kids on a school bus that had a 1 in 50 chance of going off a cliff.
So I don't know why we think 2% is like a low number, but Amodei says 25%.
There's other numbers; I mean, obviously, people like Eliezer Yudkowsky have it much higher.
These are people who work in the field.
They're not like these gadflies on X. These are real people in the field.
So why hasn't that permeated the public, do you think?
>> I think it does sound very absurd.
I understand, for a lot of the public, you know, when they think of A.I., even still, for a lot of people, even as the capabilities are more understood, it just feels like, well, it's something that's in my computer; how does it come out and hurt me?
That doesn't seem likely.
But also, I'm very surprised by the number of people who just immediately grasped it, whose immediate reaction to learning about, like, other alien intelligences being created in a lab was that it could be a threat to us.
I think also there's been a kind of deliberate use of p(doom), and of talking about safety, from the companies making this A.I., in a way that inflates it and makes people inured.
So many people think that it's absurd, you know, that somebody would be making a product and say that it had, you know, a 25% chance of killing them and causing human extinction.
They think that's absurd.
So they think, well, how does this make sense?
It must be that this is some kind of flex of their ability, or they're trying to look cool or something, and they've been able to kind of hide behind that.
The story's a little bit more complicated than that.
I think, you know, for instance, Sam Altman and Dario Amodei needed to talk about extinction risk early on because the talent they were recruiting cared about that.
And that was why they were working on it.
They were trying to, you know, make an aligned A.I. so that there wasn't a problem with superintelligence in the future.
And so they had to seem to respect that.
But then, I think, when they started talking about it in public, they were surprised that they didn't get the reaction that...
Sam Altman, I don't know if anybody remembers, in 2023 he did this kind of charm offensive tour.
And he spoke in Congress, and he talked about how serious the risk of A.I. was, and that's why we have to build it.
And I think he was surprised to find that he didn't really need to make the case that he was doing it for safety, because he wasn't really believed about, like, the power of his product.
It was seen very cynically.
And so he found that he could use it instead as, like, a shield, to make it seem absurd if other people come up and bring up the safety risks; it's like, well, he already said that, and we don't believe it.
We think that's weird.
I think he kind of got lucky in that way.
Unfortunately, it's related to these companies coming out, sadly and weirdly, of, like, proto-A.I.-safety spaces that were concerned about the possibility of ever-greater-than-human A.I. being developed and how dangerous that would be.
And that's why a lot of expertise developed, technical expertise, and, unfortunately, it's easier to use that expertise to make a dangerous A.I. than it is to make a safe A.I.
>> Well, and if you don't mind, I want to listen to some of what Dario Amodei himself has said.
He's the CEO of Anthropic, and one of the emerging narratives in the last couple of weeks is that, well, it turns out Sam Altman and OpenAI are the villains here because they sold us out.
They did a deal with Pete Hegseth in the Pentagon.
But it was Anthropic who held the line, and they kind of look like the heroes here.
And it's Anthropic who self-reports more of the weird stuff that's going on with their own A.I. and puts it out to the public when they don't have to.
And it's Dario Amodei who's doing interviews like this one on Fox News, where he talks about the need for somebody else, probably the government, to regulate his industry.
Let's listen.
>> It's very hard to predict these things, given how fast A.I. is making progress.
But I would not be surprised if somewhere between 1 and 5 years we started to see big effects here.
You know, I've heard a number of people talk about this in private: A.I. CEOs talk about this in private, CEOs of other companies talk about this in private.
And I really felt that, you know, the message that this is happening hasn't been getting out to ordinary people, hasn't been getting out to our legislators, our Congresspeople, either.
And so I felt I needed to speak up on the record.
I do think that this is something we can prevent, but we need to act now.
I don't think we can stop the A.I. bus.
You know, there are 6 or 7 companies just in the U.S. working in this area.
I just run one of them.
Even if our company stopped doing what it was doing today, all the other companies would continue.
Even if all six companies stopped, then China would beat us.
I think that's a big and important threat.
So we can't stop the bus, but I do think we may have an opportunity to steer it.
>> That's Dario Amodei talking to Fox News.
So, Holly, one of your big themes I think is important here is that you don't think there are heroes in the A.I. space, and you're not really comfortable with people praising Amodei for those kinds of statements, or praising Jack Clark for going on Ezra Klein's show and being honest about the weird and unpredictable stuff happening in their own models.
Why aren't there heroes?
Why isn't Anthropic an example of A.I. done right?
>> Dario loves that story.
I think he's maybe the biggest originator of that story, that A.I. is something that's just happening.
It's a bus, you know; no one's driving the bus.
We're just trying to outrun the bus.
And it's kind of a form of destiny, because technological progress just keeps climbing, and if I don't do it, someone else will do it.
And it's the most exculpating narrative.
It is such an excuse for doing a bad thing.
So he got together a bunch of people, and the Anthropic employees are the ones who most self-consciously believe that they have a safety motive and that they're doing the right thing.
And I knew many of these people before Anthropic was formed.
And they were motivated by protecting the world from dangerous superintelligence.
And Dario just knew how to talk to them, to allow them to work on a very lucrative problem, a very cool problem, instead of working on the hard and frustrating work of making it safe, or accepting the possibility that it's not going to be made safe anytime soon, or maybe it can't be made safe in an acceptable way, ever.
You know, perhaps you can get lucky and make a superintelligence that is not deadly, but looking forward, prospectively, it might be impossible to get assurances like that.
And I think in that case, we shouldn't go ahead.
And Dario has just furnished excuse after excuse to be able to do it.
His story relies entirely on, you know, it's not me, it's the incentives; it's the incentives that are bad.
Well, somebody else is going to get rewarded for making superintelligence.
And he knows very well that there are other options, like governance.
And listening to that clip is so galling, you know, because he talks about going to the government and talking about the dangers as if he just discovered this.
Golly gee, well, why does he have an army of lobbyists doing the opposite in Washington, D.C.?
Why does he have lobbyists going against state regulations, playing kind of the same playbook as OpenAI or any other A.I. company behind the scenes?
They're the same, but Anthropic has this image, and it doesn't always make sense.
It doesn't always play super well to the public, but I think a big piece of it is, you have to understand that he has to keep his employees happy, and they need to believe this very particular set of circumstances that makes it okay to build the dangerous thing yourself.
He also is not being honest about the nature of the competition.
So there are, you know, about five A.I. companies in the U.S. that do these huge training runs on their own.
Everybody else, you know, the Chinese companies, are using a technique called distillation, which is where they use outputs from a model to kind of recapitulate what's happening in the model.
So that's how DeepSeek is made.
And if Anthropic were not providing that model, if Anthropic had not done the big training run to get to that model, there would be nothing for DeepSeek to base their training on.
And in myriad other ways, U.S. leading work is just, like, leaked; there's, like, corporate espionage.
But it is the U.S. companies and the leading companies that are allowing the others to keep up.
So, like, an image I've seen that I thought was pretty good was, it's like a water-skier, you know; the boat has to go fast for the water-skier to keep up.
And the water-skier is never going to catch up to the boat, because that's not how it works.
They're tethered to the boat.
There's a delay, because they rely on the outputs of the U.S. work.
So if Anthropic shut down and said, we just can't do this in good conscience because it's so dangerous, well, Dario's acting like that would just have no effect on the production of A.I., when of course it would have a huge effect.
It would be much more effective for him to shut his company down than to go to politicians and beg to be regulated, and then, behind closed doors, beg to not be regulated via his lobbyists.
So he's just lying.
He's lying.
And he has a very particular kind of spin that works on his employees and a lot of the people who are most into A.I., closest to A.I.
And with the whole Department of War kerfuffle, he broke out a lot more into seeming like the good guy to a more general public who, you know, wasn't...
>> Yeah.
>> I would agree with you.
I think the general public is much more aware of that story, because it gets headlines in big news outlets, than they are of some of the granular stuff, the lobbying, what's happening behind closed doors.
And I do think Anthropic has sort of picked up that narrative.
But this is where I think you go even further than a lot of your peers do, unless I'm wrong here.
I've seen, I think, you say in public speeches and in your writings online that you don't even think we should be praising the well-intentioned people who work for the Anthropics of the world, the people who want to be on the ethics team?
>> Absolutely not.
>> But they're trying.
They're trying to do the right thing from within.
>> But there is no doing that.
It's, you know, maybe on it's when you first hear that idea or, you know, in the first year of this whole thing that's worth trying, you know, maybe back in 2016 when OpenAI started, there was a debate in these spaces about whether it was okay to work at OpenAI in the safety team.
And at that time, I thought, why not?
Why wouldn't we have some of our people in there?
But then as OpenAI becomes the largest, most valuable company, the most valuable company in the world it's you're not going to go in there as one person and influence the culture and change things.
It's just a different environment.
Like they, it's, it's like saying, I'm going to go and be a spy without any training.
I'm just going to go and get embedded in this other culture.
And I'm going to get paid by them, and I'm going to make all of my best friends, because I work all the time, are going to be at this other country that and, and to think like that, the CIA loses spies to this all the time, like trained spies, like defect and, and join the.
>> Group because they're.
>> Because they're influenced.
>> Because they are influenced; the direction of influence goes with a giant company that has lots of money to offer, lots of security to offer; it provides a lot of structure.
A person who goes in with good intentions, to kind of make things safer, is not equipped to resist that.
And mostly they just buy the narratives of the companies: that actually, it seems from the outside that we're doing something bad, but really we can't do anything different, because of the race, because of the incentives.
And so actually, those guys out there, they don't have the whole story, but you're an insider now, so we can tell you the whole story, and you would hate to lose your insider status, wouldn't you?
Because then you could never do good; you couldn't do anything about this problem.
And also, you can't make millions of dollars.
And also, you can't be part of this glitzy company that's at the forefront, okay, the hottest thing in the world.
>> But let me try, because I'm becoming kind of a fellow traveler of yours.
I'm trying to check my priors.
So let me try to steelman this with a counterpoint here, and I want to listen to another clip that we have in a moment related to this.
As a member of the general public who would like to survive and see my children survive, I would like to see well-intended people working on A.I. teams, as opposed to an army of Peter Thiels.
Peter Thiel, in a conversation with The New York Times' Ross Douthat, was asked what I thought was a pretty straightforward question: does humanity have a right to survive?
Does the human race?
Should the human race persist into the future?
Or should we just simply accept the fact that maybe A.I. will wipe us out, and maybe that's the natural evolution of intelligence on this planet; maybe that's okay.
Listen to how much Peter Thiel struggles with the idea that maybe humans should be allowed to exist.
>> You would prefer the human race to endure, right?
You're hesitating.
Well, I yes, I don't know.
>> I would, I would.
>> This is a long hesitation.
It's a long hesitation.
>> There's so many questions.
>> And should the human race survive?
>> Yes.
>> Okay.
>> But but I also would I also would like us to, to radically solve these problems.
And and so, you know, it's always, I don't know you know, yeah, transhumanism is this, you know, the ideal was this radical transformation where your human natural body gets transformed into an immortal body.
>> I'm trying not to laugh at the last part, because we should take that very seriously, because he is not...
>> The only one. I mean it.
>> He's, yeah, he's not the only person in the space who's talking about immortality, merging with machines, uploading your consciousness, when the general public has no idea how common that is.
And I'm not saying that's the majority of people in tech or A.I., but that is not, like, a unicorn idea there from Peter Thiel.
He really struggled with the idea of just the human race as we are persisting; like, maybe, maybe we shouldn't.
And so what I'm saying is, I want to get your reaction to that clip, but also, I don't want just those Peter Thiels on these teams at Anthropic and DeepSeek, wherever.
I want the idealists there who do think human beings should survive.
>> That the idealists are very manipulable is, unfortunately, the case.
So my perspective on this, I think, is clear, because I did know a lot of these people before these companies existed, and I've seen the whole evolution, and I was horrified.
I've been horrified many times in the last three years, when I expected people to, like, live up to their word.
You know, I wasn't working in A.I.; I've never done technical A.I. work.
And the whole reason they were supposed to be at these companies was to whistleblow, to, you know, take their insider status and report, or to really push within the company and threaten to leave the company in order to get what needed to happen achieved.
And they just wouldn't do it.
Already by 2023, pretty much everyone I knew who tried this, like, influence, or who worked on technical safety at an A.I. company, had been taken in by some narrative: like, well, things are different now; or actually, what's important is that OpenAI keeps its lead so that the race won't go faster, and so actually, that's why I have to keep working at OpenAI.
At Anthropic, there's just been a slow turning-down; you know, Dario kind of dials it down: like, well, maybe it's not really that risky, or we can deal with this empirically, so we just have to build a lot of models incrementally and kind of watch and see if they do anything bad, do a few tests, do, I think, pretty superficial kind of alignment work, and actually, that's the way to do it; that's going to translate.
And they just accept it, you know; they just accept the influence.
And I just see, too, on, like, the personal side, how they're just unwilling to give up what makes them special.
They have this cool thing; they've always wanted to work directly on A.I. models.
And, going even further back, the let's-just-build-it-and-test-it model, that was an early erosion in what should have been, like, a strong bulwark of safety people at these companies, that that was even allowed.
But I think the appeal of that was getting to work on technology, rather than just doing difficult, I don't know, game-theory math, or trying a bunch of directions, or struggling without a product.
The A.I. is, you know, the radium of this decade.
It's this cool product that can do anything, and people just want to be near it, and people want to eat it.
It's like eating the sun.
And so the people who say they have good motives, or who maybe originally did, in these companies are not immune to that either.
They don't want to give up their proximity to it.
And they tell themselves that this proximity is going to give them the chance to do the right thing when it's pivotal.
And I think they will never do the right thing when it's pivotal if they won't do it earlier.
And that's a big meme at Anthropic especially: a big thing that's talked about is being in the room, or getting in the room.
And they imagine that they're going to be in the room when a major decision is made, and they're going to be the one who says, no, let's not do it, and that's why it's really important that they, like, make it to that room.
But they've made the goal getting to the point where there are people in a room who could, you know, kill everyone.
And that's not how to prevent that from happening.
How to prevent that from happening is to recruit many people to work on all of the fronts that it takes to keep there from being such a room in the first place.
So, I mean, that one is a hero fantasy, too.
But there are always excuses for why it's okay to further A.I. development.
And meanwhile, more and more powerful A.I. is what, you know, exposes us to more and more powerful externalities, and more and more powerful A.I. can make ever more powerful A.I.
And they're just letting that clock continue to tick, when what we need to do is stop that process; then we can think about all of the ways, all of the technical ways, all of the governmental ways, to deal with the problem, or how to go forward if we ever restart making A.I.
But some people involved have a profit motive for not stopping.
Some people involved have a glory motive for not stopping.
A lot of people involved in kind of supporting them at lower levels just don't want to lose their connection to this powerful thing.
The influence of money and power and glory is just too strong for somebody to unilaterally come in, just them against this entire company and this entire machine, and do something about it.
>> Yeah.
>> At this point, yeah.
>> Once in a while you hear, well, it could cure cancer.
It could really help medicine, and maybe it will.
The reason I wanted to play the Peter Thiel clip is, I don't think that's an outlier.
I mean, the amount of just truly strange people with a ton of power who are obsessed with immortality, or with things that the average person would disagree with as a goal, and yet they're driving A.I. policy and charging forward.
It's pretty wild.
So, last couple of things...
>> You'd think if they wanted to cure cancer, they'd start with cancer.
>> Yeah, yeah.
Or not just, you know, wipe out music and art while they're doing it.
But anyway, just briefly here, I'm going to reset the scene, and then, after our only break, we're going to take some listener feedback.
We've got some emails to read at connections@wxxi.org, if you want to interact with the program; you can do that.
Holly Elmore is my guest, executive director of PauseAI U.S., and we're going to get the PauseAI playbook, as they see it, before the end of the hour.
But just briefly here, there's this idea; well, Tim emails to say, Evan, your guest is the horse-and-buggy industry in the age of the automobile.
So I know you've heard this before.
So briefly, tell our listener named Tim why you think that's not a good comparison, that you're not just standing in the way of technological change that's inevitable.
>> It's false that there's technological change that's inevitable, or that its shape is.
Maybe it's true that, as long as we persist, we'll continue to innovate.
But why does that mean A.I.?
Why does that mean something that could kill us?
I don't think that's progress.
>> Okay.
And I guess I'm doing emails now because this is related. Bob wanted to know, do you know of any historical precedents for legislation coming before harm? He says, I wonder if there are lessons to be learned from the past here, but I can't think of anything. Maybe you're trying to do something that's never been done before, Holly.
Before the program, related to Bob's question, I was trying to think of any tech that actually was brought online and then taken offline for the good of humanity.
And the closest thing I could think of is nuclear weapons.
But I don't know what you think about that.
>> Well, human cloning. There's a moratorium on human cloning that was arrived at just through diplomatic means and really has not been pushed against. So that would be one. We definitely could do it, we have the technology, but we decided we should not do it.
>> Okay, so human cloning joins nuclear weapons in that category for you?
>> Well, I guess one was detonated in a test and two were detonated in war, so I'm not sure if that counts as without harm, but.
>> No, no, no, certainly not without harm.
But there's this idea for 80 years that if the next one goes off, probably the whole world is gone.
>> Yes, I think so. I think the Nuclear Nonproliferation Treaty and other nuclear treaties are our inspiration for the pause treaty, which it sounds like we're going to talk about. I think humans can look ahead and see that something is dangerous. We have this capacity. We have done it before. A lot of what most people were concerned about with nuclear weapons wasn't a harm that they personally experienced; it's that they could see the destructive capacity. So I certainly think humans can do this.
Also, we don't have to wait. There are harms from A.I. today that give us a sense of how much worse things could get. And there are a lot of unexpected harms.
In the years that I was worried about just the danger of a powerful superintelligence, I wasn't thinking about how it would have super persuasive powers and, you know, cause people to have a mania from using it. That was not something that occurred to me.
But we see that. The ChatGPT body count is 18 at this point: people that it advised, you know, encouraged to kill themselves, or crimes that it helped or encouraged people to commit, based on delusions that it fostered. So there is a lot of harm already.
There's job loss, which I would call a harm. I forget which U.S. entity confirmed that 90,000 jobs were lost to A.I. in the last year, and that's only going to increase. And people can see that; we don't really have to wait for those numbers. People can see that when the A.I. can do what you do as your job, it will more cheaply replace you, and it will do that. There's nothing stopping it if we don't make a point that it's important for people to have jobs, that it's important for human society to be the way we want, rather than however the path of least resistance of technological development dictates.
>> So that's some of the diagnosis from Holly Elmore, who is the executive director of PauseAI U.S. If you're just joining us, she is rejecting the idea that A.I. is inevitable, that the best thing to do is sort of educate yourself, get up to speed on the tools, try to be part of the A.I. economy, maybe ask Congress to regulate. She is saying something else entirely: that the job loss at huge scale, or the possible human extinction, which the A.I. companies themselves say is possible, is enough that we should be creating international treaties to stop this.
So how to do that?
On the other side of this break, we'll take some more of your feedback and we'll talk to Holly Elmore about how she sees the playbook going forward.
Coming up in our second hour, 50 million Americans live with hearing loss of some kind.
Our guests are going to talk about why those numbers are growing, what environmental factors, and maybe factors in our own digital and technological lives, are affecting that, and what we can do about it. There's also a kind of stigma that goes with hearing loss.
We're going to talk about all of that and more next hour on Connections.
>> Support for your public radio station comes from our members and from Bob Johnson Auto Group.
Believing an informed public makes for a stronger community.
Proud supporter of Connections with Evan Dawson focused on the news, issues and trends that shape the lives of listeners in the Rochester and Finger Lakes regions.
Bobjohnsonautogroup.com.
>> This is Connections.
I'm Evan Dawson.
The Trump administration is certainly very A.I.-friendly. They've sought to do everything they can to eliminate state regulation; regulation at the New York State level has been challenged by the Trump administration, and the Trump team has basically said that they want to be leaders on the world stage in A.I. Governor Kathy Hochul recently put out a statement saying she wants New York State to be responsible but effective economic leaders in A.I.
And I am surprised that political leaders, especially presidential candidates, are not seeing the polling of how truly unpopular A.I.
is, how people say they don't want this.
It is an undemocratic technological change being pushed on them.
And it makes me think of a great philosopher from the 1980s who seemed to have been predicting this. Now you're going to hear a clip from a movie, but instead of thinking about what they're actually talking about here, I want you to think that they're talking about A.I. This is Bill Murray playing Peter Venkman of the Ghostbusters, and the mayor of New York City is on the verge of sending the Ghostbusters to go after Gozer, but he's not sure he believes it's real. And so he's wondering, well, what if you're wrong about all of this? What's actually going to happen? Listen to that conversation, but replace ghosts with A.I. here.
>> This city is headed for a disaster of biblical proportions.
>> Well, what do you mean biblical?
>> What he means is Old Testament, Mr. Mayor, real wrath-of-God type stuff. Fire and brimstone coming down from the skies, rivers and seas boiling.
>> 40 years of darkness, earthquakes, volcanoes, the dead rising from the grave.
>> Human sacrifice. Dogs and cats living together. Mass hysteria.
>> Enough. I get the point. But what if you're wrong?
>> If I'm wrong, nothing happens. We go to jail, peacefully, quietly. We'll enjoy it. But if I'm right, and we can stop this thing, Lenny, you will have saved the lives of millions of registered voters.
>> So right now, millions of registered voters, Holly Elmore, are saying we would like to stop this thing.
I don't think I know any presidential candidate for 2028 who is really talking that way. Do you?
>> No, I don't. It's all, well, you've got to balance the benefits with the risks, which feels like a very old line at this point. Somebody is going to cash in big making this their issue.
>> You think that'll be a political winner?
>> I think so, because also, I think maybe politicians' timescales are off; they have not understood how quickly these things change. I mean, I watch A.I. opinion constantly, and the things I can say that are pushing the envelope one day, a month later they're not, and everybody agrees. And so maybe it feels early in the primary to be staking out an A.I. position, but by the time this goes to the general election, the momentum of anti-A.I. sentiment is only going to have grown. So it seems like a great bet to me, and I would be so happy if somebody would take that opportunity.
I'm surprised. So the mission of PauseAI is about education and rallying of the public, and then speaking to, well, governments, but mostly legislatures, you know, representative government, to the representatives. And that first part is what I thought would be harder: getting the public on board, getting them educated, up to speed. And honestly, I think the public basically gets it. They need to know more details. They need to know what they can do.
>> But so what can they do?
>> They get it. Well, so our focus is on helping people make quality contacts with their representatives. So quick things you can do are, you know, signing our petition. So PauseAIus.org for the U.S. versions of things, pauseai.info for more information, and you can also kind of work your way back to PauseAI U.S. from there.
So that's the easiest lift: signing, getting on our mailing list, being informed of next actions. The next level up is calls to action. So when something happens, for instance, when Governor Hochul was sitting on the RAISE Act and hadn't signed it yet, we had a call to action that we sent out digitally to our New York PauseAI-ers to call Governor Hochul and say, please go ahead and sign it. You can be informed of those things and do them regularly. That's pretty good.
And then making a meeting with your representatives, contacting your representatives just to share your general concerns, is an option. We help our participants to do that. People often just want a little coaching and help navigating the system, but anybody can request a meeting with their representatives, and you really should. State level, even, is better; you usually get more attention from them, because they don't cover as many people. So that's fantastic.
And then you can take part in kind of bigger demonstrations and events, or put on your own events. Two weeks ago, we just got off of the largest U.S. A.I. safety protest in history, a march. Those kinds of things happen just because volunteers want them to happen. And you can become a local leader for us; that's my plug. That's our highest volunteer position: creating a PauseAI U.S. group in your city. We have 35 now, and we'd love to have them all. You can apply for that on our website at PauseAIus.org.
There are many, many other things that you can do just in your life, not directly related to our programming. Just talking about A.I. risk is important; it's de-stigmatizing. People are still sort of confused, though I don't think they're confused about the facts. I think they do understand. They see the very simple case that this is a powerful technology that's getting more powerful, it's not controlled, and even if it's okay today, we're not going to be okay tomorrow given that situation. But it's so confusing to be in this time right now where it's just not agreed upon, like, what is okay.
>> I think a really important thing you can do is be like a moral influencer, where you say, I don't feel helpless about this situation. Maybe I can't stop these companies from building superintelligence on my own, but that doesn't mean I have to say, well, what are you going to do? And that's a very common reaction right now, because people feel powerless, or they're confused, or they think, well, I must not get something, because it must be that these people running the show know something that I don't. And the truth is, they don't. It's a very scary situation. And the more that you put out that this is not okay, the more that you put out that this is something you are going to vote on, your concerns about A.I., the more the world responds to that.
So taking us from being confused about what to do, thinking there's no option, believing Dario Amodei's story that this is just happening and we've got to deal with it, and shifting that to: we do not have to accept this. That happens every time somebody takes a stand, every time somebody says, no, I don't think that's okay, I don't think this is something we should shrug our shoulders over. I plan to call my representatives. I plan to be informed in my vote. I plan to go to town halls and talk to candidates and tell them how important this is to me, that this is going to determine how I vote. That is how I protect myself.
>> So if I could jump in.
>> So much power.
>> If I could ask you to just maybe take a couple quick minutes, and then we'll try to grab a couple more emails from listeners, but a couple quick minutes to talk about where ultimately, if this really succeeds, you want to go, which is an international pause treaty. Here's Palantir. Palantir CEO Alex Karp said the Luddites arguing that we should pause A.I. development are not living in reality and are de facto saying we should let our enemies win. Karp said, quote, if we didn't have adversaries, I'd be very much in favor of pausing A.I. completely. But we do have adversaries, and we can't. That's Alex Karp from Palantir.
So let's work through that. The adversaries in individual companies, that could be one thing, but you hear the Trump administration saying, we're not going to let China win. So, on this international treaty: I'm looking through your website, and I appreciate all the specifics here, and listeners can find them there, if you want to go to the, not the OpenAI, the PauseAI website, Holly. Not to be confused. But walk people through what this treaty would do here.
>> Well, I think you're referring to our proposal. The proposal is there basically to establish feasibility and to educate people: this is one way that it could happen. But I want to emphasize there are a lot of ways that we could achieve a pause. There are many ways that we could slow down progress, or deny companies the resources that they need to build A.I., many ways we could prevent both people following the law and people skirting the law from continuing to build, if we set a law.
The one we suggest is hardware control. As I said earlier, our hardware comes from a very shaky, single supply chain; it's very complicated to make, and a lot of it is needed. And with every next level of A.I. training, more and more is needed. Also, it's very easy to tell who has hardware, because when you're doing a big training run, it's visible from space; the heat it's generating, the data centers, are visible from space. I wish I could show an image right now of one of the data centers, the OpenAI Stargate ones in Abilene, Texas, with a football field next to it that just looks like a little dot. You can see these things from space. You really can't hide them.
So people worry that, oh, a criminal could just make superintelligence. Well, yes, but it would be against the law. There are lots of things that are against the law because people want to do them, and we stop them. We have enforcement. So, the treaty: we want a global treaty, and it would not be a global treaty if China did not agree. And a treaty is not just trusting somebody's word; a treaty involves verification mechanisms. And there are so many.
>> Like you say, like the IAEA.
>> Well, that's one possibility. Yeah. So there could be an agency like the IAEA, or CERN. You know, some nuclear research is dangerous, and there's an international institution that is the one allowed to do that kind of research, with international input on whether it is safe. A little bit of a throwback, but when I was in high school, there was an experiment at CERN that made some people think a black hole could be created, and CERN took that very seriously. They got the probability that it was even possible down to, you know, ten to the negative 26. And that's the level of certainty that I would like about A.I. research.
I don't know that it's impossible. Maybe it is. But I think it's possible that we could have a centralized institution where everyone on earth, every political entity on earth, has some input and oversight and the ability to check the others, and then possibly we can proceed safely from there. There are so many versions of this, and we suggest possibilities just so people can understand how it could work. But really, it's down to the governments negotiating. So we're not married to any particular language, and we want there to be the democratic input of all of the negotiators into whatever happens. Our biggest desire is that the world agrees to pause and agrees to a very high standard of safety: scientific consensus that the next step could be done safely, and democratic buy-in about what it'll do to this world, the risk that's being taken.
>> Well, let me just jump in, because we're down to our last minute here. And John wrote in a long email basically to say thank you to Ms. Elmore for what she is doing, discussing and vigorously debating. While A.I. is a valuable tool, like all tools, perhaps we need to have humanity slow down using this tool to keep people safe. And he says, I did not use ChatGPT to write this email. Thank you, John.
And Alex, briefly, take the last 30 seconds. Alex wants to know: are you kind of selling out on the future? Are you not going to have kids because you think there won't be an American future or a world future? Are you that pessimistic, Holly?
>> I want kids because I plan to live.
We're going to win.
That's what I think.
But this is the way we win. We don't win by just letting A.I. happen. We win by deciding what to do with our future.
>> Thank you for making the time. Online, we're going to put links to the PauseAI website so people can check it out, read more, and maybe get in touch with Holly and the team if you'd like to. And let's talk again in the future. These things are moving so fast, Holly, and I hope you'll come back and update us on what's going on.
>> I'm sure I'll have a lot more to say. Thanks for having me.
>> Holly Elmore, founder and executive director of PauseAI U.S.
More Connections coming up in just a moment.
>> This program is a production of WXXI Public Radio.
The views expressed do not necessarily represent those of this station.
Its staff, management or underwriters.
The broadcast is meant for the private use of our audience.
Any rebroadcast or use in another medium without expressed written consent of WXXI is strictly prohibited.
Connections with Evan Dawson is available as a podcast.
Just click on the Connections link at wxxinews.org.
>> Support for your public radio station comes from our members and from Fred Maxik, now part of Withum, a national advisory and public accounting firm.
The local team continues its decades long commitment to serving Western New York with advisory, tax and assurance services.
More at withum.com and The Golisano Foundation supporting Move to Include programming on WXXI and working with the community to lead change toward the inclusion of people with intellectual and physical disabilities.
Share your thoughts at Move to Include.org.