Connections with Evan Dawson
AI, free speech, and the future
2/21/2025 | 52m 18s
OpenAI claims that AI should not lie, omit, or push a moral position. We discuss what this means.
AI companies have controlled what their models say, deciding what’s harmful, what’s neutral, and whether AI should ever take a stance. Now, OpenAI claims that AI should not lie, omit context, or push a moral position. But what does that actually look like in practice? Will AI become more open and informative, or will this change create new challenges in how we interact with these models?
From WXXI News, this is Connections. I'm Ksenia Kolarik, guest host, sitting in for Evan Dawson.
This week, OpenAI made an announcement.
They're changing how they train AI models, moving toward intellectual freedom and embracing controversial topics instead of avoiding them.
If that sounds like a shift in Silicon Valley's values, that's because it is.
For years, AI companies have walked a fine line, deciding what their models should or shouldn't say, what is considered harmful, and whether AI should ever take an editorial stance.
Now, OpenAI says it wants ChatGPT to seek the truth together with users, instead of refusing to answer controversial questions.
The new policy claims that AI should not lie, omit context, or take a moral position.
That sounds great in theory, but what does it mean in practice?
If AI has to present all sides of every issue, including ones that are factually wrong or harmful, is that really neutrality?
There is another layer to this: OpenAI's shift toward free speech happens to align with the new Trump administration's stance on tech regulation.
Conservatives have long accused AI companies of bias, claiming platforms like ChatGPT favor left leaning perspectives.
OpenAI denies that politics played a role in this change, but critics aren't so sure.
So is this a win for free speech or a dangerous move that lets AI spread misinformation?
Is OpenAI adapting to a changing world, or just bending under political pressure?
Today we're diving into the future of AI, free speech and truth itself.
With me in the studio, I have my colleague Mona Seghatoleslami, the music director of WXXI Classical.
Thank you for joining Mona.
Yep, excited to be part of making things work as part of live radio and discussing some of the philosophical ideas in this topic, and hopefully soon to be joined by Jeffrey Allen, director of the Institute for Responsible Technology and assistant professor in the School of Business and Leadership at Nazareth University.
You can join the conversation this hour by calling (844) 295-8255, or if you're in Rochester, (585) 263-9994.
You can also email the program at connections@wxxi.org, and you can watch us on the WXXI news YouTube channel, where you can also leave comments in the chat.
Okay, Mona, let's kick things off.
So what's your overall opinion on this current OpenAI situation?
And what do you think about the regulation that AI is probably going to face sometime very soon?
Well, in general, you might say that free speech is a really good value.
But when we look at the idea that lying is one of the things that's allowed, that's strange, because that's also, you know, that's not a neutral position.
This claim of neutrality is sort of a false thing.
And if it leads to immoral or improper actions, it seems so fraught, because there's no ethical or even legal consideration from the actual AI model or the way that it's interacting.
So I guess it just seems so worrisome and problematic.
And even while it's supposedly freeing.
So I guess, you know, what are some of the things that you find yourself concerned could happen with it?
So, Jeffrey, Doctor Jeffrey Allen just joined us.
Welcome.
Welcome, Jeff.
Glad to be here, virtually at least.
Yeah.
Thank you so much for joining in.
So OpenAI says this change is about intellectual freedom and presenting all perspectives.
But critics argue that it could lead to AI legitimizing false or harmful ideas.
Where do you stand on this?
I think that there's some argument to be had there, even if it is a little bit of a twisted logic.
But, when it comes down to it, this is one of those situations where we really need to consider the pros and the cons quite carefully because, it's nice to have the, you know, the whole free speech aspect of it.
But in reality, what the potential fallout might be is also something we need to give some serious thought to.
OpenAI also says AI shouldn't lie or omit important context, but in reality, AI can't speak the truth like a human can; it just predicts words.
How do you think AI models should balance accuracy, neutrality and ethics?
Often, when I'm using models like ChatGPT, one of the things you always see is the disclaimer at the bottom that, you know, ChatGPT can make mistakes.
We need to be careful and double check the, you know, the output.
But the reality is that almost seems like they're passing the buck in a way off to the user to do their, fact checking in every single query they put in there.
And while I think that it's just a good practice to be in that we verify the information that we're dealing with, day to day from AI, I also think that there should be some responsibility put on to the model developers themselves.
And, you know, guardrails are always a good thing to have in place.
And maybe it's, you know, more accurate to think of them not as guardrails but more like brakes.
When the car is going too fast, you know, we have the ability to apply them and slow it down and keep it in control.
So maybe that's a little bit of a better analogy.
Of course, there is a political angle here.
OpenAI is making this shift at the same time that the new Trump administration is vocal about AI censorship.
Do you think this is truly about free speech, or is OpenAI trying to stay on the right side of the government?
Well, that's certainly a loaded topic there.
We need to kind of, you know, pay some respect to the reality that a lot of companies these days do seem to be falling in line politically.
OpenAI, on the other hand, I wonder if there's not something deeper going on there.
If we look at what Elon Musk is doing on the AI side right now with xAI and Grok, which just released their newest version this week, it almost makes me wonder if it's not more of an attempt to get ahead of other competitors in the space, rather than something necessarily politically driven.
And there is an argument to be made, I think fairly that when you put restrictions in AI, you're also limiting its potential.
But again, we have to kind of weigh the pros and cons here.
This isn't sort of like a black or white question.
It's a matter of taking all the factors into account.
It's like free speech.
Again, the argument you can't go into a crowded building and yell fire!
You know, you may have the right to free speech, but you don't have the right to incite panic.
So we need to kind of think about it like that as well.
From the AI side.
So I'm actually studying machine learning and AI at school right now.
And one of the things that I've also been thinking about is how limiting AI, even in terms of politics, is kind of like telling a person that, no, you cannot lean toward a certain party, like the Democratic Party.
I wonder, what do you think about this comparison?
I certainly think that bias, whether intentional or unintentional, has the ability to influence the way we think about certain topics and the way that we ourselves respond.
Whether or not it goes that far, I'm not quite sure myself at this point.
But the reality is, though, that the output is going to influence, the way that you perceive specific topics and items, whether it be political, whether it be things like related to health care or any other number of topics that can be divisive.
The bias within the training data, as you know from studying machine learning, is going to have an unintended effect.
And it's going to be amplified.
So that's one of the things we need to be wary of.
If OpenAI stops moderating responses entirely, could we see AI models become tools for disinformation campaigns?
I think that's not off the agenda.
I mean, why wouldn't we?
If you could use it for any legitimate purpose, you could certainly use it for illegitimate purposes as well.
And so if we are going to use it for things like, you know, the benevolent purposes of free speech, there's also nothing to prevent bad actors from using it in the same way.
I think something interesting that came up in the news recently concerned the Cybertruck explosion that happened in D.C. at the end of last year, and how the individual who planned that attack used ChatGPT to do so.
And that was with a, censored version of ChatGPT.
We're not even talking about an uncensored version yet.
Is a fully uncensored version possible at all?
I think it is.
I think there have certainly been some attempts on the open source side to create uncensored versions of models, and they've done so with fairly good results.
I mean, good good being subjective.
Right.
We're talking about AI models that answer anything you ask, if that's the criteria we're judging by. And the danger, I think, with ChatGPT and OpenAI is that it's so much more powerful than these open source models, and they've got so many more resources to throw behind these models.
So, I think we should be certainly concerned.
And I do think it is possible to get it there into an uncensored format.
So you mentioned the Cybertruck explosion and how a person was using ChatGPT to actually plan that.
And that made me think: is this new approach actually about free speech, or is it just shifting responsibility away from the company and onto the users?
So the cynic in me says it's the latter: they're trying to shift responsibility away and open it up for various usages which, in its censored version, really aren't possible.
So I think that there is a possibility that, you know, they can unload some of the burden of censorship and guardrails onto the user and kind of give you more disclaimers to be wary of anything that the model puts out.
So, certainly the cynic in me leans toward that side of the argument.
Since there is not much regulation going on in the AI world, do you think we're heading into a future where governments pressure AI companies to shape responses in their favor?
We only need to look at DeepSeek's R1 model in order to see that happening already.
I've certainly done a lot of playing around, and I've asked DeepSeek things that are considered to be sensitive subjects in China, and without fail, it's refused to answer me or it's given me answers that are very much in favor of the Chinese government, compared to a more balanced, nuanced picture of what the situation actually is.
So do you foresee that in the US, this change could lead to AI reinforcing harmful narratives, like anti-science movements or propaganda?
Yeah, and it's certainly true that once you open these models up, you know, it's like a Pandora's box.
We really don't know what is going to happen.
Right?
It remains to be seen.
And we certainly need to consider the fact that that may be the case, that, propaganda may become worse than it is right now.
Misinformation becomes more empowered.
And so those are the types of things that, you know, these are the considerations.
Again, we need to be thinking about that free speech versus the reality and the good of society.
What happens if an AI gives neutral responses to topics like climate change, vaccines, or human rights violations?
Should AI always present multiple perspectives, even when there is a clear right and wrong?
So that's a that's a good question.
You know, philosophically, I would think that I prefer the right answer.
I have seen AI give me answers when I pressed it on some, you know, very philosophical issues, about things like morality, religion, things like that, that don't really have clear answers.
But, it's tended to give me answers which are deferential to the thing that I'm asking about.
And I don't like those types of padded or hedged answers.
just because I'm more driven by facts, I want the quantitative.
What does it all mean?
What are the facts and figures behind it?
And so we need to, kind of make a determination.
Do we want answers that are sensitive to all viewpoints, or do we want answers that are more factually driven?
And so, it's certainly not something that, you know, again, it really comes back to what is our end purpose.
what groups are we trying to take into account?
And, is this a subject that needs to be handled with deference, or is it something that's quite, you know, straightforward and clear?
So right now at school, for example, we always have to cite AI when we get the answers from it.
However, it's still not considered like a trusted source of truth.
And since we're now moving toward more neutral models, does that mean AI should be used as a trusted source of truth?
I think it's getting better.
I also think, though, that like with any, you know, research you're doing, we should certainly be mindful of the fact that there could be multiple sources out there.
It makes sense to contrast and compare.
one of the things that, you know, we typically look at in research is what is the dissenting opinion?
What are the weaknesses of the answer, or what is the information that we're not taking into account contextually?
There's certain things we need to think about and the reality is that, you know, yeah, we we certainly, kind of need to take into account multiple perspectives here.
This is Connections. I'm Ksenia Kolarik, sitting in for Evan Dawson.
And today on Connections, we're talking about AI, free speech, and the future.
I'm joined by Jeffrey Allen, PhD, director of the Institute for Responsible Technology and assistant professor in the School of Business and Leadership at Nazareth University.
Jeff, I would like you to talk a little bit about your tech experience and your AI experience as well.
Absolutely.
So, I mean, I've spent about, 25 plus years in the tech industry.
I started working with some of the big companies, including IBM and SAP, and I worked on both the software engineering side as well as the more business management oriented side of the tech industry.
I moved away from that and more toward the world of startups out in Silicon Valley.
And, the longer I was out there, the more I started to see that there was an opportunity to do things with a bunch of data that was sitting around, but no one was really making use of.
So I founded a company called Echo Ridge back in Boston, and this company picked up some support from Dartmouth.
and we started creating, algorithms that could predict the outcome of regulatory and legislative actions across the United States.
So basically, for every state as well as the federal government, every week we would look at what legislation was out there.
And we could tell you this legislation is going to pass, or it's going to fail, and these are the reasons it's going to pass or fail.
So we had some fairly robust AI at our disposal, but we decided at the time that that was a really divisive area to be in, and we didn't want to start predicting legislative and political outcomes, because it was just an area that I don't think we had much appetite to be involved with.
But it also was something that potentially could add to the divisiveness that existed already politically in the country.
Again, this is, you know, 2018, 2019, right around that era.
And, so we decided to go in a different direction where we looked at how legislative and political actions could impact businesses.
And so we kind of created a more, I would say, downstream usage of our AI.
And, it was a fairly novel and unique approach to doing predictive, machine learning as well as artificial intelligence in general.
And that's a little bit about my background and experience with it.
But I will say that my observation of the tech industry more broadly has been that there is obviously a profit driven motive.
There is an idea that we need to get the traction we need to capture the market, we need to get ahead of our competition.
Those things are fundamentally incompatible with, the things that are more oriented towards the safe and responsible usage of AI.
And so when we do start to think about our innovations in AI as a country, as an industry, we really need to start to think about how do we balance those two particular, points of view and motivations.
I'm also personally very excited about adoption of AI in the universities and schools.
Could you also tell me a little bit of your work in Nazareth and what exactly you do there?
Absolutely.
So the Institute for Responsible Technology at Nazareth looks at emerging technologies, primarily artificial intelligence, to see how it will impact society.
I mean, although it's fair to say our focus is more toward how businesses adopt AI and then, from there, how it affects society.
But we're concerned with things like bias.
We're concerned with things like irresponsible usages of AI, such as abusing people's privacy, things like that.
And so the institute is on one end, interested in doing partnerships with the business community where we can kind of act as a guiding voice in a way, as AI becomes more prevalent within business and society.
We also have some degree programs, for undergraduate students and as of the fall, will likely have a graduate program as well, which is intended to train students to not only be competent with the technology around artificial intelligence, but also be able to ask the right questions about whether it's being deployed responsibly and ethically.
And that's something that I think most programs out there right now just aren't really focused on too much.
I mean, it might be a chapter in a book, or it might be a side note, you know, to what is really being discussed in classes.
But for us, we put a good amount of focus on ensuring that the, the ethical and the responsible is addressed just as much as the technology is.
That's a great point.
Actually.
I personally think that, as we have Writing 101 compulsory for all the undergrad students now, we should have Prompt Engineering 101 as well.
Do you agree with me?
I agree, and we actually just this semester, one of my colleagues, Mark Webber, he launched a new class which is called AI for everyone.
And it is essentially what you're describing: getting undergraduate students on board and comfortable with using AI, and doing so in a way that's responsible.
How about schools?
Are there any programs currently in the US that help high schoolers or middle schoolers understand AI better?
I've heard of some programs out in California at UCLA.
I know that there's a center out there that looks at high school students, particularly those from disadvantaged, underrepresented backgrounds, and helps them get skills and capabilities with AI that will hopefully be useful to them as they continue their educations and ultimately launch into their own careers.
That's the one that I'm familiar with so far.
There may be some other efforts out there I haven't really seen them publicized, but I have to imagine that there are.
There is some thinking, because I know here in New York, the school districts, particularly in this part of the state, are thinking about AI a lot and how they introduce it to their students.
I've certainly given a lot of talks at school districts and combined school districts, and I've actually been encouraged by how forward thinking they have been about this topic.
So my next question goes beyond open AI.
AI companies are now in the business of information, and they decide what's true, what's false, and what we see.
Do you think it's too much power in the hands of tech companies?
I think it's certainly a lot of tech power, that they have.
And we, we should certainly be asking, what their responsibilities are and what their accountability is.
And it's not just limited to AI.
We can make the same argument about, you know, social media companies ten, fifteen years ago.
Their algorithm dictates what comes across your feed.
So inadvertently or actually maybe intentionally, they're influencing what you're thinking, what advertisements you're seeing, what posts are influencing your political views.
So a lot of this has already been seen within the social media sector.
Now we're seeing it at a whole new level, just because of the pervasive power of artificial intelligence and its ability to create things which are rapidly creatable as well as very, very authentic and genuine sounding.
So if we think about, you know, AI and its ability to come up with images, someone was saying to me the other day that they start to question every image that they see now saying, oh, this might be AI, or maybe it's a real image.
So I start to discount image, sorry, information across the board.
I become more skeptical of everything because now I don't know what's real and what's not.
So there is a decent amount of power put on to these tech companies just by the nature of what they do.
We can certainly take a lot of lessons, though, from what happened with social media and, you know, some of the things that were just, in retrospect, looking back a decade now and saying, oh, these things were harmful.
And, that was an opportunity for us to be able to learn from the past so that we're not repeating these mistakes in the future.
We got a question from Bob in Rochester, and he's wondering: what is the difference between the algorithms from years ago in marketing and AI?
I think that sophistication and capability is a big part of it.
That's probably the easiest way to answer that question, where, if you think of what you saw years ago as version one, now we're probably about on version three in terms of capability, the, speed at which things can be done, the flexibility at which things can be done.
These are a lot more than we could have done even just five years ago.
And, even in between the versions of the models themselves, there's a lot of capability.
I actually had the opportunity, because of our role at Echo Ridge, to work with the original version of ChatGPT, and I thought it was a novelty at the time I looked at it; it could write short pieces of text and it could sound like a human.
But if you compare it to what's out there today with 4o or o3, there is just so much progress that's been made on the capabilities of these models, in terms of being able to deal with multimodal inputs such as images, videos, long documents, anything of that nature.
It's amazing.
I mean, it's mind blowing to see just what the capabilities are these days.
One of the things that we often talk about at school, and also at my job in WXXI is the bias.
And I was wondering if you think it's possible to build an AI system that truly represents all perspectives without reinforcing bias.
That's a difficult question to answer.
I do think we could get there.
I don't think we're there in the immediate future.
The reality is that the information that we're relying on is biased.
And, I mean, like it or not, as someone with a background in psychology, I feel it's fair to say that every human has some bias one way or the other.
And so almost everything you're going to come across is, from a certain point of view, in terms of information.
And that's the same information that's being used to train these models.
Now, these models don't really understand bias one way or the other, but they do amplify it very, very well.
And so you can see it in terms of its output.
And I know there are efforts right now; I am involved tangentially with an effort over at the University of Rochester, with what will be the emerging strong AI institute, to create AI that can train other AI without bias.
I don't know if that's something that's going to happen in the very near future, but I certainly think it can happen in the midterm at least.
So what do you think is the biggest ethical dilemma going on in the AI world now that we maybe do not talk about as much as we should?
And there's certainly a very big debate and argument about the training data that's being used on AI.
Authors and other artists are obviously not happy with their data being used to train these models, which then essentially put them out of a job.
I understand their point of view, and there's some certainly some legal arguments to be made here.
And I'm not a lawyer, so I really can't speak well to those.
But the idea is that once we train on all of the works that have been created by these individuals, and we don't need these individuals anymore, we're irreparably doing them harm.
Now, we've seen a couple of cases in recent history here.
Thomson Reuters, with their Westlaw product, just won a case last week related to a legal AI startup using their Westlaw data to train the startup's model.
And so that was one case where it went in favor of the copyright holders.
But I've seen other cases so far where large parts of the arguments have been dismissed.
So it's something that's still being litigated right now in the courts.
But I do think protecting copyright holders' rights and defining what those rights are is an issue that's going to become more important here in the very near term, because it also plays a role in how AI will continue to develop in the future.
We are going to take a short break now, and when we come back, we're going to talk about the future and where do we go from here?
With my guest, Jeffrey Allen, PhD and director of the Institute for Responsible Technology and assistant professor in the School of Business and Leadership at Nazareth University.
I'm Megan Mack, and coming up in our second hour: landing a job as a young professional feels harder than ever.
That's what members of Generation Z are saying as they're searching the job market.
Companies say they're desperate for talent, but young people aren't seeing the opportunities, or they're finding roadblocks when it comes to securing those positions.
Guest host Ksenia Kolarik continues the conversation about the job market.
Next on connections.
Support for your public radio station comes from our members and from Greater Rochester.
Habitat for Humanity Restore, a nonprofit home improvement store offering donated furniture, housewares and appliances in support of the mission to build homes for the community.
More at grhabitat.org/restore.
This is Connections. I'm Ksenia Kolarik, sitting in for Evan Dawson, and we're talking about AI with Jeffrey Allen.
You can join the conversation this hour by calling (844) 295-8255.
Or if you are in Rochester, (585) 263-9994, you can also email the program at connections@wxxi.org, and you can watch us on the WXXI news YouTube channel, where you can also leave comments in the chat.
So, Jeff.
The next thing I want to talk about is the future.
If OpenAI changes something, do you think other AI companies will follow, and are we about to see a broader shift in how AI handles free speech?
That's probably a very good observation that we're going to see, AI companies follow the lead because they really won't have a choice.
When we look back to the whole idea of shareholder value and things of that nature when it comes to running these companies, they have a fiduciary duty to make sure that they're at least staying in line with their competitors' moves so that they remain competitive.
So I don't really see that they're going to have much of a choice.
And, again, that really speaks to some of the deeper problems in terms of compatibility between what these companies are doing and the greater good of society.
But it's also one of those things that we just have to acknowledge is going to happen.
And it's pretty much inevitable.
Are we approaching AGI, or is it still just hype? And also, could you talk a little bit about what AGI is, for some of our listeners who might not be very familiar?
So AGI is the pinnacle, I guess you could say, of AI.
It is AI that, number one, is not single-task; it doesn't just do one thing in particular, but it can do everything, and do it to such a degree that it would be indistinguishable from a human doing it.
So that's the idea of AGI and I have seen some news floating around from Sam Altman in particular over the last month or so about, you know, how we're on the brink of it.
I don't know how much of that is hyping up OpenAI versus how much of it is reality.
I certainly think that from my experience with our current state of AI, even in the latest models, we're not there yet.
There's certainly some limitations.
I was having a conversation yesterday with an AI agent, on camera, and one of the things that I took away from that was that there is a lack of the human nuance within the conversation that makes it very obvious that I'm talking to AI, not to a human.
Whereas you might feel a little bit of awkwardness in a conversation, or you might have pauses that are natural for us but unnatural for AI within the conversation, things of that nature.
Just the little tics that people have.
AI does it too well.
We would often say, when we were having these types of agent conversations, that the AI feels omnipresent and omniscient, and unfortunately, most people are not omnipotent or omniscient.
So there tend to be hesitations in what they're saying.
There tend to be filler words; there tends to be, you know, a little bit of a laugh when they come across an awkward situation.
And that's missing from AI, that human-type feel to it.
So just from that level alone, I don't think we're close to AGI.
But then there's capabilities, right?
AI is fairly easy to trip up, still.
You can lead it down certain paths.
You can get it to do questionable things.
all you have to know is how to prompt it and how to get it in that direction.
So I don't think we're there in the coming year.
I do think, you know, it could happen within the next five years or so.
And that's actually, you know, dialing it back way back from what I initially had thought, because the very first time I came on connections, I think I made a prediction of it within the next 30 years.
And then I was also, surprised to see just how quickly things were progressing.
But we're not there yet.
We have a call from David from Rochester who wants to discuss toxic algorithms.
Hi, David, can you hear me?
Yes, I can, thank you.
Oh, what was your question?
If you could just briefly talk about it?
Yeah.
sure.
I was turned on to this movie called The Social Dilemma by an art producer and videographer, and it really kind of goes and chronicles how the algorithms used by Facebook and others are highly toxic.
And they did a great thing: they interviewed coders who were instrumental in coding Facebook, Instagram, Snapchat, and other social media platforms.
So I'm just kind of curious whether your guest has any comments regarding that, since algorithms are now actually developed, I believe, by AI a lot quicker.
Thank you.
David.
Yeah, I think that's a fair observation.
It kind of comes back to what I was talking about earlier with social media, and how a lot of what we're probably going to see is going to be an amplified version of what we've experienced within the social media side.
Obviously, Facebook has taken a lot of flak, and I think rightly so, for the way it developed its algorithms, particularly, you know, five, six, seven years ago, when, it's fair to say, Facebook was more relevant than it is right now.
But, their goal obviously is to capture their audience, keep their audience on Facebook as long as possible, and to make sure that they're seeing and reacting to as many ads as possible.
So we're starting from that point of view.
We're starting from that motivation.
And the reality is that you're going to do whatever you need to do, with very little thought to ethics or the overall well-being of your users, in terms of being able to accomplish those ends.
So I feel like that's a fair portrayal of where they are and where they have been in terms of developing algorithms.
And it's not just, of course, Facebook.
It's almost every social media company out there that would like to do the same things.
Facebook just did it better.
And, as these things came to light, they either, changed them or apologized for them.
but the reality was if they hadn't come to light, they probably would still be in place right now.
there's very little reason for them to be apologetic when, you know, no one is complaining about it.
In my next hour, I'm going to talk about getting jobs as young adults and recent graduates.
So I kind of want to shift our conversation a little bit towards AI and work.
And I'm a big advocate for the idea that AI will actually provide more jobs than it takes away.
So I was wondering what you think: does AI actually create more jobs than it destroys, or is that just too optimistic?
It's going to destroy a lot of jobs; that's absolutely going to happen.
It's going to be things that are primarily repetitive, things that are easily automatable.
Think about back office data entry; the health care industry is a big one, because we've talked about, you know, the potential for a lot of job losses there.
It won't be doctors and nurses and front line folks.
It will be people sitting in the back office doing billing and coding for insurance and doing patient records and things of that nature.
So we need to be mindful that there will be a significant shift in terms of how people are employed and what industries they're employed in.
And I think about things that people are learning these days on university campuses, which very, very quickly are going to become outdated and obsolete in the era of AI.
That said, though, we need to start thinking about how we train people to number one, be very good with using AI, very competent at using AI.
And then number two, thinking about how that will fit within the future workforce.
So obviously, jobs that are very reliant on people skills, that's something that AI is probably not going to be able to replace in the near future.
And I think those are some of the safer areas, things that require skills that really do have a physical interaction side to them.
Those are other areas which are probably safe for now.
But when we get to all these white collar positions, things like software coders and other people of that nature, where Google is already using 25% AI-generated code right now within their products, those are the types of folks who really need to start doing some serious thinking about how AI is going to affect their future livelihood.
And so I would say it's about thinking about future-proofing, and thinking strategically about how you fit into this AI-powered future.
Yeah.
Thank you for mentioning the code.
I honestly do not know how I would pass my machine learning class if we did not have AI now.
And I hope my professor is not listening to this program right now.
So, we use, AI a lot, in terms of our decision making, both within work and school and outside of it.
But do you think the world is moving toward AI making all the major decisions, or do you think that will still be up to people?
In the near future and probably through the midterm, there's likely to be a lot of distrust and skepticism on AI's capabilities to make decisions.
And I think that's a good stance to have.
We should be skeptical.
We should be verifying everything that AI puts out there.
In the long term, as AI gets better, I really don't know.
Maybe we'll shift towards something where a lot of decision making is happening by AI, and we're just overseeing it to a much lesser degree, and it's just one, again, one of those things when we think about how AI comes to make these decisions and what responsibilities we give it, we need to really think about the pros and the cons here.
I tell you, one of the areas that worries me the most is military technology.
I'm a former Marine myself.
And, I have a good appreciation for the destructive power of military weaponry and the idea of AI, autonomous AI, making decisions with this type of weaponry just really frightens me.
And I think we should be really concerned about those type of applications and, lack of oversight within those type of, those systems.
That's another thing that I keep thinking about, actually: whether AI will be used more as a weapon or as a tool for peace and diplomacy.
You're probably going to run into both sides.
To be fair, it's just like in the world of hacking, you get your white hat and your black hat hackers.
and it really depends on what your motivations are and, what your morals and ethics are.
I would think that nation states will see AI as something that is a national security interest.
It's something that can be utilized to become cheaper, quicker, better than your rival nations, and it will absolutely be put to use in ways that could be destructive.
Ideally, you know, you'll have no choice except to engage with that type of behavior.
And hopefully we'll do it from a point of view where peace is always the primary goal.
And, you know, the idea is that having systems that can counter these bad actors will also be a deterrent.
Could AI lead to a new kind of geopolitical power shift, where the countries that lead in AI dictate global policy?
I think that's a very good observation.
That's something I'm actually doing a little bit of research on with, my colleague Sarah Lou over at Nazareth.
Right now, we are looking at how the dynamic will shift in terms of outsourcing, because during the late '90s and early 2000s, a lot of work could be offshored to countries like India or the Philippines, in the call center world, for example, where wages were lower and it was cheaper to outsource call centers there than to run them in the United States.
They will now be losing the advantage of cost effectiveness and competitiveness that they had previously, because AI can obviously do it even cheaper than the lowest-paid person could in a global economy.
So how will that affect their economies and their ability to grow and mature?
In terms of economics, we really do need to think long term; there's going to be some economic fallout, and policies need to be adjusted to prepare for that.
Because what happens when you have a country like the Philippines, for example, which is the largest call center economy in the world right now, and a big piece of that economy becomes irrelevant?
What do you do and how do you deal with that?
And it's not just, you know, developing economies, it's also developed economies where, you know, you've got information workers like here in the United States.
And, we're very much reliant on like the service sector.
All of these things really are, good candidates for, being automated.
And so we need to think about long term policy effects and how we prepare for that, and just what the, fallout is going to be, again, because I don't think people are doing a very good job of thinking of that right now.
We got an email from Tracy asking if ChatGPT is safe to use, and I would also like to add to this question: I was wondering if AI is becoming a tool for authoritarian governments to control populations.
So there's there's a lot right there.
Right?
So is it safe to use?
Generally speaking, yes.
There's not much damage it can do in its current form.
Certainly it may use some of the data that you feed it to train the model further; there are ways to opt out of that within ChatGPT and its competing platforms.
There's also the idea of like, you know, governments using it to reinforce their positions and their propaganda.
I look at DeepSeek R1 again.
It's definitely very, very favorable toward China in terms of its ideology, in terms of the narrative it wants to put out there on issues like Taiwan, or even going back to Tiananmen Square.
it just refuses to engage or it paints a glowing picture that favors one side over the other.
And I think that's just one example of where we're starting to see issues like that emerge.
So, another thing that I've been thinking about: one of the degrees I'm getting at the Simon Business School now is AI in business.
So I kind of like to look at people as AI managers in the future, but I also wonder, when AI becomes more powerful, will we need AI to monitor and regulate other AIs?
Yeah, I think that could be one approach to it.
Certainly.
What we're doing over at U of R right now, the initiative we're working on, is kind of in the same vein of creating AI that can become a watchdog for other AI.
And that's ultimately where we'd like to be, where we have multiple layers of protection, where we're not just trusting and giving disclaimers of, hey, the information from the AI may be wrong, so verify it yourself.
It would be good to have AI in there that's also acting as a form of check and balance for other AI.
And do you think AI models will soon be able to fact check themselves?
I think they can.
But you always come back to the issue of bias within the training data, and identifying that bias, I think, is a bigger challenge than the fact checking itself.
So we need to think about how do we ensure that the data that's going in there is as bias free as possible?
And, that's kind of really the key to getting to that point of being able to accurately fact check.
So, coming to the conclusion of our program, I was wondering, what's your biggest concern about where AI is headed?
And on the flip side, what excites you the most?
I would say my biggest concern is AI that's being developed with more of an eye towards profitability than long term sustainability and responsibility.
That certainly concerns me quite a bit.
And I think having conversations like this helps toward, you know, raising awareness.
But we also need to take action.
We can't be blind to the fact that AI is here to stay.
It's not going away.
So we need to play an active, or a proactive, role in helping to shape it into something that will be helpful for us as a species, as a society.
And so that's what worries me.
But on the flip side of that, I'm very, very encouraged and excited about medical uses of AI.
There's actually very little downside in using it for diagnostics and treatment.
Without, you know, the worry about some of the other issues you might have with more general-facing AI.
It's showing its capabilities, for example, within radiology and early breast cancer detection, increasing treatability rates to above 99.7%.
There's a lot of things that excite me within the medical field, and I really am eager to see how we put it to use to become something that helps people stay healthier and live longer.
And what's the worst-case scenario for an AI future that you actually think is possible?
Certainly, as a worst-case scenario, autonomous weapons systems worry me a lot.
If we think back to the Cold War, there are many documented instances where one cool head was what prevented us from going into a nuclear apocalypse, whether it was on the Soviet side or the American side.
There were a lot of people looking at systems and going, that doesn't make sense.
That doesn't seem right.
And, it really did save, literally save the world.
And if you're relying on completely automated systems, everyone here who's ever used a smartphone or a PC knows that systems make errors.
They crash.
They just do things that you don't understand and they don't explain themselves.
Well, imagine that with a nuclear arsenal at your disposal.
That would be something that certainly worries me quite a bit.
So, you mentioned, and we talked quite a bit about, kind of making a difference in the AI world and acknowledging that it's here to stay.
And I caught myself thinking a few days ago that I live in this bubble of a lot of AI professionals and my classmates, who are also learning to become AI professionals in the future.
But what are some small things that we can do on a daily basis to educate people on AI usage and kind of help them not be as afraid of it, but more excited?
I would say getting out there and using it, learning what its capabilities are, and even how to use it in daily situations that are low risk.
Situations like the other day, when I said, give me a recipe for some pasta sauce, and I used the voice mode so that while I was there cooking, it could tell me what I needed to do, and I would give it some feedback about what ingredients I had available.
And it was a really fluid conversation that made me more productive in the kitchen.
And the reality was that I think those low risk use cases like that are a really good way to engage with AI and start to learn what it is, start to peel away some of the hyperbole and some of the jargon around it.
Because even in the AI bubble that I live in, you know, everything tends to be very technical and very dry and all about performance and algorithms and things like that.
But when we start to think about AI and what it means as a practical, functional tool for us day to day and how it can actually improve our quality of life, we're not going to know that unless we get out there and we start playing around with it and just learning what it is, and strip away some of the mystique and the complexity of it.
What are some other programs that you could recommend people try out, for those who do not have any experience with AI?
I would say Suno is a program I would recommend.
It's a generative AI, but it creates music.
You just give it an idea, you know, I want to create a song about being on Connections on Friday, and I want it to be within a pop dance format, and within seconds you'll have two versions of a song that sound very, very good: lyrics, vocals, backing music.
It's, really fun to play with and it really excites people when they get in there and they start creating their own music.
Turning your thoughts into actual songs is something that most people are never able to do within their life, but this is an AI application that just does something really neat, and it's available to use for free to a certain extent.
I would say play around with that.
I would also, of course, stay with the mainstream ones like Claude and ChatGPT, and I'd even try Grok, Elon Musk's new AI, because from my early experimenting with it, it's actually very, very capable and very fast.
And so there's some good aspects to it as well.
I would just get out there and get my hands dirty with all the AI that I could, but Suno is definitely one that people tend to like quite a bit.
What's your favorite gen AI?
Personally, I tend to spend most of my time in ChatGPT just because it has multiple capabilities in terms of input and output.
There are some other things that I've been playing around with a little bit here, such as a program called Lovable, which is an application programming environment where you can go in and describe an application or a program, and it will essentially create the application for you.
And it's something that is a game changer when it comes to software engineering.
so those are a couple of my favorite ones right now.
Thank you so much for sharing your perspective on AI.
It's been a great discussion, and I was joined here by Jeffrey Allen, PhD, director of the Institute for Responsible Technology and assistant professor in the School of Business and Leadership at Nazareth University.
Thanks again, Jeff.
And I'm Ksenia Kolarik, sitting in for Evan Dawson.
Thank you for listening to member supported public radio.
This program is a production of WXXI Public Radio.
The views expressed do not necessarily represent those of this station.
Its staff, management, or underwriters.
The broadcast is meant for the private use of our audience.
Any rebroadcast or use in another medium without express written consent of WXXI is strictly prohibited.
Connections with Evan Dawson is available as a podcast.
Just click on the Connections link at WXXInews.org.
Connections with Evan Dawson is a local public television program presented by WXXI