Connections with Evan Dawson
Is predictive AI humanity’s crystal ball?
12/9/2025 | 52m 18sVideo has Closed Captions
How predictive AI shapes daily life, from UPS route planning to its impact on you.
Predictive AI is becoming more advanced, and big companies are already using it daily. For instance, UPS uses predictive AI to map out the most efficient routes for drivers. What does it all mean for you and your life? We talk about it with our guest
Problems playing video? | Closed Captioning Feedback
Problems playing video? | Closed Captioning Feedback
Connections with Evan Dawson is a local public television program presented by WXXI
Connections with Evan Dawson
Is predictive AI humanity’s crystal ball?
12/9/2025 | 52m 18sVideo has Closed Captions
Predictive AI is becoming more advanced, and big companies are already using it daily. For instance, UPS uses predictive AI to map out the most efficient routes for drivers. What does it all mean for you and your life? We talk about it with our guest
Problems playing video? | Closed Captioning Feedback
How to Watch Connections with Evan Dawson
Connections with Evan Dawson is available to stream on pbs.org and the free PBS App, available on iPhone, Apple TV, Android TV, Android smartphones, Amazon Fire TV, Amazon Fire Tablet, Roku, Samsung Smart TV, and Vizio.
Providing Support for PBS.org
Learn Moreabout PBS online sponsorship>> From WXXI News.
This is Connections.
I'm Evan Dawson.
Our connection this hour was made in your office or at your company, or your school, or your nonprofit.
Workplaces everywhere are trying to figure out whether and how to integrate A.I.
into their workflow.
By the end of 2024.
One study found that nearly 40% of Americans said they were using generative A.I.
in their lives, and nearly 30% were using it at work.
That was a year ago.
Can you imagine what those numbers are?
Today they have to be rising.
And the World Economic Forum recently said that the skills needed for workers are likely to change by 70% in the next five years, 70% by 2030.
Most of that is because of A.I., but surveys find that Americans are deeply suspicious of A.I.
They tend to think that A.I.
won't help them in their jobs.
It will eliminate their jobs.
They don't trust their employers to use A.I.
to make the workplace better.
So what should we do about that?
You might know that I am among the suspicious Americans, but I want very badly to be wrong.
And certainly there's no question that we've got a choice in how we use A.I.
We've got a choice with how we use it as a culture, how we use it at work, what we want to do with it.
There's no question that predictive A.I.
is already a force in health care, finance, economics, probably government.
My guest this hour believes that we can get to a point when A.I.
essentially becomes invisible, blending into our lives without taking over.
Kind of like email and other aspects.
That used to be a big deal, but now just blend into the background.
Let's talk about it with John Laurie, who is the president of Cause and Effect Strategy in Rochester.
Welcome.
Thank you for being here.
>> Thank you very much for having me.
>> And for those watching on YouTube, I'm dressed like it's a very snowy day in John's, got great threads on.
So you're outclassing me today.
Can you tell people what cause and effect strategy is?
First of all?
>> Sure.
Cause and effect strategy is a data and analytics services firm that helps clients use data to make better decisions.
And create more success in their business.
>> Okay.
And so you've got clients here around the country maybe.
>> Yes we do.
We have some fun ones here locally that we enjoy working with.
some ones, you know, as far as across the pond as well.
>> So what do they call you to do?
They call John Loury to say, okay, our company needs to do X. We need you.
What is it?
>> Well, first of all, they don't call John Loury.
>> Okay?
They call.
Yeah.
>> They call.
Cause and effect strategy.
We have an amazing team of almost 35 people that are extremely talented.
You know, in the space of data architecture, data engineering, analytics and A.I., it could be as simple as, you know, we have a bunch of data.
We have a problem.
Can you help us use this data to give us an additional perspective on how we might solve it?
It could be we're using these tools and technologies.
We don't feel like we're getting enough out of them.
We'd like you to help us streamline the processes and help us get more out of the data and the technology that we're currently using.
>> Okay.
And are they asking specifically for you to bring A.I.
into their workspace?
>> Yes they are.
>> Okay.
how long have you been doing that?
>> working in this space?
We actually started Cause and Effect in 2015.
So it's been.
>> but in 2015, you weren't.
I mean, were you an A.I.
company in 2015?
>> We were not.
It was the idea of being a data company in 2015 was pretty foreign.
I can remember, you know, making some bizdev calls, you know, two, two people, myself and one of my partners in a room with no windows a couple of fold out tables and laptops, trying to figure it out and making those first bizdev calls, talking about helping people to use data to make better decisions was was rather difficult.
>> When did A.I.
enter the picture for you?
>> For us, probably three years ago.
and even more so in the last 18 months as tools and technologies have trickled their way down from highly sophisticated you know, only accessible to the fortune 100, fortune 500 a lot has been done.
And that's that's how I know that this is moving in a mainstream manner, is the tools and technologies that are available for Main Street are now here.
>> Yeah.
And so if I am someone at a company who's thinking about hiring you guys, and I want to really understand, but I'm, I'm not a tech person, I want you to bring it down to layman's terms.
Tell me in general, one of the really cool things that you think can be done that would make my workflow better.
>> Well, to start out, we sort of turned the tables on that.
we really want to start with learning more about the business and where your business is going.
what are the initiatives?
What are the growth strategies?
What are growth areas for your company that you're looking to get into so much like a traditional consultant might do some discovery and understanding your business.
That's where it starts.
So first we need to understand the use cases that A.I.
may potentially be a fit for.
But then in parallel, a very important parallel is data.
Where are you with data?
What maturity level?
Where is it located?
How much of it do you have?
What state is it in?
So what what we do as a firm is, is we kind of have two two sides of the paper.
The first side is the data side, and the other side is the business use case side.
Because I think that's one of the issues that people are having today is they're just throwing A.I.
at something because they want to get a star on the wall or plant a flag.
>> Or they think they're supposed to use A.I.
So therefore now we're using A.I.
>> There's a rush to market, and it doesn't necessarily have anything to do with what the success might be, what the adoption might be, what the guardrails are.
and also where do we go next?
I really appreciate your introduction about A.I.
being invisible, because when we stop talking about A.I.
as an initiative, like it's not a project, it's an initiative, it's going to be here.
It's not going anywhere.
It's going to be with us from now until whenever.
Just like, just like a lot of the other business tools that we have.
So it's not a project, it's an initiative.
And once we start understanding that it is not just something to be temporarily funded it's something that's going to be front and center for a long time.
>> Do you really think we can get to a point where, like, it's a strange question to say, like, how does email affect you?
Like, what are you talking about?
Email is just a tool.
Just a tool.
you really think A.I.
can get there?
>> Yes.
And it's already it's already happening.
>> Okay.
>> Did you use you use Microsoft Word today?
>> Yes, I just did.
>> did you write an email today on your.
>> I did, I did.
>> And did it attempt to finish your sentence for you?
>> It did.
>> All right.
There you go.
You're already using it.
>> I didn't let it finish the sentence.
I finished the sentence.
Just want you to know that.
>> And I actually do the same.
>> Okay?
But those are the the predictive text that comes with these language models is one thing.
It's another to say A.I.
can be integrated in a way that we don't think of it as a huge disrupter.
It's just in our lives and at the same time, we're we have a wide range of ideas that people in tech have, even people in tech about.
Are we going to have jobs in the future?
And do we want jobs in the future?
Right?
I mean I was listening, let's listen to I was listening to where do we hear Ellen on this podcast?
Megan Mack where was this?
We'll get the source of it.
But this is Elon Musk at a recent podcast just last week talking about where he thinks we are going with A.I.
Let's listen.
>> My prediction is less than in less than 20 years.
Working will be optional.
Working at all will be optional.
Like a hobby.
Pretty much.
>> And that would be because of increased productivity, meaning people do not have to work.
>> They don't have to.
I mean, look, this obviously people can play this back in 20 years and say, look, Elon made this ridiculous prediction and it's not true, but I think it will turn out to be true that in less than 20 years, it may be even as little as, I don't know, 10 or 15 years.
the advancements in A.I.
and robotics will bring us to the point where working is optional.
in the same way that like, say, you could you can grow your own vegetables in your garden, or you could go to the store and buy vegetables, you know, it's much harder to grow your own vegetables, but but, you know, some people like to grow their vegetables, which is fine, you know?
but it'll be optional in that way, is my prediction.
>> That's Elon Musk just a few days ago on the people by WTF podcast.
And first of all, what do you think they're John Loury when you hear that.?
>> I have some mixed feelings about that statement.
I think from a technology perspective, I think there will be a tremendous amount possible in the next 15 to 20 years.
I think when you talk about it from a social perspective, an economic perspective he didn't address those factors, as in, like, you know, money will still need to be in our bank accounts.
We'll still need to be able to purchase food.
So somehow we'll have to earn that money.
And usually it's through work.
but again, Elon is a lightning rod.
you make statements, you can argue multiple times a day that if you pulled them apart, there are kernels of truth.
Because of his exposure to advanced technology and R&D that he has, that very few people in the entire world have.
but from an application perspective, a day to day mainstream perspective, I would say that Elon is is a little bit detached.
>> I think that I think that's a good analysis, and I just want to set aside because it's for a different conversation on a different day here.
But if it is achieved what he is talking about, where there's essentially no required jobs, that any work is a choice, we'd have to entirely re-order how we think of currency money.
who gets it?
How much?
How it's distributed.
Social services support, healthcare, all of it.
And that's a whole conversation to be set aside.
Although I think John Loury is saying that's part of the problem of that statement is it's detached from just how significant a transformation it's seismic.
Right?
>> Yeah.
The practicality of how we live, work, breathe, interact, move would be implicated by that statement.
>> Yeah.
And again, we're talking 10 to 20 years here.
I've been hosting this program coming up on 12 years.
We're going to be in the middle of this.
But the one aspect that you seem to agree with is that it is possible that we are 1 to 2 decades away from at least achieving a level of A.I.
success, where work could be optional, assuming people decided they wanted a society like that, that we have massive change coming, and that the human input of work won't be required in many, many cases.
>> You know, input is an interesting statement.
Input is an thought or input, as in physically.
>> I don't know.
>> You know.
>> You tell me you're the the A.I.
guy.
>> So I you know, I do think that it's difficult, difficult to to really put my finger on it.
But the concept of sentient A.I.
and truly artificial intelligence, that's thinking on its own.
Yeah, it is coming.
It would be silly to say that that that's not where this is headed.
It is.
I am unaware.
I have not thought enough about or collaborated enough with people in the field to talk about truly, what, two decades from now this will look like and what the the real societal implications are.
And again, bringing that back to to today, you know, understanding the guardrails, understanding ethical, you know, challenges around the use of data, those are very real things that need to be discussed in order to be successful in 2025.
>> Yeah.
Part of why I bring this up is just because in a moment, we're going to get more into how you think individual workers and how you think companies can move into the present, let alone the future.
Right?
You know, sometimes with the help of people like you, and there's a lot of people feeling like we're kind of rushing so quickly that if they don't have someone to help them, it is hard to envision what the best next steps are.
If you're 18 years old, trying to figure out what your career is going to be, that is not an easy moment.
>> Oh, I agree, I have three children, six, four and two, six.
>> Four and two.
So are you worried about their job future?
>> I am, I am okay, I am there are universities that are at the that are trying to get out in front of this.
I'm a very proud University of Buffalo.
bachelor's master's alum.
And what UB is actually launching this fall, this upcoming fall will be a X degrees.
So.
>> Yeah, we're hearing more about this.
>> Yeah.
It's a very exciting concept where you're going to learn all the, the traditional stuff, the important stuff, the historical stuff, but also how to incorporate A.I.
tools within the context of that field.
>> So last thing on the Elon quote, I don't think most people want a society where.
>> And maybe this sounds really advantaged or detached myself, because we have a we've got a lot of income inequality.
We've got a lot of people who work multiple jobs and are still hurting a lot of people who feel like they are not getting what they should get out of their many, many hours of work a week.
And would love perhaps a future where they don't have to work 80 hours just to try to figure out how to get their family through the week.
I get that I do, but I don't think we want the way Elon and some of his colleagues talk about the future is detached in a way that I think misses the idea that people want purpose, that people want to feel connected and contributing and I'm not saying a 9 to 5 has to be part of that, but I don't think the future that he is describing is what most people want.
So if if you come into a company and the company is saying, hey, we need John Loury and we need cause and effect strategy, are they calling you to say, do they call you and say you need to help us cut the workforce?
We need A.I.
to cut the workforce?
Does that conversation happen?
>> Absolutely not.
>> It doesn't.
>> That has not happened as of yet.
>> You got dozens of clients and nobody has said to you, John, we just got we got to cut the workforce 20%.
Nope.
Okay.
What happens if they ask tell you that?
>> Well, I you know, we are of the view that A.I.
at this point is to help augment.
It's a tool, you know, just like the internet was a tool, just like the cotton gin was a tool.
you know, these things are tools that we bring on, and we have to figure out how to change our process and our people around them.
And one of the, the the way that we look at things from a business perspective is people, process and technology.
Those are the three things that we always need to address.
when we're when we're in an engagement and talking about things that are around how we're things going to be implemented and what the impact is.
So yes, things will change.
I mean we're not that old to think about life before the internet.
>> I 100% I'm with you there, and I'm not trying to be cynical about what companies are trying to do.
I'm just curious to know what your disposition would be if somebody said, I really do need to cut the workforce.
>> Oh, that's a good that's a cutting right to it.
Yeah, I think I think just the the nature of the type of people that we are at cause and effect, the, the sort of grassroots and startup culture that we've had and that we've, we've grown, especially in this community.
you know, I would push, I would push back on that.
And I, you know, as we do with a lot of our clients, just because somebody offers up an idea they're paying you for it doesn't mean that you don't, you know, give that's what they're paying you for is your perspective and your experience.
And we would share that.
We would share first, you know, how how this could augment what what you're doing.
might it impact hiring in the future?
Sure.
And that is a conversation that we do have.
So I don't want to be totally totally cut off the perspective that it does have an impact on jobs because it can you know, we have three billing specialists today, you know, bringing on some technology like this may mean that we only hire one next year instead of three.
That is a very real perspective.
But going at it from an immediate cost cutting there is no silver bullet.
A.I.
is not a silver bullet today.
And to be able to guarantee that you're going to get some type of results like that is a stretch.
>> Sure.
Do you think it is possible that A.I.
will create more jobs than it than it takes away in the future?
>> I'm going to try not to evade here, and I'm going to say that it is going to create different jobs.
100% will create.
It is already creating different jobs.
>> Yeah, for sure, but sometimes people bring you bring up the tools argument and sometimes people bring up horse and buggies and cars and the automobile industry did put a lot of horse and buggy makers out the the buggy makers out of work, but a lot of them found jobs in an automobile industry that grew much bigger than the buggy industry and employed a lot of people.
>> Right.
And then the robots came.
>> Sure.
And then automation came.
But when people point to these developments, they often that's an example where they say, well, look, that was a disruptive change, but it actually led to more jobs, at least for a while, or probably overall.
you don't seem to be saying that you think A.I.
will lead to more jobs.
It will create some new and different jobs.
But and it may reduce the number of certain jobs.
>> I think whenever you're in a situation where automation is happening, where processes are getting faster and you are able to do more with less, I think that that's a very real possibility that certain types of jobs could be affected.
>> I see the phone ringing and listeners, I want to hear from you as we as we talk about if A.I.
is integrated at work, I don't think John can fully diagnose your company over the phone here.
Although we can talk in general about how to think better as a worker, as a company, et cetera.
and that's part of what John and cause and effect strategy do.
And we're trying to have a realistic conversation about where we're going as a work sort of as an American workforce, where the world is going with A.I.
It's 844295 talk.
If you want to call the program, it's toll free.
8442958255263 WXXI.
If you're in Rochester.
2639994, email the program Connections at wxxi.org.
If you're watching on the WXXI News YouTube channel, you can join the chat there.
So before I grab a first phone call without mentioning a specific client, can you tell me a scenario where you feel like, hey, I'm glad they called us because things are more efficient.
Things are humming and they are happy, and this is how it's happening.
>> Sure.
>> Okay.
>> Sure.
you know, we work with a customer that does a lot of medical trials.
and they're a company that helps execute medical trials for other companies.
So they're kind of a third party executor of of these things.
And this company makes their living based on contracting out.
So they receive a lot of requests for, quote, requests for information.
They have to develop proposals.
So the challenge is that they've been around a while.
They have thousands of trials that they've executed on behalf of their clients, thousands of trials.
They have a tremendous amount of data that they could leverage when responding to these requests for proposals.
No one person can search through terabytes of information on trials and data and outcomes.
So by creating an agent and connecting that agent to multiple sources of data, CRM data existing proposal data that's in SharePoint other data that they have stored giving them access to that data in a way that they can extract meaningful information is a game changer.
What would have taken somebody or a team of people days or weeks to do can now be done by a single person and done quickly, and I mean quickly.
>> Using a A.I.
>> Yep.
Using a genetic A.I.
>> Okay.
And then that person who uses A.I.
as a tool can get that information a lot more efficiently.
And then you're on with your trial or you're making changes, et cetera.
>> I wouldn't, I wouldn't say that.
So we have to take it a step further.
So the the end result of this business use case is a response to a company who is attempting to contract a company to do this service.
So not only do you have to collect the information on past trials and past services, you then have to develop that proposal, which is what the second agent does.
So now you have an agent that can help find the relevant information, the relevant historical information.
Then you have another agent that can then help you draft that proposal.
in a way that in a speed that is unmatched from an individual perspective.
>> Okay, Keith, before I grab your phone call, when John mentions agents, he's talking about A.I.
>> Yes, I apologize.
>> Well, no, no, it's fine.
I mean, that's the term.
And people are going to hear that a lot.
Yes.
but in the case of A.I., you here working with an agent?
We're talking A.I.
And my ignorance about A.I.
You know how accurate it can be.
One of the things that we talk about is the importance of human beings ultimately being in control of either a final proposal, a product, et cetera., 100%.
So I want to get your perspective on that, because this is going to be the dumbest example, but bear with me here.
so our family likes to watch jeopardy to wind down at night and, you know for some reason, we got back into the Ken Jennings era and we're like, that guy won 74 games in a row.
Like, this is like watching all of Seinfeld all the way through.
We're watching the Ken Jennings shows, and I wanted to know, like a few episodes in, I'm like, did this guy was he ever trailing going into Final Jeopardy?
Because I know how he lost.
He had a four zero lead in the episode.
He lost.
He got the question wrong.
The person got the question right, and the the A.I.
tool that I was using said yes.
And the game that he lost, he was trailing because he was he was ahead 14,000 to 10,000.
But then his opponent got the question right and took the lead.
And then he was trailing.
Well, he wasn't trailing.
It was an incorrect answer.
I still actually don't know if he was ever trailing, but it was wrong.
But it was very confident.
Yes.
And that's the first time I've seen that recently where I went, oh wow, this A.I.
totally doesn't understand the question.
and got it wrong.
And if I would have just taken it and copy pasted it, I'd have been wrong.
So you always want human beings, I think, to to scan and to understand and to collect and analyze proposals.
I would think especially in medical trials or whatever you're working with.
>> That is 100% true.
We would never suggest someone take what A.I.
generates as the gospel or, you know, immediately take that and insert it someplace else.
A.I.
is wrong.
There are things that happen when you know those llms are processing.
It does kind of make stuff up, for lack of a better term.
>> And hallucinating is the.
>> Term.
That's the industry term.
Hallucinating.
I we're breaking out a genetic hallucinating.
There you go.
We're going to get people educated on this one way or another.
>> And so ultimately, what do you tell clients about when they when they use agents and they're able to work much quicker to to work through data.
How do they then go to the next step?
They want to get a proposal out.
You got a different agent, a different piece of A.I.
that's going to work with them on a proposal, sure.
But ultimately they've still got to see that, finalize it and sort of approve it.
Right.
>> Well, that validation piece is is critical.
that validation piece, not only from, you know, a, you know, words flow style, but also from a data perspective, it needs to be validated.
we often before we launch these things, you know, there is a burn in process where you're asking the questions that you already know the answer to.
You know, you're trying to stump it.
It has to learn.
It starts out as a child, and it makes a lot of mistakes.
And over time it gets better and better.
We add knowledge bases to the A.I.
to make it smarter.
and, you know, after we take this call, I can share with you an experience that I recently had with A.I.
and how we're implementing it.
and I had I had to look at myself in the mirror and realize that that my job is probably going to change a little bit.
Oh, boy.
Next year or.
>> Two, we'll get to that in a second.
I will say my concern is that there are there will be companies that work with the cause and effect strategies out there, and they're well trained and they're going to use A.I., but they're going to have human beings collate, collect, craft, tailor, edit, revise and put the final products out.
But there may be companies that go, nah, whatever.
Whatever this agent gives us is good enough.
Couldn't that happen?
>> Well, there might be lawyers that maybe have gone in front of courts in New York and done just that thing, and.
>> Oh boy.
>> Got their hands slapped pretty bad because of it.
>> I want that to become I would like that not to be the future that we're going into.
That's part of what I worry about.
Keith on the phone first.
Hey, Keith.
Go ahead.
>> Kevin, how's everybody feeling?
>> Good.
Very good.
>> Good.
>> Way back in the dark ages.
About 20 years ago, whenever I got or bought something, I filled out the warranty card.
But I always put a different letter in for my middle name.
So eight weeks later, I would always get junk mail addressed to me with the different middle name.
For example, if I put a B, that means I bought something at Best Buy and they filled my information and I got a junk mail addressed to me with my middle initial B. So the information that I supplied on the warranty card was wrong.
I would lie, I would exaggerate my income, job description, everything and I would get.
I would get a lot of stuff.
Fast forward now what A.I.
does, it takes available information and puts it together in a form that you can understand.
But what happens if somebody figures out how to corrupt publicly available information, and then you lose the trust?
It's going to happen.
>> okay.
Hang there Keith.
First thoughts on that, John?
>> Well, I think trust is critical.
And again, that's that's why there needs to be guardrails in place and processes in place and people in place to make sure that things like that aren't happening.
A.I.
doesn't happen in a vacuum.
You know, I think that's something that I think people have trouble visualizing what A.I.
is and how it works.
There isn't like a supercomputer that sits in a room.
there isn't AT2 thousand, you know, walking around, you know, that we have to hide from these are algorithms that you execute from a computer terminal.
You hit return, it executes, it comes back with information.
You check that information.
we do other.
You know, I simplified the, you know, the review and check process, but that stuff needs to happen.
And additionally around, you know, your, your data, your personal data, it's a great that we live in a country where not one source owns all of our data.
You know, most likely the data that you're talking about comes from one of the major credit bureaus.
And those credit bureaus do share information, and they do sell information.
But there isn't any one single source that can be corrupted.
at least at this time.
>> Keith.
>> But it it's possible.
And if somebody starts corrupting the publicly available data and they could do it extremely fast, because that's what makes A.I.
work is the speed that computers function now versus old.
286 that I used to have.
So you could not find out that the information was wrong until it was late.
I mean, just for example, if I needed a recipe for how to make goulash, okay, and somebody was able to corrupt it, I'm not going to find out until after I make the goulash, after I put it in a half a cup of sugar.
You don't put sugar.
On.
I mean, that's a easy example because all the recipes are publicly available.
But let's say you you get somebody who has a nefarious reason to do it, and now you have a it's like owning an encyclopedia where all the information is wrong.
>> Well, I'm glad you brought that up.
>> There's no plan B.
>> I'm glad you brought that up because that's.
Are you familiar with Wikipedia?
Keith?
>> Yeah.
So that people can put in post misinformation on that until it gets checked again 100%.
>> And there there are teams of people, especially at universities, that that do patrol Wikipedia and do provide that oversight when you're using ChatGPT, for example, you have the opportunity to tell it that it gave a bad response.
You it it will learn.
>> But you don't know if it's bad until you take the information and try to apply it.
And you know your test three weeks later is totally wrong.
You can get instantly bad information, but you may not for but you may not know.
It's bad for a measurable time frame.
>> Well, sure.
I mean, look.
at look at the internet in general though, Keith, when you do an internet search on something I think we've all been taught and told time and time again that you can't trust things just generally on social media or on the internet.
This is this is the next wave of that.
This is how humans interact with technology and the challenges that happen when a technology is first introduced.
You know, wild, wild West, and then over time, we begin to adopt rules and laws and, and norms around these technologies.
>> I think Keith's broader point, I think, is correct in that I take your point, John.
We're going to see norms change.
We're going to see a different rhythm that people get into with making sure things are verifiable.
But there's going to be some high profile mistakes, I think.
>> Oh, look at all the data breaches that we've had in the last five years, some of the some of the most trusted companies in the world have had data breaches where Social Security numbers and credit card numbers, that that is going to happen and that is sort of the culture around technology that we live in, that it's okay to break some eggs and it from the outside, we all say, yeah, it's okay for for the betterment of man for better and women and, and community and and family.
but right up until you're one of the eggs that gets broken, you know, and and that Keith is right.
I agree from that perspective, there are things that are going to happen, and they may shock and surprise and individuals may be impacted by it.
>> When we come back from our only break, I'm going to ask John to tell that story of his own experience.
If he doesn't mind.
John Laurie is the co-founder and president of Cause and Effect Strategy, and they've got dozens of clients who they work with regarding A.I., A.I., integration into their work, data collection, all kinds of things that really are happening.
Now.
This is not sort of theoretical in the future anymore.
And we're talking to John about where the workforce is going, how workers can prepare.
We've got a really good question from a listener about what John's going to tell his own children about what to study and how to prepare for that.
Not that he's got a perfect answer, but we'll come back after this only break and hit those topics next.
Coming up in our second hour, my guest is from Pen America, who works on free expression around the world, and we're talking about writers, artists, poets, journalists being imprisoned in places around the world simply for expressing themselves, for writing, for uncovering journalistic truths and more.
And yes, there is concern that it's on the rise in the United States, too.
We're going to talk about that next hour.
>> Support for your public radio station comes from our members and from Mary Cariola center, supporting residents to become active members of the community, from developing life skills to gaining independence.
Mary Cariola center Transforming lives of people with disabilities.
More online at.
Mary Cariola.
>> Welcome back to Connections.
If there's one place that I think I want everyone to agree, I think about what Jeffrey Allen from Nazareth University told us.
and they are designing programs for students on A.I., ethics and A.I.
application of A.I.
And Jeffrey Allen, someone who comes from the tech sector.
But he did say he also understands and comes from a military background where the the the wider you use A.I., the concern you have about the mistakes that Keith's talking about.
And I don't want any A.I.
program in charge of anybody's nuclear arsenal.
I just want to put that out there.
You good with that, John?
>> I'm good with.
>> That, too.
That's a good guardrail.
All right, well, we solve one problem today.
John Loury.
if you're just joining us, John, tell people where they can learn more about cause and effect strategy online.
>> cause and effect strategy.
Com would be a great place.
we're also on major social media channels especially LinkedIn.
>> so what was your own experience that made you gave you a little bit of pause about about work?
>> Well you know, we talked about this concept of a proposal agent.
So we built one.
And, in the last month, we've been training it, and I figured it was time for me to go head to head against that proposal.
Agent.
>> What do you mean by that?
>> I was going to draft a proposal, and then I was going to enable this agent to draft the proposal, and we would do a bake off.
>> Oh, boy.
And.
>> From a content perspective, I lost the bake Off.
>> Oh, boy.
>> I forgot to address two key deliverables in a proposal.
I listed them, but I didn't expand on them.
The agent did, however, the look of the proposal that the agent produced was pretty ugly.
So from a branding perspective, from a stylistic perspective, I still have a job.
>> Okay, but it's close.
>> I can say that I, you know, I pulled elements from the proposal that A.I.
created and I added them into my proposal.
I, you know, added, you know, stylistically, words to make it flow.
but from a content perspective, it had zeroed in on some of these deliverables and what it was going to take to create them.
And I had literally glossed over it, and I think it was because I was working on it and went and worked on something else, and then came back to it.
so, yeah the future of how we do this work is going to change.
>> How good was your prompt for the agent?
>> Very good.
>> Yeah.
I mean, this has become a refrain.
We hear it from a lot of people who work in spaces that you work in, but it's garbage in, garbage out, good prompts.
The better the prompt, the better the outcome.
>> I'd even go a step back.
it is as good as the data you put into it, and then the prompt that you give it.
Okay, so, you know, building these knowledge bases and or connecting these agents to good data not perfect data.
I don't want people to think that, you know, we can't get into A.I.
because our data is not perfect.
However, understanding where it is and how it integrates with other systems is critical.
And ultimately, like you said, garbage in, garbage out.
But I like to frame it a different way.
if A.I., the potential of A.I.
is a Ferrari, if you give it 88 octane.
How's that for are you going to perform?
It's going to sputter.
You give it that 99 octane and it's going to fly.
And that's why Cause and effect is continuing to grow as a company, because we help put companies in a position to be successful with A.I., not from a not only from a technical standpoint, but from a people and process standpoint.
>> What common mistakes are companies making trying to rush into using A.I.
>> Not being thoughtful about their use cases, and how to measure success and what adoption looks like.
Because you rush to the table with a solution, an A.I.
enabled solution, and maybe it does impact a person's job or three quarters of the things that they had previously been doing can now be automated.
Do you think that person's honestly going to adopt the tool?
You know, there is an element of self-preservation that we all have to deal with, that I am dealing with.
My company is dealing with.
And so it's about how can we be thoughtful about integrating these things from a tool perspective instead of coming at it from, you know, just I need to get something out the door.
I need to plant a flag for my board.
within A.I.
>> Let me work through some other emails from listeners.
This is from c y c y says so A.I.
helps you analyze data more efficiently, but can you use it in a way to tell you what type of data you should be collecting to better understand whatever it is you're trying to accomplish?
For example, if you're trying to improve some sort of assembly line efficiency, can I tell you what sort of information you need to have in order to see where the flaws are in your current setup?
Yes.
>> That is a great use case for A.I.
That is, that that is the type of thing that even some of the existing llms on the market can help you.
from a broad perspective, understand what those factors are.
If we're able to take even meaningful and relevant data to that business.
Absolutely.
It would be able to expose and bubble up where some of those gaps might be.
>> okay.
Continuing down the line with some questions here, Rick.
Thank you.
Cy.
Smart question though, wasn't it?
>> Yes, it.
>> Was, because you got to know what you're looking for and how to train for what you're looking for.
>> Yes.
And that again, you know, moves towards that idea of A.I.
as a tool that to help us with those blind spots, to help me with, you know, the half created proposal that I walked away from and came back to and, you know, forgot to expand on those two things and that that is a great use case.
>> Rick writes to say, Evan, your guest reminds us of the various technologies that were tools that allowed for advancement of the human enterprise.
He mentioned the cotton gin, and my first thought is that the cotton gin made human slavery economically viable.
So I wonder, how does your guest understand the incorporation of moral values or human values in A.I.?
No, technology is morally good or bad.
It is made good or bad by how it is used.
So how does A.I.
get used in ways that would be considered morally positive?
>> Well, I think with any tool and I don't want to certainly don't want to take this a political and a political direction.
But, you know, gun safety you know, firearms in many cases are tools for certain populations.
And in other populations they are not and with any tool, there needs to be both laws and norms that need to be established.
And again, it's, it's a, it's a real challenge for me being someone who works in this industry and, and who is trying to do things the right way, the moral way.
You know, when you do hear of, instances where things haven't been done, the quote, unquote right way.
but it's, it's it's going to be like that in any industry, in any tool.
It doesn't make it right.
But what it takes is a really thoughtful group of people.
in, in an industry to come together and create with those norms, with those laws, what that legislation looks like.
So then, so that it can be integrated in a safe, a manner, and as ethically and morally positive manner as can possibly be.
>> I think it I share your concern about and I share Rick's concern about making sure we are asking questions about how we are using these tools.
It wasn't that long ago that Sam Altman was on the list of people who thought, maybe we should have a six month pause in A.I.
That was always sort of not really a viable idea for a lot of different reasons.
The competition across countries, the competition across companies.
But Sam Altman was on the list of of signatories to that idea.
And just a few years later, he's out there boasting that their company is going to get erotic A.I.
out faster than everybody else.
Whoa.
It's a lot changed.
>> That.
We could have another show to talk about how the adult industry has has pushed technology, because it has been a driver and a lot of interesting technologies that we even use today.
>> Yeah, I just I mean, again, totally separate conversation, but we're going to get lonelier.
And I don't think it's going to make us feel better.
thank you for that.
Rick.
Matt.
Right.
Matthew says, curious on John's thoughts of the opposite situation where instead of losing your job to A.I., I am more concerned I am not working alongside A.I.
enough and will be left behind in the job market to people in other companies that are using A.I.
My company is not implementing much A.I., at least at the site level, possibly at the global level.
It's from Matthew.
>> Well, Matthew, I've had to look at myself in the mirror and not only look at me and the skills that I have as the president of a of, I guess, a mature startup today, I'm proud to say that but also the direction that my company is heading.
Look at the services that we offered ten years ago and the services that we offer today.
They have taken lefts and rights along the way.
We've we're still a services firm.
We started out focused on marketing analytics.
And that's changed to a more broad view.
We focused a lot on data architecture and business intelligence.
and now in the last two and a half years, there's been a major pivot towards A.I.
Once the technologies have become, you know, available more widely.
so that is something that I think makes a lot of sense.
I would encourage people to educate themselves on, you know, how how to augment some of the tasks that they're doing.
especially if they, you know, perceive those tasks to be more rudimentary, more, you know, repeatable.
Those are the things, you know, from a every day perspective, especially those who, you know, sit behind a desk.
Any of those repetitive tasks, many of them can be subject to some form of A.I.
to help support your being able to be a subject matter expert.
I think that's the transition.
And the question about my kids if we want to.
>> Yeah.
So that was from Kevin.
So it ties in nicely.
What will your guests tell his children about what they would need to be studying for the future of their I assume for their careers?
Yeah.
>> The number one thing is to be critical thinkers.
I think critical thinkers in any field are valuable.
The idea that, you know, someone is going to be judged based on a repetitive task, I think you talk, we go back to Elon and where the future is really headed.
I think a lot of these repetitive tasks, whether that repetitive task is reviewing an invoice or plowing a field, I think those are the broad types of implications that we're talking about.
However, knowing where to plow that field or knowing that one vendor pays differently than another.
Yes, the A.I.
will support the data that we provide it to make those decisions, but there are certain instinctual decisions.
There's decisions based on experience that a person develops that at this point in time cannot be cannot be replaced by A.I.
you know, another use case for A.I.
that we're pursuing at cause and effect is being able to understand all of our employees skills.
So with 34 people knowing what their their best skill is possible by our management team, but understanding what the fifth or sixth best thing that they do or skill that they have, it impossible.
So what we're doing now is building an agent that will help us collate individual skills.
We're going to use resume data.
We're going to use LinkedIn data.
We're going to use data that from an HR skills tool that people will be able to answer questions 1 to 10 on their performance level of hard skills, soft skills, software tools.
And we're going to combine all that together to be able to understand, you know, who are our top five people or who who would be good at at a given project or integration.
And then phase two of that will be taking our capacity planning tool and layering that over the top of it, being able to say who at this moment would be great for this project and has the time to work on it and and that will be a tool.
We will not use that as, you know, concrete.
We'll use that along with what we know about those individuals.
And again, the soft skills part.
Well, this person had a death in the family or this person has really been struggling at work lately.
They yeah.
He's not going to know that.
That's not going to be a part of the equation.
So there will be people that will need to be involved in order to make these things successful.
And if we want to choose to ignore the data, well, that's what we're going to do.
>> I really appreciate the idea that critical thinking is going to be one of the key skills that anybody could have in a future workforce, differentiating yourself from your fellow employees, from A.I.
And yet, it seems that one of the early trends in the A.I.
era is that we are outsourcing our critical thinking.
There was a professor who wrote a piece for Huffington Post about a Trojan horse.
a Trojan horse attempt to try to a method to to see if his students were cheating using ChatGPT.
I don't know if you've heard of how they do it, but we're going to talk to professors next in the next few weeks about this.
But you you give students a prompt, and if they copy and paste it and put it in ChatGPT, you might not know it.
However, if you put a part of the prompt in white on white text that's extremely small that they would not see, but the A.I.
would see if you copied and paste it and you give them some nonsensical prompts.
Include the word banana in this essay, things like that that they would never know were included.
Then all of a sudden and he found out he's got a third of his class still cheating and denying it, and he's going like, why?
He's like, I'm not even mad at you.
I just want to know why you're doing this.
And a lot of the students were saying, well, I wanted it to be good.
I wanted it to look smart.
Well, are we already given up the idea that we can't do the good work, that we can't look smart, that we can't be the critical thinker, that we have to assume that the machines are better?
I don't want to assume that.
I think this hour has been about the points that you make, about the ways that these tools can make us better if we choose.
>> Yeah.
I mean, there there are a lot of tools out there for a lot of the different things that we do and incorporate in life.
And if we let the tools you know, overtake us.
you know, there are larger questions around, you know, confidence and the way that our society, values knowledge and information and all of those things I think are going to continue to come to a head, they're not going to be solved overnight.
but I'm choosing to be someone who steps out in front and hopefully takes a leadership role, at least at the community level, of how these things are, are being incorporated into the work that we do.
and I'll continue to be a voice, you know, an ethical voice for those who we work with.
and I'm thankful that some of the clients we do work with are national brands that people do interact with on a daily basis, especially if, you know, maybe you go to the grocery store or get a car wash in this community.
there are clients that we are working with that are doing the right things and do want to benefit their business.
At the same time, they do want to support their workforces and continue to be competitive in this world.
>> Well, I'll close with this in our last 30s w k emails to say in the past, technological advances have increased productivity.
This creates more economic activity, which leads to more higher paying jobs.
Augmented intelligence suggests that people plus technology work better than machines alone.
This idea is discussed in The Second Machine Age by Andrew McAfee.
So people plus technology, not just technology.
Will you endorse that idea 100%?
>> And before we close, Evan, I do have to say hello to Aunt Joyce and Uncle Craig.
Aunt Joyce is a daily listener.
>> Oh, wow.
Well, that's very kind, Aunt Joyce.
Thank you.
I appreciate that, and I want to thank John Loury, the president of Cause and Effect strategy, for coming on and talking.
And our listeners know we we talk A.I.
a lot and a lot of different ways.
Come back and join a panel sometime.
And, you know, we'll roll up our sleeves and keep getting into it.
>> I'd love to thank you for the opportunity today.
Evan.
>> Tell people one more time where to find you online.
John.
>> Cause and effect strategy.
>> Com that's John Loury the president of cause and effect strategy.
We've got more Connections coming up in just a moment.
>> This program is a production of WXXI Public Radio.
The views expressed do not necessarily represent those of this station.
Its staff, management or underwriters.
The broadcast is meant for the private use of our audience.
Any rebroadcast or use in another medium, without expressed written consent of WXXI is strictly prohibited.
Connections with Evan Dawson is available as a podcast.
Just click on the link at wxxinews.org for.

- News and Public Affairs

Top journalists deliver compelling original analysis of the hour's headlines.

- News and Public Affairs

FRONTLINE is investigative journalism that questions, explains and changes our world.












Support for PBS provided by:
Connections with Evan Dawson is a local public television program presented by WXXI