Artificial Intelligence, Our Safety And Privacy: AS HEARD ON WGAN: [03-12-19]
Craig is on the WGAN Morning News with Ken and Matt, with Joe Reagan sitting in for Ken this morning. They talked about the new cameras that can spot a shoplifter even before they steal something. They also discussed autonomous cars and their impact and risks.
Below is a rush transcript of this segment; it might contain errors.
Airing date: 03/13/2019
Artificial Intelligence, Our Safety And Privacy
Craig Peterson 0:00
Hey, good morning everybody. Craig Peterson here this morning with WGAN. Ken was out, so we had Joe Reagan in this morning along with Matt, and we had a little bit of a chat about a couple of things. The new AI stuff. Joe had some interesting points today; I enjoyed having him on. AI, and what it means to us, from soup to nuts here: shopping, police, our cars, etc. So here we go with the guys over at Maine’s number one morning show.
Matt Gagnon 0:36
And we’re back again. 7:37 on the WGAN Morning News with Ken and Matt. Joe Reagan is in for Ken today, and Craig Peterson is on the line with us, as he typically is at 7:38 on a Wednesday. So Craig, how are you this morning?
Hey, I’m doing good, Matt. What, 7:37? Can’t you just leave it alone? Come on, guys.
Sorry, I know, a little early this time. So anyway, let’s start with our technology topics, of course, as always, sir. I think the first one I’d like to deal with, frankly, is cameras, right? Spotting shoplifters before they even steal. So basically, we’re talking about, what, Minority Report here? Like future crimes, precognition. The cameras are basically, I assume, you know, keeping tabs on people and can predict when they’re going to steal. What’s happening here?
Yeah, this is part of an overall trend that we’ve been seeing over the last year or two. We have it in London, of course. We know, right, it’s the most surveilled city in the world. They are tracking you everywhere you go. They’re using facial recognition, and they are also using it to listen for bullets being fired. We have that in New York City as well; they know instantly when a gun is fired and where it was fired. We now have software that’s being used by nearly every major city in the Western world that is predicting where crimes are going to be taking place. And in all of these cases, it’s kind of okay, still, right? Because basically, if there’s an area of high crime, you want the police there, right? It’s kind of a normal thing. Now, we also have California doing yet another wacky thing. Instead of having to go in front of some form of a magistrate or judge to get bail, and then you post bail and you can get out, assuming you’re not a big risk to the community or a flight risk, California has decided that they want to get rid of bail bondsmen entirely, and they just want a computer program to decide who gets out, basically, on bail. And it’s really concerning now, as we give more and more power to computer systems, which just aren’t infallible, as you know if you’ve watched any sci-fi show, right? You know, that’s true.
Joe Reagan 3:07
And, Craig, you know, one thing that’s come up a couple of times is how these computer algorithms are doing this. So we talked about predictive analysis for shoplifters. A lot of that is based on correlation. And so, therefore, one of the critiques of these systems is that it actually is discriminatory, usually against racial minorities that might be, you know, statistically punished for crime more often. And so it seems like it almost turns itself into, I guess, a self-licking ice cream cone, where you’re just making it worse for people. And it’s not actually doing predictive analysis in terms of someone’s intentions, but actually trying to make judgments based off of past events.
I would like to see a self-licking ice cream cone. I would be quite…
Joe, you brought up a really good point. Do you remember Microsoft came out with this little Twitter bot that they had designed to use machine learning?
How can I forget? Like it was yesterday. Yeah.
Yeah, you remember this thing? And what it did was.
Yeah. Turned into Skynet in like 10 minutes.
Exactly. They had it monitor all these Twitter feeds. And what did it come back as? It came back as a nasty racist. It was just crazy. Well, and your point about, you know, predictive correlation, etc. is a good one, too, because when you start looking at this, and what Matt brought up here was this Japanese startup called Vaak that takes security camera footage and does predictions. So you take this and say, well, is this computer system going to turn into a racist like Microsoft’s computer system did, because it notices that most of these crimes are committed by people who have black hoodies on that are covering their faces? And from a cultural standpoint, it may be a minority that’s wearing that particular type of dress. So now, all of a sudden, the system that’s supposed to be looking for general body language, general trends, is looking at someone that walks in dressed a certain way and automatically, bam, it’s accusing them of being a potential shoplifter. Now, what Bloomberg is reporting today is that this system is being designed so that it alerts security, and security goes over and asks the shopper, hey, do you need some help? And that alone is enough in most cases to stop the potential shoplifting that might happen, which is a big deal, guys. We’re talking about $34 billion in retail shrinkage, and shoplifting is, in fact, its biggest source. So it’s a very big deal. But my gosh, where’s this all going? We’re getting more and more of this, and we’re not anywhere near artificial intelligence yet, everybody. And everything so far, as Matt pointed out, has basically turned into Skynet.
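The feedback loop Joe and Craig are describing can be sketched in a few lines of Python. All numbers, group names, and detection probabilities below are hypothetical, chosen only to illustrate the mechanism: a risk score built from past *recorded* incidents directs extra scrutiny toward the group it already flags, which inflates that group’s recorded incidents even when both groups’ true rates are identical.

```python
# Hypothetical sketch of a predictive-surveillance feedback loop.
# Both groups have the SAME true theft rate; group_a simply starts
# with more recorded incidents because it was watched more in the past.
true_theft_rate = 0.02

recorded = {"group_a": 40.0, "group_b": 10.0}    # incidents on record
observed = {"group_a": 2000.0, "group_b": 2000.0}  # shoppers seen so far

def risk_score(group):
    """Naive predictor: recorded incidents per shopper observed."""
    return recorded[group] / observed[group]

for _ in range(5):  # five rounds of "prediction"
    # Security focuses on whichever group the model currently scores higher.
    focus = max(recorded, key=risk_score)
    for g in recorded:
        shoppers = 1000.0
        thefts = true_theft_rate * shoppers       # 20 per group, every round
        detect_prob = 0.8 if g == focus else 0.4  # focused group watched harder
        recorded[g] += detect_prob * thefts       # only detected thefts recorded
        observed[g] += shoppers

# Despite identical true rates, group_a still "looks" riskier to the model.
print(risk_score("group_a"), risk_score("group_b"))
```

Running this, group_a’s score stays well above group_b’s in every round, so the model keeps directing attention to group_a and never corrects itself, which is exactly the "self-licking ice cream cone" objection: the predictions are judgments about past recording practices, not about anyone’s intentions.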
Yeah, absolutely. We’re talking to Craig Peterson, our tech guru, who joins us at this time to go over what’s happening in the world of technology. Okay, so another one that attracted my attention on our news list here today is the potential future in which cops can take over my self-driving car. Obviously, self-driving cars are inevitable. You can be terrified of it if you want, listener who’s listening to this right now, but it’s coming and you’re gonna have to get ready for it. So when you finally are forced to drive your self-driving car, when you get pulled over, the cops can just, like, take over your car, man. What’s happening here? Is this going to be, like, some sort of stealthy thing built into all new cars, you think?
Well, that’s kind of where we’re going right now. Look at what happened a couple of weeks ago in Hawaii. There was a big conference out there, and they were talking about ethics and the ethics of artificial intelligence. And one of the things they brought up was whether our, you know, home devices, like our Alexas, our Google Home devices, etc., should have built into them some artificial intelligence that automatically calls the police if it detects something that just might not be a great home environment, right? As if Alexa can figure that out. We already have these really cool things that remind me of The Fast and the Furious, these little remote-control cars that the police can use. They drive them underneath the car of a fleeing suspect and set off a small electromagnetic pulse, which disables the engine, because of course all these new engines have computers in them. Have you seen these things?
Matt and Joe 8:09
Yeah, little RC cars, and they’ll do 80 miles an hour. So in that case, obviously, they’re damaging the cars; they just burn out the computers. And now we have people who are driving semi-autonomous cars, for instance the Teslas, and there’s a lot of Teslas on the road here in Maine. They have this kind of semi-autopilot thing, and the idea is you can have it kind of take over the steering wheel, and while you’re on the highway it’ll stay in the lanes, it’ll drive down the road, and if the cars in front of you slow down, it’ll slow down. The idea is it really kind of makes you a good patron of the roads. And Elon Musk has announced that come next year, he’s going to have a software upgrade for the Teslas that makes them 100% autonomous. But we now have a couple of cases that we know of where drivers have fallen asleep. In this one case, as reported by Bloomberg, the driver in December last year was drunk and fell sound asleep behind the wheel of his Tesla. His autopilot was engaged, and the police were chasing him down the freeway. I’m not sure exactly what they noticed, probably a sleeping driver. Might be something that’s worth noting. They tried to get this car to stop, couldn’t wake up the so-called driver, the occupant of the vehicle, and so they ended up having to box the car in. You know, you get four police officers, the car has to slow down because the cars around it are slowing down, and it came to a stop. So what do you do? The police are suggesting, and Bloomberg’s suggesting, that maybe what should happen here is that the cops should be able to not only have the car pull over to the side of the road and stop, but they’re also talking about rerouting cars, as well as being able to force them to pull over. So I don’t know, Matt, maybe when you’re driving that car down the road, the police will just say, hey, we’ll take you right to the local police station, because you’re under arrest, and all the doors lock.
So you’re suggesting to me that the whole OJ chase thing in 1991 or whatever is a thing of the past? We’ll never see that again?
Yeah, exactly. When was the last time you fell asleep drunk on the road?
Well, you know, that raises a question, because whether you’re talking about a vehicle that is fully autonomous or partially autonomous, it raises questions, and this is one the insurance industry has dealt with for some time: how do you deal with damage that’s incurred from mechanical issues, whether that be a computer glitch or some sort of actual mechanical glitch, where the operator may or may not be 100% responsible for something that happens? I imagine this is really changing the dynamic of how the insurance companies handle auto insurance.
It will. In this case, Joe, you were talking about 90% of the risk, because right now 90% of accidents, when they’ve been investigated afterwards, were attributed to some form of human error. So if you get rid of that 90%, if it’s no longer the human who’s driving who is causing the accident, or at least is a large contributor to the accident, where does that liability go? Because, again, it was human involvement, right? Human risk from the driver. Well, there are still humans involved, and I’m assuming, from the implication of what you’re saying, that you’re also including the humans that wrote the software. You know, they don’t go after the guy that designed the switch that failed on the car and that caused, or at least contributed to, an accident. They don’t do that nowadays, right? They might go after the manufacturer, they might try and get a recall on the car, get that switch replaced, get that switch repaired. But that guy that misdesigned the switch doesn’t go to jail, doesn’t face criminal charges. What’s the future going to hold, when now we don’t even have most of these car companies making their own software? Apple got rid of their autonomous vehicle division, at least the guys that were going to make the cars; they’ve decided they’re going to make the software instead, though not all of the software. There are already dozens of computers in modern cars, a lot of different software, a lot of different companies. Joe, my gosh, your head’s going to explode when you get right down to it and think about where the liability goes. Who has to get insured? How long is this stuff going to get tied up in the courts, in Washington, DC, in state courts and legislatures? This is the hugest part of this huge new problem we’re going to be facing with autonomous vehicles. All right.
Craig Peterson, our tech guru, joins us at this time to go over the world of technology every Wednesday, and today is no exception. Craig, appreciate it as always, and we will talk to you again next week.
Hey gentlemen, take care.