Tim and Markus talk computer vision, artificial intelligence (AI), politics, crime, and other things at the DevIntersection conference in Las Vegas, November 2019.
Machine learning, artificial intelligence, and computer vision are among the hottest topics of the day, especially at a conference like DevIntersection, which took place at the MGM Grand hotel in Las Vegas. At this event, Microsoft Regional Director Tim Huckaby and our own Markus Egger (publisher of CODE Magazine and also a Microsoft Regional Director) sat down to discuss these topics, as well as worries about AI features becoming too powerful, problematic privacy laws, the impact of AI on politics, doing good in the world with facial recognition, using computer vision in everyday business apps, and much more. Let's eavesdrop on the discussion!
Markus: We run similar businesses. I run CODE Consulting, a custom app-dev, consulting, and training company [in addition to publishing CODE Magazine].
Tim: Which is a rough business. And I run InterKnowlogy, a 20-year-old custom app-dev company. An awesome company to work at. We have an amazing list of customers and really cool projects. Just not the greatest company in the world to own. It's such a tough business because, you know, there's only one CNN. We don't get recurring revenue from the Magic Wall [political analysis and visualization system seen on CNN in past elections] that we built for CNN. And the other tricky thing, and I'm sure it's the case in your business too, is that 98% of the work we do is protected by NDA. We can't put it on our website, we can't talk about it. You know, we're building software for software companies—Microsoft, Intel, Apple, Google, you name it—and really high-tech, smaller companies too. But it's so one-off and so unique. These businesses we own will never be 10,000-person businesses. There's just not a big enough market for custom solutions.
Markus: It's a bummer. We’re at DevIntersection here in Las Vegas and we both did presentations. I just finished one of my sessions, and there were probably three different samples where I would have liked to have shown the real-world thing. But I can't, because it's under NDA and I can only create a small example similar to a real-world scenario instead.
Tim: Right. And I've been working with the Custom Vision team at Microsoft for a long time and they're just such brilliant, awesome people, you know, life-saving type people, and you just can't talk about what we're doing or what they're doing. So it's fairly frustrating, in that respect. But I love InterKnowlogy. It's been around for 20 years and we really do cool stuff.
Markus: The CNN Magic Wall is one of the pieces of software your company wrote that probably everybody has seen, even if they didn't know you guys wrote it. There's another election coming up. Are they still using the same tools?
Tim: No. Somebody else has done their most recent software. I don't know how much I can say about all this, but Microsoft was a heavy sponsor of all that election stuff we did. We did the analytics for both parties. We go back three elections through both Obama elections. We provided a ton of data science to both parties on an even playing field. We built the voting app for the Iowa caucus, which is basically 22 voting apps in Xamarin. The good people of Iowa could vote electronically for the first time in United States history. But Microsoft—as you've seen—has become really gun-shy of being involved in politics, being involved in controversial AI things. Microsoft and CNN got a little sideways and we unfortunately were part of the byproduct of that. They [CNN] are using a lesser version of a Web-based client in this election. Unless something weird happens last minute, and they say “we need InterKnowlogy to save the day again.” I hope they do. I mean, CNN is here [at the DevIntersection conference we are at].
Markus: Yes, they just did a keynote panel.
Tim: I was in that panel. George Howell [from CNN, who was part of the panel] is such a talented guy. Anyway, so, yeah, we do cool stuff at InterKnowlogy. That's one part of my life. The other part of my life is that in my infinite brilliance—and you know I say that facetiously, because this was really a dumb move on my part—for the last three years, I've been working two full-time jobs. There's my role at InterKnowlogy as the founder, chairman, and basically a strategy guy. And there are the lines of business that I love and that we should focus on. You know, InterKnowlogy has a CEO and it has awesome technical people, but it needs me for lead generation. And it didn't have me for lead generation for the last few years, because I was serving as the CTO of a public company, VSBLTY, which I founded. We spawned some IP, pulled it out of InterKnowlogy, and ran with a product based on computer vision. It's now deployed world-wide, and it has a security component and a retail component. Our tagline is "The intersection of marketing and security." It looks for the bad guy and it does weapon detection.
Markus: Is looking for the bad guy the main thing VSBLTY does or does the company produce all kinds of computer vision software?
Tim: We have filed for patents that I'm named on, which is kind of cool. Some brilliant ideas by me and some other folks. This is in the weeds of computer vision. For instance, we do virtual zones based on a lot of trig and calculus, because we don't get depth out of commodity cameras. We have to do in-depth math based on what the computer sees in a flat image. We have cameras in places like a digital sign and we play interesting content on that sign. It's typically beautiful people doing beautiful things. Like beautiful people drinking champagne and scantily clad women dancing around with champagne. And all this is done in the hope of getting the "bad guy" to look at our camera. The really bad guys wear hoodies, look down, and are good at not looking at surveillance cameras and stuff like that. But this sort of content always gets them to look at the tiny camera in the digital screen.
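[Editor's note: For readers curious about the "virtual zones" Tim mentions, here's a minimal sketch of the kind of flat-image trigonometry involved, using a simple pinhole-camera model. All names and constants here are our own illustrative assumptions; VSBLTY's actual math is proprietary and undoubtedly more sophisticated.]

```python
import math

# Sketch: estimate depth and test a "virtual zone" from a flat 2D image
# using a pinhole-camera model. All constants are illustrative assumptions.

FOCAL_LENGTH_PX = 1000.0    # focal length in pixels, from camera calibration
AVG_PERSON_HEIGHT_M = 1.7   # assumed real-world height of a detected person

def estimate_distance_m(bbox_height_px: float) -> float:
    """Similar triangles: distance = real height * focal length / pixel height."""
    return AVG_PERSON_HEIGHT_M * FOCAL_LENGTH_PX / bbox_height_px

def in_virtual_zone(bbox_center_x_px: float, bbox_height_px: float,
                    image_width_px: int, zone_max_m: float = 3.0,
                    zone_half_angle_deg: float = 20.0) -> bool:
    """True if a detected person is within zone_max_m of the camera and
    inside a horizontal wedge centered on the camera's optical axis."""
    distance_m = estimate_distance_m(bbox_height_px)
    offset_px = bbox_center_x_px - image_width_px / 2  # pixels off-center
    angle_deg = math.degrees(math.atan2(offset_px, FOCAL_LENGTH_PX))
    return distance_m <= zone_max_m and abs(angle_deg) <= zone_half_angle_deg

# A 400-px-tall detection centered at x=700 in a 1280-px-wide frame:
print(estimate_distance_m(400))         # ~4.25 m away...
print(in_virtual_zone(700, 400, 1280))  # ...so outside a 3 m zone: False
```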
Markus: Should we tell people this, by the way? [laughs].
Tim: It's up to you. You said we can edit if we need to. [laughs].
Markus: If you don't mind people knowing this, then that's fine. I guess the people who read CODE Magazine are all good people. I think we can tell them. [winks]
Tim: Right. And we are privacy compliant! I've become an expert in privacy law, unfortunately. Who would have known that I would be? World-wide!
Markus: And a lot of fun that stuff is!
Tim: But you have to be an expert! For instance, the privacy law in your home-world, as you're so close to Germany, is very, very strict. [Note: Markus is originally from Austria, and CODE still has offices there that Markus visits a few times a year.] The privacy law in China doesn't exist, right? Or in all of Latin America, it doesn't exist. And it is what it is. Anyway, once you look at the camera, we can do a number of things. Obviously, we can do facial recognition against a bad-guy database and then send an alert to an authority that says, "We are 87% confident that Joe Bad Guy is here at this location," and, you know, send the picture with it and blah, blah, blah. But in the creepy factor, where you're going, it could also change the content based on what it sees. For instance, if someone walks in front of the digital sign and she's holding a Coke, and one of the advertisers behind the digital screen is Pepsi, the content may change to Beyoncé drinking Diet Pepsi. That's wildly effective for a certain demographic. The technology elite might not get fooled by that. But there is a demographic that subliminally says, "Oh, Beyoncé drinks Diet Pepsi. I should try that." [chuckles] It just is what it is!
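[Editor's note: The alert flow Tim describes, matching a detected face against a watchlist and notifying an authority once a confidence threshold is cleared, can be sketched in a few lines. This is our own simplified illustration, not VSBLTY's implementation; the embedding model, the cosine-similarity scoring, and the notify_authority() helper are all assumptions.]

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class WatchlistEntry:
    name: str
    embedding: np.ndarray  # face embedding produced by some recognition model

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def notify_authority(name: str, confidence: float) -> None:
    # Placeholder: a real system would attach the camera frame and push
    # the alert to a security portal.
    print(f"ALERT: {confidence:.0%} confident that {name} is at this location")

def check_face(live_embedding: np.ndarray,
               watchlist: list[WatchlistEntry],
               threshold: float = 0.87) -> None:
    """Compare one detected face against every watchlist entry and alert
    on any match that clears the confidence threshold."""
    for entry in watchlist:
        confidence = cosine_similarity(live_embedding, entry.embedding)
        if confidence >= threshold:
            notify_authority(entry.name, confidence)
```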
Markus: How worried are you about that? By the way, I had a similar talk with fellow RD Ciprian [another Regional Director] yesterday. We talked about the ability to use AI, machine learning, and pattern recognition to identify the subset of people who are most easily swayed but could be tremendously important. It brings us full circle back to the election stuff and things like Brexit, right?
Tim: Yeah. The election is its own monster. And "Chippy" [Ciprian], as I call him, like you, is a brilliant speaker. I sat in his session and was just in awe, because he does a presentation where his content is basically about a dozen words. It's 12 slides, each with one word on it, and he just talks for an hour and it's so riveting. That said, you know, we have this responsibility and ethics in AI, and I'm all about it. That's what the panel was today. We have this "nuclear weapon" we call artificial intelligence, and it's over-hyped, and it's been around for years. It's not brand new. For instance, take actuaries, who have been around for a hundred years. Actuaries have the hardest degree to get in the US. It's a math degree, but it's well beyond a doctorate. It takes something like 18 years to get an actuary degree, and there are only about a hundred of them in the world. There are very few actuaries who don't work for an insurance company. They are immediately biased, these people. Or perhaps not these people, but the insurance industry. And a great example of something atrocious that's been going on forever and will still go on, completely ignored, is the use case of auto insurance. Let's go with Chicago. Illinois is one of the U.S. states with a very strict privacy law that contradicts the federal privacy law. But that's a different issue. We can get to that in politics. In the inner city, on the south side of Chicago, car insurance rates are dramatically higher than just five miles away in the suburbs. And it's solely because the data science says that they're riskier, that a particular race is riskier, and that race is 98% of the south side of Chicago. In the suburbs, it's a different race of people who are statistically proven to be safer drivers. This has been going on for years!
Markus: And are you really talking about race, or are they just labeling whoever lives in those areas?
Tim: Well there's… there's… [thinks]...[sighs]...I don't know. We're not actuaries; we don't build this software. But Microsoft frequently uses this example of bias in AI. And the dilemma is, "Okay, evil insurance company, you can prove that this is the case. This particular demographic is riskier than that one. But what if one of these moves into there?" I'm pointing with my hands [gestures from one place to another], but what if someone from the south side of Chicago moves into the suburbs? Do you change their rate? The answer is "no." No! They still have super high car insurance.
Markus: What if it goes the other way?
Tim: I don't know. They probably wouldn’t. The answer is still “no.”
Markus: You know, there's one thing I always wonder about. I know we're both really passionate about computer vision, but also about the bigger AI topic, right? And I'm not a big believer in AI taking over the world and the "Terminator scenario"…
Tim: The dystopian view.
Markus: Right. What I do wonder about is a slightly different angle on this: Is society at risk of a breakdown because of these sorts of things, where you can identify those who are easily swayed? Fake news, and fake fact-checking, and fake this and that. I mean, it's relatively easy as a computational pattern-matching problem these days to recognize those who are easily fooled, right?
Tim: Yeah.
Markus: And in Brexit, we've seen that, right? There was a group of people who'd never voted before, who were identified as such, deemed easily swayed, and heavily marketed to with very questionable information, to put it diplomatically. That's a tough problem and difficult to prevent.
Tim: That demographic I talked about earlier, the one swayed by Diet Pepsi, we're not allowed to say which one it is, but we all know. Yeah, we're at a dilemma point here. The whole panel this morning was all about this, and the dilemma point is that we've got this "nuclear weapon," but it can also produce clean energy and it can produce the microwave oven.
Markus: Oh, absolutely. Yeah.
Tim: I mean, what do we get out of nuclear power? We've got the microwave. Right?
Markus: No, I mean, the potential for good is great.
Tim: The potential for good is great. Our dilemma is our governments, whether at the federal level, the state level, or the civic level, the city level. Our governments need help in understanding this stuff. Here's a great example. The federal privacy law is in contradiction with California's privacy law, which is in contradiction with San Francisco's privacy law. Just recently, the city of San Francisco panicked because government people were making technology decisions based on a very poor Amazon algorithm, and they made the assumption that all facial recognition is bad and could produce false positives that would invade people's privacy to the point where we'd accidentally arrest someone. And the simple fact is that facial recognition is a solved problem. It was solved two or three years ago. It's also not really known that Microsoft has the NIST-awarded best facial recognition…
Markus: That's a standardized test, which is unusual in AI.
Tim: NIST is the National Institute of Standards and Technology, and Microsoft submitted for the first time ever. NEC has won this thing forever, for the last 25 years. NEC offers wildly expensive security systems. They're great at what they do. They run in airports all over the world; you know, they're the ones looking for terrorists and such. But Microsoft won, because they have so much talent in R&D.
Markus: By quite a margin, if I remember correctly, right?
Tim: Yeah, 98% in the tests that mattered. They did a number of tests, but the ones that mattered were profile; occlusion by hat, sunglasses, beards, and facial hair, stuff like that; and movement. And they got 99.6% accuracy in perfect environments. Good input produces good output. NEC wasn't far behind, but they did lose. NEC won in the test where you're being photographed, so you step in front of a well-lit camera.
Markus: Like immigration.
Tim: Exactly. And you look straight into it. They only won by a percentage point; their accuracy was like 99.5% or something like that. The point is that this is a solved problem. When you take this away from the police officers and the FBI in San Francisco, you're putting the population at risk, and it's simply based on hysteria. Do you know Tim O'Brien from Microsoft? This is his job. He and his boss, Brad Smith, Microsoft's president. Responsibility in AI is their job, and they're leading this charge for Microsoft. And Tim will tell you, "If you're worried about facial recognition, you're nuts, because the deep fakes thing is real." There are other "nuclear weapons" in the AI space, and they're not facial recognition. We could help the world with facial recognition.
Markus: The audio fakes are incredibly scary. But it's also the video stuff that's pretty far along. The deep fake stuff is amazing, right? I mean, put a fork in it! It's done and here to stay and we will have to deal with it. This is not science fiction.
Tim: I'm not sure how much of this we can say in public, but there was an incident of high-level Microsoft executives playing around with their voice technology and voice synthesis. They quickly realized how incredibly powerful and dangerous that technology can be. And they immediately said, "Yeah, that's a nuclear weapon. We are not giving this to the world." And they're keeping it under lock and key.
Markus: It's good that a conscientious company like Microsoft is at the forefront of this. Did you hear that they were just voted the most ethical company in the US? They have a business model that isn't based on doing stuff like that and then selling the data. But there are many others. And the danger is that other companies are eventually going to do something similar. It's probably inevitable.
Tim: Right. Remember when Satya Nadella did his keynote at BUILD 2016? He talked a lot about ethical AI. And without getting into too much detail, that caused a lot of unhappy reactions from certain competitor companies that are exactly in the business of selling that kind of data and not acting ethically. Satya shocked the press and analyst community because he talked about responsibility in AI and he talked about this dystopian view.
Markus: Yeah. I was at that keynote. Nice that somebody is at the helm whose main business model is not that.
Tim: Exactly! Microsoft is in a unique position to be the leader in ethics in AI, because their revenue isn't based on advertising or tracking people like these other companies. It's a consumption model in the cloud.
Markus: It's interesting what's going on in Asia. I mean, Asian players—whether that's the government of China or whether that's players like Alibaba, Baidu, or different ones—what they do with tracking…
Tim: And they don't have privacy laws; they don't have privacy. They're doing whatever they want.
Markus: Exactly! China's doing this social-score thing and detecting when people do things like crossing the street against a red light. That's technologically very interesting. But there's certainly an Orwellian state going on there, right?
Tim: Orwell and Huxley wrote books about this dystopian view. You know, I'm totally into Moore's Law. Remember when Bill Gates, 20 years ago, used to do the "digital decade" speeches, and he talked about Moore's Law and how it was going to help us software people, and blah, blah, blah? Most people probably know what it is, but to recap quickly: Moore's Law is basically a prediction by Gordon Moore, the genius who went on to co-found Intel and who was a young man at the time. He saw a trend that we were cramming twice as many transistors onto an integrated circuit, and he predicted that would go on for another decade. Well, it's gone on for 55 years, and it equates to CPU power. So now our CPUs are calculating at the speed of small animals like rabbits, something like 10 trillion calculations per second. I could take the dystopian view of it. We are heading for the singularity, if Moore's Law continues. By 2025, the Intel CPU will be calculating at the speed of the human brain. That doesn't mean machines are taking over the world. It just means that in computer vision, we're going to be able to see a lot more. You did a demo in the session you just presented, looking at a handful of fish under the water. But how about recognizing every coral uniquely at the same time? Or the algae at the same time? And the water clarity and temperature, which is a big deal for our world, if you're into CO2 buildup and climate change and all that stuff. It's scary. What if you could see all that? We can't do that now because we don't have enough processing power. Simple as that. It's one of the few times in our world, Markus, where the software is waiting for the hardware. Normally it's the other way around, right? But the real dystopian view is, okay, you get to 2025 and we have these amazing CPUs that calculate at the speed of the human brain, which simply means that you can see a bunch more things, but you can't make a blind man see. You'll be able to see maybe, I dunno, 150 things under the water and recognize them all with confidence. By 2045, if the trend continues, we'll have a CPU that calculates at the speed of all the humans on earth combined. Now that is, like, whoa.
Markus: That's mind blowing.
Tim: That makes a blind man see, in a non-dystopian, non-negative way of looking at it. That means the guy wearing the glasses, he might have one hell of a battery in his backpack, right? Forget about that weakness. But he's wearing a pair of glasses, and there's a CPU and a computer in there, and it's telling him everything in front of him. Everything! Hundreds of millions of objects. God knows how that would happen. Nothing's going to replace the human vision system. It doesn't matter how quickly a CPU calculates, right? Nothing would be able to suck in that much information at once. But you could do object recognition of hundreds of millions of things with that type of processing power. Right?
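[Editor's note: The Moore's Law projections Tim cites are easy to reproduce as back-of-the-envelope arithmetic. The doubling period and the 2019 baseline below are rough assumptions of ours, and whether any of the resulting numbers equal "a human brain" or "all humans combined" is the speculative part of the conversation, not established fact.]

```python
DOUBLING_YEARS = 2.0    # the classic Moore's Law cadence (an assumption)
BASELINE_YEAR = 2019
BASELINE_OPS = 1e13     # ~10 trillion ops/sec, the "small animal" scale cited

def projected_ops(year: int) -> float:
    """Exponential extrapolation: capability doubles every DOUBLING_YEARS."""
    doublings = (year - BASELINE_YEAR) / DOUBLING_YEARS
    return BASELINE_OPS * 2 ** doublings

for year in (2025, 2035, 2045):
    print(year, f"{projected_ops(year):.1e} ops/sec")
# Under these assumptions: 2025 -> ~8.0e13, 2045 -> ~8.2e16. Whether either
# figure matches "a human brain" or "all humans combined" is speculation.
```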
Markus: And that brings us right back around, right? It's kind of fun, as we sit here and chat, to talk about the dystopian future. But the reality is, both of our companies use this stuff to do good things, right? And if we didn't, we wouldn't admit it, right? [laughs] But there are a lot of interesting things we do. I mean, you try to make the world a better place.
Tim: I want to be there! We did a project that brought everyone at Microsoft to tears, around PUV, "Posterior Urethral Valve syndrome," which, if missed by the physician, has a 100% mortality rate. It's a very rare syndrome. And if the physician isn't trained to see it and they miss it, the baby dies. If they're trained, it's very simple to recognize, but they're human, so they miss it. This isn't AI replacing doctors. These are simple tools that say, "Hey doctor, look at this one. The computer is 67% confident that it's PUV. Maybe you should take a peek at that." And it's really simple to fix, by the way. From "100% fatal" to "simple to fix," just like that.
Markus: That's a lot of the stuff we do too, right? It's not just "are we better than a human eye?" It's also about paying attention everywhere and then bringing things to people's attention. We have a lot of things we built on AI in general, and on vision specifically, where it's really just, "This is something you should take a look at." The software isn't making the call on it; it's bringing it to somebody's attention.
Tim: Exactly!
Markus: "Here are 10,000 things. You, the human, can't look through all of this, but we'll highlight the hundred most important ones that you should take a look at manually." That's how I see a lot of AI work going forward. And that provides real benefits in all kinds of apps, including conventional business apps that most of us have been building for decades. We're doing quite a bit of that with our customers, and they get great benefits in scenarios where they initially never thought AI would be for them at all.
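[Editor's note: The triage pattern Markus describes, scoring everything and surfacing only the top slice for human review, boils down to a few lines. This is a generic sketch of ours; score_item() stands in for whatever model a given app actually uses.]

```python
from typing import Callable, Sequence

def triage(items: Sequence[object],
           score_item: Callable[[object], float],
           top_n: int = 100) -> list[object]:
    """Score every item, then return only the top_n for human review."""
    return sorted(items, key=score_item, reverse=True)[:top_n]

# Hypothetical usage: surface the 100 most suspicious records out of 10,000.
# to_review = triage(records, anomaly_model.score, top_n=100)
```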
Tim: I'll give you an exact example, a use case of that. I can't say which companies are involved, but one might speculate it's the largest software company in the world, the one in Silicon Valley whose sole revenue is based on advertising. Okay? Even they have bad guys. They have a bad-guy database. There are others, too. The famous one is Taylor Swift. Taylor Swift has over 2,500 bad guys in her database. Don't ask me how I know that, but it's over 2,500 people. Not "Miss Swift, I hate you. You're ugly and your music sucks." This is "I want to kill you" or "I want to do awful things to you," where a federal authority has gotten involved and produced some type of incarceration or restraining order or something. That's 2,500 people for Miss Swift alone! Well, even the CEO of a large tech company has people that hate him, right? People that have threatened him credibly. Mostly they're Internet trolls who say stupid things, but it catches the FBI's attention. So I'm at this giant event, and we're looking for the bad guy, because they're afraid that one of these particular bad guys is going to get into the booth at CES and cause havoc.
So we're monitoring this conference with the VSBLTY software, looking for people with a certain degree of confidence. But we bring the confidence throttle way down on that one guy. For everyone else, it's 60%, 65%. We send an alert if we're 65% confident, but for this guy, we pulled it down to 40%. They're okay with false positives, because humans are going to make the decision, right? We got 15 false positives on this guy, and they deployed on one. I was in the security center, standing behind all these security guys when it was going down. We're looking at this nice portal we built. The alerts come in, and they're looking at what the software sees, and they're looking at the original image in the facial recognition database. Then they're looking back at the live feed showing this person. And I'm looking at him, and I'm like, "That's the guy! That's the guy!" We were getting, I don't know, 67% confidence or something like that. And I'm like, "That's him!" I'm saying this to myself, and in their security mumbo jumbo, they're saying the same thing. So it's not like they ran out there and shot the guy. Instead, they simply deploy security officers who walk up to the person and say, "Hey, how are you doing? Welcome to the booth. What's your name? And, you know, can you prove it?" [laughs] And sure enough, this guy proved it wasn't him. He just looked exactly like him. I think that's important, because we're not going to be replacing police with machines. That would be dangerous. Seriously dangerous.
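[Editor's note: The "confidence throttle" Tim describes amounts to a per-identity alert threshold. Here's a minimal sketch of that idea; the default 65% and the lowered 40% mirror the anecdote, while the names and structure are our own illustration.]

```python
DEFAULT_THRESHOLD = 0.65            # alert at 65% confidence for most people
PER_IDENTITY_THRESHOLD = {
    "high_priority_target": 0.40,   # throttle pulled way down for one guy
}

def should_alert(identity: str, confidence: float) -> bool:
    """False positives are acceptable here: a human makes the final call."""
    threshold = PER_IDENTITY_THRESHOLD.get(identity, DEFAULT_THRESHOLD)
    return confidence >= threshold

# The ~67% match from the story clears the lowered 40% bar easily,
# while a 50% match on anyone else stays below the 65% default.
assert should_alert("high_priority_target", 0.67)
assert not should_alert("someone_else", 0.50)
```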
Markus: That's interesting stuff. That's a very good example and an interesting scenario. A lot of this is more pedestrian, right? For instance, we help companies improve their warehouse inventory turnover by 5% or 10% using this technology. Those things are huge for most businesses. Those are probably the more everyday scenarios. But yeah, the scenarios you just mentioned are certainly amazing, and I think they will change things for the better. I think this will get a lot of use on a variety of fronts in very positive ways.
Tim: Yeah, I am optimistic.
Markus: I think we'll be able to use that technology to do things like identify deep fakes. For a while at least. But that's going to get tougher and tougher.
Tim: Deep fakes is the one that scares me. All the other stuff we can handle if we just get governments smart and get some regulation in place. Microsoft has basically told the United States federal government and governments in Europe: You need to put in regulation to control this monster. But the deep fakes thing worries me. It does. Because I've seen even super-smart technology people get fooled. And if those people are getting fooled, then Grandma Huckaby has no shot. She's doomed. Right?
Markus: Yes. The fakes are just getting so good!
Tim: And how do some of these fakes make it through the filters?! Just the phishing stuff. How do they make it through the Office 365 filter? I don't get that. I got a phishing email on Thursday that was so good, I brought engineers into my office and said, "You've got to look at this one. How did they pull it off?" I don't know the answer. No one did. It came from a microsoft.com domain. The content was very good and targeted. How did they pull that off?
Markus: Yeah. It's gotten a lot more sophisticated. And a lot more targeted with the “spear phishing” attacks. I’ve heard the term “laser phishing” used quite a bit lately. Well, it was a great conference though, wasn't it? I really enjoyed it. Definitely well done this year. I’ve missed a few [DevIntersection conferences] and I’ve always enjoyed them, but this one even more so.
Tim: Yeah, they do a good job. And good talking to you.
Markus: Good talking to you too! I'll see you at the MVP Summit!