May 02, 2018
Collision Spotlights Artificial Intelligence
NEW ORLEANS—The increasing influence of artificial intelligence in everyday life raises ongoing questions about public trust, industry experts said Wednesday at the Collision Conference. While intelligence demonstrated by machines—better known as AI—represents the future of modern society, it remains flawed.

“Companies know so much about us, more than we would want to know really,” Paul Asel, managing partner of NGP Capital, told AVN Wednesday at the Ernest N. Morial Convention Center.

A global technology investor for more than 25 years, Asel is one of dozens of venture capitalists participating in the fifth annual gathering, which is expected to draw more than 25,000 attendees from 120 countries as the show continues through Thursday.

“What it really comes down to is these companies establishing a trusted relationship with us,” said Asel, who has been engaged in acquisitions and IPOs valued at more than $25 billion and is currently focused on investments in the mobile, internet of things (IoT), automobile and AI sectors. “An example of this is I’m investing in a company called Zubie that puts tracking devices in cars so they can tell where you’re driving and how you’re driving.

“And the wife said, ‘I don’t want that in my car. The car knows everything that I’m doing.’ I said, ‘Well, don’t you realize that the Apple phone in your pocket knows that already?’ She said, ‘Oh, but that’s different.’”

Asel continued, “How is it different? The Apple phone knows a lot more and is a lot more intelligent a device than that device in the car. The difference is the Apple phone provides some utility to you, so we allow it to do many more things that we wouldn’t permit others to do. Think of it this way: if you knew everything that Apple and Google knew about you, is that information you would share with your best friend? Would you share it with your co-worker? Would you share it with your classmate? Chances are you wouldn’t.
And they know that much about us.”

In one of the featured sessions of Collision’s Growth Summit, Asel moderated a panel with the heads of three of the world’s fastest-growing AI companies. The entrepreneur, who first invested in AI five years ago with WorkFusion, said “literally thousands” of companies now engineer AI technology, which has been around for decades but is accelerating at breakneck speed.

The major difference between now and the old days of AI, according to panelist Jean-François Gagné, co-founder and CEO of Element AI, is that “today it works.”

“I remember not that long ago trying to get some of the main approaches to work on some of the significant data sets, and we couldn’t figure out how to get to the right level of accuracy, or the amount of effort required was insane,” said Gagné, whose 18-month-old company is based in Montreal and Toronto and employs 300 people, 80 of whom hold Ph.D.s in AI. “You could get two or three steps of a process to work, but then there’s one step in the middle that needed to be done by people. But at this point you can basically have the whole thing piloted by AI and supported by people.”

Eamon Jubbawy co-founded Onfido, which uses machine learning to help businesses verify the identity of their customers without meeting them face to face. Backed by $60 million in funding, his client portfolio includes more than 1,500 businesses and counting.

“We founded the business six years ago, and at the time there were definitely people with the required skill sets, but it was difficult to get a hold of them,” Jubbawy said. “Now there is a bit of hype, with lots of people talking about [AI] who don’t necessarily understand the nuances, but that hype has definitely increased these meet-ups and places where you can meet Ph.D.s and AI researchers.
The glut of universities churning out top-quality AI talent has now made it easier for businesses to start recruiting and building teams to deliver on these converging trends.”

Vishal Chatrath, co-founder and CEO of PROWLER.io, said the rapid rise of AI, which is designed to make human-like decisions, will not wipe out the workforce as we know it.

“Will there be a shifting of jobs? For sure. That’s just the history,” said Chatrath, whose 85-person company is based in Cambridge, England. “But at the same time, I don’t know of a single moment in human history when any disruptive technology has resulted in mass unemployment. It’s never happened. And I’m not sure it’s going to happen here.”

Element AI CEO Gagné added, “The way I’d like to present this is that AI is actually aiding software. It’s a new way to program. We used to program every single condition: if we see this, then you do that, et cetera, et cetera, and you use a lot of manpower to do that—to encode all the conditions.

“Now, through experience captured in the data sets we build, we can teach an algorithm to discover these conditions, and by rewarding or penalizing this algorithm we can steer it toward the objectives we have. But it’s software at the end. So yes, it adapts and it learns, and it is a fundamental shift in how we’re engaging with the machines around us, but at the end it’s doing more of the same things we’ve been doing. At some point a lot more begins to feel different. But it’s the same path we’ve been on for many, many years, just picking up speed and accelerating.”

There are many things that machines still can’t do, noted Onfido’s Jubbawy.

“For us it’s less about who does what,” Jubbawy reasoned. “It’s more about what the customer wants at the end of the day. So when we’re selling into businesses we don’t talk about AI vs. humans; we just talk about the problems they have. And on our end we go away and try to solve them the best way possible.
“So when you’re looking at image recognition and document verification, for example, there are certain elements we skew more to the machine side, and there are certain elements where we still retain humans to provide a level of expertise that machines can’t get to now.

“The idea of machines replacing humans is not necessarily accurate; it’s just changing the nature of the jobs that humans are doing.”

On the topic of privacy, Jubbawy said it must be achieved “by design.”

“Privacy lives in your code. It’s not something you talk about in the office; it’s something that has to be built from the ground up,” according to Jubbawy.

The effects of drama on social media platforms are also informing the future of AI, according to one seasoned researcher who tracks it for a living. New Orleans native Caroline Sinders, a machine learning designer and digital anthropologist, studies “conversational spaces” on digital platforms such as Twitter.

“Everything you do online is a form of data,” said Sinders, who delivered a solo presentation titled “Building and communicating with responsible AI” on the Binate.io stage. “That data is human output. Any kind of altercation you have on the internet is actually a form of trauma. That trauma is caught as data.”

Sinders told the audience she works at the intersection of social media, language and subcultures. She has spent the past two years studying online harassment and political activism on social media and is currently an Eyebeam Open Labs fellow, prototyping a machine learning system to combat online harassment.

“What I find really fascinating, especially in light of the Cambridge Analytica leaks, as well as Mark Zuckerberg being in front of the House and Senate, is how much of that data is actually now being used [in AI applications],” she added. “What I find most fascinating about machine learning, though, is how it can be somewhat fallible, especially with algorithms.
… Machine learning can be subjective depending on the kinds of questions that you’re asking.”

Sinders offered the example of so-called “resting bitch face,” when a person unintentionally looks angry or annoyed even though their face is expressionless.

“How many of you know what ‘resting bitch face’ is? How many of you have it?” Sinders asked. “You can be extremely happy and look extremely pissed off. Part of the problem is that it’s important to highlight how many of us are actually, in an everyday scenario, expressing our truth.

“Who gets to determine what happiness looks like? Who knows what happiness looks like if you have resting bitch face? How can someone see if you’re scared if you’re smiling?”

She said reading emotions is an area where AI technology inherently falls short.

NGP Capital’s Asel, who sits on the boards of Gigwalk, WorkFusion and Zubie and was formerly responsible for tech investments in Asia at the International Finance Corporation, said as long as AI is beneficial it’s a net positive.

“Yes, the systems are becoming very intelligent, and they’re also becoming more seductive,” Asel said. “So we need to exercise care.”

He said companies such as Amazon that use AI extensively to customize repeat browsing experiences are not doing anything wrong.

“Amazon has been doing that for years, and it’s a feature that I really enjoy. When I go on there I know it’s going to give me selections that are relevant to me,” Asel said. “What they’re doing is taking information about us and personalizing recommendations, but they’re doing it on a very statistical and aggregated basis. So in that sense they can see deviations in behavior that would suggest certain things you would prefer, and they’re trying to get you information that is beneficial to you.”

Collision continues Thursday with featured speakers such as Brad Smith, president of Microsoft; activist and actress Sophia Bush; and actor/comedian Damon Wayans Jr.
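Gagné’s contrast between hand-coding every condition and letting an algorithm discover them through reward and penalty can be sketched in a few lines of Python. Everything below (the toy temperature data, the threshold, the update rule) is an illustrative assumption for this article, not anything from Element AI’s actual systems:

```python
import random

# Hand-coded approach: a person writes out every condition explicitly.
def rule_based(temp_c: float) -> str:
    if temp_c >= 25:  # someone had to choose this number by hand
        return "warm"
    return "cold"

# Learned approach: a one-parameter "model" discovers the boundary itself.
# Wrong answers are penalized by nudging the parameter, steering it toward
# the objective -- the reward/penalty loop Gagne describes.
def train(samples, labels, epochs=50, lr=0.05):
    boundary = 0.0  # start knowing nothing
    for _ in range(epochs):
        for temp, label in zip(samples, labels):
            predicted = "warm" if temp >= boundary else "cold"
            if predicted != label:  # penalize: nudge the boundary
                boundary += lr if label == "cold" else -lr
    return boundary

random.seed(0)
samples = [random.uniform(0, 40) for _ in range(200)]
labels = [rule_based(t) for t in samples]  # ground truth from the hidden rule
learned = train(samples, labels)
print(f"learned boundary: {learned:.2f}")  # lands close to the hand-coded 25
```

With enough examples the learned boundary converges on roughly the same cutoff a person would have typed by hand; the point of Gagné’s “today it works” remark is that this loop now scales to conditions far too numerous to encode manually.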
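Asel’s description of recommendations made “on a very statistical and aggregated basis” is, at its simplest, item co-occurrence counting: items frequently bought together are suggested to the next buyer. A minimal sketch with invented purchase data (the baskets and item names are hypothetical; nothing here reflects Amazon’s actual system):

```python
from collections import Counter, defaultdict
from itertools import combinations

# Toy purchase histories (hypothetical data, not real users).
baskets = [
    {"camera", "tripod", "sd_card"},
    {"camera", "sd_card"},
    {"camera", "tripod"},
    {"novel", "bookmark"},
]

# Aggregate step: count how often each pair of items is bought together.
co_counts = defaultdict(Counter)
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def recommend(item: str, k: int = 2) -> list:
    """Suggest the items most often co-purchased with `item`."""
    return [other for other, _ in co_counts[item].most_common(k)]

print(recommend("camera"))  # e.g. ['sd_card', 'tripod']
```

Because the counts are pooled across all shoppers, no single person’s history is exposed, which is the “aggregated” property Asel credits with keeping the feature useful rather than invasive.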
For additional coverage of Collision 2018, click here.

Pictured above, from left: Jean-François Gagné, Vishal Chatrath, Eamon Jubbawy and Paul Asel.

From left: Tristan Harris (Center for Humane Technology), Jim Steyer (Common Sense Media) and Alyssa Newcomb (NBC News) at the “How tech has us hooked” panel.

Caroline Sinders discusses “Building and communicating with responsible AI.”