
Managing the power and pitfalls of data in AI

31 May 2019

Ethics and education are crucial in maintaining data privacy and augmenting human ability

“It’s difficult to imagine the power that you’re going to have when so many different sorts of data are available.” – Tim Berners-Lee, inventor of the World Wide Web

“Before Google, and long before Facebook, Bezos had realised that the greatest value of an online company lay in the consumer data it collected.” – George Packer, author for the New Yorker

It is no accident that the most powerful companies in tech and beyond are those possessing personal data from the most people: Google, Facebook, Amazon, and Apple – the quartet NYU professor Scott Galloway calls ‘The Four’ in his eponymous bestselling book. A look at China produces perhaps a different Four: Baidu (“Google of China”), Tencent (“Facebook of China”), JD.com (“Amazon of China”), and Alibaba (no comparison necessary).

Concerns over data privacy have haunted Big Tech in recent times, with Facebook expecting a multibillion-dollar fine for privacy violations. Google was fined €50 million under the European GDPR earlier this year, the largest GDPR penalty to date.

But what about data use by governments, which hold sensitive data such as health records? How can the general public be reassured about the artificial intelligence (AI) that corporations use to gather and exploit data, and about governments’ ability to prevent leaks and even abuse?

“There are three important points that governments and companies at the helm of creating and regulating these AI systems should keep in mind,” advises Lula Mohanty, General Manager, Global Business Services (GBS) at IBM Asia Pacific. “Number one, AI is meant to augment intelligence; it’s not about replacing humans. The source data and the insights thereon belong to the creator.

“Two, there should be proper consent before it is publicly used. And the third is that AI systems have to be absolutely transparent, including who wrote the code, what the sources of data are and how it is going to be put to use. So governments have an important role to play in setting those standards as AI is being democratised.”

Ethical machines?

Mohanty made those remarks at a panel titled “Responsible Artificial Intelligence (A.I.): How to save humanity from the dark side of A.I.?” for Singapore-based station CNA’s Perspectives programme. She added that governments can control neither the individuals who write software nor the companies that sit on the data.

So what do we do?

“In the utopian world,” offers Christopher G. Chelliah, Group Vice President and Chief Architect, Core Technology and Cloud at Oracle Asia Pacific, “the ultimate position would be that the creator and the owner of that data feels that they own it, and they have provided consent to every person in that chain that is accessing and using that data, hopefully for their benefit.

“None of us have a problem…telling our doctor everything that is wrong with us. We don’t have a problem with that because there’s a code of ethics that he or she is looking after us and trying to improve us.

“That sort of guiding principle we see in professional bodies like the medical profession and the ethics around how they deal with patient data, or the ethics of the way the legal profession deals with legal data. You’re going to see that eventuate in AI machines.”

Working with machines

Despite humans’ best efforts and even intentions, how AI pans out can be unpredictable. Incidents involving chatbots such as Microsoft’s Tay (“ricky Gervais learned totalitarianism from adolf hitler, the inventor of atheism”) and Tencent’s Little Bing embarrassed the creators but did negligible real-world damage. It would have been a different story had it involved human lives.

“The problem is that no one really knows how machines or robots make the decision,” concedes Emmanuelle Coulon, Board Member of think tank Live with AI. “This is a real problem, because when it comes to a medical decision or military decision taken by AI, if we don’t understand why it made this decision, then we cannot trust the AI anymore. So I think this is one of the problems that we have to address as we move on to more sophisticated and advanced AI.”

While it is impossible to predict how AI will behave, Coulon believes education should be geared towards creating a better understanding of AI and the ability to collaborate with machines.

“We don’t want all of us to become data scientists or engineers,” she emphasises. “We just want all of us to be able to work with AI. You may be a car driver, a bus driver, an accountant or a teacher – how can we train everyone to work in collaboration with AI?

“I think we have to reinvent all of our education, or our civic education as well, and adapt our training systems [so that] we are able to collaborate with some robot, some smart machines. Again, the ultimate goal is just to augment our capabilities.”

 

Lula Mohanty, Christopher G. Chelliah and Emmanuelle Coulon were part of a discussion panel, “Responsible Artificial Intelligence (A.I.): How to save humanity from the dark side of A.I.?” for the SMU-CNA programme Perspectives that was recorded at the Singapore Management University School of Law.


 
