
Artificial intelligence, real concerns…and cash

30 Apr 2019

Regulating development of self-aware robots is crucial. Data privacy is key to user-app power dynamic

Most artificial intelligence (AI) currently in use can be classified as ‘narrow AI’ that does one specific task. Spam filtering tools, chatbots, and Apple’s Siri app are all examples of narrow AI.
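To illustrate just how narrow such systems are, a spam filter boils down to a single text-classification task. The sketch below is a hypothetical, minimal example (not any vendor’s actual filter), using scikit-learn and a handful of made-up training messages.

```python
# Minimal sketch of a 'narrow AI' spam filter: one task, one model.
# The training data below is invented; a real filter would learn from
# millions of labelled emails.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now",                 # spam
    "Cheap loans, act today",               # spam
    "Meeting moved to 3pm",                 # not spam
    "Here are the slides from class",       # not spam
]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features feeding a naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Free prize if you act today"]))  # likely ['spam']
```

The system does this one thing and nothing else, which is precisely what distinguishes narrow AI from the general intelligence of science fiction.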

Self-driving cars are also examples of narrow AI, albeit complex ones. As their algorithms and programmes are iteratively improved, the biases of the programmers involved can find their way into how they are built, as evidenced by a research paper titled Predictive Inequity in Object Detection by three Georgia Tech computer science professors. The authors expressed concern that automated vehicles were less likely to detect darker-skinned people than lighter-skinned ones.

The authors concluded that “standard models for the task of object detection, trained on standard datasets, appear to exhibit higher precision on” fairer skin tones than darker skin tones.
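The disparity the authors describe can be made concrete by computing detection precision separately for each skin-tone group. The sketch below is an illustrative calculation on invented numbers, not the paper’s dataset or method; the group labels and counts are assumptions for the example.

```python
# Illustrative check for 'predictive inequity': compare detection precision
# per skin-tone group. All counts below are invented for the example.
detections = {
    # group: (true positives, false positives)
    "lighter_skin": (90, 10),
    "darker_skin": (72, 28),
}

for group, (tp, fp) in detections.items():
    precision = tp / (tp + fp)
    print(f"{group}: precision = {precision:.2f}")

# A consistent gap between the groups would indicate the kind of bias
# the authors report for standard object-detection models.
```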

“You need to know the kinds of biases that exist, and need to equip the machine to understand these biases,” explains SMU Associate Professor of Information Systems Pradeep Reddy Varakantham. “For the machines to be trustable and usable, those biases would need to be figured out and eliminated automatically.” He adds:

“If the biases have to be figured out manually for every problem being addressed, it will become extremely challenging. That's why researchers are working on automated methods of making systems more sentient (providing perspective) – figuring out all the things that will impact the problem at hand or the task at hand – so as to eliminate the biases.”

With the history of crashes involving self-driving cars, that is perhaps a good thing. From Tesla’s 2016 Florida ‘white truck’ accident to its 2018 Model X crash in California – both fatal – these ‘boundary cases’, as Varakantham describes them, fall within the very small number of instances where certain conditions combine to produce scenarios that algorithms fail to process correctly. As each scenario is programmed into future software updates, “this kind of iterative modification would be the right way of fixing those 0.01 percent of errors”, Varakantham says.

Symbolic learning

Self-driving cars can be programmed using a branch of artificial intelligence called symbolic AI or symbolic learning, which also encompasses robotics and involves image processing and computer vision. While improvements to self-driving cars might bolster perceptions of safety, advances in robotics often inspire fears of real-life incarnations of Hollywood movies such as Terminator and I, Robot.

On the website of Boston Dynamics, an MIT spinoff that specialises in robotics, there is a section featuring a humanoid robot called Atlas. It is able to run, jump, open doors and even do backflips. In a video demonstration, it is able to adapt and overcome a scientist’s repeated attempts to foil its goal of lifting a 10-pound box.

“Stereo vision,” says the website, “range sensing and other sensors give Atlas the ability to manipulate objects in its environment and to travel on rough terrain”. And perhaps rather ominously: “Atlas keeps its balance when jostled or pushed and can get up if it tips over”.

“It's still very goal-oriented, it's specialised,” Varakantham tells Perspectives@SMU, pointing out that robots are unable to do general tasks for which they were not specifically programmed, and are therefore still a very long way from becoming the sentient machines depicted by Hollywood. But noting that Boston Dynamics has done work with the U.S. military, Varakantham stresses the need to regulate developments on that front.

“We do not know what the governments are doing with these technologies, but there should definitely be a regulatory authority. But who regulates the government?” asks Varakantham, who serves as course coordinator for the School of Information Systems’ Artificial Intelligence Track. “Definitely these robots shouldn’t be given access to any general decisions that will directly impact humans.

“As long as they are in a lab somewhere, they are bounded by that. But if they are doing anything that directly impacts humans, it would need to be regulated.”

Free services…paid with data?

While self-aware robots will not be a reality anytime soon, AI in the form of social media feeds and customised audio playlists is commonplace. These applications run on algorithms controlled by technology giants such as Google, Apple, Facebook and Amazon, which generate massive revenue from users’ data.

Should the people whose data these companies depend on to make money be paid for its use? Can it be done? Should it be done?

“I believe it should be done if the companies are making money off these data points that customers are providing,” Varakantham opines. “In some sense, in places where the data is more actively being contributed, it already happens, like on YouTube.

“But on things like Facebook where you just post your data once…what amount each person contributes to [the bottom line is unclear]. I think it would make sense for the companies to [pay these users] so that they also have customers who are loyal to the website. I think it will happen as and when the companies figure out the right way of connecting the data to the actual money coming in.”

When asked if legislation should be introduced to make tech companies share revenues generated from users’ data, Varakantham points out that these companies could turn the tables and say users are paying for an otherwise free service with their data. The key to the power dynamic between service providers and users, he says, is data privacy.

“There is this notion of privacy controls they have introduced, and if a lot of people are using them, to great deterrence for Facebook, then I think they will start thinking about these money incentives.

“I think at that point maybe there will also be proper legislation on how much privacy control each company should provide, because if they provide too little then the customers have no say in it. But if there are enough controls provided for the customers generating the data… and there are enough people who are controlling their data properly, then I think the incentives will be more forthcoming.”

 
