AI impact on human rights to come under scrutiny

By Neena Bhandari

[SYDNEY] The Australian Human Rights Commission has embarked on a project looking at the social risks and benefits of artificial intelligence (AI) — a technology whose implications experts say are also beginning to emerge in the developing world.

Launched with a report published on 24 July, the project is looking at the impact of the technology on the right to life, privacy, security, safety and equality.

AI is already employed in designing driverless cars that reduce road traffic deaths and robots that can perform minimally invasive surgeries. The technology also finds use in robotic weapons deployed in conflict situations and plays a role in decisions that impact public health, livelihoods, social interaction and human rights.

Experts from around the world coming from academia, industry and civil society as well as government officials contributed to the paper, which was launched at the International Human Rights and Technology conference in Sydney.

Delegates at the conference expressed concern at the possible abuse of facial recognition technologies for surveillance. They also highlighted algorithmic bias, big-data targeting of democratic processes and the problem of personal data being hacked.

“Our project is looking at these questions in the context of Australia, but the same conversations can and should be happening in developing countries,” Australia’s Human Rights Commissioner Edward Santow tells SciDev.Net. “We are asking what needs to happen in law and practice in order to ensure we always have tech for social good.”

“In developing countries, the impact of AI on larger socio-economic issues is only beginning to unfold,” says Anita Gurumurthy, founder and executive director of IT for Change, a Bangalore-based NGO that works on digital technologies for human rights and social justice. “How automation is going to reorganise jobs is a serious concern for women and the marginalised who work largely in the informal sector.”

Gurumurthy says there is a need for transparency in the use of AI in social welfare.

“Prediction models based on biased training data sets — for example, household records that mainly show men as farmers — are bound to generate policy scenarios where women’s farm-based activity is not counted. We await a data governance law in India and, based on that, we need effective institutional protocols,” Gurumurthy tells SciDev.Net.

But AI can also be used to promote health rights. Jake Lucchi, head of public policy and government relations at Google Asia Pacific, cited work on video captioning, which has made audiovisual content more accessible to hearing-impaired people.

Lucchi also cited a “deep learning algorithm” that warns of diabetic retinopathy early, helping to avoid irreversible blindness, especially in patients living in countries with limited access to medical care.

Human rights treaties do not prescribe specific rules with respect to technology. But acknowledging that this is a growing area of importance, the UN Office of the High Commissioner for Human Rights has published a set of human rights principles to guide data collection.

In May this year, the second AI for Good Global Summit, organised by the International Telecommunication Union, which leads UN dialogue on information and communication technology, also focused on AI solutions towards achieving the Sustainable Development Goals, especially in mapping poverty and providing aid in disaster situations.

This piece was produced by SciDev.Net’s Asia & Pacific desk.


This article was originally published on SciDev.Net. Read the original article.