Google is launching a new team to investigate ethics related to artificial intelligence. HypnoArt/Pixabay

DeepMind, the Google-owned artificial intelligence company, announced the formation of a new research group that will investigate ethical questions surrounding the development of AI and its impact on society.

DeepMind Ethics and Society, also known as DMES, will begin publishing research papers in early 2018 on a variety of topics relating to the development of artificial intelligence and the technology's potential effects.

Google has staffed the group with eight full-time researchers to start, along with six unpaid external fellows who will partner with academic groups at other institutions conducting similar research. DMES is expected to grow to around 25 full-time staffers within the next year.

Initial partnerships will include the AI Now Institute at NYU and the Leverhulme Centre for the Future of Intelligence. One of the first fellows working with the research team is Nick Bostrom, a philosopher at Oxford and founding director of the Future of Humanity Institute.

Bostrom is the author of the best-selling book “Superintelligence: Paths, Dangers, Strategies,” in which he argued that artificial intelligence could one day surpass the human brain in general intelligence and replace humans as the dominant life form.

In a blog post announcing the formation of the research group, DMES co-leaders Verity Harding and Sean Legassick wrote that they intend to “explore and understand the real-world impacts of AI.”

The duo highlighted a number of topics they plan to tackle, including an investigation into racial disparities found in software and algorithms relied upon by law enforcement, a problem that can deepen the institutionalized biases that already create challenges in the criminal justice system.

The group also intends to explore ethical concerns surrounding driverless cars, such as how the automated systems controlling a vehicle should handle crashes. If a driverless car is involved in a crash, it may face a choice that harms a human life, and it is not clear how the system would make such a decision or what information would guide it.

“If AI technologies are to serve society, they must be shaped by society’s priorities and concerns,” Harding and Legassick wrote. “This isn’t a quest for closed solutions but rather an attempt to scrutinize and help design collective responses to the future impacts of AI technologies.”

DeepMind has already had its own run-in with ethical quandaries as it has worked to implement AI in different environments. The company came under fire last year after it processed the medical data of more than 1.6 million patients from the United Kingdom’s National Health Service (NHS) without informing those patients that their data was being used.

The Google subsidiary was found to have violated data privacy laws but avoided a fine for its actions. Instead, it agreed to implement more stringent checks to ensure the data it uses does not violate anyone’s privacy.