Following its run-in with UK regulators last year, Google-owned DeepMind has formed a new research unit, DeepMind Ethics and Society (DMES), to examine the ethical boundaries around artificial intelligence and to judge how far is too far.
DeepMind is one of the heavyweights of the AI community, creating highly advanced machine learning models and neural networks that have delivered successes such as the defeat of the world's top Go players last year by AlphaGo. AlphaGo has since been retired, with DeepMind looking to apply what it learned during its development to other products and research. (See AI's the Winner: AlphaGo Defeats Ke Jie and DeepMind Retires AlphaGo From Competitive Go.)
However, the company has also occasionally hit the headlines for the wrong reasons. In 2015 it agreed a data-sharing deal with three hospitals in the UK to process the data of 1.6 million patients for "new methods of clinical detection, diagnosis and prevention application," but the deal went awry when the UK's data protection regulator, the Information Commissioner's Office (ICO), said it had breached data protection laws. In response, DeepMind said it was "working hard" on data transparency both internally and externally. (See DeepMind NHS Deal Breached Data Protection Laws, Says ICO.)
DeepMind's AlphaGo playing world number 1 Lee Sedol in March 2016.
The new committee seems to be the fruit of its labors. In the blog post explaining the formation of DMES, DeepMind notes that, "As scientists developing AI technologies, we have a responsibility to conduct and support open research and investigation into the wider implications of our work."
That suggests the company believes it needs to take a stronger role in regulating how it applies its powerful algorithms in areas where it lacks expertise. This is particularly relevant given the NHS debacle, where DeepMind admitted it "underestimated the rules around patient data" and the "complexity of the NHS."
This new unit, then, is surely a good thing, showing that DeepMind wants to understand more about how its AI will impact wider society -- and not just the tech industry -- in both the long and the short term. As the company says in the blog post, the ethical considerations around AI are not new. It highlights Julia Angwin's groundbreaking investigation into racial bias in criminal justice algorithms, and Kate Crawford and Ryan Calo's in-depth paper on how AI impacts society on a broader scale.
To help with this work, the company has also appointed six respected Fellows who will consult for and advise DMES as needs arise. The Fellows include Nick Bostrom, the University of Oxford professor whose book on existential risk influenced both Elon Musk and Bill Gates.