It's a fact of life that humans are biased. We make word associations with objects, colours, people, animals, places and feelings that have been ingrained into our nature over millennia. It turns out artificial intelligence does this too, according to research from Princeton University and the University of Bath.
Machine learning systems are trained on large datasets: thousands of labelled examples 'teach' the AI to do things. For example, if you wanted machine learning to identify pictures of cats, dogs, horses, squirrels and elephants, a computer scientist would feed the network a large amount of data comprising many pictures of each of these animals. The computer would then 'learn' the differences between the animals, completely independently. Of course, this requires data, which is why both 'big data' (essentially very large amounts of data) and data scientists are so important to today's industry. (See Poll Result: Y'All Need Data Scientists.)
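To make the idea concrete, here is a deliberately tiny sketch of "learning from labelled examples" -- a nearest-centroid classifier. The feature vectors below are invented stand-ins for real image data (a production system would learn from thousands of photographs, not three hand-picked numbers per animal), but the principle is the same: the program derives its own summary of each category from the examples it is given.

```python
# Toy sketch of supervised learning: a nearest-centroid classifier.
# The "images" are invented 2-D feature vectors; real systems learn
# from thousands of labelled photographs.
from statistics import mean

# Hypothetical labelled training data: label -> list of feature vectors.
training_data = {
    "cat":      [(0.30, 0.80), (0.35, 0.75), (0.25, 0.85)],
    "dog":      [(0.50, 0.50), (0.55, 0.45), (0.45, 0.55)],
    "elephant": [(0.95, 0.10), (0.90, 0.15), (0.92, 0.12)],
}

def train(data):
    """'Learning' here is just averaging each class's examples into a centroid."""
    return {label: tuple(mean(dim) for dim in zip(*vectors))
            for label, vectors in data.items()}

def predict(centroids, features):
    """Classify a new example by its nearest centroid (squared Euclidean distance)."""
    return min(centroids,
               key=lambda label: sum((a - b) ** 2
                                     for a, b in zip(centroids[label], features)))

centroids = train(training_data)
print(predict(centroids, (0.93, 0.11)))  # -> elephant
```

The point of the sketch is that nobody tells the program what an "elephant" is; the category boundaries emerge entirely from the data it was fed -- which is exactly why biased data produces biased models.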
What the University of Bath and Princeton University researchers found was that as data was fed into a machine learning network -- a "purely statistical machine-learning model trained on a standard corpus of text from the World Wide Web" -- it formed biases and associations, much like humans do. This could be basic colour associations, such as red meaning danger and blue meaning cool and refreshing, or that flowers are pleasant and insects are not. It could also involve hugely controversial associations related to race and gender.
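A rough sense of how such associations are detected: in a word-embedding model, every word is a vector, and two words "associate" when their vectors point in similar directions. The researchers' tests compare those similarities across attribute sets. The sketch below uses invented 3-D vectors purely for illustration (real embeddings have hundreds of dimensions learned from web text), and the `association` score is a simplified, hypothetical version of that idea.

```python
# Toy sketch of measuring word association in embeddings via cosine
# similarity. The 3-D vectors are invented for illustration only.
from math import sqrt

vectors = {
    "flower":     (0.90, 0.80, 0.10),
    "insect":     (0.80, 0.10, 0.90),
    "pleasant":   (0.85, 0.90, 0.05),
    "unpleasant": (0.70, 0.05, 0.95),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def association(word, attr_a, attr_b):
    """Positive: the word leans toward attr_a; negative: toward attr_b."""
    return cosine(vectors[word], vectors[attr_a]) - cosine(vectors[word], vectors[attr_b])

print(association("flower", "pleasant", "unpleasant"))  # positive
print(association("insect", "pleasant", "unpleasant"))  # negative
```

Because the vectors themselves are distilled from human-written text, any association a score like this uncovers was already latent in the words people wrote.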
This obviously has implications for how we use artificial intelligence. We already know that computers can have pretty extreme racial bias when they're given particular data -- Microsoft's ill-fated Twitter bot showed that -- and the whole point of technology-based automation is that it shouldn't be susceptible to the same kind of bias that humans are. But if the computers are susceptible to bias through word association, that means it might not be possible to ever completely remove such bias: Because humans construct the base data, it's going to have implicit, low-lying bias even at the most basic levels.
In that sense, it's something we can't escape from -- because we are biased, the robots we build will be too. That is, until AI is smart enough to construct its own data, which is one of the markers that we've reached the singularity -- the point at which a machine can teach itself without the need for human interference or involvement.
The full report is available here, but keep in mind you need to be a member of the American Association for the Advancement of Science (AAAS) to access it.