Machine learning is powered by data -- lots of data. One company that has it in abundance is Google, which uses data from email and web browsing to power its portfolio of products. At a Wednesday hardware event, it announced two new machine-learning-powered products intended to help make the world a little smaller and more connected.
At what has now become an annual hardware-dedicated event, Google unveiled a new laptop/tablet hybrid, new phones, and a camera which uses AI to take photos. However, the real star of the show was the Pixel Buds, a set of earphones, and its headline feature: Google Assistant translation. With the buds connected, a user taps the right earbud to enter listening mode and then speaks to the Assistant. A user could, for instance, say "help me speak Japanese," prompting the technology to translate the user's speech into Japanese and play the translation through a Pixel phone. The user would then hear the Japanese speaker's replies translated into English through the earbuds.
This system, of course, uses Google Translate to provide the translation, supported by Google's powerful machine learning models. Other technology companies have previously attempted something similar, although not always successfully. Microsoft's efforts with Skype proved disappointing, for example.
Admittedly, the earbuds are not needed for the near-instant translation Google is boasting about. Any Android phone with the Translate app can already do it -- simply tap the microphone button and speak into the device. The process uses the same backend technology as the earbuds.
While the Pixel Buds worked well in the presentation, it remains to be seen exactly how they will function in the real world, with background noise disturbing the microphones, spotty WiFi or cellular connections, and other such interference. The more routine Google Assistant functions -- making a call, sending a text, setting an alarm, playing music -- seemed to work well during the demonstration, too. The buds cannot charge on their own: instead, they charge in the specially provided case, which can keep them powered for up to 24 hours at a time.
The other machine-learning-powered device Google announced is Clips. As the name implies, it's a camera, but one that learns how and when to take a photo, recognising key people (like close friends or family members) and snapping shots of them. Reaction has been mixed, with some calling it creepy and invasive and others believing it means photos will be more candid and less staged. It is important to note that the machine learning happens entirely on the device -- i.e. "at the edge" -- rather than in the cloud, meaning no photos are shared unless the user explicitly requests it.
So, what is the long-term goal of all this? Google is a business, so it aims to make money. It is trying to stay ahead of rivals including Apple, Facebook, Amazon and Samsung in machine learning because it clearly appreciates how important the technology will become to the future global economy. In a world that will increasingly rely on artificial intelligence to assist with jobs, perhaps even taking some over completely, Google's aim is to be in the lead -- just as Microsoft led the personal computing revolution in the 1980s and 90s.
Historically, however, Google has made its money by selling advertising space in search results, with those ads becoming far more personalized in recent years as Google has mined its data. But Google is under pressure to diversify and find new sources of revenue and growth, and it is adopting Apple's approach of developing its own hardware and software in parallel.
In the short term, the diversification strategy puts Google in a fierce battle over voice assistant technology, where it is taking on Apple's Siri, Amazon's Alexa and Microsoft's Cortana. Google knows that machine intelligence, deep learning and neural networks are key to these voice assistants, and it has been investing heavily in voice assistant hardware and software. It announced two new Google Home products on Wednesday: the Mini, a £49/$49 miniaturized speaker that complements the main Home device, and the Max, a bigger, $399 high-end smart speaker not unlike a Sonos or the iPod Hi-Fi.
While we've known about machine learning for quite a while, only now are we beginning to see it make its way into hardware in ways that genuinely benefit users. This is only the start, however -- expect many more AI-powered products in the next few years as the battle of integrated hardware and software begins.