Google does a great deal of research into natural language processing and synthesis, but not every project has to be a new Assistant feature or voice improvement. The company has a little fun now and then, when the master AI permits it, and today it has posted a few web experiments that let you engage with its word-association systems in a playful way.

First is an interesting way of searching through Google Books, that fabulous database so rarely mentioned these days. Instead of just searching for text or a title verbatim, you can ask questions, like "Why was Napoleon exiled?" or "What is the nature of consciousness?" It returns passages from books that, based on their language alone, are closely associated with your question. And while the results are hit and miss, they are nicely flexible: sentences answering my questions surfaced even when they didn't sit directly next to the key words or address them head-on.

I found, however, that it's not a very intuitive way to interact with a body of knowledge, at least for me. When I ask a question, I generally want to receive an answer, not a competing variety of quotes that may or may not bear on my inquiry. So while I can't really picture using this regularly, it's an interesting way to demonstrate the flexibility of the semantic engine at work here. And it may very well expose you to some new authors, though the 100,000 books included in the database are something of a mixed bag.

The second project Google highlights is a game it calls Semantris, though I must say it's rather too simple to deserve the -tris moniker. You're given a list of words, one of which is highlighted. You type the word you most associate with it, and the list reorders with, as Google's AI understands it, the closest matches to your word at the bottom. If you move the target word to the bottom, it blows up a few words and adds some more.
It's a nice little time-waster, but I couldn't help feeling I was basically just a guinea pig providing testing and training for Google's word-association agent. It was also pretty easy (I didn't feel much sense of achievement for associating water with boat), but maybe it gets harder as it goes on. I've asked Google whether our responses are feeding into the AI's training data. For the coders and machine learning enthusiasts among you, Google has also provided some pre-trained TensorFlow modules, and of course documented its work in a couple of papers linked in the blog post.
Google today announced a pair of new artificial intelligence experiments from its research division that let web users dabble in semantics and natural language processing. For Google, a company whose primary product is a search engine that traffics mostly in text, these advances in AI are integral to its business and to its goal of making software that can understand and parse elements of human language.

A new website will house Google's interactive AI language tools, and the company is calling the collection Semantic Experiences. The primary sub-field of AI it's showcasing is known as word vectors, a type of natural language understanding that maps semantically similar phrases to nearby points based on equivalence, similarity or relatedness of ideas and language. "It's a way to enable algorithms to learn about the relationships between words, based on examples of actual language usage," say Ray Kurzweil, notable futurist and director of engineering at Google Research, and product manager Rachel Bernstein in a blog post. Google has published its work on the topic in a paper here, and it's also made a pre-trained module available on its TensorFlow platform for other researchers to experiment with.

The first of the two publicly available experiments released today is called Talk to Books, and it quite literally lets you converse with a machine learning-trained algorithm that surfaces answers to questions with relevant passages from human-written text. As described by Kurzweil and Bernstein, Talk to Books lets you make a statement or ask a question, and the tool finds sentences in books that respond, with no dependence on keyword matching. The duo add, "In a sense you are talking to the books, getting responses which can help you determine if you're interested in reading them or not." It is a legitimately neat and super polished product, from my experience using the web interface. Ask it a question like "Why is the sky blue?"
and you'll get a number of different answers displayed in clear text, sourced from books on the subject, like, "The Rayleigh scattering of light by molecules in the atmosphere gets stronger as the wavelength decreases." But, as opposed to using standard Google Search and having to click a link and parse an article or webpage, the Talk to Books algorithm does that work for you.

"The models driving this experience were trained on a billion conversation-like pairs of sentences, learning to identify what a good response might look like," Kurzweil and Bernstein explain. Once you ask your question (or make a statement), the tool searches all the sentences in over 100,000 books to find the ones that respond to your input based on semantic meaning at the sentence level; there are no predefined rules bounding the relationship between what you put in and the results you get.

Of course, as you might suspect, there are some limitations here. The tool is better at answering raw factual questions and doesn't perform quite as well on complex geopolitical questions or topics of modern cultural and historical importance. But as a simple web tool, and one Google says helps improve products like Gmail Smart Reply, Talk to Books is a fun way to explore the web in a semantically natural way. It also gives us a glimpse of what future interfaces might look like when AI is actually sophisticated enough to handle almost any query we throw at it.

The second of the two experiments released today is far more interactive. It's a game called Semantris, and it basically tests your word-association abilities as the same software that powers Talk to Books ranks and scores the words on-screen based on how well they correspond to the answers you input. For instance, if you're given the word "bed" at the top of a collection of 10 words, you might think to type "sleep" as a response.
Semantris will then rank the 10 words and give you points based on how strong it judges the semantic relationship between "bed" and "sleep" to be, compared with the relationship between "bed" and every other word in the list.

It should be noted that a lot of these Google experiments are also ways for the company to gather user data, which can help inform its technology by giving it ample human-grade information on word relationships and so on. That appears to be the case with Semantris, but regardless, the game is a fun way to test your own abilities and to see how well the software judges the associations between words. You can also play a Tetris-like version of the game that lets you input words to clear blocks from the screen, based on your own assumptions about what associations the software may draw between the words written on the colored blocks and the answer you type in at the bottom.

Like many of Google's past AI experiments, such as the recent Teachable Machine tool for letting users train their own basic algorithm and earlier ones focused on doodling and music-making, these web games and tools are valuable ways to interact with and learn more about artificial intelligence as it's applied in the real world. AI, along with terms like machine learning and neural networks, is often an abstract concept thrown around without much context, or in a way meant to obfuscate or gloss over what's really going on under the hood of the world's most powerful software applications and platforms. But with experiments like these, Google is able to demystify the technology in a way that's beneficial for everyone.
The tech giant's other AI experiment is all about word association.

Google Research is giving us a (fun) glimpse of how far natural language processing in artificial intelligence has come. Mountain View's research division has rolled out a couple of what it calls Semantic Experiences: websites with interesting activities that demonstrate AI's ability to understand how we speak. One of the two experiences is called "Talk to Books," because, well, you can use the website to talk to books to a certain extent. You simply type in a statement or a question, and it will find whole sentences in books related to what you typed. In the announcement post, notable futurist and Google Research director of engineering Ray Kurzweil and product manager Rachel Bernstein said the system doesn't depend on keyword matching. They trained its AI by feeding it a "billion conversation-like pairs of sentences" so it could learn to identify what a good response looks like.

Talk to Books can help you find titles simple keyword searches might not surface. For instance, when I searched "He says he's the greatest detective who ever lived," one of the results highlighted a sentence that doesn't contain any of my query's keywords, because the AI associated the word "detective" with "investigator."

Google Research's other new website, called Semantris, offers word-association games, including a Tetris-like break-the-blocks experience. The two games can recognize both opposite and neighboring concepts, and even sounds like "vroom" for motorcycle or "meow" for cat. Developments in word vectors, an AI-training model that enables algorithms to learn relationships between words based on actual language usage, have driven the advancement of natural language processing over the past few years. According to Kurzweil and Bernstein, these websites show how AI's "new capabilities can drive applications that weren't possible before."
They said other potential applications include "classification, semantic similarity, semantic clustering, whitelist applications (selecting the right response from many alternatives) and semantic search (of which Talk to Books is an example)." Google has released a module on TensorFlow that other researchers and developers can use, so the tech giant's work could lead to more AI-powered applications that understand how we wield words better than their older counterparts do.