Newsfeeds from around the industry
Google Research Blog
The latest news on Google Research.

  • Sergey and Larry awarded the Seoul Test-of-Time Award from WWW 2015
    Posted by Andrei Broder, Google Distinguished Scientist

    Today, at the 24th International World Wide Web Conference (WWW) in Florence, Italy, our company founders, Sergey Brin and Larry Page, received the inaugural Seoul Test-of-Time Award for their 1998 paper “The Anatomy of a Large-Scale Hypertextual Web Search Engine”, which introduced Google to the world at the 7th WWW conference in Brisbane, Australia. I had the pleasure and honor to accept the award on behalf of Larry and Sergey from Professor Chin-Wan Chung, who led the committee that created the award.
    Except for the fact that I was myself in Brisbane, it is hard to believe that Google began just as a two-student research project at Stanford University 17 years ago with the goal to “produce much more satisfying search results than existing systems.” Their paper presented two breakthrough concepts: first, using a distributed system built on inexpensive commodity hardware to deal with the size of the index, and second, using the hyperlink structure of the Web as a powerful new relevance signal. By now these ideas are common wisdom, but their paper continues to be very influential: it has over 13,000 citations so far and more are added every day.

    Since those beginnings Google has continued to grow, with tools that enable small business owners to reach customers, help long-lost friends reunite, and empower users to discover answers. We keep pursuing new ideas and products, generating discoveries that both affect the world and advance the state of the art in Computer Science and related disciplines. From products like Gmail, Google Maps and Google Earth Engine to advances in Machine Intelligence, Computer Vision, and Natural Language Understanding, it is our continuing goal to create useful tools and services that benefit our users.

    Larry and Sergey sent a video message to the conference expressing their thanks and their encouragement for future research, in which Sergey said “There is still a ton of work left to do in Search, and on the Web as a whole and I couldn’t think of a more exciting time to be working in this space.” I certainly share this view, and was very gratified by the number of young computer scientists from all over the world who came by the Google booth at the conference to share their thoughts about the future of search, and to explore the possibility of joining our efforts.


  • Tone: An experimental Chrome extension for instant sharing over audio
    Posted by Alex Kauffmann, Interaction Researcher, and Boris Smus, Software Engineer

    Sometimes in the course of exploring new ideas, we'll stumble upon a technology application that gets us excited. Tone is a perfect example: it's a Chrome extension that broadcasts the URL of the current tab to any machine within earshot that also has the extension installed. Tone is an experiment that we’ve enjoyed and found useful, and we think you may, too.

    As digital devices have multiplied, so has the complexity of coordinating them and moving stuff between them. Tone grew out of the idea that while digital communication methods like email and chat have made it infinitely easier, cheaper, and faster to share things with people across the globe, they've actually made it more complicated to share things with the people standing right next to you. Tone aims to make sharing digital things with nearby people as easy as talking to them.


    The first version was built in an afternoon for fun (which resulted in numerous rickrolls), but we increasingly found ourselves using it to share documents with everyone in a meeting quickly, to exchange design files back and forth while collaborating on UI design, and to contribute relevant links without interrupting conversations.

    Tone provides an easy-to-understand broadcast mechanism that behaves like the human voice—it doesn't pass through walls like radio or require pairing or addressing. The initial prototype used an efficient audio transmission scheme that sounded terrible, so we played it beyond the range of human hearing. However, because many laptop microphones and nearly all video conferencing systems are optimized for voice, reliability improved considerably when we also included a minimal DTMF-based audible codec. The combination is reliable for short distances in the majority of audio environments even at low volumes, and it even works over Hangouts.
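
    Tone’s actual codec is not published, but as a rough illustration of how a DTMF-style scheme maps symbols to pairs of audible tones, here is a minimal Python sketch; the symbol set, tone duration, and sample rate are illustrative assumptions, not Tone’s real parameters:

        import numpy as np

        # Standard DTMF frequency pairs: each symbol is the sum of one low-group
        # and one high-group sine wave.
        LOW = [697, 770, 852, 941]
        HIGH = [1209, 1336, 1477, 1633]
        KEYPAD = ["123A", "456B", "789C", "*0#D"]

        def dtmf_tone(symbol, duration=0.1, rate=44100):
            """Return audio samples for a single DTMF symbol."""
            for r, row in enumerate(KEYPAD):
                if symbol in row:
                    f_low, f_high = LOW[r], HIGH[row.index(symbol)]
                    break
            else:
                raise ValueError("not a DTMF symbol: %r" % symbol)
            t = np.arange(int(duration * rate)) / rate
            return 0.5 * (np.sin(2 * np.pi * f_low * t) + np.sin(2 * np.pi * f_high * t))

        def encode(message, gap=0.05, rate=44100):
            """Concatenate symbol tones separated by short silences."""
            silence = np.zeros(int(gap * rate))
            return np.concatenate([np.concatenate([dtmf_tone(s), silence]) for s in message])

        samples = encode("0123456789")  # play back at 44.1 kHz through any audio output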

    Because it's audio based, Tone behaves like speech in interesting ways. The orientation of laptops relative to each other, the acoustic characteristics of the space, the particular speaker volume and mic sensitivity, and even where you're standing will all affect Tone's reliability. Not every nearby machine will always receive every broadcast, just like not everyone will always hear every word someone says. But resending is painless and debugging generally just requires raising the volume. Many groups at Google have found the tradeoff between ease and reliability worthwhile—it is our hope that small teams, students in classrooms, and families with multiple computers will too.

    To get started, first install the Tone extension for Chrome. Then simply open a tab with the URL you want to share, make sure your volume is on, and press the Tone button. Your machine will then emit a short sequence of beeps. Nearby machines receive a clickable notification that will open the same tab. Getting everyone on the same page has never been so easy!


  • Paper to Digital in 200+ languages
    Posted by Dmitriy Genzel and Ashok Popat, Research Scientists and Dhyanesh Narayanan, Product Manager

    Many of the world’s important sources of information - books, newspapers, magazines, pamphlets, and historical documents - are not digital. Unlike digital documents, these paper-based sources of information are difficult to search through or edit, or worse, completely inaccessible to some people. Part of the solution is scanning, getting a digital image of the page, but raw image pixels aren’t yet recognized as textual content from the computer’s point of view.

    Optical Character Recognition (OCR) technology aims to turn pictures of text into computer text that can be indexed, searched, and edited. For some time, Google Drive has provided OCR capabilities. Recently, we expanded this state-of-the-art technology to support all of the world’s major languages - that’s over 200 languages in more than 25 writing systems. This technology is available to users in two easy steps:

    1. Upload a scanned document in its current form (say, as an image or PDF). The example below shows a scanned document in Hindi uploaded to a user’s Drive account as a PNG.
    2. Right-click on the document in the Drive interface, and select ‘Open with’ -> ‘Google Docs’.
    This opens a Google document with the original image followed by the extracted text.
    You don’t even need to specify which language the document is in; the system will determine that automatically. Or, you can use the Google Drive API for more explicit control over the language detection in documents. For example, here is an invocation of the Drive API in Python:
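
    (A minimal sketch, assuming the Drive v2 API with the google-api-python-client library; the authorized_http object, the file name scanned.png, and the Hindi language hint 'hi' are illustrative placeholders.)

        from apiclient.discovery import build
        from apiclient.http import MediaFileUpload

        # `authorized_http` is assumed to be an httplib2.Http instance that has
        # already been authorized via OAuth 2.0.
        service = build('drive', 'v2', http=authorized_http)

        media = MediaFileUpload('scanned.png', mimetype='image/png')
        uploaded = service.files().insert(
            body={'title': 'Scanned Hindi document'},
            media_body=media,
            ocr=True,           # ask Drive to run OCR on the uploaded image
            ocrLanguage='hi',   # optional language hint; omit to auto-detect
        ).execute()

        print(uploaded['id'])
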
    The OCR capability in Drive is also available in the Drive App for Android.

    To make this possible, engineering teams across Google pursued an approach to OCR focused on broad language coverage, with a goal of designing an architecture that could potentially work with all existing languages and writing systems. We do this in part by using Hidden Markov Models (HMMs) to make sense of the input as a whole sequence, rather than first trying to break it apart into pieces. This is similar to how modern speech recognition systems recognize audio input.
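
    The production models are not described in detail here, but the core idea of decoding a whole observation sequence at once with an HMM can be sketched with a toy Viterbi decoder; the states, observations, and probabilities below are invented for illustration:

        import numpy as np

        # Toy HMM: hidden "character" states emitting quantized image-column features.
        states = ['c', 'l', 'd']                  # candidate characters
        start = np.log([0.5, 0.3, 0.2])           # P(first state)
        trans = np.log([[0.6, 0.3, 0.1],          # P(next state | current state)
                        [0.2, 0.6, 0.2],
                        [0.3, 0.3, 0.4]])
        emit = np.log([[0.7, 0.2, 0.1],           # P(observation | state)
                       [0.1, 0.7, 0.2],
                       [0.2, 0.2, 0.6]])

        def viterbi(observations):
            """Return the most likely state sequence for the whole sequence at once."""
            T, N = len(observations), len(states)
            score = np.full((T, N), -np.inf)
            back = np.zeros((T, N), dtype=int)
            score[0] = start + emit[:, observations[0]]
            for t in range(1, T):
                for j in range(N):
                    cand = score[t - 1] + trans[:, j]
                    back[t, j] = np.argmax(cand)
                    score[t, j] = cand[back[t, j]] + emit[j, observations[t]]
            path = [int(np.argmax(score[-1]))]
            for t in range(T - 1, 0, -1):
                path.append(int(back[t, path[-1]]))
            return [states[i] for i in reversed(path)]

        print(viterbi([0, 1, 1, 2]))  # decodes the sequence jointly, not frame by frame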

    OCR and speech recognition share some challenges - like dealing with background “noise,” different languages, and low-quality inputs. But some challenges are specific to OCR: the variety of typefaces, the different types of scanners and cameras, and the need to work on older material that may contain archaic orthographic and linguistic elements. In addition to utilizing HMMs, we leveraged many of the same technologies used in the Google Handwriting Input app to allow automatic learning of features and to give preference to more likely output, as well as minimum-error-rate training to allow effective combination of multiple sources of information, and modern methods in machine learning to minimize manual design and maximize use of data. We also take advantage of advances in internationalization and typesetting, by using synthetic data in our training.

    Currently, the OCR works best on cleanly scanned, high-resolution documents in the most commonly used typefaces. We are working to improve performance on poor quality scans and challenging text layouts. Give it a try and let us know how it works for you.


  • Google Handwriting Input in 82 languages on your Android mobile device
    Posted by Thomas Deselaers, Daniel Keysers, Henry Rowley, Li-Lun Wang, Victor Cărbune, Ashok Popat, Dhyanesh Narayanan, Handwriting Team, Google Research

    Entering text on mobile devices is still considered inconvenient by many; touchscreen keyboards, although much improved over the years, require a lot of attention to hit the right buttons. Voice input is an option, but there are situations where it is not feasible, such as in a noisy environment or during a meeting. Handwriting can be a natural and intuitive input method for text entry, complementing typing and speech input. However, until recently there have been many languages where enabling this functionality presented significant challenges.

    Today we launched Google Handwriting Input, which lets users handwrite text on their Android mobile device as an additional input method for any Android app. Google Handwriting Input supports 82 languages in 20 distinct scripts, and works with both printed and cursive writing input with or without a stylus. Beyond text input, it also provides a fun way to enter hundreds of emojis by drawing them (simply press and hold the ‘enter’ button to switch modes). Google Handwriting Input works with or without an Internet connection.
    By building on large-scale language modeling, robust multi-language OCR, and incorporating large-scale neural-networks and approximate nearest neighbor search for character classification, Google Handwriting Input supports languages that can be challenging to type on a virtual keyboard. For example, keyboards for ideographic languages (such as Chinese) are often based on a particular dialect of the language, but if a user does not know that dialect, they may be hard to use. Additionally, keyboards for complex script languages (like many South Asian languages) are less standardized and may be unfamiliar. Even for languages where virtual keyboards are more widely used (like English or Spanish), some users find that handwriting is more intuitive, faster, and generally more comfortable.
    Writing 'Hello' in Chinese, German, and Tamil.
    Google Handwriting Input is the result of many years of research at Google. Initially, cloud based handwriting recognition supported the Translate Apps on Android and iOS, Mobile Search, and Google Input Tools (in Chrome, ChromeOS, Gmail and Docs, translate.google.com, and the Docs symbol picker). However, other products required recognizers to run directly on an Android device without an Internet connection. So we worked to make recognition models smaller and faster for use in Android handwriting input methods for Simplified and Traditional Chinese, Cantonese, and Hindi, as well as multi-language support in Gesture Search. Google Handwriting Input combines these efforts, allowing recognition both on-device and in the cloud (by tapping on the cloud icon) in any Android app.

    You can install Google Handwriting Input from the Play Store here. More information and FAQs can be found here.


  • Beyond Short Snippets: Deep Networks for Video Classification
    Posted by Software Engineers George Toderici and Sudheendra Vijayanarasimhan

    Convolutional Neural Networks (CNNs) have recently shown rapid progress in advancing the state of the art of detecting and classifying objects in static images, automatically learning complex features in pictures without the need for manually annotated features. But what if one wanted not only to identify objects in static images, but also analyze what a video is about? After all, a video isn’t much more than a string of static images linked together in time.

    As it turns out, video analysis provides even more information to the object detection and recognition task performed by CNNs by adding a temporal component through which motion and other information can also be used to improve classification. However, analyzing entire videos is challenging from a modeling perspective because one must model variable length videos with a fixed number of parameters. Not to mention that modeling variable length videos is computationally very intensive.

    In Beyond Short Snippets: Deep Networks for Video Classification, to be presented at the 2015 Computer Vision and Pattern Recognition conference (CVPR 2015), we¹ evaluated two approaches - feature pooling networks and recurrent neural networks (RNNs) - capable of modeling variable length videos with a fixed number of parameters while maintaining a low computational footprint. In doing so, we were able to not only show that learning a high level global description of the video’s temporal evolution is very important for accurate video classification, but that our best networks exhibited significant performance improvements over previously published results on the Sports 1 million dataset (Sports-1M).

    In previous work, we employed 3D-convolutions (meaning convolutions over time and space) over short video clips - typically just a few seconds - to learn motion features from raw frames implicitly and then aggregate predictions at the video level. For purposes of video classification, the low-level motion features only marginally outperformed models in which no motion was modeled.

    To understand why, consider the following two images which are very similar visually but obtain drastically different scores from a CNN model trained on static images:
    Slight differences in object poses/context can change the predicted class/confidence of CNNs trained on static images.
    Since each individual video frame forms only a small part of the video’s story, static frames and short video snippets (2-3 secs) use incomplete information and could easily confuse subtle fine-grained distinctions between classes (e.g., Tae Kwon Do vs. Systema) or use portions of the video irrelevant to the action of interest.

    To get around this frame-by-frame confusion, we used feature pooling networks that independently process each frame and then pool/aggregate the frame-level features over the entire video at various stages. Another approach we took was to utilize an RNN (derived from Long Short Term Memory units) instead of feature pooling, allowing the network itself to decide which parts of the video are important for classification. By sharing parameters through time, both feature pooling and RNN architectures are able to maintain a constant number of parameters while capturing a global description of the video’s temporal evolution.
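
    As a rough sketch of the difference between the two aggregation strategies (the frame features below are random stand-ins for per-frame CNN outputs, and a plain recurrent unit stands in for the LSTM cells used in the paper):

        import numpy as np

        rng = np.random.default_rng(0)
        T, D, H = 300, 512, 128                    # frames, feature size, hidden size (illustrative)
        frame_features = rng.normal(size=(T, D))   # stand-in for per-frame CNN features

        # Feature pooling: aggregate over time with an order-independent operation.
        # The downstream parameter count is independent of the number of frames T.
        pooled = frame_features.max(axis=0)        # max-pooling over the time axis

        # Recurrent aggregation: one recurrent cell applied at every time step.
        # Weights are shared across time, so the parameter count is also independent of T.
        W_x = rng.normal(size=(D, H)) * 0.01
        W_h = rng.normal(size=(H, H)) * 0.01
        h = np.zeros(H)
        for x in frame_features:                   # the paper uses LSTM cells; this is a plain RNN
            h = np.tanh(x @ W_x + h @ W_h)

        print(pooled.shape, h.shape)               # both yield a fixed-size video descriptor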

    In order to feed the two aggregation approaches, we compute a “pixel-based” CNN model over the raw pixels in the frames of a video. We processed videos for the “pixel-based” CNNs at one frame per second to reduce computational complexity. Of course, at this frame rate implicit motion information is lost.

    To compensate, we incorporate explicit motion information in the form of optical flow - the apparent motion of objects across a camera's viewfinder due to the motion of the objects or the motion of the camera. We compute optical flow images over adjacent frames to learn an additional “optical flow” CNN model.
    Left: Image used for the pixel-based CNN; Right: Dense optical flow image used for optical flow CNN
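
    The paper's exact flow pipeline is not spelled out here, but dense optical flow between adjacent frames, like the image above, can be computed with, for example, OpenCV's Farnebäck implementation; the video path below is a placeholder:

        import cv2

        cap = cv2.VideoCapture('video.mp4')        # placeholder path
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

        flows = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Returns an H x W x 2 array of per-pixel (dx, dy) displacements.
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            flows.append(flow)
            prev_gray = gray
        cap.release()
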
    The pixel-based and optical flow based CNN model outputs are provided as inputs to both the RNN and pooling approaches described earlier. These two approaches then separately aggregate the frame-level predictions from each CNN model input, and average the results. This allows our video-level prediction to take advantage of both image information and motion information to accurately label videos of similar activities even when the visual content of those videos varies greatly.
    Badminton (top 25 videos according to the max-pooling model). Our methods accurately label all 25 videos as badminton despite the variety of scenes in the various videos because they use the entire video’s context for prediction.
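
    A minimal sketch of the late-fusion step described above, assuming each stream has already produced per-class probabilities for a video (the class count and scores are invented):

        import numpy as np

        # Invented per-class probabilities from the two streams for one video.
        pixel_stream = np.array([0.70, 0.20, 0.10])   # raw-pixel CNN + temporal aggregation
        flow_stream = np.array([0.50, 0.40, 0.10])    # optical-flow CNN + temporal aggregation

        # Late fusion: average the two streams' video-level predictions.
        video_probs = (pixel_stream + flow_stream) / 2.0
        predicted_class = int(np.argmax(video_probs))
        print(video_probs, predicted_class)
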
    We conclude by observing that although very different in concept, the max-pooling and the recurrent neural network methods perform similarly when using both images and optical flow. Currently, these two architectures are the top performers on the Sports-1M dataset. The main difference between the two was that the RNN approach was more robust when using optical flow alone on this dataset. Check out a short video showing some example outputs from the deep convolutional networks presented in our paper.


    ¹ Research carried out in collaboration with University of Maryland, College Park PhD student Joe Yue-Hei Ng and University of Texas at Austin PhD student Matthew Hausknecht, as part of a Google Software Engineering Internship.


