Newsfeeds from around the industry
Google Research Blog
The latest news on Google Research.

  • Young people who are changing the world through science
    Posted by Andrea Cohan, Google Science Fair Program Manager

    (Cross-posted from the Google for Education Blog)

    Sometimes the biggest discoveries are made by the youngest scientists. They’re curious and unafraid to ask questions, and it’s this spirit of exploration that leads them to try, and then try again. Thousands of these inquisitive young minds from around the world submitted projects for this year’s Google Science Fair, and today we’re thrilled to announce the 20 Global Finalists whose bright ideas could change the world.

    From purifying water with corn cobs to transporting Ebola antibodies through silk, extracting water from air, and quickly transporting vaccines to areas in need, these students have all tried inventive, unconventional approaches to help solve challenges they see around them. And did we mention that they’re all 18 or younger?

    We’ll be highlighting each of the 20 impressive finalist projects over the next 20 days in the Spotlight on a Young Scientist series on the Google for Education blog, sharing more about these remarkable young people and what inspires them.

    Then on September 21st, these students will join us in Mountain View to present their projects to a panel of notable international scientists and scholars, where they will be eligible for a $50,000 scholarship and other incredible prizes from our partners at LEGO Education, National Geographic, Scientific American and Virgin Galactic.

    Congratulations to our finalists and everyone who submitted projects for this year’s Science Fair. Thank you for being curious and brave enough to try to change the world through science.


  • See through the clouds with Earth Engine and Sentinel-1 Data
    Posted by Luc Vincent, Engineering Director, Geo Imagery

    This year the Google Earth Engine team attended the European Geosciences Union General Assembly meeting in Vienna, Austria to engage with a number of European geoscientific partners. This was just the first of a series of European summits the team has attended over the past few months, including, most recently, the IEEE Geoscience and Remote Sensing Society meeting held last week in Milan, Italy.
    [Image: Noel Gorelick presenting Google Earth Engine at EGU 2015.]
    We are very excited to be collaborating with many European scientists from esteemed institutions such as the European Commission Joint Research Centre, Wageningen University, and University of Pavia. These researchers are utilizing the Earth Engine geospatial analysis platform to address issues of global importance in areas such as food security, deforestation detection, urban settlement detection, and freshwater availability.

    Thanks to the enlightened free and open data policy of the European Commission and European Space Agency, we are pleased to announce the availability of Copernicus Sentinel-1 data through Earth Engine for visualization and analysis. Sentinel-1, a radar imaging satellite with the ability to see through clouds, is the first of at least 6 Copernicus satellites going up in the next 6 years.
    [Image: Sentinel-1 data visualized using Earth Engine, showing Vienna (left) and Milan (right).]
    [Image: Wind farms seen off the eastern coast of England.]
    This radar data offers a powerful complement to the optical and thermal data from satellites like Landsat that are already available in the Earth Engine public data catalog. If you are a geoscientist interested in accessing and analyzing the newly available EC/ESA Sentinel-1 data, or anything else in our multi-petabyte data catalog, please sign up for Google Earth Engine.
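
    For a sense of what access looks like, here is a minimal sketch using the Earth Engine Python API. The collection ID and band name are assumptions drawn from the public data catalog, not guaranteed specifics of this release:

        import ee

        ee.Initialize()

        # Load Copernicus Sentinel-1 ground-range-detected scenes over Vienna.
        # 'COPERNICUS/S1_GRD' and the 'VV' band are assumed catalog names.
        s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
                .filterDate('2015-01-01', '2015-06-30')
                .filterBounds(ee.Geometry.Point(16.37, 48.21)))

        # Radar sees through clouds, so a simple mean composite is usable
        # without any cloud-masking step.
        composite = s1.select('VV').mean()
        print(composite.getInfo())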

    We look forward to further engagements with the European research community and are excited to see what the world will do with the data from the European Union's Copernicus program satellites.


  • ICSE 2015 and Software Engineering Research at Google
    Posted by Mohsen Vakilian, Software Engineer

    The large scale of our software engineering efforts at Google often pushes us to develop cutting-edge infrastructure. In May 2015, at the International Conference on Software Engineering (ICSE 2015), we shared some of our software engineering tools and practices and collaborated with the research community through a combination of publications, committee memberships, and workshops. Learn more about some of our research below.

    Google was a Gold supporter of ICSE 2015.

    Technical Research Papers:
    A Flexible and Non-intrusive Approach for Computing Complex Structural Coverage Metrics
    Michael W. Whalen, Suzette Person, Neha Rungta, Matt Staats, Daniela Grijincu

    Automated Decomposition of Build Targets
    Mohsen Vakilian, Raluca Sauciuc, David Morgenthaler, Vahab Mirrokni

    Tricorder: Building a Program Analysis Ecosystem
    Caitlin Sadowski, Jeffrey van Gogh, Ciera Jaspan, Emma Soederberg, Collin Winter

    Software Engineering in Practice (SEIP) Papers:
    Comparing Software Architecture Recovery Techniques Using Accurate Dependencies
    Thibaud Lutellier, Devin Chollak, Joshua Garcia, Lin Tan, Derek Rayside, Nenad Medvidovic, Robert Kroeger

    Technical Briefings:
    Software Engineering for Privacy in-the-Large
    Pauline Anthonysamy, Awais Rashid

    Workshop Organizers:
    2nd International Workshop on Requirements Engineering and Testing (RET 2015)
    Elizabeth Bjarnason, Mirko Morandini, Markus Borg, Michael Unterkalmsteiner, Michael Felderer, Matthew Staats

    Committee Members:
    Caitlin Sadowski - Program Committee Member and Distinguished Reviewer Award Winner
    James Andrews - Review Committee Member
    Ray Buse - Software Engineering in Practice (SEIP) Committee Member and Demonstrations Committee Member
    John Penix - Software Engineering in Practice (SEIP) Committee Member
    Marija Mikic - Poster Co-chair
    Daniel Popescu and Ivo Krka - Poster Committee Members


  • How Google Translate squeezes deep learning onto a phone
    Posted by Otavio Good, Software Engineer, Google Translate

    Today we announced that the Google Translate app now does real-time visual translation of 20 more languages. So the next time you’re in Prague and can’t read a menu, we’ve got your back. But how are we able to recognize these new languages?

    In short: deep neural nets. When the Word Lens team joined Google, we were excited for the opportunity to work with some of the leading researchers in deep learning. Neural nets have gotten a lot of attention in the last few years because they’ve set all kinds of records in image recognition. Five years ago, if you gave a computer an image of a cat or a dog, it had trouble telling which was which. Thanks to convolutional neural networks, not only can computers tell the difference between cats and dogs, they can even recognize different breeds of dogs. Yes, they’re good for more than just trippy art—if you're translating a foreign menu or sign with the latest version of Google's Translate app, you're now using a deep neural net. And the amazing part is it can all work on your phone, without an Internet connection. Here’s how.

    Step by step

    First, when a camera image comes in, the Google Translate app has to find the letters in the picture. It needs to weed out background objects like trees or cars and pick up the words we want translated. It looks for blobs of pixels that share a similar color and sit near other, similar blobs. Those blobs are probably letters, and when they line up next to one another, they form a continuous line of text to read.
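
    As a toy illustration of that grouping step (not the production algorithm; the fixed threshold and minimum area here are invented), a connected-component pass over dark pixels might look like this:

        import numpy as np
        from scipy import ndimage

        def candidate_letter_blobs(gray, dark_threshold=96, min_area=20):
            """Group dark pixels into connected blobs that may be letters.

            Simplified sketch: assumes dark text on a light background,
            whereas the real app groups by color similarity and proximity.
            """
            mask = gray < dark_threshold
            labels, _ = ndimage.label(mask)        # connected components
            blobs = ndimage.find_objects(labels)   # bounding box per blob
            return [b for b in blobs
                    if (b[0].stop - b[0].start) * (b[1].stop - b[1].start) >= min_area]
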
    Second, Translate has to recognize what each letter actually is. This is where deep learning comes in. We use a convolutional neural network, training it on letters and non-letters so it can learn what different letters look like.
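
    For concreteness, a tiny convolutional classifier of this general shape could be sketched in PyTorch as follows; the layer sizes and class count are illustrative guesses, not the actual Translate architecture:

        import torch
        import torch.nn as nn

        class LetterNet(nn.Module):
            """Classifies a 32x32 grayscale patch as one of 26 letters
            or 'not a letter'. Sizes are illustrative assumptions."""
            def __init__(self, num_classes=27):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),   # 32x32 -> 16x16
                    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),   # 16x16 -> 8x8
                )
                self.classifier = nn.Linear(16 * 8 * 8, num_classes)

            def forward(self, x):      # x: (batch, 1, 32, 32)
                return self.classifier(self.features(x).flatten(1))

        logits = LetterNet()(torch.randn(1, 1, 32, 32))  # one patch -> 27 scores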

    But interestingly, if we train just on very “clean”-looking letters, we risk not understanding what real-life letters look like. Letters out in the real world are marred by reflections, dirt, smudges, and all kinds of weirdness. So we built our letter generator to create all kinds of fake “dirt” to convincingly mimic the noisiness of the real world—fake reflections, fake smudges, fake weirdness all around.

    Why not just train on real-life photos of letters? Well, it’s tough to find enough examples in all the languages we need, and it’s harder to maintain the fine control over what examples we use when we’re aiming to train a really efficient, compact neural network. So it’s more effective to simulate the dirt.
    [Image: Some of the “dirty” letters we use for training. Dirt, highlights, and rotation, but not too much, because we don’t want to confuse our neural net.]
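
    The letter generator can be pictured as an augmentation pipeline along these lines; every noise type and magnitude below is a made-up stand-in for whatever the team actually tuned:

        import numpy as np
        from scipy import ndimage

        def dirty(letter, rng):
            """Degrade a clean letter image (floats in [0, 1]) so it looks
            photographed. All magnitudes are illustrative assumptions."""
            img = ndimage.rotate(letter, rng.uniform(-10, 10), reshape=False)    # small rotation only
            img = ndimage.gaussian_filter(img, sigma=rng.uniform(0.0, 1.0))      # fake smudge/blur
            img = img + rng.uniform(0.0, 0.3) * rng.random(img.shape)            # fake grain
            img = img + rng.uniform(0.0, 0.4) * np.linspace(0, 1, img.shape[1])  # fake reflection gradient
            return np.clip(img, 0.0, 1.0)

        rng = np.random.default_rng(0)
        noisy_sample = dirty(np.zeros((32, 32)), rng)
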
    The third step is to take those recognized letters and look them up in a dictionary to get translations. Since every previous step could have failed in some way, the dictionary lookup needs to be approximate. That way, if we read an ‘S’ as a ‘5’ and see ‘5uper’, we’ll still be able to find the word ‘Super’.
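
    A brute-force version of such an approximate lookup is easy to sketch; a real system would use an indexed structure such as a BK-tree rather than scanning a toy word list:

        from difflib import get_close_matches

        dictionary = ["super", "soup", "sugar"]  # toy word list

        def approximate_lookup(word, cutoff=0.6):
            # Return the closest dictionary word, tolerating misread letters.
            matches = get_close_matches(word.lower(), dictionary, n=1, cutoff=cutoff)
            return matches[0] if matches else None

        print(approximate_lookup("5uper"))  # -> 'super', despite the misread 'S'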

    Finally, we render the translation on top of the original words in the same style as the original. We can do this because we’ve already found and read the letters in the image, so we know exactly where they are. We can look at the colors surrounding the letters and use that to erase the original letters. And then we can draw the translation on top using the original foreground color.
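
    Conceptually, that erase-and-redraw step resembles the OpenCV sketch below; the inpainting call and fixed font are simplifications, since the app matches the original style far more carefully:

        import cv2

        def overlay_translation(img, letter_mask, text, origin, fg_color):
            """Erase the original letters using colors from the surrounding
            background, then draw the translated text on top.
            img: 8-bit BGR image; letter_mask: 8-bit single-channel mask.
            Rough sketch of the idea, not the app's renderer."""
            erased = cv2.inpaint(img, letter_mask, 3, cv2.INPAINT_TELEA)
            cv2.putText(erased, text, origin, cv2.FONT_HERSHEY_SIMPLEX,
                        1.0, fg_color, 2)
            return erased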

    Crunching it down for mobile

    Now, if we could do this visual translation in our data centers, it wouldn’t be too hard. But a lot of our users, especially those getting online for the very first time, have slow or intermittent network connections and smartphones starved for computing power. These low-end phones can be about 50 times slower than a good laptop—and a good laptop is already much slower than the data centers that typically run our image recognition systems. So how do we get visual translation on these phones, with no connection to the cloud, translating in real-time as the camera moves around?

    We needed to develop a very small neural net, and put severe limits on how much we tried to teach it—in essence, put an upper bound on the density of information it handles. The challenge here was in creating the most effective training data. Since we’re generating our own training data, we put a lot of effort into including just the right data and nothing more. For instance, we want to be able to recognize a letter with a small amount of rotation, but not too much. If we overdo the rotation, the neural network will use too much of its information density on unimportant things.

    So we put effort into making tools that would give us a fast iteration time and good visualizations. Within a few minutes, we can change the algorithms for generating training data, generate it, retrain, and visualize. From there we can look at what kinds of letters are failing and why. At one point, we were warping our training data too much, and ‘$’ started to be recognized as ‘S’. We were able to quickly identify that and adjust the warping parameters to fix the problem. It was like trying to paint a picture of letters that you’d see in real life, with all their imperfections painted just perfectly.

    To achieve real-time, we also heavily optimized and hand-tuned the math operations. That meant using the mobile processor’s SIMD instructions and tuning things like matrix multiplies to fit processing into all levels of cache memory.
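
    The cache-fitting part of that tuning is essentially loop tiling. In spirit it looks like the sketch below, though the real work was hand-written SIMD code, not Python:

        import numpy as np

        def blocked_matmul(a, b, block=64):
            """Multiply a @ b in cache-sized tiles so each tile of a, b, and
            the output stays resident in cache while it is reused.
            Illustrative only; block size would be tuned per processor."""
            m, k = a.shape
            _, n = b.shape
            out = np.zeros((m, n), dtype=a.dtype)
            for i in range(0, m, block):
                for j in range(0, n, block):
                    for p in range(0, k, block):
                        out[i:i+block, j:j+block] += (
                            a[i:i+block, p:p+block] @ b[p:p+block, j:j+block])
            return out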

    In the end, we were able to get our networks to give us significantly better results while running about as fast as our old system—great for translating what you see around you on the fly. Sometimes new technology can seem very abstract, and it's not always obvious what the applications for things like convolutional neural nets could be. We think breaking down language barriers is one great use.


  • The Thorny Issue of CS Teacher Certification
    Posted by Chris Stephenson, Head of Computer Science Education Programs

    (Cross-posted on the Google for Education Blog)

    There is a tremendous focus on computer science education in K-12. Educators, policy makers, the non-profit sector and industry are sharing a common message about the benefits of computer science knowledge and the opportunities it provides. In this wider effort to improve access to computer science education, one of the challenges we face is how to ensure that there is a pipeline of computer science teachers to meet the growing demand for this expertise in schools.

    In 2013 the Computer Science Teachers Association (CSTA) released Bugs in the System: Computer Science Teacher Certification in the U.S. Based on 18 months of intensive Google-funded research, this report characterized the current state of teacher certification as rife with “bugs in the system” that prevent it from functioning as intended. Examples of current challenges included states where someone with no knowledge of computer science can teach it, states where the requirements for teacher certification are impossible to meet, and states where certification administrators are confused about what computer science is. The report also demonstrated that this is actually a circular problem: states are hesitant to require certification when they have no programs to train the teachers, and teacher training programs are hesitant to create programs for which there is no clear certification pathway.

    Addressing the issues with the current teacher preparation and certification system is a complex challenge that requires the commitment of the entire computer science community. Fortunately, some of this work is already underway. CSTA’s report provides a set of recommendations aimed at addressing these issues. Educators, advocates, and policymakers are also beginning to examine their systems and how to reform them.

    Google is also exploring how we might help. We convened a group of teacher preparation faculty, researchers, and administrators from across the country to brainstorm how we might support the inclusion of computational thinking in teacher preparation programs. As a result of this meeting, Dr. Aman Yadav, Professor of Educational Psychology and Educational Technology at Michigan State University, is now working on two research articles aimed at helping teacher preparation program leaders better understand what computational thinking is and how it supports learning across multiple disciplines.

    Google will also be launching a new online course called Computational Thinking for Educators. In this free course, educators working with students between the ages of 13 and 18 will learn how incorporating computational thinking can enhance and enrich learning in diverse academic disciplines and can help boost students’ confidence when dealing with ambiguous, complex or open-ended problems. The course will run from July 15 to September 30, 2015.

    These kinds of community partnerships are one way Google can contribute to practitioner-centered solutions and further the computer science education community’s efforts to help everyone understand that computer science is a deeply important academic discipline, one that deserves both a place in the K-12 canon and well-prepared teachers to share this knowledge with students.

