Newsfeeds from around the industry
Google Research Blog
The latest news on Google Research.

  • Course Builder now supports the Learning Tools Interoperability (LTI) Specification
    Posted by John Cox, Software Engineer

    Since the release of Course Builder two years ago, it has been used by individuals, companies, and universities worldwide to create and deliver online courses on a variety of subjects, helping to show the potential for making education more accessible through open source technology.

    Today, we’re excited to announce that Course Builder now supports the Learning Tools Interoperability (LTI) specification. Course Builder can now interoperate with other LTI-compliant systems and online learning platforms, allowing users to interact with high-quality educational content no matter where it lives. This is an important step toward our goal of making educational content available to everyone.

    If you have LTI-compliant software and would like to serve its content inside Course Builder, you can do so by using Course Builder as an LTI consumer. If you want to serve Course Builder content inside another LTI-compliant system, you can use Course Builder as an LTI provider. You can use either of these features, both, or none—the choice is entirely up to you.
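
    To make the mechanics concrete: an LTI 1.0 "basic launch" is an OAuth 1.0-signed form POST from the consumer to the provider. The sketch below builds such a request by hand; the endpoint, key, secret, and parameter values are illustrative placeholders, not anything taken from the Course Builder module itself.

      # A minimal sketch of an LTI 1.0 "basic launch" as a consumer might send it:
      # an OAuth 1.0-signed form POST. All values below are hypothetical.
      import base64, hashlib, hmac, time, uuid
      from urllib.parse import quote, urlencode

      LAUNCH_URL = "https://provider.example.com/lti/launch"   # hypothetical provider endpoint
      CONSUMER_KEY = "my-consumer-key"                          # agreed out of band
      CONSUMER_SECRET = "my-consumer-secret"

      params = {
          "lti_message_type": "basic-lti-launch-request",
          "lti_version": "LTI-1p0",
          "resource_link_id": "unit-3-lesson-2",                # identifies the placement
          "user_id": "student-42",
          "roles": "Learner",
          "oauth_consumer_key": CONSUMER_KEY,
          "oauth_nonce": uuid.uuid4().hex,
          "oauth_timestamp": str(int(time.time())),
          "oauth_signature_method": "HMAC-SHA1",
          "oauth_version": "1.0",
      }

      def oauth_hmac_sha1_signature(method, url, params, consumer_secret):
          """Build the OAuth 1.0 signature base string and sign it with HMAC-SHA1."""
          enc = lambda s: quote(s, safe="~")
          pairs = sorted((enc(k), enc(v)) for k, v in params.items())
          param_string = "&".join(f"{k}={v}" for k, v in pairs)
          base_string = "&".join([method.upper(), enc(url), enc(param_string)])
          key = enc(consumer_secret) + "&"          # no token secret for an LTI launch
          digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
          return base64.b64encode(digest).decode()

      params["oauth_signature"] = oauth_hmac_sha1_signature("POST", LAUNCH_URL, params, CONSUMER_SECRET)
      print(urlencode(params))   # body of the form POST the provider would verify

    A real consumer would POST the signed parameters to the provider's launch URL, and the provider would verify the signature with the shared secret before serving its content.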

    The Course Builder LTI extension module, now available on GitHub, supports LTI version 1.0, and its LTI provider is certified by IMS Global, the nonprofit member organization that created the LTI specification. Like Course Builder itself, this module is open source and available under the Apache 2.0 license.

    As part of our continued commitment to online education, we are also happy to announce we have become an affiliate member of IMS Global. IMS Global shares our desire to provide education online at scale, and we look forward to working with the IMS community on LTI and other online education technologies.


  • Building a deeper understanding of images
    Posted by Christian Szegedy, Software Engineer

    The ImageNet large-scale visual recognition challenge (ILSVRC) is the largest academic challenge in computer vision, held annually to test state-of-the-art technology in image understanding, both in the sense of recognizing objects in images and locating where they are. Participants in the competition include leading academic institutions and industry labs. In 2012 it was won by DNNResearch using the convolutional neural network approach described in the now-seminal paper by Krizhevsky et al.[4]

    In this year’s challenge, team GoogLeNet (named in homage to LeNet, Yann LeCun's influential convolutional network) placed first in the classification and detection (with extra training data) tasks, doubling the quality on both tasks over last year's results. The team participated with an open submission, meaning that the exact details of its approach are shared with the wider computer vision community to foster collaboration and accelerate progress in the field.
    The competition has three tracks: classification, classification with localization, and detection. The classification track measures an algorithm’s ability to assign correct labels to an image. The classification with localization track is designed to assess how well an algorithm models both the labels of an image and the location of the underlying objects. Finally, the detection challenge is similar, but uses much stricter evaluation criteria. As an additional difficulty, this challenge includes many images with tiny objects that are hard to recognize. Superior performance in the detection challenge requires pushing beyond annotating an image with a “bag of labels” -- a model must be able to describe a complex scene by accurately locating and identifying many objects in it. As examples, the images in this post are actual top-scoring inferences of the GoogLeNet detection model on the validation set of the detection challenge.
    This work was a concerted effort by Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Drago Anguelov, Dumitru Erhan, Andrew Rabinovich and myself. Two of the team members -- Wei Liu and Scott Reed -- are PhD students who are a part of the intern program here at Google, and actively participated in the work leading to the submissions. Without their dedication the team could not have won the detection challenge.

    This effort was accomplished by using the DistBelief infrastructure, which makes it possible to train neural networks in a distributed manner and to iterate rapidly. At the core of the approach is a radically redesigned convolutional network architecture. Its seemingly complex structure (typical incarnations of which consist of over 100 layers with a maximum depth of over 20 parameter layers) is based on two insights: the Hebbian principle and scale invariance. As a consequence of a careful balancing act, the depth and width of the network are both increased significantly at the cost of a modest growth in evaluation time. The resulting architecture leads to an over 10x reduction in the number of parameters compared to most state-of-the-art vision networks. This reduces overfitting during training and allows our system to perform inference with a low memory footprint.
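
    The post does not spell out the layer shapes behind that parameter reduction, but the arithmetic is easy to illustrate: placing a cheap 1x1 "reduction" convolution in front of an expensive one adds depth while shrinking the parameter count. The channel sizes below are made-up illustrations, not the actual GoogLeNet configuration.

      # Toy parameter count: a 5x5 convolution applied directly to a wide input
      # versus the same 5x5 convolution placed behind a 1x1 reduction layer.
      # All channel counts are illustrative assumptions.
      def conv_params(in_ch, out_ch, kernel):
          return in_ch * out_ch * kernel * kernel          # ignoring biases

      in_ch, out_ch, reduce_ch = 192, 32, 16

      direct = conv_params(in_ch, out_ch, 5)                                          # 153,600
      reduced = conv_params(in_ch, reduce_ch, 1) + conv_params(reduce_ch, out_ch, 5)  # 15,872

      print(direct, reduced, round(direct / reduced, 1))   # roughly a 10x saving, one layer deeper
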
    For the detection challenge, the improved neural network model is used in the sophisticated R-CNN detector by Ross Girshick et al.[2], with additional proposals coming from the multibox method[1]. For the classification challenge entry, several ideas from the work of Andrew Howard[3] were incorporated and extended, specifically as they relate to image sampling during training and evaluation. The systems were evaluated both stand-alone and as ensembles (averaging the outputs of up to seven models) and their results were submitted as separate entries for transparency and comparison.
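
    As a rough sketch of the ensembling step (averaging the outputs of up to seven models), assuming each model emits a per-class probability vector for an image:

      import numpy as np

      # Hypothetical per-class probabilities from several independently trained
      # models for a single image; 7 models x 1000 classes mirrors the ILSVRC
      # setting, but the values here are random stand-ins.
      rng = np.random.default_rng(0)
      scores = rng.normal(size=(7, 1000))
      probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)   # per-model softmax

      ensemble = probs.mean(axis=0)            # average the seven distributions
      top5 = np.argsort(ensemble)[::-1][:5]    # the classification track scores top-5 guesses
      print(top5)
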

    These technological advances will enable even better image understanding on our side, and the progress is directly transferable to Google products such as photo search, image search, YouTube, self-driving cars, and any other place where it is useful to understand what is in an image as well as where things are.

    References:

    [1] Erhan, D., Szegedy, C., Toshev, A., and Anguelov, D., "Scalable Object Detection using Deep Neural Networks", The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 2147-2154.

    [2] Girshick, R., Donahue, J., Darrell, T., and Malik, J., "Rich feature hierarchies for accurate object detection and semantic segmentation", arXiv preprint arXiv:1311.2524, 2013.

    [3] Howard, A. G., "Some Improvements on Deep Convolutional Neural Network Based Image Classification", arXiv preprint arXiv:1312.5402, 2013.

    [4] Krizhevsky, A., Sutskever, I., and Hinton, G. E., "ImageNet Classification with Deep Convolutional Neural Networks", Advances in Neural Information Processing Systems, 2012.



  • Working Together to Support Computer Science Education
    Posted by Chris Stephenson, Computer Science Education Program Manager

    (Cross-posted from the Google for Education blog)

    Computer Science (CS) education in K-12 is receiving an increasing amount of attention from media and policy makers. Education groups have been working for years to build the infrastructure needed to support CS both inside and outside the school environment, including standards development and dissemination, models for teacher professional development, research, resources for educators, and the building of peer-driven and peer-supported communities of learning.

    At Google, we strive to increase opportunities in CS and be a strong contributor to the community of those seeking to improve CS education through our engagement in research, curriculum resource development and dissemination, professional development of teachers, tools development, and large-scale efforts to engage young women and underrepresented groups in computer science. However, despite these efforts, there are still many challenges to overcome to improve the state of CS education.

    For example, many people confuse computer science with education technology (the use of computing to support learning in other disciplines) and computer literacy (a very basic understanding of a limited number of computer applications). This confusion leads to the assumption that computer science education is taking place, when in fact in many schools it is not.

    Women and minorities are still underrepresented in computer science education and in the high tech workplace. In her introduction to Jane Margolis’ Stuck in the Shallow End: Education, Race, and Computing, distinguished scientist Shirley Malcolm refers to computer science as “privileged knowledge” to which minority students often have no access. This statement is supported by data from the College Board and the National Center for Women and Information Technology.

    Poverty also has a significant but often ignored impact on access to technology and quality computer science education. At present there are more than 16 million U.S. children living in poverty; these children are the least likely to have access to computer science knowledge and tools in their schools and homes.

    There are many organizations and programs that focus on CS education and are working hard to address these and other issues. This gives Google the unique opportunity to analyze gaps in existing efforts and apply our resources towards the programs that are most needed. In so doing, we hope to help uncover new strategies and create sustainable improvements to CS education.

    Achieving systemic and sustained change in K-12 CS education is a complex undertaking that requires strategic support that complements both existing formal school programs and extracurricular education. Google is proud to be a member of the community committed to making tangible improvements to the state of CS education. In future blog posts, we will introduce you to some of the programs and resources that Google has been working on.


  • Hardware Initiative at Quantum Artificial Intelligence Lab
    Posted by Hartmut Neven, Director of Engineering

    The Quantum Artificial Intelligence team at Google is launching a hardware initiative to design and build new quantum information processors based on superconducting electronics. We are pleased to announce that John Martinis and his team at UC Santa Barbara will join Google in this initiative. John and his group have made great strides in building superconducting quantum electronic components of very high fidelity. He was recently awarded the London Prize, recognizing his pioneering advances in quantum control and quantum information processing. With an integrated hardware group, the Quantum AI team will now be able to implement and test new designs for quantum optimization and inference processors based on recent theoretical insights, as well as our learnings from the D-Wave quantum annealing architecture. We will continue to collaborate with D-Wave scientists and to experiment with the “Vesuvius” machine at NASA Ames, which will be upgraded to a 1000-qubit “Washington” processor.


  • Teaching machines to read between the lines (and a new corpus with entity salience annotations)
    Posted by Dan Gillick, Research Scientist, and Dave Orr, Product Manager

    Language understanding systems are largely trained on freely available data, such as the Penn Treebank, perhaps the most widely used linguistic resource ever created. We have previously released lots of linguistic data ourselves, to contribute to the language understanding community as well as encourage further research into these areas.

    Now, we’re releasing a new dataset, based on another great resource: the New York Times Annotated Corpus, a set of 1.8 million articles spanning 20 years. 600,000 articles in the NYTimes Corpus have hand-written summaries, and more than 1.5 million of them are tagged with people, places, and organizations mentioned in the article. The Times encourages use of the metadata for all kinds of things, and has set up a forum to discuss related research.

    We recently used this corpus to study a topic called “entity salience”. To understand salience, consider: how do you know what a news article or a web page is about? Reading comes pretty easily to people -- we can quickly identify the places or things or people most central to a piece of text. But how might we teach a machine to perform this same task? This problem is a key step towards being able to read and understand an article.

    One way to approach the problem is to look for words that appear more often than their ordinary rates. For example, if you see the word “coach” 5 times in a 581-word article, and compare that to the usual frequency of “coach” -- more like 5 in 330,000 words -- you have reason to suspect the article has something to do with coaching. The term “basketball” is even more extreme, appearing 150,000 times more often than usual. This is the idea behind the famous TF-IDF, long used to index web pages.
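
    A back-of-the-envelope version of that comparison, using the numbers quoted above (the background corpus size is a stand-in, not a real statistic):

      # Ratio of a term's rate in the document to its background rate in a big
      # corpus -- the intuition behind TF-IDF-style weighting.
      doc_count, doc_len = 5, 581                # "coach": 5 times in a 581-word article
      corpus_count, corpus_len = 5, 330_000      # usual rate: about 5 per 330,000 words

      lift = (doc_count / doc_len) / (corpus_count / corpus_len)
      print(round(lift))                         # ~568: far above chance, so "coach" looks topical
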
    (Image caption: Congratulations to Becky Hammon, first female NBA coach! Image via Wikipedia.)
    Term ratios are a start, but we can do better. Search indexing these days is much more involved, using, for example, the distances between pairs of words on a page to capture their relatedness. Now, with the Knowledge Graph, we are beginning to think in terms of entities and relations rather than keywords. “Basketball” is more than a string of characters; it is a reference to something in the real world that we already know quite a bit about.

    Background information about entities ought to help us decide which of them are most salient. After all, an article’s author assumes her readers have some general understanding of the world, and probably a bit about sports too. Using background knowledge, we might be able to infer that the WNBA is a salient entity in the Becky Hammon article even though it only appears once.

    To encourage research on leveraging background information, we are releasing a large dataset of annotations to accompany the New York Times Annotated Corpus, including resolved Freebase entity IDs and labels indicating which entities are salient. The salience annotations are determined by automatically aligning entities in the document with entities in accompanying human-written abstracts. Details of the salience annotations and some baseline results are described in our recent paper: A New Entity Salience Task with Millions of Training Examples (Jesse Dunietz and Dan Gillick).
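
    A rough sketch of that alignment idea, assuming entities in the article body and in its abstract have already been resolved to Freebase MIDs (the MIDs and values below are hypothetical):

      # Mark an entity salient if it also appears among the entities resolved in
      # the article's human-written abstract. All identifiers are made up.
      doc_entities = {
          "/m/0_hammon": "Becky Hammon",
          "/m/0_wnba": "WNBA",
          "/m/0_spurs": "San Antonio Spurs",
      }
      abstract_entities = {"/m/0_hammon", "/m/0_wnba"}

      for mid, name in doc_entities.items():
          salient = mid in abstract_entities
          print(int(salient), mid, name)
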

    Since our entity resolver works better for named entities like WNBA than for nominals like “coach” (this is the notoriously difficult word sense disambiguation problem, which we’ve previously touched on), the annotations are limited to names.

    Below is sample output for a document. The first line contains the NYT document ID and the headline; each subsequent line includes an entity index, an indicator for salience, the mention count for this entity in the document as determined by our coreference system, the text of the first mention of the entity, the byte offsets (start and end) for the first mention of the entity, and the resolved Freebase MID.
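
    Assuming those fields are tab-separated in the order just listed (the released files define the exact layout), a minimal reader for one entity line might look like the sketch below; the sample line is made up.

      from collections import namedtuple

      # A minimal reader for one entity line, assuming the fields described above
      # appear tab-separated in that order. The layout and sample values are
      # illustrative assumptions, not the released file format verbatim.
      EntityRow = namedtuple(
          "EntityRow", "index salient mention_count first_mention start end mid")

      def parse_entity_line(line):
          index, salient, count, mention, start, end, mid = line.rstrip("\n").split("\t")
          return EntityRow(int(index), salient == "1", int(count),
                           mention, int(start), int(end), mid)

      sample = "1\t1\t3\tBecky Hammon\t187\t199\t/m/0_hammon"   # made-up values
      print(parse_entity_line(sample))
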
    Features like mention count and document positioning give reasonable salience predictions. But because they only describe what’s explicitly in the document, we expect a system that uses background information to expose what’s implicit could give better results.

    Download the data directly from Google Drive, or visit the project home page with more information at our Google Code site. We look forward to seeing what you come up with!

