Newsfeeds from around the industry
Google Research Blog
The latest news on Google Research.

  • Announcing an Open Source ADC board for BeagleBone
    Posted by Jason Holt, Software Engineer

    (Cross-posted on the Google Open Source Blog)

    Working with electronics, we often find ourselves soldering up a half-baked electronic circuit to detect some sort of signal. For example, last year we wanted to measure the strength of a carrier. We started with traditional analog circuits — amplifier, filter, envelope detector, threshold. You can see some of our prototypes in the image below; they get pretty messy.
    While there's a certain satisfaction in taming a signal using the physical properties of capacitors, coils of wire and transistors, it's usually easier to digitize the signal with an Analog to Digital Converter (ADC) and manage it with Digital Signal Processing (DSP) instead of discrete electronic parts. Tweaking software doesn't require a soldering iron, and lets us manipulate signals in ways that would be impractical or impossible with analog circuits.
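    For instance, the analog envelope-detector chain above collapses into a few lines of DSP once the signal has been digitized. The snippet below is a minimal sketch of the idea (not PRUDAQ code): it assumes the samples are already sitting in a NumPy array and uses SciPy's Hilbert transform to recover the envelope before thresholding.

        import numpy as np
        from scipy.signal import hilbert

        fs = 1_000_000                               # assumed sample rate: 1 MSPS
        t = np.arange(0, 0.01, 1 / fs)               # 10 ms of synthetic samples
        carrier = np.sin(2 * np.pi * 100_000 * t)    # 100 kHz carrier
        signal = carrier * (t > 0.005)               # carrier switches on halfway through

        envelope = np.abs(hilbert(signal))           # envelope detection in software
        detected = envelope > 0.5                    # threshold: is the carrier present?
        print(f"carrier detected in {100 * detected.mean():.1f}% of samples")

    Changing the detector now means editing a line or two of code rather than reworking the analog front end.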

    There are several standard solutions for digitizing a signal: connect a laptop to an oscilloscope or Data Acquisition System (DAQ) via USB or Ethernet, or use the onboard ADCs of a maker board like an Arduino. The former are sensitive and accurate, but also big and power hungry. The latter are cheap and tiny, but slower and have enough RAM for only milliseconds worth of high speed sample data.

    That led us to investigate single board computers like the BeagleBone and Raspberry Pi, which are small and cheap like an Arduino, but have specs like a smartphone. And crucially, the BeagleBone's system-on-a-chip (SoC) combines a beefy ARMv7 CPU with two smaller Programmable Realtime Units (PRUs) that have access to all 512MB of system RAM. This lets us dedicate the PRUs to the time-sensitive and repetitive task of reading each sample out of an external ADC, while the main CPU lets us use the data with the GNU/Linux tools we're used to.

    The result is an open source BeagleBone cape we've named PRUDAQ. It's built around the Analog Devices AD9201 ADC, which samples two inputs simultaneously at up to 20 megasamples per second, per channel. Simultaneous sampling and high sample rates make it useful for software-defined radio (SDR) and scientific applications where a built-in ADC isn't quite up to the task.
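    A quick back-of-the-envelope calculation shows why access to the full 512MB of system RAM matters at these rates. The figures below assume both channels running at the maximum rate, with each 10-bit sample stored in 16 bits (a bookkeeping choice for this example, not a PRUDAQ spec):

        channels = 2
        rate = 20_000_000        # max samples per second, per channel
        bytes_per_sample = 2     # each 10-bit sample stored in 16 bits

        bytes_per_second = channels * rate * bytes_per_sample          # 80 MB/s
        print(512e6 / bytes_per_second, "seconds of capture fit in 512 MB")   # ~6.4 s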

    Our open source electrical design and sample code are available on GitHub, and GroupGets has boards ready to ship for $79. We were also fortunate to have help from Google intern Kumar Abhishek, who added support for PRUDAQ to his Google Summer of Code project BeagleLogic, which performs much better than our sample code.

    We started PRUDAQ for our own needs, but quickly realized that others might also find it useful. We're excited to get your feedback through the email list. Tell us what can be done with inexpensive fast ADCs paired with inexpensive fast CPUs!


  • Towards an exact (quantum) description of chemistry
    Posted by Ryan Babbush, Quantum Software Engineer

    “...nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical...” - Richard Feynman, Simulating Physics with Computers

    One of the most promising applications of quantum computing is the ability to efficiently model quantum systems in nature that are considered intractable for classical computers. Now, in collaboration with the Aspuru-Guzik group at Harvard and researchers from Lawrence Berkeley National Labs, UC Santa Barbara, Tufts University and University College London, we have performed the first completely scalable quantum simulation of a molecule. Our experimental results are detailed in the paper Scalable Quantum Simulation of Molecular Energies, which recently appeared in Physical Review X.

    The goal of our experiment was to use quantum hardware to efficiently solve the molecular electronic structure problem, which seeks the lowest-energy configuration of electrons in the presence of a given nuclear configuration. In order to predict chemical reaction rates (which govern the mechanism of chemical reactions), one must make these calculations to extremely high precision. The ability to predict such rates could revolutionize the design of solar cells, industrial catalysts, batteries, flexible electronics, medicines, materials and more. The primary difficulty is that molecular systems form highly entangled quantum superposition states which require exponentially many classical computing resources in order to represent to sufficiently high precision. For example, exactly computing the energies of methane (CH4) takes about one second, but the same calculation takes about ten minutes for ethane (C2H6) and about ten days for propane (C3H8).

    In our experiment, we focus on an approach known as the variational quantum eigensolver (VQE), which can be understood as a quantum analog of a neural network. Whereas a classical neural network is a parameterized mapping that one trains in order to model classical data, VQE is a parameterized mapping (e.g. a quantum circuit) that one trains in order to model quantum data (e.g. a molecular wavefunction). The training objective for VQE is the molecular energy function, which is always minimized by the true ground state. The quantum advantage of VQE is that quantum bits can efficiently represent the molecular wavefunction whereas exponentially many classical bits would be required.
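    To make the analogy concrete, here is a toy classical simulation of the VQE training loop: a one-parameter "ansatz" state is adjusted until the expectation value of a small Hermitian matrix stops decreasing. The matrix is made up purely for illustration and merely stands in for the molecular Hamiltonian; a real VQE run evaluates the energy on quantum hardware rather than with NumPy.

        import numpy as np
        from scipy.optimize import minimize_scalar

        H = np.array([[-1.0, 0.5],
                      [ 0.5, 0.3]])   # illustrative stand-in for the Hamiltonian

        def ansatz(theta):
            # One-parameter "circuit": a rotation applied to the basis state |0>
            return np.array([np.cos(theta / 2), np.sin(theta / 2)])

        def energy(theta):
            psi = ansatz(theta)
            return float(psi @ H @ psi)   # expectation value <psi|H|psi>

        result = minimize_scalar(energy, bounds=(0, 2 * np.pi), method="bounded")
        print("variational estimate:", result.fun)
        print("exact ground state:  ", np.linalg.eigvalsh(H)[0])

    Because the loop only ever asks whether the measured energy went down, it automatically compensates for systematic miscalibrations in how the ansatz is implemented, which is the robustness discussed below.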

    Using VQE, we quantum computed the energy landscape of molecular hydrogen, H2. We compared the performance of VQE to another quantum algorithm for chemistry, the phase estimation algorithm (PEA). Experimentally computed energies, as a function of the H - H bond length, are shown below alongside the exact curve. We were able to obtain such high performance with VQE because the neural-network-like training loop helped to establish experimentally optimal circuit parameters for representing the wavefunction in the presence of systematic control errors. One can understand this by considering a hardware implementation of a neural network with a faulty weight, e.g. the weight is only represented half as strong as it should be. Because the weights of the neural network are established via a closed-loop training procedure which can compensate for such systematic errors, the hardware neural network is robust against such imperfections. Likewise, despite systematic errors in our implementation of the VQE circuit, we are still able to learn an accurate model for the wavefunction. This robustness inspires hope that VQE may be able to solve classically intractable problems without quantum error correction.
    While the energies of molecular hydrogen can be computed classically (albeit inefficiently), as one scales up quantum hardware it becomes possible to simulate even larger chemical systems, including classically intractable ones. For instance, with only about a hundred reliable quantum bits one could model the process by which bacteria produce fertilizer at room temperature. Elucidating this mechanism is a famous open problem in chemistry because the way humans produce fertilizer is extremely inefficient and consumes 1-2% of the world's energy annually. Such calculations could also assist with breakthroughs in fundamental science, for instance, in the understanding of high temperature superconductivity.

    Though many theoretical and experimental challenges lie ahead, a quantum-enabled paradigm shift from qualitative / descriptive chemistry simulations to quantitative / predictive chemistry simulations could modernize the field so dramatically that the examples imaginable today are just the tip of the iceberg.


  • Wide & Deep Learning: Better Together with TensorFlow
    Posted by Heng-Tze Cheng, Senior Software Engineer, Google Research

    The human brain is a sophisticated learning machine, forming rules by memorizing everyday events (“sparrows can fly” and “pigeons can fly”) and generalizing those learnings to apply to things we haven't seen before (“animals with wings can fly”). Perhaps more powerfully, memorization also allows us to further refine our generalized rules with exceptions (“penguins can't fly”). As we were exploring how to advance machine intelligence, we asked ourselves the question—can we teach computers to learn like humans do, by combining the power of memorization and generalization?

    It's not an easy question to answer, but by jointly training a wide linear model (for memorization) alongside a deep neural network (for generalization), one can combine the strengths of both to bring us one step closer. At Google, we call it Wide & Deep Learning. It's useful for generic large-scale regression and classification problems with sparse inputs (categorical features with a large number of possible feature values), such as recommender systems, search, and ranking problems.
    Today we’re open-sourcing our implementation of Wide & Deep Learning as part of the TF.Learn API so that you can easily train a model yourself. Please check out the TensorFlow tutorials on Linear Models and Wide & Deep Learning, as well as our research paper to learn more.

    How Wide & Deep Learning works.
    Let's say one day you wake up with an idea for a new app called FoodIO*. A user of the app just needs to say out loud what kind of food they're craving (the query). The app magically predicts the dish that the user will like best, and the dish gets delivered to the user's front door (the item). Your key metric is consumption rate—if a dish was eaten by the user, the score is 1; otherwise it's 0 (the label).

    You come up with some simple rules to start, like returning the items that match the most characters in the query, and you release the first version of FoodIO. Unfortunately, you find that the consumption rate is pretty low because the matches are too crude to be really useful (people shouting “fried chicken” end up getting “chicken fried rice”), so you decide to add machine learning to learn from the data.

    The Wide model.
    In the 2nd version, you want to memorize what items work the best for each query. So, you train a linear model in TensorFlow with a wide set of cross-product feature transformations to capture how the co-occurrence of a query-item feature pair correlates with the target label (whether or not an item is consumed). The model predicts the probability of consumption P(consumption | query, item) for each item, and FoodIO delivers the top item with the highest predicted consumption rate. For example, the model learns that feature AND(query="fried chicken", item="chicken and waffles") is a huge win, while AND(query="fried chicken", item="chicken fried rice") doesn't get as much love even though the character match is higher. In other words, FoodIO 2.0 does a pretty good job memorizing what users like, and it starts to get more traction.
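    In TF.Learn terms, the wide model amounts to sparse feature columns plus crossed columns feeding a linear classifier. The sketch below is a simplified illustration of that setup, with made-up feature names and bucket sizes, written against the tf.feature_column / tf.estimator names (the Linear Models tutorial linked above shows the full TF.Learn version):

        import tensorflow as tf

        # Sparse base columns for the query and the candidate item (hypothetical names).
        query = tf.feature_column.categorical_column_with_hash_bucket("query", 10000)
        item = tf.feature_column.categorical_column_with_hash_bucket("item", 10000)

        # Cross-product transformation: memorizes specific query-item co-occurrences.
        query_x_item = tf.feature_column.crossed_column([query, item], 100000)

        # Wide (linear) model over the sparse base columns and their cross.
        wide_model = tf.estimator.LinearClassifier(
            feature_columns=[query, item, query_x_item])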
    The Deep model.
    Later on you discover that many users are saying that they're tired of the recommendations. They're eager to discover similar but different cuisines with a “surprise me” state of mind. So you brush up on your TensorFlow toolkit again and train a deep feed-forward neural network for FoodIO 3.0. With your deep model, you're learning lower-dimensional dense representations (usually called embedding vectors) for every query and item. With that, FoodIO is able to generalize by matching items to queries that are close to each other in the embedding space. For example, you find that people who asked for “fried chicken” often don't mind having “burgers” as well.
    Combining Wide and Deep models.
    However, you discover that the deep neural network sometimes generalizes too much and recommends irrelevant dishes. You dig into the historic traffic, and find that there are actually two distinct types of query-item relationships in the data.

    The first type of query is very targeted. People shouting very specific items like “iced decaf latte with nonfat milk” really mean it. Just because it's pretty close to “hot latte with whole milk” in the embedding space doesn't mean it's an acceptable alternative. And there are millions of these rules where the transitivity of embeddings may actually do more harm than good. On the other hand, queries that are more exploratory like “seafood” or “italian food” may be open to more generalization and discovering a diverse set of related items. Having realized this, you have an epiphany: Why do I have to choose either wide or deep models? Why not both?
    Finally, you build FoodIO 4.0 with Wide & Deep Learning in TensorFlow. As shown in the graph above, the sparse features like query="fried chicken" and item="chicken fried rice" are used in both the wide part (left) and the deep part (right) of the model. During training, the prediction errors are backpropagated to both sides to train the model parameters. The cross-feature transformation in the wide model component can memorize all those sparse, specific rules, while the deep model component can generalize to similar items via embeddings.
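    In code, the combined model is a single estimator that takes the wide columns and the deep columns side by side. Here is a minimal, self-contained sketch, again with invented feature names and sizes, using the tf.estimator name for the combined classifier:

        import tensorflow as tf

        query = tf.feature_column.categorical_column_with_hash_bucket("query", 10000)
        item = tf.feature_column.categorical_column_with_hash_bucket("item", 10000)
        query_x_item = tf.feature_column.crossed_column([query, item], 100000)

        # Deep part: dense embedding vectors learned for each sparse feature.
        deep_columns = [
            tf.feature_column.embedding_column(query, dimension=8),
            tf.feature_column.embedding_column(item, dimension=8),
        ]

        model = tf.estimator.DNNLinearCombinedClassifier(
            linear_feature_columns=[query_x_item],   # wide: memorizes specific crosses
            dnn_feature_columns=deep_columns,        # deep: generalizes via embeddings
            dnn_hidden_units=[100, 50],
        )

    Both parts see the same training examples, and the combined prediction error is backpropagated into the linear weights and the embeddings at the same time.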

    Wider. Deeper. Together.
    We're excited to share the TensorFlow API and implementation of Wide & Deep Learning with you, so you can try out your ideas with it and share your findings with everyone else. To get started, check out the code on GitHub and our TensorFlow tutorials on Linear Models and Wide & Deep Learning.

    Acknowledgement
    Bringing Wide & Deep from idea and research to implementation has been a huge team effort. We'd like to thank all the people who have contributed to the project or have given us advice, including: Heng-Tze Cheng, Mustafa Ispir, Zakaria Haque, Lichan Hong, Rohan Anil, Denis Baylor, Vihan Jain, Salem Haykal, Robson Araujo, Xiaobing Liu, Yonghui Wu, Thomas Strohmann, Tal Shaked, Jeremiah Harmsen, Greg Corrado, Glen Anderson, D. Sculley, Tushar Chandra, Ed Chi, Rajat Monga, Rob von Behren, Jarek Wilkiewicz, Christine Robson, Illia Polosukhin, Martin Wicke, Gus Katsiapis, Alexandre Passos, Olivier Chapelle, Levent Koc, Akshay Naresh Modi, Wei Chai, Hrishi Aradhye, Othar Hansson, Xinran He, Martin Zinkevich, Joe Toth, Anton Rusanov, Hemal Shah, Petros Mol, Frank Li, Yutaka Suematsu, Sameer Ahuja, Eugene Brevdo, Philip Tucker, Shanqing Cai, Kester Tong, and more.

    * For illustration only. FoodIO is not a real app.


  • CVPR 2016 & Research at Google
    Posted by Rahul Sukthankar, Research Scientist

    This week, Las Vegas hosts the 2016 Conference on Computer Vision and Pattern Recognition (CVPR 2016), the premier annual computer vision event comprising the main conference and several co-located workshops and short courses. As a leader in computer vision research, Google has a strong presence at CVPR 2016, with many Googlers presenting papers and invited talks at the conference, tutorials and workshops.

    We congratulate Google Research Scientist Ce Liu and Google Faculty Advisor Abhinav Gupta, who were selected as this year’s recipients of the PAMI Young Researcher Award for outstanding research contributions within computer vision. We also congratulate Googler Henrik Stewenius for receiving the Longuet-Higgins Prize, a retrospective award that recognizes up to two CVPR papers from ten years ago that have made a significant impact on computer vision research, for his 2006 CVPR paper “Scalable Recognition with a Vocabulary Tree”, co-authored with David Nister.

    If you are attending CVPR this year, please stop by our booth and chat with our researchers about the projects and opportunities at Google that go into solving interesting problems for hundreds of millions of people. The Google booth will also showcase several recent efforts, including the technology behind Motion Stills, a live demo of neural network-based image compression and TensorFlow-Slim, the lightweight library for defining, training and evaluating models in TensorFlow. Learn more about our research being presented at CVPR 2016 in the list below (Googlers highlighted in blue).

    Oral Presentations
    Generation and Comprehension of Unambiguous Object Descriptions
    Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L. Yuille, Kevin Murphy

    Detecting Events and Key Actors in Multi-Person Videos
    Vignesh Ramanathan, Jonathan Huang, Sami Abu-El-Haija, Alexander Gorban, Kevin Murphy, Li Fei-Fei

    Spotlight Session: 3D Reconstruction
    DeepStereo: Learning to Predict New Views From the World’s Imagery
    John Flynn, Ivan Neulander, James Philbin, Noah Snavely

    Posters
    Discovering the Physical Parts of an Articulated Object Class From Multiple Videos
    Luca Del Pero, Susanna Ricco, Rahul Sukthankar, Vittorio Ferrari

    Blockout: Dynamic Model Selection for Hierarchical Deep Networks
    Calvin Murdock, Zhen Li, Howard Zhou, Tom Duerig

    Rethinking the Inception Architecture for Computer Vision
    Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, Zbigniew Wojna

    Improving the Robustness of Deep Neural Networks via Stability Training
    Stephan Zheng, Yang Song, Thomas Leung, Ian Goodfellow

    Semantic Image Segmentation With Task-Specific Edge Detection Using CNNs and a Discriminatively Trained Domain Transform
    Liang-Chieh Chen, Jonathan T. Barron, George Papandreou, Kevin Murphy, Alan L. Yuille

    Tutorial
    Optimization Algorithms for Subset Selection and Summarization in Large Data Sets
    Ehsan Elhamifar, Jeff Bilmes, Alex Kulesza, Michael Gygli

    Workshops
    Perceptual Organization in Computer Vision: The Role of Feedback in Recognition and Reorganization
    Organizers: Katerina Fragkiadaki, Phillip Isola, Joao Carreira
    Invited talks: Viren Jain, Jitendra Malik

    VQA Challenge Workshop
    Invited talks: Jitendra Malik, Kevin Murphy

    Women in Computer Vision
    Invited talk: Caroline Pantofaru

    Computational Models for Learning Systems and Educational Assessment
    Invited talk: Jonathan Huang

    Large-Scale Scene Understanding (LSUN) Challenge
    Invited talk: Jitendra Malik

    Large Scale Visual Recognition and Retrieval: BigVision 2016
    General Chairs: Jason Corso, Fei-Fei Li, Samy Bengio

    ChaLearn Looking at People
    Invited talk: Florian Schroff

    Medical Computer Vision
    Invited talk: Ramin Zabih


  • Project Bloks: Making code physical for kids
    Posted by Steve Vranakis and Jayme Goldstein, Executive Creative Director and Project Lead, Google Creative Lab

    At Google, we’re passionate about empowering children to create and explore with technology. We believe that when children learn to code, they’re not just learning how to program a computer—they’re learning a new language for creative expression and are developing computational thinking: a skillset for solving problems of all kinds.

    In fact, it’s a skillset whose importance is being recognised around the world—from President Obama’s CS4All program to the inclusion of Computer Science in the UK National Curriculum. We’ve long supported and advocated the furthering of CS education through programs and platforms such as Blockly, Scratch Blocks, CS First and Made w/ Code.

    Today, we’re happy to announce Project Bloks, a research collaboration between Google, Paulo Blikstein (Stanford University) and IDEO with the goal of creating an open hardware platform that researchers, developers and designers can use to build physical coding experiences. As a first step, we’ve created a system for tangible programming and built a working prototype with it. We’re sharing our progress before conducting more research over the summer to inform what comes next.

    Physical coding
    Kids are inherently playful and social. They naturally play and learn by using their hands, building stuff and doing things together. Making code physical - known as tangible programming - offers a unique way to combine the way children innately play and learn with computational thinking.

    Project Bloks is preceded and shaped by a long history of educational theory and research in the area of hands-on learning: from Friedrich Froebel, Maria Montessori and Jean Piaget's pioneering work in the area of learning by experience, exploration and manipulation, to the research started in the 1970s by Seymour Papert and Radia Perlman with LOGO and TORTIS. This exploration has continued to grow and includes a wide range of research and platforms.

    However, designing kits for tangible programming is challenging—requiring the resources and time to develop both the software and the hardware. Our goal is to remove those barriers. By creating an open platform, Project Bloks will allow designers, developers and researchers to focus on innovating, experimenting and creating new ways to help kids develop computational thinking. Our vision is that, one day, the Project Bloks platform becomes for tangible programming what Blockly is for on-screen programming.
    The Project Bloks system
    We’ve designed a system that developers can customise, reconfigure and rearrange to create all kinds of different tangible programming experiences.
    A birdseye view of the customisable and reconfigurable Project Bloks system
    The Project Bloks system is made up of three core components: the “Brain Board”, “Base Boards” and “Pucks”. When connected together, they create a set of instructions which can be sent to connected devices, such as toys or tablets, over WiFi or Bluetooth.
    The three core components of the Project Bloks system
    Pucks: abundant, inexpensive, customisable physical instructions
    Pucks are what make the Project Bloks system so versatile. They help bring the infinite flexibility of software programming commands to tangible programming experiences. Pucks can be programmed with different instructions, such as ‘turn on or off’, ‘move left’ or ‘jump’. They can also take the shape of many different interactive forms—like switches, dials or buttons. With no active electronic components, they’re also incredibly cheap and easy to make. At a minimum, all you'd need to make a puck is a piece of paper and some conductive ink.
    Pucks allow for the cheap and easy creation and customisation of an endless variety of domain-specific physical instructions.
    Base Boards: a modular design for diverse tangible programming experiences
    Base Boards read a Puck’s instruction through a capacitive sensor. They act as a conduit for a Puck’s command to the Brain Board. Base Boards are modular and can be connected in sequence and in different orientations to create different programming flows and experiences.
    The modularity of the Base Boards means they can be arranged in different configurations and flows
    Each Base Board is fitted with a haptic motor and LEDs that can be used to give end-users real time feedback on their programming experience. The Base Boards can also trigger audio feedback from the Brain Board’s built-in speaker.

    Brain Board: control any device that has an API over WiFi or Bluetooth
    The Brain Board is the processing unit of the system, built on a Raspberry Pi Zero. It also provides the other boards with power, and contains an API to receive and send data to the Base Boards. It sends the Base Boards’ instructions to any device with WiFi or Bluetooth connectivity and an API.
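    Project Bloks hasn't published a device-facing API yet, so the following is a purely hypothetical sketch of the idea: a Brain Board-style controller collects the instruction sequence read from the Base Boards and forwards it as JSON over HTTP to a connected device. The endpoint and message format are invented for illustration only.

        import json
        import urllib.request

        # Hypothetical instruction sequence assembled from the Pucks on the Base Boards.
        program = ["turn on", "move left", "jump"]

        # Forward the program to a connected device over WiFi (invented endpoint and schema).
        request = urllib.request.Request(
            "http://toy.local/api/run",
            data=json.dumps({"program": program}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request)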

    As a whole, the Project Bloks system can take on different form factors and be made out of different materials. This means developers have the flexibility to create diverse experiences that can help kids develop computational thinking: from composing music using functions to playing around with sensors or anything else they care to invent.
    The Project Bloks system can be used to create all sorts of different physical programming experiences for kids
    The Coding Kit
    To show how designers, developers, and researchers might make use of the system, the Project Bloks team worked with IDEO to create a reference device, called the Coding Kit. It lets kids learn basic concepts of programming by allowing them to put code bricks together to create a set of instructions that can be sent to control connected toys and devices—anything from a tablet, to a drawing robot or educational tools for exploring science like LEGO® Education WeDo 2.0.
    What’s next?
    We are looking for participants (educators, developers, parents and researchers) from around the world who would like to help shape the future of Computer Science education by remotely taking part in our research studies later in the year. If you would like to be part of our research study or simply receive updates on the project, please sign up.

    If you want more context and detail on Project Bloks, you can read our position paper.

    Finally, a big thank you to the team beyond Google who’ve helped us get this far—including the pioneers of tangible learning and programming who’ve inspired us and informed so much of our thinking.



