Newsfeeds from around the industry
Google Research Blog
The latest news on Google Research.

  • Bringing Precision to the AI Safety Discussion
    Posted by Chris Olah, Google Research

    We believe that AI technologies are likely to be overwhelmingly useful and beneficial for humanity. But part of being a responsible steward of any new technology is thinking through potential challenges and how best to address any associated risks. So today we’re publishing a technical paper, Concrete Problems in AI Safety, a collaboration among scientists at Google, OpenAI, Stanford and Berkeley.

    While possible AI safety risks have received a lot of public attention, most previous discussion has been very hypothetical and speculative. We believe it’s essential to ground concerns in real machine learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably.

    We’ve outlined five problems we think will be very important as we apply AI in more general circumstances. These are all forward-thinking, long-term research questions -- minor issues today, but important to address for future systems:

    • Avoiding Negative Side Effects: How can we ensure that an AI system will not disturb its environment in negative ways while pursuing its goals, e.g. a cleaning robot knocking over a vase because it can clean faster by doing so?
    • Avoiding Reward Hacking: How can we avoid gaming of the reward function? For example, we don’t want this cleaning robot simply covering over messes with materials it can’t see through (a toy sketch of this failure mode follows this list).
    • Scalable Oversight: How can we efficiently ensure that a given AI system respects aspects of the objective that are too expensive to be frequently evaluated during training? For example, if an AI system gets human feedback as it performs a task, it needs to use that feedback efficiently because asking too often would be annoying.
    • Safe Exploration: How do we ensure that an AI system doesn’t make exploratory moves with very negative repercussions? For example, maybe a cleaning robot should experiment with mopping strategies, but clearly it shouldn’t try putting a wet mop in an electrical outlet.
    • Robustness to Distributional Shift: How do we ensure that an AI system recognizes when it is in an environment very different from its training environment, and behaves robustly there? For example, heuristics learned on a factory floor may not be safe enough for an office.
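
    To make reward hacking concrete, here is a minimal toy sketch (an illustration added here, not from the paper): the reward below only measures visible mess, so the agent scores better by covering dirt than by actually cleaning it.

    ```python
    # Toy reward-hacking illustration (hypothetical example, not from the paper).
    # The reward only penalizes *visible* dirt, so "cover" games it.

    def step(state, action):
        dirt, covered = state
        if action == "clean":
            dirt = max(0, dirt - 1)          # real progress, one unit at a time
        elif action == "cover":
            covered = True                   # hides everything, cleans nothing
        visible_dirt = 0 if covered else dirt
        reward = -visible_dirt               # flawed proxy the robot optimizes
        return (dirt, covered), reward

    state = (5, False)                       # five units of dirt, nothing covered
    for action in ("clean", "cover"):
        _, r = step(state, action)
        print(f"{action}: reward {r}")       # clean -> -4, cover -> 0: covering "wins"
    ```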

    We go into more technical detail in the paper. The machine learning research community has already thought quite a bit about most of these problems and many related issues, but we think there’s a lot more work to be done.

    We believe in rigorous, open, cross-institution work on how to build machine learning systems that work as intended. We’re eager to continue our collaborations with other research groups to make positive progress on AI.


  • ICML 2016 & Research at Google
    Posted by Afshin Rostamizadeh, Research Scientist

    This week, New York hosts the 2016 International Conference on Machine Learning (ICML 2016), a premier annual Machine Learning event supported by the International Machine Learning Society (IMLS). Machine Learning is a key focus area at Google, with highly active research groups exploring virtually all aspects of the field, including deep learning and more classical algorithms.

    We work on an extremely wide variety of machine learning problems that arise from a broad range of applications at Google. One particularly important setting is that of large-scale learning, where we utilize scalable tools and architectures to build machine learning systems that work with large volumes of data that often preclude the use of standard single-machine training algorithms. In doing so, we are able to solve deep scientific problems and engineering challenges, exploring theory as well as application, in areas of language, speech, translation, music, visual processing and more.

    As Gold Sponsor, Google has a strong presence at ICML 2016, with many Googlers publishing their research and hosting workshops. If you’re attending, we hope you’ll visit the Google booth and talk with our researchers to learn more about the exciting work, creativity, and fun that go into solving interesting ML problems that impact millions of people. You can also learn more about our research being presented at ICML 2016 in the list below (Googlers highlighted in blue).

    ICML 2016 Organizing Committee
    Area Chairs include: Corinna Cortes, John Blitzer, Maya Gupta, Moritz Hardt, Samy Bengio

    IMLS
    Board Members include: Corinna Cortes

    Accepted Papers
    ADIOS: Architectures Deep In Output Space
    Moustapha Cisse, Maruan Al-Shedivat, Samy Bengio

    Associative Long Short-Term Memory
    Ivo Danihelka (Google DeepMind), Greg Wayne (Google DeepMind), Benigno Uria (Google DeepMind), Nal Kalchbrenner (Google DeepMind), Alex Graves (Google DeepMind)

    Asynchronous Methods for Deep Reinforcement Learning
    Volodymyr Mnih (Google DeepMind), Adria Puigdomenech Badia (Google DeepMind), Mehdi Mirza, Alex Graves (Google DeepMind), Timothy Lillicrap (Google DeepMind), Tim Harley (Google DeepMind), David Silver (Google DeepMind), Koray Kavukcuoglu (Google DeepMind)

    Binary embeddings with structured hashed projections
    Anna Choromanska, Krzysztof Choromanski, Mariusz Bojarski, Tony Jebara, Sanjiv Kumar, Yann LeCun

    Discrete Distribution Estimation Under Local Privacy
    Peter Kairouz, Keith Bonawitz, Daniel Ramage

    Dueling Network Architectures for Deep Reinforcement Learning (Best Paper Award recipient)
    Ziyu Wang (Google DeepMind), Nando de Freitas (Google DeepMind), Tom Schaul (Google DeepMind), Matteo Hessel (Google DeepMind), Hado van Hasselt (Google DeepMind), Marc Lanctot (Google DeepMind)

    Exploiting Cyclic Symmetry in Convolutional Neural Networks
    Sander Dieleman (Google DeepMind), Jeffrey De Fauw (Google DeepMind), Koray Kavukcuoglu (Google DeepMind)

    Fast Constrained Submodular Maximization: Personalized Data Summarization
    Baharan Mirzasoleiman, Ashwinkumar Badanidiyuru, Amin Karbasi

    Greedy Column Subset Selection: New Bounds and Distributed Algorithms
    Jason Altschuler, Aditya Bhaskara, Gang Fu, Vahab Mirrokni, Afshin Rostamizadeh, Morteza Zadimoghaddam

    Horizontally Scalable Submodular Maximization
    Mario Lucic, Olivier Bachem, Morteza Zadimoghaddam, Andreas Krause

    Continuous Deep Q-Learning with Model-based Acceleration
    Shixiang Gu, Timothy Lillicrap (Google DeepMind), Ilya Sutskever, Sergey Levine

    Meta-Learning with Memory-Augmented Neural Networks
    Adam Santoro (Google DeepMind), Sergey Bartunov, Matthew Botvinick (Google DeepMind), Daan Wierstra (Google DeepMind), Timothy Lillicrap (Google DeepMind)

    One-Shot Generalization in Deep Generative Models
    Danilo Rezende (Google DeepMind), Shakir Mohamed (Google DeepMind), Daan Wierstra (Google DeepMind)

    Pixel Recurrent Neural Networks (Best Paper Award recipient)
    Aaron Van den Oord (Google DeepMind), Nal Kalchbrenner (Google DeepMind), Koray Kavukcuoglu (Google DeepMind)

    Pricing a low-regret seller
    Hoda Heidari, Mohammad Mahdian, Umar Syed, Sergei Vassilvitskii, Sadra Yazdanbod

    Primal-Dual Rates and Certificates
    Celestine Dünner, Simone Forte, Martin Takac, Martin Jaggi

    Recommendations as Treatments: Debiasing Learning and Evaluation
    Tobias Schnabel, Thorsten Joachims, Adith Swaminathan, Ashudeep Singh, Navin Chandak

    Recycling Randomness with Structure for Sublinear Time Kernel Expansions
    Krzysztof Choromanski, Vikas Sindhwani

    Train faster, generalize better: Stability of stochastic gradient descent
    Moritz Hardt, Ben Recht, Yoram Singer

    Variational Inference for Monte Carlo Objectives
    Andriy Mnih (Google DeepMind), Danilo Rezende (Google DeepMind)

    Workshops
    Abstraction in Reinforcement Learning
    Organizing Committee: Daniel Mankowitz, Timothy Mann (Google DeepMind), Shie Mannor
    Invited Speaker: David Silver (Google DeepMind)

    Deep Learning Workshop
    Organizers: Antoine Bordes, Kyunghyun Cho, Emily Denton, Nando de Freitas (Google DeepMind), Rob Fergus
    Invited Speaker: Raia Hadsell (Google DeepMind)

    Neural Networks Back To The Future
    Organizers: Léon Bottou, David Grangier, Tomas Mikolov, John Platt

    Data-Efficient Machine Learning
    Organizers: Marc Deisenroth, Shakir Mohamed (Google DeepMind), Finale Doshi-Velez, Andreas Krause, Max Welling

    On-Device Intelligence
    Organizers: Vikas Sindhwani, Daniel Ramage, Keith Bonawitz, Suyog Gupta, Sachin Talathi
    Invited Speakers: Hartwig Adam, H. Brendan McMahan

    Online Advertising Systems
    Organizing Committee: Sharat Chikkerur, Hossein Azari, Edoardo Airoldi
    Opening Remarks: Hossein Azari
    Invited Speakers: Martin Pál, Todd Phillips

    Anomaly Detection 2016
    Organizing Committee: Nico Goernitz, Marius Kloft, Vitaly Kuznetsov

    Tutorials
    Deep Reinforcement Learning
    David Silver (Google DeepMind)

    Rigorous Data Dredging: Theory and Tools for Adaptive Data Analysis
    Moritz Hardt, Aaron Roth


  • Announcing Google Research, Europe
    Posted by Emmanuel Mogenet, Head of Google Research, Europe

    Google’s ongoing research in Machine Intelligence is what powers many of the products being used by hundreds of millions of people a day - from Translate to Photo Search to Smart Reply for Inbox. One of the things that enables these advances is the extensive collaboration between the Google researchers in our offices across the world, all contributing their unique knowledge and disseminating ideas in state-of-the-art Machine Learning (ML) technologies and techniques in order to develop useful tools and products.

    Today, we’re excited to announce a dedicated Machine Learning research group in Europe, based in our Zurich office. Google Research, Europe, will foster an environment where software engineers and researchers specialising in ML will have the opportunity to develop products and conduct research right here in Europe, as part of the wider efforts at Google.

    Zurich is already the home of Google’s largest engineering office outside the US, and is responsible for developing the engine that powers Knowledge Graph, as well as the conversation engine that powers the Google Assistant in Allo. In addition to continued collaboration with Google’s various research teams, Google Research, Europe will be focused on three key areas:

    In pursuit of these areas, the team will actively research ways to improve ML infrastructure, broadly facilitating research for the community and enabling it to be put to practical use. Furthermore, researchers in the Zurich office will be uniquely able to work closely with team linguists, advancing Natural Language Understanding in collaboration with Google Research groups across the world, all while enjoying Mountain Views of a different kind.

    Europe is home to some of the world’s premier technical universities, making it an ideal place to build a top-notch research team. We look forward to engaging with the excellent Computer Science research coming from the region, and hope to contribute to the wider academic community through our publications and academic support.


  • Quantum annealing with a digital twist
    Posted by Rami Barends and Alireza Shabani, Quantum Electronics Engineers

    One of the key benefits of quantum computing is that it has the potential to solve some of the most complex problems in nature, from physics to chemistry to biology. For example, when attempting to calculate protein folding, or when exploring reaction catalysts and “designer” molecules, one can treat the computational challenge as an optimization problem, and represent the different configurations of a molecule as an energy landscape in a quantum computer. By letting the system cool, or “anneal”, one finds the lowest energy state in the landscape - the most stable form of the molecule. Thanks to the peculiarities of quantum mechanics, the correct answer simply drops out at the end of the quantum computation. Many tough problems can be dealt with this way, and this combination of simplicity and generality makes the approach appealing.

    But finding the lowest energy state in a system is like being put in the Alps, and being told to find the lowest elevation - it’s easy to get stuck in a “local” valley, and not know that there is an even lower point elsewhere. Therefore, we use a different approach: We start with a very simple energy landscape - a flat meadow - and initialize the system of quantum bits (qubits) to represent the known lowest energy point, or “ground state”, in that landscape. We then begin to adjust the simple landscape towards one that represents the problem we are trying to solve - from the smooth meadow to the highly uneven terrain of the Alps. Here’s the fun part: if one evolves the landscape very slowly, the ground state of the qubits also evolves, so that they stay in the ground state of the changing system. This is called “adiabatic quantum computing”, and qubits exploit quantum tunneling to ensure they always find the lowest energy "valley" in the changing system.
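
    To make the adiabatic picture concrete, here is a minimal numerical sketch (an illustration added here with assumed parameters, not the paper's code): three simulated qubits are dragged from the ground state of a simple transverse-field "meadow" Hamiltonian toward a random Ising "Alps" Hamiltonian, and we check how much of the final state remains in the true ground state.

    ```python
    # Minimal sketch of adiabatic evolution on 3 simulated qubits (illustrative
    # only; Hamiltonians, schedule, and timings are assumptions).
    import numpy as np
    from scipy.linalg import expm

    I2 = np.eye(2)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    def op(single, site, n):
        """Embed a single-qubit operator at `site` in an n-qubit system."""
        out = np.array([[1.0 + 0j]])
        for k in range(n):
            out = np.kron(out, single if k == site else I2)
        return out

    n = 3
    rng = np.random.default_rng(0)
    H0 = -sum(op(X, k, n) for k in range(n))      # flat "meadow": driver Hamiltonian
    h = rng.choice([-1.0, 1.0], n)                # local "directions" for each qubit
    J = rng.choice([-1.0, 1.0], n)                # ring couplings between neighbors
    Hp = sum(h[k] * op(Z, k, n) for k in range(n)) + \
         sum(J[k] * op(Z, k, n) @ op(Z, (k + 1) % n, n) for k in range(n))  # the "Alps"

    psi = np.ones(2 ** n, dtype=complex) / np.sqrt(2 ** n)  # ground state of H0

    T, steps = 50.0, 1000                         # total anneal time and time slices
    dt = T / steps
    for j in range(steps):
        s = (j + 0.5) / steps                     # slowly morph meadow -> Alps
        psi = expm(-1j * ((1 - s) * H0 + s * Hp) * dt) @ psi

    vals, vecs = np.linalg.eigh(Hp)
    ground = vecs[:, np.isclose(vals, vals[0])]   # ground space (may be degenerate)
    pop = float(np.sum(np.abs(ground.conj().T @ psi) ** 2))
    print(f"population left in the ground state: {pop:.3f}")  # near 1 if slow enough
    ```

    Slowing the schedule (a larger T) keeps the final state closer to the true ground state, which is the content of the adiabatic theorem.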

    While this is great in theory, getting this to work in practice is challenging, as you have to set up the energy landscape using the available qubit interactions. Ideally you’d have multiple interactions going on between all of the qubits, but for a large-scale solver the requirements to accurately keep track of these interactions become enormous. Realistically, the connectivity has to be reduced, but this presents a major limitation for the computational possibilities.

    In "Digitized adiabatic quantum computing with a superconducting circuit", published in Nature, we’ve overcome this obstacle by giving quantum annealing a digital twist. With a limited connectivity between qubits you can still construct any of the desired interactions: Whether the interaction is ferromagnetic (the quantum bits prefer an aligned) or antiferromagnetic (anti-aligned orientation), or even defined along an arbitrary different direction, you can make it happen using easy to combine discrete building blocks. In this case, the blocks we use are the logic gates that we've been developing with our superconducting architecture.
    Superconducting quantum chip with nine qubits. Each qubit (cross-shaped structures in the center) is connected to its neighbors and individually controlled. Photo credit: Julian Kelly.
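
    In the toy setting sketched above, the "digital twist" amounts to replacing each continuous time slice with two discrete steps, a first-order Trotter splitting; on the chip, each factor is then compiled into single-qubit rotations and two-qubit logic gates. This snippet (illustrating only the splitting, not the paper's gate sequence) reuses the definitions from the previous one:

    ```python
    # Digitized anneal: split each slice into a driver step and a problem step
    # (first-order Trotter). Reuses n, H0, Hp, steps, dt, expm from above.
    psi = np.ones(2 ** n, dtype=complex) / np.sqrt(2 ** n)
    for j in range(steps):
        s = (j + 0.5) / steps
        psi = expm(-1j * (1 - s) * dt * H0) @ psi   # layer of single-qubit gates
        psi = expm(-1j * s * dt * Hp) @ psi         # layer of entangling (ZZ) gates
    ```
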
    The key is controllability. Qubits, like other physical objects in nature, have a resonance frequency, and can be addressed individually with short voltage and current pulses. In our architecture we can steer this frequency, much like you would tune a radio to a broadcast. We can even tune one qubit to the frequency of another one. By moving qubit frequencies to or away from each other, interactions can be turned on or off. The exchange of quantum information resembles a relay race, where the baton can be handed down when the runners meet.

    You can see the algorithm in action below. Any problem is encoded as local “directions” we want qubits to point to - like a weathervane pointing into the wind - and interactions, depicted here as links between the balls. We start by aligning all qubits into the same direction, and the interactions between the qubits turned off - this is the simplest ground state of the system. Next, we turn on interactions and change qubit directions to start evolving towards the energy landscape we wish to solve. The algorithmic steps are implemented with many control pulses, illustrating how the problem gets solved in a giant dance of quantum entanglement.
    Top: Depiction of the problem, with the gold arrows in the blue balls representing the directions we’d like each qubit to align to, like a weathervane pointing to the wind. The thickness of the link between the balls indicates the strength of the interaction - red denotes a ferromagnetic link, and blue an antiferromagnetic link. Middle: Implementation with qubits (yellow crosses) with control pulses (red) and steering the frequency (vertical direction). Qubits turn blue when there is interaction. The qubits turn green when they are being measured. Bottom: Zoom in of the physical device, showing the corresponding nine qubits (cross-shaped).
    To run the adiabatic quantum computation efficiently and design a set of test experiments, we teamed up with the QUTIS group at the University of the Basque Country in Bilbao, Spain, led by Prof. E. Solano and Dr. L. Lamata, who are experts in synthesizing digital algorithms. The result is the largest digital algorithm to date, with up to nine qubits and over one thousand logic gates.

    The crucial advantage for the future is that this digital implementation is fully compatible with known quantum error correction techniques, and can therefore be protected from the effects of noise. Otherwise, the noise will set a hard limit, as even the slightest amount can derail the state from following the fragile path to the solution. Since each quantum bit and interaction element can add noise to the system, some of the most important problems are well beyond reach, as they have many degrees of freedom and need a high connectivity. But with error correction, this approach becomes a general-purpose algorithm which can be scaled to an arbitrarily large quantum computer.


  • Motion Stills – Create beautiful GIFs from Live Photos
    Posted by Ken Conley and Matthias Grundmann, Machine Perception

    Today we are releasing Motion Stills, an iOS app from Google Research that acts as a virtual camera operator for your Apple Live Photos. We use our video stabilization technology to freeze the background into a still photo or create sweeping cinematic pans. The resulting looping GIFs and movies come alive, and can easily be shared via messaging or on social media.
    With Motion Stills, we provide an immersive stream experience that makes your clips fun to watch and share. You can also tell stories of your adventures by combining multiple clips into a movie montage. All of this works right on your phone, no Internet connection needed.
    A Live Photo before and after stabilization with Motion Stills
    How does it work?
    We pioneered this technology by stabilizing hundreds of millions of videos and creating GIF animations from photo bursts. Our algorithm uses linear programming to compute a virtual camera path that is optimized to recast videos and bursts as if they were filmed using stabilization equipment, yielding a still background or creating cinematic pans to remove shakiness.
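
    As a rough illustration of the idea (a 1-D simplification added here, with assumed weights and crop size, not the production formulation), a linear program can pick a new camera path that stays inside a crop window around the shaky input while minimizing the L1 norm of its first and second differences, which favors still segments and constant-velocity pans:

    ```python
    # 1-D sketch of L1-optimal camera-path smoothing via a linear program
    # (a simplification; path model, weights, and crop size are assumptions).
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(1)
    T = 60
    c = np.cumsum(rng.normal(0, 1.0, T))   # shaky input camera path (1-D)
    crop = 3.0                             # allowed deviation (the crop window)

    # Variables: [p_0..p_{T-1}, u_0..u_{T-2}, v_0..v_{T-3}] where
    # u_t >= |p_{t+1} - p_t| and v_t >= |p_{t+2} - 2 p_{t+1} + p_t|.
    nu, nv = T - 1, T - 2
    cost = np.concatenate([np.zeros(T), np.ones(nu), 10.0 * np.ones(nv)])

    A, b = [], []
    for t in range(nu):                    # +-(p_{t+1} - p_t) - u_t <= 0
        for sgn in (+1.0, -1.0):
            row = np.zeros(T + nu + nv)
            row[t + 1] += sgn
            row[t] += -sgn
            row[T + t] = -1.0
            A.append(row); b.append(0.0)
    for t in range(nv):                    # +-(p_{t+2} - 2 p_{t+1} + p_t) - v_t <= 0
        for sgn in (+1.0, -1.0):
            row = np.zeros(T + nu + nv)
            row[t + 2] += sgn
            row[t + 1] += -2.0 * sgn
            row[t] += sgn
            row[T + nu + t] = -1.0
            A.append(row); b.append(0.0)

    bounds = [(c[t] - crop, c[t] + crop) for t in range(T)] + [(0, None)] * (nu + nv)
    res = linprog(cost, A_ub=np.array(A), b_ub=np.array(b), bounds=bounds)
    p = res.x[:T]                          # the smoothed virtual camera path
    print("L1 shake, input vs stabilized:",
          np.abs(np.diff(c)).sum(), np.abs(np.diff(p)).sum())
    ```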

    Our challenge was to take technology designed to run distributed in a data center and shrink it down to run even faster on your mobile phone. We achieved a 40x speedup by using techniques such as temporal subsampling, decoupling of motion parameters, and using Google Research’s custom linear solver, GLOP. We obtain further speedup and conserve storage by computing low-resolution warp textures to perform real-time GPU rendering, just like in a videogame.
    Making it loop
    Short videos are perfect for creating loops, so we added loop optimization to bring out the best in your captures. Our approach identifies optimal start and end points, and also discards blurry frames. As an added benefit, this fixes “pocket shots” (footage of the phone being put back into the pocket).
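
    Here is a hedged sketch of what loop selection of this flavor can look like (a guess at the ingredients added here, not the app's actual algorithm; the file name, thresholds, and scoring are made up): score frames for sharpness with the variance of the Laplacian, drop blurry ones, then pick the pair of well-separated frames that look most alike so the loop wraps with the least visible seam.

    ```python
    # Sketch of loop-point selection (illustrative only; file name, thresholds,
    # and scoring are assumptions).
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("clip.mp4")     # hypothetical input clip
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    cap.release()

    # Sharpness score: variance of the Laplacian; low values indicate blur.
    sharp = [cv2.Laplacian(f, cv2.CV_64F).var() for f in frames]
    keep = [i for i, s in enumerate(sharp) if s > 0.5 * np.median(sharp)]

    # Pick the pair of sharp, well-separated frames that look most alike,
    # so the loop wraps around with the least visible seam.
    best, min_len = None, 15
    for a in keep:
        for b in keep:
            if b - a >= min_len:
                seam = np.mean((frames[a].astype(float) - frames[b].astype(float)) ** 2)
                if best is None or seam < best[0]:
                    best = (seam, a, b)
    if best:
        print("loop from frame", best[1], "to frame", best[2])
    ```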

    To keep the background steady while looping, Motion Stills has to separate the background from the rest of the scene. This is a difficult task when foreground elements occlude significant portions of the video, as in the example below. Our novel method classifies motion vectors into foreground (red) and background (green) in a temporally consistent manner. We use a cascade of motion models, moving our motion estimation from simple to more complex models and biasing our results along the way.
    Left: Original with virtual camera path (red rectangle) and motion classification: foreground (red) vs. background (green). Right: Motion Stills result.
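
    A minimal sketch of the classification idea (a simplification added here that uses a single homography fit with RANSAC as the background model; the app's temporally consistent cascade of motion models is more involved):

    ```python
    # Sketch of foreground/background motion classification using a single
    # homography + RANSAC as the background model (a simplification; not the
    # app's temporally consistent cascade of motion models).
    import cv2
    import numpy as np

    def classify_motion(prev_gray, cur_gray):
        # Track sparse feature points between consecutive frames.
        pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
        pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts0, None)
        good = status.ravel() == 1
        pts0, pts1 = pts0[good], pts1[good]

        # Fit the dominant (camera) motion; RANSAC inliers follow the camera.
        H, inliers = cv2.findHomography(pts0, pts1, cv2.RANSAC, 3.0)
        inliers = inliers.ravel().astype(bool)
        background = pts1[inliers]     # moves with the camera ("green")
        foreground = pts1[~inliers]    # independent object motion ("red")
        return background, foreground

    # Usage (hypothetical grayscale frames):
    # bg_pts, fg_pts = classify_motion(prev_gray, cur_gray)
    ```
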
    Try it out
    We’re excited to see what you can create with this app. From fun family moments to exciting adventures with friends, try it out and let us know what you think. Motion Stills is an on-device experience with no sign-in: even if you’re on top of a glacier without signal, you can see your results immediately. You can show us your favorite clips by using #motionstills on social media.

    This app is a way for us to experiment and iterate quickly on the technology needed for short video creation. Based on the feedback we receive, we hope to integrate this feature into existing products like Google Photos.

    Motion Stills is available on the App Store.


