Newsfeeds from around the industry
Google Research Blog
The latest news on Google Research.

  • Google Faculty Research Awards: Summer 2015
    Posted by Maggie Johnson, Director of Education and University Relations

    We have just completed another round of the Google Faculty Research Awards, our annual open call for research proposals on Computer Science and related topics, including systems, machine learning, software engineering, security and mobile. Our grants cover tuition for a graduate student and provide both faculty and students the opportunity to work directly with Google researchers and engineers.

    This round we received 805 proposals, about the same as last round, covering 48 countries on 6 continents. After expert reviews and committee discussions, we decided to fund 113 projects, with 27% of the funding awarded to universities outside the U.S. The subject areas that received the highest level of support were systems, machine perception, software engineering, and machine learning.

    The Faculty Research Awards program plays a critical role in building and maintaining strong collaborations with top research faculty globally. These relationships allow us to keep a pulse on what’s happening in academia in strategic areas, and they help to extend our research capabilities and programs. Faculty also report, through our annual survey, that they and their students benefit from a direct connection to Google as a source of ideas and perspective.

    Congratulations to the well-deserving recipients of this round’s awards. If you are interested in applying for the next round (deadline is October 15), please visit our website for more information.


  • The Next Chapter for Flu Trends
    Posted by The Flu Trends Team

    When a small team of software engineers first started working on Flu Trends in 2008, we wanted to explore how real-world phenomena could be modeled using patterns in search queries. Since its launch, Google Flu Trends has provided useful insights and served as one of the early examples of “nowcasting” based on search trends, which is increasingly used in health, economics, and other fields. Over time, we’ve used search signals to create prediction models, updating and improving them as we compared our predictions to real-world cases of flu.
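
    To make the nowcasting idea concrete, here is a minimal sketch in Python: fit a linear model from search-query volumes to reported flu incidence, then estimate the current week before official counts arrive. The data and query features below are hypothetical, and Google’s actual models were considerably more sophisticated.

        # Minimal nowcasting sketch: regress flu incidence on query volumes.
        import numpy as np

        # Hypothetical training data: rows are past weeks, columns are
        # normalized volumes for two flu-related query groups; y is the
        # official influenza-like-illness (ILI) rate for those weeks.
        X = np.array([[0.8, 1.2],
                      [1.1, 1.5],
                      [1.9, 2.2],
                      [2.5, 2.9]])
        y = np.array([1.0, 1.4, 2.3, 3.1])

        # Ordinary least squares with an intercept column.
        A = np.hstack([X, np.ones((len(X), 1))])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)

        # "Nowcast" the current week from this week's query volumes alone,
        # before any official case counts are available (last entry is the
        # intercept term).
        this_week = np.array([2.1, 2.4, 1.0])
        print("estimated ILI rate:", this_week @ coef)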

    Instead of maintaining our own website going forward, we’re now going to empower institutions that specialize in infectious disease research to use the data to build their own models. Starting this season, we’ll provide Flu and Dengue signal data directly to partners including Columbia University’s Mailman School of Public Health (to update their dashboard), Boston Children’s Hospital/Harvard, and the Centers for Disease Control and Prevention (CDC) Influenza Division. We will also continue to make historical Flu and Dengue estimate data available for anyone to see and analyze.

    Flu continues to affect millions of people every year, and while it’s still early days for nowcasting and similar tools for understanding the spread of diseases like flu and dengue fever, we’re excited to see what comes next. To download the historical data or learn more about becoming a research partner, please visit the Flu Trends web page.


  • Pulling Back the Curtain on Google’s Network Infrastructure
    Posted by Amin Vahdat, Google Fellow

    Pursuing Google’s mission of organizing the world’s information to make it universally accessible and useful takes an enormous amount of computing and storage. In fact, it requires coordination across a warehouse-scale computer. Ten years ago, we realized that we could not purchase, at any price, a datacenter network that could meet the combination of our scale and speed requirements. So, we set out to build our own datacenter network hardware and software infrastructure. Today, at the ACM SIGCOMM conference, we are presenting a paper with the technical details on five generations of our in-house data center network architecture. It expands on a talk we presented at the Open Networking Summit a few months ago.

    From relatively humble beginnings, and after a misstep or two, we’ve built and deployed five generations of datacenter network infrastructure. Our latest-generation Jupiter network has improved capacity by more than 100x relative to our first-generation network, delivering more than 1 petabit/sec of total bisection bandwidth. This means that each of 100,000 servers can communicate with one another in an arbitrary pattern at 10Gb/s.
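
    As a quick back-of-the-envelope check on those figures, using only the numbers quoted above:

        # 100,000 servers, each sending at 10 Gb/s in an arbitrary pattern,
        # need 100,000 * 10 Gb/s = 1,000,000 Gb/s = 1 Pb/s of bisection
        # bandwidth (1 petabit = 10^6 gigabits).
        servers, per_server_gbps = 100_000, 10
        print(servers * per_server_gbps / 1e6, "Pb/s")  # -> 1.0 Pb/s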

    Such network performance has been tremendously empowering for Google services. Engineers were liberated from optimizing their code for various levels of bandwidth hierarchy. For example, engineers initially faced painful tradeoffs between careful data locality (placing servers under the same top-of-rack switch) and the correlated failures a single switch failure could cause. A high performance network supporting tens of thousands of servers with flat bandwidth also enabled individual applications to scale far beyond what was otherwise possible and enabled tight coupling among multiple federated services. Finally, we were able to substantially improve the efficiency of our compute and storage infrastructure. As quantified in this recent paper, scheduling a set of jobs over a single larger domain supports much higher utilization than scheduling the same jobs over multiple smaller domains.

    Delivering such a network meant we had to solve some fundamental problems in networking. Ten years ago, networking was defined by the interaction of individual hardware elements, e.g., switches, speaking standardized protocols to dynamically learn what the network looks like. Based on this dynamically learned information, switches would set their forwarding behavior. While robust, these protocols targeted deployment environments with perhaps tens of switches communicating between multiple organizations. Configuring and managing switches in such an environment was manual and error-prone. Changes in network state would spread slowly through the network using a high-overhead broadcast protocol. Most challenging of all, the system could not scale to meet our needs.

    We adopted a set of principles for organizing our networks that has since become the primary driver of networking research and industrial innovation: Software Defined Networking (SDN). We observed that we could arrange emerging merchant switch silicon in a Clos topology to scale to the bandwidth requirements of a data center building. The topology of all five generations of our data center networks follows this blueprint (illustrated in the paper). Unfortunately, this meant that we would potentially require 10,000+ individual switching elements. Even if we could overcome the scalability challenges of existing network protocols, managing and configuring such a vast number of switching elements would be impossible.
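
    To see where a figure like 10,000+ switching elements comes from, consider the textbook k-ary fat tree, one common instance of a folded-Clos topology. This is a standard construction used here for illustration, not necessarily the exact design of Google’s fabrics.

        # Switch and host counts for a k-ary fat tree, a common folded-Clos
        # instance. Each of k pods holds k/2 edge and k/2 aggregation
        # switches, with (k/2)^2 core switches on top; each edge switch
        # serves k/2 hosts.
        def fat_tree(k):
            edge = agg = k * (k // 2)
            core = (k // 2) ** 2
            hosts = k * (k // 2) * (k // 2)
            return edge + agg + core, hosts

        for k in (16, 32, 48, 64, 96):
            switches, hosts = fat_tree(k)
            print(f"k={k:2d}: {switches:6d} switches, {hosts:7d} hosts")
        # k=96 gives 11,520 switching elements for ~221,000 hosts, which is
        # why a building-scale Clos fabric quickly passes 10,000 elements.
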
    So, our next observation was that we knew the exact configuration of the network we were trying to build. Every switch would play a well-defined role in a larger ensemble. That is, from the perspective of a datacenter building, we wished to build a logical building-scale network. For network management, we built a single configuration for the entire network and pushed that single global configuration to all switches. Each switch would simply pull out its role in the larger whole. Finally, we transformed routing from a pair-wise, fully distributed (but somewhat slow and high overhead) scheme to a logically-centralized scheme under the control of a single dynamically-elected master, as illustrated in a figure in our paper.
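
    As a toy illustration of the management idea, here is a sketch in which every switch receives the same global configuration and extracts only its own role; all switch names and fields are invented for this example.

        # Toy sketch of "one global configuration, each switch pulls out
        # its role". Names and fields are invented for illustration.
        GLOBAL_CONFIG = {
            "sw-edge-007": {"role": "edge", "uplinks": ["sw-agg-01", "sw-agg-02"]},
            "sw-agg-01":   {"role": "agg",  "uplinks": ["sw-core-01"]},
            "sw-core-01":  {"role": "core", "uplinks": []},
        }

        def configure(switch_id):
            # Every switch receives the same building-wide config and
            # extracts only its own entry, instead of being configured
            # box by box.
            return GLOBAL_CONFIG[switch_id]

        print(configure("sw-edge-007"))
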
    Our SIGCOMM paper on Jupiter is one of four Google-authored papers being presented at the conference. Taken together, these papers show the benefits of embedding research within production teams, reflecting both collaborations with PhD students carrying out extended research efforts with Google engineers during internships and key insights from deployed production systems:

    • Our work on Bandwidth Enforcer shows how we can allocate wide area bandwidth among tens of thousands of individual applications based on centrally configured policy, substantially improving network utilization while simultaneously isolating services from one another.
    • Condor addresses the challenges of designing data center network topologies. Network designers can specify constraints for data center networks; Condor efficiently generates candidate network designs that meet these constraints, and evaluates these candidates against a variety of target metrics.
    • Congestion control in datacenter networks is challenging because of tiny buffers and very small round-trip times. TIMELY shows how to manage datacenter bandwidth allocation while keeping round-trip times low and the network highly responsive (a simplified sketch of the core idea follows this list).
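
    The following toy rate controller is in the spirit of TIMELY’s RTT-gradient idea, not the paper’s exact algorithm: a rising RTT signals queue build-up, so the sender backs off multiplicatively; otherwise it increases its rate additively. All constants are illustrative.

        # Toy RTT-gradient congestion control (TIMELY-inspired sketch).
        def update_rate(rate_gbps, rtt_us, prev_rtt_us,
                        add_step=0.5, beta=0.8, min_rtt_us=20.0):
            gradient = (rtt_us - prev_rtt_us) / min_rtt_us  # normalized slope
            if gradient <= 0:
                return rate_gbps + add_step                 # additive increase
            return rate_gbps * max(1.0 - beta * gradient, 0.1)  # back off

        rate, prev = 10.0, 25.0
        for rtt in (24.0, 26.0, 40.0, 30.0):
            rate = update_rate(rate, rtt, prev)
            prev = rtt
            print(f"rtt={rtt:5.1f} us -> rate={rate:5.2f} Gb/s")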

    These efforts reflect the latest in a long series of substantial Google contributions in networking. We are excited about being increasingly open about the results of our research work: to solicit feedback on our approach, to influence future research and development directions so that we can benefit from community-wide efforts to improve networking, and to attract the next generation of great networking thinkers and builders to Google. Our focus on Google Cloud Platform further increases the importance of being open about our infrastructure. Since the network that has powered Google’s infrastructure for a decade also underpins our Cloud Platform, all developers can leverage the network to build highly robust, manageable, and globally scalable services.


  • Say hello to the Enigma conference
    Posted by Elie Bursztein - Anti-abuse team, Parisa Tabriz - Chrome Security and Niels Provos - Security team

    USENIX Enigma is a new conference focused on security, privacy and electronic crime through the lens of emerging threats and novel attacks. The goal of this conference is to help industry, academic, and public-sector practitioners better understand the threat landscape. Enigma will have a single track of 30-minute talks that are curated by a panel of experts, featuring strong technical content with practical applications to current and emerging threats.

    Google is excited to both sponsor and help USENIX build Enigma, since we share many of its core principles: transparency, openness, and cutting-edge security research. Furthermore, we are proud to provide Enigma with engineering and design support, as well as volunteer participation in program and steering committees.

    The first instantiation of Enigma will be held January 25-27 in San Francisco. You can sign up for more information about the conference or propose a talk through the official conference site at http://enigma.usenix.org


  • KDD 2015 Best Research Paper Award: “Algorithms for Public-Private Social Networks”
    Posted by Corinna Cortes, Head, Google Research NY

    The 21st ACM conference on Knowledge Discovery and Data Mining (KDD’15), a main venue for academic and industry research in data management, information retrieval, data mining and machine learning, was held last week in Sydney, Australia. In the past several years, Google has been actively participating in KDD, with several Googlers presenting work at the conference in the research and industrial tracks. This year Googlers presented 12 papers at KDD (listed below), all of which are freely available at the ACM Digital Library.

    One of these papers, Efficient Algorithms for Public-Private Social Networks, co-authored by Googlers Ravi Kumar, Silvio Lattanzi, Vahab Mirrokni, former Google intern Alessandro Epasto, and research visitor Flavio Chierichetti, was awarded Best Research Paper. The inspiration for this paper comes from studying social networks and the importance of addressing privacy issues in analyzing such networks.

    Privacy issues dictate the way information is shared among the members of the social network. In the simplest case, a user can mark some of her friends as private; this would make the connections (edges) between this user and these friends visible only to the user. In a different instantiation of privacy, a user can be a member of a private group; in this case, all the edges among the group members are to be considered private. Thus, each user in the social network has her own view of the link structure of the network. These privacy issues also influence the way in which the network itself can be viewed and processed by algorithms. For example, one cannot use the list of private friends of user X for suggesting potential friends or public news items to another user on the network, but one can use this list for the purpose of suggesting friends for user X.

    As a result, enforcing these privacy guarantees translates to solving a different algorithmic problem for each user in the network, and for this reason, developing algorithms that process these social graphs and respect these privacy guarantees can become computationally expensive. In a recent study, Dey et al. crawled a snapshot of 1.4 million New York City Facebook users and reported that 52.6% of them hid their friends list. As more users make a larger portion of their social neighborhoods private, these computational issues become more important.

    Motivated by the above, this paper introduces the public-private model of graphs, where each user (node) in the public graph has an associated private graph. In this model, the public graph is visible to everyone, and the private graph at each node is visible only to each specific user. Thus, any given user sees their graph as a union of their private graph and the public graph.
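
    A minimal data-structure sketch of this model (illustrative only, not the paper’s implementation) makes the per-user view explicit:

        # Public-private graph sketch: one shared public edge set, plus a
        # per-user private edge set; a user's view is the union of the two.
        from collections import defaultdict

        class PublicPrivateGraph:
            def __init__(self):
                self.public = set()              # edges everyone can see
                self.private = defaultdict(set)  # owner -> edges only they see

            def add_public(self, u, v):
                self.public.add(frozenset((u, v)))

            def add_private(self, owner, u, v):
                self.private[owner].add(frozenset((u, v)))

            def view(self, user):
                # The graph as 'user' sees it: public edges plus their own.
                return self.public | self.private[user]

        g = PublicPrivateGraph()
        g.add_public("alice", "bob")
        g.add_private("alice", "alice", "carol")  # carol is a private friend
        print(len(g.view("alice")), len(g.view("bob")))  # -> 2 1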

    From an algorithmic point of view, the paper explores two powerful computational paradigms for efficiently studying large graphs, namely sketching and sampling, and focuses on key problems in social networks such as similarity ranking and clustering. In the sketching model, the paper shows how to efficiently approximate the neighborhood function, which in turn can be used to approximate various notions of centrality for each node; centrality scores such as PageRank have important applications in ranking and recommender systems. In the sampling model, the paper focuses on all-pairs shortest-path distances, node similarities, and correlation clustering, and develops algorithms that compute these notions efficiently on a given public-private graph. The paper also illustrates the effectiveness of the model and the computational efficiency of the algorithms through experiments on real-world social networks.
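
    To make the sketching paradigm concrete, the toy code below approximates the neighborhood function (the number of nodes within r hops of each node) with bottom-k distinct-count sketches, in the style of ANF-type algorithms; this illustrates the general technique rather than the paper’s exact construction. The appeal in the public-private setting is that sketches computed once on the public graph can be merged with a user’s private edges at query time.

        # ANF-style sketching: each node keeps a bottom-k sketch of hashes
        # of the nodes within r hops; merging neighbors' sketches extends
        # the radius by one.
        import hashlib

        K = 32  # sketch size; larger K gives better estimates

        def h(node):
            # Hash a node id to a float in [0, 1).
            d = hashlib.sha1(str(node).encode()).digest()
            return int.from_bytes(d[:8], "big") / 2**64

        def merge(*sketches):
            return sorted(set().union(*sketches))[:K]

        def estimate(sketch):
            # Bottom-k estimator: (K-1) / (K-th smallest hash); exact when
            # fewer than K distinct hashes have been seen.
            return len(sketch) if len(sketch) < K else (K - 1) / sketch[K - 1]

        def neighborhood_function(adj, radius):
            sketch = {v: [h(v)] for v in adj}  # radius 0: the node itself
            for _ in range(radius):
                sketch = {v: merge(sketch[v], *(sketch[u] for u in adj[v]))
                          for v in adj}
            return {v: estimate(s) for v, s in sketch.items()}

        # Tiny example graph (undirected, as adjacency lists).
        adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
        print(neighborhood_function(adj, 2))  # ~#nodes within 2 hops of each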

    The public-private model is an abstraction that can be used to develop efficient social network algorithms. This work leaves a number of interesting open research directions, such as obtaining efficient algorithms for the densest-subgraph/community-detection problems, influence maximization, computing other pairwise similarity scores, and, most importantly, recommendation systems.

    KDD’15 Papers, co-authored by Googlers:

    Efficient Algorithms for Public-Private Social Networks (Best Paper Award)
    Flavio Chierichetti, Alessandro Epasto, Ravi Kumar, Silvio Lattanzi, Vahab Mirrokni

    Large-Scale Distributed Bayesian Matrix Factorization using Stochastic Gradient MCMC
    Sungjin Ahn, Anoop Korattikara, Nathan Liu, Suju Rajan, Max Welling

    TimeMachine: Timeline Generation for Knowledge-Base Entities
    Tim Althoff, Xin Luna Dong, Kevin Murphy, Safa Alai, Van Dang, Wei Zhang

    Algorithmic Cartography: Placing Points of Interest and Ads on Maps
    Mohammad Mahdian, Okke Schrijvers, Sergei Vassilvitskii

    Stream Sampling for Frequency Cap Statistics
    Edith Cohen

    Dirichlet-Hawkes Processes with Applications to Clustering Continuous-Time Document Streams
    Nan Du, Mehrdad Farajtabar, Amr Ahmed, Alexander J. Smola, Le Song

    Adaptation Algorithm and Theory Based on Generalized Discrepancy
    Corinna Cortes, Mehryar Mohri, Andrés Muñoz Medina (now at Google)

    Estimating Local Intrinsic Dimensionality
    Laurent Amsaleg, Oussama Chelly, Teddy Furon, Stéphane Girard, Michael E. Houle, Ken-ichi Kawarabayashi, Michael Nett

    Unified and Contrasting Cuts in Multiple Graphs: Application to Medical Imaging Segmentation
    Chia-Tung Kuo, Xiang Wang, Peter Walker, Owen Carmichael, Jieping Ye, Ian Davidson

    Going In-depth: Finding Longform on the Web
    Virginia Smith, Miriam Connor, Isabelle Stanton

    Annotating needles in the haystack without looking: Product information extraction from emails
    Weinan Zhang, Amr Ahmed, Jie Yang, Vanja Josifovski, Alexander Smola

    Focusing on the Long-term: It's Good for Users and Business
    Diane Tang, Henning Hohnhold, Deirdre O'Brien

