The Question of Quantum Supremacy

Posted by Sergio Boixo, Research Scientist and Theory Team Lead, and Charles Neill, Quantum Electronics Engineer, Quantum A.I. Lab

Quantum computing integrates the two largest technological revolutions of the last half century: information technology and quantum mechanics. If we compute using the rules of quantum mechanics instead of binary logic, some intractable computational tasks become feasible. An important goal in the pursuit of a universal quantum computer is determining the smallest computational task that is prohibitively hard for today’s classical computers. This crossover point is known as the “quantum supremacy” frontier, and it is a critical step on the path to more powerful and useful computations.

In “Characterizing quantum supremacy in near-term devices,” published in Nature Physics (arXiv here), we present the theoretical foundation for a practical demonstration of quantum supremacy in near-term devices. The paper proposes the task of sampling bit-strings…
Original Post: The Question of Quantum Supremacy
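
The sampling task mentioned in the excerpt can be illustrated with a tiny classical simulation: apply a random unitary to the all-zeros state and sample bit-strings from the resulting output distribution. This is a toy sketch only, assuming a single Haar-random matrix in place of the structured random circuits the paper actually studies.

```python
import numpy as np

def sample_random_circuit(n_qubits=4, n_samples=8, seed=0):
    """Toy version of the sampling task: draw bit-strings from the output
    distribution of a random unitary applied to |0...0>.
    (Illustrative only -- not the paper's circuit construction.)"""
    rng = np.random.default_rng(seed)
    dim = 2 ** n_qubits
    # Haar-random unitary via QR decomposition of a complex Gaussian matrix.
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    u = q * (d / np.abs(d))
    # Output amplitudes for input |0...0> form the first column of U.
    probs = np.abs(u[:, 0]) ** 2
    probs /= probs.sum()  # guard against float round-off
    samples = rng.choice(dim, size=n_samples, p=probs)
    return [format(int(s), f"0{n_qubits}b") for s in samples]

print(sample_random_circuit())
```

Classically, the cost of computing `probs` grows as 2^n, which is exactly why sampling from a large enough quantum device becomes intractable to simulate.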

Announcing Open Images V4 and the ECCV 2018 Open Images Challenge

Posted by Vittorio Ferrari, Research Scientist, Machine Perception

In 2016, we introduced Open Images, a collaborative release of ~9 million images annotated with labels spanning thousands of object categories. Since its initial release, we’ve been hard at work updating and refining the dataset in order to provide a useful resource for the computer vision community to develop new models.

Today, we are happy to announce Open Images V4, containing 15.4M bounding boxes for 600 categories on 1.9M images, making it the largest existing dataset with object location annotations. The boxes have largely been drawn manually by professional annotators to ensure accuracy and consistency. The images are very diverse and often contain complex scenes with several objects (8 per image on average; visualizer). In conjunction with this release, we are also introducing the Open Images Challenge, a new object detection challenge to be held…
Original Post: Announcing Open Images V4 and the ECCV 2018 Open Images Challenge
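
The Open Images box annotations store coordinates normalized to [0, 1], so consumers typically rescale them to pixel coordinates per image. The sketch below assumes the V4 field names (XMin, XMax, YMin, YMax); check the release documentation for the exact CSV schema.

```python
# Minimal sketch: convert one normalized Open Images-style box to integer
# pixel coordinates for a concrete image size. Field ordering follows the
# assumed (XMin, XMax, YMin, YMax) layout, not a verified schema.
def box_to_pixels(xmin, xmax, ymin, ymax, width, height):
    """Return the box as (left, top, right, bottom) in pixels."""
    return (round(xmin * width), round(ymin * height),
            round(xmax * width), round(ymax * height))

# Example: a box covering the central half of a 1024x768 image.
print(box_to_pixels(0.25, 0.75, 0.25, 0.75, 1024, 768))  # (256, 192, 768, 576)
```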

Google at ICLR 2018

Posted by Jeff Dean, Google Senior Fellow, Head of Google Research and Machine Intelligence

This week, Vancouver, Canada hosts the 6th International Conference on Learning Representations (ICLR 2018), a conference focused on how one can learn meaningful and useful representations of data for machine learning. ICLR includes conference and workshop tracks, with invited talks along with oral and poster presentations of some of the latest research on deep learning, metric learning, kernel learning, compositional models, non-linear structured prediction, and issues regarding non-convex optimization.

At the forefront of innovation in cutting-edge neural network and deep learning technology, Google focuses on both theory and application, developing learning approaches to understand and generalize. As a Platinum Sponsor of ICLR 2018, Google will have a strong presence, with over 130 researchers attending, contributing to and learning from the broader academic research community by presenting papers and…
Original Post: Google at ICLR 2018

Announcing the Google Cloud Platform Research Credits Program

Posted by Steven Butschi, Head of Higher Education, Google

Scientists across nearly every discipline are researching ever larger and more complex data sets, using tremendous amounts of compute power to learn, make discoveries and build new tools that few could have imagined only a few years ago. Traditionally, this kind of research has been limited by the availability of resources, with only the largest universities or industry partners able to pursue these endeavors successfully. However, the power of cloud computing has been removing the obstacles that many researchers used to face, enabling projects that use machine learning tools to understand and address student questions, and that study robotic interactions with humans, among many more.

To ensure that more researchers have access to powerful cloud tools, we’re launching Google Cloud Platform (GCP) research credits, a new program aimed at supporting faculty in…
Original Post: Announcing the Google Cloud Platform Research Credits Program

Google’s Workshop on AI/ML Research and Practice in India

Posted by Pankaj Gupta and Anand Rangarajan, Engineering Directors, Google India

Last month, Google Bangalore hosted the Workshop on Artificial Intelligence and Machine Learning, with the goal of fostering collaboration between the academic and industry research communities in India. This forum was designed for the exchange of current research and industry projects in AI & ML, and included faculty and researchers from the Indian Institutes of Technology (IITs) and other leading universities in India, along with industry practitioners from Amazon, Delhivery, Flipkart, LinkedIn, Myntra, Microsoft, Ola and many more. Participants spoke on the research and work being undertaken in India in deep learning, computer vision, natural language processing, systems and generative models (you can access all the presentations from the workshop here).

Google’s Jeff Dean and Prabhakar Raghavan kicked off the workshop by sharing Google’s uses of deep learning to solve challenging problems and…
Original Post: Google’s Workshop on AI/ML Research and Practice in India

Introducing the CVPR 2018 On-Device Visual Intelligence Challenge

Posted by Bo Chen, Software Engineer, and Jeffrey M. Gilbert, Member of Technical Staff, Google Research

Over the past year, there have been exciting innovations in the design of deep networks for vision applications on mobile devices, such as the MobileNet model family and integer quantization. Many of these innovations have been driven by performance metrics that focus on meaningful user experiences in real-world mobile applications, which require inference to be both low-latency and accurate. While the accuracy of a deep network model can be conveniently estimated with well-established benchmarks in the computer vision community, latency is surprisingly difficult to measure, and no uniform metric has been established. This lack of measurement platforms and uniform metrics has hampered the development of performant mobile applications.

Today, we are happy to announce the On-device Visual Intelligence Challenge (OVIC), part of the Low-Power Image Recognition…
Original Post: Introducing the CVPR 2018 On-Device Visual Intelligence Challenge
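
The excerpt notes that latency is surprisingly difficult to measure. One reason is that single timings are noisy (caches, frequency scaling, background work), so a common approach is to warm up first and report a robust statistic over many runs. This is a generic illustrative sketch, not the challenge's official metric.

```python
import time

def median_latency_ms(fn, warmup=3, runs=20):
    """Report the median wall-clock latency of fn over repeated runs,
    after a few warm-up calls. Illustrative only: the median damps
    outliers, but a real benchmark also controls device state
    (thermal throttling, CPU governor) and pins the workload."""
    for _ in range(warmup):
        fn()
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    times.sort()
    return 1000.0 * times[len(times) // 2]

# Example with a stand-in "model": summing a range of integers.
latency = median_latency_ms(lambda: sum(range(10000)))
print(f"{latency:.3f} ms")
```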

DeepVariant Accuracy Improvements for Genetic Datatypes

Posted by Pi-Chuan Chang, Software Engineer, and Lizzie Dorfman, Technical Program Manager, Google Brain Team

Last December we released DeepVariant, a deep learning model trained to analyze genetic sequences and accurately identify the differences, known as variants, that make us all unique. Our initial post focused on how DeepVariant approaches “variant calling” as an image classification problem, and how it achieves greater accuracy than previous methods.

Today we are pleased to announce the launch of DeepVariant v0.6, which includes some major accuracy improvements. In this post we describe how we train DeepVariant, and how we improved DeepVariant’s accuracy for two common sequencing scenarios, whole exome sequencing and polymerase chain reaction sequencing, simply by adding representative data to DeepVariant’s training process.

Many Types of Sequencing Data

Approaches to genomic sequencing vary depending on the type of DNA sample…
Original Post: DeepVariant Accuracy Improvements for Genetic Datatypes

Introducing Semantic Experiences with Talk to Books and Semantris

Posted by Ray Kurzweil, Director of Engineering, and Rachel Bernstein, Product Manager, Google Research

Natural language understanding has evolved substantially in the past few years, in part due to the development of word vectors that enable algorithms to learn about the relationships between words based on examples of actual language usage. These vector models map semantically similar phrases to nearby points based on equivalence, similarity or relatedness of ideas and language. Last year, we used hierarchical vector models of language to make improvements to Smart Reply for Gmail. More recently, we’ve been exploring other applications of these methods.

Today, we are proud to share Semantic Experiences, a website showing two examples of how these new capabilities can drive applications that weren’t possible before. Talk to Books is an entirely new way to explore books by starting at the sentence level, rather than…
Original Post: Introducing Semantic Experiences with Talk to Books and Semantris
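
The core idea behind "mapping semantically similar phrases to nearby points" can be sketched as vector search: encode each phrase as a vector and rank candidates by cosine similarity to a query vector. The vectors below are made up for illustration; real systems such as the ones described use learned sentence encoders.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors: 1.0 means same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 3-d "sentence embeddings" (real encoders use hundreds of dims).
query = np.array([0.9, 0.1, 0.2])
candidates = {
    "a passage about a closely related idea": np.array([0.8, 0.2, 0.1]),
    "an unrelated passage":                   np.array([0.1, 0.9, 0.7]),
}

# Rank candidate passages by similarity to the query, best first.
ranked = sorted(candidates, key=lambda k: cosine(query, candidates[k]),
                reverse=True)
print(ranked[0])  # the semantically closer passage ranks first
```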

Seeing More with In Silico Labeling of Microscopy Images

Posted by Eric Christiansen, Senior Software Engineer, Google Research

In the fields of biology and medicine, microscopy allows researchers to observe details of cells and molecules that are invisible to the naked eye. Transmitted light microscopy, in which a biological sample is illuminated on one side and imaged, is relatively simple and well tolerated by living cultures, but produces images that can be difficult to assess properly. Fluorescence microscopy, in which biological objects of interest (such as cell nuclei) are specifically targeted with fluorescent molecules, simplifies analysis but requires complex sample preparation. With the increasing application of machine learning to microscopy, including algorithms used to automatically assess image quality and to assist pathologists in diagnosing cancerous tissue, we wondered if we could develop a deep learning system that combines the benefits of both microscopy techniques while minimizing their downsides.

With “In Silico…
Original Post: Seeing More with In Silico Labeling of Microscopy Images