Minimizing trial and error in the drug discovery process

In 1928, Alexander Fleming accidentally let his petri dishes go moldy, a mistake that would lead to the breakthrough discovery of penicillin and save the lives of countless people. From these haphazard beginnings, the pharmaceutical industry has grown into one of the most technically advanced and valuable sectors, driven by incredible progress in chemistry and molecular biology. Nevertheless, a great deal of trial and error still exists in the drug discovery process. With an estimated space of 10⁶⁰ small organic molecules that could be tried and tested, it is no surprise that finding useful compounds is difficult and that the process is full of costly dead ends and surprises.

The challenge of molecule design also lies at the heart of many applications outside pharmacology, including in the optimization of energy production, electronic displays, and plastics. Each of these fields has developed computational methods to search through molecular space and pinpoint useful leads that are followed up in the lab or in more detailed physical simulations. As a result, there are now vast libraries of molecules tagged with useful properties. This abundance of data has encouraged researchers to turn to data-driven approaches to reduce the degree of trial and error in chemical development. The aim of our paper, presented at the 2018 Conference on Neural Information Processing Systems (NeurIPS), is to investigate how recent advances in deep learning could help harness these libraries for new molecular design tasks.

Deep learning with molecular data

Figure 1: The chemical structure of naturally occurring penicillin (penicillin G) and its representation as a graph in a GGNN. The messages passed in the environment of a single node are shown as curved arrows, and the neural networks that transform the messages are shown as small squares. Repeated rounds of message passing allow each node to learn about its surroundings (gray circles).

Deep learning methods have revolutionized a range of applications requiring understanding or generation of unstructured data such as pictures, audio, and text from large datasets. Applying similar methods to organic molecules poses an interesting challenge because molecules contain a lot of structure that is not easy to concisely capture with flat text strings or images (although some schemes do exist). Instead, organic chemists typically represent molecules as a graph where nodes represent atoms and edges represent covalent bonds between atoms. Recently, a class of methods that have collectively become known as neural message passing has been developed precisely to handle the task of deep learning on graph-structured data. The idea of these methods is to encode the local information, such as which element of the periodic table a node represents, into a low-dimensional vector at each node and then pass these vectors along the edges of the graph to inform each node about its neighbors (see Figure 1). Each message is channeled through small neural networks that are trained to extract and combine information to update the destination node’s vector representation to be informative for the downstream task. The message passing can be iterated to allow each node to learn about its more distant neighbors in the graph. Microsoft Research developed one of the earliest variants of this class of deep learning models—the gated graph neural network (GGNN). Microsoft’s primary application focus for GGNNs is in the Deep Program Understanding project, where they are used to analyze program source code (which can also be represented using graphs). Exactly the same underlying techniques are applicable to molecular graphs.
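The message-passing idea above can be sketched in a few lines of Python. This toy example is illustrative only: the weight matrix stands in for learned parameters, and a simple residual update replaces the GGNN's gated (GRU) update. It shows how one round of message passing lets each atom's vector absorb information from its bonded neighbors:

```python
import numpy as np

# Toy molecular graph: a water-like fragment (0=O, 1=H, 2=H).
# One-hot "element" features per node: [is_oxygen, is_hydrogen].
node_features = np.array([
    [1.0, 0.0],   # O
    [0.0, 1.0],   # H
    [0.0, 1.0],   # H
])

# Undirected edges represent covalent bonds: O-H and O-H.
edges = [(0, 1), (0, 2)]

# Stand-in for a learned message network: a fixed linear transform.
W_msg = np.array([[0.5, -0.2],
                  [0.1,  0.3]])

def message_passing_round(h, edges, W):
    """Each node sums transformed messages from its neighbors, then
    updates its state (here a residual update, not a learned GRU)."""
    messages = np.zeros_like(h)
    for u, v in edges:
        messages[v] += h[u] @ W   # message u -> v
        messages[u] += h[v] @ W   # message v -> u
    return h + messages

h1 = message_passing_round(node_features, edges, W_msg)
# After one round, the oxygen's state reflects its two hydrogen neighbors;
# repeating rounds would propagate information from more distant atoms.
print(h1[0])
```

Iterating `message_passing_round` corresponds to the repeated rounds shown in Figure 1, with each round widening the neighborhood each node has "seen."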

Generating molecules

Figure 2: Example molecules generated by our system after being trained on organic solar cell molecules (CEP database).

Broadly speaking, there are two types of questions that a machine learning system could try to solve in molecule design tasks. First, there are discriminative questions of the following form: What is the property Y of molecule X? A system trained to answer such questions can be used to compare given molecules by predicting their properties from their graph structure. Second, there are generative questions—what is the structure of molecule X that has the optimum property Y?—that aim to invent structures that are similar to molecules seen during training but that optimize for some property. The new paper concentrates on the latter, generative question; GGNNs have already shown great promise in the discriminative setting (for example, see the code available here).

The basic idea of the generative model is to start with an unconnected set of atoms and some latent “specification” vector for the desired molecule and gradually build up molecules by asking a GGNN to inspect the partial graph at each construction step and decide where to add new bonds to grow a molecule satisfying the specification. The two key challenges in this process are ensuring the output of chemically stable molecules and designing useful low-dimensional specification vectors that can be decoded into molecules by the generative GGNN and are amenable to continuous optimization techniques for finding locally optimal molecules.

For the first challenge, there are many chemical rules that dictate whether a molecular structure is stable. The simplest are the valence rules, which dictate how many bonds an element can make in a molecule. For example, carbon atoms have a valency of four and oxygen a valency of two. Inferring these known rigid rules from data and learning to never violate them in the generative process is a waste of the neural network’s capacity. Instead, in the new work, we simply incorporate known rules into the model, leaving the network free to discover the softer trends and patterns in the data. This approach allows injection of domain expertise and is particularly important in applications where there is not enough data to spend on relearning existing knowledge. We believe that combining this domain knowledge and machine learning will produce the best methods in the future.
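Baking valence into the generator can be illustrated with a minimal sketch (this is not the paper's implementation): at each construction step, mask out any bond addition that would push an atom past its allowed number of bonds, so the network only ever chooses among chemically valid actions.

```python
# Hard-coded valence rules: maximum number of bonds per element.
VALENCE = {"H": 1, "O": 2, "N": 3, "C": 4}

def allowed_new_bonds(atoms, bonds):
    """Return atom-index pairs that could be joined by a new single bond
    without violating either atom's valence. `bonds` holds (i, j, order)."""
    used = [0] * len(atoms)
    bonded = set()
    for i, j, order in bonds:
        used[i] += order
        used[j] += order
        bonded.add((min(i, j), max(i, j)))
    candidates = []
    for i in range(len(atoms)):
        for j in range(i + 1, len(atoms)):
            if (i, j) in bonded:
                continue
            if VALENCE[atoms[i]] - used[i] >= 1 and VALENCE[atoms[j]] - used[j] >= 1:
                candidates.append((i, j))
    return candidates

# Partial graph: a formaldehyde-like fragment C=O plus a free hydrogen.
# Oxygen's valence of two is already saturated by the double bond,
# so the only legal addition is a C-H bond.
print(allowed_new_bonds(["C", "O", "H"], [(0, 1, 2)]))
```

In the generative model, the network's output distribution over actions would simply be restricted to this candidate set, leaving its capacity free for the softer patterns in the data.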

Figure 3: Example molecule optimization trajectory when optimizing the quantitative estimate of drug-likeness (QED) of a molecule after training on the ZINC database. The initial molecule has a QED of 0.4, and the final molecule has a QED of 0.9.


For the second challenge, we used an architecture known as a variational autoencoder to discover a space of meaningful specification vectors. In this architecture, a discriminative GGNN is used to predict some property Y of a molecule X, and the internal vector representations in this discriminative GGNN are used as the specification vector for a generative GGNN. Since these internal representations contain information about both the structure of molecule X and the property Y, continuous optimization methods can be used to find the representation that optimizes property Y; the representation is then decoded to find useful molecules. Example molecules generated by the new system are shown in Figures 2 and 3.
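The continuous-optimization step can be illustrated with a toy stand-in for the trained property predictor (the quadratic below is purely hypothetical): gradient ascent moves the latent specification vector toward a region predicted to score well, and the generative GGNN would then decode the result back into a molecule.

```python
import numpy as np

# Hypothetical "property predictor" over a 2-D latent space, standing in
# for a trained network: the predicted property peaks at z = (1, -2).
TARGET = np.array([1.0, -2.0])

def predicted_property(z):
    return -np.sum((z - TARGET) ** 2)

def gradient(z):
    # Analytic gradient of the toy predictor (a real system would
    # backpropagate through the trained network instead).
    return -2.0 * (z - TARGET)

z = np.zeros(2)                    # latent code of some starting molecule
for _ in range(100):
    z = z + 0.1 * gradient(z)      # gradient ascent on the predicted property

print(np.round(z, 3))              # converges toward the optimum (1, -2)
```

In the actual architecture, the decoded molecule at the optimized `z` would resemble training molecules while scoring highly on property Y, as in the trajectory of Figure 3.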

Collaborating with experts

The results in the paper are very promising on simple molecule design tasks. However, deep learning methods for molecule generation are still in their infancy, and real-world molecule design is a very complicated process with many different objectives to consider, such as molecule efficacy, specificity, side effects, and production costs. To make significant further progress will require collaboration of machine learning experts and expert chemists. One of the main aims of this paper is to showcase the basic capabilities of deep learning in this space and thereby act as a starting point for dialogue with chemistry experts to see how these methods could enhance their productivity and have the most impact.

Toyota and thyssenkrupp breaking new ground with mixed reality

It’s been almost two months since we officially joined the Dynamics 365 family with the general availability of our first two mixed reality business applications: Dynamics 365 Remote Assist and Dynamics 365 Layout.

We are thrilled with the response from our partners and customers since our launch. It’s so much fun learning from and hearing about how others are forging their way using mixed reality.

For this month’s post, I wanted to take a quick moment to highlight a couple of customers who are early adopters and long-term partners: Toyota and thyssenkrupp. Both companies are breaking new ground while helping us usher in the era of mixed reality.

Toyota

Toyota is well known for their values of quality, safety and continuous improvement. With significant amounts of 3D CAD data, they understand the power of bringing the digital and physical worlds together. When they learned of mixed reality and Microsoft HoloLens, they began to look at processes that could be transformed.

As part of the painting process for their vehicles, Toyota performs a process called “film coating thickness inspection” to manage the thickness of the paint to ensure consistent quality of coating on every vehicle — a process that is performed regularly, even after mass production has started.

Before using HoloLens, Toyota performed film coating thickness inspection manually with automobile-sized paper patterns, similar to those used in dressmaking. The automotive pattern has approximately 500 holes, punched as measurement points based on existing CAD drawings. Because of the size, complexity and manual nature of the patterns, they were often error prone and took days to create. And, due to risk of dust and dirt from the paper patterns, the vehicles being inspected had to be removed from the manufacturing facility, adding more than a day to production timelines.

With the HoloLens solution, Toyota can now take their existing 3D CAD data used in the vehicle design process and project it directly onto the vehicle for measurements, optimizing existing processes and minimizing errors. In addition, the process can be standardized and replicated across their global operations. Using mixed reality, a process which previously took one to two days and multiple people to execute now takes four hours and one person.

Toyota proved the value of mixed reality and is currently trialing Dynamics 365 Layout to improve machinery layout within their facilities and Dynamics 365 Remote Assist to provide workers with expert support from off-site designers and engineers.

thyssenkrupp

We’ve partnered closely with thyssenkrupp since the very early days of HoloLens, jointly developing mixed reality solutions that could be implemented at scale. First, in the earliest iterations of Dynamics 365 Remote Assist, they were able to drastically improve response time, increase efficiency and raise elevator uptimes for many of their 24,000 thyssenkrupp elevator-service engineers by providing hands-free remote assistance — the length of their service calls was reduced fourfold. We continue to work closely with thyssenkrupp on current and future Dynamics 365 mixed reality applications as well as extending existing custom applications into broadly available solutions.

For example, thyssenkrupp has recently announced the broad rollout of HoloLinc, a first-of-its-kind, fully digitized sales process for the stair lift industry. After a pilot in the Netherlands with more than 300 successful installations to date, the HoloLinc solution is being rolled out from this month onward in the U.K., Germany, Belgium, France, Italy and Spain, with Norway and Japan to follow next year.

The rollout on a global level is seen as a major milestone in mass industrialization innovation. To date, thyssenkrupp Elevator has equipped 120 salespeople with the HoloLinc toolkit, which comprises a Microsoft HoloLens, a tablet, a portable printer, and other technical accessories. What’s most exciting is that HoloLinc improves a process for customers who need it most, as mobility issues have a huge impact on quality of life. By using mixed reality, a process which formerly took around 40 to 70 days can now be done in just 14 days. Now that is true digital transformation of a long-standing process!

I am looking forward to sharing more about the work our customers and partners are doing with mixed-reality business applications. As always, I’m available on Twitter (@lorrainebardeen) and eager to hear about what you’re doing with mixed reality.

Princeton and Microsoft collaborate to tackle fundamental challenges in microbiology

Princeton University has teamed up with Microsoft to collaborate on the leading edge of microbiology and computational modelling research.   

In this project, Microsoft is helping Princeton to better understand the mechanisms of biofilm formation by providing advanced technology that will greatly extend the type of research analysis possible today. Biofilms — surface-associated communities of bacteria — are the leading cause of microbial infection worldwide and kill as many people as cancer does. They are also a leading cause of antibiotic resistance, a problem highlighted by the World Health Organization as “a global crisis that we cannot ignore.” Understanding how biofilms form could enable new strategies to disrupt them.

Ned Wingreen

Ned Wingreen, the Howard A. Prior Professor in the Life Sciences and professor of molecular biology and the Lewis-Sigler Institute for Integrative Genomics.

To support Princeton, a Microsoft team led by Dr. Andrew Phillips, head of the Biological Computation group at Microsoft Research, will be working closely with Bonnie Bassler, a global pioneer in microbiology who is the Squibb Professor in Molecular Biology and chair of the Department of Molecular Biology at Princeton and a Howard Hughes Medical Institute Investigator, and with Ned Wingreen, the Howard A. Prior Professor in the Life Sciences and professor of molecular biology and the Lewis-Sigler Institute for Integrative Genomics.

Using the power of Microsoft’s cloud and advanced machine learning, Princeton will be able to study different strains of biofilms in new ways to better understand how they work. Microsoft is contributing a cloud-based prototype that can be used for biological modelling and experimentation that will be deployed at Princeton. This work combines programming languages and compilers, which generate biological protocols that can be executed using lab automation technology. It allows experimental data to be uploaded to the cloud where it can be analyzed at scale using advanced machine learning and data analysis methods, to generate biological knowledge. This in turn informs the design of subsequent experiments, to provide insight into the mechanisms of biofilm formation. Princeton is contributing world-leading expertise in experiments and modelling of microbial biofilms.  

“This collaboration enables us to bring together advances in computing and microbiology in powerful new ways,” said Brad Smith, president of Microsoft. “This partnership can help us unlock answers that we hope someday may help save millions of people around the world.”

“By combining our distinctive strengths, Princeton and Microsoft will increase our ability to make the discoveries needed to improve lives and serve society,” said Christopher L. Eisgruber, president of Princeton University. “Technology is creating new possibilities for collaboration, and we hope this venture will inspire other innovative partnerships in the years ahead.”

Pablo Debenedetti, Princeton’s dean for research, said: “We are delighted to be collaborating with Microsoft to advance scientific innovation with this new project, investigating the fundamentals that underlie urgent biomedical problems. Doing cutting-edge research that helps define the boundaries of knowledge and that could ultimately benefit society at large is what we strive for at Princeton.”

Princeton’s relationship with Microsoft is one of the University’s most extensive with industry, spanning collaborations in computer science, cybersecurity and now biomedical research.

As a global research university and leader in innovation, Princeton University cultivates mutually beneficial relationships with companies to support the University’s educational, scientific and scholarly mission. The University is guided by the principle that initiatives to fortify and connect with the innovation ecosystem will advance Princeton’s role as an internationally renowned institution of higher education and accelerate its ability to have greater impact in the world. 

Fishy business: Putting AI to work in Australia’s Darwin Harbour

Identifying and counting fish species in murky water filled with deadly predators is a difficult job. But fisheries scientists in the Northern Territory are working on an artificial intelligence project with Microsoft that has incredible potential for marine science around the world.

Your mission, should you choose to accept it, is to go into one of Australia’s largest harbours and count the fish. Think this sounds daunting? You don’t know the half of it.

First, there’s the water. There’s a lot of it in Darwin Harbour – five times more than Sydney Harbour, to be precise. Heavy tides swell more than seven metres then retract, leaving little visibility in their wake.

And if you think you’ve got some occupational hazards at work, try getting your job done in an environment teeming with some of the world’s most intimidating apex predators – saltwater crocodiles, along with tiger, bull and hammerhead sharks. More than 300 salties are caught in the harbour each year.

This is the daunting task of the Department of Primary Industry and Resources for the Northern Territory Government, as it goes about ensuring fisheries resources are sustainably managed and developed for future generations.

Murky water filled with deadly predators like the saltwater crocodile makes diving to count and identify fish species impossible.

“If you’re in the water with a crocodile you aren’t taking a calculated risk. You’re going to be a statistic. That’s it. If you’re in the water and he’s there, he wants you and you’re gone.” – Wayne Baldwin, Research Technical Officer, NT Fisheries

If shooting fish in a barrel is a metaphor for something all-too-easy, the correct metaphor for something exceptionally challenging might be counting fish in Darwin Harbour. Yet the NT Fisheries team, led by Dr Shane Penny, Fisheries Research Scientist, do it every day. As the old saying goes, you can’t manage what you can’t measure, so their work begins with knowing how many fish there are.

But they were bogged down by the time it took to wade through hours of underwater footage. The team needed to assess the abundance of critical fish species faster and more accurately, while maintaining a safe distance from deadly predators.

A meeting of the minds

It was from these murky depths that an innovative project showed the potential for artificial intelligence (AI) to support the important work being done by this team of marine biologists. Amid rising debate about the potential impact of AI on society, a collaboration between these scientists and Microsoft engineers became an opportunity to test out its powers as a force for good. Could technology hold the key to safely, accurately and rapidly counting fish – giving the NT Fisheries team more time to devote to analysing this data and improving the sustainable management of NT fish stocks?

The NT Fisheries team had high hopes. They had been using a baited remote underwater video (BRUV) to help with high-risk data gathering. The camera allows the team to see what’s in the water without going in. But even with BRUV on their side, the task was formidable.

Using a GoPro, researchers at NT Fisheries begin the process of assessing critical fish species.

Shane Penny, Fisheries Research Scientist and his team using baited underwater cameras.

“We’ve had quite a few problems with sharks coming in and taking the baits away. Tawny sharks have learned how to open our baits and suck it all out before we have a chance to collect any video.”
– Wayne Baldwin, Research Technical Officer, NT Fisheries

Then there was the sheer quantity of work involved. Once the video is collected, terabytes of footage must be viewed, and its content scoured and quantified. To put this in perspective, a single terabyte would store 500 hours of your favourite movies. The team was identifying vast quantities of different fish species and tracking their behaviour. This diversity and the murkiness of the water meant classification was often far from simple.

Steve van Bodegraven, a Microsoft machine learning engineer and Darwin local, worked with the NT Fisheries team over several months to see whether computer vision would be up to the ambitious task of identifying fish in underwater images.

In a similar way to how tags are suggested for friends and relatives in the photos you upload to social media – through repeated exposure and the discovery of patterns – the project’s success depended on feeding the system with training images. Along the way they had to confront an array of unusual problems. For example, how would Microsoft’s AI solution respond to fish like gold-spotted cod that can change colour to blend into their environment?
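The learn-from-labelled-examples idea can be caricatured in a few lines. This nearest-prototype toy is purely illustrative (the species names are from the article, but the feature vectors are invented and bear no relation to the actual Microsoft/NT Fisheries system): each species is summarized by the average of its labelled training examples, and a new image's features are matched to the closest species.

```python
import math

# Invented 2-D "feature vectors" for labelled training images.
training = {
    "golden snapper": [[0.9, 0.1], [0.8, 0.2]],
    "black jewfish":  [[0.1, 0.9], [0.2, 0.8]],
}

def prototype(vectors):
    """Average the training vectors for one species."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

prototypes = {label: prototype(vs) for label, vs in training.items()}

def classify(features):
    """Label a new image with the species whose prototype is nearest."""
    return min(prototypes, key=lambda label: math.dist(features, prototypes[label]))

print(classify([0.85, 0.15]))   # lands near the snapper examples
```

A production system would instead learn deep feature extractors from thousands of images, but the principle is the same: more labelled examples sharpen the boundaries between species.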

“We went in and talked to them about how they work and the challenges they face,” van Bodegraven says. “From that we tried to figure out how we could help. Everything we do is explorative, so we don’t necessarily have solutions out of the box.”

Three months and thousands of images later, the results are encouraging to the scientists. To date the system is showing great potential, having learnt to identify 15 different species, from black jewfish to golden snapper, which is under careful management to rebuild breeding stocks.

The AI solution automates the laborious process of counting local fish stocks by progressively learning to identify different varieties of fish.

“We threw a few test images of fish it’s never seen before and it’s managed to pull those out and differentiate them from the fish it does know about. Once we had that first positive identification of a fish, we really felt we were onto something. From there it was just a matter of finding the right tools to improve and optimise.”
– Dr Shane Penny, Fisheries Research Scientist

With each new fish analysed, the power of the machine learning technology increases. Samantha Nowland, the team’s Darwin-born research assistant, sees the potential for such systems to change the game in marine management.  NT has some of the most pristine waters in the world with healthy populations of endangered species such as sawfish and sharks. The development of this technology and its availability may help other areas of the world to improve their understanding of aquatic resources and ensure they are managed sustainably.

Beyond the harbour

While there’s already talk of using the system to create a global database of fish species, the NT Fisheries team is focused on analysing trends, coming up with management plans and expanding its reach.

“It’s going to help us monitor any marine species in Darwin Harbour and around the region,” Penny says. “We have a lot of endangered species and many more where we don’t have enough data. We need research projects that can identify species accurately.”

Microsoft’s van Bodegraven hopes it will open people’s eyes to the transformative potential of AI in fisheries and marine management and beyond. The project has already piqued the interest of fisheries departments across Australia, while the possibility of using the technology to monitor other animal species, like the iconic Kookaburra, is being actively explored.

Microsoft is also exploring how it could support similar projects elsewhere. By making the technology available as open source on GitHub, the technology giant is encouraging others to build AI solutions that address their unique scenarios.

“Projects like this set a new precedent. Hopefully it will make people curious and give them the confidence to explore the application of AI in their industries,” van Bodegraven says. “It’s going to change industries and societies. The potential is only limited by imagination.”

Steve van Bodegraven, Machine Learning Engineer at Microsoft and Dr Shane Penny, Fisheries Research Scientist at NT Fisheries review the identified fish species using the AI solution.

New breakthroughs in combatting tech support scams

On Nov. 27 and 28, over 100 local Indian law enforcement officials from Gurgaon and Noida raided 16 call center locations identified by Microsoft as engaged in tech support fraud, resulting in 39 arrests so far. These call center operations fraudulently represented themselves as affiliated with a number of respected companies including Microsoft, Apple, Google, Dell and HP. The New York Times reports that Senior Superintendent of Police Ajay Pal Sharma stated “the scammers had extracted money from thousands of victims, most of whom were American or Canadian.” Microsoft alone has received over 7,000 victim reports associated with these 16 locations from over 15 countries.

Anyone may receive an unwanted phone call or see a pop-up window on their device with a “warning” that their computer has a problem requiring immediate tech support. These messages are often very convincing and use scare tactics to entice consumers into contacting a fraudulent “tech support” call center. Call center operators typically encourage the victim to provide remote access to their device for “further diagnosis” before charging the victim a fee, typically between $150 and $499, for unnecessary tech support services. In addition to losing money, victims leave their computers vulnerable to other attacks, such as malware, during a remote access session.

This latest raid comes just six weeks after the successful raid operation by the Delhi Cyber Crime Cell of 10 call center locations resulting in the arrest of 24 individuals and the seizure of substantial evidence including call scripts, live chats, voice call recordings and customer records from tech support fraud operations. The case was also registered by the Delhi Cyber Crime Cell on the basis of a complaint by Microsoft.

Chart on fighting tech support scams

Tech support fraud operations typically involve multiple entities including those engaged in marketing, payment processing and call centers. Recent law enforcement successes in India build on a solid track record of global law enforcement taking action to combat the multiple layers of tech support fraud supported by referrals from Microsoft and other industry partners. For example, the U.S. Federal Trade Commission and multiple partners announced 16 separate civil and criminal enforcement actions against tech support fraudsters in May 2017 as part of “Operation Tech Trap.”  And, in June 2017, the City of London Police announced the arrest of four individuals engaged in computer software services fraud.

Our work to partner with law enforcement agencies in addressing this problem is driven by a combination of technology and action taken by our customers. In 2014, Microsoft launched an online “report a scam” portal to enable victims to share their tech support fraud experiences directly with our Digital Crimes Unit team. The reports have been a critical starting point for our international investigations and referrals. Our data analytics and innovation team has added additional tools to proactively hunt and pull data from approximately 150,000 suspicious pop-ups daily targeting millions of people and use machine learning to identify those related to tech support fraud.

In addition to making referrals to law enforcement based on this data, we are building what we learn about cybercriminals’ behavior into improved products and services for consumers. Windows 10 includes built-in protection with more security features, safer authentication and ongoing updates delivered for the supported lifetime of a device. Windows Defender delivers comprehensive, real-time protection against software threats across email, cloud and the web. The SmartScreen filter, built into Windows, Microsoft Edge and Internet Explorer, helps protect against malicious websites and downloads, including many of those frustrating pop-up windows.

People who have experienced tech support scams should know they aren’t alone, and there are steps you can take to identify and help defend yourself against criminals impersonating legitimate companies. According to our recently released 2018 global survey, three out of five consumers have experienced a tech support scam in the previous 12 months. This is movement in the right direction, a 5-point reduction since 2016, but these scams persist and successfully target people across all ages and geographies. The best thing you can do to help protect yourself from fraud is to educate yourself. If you receive a notification or call from someone claiming to be from a reputable software company, here are a few key tips to keep in mind:

  • Be wary of any unsolicited phone call or pop-up message on your device.
  • Microsoft will never proactively reach out to you to provide unsolicited PC or technical support. Any communication we have with you must be initiated by you.
  • Do not call the phone number in a pop-up window on your device and be cautious about clicking on notifications asking you to scan your computer or download software. Many scammers try to fool you into thinking their notifications are legitimate.
  • Never give control of your computer to a third party unless you can confirm that it is a legitimate representative of a computer support team with whom you are already a customer.
  • If skeptical, take the person’s information down and immediately report it to your local authorities.

New to Microsoft 365 in November: Tools to keep you in your workflow and fine-tune content

This month, we released new features in Microsoft 365 that help users stay focused in their workflow while delivering richer, more engaging content.

Here’s a look at what’s new in November.

Stay in your flow of work

We’re launching new capabilities in Microsoft 365 to help simplify common tasks in your workflow and save you time.

Keep track of to-dos and collaborate with others with new AI features in Word—Now you can more easily focus on your writing when working in Word. When you type a to-do item, like TODO: finish this section or <<insert closing here>>, Word recognizes it and tracks it as a to-do. When you come back to the document, you’ll see a list of your remaining to-dos, and you can click each one to navigate back to the right spot. Additionally, when you @mention someone within a to-do, Word emails them a notification with a “deep link” to the relevant place in the document. This new capability is available today in preview for Word on the Mac for Office Insiders, with availability for all Office 365 subscribers coming soon.

Animated screenshot of a Word document open using the AI-powered To-Do feature.

Perform more tasks on the go with new features for Microsoft Teams on iOS and Android—The updated Teams mobile app now empowers you to be more productive on the go with features like the ability to schedule meetings and search for colleagues within your organization’s directory. With Quiet Hours, you can focus on other activities with greater control over push notifications. These features are available now via the Teams mobile app.

Image of a phone screen displaying Microsoft Teams, with quiet hours set from 7 p.m. to 7 a.m.

Manage tasks without breaking your email flow in Outlook on the web—The updated Tasks feature in the new Outlook on the web now lets you create a task by dragging and dropping an email into your Tasks pane or easily schedule a task by dragging it from the Tasks pane to your calendar. Your tasks then travel with you on the To-Do app. The new Tasks pane capability will start rolling out to customers who opt in to the new Outlook on the web in December 2018.

Animated screenshot of tasks being added in Outlook on the web.

More ways to sign in to Outlook on the web—We listened to your feedback about wanting an easier way to sign in to Outlook on the web. Starting in December, users with an Office 365 account who use Outlook on the web can now sign in to their work or school accounts through www.outlook.com. When a user signs in through www.outlook.com, Outlook redirects them to their organization’s sign-in page, which is pre-populated with the email address they entered. From there, they just follow their organization’s sign-in process.

An animated image showing a user signing in to Outlook.com.

Create richer, more engaging content

We’re introducing new intelligence capabilities in Microsoft 365 to help users create richer content within PowerPoint.

Improve your writing with Editor in PowerPoint—Editor leverages machine learning and natural language processing to provide an intelligent proofing and editing service that delivers recommendations in context. For example, Editor highlights and provides suggestions to fix awkward word choices and incorrect grammar and offers guidance on clarity and conciseness. Editor in PowerPoint will be available to all users with an Office 365 subscription beginning December 2018.

An animated image showing the new Editor feature in PowerPoint.

Engage your audience with interactive forms and quizzes in PowerPoint—Microsoft Forms is now integrated with PowerPoint for Office 365, providing a seamless way for speakers, trainers, and educators to connect and interact with participants. Now, presenters can get real-time audience feedback via forms and quizzes without asking them to leave PowerPoint. To get started, in PowerPoint under Insert, click the Forms icon to create a new form/quiz or insert one you’ve already created.

Other updates

  • Today, we unveiled redesigned Office app icons to honor the heritage of Office and welcome in the future. These app icons provide flexible visuals that reflect product changes and the new ways in which people are working.
  • AI features like facial detection to identify speakers in videos, speech-to-text and closed captions, and transcript search and timecodes are now available to all Office 365 Enterprise, Firstline Worker, and Education plans through Microsoft Stream.
  • You can now use Windows Hello or a FIDO2 device to sign in to your Microsoft account on the Edge browser instead of using a username and password.
  • The Office Customization Tool is now available, enabling IT pros to easily customize deployment of Office 365 ProPlus and other Click-to-Run managed Office products using a simple, intuitive, and web-based interface.

New feature on SwiftKey for Android gives you the fastest way to share anything from the web

Hi everyone, 

Today, we’re unveiling a new feature on SwiftKey for Android that makes it easy and quick to share anything you want from the web. With search built right into the keyboard, you have a faster, easier experience browsing the web.

In just a few taps, you have access to search results that you can then easily screenshot, crop and share with your friends. Whether it’s the whole webpage, just one picture or a snippet of text, this feature makes it easy to share exactly what you want. 

To use, simply open the Toolbar by tapping the “+” on the top left, select the search icon and type what you’re looking for into the box right there in the Toolbar. If you type a search term, you’ll have instant access to rich search content from Bing; if you type in a URL, you’ll be taken to that webpage. 

As of today, this feature for easy searching and sharing is available to users in 11 countries: US, UK, Canada, France, Germany, Australia, Japan, Brazil, India, Italy and Spain.  

Colleen Hall, Senior Product Manager at SwiftKey, gave more background on today’s update: “We’re always looking for ways to make typing and messaging faster and smarter for our users. By having search right there in the keyboard, users can browse for information and share it with their contacts without leaving the conversation, whether that’s for quick fact-finding, checking the local weather or sharing news headlines and images in a message.”

Search is the latest feature to be added to the Toolbar, which gives you quick and easy access to all the different ways you can customize SwiftKey. Add Stickers to your messages, share your Calendar or Location details, translate in over 60 languages and now, share anything from the web.  

Download or update your SwiftKey Keyboard for Android now to try it out. Let us know what you think on Twitter! 

Cheers,
The SwiftKey Team


Redesigning the Office app icons to embrace a new world of work

Design is becoming the heart and soul of Office. Learn how we evolved our visual identity to reflect the simple, powerful, and intelligent experiences of Office 365.

Whoever said that nothing is more intimidating than the blank page probably never faced a redesign.

The last time we updated the Microsoft Office icons was in 2013, when selfies were new enough to become Oxford Dictionaries’ Word of the Year and emojis were new enough to be considered buzzworthy.

Clearly, a lot has changed since then — including how people get things done.

Over 1 billion people from vastly different industries, geographies, and generations use Office. They work on different platforms and devices and in environments that are faster, more distracting, and more connected than ever before.

To support this changing world of work, Office is transforming into a collaborative suite that lets you work together in real-time from almost any device. We’ve infused our tools with powerful AI: you can get insights from data with less effort, write a paper using your voice, or make your resume using LinkedIn insights. We’ve also added totally new apps to the suite like our AI-powered meetings and chat service, Microsoft Teams. In the end, it’s great design that makes these experiences fluid and seamless.

As a signal to our customers, we’ve evolved our Office icons to reflect these significant product changes. We’re thrilled to share the new icons for Office 365 with you today and tell the story behind their creation.

Carefully crafted designs that honor heritage and welcome the future

From the get-go, we embraced Office’s rich history and used it to inform design decisions. Strong colors have always been at the core of the Office brand, and new icons are a chance to evolve our palette. Color differentiates apps and creates personality, and for the new icons we chose hues that are bolder, lighter and friendlier — a nod to how Office has evolved.

We also used gestalt principles to further emphasize key product changes. Simplicity and harmony are key visual elements that reflect the seamless connectivity and intuitiveness of Office apps. While each icon has a unique and identifiable symbol, there are connections within each app’s symbol and the collective suite.

Flexible visual systems that work across platforms, devices, and generations

Today’s workforce includes five generations using Office on multiple platforms and devices and in environments spanning work, home, and on the go. We wanted a visual language that emotionally resonates across generations, works across platforms and devices, and echoes the kinetic nature of productivity today.

Our design solution was to decouple the letter and the symbol in the icons, essentially creating two panels (one for the letter and one for the symbol) that we can pair or separate. This allows us to maintain familiarity while still emphasizing simplicity inside the app.

Separating these into two panels also adds depth, which sparks opportunities in 3D contexts. Through this flexible system, we keep tradition alive while gently pushing the envelope.

Human-centered designs that emphasize content and reflect the speed of modern life

We all know modern life is faster and more connected: we’re living in it. Office supports this by making it fast and easy to express ideas, collaborate with others, and stay focused and in the flow. It’s why Office apps compose together, enabling users to open PowerPoint or Excel beside conversations in Teams or Outlook.

To reflect this in the icons, we removed a visual boundary: the traditional tool formatting. Whereas prior Office icons had a document outline for Microsoft Word and a spreadsheet outline for Excel, we now show lines of text for Word and individual cells for Excel. By focusing on the content rather than any specific format, these icons embody the collaborative nature of the apps they represent.

Similarly, we’ve changed the letter-to-symbol ratio. Traditionally, the letter occupied two-thirds of the icon, and the symbol took up one-third. We’ve changed this ratio to now emphasize the symbol because while the letter represents the tool itself, the symbol speaks more to people’s creations.

Being part of the design community

Our new icons will begin rolling out across platforms in the coming months, starting with mobile and web. They are the result of many iterations, a lot of research and testing, and plenty of late nights and weekends. They’re also part of an ongoing journey. As designers, we love the creative community’s ability to inspire each other and create momentum, so don’t hesitate to leave a comment below.


Feeding the world with AI-driven agriculture innovation

In the 1950s and 1960s, plant biologist Norman Borlaug famously led the “Green Revolution,” developing high-yield grains that helped drive up global food production when paired with innovations in chemical fertilizers, irrigation, and mechanized cultivation. By so doing, Borlaug and his peers helped save a billion people from starvation. However, this new form of farming was not sustainable and created multiple environmental issues.

Today, farmers are using technology to transform production again, driven by the need to feed more with less and to address the impacts of industrial farming on the environment. Nearly half of the food produced today, or 2 billion tons a year, ends up as waste, while an estimated 124 million people in 51 countries face food insecurity or worse. In addition, new sources of arable land are limited, fresh water levels are receding, and climate change puts pressure on resources and will lower agricultural production over time. Governments need to solve these issues swiftly, as the world’s population is slated to grow from 7.6 billion to 9.8 billion by 2050. Agencies and companies will need to team with growers to drive a 70 percent increase in food production.

The good news is that we’re now in the midst of a second Green Revolution that’s part of the Fourth Industrial Revolution. Here’s how technology innovation, driven by big data, the Internet of Things (IoT), artificial intelligence (AI), and machine learning, will reap a more bountiful harvest.

A vision for AI in agriculture

Farmers are deploying robots, ground-based wireless sensors, and drones to assess growing conditions. They then capitalize on cloud services and edge computing to process the data. By 2050, the typical farm is expected to generate an average of 4.1 million data points every day.

AI and machine learning interpret findings for farmers, helping them continually tweak crop inputs to boost yields. Farmers can use AI to determine the optimal date to sow crops, precisely allocate resources such as water and fertilizer, identify crop diseases for swifter treatment, and detect and destroy weeds. Machine learning makes these activities smarter over time. It can also help farmers forecast the year ahead by using historic production data, long-term weather forecasts, genetically modified seed information, and commodity pricing predictions, among other inputs, to recommend how much seed to sow.

Such precision farming technology augments and extends farmers’ deep knowledge about their land, making production more sustainable. Advanced technology can increase farm productivity by 45 percent while reducing water intake by 35 percent. However, the key is ensuring equitable access: Often the communities that need AI the most lack the physical and technology infrastructure required to support it.

Connecting communities with broadband

Access to high-speed connectivity and reliable power are still challenges in many parts of the world. That’s one reason Microsoft and its partners are bringing affordable broadband to rural communities in countries such as Colombia, India, Kenya, South Africa, and the United States through the Airband Initiative.

When communities are connected, farmers can benefit from AI and machine learning, even if they lack internet access to their individual farms. Microsoft employee Prashant Gupta and his team used advanced analytics and machine learning to create a Personalized Village Advisory Dashboard for 4,000 farmers in 106 villages and a Sowing App for 175 farmers in a district in the southeastern coastal state of Andhra Pradesh in India. Farmers with simple SMS-enabled phones can access Sowing App recommendations, which apply AI to data such as weather and soil conditions to optimize planting times. Farmers who followed the AI-driven advice increased yields by 30 percent over those who adhered to traditional planting schedules.

Using IoT and AI on individual farms

Farmers with connectivity can use IoT to get customized recommendations. The Microsoft FarmBeats program, driven by principal researcher Ranveer Chandra, has developed an end-to-end IoT platform that uses low-cost sensors, drones, and vision/machine learning algorithms to increase the productivity and profitability of farms. FarmBeats is part of Microsoft AI for Earth, a program that provides cloud and AI tools to teams seeking to develop sustainable solutions to global environmental issues.

In the United States, FarmBeats solves the problem of internet connectivity by using unused TV white spaces to set up high-bandwidth links between a farmer’s home internet connection and an IoT base station on the farm. Sensors, cameras, and drones connect to this base station, which is both solar- and battery-powered. To avoid unexpected shutdowns due to battery drain, the base station uses weather forecasts to plan its power usage. Similarly, drones use an IoT-driven algorithm based on wind patterns to time when they accelerate and decelerate mid-flight, reducing battery draw.

IoT data processing—for bandwidth-hogging information like drone videos, photos, and sensor feedback—is done by a PC at the farmer’s home. The PC performs local computations and consolidates findings into lower-memory summaries, which are easier to transmit over limited bandwidth and also serve as a backup during network outages.
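The blog doesn’t describe FarmBeats’ actual summarization code, but the idea of consolidating raw sensor feeds into lower-memory summaries can be sketched in a few lines of Python. The window size, the choice of statistics, and the soil-moisture example below are all illustrative assumptions, not the real pipeline:

```python
from statistics import mean

def summarize(readings, window=60):
    """Collapse per-second sensor readings into windowed summaries.

    Each summary keeps only the mean, min, and max of one window,
    shrinking the payload that must cross the farm's limited uplink.
    """
    summaries = []
    for start in range(0, len(readings), window):
        chunk = readings[start:start + window]
        summaries.append({
            "mean": round(mean(chunk), 2),
            "min": min(chunk),
            "max": max(chunk),
        })
    return summaries

# Ten minutes of hypothetical per-second soil-moisture readings
# collapse into just ten summary rows.
raw = [20 + (i % 7) * 0.1 for i in range(600)]
compact = summarize(raw)
print(len(raw), "->", len(compact))  # 600 -> 10
```

The same summaries double as a cache: if the uplink drops, the home PC still holds a compact history it can sync once the network returns.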

AI for everyone means more food for the world

Over time, AI will help farmers evolve into agricultural technologists, using data to optimize yields down to individual rows of plants. Farmers without connectivity can get AI benefits right now, with tools as simple as an SMS-enabled phone and the Sowing App. Meanwhile, farmers with Wi-Fi access can use FarmBeats to get a continually AI-customized plan for their lands. With such IoT- and AI-driven solutions, farmers can meet the world’s needs for increased food sustainably—growing production and revenues without depleting precious natural resources.

Be the first to know about new advancements in the Microsoft AI farming initiative. Follow us at FarmBeats.

To stay up to date on the latest news about Microsoft’s work in the cloud, bookmark this blog and follow us on Twitter, Facebook, and LinkedIn.


New batch of Xbox Game Pass titles is on its way

Hello, Xbox Game Pass members!

Now that we’ve had our annual dose of turkey and family, normal service resumes here at Xbox Game Pass. And on that note, we’re excited to share three important public service announcements:

  1. If the 16 games we announced at X018 were not enough, we are adding three more to the Xbox Game Pass catalog over the coming week, starting with The Gardens Between on November 29, Mutant Year Zero on December 4, and Strange Brigade on December 6. We require you to play these, as well as the over 100 other great games included in your membership – no negotiation.
  2. If you have a mobile telephone, you need to download the Xbox Game Pass app for iOS and Android. Through this app you can remotely install Xbox Game Pass games to your Xbox One so they’re ready to play when you come home from school or work. We’ll also send you notifications to let you know when games are added, so say goodbye to FOMO (and your job/free time).
  3. Follow our Xbox Game Pass Instagram and Twitter. You don’t want to miss out on life-changing memes and gaming banter.

What’s that sound? It’s like a voice on the wind saying “We want more…” We hear it. Keep it tuned here to Xbox Wire, the Xbox Game Pass Twitter and Instagram channels (see above) and the Xbox Game Pass app (again, see above) where we’ll be dropping even more games throughout the month of December.

Without further ado, let’s dive into some details on the new games coming soon to Xbox Game Pass:

The Gardens Between (November 29)

Follow best friends Arina and Frendt, who fall into a surreal world full of dreamlike garden islands. Together they’ll embark on a nostalgic journey through a world built around objects from their childhood, lighting constellations and illuminating threads from a time when friendship was the most important thing in the world. Manipulate time to solve puzzles and traverse each precious memory to reach the summit of each island in this single-player puzzle adventure.

Mutant Year Zero (December 4)

A tactical adventure game with turn-based combat and real-time exploration of a post-apocalyptic world, Mutant Year Zero: Road to Eden puts you in command of a unique squad of mutant soldiers as you scavenge through the remains of civilization to survive. From the creators of Hitman and PayDay, experience a mix of story, exploration, stealth, and strategy in the brutal environment of a decimated planet Earth as your band of heroes embark on a quest to discover the legend of Eden.

Strange Brigade (December 6)

Join an elite agency of extraordinary adventurers and investigate the darkest corners of Egypt, where mythological menaces manifest! Strange Brigade offers a rip-roaring, 1930s-style campaign full of peril, intrigue, and good old derring-do for one to four players as you tackle an army of mummified monsters! Solve mind-bending puzzles, defy dastardly traps, unearth incredible treasure in a mysterious, ancient landscape and use the most marvelous firearms known to man in thrilling third-person combat to defeat the witch queen Seteki and send her petrifying minions back to the afterlife.

Join Xbox Game Pass Today

With over 100 great games for one low monthly price, including highly anticipated new Xbox One exclusives the day they’re released (such as Forza Horizon 4, the highest-rated* Xbox exclusive of this generation), plus more games added all the time, Xbox Game Pass gives you the ultimate freedom to play. If you haven’t tried Xbox Game Pass, join today, get your first month for $1, and discover your next favorite game.

For the latest Xbox Game Pass news, follow us on Twitter and Instagram and stay tuned to Xbox Wire. Until next month, game on!

*Source: Metacritic