Posted on Leave a comment

UN International Day of Persons with Disabilities: sharing our learnings to create a more accessible world

A More Accessible and Inclusive World

Organizations across the globe are increasingly recognizing people with disabilities as talent and setting them up for success through accessible technology.

United Kingdom: GlaxoSmithKline, a multi-national pharmaceutical company, relies on Teams accessibility features such as background blur and live captions to help employees with remote collaboration.

“I have definitely seen an increase in engagement by using Microsoft Teams for everyone, but especially for people with disabilities.” Tracy Lee Mitchelson, Project Lead, Disability Confidence at GSK.

France: Sodexo, a provider of quality of life services, is helping employees work independently through building digital accessibility skills using Microsoft 365 as its communications and collaboration platform.

“Some of the work that’s happened with our front-line workers helping them to understand the Microsoft tools that were available have really helped with their digital accessibility.” Megan Horsburgh, Head of D&I, Sodexo UK & Ireland.

India: By embracing Microsoft Teams, v-shesh, a workforce services provider, is delivering engaging lessons in the cloud and equipping people with disabilities to be part of the digital-first economy.

“With Teams, we are not only creating an inclusive environment but also equipping our learners to be ready for the future of work.” P Rajasekharan, Co-founder at v-shesh.

United States: Fannie Mae, a provider of mortgage services, is seeing Microsoft 365 play a crucial part in digital transformation and building an inclusive workplace that helps all employees reach their potential.

“We use the accessibility tools in Microsoft 365 as part of our drive to approach everything from an inclusive mindset. When you do that, not only do you problem-solve for the needs of your customers, you drive innovation as a natural by-product.”  Cassie Hong, Product Designer at Fannie Mae.


Keep students engaged over the winter break with virtual field trips and creative workshops


Winter break may look different this year, but that doesn’t mean it has to be any less fun. This year, Microsoft is offering a collection of winter break camps: free creative workshops and virtual field trips to keep students engaged during the holiday break.

These events will give students the opportunity to see new places and learn new skills from the safety and comfort of home. They’ll be having so much fun at camp that they won’t even realize they’re gaining valuable skills covering everything from history to coding.

With nearly 200 virtual events to choose from, it’s easy to find activities that best fit your student’s interests, schedule, and learning needs. Here are just a few events to keep students engaged over the holidays:

  • Follow the Iditarod: Travel the iconic Iditarod dog sledding route across Alaska, learning about its history and destinations along the way.
  • Winter World Tour: Visit four incredible snow- and ice-themed destinations, including Finland’s arctic zoo and Breckenridge in the Rocky Mountains.
  • Holiday Greeting Card Magic: Create interactive holiday e-cards in PowerPoint to send to loved ones. You’ll be able to include GIFs and animations in the designs and learn to share them easily online.

Check out our schedule of virtual workshops for students of all levels. And to learn how Microsoft tools can help you connect with family, celebrate the holidays, and make the most out of winter break, read the full Microsoft Store community post.

Browse affordable devices starting at $219

Seeing AI empowers people who are blind or have low vision in everyday life

A mother and daughter celebrate the first anniversary of the app’s Japanese-language version

By Kumiko Tezuka,
Microsoft Japan

A mother and young daughter read a book
Akiko Ishii and her daughter Ami using Seeing AI to read a picture book together.

Akiko Ishii sits in her living room holding a picture book and balancing her four-year-old daughter, Ami, on her lap. It looks like a typical domestic scene in a typical Tokyo neighborhood, but there’s something special going on here.

Akiko is blind and her smartphone is doing the reading for her.

As the phone’s camera scans each page, Microsoft’s Seeing AI app reads the text aloud. Akiko and Ami smile as they listen. With this technology, they can spend invaluable time reading together and bonding — just like mothers and children do anywhere.

Power of artificial intelligence

Seeing AI is a free app that narrates the world for the blind and low vision community. It’s the product of an ongoing research project that harnesses the power of artificial intelligence (AI) to open up the visual world by describing nearby people, text and objects.

It’s currently available in 70 countries and a number of languages. The Japanese-language version was launched a year ago.

Seeing AI uses AI technology not only to recognize and read short text passages, documents, product labels and so on, but also to describe people and scenery captured by a mobile phone camera.

Akiko, who lost her sight as a result of surgery, was already using the English-language version of Seeing AI before the Japanese version was released.

“I’ve been using the Japanese-language version of Seeing AI now for about a year, and I can say with conviction that it has become indispensable to honing my own individuality.”

She says that while there are a number of other applications available in Japan for assisting people with visual impairments, they tend to be designed for a single, specific purpose, and don’t provide such a wide range of easy-to-use functions all in one package.

With Seeing AI she can carry out all sorts of everyday tasks using her phone. Apart from reading, she can use it to check the brightness of lights in a room, describe her surroundings, and identify people and objects as she moves about.

The app’s Japanese audio readout is so speedy that someone not familiar with it might find it too fast to follow. “It’s difficult to keep up, isn’t it?” Akiko says with a laugh.

“It is much faster than the speed of normal conversation and I’m okay with that because I’m blind and do everything through my ears. Things like reading or looking at something, that people can do in an instant, I have to listen and take in before I can act.

a bento
A bento that Akiko made for Ami, rice balls on the right and an array of morsels on the left.

“That is why I have gotten used to listening at high speed. The English-language version could not recognize Japanese text. When I used it to describe my surroundings and such like, I couldn’t listen at this speed, since I’m not a native English speaker and don’t have a good enough grasp of the language.”

The Japanese version has empowered Akiko to do much more by herself. For instance, she can now read notes from Ami’s kindergarten that tell her what Ami needs to take the next day. Before she had Seeing AI, Akiko had to scan each handout and use optical character recognition software to generate digital text that could be read out on her PC. Now it is just a matter of using her phone.

Being able to easily carry out tasks has given her more independence and confidence to do things she enjoys, such as cooking.

She uses Seeing AI’s “short text” function to read grocery labels, check use-by dates and identify ingredients. The app has a color recognition function that, for instance, can say whether a bell pepper is red, yellow or green.

Recognizing people, colors and much more

“I really enjoy making Ami’s bento these days with Seeing AI. I can think about the color of the ingredients as I prepare the lunchbox, so I might for example use a piece of yellow bell pepper to match the egg in tomorrow’s bento.”

Akiko’s phone contains a wealth of family photos, many of them shots of her with her husband, Yoshimi, and Ami.

“I have set up Seeing AI’s person function to recognize Ami and my husband, so it can now pick out their faces in the photos I’ve taken.”

Mother, daughter and father
Akiko, Ami and husband Yoshimi at an amusement park.

In the past, Akiko had no way of knowing what each photo showed, but now she uses Seeing AI’s scene function to have photos described to her. And by pre-setting the app to recognize Ami and her husband, she can also go through photos showing a lot of people to find only those shots in which Ami or her husband appear.

“I always want my photos of Ami and the family close at hand. They’re also a record of her growth, so I do not want to delete any of them.”

Akiko says her use of the app continues to evolve at home and also at work in her role leading an organization that helps people with disabilities become more independent and attain a better quality of life.

“I see a disability as a unique individuality,” says Akiko. “And by honing that individuality (disability), I think that we can turn each rough stone into a glittering diamond. I’ve been using the Japanese language version of Seeing AI now for about a year, and I can say with conviction that it has become indispensable to honing my own individuality.”

Mother and young daughter
Akiko and Ami happily relaxing.

TOP IMAGE: Akiko Ishii and her daughter Ami use Seeing AI to read a picture book.


How Microsoft does DevOps – from the 1990s to now

Abel

So, just how does Microsoft do DevOps? I get asked this all the time. The answer is a little bit complex, because to really understand how Microsoft does DevOps, you need to understand where Microsoft was in the late 1990s and early 2000s, and just what kind of changes we had to go through to truly embrace a DevOps world.

Microsoft’s Enterprise DevOps Transformation Story

Check out this talk where I walk you through Microsoft’s Enterprise DevOps Transformation Story.

Video: https://www.youtube.com/watch?v=WhRRGUmwoq4

Ok, That’s a Cool Start, But I Want More Details

That was a pretty cool story, right? I touched on a lot of DevOps topics in this story, but I’m sure some of you want more details!


Don’t worry, we’ve got you covered with deep dives into these topics and so much more goodness. Check out everything at DevOps at Microsoft.


What’s new in Teams: suppress background noise, select new Together mode scenes, and more

Welcome to December! I know you have been waiting for this post as November has been another month with a lot of great features that will help you get the most out of Microsoft Teams. Let’s jump right in!

What’s New: Meetings
AI-based noise suppression
Our real-time AI noise suppression feature automatically removes unwelcome background noise during your meetings. The AI-based noise suppression analyzes your audio feed, filtering out the noise and retaining only the speech signal. You can also control how much noise suppression you want, including a high setting to suppress more background noise.

New Together mode scene selection
Together mode reimagines meeting experiences to help participants feel closer together even when they are apart. We are excited to introduce new Together mode scene selections to transport your team to a variety of settings. Choose a scene to set the tone and create a unique experience for your meeting, whether it be a smaller conference room meeting, or an all-hands meeting held in an outdoor amphitheater.


Polls in Teams Meetings
Polls in Teams meetings is a seamless experience powered by Forms that helps you conduct more engaging and productive meetings. As a meeting organizer or presenter, you can prepare, launch, and evaluate polls before, during, and after meetings, respectively. Your attendees can easily view and respond to the polls in the pop-up bubble or chat pane. To enable this feature as the meeting organizer or presenter, simply add the Forms app as a tab in your Teams meeting. Learn more.

Full screen support in new meetings experience
We heard you: full screen mode is back! With full screen mode on Windows, the meeting window fills the whole screen, removing all other screen elements, including the title bar at the top and the taskbar at the bottom. On macOS, full screen mode maximizes the meeting window and hides the title bar. This helps you reduce distractions and focus your attention.


Start an instant meeting from your mobile device
You’ll now find the familiar Meet Now icon on the calendar tab and in the Teams channel helping you connect with your team instantly. Once you start your meeting, you can use any messaging app on your mobile device to share the invite or add participants directly to the meeting, and anyone in the Teams channel can join without an invite.


Updated layout for meetings on iOS 

We have improved the Teams experience on iOS devices with a new presentation mode, the ability to see more participants, and the ability to see shared content and a spotlighted participant concurrently.


What’s new: Calling
Call Merge
While you’re on a call with another person (or a group), you might want to add another expert to participate in the call. Similarly, you may receive an incoming call that makes sense to connect with one you’re already on. Call Merge gives end users the capability to merge their Teams VoIP and PSTN active 1-1 calls into another 1-1 call or another group call. You can merge your calls simply by choosing the “…” (more actions) button from the call controls and selecting “merge calls”. Learn more.

Survivable Branch Appliance
To support the most critical conversations in the event of an outage, the new Survivable Branch Appliance (SBA) allows you to place and receive PSTN calls even in the event of a WAN outage. This SBA is now available to certified Session Border Controllers (SBC) vendors, allowing SBCs to link with the Teams client in the event the client cannot reach the Microsoft Calling network.

Ericsson Session Border Controller certification
Ericsson has completed the Session Border Controller (SBC) certification process, which ensures that their SBC supports Direct Routing for Microsoft Teams, joining the list of certified SBCs. This rigorous certification process includes intense third-party testing and validation in production and pre-production Direct Routing environments. Direct Routing permits customers to connect their own carriers and infrastructure with Phone System to enable Teams Calling. Learn more.

 

What’s New: Devices
Microsoft Teams displays
Microsoft Teams displays is a new category of all-in-one dedicated Teams devices that features an ambient touchscreen and a hands-free experience powered by Cortana. These devices seamlessly integrate with your PC, providing easy access to Teams chat, meetings, calling, calendar, and files. With natural language, users can ask Cortana to join and present in meetings, dictate replies to a Teams chat, and more. Learn more.


USB phone by Yealink: MP50
The plug-and-play format of the newly available MP50 by Yealink offers a new way to experience calling features in Teams, allowing you to connect a phone to your PC and start engaging in full phone functionality instantly. The MP50 provides a cost-effective option, giving you a traditional handset experience with a dedicated Teams button for quick meeting and call join, as well as USB and Bluetooth connections for both mobile and PC.


Yealink A20
Yealink A20 is an integrated, Android-based Microsoft Teams Room designed for small meeting rooms and huddle spaces. The A20 delivers premium audio and video experiences through a 20-megapixel 133-degree horizontal field of view lens, 8 MEMS microphone array and built-in speaker. The A20 is easy to deploy and brings Teams Rooms features like wireless content sharing and whiteboarding, to small meeting spaces.


Poly Sync 20 USB/Bluetooth® smart speakerphone now certified for Microsoft Teams
Poly Sync 20 is a portable personal speakerphone certified for Microsoft Teams that delivers great audio for your meetings as well as music. Combined with up to 20 hours of talk time, the ability to charge your smartphone and IP64 dust and water resistance, it’s a great companion for hybrid workers. Learn more.


New features rolling out to Microsoft Teams Rooms and Surface Hub
The latest app for Microsoft Teams Rooms on Android, version 1.0.94.2020102101, is now available through the Teams Admin Center. New features have also begun rolling out to Surface Hub! Features enabled through this update include:
Microsoft Teams Rooms on Android

  • Support for dual screens: Now you can use Teams Rooms on Android in spaces with a dual screen configuration, allowing one screen to be focused on meeting participants in the gallery view, while the second screen can be used to show content or whiteboard.
  • New gallery views: Teams Rooms on Android now supports the 3×3, large gallery, and Together Mode gallery views.
  • Auto-answer for meetings: In some meeting scenarios where Teams Rooms devices are deployed, like a healthcare patient room, meeting participants want to be able to connect to an incoming call without taking an action to accept it. Now, we’re providing a setting that allows calls to be answered automatically. This new feature can be enabled through the Admin settings.


Surface Hub

  • Together mode: view meeting participants in the new Together mode, which brings everyone into a shared virtual space.
  • Large gallery: view up to 49 meeting participants simultaneously in full screen mode, in the new 7×7 video grid.


What’s New: Chat & Collaboration

Pinned Posts

Keep important information easily accessible and top of mind with Microsoft Teams. You can pin any message in a channel, and it appears in the channel information pane for all members of the channel to see.


More options to use polls, surveys & checklists in Teams
Easily gather information or keep track of things in chats and channels with new app templates for polls, surveys, and checklists in Teams. Once installed and configured by Teams administrators, these messaging extensions provide a simple and intuitive experience for users across all platforms without the need for third-party apps.

  • Quickly create and send polls to gather input to make decisions.
  • Easily create surveys to gather feedback to improve your processes.
  • Collaborate with your team and keep things on track by creating a shared checklist.

Set presence status duration
Let others know when you are available in Teams by managing your presence status. Users can now change their presence status for a specific period. Learn more.

Android On-Demand Chat Translation
Inline message translation gives all your team members a voice and facilitates global collaboration. With a simple click, people who speak different languages can fluidly communicate with one another by translating posts in channels and chat.

What’s New: Power Platform and custom development
Build solutions with Power Apps in Teams
The new Power Apps app for Teams is now generally available. It allows you to build and deploy custom apps without leaving Teams. With the simple, embedded graphical app studio, it has never been easier to build low-code apps for Teams. You can also harness immediate value from built-in templates like the Great Ideas or Inspections apps, which can be deployed in one click and customized easily. The new Power Apps app for Teams can be backed by a new relational datastore, Dataverse for Teams. Learn more.

New Power Automate App for Teams
The new Power Automate app for Teams is now generally available. The new app makes it easier than ever to automate workflows within teams. With the simplified flow designer, you can easily build flows by selecting from a number of templates and simply selecting your options from drop-down menus. Also, the home screen of the new app improves your visibility into your flows and lets you manage your Teams flows from there. Learn more.

Power Virtual Agents (PVA) for Teams
Power Virtual Agents (PVA) for Teams is now generally available. Since we announced the PVA preview at Ignite 2020, users have found chatbots useful as well as easy to create, and we have seen thousands of bots created in the past few weeks. We are now also providing additional features, including native authentication, where bots can be designed to provide information to users based upon their identity. You can now also easily make your bot available to your teammates and, with admin approval, make it available to the whole organization.

Teams apps for meetings now generally available
Teams apps for meetings are now generally available with nearly 20 new apps in the Teams app store, such as Asana, HireVue, Monday.com, Slido, and Teamflect, as well as familiar Microsoft apps such as Forms. Learn more. If you’re a developer, learn more about creating Teams apps for meetings.

Support for Single Sign-On (SSO) for Bots
We are thrilled that Single Sign-on (SSO) support for bots is now available. SSO authentication in Azure Active Directory (Azure AD) minimizes the number of times users need to enter their login credentials by silently refreshing the authentication token. If users agree to use your app, they will not have to consent again on another device and will be signed in automatically. Learn more.

Microsoft Teams App Development Challenge
We’re continuously planning new events and ways to connect with the developer community. For example, Microsoft is launching the Microsoft Teams App Development Challenge. From November 16, 2020, through February 8, 2021, developers, partners, and organizations can participate in a challenge to develop a new and innovative Teams app for publishing to AppSource and be eligible to win a share of $45,000 in cash and prizes. For full challenge details, visit http://microsoftteams.devpost.com.


App Spotlight


This month, the Now Virtual Agent app by ServiceNow features new capabilities that help improve employee productivity with seamless self-service and faster case resolution, allowing you as a user to submit support requests, view open ticket approvals, act on notifications, chat with virtual agents for automated assistance, or streamline communication between agents and employees, all while staying in the flow of work in Teams.

 

 

What’s New: Management
Device management automatic alerting in Teams Admin Center
Device management automatic alerting makes it more efficient to identify device issues by triggering notifications that can be turned into immediate corrective action.

What’s New: Teams for Education
Insights across classes and spotlight student activity
New capabilities in Insights help educators understand student engagement and progress over time and across classes. Now, educators can see high-level trends across classes, like inactive students, active students per day, missed online classes, and missed assignments. And within a class, new spotlight cards show trending student behaviors an educator may want to act on. Learn more.

What’s New: Firstline Workers
Shift schedule assistance
Shift schedule assistance alerts managers if conflicts occur anywhere in the schedule, and they receive conflict warnings when approving schedule change requests. This alerting saves managers time, makes shift scheduling more efficient, and reduces the inaccuracies that lead to employees not turning up for their shifts. Learn more.

What’s New: Government
These features currently available to Microsoft’s commercial customers in multi-tenant cloud environments are now rolling out to our customers in US Government Community Cloud (GCC), US Government Community Cloud High (GCC-High), and/or United States Department of Defense (DoD).

Full screen support in new meetings experience
We heard you: full screen mode is back! With full screen mode on Windows, the meeting window fills the whole screen, removing all other screen elements, including the title bar at the top and the taskbar at the bottom. On macOS, full screen mode maximizes the meeting window and hides the title bar. This helps you reduce distractions and focus your attention. GCC only in November.

OneNote in Teams DoD
You can now add a new or existing OneNote notebook as a tab in your Teams channels if you’re a DoD customer. You can also go to Files or add the OneNote personal app to open your OneNote notebooks directly. Learn more.

Prevent attendees from unmuting in Teams Meetings
Meeting organizers and presenters in the US Government Community Cloud can now prevent attendees from unmuting during the meeting and enable specific attendees to unmute when they raise their hands. This can be helpful for press conference and classroom scenarios where you want to control who’s speaking. Learn more.



Attend Dec. 3 online event to explore how data and analytics will impact your business

Planning strategic data and analytics initiatives is now critical for helping your organization build the agility and resilience needed to successfully navigate the future. Join us on December 3, 2020, from 9:00 AM to 10:30 AM Pacific Time (UTC-8), for the Shape Your Future with Azure Data and Analytics digital event to explore how data and analytics impact the future of your business—and see how to use Azure to change the way you make strategic business decisions.

Register for the event to:

  • Hear Microsoft Chief Executive Officer Satya Nadella provide insight into the power of data and analytics and discuss new Azure innovations.
  • Learn from Amy Hood, Microsoft Chief Financial Officer and Executive Vice President, and Julia White, Corporate Vice President of Microsoft Azure, about how Microsoft used data and analytics to transform its own finance organization.
  • Attend a CEO roundtable with Judson Althoff, Microsoft Executive Vice President of Worldwide Commercial Business, and other executives from some of the world’s most successful companies as they talk about how they are using data and analytics to recover, strengthen, and innovate in the face of uncertainty.
  • Dive deep into the latest Azure data and analytics announcements with Rohan Kumar, Corporate Vice President, Microsoft Azure Data, and see new demos illustrating the native integration of Azure Synapse Analytics, the new Azure data governance service, Azure Machine Learning, and Microsoft Power BI.
  • Ask questions and get answers in real time from Microsoft engineers during a live Q&A.


By the end of the event, you’ll have a better understanding of how to develop a strategic view of your analytics initiatives and a solid foundation for creating a strategic, unified framework for using data and analytics to gain insights, make decisions, and improve business outcomes. We hope to see you there.



The human side of AI for chess

As artificial intelligence continues its rapid progress, equaling or surpassing human performance on benchmarks in an increasing range of tasks, researchers in the field are directing more effort to the interaction between humans and AI in domains where both are active. Chess stands as a model system for studying how people can collaborate with AI, or learn from AI, just as chess has served as a leading indicator of many central questions in AI throughout the field’s history.

AI-powered chess engines have consistently bested human players since 2005, and the chess world has undergone further shifts since then, such as the introduction of the heuristics-based Stockfish engine in 2008 and the deep reinforcement learning-based AlphaZero engine in 2017. The impact of this evolution has been monumental: chess is now seeing record numbers of people playing the game even as AI itself continues to get better at playing. These shifts have created a unique testbed for studying the interactions between humans and AI: formidable AI chess-playing ability combined with a large, growing human interest in the game has resulted in a wide variety of playing styles and player skill levels.

There’s a lot of work out there that attempts to match AI chess play to varying human skill levels, but the result is often AI that makes decisions and plays moves differently than human players at that skill level. The goal for our research is to better bridge the gap between AI and human chess-playing abilities. The question for AI and its ability to learn is: can AI make the same fine-grained decisions that humans do at a specific skill level? This is a good starting point for aligning AI with human behavior in chess.

Our team of researchers at the University of Toronto, Microsoft Research, and Cornell University has begun investigating how to better match AI to different human skill levels and, beyond that, personalize an AI model to a specific player’s playing style. Our work comprises two papers, “Aligning Superhuman AI with Human Behavior: Chess as a Model System” and “Learning Personalized Models of Human Behavior in Chess,” as well as a novel chess engine, called Maia, which is trained on games played by humans to more closely match human play. Our results show that, in fact, human decisions at different levels of skill can be predicted by AI, even at the individual level. This represents a step forward in modeling human decisions in chess, opening new possibilities for collaboration and learning between humans and AI.

AlphaZero changed how AI played the game by practicing against itself with only knowledge of the rules (“self-play”), unlike previous models that relied heavily on libraries of moves and past games to inform training. Our model, Maia, is a customized version of Leela Chess Zero (an open-source implementation of AlphaZero). We trained Maia on human games with the goal of playing the most human-like moves, instead of being trained on self-play games with the goal of playing the optimal moves. In order to characterize human chess-playing at different skill levels, we developed a suite of nine Maias, one for each Elo rating between 1100 and 1900. (Elo ratings are a system for evaluating players’ relative skill in games like chess.) As you’ll see below, Maia matches human play more closely than any chess engine ever created.

  • Code: Maia Chess. Explore our nine final Maia models, saved as Leela Chess neural networks, and the code to create more and reproduce our results.

If you’re curious, you can play against a few versions of Maia on Lichess, the popular open-source online chess platform. Our bots on Lichess are named maia1, maia5, and maia9, which we trained on human games at Elo rating 1100, 1500, and 1900, respectively. You can also download these bots and other resources from the GitHub repo.

Measuring human play

What does it mean for a chess engine to match human play? For our purposes, we settled on a simple metric: given a position that occurred in an actual human game, what is the probability that the engine plays the move that the human played in the game?
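As a rough sketch, this metric reduces to counting agreements: given the engine's top move for each test position and the move the human actually played, report the fraction that match. The function and variable names below are ours for illustration (they are not from the Maia codebase), and moves are assumed to be comparable strings such as UCI notation.

```python
def move_match_accuracy(predicted_moves, human_moves):
    """Fraction of positions where the engine's top choice equals the move
    the human actually played. Moves are plain strings, e.g. UCI 'e2e4'."""
    if not human_moves:
        raise ValueError("no positions to score")
    matches = sum(p == h for p, h in zip(predicted_moves, human_moves))
    return matches / len(human_moves)


# Example: the engine matched the human move in 2 of 3 positions.
accuracy = move_match_accuracy(["e2e4", "g1f3", "d2d4"],
                               ["e2e4", "b1c3", "d2d4"])
```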

Making an engine that matches human play according to this definition is a difficult task. The vast majority of positions seen in real games only happen once, because the sheer number of possible positions is astronomical: after just four moves by each player, the number of potential positions enters the hundreds of billions. Moreover, people have a wide variety of styles, even at the same rough skill level. And even the same exact person might make a different move if they see the same position twice!

Creating a dataset

To rigorously compare engines in how well they match human play, we need a good test set to evaluate them with. We made a collection of nine test sets, one for each narrow rating range. Here’s how we made them:

  • First, we made rating bins for each range of 100 rating points (such as 1200-1299, 1300-1399, and so on).
  • In each bin, we put all games where both players are in the same rating range.
  • We drew 10,000 games from each bin, ignoring games played at Bullet and HyperBullet speeds. At those speeds (one minute or less per player), players tend to play lower quality moves to not lose by running out of time.
  • Within each game, we discarded the first 10 moves made by each player to ignore most memorized opening moves.
  • We also discarded any move where the player had less than 30 seconds to complete the rest of the game (to avoid situations where players are making random moves).
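The filters above can be sketched in a few lines of Python. This is a hedged illustration over game records represented as plain dicts; the field names are hypothetical, not our actual data schema:

```python
# Filters from the test-set construction: rating bin, game speed,
# opening moves, and low-clock moves.
def keep_game(game, lo, hi):
    """Both players in the rating bin [lo, hi]; no bullet-speed games."""
    return (lo <= game["white_elo"] <= hi
            and lo <= game["black_elo"] <= hi
            and game["speed"] not in ("bullet", "hyperbullet"))

def usable_moves(game):
    """Drop each player's first 10 moves (plies 0-19) and any move made
    with under 30 seconds left on the clock."""
    return [move for ply, (move, secs_left) in enumerate(game["moves"])
            if ply >= 20 and secs_left >= 30]

game = {"white_elo": 1250, "black_elo": 1280, "speed": "blitz",
        "moves": [("m%d" % i, 60 if i != 21 else 5) for i in range(24)]}
print(keep_game(game, 1200, 1299))  # True
print(usable_moves(game))           # ['m20', 'm22', 'm23']
```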

After these restrictions we had nine test sets, one for each rating range, which contained roughly 500,000 positions each.

Differentiating our work from prior attempts

People have been trying to create chess engines that accurately match human play for decades. For one thing, they would make great sparring partners. But getting crushed like a bug every single game isn’t that fun, so the most popular attempts at engines that match human play have been some kind of attenuated version of a strong chess engine. Attenuated versions of an engine are created by limiting the engine’s ability in some way, such as reducing the amount of data it’s trained on or limiting how deeply it searches to find a move. For example, the “play with the computer” feature on Lichess is a series of Stockfish models that are limited in the number of moves they are allowed to look ahead. Chess.com, ICC, FICS, and other platforms all have similar engines. How well do these engines match human play?

Stockfish: We created several attenuated versions of Stockfish, one for each depth limit (for example, the depth 3 Stockfish can only look 3 moves ahead), and then we tested them on our test sets. In the plot below, we break out the accuracies by rating level so you can see if the engine thinks more like players of a specific skill level.
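Depth attenuation is easy to illustrate with a toy depth-limited minimax searcher over a tiny hand-made game tree. This is a conceptual sketch of what a depth limit does, not Stockfish's actual search:

```python
# Each node maps to its children; leaves (and horizon nodes) get a
# static evaluation from SCORES.
TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
SCORES = {"root": 0, "a": 1, "b": 2, "a1": 3, "a2": 5, "b1": -1, "b2": 9}

def minimax(state, depth, maximizing):
    """Evaluate `state`, looking at most `depth` plies ahead."""
    children = TREE.get(state, [])
    if depth == 0 or not children:
        return SCORES[state]  # static evaluation at the search horizon
    values = [minimax(c, depth - 1, not maximizing) for c in children]
    return max(values) if maximizing else min(values)

# A deeper search changes the evaluation (and hence the chosen move):
print(minimax("root", 1, True))  # 2 -- shallow search prefers branch "b"
print(minimax("root", 2, True))  # 3 -- deeper search sees b's refutation, prefers "a"
```

The depth limit controls how far ahead the engine can "see," which weakens its play but, as the results below show, does not make it play like a weaker human.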

Figure 1: Accuracy of Stockfish models with depth 1, 3, 5, 7, 9, 11, 13, and 15 shown from 1100 to 1900 Elo ratings. Depth 5 matching is the lowest accuracy, starting at under 35% at 1100 and rising to just above 35% for 1900 rating. The best move matching is at Depth 15, starting at roughly 36% at 1100 and rising to over 40% at 1900.
Figure 1: Move matching accuracy for Stockfish compared with the targeted player’s Elo rating

As you can see, it doesn’t work that well. Attenuated versions of Stockfish only match human moves about 35-40% of the time. And equally importantly, each curve is strictly increasing, meaning that even depth-1 Stockfish does a better job at matching 1900-rated human moves than it does at matching 1100-rated human moves. This means that attenuating Stockfish by restricting the depth it can search doesn’t capture human play at lower skill levels—instead, it looks like it’s playing regular Stockfish chess with a lot of noise mixed in.

Leela Chess Zero: Attenuating Stockfish doesn’t characterize human play at specific levels. What about Leela Chess Zero, an open-source implementation of AlphaZero, which learns chess through self-play games and deep reinforcement learning? Unlike Stockfish, Leela incorporates no human knowledge in its design. Despite this, however, the chess community was very excited by how Leela seemed to play more like human players.

Figure 2: Leela ratings from 800 to 3200 graphed for accuracy. Leela does better than Stockfish for move matching, but as Elo rating gets better, each version of Leela has better or worse accuracy. Accuracy ranges from under 20% (800-rated Leela predicting 1900-level play) to about 47% (3200-rated Leela predicting 1900-level play).
Figure 2: Move matching accuracy for Leela compared with the targeted player’s Elo rating

In the analysis above, we looked at a number of different Leela generations, with the ratings being their relative skill (commentators noted that early Leela generations played particularly similar to humans). People were right in that the best versions of Leela match human moves more often than Stockfish. But Leela still doesn’t capture human play at different skill levels: each version is always getting better or always getting worse as the human skill level increases. To characterize human play at a particular level, we need another approach.

Maia: A better solution for matching human skill levels

Maia is an engine designed to play like humans at a particular skill level. To achieve this, we adapted the AlphaZero/Leela Chess framework to learn from human games. We created nine different versions, one for each rating range from 1100-1199 to 1900-1999. We made nine training datasets in the same way that we made the test datasets (described above), with each training set containing 12 million games. We then trained a separate Maia model for each rating bin to create our nine Maias, from Maia 1100 to Maia 1900.

Figure 3: Maia trained models from 1100 to 1900 ratings. These are shown predicting player moves at 1100 to 1900 ratings. Maia’s worst accuracy is 46% when a 1900-rated Maia model predicts moves of a 1100-rated player. The highest is 52%, far greater than prior AI chess models.
Figure 3: Move matching accuracy for Maia compared with the targeted player’s Elo rating

As you can see, the Maia results are qualitatively different from Stockfish and Leela. First off, the move matching performance is much higher: Maia’s lowest accuracy, when it is trained on 1900-rated players but predicts moves made by 1100-rated players, is 46%—as high as the best performance achieved by any Stockfish or Leela model on any human skill level we tested. Maia’s highest accuracy is over 52%. Over half the time, Maia 1900 predicts the exact move a 1900-rated human played in an actual game.

Figure 4: Figures 1, 2, and 3 combined showing that Maia’s accuracy greatly surpasses prior models’ performance.
Figure 4: Move matching accuracy for all the models compared with the targeted player’s Elo rating

Importantly, every version of Maia uniquely captures a specific human skill level since every curve achieves its maximum accuracy at a different human rating. Even Maia 1100 achieves over 50% accuracy in predicting 1100-rated moves, and it’s much better at predicting 1100-rated players than 1900-rated players!

This means something deep about chess: there is such a thing as “1100-rated style.” And furthermore, it can be captured by a machine learning model. This was surprising to us: it would have been possible that human play is a mixture of good moves and random blunders, with 1100-rated players blundering more often and 1900-rated players blundering less often. Then it would have been impossible to capture 1100-rated style, because random blunders are impossible to predict. But since we can predict human play at different levels, there is a reliable, predictable, and maybe even algorithmically teachable difference between one human skill level and the next.

Maia’s predictions

You can find all of the juicy details in the paper, but one of the most exciting things about Maia is that it can predict mistakes. Even when a human makes an absolute howler, such as "hanging" a queen (letting the opponent capture it for free), Maia predicts the exact mistake made more than 25% of the time. This could be really valuable for average players trying to improve their game: Maia could look at your games and tell you which blunders were predictable and which were random mistakes. If your mistakes are predictable, you know what to work on to hit the next level.

Figure 5: Matching accuracy (predicting move quality) of Maia versus Leela. Quality prediction is much more consistent and consistently higher across the full range of Maia models, at its height above 60%, when compared with Leela, which has a much broader range of accuracy when looking at the full range of models.
Figure 5: Move matching accuracy as a function of the quality of the move played in the game

Modeling individual players’ styles with Maia

In current work, we are pushing the modeling of human play to the next level: can we actually predict the moves a particular human player would make?

It turns out that personalizing Maia gives us our biggest performance gains. Whereas base Maia predicts human moves around 50% of the time, some personalized models can predict an individual’s moves with accuracies up to 75%!

We achieve these results by fine-tuning Maia. Starting with a base Maia, say Maia 1900, we update the model by continuing training on an individual player’s games. Below, you can see that for predicting individual players’ moves, the personalized models all show large improvements over the non-personalized models. The gains are so large that the personalized models are almost non-overlapping with the non-personalized ones: the personalized model for the hardest-to-predict player still gets almost 60% accuracy, whereas even the non-personalized models don’t achieve this accuracy on even the easiest-to-predict players.
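The fine-tuning recipe itself is generic: start from the base model's weights and continue gradient descent on the individual's games at a lower learning rate. A minimal pure-Python illustration on a one-parameter model (the real models are deep networks trained in the Leela pipeline; this only shows the "warm start, then adapt" idea):

```python
def sgd_steps(w, data, lr, steps):
    """Minimize squared error of y ~ w * x over (x, y) pairs with SGD."""
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

# "Base Maia": trained from scratch on the population's data (y = 2x).
base_w = sgd_steps(0.0, [(1.0, 2.0)], lr=0.1, steps=50)

# "Personalized Maia": start from the base weights and continue training
# on one player's slightly different data, at a lower learning rate.
tuned_w = sgd_steps(base_w, [(1.0, 2.5)], lr=0.02, steps=50)
print(base_w, tuned_w)  # base_w near 2.0; tuned_w shifted toward 2.5
```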

Personalized Maia models show a greatly improved range of mean accuracy when compared to non-personalized Maia models: anywhere from just under 60% at the low end to just over 80% at the high end.

The personalized models are so accurate that given just a few games, we can tell which player played them! In this stylometry task—where the goal is to recognize an individual’s playing style—we train personalized models for 400 players of varying skill levels, and then have each model predict the moves from 4 games by each player. For 96% of the 4-game sets we tested, the personalized model that achieved the highest accuracy (that is, predicted the player’s actual moves most often) was the one that was trained on the player who played the games. With only 4 games of data, we can pick out who played the games from a set of 400 players. The personalized models are capturing individual chess-playing style in a highly accurate way.
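The stylometry decision rule is simple: score the game set under every personalized model and attribute it to the model that predicts the most moves. A sketch with hypothetical accuracy scores for three of the 400 candidates:

```python
def attribute_games(per_model_accuracy):
    """per_model_accuracy: dict mapping player_id -> fraction of moves in
    the game set that this player's personalized model predicted correctly.
    Returns the player whose model fits the games best."""
    return max(per_model_accuracy, key=per_model_accuracy.get)

# Hypothetical accuracies on one 4-game set:
print(attribute_games({"player_a": 0.55, "player_b": 0.71, "player_c": 0.62}))
# player_b
```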

Using AI to help improve human chess play

We designed Maia to be a chess engine that predicts human moves at a particular skill level, and it has progressed into a personalized engine that can identify the games of individual players. This is an exciting step forward in our understanding of human chess play, and it brings us closer to our goal of creating AI chess-teaching tools that help humans improve. Among the many capabilities of a good chess teacher, two of them are understanding how students at different skill levels play and recognizing the playing styles of their students. Maia has shown that these capabilities are realizable using AI.

The ability to create personalized chess engines from publicly available, individual player data opens an interesting discussion on the possible uses (and misuses) of this technology. We initiate this discussion in our papers, but there is a long road ahead in understanding the full potential and implications of this line of research. As it has countless times before, chess will serve as a model AI system that sets the stage for this discussion.

Acknowledgments

Many thanks to Lichess.org for providing the human games that we trained on, and hosting our Maia models that you can play against. Ashton Anderson was supported in part by an NSERC grant, a Microsoft Research gift, and a CFI grant. Jon Kleinberg was supported in part by a Simons Investigator Award, a Vannevar Bush Faculty Fellowship, a MURI grant, and a MacArthur Foundation grant.

Posted on Leave a comment

GitHub to the rescue for developers facing ‘too many options’


By Isaac Levin & Abel Wang

Let’s face it: as the demands of writing software increase, more pressure is put on devs to be as productive as humanly possible. With that demand, the landscape for being a developer has never been more challenging.

With increased responsibilities and technology options, developers are asked to worry not just about the code they write, but about every other aspect of the application development life cycle.

The plethora of tools at a developer’s disposal has made those responsibilities even more challenging. Determining which tools and platforms to develop on, and with, has created a “too many options” scenario in lots of cases.

GitHub To The Rescue

As challenging as this landscape is, there are solutions. One thing is abundantly clear: GitHub is the place where developers go to learn and collaborate with the community.

GitHub has given developers the power of the entire open-source community at their disposal and the freedom to learn from the work of others to better themselves as developers. Beyond hosting source code, GitHub has unveiled many tools to make the ecosystem as comfortable as possible.

One of those tools I am fired up about is “GitHub Actions”. Look, as you all know, I’m a DevOps guy, and I think about DevOps every time I put on my developer hat. I believe that no matter the app, if you are going to send it somewhere that isn’t your local machine, that app deserves to have some continuous integration and continuous delivery (CI/CD), simple as that.

The question becomes, if my code is already in GitHub, what is the easiest way to hook CI/CD into my app? The answer is GitHub Actions.

GitHub Actions let us easily configure custom workflows, drawing on a bounty of existing community-created actions to fulfill the needs of our app. You aren’t starting from scratch: there are actions to do nearly everything, and your job is simply to assemble your workflow in the way YOU need it. With GitHub Actions, getting your app to the Cloud is super easy.
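For example, here is a minimal CI workflow (a YAML sketch: the trigger and checkout action are standard GitHub Actions syntax, but the build and test commands are placeholders you would adapt to your stack):

```yaml
# .github/workflows/ci.yml -- runs on every push to main
name: CI
on:
  push:
    branches: [ main ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # fetch the repo
      - name: Build
        run: ./build.sh             # placeholder: your build command
      - name: Test
        run: ./test.sh              # placeholder: your test command
```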

But What About Developing My App?

Talking about where my app goes before I even write and compile it may seem nonsensical, but I believe that thinking about these things early lets us choose the technology and tools we need to be most productive.

In my opinion, one of the best tools to write the code that powers our software is Visual Studio Code. Visual Studio Code is a cross-platform, multi-language editor that provides extensive extensibility to create the perfect environment to write code.

Whether you are writing express apps in Node.js, multi-thread highly scalable applications in Go, or writing cutting-edge modules on your Raspberry Pi with Python, VS Code gives you the ability to work in the way that benefits you the most.

The best part: VS Code offers deep integrations with GitHub, which let you clone, pull, and push your repositories as well as manage your work on issues and pull requests, all without leaving the editor. HOW COOL IS THAT! Once you have your repo in VS Code, the editor gives you access to a wealth of open-source extensions to do everything from configuring your experience down to your font and background of choice. Nearly anything is possible with VS Code, and with first-class GitHub support, it truly is one of the best options.


What Should Host My App?

With our code written in VS Code, stored in GitHub, and DevOps’d with GitHub Actions, the last thing we need to think about is: where should it run?

Personally, I think Azure is the ultimate Cloud for GitHub. No other Cloud better enables developers to be productive, and with integrations to VS Code AND GitHub, you truly have “best of breed” capabilities at your disposal.

Let’s start with VS Code integrations, and boy there are a ton of them. Existing extensions let developers seamlessly connect to their Azure tenants and fully manage the resources in their subscriptions, including creation, scaling, configuration, debugging, and, if we want, even deployment!

Microsoft has built an extension pack called “Azure Tools” which includes extensions to manage every major Azure resource type as well as Azure CLI support AND Docker. There are also other extensions published by Microsoft that connect to and manage nearly every Azure resource. This means that when new features come to the platform, they will come to the Microsoft-published extensions.


Azure Loves GitHub

Finally, it is safe to say that Azure loves GitHub, and the integrations are plentiful, from Azure Portal authentication with GitHub IDs to individual services interfacing with GitHub directly. One great example is Azure Static Web Apps, which lets you configure CI/CD at the moment you create the resource.

There is no better integration story between GitHub and Azure than “GitHub Actions for Azure,” a set of pre-built GitHub Action workflows that help you automate your app’s story on Azure, from deployment to monitoring and everything in between.

The team has built over 30 of these workflows, and they are documented well enough that you can use them without hesitation. One of my favorite examples is the container image scan, which lets you scan the container images you are using for known vulnerabilities, as well as lint them to ensure you are following best practices.


GitHub Universe 2020

This is the messaging we’re delivering at GitHub Universe 2020. Check out our booth video:

https://www.youtube.com/watch?v=hr-qu_VQ1Ho

Conclusion

GitHub + Visual Studio Code + Azure ensures you as a developer can just trust the tools and get your work done. There are tons more features that I didn’t talk about here, so please take a look at some of the resources below to get started enhancing your developer productivity.

More Resources

GitHub Actions for Azure | Create workflows to build, test, package, release and deploy to Azure

GitHub Actions for Azure | Microsoft Docs

Automate your workflow with GitHub Actions | Microsoft Learn

Visual Studio Code Azure Extensions

Create free services with Azure free account | Microsoft Docs

Posted on Leave a comment

Microsoft, Code.org partner to teach AI + ethics from elementary to high school


Microsoft and Code.org are excited to announce a partnership that gives every student from elementary school to high school the opportunity to learn about artificial intelligence (AI).

We’re excited to unveil our new video series on artificial intelligence and machine learning. Microsoft CEO Satya Nadella introduces the series.

At a time when AI and machine learning are changing the very fabric of society and transforming entire industries, it is more important than ever to give every student the opportunity to not only learn how these technologies work, but also to think critically about the ethical and societal impacts of AI.

AI is used everywhere, from voice assistants to self-driving cars, and it’s rapidly becoming the most important technological innovation of current times. AI has the potential to play a major role in addressing global problems, such as detecting and curing diseases, cleaning oceans, eliminating poverty, or harnessing clean energy.

At the same time, with great power comes great responsibility, and budding computer scientists must learn to consider technology’s ethical impacts. How does algorithmic bias impact social justice or deep fakes impact democracy? How does society cope with rapid job automation? By learning how to consider the ethical issues that AI raises, these future computer scientists will be better able to envision the appropriate safeguards that help to maximize the benefits of AI technologies and reduce their risks.

Made possible by Microsoft’s latest donation of $7.5 million, Code.org plans a comprehensive and age-appropriate approach to teaching how AI works along with the social and ethical considerations, from elementary school through high school.

Available on December 1:

  • A new video series on AI, featuring Microsoft CEO Satya Nadella along with leading technologists across industry and academia. See the playlist with all videos here.
  • AI for Oceans, available in 25+ languages and optimized for mobile devices.

Within the coming year, AI and machine learning lessons will be integrated into Code.org’s CS Discoveries curriculum, one of the most widely used computer science courses for students in grades 6–10, and into App Lab, Code.org’s popular app-creation platform used throughout middle school and high school.

In CS Discoveries, students will learn to work with datasets to create machine learning models that they can incorporate into their apps, and explore how advances in new technologies such as computer vision and neural networks require computer scientists to think ethically to avoid bias and harm. Curated datasets will help students better understand the real-world impact that these technologies have.

Code.org will also help students and teachers find additional educational resources from a variety of partners and the broader community behind AI education.

A look at a new lesson in Minecraft: Education Edition.

Additionally, last month the Microsoft AI for Earth team partnered with Minecraft: Education Edition to release five lessons challenging students to use the power of AI in a range of exciting real-world scenarios: to preserve wildlife and ecosystems, help people in remote areas, and research climate change.

What’s more, Microsoft’s Imagine Cup Junior 2021 challenge provides students aged 13 to 18 the opportunity to learn about technology and how it can be used to positively change the world.

The global challenge is focused on Artificial Intelligence (AI), introducing students to AI and Microsoft’s AI for Good initiatives so they can come up with ideas to solve social, cultural and environmental issues.

Learn more and join the competition here.

On Code.org, 45% of students are young women, and in the US, 50% are students from underrepresented racial and ethnic groups and 45% are in high-needs schools. Reaching the tens of millions of students in Code.org’s courses and on its platform, the partnership between Microsoft and Code.org works to democratize access to learning AI, because all students deserve the opportunity to shape the world they live in — and because creating an equitable and socially just future will take all of us.

-Code.org CEO Hadi Partovi and Microsoft President Brad Smith

Posted on Leave a comment

How tech can support secure, safe and equitable vaccine distribution


How technology can help meet the challenge of our lifetime

As several COVID-19 vaccines near regulatory approval in the U.S., the E.U., Japan, and other countries, governments around the world must establish systems to ensure effective and equitable distribution within their countries. At Microsoft, we have been working with public and private sector organizations around the world to help support this monumental task.

In some ways, the challenges related to the distribution and administration of the COVID-19 vaccine are similar to other vaccines. There are logistical challenges of supply procurement and demand forecasting, distribution, adverse reaction tracking and reporting, and integration with immunization records. But there are unique challenges as well: fair allocation, prioritization and phased eligibility, registration, tracking, cold-chain storage where needed, and the need to vaccinate a critical mass of the world’s population of over seven billion people in short order during a global pandemic.

This is a complex and multifaceted challenge. Government and public health officials will need to track multi-dose vaccinations, assess how public skepticism may impact demand, and coordinate with hospitals, clinics, nursing homes, pharmacies, and other vaccination sources to ensure public safety. This is on top of all the other challenges health workers are grappling with during the pandemic, including overloaded hospitals, a lack of Personal Protective Equipment (PPE), staff suffering from burnout, and much more.


The World Economic Forum states that logistics around the COVID-19 vaccines are “the challenge of a lifetime” and that to achieve global distribution, “technology will play a vital role in ensuring the smooth execution along every step of the supply chain … currently, no platform exists that covers all those visibility needs.”¹

“The goal is to enable a fair, equitable, and efficient distribution of the COVID-19 vaccine,” remarks Dr. David Rhew, Microsoft’s Worldwide Chief Medical Officer. “In response to this urgent need, we need a secure and interoperable platform that balances the complexities of the registration, scheduling, and supply chain distribution, with the broader public health mission to deliver a safe and effective vaccine in a prioritized manner.”

How technology can support this global challenge

In our discussions with public health officials and customers, we have identified several imperatives that any vaccine management offering should include:

  • Purpose-driven solutions designed for a fair, equitable, and efficient procurement and distribution of the vaccine.
  • Comprehensive use cases that support cold-chain supply management; patient, provider, and clinic registration; and phased vaccination scheduling and management with forecasting tools. The platform also needs to automate reporting of vaccination progress and potential vaccine side effects to local, regional, and national agencies.
  • Use of existing data systems and interoperability standards, such as HL7 FHIR, so that clinical data can be shared in a scalable manner and implementations can proceed rapidly at the lowest cost.
  • Security, privacy, and compliance are non-negotiable characteristics of any platform used by public sector and health entities. 

Partnerships are essential to meet the challenges ahead

Given the scale and complexity, no single government or organization can solve this vaccine distribution challenge on its own. It will take strategic alliances, an ecosystem of delivery partners, and interoperable technology offerings that are secure, transparent, and can scale to meet global demand. Data and artificial intelligence (AI) solutions will be especially important to provide insights and enable public health and government officials to make informed decisions about the virus and facilitate cross-agency collaboration, enable remote work, and deliver trusted services without interruption.

At Microsoft, we have a proven track record of partnering with governments, public health agencies, healthcare organizations, pharmaceutical companies, logistics providers, and other key stakeholders to tackle tough challenges. In the early part of the COVID-19 pandemic, we observed a massive influx of inquiries related to concerns about COVID-19 flooding health care agencies. This led to subsequent overloading of call centers and the crowding of urgent care clinics and hospital emergency rooms, which further increased the risk of spreading the infection. To address these urgent issues, we partnered with governments, public health agencies, and healthcare organizations across the globe to develop and deploy AI-based chatbot technology that could deliver individualized COVID-19 guidance. Over 680 million individualized COVID-19 messages have been delivered worldwide since March. The U.S. Centers for Disease Control and Prevention has adopted Microsoft technology and delivered over 37 million messages in October alone.

This same bot technology has been adopted by pharmaceutical companies and researchers to enable large-scale recruitment of donors for clinical trials. Through “The Fight Is In Us” campaign, Microsoft, in collaboration with the Bill and Melinda Gates Foundation, is partnering with academic medical centers, plasma companies, national blood donor organizations, and several other stakeholders to advance the study of convalescent plasma to improve outcomes for patients with COVID-19 infection.² Microsoft is also partnering with Adaptive Biotechnologies to facilitate the evaluation of the immune response in patients exposed to COVID-19 as part of the Immune Race clinical trial.³

Through a collaboration with the American Hospital Association, Kaiser Permanente, Kearney, Merit Solutions, and UPS, we are facilitating the equitable donation and distribution of PPE and other medical supplies to places that have the greatest need.⁴

For decades, Microsoft has cultivated a robust ecosystem of technology partners from global system integrators to local independent software vendors. These partners build industry-specific solutions using Microsoft cloud services and other technologies. Microsoft’s Data and AI technologies, Business Applications, and Modern Workplace offerings can provide powerful analytics, relevant applications, and collaboration tools—and those capabilities are amplified when they are customized by Microsoft’s partners. Today, Microsoft Consulting Services along with several of Microsoft’s partners are helping public health customers address aspects of COVID-19 such as contact tracing, testing, return to work, return to school, and most recently the planning and preparation for vaccination distribution and administration.

Microsoft’s commitment

Microsoft CEO Satya Nadella says, “We are adopting a first responder mindset across the company, working with so many customers on the front lines, including governments, health providers, schools, food suppliers, and other commercial customers critical to the continuity and stability of services in every country.” Let us work together during this pandemic to embrace the power of digital technology and of human innovation to move global vaccination forward, so that COVID-19 vaccines are not only available and accessible to all, but arrive when people truly need them most.

We will continue to do our part to help our customers and the global community address this historic challenge.

Microsoft is committed to supporting public health and safety by equipping governments with the resources they need. For further information, use these resources:

Learn more about Microsoft in Government and how you can realize the true transformational power of AI.

References:

¹ The challenge of a lifetime: how to get billions of COVID-19 vaccines around the world

² THE FIGHT IS IN US

³ Help us understand how different people respond to the COVID-19 virus

⁴ Coalition of organizations launch the ‘Protecting People Everywhere’ initiative answering the call to source safe and effective PPE for front-line health care workers