
First episode of #EduTechTalks podcast looks at managing your school’s devices

Watch the episode: https://www.youtube.com/watch?v=QsAiEljj9mI

For a while now, my good friend and colleague Amit Pawar and I have been talking about launching a podcast on the crossover of technology and education. After a bit of back and forth, we have finally made it happen (with a lot of help from Liezl Milan!).

In this first episode, we discuss why you should manage devices, the role of an MDM in supporting this, and how Microsoft Intune can simplify this process for schools.

There are more pods in the works, so stay tuned.



How going digital can help governments hone their focus on serving citizens

When governments offer their citizens quality digital experiences, they can expect to see levels of trust rise by as much as 58 percent.

But, with most governments dealing with outdated technology infrastructures, ever-shrinking budgets, and long-established but inefficient processes, this can be a difficult aim to achieve.

When government organizations build roadmaps for digital transformation, it’s easy to just focus on the obvious wins—such as operational improvements and cost saving—but in doing so, many miss a key opportunity to enhance citizen engagement. If you’re planning your own path to transformation, you need to consider the evolving needs of your citizens, in addition to the government’s.

The journey to citizen-first digital cities

Take the city of Tel Aviv.

One of the early pioneers of citizen-focused digital transformation, the Israeli city worked with Microsoft on its DigiTel initiative to boost communication between the government and citizens while bringing them a broader range of digital services.

Tel Aviv already offered a wealth of digital services through its website—but it wanted to expand the scope and convenience. So, it created the DigiTel app, which uses location-based personalization to deliver real-time information to its citizens, such as transport timetables, nearby restaurants, and parking availability.
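As a rough illustration of the kind of location-based personalization an app like DigiTel performs (this is not the city's actual code; the service names and coordinates below are invented), the sketch filters a list of services down to those within walking distance of a citizen and sorts them nearest first:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearby(services, lat, lon, radius_km=1.0):
    """Return services within radius_km of the citizen, nearest first."""
    hits = [(haversine_km(lat, lon, s["lat"], s["lon"]), s) for s in services]
    return [s for d, s in sorted(hits, key=lambda p: p[0]) if d <= radius_km]

# Hypothetical service locations around central Tel Aviv.
services = [
    {"name": "Parking garage", "lat": 32.0809, "lon": 34.7806},
    {"name": "Bus stop",       "lat": 32.0853, "lon": 34.7818},
    {"name": "Beach cafe",     "lat": 32.0700, "lon": 34.7600},
]
# A citizen standing at the bus stop sees it first, then the nearby garage;
# the cafe is more than a kilometer away and is filtered out.
print([s["name"] for s in nearby(services, 32.0853, 34.7818)])
```

A production service would of course pull live data (transport timetables, parking availability) rather than a static list, but the ranking-by-proximity step is the same.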

The app also offers citizens an easier way to get involved with municipal issues and improve their neighborhoods. Say someone spots a pothole on their street, for example—using the DigiTel app, they can take a photo and share it directly with the people who can get it fixed. The city can also seek citizens’ input on projects such as new developments and education plans, encouraging them to become stakeholders in the city.

The goal of the program is to bring services to the people, rather than making them seek out support. “We want to make it as easy as possible to interact with the city,” says Liora Shechter, Tel Aviv’s CTO. “So, with every project we develop, we try to go the extra mile and develop the most friendly, intuitive, and useful service for people.”

How to build your own smart city

If you’re one of the thousands of governments that are steeped in aging technologies, take the first step toward citizen-focused transformation by assessing your current infrastructure and making a plan to improve it in ways that prioritize civic engagement. In Tel Aviv, that meant moving some of its key assets off premises and into the Microsoft cloud.

But for your government, this journey will likely look slightly—or vastly—different. It’s all about finding the technologies and strategies that suit you and your citizens’ needs.

So, ask yourself, what does progress look like for your city?

We’ve put together a fast, easy assessment to help you understand your government’s digital maturity—and give you some pointers for what your roadmap should look like. Simply answer five questions, and we’ll help you take stock of where you are on your transformation journey, and where you should head next.

Take the assessment now.


New research highlights massive opportunity to empower Firstline Workers with technology

As companies around the world digitally transform their business models, operations, and corporate cultures, many have rolled out cloud and mobile technologies that have also transformed the employee experience. For information workers, technology has created a more networked and open flow of information, made collaboration easier, and provided more flexibility in where, when, and how they work.

But there’s another large and important segment of the workforce that has been underserved by technology to date. These are the more than two billion Firstline Workers worldwide, who work in roles that make them the first point of contact between a company and its customers or products. Firstline Workers comprise the majority of the global workforce and play a critical role in the global economy.

At Microsoft, we believe it should be a top priority of company leaders to empower employees in all roles to do their best work, and that a digitally enabled workforce represents a true competitive advantage.

An infographic showing a sample of Firstline Worker roles, including bank tellers, flight attendants, store managers, administrators, waiters, and field technicians.

Firstline Workers outnumber other corporate workers 4 to 1 in industries such as hospitality, manufacturing, retail, and healthcare.

We recently commissioned Forrester Consulting to conduct a study on the potential impact of providing Firstline Workers with technology. The study, titled Equip Firstline Workers With Better Tools To Drive Engagement, was released today.

“Firstline Workers comprise the largest category of employees at most organizations … as the face of the company, they play a role in just about every customer interaction and every product and service that is delivered. Empowering them to do their best work and leveraging their experiences to guide business decisions is a huge opportunity for businesses.”
—Forrester Opportunity Snapshot: Equip Firstline Workers With Better Tools To Drive Engagement: December 2018

Forrester surveyed 304 manager-and-above decision makers and 301 Firstline Workers at companies in the U.S., U.K., Germany, France, and Canada. The companies span the retail, healthcare, government, and financial services industries and range in size from 500 to more than 20,000 employees.

The study offers strong evidence that by empowering Firstline Workers with modern tools, businesses can improve the customer and employee experience, enhance workforce productivity, and improve the bottom line.

But while 77 percent of Firstline Workers agree or strongly agree that technology is important to their roles at work, this segment of the workforce is often left out of enterprises’ digital transformation investments. In fact, the study revealed sizeable gaps between the perceptions of management and the actual experiences of Firstline Workers.

Only 23 percent of Firstline Workers strongly agree that they currently have the technology they need to be productive, yet 50 percent of managers say they feel the tools they offer make employees’ jobs easier and/or more satisfying, enough to be considered a competitive perk.

Opportunities to empower Firstline Workers

The study uncovers several areas of opportunity for companies considering how technology can transform the way their Firstline Workforce operates, including the specific types of technology that are likely to have the most impact. The study brings to light a few recommendations in particular:

  • Provide tools that support a mobile workforce. Firstline Workers such as retail associates, flight crews, and field service workers are mobile by nature of their jobs, but their most important tools often aren’t portable, and management doesn’t realize it. Less than half of the Firstline Workers surveyed agree that the tools they use do a good job of allowing them to be mobile, whereas 75 percent of managers feel the tools they provide do a good job of this.
  • Enable digital communication and collaboration. Many Firstline Workers rely on their coworkers, managers, and even customers for information and guidance throughout the day. They need modern tools that simplify collaboration, communication, and access to information. Nearly half (46 percent) of Firstline Workers surveyed indicated the ability to work collaboratively with teammates as the primary capability required to do their job.
  • Boost Firstline Workers’ access to data and artificial intelligence (AI). AI and business intelligence tools are increasingly being used by information workers to automate manual tasks and deliver data insights. But only 30 percent of Firstline Workers report having access to a predictive tool, and only 21 percent have access to a digital assistant. Such tools can help Firstline Workers do their jobs more efficiently and effectively and automate certain activities so employees can focus on higher value work—like delighting customers and solving more complex business problems.

Involving every worker in digital transformation

At Microsoft, we have always believed in people’s ability to adapt and innovate. As part of our mission to empower people and organizations to achieve more, we also believe in the power of technology to amplify what humans can do. That’s why we are building new capabilities in Microsoft Teams that are tailored to helping Firstline Workers manage their workday, access information to do their job more effectively, and easily share insights and communicate with others.

Stay tuned for more from us on this front in the coming months. Learn more about how Microsoft 365 and Teams can help you maximize the impact of your Firstline Workforce and read the full Forrester report.


How Microsoft Threat Protection took down recent ‘Tropic Trooper’ cybersecurity exploit

December was another month of significant development for Microsoft Threat Protection capabilities. As a quick recap, Microsoft Threat Protection is an integrated solution securing the modern workplace across identities, endpoints, user data, cloud apps, and infrastructure. Last month, we shared updates on capabilities for securing identities, endpoints, user data, and cloud apps. This month, we provide an update for Azure Security Center which secures organizations from threats across hybrid cloud workloads. Additionally, we overview a real-world scenario showcasing Microsoft Threat Protection in action.

Enhancing your infrastructure security using Azure Security Center

Azure Security Center is a sophisticated service designed to help organizations:

  • Understand their security state across on-premises and cloud workloads.
  • Find vulnerabilities and remediate quickly.
  • Limit exposure to threats.
  • Detect and respond swiftly to attacks.

With modern organizations now adopting hybrid ecosystems, securing the infrastructure across hybrid cloud workloads becomes more critical. Azure Security Center was developed to address the complexities of the modern infrastructure by helping strengthen your security posture and protect against threats to the infrastructure. Azure Security Center can now provide better visibility over an organization’s security state across virtual networks, subnets, and nodes by generating a topology map of the layout of each of these infrastructure components (Figure 1). As admins review the components of the network, Azure Security Center offers recommendations to help quickly respond to detected network issues. Additionally, Azure Security Center continuously analyzes the network security group (NSG) rules in the workload and presents a graph containing the possible reachability of every virtual machine (VM) in that workload.

Figure 1. Network topology map highlighting virtual networks, subnets, and nodes.
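As a simplified illustration of the NSG-rule analysis described above, the toy model below decides whether inbound traffic can reach a port by checking rules in priority order. (This is a sketch only; real NSG rules also match on protocol, direction, and source/destination prefixes, and the rules here are invented.)

```python
def port_reachable(rules, port):
    """Evaluate NSG-style inbound rules for a port.
    Rules are checked in priority order (lower number wins);
    the first matching rule decides. Default is deny."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        lo, hi = rule["port_range"]
        if lo <= port <= hi:
            return rule["access"] == "Allow"
    return False

# Hypothetical rule set: HTTPS open, RDP explicitly denied, catch-all deny.
rules = [
    {"priority": 100,  "port_range": (443, 443),   "access": "Allow"},
    {"priority": 200,  "port_range": (3389, 3389), "access": "Deny"},
    {"priority": 4096, "port_range": (0, 65535),   "access": "Deny"},
]
assert port_reachable(rules, 443) is True    # HTTPS exposed
assert port_reachable(rules, 3389) is False  # RDP locked down
```

Running this evaluation for every VM and port is, in miniature, how a reachability graph like the one Azure Security Center presents can be derived from the rule set.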

Another important enhancement is a new permissions model for “Just in Time (JIT) VM” access (Figure 2). Azure Security Center has lowered the privilege a user needs to successfully request JIT access to a VM from write to read, making it easier for customers to follow the least-privileged Role-Based Access Control (RBAC) model. JIT VM access reduces the impact of brute force attacks that target management ports to gain access to a VM; a successful attacker can take control of the VM and establish a foothold in your environment. When JIT access is enabled, Azure Security Center locks down inbound traffic to Azure VMs by creating an NSG rule. Admins select the ports on the VM to which inbound traffic will be locked down, and those ports are controlled by the JIT solution.

Figure 2. The Azure Security Center highlighting the JIT VM access feature.

Microsoft Threat Protection stops threats as envisioned

Security solutions always sound effective in theory, but in practice, often the capabilities do not match the vision. Microsoft Threat Protection was recently put to the test against a real-world threat known as Tropic Trooper (Figure 3), which has been targeting Asian enterprises in the energy and food and beverage industries since 2012.

Figure 3. Tropic Trooper attack chain.

Seamless integration between disparate services is a core differentiator of Microsoft Threat Protection. During the Tropic Trooper campaign, Windows Defender Advanced Threat Protection (ATP), Azure Active Directory (Azure AD), and Office 365 ATP worked in sync, helping ensure the threat was addressed quickly with no adverse impact. The campaign triggered several Windows Defender ATP alerts, activating the device risk calculation mechanism, which assigned high risk scores to the affected endpoints. These endpoints moved to the top of the list in Windows Defender Security Center, leading to early detection and discovery of the attack. Windows Defender ATP seamlessly integrates with Azure AD conditional access, which during Tropic Trooper blocked high-risk endpoints from accessing sensitive content, protecting other users, devices, and data in the network.

The Windows team examined the alert timeline (Figure 4) to further investigate and ultimately remediated the threat. Investigating the alerts, the Windows team uncovered the malicious document carrying the Tropic Trooper exploit. Since signal is shared between Microsoft Threat Protection services, the Windows team used Office 365 Threat Intelligence’s Threat Explorer to find the specific emails used to distribute the exploit. The investigation also showed that Office 365 ATP blocked the malicious emails at the onset, stopping the attack’s entry point and protecting Office 365 ATP customers. Endpoints remained secure through Windows Defender ATP’s sophisticated automated investigation and remediation capabilities that discovered malicious artifacts on affected endpoints and remediated them. This sequence of actions ensured that the attackers no longer had a foothold on the endpoint ecosystem and that all endpoints returned to normal working state. Importantly, Microsoft Threat Protection services collectively secured identities, endpoints, and Office 365.

Figure 4. Windows Defender ATP alert timeline for Tropic Trooper.
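The risk-based triage in this scenario can be sketched as a toy model (the endpoint names, scores, and threshold below are invented for illustration): endpoints are ranked by risk score, and those above a threshold are flagged for a conditional-access block.

```python
def triage(endpoints, high_risk_threshold=70):
    """Rank endpoints by risk score (highest first) and flag those
    at or above the threshold for conditional-access blocking."""
    ranked = sorted(endpoints, key=lambda e: e["risk"], reverse=True)
    blocked = {e["name"] for e in ranked if e["risk"] >= high_risk_threshold}
    return ranked, blocked

# Hypothetical fleet: one endpoint was hit by the malicious document.
endpoints = [
    {"name": "LAPTOP-01", "risk": 15},
    {"name": "SRV-FILE",  "risk": 85},
    {"name": "DESK-HR",   "risk": 72},
]
ranked, blocked = triage(endpoints)
print([e["name"] for e in ranked])  # riskiest endpoints surface first
print(sorted(blocked))              # these would be cut off from sensitive content
```

The real mechanism computes risk from alert severity and history rather than a single number, but the surfacing-and-blocking flow is the same shape.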

Experience the evolution of Microsoft Threat Protection

Take a moment to learn more about Microsoft Threat Protection. Organizations have already transitioned to Microsoft Threat Protection and partners are leveraging its powerful capabilities. Begin trials of the Microsoft Threat Protection services today to experience the benefits of the most comprehensive, integrated, and secure threat protection solution for the modern workplace.


First expansion of ‘Forza Horizon 4’ launches

Forza fans might have December 13th circled on their calendars, and for good reason. After all, tomorrow is the launch day of the first expansion for the game – Fortune Island – giving FH4 players the opportunity to explore a brand-new island location full of danger, treasure, and epic driving adventures. Alongside the launch of Fortune Island, Playground Games is also releasing the fourth update for Forza Horizon 4. This update brings new content and experiences to Forza Horizon 4, along with lots of fixes and improvements to the game overall.

Here’s a look at some of the highlights coming with Update 4:

Horizon Life MP Update

With this update, Playground has made some improvements to Horizon Life multiplayer events. Now when you create a multiplayer event, you can post it to the Activity Panel and then return to Freeroam while you wait for other players to join. The previous 40-second limit has been removed, and now you can wait for as long as you like for players to join before you start the session.

Photo Mode Improvements

Forzatographers, rejoice! New improvements to Photo Mode are coming. Now you can separately toggle vehicle lights, drivers, wildlife, and crowds to specify the elements you want in your photographic masterpieces. In addition, there are new bokeh, focus, and sampling options, as well as more freedom of movement with the photo camera.

Horizon Holiday Party

It’s the holiday season and Forza Horizon 4 players can celebrate with a load of new holiday-themed items available with the latest update. Twelve new festive clothing options are all available from Season Events, The Trial, Wheelspins, and the #Forzathon Shop.

Other Improvements

In addition to new content, Update 4 brings with it improvements across various areas of the game. Highlights include a fix for an exploit in Route Creator that allowed players to create routes to easily gain large numbers of skill points, as well as a fix for an exploit in Rivals that allowed players to use cars that did not meet the event restrictions. Elsewhere, emotes can now occasionally be found for purchase in the #Forzathon Shop, and the team has improved the visibility of the driving line on the mini-map in various colorblind modes.

For the full list of fixes in Update 4, check out the Release Notes in the Forza Support section of our website.

Don’t forget, Playground Games will be giving fans a world premiere of the Fortune Island expansion live from PG HQ on December 13. The show starts at 7 a.m. Pacific/3 p.m. GMT on the official Forza Mixer and Twitch channels, and the PG team will be exploring Fortune Island and answering community questions as well. Following Playground’s Fortune Island stream, we’ll have the December update of “Forza Monthly,” which kicks off at 10 a.m. Pacific/6 p.m. GMT, where we’ll be joined by members of Playground to dive further into the Fortune Island expansion. Don’t miss the shows!


Highlights from Microsoft Computer Science Education Week 2018

Teacher helps young student at Hour of Code
A Microsoft Fargo volunteer helps a student at an Hour of Code event at Northern Cass Elementary School in Hunter, North Dakota. Photo credit: Dennis Krull.

Last week, people around the world celebrated Computer Science Education Week. Millions of kids participated in an Hour of Code, a global call to action to spend an hour learning the basics of coding.

At Microsoft we know that computer science is more than just “coding.” It is a way to think, and a tool for creation. It enables people to do, build and invent. Importantly, it also puts youth on the path for success as they enter the workforce. In the U.S. alone, over 500,000 computing jobs, across all industries, remain unfilled because employers cannot find qualified candidates. Digital skills, especially computer science and coding, are a foundation of our future jobs market and economy.

We are inspired by the educators and volunteers across the globe who have brought computer science into their schools, including volunteers from the Microsoft Philanthropies program, Technology Education and Literacy in Schools (TEALS). But we all need to do more so that all K-12 students are getting computer science education – in particular, we need to increase the number of teachers who are trained to bring computer science to their students.

To address this gap, Microsoft President Brad Smith announced last week that we have committed an additional $10 million to help Code.org ensure that by 2020, teachers in every school have access to Code.org professional development, and that every state will have passed policies to expand access to computer science.

Code.org is one of the world’s leading nonprofits helping to expand access to computer science education. Its annual Hour of Code campaign has engaged 10 percent of students around the world, thanks in part to collaboration with Microsoft on three different Minecraft adventures for the Hour of Code; and Code.org’s professional development resources have helped 87,000 new teachers learn computer science across grades K-12.

Jake Baskin, Brad Smith, Melinda Gates and Hadi Partovi
From left, Jake Baskin, of the Computer Science Teachers Association, Microsoft President Brad Smith, Melinda Gates of the Bill and Melinda Gates Foundation, and Hadi Partovi of Code.org Photo credit: Code.org.

Since 2013, when Code.org’s Advocacy Coalition began its work with key partners such as Microsoft – its founding supporter and largest corporate sponsor – the number of states that have made computer science count toward high school graduation has gone from nine to 40. Our renewed commitment to Code.org will help build on this great work over the next three years and into the future.

But our news didn’t stop there: Last week we saw thousands of students, educators, and professionals participating in computer science celebrations together with Microsoft across the globe:

  • Worldwide, 14,000 classrooms and nearly 700,000 students from 111 countries registered for Skype in the Classroom’s Meet Code Creators Series. Guest speakers gave students a look at what code makes possible, from computer science in movie graphics and animation, to technologies in dance, fashion and design, to Microsoft’s AI for Earth program.
  • Microsoft CEO Satya Nadella met with TEALS high school students and teachers from Brooklyn and the Bronx to learn about their coding projects.
  • In the U.S., we ran Hour of Code events in every Microsoft Store and in each of our six TechSpark regions, calling attention to computer science with the new Minecraft Hour of Code tutorial, Voyage Aquatic.
  • In North Dakota, Microsoft’s TechSpark initiative helped bring together the first simultaneous statewide Hour of Code. More than 5,000 students from across 100 schools participated.
  • In Europe, Microsoft participated in more than 30 events across 20 countries, reaching more than 10,000 participants, hosting Hour of Code events for students, teacher trainings and celebrations.

Working with educators and partners, we can help ensure that any young person who wants to learn critical computational skills is able to, and has the tools to create, invent and succeed in the economy of today – and tomorrow.



Tackling the conservation crisis with the right data

Tourism is big business, accounting for 10.4 percent of the world’s GDP and supporting one out of every 10 jobs on the planet. For economically fragile communities it can be a lifeline, spurring business development and creating living-wage jobs. But sometimes this growth comes at a price.

The top twenty countries now represent nearly two-thirds of all international arrivals. This concentrated tourism means some of the world’s most beautiful sites are in danger of being “loved to death,” according to a new report from McKinsey & Company, “Coping With Success: Managing Overcrowding in Tourism Destinations.”

An estimated 32 million people will visit Greece in 2018, and just five small islands—Santorini, Crete, Corfu, Rhodes, and Mykonos—will receive much of the volume, stressing their infrastructure and ecosystems. The Peruvian government has tried to limit the number of visitors to Machu Picchu, because of concerns about irreversible ecologic impacts. Tourism hotspot Venice is suffering because the vast crowds that descend on its 100 small islands every year displace locals. And nearly 80 percent of the reefs in Thailand’s popular Koh Khai islands have been damaged by humans, causing the government to close three islands, states the McKinsey report.

In nearly every tourist attraction location, governments are struggling to manage and mitigate the environmental impacts, which include waste, erosion, defacement of artifacts, habitat loss, and water stress. Popular tourism sites provide a compelling example of why national and local governments need to craft long-range sustainability strategies accompanied by specific actions that start today to protect their valuable ecosystems.

The Nature Conservancy (TNC) is taking the lead on assessing the economic and ecological impacts of tourism. More importantly for the countries, economies, and ecosystems in question, TNC is using the power of the cloud and AI to provide insights about how to develop a more sustainable path forward.

Using big data to protect fragile tourism destinations

TNC has worked for years to protect and conserve the lands and waters on which all life depends. Their goal is to enable a world where people and nature thrive. To create that world, people need better and more accurate information to understand what is happening today and why, to prove the economic value of investing in data-led solutions for conservation issues, and to pursue focused actions to preserve nature for future generations.

TNC has historically relied on traditional and academic research to build a business case for sustainability. However, the organization also has lacked a way to combine that research with big data and social media to create a compelling rationale for protecting fragile ecologic systems, such as coral reefs, cities prone to flooding, and more.

Microsoft’s AI for Earth program, which is part of Microsoft’s AI for Good initiative, helps organizations use artificial intelligence (AI) to solve the world’s thorniest environmental challenges. Microsoft became a global partner of TNC with its Upgrade Your World program, launched in conjunction with the Windows 10 release in 2015. Backed by Microsoft’s resources, TNC can now use data in a more powerful way, and even work toward dissolving boundaries between organizations that deal with environmental issues, such as urban planning, economic development, corporate sustainability, and ecology preservation groups.

“If we don’t have proof or numbers on the important facets of nature and why we need to protect it, we sound vague,” explains Dr. Mark Spalding, senior marine scientist at The Nature Conservancy. “My first thought was that with advances in technology, we can show local economies how valuable nature is. If we can show them where nature provides significant economic returns, then we can do a much better job of persuading them to look after nature.”

Artificial intelligence improves conservation decision-making

Through an AI for Earth grant fulfilled by NetHope, TNC leveraged Microsoft Azure cloud services to link data with AI and machine-learning tools and develop decision models that can be shared among cross-disciplinary organizations. Each group can use the models to prove, plan, and track the impact of sustainability initiatives, providing decision-makers with the economic data they need to drive policy-making and investing.

Emerging technology is also helping break down information silos that for years have stood in the way of better scientific insights. For example, Esri, the University of California at Santa Cruz, Spatial Development International, and the Natural Capital Project worked with TNC and Microsoft to brainstorm conservation applications based on Azure’s cognitive services API. One groundbreaking result is TNC’s Mapping Ocean Wealth initiative. The nonprofit crafted an AI-powered web app in tandem with Microsoft AI for Earth and Esri, building the software and training the algorithm.

The app can precisely analyze geo-tagged photos that are uploaded to the photo-sharing site Flickr, processing millions of images in hours. Machine learning helps the app distinguish between a scuba diver in a fragile area versus one in a pool, for instance. By matching the frequency and number of coral-related photos to economic tourism data, data scientists can quantify the value of coral reefs, kilometer by kilometer.
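A heavily simplified sketch of that counting step might look like the following. (The labels, coordinates, and per-photo revenue factor here are invented; the real pipeline uses a trained image classifier and actual tourism economics.) Classified photos are bucketed into grid cells, and counts are scaled into a value estimate per cell:

```python
from collections import Counter

def reef_value_by_cell(photos, revenue_per_photo):
    """Bucket coral-labeled, geo-tagged photos into ~1 km grid cells and
    scale the counts into a rough tourism-value estimate per cell."""
    counts = Counter(
        (round(p["lat"], 2), round(p["lon"], 2))  # ~1 km cells
        for p in photos
        if p["label"] == "coral"                  # keep only reef-related photos
    )
    return {cell: n * revenue_per_photo for cell, n in counts.items()}

# Hypothetical classifier output for three geo-tagged photos.
photos = [
    {"lat": 24.551, "lon": -81.782, "label": "coral"},
    {"lat": 24.552, "lon": -81.784, "label": "coral"},
    {"lat": 24.553, "lon": -81.781, "label": "pool"},  # filtered out, like the pool diver above
]
print(reef_value_by_cell(photos, revenue_per_photo=500))
```

The classifier is the hard part; once each photo carries a label and coordinates, the kilometer-by-kilometer valuation reduces to aggregation like this.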

Data visualization reveals the true value of natural resources

When TNC shared its AI-powered map of the Florida Keys coral reefs with local officials, the policymakers realized that in high-tourism areas in their waters, every square kilometer of reef accounts for more than $1 million in revenue each year. “People are starting to have ‘aha’ moments,” Spalding says. “Seeing that hard data helps localities plan and realize their natural resources truly are precious.”

Those insights, delivered with powerful data visualizations, can help local agencies balance tourism goals with preservation objectives. That type of decision-making is already occurring. In Cancun, Mexico, local hotels are contributing to a voluntary tax fund to repair the area’s “million-dollar reefs” when they are damaged after natural or other disasters. TNC plans to run its app in real time to rapidly identify such changes, which will empower groups to accelerate repair efforts, translating to a healthier, more sustainable environment.

Similarly, TNC has teamed with Minecraft to create an immersive world that enables players to protect and restore coral reefs through play. Players can place five types of coral reefs in Minecraft’s in-game oceans and use the Coral Crafter Skin Pack to create character costumes, learning about the importance of ecologic preservation.

“Thanks to our work with Microsoft, we have the incredible opportunity to leverage technology to link science to actionable planning,” says Zach Ferdaña, program manager at The Nature Conservancy. “We’re using AI, machine learning, and other technology tools to accelerate our impact and increase coastal communities’ resilience. We’re hacking the future.”

Read the case study.

To stay up to date on the latest news about Microsoft’s work in the cloud, bookmark this blog and follow us on Twitter, Facebook, and LinkedIn.


Arccos Golf’s virtual caddie uses AI to improve your game

Some golfers have never known the pleasure of playing with a caddie who understands how far they can hit with each club, who knows the layout of the course or who can predict how the wind and weather will affect their game.

That’s why Arccos Golf used tools in Microsoft Azure to create the first virtual caddie— one that uses artificial intelligence to parse all that data and offer personalized recommendations about what clubs to use and how to play a hole.

The goal? To create a level playing field for all golfers and help newcomers and longtime golfers alike improve at a quicker pace.

Jack Brown, Arccos senior vice-president of product and software, chatted with Transform in advance of a recent AI in Business event in San Francisco about developing its AI-powered caddie.

TRANSFORM: How does technology fuel innovation at Arccos?

BROWN: Fundamentally, we are a group of people who love new technology and seeing how we can apply it to golf. We’re always thinking, “Hey, there’s a new tool out. Can that help a golfer improve their game?”

TRANSFORM: How did Arccos get started with AI?

BROWN: We didn’t start out with the mindset of using AI or creating a virtual caddie. The original thought was, “Hey, we’ve got this great hardware concept where we can put sensors on every golf club and use them to track the distance that individuals are hitting each of their clubs.”

So right away we were able to provide users with analytics and insights around parts of their game that they’re struggling with and what they need to work on. But then we had a lot of users asking if we could use that data to also recommend what clubs they should be using.

So we thought, “Why don’t we use AI to create a virtual caddie that can use all your data and environmental data to offer that same kind of personalized advice as a human caddie?”

TRANSFORM: How did you move from idea to implementation?

BROWN: We started researching what tools were out there to help us take that next step and actually create this virtual caddie. We knew we needed a scalable tool that could accommodate the complexity and breadth of the data we were processing. So enter Microsoft Azure.

We’ve been using Microsoft Azure Cosmos DB, Azure Machine Learning and Azure Kubernetes Service, and the thing we love about the system is that it’s just extremely fast, which is really important when a user is out there on the course pinging the AI. You’ve got other golfers in your group and the ones behind you, and no one wants to sit around waiting for your virtual caddie to respond.

TRANSFORM: What were some of the initial challenges you faced in using AI?

BROWN: Probably the biggest hurdle right off the bat — because we are so data rich — was to figure out how to keep things simple. Initially you think, “Hey, I want to know how much it rained yesterday because the grass will be wet and the ball won’t roll as much,” and then you have to say, “No, stop. Don’t grab everything just yet. Let’s do something that works first and really shows value to our customers. Then we can build on that success.”

Ideally, we wanted our virtual caddie to be able to give you a recommendation regardless of where you are on the course, which is what we have today. But in our initial version we decided not to use any environmental data and just use the shot data from Arccos users and only give you a recommendation on how to play a hole from the tee. So if you slice your shot and you’re now over in the rough, you couldn’t follow that strategy anymore. But it was a good starting point, and users thought it was awesome.

The next update took environmental data like wind and elevation into account. Then Arccos Caddie 2.0, which we released with Microsoft earlier this year, is now able to recalibrate its recommendations on the fly and give you a new strategy regardless of where you are on the hole.

TRANSFORM: Have the benefits from AI accrued over time?

BROWN: You do get more transformative benefits the longer you use AI. When our virtual caddie is deciding what club to recommend, it’s looking at your data, and data of every other user like you. By now we’ve captured over 100 million shots from our users, we’ve mapped 40,000 golf courses and we have more than 1 billion geotagged data points on those greens.

And as we get more and more players using the Arccos ecosystem and playing on the course you’re playing on right now, our recommendations continue to get better and better, and that would be the case even if we didn’t continue to improve our models. And the proof is in the pudding; Arccos users are improving by 3.79 strokes in their first year of using our virtual caddie.

TRANSFORM: What advice would you offer to companies just getting started in AI?

BROWN: Really think through whether your use of AI is a novelty or something that’s truly beneficial to your users. Some people are using AI in ways that aren’t really adding much value. It’s really something neat that allows you to say, “Hey, I’m using AI” but you probably could have just used a complex algorithm.

Top photo: Arccos Golf SVP Jack Brown demonstrates Arccos Caddie at Conversations on AI, a Microsoft event in San Francisco.

Related:

  • Read more about how customers including Arccos are using AI.
  • Learn more about Microsoft’s AI tools for businesses.

Tomorrow’s innovators: First class graduates from Global Innovation Exchange

Archisa Guharoy and other graduates watch a video during the graduation ceremony.

“When I measure the value of education, I look at those ‘aha’ moments. I had a lot of those moments here. I’ve been really happy with the program.”

GIX trains students to think globally and ethically in an era of rapid change and increasingly shorter business and technology cycles, says Vikram Jandhyala, GIX’s co-executive director and vice president for innovation strategy at the University of Washington.

“How can we build a future set of innovators who will create products and services and be part of organizations that can navigate this changing landscape and lead innovation?” Jandhyala asks. “That is the premise. It’s not business as usual.”

Most of the 10 team projects of GIX’s inaugural class were sponsored by the institute’s industry partners, which include Microsoft, Boeing, T-Mobile, AT&T and Chinese technology company Baidu. Company leaders pitched loosely defined projects to GIX students, then mentored the teams as they developed their projects, leveraging the companies’ technologies.

That approach to learning makes GIX unique, says Ranveer Chandra, chief scientist for Microsoft Azure Global and an advisor on the chicken-monitoring project, named Cluck AI.

“The students are able to take the latest research from industry, build on top of it and show what can be achieved,” Chandra says.

Students (from left) Xu Yan, David “Davo” Franco, Padraic Casserly and Ibtasam “Ibi” Sharif present their Cluck AI solution, which uses machine learning to identify when farm chickens are in distress.

“This kind of an industry-academic collaboration, where industry’s not just handing off something to students but providing them with cutting-edge research and working closely with them, is something I haven’t seen before as part of a curriculum.”

The Cluck AI team worked with Microsoft engineers to use machine learning to identify when chickens on a farm are in distress — when a predator is nearby, for example, or when they are overheated. A microphone captures the animals’ sounds and sends them to Azure storage containers. When an anomaly is detected, the audio data is pushed to a dashboard, along with an image of the chicken in apparent distress. The farmer, who may not be onsite to monitor the livestock at all times, gets an email notification and can assess what’s wrong.
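The detection step described above can be illustrated with a toy version. This is a rough sketch, not the team’s actual model (which uses trained machine learning on Azure); the function names, thresholds, and the simple energy-outlier heuristic are all assumptions made for illustration:

```python
import numpy as np

def frame_energies(samples: np.ndarray, frame_len: int = 1024) -> np.ndarray:
    """Split an audio signal into fixed-size frames and return RMS energy per frame."""
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    return np.sqrt((frames ** 2).mean(axis=1))

def detect_distress(samples: np.ndarray, ratio: float = 5.0) -> bool:
    """Flag audio whose loudest frame is an outlier relative to typical frames.

    A real deployment would run a trained acoustic model; this stand-in just
    compares the loudest frame's energy against the median frame energy.
    """
    energies = frame_energies(samples)
    return bool(energies.max() > ratio * np.median(energies))

rng = np.random.default_rng(0)
calm = rng.normal(0, 0.01, 48_000)   # one second of quiet coop noise at 48 kHz
print(detect_distress(calm))         # False

burst = calm.copy()
burst[20_000:21_024] += 0.5          # a sudden loud event, e.g. alarm calls
print(detect_distress(burst))        # True
```

In the real system this check would run against audio pulled from Azure storage, with a positive result triggering the dashboard push and email notification described above.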

“It’s kind of like a baby monitor for poultry farmers,” says team member Padraic Casserly, 32.

The solution uses technology developed for Microsoft’s FarmBeats project, an artificial intelligence and Internet of Things platform that harnesses data to increase farm productivity and cut costs. The four-member GIX team launched the project after Chandra, then the principal researcher behind FarmBeats, suggested that audio data could have untapped potential in farming.

The team decided to focus on chickens, interviewing farmers and even buying two chickens of their own — named Margarita and Daisy — to record them. The students hope to develop the solution further and are looking into possible funding for a start-up.


Podcast: Soundscaping the world with Amos Miller

Product Strategist Amos Miller

Episode 54, December 12, 2018

Amos Miller is a product strategist on the Microsoft Research NeXT Enable team, and he’s played a pivotal role in bringing some of MSR’s most innovative research to users with disabilities. He also happens to be blind, so he can appreciate, perhaps in ways others can’t, the value of the technologies he works on, like Soundscape, an app which enhances mobility independence through audio and sound.

On today’s podcast, Amos Miller answers burning questions like how do you make a microwave accessible, what’s the cocktail party effect, and how do you hear a landmark? He also talks about how researchers are exploring the untapped potential of 3D audio in virtual and augmented reality applications, and explains how, in the end, his work is not so much about making technology more accessible, but using technology to make life more accessible.

Episode Transcript

Amos Miller: Until you are out there in the wind, in the rain, with the people, experiencing, or at least trying to get a sense for the kind of experience they’re going through, you’ll never understand the context in which your technology is going to be used. It’s not something you can imagine, or glean from secondary data, or even from video or anything. Until you are there, seeing how they grapple with issues that they are dealing with, it’s almost impossible to really understand that context.

(music plays)

Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting-edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.

Host: Amos Miller is a product strategist on the Microsoft Research NeXT Enable team, and he’s played a pivotal role in bringing some of MSR’s most innovative research to users with disabilities. He also happens to be blind, so he can appreciate, perhaps in ways others can’t, the value of the technologies he works on, like Soundscape, an app which enhances mobility independence through audio and sound.

On today’s podcast, Amos Miller answers burning questions like how do you make a microwave accessible, what’s the cocktail party effect, and how do you hear a landmark? He also talks about how researchers are exploring the untapped potential of 3D audio in virtual and augmented reality applications, and explains how, in the end, his work is not so much about making technology more accessible, but using technology to make life more accessible. That and much more on this episode of the Microsoft Research Podcast.

Host: Amos Miller, welcome to the podcast.

Amos Miller: Thank you. It’s great to be here.

Host: You are unique in the Microsoft Research ecosystem. Your work is mission-driven. Your personal life strongly informs your professional life and, we’ll get more specific in a bit. But for starters, in broad strokes, tell us what gets you up in the morning. Why do you do what you do?

Amos Miller: I’ve always been passionate about technology from a very young age. But, really, in the way that it impacts people’s lives. And it’s not a mission that I necessarily knew about when I went through my career and experiences with technology. But when I look back, I see that those are the areas where I could see that a person feels differently about themselves or about the environment as a result of their interaction with that technology. That’s where I thought okay, that is having meaning to this person. And I have this huge, wonderful opportunity to do what I do in Microsoft Research to actually have turned that passion into my day job, which is very… I feel extremely fortunate with that. And I sometimes have to pinch myself to see that it’s not a dream.

Host: Well, tell us a little bit about your background and how that plays into what you are doing here.

Amos Miller: I’m very much a person that grew up in the technology world. I also moved countries a number of times over my career and my life. I grew up in Israel. I spent many years in the UK, in London. I spent a few other years in Asia, in Singapore, and now I’m here, so all of these aspects of my life have been very important to me. I also happen to be blind. I suffer from a genetic eye condition called retinitis pigmentosa. It was diagnosed when I was five and I gradually lost my sight. I started university with good enough sight to manage, and finished university with a service dog and any kind of technology I could find to help me read the whiteboard, to help me read the text on the computer. And I’d say by the age of 30, I totally stopped using my sight. And that’s when I really started living life as a fully blind person.

Host: Let’s talk about your job for a second. You are a product strategist at Microsoft Research, so how would you describe what you do?

Amos Miller: So, I work in a part of the organization at Microsoft Research that looks at really transferring technology ideas into impact. Into a way that they impact business, impact people. A good idea will only have an impact when it’s applied in the right way, in the right environment, so that the social, the business, the technological context in which it operates is going to make it thrive. Otherwise it doesn’t matter how good it is, it’s not going to have an impact.

Host: Right. So, let’s circle over to this previous role you had which was in Microsoft’s Digital Advisory program. And I bring it up in as much as it speaks to how often our previous work can inform our current work, and you referred to that time as your “customer-facing life.” How does it inform your role as a strategist today?

Amos Miller: What always energizes me is when I see and observe the meaning and the impact that technology can really have for people. And I don’t say it lightly. Until you are out there in the wind, in the rain, with the people, experiencing, or at least trying to get a sense for the kind of experience they are going through, you’ll never understand the context in which your technology is going to be used. It’s not something you can imagine, or glean from secondary data, or even from video or anything. Until you are there, seeing how they grapple with the issues that they are dealing with, it’s almost impossible to really understand that context. And the work that I’ve done in, actually, my first nine years in Microsoft, I worked in a customer-facing part of the business, in the Strategic Advisory Services, today known as the Digital Advisory Services. It’s work that we do with our largest customers around the world to really help them figure out how they can transform their own businesses and leverage advancements in technology.

Host: Right. So now, as you are working in Microsoft Research, as a product strategist, how does that transfer to what you do today?

Amos Miller: First of all, I want to introduce, for a moment, the team that I work with, which is the Enable team in Microsoft Research. And the Enable team is looking at technological innovations, especially with disabilities in mind. In our case, our two primary groups are people with ALS and people who are blind. As a product strategist, my role is to work across the research, engineering, marketing and our customer segment and really figure out and understand how we can harness what we have from a technology perspective and, as an organization, to maximize and have that impact that we aspire to have with that community. And that takes a great deal of – again, going back to my earlier point – spending time with that community, going out there and spending time, in my case, with other people who are blind because I only know my own experience. I don’t have everybody else’s experience. The only way for me to learn about that is to be out there. And in our team, every developer goes out there to spend time with end users because that’s the only way you can really get under the covers and understand what’s going on.

Host: Right.

(music plays)

Host: So, the website says you drive a research program that “seeks to understand and invent accessibility in a world” – this is the fun part – “where AI agents and mixed reality are the primary forms of interaction.” It sounds kind of sci-fi to me…

Amos Miller: A little bit. Let me unpack that a little bit. When we traditionally think about accessibility, we think about, how do you make something accessible? So how do you make a microwave more accessible? Well, there isn’t anything inherently inaccessible in putting a piece of pizza and warming it up in the microwave. The only reason it’s inaccessible is because the microwave was designed in an inaccessible way. It could have been accessible from the beginning.

Host: Sure.

Amos Miller: But the world we are moving to is, it’s not about me operating the microwave, it’s not about the accessibility of the microwave, it’s about me preparing dinner for my family. That’s the experience that I’m in. And there’s a bunch of technologies that support that experience. And that experience is what I am seeking to make an accessible and inclusive experience.

Host: Okay.

Amos Miller: That means that we are no longer talking about the microwave, we are talking about a set of interactions that involve people, that involve technology, that involve physical things in the environment. It’s not about making the technology accessible, it’s about using technology to make life more accessible, whether you are going for a walk with a friend, whether you are going to see a movie with a friend, whether it’s sitting in a meeting and brainstorming a storyboard for a video. All of these are experiences, and the goal is, how do you make those experiences accessible experiences? That kind of gets you thinking about accessibility in a very different way, where your interaction is with the person that you are sitting in front of. The technology is just there in support of that interaction.

Host: Right. As I was researching this interview, I found myself thinking of the various solutions – maybe the “technical guide dog” mentality – like, let’s replace all these things that people have traditionally used for independence with technology. And as the technology enters that ecosystem, some people might think the aim is to replace those things, but I don’t think that’s the point of what’s going on here. Am I right?

Amos Miller: That’s right. There is a tendency, when you come at a problem with a technology solution, to look at what you are currently doing and replace that with something that’s automatic. Right? Oh, you are using a guide dog? How can I replace that guide dog and give you a robot? So, I work on technology that enhances mobility independence through audio and sound, which we’ll talk about in a minute.

Host: Right.

Amos Miller: But often people ask me, how would that work for people who can’t hear? And the natural inclination to them is to say, oh, okay, well you’ll have to deliver the information in a different way. The thing is that people get a sense of their space and their surroundings using the senses that they have. To me, the question is not, how do we shortcut that? It’s how do they sense their space today? They do. They don’t sit there feeling completely disconnected. And if you are going to intervene in that, you better be consistent with how they’re experiencing it today.

Host: Yeah, and that leads me right into the next question because you and I talked earlier about the fundamental role that design plays in the success of human computer interaction. And I’m really eager to have you weigh in on the topic. Let’s frame this broadly in terms of assumptions. And that’s kind of what you were just referring to.

Amos Miller: Yeah.

Host: You know, if I’m looking at you and I think, well my solution to how you interact with the world with technology would be Braille, that’s an assumption. So, I’m just going to give you free reign here. Tell us what you want us to know about this from your perspective.

Amos Miller: We all make assumptions about other people’s experience of life. You are referring to Bill Buxton, who was on your podcast a few weeks ago.

Host: Right.

Amos Miller: And he’s actually been a very close friend and mentor throughout the work that we are doing on Soundscape, which we’ll talk about in a minute. And he’s really brought to our attention that what we’ve done, of going out there and experiencing the real situation that people are experiencing, is about empathy and it’s about trying to understand and probe ideas that challenge your assumptions about what effect they will have. It’s really about seeing, observing and understanding their experience in that particular situation, and then maybe applying, from your learning, some form of intervention into that experience and observing how it affects that experience. It doesn’t have to be a complete piece of software or technology, it’s just an intervention. It can be completely low-fi. That helps you to start expanding your understanding. And you don’t have to do it with 100 people. Do it with two… three people. You will discover a whole new world you didn’t know about. I’m sorry, but you don’t need 200 data points to support that experience, you’ve just seen it. And you can build on that. So, can you enhance that, in any way, to give them an even richer awareness of their surroundings? And those are the kinds of questions that, by taking design through that very experiential lens, have led us to the work we are actually doing on Soundscape, which is the technology we’ve been developing over the last few years, to really see how far we can take this notion of how people perceive the world and how you can enhance that perception.

(music plays)

Host: Well, let’s talk about 3D sound and an exciting launch earlier this year in the form of Microsoft’s Soundscape. This is such a cool technology with so many angles to talk about. First, just give our listeners an overview of Soundscape. What is it, who is it for, how does it work, how do people experience it?

Amos Miller: Soundscape is a technology that we developed in collaboration with Guide Dogs, certainly in the early stages, and still do. And the idea is very much using audio that’s played in 3D. Using a stereo headset, you can hear the landmarks that are around you and you can, thereby, really enrich your awareness of your surroundings, of what’s where in a very natural, easy way. And that really helps you feel more independent, more confident, to explore the world beyond what you know.

Host: How do you hear a landmark?

Amos Miller: How do you hear a landmark? So, for example, if you are standing and Starbucks is in front of you and to the right, we will say the word Starbucks, but we won’t say that it’s in front of you and to the right; it will sound like it is over there, where Starbucks is.

Host: Oh.

Amos Miller: OK? And that’s generated using what’s technically called a head-related transfer function, applied to synthetic binaural audio. So, it’s work that actually was developed in Microsoft Research, over a number of years, by Ivan Tashev and his team. And effectively, you can generate sound to make it sound like it’s not in between your ears. You can hear it as though it’s out in the space around you. It’s really quite amazing. And we also use non-speech cues. For example, one of the ideas that we built into Soundscape is this notion of a virtual audio beacon. Not to be confused with Bluetooth beacons! It’s completely virtual. But let’s suppose that you are standing on a street corner and you are heading to a restaurant that’s a block and a half away. What you can do with Soundscape is play an audio beacon that will sound like it’s coming from that restaurant, so no matter which way you’re standing, which way you’re heading, you can always hear that “click-click” sound so you know exactly where that restaurant is. You can see it with your ears.

Host: How do you do that? How do you place a beacon someplace, technically?

Amos Miller: Binaural audio is when you have a slightly different sound in each ear, which tricks the brain into having a sense that the sound is three-dimensional. It’s exactly the same way that 3D images work. Audio works almost the same. If Ivan were here, he’d say it’s not exactly the same, but by generating a slightly different soundwave in each ear, you’re able to make a sound seem like it’s coming from a specific direction. By playing it in each ear slightly differently, it will actually sound like it’s coming from in front of you and to the right. OK? Now how do we know where to place that beacon?
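[The interaural timing trick Miller describes can be sketched in code. This is an illustrative stand-in, not Soundscape’s actual renderer: the head radius, attenuation factor, and function names are assumptions, and a real system applies full frequency-dependent HRTF filtering rather than a plain delay.]

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.0875    # m, a common average used in spatial-audio models

def itd_seconds(azimuth_deg: float) -> float:
    """Woodworth's approximation of the interaural time difference for a
    source at the given azimuth (0 = straight ahead, positive = to the right)."""
    theta = np.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + np.sin(theta))

def pan_binaural(mono: np.ndarray, sample_rate: int, azimuth_deg: float) -> np.ndarray:
    """Crude binaural panning: delay and slightly attenuate the far ear.

    Real HRTF rendering also filters each ear frequency-dependently;
    this sketch reproduces only the time and level differences.
    """
    delay = int(np.round(abs(itd_seconds(azimuth_deg)) * sample_rate))
    delayed = np.concatenate([np.zeros(delay), mono])[: len(mono)]
    near, far = mono, 0.7 * delayed
    # Positive azimuth means the source is to the right, so the left ear is far.
    left, right = (far, near) if azimuth_deg >= 0 else (near, far)
    return np.stack([left, right], axis=1)

sr = 44_100
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
stereo = pan_binaural(tone, sr, azimuth_deg=90)  # render the tone hard right
print(stereo.shape)                              # (44100, 2)
```

Even this crude delay-plus-attenuation version produces a noticeable sense of direction on headphones, which is the “slightly different sound in each ear” effect described above.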

Host: Right.

Amos Miller: At present, we – it’s largely designed to be used outdoors – so, we use GPS, so we know where you are standing. We know where that restaurant is, so we have two coordinates to work with. We also estimate which way you are facing. So, if you were facing the restaurant, we would want to play that beacon right in front of you. If you were standing at 90 degrees to the restaurant, we’d want to make that beacon sound like it’s coming not only from your right ear, but 100 meters away to your right.
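[The placement geometry described here, two GPS coordinates plus an estimated heading, reduces to a bearing calculation. A minimal sketch with hypothetical function names; Soundscape’s actual implementation is not public:]

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees (0 = north)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def beacon_azimuth(user_lat, user_lon, heading_deg, tgt_lat, tgt_lon):
    """Angle at which to render the beacon relative to the listener's facing.

    0 = straight ahead, 90 = due right, 180 = behind, 270 = due left.
    """
    absolute = bearing_deg(user_lat, user_lon, tgt_lat, tgt_lon)
    return (absolute - heading_deg) % 360

# Listener at the origin facing north; a beacon due east renders at 90 degrees.
print(round(beacon_azimuth(0.0, 0.0, 0.0, 0.0, 0.001)))   # 90
# Turn to face east and the same beacon renders straight ahead.
print(round(beacon_azimuth(0.0, 0.0, 90.0, 0.0, 0.001)))  # 0
```

Feeding this relative azimuth (plus the distance between the two points) into a binaural renderer is what makes the beacon track the restaurant as the listener turns and walks.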

Host: Unbelievable…

Amos Miller: Yeah? And so, taking all of those sensory inputs and taking the information from the map, the GPS location, the direction, we reproduce the sound image in your stereo headset so that you can hear the direction of the sound and where the thing is. And the most amazing thing is, this is all done in real time, completely dynamic. So, as you walk down the street, that restaurant may sound in front of you at 45 degrees to your right, and as you progress, you’ll hear it getting closer and closer and further and further to your right. And if you overshoot it, it’ll start to sound behind you a little bit, yeah? Now, why is this so important? Because I’m not going to the restaurant on my own. I’m there with my kid or with my wife, or with my friend. And, if I were to hold a phone with the GPS instructions and all of that, I can’t hold a conversation with that person at the same time because I’m so engaged with the technology. And we talked earlier about, how do you get technology to be in the background? That beacon sound is totally in the background. You don’t have to think about it, you don’t have to attend to it mentally, it’s just there. So, you know where the restaurant is, and you continue to have a conversation with the person you are with, or you can daydream, or you can read your emails, listen to a podcast, and all of that happens at the same time. Because it’s played in 3D space, because it’s non-intrusive, you minimize the use of language. And all of these subtle aspects are absolutely crucial for this kind of technology to be relevant to this situation. You’re not sitting in front of the computer and it’s the only thing you are doing. You are outdoors. There’s a ton of things happening all the time that you have to deal with. You can’t expect the person to disassociate themselves from all of that. You know, Soundscape is one way of addressing this very, very interesting and important question.
Throughout history, technology has always changed the way that we do things. But I think that we’re starting to see that, as technology developers, we really have to be much more mindful, down to the subtleties of how we design something, about what the relationship is between the technology and the person in that situation. How can a technology do exactly what it has always done, but do so in a way that makes the person feel empowered and develop a new skill? Great runners learn to feel their heartbeat. But if they have a heart monitor, they’ll stop feeling that heartbeat because the device on their wrist tells them what it is. Well, that’s only because that’s how it was designed. If the heart monitor, instead of telling you, you are at, I don’t know, 150, it’d say, what do you think you’re at? And you’d say, oh, I’m at 140, and it’ll say, oh, you are actually at 150. You will have learned something new from that. It’s exactly the same function, but you have developed yourself as a result of that interaction. And I think that that’s the kind of opportunity that we need to start looking for.

Host: I want to circle back to this 3D audio and the technology behind it, and something that you referred to as “the cocktail party effect.” Can you explain that a little bit and how Microsoft Research is sort of leading the way here?

Amos Miller: The cocktail party effect is an effect, in the world of psycho-acoustics, that is very simple. If you imagine you’re sitting around a table in a cocktail party having a very exciting conversation with somebody, and there are lots of other similar conversations happening around you at the same time, because all of those conversations are happening in 3D space, you are actually able to hear all of those conversations even though you are attending just to yours. You are listening and you can understand and engage in your conversation, but if your name came up in any of those other conversations, you’ll immediately turn your head and say, hey guys, what are you talking about there? And that’s an incredible capability of the brain to manage a very rich set of inputs in the auditory space that is very much under-utilized today in the technology space. We always feel that if we need to convert something into audio, it’s got to be sequenced, because we can only hear one thing at a time. When it’s in 3D, that’s no longer the case. And that’s a huge opportunity. We play a lot of that in VR and augmented reality and we spend a lot of time on the visual aspect of virtual reality and really pushing the envelope on how far we can take the use of immersive experiences in objects in all directions. But the same is available with audio. Even more with audio because your eyes are no longer engaged. Audio is in 360. If we block our ears for a moment, all of a sudden, our awareness level drops. But we are so unaware of the power of audio because vision just takes over everything. And I think the work that we have done, both in the acoustic work on 3D audio, and the application, especially in the disability space where we placed the constraints on the team – there is no vision, now let’s figure it out – and that leads to new frontiers of discovery and innovation in this space that I think could be applicable and would be applicable in many other spaces. 
And that, you know, that heads-up experience when you are out and about in the streets, not focused on the screen, but engaged in your surroundings. And that’s a perfect situation where audio has huge advantages that we can look at.

(music plays)

Host: I ask each of my guests some version of the question, what keeps you up at night? Because I’m interested in how researchers are addressing unintended consequences of the work they’re doing. Is there anything that concerns you, Amos? Anything that keeps you up at night?

Amos Miller: I think things keep me up at night because they are so interesting and yet unsolved. You know, we talked a bit about, how do you really express and portray the physical space around you in ways that utilize your other senses and really maximize the ability of the brain to make sense of places without vision? And I really think that, with Soundscape, we’ve only started to scratch the surface of that question. Over half of the brain is devoted to perception. And I think that, when we find ways to really engage, even further engage that incredible human capability, we will discover a whole new frontier of machine and human interaction in ways that we don’t understand today.

Host: You said you arrived at Microsoft Research from “left field.” What’s your story on how you came to be working on research in accessibility at Microsoft Research?

Amos Miller: I started life as a developer, and I did a business degree and joined the Strategic Advisory Services in Microsoft Consulting in the UK. And I think it was a very special moment in Microsoft, over the last few years, when we really started to understand the meaning of impacting every person on the planet with technology and seeing that as our mission. And that led to a series of conversations that opened an opportunity for us to actually get behind that statement, and we basically joined Microsoft Research through that mission, through the work that we’re doing on Soundscape, and because we already had very strong relationships, thanks to some wonderful people in the company, here in Microsoft Research and in other parts of the company.

Host: Before we close, Bill Buxton asked me to ask you about the kayak regatta that you organized.

Amos Miller: Uh huh. Oh, we didn’t talk about that.

Host: Just tell that story quickly because I do have one question I want to wrap up with before we go.

Amos Miller: Okay. Well we talked about Soundscape as a technology that really enables you to hear signals in 3D around you. And that was largely designed to be used in the street, right? And then we thought, what would happen if we placed that audio beacon on a lake? So, we got a bunch of people during the summer hackathon and said, okay, well let’s try it out. So, we organized an event on Lake Sammamish. We hacked Soundscape to work on the lake and placed some virtual audio beacons around the lake and invited a group of people who are blind to come and kayak with us and see how they enjoy it. And they absolutely loved it. And I think that was a real eye-opener for us. You have to understand the difference here, you know? Could they kayak before? Sure, no problem, because a sighted person would be with them and tell them, okay, now you go straight, now you row left… But I’m sorry, that’s a very boring experience. You are not in control, you are not independent, you are just doing the work. And by being able to hear where those beacons are, you are truly in the driving seat. And that is a sense of independence that we’ve not really seen to that extent before we did this event.
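The idea behind the beacons Amos describes is that the sound's apparent direction tracks the bearing from the listener to the beacon, relative to which way they are facing. As a rough illustration (this is a minimal sketch, not Soundscape's actual implementation; all function names here are hypothetical), one could compute the great-circle bearing to a beacon and map it to a stereo pan cue:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def stereo_pan(heading_deg, beacon_bearing_deg):
    """Map the beacon's direction relative to the listener's heading to a
    pan value in [-1, 1]: -1 = hard left, 0 = straight ahead, +1 = hard right."""
    # Wrap the relative angle into [-180, 180)
    rel = (beacon_bearing_deg - heading_deg + 180) % 360 - 180
    return math.sin(math.radians(rel))  # smooth left/right cue

# A kayaker on Lake Sammamish (coordinates are illustrative) heading due
# north, with a beacon directly to the east, hears the cue on the right:
pan = stereo_pan(0.0, bearing_deg(47.60, -122.08, 47.60, -122.07))
```

A real system would render the cue binaurally with head-related transfer functions rather than simple panning, but the core loop is the same: position and heading in, a spatialized direction cue out.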

Host: I like how you called it an eye-opening event!

Amos Miller: It was!

Host: There are so many metaphors about vision that we just sort of take for granted, right?

Amos Miller: Maybe it’s because I have prior sight, maybe not, but I, first of all, I use those metaphors all the time, and I also feel, you know, I could close my eyes and feel that my eyes are closed and open them and feel that they’re open. And I definitely take everything in in a very different way, even though the eyes don’t actually do the scientific aspect of what they’re designed to do.

Host: As we close, I always ask my guests to offer some parting advice to our listeners, whether that be in the form of inspiration for future research, or challenges that remain to be solved, or personal advice on next steps along the career path, whether you have a guide dog with you or Soundscape… What would you say to your 25-year-old self if you were just getting started in this arena?

Amos Miller: I honestly would say, get real life experience. Especially in the areas that you are passionate about. Be passionate about them with even more energy and see the work that you do in the context of what you are passionate about. Because you can only really apply your personal experiences to what you do. It's so great here, in Microsoft Research, to see the interns coming here in the summer. And the creativity and passion, and new perspectives that they bring to our work here. And there's a little bit of a side of me that worries they'll jump into the job before they've gone out and explored the world. And I think it's important that they find a way to do something that gives them that meaningful context to the work that they'll be doing here.

(music plays)

Host: Amos Miller, thank you for joining us today. It’s been – can I say it? – an eye-opening experience!

Amos Miller: Sure. My pleasure. Thanks so much for having me.

Host: To learn more about Amos Miller and the latest innovations in audio, sound and accessibility technology, visit Microsoft.com/research