From Microsoft Ignite in Amsterdam: New Microsoft 365 enhancements to reduce costs, increase security and boost productivity

Today, we’re announcing several new Microsoft 365 enhancements to help IT reduce costs, increase security, and boost employee productivity.

Here’s a quick summary:

  • Windows Virtual Desktop is now in public preview, providing the best virtualized Microsoft 365 experience across devices.
  • Microsoft Defender Advanced Threat Protection (ATP) now supports Mac, extending Microsoft 365 advanced endpoint security across platforms.
  • The new Microsoft Threat and Vulnerability Management (TVM) capability in Microsoft Defender ATP will help detect, assess, and prioritize threats across endpoints.
  • Office 365 ProPlus will now include the Microsoft Teams app, enabling a new way to work.
  • We’re reducing the time it takes to apply Windows 10 feature updates, making it easier to deploy and service Windows 10.
  • We’re enhancing Configuration Manager and Microsoft Intune with new insights and deployment options to make it easier to manage your devices across platforms.
  • Microsoft 365 admin center is now generally available.


Virtualize Windows 10 and Office 365 on Azure with Windows Virtual Desktop—now in public preview

Today, we’re happy to announce the public preview of Windows Virtual Desktop. Windows Virtual Desktop is the only service that delivers simplified management, multi-session Windows 10, optimizations for Office 365 ProPlus, and support for Remote Desktop Services (RDS) environments in a shared public cloud. With Windows Virtual Desktop, you can deploy and scale your Windows desktops and apps on Azure in minutes, with built-in security and compliance.

For more information about Windows Virtual Desktop or how to get started with the public preview, read the full announcement and watch the new Mechanics video.

Address risks and protect more of your Microsoft 365 devices and endpoints with Microsoft Defender ATP—now in public preview

New today, we’re extending support for our Microsoft Defender threat protection platform to Mac. And because we’re extending support beyond the Windows ecosystem, we’re renaming the platform from Windows Defender Advanced Threat Protection (ATP) to Microsoft Defender Advanced Threat Protection (ATP). Starting today, Microsoft Defender ATP customers can sign up for a public preview. For more information, visit our Tech Community blog.

We’re also announcing Threat and Vulnerability Management (TVM), a new capability within Microsoft Defender ATP, designed to empower security teams to discover, prioritize, and remediate known vulnerabilities and misconfigurations exploited by threat actors. Using TVM, customers can evaluate the risk-level of threats and vulnerabilities and prioritize remediation based on signals from Microsoft Defender ATP. TVM will be available as a public preview for Microsoft Defender ATP customers within the next month. Learn more about it in our Tech Community blog.

Today’s security announcements are an important milestone in our Microsoft 365 endpoint security journey. For more details, check out Rob Lefferts’s post on the Microsoft Security blog.

Enable a new way to work with Office 365 ProPlus and Teams—starting in March

Starting in March, new installs of Office 365 ProPlus will include the Teams app by default. As a “hub for teamwork,” Teams combines chat, voice, video, files, meetings, and calls into a single, integrated experience.

In addition, the default installation for ProPlus will now be 64-bit, enabling better reliability and more effective use of newer PC hardware. If you have earlier 32-bit installs, a soon-to-be-released in-place upgrade from 32-bit to 64-bit Office 365 ProPlus will let you upgrade the Office apps without uninstalling and reinstalling.

Reducing the time required for Windows 10 feature updates—starting with version 1709

We’ve made important changes to the Windows update process. Starting with Windows 10 version 1709, devices are updating up to 63 percent faster. Additionally, with the release of Windows 10 version 1703, we’ve seen a 20 percent reduction in operating system and driver stability issues.

Simplify and modernize management with Configuration Manager and Intune

Configuration Manager current branch offers CMPivot for real-time queries and updates to management insights that help with co-management readiness. What’s more, you can now take advantage of new deployment options, including phased deployments and configuring known-folder mapping to OneDrive.

Mobile Device Management (MDM) Security Baselines are now in preview in Intune. These baselines are a group of Microsoft-recommended configuration settings that increase your security posture and operational efficiency and reduce costs. We’re also announcing several new Intune capabilities for unified endpoint management across devices and platforms.

Check out What’s new in Microsoft Intune and Configuration Manager for more detailed information on our broad unified endpoint management investments.

Manage Microsoft 365 with a new admin center—rolling out now

We’re also announcing the that the new Microsoft 365 admin center, previously in preview, will become the default experience for all Microsoft 365 and Office 365 admins. Admin.microsoft.com is your single entry point for managing your Microsoft 365 services and includes new features like guided setup experiences, improved groups management, Multi-Factor Authentication (MFA) for admins, and more.

For more information on this new release, check out the detailed post on the Microsoft 365 Tech Community blog.

More at Microsoft Ignite: The Tour in Amsterdam

We’re sharing more on each of these announcements this week at Microsoft Ignite: The Tour in Amsterdam. I’ll be there to co-present a session with Jeremy Chapman on “Simplifying IT with Windows 10 and Office 365 ProPlus.” You’ll have a chance to learn more from many of my colleagues in the teamwork, modern desktop, and security sessions. I hope to see you there!

Editor’s note 3/21/2019:
Blog post was updated to correct information regarding Configuration Manager current branch.


With a ‘hello,’ Microsoft and UW demonstrate first fully automated DNA data storage

Researchers from Microsoft and the University of Washington have demonstrated the first fully automated system to store and retrieve data in manufactured DNA — a key step in moving the technology out of the research lab and into commercial datacenters.

In a simple proof-of-concept test, the team successfully encoded the word “hello” in snippets of fabricated DNA and converted it back to digital data using a fully automated end-to-end system, which is described in a new paper published March 21 in Nature Scientific Reports.

DNA can store digital information in a space that is orders of magnitude smaller than datacenters use today. It’s one promising solution for storing the exploding amount of data the world generates each day, from business records and cute animal videos to medical scans and images from outer space.

Microsoft is exploring ways to close a looming gap between the amount of data we are producing that needs to be preserved and our capacity to store it. That includes developing algorithms and molecular computing technologies to encode and retrieve data in fabricated DNA, which could fit all the information currently stored in a warehouse-sized datacenter into a space roughly the size of a few board game dice.

“Our ultimate goal is to put a system into production that, to the end user, looks very much like any other cloud storage service — bits are sent to a datacenter and stored there and then they just appear when the customer wants them,” said Microsoft principal researcher Karin Strauss. “To do that, we needed to prove that this is practical from an automation perspective.”

Information is stored in synthetic DNA molecules created in a lab, not DNA from humans or other living things, and can be encrypted before it is sent to the system. While sophisticated machines such as synthesizers and sequencers already perform key parts of the process, many of the intermediate steps until now have required manual labor in the research lab. But that wouldn’t be viable in a commercial setting, said Chris Takahashi, senior research scientist at the UW’s Paul G. Allen School of Computer Science & Engineering.

“You can’t have a bunch of people running around a datacenter with pipettes — it’s too prone to human error, it’s too costly and the footprint would be too large,” Takahashi said.


For the technique to make sense as a commercial storage solution, costs need to decrease for both synthesizing DNA — essentially custom building strands with meaningful sequences — and the sequencing process that extracts the stored information. Trends are moving rapidly in that direction, researchers say.

Automation is another key piece of that puzzle, as it would enable storage at a commercial scale and make it more affordable, Microsoft researchers say.

Under the right conditions, DNA can last much longer than current archival storage technologies, which degrade in a matter of decades. Some DNA has managed to persist in less-than-ideal storage conditions for tens of thousands of years in mammoth tusks and the bones of early humans, and DNA as a format should stay relevant for as long as people are alive.

The automated DNA data storage system uses software developed by the Microsoft and UW team that converts the ones and zeros of digital data into the As, Ts, Cs and Gs that make up the building blocks of DNA. Then it uses inexpensive, largely off-the-shelf lab equipment to flow the necessary liquids and chemicals into a synthesizer that builds manufactured snippets of DNA and to push them into a storage vessel.
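As a concrete illustration of that conversion step, here is a toy version in Python. The team’s real encoding scheme is considerably more sophisticated (it adds error correction, avoids problematic runs of repeated bases, and includes primers for random access), so treat this purely as a sketch of the core two-bits-per-base idea:

```python
# Toy illustration of the bits-to-bases idea: each DNA base can carry two
# bits, so one byte maps to four bases. This is NOT the Microsoft/UW codec,
# just the simplest possible version of the mapping it builds on.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Map each pair of bits in the input to one DNA base."""
    bitstring = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bitstring[i:i + 2]]
                   for i in range(0, len(bitstring), 2))

def decode(strand: str) -> bytes:
    """Invert the mapping: every four bases become one byte."""
    bitstring = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bitstring[i:i + 8], 2)
                 for i in range(0, len(bitstring), 8))

strand = encode(b"hello")
print(strand)                      # prints CGGACGCCCGTACGTACGTT
assert decode(strand) == b"hello"  # round-trips back to "hello"
```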

When the system needs to retrieve the information, it adds other chemicals to properly prepare the DNA and uses microfluidic pumps to push the liquids into other parts of the system that “read” the DNA sequences and convert them back into information a computer can understand. The goal of the project was not to prove how fast or inexpensively the system could work, researchers say, but simply to demonstrate that automation is possible.

One immediate benefit of having an automated DNA storage system is that it frees researchers up to probe deeper questions, instead of spending time searching for bottles of reagents or repetitively squeezing drops of liquids into test tubes.

“Having an automated system to do the repetitive work allows those of us working in the lab to take a higher view and begin to assemble new strategies — to essentially innovate much faster,” said Microsoft researcher Bichlien Nguyen.

The team from the Molecular Information Systems Lab has already demonstrated that it can store cat photographs, great literary works, pop videos and archival recordings in DNA, and retrieve those files without errors in a research setting. To date they’ve been able to store 1 gigabyte of data in DNA, besting their previous world record of 200 MB.

To store data in DNA, algorithms convert the 1s and 0s in digital data to ACTG sequences in DNA. Microsoft and University of Washington researchers stored and retrieved the word “hello” using the first fully automated system for DNA storage.

The researchers have also developed techniques to perform meaningful computation — like searching for and retrieving only images that contain an apple or a green bicycle — using the molecules themselves and without having to convert the files back into a digital format.

“We are definitely seeing a new kind of computer system being born here where you are using molecules to store data and electronics for control and processing. Putting them together holds some really interesting possibilities for the future,” said UW Allen School professor Luis Ceze.

Unlike silicon-based computing systems, DNA-based storage and computing systems have to use liquids to move molecules around. But fluids are inherently different from electrons and require entirely new engineering solutions.

The UW team, in collaboration with Microsoft, is also developing a programmable system that automates lab experiments by harnessing the properties of electricity and water to move droplets around on a grid of electrodes. The full stack of software and hardware, nicknamed “Puddle” and “PurpleDrop,” can mix, separate, heat or cool different liquids and run lab protocols.

The goal is to automate lab experiments that are currently being done by hand or by expensive liquid handling robots — but for a fraction of the cost.

Next steps for the MISL team include integrating the simple end-to-end automated system with technologies such as PurpleDrop and those that enable searching with DNA molecules. The researchers specifically designed the automated system to be modular, allowing it to evolve as new technologies emerge for synthesizing, sequencing or working with DNA.

“What’s great about this system is that if we wanted to replace one of the parts with something new or better or faster, we can just plug that in,” Nguyen said. “It gives us a lot of flexibility for the future.”

Top image: Microsoft and University of Washington researchers have successfully encoded and retrieved the word “hello” using this new system that fully automates DNA storage. It’s a key step in moving the technology out of the lab and into commercial datacenters.

Jennifer Langston writes about Microsoft research and innovation. Follow her on Twitter.


Project Triton and the physics of sound with Microsoft Research’s Dr. Nikunj Raghuvanshi

Episode 68, March 20, 2019

If you’ve ever played video games, you know that for the most part, they look a lot better than they sound. That’s largely due to the fact that audible sound waves are much longer – and a lot more crafty – than visual light waves, and therefore, much more difficult to replicate in simulated environments. But Dr. Nikunj Raghuvanshi, a Senior Researcher in the Interactive Media Group at Microsoft Research, is working to change that by bringing the quality of game audio up to speed with the quality of game video. He wants you to hear how sound really travels – in rooms, around corners, behind walls, out doors – and he’s using computational physics to do it.

Today, Dr. Raghuvanshi talks about the unique challenges of simulating realistic sound on a budget (both money and CPU), explains how classic ideas in concert hall acoustics need a fresh take for complex games like Gears of War, reveals the computational secret sauce you need to deliver the right sound at the right time, and tells us about Project Triton, an acoustic system that models how real sound waves behave in 3-D game environments to make us believe with our ears as well as our eyes.

Final Transcript

Nikunj Raghuvanshi: In a game scene, you will have multiple rooms, you’ll have caves, you’ll have courtyards, you’ll have all sorts of complex geometry and then people love to blow off roofs and poke holes into geometry all over the place. And within that, now sound is streaming all around the space and it’s making its way around geometry. And the question becomes how do you compute even the direct sound? Even the initial sound’s loudness and direction, which are important? How do you find those? Quickly? Because you are on the clock and you have like 60, 100 sources moving around, and you have to compute all of that very quickly.

Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting-edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.

Host: If you’ve ever played video games, you know that for the most part, they look a lot better than they sound. That’s largely due to the fact that audible sound waves are much longer – and a lot more crafty – than visual light waves, and therefore, much more difficult to replicate in simulated environments. But Dr. Nikunj Raghuvanshi, a Senior Researcher in the Interactive Media Group at Microsoft Research, is working to change that by bringing the quality of game audio up to speed with the quality of game video. He wants you to hear how sound really travels – in rooms, around corners, behind walls, out doors – and he’s using computational physics to do it.

Today, Dr. Raghuvanshi talks about the unique challenges of simulating realistic sound on a budget (both money and CPU), explains how classic ideas in concert hall acoustics need a fresh take for complex games like Gears of War, reveals the computational secret sauce you need to deliver the right sound at the right time, and tells us about Project Triton, an acoustic system that models how real sound waves behave in 3-D game environments to make us believe with our ears as well as our eyes. That and much more on this episode of the Microsoft Research Podcast.

Host: Nikunj Raghuvanshi, welcome to the podcast.

Nikunj Raghuvanshi: I’m glad to be here!

Host: You are a senior researcher in MSR’s Interactive Media Group, and you situate your research at the intersection of computational acoustics and graphics. Specifically, you call it “fast computational physics for interactive audio/visual applications.”

Nikunj Raghuvanshi: Yep, that’s a mouthful, right?

Host: It is a mouthful. So, unpack that! How would you describe what you do and why you do it? What gets you up in the morning?

Nikunj Raghuvanshi: Yeah, so my passion is physics. I really like the mixture of computers and physics. So, the way I got into this was, many, many years ago, I picked up this book on C++ and it was describing graphics and stuff. And I didn’t understand half of it, and there was a color plate in there. It took me two days to realize that those were not photographs, they were generated by a machine, and I was like, somebody took a photo of a world that doesn’t exist. So, that is what excites me. I was like, this is amazing. This is as close to magic as you can get. And then the idea was I used to build these little simulations and I was like the exciting thing is you just code up these laws of physics into a machine and you see all this behavior emerge out of it. And you didn’t tell the world to do this or that. It’s just basic Newtonian physics. So, that is computational physics. And when you try to do this for games, the challenge is you have to be super-fast. You have 1/60th of a second to render the next frame and produce the next buffer of audio. Right? So, that’s the fast portion. How do you take all these laws and compute the results fast enough that it can happen at 1/60th of a second, repeatedly? So, that’s where the computer science enters the physics part of it. So, that’s the sort of mixture of things that I like to work in.

Host: You’ve said that light and sound, or video and audio, work together to make gaming, augmented reality, virtual reality, believable. Why are the visual components so much more advanced than the audio? Is it because the audio is the poor relation in this equation, or is it that much harder to do?

Nikunj Raghuvanshi: It is kind of both. Humans are visually dominant creatures, right? Because visuals are what is on our conscious mind and when you describe the world, our language is so visual, right? Even for sound, sometimes we use visual metaphors to describe things. So, that is part of it. And part of it is also that for sound, the physics is in many ways tougher because you have much longer wavelengths and you need to model wave diffraction, wave scattering and all these things to produce a believable simulation. And so, that is the physical aspect of it. And also, there’s a perceptual aspect. Our brain has evolved in a world where both audio/visual cues exist, and our brain is very clever. It goes for the physical aspects of both that give us separate information, unique information. So, visuals give you line-of-sight, high resolution, right? But audio is lower resolution directionally, but it goes around corners. It goes around rooms. That’s why if you put on your headphones and just listen to music at a loud volume, you are a danger to everybody on the street because you have no awareness.

Host: Right.

Nikunj Raghuvanshi: So, audio is the awareness part of it.

Host: That is fascinating because you’re right. What you can see is what is in front of you, but you can hear things that aren’t in front of you.

Nikunj Raghuvanshi: Yeah.

Host: You can’t see behind you, but you can hear behind you.

Nikunj Raghuvanshi: Absolutely, you can hear behind yourself and you can hear around stuff, around corners. You can hear stuff you don’t see, and that’s important for anticipating stuff.

Host: Right.

Nikunj Raghuvanshi: People coming towards you and things like that.

Host: So, there’s all kinds of people here that are working on 3D sound and head-related transfer functions and all that.

Nikunj Raghuvanshi: Yeah, Ivan’s group.

Host: Yeah! How is your work interacting with that?

Nikunj Raghuvanshi: So, that work is about, if I tell you the spatial sound field around your head, how does it translate into a personal experience in your two ears? So, the HRTF modeling is about that aspect. My work with John Snyder is about, how does the sound propagate in the world, right?

Host: Interesting.

Nikunj Raghuvanshi: So, if there is a sound down a hallway, what happens during the time it gets from there up to your head? That’s our work.

Host: I want you to give us a snapshot of the current state-of-the-art in computational acoustics and there’s apparently two main approaches in the field. What are they, and what’s the case for each and where do you land in this spectrum?

Nikunj Raghuvanshi: So, there’s a lot of work in room acoustics where people are thinking about, okay, what makes a concert hall sound great? Can you simulate a concert hall before you build it, so you know how it’s going to sound? And, based on the constraints on those areas, people have used a lot of ray tracing approaches which borrow on a lot of literature in graphics. And for graphics, ray tracing is the main technique, and it works really well, because the idea is you’re using a short wavelength approximation. So, light wavelengths are submicron and if they hit something, they get blocked. But the analogy I like to use is sound is very different, the wavelengths are much bigger. So, you can hold your thumb out in front of you and blot out the sun, but you are going to have a hard time blocking out the sound of thunder with a thumb held out in front of your ear because the waves will just wrap around. And, that’s what motivates our approach which is to actually go back to the physical laws and say, instead of doing the short wave length approximation for sound, we revisit and say, maybe sounds needs the more fundamental wave equation to be solved, to actually model these diffraction effects for us. The usual thinking is that, you know, in games, you are thinking about we want a certain set of perceptual cues. We want walls to occlude sound, we want a small room to reverberate less. We want a large hall to reverberate more. And the thought is, why are we solving this expensive partial differential equation again? Can’t we just find some shortcut to jump straight to the answer instead of going through this long-winded route of physics? And our answer has been that you really have to do all the hard work because there’s a ton of information that’s folded in and what seems easy to us as humans isn’t quite so easy for a computer and and there’s no neat trick to get you straight to the perceptual answer you care about.

(music plays)

Host: Much of the work in audio and acoustic research is focused on indoor sound, where the sound source is within the line of sight and the listener can see what they’re listening to…

Nikunj Raghuvanshi: Um-hum.

Host: …and you mentioned that the concert hall has a rich literature in this field. So, what’s the gap in the literature when we move from the concert hall to the computer, specifically in virtual environments?

Nikunj Raghuvanshi: Yeah, so games and virtual reality, the key demand they have is the scene is not one room, and with time it has become much more difficult. So, a concert hall is terrible if you can’t see the people who are playing the sound, right? So, it allows for a certain set of assumptions that work extremely nicely. The direct sound, which is the initial sound, which is perceptually very critical, just goes in a straight line from source to listener. You know the distance so you can just use a simple formula and you know exactly how loud the initial sound is at the person. But in a game scene, you will have multiple rooms, you’ll have caves, you’ll have courtyards, you’ll have all sorts of complex geometry and then people love to blow off roofs and poke holes into geometry all over the place. And within that, now sound is streaming all around the space and it’s making its way around geometry. And the question becomes, how do you compute even the direct sound? Even the initial sound’s loudness and direction, which are important? How do you find those? Quickly? Because you are on the clock and you have like 60, 100 sources moving around, and you have to compute all of that very quickly. So, that’s the challenge.

Host: All right. So, let’s talk about how you’re addressing it. A recent paper that you’ve published made some waves, sound waves probably. No pun intended… It’s called Parametric Directional Coding for Pre-computed Sound Propagation. Another mouthful. But it’s a great paper and the technology is so cool. Talk about this… research this that you’re doing.

Nikunj Raghuvanshi: Yeah. So, our main idea is, actually, to look at the literature in lighting again and see the kind of path they’d followed to kind of handle this computational challenge of how you do these extensive simulations and still hit that stringent CPU budget in real time. And one of the key ideas is you precompute. You cheat. You just look at the scene and just compute everything you need to compute beforehand, right? Instead of trying to do it on the fly during the game. So, it does introduce the limitation that the scene has to be static. But then you can do these very nice physical computations and you can ensure that the whole thing is robust, it is accurate, it doesn’t suffer from all the sort of corner cases that approximations tend to suffer from, and you have your result. You basically have a giant look-up table. If somebody tells you that the source is over there and the listener is over here, tell me what the loudness of the sound would be. We just say okay, we have this giant table, we’ll just go look it up for you. And that is the main way we bring the CPU usage into control. But it generates a knock-on challenge that now we have this huge table, there’s this huge amount of data that we’ve stored and it’s 6-dimensional. The source can move in 3-dimensions and the listener can move in 3-dimensions. So, we have the giant table which is terabytes or even more of data.

Host: Yeah.

Nikunj Raghuvanshi: And the game’s typical budget is like 100 megabytes. So, the key challenge we’re facing is, how do we fit everything in that? How do we take this data and extract out something salient that people listen to and use that? So, you start with full computation, you start as close to nature as possible and then we’re saying okay, now what would a person hear out of this? Right? Now, let’s do that activity of, instead of doing a shortcut, now let’s think about okay, a person hears the directional sound comes from. If there is a doorway, the sound should come from the doorway. So, we pick out these perceptual parameters that are salient for human perception and then we store those. That’s the crucial way you kind of bring down this enormous data set and do a sort of memory budget that’s feasible.

Host: So, that’s the paper.

Nikunj Raghuvanshi: Um-hum.

Host: And how has it played out in practice, or in project, as it were?

Nikunj Raghuvanshi: So, a little bit of history on this is, we had a paper at SIGGRAPH 2010, me and John Snyder and some academic collaborators, and at that point, we were trying to think of just physical accuracy. So, we took the physical data and we were trying to stay as close to physical reality as possible and we were rendering that. And around 2012, we got to talking with Gears of War, the studio, and we were going through what the budgets would be, how things would be. And we were like we need… this needs to… this is gigabytes, it needs to go to megabytes…

Host: Really?

Nikunj Raghuvanshi: …very quickly. And that’s when we were like, okay, let’s simplify. Like, what are the four, like, most basic things that you really want from an acoustic system? Like, walls should occlude sound and things like that. So, we kind of rewound and came to it from this perceptual viewpoint that I was just describing. Let’s keep only what’s necessary. And that’s how we were able to ship this in 2016 in Gears of War 4, by just rewinding and doing this process.

Host: How is that playing in to, you know… Project Triton is the big project that we’re talking about. How would you describe what that’s about and where it’s going? Is it everything you’ve just described or is there… other aspects to it?

Nikunj Raghuvanshi: Yeah. Project Triton is this idea that you should precompute the wave physics, instead of starting with approximations. Approximate later. That’s one idea of Project Triton. And the second is, if you want to make it feasible for real games and real virtual reality and augmented reality, switch to perceptual parameters. Extract that out of this physical simulation and then you have something feasible. And the path we are on now, which brings me back to the recent paper you mentioned…

Host: Right.

Nikunj Raghuvanshi: …is, in Gears of War, we shipped some set of parameters. We were like, these make a big difference. But one thing we lacked was if the sound is, say, in a different room and you are separated by a doorway, you would hear the right loudness of the sound, but its direction would be wrong. Its direction would be straight through the wall, going from source to listener.

Host: Interesting.

Nikunj Raghuvanshi: And that’s an important spatial cue. It helps you orient yourself when sounds funnel through doorways.

Host: Right.

Nikunj Raghuvanshi: Right? And it’s a cue that sound designers really look for and try to hand-tune to get good ambiances going. So, in the recent 2018 paper, that’s what we fixed. We call this portaling. It’s a made-up word for this effect of sounds going around doorways, but that’s what we’re modeling now.

Host: Is this new stuff? I mean, people have tackled these problems for a long time.

Nikunj Raghuvanshi: Yeah.

Host: Are you people the first ones to come up with this, the portaling and…?

Nikunj Raghuvanshi: I mean, the basic ideas have been around. People know that, perceptually, this is important, and there are approaches to try to tackle this, but I’d say, because we’re using wave physics, this problem becomes much easier because you just have the waves diffract around the edge. With ray tracing you face the difficult problem that you have to trace out the rays “intelligently” somehow to hit an edge, which is like hitting a bullseye, right?

Host: Right.

Nikunj Raghuvanshi: Only then can the ray wrap around the edge. So, it becomes really difficult. Most practical ray tracing systems don’t try to deal with this edge diffraction effect because of that. Although there are academic approaches to it, in practice it becomes difficult. But as I worked on this over the years, I’ve kind of realized, these are the real advantages of this. Disadvantages are pretty clear: it’s slow, right? So, you have to precompute. But we’re realizing, over time, that going to physics has these advantages.

Host: Well, but the precompute part is innovative in terms of a thought process on how you would accomplish the speed-up…

Nikunj Raghuvanshi: There have been papers on precomputed acoustics, academically, before, but this realization of mixing precomputation and extracting these perceptual parameters? That is a recipe that makes a lot of practical sense. Because a third thing that I haven’t mentioned yet is, by going to the perceptual domain, now the sound designer can make sense of the numbers coming out of this whole system. Because it’s loudness. It’s reverberation time, how long the sound is reverberating. And these are numbers that are super-intuitive to sound designers; they already deal with them. So, now what you are telling them is, hey, you used to start with a blank world, which had nothing, right? Like the world before the act of creation, there’s nothing. It’s just empty space and you are trying to make things reverberate this way or that. Now you don’t need to do that. Now physics will execute first, on the actual scene with the actual materials, and then you can say, I don’t like what physics did here or there, let me tweak it, let me modify what the real result is and make it meet the artistic goals I have for my game.

(music plays)

Host: We’ve talked about indoor audio modeling, but let’s talk about the outdoors for now and the computational challenges to making natural outdoor sounds, sound convincing.

Nikunj Raghuvanshi: Yeah.

Host: How have people hacked it in the past and how does your work in ambient sound propagation move us forward here?

Nikunj Raghuvanshi: Yeah, we’ve hacked it in the past! Okay. This is something we realized on Gears of War because the parameters we use were borrowed, again, from the concert hall literature and, because they’re parameters informed by concert halls, things sound like halls and rooms. Back in the days of Doom, this tech would have been great because it was all indoors and rooms, but in Gears of War, we have these open spaces and it doesn’t sound quite right. Outdoors sounds like a huge hall and you know, how do we do wind ambiances and rain that’s outdoors? And so, we came up with a solution for them at that time which we called “outdoorness.” It’s, again, an invented word.

Host: Outdoorness.

Nikunj Raghuvanshi: Outdoorness.

Host: I’m going to use that. I like it.

Nikunj Raghuvanshi: Because the idea it’s trying to convey is, it’s not a binary indoor/outdoor. When you are crossing a doorway or a threshold, you expect a smooth transition. You expect that, I’m not hearing rain inside, I’m feeling nice and dry and comfortable and now I’m walking into the rain…

Host: Yeah.

Nikunj Raghuvanshi: …and you want the smooth transition on it. So, we built a sort of custom tech to do that outdoor transition. But it got us thinking about, what’s the right way to do this? How do you produce the right sort of spatial impression of, there’s rain outside, it’s coming through a doorway, the doorway is to my left, and as you walk, it spreads all around you. You are standing in the middle of rain now and it’s all around you. So, we wanted to create that experience. So, the ambient sound propagation work was an intern project and now we finished it up with our collaborators at Cornell. And that was about, how do you model extended sound sources? So, again, going back to concert halls, usually people have dealt with point-like sources which might have a directivity pattern. But rain is like a million little drops. If you try to model each and every drop, that’s not going to get you anywhere. So, that’s what the paper is about: how do you treat it as one aggregate that somebody gave us? We produce an aggregate sort of energy distribution of that thing along with its directional characteristics and just encode that.

Host: And just encode it.

Nikunj Raghuvanshi: And just encode it.

Host: How is it working?

Nikunj Raghuvanshi: It works nice. It sounds good. To my ears it sounds great.

Host: Well you know, and you’re the picky one, I would imagine.

Nikunj Raghuvanshi: Yeah. I’m the picky one and also when you are doing iterations for a paper, you also completely lose objectivity at some point. So, you’re always looking for others to get some feedback.

Host: Here, listen to this.

Nikunj Raghuvanshi: Well, reviewers give their feedback, so, yeah.

Host: Sure. Okay. Well, kind of riffing on that, there’s another project going on that I’d love for you to talk as much as you can about called Project Acoustics and kind of the future of where we’re going with this. Talk about that.

Nikunj Raghuvanshi: That’s really exciting. So, up to now, Project Triton was an internal tech which we managed to propagate from research into actual Microsoft product, internally.

Host: Um-hum.

Nikunj Raghuvanshi: Project Acoustics is being led by Noel Cross’s team in Azure Cognition. And what they’re doing is turning it into a product that’s externally usable. So, trying to democratize this technology so it can be used by any game audio team anywhere backed by Azure compute to do the precomputation.

Host: Which is key, the Azure compute.

Nikunj Raghuvanshi: Yeah, because you know, it took us a long time, with Gears of War to figure out, okay, where is all this precompute going to happen?

Host: Right.

Nikunj Raghuvanshi: We had to figure out the whole cluster story for ourselves, how to get the machines, how to procure them, and there’s a big headache of arranging compute for yourself. And so that’s, logistically, a key problem that people face when they try to think of precomputed acoustics. On the run-time side, with Project Acoustics, we are going to have plug-ins for all the standard game audio engines and everything. So, that makes things simpler on that side. But a key blocker in my view was always this question of, where are you going to precompute? So, now the answer is simple. You get your Azure Batch account and you just send your stuff up there and it just computes.

Host: Send it to the cloud and the cloud will rain it back down on you.

Nikunj Raghuvanshi: Yes. It will send down data.

Host: Who is your audience for Project Acoustics?

Nikunj Raghuvanshi: Project Acoustics, the audience is the whole game audio industry. And our real hope is that we’ll see some uptake on it when we announce it at GDC in March, and we want people to use it, as many teams, small, big, medium, everybody, to start using this because we feel there’s a positive feedback loop that can be set up where you have these new tools available, designers realize that they have these new tools available that have shipped in Triple A games, so they do work. And for them to give us feedback. If they use these tools, we hope that they can produce new audio experiences that are distinctly different so that then they can say to their tech director, or somebody, for the next game, we need more CPU budget. Because we’ve shown you value. So, a big exercise was how to fit this within current budgets so people can produce these examples of novel possible experiences so they can argue for more. So, to increase the budget for audio and kind of bring it on par with graphics over time as you alluded to earlier.

Host: You know, if we get nothing across in this podcast, it’s like, people, pay attention to good audio. Give it its props. Because it needs it. Let’s talk briefly about some of the other applications for computational acoustics. Where else might it be awesome to have a layer of realism with audio computing?

Nikunj Raghuvanshi: One of the applications that I find very exciting is for audio rendering for people who are blind. I had the opportunity to actually show the demo of our latest system to Daniel Kish, who, if you don’t know, he’s the human echo-locator. And he uses clicks from his mouth to actually locate geometry around him and he’s always oriented. He’s an amazing person. So that was a collaboration, actually, we had with a team in the Garage. They released a game called Ear Hockey and it was a nice collaboration, like there was a good exchange of ideas over there. That’s nice because I feel that’s a whole different application where it can have a potential social positive impact. The other one that’s very interesting to me is that we lived in 2-D desktop screens for a while and now computing is moving into the physical world. That’s the sort of exciting thing about mixed reality, is moving compute out into this world. And then the acoustics of the real world being folded into the sounds of virtual objects becomes extremely important. If something virtual is right behind the wall from you, you don’t want to listen to it with full loudness. That would completely break the realism of something being situated in the real world. So, from that viewpoint, good light transport and good sound propagation are both required things for the future compute platform in the physical world. So that’s a very exciting future direction to me.

(music plays)

Host: It’s about this time in the podcast I ask all my guests the infamous “what keeps you up at night?” question. And when you and I talked before, we went down kind of two tracks here, and I felt like we could do a whole podcast on it, but sadly we can’t… But let’s talk about what keeps you up at night. Ironically to tee it up here, it deals with both getting people to use your technology…

Nikunj Raghuvanshi: Um-hum.

Host: And keeping people from using your technology.

Nikunj Raghuvanshi: No! I want everybody to use the technology. But I’d say like five years ago, what used to keep me up at night is like, how are we going to ship this thing in Gears of War? Now what’s keeping me up at night is how do we make Project Acoustics succeed and how do we, you know, expand the adoption of it and, in a small way, try to move the game audio industry forward a bit and help artists do the artistic expression they need to do in games? So, that’s what I’m thinking right now, how can we move things forward in that direction? I frankly look at video games as an art form. And I’ve gamed a lot in my time. To be honest, not all of it was art, I was enjoying myself a lot and I wasted some time playing games. But we all have our ways to unwind and waste time. But good games can be amazing. They can be much better than a Hollywood movie in terms of what you leave them with. And I just want to contribute in my small way to that. Giving artists the tools to maybe make the next great story, you know.

Host: All right. So, let’s do talk a little bit, though, about this idea of you make a really good game…

Nikunj Raghuvanshi: Um-hum.

Host: Suddenly, you’ve got a lot of people spending a lot of time. I won’t say wasting. But we have to address the nature of gaming, and the fact that there are you know… you’re upstream of it. You are an artist, you are a technologist, you are a scientist…

Nikunj Raghuvanshi: Um-hum.

Host: And it’s like I just want to make this cool stuff.

Nikunj Raghuvanshi: Yeah.

Host: Downstream, it’s people want people to use it a lot. So, how do you think about that and the responsibilities of a researcher in this arena?

Nikunj Raghuvanshi: Yeah. You know, this reminds me of Kurt Vonnegut’s book, Cat’s Cradle? There’s a scientist in it who makes Ice-nine and it freezes the whole planet or something. So, you see things about video games in the news and stuff. But I frankly feel that the kind of games I’ve participated in making, these games are very social experiences. People meet on the games a lot. Like Sea of Thieves is all about, you get a bunch of friends together, you’re sitting on the couch together, and you’re just going crazy like on these pirate ships and trying to just have fun. So, they are not the sort of games where a person is being separated from society by the act of gaming and just is immersed in the screen and is just not participating in the world. They are kind of the opposite. So, games have all these aspects. And so, I personally feel pretty good about the games I’ve contributed to. I can at least say that.

Host: So, I like to hear personal stories of the researchers that come on the podcast. So, tell us a little bit about yourself. When did you know you wanted to do science for a living and how did you go about making that happen?

Nikunj Raghuvanshi: Science for a living? I was the guy in 6th grade who’d get up and say I want to be a scientist. So, that was then, but what got me really hooked was graphics, initially. Like I told you, I found the book which had these color plates and I was like, wow, that’s awesome! So, I was at UNC Chapel Hill, graphics group, and I studied graphics for my graduate studies. And then, in my second or third year, my advisor, Ming Lin, she does a lot of research in physical simulations. How do we make water look nice in physical simulations? Lots of it is CGI. How do you model that? How do you model cloth? How do you model hair? So, there’s all this physics for that. And so, I took a course with her and I was like, you know what? I want to do audio because you get a different sense, right? It’s simulation, not for visuals, but you get to hear stuff. I’m like okay, this is cool. This is different. So, I did a project with her and I published a paper on sound synthesis. So, like how rigid bodies, like objects rolling and bouncing around and sliding make sound, just from physical equations. And I found a cool technique and I was like okay, let me do acoustics with this. It’s going to be fun. And I’m going to publish another paper in a year. And here I am, still trying to crack that problem of how to do acoustics in spaces!

Host: Yeah, but what a place to be. And speaking of that, you have a really interesting story about how you ended up at Microsoft Research and brought your entire PhD code base with you.

Nikunj Raghuvanshi: Yeah. It was an interesting time. So, when I was graduating, MSR was my number one choice because I was always thinking of this technology as, it would be great if games used this one day. This is the sort of thing that would have a good application in games. And then, around that time, I got hired to MSR and it was a multicore incubation back then, my group was looking at how do these multicore systems enable all sorts of cool new things? And one of the things my hiring manager was looking at was how can we do physically based sound synthesis and propagation. So, that’s what my PhD was, so they licensed the whole code base and I built on that.

Host: You don’t see that very often.

Nikunj Raghuvanshi: Yeah, it was nice.

Host: That’s awesome. Well, Nikunj, as we close, I always like to ask guests to give some words of wisdom or advice or encouragement, however it looks to you. What would you say to the next generation of researchers who might want to make sound sound better?

Nikunj Raghuvanshi: Yeah, it’s an exciting area. It’s super-exciting right now. Because even like just to start from more technical stuff, there are so many problems to solve with acoustic propagation. I’d say we’ve taken just the first step of feasibility, maybe a second one with Project Acoustics, but we’re right at the beginning of this. And we’re thinking there are so many missing things, like outdoors is one thing that we’ve kind of fixed up a bit, but we’re going towards what sorts of effects can you model in the future? Like directional sources is one we’re looking at, but there are so many problems. I kind of think of it as the 1980s of graphics when people first figured out that you can make this work. You can make light propagation work. What are the things that you need to do to make it ever closer to reality? And we’re still at it. So, I think we’re at that phase with acoustics. We’ve just figured out this is one way that you can actually ship in practical applications and we know there are deficiencies in its realism in many, many places. So, I think of it as a very rich area that students can jump in and start contributing.

Host: Nowhere to go but up.

Nikunj Raghuvanshi: Yes. Absolutely!

Host: Nikunj Raghuvanshi, thank you for coming in and talking with us today.

Nikunj Raghuvanshi: Thanks for having me.

(music plays)

To learn more about Dr. Nikunj Raghuvanshi and the science of sound simulation, visit Microsoft.com/research.


‘Sea of Thieves’ anniversary update coming April 30

Today, we are celebrating a year of adventures in Sea of Thieves! It’s been both humbling and awe-inspiring to see millions of players from all over the world set sail and create their own stories on the Sea of Thieves. On behalf of everyone at Rare, I’d like to say a huge thank you to everyone who’s played and made Sea of Thieves what it is today.

While it’s been an amazing first year, we’re not done yet. With four major updates, nine time-limited events and more than thirty game updates, we’re still just getting started. We’re as excited about Sea of Thieves today as we were a year ago, and I’m pleased to be able to share with you a big reason why – The Anniversary Update, our biggest content update so far, will be coming free to all players on April 30.

The Anniversary Update is a game changer that takes Sea of Thieves to a whole new level, packed full of new content that’s guaranteed to have something for everyone. Tune into a series of live streams on Mixer, Twitch and YouTube, where we’ll share more details, insights and glimpses from behind the scenes in the run-up to April 30:

Wednesday, April 10, 17:00 BST (9:00 a.m. PT)

Anniversary Preview: The Arena

Find out more about the all-new competitive game mode that will let you and your friends test yourselves against other crews in fun and fast-paced matches to amass the most loot.

Tuesday, April 16, 17:00 BST (9:00 a.m. PT)

Anniversary Preview: The Hunter’s Call

The Hunter’s Call is a new trading company that gives you more ways to play and progress towards Pirate Legend. Find out more about what you can do, the company behind it and the rewards on offer.

Tuesday, April 23, 17:00 BST (9:00 a.m. PT)

Anniversary Preview: Tall Tales – Shores of Gold

This is Sea of Thieves like you’ve never seen it before. Tall Tales are a collection of story-rich quests that are played out in our shared world and can be fully experienced by yourself or with your crew. This first collection, ‘Shores of Gold’, invites you to embark on an epic adventure of love, honour and betrayal in search of the mythical Shores of Gold.

Join the party

To say thank you to our wonderful community for their support over the past year, we have a range of goodies on offer to help everyone celebrate. These are all available now, so go get ‘em!

  • New Mercenary Voyages offering medleys in The Shores of Plenty and Devil’s Roar, together with a new Reaper’s Run voyage, commendations and cosmetics. There’s also a special Gilded Mercenary Voyage of Legends for Pirate Legends
  • Captain Bones’ Original Pirate Cutlass has been added to the weapons chest for any player who played in Year One
  • Any player who attained Pirate Legend status during Year One will receive a Golden Legendary Tankard, a Golden Legendary Hurdy-Gurdy and a Golden Legendary Blunderbuss, together with Golden Legendary ship customisations including a sail, hull and figurehead
  • All players will be able to purchase the Golden Sailor Hat and Golden Sailor Cannons from any Outpost shop for just 320 gold between today and the launch of the Anniversary Update
  • A range of Sea of Thieves themed Gamerpics you can use to personalise your Xbox Live page
  • No pirate birthday would be complete without a sing-song, so we’re pleased to announce that we are releasing ‘We Shall Sail Together’ for free through digital outlets (Spotify, Deezer, Google Play, iTunes, Amazon Music and Tidal). Make sure you have a listen to get you in the pirate spirit!

Thank you again to all the players who have joined us on this journey with Xbox Game Pass, Xbox One and Windows 10 PC. We couldn’t be more excited to bring you the Anniversary Update on April 30 and to share more details soon. See you on the seas!

New to Sea of Thieves? Join the adventure with Xbox Game Pass or on Xbox One and Windows 10 PC. If you haven’t tried Xbox Game Pass yet, join today and get your first month for $1. Prospective pirates can learn more at xbox.com/seaofthieves or visit the Sea of Thieves website at SeaofThieves.com to embark on an epic journey with one of gaming’s most welcoming communities.


New integrations from Microsoft and NVIDIA unlock GPU acceleration for devs and data scientists

With ever-increasing data volume and latency requirements, GPUs have become an indispensable tool for doing machine learning (ML) at scale. This week, we are excited to announce two integrations that Microsoft and NVIDIA have built together to unlock industry-leading GPU acceleration for more developers and data scientists.

  • Azure Machine Learning service is the first major cloud ML service to integrate RAPIDS, an open source software library from NVIDIA that allows traditional machine learning practitioners to easily accelerate their pipelines with NVIDIA GPUs.
  • ONNX Runtime has integrated the NVIDIA TensorRT acceleration library, enabling deep learning practitioners to achieve lightning-fast inferencing regardless of their choice of framework.

These integrations build on an already-rich infusion of NVIDIA GPU technology on Azure to speed up the entire ML pipeline.

“NVIDIA and Microsoft are committed to accelerating the end-to-end data science pipeline for developers and data scientists regardless of their choice of framework,” says Kari Briski, Senior Director of Product Management for Accelerated Computing Software at NVIDIA. “By integrating NVIDIA TensorRT with ONNX Runtime and RAPIDS with Azure Machine Learning service, we’ve made it easier for machine learning practitioners to leverage NVIDIA GPUs across their data science workflows.”

Azure Machine Learning service integration with NVIDIA RAPIDS

Azure Machine Learning service is the first major cloud ML service to integrate RAPIDS, providing up to 20x speedup for traditional machine learning pipelines. RAPIDS is a suite of libraries built on NVIDIA CUDA for doing GPU-accelerated machine learning, enabling faster data preparation and model training. RAPIDS dramatically accelerates common data science tasks by leveraging the power of NVIDIA GPUs.

Exposed on Azure Machine Learning service as a simple Jupyter Notebook, RAPIDS uses NVIDIA CUDA for high-performance GPU execution, exposing GPU parallelism and high memory bandwidth through a user-friendly Python interface. It includes a dataframe library called cuDF, which will be familiar to pandas users, as well as an ML library called cuML that provides GPU versions of many of the machine learning algorithms available in scikit-learn. And with Dask, RAPIDS can take advantage of multi-node, multi-GPU configurations on Azure.
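As a rough illustration of what that looks like in practice, the sketch below runs a pandas/scikit-learn-style workflow on the GPU. It assumes a GPU-enabled environment with RAPIDS installed (for example, an Azure GPU VM); the file name and column names are placeholders, not part of any Microsoft or NVIDIA sample:

```python
# Illustrative only: a pandas/scikit-learn-style workflow moved onto the
# GPU with RAPIDS. Requires a CUDA-capable GPU with cuDF and cuML installed.

import cudf
from cuml.cluster import KMeans

# cuDF mirrors the pandas API but keeps the dataframe in GPU memory.
df = cudf.read_csv("taxi_fares.csv")   # hypothetical dataset
df = df.dropna()

# Column names are placeholders for whatever features your data has.
features = df[["trip_distance", "passenger_count", "fare_amount"]]

# cuML mirrors scikit-learn's estimator API; fit() executes on the GPU.
km = KMeans(n_clusters=8)
km.fit(features)

df["cluster"] = km.labels_
print(df["cluster"].value_counts())
```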

Learn more about RAPIDS on Azure Machine Learning service or attend the RAPIDS on Azure session at NVIDIA GTC.

ONNX Runtime integration with NVIDIA TensorRT in preview

We are excited to open source the preview of the NVIDIA TensorRT execution provider in ONNX Runtime. With this release, we are taking another step towards open and interoperable AI by enabling developers to easily leverage industry-leading GPU acceleration regardless of their choice of framework. Developers can now tap into the power of TensorRT through ONNX Runtime to accelerate inferencing of ONNX models, which can be exported or converted from PyTorch, TensorFlow, MXNet and many other popular frameworks. Today, ONNX Runtime powers core scenarios that serve billions of users in Bing, Office, and more.
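In code, opting into TensorRT acceleration comes down to which execution providers you ask ONNX Runtime to use. The sketch below is illustrative and assumes an onnxruntime-gpu build compiled with TensorRT support, plus a placeholder model.onnx; ONNX Runtime falls back to the next provider in the list when one is unavailable:

```python
# Illustrative sketch: run an ONNX model with the TensorRT execution
# provider preferred, falling back to CUDA and then CPU. Assumes an
# onnxruntime build with TensorRT support and a placeholder model file.

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",  # exported/converted from PyTorch, TensorFlow, MXNet, etc.
    providers=[
        "TensorrtExecutionProvider",  # tried first when available
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ],
)

inp = session.get_inputs()[0]
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input
outputs = session.run(None, {inp.name: x})
print([o.shape for o in outputs])
```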

With the TensorRT execution provider, ONNX Runtime delivers better inferencing performance on the same hardware compared to generic GPU acceleration. We have seen up to 2X improved performance using the TensorRT execution provider on internal workloads from Bing MultiMedia services.

To learn more, check out our in-depth blog on the ONNX Runtime and TensorRT integration or attend the ONNX session at NVIDIA GTC.

Accelerating machine learning for all

Our collaboration with NVIDIA marks another milestone in our venture to help developers and data scientists deliver innovation faster. We are committed to accelerating the productivity of all machine learning practitioners regardless of their choice of framework, tool, and application. We hope these new integrations make it easier to drive AI innovation, and we strongly encourage the community to try them out. Looking forward to your feedback!


At GDC, find out how the ID@Xbox program is evolving for developers

The Game Developers Conference is here again and I’m hecka stoked to join the ID@Xbox team on our annual trip to the San Francisco Bay Area. GDC is an amazing time to highlight new games, catch up with other developers and friends in the industry, and listen to their feedback to continue driving innovation forward for Xbox.

2018 was a fantastic year for developers in the ID@Xbox program and we couldn’t be more excited to share what we have planned for 2019 with our fans in the coming days and months. Nearly 400 titles were released through the program last year, helping us reach a milestone of over 1,000 titles launched via ID@Xbox. We sent out developer kits to nearly 500 new studios in 2018, which means that over 3,000 studios across 67 countries now have Xbox One developer kits. With over 1,600 games in active development, we’re excited to see the variety of new experiences that teams are bringing to fans. Finally, games coming through the program have generated more than $1.2 billion in revenue since the program’s inception.

Xbox Game Pass has been and continues to be a great way for independent developers to get their games discovered while also enabling members to find their next favorite game. To date, we’ve worked with over 125 developers who have brought their games to Xbox Game Pass. Xbox Game Pass has also helped give independent developers more exposure with audiences that they wouldn’t have normally been able to reach. For example, of all the Xbox Game Pass members who played “Human: Fall Flat,” more than 40% had never played a puzzle game on Xbox before.

Looking ahead, 2019 will be another great year with a stellar lineup of games slated to release through the ID@Xbox program. Folks can once again try out a number of these titles at the Microsoft Booth in the Moscone Center or at our Developer Showcase Event for press. Among them, acclaimed creators Night School Studios, Blue Manchu Games and Zen Studios will launch Afterparty, Void Bastards and Operencia: The Stolen Sun, respectively, day-and-date into Xbox Game Pass, bringing their games to millions of members upon release. We’re also excited to see games like The Artful Escape, Sable and Tunic from our friends at Beethoven & Dinosaur, Shedworks and Dicey, respectively, continue their development through the program.

While Xbox Game Pass has created amazing opportunities for developers to have their games discovered by an established player base, Xbox Live continues to be an amazing social tool for players and developers alike. Whether it’s Gamerscore, achievements, cloud saves, multiplayer, leaderboards or just seeing if your friends are online (and what they’re playing), Xbox Live makes playing games better for fans, and implementation of key social features easier for devs.

We’re super excited this week to start talking to developers about how they can use Xbox Live on platforms other than Xbox and Windows 10 PCs, starting with iOS and Android! This has been a big request from players and developers over the years and while we’ve done a lot of work to implement Xbox Live in some of our Xbox Game Studios games on other platforms (such as Microsoft Solitaire and Wordament), and as part of the foundational technology for the Bedrock Edition of Minecraft, we’re excited to open up this functionality to more developers soon via easy-to-use SDKs! You can learn a lot more in Kareem Choudhry’s blog posts. If you’re a developer interested in implementing Xbox Live in your iOS and Android titles, please reach out to us at www.xbox.com/id and we’d be happy to help get you started!

Cuphead

As with most new technologies, we love to work with partners to help us explore new innovations we’re bringing to developers. That brings us to Cuphead! Our friends at StudioMDHR already have experience with Xbox Live beyond the console – Cuphead is available on Windows 10 with Xbox Live as well. We’ve had some good conversations with them about Xbox Live and the gaming community, especially after we saw what Mojang had done with Minecraft and the Bedrock Edition.

Growing out of these recent discussions, we are partnering with StudioMDHR to investigate bringing Xbox Live features beyond Xbox and PC to the Nintendo Switch. Yes, this means that fans will now have the opportunity to experience StudioMDHR’s award-winning debut game on Nintendo Switch with Xbox Live! We’ll be working with StudioMDHR to implement Xbox Live features into Cuphead on the Nintendo Switch in the coming months. Given the early stage of this work, the Xbox Live features will appear in a post-launch update to Cuphead on Nintendo Switch. We’d like to thank StudioMDHR and Nintendo for their help in this investigation!

To both players and developers – thanks again for playing and supporting the amazing work showcased through ID@Xbox! We can’t wait for what’s to come this week at GDC!


Newly formed Northwest Quantum Nexus aims to accelerate quantum information science

The Northwest is brimming with talented, dedicated people who can deliver quantum computing advances today and secure our quantum future for tomorrow. Today, at the inaugural Northwest Quantum Nexus Summit, we announced the Northwest Quantum Nexus, a coalition assembled by three keystone partners: Microsoft Quantum, the Pacific Northwest National Laboratory, and the University of Washington.

In line with the goals of the National Quantum Initiative Act, the Northwest Quantum Nexus accelerates Quantum Information Science (QIS) to develop a quantum economy and workforce in the greater Pacific Northwest region of the United States and Canada. The high concentration of quantum activity makes the Northwest one of the top regions globally for addressing key QIS needs. The goal of this week’s two-day Summit is to bring together the region’s experts to define its potential to drive quantum computing’s future.

Its objectives include:

  • Bringing together academia, government, startups, and industry to pursue multi-disciplinary QIS research to deliver scalable quantum computing.
  • Pursuing quantum computing via collaborative research and development, targeted quantum algorithms and programming, and the development of quantum materials.
  • Capitalizing on public-private partnerships to promote a rapid exchange of knowledge and resources and drive discoveries in quantum technologies.
  • Applying research outcomes to application areas and testbeds, including clean energy and sustainability.
  • Cultivating the future quantum workforce through programs that range from early to higher education and professional levels, as well as the corresponding network of institutions and outlets offering curriculum and training opportunities.

“The Northwest Quantum Nexus represents another big step toward the development of scalable, stable quantum computers,” says Todd Holmdahl, Corporate Vice President of Microsoft Quantum. “The partnership just makes a lot of sense – we’re already one of the top regions in the world for quantum research, and the Nexus will help us leverage that expertise to build a quantum-ready workforce and boost the region and nation’s quantum ecosystem.”

Microsoft Quantum Computing Project in Delft, The Netherlands. June 2018
Todd Holmdahl, Corporate Vice President of Microsoft Quantum

The Northwest Quantum Nexus intersects with another complementary, broad-based initiative led by Microsoft, the Microsoft Quantum Network. Both the Northwest Quantum Nexus and the Microsoft Quantum Network were founded on the understanding that creating a scalable quantum computer will require the collective effort of many skilled and diverse teams. Just last week, Microsoft hosted the Microsoft Quantum Network’s first Startup Summit.

Creating a regional quantum powerhouse

The Northwest Quantum Nexus partnership unites considerable intellectual talent. The University of Washington is one of the top research institutions in the world. It recently established UW Quantum X, which joins existing research endeavors across the university in QIS, including quantum sensing, quantum computing, quantum communication, and quantum materials.

Recently, in a partnership between Microsoft Quantum researchers and University of Washington faculty, we identified opportunities to equip students with quantum programming skills and an understanding of quantum algorithms. Microsoft researchers now teach quantum programming and algorithm development with the Q# programming language, giving students a head start in developing quantum solutions.
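
For a flavor of the kind of exercise students might start with, here’s a minimal "quantum coin flip" – Q# called from Python via the Quantum Development Kit’s qsharp interop package. This is a sketch, not course material; the package and its API are assumptions based on the 2019-era QDK and may differ in other releases:

```python
# Sketch: run a tiny Q# operation from Python with the QDK's qsharp package
# (assumed installed alongside the .NET Quantum Development Kit).
import qsharp

# Compile a Q# snippet: put one qubit in superposition, then measure it.
flip = qsharp.compile("""
    operation Flip() : Result {
        using (q = Qubit()) {
            H(q);          // equal superposition of |0> and |1>
            let r = M(q);  // measure: Zero or One with 50/50 odds
            Reset(q);      // return the qubit to |0> before release
            return r;
        }
    }
""")

# Run on the local full-state simulator; results vary run to run.
print([flip.simulate() for _ in range(5)])
```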

Pacific Northwest National Laboratory’s (PNNL) QIS program includes capabilities in algorithm development and programming, as well as expertise in materials synthesis and characterization, quantum chemistry applications, quantum sensing, and workforce development – all fields that stand to see tremendous advances from the power of a quantum computer.

Microsoft has recently collaborated with PNNL on the Microsoft Quantum Development Kit chemistry library. The library can be used in conjunction with NWChem, an open-source, high-performance computational chemistry tool funded by the U.S. Department of Energy’s Office of Science. With the state-of-the-art tools provided in the Quantum Development Kit – including resource estimation, algorithm programming, debugging and simulation – this collaboration enables chemists to develop quantum chemistry solutions for a quantum computer, and better understand what those solutions can look like today.

As for Microsoft, we’ve been driving advances in quantum computing and software development for 15 years. We see quantum computing as a way to solve some of the world’s most challenging problems across a wide range of industries – healthcare, environmental sciences, financial services, auto engineering, and others. Our team of experts in quantum physics, mathematics, computer science, and engineering has partnered with universities, industry, and government across the globe on cross-cutting research to advance our scalable qubit approach.

Researcher working on quantum hardware

The prominence of the Northwest Quantum Nexus is expected to increase the visibility of QIS research, leading to even greater collaboration and drawing quantum talent – trainees and employers – to the U.S. Locally, the Northwest Quantum Nexus will help position the greater Pacific Northwest region as a global leader for creating and sustaining an exceptional quantum workforce and economy.

Says Krysta Svore, General Manager for Quantum Software at Microsoft: “The Northwest Quantum Nexus is a pivotal element to making scalable quantum computing a reality. It enables the type of synergistic research and development needed to deliver critical technological advances from quantum algorithms and programming to materials design and development.”

Visit the Northwest Quantum Nexus web site to learn more about this important coalition.


UK’s Team Finderr wins Imagine Cup EMEA for smartphone app that helps find lost objects


The Imagine Cup 2019 competition is well underway with our second Regional Final wrapping up in Amsterdam, the Netherlands this week. Team Finderr from the United Kingdom took home the first-place title and a spot in the World Championship for their app solution to find lost objects with a smartphone. Congratulations!

Microsoft’s Imagine Cup aims to empower student developers around the globe to develop the next great technology solutions. "We understand that today’s students are tomorrow’s entrepreneurs. They will also become technology decision makers. Whatever we can do to help make them better for tomorrow, we want to invest in that," shares Jennifer Ritzinger, our Senior Director of Academic Ecosystems.

Students are given the opportunity to travel, receive one-on-one mentorship, develop business and technology skills, and network with like-minded peers to bring their ideas to life. Out of hundreds of EMEA submissions, 12 finalists from 10 countries were selected to travel to Amsterdam and compete at the Regional Final, hosted at Microsoft Ignite | The Tour. The first-place prize up for grabs? USD15,000 and a trip to the 2019 World Championship. Challenged with creating an original technology solution, finalists presented projects tackling issues in healthcare, agriculture, accessibility, and education, among others, and utilized a variety of Microsoft technologies including Azure Virtual Machines, Mixed Reality, and AI. "Imagine Cup is amazing, we feel empowered to do more," said Ian Kamau from team iCropal of his competition experience. "For us, this is not only a competition, but also a way to exchange opinions with colleagues in order to develop ourselves and our product," elaborated team AirCloud on their decision to enter.

Over three days of competition, teams had the opportunity to participate in an Entrepreneur Day and Ignite | The Tour activities, learning how to perfect their pitches and integrate the latest and greatest technologies into their solutions. The Regional Final culminated with each team giving a live demo of their project to a panel of judges, and the top three were chosen.

The competition doesn’t end here! On April 1 we will announce the Americas Regional finalists competing for the final spot in the World Championship. Team Finderr and our Asia Regional Final winners, Team Caeli from India, will travel to Seattle this May for the chance at the 2019 trophy. Who will go home with USD100,000 and a mentoring session with Microsoft CEO Satya Nadella? Follow the action on Twitter and Instagram to find out.

Meet the top 3 EMEA Teams


1st place – Finderr, United Kingdom

The team created an app that uses Cognitive Services and Virtual Machines to help everyone, including visually impaired individuals, find lost objects with their phones.

Prize: USD15,000 and a spot in the 2019 World Championship


2nd place – Athena-IO, Tunisia

The team created a cross-platform solution enabling content creation and visualization on Mixed Reality devices to help corporations train their workforce and make technology more accessible to everyone. 

Prize: USD5,000


3rd place – π-project, Russia

The team developed a low-cost, non-invasive solution for everyday health monitoring, using Artificial Intelligence to analyze urine transmission spectra.

Prize: USD1,000

Congratulations to all our EMEA Regional Finalists on their incredible projects and hard work. Sign up for updates to follow the rest of this year’s competition and see who’ll take home the 2019 Imagine Cup trophy.


Unveiled at HP Reinvent: Latest innovations in PC experiences for work and life

At HP Reinvent, the company’s largest global partner event, HP unveiled a variety of new offerings designed to transform experiences across work and life – including a ground-breaking security service, a cutting-edge commercial virtual reality (VR) headset, as well as new consumer and commercial PCs.

As the security landscape continues to evolve, HP aims to address endpoint challenges through a new security-focused managed service designed to enforce policies, actively monitor threats, and proactively respond to and defend against undetected attacks.

HP DaaS Proactive Security Service is designed to go beyond anti-virus solutions and provide a critical extra layer of defense. As the world’s most advanced isolation security service for files and browsing on Windows 10 PCs [1], this service extends protection and security intelligence to transform endpoints from biggest risk to best defense. It provides real-time malware protection for computing endpoints and threat analytics through HP TechPulse. The service also includes a security self-assessment tool and score, as well as cyber security solutions – including assessment, incident response and cyber insurance services – from Aon, a leading global professional services firm.

It will be available in more than 50 countries worldwide in April. The DaaS Proactive Security Service with Aon offerings will be available in the U.S. in April, with additional geographies to be added later this year*.

HP Reverb Virtual Reality Headset

Also at HP Reinvent, the company revealed the HP Reverb Virtual Reality Headset – Professional Edition. It’s ultra-light at 1.1 pounds and ultra-immersive with redesigned optics to increase the visual “sweet spot.”

It joins HP’s comprehensive VR portfolio with a 2160 x 2160 panel per eye – double the resolution of similarly priced headsets [2] – and a 114-degree field of view.

It has integrated Bluetooth with pre-paired motion controllers right out of the box and support for Windows Mixed Reality and Steam VR. With Windows Mixed Reality’s inside-out tracking, setup is easy – just plug in the VR headset and start the experience.

“With more than 2,500 VR experiences available and counting, Windows Mixed Reality continues to serve as the home for cutting-edge innovations that are fundamentally changing the way we work and play,” says Alex Kipman, a technical fellow with Microsoft. “The HP Reverb headset is an amazing example of the type of innovation we are seeing take place as we push forward and bring the next era of computing – the era of mixed reality – to the masses.”

HP worked closely with customers and found that VR-based training has a 75 percent retention rate, compared with 10 percent for lectures and 5 percent for reading [3].

HP Reverb also features integrated headphones with spatial audio and smart assistant-compatible dual microphones for a greater immersive experience and collaboration in multi-user VR environments.

The HP Reverb Professional and Consumer Editions are expected to be available in late April for $649 and $599*, respectively.

HP has also expanded its AMD commercial portfolio, designed for modern small and medium-sized businesses (SMBs).


HP ProBook 445R G6

Powered by 2nd Generation AMD Ryzen mobile processors, the HP ProBook 445R G6 and HP ProBook 455R G6 help professionals stay productive in the office and on the go. The new notebooks adopt the elegant design concept found across the HP EliteBook portfolio, featuring an ultra-slim aluminum chassis with crisp lines and clean edges. The 180-degree hinge allows users to lay the devices flat, making it easier to share content and collaborate, while HP Noise Cancellation reduces background noise by up to 20dB [4], providing a robust audio and video conferencing experience. They’re expected to be available in June*.

The HP ProDesk 405 G4 Desktop Mini delivers the performance, expandability and security that SMBs need in a compact and stylish design. The company’s first 400-series desktop mini, it features a 2nd Generation AMD Ryzen PRO processor with built-in Radeon Vega graphics and can support up to three displays [5], serving customers who need a powerful system for content creation coupled with the advanced security and manageability capabilities of the AMD processor.

It is expected to be available in April for a starting price of $499*.

With 84 percent of Gen Z workers preferring in-person communication and virtual tools like Skype as a way to meet face-to-face with colleagues [6], HP’s Zoom Room solutions make it easier for IT decision makers to choose a conferencing and collaboration solution that best meets the needs of their organization. These solutions provide easy, customizable and flexible meeting room options that are secure and manageable for small, medium and large organizations. They’re expected to be available starting in July*.

HP also announced HP Premier Care Solutions to enhance its premium commercial notebooks, including HP EliteBooks and HP ZBooks.

The new line-up of HP ENVY laptops and x360s features HP Command Center for powerful performance, sophisticated design with a geometric pattern for audio and thermal venting, and robust security – including a biometric fingerprint reader, a privacy camera kill switch and an optional HP Sure View privacy screen [7] to ensure screen content isn’t exposed.


HP ENVY 13 Laptop

Built with mobility in mind, the HP ENVY 13 Laptop features the latest Intel processors to power up to 19 hours of battery life [8] – a nearly 41 percent improvement versus the previous generation. It is expected to be available in April through HP.com for a starting price of $899.99. The new HP ENVY x360 13 features a powerful 2nd Generation AMD Ryzen processor and up to 14.5 hours of battery life [8] in a convertible form factor. It is expected to be available in April for a starting price of $699.99*.


HP ENVY x360 13

HP ENVY x360 15 offers versatility with either an AMD or the latest 8th Generation Intel Core processor [9], with up to 13 hours of battery life [8], a 28 percent top-bezel reduction versus the previous generation and an optional AMOLED display for stunning colors and brightness for browsing or streaming video. Versions with both Intel and AMD processors are expected to be available in April via HP.com and also available through Best Buy beginning in May. The Intel version has a starting price of $869.99, while the AMD version starts at $799.99*.


HP ENVY 17 Laptop

The HP ENVY 17 Laptop is built for performance with an 8th Generation Intel Core processor and NVIDIA GeForce MX250 graphics for productivity, creativity and entertainment. The 17-inch device offers a better screen-to-body ratio for a more immersive viewing experience, thanks to its 45 percent top bezel reduction versus the previous generation. It is expected to be available in April via HP.com for a starting price of $899.99, and also available through Best Buy beginning in May*.

[1] Based on HP’s internal analysis of isolation security services that offer SaaS and managed services that include on-board and configure, compliance enforcement and malware threat analytics. Most advanced based on hardware VM isolation-enforced protection with individual browser tabs and apps in isolation as of March 2019.

[2] HP Reverb, at 2160 x 2160 per eye (9.3 million pixels total), doubles the resolution of headsets at a similar price point.

[3] According to a MASiE 2017 Report (January 2017, by Bobby Carlton) in a study carried out by the National Training Laboratory.

[4] The test setup and results are based on Delta SNR (Signal to Noise Ratio) from the 3QUEST (3-fold Quality Evaluation of Speech in Telecommunications) test as defined in the ETSI TS 103 106 specification, with testing performed against train station background noise. Results will vary based on noise type and surroundings.

[5] Displays sold separately. Support for up to three video outputs via two standard video connectors and an optional third video port connector, which provides the following choices: DisplayPort 1.2, HDMI 2.0, VGA or USB Type-C with DisplayPort Output.

[6] Stillman, David, and Jonah Stillman. “Gen Z@Work: How the Next Generation Is Transforming the Workplace.” Harper Business, an Imprint of HarperCollinsPublishers, 2017.

[7] The HP Sure View integrated privacy screen functions in landscape orientation. Optionally available on select FHD Sure View panels.

[8] Windows 10/MM14 battery life will vary depending on various factors including product model, configuration, loaded applications, features, use, wireless functionality and power management settings. The maximum capacity of the battery will naturally decrease with time and usage. See www.bapco.com for additional details.

[9] Multi-core is designed to improve performance of certain software products. Not all customers or software applications will necessarily benefit from use of this technology. Performance and clock frequency will vary depending on application workload and your hardware and software configurations. Intel’s numbering, branding and/or naming is not a measurement of higher performance.

*PRICING and AVAILABILITY: Not available in all countries, pricing from HP.com, subject to change without notice. See Best Buy for pricing details.


How state and local governments are using chatbots to improve operations

State and local governments are leveraging chatbots to engage citizens, make state and local services more easily accessible, and automate administrative activities, freeing up staff time for more complex, mission-critical initiatives. With tight budgets and expanded service responsibilities, leveraging technology is more important to state and local agencies than ever before.

Recognized by the Center for Digital Government with its award for Best Application Serving the Public, the California Secretary of State’s Eureka Chatbot, developed in partnership with Microsoft, answers frequently asked business entity and trademark questions, helping to better serve the approximately 400,000 customers who contacted the agency last year. Through Eureka, customers can ask questions like "How do I check my business filing status online?" and be linked to the California Business Search website, where they can look up their business record and access documents for free.
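
The post doesn’t describe Eureka’s internals, but FAQ bots of this kind on Azure typically query a hosted knowledge base over REST. Below is a minimal sketch against a QnA Maker-style generateAnswer endpoint; the host, knowledge-base ID, and endpoint key are placeholders, and QnA Maker itself is an assumption rather than Eureka’s documented stack:

```python
# Sketch of the FAQ-bot pattern: send a citizen's question to a hosted
# knowledge base and return the best-matching answer. All identifiers below
# are placeholders, not values from the Eureka deployment.
import requests

QNA_HOST = "https://my-qna-service.azurewebsites.net"  # hypothetical host
KB_ID = "00000000-0000-0000-0000-000000000000"         # hypothetical KB id
ENDPOINT_KEY = "<endpoint-key>"                        # hypothetical key

def answer(question: str) -> str:
    """Query the knowledge base and return the top-ranked answer."""
    resp = requests.post(
        f"{QNA_HOST}/qnamaker/knowledgebases/{KB_ID}/generateAnswer",
        headers={"Authorization": f"EndpointKey {ENDPOINT_KEY}"},
        json={"question": question, "top": 1},
    )
    resp.raise_for_status()
    answers = resp.json().get("answers", [])
    return answers[0]["answer"] if answers else "Sorry, I don't know that one."

print(answer("How do I check my business filing status online?"))
```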

Other state and local governments are implementing chatbots with a variety of citizen needs in mind:

  • Of the 3,000 calls central IT receives each week in one state, 45-50% are for password resets. In response, the State Deputy CIO is building the state’s first chatbot, which serves employees statewide by automating password resets. The bot is integrated with ServiceNow, improving service levels by creating, reading, and updating records stored within the system, including incidents, questions, users, and more.
  • Two other state and local governments are building Q&A bots for central IT operations and are working toward a bot-of-bots approach so the public can access information. Citizens will eventually be able to transact with 30+ county departments and agencies through the main county website. This one-stop shop uses Azure Dispatch to route queries to the right department – for example, a "property tax" query goes to the Assessor, while a "sales tax" query goes to Business Licensing (see the routing sketch after this list).
  • Another state is building its Q&A bot leveraging Eureka code/docs, with the goal of allowing the public to access information and perform transactions across the 15 state departments/agencies it supports. The first project is for the Department of Higher Education, serving college students and parents with information about financial aid.
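
To make the bot-of-bots routing concrete, here’s a minimal sketch of the dispatch pattern: classify an incoming query’s intent, then hand the query to the owning department’s bot. A real deployment would use the Dispatch tool’s LUIS model for classification; the keyword stand-in below just keeps the routing logic visible, and every name is illustrative:

```python
# Sketch of the dispatch pattern: classify a query's intent, then route it
# to the owning department's bot. The classifier is a keyword stand-in for
# a dispatch/LUIS model; all names are invented for illustration.
from typing import Callable, Dict

def assessor_bot(query: str) -> str:
    return "[Assessor] Handling property question: " + query

def licensing_bot(query: str) -> str:
    return "[Business Licensing] Handling sales-tax question: " + query

# Intent -> department bot, mirroring the county's bot-of-bots design.
ROUTES: Dict[str, Callable[[str], str]] = {
    "PropertyTax": assessor_bot,
    "SalesTax": licensing_bot,
}

def classify(query: str) -> str:
    """Stand-in for the dispatch model's intent classifier."""
    return "PropertyTax" if "property" in query.lower() else "SalesTax"

def route(query: str) -> str:
    return ROUTES[classify(query)](query)

print(route("How is my property tax assessed?"))
print(route("Do I owe sales tax on online orders?"))
```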

Learn more about how you can build a chatbot like Eureka.