
Returning to Taos Pueblo for the birth of a son

Adonis grew up at Taos Pueblo, but his parents sent him to school in Tucson when he was transitioning from middle school to high school. Every summer, he’d return to Taos Pueblo and fall in love with the land more and more.

But spending the school year elsewhere gave him a different perspective when he returned. He wanted to do something with the skills he was gaining every year to help Taos Pueblo keep growing and thriving. This desire eventually solidified into his life’s purpose: to serve indigenous communities. What wasn’t as clear was how he’d do that.

His path led him to graduate school to study business, with the idea that a business degree would provide many useful tools and open multiple opportunities to fulfill this purpose.

A side-by-side photo of a boy playing outside and playing baseball

During his time in grad school, he worked on a project for Microsoft and was introduced to one of the project sponsors, Mike Miles. Mike was just putting together a new team at the company, now the Talent Workforce and Community Development Team, whose goal was to build and nurture relationships with the local communities that hosted Microsoft’s datacenters. Adonis was drawn to the project because he saw a genuine commitment within the company to creating meaningful, long-term impact, something that at the time ran counter to his view of corporate America.

Over a few months of working with Mike, whom Adonis describes as a visionary and authentic leader, Adonis began to warm to the idea of corporate work, especially if it gave him an avenue to develop relationships with local and potentially Native peoples. A year later, he started working on Mike’s team.

In addition, Adonis has taken on a voluntary leadership role, organizing with a group of indigenous Microsoft employees to create an official employee resource group inside the company.

“I see myself building on top of all of this work to continue to serve indigenous peoples,” he says.


New to Microsoft Garage Wall of Fame: Azure Quickstart Center

It started with data. Data showed customers were having difficulty onboarding to Azure: they didn’t know how to get started and had only a murky understanding of the tools and services available to them. Ayesha Ghaffar, a data scientist who analyzed these and many other customer onboarding experiences, saw an opportunity to improve the existing process. She put together a proposal for a new way to acclimate customers to Azure. Ayesha began meeting with a few product teams about her idea, and everyone felt it was worthwhile to pursue – but what she found was that resources were thin and other commitments took priority. “I just wanted to make this idea a reality, but I was a data scientist and I didn’t know how on my own. I needed help to make this happen.” The answer came in the 2017 company-wide Hackathon. “A week before the Hackathon started, I thought, ‘I’m just going to write an email and put it out there. This is the problem, this is how we can solve it, and this is the impact.’ I sent it to everyone.”

Soon, employees from around the world – designers, user researchers, developers, program managers – wanted to join her Hackathon project. They found a common passion for the customer experience. Peilin Nee shared her experience as part of the hack team. “Designing with empathy begins with yourself. This was my first time working with 15 others coming from different disciplines who have never met before. What amazed me is how we put aside our differences and work towards our shared dream – provide new Azure users a better getting-started experience and ultimately increase trial users’ adoption rate. And, we did just that in two and a half days.”

Azure Quickstart Center

For their hack project, called Azure Launchpad, they created an extension in the Azure Portal that served as a hub for key getting-started resources and personal recommendations for customers to achieve their cloud goals. Their efforts were noticed by the head of the Azure Portal product, Leon Welicki, who was able to secure a Portal developer, Asher Garland, to continue work on the project. While Ayesha was still a data scientist, she worked with Asher on a private preview version of Quickstart Center for the Ignite 2017 conference.

After the positive feedback they received from the private preview released for the conference, they were invited to present their work to Scott Guthrie, who later mentioned it in one of his employee meetings. Because Ayesha had been moonlighting as a product manager for the hack project, the Portal team eventually hired her to work on it full time as the PM in November 2017.

At Build 2018, Azure Quickstart Center shipped in public preview and became available to all customers through the Azure Portal. In 2019, Azure Quickstart Center partnered with the FastTrack team and Azure Docs to introduce Azure setup guides in the Quickstart Center to help customers deal with more complex scenarios like governance and migration based on the Microsoft Cloud Adoption Framework for Azure.

“The Cloud Adoption Framework provides the One Microsoft voice for cloud adoption. Azure Quickstart Center has made that voice actionable by injecting pointed and relevant guidance into the customer experience. The user now has prescriptive instructions at the point of execution. Together, these assets create a seamless integration of strategy and tactical execution,” shared Brian Blanchard, Sr. Director, Cloud Adoption Framework.

The Hackathon helped the team get feedback and continues to be a way to showcase the value of their ideas and projects. For Ayesha, “Hackathon is one of my favorite things at Microsoft – it helped me land a job as a PM, get visibility on the solution, and I learned how to solve customer problems in depth.” For the 2019 Hackathon, Ayesha and her team (along with the FastTrack team) iterated on features that customers have said they wanted. “The goal is to democratize knowledge and put it in these custom guides. Our superpower is enabling our customers and partners to find their own superpowers. Cloud can be complicated, we want to enable customers and partners to simplify it for their organizations by creating their own custom guides for their users.”


Brendan Burns: Empowering cloud-native developers on Kubernetes anywhere

Hello KubeCon and welcome to San Diego! It’s fantastic to have the chance to get some warm California sun, as well as the warmth of the broader Kubernetes community. From the very first community meeting, through the first KubeCon and on to today, it’s been truly amazing to have been able to watch and help the Kubernetes community grow. As KubeCon arrives, I’m excited to note how we are continuing to innovate and empower cloud-native developers on Kubernetes anywhere.

In the spirit of innovation, I’m thrilled to announce our new open source effort to enable trusted execution environments for Kubernetes. Trusted execution environments, or “enclaves,” are hardware-backed secure execution environments that ensure processes and their memory stay secure while they execute. Today, we’re enabling trusted computing on Kubernetes anywhere via the Open Enclave SDK.

We’re also releasing a resource plugin that makes Encrypted Page Cache RAM a resource that the Kubernetes scheduler can use to make scheduling decisions. The number of enclaves on a CPU is limited, and this plugin ensures that Pods that need enclaves will be guaranteed to land on a node with an enclave available. This scheduler support is critical to running trusted compute environments in cloud-native applications via Pods.
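To make the scheduler support concrete, here is a minimal sketch (in Python, building the manifest as a plain dict) of a Pod that asks for enclave memory through a Kubernetes extended resource, so the scheduler only places it on a node that has enclave capacity. The resource key used below is an assumption for illustration; check the plugin’s documentation for the real name.

```python
def enclave_pod(name: str, image: str, epc_mib: int) -> dict:
    """Build a Pod manifest whose container requests enclave page-cache RAM
    as an extended resource, so the scheduler reserves enclave capacity."""
    # Hypothetical extended-resource key; the actual plugin may use a
    # different name.
    resource = "kubernetes.azure.com/sgx_epc_mem_in_MiB"
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": image,
                "resources": {
                    # Extended resources must be set as limits; requests
                    # default to the same value.
                    "limits": {resource: str(epc_mib)},
                },
            }],
        },
    }

pod = enclave_pod("confidential-worker", "myregistry/enclave-app:latest", 16)
print(pod["spec"]["containers"][0]["resources"]["limits"])
```

Serialized to YAML or JSON, a manifest like this is what guarantees the Pod lands on a node with an enclave available.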

Beyond these innovations for secure computing, I’m incredibly proud of the work that the Helm community has done to build and release Helm 3.0 last week. The vast majority of workloads deployed to Kubernetes are deployed via Helm, and Helm 3 is the next step in this journey. Over the past few years, the Helm team has carefully listened to user feedback about what was working and where changes were needed.

Of the many fixes and improvements, the most popular is probably the removal of Tiller from the cluster, which makes Charts more Kubernetes-native and more secure by default. Speaking of security, the recent glowing independent security review of the Helm code base shows how dedicated and careful the Helm community has been in building a tool that is not just incredibly useful but also secure. Many congratulations to the Helm community on this important milestone.

Just like the Helm team, in Azure our open source work begins by listening to our customers – in particular, our customers in IoT and telecommunications. This feedback led us to understand how important it was for Kubernetes to support both IPv4 and IPv6 addresses for the same Pod. Major kudos are due to Kal Henidak for his dedicated and tireless work in engineering both the code and design changes necessary to support multiple addresses per Pod. As you might imagine, this change required careful work and coordination across the entire Kubernetes code base and community. Kal’s hard work in collaboration with the SIG-Networking community is being recognized with a shared keynote with Tim Hockin. Plan on attending the keynote to learn more about IPv4 and IPv6 in Kubernetes!
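With dual-stack support, a Pod’s status can report one address per family rather than a single IP. As a small sketch of how a client might consume that, the following (using only Python’s standard `ipaddress` module; the sample addresses are made up) partitions a Pod’s reported addresses by family:

```python
import ipaddress

def split_pod_ips(pod_ips):
    """Partition a Pod's reported addresses into IPv4 and IPv6 lists.

    With dual-stack, a Pod's status can carry multiple entries of the form
    {"ip": "<address>"}, typically one per IP family.
    """
    v4, v6 = [], []
    for entry in pod_ips:
        addr = ipaddress.ip_address(entry["ip"])
        (v4 if addr.version == 4 else v6).append(str(addr))
    return v4, v6

v4, v6 = split_pod_ips([{"ip": "10.244.1.7"}, {"ip": "fd00::7"}])
print(v4, v6)  # ['10.244.1.7'] ['fd00::7']
```

Code that previously assumed exactly one Pod IP is the kind of consumer this change required auditing across the code base.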

Finally, by combining both open source community and innovation we have a remarkable collection of open source projects reaching important milestones at KubeCon. The newly announced Buck (Brigade Universal Controller for Kubernetes) project shows how Cloud Native Application Bundles (CNAB) with Brigade radically simplify the development of new operators. The Kubernetes-based Event-driven Autoscaling (KEDA) project has shown incredible community interest. It’s a great collaboration between Azure Functions, Red Hat, and others. Here at KubeCon, the KEDA community is hitting the 1.0 milestone and is stable and ready for production use. I also want to congratulate the Cloud Events community on their recent 1.0 release, and I’m excited that Azure Event Grid has correspondingly added support for the 1.0 version of Cloud Events. Cloud Events is a CNCF project for an open and portable API for event-driven programming, and it’s awesome that it is available in a managed environment in Azure.

Of course, containers and DevOps are a year-round focus for my teams beyond KubeCon. We’ve been busy this fall.

In the four weeks since we launched the Distributed Application Runtime (Dapr) project, we have seen strong interest from the community and have been listening to the many stories of how people are using Dapr in their projects, including modernizing Java code, building games, and integrating with IoT solutions. The breadth across different industries is amazing to see. The interest in the Dapr runtime repo has grown beyond our expectations. It’s been awesome to see the community come together and continue the momentum. We are excited to announce the release of Dapr v0.2.0, focusing on community-driven components, fixes across the Dapr runtime and CLI, updates to documentation, samples, and the addition of an end-to-end testing framework. You can find out more about the v0.2.0 release at the Dapr repo.
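For a taste of what calling Dapr from application code looks like, the sketch below builds a request against the sidecar’s state API. This only constructs the URL and JSON body (a running Dapr sidecar is needed to actually send it), and the store name and key are hypothetical; the exact endpoint shape may differ between Dapr versions, so check the Dapr reference for your release.

```python
import json

DAPR_PORT = 3500  # default Dapr sidecar HTTP port

def save_state_request(store, key, value):
    """Build the URL and JSON body for a POST to Dapr's state endpoint.

    Dapr exposes building blocks (state, pub/sub, service invocation) over
    a local HTTP API served by the sidecar, so the app stays runtime-agnostic.
    """
    url = "http://localhost:%d/v1.0/state/%s" % (DAPR_PORT, store)
    body = json.dumps([{"key": key, "value": value}]).encode()
    return url, body

url, body = save_state_request("statestore", "order-1", {"qty": 3})
print(url)
```

The point of the pattern is that the application talks plain HTTP to localhost; swapping the backing state store is a configuration change, not a code change.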

Just building distributed systems isn’t enough; you need to be able to observe how they run in production, and the CNCF Prometheus project has emerged as a de facto standard for exposing metrics on all sorts of servers. But it’s still easier to integrate with cloud-based monitoring than to run your own metrics server. To enable this, Azure Monitor for containers can scrape the metrics exposed from Prometheus endpoints so you can quickly gather failure rates, responses per second, and latency. From Log Analytics, you can easily run a Kusto Query Language (KQL) query and create your own custom dashboard in the Azure portal. For the many customers using Grafana to support their dashboard requirements, you can visualize the container and Prometheus metrics in a Grafana dashboard. Azure monitoring combines the best of open technology with the reliability of a cloud service.
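To show what a Prometheus scrape actually contains, here is a simplified parser for the text exposition format plus a failure-rate computation of the kind a monitoring backend derives. The metric names and values are invented for the example, and the parser deliberately skips timestamps and other edge cases of the full format.

```python
def parse_prometheus_text(payload):
    """Parse a (simplified) Prometheus text-format scrape into {series: value}.

    Handles the common 'name{labels} value' lines; comments (# HELP / # TYPE)
    are skipped, and timestamps are not supported for brevity.
    """
    samples = {}
    for line in payload.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        series, _, value = line.rpartition(" ")
        samples[series] = float(value)
    return samples

# A made-up scrape from a hypothetical HTTP service.
scrape = """
# HELP http_requests_total Total HTTP requests.
# TYPE http_requests_total counter
http_requests_total{code="200"} 950
http_requests_total{code="500"} 50
"""
s = parse_prometheus_text(scrape)
errors = s['http_requests_total{code="500"}']
total = sum(v for k, v in s.items() if k.startswith("http_requests_total"))
print("failure rate: %.1f%%" % (100 * errors / total))  # failure rate: 5.0%
```

In practice Azure Monitor does this scraping and storage for you, and the equivalent aggregation becomes a KQL query over the ingested samples.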

In the last few years, KubeCon has grown from a single-track to many tracks and thousands of people. For me personally, and the community in general, it’s been an incredible journey. I’m excited to see people in San Diego – please stop by the Azure booth and say hello!


‘Where is my package?’ Easing the pains of courier delays

Making sure no parcel is missed

Delivering over 40 million shipments every year, Romania’s preferred courier company Urgent Cargus is taking its service one step further. With its system managing between 110,000 and 150,000 shipments at any given time and a fleet of over 2,600 courier vehicles, Urgent Cargus found itself handling millions of different requests. In such a large operation, with an even larger volume of orders to fulfill, the company needed a way to predict how much data processing and storage was required to deliver on its commitment to customers.

“We operate in an unpredictable industry. Depending on the season or festivity, we often find ourselves needing to handle a very high volume of deliveries throughout the year,” said Chief Information Officer Marian Pletea of Urgent Cargus. “There can be more than one million new requests every 24 hours, which makes it impossible to predict what data storage resources we require.”

“We also wanted to mitigate the risk of system downtime. Without the right availability, we don’t ship parcels at the right rate, nor receive the customers’ bookings, which could lead to serious business blockages.”

To address this issue, Urgent Cargus moved its customer job-booking application to the cloud and adopted a Platform-as-a-Service approach, purchasing additional resources when and if it needs them. The move gives customers a self-service option, from booking to invoicing, while minimizing the risk of downtime in the event of an unpredictable peak.

According to Marian: “We recognize AI as a transformative technology, which is why we’re testing it to automate certain services to make our customers’ lives easier. This will also free-up our employees to tackle more.”

Data driving cross-border deliveries

With parcels placed into the system, sorted and distributed, companies like Budapest-headquartered Waberer’s International specialize in delivering goods and supplies over long distances across Europe – getting packages to their destination as quickly as possible. Racking up nearly 500 million kilometres of road travel each year, the company meets demanding customer requirements with a fleet of over 4,300 modern vehicles and its bespoke logistics warehouse.

Needing to move away from hand-written notes and paper-based spreadsheets, Waberer’s International digitized its scheduling processes to better account for every journey and the over 1,000 transactions they process each day. With the help of AI, the company developed a planning platform called WIPE. WIPE uses complex algorithms to allocate drivers, load and journey schedules in the most efficient way. It’s a one-stop-shop for Waberer’s International to oversee all of their logistical operations in one place. This automated process means the business can track deliveries at all times and decide where and when resources are best placed.
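WIPE’s actual algorithms aren’t public, but the core of this kind of planning is an assignment problem: give each driver a route so that total cost (empty kilometres, delay, and so on) is minimized. The toy sketch below, with invented cost numbers, brute-forces the optimum for a tiny fleet; a production planner would use specialised optimisation at real fleet scale.

```python
from itertools import permutations

def best_assignment(cost):
    """Exhaustively find the driver-to-route assignment with minimum total cost.

    cost[i][j] is the cost (e.g. empty kilometres) of giving driver i route j.
    Brute force over all permutations is only viable for a handful of drivers.
    """
    n = len(cost)
    best, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best:
            best, best_perm = total, perm
    return best, best_perm

# Hypothetical drivers-by-routes cost matrix.
cost = [
    [12, 30, 22],
    [18, 10, 25],
    [20, 28,  9],
]
total, assignment = best_assignment(cost)
print(total, assignment)  # 31 (0, 1, 2)
```

Even this toy version shows why automating the step pays off: the planner evaluates every combination consistently, where hand-written notes and spreadsheets cannot.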

In addition to driving efficiencies in resourcing and deliveries, the company has also improved bottom-line performance. Thanks to the automated truck-scheduling function, Waberer’s International achieved a loaded ratio of 92%, better than the industry average.

With an ever-increasing and demanding customer base, and some retailers promising same-day delivery, there’s no doubt that postal services and logistics companies face many challenges.

By digitizing processes and gleaning predictive insights with technology like the cloud and AI, these examples show how data-driven decisions make processes more efficient and customers more satisfied. Perhaps AI should be top of everyone’s wish list this year!


Microsoft Azure now available from new cloud datacenter regions in Norway

Norway city skyline and waterfront.

Today, we’re announcing the availability of Microsoft Azure from our new cloud datacenter regions in Norway, a major milestone that makes us the first global cloud provider to deliver enterprise-grade services from datacenters in the country. These new regions demonstrate our ongoing investment in enabling digital transformation and advancing intelligent cloud and intelligent edge computing technologies across both commercial and public sectors.

DNB, Equinor, Lånekassen, and Posten are just a few of the customers and partners leveraging our cloud services to accelerate innovation and increase computing resources. This new offering of Microsoft Azure delivers scalable, highly available, and resilient cloud services to Norwegian companies and organizations while meeting data residency, security, and compliance needs.

Our President, Brad Smith, recently visited Norway to celebrate this important launch and to discuss how vital trust is for those we serve, not only to help bring forth innovation but to ensure our customers are protected.

“Our customers have entrusted us to protect, operate, and develop our platform in a way that keeps their data private and secure. This is an immense responsibility that we can’t just claim, but a responsibility that we must earn every single day.” Brad Smith, President, Microsoft

Accelerating digital transformation in Norway

As we further our expansion commitment, we consider the demand for locally delivered cloud services and the opportunity for digital transformation in the market. Azure enables our customers and partners to increase their utilization of public cloud services and accelerate investments into private and hybrid cloud solutions. Norwegian organizations can now embrace these benefits to further innovation and build digital businesses at scale. Below are just a few of the customers and partners embracing Microsoft Azure in Norway.

The Norwegian banking industry is recognized for its rapid technology adoption, digitalizing the services that build the best products for customers. As Norway’s largest financial services group, DNB Group is a major operator in several industries, for which they also have a Nordic or international strategy. With Microsoft Azure, DNB will be able to migrate to the cloud in accordance with Norwegian data handling regulations to modernize, gain operational efficiency, and secure the best experience for its customers. 

“The possibility of data residency was a decisive factor in choosing Microsoft’s datacenter regions. Now we are looking forward to using the cloud to modernize and achieve efficiency and agility in order to ensure the best experience for our customers.” – Alf Otterstad, Executive Vice President, Group IT, DNB

Equinor, a broad energy company developing oil, gas, wind, and solar energy in more than 30 countries worldwide, has chosen Microsoft Azure to enable its digital transformation journey through a seven-year consumption and development agreement. With this strategic partnership, anchored in cloud-enabled innovation, and by moving its whole system portfolio to Azure, Equinor is aiming to achieve a more cost-efficient, safer, and more reliable operation. Equinor will utilize a variety of cloud services like machine learning and advanced analytics to improve performance, decrease costs, and increase safety. Through the partnership with Microsoft and leveraging capabilities within Azure, Equinor seeks to be a leader in the transformation of the energy industry worldwide and a growing force in renewables.

“Equinor’s ambition is to become a global digital leader within our industry. We have a long history of innovation and technology development. The strategic partnership will, through cloud services, involve development of the next-generation IT workplace, extended business application platforms, and mixed-reality solutions.” Åshild Hanne Larsen, CIO and SVP, Corporate IT Equinor

Lånekassen, the Norwegian State Educational Loan Fund, has over 1.1 million customers, composed of former and current students. By moving to Azure, it seeks to develop new and transformative citizen services, based on cognitive and analytical technologies. Lånekassen’s purpose is to make education possible, and to provide the Norwegian workforce with relevant competences. It aims to strengthen student funding as well as maintain and increase the already high level of automatized customer services and application processes.

“It has been a priority for Lånekassen to focus on how we can utilize new technology to deliver an even better service for our students and manage our student financing schemes even more efficiently. As we move our core solutions into the cloud, it will give us increased opportunities to innovate. We have already had great success with using machine learning, and we are now looking forward to optimizing our operations further.” Nina Schanke Funnemark, CEO, Lånekassen

Posten Norge AS has chosen to use the Microsoft Azure platform to meet ever-changing market demands by modernizing some of its existing applications estate and creating new services for its customers and partners. Posten’s next-generation logistics system will provide its workforce with new digital toolsets to deliver even better customer experiences.

“Posten’s vision is to make everyday life simpler and the world smaller. With this vision, we aim to simplify and increase the value of trade and communication for people and businesses in the Nordic region. With the opening of Norwegian datacenter regions, we hope to accelerate and fuel our vision further.” Arne Erik Berntzen, CIO, Posten AS

Bringing the complete cloud to Norway

The new cloud regions in Norway connect with Microsoft’s 54 regions via our global network, one of the largest and most innovative on the planet, spanning more than 130,000 miles of terrestrial fiber and subsea cable systems to deliver services to customers. Microsoft brings the global cloud closer to home for Norwegian organizations and citizens through our transatlantic system Marea, the highest-capacity subsea cable to cross the Atlantic.

The new cloud regions in Norway are targeted to expand in 2020 with Office 365, one of the world’s leading cloud-based productivity solutions, and Dynamics 365 and Power Platform, the next generation of intelligent business applications and tools.

Learn more about the new cloud services in Norway and the availability of Azure regions and services across the globe.


Microsoft 365 CVP Jared Spataro: 5 attributes of successful teams

The way we work has changed. Today, winning in business requires constant innovation, and this innovation in turn requires collaboration across disciplines, geographies, and cultures. Prior to this seismic shift in the workplace, the atomic unit of productivity was the individual. Now it’s the team. A high-performing team brings together talented individuals and operates as more than the sum of its parts. It draws on the strengths of each member and compensates for individual limitations. But in an increasingly distributed and fast-paced world, even the perfect team chemistry isn’t enough. Today’s teams also need collaboration tools that help them put that chemistry to work. They need technology that can reach across space and time and help team members feel like they’re just a few feet away, even when they’re worlds apart.

At Microsoft, we’re on a mission to help every team become a successful team. And so, we partnered with IDEO, a global design company known for its human-centered, interdisciplinary approach. Together, we researched successful workplace teams to find out what they had in common. Then we used what we learned to create The Art of Teamwork—a new digital curriculum built around the five attributes of a successful team.

And that’s not all. We’re also using the findings from the research to refine Microsoft Teams, the hub for teamwork in Microsoft 365. What’s clear from the data is that the future of workplace collaboration won’t be defined by any one technology. Instead, successful teams need collaboration tools that combine a wide range of technologies in new and innovative ways. Teams brings together chat, meetings, calling, document collaboration, and workflow into a single app—and this unique combination is catching fire.

In fact, today Teams has more than 20 million daily active users. What’s more, while these users start with simple text-based chat, they quickly move on to richer forms of communication and collaboration. For instance, last month Teams customers participated in more than 27 million voice or video meetings and performed over 220 million open, edit, or download actions on files stored in Teams.

The five attributes

Partnering with IDEO, we researched diverse workplace teams – including astronauts, chefs, television producers, and nurses – to understand what high-performing collaborators have in common. Using in-context observation, expert interviews, secondary research, and prototyping activities, we identified the specific dynamics these teams share. We found that successful teams have five attributes in common:

  1. Team purpose—Keeps teams focused, fulfilled, and aligned on achieving their objectives.
  2. Collective identity—Fosters a sense of belonging and helps team members work together as a unit.
  3. Awareness and inclusion—Enables teams to navigate interpersonal dynamics and value everyone’s perspective.
  4. Trust and vulnerability—Encourages interpersonal risk-taking in teams.
  5. Constructive tension—Serves as a generative force for new ideas, driving better outcomes.

To learn more about the framework, check out the video below:

Teams customer success stories

With Teams, our customers are breaking through the artificial boundaries created by standalone or loosely coupled collaboration tools and working together in new ways. Their stories bring the five attributes of successful teams to life and paint a picture of what is possible.

Bold beauty: L’Oreal

At L’Oreal, the global beauty company, Chief Technology and Operations Officer, Barbara Lavernos explains, “Our momentum is driven by people interacting, putting ideas on the table, and jumping on them through spontaneous discussion. The new technology that allows for personality, contributions, and real innovation is Microsoft Teams.” With Teams, L’Oreal employees have a space to be creative while moving at the speed and scale needed to deliver over 7 billion products to customers annually.

From the factory floor to the C-Suite: Alcoa

Firstline Workers at Alcoa, a global leader in bauxite, alumina, and aluminum production, have embraced Teams to access business information on their own mobile devices while working at their remote Iceland facility. Before using Teams, when managers needed someone to come in for a shift, they had to call them, often late at night. Now with Teams, managers schedule plant employees using Shifts, which workers can access on their mobile devices. With Shifts, the problem of workers missing scheduled assignments has been resolved, with the absentee rate falling to nearly zero. “Shifts in Teams is much more efficient for organizing people,” says Friopjófur Tómasson, a plant supervisor. “If I had to use one word to tell you what Teams means to me, it’s ‘efficiency.’ Shifts saves me at least an hour a day.”

Simplified collaboration: Telefónica

At telecommunications company Telefónica, employees conduct business-building projects in Teams. An offsite meeting of executives can now be coordinated in a highly secure, centralized hub. This simple meeting used to be a coordination nightmare, with up to 20 colleagues from several departments focusing on various workstreams. “Before Teams, we had to integrate different aspects of a project that had been created in isolation from each other. Now, a group of colleagues can build on the documents in a collaborative way and edit the project directly. This process used to take four weeks, now we can accomplish it in a matter of days,” says Jamie Rodriguez-Ramos Fernandez, Director of Strategic Analysis at Telefónica.

Putting patient care first: St. Luke’s University Health Network (SLUHN)

St. Luke’s University Health Network providers are replacing different third-party collaboration apps with Teams, simplifying their lives with a single workspace for anytime, anywhere conversations about patients. “Even as we look to secure text messaging as one of the advantages of the ubiquitous cellphone, we still have to comply with HIPAA and other privacy requirements,” says Dr. James Balshi, MD, Chief Medical Information Officer and Vascular Surgeon at SLUHN. “But we don’t have to worry about that with Teams. It’s equally functional on the smartphone, tablet, and desktop computer. I use the camera technology on the phone to share patient information in a more secure and HIPAA-compliant manner with colleagues during a Teams video call. I’ve also shared EMR notes and X-ray images.”

Connecting across the globe: Trek

Trek is a leading bicycle designer, manufacturer, and retailer that is adapting quickly to a fast-changing environment. Some U.S.-based employees work in locations other than the company’s Waterloo, Wisconsin, headquarters, including at other facilities, home, or retail locations. Trek’s international business has also grown quickly with the company expanding its operations to 17 established offices around the world. “Having good connectivity and digital meeting experiences is critical in making the way we work a success,” says Nathan Pieper, IT Business Applications and Collaboration Manager at Trek. The meetings capabilities in Teams “was the carrot that got people in,” says Pieper. “Teams adoption grew because people used it, told other people to use it, and invited them to online meetings on it.”

A high-performing team can do what otherwise would be impossible. And with Teams, there’s no limit to what you and your team can achieve at work. To improve teamwork in your organization, check out the digital teamwork guide at the Art of Teamwork home page. And if you’re not using Teams yet, get started today!

Note: Customers are ultimately responsible for their own HIPAA compliance.


Check out Azure’s new high-performance computing offerings at Supercomputing 19 in Denver

For more than three decades, the researchers and practitioners that make up the high-performance computing (HPC) community have come together for an annual event. More than ten thousand strong in attendance, the global community will converge on Denver, Colorado to advance the state of the art in HPC. The theme for Supercomputing ’19 is “HPC is now” – a theme that resonates strongly with the Azure HPC team given the breakthrough capabilities we’ve been working to deliver to customers.

Azure is upending preconceptions of what the cloud can do for HPC by delivering supercomputer-grade technologies, innovative services, and performance that rivals or exceeds some of the most optimized on-premises deployments. We’re working to ensure Azure is paving the way for a new era of innovation, research, and productivity.

At the show, we’ll showcase Azure HPC and partner solutions, benchmarking white papers and customer case studies – here’s an overview of what we’re delivering.

  • Massive Scale MPI – Solve problems at the limits of your imagination, not the limits of other public clouds’ commodity networks. Azure supports your tightly coupled workloads up to 80,000 cores per job, featuring the latest HPC-grade CPUs and ultra-low-latency InfiniBand HDR networking.
  • Accelerated Compute – Choose from the latest GPUs, field-programmable gate arrays (FPGAs), and now IPUs for maximum performance and flexibility across your HPC, AI, and visualization workloads.
  • Apps and Services – Leverage advanced software and storage for every scenario: from hybrid cloud to cloud migration, from POCs to production, from optimized persistent deployments to agile environment reconfiguration. Azure HPC software has you covered.

Azure HPC unveils new offerings

  • The preview of new second-generation AMD EPYC™-based HBv2 Azure Virtual Machines for HPC – HBv2 virtual machines (VMs) deliver supercomputer-class performance, message passing interface (MPI) scalability, and cost efficiency for a variety of real-world HPC workloads. HBv2 VMs feature 120 non-hyperthreaded CPU cores from the new AMD EPYC™ 7742 processor, up to 45 percent more memory bandwidth than x86 alternatives, and up to 4 teraFLOPS of double-precision compute. Leveraging the cloud’s first deployment of 200 Gigabit HDR InfiniBand from Mellanox, HBv2 VMs support up to 80,000 cores per MPI job to deliver workload scalability and performance that rivals some of the world’s largest and most powerful bare-metal supercomputers. HBv2 is not just one of the most powerful HPC virtual machines Azure has ever offered, but also one of the most affordable. HBv2 VMs are available now.
  • The preview of new NVv4 Azure Virtual Machines for virtual desktops – NVv4 VMs enhance Azure’s portfolio of Windows Virtual Desktops with the introduction of the new AMD EPYC™ 7742 processor and the AMD RADEON INSTINCT™ MI25 GPUs. NVv4 gives customers more choice and flexibility by offering partitionable GPUs. Customers can select a virtual GPU size that is right for their workloads and price points, from as little as 2 GB of dedicated GPU frame buffer for an entry-level desktop in the cloud up to an entire MI25 GPU with 16 GB of HBM2 memory to provide a powerful workstation-class experience. NVv4 VMs are available now in preview.
  • The preview of new NDv2 Azure GPU Virtual Machines – NDv2 VMs are the latest, fastest, and most powerful addition to the Azure GPU family, designed specifically for the most demanding distributed HPC, artificial intelligence (AI), and machine learning (ML) workloads. These VMs feature eight NVIDIA Tesla V100 NVLink-interconnected GPUs, each with 32 GB of HBM2 memory, 40 non-hyperthreaded cores from the Intel Xeon Platinum 8168 processor, and 672 GiB of system memory. NDv2-series virtual machines also feature 100 Gigabit EDR InfiniBand from Mellanox with support for standard OFED drivers and all MPI types and versions. NDv2-series virtual machines are ready for the most demanding machine learning models and distributed AI training workloads, with out-of-box NCCL2 support for InfiniBand allowing easy scale-up to supercomputer-sized clusters that can run workloads utilizing CUDA, including popular ML tools and frameworks. NDv2 VMs are available now in preview.
  • Azure HPC Cache now available – When it comes to file performance, the new Azure HPC Cache delivers flexibility and scale. Use this service right from the Azure Portal to connect high-performance computing workloads to on-premises network-attached storage, or to present Azure Blob storage through a POSIX (portable operating system interface) file interface.
  • The preview of new NDv3 Azure Virtual Machines for AI – NDv3 VMs are Azure’s first offering featuring the Graphcore IPU, designed from the ground up for AI training and inference workloads. The IPU’s novel architecture enables high-throughput processing of neural networks even at small batch sizes, which accelerates training convergence and delivers low inference latency. With the launch of NDv3, Azure is bringing the latest in AI silicon innovation to the public cloud and giving customers additional choice in how they develop and run their massive-scale AI training and inference workloads. NDv3 VMs feature 16 IPU chips, each with over 1,200 cores, over 7,000 independent threads, and a large 300 MB on-chip memory that delivers up to 45 TB/s of memory bandwidth. The eight Graphcore accelerator cards are connected through a high-bandwidth, low-latency interconnect, enabling large models to be trained in a model-parallel (including pipelined) or data-parallel way. An NDv3 VM also includes 40 CPU cores, backed by Intel Xeon Platinum 8168 processors, and 768 GB of memory. Customers can develop applications for IPU technology using Graphcore’s Poplar® software development toolkit and leverage the IPU-compatible versions of popular machine learning frameworks. NDv3 VMs are now available in preview.
  • NP Azure Virtual Machines for HPC coming soon – Our Alveo U250 FPGA-accelerated VMs offer from one to four Xilinx U250 FPGA devices as an Azure VM, backed by powerful Xeon Platinum CPU cores and fast NVMe-based storage. The NP series will enable true lift-and-shift and single-target development of FPGA applications for a general-purpose cloud. Based on a board and software ecosystem customers can buy today, RTL and high-level language designs targeted at Xilinx’s U250 card and SDAccel 2019.1 runtime will run on Azure VMs just as they do on-premises and at the edge, enabling the bleeding edge of accelerator development to harness the power of the cloud without additional development costs.

  • Azure CycleCloud 7.9 update – We are excited to announce the release of Azure CycleCloud 7.9. Version 7.9 focuses on improved operational clarity and control, in particular for large MPI workloads on Azure’s unique InfiniBand-interconnected infrastructure. Among many other improvements, this release includes:

    • An improved error detection and reporting user interface (UI) that greatly simplifies diagnosing VM issues.

    • Node time-to-live capabilities via a “Keep-alive” function, allowing users to build and debug MPI applications that are not affected by autoscaling policies.

    • VM placement group management through the UI, giving users direct control over node topology for latency-sensitive applications.

    • Support for Ephemeral OS disks, which improve start-up performance and reduce cost for virtual machines and virtual machine scale sets.

  • Microsoft HPC Pack 2016, Update 3 – Released in August, Update 3 includes significant performance and reliability improvements, support for Windows Server 2019 in Azure, a new VM extension for deploying Azure IaaS Windows nodes, and many other features, fixes, and improvements.
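
As a rough sanity check on the peak double-precision figure quoted for HBv2 above, theoretical peak throughput can be estimated as cores × clock × FLOPs per cycle per core. The sustained clock speed and per-cycle throughput below are illustrative assumptions for this sketch, not published HBv2 specifications:

```python
def peak_tflops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    """Theoretical peak = cores x clock x double-precision FLOPs
    issued per core per cycle, converted from GFLOPS to TFLOPS."""
    return cores * clock_ghz * flops_per_cycle / 1000.0

# 120 cores per the HBv2 announcement; assume ~2.2 GHz sustained and
# 16 DP FLOPs/cycle (two 256-bit FMA pipes x 4 doubles x 2 ops) --
# both assumed values for illustration only.
print(f"~{peak_tflops(120, 2.2, 16):.1f} TFLOPS")  # prints ~4.2 TFLOPS
```

Under these assumptions the estimate lands close to the “up to 4 teraFLOPS” the announcement quotes.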

Across all of our new offerings and alongside our partners, Azure HPC aims to consistently offer the latest capabilities for HPC-oriented use cases. Together with our partners Intel, AMD, Mellanox, NVIDIA, Graphcore, Xilinx, and many more, we look forward to seeing you next week in Denver!


Lenovo’s smarter devices stoke professional passions

Juan Dimida in front of a brick wall, holding a Lenovo ThinkPad in front of a Lenovo logo he drew graffiti-style

In Philadelphia, Juan Dimida, 40, creates graphic art and electronic music on touchscreen devices, working them into beats with other songs or multimedia pieces.

Over the summer, he created an album of electronic music on his Motorola G3, and he has been performing it on his Lenovo Yoga PC, connected to drum machines and synthesizers. He’s playing this music live in November.

His artistic background began with graffiti art as a teen, but then he joined a city-run art program in his 20s that channeled his creative energy into colorful murals that covered up graffiti through community-based commissions. These collaborative projects usually involved four to five people and would include elaborate scenery, characters and animation. While each had a theme, the artists also improvised.

Dimida used Photoshop to put designs together and make alterations. While he was working on these murals, an event planner stopped by with a Lenovo ThinkPad tablet and gave it to him to draw on. The planner hired Dimida to create art for a 2012 event, where Dimida connected different devices, such as a Lenovo IdeaCentre AIO, to projectors. Dimida drew mosaics on that screen that were projected onto 80-foot walls.

After that event, he gained traction to host his own events, showing his original projections at art shows and parties.

Sound visualizations are something he particularly enjoys. Dimida uses a Lenovo ThinkPad X220t to record different sounds, so he’s able to set up different scenes, music effects, and visuals using multiple projectors. A separate Lenovo Yoga feeds into that setup, and he draws on its screen; the ThinkPad X220t adds sounds and projects the result.


Introducing more privacy transparency for our commercial cloud customers

At Microsoft, we listen to our customers and strive to address their questions and feedback, because one of our foundational principles is to help our customers succeed. Today Microsoft is announcing an update to the privacy provisions in the Microsoft Online Services Terms (OST) in our commercial cloud contracts that stems from additional feedback we’ve heard from our customers.

Our updated OST will reflect contractual changes we have developed with one of our public sector customers, the Dutch Ministry of Justice and Security (Dutch MoJ). The changes we are making will provide more transparency for our customers over data processing in the Microsoft cloud.

Microsoft is currently the only major cloud provider to offer such terms in the European Economic Area (EEA) and beyond.

We are also announcing that we will offer the new contractual terms to all our commercial customers – public sector and private sector, large enterprises and small and medium businesses – globally. At Microsoft we consider privacy a fundamental right, and we believe stronger privacy protections through greater transparency and accountability should benefit our customers everywhere.

Clarifying Microsoft’s responsibilities for cloud services under the OST update

In anticipation of the General Data Protection Regulation (GDPR), Microsoft designed most of its enterprise services as services where we are a data processor for our customers, taking the necessary steps to comply with the new data protection laws in Europe. At a basic level, this means Microsoft collects and uses personal data from its enterprise services to provide the online services requested by our customers and for the purposes instructed by our customers. As a processor, Microsoft ensures the integrity and safety of customer data, but that data itself is owned, managed and controlled by the customer.

Through the OST update we are announcing today we will increase our data protection responsibilities for a subset of processing that Microsoft engages in when we provide enterprise services. In the OST update, we will clarify that Microsoft assumes the role of data controller when we process data for specified administrative and operational purposes incident to providing the cloud services covered by this contractual framework, such as Azure, Office 365, Dynamics and Intune. This subset of data processing serves administrative or operational purposes such as account management; financial reporting; combatting cyberattacks on any Microsoft product or service; and complying with our legal obligations.

The change to assert Microsoft as the controller for this specific set of data uses will serve our customers by providing further clarity about how we use data, and about our commitment to be accountable under GDPR to ensure that the data is handled in a compliant way.

Meanwhile, Microsoft will remain the data processor for providing the services, improving and addressing bugs or other issues related to the service, ensuring security of the services, and keeping the services up to date.

As noted above, the updated OST reflects the contractual changes we developed with the Dutch MoJ. The only substantive differences in the updated terms relate to customer-specific changes requested by the Dutch MoJ, which had to be adapted for the broader global customer base.

The work to provide our updated OST has already begun. We anticipate being able to offer the new contract provisions to all public sector and enterprise customers globally at the beginning of 2020.

Working with our customers to strengthen privacy

Both before and after GDPR became law in the EU, Microsoft has taken steps to ensure that we protect the privacy of all who use our products and services. We continue to work on behalf of customers to remain aligned with the evolving legal interpretations of GDPR. For example, customer feedback from the Dutch MoJ and others has led to the global rollout of a number of new privacy tools across our major services, specific changes to Office 365 ProPlus, and increased transparency regarding the use of diagnostic data.

We remain committed to listening closely to our customers’ needs and concerns regarding privacy. Whenever customer questions arise, we stand ready to focus our engineering, legal and business resources on implementing measures that our customers require. At Microsoft, this is part of our mission to empower every individual and organization on the planet to achieve more.



New Microsoft open source software connector accelerates Internet of Medical Things on FHIR

Microsoft is expanding the ecosystem of FHIR® for developers with a new tool to securely ingest, normalize, and persist Protected Health Information (PHI) from IoMT devices in the cloud.  

Continuing our commitment to removing the barriers of interoperability in healthcare, we are excited to expand our portfolio of Open Source Software (OSS) to support the HL7 FHIR standard (Fast Healthcare Interoperability Resources). The new IoMT FHIR Connector for Azure is available today on GitHub.


An illustration of medical data being connected to FHIR with IoMT FHIR Connector for Azure

The Internet of Medical Things (IoMT) is the subset of IoT devices that capture and transmit patient health data. It represents one of the largest technology revolutions changing the way we deliver healthcare, but IoMT also presents a big challenge for data management.

Data from IoMT devices is often high-frequency and high-volume, and it requires sub-second measurements. Developers have to deal with a range of devices and schemas, from sensors worn on the body and ambient data-capture devices to applications that document patient-reported outcomes, and even devices that only require the patient to be within a few meters of a sensor.

Traditional healthcare providers, innovators, and even pharma and life sciences researchers are ushering in a new era of healthcare that leverages machine learning and analytics from IoMT devices. Most see a future where devices monitoring patients in their daily lives will be used as a standard approach to deliver cost savings, improve patient visibility outside of the physician’s office, and to create new insights for patient care. Yet as new IoMT apps and solutions are developed, two consistent barriers are preventing broad scalability of these solutions: interoperability of IoMT device data with the rest of the healthcare data, such as clinical or pharmaceutical records, and the security and private exchange of protected health information (PHI) from these devices in the cloud.

In the last several years, the provider ecosystem began to embrace the open source standard of FHIR as a solution for interoperability. FHIR is rapidly becoming the preferred standard for exchanging and managing healthcare information in electronic format and has been most successful in the exchange of clinical health records. We wanted to expand the ecosystem and help developers working with IoMT devices to normalize their data output in FHIR. The robust, extensible data model of FHIR standardizes the semantics of healthcare data and defines standards for exchange, so it fuels interoperability across data systems. We imagined a world where data from multiple device inputs and clinical health data sets could be quickly normalized around FHIR and work together in just minutes, without the added cost and engineering work to manage custom configurations and integration with each and every device and app interface. We wanted to deliver foundational technology that developers could trust so they could focus on innovation. And today, we’re releasing the IoMT FHIR Connector for Azure.

This OSS release opens an exciting new horizon for healthcare data management. It provides a simple tool that can empower application developers and technical professionals working with data from devices to quickly ingest and transform that data into FHIR. By connecting to the Azure API for FHIR, developers can set up a robust and secure pipeline to manage data from IoMT devices in the cloud.

The IoMT FHIR Connector for Azure enables easy deployment in minutes, so developers can begin managing IoMT data in a FHIR Server that supports the latest R4 version of FHIR:

  • Rapid provisioning for ingestion of IoMT data and connectivity to a designated FHIR Server for secure, private, and compliant persistence of PHI data in the cloud
  • Normalization and integrated mapping to transform data to the HL7 FHIR R4 Standard
  • Seamless connectivity with Azure Stream Analytics to query and refine IoMT data in real time
  • Simplified IoMT device management and the ability to scale through Azure IoT services (including Azure IoT Hub or Azure IoT Central)
  • Secure management of PHI data in the cloud – the IoMT FHIR Connector for Azure has been developed for HIPAA, HITRUST, and GDPR compliance, in full support of requirements for protected health information
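
To illustrate the kind of normalization described above, here is a minimal, hand-rolled sketch of mapping a raw wearable heart-rate sample to a FHIR R4 Observation resource. The helper `to_fhir_observation` and the device field names are hypothetical; the connector itself uses its own mapping configuration, not this code:

```python
import json

def to_fhir_observation(device_id: str, heart_rate_bpm: int, timestamp: str) -> dict:
    """Map a hypothetical wearable heart-rate sample to a minimal
    FHIR R4 Observation resource (illustrative sketch only)."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {
            "coding": [{
                "system": "http://loinc.org",
                "code": "8867-4",  # LOINC code for heart rate
                "display": "Heart rate",
            }]
        },
        "effectiveDateTime": timestamp,
        "device": {"display": device_id},
        "valueQuantity": {
            "value": heart_rate_bpm,
            "unit": "beats/minute",
            "system": "http://unitsofmeasure.org",
            "code": "/min",  # UCUM unit code
        },
    }

# A single normalized reading, ready to POST to a FHIR server.
sample = to_fhir_observation("wearable-001", 72, "2019-11-12T08:30:00Z")
print(json.dumps(sample, indent=2))
```

A pipeline built on the connector would apply this kind of transformation per reading, then persist the resulting Observation resources to the Azure API for FHIR.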

To enhance scale and connectivity with common patient-facing platforms that collect device data, we’ve also created a FHIR HealthKit framework that works with the IoMT FHIR Connector. If patients are managing data from multiple devices through the Apple Health application, a developer can use the IoMT FHIR Connector to quickly ingest data from all of the devices through the HealthKit API and export it to their FHIR server.

Playing with FHIR

The Microsoft Health engineering team is fully backing this open source project, but like all open source, we are excited to see it grow and improve based on the community’s feedback and contributions. Next week we’ll be joining developers around the world for FHIR Dev Days in Amsterdam to play with the new IoMT FHIR Connector for Azure. Learn more about the architecture of the IoMT FHIR Connector and how to contribute to the project on our GitHub page.


FHIR® is the registered trademark of HL7 and is used with the permission of HL7.