Today, I am honored to kick off our inaugural Marketplace Summit. For the first time, we are welcoming thousands of independent software vendors (ISVs) and software as a service (SaaS) providers as we broadcast live from Microsoft Studios to share how we are investing in our shared success and helping ISVs maximize the marketplace opportunity.
The mass migration to the cloud since the pandemic has created an inflection point: customers are looking to optimize and streamline their investments, and that creates an opportunity for SaaS providers and ISVs. Because of this growing demand for SaaS services, we are firmly committed to providing the right resources, tools and support to help ISVs take advantage of these opportunities. Today, we are thrilled to announce the global public preview of the ISV Success Program. We are excited to offer this personalized support at the solution level, helping every ISV innovate on the Microsoft Cloud and rapidly go to market through the commercial marketplace.
Creating scalable growth models for ISVs
Marketplace is at the core of the ISV Success Program, as we aim to help every ISV not only build on the Microsoft Cloud but also go to market with Microsoft and sell through the commercial marketplace.
In an uncertain economic environment, cloud budgets are durable and still on an upward trajectory. Cloud marketplaces are outpacing overall cloud growth and are becoming the centralized go-to-market channel of the future. According to Tackle’s 2022 annual State of Cloud Marketplaces report, cloud marketplaces continue to generate more revenue year over year, and Bessemer Venture Partners noted that in 2021, marketplace transactions grew an estimated 70% to $4 billion, which is 3x faster growth than the public cloud at large.
With the cloud embedded into every business, there is massive opportunity for any company selling SaaS or cloud-based solutions, but it is becoming harder and harder for ISVs to differentiate and acquire new customers. Those who embrace the marketplace and strategically execute a marketplace-first go-to-market approach are seeing the most cost-effective growth. At Microsoft, the marketplace continues to become central to how we connect customers and partners at scale, and ISVs of all sizes and maturity levels find success when they build a cloud-centric go-to-market strategy.
“Wiz was born in the cloud and our ecosystem is built around the marketplace motion,” says Trish Cagliostro, Head of Worldwide Channels and Alliances at Wiz. “When we work with customers in the marketplace, they usually have cloud consumption commitments through their Microsoft contracts. They can use this to consolidate IT spend and reduce total cost of ownership, which is powerful and gives Wiz the opportunity to have elevated conversations with customers.”
To support every ISV in harnessing the growing cloud opportunity, Microsoft is investing in programs and benefits across the Microsoft commercial marketplace. We are making concrete investments in programs that can benefit all ISVs, regardless of their size, scope or history. Over the past year, the marketplace has matured into a thriving commerce environment. We’ve seen:
288% growth in SaaS-billed sales
319% increase in customers with consumption commitments buying via marketplace
200% increase in private offers sold by Cloud Solution Providers
This momentum continues as customers look to centralize their cloud portfolio through B2B marketplaces to increase efficiency, buy with confidence, and spend smarter. Some of the recent marketplace innovations include:
Enterprise deal making
With 95% of the Fortune 500 using Azure, Microsoft has direct access to enterprise customers, and through the marketplace we can unlock that access for partners. We’ve seen a 164% increase in enterprise sales through the marketplace, and this will continue to grow with our commitment to partner success. Furthermore, Microsoft is unique in allowing every dollar spent on an eligible solution to count toward a customer’s cloud consumption commitment. This, paired with our enhancements to private offers and multi-year SaaS functionality, is cutting through procurement red tape and regularly closing 7- and 8-digit deals through the marketplace.
Empowering the ecosystem
Microsoft has always been a partner-focused organization. As customer needs continue to evolve, the Microsoft commercial marketplace is empowering our ecosystem of over 400,000 partners around the world to work together at a faster pace, on a larger scale and on their own terms. Earlier this year, we announced the general availability of margin sharing: ISV partners can create private offers and extend a margin to scale through partners in the Cloud Solution Provider program. With margin sharing, SaaS providers and ISVs can extend their sales forces and reach new markets.
To further enhance the ability for all partners to work together, we also announced that multi-party partner private offers will soon be entering private preview. This additional functionality will empower any set of partners to come together, create personalized offers with customized payouts, sell directly to Microsoft customers through marketplace and have the sale count toward the customers’ cloud consumption commitment. This will pioneer new opportunities for every partner and we look forward to sharing more details in the coming months.
The value of partnership
Only through co-innovating and jointly going to market can we truly meet customers where they are, and that synergy is the goal of the commercial marketplace. According to IDC, partners that build their own software and services represent the most profitable, high-growth partner business model. IDC estimates that for every $1 of Microsoft revenue, partners that build software make $10.11, 25% more than the estimate for services-led partners.
From the beginning, Microsoft has been a partner-first company. And as Microsoft’s portfolio has grown, we’ve increased our focus to ensure that our partners can access all of the innovation we create through our research and development investments, as well as to deliver their own services and solutions on top of that innovation.
Today marks an important milestone as we commit to supporting every ISV’s growth with Microsoft through the ISV Success Program and the other tools and resources outlined above. For more details, Anthony Joseph, vice president for Marketplace and ISV Journey, covers the opportunity with the ISV Success Program on the Azure blog channel. We are excited to work side by side with the ISV and SaaS community to realize the full potential of building on the Microsoft Cloud and selling through the commercial marketplace.
Microsoft Azure Quantum Resource Estimator enables quantum innovators to develop and refine algorithms to run on tomorrow’s scaled quantum computers. This new tool is one way Microsoft empowers innovators to have breakthrough impact with quantum at scale.
The quantum computers available today enable interesting experimentation and research, but they are unable to accelerate the computations necessary to solve real-world problems. While the industry awaits hardware advances, quantum software innovators are eager to make progress and prepare for a quantum future. Creating algorithms today that will eventually run on tomorrow’s fault-tolerant scaled quantum computers is a daunting task. These innovators are faced with questions such as: What hardware resources are required? How many physical and logical qubits are needed, and of what type? What is the runtime? Azure Quantum Resource Estimator was designed specifically to answer these questions. Understanding this data will help innovators create, test, and refine their algorithms and ultimately lead to practical solutions that take advantage of scaled quantum computers when they become available.
The Azure Quantum Resource Estimator started as an internal tool and has been key in shaping the design of Microsoft’s quantum machine. The insights it has provided have informed our approach to engineering a machine capable of the scale required for impact, including the machine’s architecture and our decision to use topological qubits. We’re making progress on our machine and recently had a physics breakthrough, detailed in a preprint on arXiv. On Thursday, we will take another step forward in transparency by publicly releasing the raw data and analysis in interactive Jupyter notebooks on Azure Quantum. These notebooks provide the exact steps needed to reproduce all the data in our paper. While engineering challenges remain, the physics discovery demonstrated in this data proves out a fundamental building block for our approach to a scaled quantum computer and puts Microsoft on the path to deliver a quantum machine in Azure that will help solve some of the world’s toughest problems.
As we advance our hardware, we are also focused on empowering software innovators to advance their algorithms. The Azure Quantum Resource Estimator tackles one of the most challenging problems facing researchers who develop quantum algorithms: it breaks down the resources required for a quantum algorithm, including the total number of physical qubits, the computational resources required such as wall clock time, and the details of the formulas and values used for each estimate. This lets algorithm development become the focus, with the goal of optimizing performance and decreasing cost. For the first time, it is possible to compare resource estimates for quantum algorithms at scale across different hardware profiles. Start from well-known, predefined qubit parameter settings and quantum error correction (QEC) schemes, or configure unique settings across a wide range of machine characteristics such as operation error rates, operation speeds, and error correction schemes and thresholds.
“Resource estimation is an increasingly important task for development of quantum computing technology. We are happy we could use Microsoft’s new tool for our research on this topic. It’s easy to use. The integration process was simple, and the results give both a high-level overview helpful for people new to error correction, as well as a detailed breakdown for experts. Resource estimation should be a part of the pipeline for anyone working on fault-tolerant quantum algorithms. Microsoft’s new tool is great for this.” — Michał Stęchły, Tech Lead at Quantum Software Team, Zapata Computing.
The Resource Estimator will help drive the transition from today’s noisy intermediate-scale quantum (NISQ) systems to tomorrow’s fault-tolerant quantum computers. Today’s NISQ systems might successfully run small numbers of operations in an algorithm, but reaching practical quantum advantage will require trillions of operations or more to run successfully. This gap will be closed by scaling up to a fault-tolerant quantum machine with built-in quantum error correction. This means each qubit and operation requested in a user’s program will be encoded into some number of physical qubits and operations at the hardware level, and the software stack will perform this conversion automatically. Now with the Resource Estimator, you can walk through these conversions, estimate the overheads in time and space required to implement your scaled quantum algorithms on a variety of hardware designs, and use the information to improve your algorithms and applications well before scaled fault-tolerant hardware is available. In our recent preprint on arXiv, we show how to use the Resource Estimator to understand the cost of three important quantum algorithms that promise practical quantum advantage.
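To make these logical-to-physical conversions concrete, here is a toy Python sketch of the kind of space-time overhead arithmetic a resource estimator automates. The surface-code-style formulas used below (a logical error rate of roughly a·(p/p_th)^((d+1)/2) per cycle, and about 2d² physical qubits per logical qubit at code distance d) and every numeric parameter are illustrative assumptions for this sketch, not the Azure Quantum Resource Estimator's actual models or output.

```python
# Toy fault-tolerant overhead sketch (illustrative assumptions, not the
# Azure Quantum Resource Estimator's real models).
# Assumed surface-code-style logical error rate per cycle at distance d:
#   p_L(d) = a * (p_phys / p_thresh) ** ((d + 1) / 2)
# Assumed footprint: ~2 * d^2 physical qubits per logical qubit.

def code_distance(p_phys, p_thresh, a, target_logical_error):
    """Smallest odd code distance d with p_L(d) <= target_logical_error."""
    d = 3
    while a * (p_phys / p_thresh) ** ((d + 1) / 2) > target_logical_error:
        d += 2
    return d

def estimate(logical_qubits, logical_depth, p_phys=1e-3, p_thresh=1e-2,
             a=0.03, cycle_time_ns=1000, error_budget=1e-2):
    """Rough physical-qubit count and runtime for an algorithm described
    only by its logical qubit count and logical depth."""
    # Spread the total error budget evenly over every logical qubit-cycle.
    per_cycle_target = error_budget / (logical_qubits * logical_depth)
    d = code_distance(p_phys, p_thresh, a, per_cycle_target)
    physical_qubits = logical_qubits * 2 * d * d
    # Assume each logical time step takes d error-correction cycles.
    runtime_s = logical_depth * d * cycle_time_ns * 1e-9
    return {"distance": d,
            "physical_qubits": physical_qubits,
            "runtime_s": runtime_s}

# Example: 100 logical qubits, a million logical time steps.
print(estimate(logical_qubits=100, logical_depth=10**6))
```

Even this crude model shows why estimates matter: tightening the physical error rate or the error budget changes the required code distance, which changes the physical qubit count quadratically; the real estimator exposes exactly these trade-offs across configurable hardware profiles.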
Resource Estimation paves the way for hardware-software co-design, enabling hardware designers to improve their architectures based on how large-scale algorithms might run on their specific implementation, and in turn, allowing algorithm and software developers to iterate on bringing down the cost of algorithms at scale.
“The Resource Estimator breaks down the resources needed to run a useful algorithm at scale. Putting precise numbers on the actual scale at which quantum computing provides industry-relevant solutions sheds light on the tremendous effort that has yet to be realized. This strengthens our commitment to our roadmap, which is focused on delivering an error-corrected quantum computer using a hardware-efficient approach.”—Jérémie Guillaud, Chief of Theory at Alice&Bob.
The Resource Estimator is built on the foundation of the community-supported Quantum Intermediate Representation (QIR); it is both extensible and portable and can be used with popular quantum SDKs and languages such as Q# and Qiskit. QIR was created in alliance with the Linux Foundation and other partners and is an open-source standard that serves as a common interface between many languages and target quantum computation platforms.
Getting started with resource estimation
It is easy to get started and gain your first insights with the tool. The example below shows how to estimate and analyze the physical resources required to run a quantum program on a fault-tolerant quantum computer.
1. Set up your Azure Quantum workspace and get started with Resource Estimation.
a) In the Azure portal, open your Azure Quantum workspace
b) On the left panel, under Operations, select Providers
c) Select + Add a provider
d) Select Microsoft Quantum Computing
e) Select Learn & Develop and select Save
2. Start with a ready-to-use sample.
To start running quantum programs with no installation required, try our free hosted notebooks experience in the Azure portal. Our hosted Jupyter notebooks support a variety of languages and quantum SDKs. You will find them in your Azure Quantum workspace (#1). Selecting Notebooks in the portal will take you to the sample gallery, where you will find the Resource Estimation tab (#2). Once there, choose one of the first two samples and select the “Copy to my notebooks” button (#3) to add the sample to your workspace.
3. Run your first Resource Estimation
After the sample has been copied to My notebooks you can select it from the Workspace menu to load it as a hosted notebook in the Azure Portal. From there, just select Run all from the top of the Jupyter Notebook to execute the program. You will be able to run an entire Resource Estimation job without writing a single line of code!
The results will immediately provide estimates of total physical qubits and runtime for the algorithm provided. For a deeper understanding of the resources consumed by the algorithm, you can trace the source of each result with detailed explanations of formulas. These deeper results can be re-used and shared in your research.
From Copilot for business to Codespaces for all, at GitHub Universe we’re bringing our breakthrough offerings to even more organizations and developers around the world.
As a developer, as someone who has been in love with writing code my entire life, I believe it’s time for a new developer experience. Software has advanced in all aspects of our work and life. Running, maintaining and building software for a global population has never been more complex. We are at a turning point. GitHub has built one integrated platform where the world’s developers can build, create, collaborate and have the best time of their lives doing it. One integrated platform for one purpose: putting the developer first. From writing code with Copilot and Hey GitHub, to running an ML model in a Codespace, to automating your pull requests with Actions and Advanced Security, to the more than 15,000 integrations in our Marketplace, it all unlocks the value of a true platform in GitHub. We have built the place that gives developers everything they need to be creative, to be happier, and to build the best work of their lives. Read the Universe blog below for the latest on how we’re enabling this new developer experience. https://lnkd.in/e-zjdP8g
“Many in Microsoft’s military community are veterans and others still serve today, in reserve corps for their countries. They commit weekends to training. They accept the responsibility of periodic deployments, some weeks and even months in duration. They, along with active-duty peers, are at the ready to be called to serve by their country. They are also outstanding colleagues who have made a choice to apply their well-earned skills and capabilities at Microsoft. We are better as a result.”
To celebrate a decade of Surface, a special edition of Microsoft’s signature 2-in-1 device draws its inspiration from Windows 11’s desktop wallpaper and nearly 150 years of history at a world-renowned global design house.
The Surface Pro Liberty Keyboard with Slim Pen 2*, which shows off the collaboration between Microsoft Surface and the London-based Liberty, is now available in the U.S., Canada, the U.K. and Japan (though supplies are limited).
“It’s such a contrast of things, and you would never expect this to happen in some ways,” says Elliott Hsu, a principal hardware designer at Microsoft. Hsu headed the team that worked with Liberty to also create the laser-engraved Surface Pro 9 Liberty Special Edition* and printed keyboard, which embody the elaborate florals Liberty is known for and serve as a natural branch of the Windows 11 Bloom that debuted last year.
Windows 11 introduced Bloom to the world in 2021 as the very first image you saw on the screen: a desktop wallpaper that served as a symbolic image of starting anew with this operating system. Inspired by flowers, it was created entirely in the digital world.
That was something that fascinated Adam Herbert and his design team at Liberty, a company he’d long admired and loved before he started working there four years ago.
While Microsoft’s designers use technology in almost every step of their creative process, Liberty’s designers use pens, pencils and paper. They go outside to sketch. They draw from the real world.
“I’ve always loved Liberty,” Herbert says. “I think it is a key figure in the DNA of British design. We create things that feel traditional and also unexpected at the same time. All of our designs start out as a drawing, painting, collage or sketch. It’s a really interesting melting pot of ideas and design approaches.”
Herbert admits it was “mind-blowing” for him to see how Microsoft created the Bloom flower that’s come to symbolize Windows 11.
Established in 1875, Liberty has seen many trends come and go. But fashion and design being cyclical, the company has been able to ride many waves and stay relevant.
“What’s magical about their brand is a 90-year-old and a 19-year-old can wear the same scarf in a different way with the same print, and it works,” Hsu says. “From the design standpoint, we had always been inspired by Liberty and what they do, from their craftsmanship to brand ethos.”
It’s a proud tradition that Herbert grew to appreciate even more after discovering the vastness of Liberty’s archives, which reveal different eras of the company’s design history.
“Whatever we create now goes back into that archive,” Herbert says. “We look at designs from 100 years ago and we redraw them and bring them back to life. I love to think that in 100 years, Liberty designers will look at what we did now and they might revisit our work and use it in a totally different way.”
Liberty and Microsoft’s relationship began in 2019, when Hsu visited Liberty’s headquarters in London to find design inspiration in the brand’s well-known floral patterns. At that time, he and Herbert – who hit it off almost immediately – focused on the Washington state flower, the rhododendron, in trying to make an initial connection between the brands.
The pandemic put a pause on the project, but it reinforced something that became integral to developing this special edition: the prominence of PCs in daily life.
“PCs and devices have become even more personal to people,” Hsu says. “During the pandemic, your PC never went away. It was always on your desk, always on your table. Everything in your personal and professional life came through that portal, which keeps us connected in trying times.”
For Herbert, his interaction with technology during the pandemic pushed him out of his comfort zone. Suddenly he had to work and communicate with his team while they were scattered all over London, when previously they put together design presentations with boards that had physical fabric swatches attached to them.
In this episode of “Digital Now,” Penelope Prett, the chief information, data and analytics officer at global professional services company Accenture, explains why her tech-savvy workforce, one of the largest in the world, expects tech to be not just ready to go at the point of service and accessible from everywhere, but also to be a delightful experience that supports engagement.
“The thing that’s great about it is when you have a technology-literate population … they are willing to play with you,” she says. “So you can try new things, you can push the edge of the envelope. There’s a much higher tolerance for new things, for failure and for learning.”
“Digital Now” is a video series hosted by Andrew Wilson, chief digital officer at Microsoft, who invites friends and industry leaders inside and outside of Microsoft to share how they are tackling digital and business transformation, and explores themes like the future of work, security, artificial intelligence, and the democratization of code and data.
Prett and Wilson also discuss how the democratization of technology presents new opportunities for business growth, but still needs guardrails in place to address security and usability. At Accenture, says Prett, democratization is celebrated, but user experience, data and security will always remain under the protection of the IT organization.
Microsoft’s longest-running franchise has inspired and captivated aviation enthusiasts and professionals throughout the world for 40 years.
Today we celebrate the exciting history of aviation with the release of the Microsoft Flight Simulator 40th Anniversary Edition, the most advanced version of this beloved franchise yet. Among the many features included in this update is a true-to-life airliner, the Airbus A310-300, rendered with stunning accuracy. The 40th Anniversary Edition also features, for the first time since the platform’s 2006 release, helicopters and gliders that perform with amazing life-like realism.
We’re also introducing seven renowned historical aircraft: the 1903 Wright Flyer, the 1915 Curtiss JN-4 Jenny, the 1927 Ryan NYP Spirit of St. Louis, the 1935 Douglas DC-3, the beautiful 1937 Grumman G-21 Goose, the 1947 de Havilland DHC-2 Beaver, and the famous 1947 Hughes H-4 Hercules “Spruce Goose,” the largest seaplane and wooden aircraft ever built.
We have also added four classic airports, including Meigs Field in Chicago, a traditional home airport for the Microsoft Flight Simulator franchise.
It is an incredibly exciting update celebrating aviation history, introducing significant technical advancements in flight dynamics and simulation, and featuring two new types of aircraft (gliders and helicopters) — all to delight our community and showcase the beauty and the thrill of flight!
In summary, the Microsoft Flight Simulator 40th Anniversary Edition delivers the following brand-new content:
1 true-to-life Airbus A310 airliner
2 helicopters and 14 heliports
2 gliders and 15 glider airports
7 famous historical aircraft including the Hughes H-4 Hercules (also known as the Spruce Goose)
4 classic commercial airports
24 classic missions from the franchise’s past
Test your piloting skills against the challenges of riding thermals in an unpowered glider and controlling rotor-wing aircraft over dense urban cityscapes, all within improved real-time atmospheric simulation and live weather in a dynamic and vibrant world. Create your flight plan to anywhere on the planet. Join us in celebrating the award-winning franchise with the Microsoft Flight Simulator 40th Anniversary Edition, loaded with all-new features, aircraft, and content that span the history of aviation. The sky is calling!
Check out the Microsoft Flight Simulator 40th Anniversary Edition today, available as a free update for existing players. For new simmers, the 40th Anniversary Edition is the perfect entry point to the franchise.
Microsoft Flight Simulator 40th Anniversary Edition is available for Xbox Series X|S and PC with Xbox Game Pass, PC Game Pass, Windows, and Steam, and on Xbox One and supported mobile phones, tablets, and lower-spec PCs via Xbox Cloud Gaming.
For the latest information on Microsoft Flight Simulator, stay tuned to @MSFSOfficial on Twitter.
Microsoft Flight Simulator is the next generation of one of the most beloved simulation franchises. From light planes to wide-body jets, fly highly detailed and stunning aircraft in an incredibly realistic world. Create your flight plan and fly anywhere on the planet. Enjoy flying day or night and face realistic, challenging weather conditions.
At CyberWarCon 2022, Microsoft and LinkedIn analysts presented several sessions detailing analysis across multiple sets of actors and related activity. This blog is intended to summarize the content of the research covered in these presentations and demonstrates Microsoft Threat Intelligence Center’s (MSTIC) ongoing efforts to track threat actors, protect customers from the associated threats, and share intelligence with the security community.
The CyberWarCon sessions summarized below include:
“They are still berserk: Recent activities of BROMINE” – a lightning talk covering MSTIC’s analysis of BROMINE (aka Berserk Bear), recent observed activities, and potential changes in targeting and tactics.
“The phantom menace: A tale of Chinese nation-state hackers” – a deep dive into several of the Chinese nation-state actor sets, their operational security patterns, and case studies on related tactics, techniques, and procedures (TTPs).
“ZINC weaponizing open-source software” – a lightning talk on MSTIC and LinkedIn’s analysis of ZINC, a North Korea-based actor. This was their first public joint presentation, demonstrating collaboration between MSTIC and LinkedIn’s threat intelligence teams.
MSTIC consistently tracks threat actor activity, including the groups discussed in this blog, and works across Microsoft Security products and services to build detections and improve customer protections. As with any observed nation-state actor activity, Microsoft has directly notified customers that have been targeted or compromised, providing them with the information they need to help secure their accounts. Microsoft uses DEV-#### designations as temporary names for unknown, emerging, or developing clusters of threat activity, allowing MSTIC to track each as a unique set of information until we reach high confidence about the origin or identity of the actor behind the activity. Once those criteria are met, a DEV is converted to a named actor.
They are still berserk: Recent activities of BROMINE
BROMINE overlaps with the threat group publicly tracked as Berserk Bear. In our talk, MSTIC provided insights into the actor’s recent activities observed by Microsoft. Some of the recent activities presented include:
Targeting and compromise of dissidents, political opponents, Russian citizens, and foreign diplomats. These activities have spanned multiple methods and techniques, ranging from the use of a custom malicious capability to credential phishing leveraging consumer mail platforms. In some cases, MSTIC has identified the abuse of Azure free trial subscriptions and worked with the Azure team to quickly take action against the abuse.
Continued targeting of organizations in the manufacturing and industrial technology space. These sectors have been continuous targets of the group for years and represent one of the most durable interests.
An opportunistic campaign focused on exploiting datacenter infrastructure management interfaces, likely for the purpose of access to technical information of value.
Targeting and compromise of diplomatic sector organizations focused on personnel assigned to Eastern Europe.
Compromise of a Ukrainian nuclear safety organization previously referenced in our June 2022 Special Report on Defending Ukraine (https://aka.ms/ukrainespecialreport).
Overall, our findings continue to demonstrate that BROMINE is an elusive threat actor with a variety of potential objectives, yet sporadic insights from various organizations, including Microsoft, demonstrate there is almost certainly more to find. Additionally, our observations show that, as a technology platform provider, Microsoft can use threat intelligence to protect both enterprises and consumers and to disrupt threat activity affecting our customers.
The phantom menace: A tale of China-based nation-state hackers
Over the past few years, MSTIC has observed a gradual evolution of the TTPs employed by China-based threat actors. At CyberWarCon 2022, Microsoft analysts presented their analysis of these trends in Chinese nation-state actor activity, covering:
Information about new tactics that these threat actors have adopted to improve their operational security, as well as a deeper look into their techniques, such as leveraging vulnerable SOHO devices for obfuscating their operations.
Three different case studies, including China-based DEV-0401 and nation-state threat actors GALLIUM and DEV-0062, walking through (a) the initial vector (compromise of public-facing application servers, with the actors showing rapid adoption of proofs of concept for vulnerabilities in an array of products), (b) how these threat actors maintained persistence on the victims (some groups dropping web shells, backdoors, or custom malware), and (c) the objectives of their operations: intelligence collection for espionage.
A threat landscape overview of the top five industries these actors have targeted: governments worldwide, nongovernmental organizations (NGOs) and think tanks, communication infrastructure, information technology (IT), and financial services, displaying the global nature of China’s cyber operations over the span of one year.
As demonstrated in the presentation, China-based threat actors have targeted entities nearly worldwide, employing varied techniques and methodologies that make attribution increasingly difficult. Microsoft analysts assess that China’s cyber operations will continue to track the country’s geopolitical agenda, likely continuing to use some of the techniques mentioned in the presentation to conduct intelligence collection. The graphic below illustrates how quickly we observe China-based threat actors and others exploiting zero-day vulnerabilities, and how quickly those exploits become broadly available in the wild.
ZINC weaponizing open-source software
In this talk, Microsoft and LinkedIn analysts detailed recent activity of a North Korea-based nation-state threat actor we track as ZINC. Analysts presented the findings of their investigation (previously covered in this blog) and walked through the series of observed ZINC attacks that targeted 125 different victims spanning 34 countries, noting the attacks appear to be motivated by traditional cyber-espionage and theft of personal and corporate data. A few highlights include:
In September 2022, Microsoft disclosed detection of a wide range of social engineering campaigns using weaponized legitimate open-source software. MSTIC observed activity targeting employees in organizations across multiple industries including media, defense and aerospace, and IT services in the US, UK, India, and Russia.
Based on the observed tradecraft, infrastructure, tooling, and account affiliations, MSTIC attributes this campaign with high confidence to ZINC, a state-sponsored group based out of North Korea with objectives focused on espionage, data theft, financial gain, and network destruction.
When analyzing the data from an industry sector perspective, we observed that ZINC chose to deliver malware most likely to succeed in a specific environment, for example, targeting IT service providers with terminal tools and targeting media and defense companies with fake job offers to be loaded into weaponized PDF readers.
ZINC has successfully compromised numerous organizations since June 2022, when the actor began employing traditional social engineering tactics by initially connecting with individuals on LinkedIn to establish a level of trust with their targets.
Upon successful connection, ZINC encouraged continued communication over WhatsApp, which acted as the delivery channel for their malicious payloads. MSTIC observed ZINC weaponizing a wide range of open-source software for these attacks, including PuTTY, KiTTY, TightVNC, Sumatra PDF Reader, and the muPDF/Subliminal Recording software installer. ZINC was also observed attempting to move laterally across victim networks and exfiltrate collected information.
As the threat landscape continues to evolve, Microsoft strives to continuously improve security for all, through collaboration with customers and partners and by sharing our research with the larger security community. We would like to extend our thanks to CyberWarCon and LinkedIn for their community partnership.
When legendary computer scientist Jim Gray accepted the Turing Award in 1999, he laid out a dozen long-range information technology research goals. One of those goals called for the creation of trouble-free server systems or, in Gray’s words, to “build a system used by millions of people each day and yet administered and managed by a single part-time person.”
Gray envisioned a self-organizing “server in the sky” that would store massive amounts of data, and refresh or download data as needed. Today, with the emergence and rapid advancement of artificial intelligence (AI), machine learning (ML) and cloud computing, and Microsoft’s development of Cloud Intelligence/AIOps, we are closer than we have ever been to realizing that vision—and moving beyond it.
Over the past fifteen years, the most significant paradigm shift in the computing industry has been the migration to cloud computing, which has created unprecedented digital transformation opportunities and benefits for business, society, and human life.
The implication is profound: cloud computing platforms have become part of the world’s basic infrastructure. As a result, the non-functional properties of cloud computing platforms, including availability, reliability, performance, efficiency, security, and sustainability, have become immensely important. Yet the distributed nature, massive scale, and high complexity of cloud computing platforms—ranging from storage to networking, computing and beyond—present huge challenges to building and operating such systems.
What is Cloud Intelligence/AIOps?
Cloud Intelligence/AIOps (“AIOps” for brevity) aims to innovate AI/ML technologies to help design, build, and operate complex cloud platforms and services at scale—effectively and efficiently.
AIOps has three pillars, each with its own goal:
AI for Systems to make intelligence a built-in capability to achieve high quality, high efficiency, self-control, and self-adaptation with less human intervention.
AI for Customers to leverage AI/ML to create unparalleled user experiences and achieve exceptional user satisfaction using cloud services.
AI for DevOps to infuse AI/ML into the entire software development lifecycle to achieve high productivity.
Where did the research on AIOps begin?
Gartner, a leading industry analyst firm, first coined the term AIOps (Artificial Intelligence for IT Operations) in 2017. According to Gartner, AIOps is the application of machine learning and data science to IT operation problems. While Gartner’s AIOps concept focuses only on DevOps, Microsoft’s Cloud Intelligence/AIOps research has a much broader scope, including AI for Systems and AI for Customers.
The broader scope of Microsoft’s Cloud Intelligence/AIOps stems from the Software Analytics research we proposed in 2009, which seeks to enable software practitioners to explore and analyze data to obtain insightful and actionable information for data-driven tasks related to software and services. We started to focus our Software Analytics research on cloud computing in 2014 and named this new topic Cloud Intelligence (Figure 1). In retrospect, Software Analytics is about the digital transformation of the software industry itself, such as empowering practitioners to use data-driven approaches and technologies to develop software, operate software systems, and engage with customers.
Figure 1: From Software Analytics to Cloud Intelligence/AIOps
What is the AIOps problem space?
There are many scenarios around each of the three pillars of AIOps. Some example scenarios include predictive capacity forecasting for efficient and sustainable services, monitoring service health status, and detecting health issues in a timely manner in AI for Systems; ensuring code quality and preventing defective builds from being deployed into production in AI for DevOps; and providing effective customer support in AI for Customers. Across all these scenarios, there are four major problem categories that, taken together, constitute the AIOps problem space: detection, diagnosis, prediction, and optimization (Figure 2). Specifically, detection aims to identify unexpected system behaviors (or anomalies) in a timely manner. Given a symptom and its associated artifacts, the goal of diagnosis is to localize the cause of service issues and find the root cause. Prediction attempts to forecast system behaviors, customer workload patterns, DevOps activities, and so on. Lastly, optimization tries to identify the optimal strategies or decisions required to achieve certain performance targets related to system quality, customer experience, and DevOps productivity.
Figure 2: Problems and challenges of AIOps
Each problem has its own challenges. Take detection as an example. To ensure service health at runtime, it is important for engineers to continuously monitor various metrics and detect anomalies in a timely manner. In the development process, to ensure the quality of the continuous integration/continuous delivery (CI/CD) practice, engineers need to create mechanisms to catch defective builds and prevent them from being deployed to other production sites.
Both scenarios require timely detection, and in both there are common challenges for conducting effective detection. For example, time series data and log data are the most common data forms. Yet they are often multi-dimensional, there may be noise in the data, and they often have different detection requirements—all of which can pose significant challenges to reliable detection.
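To make the detection problem concrete, here is a deliberately minimal sketch of timely anomaly detection on a single metric. Production detectors must cope with multi-dimensional, noisy telemetry and per-signal requirements, as noted above; the window size and threshold below are arbitrary illustrative choices.

```python
# Minimal illustrative sketch: a rolling z-score detector for one metric.
# Real AIOps detectors handle multi-dimensional, noisy telemetry; this only
# shows the basic idea of flagging points that deviate from recent history.
from collections import deque

def detect_anomalies(series, window=10, threshold=3.0):
    """Flag indices whose value lies more than `threshold` standard
    deviations from the mean of the preceding `window` points."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(series):
        if len(history) == window:
            mean = sum(history) / window
            std = (sum((x - mean) ** 2 for x in history) / window) ** 0.5
            if std > 0 and abs(value - mean) / std > threshold:
                anomalies.append(i)
        history.append(value)
    return anomalies

# A slightly noisy latency series with one spike at index 20.
latency = [99, 101] * 10 + [500] + [99, 101] * 3
print(detect_anomalies(latency))  # only the spike is flagged
```

A real detector would also adapt its threshold over time and suppress alerts during known maintenance windows, which is where the learning-based approaches discussed in this post come in.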
Microsoft Research: Our AIOps vision
Microsoft is conducting continuous research in each of the AIOps problem categories. Our goal for this research is to empower cloud systems to be more autonomous, more proactive, more manageable, and more comprehensive across the entire cloud stack.
Making cloud systems more autonomous
AIOps strives to make cloud systems more autonomous, to minimize human operations and rule-based decisions, which significantly helps reduce user impact caused by system issues, make better operation decisions, and reduce maintenance cost. This is achieved by automating DevOps as much as possible, including build, deployment, monitoring, and diagnosis. For example, the purpose of safe deployment is to catch a defective build early to prevent it from rolling out to production and resulting in significant customer impact. It can be extremely labor intensive and time consuming for engineers, because anomalous behaviors have a variety of patterns that may change over time, and not all anomalous behaviors are caused by a new build, which may introduce false positives.
At Microsoft Research, we used transfer learning and active learning techniques to develop a safe deployment solution that overcomes these challenges. We’ve been running the solution in Microsoft Azure, and it has been highly effective at helping to catch defective builds, achieving more than 90% precision and near 100% recall in production over a period of 18 months.
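For context on those figures, precision and recall for a defective-build detector are computed as below. The build labels in this example are made up for illustration; they are not from the production system.

```python
# Illustrative sketch: evaluating a safe-deployment detector that labels
# builds as defective (True) or healthy (False). Labels are hypothetical.

def precision_recall(predicted, actual):
    """predicted/actual: parallel lists of booleans, True = defective."""
    tp = sum(1 for p, a in zip(predicted, actual) if p and a)
    fp = sum(1 for p, a in zip(predicted, actual) if p and not a)
    fn = sum(1 for p, a in zip(predicted, actual) if not p and a)
    precision = tp / (tp + fp) if tp + fp else 0.0  # flagged builds truly bad
    recall = tp / (tp + fn) if tp + fn else 0.0     # bad builds actually caught
    return precision, recall

predicted = [True, True, True, False, False]  # detector verdicts
actual    = [True, True, False, False, False] # ground truth
p, r = precision_recall(predicted, actual)
print(f"precision={p:.2f} recall={r:.2f}")
```

High recall matters most here: a missed defective build reaches production, while a false positive only costs an engineer a review.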
Root cause analysis is another way that AIOps is reducing human operations in cloud systems. To shorten the mitigation time, engineers in cloud systems must quickly identify the root causes of emerging incidents. Owing to the complex structure of cloud systems, however, incidents often contain only partial information and can be triggered by many services and components simultaneously, which forces engineers to spend extra time diagnosing the root causes before any effective actions can be taken. By leveraging advanced contrast-mining algorithms, we have implemented autonomous incident-diagnosis systems, including HALO and Outage Scope, to reduce response time and increase accuracy in incident diagnosis tasks. These systems have been integrated in both Azure and Microsoft 365 (M365), which has considerably improved engineers’ ability to handle incidents in cloud systems.
Making cloud systems more proactive
AIOps makes cloud systems more proactive by introducing the concept of proactive design. In the design of a proactive system, an ML-based prediction component is added to the traditional system. The prediction system takes the input signals, does the necessary processing, and outputs the future status of the system: for example, what the capacity status of cluster A will look like next week, whether a disk will fail in a few days, or how many virtual machines (VMs) of a particular type will be needed in the next hour.
Knowing the future status makes it possible for the system to proactively avoid negative system impacts. For example, engineers can live migrate the services on an unhealthy computing node to a healthy one to reduce VM downtime, or pre-provision a certain number of VMs of a particular type for the next hour to reduce the latency of VM provisioning. In addition, AI/ML techniques can enable systems to learn over time which decision to make.
As an example of proactive design, we built a system called Narya, which proactively mitigates potential hardware failures to reduce service interruptions and minimize customer impact. Narya, which is in production in Microsoft Azure, predicts hardware failures and uses a bandit algorithm to decide which mitigation action to take.
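The bandit-driven action selection can be illustrated with a minimal epsilon-greedy sketch. Narya’s actual algorithm, action set, and reward signal are not detailed here, so the action names and rewards below are hypothetical.

```python
# Illustrative epsilon-greedy bandit for choosing a mitigation action.
# Action names and rewards are hypothetical; Narya's real algorithm and
# action set are more sophisticated.
import random

class EpsilonGreedyMitigator:
    def __init__(self, actions, epsilon=0.1, seed=0):
        self.actions = list(actions)
        self.epsilon = epsilon
        self.counts = {a: 0 for a in self.actions}    # times each action was tried
        self.totals = {a: 0.0 for a in self.actions}  # cumulative reward per action
        self.rng = random.Random(seed)

    def choose(self):
        # Explore with probability epsilon; otherwise exploit the action with
        # the best average reward (untried actions rank first, so each is tried).
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.actions)
        return max(self.actions, key=lambda a:
                   self.totals[a] / self.counts[a] if self.counts[a] else float("inf"))

    def update(self, action, reward):
        # Reward could be, e.g., the negative of VM downtime the action caused.
        self.counts[action] += 1
        self.totals[action] += reward

mitigator = EpsilonGreedyMitigator(["live_migrate", "soft_reboot", "mark_unallocatable"])
```

After each predicted failure, the system would call `mitigator.choose()`, observe the outcome, and feed it back via `mitigator.update()`, gradually learning which mitigation works best per failure type.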
Making cloud systems more manageable
AIOps makes cloud systems more manageable by introducing the notion of tiered autonomy. Each tier represents a set of operations that require a certain level of human expertise and intervention. These tiers range from the top tier of autonomous routine operations to the bottom tier, which requires deep human expertise to respond to rare and complex problems.
AI-driven automation often cannot handle such problems. By building AIOps solutions targeted at each tier, we can make cloud platforms easier to manage across the long tail of rare problems that inevitably arise in complex systems. Furthermore, the tiered design ensures that autonomous systems are developed from the start to evaluate certainty and risk, and that they have safe fallbacks when automation fails or the platform faces a previously unseen set of circumstances, such as the unforeseen increase in demand in 2020 due to the COVID-19 pandemic.
As an example of tiered autonomy, we built Safe On-Node Learning (SOL), a framework for safe learning and actuation on server nodes for the top tier. As another example, we are exploring how to predict the commands that operators should perform to mitigate incidents, while considering the associated certainty and risks of those commands when the top-tier automation fails to prevent the incidents.
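The tiered design described above can be sketched as a simple dispatcher that routes an incident by the automation’s certainty and the risk of acting. The thresholds and tier labels here are hypothetical, for illustration only.

```python
# Illustrative tiered-autonomy dispatcher: automation acts only when it is
# confident and the action is low-risk; otherwise the incident escalates to
# humans. Thresholds and tier labels are hypothetical.

def dispatch(confidence, risk, confidence_floor=0.95, risk_ceiling=0.2):
    """Pick the autonomy tier for a proposed mitigation, given the model's
    confidence in it and the estimated blast radius (risk) of applying it."""
    if confidence >= confidence_floor and risk <= risk_ceiling:
        return "tier 1: execute autonomously"
    if confidence >= 0.7:
        return "tier 2: recommend to operator for approval"
    return "tier 3: escalate to on-call engineer"

print(dispatch(confidence=0.99, risk=0.05))  # routine, safe: automated
print(dispatch(confidence=0.99, risk=0.50))  # confident but risky: human approves
print(dispatch(confidence=0.30, risk=0.90))  # rare, complex: deep expertise needed
```

The key design point is the safe fallback: when certainty drops or risk rises, the system degrades gracefully to human control rather than acting blindly.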
Making AIOps more comprehensive across the cloud stack
AIOps can also be made more comprehensive by spanning the cloud stack—from the lowest infrastructure layers (such as network and storage) through the service layer (such as the scheduler and database) and on to the application layer. The benefit of applying AIOps more broadly would be a significant increase in the capability for holistic diagnosis, optimization, and management.
Microsoft services built on top of Azure are called first-party (1P) services. A 1P setting, which is often used to optimize system resources, is particularly suited to a more comprehensive approach to AIOps. This is because with the 1P setting a single entity has visibility into, and control over, the layers of the cloud stack, which enables engineers to amplify the AIOps impact. Examples of 1P services at Microsoft include large and established services such as Office 365, relatively new but sizeable services such as Teams, and up-and-coming services such as Windows 365 Cloud PC. These 1P services typically account for a significant share of resource usage, such as wide-area network (WAN) traffic and compute cores.
As an example of applying a more comprehensive AIOps approach to the 1P setting, the OneCOGS project, which is a joint effort of Azure, M365, and MSR, considers three broad opportunities for optimization:
Modeling users and their workload using signals cutting across the layers—such as using the user’s messaging activity versus fixed working hours to predict when a Cloud PC user will be active—thereby increasing accuracy and enabling appropriate allocation of system resources.
Jointly optimizing the application and the infrastructure to achieve cost savings and more.
Taming the complexity of data and configuration, thereby democratizing AIOps.
The AIOps methodologies, technologies and practices used for cloud computing platforms and 1P services are also applicable to third-party (3P) services on the cloud stack. To achieve this, further research and development are needed to make AIOps methods and techniques more general and/or easily adaptable. For example, when operating cloud services, detecting anomalies in multi-dimensional space and the subsequent fault localization are common monitoring and diagnosis problems.
Motivated by the real-world needs of Azure and M365, we proposed the technique AiDice, which automatically detects anomalies in multi-dimensional space, and HALO, a hierarchy-aware approach to locating fault-indicating combinations that uses telemetry data collected from cloud systems. In addition to deploying AiDice and HALO in Azure and M365, we’re also collaborating with product team partners to make AiDice and HALO AIOps services that can be leveraged by third-party services.
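A much-simplified, one-dimensional version of the contrast idea behind such fault localization is sketched below. HALO’s real hierarchy-aware search examines attribute *combinations*; this sketch only ranks single attribute values, and the telemetry records are made up.

```python
# Much-simplified sketch of contrast-based fault localization: among
# attribute values (region, hardware SKU, ...), find the one whose failure
# rate stands out most from the overall rate. HALO's hierarchy-aware search
# over attribute combinations is far more involved. Data is hypothetical.
from collections import defaultdict

def localize(records):
    """records: list of (attributes_dict, failed_bool) telemetry entries."""
    overall = sum(failed for _, failed in records) / len(records)
    stats = defaultdict(lambda: [0, 0])  # (attr, value) -> [failures, total]
    for attrs, failed in records:
        for key_value in attrs.items():
            stats[key_value][0] += failed
            stats[key_value][1] += 1
    # Rank attribute values by how far their failure rate exceeds the overall rate.
    def contrast(kv):
        failures, total = stats[kv]
        return failures / total - overall
    return max(stats, key=contrast)

records = (
    [({"region": "eu", "sku": "v4"}, True)] * 8 +
    [({"region": "eu", "sku": "v3"}, False)] * 10 +
    [({"region": "us", "sku": "v4"}, True)] * 1 +
    [({"region": "us", "sku": "v3"}, False)] * 10
)
print(localize(records))  # sku "v4" fails everywhere it appears
```

Here every failure involves SKU "v4" regardless of region, so the SKU, not the region, is flagged as the fault-indicating dimension.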
Conclusion
AIOps is a rapidly emerging technology trend and an interdisciplinary research direction across the systems, software engineering, and AI/ML communities. With years of research on Cloud Intelligence, Microsoft Research has built up rich technology assets in detection, diagnosis, prediction, and optimization. And through close collaboration with Azure and M365, we have deployed some of our technologies in production, which has created significant improvements in the reliability, performance, and efficiency of Azure and M365 while increasing the productivity of developers working on these products. In addition, we are collaborating with colleagues in academia and industry to promote AIOps research and practice. For example, through these joint efforts we have organized three editions of the AIOps Workshop at the premier academic conferences AAAI 2020, ICSE 2021, and MLSys 2022.
Moving forward, we believe that as a new dimension of innovation, Cloud Intelligence/AIOps will play an increasingly important role in making cloud systems more autonomous, more proactive, more manageable, and more comprehensive across the entire cloud stack. Ultimately, Cloud Intelligence/AIOps will help us make our vision for the future of the cloud a reality.
This post was co-authored by Jyothi Venkatesh, Senior Product Manager, Azure HPC and Fanny Ou, Technical Program Manager, Azure HPC.
The next generation of purpose-built Azure HPC virtual machines
Today, we are excited to announce two new virtual machines (VMs) that deliver more performance, value-adding innovation, and cost-effectiveness to every Azure HPC customer. The all-new HX-series and HBv4-series VMs are coming soon to the East US region, and thereafter to the South Central US, West US3, and West Europe regions. These new VMs are optimized for a variety of HPC workloads such as computational fluid dynamics (CFD), finite element analysis, frontend and backend electronic design automation (EDA), rendering, molecular dynamics, computational geoscience, weather simulation, AI inference, and financial risk analysis.
Innovative technologies to help HPC customers where it matters most
HX and HBv4 VMs are packed with new and innovative technologies that maximize performance and minimize total HPC spend, including:
4th Gen AMD EPYC™ processors (Preview, Q4 2022).
Upcoming AMD EPYC processors, codenamed “Genoa-X” (general availability in 1H 2023).
800 GB/s of DDR5 memory bandwidth (STREAM TRIAD).
400 Gb/s NVIDIA Quantum-2 CX7 InfiniBand, the first on the public cloud.
80 Gb/s Azure Accelerated Networking.
PCIe Gen4 NVMe SSDs delivering 12 GB/s (read) and 7 GB/s (write) of storage bandwidth.
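For reference, the STREAM TRIAD figure cited above comes from a simple memory-bandwidth kernel, a[i] = b[i] + q*c[i]. A rough NumPy approximation is sketched below; the official STREAM benchmark is written in C/Fortran, and running this on an ordinary machine will of course report far less than 800 GB/s.

```python
# Rough NumPy approximation of the STREAM TRIAD kernel, a[i] = b[i] + q*c[i].
# The official STREAM benchmark is C/Fortran; NumPy's temporary for q*c makes
# this estimate conservative, and a laptop will show far less than 800 GB/s.
import time
import numpy as np

n = 5_000_000
b = np.random.rand(n)
c = np.random.rand(n)
q = 3.0

start = time.perf_counter()
a = b + q * c  # TRIAD: two array reads and one array write per element
elapsed = time.perf_counter() - start

# TRIAD counts 3 * n * 8 bytes of traffic: read b, read c, write a.
gb_moved = 3 * n * 8 / 1e9
print(f"approximate bandwidth: {gb_moved / elapsed:.1f} GB/s")
```

The 800 GB/s quoted for HBv4/HX reflects DDR5 across all memory channels of the node, measured with the tuned native benchmark rather than a sketch like this.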
Below are preliminary benchmarks from the preview of HBv4 and HX series VMs using 4th Gen AMD EPYC processors across several common HPC applications and domains. For comparison, performance information is also included from Azure’s most recent H-series (HBv3-series with Milan-X processors), as well as a 4-year-old HPC-optimized server commonly found in many on-premises datacenters (represented here by Azure HC-series with Skylake processors).
Figure 1: Performance comparison of HBv4/HX-series in Preview to HBv3-series and four-year-old server technology in an HPC-optimized configuration across diverse workloads and scientific domains.
HBv4-series brings performance leaps across a diverse set of HPC workloads
Azure HBv3 VMs with 3rd Gen AMD EPYC™ processors with AMD 3D V-cache™ Technology already deliver impressive levels of HPC performance, scaling MPI workloads up to 27x higher than other clouds, surpassing many of the leading supercomputers in the world, and offering the disruptive value proposition of faster time to solution with lower total cost. Unsurprisingly, the response from customers and partners has been phenomenal. With the introduction of HBv4 series VMs, Azure is raising the bar yet again—this time across an even greater diversity of memory performance-bound, compute-bound, and massively parallel workloads.
| VM Size | Physical CPU Cores | RAM (GB) | Memory Bandwidth (STREAM TRIAD) (GB/s) | L3 Cache/VM (MB) | FP64 Compute (TFLOPS) | InfiniBand RDMA Network (Gbps) |
|---|---|---|---|---|---|---|
| Standard_HB176rs_v4 | 176 | 688 | 800 | 768 | 6 | 400 |
| Standard_HB176-144rs_v4 | 144 | 688 | 800 | 768 | 6 | 400 |
| Standard_HB176-96rs_v4 | 96 | 688 | 800 | 768 | 6 | 400 |
| Standard_HB176-48rs_v4 | 48 | 688 | 800 | 768 | 6 | 400 |
| Standard_HB176-24rs_v4 | 24 | 688 | 800 | 768 | 6 | 400 |
Notes: 1) “r” denotes support for remote direct memory access (RDMA) and “s” denotes support for Premium SSD disks. 2) At General Availability, Azure HBv4 VMs will be upgraded to Genoa-X processors featuring 3D V-cache. Updated technical specifications for HBv4 will be posted at that time.
HX-series powers next generation silicon design
In Azure, we strive to deliver the best platform for silicon design, both now and far into the future. Azure HBv3 VMs, featuring 3rd Gen AMD EPYC processors with AMD 3D V-cache Technology, are a significant step toward this objective, offering the highest performance and total cost effectiveness in the public cloud for small- and medium-memory EDA workloads. With the introduction of HX-series VMs, Azure is enhancing its differentiation with a VM purpose-built for the even larger models that are becoming commonplace among chip designers targeting 3, 4, and 5 nanometer processes.
HX VMs will feature 3x more RAM than any prior H-series VM, up to nearly 60 GB of RAM per core, and constrained-cores VM sizes to help silicon design customers maximize the ROI of their per-core commercial licensing investments.
| VM Size | Physical CPU Cores | RAM (GB) | Memory/Core (GB) | L3 Cache/VM (MB) | Local SSD NVMe (TB) | InfiniBand RDMA Network (Gbps) |
|---|---|---|---|---|---|---|
| Standard_HX176rs | 176 | 1,408 | 8 | 768 | 3.6 | 400 |
| Standard_HX176-144rs | 144 | 1,408 | 10 | 768 | 3.6 | 400 |
| Standard_HX176-96rs | 96 | 1,408 | 15 | 768 | 3.6 | 400 |
| Standard_HX176-48rs | 48 | 1,408 | 29 | 768 | 3.6 | 400 |
| Standard_HX176-24rs | 24 | 1,408 | 59 | 768 | 3.6 | 400 |
Notes: 1) “r” denotes support for remote direct memory access (RDMA) and “s” denotes support for Premium SSD disks. 2) At General Availability, Azure HX VMs will be upgraded to Genoa-X processors featuring 3D V-cache. Updated technical specifications for HX will be posted at that time.
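As a quick arithmetic check, the Memory/Core values follow from spreading the full 1,408 GB of RAM over the active cores of each constrained-cores size:

```python
# Reproduce the HX-series Memory/Core (GB) figures: every size exposes the
# full 1,408 GB of RAM, so fewer active cores means more memory per core.
ram_gb = 1408
for cores in (176, 144, 96, 48, 24):
    print(f"{cores} cores -> {round(ram_gb / cores)} GB/core")
```

This is what makes the constrained-cores sizes attractive for per-core-licensed EDA workloads: the 24-core size keeps all the RAM and InfiniBand bandwidth while minimizing license count.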
400 Gigabit InfiniBand for supercomputing customers
HBv4 and HX VMs are Azure’s first to leverage 400 Gigabit NVIDIA Quantum-2 InfiniBand. This newest generation of InfiniBand brings greater support for the offload of MPI collectives, enhanced congestion control, and enhanced adaptive routing capabilities. Using the new HBv4 or HX-series VMs and only a standard Azure Virtual Machine Scale Set (VMSS), customers can scale CPU-based MPI workloads beyond 50,000 cores per job.
Continuous improvement for Azure HPC customers
Microsoft and AMD share a vision for a new era of high-performance computing in the cloud: one defined by constant improvements to the critical research and business workloads that matter most to our customers. Azure continues to collaborate with AMD to make this vision a reality by raising the bar on the performance, scalability, and value we deliver with every release of Azure H-series VMs.
Figure 2: Azure HPC Performance 2019 through 2022.
“We’re pleased to see Altair® AcuSolve®’s impressive linear scale-up on the HBv3 instances, showing up to 2.5 times speedup. Performance increases 12.83 times with an 8-node (512-core) configuration on 3rd Gen AMD EPYC™ processors, an excellent scale-up value for AcuSolve compared to the previous generation, delivering superior price performance. We welcome the addition of the new Azure HBv4 and HX-series virtual machines and look forward to pairing them with Altair software to the benefit of our joint customers.”
—Dr. David Curry, Senior Vice President, CFD and EDEM
“Customers in the HPC industry continue to demand higher performance and optimizations to run their most mission-critical and data-intensive applications. 4th Gen AMD EPYC processors provide breakthrough performance for HPC in the cloud, delivering impressive time to results for customers adopting Azure HX-series and HBv4-series VMs.”
“Ansys electronics, semiconductor, fluids, and structures customers demand more throughput out of their simulation tools to overcome challenges posed by product complexity and project timelines. Microsoft’s HBv3 virtual machines, featuring AMD’s 3rd Gen EPYC processors with 3D V-Cache, have been giving companies a great price/performance crossover point to support these multiphysics simulations on-demand and with very little IT overhead. We look forward to leveraging Azure’s next generation of HPC VMs featuring 4th Gen AMD EPYC processors, the HX and HBv4 series, to enable even greater simulation complexity and speed to help engineers reduce risk and meet time-to-market deadlines.”
—John Lee, Vice President and General Manager, Electronics and Semiconductor, Ansys
“We’ve helped thousands of customers combine the performance and scalability of the cloud, providing ease of use and instant access to our powerful computational software, which speeds the delivery of innovative designs. The two new high-performance computing virtual machines powered by the AMD Genoa processor on Microsoft Azure can provide our mutual customers with optimal performance as they tackle the ever-increasing demands of compute and memory capacity for gigascale, advanced-node designs.”
—Mahesh Turaga, Vice President, Cloud Business Development, Cadence
“Hexagon simulation software powers some of the most advanced engineering in the world. We’re proud to partner with Microsoft, and excited to pair our software with Azure’s new HBv4 virtual machines. During early testing in collaboration with the Azure HPC team, we have seen generational performance speedups of 400 percent when comparing structural simulations running on HBv3 and HX-series VMs. We look forward to seeing what our joint customers will do with this remarkable combination of software and hardware to advance their research and productivity, now and tomorrow. In the first quarter of 2023, we will be benchmarking heavy industrial CFD computations, leveraging multiple HBv4 virtual machines connected through InfiniBand.”
—Bruce Engelmann, CTO, Hexagon
“Microsoft Azure has once again raised the bar for HPC infrastructure platforms in the cloud, this time with the launch of Azure HBv4 and HX virtual machines based on AMD’s 4th Gen EPYC Genoa CPUs. We are expecting strong customer demand for HBv4 and are excited to offer it to our customers that would like to run CFD, EDA, or other types of HPC workloads in the cloud.”
—Mulyanto Poort, Vice President of HPC Engineering at Rescale
“Early testing by AMD with Siemens EDA workloads showed 15 percent to 22 percent improvements in runtimes with Microsoft Azure’s new AMD-based virtual machines compared to the previous generation. Semiconductor chip designers face a range of technical challenges that make hitting release dates extremely difficult. The combined innovation of AMD, Microsoft Azure, and Siemens provides a simplified path to schedule predictability through the increased performance possible with the latest offerings.”
“Customer adoption of the cloud for chip development is accelerating, driven by complexity and time-to-market advantages. The close collaboration between Synopsys and Microsoft brings together EDA and optimized compute to enable customers to scale under the Synopsys FlexEDA pay-per-use model. Verification represents a significant EDA workload in today’s complex SoCs and with the release of AMD’s next-generation EPYC processor available on Microsoft Azure, customers can take advantage of the optimized cache utilization and NUMA-aware memory layout techniques to achieve up to 2x verification throughput over previous generations.”
—Sandeep Mehndiratta, Vice President of Cloud at Synopsys