
Sea of Thieves ‘Cursed Sails’ free update to unleash skeleton ships on July 31

At last month’s E3 we gave players a tease of the upcoming Sea of Thieves content updates that we have planned for the summer. We’re thrilled to announce that the first of these summer updates – Cursed Sails – will be available for free to all Sea of Thieves players from July 31.

New Content To Include Skeleton Ship AI Enemies and the Brigantine Ship Designed For Three-Player Crews

As part of our ongoing commitment to deliver new Sea of Thieves content throughout the year, Cursed Sails will offer our biggest update yet at no charge for those who own the game or access it through Xbox Game Pass. Players will now be able to battle terrifying new skeleton ships, set sail in the new three-player Brigantine ship and form Alliances with other players to take on bigger challenges and share greater rewards.

Sea of Thieves Cursed Sails Skeleton Ship

This permanent new content will be introduced into the game through a special time-limited campaign, Cursed Sails. Offering players a range of unique rewards, this three-week campaign will see skeleton ships terrorise outposts and challenge pirates to do battle on the seas. A story-driven side quest will also allow players to investigate the source of the skeleton scourge. For a taste of what’s to come, watch our new trailer above.

This Cursed Sails campaign and content update will be available free of charge to all Sea of Thieves players who have bought the game across the Xbox One family of devices or on Windows 10 PC, or who have access to it as part of Xbox Game Pass. Once it goes live on July 31, simply download and install the latest Sea of Thieves update to get access. And don’t worry: if you’re unable to take part in the time-limited campaign, all the features it introduces will remain in the Sea of Thieves world for everyone to see and experience.

New to Sea of Thieves? Join over four million players at xbox.com/seaofthieves, and visit the Sea of Thieves website at SeaofThieves.com to embark with the community. See you out on the seas.


VentureBeat: ‘Microsoft’s AI for Earth Innovation Grant gives data scientists access to AI tools’

Microsoft and National Geographic are teaming up to support data scientists who are tackling the “world’s biggest challenges.” The two companies today announced the AI for Earth Innovation Grant program, a $1 million grant that’ll provide recipients financial assistance, access to AI tools and cloud services, and more to advance conservation research.

The grant program, which is accepting applications until October 8, will support between five and 15 projects in five core areas: agriculture, biodiversity, conservation, climate change, and water. In addition to funding, researchers will gain access to Microsoft’s AI platform and development tools, inclusion in the National Geographic Explorer community, and affiliation with National Geographic Labs, National Geographic’s research incubation and accelerator initiative.

“[I]n Microsoft, we found a partner that is well-positioned to accelerate the pace of scientific research and new solutions to protect our natural world,” Jonathan Baillie, chief scientist and executive vice president at the National Geographic Society, said in a statement. “With today’s announcement, we will enable outstanding explorers seeking solutions for a sustainable future with the cloud and AI technologies that can quickly improve the speed, scope, and scale of their work, as well as support National Geographic Labs’ activities around technology and innovation for a planet in balance.”

The aim is to make trained algorithms broadly available to the global community of environmental researchers, Lucas Joppa, Microsoft’s chief environmental scientist, said in a press release.

“Microsoft is constantly exploring the boundaries of what technology can do, and what it can do for people and the world,” Joppa said. “We believe that humans and computers, working together through AI, can change the way that society monitors, models, and manages Earth’s natural systems. We believe this because we’ve seen it — we’re constantly amazed by the advances our AI for Earth collaborators have made over the past months. Scaling this through National Geographic’s … network will create a whole new generation of explorers who use AI to create a more sustainable future for the planet and everyone on it.”

Selected recipients will be announced in December.

The AI for Earth Innovation Grant is an expansion of Microsoft’s AI for Earth program, announced in June 2017. In December, the Redmond company committed $50 million to an “extended strategic plan” that includes providing advanced training to universities and NGOs and the formation of a “multi-disciplinary” team of AI and sustainability experts.

Microsoft claims that in the past two years, the AI for Earth program has awarded more than 35 grants globally for access to its Azure platform and AI technologies.


Lilly strives to speed innovation with help from Microsoft 365 Enterprise


The nearly 40,000 employees of Eli Lilly and Company are on a mission to make medicines that help people live longer, healthier, and more active lives. But they know that developing new treatments for cancer, diabetes, and other debilitating diseases requires collaboration with the best minds working together to foster innovation.

That’s why Lilly takes a collaborative approach to discovering and developing new medicines—between lab researchers and the rest of the company—as well as with a global network of physicians, medical researchers, and healthcare organizations. Working together—creatively and efficiently—can help generate new ideas that fuel innovation. To bring together scientists across hundreds of locations and organizations and truly empower the workforce, Lilly selected Microsoft 365 Enterprise.

While Lilly is in the early stages of deployment, these cloud-based collaboration tools, including Microsoft Teams, are already making an impact. Mike Meadows, vice president and chief technology officer at Lilly, says the technology will enhance productivity and teamwork while helping to protect intellectual property:

“Collaboration tools like Microsoft Teams enhance our ability for researchers and other employees to work together in faster and more creative ways, advancing our promise to make life better through innovative medicines. Microsoft 365 helps us bring the best minds together while keeping data secure and addressing regulatory compliance requirements.”

Like enterprise customers across the globe, Lilly sees Microsoft 365 as a robust, intelligent productivity and collaboration solution that empowers employees to be creative and work together. And when deployment of Windows 10 is complete, employees across the company will advance a new culture of work where creative collaboration that sparks critical thinking and innovation happens anywhere, anytime.

At Microsoft, we’re humbled to play a role in helping Lilly make life better for people around the world.

—Ron Markezich


Microsoft and National Geographic form AI for Earth Innovation Grant partnership

New grant offering will support research and scientific discovery with AI technologies to advance agriculture, biodiversity conservation, climate change and water

REDMOND, Wash., and WASHINGTON, D.C. — July 16, 2018 — On Monday, Microsoft Corp. and National Geographic announced a new partnership to advance scientific exploration and research on critical environmental challenges with the power of artificial intelligence (AI). The newly created $1 million AI for Earth Innovation Grant program will provide award recipients with financial support, access to Microsoft cloud and AI tools, inclusion in the National Geographic Explorer community, and affiliation with National Geographic Labs, an initiative launched by National Geographic to accelerate transformative change and exponential solutions to the world’s biggest challenges by harnessing data, technology and innovation. Individuals and organizations working at the intersection of environmental science and computer science can apply today at https://www.nationalgeographic.org/grants/grant-opportunities/ai-earth-innovation/.

“National Geographic is synonymous with science and exploration, and in Microsoft we found a partner that is well-positioned to accelerate the pace of scientific research and new solutions to protect our natural world,” said Jonathan Baillie, chief scientist and executive vice president, science and exploration at the National Geographic Society. “With today’s announcement, we will enable outstanding explorers seeking solutions for a sustainable future with the cloud and AI technologies that can quickly improve the speed, scope and scale of their work as well as support National Geographic Labs’ activities around technology and innovation for a planet in balance.”

“Microsoft is constantly exploring the boundaries of what technology can do, and what it can do for people and the world,” said Lucas Joppa, chief environmental scientist at Microsoft. “We believe that humans and computers, working together through AI, can change the way that society monitors, models and manages Earth’s natural systems. We believe this because we’ve seen it — we’re constantly amazed by the advances our AI for Earth collaborators have made over the past months. Scaling this through National Geographic’s global network will create a whole new generation of explorers who use AI to create a more sustainable future for the planet and everyone on it.”

The $1 million AI for Earth Innovation Grant program will provide financial support to between five and 15 novel projects that use AI to advance conservation research toward a more sustainable future. The grants will support the creation and deployment of open-sourced trained models and algorithms that will be made broadly available to other environmental researchers, which offers greater potential to provide exponential impact.

Qualifying applications will focus on one or more of the core areas: agriculture, biodiversity conservation, climate change and water. Applications are open as of today and must be submitted by Oct. 8, 2018. Recipients will be announced in December 2018. Those who want more information and to apply can visit https://www.nationalgeographic.org/grants/grant-opportunities/ai-earth-innovation/.

About the National Geographic Society

The National Geographic Society is a leading nonprofit that invests in bold people and transformative ideas in the fields of exploration, scientific research, storytelling and education. The Society aspires to create a community of change, advancing key insights about the planet and probing some of the most pressing scientific questions of our time, all while ensuring that the next generation is armed with geographic knowledge and global understanding. Its goal is measurable impact: furthering exploration and educating people around the world to inspire solutions for the greater good. For more information, visit www.nationalgeographic.org.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For more information, press only:

Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777,

rrt@we-worldwide.com

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://news.microsoft.com/microsoft-public-relations-contacts.


Bloomberg: ‘How Amy Hood won back Wall Street and helped reboot Microsoft’



TextWorld, an open-source project for generating text-based games, can train and test AI agents

Today, fresh out of the Microsoft Research Montreal lab, comes an open-source project called TextWorld. TextWorld is an extensible Python framework for generating text-based games. Reinforcement learning researchers can use TextWorld to train and test AI agents in skills such as language understanding, affordance extraction, memory and planning, exploration and more. Researchers can study these in the context of generalization and transfer learning. TextWorld further runs existing text-based games, like the legendary Zork, for evaluating how well AI agents perform in complex, human-designed settings.

Figure 1 – Enter the world of TextWorld. Get the code at aka.ms/textworld.

Text-based games – also known as interactive fiction or adventure games – are games in which the play environment and the player’s interactions with it are represented solely or primarily via text. As players move through the game world, they observe textual descriptions of their surroundings (typically divided into discrete ‘rooms’), what objects are nearby, and any other pertinent information. Players issue text commands to an interpreter to manipulate objects, other characters in the game, or themselves. After each command, the game usually provides some feedback to inform players how that command altered the game environment, if at all. A typical text-based game poses a series of puzzles to solve, treasures to collect, and locations to reach. Goals and waypoints may be specified explicitly or may have to be inferred from cues.
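The command-and-feedback loop described above can be sketched as a toy interpreter. Everything here (room names, items, commands) is invented for illustration; this is a minimal sketch, not TextWorld code:

```python
# A toy text-based game interpreter: the world is a few 'rooms', and the
# player issues text commands that the interpreter maps to state changes
# plus textual feedback. All names here are invented for this sketch.

DIRECTIONS = {"north", "south", "east", "west"}

ROOMS = {
    "cellar": {"desc": "A damp cellar with stone walls.", "north": "hall", "items": ["lamp"]},
    "hall":   {"desc": "A dusty hall lit by torches.", "south": "cellar", "items": []},
}

def step(state, command):
    """Apply one text command to the game state; return textual feedback."""
    room = ROOMS[state["room"]]
    words = command.lower().split()
    if len(words) == 2 and words[0] == "go" and words[1] in DIRECTIONS and words[1] in room:
        state["room"] = room[words[1]]
        return ROOMS[state["room"]]["desc"]
    if len(words) == 2 and words[0] == "take" and words[1] in room["items"]:
        room["items"].remove(words[1])
        state["inventory"].append(words[1])
        return "You take the " + words[1] + "."
    return "Nothing happens."  # unrecognized or invalid commands still get feedback

state = {"room": "cellar", "inventory": []}
print(step(state, "take lamp"))   # -> You take the lamp.
print(step(state, "go north"))    # -> A dusty hall lit by torches.
print(step(state, "take sword"))  # -> Nothing happens.
```

Even this tiny sketch shows why the setting is hard for an agent: only a narrow set of well-formed commands changes the state, and everything else returns uninformative feedback.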

Figure 2 – An example game from TextWorld with a house-based theme.

Text-based games couple the freedom to explore a defined space with the restrictions of a parser and game world designed to respond positively to a relatively small set of textual commands. An agent that can competently navigate a text-based game needs not only to generate coherent textual commands but also to issue the right commands in the right order, with few to no mistakes in between. Text-based games encourage experimentation, and successful playthroughs involve multiple game losses and in-game “deaths.” Close observation, creative interpretation of the text the game provides, and a generous supply of common sense are also integral to winning. The relatively simple obstacles in a TextWorld game serve as an introduction to the basic challenges posed by human-designed text-based games. In TextWorld, an agent needs to learn how to observe, experiment, fail and learn from failure.

TextWorld has two main components: a game generator and a game engine. The game generator converts high-level game specifications, such as number of rooms, number of objects, game length, and winning conditions, into an executable game source code in the Inform 7 language. The game engine is a simple inference machine that ensures that each step of the generated game is valid by using simple algorithms such as one-step forward and backward chaining.
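The engine’s validity check can be illustrated as one step of forward chaining over a set of facts: each command corresponds to a rule whose preconditions must hold in the current state before its effects are applied. The predicates and rule below are invented for this sketch and are far simpler than TextWorld’s actual Inform 7 representation:

```python
# One-step forward chaining sketch: a game state is a set of logical
# facts, and a command fires a rule (preconditions, removed, added)
# only if all preconditions hold. Predicates here are invented.

FACTS = {("at", "player", "kitchen"), ("in", "apple", "kitchen")}

# command -> (preconditions, facts_removed, facts_added)
RULES = {
    "take apple": (
        {("at", "player", "kitchen"), ("in", "apple", "kitchen")},
        {("in", "apple", "kitchen")},
        {("holding", "player", "apple")},
    ),
}

def apply_rule(facts, command):
    """Fire the rule for `command` if its preconditions hold, else None."""
    pre, removed, added = RULES[command]
    if not pre <= facts:              # preconditions unsatisfied -> invalid step
        return None
    return (facts - removed) | added  # apply the rule's effects

new_facts = apply_rule(FACTS, "take apple")
```

Running the same rule twice fails the second time, because the first application consumed its precondition; chaining such steps forward (or backward from the winning condition) is what lets the generator guarantee a game is solvable.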

Figure 3 – An overview of the TextWorld architecture.

“One reason I’m excited about TextWorld is the way it combines reinforcement learning with natural language,” said Geoff Gordon, Principal Research Manager at Microsoft Research Montreal. “These two technologies are both really important, but they don’t fit together that well yet. TextWorld will push researchers to make them work in combination.” Gordon pointed out that reinforcement learning has had a number of high-profile successes recently (like Go or Ms. Pac-Man), but in all of these cases the agent has fairly simple observations and actions (for example, screen images and joystick positions in Ms. Pac-Man). In TextWorld, the agent has to both read and produce natural language, which has an entirely different and, in many cases, more complicated structure.

“I’m excited to see how researchers deal with this added complexity,” said Gordon.

Microsoft Research Montreal specializes in state-of-the-art research in machine reading comprehension, dialogue, reinforcement learning, and FATE (Fairness, Accountability, Transparency, and Ethics in AI). The lab was founded in 2015 as Maluuba and acquired by Microsoft in 2017. For more information, check out Microsoft Research Montreal.

This release of TextWorld is a beta and we are encouraging as much feedback as possible on the framework from fellow researchers across the world. You can send your feedback and questions to textworld@microsoft.com. Also, for more information and to get the code, check out TextWorld, and our related publications TextWorld: A Learning Environment for Text-based Games and Counting to Explore and Generalize in Text-based Games. Thank you!


Dell debuts ‘world’s most powerful 1U rack workstation’

Dell delivers compact solutions that pack a punch, including a 1U rack workstation and towers starting at $649.

Companies of all sizes and budgets looking for powerful, affordable, compact, industry-leading workstations now have new choices from Dell, including the world’s most powerful 1U rack workstation.¹

The Dell Precision 3930 Rack delivers powerful performance in a compact industrial footprint. Its 1U rack height delivers better rack density and extended operating temperatures, while features such as its short depth, dust filters, and legacy ports allow it to integrate seamlessly into complex medical imaging and industrial automation solutions.

Other features include:

  • The rack workstation provides up to 64 GB of 2666MHz DDR4 memory thanks to the introduction of Intel Xeon E processors and an 8th Generation Intel Core CPU.
  • The Intel Xeon E processor supports Error Correcting Code (ECC) for increased reliability.
  • The rack workstation offers best-in-class workstation performance and provides the flexibility of up to 250W of doublewide GPUs, and scalability with up to 24 TB of storage.
  • With 3 PCIe slots, plus an optional PCI slot, this workstation can tackle complex tasks with ease.
  • A range of NVIDIA Quadro professional GPUs are available. With the Quadro P6000, users benefit from 24GB of GDDR5X and powerful ultra-high-end graphics performance.
  • In addition, customers have the option to choose AMD Radeon Pro graphics.

If your company is looking for versatile, secure, and fast remote 1:1 user access, you can add optional Teradici PCOIP technology. The rack workstation effortlessly integrates into the datacenter, which helps reduce clutter at your desk.

A smaller footprint that doesn’t skimp on performance

Going small can lead to big things, so Dell has built these new entry-level workstations to fuel the future of innovation across engineering design, science, mathematics, and other data- and graphics-intensive fields. Running a highly powerful machine no longer requires having a large work space or a large budget, making this level of performance available to many companies and workers for the first time.

“Customers across Engineering & Manufacturing, Media & Entertainment, and beyond have come to rely on Dell workstations to deliver the highest performing systems for their critical workload. But as we enter the next era of workstations, the conversation is accelerating to immersive workflows utilizing even smaller footprints. Dell is leading the way in this evolution with these new entry-level workstations designed to deliver the ultimate in performance with a substantially smaller footprint,” says Rahul Tikoo, vice president and general manager of Dell Precision. “When access to leading technology improves, innovation flourishes. Sometimes something as simple as a smaller form factor can unleash new ideas and capabilities that have the power to reshape an industry.”

The Dell Precision 3630 Tower is 23 percent smaller² than the previous generation with more expandability, so workers can get the precise solution they need regardless of workspace constraints. It features a range of easy-to-reach ports that make it possible to connect to external data sources, storage devices, and more. It offers scalable storage featuring SATA and PCIe NVMe SSDs, which can be configured for up to 14 TB with RAID support.

As workstation users often create intellectual property, Dell will also offer an optional Smart Card (CAC/PIV) reader to make secure data management easier.

If you’re interested in creating or enjoying VR experiences and other resource-intensive tasks, this workstation is a good choice, thanks to 8th Generation Intel Core i and new professional-grade Xeon E processors with memory speeds up to 2666MHz and capacities up to 64 GB. It also offers up to 225W of NVIDIA Quadro and AMD Radeon Pro graphics support.

The new Dell Precision 3430 Small Form Factor Tower is a great fit for many workstation users, offering many of the same benefits as the Precision 3630, but in an even smaller form factor and up to 55W of graphics support. It’s also expandable with up to 6TB of storage with RAID support.

Dell also introduced support for Intel Core X-series processors in addition to the Intel Xeon W processor options already available on the Dell Precision 5820 Tower. These new processor options bring the enhanced performance and reliability of a workstation at a more affordable price point for customers.

Adding Intel Optane memory keeps responsiveness high and high-capacity storage costs lower on all these new Dell Precision 3000 series workstations. Customers can expect the same build quality and reliability of the Dell Precision line.

Available now: The Dell Precision 3430 Small Form Factor Tower and the Dell Precision 3630 Tower (both starting at $649) and the Dell Precision 5820 Tower workstation.

Available worldwide July 26, 2018: The Dell Precision 3930 Rack, which starts at $899.

¹ When equipped with Intel Xeon E-2186G processor, available 64GB of 2666MHz memory capacity, NVIDIA P6000 graphics and 2x M.2 PCIe storage. Based on Dell internal analysis of competitive workstation products as of July 2018.

² Gen-over-Gen claim based on internal testing, July 2018.


Facial recognition technology: The need for public regulation and corporate responsibility

All tools can be used for good or ill. Even a broom can be used to sweep the floor or hit someone over the head. The more powerful the tool, the greater the benefit or damage it can cause. The last few months have brought this into stark relief when it comes to computer-assisted facial recognition – the ability of a computer to recognize people’s faces from a photo or through a camera. This technology can catalog your photos, help reunite families or potentially be misused and abused by private companies and public authorities alike.

Facial recognition technology raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression. These issues heighten responsibility for tech companies that create these products. In our view, they also call for thoughtful government regulation and for the development of norms around acceptable uses. In a democratic republic, there is no substitute for decision making by our elected representatives regarding the issues that require the balancing of public safety with the essence of our democratic freedoms. Facial recognition will require the public and private sectors alike to step up – and to act.

We’ve set out below steps that we are taking, and recommendations we have for government regulation.

First, some context

Facial recognition technology has been advancing rapidly over the past decade. If you’ve ever seen a suggestion on Facebook or another social media platform to tag a face with a suggested name, you’ve seen facial recognition at work. A wide variety of tech companies, Microsoft included, have utilized this technology over the past several years to turn the time-consuming work of cataloging photos into something both instantaneous and useful.

So, what is changing now? In part it’s the ability of computer vision to get better and faster in recognizing people’s faces. In part this improvement reflects better cameras, sensors and machine learning capabilities. It also reflects the advent of larger and larger datasets as more images of people are stored online. This improvement also reflects the ability to use the cloud to connect all this data and facial recognition technology with live cameras that capture images of people’s faces and seek to identify them – in more places and in real time.

Advanced technology no longer stands apart from society; it is becoming deeply infused in our personal and professional lives. This means the potential uses of facial recognition are myriad. At an elementary level, you might use it to catalog and search your photos, but that’s just the beginning. Some uses are already improving security for computer users, like recognizing your face instead of requiring a password to access many Windows laptops or iPhones, and in the future a device like an automated teller machine.

Some emerging uses are both positive and potentially even profound. Imagine finding a young missing child by recognizing her as she is being walked down the street. Imagine helping the police to identify a terrorist bent on destruction as he walks into the arena where you’re attending a sporting event. Imagine a smartphone camera and app that tells a person who is blind the name of the individual who has just walked into a room to join a meeting.

But other potential applications are more sobering. Imagine a government tracking everywhere you walked over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first. This has long been the stuff of science fiction and popular movies – like “Minority Report,” “Enemy of the State” and even “1984” – but now it’s on the verge of becoming possible.

Perhaps as much as any advance, facial recognition raises a critical question: what role do we want this type of technology to play in everyday society?

The issues become even more complicated when we add the fact that facial recognition is advancing quickly but remains far from perfect. As reported widely in recent months, biases have been found in the performance of several fielded face recognition technologies. The technologies worked more accurately for white men than for white women and were more accurate in identifying persons with lighter complexions than people of color. Researchers across the tech sector are working overtime to address these challenges and significant progress is being made. But as important research has demonstrated, deficiencies remain. The relative immaturity of the technology is making the broader public questions even more pressing.

Even if biases are addressed and facial recognition systems operate in a manner deemed fair for all people, we will still face challenges with potential failures. Facial recognition, like many AI technologies, typically has some rate of error even when it operates in an unbiased way. And the issues relating to facial recognition go well beyond questions of bias, raising critical questions about our fundamental freedoms.

Politics meets Silicon Valley

In recent weeks, the politics of the United States have become more intertwined with these technology developments on the West Coast. One week in the middle of June put the issues raised by facial recognition technology in bold relief for me and other company leaders at Microsoft. As the country was transfixed by the controversy surrounding the separation of immigrant children from their families at the southern border, a tweet about a marketing blog Microsoft published in January quickly blew up on social media and sparked vigorous debate. The blog had discussed a contract with the U.S. Immigration and Customs Enforcement, or ICE, and said that Microsoft had passed a high security threshold; it included a sentence about the potential for ICE to use facial recognition.

We’ve since confirmed that the contract in question isn’t being used for facial recognition at all. Nor has Microsoft worked with the U.S. government on any projects related to separating children from their families at the border, a practice to which we’ve strongly objected. The work under the contract instead is supporting legacy email, calendar, messaging and document management workloads. This type of IT work goes on in every government agency in the United States, and for that matter virtually every government, business and nonprofit institution in the world. Some nonetheless suggested that Microsoft cancel the contract and cease all work with ICE.

The ensuing discussion has illuminated broader questions that are rippling across the tech sector. These questions are not unique to Microsoft. They surfaced earlier this year at Google and other tech companies. In recent weeks, a group of Amazon employees has objected to its contract with ICE, while reiterating concerns raised by the American Civil Liberties Union (ACLU) about law enforcement use of facial recognition technology. And Salesforce employees have raised the same issues related to immigration authorities and these agencies’ use of their products. Demands increasingly are surfacing for tech companies to limit the way government agencies use facial recognition and other technology.

These issues are not going to go away. They reflect the rapidly expanding capabilities of new technologies that increasingly will define the decade ahead. Facial recognition is the technology of the moment, but it’s apparent that other new technologies will raise similar issues in the future. This makes it even more important that we use this moment to get the direction right.

The need for government regulation

The only effective way to manage the use of technology by a government is for the government proactively to manage this use itself. And if there are concerns about how a technology will be deployed more broadly across society, the only way to regulate this broad use is for the government to do so. This in fact is what we believe is needed today – a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.

While we appreciate that some people today are calling for tech companies to make these decisions – and we recognize a clear need for our own exercise of responsibility, as discussed further below – we believe this is an inadequate substitute for decision making by the public and its representatives in a democratic republic. We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology. As a general principle, it seems more sensible to ask an elected government to regulate companies than to ask unelected companies to regulate such a government.

Such an approach is also likely to be far more effective in meeting public goals. After all, even if one or several tech companies alter their practices, problems will remain if others do not. The competitive dynamics between American tech companies – let alone between companies from different countries – will likely enable governments to keep purchasing and using new technology in ways the public may find unacceptable in the absence of a common regulatory framework.

It may seem unusual for a company to ask for government regulation of its products, but there are many markets where thoughtful regulation contributes to a healthier dynamic for consumers and producers alike. The auto industry spent decades in the 20th century resisting calls for regulation, but today there is broad appreciation of the essential role that regulations have played in ensuring ubiquitous seat belts and air bags and greater fuel efficiency. The same is true for air safety, foods and pharmaceutical products. There will always be debates about the details, and the details matter greatly. But a world with vigorous regulation of products that are useful but potentially troubling is better than a world devoid of legal standards.

That’s why Microsoft called for national privacy legislation for the United States in 2005 and why we’ve supported the General Data Protection Regulation in the European Union. Consumers will have more confidence in the way companies use their sensitive personal information if there are clear rules of the road for everyone to follow. While the new issues relating to facial recognition go beyond privacy, we believe the analogy is apt.

It seems especially important to pursue thoughtful government regulation of facial recognition technology, given its broad societal ramifications and potential for abuse. Without a thoughtful approach, public authorities may rely on flawed or biased technological approaches to decide who to track, investigate or even arrest for a crime. Governments may monitor the exercise of political and other public activities in ways that conflict with longstanding expectations in democratic societies, chilling citizens’ willingness to turn out for political events and undermining our core freedoms of assembly and expression. Similarly, companies may use facial recognition to make decisions without human intervention that affect our eligibility for credit, jobs or purchases. All these scenarios raise important questions of privacy, free speech, freedom of association and even life and liberty.

So what issues should be addressed through government regulation? That’s one of the most important initial questions to address. As a starting point, we believe governments should consider the following issues, among others:

  • Should law enforcement use of facial recognition be subject to human oversight and controls, including restrictions on the use of unaided facial recognition technology as evidence of an individual’s guilt or innocence of a crime?
  • Similarly, should we ensure there is civilian oversight and accountability for the use of facial recognition as part of governmental national security technology practices?
  • What types of legal measures can prevent use of facial recognition for racial profiling and other violations of rights while still permitting the beneficial uses of the technology?
  • Should use of facial recognition by public authorities or others be subject to minimum performance levels on accuracy?
  • Should the law require that retailers post visible notice of their use of facial recognition technology in public spaces?
  • Should the law require that companies obtain prior consent before collecting individuals’ images for facial recognition? If so, in what situations and places should this apply? And what is the appropriate way to ask for and obtain such consent?
  • Should we ensure that individuals have the right to know what photos have been collected and stored that have been identified with their names and faces?
  • Should we create processes that afford legal rights to individuals who believe they have been misidentified by a facial recognition system?

This list, which is by no means exhaustive, illustrates the breadth and importance of the issues involved.

Another important initial question is how governments should go about addressing these questions. In the United States, this is a national issue that requires national leadership by our elected representatives. This means leadership by Congress. While some question whether members of Congress have sufficient expertise on technology issues, at Microsoft we believe Congress can address these issues effectively. The key is for lawmakers to use the right mechanisms to gather expert advice to inform their decision making.

On numerous occasions, Congress has appointed bipartisan expert commissions to assess complicated issues and submit recommendations for potential legislative action. As the Congressional Research Service (CRS) noted last year, these commissions are "formal groups established to provide independent advice; make recommendations for changes in public policy; study or investigate a particular problem, issue, or event; or perform a duty." Congress' use of the bipartisan "9/11 Commission" played a critical role in assessing that national tragedy. Congress has created 28 such commissions over the past decade, assessing issues ranging from protecting children in disasters to the future of the Army.

We believe Congress should create a bipartisan expert commission to assess the best way to regulate the use of facial recognition technology in the United States. This should build on recent work by academics and in the public and private sectors to assess these issues and to develop clearer ethical principles for this technology. The purpose of such a commission should include advice to Congress on what types of new laws and regulations are needed, as well as stronger practices to ensure proper congressional oversight of this technology across the executive branch.

Issues relating to facial recognition go well beyond the borders of the United States. The questions listed above – and no doubt others – will become important public policy issues around the world, requiring active engagement by governments, academics, tech companies and civil society internationally. Given the global nature of the technology itself, there likely will also be a growing need for interaction and even coordination between national regulators across borders.

Tech sector responsibilities

The need for government leadership does not absolve technology companies of our own ethical responsibilities. Given the importance and breadth of facial recognition issues, we at Microsoft and throughout the tech sector have a responsibility to ensure that this technology is human-centered and developed in a manner consistent with broadly held societal values. We need to recognize that many of these issues are new and no one has all the answers. We still have work to do to identify all the questions. In short, we all have a lot to learn. Nonetheless, some initial conclusions are clear.

First, it’s incumbent upon those of us in the tech sector to continue the important work needed to reduce the risk of bias in facial recognition technology. No one benefits from the deployment of immature facial recognition technology that has greater error rates for women and people of color. That’s why our researchers and developers are working to accelerate progress in this area, and why this is one of the priorities for Microsoft’s Aether Committee, which provides advice on several AI ethics issues inside the company.
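The evaluation behind claims like "greater error rates for women and people of color" is straightforward to illustrate: disaggregate accuracy by demographic group instead of reporting a single aggregate number. A minimal sketch with toy data (the groups, IDs, and records below are purely illustrative, not any real benchmark):

```python
from collections import defaultdict

# Sketch of a disaggregated evaluation: per-group error rates surface
# disparities that a single aggregate accuracy figure would hide.
# All records here are toy illustrations.

def error_rate_by_group(records):
    """records: iterable of (group, predicted_id, true_id) tuples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, pred, true in records:
        totals[group] += 1
        if pred != true:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

records = [
    ("group_a", "id1", "id1"), ("group_a", "id2", "id2"),
    ("group_a", "id3", "id3"), ("group_a", "id4", "id9"),
    ("group_b", "id5", "id5"), ("group_b", "id6", "id0"),
]

print(error_rate_by_group(records))
```

A gap between groups in this output (here 25 percent versus 50 percent) is exactly the kind of disparity the research cited above measures.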

As we pursue this work, we recognize the importance of collaborating with the academic community and other companies, including in groups such as the Partnership for AI. And we appreciate the importance not only of creating data sets that reflect the diversity of the world, but also of ensuring that we have a diverse and well-trained workforce with the capabilities needed to be effective in reducing the risk of bias. This requires ongoing and urgent work by Microsoft and other tech companies to promote greater diversity and inclusion in our workforce and to invest in a broader and more diverse pipeline of talent for the future. We’re focused on making progress in these areas, but we recognize that we have much more work to do.

Second, and more broadly, we recognize the need to take a principled and transparent approach in the development and application of facial recognition technology. We are undertaking work to assess and develop additional principles to govern our facial recognition work. We’ve used a similar approach in other instances, including trust principles we adopted in 2015 for our cloud services, supported in part by transparency centers and other facilities around the world to enable the inspection of our source code and other data. Similarly, earlier this year we published an overall set of ethical principles we are using in the development of all our AI capabilities.

As we move forward, we’re committed to establishing a transparent set of principles for facial recognition technology that we will share with the public. In part this will build on our broader commitment to design our products and operate our services consistent with the UN’s Guiding Principles on Business and Human Rights. These were adopted in 2011 and have emerged as the global standard for ensuring corporate respect for human rights. We periodically conduct Human Rights Impact Assessments (HRIAs) of our products and services, and we’re currently pursuing this work with respect to our AI technologies.

We’ll pursue this work in part based on the expertise and input of our employees, but we also recognize the importance of active external listening and engagement. We’ll therefore also sit down with and listen to a variety of external stakeholders, including customers, academics and human rights and privacy groups that are focusing on the specific issues involved in facial recognition. This work will take up to a few months, but we’re committed to completing it expeditiously.

We recognize that one of the difficult issues we’ll need to address is the distinction between the development of our facial recognition services and the use of our broader IT infrastructure by third parties that build and deploy their own facial recognition technology. The use of infrastructure and off-the-shelf capabilities by third parties is more difficult for a company to regulate than the use of a complete service or the work of a firm’s own consultants, which can be managed more tightly. While nuanced, these distinctions will need consideration.

Third, in the meantime we recognize the importance of going more slowly when it comes to the deployment of the full range of facial recognition technology. Many information technologies, unlike pharmaceutical products, are distributed quickly and broadly to accelerate the pace of innovation and usage. “Move fast and break things” became something of a mantra in Silicon Valley earlier this decade. But if we move too fast with facial recognition, we may find that people’s fundamental rights are being broken.

For this reason, based in part on input from the Aether Committee, we’re moving more deliberately with our facial recognition consulting and contracting work. This has led us to turn down some customer requests for deployments of this service where we’ve concluded that there are greater human rights risks. As we’re developing more permanent principles, we will continue to monitor the potential uses of our facial recognition technologies with a view to assessing and avoiding human rights abuses.

In a similar vein, we’re committed to sharing more information with customers who are contemplating the potential deployment of facial recognition technology. We will continue work to provide customers and others with information that will help them understand more deeply both the current capabilities and limitations of facial recognition technology, how these features can and should be used, and the risks of improper uses.

Fourth, we’re committed to participating in a full and responsible manner in public policy deliberations relating to facial recognition. Government officials, civil liberties organizations and the broader public can only appreciate the full implications of new technical trends if those of us who create this technology do a good job of sharing information with them. Especially given our urging of governments to act, it’s incumbent on us to step forward to share this information. As we do so, we’re committed to serving as a voice for the ethical use of facial recognition and other new technologies, both in the United States and around the world.

We recognize that there may be additional responsibilities that companies in the tech sector ought to assume. We provide the foregoing list not with the sense that it is necessarily complete, but in the hope that it can provide a good start in helping to move forward.

Some concluding thoughts

Finally, as we think about the evolving range of technology uses, we think it’s important to acknowledge that the future is not simple. A government agency that is doing something objectionable today may do something that is laudable tomorrow. We therefore need a principled approach for facial recognition technology, embodied in law, that outlasts a single administration or the important political issues of a moment.

Even at a time of increasingly polarized politics, we have faith in our fundamental democratic institutions and values. We have elected representatives in Congress that have the tools needed to assess this new technology, with all its ramifications. We benefit from the checks and balances of a Constitution that has seen us from the age of candles to an era of artificial intelligence. As in so many times in the past, we need to ensure that new inventions serve our democratic freedoms pursuant to the rule of law. Given the global sweep of this technology, we’ll need to address these issues internationally, in no small part by working with and relying upon many other respected voices. We will all need to work together, and we look forward to doing our part.



4 new ways Microsoft 365 takes the work out of teamwork—including free version of Microsoft Teams

It’s been one year since we introduced Microsoft 365, a holistic workplace solution that empowers everyone to work together in a secure way. In that time, Microsoft 365 seats have grown by more than 100 percent, building on the more than 135 million commercial monthly Office 365 users, 200 million Windows 10 commercial devices in use, and over 65 million seats of Enterprise Mobility + Security.

This momentum is driven by customers—in every industry—who are transforming their organizations to enable high performance from a workforce that is more diverse, distributed, and mobile than ever before. Microsoft 365 is designed to empower every type of worker—whether on the front lines of a business, managing a small team, or leading an entire organization.

Today, we are introducing four new ways Microsoft 365 connects people across their organization and improves collaboration habits, including extending the power of Microsoft Teams and new AI-infused capabilities in Microsoft 365.

1—Try Microsoft Teams, now available in a free version

To address the growing collaboration needs of our customers, last year we introduced Microsoft Teams, a powerful hub for teamwork that brings together chat, meetings, calling, files, and apps into a shared workspace in Microsoft 365. Now, more than 200,000 businesses across 181 markets use Teams to collaborate and get work done.

Beginning today, Teams is available in a free version worldwide in 40 languages. Whether you’re a freelancer, a small business owner, or part of a team inside a large organization, you can start using Teams today.

The free version includes the following for up to 300 people:

  • Unlimited chat messages and search.
  • Built-in audio and video calling for individuals, groups, and full team meetups.
  • 10 GB of team file storage plus an additional 2 GB per person for personal storage.
  • Integrated, real-time content creation with Office Online apps, including built-in Word, Excel, PowerPoint, and OneNote.
  • Unlimited app integrations with 140+ business apps to choose from—including Adobe, Evernote, and Trello.
  • Ability to communicate and collaborate with anyone inside or outside your organization, backed by Microsoft’s secure, global infrastructure.

This new offering provides a powerful introduction to Microsoft 365. Teams in Microsoft 365 includes everything in the free version plus additional storage, enterprise security, and compliance, and it can be used for your whole organization, regardless of size.

As we advance our mission to empower every person and organization on the planet to achieve more, what’s most exciting are the stories of customers taking on big projects with a small workforce, such as The Hustle Media Company—which helps movers, shakers, and doers make their dent in the world. Their popular daily email provides their audience with the tech and business news they need to know.

“As a media company that nearly quadrupled in size over the last year, it became apparent we needed a solution to connect all of The Hustle’s offices. As previous Slack users, we found that Microsoft Teams has all the features that other chat-based apps bring, but the teamwork hub allows everything to live in one place.”
—Adam Ryan, vice president of Media at The Hustle

Or, take it from Urban Agriculture Company, a small business specializing in organic, easy-to-use grow kits of vegetables, flowers, and herbs. After landing on Oprah’s favorite things, founder Chad Corzine turned to Microsoft 365 Business and Teams to manage communication among his rapidly growing departments, onboard employees, and protect customer data.


2—Use new intelligent event capabilities in Microsoft 365

Today, we’re also introducing new capabilities that allow anyone in your organization to create live and on-demand events in Microsoft 365. Events can be viewed in real time or on demand, with high-definition video and interactive discussion.

AI-powered services enhance the on-demand experience with:

  • A speaker timeline, which uses facial detection to identify who is talking, so you can easily jump to a particular speaker in the event.
  • Speech-to-text transcription, timecoding, and transcript search, so you can quickly find moments that matter in a recording.
  • Closed captions to make the event more accessible to all.
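Transcript search of this kind reduces to a simple data structure: timecoded text segments that can be scanned for a query and mapped back to playback positions. A minimal sketch with hypothetical data and field names (not the actual Microsoft 365 event API):

```python
# Sketch of searching a timecoded transcript: each segment pairs a
# playback offset with its transcribed text, so a text match can jump
# straight to the moment it occurred. Segment data is illustrative.

def search_transcript(segments, query):
    """Return (start_seconds, text) for every segment containing the query."""
    q = query.lower()
    return [(s["start"], s["text"]) for s in segments if q in s["text"].lower()]

segments = [
    {"start": 0.0,  "text": "Welcome to the quarterly review."},
    {"start": 12.5, "text": "First, the roadmap for Teams."},
    {"start": 47.0, "text": "Questions on the roadmap before we continue?"},
]

for start, text in search_transcript(segments, "roadmap"):
    print(f"{start:7.1f}s  {text}")
```

A production service would add fuzzy matching and tie each segment to the speaker-timeline data described above, but the search-then-seek shape is the same.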

Events can be as simple or as sophisticated as you prefer. You can use webcams, content, and screen sharing for informal presentations, or stream a studio-quality production for more formal events.

3—Leverage analytics to build better collaboration habits

We’re rolling out the preview of a new Workplace Analytics solution, which uses collaboration insights from the Microsoft Graph to help teams run efficient meetings, create time for focused work, and respect work/life boundaries. Organizations can use aggregate data in Workplace Analytics to identify opportunities for improving collaboration, then share insights and suggest habits to specific teams using MyAnalytics.

We’re also rolling out nudges, powered by MyAnalytics in Microsoft 365, which deliver habit-changing tips in Outlook, such as flagging that you’re emailing coworkers after hours or suggesting you book focused work time for yourself.

4—Work with others on a shared digital canvas with Microsoft Whiteboard

Microsoft Whiteboard is now generally available for Windows 10, coming soon to iOS, and available in preview on the web. Whether meeting in person or virtually, people need the ability to collaborate in real time. The new Whiteboard application enables people to ideate, iterate, and work together both in person and remotely, across multiple devices. Using pen, touch, and keyboard, you can jot down notes, create tables, shapes, and freeform drawings, and search for and insert images from the web.

Get started

Whether you’re managing a new project or creating your own business, it helps to have your team behind you to brainstorm ideas, tackle the work together, and have some fun along the way. Take your teamwork to the next level and start using Teams today.

Learn more about how Microsoft 365 enables teamwork below:


WIRED: ‘How artificial intelligence could prevent natural disasters’

On May 27, a deluge dumped more than 6 inches of rain in less than three hours on Ellicott City, Maryland, killing one person and transforming Main Street into what looked like Class V river rapids, with cars tossed about like rubber ducks. The National Weather Service put the probability of such a storm at once in 1,000 years. Yet, “it’s the second time it’s happened in the last three years,” says Jeff Allenby, director of conservation technology for Chesapeake Conservancy, an environmental group.

Floods are nothing new in Ellicott City, located where two tributaries join the Patapsco River. But Allenby says the floods are getting worse, as development covers what used to be the “natural sponge of a forest” with paved surfaces, rooftops, and lawns. Just days before the May 27 flood, the US Department of Homeland Security selected Ellicott City—on the basis of its 2016 flood—for a pilot program to deliver better flood warnings to residents via automated sensors.

Recently, Allenby developed another tool to help predict, plan, and prepare for future floods: a first-of-its-kind, high-resolution map showing what’s on the ground—buildings, pavement, trees, lawns—across 100,000 square miles from upstate New York to southern Virginia that drain into Chesapeake Bay. The map, generated from aerial imagery with the help of artificial intelligence, shows objects as small as 3 feet square, roughly 1,000 times more precise than the maps that flood planners previously used. To understand the difference, imagine trying to identify an Uber driver on a crowded city street using a map that can only display objects the size of a Walmart.

Creating the map consumed a year and cost $3.5 million, with help from Microsoft and the University of Vermont. Allenby’s team pored over aerial imagery, road maps, and zoning charts to establish rules, classify objects, and scrub errors. “As soon as we finished the first data set,” Allenby says, “everyone started asking ‘when are you going to do it again?’” to keep the map fresh.

Enter AI. Microsoft helped Allenby’s team train its AI for Earth algorithms to identify objects on its own. Even with a robust data set, training the algorithms wasn’t easy. The effort required regular “pixel peeping”—manually zooming in on objects to verify and amend the automated results. With each pass, the algorithm improved its ability to recognize waterways, trees, fields, roads, and buildings. As relevant new data become available, Chesapeake Conservancy plans to use its AI to refresh the map more frequently and easily than the initial labor-intensive multi-million dollar effort.
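The core step in refreshing such a map is per-pixel classification of new imagery. A real system would use a trained model; the hand-written band-ratio rule below is only a stand-in so the pipeline's shape is visible (all class codes, thresholds, and reflectance values are illustrative):

```python
import numpy as np

# Hedged sketch of per-pixel land-cover classification from two
# spectral bands. A production system would apply a trained model;
# the NDVI-threshold rule here is a simple illustrative stand-in.

CLASSES = {0: "water", 1: "vegetation", 2: "impervious"}

def classify_pixels(red, nir):
    """Classify each pixel from red and near-infrared reflectance."""
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)  # vegetation index
    labels = np.full(red.shape, 2, dtype=np.int64)       # default: impervious
    labels[ndvi > 0.3] = 1                               # high NDVI -> vegetation
    labels[(nir < 0.1) & (red < 0.1)] = 0                # dark in both bands -> water
    return labels

# Tiny 2x2 "scene": pavement, leafy tree, lake, rooftop.
red = np.array([[0.30, 0.05], [0.04, 0.25]])
nir = np.array([[0.32, 0.50], [0.05, 0.27]])
print(classify_pixels(red, nir))
```

The "pixel peeping" described above corresponds to manually auditing these labels against the imagery and feeding corrections back into training.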

Now, Microsoft is making the tool available more widely. For $42, anyone can run 200 million aerial images through Microsoft’s AI for Earth platform and generate a high-resolution land-cover map of the entire US in 10 minutes. The results won’t be as precise in other parts of the country where the algorithm has not been trained on local conditions—a redwood tree or saguaro cactus looks nothing like a willow oak.

A map of land use around Ellicott City, Maryland, built with the help of artificial intelligence (left) offers far more detail than its predecessor (right).

Chesapeake Conservancy

To a society obsessed with location and mapping services—where the physical world unfolds in the digital every day—the accomplishment may not seem groundbreaking. Until recently, though, neither the high-resolution data nor the AI smarts existed to make such maps cost-effective for environmental purposes, especially for nonprofit conservation organizations. With Microsoft’s offer, AI on a planetary scale is about to become a commodity.

Detailed, up-to-date information is paramount when it comes to designing stormwater management systems, Allenby says. “Looking at these systems with the power of AI can start to show when a watershed” is more likely to flood, he says. The Center for Watershed Protection, a nonprofit based in Ellicott City, reported in a 2001 study that when 10 percent of natural land gets developed, stream health declines as the watershed begins to lose its ability to manage runoff. At 20 percent, runoff doubles compared with undeveloped land. Allenby notes that paved surfaces and rooftops in Ellicott City reached 19 percent in recent years.
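The watershed-health check implied by those figures can be sketched directly: compute the impervious fraction of a land-cover grid and compare it with the 10 and 20 percent thresholds. The class codes and sample grid below are hypothetical, not Chesapeake Conservancy data:

```python
import numpy as np

# Sketch: fraction of impervious cover in a land-cover grid, compared
# against the Center for Watershed Protection's 10% / 20% thresholds.
# Class codes are illustrative (e.g. 2 = pavement, 3 = rooftop).

IMPERVIOUS = (2, 3)

def impervious_fraction(landcover):
    return np.isin(landcover, IMPERVIOUS).mean()

def runoff_status(fraction):
    if fraction >= 0.20:
        return "runoff roughly doubles vs. undeveloped land"
    if fraction >= 0.10:
        return "stream health declining"
    return "healthy"

grid = np.array([
    [1, 2, 1, 1, 1],
    [1, 3, 1, 1, 1],
    [1, 1, 2, 1, 1],
    [1, 1, 1, 1, 1],
    [1, 1, 3, 1, 1],
])  # 4 impervious cells out of 25 -> 16%

f = impervious_fraction(grid)
print(f, runoff_status(f))
```

Run over the real high-resolution map, the same computation is what lets planners flag a watershed like Ellicott City's, at 19 percent, as approaching the doubling threshold.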

Allenby says the more detailed map will enable planners to keep up with land-use changes and plan drainage systems that can accommodate more water. Eventually, the map will offer “live dashboards” and automated alerts to serve as a warning system when new development threatens to overwhelm stormwater management capacity. The Urban Forestry Administration in Washington, DC, has used the new map to determine where to plant trees by searching the district for areas without tree cover where standing water accumulates. Earlier this year, Chesapeake Conservancy began working with conservation groups in Iowa and Arizona to develop training sets for the algorithms specific to those landscapes.

The combination of high-resolution imaging and sensor technologies, AI, and cloud computing is giving conservationists deeper insight into the health of the planet. The result is a near-real-time readout of Earth’s vital signs, firing off alerts and alarms whenever the ailing patient takes a turn for the worse.

Others are applying these techniques around the world. Global Forest Watch (GFW), a conservation project established by World Resources Institute, began offering monthly and weekly deforestation alerts in 2016, powered by AI algorithms developed by the University of Maryland.1 The algorithms analyze satellite imagery as it’s refreshed to detect “patterns that may indicate impending deforestation,” according to the organization’s website. Using GFW’s mobile app, Forest Watcher, volunteers and forest rangers take to the trees to verify the automated alerts in places like the Leuser Ecosystem in Indonesia, which calls itself “the last place on Earth where orangutans, rhinos, elephants and tigers are found together in the wild.”
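The pattern behind such alerts can be sketched as change detection on a vegetation index between two satellite passes: flag pixels whose greenness drops sharply. The threshold and reflectance values below are illustrative, not GFW's actual algorithm:

```python
import numpy as np

# Hedged sketch of deforestation-style change detection: compare NDVI
# (a standard greenness index) between two dates and flag large drops.
# The drop threshold and sample data are illustrative only.

def ndvi(red, nir):
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def deforestation_alerts(red_t0, nir_t0, red_t1, nir_t1, drop=0.3):
    """Return a boolean mask of pixels that lost substantial vegetation."""
    before, after = ndvi(red_t0, nir_t0), ndvi(red_t1, nir_t1)
    return (before - after) > drop

# Two pixels: the first is cleared between passes, the second is stable.
red_t0 = np.array([[0.05, 0.05]]); nir_t0 = np.array([[0.50, 0.50]])
red_t1 = np.array([[0.30, 0.05]]); nir_t1 = np.array([[0.32, 0.50]])
print(deforestation_alerts(red_t0, nir_t0, red_t1, nir_t1))
```

The flagged pixels are what a tool like Forest Watcher would hand to rangers for on-the-ground verification.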

The new conservation formula is also spilling into the oceans. On June 4, Paul Allen Philanthropies revealed a partnership with the Carnegie Institution of Science, the University of Queensland, the Hawaii Institute of Marine Biology, and the private satellite company Planet to map all of the world’s coral reefs by 2020. As Andrew Zolli, a Planet vice president, explains: For the first time in history, “new tools are up to the [planetary] level of the problem.”

By the end of 2017, Planet deployed nearly 200 satellites, forming a necklace around the globe that images the entire Earth every day down to 3-meter resolution. That’s trillions of pixels raining down daily, which could never be transformed into useful maps without AI algorithms trained to interpret them. The partnership leverages the Carnegie Institution’s computer-vision tools and the University of Queensland’s data on local conditions, including coral, algae, sand, and rocks.

“Today, we have no idea of the geography, rate, and frequency of global bleaching events,” explains Greg Asner, a scientist at Carnegie’s Department of Global Ecology. Based on what is known, scientists project that more than 90 percent of the world’s reefs, which sustain 25 percent of marine life, will be extinct by 2050. Lauren Kickham, impact director for Paul Allen Philanthropies, expects the partnership will bring the world’s coral crisis into clear view and enable scientists to track their health on a daily basis.

In a separate coral reef project, also being conducted with Planet and the Carnegie Institution, The Nature Conservancy is leveraging Carnegie’s computer vision AI to develop a high-resolution map of the shallow waters of the Caribbean basin. “By learning how these systems live and how they adapt, maybe not our generation, but maybe the next will be able to bring them back,” says Luis Solorzano, The Nature Conservancy’s Caribbean Coral Reef project lead.

Mapping services are hardly new to conservation. Geographic Information Systems have been a staple in the conservation toolkit for years, providing interactive maps to facilitate environmental monitoring, regulatory enforcement, and preservation planning. But, mapping services are only as good as the underlying data, which can be expensive to acquire and maintain. As a result, many conservationists resort to what’s freely available, like the 30-meter-resolution images supplied by the United States Geological Survey.

Ellicott City and the Chesapeake watershed demonstrate the challenges of responding to a changing climate and the impacts of human activity. Since the 1950s, the bay’s oyster reefs have declined by more than 80 percent. Biologists discovered one of the planet’s first marine dead zones in Chesapeake Bay in the 1970s. Blue crab populations plunged in the 1990s. The sea level has risen more than a foot since 1895, and, according to a 2017 National Oceanic and Atmospheric Administration (NOAA) report, may rise as much as 6 feet by the end of this century.

Allenby joined the Chesapeake Conservancy in 2012 when technology companies provided a grant to explore the ways in which technology could help inform conservation. Allenby sought ways to deploy technology to help land managers, like those in Ellicott City, improve upon the dated 30-meter-resolution images that FEMA also uses for flood planning and preparation.

In 2015, Allenby connected with the University of Vermont—nationally recognized experts in generating county-level high-resolution land-cover maps—seeking a partner on a bigger project. In 2016, they secured funding from a consortium of state and local governments and nonprofit groups. The year-long effort involved integrating data from such disparate sources as aerial imagery, road maps, and zoning charts. As the data set came together, a Conservancy board member introduced Allenby to Microsoft, which was eager to demonstrate how its AI and cloud computing could be leveraged to support conservation.

“It’s been the frustration of my life to see what we’re capable of, yet how far behind we are in understanding basic information about the health of our planet,” says Lucas Joppa, Microsoft’s chief environmental scientist, who oversees AI for Earth. “And to see that those individuals on the front line solving society’s problems, like environmental sustainability, are often in organizations with the least resources to take advantage of the technologies that are being put out there.”

The ultimate question, however, is whether the diagnoses offered by these AI-powered land-cover maps will arrive in time to help cure the problems caused by man.

1 CORRECTION, July 11, 1:10PM: Deforestation alerts from Global Forest Watch are powered by algorithms developed by the University of Maryland. An earlier version of this article incorrectly said the algorithms were developed by Orbital Insight.
