
6 Open Source AI Tools to Know

In open source, no matter how original your own idea seems, it is always wise to see if someone else has already executed the concept. For organizations and individuals interested in leveraging the growing power of artificial intelligence (AI), many of the best tools are not only free and open source, but, in many cases, have already been hardened and tested.

At leading companies and non-profit organizations, AI is a huge priority, and many of these companies and organizations are open sourcing valuable tools. Here is a sampling of free, open source AI tools available to anyone.

Acumos. Acumos AI is a platform and open source framework that makes it easy to build, share, and deploy AI apps. It standardizes the infrastructure stack and components required to run an out-of-the-box general AI environment. This frees data scientists and model trainers to focus on their core competencies rather than endlessly customizing, modeling, and training an AI implementation.

Acumos is part of the LF Deep Learning Foundation, an organization within The Linux Foundation that supports open source innovation in artificial intelligence, machine learning, and deep learning. The goal is to make these critical new technologies available to developers and data scientists, including those who may have limited experience with deep learning and AI. The LF Deep Learning Foundation recently approved a project lifecycle and contribution process and is now accepting proposals for the contribution of projects.

Facebook’s Framework. Facebook has open sourced its central machine learning system, designed for large-scale artificial intelligence tasks, along with a series of other AI technologies. The tools are part of a proven platform in use at the company. Facebook has also open sourced Caffe2, a framework for deep learning and AI.

Speaking of Caffe. Yahoo also released its key AI software under an open source license. The CaffeOnSpark tool is based on deep learning, a branch of artificial intelligence particularly useful in helping machines recognize human speech or the contents of a photo or video. Similarly, IBM’s machine learning program known as SystemML is freely available to share and modify through the Apache Software Foundation.

Google’s Tools. Google spent years developing its TensorFlow software framework to support its AI software and other predictive and analytics programs. TensorFlow is the engine behind several Google tools you may already use, including Google Photos and the speech recognition found in the Google app.

Two AIY kits open sourced by Google let individuals easily get hands-on with artificial intelligence. Focused on computer vision and voice assistants, the two kits come as small self-assembly cardboard boxes with all the components needed for use. The kits are currently available at Target in the United States, and are based on the open source Raspberry Pi platform — more evidence of how much is happening at the intersection of open source and AI.

H2O.ai. I previously covered H2O.ai, which has carved out a niche in the machine learning and artificial intelligence arena because its primary tools are free and open source. You can get the main H2O platform and Sparkling Water, which works with Apache Spark, simply by downloading them. These tools operate under the Apache 2.0 license, one of the most flexible open source licenses available, and you can even run them on clusters powered by Amazon Web Services (AWS) and others for just a few hundred dollars.

Microsoft Onboard. “Our goal is to democratize AI to empower every person and every organization to achieve more,” Microsoft CEO Satya Nadella has said. With that in mind, Microsoft is continuing to iterate its Microsoft Cognitive Toolkit. It’s an open source software framework that competes with tools such as TensorFlow and Caffe. Cognitive Toolkit works with both Windows and Linux on 64-bit platforms.

“Cognitive Toolkit enables enterprise-ready, production-grade AI by allowing users to create, train, and evaluate their own neural networks that can then scale efficiently across multiple GPUs and multiple machines on massive data sets,” reports the Cognitive Toolkit Team.

Learn more about AI in this new ebook from The Linux Foundation. Open Source AI: Projects, Insights, and Trends by Ibrahim Haddad surveys 16 popular open source AI projects – looking in depth at their histories, codebases, and GitHub contributions. Download the free ebook now.


Heather Kirksey on Integrating Networking and Cloud Native

As highlighted in the recent Open Source Jobs Report, cloud and networking skills are in high demand. And, if you want to hear about the latest networking developments, there is no one better to talk with than Heather Kirksey, VP, Community and Ecosystem Development, Networking at The Linux Foundation. Kirksey was the Director of OPNFV before the recent consolidation of several networking-related projects under the new LF Networking umbrella, and I spoke with her to learn more about LF Networking (LFN) and how the initiative is working closely with cloud native technologies.

Kirksey explained the reasoning behind the move and expansion of her role. “At OPNFV, we were focused on integration and end-to-end testing across the LFN projects. We had interaction with all of those communities. At the same time, we were separate legal entities, and things like that created more barriers to collaboration. Now, it’s easy to look at them more strategically as a portfolio to facilitate member engagement and deliver solutions to service providers.”

Read more at The Linux Foundation


Google’s Fuchsia Adds Emulator for Running Linux Apps

Google has added a Guest app to its emergent and currently open source Fuchsia OS to enable Linux apps to run within Fuchsia as a virtual machine (VM). The Guest app makes use of a library called Machina that permits closer integration with the OS than is available with typical emulators, according to a recent 9to5Google story.

Last month, Google announced a Project Crostini technology that will soon let Chromebook users more easily run mainstream Linux applications within a Chrome OS VM. This week, Acer’s Chromebook Flip C101 joined the short list of Chromebooks that will offer Linux support later this year.

While it’s encouraging that Chrome OS will soon support Linux apps in addition to Android, it’s not entirely surprising — since Android and Chrome OS are based on Linux. Yet, one of the first things Google emphasized when it revealed Fuchsia in 2016 was that it’s not based on the Linux kernel.

To some, Fuchsia seemed to be something of a betrayal, considering how Linux forms the basis not only for Android and Chrome OS but also for Google’s enterprise platforms. Why add another Windows or iOS when we were getting so close to everyone sharing a common Linux foundation?

No doubt, Google has some very good reasons for avoiding Linux. One reason may be the age and complexity of Linux. By starting from scratch, Google can escape that aspect and deliver more elegant, up-to-date code with fewer targets for hackers. Google is also baking secure updates deeply into the OS, and unlike Linux, is isolating applications from having direct kernel access.

Open for now

Back in 2016, we thought Google might be skipping over Linux to shift to a proprietary OS that it could control the way Apple dictates all things iOS. That may still happen, but for now Fuchsia is an open source project.

Some also speculated at the time that, considering the trim little microkernel, Google was bypassing Linux because of its inability to scale down to the MCU realm. Yet, MCU-based IoT does not appear to be the current focus of Fuchsia. Several reports, including a TechRadar post last week, have said that Fuchsia is intended to replace both Android and Chrome OS, and the combined platform will eventually be called Google Andromeda.

Earlier this year, 9to5Google reported that Fuchsia would include separate UIs — an Armadillo UI for phones and a Capybara UI for desktops — and like Android Things and other new Android variants, would tightly integrate Google Assistant voice technology. Essentially, this is the same idea that was behind Microsoft’s failed plan to offer a common Windows for phones and laptops, or Canonical’s defunct “convergence” version of Ubuntu.

Guest ex Machina

Whatever Fuchsia’s destiny, Google needs to attract mature applications, as well as developers, and the best way to do that is to add Linux app compatibility. The new Guest app, which initially supports Linux-based platforms including Debian, works with the Machina library to accomplish this in a way that goes beyond what you can get from QEMU, suggests 9to5Google.

Google describes Fuchsia’s Machina as “a library that builds on top of the Zircon hypervisor to provide virtualized peripherals that integrate with a garnet system.” Zircon is the Fuchsia microkernel, based on Little Kernel (LK), and formerly called Magenta. Garnet is the layer that sits directly atop Zircon and offers device drivers, the Escher graphics renderer, Fuchsia’s Amber updater, and the Xi Core engine for the Xi text and code editor. Other layers include Peridot for app design, and on top, Topaz, a Flutter-supported app layer.

Machina adopts the Virtio virtualization standard, which is also used by the Linux Kernel-based Virtual Machine (KVM). It makes use of Virtio’s vsock virtual socket, “which can open direct channels between a host operating system and its guest, to allow for conveniences that would be otherwise impossible,” says 9to5Google.

This extra effort will likely enable fast mouse performance, automatically adjusted screen resolution, and support for multiple displays, file transfers, and copy and paste, says the story. This appears to be much like the allegedly superior emulation that is expected with Google’s Project Crostini for running Linux apps on Chrome OS. The news of the Guest app follows earlier reports that suggested that Google is building an Android Runtime into Fuchsia rather than depending on emulation to run Android apps.

App emulators should be viewed with some skepticism. Most of the mobile Linux OS contenders promised some sort of Android app compatibility, but they generally failed to deliver. Still, by building emulation deeply into the stack from the start rather than adding an emulator later, Fuchsia may well offer Linux developers an emulator they can live with.

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.


Open Source Skills Soar In Demand According to 2018 Jobs Report

Linux expertise is again in the top spot as the most sought-after open source skill, says the latest Open Source Jobs Report from Dice and The Linux Foundation. The seventh annual report shows rapidly growing demand for open source skills, particularly in areas of cloud technology.

Key findings of the report include:

  • Linux tops the list as the most in-demand open source skill, making it mandatory for most entry-level open source careers. This is due in part to the growth of cloud and container technologies, as well as DevOps practices, all of which are typically built on Linux.
  • Container technology is rapidly growing in popularity and importance, with 57% of hiring managers seeking those skills, up from 27% last year.
  • Hiring open source talent is a priority for 83% of hiring managers, up from 76% in 2017.
  • Hiring managers are increasingly opting to train existing employees on new open source technologies and help them gain certifications.
  • Many organizations are getting involved in open source with the express purpose of attracting developers.

Career Building

In terms of job seeking and job hiring, the report shows high demand for open source skills and a strong career benefit from open source experience.

  • 87% of open source professionals say knowing open source has advanced their career.
  • 87% of hiring managers experience difficulties in recruiting open source talent.

Hiring managers say they are specifically looking to recruit in the following areas:

[Figure: open source job skills hiring managers are recruiting for]

Diversity

This year’s survey included optional questions about companies’ initiatives to increase diversity in open source hiring, which has become a hot topic throughout the tech industry. The responses showed a significant difference between the views of hiring managers and those of open source pros — with only 52% of employees seeing those diversity efforts as effective compared with 70% of employers.

Overall, the 2018 Open Source Jobs Report indicates a strong market for open source talent, driven in part by the growth of cloud-based technologies. This market provides a wealth of opportunities for professionals with open source skills, as companies increasingly recognize the value of open source.

The 2018 Open Source Jobs Survey and Report, sponsored by Dice and The Linux Foundation, provides an overview of the latest trends for open source careers. Download the complete Open Source Jobs Report now.

This article originally appeared at The Linux Foundation.


Systemd Services: Monitoring Files and Directories

So far in this systemd multi-part tutorial, we’ve covered how to start and stop a service by hand, how to start a service when booting your OS and have it stop on power down, and how to start a service when a certain device is detected. This installment does something different yet again and covers how to create a unit that starts a service when something changes in the filesystem. As a practical example, you’ll see how you can use one of these units to extend the surveillance system we talked about last time.

Where we left off

Last time we saw how the surveillance system took pictures, but it did nothing with them. In fact, it even overwrote the last picture it took when it detected movement so as not to fill the storage of the device.

Does that mean the system is useless? Not by a long shot. Because, you see, systemd offers yet another type of unit, the path unit, that can help you out. Path units allow you to trigger a service when an event happens in the filesystem, say, when a file gets deleted or a directory is accessed. And overwriting an image is exactly the kind of event we are talking about here.

Anatomy of a Path Unit

A systemd path unit takes the extension .path, and it monitors a file or directory. A .path unit calls another unit (usually a .service unit with the same name) when something happens to the monitored file or directory. For example, if you have a picchanged.path unit to monitor the snapshot from your webcam, you will also have a picchanged.service that will execute a script when the snapshot is overwritten.

Path units contain a new section, [Path], with a few more directives. First, you have the what-to-watch-for directives:

  • PathExists= monitors whether the file or directory exists. If it does, the associated unit gets triggered. PathExistsGlob= works in a similar fashion, but lets you use globbing, like when you use ls *.jpg to search for all the JPEG images in a directory. This lets you check, for example, whether a file with a certain extension exists.
  • PathChanged= watches a file or directory and activates the configured unit whenever it changes. It is not activated on every write to the watched file, but only when a file that was open for writing is changed and then closed. The associated unit is executed when the file is closed.
  • PathModified=, on the other hand, does activate the unit when anything is changed in the file you are monitoring, even before you close the file.
  • DirectoryNotEmpty= does what it says on the box, that is, it activates the associated unit if the monitored directory contains files or subdirectories.

Then, we have Unit=, which tells the .path which .service unit to activate, in case you want to give it a different name from that of your .path unit; MakeDirectory= can be true or false (or 0 or 1, or yes or no) and creates the directory you want to monitor before monitoring starts. Obviously, using MakeDirectory= in combination with PathExists= does not make sense. However, MakeDirectory= can be used in combination with DirectoryMode=, which you use to set the mode (permissions) of the new directory. If you don’t use DirectoryMode=, the default permissions for the new directory are 0755.
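To see how these directives fit together, here is a minimal sketch of a hypothetical unit (the watchdrop name and the drop/ directory are made up for illustration; they are not part of the surveillance project) that fires a service whenever files land in a drop directory, creating the directory first if need be:

#watchdrop.path (hypothetical example)
[Unit]
Description=Fire watchdrop.service when files appear in the drop directory

[Path]
DirectoryNotEmpty=/home/[user name]/drop
MakeDirectory=true
DirectoryMode=0700
Unit=watchdrop.service

[Install]
WantedBy=paths.target

Enabled with systemctl enable watchdrop.path, this would activate a matching watchdrop.service the moment anything shows up in the directory.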

Building picchanged.path

All these directives are very useful, but you will just be looking for changes made to one single file, so your .path unit is very simple:

#picchanged.path
[Unit]
Wants=webcam.service

[Path]
PathChanged=/home/[user name]/monitor/monitor.jpg

Notice the line in the [Unit] section that says:

Wants=webcam.service

The Wants= directive is the preferred way of starting up another unit that the current unit needs in order to work properly. webcam.service is the name you gave the surveillance service in the previous article; it is the service that actually controls the webcam and makes it take a snap every half second. This means it’s picchanged.path that is going to start up webcam.service now, and not the Udev rule you saw in the prior article. You will use the Udev rule to start picchanged.path instead.

To summarize: the Udev rule pulls in your new picchanged.path unit, which, in turn, pulls in webcam.service as a requirement for everything to work properly.

The “thing” that picchanged.path monitors is the monitor.jpg file in the monitor/ directory in your home directory. As you saw last time, webcam.service called a script, checkimage.sh, which took a picture at the beginning of its execution and stored it as monitor/monitor.jpg. checkimage.sh then repeatedly took another pic, temp.jpg, and compared it with monitor.jpg. If it found significant differences (like when somebody walks into frame), the script overwrote monitor.jpg with temp.jpg. That is when picchanged.path fires.

As you haven’t included a Unit= directive in your .path unit, systemd expects a matching picchanged.service unit, which it will trigger when /home/[user name]/monitor/monitor.jpg gets modified:

#picchanged.service
[Service]
Type=simple
ExecStart=/home/[user name]/bin/picmonitor.sh

For the time being, let’s make picmonitor.sh save a time-stamped copy of monitor.jpg every time changes get detected:

#!/bin/bash
# This is the picmonitor.sh script
cp /home/[user name]/monitor/monitor.jpg /home/[user name]/monitor/"`date`.jpg"

Udev Changes

You have to change the custom Udev rule you wrote in the previous installment so that everything works. Edit /etc/udev/rules.d/01-webcam.rules so that, instead of looking like this:

ACTION=="add", SUBSYSTEM=="video4linux", ATTRS{idVendor}=="03f0",   ATTRS{idProduct}=="e207", SYMLINK+="mywebcam", TAG+="systemd",   MODE="0666", ENV{SYSTEMD_WANTS}="webcam.service"

it looks like this:

ACTION=="add", SUBSYSTEM=="video4linux", ATTRS{idVendor}=="03f0",   ATTRS{idProduct}=="e207", SYMLINK+="mywebcam", TAG+="systemd",   MODE="0666", ENV{SYSTEMD_WANTS}="picchanged.path"

The new rule, instead of calling webcam.service, now calls picchanged.path when your webcam gets detected. (Note that you will have to change the idVendor and idProduct values to those of your own webcam — you saw how to find these out previously.)
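If you need a refresher, here is a quick sketch (assuming your webcam shows up as /dev/video0; adjust the device node to match your system) that walks the device’s attributes and pulls out those identifiers:

udevadm info --attribute-walk /dev/video0 | grep -E 'idVendor|idProduct'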

For the record, I also changed checkimage.sh from using PNG to JPEG images. I did this because I found some dependency problems with PNG images when working with mplayer on some versions of Debian. checkimage.sh now looks like this:

#!/bin/bash

# Take an initial snapshot and store it as monitor.jpg
mplayer -vo jpeg -frames 1 tv:// -tv driver=v4l2:width=640:height=480:device=/dev/mywebcam &>/dev/null
mv 00000001.jpg /home/paul/monitor/monitor.jpg

while true
do
    # Take a new snapshot every half second
    mplayer -vo jpeg -frames 1 tv:// -tv driver=v4l2:width=640:height=480:device=/dev/mywebcam &>/dev/null
    mv 00000001.jpg /home/paul/monitor/temp.jpg

    # Compare the new snapshot with the last one
    imagediff=`compare -metric mae /home/paul/monitor/monitor.jpg /home/paul/monitor/temp.jpg /home/paul/monitor/diff.png 2>&1 > /dev/null | cut -f 1 -d " "`

    # If the difference is big enough, overwrite monitor.jpg,
    # which is what triggers picchanged.path
    if [ `echo "$imagediff > 700.0" | bc` -eq 1 ]
    then
        mv /home/paul/monitor/temp.jpg /home/paul/monitor/monitor.jpg
    fi

    sleep 0.5
done

Firing up

This is a multi-unit service that, when all its bits and pieces are in place, you don’t have to worry much about: you plug in the designated webcam (or boot the machine with the webcam already connected), picchanged.path gets started thanks to the Udev rule and takes over, bringing up the webcam.service and starting to check on the snaps. There is nothing else you need to do.
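That said, if you want to double-check the wiring after copying everything into place (this sketch assumes your units live in /etc/systemd/system and your rule in /etc/udev/rules.d), you can reload both systems and watch the units by hand:

# Pick up the edited Udev rule without rebooting
sudo udevadm control --reload-rules

# Make systemd re-read the new unit files
sudo systemctl daemon-reload

# After plugging in the webcam, both units should show as active
systemctl status picchanged.path webcam.service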

Conclusion

Having the process split in two not only helps explain how path units work, but it’s also very useful for debugging. One service does not “touch” the other in any way, which means that you could, for example, improve the “motion detection” part, and it would be very easy to roll back if things didn’t work as expected.

Admittedly, the example is a bit goofy, as there are definitely better ways of monitoring movement using a webcam. But remember: the main aim of these articles is to help you learn how systemd units work within a context.

Next time, we’ll finish up with systemd units by looking at some of the other types of units available and show how to improve your home-monitoring system further by setting up a service that sends images to another machine.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.


How SUSE Is Bringing Open Source Projects and Communities Together

The modern IT infrastructure is diverse by design. People are mixing different open source components that come not only from different vendors, but also from different ecosystems. In this article, we talk with Thomas Di Giacomo, CTO of SUSE, about the need for better collaboration between open source projects that are being used across industries as we move toward a cloud native world.

Linux.com: Does the mix of different open source components create a challenge in terms of a seamless experience for customers? How can these projects work more closely with each other?

Thomas Di Giacomo: Totally, more and more, and it’s unlikely to slow down. It can be because of past investments and decisions, with existing pieces of IT and new ones needed to be added to the mix. Or, it might be because of different teams or different parts of an organization working on their own projects with different timelines etc. Or, again, because companies work with partners coming with their own stacks. But maybe even more importantly, it is also because no single one project can be the only answer on its own to what needs to be done.

An OS needs additional modules and applications on top of it to address use cases. Likewise, IaaS needs to handle specific networking and storage components that are provided by the relevant projects. Infrastructure on its own is pretty useless if it’s not paired with application delivery elements, not only to manage the compute part but to tie in software development and the application lifecycle.

Linux.com: Can you point out some industry wide efforts in that direction?

Thomas Di Giacomo: There’s a lot of more or less structured initiatives and approaches to that. On one hand, open source is de facto facilitating cross-project work, not only because the code is visible but with a focus on (open) APIs for instance, but it is also indirectly making it sometimes challenging as more and more open source projects are being started. That’s definitely a great thing for innovation, for people to contribute their ideas, for new ideas to grow, etc., but it requires specific attention and focus on helping users with putting together cross-project solutions they need for achieving their plans. Making sure cross-project solutions are easy to install and maintain, for example, and can co-exist with what’s already there.

What is starting to happen is cross-project development, integration, and testing with, for instance, shared CI/CD flows and tools between different projects. A good example is what OPNFV initiated a while ago, with cross-project CI/CD between OPNFV, OpenStack, OpenDaylight, and others.

Linux.com: At the same time, certain technologies like Kubernetes cut through many different landscapes — whether it be cloud, IoT, Paas, IaaS, containers, etc. That also means the expectations from traditional OS change. Can you talk about how SUSE Linux Enterprise (SLE) is evolving to handle containerized workloads and transactions/atomic updates?

Thomas Di Giacomo: Yes, indeed. Cutting through many different landscapes is also something Linux did (and still does) — from different CPU architectures, form factors, physical and virtualized, on-prem and public clouds, embedded to mainframes, etc.

But you’re right, although the abstractions are improving — getting to higher levels and better at making the underlying layers less visible (that’s the whole point of abstracting) — the infrastructure components, and even the OS, are still there and remain foundational for the abstracted layers to work. Hence, they have to evolve to meet today’s needs for portability, agility, and stability.

We’ve constantly worked on evolving Linux over the past 26 years, including some specific directions and optimizations to make SUSE Linux both a great container host OS and a great container base OS, so that container-based technologies and use cases run as smoothly, securely, and infrastructure-agnostically as possible. Technically, the snapshotting and transactional upgrade/rollback capabilities coming from btrfs as a filesystem, as well as the choice of different container engines, combined with the certification, stability, and maintainability of an enterprise-grade OS, really make it uniquely appropriate for running container clusters.

Linux.com: While we are talking about OSes, SUSE has both platforms — traditional SLE and atomic/transactional Kubic/SUSE CaaSP. How do these two projects work together, while making life easier for customers?

Thomas Di Giacomo: There are two angles of “together” here. The first one is our usual community/upstream first philosophy, where Kubic/openSUSE Tumbleweed are the core upstream projects for SUSE CaaS Platform and SUSE Linux Enterprise.

The other “together” is about bringing the traditional and container-optimized OS closer together. First, the operating system is required to be super modular, where not just a particular functionality is a module but where everything is a module. Second, the OS needs to be multi-modal. By that we mean it should be designed to take care of requirements for both traditional infrastructure and software-defined/cloud-native container-based infrastructure. This is what the community is putting together with Leap15, and what we’re doing for SUSE Linux Enterprise 15 coming out very soon.

Linux.com: SUSE is known for working with partners, instead of building its own stack. How do you cross-pollinate ideas, talent, and technologies as you (SUSE) work across platforms and projects like SLE, Kubic, Cloud Foundry, and Kubernetes?

Thomas Di Giacomo: We work upstream in the respective open source projects as much as we can. Sometimes some open source components are in different projects or outside upstream, and here again we try to bring them back as much as possible. Let me give just a couple of examples to illustrate that.

We’ve been initiating and contributing to a project called openATTIC, which aims to provide a management tool for storage, software-defined storage solutions, and especially Ceph. openATTIC is obviously open source like everything we do, but it was sitting outside of Ceph. Working with the Ceph community, we’ve started contributing openATTIC code and features to the upstream Ceph dashboard/Ceph manager, speeding things up by building on existing capabilities rather than redeveloping the whole thing from scratch. And then, together with the Ceph partners/community and with other Ceph components, we’re facilitating cross-project work by bringing them closer together.

Another example is a SUSE project called Stratos. It is a UI for Cloud Foundry distributions (any one of them, upstream and vendors), which we contributed to Cloud Foundry upstream.

Linux.com: Thanks to Cloud Foundry Container Runtime (CFCR), Cloud Foundry and Kubernetes are working closely, can you tell us about the work SUSE is doing with these two communities?

Thomas Di Giacomo: There are lots of container-related initiatives within the Cloud Foundry Foundation, for instance. Some of them we’re leading, some of them we are involved with, and in any case working together with the community and partner companies on those topics. We, for instance, focus on the containerization of Cloud Foundry itself, so that it is lightweight, portable, easily deployable, upgradable on any type of Kubernetes infrastructure (via Helm), so that containers and services are available to both Kubernetes and Cloud Foundry applications on there, and that actually simply containerized applications and Cloud Foundry developed ones co-exist easily.

So today such a containerized Cloud Foundry is available on top of AKS or EKS, on top of SUSE CaaS Platform obviously as well, and potentially on top of any Kubernetes. This was started a while ago and is now part of Cloud Foundry upstream, used by our solutions obviously but also by others to provide the CF developer experience on Kubernetes in the most straightforward and native way possible. There are other activities focused on providing a pluggable container scheduler for CFCR, as well as improving the cross-interoperable service capabilities.

Now this is currently mostly happening in the CF upstream and CF community, and we’re also working to start a workgroup within CNCF on the same topic (especially the containerization of Cloud Foundry), to bring the projects and their communities closer together.

This article was sponsored by SUSE and written by The Linux Foundation.



Call for Code Is Open and Organizations Are Lining Up to Join the Cause

By Bob Lord, Chief Digital Officer, IBM

Today is the first official day of Call for Code, an annual global initiative from creator David Clark Cause, with IBM proudly serving as Founding Partner. Call for Code aims to unleash the collective power of the global open source developer community against the growing threat of natural disasters.

Even as we prepare to accept submissions from technology teams around the world, the response from the technology community has been overwhelming, and today I am thrilled to announce two new partners joining the cause.

New Enterprise Associates (NEA) has confirmed its participation as a Partner Affiliate and the official Founding Venture Capital Partner to the cause. With over $20 billion in committed capital and a track record of partnering with entrepreneurs and innovations that have truly changed the world, NEA will extend Call for Code into the startup and venture capital ecosystem, and the Global Prize Winners will have the opportunity to pitch their solution to NEA for evaluation and feedback.

The Cloud Native Computing Foundation (CNCF) has also confirmed it will join the Call for Code as a Gold Sponsor. CNCF will bring invaluable experience and advice for technology teams looking to deploy their solutions across a variety of topologies and real-world constraints.

With NEA and CNCF on board the commitment to the cause is widening, and this is only the beginning. Since making the announcement, technology companies, consumer companies, universities, NGOs and celebrities have all expressed interest in answering or supporting the call. Events have taken place in 50 cities around the world, and many more are planned in coming months, providing training and bringing teams together.

Announced on May 24 by IBM Chairman, President and CEO Ginni Rometty, IBM is investing $30 million over five years as well as technology and resources to help kick start Call for Code to address some of the toughest social issues we face today. The goal is to develop technology solutions that significantly improve disaster preparedness, provide relief from devastation caused by fires, floods, hurricanes, tsunamis and earthquakes, and benefit Call for Code’s charitable partners — the United Nations Human Rights Office and the American Red Cross.

The need was never more apparent. Even as we made the announcement in Paris, Hawaii’s Kilauea volcano was erupting, reportedly destroying more than 450 homes. In recent weeks, Guatemala’s ‘Volcano of Fire’ reportedly left 110 dead and around 200 missing. In a worrying preview to the 2018 Atlantic hurricane season, two category 4 hurricanes – Aletta and Bud – formed in a matter of days last week.

2017 was in fact one of the worst years on record for catastrophic natural disasters, impacting millions of lives and causing billions of dollars of damage – from heat waves in Australia and sustained extreme heat in Europe to famine from drought in Somalia and massive floods and landslides in South East Asia.

We can’t stop a hurricane or a lava flow from wreaking havoc, but we can work together to predict their paths, get much-needed supplies into an area before disaster strikes, and help emergency support teams allocate their precariously stretched resources.

Last week, The Weather Company, an IBM business, announced it would make weather APIs available to Call for Code participants for access to data on weather conditions and forecasts. IBM Code Patterns get developer teams up and running in minutes, with access to cloud, data, AI and blockchain technologies.

Of course, the real magic happens when coders code. The open source developer community has helped build so much of the technology that is transforming our world. IBM has been supporting that community for over two decades and together we have helped reinvent the social experience. Our hope is that this community can help transform the experience of so many people impacted by natural disasters in coming years.

To help rally that community the Linux Foundation, a long-term partner for IBM, is lending its support and Linus Torvalds, the creator of Linux, will join a panel of eminent technologists to evaluate submissions.

Less surprising, at least to me, was the enthusiasm IBMers showed in responding to the call. We saw internal celebrations around the world in support of the launch last month and we anticipate a healthy contribution to the cause from the 35,000 developers within IBM, plus of course IBM’s own Corporate Service Corps will help deploy the winning ideas on the ground.

Ultimately, the real measure of success will be the impact Call for Code has on some of the most at-risk communities around the globe, and the lives that are saved and improved. With Call for Code now open, the time to make a difference is now.

This article originally appeared on the IBM developerWorks blog.


5 Commands for Checking Memory Usage in Linux

The Linux operating system includes a plethora of tools, all of which are ready to help you administer your systems. From simple file and directory tools to very complex security commands, there’s not much you can’t do on Linux. And, although regular desktop users may not need to become familiar with these tools at the command line, they’re mandatory for Linux admins. Why? First, you will have to work with a GUI-less Linux server at some point. Second, command-line tools often offer far more power and flexibility than their GUI alternatives.

Determining memory usage is a skill you might need should a particular app go rogue and commandeer system memory. When that happens, it’s handy to know you have a variety of tools available to help you troubleshoot. Or, maybe you need to gather information about a Linux swap partition or detailed information about your installed RAM? There are commands for that as well. Let’s dig into the various Linux command-line tools to help you check into system memory usage. These tools aren’t terribly hard to use, and in this article, I’ll show you five different ways to approach the problem.

I’ll be demonstrating on the Ubuntu Server 18.04 platform. You should, however, find all of these commands available on your distribution of choice. Even better, you shouldn’t need to install a single thing (as most of these tools are included).

With that said, let’s get to work.

top

I want to start out with the most obvious tool. The top command provides a dynamic, real-time view of a running system. Included in that system summary is the ability to check memory usage on a per-process basis. That’s very important, as you could easily have multiple iterations of the same command consuming different amounts of memory. For example (although you won’t see this on a headless server), say you’ve opened Chrome and noticed your system slowing down. Issue the top command, and you’ll see that Chrome has numerous processes running (one per tab – Figure 1).

Chrome isn’t the only app to show multiple processes. You see the Firefox entry in Figure 1? That’s the primary process for Firefox, whereas the Web Content processes are the open tabs. At the top of the output, you’ll see the system statistics. On my machine (a System76 Leopard Extreme), I have a total of 16GB of RAM available, of which just over 10GB is in use. You can then comb through the list and see what percentage of memory each process is using.

One of the things top is very good for is discovering Process ID (PID) numbers of services that might have gotten out of hand. With those PIDs, you can then set about to troubleshoot (or kill) the offending tasks.

If you want to make top a bit more memory-friendly, issue the command top -o %MEM, which will cause top to sort all processes by memory used (Figure 2).

The top command also gives you a real-time update on how much of your swap space is being used.
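As a quick reference, here are a few top invocations worth keeping at hand (these are standard flags in the procps-ng version of top that ships with most distributions, but check man top on yours):

top              # interactive, real-time view of all processes
top -o %MEM      # sort processes by memory usage
top -p 1234      # watch a single process by PID (1234 is a made-up example)
top -b -n 1      # one batch-mode snapshot, handy for scripts and logs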

free

Sometimes, however, top can be a bit much for your needs. You may only need to see the amount of free and used memory on your system. For that, there is the free command. The free command displays:

  • Total amount of free and used physical memory

  • Total amount of swap memory in the system

  • Buffers and caches used by the kernel

From your terminal window, issue the command free. The output of this command is not in real time. Instead, what you’ll get is an instant snapshot of the free and used memory in that moment (Figure 3).

You can, of course, make free a bit more user-friendly by adding the -m option, like so: free -m. This will report the memory usage in MB (Figure 4).

Of course, if your system is even remotely modern, you’ll want to use the -g option (gigabytes), as in free -g.

If you need memory totals, you can add the -t option, like so: free -mt. This will simply total the amount of memory in columns (Figure 5).
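To recap, here are the free variants covered above, plus -h, which picks a human-readable unit for you:

free             # plain snapshot (in kilobytes)
free -m          # report in megabytes
free -g          # report in gigabytes
free -mt         # megabytes, with a totals row
free -h          # human-readable units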

vmstat

Another very handy tool to have at your disposal is vmstat. This particular command is a one-trick pony that reports virtual memory statistics. The vmstat command will report stats on:

  • Processes

  • Memory

  • Paging

  • Block IO

  • Traps

  • Disks

  • CPU

The best way to issue vmstat is by using the -s switch, like vmstat -s. This will report your stats in a single column (which is so much easier to read than the default report). The vmstat command will give you more information than you need (Figure 6), but more is always better (in such cases).
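Besides the one-off summary, vmstat can also sample continuously, which is better for spotting trends than a single snapshot:

vmstat -s          # one-column summary of memory stats and event counters
vmstat 2 5         # five samples, two seconds apart
vmstat -s | head   # just the first few (memory) lines of the summary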

dmidecode

What if you want to find out detailed information about your installed system RAM? For that, you could use the dmidecode command. This particular tool is the DMI table decoder, which dumps a system’s DMI table contents into a human-readable format. If you’re unsure as to what the DMI table is, it’s a means to describe what a system is made of (as well as possible evolutions for a system).

To run the dmidecode command, you do need sudo privileges. So issue the command sudo dmidecode -t 17. The output of the command (Figure 7) can be lengthy, as it displays information for all memory-type devices. So if you don’t have the ability to scroll, you might want to send the output of that command to a file, like so: sudo dmidecode -t 17 > dmi_info, or pipe it to the less command, as in sudo dmidecode | less.
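Putting those together (DMI type 17 is the table entry for memory devices):

sudo dmidecode -t 17              # details for every installed memory device
sudo dmidecode -t 17 > dmi_info   # save the output to a file
sudo dmidecode -t 17 | less       # or page through it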

/proc/meminfo

You might be asking yourself, “Where do these commands get this information from?” In some cases, they get it from the /proc/meminfo file. Guess what? You can read that file directly with the command less /proc/meminfo. By using the less command, you can scroll up and down through that lengthy output to find exactly what you need (Figure 8).

One thing you should know about /proc/meminfo: This is not a real file. Instead, /proc/meminfo is a virtual file that contains real-time, dynamic information about the system. In particular, you’ll want to check the values for:

  • MemTotal

  • MemFree

  • MemAvailable

  • Buffers

  • Cached

  • SwapCached

  • SwapTotal

  • SwapFree

If you want to get fancy with /proc/meminfo, you can use it in conjunction with the egrep command, like so: egrep --color 'Mem|Cache|Swap' /proc/meminfo. This will produce an easy-to-read listing of all entries that contain Mem, Cache, and Swap … with a splash of color (Figure 9).
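And if you need a single figure for use in a script, a small awk one-liner will pull it straight from the file (field names exactly as they appear in /proc/meminfo):

# Print the available memory, e.g. 8051444 kB
awk '/^MemAvailable/ {print $2, $3}' /proc/meminfo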

Keep learning

One of the first things you should do is read the manual pages for each of these commands (so man top, man free, man vmstat, man dmidecode). Starting with the man pages for commands is always a great way to learn so much more about how a tool works on Linux.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.


Going Global with Kubernetes

Kubernetes is often touted as the Linux of the cloud world, and that comparison is fair when you consider its widespread adoption. But, with great power comes great responsibility and, as the home of Kubernetes, the Cloud Native Computing Foundation (CNCF) shoulders many responsibilities, including learning from the mistakes of other open source projects while not losing sight of the main goal. The rapid global growth of CNCF also means increased responsibility in terms of cultural diversity and creating a welcoming environment.

Rise of Kubernetes in China

CNCF has more than 216 members, making it the second largest project under the umbrella of The Linux Foundation. The project is enjoying massive adoption and growth in new markets, especially in China. For example, JD.com, one of the largest e-commerce companies in China, has moved to Kubernetes.

“If you are looking to innovate as a company, you are not going to always buy off-the-shelf technologies, you take Open Source technologies and customize them to your needs. China has over a billion people and they have to meet the needs of these people; they need to scale. Open Source technologies like Kubernetes enable them to customize and scale technologies to their needs,” said Chris Aniszczyk, CTO, CNCF.

This growth in Asia has inspired CNCF to bring KubeCon and CloudNativeCon to China. The organization will hold its first KubeCon + CloudNativeCon in Shanghai, November 11-13, 2018. China is already using open source cloud-native technologies, and through these and other efforts, CNCF wants to build a bridge to help Chinese developers increase their contribution to various projects. CNCF is also gearing up to help the community by offering translations of documentation, exams, certifications, and more.

In interviews and at events in China, language often becomes a barrier to collaboration and the free exchange of ideas and information. CNCF is aware of this and, according to Aniszczyk, is working on plans for live translation at events to allow presenters to speak in their native language.

CNCF projects are growing not only in new regions but also in scope; people are finding new use cases every day. While it enjoys this adoption, the community has also started to prepare itself for what lies ahead. It certainly can’t predict how some smart organization will use its technology in an area never envisioned, but it can prepare itself to embrace new requirements.

We have started to hear about a CNCF 2020 vision that goes beyond Kubernetes proper and looks at areas such as security and policy. The community has started adding new projects that deal with some of these topics, including SPIFFE, which helps users deal with service identity and security at scale for Kubernetes-related services, and OPA, a policy management project.

“We are witnessing a wide expansion of areas that CNCF is investing in to bring cloud native technologies to users,” said Aniszczyk.

Bane or boon?

Adoption is great, but we have seen many open source projects lose track of their core mission and become bloated in order to cater to every use case. The CNCF is not immune to such problems, but the community — at both the developer and organizational level — is acutely aware of the risk and is working to protect itself.

“We have taken several approaches. First and foremost, unlike many other open source projects, CNCF doesn’t force integration. We don’t have one major release that bundles everything. We don’t have any gatekeeping processes that other foundations have,” said Aniszczyk.

What CNCF does do is allow its members and end users to come up with integrations themselves to build products that solve the problems of their users. If such an integration is useful, then they contribute it back to CNCF. “We have a set of loosely coupled projects that are integrated by users; we don’t force any such integration,” said Aniszczyk.

According to Aniszczyk, CNCF acts almost like a release valve and experimentation center for new things. It creates an environment to test new projects. “They are like sandbox projects doing some interesting innovation, solving some serious problems. We will see if they work or not. If they do work, then the community may decide to integrate them, but none of it is forced,” said Aniszczyk.

It’s magic

All of this makes CNCF a unique project in the open source ecosystem. Kubernetes has now been widely adopted across industries. Look at cloud providers, for example, and you see that Kubernetes has the blessing of the public cloud trinity, which includes AWS, Azure, and Google Cloud. Three top Linux vendors — SUSE, Red Hat, and Canonical — have put their weight behind Kubernetes, as well as many other companies and organizations.

“I‘m so proud of being a person that’s been involved in open source and seeing all these companies working together under one neutral umbrella,” Aniszczyk said.

Join us at Open Source Summit in Vancouver this August for 250+ sessions covering the latest technologies and best practices in Kubernetes, cloud, open source, and more.


The Schedule for Open Source Summit North America Is Now Live

Join us August 29-31, in Vancouver, BC, for 250+ sessions covering a wide array of topics including Linux Systems, Cloud Native Applications, Blockchain, AI, Networking, Cloud Infrastructure, Open Source Leadership, Program Office Management and more. Arrive early for new bonus content on August 28 including co-located events, tutorials, labs, workshops, and lightning talks.

VIEW THE FULL SCHEDULE »

Register to save $300 through June 17.

REGISTER NOW »

Read more at The Linux Foundation