
Free Resources for Open Source Certification and Training

July was a hot month for certification on Linux.com. In case you missed it, we covered the open source certification process in a series of articles examining why certification is important, general tips for success, specific advice for exam prep, and answers to some commonly asked questions. Learn more and check out this year’s LiFT scholarship opportunities as well.

“As open source has become the new normal in everything from startups to Fortune 2000 companies, it is important to start thinking about the career road map, the paths that you can take and how Linux and open source in general can help you reach your career goals,” says Clyde Seepersad, General Manager of Training and Certification at The Linux Foundation.

5 Reasons Open Source Certification Matters More Than Ever

Here, we cover the growing need for certification and some of the benefits of obtaining open source credentials.

Tips for Success with Open Source Certification

We look at the kinds of certifications that are making a difference and what is involved in completing necessary training and passing performance-based exams, with tips from Clyde Seepersad.

Open Source Certification: Preparing for the Exam

In this article, we focus on preparing for exams, what to expect during an exam, and how testing for open source certification differs from traditional types of testing.

Open Source Certification: Questions and Answers

In the final article of the series, Seepersad answers some commonly asked questions pertaining to certification and exam-taking.

Achieving the level of Linux Foundation Certified System Administrator or Engineer is no small feat, so The Linux Foundation also offers this free certification guide to help with preparation. In this guide, you’ll find:

  • Critical things to keep in mind on test day

  • An array of both free and paid study resources

  • Tips and tricks that could make the difference at exam time

  • A checklist of all the domains and competencies covered in the exam

LiFT Scholarships

For the eighth year in a row, The Linux Foundation Training (LiFT) Scholarship Program is also providing training opportunities to developers and sysadmins who may not otherwise have the ability to attend courses. This year’s program will award training scholarships to 16 individuals in eight categories who want to contribute to the advancement of open source software. An additional 15 scholarships will be awarded in the Open Source Newbies category. Learn more about the application process here.


Open Source Summit Features 11 Co-Located Events, Kubernetes Training, Security Summit, LF Deep Learning Workshops & More

What makes attending Open Source Summit so valuable?

The people who attend, and the sharing of information that transpires when 2,000 open source leaders from around the globe gather to work together to transform technology.

In addition to education opportunities stemming from 250+ conference sessions and a plethora of collaboration opportunities in the hallway track and at networking events, Open Source Summit (previously LinuxCon + ContainerCon + CloudOpen) offers added learning opportunities with a variety of co-located events: 11 this year to be exact.

The cost of travel can be the biggest hardship of attending an event, so you should make the most of it. Open Source Summit offers a number of ways to gain additional value from your trip.

This year’s co-located events and special events offerings include:

  • Linux Security Summit North America
  • mountpoint 2018
  • LF Deep Learning Workshop
  • CHAOSScon North America
  • Cloud-Native Network Functions (CNF) Seminar
  • Egeria Open Metadata & Governance Workshop
  • OpenAPI Workshop
  • OpenChain Mini Summit
  • OpenHPC Workshop
  • LFCS & Linux on Azure Training Courses
  • Cloud & Container Apprentice Linux Engineer Tutorials

Check out additional attendee experiences and read more at The Linux Foundation


Systemd Timers: Three Use Cases

In this systemd tutorial series, we have already talked about systemd timer units to some degree but, before moving on to sockets, let’s look at three examples that illustrate how you can best leverage these units.

Simple cron-like behavior

This is something I have to do: collect popcon data from Debian every week, preferably at the same time so I can see how the downloads for certain applications evolve. This is the typical thing you can have a cron job do, but a systemd timer can do it too:

# cron-like popcon.timer
[Unit]
Description=Says when to download and process popcons

[Timer]
OnCalendar=Thu *-*-* 05:32:07
Unit=popcon.service

[Install]
WantedBy=basic.target

The actual popcon.service runs a regular wget job, so nothing special. What is new here is the OnCalendar= directive. This is what lets you set a service to run on a certain date at a certain time. In this case, Thu means “run on Thursdays” and the *-*-* means “the exact date, month and year don’t matter”, which translates to “run on Thursday, regardless of the date, month or year”.

Then you have the time you want to run the service. I chose about 5:30 am CEST, which is when the server is not very busy.
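If you want to double-check what a calendar expression means, recent versions of systemd can parse it for you. This is just a sanity check; the exact output of systemd-analyze varies by version:

# Ask systemd to normalize the expression and show when it would next fire
systemd-analyze calendar "Thu *-*-* 05:32:07"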

If the server is down and misses the weekly deadline, you can also work an anacron-like functionality into the same timer:

# popcon.timer with anacron-like functionality
[Unit]
Description=Says when to download and process popcons

[Timer]
Unit=popcon.service
OnCalendar=Thu *-*-* 05:32:07
Persistent=true

[Install]
WantedBy=basic.target

When you set the Persistent= directive to true, it tells systemd to run the service immediately after booting if the server was down when the service was supposed to run. This means that if the machine was down, say for maintenance, in the early hours of Thursday, as soon as it is booted again, popcon.service will run, and then the timer will go back to its routine of running the service every Thursday at 5:32 am.
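Once the timer is enabled and started, you can confirm when it last fired and when it is scheduled to fire next; the column layout of the output differs slightly between systemd versions:

# List the timer along with its next and last trigger times
systemctl list-timers popcon.timer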

So far, so straightforward.

Delayed execution

But let’s kick things up a notch and “improve” the systemd-based surveillance system. Remember that the system started taking pictures the moment you plugged in a camera. Suppose you don’t want pictures of your face while you install the camera. You will want to delay the start-up of the picture-taking service by a minute or two so you can plug in the camera and move out of frame.

To do this, first change the Udev rule so it points to a timer:

ACTION=="add", SUBSYSTEM=="video4linux", ATTRS{idVendor}=="03f0", ATTRS{idProduct}=="e207", TAG+="systemd", ENV{SYSTEMD_WANTS}="picchanged.timer", SYMLINK+="mywebcam", MODE="0666"

The timer looks like this:

# picchanged.timer
[Unit]
Description=Runs picchanged 1 minute after the camera is plugged in

[Timer]
OnActiveSec=1 m
Unit=picchanged.path

[Install]
WantedBy=basic.target

The Udev rule gets triggered when you plug the camera in, and it calls the timer. The timer waits for one minute after it starts (OnActiveSec=1 m) and then runs picchanged.path, which monitors whether the master image changes. picchanged.path is also in charge of pulling in webcam.service, the service that actually takes the picture.
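The picchanged.path unit itself was built in the earlier surveillance article in this series. As a rough reminder of what such a path unit looks like, here is a minimal sketch; the watched file location and the handler service name are placeholders, not the exact ones from that article:

# picchanged.path (sketch; the watched file and handler name are placeholders)
[Unit]
# Also pull in the service that takes the actual picture
Wants=webcam.service

[Path]
# When the master image is modified, activate the handler service
PathModified=/home/user/monitor/master.jpg
Unit=picchanged.service
# No [Install] section is needed: picchanged.timer activates this unit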

Start and stop Minetest server at a certain time every day

In the final example, let’s say you have decided to delegate parenting to systemd. I mean, systemd seems to be already taking over most of your life anyway. Why not embrace the inevitable?

So you have your Minetest service set up for your kids. You also want to give some semblance of caring about their education and upbringing and have them do homework and chores. What you want to do is make sure Minetest is only available for a limited time (say from 5 pm to 7 pm) every evening.

This is different from “starting a service at a certain time” in that writing a timer to start the service at 5 pm is the easy part:

# minetest.timer
[Unit]
Description=Runs the minetest.service at 5 pm every day

[Timer]
OnCalendar=*-*-* 17:00:00
Unit=minetest.service

[Install]
WantedBy=basic.target

But writing a counterpart timer that shuts down a service at a certain time needs a bigger dose of lateral thinking.

Let’s start with the obvious — the timer:

# stopminetest.timer
[Unit]
Description=Stops the minetest.service at 7 pm every day

[Timer]
OnCalendar=*-*-* 19:05:00
Unit=stopminetest.service

[Install]
WantedBy=basic.target

The tricky part is how to tell stopminetest.service to actually, you know, stop the Minetest server. There is no way to pass the PID of the Minetest server from minetest.service, and there are no obvious directives in systemd’s unit vocabulary to stop or disable a running service from another unit.

The trick is to use systemd’s Conflicts= directive. The Conflicts= directive is the mirror image of systemd’s Wants= directive: it does exactly the opposite. If you have Wants=a.service in a unit called b.service, when b.service starts, it will run a.service if it is not running already. Likewise, if you have a line that reads Conflicts=a.service in your b.service unit, as soon as b.service starts, systemd will stop a.service.

This was created for situations in which two services could clash when trying to take control of the same resource simultaneously, say, when two services needed to access your printer at the same time. By putting a Conflicts= line in your preferred service, you could make sure it would override the less important one.

You are going to use Conflicts= a bit differently, however: you will use it to cleanly shut down minetest.service:

# stopminetest.service
[Unit]
Description=Closes down the Minetest service
Conflicts=minetest.service

[Service]
Type=oneshot
ExecStart=/bin/echo "Closing down minetest.service"

The stopminetest.service doesn’t do much at all. Indeed, it could do nothing at all; but because it contains that Conflicts= line, the moment it is started, systemd will close down minetest.service.

There is one last wrinkle in your perfect Minetest setup: What happens if you get home late from work, it is past the time when the server should have come up, but playtime is not over? The Persistent= directive (see above), which runs a service if it has missed its start time, is no good here, because if you switch the server on at, say, 11 am, it would start Minetest, and that is not what you want. What you really want is a way to make sure that systemd will only start Minetest between the hours of 5 and 7 in the evening:

# minetest.timer
[Unit]
Description=Runs the minetest.service every minute between the hours of 5 pm and 7 pm

[Timer]
OnCalendar=*-*-* 17..19:*:00
Unit=minetest.service

[Install]
WantedBy=basic.target

The line OnCalendar=*-*-* 17..19:*:00 is interesting for two reasons: (1) 17..19 is not a point in time, but a period of time, in this case the period between the hours of 17 and 19; and (2) the * in the minute field indicates that the service must be run every minute. Hence, you would read this as “run the minetest.service every minute between 5 and 7 pm”.
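Again, systemd-analyze can confirm how such a range expression is interpreted, assuming your systemd is recent enough to ship the calendar verb:

# Normalize the range expression and show the next time it would elapse
systemd-analyze calendar "*-*-* 17..19:*:00"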

There is still one catch, though: once the minetest.service is up and running, you want minetest.timer to stop trying to run it again and again. You can do that by including a Conflicts= directive in minetest.service:

# minetest.service
[Unit]
Description=Runs Minetest server
Conflicts=minetest.timer

[Service]
Type=simple
User=<your user name>
ExecStart=/usr/bin/minetest --server
ExecStop=/bin/kill -2 $MAINPID

[Install]
WantedBy=multi-user.target

The Conflicts= directive shown above makes sure minetest.timer is stopped as soon as the minetest.service is successfully started.

Now enable and start minetest.timer:

systemctl enable minetest.timer
systemctl start minetest.timer
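To watch the two units interact, check on both of them once the timer has had a chance to fire; the timer should show as inactive as soon as the service is up:

# Inspect both units; expect minetest.service active and minetest.timer stopped
systemctl status minetest.timer minetest.service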

And, if you boot the server at, say, 6 o’clock, minetest.timer will start up and, as the time falls between 5 and 7 pm, it will try to start minetest.service every minute. But, as soon as minetest.service is running, systemd will stop minetest.timer because it “conflicts” with minetest.service, thus preventing the timer from trying to start the service over and over when it is already running.

It is a bit counterintuitive that you use the service to kill the timer that started it up in the first place, but it works.

Conclusion

You probably think that there are better ways of doing all of the above. I have heard the term “overengineered” in regard to these articles, especially when using systemd timers instead of cron.

But the purpose of this series of articles is not to provide the best solution to any particular problem. The aim is to show solutions that use systemd units as much as possible, even to ridiculous lengths, and to showcase plenty of examples of how the different types of units and the directives they contain can be leveraged. It is up to you, the reader, to find the real practical applications for all of this.

Be that as it may, there is still one more thing to go: next time, we’ll be looking at sockets and targets, and then we’ll be done with systemd units.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.


5 Essential Tools for Linux Development

Linux has become a mainstay for many sectors of work, play, and personal life. We depend upon it. With Linux, technology is expanding and evolving faster than anyone could have imagined. That means Linux development is also happening at an exponential rate. Because of this, more and more developers will be hopping on board the open source and Linux dev train in the immediate, near, and far-off future. For that, people will need tools. Fortunately, there are a ton of dev tools available for Linux; so many, in fact, that it can be a bit intimidating to figure out precisely what you need (especially if you’re coming from another platform).

To make that easier, I thought I’d help narrow down the selection a bit for you. But instead of saying you should use Tool X and Tool Y, I’m going to narrow it down to five categories and then offer up an example for each. Just remember, for most categories, there are several available options. And, with that said, let’s get started.

Containers

Let’s face it: in this day and age, you need to be working with containers. Not only are they incredibly easy to deploy, they make for great development environments. If you regularly develop for a specific platform, why not do so by creating a container image that includes all of the tools you need to make the process quick and easy? With that image available, you can then develop and roll out numerous instances of whatever software or service you need.

Using containers for development couldn’t be easier than it is with Docker. The advantages of using containers (and Docker) are:

  • Consistent development environment.

  • You can trust it will “just work” upon deployment.

  • Makes it easy to build across platforms.

  • Docker images available for all types of development environments and languages.

  • Deploying single containers or container clusters is simple.

Thanks to Docker Hub, you’ll find images for nearly any platform, development environment, server, service… just about anything you need. Using images from Docker Hub means you can skip over the creation of the development environment and go straight to work on developing your app, server, API, or service.

Docker is easily installable on most every Linux platform. For example, to install Docker on Ubuntu, you only have to open a terminal window and issue the command:

sudo apt-get install docker.io

With Docker installed, you’re ready to start pulling down specific images, developing, and deploying.
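For example, pulling down a base image and opening a shell inside it takes just two commands. The image name here is only an illustration; pick whatever base suits your project:

# Fetch an Ubuntu base image from Docker Hub
sudo docker pull ubuntu

# Start an interactive container with a bash shell
sudo docker run -it ubuntu bash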

Version control system

If you’re working on a large project or with a team on a project, you’re going to need a version control system. Why? Because you need to keep track of your code, where your code is, and have an easy means of making commits and merging code from others. Without such a tool, your projects would be nearly impossible to manage. For Linux users, you cannot beat the ease of use and widespread deployment of Git and GitHub. If you’re new to their worlds, Git is the version control system that you install on your local machine and GitHub is the remote repository you use to upload (and then manage) your projects. Git can be installed on most Linux distributions. For example, on a Debian-based system, the install is as simple as:

sudo apt-get install git

Once installed, you are ready to start your journey with version control.
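A minimal first session might look like this; the project name and file are placeholders:

# Tell Git who you are (needed before your first commit)
git config --global user.name "Your Name"
git config --global user.email "you@example.com"

# Create a new repository and make a first commit
mkdir myproject && cd myproject
git init
echo "# My Project" > README.md
git add README.md
git commit -m "Initial commit"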

GitHub requires you to create an account. You can use it for free for non-commercial projects, or you can pay for commercial project housing (for more information, check out the price matrix here).

Text editor

Let’s face it, developing on Linux would be a bit of a challenge without a text editor. Of course, what counts as a text editor varies depending upon who you ask. One person might say vim, emacs, or nano, whereas another might go full-on GUI with their editor. But since we’re talking development, we need a tool that can meet the needs of the modern-day developer. And before I mention a couple of text editors, I will say this: Yes, I know that vim is a serious workhorse for serious developers and, if you know it well, vim will meet and exceed all of your needs. However, getting up to speed enough that it won’t be in your way can be a bit of a hurdle for some developers (especially those new to Linux). Considering my goal is always to help win over new users (and not just preach to an already devout choir), I’m taking the GUI route here.

As far as text editors are concerned, you cannot go wrong with the likes of Bluefish. Bluefish can be found in most standard repositories and features project support, multi-threaded support for remote files, search and replace, recursive opening of files, a snippets sidebar, integration with make, lint, weblint, and xmllint, unlimited undo/redo, an in-line spell checker, auto-recovery, full-screen editing, syntax highlighting, support for numerous languages, and much more.
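Like the other tools covered here, Bluefish is an easy install from the standard repositories. On a Debian-based system, that would be:

sudo apt-get install bluefish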

IDE

An Integrated Development Environment (IDE) is a piece of software that includes a comprehensive set of tools, enabling a one-stop-shop environment for developing. IDEs not only enable you to code your software, but to document and build it as well. There are a number of IDEs for Linux, but one in particular is not only included in the standard repositories, it is also very user-friendly and powerful. The tool in question is Geany. Geany features syntax highlighting, code folding, symbol name auto-completion, construct completion/snippets, auto-closing of XML and HTML tags, call tips, many supported filetypes, symbol lists, code navigation, a build system to compile and execute your code, simple project management, and a built-in plugin system.

Geany can be easily installed on your system. For example, on a Debian-based distribution, issue the command:

sudo apt-get install geany

Once installed, you’re ready to start using this very powerful tool, which features a user-friendly interface with next to no learning curve.

Diff tool

There will be times when you have to compare two files to find where they differ. This could be two different copies of what was once the same file (where only one compiles). When that happens, you don’t want to do the comparison manually. Instead, you want to employ the power of a tool like Meld. Meld is a visual diff and merge tool targeted at developers. With Meld, you can make short work of discovering the differences between two files. Although you can use a command-line diff tool, when efficiency is the name of the game, you can’t beat Meld.

Meld allows you to open a comparison between two files and will highlight the differences between them. Meld also allows you to merge changes from either the right or the left, as the files are opened side by side.

Meld can be installed from most standard repositories. On a Debian-based system, the installation command is:

sudo apt-get install meld
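Once installed, opening a comparison from the command line is a one-liner; the file names here are placeholders:

# Open both files side by side with differences highlighted
meld old/main.c new/main.c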

Working with efficiency

These five tools not only enable you to get your work done, they help to make it quite a bit more efficient. Although there are a ton of developer tools available for Linux, you’re going to want to make sure you have one for each of the above categories (maybe even starting with the suggestions I’ve made).

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.


EdgeX Adds Security and Reduces Footprint with “California” Release

The Linux Foundation’s EdgeX Foundry has announced the second major release of its EdgeX IoT middleware for edge computing. The “California” release adds security features, including a reverse proxy and secure credentials storage. It has also been rewritten in Go for a smaller footprint, which makes it possible to run EdgeX on the Raspberry Pi 3, the board chosen as the official target platform for California.

EdgeX Foundry was announced in late July 2017, with a goal of developing a standardized, open source interoperability framework for Internet of Things edge computing. EdgeX Foundry is creating and certifying an ecosystem of interoperable, plug-and-play components to create an open source EdgeX stack that will mediate between multiple sensor network messaging protocols as well as cloud and analytics platforms.

The framework facilitates interoperability code that spans edge analytics, security, system management, and services. It also eases the integration of pre-certified software for IoT gateways and smart edge devices.

Security and flexibility

“Our goal is to decouple connectivity standards and device interfaces from applications,” said Dell developer and major EdgeX contributor Jason A. Shepherd in an email interview with Linux.com. “EdgeX will enable flexibility and scalability through platform independence, loosely-coupled microservices, and the ability to bring together services written in different languages through common APIs. These cloud-native tenets are absolutely required at the edge to scale in an inherently fragmented, multi-edge and multi-cloud world.”

EdgeX is based on Dell’s seminal FUSE IoT middleware framework, with inputs from a similar AllJoyn-compliant project called IoTX. Dell is one of three Platinum members, alongside Analog Devices and Samsung. EdgeX Foundry now has 61 members overall, including AMD, Canonical, Cloud Foundry, Linaro, Mocana, NetFoundry, Opto 22, RFMicron, and VMware.

The California release follows the initial Barcelona release, which arrived last October. Barcelona provided reference Device Services supporting BACNet, Modbus, Bluetooth Low Energy (BLE), MQTT, SNMP, and Fischertechnik, as well as connectors to Azure IoT Suite and Google IoT Core.

The major new features in EdgeX California aim to improve security. A new reverse proxy based on Kong helps protect REST API communications and secrets storage. The reverse proxy requires any external client of an EdgeX microservice to authenticate itself before calling an EdgeX API.

The new secure storage facility for secrets is based on HashiCorp’s open source Vault. It lets you securely store sensitive data such as username/password credentials, certificates, and tokens within EdgeX for performing tasks such as encryption, making HTTPS calls to the enterprise, or securely connecting EdgeX to a cloud provider.

“Our Barcelona release had no security features because we wanted all the security layers to be defined by a community of industry experts such as RSA, Analog Devices, Thales, ForgeRock, and Mocana, rather than only from Dell,” said Shepherd. “The Reverse Proxy and Secrets Store is the foundation from which everything else is built.”

Shift to Go

The other major change in California is that the code was rebuilt from the original Java in the Go programming language. The process delayed the release by several months but, as a result, California has a significantly reduced footprint, startup time, memory consumption, and CPU usage. It fits into 42MB (68MB with container) and can now boot in less than a second per service, compared to about 35 seconds.

Additional new features in the California release include:

  • Export services additions for “northbound” connectivity to the XMPP messaging standard; the ThingsBoard IoT platform for device management, data collection, processing, and visualization; and Samsung’s Brightics IoT interoperability platform
  • Improved documentation, now available on GitHub
  • Full support for Arm 64
  • Blackbox tests for all microservices within build and continuous integration processes
  • Improved continuous integration to streamline developer contributions

According to Dell’s Shepherd, the switch to Go was not only about reducing footprint, but also about avoiding the need for vendors to pay a Java license fee for commercial deployments. In addition, Go has expanded EdgeX’s developer base.

“Go’s concurrency model is superior to most programming languages, has the support of Google, is used by Docker, Kubernetes and many other large software development efforts, and is growing broadly in IoT circles,” said Shepherd. “We doubled our community in the months after the January Go-Lang Preview. There is a learning curve associated with getting a typical object (Java, C++) developer to move to Go (a functional versus object language), but overall the move has been good for fostering more enthusiasm about the platform as well as improving it.”

Shepherd noted that Go is only a baseline reference language. Developers can use the same APIs with other languages, and the project will support C in addition to Go for the Device Service SDKs. Because C can reduce the footprint even further than Go, it may be the better choice for applications built on a low-end “thin edge” gateway with a lot of Device Services, such as many different sensor protocols, said Shepherd. However, EdgeX Foundry chose Go because it is more platform independent in terms of hardware and OS.

Next up: Delhi and beyond

The upcoming Delhi release due in October will include components such as manageability services, Device Service SDKs, improved unit and performance testing, and a basic EdgeX UI for demos. It will also add more security features including improved security service bootstrapping of Kong and Vault.

According to Shepherd, other security enhancements planned for Delhi include “tying Kong and potentially other security services to an access control system providing access control lists for granting access to various services.” Future versions of EdgeX will also establish a Chain of Trust API for systems that don’t have something like TPM. “We want to build out an API that allows EdgeX to establish a root of trust with the platform it rides on,” said Shepherd.

Other plans call for automating security testing, including “building an entire security testing apparatus and look at pen-testing type of needs,” said Shepherd. The project will also enhance the Vault-based secure storage system. “Today, EdgeX microservices get their configuration and secrets from the Consul configuration/registry service, but the secrets, such as passwords for database connections, are not secure. We want application secrets to come from Vault.  Vault and Consul are provided by HashiCorp and we think there is a good way to use the two together.”

Looking forward to future releases, EdgeX plans to reduce the footprint even further, to run in 128MB or less. There are also roadmap items for “more integration to edge analytics, rules engines, and CEPs,” said Shepherd. “We are currently working with NodeRed as an example.”

When asked about the potential for integrating with other cloud-driven IoT platforms such as AWS Greengrass or Google’s new Cloud IoT Edge platform, Shepherd had this to say:

“Our ability to work with some of the proprietary cloud stacks depends on their openness and architecture, but we are certainly exploring the opportunities. The whole point is that a developer or end user can use their choice of edge analytics and backend services without having to reinvent the foundational elements for data ingestion, security and manageability.”

Separately, Shepherd noted: “Our completely open APIs — managed by the vendor-neutral Technical Steering Committee (TSC) to ensure stability and transparency — decouple developers’ choice of standards and application/cloud services to prevent them from being locked in via one particular provider’s proprietary APIs when the data meter starts spinning.”

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.


Michelle Noorali: Helping Users and Developers Consume Open Source

Open source events create the best interaction points between developers and users, and one person you’re likely to meet at these events is Michelle Noorali, one of the most visible and recognizable faces in one of the biggest open source communities: Kubernetes.

Most modern software development, much of it open source by default, is done by people spread across the globe, many of whom have never met in person. That’s why events like Open Source Summit are extremely important in creating opportunities for interaction among the people who are managing, developing, and using these open source projects.

Noorali, Senior Software Engineer at Microsoft, says she loves meeting people at events and learning about how they are using cloud-native tools and what they need. “I am trying to see if those tools that I work on can also meet other people’s needs,” she said.

This direct interaction gives Noorali a unique perspective for understanding the pain points. For example, “It’s really hard to pick from all of the cloud native technologies and figure out how they work together because at the end of the day, you are trying to deploy and run applications in the cloud or on bare metal,” she said. “The second point is how do I expose my developers, my teams to this stuff and get them to actually use cloud native tools, without having to learn about everything from scratch.”

Read more at The Linux Foundation


Open Source Certification: Questions and Answers

Open source is now so pervasive at organizations of all sizes that there is outsized demand for workers skilled with open platforms and tools. This has created profound changes in the job market, and across industries the skills gap is widening, making it ever more difficult to hire people with much needed job skills.

So far in this series, we’ve discussed why certification matters so much, explained the kinds of certifications that are making a difference, and covered some strategic ways to prepare for the task-centric exams that lead to certification. In this last article of the series, Clyde Seepersad, General Manager of Training and Certification at The Linux Foundation, answers some commonly asked questions pertaining to certification and exam-taking.

Performance-based testing

In a recent webinar during which Seepersad discussed these topics, one participant asked about the differences between performance-based tests and multiple-choice tests.

“I’m a very passionate believer in performance-based tests,” Seepersad said, “and the reason is that it really reflects the reality of how you do your work as an IT professional. You do your work on a live system. You do your work at the command line. You don’t do your work by being quizzed and being handed a set of answers.”

“When I think of my own role in the past as a hiring manager,” Seepersad added, “if you gave me the option between the two, I would always pick the one where the candidate has proven that they can do the work in a live, timed, hands-on environment, because that’s going to be a better reflection of what I’m going to expect them to do in the real world. A performance-based test is definitely going to give me a lot more confidence in a candidate than a multiple-choice exam.”

In this article series, we have looked at what is involved in obtaining Linux Foundation certifications, but other organizations offer training and certification for open source platforms and tools as well. Another participant asked about the differences between Red Hat certification and Linux Foundation certification, for example.

“One of the things that I really like and respect about the Red Hat program is that just like The Linux Foundation program, it is performance-based,” Seepersad said. “It is a live system that the candidate has to work on, which is great. Red Hat continues to be a great option for users who know for a fact that they’re going to be working in a Red Hat-only environment.”  

“One other distinction is that we deliver our exam 100 percent online,” he added. “For the Red Hat exams, you have to go to a physical testing center or a kiosk. From a convenience factor, depending on where you’re located, if you’re not in an urban area or if you’re in a country that maybe doesn’t have a lot of test infrastructure, being able to take an exam from your own computer and take it 24/7 can matter a lot.”

Exam Insurance

Seepersad was also asked what is meant by “exam insurance” for Linux Foundation certification exams. He said that, soon after the training program was launched, the team talked with candidates who were taking a bit longer than expected to sit their exams, to find out why.

“The reason was that they were trying to save up for delivering their solutions until they were really sure they were ready,” he said. “Quite often that meant they were about to run out of time. We thought about how to take the stress out of this. The way we take the stress out is by offering a no-questions-asked exam retake option. If you take either exam, LFCS or LFCE, and you do not succeed on your first attempt, you are automatically eligible to have a free second attempt.”

With certification playing a more important role in securing a rewarding long-term career, are you interested in learning about options for gaining credentials? If so, please check out the other stories in this series and stay tuned for more information about open source training and certification.

Learn more about Linux training and certification.


Open Networking Summit Europe Schedule Announced! 3 Days Left to Save $805 | Register Now

Open Networking Summit, the premier open networking event in North America, comes to Europe for the first time this year, gathering enterprises, service providers, and cloud providers across the open networking ecosystem.

Join 1000+ architects, developers, and thought leaders in Amsterdam, September 25-27, to share learnings, highlight innovation and discuss the future of open networking, including SDN, NFV, orchestration, and the automation of cloud, network, and IoT services.

Keynote Sessions Include:

  • Talks from Deutsche Telekom, Orange, and Türk Telekom
  • Sessions and panels on the intersection of cloud native and networking; the intersection of blockchain and networking; ONAP leadership; and vendor innovation in open source.
  • Cross Domain/Cross-Layer VPN Service Orchestration Demo from China Mobile, Huawei, and Vodafone
  • Virtual Central Office (VCO) 2.0 – Virtualized Mobile Network Demo showing new and improved use cases extending the capabilities of the VCO, with presenters from China Mobile, Red Hat, and more.

REGISTER BY AUGUST 4 TO SAVE $805 »

Read more at The Linux Foundation


Join Interactive Workshop on Cloud-Native Network Functions at Open Source Summit

ONAP and Kubernetes – two of the fastest-growing Linux Foundation projects – are coming together in the next generation of telecom architecture.  

ONAP provides a comprehensive platform for real-time, policy-driven orchestration and automation of physical and virtual network functions, and Kubernetes is an open source system for automating deployment, scaling, and management of containerized applications. Telcos are now examining how these virtual network functions (VNFs) could evolve into cloud-native network functions (CNFs) running on Kubernetes.

In a three-hour interactive workshop on cloud-native network functions at Open Source Summit, Dan Kohn, Executive Director, Cloud Native Computing Foundation, and Arpit Joshipura, GM Networking & Orchestration, The Linux Foundation, will explain networking and cloud-native terms and concepts side by side.

“As the next generation of telco architecture evolves, CSPs are exploring how their Virtual Network Functions (VNFs) can evolve into Cloud-native Network Functions (CNFs),” said Joshipura. “This seminar will explore what’s involved in migrating from VNFs to CNFs, with a specific focus on the roles played by ONAP and Kubernetes. We hope to see a broad swath of community members from both the container and networking spaces join us for an engaging and informative discussion in Vancouver.”

Session highlights will include:

  • Migrating and automating network functions from virtual network functions (VNFs) to CNFs
  • Overview of sub-projects focusing on this migration, including cross-cloud CI, ONAP/OVP, FD.io/VPP, etc.
  • The role for a service mesh, such as Envoy, Istio, or Linkerd, in connecting CNFs with load balancing, canary deployments, policy enforcement, and more.
  • What is involved in telcos adopting modern continuous integration / continuous deployment (CI/CD) tools to be able to rapidly innovate and improve their CNFs while retaining confidence in their reliability.
  • Differing security needs of trusted (open source and vendor-provided) code vs. running untrusted code
  • The role for security isolation technologies like gVisor or Kata
  • Requirements of the underlying operating system
  • Strengths and weaknesses of different network architectures such as multi-interface pods and Network Service Mesh
  • Status of IPv6 and dual-stack support in Kubernetes

Additional registration is required for this session, but there is no extra fee. Space is limited in the workshop, so reserve your spot soon. And, if you plan to attend, please be willing to participate. Learn more and sign up now!

This article originally appeared at The Linux Foundation.


Open Source Networking Jobs: A Hotbed of Innovation and Opportunities

As global economies move ever closer to a digital future, companies and organizations in every industry vertical are grappling with how to further integrate and deploy technology throughout their business and operations. While Enterprise IT largely led the way, the advantages and lessons learned are now starting to be applied across the board. While the national unemployment rate stands at 4.1%, the overall unemployment rate for tech professionals hit 1.9% in April and the future for open source jobs looks particularly bright. I work in the open source networking space and the innovations and opportunities I’m witnessing are transforming the way the world communicates.

Once a slower-moving industry, the networking ecosystem of today, made up of network operators, vendors, systems integrators, and developers, is now embracing open source software and shifting significantly toward virtualization and software-defined networks running on commodity hardware. In fact, nearly 70% of global mobile subscribers are represented by network operator members of LF Networking, an initiative working to harmonize the projects that make up the open networking stack and adjacent technologies.

Demand for Skills

Developers and sysadmins working in this space are embracing cloud native and DevOps approaches and methods to develop new use cases and tackle the most pressing industry challenges. Focus areas like containers and edge computing are red hot and the demand for developers and sysadmins who can integrate, collaborate, and innovate in this space is exploding.

Open source and Linux make this all possible and, per the recently published 2018 Open Source Jobs Report, fully 80% of hiring managers are looking for people with Linux skills, while 46% are looking to recruit in the networking area and a roughly equal percentage cite “Networking” as a technology most affecting their hiring decisions.

Developers are the most sought-after position, with 72% of hiring managers looking for them, followed by DevOps skills (59%), engineers (57%), and sysadmins (49%). The report also measures the incredible growth in demand for container skills, which matches what we’re seeing in the networking space with the creation of cloud-native network functions (CNFs) and the proliferation of continuous integration / continuous deployment approaches such as the XCI initiative in OPNFV.

Get Started

The good news for job seekers is that there are plenty of onramps into open source, including the free Introduction to Linux course. Multiple certifications are mandatory for the top jobs, so I encourage you to explore the range of training opportunities out there. Specific to networking, check out these new training courses in the OPNFV and ONAP projects, as well as this introduction to open source networking technologies.

If you haven’t done so already, download the 2018 Open Source Jobs Report now for more insights and plot your course through the wide world of open source technology to the exciting career that waits for you on the other side!

Download the complete Open Source Jobs Report now and learn more about Linux certification here.