Top 5 Linux Distributions for New Users

Linux has come a long way from its original offering. But, no matter how often you hear how easy Linux is now, there are still skeptics. To back up this claim, the desktop must be simple enough for those unfamiliar with Linux to be able to make use of it. And, the truth is that plenty of desktop distributions make this a reality.

No Linux knowledge required

It might be simple to misconstrue this as yet another “best user-friendly Linux distributions” list. That is not what we’re looking at here. What’s the difference? For my purposes, the defining line is whether or not Linux actually plays into the usage. In other words, could you set a user in front of a desktop operating system and have them be instantly at home with its usage? No Linux knowledge required.

Believe it or not, some distributions do just that. I have five I’d like to present to you here. You’ve probably heard of all of them. They might not be your distribution of choice, but you can guarantee that they slide Linux out of the spotlight and place the user front and center.

Let’s take a look at the chosen few.

Elementary OS

The very philosophy of Elementary OS is centered around how people actually use their desktops. The developers and designers have gone out of their way to create a desktop that is as simple as possible. In the process, they’ve de-Linux’d Linux. That is not to say they’ve removed Linux from the equation. No. Instead, what they’ve done is create an operating system that is about as neutral as you’ll find. Elementary OS is streamlined in such a way as to make sure everything is perfectly logical. From the single Dock to the clear-to-anyone Applications menu, this is a desktop that doesn’t say to the user, “You’re using Linux!” In fact, the layout itself is reminiscent of Mac, but with the addition of a simple app menu (Figure 1).

Another important aspect of Elementary OS that places it on this list is that it’s not nearly as flexible as some other desktop distributions. Sure, some users would balk at that, but having a desktop that doesn’t throw every bell and whistle at the user makes for a very familiar environment — one that neither requires nor allows a lot of tinkering. That aspect of the OS goes a long way toward making the platform familiar to new users.

And like any modern Linux desktop distribution, Elementary OS includes an App Store, called AppCenter, where users can install all the applications they need without ever having to touch the command line.

Deepin

Deepin not only gets my nod for one of the most beautiful desktops on the market, it’s also just as easy to adopt as any desktop operating system available. With a very simple take on the desktop interface, there’s very little standing in the way of users with zero Linux experience getting up to speed. In fact, you’d be hard-pressed to find a user who couldn’t instantly start using the Deepin desktop. The only possible hitch in the works might be the sidebar control center (Figure 2).

But even that sidebar control panel is as intuitive as any other configuration tool on the market. And anyone who has used a mobile device will be instantly at home with the layout. As for opening applications, Deepin takes a macOS Launchpad approach with the Launcher. This button is in the usual far-right position on the desktop dock, so users will immediately gravitate to it, understanding that it is probably akin to the standard “Start” menu.

In similar fashion to Elementary OS (and most every Linux distribution on the market), Deepin includes an app store (simply called “Store”), where plenty of apps can be installed with ease.

Ubuntu

You knew it was coming. Ubuntu is most often ranked at the top of most user-friendly Linux lists. Why? Because it’s one of the chosen few where a knowledge of Linux simply isn’t necessary to get by on the desktop. Prior to the adoption of GNOME (and the ousting of Unity), that wouldn’t have been the case. Why? Because Unity often needed a bit of tweaking to get it to the point where a tiny bit of Linux knowledge wasn’t necessary (Figure 3). Now that Ubuntu has adopted GNOME, and tweaked it to the point where an understanding of GNOME isn’t even necessary, this desktop makes Linux take a back seat to simplicity and usability.

Unlike Elementary OS, Ubuntu doesn’t hold the user back. So anyone who wants more from their desktop can have it. However, the out-of-the-box experience is enough for just about any user type. Anyone looking for a desktop that keeps the user unaware of just how much power they have at their fingertips could certainly do worse than Ubuntu.

Linux Mint

I will preface this by saying I’ve never been the biggest fan of Linux Mint. It’s not that I don’t respect what the developers are doing; it’s more a matter of aesthetics. I prefer modern-looking desktop environments. But that old-school desktop metaphor (found in the default Cinnamon desktop) is perfectly familiar to nearly anyone who uses it. With a taskbar, start button, system tray, and desktop icons (Figure 4), Linux Mint offers an interface that requires zero learning curve. In fact, some users might be initially fooled into thinking they are working with a Windows 7 clone. Even the updates warning icon will look instantly familiar to users.

Because Linux Mint benefits from being based on Ubuntu, it enjoys not only immediate familiarity but also high usability. Even users with only the slightest understanding of the underlying platform will feel instantly at home on Linux Mint.

Ubuntu Budgie

Our list concludes with a distribution that also does a fantastic job of making the user forget they are using Linux, while making working with the usual tools a simple, beautiful thing. Melding the Budgie Desktop with Ubuntu makes for an impressively easy-to-use distribution. And although the layout of the desktop (Figure 5) might not be the standard fare, there is no doubt that acclimating takes no time. In fact, outside of the Dock defaulting to the left side of the desktop, Ubuntu Budgie has a decidedly Elementary OS look to it.

The System Tray/Notification area in Ubuntu Budgie offers a few more features than the usual fare: Features such as quick access to Caffeine (a tool to keep your desktop awake), a Quick Notes tool (for taking simple notes), Night Lite switch, a Places drop-down menu (for quick access to folders), and of course the Raven applet/notification sidebar (which is similar to, but not quite as elegant as, the Control Center sidebar in Deepin). Budgie also includes an application menu (top left corner), which gives users access to all of their installed applications. Open an app and the icon will appear in the Dock. Right-click that app icon and select Keep in Dock for even quicker access.

Everything about Ubuntu Budgie is intuitive, so there’s practically zero learning curve involved. It doesn’t hurt that this distribution is as elegant as it is easy to use.

Give One A Chance

And there you have it, five Linux distributions that, each in their own way, offer a desktop experience that any user would be instantly familiar with. Although none of these might be your choice for top distribution, it’s hard to argue their value when it comes to users who have no familiarity with Linux.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Xen Project Celebrates Unikraft Unikernel Project’s One Year Anniversary

It has been one year since the Xen Project introduced Unikraft as an incubator project. In that time, the team has made great strides in simplifying the process of building unikernels through a unified and customizable code base.

Unikraft is an incubation project under the Xen Project, hosted by the Linux Foundation, focused on easing the creation of unikernels, which compile source code into a lean operating system that includes only the functionality required by the application logic. As containers increasingly become the way cloud applications are built, there is a need to drive even more efficiency into the way these workloads run. The ultra-lightweight, small trusted compute base nature of unikernels makes them ideal not only for cloud applications, but also for fields where resources may be constrained or safety is critical.

Unikraft tackles one of the fundamental downsides of unikernels: despite their clear potential, building them is often manual, time-consuming work carried out by experts. Worse, the work, or at least chunks of it, often needs to be redone for each target application. Unikraft’s goal is to provide an automated build system where non-experts can easily and quickly generate extremely efficient and secure unikernels without having to touch a single line of code. Further, Unikraft explicitly supports multiple target platforms: not only virtual machines for Xen and KVM, but also OCI-compliant containers and bare metal images for various CPU architectures.

Over the last year the lead team at NEC Laboratories Europe along with external contributors from companies like ARM and universities such as University of Bucharest have made great strides in developing and testing Unikraft’s base functionality, including support for a number of CPU architectures, platforms, and operating system primitives. Notable updates include support for ARM64.

The Unikraft community continues to grow. Over the last year, we’ve seen impressive momentum in terms of community support and involvement:

  • Contributions from outside the project founders (NEC) now make up 25% of all contributions.

  • Active contributors rose 91%, from 2 contributors to 23.

  • The initial NEC code contribution was around 86KLOC: since then around 34KLOC of code have been added and/or modified.

An upcoming milestone for the project is the Unikraft v0.3 release, which will ship in February. This release includes:

  • Xenstore and Xen bus support

  • ARM32 support for Xen

  • ARM64 support for QEMU/KVM

  • X86_64 bare metal support

  • Networking support, including an API that allows for high-speed I/O frameworks (e.g., DPDK, netmap)

  • A lightweight network stack (lwip)

  • Initial VFS support along with a simple but performant in-RAM filesystem

We are very excited about this coming year, where the focus will be on automating the build process and supporting higher-layer functionality and applications:

  • External standard libraries: musl, libuv, zlib, openssl, libunwind, libaxtls (TLS), etc.

  • Language environments: Javascript (v8), Python, Ruby, C++

  • Frameworks: Node.js, PyTorch, Intel DPDK

  • Applications: lighttpd, nginx, SQLite, Redis, etc.  

Looking forward, in the first half of 2019 Unikraft will be concentrating its efforts towards supporting an increasing number of programming languages and applications and towards actively creating links to other unikernel projects in order to ensure that the project delivers on its promise. Stay tuned for what’s in store. If you want to take Unikraft out for a spin, to contribute or to simply find out more information about Unikraft please head over to the project’s website.

Also, if you are attending FOSDEM, February 2nd and 3rd, please stop by room AW1.121 for the talk “Unikraft: Unikernels Made Easy,” given by Simon Kuenzer. Simon, a senior systems researcher at NEC Labs and the lead maintainer of Unikraft, will be speaking all about Unikraft and giving a comprehensive overview of the project, where it’s been and what’s in store.  

Want to learn more about Unikraft and connect with the Xen community at large? Registration for the annual Xen Project Developer and Design Summit is open now! Check out information on sponsorships, speaking opportunities and more here.

This article originally appeared at Xen Project.

SAP: One of Open Source’s Best Kept Secrets

SAP has been working with open source for decades and has now established an open source program office (OSPO) to further formalize the coordination of its open source activities and expand its engagement with the open source communities. “SAP was one of the first industry players to formally define processes for open source consumption and contribution,” says Peter Giese, director of the Open Source Program Office.

Even so, many people do not yet consider SAP to be a company that embraces open source engagement and contributions.

“In the past, we may not have been active enough in sharing our open source activities,” says Giese.

Now, SAP is shining a spotlight on its work in open source. Transparency is an essential part of the new open source mandate, beginning with an explanation of what the company has been up to and where it is headed with open source.

How SAP came to adopt open source

“In 1998, SAP started to port the R/3 system, our market-leading ERP system, to Linux,” says Giese. “That was an important milestone for establishing Linux in the enterprise software market.”

Porting a system to Linux was just a first step, and a successful one. The action spurred an internal discussion and exploration of how and where to adopt Linux going forward.

Read more at The Linux Foundation

More About Angle Brackets in Bash

In the previous article, we introduced the subject of angle brackets (< >) and demonstrated some of their uses. Here, we’ll look at the topic from a few more angles. Let’s dive right in.

You can use <(...) to trick a tool into believing the output of a command is data from a file.

Let’s say you are not sure your backup is complete, and you want to check that a certain directory contains all the files copied over from the original. You can try this:

diff <(ls /original/dir/) <(ls /backup/dir/)

diff is a tool that typically compares two text files line by line, looking for differences. Here it takes the output of the two ls commands, treats each as if it came from a file, and compares them as such.

Note that there is no space between the < and the (...).

Running that on the original and backup of a directory where I save pretty pictures, I get:

diff <(ls /My/Pictures/) <(ls /My/backup/Pictures/)
5d4
< Dv7bIIeUUAAD1Fc.jpg:large.jpg

The < in the output is telling me that there is a file (Dv7bIIeUUAAD1Fc.jpg:large.jpg) on the left side of the comparison (in /My/Pictures) that is not on the right side of the comparison (in /My/backup/Pictures), which means copying over has failed for some reason. If diff didn’t cough up any output, it would mean that the lists of files were the same.
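If you want the discrepancies in both directions without deciphering diff’s markers, the comm tool accepts process substitution in exactly the same way. A quick sketch (the directories and file names below are throwaway examples, not the pictures folders from above):

```shell
# Set up two scratch directories to compare (throwaway example data).
orig=$(mktemp -d)
backup=$(mktemp -d)
touch "$orig/a.jpg" "$orig/b.jpg" "$backup/a.jpg"

# comm expects sorted input; ls sorts its output by default.
# With -3, column 1 lists names found only in $orig, and column 2
# (indented with a tab) lists names found only in $backup.
comm -3 <(ls "$orig") <(ls "$backup")
# b.jpg   <- present in the original, missing from the backup
```

Because comm separates “only in the first list” from “only in the second list”, it also catches stray files that exist in the backup but not the original.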

So, you may be wondering: if you can take the output of a command, make it look like the contents of a file, and feed it to an instruction that is expecting a file, then in the sorting-by-favorite-actor example from the previous article, could you have done away with the intermediate file and just fed the loop’s output straight into sort?

In short, yep! The line:

sort -r <(while read -r name surname films;do echo $films $name $surname ; done < CBactors)

does the trick nicely.
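If you want to convince yourself the two approaches are equivalent, you can run both on a small stand-in for the CBactors file (the names and film counts below are invented purely for illustration):

```shell
# A tiny stand-in for the CBactors file: name, surname, film count.
actors=$(mktemp)
cat > "$actors" <<'EOF'
Idris Elba 114
Tom Hardy 59
EOF

# Pipe version: the loop writes to stdout and sort reads from the pipe.
while read -r name surname films; do
  echo $films $name $surname
done < "$actors" | sort -r

# Process-substitution version: sort reads the loop's output
# as if it were a file.
sort -r <(while read -r name surname films; do
  echo $films $name $surname
done < "$actors")
```

Both commands print exactly the same reverse-sorted list; the only difference is how the loop’s output reaches sort.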

Here string! Good string!

There is one more case for redirecting data using angle brackets (or arrows, or whatever you want to call them).

You may be familiar with the practice of passing variables to commands using echo and a pipe (|). Say you want to convert a variable containing a string to uppercase characters because… I don’t know… YOU LIKE SHOUTING A LOT. You could do this:

myvar="Hello World"
echo $myvar | tr '[:lower:]' '[:upper:]'
HELLO WORLD

The tr command translates strings to different formats. In the example above, you are telling tr to change all the lowercase characters that come along in the string to uppercase characters.

It is important to know that you are not passing on the variable itself, but only its contents, that is, the string “Hello World”. This is called a here string, as in “it is here, in this context, that we know what string we are dealing with”. But there is a shorter, clearer, and all-round better way of delivering here strings to commands. Using

tr '[:lower:]' '[:upper:]' <<< $myvar

does the same thing with no need to use echo or a pipe. It also uses angle brackets, which is the whole obsessive point of this article.
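Here strings aren’t limited to tr; any command that reads standard input will take one. For example, they make a tidy way to split a string into separate variables with read (a small sketch; the variable names are arbitrary):

```shell
# Split a string into fields without an echo or a pipe.
line="Hello World"
read -r first second <<< "$line"
echo "$first"   # Hello
echo "$second"  # World

# Any filter that reads stdin works the same way:
wc -w <<< "$line"   # counts 2 words
```

Because read runs in the current shell here (no pipe means no subshell), the variables it sets remain available afterwards, which is a nice bonus of the here-string form.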

Conclusion

Again, Bash proves to give you lots of options with very little. I mean, who would’ve thunk that you could do so much with two simple characters like < and >?

The thing is, we aren’t done. There are plenty more characters that bring meaning to chains of Bash instructions. Without some background, they can make shell commands look like gibberish. Hopefully, post by post, we can help you decipher them. Until next time!

Hyperledger Bootcamp Hong Kong

Hyperledger Bootcamp is where we help get community members up to speed on how to contribute. Most of the participants are fairly new, and we understand that contributing to your first project can be daunting. This bootcamp takes the fear out of the process. For existing contributors and maintainers, this is the ideal place to recruit more help for your project or group.

Learn more

Project EVE Promotes Cloud-Native Approach to Edge Computing

The LF Edge umbrella organization for open source edge computing that was announced by The Linux Foundation last week includes two new projects: Samsung Home Edge and Project EVE. We don’t know much about Samsung’s project for home automation, but we found out more about Project EVE, which is based on Zededa’s edge virtualization technology. Last week, we spoke with Zededa co-founder Roman Shaposhnik about Project EVE, which provides a cloud-native based virtualization engine for developing and deploying containers for industrial edge computers (see below).

LF Edge aims to establish “an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system.” It is built around The Linux Foundation’s telecom-oriented Akraino Edge Stack, as well as its EdgeX Foundry, an industrial IoT middleware project.

Like the mostly proprietary cloud-to-edge platforms emerging from Google (Google Cloud IoT Edge), Amazon (AWS IoT), Microsoft (Azure Sphere), and most recently Baidu (Open Edge), among others, LF Edge envisions a world where software running on IoT gateway and edge devices evolves top down from the cloud rather than from the ground up with traditional embedded platforms.

The Linux Foundation also supports numerous “ground up” embedded projects such as the Yocto Project and Iotivity, but with LF Edge it has taken a substantial step toward the cloud-centric paradigm. The touted benefits of a cloud-native approach for embedded include easier software development, especially when multiple apps are needed, and improved security via virtualized, regularly updated container apps. Cloud-native edge computing should also enable more effective deployment of cloud-based analytics on the edge while reducing expensive, high-latency cloud communications.

None of the four major cloud operators listed above are currently members of LF Edge, which poses a challenge for the organization. However, there’s already a deep roster of companies onboard, including Arm, AT&T, Dell EMC, Ericsson, HPE, Huawei, IBM, Intel, Nokia Solutions, Qualcomm, Radisys, Red Hat, Samsung, Seagate, and WindRiver (see the LF Edge announcement for the full list.)

With developers coming at the edge computing problem from both the top-down and bottom-up perspectives, often with limited knowledge of the opposite realm, the first step is agreeing on terminology. Back in June, the Linux Foundation launched an Open Glossary of Edge Computing project to address this issue. Now part of LF Edge, the Open Glossary effort “seeks to provide a concise collection of terms related to the field of edge computing.”

There’s no mention of Linux in the announcements for the LF Edge projects, all of which propose open source, OS-agnostic, approaches to edge computing. Yet, there’s no question that Linux will be the driving force here.

Project EVE aims to be the Android of edge computing

Project EVE is developing an “open, agnostic and standardized architecture unifying the approach to developing and orchestrating cloud-native applications across the enterprise edge,” says the Linux Foundation. Built around an open source EVE (Edge Virtualization Engine) version of the proprietary Edge Virtualization X (EVx) engine from Santa Clara startup Zededa, Project EVE aims to reinvent embedded using Docker containers and other open source cloud-native software such as Kubernetes. Cloud-native edge computing’s “simple, standardized orchestration” will enable developers to “extend cloud applications to edge devices safely without the need for specialized engineering tied to specific hardware platforms,” says the project.

Earlier this year, Zededa joined the EdgeX Foundry project, and its technology similarly targets the industrial realm. However, Project EVE primarily concerns the higher application level rather than middleware. The project’s cloud-native approach to edge software also connects it to another LF project: the Cloud Native Computing Foundation.

In addition to its lightweight virtualization engine, Project EVE also provides a zero-trust security framework. In conversation with Linux.com, Zededa co-founder Roman Shaposhnik proposed to consign the word “embedded” to the lower levels of simple, MCU-based IoT devices that can’t run Linux. “To learn embedded you have to go back in time, which is no longer cutting it,” said Shaposhnik. “We have millions of cloud-native software developers who can drive edge computing. If you are familiar with cloud-native, you should have no problem in developing edge-native applications.”

If Shaposhnik is critical of traditional, ground-up embedded development, with all its complexity and lack of security, he is also dismissive of the proprietary cloud-to-edge solutions. “It’s clear that building silo’d end-to-end integration cloud applications is not really flying,” he says, noting the dangers of vendor lock-in and lack of interoperability and privacy.

To achieve the goals of edge computing, what’s needed is a standardized, open source approach to edge virtualization that can work with any cloud, says Shaposhnik. Project EVE can accomplish this, he says, by being the edge computing equivalent of Android.

“The edge market today is where mobile was in the early 2000s,” said Shaposhnik, referring to an era when early mobile OSes such as Palm, BlackBerry, and Windows Mobile created proprietary silos. The iPhone changed the paradigm with apps and other advanced features, but it was the far more open Android that really kicked the mobile world into overdrive.

“Project EVE is doing with edge what Android has done with mobile,” said Shaposhnik. The project’s standardized edge virtualization technology is the equivalent of Android package management and Dalvik VM for Java combined, he added. “As a mobile developer you don’t think about what driver is being used. In the same way our technology protects the developer from hardware complexity.”

Project EVE is based on Zededa’s EVx edge virtualization engine, which currently runs on edge hardware from partners including Advantech, Lanner, SuperMicro, and Scalys. Zededa’s customers are mostly large industrial or energy companies that need timely analytics, which increasingly requires multiple applications.

“We have customers who want to optimize their wind turbines and need predictive maintenance and vibration analytics,” said Shaposhnik. “There are a half dozen machine learning and AI companies that could help, but the only way they can deliver their product is by giving them a new box, which adds to cost and complexity.”

A typical edge computer may need only a handful of different apps rather than the hundreds found on a typical smartphone. Yet, without an application management solution such as virtualized containers, there’s no easy way to host them. Other open source cloud-to-edge solutions that use embedded container technology to provide apps include the Balena IoT fleet management solution from Balena (formerly Resin.io) and Canonical’s container-like Ubuntu Core distribution.

Right now, the focus is on getting the open source version of EVx out the door. Project EVE plans to release a 1.0 version of EVE in the second quarter along with an SDK for developing EVE edge containers. An app store platform will follow later in the year. More information may be found in this Zededa blog post.

Learn more about LF Edge 

Top 5 Linux Distributions for Development in 2019

One of the most popular tasks undertaken on Linux is development. With good reason: Businesses rely on Linux. Without Linux, technology simply wouldn’t meet the demands of today’s ever-evolving world. Because of that, developers are constantly working to improve the environments with which they work. One way to manage such improvements is to have the right platform to start with. Thankfully, this is Linux, so you always have a plethora of choices.

But sometimes, too many choices can be a problem in and of itself. Which distribution is right for your development needs? That, of course, depends on what you’re developing, but certain distributions just make sense to use as a foundation for your task. I’ll highlight five distributions I consider the best for developers in 2019.

Ubuntu

Let’s not mince words here. Although the Linux Mint faithful are an incredibly loyal group (with good reason, their distro of choice is fantastic), Ubuntu Linux gets the nod here. Why? Because, thanks to the likes of AWS, Ubuntu is one of the most deployed server operating systems. That means developing on an Ubuntu desktop distribution makes for a much easier translation to Ubuntu Server. And because Ubuntu makes it incredibly easy to develop for, work with, and deploy containers, it makes perfect sense that you’d want to work with this platform. Couple that with Ubuntu’s inclusion of Snap packages, and Canonical’s operating system gets yet another boost in popularity.

But it’s not just about what you can do with Ubuntu, it’s how easily you can do it. For nearly every task, Ubuntu is an incredibly easy distribution to use. And because Ubuntu is so popular, chances are every tool and IDE you want to work with can be easily installed from the Ubuntu Software GUI (Figure 1).

If you’re looking for ease of use, simplicity of migration, and plenty of available tools, you cannot go wrong with Ubuntu as a development platform.

openSUSE

There’s a very specific reason why I add openSUSE to this list. Not only is it an outstanding desktop distribution, it’s also one of the best rolling releases you’ll find on the market. So if you want to develop with and release for the most recent software available, openSUSE Tumbleweed should be one of your top choices. If you want to leverage the latest releases of your favorite IDEs, if you always want to make sure you’re developing with the most recent libraries and toolkits, Tumbleweed is your platform.

But openSUSE doesn’t just offer a rolling release distribution. If you’d rather make use of a standard release platform, openSUSE Leap is what you want.

Of course, it’s not just about standard or rolling releases. The openSUSE platform also has a Kubernetes-specific release, called Kubic, which is based on Kubernetes atop openSUSE MicroOS. But even if you aren’t developing for Kubernetes, you’ll find plenty of software and tools to work with.

And openSUSE also offers the ability to select your desktop environment, or (should you choose) a generic desktop or server (Figure 2).

Fedora

Using Fedora as a development platform just makes sense. Why? The distribution itself seems geared toward developers. With a regular, six-month release cycle, developers can be sure they won’t be working with out-of-date software for long. This can be important when you need the most recent tools and libraries. And if you’re developing for enterprise-level businesses, Fedora makes for an ideal platform, as it is the upstream for Red Hat Enterprise Linux. What that means is the transition to RHEL should be painless. That’s important, especially if you hope to bring your project to a much larger market (one with deeper pockets than a desktop-centric target).

Fedora also offers one of the best GNOME experiences you’ll come across (Figure 3). This translates to a very stable and fast desktop.

But if GNOME isn’t your jam, you can opt to install one of the Fedora spins (which include KDE, XFCE, LXQT, Mate-Compiz, Cinnamon, LXDE, and SOAS).

Pop!_OS

I’d be remiss if I didn’t include System76’s platform, customized specifically for their hardware (although it does work fine on other hardware). Why would I include such a distribution, especially one that doesn’t really venture far away from the Ubuntu platform on which it is based? Primarily because this is the distribution you want if you plan on purchasing a desktop or laptop from System76. But why would you do that (especially given that Linux works on nearly all off-the-shelf hardware)? Because System76 sells outstanding hardware. With the release of their Thelio desktop, you have available one of the most powerful desktop computers on the market. If you’re developing seriously large applications (especially ones that lean heavily on very large databases or require a lot of processing power for compilation), why not go for the best? And since Pop!_OS is perfectly tuned for System76 hardware, this is a no-brainer.

Since Pop!_OS is based on Ubuntu, you’ll have all the tools available to the base platform at your fingertips (Figure 4).

Pop!_OS also defaults to encrypted drives, so you can trust your work will be safe from prying eyes (should your hardware fall into the wrong hands).

Manjaro

For anyone who likes the idea of developing on Arch Linux, but doesn’t want to jump through all the hoops of installing and working with Arch Linux, there’s Manjaro. Manjaro makes it easy to get an Arch Linux-based distribution up and running (as easily as installing and using, say, Ubuntu).

But what makes Manjaro developer-friendly (besides enjoying that Arch-y goodness at the base) is how many different flavors you’ll find available for download. From the Manjaro download page, you can grab the following flavors:

  • GNOME

  • XFCE

  • KDE

  • OpenBox

  • Cinnamon

  • I3

  • Awesome

  • Budgie

  • Mate

  • Xfce Developer Preview

  • KDE Developer Preview

  • GNOME Developer Preview

  • Architect

  • Deepin

Of note are the developer editions (which are geared toward testers and developers), the Architect edition (which is for users who want to build Manjaro from the ground up), and the Awesome edition (Figure 5), which is for developers dealing with everyday tasks. The one caveat to using Manjaro is that, like any rolling release, the code you develop today may not work tomorrow. Because of this, you need to think with a certain level of agility. Of course, if you’re not developing for Manjaro (or Arch), and you’re doing more generic (or web) development, that will only affect you if the tools you use are updated and no longer work for you. Chances of that happening, however, are slim. And as with most Linux distributions, you’ll find a ton of developer tools available for Manjaro.

Manjaro also supports the Arch User Repository (a community-driven repository for Arch users), which includes cutting edge software and libraries, as well as proprietary applications like Unity Editor or yEd. A word of warning, however, about the Arch User Repository: It was discovered that the AUR contained software considered to be malicious. So, if you opt to work with that repository, do so carefully and at your own risk.

Any Linux Will Do

Truth be told, if you’re a developer, just about any Linux distribution will work. This is especially true if you do most of your development from the command line. But if you prefer a good GUI running on top of a reliable desktop, give one of these distributions a try; they will not disappoint.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.


How Machine Learning Will Change Software Development

Artificial intelligence (AI) is not sci-fi anymore; machines have made their way into our lives with ever-increasing importance. Today, humans are teaching machines, and machines already affect the way we live, make choices, and are entertained.

There are many ways we already use AI in our everyday lives:

* We ask our devices to perform simple searching tasks, play music, or send messages without touching them.

* We are overwhelmed with sometimes creepy suggestions of things we “want” to buy, or lists of movies some smart algorithm says we will enjoy watching.

* We’re already used to the idea of self-driving cars.

* And we can’t ignore the convenience of the new auto-fill and follow-up Gmail features.

Machine Learning on Code

As AI technology matures and the number of use cases grows, you would think that developers would already be using machine learning to automate some aspects of the software development lifecycle. However, Machine Learning on Code is actually a field of research that is just starting to materialize into enterprise products. One of the pioneers of this movement is a company called source{d}, which is building a series of open source projects turning code into actionable data and training machine learning models to help developers respect technical guidelines.

With every company quickly becoming a software company, intangible assets such as code represent a larger share of their market value. Therefore companies should strive to understand their codebase through meaningful analytic reports to inform engineering decisions and develop a competitive advantage for the business.

On one hand, managers can use tools like the open source source{d} engine to easily retrieve and analyze all their Git repositories via a friendly SQL API. They can run it from any Unix system, and it will automatically parse their companies’ source code in a language-agnostic way to identify trends and measure progress made on key digital transformation initiatives.
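A hypothetical session might look like the sketch below. Here, `srcd` is the source{d} engine CLI, and the query assumes gitbase’s `commits` table; the schema and command names should be verified against the project’s current documentation before relying on them.

```shell
# A sketch, not a verified session: the table and column names follow the
# gitbase schema behind the source{d} engine and may differ in your version.
QUERY='SELECT repository_id, COUNT(*) AS n FROM commits GROUP BY repository_id'

if command -v srcd >/dev/null 2>&1; then
    srcd init ~/repos     # index every Git repository found under ~/repos
    srcd sql "$QUERY"     # run the SQL query against the indexed repos
else
    echo "srcd not installed; the query would be: $QUERY"
fi
```

Because the SQL interface is language-agnostic, the same kind of query can be pointed at any mix of repositories without per-language tooling.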

For example, as an engineering manager, you can track the evolution of your software portfolio. You can easily see which programming languages and which open source or proprietary frameworks are becoming more popular as part of your development process. With that extra visibility, it becomes a whole lot easier to decide who to hire and to develop a set of company-wide best practices.

On the other hand, developers can save an incredible chunk of time by training bots to review their code as they submit pull requests (PRs). Once enabled across a large set of repositories, this could automate part of the code review process and enable developers to ship secure, high-quality code faster than ever before.

At the moment, such a bot checks for common mistakes, makes sure the style and format of each commit are consistent with the existing codebase, and highlights hotspots that might need closer attention. That’s huge already and can clearly benefit not only developers but companies as well. Imagine how much time and how many resources you could save by delegating your code review to a bot capable of working 24/7.

Assisted or automated code review is not the only Machine Learning on Code use case. In the coming years, machine learning will be used to automate quality assurance and testing, as well as to predict bugs and hardware performance problems. For now, you can try source{d} Lookout and install it on your repository. It will listen for PRs, run analyzers, and comment on the results directly on GitHub.

This article was produced in partnership with Holberton School.


LF Edge: Bringing Complementary Initiatives Together to Advance Emerging Applications at the Edge

Earlier today, we announced the launch of LF Edge, a new umbrella organization designed to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system. The goal is to foster a unified, open framework that brings complementary projects under one central umbrella to create collaborative solutions that are compatible and support the ecosystem.

LF Edge comprises five anchor projects: the existing Akraino Edge Stack, EdgeX Foundry, and Open Glossary of Edge Computing, as well as two new projects – Home Edge Project and Project EVE – with seed code and initial architecture donated by Samsung and ZEDEDA, respectively. (More details on these projects are available on the new LF Edge website.)

Everyone is talking about Edge computing, but what does it really mean?

That is the million-dollar question. Here’s how I like to define “edge computing”: it’s a distributed computing paradigm in which computation is largely or completely performed on distributed device nodes known as “smart devices” or “edge devices,” with between five and 20 milliseconds of latency (as opposed to primarily taking place in a centralized cloud environment). Edge represents a convergence of technologies that have recently matured or are coming to market, including 5G, Artificial Intelligence, Deep Learning, Analytics, and Hardware. Relatedly, emerging edge applications and the convergence of these technologies are demanding, and fueling, lower latency and accelerated processing.

Another way to answer the “what is edge” question is: anything that is non-traditional video, or anything that is connected and moves (e.g., drones, cars, etc.). These emerging technologies are really driving the market.

All that said, there is a strong market opportunity for edge applications, and it spans industrial, enterprise, and consumer use cases in complex environments across multiple edges and domains. Primary examples include industrial manufacturing, energy (oil and gas), retail, homes (including B2B2C use cases), and automotive, with interest also from sectors such as fleet/transportation, logistics, building automation, cities and governments, healthcare, and more.

Another leading use case for edge applications is video. Several months ago, IHS Markit interviewed edge computing thought leaders to discover which applications run on the edge, deployment timing, revenue potential, and the existing and expected barriers and difficulties of deployment. The survey found that 92 percent of the respondents cited video as the top application for edge computing, and that 82 percent of edge traffic will be occupied by video applications by 2020. (More details on this research are available in my blog post from September 2018.)

LF Edge – Why Now?

The current edge market is heavily fragmented, with multiple proprietary stacks for each public cloud. Every application and hardware manufacturer has to certify for individual cloud platforms such as AWS and Microsoft Azure. The open source market for edge is also currently fragmented, with a proliferation of groups working in silos toward similar goals. By adopting the umbrella formula used by other existing LF projects such as CNCF and LF Networking, LF Edge will provide an open framework to address market needs for edge and IoT by combining new and existing stacks and consolidating them into a single, customizable framework.

Additional benefits LF Edge brings to the ecosystem include the establishment of an edge framework that is independent of hardware, silicon, cloud, or operating system, and that introduces location and latency differentiation to edge applications. LF Edge is well positioned to collaborate across standards bodies and consortiums (e.g., IIC, AECC, OEC, TIP) by developing code that complements existing industry specifications. The project will also complement existing ecosystems such as AWS and Azure by introducing standard APIs.

In sum, LF Edge was established to create a common framework for hardware and software specifications and best practices critical to sustaining current and future generations of IoT and edge devices. This new community-forged organization will help ensure greater harmonization to accelerate deployment among the rapidly growing number of edge devices, slated to exceed 20 billion by 2020.

To learn more about LF Edge, read the press release and visit the new website, www.lfedge.org. You can also follow the project on Twitter at @lf_edge.