
The Linux Foundation Transforms the Energy Industry with New Initiative: LF Energy

We are thrilled to introduce the new LF Energy initiative to support and promote open source in the energy and electricity sectors. LF Energy is focused on accelerating the energy transition, including the move to renewable energy, electric mobility, demand response and more.

Open source has transformed industries as vast and different as telecommunications, financial services, automobiles, healthcare, and consumer products. Now we are excited to bring the same level of open collaboration and shared innovation to the power systems industry.

We are also honored that several global, highly influential energy leaders and research institutions are supporting The Linux Foundation in this initiative, including RTE (Europe’s biggest transmission system operator), the European Network of Transmission System Operators, Vanderbilt University, and the Electric Power Research Institute.

LF Energy welcomes four new projects as part of the initiative, and we plan to host numerous information and communication technology (ICT) projects that will advance everything from smart assistants for system operators to advanced grid controls, analytics, and planning software.

Read more at The Linux Foundation


5 Reasons Open Source Certification Matters More Than Ever

In today’s technology landscape, open source is the new normal, with open source components and platforms driving mission-critical processes and everyday tasks at organizations of all sizes. As open source has become more pervasive, it has also profoundly impacted the job market. Across industries the skills gap is widening, making it ever more difficult to hire people with much needed job skills. In response, the demand for training and certification is growing.

In a recent webinar, Clyde Seepersad, General Manager of Training and Certification at The Linux Foundation, discussed the growing need for certification and some of the benefits of obtaining open source credentials. “As open source has become the new normal in everything from startups to Fortune 2000 companies, it is important to start thinking about the career road map, the paths that you can take and how Linux and open source in general can help you reach your career goals,” Seepersad said.

With all this in mind, this is the first article in a weekly series that will cover: why it is important to obtain certification; what to expect from training options that lead to certification; and how to prepare for exams and understand what your options are if you don’t initially pass them.

Seepersad pointed to these five reasons for pursuing certification:

  • Demand for Linux and open source talent. “Year after year, we do the Linux jobs report, and year after year we see the same story, which is that the demand for Linux professionals exceeds the supply. This is true for the open source market in general,” Seepersad said. For example, certifications such as the LFCE, LFCS, and OpenStack administrator exam have made a difference for many people.

  • Getting the interview. “One of the challenges that recruiters always reference, especially in the age of open source, is that it can be hard to decide who you want to have come in to the interview,” Seepersad said. “Not everybody has the time to do reference checks. One of the beautiful things about certification is that it independently verifies your skillset.”

  • Confirming your skills. “Certification programs allow you to step back, look across what we call the domains and topics, and find those areas where you might be a little bit rusty,” Seepersad said. “Going through that process and then being able to demonstrate skills on the exam shows that you have a very broad skillset, not just a deep skillset in certain areas.”

  • Confidence. “This is the beauty of performance-based exams,” Seepersad said. “You’re working on our live system. You’re being monitored and recorded. Your timer is counting down. This really puts you on the spot to demonstrate that you can troubleshoot.” The inevitable result of successfully navigating the process is confidence.

  • When you’re the hiring manager, you will appreciate certification. “As you become more senior in your career, you’re going to find the tables turned and you are in the role of making a hiring decision,” Seepersad said. “You’re going to want to have candidates who are certified, because you recognize what that means in terms of the skillsets.”

Although Linux has been around for more than 25 years, “it’s really only in the past few years that certification has become a more prominent feature,” Seepersad noted. As a matter of fact, 87 percent of hiring managers surveyed for the 2018 Open Source Jobs Report cite difficulty in finding the right open source skills and expertise. The Jobs Report also found that hiring open source talent is a priority for 83 percent of hiring managers, and half are looking for candidates holding certifications.

With certification playing a more important role in securing a rewarding long-term career, are you interested in learning about options for gaining credentials? If so, stay tuned for more information in this series.

Learn more about Linux training and certification.


SBC Clusters — Beyond Raspberry Pi

Cluster computers constructed of Raspberry Pi SBCs have been around for years, ranging from supercomputer-like behemoths to simple hobbyist rigs. More recently, we’ve seen cluster designs that use other open-spec hacker boards, many of which offer higher compute power and faster networking at the same or lower price. Further below, we’ll examine one recent open source design from Nick Smith at Climbers.net that combines 12 octa-core NanoPi-Fire3 SBCs for a 96-core cluster.

SBC-based clusters primarily fill the needs of computer researchers who find it too expensive to book time on a server-based HPC (high-performance computing) cluster. Large-scale HPC clusters are in such high demand that it’s hard to find available cluster time in the first place.

Research centers and universities around the world have developed RPi-based cluster computing for research into parallel computing, deep learning, medical research, weather simulations, cryptocurrency mining, software-defined networks, distributed storage, and more. Clusters have been deployed to provide a high degree of redundancy or to simulate massive IoT networks, such as with Resin.io’s 144-pi Beast v3.

Even the largest of these clusters comes nowhere close to the performance of server-based HPC clusters. Yet, in many research scenarios, top performance is not essential. It’s the combination of separate cores running in parallel that makes the difference. The Raspberry Pi based systems typically use the MPI (Message Passing Interface) library for exchanging messages between computers to deploy a parallel program across distributed memory.
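For readers who haven’t used MPI, the pattern it enables — scatter work across nodes, compute in parallel, then gather and reduce the results — can be sketched on a single machine with Python’s standard multiprocessing module. This is only an illustrative stand-in: on a real cluster you would use an actual MPI implementation (for example via mpi4py), with each chunk running on a separate board.

```python
# Scatter/compute/gather sketch of the parallel pattern MPI clusters use,
# simulated here with worker processes on one machine.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each "node" computes over its own slice of the data.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(n, workers=4):
    data = list(range(n))
    size = (n + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, n, size)]  # scatter
    with Pool(workers) as pool:
        partials = pool.map(partial_sum, chunks)            # parallel compute
    return sum(partials)                                    # gather/reduce

if __name__ == "__main__":
    # Matches the serial result, but the work was split across processes.
    print(parallel_sum_of_squares(1000))
```

The same split/compute/combine structure is what MPI programs express with scatter, point-to-point sends, and reduce operations across physically separate boards.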

BitScope, a leader in Pi cluster hardware with products such as the BitScope Blade, has developed a system with Los Alamos National Laboratory based on its larger BitScope Cluster Module. The Los Alamos hosted system comprises five racks of 150 Raspberry Pi 3 SBCs each. Multiply those 750 boards by the four Cortex-A53 cores on each Pi and you get a 3,000-core parallelized supercomputer.

The Los Alamos system is said to be far more affordable and power efficient than building a dedicated testbed of the same size using conventional technology, which would cost a quarter billion dollars and use 25 megawatts of electricity. There are now plans to move to a 4,000-core Pi cluster.

Most clusters are much smaller, 5- to 25-board rigs typically deployed by educators, hobbyists, embedded engineers, and even artists and musicians. These range from open source DIY designs to commercial hardware rack systems designed to power and cool multiple densely packed compute boards.

96-core NanoPi Fire3 cluster shows impressive benchmarks

The 96-core cluster computer recently detailed on Climbers.net is the largest of several cluster designs developed by Nick Smith. These include a 40-core system based on the octa-core NanoPC-T3, and others that use the Pine A64+, the Orange Pi Plus 2E, and various Raspberry Pi models. (All these SBCs can be found in our recently concluded hacker board reader survey.)

The new cluster, which was spotted by Worksonarm and further described on CNXSoft, uses FriendlyElec’s open-spec NanoPi Fire3.

The open source cluster design includes Inkscape code for laser-cutter construction. Smith made numerous changes to his earlier clusters intended to increase heat dissipation, improve durability, and reduce space, cost, and power consumption. These include using two 7W case fans instead of one and moving to a GbE switch. The Bill of Materials ran to just over £543 ($717), with the NanoPi Fire3 boards totaling £383, including shipping. The next biggest item was £62 for microSD cards.

The $35 Fire3 SBC, which measures only 75x40mm, houses a powerful Samsung S5P6818 SoC. It features 8x Cortex-A53 cores at up to 1.4GHz and a Mali-400 MP4 GPU, which runs a bit faster than the Raspberry Pi’s VideoCore IV.

Although the Fire3 has only twice the number of Cortex-A53 cores as the Raspberry Pi 3, and is clocked only slightly faster, Smith’s benchmarks showed a surprising 6.6x CPU performance boost over a similar RPi 3 cluster. GPU performance was 7.5x faster.

It turned out that much of the performance improvement was due to the Fire3’s native, PCIe-based Gigabit Ethernet port, which enabled the clustered SBCs to communicate more quickly with one another to run parallel computing applications. By comparison, the Raspberry Pi 3 has a 10/100Mbps port.

Performance would no doubt improve if Smith had used the new Raspberry Pi 3 Model B+, which offers a Gigabit Ethernet port. However, since the B+ port is based on USB 2.0, its Ethernet throughput is only three times faster than the Model B’s 10/100 port instead of about 10 times faster for the Fire3.

Still, that’s a significant throughput boost, and combined with the faster 1.4GHz clock rate, the RPi 3 B+ should quickly replace the RPi 3 Model B in Pi-based cluster designs. BitScope recently posted an enthusiastic review of the B+. In addition to the performance improvements, the review cites the improved heat dissipation from the PCB design and the “flip chip on silicon” BGA package for the Broadcom SoC, which uses heat spreading metal. The upcoming Power-over-Ethernet capability should also open new possibilities for clusters, says the review.

Hacker board community sites are increasingly showcasing cluster designs — here’s a cluster case design for the Orange Pi One on Thingiverse — and some vendors offer cluster hardware of their own. Hardkernel’s Odroid project, for example, came out with a 4-board, 32-core Odroid-MC1 cluster computer based on the Odroid-XU4S SBC, a modified version of the Odroid-XU4, which won third place in our hacker board survey. The board uses the same octa-core Samsung Exynos5422 SoC. More recently, it released an Odroid-MC1 Solo version that lets you choose precisely how many boards you want to add.

The Odroid-MC1 products are primarily designed to run Docker Swarm. Many of the cluster systems are designed to run Docker or other cloud-based software. Last year Alex Ellis, for example, posted a tutorial on creating a Serverless Raspberry Pi cluster that runs Docker and the OpenFaaS framework. Indeed, as with edge computing devices running modified versions of cloud software, such as AWS Greengrass, cluster computers based on SBCs show another example of how the embedded and enterprise server worlds are interacting in interesting new ways using Linux.

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.


Certification Plays Big Role in Open Source Hiring

With high demand for Linux professionals and a shortage of workers with these skills, it’s small wonder that employers are willing not only to train their staff but also to help them get certified. Forty-two percent of employers report having trained existing workers on new open source technologies this year to meet their needs, compared to only 30 percent in 2017, according to the 2018 Open Source Jobs report.

The report, produced by Dice and The Linux Foundation, also found that 38 percent of companies are less likely to rely on outside consultants, compared with 47 percent in 2017. Consequently, they are turning to training to keep up in a fast-paced, ever-changing tech environment. Sixty-four percent of hiring managers say their employees are requesting or taking training courses on their own – the exact same percentage as last year.

Why? There is a strong belief that IT certifications are a reliable predictor of a successful employee, according to IT trade association CompTIA. In its own research, CompTIA found five reasons why 91 percent of employers believe IT certifications play a big role in the hiring process:

  • Certifications help fill open positions

  • Most companies have IT staff who have certifications

  • Certified IT pros make great employees

  • IT certifications are increasing in importance

  • Training alone is not enough

Certification as an incentive

Forty-two percent of employers are using training and certification opportunities as an incentive to retain employees, up from 33 percent last year and 26 percent in 2016, this year’s Open Source Jobs Report found. Underscoring the importance employers place on certifications: Nearly half (47 percent) of hiring managers say employing certified open source professionals is a priority for them, essentially the same number as last year.

The same percentage say they are more likely to hire a certified professional than one without a certification. An increasing number of companies are willing to pay for certifications, with 55 percent reporting that they helped cover the costs of certifications this year, up from 47 percent last year and 34 percent in 2016. Only 17 percent say they would not pay for certifications, a decline from 21 percent last year and 30 percent in 2016.

Certification is a benefit that can be used as a recruiting tool, and employers that offer certification courses for full-time employees should mention it in job postings, the report stresses. Similarly, professionals seeking this benefit should make clear during the interview process their desire to continue their education and become certified while employed.

However, there continues to be debate over the value of certifications versus on-the-job experience. There are many seasoned tech professionals who claim years of experience is more important, yet the average certification now represents a 7.6 percent premium on an IT pro’s base salary, according to research firm Foote Partners, which publishes an annual IT Skills and Certifications Pay Index. Specifically, gains were seen in networking and communications and applications development and programming language certifications, the firm says.

A significant majority (80 percent) of open source professionals say certifications are useful to their careers, up slightly from 76 percent in the previous two years. The main reasons cited are that certifications enable employees to demonstrate technical knowledge to potential employers (stated by 45 percent of respondents), and certifications make professionals more employable in general (33 percent). Forty-seven percent of open source professionals plan to take at least one certification exam this year, up from 40 percent in 2017.

Vendor neutrality matters

Employers increasingly want vendor neutrality in their training providers, with 77 percent of hiring managers rating this as important, up from 68 percent last year and 63 percent in 2016. Almost all types of training have increased this year, with online/virtual courses being the most popular. Sixty-six percent of employers report offering this benefit, compared to 63 percent in 2017 and 49 percent in 2016. Forty percent of hiring managers say they are providing onsite training, up from 39 percent last year and 31 percent in 2016; and 49 percent provide individual training courses, the same as last year.

Additionally, employers say they increasingly see benefits from sending employees to conferences. Fifty-six percent of hiring managers said they pay for employees to attend technical conferences, up from 46 percent in 2017.

Download the complete Open Source Jobs Report now and learn more about Linux certification here.


Users, Groups, and Other Linux Beasts

Having reached this stage, after seeing how to manipulate folders/directories, but before flinging ourselves headlong into fiddling with files, we have to brush up on the matter of permissions, users and groups. Luckily, there is already an excellent and comprehensive tutorial on this site that covers permissions, so you should go and read that right now. In a nutshell: you use permissions to establish who can do stuff to files and directories and what they can do with each file and directory — read from it, write to it, move it, erase it, etc.
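As a quick taste of what those permission bits look like in practice, here is a small Python sketch (assuming a Unix-like system) that sets permissions on a throwaway file and reads them back in the familiar ls-style notation:

```python
# Set owner/group/other permissions on a file and read them back.
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

os.chmod(path, 0o640)          # owner: read+write, group: read, others: nothing
mode = os.stat(path).st_mode
print(stat.filemode(mode))     # prints -rw-r-----

os.remove(path)
```

The string printed by stat.filemode() is the same permission column you see in the output of ls -l.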

To try everything this tutorial covers, you’ll need to create a new user on your system. Let’s be practical and make a user for anybody who needs to borrow your computer, that is, what we call a guest account.

WARNING: Creating and especially deleting users, along with their home directories, can seriously damage your system if, for example, you remove your own user and files by mistake. You may want to practice on a machine that is not your main work machine, or on a virtual machine. Whether or not you want to play it safe, it is always a good idea to back up your stuff frequently, check that the backups have worked correctly, and save yourself a lot of gnashing of teeth later on.

A New User

You can create a new user with the useradd command. Run useradd with superuser/root privileges; that is, using sudo or su, depending on your system. With sudo you can do:

sudo useradd -m guest

… and input your password. Or do:

su -c "useradd -m guest"

… and input the password of root/the superuser.

(For the sake of brevity, we’ll assume from now on that you get superuser/root privileges by using sudo).

By including the -m argument, useradd will create a home directory for the new user. You can see its contents by listing /home/guest.

Next you can set up a password for the new user with

sudo passwd guest

Or you could use adduser, which is interactive and asks you a bunch of questions, including what shell you want to assign the user (yes, there is more than one), where you want their home directory to be, what groups you want them to belong to (more about that in a second), and so on. At the end of running adduser, you get to set the password. Note that adduser is not installed by default on many distributions, while useradd is.

Incidentally, you can get rid of a user with userdel:

sudo userdel -r guest

With the -r option, userdel not only removes the guest user, but also deletes their home directory and removes their entry in the mail spool, if they had one.

Skeletons at Home

Talking of users’ home directories, depending on what distro you’re on, you may have noticed that when you use the -m option, useradd populates a user’s directory with subdirectories for music, documents, and whatnot, as well as an assortment of hidden files. To see everything in your guest’s home directory, run sudo ls -la /home/guest.

What goes into a new user’s directory is determined by a skeleton directory, which is usually /etc/skel. Sometimes it may be a different directory, though. To check which directory is being used, run:

useradd -D
GROUP=100
HOME=/home
INACTIVE=-1
EXPIRE=
SHELL=/bin/bash
SKEL=/etc/skel
CREATE_MAIL_SPOOL=no

This gives you some other interesting information as well, but what you’re interested in right now is the SKEL=/etc/skel line. In this case, and as is customary, it is pointing to /etc/skel/.

As everything is customizable in Linux, you can, of course, change what gets put into a newly created user directory. Try this: Create a new directory in /etc/skel/:

sudo mkdir /etc/skel/Documents

And create a file containing a welcome text and copy it over:

sudo cp welcome.txt /etc/skel/Documents

Now delete the guest account:

sudo userdel -r guest

And create it again:

sudo useradd -m guest

Hey presto! Your Documents/ directory and welcome.txt file magically appear in the guest’s home directory.

You can also modify other things when you create a user by editing /etc/default/useradd. Mine looks like this:

GROUP=users
HOME=/home
INACTIVE=-1
EXPIRE=
SHELL=/bin/bash
SKEL=/etc/skel
CREATE_MAIL_SPOOL=no

Most of these options are self-explanatory, but let’s take a closer look at the GROUP option.

Herd Mentality

Instead of assigning permissions and privileges to users one by one, Linux and other Unix-like operating systems rely on groups. A group is what you imagine it to be: a bunch of users that are related in some way. On your system, you may have a group of users that are allowed to use the printer; they would belong to the lp (for “line printer”) group. The members of the wheel group were traditionally the only ones who could become superuser/root by using su. The network group of users can bring up and power down the network. And so on and so forth.

Different distributions have different groups, and groups with the same or similar names may have different privileges depending on the distribution you are using. So don’t be surprised if what you read in the prior paragraph doesn’t match what is going on in your system.

Either way, to see which groups are on your system you can use:

getent group

The getent command lists the contents of some of the system’s databases.

To find out which groups your current user belongs to, try:

groups

When you create a new user with useradd, unless you specify otherwise, the user will only belong to one group: their own. A guest user will belong to a guest group, which gives the user the power to administer their own stuff, and that is about it.
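You can do the same lookups programmatically. Python’s standard pwd and grp modules read the same user and group databases that getent does. A sketch for Unix-like systems follows; it queries root rather than guest, since the guest user only exists if you created it:

```python
# List the groups a user belongs to: their primary group (looked up in
# the password database) plus any supplementary groups that list them.
import grp
import pwd

def groups_for_user(username):
    primary_gid = pwd.getpwnam(username).pw_gid
    names = {grp.getgrgid(primary_gid).gr_name}
    names.update(g.gr_name for g in grp.getgrall() if username in g.gr_mem)
    return sorted(names)

if __name__ == "__main__":
    print(groups_for_user("root"))
```

This mirrors what the groups command prints, just built from the raw database entries instead.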

You can create new groups with the groupadd command and then add users to them at will:

sudo groupadd photos

will create the photos group, for example. Next time, we’ll use this to build a shared directory all members of the group can read from and write to, and we’ll learn even more about permissions and privileges. Stay tuned!

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.


Xen Project Hypervisor: Virtualization and Power Management are Coalescing into an Energy-Aware Hypervisor

Power management in the Xen Project Hypervisor has historically targeted server applications, improving power consumption and heat management in data centers and reducing electricity and cooling costs. In the embedded space, the Xen Project Hypervisor faces very different applications, architectures, and power-related requirements, which focus on battery life, heat, and size.

Although the same fundamental principles of power management apply, the power management infrastructure in the Xen Project Hypervisor requires new interfaces, methods, and policies tailored to embedded architectures and applications. This post recaps Xen Project power management, how the requirements change in the embedded space, and how this change may unite the hypervisor and power manager functions.   

Evolution of Xen Project Power Management on x86

Time-sharing of computer resources by different virtual machines (VMs) was the precursor to scheduling and virtualization. Sharing time on the basis of workload estimates was a good and simple enough proxy for sharing energy. As in all mainstream OSes, energy and power management in the Xen Project Hypervisor came as an afterthought.

Intel and AMD developed the first forms of power management for the Xen Project with the x86_64 architecture. Initially, the Xen Project used the `hlt` instruction for CPU idling and didn’t have any support for deeper sleep states. Then, support for suspend-to-RAM, also known as ACPI S3, was introduced. It was entirely driven by Dom0 and meant to support manual machine suspensions by the user, for instance when the lid is closed on a laptop. It was not intended to reduce power utilization under normal circumstances. As a result, power saving was minimal and limited to the effects of `hlt`.

Finally, Intel introduced support for cpu-freq in the Xen Project in 2007. This was the first non-trivial form of power management for the Xen Project. Cpu-freq decreases the CPU frequency at runtime to reduce power consumption when the CPU is only lightly utilized. Again, cpu-freq was entirely driven by Dom0: the hypervisor allowed Dom0 to control the frequency of the underlying physical CPUs.
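The idea behind a cpu-freq governor can be sketched in a few lines: sample CPU utilization, step the frequency up when the CPU is busy, and step it down when it is mostly idle. The toy Python policy below is purely illustrative — the real Xen governors live in the hypervisor and work with hardware-specific frequency tables, and the thresholds here are invented:

```python
# Toy "ondemand"-style frequency governor: step up under load,
# step down when idle, stay put otherwise. Frequencies in kHz.
def next_frequency(current_khz, utilization, available_khz,
                   up_threshold=0.8, down_threshold=0.3):
    freqs = sorted(available_khz)
    i = freqs.index(current_khz)
    if utilization > up_threshold and i < len(freqs) - 1:
        return freqs[i + 1]   # busy: raise frequency one step
    if utilization < down_threshold and i > 0:
        return freqs[i - 1]   # mostly idle: lower it one step
    return current_khz        # in between: leave it alone

freqs = [600_000, 1_000_000, 1_400_000]
print(next_frequency(1_000_000, 0.95, freqs))  # prints 1400000
print(next_frequency(1_000_000, 0.10, freqs))  # prints 600000
```

Lowering the frequency saves power roughly in proportion to frequency (and more when the voltage can drop too), which is why this is worthwhile whenever the CPU is only lightly utilized.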

Not only was this backward from the Xen architecture point of view, but the approach was severely limiting. Dom0 didn’t have a full view of the system, which it needed to make the right decisions. In addition, it required one virtual CPU in Dom0 for each physical CPU, with each Dom0 virtual CPU pinned to a different physical CPU. It was not a viable option in the long run.

To address this issue, cpu-freq was re-architected, moving the cpu-freq driver to the hypervisor. Thus, Xen Project became able to change CPU frequency and make power saving decisions by itself, solving these issues.

Intel and AMD introduced support for deep sleep states around the same time as the cpu-freq redesign. The Xen Project Hypervisor added the ability to idle physical CPUs beyond the simple `hlt` instruction. Deeper sleep states, also known as ACPI C-states, have better power-saving properties but come with a higher latency cost: the deeper the sleep state, the more power is saved, but the longer it takes to resume normal operation. The decision to enter a sleep state is based on two variables: time and energy. However, scheduling and idling remain largely separate activities; as an example, the scheduler has very limited influence on the choice of the particular sleep state.
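The time-versus-energy trade-off can be made concrete with a break-even calculation: a deeper state only pays off if the CPU stays idle long enough for the power saved to outweigh the energy spent entering and leaving the state, and the state’s exit latency must respect any wakeup deadline. The state names and all numbers in this Python sketch are invented for illustration — they are not real hardware values or Xen’s actual policy:

```python
# Pick the sleep state with the lowest total energy over a predicted
# idle window, skipping states whose exit latency breaks the deadline.
# All values below are made up for illustration.
from dataclasses import dataclass

@dataclass
class SleepState:
    name: str
    idle_power_mw: float         # power drawn while in the state
    transition_energy_uj: float  # energy to enter and exit the state
    exit_latency_us: float       # time needed to resume execution

STATES = [
    SleepState("C1", idle_power_mw=400.0, transition_energy_uj=1.0, exit_latency_us=2.0),
    SleepState("C3", idle_power_mw=150.0, transition_energy_uj=50.0, exit_latency_us=50.0),
    SleepState("C6", idle_power_mw=30.0, transition_energy_uj=300.0, exit_latency_us=200.0),
]

def pick_state(predicted_idle_us, latency_limit_us, active_power_mw=1000.0):
    # Baseline: just stay awake at active power (mW * us / 1000 = uJ).
    best_name = "poll"
    best_energy = active_power_mw * predicted_idle_us / 1000.0
    for s in STATES:
        if s.exit_latency_us > latency_limit_us:
            continue  # would violate the wakeup-latency requirement
        energy = s.transition_energy_uj + s.idle_power_mw * predicted_idle_us / 1000.0
        if energy < best_energy:
            best_name, best_energy = s.name, energy
    return best_name

print(pick_state(10_000, 1_000))  # long idle, loose deadline: a deep state wins
print(pick_state(10_000, 100))    # same idle, tight deadline: a shallower state
```

For very short predicted idle periods, no state recoups its transition energy, and the cheapest option is simply not to sleep at all.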

Xen Project Power Management on Arm

The first Xen release with Arm support was Xen 4.3 in 2013, but power management on Arm was not actively addressed until very recently. One of the reasons may be the dominance of proprietary and in-house hypervisors for Arm in the embedded space and the overwhelming prevalence of x86 for servers. Due to the Xen Project’s maturity, its open source model, and its wide deployment, it is frequently used today in a variety of Arm-based applications. Power management support for the Xen Project Hypervisor on Arm is becoming essential, particularly in the embedded world.

In our next blog post, we will cover architectural choices for Xen on Arm in the embedded world and use cases on how to make this work.

Xen Power Management for Embedded Applications

Embedded applications require the same OS isolation and security capabilities that motivated the development of server virtualization, but come with a wider variety of multicore architectures, guest OSes, and virtual-to-physical hardware mappings. Moreover, most embedded designs are highly sensitive to the deterioration in performance, memory size, power efficiency, and wakeup latency that often comes with hypervisors. As embedded devices become cooler, quieter, smaller, and battery powered, efficient power management emerges as a vital requirement for the successful adoption of hypervisors in the embedded community.

Standard non-virtualized embedded devices manage power at two levels: the platform level and the OS level. At the platform level, the platform manager typically executes on dedicated on-chip or on-board processors and microcontrollers. It monitors and controls the energy consumption of the CPUs, the peripherals, the CPU clusters, and all board-level components by changing the frequencies, voltages, and functional states of the hardware. However, it has no intrinsic knowledge about the running applications, which is necessary for making the right decisions to save power.

This knowledge is provided by the OS, or, in some cases, directly by the application software itself. The Power State Coordination Interface (PSCI) and the Extensible Energy Management Interface (EEMI) are used to coordinate the power events between the platform manager, the OSes, and the processing clusters. Whereas PSCI coordinates the power events among the CPUs of a single processor cluster, EEMI is responsible for the peripherals and the power interaction between multiple clusters.

Contrary to the ACPI-based power management for x86 architectures typical of desktops and servers, PSCI and EEMI allow for much more direct control and enable precise power management of virtual clusters. In embedded systems, every microjoule counts, so precision in terms of the timing and scope of power management actions is essential.

When a virtualization layer is inserted between the OSes and the platform manager, it effectively enables additional virtual clusters, which come with virtual CPUs, virtual peripherals, and even physical peripherals with device passthrough. The EEMI power coordination of the virtual clusters can execute in the platform manager, the hypervisor, or both. If the platform manager is selected, the power management can be made very precise, but at the expense of firmware memory bloat, as it needs to manage not only the fixed physical clusters but also the dynamically created virtual clusters.

Additionally, the platform manager requires stronger processing capabilities to optimally manage power, especially if it takes the cluster and system loads into consideration. As platform managers typically reside in low-power domains, both memory space and processing power are in short supply.

The hypervisor usually executes on powerful CPU clusters, so it has enough memory and processing power at its disposal. It is also well informed about the partitioning and load of the virtual clusters, making it the ideal place to manage power. However, for proper power management, the hypervisor also requires an accurate energy model of the underlying physical clusters. Similar to the energy-aware scheduler in Linux, the hypervisor must coalesce the sharing of time and energy to manage power properly. In this case, OS-based power management is effectively transformed into hypervisor-based power management.

The Hypervisor and Power Manager Coalesce

Most embedded designs consist of multiple physical clusters or subsystems that are frequently put into inactive low-power states to save energy, such as sleep, suspend, hibernate, or power-off suspend. Typical examples are the application, real-time video, or accelerator clusters that own multiple CPUs and share the system memory, peripherals, board-level components, and the energy source. If all the clusters enter low-power states, their respective hypervisors are inactive, and the always-on platform manager has to take over sole responsibility for system power management. Once the clusters become active again, power management is passed back to the respective hypervisors. In order to secure optimum power management, the hypervisors and the power manager have to act as one, ultimately coalescing into distributed system software covering both performance and power management.

A good example of a design in action indicative of such evolution is the power management support for the Xilinx Zynq UltraScale+ MPSoC. The Xen hypervisor running in the Application Processing Unit (APU) and the power manager in the Power Management Unit (PMU) have already evolved into a tight bundle around EEMI based power management and shall further evolve with the upcoming EEMI clock support.

The next blog in this series will cover the suspend-to-RAM feature for the Xen Project Hypervisor targeting the Xilinx Zynq UltraScale+ MPSoC, which lays the foundation for full-scale power management on Arm architectures.

Authors:

Vojin Zivojnovic, CEO and Co-Founder at AGGIOS

Stefano Stabellini, Principal Engineer at Xilinx and Xen Project Maintainer

Posted on Leave a comment

Last Chance to Speak at Hyperledger Global Forum | Deadline is This Friday

Hyperledger Global Forum is the premier event showcasing the real uses of distributed ledger technologies for businesses and how these innovative technologies run live in production networks today. Hyperledger Global Forum unites the industry’s most respected thought leaders, domain experts, and key maintainers behind popular frameworks and tools like Hyperledger Fabric, Sawtooth, Indy, Iroha, Composer, Explorer, and more.

The Hyperledger Global Forum agenda will include both technical and enterprise tracks covering everything from distributed ledger technologies to Smart Contracts 101, roadmaps for Hyperledger projects, cross-industry keynotes, panels on use cases in development, and much more. Hyperledger Global Forum will also facilitate social networking for the community to bond.

Learn more about submitting a proposal, review suggested technical and business topics, and see sample submissions. The deadline to submit proposals is Friday, July 13, so apply today!

Submit Now >>

Not submitting a session, but plan to attend? Register now and save before ticket prices increase on September 30.

This article originally appeared at Hyperledger

Posted on Leave a comment

Open Collaboration in Practice at Open Source Summit

A key goal in my career is growing the understanding and best practice of how communities, and open source communities in particular, can work well together. There is a lot of nuance to this work, and the best way to build a corpus of best practice is to bring people together to share ideas and experience.

In service of this, last year I reached out to The Linux Foundation about putting together an event focused on these “people” elements of Open Source such as community management, collaborative workflow, governance, managing conflict, and more. It was called the Open Community Conference, which took place at the Open Source Summit events in Los Angeles and Prague, and everything went swimmingly.

This train, though, has to keep moving, and we realized that the scope of the event needed broadening. What about legal, compliance, standards, and other similar topics? They needed a home, and this event seemed like a logical place to house them. So, in a roaring display of rebranding, we renamed the event the Open Collaboration Conference. It happens again at the Open Source Summit, this year in Vancouver from August 29-31 and then in Edinburgh from October 22-24, 2018.

The upcoming event in Vancouver is looking fantastic. Just like last year, we had a raft of submissions, so thanks, everyone, for making my job of choosing the final set of talks (rightly) difficult.

Featured Talks

Unsurprisingly, we have some really remarkable speakers, from a raft of different organizations, backgrounds, and disciplines. This includes:

Oh, and I will be speaking, too, delivering a new presentation called “Building Effective Community Leaders: A Guide.” It will cover key principles of leadership and how to bake them into your community, company, or other organization.

In addition to this, don’t forget the fantastic networking, evening events, and other goodness that will be jammed into an exciting few days. As usual, this all takes place at the Open Source Summit, and you can view the whole schedule and learn more about how to join us at https://events.linuxfoundation.org/events/open-source-summit-north-america-2018/.

Finally, I will be there for the full event. If you want to have a meeting, drop me an email to jono@jonobacon.com.


Posted on Leave a comment

Robolinux Lets You Easily Run Linux and Windows Without Dual Booting

The number of Linux distributions available just keeps getting bigger. In fact, in the time it took me to write this sentence, another one may have appeared on the market. Many Linux flavors have trouble standing out in this crowd, and some are just a different combination of puzzle pieces joined to form something new: An Ubuntu base with a KDE desktop environment. A Debian base with an Xfce desktop. The combinations go on and on.

Robolinux, however, does something unique. It’s the only distro, to my knowledge, that makes working with Windows alongside Linux a little easier for the typical user. With just a few clicks, it lets you create a Windows virtual machine (by way of VirtualBox) that can run side by side with Linux. No more dual booting. With this process, you can have Windows XP, Windows 7, or Windows 10 up and running with ease.

And, you get all this on top of an operating system that’s pretty fantastic on its own. Robolinux not only makes short work of having Windows along for the ride, it simplifies using Linux itself. Installation is easy, and the installed collection of software means anyone can be productive right away.

Let’s install Robolinux and see what there is to see.

Installation

As I mentioned earlier, installing Robolinux is easy. Obviously, you must first download an ISO image of the operating system. You have the choice of a Cinnamon, MATE, LXDE, or Xfce desktop (I opted to go the MATE route). I will warn you, the developers do make a pretty heavy-handed plea for donations. I don’t fault them for this. Developing an operating system takes a great deal of time. So if you have the means, do make a donation.

Once you’ve downloaded the file, burn it to a CD/DVD or flash drive. Boot your system with the media and then, once the desktop loads, click the Install icon on the desktop. As soon as the installer opens (Figure 1), you should be immediately familiar with the layout of the tool.
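For the flash-drive route, the standard tool is dd. The sketch below writes a dummy image to a regular file so it is safe to run as-is; for a real stick you would replace the target with the USB device node (check it twice with lsblk, as dd will happily overwrite the wrong disk). The ISO filename is illustrative.

```shell
# Safe-to-run sketch of writing a Robolinux ISO to a flash drive with dd.
# The ISO name is illustrative; TARGET is a plain file standing in for the
# real device node (e.g. /dev/sdX -- verify with lsblk before using it!).
ISO=robolinux.iso
TARGET=flashdrive.img

dd if=/dev/zero of="$ISO" bs=1M count=4 2>/dev/null    # dummy ISO for the demo
dd if="$ISO" of="$TARGET" bs=4M conv=fsync 2>/dev/null # the actual write step

cmp -s "$ISO" "$TARGET" && echo "image written verbatim"
```

The conv=fsync flag forces the data to disk before dd exits, so it is safe to pull the stick once the command returns.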

Once you’ve walked through the installer, reboot, remove the installation media, and log in when prompted. I will say that I installed Robolinux as a VirtualBox VM, and it installed to perfection. This, however, isn’t a method you should use if you’re going to take advantage of the Stealth VM option. After logging in, the first thing I did was install the Guest Additions, and everything worked smoothly.

Default applications

The collection of default applications is impressive, but not overwhelming. You’ll find all the standard tools to get your work done, including:

  • LibreOffice

  • Atril Document Viewer

  • Backups

  • GNOME Disks

  • Medit text editor

  • Seahorse

  • GIMP

  • Shotwell

  • Simple Scan

  • Firefox

  • Pidgin

  • Thunderbird

  • Transmission

  • Brasero

  • Cheese

  • Kazam

  • Rhythmbox

  • VLC

  • VirtualBox

  • And more

With that list of software, you shouldn’t want for much. However, should you find an app not installed, click the desktop menu button and then click Package Manager. This opens the Synaptic Package Manager, where you can install any of the Linux software you need.

If that’s not enough, it’s time to take a look at the Windows side of things.

Installing Windows

This is what sets Robolinux apart from other Linux distributions. If you click on the desktop menu button, you see a Stealth VM entry. Within that sub-menu, a listing of the different Windows VMs that can be installed appears (Figure 2).

Before you can install one of the VMs, you must first download the Stealth VM file. To do that, double-click on the desktop icon that includes an image of the developer’s face (labeled Robo’s FREE Stealth VM). You must save that file to the ~/Downloads directory. Don’t save it anywhere else, don’t extract it, and don’t rename it. With that file in place, click the start menu and then click Stealth VM. From the listing, click the top entry, Robolinux Stealth VM Installer. When prompted, type your sudo password. You will then be notified that the Stealth VM is ready to be used. Go back to the start menu, click Stealth VM, and select the version of Windows you want to install. A new window will appear (Figure 3). Click Yes and the installation will continue.

Next, you will be prompted to type your sudo password again (so your user can be added to the vboxusers group). Once you’ve taken care of that, you’ll be prompted to configure the amount of RAM you want to dedicate to the VM. After that, a browser window will appear (once again asking for a donation). At this point, everything is (almost) done. Close the browser and the terminal window.

You’re not finished.

Next, you must insert the Windows installer media that matches the type of Windows VM you installed. You then must start VirtualBox by clicking start menu > System Tools > Oracle VM VirtualBox. When VirtualBox opens, an entry will already be created for your Windows VM (Figure 4).

You can now click the Start button (in VirtualBox) to finish up the installation. When the Windows installation completes, you’re ready to work with Linux and Windows side-by-side.

Making VMs a bit more user-friendly

You may be thinking to yourself, “Creating a virtual machine for Windows is actually easier than that!” Although that sentiment is correct, not everyone knows how to create a new VM with VirtualBox. In the time it took me to figure out how to work with the Robolinux Stealth VM, I could have created numerous VMs in VirtualBox. Additionally, this approach isn’t free of charge: you still have to have a licensed copy of Windows (as well as the installation media). But anything developers can do to make using Linux easier is a plus. That’s how I see this: a Linux distribution doing something just slightly different that could remove a possible barrier to entry for the open source platform. From my perspective, that’s a win-win. And you’re getting a pretty solid Linux distribution to boot.
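If you’d rather do by hand what the Stealth VM scripts automate, VirtualBox’s own VBoxManage CLI can create an equivalent Windows VM. The sketch below only prints the commands (via a run wrapper), so it executes safely even without VirtualBox installed; the VM name, memory and disk sizes, and ISO path are illustrative.

```shell
# Dry-run sketch of creating a Windows VM with VBoxManage by hand.
# run() prints each command instead of executing it, so this is safe to
# run without VirtualBox; drop the wrapper to execute for real.
# The VM name, memory/disk sizes, and ISO path are illustrative.
run() { echo "+ $*"; }

VM="Win10"
DISK="$HOME/VirtualBox VMs/$VM/$VM.vdi"

run VBoxManage createvm --name "$VM" --ostype Windows10_64 --register
run VBoxManage modifyvm "$VM" --memory 4096 --cpus 2 --vram 128
run VBoxManage createmedium disk --filename "$DISK" --size 40960   # 40 GB
run VBoxManage storagectl "$VM" --name SATA --add sata
run VBoxManage storageattach "$VM" --storagectl SATA --port 0 --device 0 \
    --type hdd --medium "$DISK"
run VBoxManage storageattach "$VM" --storagectl SATA --port 1 --device 0 \
    --type dvddrive --medium /path/to/windows10.iso
run VBoxManage startvm "$VM"
```

This is essentially what any VirtualBox front end does under the hood; Robolinux’s value is scripting these steps away for newcomers.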

If you already know the ins and outs of VirtualBox, Robolinux might not be your cuppa. But, if you don’t like technology getting in the way of getting your work done and you want to have a Linux distribution that includes all the necessary tools to help make you productive, Robolinux is definitely worth a look.

Posted on Leave a comment

Free Open Source Guides Offer Practical Advice for Building Leadership

How important is leadership for evolving open source projects and communities? According to the most recent Open Source Guide for the Enterprise from The Linux Foundation and the TODO Group, building leadership in the community is key to establishing trust, enabling collaboration, and fostering the cultural understanding required to be effective in open source.

The new Building Leadership in an Open Source Community guide provides practical advice that can help organizations build leadership and influence within open source projects.

“Contributing code is just one aspect of creating a successful open source project,” says this Linux Foundation article introducing the latest guide. “The open source culture is fundamentally collaborative, and active involvement in shaping a project’s direction is equally important. The path toward leadership is not always straightforward, however, so the latest Open Source Guide for the Enterprise from The TODO Group provides practical advice for building leadership in open source projects and communities.” 

Read more at The Linux Foundation