Freespire Linux: A Great Desktop for the Open Source Purist

Quick. Click on your Linux desktop menu and scan through the list of installed software. How much of that software is strictly open source? To make matters a bit more complicated, have you installed closed source media codecs (to play the likes of MP3 files perhaps)? Is everything fully open, or do you have a mixture of open and closed source tools?

If you’re a purist, you probably strive to only use open source tools on your desktop. But how do you know, for certain, that your distribution only includes open source software? Fortunately, a few distributions go out of their way to only include applications that are 100% open. One such distro is Freespire.

Does that name sound familiar? It should, as it is closely related to Linspire. Remember back in the early 2000s, when Walmart sold Linux desktop computers? Those computers were powered by the Linspire operating system. Linspire went above and beyond to create an experience that would be similar to that of Windows—even including the tools to install Windows apps on Linux. That experiment failed, mostly because consumers thought they were getting a Windows desktop machine for a dirt cheap price. After that debacle, Linspire went away for a while. It’s now back, thanks to PC/OpenSystems LLC. Their goal isn’t to recreate the past but to offer two different flavors of Linux:

  • Linspire—a commercial distribution of Linux that includes proprietary software and does have an associated cost ($39.99 USD for a single license).

  • Freespire—a non-commercial distribution of Linux that only includes open source software and is free to download.

We’re here to discuss Freespire and why it is an outstanding addition to the Linux community, especially for those who strive to use only free and open source software. This version of Freespire (4.0) was released on August 20, 2018, so it’s fresh and ready to go.

Let’s dig into the operating system and see what makes this a viable candidate for open source fans.

Installation

In keeping with my usual approach, there’s very little reason to even mention the installation of Freespire Linux. There is nothing out of the ordinary here. Download the ISO image, burn it to a USB Drive (or CD/DVD if you’re dealing with older hardware), boot the drive, click the Install icon, answer a few simple questions, and wait for the installation to prompt for a reboot. That’s how far we’ve come with Linux installations… they are simple, and rarely will you have a single issue with the process. In the end, you’ll be presented with a simple (modified) Mate desktop (Figure 1) that makes it easy for any user (of any skill level) to feel right at home.

Software Titles

Once you’ve logged into the desktop, you’ll find a main menu where you can view all of the installed applications. That list of software includes:

  • Geary

  • Chromium Browser

  • Abiword

  • Gnumeric

  • Calendar

  • Audacious

  • Totem Video Player

  • Software Center

  • Synaptic

  • G-Debi

Also rolled into the system is support for both Flatpak and Snap applications, so you shouldn’t miss out on any software you need, which brings me to the part where purists might want to look away.

Just because Freespire is marketed as a purely open source distribution doesn’t mean users are locked down to only open source software. In fact, if you open the Software Center, you can do a quick search for Spotify (a closed source application with an available Linux desktop client) and there it is! (Figure 2).

Fortunately for productivity-minded folks, the likes of LibreOffice (which is not installed by default) is open source and can be installed easily from the Software Center. That doesn’t mean you must install other software, but those who need to do serious business-centric work (such as collaborating on documents) will likely want/need to install a more powerful office suite (as Abiword won’t cut it as a business-level word processor).

For those who tend to work long hours on the Linux desktop and want to protect their eyes from extended strain, Freespire does include a nightlight tool that can adjust the color temperature of the interface. To open this tool, click on the main desktop menu and type night in the Search bar (Figure 3).

Once opened, Night Light will automatically adjust the color temperature, based on the time of day. From the notification tray, you can click the icon to suspend Night Light, set it to autostart, and close the service (Figure 4).

Beyond the Mate Desktop

As is, Mate fans might not exactly recognize the Freespire desktop. The developers have clearly given Mate a significant set of tweaks to make it slightly resemble the Mac OS desktop. It’s not quite as elegant as, say, Elementary OS, but this is certainly an outstanding take on the Linux desktop. Whether you’re a fan of Mate or Mac OS, you should feel immediately at home on the desktop. On the top bar, the developers have included an appmenu that changes, based on what application you have open. Start any app and you’ll find that app’s menu appears in the top bar. This active menu makes the desktop quite efficient.

Are you ready for Freespire?

Every piece of the Freespire puzzle is as user-friendly as it is intuitive. The developers of Freespire have gone to great lengths to make this pure open source distribution a treat to use. Even if a 100% open source desktop isn’t your thing, Freespire is still a worthy contender in the world of desktop Linux. It’s clean and stable (as it’s based on Ubuntu 18.04) and able to help you be efficient and productive on the desktop.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Open Source Networking Days Returning This Fall

As we gear up for the first-ever Open Networking Summit Europe event in Amsterdam, September 25-27, it’s becoming clear to me just how far we’ve come this year since the formation of LF Networking. With major new operators joining, like Deutsche Telekom, and others requiring open source project automation tools in their RFPs, like Orange, it’s inspiring to witness just how much the networking industry is rallying around open source and incorporating it as a key element of their business strategies.

…For those who can’t make it to Amsterdam this time, I wanted to share the good news that the next three Open Source Networking Days (OSN Days) tours will be coming this fall to China, APAC, and North America. Here are the confirmed cities and dates so far. Click on the links to learn more and register. Check back soon at the main OSN Days website for updates on the others.

China: Shanghai: Oct 12 | Nanjing: Oct 15 | Beijing: Oct 17

APAC: Singapore: Oct 15 | Taiwan (Hsinchu): Oct 17 | Tokyo: Oct 23

North America: Ottawa: Oct 30 | Bay Area: Nov 1 | Dallas: Nov 6 | Toronto: Nov 8 | Boston: Nov 19 | Montreal: Nov 29 | Austin: TBD

Read more at The Linux Foundation

Top 10 Reasons to Join the Premier European Open Source Event of the Year | Register Now to Save $150

See why you need to be at Open Source Summit Europe and Embedded Linux Conference + OpenIoT Summit Europe next month! Hurry — space is going quickly. Secure your spot and register by September 22 to save $150.

Here are the Top 10 Reasons you’ll want to be at this event:

  1. Timely Cutting-edge Content: 300+ sessions on Linux development, embedded Linux systems, IoT, cloud native development, cloud infrastructure, AI, blockchain and open source program management & community leadership.
  2. Deep Dive Labs & Tutorials: An Introduction to Linux Control Groups (cgroups), Building Kubernetes Native Apps with the Operator Framework, Resilient and Fast Persistent Container Storage Leveraging Linux’s Storage Functionalities, and 10 Years of Linux Containers are just some of the labs and tutorials included in one low registration price.
  3. 12 Co-located Events*: Come for OSS & ELC + OpenIoT Summit and stay for LF Energy Summit, Linux Security Summit, Cloud & Container Embedded Apprentice Linux Engineer tutorials, IoT Apprentice Linux Engineer tutorials, Hyperledger Scotland Meetup, Linux in Safety-Critical Systems Summit, and many more co-located events.  (*Some co-located events may require an additional registration fee.)
  4. Discover New Projects & Technologies: Over 30 sponsors will be showcasing new projects and technologies in the Sponsor Showcase throughout the event, joined by our Technical Showcase at the Onsite Attendee reception showcasing Free and Open Source Software (FOSS) projects from system developers and hardware makers.
  5. Social Activities & Evening Events: Take a break and go on a sightseeing bus tour, join the 5K fun run or morning meditation, and meet with fellow attendees through the networking app. Collaborate with fellow attendees at the attendee reception at the National Museum of Scotland and at the Onsite Attendee Reception & Sponsor + Technical Showcase.
  6. Diversity Empowerment Summit: Explore ways to advance diversity and inclusion in the community and across the technology industry.
  7. Women in Open Source Lunch & Better Together Diversity Social: Women and non-binary members of the open source community are invited to network with each other at the lunch sponsored by Adobe, while all underrepresented minorities are welcome to attend the Better Together Diversity Social.
  8. Developer & Hallway Track Lounge: The highlight for many at this event is the ability to collaborate with the open source community. This dedicated lounge offers a space for developers to hack and collaborate throughout the event as well as plenty of seating for hallway track discussions.
  9. Networking Opportunities: Attend the Speed Networking & Mentoring event, OS Career Mixer, or use the networking app to expand your open source community connections by finding and meeting with attendees with similar interests.
  10. Hear from the Leading Technologists in Open Source: Keynote talks include a Linux Kernel update, a fireside chat with Linus Torvalds & Dirk Hohndel, a look at the future of AI and Deep Learning, a panel discussion on the future of energy with open source, a discussion on diversity & inclusion, a talk on the parallels between open source & video games, and insightful talks on how open source is changing banking, human rights and scientific collaboration.

VIEW THE FULL SCHEDULE »

REGISTER NOW »

This article originally appeared at The Linux Foundation

A Hitchhiker’s Guide to Deploying Hyperledger Fabric on Kubernetes

Deploying a multi-component system like Hyperledger Fabric to production is challenging. Join us Wednesday, September 26, 2018 9:00 a.m. Pacific for an introductory webinar, presented by Alejandro (Sasha) Vicente Grabovetsky and Nicola Paoli of AID:Tech.

Why should you care?

Hyperledger Fabric is rather awesome, but deploying a distributed network has been known to give headaches and even migraines. In this talk, we will not be providing you with a guillotine that forever gets rid of these headaches, but instead we will talk you through some tools that can help you deploy a functioning, production-ready Hyperledger Fabric network on a Kubernetes cluster.

Who should attend?

Ideally, you are a Dev, an Ops or a DevOps interested in learning more about how to deploy Hyperledger Fabric to Kubernetes.

You might know a little bit about Hyperledger Fabric, Docker containers, and Kubernetes. We assume limited knowledge and will do our best to explain and demystify all the components along the way.

Read more at The Linux Foundation

Know Your Storage: Block, File & Object

Dealing with the tremendous amount of data generated today presents a big challenge for companies that create or consume such data, as well as for the tech companies addressing the related storage issues.

“Data is growing exponentially each year, and we find that the majority of data growth is due to increased consumption and industries adopting transformational projects to expand value. Certainly, the Internet of Things (IoT) has contributed greatly to data growth, but the key challenge for software-defined storage is how to address the use cases associated with data growth,” said Michael St. Jean, principal product marketing manager, Red Hat Storage.

Every challenge is an opportunity. “The deluge of data being generated by old and new sources today is certainly presenting us with opportunities to meet our customers’ escalating needs in the areas of scale, performance, resiliency, and governance,” said Tad Brockway, General Manager for Azure Storage, Media and Edge.

Trinity of modern software-defined storage

There are three different kinds of storage solutions — block, file, and object — each serving a different purpose while working with the others.

Block storage is the oldest form of data storage, where data is stored in fixed-length blocks or chunks. Block storage is used in enterprise storage environments and is usually accessed using a Fibre Channel or iSCSI interface. “Block storage requires an application to map where the data is stored on the storage device,” according to SUSE’s Larry Morris, Sr. Product Manager, Software Defined Storage.

Block storage is virtualized in storage area networks and software-defined storage systems as abstracted logical devices that reside on a shared hardware infrastructure and are created and presented to the host operating system of a server, virtual server, or hypervisor via protocols like SCSI, SATA, SAS, FCP, FCoE, or iSCSI.

“Block storage splits a single storage volume (like a virtual or cloud storage node, or a good old fashioned hard disk) into individual instances known as blocks,” said St. Jean.

Each block exists independently and can be formatted with its own data transfer protocol and operating system — giving users complete configuration autonomy. Because block storage systems aren’t burdened with the same investigative file-finding duties as the file storage systems, block storage is a faster storage system. Pairing that speed with configuration flexibility makes block storage ideal for raw server storage or rich media databases.

Block storage can be used to host operating systems, applications, databases, entire virtual machines, and containers. Traditionally, block storage can only be accessed by the individual machine, or the machines in a cluster, to which it has been presented.
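Morris’s point that the application itself maps data to a location on the device is easy to sketch. Here is a minimal Python illustration (simulating a block device with an ordinary file, since reading a real device requires root; the 512-byte block size and helper function are assumptions for the example):

```python
import os
import tempfile

BLOCK_SIZE = 512  # a common fixed block length

# Simulate a block device with an ordinary file holding four blocks,
# each filled with its own block number as a byte value
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    for i in range(4):
        f.write(bytes([i]) * BLOCK_SIZE)

def read_block(device_path, block_number):
    # Block storage: the application computes the location itself,
    # seeking directly to block_number * BLOCK_SIZE
    with open(device_path, "rb") as dev:
        dev.seek(block_number * BLOCK_SIZE)
        return dev.read(BLOCK_SIZE)

block = read_block(path, 2)
print(block[0])  # 2
```

There is no filesystem in the way: the caller addresses block 2 by arithmetic alone, which is part of why block storage is fast.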

File-based storage

File-based storage uses a filesystem to map where the data is stored on the storage device. It’s the dominant technology used on direct-attached and network-attached storage systems, and it takes care of two things: organizing data and representing it to users. “With file storage, data is arranged on the server side in the exact same format as the clients see it. This allows the user to request a file by some unique identifier — like a name, location, or URL — which is communicated to the storage system using specific data transfer protocols,” said St. Jean.

The result is a type of hierarchical file structure that can be navigated from top to bottom. File storage is layered on top of block storage, allowing users to see and access data as files and folders, but restricting access to the blocks that make up those files and folders.

“File storage is typically represented by shared filesystems like NFS and CIFS/SMB that can be accessed by many servers over an IP network. Access can be controlled at a file, directory, and export level via user and group permissions. File storage can be used to store files needed by multiple users and machines, application binaries, databases, virtual machines, and can be used by containers,” explained Brockway.
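The top-to-bottom hierarchical navigation described above can be seen from any scripting language. Here is a quick Python sketch (the directory and file names are invented for illustration) that builds a tiny tree and walks it the way file storage is designed to be traversed:

```python
import os
import tempfile

# Build a tiny hierarchy: root -> projects -> reports -> q3.txt
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "projects", "reports"))
path = os.path.join(root, "projects", "reports", "q3.txt")
with open(path, "w") as f:
    f.write("quarterly numbers")

# File storage is navigated by walking the tree from the top down;
# the file is requested by its unique identifier -- here, its path.
found = []
for dirpath, dirnames, filenames in os.walk(root):
    for name in filenames:
        found.append(os.path.join(dirpath, name))

print(found[0].endswith("q3.txt"))  # True
```

Note that locating the file required descending through every intermediate directory, which is the “investigative file-finding” overhead block storage avoids.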

Object storage

Object storage is the newest form of data storage, and it provides a repository for unstructured data which separates the content from the indexing and allows the concatenation of multiple files into an object. An object is a piece of data paired with any associated metadata that provides context about the bytes contained within the object (things like how old or big the data is). Those two things together — the data and metadata — make an object.

One advantage of object storage is the unique identifier associated with each piece of data. Accessing the data involves using the unique identifier and does not require the application or user to know where the data is actually stored. Object data is accessed through APIs.
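To make the data-plus-metadata idea concrete, here is a minimal, hypothetical object store sketched in Python (the class and method names are invented for illustration, not any real API): each object is just bytes paired with metadata, filed under a unique identifier, and retrieval never depends on where the bytes physically live:

```python
import time
import uuid

class ObjectStore:
    """A toy flat object store: no hierarchy, just IDs mapped to objects."""

    def __init__(self):
        self._objects = {}

    def put(self, data: bytes, **metadata) -> str:
        # The store assigns a unique identifier; callers use only this ID
        object_id = str(uuid.uuid4())
        # Metadata gives context about the bytes (how old, how big, etc.)
        metadata.setdefault("size", len(data))
        metadata.setdefault("created", time.time())
        self._objects[object_id] = {"data": data, "metadata": metadata}
        return object_id

    def get(self, object_id: str) -> dict:
        # Access is by ID alone; the caller never knows where the data lives
        return self._objects[object_id]

store = ObjectStore()
oid = store.put(b"backup payload", content_type="application/octet-stream")
obj = store.get(oid)
print(obj["metadata"]["size"])  # 14
```

The namespace is completely flat, mirroring the structure St. Jean contrasts with hierarchical file storage.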

“The data stored in objects is uncompressed and unencrypted, and the objects themselves are arranged in object stores (a central repository filled with many other objects) or containers (a package that contains all of the files an application needs to run). Objects, object stores, and containers are very flat in nature — compared to the hierarchical structure of file storage systems — which allow them to be accessed very quickly at huge scale,” explained St. Jean.

Object stores can scale to many petabytes to accommodate the largest datasets and are a great choice for images, audio, video, logs, backups, and data used by analytics services.

Conclusion

Now you know about the various types of storage and how they are used. Stay tuned to learn more about software-defined storage as we examine the topic in the future.

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.

ACM’s Code of Ethics Offers Updated Guidelines for Computing Professionals

The Association for Computing Machinery (ACM) has released an update to its Code of Ethics and Professional Conduct aimed at computing professionals. The update was done “to address the significant advances in computing technology and the degree [to which] these technologies are integrated into our daily lives,” explained ACM members Catherine Flick and Michael Kirkpatrick, writing on Reddit.

This marks the first update to the Code, which the ACM maintains “expresses the conscience of the profession,” since 1992. The goal is to ensure it “reflects the experiences, values and aspirations of computing professionals around the world,’’ Flick and Kirkpatrick said.

The Code was written to guide computing professionals’ ethical conduct and includes anyone using computing technology “in an impactful way.” It also serves as a basis for remediation when violations occur. The Code contains principles developed as statements of responsibility in the belief that “the public good is always the primary consideration.”

Ethical Decision Making

In its entirety, the ACM says the Code “is concerned with how fundamental ethical principles apply to a computing professional’s conduct. The Code is not an algorithm for solving ethical problems; rather it serves as a basis for ethical decision-making.”

It is divided into four sections: General Ethical Principles; Professional Responsibilities; Professional Leadership Principles; and Compliance with the Code.

The General Ethical Principles section discusses the role of a computer professional, saying they should contribute to society, with an acknowledgement “that all people are stakeholders in computing.” This section addresses the “obligation” of computing professionals to use their skills for the benefit of society.

“An essential aim of computing professionals is to minimize negative consequences of computing, including threats to health, safety, personal security, and privacy,’’ the code advises. “When the interests of multiple groups conflict, the needs of those less advantaged should be given increased attention and priority.”

Computing professionals should perform high quality work and maintain professional confidence. They should also take into consideration diversity and social responsibility in their efforts and engage in pro bono or volunteer work benefitting the public good, the ACM recommends.

They should also try to avoid harm, in areas including “unjustified physical or mental injury, unjustified destruction or disclosure of information, and unjustified damage to property, reputation, and the environment.” To minimize the possibility of unintentionally or indirectly hurting others, computing professionals are advised to follow “generally accepted best practices unless there is a compelling ethical reason to do otherwise.” They should also carefully consider the consequences of “data aggregation and emergent properties of systems,” the ACM advises.

Computing professionals should also be honest and trustworthy and transparent. They should “provide full disclosure of all pertinent system capabilities, limitations, and potential problems to the appropriate parties. Making deliberately false or misleading claims, fabricating or falsifying data, offering or accepting bribes, and other dishonest conduct are violations of the Code,” the ACM stresses. This also applies to honesty about their qualifications and any limitations in their ability to complete a task. They should also be fair and not discriminate against others, and “credit the creators of ideas, inventions, work and artifacts, and respect copyrights, patents, trade secrets, license agreements, and other methods of protecting authors’ works.”

With a nod to the ability of technology to collect, monitor and disseminate personal information, another call to action under the Ethical Principles section is respecting the privacy, rights and responsibilities associated with collecting and using personal information. Use of personal information should only be done for “legitimate ends and without violating the rights of individuals and groups,” the Code states.

A Position of Trust

Noting that computing professionals “are in a position of trust,” they have “a special responsibility to provide objective, credible evaluations and testimony to employers, employees, clients, users, and the public.” Consequently, the Code says these individuals “should strive to be perceptive, thorough, and objective when evaluating, recommending, and presenting system descriptions and alternatives.”

The Code also stresses that “extraordinary care should be taken to identify and mitigate potential risks in machine learning systems.” Other mandates in the Professional Responsibilities section include maintaining high standards of competence, conduct and ethical practice. Computing professionals should also only perform work in areas in which they are competent. They should also design and implement systems that are “robustly and usably secure,” the Code states.

The Professional Leadership Principles section, as the name suggests, deals with the attributes of a leader. These principles deal with the importance of ensuring computing work is done, again, with the public good in mind, and having procedures and attitudes oriented toward the welfare of society. Doing so, the Code suggests, will “reduce harm to the public and raise awareness of the influence of technology in our lives.”

Leaders should also enhance the quality of work life, articulate, apply and support the Code’s principles and create opportunities for people to grow as professionals. They should use care when changing or discontinuing support for systems/features, and help users understand “that timely replacement of inappropriate or outdated features or entire systems may be needed.”

Lastly, the ACM urges compliance to the Code’s principles and to treat violations “as inconsistent with membership in the ACM.”

How to Use the Netplan Network Configuration Tool on Linux

For years, Linux admins and users have configured their network interfaces in the same way. For instance, if you’re an Ubuntu user, you could either configure the network connection via the desktop GUI or from within the /etc/network/interfaces file. The configuration was incredibly easy and never failed to work. The configuration within that file looked something like this:

auto enp10s0
iface enp10s0 inet static
    address 192.168.1.162
    netmask 255.255.255.0
    gateway 192.168.1.100
    dns-nameservers 1.0.0.1,1.1.1.1

Save and close that file. Restart networking with the command:

sudo systemctl restart networking

Or, if you’re using a non-systemd distribution, you could restart networking the old-fashioned way like so:

sudo /etc/init.d/networking restart

Your network will restart and the newly configured interface is good to go.

That’s how it’s been done for years. Until now. With certain distributions (such as Ubuntu Linux 18.04), the configuration and control of networking has changed considerably. Instead of that interfaces file and using the /etc/init.d/networking script, we now turn to Netplan. Netplan is a command line utility for the configuration of networking on certain Linux distributions. Netplan uses YAML description files to configure network interfaces and, from those descriptions, will generate the necessary configuration options for any given renderer tool.

I want to show you how to use Netplan on Linux to configure both a static IP address and a DHCP address. I’ll be demonstrating on Ubuntu Server 18.04. One word of warning: the .yaml files you create for Netplan must be consistent in spacing, otherwise they’ll fail to work. You don’t have to use a specific amount of spacing for each line; it just has to remain consistent.

The new configuration files

Open a terminal window (or log into your Ubuntu Server via SSH). You will find the new configuration files for Netplan in the /etc/netplan directory. Change into that directory with the command cd /etc/netplan. Once in that directory, you will probably only see a single file:

01-netcfg.yaml

You can create a new file or edit the default. If you opt to edit the default, I suggest making a copy with the command:

sudo cp /etc/netplan/01-netcfg.yaml /etc/netplan/01-netcfg.yaml.bak

With your backup in place, you’re ready to configure.

Network Device Name

Before you configure your static IP address, you’ll need to know the name of the device to be configured. To do that, you can issue the command ip a and find out which device is to be used (Figure 1).

I’ll be configuring ens5 for a static IP address.

Configuring a Static IP Address

Open the original .yaml file for editing with the command:

sudo nano /etc/netplan/01-netcfg.yaml

The layout of the file looks like this:

network:
    version: 2
    renderer: networkd
    ethernets:
       DEVICE_NAME:
          dhcp4: yes/no
          addresses: [IP/NETMASK]
          gateway4: GATEWAY
          nameservers:
             addresses: [NAMESERVER, NAMESERVER]

Where:

  • DEVICE_NAME is the actual device name to be configured.

  • yes/no is an option to enable or disable dhcp4.

  • IP is the IP address for the device.

  • NETMASK is the netmask for the IP address.

  • GATEWAY is the address for your gateway.

  • NAMESERVER is the comma-separated list of DNS nameservers.

Here’s a sample .yaml file:

network:
    version: 2
    renderer: networkd
    ethernets:
       ens5:
          dhcp4: no
          addresses: [192.168.1.230/24]
          gateway4: 192.168.1.254
          nameservers:
             addresses: [8.8.4.4,8.8.8.8]

Edit the above to fit your networking needs. Save and close that file.

Notice the netmask is no longer configured in the dotted form 255.255.255.0. Instead, the netmask is appended to the IP address in CIDR notation (the /24 above).
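If you ever need to double-check that a dotted-quad netmask and a CIDR suffix are equivalent, Python’s standard ipaddress module can do the conversion in both directions:

```python
import ipaddress

# 255.255.255.0 is the same as the /24 suffix used in the Netplan file
net = ipaddress.ip_network("192.168.1.0/255.255.255.0", strict=False)
print(net.prefixlen)  # 24

# And going the other way, /24 expands back to the dotted form
iface = ipaddress.ip_interface("192.168.1.230/24")
print(iface.netmask)  # 255.255.255.0
```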

Testing the Configuration

Before we apply the change, let’s test the configuration. To do that, issue the command:

sudo netplan try

The above command will validate the configuration before applying it. If it succeeds, you will see Configuration accepted. In other words, Netplan will attempt to apply the new settings to a running system. Should the new configuration file fail, Netplan will automatically revert to the previous working configuration. Should the new configuration work, it will be applied.

Applying the New Configuration

If you are certain of your configuration file, you can skip the try option and go directly to applying the new options. The command for this is:

sudo netplan apply

At this point, you can issue the command ip a to see that your new address configurations are in place.

Configuring DHCP

Although you probably won’t be configuring your server for DHCP, it’s always good to know how to do this. For example, you might not know what static IP addresses are currently available on your network. You could configure the device for DHCP, get an IP address, and then reconfigure that address as static.

To use DHCP with Netplan, the configuration file would look something like this:

network:
    version: 2
    renderer: networkd
    ethernets:
       ens5:
          addresses: []
          dhcp4: true
          optional: true

Save and close that file. Test the file with:

sudo netplan try

Netplan should succeed and apply the DHCP configuration. You could then issue the ip a command, get the dynamically assigned address, and then reconfigure a static address. Or, you could leave it set to use DHCP (but seeing as how this is a server, you probably won’t want to do that).

Should you have more than one interface, you could name the second .yaml configuration file 02-netcfg.yaml. Netplan will apply the configuration files in numerical order, so 01 will be applied before 02. Create as many configuration files as needed for your server.
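For example, a hypothetical second file for a DHCP-configured interface (the device name ens6 is invented here) might look like:

```yaml
# /etc/netplan/02-netcfg.yaml -- applied after 01-netcfg.yaml
network:
    version: 2
    renderer: networkd
    ethernets:
       ens6:
          dhcp4: true
          optional: true
```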

That’s All There Is

Believe it or not, that’s all there is to using Netplan. Although it is a significant change to how we’re accustomed to configuring network addresses, it’s not all that hard to get used to. But this style of configuration is here to stay… so you will need to get used to it.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Open Source Summit: Innovation, Allies, and Open Development

August was an exciting month for Linux and open source, with the release of Linux kernel 4.18, a new ebook offering practical advice for enterprise open source, and the formation of the Academy Software Foundation. And, to cap it off, we ended the month with a successful Open Source Summit event highlighting open source innovation at every level and featuring keynote presentations from Linus Torvalds, Van Jones, Jim Zemlin, Jennifer Cloer, and many others.

In his welcoming address in Vancouver, The Linux Foundation’s Executive Director, Jim Zemlin, explained that The Foundation’s job is to create engines of innovation and enable the gears of those engines to spin faster.

This acceleration can be seen in the remarkable growth of the Cloud Native Computing Foundation (CNCF) and in the Google Cloud announcement transferring ownership and management of the Kubernetes project’s cloud resources to the CNCF, along with a $9 million grant over three years to cover infrastructure costs.

Such investment underscores a strong belief in the power of open source technologies to speed innovation and solve problems, which was echoed by Zemlin, who encouraged the audience to go solve big problems, one person, one project, one industry at a time.

Empathy

In another conference keynote, Van Jones, President and founder of Dream Corps, best-selling author, and CNN contributor, spoke with Jamie Smith, Chief Marketing Officer at The Linux Foundation, about the power of tech and related social responsibilities.

“There was a time when the future was written in law,” Jones said. “Now the future is written in Silicon Valley in code.” Jones went on to say that those working in technology today possess a new set of superpowers and they need to understand how to use those powers for good.

A big deficit that Jones sees, not just in technology but in politics and elsewhere, is an empathy gap. He noted, however, that listening and mentoring can help bridge this gap. “Each person has an opportunity to mentor one person… Don’t underestimate the one person in your life who gave you a shot; you can be that person,” he said.

Allies and advocates

Jennifer Cloer, founder and lead consultant at reTHINKit PR and co-founder of Wicked Flicks, also explored the power of mentors and supporters in her talk highlighting the “Chasing Grace” video project. Cloer offered a preview of the project in a short episode featuring Nithya Ruff, Senior Director, Open Source Practice at Comcast, and member of the Board of Directors for The Linux Foundation. In the video preview, Ruff described the important role that her father played in supporting her career.

Ruff also moderated a panel discussion at Open Source Summit examining issues of diversity and inclusion and exploring solid strategies for success. Ruff acknowledged that the efforts of open source communities to attract and retain diverse contributors with unique talent and perspectives have gathered momentum, but she said, “We cannot tackle these issues without the support of allies and advocates.”

Open development

On the last day of the conference, Linux creator Linus Torvalds sat down with Dirk Hohndel, VMware VP and chief open source officer, for their now-familiar fireside chat session. In the discussion, they touched on topics including hardware, quantum computing, kernel maintainership, and more.

In speaking of recent hardware vulnerabilities, Torvalds said, “These hardware issues were kept under wraps. Because it was secret and we were not allowed to talk about it, we were not allowed to use our usual open development model. That makes it way more painful than it should be.”

“When you’re doing a complex project, the only way to deal with complexity is to have the code out there,” Torvalds said. “There are so many layers. No one knows how all this works,” he continued, describing it as an “explosion of complexity.”

Nonetheless, Torvalds said he doesn’t worry so much about issues of technology within the kernel. “What I’m really worried about is the flow of patches. If you have the right workflow, the code will sort itself out.”

When asked whether he still understands the Linux kernel, Torvalds replied, “No. … Nobody knows the whole kernel. Having looked at patches for many, many years, I know the big picture, and I can tell by looking if it’s right or wrong.”

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.


Survey: Open Source Programs Are a Best Practice Among Large Companies

Open source software programs play an important role in how DevOps and open source best practices are adopted by organizations, according to a survey conducted by The New Stack and The Linux Foundation (via the TODO Group). By implementing open source best practices, organizations are helping developers become both more productive and more structured in how they manage the often abundant open source software their businesses rely on.

Thousands of open source projects have participants from organizations with open source offices that consume — and in turn contribute back to — any number of projects. The interest is showing enough results to merit companies considering deeper and more focused approaches to the development of the open source offices they manage.

Open source programs generally have three core characteristics: they 1) execute and communicate the organization’s open source software strategy, 2) maintain license compliance, and, most of all, 3) foster open source culture.

Read more at The New Stack


SharkLinux Distro: Open Source in Action

Every so often I run into a Linux distribution that reminds me of the power of open source software. SharkLinux is one such distribution. Created by a single developer, this project attempts to change things up a bit. Some of those changes will be gladly welcomed by new users, while others will be scoffed at by the Linux faithful. In the end, however, thanks to open source software, the developer of SharkLinux has created a distribution exactly how he wants it to be. And that, my friends, is one amazing aspect of open source. We get to do it our way.

But what is SharkLinux and what makes it stand out? I could make one statement about SharkLinux and end this now. The developer of SharkLinux reportedly developed the entire distribution using only an Android phone. That, alone, should have you wanting to give SharkLinux a go.

Let’s take a look at this little-known distribution and see what it’s all about.

What Exactly is SharkLinux?

First off, SharkLinux is based on Ubuntu and makes use of a custom MATE/Xfce desktop. Outside of the package manager, the similarities between SharkLinux and Ubuntu are pretty much non-existent. Instead of aiming for the new or average user, the creator has his eyes set on developers and other users who need to lean heavily on virtualization. The primary feature set for SharkLinux includes:

  • KVM hypervisor

  • Full QEMU Utilities

  • Libvirt and Virtual Machine Manager

  • Vagrant (mutate and libvirt support)

  • LXD/LXC/QLc/LXDock

  • Docker/Kubernetes

  • VMDebootstrap

  • Virt-Install/Convert

  • Launch Local Cloud Images

  • Full System Containers GUI Included

  • Kimchi – WebVirtCloud – Guacamole

  • Vagrant Box Conversion

  • Many Dashboards, Admin Panels

  • LibGuestFS and other disk/filesystem tools

  • Nested Virtualization (hardware depending)

  • Alien (rpm) LinuxBrew (Mac) Nix Package Manager

  • Powershell, Upstream WINE (Win)

  • Cloud Optimized Desktop

  • Dozens of wrappers, automated install scripts, and expansion packs

  • Guake terminal

  • Kernel Options v4.4** -> v4.12*

Clearly, SharkLinux isn’t built for those who simply need a desktop, browser, and office suite. Instead, it includes tools for a specific cross-section of users. Let’s dive in a bit deeper.
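To give a feel for the kind of workflow this feature set enables, here’s a rough sketch of spinning up a KVM guest from a local cloud image with virt-install and a system container with LXD. The names, sizes, and image paths are arbitrary examples of my own, not SharkLinux defaults:

```shell
# Import a local Ubuntu cloud image as a KVM guest via libvirt
# (the image path and VM name here are illustrative)
virt-install \
  --name demo-vm \
  --memory 2048 \
  --vcpus 2 \
  --disk path=ubuntu-cloudimg.qcow2 \
  --import \
  --os-variant ubuntu18.04 \
  --noautoconsole

# Launch a lightweight system container with LXD and run a command in it
lxc launch ubuntu:18.04 demo-container
lxc exec demo-container -- uname -a
```

Because all of these tools ship pre-integrated, this is the sort of thing that works out of the box on SharkLinux rather than after an afternoon of setup.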

Post Install

As per usual, I don’t want to waste time on the installation of another Linux distribution, simply because that process has become so easy. It’s point and click, fill out a few items, and wait 5-10 minutes for the prompt to reboot.

Once you’ve logged into your newly installed instance of SharkLinux, you’ll immediately notice something different. The “Welcome to SharkLinux” window is clearly geared toward users with a certain level of knowledge. Tasks such as Automatic Maintenance, the creation of swap space, sudo policy, and more are all available (Figure 1).

The first thing you should do is click the SharkLinux Expansion button. When prompted, click Yes to install this package. Without this package installed, absolutely no upstream packages are enabled for the system. Until you install the expansion, you’ll be missing out on a lot of available software. So install the SharkLinux Expansion out of the gate.

Next, you’ll want to install the SharkExtras. This makes it easy to install other packages, such as Bionic, MiniKube, Portainer, Cockpit, Kimchi, Webmin, the Gimp Extension Pack, Guacamole, LXDock, the Mainline Kernel, Wine, and much more (Figure 2).

Sudo Policy

This is where things get a bit dicey for the Linux faithful. I will say this: I get why the developer has included this. Out of the box, SharkLinux does require a sudo password, but with the Sudo Policy editor, you can easily set up the desktop such that sudo doesn’t require a password (Figure 3).

Click on the Sudo Policy button in the Welcome to SharkLinux window and then either click Password Required or Password Not Required. Use this option with great caution, as you’ll reduce the security of the desktop by disabling the need for a sudo password.
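For the curious, a passwordless sudo policy of this sort typically boils down to a sudoers drop-in file like the one below. This is a generic example; I haven’t inspected exactly which file SharkLinux’s Sudo Policy editor writes:

```
# /etc/sudoers.d/99-nopasswd
# Generic example -- not necessarily the file SharkLinux's editor creates.
# Allow members of the sudo group to run any command without a password:
%sudo ALL=(ALL:ALL) NOPASSWD: ALL
```

If you ever hand-edit such a file, validate it first with `visudo -cf /etc/sudoers.d/99-nopasswd`, since a syntax error in sudoers can lock you out of administrative access.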

Automatic Maintenance

Another interesting feature found in the Welcome to SharkLinux window is Automatic Maintenance. If you turn this feature on (Figure 4), functions like system updates will occur automatically (without user interaction). For those who often forget to regularly update their system, this might be a good idea. If you’re like me, and prefer to run updates manually on a daily basis, you’ll probably opt to skip this feature.
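On Ubuntu-based systems, hands-off updates like this are usually handled by the unattended-upgrades package, driven by a small APT configuration fragment like the one below. This is the standard Ubuntu mechanism; whether SharkLinux’s Automatic Maintenance toggle writes this exact file is an assumption on my part:

```
// /etc/apt/apt.conf.d/20auto-upgrades
// Standard Ubuntu auto-update configuration (generic example).
// Refresh the package lists daily:
APT::Periodic::Update-Package-Lists "1";
// Install available upgrades daily without user interaction:
APT::Periodic::Unattended-Upgrade "1";
```

Setting either value to "0" disables that step, which is effectively what toggling such a feature off would do.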

After taking care of everything you need in the Welcome to SharkLinux window, close it out and you’ll find yourself on the desktop (Figure 5).

At this point, you can start using SharkLinux as you would any desktop distribution, the difference being that you’ll have quite a few more tools for virtualization and development at your disposal. One tiny word of warning: You will notice that, by default, the desktop wallpaper is set to randomly change. In that mix of wallpapers, the developer has included one in particular that may not be quite suitable for a work environment (it’s nothing too drastic, just a woman posing seductively). You can remove that photo from the Appearance Preferences window, should you choose to do so. Beyond that, SharkLinux works as well as any desktop Linux distribution you can find.

One Quirky Distribution

Of all the Linux distributions I have used over the years (and I have used PLENTY), SharkLinux might well be one of the more quirky releases. That doesn’t mean it’s one to avoid. Quite the opposite. I highly recommend everyone interested in seeing what a single developer can do with the Linux platform give SharkLinux a try. I promise you, you’ll be glad you gave it a go. SharkLinux is fun, of that there is no doubt. It’s also a flavor of desktop Linux that shows you what is possible, thanks to open source.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.