
Kubernetes Setup Using Ansible and Vagrant

This blog post describes the steps required to set up a multi-node Kubernetes cluster for development purposes. This setup provides a production-like cluster that can be set up on your local machine.

Why do we need a multi-node cluster setup?

Multi-node Kubernetes clusters offer a production-like environment with various advantages. Even though Minikube provides an excellent platform for getting started, it doesn't provide the opportunity to work with multi-node clusters, which can help solve problems or bugs related to application design and architecture. For instance, Ops can reproduce an issue in a multi-node cluster environment, and testers can deploy multiple versions of an application for executing test cases and verifying changes. These benefits enable teams to resolve issues faster, making them more agile.
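
To give a concrete picture of what you end up with: once such a cluster is running, kubectl reports one master and a couple of worker nodes, something along these lines (the node names, ages, and versions here are purely illustrative):

kubectl get nodes
# NAME         STATUS   ROLES    AGE   VERSION
# k8s-master   Ready    master   12m   v1.13.4
# node-1       Ready    <none>   9m    v1.13.4
# node-2       Ready    <none>   9m    v1.13.4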

Why use Vagrant and Ansible?

Vagrant is a tool that allows us to create a virtual environment easily, and it eliminates the pitfalls that cause the works-on-my-machine phenomenon. It can be used with multiple providers such as Oracle VirtualBox, VMware, Docker, and so on. It allows us to create a disposable environment by making use of configuration files.
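
As a quick sketch of the typical workflow (the box name below is just a commonly used example, not a requirement of this setup), creating and throwing away a VM is only a handful of commands:

vagrant init ubuntu/bionic64       # generate a Vagrantfile for the chosen base box
vagrant up --provider=virtualbox   # create and boot the VM using VirtualBox
vagrant ssh                        # open an SSH session into the running VM
vagrant destroy -f                 # tear the disposable environment down again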

Ansible is an infrastructure automation engine that automates software configuration management. It is agentless and allows us to use SSH keys for connecting to remote machines. Ansible playbooks are written in YAML and offer inventory management in simple text files.
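
As a small illustration (the inventory file hosts.ini and the playbook kubernetes-setup.yml are hypothetical names used only to show the shape of the commands), you would drive the Vagrant machines from Ansible like this:

ansible all -i hosts.ini -m ping                      # ad hoc check that every node answers over SSH
ansible-playbook -i hosts.ini kubernetes-setup.yml    # apply the playbook to the whole inventory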

Prerequisites

  • Vagrant should be installed on your machine. Installation binaries can be found here.
  • Oracle VirtualBox can be used as the Vagrant provider, or you can use one of the similar providers described in Vagrant's official documentation.
  • Ansible should be installed on your machine. Refer to the Ansible installation guide for platform-specific installation.

Read more at Kubernetes.io


Finding Files with mlocate

Learn how to locate files in this tutorial from our archives.

It’s not uncommon for a sysadmin to have to find needles buried deep inside haystacks. On a busy machine, there can be hundreds of thousands of files present on your filesystems. What do you do when you need to make sure one particular configuration file is up to date, but you can’t remember where it is located?

If you’ve used Unix-type machines for a while, then you’ve almost certainly come across the find command before. It is unquestionably sophisticated and highly functional. Here’s an example that just searches for links inside a directory, ignoring files:

# find . -lname "*"

You can do seemingly endless things with the find command; there’s no denying that. The find command is nice and succinct when it wants to be, but it can also get complex very quickly. This is not necessarily because of the find command itself, but because, coupled with xargs, you can pass it all sorts of options to tune your output and, indeed, delete the files you have found.
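
For instance, a common (and slightly dangerous) pattern is the following, which deletes compressed logs that were last modified more than 30 days ago; the path and the age are purely illustrative:

# find /var/log -name "*.gz" -mtime +30 -print0 | xargs -0 rm -f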

Location, location, frustration

There often comes a time when simplicity is the preferred route, however — especially when a testy boss is leaning over your shoulder, chatting away about how time is of the essence. And, imagine trying to vaguely guess the path of the file you’ve never seen but that your boss is certain lives somewhere on the busy /var partition.

Step forward mlocate. You may be aware of one of its close relatives: slocate, which securely (note the prepended letter s for secure) took note of pertinent file permissions to prevent unprivileged users from seeing privileged files. Additionally, there is the older, original locate command whence they both came.

The difference between mlocate and other members of its family (according to mlocate at least) is that, when scanning your filesystems, mlocate doesn’t need to continually rescan them. Instead, it merges its findings (note the prepended m for merge) with any existing file lists, making it much more performant and less heavy on system caches.

In this series of articles, we’ll look more closely at the mlocate tool (and simply refer to it as “locate” due to its popularity) and examine how to quickly and easily tune it to your heart’s content.

Compact and Bijou

If you’re anything like me, then unless you reuse complex commands frequently, you ultimately forget them and need to look them up. The beauty of the locate command is that you can query entire filesystems very quickly, without worrying about top-level root paths, using a simple command.
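
For example, to track down a configuration file whose location has slipped your mind (the filename here is just an illustration), a single short command does the job:

# locate sshd_config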

In the past, you might well have discovered that the find command can be very stubborn and cause you lots of unwelcome head-scratching. You know, a missing semicolon here or a special character not being escaped properly there. Let’s leave the complicated find command alone now, relax, and have a look into the clever little command that is locate.

You will most likely want to check that it’s on your system first by running these commands:

For Red Hat derivatives:

# yum install mlocate

For Debian derivatives:

# apt-get install mlocate

There shouldn’t be any differences between distributions, but there are almost certainly subtle differences between versions; beware.

Next, we’ll introduce a key component to the locate command, namely updatedb. As you can probably guess, this is the command which updates the locate command’s db. Hardly counterintuitive.

The db is the locate command’s file list, which I mentioned earlier. That list is held in a relatively simple and highly efficient database for performance. The updatedb command runs periodically, usually at a quiet time of day, scheduled via a cron job. In Listing 1, we can see the innards of the file /etc/cron.daily/mlocate.cron (both the file’s path and its contents might possibly be distro and version dependent).

#!/bin/sh
nodevs=$(< /proc/filesystems awk '$1 == "nodev" { print $2 }')
renice +19 -p $$ >/dev/null 2>&1
ionice -c2 -n7 -p $$ >/dev/null 2>&1
/usr/bin/updatedb -f "$nodevs"

Listing 1: How the “updatedb” command is triggered every day.

As you can see, the mlocate.cron script makes careful use of the excellent renice and ionice commands in order to have as little impact as possible on system performance. I haven’t explicitly stated that this command runs at a set time every day (although, if my addled memory serves, the original locate command was associated with a slow-down-your-computer run scheduled at midnight). That’s because, on some versions of cron, delays are now introduced into overnight start times.

This is probably because of the so-called Thundering Herd of Hippos problem. Imagine lots of computers (or hungry animals) waking up at the same time to demand food (or resources) from a single or limited source. This can happen when all your hippos set their wristwatches using NTP (okay, this allegory is getting stretched too far, but bear with me). Imagine that exactly every five minutes (just as a “cron job” might) they all demand access to food or something otherwise being served.

If you don’t believe me, have a quick look at the configuration for anacron, a version of cron, in Listing 2, which shows the guts of the file /etc/anacrontab.

# /etc/anacrontab: configuration file for anacron
# See anacron(8) and anacrontab(5) for details.

SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
# the maximal random delay added to the base delay of the jobs
RANDOM_DELAY=45
# the jobs will be started during the following hours only
START_HOURS_RANGE=3-22

#period in days   delay in minutes   job-identifier   command
1         5       cron.daily         nice run-parts /etc/cron.daily
7         25      cron.weekly        nice run-parts /etc/cron.weekly
@monthly  45      cron.monthly       nice run-parts /etc/cron.monthly

Listing 2: How delays are introduced into when “cron” jobs are run.

From Listing 2, you have hopefully spotted both “RANDOM_DELAY” and the “delay in minutes” column. If this aspect of cron is new to you, then you can find out more here:

# man anacrontab

Failing that, you can introduce a delay yourself if you’d like. An excellent web page (now more than a decade old) discusses this issue in a perfectly sensible way, suggesting the use of sleep to introduce a level of randomness, as seen in Listing 3.

#!/bin/sh

# Grab a random value between 0-240.
value=$RANDOM
while [ $value -gt 240 ] ; do
  value=$RANDOM
done

# Sleep for that time.
sleep $value

# Synchronize.
/usr/bin/rsync -aqzC --delete --delete-after masterhost::master /some/dir/

Listing 3: A shell script to introduce random delays before triggering an event, to avoid a Thundering Herd of Hippos.

The aim in mentioning these (potentially surprising) delays was to point you at the file /etc/crontab, or the root user’s own crontab file. If you want to change when the locate command runs, specifically because of disk access slowdowns, then it’s not too tricky. There may be a more graceful way of achieving this result, but you can also just move the file /etc/cron.daily/mlocate.cron somewhere else (I’ll use the /usr/local/etc directory), and, as the root user, add an entry into the root user’s crontab with this command and then paste the content as below:

# crontab -e
33 3 * * * /usr/local/etc/mlocate.cron

Rather than traipse through /var/log/cron and its older, rotated versions, you can quickly tell the last time your cron.daily jobs were fired, in the case of anacron at least, with:

# ls -hal /var/spool/anacron
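
One last practical note: the database only knows about files that existed when it was last rebuilt, so if you’ve just created something and want locate to see it straight away, refresh the database by hand first (the filename below is, again, only an illustration):

# updatedb
# locate brand-new-file.conf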

Next time, we’ll look at more ways to use locate, updatedb, and other tools for finding files.

Learn more about essential sysadmin skills: Download the Future Proof Your SysAdmin Career ebook now.

Chris Binnie’s latest book, Linux Server Security: Hack and Defend, shows how hackers launch sophisticated attacks to compromise servers, steal data, and crack complex passwords, so you can learn how to defend against these attacks. In the book, he also talks you through making your servers invisible, performing penetration testing, and mitigating unwelcome attacks. You can find out more about DevSecOps and Linux security via his website (http://www.devsecops.cc).


Mageia Linux Is a Modern Throwback to the Underdog Days

I’ve been using Linux long enough to remember Linux Mandrake. I recall, at one of my first-ever Linux conventions, hanging out with the MandrakeSoft crew and being starstruck to think that they were creating a Linux distribution that was sure to bring about world domination for the open source platform.

Well, that didn’t happen. In fact, Linux Mandrake didn’t even stand the test of time. It was renamed Mandriva and rebranded. Mandriva retained popularity but eventually came to a halt in 2011. The company disbanded, sending all those star developers to other projects. Of course, rising from the ashes of Mandrake Linux came the likes of OpenMandriva, as well as another distribution called Mageia Linux.

Like OpenMandriva, Mageia Linux is a fork of Mandriva. It was created (by a group of former Mandriva employees) in 2010 and first released in 2011, so there was next to no downtime between the end of Mandriva and the release of Mageia. Since then, Mageia has existed in the shadows of bigger, more popular flavors of Linux (e.g., Ubuntu, Mint, Fedora, Elementary OS, etc.), but it’s never faltered. As of this writing, Mageia is listed as number 26 on the Distrowatch Page Hit Ranking chart and is enjoying release number 6.1.

What Sets Mageia Apart?

This question has become quite important when looking at Linux distributions, considering just how many distros there are—many of which are hard to tell apart. If you’ve seen one KDE, GNOME, or Xfce distribution, you’ve seen them all, right? Anyone who’s used Linux enough knows this statement is not even remotely true. For many distributions, though, the differences lie in the subtleties. It’s not about what you do with the desktop; it’s how you put everything together to improve the user experience.

Mageia Linux defaults to the KDE desktop and does as good a job as any other distribution at presenting KDE to users. But before you start using KDE, you should note some differences between Mageia and other distributions. To start, the installation is quite simple, but it’s slightly askew from what you might expect. In similar fashion to most modern distributions, you boot up the live instance and click on the Install icon (Figure 1).

Once you’ve launched the installation app, it’s fairly straightforward, although not quite as simple as some other versions of Linux. New users might hesitate when presented with the partition choice between Use free space or Custom disk partition (remember, I’m talking about new users here). This type of user might prefer slightly simpler verbiage. Consider this: What if you were presented (at the partition section) with two choices:

  • Basic Install

  • Custom Install

The Basic install path would choose a fairly standard set of options (e.g., using the whole disk for installation and placing the bootloader in the proper/logical place). In contrast, the Custom install would allow the user to install in a non-default fashion (for dual boot, etc.) and choose where the bootloader would go and what options to apply.

The next possible confusing step (again, for new users) is the bootloader (Figure 2). For those who have installed Linux before, this option is a no-brainer. For new users, even understanding what a bootloader does can be a bit of an obstacle.

The bootloader configuration screen also allows you to password protect GRUB2. Because of the layout of this screen, it could be confused with the root user password. It’s not. If you don’t want to password protect GRUB2, leave this blank. In the final installation screen (Figure 3), you can set any bootloader options you might want. Once again, we find a window that could cause confusion for new users.

Click Finish and the installation will complete. You might have noticed the absence of user configuration or root user password options. With the first stage of the installation complete, you reboot the machine, remove the installer media, and (when the machine reboots) you’ll then be prompted to configure both the root user password and a standard user account (Figure 4).

And that’s all there is to the Mageia installation.

Welcome to Mageia

Once you log into Mageia, you’ll be greeted by something every Linux distribution should use—a welcome app (Figure 5).

From this welcome app, you can get information about the distribution, get help, and join communities. The importance of having such an approach to greet users at login cannot be overstated. When new users log into Linux for the first time, they want to know that help is available, should they need it. Mageia Linux has done an outstanding job with this feature. Granted, all this app does is serve as a means to point users to various websites, but it’s important information for users to have at the ready.

Beyond the welcome app, the Mageia Control Center (Figure 6) also helps Mageia stand out. This one-stop shop is where users can install/update software, configure media sources for installation, configure update frequency, manage/configure hardware, configure network devices (e.g., VPNs, proxies, and more), configure system services, view logs, open an administrator console, create network shares, and much more. This is as close to the openSUSE YaST tool as you’ll find (without using either SUSE or openSUSE).

Beyond those two tools, you’ll find everything else you need to work. Mageia Linux comes with the likes of LibreOffice, Firefox, KMail, GIMP, Clementine, VLC, and more. Out of the box, you’d be hard pressed to find another tool you need to install to get your work done. It’s that complete a distribution.

Target Audience

Pinning down the Mageia Linux target audience is tough. If new users can get past the somewhat confusing installation (which isn’t really that challenging, just slightly misleading), using Mageia Linux is a dream.

The slick, barely modified KDE desktop, combined with the welcome app and control center, makes for a desktop Linux that will let users of all skill levels feel perfectly at home. If the developers could tighten up the verbiage in the installation, Mageia Linux could be one of the greatest new-user Linux experiences available. Until then, new users should make sure they understand what they’re getting into with the installation portion of this take on the Linux platform.


ONS Evolution: Cloud, Edge, and Technical Content for Carriers and Enterprise

The first Open Networking Summit was held in October 2011 at Stanford University and described as “a premier event about OpenFlow and Software-Defined Networking (SDN)”. Here we are seven and a half years later, and I’m constantly amazed both at how far we’ve come since then and at how quickly a traditionally slow-moving industry like telecommunications is embracing change and innovation powered by open source. Coming out of the ONS Summit in Amsterdam last fall, Network World described open source networking as the “new norm,” and indeed, open platforms have become de facto standards in networking.

Like the technology, ONS as an event is constantly evolving to meet industry needs and is designed to help you take advantage of this revolution in networking. The theme of this year’s event is “Enabling Collaborative Development & Innovation” and we’re doing this by exploring collaborative development and innovation across the ecosystem for enterprises, service providers and cloud providers in key areas like SDN, NFV, VNF, CNF/Cloud Native Networking, Orchestration, Automation of Cloud, Core Network, Edge, Access, IoT services, and more.

A unique aspect of ONS is that it facilitates deep technical discussions in parallel with exciting keynotes, industry, and business discussions in an integrated program. The latest innovations from the networking project communities including LF Networking (ONAP, OpenDaylight, OPNFV, Tungsten Fabric) are well represented in the program, and in features and add-ons such as the LFN Unconference Track and LFN Networking Demos. A variety of event experiences ensure that attendees have ample opportunities to meet and engage with each other in sessions, the expo hall, and during social events.

New this year is a track structure built to cover the key topics in depth, meeting the needs of CIOs/CTOs/architects as well as developers, sysadmins, NetOps, and DevOps teams.

The ONS Schedule is now live — find the sessions and tutorials that will help you learn how to participate in the open source communities and ecosystems that will make a difference in your networking career. And if you need help convincing your boss, this will help you make the case.

The standard price expires March 17, so hurry up and register today! Be sure to check out the Day Passes and Hall Passes available as well.

I hope to see you there!

This article originally appeared at the Linux Foundation.


Open Source is Eating the Startup Ecosystem: A Guide for Assessing the Value Creation of Startups

In the last few years, we have witnessed the unprecedented growth of open source in all industries—from the increased adoption of open source software in products and services, to the extensive growth in open source contributions and the releasing of proprietary technologies under an open source license. It has been an incredible experience to be a part of.

As many have stated, Open Source is the New Normal, Open Source is Eating the World, Open Source is Eating Software, and so on, all of which are true statements. To that extent, I’d like to add one more maxim: Open Source is Eating the Startup Ecosystem. It is almost impossible to find a technology startup today that does not rely in one shape or form on open source software to boot up its operation and develop its product offering. As a result, we are operating in a space where open source due diligence is now a mandatory exercise in every M&A transaction. These exercises evaluate the open source practices of an organization and scope out all open source software used in product(s)/service(s) and how it interacts with proprietary components—all of which is necessary to assess the value creation of the company in relation to open source software.

Being intimately involved in this space has allowed me to observe, learn, and apply many open source best practices. I decided to chronicle these learnings in an ebook as a contribution to the OpenChain project: Assessment of Open Source Practices as part of Due Diligence in Merger and Acquisition Transactions. This ebook addresses the basic question of: How does one evaluate open source practices in a given organization that is an acquisition target? We address this question by offering a path to evaluate these practices along with appropriate checklists for reference. Essentially, it explains how the acquirer and the target company can prepare for this due diligence, offers an explanation of the audit process, and provides general recommended practices for ensuring open source compliance.

It is important to note that not every organization will see a need to implement every practice we recommend. Some organizations will find alternative practices or implementation approaches to achieve the same results. Appropriately, an organization will adapt its open source approach based upon the nature and amount of the open source it uses, the licenses that apply to the open source it uses, the kinds of products it distributes or services it offers, and the design of the products or services themselves.

If you are involved in assessing the open source and compliance practices of organizations, involved in an M&A transaction focusing on open source due diligence, or simply want a deeper understanding of defining, implementing, and improving open source compliance programs within your organization, this ebook is a must-read. Download the Brief.

This article originally appeared at the Linux Foundation.


CHIPS Alliance to Create Open Chip Design Tools for RISC-V and Beyond

The Linux Foundation and several major RISC-V development firms have launched an LF-hosted CHIPS Alliance with a mission “to host and curate high-quality open source code relevant to the design of silicon devices.” The founding members — Esperanto Technologies, Google, SiFive, and Western Digital — are all involved in RISC-V projects.  

On the same day that the CHIPS Alliance was announced, Intel and other companies, including Google, launched a Compute Express Link (CXL) consortium that will open source and develop Intel’s CXL interconnect. CXL shares many traits and goals with the OmniXtend protocol that Western Digital is contributing to CHIPS (see farther below).

The CHIPS Alliance aims to “foster a collaborative environment that will enable accelerated creation and deployment of more efficient and flexible chip designs for use in mobile, computing, consumer electronics, and Internet of Things (IoT) applications.” This “independent entity” will enable “companies and individuals to collaborate and contribute resources to make open source CPU chip and system-on-a-chip (SoC) design more accessible to the market,” says the project.

This announcement follows a collaboration between the RISC-V Foundation and the Linux Foundation formed last November to accelerate development for the open source RISC-V ISA, starting with RISC-V starter guides for Linux and Zephyr. The CHIPS Alliance is more focused on developing open source VLSI chip design building blocks for semiconductor vendors.

The CHIPS Alliance will follow Linux Foundation style governance practices and include the usual Board of Directors, Technical Steering Committee, and community contributors “who will work collectively to manage the project.” Initial plans call for establishing a curation process aimed at providing the chip community with access to high-quality, enterprise-grade hardware.

A testimonial quote by Zvonimir Bandic, senior director of next-generation platforms architecture at Western Digital, offers a few clues about the project’s plans: “The CHIPS Alliance will provide access to an open source silicon solution that can democratize key memory and storage interfaces and enable revolutionary new data-centric architectures. It paves the way for a new generation of compute devices and intelligent accelerators that are close to the memory and can transform how data is moved, shared, and consumed across a wide range of applications.”

Both the AI-focused Esperanto and SiFive, which has led the charge on Linux-driven RISC-V devices with its Freedom U540 SoC and upcoming U74 and U74-MC designs, are exclusively focused on RISC-V. Western Digital, which is contributing its RISC-V based SweRV core to the project, has pledged to produce 1 billion of SiFive’s RISC-V cores. All but Esperanto have committed to contribute specific technology to the project (see farther below).

Notably missing from the CHIPS founders list is Microchip, whose Microsemi unit announced a Linux-friendly PolarFire SoC, based in part on SiFive’s U54-MC cores. The PolarFire SoC is billed as the world’s first RISC-V FPGA SoC.

Although not included as a founding member, the RISC-V Foundation appears to be behind the CHIPS Alliance, as evident from this quote from Martin Fink, interim CEO of the RISC-V Foundation and VP and CTO of Western Digital: “With the creation of the CHIPS Alliance, we are expecting to fast-track silicon innovation through the open source community.”

With the exploding popularity of RISC-V, the RISC-V Foundation may have decided it has too much on its plate right now to tackle the projects the CHIPS Alliance is planning. For example, the Foundation is attempting to crack down on the growing fragmentation of RISC-V designs. A recent article in Semiconductor Engineering reports on the topic and on the RISC-V Compliance Task Group.

Although the official CHIPS Alliance mission statements do not mention RISC-V, the initiative appears to be an extension of the RISC-V ecosystem. So far, there have been few open-ISA alternatives to RISC-V. In December, however, Wave Computing announced plans to follow in RISC-V’s path by offering its MIPS ISA as open source code without royalties or proprietary licensing. As noted in a Bit-Tech.net report on the CHIPS Alliance, there are also various open source chip projects that cover somewhat similar ground, including the FOSSi (Free and Open Source Silicon) Foundation, LibreCores, and OpenCores.

Contributions from Google, SiFive, and Western Digital

Google plans to contribute to the CHIPS Alliance a Universal Verification Methodology (UVM) based instruction stream generator environment for RISC-V cores. The configurable UVM environment will provide “highly stressful instruction sequences that can verify architectural and micro-architectural corner-cases of designs,” says the CHIPS Alliance.

SiFive will contribute and continue to improve its RocketChip (or Rocket-Chip) SoC generator, including the initial version of the TileLink coherent interconnect fabric. SiFive will also continue to contribute to the Scala-based Chisel open-source hardware construction language and the FIRRTL “intermediate representation specification and transformation toolkit” for writing circuit-level transformations. SiFive will also continue to contribute to and maintain the Diplomacy SoC parameter negotiation framework.

As noted, Western Digital will contribute its 9-stage, dual-issue, 32-bit SweRV Core, which recently appeared on GitHub. It will also contribute a SweRV test bench and a SweRV instruction set simulator. Additional contributions will include the specification and early implementations of the OmniXtend cache coherence protocol.

Intel launches CXL interconnect consortium

Western Digital’s OmniXtend is similar to the high-speed Compute Express Link (CXL) CPU interconnect that Intel is open sourcing. On Monday, Intel, Alibaba, Cisco, Dell EMC, Facebook, Google, Hewlett Packard Enterprise, Huawei, and Microsoft announced a CXL consortium to help develop the PCIe Gen 5-based CXL into an industry standard. Intel intends to incorporate CXL into its processors starting in 2021 to link the CPU with memory and various accelerator chips.

The CXL group competes with a Cache Coherent Interconnect for Accelerators (CCIX) consortium founded in 2016 by AMD, Arm, IBM, and Xilinx. It similarly adds cache coherency atop a PCIe foundation to improve interconnect performance. By contrast, OmniXtend is based on Ethernet PHY technology. While the CXL and CCIX groups are focused only on interconnects, the CHIPS Alliance has a far more ambitious agenda, according to an interesting EETimes story on the CHIPS Alliance, CXL, and CCIX.


New Red Team Project Aims to Help Secure Open Source Software

The Linux Foundation has launched the Red Team Project, which incubates open source cybersecurity tools to support cyber range automation, containerized pentesting utilities, binary risk quantification, and standards validation and advancement.

The Red Team Project’s main goal is to make open source software safer to use. They use the same tools, techniques, and procedures used by malicious actors, but in a constructive way to provide feedback and help make open source projects more secure.

We talked with Jason Callaway, Customer Engineer at Google, to learn more about the Red Team project.

Linux Foundation: Can you briefly describe the Red Team project and its history with the Fedora Red Team SIG?

Jason Callaway: I founded the Fedora Red Team SIG with some fellow Red Hatters at Def Con 25. We had some exploit mapping tools that we wanted to build, and I was inspired by Mudge and Sarah Zatko’s Cyber-ITL project; I wanted to make an open source implementation of their methodologies. The Fedora Project graciously hosted us and were tremendous advocates. Now that I’m at Google, I’m fortunate to get to work on the Red Team as my 20% Project, where I hope to broaden its impact and build a more vendor neutral community. Fedora is collaborating with LF, supports our forking the projects, and will have a representative on our technical steering committee.

LF: What are some of the short- and long-term goals of the project?

Jason: Our most immediate goal is to get back up and running. That means migrating GitHub repos, setting up our web and social media presence, and most importantly, getting back to coding. We’re forming a technical steering committee that I think will be a real force multiplier in helping us to stay focused and impactful. We’re also starting a meetup in Washington DC that will alternate between featured speakers and hands-on exploit curation hackathons on a two-week cadence.

LF: Why is open source important to the project?

Jason: Open source is important to us in many ways, but primarily because it’s the right thing to do. Cybersecurity is a global problem that impacts individuals, businesses, governments, everybody. So we have to make open source software safer.

There are lots of folks working on that, and in classic open source fashion, we’re standing on the shoulders of giants. But the Red Team Project hopes to offer some distinctly offensive value to open source software security.

LF: How can the community learn more and get involved?

Jason: I used to have a manager who liked to say, “80% of the job is just showing up.” It was tongue-in-cheek for sure, but it definitely applies to open source projects. To learn more, you can attend our meetups either in person or via Google Hangout, subscribe to our mailing list, and check out our projects on GitHub or our website.

This article originally appeared at The Linux Foundation


BackBox Linux for Penetration Testing

Any given task can succeed or fail depending upon the tools at hand. For security engineers in particular, building just the right toolkit can make life exponentially easier. Luckily, with open source, you have a wide range of applications and environments at your disposal, ranging from simple commands to complicated and integrated tools.

The problem with the piecemeal approach, however, is that you might wind up missing out on something that can make or break a job… or wasting a lot of time hunting down the right tools for the job. To that end, it’s always good to consider an operating system geared specifically for penetration testing (aka pentesting).

Within the world of open source, the most popular pentesting distribution is Kali Linux. It is, however, not the only tool in the shop. In fact, there’s another flavor of Linux, aimed specifically at pentesting, called BackBox. BackBox is based on Ubuntu Linux, which also means you have easy access to a host of other outstanding applications besides those that are included, out of the box.

What Makes BackBox Special?

BackBox includes a suite of ethical hacking tools, geared specifically toward pentesting. These testing tools include the likes of:

Out of the box, one of the most significant differences between Kali Linux and BackBox is the number of installed tools. Whereas Kali Linux ships with hundreds of tools pre-installed, BackBox significantly limits that number to around 70.  Nonetheless, BackBox includes many of the tools necessary to get the job done, such as:

BackBox is in active development; the latest version (5.3) was released February 18, 2019. But how is BackBox as a usable tool? Let’s install and find out.

Installation

If you’ve installed one Linux distribution, you’ve installed them all … with only slight variation. BackBox is pretty much the same as any other installation. Download the ISO, burn the ISO onto a USB drive, boot from the USB drive, and click the Install icon.

The installer (Figure 1) will be instantly familiar to anyone who has installed an Ubuntu or Debian derivative. Just because BackBox is a distribution geared specifically toward security administrators doesn’t mean the operating system is a challenge to get up and running. In fact, BackBox is a point-and-click affair that anyone, regardless of skills, can install.

The trickiest section of the installation is the Installation Type. As you can see (Figure 2), even this step is quite simple.

Once you’ve installed BackBox, reboot the system, remove the USB drive, and wait for it to land on the login screen. Log into the desktop and you’re ready to go (Figure 3).

Using BackBox

Thanks to the Xfce desktop environment, BackBox is easy enough for a Linux newbie to navigate. Click on the menu button in the top left corner to reveal the menu (Figure 4).

From the desktop menu, click on any one of the favorites (in the left pane) or click on a category to reveal the related tools (Figure 5).

The menu entries you’ll most likely be interested in are:

  • Anonymous – allows you to start an anonymous networking session.

  • Auditing – the majority of the pentesting tools are found in here.

  • Services – allows you to start/stop services such as Apache, Bluetooth, Logkeys, Networking, Polipo, SSH, and Tor.

Before you run any of the testing tools, I recommend first updating and upgrading BackBox. This can be done via a GUI or the command line. If you opt to go the GUI route, click on the desktop menu, click System, and click Software Updater. When the updater completes its check for updates, it will prompt you if any are available, or if (after an upgrade) a reboot is necessary (Figure 6).

Should you opt to go the manual route, open a terminal window and issue the following two commands:

sudo apt-get update
sudo apt-get upgrade -y

Many of the BackBox pentesting tools do require a solid understanding of how each tool works, so before you attempt to use any given tool, make sure you know how to use said tool. Some tools (such as Metasploit) are made a bit easier to work with, thanks to BackBox. To run Metasploit, click on the desktop menu button and click msfconsole from the favorites (left pane). When the tool opens for the first time, you’ll be asked to configure a few options. Simply accept each default by pressing the Enter key when prompted. Once you see the Metasploit prompt, you can run commands like:

db_nmap 192.168.1.0/24

The above command will list out all discovered ports on a 192.168.1.x network scheme (Figure 7).
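
Because the scan results land in Metasploit’s backing database, you can query them from the same console. For example, hosts lists every machine the scan discovered, and services -p 22 shows which of those hosts have SSH (port 22) open; the output will, of course, depend on your own network:

hosts
services -p 22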

Even often-challenging tools like Metasploit are made far easier than they are with other distributions (partially because you don’t have to bother with installing the tools). That alone is worth the price of entry for BackBox (which is, of course, free).

The Conclusion

Although BackBox usage may not be as widespread as Kali Linux, it still deserves your attention. For anyone looking to do pentesting on their various environments, BackBox makes the task far easier than so many other operating systems. Give this Linux distribution a go and see if it doesn’t aid you in your journey to security nirvana.


New Linux Kernel: The Big 5.0

Linus Torvalds at last made the jump with the recent release of kernel 5.0. Although Linus likes to say that his only reason to move on to the next integer is when he runs out of fingers and toes with which to count the fractional part of the version number, the truth is this kernel is pretty loaded with new features.

On the network front, apart from improvements to drivers like that of the Realtek R8169, 5.0 will come with better network performance. Network performance has been down for the last year or so because of Spectre V2. The bug forced kernel developers to introduce something called a Retpoline (short for “RETurn tramPOLINE“) to mitigate its effect. The changes introduced in kernel 5.0 “[…] Overall [give a greater than] 10% performance improvement for UDP GRO benchmark and smaller but measurable [improvements] for TCP syn flood” according to developer Paolo Abeni.

What hasn’t made the cut yet is the much anticipated integration of WireGuard. WireGuard is a VPN protocol that is allegedly faster, more versatile, and safer than the ones currently supported by the kernel. WireGuard is easy to implement, uses state-of-the-art encryption, and is capable of keeping the VPN link up even if the user switches to a different WiFi network or changes from WiFi to a wired connection.

An ongoing task is the work going into preparing for the Y2038 problem. In case you have never heard of this, UNIX and UNIX-like systems (including Linux) have clocks that count from January 1st, 1970. The number of seconds from that date onwards is stored in a signed 32-bit type called time_t. The type is signed because, you know, there are some programs that need to show dates before the 70s.

At the moment of writing we are already somewhere in the 01011100 01110010 10010000 10111010 region and the clock is literally ticking. On January 19th 2038, at 3:14:07 in the morning, the clock will reach 01111111 11111111 11111111 11111111. One second later, time_t will overflow, changing the sign of your clock and making your system believe, along with millions of devices and servers worldwide, that we are back in 1901.
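
You can check that rollover moment for yourself with GNU date (the -d @SECONDS form is a GNU coreutils extension, so this assumes a typical Linux box); 2^31 - 1 seconds after the epoch works out to:

date -u -d @2147483647    # prints: Tue Jan 19 03:14:07 UTC 2038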

Then… well, the usual: planes will fall from the sky, nuclear power stations will melt down, and toasters will explode, rendering the world breakfastless. That is, of course, unless the brave kernel developers come up with a solution in the meantime. Then again, they made the Wii controller work in Linux; what could they not achieve?

More stuff to look forward to in Linux kernel 5.0

  • Native support for FreeSync/VRR on AMD GPUs means that your smart monitor and your video card can now sync up their frame rates, so you won’t see any more tearing artifacts when playing a busy game or watching an action movie.
  • Linux now has native support for Adiantum filesystem encryption, which boosts performance on low-powered devices built around ARM Cortex-A7 or lower (think mid- to low-end phones and many SBCs).
  • Talking of SBCs, the touch screen for the Raspberry Pi has at last been mainlined, and Btrfs now supports swap files.

As always, you can find more information about Linux 5.0 by reading Linus’s announcement on the Linux Kernel mailing list, checking out the in-depth articles at Phoronix and by reading the Kernel Newbies report.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.


MiyoLinux: A Lightweight Distro with an Old-School Approach

I must confess, although I often wax poetic about the old ways of the Linux desktop, I much prefer my distributions to help make my daily workflow as efficient as possible. Because of that, my taste in Linux desktop distributions veers very far toward the modern side of things. I want a distribution that integrates apps seamlessly, gives me notifications, looks great, and makes it easy to work with certain services that I use.

However, every so often it’s nice to dip my toes back into those old-school waters and remind myself why I fell in love with Linux in the first place. That’s precisely what MiyoLinux did for me recently. This lightweight distribution is based on Devuan and makes use of the i3 Tiling Window Manager.

Why is it important that MiyoLinux is based on Devuan? Because that means it doesn’t use systemd. There are many within the Linux community who’d be happy to make the switch to an old-school Linux distribution that opts out of systemd. If that’s you, MiyoLinux might just charm you into submission.

But don’t think MiyoLinux is going to be as easy to get up and running as, say, Ubuntu Linux, Elementary OS, or Linux Mint. Although it’s not nearly as challenging as Arch or Gentoo, MiyoLinux does approach installation and basic usage a bit differently. Let’s take a look at how this particular distro handles things.

Installation

The installation GUI of MiyoLinux is pretty basic. The first thing you’ll notice is that you are presented with a good number of notes regarding the usage of the MiyoLinux desktop. If you happen to be testing MiyoLinux via VirtualBox, you’ll wind up having to deal with the frustration of not being able to resize the window (Figure 1), as the Guest Additions cannot be installed. This also means mouse integration cannot be enabled during the installation, so you’ll have to tab through the windows and use your keyboard cursor keys and Enter key to make selections.

Once you click the Install MiyoLinux button, you’ll be prompted to continue using either su or sudo. Click the Use sudo button to continue with the installation.

The next screen of importance is the Installation Options window (Figure 2), where you can select various options for MiyoLinux (such as encryption, file system labels, disable automatic login, etc.).

The MiyoLinux installation does not include an automatic partition tool. Instead, you’ll be prompted to run either cfdisk or GParted (Figure 3). If you don’t know your way around cfdisk, select GParted and make use of the GUI tool.

With your disk partitioned (Figure 4), you’ll be required to take care of the following steps:

  • Configure the GRUB bootloader.

  • Select the filesystem for the bootloader.

  • Configure time zone and locales.

  • Configure keyboard, keyboard language, and keyboard layout.

  • Okay the installation.

Once you’ve okayed the installation, all packages will be installed and you will then be prompted to install the bootloader. Following that, you’ll be prompted to configure the following:

  • Hostname.

  • User (Figure 5).

  • Root password.

With the above completed, reboot and log into your new MiyoLinux installation.

Usage

Once you’ve logged into the MiyoLinux desktop, you’ll find things get a bit less than user-friendly. This is by design. You won’t find any sort of mouse menu available anywhere on the desktop. Instead, you use keyboard shortcuts to open the different types of menus. The Alt+m key combination will open the PMenu, which is what one would consider a fairly standard desktop mouse menu (Figure 6).

The Alt+d key combination will open the dmenu, a search tool at the top of the desktop, where you can scroll through (using the cursor keys) or search for an app you want to launch (Figure 7).

Installing Apps

If you open the PMenu, click System > Synaptic Package Manager. From within that tool, you can search for any app you want to install. However, if you find Synaptic doesn’t want to start from the PMenu, open the dmenu, search for terminal, and (once the terminal opens) issue the command sudo synaptic. That will get the package manager open, where you can start installing any applications you want (Figure 8).

Of course, you can always install applications from the command line. MiyoLinux depends upon the Apt package manager, so installing applications is as easy as:

sudo apt-get install libreoffice -y

Once installed, you can start the new package from either the PMenu or dmenu tools.

MiyoLinux Accessories

If you find you need a bit more from the MiyoLinux desktop, type the keyboard combination Alt+Ctrl+a to open the MiyoLinux Accessories tool (Figure 9). From this tool you can configure a number of options for the desktop.

All other necessary keyboard shortcuts are listed on the default desktop wallpaper. Make sure to commit those shortcuts to memory, as you won’t get very far in the i3 desktop without them.

A Nice Nod to Old-School Linux

If you’re itching to throw it back to a time when Linux offered you a bit of challenge to your daily grind, MiyoLinux might be just the operating system for you. It’s a lightweight operating system that makes good use of a minimal set of tools. Anyone who likes their distributions to be less modern and more streamlined will love this take on the Linux desktop. However, if you prefer your desktop with the standard bells and whistles, found on modern distributions, you’ll probably find MiyoLinux nothing more than a fun distraction from the standard fare.