
Learn to Work with the Linux Command Line

Open source software isn’t just proliferating within technology infrastructures around the world; it is also creating profound opportunities for people with relevant skills. Organizations of all sizes have reported widening skills gaps in this area. Linux tops the list as the most in-demand open source skill, according to the 2018 Open Source Jobs Report. With this in mind, in this article series we are taking a closer look at one of the best new ways to gain open source and Linux fluency: the Introduction to Open Source Software Development, Git and Linux training course from The Linux Foundation.

This story is the third in a four-part article series that highlights major aspects of the training course. The first article in the series covered the course’s general introduction to working with open source software, with a focus on such essentials as project collaboration, licensing, legal issues and getting help. The second article covered the course curriculum dedicated to working with Bash and Linux basics.

Working with commands and command-line tools is an essential Linux skill, and the course delves into task- and lab-based instruction on these topics. The discussion of major command-line tools is comprehensive and includes lessons on:

  • Locating files with find and locate

  • Finding character strings in files using grep

  • Substituting strings in files using sed
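As a quick taste of those three tools working together, here is a small sketch (the file names and search strings are made-up examples, not from the course):

```shell
# Create a small file to search (throwaway example data)
mkdir -p /tmp/lfd201-demo
printf 'alpha\nbeta\nalpha beta\n' > /tmp/lfd201-demo/notes.txt

# Locate the file by name with find
find /tmp/lfd201-demo -name 'notes.txt'

# Find lines containing the character string "alpha" with grep
grep 'alpha' /tmp/lfd201-demo/notes.txt

# Substitute "alpha" with "gamma" using sed (prints the result;
# adding -i would edit the file in place)
sed 's/alpha/gamma/' /tmp/lfd201-demo/notes.txt
```

locate works similarly to find by name, but reads from a prebuilt database (updated with updatedb) rather than walking the filesystem.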

There is a Labs module that asks you to set the prompt to show the current directory and encourages you to follow up by changing the prompt to any other desired configuration. In addition to being self-paced, the course focuses on performing meaningful tasks rather than simply reading or watching.
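For reference, a minimal version of that exercise in Bash looks something like this (the exact prompt strings are just examples):

```shell
# \w expands to the current working directory in a Bash prompt
PS1='\w $ '

# Any other desired configuration works the same way,
# e.g. user, host, and directory:
PS1='\u@\h:\w\$ '

# To make a prompt permanent, append the assignment to ~/.bashrc:
# echo "PS1='\u@\h:\w\$ '" >> ~/.bashrc
```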

Overall, the course contains 43 hands-on lab exercises that will allow you to practice your skills, along with a similar number of quizzes to check your knowledge. It also provides more than 20 videos showing you how to accomplish important tasks.

As you go through these lessons, keep in mind that the online course includes many summary slides, useful lists, graphics, and other resources that can be referenced later. It’s definitely worth setting up a desktop folder and regularly saving screenshots of especially useful topics there for handy reference. For example, here is a slide that summarizes the handy utilities that any user should have in their toolbox:

With the groundwork laid for working with the command line and command line tools, the course then comprehensively covers working with Git, including hands-on learning modules. We will explore the course’s approach to this important topic in the next installment in this series.

Learn more about Introduction to Open Source Development, Git, and Linux (LFD201) and sign up now to start your open source journey.


An Overview of Android Pie

Let’s talk about Android for a moment. Yes, I know it’s only Linux by way of a modified kernel, but what isn’t these days? And seeing as how the developers of Android have released what many (including yours truly) believe to be the most significant evolution of the platform to date, there’s plenty to talk about. Of course, before we get into that, it does need to be mentioned (and most of you will already know this) that the whole of Android isn’t open source. Although much of it is, when you get into the bits that connect to Google services, things start to close up. One major service is the Google Play Store, a functionality that is very much proprietary. But this isn’t about how much of Android is open or closed, this is about Pie.
Delicious, nutritious … efficient and battery-saving Pie.

I’ve been working with Android Pie on my Essential PH-1 daily driver (a phone that I really love, but understand how shaky the ground is under the company). After using Android Pie for a while now, I can safely say you want it. It’s that good. But what about the ninth release of Android makes it so special? Let’s dig in and find out. Our focus will be on the aspects that affect users, not developers, so I won’t dive deep into the underlying works.

Gesture-Based Navigation

Much has been made about Android’s new gesture-based navigation—much of it not good. To be honest, this was a feature that aroused all of my curiosity. When it was first announced, no one really had much of an idea what it would be like. Would users be working with multi-touch gestures to navigate around the Android interface? Or would this be something completely different?

The reality is, gesture-based navigation is much more subtle and simple than what most assumed. And it all boils down to the Home button. With gesture-based navigation enabled, the Home button and the Recents button have been combined into a single feature. This means, in order to gain access to your recent apps, you can’t simply tap that square Recents button. Instead, the Recent apps overview (Figure 1) is opened with a short swipe up from the home button.

Another change is how the App Drawer is accessed. In similar fashion to opening the Recents overview, the App Drawer is opened via a long swipe up from the Home button.

As for the back button? It hasn’t been removed. Instead, what you’ll find is that it appears (on the left side of the home screen dock) when an app calls for it. Sometimes that back button will appear even if an app includes its own back button.

Thing is, however, if you don’t like gesture-based navigation, you can disable it. To do so, follow these steps:

  1. Open Settings

  2. Scroll down and tap System > Gestures

  3. Tap Swipe up on Home button

  4. Tap the On/Off slider (Figure 2) until it’s in the Off position

Battery Life

AI has become a crucial factor in Android. In fact, it is AI that has helped to greatly improve battery life in Android. This new feature is called Adaptive Battery and works by prioritizing battery power for the apps and services you use most. By using AI, Android learns how you use your apps and, after a short period, can then shut down unused apps so they aren’t draining your battery while waiting in memory.

The only caveat to Adaptive Battery is that, should the AI pick up “bad habits” and your battery starts to drain prematurely, the only way to reset the function is by way of a factory reset. Even with that small oversight, the improvement in battery life from Android Oreo to Pie is significant.

Changes to Split Screen

Split Screen has been available to Android for some time. However, with Android Pie, how it’s used has slightly changed. This change only affects those who have gesture-based navigation enabled (otherwise, it remains the same). In order to work with Split Screen on Android 9.0, follow these steps:

  1. Swipe upward from the Home button to open the Recent apps overview.

  2. Locate the app you want to place in the top portion of the screen.

  3. Long press the app’s circle icon (located at the top of the app card) to reveal a new popup menu (Figure 3)

  4. Tap Split Screen and the app will open in the top half of the screen.

  5. Locate the second app you want to open and tap it to add it to the bottom half of the screen.

Using Split Screen and closing apps with the feature remains the same as it was.

App Actions

This is another feature that was introduced some time ago, but was given some serious attention for the release of Android Pie. App Actions make it such that you can do certain things with an app directly from the app’s launcher.

For instance, if you long-press the Gmail launcher, you can opt to reply to a recent email or compose a new email. Back in Android Oreo, that feature came in the form of a popup list of actions. With Android Pie, the feature now better fits with the Material Design scheme of things (Figure 4).

Sound Controls

Ah, the ever-changing world of sound controls on Android. Android Oreo had an outstanding method of controlling your sound, by way of minor tweaks to the Do Not Disturb feature. With Android Pie, that feature finds itself in a continued state of evolution.

What Android Pie nailed is the quick-access buttons for controlling sound on a device. Now, if you press either the volume up or down button, you’ll see a new popup menu that allows you to control whether your device is silenced and/or vibrations are muted. By tapping the top icon in that popup menu (Figure 5), you can cycle through silence, mute, or full sound.

Screenshots

Because I write about Android, I tend to take a lot of screenshots. With Android Pie came one of my favorite improvements: sharing screenshots. Instead of having to open Google Photos, locate the screenshot to be shared, open the image, and share the image, Pie gives you a pop-up menu (after you take a screenshot) that allows you to share, edit, or delete the image in question. 

If you want to share the screenshot, take it, wait for the menu to pop up, tap Share (Figure 6), and then share it from the standard Android sharing menu.

A More Satisfying Android Experience

The ninth iteration of Android has brought about a far more satisfying user experience. What I’ve illustrated only scratches the surface of what Android Pie brings to the table. For more information, check out Google’s official Android Pie website. And if your device has yet to receive the upgrade, have a bit of patience. Pie is well worth the wait.


Understanding Linux Links: Part 2

In the first part of this series, we looked at hard links and soft links and discussed some of the various ways that linking can be useful. Linking may seem straightforward, but there are some non-obvious quirks you have to be aware of. That’s what we’ll be looking at here. Consider, for example, the way we created the link to libblah in the previous article. Notice how we linked from within the destination folder:

cd /usr/local/lib
ln -s /usr/lib/libblah

That will work. But this:

cd /usr/lib
ln -s libblah /usr/local/lib

That is, linking from within the original folder to the destination folder, will not work.

The reason is that the link file only stores the name you gave it (libblah), not the path to the file. When the link in /usr/local/lib is later resolved, the name libblah is looked up relative to /usr/local/lib, where no such file exists. The end result is a very broken link.
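You can see the breakage for yourself in a scratch directory (the paths below are throwaway stand-ins for /usr/lib and /usr/local/lib):

```shell
# Recreate the pitfall with throwaway directories
mkdir -p /tmp/link-demo/orig /tmp/link-demo/dest
touch /tmp/link-demo/orig/libblah

# Link "from within the original folder" using just the name
cd /tmp/link-demo/orig
ln -s libblah /tmp/link-demo/dest

# The link stores only the bare name, not the path...
readlink /tmp/link-demo/dest/libblah

# ...so it resolves relative to dest/, where no libblah exists
[ -e /tmp/link-demo/dest/libblah ] || echo "broken link"
```

readlink prints exactly what the link stores—here just “libblah”—which is why the link dangles.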

However, this:

cd /usr/lib
ln -s /usr/lib/libblah /usr/local/lib

will work. Then again, it would work regardless of where in the filesystem you executed the instruction. Using absolute paths, that is, spelling out the whole path, from root (/) drilling down to the file or directory itself, is just best practice.

Another thing to note is that, as long as both /usr/lib and /usr/local/lib are on the same partition, making a hard link (note the absence of -s) like this:

cd /usr/lib
ln libblah /usr/local/lib

will also work, because hard links don’t store a path at all: they point directly to the file’s data on disk.
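A quick way to see how hard links behave is to compare inode numbers in a scratch directory (the paths are throwaway examples):

```shell
# Hard links share the same inode -- the same data on disk
mkdir -p /tmp/hardlink-demo
echo "hello" > /tmp/hardlink-demo/fileA
ln /tmp/hardlink-demo/fileA /tmp/hardlink-demo/fileB

# Both names report the identical inode number
ls -i /tmp/hardlink-demo/fileA /tmp/hardlink-demo/fileB

# stat's %i (inode) and %h (hard link count) confirm it:
# the link count is now 2
stat -c '%i %h' /tmp/hardlink-demo/fileA
```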

Where hard links will not work is if you want to link across partitions. Say you have fileA on partition A and the partition is mounted at /path/to/partitionA/directory. If you want to link fileA to /path/to/partitionB/directory that is on partition B, this will not work:

 ln /path/to/partitionA/directory/file /path/to/partitionB/directory 

As we saw previously, hard links are entries in a partition’s file table that point to data on the *same partition*. You can’t have an entry in the table of one partition pointing to data on another partition. Your only choice here would be to use a soft link:

 ln -s /path/to/partitionA/directory/file /path/to/partitionB/directory 

Another thing that soft links can do and hard links cannot is link to whole directories:

 ln -s /path/to/some/directory /path/to/some/other/directory 

will create a link to /path/to/some/directory within /path/to/some/other/directory without a hitch.

Trying to do the same by hard linking will show you an error saying that you are not allowed to do that. And the reason for that is unending recursion: if you have directory B inside directory A, and then you link A inside B, you have a problem, because then A contains B, which contains A, which contains B, and so on ad infinitum.

You can create recursive links using soft links, but why would you do that to yourself?

Should I use a hard or a soft link?

In general, you can use soft links everywhere and for everything. In fact, there are situations in which you can only use soft links. That said, hard links are slightly more efficient: they take up less space on disk and are faster to access. On most machines you will not notice the difference, though: the difference in space and speed will be negligible given today’s massive and speedy hard disks. However, if you are using Linux on an embedded system with limited storage and a low-powered processor, you may want to give hard links some consideration.

Another reason to use hard links is that a hard link is much more difficult to break. If you have a soft link and you accidentally move or delete the file it is pointing to, your soft link will be broken and point to… nothing. There is no danger of this happening with a hard link, since the hard link points directly to the data on the disk. Indeed, the space on the disk will not be flagged as free until the last hard link pointing to it is erased from the file system.
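This difference is easy to demonstrate in a scratch directory (throwaway paths again):

```shell
mkdir -p /tmp/robust-demo
echo "important data" > /tmp/robust-demo/original

# One hard link and one soft link to the same file
ln    /tmp/robust-demo/original /tmp/robust-demo/hard
ln -s /tmp/robust-demo/original /tmp/robust-demo/soft

# Delete the original name
rm /tmp/robust-demo/original

# The hard link still reaches the data on disk;
# the soft link now points to... nothing
cat /tmp/robust-demo/hard
[ -e /tmp/robust-demo/soft ] || echo "soft link is broken"
```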

Soft links, on the other hand can do more than hard links and point to anything, be it file or directory. They can also point to items that are on different partitions. These two things alone often make them the only choice.

Next Time

Now we have covered files and directories and the basic tools to manipulate them, you are ready to move onto the tools that let you explore the directory hierarchy, find data within files, and examine the contents. That’s what we’ll be dealing with in the next installment. See you then!

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.


The State of Hyperledger with Brian Behlendorf

Brian Behlendorf has been heading the Hyperledger project from the early days. We sat down with him at Open Source Summit to get an update on the Hyperledger project.

Hyperledger has grown in a way that mirrors the growth of the blockchain industry. “When we started, all the excitement was around bitcoin,” said Brian Behlendorf, Executive Director of Hyperledger. Initially, it was more about moving money around. But the industry started to go beyond that and began to see if blockchain “could be used as a way to reestablish how trust works on the Internet and try to decentralize a lot of things that today have become centralized.”

As the industry has evolved around blockchain so did Hyperledger. “We realized pretty early that we needed to be a home for a lot of different ways to build a blockchain. It wasn’t going to be like the Linux kernel project with one singular architecture,” said Behlendorf.

Read more at The Linux Foundation


EdgeX Foundry’s First Dev Kit Runs Ubuntu on an Artik Board

The Linux Foundation’s EdgeX Foundry project for developing open source edge computing middleware has released its first developer kit. The Ubuntu-based kit is built around an octa-core Samsung Artik 710 Starter Kit teamed with a GrovePi+ I/O board. Future kits will include an Artik 530 kit, and eventually, a Raspberry Pi/GrovePi+ combination.

At the recent IoT Solutions World Congress, the EdgeX Foundry project also announced nine new members, including Intel, and debuted a Smart Building Automation Use Case Community Demo. The demo showed off the platform’s ability to bring together heterogeneous solution components, including different vendors, connectivity standards, operating systems, and hardware types. 

EdgeX Foundry was announced in 2017, with a goal of developing a standardized, open source interoperability framework for IoT edge computing. In August, the project released a v2 “California” version of the middleware, which will be succeeded by a “Delhi” release in November. Delhi will provide EdgeX’s first management features, as well as improved security features such as access control and security bootstrapping. It will also offer C and Golang-based Device Service SDKs and a reference GUI.

Based largely on technology created by Dell, EdgeX Foundry is creating and certifying an ecosystem of interoperable, plug-and-play components to create an open source EdgeX stack for IoT edge computing. The cross-platform middleware will mediate between multiple sensor network messaging protocols as well as multiple cloud and analytics platforms.

Dell is one of three Platinum members alongside Analog Devices and Samsung. With the new additions, the membership has reached 70. The new members are Basking Automation, Beijing University of Posts and Telecommunications (BUPT), DATA AHEAD, CertusNet, Intel Corp., Redis Labs, the Federal University of Campina Grande (UFCG) /Embedded Lab, Windmill Enterprise, and ZEDEDA. Previous members include AMD, Canonical, Cloud Foundry, Linaro, Mocana, NetFoundry, and VMware.

EdgeX developer kits

The Artik 710 based EdgeX developer kit is initially available as a community-supported product. Developers independently purchase the kit from Samsung and download the upcoming EdgeX Delhi software from the EdgeX repository on GitHub. Informal, community-based tech support is available via forums like the EdgeX Rocket Chat.

This initial kit, as well as future kits, will also soon be available as part of a commercial track that offers professional support. The commercial kits are designed primarily for EdgeX members but are available to anyone. Commercial options will include “kits based on supported versions of the EdgeX framework itself (neutral to any plug-in value add), kits based on specific IoT platforms, and microservice plug-ins for value-add such as analytics, data orchestration and security,” says the project.

Samsung’s Artik 710 and Artik 530, which will form the basis of an upcoming EdgeX kit, switched their BSPs from Fedora to Ubuntu in October 2017. The Artik 710 module features a 1.4GHz, octa-core Cortex-A53 SoC with a Mali T400 GPU, while the Artik 530 has a 1.2GHz, quad-core Cortex-A9 SoC. Both include hardware security elements.

The 49x36mm modules integrate 1GB DDR3 RAM, 4GB eMMC flash, and an Ethernet PHY. They also include dual-band 802.11a/b/g/n (WiFi 4), Bluetooth 4.2, and Zigbee/Thread (802.15.4).

The Artik 710 Developer Kit is a double board set. The Interposer Board provides the Artik 710 plus Gigabit Ethernet, micro-HDMI, and micro-USB OTG ports. There’s also an LVDS interface and antenna connectors. The Platform Board sits under the Interposer board and provides a USB 2.0 host port, SD slot, audio jack, JTAG, 5V DC input, and MIPI-CSI and -DSI connections.

The EdgeX version of the Artik 710 kit also includes the optional Artik Interface II Board, which connects the bundled Seeed GrovePi+ I/O board. The GrovePi+ Starter Kit also provides a dozen Grove sensors and LEDs, plus a backlit LCD, buzzer, relay, and button.

The GrovePi+ Starter Kit is also part of Samsung’s GrovePi+ Starter Kit for Eagleye 530 board, which will form the basis of the upcoming Artik 530 kit. Unlike the Artik 710 kit, the Artik 530 equipped Eagleye 530 is a single board with a Raspberry Pi like layout, footprint, and 40-pin GPIO interface. The Eagleye 530 is further equipped with GbE and HDMI ports, 2x USB 2.0 ports, and micro-USB OTG and power ports. There’s also an SD slot, audio jack, and MIPI-CSI camera interface. Unlike the Artik 710 kit, the Eagleye 530 does not require the Interface II Board to hook up the bundled GrovePi+ board.

The GrovePi+ board will also be available in a future EdgeX kit that runs on the GrovePi+ Starter Kit for Raspberry Pi. Other development kits are also under consideration. Even before Intel joined the project, one of them was likely to feature an x86 chip.

“Intel’s involvement in EdgeX Foundry will help drive scale and accessibility of solutions for both our customers and businesses of all sizes,” stated Stacey Shulman, Intel’s chief innovation officer for Retail Solutions.


Linux Kernel 4.19 – Long Term Support, USB Type C, and WiFi 6

This was a rather special release due to the fact that, about halfway through the process, Linus Torvalds left the helm of Linux kernel development to take a rare break. However, Greg Kroah-Hartman took over until the release was ready and is now handing the reins back to Torvalds.

Another interesting fact about this iteration is that 4.19 will be a Long Term Support (LTS) kernel. That is, it will receive updates and patches to keep it safe and maintained for at least a couple of years. The last LTS kernel (which is still supported) was 4.14, released in November 2017.

On the purely technical side, among many other things, 4.19 is getting a new USB Type-C display mode driver. This means exactly what it says on the box: soon you will be able to use the USB Type-C port on your machine to stream video to a display.

Also, the kernel at last gets a built-in GPS subsystem. Obviously, Linux has supported GPS devices for years, but this support was pretty non-standard and dependent on external and ad hoc modules that varied from device to device. The GNSS (Global Navigation Satellite System) subsystem puts an end to that. According to the Phoronix article on the matter, GNSS abstracts the underlying hardware interfaces and will have a front end in user space that will allow programs to read the data in a standardized way and format. And, once more, developers have had to deal with the Spectre bug. This time, it was to add mitigations for affected IBM POWER CPUs.

Other things to look forward to in Linux kernel 4.19:

  • A new queuing discipline for the network packet scheduler. Dubbed CAKE (for Common Applications Kept Enhanced), it aims to speed up home network routers and links. By shaping data transmissions over the network, it helps reduce the buffering and latency problems that slow down your downloads, make your video stream stutter, or get you kicked off CS:GO.
  • A brand new (and, for now, experimental) read-only file system called EROFS (for Enhanced Read-Only File System). Created by Huawei, it is lightweight and modern in design, and is aimed at situations where a high-performance read-only file system is needed, such as firmware for mobile devices or Live CDs.
  • Preliminary support for the upcoming WiFi 6 (aka 802.11ax) protocol. WiFi 6 widens the band for network transmissions and will be substantially faster than current WiFi networks, as a wider band means less congestion. Less congestion also means data can be transmitted more reliably. That said, there are still only a few 802.11ax-enabled devices out there, but when they arrive, Linux will be ready!
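For the curious, on a system running 4.19 with a recent iproute2, enabling CAKE on a router interface looks something like this (the interface name and bandwidth are example values, and the commands require root):

```shell
# Replace the root queuing discipline on eth0 with CAKE,
# shaped to (for example) a 20 Mbit/s uplink
sudo tc qdisc replace dev eth0 root cake bandwidth 20mbit

# Verify the active qdisc and inspect its statistics
tc -s qdisc show dev eth0
```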

As usual, there’s much more to learn about kernel 4.19 on the Linux Kernel Mailing List. You can also visit the articles on Phoronix or check out the Kernel Newbies report.

Learn more about Linux in the Introduction to Open Source Development, Git, and Linux (LFD201) training course from The Linux Foundation and get started on your open source journey.


The Linux Foundation Awards 31 Open Source Training Scholarships

Since 2011, the Linux Foundation has awarded 106 training scholarships worth over $220,000. In 2018, we awarded 31 scholarships, our most ever, selecting recipients from among more than 900 applicants who vied to be chosen in one of the nine categories offered. Scholarship recipients receive a Linux Foundation training course and certification exam at no cost. Two applicants were selected to receive a scholarship in each category, with the exception of ‘Open Source Newbies’, in which 15 applicants were selected.

“With the LiFT scholarship program, we strive to select a cohort of individuals that represent the future of software development and those who will utilize this opportunity to give back to not only the broader open source community, but also their local communities,” said Linux Foundation Executive Director Jim Zemlin. “Scholarship programs such as LiFT showcase the unlimited opportunities a single person can unlock for themselves and other aspiring developers when given access to do so.”

Read the inspiring stories of this year’s recipients at The Linux Foundation.


Celebrating 15 Years of the Xen Project and Our Future

In the 1990s, Xen was part of a research project to build a public computing infrastructure on the Internet led by Ian Pratt and Keir Fraser at the University of Cambridge Computer Laboratory. The Xen Project is now one of the most popular open source hypervisors, having amassed more than 10 million users, and this October marks our 15th anniversary.

From its beginnings, Xen technology focused on building a modular and flexible architecture, a high degree of customizability, and security. This security mindset from the outset led to the inclusion of non-core security technologies, which eventually allowed the Xen Project to excel outside of the data center and be a trusted source for security and embedded vendors (e.g., Qubes, Bromium, Bitdefender, Star Labs, Zentific, DornerWorks, Bosch, BAE Systems), and also a leading hypervisor contender for the automotive space.

As the Xen Project looks to a future of virtualization everywhere, we reflect back on some of our major achievements over the last 15 years. To celebrate, we’ve created an infographic that captures some of our key milestones — share it on social.

Read more at The Linux Foundation


Learn to Work with Bash, Linux, and Git

As technology infrastructure shifts ever more in the direction of open source, there is a rapidly growing need for open source skills. Use of open source software leads to better and faster development and wider collaboration, and open source skills are a very valuable form of currency in the job market. That’s why it’s worth checking out Introduction to Open Source Software Development, Git and Linux, an online training course from The Linux Foundation.

The course presents a comprehensive learning path focused on development, Linux systems and Git, the revision control system. It is self-paced and comes with extensive and easily referenced learning materials. Can this course arm you with Linux, development and Git skills that translate directly into value in the workplace and the job market? It absolutely can.

Laying the groundwork

This article is the second in a four-part article series that highlights the major aspects of the training course. The initial article covered the course’s general introduction to working with open source software, with a focus on such essentials as project collaboration, licensing, legal issues and getting help. With that groundwork laid, the course next delves into working with Bash, the standard shell for most Linux distributions.

In addition to comprehensive coverage of how to write effective Bash scripts, the course covers configuring Bash, setting aliases, Bash tips and tricks, and much more. There is also discussion of shell initialization and customizing the command prompt.

With these topics mastered, students will be able to not only perform basic tasks, but also perform basic customizations. One recommendation: the online course includes many summary slides, useful bullet lists that can be referenced later, graphics and more. It’s definitely worth setting up a desktop folder and regularly saving screenshots of especially useful topics to the folder, with simple names for the screenshots such as “CommandLine.jpg.”

Hands-on learning

The “Labs” modules prompt students to perform specific actions. For example, a Labs module might ask you to set the prompt to show the current directory and encourage you to follow up by changing the prompt to any other desired configuration. In addition to being self-paced, the course is very focused on getting students to perform meaningful tasks rather than simply reading or watching.

In the course’s discussion of aliases, students learn that aliases permit custom command definitions, and that typing alias with no arguments displays all predefined aliases. Working with redirection and pipes is also covered thoroughly, as is working with special characters and using them to perform specific actions (such as redirecting an input descriptor).
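A small Bash sketch of those ideas (the alias name and file path are just examples, not from the course):

```shell
# In scripts, alias expansion must be switched on explicitly
# (it is enabled by default only in interactive shells)
shopt -s expand_aliases

# Define an alias; typing plain "alias" would list all definitions
alias ll='ls -l'

# Redirection: send stdout to a file, then feed it back as stdin
echo "hello redirection" > /tmp/bash-demo.txt
tr 'a-z' 'A-Z' < /tmp/bash-demo.txt

# A pipe chains the output of one command into another
echo "one two three" | wc -w
```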

Before proceeding to more advanced topics, the course lays more basic groundwork, much of it focused on Linux. It comprehensively covers filesystem layout, partitions, paths and links, as well as the basics of working with text editors. The layout of the Linux filesystem is covered clearly, showing the main directories and their purposes.

Working with commands and command-line tools is, of course, an essential Linux skill, and the course proceeds by delving into task-based instruction on these topics. We will cover these important lessons in the next installment in this series.

Learn more about Introduction to Open Source Development, Git, and Linux (LFD201) and sign up now to start your open source journey.


To BeOS or not to BeOS, that is the Haiku

Back in 2001, a new operating system arrived that promised to change the way users worked with their computers. That platform was BeOS, and I remember it well. What I remember most about it was the desktop, and how much it looked and felt like my favorite window manager (at the time), AfterStep. I also remember how awkward and overly complicated BeOS was to install and use. In fact, upon installation, it was never entirely clear how to make the platform function well enough to use on a daily basis. That was fine, however, because BeOS seemed to live in a perpetual state of “alpha release.”

That was then. This is very much now.

Now we have haiku

Bringing BeOS to life

An AfterStep joy.

No, Haiku has nothing to do with AfterStep, but it fit perfectly with the haiku meter, so work with me.

The Haiku project released its R1 Alpha 4 six years ago. Back in September of 2018, it finally released its R1 Beta 1, and although it took them eons (in computer time), seeing Haiku installed (on a virtual machine) was worth the wait … even if only for the nostalgia. The big difference between R1 Beta 1 and R1 Alpha 4 (and BeOS, for that matter) is that Haiku now works like a real operating system. It’s lightning fast (and I do mean fast), it finally enjoys a modicum of stability, and it has a handful of useful apps. Before you get too excited, you’re not going to install Haiku and immediately become productive. In fact, the list of available apps is quite limited (more on this later). Even so, Haiku is definitely worth installing, even if only to see how far the project has come.

Speaking of which, let’s do just that.

Installing Haiku

The installation isn’t quite as point-and-click as that of the standard Linux distribution. That doesn’t mean it’s a challenge. It’s not; in fact, the installation is handled completely through a GUI, so you won’t even have to touch the command line.

To install Haiku, you must first download an image. Download this file into your ~/Downloads directory. This image will be in a compressed format, so once it’s downloaded you’ll need to decompress it. Open a terminal window and issue the command unzip ~/Downloads/haiku*.zip. A new directory will be created, called haiku-r1beta1XXX-anyboot (where XXX is the architecture for your hardware). Inside that directory you’ll find the ISO image to be used for installation.

For my purposes, I installed Haiku as a VirtualBox virtual machine. I highly recommend going the same route, as you don’t want to have to worry about hardware detection. Creating Haiku as a virtual machine doesn’t require any special setup (beyond the standard). Once the live image has booted, you’ll be asked if you want to run the installer or boot directly to the desktop (Figure 1). Click Run Installer to begin the process.

The next window is nothing more than a warning that Haiku is beta software and informing you that the installer will make the Haiku partition bootable, but doesn’t integrate with your existing boot menu (in other words, it will not set up dual booting). In this window, click the Continue button.

You will then be warned that no partitions have been found. Click the OK button, so you can create a partition table. In the remaining window (Figure 2), click the Set up partitions button.

In the resulting window (Figure 3), select the partition to be used and then click Disk > Initialize > GUID Partition Map. You will be prompted to click Continue and then Write Changes.

Select the newly initialized partition and then click Partition > Format > Be File System. When prompted, click Continue. In the resulting window, leave everything default and click Initialize and then click Write changes.

Close the DriveSetup window (click the square in the titlebar) to return to the Haiku Installer. You should now be able to select the newly formatted partition in the Onto drop-down (Figure 4).

After selecting the partition, click Begin and the installation will start. Don’t blink, as the entire installation takes less than 30 seconds. You read that correctly—the installation of Haiku takes less than 30 seconds. When it finishes, click Restart to boot your newly installed Haiku OS.

Usage

When Haiku boots, it’ll go directly to the desktop. There is no login screen (or even the means to log in). You’ll be greeted with a very simple desktop that includes a few clickable icons and what is called the Tracker (Figure 5).

The Tracker includes any minimized application and a desktop menu that gives you access to all of the installed applications. Left click on the leaf icon in the Tracker to reveal the desktop menu (Figure 6).

From within the menu, click Applications and you’ll see all the available tools. In that menu you’ll find the likes of:

  • ActivityMonitor (Track system resources)

  • BePDF (PDF reader)

  • CodyCam (allows you to take pictures from a webcam)

  • DeskCalc (calculator)

  • Expander (unpack common archives)

  • HaikuDepot (app store)

  • Mail (email client)

  • MediaPlay (play audio files)

  • People (contact database)

  • PoorMan (simple web server)

  • SoftwareUpdater (update Haiku software)

  • StyledEdit (text editor)

  • Terminal (terminal emulator)

  • WebPositive (web browser)

You will find, in the HaikuDepot, a limited number of available applications. What you won’t find are many productivity tools. Missing are office suites, image editors, and more. What we have with this beta version of Haiku is not a replacement for your desktop, but a view into the work the developers have put into giving the now-defunct BeOS new life. Chances are you won’t spend too much time with Haiku, beyond kicking the tires. However, this blast from the past is certainly worth checking out.

A positive step forward

Based on my experience with BeOS and the alpha of Haiku (all those years ago), the developers have taken a big, positive step forward. Hopefully, the next beta release won’t take as long and we might even see a final release in the coming years. Although Haiku won’t challenge the likes of Ubuntu, Mint, Arch, or Elementary OS, it could develop its own niche following. No matter its future, it’s good to see something new from the developers. Bravo to Haiku.

Your OS is prime

For a beta 2 release

Make it so, my friends.