
Must-Have Tools for Writers on the Linux Platform

I’ve been a writer for more than 20 years. I’ve written thousands of articles and how-tos on various technical topics and have penned more than 40 works of fiction. So, the written word is not only important to me, it’s familiar to the point of being second nature. And through those two decades (and counting) I’ve done nearly all my work on the Linux platform. I must confess, during those early years it wasn’t always easy. Formats didn’t always mesh with what an editor required and, in some cases, the open source platform simply didn’t have the tools required to get the job done.

That was then, this is now.

A perfect storm of Linux evolution and web-based tools has made it such that any writer can get the job done (and done well) on Linux. But what tools will you need? You might be surprised to find out that, in some instances, the job cannot be efficiently done with 100% open source tools. Even with that caveat, the job can be done. Let’s take a look at the tools I’ve been using as both a tech writer and author of fiction. I’m going to outline this by way of my writing process for both nonfiction and fiction (as the process is different and requires specific tools).

A word of warning to seriously hard-core Linux users. A long time ago, I gave up on using tools like LaTeX and DocBook for my writing. Why? Because, for me, the focus must be on the content, not the process. When you’re facing deadlines, efficiency must take precedence.

Nonfiction

We’ll start with nonfiction, as that process is the simpler of the two. For writing technical how-tos, I collaborate with different editors and, in some cases, have to copy/paste content into a CMS. But as with my fiction, the process always starts with Google Drive. This is the point at which many open source purists will check out. Fear not, you can always opt to either keep all of your files locally or use a more open-friendly cloud service (such as Zoho or Nextcloud).

Why start on the cloud? Over the years, I’ve found I need to be able to access that content from anywhere at any time. The simplest solution was to migrate to the cloud. I’ve also become paranoid about losing work. To that end, I make use of a tool like Insync to keep my Google Drive in sync with my desktop. With that desktop sync in place, I know there’s always a backup of my work, in case something should go awry with Google Drive.

For those clients with whom I must enter content into a Content Management System (CMS), the process ends there. I can copy/paste directly from a Google Doc into the CMS and be done with it. Of course, with technical content, there are always screenshots involved. For that, I use Gimp, which makes taking screenshots simple:

  1. Open Gimp.

  2. Click File > Create > Screenshot.

  3. Select from a single window, the entire screen, or a region to grab (Figure 1).

  4. Click Snap.

The majority of my clients tend to prefer I work with Google Docs, because I can share folders so that they have reliable access to the content. A few of my clients don’t work with Google Docs, so for them I must download the files in a format they can use. I download the document in .odt format, open it in LibreOffice (Figure 2), format it as needed, save it in the format the client requires, and send the document on.
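If you find yourself doing this kind of conversion often, LibreOffice can also handle the format change from the command line. Here’s a minimal sketch, assuming the libreoffice binary is on your path and using a placeholder file name:

libreoffice --headless --convert-to docx article.odt

The same command accepts wildcards (for example, *.odt), which is handy when a whole batch of documents needs converting at once.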

And that is the end of the line for nonfiction.

Fiction

This is where it gets a bit more complicated. The beginning steps are the same, as I always write every first draft of a novel in Google Docs. Once that is complete, I then download the file to my Linux desktop, open the file in LibreOffice, format as necessary, and then save as a file type supported by my editor (unfortunately, that means .docx).

The next step in the process gets a bit dicey. My editor prefers comments over track changes (it makes the document easier for both of us to read as we make changes). Because of this, a 60k-word doc can include hundreds upon hundreds of comments, which slows LibreOffice to a useless crawl. Once upon a time, you could increase the memory allotted to documents, but as of LibreOffice 6, that is no longer possible. This means any larger, novel-length document with numerous comments becomes unusable. Because of that, I’ve had to take drastic measures and use WPS Office (Figure 3). Although it isn’t an open source solution, WPS Office does a fine job with numerous comments in a document, so I can avoid the frustration LibreOffice causes with these large, comment-heavy files.

Once my editor and I finish up the edits for the book (and all comments have been removed), I can then open the file in LibreOffice for final formatting. When the formatting is complete, I save the file in .html format and then open the file in Calibre for exporting the file to .mobi and .epub formats.
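Calibre also ships a command-line tool, ebook-convert, that can handle this export step without opening the full GUI. A quick sketch with placeholder file names:

ebook-convert my-novel.html my-novel.epub

ebook-convert my-novel.html my-novel.mobi

Either way, the result is the same set of .epub and .mobi files, ready for the retailers.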

Calibre is a must-have for anyone looking to publish on Amazon, Barnes & Noble, Smashwords, or other platforms. One thing Calibre does better than other similar solutions is enable you to directly edit the .epub files (Figure 4). For the likes of Smashwords, this is an absolute necessity (as the export process can add elements not accepted by the Smashwords conversion tool).

After the writing process is over (or sometimes while waiting for an editor to complete a pass), I’ll start working on the cover for the book. That task is handled completely in Gimp (Figure 5).

And that finishes up the process of creating a work of fiction on the Linux platform. Because of the length of the documents, and how some editors work, it can get a bit more complicated than the process of creating nonfiction, but it’s far from challenging. In fact, creating fiction on Linux is just as simple as on other platforms (and more reliable).

HTH

I hope this helps aspiring writers to have the confidence to write on the Linux platform. There are plenty of other tools available to use, but the ones I have listed here have served me quite well over the years. And although I do make use of a couple of proprietary tools, as long as they keep working well on Linux, I’m okay with that.

Learn more about Linux in the Introduction to Open Source Development, Git, and Linux (LFD201) training course from The Linux Foundation, and sign up now to start your open source journey.


How to Move Files Using Linux Commands or File Managers

Learn how to move files with Linux commands in this tutorial from our archives.

There are certain tasks that are done so often, users take for granted just how simple they are. But then, you migrate to a new platform and those same simple tasks begin to require a small portion of your brain’s power to complete. One such task is moving files from one location to another. Sure, it’s most often considered one of the more rudimentary actions to be done on a computer. When you move to the Linux platform, however, you may find yourself asking “Now, how do I move files?”

If you’re familiar with Linux, you know there are always many routes to the same success. Moving files is no exception. You can opt for the power of the command line or the simplicity of the GUI – either way, you will get those files moved.

Let’s examine just how you can move those files about. First we’ll examine the command line.

Command line moving

One of the issues many users new to Linux face is the idea of having to use the command line. It can be somewhat daunting at first. Although modern Linux interfaces can help ensure you rarely have to use this “old school” tool, there is a great deal of power you would be missing if you ignored it altogether. The command for moving files is a perfect illustration of this.

The command to move files is mv. It’s very simple and one of the first commands you will learn on the platform. Instead of just listing out the syntax and the usual switches for the command – and then allowing you to do the rest – let’s walk through how you can make use of this tool.

The mv command does one thing – it moves a file from one location to another. This can be somewhat misleading, because mv is also used to rename files. How? Simple. Here’s an example. Say you have the file testfile in /home/jack/ and you want to rename it to testfile2 (while keeping it in the same location). To do this, you would use the mv command like so:

mv /home/jack/testfile /home/jack/testfile2

or, if you’re already within /home/jack:

mv testfile testfile2

The above commands would move /home/jack/testfile to /home/jack/testfile2 – effectively renaming the file. But what if you simply wanted to move the file? Say you want to keep your home directory (in this case /home/jack) free from stray files. You could move that testfile into /home/jack/Documents with the command:

mv /home/jack/testfile /home/jack/Documents/

With the above command, you have relocated the file into a new location, while retaining the original file name.

What if you have a number of files you want to move? Luckily, you don’t have to issue the mv command for every file. You can use wildcards to help you out. Here’s an example:

You have a number of .mp3 files in your ~/Downloads directory (~/ is an easy way to represent your home directory – in our earlier example, that would be /home/jack/) and you want them in ~/Music. You could quickly move them with a single command, like so:

mv ~/Downloads/*.mp3 ~/Music/

That command would move every file ending in .mp3 from the Downloads directory into the Music directory.
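Wildcards aren’t your only option. If the files don’t share a common pattern, you can list several of them in one command, as long as the final argument is the destination directory. For example (with made-up file names):

mv notes.txt outline.odt cover.png ~/Documents/

If you’re worried about overwriting something that already exists at the destination, add the -i option (mv -i) and mv will prompt you before replacing any file.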

Should you want to move a file into the parent directory of the current working directory, there’s an easy way to do that. Say you have the file testfile located in ~/Downloads and you want it in your home directory. If you are currently in the ~/Downloads directory, you can move it up one folder (to ~/) like so:

mv testfile ../ 

The “../” refers to the directory one level up, so the file moves into the parent folder. If you’re buried deeper, say in ~/Downloads/today/, you can still easily move that file with:

mv testfile ../../

Just remember, each “../” represents one level up.

As you can see, moving files from the command line isn’t difficult at all.

GUI

There are a lot of GUIs available for the Linux platform. On top of that, there are a lot of file managers you can use. The most popular file managers are Nautilus (GNOME) and Dolphin (KDE). Both are very powerful and flexible. I want to illustrate how files are moved using the Nautilus file manager (on the Ubuntu 13.10 distribution, with Unity as the interface).

Nautilus has probably the most efficient means of moving files about. Here’s how it’s done:

  1. Open up the Nautilus file manager.

  2. Locate the file you want to move and right-click said file.

  3. From the pop-up menu (Figure 1) select the “Move To” option.

  4. When the Select Destination window opens, navigate to the new location for the file.

  5. Once you’ve located the destination folder, click Select.


This context menu also allows you to copy the file to a new location, move the file to the Trash, and more.

If you’re more of a drag and drop kind of person, fear not – Nautilus is ready to serve. Let’s say you have a file in your home directory and you want to drag it to Documents. By default, Nautilus will have a few bookmarks in the left pane of the window. You can drag the file onto the Documents bookmark without having to open a second Nautilus window. Simply click, hold, and drag the file from the main viewing pane to the Documents bookmark.

If, however, the destination for that file is not listed in your bookmarks (or doesn’t appear in the current main viewing pane), you’ll need to open a second Nautilus window. Side by side, you can then drag the file from the source folder in the original window to the destination folder in the second window.

If you need to move multiple files, you’re still in luck. As in nearly every modern user interface, you can select multiple files by holding down the Ctrl key as you click each file. After you have selected each file (Figure 2), you can either right-click one of the selected files and then choose the Move To option, or simply drag and drop them into a new location.


The selected files (in this case, folders) will each be highlighted.

Moving files on the Linux desktop is incredibly easy. Either with the command line or your desktop of choice, you have numerous routes to success – all of which are user-friendly and quick to master.


Blockchain as a Catalyst for Good

Blockchain and its ability to “embed trust” can help restore trust, which is currently at a low point, according to Sally Eaves, a chief technology officer and strategic advisor to the Forbes Technology Council, speaking at The Linux Foundation’s Open FinTech Forum in New York City.

People’s trust in business, media, government and non-government organizations (NGOs) is at a 17-year low, and businesses are suffering as a result, Eaves said.

Additionally, Eaves said, 87 percent of millennials believe business success should be measured in more than just financial performance. People want jobs with real meaning and purpose, she added.

To provide further context, Eaves noted the following urgent global challenges:

  • 1.5 billion people cannot prove their identity (which has massive implications in not just banking but education as well)
  • 2 billion people worldwide do not have a bank account or access to a financial institution
  • Identity fraud is estimated to cost the UK millions of euros annually.

Read more at The Linux Foundation


gRPC Load Balancing on Kubernetes without Tears

Many new gRPC users are surprised to find that Kubernetes’s default load balancing often doesn’t work out of the box with gRPC. For example, here’s what happens when you take a simple gRPC Node.js microservices app and deploy it on Kubernetes:

While the voting service displayed here has several pods, it’s clear from Kubernetes’s CPU graphs that only one of the pods is actually doing any work—because only one of the pods is receiving any traffic. Why?

In this blog post, we describe why this happens, and how you can easily fix it by adding gRPC load balancing to any Kubernetes app with Linkerd, a CNCF service mesh and service sidecar.

Read more at Kubernetes Blog



RISC-V Linux Development in Full Swing

Most Linux users have heard about the open source RISC-V ISA and its potential to challenge proprietary Arm and Intel architectures. Most are probably aware that some RISC-V based CPUs, such as SiFive’s 64-bit Freedom U540 found on its HiFive Unleashed board, are designed to run Linux. What may come as a surprise, however, is how quickly Linux support for RISC-V is evolving.

“This is a good time to port Linux applications to RISC-V,” said Comcast’s Khem Raj at an Embedded Linux Conference Europe presentation last month. “You’ve got everything you need. Most of the software is upstream so you don’t need forks,” he said.

By adopting an upstream first policy, the RISC-V Foundation is accelerating Linux-on-RISC-V development both now and in the future. Early upstreaming helps avoid forked code that needs to be sorted out later. Raj offered specifics on different levels of RISC-V support from the Linux kernel to major Linux distributions, as well as related software from Glibc to U-Boot (see farther below).

The road to RISC-V Linux has also been further accelerated thanks to the enthusiasm of the Linux open source community. Penguinistas see the open source computing architecture as a continuation of the mission of Linux and other open source projects. Since IoT is an early RISC-V target, the interest is particularly keen in the open source Linux SBC community. The open hardware movement recently expanded to desktop PCs with System76’s Ubuntu-driven Thelio system.

Processors remain the biggest exception to open hardware. RISC-V is a step in the right direction for CPUs, but RISC-V lacks a spec for graphics, which, with the rise of machine vision, edge AI, and multimedia applications, is becoming increasingly important in embedded systems. There’s progress on this front as well, with an emerging project to create an open RISC-V based GPU called Libre RISC-V. More details can be found in this Phoronix story.

SiFive launches new Linux-driven U74 core designs

RISC-V is also seeing new developments on the CPU front. Last week, SiFive, which is closely associated with the UC Berkeley team that developed the architecture, announced a second generation of RISC-V CPU core designs called the IP 7 Series. The IP 7 Series features the Linux-friendly U74 and U74-MC chips. These quad-core, Cortex-A55-like processors, which should appear in SoCs in 2019, are faster and more power efficient than the U540.

The new U74 chips will support future SoC designs of up to eight cores that mix and match the U74 cores with SiFive’s new next-gen MCU chips: the Cortex-M7-like E76 and the Cortex-R8-like S76. The U74-MC model even features its own built-in S7 MCU for real-time processing.

Although much of the early RISC-V business has been focused on MCUs, SiFive is not alone in building Linux-driven RISC-V designs. Earlier this summer, the Shakti Project, backed by the Indian government, demonstrated Linux booting on a homegrown 400MHz Shakti RISC-V processor.

A snapshot of Linux support for RISC-V

In his ELC presentation, called “Embedded Linux on RISC-V Architecture — Status Report,” Raj, who is an active contributor to RISC-V as well as the OpenEmbedded and Yocto projects, revealed the latest updates for RISC-V support in the Linux kernel and related software. The report has a rather short shelf life, admitted Raj: “The software is developing very fast so what I say today may be obsolete tomorrow — we’ve already seen a lot of basic tools, compilers, and toolchain support landing upstream.”

Raj started with a brief overview of RISC-V, explaining how it supports 32-, 64-, and even future 128-bit instruction sets. Attached to these versions are extensions such as integer multiply/divide, atomic memory access, floating point single and double precision, and compressed.  

The initial Linux kernel support adopts the most commonly used profile for Linux: RV64GC (LP64 ABI). The G and the C at the end of the RV64 name stand for general-purpose and compressed, respectively.

RISC-V has had a stable ABI (application binary interface) in the upstream Linux kernel since release 4.15. According to Raj, the recent 4.19 release added QEMU virt board drivers “thanks to major contributions from UC Berkeley, SiFive, and Andes Technology.”

You can now run many other Linux-related components on a SiFive U540 chip, including binutils 2.28, gcc 7.0, glibc 2.27 and 2.28 (32-bit), and newlib 3.0 (for bare metal bootstrapping). For the moment, gdb 8.2 is available only for bare-metal development.

In terms of bootloaders, Coreboot offered early support, and U-Boot 2018.11 recently added RISC-V virt board support upstream. PK/BBL is now upstream on the RISC-V GitHub page.

The OpenEmbedded/Yocto Project (OE/Yocto) was the first official Linux development platform port, with core support upstreamed with the 2.5 release. Among full-fledged Linux distributions, Fedora is the farthest along. Fedora, which has done a lot of the “initial heavy lifting,” finished its bootstrap back in March, said Raj. In addition, its “Koji build farm is turning out RISC-V RPMs like any other architecture,” he added. Fedora 29 (Rawhide) offers specific support for the RISC-V version of QEMU.

Debian still lacks a toolchain for cross-build development on RISC-V, but it’s already possible, said Raj. Buildroot now has a 64-bit RISC-V port, and a 32-bit port was recently submitted.

Raj went on to detail RISC-V porting progress for the LLVM compiler and the Musl C library. Further behind, but in full swing, are ports for OpenOCD, UEFI, Grub, V8, Node.js, Rust, and Golang, among others. For the latest details, see the RISC-V software status page, as well as other URLs displayed toward the end of Raj’s ELC video below.

[Video: “Embedded Linux on RISC-V Architecture — Status Report,” Khem Raj, Embedded Linux Conference Europe]


GraphQL Gets Its Own Foundation

Addressing the rapidly growing user base around GraphQL, The Linux Foundation has launched the GraphQL Foundation to build a vendor-neutral community around the query language for APIs (application programming interfaces).

“Through the formation of the GraphQL Foundation, I hope to see GraphQL become industry standard by encouraging contributions from a broader group and creating a shared investment in vendor-neutral events, documentation, tools, and support,” said Lee Byron, co-creator of GraphQL, in a statement.

“GraphQL has redefined how developers work with APIs and client-server interactions,” said Chris Aniszczyk, Linux Foundation vice president of developer relations…

Read more at The New Stack


KubeCon + CloudNativeCon

The Cloud Native Computing Foundation’s flagship conference gathers adopters and technologists from leading open source and cloud native communities in Barcelona, Spain from May 20-23, 2019. Join Kubernetes, Prometheus, OpenTracing, Fluentd, gRPC, containerd, rkt, CNI, Envoy, Jaeger, Notary, TUF, Vitess, CoreDNS, NATS, Linkerd and Helm as the community gathers for four days to further the education and advancement of cloud native computing.


Advance Your Open Source Skills with These Essential Articles, Videos, and More

Recent industry events have underscored the strength of open source in today’s computing landscape. With billions of dollars being spent, the power of open source development, collaboration, and organization seems unstoppable.

Toward that end, we recently provided an array of articles, videos, and other resources to meet you where you are on your open source journey and help you master the basics, improve your skills, or explore the broader ecosystem. Let’s take a look.

To start, we provided some Linux basics in our two-part series exploring Linux links.

Then, we covered some basic tools for open source logging and monitoring.

We also took an in-depth look at the Introduction to Open Source, Git, and Linux training course from The Linux Foundation. This course presents a comprehensive learning path focused on development, Linux systems, and the Git revision control system. The $299 course is self-paced and comes with extensive and easily referenced learning materials. You can get a preview of the course curriculum in the four-part series by Sam Dean.

As the default compiler for the Linux kernel, the GNU Compiler Collection (GCC) delivers trusted, stable performance along with the additional extensions needed to correctly build the kernel. We took a closer look at this vital tool in a recent whitepaper.

Security is another vital component of Linux. In this video interview, Linux kernel maintainer Greg Kroah-Hartman provides a glimpse into how the kernel community deals with vulnerabilities.

Along with all these articles, we also recently published videos from some of our October events. Follow the links below to watch complete keynote and technical session presentations from Open Source Summit, Linux Security Summit, and Open FinTech Forum.

  • Check out 90+ sessions from Open Source Summit Europe & ELC + OpenIoT Summit Europe.

  • These 21 videos from Linux Security Summit Europe provide an overview of recent kernel development.

  • The 9 keynote videos from Open FinTech Forum cover cutting-edge open source technologies including AI, blockchain, and Kubernetes.

Stay tuned for more event coverage and essential open source resources.


17 Fun Linux Commands to Run in the Terminal

The terminal is a very powerful tool, and it’s probably the most interesting part of Unix. Among the plethora of useful commands and scripts you can use, some seem less practical, if not completely useless. Here are some Bash commands that are fun, and some of them are useful as well.

The first command, oneko, adds some spice to your terminal by putting a cat on your screen that will chase after your mouse cursor.
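The original install script isn’t reproduced here, but on most Debian- and Ubuntu-based distributions the package is simply named oneko (the package name may vary on other distributions):

sudo apt install oneko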

Type oneko to display the cat.


Figlet is a command for those who love to write in ASCII art. It greatly simplifies this task as it automatically transforms any given string. It comes with a bunch of fonts by default at “/usr/share/figlet/fonts/,” and you can of course add your own.
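Assuming figlet is installed, usage is as simple as passing it a string, with the -f option selecting one of the fonts from the directory mentioned above (slant is one of the stock fonts):

figlet "Hello, Linux"

figlet -f slant "Hello, Linux"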

Read more at MakeTechEasier

Click Here!


How to Work with Git and GitHub

Enterprises of all sizes are reporting dramatic and widening gaps in Linux and open source skills. Meanwhile, Linux tops the list as the most in-demand open source skill, according to the 2018 Open Source Jobs Report. In this article series, we are taking a closer look at one of the best new ways to gain open source and Linux fluency: the Introduction to Open Source Software Development, Git and Linux training course from The Linux Foundation.

This article is the final one in a four-part article series that highlights the major aspects of the training course, in chronological order. The initial article in the series covered the course’s general introduction to working with open source software, with a focus on such essentials as project collaboration, licensing, legal issues and getting help. With that groundwork laid, the course delves into working with Bash, the standard shell for most Linux distributions. The second article covered the course curriculum dedicated to working with Bash and Linux basics. The third article covered working with the command line as well as command-line tools. Here we will look at the course’s extensive content on working with Git and GitHub.

Working with Git is, of course, essential for working with open source in today’s environment, especially if you will be collaborating with others. Git is a distributed version control system that makes collaborating on projects easy, while at the same time minimizing version-related errors and unwanted duplication of effort. Once you are working with Git, you can also leverage GitHub, a valuable hosting service where teams can house their projects, access and update code, and more.
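To give a sense of what that looks like in practice, here is a minimal sketch of a typical Git workflow (the repository URL, branch name, and file name are placeholders, not examples taken from the course itself):

git clone https://github.com/example/project.git
cd project
git checkout -b my-feature
git add README.md
git commit -m "Describe the change"
git push origin my-feature

Each of these steps (cloning, branching, staging, committing, and pushing) is covered in depth in the chapter list below.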

The course covers Git as well as working with GitHub, and also notes that there are alternatives to GitHub that are worth knowing about.

Why are Git and GitHub essentials important?

Git began as an offshoot of the Linux kernel development community, initially created by Linus Torvalds himself. However, people quickly realized that it could be used for any project that had collaborative needs. The course comprehensively covers Git essentials as they apply to collaborating on projects. In focusing on GitHub, it notes that collaborators can designate hosted projects as public or private, and that public repositories are free of charge.

The course devotes 11 chapters to installing, using, and working with Git, covering the following topics:

  • Git Installation

  • Git and Revision Control Systems

  • Using Git: An Example

  • Git Concepts and Architecture

  • Managing Files and the Index

  • Commits

  • Branches

  • Diffs

  • Merges

  • Managing Local and Remote Repositories

  • Using Patches

As is true throughout the Introduction to Open Source Software Development, Git and Linux training course, there are Labs modules that encourage students to get hands-on experience with Git and GitHub. An initial module guides students through creating a GitHub account that can go on to be used for working with open source projects over time.

In this part of the course, the focus is very much on applying Git and GitHub skills to collaborative project management and tasks. As students go through these lessons, they should keep in mind that the online course includes many summary slides, useful bullet lists, graphics, and more. It’s definitely worth setting up a desktop folder and regularly saving screenshots of especially useful topics to the folder.

Are you interested in advancing your open source skills? If so, this training course can help. Learn more about the Introduction to Open Source Development, Git, and Linux (LFD201) course and sign up now to start your open source journey.