Three Graphical Clients for Git on Linux

Those who develop on Linux are likely familiar with Git, and with good reason: Git is one of the most widely used and recognized version control systems on the planet. And for most, Git use tends to lean heavily on the terminal. After all, much of your development probably occurs at the command line, so why not interact with Git in the same manner?

In some instances, however, having a GUI tool to work with can make your workflow slightly more efficient (at least for those who tend to depend upon a GUI). To that end, what options do you have for Git GUI tools? Fortunately, we found some that are worthy of your time and (in some cases) money. I want to highlight three such Git clients that run on the Linux operating system. Out of these three, you should be able to find one that meets all of your needs.
I am going to assume you understand how Git and repositories like GitHub function, which I covered previously, so I won’t be taking the time for any how-tos with these tools. Instead, this will be an introduction, so you (the developer) know these tools are available for your development tasks.

A word of warning: Not all of these tools are free, and some are released under proprietary licenses. However, they all work quite well on the Linux platform and make interacting with GitHub a breeze.

With that said, let’s look at some outstanding Git GUIs.

SmartGit

SmartGit is a proprietary tool that’s free for non-commercial usage. If you plan on employing SmartGit in a commercial environment, the license cost is $99 USD per year for one license or $5.99 per month. There are other upgrades (such as Distributed Reviews and SmartSynchronize), which are both $15 USD per license. You can download either the source or a .deb package for installation. I tested SmartGit on Ubuntu 18.04 and it worked without issue.

But why would you want to use SmartGit? There are plenty of reasons. First and foremost, SmartGit makes it incredibly easy to integrate with the likes of GitHub and Subversion servers. Instead of spending your valuable time attempting to configure the GUI to work with your remote accounts, SmartGit takes the pain out of that task. The SmartGit GUI (Figure 1) is also very well designed to be uncluttered and intuitive.

After installing SmartGit, I had it connected with my personal GitHub account in seconds. The default toolbar makes working with a repository incredibly simple. Push, pull, check out, merge, add branches, cherry-pick, revert, rebase, reset — all of Git’s most popular features are there to use. Outside of supporting most of the standard Git and GitHub functions/features, SmartGit is very stable. At least when using the tool on the Ubuntu desktop, you feel like you’re working with an application that was specifically designed and built for Linux.

SmartGit is probably one of the best tools that makes working with even advanced Git features easy enough for any level of user. To learn more about SmartGit, take a look at the extensive documentation.

GitKraken

GitKraken is another proprietary GUI tool that makes working with both Git and GitHub an experience you won’t regret. Where SmartGit has a very simplified UI, GitKraken has a beautifully designed interface that offers a bit more in the way of readily available features. There is a free version of GitKraken available (and you can test the full-blown paid version with a 15-day trial period). After the trial period ends, you can continue using the free version, but for non-commercial use only.

For those who want to get the most out of their development workflow, GitKraken might be the tool to choose. This particular take on the Git GUI features the likes of visual interactions, resizable commit graphs, drag and drop, seamless integration (with GitHub, GitLab, and BitBucket), easy in-app tasks, in-app merge tools, fuzzy finder, gitflow support, 1-click undo & redo, keyboard shortcuts, file history & blame, submodules, light & dark themes, git hooks support, git LFS, and much more. But the one feature that many users will appreciate the most is the incredibly well-designed interface (Figure 2).

Outside of the amazing interface, one of the things that sets GitKraken above the rest of the competition is how easy it makes working with multiple remote repositories and multiple profiles. The one caveat to using GitKraken (besides it being proprietary) is the cost. If you’re looking at using GitKraken for commercial use, the license costs are:

  • $49 per user per year for individual

  • $39 per user per year for 10+ users

  • $29 per user per year for 100+ users

The Pro accounts allow you to use both the Git Client and the Glo Boards (which is the GitKraken project management tool) commercially. The Glo Boards are an especially interesting feature as they allow you to sync your Glo Board to GitHub Issues. Glo Boards are sharable and include search & filters, issue tracking, markdown support, file attachments, @mentions, card checklists, and more. All of this can be accessed from within the GitKraken GUI.
GitKraken is available for Linux as either an installable .deb file or source.

Git Cola

Git Cola is our free, open source entry in the list. Unlike both GitKraken and SmartGit, Git Cola is a pretty bare-bones, no-nonsense Git client. Git Cola is written in Python with a Qt interface, so no matter what distribution and desktop combination you use, it should integrate seamlessly. And because it’s open source, you should find it in your distribution’s package manager. So installation is nothing more than a matter of opening your distribution’s app store, searching for “Git Cola” and installing. You can also install from the command line like so:

sudo apt install git-cola

Or:

sudo dnf install git-cola

The Git Cola interface is pretty simple (Figure 3). In fact, you won’t find many bells and whistles, as Git Cola is all about the basics.

Because of Git Cola’s return to basics, there will be times when you must interface with the terminal. However, for many Linux users this won’t be a deal breaker (as most are developing within the terminal anyway). Even so, Git Cola covers the core features you would expect from a Git client.

Although Git Cola does support connecting to remote repositories, the integration with the likes of GitHub isn’t nearly as intuitive as it is in either GitKraken or SmartGit. But if you’re doing most of your work locally, Git Cola is an outstanding tool that won’t get in between you and Git.

Git Cola also comes with an advanced DAG (directed acyclic graph) visualizer, called Git DAG. This tool allows you to get a visual representation of your branches. You start Git DAG either separately from Git Cola or within Git Cola from the View > DAG menu entry. Git DAG is a very powerful tool, which helps make Git Cola one of the top open source Git GUIs on the market.
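
If you want to compare what Git DAG draws against the raw repository data, the rough terminal equivalent (plain Git, nothing specific to Git Cola) is:

git log --graph --oneline --decorate --all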

There’s more where that came from

There are plenty more Git GUI tools available. However, with these three tools, you can do some serious work. Whether you’re looking for a tool with all the bells and whistles (regardless of license) or you’re a strict GPL user, one of these should fit the bill.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

What is Ethereum?

Ethereum is a blockchain protocol that includes a programming language which allows applications, called contracts, to run within the blockchain. Initially described in a white paper by its creator, Vitalik Buterin, in late 2013, Ethereum was created as a platform for the development of decentralized applications that can do more than make simple coin transfers.

How does it work?

Ethereum is a blockchain. In general, a blockchain is a chain of data structures (blocks) that contains information such as account ids, balances, and transaction histories. Blockchains are distributed across a network of computers; the computers are often referred to as nodes.

Cryptography is a major part of blockchain technology. Cryptographic algorithms like RSA and ECDSA are used to generate public and private keys that are mathematically coupled. Public keys, or addresses, and private keys allow people to make transactions across the network without involving any personal information like name, address, date of birth, etc. These keys and addresses are often called hashes and are usually a long string of hexadecimal characters.

[Image: Example of an RSA-generated public key]

Blockchains have a public ledger that keeps track of all transactions that have occurred since the first (“genesis”) block. At a minimum, a block includes its own hash, the hash of the previous block, and some data. Nodes across the network work to verify transactions and add them to the public ledger. In order for a transaction to be considered legitimate, there must be consensus.

Consensus means that the transaction is considered valid by the majority of the nodes in the network. There are four main algorithms used to achieve consensus across a distributed blockchain network: Byzantine fault tolerance, proof-of-work, proof-of-stake, and delegated proof-of-stake. Chris explains them well in his post.

Attempting to make even the slightest alteration to the data in a block will change the hash of the block and will therefore be noticed by the entire network. This makes blockchains immutable and append-only. A transaction can only be added at the end of the chain, and once a transaction is added to a block there can be no changes made to it.
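
As a rough illustration of that tamper-evidence, here is a minimal Python sketch (not how any real blockchain client is implemented) in which each block stores the previous block’s hash, so altering earlier data breaks every later link:

import hashlib, json

def block_hash(block):
    # Hash a block's entire contents, including the previous block's hash
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Build a tiny three-block chain
chain = [{"prev": "0" * 64, "data": "genesis"}]
for data in ["Alice pays Bob 5", "Bob pays Carol 2"]:
    chain.append({"prev": block_hash(chain[-1]), "data": data})

# Tamper with an earlier block: the stored link no longer matches its hash
chain[1]["data"] = "Alice pays Bob 5000"
print(chain[2]["prev"] == block_hash(chain[1]))  # False -- the network would notice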

Accounts

Users of Ethereum control an account that has a cryptographic private key with a corresponding Ethereum address. If Alice, for example, wants to send Bob 1,000 ETH (ETH, or Ether, is Ethereum’s money), she needs Bob’s Ethereum address so she knows where to send it, and Bob needs to use the private key that corresponds to that address in order to receive the 1,000 ETH.

Ethereum has two types of accounts: accounts that users control and contracts (or “smart contracts”). Accounts that users control, like Alice’s and Bob’s, primarily serve for ETH transfers. Just about every blockchain system has this type of account that can make money transfers. But what makes Ethereum special is the second type of account: the contract.

Contract accounts are controlled by a piece of code (an application) that is run inside the blockchain itself.

“What do you mean, inside the blockchain?”

EVM

Ethereum has a virtual machine, called the EVM (Ethereum Virtual Machine). This is where contracts get executed. The EVM includes a stack (~processor), temporary memory (~RAM), storage space for permanent memory (~disk/database), environment variables (~system information, e.g., timestamp), logs, and sub-calls (you can call a contract within a contract).

An example contract might look like this:

if (something happens): send 1,000 ETH to Bob (addr: 22982be234)
else if (something else happens): send 1,000 ETH to Alice (addr: bbe4203fe)
else: don't send any ETH

If a user sends 1,000 ETH to this account (the contract), then the code in this account is the only thing that has power to transfer that ETH. It’s kind of like an escrow. The sender no longer has control over the 1,000 ETH. The digital assets are now under the control of a computer program and will be moved depending on the conditional logic of the contract.

Is it free?

No. The execution of contracts occurs within the blockchain, therefore within the Ethereum Network. Contracts take up storage space, and they require computational power. So Ethereum uses something called gas as a unit of measurement of how much something costs the Network. The price of gas is voted on by the nodes, and the fees users pay in gas go to the miners.

Miners

Miners are people using computers to do the computations required to validate transactions across the Network and add new blocks to the chain.

Mining works like this: when a block of transactions is ready to be added to the chain, miners use computer processing power to find hashes that match a specific target. When a miner finds the matching hash, she is rewarded with ETH and broadcasts the new block across the network. The other nodes verify the matching hash and, if there is consensus, the block is added to the chain.
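
A toy version of that search might look like the following Python sketch. (This is purely illustrative; the difficulty here is just a count of leading zeros, and real Ethereum mining used a different algorithm, Ethash, with far harder targets.)

import hashlib

def mine(block_data, difficulty=4):
    # Find a nonce so that the block's hash starts with `difficulty` leading zeros
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest  # the winning nonce/hash gets broadcast to the network
        nonce += 1

nonce, digest = mine("Alice pays Bob 1,000 ETH")
print(nonce, digest)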

What’s inside a block?

Within an Ethereum block are two things: the state and the history. The state is a mapping of addresses to account objects. Each account object includes:

  • ETH balance
  • nonce **
  • the contract’s source code (if the account is a contract)
  • contract storage (database)

** A nonce is a counter that prevents a transaction from being replayed over and over, which could otherwise take more ETH from a sender than intended.
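
In other words, the state behaves roughly like a map from addresses to account records. Here is a deliberately simplified Python sketch using the toy addresses from the contract example above (real clients store the state in far more elaborate structures, such as Merkle Patricia tries):

from dataclasses import dataclass, field

@dataclass
class Account:
    balance: int = 0                               # ETH balance
    nonce: int = 0                                 # counter that prevents replayed transactions
    code: str = ""                                 # contract code; empty for user-controlled accounts
    storage: dict = field(default_factory=dict)    # contract storage (database)

# The state: a mapping of addresses to account objects
state = {
    "22982be234": Account(balance=1_000),   # Bob's user-controlled account
    "bbe4203fe": Account(balance=500),      # Alice's user-controlled account
}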

Blocks also store history: records of previous transactions and receipts.

State and history are stored in each node (each member of the Ethereum Network). Having each node contain the history of Ethereum transactions and contract code is great for security and immutability, but it can be hard to scale. A blockchain cannot process more transactions than a single node can. Because of this, Ethereum is limited to roughly 7–15 transactions per second. The protocol is adopting sharding, a technique that essentially breaks up the chain into smaller pieces while still aiming to provide the same level of security.

Transactions

Every transaction specifies a TO: address. If the TO: is a user-controlled account, and the transaction contains ETH, it is considered a transfer of ETH from account A to account B. If the TO: is a contract, then the code of the contract gets executed. The execution of a contract can result in further transactions, even calls to contracts within a contract, an event known as an inter-transaction.

But contracts don’t always have to be about transferring ETH. Anyone can create an application with any rules by defining it as a contract.

Who is using Ethereum?

Ethereum is currently being used mostly by cryptocurrency traders and investors, but there is a growing community of developers who are building dapps (decentralized applications) on the Ethereum Network.

There are thousands of Ethereum-based projects being developed as we speak. Some of the most popular dapps are games (e.g., CryptoKitties and CryptoTulips).

How is Ethereum different than bitcoin?

Bitcoin is a blockchain technology where users are assigned a private key, linked with a wallet that generates bitcoin addresses to which people can send bitcoins. It’s all about the coins. It’s a way to exchange money in an encrypted, decentralized environment.

Ethereum not only lets users exchange money like bitcoin does, but it also has programming languages that let people build applications (contracts) that are executed within the blockchain.

Bitcoin uses proof of work as a means of achieving consensus across the network, whereas Ethereum uses proof of stake.

Ethereum’s creator is public (Vitalik Buterin). Bitcoin’s is unknown (the creator goes by the alias Satoshi Nakamoto).

Other blockchains that do contracts

There are other blockchain projects that allow the creation of contracts. Here is a brief description of what they are and how they are different than Ethereum:

Neo — faster transaction speeds, inability to fork, less energy use, has two tokens (NEO and GAS), will be quantum resistant.

Icon — uses loopchain to connect blockchain-based communities around the world.

Nem — contract code is stored outside of the blockchain resulting in a lighter and faster network.

Ethereum Classic — a continuation of the original Ethereum blockchain (before it was forked)

Conclusion

Ethereum is a rapidly growing blockchain protocol that allows people not only to transfer assets to each other, but also to create decentralized applications that run securely on a distributed network of computers.

DARPA Drops $35 Million on “Posh Open Source Hardware” Project

The U.S. Defense Advanced Research Projects Agency (DARPA) announced the first grants for its Electronics Resurgence Initiative (ERI). The initial round, which will expand to $1.5 billion over five years, covers topics ranging from automating EDA to optimizing chips for SDR to improving NVM performance. Of particular interest is a project called POSH (Posh Open Source Hardware), which intends to create a Linux-based platform and ecosystem for designing and verifying open source IP hardware blocks for next-generation system-on-chips.

The first funding recipients were announced at DARPA’s ERI Summit this week in San Francisco. As reported in IEEE Spectrum, the recipients are working out of R&D labs at major U.S. universities and research institutes, as well as companies like Cadence, IBM, Intel, Nvidia, and Qualcomm.

Most of the projects are intended to accelerate the development of complex, highly customized SoCs. ERI is motivated by two trends in chip design. First, as Moore’s Law roadmap slows to a crawl, SoC designers are depending less on CPUs and more on a growing profusion of GPUs, FPGAs, neural chips, and other co-processors, thereby adding to complexity. Second, we’re seeing a greater diversity of applications ranging from cloud-based AI to software defined networking to the Internet of Things. Such divergent applications often require highly divergent mixes of processors, including novel chips like neural net accelerators.

DARPA envisions the tech world moving toward a wider variety of SoCs with different mixes of IP blocks, including highly customized SoCs for specific applications. With today’s semiconductor design tools, however, such a scenario would bog down in spiraling costs and delays. ERI plans to speed things up.

Here are some brief summaries of the projects followed by a closer look at POSH:

  • IDEA — This EDA project is based primarily on work by David White at Cadence, which received $24.1 million of the total IDEA funding. The immediate goal is to create a layout generator that would enable users with even limited electronic design expertise to complete the physical design of electronic hardware such as a single board computer within 24 hours. A larger goal is to enable the automated EDA system to capture the expertise of designers using it.
  • Software Defined Hardware (SDH) — SDH aims to develop hardware and software that can be reconfigured in real time based on the kind of data being processed. The goal is to design chips that can reconfigure their workload in a matter of milliseconds. Stephen Keckler at Nvidia is leading the project, which received $22.7 million in funding.
  • Domain-Specific System on Chip (DSSoC) — Like the closely related SDH project, the DSSoC project is inspired by software defined radio (SDR). The project is working with the GNU Radio Foundation to look at the needs of SDR developers as the starting point for developing an ideal SDR SoC.
  • 3DSoC — This semiconductor materials and integration project is based largely on MIT research from Max Shulaker, who received $61 million. The project is attempting to grow multiple layers of interconnected circuitry atop a CMOS base to prove that a monolithic 3D system built on a more affordable 90nm process can compete with chips made on more advanced processes.
  • Foundations Required for Novel Compute (FRANC) — FRANC is looking to improve the performance of NVM memories such as embedded MRAM with a goal of enabling “emerging memory-centric computing architectures to overcome the memory bottleneck presented in current von Neumann computing.”

POSH boosts open hardware with verification

The POSH project received over $35 million in funding spread out among a dozen researchers. The biggest grants, ranging from about $6 million to $7 million, went to Eric Keiter (Sandia National Labs), Alex Rabinovitch (Synopsys), Tony Levi, and Clark Barrett (Stanford and SiFive).

As detailed in a July 18 interview in IEEE Spectrum with DARPA ERI director Bill Chappell, proprietary licensing can slow down development, especially when it comes to building complex, highly customized SoCs. If SoC designers could cherry pick verified, open source hardware blocks with the same ease that software developers can download software from GitHub today, it could significantly reduce development time and cost. In addition, open source can speed and improve hardware testing, which can be time-consuming when limited to engineers working for a single chipmaker.

POSH is not intended as a new open source processor architecture such as RISC-V. DARPA has helped fund RISC-V, and as noted, POSH funding recipient Barrett is part of the leadership team at RISC-V chip leader SiFive.

POSH is defined as “an open source SoC design and verification ecosystem that will enable the cost effective design of ultra-complex SoCs.” In some ways, POSH is the hardware equivalent of projects such as Linaro and Yocto, which verify, package, and update standardized software components for use by open source developers. As Chappell put it in the IEEE interview, POSH intends to “create a foundation of building blocks where we have full understanding and analysis capability as deep as we want to go to understand how these blocks are going to work.”

The ERI funding announcement quotes POSH and IDEA project leader Andreas Olofsson as saying: “Through POSH, we hope to eliminate the need to start from scratch with every new design, creating a verified foundation to build from while providing deeper assurance to users based on the open source inspection process.”

POSH is focusing primarily on streamlining and unifying the verification process, which, along with design, is by far a leading cost of SoC development. Currently, very few high-quality, verified hardware IP blocks are openly available, and it’s difficult to tell them apart from the blocks that aren’t.

“You’re not going to bet $100 [million] to $200 million on a block that was maybe built by a university or, even if it was from another industrial location, [if] you don’t really know the quality of it,” Chappell told IEEE Spectrum. “So you have to have a methodology to understand how good something is at a deep level before it’s used.”

According to Chappell, open source hardware is finally starting to take off because of the increasing abstraction of the hardware design. “It gets closer to the software community’s mentality,” he said.
DARPA ERI’s slidedeck (the POSH section starts at page 49) suggests Linux as the foundation for the POSH IP block development and verification platform. It also details the following goals:

  • TA-1: Hardware Assurance Technology — Development of hardware assurance technology appropriate for signoff-quality validation of deeply hierarchical analog and digital circuits of unknown origin. Hardware assurance technology would provide increasingly high levels of assurance for formal analysis, simulation, emulation, and prototypes.
  • TA-2: Open Source Hardware Technology — Development of design methods, standards, and critical IP components needed to kick-start a viable open source SoC ecosystem. IP blocks would include digital blocks such as CPU, GPU, FPGA, media codecs, encryption accelerators, and controllers for memory, Ethernet, PCIe, USB 3.0, MIPI-CSI, HDMI, SATA, CAN, and more. Analog blocks might include PHYs, PLLs and DLLs, ADCs, DACs, regulators, and monitor circuits.
  • TA-3: Open Source System-On-Chip Demonstration — Demonstration of open source hardware viability through the design of a state of the art open source System-On-Chip.

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.

Hot Technologies on Track at Open Source Summit

Open Source Summit North America is right around the corner. There will be hundreds of sessions, workshops, and talks, all curated by experts in the Linux and open source communities. It’s not an easy feat to choose the topics and sessions you want to attend at the event  because there are so many topics and only so much time.

In this article, we talk with Laura Abbott, a developer employed by Red Hat, and Bryan Liles, a developer at Heptio, a Kubernetes company, based in Seattle, Washington, about the upcoming event. Abbott is on the program committee for Open Source Summit, and Liles is one of the program chairs, working hard “to build out a schedule that touches on many aspects of Open Source.”

Hot topics

“I’ve been interested in cloud-native applications for a few years now, and I spend most of my time thinking about the problems and developing software in this space,” said Liles. “I’m also interested in computer vision, augmented reality, and virtual reality. One of the most important topics in this space right now is Machine Learning. It’s amazing to see all the open source solutions being created. I feel that even as a hobbyist, I can find tools to help me build and run models without causing me to go into debt. Personally, I’m looking forward to the talks in the Infrastructure & Automation and the Kubernetes/Containers/Cloud Native Apps tracks.”

Here are just a few of the must-see cloud computing sessions:

As a kernel developer, Abbott gets excited when people talk about their future kernel work, especially when it involves the internals like the page cache or memory management. “I also love to see topics that talk about getting people involved in projects for the first time,” she said. “I’m also excited to see the Diversity Empowerment Summit and learning from the speakers there.”

You may wonder, as we move toward a cloud native world where everything runs in a cloud, whether Linux even matters anymore. But the fact is that Linux is powering the cloud. “Linux is what’s powering all those topics. When people say Linux, they’re usually referring to the complete platform from kernel to userspace libraries. You need a solid base to be able to run your application in the cloud. The entire community of Linux contributors enables today’s developers to work with the latest technologies,” said Abbott.

A few of the featured talks in the Linux Systems and Development track include:

Latest Trends

“DevOps is unsurprisingly a hot topic,” said Abbott. “There is a lot of focus on how to move towards newer best practices with projects like Kubernetes and how to best monitor your infrastructure. Blockchain technologies are a very hot topic. Some of this work is very forward looking but there’s a lot of interest in figuring out if blockchain can solve existing problems,” said Abbott.

That means OSSNA is the place to be if you are interested in emerging trends and technologies. “If you are looking to see what is coming next, or are currently involved in Open Source, you should attend,” said Liles. “The venue is in a great location in Vancouver, so you can also take in the city between listening to your peers during talks or debating current trends during the hallway track,” he added.

Abbott concluded, “Anyone who is excited about Linux should attend. There’s people talking about such a wide variety of topics from kernel development to people management. There’s something for everyone.”

Sign up to receive updates on Open Source Summit North America:

This article originally appeared at The Linux Foundation.

The Kubernetes Third-Year Anniversary Is Just the Beginning

A vibrant development community continues to help make Kubernetes the profoundly successful open source project it has become. In the few years since Kubernetes was created as an in-house project at Google, its governance processes have also served to underpin the platform’s adoption. And a healthy community is at the heart of any successful open source project.

At the same time, the open source community is not a “static asset.” To remain successful and keep moving forward, any open source project needs a growing pool of contributors. That’s why the Kubernetes community is working on multiple programs focused on onboarding contributors, including the Kubernetes Mentoring initiative, the Kubernetes Contributing guide, Office Hours, “Meet our Contributors” sessions, Outreachy, and even Google Summer of Code (GSoC), one of the most popular and well-known programs for new contributors to open source projects. Some of the stand-out contributors have also garnered industry recognition.

Releases, Features and Roadmap

Kubernetes is a technology, first and foremost. And the project obviously couldn’t be so successful if the technology did not offer such a profound benefit to organizations.

The most important releases of Kubernetes occur four times per year, providing a new set of features each time. The patch releases (delivering security patches and bug fixes) take place even more often, keeping the codebase always up-to-date.

Read more at The New Stack

Learn more in the Kubernetes project update from Ihor Dvoretskyi coming up at Open Source Summit in Vancouver.

Top 10 Reasons to be at the Premier Open Source Event of the Year | Register Now to Save $150

Here’s a sneak peek at why you need to be at Open Source Summit in Vancouver next month! But hurry – spots are going quickly. Secure your space and register by August 4 to save $150.

  1. Awesome content: 250+ sessions on Linux systems, cloud native development, cloud infrastructure, AI, blockchain and open source program management & community leadership.
  2. Deep Dive Labs & Tutorials: Including Hands-On with Cilium Network Security, Cloud-native Network Functions (CNF) Seminar, Istio Playground Lab, Practical Machine Learning Lab, First Tutorial on Container Orchestration plus many more – all included in one low registration price.
  3. 9 Co-located Events: Linux Security Summit, OpenChain Summit, Acumos AI Developer Mini-Summit, Cloud & Container Apprentice Linux Engineer tutorials, CHAOSSCon and much more!

Read more at The Linux Foundation

Open Source Certification: Preparing for the Exam

Open source is the new normal in tech today, with open components and platforms driving mission-critical processes at organizations everywhere. As open source has become more pervasive, it has also profoundly impacted the job market. Across industries the skills gap is widening, making it ever more difficult to hire people with much needed job skills. That’s why open source training and certification are more important than ever, and this series aims to help you learn more and achieve your own certification goals.

In the first article in the series, we explored why certification matters so much today. In the second article, we looked at the kinds of certifications that are making a difference. This story will focus on preparing for exams, what to expect during an exam, and how testing for open source certification differs from traditional types of testing.

Clyde Seepersad, General Manager of Training and Certification at The Linux Foundation, stated, “For many of you, if you take the exam, it may well be the first time that you’ve taken a performance-based exam and it is quite different from what you might have been used to with multiple choice, where the answer is on screen and you can identify it. In performance-based exams, you get what’s called a prompt.”

As a matter of fact, many Linux-focused certification exams literally prompt test takers at the command line. The idea is to demonstrate skills in real time in a live environment, and the best preparation for this kind of exam is practice, backed by training.

Know the requirements

“Get some training,” Seepersad emphasized. “Get some help to make sure that you’re going to do well. We sometimes find folks have very deep skills in certain areas, but then they’re light in other areas. If you go to the website for Linux Foundation training and certification, for the LFCS and the LFCE certifications, you can scroll down the page and see the details of the domains and tasks, which represent the knowledge areas you’re supposed to know.”

Once you’ve identified the skills you need, “really spend some time on those and try to identify whether you think there are areas where you have gaps. You can figure out what the right training or practice regimen is going to be to help you get prepared to take the exam,” Seepersad said.

Practice, practice, practice

“Practice is important, of course, for all exams,” he added. “We deliver the exams in a bit of a unique way — through your browser. We’re using a terminal emulator on your browser and you’re being proctored, so there’s a live human who is watching you via video cam, your screen is being recorded, and you’re having to work through the exam console using the browser window. You’re going to be asked to do something live on the system, and then at the end, we’re going to evaluate that system to see if you were successful in accomplishing the task.”

What if you run out of time on your exam, or simply don’t pass because you couldn’t perform the required skills? “I like the phrase, exam insurance,” Seepersad said. “The way we take the stress out is by offering a ‘no questions asked’ retake. If you take either exam, LFCS or LFCE, and you do not pass on your first attempt, you are automatically eligible to have a free second attempt.”

The Linux Foundation intentionally maintains separation between its training and certification programs and uses an independent proctoring solution to monitor candidates. It also requires that all certifications be renewed every two years, which gives potential employers confidence that skills are current and have been recently demonstrated.

Free certification guide

Becoming a Linux Foundation Certified System Administrator or Engineer is no small feat, so the Foundation has created this free certification guide to help you with your preparation. In this guide, you’ll find:

  • An array of both free and paid study resources to help you be as prepared as possible

  • A few tips and tricks that could make the difference at exam time

  • A checklist of all the domains and competencies covered in the exam

With certification playing a more important role in securing a rewarding long-term career, careful planning and preparation are key. Stay tuned for the next article in this series that will answer frequently asked questions pertaining to open source certification and training.

Learn more about Linux training and certification.

11 Ways (Not) to Get Hacked

Kubernetes security has come a long way since the project’s inception, but still contains some gotchas. Starting with the control plane, building up through workload and network security, and finishing with a projection into the future of security, here is a list of handy tips to help harden your clusters and increase their resilience if compromised.

Part One: The Control Plane

The control plane is Kubernetes’ brain. It has an overall view of every container and pod running on the cluster, can schedule new pods (which can include containers with root access to their parent node), and can read all the secrets stored in the cluster. This valuable cargo needs protecting from accidental leakage and malicious intent: when it’s accessed, when it’s at rest, and when it’s being transported across the network.

1. TLS Everywhere

TLS should be enabled for every component that supports it to prevent traffic sniffing, verify the identity of the server, and (for mutual TLS) verify the identity of the client.

Note that some components and installation methods may enable local ports over HTTP and administrators should familiarize themselves with the settings of each component to identify potentially unsecured traffic.
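
As a rough sketch of what that looks like in practice (assuming a cluster where you manage the component flags directly; the certificate paths below are placeholders), the API server and kubelet can be pointed at their certificates like this:

# kube-apiserver: serve the API over TLS and use client certs when talking to etcd
kube-apiserver \
  --tls-cert-file=/etc/kubernetes/pki/apiserver.crt \
  --tls-private-key-file=/etc/kubernetes/pki/apiserver.key \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt \
  --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt \
  --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key

# kubelet: disable anonymous access and require client certs signed by the cluster CA
kubelet \
  --anonymous-auth=false \
  --client-ca-file=/etc/kubernetes/pki/ca.crt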

This network diagram by Lucas Käldström demonstrates some of the places TLS should ideally be applied: between every component on the master, and between the Kubelet and API server. Kelsey Hightower‘s canonical Kubernetes The Hard Way provides detailed manual instructions, as does etcd’s security model documentation.

Read more at Kubernetes.io

Setting Up a Timer with systemd in Linux

Previously, we saw how to enable and disable systemd services by hand, at boot time and on power down, when a certain device is activated, and when something changes in the filesystem.

Timers add yet another way of starting services, based on… well, time. Although similar to cron jobs, systemd timers are slightly more flexible. Let’s see how they work.

“Run when”

Let’s expand the Minetest service you set up in the first two articles of this series as our first example on how to use timer units. If you haven’t read those articles yet, you may want to go and give them a look now.

So you will “improve” your Minetest set up by creating a timer that will run the game’s server 1 minute after boot up has finished instead of right away. The reason for this could be that, as you want your service to do other stuff, like send emails to the players telling them the game is available, you will want to make sure other services (like the network) are fully up and running before doing anything fancy.

Jumping in at the deep end, your minetest.timer unit will look like this:

# minetest.timer
[Unit]
Description=Runs the minetest.service 1 minute after boot up

[Timer]
OnBootSec=1 m
Unit=minetest.service

[Install]
WantedBy=basic.target

Not hard at all.

As usual, you have a [Unit] section with a description of what the unit does. Nothing new there. The [Timer] section is new, but it is pretty self-explanatory: it contains information on when the service will be triggered and the service to trigger. In this case, the OnBootSec is the directive you need to tell systemd to run the service after boot has finished.

Other directives you could use are:

  • OnActiveSec=, which tells systemd how long to wait after the timer itself is activated before starting the service.
  • OnStartupSec=, on the other hand, tells systemd how long to wait after systemd was started before starting the service.
  • OnUnitActiveSec= tells systemd how long to wait after the service the timer is activating was last activated.
  • OnUnitInactiveSec= tells systemd how long to wait after the service the timer is activating was last deactivated.
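
For instance, a hypothetical recurring timer (not part of the Minetest setup) could combine OnBootSec= with OnUnitActiveSec= to run a service 5 minutes after boot and then every 15 minutes thereafter:

# example.timer (hypothetical)
[Unit]
Description=Runs example.service 5 minutes after boot, then every 15 minutes

[Timer]
OnBootSec=5 min
OnUnitActiveSec=15 min
Unit=example.service

[Install]
WantedBy=basic.target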

Continuing down the minetest.timer unit, the basic.target is usually used as a synchronization point for late boot services. This means it makes minetest.timer wait until local mount points and swap devices are mounted, sockets, timers, path units and other basic initialization processes are running before letting minetest.timer start. As we explained in the second article on systemd units, targets are like the old run levels and can be used to put your machine into one state or another, or, like here, to tell your service to wait until a certain state has been reached.

The minetest.service you developed in the first two articles ended up looking like this:

# minetest.service
[Unit]
Description= Minetest server
Documentation= https://wiki.minetest.net/Main_Page

[Service]
Type= simple
User=
ExecStart= /usr/games/minetest --server
ExecStartPost= /home//bin/mtsendmail.sh "Ready to rumble?" "Minetest Starting up"
TimeoutStopSec= 180
ExecStop= /home//bin/mtsendmail.sh "Off to bed. Nightie night!" "Minetest Stopping in 2 minutes"
ExecStop= /bin/sleep 120
ExecStop= /bin/kill -2 $MAINPID

[Install]
WantedBy= multi-user.target

There’s nothing you need to change here. But you do have to change mtsendmail.sh (your email sending script) from this:

#!/bin/bash
# mtsendmail
sleep 20
echo $1 | mutt -F /home/<username>/.muttrc -s "$2" my_minetest@mailing_list.com
sleep 10

to this:

#!/bin/bash
# mtsendmail.sh
echo $1 | mutt -F /home/paul/.muttrc -s "$2" pbrown@mykolab.com

What you are doing is stripping out those hacky pauses in the Bash script. Systemd does the waiting now.

Making it work

To make sure things work, disable minetest.service:

sudo systemctl disable minetest

so it doesn’t get started when the system starts; and, instead, enable minetest.timer:

sudo systemctl enable minetest.timer

Now you can reboot your server machine and, when you run sudo journalctl -u minetest.*, you will see how the minetest.timer unit gets executed first, and then the minetest.service starts up after a minute… more or less.

A Matter of Time

A couple of clarifications about why the minetest.timer entry in systemd’s journal shows its start time as 09:08:33 while the minetest.service starts at 09:09:18, that is, less than a minute later: First, remember we said that the OnBootSec= directive calculates when to start a service from the moment boot is complete. By the time minetest.timer comes along, boot finished a few seconds earlier.

The other thing is that systemd gives itself a margin of error (by default, 1 minute) to run stuff. This helps distribute the load when several resource-intensive processes are running at the same time: by giving itself a minute, systemd can wait for some processes to power down. This also means that minetest.service will start somewhere between the 1 minute and 2 minute mark after boot is completed, but when exactly within that range is anybody’s guess.

For the record, you can change the margin of error with the AccuracySec= directive.
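
For example, if the one-minute window is too sloppy for your use case, you could (hypothetically) tighten it in minetest.timer’s [Timer] section like so:

[Timer]
OnBootSec=1 m
Unit=minetest.service
AccuracySec=1 s

Just keep in mind that tighter accuracy gives systemd less room to coalesce wake-ups, which is precisely why the default is a full minute.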

Another thing you can do is check when all the timers on your system are scheduled to run or the last time they ran:

systemctl list-timers --all

The final thing to take into consideration is the format you should use to express the periods of time. Systemd is very flexible in that respect: 2 h, 2 hours or 2hr will all work to express a 2 hour delay. For seconds, you can use seconds, second, sec, and s, the same way as for minutes you can use minutes, minute, min, and m. You can see a full list of time units systemd understands by checking man systemd.time.

Next Time

You’ll see how to use calendar dates and times to run services at regular intervals and how to combine timers and device units to run services at a defined point in time after you plug in some hardware.

See you then!

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

5 Tips to Improve Technical Writing for an International Audience

Learn how to write for an international audience in this article from our archives.

Writing in English for an international audience does not necessarily put native English speakers in a better position. On the contrary, they tend to forget that the document’s language might not be the first language of the audience. Let’s have a look at the following simple sentence as an example: “Encrypt the password using the ‘foo bar’ command.”

Grammatically, the sentence is correct. Given that “-ing” forms (gerunds) are frequently used in the English language, most native speakers would probably not hesitate to phrase a sentence like this. However, on closer inspection, the sentence is ambiguous: The word “using” may refer either to the object (“the password”) or to the verb (“encrypt”). Thus, the sentence can be interpreted in two different ways:

  • Encrypt the password that uses the ‘foo bar’ command (“using” refers to the object, “the password”).

  • Encrypt the password by using the ‘foo bar’ command (“using” refers to the verb, “encrypt”).

As long as you have previous knowledge about the topic (password encryption or the ‘foo bar’ command), you can resolve this ambiguity and correctly decide that the second reading is the intended meaning of this sentence. But what if you lack in-depth knowledge of the topic? What if you are not an expert but a translator with only general knowledge of the subject? Or, what if you are a non-native speaker of English who is unfamiliar with advanced grammatical forms?

Know Your Audience

Even native English speakers may need some training to write clear and straightforward technical documentation. Raising awareness of usability and potential problems is the first step. This article, based on my talk at Open Source Summit EU, offers several useful techniques. Most of them are useful not only for technical documentation but also for everyday written communication, such as writing email or reports.

1. Change perspective. Step into your audience’s shoes. Step one is to know your intended audience. If you are a developer writing for end users, view the product from their perspective. The persona technique can help to focus on the target audience and to provide the right level of detail for your readers.

2. Follow the KISS principle. Keep it short and simple. The principle can be applied to several levels, like grammar, sentences, or words. Here are some examples:

Words: Uncommon and long words slow down reading and might be obstacles for non-native speakers. Use simpler alternatives:

“utilize” → “use”

“indicate” → “show”, “tell”, “say”

“prerequisite” → “requirement”

Grammar: Use the simplest tense that is appropriate. For example, use present tense when mentioning the result of an action: “Click OK. The Printer Options dialog appears.”

Sentences: As a rule of thumb, present one idea in one sentence. However, restricting sentence length to a certain number of words is not useful in my opinion. Short sentences are not automatically easy to understand (especially if they are a cluster of nouns). Sometimes, trimming down sentences to a certain word count can introduce ambiguities, which can, in turn, make sentences even more difficult to understand.

3. Beware of ambiguities. As authors, we often do not notice ambiguity in a sentence. Having your texts reviewed by others can help identify such problems. If that’s not an option, try to look at each sentence from different perspectives: Does the sentence also work for readers without in-depth knowledge of the topic? Does it work for readers with limited language skills? Is the grammatical relationship between all sentence parts clear? If the sentence does not meet these requirements, rephrase it to resolve the ambiguity.

4. Be consistent. This applies to choice of words, spelling, and punctuation as well as phrases and structure. For lists, use parallel grammatical construction. For example:

Why white space is important:

5. Remove redundant content. Keep only information that is relevant for your target audience. On a sentence level, avoid fillers (basically, easily) and unnecessary modifications:

“already existing” → “existing”

“completely new” → “new”

As you might have guessed by now, writing is rewriting. Good writing requires effort and practice. But even if you write only occasionally, you can significantly improve your texts by focusing on the target audience and by using basic writing techniques. The better the readability of a text, the easier it is to process, even for an audience with varying language skills. When it comes to localization especially, good quality of the source text is important: Garbage in, garbage out. If the original text has deficiencies, it will take longer to translate the text, resulting in higher costs. In the worst case, the flaws will be multiplied during translation and need to be corrected in various languages. 

Driven by an interest in both language and technology, Tanja has been working as a technical writer in mechanical engineering, medical technology, and IT for many years. She joined SUSE in 2005 and contributes to a wide range of product and project documentation, including High Availability and Cloud topics.