Using Text Mining and Machine Learning to Enhance the Credit Risk Assessment Process

Advances in technology have instigated a substantial shift in consumer expectations. Today’s financial services customers demand access to a range of services, real-time updates and a seamless customer experience. At Open FinTech Forum, I will provide some insight into Spotcap’s approach to credit risk assessment using text mining and machine learning.

A recent survey by Oracle found that, although customers are generally satisfied with basic banking services, their satisfaction drops when attempting more complex transactions such as securing a loan. We have observed the same sentiment across the business community. This is why, at Spotcap, we’ve turned tradition on its head and created a more efficient take on business loans.

We undertake cash-flow-based, rather than credit-score-based, underwriting, and use technology to speed up the process. Combining tried and tested credit assessment principles with innovative technology – automated data scraping services and machine learning credit models – and skilled human analysts is what makes this approach possible.

Machine learning credit algorithms

Our risk assessment draws on numerous data sources but relies heavily on three main ones – borrower profile, bank account, and business profile – and is supported by a set of machine learning credit algorithms. This approach allows us to accurately and fairly assess how a business is performing today and make a prediction about its future performance.

Whilst we feed our models with hundreds of data points sourced from credit bureaus, tax agencies, business records and the applicants themselves, it is bank account transactional data that often paints the most accurate picture.

Spotcap’s Bank Account Model incorporates more than 200 numerical variables. Business bank account data, when structured correctly, is one of the strongest sources of predictive information for short-term lending and risk mitigation. We structure the raw data found in a bank account into variables that enable us to derive meaningful insights.
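The exact variables in Spotcap’s Bank Account Model are proprietary, but the general idea is easy to sketch. Below is a minimal, hypothetical Python example; the field names, sample transactions, and chosen features are illustrative assumptions, not Spotcap’s actual definitions. It aggregates raw bank transactions into a few numerical variables a credit model could consume:

from collections import defaultdict
from statistics import pstdev

# Hypothetical raw transactions: (ISO date, amount, free-text description).
transactions = [
    ("2018-07-02",  5200.0, "Invoice 1041 ACME Ltd"),
    ("2018-07-05",  -880.0, "Rent July"),
    ("2018-07-19",  -120.0, "Late fee - overdraft"),
    ("2018-08-01",  4700.0, "Invoice 1042 ACME Ltd"),
    ("2018-08-06",  -880.0, "Rent August"),
    ("2018-08-23", -1500.0, "Payment reversal 1042"),
]

def bank_account_features(txns):
    """Aggregate raw transactions into simple numerical variables."""
    inflow = defaultdict(float)    # total credits per month
    outflow = defaultdict(float)   # total debits per month
    for date, amount, _ in txns:
        month = date[:7]           # "YYYY-MM"
        if amount >= 0:
            inflow[month] += amount
        else:
            outflow[month] -= amount
    months = sorted(set(inflow) | set(outflow))
    net = [inflow[m] - outflow[m] for m in months]
    return {
        "months_observed": len(months),
        "avg_monthly_inflow": sum(inflow.values()) / len(months),
        "avg_monthly_outflow": sum(outflow.values()) / len(months),
        "net_cash_flow_volatility": pstdev(net) if len(net) > 1 else 0.0,
    }

print(bank_account_features(transactions))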

We have also developed bank account text mining tools to identify key negative factors such as payment reversals, late fees and collections transactions. However, this requires a supervised approach to minimize the risk of false positives.
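Again purely for illustration (and not Spotcap’s actual tooling), a first pass at this kind of text mining can be as simple as matching transaction descriptions against curated keyword patterns and routing every hit to an analyst for review rather than acting on it automatically – the supervised step that keeps false positives in check:

import re

# Hypothetical keyword patterns for negative events; a real system would be
# locale-specific and continuously maintained by underwriters.
NEGATIVE_PATTERNS = {
    "payment_reversal": re.compile(r"reversal|chargeback|returned payment", re.I),
    "late_fee": re.compile(r"late fee|overdraft fee|penalty", re.I),
    "collections": re.compile(r"collection agency|debt collection", re.I),
}

transactions = [
    ("2018-07-19",  -120.0, "Late fee - overdraft"),
    ("2018-08-23", -1500.0, "Payment reversal 1042"),
    ("2018-08-30",  -300.0, "Office supplies"),
]

def flag_negative_transactions(txns):
    """Return candidate negative events for human review."""
    flags = []
    for date, amount, description in txns:
        for label, pattern in NEGATIVE_PATTERNS.items():
            if pattern.search(description):
                flags.append({"date": date, "amount": amount,
                              "label": label, "text": description})
    return flags

# Every flag is reviewed by an analyst before it influences the credit
# decision, so a harmless match (e.g. a supplier whose name contains
# "reversal") is not treated as a negative event.
for flag in flag_negative_transactions(transactions):
    print(flag)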

The more data you feed into your machine learning models, the more accurate your results will be. But it’s not only about quantity; it’s primarily the quality of data that matters. Well-specified machine learning models can help lenders make faster and more informed decisions. However, even the most powerful machine learning algorithm will fail if applied to data with measurement error. The better your understanding of your data, the more accurate and insightful your results. Our underwriters and data scientists continuously add new knowledge and risk drivers to our models to get even more precise outcomes.

It’s all about automating the right parts of your analysis and remembering that human interaction is important at every stage of the model life cycle because we’re dealing with real people and real businesses, which are by nature complex. Human expertise combined with advanced technology enables us to make accurate, yet flexible credit decisions within one day.

We hope to see you at Open FinTech Forum for an informative and high-value event. Sign up to receive updates on Open FinTech Forum:

AT&T Details Open White Box Specs for Linux-Based 5G Routers

This week AT&T will release detailed specs to the Open Compute Project for building white box cell site gateway routers for 5G. Over the next few years, more than 60,000 white box routers built by a variety of manufacturers will be deployed as 5G routers in AT&T’s network.

In its Oct. 1 announcement, AT&T said it will load the routers with its Debian-based Vyatta Network Operating System (NOS) stack. Vyatta NOS forms the basis for AT&T’s open source dNOS platform, which in turn is the basis for a new Linux Foundation open source NOS project called DANOS, which similarly stands for Disaggregated Network Operating System (see below).

AT&T’s white box blueprint “decouples hardware from software” so any organization can build its own compliant systems running other software. This will provide the cellular gateway industry with flexibility as well as the security of building on an interoperable, long-lifecycle platform. The white box spec appears to be OS-agnostic. However, routers typically run Linux-based NOS stacks, and that does not appear to be changing with 5G.

The release of specs to the Open Compute Project — an organization that helps standardize open white box designs — departs from the traditional practice of contracting a few vendors to build proprietary solutions for cellular routers. AT&T’s next-gen router blueprint will enable any hardware manufacturer willing to build to spec to compete for the orders. By attracting more manufacturers, AT&T aims to reduce costs, spur innovation, and more quickly meet the “surging data demands” for 5G.

“We now carry more than 222 petabytes of data on an average business day,” stated Chris Rice, SVP, Network Cloud and Infrastructure at AT&T. “The old hardware model simply can’t keep up, and we need to get faster and more efficient.”

The reference design blueprint is said to be flexible enough to enable manufacturers to offer custom platforms for different use cases. In addition to offering faster mobile services, AT&T’s 5G services will enable new applications in “autonomous cars, drones, augmented reality and virtual reality systems, smart factories, and more,” says AT&T.

5G technology will not only provide a major boost in bandwidth for mobile customers, it should also enable wireless services to better compete with the cable providers’ wired broadband Internet services for the home. This week, AT&T rival Verizon opened pre-orders for consumer customers to sign up for 5G home internet service targeted for a launch in 2019.

At publication time, neither AT&T nor the Open Compute Project had yet published the white box specs, but AT&T offered a few details:

  • Supports a wide range of client-side speeds including “100M/1G needed for legacy Baseband Unit systems and next generation 5G Baseband Unit systems operating at 10G/25G and backhaul speeds up to 100G”

  • Supports industrial temperature ranges (-40 to 65°C)

  • Integrates the Broadcom Qumran-AX switching chip with deep buffers to support advanced features and QOS

  • Integrates a baseboard management controller (BMC) for platform health status monitoring and recovery

  • Includes a “powerful CPU for network operating software”

  • Provides timing circuitry that supports a variety of I/O

Vyatta NOS to dNOS to DANOS

Vyatta launched the Debian-based, OpenVPN-compliant Vyatta Community Edition over a decade ago. The distribution, which later added features like Quagga support and a standardized management console, was available in both subscription-based and open source Vyatta Core versions.

When Brocade acquired Vyatta in 2012, it discontinued the open source version. However, independent developers forked Vyatta Core to create an open source VyOS platform. Last year, Brocade sold its proprietary Vyatta assets to AT&T, which developed it as Vyatta NOS.

AT&T will initially load the proprietary, “production-hardened” Vyatta NOS on the white box routers it purchases. However, the goal appears to be to eventually replace this with AT&T’s dNOS stack under the emerging DANOS framework.

Robert Bays, assistant VP of Vyatta Development at AT&T Labs, stated: “Consistent with our previous announcements to create the DANOS open source project, hosted by the Linux Foundation, we are now sorting out which components of the open cell site gateway router NOS we will be contributing to open source.”

dNOS/DANOS aims to be the world’s first open source, carrier-grade operating system for wide area networks. The software is designed to interoperate with the widely endorsed ONAP (Open Network Automation Platform), a Linux Foundation project for standardizing open source cloud networking software. In AT&T’s dNOS announcement in January, which preceded the DANOS project launch in March, the company stated: “Just as the ONAP platform has become the open network operating system for the network cloud, the dNOS project aims to be the open operating system for white box.”

The DANOS project is also aligned with Linux Foundation projects like FRRouting, OpenSwitch, and the AT&T-derived Akraino Edge Stack. The Akraino project aims to standardize open source edge computing software for basestations and will also support telecom, enterprise networking, and IoT edge platforms.

Different Akraino blueprints will target technologies and standards such as DANOS, Ceph, Kata Containers, Kubernetes, StarlingX, OpenStack, Acumos AI, and EdgeX Foundry. In a few years, we will likely see DANOS-based white box gateway routers running Akraino software to enable 5G applications ranging from autonomous car communications to augmented reality.

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.

Greg Kroah-Hartman Explains How the Kernel Community Is Securing Linux

As Linux adoption expands, it’s increasingly important for the kernel community to improve the security of the world’s most widely used technology. Security is vital not only for enterprise customers, it’s also important for consumers, as 80 percent of mobile devices are powered by Linux. In this article, Linux kernel maintainer Greg Kroah-Hartman provides a glimpse into how the kernel community deals with vulnerabilities.

There will be bugs

As Linus Torvalds once said, most security holes are bugs, and bugs are part of the software development process. As long as the software is being written, there will be bugs.

“A bug is a bug. We don’t know if a bug is a security bug or not. There is a famous bug that I fixed and then three years later Red Hat realized it was a security hole,” said Kroah-Hartman.

There is not much the kernel community can do to eliminate bugs, but it can do more testing to find them. The kernel community now has its own security team that’s made up of kernel developers who know the core of the kernel.

“When we get a report, we involve the domain owner to fix the issue. In some cases it’s the same people, so we made them part of the security team to speed things up,” Kroah-Hartman said. But he also stressed that all parts of the kernel have to be aware of these security issues because the kernel is a trusted environment and they have to protect it.

“Once we fix things, we can put them in our stack analysis rules so that they are never reintroduced,” he said.

Besides fixing bugs, the community also continues to add hardening to the kernel. “We have realized that we need to have mitigations. We need hardening,” said Kroah-Hartman.

Huge efforts have been made by Kees Cook and others to take the hardening features that have traditionally lived outside the kernel and merge or adapt them for the kernel. With every kernel release, Cook provides a summary of all the new hardening features. But hardening the kernel is not enough; vendors have to enable the new features and take advantage of them. That’s not happening.

Kroah-Hartman releases a stable kernel every week, and companies pick one to support for a longer period so that device manufacturers can take advantage of it. However, Kroah-Hartman has observed that, aside from the Google Pixel, most Android phones don’t include the additional hardening features, meaning all those phones are vulnerable. “People need to enable this stuff,” he said.

“I went out and bought all the top of the line phones based on kernel 4.4 to see which one actually updated. I found only one company that updated their kernel,” he said.  “I’m working through the whole supply chain trying to solve that problem because it’s a tough problem. There are many different groups involved — the SoC manufacturers, the carriers, and so on. The point is that they have to push the kernel that we create out to people.”

The good news is that unlike with consumer electronics, the big vendors like Red Hat and SUSE keep the kernel updated even in the enterprise environment. Modern systems with containers, pods, and virtualization make this even easier. It’s effortless to update and reboot with no downtime. It is, in fact, easier to keep things secure than it used to be.

Meltdown and Spectre

No security discussion is complete without the mention of Meltdown and Spectre. The kernel community is still working on fixes as new flaws are discovered. However, Intel has changed its approach in light of these events.

“They are reworking on how they approach security bugs and how they work with the community because they know they did it wrong,” Kroah-Hartman said. “The kernel has fixes for almost all of the big Spectre issues, but there is going to be a long tail of minor things.”

The good news is that these Intel vulnerabilities proved that things are getting better for the kernel community. “We are doing more testing. With the latest round of security patches, we worked on our own for four months before releasing them to the world because we were embargoed. But once they hit the real world, it made us realize how much we rely on the infrastructure we have built over the years to do this kind of testing, which ensures that we don’t have bugs before they hit other people,” he said. “So things are certainly getting better.”

The increasing focus on security is also creating more job opportunities for talented people. Since security is an area that attracts attention, it is a good place to get started for those who want to build a career in kernel space.

“If there are people who want a job to do this type of work, we have plenty of companies who would love to hire them. I know some people who have started off fixing bugs and then got hired,” Kroah-Hartman said.

Check out the schedule of talks for Open Source Summit Europe and sign up to receive updates:

A New Method of Containment: IBM Nabla Containers

By James Bottomley

In the previous post about Containers and Cloud Security, I noted that most of the tenants of a Cloud Service Provider (CSP) could safely not worry about the Horizontal Attack Profile (HAP) and leave the CSP to manage the risk.  However, there is a small category of jobs (mostly in the financial and allied industries) where the damage done by a Horizontal Breach of the container cannot be adequately compensated by contractual remedies.  For these cases, a team at IBM research has been looking at ways of reducing the HAP with a view to making containers more secure than hypervisors.  For the impatient, the full open source release of the Nabla Containers technology is here and here, but for the more patient, let me explain what we did and why.  We’ll have a follow on post about the measurement methodology for the HAP and how we proved better containment than even hypervisor solutions.

The essence of the quest is a sandbox that emulates the interface between the runtime and the kernel (usually dubbed the syscall interface) with as little code as possible and a very narrow interface into the kernel itself.

The Basics: Looking for Better Containment

The HAP attack worry with standard containers is that a malicious application can breach the containment wall and attack an innocent application.

Read more at Hansen Partnership

Programming Snapshot: Implementing Fast Queries for Local Files in Go

To find files quickly in the deeply nested subdirectories of his home directory, Mike whips up a Go program to index file metadata in an SQLite database.

…the GitHub Codesearch [1] project, with its indexer built in Go, at least lets you browse locally available repositories, index them, and then search for code snippets in a flash. Its author, Russ Cox, then an intern at Google, explained later how the search works [2].

How about using a similar method to create an index of files below a start directory to perform quick queries such as: “Which files have recently been modified?” “Which are the biggest wasters of space?” Or “Which file names match the following pattern?”

Unix filesystems store metadata in inodes, which reside in flattened structures on disk that cause database-style queries to run at a snail’s pace. To take a look at a file’s metadata, run the stat command on it and take a look at the file size and timestamps, such as the time of the last modification (Figure 2).

Figure 2: Inode metadata of a file, here determined by stat, can be used to build an index.

Newer filesystems like ZFS or Btrfs take a more database-like approach in the way they organize the files they contain but do not go far enough to be able to support meaningful queries from userspace.

Fast Forward Instead of Pause

For example, if you want to find all files over 100MB on the disk, you can do this with a find call like:

find / -type f -size +100M

If you are running the search on a traditional hard disk, take a coffee break. Even on a fast SSD, you need to prepare yourself for long search times in the minute range. The reason for this is that the data is scattered in a query-unfriendly way across the sectors of the disk.
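The article’s indexer is written in Go, but the underlying idea can be sketched in a few lines of Python using the standard sqlite3 module: walk the directory tree once, store each file’s metadata in a table, and let SQL answer the “over 100MB” question from the index instead of from the disk. The database path, table layout, and size threshold below are illustrative assumptions, not the article’s actual schema:

import os
import sqlite3

# Build the metadata index once; queries against it are then nearly instant.
db = sqlite3.connect("files.db")
db.execute("""CREATE TABLE IF NOT EXISTS files
              (path TEXT PRIMARY KEY, size INTEGER, mtime REAL)""")

start_dir = os.path.expanduser("~")
for dirpath, _dirnames, filenames in os.walk(start_dir):
    for name in filenames:
        full = os.path.join(dirpath, name)
        try:
            st = os.stat(full)
        except OSError:            # broken symlinks, permission errors, ...
            continue
        db.execute("INSERT OR REPLACE INTO files VALUES (?, ?, ?)",
                   (full, st.st_size, st.st_mtime))
db.commit()

# "Which files are over 100MB?" -- answered from the index, not the disk.
query = "SELECT path, size FROM files WHERE size > ? ORDER BY size DESC LIMIT 10"
for path, size in db.execute(query, (100 * 1024 * 1024,)):
    print(size, path)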

Read more at Linux Pro Magazine

The End of the Road: systemd’s “Socket” Units

Sockets are used by two different processes to share data, or to shuttle information from one machine to another over the network. They are extremely useful and the basis of things like FTP, real-time network chat systems, secure shells, and so on.

For the fly-by programmer, sockets can be somewhat hard to get right, but by using systemd’s socket units, you can make systemd do the heavy lifting.

Besides making sockets simpler to set up, systemd dumps whatever comes in through the socket to STDIN. This means you don’t have to bother with complicated socket management in your script; just pick up the data from STDIN and use it from there.

The other advantage is that systemd will make sure your socket is active only as long as necessary, waking it up when data is incoming and closing it down again when it is done. This saves resources, as the service on the receiving side will not be running most of the time and will only be activated when the systemd socket unit detects activity on its port.

To see how all this works, first you’ll see how easy it is to send some strings of text over a systemd activated socket. Later, we’ll look at how to send a whole binary file. Finally, we will pick up the systemd-based surveillance system we have been developing over the past several installments and learn how to send the images it captures to your laptop.

DISCLAIMER: What you’ll see here are over-simplified examples created for teaching purposes only. Although they all work, there is no error-handling or security built into any of them. I don’t recommend you use them in a real-world scenario.

Sending Texts

Socket units are stupidly simple, or rather, they usually are. Although there are dozens of socket-specific directives you can use to fine-tune your units, you will rarely use more than two. In this case, you do exactly that and use only a listening directive (ListenStream) and the Accept directive:

# echo.socket
[Unit]
Description = Echo server

[Socket]
ListenStream = 4444
Accept = yes

[Install]
WantedBy = sockets.target

That is what a basic socket file looks like. It has a [Socket] section where you specify what it has to listen for. Apart from streams, it can listen for datagrams, sequential packets, and so on. On the other side of the = is what to listen on. You could specify a full IP address, a filesystem socket, or something else. A single number, like in this case, means a port. The socket unit above will be listening on port 4444 of the local machine.

The other socket-specific directive is Accept, which is set to false by default, a setting used mostly for AF_UNIX sockets. Not to get into too much detail, but AF_UNIX sockets are sockets where the processes sharing the information reside on the same machine.

As you want to send information from one machine to another, you will be using an AF_INET socket, and for that the best thing to do is set Accept to true or yes.

The service itself is also pretty basic:

# echo@.service
[Unit]
Description=Echo server service

[Service]
ExecStart=/path/to/socketthing.py
StandardInput=socket

In most cases, the service will have the same name as the socket unit, except with an @ and the .service suffix. As your socket unit was echo.socket, your service will be echo@.service.

The service Type is simple, which is already the default, so there is no need to include it. ExecStart points to the socketthing.py script you will see in a minute, and the StandardInput for said script comes from the socket set up by echo.socket.

The socketthing.py script is barely three lines long:

#!/usr/bin/python
import sys
sys.stdout.write(sys.stdin.readline().strip().upper() + '\r\n')

What this does is read a line of text in from STDIN, which, as you saw, comes in via the socket. Then it strips all the spaces from the beginning and the end, and puts it into uppercase (sys.stdin.readline().strip().upper()). Finally it sends it back across the socket to the terminal of the sending computer (sys.stdout.write([...])). This means a user will connect to your receiving machine’s socket, type in a string, and will see it echoed back in CAPITAL LETTERS.

Start the socket unit with:

sudo systemctl start echo.socket

And echo.socket will automatically call echo@.service (which in turn runs socketthing.py) each time someone tries to push a string to the server through port 4444.

To do that, on the sending computer, you can use a program like socat:

$ socat - TCP:server_IP_address:4444
hello computer
HELLO COMPUTER
$

Although good for illustrating how to get started, this example is pretty pointless. Let’s do something a bit more useful and send over a whole file…

Transferring Files

For systemd, there is no difference between sending a stream of text and a stream of binary data. In fact, for all practical purposes, the socket file is the same…

# filetrans.socket
[Unit]
Description=File transfer server

[Socket]
ListenStream=4444
Accept=yes

[Install]
WantedBy=sockets.target

… As is the service unit:

# filetrans@.service
[Unit]
Description=File transfer server service

[Service]
ExecStart=/path/to/socketfilething.py
StandardInput=socket

All you need to do is change the name and description of the services and have the “new” filetrans@.service point to a script that will handle the reception of the file.

In this case, the script, socketfilething.py, will handle PDFs coming from the sending computer:

#!/usr/bin/python3
import sys

output_file = open("/path/to/store/test.pdf", "wb")
output_file.write(sys.stdin.buffer.read())
output_file.close()

You use sys.stdin.buffer.read() to read in a stream of binary data from STDIN, and, as you have opened test.pdf in write binary mode ("wb"), you can just write the stream passed down from the socket directly into the file.

To try this out, from the sending end of things, you can send a PDF file over the wire, again using socat:

cat some.pdf | socat - TCP:192.168.1.111:4444

On the receiving end, a copy of some.pdf called test.pdf will pop up in the directory of your choice.

You can probably see where we are going with this and how we can use it in our systemd-powered surveillance system.

Surveillance Sockets

Again, on the receiving side, there is virtually no difference to either the socket unit:

# surveillance.socket
[Unit]
Description=Surveillance server

[Socket]
ListenStream=4444
Accept=yes

[Install]
WantedBy=sockets.target

… Or the service unit:

# surveillance@.service
[Unit]
Description=Surveillance server service

[Service]
ExecStart=/path/to/surveillancething.py
StandardInput=socket

Save for a change of name and description, the only difference is that the service points to another script, which you can call surveillancething.py:

#!/usr/bin/python3
import sys
from time import strftime

fn = strftime("%Y_%m_%d_%H_%M_%S") + ".jpg"
output_file = open("/path/to/store/" + fn, "wb")
output_file.write(sys.stdin.buffer.read())
output_file.close()

This new script is very similar to the one you used to receive a PDF. The only difference is that, as the surveying machine sends an image every time it detects changes, you want to give each image you receive a unique name, preferably with a timestamp, hence the fn = strftime("%Y_%m_%d_%H_%M_%S")+".jpg" line.

On the surveying side, you only need to change the picmonitor.sh file so that it sends the new image over the socket:

#!/bin/bash
fn=`date|tr [:punct:][:space:] _`.jpg
cp /home/[user name]/monitor/monitor.jpg /home/[user name]/monitor/$fn
cat /home/[user name]/monitor/$fn | socat - TCP:192.168.1.111:4444

Start surveillance.socket on the server and picchanged.timer on the surveying machine, and you will start to receive images from your spying webcam.

Conclusion

And that’s it! Over the past few months, we have covered everything you need to know to get started writing systemd units. We have gone from the most basic service units, all the way through device event-activated services, timers, and more.

In case you missed anything, here’s an index to all the other systemd topics we have covered:

  1. Basic Services: Writing Systemd Services for Fun and Profit
  2. More Advanced Services: Beyond Starting and Stopping
  3. Device aware services: Reacting to Change
  4. Paths: Monitoring Files and Directories
  5. Timers 1: Setting Up a Timer
  6. Timers 2: Timers: Three Use Cases

Netrunner Builds on KDE for a Unique Linux Spin

Nearly every Linux distribution is based on another one. Many are based on Ubuntu or Debian, some are based on Fedora, while others are based on Arch Linux. And, even when a distribution offers different types of releases (stable vs. rolling, or various available desktops), they are generally built on the same base platform.

Netrunner, however, takes a slightly different approach. If that name sounds slightly familiar, you might remember the Collectable Card Game from the 1990s that pitted two players against each other — one playing a corporation and one playing a hacker attempting to break into the corporation’s network. There is no indication that Blue Systems (the company supporting Netrunner) named the OS after the game, but it’s a great launching point for yet another Linux distribution.

So, what is Netrunner doing differently? The main trick they have up their sleeve is that the distribution is offered in three different flavors:

  • Stable

  • Rolling

  • Core

That is a fairly common offering these days. But, whereas Netrunner’s Stable and Core releases are based on Debian Testing, the Rolling release is based on Manjaro (which is itself based on Arch Linux). So, depending upon the release cycle you want, you may be using a Debian-based or Arch-based distribution. No matter the choice of base, however, Netrunner only offers one desktop—a modified version of KDE Plasma. That modified Plasma desktop might intrigue some users and put others off. Why? Because, at first blush, the desktop offered on Netrunner looks as much like a Mate or Cinnamon interface as it does KDE.

Let’s install Netrunner and see what makes this uniquely released distribution tick.

Installation

Once again, I’m happy to report that there is no need to discuss the installation of a Linux distribution. This has become quite a selling point for so many open source operating systems… that installation has become as easy as installing an application. Netrunner offers yet another point-and-click install that’s as easy as answering a few simple questions and clicking a few buttons. That’s all there is to it. Simple, fast, and user-friendly. This installation throws not a single trick at the user and, in roughly five to ten minutes, you’ll have the operating system up and running and ready to serve.

What’s installed

Out of the box (and after a single update), you’ll find kernel 4.14.0-3, KDE Plasma version 5.12.2, KDE Apps version 17.08.3, Frameworks version 5.42.0, and Qt version 5.9.2. Along with those pieces, you’ll find the following installed software:

  • Synaptic Package Manager for the stable release and Octopi (a Pacman front-end) for the rolling release (along with KDE’s Discover on both).

  • Audacious (music player)

  • Firefox (web browser)

  • GIMP (image editor)

  • GMusic Browser (another music player)

  • HandBrake (video converter)

  • Inkscape (vector image editor)

  • Kamoso (webcam software)

  • Kdenlive (video editor)

  • LibreOffice (office suite)

  • Pidgin Internet Messenger (chat/message client)

  • Skype (VOIP client)

  • Steam (gaming platform)

  • Thunderbird (email client)

  • VirtualBox (Virtual Machine manager)

  • Vokoscreen (screencasting)

  • Yakuake (terminal)

  • Yarock (music player)

Clearly, Netrunner contains all the software you need to get started with Linux on the desktop (especially if you’re a fan of music). The only qualm I have with the list of included software is that GMusic Browser is way out of date. The latest stable version was released August 20, 2015. I’d much rather see the likes of Clementine included, as it is under regular development. Or, just stick with Yarock (which is much more current, with its latest release out February 11, 2018).
It should also be noted that Firefox is installed along with the uBlock Origin extension. uBlock Origin is a content blocker that doesn’t consume much in the way of system resources; it’s easy to use and filters ads and trackers automatically using its built-in filter lists.

You won’t find anything by way of development packages installed out of the box. Of course, as this is Linux, all of the tools you need for development are a quick install away.

The Changes to KDE Plasma

What Netrunner has done with KDE is, effectively, organize the various modules such that the desktop becomes immediately familiar to users, with few of the standard KDE bits at the fore. The best way to explore these changes is by opening the System Settings tool and going to the Plasma Tweaks section. Here you’ll find every aspect of KDE that has been tweaked by the developers (Figure 1).

Select a different theme under Look And Feel, make sure to select Use Desktop Layout from theme, and click Apply. For example, select the Breeze theme and you’ll see how different the default Plasma desktop is from what Netrunner has done (Figure 2).

The changes made to KDE Plasma do not detract from how efficient and user-friendly it is; they only enhance it. However, for those who prefer the default KDE Plasma desktop, it’s just a couple of clicks away. But if the likes of Mate or Cinnamon are up your alley, you’ll love what the developers have done with KDE. Either way, the desktop runs incredibly smoothly and performs like a champ (even when running as a VirtualBox virtual machine).

Take your pick

Whether you prefer a stable, bleeding edge, or minimal distribution, Netrunner has you covered. If you prefer the simplicity of Debian or the flexibility of Arch Linux, Netrunner still has you covered. No matter your pick, this flavor of Linux is a desktop that is sure to please most Linux users, regardless of experience or preference.

To download Netrunner, visit the distribution’s download page and select from the Debian-based, the Arch-based, or the core. You’ll only find 64-bit versions of each (as well as an Arm-based version), so make sure you have the proper hardware before starting the download.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Open Source Summit & ELC + OpenIoT Summit Europe Features 13 Co-Located Events including Linux Security Summit, Yocto Dev Day, LF Energy Summit, Tracing Summit, and More

Make the most of your time at Open Source Summit/ELC + OpenIoT Summit Europe!

Over a dozen events taking place alongside Open Source Summit and ELC+OpenIoT Summit Europe offer attendees even more ways to increase skills and connections – all in one trip. 300 conference sessions, 2000 attendees, 13 co-located events and dozens of event experiences; if you’re not registered yet, now is the time.

REGISTER NOW »

Sign up to receive updates on Open Source Summit Europe:

Co-Located Events:

Embedded Apprentice Linux Engineer Track – Mon., Oct. 22 & Tues., Oct. 23*

Are you an Embedded Engineer who is transitioning to using Linux? Embedded Apprentice Linux Engineer is a series of 8 seminars over 2 days taught by a professional Embedded Linux Instructor with years of practical experience.

OpenChain Workshop – The Supply Chain Compliance Solution (Not A Blockchain) – Tues., Oct. 23

The OpenChain Project defines the key requirements for a quality open source compliance program through a single, simple specification. This workshop will feature the latest developments around supply chain compliance and provide an excellent opportunity for attendees to both learn from and contribute to the project work teams.

Hyperledger Scotland Meetup – Tues., Oct. 23

Hyperledger is an open source collaborative effort created to advance cross-industry blockchain technologies. It is a global collaboration hosted by The Linux Foundation and including leaders in finance, banking, IoT, supply chains, manufacturing and technology. Hyperledger Meetup groups have an informal relationship with Hyperledger, and make up a key part of the Hyperledger ecosystem.

LF Energy Summit – Wed., Oct. 24*

The inaugural LF Energy Summit will focus on creating a shared vision to accelerate and transform the world’s relationship with energy by including the perspectives of power systems engineers and executives with open source developers. Together, we will identify the best paths to building a vibrant ecosystem with specific and practical outcomes for next steps and technical groups where companies and individuals can contribute. Space is limited, register today.

Linux in Safety-Critical Systems Summit – Wed., Oct. 24

This summit will inform interested developers and users about the activities and plans to support the use of Linux in safety-critical systems, presenting developments in the SIL2LinuxMP project and work from others that are valuable to the project.

IoT Apprentice Linux Engineer Track – Wed., Oct. 24*

The I-ALE program introduces Linux engineers to a more deeply embedded platform and programming. This series of 3 seminars will introduce you to a small microcontroller on a board with various input and output devices, which will allow you to build an Internet-connected device you can hang on your wall.

Linux Security Summit (LSS) Europe – Thurs., Oct. 25 & Fri, Oct. 26*

The Linux Security Summit (LSS) is a technical forum for collaboration between Linux developers, researchers, and end users. Its primary aim is to foster community efforts in analyzing and solving Linux security challenges.

Zephyr Hackathon – “Get Connected”  – Thurs., Oct. 25

Includes a Zephyr orientation session and the chance to learn the tips and tricks of setting up a development environment and working with Zephyr. Note: Currently Full, Waitlist Only.

Tracing Summit  – Thurs., Oct. 25

The goal of the Tracing Summit is to provide space for discussion between people of the various areas that benefit from tracing, namely parallel, distributed and/or real-time systems, as well as kernel development.

Linux Media Summit  – Thurs., Oct. 25

The Linux Media Summit is the premier forum to discuss Linux multimedia development for cameras, audio and video streaming devices, analog/digital TV support, remote controllers, and HDMI Consumer Electronics Control (CEC) in the Linux kernel and its userspace APIs.

Yocto Project Dev Day Europe 2018  – Thurs., Oct. 25*

A one-day, hands-on training that puts you in direct contact with Yocto Project technical experts and developers. Its primary goal is to show developers how to create custom-built Linux distributions for embedded devices by using layers and recipes designed to resolve incompatibilities between different configurations.

Real-Time Summit  – Thurs., Oct. 25*

The Real-Time Summit is organized by the Linux Foundation Real-Time Linux (RTL) collaborative project. The event is intended to gather developers and users of the PREEMPT_RT patch, providing room for discussion between developers, tooling experts, and users. 

FOSSology – Hands on Training  – Thurs., Oct. 25*

This hands-on training session will provide an understanding of FOSSology, an open source license compliance software system and toolkit.

*Co-located events with an additional fee are denoted with an asterisk.

In addition to all these great co-located event offerings, we want to remind you of all the other experiences that the conference provides for attendees.

Sunday, October 21

Better Together Diversity Social

Monday, October 22

Diversity Empowerment Summit

First-time Attendee Breakfast

Sightseeing Bus Tour

Women in Open Source Lunch

Attendee Opening Reception at the National Museum of Scotland

Tuesday, October 23

Open Source Career Breakfast

Diversity Empowerment Summit

Speed Networking & Mentoring

Onsite Attendee Reception & Sponsor + Technical Showcase

5K Fun Run

Partner Reception – Invitation Only

Wednesday, October 24

Morning Meditation

Diversity Empowerment Summit

REGISTER NOW »

Open FinTech Forum Offers Tips for Open Source Success

2018 marks the year that open source disrupts yet another industry, and this time it’s financial services. The first-ever Open FinTech Forum, happening October 10-11 in New York City, focuses on the intersection of financial services and open source. It promises to provide attendees with guidance on building internal open source programs along with an in-depth look at cutting-edge technologies being deployed in the financial sector, such as AI, blockchain/distributed ledger, and Kubernetes.

Several factors make Open FinTech Forum special, but the in-depth sessions on day 1 especially stand out. The first day offers five technical tutorials, as well as four working discussions covering open source in an enterprise environment, setting up an open source program office, ensuring license compliance, and best practices for contributing to open source projects.

Enterprise open source adoption has its own set of challenges, but it becomes easier if you have a clear plan to follow. 

Read more at The Linux Foundation