
Python 🐍 Put Legend Outside Plot 📈 – Easy Guide


Are you tired of feeling boxed in by your Python plots and ready to break free from the constraints of traditional legend placement?

In this guide, I’ll show you how to put legends outside your plot in Matplotlib, Seaborn, Pandas, and Bokeh. 👇

Let’s start with the first! 👇👩‍💻

Matplotlib Put Legend Outside Plot

Let’s start with various ways to position the legend outside for better visualization and presentation.

Matplotlib Set Legend Outside Plot (General)

First, let’s adjust the legend’s position outside the plot in general. To do this, use the bbox_to_anchor parameter in the legend function like this: matplotlib.pyplot.legend(bbox_to_anchor=(x, y)) 😃. Here, adjust the values of x and y to control the legend’s position.

import matplotlib.pyplot as plt

# Sample data for plotting
x = [1, 2, 3, 4, 5]
y1 = [1, 4, 9, 16, 25]
y2 = [1, 8, 27, 64, 125]

# Create the plot
plt.plot(x, y1, label='y = x^2')
plt.plot(x, y2, label='y = x^3')

# Set the legend's position outside the plot using bbox_to_anchor
plt.legend(bbox_to_anchor=(1.05, 1))

# Add axis labels and title
plt.xlabel('x')
plt.ylabel('y')
plt.title('Plotting with External Legend')

# Display the plot
plt.show()

In this example, the bbox_to_anchor parameter is set to (1.05, 1), which moves the legend slightly to the right of the plot.

💡 Info: The bbox_to_anchor parameter in matplotlib.pyplot.legend() uses a tuple of two values, x and y, to control the position of the legend. x=0/1 controls the left/right and y=0/1 controls the bottom/top legend placement.

Generally, in axes coordinates:

  • (0, 0) represents the left-bottom corner of the axes.
  • (0, 1) represents the left-top corner of the axes.
  • (1, 0) represents the right-bottom corner of the axes.
  • (1, 1) represents the right-top corner of the axes.

However, the values of x and y are not limited to the range [0, 1]. You can use values outside of this range to place the legend beyond the axes’ boundaries.

For example:

  • (1.05, 1) places the legend slightly to the right of the top-right corner of the axes.
  • (0, 1.1) places the legend slightly above the top-left corner of the axes.

Using negative values is also allowed. For example:

  • (-0.3, 0) places the legend to the left of the bottom-left corner of the axes.
  • (1, -0.2) places the legend below the bottom-right corner of the axes.

The range of x and y depends on the desired position of the legend relative to the plot. By adjusting these values, you can fine-tune the legend’s position to create the perfect visualization. 💫

Matplotlib Set Legend Below or Above Plot

To place the legend below the plot, set the loc parameter to ‘upper center’ and use bbox_to_anchor like this: plt.legend(loc='upper center', bbox_to_anchor=(0.5, -0.1)). To place the legend above the plot, use plt.legend(loc='lower center', bbox_to_anchor=(0.5, 1.1)) instead 📊.
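As a quick sketch with made-up sample data, placing the legend below the axes could look like this (the ncol argument spreads the entries horizontally so the legend stays flat):

```python
import matplotlib.pyplot as plt

x = [1, 2, 3, 4, 5]
plt.plot(x, [v**2 for v in x], label='y = x^2')
plt.plot(x, [v**3 for v in x], label='y = x^3')

# Anchor the legend's top-center point just below the axes
plt.legend(loc='upper center', bbox_to_anchor=(0.5, -0.1), ncol=2)

# Make room at the bottom so the legend isn't clipped
plt.subplots_adjust(bottom=0.25)
plt.show()
```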

Matplotlib Set Legend Left of Plot (Upper, Center, Lower Left)

For positioning the legend to the left of the plot, use the following examples:

  • Upper left: plt.legend(loc='upper right', bbox_to_anchor=(-0.1, 1)) 🌟
  • Center left: plt.legend(loc='center right', bbox_to_anchor=(-0.1, 0.5))
  • Lower left: plt.legend(loc='lower right', bbox_to_anchor=(-0.1, 0))

Matplotlib Set Legend Right of Plot (Upper, Center, Lower Right)

To position the legend to the right of the plot, you can try the following:

  • Upper right: plt.legend(loc='upper left', bbox_to_anchor=(1.05, 1)) 👍
  • Center right: plt.legend(loc='center left', bbox_to_anchor=(1.05, 0.5))
  • Lower right: plt.legend(loc='lower left', bbox_to_anchor=(1.05, 0))

Matplotlib Set Subplots Legend Outside Plot

When working with subplots, you can place a single, unified legend outside the plot by collecting the handles and labels from each axis and passing them to fig.legend() 😊. Remember to use the bbox_to_anchor parameter to control the legend’s position.

Here’s an example that does two things, i.e., (1) placing the legend on the right and (2) adjusting the layout to accommodate the external legend:

import numpy as np
import matplotlib.pyplot as plt

# Sample data for plotting
x = np.linspace(0, 10, 100)
y1 = np.sin(x)
y2 = np.cos(x)

# Create subplots
fig, axes = plt.subplots(nrows=2, ncols=1, sharex=True)

# Plot data on the first subplot
axes[0].plot(x, y1, label='sin(x)')
axes[0].set_title('Sine Wave')

# Plot data on the second subplot
axes[1].plot(x, y2, label='cos(x)', color='orange')
axes[1].set_title('Cosine Wave')

# Set a common x-label for both subplots
fig.text(0.5, 0.04, 'x', ha='center')

# Set y-labels for individual subplots
axes[0].set_ylabel('sin(x)')
axes[1].set_ylabel('cos(x)')

# Collect handles and labels from all subplots for a unified legend
handles, labels = axes[-1].get_legend_handles_labels()
for ax in axes[:-1]:
    h, l = ax.get_legend_handles_labels()
    handles += h
    labels += l

# Place the unified legend outside the plot using bbox_to_anchor
fig.legend(handles, labels, loc='upper right', bbox_to_anchor=(1, 0.75))

# Adjust the layout to accommodate the external legend
fig.subplots_adjust(right=0.7)

# Display the subplots
plt.show()

Legend Outside Plot Is Cut Off

If your legend is cut off, you can adjust the saved figure’s dimensions using plt.savefig('filename.ext', bbox_inches='tight'). The bbox_inches parameter with the value ‘tight’ will ensure that the whole legend is visible on the saved figure 🎉.
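Here’s a minimal sketch (the filename is just an example):

```python
import matplotlib.pyplot as plt

plt.plot([1, 2, 3], [1, 4, 9], label='y = x^2')
plt.legend(loc='upper left', bbox_to_anchor=(1.05, 1))

# bbox_inches='tight' expands the saved area to include the legend
plt.savefig('plot_with_legend.png', bbox_inches='tight')
```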

Add Legend Outside a Scatter Plot

For a scatter plot, you can use the same approach as mentioned earlier by adding the loc and bbox_to_anchor parameters to position the legend outside the plot. For instance, plt.legend(loc='upper right', bbox_to_anchor=(1.1, 1)) will place the legend in the upper right corner outside the scatter plot 💡.
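A minimal sketch with made-up scatter data might look like this:

```python
import matplotlib.pyplot as plt

# Two illustrative groups of points
plt.scatter([1, 2, 3], [2, 4, 1], label='Group A')
plt.scatter([1.5, 2.5, 3.5], [3, 1, 4], label='Group B')

# Place the legend outside the upper-right corner of the axes
plt.legend(loc='upper right', bbox_to_anchor=(1.1, 1))
plt.show()
```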

If your legend is cut off when placing it outside the plot in Python’s Matplotlib, you can adjust the layout and save or display the entire figure, including the external legend, by following these steps:

  1. Use the bbox_to_anchor parameter in the legend() function to control the position of the legend.
  2. Adjust the layout of the figure using the subplots_adjust() or tight_layout() function to make room for the legend.
  3. Save or display the entire figure, including the external legend.

Here’s an example demonstrating these steps:

import matplotlib.pyplot as plt

# Sample data for plotting
x = [1, 2, 3, 4, 5]
y1 = [1, 4, 9, 16, 25]
y2 = [1, 8, 27, 64, 125]

# Create the plot
plt.plot(x, y1, label='y = x^2')
plt.plot(x, y2, label='y = x^3')

# Set the legend's position outside the plot using bbox_to_anchor
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')

# Add axis labels and title
plt.xlabel('x')
plt.ylabel('y')
plt.title('Plotting with External Legend')

# Adjust the layout to accommodate the external legend
plt.subplots_adjust(right=0.7)

# Display the plot
plt.show()

In this example, you use the subplots_adjust() function to adjust the layout of the figure and make room for the legend.

You can also use the tight_layout() function, which automatically adjusts the layout based on the elements in the figure:

# ...

# Set the legend's position outside the plot using bbox_to_anchor
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')

# Add axis labels and title
plt.xlabel('x')
plt.ylabel('y')
plt.title('Plotting with External Legend')

# Automatically adjust the layout to accommodate the external legend
plt.tight_layout(rect=[0, 0, 0.7, 1])

# Display the plot
plt.show()

In this case, the rect parameter is a list [left, bottom, right, top], which specifies the normalized figure coordinates of the new bounding box for the subplots. Adjust the values in the rect parameter as needed to ensure the legend is not cut off.

Additional Legend Configurations

In this section, you’ll learn about some additional ways to customize the legend for your plots in Python using Matplotlib. This will help you create more meaningful and visually appealing visualizations. However, feel free to skip ahead to the following sections on plotting the legend outside the figure in Seaborn, Pandas, and Bokeh.

Python Set Legend Position

As you already know by now, you can place the legend at a specific position by combining the loc and bbox_to_anchor parameters. For example, you could place the legend outside the plot to the right by anchoring its left-center point at (1.05, 0.5):

import matplotlib.pyplot as plt
plt.legend(loc='center left', bbox_to_anchor=(1.05, 0.5))

This will place the legend slightly outside the right border of the plot, with its vertical center aligned with the plot center.

Python Set Legend Location

You can easily change the location of the legend by using the loc parameter. Matplotlib allows you to use predefined locations like 'upper right', 'lower left', etc., or use a specific coordinate by passing a tuple:

plt.legend(loc='upper left')

This will place the legend in the upper-left corner of the plot.📍

Python Set Legend Font Size

To change the font size of the legend, you can use the fontsize parameter. You can pass a numeric value or a string like 'small', 'medium', or 'large' to set the font size:

plt.legend(fontsize='large')

This will increase the font size of the legend text.😃

Python Set Legend Title

If you want to add a title to your legend, you can use the title parameter. Just pass a string as the argument to set the title:

plt.legend(title='My Legend Title')

This will add the title “My Legend Title” above your legend.👍

Python Set Legend Labels

If you’d like to customize the legend labels, you can pass the labels parameter. It takes a list of strings as the argument:

plt.legend(labels=['Label 1', 'Label 2'])

Your legend will now display the custom labels “Label 1” and “Label 2” for the corresponding plot elements.🏷

Python Set Legend Color

Changing the color of the legend text can be achieved by using the labelcolor parameter. Just pass a color string or a list of colors:

plt.legend(labelcolor='red')

This will change the color of the legend text to red.🔴
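All of these options can be combined in a single legend() call. Here’s a sketch (the labels and title are made up):

```python
import matplotlib.pyplot as plt

plt.plot([1, 2, 3], [1, 4, 9])
plt.plot([1, 2, 3], [1, 8, 27])

# Combine position, labels, title, font size, and text color in one call
plt.legend(labels=['Squares', 'Cubes'],
           title='My Legend Title',
           fontsize='large',
           labelcolor='red',
           loc='center left',
           bbox_to_anchor=(1.05, 0.5))

plt.subplots_adjust(right=0.7)
plt.show()
```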

Seaborn Put Legend Outside Plot

In this section, I’ll show you how to move the legend outside the plot using Seaborn. Let’s dive into various ways of setting the legend position, like below, above, left or right of the plot.👨‍💻

Sns Set Legend Outside Plot (General)

First, let’s talk about the general approach:

You can use the legend() function and the bbox_to_anchor parameter from Matplotlib to move the legend. You can combine this with the loc parameter to fine-tune the legend’s position.📈

Here’s a quick example:

import seaborn as sns
import matplotlib.pyplot as plt

# Sample data for plotting
tips = sns.load_dataset("tips")

# Create a Seaborn plot with axis variable 'ax'
fig, ax = plt.subplots()
sns_plot = sns.scatterplot(x="total_bill", y="tip", hue="day", data=tips, ax=ax)

# Set the legend's position outside the plot using bbox_to_anchor on 'ax'
ax.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.)

# Add axis labels and title
ax.set_xlabel('Total Bill')
ax.set_ylabel('Tip')
ax.set_title('Tips by Total Bill and Day')

# Display the plot
plt.show()

Sns Set Legend Below or Above Plot

Now let’s move the legend below or above the plot.💡 Simply adjust the bbox_to_anchor parameter accordingly. For example, to place the legend below the plot, you can use bbox_to_anchor=(0.5, -0.1):

ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.1))

And to place the legend above the plot, use bbox_to_anchor=(0.5, 1.1):

ax.legend(loc='lower center', bbox_to_anchor=(0.5, 1.1))

Sns Set Legend Left of Plot (Upper, Center, Lower Left)

Similarly, to position the legend on the left side of the plot, you can use the following code snippets:🎨

Upper left:

ax.legend(loc='upper right', bbox_to_anchor=(-0.15, 1))

Center left:

ax.legend(loc='center right', bbox_to_anchor=(-0.15, 0.5))

Lower left:

ax.legend(loc='lower right', bbox_to_anchor=(-0.15, 0))

Sns Set Legend Right of Plot (Upper, Center, Lower Right)

Lastly, to place the legend on the right side of the plot, adjust the bbox_to_anchor parameter like so:🚀

Upper right:

ax.legend(loc='upper left', bbox_to_anchor=(1.05, 1))

Center right:

ax.legend(loc='center left', bbox_to_anchor=(1.05, 0.5))

Lower right:

ax.legend(loc='lower left', bbox_to_anchor=(1.05, 0))
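If you’re on Seaborn 0.11.2 or newer, the sns.move_legend() helper repositions an existing legend in one call. A sketch with a small made-up dataset standing in for the tips data:

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Small made-up dataset (stand-in for sns.load_dataset("tips"))
df = pd.DataFrame({
    'total_bill': [10, 20, 30, 15, 25],
    'tip': [1, 3, 5, 2, 4],
    'day': ['Thur', 'Fri', 'Sat', 'Sat', 'Sun'],
})
ax = sns.scatterplot(x='total_bill', y='tip', hue='day', data=df)

# Move the legend Seaborn created outside the axes (Seaborn 0.11.2+)
sns.move_legend(ax, 'upper left', bbox_to_anchor=(1.05, 1))

plt.tight_layout()
plt.show()
```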

With these techniques, you can easily position the legend outside the plot using Seaborn! Happy plotting!👩‍🔬

Pandas Put Legend Outside Plot

When working with Pandas and Matplotlib, you often want to move the legend outside the plot 🖼 to improve readability. No worries! You can do this by taking advantage of the fact that .plot() returns a Matplotlib axis, enabling you to add .legend(bbox_to_anchor=(x, y)) to your code.

Here’s how:

First, import the necessary libraries like Pandas and Matplotlib:

import pandas as pd
import matplotlib.pyplot as plt

Create your DataFrame and plot it using the plot() function, like this:

data = {'A': [1, 2, 3], 'B': [4, 5, 6]}
df = pd.DataFrame(data)
ax = df.plot()

Next, you’ll want to place the legend outside the plot. Adjust the coordinates parameter, bbox_to_anchor, to position it according to your preferences. For example, if you want to place the legend to the right of the plot, use:

ax.legend(bbox_to_anchor=(1.05, 1))
plt.tight_layout()
plt.show()

This code will place the legend to the right of the plot, at the top 🚀.

Bokeh Put Legend Outside Plot

Placing a legend outside the plot in Bokeh can be easily done by using the add_layout method. You’ll need to create a Legend object manually and then add it to your plot using add_layout. This gives you the flexibility to position the legend anywhere on your plot, which is particularly helpful when you have many curves that might be obscured by an overlapping legend. 😊

Here’s a short example to help you move the legend outside the plot in Bokeh:

from bokeh.plotting import figure, show
from bokeh.models import Legend, LegendItem
from bokeh.io import output_notebook

output_notebook()

# Example data
x = list(range(10))
y1 = [i**2 for i in x]
y2 = [i**0.5 for i in x]

# Create a figure
p = figure(title="Example Plot")

# Add line glyphs and store their renderers
# (no legend_label here, so Bokeh doesn't also build a default in-plot legend)
r1 = p.line(x, y1, line_color="blue")
r2 = p.line(x, y2, line_color="red")

# Create a Legend object
legend = Legend(items=[
    LegendItem(label="Line 1", renderers=[r1]),
    LegendItem(label="Line 2", renderers=[r2]),
], location="top_left")

# Add the legend to the right of the plot area
p.add_layout(legend, 'right')

# Show plot
show(p)

To ensure your plot is as clear as possible, we recommend experimenting with different legend positions and layouts. By customizing your plot, you can maintain a clean and organized display of your curves even when there’s a lot of information to convey.

With Bokeh, you have full control over your plot’s appearance, so enjoy exploring different options to find the perfect fit for your data! 📈🎉


Keep Learning With Your Python Cheat Sheet! ✅

Feel free to join our tech and coding academy by downloading the free Finxter cheat sheets — it’s fun!


Python List of Tuples to DataFrame 🐼


To convert a list of tuples to a Pandas DataFrame, import the pandas library, call the DataFrame constructor, and pass the list of tuples as the data argument such as in pd.DataFrame(tuples_list, columns=['Number', 'Letter']).

Here’s a minimal example:

import pandas as pd
tuples_list = [(1, 'A'), (2, 'B'), (3, 'C')]
df = pd.DataFrame(tuples_list, columns=['Number', 'Letter'])

The output of the given code will be a Pandas DataFrame with two columns, 'Number' and 'Letter', as follows:

   Number Letter
0       1      A
1       2      B
2       3      C

Let’s dive deeper into this conversion technique so you can improve your skills and learn more about Pandas’ capabilities!

I’ll also show you how to convert a list of named tuples — and how to convert the DataFrame back to a list of tuples (key-value pairs). 👇

Converting a List of Tuples to DataFrame

First, let’s explore how to convert a list of tuples into a DataFrame using Python’s Pandas library. 🐍

Using DataFrame Constructor

The simplest way to convert a list of tuples into a DataFrame is by using the DataFrame() constructor provided by the Pandas library. This method is straightforward and can be achieved in just a few lines of code.

Here’s an example:

import pandas as pd
tuple_list = [('A', 1), ('B', 2), ('C', 3)]
df = pd.DataFrame(tuple_list)
print(df)

Executing this code will create a DataFrame with the following structure:

   0  1
0  A  1
1  B  2
2  C  3

Handling Data with Column Names

When converting a list of tuples to a DataFrame, it’s often useful to include column names to make the data more readable and understandable. To do this, you can add the columns parameter when calling the DataFrame() constructor.

Here’s an example:

import pandas as pd
tuple_list = [('A', 1), ('B', 2), ('C', 3)]
column_names = ['Letter', 'Number']
df = pd.DataFrame(tuple_list, columns=column_names)
print(df)

With the column names specified, the resulting DataFrame will look like this:

  Letter  Number
0      A       1
1      B       2
2      C       3

By using the DataFrame constructor and handling data with column names, you can easily convert a list of tuples into a DataFrame that is more organized and easier to understand. Keep working with these techniques, and soon enough, you’ll be a master of DataFrames! 😎

Examples and Use Cases

When working with Python, one often encounters data stored in lists of tuples. These data structures are lightweight and easy to use, but sometimes, it’s beneficial to convert them into a more structured format, such as a DataFrame 💡. In this section, we will explore some examples and use cases for converting a list of tuples into a DataFrame in Python, using the pandas library.

Here’s a simple example that demonstrates how to create a DataFrame from a list of tuples:

import pandas as pd

data = [('Peter', 18, 7), ('Riff', 15, 6), ('John', 17, 8), ('Michel', 18, 7), ('Sheli', 17, 5)]
df = pd.DataFrame(data, columns=['Name', 'Age', 'Score'])

In this example, we have a list of tuples representing student data, with each tuple containing a name, age, and score. By passing this list to the DataFrame constructor along with the column names, we can easily convert it into a DataFrame 🚀.

Consider another use case, where we need to filter and manipulate data before converting it into a DataFrame. For instance, let’s imagine we have a list of sales data, with each tuple representing an item, its price, and the number of sales:

data = [('Item A', 35, 20), ('Item B', 45, 15), ('Item C', 50, 30), ('Item D', 25, 10)]

In this case, we can use list comprehensions to filter items with sales greater than 20 and update the price by applying a 10% discount:

filtered_data = [(item, price * 0.9, sales) for item, price, sales in data if sales > 20]
df = pd.DataFrame(filtered_data, columns=['Item', 'Discounted Price', 'Sales'])

Now, our DataFrame contains only the filtered items with the discounted prices 🎉.

Python List of Named Tuples to DataFrame

Converting a list of named tuples to a DataFrame in Python can be done efficiently using the pandas library’s default functions as well.

💡 Info: A named tuple is a subclass of a tuple, which allows you to access elements by name, making it highly readable and practical for data manipulation. 🐍

First, create a list of named tuples using Python’s built-in collections module.

Let’s assume we have a list of students with their names, ages, and test scores:

from collections import namedtuple

Student = namedtuple('Student', ['name', 'age', 'score'])
students = [
    Student('Alice', 23, 89),
    Student('Bob', 22, 92),
    Student('Charlie', 24, 85)
]

With the list of named tuples prepared, proceed to import the pandas library and use the pd.DataFrame() method to convert the list to a DataFrame:

import pandas as pd

dataframe = pd.DataFrame(students, columns=Student._fields)

This process creates a DataFrame with columns corresponding to the named tuple fields. The final result appears as follows:

      name  age  score
0    Alice   23     89
1      Bob   22     92
2  Charlie   24     85

In summary, simply define the list with the named tuple structure, and then call the pd.DataFrame() method to create the DataFrame.

Create a List of Tuples From a DataFrame

When working with data in Python, you may need to convert a DataFrame back into a list of tuples.

To begin, import the library in your Python code using import pandas as pd.

Now, let’s say you have a DataFrame, and you want to extract its data as a list of tuples. The simplest approach is to use the itertuples() method, which is built into Pandas.

To use this method, call the itertuples() function on the DataFrame object, and then pass the output to the list() function to convert it into a list:

import pandas as pd

# Sample DataFrame
data = {'Name': ['John', 'Alice', 'Tim'], 'Age': [28, 22, 27]}
df = pd.DataFrame(data)

# Convert DataFrame to list of tuples
list_of_tuples = list(df.itertuples(index=False, name=None))
print(list_of_tuples)

This code will output:

 [('John', 28), ('Alice', 22), ('Tim', 27)] 

The itertuples() method has two optional parameters: index and name. Setting index=False excludes the DataFrame index from the tuples, and setting name=None returns regular tuples instead of named tuples.
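By way of comparison, here’s a small sketch of what the defaults produce; the named tuples are called 'Pandas' and include the index as their first field:

```python
import pandas as pd

df = pd.DataFrame({'Name': ['John', 'Alice'], 'Age': [28, 22]})

# Defaults: index=True includes the index as the first field,
# and name='Pandas' returns named tuples called 'Pandas'
rows = list(df.itertuples())
print(rows[0])       # Pandas(Index=0, Name='John', Age=28)
print(rows[0].Name)  # John
```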


So there you go! You now know how to convert a DataFrame into a list of tuples using the Pandas library in Python 🐍. To keep learning and improving your Python skills, feel free to download our cheat sheets and visit the recommended Pandas tutorial:

⭐ Recommended: 10 Minutes to Pandas (in 5 Minutes)


How to Transfer an ENS Domain: A Quick and Easy Guide


Prerequisites

Before beginning the process of transferring an ENS domain, ensure you have the following tools at hand. This includes 📝:

  • A current Ethereum wallet: This is the wallet containing the ENS domain that you want to transfer.
  • A new Ethereum wallet: This is the wallet to which you want to transfer the ENS domain.
  • Access to the ENS domain manager: This is where you’ll manage your ENS domain settings and perform the transfer (app.ens.domains).

Note that ENS domains work as non-fungible tokens (NFTs), so transferring ownership is similar to transferring other NFTs 🏷.

🛑 Attention: Simply sending the ENS domain NFT to the other wallet will only transfer the Registrant but not the Controller role of the domain. The controller manages ENS records and subdomains, while the registrant controls the controller address and registration transfers. Roughly speaking, the registrant is more powerful than the controller. But the controller is the address that gets the assets! For example, if the controller of domain aaa.eth has address 0x123 and the registrant has address 0x456 and you send funds to aaa.eth, the controller will get those funds but the registrant has ultimate control if they choose to.

Make sure you have control over both the current and new Ethereum wallets.

With these prerequisites in place, you are well-prepared to proceed with the transfer ✅. Just follow the steps in the subsequent sections and your ENS domain will soon be transferred to your new Ethereum wallet.

Connecting Wallet

Before transferring your ENS domain, connect your wallet to the platform you’ll be using for the process. This is typically done on the ENS Domain Manager or other compatible platforms like Coinbase Wallet and MetaMask.

First, visit the chosen platform’s website or open its app and log in using your wallet credentials. Most platforms support popular wallets like MetaMask or Coinbase Wallet. 😊

Once you’ve logged in, navigate to the settings or account section of the platform. Here, you should find an option to connect your wallet. Select the option and follow the on-screen instructions to complete your wallet connection. Some platforms may require additional verification steps, such as providing password authentication or approving the connection from your connected wallet. 🌐

After successfully connecting your wallet, you should have access to your ENS domain and be ready to transfer it to a new wallet. Connecting your wallet is a crucial step in transferring your ENS domain, as it ensures the proper ownership and control of your domain during the process. 👍

Finding ENS Domain

In order to transfer an ENS domain, the first step is finding the desired domain. Fortunately, there are user-friendly tools that make this process simple and efficient. 😊

The ENS Domain Manager application can be used for finding and managing domains. Simply visit the application and search for the desired domain to check its availability.

Once the domain is found, users can view additional details, such as the current owner, registration process, and more. The ENS domain system also offers compatibility with IPFS by including hashes in the domain records. This feature enables decentralized websites to be hosted seamlessly. 🌍

In order to complete domain-related actions smoothly, it is essential to have an Ethereum wallet connected, such as MetaMask. This connection allows for proper authentication and execution of transactions in the Ethereum Name Service ecosystem. 🦊

What is the Difference Between a Registrant and a Controller?

The distinction between a Registrant and a Controller in an Ethereum Name Service (ENS) allows for a more efficient management of domain names 😊. To understand their roles, let’s start with a brief explanation of each.

A Registrant is the person or entity to whom the domain is registered 📝. They are the ultimate owner of the domain and have complete control over it. The Registrant can transfer ownership to another account or a smart contract that manages subdomains, records, and more, while still being able to recover ownership if needed (ENS Documentation).

On the other hand, a Controller is someone who has been delegated with day-to-day control over the domain by the Registrant 🔑. This role can change the resolver and add or edit records. Some applications, like Fleek and OpenSea, set themselves as the Controller to update records on behalf of the Registrant (ENS App). A Controller’s role is similar to the operator of DNS servers for a domain name that is registered with a domain registrar like GoDaddy (Reddit ethdev).

The Registrant has the ultimate control over the name, while the Controller is responsible for handling everyday operations. Separating these roles makes it easier to build automated systems to update ENS efficiently and provide more flexibility in domain management 🚀.

Initiating Transfer Process

You can transfer the ENS domain to a new wallet in three steps:

Step 1: Connect your wallet that has both registrant and controller roles to the ENS website.

Step 2: Transfer the Registrant role by clicking the first Transfer button and confirming the transaction proposed by your wallet. This will cost some ETH fees because it is recorded in the blockchain. Make sure you have some ETH in your wallet for fees!

Step 3: Transfer the Controller role by clicking the second Transfer button and confirming the transaction.

Now you’re done! The new wallet address now has the ENS NFT and both the controller and registrant roles. ✅

Verifying Transfer

After completing the process of transferring your ENS domain, it’s important to verify that the transfer has been successfully executed 🔍. This section will guide you through the steps to make sure everything went smoothly.

First and foremost, check your new wallet and make sure it now displays the transferred ENS domain (as an NFT). If the domain is visible, it’s a clear indication that the transfer has been successful 👍. However, if the domain is not visible, do not panic; it might take a few minutes for the changes to reflect on the blockchain. Just give it some time ⏳.

In case you still cannot see the domain in the new wallet after a reasonable waiting period, head back to the ENS App and enter your ENS domain name in the search bar. This will provide you with detailed information, including the current Registrant and Controller addresses 🕵️‍♂️. Verify that these two addresses match the new wallet address that you intended to transfer the domain to. If they match, then the transfer has been successful, and you just need to wait a bit longer for your new wallet to reflect the changes.

You can also verify the transfer on an Ethereum blockchain explorer such as https://etherscan.io/.

Remember to keep track of any errors or irregularities you encounter during the process. In the rare case that you experience an issue that you cannot resolve, consider reaching out to the ENS support team or community forums for assistance 🤝. They’re always ready to help you with any problems related to ENS domain transfers.

Common Issues and Troubleshooting

When transferring an ENS domain, users may encounter some common issues. In this section, we’ll discuss a few of these problems and provide solutions to help make the process smoother. 😊

One common issue is forgetting to change the Controller of the domain. The controller is the account that manages day-to-day operations of the domain, such as creating subdomains and setting resolvers 💻. To resolve this, visit the ENS Domain Manager in your wallet and update the controller address.

Another issue that may arise is the inability to transfer the domain due to it being locked 🔒. This can occur if the domain is involved in a dispute or if it has been involved in illegal activities. To resolve this, contact the ENS administrators or an appropriate legal process for assistance.

Users might also face challenges in maintaining anonymity while transferring a domain. To maintain privacy, it is recommended to use an anonymous wallet or take additional steps like holding a fake sale on OpenSea and keeping the new address segregated from KYC services 🕵️‍♂️.

Lastly, technical difficulties may occur while using the ENS Domain Manager. In such cases, ensure that you are using an updated browser version 🌐, have a stable internet connection 📶, and consider trying a different browser or device if issues persist.


If you want to keep learning about exponential technologies, consider joining our free email academy (140,000 coders and tech enthusiasts). However, you should only join if you want to learn coding and tech!

✅ Recommended: Ethereum – Top 10 Articles to Get Started


How to Get a File from Your EC2 Instance to Your Local Computer


I am running AutoGPT on an EC2 instance and encountered the following problem:

⚔ Challenge: How to pull a file (e.g., an Image) from the EC2 instance to my local machine (Windows, Mac, Linux)?

In this quick article, I share my findings! If you’re short on time, you can use these commands to exchange files between your local machine and your EC2 instance:

💡 Example 1: Transfer a file from EC2 to your computer
scp -i /path/to/your/ec2key.pem user@instance-ip:/path/to/your/file /path/to/local/destination

💡 Example 2: Transfer a file from your computer to EC2
scp -i /path/to/your/ec2key.pem /path/to/local/file user@instance-ip:/path/to/remote/file

I’ll explain them in more detail below! 👇

Prerequisites

Before attempting to transfer a file from an EC2 instance to your local computer, there are a few essential prerequisites you need to have in place 😊:

  • EC2 key: The ec2key.pem file was created when you set up the EC2 instance. Make sure you have access to it.
  • EC2 username and IP: Find this information in the EC2 Console using the ‘Connect to Instance’ button. These are essential for establishing a secure connection to your instance.
  • Public DNS name: Obtain the public DNS for your instance from the Amazon EC2 console or by using the AWS CLI. Find it in the Public IPv4 DNS column of the Instances pane.
  • File path: Note the exact path to the file you want to transfer from the EC2 instance to your local machine. This information is necessary for initiating the file transfer process 📁.

With these prerequisites in place, you’ll be prepared to seamlessly transfer files between your EC2 instance and local computer! 👇

Connecting to EC2 Instance

Being able to connect to your Amazon EC2 instance is crucial for effectively accessing and transferring files between your local computer and the instance. In this section, you’ll learn where to find instance information and how to set up the SSH client for secure connection 😉.

Finding Instance Information

First, you need to gather essential information about your EC2 instance, including the instance ID, public DNS, and the key pair file you created when launching the EC2 instance. You can find this information in the Amazon EC2 console.

Simply navigate to the Instances section, then select the instance you want to connect to and click on the ‘Connect’ button.

You will find user-friendly instructions on how to access your instance with your preferred method, as well as the necessary details 🕵️‍♀️. Remember to keep your key pair file safe and secure, as it’s required for authentication.

Setting Up SSH Client

With the instance information in hand, you can now set up an SSH client to establish a secure connection to your EC2 instance.

OpenSSH and PuTTY are two popular SSH clients for Windows, while Mac and Linux users can use their built-in terminal applications for SSH connections 🖥.

If you’re using OpenSSH or the default terminal on Mac/Linux, you’ll need to use the following command, adjusting the path to your key pair file and the instance details as needed:

$ ssh -i /path/to/your-ec2-key.pem username@IP

💡 Example:

ssh -i '.\AWS Key Pair.pem' ec2-user@12.34.56.78

Windows users with PuTTY can follow these instructions to load their key pair file, enter the public DNS, and start an SSH session to the EC2 instance 🚀.

Now that you’re connected to your EC2 instance, you can navigate its file system and transfer files without a hitch 😊. In the next section, you’ll learn how to get a file from your EC2 instance to your local computer, step by step. Stay tuned!

Transferring Files

Transferring files from an EC2 instance to a local computer can be done with ease using either SCP (Secure Copy) commands or SFTP (SSH File Transfer Protocol) clients. Let’s explore both methods to see how they work. 😊

Using SCP (Secure Copy) Commands

SCP provides a simple method for transferring files between your local computer and an EC2 instance. To use SCP, ensure that you have the required information such as your EC2 key pair (.pem file), your EC2 instance’s IP address, and the file path of the file you wish to transfer. 💻🔐

Here’s an example of how to use the SCP command:

💡 Example 1: Transfer File from EC2 to your computer
scp -i /path/to/your/ec2key.pem user@instance-ip:/path/to/your/file /path/to/local/destination

💡 Example 2: Transfer File from your computer to EC2

scp -i /path/to/your/ec2key.pem /path/to/local/file user@instance-ip:/path/to/remote/file

This will download the file from your EC2 instance to your local computer securely. Just replace the paths and user information with your own. 👍

Let’s dive a bit deeper into these commands so you understand the different components:

Example 1: Transfer File from EC2 to your computer

scp -i /path/to/your/ec2key.pem user@instance-ip:/path/to/your/file /path/to/local/destination

This command transfers a file from an Amazon EC2 (Elastic Compute Cloud) instance to your local computer. Here’s a breakdown of the command components:

  • scp: The command itself, which stands for “secure copy”.
  • -i /path/to/your/ec2key.pem: The -i flag is followed by the path to your EC2 private key file (usually in .pem format), which is used for authentication when connecting to the EC2 instance.
  • user@instance-ip: This specifies the username and the IP address (or DNS name) of the EC2 instance you want to connect to.
  • /path/to/your/file: The path to the file you want to transfer from the EC2 instance.
  • /path/to/local/destination: The path to the location on your local computer where you want to save the transferred file.

Example 2: Transfer File from your computer to EC2

scp -i /path/to/your/ec2key.pem /path/to/local/file user@instance-ip:/path/to/remote/file

This command transfers a file from your local computer to an Amazon EC2 instance. The structure of this command is similar to the first example:

  • scp: The command itself, which stands for “secure copy”.
  • -i /path/to/your/ec2key.pem: The -i flag is followed by the path to your EC2 private key file (usually in .pem format), which is used for authentication when connecting to the EC2 instance.
  • /path/to/local/file: The path to the file on your local computer that you want to transfer.
  • user@instance-ip: This specifies the username and the IP address (or DNS name) of the EC2 instance you want to connect to.
  • /path/to/remote/file: The path to the location on the EC2 instance where you want to save the transferred file.
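If you prefer to script repeated transfers, the same scp call can be assembled from Python. This is just a sketch, not part of the original workflow: the key path, instance address, and file names below are hypothetical placeholders.

```python
import subprocess

def build_scp_command(key_path, source, destination):
    """Assemble the argv list for an scp transfer authenticated with a .pem key."""
    return ["scp", "-i", key_path, source, destination]

# Download direction: EC2 -> local (the remote side is written as user@host:/path)
cmd = build_scp_command(
    "~/keys/my-ec2-key.pem",                           # hypothetical key file
    "ec2-user@203.0.113.10:/home/ec2-user/image.png",  # hypothetical instance and file
    "./image.png",
)
print(cmd)
# subprocess.run(cmd, check=True)  # uncomment to actually run the transfer
```

Building the argv as a list (rather than one shell string) avoids quoting issues with paths that contain spaces.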

So far so good. Next, you’ll learn about an alternative way to transfer files to and from a remote EC2 instance: SFTP. However, I recommend the SCP approach shown above.

Using SFTP (SSH File Transfer Protocol) Clients

SFTP allows for file transfer between a local computer and an EC2 instance through an intuitive graphical user interface. Popular SFTP clients include FileZilla, WinSCP, and Cyberduck. These clients make it simple to drag and drop files from your local machine to your remote server. 🖱🔄

To connect to your EC2 instance, you’ll need the following information:

  • Server: your EC2 instance’s IP address 🌐
  • Username: your EC2 username (usually “ec2-user”) 👤
  • Password: leave this field empty, and use your key pair file instead 🔑

Simply input the required information into your SFTP client, and you’ll be able to transfer files between your local computer and EC2 instance in a matter of seconds! 🚀

Common Issues and Troubleshooting Tips

When attempting to transfer files from an EC2 instance to a local computer, it’s not uncommon to come across some hurdles along the way. In this section, we will discuss some common issues users might face and provide troubleshooting tips to help you overcome them. 😊

One common issue that users might encounter is having difficulty connecting to the EC2 instance. To address this issue, ensure that your instance is running and has passed its status checks. Make sure you’re using the correct key, username, and IP address you obtained from the EC2 console. If the problem persists, check the instance’s security group rules, and ensure that the necessary ports are open for communication. 🔧

Another problem that may arise is slow or interrupted file transfers. To solve this, ensure that your internet connection is stable and consider using a file transfer tool like scp or FileZilla that supports resuming interrupted transfers. Additionally, compressing the files before transferring can help speed up the process. 🚀

If you’re facing issues with file permissions while transferring files from an EC2 instance, make sure you have the necessary read and write permissions on both the local and remote systems. You might need to adjust the permissions on your EC2 instance or your local machine to successfully transfer the files. 🔐

Lastly, if you’re troubleshooting EC2 Windows instance issues, you can use the EC2Rescue tool to help diagnose and fix common issues. This tool can be run using different methods, including the GUI, the command line interface (CLI), or the AWSSupport-RunEC2RescueForWindowsTool Systems Manager Run Command. 😃


Keep SSH Session Alive


Problem Formulation

I just tried to run AutoGPT on an EC2 instance using SSH from my local Windows machine. But here’s the annoying part: the connection always closes and AutoGPT can only work in small ~10 minute increments. When I return to my machine, I need to SSH into my instance and restart the program.

⚔ Problem Formulation: Your SSH connection to a remote server works properly at your workplace, but it freezes after 10-15 minutes when connecting from home. You don’t receive any error messages, but you notice zombie login users that need to be killed manually.

Quick and Easy Solution (Client-Side)

To prevent an SSH connection from closing when the client goes silent, you can configure the client to send a keep-alive signal to the server periodically.

Create a configuration file in your home directory at $HOME/.ssh/config, and set its permissions to 600 using chmod 600 ~/.ssh/config after file creation. To send a keep-alive signal every 240 seconds, for example, add the following lines to the configuration file:

Host *
    ServerAliveInterval 240

You can get this done with the following two commands on Linux:

printf '%s\n' 'Host *' ' ServerAliveInterval 240' > ~/.ssh/config
chmod 600 ~/.ssh/config

You can then verify the file content with the command cat ~/.ssh/config.
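The same two steps can also be scripted in Python. This is a minimal sketch under the assumption that you want the standard config format; it demonstrates on a throwaway path so you can test it safely before pointing it at ~/.ssh/config.

```python
import os
import tempfile
from pathlib import Path

def write_keepalive_config(path: Path, interval: int = 240) -> None:
    """Write a minimal SSH client config that sends a keep-alive every `interval` seconds."""
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(f"Host *\n    ServerAliveInterval {interval}\n")
    os.chmod(path, 0o600)  # SSH rejects client config files writable by group or others

# Demonstrate on a throwaway path; real usage would target Path.home() / ".ssh" / "config"
demo = Path(tempfile.mkdtemp()) / "config"
write_keepalive_config(demo)
print(demo.read_text())
```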

Alternative Solution: Server Side

In some cases, you have access to the server’s SSH settings. In that case, add an entry ClientAliveInterval 60 to the file /etc/ssh/sshd_config. I used the Vim editor in the terminal to accomplish this.

Do you want to keep improving your coding and tech skills? Feel free to check out our Python and tech academy by downloading your free cheat sheets for starters:


How I Created a Translation and Counter App using Django


In this two-part tutorial series, we will learn how to create a translation app using Django. 🤠 As an extra feature, we are going to demonstrate two ways to create a word counter that counts the number of words written.

Translating text into a language one can understand is no longer a new development. As much business is conducted internationally, parties need to communicate in a language the other side understands.

Advancement in technology has removed the barrier to communication. With an app such as Google Translate, you can get the meaning of text written in another language.

As part of learning Django through building projects, we are going to implement such a feature.

What We Are Going to Learn

As earlier stated, this is a two-part series project tutorial. The first part focuses on building a translation app. In the second part, we are going to learn how to add another feature, a word counter. I’m going to show you how to go about building it using both JavaScript and Python.

By the end of this tutorial, you are not only going to learn how Django interacts with web pages, but you are also going to learn how to manipulate the DOM with JavaScript. Thus, even if you have little or no knowledge of HTML, CSS, and JavaScript, you can combine your knowledge of Python with my explanation to understand what we are doing.

Getting Started

Although this is a beginner Django project, I expect you to know the steps involved in setting up Django, so I don’t have to repeat the setup in every Django project tutorial. However, if this is your first time, check this article to learn how to install Django.

Create your Django project in the current folder with the name translator. Then use app as the name of the Django app. After installation, go to the settings.py file and register the app.

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    # custom app
    'app',
]

Creating the View Function

Next, go to the views.py file and add the following code to it.

from django.shortcuts import render
from translate import Translator

# Create your views here.
def index(request):
    if request.method == 'POST':
        text = request.POST['translate']
        lang = request.POST['language']
        translator = Translator(to_lang=lang)
        translation = translator.translate(text)
        context = {
            'translation': translation,
        }
        return render(request, 'index.html', context)
    return render(request, 'index.html')

This is a very simple view function. We get the input from the HTML form provided it is a POST request. Then, we call the Translator class of the translate module to translate the given text into the selected language. The Translator class takes an argument, to_lang, which indicates the language you want your text to be translated into.

We finally use the render function to display the translated text on the index.html web page. We could instead return an HttpResponse(), but then the result would not be rendered on the index.html web page.

But if the request method is not POST, we simply return the web page containing an empty form.

This part is as simple as it should be. But in future tutorials, we will be dealing with a more complicated view function.

The Templates

Next is the templates folder. Create the folder and add it to the settings.py file.

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [os.path.join(BASE_DIR, 'templates')],  # add this
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]

Make sure os is imported at the top of settings.py so that os.path.join() works.

Inside the folder, we create the index.html file.

<!DOCTYPE html>
{% load static %}
<html>
  <head>
    <title>Django Translate</title>
  </head>
  <body>
    <h1>Django Translate App</h1>
    <form method="post">
      {% csrf_token %}
      <label for="translate">Enter Text:</label>
      <br/>
      <textarea id="translate" name="translate" placeholder="Enter your text here" rows="10" cols="70"></textarea>
      <br>
      <select required name="language">
        <option value="French">French</option>
        <option value="Spanish">Spanish</option>
        <option value="German">German</option>
        <option value="Italian">Italian</option>
      </select>
      <br>
      <br>
      <button id="btn" type="submit">Translate</button>
    </form>
    <br>
    <textarea id="text" placeholder="Your text will appear here" rows="10" cols="70">{{ translation }}</textarea>
    <script src="{% static 'js/script.js' %}"></script>
  </body>
</html>

You can see that the form element has its method attribute set to POST. Without this, our view function will not work as expected. The csrf_token is a mandatory security requirement. We use the select element to list the languages. You can add as many as you want.

Notice that the textarea and the select elements each have a name attribute, and if you check the view function, you will see that it uses the same names found in the HTML file. This is how we retrieve data from the web page.

Finally, we register the URLs of the app both at the project level and at the app level. For the project level, go to the translator/urls.py file and add this:

from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path('admin/', admin.site.urls),
    path('', include('app.urls')),
]

For the app level, create an app/urls.py file and add this:

from django.urls import path
from .views import index

urlpatterns = [
    path('', index, name='home'),
]

Check the article I mentioned to see a brief explanation of the above code. That’s it. We are good to go. Fire up the local server to test what we have done.

Notice how we make the translated text appear inside the box. We accomplished this using the Django templating language. Remember the context variable passed to the render() function? It is a dictionary object whose key, translation, is the name we reference inside the second textarea element as {{ translation }}. Thus, we are telling Django to display the value of that key on the web page.

And what is the value? The translated text! That is how Django dynamically writes to a web page. Feel free to play with any language of your choice provided the language is included in the translate library.

Conclusion

This is how we come to the end of the first part of this tutorial series. The full code for the project series can be found here. In the second part, we will learn two ways we can count the number of words written in the textarea element:

💡 Recommended: How I Created a Translation and Counter App using Django (2/2)


Dictionary of Lists to DataFrame – Python Conversion


Problem Formulation

Working with Python often involves processing complex data structures such as dictionaries and lists. In many instances, it becomes necessary to convert a dictionary of lists into a more convenient and structured format like a Pandas DataFrame 🐼.

DataFrames offer numerous benefits, including easier data handling and analysis 🔍, as well as an array of built-in functions that make data manipulation much more straightforward.

In this context, the potential challenge arises in figuring out how to correctly convert a dictionary with lists as its values into a DataFrame. Various methods can be employed to achieve this goal, but it is crucial to understand the appropriate approach in each situation to ensure accurate and reliable data representation 😇.

Method 1: Using DataFrame.from_dict()

In this method, we will use the DataFrame.from_dict() function provided by the pandas library to convert a Python dictionary of lists to a DataFrame. This function is quite versatile, as it can construct a DataFrame from a dictionary of array-like values or dicts. 📘

To begin with, let’s import the necessary library:

import pandas as pd

Next, create a dictionary with lists as values. For example, let’s consider the following dictionary:

data = {
    'Name': ['Sam', 'Alex', 'Jamie'],
    'Age': [29, 28, 24],
    'Country': ['USA', 'UK', 'Canada']
}

Now, use the from_dict() method to create a DataFrame from the dictionary. The process is quite simple, all you have to do is call the method with the dictionary as its argument. 💡

df = pd.DataFrame.from_dict(data)

And there you have it, a DataFrame created from a dictionary of lists! The resulting DataFrame will look like this:

    Name  Age Country
0    Sam   29     USA
1   Alex   28      UK
2  Jamie   24  Canada

The benefits of using this method are its simplicity and compatibility with different types of dictionary data. However, always remember to maintain a consistent length for the lists within the dictionary to avoid any issues. 🚀
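As a hedged aside that goes beyond the original text: from_dict() also accepts an orient='index' argument, which treats the dictionary keys as row labels instead of column names. The data below is invented for illustration.

```python
import pandas as pd

# With orient='index', the dictionary keys become row labels instead of column names
data = {'row1': [1, 2, 3], 'row2': [4, 5, 6]}
df = pd.DataFrame.from_dict(data, orient='index', columns=['a', 'b', 'c'])
print(df)
```

The columns parameter is only valid together with orient='index'; with the default orient='columns' the keys name the columns, as in the main example above.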

Method 2: Using pd.Series() with DataFrame

In this method, we will be using the 🐼 Pandas library’s pd.Series data structure inside the DataFrame method. It is a useful approach that can help you convert dictionaries with lists into a DataFrame format quickly and efficiently. 😃

To implement this method, you can use Python’s dictionary comprehension and the items() method, as shown in the syntax below:

pd.DataFrame({key: pd.Series(val) for key, val in dictionary.items()})

Here, dictionary.items() fetches key-value pairs from the dictionary, and pd.Series(val) creates a series of values from these pairs. The result is a well-structured Pandas DataFrame! 🎉

Let’s take a look 👀 at an example:

import pandas as pd

data = {
    "Name": ["Alice", "Bob", "Claire"],
    "Age": [25, 30, 35],
    "City": ["London", "New York", "Sydney"],
}

df = pd.DataFrame({key: pd.Series(val) for key, val in data.items()})
print(df)

Executing this code will generate the following DataFrame:

     Name  Age      City
0   Alice   25    London
1     Bob   30  New York
2  Claire   35    Sydney

As you can see, using the pd.Series data structure with the DataFrame method provides a clean and effective way to transform your dictionaries with lists into Pandas DataFrames! 🐼♥
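The main practical advantage of this method over Method 1, an observation not spelled out in the text above, is that the lists may have different lengths: wrapping each in a pd.Series pads the shorter columns with NaN instead of raising an error. Example data is made up.

```python
import pandas as pd

# Lists of unequal length: wrapping each in a pd.Series pads the shorter column with NaN
ragged = {'a': [1, 2, 3], 'b': [4, 5]}
df = pd.DataFrame({key: pd.Series(val) for key, val in ragged.items()})
print(df)
```

Passing the same ragged dictionary directly to pd.DataFrame(ragged) would raise a ValueError about mismatched array lengths.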

💡 Recommended: Python Dictionary Comprehension: A Powerful One-Liner Tutorial


Method 3: json_normalize()

In this method, we will use the pd.json_normalize function to convert a Python dict of lists to a Pandas DataFrame. This function is particularly useful for handling semi-structured nested JSON structures, as it can flatten them into flat tables.

To begin, you should first import the Pandas library using the following snippet:

import pandas as pd

Next, create your Python dict of lists like this example:

data = {
    'manoj': ["java", "php", "python"],
    'rajesh': ["c", "c++", "java"],
    'ravi': ["r", "python", "javascript"]
}

With your data ready, you can now use the json_normalize function to convert the dict of lists into a DataFrame.

Here’s how:

df = pd.json_normalize(data, record_path='manoj', meta=['rajesh', 'ravi'])

And that’s it! 🎉 You now have a DataFrame created from the dict of lists. Don’t forget to preview your DataFrame using print(df) or df.head() to ensure that the data has been converted correctly. 😊
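For completeness, here is a sketch of the case json_normalize() was really designed for: flattening nested records into flat, dotted column names. The records below are invented for illustration.

```python
import pandas as pd

records = [
    {"name": "Ann", "address": {"city": "Berlin", "zip": "10115"}},
    {"name": "Ben", "address": {"city": "Paris", "zip": "75001"}},
]

# Nested keys are flattened into dotted column names: address.city, address.zip
df = pd.json_normalize(records)
print(df.columns.tolist())
```

For a flat dictionary of equal-length lists like the one above, the simpler pd.DataFrame(data) is usually all you need; json_normalize() earns its keep once the data is nested.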

Method 4: Utilizing the DataFrame Constructor with a Dictionary Comprehension

In this method, we create a pandas DataFrame from a dictionary of lists using the DataFrame constructor and a dictionary comprehension. This approach is quite simple and potentially more efficient for larger datasets. 💡

First, we need to import the pandas library:

import pandas as pd

Next, let’s create a sample dictionary of lists containing student data:

data = {
    'Name': ['Alice', 'Bob', 'Charlie'],
    'Math': [80, 70, 90],
    'History': [95, 85, 78]
}

Now, we will use the DataFrame constructor and a dictionary comprehension to convert the dictionary of lists into a pandas DataFrame:

df = pd.DataFrame({key: pd.Series(value) for key, value in data.items()}) 

Here’s what’s happening in the code above:

  • The dictionary of lists is iterated using items() method to obtain the key-value pairs🔑
  • Each value is converted to a pandas Series using pd.Series() function📚
  • A DataFrame is created using the pd.DataFrame() constructor to combine the converted series📄

Once the DataFrame is constructed, it will look something like this:

      Name  Math  History
0    Alice    80       95
1      Bob    70       85
2  Charlie    90       78

Method 4 provides a concise and versatile way to transform a dictionary of lists into a DataFrame, making it convenient for data manipulation and analysis.📈 Enjoy working with it!✨

Summary

In this article, we explored the process of converting a Python dictionary with lists as values into a pandas DataFrame. Various methods have been discussed, such as pd.DataFrame.from_dict(), wrapping each list in a pd.Series, and pd.json_normalize(). 🌟

It’s important to choose a method that fits the specific structure and format of your data. Sometimes, you might need to preprocess the data into separate lists before creating a DataFrame. An example of doing this can be found here. 📊

Throughout the article, we provided examples and detailed explanations on how to work with complex data structures, including lists of lists and lists of dictionaries. 🧭 Remember to keep the code clean and efficient for better readability!

💡 Recommended: How to Create a DataFrame From Lists?

With the knowledge gained, you’ll be better equipped to handle Python dictionaries containing lists, and successfully transform them into pandas DataFrames for a wide range of data analysis tasks. 💪🐍 Happy coding!


Pandas Boolean Indexing


Boolean indexing in Pandas filters DataFrame rows using conditions. Example: df[df['column'] > 5] returns rows where 'column' values exceed 5. Efficiently manage and manipulate data with this method.

Here’s an easy example:

import pandas as pd

# Create a sample DataFrame
data = {'Name': ['Alice', 'Bob', 'Charlie', 'David'],
        'Age': [25, 30, 35, 40],
        'City': ['New York', 'San Francisco', 'Los Angeles', 'Seattle']}
df = pd.DataFrame(data)

# Perform boolean indexing to filter rows with age greater than 30
age_filter = df['Age'] > 30
filtered_df = df[age_filter]

# Display the filtered DataFrame
print(filtered_df)

This code creates a DataFrame with data for four people, then uses boolean indexing to keep only the rows with an age greater than 30. The filtered DataFrame is then printed.

Let’s dive slowly into Boolean Indexing in Pandas: 👇


Understanding Boolean Indexing

Boolean indexing is a powerful feature in pandas that allows filtering and selecting data from DataFrames using a boolean vector. It’s particularly effective when applying complex filtering rules to large datasets 😃.

To use boolean indexing, you need a DataFrame along with a boolean vector that matches the DataFrame’s index or columns.

To start, there are different ways to apply boolean indexing in pandas. One can access a DataFrame with a boolean index, apply a boolean mask, or filter data based on column or index values 🧐.

For instance, boolean indexing can filter entries in a dataset with specific criteria, such as data points above or below a certain threshold or specific ranges 🔎.

Working with boolean indexes is pretty straightforward. First, create a condition based on which data will be selected. This condition will generate a boolean array, which will then be used in conjunction with the pandas DataFrame to select only the desired data 🎯.

Here’s a table with examples of boolean indexing in pandas:

  • df[df['column'] > 10]: Select only rows where 'column' has a value greater than 10.
  • df[(df['column1'] == 'A') & (df['column2'] > 5)]: Select rows where 'column1' equals 'A' and 'column2' has a value greater than 5.
  • df[~(df['column'] == 'B')]: Select rows where 'column' is not equal to 'B'.

How Boolean Indexing Works in Pandas

Boolean indexing in Pandas is a technique used to filter data based on actual values in the DataFrame, rather than row/column labels or integer locations. This allows for a more intuitive and efficient way to select subsets of data based on specific conditions. 🙂 Let’s dive into the steps on how boolean indexing works in Pandas:

Creating Boolean Arrays

Before applying boolean indexing, you first need to create a boolean array. This array contains True and False values corresponding to whether a specific condition is met in the DataFrame. ⚡

Consider the following example:

import pandas as pd

data = {'A': [1, 2, 3, 4], 'B': [5, 6, 7, 8]}
df = pd.DataFrame(data)
bool_array = df['A'] > 2

In this example, we create a boolean array by checking which elements in column 'A' are greater than 2. The resulting boolean array would be:

[False, False, True, True]

Applying Boolean Arrays to DataFrames

Once you have a boolean array, you can use it to filter the DataFrame based on the conditions you set. ✨ To do so, simply pass the boolean array as an index to the DataFrame.

Let’s apply the boolean array we created in the previous step:

filtered_df = df[bool_array]

This will produce a new DataFrame containing only the rows where the condition was met, in this case the rows where column 'A' is greater than 2:

   A  B
2  3  7
3  4  8

To provide more examples, let’s consider the following table:

Boolean condition and the resulting (A, B) rows:

  • df['A'] >= 3: rows (3, 7) and (4, 8)
  • df['B'] < 8: rows (1, 5), (2, 6), and (3, 7)
  • (df['A'] == 1) | (df['B'] == 8): rows (1, 5) and (4, 8)
  • (df['A'] != 1) & (df['B'] != 7): rows (2, 6) and (4, 8)
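You can verify any of these conditions yourself; as a quick sanity check (not part of the original table), here is the OR condition run on the same sample data:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [5, 6, 7, 8]})

# Rows where A equals 1 or B equals 8: the first and the last row
result = df[(df['A'] == 1) | (df['B'] == 8)]
print(result)
```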

Filtering Data with Boolean Indexing

Boolean indexing is also a powerful technique to filter data in Pandas DataFrames based on the actual values of the data, rather than row or column labels 🐼. In this section, you’ll learn how to harness the power of boolean indexing to filter your data efficiently and effectively.

Selecting Rows Based on Condition

To select rows based on a condition, you can create a boolean mask by applying a logical condition to a column or dataframe. Then, use this mask to index your DataFrame and extract the rows that meet your condition 🌟. For example:

import pandas as pd

data = {'A': [1, 2, 3, 4], 'B': [5, 6, 7, 8]}
df = pd.DataFrame(data)

mask = df['A'] > 2
filtered_data = df[mask]

In this example, the mask is a boolean Series with True values for rows with A > 2, and filtered_data is the filtered DataFrame containing only the rows that meet the condition.

Combining Conditions with Logical Operators

For more complex filtering, you can combine multiple conditions using logical operators like & (AND), | (OR), and ~ (NOT). Just remember to wrap each condition in parentheses:

😊 Example:

mask2 = (df['A'] > 2) & (df['B'] < 8)
filtered_data2 = df[mask2]

This filters the data for rows where both A > 2 and B < 8.

Using Query Method for Complex Filtering

For even more complex filtering conditions, you can use the query method. This method allows you to write your conditions using column names, making it more readable and intuitive:

🚀 Example:

filtered_data3 = df.query('A > 2 and B < 8')

This achieves the same result as the mask2 example, but with a more readable syntax.

Pandas Boolean Indexing Multiple Conditions

Here is a table summarizing the examples of boolean indexing with multiple conditions in Pandas:

  • df[(df['A'] > 2) & (df['B'] < 8)]: Rows where A > 2 and B < 8
  • df[(df['A'] > 2) | (df['B'] < 8)]: Rows where A > 2 or B < 8
  • df[~(df['A'] > 2)]: Rows where A is not greater than 2
  • df.query('A > 2 and B < 8'): Rows where A > 2 and B < 8, using the query method

With these techniques at your disposal, you’ll be able to use boolean indexing effectively to filter your Pandas DataFrames, whether you’re working with simple or complex conditions 🎉.

Modifying Data Using Boolean Indexing

Boolean indexing is also great to modify data within a DataFrame or Series by specifying conditions that return a boolean array. These boolean arrays are then used to index the original DataFrame or Series, making it easy to modify selected rows or columns based on specific criteria. 🐼

In essence, it allows you to manipulate and clean data according to various conditions. It’s perfect for tasks like replacing missing or erroneous values, transforming data, or selecting specific data based on the criteria you set. This process is efficient and versatile, allowing for greater control when working with large datasets.🧪

Now, let’s take a look at some examples of Boolean indexing in pandas to get a better understanding of how it works. The table below demonstrates various ways of modifying data using Boolean indexing:

  • Selecting rows that fulfill a condition: df[df['column_name'] > value]
  • Modifying values based on a condition: df.loc[df['column_name'] > value, 'column_name'] = new_value
  • Replacing values that fail a condition: df['column_name'].where(df['column_name'] > value, alternative_value)
  • Performing a calculation on values meeting a condition: df.loc[df['column_name'] > value, 'column_name'] *= multiplier
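To make the second row of the table concrete, here is a small runnable sketch; the column name and threshold are invented for illustration:

```python
import pandas as pd

df = pd.DataFrame({'score': [40, 75, 90]})

# Cap every score above 80 at 80 using .loc with a boolean mask
df.loc[df['score'] > 80, 'score'] = 80
print(df['score'].tolist())  # [40, 75, 80]
```

Using .loc for the assignment (rather than chained indexing like df['score'][mask] = ...) modifies the DataFrame in place without triggering a SettingWithCopyWarning.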

These examples showcase some basic boolean indexing operations in pandas, but it’s worth noting that more complex operations can be achieved using boolean indexing too. The key takeaway is that this powerful technique can quickly and efficiently modify your data, making your data processing tasks simpler and more effective.👩‍💻👨‍💻

So, next time you’re working with data in pandas, don’t forget to employ this nifty technique to make your data wrangling tasks more manageable and efficient. Happy data cleaning! 🧹✨

Advanced Applications

Boolean indexing in Pandas has a wide range of advanced applications, allowing users to harness its power in complex scenarios. In this section, we will dive into a few of these applications, exploring their usefulness and demonstrating practical examples. 😃

Using Indexers with Boolean Indexing

Combining indexers like iloc and loc with boolean indexing enhances the ability to select specific data subsets. Utilizing indexers in conjunction with boolean indexing allows you to specify both rows and columns, maintaining that sweet balance of performance and flexibility.👍
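A minimal sketch of this combination, with invented example data: the boolean mask picks the rows while a column list picks the columns, all in one .loc call.

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})

# The boolean mask selects rows, the list selects columns, in a single .loc call
sub = df.loc[df['a'] > 1, ['b']]
print(sub['b'].tolist())  # [5, 6]
```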

Handling Missing Data with Boolean Indexing

Dealing with missing data can be quite challenging. However, boolean indexing in Pandas comes to the rescue.💪 With boolean indexing, users can quickly filter out missing data by applying boolean masks. This makes data cleaning and preprocessing a breeze. No more headaches navigating through messy data! 🚀
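For instance, notna() builds exactly such a mask; this is a short sketch with invented data:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({'a': [1.0, np.nan, 3.0]})

# notna() yields a boolean mask that is False for missing entries
clean = df[df['a'].notna()]
print(clean['a'].tolist())  # [1.0, 3.0]
```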

Pandas Boolean Indexing MultiIndex

MultiIndex, also known as a hierarchical index, adds another layer of depth to boolean indexing. 🌟 By incorporating boolean indexing with MultiIndex DataFrames, you can access and manipulate data across multiple levels, enhancing your data exploration capabilities.

Here’s an example demonstrating the use of a MultiIndex in combination with boolean indexing in Pandas:

import pandas as pd

# Create a sample DataFrame with a MultiIndex
index = pd.MultiIndex.from_tuples([('A', 1), ('A', 2), ('B', 1), ('B', 2)],
                                  names=['Category', 'Subcategory'])
data = {'Value': [10, 15, 20, 25]}
df = pd.DataFrame(data, index=index)

# Filter rows where 'Category' is 'A' and 'Value' is greater than 12
category_filter = df.index.get_level_values('Category') == 'A'
value_filter = df['Value'] > 12
filtered_df = df[category_filter & value_filter]

# Display the filtered DataFrame
print(filtered_df)

This code creates a DataFrame with a MultiIndex consisting of two levels: 'Category' and 'Subcategory'. Then, it uses boolean indexing to filter the rows where the 'Category' is 'A' and the 'Value' column is greater than 12. The filtered DataFrame is then printed.

The output of the provided code is:

                      Value
Category Subcategory       
A        2               15

The filtered DataFrame contains only one row where the 'Category' is 'A', the 'Subcategory' is 2, and the 'Value' is 15, as this row meets both conditions specified in the boolean indexing.

Talk about leveling up your data analysis game! 🎮

Pandas Boolean Indexing DateTime

Time series data often requires efficient filtering and slicing. With boolean indexing applied to DateTime data, users can effortlessly filter their data based on specific date ranges, time periods, or even individual timestamps. ⏲ You’ll never lose track of time with this powerful feature! 📅
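A small sketch of date-range filtering (timestamps and values invented for illustration); comparing a datetime column against date strings works directly:

```python
import pandas as pd

df = pd.DataFrame({
    'timestamp': pd.to_datetime(['2023-01-15', '2023-03-02',
                                 '2023-07-09', '2024-01-01']),
    'value': [10, 20, 30, 40],
})

# Keep rows in the first half of 2023
in_range = df[(df['timestamp'] >= '2023-01-01') & (df['timestamp'] < '2023-07-01')]
print(in_range)
```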

Examples of Boolean Indexing in Pandas

Below is a table showcasing a few examples of boolean indexing in action:

Scenario Example Code
Filtering rows where column A is greater than 5 data[data['A'] > 5]
Selecting rows where column B is equal to ‘x’ data[data['B'] == 'x']
Combining multiple conditions with logical AND data[(data['A'] > 5) & (data['B'] == 'x')]
Filtering rows with missing data in column C data[data['C'].notnull()]
Selecting data within a specific date range data[(data['DateTime'] >= '2023-01-01') & (data['DateTime'] <= '2023-12-31')]

Now you have a better understanding of advanced applications with boolean indexing in Pandas! Happy data wrangling! 🎉

Pandas Boolean Indexing “OR”

In Pandas, Boolean indexing is a powerful way to filter and manipulate data using logical conditions 🧠. The “OR” operator, denoted by the symbol “|“, allows users to select rows that satisfy at least one of the specified conditions 🎯. In this section, let’s explore how the “OR” operator works with Boolean indexing in detail, along with some examples 👨‍💻.

With Pandas, users can combine multiple logical conditions using the “OR” operator by simply separating them with a “|“. This can be especially useful when working on complex data filtering tasks 🧪. Each condition must be enclosed in parentheses to maintain the correct order of operations and group them correctly. Just remember to use the proper Boolean operator carefully! 😉

For a better understanding, let’s take a look at the following example on how the “OR” operator works with Boolean indexing in Pandas:

import pandas as pd

# Sample DataFrame
data = {'A': [1, 2, 3, 4, 5], 'B': [6, 7, 8, 9, 10]}
df = pd.DataFrame(data)

# Boolean indexing using the "OR" operator
result = df[(df['A'] > 3) | (df['B'] <= 7)]

In this example, we have a DataFrame with two columns ‘A’ and ‘B’, and the goal is to filter rows where the value of ‘A’ is greater than 3 or the value of ‘B’ is less than or equal to 7. The resulting DataFrame will include rows that meet either condition 👌.

Column A Column B Condition
1 6 True
2 7 True
3 8 False
4 9 True
5 10 True

Pandas Boolean Indexing “NOT”

🐼 Pandas boolean indexing is a powerful tool used for selecting subsets of data based on the actual values of the data in a DataFrame, which can make filtering data more intuitive 🧠. In this section, we’ll focus on the “NOT” operation and its usage in pandas boolean indexing.

The “NOT” operation is primarily used to reverse the selection made by the given condition, meaning if the condition is initially true, it will turn false, and vice versa. In pandas, the “not” operation can be performed using the tilde operator (~) 😲. It can be particularly helpful when filtering the data that does not meet specific criteria.

Let’s consider some examples to understand better how “NOT” operation works in pandas boolean indexing:

Example Description
~df['column_name'].isnull() Selects rows where ‘column_name‘ is NOT null
~(df['column_name'] > 100) Selects rows where ‘column_name‘ is NOT greater than 100
~df['column_name'].str.contains('value') Selects rows where ‘column_name‘ does NOT contain the string 'value'

👆 In these examples, the tilde operator (~) is utilized to perform the “NOT” operation, which helps to refine the selection criteria to better suit our needs. We can also combine the “NOT” operation with other boolean indexing operations like “AND” (&) and “OR” (|) to create more complex filtering conditions 🎉.
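Here’s a runnable sketch combining “NOT” with “AND”, using invented product data for illustration:

```python
import pandas as pd

df = pd.DataFrame({'product': ['apple pie', 'banana', 'apple juice', 'cherry'],
                   'price': [120, 80, 95, 150]})

# Rows whose product does NOT contain 'apple'
no_apple = df[~df['product'].str.contains('apple')]

# Combine NOT with AND: not expensive AND not an apple product
cheap_no_apple = df[~(df['price'] > 100) & ~df['product'].str.contains('apple')]
print(no_apple)
```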

Remember, when working with pandas boolean indexing, it’s essential to use parentheses to group conditions properly, as it ensures the correct precedence of operations and avoids ambiguity when combining them 🤓.

Boolean indexing in pandas provides an efficient and easy way to filter your data based on specific conditions, and mastering the different operations, such as “NOT”, allows you to craft precise and powerful selections in your DataFrames 💪.

Pandas Boolean Indexing in List

🐼 Pandas Boolean indexing is a powerful technique that allows you to select subsets of data in a DataFrame based on actual values rather than row or column labels 😮. This technique is perfect for filtering data based on specific conditions 👌.

When using Boolean indexing, you can apply logical conditions using comparison operators or combination operators like & (and) and | (or) 💡. Keep in mind that when applying multiple conditions, you must wrap each condition in parentheses for proper evaluation ✅.

Let’s go through a few examples to better understand how Boolean indexing with lists works!

Example Description
df[df['col1'].isin(['a', 'b'])] Select rows where ‘col1’ is either ‘a’ or ‘b’
df[(df['col1'] == 'a') | (df['col1'] == 'b')] Select rows where ‘col1’ is either ‘a’ or ‘b’, alternate method
df[(df['col1'] == 'a') & (df['col2'] > 10)] Select rows where ‘col1’ is ‘a’ and ‘col2’ is greater than 10
df[~df['col1'].isin(['a', 'b'])] Select rows where ‘col1’ is neither ‘a’ nor ‘b’, using the ‘not in’ condition
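The table rows above can be run end-to-end on a tiny example DataFrame (the data is invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({'col1': ['a', 'b', 'c', 'a'], 'col2': [5, 12, 20, 15]})

in_list = df[df['col1'].isin(['a', 'b'])]              # 'col1' is 'a' or 'b'
not_in_list = df[~df['col1'].isin(['a', 'b'])]         # 'not in' condition
combined = df[(df['col1'] == 'a') & (df['col2'] > 10)] # AND condition
print(in_list)
```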

Remember, when working with Pandas Boolean indexing, don’t forget to import the pandas library, use proper syntax, and keep practicing 🎓! This way, you’ll be a Boolean indexing pro in no time 👩‍💻👨‍💻!

Pandas Boolean Indexing Columns

Boolean indexing in pandas refers to the process of selecting subsets of data based on their actual values rather than row or column labels or integer locations. It utilizes a boolean vector as a filter for the data in a DataFrame 📊. This powerful technique enables users to easily access specific data pieces based on conditions while performing data analysis tasks 🧐.

In pandas, boolean indexing commonly employs logical operators such as AND (&), OR (|), and NOT (~) to create a boolean mask which can be used to filter the DataFrame. The process usually involves creating these logical expressions by applying conditions to one or more columns, and then applying the boolean mask to the DataFrame to achieve the desired subset 🎯.

Here’s a table showing some examples of boolean indexing with pandas:

Example Description
df[df['A'] > 2] Filter DataFrame where values in column A are greater than 2 😊.
df[(df['A'] > 2) & (df['B'] < 5)] Select rows where column A values are greater than 2, and column B values are less than 5 🎉.
df[df['C'].isin([1, 3, 5])] Filter DataFrame where column C contains any of the values 1, 3, or 5 🔍.
df[~df['D'].str.contains('abc')] Select rows where column D doesn’t contain the substring ‘abc’ 🚫.
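A boolean mask can also select columns rather than rows, which fits this section’s theme. A minimal sketch (the DataFrame is invented for illustration): build a True/False list over the columns and pass it to df.loc[:, mask].

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2], 'B': [3.5, 4.5], 'C': ['x', 'y']})

# Boolean mask over the columns: keep only numeric columns
numeric_mask = [pd.api.types.is_numeric_dtype(dt) for dt in df.dtypes]
numeric_df = df.loc[:, numeric_mask]
print(numeric_df.columns.tolist())
```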

Boolean indexing is an essential tool for data manipulation in pandas, offering a versatile solution to filter and identify specific elements within the data. Harnessing the power of boolean indexing can greatly improve the efficiency of data analysis tasks, making it a valuable skill to master for users working with pandas data structures 👩‍💻👨‍💻.

Pandas Boolean Indexing Set Value

In Pandas, Boolean indexing is a powerful feature that allows users to filter data based on the actual values in a DataFrame 📊, instead of relying on their row or column labels. This technique uses a Boolean vector (True or False values) to filter out and select specific data points in a DataFrame 💡. Let’s dive into how it works!

Using logical operators such as AND (&), OR (|), and NOT (~), Pandas makes it easy to combine multiple conditions while filtering data. Below is a table showcasing some examples of how to use Boolean indexing in Pandas to set values with different conditions:

Condition Code Example
Setting values based on a single condition df.loc[df['column_name'] > 10, 'new_column'] = 'Greater than 10'
Setting values based on multiple conditions (AND) df.loc[(df['column1'] == 'A') & (df['column2'] == 'B'), 'new_column'] = 'Both conditions met'
Setting values based on multiple conditions (OR) df.loc[(df['column1'] == 'A') | (df['column2'] == 'B'), 'new_column'] = 'One condition met'
Setting values based on NOT conditions df.loc[ ~(df['column_name'] < 10), 'new_column'] = 'Not less than 10'
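The patterns in the table can be combined to label rows, as in this runnable sketch (the thresholds and labels are invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({'temp': [5, 18, 30]})

# Set a label per row based on boolean conditions
df.loc[df['temp'] > 25, 'label'] = 'hot'
df.loc[~(df['temp'] > 25) & (df['temp'] >= 15), 'label'] = 'mild'
df.loc[df['temp'] < 15, 'label'] = 'cold'
print(df)
```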

When working with Pandas, Boolean indexing can tremendously simplify the process of filtering and modifying datasets for specific tasks ✨. Remember that the possibilities are virtually endless, and you can always combine conditional statements to manipulate your datasets in numerous ways!

Pandas Boolean Indexing Not Working

Sometimes when working with Pandas, you may encounter issues with Boolean indexing. There are a few common scenarios that can lead to Boolean indexing not functioning as expected. Let’s go through these cases and their possible solutions. 😊

One common issue arises when using Boolean Series as an indexer. This may lead to an IndexingError: Unalignable boolean Series provided as indexer error. This usually occurs when the Boolean mask cannot be aligned on the index, which is used by default when trying to filter a DataFrame (source).

To overcome this problem, ensure that your Boolean Series shares the DataFrame’s index. If the mask was built from a different object, reindex it onto the DataFrame’s index before filtering:

df[mask.reindex(df.index, fill_value=False)]
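The following sketch reproduces the alignment problem and then fixes it by reindexing the mask (the labels and values are invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({'value': [1, 2, 3]}, index=['a', 'b', 'c'])

# A mask built on a *different* index cannot be aligned with df
mask = pd.Series([True, False, True], index=['a', 'b', 'x'])
try:
    df[mask]
except pd.errors.IndexingError:
    print('Unalignable boolean Series')

# Fix: reindex the mask onto df's index, treating missing labels as False
aligned = mask.reindex(df.index, fill_value=False)
print(df[aligned])
```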

Another issue that may arise is confusion with logical operators during the Boolean indexing process. In Pandas, logical operators for Boolean indexing are different from standard Python logical operators. You should use & for logical AND, | for logical OR, and ~ for logical NOT (source).

For example, to filter rows based on two conditions:

df[(df['col1'] == x) & (df['col2'] == y)]

📝 Here is a table with some examples of Boolean indexing in Pandas:

Condition Code Example
Rows with values in ‘col1’ equal to x df[df['col1'] == x]
Rows with values in ‘col1’ less than x and ‘col2’ greater than y df[(df['col1'] < x) & (df['col2'] > y)]
Rows where ‘col1’ is not equal to x df[~(df['col1'] == x)]

By understanding these potential pitfalls, you can ensure smoother Boolean indexing in your Pandas projects. Good luck, and happy data wrangling! 🚀


What is AutoGPT and How to Get Started: Your Quick Guide to Success


Check out the following explainer video of AutoGPT 🤯 (video source):

Understanding AutoGPT

AutoGPT is an experimental open-source application that showcases the capabilities of the GPT-4 language model🚀. It autonomously develops and manages businesses, aiming to increase their net worth 💼💰. As one of the first examples of GPT-4 running fully autonomously, AutoGPT truly pushes the boundaries of what is possible with AI 🧠 source.

By using AutoGPT, you’ll be able to harness the power of artificial intelligence to generate larger-scale projects in minimal time, saving you tons of time and money ⏰💸. For example, it can significantly improve your website’s SEO and make it appear more active and professional 🌟 source.

Getting started with AutoGPT is simple! Just follow these steps👇:

  1. Visit the Auto-GPT repository on GitHub, where you’ll find all the necessary files and instructions 📝.
  2. Join the Auto-GPT Discord community to ask questions and chat with like-minded individuals 🙌.
  3. Follow the project creator, Torantulino, on Twitter to stay updated on the latest developments and progress 🐦.

Remember, while initial setup may require some time, it’ll be worth it when you see the fantastic benefits AutoGPT can provide to your content creation process 🎉. Happy experimenting! 😄

Getting Started with AutoGPT

AutoGPT is an experimental autonomous agent built on top of OpenAI’s GPT models, designed to carry out multi-step tasks with minimal user input 📘. In this section, we’ll guide you through the process of getting started with AutoGPT, covering prerequisites and preparation, along with installation and configuration. Let’s dive in! 🚀

Prerequisites and Preparation

Before you start with the installation, ensure that you have the following prerequisites in place:

  • 💻 A computer with internet access
  • 🐍 Python installed (latest stable version)
  • 🌐 Access to GitHub for downloading the AutoGPT repository

Once you have all of these in place, it’s time to prepare your computer for the installation. Begin by navigating to the directory where you want to download AutoGPT. If you prefer to use a virtual environment for Python projects, go ahead and activate it. With your environment set up, you’re ready to install AutoGPT.

Installation and Configuration

Installing AutoGPT is a breeze 🌬! To get started, clone the AutoGPT repository from GitHub:

git clone https://github.com/Torantulino/Auto-GPT.git

Next, navigate to the newly created Auto-GPT directory and install the required dependencies. This can usually be done using a single command, such as:

pip install -r requirements.txt

Once the dependencies are installed, you’re all set to start using AutoGPT 🎉! Remember to refer to the AutoGPT GitHub page for additional information and documentation, which will help you make the most of this powerful tool 💪.

That’s it! You’ve successfully set up AutoGPT on your computer. Now go ahead and start exploring the amazing potential of this AI-driven language model 🚀.

10 Real-World Applications and Use Cases

AutoGPT offers a variety of practical applications that can simplify your daily tasks and enhance your projects. Here are ten real-world use cases where you can take advantage of its capabilities:

  1. Internet Research 🌐: AutoGPT can help you navigate the internet and perform searches to find relevant information, saving you time and energy.
  2. Autonomous Coding and Debugging 💻: Leverage the power of AI to write and debug code faster and more efficiently than ever before.
  3. Social Media Management 📱: Unleash AutoGPT as a Twitter bot that can autonomously generate and post engaging content to grow your online presence.
  4. Content Creation ✍: Boost your writing skills by using AutoGPT for creative tasks like storytelling, article writing, or even poetry.
  5. Better Decision Making 🧠: Make use of informed predictions and insights offered by AutoGPT to assist you in personal and professional decision-making.
  6. Custom Chatbots 🤖: Build intelligent chatbots that can interact with users and provide informative responses tailored to their needs.
  7. Language Translation 🌍: Break down language barriers by utilizing AutoGPT to translate text across multiple languages with ease.
  8. Online Tutoring 🎓: Provide support and assistance to students with their studies by employing AutoGPT as a virtual learning companion.
  9. Email Management 💌: Streamline your inbox by enlisting AutoGPT to sort through emails, flagging important messages, and automatically replying to routine inquiries.
  10. Simulation and Modeling 🔬: Improve your understanding of complex systems by harnessing AutoGPT’s ability to simulate and analyze various scenarios.

By incorporating AutoGPT into your work, you can unlock new possibilities and take your projects to the next level. Get started today and see the amazing benefits that await you! 😊

👩‍💻 Recommended: The Evolution of Large Language Models (LLMs): Insights from GPT-4 and Beyond


How I Used the Flask Framework to Create a URL Shortener Application


A URL shortener service generates a shorter, more readable version of the URL it was given. Flask, a Python web framework, can be used to create a URL shortener app.

So, we will create an application allowing users to enter a URL and shorten it. We will use the SQLite database engine to store application data.
If you prefer to learn how this is done using the Django framework, you are free to read this article.

Set up

Create a new folder for this project. Then, create and activate a virtual environment by running the following commands in your terminal.

python3 -m venv venv
source venv/bin/activate

Install Flask and the hashids library.

pip install flask hashids

The hashids library will be used to generate a unique ID. You will understand this as we proceed.

Creating a Database Engine

Since we will store application data, we must define the database schema. Create a file called schema.sql and add the following SQL commands.

DROP TABLE IF EXISTS urls;
CREATE TABLE urls(
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    created TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    original_url TEXT NOT NULL,
    clicks INTEGER NOT NULL DEFAULT 0
);

If the above code seems strange, you may want to familiarize yourself with SQL commands.

We want to create a table named urls. As we don’t want issues caused by several tables sharing the same name, we first delete any existing table of that name. That’s what the DROP TABLE IF EXISTS statement does.

The table is then created with four columns. The id column will contain the unique integer value for each entry. Next is the date the shortened URL was generated. The third column is the original URL. Finally, the number of times the URL was clicked.

The schema.sql file won’t run by itself; we’ll execute it from a Python script. So, we create another file called init_db.py.

import sqlite3

connection = sqlite3.connect('database.db')

with open('schema.sql') as sql:
    connection.executescript(sql.read())

connection.commit()
connection.close()

Once you run the script (with python3 init_db.py), a new file called database.db will be created. This is where all application data will be stored.

The connect() method creates the file. As soon as the file is created, it is then populated with the urls table. This is done by first opening and reading the content from the schema.sql.

It then calls the executescript() method to execute all the SQL commands in the SQL file. After which, we commit and close the file. By now, your folder should contain the following files:

  • database.db
  • init_db.py
  • schema.sql

Creating the Database Connection

Let us open a connection to the database file. Create a file and name it db_connection.py.

import sqlite3

def get_db_connection():
    conn = sqlite3.connect('database.db')
    conn.row_factory = sqlite3.Row
    return conn

Notice that we set the row_factory attribute to sqlite3.Row. This makes it possible to access values by column name. We then return the connection object, which will be used to access the database.
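To see what sqlite3.Row buys us, here is a small standalone sketch using a throwaway in-memory database (the table mirrors our urls schema, but the inserted row is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(':memory:')  # throwaway database, just for the demo
conn.row_factory = sqlite3.Row
conn.execute('CREATE TABLE urls (id INTEGER PRIMARY KEY, original_url TEXT)')
conn.execute("INSERT INTO urls (original_url) VALUES ('https://example.com')")

row = conn.execute('SELECT * FROM urls').fetchone()
print(row['original_url'])  # access by column name instead of row[1]
conn.close()
```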

The Main File

Next, create another file and name it main.py. This will be our main file. In this file, we will import the database connection file.

from db_connection import get_db_connection
from hashids import Hashids
from flask import Flask, flash, render_template, request, url_for, redirect

app = Flask(__name__)
app.config['SECRET_KEY'] = 'Your secret key'

hashids = Hashids(min_length=4, salt=app.config['SECRET_KEY'])

@app.route('/', methods=('GET', 'POST'))
def index():
    conn = get_db_connection()
    if request.method == 'POST':
        url = request.form['url']
        if not url:
            flash('The URL is required!')
            return redirect(url_for('index'))
        url_data = conn.execute('INSERT INTO urls (original_url) VALUES (?)',
                                (url,))
        conn.commit()
        conn.close()
        url_id = url_data.lastrowid
        hashid = hashids.encode(url_id)
        short_url = request.host_url + hashid
        return render_template('index.html', short_url=short_url)
    return render_template('index.html')

We create an instance of the Flask class. The __name__ variable allows Flask to locate other resources, including templates, in the current directory. We then create a hashids object that generates hashes of at least four characters. (You can choose a longer minimum.) We use a secret key to specify the salt for the Hashids library.

The index() function is decorated with the @app.route decorator that assigns the URL ('/') to the function, thus turning it into a Flask view function.

In the index() function, we open a database connection. Then, we check if the request method is POST. If so, the code block under it will be executed. If not, we only return an empty web page using the render_template() method.

If the request method is POST, we use request.form['url'] to collect input from the template file (index.html). The output is the URL to shorten. However, if the user gives no URL, we simply flash a message and redirect the user back to the same index.html web page.

If a URL is given, it will be added to the database by executing the command, INSERT INTO …

After closing the database, we select the last row id of the database, which is the current URL added. Remember the AUTOINCREMENT keyword in the id column of the database file. This ensures that the id is incremented with each new entry.

With the last row id selected, we use the hashids.encode() method to generate a unique hash and concatenate it to the URL of the application’s host (indicated with the request.host_url attribute). This becomes the shortened URL that would be displayed to the user.

Please check my GitHub page for the template files. Make sure you create a templates folder to keep the HTML files.

The local server starts when you run python3 main.py in your terminal. This works because of the special __name__ variable and the app.run() method called at the bottom of the file.
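One piece the article doesn’t show is the view that resolves a short URL back to its original and counts the click; the full code is on the GitHub page and may differ. A minimal self-contained sketch of that idea follows. For brevity it routes on the raw integer id; the real app would decode the hash with hashids.decode() first.

```python
import sqlite3
from flask import Flask, redirect

app = Flask(__name__)

def get_db_connection():
    conn = sqlite3.connect('database.db')
    conn.row_factory = sqlite3.Row
    return conn

@app.route('/<int:url_id>')
def url_redirect(url_id):
    # Hypothetical redirect view: in the real app, url_id would come from
    # hashids.decode(hashid) instead of being taken directly from the path.
    conn = get_db_connection()
    row = conn.execute('SELECT original_url, clicks FROM urls WHERE id = ?',
                       (url_id,)).fetchone()
    if row is None:
        conn.close()
        return 'Unknown short URL', 404
    # Count the click, then send the visitor to the original URL
    conn.execute('UPDATE urls SET clicks = ? WHERE id = ?',
                 (row['clicks'] + 1, url_id))
    conn.commit()
    conn.close()
    return redirect(row['original_url'])
```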

Adding Extra Features

Won’t it be nice to know how many times each URL has been clicked and have the counts displayed on a web page? We are going to add that feature. Update your main.py by adding the following:

@app.route('/stats')
def stats():
    conn = get_db_connection()
    db_urls = conn.execute('SELECT * FROM urls').fetchall()
    conn.close()
    urls = []
    for url in db_urls:
        url = dict(url)
        url['short_url'] = request.host_url + hashids.encode(url['id'])
        urls.append(url)
    return render_template('stats.html', urls=urls)

We again open a database connection, select all the columns in the urls table (indicated by *), and fetch every row with the fetchall() method.

After closing the database, we loop through the result. In each iteration, we convert the sqlite3.Row object to a dictionary and repeat the same thing we did previously to encode the id number. This is then concatenated to form a new URL. Finally, we append the result to an empty list and render it to the browser.

💡 Notice we didn’t commit as we did previously, because we made no changes to the database. We simply close the connection after fetching the data we needed.

Your folder should now have the following files:

  • database.db,
  • db_connection.py,
  • init_db.py,
  • main.py,
  • schema.sql,
  • templates (folder).

Templates Files

As earlier stated, you should check my GitHub page for the template files. We created a base.html file inside the templates folder that the other files will inherit from.

The other two files contain the constructs that make rendering dynamic content on our Flask web pages possible: the {% ... %} and {{ ... }} code blocks.

These belong to the Jinja2 templating language, which is installed together with the Flask library.

The render_template() call in the stats() function takes another argument, urls=urls. The urls on the left is the name the stats.html template uses to display the data, while the urls on the right is the Python list we built in the view function.

Conclusion

This is one of the ways to create a URL shortener app using the Flask framework. This project has exposed us to how Flask works as well as how it interacts with a database. If you struggled to understand some of what we did, that’s to be expected as a beginner. However, as you keep working on projects, it will become second nature to you.

💡 Recommended: How I Created an URL Shortener App Using Django