Building Your First ETL Pipeline with Bash


Image by Author | Midjourney & Canva

 

Introduction

 

ETL, or Extract, Transform, Load, is an essential data engineering process that involves extracting data from various sources, converting it into a usable form, and moving it to a destination such as a database. ETL pipelines automate this process, ensuring that data is handled in a consistent and efficient manner. This provides a framework for tasks like data analysis, reporting, and machine learning, and ensures the data is clean, reliable, and ready to use.

Bash, short for Bourne-Again Shell, is a powerful tool for building ETL pipelines thanks to its simplicity, flexibility, and wide applicability, which makes it an excellent option for beginners and seasoned pros alike. Bash scripts can automate tasks, move files around, and talk to other tools on the command line, making them a good fit for ETL work. Moreover, Bash is ubiquitous on Unix-like systems (Linux, BSD, macOS, and so on), so it is ready to use on most such systems with no extra work on your part.

This article is intended for beginner and practitioner data scientists and data engineers who want to build their first ETL pipeline. It assumes a basic understanding of the command line and aims to provide a practical guide to creating an ETL pipeline using Bash.

The goal of this article is to walk readers through the process of building a basic ETL pipeline using Bash. By the end of the article, readers will have a working understanding of how to implement a pipeline that extracts data from a source, transforms it, and loads it into a destination database.

 

Setting Up Your Environment

 

Before we begin, ensure you have the following:

  • A Unix-based system (Linux or macOS)
  • Bash shell (usually pre-installed on Unix systems)
  • Basic understanding of command-line operations

For our ETL pipeline, we will need the following specific command-line tools:

  • curl – to fetch data from APIs and remote servers
  • jq – to parse and transform JSON
  • awk – to process CSV and columnar text
  • sed – to clean and substitute text
  • sqlite3 – to load data into an SQLite database

You can install them using your system’s package manager. On a Debian-based system, you can use apt-get:

sudo apt-get install curl jq awk sed sqlite3

 

On macOS, you can use brew:

brew install curl jq awk sed sqlite3
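
If you want to confirm everything is available before moving on, a quick check like the following can help; this is just a convenience sketch, not part of the pipeline itself:

# Verify that each required tool is on the PATH
for tool in curl jq awk sed sqlite3; do
  command -v "$tool" >/dev/null || echo "Missing: $tool"
done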

 

Let’s set up a dedicated directory for our ETL project. Open your terminal and run:

mkdir ~/etl_project
cd ~/etl_project

 

This creates a new directory called etl_project and navigates into it.

 

Extracting Data

 

Data can come from various sources such as APIs, CSV files, or databases. For this tutorial, we’ll demonstrate extracting data from a public API and from a CSV file.

Let’s use curl to fetch data from a public API. For example, we’ll extract data from a mock API that provides sample data.

# Fetching data from a public API
curl -o data.json "https://api.example.com/data"

 

This command will download the data and save it as data.json.
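
In practice, you may want curl to fail on HTTP errors instead of silently saving an error page. A variation along these lines (using the same placeholder URL) is one option:

# Fail on HTTP errors (-f), show errors but no progress output (-sS), follow redirects (-L)
curl -fsSL -o data.json "https://api.example.com/data" || echo "Extraction failed" >&2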

We can also use curl to download a CSV file from a remote server.

# Downloading a CSV file
curl -o data.csv "https://example.com/data.csv"

 

This will save the CSV file as data.csv in our working directory.
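
After downloading, a quick look at the file is a cheap way to confirm the extraction worked; nothing here depends on the actual contents of the placeholder file:

# Peek at the first few rows and count the lines
head -n 5 data.csv
wc -l data.csv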

 

Transforming Data

 

Data transformation is essential to convert raw data into a format suitable for analysis or storage. This may involve parsing JSON, filtering CSV files, or cleaning text data.

jq is a powerful tool for working with JSON data. Let’s use it to extract specific fields from our JSON file.

# Parsing and extracting specific fields from JSON
jq '.data[] | {id, name, value}' data.json > transformed_data.json

 

This command extracts the id, name, and value fields from each entry in the JSON data and saves the result in transformed_data.json.
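
One thing worth noting: the load step later in this tutorial imports a CSV file, so the JSON output above never reaches the database as written. If you also want the JSON-sourced records loaded, one option (assuming the same id, name, and value fields; the output file name here is just illustrative) is to have jq emit CSV directly:

# Converting the extracted JSON records to CSV rows
jq -r '.data[] | [.id, .name, .value] | @csv' data.json > transformed_data_from_json.csv

The resulting file could then be imported with the same sqlite3 commands shown in the loading section.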

awk is a versatile tool for processing CSV files. We’ll use it to extract specific columns from our CSV file.

# Extracting specific columns from CSV
awk -F, -v OFS=, '{print $1, $3}' data.csv > transformed_data.csv

 

This command extracts the first and third columns from data.csv, keeps them comma-separated (via OFS), and saves them in transformed_data.csv.
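
Keep in mind that the SQLite table created later has three columns (id, name, value), so the CSV you import should have three columns as well. If your source file happens to have those fields in the first three columns plus a header row (an assumption about your data, not a given), a variation like this keeps all three and drops the header:

# Keep columns 1-3, comma-separated, and skip the header row (NR > 1)
awk -F, -v OFS=, 'NR > 1 {print $1, $2, $3}' data.csv > transformed_data.csv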

sed is a stream editor for filtering and transforming text. We can use it to perform text replacements and clean up our data.

# Replacing text in a file
sed 's/old_text/new_text/g' transformed_data.csv

 

This command replaces occurrences of old_text with new_text in transformed_data.csv and prints the result to standard output.
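
If you want to change the file itself rather than just print the result, sed supports in-place editing, with a small syntax difference depending on which sed you have:

# GNU sed (Linux): edit transformed_data.csv in place
sed -i 's/old_text/new_text/g' transformed_data.csv

# BSD sed (macOS): -i requires a backup suffix argument, which may be empty
sed -i '' 's/old_text/new_text/g' transformed_data.csv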

 

Loading Data

 

Common destinations for loading data include databases and files. For this tutorial, we’ll use SQLite, a commonly used lightweight database.

First, let’s create a new SQLite database and a table to hold our data.

# Creating a new SQLite database and table
sqlite3 etl_database.db "CREATE TABLE data (id INTEGER PRIMARY KEY, name TEXT, value REAL);"

 

This command creates a database file named etl_database.db and a table named data with three columns.
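
If you expect to rerun the pipeline, note that CREATE TABLE fails the second time because the table already exists; a common guard is IF NOT EXISTS:

# Create the table only if it does not already exist
sqlite3 etl_database.db "CREATE TABLE IF NOT EXISTS data (id INTEGER PRIMARY KEY, name TEXT, value REAL);"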

Next, we’ll insert our transformed data into the SQLite database.

# Inserting data into the SQLite database
sqlite3 etl_database.db <<EOF
.mode csv
.import transformed_data.csv data
EOF

 

This block of commands sets the mode to CSV and imports transformed_data.csv into the data table.
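
If your CSV still contains a header row, it will be imported as data. Reasonably recent versions of sqlite3 (the --skip option was added around version 3.32) can skip it during import; otherwise strip the header beforehand, for example with the awk variation shown earlier:

# Skip the first (header) line during import -- requires a sqlite3 that supports --skip
sqlite3 etl_database.db <<EOF
.mode csv
.import --skip 1 transformed_data.csv data
EOF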

We can verify that the data has been inserted correctly by querying the database.

# Querying the database
sqlite3 etl_database.db "SELECT * FROM information;"

 

This command retrieves all rows from the data table and displays them.
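
For a quicker sanity check than printing every row, a simple count is often enough:

# Count the rows loaded into the data table
sqlite3 etl_database.db "SELECT COUNT(*) FROM data;"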

 

Final Thoughts

 

We have covered the following steps in building our ETL pipeline with Bash:

  1. Environment setup and tool installation
  2. Data extraction from a public API and a CSV file with curl
  3. Data transformation using jq, awk, and sed
  4. Data loading into an SQLite database with sqlite3

Bash is an effective choice for ETL thanks to its simplicity, flexibility, automation capabilities, and interoperability with other CLI tools.

For further exploration, consider incorporating error handling, scheduling the pipeline via cron, or learning more advanced Bash concepts. You may also wish to explore other transformation tools and techniques to expand your pipeline skill set.
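
To make the error-handling and cron suggestions concrete, here is a minimal sketch of how the commands from this tutorial might be stitched into a single script; the URL, file layout, and schedule are placeholders carried over from the examples above, not a production recipe:

#!/usr/bin/env bash
# etl_pipeline.sh -- minimal end-to-end sketch of the steps above
set -euo pipefail   # stop on errors, unset variables, and failed pipes

cd ~/etl_project

# Extract: download the source CSV (placeholder URL)
curl -fsSL -o data.csv "https://example.com/data.csv"

# Transform: keep columns 1-3, comma-separated, and drop the header row
awk -F, -v OFS=, 'NR > 1 {print $1, $2, $3}' data.csv > transformed_data.csv

# Load: create the table if needed and import the transformed CSV
sqlite3 etl_database.db <<'EOF'
CREATE TABLE IF NOT EXISTS data (id INTEGER PRIMARY KEY, name TEXT, value REAL);
.mode csv
.import transformed_data.csv data
EOF

echo "ETL run completed at $(date)"

After making it executable with chmod +x etl_pipeline.sh, a crontab entry such as 0 2 * * * $HOME/etl_project/etl_pipeline.sh >> $HOME/etl_project/etl.log 2>&1 would run it nightly at 2 a.m.; the path and schedule are, again, only examples.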

Try out your own ETL projects, putting what you have learned to the test in more elaborate scenarios. With luck, the basic concepts here will be a good jumping-off point to more complex data engineering tasks.
 
 

Matthew Mayo (@mattmayo13) holds a master’s degree in computer science and a graduate diploma in data mining. As Managing Editor, Matthew aims to make complex data science concepts accessible. His professional interests include natural language processing, machine learning algorithms, and exploring emerging AI. He is driven by a mission to democratize knowledge in the data science community. Matthew has been coding since he was 6 years old.


