Tag: Data science

  • Data Types & Conversions: Structuring Data for Accurate Analysis


    Why Data Types Matter More Than You Think

    After learning how to inspect and clean datasets, the next critical step is ensuring that your data is stored in the correct format. This is where data types come into play.

    At a beginner level, it’s easy to assume that if the data “looks right,” it is right. But in real-world analysis, appearance can be misleading. A column may visually contain numbers, but if it is stored as text, calculations will either fail or—worse—produce incorrect results without any warning.

    Consider this simple scenario: you want to calculate total revenue. If your revenue column is stored as strings, operations like summation may concatenate values instead of adding them. This leads to outputs that look valid but are fundamentally wrong.
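A quick demonstration of the problem, using a made-up revenue column:

```python
import pandas as pd

# Revenue stored as strings: .sum() concatenates instead of adding
revenue = pd.Series(["100", "200", "300"])
print(revenue.sum())  # → "100200300": looks plausible, but is wrong

# The same values converted to numbers behave correctly
revenue_numeric = pd.to_numeric(revenue)
print(revenue_numeric.sum())  # → 600
```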

    Even more subtle issues arise in sorting and filtering. Text-based numbers follow alphabetical order, not numerical order. So "100" comes before "20", which breaks logical expectations.

    This is why data types are not just a technical requirement—they are a core part of analytical correctness.

    Clean data is not only error-free—it is correctly structured to behave as expected under analysis.


    What Are Data Types?

    A data type defines the kind of value stored in a column and determines how that data behaves when you perform operations on it.

    In Python, and more specifically in pandas, data types are designed to efficiently handle different kinds of data such as numbers, text, dates, and categories.

    Here are the most commonly used data types:

    Data Type    Description                    Example
    int64        Whole numbers                  1, 25, 100
    float64      Decimal numbers                10.5, 99.99
    object       Text (string data)             "India", "Aks"
    bool         Boolean values                 True, False
    datetime64   Date and time values           2024-01-01
    category     Repeated categorical labels    "High", "Medium", "Low"

    Each type is optimized for specific operations. For example:

    • Numeric types allow mathematical operations
    • Datetime types allow time-based filtering and grouping
    • Category types optimize memory and performance

    Choosing the correct type ensures your dataset behaves logically and efficiently.
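To see these types in practice, here is a small, hypothetical DataFrame and its dtypes:

```python
import pandas as pd

# Hypothetical data covering the common pandas dtypes
df = pd.DataFrame({
    "Quantity": [1, 25, 100],           # int64
    "Price": [10.5, 99.99, 5.0],        # float64
    "Country": ["India", "USA", "UK"],  # object
    "Active": [True, False, True],      # bool
})
df["Date"] = pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-03"])  # datetime64[ns]
df["Level"] = pd.Series(["High", "Low", "High"]).astype("category")      # category

print(df.dtypes)
```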


    How Data Types Affect Analysis

    Data types influence almost every step of analysis. Let’s look at a few concrete impacts:

    1. Calculations

    If a numeric column is stored as text:

    • You cannot compute averages correctly
    • Aggregations may fail or give incorrect results

    2. Sorting

    Text-based sorting:

    "100", "20", "3"
    

    Numeric sorting:

    3, 20, 100
    
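You can verify the difference in plain Python:

```python
# Lexicographic order: '1' < '2' < '3', so "100" sorts before "20"
print(sorted(["100", "20", "3"]))  # → ['100', '20', '3']

# Numeric order behaves as expected
print(sorted([100, 20, 3]))        # → [3, 20, 100]
```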

    3. Visualization

    Charts rely on correct data types. If dates are stored as text:

    • Time-series plots won’t work properly
    • Trends become harder to interpret

    4. Modeling

    Machine learning models expect numeric inputs. Incorrect types:

    • Break model pipelines
    • Reduce accuracy

    This shows that data types are deeply tied to both correctness and usability.


    The Most Common Real-World Issues

    In real datasets, data types are rarely perfect. This is because data often comes from:

    • Multiple systems
    • Manual entry
    • Different formats and standards

    You may encounter:

    • Numbers stored as strings ("5000")
    • Dates stored inconsistently ("01-02-2024", "2024/02/01")
    • Mixed values (100, "unknown", None)
    • Categorical inconsistencies ("Male", "male", "M")

    These inconsistencies don’t always throw errors—they quietly degrade the quality of your analysis.

    A key skill is learning to recognize these issues early and fix them systematically.


    Inspecting Data Types in pandas

    Before making any changes, always start by inspecting your dataset.

    df.info()
    

    This command provides a structured overview:

    • Column names
    • Data types
    • Number of non-null values

    This helps you quickly identify mismatches.

    Example

    If you see:

    • Revenue → object
    • Date → object

    It signals that conversions are required.

    You should treat df.info() as your first diagnostic tool when working with any dataset.


    Understanding the “object” Type

    The object type is the most common—and most problematic—data type in pandas.

    It is used as a default when pandas cannot assign a more specific type. This means it may contain:

    • Pure text
    • Numeric values stored as strings
    • Mixed data types

    Because of this ambiguity, object columns should always be examined carefully.

    A dataset with many object columns is almost always under-processed.


    Converting Data Types: The Core Skill

    Converting data types is a fundamental step in data cleaning. The goal is to align the data’s format with its real-world meaning.

    Let’s go through the most important conversions in detail.


    1. Converting to Numeric

    This is one of the most frequent tasks.

    Problem

    df["Revenue"]
    

    Output:

    "1000", "2500", "300"
    

    These are strings, not numbers.


    Basic Conversion

    df["Revenue"] = df["Revenue"].astype(float)
    

    Now you can:

    • Perform calculations
    • Aggregate values
    • Use the column in models

    Handling Errors Safely

    Real-world data often contains invalid entries:

    "1000", "2500", "unknown"
    

    Use:

    df["Revenue"] = pd.to_numeric(df["Revenue"], errors="coerce")
    

    This converts valid values and replaces invalid ones with NaN.


    Why This Matters

    Instead of failing, your pipeline continues smoothly, allowing you to handle missing values later.


    2. Converting to Integer

    Use integers for count-based data:

    df["Quantity"] = df["Quantity"].astype(int)
    

    However, ensure:

    • No missing values
    • No invalid entries

    Otherwise, convert safely first.
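One safe pattern, assuming invalid entries should become missing values, is to coerce first and then use pandas' nullable Int64 type, which lets integers coexist with missing values:

```python
import pandas as pd

# Hypothetical count column containing an invalid entry
quantity = pd.Series(["5", "12", "unknown"])

# Coerce invalid entries to missing, then convert to nullable integers
quantity = pd.to_numeric(quantity, errors="coerce").astype("Int64")
print(quantity.dtype)  # Int64
```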


    3. Converting to String

    Some numeric-looking values should remain text:

    Examples:

    • IDs
    • Phone numbers
    • ZIP codes

    df["Customer_ID"] = df["Customer_ID"].astype(str)
    

    This prevents accidental mathematical operations.


    4. Converting to Datetime

    Dates are essential for time-based analysis but often stored incorrectly.

    Problem

    "01-02-2024", "2024/02/01", "Feb 1 2024"
    

    Solution

    df["Date"] = pd.to_datetime(df["Date"])
    

    pandas infers many common date formats automatically. If some values cannot be parsed, pass errors="coerce" so they become NaT instead of raising an error.


    Extracting Useful Components

    df["Year"] = df["Date"].dt.year
    df["Month"] = df["Date"].dt.month
    

    This enables:

    • Trend analysis
    • Seasonal insights
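A minimal sketch, with made-up dates and revenue, of how extracted components feed a trend analysis:

```python
import pandas as pd

df = pd.DataFrame({
    "Date": pd.to_datetime(["2024-01-05", "2024-01-20", "2024-02-10"]),
    "Revenue": [100.0, 150.0, 200.0],
})

df["Month"] = df["Date"].dt.month

# Total revenue per month: the basis of a simple trend analysis
monthly = df.groupby("Month")["Revenue"].sum()
print(monthly)
```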

    5. Boolean Conversion

    Binary values are often stored as text.

    df["Subscribed"] = df["Subscribed"].map({"Yes": True, "No": False})
    

    This simplifies filtering and analysis.


    6. Category Data Type

    For repeated labels:

    df["Segment"] = df["Segment"].astype("category")
    

    Advantages:

    • Lower memory usage
    • Faster operations
    • Better performance in modeling
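You can measure the memory benefit directly on a toy column of repeated labels:

```python
import pandas as pd

# A column with many repeated labels
segment = pd.Series(["High", "Medium", "Low"] * 10_000)

before = segment.memory_usage(deep=True)
after = segment.astype("category").memory_usage(deep=True)

# category stores each unique label once plus compact integer codes
print(f"object:   {before:,} bytes")
print(f"category: {after:,} bytes")
```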

    Cleaning Before Conversion

    Often, conversion requires preprocessing.

    Removing Currency Symbols

    df["Revenue"] = df["Revenue"].str.replace("$", "", regex=False)
    

    Removing Commas

    df["Revenue"] = df["Revenue"].str.replace(",", "")
    

    Then convert:

    df["Revenue"] = df["Revenue"].astype(float)
    

    Handling Mixed Data

    Mixed data types are common:

    100, "unknown", 250
    

    Use:

    df["Value"] = pd.to_numeric(df["Value"], errors="coerce")
    

    Then treat missing values separately.


    Validating Your Work

    After conversion, always verify:

    df.info()
    

    Check:

    • Data types are correct
    • No unexpected missing values

    Validation ensures reliability.


    Memory Optimization

    Efficient data types improve performance.

    df["Category"] = df["Category"].astype("category")
    

    Downcasting

    df["Value"] = pd.to_numeric(df["Value"], downcast="integer")
    

    This reduces memory usage without losing information.
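For instance, small values (shown here with hypothetical numbers) fit in int8:

```python
import pandas as pd

values = pd.Series([1, 5, 120])  # stored as int64 by default

# downcast picks the smallest integer type that holds all the values
small = pd.to_numeric(values, downcast="integer")
print(values.dtype, "->", small.dtype)  # int64 -> int8
```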


    Practical Workflow

    A structured approach:

    1. Inspect (df.info())
    2. Identify issues
    3. Clean raw values
    4. Convert types
    5. Validate

    This workflow ensures consistency.


    Real-World Example

    df = pd.read_csv("sales.csv")
    
    df["Revenue"] = df["Revenue"].str.replace("$", "", regex=False).str.replace(",", "")
    df["Revenue"] = pd.to_numeric(df["Revenue"], errors="coerce")
    
    df["Date"] = pd.to_datetime(df["Date"])
    
    df["Customer_ID"] = df["Customer_ID"].astype(str)
    
    df["Segment"] = df["Segment"].astype("category")
    

    This is a typical pipeline used in real projects.


    Common Mistakes to Avoid

    • Skipping type inspection
    • Converting without cleaning
    • Ignoring errors
    • Leaving columns as object
    • Not validating results

    Avoiding these mistakes improves both accuracy and efficiency.


    Analytical Mindset

    Always question your data:

    • Does this column behave logically?
    • Can I perform correct operations on it?
    • Is this the most efficient format?

    Thinking this way ensures high-quality analysis.


    Summary

    In this page, you learned:

    • The importance of data types
    • How to inspect and identify issues
    • How to convert between types
    • How to clean data before conversion
    • How to validate and optimize datasets

    Correct data types form the foundation of reliable analysis.


    Transition to Next Page

    Now that your data is properly structured, the next step is handling missing values—one of the most common and impactful challenges in real-world datasets.

    You’ll learn how to detect, analyze, and treat missing data using different strategies.

    What’s Next?

    In the next page, you will move into:

    Handling Missing Values in Python

    This is where you learn to detect, analyze, and treat the gaps that appear in almost every real-world dataset.


  • Foundations of Clean Data: From Raw Inputs to Reliable Datasets


    Why Data Cleaning Comes First

    Before you build models, create visualizations, or extract insights, there is one step that determines the quality of everything that follows: data cleaning.

    In theory, data analysis sounds straightforward—load a dataset, run some analysis, and get results. But in reality, most datasets are messy, incomplete, and inconsistent. If you skip or rush the cleaning process, your analysis may produce misleading or completely incorrect conclusions.

    This is why experienced data analysts often say:

    “Good analysis starts with good data—and good data starts with cleaning.”

    In real-world projects, data cleaning is not a small step—it can take up 60–80% of the total analysis time. That’s because raw data is rarely collected in a perfect format. It comes from multiple sources, different systems, and often includes human errors.

    This module begins by helping you understand how to approach messy data systematically, rather than trying to fix things randomly.


    What is Data Cleaning & Wrangling?

    Although often used together, these two terms have slightly different meanings.

    Data Cleaning

    Data cleaning focuses on identifying and fixing problems in the dataset. This includes:

    • Missing values
    • Incorrect entries
    • Duplicates
    • Inconsistent formats

    The goal is to make the data accurate and reliable.


    Data Wrangling

    Data wrangling goes beyond cleaning. It involves transforming data into a format that is ready for analysis. This includes:

    • Restructuring datasets
    • Combining multiple data sources
    • Creating new features
    • Organizing data logically

    The goal is to make the data usable and meaningful.


    Simple Way to Understand

    • Cleaning = Fixing problems
    • Wrangling = Preparing for analysis

    Together, they form the foundation of any data workflow.


    The Reality of Real-World Data

    In textbooks and tutorials, datasets are usually clean and easy to work with. But real-world data looks very different.

    You might encounter:

    • Missing values in important columns
    • Dates stored in multiple formats
    • Numbers stored as text
    • Duplicate rows
    • Inconsistent naming conventions
    • Unexpected or extreme values

    Let’s look at a small example:

    Order ID    Date          Revenue    Country
    101         01-02-24      500        USA
    102         2024/02/01               United States
    103         Feb 1 2024    5000       U.S.
    101         01-02-24      500        USA

    Even in this small dataset, there are multiple issues:

    • Missing value (the empty Revenue cell)
    • Multiple date formats
    • Duplicate row
    • Inconsistent country names
    • Possible outlier (5000 vs 500)

    This is not unusual—it’s typical.

    The goal of this module is to train you to recognize and handle these issues confidently.


    Why Data Cleaning is Critical

    Skipping or poorly handling data cleaning can lead to serious problems:

    • Incorrect Analysis: If data is inconsistent, your results may be misleading.
    • Broken Calculations: Wrong formats can cause errors or incorrect outputs.
    • Poor Model Performance: Machine learning models rely on clean, structured data.
    • Loss of Trust: If your insights are wrong, stakeholders lose confidence.

    In professional settings, accuracy matters more than speed. A well-cleaned dataset leads to reliable insights and better decisions.


    The Data Cleaning Workflow

    Rather than fixing issues randomly, good analysts follow a structured workflow.


    Step 1: Inspect the Data

    Before making any changes, understand your dataset.

    Key questions:

    • How many rows and columns are there?
    • What are the data types?
    • Are there missing values?
    • What does the data look like?

    df.head()
    df.info()
    df.describe()
    

    This gives you a high-level overview.


    Step 2: Identify Issues

    Look for common problems:

    • Missing values
    • Duplicates
    • Incorrect formats
    • Outliers
    • Inconsistent categories

    At this stage, you are not fixing anything—you are diagnosing the dataset.


    Step 3: Decide a Strategy

    Not all problems have a single solution.

    For example:

    • Should you remove missing values or fill them?
    • Should duplicates be deleted or merged?
    • Should outliers be removed or analyzed further?

    Your decisions should depend on:

    • The context of the data
    • The analysis goal

    Step 4: Apply Transformations

    Now you clean and restructure the data using tools like pandas.

    This includes:

    • Fixing data types
    • Handling missing values
    • Removing duplicates
    • Standardizing formats

    Step 5: Validate the Data

    After cleaning, always verify your dataset.

    df.info()
    df.isnull().sum()
    df.describe()
    

    Ask:

    • Are there still missing values?
    • Are data types correct?
    • Do values make logical sense?

    Validation ensures your cleaning process is complete and accurate.


    Setting Up Your Environment

    To work with data effectively in Python, you’ll primarily use two libraries:

    • pandas → for data manipulation
    • NumPy → for numerical operations

    Basic Setup

    import pandas as pd
    import numpy as np
    

    Loading Data

    df = pd.read_csv("data.csv")
    

    Initial Inspection

    df.head()
    df.info()
    df.describe()
    

    These commands should become part of your default workflow whenever you open a new dataset.


    Understanding Data Types

    Before diving deeper in the next page, it’s important to briefly understand data types.

    Each column in a dataset has a type, such as:

    • Numeric
    • Text
    • Date

    Incorrect data types are one of the most common issues in real-world data.

    For example:

    • Revenue stored as text
    • Dates stored as strings

    This affects:

    • Calculations
    • Sorting
    • Analysis

    You’ll explore this in detail in the next page.


    Handling Missing Values (Introduction)

    Missing data is one of the most frequent challenges.

    You can detect missing values using:

    df.isnull().sum()
    

    Common strategies include:

    • Removing rows
    • Filling with default values
    • Using statistical methods

    We will cover this in depth later in the module.
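As a preview, the three strategies look like this in pandas (hypothetical Revenue column):

```python
import pandas as pd

df = pd.DataFrame({"Revenue": [100.0, None, 300.0]})

dropped = df.dropna()                                     # remove rows with missing values
filled_zero = df["Revenue"].fillna(0)                     # fill with a default value
filled_mean = df["Revenue"].fillna(df["Revenue"].mean())  # fill with a statistic

print(filled_mean.tolist())  # [100.0, 200.0, 300.0]
```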


    Removing Duplicates

    Duplicate records can distort results.

    Detect Duplicates

    df.duplicated().sum()
    

    Remove Duplicates

    df = df.drop_duplicates()
    

    Duplicates are especially common in:

    • Transaction data
    • User logs
    • Merged datasets
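Note that drop_duplicates() removes only fully identical rows by default. For transaction-style data you often want to deduplicate on a key column instead (column names here are hypothetical):

```python
import pandas as pd

df = pd.DataFrame({
    "Order ID": [101, 102, 101],
    "Revenue": [500, 700, 500],
})

# Keep only the first occurrence of each Order ID
deduped = df.drop_duplicates(subset=["Order ID"], keep="first")
print(deduped)
```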

    Filtering and Selecting Data

    Often, you don’t need the entire dataset.

    Selecting Columns

    df = df[["Order ID", "Revenue", "Country"]]
    

    Filtering Rows

    df = df[df["Revenue"] > 0]
    

    This helps focus your analysis on relevant data.


    Standardizing Data Formats

    Inconsistent formats can cause confusion.

    Example:

    df["Country"] = df["Country"].replace({
        "USA": "United States",
        "U.S.": "United States"
    })
    

    Standardization ensures consistency across the dataset.


    Working with Dates (Introduction)

    Dates are often messy but essential.

    df["Date"] = pd.to_datetime(df["Date"])
    

    Once converted, you can analyze trends over time.


    Creating New Features

    Data wrangling includes feature creation.

    df["Revenue_per_Item"] = df["Revenue"] / df["Quantity"]
    

    New features often provide deeper insights.


    Grouping and Aggregation

    To summarize data:

    df.groupby("Country")["Revenue"].sum()
    

    This helps identify patterns and trends.


    Merging Datasets

    Real-world projects often involve multiple datasets.

    df = pd.merge(df_orders, df_customers, on="customer_id")
    

    This allows you to combine related information.


    Outliers: Detect and Handle

    Outliers can distort analysis.

    df["Revenue"].describe()
    

    Simple filtering:

    df = df[df["Revenue"] < 10000]
    

    More advanced techniques will be covered later.


    Common Mistakes to Avoid

    • Skipping data inspection
    • Cleaning without understanding context
    • Removing too much data
    • Ignoring data types
    • Not validating results

    Avoiding these mistakes improves analysis quality.


    Developing an Analyst Mindset

    Data cleaning is not just technical—it’s analytical.

    You should constantly ask:

    • Does this value make sense?
    • Could this be an error?
    • How will this affect my analysis?

    This mindset is what separates beginners from professionals.


    Summary

    In this page, you learned:

    • What data cleaning and wrangling mean
    • Why they are essential in real-world analysis
    • How to inspect datasets
    • How to identify common data issues
    • Basic techniques for cleaning and structuring data

    This forms the foundation for all further analysis.


    What’s Next?

    Now that you understand the nature of real-world data and common data quality issues, the next step is to address one of the most critical challenges in data cleaning—missing values.

    In real datasets, missing data is almost unavoidable. Learning how to handle it correctly is essential for building reliable analysis.

    👉 Next: Handling Missing Values in Python
    Learn how to detect, analyze, and handle missing data using practical strategies and decision-making frameworks.

  • SQL Mini Project: Analyze the Superstore Database Using SQL and Python

    What This Project Is

    You have completed all six topics in Module 2. You can query a database, filter and sort rows, aggregate data, join tables, write subqueries, and connect SQL to Python. This mini project puts all of that together in one end-to-end deliverable.
    This is not a guided tutorial. There are no step-by-step instructions telling you which functions to use. The five business questions below are the kind you would genuinely receive in an entry-level data role — and your job is to answer them using everything you have learned.
    By the end you will have a completed Jupyter notebook, a written findings brief, and a GitHub repository — three things you can point to directly when applying for data roles.

    The Scenario

    You have just joined a retail company as a junior data professional. It is your first week. Your manager has sent you an email:

    “Hey — I’ve given you access to our sales database. Before our Friday meeting I’d love to get your take on five questions I’ve been sitting on for a while. Nothing fancy — just pull the numbers and tell me what you find. A short write-up is fine.”

    The five questions are below. The database is the Superstore SQLite database you have been working with throughout Module 2.

    The Five Business Questions

    Answer each question using SQL. Pull the result into a pandas DataFrame. Then write one to three sentences in plain English summarising what the data tells you.

    Question 1 — Regional Performance

    Which region generates the most total revenue and which generates the most total profit? Are they the same region? If not, what does that tell you?

    Question 2 — Product Profitability

    Which three sub-categories have the highest total profit and which three have the lowest? Are any sub-categories losing money overall?

    Question 3 — Customer Value

    Who are the top 10 customers by total revenue? For each of those customers, what is their profit margin? Are your highest-revenue customers also your most profitable ones?

    Question 4 — Loss-Making Orders

    What percentage of all orders are loss-making (profit below zero)? Which category has the highest proportion of loss-making orders? Which region?

    Question 5 — Shipping and Profitability

    Does ship mode affect profitability? Show average profit and average sales for each ship mode. Is there a pattern worth flagging to the business?

    Deliverables

    Submit three things when you complete this project:

    1. Jupyter Notebook
      One clean notebook containing all your SQL queries, pandas code, and output. Structure it with a markdown cell before each question stating the business question, followed by your code and output. Name it Module2_MiniProject.ipynb.
    2. Written Findings Brief
      A short document — five short paragraphs, one per question — written in plain English as if you are sending it to your manager. No code. No jargon. Just what the data shows and why it matters. Aim for 150 to 200 words total. Name it Module2_Findings_Brief.md.
    3. GitHub Repository
      Push both files to a public GitHub repository. Name it superstore-sql-analysis. Include a README that describes the project in two to three sentences, lists the tools used, and explains how to run the notebook.

    Technical Requirements

    Your notebook must meet all of the following:

    • All data retrieved using SQL queries against the Superstore SQLite database — no loading the raw CSV directly
    • At least three queries must use JOIN across two or more tables
    • At least one query must use a subquery
    • At least one query must use GROUP BY with HAVING
    • All results pulled into pandas DataFrames using pd.read_sql()
    • Connection opened once at the top and closed once at the bottom
    • All SQL queries use parameterised values where a variable is involved
    • Column names in output are readable — use AS aliases where needed

    Notebook Structure

    Set your notebook up in this order:

    1. Title cell — project name, your name, date
    2. Setup cell — imports and database connection
    3. Question 1 — markdown heading + SQL query + DataFrame output + written insight
    4. Question 2 — markdown heading + SQL query + DataFrame output + written insight
    5. Question 3 — markdown heading + SQL query + DataFrame output + written insight
    6. Question 4 — markdown heading + SQL query + DataFrame output + written insight
    7. Question 5 — markdown heading + SQL query + DataFrame output + written insight
    8. Summary cell — three to five overall takeaways in plain English
    9. Close connection
    

    Every question cell should follow this pattern:

    # ── QUESTION 1: Regional Performance ─────────────────────
    
    q1 = pd.read_sql("""
        -- Your SQL query here
    """, conn)
    
    q1
    

    Followed by a markdown cell with your plain-English insight.

    Hints — Read Only If Stuck

    These are directional hints only — not solutions.

    Question 1: GROUP BY region with SUM for both sales and profit. Think about why the most profitable region might not be the highest revenue region — margin matters.

    Question 2: GROUP BY sub_category with SUM(profit). Sort both ascending and descending. Use HAVING to isolate sub-categories where total profit is negative.

    Question 3: JOIN orders, customers, and order_items. GROUP BY customer name. Calculate profit margin as SUM(profit) / SUM(sales) * 100. Think about what a high-revenue but low-margin customer means for the business.

    Question 4: Use a subquery or a CASE statement to flag loss-making orders. Calculate percentage in pandas after pulling the counts. GROUP BY category and region separately to find the worst offenders.

    Question 5: GROUP BY ship_mode. Use AVG for both sales and profit. Think about whether causation is implied — does ship mode cause profitability differences or just correlate with them?

    Evaluation Criteria

    Your project will be assessed on four dimensions:

    Correctness — Do your SQL queries return accurate results? Are joins and aggregations logically sound?

    Clarity — Is your notebook clean and readable? Would a colleague understand your work without asking you to explain it?

    Insight — Do your written findings go beyond restating the numbers? Does your brief say something meaningful about the business?

    Craft — Are column names clean? Is the connection managed properly? Are queries well formatted and commented?

    Sharing Your Work

    When your project is complete:
    • Post your GitHub link in the course community forum
    • Write one sentence about the most surprising thing you found in the data
    • Review one other student’s project and leave a comment on their findings brief

    Looking at how others approached the same five questions is one of the most effective ways to deepen your SQL intuition. There is rarely one right query — seeing different approaches to the same problem is genuinely instructive.

    Up next — Module 3: Data Cleaning and Wrangling

    Module 3 moves back into Python full time. You will learn how to take messy, real-world data — missing values, wrong data types, duplicates, inconsistent categories — and turn it into a clean, analysis-ready dataset. The skills in Module 3 are what separates someone who can analyse clean data from someone who can handle data the way it actually arrives in the real world.

  • SQL and Python Together: How to Use sqlite3 and pd.read_sql() for Data Analysis

    Why SQL and Python Belong Together

    Every topic in this module has used Python to run SQL queries. You have been writing SQL inside Python strings and passing them to pd.read_sql(). That combination is not a workaround — it is the standard professional workflow for data analysts who work with databases.

    SQL and Python are not competing tools. They are complementary layers in the same pipeline. SQL is where you retrieve, filter, and summarise data at the database level. Python is where you transform, visualise, model, and communicate that data. Understanding where one ends and the other begins is one of the most practically valuable things you can take away from this module.

    This topic goes deeper into that boundary. You will learn how the connection between SQLite and Python actually works, how to manage that connection properly, how to decide what belongs in SQL versus pandas, and how to structure a clean repeatable workflow that scales from a local SQLite file to a production cloud database.

    How the sqlite3 Connection Works

    Every query you have run in this module started with one line:

    conn = sqlite3.connect('superstore.db')

    That line opens a connection to a SQLite database file. A connection is a live channel between your Python session and the database. Through that channel you can send SQL statements and receive results back as Python objects.

    Understanding the connection lifecycle matters because connections consume resources. A well-written analysis opens a connection, does its work, and closes the connection cleanly. A poorly written one leaves connections open, which can cause file locking issues and unpredictable behaviour especially when multiple processes access the same database.

    Opening and Closing Connections Properly

    import sqlite3
    import pandas as pd
    
    # Open the connection
    conn = sqlite3.connect('superstore.db')

    # Do your work
    df = pd.read_sql("SELECT * FROM superstore LIMIT 5", conn)

    # Always close when done
    conn.close()

    For longer notebooks where you need the connection throughout, the best practice is to open it once at the top and close it once at the bottom — not open and close it around every query.

    Using a Context Manager

    Python’s with statement handles the connection lifecycle automatically. The connection closes itself when the block ends, even if an error occurs inside it:

    # Context manager — connection closes automatically
    with sqlite3.connect('superstore.db') as conn:
      df = pd.read_sql("SELECT * FROM superstore LIMIT 5", conn)
      print(df)
    # conn is closed here automatically

    For notebook-based analysis the manual approach is fine. For scripts that run automatically — scheduled reports, data pipelines — always use the context manager.

    Checking What Tables Exist

    When working with an unfamiliar database, the first thing you want to know is what tables are available:

    conn = sqlite3.connect('superstore.db')
    # List all tables in the database
    tables = pd.read_sql("""
    SELECT name
    FROM sqlite_master
    WHERE type = 'table'
    ORDER BY name
    """, conn)
    print(tables)

    sqlite_master is SQLite’s internal catalogue table. It stores metadata about everything in the database — tables, indexes, and views. This query is the SQLite equivalent of asking “what is in here?” when you open an unfamiliar database for the first time.

    pd.read_sql() — The Bridge Between SQL and pandas

    pd.read_sql() is the function that executes a SQL query and returns the result directly as a pandas DataFrame. It is the core of the SQL-Python workflow.

    # Basic usage
    df = pd.read_sql(sql_query, connection)

    Once the result is a DataFrame you have the full pandas toolkit available — filtering, reshaping, visualisation, statistical analysis, and everything from Module 1.

    Passing Parameters Safely

    When your query needs to include a variable value — a user input, a date from a loop, a value from another DataFrame — never build the query by concatenating strings. This is a security risk called SQL injection and also causes bugs when values contain special characters like apostrophes.

    Instead use parameterised queries:

    # Unsafe — never do this
    region = "West"
    df = pd.read_sql(f"SELECT * FROM superstore WHERE region = '{region}'", conn)
    
    # Safe — use parameters
    region = "West"
    df = pd.read_sql(
        "SELECT * FROM superstore WHERE region = ?",
        conn,
        params=(region,)
    )

    The ? placeholder gets replaced safely by the value in params. SQLite handles the escaping automatically. This is especially important when the variable value comes from user input or an external source.

    Passing Multiple Parameters

    # Filter by region and minimum sales value
    region = "West"
    min_sales = 500
    df = pd.read_sql(
        """
        SELECT order_id, region, sales, profit
        FROM superstore
        WHERE region = ?
        AND sales > ?
        ORDER BY sales DESC
        """,
        conn,
        params=(region, min_sales)
    )

    Parameters are passed as a tuple in the same order as the ? placeholders appear in the query.
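    sqlite3 also supports named placeholders — :name in the SQL, with params passed as a dictionary — which can read more clearly when there are several values. A self-contained sketch with a toy table (in the notebook you would reuse the existing conn):

```python
import sqlite3
import pandas as pd

# Toy stand-in for the superstore table
conn = sqlite3.connect(':memory:')
pd.DataFrame({
    'order_id': ['O1', 'O2', 'O3'],
    'region':   ['West', 'West', 'East'],
    'sales':    [750.0, 120.0, 900.0],
}).to_sql('superstore', conn, index=False)

# Named placeholders — params is a dict instead of a tuple
df = pd.read_sql(
    """
    SELECT order_id, region, sales
    FROM superstore
    WHERE region = :region AND sales > :min_sales
    """,
    conn,
    params={'region': 'West', 'min_sales': 500},
)
print(df)  # only O1 matches both conditions
conn.close()
```

    With named placeholders the order of the dictionary does not matter — each value is matched to its :name in the query.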

    Reading Large Datasets in Chunks

    When a query returns a very large result set — millions of rows — loading everything into memory at once can crash your notebook. pd.read_sql() supports chunked reading via the chunksize parameter:

    # Read in chunks of 1000 rows at a time
    chunks = pd.read_sql(
        "SELECT * FROM superstore",
        conn,
        chunksize=1000
    )
    
    # Process each chunk
    dfs = []
    for chunk in chunks:
        # Apply any row-level processing here
        dfs.append(chunk)
    
    df = pd.concat(dfs, ignore_index=True)
    print(f"Total rows loaded: {len(df)}")
    

    For the Superstore dataset this is not necessary — 10,000 rows loads instantly. But on a production database with millions of rows it is an essential technique to know.
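    When only a summary is needed, you do not even have to keep the chunks — aggregate each one as it arrives and memory use stays flat regardless of how many rows the query returns. A self-contained sketch with toy data:

```python
import sqlite3
import pandas as pd

# Toy table large enough to span several chunks
conn = sqlite3.connect(':memory:')
pd.DataFrame({'sales': [10.0] * 2500}).to_sql('superstore', conn, index=False)

# Aggregate chunk by chunk instead of concatenating everything
total_sales = 0.0
n_rows = 0
for chunk in pd.read_sql("SELECT sales FROM superstore", conn, chunksize=1000):
    total_sales += chunk['sales'].sum()
    n_rows += len(chunk)

print(f"{n_rows} rows, total sales {total_sales:,.2f}")
conn.close()
```

    Only one chunk of 1,000 rows is ever in memory at a time, however large the table.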

    Writing Data Back to SQLite

    The SQL-Python bridge works in both directions. You can read data from a database into pandas, and you can write a pandas DataFrame back into a database as a table.

    to_sql() — Writing a DataFrame to a Database Table

    # Create a summary DataFrame in pandas
    summary = df.groupby('region').agg(
        total_sales=('sales', 'sum'),
        total_profit=('profit', 'sum'),
        order_count=('order_id', 'count')
    ).reset_index().round(2)
    
    # Write it back to the database as a new table
    summary.to_sql(
        'region_summary',       # table name
        conn,
        if_exists='replace',    # replace if table already exists
        index=False             # don't write the DataFrame index as a column
    )
    
    print("Summary table written to database.")
    

    The if_exists parameter controls what happens if the table already exists:

    • replace — drop and recreate the table
    • append — add rows to the existing table
    • fail — raise an error (the default)

    Once written back to the database, you can query this summary table with SQL just like any other table.
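    A quick self-contained round trip shows the behaviour (an in-memory database here, so the sketch does not touch superstore.db):

```python
import sqlite3
import pandas as pd

# Round trip: to_sql writes a table, read_sql queries it back
conn = sqlite3.connect(':memory:')

summary = pd.DataFrame({
    'region': ['East', 'West'],
    'total_sales': [1000.0, 2500.0],
})
summary.to_sql('region_summary', conn, if_exists='replace', index=False)

check = pd.read_sql(
    "SELECT * FROM region_summary ORDER BY total_sales DESC", conn
)
print(check)
conn.close()
```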

    When to Filter in SQL vs pandas

    This is the most practically important decision in the SQL-Python workflow. The wrong choice does not break anything — both tools can filter data. But the right choice makes your analysis faster, cleaner, and more professional.

    The Core Principle

    Filter and aggregate in SQL. Transform, visualise, and model in Python.

    SQL runs inside the database engine which is optimised for filtering and aggregating large datasets. When you push filtering into SQL, only the rows you actually need travel from the database to Python. When you pull everything into Python and filter there, you are loading unnecessary data into memory and doing work that the database could have done more efficiently.


    Filter in SQL When

    # ✅ Row-level filters that reduce data volume
    query("""
        SELECT *
        FROM superstore
        WHERE region = 'West'
        AND order_date >= '2021-01-01'
    """)
    
    # ✅ Aggregations that summarise large tables
    query("""
        SELECT region, SUM(sales) AS total_sales
        FROM superstore
        GROUP BY region
    """)
    
    # ✅ JOINs that combine tables
    query("""
        SELECT o.order_id, c.customer_name, oi.sales
        FROM orders o
        INNER JOIN customers c ON o.customer_id = c.customer_id
        INNER JOIN order_items oi ON o.order_id = oi.order_id
    """)
    
    # ✅ Deduplication before analysis
    query("""
        SELECT DISTINCT customer_id, segment
        FROM customers
    """)
    

    Filter in pandas When

    # ✅ Complex conditional logic involving multiple Python objects
    df['high_value'] = (df['sales'] > df['sales'].mean() * 1.5)
    
    # ✅ String operations not easily done in SQL
    df_filtered = df[df['customer_name'].str.contains('son', case=False)]
    
    # ✅ Filtering based on values calculated in Python
    threshold = df['profit'].quantile(0.75)
    df_top = df[df['profit'] > threshold]
    
    # ✅ Time-based filtering using pandas datetime methods
    df['order_date'] = pd.to_datetime(df['order_date'])
    df_recent = df[df['order_date'].dt.year == 2021]
    
    # ✅ Filtering after a merge or reshape operation in pandas
    merged = df1.merge(df2, on='customer_id')
    filtered = merged[merged['total_orders'] > 3]
    

    The Decision Framework

    Ask yourself three questions before deciding where to filter:

    1. Does the filter reduce the number of rows significantly?
      If yes, do it in SQL. Bringing fewer rows into Python is always better.
    2. Does the filter require Python objects, methods, or calculated values that SQL cannot access?
      If yes, do it in pandas after loading.
    3. Is this a one-time exploration or a repeatable pipeline?
      For pipelines, push as much as possible into SQL for performance and reliability.

    Building a Clean SQL-Python Workflow

    Here is a complete, realistic analyst workflow that shows SQL and Python working together from raw database to final insight:

    import sqlite3
    import pandas as pd
    import matplotlib.pyplot as plt
    
    # ── STEP 1: Connect ──────────────────────────────────────
    conn = sqlite3.connect('superstore.db')
    
    # ── STEP 2: Pull clean, pre-filtered data using SQL ──────
    df = pd.read_sql("""
        SELECT
            c.segment,
            o.region,
            oi.category,
            oi.sub_category,
            o.order_date,
            ROUND(oi.sales, 2)   AS sales,
            ROUND(oi.profit, 2)  AS profit
        FROM orders o
        INNER JOIN customers c
            ON o.customer_id = c.customer_id
        INNER JOIN order_items oi
            ON o.order_id = oi.order_id
        WHERE o.order_date >= '2021-01-01'
    """, conn)
    
    # ── STEP 3: Convert types in pandas ──────────────────────
    df['order_date'] = pd.to_datetime(df['order_date'])
    df['month'] = df['order_date'].dt.to_period('M')
    
    # ── STEP 4: Further analysis in pandas ───────────────────
    # Profit margin by segment
    df['profit_margin'] = (df['profit'] / df['sales'] * 100).round(2)
    
    # Monthly revenue trend
    monthly = df.groupby('month')['sales'].sum().reset_index()
    monthly.columns = ['month', 'total_sales']
    
    # Segment performance
    segment = df.groupby('segment').agg(
        total_sales=('sales', 'sum'),
        total_profit=('profit', 'sum'),
        avg_margin=('profit_margin', 'mean')
    ).round(2).reset_index()
    
    # ── STEP 5: Print insights ────────────────────────────────
    print("=== Segment Performance (2021) ===")
    print(segment.sort_values('total_profit', ascending=False))
    
    print("\n=== Monthly Revenue Trend ===")
    print(monthly)
    
    # ── STEP 6: Close connection ─────────────────────────────
    conn.close()
    

    This workflow is the template for every analysis you will build in this course going forward. SQL handles retrieval and pre-filtering. Python handles enrichment, aggregation, and presentation. Each tool does what it is best at.

    Saving Query Results for Reuse

    When a query takes a long time to run — common on large production databases — save the result to a CSV or parquet file so you do not have to re-query every time you restart your notebook:

    # Run the heavy query once
    df = pd.read_sql(heavy_query, conn)
    
    # Save locally
    df.to_csv('data/superstore_clean.csv', index=False)
    
    # Next session — load from file instead of re-querying
    df = pd.read_csv('data/superstore_clean.csv')
    

    This is standard practice in professional analytics. Query the database to get fresh data when you need it. Work from a saved file during iterative analysis and visualisation where you are not changing the underlying data pull.
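    One common way to wire this up is a simple cache check: load the file if it exists, otherwise run the query and save the result. A self-contained sketch — the toy query and temporary path are stand-ins for your real query and a path like data/superstore_clean.csv:

```python
import os
import sqlite3
import tempfile
import pandas as pd

# Toy database standing in for the real one
conn = sqlite3.connect(':memory:')
pd.DataFrame({'sales': [1.0, 2.0, 3.0]}).to_sql('superstore', conn, index=False)

# Fresh temp directory so the cache starts empty in this sketch
cache_path = os.path.join(tempfile.mkdtemp(), 'superstore_clean.csv')

if os.path.exists(cache_path):
    df = pd.read_csv(cache_path)                         # fast path: reuse saved file
else:
    df = pd.read_sql("SELECT * FROM superstore", conn)   # slow path: hit the database
    df.to_csv(cache_path, index=False)

# Next session: the file exists, so this loads without touching the database
df_again = pd.read_csv(cache_path)
print(len(df_again), "rows loaded from cache")
conn.close()
```

    Delete the cached file whenever you need fresh data, and the next run falls back to the database automatically.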

    From SQLite to Production Databases

    Everything you have learned in this module using SQLite transfers directly to production databases. The only thing that changes is the connection setup. The SQL syntax, pd.read_sql(), parameterised queries, and the SQL-Python workflow are identical.

    Here is how connections look for the most common production databases:

    # PostgreSQL — using psycopg2
    import psycopg2
    conn = psycopg2.connect(
        host="your-host",
        database="your-db",
        user="your-user",
        password="your-password"
    )
    
    # MySQL — using mysql-connector-python
    import mysql.connector
    conn = mysql.connector.connect(
        host="your-host",
        database="your-db",
        user="your-user",
        password="your-password"
    )
    
    # BigQuery — using google-cloud-bigquery
    from google.cloud import bigquery
    client = bigquery.Client()
    df = client.query("SELECT * FROM dataset.table LIMIT 10").to_dataframe()
    
    # Once connected — pd.read_sql() works the same way for all of them
    df = pd.read_sql("SELECT * FROM orders LIMIT 10", conn)
    

    The credentials and connection libraries differ. The workflow after that — SQL queries, pd.read_sql(), DataFrames — is exactly the same. What you have learned here scales directly to enterprise databases handling billions of rows.

    Common Mistakes in the SQL-Python Workflow

    | Mistake | What Happens | Fix |
    | --- | --- | --- |
    | Leaving connections open | File locking, resource leaks | Always call conn.close() or use a context manager |
    | Building queries with f-strings and user input | SQL injection risk, apostrophe bugs | Use parameterised queries with ? placeholders |
    | Pulling full tables into pandas before filtering | Slow, memory-heavy, unprofessional | Filter in SQL first, bring only what you need into Python |
    | Re-running expensive queries every notebook restart | Slow development cycle | Save results to CSV after the first run |
    | Not resetting the index after pd.read_sql() | Index issues in downstream operations | Add .reset_index(drop=True) if needed |
    | Hardcoding credentials in notebooks | Security risk if shared | Use environment variables or a config file |
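    For the credentials point, the environment-variable pattern looks like this. The variable names (SUPERSTORE_DB_*) are placeholders — use whatever names your team defines, set in your shell profile, a .env loader, or your scheduler's configuration:

```python
import os

# Read connection details from the environment instead of hardcoding them.
# SUPERSTORE_DB_* are placeholder names, not a real convention.
db_host = os.environ.get('SUPERSTORE_DB_HOST', 'localhost')
db_user = os.environ.get('SUPERSTORE_DB_USER')
db_password = os.environ.get('SUPERSTORE_DB_PASSWORD')

if db_user is None or db_password is None:
    print('Warning: SUPERSTORE_DB_USER / SUPERSTORE_DB_PASSWORD are not set')

# conn = psycopg2.connect(host=db_host, user=db_user, password=db_password, ...)
```

    The notebook can now be shared freely — anyone running it supplies their own credentials through their environment.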

    Practice Exercises

    1. Connect to your Superstore database, pull all orders from 2020 using a SQL WHERE filter, and calculate the monthly revenue trend in pandas.
    2. Write a parameterised query that accepts a region name as a variable and returns total sales and profit for that region. Test it for all four regions in a loop.
    3. Pull the top 10 customers by total sales using SQL GROUP BY. Then in pandas, add a column showing each customer’s share of total revenue as a percentage.
    4. Write a complete workflow: SQL pulls order data joined with customer and product tables. pandas calculates profit margin per segment. Print a clean summary table.
    5. Save the result of a complex JOIN query to a CSV file. Then reload it from CSV in a new cell and confirm the row count matches.

    Summary — What You Can Now Do

    • Open, use, and close a SQLite connection correctly in Python
    • Use context managers for safe automatic connection handling
    • Query a database using pd.read_sql() and work with the result as a pandas DataFrame
    • Write parameterised queries to safely pass variable values into SQL
    • Read large result sets in chunks using the chunksize parameter
    • Write pandas DataFrames back to a database table using to_sql()
    • Decide confidently whether a filter or aggregation belongs in SQL or pandas
    • Build a clean end-to-end SQL-Python workflow from database connection to final insight
    • Understand how the SQLite workflow transfers directly to production databases

    Module 2 Complete

    You have now finished all six topics in Module 2. Here is what you can do that you could not at the start of this module:

    • Query any relational database using SELECT, WHERE, ORDER BY, LIMIT, and DISTINCT
    • Summarise data with COUNT, SUM, AVG, MIN, MAX, GROUP BY, and HAVING
    • Combine multiple tables using INNER JOIN and LEFT JOIN
    • Write subqueries in WHERE and FROM clauses for multi-step analysis
    • Connect SQL to Python, retrieve results as DataFrames, and decide where each tool does its best work

    The Mini Project for this module brings all of this together. You will use SQL to query the Superstore database, answer five business questions, pull the results into pandas, and write a short plain-English brief of your findings — exactly what a junior analyst would be asked to do in their first month on the job.

    Up next — Module 2 Mini Project

    Five business questions. One database. SQL queries, pandas output, and a written brief. Your first end-to-end analyst deliverable.

  • SQL Subqueries Explained: WHERE and FROM Subqueries for Data Analysts with Examples

    What Is a Subquery?

    A subquery is a SELECT statement written inside another SELECT statement. The inner query runs first, produces a result, and the outer query uses that result to complete its own logic.

    You have already written queries that filter rows, aggregate data, and join tables. A subquery combines those capabilities into a single statement — letting you answer multi-step business questions without breaking them into separate queries or creating temporary tables.

    Here is the simplest way to think about it. Imagine you want to find all orders where the sales value is above average. You cannot write WHERE sales > AVG(sales) directly — SQL does not allow aggregate functions inside a WHERE clause. But you can write a subquery that calculates the average first, then use that result in the WHERE condition:

    -- Find all orders above the average sales value
    query("""
    
    SELECT order_id, customer_name, sales
    FROM superstore
    WHERE sales > (SELECT AVG(sales) FROM superstore)
    ORDER BY sales DESC
    LIMIT 10
    
    """)

    The inner query SELECT AVG(sales) FROM superstore runs first and returns a single number. The outer query then uses that number as the filter threshold. This is a subquery in its most fundamental form.

    Why Subqueries Matter for Analysts

    New analysts sometimes wonder whether subqueries are necessary — after all, you could run two separate queries and use the result of the first manually. That works for one-off exploration but breaks down quickly in real work.
    
    Subqueries let you build self-contained, reusable queries that answer complex questions in a single execution. They make your SQL readable, auditable, and easy to hand off to a colleague. When you save a query in a reporting tool or share it with your team, everything is in one place — not split across two separate statements that need to be run in a specific order.

    They also open the door to a category of questions that would be genuinely difficult to answer any other way — questions like “which customers spend more than the average customer?” or “which products outsell the category average?” These comparisons require knowing a benchmark first, then filtering against it. Subqueries are the natural SQL tool for that.

    Types of Subqueries Covered in This Topic

    There are two subquery patterns every analyst needs to know:

    WHERE subquery — the inner query runs and produces a value or list of values that the outer query filters on. Used for comparison and filtering against calculated benchmarks.

    FROM subquery — the inner query runs and produces a temporary table that the outer query selects from. Used for multi-step aggregation and pre-filtering before a summary.

    Both patterns follow the same core principle: the inner query runs first, produces a result, and the outer query uses that result.

    WHERE Subqueries

    A WHERE subquery places a SELECT statement inside the WHERE clause. The inner query must return either a single value or a list of values that the outer WHERE condition can compare against.

    Comparing Against a Single Calculated Value

    This is the most common WHERE subquery pattern. Calculate a benchmark with the inner query, then filter rows against it in the outer query.

    -- Which orders have sales above the overall average?
    query("""
    
    SELECT
    order_id,
    customer_name,
    region,
    ROUND(sales, 2) AS sales
    FROM superstore
    WHERE sales > (
    SELECT AVG(sales)
    FROM superstore
    )
    ORDER BY sales DESC
    LIMIT 10
    
    """)

    The inner query returns one number — the average sales value across all orders. The outer query filters to rows where individual sales exceed that number. This gives you above-average orders without needing to know the average value in advance.

    -- Which orders have profit above the average profit?
    query("""
    
    SELECT
    order_id,
    customer_name,
    category,
    ROUND(profit, 2) AS profit
    FROM superstore
    WHERE profit > (
    SELECT AVG(profit)
    FROM superstore
    )
    ORDER BY profit DESC
    LIMIT 10
    
    """)

    Using IN with a Subquery

    When the inner query returns multiple values instead of one, use IN to check whether the outer query’s column matches any value in that list.

    -- Find all orders placed by customers in the Corporate segment
    query("""
    
    SELECT
    order_id,
    customer_id,
    order_date,
    region
    FROM orders
    WHERE customer_id IN (
    SELECT customer_id
    FROM customers
    WHERE segment = 'Corporate'
    )
    ORDER BY order_date DESC
    LIMIT 10
    
    """)

    The inner query returns a list of customer IDs belonging to the Corporate segment. The outer query then filters the orders table to only show orders where the customer ID appears in that list. This achieves a similar result to a JOIN — but the logic reads differently and is sometimes clearer depending on the question being asked.

    -- Find orders containing Technology products with high profit
    query("""
    
    SELECT
    o.order_id,
    o.order_date,
    o.region
    FROM orders o
    WHERE o.order_id IN (
    SELECT order_id
    FROM order_items
    WHERE category = 'Technology'
    AND profit > 500
    )
    ORDER BY o.order_date DESC
    LIMIT 10
    
    """)

    NOT IN — Exclusion Filter

    The reverse of IN is NOT IN — filter to rows where the value does not appear in the subquery result. Useful for finding records that are absent from another dataset.

    -- Find customers who have never ordered a Technology product
    query("""
    
    SELECT DISTINCT
    customer_id,
    customer_name,
    segment
    FROM customers
    WHERE customer_id NOT IN (
    SELECT DISTINCT o.customer_id
    FROM orders o
    INNER JOIN order_items oi
    ON o.order_id = oi.order_id
    WHERE oi.category = 'Technology'
    )
    ORDER BY customer_name
    LIMIT 10
    
    """)

    NOT IN and NULLs — important warning: If the subquery result contains any NULL values, NOT IN returns no rows at all. This is a subtle but serious bug. Always add WHERE column IS NOT NULL inside a NOT IN subquery to be safe.
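    The trap is easy to reproduce. This self-contained sketch uses a tiny toy dataset (not superstore.db) to show NOT IN silently returning nothing, and the IS NOT NULL guard fixing it:

```python
import sqlite3
import pandas as pd

# Toy data: customers A, B, C; orders from A and one order with a NULL customer
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE customers (customer_id TEXT)")
conn.execute("CREATE TABLE orders (customer_id TEXT)")
conn.executemany("INSERT INTO customers VALUES (?)", [('A',), ('B',), ('C',)])
conn.executemany("INSERT INTO orders VALUES (?)", [('A',), (None,)])

# One NULL in the subquery result — NOT IN silently returns zero rows
buggy = pd.read_sql("""
    SELECT customer_id FROM customers
    WHERE customer_id NOT IN (SELECT customer_id FROM orders)
""", conn)

# Excluding NULLs restores the expected behaviour
safe = pd.read_sql("""
    SELECT customer_id FROM customers
    WHERE customer_id NOT IN (
        SELECT customer_id FROM orders WHERE customer_id IS NOT NULL
    )
""", conn)

print(len(buggy))  # 0 — the trap
print(len(safe))   # 2 — B and C, as intended
conn.close()
```

    The reason: B NOT IN ('A', NULL) evaluates to NULL rather than true, so every row fails the filter silently.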

    FROM Subqueries

    A FROM subquery places a SELECT statement in the FROM clause, treating the result of the inner query as a temporary table. The outer query then selects from that temporary table as if it were a real one.

    This pattern is used when you need to aggregate data in two stages — for example, first summarise by one dimension, then summarise or filter that summary further.

    Syntax:

    SELECT columns
    FROM (
        SELECT columns
        FROM table
        GROUP BY something
    ) AS subquery_alias
    WHERE condition

    The alias after the closing bracket — AS subquery_alias — is required. SQL needs a name to refer to the temporary table in the outer query.

    Two-Stage Aggregation

    -- Step 1 inner query: calculate total sales per customer
    -- Step 2 outer query: find customers above a revenue threshold
    query("""
    
    SELECT
    customer_name,
    segment,
    ROUND(total_sales, 2) AS total_sales
    FROM (
    SELECT
    c.customer_name,
    c.segment,
    SUM(oi.sales) AS total_sales
    FROM orders o
    INNER JOIN customers c
    ON o.customer_id = c.customer_id
    INNER JOIN order_items oi
    ON o.order_id = oi.order_id
    GROUP BY c.customer_name, c.segment
    ) AS customer_summary
    WHERE total_sales > 5000
    ORDER BY total_sales DESC
    LIMIT 10
    
    """)

    Notice the outer query uses WHERE total_sales > 5000 — filtering on the aggregated column from the inner query by its alias. In this simple case HAVING SUM(oi.sales) > 5000 would also work, but the FROM subquery scales better: once the summary exists as a named temporary table, you can filter it, join it, or aggregate it again in ways a single GROUP BY with HAVING cannot express.

    Filtering a Summary Before Further Analysis

    -- Find regions where average order value exceeds $250
    query("""
    
    SELECT
    region,
    ROUND(avg_order_value, 2) AS avg_order_value,
    total_orders
    FROM (
    SELECT
    o.region,
    AVG(oi.sales) AS avg_order_value,
    COUNT(*) AS total_orders
    FROM orders o
    INNER JOIN order_items oi
    ON o.order_id = oi.order_id
    GROUP BY o.region
    ) AS region_summary
    WHERE avg_order_value > 250
    ORDER BY avg_order_value DESC
    
    """)

    The inner query calculates average order value and total orders per region. The outer query then filters to regions where the average exceeds $250. This is clean, readable, and easy to modify — just change the threshold in the outer WHERE clause.

    Ranking Categories by Performance

    -- Which product categories have above-average total profit?
    query("""
    
    SELECT
    category,
    ROUND(total_profit, 2) AS total_profit
    FROM (
    SELECT
    category,
    SUM(profit) AS total_profit
    FROM order_items
    GROUP BY category
    ) AS category_totals
    WHERE total_profit > (
    SELECT AVG(total_profit)
    FROM (
    SELECT SUM(profit) AS total_profit
    FROM order_items
    GROUP BY category
    ) AS avg_calc
    )
    ORDER BY total_profit DESC
    
    """)

    This query has a subquery inside a FROM clause and another subquery inside a WHERE clause — both in the same statement. It reads from the inside out: the innermost queries run first, their results feed the next level, and the outermost query produces the final answer. This is advanced but follows the exact same rules you have already learned.

    Subquery vs JOIN — When to Use Which

    Both subqueries and JOINs can answer many of the same questions. Choosing between them is partly about correctness and partly about readability.

    | Situation | Better Choice | Why |
    | --- | --- | --- |
    | Filtering based on a calculated value (avg, max) | Subquery in WHERE | JOINs cannot filter on aggregations directly |
    | Finding records absent from another table | Subquery with NOT IN or LEFT JOIN + IS NULL | Both work — LEFT JOIN is safer with NULLs |
    | Combining columns from two tables in the result | JOIN | Subqueries in WHERE do not add columns to output |
    | Two-stage aggregation | FROM subquery | Cleaner than a JOIN for pre-summarised data |
    | Simple lookup across two tables | JOIN | Faster and more readable for straightforward matches |
    | Filtering to a dynamic list from another table | Either — IN subquery or INNER JOIN both work | JOIN is generally faster on large datasets |

    The practical guideline: use a JOIN when you need columns from both tables in your output. Use a subquery when you are filtering or calculating based on a value derived from another query and do not need to show those extra columns.

    Common Subquery Mistakes

    | Mistake | What Happens | Fix |
    | --- | --- | --- |
    | Missing alias on FROM subquery | SQL error — every derived table needs a name | Always add AS alias_name after the closing bracket |
    | Inner query returns multiple rows in a single-value context | SQL error | Use IN instead of = when the subquery can return multiple rows |
    | NOT IN with NULLs in subquery result | Returns zero rows silently | Add WHERE column IS NOT NULL inside the NOT IN subquery |
    | Deeply nested subqueries that are hard to read | Difficult to debug and maintain | Break into steps using pandas after pulling data, or use CTEs in future |
    | Using a subquery when a JOIN would be simpler | Slower and harder to read | If you need columns from both tables, use a JOIN |

    Practice Exercises

    1. Find all orders where sales are below the average sales value. Show order ID, customer name, and sales. Sort by sales ascending.
    2. Find all customers from the Consumer segment who have placed more than 5 orders. Use a FROM subquery to first count orders per customer, then filter.
    3. Find all sub-categories where total profit is above the average sub-category profit. Use a FROM subquery for the totals and a WHERE subquery for the average.
    4. Using NOT IN, find all customers who have never placed an order in the West region.
    5. Find the top 3 regions by average order value using a FROM subquery. Show region, average order value, and total orders.

    Summary — What You Can Now Do

    • Explain what a subquery is and why it runs before the outer query
    • Write a WHERE subquery to filter rows against a single calculated value
    • Use IN and NOT IN with subqueries to filter against a list of values
    • Write a FROM subquery to create a temporary summary table for further filtering
    • Combine WHERE and FROM subqueries in the same query for multi-step analysis
    • Choose between a subquery and a JOIN based on what the question requires
    • Avoid common subquery errors including missing aliases and NOT IN with NULLs

    Up next — Topic 6: SQL Meets Python

    Topic 6 brings everything together — connecting SQLite to Python, running queries with pd.read_sql(), deciding when to filter in SQL versus pandas, and building a workflow where SQL retrieves the data and Python does the analysis. This is the bridge between Module 2 and everything that follows in the course.

  • SQL JOINs Explained: INNER JOIN and LEFT JOIN for Data Analysts with Examples

    What Is a JOIN and Why Do You Need It

    Every query in Topics 2 and 3 touched a single table. That works fine when all your data lives in one place — but in real databases it almost never does.

    Customer details live in a customers table. Orders live in an orders table. Products live in a products table. These tables are kept separate deliberately — it avoids storing the same customer name and address on every single order they place. Instead, each order stores a customer ID, and that ID links back to the customer record.

    This is efficient for storage and data integrity. But it means that to answer most real business questions, you need to combine two or more tables. That is exactly what a JOIN does.

    A JOIN connects two tables based on a shared column — usually an ID that appears in both. The result is a new combined table containing columns from both sources, matched row by row.

    Without JOINs, a relational database is just a collection of disconnected tables. With JOINs, it becomes a connected system you can query across freely.

    Setting Up the Superstore Tables for JOINs

    In Topics 2 and 3 the Superstore data was one flat table. To practice JOINs properly you need it split into separate tables the way a real database works. Run this setup code once in your notebook to create the tables:

    import sqlite3
    import pandas as pd
    
    conn = sqlite3.connect('superstore.db')
    
    # Load the original flat table
    df = pd.read_csv('superstore_sales.csv')
    df.columns = [col.strip().replace(' ', '_').lower() for col in df.columns]
    
    # Create customers table
    customers = df[['customer_id', 'customer_name', 'segment']].drop_duplicates()
    customers.to_sql('customers', conn, if_exists='replace', index=False)
    
    # Create orders table
    orders = df[['order_id', 'customer_id', 'order_date',
                 'ship_date', 'ship_mode', 'region',
                 'city', 'state']].drop_duplicates()
    orders.to_sql('orders', conn, if_exists='replace', index=False)
    
    # Create order_items table
    order_items = df[['order_id', 'product_id', 'sub_category',
                      'category', 'sales', 'quantity',
                      'discount', 'profit']].drop_duplicates()
    order_items.to_sql('order_items', conn, if_exists='replace', index=False)
    
    print("Tables created successfully.")
    conn.close()
    

    Now you have three separate tables — customers, orders, and order_items — linked by shared ID columns. This mirrors how a real company database is structured.

    conn = sqlite3.connect('superstore.db')
    
    def query(sql):
        return pd.read_sql(sql, conn)

    How a JOIN Works — The Mental Model

    Before writing JOIN syntax, understand what is happening conceptually.

    Imagine two tables sitting side by side. The orders table has a column called customer_id. The customers table also has a column called customer_id. A JOIN says: for every row in orders, find the matching row in customers where the customer_id values are equal, and combine them into one wider row.

    The column you join on is called the join key. It must exist in both tables and contain matching values. In the Superstore setup:

    •   orders.customer_id links to customers.customer_id
    •   order_items.order_id links to orders.order_id

    The difference between JOIN types is what happens when a match is not found. That is the entire distinction between INNER JOIN and LEFT JOIN.

    INNER JOIN — Only Matching Rows

    Syntax:

    SELECT columns
    FROM table_one
    INNER JOIN table_two ON table_one.key = table_two.key

    INNER JOIN returns only rows where a match exists in both tables. If a row in the left table has no matching row in the right table, it is excluded from the result. If a row in the right table has no match in the left table, it is also excluded.

    Think of it as the intersection — only rows that exist in both tables come through.

    Your First INNER JOIN

    -- Combine orders with customer details
    query("""
    
    SELECT
    o.order_id,
    o.order_date,
    o.region,
    c.customer_name,
    c.segment
    FROM orders o
    INNER JOIN customers c ON o.customer_id = c.customer_id
    LIMIT 10
    
    """)

    Output (first 5 rows):

    | order_id | order_date | region | customer_name | segment |
    | --- | --- | --- | --- | --- |
    | CA-2020-152156 | 2020-11-08 | South | Claire Gute | Consumer |
    | CA-2020-138688 | 2020-06-12 | West | Darrin Van Huff | Corporate |
    | US-2019-108966 | 2019-10-11 | South | Sean O’Donnell | Consumer |
    | CA-2019-115812 | 2019-06-09 | East | Brosina Hoffman | Consumer |
    | CA-2019-114412 | 2019-04-15 | West | Andrew Allen | Consumer |

    Notice o. and c. before column names. These are table aliases — shorthand so you don’t have to type the full table name every time. orders o means “refer to the orders table as o.” This becomes essential when both tables have columns with the same name.

    Joining Three Tables

    Most real analyst queries join more than two tables. Here is how to bring orders, customers, and order_items together in one query:

    -- Full picture: customer details + order details + financials
    query("""
    
    SELECT
    c.customer_name,
    c.segment,
    o.order_date,
    o.region,
    oi.category,
    oi.sub_category,
    ROUND(oi.sales, 2) AS sales,
    ROUND(oi.profit, 2) AS profit
    FROM orders o
    INNER JOIN customers c
    ON o.customer_id = c.customer_id
    INNER JOIN order_items oi
    ON o.order_id = oi.order_id
    LIMIT 10
    
    """)

    Each INNER JOIN adds another table into the result. The pattern is always the same — join on the shared key column between the two tables being connected.

    INNER JOIN with WHERE and ORDER BY

    JOINs combine cleanly with everything from Topics 2 and 3:

    -- High-value orders in the West region with customer details
    query("""
    
    SELECT
    c.customer_name,
    c.segment,
    o.region,
    oi.category,
    ROUND(oi.sales, 2) AS sales,
    ROUND(oi.profit, 2) AS profit
    FROM orders o
    INNER JOIN customers c
    ON o.customer_id = c.customer_id
    INNER JOIN order_items oi
    ON o.order_id = oi.order_id
    WHERE o.region = 'West'
    AND oi.sales > 1000
    ORDER BY oi.sales DESC
    LIMIT 10
    
    """)

    LEFT JOIN — Keep All Rows from the Left Table

    Syntax:

    SELECT columns
    FROM table_one
    LEFT JOIN table_two ON table_one.key = table_two.key

    LEFT JOIN returns all rows from the left table, plus matching rows from the right table. When no match exists in the right table, the columns from the right table come back as NULL.

    The key difference from INNER JOIN: no rows from the left table are ever dropped. Even if they have no match on the right side, they appear in the result — just with NULL values in the right table’s columns.

    When to Use LEFT JOIN

    LEFT JOIN is the right choice when you want to keep all records from one table regardless of whether they have matching records in another. Common scenarios:

    • Find customers who have never placed an order
    • Find products that have never been sold
    • Identify records in one system that are missing from another
    • Audit data completeness across two sources

    LEFT JOIN Example — Finding Unmatched Records

    To demonstrate LEFT JOIN clearly, first add a test customer with no orders:

    # Add a customer who has never ordered
    
    import pandas as pd
    import sqlite3
    conn = sqlite3.connect('superstore.db')
    new_customer = pd.DataFrame([{
    'customer_id': 'TEST-001',
    'customer_name': 'Test Customer',
    'segment': 'Consumer'
    }])
    new_customer.to_sql('customers', conn, if_exists='append', index=False)
    conn.commit()

    Now run a LEFT JOIN to find customers with no orders:

    -- Find customers who have never placed an order
    query("""
    
    SELECT
    c.customer_id,
    c.customer_name,
    c.segment,
    o.order_id
    FROM customers c
    LEFT JOIN orders o ON c.customer_id = o.customer_id
    WHERE o.order_id IS NULL
    
    """)

    Output:

    | customer_id | customer_name | segment | order_id |
    | --- | --- | --- | --- |
    | TEST-001 | Test Customer | Consumer | NULL |

    The NULL in order_id tells you this customer exists in the customers table but has no matching record in the orders table. The INNER JOIN version of this query would have excluded this row entirely — you would never know this customer existed.

    This pattern — LEFT JOIN followed by WHERE right_table.key IS NULL — is one of the most useful SQL techniques for data quality auditing.

    Handling NULLs After a JOIN

    NULL values appearing after a LEFT JOIN are expected and useful — they signal missing matches. But they need to be handled carefully in any further calculations or filtering.

    COALESCE — Replacing NULL with a Default Value

    COALESCE returns the first non-NULL value from a list of arguments. Use it to replace NULLs with a meaningful default:

    -- Replace NULL order_id with a readable label
    query("""
    
    SELECT
    c.customer_name,
    COALESCE(o.order_id, 'No orders yet') AS order_status
    FROM customers c
    LEFT JOIN orders o ON c.customer_id = o.customer_id
    WHERE o.order_id IS NULL
    
    """)

    COALESCE is also useful when joining tables where a column might be populated in one table but missing in another. Rather than seeing NULL in your output, you get a clean fallback value.
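COALESCE accepts any number of arguments and scans them left to right, which is worth seeing in isolation. A minimal sketch with sqlite3 and illustrative literal values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# COALESCE returns the first non-NULL argument, scanning left to right
fallback = conn.execute("SELECT COALESCE(NULL, NULL, 'fallback')").fetchone()[0]
primary = conn.execute("SELECT COALESCE('primary', 'fallback')").fetchone()[0]
print(fallback, primary)  # fallback primary
```

Because it takes a whole list of arguments, you can chain several possible sources, for example COALESCE(mobile_phone, office_phone, 'no contact'), and the first populated one wins.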

    NULL in Aggregate Functions After a JOIN

    One important behaviour: aggregate functions like COUNT, SUM, and AVG ignore NULL values automatically. This matters after a LEFT JOIN because unmatched rows produce NULLs in the right table’s columns.

    -- Count orders per customer — unmatched customers show 0, not NULL
    query("""
    
    SELECT
    c.customer_name,
    COUNT(o.order_id) AS order_count
    FROM customers c
    LEFT JOIN orders o ON c.customer_id = o.customer_id
    GROUP BY c.customer_name
    ORDER BY order_count DESC
    LIMIT 10
    
    """)

    COUNT(o.order_id) counts non-NULL values only — so customers with no orders correctly show 0. If you used COUNT(*) here you would get 1 for every customer including unmatched ones, because COUNT(*) counts the row itself regardless of NULL values.
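The difference is easy to verify directly. This sketch uses sqlite3 with two illustrative customers, one with two orders and one with none, and runs both counts side by side after a LEFT JOIN:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (customer_id TEXT, customer_name TEXT);
    CREATE TABLE orders (order_id TEXT, customer_id TEXT);
    INSERT INTO customers VALUES ('C-1', 'Has Orders'), ('C-2', 'No Orders');
    INSERT INTO orders VALUES ('O-1', 'C-1'), ('O-2', 'C-1');
""")

rows = conn.execute("""
    SELECT c.customer_name,
           COUNT(o.order_id) AS count_col,   -- skips NULLs: unmatched customer gets 0
           COUNT(*)          AS count_star   -- counts rows: unmatched customer still gets 1
    FROM customers c
    LEFT JOIN orders o ON c.customer_id = o.customer_id
    GROUP BY c.customer_name
    ORDER BY c.customer_name
""").fetchall()
print(rows)  # [('Has Orders', 2, 2), ('No Orders', 0, 1)]
```

For the matched customer the two counts agree; for the unmatched one, COUNT(*) counts the single all-NULL joined row while COUNT(o.order_id) correctly reports zero orders.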

    INNER JOIN vs LEFT JOIN — When to Use Which

    | Situation | Use |
    | --- | --- |
    | You only want rows that exist in both tables | INNER JOIN |
    | You want all rows from the left table, matched or not | LEFT JOIN |
    | Finding records that are missing from another table | LEFT JOIN + WHERE right key IS NULL |
    | Combining sales data with product details | INNER JOIN |
    | Auditing customers with no orders | LEFT JOIN |
    | Getting complete order history including customer info | INNER JOIN |
    | Data completeness check across two systems | LEFT JOIN |

    The default choice for most analyst queries is INNER JOIN — you usually only want complete, matched records. Reach for LEFT JOIN specifically when missing matches are meaningful information rather than just gaps to exclude.

    A Complete Business Query Using JOINs

    Here is a realistic analyst query combining JOINs, WHERE, GROUP BY, and ORDER BY to answer a full business question:

    Question: Which customer segments generate the most revenue and profit, broken down by product category?

    query("""
    
    SELECT
    c.segment,
    oi.category,
    COUNT(DISTINCT o.order_id) AS total_orders,
    ROUND(SUM(oi.sales), 2) AS total_revenue,
    ROUND(SUM(oi.profit), 2) AS total_profit,
    ROUND(SUM(oi.profit) / SUM(oi.sales) * 100, 1) AS profit_margin_pct
    FROM orders o
    INNER JOIN customers c
    ON o.customer_id = c.customer_id
    INNER JOIN order_items oi
    ON o.order_id = oi.order_id
    GROUP BY c.segment, oi.category
    ORDER BY c.segment, total_revenue DESC
    
    """)

    This single query pulls from three tables, aggregates across two dimensions, calculates a derived metric, and produces a result a manager could read directly in a meeting. That is the power of combining JOINs with everything from the previous topics.

    Common JOIN Mistakes

    | Mistake | What Happens | Fix |
    | --- | --- | --- |
    | Joining on the wrong column | Incorrect or empty results | Double-check which columns are the shared keys between tables |
    | Forgetting table aliases when column names clash | SQL error — ambiguous column name | Always use aliases when the same column name exists in both tables |
    | Using INNER JOIN when LEFT JOIN is needed | Silently drops unmatched rows | Ask yourself — do I care about rows with no match? If yes, use LEFT JOIN |
    | Not handling NULLs after LEFT JOIN | Wrong aggregation results | Use COALESCE for display, COUNT(column) not COUNT(*) for counting |
    | Joining without an ON clause | Cartesian product — every row matched to every row | Always include the ON condition |
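The Cartesian product mistake is worth demonstrating, because on real tables it silently multiplies your row count. A minimal sqlite3 sketch with two illustrative tables of 3 and 2 rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER);
    CREATE TABLE b (id INTEGER);
    INSERT INTO a VALUES (1), (2), (3);
    INSERT INTO b VALUES (10), (20);
""")

# Without an ON condition, every row of a pairs with every row of b: 3 x 2 = 6 rows
cartesian = conn.execute("SELECT COUNT(*) FROM a JOIN b").fetchone()[0]
# With the ON condition, only genuinely matching rows survive (here: none)
matched = conn.execute("SELECT COUNT(*) FROM a JOIN b ON a.id = b.id").fetchone()[0]
print(cartesian, matched)  # 6 0
```

With 10,000 orders and 800 customers, the same mistake produces 8,000,000 rows, and aggregates computed over them will be confidently, invisibly wrong.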

    Practice Exercises

    1. Join the orders and customers tables. Show the customer name, region, and order date for all orders placed in 2021.
    2. Join all three tables. Find the top 5 customers by total profit across all their orders.
    3. Using a LEFT JOIN, find any order IDs in order_items that do not have a matching record in the orders table.
    4. Join orders and customers. Group by segment and region. Show total revenue per combination sorted by revenue descending.
    5. Join all three tables. Filter to Furniture category only. Show total sales and profit per customer segment.

    Summary — What You Can Now Do

    • Explain what a JOIN does and why relational databases require them
    • Write an INNER JOIN to combine two or more tables on a shared key column
    • Write a LEFT JOIN to keep all rows from the left table including unmatched ones
    • Use LEFT JOIN with WHERE IS NULL to find records missing from a second table
    • Handle NULLs after a JOIN using COALESCE and COUNT(column) vs COUNT(*)
    • Combine JOINs with WHERE, GROUP BY, ORDER BY, and HAVING in a single query
    • Choose between INNER JOIN and LEFT JOIN based on whether unmatched rows matter

    Up next — Topic 5: Subqueries

    Topic 5 covers queries inside queries — how to use a SELECT result as a filter in WHERE, or as a derived table in FROM. Subqueries let you answer multi-step business questions in a single SQL statement without needing temporary tables or multiple separate queries.
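As a quick preview of the WHERE-filter flavour, here is a minimal sqlite3 sketch with an illustrative three-row table: the inner SELECT computes the average sales, and the outer query keeps only orders above it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id TEXT, sales REAL);
    INSERT INTO orders VALUES ('O-1', 100.0), ('O-2', 300.0), ('O-3', 500.0);
""")

# The inner SELECT runs first and returns a single value (the average, 300.0);
# the outer query then filters against that value
rows = conn.execute("""
    SELECT order_id, sales
    FROM orders
    WHERE sales > (SELECT AVG(sales) FROM orders)
""").fetchall()
print(rows)  # [('O-3', 500.0)]
```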