How to Handle a Python Error in a Bash Script (with Code)

I recently ran into a frustrating situation while automating some Jupyter notebook runs using Papermill. I expected my bash script to fail whenever a notebook raised a Python error like an AssertionError. Instead, my script happily reported “Successfully executed script” even when the notebook had clearly blown up.

It turned out the problem wasn’t with Papermill; it was with my bash script.

The Broken Version: Where I Went Wrong

Here’s the first version of my script:

# nbTest.sh (broken)
papermill python_notebooks/testing/$d python_notebooks/testing/results/$d.results.ipynb || true
if [ $? -eq 0 ]
then
  echo "Successfully executed script"
else
  echo "Script exited with error."
fi

At first glance, this looked fine: run Papermill, then check the exit code. But I made two mistakes:

  1. || true masks failures.
    Writing cmd || true means “if cmd fails, run true.” Since the true builtin always exits with status 0, the whole command line succeeds, even if Papermill exploded.
  2. $? checked the wrong command.
    $? holds the exit code of the most recently executed command. In my case, that was true, not Papermill, so my script always saw “success.”

TL;DR: I short-circuited the error and then checked the wrong thing.
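You can see the masking in two lines, with `false` standing in for a failing Papermill run:

```shell
# `false` stands in for a failing papermill call:
false || true
echo $?        # 0 — `true` ran last, so the failure is invisible
```

No matter how badly the first command fails, `$?` only ever sees `true`.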

The Minimal Fix

The easiest way to fix this is to remove the || True and test Papermill directly inside the if statement:

# nbTest.sh (fixed, minimal)
if papermill "python_notebooks/testing/$d" "python_notebooks/testing/results/$d.results.ipynb"; then
  echo "Successfully executed script"
else
  echo "Script exited with error." >&2
fi

Now, if Papermill raises an AssertionError or any other failure, it exits with a non-zero code. My script correctly prints the error branch.
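If you also want the exit code for logging, `$?` is still available inside the else branch, because the last command executed there was the `if` condition itself. A self-contained sketch, with a hypothetical `run_notebook` function (just `false` under the hood) standing in for the real papermill call:

```shell
# run_notebook is a hypothetical stand-in: `false` plays the part of a
# papermill call that raises an AssertionError.
run_notebook() { false; }   # pretend: papermill "$in_nb" "$out_nb"

if run_notebook; then
  echo "Successfully executed script"
else
  # $? here is run_notebook's exit code, since it was the last command run
  echo "Script exited with error (code $?)." >&2
fi
```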

A More Robust Approach

After fixing the basics, I wanted something more resilient. Bash has strict modes that make scripts safer:

#!/usr/bin/env bash
set -euo pipefail
# -e: exit on error
# -u: error on undefined variables
# -o pipefail: fail if any command in a pipeline fails

trap 'echo "Error on line $LINENO" >&2' ERR

d="${1:?Pass notebook name (without paths) as arg}"

in_nb="python_notebooks/testing/${d}"
out_nb="python_notebooks/testing/results/${d}.results.ipynb"

echo "Running: $in_nb -> $out_nb"
papermill "$in_nb" "$out_nb"
echo "OK: $d"

Here’s what happens:

  • If Papermill fails, the script stops immediately (set -e).
  • The ERR trap prints which line caused the failure.
  • I don’t have to check $? at all; the script does the right thing automatically.
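To see the trap fire without involving Papermill at all, here’s a tiny demo: an inner shell runs a strict-mode script in which `false` stands in for a failing notebook run, and we capture everything it prints plus its exit code.

```shell
# Self-contained demo of set -e plus the ERR trap.
output=$(bash -c '
  set -euo pipefail
  trap "echo \"Error on line \$LINENO\" >&2" ERR
  echo "before"
  false
  echo "never reached"
' 2>&1) && rc=0 || rc=$?

echo "$output"                  # "before", then the trap message
echo "script exit code: $rc"    # 1 — the failure propagated
```

The trap prints its message, and the non-zero exit code still propagates to the caller, which is exactly what a CI runner needs.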

Practical Functionality: Batch Runs

In real projects, I often run dozens of notebooks. I wanted a way to:

  • Run them all in sequence
  • Record which ones failed
  • Exit non-zero if any failed (perfect for CI pipelines)

So I wrote this:

#!/usr/bin/env bash
set -euo pipefail

NOTEBOOK_DIR="python_notebooks/testing"
RESULTS_DIR="python_notebooks/testing/results"
SUMMARY_CSV="${RESULTS_DIR}/run_summary.csv"

mkdir -p "$RESULTS_DIR"

# header
echo "notebook,input_path,output_path,status,exit_code" > "$SUMMARY_CSV"

any_failed=0

run_one() {
  local nb_basename="$1"
  local in_nb="${NOTEBOOK_DIR}/${nb_basename}"
  local out_nb="${RESULTS_DIR}/${nb_basename}.results.ipynb"

  echo "Running: $in_nb"
  if papermill "$in_nb" "$out_nb"; then
    echo "Success: $nb_basename"
    echo "${nb_basename},${in_nb},${out_nb},SUCCESS,0" >> "$SUMMARY_CSV"
  else
    local rc=$?
    echo "Failed:  $nb_basename (exit ${rc})" >&2
    echo "${nb_basename},${in_nb},${out_nb},FAIL,${rc}" >> "$SUMMARY_CSV"
    any_failed=1
  fi
}

# run all notebooks
shopt -s nullglob
for path in "${NOTEBOOK_DIR}"/*.ipynb; do
  nb_file="$(basename "$path")"
  run_one "$nb_file"
done

echo
echo "Summary written to: $SUMMARY_CSV"
if [[ $any_failed -ne 0 ]]; then
  echo "Some notebooks failed." >&2
  exit 1
fi
echo "All notebooks succeeded."

This version:

  • Iterates all notebooks in the testing folder
  • Logs results to a CSV (with exit codes)
  • Shows success or failure for each notebook in the console
  • Exits with 1 if anything failed

That way, my CI system fails the build automatically when a notebook breaks.
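The CSV is also easy to query after the fact. This sketch fabricates two sample rows matching the script’s header and counts the failures with awk, which makes a handy one-liner for CI logs:

```shell
# Two fabricated sample rows, matching the header the batch script writes:
SUMMARY_CSV="run_summary.csv"
printf '%s\n' \
  'notebook,input_path,output_path,status,exit_code' \
  'a.ipynb,in/a.ipynb,out/a.results.ipynb,SUCCESS,0' \
  'b.ipynb,in/b.ipynb,out/b.results.ipynb,FAIL,1' > "$SUMMARY_CSV"

# Count FAIL rows, skipping the header:
fails=$(awk -F, 'NR > 1 && $4 == "FAIL" { n++ } END { print n + 0 }' "$SUMMARY_CSV")
echo "failed notebooks: $fails"   # failed notebooks: 1
```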

Optional Nice-to-Haves

Once the basics worked, I added some polish:

  • Colored logs: wrapping echo with ANSI colors makes it easy to spot errors.
  • Timeouts: timeout 30m papermill … to prevent hangs.
  • Parameters: pass notebook params with -p name value.
  • Parallel runs: use GNU parallel if the notebooks don’t depend on each other.
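As a quick illustration of the timeout idea, here `sleep 10` stands in for a hung notebook run and a 2-second limit replaces the real 30 minutes; GNU `timeout` exits with code 124 when it has to kill the command:

```shell
# `sleep 10` stands in for a hung papermill run; 2s replaces the real 30m.
if timeout 2s sleep 10; then
  echo "finished"
else
  rc=$?
  echo "timed out or failed (exit $rc)" >&2   # GNU timeout exits 124 on timeout
fi
```

A timed-out run takes the error branch like any other failure, so it fits straight into the batch runner above.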

Final Thought

What I learned here is simple but important: most of my problems weren’t Papermill’s fault; they were caused by my bash habits. Adding || true looked harmless but killed error handling completely. By removing it, using stricter bash modes, and writing a batch runner, I now have a reliable way to run dozens of notebooks and catch failures automatically.
