How to Fix Python Errors in Bash Scripts with Papermill Exit Codes

I’m often automating Jupyter notebook runs with Papermill inside Bash scripts. One day I realized that, no matter how many AssertionErrors or unexpected exceptions popped up in my notebooks, my wrapper script happily reported success, because its exit-code check never actually saw Papermill’s failures. I’ll walk you through the problem I faced, how my wrapper masked the errors, and the step-by-step changes I made to ensure real failures bubble up so my CI/CD pipeline actually catches them.

The Problem

I have a Bash script, nbTest.sh, which loops through a folder of notebooks and executes each via Papermill:

#!/usr/bin/env bash

for d in *.ipynb; do
    papermill python_notebooks/testing/"$d" \
        python_notebooks/testing/results/"$d".results.ipynb || True

    if [ $? -eq 0 ]; then
        echo "Successfully executed script"
    else
        echo "Script exited with error." >&2
    fi
done

Even when a cell throws an AssertionError, I see the full traceback in the output notebook, but my terminal still shows:

Successfully executed script

That means downstream steps think everything passed, even when the notebooks clearly failed.

Why the Script Always Reports Success

Papermill itself actually does the right thing here: when a cell raises, it renders the traceback in the output notebook and then exits with a non-zero status. The masking happens on the Bash side. The || True tacked onto the Papermill call (presumably meant to be the true builtin) means that whenever Papermill fails, the command after || runs instead, and the special variable $? then reflects that last command rather than Papermill. By the time the if [ $? -eq 0 ] check executes, the real exit code is long gone, so the script reports success no matter what happened inside the notebook.
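
You can watch the masking happen in a throwaway snippet, with the false builtin standing in for a failing Papermill run:

false || true
echo $?    # prints 0: the failure on the left of || has already been swallowed

if false; then
    echo "Successfully executed script"
else
    echo "Script exited with error." >&2    # testing the command directly catches it
fi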

How the Original Script Works

  • Looping Over Notebooks
    for d in *.ipynb; do … done
    The glob matches notebooks in the current working directory, but the Papermill call then prefixes python_notebooks/testing/, so the two locations have to line up for this to work at all.
  • Running Papermill
    papermill input.ipynb output.ipynb || True
    The || True fallback runs whenever Papermill fails and throws its exit status away.
  • Checking the Exit Code
    if [ $? -eq 0 ]; then
    $? holds the status of whatever ran last, which by this point is the fallback rather than Papermill, so the test always passes. (The minimal fix is sketched right after this list.)
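
Stripped of retries and logging, the minimal fix is simply to drop the fallback and let if test Papermill's exit status directly (input.ipynb and output.ipynb are placeholders, as above):

if papermill input.ipynb output.ipynb; then
    echo "Successfully executed script"
else
    echo "Script exited with error." >&2
fi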

My Improved Script

I needed my script to:

  • Actually fail when a notebook cell errors.
  • Retry briefly in case of transient issues.
  • Log timestamps and summaries for auditing.
  • Give clear feedback in the console.

Here’s the revised nbTest.sh that does just that:

#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'

LOGFILE="notebook_run.log"
MAX_RETRIES=2

echo "=== Notebook execution started at $(date) ===" >> "$LOGFILE"

run_notebook() {
    local src="$1"
    local dst="$2"
    local attempt=1

    until [ "$attempt" -gt "$MAX_RETRIES" ]; do
        echo "[$(date '+%H:%M:%S')] Attempt #$attempt: $src" >> "$LOGFILE"
        # Papermill already exits non-zero when a cell fails, so && is all we need.
        papermill "$src" "$dst" && return 0

        echo "[$(date '+%H:%M:%S')] ERROR in $src on attempt #$attempt" >> "$LOGFILE"
        attempt=$((attempt + 1))
        sleep 2
    done

    echo "[$(date '+%H:%M:%S')] FAILED $src after $MAX_RETRIES attempts" >> "$LOGFILE"
    return 1
}

failures=0
# Make sure the results folder exists before the first run.
mkdir -p python_notebooks/testing/results

for nb in python_notebooks/testing/*.ipynb; do
    out="python_notebooks/testing/results/$(basename "$nb" .ipynb).results.ipynb"
    if run_notebook "$nb" "$out"; then
        echo "✔ $(basename "$nb") succeeded."
    else
        echo "✖ $(basename "$nb") failed; check $LOGFILE." >&2
        failures=$((failures + 1))
        # You could trigger email or Slack alerts here
    fi
done

echo "=== Notebook execution finished at $(date) ===" >> "$LOGFILE"

Key Changes

  • set -euo pipefail
    Stops on any error, treats unset variables as failures, and fails pipelines if any stage fails.
  • Checking Papermill’s status directly
    Papermill already exits non-zero whenever a notebook cell errors out; with the || True fallback gone and run_notebook tested straight in the if, that failure finally reaches the script.
  • Failure propagation
    A failures counter records every notebook that never succeeds, and the script exits 1 at the end if the counter is non-zero, so downstream CI/CD stages actually stop.
  • Retry Logic
    The run_notebook function retries up to MAX_RETRIES times before giving up.
  • Structured Logging
    Appends timestamped entries to notebook_run.log for every start, error, and final status (a sample excerpt follows this list).
  • Console Feedback
    Uses ✔/✖ emojis so you can instantly see which notebooks passed or failed.
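
For reference, a run in which one notebook errors on its first attempt and then succeeds on the second would leave entries along these lines in notebook_run.log (the timestamps and notebook name here are made up for illustration):

=== Notebook execution started at Tue Mar  4 09:12:01 UTC 2025 ===
[09:12:01] Attempt #1: python_notebooks/testing/sales_report.ipynb
[09:12:14] ERROR in python_notebooks/testing/sales_report.ipynb on attempt #1
[09:12:16] Attempt #2: python_notebooks/testing/sales_report.ipynb
=== Notebook execution finished at Tue Mar  4 09:12:30 UTC 2025 ===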

Next Steps & Practice Ideas

  1. Parallelization
    Use GNU Parallel or background jobs (&) to execute multiple notebooks at once (a rough sketch follows this list).
  2. Alert Integration
    Hook into Slack or email APIs so your team gets notified immediately on failures.
  3. Summary Reports
    Write a small parser in Python or Bash to scan notebook_run.log and produce a summary table of successes vs. failures; a one-line starting point is shown after this list.
  4. Dockerize Your Workflow
    Bundle Papermill, your notebooks, and this script into a Docker container to guarantee consistency across environments.
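
As a starting point for item 1, a GNU Parallel version of the main loop might look something like this sketch; it assumes parallel is installed, reuses the directory layout from the script above, and skips the retry and logging logic for brevity:

# Run up to four notebooks at a time; parallel exits non-zero if any job fails.
parallel --halt soon,fail=1 -j 4 \
    papermill {} python_notebooks/testing/results/{/.}.results.ipynb \
    ::: python_notebooks/testing/*.ipynb

And for item 3, the simplest possible report is just counting the FAILED entries the script writes:

# How many notebooks exhausted their retries, across everything logged so far
grep -c 'FAILED' notebook_run.log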

Final Thought

I used to shrug off silent notebook errors; after all, the output file showed the traceback. But once this script landed in our CI pipeline, hidden failures started cascading into “working” analyses, false dashboards, and wasted debugging time. By letting Papermill’s exit codes through and tightening up the Bash error handling, I turned silent exceptions into loud alarms. Now, when something goes wrong in a notebook, I know about it immediately, and you can too.
