Mastering a few key Jupyter Notebook tips — such as using keyboard shortcuts, writing self-documenting cells, configuring display settings, leveraging tab completion, and always restarting before sharing — can transform your notebook experience from frustrating and slow to fast, organized, and professional. These ten tips are the ones every beginner should learn in their first week with Jupyter.
Introduction: From Functional to Fluent
Learning to open Jupyter Notebook, create cells, and run Python code is only the beginning. The difference between a beginner who struggles and one who flows effortlessly through their work almost always comes down to a handful of habits and tricks that experienced practitioners take for granted but rarely explain explicitly.
Most people discover these tips one at a time, over months, through trial and error or by watching a colleague’s screen. This article compresses that learning curve. You will find ten concrete, immediately applicable tips that address the most common pain points beginners experience: slow cell navigation, cluttered outputs, mysterious errors, poorly organized notebooks, and wasted time looking up function signatures.
Each tip includes the specific technique, the reasoning behind why it matters, and practical code examples showing it in action. By the end, you will have a toolkit that makes your Jupyter sessions noticeably faster, cleaner, and more professional.
Tip 1: Master the Two-Mode System and Its Most Important Shortcuts
Every Jupyter beginner eventually discovers that pressing a key in the wrong mode produces unexpected results — typing “b” accidentally inserts a cell instead of adding the letter to your code. Understanding Jupyter’s two-mode system is the single most important foundation for productive keyboard use.
The Two Modes
Edit Mode (green left border on the cell): You are inside a cell, typing content. The keyboard behaves like a normal text editor.
Command Mode (blue left border): You are navigating the notebook. Single-key shortcuts control notebook-level operations.
Switch between them:
- Escape → Enter Command Mode from Edit Mode
- Enter → Enter Edit Mode from Command Mode
The Shortcuts That Matter Most
Rather than memorizing every shortcut at once, focus on learning these seven first — they cover 90% of what you do in Jupyter:
- Shift + Enter: Run cell and move to the next one (most used of all)
- Ctrl + Enter: Run cell and stay on the same cell (great for experimenting)
- A: Insert cell Above (Command Mode)
- B: Insert cell Below (Command Mode)
- D, D: Delete cell — press D twice (Command Mode)
- M: Change cell to Markdown (Command Mode)
- Y: Change cell to Code (Command Mode)

Building the Habit
The most effective way to internalize shortcuts is to deliberately avoid the mouse for one week. Every time you reach for the mouse to run a cell or add a new one, stop and use the keyboard shortcut instead. This deliberate discomfort pays off quickly — experienced Jupyter users navigate entire notebooks without ever touching the mouse, which can easily make them two to three times faster.
# Practical drill: try this sequence entirely with keyboard
# 1. Run this cell with Shift+Enter
# 2. Press Escape to enter Command Mode
# 3. Press B to add a cell below
# 4. Press Enter to start editing
# 5. Type your next bit of code
# 6. Repeat
import pandas as pd
print("Cell executed! Now practice keyboard navigation.")Tip 2: Use Tab Completion and Shift+Tab Documentation Every Time
One of the most underused features in Jupyter is its built-in code intelligence — two keystrokes that can save enormous amounts of time and eliminate the need to leave your notebook to look up documentation.
Tab Completion
While typing any name — a variable, a function, a method, a file path — press Tab to see completions:
import pandas as pd
# Type "pd." then press Tab → see all Pandas functions and classes
pd.
# Type "pd.read_" then press Tab → narrows to read_csv, read_excel, read_json, etc.
pd.read_
# Tab completion also works for your own variables
customer_dataframe = pd.DataFrame({'a': [1, 2, 3]})
customer_  # Press Tab here → shows customer_dataframe

Tab completion works for:
- Python built-ins and imported module names
- Your own variables and functions defined in the current session
- Dictionary keys (inside quotes after `[`)
- File system paths (inside strings), as in the practice cell below
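If you want to try the last two cases concretely, here is a small sandbox cell; the dictionary contents and the data/ folder are placeholders for your own objects and files:

# Hypothetical objects for practicing the last two completion types
inventory = {'laptops': 45, 'monitors': 88, 'keyboards': 167}
# Dictionary keys: type  inventory['  and press Tab inside the quote
# → Jupyter suggests 'laptops', 'monitors', 'keyboards'
# File paths: type  open('data/  and press Tab inside the string
# → Jupyter lists the contents of the data/ folder (if it exists)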
Shift+Tab for Instant Documentation
While your cursor is inside a function call, press Shift+Tab to see the function signature and docstring:
# Place your cursor inside the parentheses and press Shift+Tab
pd.read_csv( # ← cursor here, press Shift+Tab

This reveals all parameters with their defaults — filepath_or_buffer, sep, header, names, index_col, dtype, na_values, and many more. Press Shift+Tab a second time to expand to a full documentation popup; in the classic notebook, pressing it four times opens a persistent pane at the bottom of the screen.
# Works with any function — built-in or custom
import numpy as np
np.linspace( # Shift+Tab shows: start, stop, num=50, endpoint=True, ...
# Also works with your own functions
def calculate_roi(revenue, cost, tax_rate=0.20):
"""Calculate return on investment after tax."""
return (revenue - cost) * (1 - tax_rate)
calculate_roi( # Shift+Tab shows your own docstring!This combination — Tab for completion, Shift+Tab for documentation — means you can explore any library or your own codebase without ever opening a browser.
Tip 3: Configure Display Settings in Your First Cell
The default Jupyter display settings are often not optimal for data science work. A properly configured setup cell at the top of every notebook can save dozens of manual adjustments per session.
The Essential Setup Cell
Make this your standard first code cell in every notebook:
# ── Standard Jupyter Setup Cell ──────────────────────────────────────────────
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import warnings
# ── Display settings ──────────────────────────────────────────────────────────
# Show up to 50 columns before truncating the display with ... (use None for no limit)
pd.set_option('display.max_columns', 50)
# Show more rows before truncation
pd.set_option('display.max_rows', 100)
# Format floats to 2 decimal places for readability
pd.set_option('display.float_format', '{:.2f}'.format)
# Wider display width (useful for wide DataFrames)
pd.set_option('display.width', 120)
# ── Matplotlib settings ───────────────────────────────────────────────────────
# Render charts directly in the notebook
%matplotlib inline
# Higher DPI for crisper charts (especially on high-res screens)
plt.rcParams['figure.dpi'] = 120
# Sensible default figure size
plt.rcParams['figure.figsize'] = (10, 6)
# Clean styling
plt.style.use('seaborn-v0_8-whitegrid')
# ── Suppress common warnings ──────────────────────────────────────────────────
# FutureWarning from pandas and scikit-learn can clutter output
warnings.filterwarnings('ignore', category=FutureWarning)
warnings.filterwarnings('ignore', category=UserWarning)
print("Environment configured ✓")
print(f"Pandas {pd.__version__} | NumPy {np.__version__}")Why This Matters
Without display.max_columns, Pandas truncates wide DataFrames and hides columns with ..., forcing you to run additional commands to see your data. Without %matplotlib inline, rendering depends on your Matplotlib backend: recent Jupyter versions default to inline output, but in other setups charts can open in separate windows or not appear at all, so being explicit costs nothing. Without float formatting, you see numbers like 1234567.8901234 instead of 1234567.89.
Taking 30 seconds to add this setup cell saves minutes of configuration work per session.
Resetting Display Options
If you need to temporarily change or reset display options:
# See current value of a setting
pd.get_option('display.max_rows')
# Reset a specific option to default
pd.reset_option('display.float_format')
# Reset all options to defaults
pd.reset_option('all')
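For a one-off change that should not stick, pandas also provides a context manager. A minimal sketch, assuming df is a DataFrame already loaded in your session:

# Temporary settings for one block only (df is a placeholder for your own data)
with pd.option_context('display.max_rows', 200, 'display.float_format', '{:.4f}'.format):
    display(df.describe())  # display() is built into every notebook session
# Outside the with-block, the global settings apply again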
Tip 4: Write Self-Documenting Cells with Markdown

A notebook full of code cells and no explanatory text is a missed opportunity. The most valuable skill you can develop early is the habit of writing Markdown cells that explain what your code does and why — not just how.
The Three-Part Cell Pattern
For every significant analysis step, use this three-part pattern:
### Step Title: What You Are Doing
Brief explanation of WHY you are doing this step, what you expect to find,
and any important assumptions or decisions.

# Code that implements the step
result = df.groupby('region')['revenue'].sum()
result

**Interpretation:** Key finding from this step. What does the result tell you?
What decision or next step does it inform?

Practical Markdown Elements for Data Science
## Section Heading
Use **bold** for emphasis and `code` for variable names and function calls.
Key findings:
- Finding 1: North region contributes 35% of total revenue
- Finding 2: Electronics category shows 15% month-over-month growth
- Finding 3: Weekend sales are 23% higher than weekday sales
> **Note:** This analysis excludes returns and cancellations.
> The raw revenue figures may differ slightly from financial reports.
| Metric | Value | vs. Last Quarter|
|-----------------|----------|-----------------|
| Total Revenue | $1.2M | +8.3% |
| Avg Order Value | $245.50 | +2.1% |
| Transactions    | 4,897    | +6.0%           |

Self-Documenting Variable Names
In addition to Markdown, use descriptive variable names that make code cells partially self-documenting:
# Hard to understand at a glance
df2 = df[df['c'] > 0.5].groupby('r')['s'].mean()
# Self-documenting
high_confidence_predictions = predictions_df[
    predictions_df['confidence'] > 0.5
].groupby('region')['sales'].mean()

The extra characters in descriptive names cost seconds to type but save minutes (or hours) of confusion when you or someone else revisits the notebook later.
Tip 5: Use the Question Mark Operator for Quick Help
Jupyter provides a fast, elegant way to access documentation for any object, function, or module without leaving your notebook or opening a browser.
Single Question Mark: Signature and Docstring
Append ? to any name and run the cell to display its documentation in a scrollable popup at the bottom of the screen:
# Library functions
pd.DataFrame.merge?
np.sort?
plt.scatter?
# Your own functions
def compute_margin(revenue, cost):
"""
Calculate profit margin as a percentage.
Parameters
----------
revenue : float
Total revenue amount.
cost : float
Total cost amount.
Returns
-------
float
Profit margin as a percentage (0–100).
"""
return (revenue - cost) / revenue * 100
compute_margin? # Shows your own docstring!Double Question Mark: Show Source Code
# See the actual implementation source code
pd.DataFrame.dropna??
# Extremely useful for understanding exactly what a function does
# Great for learning from well-written library code

The help() Function for Detailed Docs
For comprehensive documentation including all methods of a class:
# Comprehensive documentation
help(pd.DataFrame) # Very long — useful for reference
help(pd.Series.str) # Just the string accessor methods
# Or use the built-in ? equivalent in a cell
pd.DataFrame?

Tip 6: Output Control — Display Exactly What You Need
By default, Jupyter displays the value of the last expression in every code cell. Learning to control what gets displayed — and suppressing what does not need to be shown — keeps your notebook clean and readable.
Suppressing Output with Semicolons
Add a semicolon ; at the end of the last line to suppress its output:
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 10, 100)
# Without semicolon: displays the plot AND a text representation like:
# [<matplotlib.lines.Line2D object at 0x...>]
plt.plot(x, np.sin(x))
# With semicolon: displays only the plot, no text clutter
plt.plot(x, np.sin(x));

The semicolon trick is especially useful with Matplotlib, where the default output includes cryptic object references alongside your chart.
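An equivalent idiom, if you prefer to avoid trailing semicolons, is binding the return value to a throwaway variable:

# Same effect as the semicolon: bind the return value instead of displaying it
_ = plt.plot(x, np.sin(x))  # the underscore signals "intentionally unused"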
Displaying Multiple Outputs in One Cell
By default, only the last expression’s value is shown. Use display() from IPython to show multiple outputs:
from IPython.display import display
df1 = pd.DataFrame({'A': [1, 2, 3]})
df2 = pd.DataFrame({'B': [4, 5, 6]})
# Without display(): only df2 is shown
df1 # ← not displayed
df2 # ← displayed
# With display(): both shown
display(df1)
display(df2)
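If you would rather have every standalone expression render automatically instead of calling display() each time, IPython exposes a session-wide setting for this. Note that it changes the behavior of every cell until you revert it:

from IPython.core.interactiveshell import InteractiveShell
# Render every top-level expression in a cell, not just the last one
InteractiveShell.ast_node_interactivity = 'all'
# Revert to the default with:
# InteractiveShell.ast_node_interactivity = 'last_expr'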
Controlling DataFrame Display Inline

import pandas as pd
df = pd.DataFrame({
    'product': ['Laptop', 'Mouse', 'Monitor', 'Keyboard'],
    'price': [999.99, 29.99, 349.99, 79.99],
    'stock': [45, 230, 88, 167]
})
# Show only first N rows (more explicit than .head())
display(df.head(3))
# Style the DataFrame for clearer presentation
display(df.style
    .format({'price': '${:.2f}', 'stock': '{:,}'})
    .highlight_max(subset=['price'], color='lightgreen')
    .highlight_min(subset=['price'], color='lightyellow')
    .set_caption('Product Inventory Summary')
)

Hiding Cell Input (Code) While Keeping Output
Sometimes you want to show a chart or table to a non-technical audience without showing the code that generated it. Click the cell’s left margin to collapse the input, or add this to your cell:
# In JupyterLab: click the blue bar to the left of the cell to collapse its input
# In the classic notebook there is no built-in toggle; use nbconvert's --no-input
# flag (or the hide_input nbextension) when exporting
# CSS approach for JupyterLab (note: this hides the input of EVERY cell on the page):
from IPython.display import HTML
HTML("""
<style>
.jp-InputArea { display: none; }
</style>
""")
""")Tip 7: Check and Reset the Kernel State Regularly
The kernel state issue — where variables linger in memory even after the cells that created them are deleted or modified — is one of the most common causes of mysterious bugs in Jupyter notebooks. Developing good kernel hygiene habits early will save you hours of confusion.
The Who and Whos Magic Commands
Before diving into a complex analysis, check what is already in memory:
# List all variables in the current namespace (brief)
%who
# Outputs: df model scaler X_train X_test y_train y_test
# List variables with types and sizes (detailed)
%whos
# Variable Type Data/Info
# --------------------------------
# df DataFrame 500 rows x 12 cols
# model object LinearRegression()
# X_train    ndarray     400x11: 4400 elems, type float64, 35200 bytes

The del Statement for Memory Management
When you are done with a large DataFrame or array, explicitly delete it to free memory:
# Load a large dataset
large_df = pd.read_csv('huge_file.csv') # 2 GB file
# ... do your analysis ...
# Free the memory when done
del large_df
# Verify it is gone
%who
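Before reaching for del, it helps to know which objects are actually worth freeing. A quick check, where some_df stands in for any DataFrame in your session:

# How much memory does a DataFrame actually use? (some_df is a placeholder name;
# deep=True also counts the contents of string columns)
mb = some_df.memory_usage(deep=True).sum() / 1024 ** 2
print(f"some_df occupies {mb:.1f} MB")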
A Kernel State Health Check Workflow

Follow this routine when a notebook starts behaving unexpectedly:
# Step 1: Check what is in memory
%whos
# Step 2: Look for variables that should not exist
# (deleted cells, renamed variables, etc.)
# Step 3: If confused, restart cleanly
# Kernel → Restart & Clear Output
# Then run: Kernel → Restart & Run All
# Step 4: Verify the notebook completes without errors
# If it does, it is reproducible and shareable

Creating a “Reset” Cell
Add this cell near the top of every notebook for quick resets during development:
# ── DEV ONLY: Reset cell — remove before finalizing ──────────────────────────
# Run this cell to clear all variables and start fresh without restarting kernel
# Get list of all user-defined variables
user_vars = [var for var in dir() if not var.startswith('_')]
# Delete them
# Delete them from the global namespace
for var in user_vars:
    try:
        del globals()[var]
    except KeyError:
        pass  # name was already removed
print("Namespace cleared ✓")
Tip 8: Use Cell Outputs as Checkpoints

One of Jupyter’s greatest advantages over plain Python scripts is that outputs persist even after execution. Learning to use this feature intentionally turns your notebook into an interactive analysis log.
Annotate Important Results
When you find a significant result, add a Markdown cell immediately after to capture your interpretation:
# Analysis cell
monthly_growth = sales.groupby('month')['revenue'].sum().pct_change() * 100
best_month = monthly_growth.idxmax()
print(f"Highest growth month: {best_month} ({monthly_growth.max():.1f}%)")
monthly_growth.plot(kind='bar', title='Month-over-Month Revenue Growth (%)', color='steelblue');

**Key Finding:** March showed the highest month-over-month growth at 23.4%,
driven primarily by the Electronics category launch in the South region.
This coincides with the marketing campaign that began on March 3rd.
**Action:** Replicate the March South campaign in the East region for Q2.

Freezing Important Outputs
Sometimes you want to preserve an important output even while continuing to modify the code. You can copy cell output text, or use the following pattern:
# Save important intermediate results to variables with descriptive names
# so they are preserved even if you re-run with different data
BASELINE_ACCURACY = 0.847 # Record key metrics as constants
BASELINE_ROC_AUC = 0.912 # These serve as documented benchmarks
print(f"Baseline performance:")
print(f" Accuracy: {BASELINE_ACCURACY:.3f}")
print(f" ROC-AUC: {BASELINE_ROC_AUC:.3f}")Using assert Statements as Output Checkpoints
Using assert Statements as Output Checkpoints

Add assertions after critical computations to document your expectations and catch errors early:
# After loading data
df = pd.read_csv('sales_data.csv')
# Assertions serve as both documentation and runtime checks
assert df.shape[0] > 0, "Dataset is empty!"
assert 'revenue' in df.columns, "Missing 'revenue' column"
assert df['revenue'].min() >= 0, "Negative revenues found — data issue!"
assert df['customer_id'].nunique() > 100, "Fewer customers than expected"
print(f"✓ Data validation passed: {df.shape[0]:,} rows, {df.shape[1]} columns")If any assertion fails, you get an immediate, clear error message rather than a silent data quality issue that corrupts downstream analysis.
Tip 9: Organize Long Notebooks with Structure and Navigation
As notebooks grow longer — and production analysis notebooks often grow very long — finding the section you need becomes increasingly frustrating without proper organization.
Use a Consistent Heading Hierarchy
# Notebook Title and Overview
## 1. Setup
## 2. Data Loading and Validation
### 2.1 Load Raw Data
### 2.2 Validate Schema
### 2.3 Check Data Quality
## 3. Exploratory Data Analysis
### 3.1 Univariate Analysis
### 3.2 Bivariate Analysis
### 3.3 Time Series Patterns
## 4. Feature Engineering
## 5. Modeling
### 5.1 Baseline Model
### 5.2 Model Tuning
### 5.3 Final Model Evaluation
## 6. Conclusions and Recommendations

Table of Contents with Hyperlinks
Jupyter Markdown supports anchor links, so you can create a navigable table of contents:
## Table of Contents
1. [Setup](#1-Setup)
2. [Data Loading](#2-Data-Loading)
3. [Exploratory Analysis](#3-Exploratory-Analysis)
4. [Modeling](#4-Modeling)
5. [Conclusions](#5-Conclusions)

Headers in Markdown automatically become anchor targets: spaces become hyphens, and in Jupyter itself the original capitalization is preserved (GitHub's renderer, by contrast, lowercases anchors).
Split Long Notebooks into Sections
For analyses that grow beyond 50–60 cells, consider splitting into multiple notebooks with a clear naming convention:
project/
├── 01_data_loading.ipynb
├── 02_data_cleaning.ipynb
├── 03_exploratory_analysis.ipynb
├── 04_feature_engineering.ipynb
├── 05_modeling.ipynb
└── 06_evaluation_and_reporting.ipynb

Use a shared data file (Parquet or CSV) between notebooks so each starts from a clean, well-defined state, as sketched below.
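A minimal sketch of the hand-off between two of these notebooks; the variable name and file path are illustrative:

# At the end of 02_data_cleaning.ipynb: save the cleaned state
# (to_parquet requires the pyarrow or fastparquet package)
clean_df.to_parquet('data/clean_sales.parquet')

# At the top of 03_exploratory_analysis.ipynb: start from that exact state
import pandas as pd
clean_df = pd.read_parquet('data/clean_sales.parquet')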
Color-Code by Section with Cell Tags
In the classic notebook, you can add tags to cells via View → Cell Toolbar → Tags; in JupyterLab, use the property inspector in the right sidebar. Label cells by type:

Tags: setup, data-loading, cleaning, eda, modeling, visualization

These tags can be used by tools like nbconvert to selectively include or exclude cells when exporting.
Tip 10: Develop a Pre-Share Checklist
The most professional habit you can build is a consistent checklist you run through before sharing any notebook with a colleague, uploading it to GitHub, or submitting it as part of a portfolio.
The Pre-Share Checklist
Run through these steps before sharing any Jupyter notebook:
## Pre-Share Checklist
**Reproducibility:**
- [ ] Kernel → Restart & Run All completes without errors
- [ ] All file paths are relative (not absolute like /Users/yourname/...)
- [ ] All required libraries are imported in the first cell
- [ ] No hardcoded personal credentials or API keys
**Clarity:**
- [ ] Notebook title and purpose explained in first Markdown cell
- [ ] Every major section has a Markdown heading
- [ ] Code cells have comments explaining non-obvious logic
- [ ] Key findings are interpreted in Markdown cells after analysis cells
- [ ] Variables have descriptive names (not a, df2, temp)
**Cleanliness:**
- [ ] No dead code (commented-out experiments left behind)
- [ ] No excessively long outputs (use .head() instead of printing entire DataFrames)
- [ ] Matplotlib semicolons suppress unwanted object repr output
- [ ] Development/debugging cells removed or marked clearly
- [ ] Reasonable cell size (no cell > 30 lines without strong justification)
**Outputs:**
- [ ] All charts have titles, axis labels, and legends
- [ ] Numeric outputs are formatted (currency, percentages, commas for thousands)
- [ ] Important results are highlighted or annotated

The Kernel Restart Test — Your Final Verification
The single most important item on this list is the Restart & Run All test:
# BEFORE SHARING: always do this:
# 1. Kernel → Restart & Clear Output
# 2. Kernel → Restart & Run All
# 3. Scroll through the entire notebook top to bottom
# 4. Verify: no error cells (red output boxes)
# 5. Verify: all outputs look correct
# 6. THEN share
# If any cell produces an error during Restart & Run All,
# your notebook has a hidden dependency that needs fixing.
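Part of this check can even be automated: a clean top-to-bottom run leaves the In[n] counters as an unbroken 1, 2, 3, ... sequence, which a short script using the nbformat package can verify (the filename is a placeholder):

import nbformat

nb = nbformat.read('analysis.ipynb', as_version=4)  # placeholder filename
counts = [c.execution_count for c in nb.cells
          if c.cell_type == 'code' and c.execution_count is not None]
if counts == list(range(1, len(counts) + 1)):
    print("✓ Every code cell ran exactly once, in order")
else:
    print("✗ Execution counts are out of order; rerun top to bottom before sharing")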
Adding Execution Metadata

Consider adding a final cell that records when and how the notebook was run:
import sys
import platform
from datetime import datetime
print("=== Notebook Execution Summary ===")
print(f"Run time: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
print(f"Python version: {sys.version.split()[0]}")
print(f"Platform: {platform.system()} {platform.release()}")
print(f"Pandas version: {pd.__version__}")
print(f"NumPy version: {np.__version__}")
print(f"Total cells run: Check In[n] numbers above")
print("=====================================")This metadata cell makes it easy for collaborators to understand the environment in which the analysis was produced, which is invaluable for debugging reproducibility issues.
Bonus: A Quick Reference Summary Card
Here is everything in this article condensed into a single reference card you can keep nearby:
| Category | Tip | Quick Reference |
|---|---|---|
| Navigation | Two-mode system | Escape = Command Mode, Enter = Edit Mode |
| Running | Execute cells | Shift+Enter (advance), Ctrl+Enter (stay) |
| Cells | Insert / Delete | A/B (above/below), D,D (delete) |
| Cell Type | Switch type | M (Markdown), Y (Code) |
| Completion | Code intelligence | Tab (complete), Shift+Tab (docs) |
| Help | Documentation | function? or function?? for source |
| Display | Setup cell | %matplotlib inline, pd.set_option(...) |
| Output | Suppress clutter | Add ; at end of line |
| Output | Show multiple | Use display(obj) |
| Memory | Check state | %who, %whos, del var |
| Assertions | Validate data | assert condition, "error message" |
| Kernel | Clean state | Kernel → Restart & Run All |
| Sharing | Pre-share test | Restart & Run All → no errors = ready |
Conclusion: Small Habits, Large Productivity Gains
The ten tips in this article are not individually revolutionary — none of them reinvents how Jupyter works. What they do is collectively eliminate the small frictions, confusions, and inefficiencies that accumulate into significant productivity losses over hours, days, and weeks of work.
Learning the two-mode keyboard system eliminates the need to reach for the mouse dozens of times per session. Tab completion and Shift+Tab documentation eliminate the context-switching cost of looking up function signatures in a browser. A consistent setup cell eliminates manual display configuration in every notebook. Markdown documentation turns throwaway code experiments into reusable, shareable analyses.
The pre-share checklist and the Restart & Run All habit are perhaps the highest-leverage tips of all: they ensure that every notebook you share reflects your best work and actually runs for the person you share it with — which is the fundamental purpose of a shareable notebook.
Start by picking the two or three tips that address your most frequent pain points. Once those become second nature, add the others. Within a few weeks of consistent practice, you will find yourself working in Jupyter with a fluency that feels markedly different from where you started — and the tips will feel less like techniques and more like instincts.
In the next article, you will go even deeper into Jupyter’s capabilities with magic commands — special commands that give you superpowers like profiling, debugging, and running code in other languages directly from your notebook.
Key Takeaways
- Jupyter has two modes: Edit Mode (green border, typing inside a cell) and Command Mode (blue border, notebook navigation). Press `Escape`/`Enter` to switch.
- The seven most important shortcuts are `Shift+Enter`, `Ctrl+Enter`, `A`, `B`, `D,D`, `M`, and `Y` — learn these first before any others.
- Tab autocompletes variable names and methods; Shift+Tab shows function signatures and docstrings — use these constantly to avoid context-switching to browser docs.
- A standard first-cell setup (`%matplotlib inline`, `pd.set_option(...)`, `plt.rcParams`) saves repeated configuration work and ensures consistent output across your sessions.
- Writing Markdown cells before and after analysis cells transforms a collection of code into a readable, shareable document with context and interpretation.
- The single question mark (`function?`) shows docstrings; the double question mark (`function??`) shows source code — both accessible without leaving the notebook.
- Add a semicolon `;` at the end of the last line in a cell to suppress its text output (especially useful with Matplotlib to hide object reprs).
- `%who` and `%whos` show what is in the kernel’s memory; use `del variable` to free memory from large objects you no longer need.
- Use `assert` statements after data loading and critical computations as both documentation and runtime validation checkpoints.
- Always run Kernel → Restart & Run All before sharing a notebook — if it completes without errors, it is reproducible; if it crashes, there is a hidden dependency to fix.