Jupyter magic commands are special commands prefixed with % (line magics) or %% (cell magics) that extend Python’s capabilities with shortcuts for timing code, profiling performance, running external scripts, writing files, switching languages, and dozens of other workflow-enhancing operations — all directly from your notebook cells without needing to import any library.
Introduction: Superpowers Built Into Every Jupyter Notebook
When you first learn Jupyter Notebook, you focus on running Python code and rendering Markdown. But lurking beneath the surface is an entire layer of functionality that most beginners never discover: magic commands — a set of special instructions built into IPython (the enhanced Python kernel that powers Jupyter) that give you capabilities far beyond what standard Python provides.
Magic commands let you time your code with statistical precision, profile bottlenecks line by line, run entire Python scripts and capture their variables, write the contents of a cell directly to a file, display rich HTML and interactive widgets, switch temporarily to Bash or other languages, load previously run cells from your history, and much more.
None of this requires importing a single library. Magic commands are available in every Jupyter notebook from the moment you open it, and they are activated with a simple % or %% prefix. Understanding them is one of the hallmarks that separates a beginner who muddles through Jupyter from an experienced practitioner who flows through their work.
This article covers the most useful magic commands grouped by purpose, with practical examples for each. By the end, you will have a concrete toolkit of commands that address real workflow friction points — from “how do I know which part of my code is slow?” to “how do I quickly test something in Bash without leaving my notebook?”
1. Understanding the Two Types of Magic Commands
Jupyter has two categories of magic commands, distinguished by the number of % signs:
Line Magics (%)
A single % prefix makes a line magic — it applies to the rest of that single line only. The Python code in other lines of the cell is not affected:
# %timeit is a line magic — it only times this one expression
%timeit [x**2 for x in range(1000)]
# The rest of the cell runs as normal Python
result = [x**2 for x in range(1000)]
print(f"Length: {len(result)}")
Cell Magics (%%)
A double %% prefix makes a cell magic — it applies to the entire cell. The %% command must appear on the very first line of the cell:
%%timeit
# This entire cell is timed — all lines are included in the measurement
import numpy as np
arr = np.arange(1000)
result = arr ** 2
Discovering Available Magic Commands
# List all available magic commands
%lsmagic
# Get detailed help for any magic command
%timeit?
%run?
# General magic system help
%magic
Running %lsmagic produces a comprehensive list organized by line magics and cell magics — there are dozens available. This article focuses on the most practically valuable ones for data science work.
2. Timing and Performance Magic Commands
Performance measurement is one of the most common reasons to reach for a magic command. Knowing how long your code takes — and which parts are slowest — is essential for writing efficient data pipelines.
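For context, the baseline non-magic approach is manual clock arithmetic with the standard library. This short sketch also previews the wall-versus-CPU distinction that %time reports later in this section:

```python
import time

start_wall = time.perf_counter()   # wall clock: real elapsed time
start_cpu = time.process_time()    # CPU time consumed by this process

time.sleep(0.2)                             # costs wall time, almost no CPU
total = sum(i * i for i in range(500_000))  # costs both wall and CPU time

wall = time.perf_counter() - start_wall
cpu = time.process_time() - start_cpu
print(f"Wall: {wall:.3f} s, CPU: {cpu:.3f} s")  # wall includes the sleep; CPU barely does
```

The magic commands below automate this bookkeeping and add statistical repetition on top.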
%timeit — Statistical Timing for a Single Line
%timeit runs your code many times (automatically determining the number of repetitions), calculates the mean and standard deviation, and reports the result. This statistical approach gives you a much more accurate and reliable measurement than simply using time.time() before and after:
import numpy as np
import pandas as pd
# Compare two approaches to squaring a million numbers
data = list(range(1_000_000))
arr = np.array(data)
# Python list comprehension
%timeit [x**2 for x in data]
# 126 ms ± 3.21 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
# NumPy vectorized operation
%timeit arr ** 2
# 1.85 ms ± 42.4 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
# Pandas Series operation
s = pd.Series(data)
%timeit s ** 2
# 4.23 ms ± 91.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
This makes immediately clear that NumPy is about 68x faster than a Python list comprehension for this operation — a crucial insight for optimizing data science code.
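%timeit is a convenience wrapper over the standard library's timeit module, so the same mean-and-deviation report can be reproduced in a plain script. A minimal sketch:

```python
import statistics
import timeit

# Each trial runs the statement `number` times; `repeat` gives independent trials,
# mirroring %timeit's "N runs, M loops each" structure
trials = timeit.repeat(
    stmt="[x**2 for x in data]",
    setup="data = list(range(1000))",
    number=100,
    repeat=5,
)
per_loop = [t / 100 for t in trials]  # seconds per single execution
mean = statistics.mean(per_loop)
std = statistics.stdev(per_loop)
print(f"{mean * 1e6:.1f} µs ± {std * 1e6:.1f} µs per loop (5 runs, 100 loops each)")
```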
%%timeit — Timing an Entire Cell
When your code spans multiple lines and you want to time the entire block:
%%timeit
# Entire cell is timed — building a DataFrame and computing group stats
import pandas as pd
import numpy as np
n = 100_000
df = pd.DataFrame({
    'group': np.random.choice(['A','B','C','D'], n),
    'value': np.random.randn(n)
})
result = df.groupby('group')['value'].agg(['mean', 'std', 'count'])
Controlling the number of runs:
# Run 100 loops, repeat 5 times
%timeit -n 100 -r 5 sum(range(10000))
# Run just once (-n 1 -r 1) for very slow operations
%timeit -n 1 -r 1 pd.read_csv('large_file.csv')
%time — Simple Wall Clock Timing
When you want a simple, one-shot timing without statistical averaging — especially useful for operations that are too slow to run hundreds of times:
# Single timing measurement
%time result = sum(range(10_000_000))
# CPU times: user 387 ms, sys: 4.21 ms, total: 391 ms
# Wall time: 392 ms
# Shows: CPU user time, system time, total CPU time, and wall clock time
# Wall time = real-world elapsed time (what you actually experience)
%%time — Time an Entire Slow Cell
%%time
# Load and process a large dataset — too slow for %timeit's repeated runs
import pandas as pd
df = pd.read_csv('sales_data.csv')
df['revenue'] = df['price'] * df['quantity']
summary = df.groupby('region')['revenue'].sum()
print(summary)
# CPU times: user 2.34 s, sys: 284 ms, total: 2.62 s
# Wall time: 2.71 s
3. Profiling Magic Commands
Timing tells you how long code takes; profiling tells you where the time is being spent. These magic commands let you drill into your code’s performance at the function and line level.
%prun — Function-Level Profiling
%prun runs your code through Python’s cProfile profiler and displays a table showing how much time was spent in each function call:
import pandas as pd
import numpy as np
def complex_analysis(n=100_000):
    """Simulate a multi-step data analysis."""
    df = pd.DataFrame({
        'a': np.random.randn(n),
        'b': np.random.randn(n),
        'cat': np.random.choice(['X','Y','Z'], n)
    })
    df['c'] = df['a'] ** 2 + df['b'] ** 2
    df['d'] = np.sqrt(df['c'])
    result = df.groupby('cat')['d'].agg(['mean', 'std'])
    return result
# Profile the function
%prun complex_analysis()
# Output (abbreviated):
# ncalls tottime percall cumtime percall filename:lineno(function)
# 1 0.000 0.000 0.312 0.312 <string>:1(<module>)
# 1 0.045 0.045 0.312 0.312 <ipython>:3(complex_analysis)
# 1 0.087 0.087 0.156 0.156 {pandas groupby}
# ...
The columns tell you:
- ncalls: How many times this function was called
- tottime: Total time spent in this function (excluding sub-functions)
- cumtime: Cumulative time including all sub-functions
- percall: Time per call
Sort by cumtime to find the biggest bottlenecks.
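%prun is a thin wrapper around the standard cProfile module, so the same sorted report is available outside notebooks too. A minimal sketch:

```python
import cProfile
import io
import pstats

def work():
    """A stand-in workload to profile."""
    return sum(i ** 2 for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

# Sort by cumulative time and show the top 5 rows, like `%prun -s cumulative -l 5`
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```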
%%prun — Profile an Entire Cell
%%prun -s cumulative -l 10
# Profile the entire cell, sorted by cumulative time, showing top 10 functions
import numpy as np
import pandas as pd
n = 500_000
data = np.random.randn(n, 5)
df = pd.DataFrame(data, columns=list('ABCDE'))
result = df.corr()
print(result.round(3))
%lprun — Line-by-Line Profiling
For granular profiling that shows the time spent on each individual line of a function, use line_profiler (requires installation):
# Install line_profiler if not available
# !pip install line_profiler
# Load the line_profiler extension
%load_ext line_profiler
def process_dataframe(df):
    """A function we want to profile line by line."""
    df = df.copy()
    df['squared'] = df['value'] ** 2
    df['log'] = np.log1p(df['value'].abs())
    df['category'] = pd.cut(df['value'],
                            bins=5,
                            labels=['VL','L','M','H','VH'])
    result = df.groupby('category')['squared'].mean()
    return result
import numpy as np, pandas as pd
test_df = pd.DataFrame({'value': np.random.randn(100_000)})
# Profile line by line
%lprun -f process_dataframe process_dataframe(test_df)
# Output shows each line with its time and percentage of total:
# Line Hits Time Per Hit % Time Line Contents
# ============================================================
# 2 1 12450.0 12450.0 45.2 df = df.copy()
# 3 1 3211.0 3211.0 11.7 df['squared'] = ...
# 4 1 4892.0 4892.0 17.8 df['log'] = ...
# 5 1 5219.0 5219.0 19.0 df['category'] = ...
# 6 1 1742.0 1742.0 6.3 result = df.groupby...
4. File and Script Management Magic Commands
%run — Execute External Python Scripts
%run executes an external .py file as if you had run it in the current namespace. Variables defined in the script become available in your notebook after execution:
# Run a Python script and import its variables into the notebook
%run data_loader.py
# After running, variables defined in data_loader.py are available
print(df.shape) # df was defined in data_loader.py
print(model) # model was also defined there
# Run with command-line arguments
%run my_script.py --input data.csv --output results.csv
# Run and capture timing
%run -t expensive_computation.py
This is invaluable when you want to reuse a preprocessing pipeline defined in a .py file without copying code into your notebook.
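Outside IPython, the closest standard-library analogue to %run is runpy.run_path, which executes a file and hands back its globals as a dict. A sketch using a throwaway script (the file name here is hypothetical):

```python
import pathlib
import runpy
import tempfile

# Create a tiny stand-in script; in practice this would be your own .py file
script = pathlib.Path(tempfile.mkdtemp()) / "demo_loader.py"
script.write_text("rows = 500\nname = 'sales'\n")

# Execute the file; its top-level variables come back as a dictionary,
# much as %run injects them into the notebook namespace
namespace = runpy.run_path(str(script))
print(namespace["rows"], namespace["name"])  # 500 sales
```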
%%writefile — Write a Cell’s Contents to a File
%%writefile saves the contents of a cell to a file — excellent for creating utility scripts, configuration files, or small modules directly from your notebook:
%%writefile data_cleaner.py
"""
data_cleaner.py — Reusable data cleaning utilities.
Generated from analysis notebook on 2024-03-15.
"""
import pandas as pd
import numpy as np
def clean_dataframe(df):
    """Apply standard cleaning steps to a raw DataFrame."""
    df = df.copy()
    # Strip whitespace from string columns
    str_cols = df.select_dtypes('object').columns
    for col in str_cols:
        df[col] = df[col].str.strip()
    # Drop fully duplicate rows
    df = df.drop_duplicates()
    # Fill numeric nulls with column median
    num_cols = df.select_dtypes('number').columns
    for col in num_cols:
        df[col] = df[col].fillna(df[col].median())
    return df

def validate_schema(df, required_cols):
    """Raise ValueError if any required columns are missing."""
    missing = set(required_cols) - set(df.columns)
    if missing:
        raise ValueError(f"Missing required columns: {missing}")
    return True
# Writing data_cleaner.py
After running this cell, data_cleaner.py exists on disk and can be imported in other notebooks or scripts:
# Now import the module you just wrote
from data_cleaner import clean_dataframe, validate_schema
df_clean = clean_dataframe(raw_df)
To append to an existing file instead of overwriting:
%%writefile -a existing_file.py
# This content is appended to the existing file
def new_function():
    pass
%load — Load a File's Contents into a Cell
The opposite of %%writefile — %load reads a file and inserts its contents into the current cell:
# This magic loads the file contents into this cell when run
%load data_cleaner.py
# After running, the cell is replaced with the file's actual contents
# You can then edit it and run the modified version
%save — Save Cell Contents to a File
Save the output of specific cells (by their execution number) to a Python script:
# Save cells 3, 5, 6, and 7 to a script
%save analysis_functions.py 3 5-7
# Save the current cell
%save current_work.py
5. History and Session Management
%history — Review Previous Commands
# Show last 10 commands executed in this session
%history -l 10
# Show commands from cell 1 to cell 5
%history 1-5
# Show commands with their cell numbers
%history -n
# Search history for commands containing 'groupby'
%history -g groupby
%recall — Bring a Previous Command Back
# Recall the code from cell 4 into the current cell for editing
%recall 4
# Recall the last cell
%recall
%store — Persist Variables Between Sessions
%store saves variables to disk so they persist even after you shut down the notebook and restart the kernel:
# After an expensive computation...
result_df = run_expensive_analysis() # Takes 10 minutes
# Store it for future sessions
%store result_df
# Stored 'result_df' (DataFrame)
# In a future session, restore it instantly
%store -r result_df
print(result_df.shape) # Available without re-running the analysis
# List all stored variables
%store
# Delete a stored variable
%store -d result_df
This is extremely practical for long-running computations — train a model once, store it, reload it in future sessions without waiting.
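%store persists variables by pickling them to IPython's storage directory. The same pattern can be done by hand with the standard pickle module, sketched here with illustrative values:

```python
import pathlib
import pickle
import tempfile

results = {"model": "RandomForest", "accuracy": 0.91}  # stand-in for an expensive result

# Save once, analogous to `%store results`
cache = pathlib.Path(tempfile.gettempdir()) / "results.pkl"
cache.write_bytes(pickle.dumps(results))

# "Next session": restore instantly, analogous to `%store -r results`
restored = pickle.loads(cache.read_bytes())
print(restored == results)  # True
```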
6. System and Environment Magic Commands
%matplotlib — Configure Plot Rendering
One of the most universally used magic commands:
# Render plots inline in the notebook (most common)
%matplotlib inline
# Render interactive plots (can pan, zoom) — requires ipympl
%matplotlib widget
# Open plots in separate windows (useful for detailed inspection)
%matplotlib qt
# No display (useful in batch processing scripts)
%matplotlib agg
%env — View and Set Environment Variables
# List all environment variables
%env
# Get a specific variable
%env PATH
# Set an environment variable for this session
%env MY_API_KEY=abc123
# Useful for configuring library behavior
%env CUDA_VISIBLE_DEVICES=0 # Use only the first GPU
%env OMP_NUM_THREADS=4 # Limit OpenMP threads
%pwd and %cd — Navigate the File System
# Show current working directory
%pwd
# '/Users/username/projects/data_science'
# Change directory
%cd /path/to/data/folder
# Change to home directory
%cd ~
# List directory contents
%ls
%ls -la # Detailed listing with permissions and sizes
%ls *.csv # List only CSV files
%who and %whos — Inspect the Namespace
# List all variables in the current namespace
%who
# df model X_train y_train results config
# Detailed view with types and information
%whos
# Variable Type Data/Info
# ---------------------------------
# df DataFrame 500 rows x 12 cols
# model object RandomForestClassifier(...)
# X_train ndarray 400x11: 4400 elems, float64
# Filter by type
%who DataFrame
%who int float
%reset — Clear the Namespace
# Interactively reset — prompts for confirmation
%reset
# Reset without confirmation (use carefully!)
%reset -f
# Reset only variables whose names match a regular expression
%reset_selective temp_
7. Shell Commands and OS Integration
The ! Prefix — Run Any Shell Command
While not technically a magic command (it is a shell escape), the ! prefix is equally important for workflow:
# Install packages without leaving the notebook
!pip install lightgbm xgboost catboost
# Check installed package versions
!pip show pandas scikit-learn
# Download datasets
!wget https://raw.githubusercontent.com/datasciencedojo/datasets/master/titanic.csv
# Or with curl
!curl -o data.csv https://example.com/dataset.csv
# Unzip files
!unzip archive.zip -d data/
# Count rows in a CSV (including header)
!wc -l data.csv
# Check disk space
!df -h
# List running Python processes
!ps aux | grep python
# Capture shell output as a Python variable
files = !ls *.csv
print(f"Found {len(files)} CSV files: {files}")
%%bash — Run an Entire Cell as Bash
%%bash
# This entire cell runs as a Bash script
echo "Creating directory structure..."
mkdir -p data/raw data/processed data/models
echo "Downloading data..."
curl -s -o data/raw/sales.csv https://example.com/sales.csv
echo "Checking download..."
wc -l data/raw/sales.csv
echo "Done."
%%sh — Alternative Shell Cell Magic
%%sh
# Similar to %%bash, runs as sh (more portable)
for file in data/raw/*.csv; do
    echo "Processing: $file"
    wc -l "$file"
done
8. Multi-Language Magic Commands
One of Jupyter’s most powerful features — rarely known by beginners — is the ability to run code in other languages directly within a Python notebook.
%%html — Render Raw HTML
%%html
<div style="background: #f0f8ff; padding: 20px; border-radius: 8px; border-left: 4px solid #2196F3;">
<h3 style="color: #1565C0; margin-top: 0;">📊 Analysis Summary</h3>
<ul>
<li><strong>Dataset:</strong> 50,000 customer records</li>
<li><strong>Date Range:</strong> January 2023 – December 2023</li>
<li><strong>Key Metric:</strong> Revenue increased 23.4% YoY</li>
</ul>
<p style="color: #666;">Generated automatically by analysis pipeline.</p>
</div>
%%javascript — Run JavaScript in the Browser
%%javascript
// Access the browser's JavaScript environment directly
console.log("Hello from JavaScript!");
// Create an alert (useful for signaling when long computations finish)
alert("Your analysis is complete!");
// Modify the page DOM
document.querySelector('.jp-Cell-inputArea').style.backgroundColor = '#fffde7';
%%latex — Render LaTeX Equations
%%latex
\begin{equation}
\text{MSE} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2
\end{equation}
\begin{equation}
R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}
\end{equation}
%%markdown — Render Markdown Programmatically
%%markdown
## Dynamic Report Section
This section was generated programmatically.
You can use Python to build Markdown strings and render them with this magic.
%%sql — Run SQL Directly (with ipython-sql)
# First: !pip install ipython-sql sqlalchemy
%load_ext sql
%sql sqlite:///mydata.db
%%sql
SELECT
region,
COUNT(*) as transactions,
SUM(revenue) as total_revenue,
AVG(revenue) as avg_revenue
FROM sales
WHERE year = 2024
GROUP BY region
ORDER BY total_revenue DESC
LIMIT 10;
9. Debugging and Development Magic Commands
%debug — Post-Mortem Debugging
When a cell raises an error, %debug drops you into an interactive debugger at the point of the error:
# After a cell raises an exception, run this in the next cell:
%debug
# Opens an interactive pdb (Python Debugger) session where you can:
# - Inspect the variable values at the point of failure
# - Step through the code
# - Type 'q' to quit the debugger
# Or enable automatic debugging for every exception
%pdb on
# Now any exception automatically opens the debugger
def buggy_function(x):
    result = x / 0  # This will trigger the debugger
    return result

buggy_function(5)
%%capture — Capture Cell Output
%%capture suppresses a cell’s printed output and stores it in a variable for later inspection:
%%capture captured_output
# This cell's output is captured, not displayed
import subprocess
result = subprocess.run(['python', '--version'], capture_output=True, text=True)
print(f"Python version: {result.stdout}")
print("This output is captured")
for i in range(10):
    print(f"Processing item {i}")
# Later, inspect what was captured
print(captured_output.stdout)
print(captured_output.stderr)
%xmode — Control Exception Verbosity
# Minimal traceback — just the exception type and message
%xmode Minimal
# IPython's default — traceback with a few lines of surrounding source context
%xmode Context
# Verbose traceback with local variable values — extremely helpful for debugging
%xmode Verbose
# Plain — a compact traceback close to standard Python's
%xmode Plain
10. Display and Rich Output Magic Commands
%matplotlib with Different Backends
# High-quality inline plots (default)
%matplotlib inline
# Interactive plots — pan, zoom, hover tooltips (requires ipympl: pip install ipympl)
%matplotlib widget
# Try this for interactive exploration:
import matplotlib.pyplot as plt
import numpy as np
%matplotlib widget # Switch to interactive mode
fig, ax = plt.subplots()
x = np.linspace(0, 2*np.pi, 1000)
ax.plot(x, np.sin(x))
ax.set_title('Interactive Sine Wave — try zooming!')
plt.show()
Rich Output with IPython.display
from IPython.display import display, HTML, Image, Markdown, IFrame
# Display formatted HTML
display(HTML("<h2 style='color:navy'>Results Dashboard</h2>"))
# Display an image from a URL or local path
display(Image('chart.png', width=600))
# Display Markdown programmatically
display(Markdown(f"""
## Analysis Complete
- **Records processed:** {len(df):,}
- **Revenue total:** ${df['revenue'].sum():,.2f}
- **Top region:** {df.groupby('region')['revenue'].sum().idxmax()}
"""))
11. Practical Magic Command Workflows
Workflow 1: Benchmarking Multiple Implementations
import numpy as np
import pandas as pd
n = 500_000
data = np.random.randn(n)
series = pd.Series(data)
lst = list(data)
print("=== Squaring 500,000 numbers ===")
print("\nMethod 1: Python list comprehension")
%timeit [x**2 for x in lst]
print("\nMethod 2: NumPy array operation")
%timeit data ** 2
print("\nMethod 3: Pandas Series operation")
%timeit series ** 2
print("\nMethod 4: NumPy elementwise multiplication")
%timeit data * data
Workflow 2: Quick Script Development and Testing
# Step 1: Write a utility function to a file
%%writefile feature_engineering.py
import pandas as pd
import numpy as np
def create_time_features(df, date_col):
    """Extract time-based features from a datetime column."""
    df = df.copy()
    dt = pd.to_datetime(df[date_col])
    df['hour'] = dt.dt.hour
    df['day_of_week'] = dt.dt.dayofweek
    df['month'] = dt.dt.month
    df['quarter'] = dt.dt.quarter
    df['is_weekend'] = dt.dt.dayofweek >= 5
    df['is_month_end'] = dt.dt.is_month_end
    return df
# Step 2: Load and test it immediately
%run feature_engineering.py
# create_time_features is now available in this notebook
test_df = pd.DataFrame({
    'timestamp': pd.date_range('2024-01-01', periods=100, freq='6H'),
    'value': np.random.randn(100)
})
result = create_time_features(test_df, 'timestamp')
print(result.head())
print(f"\nNew columns: {[c for c in result.columns if c not in test_df.columns]}")
Workflow 3: Long Computation with Progress and Storage
import time
# Simulate expensive computation
%%time
results = {}
for model_name in ['LinearRegression', 'RandomForest', 'GradientBoosting', 'XGBoost']:
    print(f"Training {model_name}...", end=' ')
    time.sleep(0.5)  # Simulate training time
    results[model_name] = {'accuracy': 0.85 + hash(model_name) % 10 / 100}
    print("done ✓")
print(f"\nBest model: {max(results, key=lambda k: results[k]['accuracy'])}")
# Store results so you don't need to re-run if kernel restarts
%store results
# Stored 'results' (dict)
# In a future session — instant recovery
%store -r results
print("Restored training results:")
for model, metrics in results.items():
    print(f"  {model}: {metrics['accuracy']:.3f}")
Workflow 4: Interactive Data Exploration with Shell
# Check what data files are available
%ls data/
# Count rows in each CSV before loading
%%bash
for f in data/*.csv; do
    echo -n "$f: "
    wc -l < "$f"
done
# Check total disk usage
!du -sh data/
12. A Quick Reference Guide to the Most Important Magic Commands
| Category | Command | What It Does |
|---|---|---|
| Timing | %timeit expr | Statistical timing of one line |
| Timing | %%timeit | Statistical timing of entire cell |
| Timing | %time expr | Single wall-clock timing |
| Timing | %%time | Single wall-clock timing of cell |
| Profiling | %prun func() | Function-level cProfile profiling |
| Profiling | %lprun -f func func() | Line-by-line profiling (needs line_profiler) |
| Files | %%writefile file.py | Write cell contents to a file |
| Files | %load file.py | Load file contents into a cell |
| Files | %run script.py | Execute an external Python script |
| Files | %save file.py n | Save cell n to a file |
| History | %history -l 10 | Show last 10 executed commands |
| History | %store var | Persist variable between sessions |
| History | %store -r var | Restore a stored variable |
| System | %pwd | Print working directory |
| System | %cd path | Change directory |
| System | %ls | List directory contents |
| System | %env VAR=value | Set environment variable |
| Namespace | %who | List all variables |
| Namespace | %whos | Variables with types and sizes |
| Namespace | %reset -f | Clear all variables (no prompt) |
| Display | %matplotlib inline | Inline chart rendering |
| Display | %matplotlib widget | Interactive charts |
| Debug | %debug | Post-mortem debugger after error |
| Debug | %pdb on | Auto-debug on every exception |
| Debug | %xmode Verbose | Verbose tracebacks with variable values |
| Shell | !command | Run a shell command |
| Shell | %%bash | Run entire cell as Bash |
| Languages | %%html | Render HTML in the notebook |
| Languages | %%latex | Render LaTeX equations |
| Languages | %%javascript | Run JavaScript |
| Output | %%capture var | Capture cell output to a variable |
| Discovery | %lsmagic | List all available magic commands |
| Discovery | %magic | Full documentation of magic system |
Conclusion: Magic Commands Are Your Productivity Multipliers
Magic commands represent one of the deepest and most practical layers of Jupyter’s capabilities. They are not merely conveniences — they fundamentally change what you can accomplish in a notebook session.
%timeit and %prun give you a profiler built into your development environment, letting you make data-driven decisions about optimization without switching to external tools. %%writefile and %run create a bridge between your exploratory notebook and production-ready Python scripts. %store saves you from re-running expensive computations every time the kernel restarts. %%bash lets you manage files, download data, and run system commands without leaving the data science context.
The magic commands in this article are the ones that experienced Jupyter practitioners reach for constantly — the commands that, once learned, become so natural that working without them feels genuinely constrained. Start by integrating %timeit, %matplotlib inline, and %%writefile into your daily workflow. Add %prun and %store when you start working with slower code or longer computations. Build up from there.
In the next article, you will explore the full JupyterLab environment and understand when it offers advantages over the classic Jupyter Notebook interface — and when to stick with the simpler tool.
Key Takeaways
- Magic commands are prefixed with `%` (line magic, applies to one line) or `%%` (cell magic, applies to the entire cell); they extend Python with built-in workflow tools.
- `%timeit` runs code hundreds of times and reports mean ± standard deviation for statistically reliable performance measurement; use `%%timeit` for multi-line blocks. `%time` gives a simple single-run wall clock measurement — better for slow operations that cannot be run repeatedly.
- `%prun` profiles at the function level; `%lprun` profiles at the line level (requires `line_profiler`) — use these to find bottlenecks before optimizing.
- `%%writefile filename.py` saves a cell's contents directly to disk; `%run script.py` executes an external script and imports its variables into the notebook.
- `%store variable` persists a Python variable between sessions; `%store -r variable` restores it — invaluable for expensive computations you do not want to re-run.
- `%who` and `%whos` inspect the current namespace; `%reset -f` clears it entirely.
- `%%bash`, `%%html`, `%%javascript`, and `%%latex` run entire cells in other languages directly within a Python notebook.
- `%debug` opens an interactive debugger at the location of the last exception; `%xmode Verbose` adds variable values to every traceback.
- `%lsmagic` lists every available magic command; append `?` to any magic for detailed documentation (e.g., `%timeit?`).








