Jupyter Magic Commands That Will Speed Up Your Workflow

Master Jupyter magic commands like %timeit, %run, %matplotlib, %%writefile, and more. Learn line and cell magics that transform your data science productivity.

Jupyter magic commands are special commands prefixed with % (line magics) or %% (cell magics) that extend Python’s capabilities with shortcuts for timing code, profiling performance, running external scripts, writing files, switching languages, and dozens of other workflow-enhancing operations — all directly from your notebook cells without needing to import any library.

Introduction: Superpowers Built Into Every Jupyter Notebook

When you first learn Jupyter Notebook, you focus on running Python code and rendering Markdown. But lurking beneath the surface is an entire layer of functionality that most beginners never discover: magic commands — a set of special instructions built into IPython (the enhanced Python kernel that powers Jupyter) that give you capabilities far beyond what standard Python provides.

Magic commands let you time your code with statistical precision, profile bottlenecks line by line, run entire Python scripts and capture their variables, write the contents of a cell directly to a file, display rich HTML and interactive widgets, switch temporarily to Bash or other languages, load previously run cells from your history, and much more.

None of this requires importing a single library. Magic commands are available in every Jupyter notebook from the moment you open it, and they are activated with a simple % or %% prefix. Understanding them is one of the hallmarks that separate a beginner who muddles through Jupyter from an experienced practitioner who flows through their work.

This article covers the most useful magic commands grouped by purpose, with practical examples for each. By the end, you will have a concrete toolkit of commands that address real workflow friction points — from “how do I know which part of my code is slow?” to “how do I quickly test something in Bash without leaving my notebook?”

1. Understanding the Two Types of Magic Commands

Jupyter has two categories of magic commands, distinguished by the number of % signs:

Line Magics (%)

A single % prefix makes a line magic — it applies to the rest of that single line only. The Python code in other lines of the cell is not affected:

Python
# %timeit is a line magic — it only times this one expression
%timeit [x**2 for x in range(1000)]

# The rest of the cell runs as normal Python
result = [x**2 for x in range(1000)]
print(f"Length: {len(result)}")

Cell Magics (%%)

A double %% prefix makes a cell magic — it applies to the entire cell. The %% command must appear on the very first line of the cell:

Python
%%timeit
# This entire cell is timed — all lines are included in the measurement
import numpy as np
arr = np.arange(1000)
result = arr ** 2
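
A common pitfall: nothing may come before the %% line, not even a comment or an import. Otherwise the cell is treated as ordinary Python and the magic is not recognized. For example, this cell raises an error:

Python
# Anything above the cell magic breaks it; this cell fails with a syntax error
%%timeit
sum(range(1000))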

Discovering Available Magic Commands

Python
# List all available magic commands
%lsmagic

# Get detailed help for any magic command
%timeit?
%run?

# General magic system help
%magic

Running %lsmagic produces a comprehensive list organized by line magics and cell magics — there are dozens available. This article focuses on the most practically valuable ones for data science work.
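
A related discovery tool is %quickref, which opens IPython's built-in quick reference card summarizing magics, shell escapes, and help shortcuts:

Python
# Open IPython's quick reference card
%quickref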

2. Timing and Performance Magic Commands

Performance measurement is one of the most common reasons to reach for a magic command. Knowing how long your code takes — and which parts are slowest — is essential for writing efficient data pipelines.

%timeit — Statistical Timing for a Single Line

%timeit runs your code many times (automatically determining the number of repetitions), calculates the mean and standard deviation, and reports the result. This statistical approach gives you a much more accurate and reliable measurement than simply using time.time() before and after:

Python
import numpy as np
import pandas as pd

# Compare two approaches to squaring a million numbers
data = list(range(1_000_000))
arr  = np.array(data)

# Python list comprehension
%timeit [x**2 for x in data]
# 126 ms ± 3.21 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

# NumPy vectorized operation
%timeit arr ** 2
# 1.85 ms ± 42.4 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

# Pandas Series operation
s = pd.Series(data)
%timeit s ** 2
# 4.23 ms ± 91.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

This makes it immediately clear that NumPy is about 68x faster than a Python list comprehension for this operation — a crucial insight for optimizing data science code.

%%timeit — Timing an Entire Cell

When your code spans multiple lines and you want to time the entire block:

Python
%%timeit
# Entire cell is timed — building a DataFrame and computing group stats
import pandas as pd
import numpy as np

n = 100_000
df = pd.DataFrame({
    'group':  np.random.choice(['A','B','C','D'], n),
    'value':  np.random.randn(n)
})
result = df.groupby('group')['value'].agg(['mean', 'std', 'count'])

Controlling the number of runs:

Python
# Run 100 loops, repeat 5 times
%timeit -n 100 -r 5 sum(range(10000))

# For operations too slow to repeat, run exactly once with -n 1 -r 1
%timeit -n 1 -r 1 pd.read_csv('large_file.csv')
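
%timeit can also hand its measurements back to Python: the -o flag returns a TimeitResult object, which is useful for programmatic comparisons:

Python
# -o returns a TimeitResult object in addition to printing the summary
res = %timeit -o sum(range(10_000))

print(res.best)      # Fastest run
print(res.average)   # Mean across all runs
print(res.stdev)     # Standard deviation across runs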

%time — Simple Wall Clock Timing

When you want a simple, one-shot timing without statistical averaging — especially useful for operations that are too slow to run hundreds of times:

Python
# Single timing measurement
%time result = sum(range(10_000_000))
# CPU times: user 387 ms, sys: 4.21 ms, total: 391 ms
# Wall time: 392 ms

# Shows: CPU user time, system time, total CPU time, and wall clock time
# Wall time = real-world elapsed time (what you actually experience)

%%time — Time an Entire Slow Cell

Python
%%time
# Load and process a large dataset — too slow for %timeit's repeated runs
import pandas as pd

df = pd.read_csv('sales_data.csv')
df['revenue'] = df['price'] * df['quantity']
summary = df.groupby('region')['revenue'].sum()
print(summary)
# CPU times: user 2.34 s, sys: 284 ms, total: 2.62 s
# Wall time: 2.71 s

3. Profiling Magic Commands

Timing tells you how long code takes; profiling tells you where the time is being spent. These magic commands let you drill into your code’s performance at the function and line level.

%prun — Function-Level Profiling

%prun runs your code through Python’s cProfile profiler and displays a table showing how much time was spent in each function call:

Python
import pandas as pd
import numpy as np

def complex_analysis(n=100_000):
    """Simulate a multi-step data analysis."""
    df = pd.DataFrame({
        'a': np.random.randn(n),
        'b': np.random.randn(n),
        'cat': np.random.choice(['X','Y','Z'], n)
    })
    df['c'] = df['a'] ** 2 + df['b'] ** 2
    df['d'] = np.sqrt(df['c'])
    result = df.groupby('cat')['d'].agg(['mean', 'std'])
    return result

# Profile the function
%prun complex_analysis()

# Output (abbreviated):
#          ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
#               1    0.000    0.000    0.312    0.312  <string>:1(<module>)
#               1    0.045    0.045    0.312    0.312  <ipython>:3(complex_analysis)
#               1    0.087    0.087    0.156    0.156  {pandas groupby}
#             ...

The columns tell you:

  • ncalls: How many times this function was called
  • tottime: Total time spent in this function (excluding sub-functions)
  • cumtime: Cumulative time including all sub-functions
  • percall: Time per call

Sort by cumtime to find the biggest bottlenecks.
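
%prun can also save its report for later inspection: -T writes the on-screen text report to a file, and -D dumps binary statistics that the standard pstats module can reload (the filenames below are arbitrary):

Python
# Sort by cumulative time, save the text report, and dump binary stats
%prun -s cumulative -T profile_report.txt -D profile_stats.prof complex_analysis()

# Reload the binary dump later with pstats
import pstats
stats = pstats.Stats('profile_stats.prof')
stats.sort_stats('cumulative').print_stats(10)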

%%prun — Profile an Entire Cell

Python
%%prun -s cumulative -l 10
# Profile the entire cell, sorted by cumulative time, showing top 10 functions
import numpy as np
import pandas as pd

n = 500_000
data = np.random.randn(n, 5)
df = pd.DataFrame(data, columns=list('ABCDE'))
result = df.corr()
print(result.round(3))

%lprun — Line-by-Line Profiling

For granular profiling that shows the time spent on each individual line of a function, use line_profiler (requires installation):

Python
# Install line_profiler if not available
# !pip install line_profiler

# Load the line_profiler extension
%load_ext line_profiler

def process_dataframe(df):
    """A function we want to profile line by line."""
    df = df.copy()                              # Line 2 in the profile output
    df['squared'] = df['value'] ** 2           # Line 3
    df['log'] = np.log1p(df['value'].abs())    # Line 4
    df['category'] = pd.cut(df['value'],       # Line 5
                             bins=5,
                             labels=['VL','L','M','H','VH'])
    result = df.groupby('category')['squared'].mean()  # Line 6
    return result

import numpy as np, pandas as pd
test_df = pd.DataFrame({'value': np.random.randn(100_000)})

# Profile line by line
%lprun -f process_dataframe process_dataframe(test_df)

# Output shows each line with its time and percentage of total:
# Line   Hits    Time   Per Hit  % Time  Line Contents
# ============================================================
#    2      1  12450.0  12450.0    45.2  df = df.copy()
#    3      1   3211.0   3211.0    11.7  df['squared'] = ...
#    4      1   4892.0   4892.0    17.8  df['log'] = ...
#    5      1   5219.0   5219.0    19.0  df['category'] = ...
#    6      1   1742.0   1742.0     6.3  result = df.groupby...
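
If memory rather than time is the bottleneck, the separate memory_profiler package provides analogous magics, %memit and %mprun; like line_profiler, it must be installed first:

Python
# !pip install memory_profiler
%load_ext memory_profiler

# Measure peak memory usage of a single statement
%memit [x**2 for x in range(1_000_000)]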

4. File and Script Management Magic Commands

%run — Execute External Python Scripts

%run executes an external .py file; once it finishes, the script's top-level variables become available in your notebook's namespace:

Python
# Run a Python script and import its variables into the notebook
%run data_loader.py

# After running, variables defined in data_loader.py are available
print(df.shape)   # df was defined in data_loader.py
print(model)      # model was also defined there

# Run with command-line arguments
%run my_script.py --input data.csv --output results.csv

# Run and capture timing
%run -t expensive_computation.py

This is invaluable when you want to reuse a preprocessing pipeline defined in a .py file without copying code into your notebook.
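
%run has several other useful flags; a quick sketch (the script names here are illustrative):

Python
# -i runs the script inside the notebook's namespace, so it can see your variables
%run -i postprocess.py

# -d runs the script under the pdb debugger
%run -d my_script.py

# -p profiles the entire script with cProfile
%run -p expensive_computation.py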

%%writefile — Write a Cell’s Contents to a File

%%writefile saves the contents of a cell to a file — excellent for creating utility scripts, configuration files, or small modules directly from your notebook:

Python
%%writefile data_cleaner.py
"""
data_cleaner.py — Reusable data cleaning utilities.
Generated from analysis notebook on 2024-03-15.
"""
import pandas as pd
import numpy as np

def clean_dataframe(df):
    """Apply standard cleaning steps to a raw DataFrame."""
    df = df.copy()
    
    # Strip whitespace from string columns
    str_cols = df.select_dtypes('object').columns
    for col in str_cols:
        df[col] = df[col].str.strip()
    
    # Drop fully duplicate rows
    df = df.drop_duplicates()
    
    # Fill numeric nulls with column median
    num_cols = df.select_dtypes('number').columns
    for col in num_cols:
        df[col] = df[col].fillna(df[col].median())
    
    return df

def validate_schema(df, required_cols):
    """Raise ValueError if any required columns are missing."""
    missing = set(required_cols) - set(df.columns)
    if missing:
        raise ValueError(f"Missing required columns: {missing}")
    return True
# Writing data_cleaner.py

After running this cell, data_cleaner.py exists on disk and can be imported in other notebooks or scripts:

Python
# Now import the module you just wrote
from data_cleaner import clean_dataframe, validate_schema

df_clean = clean_dataframe(raw_df)

To append to an existing file instead of overwriting:

Python
%%writefile -a existing_file.py
# This content is appended to the existing file
def new_function():
    pass

%load — Load a File’s Contents into a Cell

The opposite of %%writefile, %load reads a file and inserts its contents into the current cell:

Python
# This magic loads the file contents into this cell when run
%load data_cleaner.py

# After running, the cell is replaced with the file's actual contents
# You can then edit it and run the modified version
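
%load can also pull in just part of a file: -r selects a range of lines, and -s selects specific functions or classes by name:

Python
# Load only lines 5-12 of the file
%load -r 5-12 data_cleaner.py

# Load a single function by name
%load -s clean_dataframe data_cleaner.py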

%save — Save Cell Contents to a File

Save the input of specific cells (referenced by their execution numbers) to a Python script:

Python
# Save cells 3, 5, 6, and 7 to a script
%save analysis_functions.py 3 5-7

# Append cell 8 to the same script (-a appends instead of overwriting)
%save -a analysis_functions.py 8

5. History and Session Management

%history — Review Previous Commands

Python
# Show last 10 commands executed in this session
%history -l 10

# Show commands from cell 1 to cell 5
%history 1-5

# Show commands with their cell numbers
%history -n

# Search history for commands containing 'groupby'
%history -g groupby
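
%history can also write directly to a file, a quick way to snapshot an entire session as a script:

Python
# Export the current session's history to a file
%history -f session_log.py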

%recall — Bring a Previous Command Back

Python
# Recall the code from cell 4 into the current cell for editing
%recall 4

# With no argument, recall the previous output
%recall

%store — Persist Variables Between Sessions

%store saves variables to disk so they persist even after you shut down the notebook and restart the kernel:

Python
# After an expensive computation...
result_df = run_expensive_analysis()   # Takes 10 minutes

# Store it for future sessions
%store result_df
# Stored 'result_df' (DataFrame)

# In a future session, restore it instantly
%store -r result_df
print(result_df.shape)   # Available without re-running the analysis

# List all stored variables
%store

# Delete a stored variable
%store -d result_df

This is extremely practical for long-running computations — train a model once, store it, reload it in future sessions without waiting.

6. System and Environment Magic Commands

%matplotlib — Configure Plot Rendering

One of the most universally used magic commands:

Python
# Render plots inline in the notebook (most common)
%matplotlib inline

# Render interactive plots (can pan, zoom) — requires ipympl
%matplotlib widget

# Open plots in separate windows (useful for detailed inspection)
%matplotlib qt

# No display (useful in batch processing scripts)
%matplotlib agg

%env — View and Set Environment Variables

Python
# List all environment variables
%env

# Get a specific variable
%env PATH

# Set an environment variable for this session
%env MY_API_KEY=abc123

# Useful for configuring library behavior:
# use only the first GPU, and limit OpenMP threads
%env CUDA_VISIBLE_DEVICES=0
%env OMP_NUM_THREADS=4

%pwd and %cd — Navigate the File System

Python
# Show current working directory
%pwd
# '/Users/username/projects/data_science'

# Change directory
%cd /path/to/data/folder

# Change to home directory
%cd ~

# List directory contents
%ls
%ls -la   # Detailed listing with permissions and sizes
%ls *.csv # List only CSV files

%who and %whos — Inspect the Namespace

Python
# List all variables in the current namespace
%who
# df   model   X_train   y_train   results   config

# Detailed view with types and information
%whos
# Variable   Type        Data/Info
# ---------------------------------
# df         DataFrame   500 rows x 12 cols
# model      object      RandomForestClassifier(...)
# X_train    ndarray     400x11: 4400 elems, float64

# Filter by type
%who DataFrame
%who int float

%reset — Clear the Namespace

Python
# Interactively reset — prompts for confirmation
%reset

# Reset without confirmation (use carefully!)
%reset -f

# Reset only variables whose names match a regular expression
%reset_selective -f temp_

7. Shell Commands and OS Integration

The ! Prefix — Run Any Shell Command

While not technically a magic command (it is a shell escape), the ! prefix is equally important for workflow:

Python
# Install packages without leaving the notebook
!pip install lightgbm xgboost catboost

# Check installed package versions
!pip show pandas scikit-learn

# Download datasets
!wget https://raw.githubusercontent.com/datasciencedojo/datasets/master/titanic.csv

# Or with curl
!curl -o data.csv https://example.com/dataset.csv

# Unzip files
!unzip archive.zip -d data/

# Count rows in a CSV (including header)
!wc -l data.csv

# Check disk space
!df -h

# List running Python processes
!ps aux | grep python

# Capture shell output as a Python variable
files = !ls *.csv
print(f"Found {len(files)} CSV files: {files}")

%%bash — Run an Entire Cell as Bash

Python
%%bash
# This entire cell runs as a Bash script
echo "Creating directory structure..."
mkdir -p data/raw data/processed data/models

echo "Downloading data..."
curl -s -o data/raw/sales.csv https://example.com/sales.csv

echo "Checking download..."
wc -l data/raw/sales.csv
echo "Done."

%%sh — Alternative Shell Cell Magic

Python
%%sh
# Similar to %%bash, runs as sh (more portable)
for file in data/raw/*.csv; do
    echo "Processing: $file"
    wc -l "$file"
done

8. Multi-Language Magic Commands

One of Jupyter’s most powerful features — rarely known by beginners — is the ability to run code in other languages directly within a Python notebook.

%%html — Render Raw HTML

Python
%%html
<div style="background: #f0f8ff; padding: 20px; border-radius: 8px; border-left: 4px solid #2196F3;">
    <h3 style="color: #1565C0; margin-top: 0;">📊 Analysis Summary</h3>
    <ul>
        <li><strong>Dataset:</strong> 50,000 customer records</li>
        <li><strong>Date Range:</strong> January 2023 – December 2023</li>
        <li><strong>Key Metric:</strong> Revenue increased 23.4% YoY</li>
    </ul>
    <p style="color: #666;">Generated automatically by analysis pipeline.</p>
</div>

%%javascript — Run JavaScript in the Browser

Python
%%javascript
// Access the browser's JavaScript environment directly
console.log("Hello from JavaScript!");

// Create an alert (useful for signaling when long computations finish)
alert("Your analysis is complete!");

// Modify the page DOM
document.querySelector('.jp-Cell-inputArea').style.backgroundColor = '#fffde7';

%%latex — Render LaTeX Equations

Python
%%latex
\begin{equation}
    \text{MSE} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2
\end{equation}

\begin{equation}
    R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}
\end{equation}

%%markdown — Render Markdown Programmatically

Python
%%markdown
## Dynamic Report Section

This section renders as formatted Markdown when the cell runs.
To build Markdown strings dynamically in Python, use IPython.display.Markdown instead (see Section 10).

%%sql — Run SQL Directly (with ipython-sql)

Python
# First: !pip install ipython-sql sqlalchemy
%load_ext sql
%sql sqlite:///mydata.db

Python
%%sql
SELECT 
    region,
    COUNT(*) as transactions,
    SUM(revenue) as total_revenue,
    AVG(revenue) as avg_revenue
FROM sales
WHERE year = 2024
GROUP BY region
ORDER BY total_revenue DESC
LIMIT 10;
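
With ipython-sql, the line-magic form can also capture a query result, which converts directly to a pandas DataFrame (this assumes the same sales table as above):

Python
# Capture the result of a query, then convert it to a DataFrame
result = %sql SELECT region, SUM(revenue) AS total_revenue FROM sales GROUP BY region
df_sales = result.DataFrame()
print(df_sales.head())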

9. Debugging and Development Magic Commands

%debug — Post-Mortem Debugging

When a cell raises an error, %debug drops you into an interactive debugger at the point of the error:

Python
# After a cell raises an exception, run this in the next cell:
%debug

# Opens an interactive pdb (Python Debugger) session where you can:
# - Inspect the variable values at the point of failure
# - Step through the code
# - Type 'q' to quit the debugger
Python
# Or enable automatic debugging for every exception
%pdb on

# Now any exception automatically opens the debugger
def buggy_function(x):
    result = x / 0   # This will trigger the debugger
    return result

buggy_function(5)

%%capture — Capture Cell Output

%%capture suppresses a cell’s printed output and stores it in a variable for later inspection:

Python
%%capture captured_output

# This cell's output is captured, not displayed
import subprocess
result = subprocess.run(['python', '--version'], capture_output=True, text=True)
print(f"Python version: {result.stdout}")
print("This output is captured")
for i in range(10):
    print(f"Processing item {i}")
Python
# Inspect in a later cell; otherwise the inspection output would be captured too
print(captured_output.stdout)
print(captured_output.stderr)
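
The captured object can also replay its output on demand:

Python
# Re-display the captured output exactly as it originally appeared
captured_output.show()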

%xmode — Control Exception Verbosity

Python
# Ultra-compact — just the exception type and message
%xmode Minimal

# Plain traceback, like standard Python's
%xmode Plain

# Traceback with surrounding source context (IPython's default)
%xmode Context

# Verbose traceback with local variable values — extremely helpful for debugging
%xmode Verbose

10. Display and Rich Output Magic Commands

%matplotlib with Different Backends

Python
# High-quality inline plots (default)
%matplotlib inline

# Interactive plots — pan, zoom, hover tooltips (requires ipympl: pip install ipympl)
%matplotlib widget

# Try this for interactive exploration:
import matplotlib.pyplot as plt
import numpy as np

# Switch to interactive mode
%matplotlib widget
fig, ax = plt.subplots()
x = np.linspace(0, 2*np.pi, 1000)
ax.plot(x, np.sin(x))
ax.set_title('Interactive Sine Wave — try zooming!')
plt.show()

Rich Output with IPython.display

Python
from IPython.display import display, HTML, Image, Markdown, IFrame

# Display formatted HTML
display(HTML("<h2 style='color:navy'>Results Dashboard</h2>"))

# Display an image from a URL or local path
display(Image('chart.png', width=600))

# Display Markdown programmatically
display(Markdown(f"""
## Analysis Complete

- **Records processed:** {len(df):,}
- **Revenue total:** ${df['revenue'].sum():,.2f}
- **Top region:** {df.groupby('region')['revenue'].sum().idxmax()}
"""))

11. Practical Magic Command Workflows

Workflow 1: Benchmarking Multiple Implementations

Python
import numpy as np
import pandas as pd

n = 500_000
data = np.random.randn(n)
series = pd.Series(data)
lst = list(data)

print("=== Squaring 500,000 numbers ===")
print("\nMethod 1: Python list comprehension")
%timeit [x**2 for x in lst]

print("\nMethod 2: NumPy array operation")
%timeit data ** 2

print("\nMethod 3: Pandas Series operation")
%timeit series ** 2

print("\nMethod 4: NumPy vectorized with sqrt")
%timeit np.sqrt(np.abs(data))

Workflow 2: Quick Script Development and Testing

Step 1: write the utility function to a file. Note that %%writefile must be the very first line of its cell:

Python
%%writefile feature_engineering.py

import pandas as pd
import numpy as np

def create_time_features(df, date_col):
    """Extract time-based features from a datetime column."""
    df = df.copy()
    dt = pd.to_datetime(df[date_col])
    df['hour']         = dt.dt.hour
    df['day_of_week']  = dt.dt.dayofweek
    df['month']        = dt.dt.month
    df['quarter']      = dt.dt.quarter
    df['is_weekend']   = dt.dt.dayofweek >= 5
    df['is_month_end'] = dt.dt.is_month_end
    return df
Python
# Step 2: Load and test it immediately
%run feature_engineering.py

# create_time_features is now available in this notebook
test_df = pd.DataFrame({
    'timestamp': pd.date_range('2024-01-01', periods=100, freq='6H'),
    'value': np.random.randn(100)
})

result = create_time_features(test_df, 'timestamp')
print(result.head())
print(f"\nNew columns: {[c for c in result.columns if c not in test_df.columns]}")

Workflow 3: Long Computation with Progress and Storage

Python
%%time
# Simulate an expensive computation (%%time must be the first line of the cell)
import time
results = {}
for model_name in ['LinearRegression', 'RandomForest', 'GradientBoosting', 'XGBoost']:
    print(f"Training {model_name}...", end=' ')
    time.sleep(0.5)   # Simulate training time
    results[model_name] = {'accuracy': 0.85 + hash(model_name) % 10 / 100}
    print("done ✓")

print(f"\nBest model: {max(results, key=lambda k: results[k]['accuracy'])}")
Python
# Store results so you don't need to re-run if kernel restarts
%store results
# Stored 'results' (dict)
Python
# In a future session — instant recovery
%store -r results
print("Restored training results:")
for model, metrics in results.items():
    print(f"  {model}: {metrics['accuracy']:.3f}")

Workflow 4: Interactive Data Exploration with Shell

Python
# Check what data files are available
%ls data/

# Check total disk usage
!du -sh data/

Python
%%bash
# Count rows in each CSV before loading (%%bash needs its own cell)
for f in data/*.csv; do
    echo -n "$f: "
    wc -l < "$f"
done

12. A Quick Reference Guide to the Most Important Magic Commands

Category    Command                   What It Does
Timing      %timeit expr              Statistical timing of one line
Timing      %%timeit                  Statistical timing of an entire cell
Timing      %time expr                Single wall-clock timing
Timing      %%time                    Single wall-clock timing of a cell
Profiling   %prun func()              Function-level cProfile profiling
Profiling   %lprun -f func func()     Line-by-line profiling (needs line_profiler)
Files       %%writefile file.py       Write cell contents to a file
Files       %load file.py             Load file contents into a cell
Files       %run script.py            Execute an external Python script
Files       %save file.py n           Save cell n to a file
History     %history -l 10            Show last 10 executed commands
History     %store var                Persist a variable between sessions
History     %store -r var             Restore a stored variable
System      %pwd                      Print working directory
System      %cd path                  Change directory
System      %ls                       List directory contents
System      %env VAR=value            Set an environment variable
Namespace   %who                      List all variables
Namespace   %whos                     Variables with types and sizes
Namespace   %reset -f                 Clear all variables (no prompt)
Display     %matplotlib inline        Inline chart rendering
Display     %matplotlib widget        Interactive charts
Debug       %debug                    Post-mortem debugger after an error
Debug       %pdb on                   Auto-debug on every exception
Debug       %xmode Verbose            Verbose tracebacks with variable values
Shell       !command                  Run a shell command
Shell       %%bash                    Run an entire cell as Bash
Languages   %%html                    Render HTML in the notebook
Languages   %%latex                   Render LaTeX equations
Languages   %%javascript              Run JavaScript
Output      %%capture var             Capture cell output to a variable
Discovery   %lsmagic                  List all available magic commands
Discovery   %magic                    Full documentation of the magic system

Conclusion: Magic Commands Are Your Productivity Multipliers

Magic commands represent one of the deepest and most practical layers of Jupyter’s capabilities. They are not merely conveniences — they fundamentally change what you can accomplish in a notebook session.

%timeit and %prun give you timing and profiling tools built into your development environment, letting you make data-driven decisions about optimization without switching to external tools. %%writefile and %run create a bridge between your exploratory notebook and production-ready Python scripts. %store saves you from re-running expensive computations every time the kernel restarts. %%bash lets you manage files, download data, and run system commands without leaving the data science context.

The magic commands in this article are the ones that experienced Jupyter practitioners reach for constantly — the commands that, once learned, become so natural that working without them feels genuinely constrained. Start by integrating %timeit, %matplotlib inline, and %%writefile into your daily workflow. Add %prun and %store when you start working with slower code or longer computations. Build up from there.

In the next article, you will explore the full JupyterLab environment and understand when it offers advantages over the classic Jupyter Notebook interface — and when to stick with the simpler tool.

Key Takeaways

  • Magic commands are prefixed with % (line magic, applies to one line) or %% (cell magic, applies to the entire cell); they extend Python with built-in workflow tools.
  • %timeit runs code hundreds of times and reports mean ± standard deviation for statistically reliable performance measurement; use %%timeit for multi-line blocks.
  • %time gives a simple single-run wall clock measurement — better for slow operations that cannot be run repeatedly.
  • %prun profiles at the function level; %lprun profiles at the line level (requires line_profiler) — use these to find bottlenecks before optimizing.
  • %%writefile filename.py saves a cell’s contents directly to disk; %run script.py executes an external script and imports its variables into the notebook.
  • %store variable persists a Python variable between sessions; %store -r variable restores it — invaluable for expensive computations you do not want to re-run.
  • %who and %whos inspect the current namespace; %reset -f clears it entirely.
  • %%bash, %%html, %%javascript, and %%latex run entire cells in other languages directly within a Python notebook.
  • %debug opens an interactive debugger at the location of the last exception; %xmode Verbose adds variable values to every traceback.
  • %lsmagic lists every available magic command; append ? to any magic for detailed documentation (e.g., %timeit?).