From requirements.txt to pyproject.toml: My Python Evolution
A few months ago, I wrote about venturing into Python territory as a Node.js developer. That post was about culture shock—learning requirements.txt, virtual environments, and the Python "trinity" of Black, Flake8, and MyPy. Today, I'm writing the sequel. I've shipped multiple Python projects since then, and my setup has evolved dramatically.
Spoiler: I don't use requirements.txt anymore. Or Black. Or Flake8. Here's what changed.
The Old Way vs. The New Way
Remember my original setup?
# The old way (what I wrote about before)
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt -r requirements-dev.txt
Now? A single command:
# The new way
uv sync
That's it. One tool. One command. Everything just works.
uv: The Package Manager That Changed Everything
If you're coming from the JavaScript world, you know how Bun revolutionized Node.js package management. uv does the same for Python—but even more dramatically.
Here's my actual setup from a recent project:
[project]
name = "load-tester"
version = "0.1.0"
description = "A high-performance async load testing tool"
requires-python = ">=3.12"
dependencies = [
"aiohttp>=3.13.2",
"pydantic>=2.12.5",
"rich>=14.2.0",
"typer>=0.20.0",
]
[dependency-groups]
dev = [
"pytest>=8.0.0",
"pytest-asyncio>=0.23.0",
"pytest-cov>=4.1.0",
"ruff>=0.1.0",
"mypy>=1.8.0",
]
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
No more requirements.txt. No more requirements-dev.txt. Everything lives in pyproject.toml—just like package.json in Node.js, but better structured.
The [dependency-groups] feature is beautiful. Development dependencies stay separate from production ones, but they all live in one file. uv sync sets up everything; for a deploy, uv sync --no-dev skips the dev group entirely.
Ruff: One Tool to Rule Them All
Remember the "Python Development Trinity" I mentioned before—Black, Flake8, and MyPy? I've collapsed two of those into one:
[tool.ruff]
line-length = 88
target-version = "py312"
src = ["src", "tests"]
[tool.ruff.lint]
select = [
"E", # pycodestyle errors
"W", # pycodestyle warnings
"F", # Pyflakes
"I", # isort
"B", # flake8-bugbear
"C4", # flake8-comprehensions
"UP", # pyupgrade
"ARG", # flake8-unused-arguments
"SIM", # flake8-simplify
"TCH", # flake8-type-checking
"PTH", # flake8-use-pathlib
"RUF", # Ruff-specific rules
]
[tool.ruff.format]
quote-style = "double"
indent-style = "space"
docstring-code-format = true
Ruff does what Black + Flake8 + isort did, but in a single tool that's written in Rust and runs in milliseconds. My entire codebase lints in the time it took Flake8 to start up.
The best part? Ruff's select system lets me pick exactly which rules I want. I'm not stuck with a monolithic configuration—I can enable flake8-bugbear for catching common bugs, flake8-simplify for code simplification suggestions, and pyupgrade for automatically modernizing my code.
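To make those rule codes concrete, here's a short illustrative snippet (not from the project) written the way three of these rules steer you:

```python
from pathlib import Path

# SIM108 (flake8-simplify): a ternary instead of a four-line if/else assignment.
def sign(x: int) -> int:
    return 1 if x >= 0 else -1

# C4 (flake8-comprehensions): a comprehension instead of list(map(...)).
squares = [n * n for n in range(5)]

# PTH (flake8-use-pathlib): pathlib joins instead of os.path string munging.
config_path = Path("settings") / "app.toml"
```

Run `ruff check --fix` and many of these rewrites happen automatically.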
Pydantic: TypeScript-Level Confidence
Coming from TypeScript, I missed the confidence of knowing my data shapes at compile time. Pydantic gives me that in Python—and then some:
from enum import Enum
from typing import Annotated

from pydantic import BaseModel, ConfigDict, Field, HttpUrl, model_validator

class HttpMethod(str, Enum):
    """HTTP methods the tool supports (minimal version shown so the snippet is self-contained)."""
    GET = "GET"
    POST = "POST"
class LoadTestConfig(BaseModel):
"""Configuration for a single load test execution."""
model_config = ConfigDict(frozen=True)
url: HttpUrl
method: HttpMethod = HttpMethod.GET
num_requests: Annotated[int, Field(ge=1)] = 100
concurrency: Annotated[int, Field(ge=1)] = 10
timeout: Annotated[float, Field(gt=0)] = 30.0
headers: dict[str, str] = Field(default_factory=dict)
@model_validator(mode="after")
def validate_config(self) -> "LoadTestConfig":
"""Validate configuration constraints."""
if self.concurrency > self.num_requests:
raise ValueError("concurrency cannot exceed num_requests")
return self
This isn't just type hints—it's runtime validation with clear error messages. The Annotated[int, Field(ge=1)] ensures the value is at least 1. The @model_validator handles cross-field validation that TypeScript's type system can't express.
And frozen=True? That makes the model immutable after creation. No accidental mutations. No debugging weird state changes.
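If you want to see those two guarantees (immutability plus a cross-field check) without Pydantic, the stdlib can approximate them. A minimal sketch with a frozen dataclass and a `__post_init__` validator; `MiniConfig` is illustrative, not the project's actual model:

```python
from dataclasses import FrozenInstanceError, dataclass

@dataclass(frozen=True)
class MiniConfig:
    num_requests: int = 100
    concurrency: int = 10

    def __post_init__(self) -> None:
        # Cross-field check, analogous to the @model_validator above.
        if self.concurrency > self.num_requests:
            raise ValueError("concurrency cannot exceed num_requests")

cfg = MiniConfig(num_requests=1000, concurrency=100)

# Mutation is rejected, like Pydantic's frozen=True.
try:
    cfg.concurrency = 1  # type: ignore[misc]
except FrozenInstanceError:
    pass
```

Pydantic adds what the dataclass can't: type coercion, rich error messages, and serialization, which is why I reach for it in real projects.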
Typer + Rich: Beautiful CLIs Without the Boilerplate
Building CLI tools in Python used to mean wrestling with argparse. Now I use Typer with Rich, and the developer experience is incredible:
from typing import Annotated
import typer
from rich.console import Console
app = typer.Typer(
name="load-tester",
help="A high-performance async load testing tool.",
no_args_is_help=True,
)
console = Console()
@app.command()
def run(
url: Annotated[str, typer.Argument(help="Target URL for load testing.")],
num_requests: Annotated[
int,
typer.Option(
"-n", "--requests",
help="Total number of requests to send.",
min=1,
),
] = 100,
verbose: Annotated[
bool,
typer.Option("-v", "--verbose", help="Enable verbose output."),
] = False,
) -> None:
"""Run a load test against a target URL."""
console.print(f"[green]Testing {url}...[/green]")
Type hints become CLI arguments. Help text is generated automatically. Validation happens for free (min=1 ensures positive values). Rich gives me colors and formatting without any extra work.
The result? Professional CLIs that rival anything built with Click or argparse, but with a fraction of the code.
My Modern Python Project Structure
After several projects, I've settled on this structure:
project/
├── src/
│   ├── __init__.py
│   ├── main.py          # CLI entry point
│   ├── models/          # Pydantic models
│   │   ├── __init__.py
│   │   ├── config.py
│   │   └── results.py
│   ├── engine/          # Core business logic
│   │   ├── __init__.py
│   │   └── runner.py
│   └── utils/
│       └── errors.py
├── tests/
│   ├── conftest.py      # Shared fixtures
│   ├── unit/
│   │   └── test_models.py
│   └── integration/
│       └── test_engine.py
├── pyproject.toml       # Single config file
├── Makefile             # Common commands
└── uv.lock              # Lock file (auto-generated)
It's feature-based, like my Vue/React projects. Each feature owns its domain. Tests mirror the source structure. Everything is discoverable.
The Makefile: npm Scripts for Python
I still use a Makefile for common tasks—it's the npm scripts of Python:
.PHONY: lint format typecheck test

lint:
	uv run ruff check .

format:
	uv run ruff format .

typecheck:
	uv run mypy .

test:
	uv run pytest tests/ -v
uv run is the magic here. It automatically uses the project's virtual environment without me having to activate it. Just like npx or bunx, but smarter.
Testing: pytest + pytest-asyncio
Testing async Python code used to be painful. Now with pytest-asyncio, it's trivial:
import pytest
from pydantic import ValidationError

from src.models import HttpMethod, LoadTestConfig
class TestLoadTestConfig:
"""Tests for LoadTestConfig model."""
def test_valid_config(self) -> None:
"""Test creating a valid load test config."""
config = LoadTestConfig(
url="https://example.com/api", # type: ignore[arg-type]
method=HttpMethod.POST,
num_requests=1000,
concurrency=100,
)
assert config.num_requests == 1000
def test_concurrency_cannot_exceed_num_requests(self) -> None:
"""Test that concurrency cannot exceed num_requests."""
with pytest.raises(ValidationError):
LoadTestConfig(
url="https://example.com", # type: ignore[arg-type]
num_requests=10,
concurrency=100,
)
Clean, readable, and expressive. The Pydantic validation errors make debugging a breeze—they tell you exactly what went wrong and where.
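The async tests themselves lean on pytest-asyncio, which lets a test function be a coroutine. The pattern underneath is ordinary asyncio; here's a hedged sketch of the kind of concurrent check such a test performs, with `fake_request` as a stub standing in for a real HTTP call:

```python
import asyncio

async def fake_request(delay: float) -> int:
    # Stand-in for an HTTP call; returns a fake status code.
    await asyncio.sleep(delay)
    return 200

async def run_concurrently(n: int) -> list[int]:
    # Fire n "requests" at once, the core move a load tester makes.
    return await asyncio.gather(*(fake_request(0.01) for _ in range(n)))

statuses = asyncio.run(run_concurrently(5))
assert statuses == [200] * 5
```

In a real pytest-asyncio test, the `asyncio.run` call disappears: you mark the coroutine as a test and the plugin drives the event loop for you.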
What I Actually Build Now
Let me show you a real async function from production:
async def run_load_test(
config: LoadTestConfig,
proxy_file: Path | None = None,
show_progress: bool = True,
output_format: str = "human",
) -> Statistics:
"""Run a complete load test."""
if output_format == "human":
print_test_header(config)
proxy_manager: ProxyManager | None = None
if proxy_file:
proxy_manager = await ProxyManager.from_file(proxy_file)
console.print(f"[green]Loaded {proxy_manager.proxy_count} proxies[/green]")
runner = LoadTestRunner(config=config, proxy_manager=proxy_manager)
stats = await runner.run(show_progress=show_progress)
if output_format == "json":
print_json_statistics(stats, config)
else:
print_statistics(stats, config)
return stats
Type hints everywhere. Async/await patterns. Clean separation of concerns. This could almost be TypeScript—but it's Python, and it runs faster than I expected.
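The runner's core trick is bounding how many requests are in flight at once. A minimal sketch of that pattern with asyncio.Semaphore; the names here are illustrative, not the real LoadTestRunner:

```python
import asyncio

async def bounded_run(num_requests: int, concurrency: int) -> int:
    # Allow at most `concurrency` coroutines past the semaphore at a time.
    sem = asyncio.Semaphore(concurrency)
    completed = 0

    async def one_request() -> None:
        nonlocal completed
        async with sem:
            await asyncio.sleep(0)  # stand-in for the real HTTP call
            completed += 1

    # Schedule all requests up front; the semaphore throttles execution.
    await asyncio.gather(*(one_request() for _ in range(num_requests)))
    return completed

assert asyncio.run(bounded_run(num_requests=50, concurrency=10)) == 50
```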
The Confidence Boost
Here's what strict typing and Pydantic validation give me:
[tool.mypy]
python_version = "3.12"
strict = true
warn_return_any = true
warn_unused_ignores = true
disallow_untyped_defs = true
disallow_incomplete_defs = true
check_untyped_defs = true
With strict = true, MyPy enables its toughest checks (most of the individual flags above are actually implied by it). Combined with Pydantic's runtime validation, I have TypeScript-level confidence. Bugs that would have crashed in production now fail during type checking or at model validation, with clear error messages.
The Evolution Summary
| Before | After |
|---|---|
| requirements.txt + requirements-dev.txt | pyproject.toml with [dependency-groups] |
| pip install + manual venv | uv sync |
| Black + Flake8 + isort | Ruff (all-in-one, Rust-powered) |
| argparse / Click | Typer + Rich |
| Manual validation | Pydantic with @model_validator |
| Makefiles with source venv/bin/activate | Makefiles with uv run |
What's Next?
I'm still exploring Python's async ecosystem. Building a load tester taught me a lot about aiohttp, asyncio patterns, and concurrent programming. The language has matured significantly—modern Python with type hints, Pydantic, and uv feels like a completely different experience from the tutorials I first encountered.
If you're a JavaScript developer considering Python, don't be intimidated by the old tutorials showing requirements.txt and virtualenv. The modern Python ecosystem is clean, fast, and surprisingly similar to what you're used to.
Just start with uv init, add your dependencies to pyproject.toml, and let Ruff handle the rest. You'll feel at home in no time.
What's your Python setup look like in 2025? I'd love to hear how others are structuring their projects. The ecosystem moves fast, and there's always something new to learn.