ACID Transactions: What They Actually Mean in Practice
TL;DR
ACID guarantees that database operations are Atomic (all or nothing), Consistent (rules enforced), Isolated (concurrent transactions don't interfere), and Durable (committed data survives crashes). Without transactions, concurrent writes corrupt data.
I once watched a money transfer bug drain $50,000 from user accounts. Two concurrent requests hit a balance check simultaneously, both saw "enough funds," both deducted, balance went negative. Classic race condition. Classic missing transaction isolation.
Understanding ACID isn't academic. It's the difference between a database you can trust and one that silently corrupts your data.
What ACID Actually Means
ACID is four guarantees that databases make about transactions:
- Atomic: All operations succeed, or none do
- Consistent: Data always follows defined rules
- Isolated: Concurrent transactions don't see each other's partial work
- Durable: Committed data survives crashes
Atomicity: All or Nothing
-- BAD: Two separate statements, no transaction
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
-- Crash happens here
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
-- Result: $100 vanished
-- GOOD: Atomic transaction
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
-- Either both happen, or neither does
If anything fails between BEGIN and COMMIT, the database rolls back all changes. The failure is clean—no partial state.
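This is easy to see with Python's built-in sqlite3 driver. A minimal, runnable sketch (the table and amounts are made up for the demo): a failure between the two updates rolls the whole transaction back, so no partial state survives.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 500), (2, 500)")
conn.commit()

def transfer(amount, fail_midway=False):
    with conn:  # begins a transaction; commits on success, rolls back on exception
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = 1", (amount,))
        if fail_midway:
            raise RuntimeError("simulated crash between the two updates")
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = 2", (amount,))

try:
    transfer(100, fail_midway=True)
except RuntimeError:
    pass

# The first UPDATE was rolled back with the transaction: no money vanished
balances = dict(conn.execute("SELECT id, balance FROM accounts"))
print(balances)  # {1: 500, 2: 500}
```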
Application-level atomicity
# BAD - partial updates possible
def transfer_money(from_id, to_id, amount):
    from_account = db.get("SELECT balance FROM accounts WHERE id = %s", from_id)
    if from_account.balance < amount:
        raise ValueError("Insufficient funds")
    db.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s", amount, from_id)
    # Crash here = money disappears
    db.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s", amount, to_id)
# GOOD - atomic
def transfer_money(from_id, to_id, amount):
    with db.transaction():
        from_account = db.get(
            "SELECT balance FROM accounts WHERE id = %s FOR UPDATE", from_id
        )
        if from_account.balance < amount:
            raise ValueError("Insufficient funds")
        db.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s", amount, from_id)
        db.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s", amount, to_id)
    # Commits here, or rolls back if an exception was raised
Consistency: Rules That Always Hold
Consistency means the database enforces your defined constraints. Data can't violate the rules.
-- Define rules at the schema level
CREATE TABLE accounts (
    id SERIAL PRIMARY KEY,
    balance NUMERIC NOT NULL CHECK (balance >= 0), -- Can't go negative
    user_id INTEGER REFERENCES users(id) ON DELETE RESTRICT -- Must reference valid user
);
-- This will fail with a constraint violation:
UPDATE accounts SET balance = -100 WHERE id = 1;
-- ERROR: new row violates check constraint "accounts_balance_check"
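The same behavior is easy to reproduce locally with SQLite's stdlib driver (table made up for the demo): the constraint is enforced by the database itself, so it holds no matter which code path issues the write.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE accounts (
        id INTEGER PRIMARY KEY,
        balance INTEGER NOT NULL CHECK (balance >= 0)
    )
""")
conn.execute("INSERT INTO accounts VALUES (1, 50)")
conn.commit()

rejected = False
try:
    # Would drive the balance to -50: the database refuses
    conn.execute("UPDATE accounts SET balance = balance - 100 WHERE id = 1")
except sqlite3.IntegrityError:
    rejected = True

print(rejected)  # True
print(conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0])  # 50
```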
Application-level consistency:
# BAD - consistency enforced in application code only
def create_order(user_id, product_id, quantity):
    product = get_product(product_id)
    if product.inventory < quantity:
        raise ValueError("Not enough stock")
    # Concurrent request can slip through here
    create_order_record(user_id, product_id, quantity)
    reduce_inventory(product_id, quantity)  # Can go negative!
# GOOD - consistency enforced at the database level
def create_order(user_id, product_id, quantity):
    with db.transaction():
        # Lock the row, then check
        product = db.query_one("""
            SELECT inventory FROM products
            WHERE id = %s FOR UPDATE
        """, product_id)
        if product.inventory < quantity:
            raise ValueError("Not enough stock")
        updated = db.execute("""
            UPDATE products
            SET inventory = inventory - %s
            WHERE id = %s AND inventory >= %s
        """, quantity, product_id, quantity)
        if updated.rowcount == 0:
            raise ValueError("Inventory changed, retry")
        db.execute("INSERT INTO orders ...")
Isolation: Concurrent Transactions Don't Interfere
The SQL standard defines four isolation levels, each trading correctness guarantees against concurrency:
Read Committed (default in PostgreSQL)
-- Transaction A
BEGIN;
UPDATE users SET email = 'new@example.com' WHERE id = 1;
COMMIT;
-- Transaction B (started before A committed)
BEGIN;
SELECT email FROM users WHERE id = 1; -- Sees old email
-- A commits
SELECT email FROM users WHERE id = 1; -- Sees new email!
-- Same query, different results within same transaction = non-repeatable read
Fine for most reads, but analytics queries can get inconsistent results mid-run.
Repeatable Read (default in MySQL)
-- Transaction A
BEGIN;
SELECT COUNT(*) FROM orders WHERE status = 'pending'; -- Returns 10
-- Transaction B inserts a new pending order and commits
-- Transaction A again (same transaction)
SELECT COUNT(*) FROM orders WHERE status = 'pending'; -- Still returns 10!
-- Sees a consistent snapshot from transaction start
Same query returns same results within a transaction. Good for reports and batch jobs.
Serializable (strongest)
-- Transaction A (Serializable)
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT SUM(balance) FROM accounts; -- Total: $1000
-- Transaction B concurrently inserts a new account and commits
-- Transaction A writes a value derived from its now-stale read
INSERT INTO audit_log (total) VALUES (1000);
COMMIT;
-- ERROR: could not serialize access due to read/write dependencies among transactions
-- Retry required, but data is always correct
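Because serialization failures are an expected outcome under SERIALIZABLE, callers need a retry loop. A generic sketch, where SerializationError stands in for your driver's equivalent (e.g. psycopg raises a dedicated error class for SQLSTATE 40001) and the backoff numbers are placeholders:

```python
import random
import time

class SerializationError(Exception):
    """Stand-in for your driver's serialization-failure error (SQLSTATE 40001)."""

def run_serializable(txn_fn, max_attempts=5):
    """Run txn_fn, retrying with jittered backoff on serialization failures."""
    for attempt in range(1, max_attempts + 1):
        try:
            return txn_fn()
        except SerializationError:
            if attempt == max_attempts:
                raise  # give up after max_attempts tries
            time.sleep(random.uniform(0, 0.01 * attempt))  # jittered backoff
```

The jitter matters: if two conflicting transactions retry on the same fixed schedule, they can keep colliding indefinitely.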
Practical guide
Read Uncommitted → Never use in production
Read Committed → Default, fine for most OLTP operations
Repeatable Read → Use for reports, batch jobs, financial calculations
Serializable → Use for seat booking, inventory, anything that can't double-book
The Classic Problem: Lost Update
Without proper isolation, concurrent writes corrupt data:
# BAD - race condition
def increment_views(post_id):
    post = db.get("SELECT views FROM posts WHERE id = %s", post_id)
    new_views = post.views + 1
    db.execute("UPDATE posts SET views = %s WHERE id = %s", new_views, post_id)

# What happens with 2 concurrent requests:
# Request A: reads views = 100
# Request B: reads views = 100
# Request A: writes views = 101
# Request B: writes views = 101 ← Lost A's update!
# Final: 101 instead of 102
-- GOOD: atomic update in the database
UPDATE posts SET views = views + 1 WHERE id = $1;
-- Database handles the increment atomically
-- No lost updates possible
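The race is easy to reproduce even single-threaded by interleaving the read and write steps by hand (SQLite via the stdlib sqlite3 module; schema made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, views INTEGER)")
conn.execute("INSERT INTO posts VALUES (1, 100)")
conn.commit()

def read_views():
    return conn.execute("SELECT views FROM posts WHERE id = 1").fetchone()[0]

# Read-modify-write: both "requests" read before either one writes
a = read_views()  # request A reads 100
b = read_views()  # request B reads 100
conn.execute("UPDATE posts SET views = ? WHERE id = 1", (a + 1,))  # A writes 101
conn.execute("UPDATE posts SET views = ? WHERE id = 1", (b + 1,))  # B writes 101: A's update lost
racy_result = read_views()
print(racy_result)  # 101, not 102

# Atomic increments: the database applies both, nothing is lost
conn.execute("UPDATE posts SET views = views + 1 WHERE id = 1")
conn.execute("UPDATE posts SET views = views + 1 WHERE id = 1")
print(read_views())  # 103
```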
# GOOD: pessimistic locking
def transfer_with_lock(from_id, to_id, amount):
    with db.transaction():
        # Lock both rows before reading (always lock in same order to prevent deadlock)
        db.execute("""
            SELECT id, balance FROM accounts
            WHERE id = ANY(%s)
            ORDER BY id
            FOR UPDATE
        """, [[from_id, to_id]])
        db.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s", amount, from_id)
        db.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s", amount, to_id)
# GOOD: optimistic locking (check and retry)
def update_inventory(product_id, quantity):
    for attempt in range(3):
        product = db.get(
            "SELECT id, inventory, version FROM products WHERE id = %s", product_id
        )
        new_inventory = product.inventory - quantity
        if new_inventory < 0:
            raise ValueError("Out of stock")
        # Only update if the version hasn't changed since we read it
        rows_updated = db.execute("""
            UPDATE products
            SET inventory = %s, version = version + 1
            WHERE id = %s AND version = %s
        """, new_inventory, product_id, product.version)
        if rows_updated == 1:
            return  # Success
        # Otherwise retry (someone else updated first)
    raise RuntimeError("Too many concurrent updates, retry later")
Durability: Survives Crashes
Once committed, data is permanent, even if the server loses power immediately after COMMIT.
Databases achieve this with write-ahead logging (WAL):
1. Write the changes to the WAL (durable storage)
2. COMMIT returns success to client
3. Apply changes to actual data files (async)
If crash happens between 2 and 3:
→ WAL replay recovers the committed data on restart
-- PostgreSQL durability settings
-- synchronous_commit = on (default): flush WAL to disk on every commit
-- Set to 'off' for 2-3x write throughput (risk: losing the most recent commits
-- on a crash, up to about 3x wal_writer_delay, i.e. ~600ms at defaults)
SHOW synchronous_commit;
Transactions in ORMs
# Django - BAD: no transaction, partial updates possible
def create_user_and_profile(email, name):
    user = User.objects.create(email=email)
    # If this fails, user exists without profile
    Profile.objects.create(user=user, name=name)

# GOOD: atomic transaction
from django.db import transaction

@transaction.atomic
def create_user_and_profile(email, name):
    user = User.objects.create(email=email)
    Profile.objects.create(user=user, name=name)
    # Both succeed or both roll back
// Prisma - interactive transactions
await prisma.$transaction(async (tx) => {
  const user = await tx.user.create({ data: { email } });
  await tx.profile.create({ data: { userId: user.id, name } });
  // Rolls back if either fails
});
When Transactions Span Services
You can't use database transactions across microservices. Your options:
Saga Pattern (eventual consistency)
1. Order service creates order (status: PENDING)
2. Payment service processes payment
3a. Success → Order service marks CONFIRMED
3b. Failure → Compensation logic runs, order marked FAILED
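The orchestration logic above can be sketched as a loop over (action, compensation) pairs; the service calls here are stand-in functions, and a real implementation would persist saga state between steps so a crashed orchestrator can resume:

```python
def run_saga(steps):
    """steps: list of (action, compensation) pairs. On failure, run the
    compensations for already-completed steps in reverse order."""
    completed = []
    for action, compensate in steps:
        try:
            action()
        except Exception:
            for undo in reversed(completed):
                undo()  # compensation: semantically reverse the earlier step
            return "FAILED"
        completed.append(compensate)
    return "CONFIRMED"
```

Note that compensation is not rollback: the earlier steps really happened and were visible to other services, so the compensating actions (refund, cancel) must be meaningful business operations, not undo magic.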
Outbox Pattern (reliable at-least-once)
- Write event to SAME database as state change (atomic!)
- Separate worker reads events and delivers to other services
- Eventually consistent, but reliable
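The key move is writing the business row and the event row in the same local transaction. A runnable SQLite sketch (table names and the publish callback are illustrative):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT);
    CREATE TABLE outbox (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        topic TEXT,
        payload TEXT,
        delivered INTEGER DEFAULT 0
    );
""")

def place_order(order_id):
    with conn:  # ONE local transaction: state change + event, together or not at all
        conn.execute("INSERT INTO orders VALUES (?, 'PENDING')", (order_id,))
        conn.execute(
            "INSERT INTO outbox (topic, payload) VALUES (?, ?)",
            ("order.created", json.dumps({"order_id": order_id})),
        )

def deliver_pending(publish):
    # The separate worker: poll undelivered events, push them out, mark them done
    rows = conn.execute(
        "SELECT id, topic, payload FROM outbox WHERE delivered = 0"
    ).fetchall()
    for event_id, topic, payload in rows:
        publish(topic, json.loads(payload))  # at-least-once: may repeat after a crash
        conn.execute("UPDATE outbox SET delivered = 1 WHERE id = ?", (event_id,))
    conn.commit()
```

Because a crash can happen between publish and the delivered-flag update, consumers must treat events as at-least-once and deduplicate.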
Two-Phase Commit (2PC)
- Complex, slow, single point of failure
- Avoid unless you have no other option
The Bottom Line
ACID transactions are what make databases trustworthy. Every payment processor, banking system, and e-commerce platform relies on them.
Key points:
- Wrap related operations in transactions to guarantee atomicity
- Use database constraints for consistency—don't rely on application logic alone
- Understand isolation levels: Read Committed for most things, Serializable for anything that can't double-book
- Use UPDATE ... WHERE version = $v or SELECT ... FOR UPDATE to prevent lost updates
- When transactions span services, use sagas or the outbox pattern
The 20 lines it takes to add a transaction have saved more companies from data corruption than any amount of application-level validation.