Why I Write SQL Instead of Using ORMs
TL;DR
ORMs hide complexity that eventually bites you. Raw SQL is explicit, performant, and easier to debug. Use ORMs for CRUD, write SQL for anything complex.
I used to think writing raw SQL was a waste of time. Why write queries manually when an ORM could generate them for me? Just define models, call methods, and let the abstraction handle the database.
Then I spent three days debugging why a simple user search was taking 8 seconds, only to discover my ORM was generating 47 separate queries when one JOIN would have done it. That's when I started questioning the whole premise.
Five years later, I write SQL for almost everything. I'm faster, my queries are more efficient, and debugging takes minutes instead of days. Here's what changed my mind and when ORMs still make sense.
The N+1 Problem That Broke Me
The turning point was a production incident. Our user dashboard was timing out under load. The ORM code looked innocent:
# Flask-SQLAlchemy style - looks clean, right?
users = User.query.filter_by(active=True).all()
for user in users:
    print(f"{user.name}: {len(user.posts)} posts")
    for post in user.posts:
        print(f"  - {post.title} ({len(post.comments)} comments)")
Simple, readable, exactly what you'd write following ORM best practices. But look at the generated queries:
-- Query 1: Get users
SELECT * FROM users WHERE active = true;
-- Query 2-101: For each of 100 users
SELECT * FROM posts WHERE user_id = 1;
SELECT * FROM posts WHERE user_id = 2;
-- ... 98 more queries
-- Query 102-10101: For each post
SELECT * FROM comments WHERE post_id = 1;
SELECT * FROM comments WHERE post_id = 2;
-- ... thousands more queries
The ORM generated over 10,000 queries to display 100 users. Load time: 8 seconds.
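The multiplication is easy to sanity-check: one query for the users, one per user for posts, one per post for comments. With 100 users at roughly 100 posts each (the numbers implied by the query log above):

```python
# N+1 query count: 1 (users) + U (posts, one per user) + U * P (comments, one per post)
users = 100
posts_per_user = 100  # assumed for illustration

total_queries = 1 + users + users * posts_per_user
print(total_queries)  # 10101
```

Which is exactly why the query log above runs from query 1 to query 10101.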
Here's the SQL version:
SELECT
    u.id, u.name,
    COUNT(DISTINCT p.id) as post_count,
    COUNT(c.id) as total_comments
FROM users u
LEFT JOIN posts p ON p.user_id = u.id
LEFT JOIN comments c ON c.post_id = p.id
WHERE u.active = true
GROUP BY u.id, u.name;
One query. 40ms. Problem solved.
But the real issue wasn't just performance—it was that the ORM hid what was happening. The code looked fine. Only diving into query logs revealed the disaster.
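The batching fix doesn't require an ORM either, and the pattern generalizes: fetch the parents, then load each level of children with a single IN query instead of one query per row. A minimal sketch using Python's built-in sqlite3 (toy schema and data; the two-queries-total shape is the point):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, active INTEGER);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER, title TEXT);
    INSERT INTO users VALUES (1, 'Ada', 1), (2, 'Grace', 1), (3, 'Inactive', 0);
    INSERT INTO posts VALUES (1, 1, 'a'), (2, 1, 'b'), (3, 2, 'c');
""")

# Query 1: the parent rows
users = conn.execute("SELECT id, name FROM users WHERE active = 1").fetchall()

# Query 2: ALL children in one shot, instead of one query per user
ids = [u[0] for u in users]
placeholders = ",".join("?" * len(ids))
posts = conn.execute(
    f"SELECT user_id, title FROM posts WHERE user_id IN ({placeholders})", ids
).fetchall()

# Group in application code
by_user = {}
for user_id, title in posts:
    by_user.setdefault(user_id, []).append(title)

for uid, name in users:
    print(f"{name}: {len(by_user.get(uid, []))} posts")
```

Two queries regardless of how many users there are, and both are visible at the call site.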
When ORMs Hurt More Than They Help
1. Complex Queries Become Impossible
Trying to write a moderately complex query with an ORM is like assembling IKEA furniture with oven mitts on:
# SQLAlchemy - trying to get top 5 users by post engagement
query = db.session.query(
    User,
    func.count(distinct(Post.id)).label('post_count'),
    func.count(Comment.id).label('comment_count'),
    func.sum(Post.views).label('total_views')
).join(
    Post, User.id == Post.user_id
).outerjoin(
    Comment, Post.id == Comment.post_id
).filter(
    Post.created_at >= datetime.now() - timedelta(days=30)
).group_by(
    User.id
).having(
    func.count(distinct(Post.id)) > 0
).order_by(
    desc(func.sum(Post.views))
).limit(5)
Compare to SQL:
SELECT
    u.*,
    COUNT(DISTINCT p.id) as post_count,
    COUNT(c.id) as comment_count,
    SUM(p.views) as total_views
FROM users u
JOIN posts p ON p.user_id = u.id
LEFT JOIN comments c ON c.post_id = p.id
WHERE p.created_at >= NOW() - INTERVAL '30 days'
GROUP BY u.id
HAVING COUNT(DISTINCT p.id) > 0
ORDER BY total_views DESC
LIMIT 5;
The SQL is shorter, clearer, and actually looks like what it does. The ORM version is method chaining hell that takes 10 minutes to write and 20 to understand later.
2. Performance Is a Black Box
ORMs hide the cost of operations. This looks harmless:
// Sequelize - accessing a relationship
const user = await User.findByPk(1);
const posts = await user.getPosts(); // Lazy load = surprise query!
console.log(posts.length);
Did that hit the database? Yes: a second query, and nothing at the call site says so. Whether any given access is a cheap property read or another round trip depends on what was eager-loaded earlier. With raw SQL, there's no ambiguity:
const result = await db.query(
  'SELECT COUNT(*) as count FROM posts WHERE user_id = $1',
  [userId]
);
console.log(result.rows[0].count);
You know exactly what's happening and when.
3. Database Features Become Inaccessible
Modern databases have incredible features. ORMs make them hard or impossible to use:
-- PostgreSQL JSON queries
SELECT * FROM events
WHERE metadata @> '{"status": "active"}'
AND metadata -> 'tags' ? 'important';
-- Window functions
SELECT
    name,
    salary,
    AVG(salary) OVER (PARTITION BY department) as dept_avg,
    RANK() OVER (ORDER BY salary DESC) as salary_rank
FROM employees;
-- CTEs for complex logic
WITH RECURSIVE org_chart AS (
    SELECT id, name, manager_id, 1 as level
    FROM employees
    WHERE manager_id IS NULL
    UNION ALL
    SELECT e.id, e.name, e.manager_id, oc.level + 1
    FROM employees e
    JOIN org_chart oc ON e.manager_id = oc.id
)
SELECT * FROM org_chart;
Good luck expressing these in an ORM without falling back to raw SQL anyway.
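These features aren't limited to big servers either: even SQLite has shipped window functions since 3.25, so you can experiment from a REPL. A quick sketch of the salary query above using Python's built-in sqlite3 and toy data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (name TEXT, department TEXT, salary INTEGER);
    INSERT INTO employees VALUES
        ('Alice', 'eng', 120), ('Bob', 'eng', 100), ('Carol', 'sales', 90);
""")

rows = conn.execute("""
    SELECT
        name,
        salary,
        AVG(salary) OVER (PARTITION BY department) AS dept_avg,
        RANK() OVER (ORDER BY salary DESC) AS salary_rank
    FROM employees
    ORDER BY salary_rank
""").fetchall()

for row in rows:
    print(row)  # first row: ('Alice', 120, 110.0, 1)
```

Two different windows in one pass over the table, with no application-side grouping code.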
4. Migrations Become Mysterious
ORM migrations abstract away DDL, which sounds great until you need to see what actually changed:
# Django migration - what does this actually do?
operations = [
    migrations.AlterField(
        model_name='user',
        name='email',
        field=models.EmailField(unique=True),
    ),
]
Versus the actual SQL:
-- Explicit and reviewable
ALTER TABLE users
    ADD CONSTRAINT users_email_unique UNIQUE (email);
-- Or, to avoid holding a long lock on a large table, build the index first:
CREATE UNIQUE INDEX CONCURRENTLY users_email_idx ON users (email);
ALTER TABLE users
    ADD CONSTRAINT users_email_unique UNIQUE USING INDEX users_email_idx;
The raw SQL tells you exactly what locks will be taken, what indexes will be created, and how long it might take in production.
When I Actually Use ORMs
I'm not anti-ORM. They solve real problems in specific contexts.
Simple CRUD Operations
For basic create/read/update/delete on single tables, ORMs are genuinely useful:
// GORM - this is fine for simple cases
user := User{Name: "John", Email: "john@example.com"}
db.Create(&user)

var found User
db.First(&found, "email = ?", "john@example.com")
db.Model(&found).Update("verified", true)
This is cleaner than writing the SQL manually, and performance doesn't matter for single-row operations.
Rapid Prototyping
When you're exploring an idea and database design is still fluid, ORMs let you move fast:
# Rails - great for prototyping
class User < ApplicationRecord
  has_many :posts
  validates :email, presence: true, uniqueness: true
end

User.create!(email: "test@example.com")
Schema changes are quick, and you can iterate without writing migration SQL.
Admin Panels and Internal Tools
For low-traffic internal tools where developer speed matters more than query performance:
# Django Admin gets you a full CRUD UI
class UserAdmin(admin.ModelAdmin):
list_display = ['name', 'email', 'created_at']
search_fields = ['name', 'email']
admin.site.register(User, UserAdmin)
You get a working admin panel in minutes. The N+1 queries don't matter when you have 10 users.
My Current Approach
The 80/20 Split
Use ORMs for:
- Single-table reads/writes
- Prototyping and early development
- Admin interfaces
- Simple relationships that fit ORM patterns
Write SQL for:
- Anything with JOINs
- Complex filtering or aggregations
- Performance-critical queries
- Reporting and analytics
- Anything using advanced database features
Query Builders as Middle Ground
For projects that need structure without full ORM overhead, query builders work well:
// Knex.js - more control than ORM, less than raw SQL
const users = await knex('users')
.select('users.*', knex.raw('COUNT(posts.id) as post_count'))
.leftJoin('posts', 'users.id', 'posts.user_id')
.where('users.active', true)
.groupBy('users.id')
.orderBy('post_count', 'desc')
.limit(10);
You get parameterization and composability without the ORM trying to be too clever.
Database-First Design
I now design the database first, then build the application:
-- Define the schema explicitly
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    email TEXT UNIQUE NOT NULL,  -- UNIQUE already creates an index on email
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE posts (
    id SERIAL PRIMARY KEY,
    user_id INTEGER REFERENCES users(id) ON DELETE CASCADE,
    title TEXT NOT NULL,
    content TEXT,
    views INTEGER DEFAULT 0,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE INDEX idx_posts_user_id ON posts(user_id);
CREATE INDEX idx_posts_created_at ON posts(created_at);
Then write application code that uses this schema, rather than letting the ORM define it.
Practical Patterns I Use
Explicit Query Functions
Instead of sprinkling queries throughout the codebase, centralize them:
// queries/users.go
package queries

func GetActiveUsers(db *sql.DB) ([]User, error) {
    query := `
        SELECT id, name, email, created_at
        FROM users
        WHERE active = true
        ORDER BY created_at DESC
    `
    rows, err := db.Query(query)
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    var users []User
    for rows.Next() {
        var u User
        if err := rows.Scan(&u.ID, &u.Name, &u.Email, &u.CreatedAt); err != nil {
            return nil, err
        }
        users = append(users, u)
    }
    return users, rows.Err()
}
Now all SQL is in one place, reviewable and testable.
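"Testable" is literal: point the same query function at a throwaway database and assert on the rows. A sketch of the idea in Python, with sqlite3 standing in for the real driver and a simplified version of the query above:

```python
import sqlite3

# The query lives in one named constant, not scattered through the codebase
ACTIVE_USERS_SQL = """
    SELECT id, name, email
    FROM users
    WHERE active = 1
    ORDER BY id
"""

def get_active_users(conn):
    return conn.execute(ACTIVE_USERS_SQL).fetchall()

# Test against an in-memory database
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT, active INTEGER);
    INSERT INTO users VALUES (1, 'Ada', 'ada@example.com', 1),
                             (2, 'Bob', 'bob@example.com', 0);
""")
assert get_active_users(conn) == [(1, 'Ada', 'ada@example.com')]
```

The same test shape works against a real Postgres instance in CI; only the connection changes.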
Prepared Statements for Safety
Prevent SQL injection without ORM magic:
// Node.js with pg
async function getUserByEmail(email) {
  const result = await pool.query(
    'SELECT * FROM users WHERE email = $1',
    [email] // Automatically parameterized
  );
  return result.rows[0];
}
Parameterized queries are built into every database driver. You don't need an ORM for safety.
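The same point in Python, plus a demonstration of why it matters: a classic injection payload passed as a parameter is treated as plain data, never as SQL (sqlite3 here, but every driver's placeholder mechanism gives the same guarantee):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice@example.com')")

hostile = "' OR '1'='1"  # classic injection payload

# Parameterized: the payload is compared as a literal string, not parsed as SQL
rows = conn.execute(
    "SELECT * FROM users WHERE email = ?", (hostile,)
).fetchall()
print(rows)  # [] - no match, no injection
```

Had the payload been interpolated into the SQL string directly, the WHERE clause would have matched every row.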
Transactions Are Explicit
import psycopg2

def transfer_money(from_account, to_account, amount):
    conn = get_connection()
    try:
        with conn:
            with conn.cursor() as cur:
                cur.execute(
                    'UPDATE accounts SET balance = balance - %s WHERE id = %s',
                    (amount, from_account)
                )
                cur.execute(
                    'UPDATE accounts SET balance = balance + %s WHERE id = %s',
                    (amount, to_account)
                )
        # Commits automatically if no exception
    except Exception:
        # Rolls back automatically on exception
        raise
No hidden transaction behavior—you see exactly what's grouped together.
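The commit-and-rollback contract is easy to verify against a throwaway database. In the sketch below (sqlite3 standing in for psycopg2; both support the same `with conn:` pattern), a statement fails mid-transaction and the earlier debit is rolled back along with it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 0)")
conn.commit()

try:
    with conn:  # commits on success, rolls back on exception
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
        conn.execute("UPDATE no_such_table SET x = 1")  # fails, triggering rollback
except sqlite3.OperationalError:
    pass

balances = conn.execute("SELECT balance FROM accounts ORDER BY id").fetchall()
print(balances)  # [(100,), (0,)] - the debit was rolled back too
```

No money vanished: either both statements commit or neither does.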
Query Comments for Documentation
-- Get users with engagement metrics for dashboard
-- Used by: /api/dashboard, /admin/users
-- Performance: ~50ms for 10k users with proper indexes
SELECT
    u.id,
    u.name,
    u.email,
    COUNT(DISTINCT p.id) as post_count,
    COUNT(DISTINCT c.id) as comment_count,
    MAX(p.created_at) as last_post_at
FROM users u
LEFT JOIN posts p ON p.user_id = u.id AND p.deleted_at IS NULL
LEFT JOIN comments c ON c.post_id = p.id
WHERE u.active = true
GROUP BY u.id, u.name, u.email
ORDER BY last_post_at DESC NULLS LAST;
ORM code can't carry this kind of context as naturally.
The Performance Win Is Real
On a project I recently took over, I replaced ORM queries with raw SQL for the hot paths:
Before (Django ORM):
- User dashboard: 2.3s average
- Search page: 1.8s average
- 500 req/sec max throughput
After (raw SQL):
- User dashboard: 180ms average (12x faster)
- Search page: 95ms average (19x faster)
- 3000 req/sec max throughput (6x improvement)
Same database, same data, just better queries.
The Mental Shift
The key insight: Databases are not implementation details to hide. They're powerful tools worth learning and using directly.
ORM thinking: "How do I make the ORM generate the query I need?"
SQL thinking: "What data do I need? Let me express that directly."
The second approach is simpler, more powerful, and makes you a better engineer.
Learning SQL Properly
Start With These Queries
-- Basic filtering and joins
SELECT u.name, p.title
FROM users u
JOIN posts p ON p.user_id = u.id
WHERE u.active = true;

-- Aggregations
SELECT
    u.name,
    COUNT(*) as post_count,
    AVG(p.views) as avg_views
FROM users u
JOIN posts p ON p.user_id = u.id
GROUP BY u.id, u.name
HAVING COUNT(*) > 5;

-- Subqueries
SELECT *
FROM users
WHERE id IN (
    SELECT user_id
    FROM posts
    WHERE views > 1000
);

-- Window functions
SELECT
    name,
    department,
    salary,
    AVG(salary) OVER (PARTITION BY department) as dept_avg
FROM employees;
Master these patterns and you can write 90% of application queries.
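All of these run on any SQL database, including SQLite, so they're easy to drill from Python. A self-contained check of the aggregation pattern (toy data; note HAVING dropping the low-volume author):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER, views INTEGER);
    INSERT INTO users VALUES (1, 'Ada'), (2, 'Bob');
    INSERT INTO posts VALUES (1, 1, 10), (2, 1, 20), (3, 1, 30), (4, 2, 5);
""")

rows = conn.execute("""
    SELECT u.name, COUNT(*) AS post_count, AVG(p.views) AS avg_views
    FROM users u
    JOIN posts p ON p.user_id = u.id
    GROUP BY u.id, u.name
    HAVING COUNT(*) > 2
""").fetchall()

print(rows)  # [('Ada', 3, 20.0)] - Bob's single post is filtered out
```

Running these patterns against tiny hand-made tables is a fast way to internalize how GROUP BY, HAVING, and aggregates interact.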
Use EXPLAIN to Understand Performance
EXPLAIN ANALYZE
SELECT u.name, COUNT(p.id)
FROM users u
LEFT JOIN posts p ON p.user_id = u.id
GROUP BY u.id;
-- Look for:
-- - Seq Scan (bad) vs Index Scan (good)
-- - Execution time
-- - Rows actually returned vs estimated
This teaches you what makes queries fast or slow.
Database-Specific Features Worth Learning
PostgreSQL:
- JSONB operators for flexible schemas
- Full-text search with to_tsvector
- Array operations
- CTEs and window functions
MySQL:
- JSON functions
- Full-text indexes
- Spatial data types
SQLite:
- FTS5 for full-text search
- JSON1 extension
- Common table expressions
These make your database far more powerful than any ORM abstraction.
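As a small taste, SQLite's JSON1 functions are compiled into the sqlite3 module of most modern Python builds, so the flexible-schema pattern works even there (json_extract rather than Postgres's @> operator, but the same idea):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, metadata TEXT)")
conn.executemany(
    "INSERT INTO events (metadata) VALUES (?)",
    [(json.dumps({"status": "active", "tags": ["important"]}),),
     (json.dumps({"status": "archived", "tags": []}),)],
)

# Filter on a field inside the JSON document, entirely in SQL
rows = conn.execute("""
    SELECT id FROM events
    WHERE json_extract(metadata, '$.status') = 'active'
""").fetchall()
print(rows)  # [(1,)]
```

No schema migration was needed to query the new field, which is exactly the flexibility these JSON features buy you.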
When ORMs Make Sense
Large teams with mixed skill levels: ORMs provide guardrails that prevent juniors from writing dangerous queries.
Rapid prototyping: When database design is still fluid and you need to iterate fast.
Multi-database support: If you genuinely need to support PostgreSQL, MySQL, and SQLite from the same codebase.
Framework integration: In Rails or Django, using the framework's ORM often makes integration easier than fighting it.
My Advice
Start with SQL. Learn to write queries before reaching for abstractions. You'll understand what the ORM is doing (or failing to do) when you eventually use one.
Profile everything. Use query logging and EXPLAIN to see what queries actually run. ORMs are great at hiding slow queries.
Keep a queries file. Put all your SQL in one place rather than scattering it through the codebase. Makes it easy to review, optimize, and test.
Use database features. Indexes, constraints, triggers, and stored procedures exist for a reason. ORMs make them hard to use.
Mix approaches. Use an ORM for simple CRUD, write SQL for complex queries. You don't have to choose one exclusively.
The goal isn't to avoid ORMs entirely—it's to use them intentionally for problems they actually solve, and reach for SQL when you need the power and clarity of working directly with the database.
I still use ORMs occasionally. But I write SQL for anything that matters. My queries are faster, my code is clearer, and I actually understand what's happening in the database.
The best abstraction is often no abstraction at all.