Rails Performance: 5 Critical Bottlenecks You're Missing

Why Performance Triage Matters
After profiling many Rails applications in production, I've found that performance issues tend to follow predictable patterns. When something is slow, it's rarely a mystery—it's almost always one of five common culprits. Here's a practical checklist for diagnosing and fixing Rails performance problems, ordered by impact 🎯.
⏱️ Quick Wins First (2-Minute Fixes)
Before diving into the details, here are the three changes that fix 80% of slow Rails app issues:
```ruby
# 1. Catch N+1 queries automatically (Gemfile)
gem 'bullet', groups: [:development, :test]

# 2. Load associations together (not separately)
Post.includes(:author) # Instead of Post.all

# 3. Add missing foreign key indexes
add_index :posts, :user_id
add_index :comments, :post_id
```
Expected impact: Response times often improve by 5-10x with just these three changes.
The Performance Triage Approach
Not all bottlenecks are created equal. Some give you 10x improvements with minimal effort, while others require major refactoring for marginal gains. This list is ordered by impact-to-effort ratio based on common production scenarios.
Bottleneck #1: N+1 Queries (The Silent Killer)
Why It's #1
N+1 queries are one of the most common performance killers in Rails applications. They're easy to introduce, hard to spot in development (with small datasets), and can significantly impact production performance.
Real-world impact: A typical homepage loading 50 posts with N+1 queries makes 51 database queries instead of 2. This can slow response times from 200ms to 2+ seconds—a 10x performance hit.
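The 1 + N arithmetic is easy to reproduce with a toy in-memory "database" that counts round trips (a self-contained sketch; the table and column names are illustrative):

```ruby
# Toy query counter: each call to `query` simulates one database round trip.
class FakeDB
  attr_reader :query_count

  def initialize
    @query_count = 0
  end

  def query(_sql)
    @query_count += 1
  end
end

db = FakeDB.new
post_ids = (1..50).to_a

# N+1 pattern: one query for the posts, then one per post for its author
db.query("SELECT * FROM posts")
post_ids.each { |id| db.query("SELECT * FROM authors WHERE post_id = #{id}") }
naive = db.query_count # 1 + 50 = 51 queries

db = FakeDB.new
# Eager loading: one query for the posts, one for all their authors
db.query("SELECT * FROM posts")
db.query("SELECT * FROM authors WHERE post_id IN (...)")
eager = db.query_count # 2 queries
```

The gap widens linearly with the page size: 500 posts means 501 queries naive versus still 2 eager.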
How to Spot It
N+1 queries typically show up as slow endpoints, increased database load, or degraded response times. Common tools for detecting them include:
- Bullet gem in development
- APM tools like NewRelic or Scout in production
- Rails query logs with `config.active_record.verbose_query_logs = true`
The Fix
Before (N+1):
```ruby
# In your controller
@posts = Post.all

# In your view - triggers N queries
@posts.each do |post|
  post.author.name    # Loads author for each post!
  post.comments.count # Loads comments for each post!
end
```
After (Optimized):
```ruby
# In your controller
@posts = Post.includes(:author).with_comments_count
```

Option 1: Counter cache (best for frequently accessed counts)

```ruby
# In the Comment model (the child side)
class Comment < ApplicationRecord
  belongs_to :post, counter_cache: true # counter_cache goes on belongs_to
end

# Migration to add the counter cache column and backfill existing data
class AddCommentsCountToPosts < ActiveRecord::Migration[7.0]
  def change
    add_column :posts, :comments_count, :integer, default: 0

    reversible do |dir|
      dir.up do
        # Backfill existing records (critical for existing data)
        Post.find_each { |post| Post.reset_counters(post.id, :comments) }
      end
    end
  end
end
```

Option 2: Real-time counts without a counter cache (when you need calculated counts)

```ruby
# In the Post model
scope :with_comments_count, -> {
  left_joins(:comments)
    .select("posts.*, COUNT(comments.id) AS comments_count")
    .group("posts.id")
}
```
When to Use What
| Method | Use Case | Example |
|---|---|---|
| `includes` | 99% of cases (default choice) | `Post.includes(:author)` |
| `preload` | Polymorphic associations, or forcing separate queries | `Post.preload(:comments)` |
| `eager_load` | Adding WHERE conditions on associations | `Post.eager_load(:author).where(authors: { verified: true })` |
| `joins` | Filtering by an association without loading it | `Post.joins(:author).where(authors: { verified: true })` |
| Counter cache | Frequently accessed counts | `belongs_to :post, counter_cache: true` |
💡 Pro Tip: Running Bullet in your test suite with Bullet.raise = true can catch N+1s before they reach production. This prevents performance regressions from sneaking into the codebase.
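A minimal sketch of that test-suite wiring, assuming Bullet's standard configuration flags (see the gem's README for the full list):

```ruby
# config/environments/test.rb
config.after_initialize do
  Bullet.enable = true
  Bullet.bullet_logger = true
  Bullet.raise = true # fail the test when an N+1 or unused eager load is detected
end
```

With `Bullet.raise = true`, any spec that triggers a detectable N+1 raises instead of silently passing.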
🚨 Gotcha: N+1 queries in background jobs are easy to miss because they don't cause user-facing slowdowns, but they can significantly impact job processing capacity and system resources.
Bottleneck #2: Missing Database Indexes
Why It's #2
Missing indexes are a common performance issue in Rails applications. A single missing index can turn a 10ms query into a multi-second table scan.
Real-world impact: A query filtering 100,000 posts by user_id without an index can take 800ms. Add the index, and it drops to 8ms—a 100x improvement.
How to Spot It
Missing indexes typically manifest through:
- Slow query logs (queries > 100ms)
- High database CPU usage
- `EXPLAIN ANALYZE` showing sequential scans on large tables
Finding the Culprits
```ruby
# In the Rails console: log all queries to STDOUT
ActiveRecord::Base.logger = Logger.new(STDOUT)
# Run your slow action, then check the logs
# Look for queries that scan many rows
```

```ruby
# PostgreSQL EXPLAIN - progressive debugging approach:

# 1. See the query plan (estimated costs)
Post.where(published: true).explain

# 2. See actual execution stats (runs the query; explain options require Rails 7.1+)
Post.where(published: true).explain(:analyze, :buffers)

# Look for:
# - "Seq Scan" on large tables    -> missing index
# - High "Buffers" numbers        -> too many rows scanned
# - Actual time >> estimated time -> statistics out of date
```
The Fix
Common patterns that need indexes:
```ruby
# Foreign keys (often missed!)
add_index :posts, :user_id
add_index :comments, :post_id

# Boolean flags you filter on
add_index :posts, :published
add_index :users, :admin

# Timestamps you sort/filter by
add_index :posts, :created_at
add_index :posts, :published_at

# Composite indexes for common query combinations
add_index :posts, [:user_id, :published, :created_at]

# Composite index on foreign key + status (very common pattern)
add_index :comments, [:post_id, :approved]
add_index :orders, [:user_id, :status]

# Partial indexes for conditional queries
add_index :posts, :featured, where: "featured = true"
add_index :users, :email, where: "deleted_at IS NULL"
add_index :comments, [:post_id, :approved], where: "approved = true"
```
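The speedup itself is just data-structure arithmetic: an index is conceptually a sorted (or keyed) copy of the column, turning a full scan into a direct lookup. A plain-Ruby sketch with illustrative data (a hash-based index here, where a real btree would be used):

```ruby
# 100,000 rows, user_id cycling through 1,000 values
rows = (1..100_000).map { |i| { id: i, user_id: i % 1_000 } }

# "Seq Scan": check every row
def seq_scan(rows, user_id)
  rows.select { |r| r[:user_id] == user_id }
end

# "Index Scan": a prebuilt map from user_id -> row positions
def build_index(rows)
  rows.each_with_index
      .group_by { |r, _| r[:user_id] }
      .transform_values { |pairs| pairs.map { |_, i| i } }
end

index = build_index(rows)
hits_via_scan  = seq_scan(rows, 42).size # touches all 100,000 rows
hits_via_index = index[42].size          # jumps straight to the matches
```

Both return the same rows; the index version just skips the other 99,900.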
Decision Framework
When to add an index:
- Foreign keys (always!)
- Columns used in WHERE clauses frequently
- Columns used for sorting (ORDER BY)
- Columns used in JOIN conditions
When NOT to add:
- Tables with < 1,000 rows (usually not worth it)
- Columns that are frequently updated (indexes slow writes)
- Columns with very low cardinality (e.g., boolean with 90% one value)
- Exception: Use partial indexes for low-cardinality columns when you only query one value:
```ruby
# Problem: posts.featured is boolean, 90% are false
# A full index is mostly wasted since you only ever query featured = true

# Solution: partial index (only indexes the true rows)
add_index :posts, :featured, where: "featured = true"

# Or for common status combinations:
add_index :comments, [:post_id, :approved], where: "approved = true"
add_index :orders, [:user_id, :status], where: "status = 'pending'"
```
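Whether a column counts as "low cardinality" is just a ratio you can estimate. A tiny illustrative heuristic (the 90/10 split mirrors the `featured` example above):

```ruby
# Rough selectivity: the fraction of rows a one-value filter would match.
# Close to 1.0 -> a full index on that value buys little.
# Close to 0.0 -> the value is rare; a partial index on it pays off.
def selectivity(values, target)
  return 0.0 if values.empty?
  values.count(target) / values.size.to_f
end

# Simulated `featured` column: 10% true, 90% false
featured = [true] * 100 + [false] * 900

sel_true  = selectivity(featured, true)  # 0.1 -> good partial-index candidate
sel_false = selectivity(featured, false) # 0.9 -> indexing this is mostly wasted space
```

In production you would pull the real numbers from `pg_stats` rather than sampling in Ruby; the decision rule is the same.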
💡 Pro Tip: Use EXPLAIN (ANALYZE, BUFFERS) in PostgreSQL to see actual performance, not just the query plan. It shows real execution time and how many rows were actually scanned.
🚨 Gotcha: Indexes speed up reads but slow down writes. Adding too many indexes to a table can negatively impact insert and update performance. Balance is key.
Bottleneck #3: Inefficient View Rendering
Why It's #3
After optimizing database queries, view rendering often becomes the next bottleneck—especially for pages with lots of partials or complex logic.
How to Spot It
Signs of view rendering problems include:
- Fast database queries but slow page rendering
- High "View" time in APM tools
- Deeply nested partials
Common Culprits
1. Logic in Views
```erb
<%# BAD - running queries in the view %>
<% @posts.each do |post| %>
  <% if post.comments.where(approved: true).any? %>
    <%# ... %>
  <% end %>
<% end %>

<%# GOOD - precompute in controller/model %>
<% @posts.each do |post| %>
  <% if post.has_approved_comments? %>
    <%# ... %>
  <% end %>
<% end %>
```
2. Too Many Partials
```erb
<%# This renders 100 partials one at a time (slow!) %>
<% @posts.each do |post| %>
  <%= render "post_card", post: post %>
<% end %>

<%# Better - use collection rendering %>
<%= render partial: "post_card", collection: @posts, as: :post %>
```
3. Unnecessary JSON Serialization
```erb
<%# BAD - serializing in the view %>
<%= @posts.to_json %>
```

```ruby
# GOOD - serialize in the controller with a dedicated serializer
# (each_serializer comes from the active_model_serializers gem)
render json: @posts, each_serializer: PostSerializer
```
The Fix: Fragment Caching
Fragment caching can significantly improve view rendering performance by storing rendered HTML fragments:
```erb
<%# Cache the expensive part %>
<% cache @post do %>
  <%= render "post_content", post: @post %>
<% end %>

<%# Russian doll caching with proper invalidation %>
<% cache @post do %>
  <%= render @post %>
  <% cache [@post, @post.comments.cache_key_with_version] do %>
    <%= render @post.comments %>
  <% end %>
<% end %>

<%# Collection caching %>
<%= render partial: "post_card", collection: @posts, cached: true %>
```

For proper cache invalidation with Russian doll caching, ensure child records touch their parent:

```ruby
class Comment < ApplicationRecord
  belongs_to :post, touch: true # Updates post.updated_at when a comment changes
end
```
The `cache_key_with_version` method ensures the cache key updates when any comment changes, providing automatic cache invalidation.
💡 Pro Tip: Use Russian doll caching for nested content where inner caches can be reused when outer fragments change. The `touch: true` option ensures parent caches invalidate when child records change, maintaining cache consistency.
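The invalidation mechanics can be sketched with a toy in-memory cache (plain Ruby; `Post` here is a stand-in Struct, not an ActiveRecord model):

```ruby
# Toy fragment cache keyed on [id, updated_at], mimicking how Rails derives
# keys from a record's cache_key_with_version. Touching the record (bumping
# updated_at) changes the key, so stale fragments are simply never read again.
CACHE = {}

Post = Struct.new(:id, :updated_at, :title)

def cache_key(post)
  "posts/#{post.id}-#{post.updated_at.to_i}"
end

def render_cached(post)
  CACHE[cache_key(post)] ||= "<h1>#{post.title}</h1>" # stand-in for an expensive render
end

post = Post.new(1, Time.now, "Hello")
first = render_cached(post)  # cache miss: renders and stores

post.title = "Hello, edited"
post.updated_at += 1         # what `touch: true` does when a child changes
second = render_cached(post) # new key -> fresh render, old entry ignored
```

Rails versions expire old entries via the store's eviction policy rather than leaving them around, but the key mechanism is the same.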
Bottleneck #4: Memory Bloat in Background Jobs
Why It's #4
Background jobs are often overlooked in performance optimization, but they're critical for overall system health. Memory-intensive jobs can impact your entire job processing infrastructure.
Real-world impact: A job processing 10,000 users with User.all.each can consume 2GB+ of RAM and crash workers. Using find_each keeps memory constant at ~50MB regardless of dataset size.
How to Spot It
Warning signs of memory issues in background jobs:
- Sidekiq/job workers using excessive RAM
- Workers getting OOM-killed
- Job processing slowing down over time
Common Causes
1. Loading Too Much Data
```ruby
# BAD - loads all records into memory
User.all.each do |user|
  UserMailer.weekly_digest(user).deliver_now
end

# GOOD - batching keeps memory constant
User.find_each(batch_size: 100) do |user|
  UserMailer.weekly_digest(user).deliver_now
end
```
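What `find_each` buys you can be sketched without Rails: it is essentially `each_slice` over primary keys, so peak memory is bounded by the batch size rather than the table size (toy numbers below):

```ruby
# Plain-Ruby sketch of batched iteration: only one slice is ever
# "in memory" at a time, regardless of how many ids there are.
def in_batches(ids, batch_size: 100)
  max_in_memory = 0
  processed = 0
  ids.each_slice(batch_size) do |batch|
    max_in_memory = [max_in_memory, batch.size].max
    batch.each do |_id|
      processed += 1 # stand-in for per-record work (e.g. sending a mailer)
    end
    # batch goes out of scope here and can be garbage collected
  end
  [processed, max_in_memory]
end

processed, peak = in_batches((1..10_000).to_a, batch_size: 100)
# all 10,000 records processed, never more than 100 held at once
```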
2. Holding References
```ruby
# BAD - accumulates all results
results = []
User.find_each do |user|
  results << process_user(user)
end
results.each { |r| do_something(r) }

# GOOD - process and discard
User.find_each do |user|
  result = process_user(user)
  do_something(result)
  # result goes out of scope and can be GC'd
end
```
3. Not Using Database Operations
```ruby
# BAD - loads all records into memory just to destroy them
Post.where(published: false).each(&:destroy)

# GOOD - a single SQL DELETE (no callbacks)
Post.where(published: false).delete_all

# If callbacks ARE needed, batch it:
Post.where(published: false).find_each do |post|
  post.destroy # Runs callbacks, but loads records in batches
end
```
💡 Pro Tip: Use find_each with appropriate batch sizes for processing large datasets in background jobs. This ensures memory usage stays constant regardless of the total number of records.
🚨 Gotcha: Be careful with accumulating results in memory during batch processing. If you need to collect results, consider writing them to a file or database incrementally rather than keeping everything in memory.
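One way to follow that advice, sketched with stdlib `CSV` and `Tempfile` (the doubling stands in for the real per-record work):

```ruby
require "tempfile"
require "csv"

# Instead of accumulating every result in an array, stream each batch's
# results to disk as you go; memory stays flat however many records exist.
results_file = Tempfile.new(["results", ".csv"])

CSV.open(results_file.path, "w") do |csv|
  csv << %w[id score]
  (1..1_000).each_slice(100) do |batch|
    batch.each do |id|
      csv << [id, id * 2] # stand-in for process_user(user)
    end
    # nothing accumulates: each row is written out, not kept in RAM
  end
end

line_count = File.foreach(results_file.path).count # 1 header + 1,000 rows
```

The same shape works for writing to a staging database table or an object store; the point is that the job's working set never grows with the input.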
Bottleneck #5: Slow Asset Compilation
Why It's #5
While less critical in production (since assets are precompiled), slow asset compilation can impact developer productivity and CI/CD pipeline speed.
How to Spot It
Common signs of asset compilation issues:
- Long deployment times
- Slow CI builds
- Slow development environment startup
The Fix
1. Use Modern JavaScript Bundlers
```bash
# esbuild - significantly faster JavaScript bundling
bin/rails javascript:install:esbuild

# Or Vite - similar speed with better DX
```
Modern bundlers provide substantial improvements:
- esbuild: ~50% faster JavaScript/TypeScript compilation than Webpacker
- Vite: Similar JS/TS speed to esbuild, with hot module replacement
- Important: Speed improvements are primarily for JS/TS bundling; CSS compilation improvements are less dramatic
- Main impact: CI/CD pipelines, development startup, and deploy times
2. Optimize Images
```erb
<%# Use WebP, lazy loading, and proper sizes %>
<%= image_tag "hero.jpg", loading: "lazy", sizes: "100vw" %>
```
3. Use a CDN for Assets
Serving static files from geographically distributed edge servers closer to users can significantly reduce load times.
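In Rails this is typically a one-line setting (the host below is a placeholder):

```ruby
# config/environments/production.rb
config.asset_host = "https://cdn.example.com"
```

Asset helpers like `image_tag` and `stylesheet_link_tag` then emit CDN URLs automatically.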
💡 Pro Tip: Assets are precompiled in production, so slow builds mostly hurt developer productivity and CI/CD speed. Modern bundlers like esbuild primarily accelerate JavaScript/TypeScript bundling; CSS compilation gains are less dramatic.
Wrapping Up
Rails performance optimization isn't magic—it's methodical. Most applications see dramatic speed gains without major rewrites: fix N+1 queries, add the right indexes, and cache what's costly. Start by measuring, target the changes with the biggest impact, and monitor for regressions over time.
Key takeaways:
- Always measure first – Let real data guide what you work on.
- Prioritize database issues – Slow queries are almost always the top culprit.
- Don't chase micro-optimizations – Focus on fixes with the highest leverage.
- Monitor continuously – Regular profiling prevents future slowdowns.
Often, just a couple of smart changes can transform how fast your app feels—for you and your users.
Resources:
Development: Bullet • rack-mini-profiler
Production: NewRelic • Scout APM • PgHero • Skylight
Docs: Rails Query Guide • PostgreSQL EXPLAIN
Enjoyed this post? Let's Not Make It One-Sided
I’m always up for a chat about code, MVP launches, or wild performance wins—reach out on X or LinkedIn, or just drop your story in the comments below.
Got a tip or a pain point you think the Rails community should hear? Share it below—your experience helps everyone.
More performance insights, code walkthroughs, and deep-dives on Rails bottlenecks coming soon!
💡 Found this helpful? Share it with your network!