TL;DR
OFFSET pagination doesn’t scale well because your database has to scan and discard tons of rows just to find the ones you actually need. Use cursor-based pagination instead; your database will thank you.
Last week, I was designing a high-availability REST API endpoint, doing the usual thing: setting up pagination with page number and page size query parameters. Then it hit me: what happens when someone wants to view page 40,000? How would my server even handle that?
The Usual Approach
When you’re building paginated GET endpoints for large-scale systems, you obviously can’t return all records at once. So we design paginated APIs where clients send query parameters like pageNumber and pageSize.
Pretty standard stuff, right?
How Backend Handles Pagination
If the client doesn’t provide these parameters, the backend assumes they want the first page with some default page size (usually configurable).
When pagination params are provided, the backend calculates limit and offset to fetch data from the database, processes it, and sends it back to the client.
Simple enough.
What If It’s Not the First Page?
Here’s where things get interesting. When a client requests, say, page 50, the server calculates:
limit = pageSize
offset = pageSize × (pageNumber - 1)
For example, if someone wants the 10th page with 20 records per page:
- limit = 20
- offset = 180 (we need to skip the first 9 pages)
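That calculation is a one-liner in code. Here’s a Python sketch (the function name and the default page size are illustrative, not from any particular framework):

```python
DEFAULT_PAGE_SIZE = 20  # assumed default; usually configurable

def page_to_limit_offset(page_number=1, page_size=DEFAULT_PAGE_SIZE):
    """Translate 1-based page parameters into SQL LIMIT/OFFSET values."""
    limit = page_size
    offset = page_size * (page_number - 1)
    return limit, offset

# 10th page, 20 records per page -> skip the first 9 pages (180 rows)
print(page_to_limit_offset(10, 20))  # (20, 180)
```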
The OFFSET Problem
In SQL, OFFSET tells the database: “Skip this many rows before giving me results.”
So our query looks like:
SELECT id, name, created_at
FROM users
ORDER BY id
LIMIT 20 OFFSET 180;
Seems fine, right? But here’s the catch: the database still has to walk through the rows from the beginning. It reads 200 rows, then throws the first 180 away just to hand you the 20 you asked for.
Now imagine the offset is 100,000,000. Yikes.
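You can see the skip-and-discard behavior with a quick SQLite sketch (the table and row counts are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO users (id, name) VALUES (?, ?)",
    [(i, f"user{i}") for i in range(1, 1001)],
)

# Page 10 with 20 rows per page: the engine walks 180 rows it never returns.
rows = conn.execute(
    "SELECT id, name FROM users ORDER BY id LIMIT 20 OFFSET 180"
).fetchall()
print(rows[0], rows[-1])  # (181, 'user181') (200, 'user200')
```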
It’s Definitely a Trap
For a high-traffic endpoint, every single request forces the database to scan millions of records just to return 20. That’s putting massive stress on your database for no good reason.
There has to be a better way.
Cursor-Based Pagination to the Rescue
Instead of scanning from the start every time, what if we could jump directly to where we need to be?
That’s exactly what cursor-based pagination does. I’ve found two simple approaches:
- Primary key-based cursoring
- Timestamp-based cursoring
The idea is beautifully simple: when fetching the 3rd page, send the ID of the last record from the 2nd page. Then use a WHERE clause to start from that exact point:
SELECT id, name, created_at
FROM users
WHERE id > 987654321
ORDER BY id
LIMIT 20;
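Here’s a minimal keyset loop in Python against SQLite (names are illustrative): the cursor is simply the last id the client saw, and `None` means start from the beginning.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO users (id, name) VALUES (?, ?)",
    [(i, f"user{i}") for i in range(1, 101)],
)

def fetch_page(conn, cursor=None, page_size=20):
    """Return (rows, next_cursor); next_cursor is None on the last page."""
    rows = conn.execute(
        "SELECT id, name FROM users WHERE id > ? ORDER BY id LIMIT ?",
        (cursor or 0, page_size),
    ).fetchall()
    next_cursor = rows[-1][0] if len(rows) == page_size else None
    return rows, next_cursor

page1, cur = fetch_page(conn)
page2, cur = fetch_page(conn, cur)
print(page2[0])  # (21, 'user21'): picks up exactly where page 1 ended
```

Because `id` is the primary key, that WHERE clause is an index seek straight to the starting point, no matter how deep the page is.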
The timestamp approach works similarly:
SELECT id, name, created_at
FROM users
WHERE created_at > '2026-01-20 10:30:00'
ORDER BY created_at
LIMIT 20;
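One caveat with timestamps: created_at is rarely unique, so a plain `>` comparison can skip rows that share the boundary timestamp. A common fix is a composite cursor of (created_at, id), sketched here in Python with SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, created_at TEXT)")
# Three rows share one timestamp: a plain created_at cursor would skip two of them.
conn.executemany(
    "INSERT INTO users (id, created_at) VALUES (?, ?)",
    [(1, "2026-01-20 10:30:00"),
     (2, "2026-01-20 10:30:00"),
     (3, "2026-01-20 10:30:00"),
     (4, "2026-01-20 10:31:00")],
)

def fetch_after(conn, ts, last_id, limit=2):
    # Tie-break on id so rows with equal timestamps are never skipped.
    return conn.execute(
        """SELECT id, created_at FROM users
           WHERE created_at > ? OR (created_at = ? AND id > ?)
           ORDER BY created_at, id LIMIT ?""",
        (ts, ts, last_id, limit),
    ).fetchall()

page = fetch_after(conn, "2026-01-20 10:30:00", 1)
print([r[0] for r in page])  # [2, 3]: the same-timestamp rows still show up
```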
The Performance Difference
Let’s talk time complexity:
OFFSET approach: O(offset)
Cursor approach: O(pageSize)
The difference is massive. With OFFSET, performance degrades as you go deeper into pages. With cursors, every page loads at roughly the same speed.
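Both approaches return identical pages; the difference is how much work the engine does to get there. A quick equivalence check (row counts are illustrative, and real timing gaps only show up at much larger scale):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO users (id) VALUES (?)",
                 [(i,) for i in range(1, 10001)])

# Deep page via OFFSET: O(offset) rows scanned and discarded.
offset_rows = conn.execute(
    "SELECT id FROM users ORDER BY id LIMIT 20 OFFSET 9000"
).fetchall()

# Same page via cursor: the index seeks straight past id 9000, O(pageSize) work.
cursor_rows = conn.execute(
    "SELECT id FROM users WHERE id > 9000 ORDER BY id LIMIT 20"
).fetchall()
print(offset_rows == cursor_rows)  # True
```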
So Which Should You Use?
Big distributed systems prefer cursor-based pagination for a good reason: it scales. Kafka consumers, for example, track their position in a partition with an offset cursor rather than re-reading from the start.
That said, if you’re building a small system without huge load, OFFSET is fine. It’s simpler to implement and relies only on built-in SQL keywords.
But here’s the thing: cursor-based pagination isn’t that complex either. Once you understand it, it’s just as straightforward.
I used OFFSET in my side projects before, but now that I’m working on high-availability systems, I always go with cursor-based pagination. It’s just the smarter choice at scale.
Final Thoughts
Start with what works for your current scale, but know the trade-offs. OFFSET is a ticking time bomb if your system grows. Cursor-based pagination might seem like extra work upfront, but it’ll save you headaches down the road.
Your database performance will scale much better, and your users will get faster response times, especially those browsing beyond the first few pages.
Have you dealt with pagination issues in your projects? I’d love to hear about your experiences!