Rate Limiting
Understand rate limits, why sites enforce them, and how to throttle your scraper to stay within safe request speeds.
David Miller
November 19, 2025
Websites limit how quickly you can send them requests. This is called rate limiting.
Why rate limits exist
- protect servers from overload
- ensure fair use of shared resources
- stop abusive or runaway traffic
If you ignore them, you may get:
- HTTP 429 (Too Many Requests) errors (handled in the sketch below)
- temporary or permanent IP bans
- blocked accounts
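When a site does throttle you, the response usually says so. Here is a minimal sketch that retries on HTTP 429 and honors the Retry-After header when the server sends one; the helper name get_with_backoff and its defaults are illustrative, not a standard API:

import time
import requests

def get_with_backoff(url, max_retries=3):
    # Hypothetical helper: retry on HTTP 429, waiting between attempts.
    for attempt in range(max_retries):
        res = requests.get(url)
        if res.status_code != 429:
            return res
        # Retry-After is usually seconds; it can also be an HTTP date,
        # which this sketch does not handle.
        wait = int(res.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
    return res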
Simple manual delay
import time
import requests

urls = ["https://example.com/page1", "https://example.com/page2"]

# Fetch each page, pausing two seconds between requests.
for url in urls:
    res = requests.get(url)
    print(res.status_code)
    time.sleep(2)
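A fixed delay produces a perfectly regular request pattern. One common refinement, sketched below, is to add random jitter with the standard random module; the 1.5-3 second range is an arbitrary choice, not something the example above requires:

import random
import time
import requests

urls = ["https://example.com/page1", "https://example.com/page2"]

for url in urls:
    res = requests.get(url)
    print(res.status_code)
    # Sleep a random 1.5-3 seconds instead of a fixed 2.
    time.sleep(random.uniform(1.5, 3.0))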
Token bucket style (concept)
A bucket holds up to N tokens and refills at a steady rate. Each request spends one token; when the bucket is empty, you wait for the next refill. This caps the average request rate while still allowing short bursts.
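A minimal sketch of that idea for a single-threaded scraper; the TokenBucket class and its parameters are illustrative, not a library API:

import time
import requests

class TokenBucket:
    # Illustrative limiter: `rate` tokens refill per second,
    # up to `capacity` tokens stored at once.
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.updated = time.monotonic()

    def acquire(self):
        # Refill from elapsed time, then spend one token,
        # sleeping until one is available if the bucket is empty.
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.updated) * self.rate)
            self.updated = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)

bucket = TokenBucket(rate=1.0, capacity=5)  # ~1 request/second, bursts of up to 5
for url in urls:  # urls as defined above
    bucket.acquire()  # blocks until a token is available
    print(requests.get(url).status_code)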
Using a simple limiter function
import time

last_call = 0.0

def rate_limit(seconds):
    # Sleep just long enough so successive calls are at least `seconds` apart.
    global last_call
    now = time.time()
    wait = seconds - (now - last_call)
    if wait > 0:
        time.sleep(wait)
    last_call = time.time()
for url in urls:
    rate_limit(2)  # at most one request every two seconds
    res = requests.get(url)
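Note that rate_limit relies on module-level state (last_call), so it only paces a single sequential loop; concurrent workers would need shared coordination such as a lock, which is beyond this sketch.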
Graph: throttling
flowchart LR
A[Request] --> B[Wait]
B --> C[Next Request]
Remember
- Always go a bit slower than the limit requires
- Respect site limits if documented (see the sketch after this list)
- Slow scraping is safer scraping
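On the second point: some sites publish a preferred delay as a Crawl-delay line in robots.txt. Python's standard urllib.robotparser can read it; the URL and user-agent string below are placeholders:

import time
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # placeholder URL
rp.read()

# crawl_delay() returns the site's Crawl-delay for this agent, or None if unset.
delay = rp.crawl_delay("my-scraper")  # placeholder user-agent token
time.sleep(delay or 2)  # fall back to a conservative two seconds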
#Python #Advanced #RateLimit