How we implemented file-based rate limiting that blocks 99.7% of brute force attempts without Redis, memcached, or any external dependency.
The Attack That Started It
In November 2025, one of our client sites received 14,000 login attempts in a single hour. The attacker used a credential stuffing list — username/password pairs leaked from other breaches — and automated POST requests to /admin/login.php. The site stayed up because PHP handles each request independently, but the database was hammered with authentication queries and legitimate users experienced 8-second response times.
That incident led us to build rate limiting into JekCMS core. Not as a plugin. Not as an optional feature. As a fundamental part of every request lifecycle.
Choosing the Algorithm: Sliding Window
There are three common rate limiting algorithms:
- Fixed Window: Count requests in fixed time blocks (e.g., 100/minute). Simple but has a burst problem at window boundaries — you can send 100 requests at 11:59:59 and 100 more at 12:00:01.
- Token Bucket: Tokens refill at a constant rate. Allows bursts up to the bucket size. Good for APIs but complex to implement without shared state.
- Sliding Window: Tracks individual request timestamps and counts within a rolling window. No boundary bursts, predictable behavior, easy to understand.
We chose sliding window because it's the simplest to implement correctly with file-based storage and has no edge-case surprises.
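The boundary-burst difference between the first and third options is easy to simulate. This sketch (illustrative only, not JekCMS code) counts how many requests each algorithm admits when 100 requests arrive just before a minute boundary and 100 more just after it, under a 100-per-minute limit:

```php
<?php
// Fixed window: key each request by the minute block it lands in.
function fixedWindowAllows(array $timestamps, int $limit): int
{
    $buckets = [];
    $allowed = 0;
    foreach ($timestamps as $t) {
        $bucket = intdiv($t, 60); // fixed one-minute blocks
        $buckets[$bucket] = ($buckets[$bucket] ?? 0) + 1;
        if ($buckets[$bucket] <= $limit) {
            $allowed++;
        }
    }
    return $allowed;
}

// Sliding window: count every timestamp in the last 60 seconds.
function slidingWindowAllows(array $timestamps, int $limit): int
{
    $log = [];
    $allowed = 0;
    foreach ($timestamps as $t) {
        $log = array_filter($log, fn($x) => $x > $t - 60); // rolling window
        if (count($log) < $limit) {
            $log[] = $t;
            $allowed++;
        }
    }
    return $allowed;
}

// 100 requests one second before a minute boundary, 100 just after it.
$burst = array_merge(array_fill(0, 100, 119), array_fill(0, 100, 121));

echo fixedWindowAllows($burst, 100), "\n";   // 200: both bursts pass
echo slidingWindowAllows($burst, 100), "\n"; // 100: second burst rejected
```

The fixed window admits all 200 requests because the two bursts fall into different minute blocks; the sliding window rejects the entire second burst because the first 100 timestamps are still inside the rolling 60-second window.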
File-Based Implementation
Most rate limiting tutorials assume Redis or memcached. We don't. JekCMS runs on shared hosting where you get PHP and MySQL, period. Our rate limiter uses the filesystem:
class RateLimiter
{
    private string $storageDir;

    public function __construct(?string $storageDir = null)
    {
        $this->storageDir = $storageDir ?? ROOT_PATH . '/cache/rate';
        if (!is_dir($this->storageDir)) {
            @mkdir($this->storageDir, 0755, true);
        }
    }

    public function check(string $key, int $maxRequests, int $windowSeconds): bool
    {
        $file = $this->storageDir . '/' . md5($key) . '.rate';
        $requests = [];
        if (file_exists($file)) {
            $data = file_get_contents($file);
            $requests = json_decode($data, true) ?: [];
        }

        $now = time();
        $cutoff = $now - $windowSeconds;

        // Remove expired entries
        $requests = array_values(array_filter($requests, fn($t) => $t > $cutoff));

        if (count($requests) >= $maxRequests) {
            return false; // Rate limited
        }

        $requests[] = $now;
        file_put_contents($file, json_encode($requests), LOCK_EX);

        return true; // Allowed
    }
}
The LOCK_EX flag on the write prevents torn files when multiple requests hit simultaneously. Two concurrent requests can still each read the same state and both be allowed, since the read is unlocked — an occasional off-by-one undercount that's harmless for rate limiting — but the stored JSON is never corrupted. Each IP/action combination gets its own file, so there's no contention between different clients.
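The behavior is easy to verify in isolation. Here is the same sliding-window logic condensed into a standalone function for a self-contained demo, showing five attempts pass and the sixth get blocked under a 5-per-15-minutes limit:

```php
<?php
// Same logic as RateLimiter::check(), in function form for a quick demo.
function check(string $dir, string $key, int $max, int $window): bool
{
    $file = $dir . '/' . md5($key) . '.rate';
    $requests = is_file($file)
        ? (json_decode(file_get_contents($file), true) ?: [])
        : [];

    // Drop timestamps that have fallen out of the rolling window.
    $cutoff = time() - $window;
    $requests = array_values(array_filter($requests, fn($t) => $t > $cutoff));

    if (count($requests) >= $max) {
        return false; // rate limited
    }

    $requests[] = time();
    file_put_contents($file, json_encode($requests), LOCK_EX);
    return true; // allowed
}

$dir = sys_get_temp_dir();
$key = 'demo_' . uniqid('', true); // fresh key so reruns start clean

for ($i = 1; $i <= 6; $i++) {
    printf("attempt %d: %s\n", $i, check($dir, $key, 5, 900) ? 'allowed' : 'blocked');
}
// attempts 1-5 print "allowed"; attempt 6 prints "blocked"
```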
Login Throttling
Login attempts get progressively stricter rate limits based on failure count:
// In Auth.php login method
$ip = $_SERVER['REMOTE_ADDR'];
$rateLimiter = new RateLimiter();
// Global: 30 login attempts per IP per hour
if (!$rateLimiter->check("login_ip_{$ip}", 30, 3600)) {
return ['error' => 'Too many login attempts. Try again in 1 hour.'];
}
// Per-account: 5 attempts per username per 15 minutes
$usernameHash = md5(strtolower($username));
if (!$rateLimiter->check("login_user_{$usernameHash}", 5, 900)) {
return ['error' => 'Account temporarily locked. Try again in 15 minutes.'];
}
// After successful login, clear the per-account counter
// After failed login, the counters naturally accumulate
The dual-layer approach means an attacker trying different passwords on one account gets locked out after 5 attempts, while an attacker trying one password across many accounts gets locked out after 30 attempts from the same IP.
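The "clear the per-account counter" step amounts to deleting that key's state file. A sketch of what that could look like — the `clearRateKey` name and function form are our illustration, not JekCMS API; in the actual class it would presumably be a `RateLimiter` method:

```php
<?php
// Hypothetical helper: reset one key's window by deleting its state file.
function clearRateKey(string $storageDir, string $key): void
{
    $file = $storageDir . '/' . md5($key) . '.rate';
    if (is_file($file)) {
        @unlink($file); // @ in case a concurrent request already removed it
    }
}

// After a successful login:
// clearRateKey($storageDir, "login_user_{$usernameHash}");
```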
API Rate Limiting
API endpoints use a separate rate limit configuration:
// API rate limits (per API key, not per IP)
$apiKey = $this->getApiKey();
$limits = [
    'default' => [100, 3600], // 100 requests/hour
    'upload'  => [20, 3600],  // 20 uploads/hour
    'search'  => [60, 60],    // 60 searches/minute
    'webhook' => [200, 3600], // 200 webhooks/hour
];
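Resolving an endpoint's limit and keying the check by API key could look like this (the glue code is our assumption; the `$limits` table is repeated for self-containment):

```php
<?php
// Assumed glue code: look up the endpoint's [max, window] pair, falling
// back to the default bucket for unknown endpoints.
$limits = [
    'default' => [100, 3600],
    'upload'  => [20, 3600],
    'search'  => [60, 60],
    'webhook' => [200, 3600],
];

function resolveLimit(array $limits, string $endpoint): array
{
    return $limits[$endpoint] ?? $limits['default'];
}

[$max, $window] = resolveLimit($limits, 'search');
// Keyed by API key, not IP:
// $rateLimiter->check("api_{$apiKey}_search", $max, $window);
echo "$max/$window\n"; // 60/60
```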
Garbage Collection
Rate limit files accumulate. We run garbage collection with 1% probability on each request:
if (mt_rand(1, 100) === 1) {
$this->cleanup(3600); // Remove files older than 1 hour
}
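The cleanup() method is referenced above but not shown in this post; a plausible implementation, written here as a standalone function, deletes .rate files whose modification time predates the cutoff:

```php
<?php
// Hypothetical cleanup body: remove .rate files last written more than
// $maxAgeSeconds ago, returning how many were deleted.
function cleanupRateFiles(string $storageDir, int $maxAgeSeconds): int
{
    $removed = 0;
    $cutoff = time() - $maxAgeSeconds;
    foreach (glob($storageDir . '/*.rate') ?: [] as $file) {
        $mtime = @filemtime($file);
        if ($mtime !== false && $mtime < $cutoff && @unlink($file)) {
            $removed++;
        }
    }
    return $removed;
}
```

One design note: check() only rewrites a file on allowed requests, so a key that has been fully blocked stops getting fresh mtimes. Matching the max age to the longest window in use (3600 seconds here) ensures a file can only be garbage-collected after its window has expired anyway.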
Results
After deploying rate limiting across all 12 sites, attackers now get fewer than 6 requests through before lockout, down from roughly 2,000 unthrottled brute force attempts per day. Database load during attacks dropped by 97%. Zero legitimate users have been incorrectly rate-limited (the thresholds are generous enough for normal human behavior).