Memcached
High-performance distributed memory caching system designed to speed up web applications by reducing database load.
Updated on January 14, 2026
Memcached is an open-source distributed memory caching system that stores data as key-value pairs in RAM. Originally developed by Brad Fitzpatrick for LiveJournal in 2003, it significantly accelerates dynamic web applications by reducing how often they hit the database. Its architectural simplicity and exceptional performance have made it a standard choice for large-scale distributed caching.
Technical Fundamentals
- Client-server architecture with a simple protocol over TCP/UDP for very fast communication
- Exclusive storage in RAM with LRU (Least Recently Used) eviction algorithm
- Distributed system without replication or persistence, prioritizing speed over durability
- Simple data model: alphanumeric keys (max 250 chars) mapped to binary values (max 1MB by default)
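The key and value limits above can be enforced client-side before a request ever reaches the server. A minimal sketch in TypeScript (the helper names are illustrative, not part of any client library):

```typescript
// Sketch: client-side validation of Memcached's documented limits.
const MAX_KEY_LENGTH = 250;        // keys: at most 250 characters
const MAX_VALUE_BYTES = 1_048_576; // values: 1 MB by default

function isValidKey(key: string): boolean {
  // Keys must be non-empty, within the length limit, and contain
  // no whitespace or control characters
  return (
    key.length > 0 &&
    key.length <= MAX_KEY_LENGTH &&
    !/[\s\x00-\x1f]/.test(key)
  );
}

function isValidValue(value: Buffer): boolean {
  return value.byteLength <= MAX_VALUE_BYTES;
}
```

Note that the 1 MB value ceiling is a default: the server can raise it at startup, so treat these constants as configuration, not hard-coded truths.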
Strategic Benefits
- Exceptional performance: microsecond response times thanks to RAM storage
- Horizontal scalability: simple node addition to increase cache capacity
- Drastic reduction in database load (up to 90% in some cases)
- Minimal memory footprint and very low CPU consumption
- Multi-language support with client libraries for PHP, Python, Java, Ruby, Node.js, etc.
- Easy implementation and maintenance thanks to streamlined architecture
Practical Implementation Example
Here's an example of integrating Memcached into a Node.js application to cache database query results:
import Memcached from 'memcached';
import { DatabaseClient } from './database';

const memcached = new Memcached('localhost:11211', {
  retries: 3,
  timeout: 500,
  poolSize: 10
});

const CACHE_TTL = 3600; // 1 hour

interface User {
  id: string;
  name: string;
  email: string;
}

export class UserService {
  constructor(private db: DatabaseClient) {}

  async getUserById(userId: string): Promise<User | null> {
    const cacheKey = `user:${userId}`;

    // Attempt to retrieve from cache
    return new Promise((resolve) => {
      memcached.get(cacheKey, async (err, cachedData) => {
        if (err) {
          console.error('Memcached error:', err);
          // Fallback to database on cache error
          return resolve(await this.fetchFromDatabase(userId));
        }

        // Cache hit: immediate return
        if (cachedData) {
          console.log(`Cache HIT for user ${userId}`);
          return resolve(JSON.parse(cachedData));
        }

        // Cache miss: fetch from database
        console.log(`Cache MISS for user ${userId}`);
        const user = await this.fetchFromDatabase(userId);

        if (user) {
          // Store in cache for subsequent requests
          memcached.set(cacheKey, JSON.stringify(user), CACHE_TTL, (setErr) => {
            if (setErr) console.error('Cache set error:', setErr);
          });
        }

        resolve(user);
      });
    });
  }

  private async fetchFromDatabase(userId: string): Promise<User | null> {
    return this.db.query('SELECT * FROM users WHERE id = $1', [userId]);
  }

  async invalidateUserCache(userId: string): Promise<void> {
    const cacheKey = `user:${userId}`;
    memcached.del(cacheKey, (err) => {
      if (err) console.error('Cache invalidation error:', err);
    });
  }
}

Implementation Strategy
- Install Memcached on your servers (via apt, yum, Docker) and configure allocated memory based on your needs
- Identify cache-suitable data: expensive query results, intensive calculations, rarely modified data
- Implement a consistent key strategy (namespaces, versioning) to facilitate invalidation
- Define appropriate TTLs (Time To Live) based on data volatility (from seconds to hours)
- Establish a fallback mechanism to data source in case of cache unavailability
- Monitor hit/miss ratios and adjust configuration (memory size, TTL) accordingly
- For distributed environments, implement consistent hashing for key distribution
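The last point, consistent hashing, is what lets a client add or remove a node while remapping only a fraction of the keys. Here is a deliberately small TypeScript sketch (real clients such as libmemcached use ketama-style hashing with more virtual nodes and per-node weighting; the class and node names below are illustrative):

```typescript
import { createHash } from 'node:crypto';

// Sketch of consistent hashing for distributing keys across cache nodes.
class HashRing {
  private ring: { point: number; node: string }[] = [];

  constructor(nodes: string[], private replicas = 100) {
    for (const node of nodes) this.addNode(node);
  }

  private hash(input: string): number {
    // First 4 bytes of MD5 as an unsigned 32-bit position on the ring
    return createHash('md5').update(input).digest().readUInt32BE(0);
  }

  addNode(node: string): void {
    // Each node is placed at many "virtual" points to smooth distribution
    for (let i = 0; i < this.replicas; i++) {
      this.ring.push({ point: this.hash(`${node}#${i}`), node });
    }
    this.ring.sort((a, b) => a.point - b.point);
  }

  getNode(key: string): string {
    const point = this.hash(key);
    // Walk clockwise to the first virtual node at or after the key's point,
    // wrapping around to the start of the ring if necessary
    const entry = this.ring.find((e) => e.point >= point) ?? this.ring[0];
    return entry.node;
  }
}
```

With naive modulo hashing (`hash(key) % nodeCount`), adding one node remaps nearly every key; with a ring like this, only roughly 1/N of keys move, so most of the cache stays warm.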
Expert Tip
Combine Memcached with a multi-tier caching strategy (L1 in-app memory, L2 Memcached, L3 database). Use the cache-aside pattern with proactive invalidation rather than relying solely on TTLs. For critical data requiring persistence, prefer Redis, which offers replication and disk persistence.
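The tiered cache-aside idea above can be sketched as follows. This is an illustrative TypeScript outline, with the distributed tier modeled by a hypothetical `RemoteCache` interface so the example stays self-contained (a real implementation would back it with a Memcached client):

```typescript
// Two-tier cache-aside: L1 is an in-process Map, L2 is a shared
// distributed cache (e.g. Memcached) behind an assumed async interface.
interface RemoteCache {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
  delete(key: string): Promise<void>;
}

class TieredCache {
  private l1 = new Map<string, string>();

  constructor(private l2: RemoteCache, private ttl = 3600) {}

  async getOrLoad(key: string, load: () => Promise<string>): Promise<string> {
    const local = this.l1.get(key);        // L1 hit: no network round-trip
    if (local !== undefined) return local;

    const remote = await this.l2.get(key); // L2 hit: shared across instances
    if (remote !== null) {
      this.l1.set(key, remote);
      return remote;
    }

    const value = await load();            // miss: go to the source of truth
    this.l1.set(key, value);
    await this.l2.set(key, value, this.ttl);
    return value;
  }

  async invalidate(key: string): Promise<void> {
    // Proactive invalidation: drop both tiers instead of waiting for TTLs
    this.l1.delete(key);
    await this.l2.delete(key);
  }
}
```

One caveat this sketch glosses over: the L1 tier is per-process, so after an `invalidate` on one instance, other instances may still serve a stale L1 entry until their own TTL or invalidation fires; keep L1 TTLs short.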
Ecosystem and Complementary Tools
- Memcached Exporter: for Prometheus integration and metrics monitoring
- PHPMemcachedAdmin / MemcacheDManager: graphical interfaces for management and monitoring
- Twemproxy (nutcracker): a proxy developed at Twitter for managing Memcached pools with automatic sharding
- Mcrouter (Facebook): cache router for complex topologies with replication and failover
- Popular client libraries: libmemcached (C), pymemcache (Python), node-memcached (Node.js)
Memcached remains a top choice for applications requiring simple, high-performance, and reliable distributed caching. Its ability to drastically reduce response times and database load translates directly into better user experience and significant infrastructure savings. For simple use cases prioritizing pure speed, Memcached often outperforms more complex alternatives.
