Long Polling
Client-server communication technique maintaining an open HTTP connection to receive near real-time updates without repetitive polling.
Updated on January 26, 2026
Long Polling is a web communication technique that improves on traditional polling by holding an HTTP request open until the server has new data to transmit. Unlike conventional polling, which sends requests at fixed intervals regardless of whether anything has changed, long polling reduces latency and server load by keeping each request open until there is something to deliver. This approach provides an effective middle ground between basic polling and modern real-time technologies like WebSockets.
Fundamentals of Long Polling
- The client initiates an HTTP request that remains open server-side until new data becomes available or a timeout expires
- The server responds as soon as data becomes available, and the client immediately opens a new request to keep listening
- Holding the request open removes the waiting gap inherent to fixed-interval requests while remaining compatible with standard HTTP infrastructure
- A timeout mechanism prevents indefinitely suspended connections and enables network error handling
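To make the request/timeout cycle concrete, here is a minimal sketch of a single long-poll cycle with a client-side abort guard. The /poll endpoint name and the 35-second budget are illustrative assumptions; the complete client further below adds the reconnection loop.

async function pollOnce(baseUrl: string): Promise<unknown | null> {
  const controller = new AbortController();
  // Abort slightly after the server's own timeout so a hung connection is eventually released
  const timer = setTimeout(() => controller.abort(), 35_000);
  try {
    const response = await fetch(`${baseUrl}/poll`, { signal: controller.signal });
    if (!response.ok) return null; // empty or error response: the caller decides how to retry
    return await response.json();  // new data arrived before the timeout expired
  } finally {
    clearTimeout(timer);
  }
}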
Benefits of Long Polling
- Reduced latency compared to classic polling through immediate transmission of available data
- Broad compatibility with proxies, firewalls, and existing HTTP infrastructure, usually without special configuration
- Significant reduction in server load and bandwidth by eliminating the empty request/response cycles of fixed-interval polling
- Simple implementation requiring no complex specialized protocols or libraries
- Ideal fallback for environments where WebSockets are unsupported or blocked by network policies
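In practice, libraries such as Socket.IO (listed below under related tools) handle this fallback automatically. A minimal client-side sketch, where the URL and event name are placeholders and the transports option mirrors the library's default order:

import { io } from 'socket.io-client';

// Starts with HTTP long polling and upgrades to WebSocket when the network allows it
const socket = io('https://example.com', {
  transports: ['polling', 'websocket'],
});

socket.on('notification', (payload) => {
  console.log('Delivered over whichever transport is available:', payload);
});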
Practical Example
Here's a TypeScript implementation of a long polling client for a notification service, together with a matching Express endpoint:
class LongPollingClient {
  private baseUrl: string;
  private isPolling: boolean = false;
  private abortController: AbortController | null = null;

  constructor(baseUrl: string) {
    this.baseUrl = baseUrl;
  }

  async startPolling(onMessage: (data: any) => void): Promise<void> {
    this.isPolling = true;
    while (this.isPolling) {
      try {
        this.abortController = new AbortController();
        const response = await fetch(`${this.baseUrl}/poll`, {
          method: 'GET',
          headers: { 'Accept': 'application/json' },
          signal: this.abortController.signal,
        });

        if (response.ok) {
          const data = await response.json();
          onMessage(data);
          // Immediate reconnection after reception: the loop issues the next request right away
        } else {
          // Non-OK status (e.g. proxy or gateway timeout): wait briefly before reconnecting
          await new Promise(resolve => setTimeout(resolve, 3000));
        }
      } catch (error) {
        if (error instanceof Error && error.name === 'AbortError') {
          console.log('Polling stopped');
          break;
        }
        // Wait before retrying after a network error
        await new Promise(resolve => setTimeout(resolve, 3000));
      }
    }
  }

  stopPolling(): void {
    this.isPolling = false;
    this.abortController?.abort();
  }
}
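// Example usage of the client above; the base URL is a placeholder
const client = new LongPollingClient('https://api.example.com');
client.startPolling((data) => console.log('New notifications:', data));
// Later, e.g. when the view is destroyed:
// client.stopPolling();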
// Server-side (Express/Node.js)
app.get('/poll', async (req, res) => {
  const timeout = 30000; // 30 seconds
  const checkInterval = 1000; // Check every second
  const startTime = Date.now();

  const checkForUpdates = async (): Promise<any | null> => {
    // Business logic to check for new data
    const updates = await db.getNewNotifications(req.user.id);
    return updates.length > 0 ? updates : null;
  };

  const pollInterval = setInterval(async () => {
    const updates = await checkForUpdates();
    if (updates) {
      clearInterval(pollInterval);
      res.json({ success: true, data: updates });
    } else if (Date.now() - startTime > timeout) {
      clearInterval(pollInterval);
      res.json({ success: true, data: [] }); // Timeout without data
    }
  }, checkInterval);

  // Cleanup if client disconnects
  req.on('close', () => clearInterval(pollInterval));
});

Effective Implementation
- Define an appropriate server timeout (20-60 seconds) balancing responsiveness and server load
- Implement robust error handling with retries and exponential backoff (sketched after this list) to manage network interruptions
- Add authentication and session mechanisms to secure long-duration connections
- Configure proxy and load balancer timeouts to support prolonged connections
- Monitor simultaneous connection count to prevent resource exhaustion
- Implement a heartbeat system to detect dead connections and free resources
- Provide a fallback strategy to classic polling in case of infrastructure constraints
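The retry recommendation above can be sketched as follows; the base delay, cap, and doubling factor are illustrative choices rather than prescribed values:

// Compute the wait before the next reconnection attempt (values are illustrative)
function backoffDelay(attempt: number): number {
  const baseDelayMs = 1_000;
  const maxDelayMs = 30_000;
  // Double the delay on each consecutive failure, capped to avoid unbounded waits
  return Math.min(baseDelayMs * 2 ** attempt, maxDelayMs);
}

// In the polling loop: reset `attempt` to 0 after a successful response,
// increment it in the catch branch, then wait for backoffDelay(attempt) before reconnecting.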
Optimization Tip
For high-load applications, combine long polling with a pub/sub architecture (Redis, RabbitMQ) on the server side. The server listens for message broker events and responds immediately to pending requests, which avoids repeatedly polling the database and scales far more comfortably to thousands of simultaneous connections.
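A minimal sketch of that pattern with node-redis v4, assuming a hypothetical per-user channel (notifications:<userId>) and JSON payloads published elsewhere in the system; the channel name, payload shape, and 30-second timeout are assumptions, not fixed conventions:

import express from 'express';
import { createClient } from 'redis';

const app = express();
const subscriber = createClient();
await subscriber.connect(); // dedicated connection in subscriber mode (ESM top-level await)

app.get('/poll', async (req, res) => {
  const channel = `notifications:${req.query.userId}`; // hypothetical per-user channel

  let timer: NodeJS.Timeout | undefined;
  const cleanup = () => {
    if (timer) clearTimeout(timer);
    subscriber.unsubscribe(channel).catch(() => {});
  };

  // Resolve the pending request as soon as the broker publishes an event
  const listener = (message: string) => {
    if (res.headersSent) return;
    cleanup();
    res.json({ success: true, data: JSON.parse(message) });
  };

  // Timeout without data: respond with an empty payload so the client reconnects
  timer = setTimeout(() => {
    cleanup();
    res.json({ success: true, data: [] });
  }, 30_000);

  req.on('close', cleanup);
  await subscriber.subscribe(channel, listener);
});

app.listen(3000);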
Related Tools and Libraries
- Socket.IO: library that falls back to HTTP long polling automatically when a WebSocket connection cannot be established
- Nginx and HAProxy: reverse proxies configurable to efficiently support long-duration connections
- Server-Sent Events (SSE): standardized alternative for unidirectional server-to-client streaming (a brief sketch follows this list)
- Polling.js: lightweight JavaScript library specialized in implementing advanced polling patterns
- Redis Pub/Sub: messaging system for coordinating events between server instances in distributed architecture
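For comparison, the SSE alternative mentioned above needs only the browser's built-in EventSource; a minimal sketch, where /events is a placeholder endpoint:

// The browser keeps a single streaming connection open and reconnects automatically
const source = new EventSource('/events');
source.onmessage = (event) => {
  console.log('Server-pushed message:', event.data);
};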
Long polling remains a pragmatic solution for implementing near real-time features without excessive architectural complexity. Its broad compatibility and implementation simplicity make it a strategic choice for applications that need push-style updates but do not justify a full WebSocket infrastructure, particularly in constrained network environments or for moderate-frequency notification systems.
