202 Accepted
What is HTTP 202 Accepted?
Explain Like I’m 3
You asked someone to do something that takes a really long time. They say ‘Okay, I’ll do it!’ and start working on it. You can go play and check back later to see if it’s done yet!
Example: You ask your parent to bake cookies. They say ‘Okay, I’ll make them!’ and start mixing. The cookies aren’t ready yet, but they promised to make them. You can come back in 20 minutes to check!
Explain Like I’m 5
Sometimes you ask a computer to do something that takes a really long time, like creating a big video or organizing thousands of pictures. The computer says ‘202 Accepted!’ which means ‘I got your request and I started working on it, but it’s not finished yet.’ The computer gives you a special ticket number so you can check back later to see if it’s done. It’s like ordering food at a restaurant - they give you a number and you wait for them to call you when it’s ready!
Example: You ask a website to create a photo album with 1000 pictures. The website says ‘202 Accepted - Your job number is #12345.’ You can check on job #12345 every few minutes to see if the album is ready yet.
Jr. Developer
HTTP 202 Accepted indicates the request has been accepted for processing, but processing hasn’t completed yet. This is used for asynchronous operations where the work takes too long to complete within a typical HTTP request timeout. The server accepts the request, queues it for processing, and immediately returns 202 with a Location header pointing to a status endpoint. The client can poll this endpoint to check progress. Common use cases include video transcoding, batch processing, report generation, large file imports, or any operation taking more than a few seconds. The 202 response is non-committal - accepting the request doesn’t guarantee it will succeed. Always include a way for clients to check status and retrieve results once complete.
Example: A user uploads a video for transcoding via POST /videos. Your API validates the upload, creates a job in a queue, and returns 202 with Location: /jobs/abc123. The client polls GET /jobs/abc123 every 10 seconds. Initially it returns {"status": "processing"}. When done, it returns {"status": "completed", "result_url": "/videos/def456"}.
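The client side of this flow can be sketched as a simple polling loop. This is a minimal sketch: the endpoint paths and response fields (`status`, `result_url`, `error`) are illustrative, matching the example above rather than any real API.

```javascript
// Minimal client-side polling sketch for a 202 workflow.
// Paths and response fields are illustrative, not from a real API.
async function submitAndPoll(videoUrl) {
  const submit = await fetch('/videos', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ videoUrl })
  });

  if (submit.status !== 202) {
    throw new Error(`Unexpected status: ${submit.status}`);
  }

  // The Location header tells us where to poll
  const statusUrl = submit.headers.get('Location');

  while (true) {
    const res = await fetch(statusUrl);
    const job = await res.json();

    if (job.status === 'completed') return job.result_url;
    if (job.status === 'failed') throw new Error(job.error);

    // Wait before polling again (10 seconds, as in the example above)
    await new Promise((resolve) => setTimeout(resolve, 10000));
  }
}
```

In production the fixed 10-second delay would usually be replaced by the server's Retry-After hint, with an upper bound on total polling time.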
Code Example
```javascript
// Express.js async job pattern with 202
const jobQueue = new Map();

app.post('/api/process-video', async (req, res) => {
  const jobId = generateId();

  // Create job and queue it
  jobQueue.set(jobId, { status: 'queued', createdAt: new Date() });

  // Start async processing (don't await)
  processVideoAsync(jobId, req.body.videoUrl);

  // Return 202 immediately with status endpoint
  res.status(202)
    .location(`/api/jobs/${jobId}`)
    .json({
      message: 'Video processing started',
      jobId,
      statusUrl: `/api/jobs/${jobId}`
    });
});

// Status endpoint
app.get('/api/jobs/:id', (req, res) => {
  const job = jobQueue.get(req.params.id);

  if (!job) {
    return res.status(404).json({ error: 'Job not found' });
  }

  res.status(200).json(job);
});
```
Crash Course
202 Accepted, defined in RFC 9110 Section 15.3.3, signals that the request has been accepted for asynchronous processing but hasn’t completed (and may not have started). This implements the Async Request-Reply pattern: the server validates the request, queues the work, and responds immediately with 202, freeing the connection. The response SHOULD include a Location header pointing to a status-monitoring endpoint and MAY include an estimate of completion time via Retry-After header. The pattern typically involves: (1) Client sends request, (2) Server returns 202 with job ID and status URL, (3) Client polls status endpoint (returns 200 while in progress), (4) Server returns final result (200 with data, or 303 See Other redirecting to the result). Unlike 200 OK (synchronous success) or 201 Created (immediate resource creation), 202 is non-committal - the request may still fail during processing. Common for operations exceeding 2-5 second response times: video/image processing, PDF generation, data exports, batch imports, ML model training, email campaigns. Implementation options include job queues (Redis Queue, Bull, Celery), serverless functions with callbacks, or message brokers (RabbitMQ, Kafka).
Example: An analytics API receives POST /api/reports/generate with a date range and 50 filters. Generating the report requires querying millions of database rows and takes 3 minutes. The server creates a job in Redis, returns 202 Accepted with Location: /api/reports/jobs/xyz789 and Retry-After: 60. A background worker processes the job. The client polls every 60 seconds, receiving {"status": "processing", "progress": 45}. After 3 minutes, the status endpoint returns {"status": "completed", "result_url": "/api/reports/xyz789/download"}.
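One detail the pattern leaves open is how the status endpoint signals completion: 200 with the result embedded, or 303 See Other redirecting to it. The decision logic of the 303 variant can be sketched as a small framework-agnostic helper (the job shape and URLs are illustrative); an Express handler would map the returned object onto `res.status(...)` and `res.location(...)`.

```javascript
// Sketch: map a job record to the status-endpoint response described above.
// 200 while running, 303 + Location when done, 404 when unknown/expired.
function statusResponse(job) {
  if (!job) {
    return { status: 404, body: { error: 'Job not found' } };
  }
  if (job.status === 'completed') {
    // 303 See Other: client should GET the result at a different URL
    return { status: 303, headers: { Location: job.resultUrl } };
  }
  if (job.status === 'failed') {
    return { status: 200, body: { status: 'failed', error: job.error } };
  }
  // Still queued or processing
  return { status: 200, body: { status: job.status, progress: job.progress } };
}
```

Keeping this mapping in one pure function makes the in-progress/complete/expired branches easy to unit-test independently of the web framework.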
Code Example
```javascript
// Production async pattern with a Redis-backed queue
const express = require('express');
const Queue = require('bull');
const Redis = require('ioredis');

const app = express();
const redis = new Redis({ host: 'localhost', port: 6379 });
const videoQueue = new Queue('video-processing', {
  redis: { host: 'localhost', port: 6379 }
});

// Submit async job
app.post('/api/videos/transcode', async (req, res) => {
  const { videoUrl, format } = req.body;

  // Add job to queue
  const job = await videoQueue.add(
    { videoUrl, format, userId: req.user.id },
    { attempts: 3, backoff: { type: 'exponential', delay: 2000 } }
  );

  // Store job metadata in Redis for status checks
  await redis.set(
    `job:${job.id}`,
    JSON.stringify({
      status: 'queued',
      progress: 0,
      createdAt: new Date().toISOString()
    }),
    'EX', 86400 // Expire after 24 hours
  );

  // Return 202 with status endpoint
  res.status(202)
    .location(`/api/jobs/${job.id}`)
    .set('Retry-After', '30') // Suggest polling every 30 seconds
    .json({
      message: 'Video transcoding started',
      jobId: job.id,
      statusUrl: `/api/jobs/${job.id}`,
      estimatedTime: '5-10 minutes'
    });
});

// Status endpoint - returns 200 while processing
app.get('/api/jobs/:id', async (req, res) => {
  const jobData = await redis.get(`job:${req.params.id}`);

  if (!jobData) {
    return res.status(404).json({ error: 'Job not found or expired' });
  }

  const job = JSON.parse(jobData);

  // Return 200 with current status
  res.status(200).json(job);
});

// Worker processes jobs
videoQueue.process(async (job) => {
  const { videoUrl, format } = job.data;

  // Update progress
  await job.progress(0);
  await redis.set(`job:${job.id}`, JSON.stringify({ status: 'processing', progress: 0 }));

  // Perform transcoding
  const result = await transcodeVideo(videoUrl, format, (progress) => {
    job.progress(progress);
    redis.set(`job:${job.id}`, JSON.stringify({ status: 'processing', progress }));
  });

  // Mark complete
  await redis.set(
    `job:${job.id}`,
    JSON.stringify({
      status: 'completed',
      progress: 100,
      resultUrl: result.url,
      completedAt: new Date().toISOString()
    }),
    'EX', 86400
  );

  return result;
});
```
Deep Dive
The 202 Accepted status code, specified in RFC 9110 Section 15.3.3, indicates that the request has been accepted for processing but processing has not been completed. The request might or might not eventually be acted upon, as processing might be disallowed when it actually takes place. Unlike 200 OK (synchronous success), 202 is intentionally non-committal - there is no facility in HTTP for re-sending a status code from an asynchronous operation. The 202 response is a promise to process the request, not a guarantee of success. Per the RFC, the representation sent with this response SHOULD describe the request’s current status and point to (or embed) a status monitor that can provide information about when the request will be fulfilled.
Technical Details
The Async Request-Reply pattern addresses the fundamental constraint that HTTP is a synchronous request-response protocol unsuited for long-running operations. Traditional approaches like long-polling (keeping connections open) consume server resources, hit timeout limits (typically 30-120 seconds), and complicate scaling. The 202 pattern decouples request acceptance from processing completion.

Standard headers in 202 responses include: Location (SHOULD be included, pointing to the status endpoint), Retry-After (MAY be included, suggesting a polling interval in seconds or as an HTTP-date), and Content-Location (optional, if the response includes a current status representation). Status endpoint design has multiple approaches: polling (client repeatedly GETs the status URL), webhooks (client provides a callback URL in the initial request; server POSTs a completion notification), WebSockets (bidirectional real-time updates), and Server-Sent Events (SSE; server pushes updates over a persistent connection). Polling is the simplest and most widely supported but least efficient. Webhooks require client reachability and a webhook endpoint implementation. WebSockets and SSE require protocol upgrades and are overkill for infrequent checks.

Status endpoint responses typically evolve as follows: initial: {"status": "queued", "position": 5}; processing: {"status": "processing", "progress": 45, "message": "Transcoding video…"}; completed: {"status": "completed", "result": {…}} or a redirect via 303 See Other; failed: {"status": "failed", "error": "Insufficient storage"}. Some implementations use 200 for in-progress status, 303 See Other to redirect to the completed result, 410 Gone for expired jobs, and 500 for processing failures. The RFC doesn't mandate specific status endpoint semantics.

Idempotency considerations: POST requests returning 202 are typically not idempotent. If the client retries due to network failure, multiple jobs may be created. Solutions include idempotency keys (client sends an Idempotency-Key header; server deduplicates), job identifiers derived from request content hashes, or PUT to job-specific URLs (PUT /jobs/{client-generated-id}).

Job lifecycle management requires: job expiration (status endpoints should return 410 Gone after the TTL), job cancellation endpoints (DELETE /jobs/{id}), job metadata (creation time, estimated completion, user ID), and garbage collection of job results (completed results shouldn't persist indefinitely). Redis with TTL is common for job storage. Database rows with indexed timestamps enable cleanup jobs.

Security implications: job IDs must be unguessable UUIDs or cryptographically random strings to prevent enumeration attacks. Authorization checks are required on status endpoints - users shouldn't access others' jobs. Rate limiting prevents job queue flooding. Resource quotas prevent individual users from monopolizing processing capacity. Job result URLs may need signed tokens if results contain sensitive data.

Performance and scaling…
Code Example
```javascript
// Enterprise-grade async pattern with idempotency
const express = require('express');
const Queue = require('bull');
const Redis = require('ioredis');
const { v4: uuidv4 } = require('uuid');

const app = express();
const redis = new Redis();
const reportQueue = new Queue('report-generation', {
  redis: { host: 'localhost', port: 6379 }
});

// Idempotency middleware
async function ensureIdempotency(req, res, next) {
  const idempotencyKey = req.headers['idempotency-key'];

  if (!idempotencyKey) {
    return res.status(400).json({
      error: 'Idempotency-Key header required for async operations'
    });
  }

  // Check if we've seen this key before
  const existing = await redis.get(`idempotency:${idempotencyKey}`);

  if (existing) {
    const response = JSON.parse(existing);
    return res.status(response.status)
      .location(response.location)
      .json(response.body);
  }

  req.idempotencyKey = idempotencyKey;
  next();
}

app.post('/api/reports/generate',
  ensureIdempotency,
  async (req, res) => {
    const { dateRange, filters, format } = req.body;
    const userId = req.user.id;

    // Validate request
    if (!dateRange || !filters) {
      return res.status(400).json({ error: 'Missing required fields' });
    }

    // Check user quota
    const userJobCount = await redis.get(`user:${userId}:job-count`);
    if (parseInt(userJobCount || '0', 10) >= 5) {
      return res.status(429)
        .set('Retry-After', '3600')
        .json({ error: 'Job quota exceeded. Max 5 concurrent jobs.' });
    }

    // Create a random, unguessable job ID
    const jobId = uuidv4();

    // Add to queue with priority based on user tier
    const job = await reportQueue.add({
      jobId,
      userId,
      dateRange,
      filters,
      format
    }, {
      jobId, // Use our UUID as the Bull job ID
      priority: req.user.tier === 'premium' ? 1 : 10,
      attempts: 3,
      backoff: { type: 'exponential', delay: 5000 },
      removeOnComplete: false, // Keep for status checks
      removeOnFail: false
    });

    // Initialize job status
    const jobStatus = {
      status: 'queued',
      progress: 0,
      createdAt: new Date().toISOString(),
      userId,
      estimatedDuration: '3-5 minutes'
    };

    await redis.setex(
      `job:${jobId}`,
      86400, // 24 hour TTL
      JSON.stringify(jobStatus)
    );

    // Increment user job count
    await redis.incr(`user:${userId}:job-count`);
    await redis.expire(`user:${userId}:job-count`, 3600);

    // Prepare response and cache it under the idempotency key for replay
    const response = {
      status: 202,
      location: `/api/jobs/${jobId}`,
      body: {
        message: 'Report generation started',
        jobId,
        statusUrl: `/api/jobs/${jobId}`,
        estimatedCompletion: new Date(Date.now() + 300000).toISOString()
      }
    };

    await redis.setex(
      `idempotency:${req.idempotencyKey}`,
      86400,
      JSON.stringify(response)
    );

    res.status(202)
      .location(response.location)
      .json(response.body);
  }
);
```
Frequently Asked Questions
What's the difference between 202 Accepted and 200 OK?
200 OK means the request succeeded and processing is complete - the response is the final result. 202 Accepted means the request was accepted but processing hasn't finished yet. Use 202 for async operations (video processing, batch jobs) that take too long for a synchronous response. Always include a Location header pointing to where clients can check status.
How should clients know when an async job is complete?
The 202 response should include a Location header pointing to a status endpoint. Clients poll this endpoint (e.g., every 30 seconds) until the job completes. The status endpoint returns 200 with job status while processing, and either 200 with results or 303 See Other redirecting to results when done. Include Retry-After header in 202 response to suggest polling interval.
Is 202 Accepted a guarantee that the job will succeed?
No! 202 is non-committal - it means the request was accepted and will be attempted, but it might still fail during processing. The job could fail due to validation errors discovered during processing, resource constraints, or external dependencies. Always check the status endpoint for the actual outcome.
Should I use 202 for all long-running operations?
Use 202 when operations exceed 2-5 seconds or your typical request timeout. For operations under 2 seconds, synchronous processing with 200/201 is usually better despite the wait - it's simpler for clients. For operations 2-30 seconds, consider response streaming or HTTP/2. For 30+ seconds, definitely use 202 with async processing.
How do I prevent duplicate jobs when clients retry 202 requests?
Implement idempotency using Idempotency-Key headers. Clients send a unique key with each request. The server checks if it has seen this key before and returns the cached response if so. Store the original 202 response (including job ID) keyed by idempotency key for 24 hours. This prevents duplicate jobs from network retries or client errors.
Common Causes
- Long-running video or image processing operations
- Batch data processing or large file imports
- Report generation with complex queries or large datasets
- Email campaign sending to thousands of recipients
- Machine learning model training or inference on large datasets
- Data export operations generating large files (CSV, PDF)
- Async webhook delivery or third-party API calls
- Background job queue processing (Redis Queue, Bull, Celery)
Implementation Guidance
- Always include Location header pointing to status endpoint in 202 response
- Implement status endpoint returning job progress (status, progress percentage, etc.)
- Include Retry-After header suggesting how often clients should poll
- Use idempotency keys (Idempotency-Key header) to prevent duplicate jobs
- Set job TTL (24-48 hours) and return 410 Gone for expired jobs
- Provide job cancellation endpoint (DELETE /jobs/{id})
- Return 200 from status endpoint while processing, 303 See Other when complete
- Implement authorization checks - users shouldn’t access others’ jobs
- Use UUIDs for job IDs to prevent enumeration attacks
- Add rate limiting to prevent job queue flooding
- Monitor queue depth and return 503 Service Unavailable when overloaded
- Consider webhooks for completion notification instead of polling