
Request Timeout


Last updated May 07, 2024

Table of Contents

  • Language specific H12 articles
  • Long-polling and streaming responses
  • Timeout behavior
  • Debugging request timeouts
  • Request queueing efficiency
  • Uploading large files

If your app exhibits a high volume of H12 errors, see Addressing H12 Errors for immediate remediation steps.

 

Tuning your application’s dynos may help prevent timeouts and generally improve performance. Learn more: Optimizing Dyno Usage and Preventing H12 Errors.

Web requests processed by Heroku are directed to your dynos via a number of Heroku routers, and they are intended to be served by your application quickly. As a best practice, keep your web application’s response time under 500 ms; this frees the application up for more requests and delivers a high-quality experience to your visitors. Occasionally a web request may hang or take an excessive amount of time for your application to process. When this happens, the router terminates the request if it takes longer than 30 seconds to complete.

The countdown for this 30 second timeout begins after the entire request (all request headers and, if applicable, the request body) has been sent from the router to the dyno. The request must then be processed in the dyno by your application, and a response delivered back to the router, within 30 seconds to avoid the timeout.

When a timeout is detected, the router returns a customizable error page to the client and emits an H12 error to your application logs. Although the router has already responded to the client, your application does not know that the request it is processing has timed out, and it continues to work on the request. To avoid this situation, Heroku recommends setting a timeout within your application and keeping the value well under 30 seconds, such as 10 or 15 seconds. Unlike the routing timeout, these timers start when your application begins processing the request. You can read more about this below in Timeout behavior.

The router’s 30-second timeout value is not configurable. If your server requires longer than 30 seconds to complete a given request, we recommend moving that work to a background task or worker process and having the client periodically poll your server to see whether the work has finished, as sketched below. This pattern frees your web processes up to do more work and decreases overall application response times.
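
For illustration, here is a minimal sketch of this enqueue-and-poll pattern in a Rails app, assuming Sidekiq for background processing. The ReportWorker class, the routes, and the status-lookup logic are hypothetical and only show the shape of the pattern:

# app/workers/report_worker.rb (hypothetical worker, assuming Sidekiq)
class ReportWorker
  include Sidekiq::Worker

  def perform(report_id)
    # The long-running work happens here, outside the web request.
    Report.find(report_id).generate!
  end
end

# app/controllers/reports_controller.rb (hypothetical controller)
class ReportsController < ApplicationController
  # POST /reports: enqueue the work and respond immediately.
  def create
    report = Report.create!(status: "pending")
    ReportWorker.perform_async(report.id)
    render json: { id: report.id, status: report.status }, status: :accepted
  end

  # GET /reports/:id: the client polls this endpoint until the job finishes.
  def show
    report = Report.find(params[:id])
    render json: { id: report.id, status: report.status }
  end
end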

Language specific H12 articles

  • H12 - Request Timeout in Ruby (MRI)

Long-polling and streaming responses

Heroku supports HTTP features such as long-polling and streaming responses. An application has an initial 30 second window to respond with a single byte back to the client. However, each byte transmitted thereafter (either received from the client or sent by your application) resets a rolling 55 second window. If no data is sent during the 55 second window, the connection will be terminated.

If you’re sending a streaming response, such as with server-sent events, you’ll need to detect when the client has hung up, and make sure your app server closes the connection promptly. If the server keeps the connection open for 55 seconds without sending any data, you’ll see a request timeout.
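As a rough sketch, a Rails controller using ActionController::Live can send a periodic heartbeat so the rolling 55-second window keeps resetting, and close the stream as soon as a write to a disconnected client fails. The controller name and the 15-second interval are illustrative assumptions:

# app/controllers/events_controller.rb (a sketch, assuming Rails with ActionController::Live)
class EventsController < ApplicationController
  include ActionController::Live

  def index
    response.headers["Content-Type"] = "text/event-stream"
    loop do
      # An SSE comment line counts as data and keeps the 55-second window resetting.
      response.stream.write(": heartbeat\n\n")
      sleep 15
    end
  rescue IOError
    # Raised when the client has hung up; fall through and close the stream promptly.
  ensure
    response.stream.close
  end
end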

Timeout behavior

When a connection is terminated, an error page is issued to the client. The web dyno that was processing the request is left untouched: it continues to process the request, even though it won’t be able to send any response. Subsequent requests may then be routed to the same process, which will be unable to respond (depending on the concurrency behavior of the application’s language or framework), causing further degradation.

Depending on your language, you may be able to set a timeout at the app server level. One example is Ruby’s Unicorn. In Unicorn, you can set a timeout in config/unicorn.rb like this:

timeout 15

The timer begins once Unicorn starts processing the request; if 15 seconds pass, the master process sends a SIGKILL to the worker, and no exception is raised.

In addition to server-level timeouts, you can use request timeout libraries. One example is Ruby’s rack-timeout gem, with its timeout value set lower than the router’s 30-second timeout, such as 15 seconds. Like the server-level timeout, this prevents runaway requests from living on indefinitely in your application dyno; unlike it, rack-timeout raises an error, which makes tracking down the source of the slow request considerably easier.
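
A minimal sketch of wiring this up in a Rails app, assuming rack-timeout’s environment-variable configuration; the 15-second value is only an example, and exact options vary by gem version:

# Gemfile (in Rails, rack-timeout typically inserts its middleware automatically)
gem "rack-timeout"

# Then set the timeout well under the router's 30-second limit, for example:
#   heroku config:set RACK_TIMEOUT_SERVICE_TIMEOUT=15
# Requests exceeding this value raise an error (Rack::Timeout::RequestTimeoutException
# in recent versions), which surfaces the slow endpoint in your logs and error tracker.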

Similar tools exist for other languages, or are built into the runtime. See Preventing H12 Errors for suggestions for other languages.

Debugging request timeouts

One cause of request timeouts is an infinite loop in the code. Test locally (perhaps using a local copy of the production database, extracted using pgbackups) and see if you can replicate the problem and fix the bug.

Another possibility is that you are trying to do some sort of long-running task inside your web process, such as:

  • Sending an email
  • Accessing a remote API (posting to Twitter, querying Flickr, etc.)
  • Web scraping / crawling
  • Rendering an image or PDF
  • Heavy computation (computing a Fibonacci sequence, etc.)
  • Heavy database usage (slow or numerous queries, N+1 queries)

If so, you should move this heavy lifting into a background job which can run asynchronously from your web request. See Worker Dynos, Background Jobs and Queueing for details.

Another class of timeouts occurs when an external service used by your application is unavailable or overloaded. In this case, your web app is highly likely to time out unless you move the work to the background. If you must call the external service during the web request, always plan for the failure case: most languages let you specify a timeout on HTTP requests, for example.
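
For instance, a sketch of setting timeouts on an outbound call with Ruby’s Net::HTTP; the endpoint and the 5-second values are arbitrary examples:

require "net/http"

uri = URI("https://api.example.com/status")
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
http.open_timeout = 5  # seconds to wait for the connection to open
http.read_timeout = 5  # seconds to wait for each read of the response

begin
  response = http.get(uri.request_uri)
rescue Net::OpenTimeout, Net::ReadTimeout
  # Plan for the failure case: degrade gracefully rather than letting the web
  # request run into the router's 30-second limit.
  response = nil
end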

If you are using Unicorn or Gunicorn, the master process can time out and kill web workers. See this Knowledge Base article for more info.

Request queueing efficiency

Request timeouts can also be caused by queueing of TCP connections inside the dyno. Some languages and frameworks process only one connection at a time, but it’s possible for the routers to send more than one request to a dyno concurrently. In this case, requests queue behind the one being processed by the app, causing those subsequent requests to take longer than they would on their own. You can get some visibility into request queue times with the New Relic add-on. This problem can be ameliorated by the following techniques, in order of typical effectiveness:

  • Run more processes per dyno, with correspondingly fewer dynos. If you’re using a framework that can handle only one request at a time (for example, Ruby on Rails), try a tool like Unicorn with more worker processes, as in the sketch after this list. This keeps your app’s total concurrency the same, but dramatically improves request queueing efficiency by sharing each dyno queue among more processes.
  • Make slow requests faster by optimizing app code. To do this effectively, focus on the 99th percentile and maximum service time for your app. This decreases the amount of time requests will spend waiting behind other, slower requests.
  • Run more dynos, thus increasing total concurrency. This slightly decreases the probability that any given request will get stuck behind another one.
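
As a rough sketch of the first technique, a config/unicorn.rb that scales worker processes per dyno via an environment variable; WEB_CONCURRENCY and the values shown are assumptions to tune for your app’s memory footprint and workload:

# config/unicorn.rb (a sketch; tune the values for your app)
worker_processes Integer(ENV.fetch("WEB_CONCURRENCY", 3))
timeout 15          # keep well under the router's 30-second limit
preload_app true

before_fork do |server, worker|
  # Disconnect shared resources before forking so each worker reconnects cleanly.
  defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection
end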

Uploading large files

Many web applications allow users to upload files. When these files are large, or the user is on a slow internet connection, the upload can take longer than 30 seconds. For web applications that block on the request while the upload completes, this can result in an H12 request timeout. For this reason, we recommend uploading large files directly to S3.
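
A sketch of one common approach: generate a presigned S3 URL with the aws-sdk-s3 gem so the browser can PUT the file to S3 directly instead of streaming it through a dyno. The bucket name, object key, and expiry below are placeholders:

require "aws-sdk-s3"
require "securerandom"

# In this sketch, credentials and region come from the environment
# (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION).
s3 = Aws::S3::Resource.new
object = s3.bucket("my-upload-bucket").object("uploads/#{SecureRandom.uuid}")

# The client uploads straight to this URL, so the transfer never ties up a web dyno.
upload_url = object.presigned_url(:put, expires_in: 900)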
