Concurrency and threshold throttling for Sidekiq.
Add this line to your application’s Gemfile:
gem "sidekiq-throttled"
And then execute:
$ bundle
Or install it yourself as:
$ gem install sidekiq-throttled
Add somewhere in your app’s bootstrap (e.g. config/initializers/sidekiq.rb if you are using Rails):
require "sidekiq/throttled"
Once you’ve done that, you can include Sidekiq::Throttled::Job in your job classes and configure throttling:
class MyJob
  include Sidekiq::Job
  include Sidekiq::Throttled::Job

  sidekiq_options queue: :my_queue

  sidekiq_throttle(
    # Allow maximum 10 concurrent jobs of this class at a time.
    concurrency: { limit: 10 },
    # Allow maximum 1K jobs being processed within one hour window.
    threshold: { limit: 1_000, period: 1.hour }
  )

  def perform
    # ...
  end
end
Tip: Sidekiq::Throttled::Job is aliased as Sidekiq::Throttled::Worker, so if you’re using the Sidekiq::Worker naming convention, you can use the alias for consistency:
class MyWorker
  include Sidekiq::Worker
  include Sidekiq::Throttled::Worker

  # ...
end
Queue cooldown behavior can be tuned via Sidekiq::Throttled.configure:

Sidekiq::Throttled.configure do |config|
  # Period in seconds to exclude a queue from polling in case it returned
  # `config.cooldown_threshold` throttled jobs in a row. Set this value to
  # `nil` to disable the cooldown manager completely.
  # Default: 2.0
  config.cooldown_period = 2.0

  # Exclude a queue from polling after it returned the given number of
  # throttled jobs in a row.
  # Default: 1 (cooldown after the first throttled job)
  config.cooldown_threshold = 1
end
Sidekiq::Throttled relies on the following bundled middleware:

- Sidekiq::Throttled::Middlewares::Server

The middleware is automatically injected when you require sidekiq/throttled. In rare cases, when this causes an issue, you can change the middleware order manually:
Sidekiq.configure_server do |config|
  # ...

  config.server_middleware do |chain|
    chain.prepend(Sidekiq::Throttled::Middlewares::Server)
  end
end
You can specify an observer that will be called on throttling. To do so, pass an :observer option with a callable object:
class MyJob
  include Sidekiq::Job
  include Sidekiq::Throttled::Job

  MY_OBSERVER = lambda do |strategy, *args|
    # do something
  end

  sidekiq_options queue: :my_queue

  sidekiq_throttle(
    concurrency: { limit: 10 },
    threshold: { limit: 100, period: 1.hour },
    observer: MY_OBSERVER
  )

  def perform(*args)
    # ...
  end
end
The observer will receive strategy, *args arguments, where strategy is a Symbol (:concurrency or :threshold), and *args are the arguments that were passed to the job.
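For example, here is a minimal sketch of an observer that simply logs throttling events; the LoggedJob class name, the LOG_THROTTLING constant, and the log message are illustrative, not part of the gem:

class LoggedJob
  include Sidekiq::Job
  include Sidekiq::Throttled::Job

  # Illustrative observer: report which strategy throttled the job and with what arguments.
  LOG_THROTTLING = lambda do |strategy, *args|
    # strategy is :concurrency or :threshold; args are the job's own arguments.
    Sidekiq.logger.warn("LoggedJob throttled by #{strategy}, args: #{args.inspect}")
  end

  sidekiq_throttle(
    concurrency: { limit: 10 },
    observer: LOG_THROTTLING
  )

  def perform(*args)
    # ...
  end
end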
You can throttle jobs dynamically with the :key_suffix option:
class MyJob
  include Sidekiq::Job
  include Sidekiq::Throttled::Job

  sidekiq_options queue: :my_queue

  sidekiq_throttle(
    # Allow maximum 10 concurrent jobs per user at a time.
    concurrency: { limit: 10, key_suffix: ->(user_id) { user_id } }
  )

  def perform(user_id)
    # ...
  end
end
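Since the :key_suffix proc receives the job’s arguments, each distinct user_id gets its own concurrency counter. A quick illustration of what that means at enqueue time (the ids below are made up):

# Jobs for different users are throttled independently of each other:
MyJob.perform_async(1)  # counts against the concurrency bucket for user 1
MyJob.perform_async(2)  # counts against a separate bucket for user 2
MyJob.perform_async(1)  # enqueues fine; at most 10 jobs for user 1 run at once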
You can also make limits and periods dynamic by supplying a proc for these values. The proc will be evaluated at the time the job is fetched, and it will receive the same arguments that are passed to the job.
class MyJob
  include Sidekiq::Job
  include Sidekiq::Throttled::Job

  sidekiq_options queue: :my_queue

  sidekiq_throttle(
    # Allow maximum 1000 concurrent jobs of this class at a time for VIPs and 10 for all other users.
    concurrency: {
      limit: ->(user_id) { User.vip?(user_id) ? 1_000 : 10 },
      key_suffix: ->(user_id) { User.vip?(user_id) ? "vip" : "std" }
    },
    # Allow 1000 jobs/hour to be processed for VIPs and 10/day for all others.
    threshold: {
      limit: ->(user_id) { User.vip?(user_id) ? 1_000 : 10 },
      period: ->(user_id) { User.vip?(user_id) ? 1.hour : 1.day },
      key_suffix: ->(user_id) { User.vip?(user_id) ? "vip" : "std" }
    }
  )

  def perform(user_id)
    # ...
  end
end
You can also use several different keys to throttle one worker.
class MyJob
  include Sidekiq::Job
  include Sidekiq::Throttled::Job

  sidekiq_options queue: :my_queue

  sidekiq_throttle(
    # Allow maximum 10 concurrent jobs per project at a time and maximum 2 jobs per user.
    concurrency: [
      { limit: 10, key_suffix: ->(project_id, user_id) { project_id } },
      { limit: 2,  key_suffix: ->(project_id, user_id) { user_id } }
    ]
    # For :threshold it works the same.
  )

  def perform(project_id, user_id)
    # ...
  end
end
Important: Don’t forget to specify :key_suffix and make it return different values if you are using dynamic limit/period options. Otherwise, jobs with different limits or periods will share the same throttling key, and the resulting behavior will be unpredictable.
Concurrency throttling is based on distributed locks. Those locks have a default time to live (TTL) of 15 minutes. If your job takes more than 15 minutes to finish, the lock will be released and you might end up with more jobs running concurrently than you expect.
This is done to avoid deadlocks: when, for any reason (e.g. the Sidekiq process was OOM-killed), the cleanup middleware wasn’t executed and locks were not released.
If your job takes longer than 15 minutes to complete, you can tune the concurrency lock TTL to fit your needs:
# Set concurrency strategy lock TTL to 1 hour.
sidekiq_throttle(concurrency: { limit: 20, ttl: 1.hour.to_i })
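For context, here is a sketch of how that option sits inside a job class; LongRunningJob is an illustrative name and the limit is arbitrary:

class LongRunningJob
  include Sidekiq::Job
  include Sidekiq::Throttled::Job

  # The lock TTL should comfortably exceed the job's worst-case runtime,
  # otherwise the lock expires while the job is still running.
  sidekiq_throttle(concurrency: { limit: 20, ttl: 1.hour.to_i })

  def perform
    # ...
  end
end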
This library aims to support and is tested against the following Ruby versions:
- Ruby 2.7.x
- Ruby 3.0.x
- Ruby 3.1.x
- Ruby 3.2.x
- Ruby 3.3.x
If something doesn’t work on one of these versions, it’s a bug.
This library may inadvertently work (or seem to work) on other Ruby versions, however support will only be provided for the versions listed above.
If you would like this library to support another Ruby version or implementation, you may volunteer to be a maintainer. Being a maintainer entails making sure all tests run and pass on that implementation. When something breaks on your implementation, you will be responsible for providing patches in a timely fashion. If critical issues for a particular implementation exist at the time of a major release, support for that Ruby version may be dropped.
This library aims to support and work with the following Sidekiq versions:
- Sidekiq 6.5.x
- Sidekiq 7.0.x
- Sidekiq 7.1.x
- Sidekiq 7.2.x
And the following Sidekiq Pro versions:
- Sidekiq Pro 7.0.x
- Sidekiq Pro 7.1.x
- Sidekiq Pro 7.2.x
-
Fork sidekiq-throttled on GitHub
-
Make your changes
-
Ensure all tests pass (
bundle exec rake
) -
Send a pull request
-
If we like them we’ll merge them
-
If we’ve accepted a patch, feel free to ask for commit access!
The initial work on this project was done to address the needs of SensorTower.