
Iago, A Load Generator


Iago Quick Start

Please join [email protected] for updates and to ask questions.

If you are already familiar with the Iago Load Generation tool, follow these steps to get started; otherwise, start with the Iago Overview and perhaps Iago Philosophy, also known as "Why Iago?". For questions, please contact [email protected].

Iago Prerequisites

  1. Download and unpack the Iago distribution. We support Scala 2.10 and recommend you clone the latest master branch.

  2. Read the documentation.

Preparing Your Test

  1. Identify your transaction source; see Transaction Requirements and Sources of Transactions for more information.
  2. In Scala, extend the Iago server's RecordProcessor or ThriftRecordProcessor class, or in Java, extend LoadTest or ThriftLoadTest; see Implementing Your Test for more information.
  3. Create a launcher.scala file in your Iago config directory with the appropriate settings; see Configuring Your Test for more information.

Executing Your Test

Launch Iago from the distribution with java -jar iago_jar -f your_config. This creates the Iago processes for you and configures them to use your transactions. To kill a running job, add -k to your launch parameters: java -jar iago_jar -f your_config -k.

If you launch your Iago job on your local machine and an old Iago job is still running, it probably won't get far: it will attempt to re-use a port and fail. You want to kill the running job, as described above.

If you build via Maven, then you might wonder "How do I launch Iago 'from the distribution'?" The steps are:

% mvn package -DskipTests
% mkdir tmp; cd tmp
% unzip ../target/iago-version-package-dist.zip
% java -jar iago-version.jar -f config/my_config.scala

Don't assume that you can skip the package/unzip steps if you're just changing a config file. You need to re-package and unzip again.

If you are using Iago as a library, for example, in the case of testing over the Thrift protocol or building more complex tests with HTTP or Memcached/Kestrel, you should instead add a task to your project's configuration. See Configuring Your Test for more information.

Top

Iago Overview

Iago is a load generation tool that replays production or synthetic traffic against a given target. Among other things, it differs from other load generation tools in that it attempts to hold the transaction rate constant. For example, if you want to test your service at 100K requests per minute, Iago attempts to achieve that rate.

Because Iago replays traffic, you must specify its source. You use a transaction log as the source of traffic; each transaction in the log generates a request that your service processes.

Replaying transactions at a fixed rate enables you to study the behavior of your service under an anticipated load. Iago also allows you to identify bottlenecks or other issues that may not be easily observable in a production environment in which your maximum anticipated load occurs only rarely.

Top

Supported Services

Iago can generate service requests that travel the net in different ways and are in different formats. The code that does this is in a Transport, a class that extends ParrotTransport. Iago comes with several Transports already defined. When you configure your test, you will need to set some parameters; to understand which of those parameters are used and how they are used, you probably want to look at the source code for your test's Transport class.

Your service is typically an HTTP or Thrift service written in either Scala or Java.

Top

Transaction Requirements

For replay, Iago recommends you scrub your logs to only include requests which meet the following requirements:

  • Idempotent, meaning that re-execution of a transaction any number of times yields the same result as the initial execution.
  • Commutative, meaning that transaction order is not important. Although transactions are initiated in replay order, Iago's internal behavior may change the actual execution order to guarantee the transaction rate. Also, transactions that implement Future responses are executed asynchronously. You can achieve ordering, if required, by using Iago as a library and initiating new requests in response to previous ones. Examples of this are available.

Top

Sources of Transactions

Transactions typically come from logs, such as the following:

  • Web server logs capture HTTP transactions.
  • Proxy server logs can capture transactions coming through a server. You can place a proxy server in your stack to capture either HTTP or Thrift transactions.
  • Network sniffers can capture transactions as they come across a physical wire. You can program the sniffer to create a log of transactions you identify for capture.

In some cases, transactions do not exist. For example, transactions for your service may not yet exist because they are part of a new service, or you are obligated not to use transactions that contain sensitive information. In such cases, you can provide synthetic transactions, which are transactions that you create to model the operating environment for your service. When you create synthetic transactions, you must statistically distribute your transactions to match the distribution you expect when your service goes live.
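
Where no suitable log exists, you can generate one offline. The following is a minimal sketch only: the paths, weights, and file name are hypothetical, and the line format must be whatever your RecordProcessor expects.

import java.io.PrintWriter
import scala.util.Random

// Writes a synthetic transaction log whose request paths appear in rough
// proportion to an assumed production mix. Paths and weights are placeholders.
object SyntheticLogGenerator {
  val weightedPaths = Seq(
    ("/search?q=scala", 70),
    ("/users/self", 25),
    ("/admin/stats", 5)
  )

  def main(args: Array[String]): Unit = {
    // Expand the weights so a uniform random pick reproduces the distribution.
    val expanded = weightedPaths.flatMap { case (path, weight) => Seq.fill(weight)(path) }
    val out = new PrintWriter("synthetic.log")
    try {
      // 10,000 lines keeps the log comfortably above the ~1,000-line minimum
      // suggested for use with the reuseFile parameter.
      (1 to 10000).foreach { _ => out.println(expanded(Random.nextInt(expanded.size))) }
    } finally {
      out.close()
    }
  }
}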

Top

Iago Architecture Overview

Iago consists of feeders and servers. A feeder reads your transaction source. A server formats and delivers requests to the service you want to test. The feeder contains a Poller object, which is responsible for guaranteeing cachedSeconds worth of transactions in the pipeline to the Iago servers.

Metrics are available in logs and in graphs as described in Metrics.

The Iago servers generate requests to your service. Together, all Iago servers generate the specified number of requests per minute. An Iago server's RecordProcessor object executes your service and maps each transaction to the format required by your service.

The feeder polls its servers to see how much data they need to maintain cachedSeconds worth of data. That is how we can have many feeders that need not coordinate with each other.

Ensuring that every last message is processed matters when the record processor writes traffic summaries, especially for small data sets. The parrot feeder shuts down when it runs out of time, runs out of data, or both. When the feeder runs out of data, it

  • makes sure that all the data in the parrot feeder's internal queues is sent to the parrot servers
  • makes sure that all the data held in the parrot servers' caches is sent
  • waits until a response arrives for every pending message or until the reads time out

When the parrot feeder runs out of time (the duration configuration), the data remaining in the feeder's internal queues is discarded; otherwise the same process as above occurs.

Top

Implementing Your Test

The following sections show examples of implementing your test in both Scala and Java. See Code Annotations for the Examples for information about either example.

Top

Scala Example

To implement a load test in Scala, you must extend the Iago server's RecordProcessor class to specify how to map transactions into the requests that the Iago server delivers to your service. The following example shows a RecordProcessor subclass that implements a load test on an EchoService HTTP service:

package com.twitter.example

import org.apache.thrift.protocol.TBinaryProtocol

import com.twitter.parrot.processor.RecordProcessor                                     // 1
import com.twitter.parrot.thrift.ParrotJob                                              // 2
import com.twitter.parrot.server.{ParrotRequest,ParrotService}                          // 3
import com.twitter.logging.Logger
import org.jboss.netty.handler.codec.http.HttpResponse

import thrift.EchoService

class EchoLoadTest(parrotService: ParrotService[ParrotRequest, HttpResponse]) extends RecordProcessor {
  val client = new EchoService.ServiceToClient(service, new TBinaryProtocol.Factory())  // 4
  val log = Logger.get(getClass)

  def processLines(job: ParrotJob, lines: Seq[String]) {                                // 5
    lines map { line =>
      client.echo(line) respond { rep =>
        if (rep == "hello") {
          client.echo("IT'S TALKING TO US")                                             // 6
        }
        log.info("response: " + rep)                                                    // 7
      }
    }
  }
}

Top

Scala Thrift Example

To implement a Thrift load test in Scala, you must extend the Iago server's Thrift RecordProcessor class to specify how to map transactions into the requests that the Iago server delivers to your service. The following example shows a ThriftRecordProcessor subclass that implements a load test on an EchoService Thrift service:

package com.twitter.example

import org.apache.thrift.protocol.TBinaryProtocol

import com.twitter.parrot.processor.ThriftRecordProcessor                               // 1
import com.twitter.parrot.thrift.ParrotJob                                              // 2
import com.twitter.parrot.server.{ParrotRequest,ParrotService}                          // 3
import com.twitter.logging.Logger

import thrift.EchoService

class EchoLoadTest(parrotService: ParrotService[ParrotRequest, Array[Byte]]) extends ThriftRecordProcessor(parrotService) {
  val client = new EchoService.ServiceToClient(service, new TBinaryProtocol.Factory())  // 4
  val log = Logger.get(getClass)

  def processLines(job: ParrotJob, lines: Seq[String]) {                                // 5
    lines map { line =>
      client.echo(line) respond { rep =>
        if (rep == "hello") {
          client.echo("IT'S TALKING TO US")                                             // 6
        }
        log.info("response: " + rep)                                                    // 7
      }
    }
  }
}

Top

Java Example

To implement a load test in Java, you must extend the Iago server's LoadTest class to specify how to map transactions into the requests that the Iago server delivers to your service. The LoadTest class provides Java-friendly type mappings for the underlying Scala internals. The following example shows a LoadTest subclass that implements a load test on an EchoService HTTP service:

package com.twitter.jexample;

import com.twitter.example.thrift.EchoService;
import com.twitter.parrot.processor.LoadTest;                                           // 1
import com.twitter.parrot.thrift.ParrotJob;                                             // 2
import com.twitter.parrot.server.ParrotRequest;                                         // 3

import com.twitter.parrot.server.ParrotService;                                         // 3
import com.twitter.util.Future;
import com.twitter.util.FutureEventListener;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.jboss.netty.handler.codec.http.HttpResponse;

import java.util.List;

public class EchoLoadTest extends LoadTest {
  EchoService.ServiceToClient client = null;

  public EchoLoadTest(ParrotService<ParrotRequest, HttpResponse> parrotService) {
    super(parrotService);
    client = new EchoService.ServiceToClient(service(), new TBinaryProtocol.Factory()); // 4
  }

  public void processLines(ParrotJob job, List<String> lines) {                         // 5
    for(String line: lines) {
      Future<String> future = client.echo(line);
      future.addEventListener(new FutureEventListener<String>() {
        public void onSuccess(String msg) {
          System.out.println("response: " + msg);
        }

      public void onFailure(Throwable cause) {
        System.out.println("Error: " + cause);
      }
     });
    }
  }
}

Top

Java Thrift Example

To implement a Thrift load test in Java, you must extend the Iago server's ThriftLoadTest class to specify how to map transactions into the requests that the Iago server delivers to your service. The ThriftLoadTest class provides Java-friendly type mappings for the underlying Scala internals. The following example shows a ThriftLoadTest subclass that implements a load test on an EchoService Thrift service:

package com.twitter.jexample;

import com.twitter.example.thrift.EchoService;
import com.twitter.parrot.processor.ThriftLoadTest;                                     // 1
import com.twitter.parrot.thrift.ParrotJob;                                             // 2
import com.twitter.parrot.server.ParrotRequest;                                         // 3
import com.twitter.parrot.server.ParrotService;                                         // 3
import com.twitter.util.Future;
import com.twitter.util.FutureEventListener;
import org.apache.thrift.protocol.TBinaryProtocol;

import java.util.List;

public class EchoLoadTest extends ThriftLoadTest {
  EchoService.ServiceToClient client = null;

  public EchoLoadTest(ParrotService<ParrotRequest, byte[]> parrotService) {
    super(parrotService);
    client = new EchoService.ServiceToClient(service(), new TBinaryProtocol.Factory()); // 4
  }

  public void processLines(ParrotJob job, List<String> lines) {                         // 5
    for(String line: lines) {
      Future<String> future = client.echo(line);
      future.addEventListener(new FutureEventListener<String>() {
        public void onSuccess(String msg) {
          System.out.println("response: " + msg);
        }

      public void onFailure(Throwable cause) {
        System.out.println("Error: " + cause);
      }
     });
    }
  }
}

Top

Code Annotations for the Examples

You define your Iago subclass to execute your service and map transactions to requests for your service:

  1. Import com.twitter.parrot.processor.RecordProcessor (Scala) or LoadTest (Java), whose instance will be executed by an Iago server.
  2. Import com.twitter.parrot.thrift.ParrotJob, which contains the Iago server class.
  3. Import com.twitter.parrot.server.ParrotService and com.twitter.parrot.server.ParrotRequest
  4. Create an instance of your service to be placed under test. Your service is a client of the Iago service.
  5. Define a processLines method to format the request and execute your service.
  6. Optionally, you can initiate a new request based on the response to a previous one.
  7. Optionally, do something with the response. In this example, the response is logged.

Top

Configuring Your Test

To configure your test, create a launcher.scala file that creates a ParrotLauncherConfig instance with the configuration parameters you want to set.

There are several parameters to set. A good one to figure out early is transport; that will in turn help you to find out what, e.g., responseType you need.

The following example shows parameters for testing a Thrift service:

import com.twitter.parrot.config.ParrotLauncherConfig

new ParrotLauncherConfig {
  distDir = "."
  jobName = "load_echo"
  port = 8080
  victims = "localhost"
  log = "logs/yesterday.log"
  requestRate = 1
  numInstances = 1
  duration = 5
  timeUnit = "MINUTES" // affects duration; does not affect requestRate

  imports = "import com.twitter.example.EchoLoadTest"
  responseType = "Array[Byte]"
  transport = "ThriftTransportFactory(this)"
  loadTest = "new EchoLoadTest(service.get)"
}

Note: For a sample configuration file, see config/launcher.scala within the Iago distribution.
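
The example above targets a Thrift service. For an HTTP victim, the transport and responseType defaults (FinagleTransport and HttpResponse) are usually what you want, so a minimal local configuration might look like the following sketch; the victim, log path, and rates are placeholders, and the parameters used are described in the table below.

import com.twitter.parrot.config.ParrotLauncherConfig

new ParrotLauncherConfig {
  localMode = true              // run on this machine rather than via mesos
  jobName = "load_http_example"
  port = 80
  victims = "localhost"         // placeholder victim host
  log = "logs/replay.log"       // placeholder log of HTTP request lines
  requestRate = 10              // requests per second, per server instance
  duration = 5
  timeUnit = "MINUTES"
  // transport and responseType are left at their defaults
  // (FinagleTransport / HttpResponse) for an HTTP service.
}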

You can specify any of the following parameters:

Each parameter below is listed with a description, an example, and its default value (or "Required" if it has none).
createDistribution

You can use this field to supply your own distribution of request arrival times instead of a constant flow. You will need to create a subclass of RequestDistribution and import it (a sketch appears after this table).

Example:

createDistribution = """createDistribution = {
  rate => new MyDistribution(rate)
}"""

""
customLogSource

A string with Scala code that will be put into the Feeder config. You can use this to get Iago to read in compressed files. Iago can read LZO compressed files using its built-in LzoFileLogSource.

Example:

customLogSource = """
  if(inputLog.endsWith(".lzo")) {
    logSource = Some(new com.twitter.parrot.feeder.LzoFileLogSource(inputLog))
  }"""
    

""
distDir

The subdirectory of your project you're running from, if any.

Example: distDir = "target"

"."
doConfirm

If set to false, you will not be asked to confirm the run.

Example: doConfirm = false

true
duration

An integer value that specifies the time to run the test in timeUnit units.

Example: duration = 5

feederXmx

Defines the feeder heap size in megabytes. Values higher than 4 GB are not recommended (they will cause scheduling issues).

Example: feederXmx = 2048

1744
header

A string value that specifies the HTTP Host header.

Example: header = "api.yourdomain.com"

""
hostConnectionCoresize

Number of connections per host that will be kept open, once established, until they hit max idle time or max lifetime

Example: hostConnectionCoresize = 1

1
hostConnectionIdleTimeInMs

For any connection > coreSize, maximum amount of time, in milliseconds, between requests we allow before shutting down the connection

Example: hostConnectionIdleTimeInMs = 50000

60000
hostConnectionLimit

Limit on the number of connections per host

Example: hostConnectionLimit = 4

Integer.MAX_VALUE
hostConnectionMaxIdleTimeInMs

The maximum time in milliseconds that any connection (including within core size) can stay idle before shutdown

Example: hostConnectionMaxIdleTimeInMs = 500000

300000
hostConnectionMaxLifeTimeInMs

The maximum time in milliseconds that a connection will be kept open

Example: hostConnectionMaxLifeTimeInMs = 10000

Integer.MAX_VALUE
jobName

A string value that specifies the name of your test. This is used in two places:

  1. when the parrot feeder is configured to find its servers using zookeeper, and/or
  2. when using mesos, where it is part of the generated job names. A job name of "foo" results in mesos job sharding groups "parrot_server_foo" and "parrot_feeder_foo".

Example: jobName = "testing_tasty_new_feature"

Required
localMode

Should Iago attempt to run locally or to use the cluster via mesos?

Example: localMode = true

false
log

A string value that specifies the complete path to the log you want Iago to replay. If localMode=true then the log should be on your local file system. The log should have at least 1000 items or you should change the reuseFile parameter.

Example: log = "logs/yesterday.log"

If localMode=false (the default), then the parrot launcher will copy your log file when it attempts to make a package for mesos. You can, and should, avoid this by storing your log file in HDFS.

Example: log = "hdfs://hadoop-example.com/yesterday.log"

Required
loggers

A List of LoggerFactories; allows you to define the type and level of logging you want

Example:

import com.twitter.logging.LoggerFactory
import com.twitter.logging.config._

new ParrotLauncherConfig { ... loggers = new LoggerFactory( level = Level.DEBUG, handlers = new ConsoleHandlerConfig() ) }

Nil
maxRequests

An integer value that specifies the total number of requests to submit to your service.

Example: maxRequests = 10000

Integer.MAX_VALUE
requestRate

An integer value that specifies the number of requests per second to submit to your service.

Example: requestRate = 10

Note: if using multiple server instances, requestRate is per-instance, not aggregate.

1
reuseFile

A boolean value that specifies whether to reuse the input log once it has been read through, rather than stopping the test. Setting this value to true causes Iago to start back at the beginning of the log when it exhausts the contents. If this is true, your log file should be at least 1,000 lines long.

Example: reuseFile = false

true
scheme

A string value that specifies the scheme portion of a URI.

Example: scheme = "http"

http
serverXmx

Defines the server heap size in megabytes. Values higher than 8 GB are not recommended (they will cause scheduling issues).

Example: serverXmx = 5000

4000
requestTimeoutInMs

(From the Finagle Documentation) The request timeout is the time given to a *single* request (if there are retries, they each get a fresh request timeout). The timeout is applied only after a connection has been acquired. That is: it is applied to the interval between the dispatch of the request and the receipt of the response.

Note that parrot servers will not shut down until every response from every victim has come in. If you've modified your record processor to write test summaries this can be an issue.

Example: requestTimeoutInMs = 3000 // if the victim doesn't respond in three seconds, stop waiting

30000 // 30 seconds
reuseConnections

A boolean value that specifies whether connections to your service's hosts can be reused. A value of true enables reuse. Setting this to false greatly increases your use of ephemeral ports and can result in port exhaustion, causing you to achieve a lower rate than requested

This is only implemented for FinagleTransport.

Example: reuseConnections = false

true
thriftClientId

If you are making Thrift requests, your clientId

Example: thriftClientId = "projectname.staging"

""
timeUnit

A string value that specifies the time unit of the duration. It contains one of the following values:

  • "MINUTES"
  • "HOURS"
  • "DAYS"

Example: timeUnit = "MINUTES"

traceLevel

A com.twitter.logging.Level subclass. Controls the level of "debug logging" for servers and feeders.

Example:

traceLevel = com.twitter.logging.Level.TRACE

Level.INFO
verboseCmd

A boolean value that specifies the level of feedback from Iago. A value of true specifies maximum feedback.

Example: verboseCmd = true

false
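
As referenced in the createDistribution entry above, here is a sketch of a custom distribution. It assumes RequestDistribution's single method is timeToNextArrival(): Duration, as implemented by the bundled PoissonProcess; verify the trait's signature against your Iago version before relying on it.

import com.twitter.parrot.util.RequestDistribution
import com.twitter.util.Duration

// A sketch: emit requests at an exactly uniform interval for the given rate
// (requests per second). The method name is an assumption -- check the trait.
class MyDistribution(rate: Int) extends RequestDistribution {
  private[this] val interval = Duration.fromNanoseconds(1000000000L / rate)
  def timeToNextArrival(): Duration = interval
}

Wire it in with the createDistribution example shown above, together with an imports entry that brings MyDistribution into scope.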

Specifying Victims

The point of Iago is to load-test a service. Iago calls the services under test "victims".

Victims may be:

  1. a single host:port pair
  2. a list of host:port pairs
  3. a zookeeper serverset

Note that ParrotUdpTransport can only handle a single host:port pair. The other transports that come with Iago, being Finagle based, do not have this limitation.

victims

A list of host:port pairs:

  victims = "example.com:80 example2.com:80"

A zookeeper server set:

  victims = "/some/zookeeper/path"
Required
port

An integer value that specifies the port on which to deliver requests to the victims.

The port is used for two things: to provide a port if none was specified in victims, and to provide a port for the Host header when using a FinagleTransport.

Example: port = 9000

Required
victimClusterType

When victimClusterType is "static", we set victims and port. victims can be a single host name, a host:port pair, or a list of host:port pairs separated with commas or spaces.

When victimClusterType is "sdzk" (which stands for "service discovery zookeeper") the victim is considered to be a server set, referenced with victims, victimZk, and victimZkPort.

Default: "static"
victimZk

The host name of the zookeeper where your serverset is registered

Default: "sdzookeeper.local.twitter.com"

victimZkPort

The port of the zookeeper where your serverset is registered

Default: 2181
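
Putting the victim parameters together, a serverset-based configuration might look like the following sketch; the zookeeper host and serverset path are placeholders.

import com.twitter.parrot.config.ParrotLauncherConfig

new ParrotLauncherConfig {
  // ... log, rate, duration, and processor parameters as in the earlier examples ...
  victimClusterType = "sdzk"            // service discovery via zookeeper
  victims = "/some/zookeeper/path"      // the serverset path
  port = 80                             // required; see the port entry above
  victimZk = "zookeeper.example.com"    // placeholder zookeeper host
  victimZkPort = 2181
}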

Extension Point Parameters

Alternative Use: You can specify the following extension point parameters to configure projects in which Iago is used as both a feeder and server. The Iago feeder provides the log lines to your project, which uses these log lines to form requests that the Iago server then handles:

imports

Imports from this project to Iago

Example: If ProjectX includes Iago as a dependency, you would specify:
import org.jboss.netty.handler.codec.http.HttpResponse
import com.twitter.projectX.util.ProcessorClass

import org.jboss.netty.handler.codec.http.HttpResponse
import com.twitter.parrot.util.LoadTestStub
requestType

The request type of requests from Iago.

Examples:

  • ParrotRequest for most services (including HTTP and Thrift)

ParrotRequest
responseType

The response type of responses from Iago.

Examples:

  • HttpResponse for an HTTP service
  • Array[Byte] for a Thrift service

HttpResponse
transport

The kind of transport to the server, which matches the responseType you want.

Example: transport = "ThriftTransportFactory(this)"

The Thrift Transport will send your request and give back Future[Array[Byte]].

FinagleTransport
loadTest

Your processor for the Iago feeder's lines, which converts the lines into requests and sends them to the Iago server.

Example: new LoadTestStub(service.get)

new LoadTestStub(service.get)

Top

Sending Large Messages

By default, the parrot feeder sends a thousand messages at a time to each connected parrot server until that server has twenty seconds' worth of data. This is a good strategy when messages are small (less than a kilobyte), but with large messages the parrot server will run out of memory. With an average message size of 100 KB, the feeder maintains an output queue of roughly 100 MB for each connected parrot server; at a request rate of 2000, the parrot server's cache alone needs 2000 * 20 * 100 KB = 4 GB (at least). The following parameters help with large messages (a combined sketch follows the table):

batchSize

How many messages the parrot feeder sends at one time to the parrot server. For large messages, setting this to 1 is recommended.

Default: 1000
cachedSeconds

How many seconds worth of data the parrot server will attempt to cache. Setting this to 1 for large messages is recommended. The consequence is that, if the parrot feeder garbage-collects, there will be a corresponding pause in traffic to your service unless cachedSeconds is set to a value larger than a typical feeder gc. This author has never observed a feeder gc exceeding a fraction of a second.

Default: 20
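
As a combined sketch of the settings recommended above for large messages (the other parameters are whatever your test already uses):

new ParrotLauncherConfig {
  // ... transport, victims, log, and rate parameters as usual ...
  batchSize = 1       // send one message at a time from the feeder to each server
  cachedSeconds = 1   // keep only about one second of requests cached in the server
}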

Top

Weighted Requests

Some applications must make bulk requests to their service. In other words, a single meta-request in the input log may result in several requests being satisfied by the victim. A weight field was added to ParrotRequest so that the RecordProcessor can set it and the RequestConsumer can use it to control the send rate. For example, a request for 17 messages would be given a weight of 17, which causes the RequestConsumer to sample the request distribution 17 times, yielding a consistent distribution of load on the victim.

Top

Metrics

Iago uses Ostrich to record its metrics. Iago is configured so that a simple graph server is available as long as the parrot server is running. If you are using localMode=true, then the default place for this is

  http://localhost:9994/graph/

One metric of particular interest is

  http://localhost:9994/graph/?g=metric:client/request_latency_ms

Request latency is the time from when a request is queued for sending until its response is received. See the Finagle User Guide for more about the individual metrics.

Other metrics of interest:

Statistic Description
connection_duration: duration of a connection, from established to closed
connection_received_bytes: bytes received per connection
connection_requests: number of connection requests your client made; e.g., a pool of 1 connection that was closed 3 times yields connection_requests = 4 (even though connections = 1)
connection_sent_bytes: bytes sent per connection
connections: the current number of connections between client and server
handletime_us: time to process the response from the server (i.e., execute all the chained map/flatMap callbacks)
pending: number of pending requests (i.e., requests without responses)
request_concurrency: the current number of requests being processed by finagle
request_latency_ms: the elapsed time between sending a request and receiving its response
request_queue_size: number of requests waiting to be handled by the server

Raggiana

Raggiana is a simple standalone Finagle stats viewer.

You can use Raggiana to view the stats log, parrot-server-stats.log, generated by Iago.

You can clone it from

https://github.com/twitter/raggiana

or just use it directly at

http://twitter.github.io/raggiana

Top

Tracing

Parrot works with Zipkin, a distributed tracing system.

Top

What Files Are Created?

The Iago launcher creates the following files

config/target/parrot-feeder.scala
config/target/parrot-server.scala
scripts/common.sh
scripts/parrot-feeder.sh
scripts/parrot-server.sh

The Iago feeder creates

parrot-feeder.log
gc-feeder.log

The Iago server creates

parrot-server.log
parrot-server-stats.log
gc-server.log 

The logs are rotated by size. Each individual log can be up to 100 megabytes before being rotated. There are 6 rotations maintained.

The stats log, parrot-server-stats.log, is a minute-by-minute dump of all the statistics (or Metrics) maintained by the Iago server. Each entry is for the time period since the previous one. That is, all entries in parrot-server-stats.log need to be accumulated to match the final values reported by http://localhost:9994/stats.txt.

Top

Using Iago as a Library

While Iago provides everything you need to run a large distributed load test against your API with just a small log processor, it also exposes a library of classes for log processing, traffic replay, and load generation. These can be used in your Iago configuration or incorporated into your application as a library.

parrot/server:

  • ParrotRequest: Parrot's internal representation of a request
  • ParrotTransport (FinagleTransport, KestrelTransport, MemcacheTransport, ParrotUdpTransport, ThriftTransport): Interchangeable transport layer for requests to be sent. Parrot contains transport implementations for the following protocols: HTTP (FinagleTransport), Kestrel, Memcache, raw UDP and Thrift.
  • RequestConsumer: Queues ParrotRequests and sends them out on a ParrotTransport at a rate determined by RequestDistribution
  • RequestQueue: A wrapper/control layer for RequestConsumer
  • ParrotService (ParrotThriftService): Enqueues ParrotRequests to a RequestQueue. ParrotThriftService implements finagle's Service interface for use with finagle thrift clients.

parrot/util:

  • RequestDistribution: A function specifying the time to arrival of the next request, used to control the request rate. Instances include
    • UniformDistribution: Sends requests at a uniform rate
    • PoissonProcess: Sends requests at an approximately constant rate that varies randomly according to a Poisson process. This is the default.
    • SinusoidalPoissonProcess: Like PoissonProcess but varying the rate sinusoidally.
    • SlowStartPoissonProcess: Same as PoissonProcess but starting with a gradual ramp from initial rate to final rate. It will then hold steady at the final rate until time runs out.
    • InfiniteRampPoissonProcess: a two staged ramped distribution. Ideal for services that need a warm-up period before ramping up. The rate continues to increase until time runs out.

You may also find the LogSource and RecordProcessor interfaces discussed earlier useful.

Examples:

// Make 1000 HTTP requests at a roughly constant rate of 10/sec

// construct the transport and queue
val client =
  ClientBuilder()
    .codec(Http())
    .hosts("twitter.com:80")
    .build()
val transport = new FinagleTransport(FinagleService(client))
val consumer = new RequestConsumer(() => new PoissonProcess(10), transport)
// add 1000 requests to the queue
for (i <- (1 to 1000)) {
  consumer.offer(new ParrotRequest(uri = Uri("/jack/status/20", Nil)))
}
// start sending
transport.start()
consumer.start()
// wait for the consumer to exhaust the queue
while(consumer.size > 0) {
  Thread.sleep(100)
}
// shutdown
consumer.shutdown()
transport.close()
// Call a thrift service with a sinusoidally varying rate

// Configure cluster for the service using zookeeper
val zk = "zookeeper.example.com"
val zkPort = 2181
val path = "my/env/role/service"
val zookeeperClient = new ZooKeeperClient(Amount.of(1, Time.SECONDS),
  Seq(InetSocketAddress.createUnresolved(zk, zkPort)).asJava)
val serverSet = new ServerSetImpl(zookeeperClient, path)
val cluster = new ZookeeperServerSetCluster(serverSet)

// create transport and queue
val client =
  ClientBuilder()
    .codec(ThriftClientFramedCodec())
    .cluster(cluster)
    .build()
val transport = new ThriftTransport(client)
val createDistribution = () => new SinusoidalPoissonProcess(10, 20, 60.seconds)
val consumer = new RequestConsumer(createDistribution, transport)
val queue = new RequestQueue(consumer, transport)
// create the service and processor
val service = transport.createService(queue)
val processor = new EchoLoadTest(service)
// start sending
transport.start()
consumer.start()
// Fill the queue from a logfile
val source = new LogSourceImpl("some_file.txt")
while (source.hasNext) {
  processor.processLines(Seq(source.next))
}
// wait for the consumer to exhaust the queue
while(consumer.size > 0) {
  Thread.sleep(100)
}
// shutdown
consumer.shutdown()
transport.close()

Top

ChangeLog

2013-06-25 release 0.6.7

  • graceful shutdown for small log sources
  • dropped vestigial parser config
  • weighted parrot requests
  • supporting large requests (BlobStore): new configurations cachedSeconds & mesosRamInMb
  • launcher changes: configurable proxy, create config directory if needed, and handle errors better (don't hang)
  • serversets as victims
  • make local logs work with non-local distribution directories
  • kestrel transport transactional get support
  • check generated config files before launch
  • LzoFileLogSource for iago
  • Thrift over TLS
  • traceLevel config

Top

Contributing to Iago

Iago is open source, hosted on GitHub. If you have a contribution to make, please fork the repo and submit a pull request.
