SEDA: An Architecture for Well-Conditioned, Scalable Internet Services

WC 735 / RT 4min

SEDA: staged event-driven architecture

SEDA is intended to support massive concurrency demands and simplify the construction of well-conditioned services. In SEDA, applications consist of a network of event-driven stages connected by explicit queues. This architecture allows services to be well-conditioned to load, preventing resources from being overcommitted when demand exceeds service capacity.

SEDA combines aspects of threads and event-based programming models to manage the concurrency, I/O, scheduling, and resource management needs of Internet services.

Applications are constructed as a network of stages, each with an associated incoming event queue. Each stage represents a robust building block that may be individually conditioned to load by thresholding or filtering its event queue.


A service is well-conditioned if it behaves like a simple pipeline, where the depth of the pipeline is determined by the path through the network and the processing stages within the service itself. As the offered load increases, the delivered throughput increases proportionally until the pipeline is full and the throughput saturates; additional load should not degrade throughput.

Thread-based concurrency

The operating system overlaps computation and I/O by transparently switching among threads. Although relatively easy to program, the overheads associated with threading — including cache and TLB misses, scheduling overhead, and lock contention — can lead to serious performance degradation when the number of threads is large.
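A minimal sketch of the thread-per-task model described above, using Python threads for illustration (handle_request and the request strings are hypothetical, and the sleep stands in for blocking I/O):

```python
import threading
import time

results = []
lock = threading.Lock()

def handle_request(request):
    # Simulate blocking I/O; the OS switches to another thread meanwhile.
    time.sleep(0.01)
    return request.upper()

def worker(request):
    result = handle_request(request)
    with lock:
        results.append(result)

# One thread per incoming task: simple to program, but per-thread
# overheads (stacks, context switches, contention) grow with load.
threads = [threading.Thread(target=worker, args=(f"req-{i}",)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```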

Bounded thread pools

To avoid the overuse of threads, a number of systems adopt a coarse form of load conditioning that serves to bound the size of the thread pool associated with a service. When the number of requests in the server exceeds some fixed limit, additional connections are not accepted. This approach is used by Web servers such as Apache, IIS, and Netscape Enterprise Server. By limiting the number of concurrent threads, the server can avoid throughput degradation, and the overall performance is more robust than the unconstrained thread-per-task model.
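The bounding described above can be sketched as a fixed pool of worker threads fed by a bounded queue; submissions beyond the bound are rejected rather than spawning more threads. This is an illustrative sketch, not the mechanism of any particular server:

```python
import queue
import threading

class BoundedPool:
    """Fixed-size thread pool; tasks beyond the queue bound are rejected."""

    def __init__(self, num_threads, queue_bound):
        self.tasks = queue.Queue(maxsize=queue_bound)
        self.workers = [threading.Thread(target=self._run, daemon=True)
                        for _ in range(num_threads)]
        for w in self.workers:
            w.start()

    def submit(self, task):
        try:
            self.tasks.put_nowait(task)  # never block or grow unboundedly
            return True
        except queue.Full:
            return False                 # coarse load conditioning: refuse work

    def _run(self):
        while True:
            task = self.tasks.get()
            task()
            self.tasks.task_done()
```

Once the queue is full, submit returns False, which corresponds to the server refusing additional connections when the fixed limit is reached.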

Event-driven concurrency

The server consists of a small number of threads (typically one per CPU) that loop continuously, processing events of different types from a queue. Events may be generated by the operating system or internally by the application, and generally correspond to network and disk I/O readiness and completion notifications, timers, or other application-specific events.
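The event loop can be sketched as a single thread dispatching typed events to handlers. The event types and payloads here are hypothetical; a real server would receive I/O readiness and completion events from the OS:

```python
import queue

processed = []

# Hypothetical handlers keyed by event type.
handlers = {
    "read_ready": lambda data: processed.append(("read", data)),
    "timer":      lambda data: processed.append(("timer", data)),
}

events = queue.Queue()
events.put(("read_ready", "socket-42"))
events.put(("timer", "t1"))
events.put(("shutdown", None))

# Single-threaded loop: pull an event, dispatch its handler, repeat.
# Handlers must not block, or the whole server stalls.
while True:
    kind, data = events.get()
    if kind == "shutdown":
        break
    handlers[kind](data)
```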

Certain I/O operations (in this case, filesystem access) do not have asynchronous interfaces; the main server process handles these events by dispatching them to helper processes via IPC. The helper processes issue (blocking) I/O requests and return an event to the main process upon completion.

An important limitation of this model is that it assumes event-handling threads do not block; for this reason, nonblocking I/O mechanisms must be employed.

Structured event queues

A common aspect of these designs is structuring an event-driven application using a set of event queues to improve code modularity and simplify application design.

Staged event-driven architecture

Support massive concurrency: To avoid performance degradation due to threads, SEDA makes use of event-driven execution wherever possible. This also requires that the system provide efficient and scalable I/O primitives.

Simplify the construction of well-conditioned services: To reduce the complexity of building Internet services, SEDA shields application programmers from many of the details of scheduling and resource management. The design also supports modular construction of these applications, and provides support for debugging and performance profiling.

Enable introspection: Applications should be able to analyze the request stream to adapt behavior to changing load conditions. For example, the system should be able to prioritize and filter requests to support degraded service under heavy load.

Support self-tuning resource management: Rather than mandate a priori knowledge of application resource requirements and client load characteristics, the system should adjust its resource management parameters dynamically to meet performance targets. For example, the number of threads allocated to a stage can be determined automatically based on perceived concurrency demands, rather than hard-coded by the programmer or administrator.
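A toy controller in the spirit of the self-tuning goal above: grow a stage's thread pool when its queue backs up, and shrink it when threads sit idle. All names, thresholds, and limits here are illustrative assumptions, not the paper's actual controller parameters:

```python
def tune_pool_size(current_threads, queue_length,
                   threshold=64, max_threads=16, idle=False):
    """Return the new thread count for a stage.

    Add a thread when the incoming queue exceeds a threshold (perceived
    concurrency demand), remove one when threads are idle. This replaces
    a hard-coded pool size chosen by the programmer or administrator.
    """
    if queue_length > threshold and current_threads < max_threads:
        return current_threads + 1
    if idle and current_threads > 1:
        return current_threads - 1
    return current_threads
```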

Building blocks

The fundamental unit of processing within SEDA is the stage: a self-contained application component consisting of an event handler, an incoming event queue, and a thread pool.

The core logic for each stage is provided by the event handler, the input to which is a batch of multiple events. Event handlers do not have direct control over queue operations or threads.

Event queues in SEDA may be finite: an enqueue operation may fail if the queue wishes to reject new entries, for example because it has reached a threshold.
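The stage described above can be sketched as follows. The class and method names are illustrative, not the paper's API, and a real thread pool is replaced by a single dispatch method for brevity; what the sketch preserves is that the handler receives a batch of events and that the finite queue may refuse an enqueue:

```python
import queue

class Stage:
    """Sketch of a SEDA-style stage: an event handler plus a finite
    incoming event queue (thread pool omitted for brevity)."""

    def __init__(self, handler, capacity):
        self.handler = handler
        self.events = queue.Queue(maxsize=capacity)

    def enqueue(self, event):
        try:
            self.events.put_nowait(event)
            return True          # accepted
        except queue.Full:
            return False         # finite queue: enqueue may fail under load

    def dispatch(self, batch_size):
        # The event handler is invoked with a batch of events, not one
        # at a time; it has no direct control over the queue or threads.
        batch = []
        while len(batch) < batch_size and not self.events.empty():
            batch.append(self.events.get_nowait())
        if batch:
            self.handler(batch)
```

A rejected enqueue is the hook for load conditioning: the upstream stage sees the failure and can drop, filter, or degrade the request rather than overcommit the downstream stage.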