Asynchronous Programming. Cooperative Multitasking

This is the third post in a series on asynchronous programming. The whole series explores a single question: What is asynchrony? When I first started digging into this, I thought I had a solid grasp of it. Turns out, I didn't know the first thing about asynchrony. So, let’s dive in together!

In the previous post, we talked about providing concurrent processing for multiple requests, discussing preemptive multitasking via threads and processes as solutions. However, there's one more option — cooperative multitasking (aka non-preemptive multitasking).

Operating systems are amazing: they come with schedulers and planners, they manage processes and threads, and they switch between them. But they still don't know how our application works. So the OS may pause a thread or process at an arbitrary moment (probably not the best one), save its context, and switch to the next task; this is preemptive multitasking. We, the developers, know our applications better. We know that we have brief periods of CPU-bound work but mostly wait on network I/O, and that knowledge lets us manage switching between tasks more effectively.

From the OS's perspective, cooperative multitasking is just one execution thread, but within this thread, the application decides when to switch between the tasks it is processing. Once data arrives, the application reads it, parses the request, sends it to the database, and then, instead of waiting for a response, starts processing another request. This is called "cooperation" because all tasks are structured to work together: a single control thread (a cooperative scheduler) starts tasks and lets them voluntarily yield control.
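
To make the yielding concrete, here is a minimal sketch of a cooperative round-robin scheduler built on Python generators; the task and run names are invented for illustration and not taken from any framework:

import collections

def task(name, steps):
    # A task is a generator: each `yield` is the point where it
    # voluntarily gives control back to the scheduler.
    for step in range(steps):
        print(f"{name}: step {step}")
        yield

def run(tasks):
    # A tiny round-robin cooperative scheduler.
    queue = collections.deque(tasks)
    while queue:
        current = queue.popleft()
        try:
            next(current)          # resume the task until its next yield
            queue.append(current)  # it yielded, so schedule it again
        except StopIteration:
            pass                   # the task finished; drop it

run([task("a", 2), task("b", 3)])

Running this interleaves the two tasks' steps in a single thread, with every switch happening exactly where a task chose to yield.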

This approach simplifies multitasking because, from the developer's point of view, only one task runs at any given moment. Although multithreaded applications on a single-processor system also execute in an interleaved fashion, a developer working with threads must still write careful, thread-safe code, because the same program may later run on multiple cores. In a single-threaded asynchronous system, the interleaving stays predictable even on a multi-core machine.

The difficulty in writing such programs is that the burden of switching between tasks, maintaining each task's context, and organizing every task as a sequence of smaller steps executed with interruptions falls on the developer's shoulders. On the other hand, we win in efficiency, because there are no unnecessary context switches, such as the processor context switches the OS performs when moving between threads and processes.

There are two ways to implement cooperative multitasking — callbacks and cooperative threads.

Callbacks

All asynchronous operations involve deferring an action to a later time, allowing the execution thread to continue and deliver the result when it is ready. To retrieve that result, we register a callback function: if the request or operation succeeds, one function is called; if it fails, another. Callbacks are explicit: the developer writes the program knowing that the callback function may be invoked at some later, unpredictable moment.

This approach is widely used because it's straightforward and supported by most modern languages. However, managing complex callbacks can become difficult, leading to what's known as "callback hell" in deeply nested or sequential callbacks. This complexity led to the development of Futures and Promises, which offer a more readable, structured API for handling asynchronous tasks.
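
As a sketch of how that nesting looks in practice, here are some hypothetical callback-style functions (fetch_user, fetch_orders, and fetch_details are invented for illustration; for simplicity they invoke their callbacks immediately, where a real implementation would defer them until the I/O completes):

# Hypothetical asynchronous functions that deliver results via callbacks.
def fetch_user(user_id, on_success, on_error):
    on_success({"id": user_id, "name": "alice"})

def fetch_orders(user, on_success, on_error):
    on_success(["order-1", "order-2"])

def fetch_details(orders, on_success, on_error):
    on_success({order: "shipped" for order in orders})

def log_error(exc):
    print("failed:", exc)

# Sequential logic drifts to the right, because each step can only run
# inside the previous step's success callback: "callback hell".
fetch_user(42,
    on_success=lambda user: fetch_orders(user,
        on_success=lambda orders: fetch_details(orders,
            on_success=lambda details: print(details),
            on_error=log_error),
        on_error=log_error),
    on_error=log_error)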

Pros and Cons:

  • Callbacks avoid many common issues that arise in multithreaded programs.
  • They can swallow exceptions, making error-handling trickier to manage.
  • Managing multiple or nested callbacks can be challenging and hard to debug.

Futures and Promises

Futures and Promises provide a more structured and intuitive way to handle asynchronous operations than callbacks. A Future (also called a Promise in some languages) represents a placeholder for a result that isn’t available yet but will be at some point in the future. Futures allow us to write code in a more linear, readable way by attaching handlers (or chaining methods) to the Future, which will be triggered once the result is available.

For example, rather than passing callback functions directly into an asynchronous operation, we might use a Future object, which represents the eventual result of that operation. With a Future, the developer can register success and failure handlers (similar to callbacks) that execute when the Future resolves.

Why Futures and Promises?

  • Improved Readability: Futures make asynchronous code look more like synchronous code by enabling chaining or async/await patterns.
  • Error Handling: Unlike callbacks, Futures offer a more consistent way to handle errors, usually by catching exceptions directly in the chain.
  • No Callback Hell: Futures avoid nested callbacks, making code less prone to complexity and easier to debug.

Here's a simple example of a Future in Python using the standard library's concurrent.futures module:

import concurrent.futures
import time

def slow_operation():
    # Simulate a slow, blocking operation (e.g., network I/O)
    time.sleep(2)
    return "Result is ready!"

with concurrent.futures.ThreadPoolExecutor() as executor:
    # Submit a task and get a Future
    future = executor.submit(slow_operation)
    
    # We can do other work here while `slow_operation` runs
    
    # Block and get the result once it’s ready
    result = future.result()  # This will wait until the result is available
    print(result)

In this example, the returned Future acts as a proxy for the result of slow_operation. We can work on other tasks while it runs, and later call future.result() to retrieve the value once it's complete.
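
Instead of blocking on future.result(), we can also register a handler that fires when the Future resolves, which is closer to the "success and failure handlers" style described above. A sketch using the standard add_done_callback method:

import concurrent.futures
import time

def slow_operation():
    time.sleep(2)
    return "Result is ready!"

def on_done(future):
    # Called once the Future resolves. future.result() re-raises any
    # exception the task raised, so failures surface here as well.
    try:
        print("success:", future.result())
    except Exception as exc:
        print("failure:", exc)

with concurrent.futures.ThreadPoolExecutor() as executor:
    future = executor.submit(slow_operation)
    future.add_done_callback(on_done)  # returns immediately, no blocking
    # Leaving the `with` block waits for the task to finish,
    # and on_done fires when it does.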

Pros and Cons:

  • Futures and Promises provide a structured, cleaner API than callbacks.
  • They make error-handling straightforward with built-in mechanisms for exception handling.
  • However, they still introduce complexity and require careful management of chaining and error-catching.

Cooperative Threads

A more implicit way to handle asynchronous tasks is to write code as if no multitasking were happening at all. This approach relies on constructs like user threads (also known as green threads) or coroutines.

With green threads, blocking operations appear to provide immediate results as if they are non-blocking. However, behind the scenes, a framework or language runtime is managing these operations in a non-blocking way by switching control to another thread—not in the OS-level sense, but in a logical user-level thread. These threads operate entirely within the user-space process rather than being managed by the OS.
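
As a sketch of the green-thread style, here is what this looks like with gevent, a third-party library (assumed installed): gevent.sleep looks like an ordinary blocking call, but it actually yields to the gevent hub, which resumes the other greenlet in the meantime:

import gevent

def task(name):
    print(f"{name}: start")
    # Looks like a blocking sleep, but yields control to the gevent hub,
    # which switches to the other greenlet while this one waits.
    gevent.sleep(1)
    print(f"{name}: done")

# Spawn two green threads and wait for both to finish.
gevent.joinall([gevent.spawn(task, "a"), gevent.spawn(task, "b")])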

Coroutines are similar in spirit but involve more explicit control: developers insert checkpoints in the code where the function can pause and resume. A coroutine can call other coroutines and later return to its original state. Unlike threads, which are generally preemptively multitasked, coroutines are cooperatively multitasked, meaning the function only pauses when it reaches a designated point, avoiding the need for synchronization primitives (like mutexes or semaphores) and eliminating the need for OS support.
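
Python's native coroutines make those checkpoints explicit: every await is a point where the coroutine may pause and let the event loop run something else. A minimal asyncio sketch:

import asyncio

async def handle(name, delay):
    print(f"{name}: started")
    # `await` is the explicit checkpoint: this coroutine pauses here,
    # and the event loop is free to run other coroutines meanwhile.
    await asyncio.sleep(delay)
    print(f"{name}: finished")

async def main():
    # Both coroutines run concurrently in a single OS thread.
    await asyncio.gather(handle("a", 2), handle("b", 1))

asyncio.run(main())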

Pros and Cons:

  • Cooperative threads run in user space, avoiding OS-level thread management.
  • They have a synchronous feel, making them easier to write and reason about.
  • While they avoid OS-level context-switching, they inherit some of the challenges of threading (e.g., shared resources) without the heavy CPU overhead.

Comparing Callbacks, Futures, and Cooperative Threads

  • Explicit vs. Implicit: Callbacks and Futures require explicit handling of asynchronous actions, while cooperative threads and coroutines handle them more implicitly.
  • Control and Complexity: Callbacks can lead to nested complexity, while Futures provide a cleaner, chainable API. Cooperative threads or coroutines allow for sequential code structures.
  • Performance and Overhead: Cooperative threads avoid OS-level context-switching, making them lightweight, while Futures provide a balance between control and readability.

Together, callbacks, green threads, and coroutines illustrate different ways to manage asynchronous tasks, with each approach balancing control, complexity, and performance in unique ways.

Reactor/Proactor Patterns

In cooperative multitasking, there is always a processing engine responsible for all I/O management, called the Reactor after the pattern of the same name. The reactor interface operates on a straightforward principle: "Give me a set of your sockets and callbacks, and when a socket is ready for I/O, I'll call the appropriate callback functions." A reactor's job is to react to I/O events, delegating processing to specific handlers or workers. The handlers perform the I/O work, so nothing needs to block on I/O as long as handlers or callbacks are properly registered.

The purpose of the reactor design pattern is to avoid the common challenge of spawning a new thread for each message, request, or connection. It receives events from multiple sources and distributes them sequentially to the appropriate event handlers, allowing the application to process multiple events while keeping the simplicity of single-threaded processing. Typically, it relies on non-blocking synchronous I/O (see multiplexing in I/O models).
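
A reactor can be sketched with Python's standard selectors module: each socket is registered together with a callback, and a single loop dispatches whichever socket becomes ready. The echo-server shape below is illustrative (the port number is arbitrary):

import selectors
import socket

selector = selectors.DefaultSelector()

def accept(server_sock):
    conn, _ = server_sock.accept()
    conn.setblocking(False)
    # Register the new connection with its own callback.
    selector.register(conn, selectors.EVENT_READ, read)

def read(conn):
    data = conn.recv(1024)
    if data:
        conn.send(data)  # echo the data back
    else:
        selector.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("localhost", 8000))
server.listen()
server.setblocking(False)
selector.register(server, selectors.EVENT_READ, accept)

# The reactor loop: wait until some socket is ready for I/O,
# then invoke the callback that was registered for it.
while True:
    for key, _ in selector.select():
        callback = key.data
        callback(key.fileobj)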

An even more interesting approach is the Proactor pattern, which is an asynchronous version of the Reactor pattern. It usually employs true asynchronous I/O operations provided by the OS (see AIO in I/O models).

However, there are limitations to the Proactor approach:

  1. Restricted Operations: This pattern may limit the types of operations you can perform, depending on the platform. Reactors, in contrast, can handle any event type.
  2. Buffer Constraints: Each asynchronous operation requires buffer space for the duration of the I/O operation, which can persist indefinitely, posing a potential resource limitation.

These two paradigms form the foundation of popular asynchronous frameworks like the nginx HTTP server, Node.js (via libuv), Twisted Python, and Python’s asyncio library.

Best Approach

No single approach is perfect. A combination often works best, as cooperative multitasking generally excels when connections are long-lasting. For example, a WebSocket connection can persist for a long time. If a single process or thread is dedicated to each WebSocket, the server’s capacity for simultaneous connections is significantly reduced. With cooperative multitasking, however, you can maintain a large number of simultaneous connections, each performing minimal work.

One limitation of cooperative multitasking is that it can only use a single CPU core. Although running multiple application instances on the same machine can help, it’s not always convenient and can have drawbacks. A more effective strategy is to use multiple processes, each employing a reactor or proactor, and run cooperative multitasking within each process.

This hybrid approach allows you to utilize all available processor cores, efficiently handling each connection with minimal resource allocation per connection, while maximizing system performance.
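
A minimal sketch of that hybrid in Python: spawn one worker process per CPU core and run an asyncio event loop inside each (handle_connection here just simulates serving a connection):

import asyncio
import multiprocessing
import os

async def handle_connection(i):
    # Stand-in for serving one long-lived connection: the await is where
    # the event loop switches to other connections.
    await asyncio.sleep(0.1)
    return i

async def worker():
    # One event loop per process, multiplexing many connections
    # cooperatively on a single core.
    results = await asyncio.gather(*(handle_connection(i) for i in range(1000)))
    print(f"pid {os.getpid()} handled {len(results)} connections")

def run_worker():
    asyncio.run(worker())

if __name__ == "__main__":
    # One process per core; cooperative multitasking inside each.
    processes = [multiprocessing.Process(target=run_worker)
                 for _ in range(multiprocessing.cpu_count())]
    for p in processes:
        p.start()
    for p in processes:
        p.join()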

Conclusion

The difficulty in writing applications that use cooperative multitasking is that the burden of switching between tasks and maintaining their contexts falls on the shoulders of poor developers. On the other hand, this approach buys us efficiency by avoiding unnecessary context switches.

A more interesting solution comes from combining cooperative multitasking with Reactor/Proactor patterns.

In the next post, we will talk about asynchronous programming itself and how it differs from synchronous programming: old concepts, but considered on a new level and with new terms.
