The Future With Futures

Recently there has been a lot of progress in the Rust language towards a robust asynchronous stack. In this article we'll take a look at what these pieces are, take a tour of what's available, play with some examples, and talk about how they fit together.

We'll start with the futures crate, move on to futures_cpupool, and then eventually to tokio. After that we'll play with hyper, which now supports async. We'll assume you have some knowledge of programming and have at least thought about trying Rust before.

Throughout this article you can follow along in the repository, running each example with a command such as cargo run --example 1-oneshot.

What are Futures and Async?

When writing code to do some action which may take some time, such as requesting a resource from a remote host, it is not always desirable to block further execution of our program. This is particularly true in the case where we are writing a web server, or performing a large number of complex calculations.

One way of handling this is to spawn threads and slice tasks into discrete chunks to distribute across them. This is not always as convenient or as easy as it may sound. Suddenly we are forced to figure out how tasks are best split, how to allocate resources, how to guard against race conditions, and how to manage inter-thread communication.

As a community we've learnt some techniques over the years, such as CPU pools, which allow a number of threads to cooperate on a task, and 'future' values, which resolve to their intended value once some computation finishes. These provide useful and powerful tools that make it easier, safer, and more fun to write such code.

If you've ever developed in JavaScript you may already be familiar with asynchronous programming and the idea of Promises. Futures in Rust are very similar to this, but we're given more control and more responsibility. With this comes greater power.

How We Got Here

Much of this async work has been in development since green threads were removed from Rust around the 0.7 release. There have been several projects related to futures and async since then, and their influence can be felt in what we have now and what is on the horizon.

Much of the async story today is founded on the ideas and lessons learned from std's green threads, mio, coroutine, and rotor. mio in particular is a foundational library for nearly the entire async area of Rust. If you have an interest in seeing high-quality systems code, I highly recommend paying some attention to this project.

Today, tokio and futures are the focus of much community effort. Recently Tokio announced its first release, while futures has been relatively stable for a couple of months. This has spurred a large amount of development within the community to support and leverage these new capabilities.

Getting Started

The futures crate offers a number of structures and abstractions for building asynchronous code. By itself it's quite simple, and through its clever design it is fundamentally zero-cost. This is an important thing to keep in mind as we work through our examples: writing code with futures should have no performance overhead over code without them.

When compiled, futures boil down to actual state machines while still allowing us to write our code in a relatively familiar 'callback style'. If you've been using Rust already, working with futures will feel very similar to working with iterators.

One of the things we'll commonly do in this section is "sleep a little bit" on a thread to simulate some asynchronous action. In order to do this, let's create a little function:

extern crate rand;

use std::thread;
use std::time::Duration;
use rand::distributions::{Range, IndependentSample};

// This function sleeps for a bit, then returns how long it slept.
pub fn sleep_a_little_bit() -> u64 {
    let mut generator = rand::thread_rng();
    let possibilities = Range::new(0, 1000);

    let choice = possibilities.ind_sample(&mut generator);

    let a_little_bit = Duration::from_millis(choice);
    thread::sleep(a_little_bit);
    choice
}

Now that we've got that established, we can use it in our first future. We'll use a oneshot to delegate a task to another thread, then pick the result back up in the main thread.

extern crate futures;
extern crate fun_with_futures;

use std::thread;
use futures::Future;
use futures::sync::oneshot;

// This is the function we defined earlier.
use fun_with_futures::sleep_a_little_bit;

fn main() {
    // This is a simple future built into the crate which feels sort of like
    // a one-time channel. You get a (sender, receiver) pair when you invoke it.
    // Sending a value consumes that side of the channel, leaving only the receiver.
    let (tx, rx) = oneshot::channel();

    // We can spawn a thread to simulate an action that takes time, like a web
    // request. In this case it's just sleeping for a random time.
    thread::spawn(move || {
        println!("--> START");

        let waited_for = sleep_a_little_bit();
        println!("--- WAITED {}", waited_for);
        // This consumes the sender, we can't use it afterwards.
        tx.complete(waited_for);

        println!("<-- END");
    });

    // Now we can wait for it to finish
    let result = rx.wait()
        .unwrap();
    
    // This value will be the same as the previous "WAITED" output.
    println!("{}", result);
}

If we run this example we'll see something like:

--> START
--- WAITED 542
<-- END
542

The output is exactly what we might expect if we were doing this via std's channels, and the code looks similar as well. At this point we're barely using futures at all, so it shouldn't be a huge surprise that nothing surprising is happening here. Futures start to come into their own with more complicated tasks.

Next, let's look at how we can work with a set of futures. In this example we'll spawn a number of threads, have them perform some long-running task, and then collect all of the results into a vector.

extern crate futures;
extern crate fun_with_futures;

use std::thread;
use futures::Future;
use futures::future::join_all;

use fun_with_futures::sleep_a_little_bit;

const NUM_OF_TASKS: usize = 10;

fn main() {
    // We'll create a vector to add a bunch of receivers to.
    let mut rx_set = Vec::new();

    // Next we'll spawn up a bunch of threads doing 'something' for a bit then sending a value.
    for index in 0..NUM_OF_TASKS {
        // Here we create a `oneshot` future, as before; the sender side is consumed after use.
        let (tx, rx) = futures::sync::oneshot::channel();
        // Add the receiver to the vector we created earlier so we can collect on them.
        rx_set.push(rx);

        // Spawning a thread means the tasks won't be executed sequentially, so
        // these will actually behave like asynchronous values, and we can see
        // how futures work.
        thread::spawn(move || {
            println!("{} --> START", index);

            let waited_for = sleep_a_little_bit();
            println!("{} --- WAITED {}", index, waited_for);

            // Here we send back the index (and consume the sender).
            tx.complete(index);

            println!("{} <-- END", index);
        });
    }

    // `join_all` lets us join together the set of futures.
    let result = join_all(rx_set)
        // Block until they all are resolved.
        .wait()
        // Check they all came out in the right order.
        .map(|values| 
            values.iter()
                .enumerate()
                .all(|(index, &value)| index == value))
        // We'll be lazy and just unwrap the result.
        .unwrap();

    println!("Job is done. Values returned in order: {}", result);
}

The map call behaves just like the map of an Option<T> or a Result<T, E>: it transforms some Future<Item = T> into some Future<Item = U>. Running this example results in output similar to the following:

0 --> START
1 --> START
3 --> START
2 --> START
4 --> START
5 --> START
6 --> START
7 --> START
8 --> START
9 --> START
4 --- WAITED 124
4 <-- END
1 --- WAITED 130
1 <-- END
0 --- WAITED 174
0 <-- END
2 --- WAITED 268
2 <-- END
6 --- WAITED 445
6 <-- END
3 --- WAITED 467
3 <-- END
9 --- WAITED 690
9 <-- END
8 --- WAITED 694
8 <-- END
5 --- WAITED 743
5 <-- END
7 --- WAITED 802
7 <-- END
Job is done. Values returned in order: true

In this example we can observe that all of the futures started, waited various times, then finished. They did not finish in order, but the resulting vector did come out in the correct order. Again, this result is not entirely surprising, and we could have done something very similar with std's channels.
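For comparison, here's a rough std-only sketch of the same idea, with a hypothetical doubling task standing in for the work. With plain channels the results arrive in completion order, so we have to tag each result with its index and sort afterwards, whereas join_all gave us the ordering for free:

```rust
use std::sync::mpsc;
use std::thread;

const NUM_OF_TASKS: usize = 10;

fn main() {
    let (tx, rx) = mpsc::channel();

    for index in 0..NUM_OF_TASKS {
        let tx = tx.clone();
        thread::spawn(move || {
            // A real task would do some slow work here before reporting back.
            tx.send((index, index * 2)).unwrap();
        });
    }
    // Drop the original sender so the receiver knows when all threads are done.
    drop(tx);

    // Collect results in completion order, then restore task order by index.
    let mut results: Vec<(usize, usize)> = rx.iter().collect();
    results.sort_by_key(|&(index, _)| index);

    assert!(results.iter().enumerate().all(|(i, &(index, _))| i == index));
    println!("{:?}", results);
}
```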

Remember, futures are a basic building block, not a complete solution on their own.

We'll cover one more example which will feel fairly familiar to people who have used the channels from std, then we'll start doing more interesting stuff.