Project Loom: Understand the new Java concurrency model

Project Loom introduces lightweight and efficient virtual threads called fibers, massively increasing resource efficiency while preserving the same simple thread abstraction for developers.


Loom is a newer project in the Java/JVM ecosystem (hosted by OpenJDK) that attempts to address limitations in the traditional concurrency model. In particular, Loom offers a lighter alternative to threads along with new language constructs for managing them.

Read on for an overview of these important upcoming changes.

Fibers: Virtual threads in Java

Traditional Java concurrency is managed with the Thread and Runnable classes, as seen in Listing 1 (which launches a new named thread and outputs the name).

Listing 1. Launching a thread with traditional Java

Thread thread = new Thread("My Thread") {
    public void run() {
        System.out.println("run by: " + getName());
    }
};
thread.start();
System.out.println(thread.getName());

This model is fairly easy to understand in simple cases, and Java offers a wealth of support for dealing with it.
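For comparison, the same thread can be launched more idiomatically by passing a Runnable lambda instead of subclassing Thread (a minimal sketch; the class and method names here are illustrative):

```java
public class RunnableDemo {
    // Equivalent to Listing 1, but the work is a Runnable rather than an override
    static Thread launch() {
        Thread thread = new Thread(
            () -> System.out.println("run by: " + Thread.currentThread().getName()),
            "My Thread");
        thread.start();
        return thread;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t = launch();
        t.join();
        System.out.println(t.getName());
    }
}
```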

The downside is that Java threads are mapped directly to OS threads. This places a hard limit on the scalability of concurrent Java apps. Not only does each application thread consume a relatively expensive kernel thread, but there is no mechanism for grouping related threads for optimal scheduling. For instance, closely related threads may wind up scheduled apart on different cores, losing the locality they could enjoy by running together.

To give you a sense of how ambitious the changes in Loom are, current Java threading, even on hefty servers, is counted in the thousands of threads at most. Loom proposes to move this limit toward millions of threads. The implications for Java server scalability are breathtaking, since standard request processing ties each request to a thread.
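To make the scale difference concrete, here is a minimal sketch (assuming a Loom-enabled runtime; virtual threads shipped as standard in JDK 21) that starts 100,000 virtual threads, a count that would exhaust most systems if each task claimed an OS thread:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class ManyThreads {
    // Start `count` virtual threads and wait for all of them to finish.
    static int runAll(int count) throws InterruptedException {
        AtomicInteger completed = new AtomicInteger();
        CountDownLatch latch = new CountDownLatch(count);
        for (int i = 0; i < count; i++) {
            Thread.startVirtualThread(() -> {
                completed.incrementAndGet();
                latch.countDown();
            });
        }
        latch.await();
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // A workload this size is impractical with one OS thread per task.
        System.out.println(runAll(100_000) + " virtual threads completed");
    }
}
```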

The solution is to introduce some kind of virtual threading, where the Java thread is abstracted from the underlying OS thread, and the JVM can more effectively manage the relationship between the two. That is what project Loom sets out to do, by introducing a new virtual thread class called a fiber.

As the Project Loom proposal states:

The main technical mission in implementing continuations — and indeed, of this entire project — is adding to HotSpot the ability to capture, store and resume callstacks not as part of kernel threads.

If you were ever exposed to Quasar, which brought lightweight threading to Java via bytecode manipulation, Loom will feel familiar: the same tech lead, Ron Pressler, heads up Loom for Oracle.

Alternatives to fibers in Java

Before looking more closely at Loom's solution, it should be mentioned that a variety of approaches have been proposed for concurrency handling. In general, these amount to asynchronous programming models. Some, like CompletableFuture and non-blocking IO, work around the edges by improving the efficiency of thread usage. Others, like RxJava (the Java implementation of the ReactiveX spec), are wholesale asynchronous alternatives.

Although RxJava is a powerful and potentially high-performance approach to concurrency, it is not without drawbacks. In particular, it is quite different from the mental model Java developers have traditionally used. Also, RxJava can’t match the theoretical performance achievable by managing virtual threads at the virtual-machine layer.

Fibers are designed to allow for something like the synchronous-appearing code flow of JavaScript’s async/await, while hiding the performance-extracting machinery away inside the JVM.

Java fibers in action

As mentioned, the new Fiber class represents a virtual thread. Under the hood, asynchronous acrobatics are underway. Why go to this trouble, instead of just adopting something like ReactiveX at the language level? The answer is both to make it easier for developers to understand, and to make it easier to move the universe of existing code. For example, data store drivers can be more easily transitioned to the new model.

A very simple example of using fibers is seen in Listing 2. Notice how similar it is to existing Thread code. (This snippet comes from an Oracle blog post.)

Listing 2. Creating a virtual thread

Thread.startVirtualThread(
  () -> {
    System.out.println("Hello World");
  }
);

Beyond this very simple example is a wide range of considerations for scheduling. These mechanisms are not set in stone yet, and the Loom proposal gives a good overview of the ideas involved.
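One scheduling-related API that has already stabilized is an executor that starts a fresh virtual thread per submitted task, which lets plain blocking-style code scale. A minimal sketch, assuming the JDK 21 API (the fetch method is a hypothetical stand-in for a blocking call):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class VirtualExecutorDemo {
    // Hypothetical stand-in for a blocking operation, e.g. a database query.
    static String fetch(int id) {
        return "result-" + id;
    }

    public static void main(String[] args) throws Exception {
        // Each submitted task gets its own virtual thread; blocking is cheap.
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            Future<String> f = exec.submit(() -> fetch(42));
            System.out.println(f.get());
        }
    }
}
```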

An important note about Loom’s fibers is that whatever changes are required to the entire Java system, they are not to break existing code. Existing threading code will be fully compatible going forward. You can use fibers, but you don’t have to. As you can imagine, this is a fairly Herculean task, and accounts for much of the time spent by the people working on Loom.
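That compatibility can be seen directly: the very same Runnable runs unchanged on a platform thread or a virtual thread. A sketch, assuming JDK 21's Thread.ofVirtual() builder:

```java
public class BothWays {
    // One Runnable, usable by either kind of thread.
    static final Runnable task =
        () -> System.out.println("run by: " + Thread.currentThread());

    public static void main(String[] args) throws InterruptedException {
        // Classic platform thread, exactly as before Loom.
        Thread platform = new Thread(task, "platform-thread");
        platform.start();
        platform.join();

        // The same task on a virtual thread.
        Thread virtual = Thread.ofVirtual().name("virtual-thread").start(task);
        virtual.join();
    }
}
```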

Lower-level async with continuations

Now that we’ve seen fibers, let’s take a look at continuations. Loom implements continuations both to underpin fibers and as a public API in their own right, for developers to use in applications. So what is a continuation?

At a high level, a continuation is a representation in code of the execution flow. In other words, a continuation allows the developer to manipulate the execution flow by calling functions. The Loom docs present the example seen in Listing 3, which provides a good mental picture of how this works.

Listing 3. Continuation example

foo() { // (2)
  ...
  bar()
  ...
}
bar() {
  ...
  suspend // (3)
  ... // (5)
}
main() {
  c = continuation(foo) // (0)
  c.continue() // (1)
  c.continue() // (4)
}

Consider the flow of execution as described by each commented number:

     (0) A continuation is created, beginning at the foo function.
     (1) Control passes to the entry point of the continuation.
     (2) foo executes until the next suspension point, at (3).
     (3) bar suspends, releasing control back to the point of invocation, at (1).
     (4) main calls continue again, and flow returns to where it was suspended, at (5).

This kind of control is not difficult in a language like JavaScript where functions are easily referenced and can be called at will to direct execution flow.
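In current Loom builds, continuations surface as a low-level class, jdk.internal.vm.Continuation, which is internal and subject to change. Listing 3 maps onto it roughly as in this sketch (not runnable without exporting internal packages; shown only to make the mechanics concrete):

```
// Sketch only: jdk.internal.vm.Continuation is an internal, unstable API.
ContinuationScope scope = new ContinuationScope("demo");
Continuation c = new Continuation(scope, () -> {
    System.out.println("before suspend");   // runs during the first c.run()
    Continuation.yield(scope);              // the suspension point, as at (3)
    System.out.println("after resume");     // runs during the second c.run()
});
c.run();   // executes until the yield, then returns to the caller, as at (1)
c.run();   // resumes just after the yield point, as at (4)
```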

Tail-call elimination

Another stated goal of Loom is tail-call elimination (also called tail-call optimization). This is a fairly esoteric element of the proposed system. The core idea is that when the last action of a function is itself a call, the runtime can reuse the current stack frame instead of allocating a new one. The memory required to execute the continuation then stays constant, instead of growing with each nested call as every caller's frame is kept alive until the call stack unwinds.
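To see the difference, compare a recursive call that is not in tail position with one that is. A hedged sketch: today's JVM eliminates neither, but only the second form is a candidate for elimination.

```java
public class TailCalls {
    // NOT a tail call: the caller's frame must survive the recursive call,
    // because the pending multiplication (n * ...) runs after it returns.
    static long factorial(long n) {
        if (n <= 1) return 1;
        return n * factorial(n - 1);
    }

    // Tail call: the recursive call is the very last action, so with
    // tail-call elimination its frame could replace the caller's,
    // keeping stack usage constant regardless of n.
    static long factorial(long n, long acc) {
        if (n <= 1) return acc;
        return factorial(n - 1, n * acc);
    }

    public static void main(String[] args) {
        System.out.println(factorial(10));     // 3628800
        System.out.println(factorial(10, 1));  // 3628800
    }
}
```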

Loom and the future of Java

Java is prominently used for building web applications, and that is where Loom's impact will be most visible; the ideas it introduces may well prove useful in Java's many other domains, too. It’s easy to see how massively increasing thread efficiency, and dramatically reducing the resource requirements for handling multiple competing tasks, will result in greater throughput for servers. Better handling of requests and responses is a bottom-line win for a whole universe of existing and to-be-built Java applications.

Like any ambitious new project, Loom is not without its challenges. Dealing with sophisticated interleaving of threads (virtual or otherwise) is always going to be a complex challenge, and we’ll have to wait to see exactly what library support and design patterns emerge to deal with these situations.

It will be fascinating to watch as Project Loom moves into the main branch and evolves in response to real-world use. As this plays out, and the advantages inherent in the new system are adopted into the infrastructure that developers rely on (think Java app servers like Jetty and Tomcat), we could see a sea change in the Java ecosystem.

Already, Java and its primary server-side competitor Node.js are neck and neck in performance. An order of magnitude boost to Java performance in typical web app use cases could alter the landscape for years to come.