ThreadPoolExecutor on Android: A Practical Example

Sometimes we need to run tasks in parallel. We can either manage them directly by creating and controlling instances of Thread, or we can use a ThreadPoolExecutor, which is part of the Executor framework.

The Executor framework is quite extensive. It provides methods to control many aspects, such as task priorities, the number of core threads in the pool, the maximum number of threads, and the keep-alive time for idle threads, and it also offers different queue types and rejection policies (with the possibility to create custom ones). Given the number of options we can configure, it can be hard to understand at first glance.

Since the Executor framework was introduced back in Java 5, there are already many great articles about it out there, so we do not intend to write yet another extensive post on the subject. Instead, our goal here is to present a fully working demo app that exercises many aspects of this amazing framework. In the demo app you can easily adjust several executor configurations, such as:

  • Rejection policy (including a custom rejection policy)
  • The number of tasks to be executed per test
  • Core thread pool size
  • Maximum thread pool size
  • Queue size
  • Pre-starting of core threads
  • Safely stopping a test
  • Abruptly stopping a test

During a test execution you will be able to:

  • See the queue size as it changes
  • See the number of active threads
  • See the number of threads in the pool
  • Dynamically change the maximum thread pool size

At the end, it will show a summary of what was done:

  • Number of Enqueued Tasks
  • Number of Started Tasks
  • Number of Completed Tasks
  • Number of Not Completed Tasks
  • Number of Rejected Tasks

As you can see, this app is a handy tool for quickly testing different ThreadPoolExecutor parameter configurations and seeing how they behave in different scenarios.

We will start with a brief description of some topics covered in the demo app, along with some snippets and short videos, and then present a few use cases with different values and their expected results.

Custom ThreadPoolExecutor

Although the Executor framework provides convenient factory methods that make it easy to create a thread pool executor, we are not using them; we use a custom ThreadPoolExecutor instead. The reason is that a custom ThreadPoolExecutor lets us override hook methods that are called before and after each task execution. We take advantage of this and override both beforeExecute and afterExecute to get the queue size and the number of active threads whenever a task starts and finishes:
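Here is a minimal sketch of such a subclass (the class name and logging are ours for illustration, not necessarily the demo app's exact code):

    import android.util.Log;

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    // Sketch: a ThreadPoolExecutor subclass that logs the queue size and the
    // number of active threads around every task execution.
    public class MonitoringThreadPoolExecutor extends ThreadPoolExecutor {

        private static final String TAG = "MonitoringExecutor";

        public MonitoringThreadPoolExecutor(int corePoolSize, int maximumPoolSize,
                                            long keepAliveTime, TimeUnit unit,
                                            BlockingQueue<Runnable> workQueue) {
            super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
        }

        @Override
        protected void beforeExecute(Thread t, Runnable r) {
            super.beforeExecute(t, r);
            // Called just before a task runs on thread t.
            Log.d(TAG, "Task starting. Queue size: " + getQueue().size()
                    + ", active threads: " + getActiveCount());
        }

        @Override
        protected void afterExecute(Runnable r, Throwable t) {
            super.afterExecute(r, t);
            // Called just after a task finishes (t is non-null if the task threw).
            Log.d(TAG, "Task finished. Queue size: " + getQueue().size()
                    + ", active threads: " + getActiveCount());
        }
    }

ThreadPoolExecutor also exposes a third hook, terminated(), which is invoked once the executor has fully shut down.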

Core Thread Pool Size

When creating a ThreadPoolExecutor we can specify the number of threads that will always be kept in the pool after they are first created (i.e. as tasks are submitted). By default, a thread pool does not start with any threads; core threads are created as tasks are submitted. Once the core thread pool size is reached, those threads stay alive indefinitely (by default).
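A quick sketch of that behavior (the values and the println calls are just for illustration):

    // 4 core threads, up to 8 in total, 5-second keep-alive, queue capacity of 100.
    ThreadPoolExecutor executor = new ThreadPoolExecutor(
            4, 8, 5, TimeUnit.SECONDS, new LinkedBlockingQueue<>(100));

    System.out.println(executor.getPoolSize()); // 0 -- no threads created yet
    executor.execute(() -> System.out.println("first task"));
    System.out.println(executor.getPoolSize()); // 1 -- first core thread was created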

Core Threads Timeout

As mentioned above, once a core thread is created it is never terminated by default. If we want core threads to be terminated after a timeout as well (i.e. the same keep-alive timeout applied to non-core threads), we just need to call ThreadPoolExecutor#allowCoreThreadTimeOut(true).
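In code it is a single call, continuing the sketch above (note that the keep-alive time must be greater than zero for this to be allowed):

    // Core threads will now also be terminated after the configured keep-alive
    // period (5 seconds in the sketch above) once they become idle.
    executor.allowCoreThreadTimeOut(true);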

Pre-Start Core Threads

If you want your executor to pre-start all core threads, you need to call ThreadPoolExecutor#prestartAllCoreThreads(), typically right after creating the executor.

In the demo app there is a check-box you can tick to make the executor pre-start all core threads.
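The call itself is straightforward (sketch):

    // Start all core threads immediately instead of waiting for tasks to arrive.
    // The return value is the number of threads that were actually started.
    int started = executor.prestartAllCoreThreads();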

Queue Size and Maximum Thread Pool Size

Once the thread pool has reached the number of core threads, and all core threads are busy, newly submitted tasks are queued until the queue gets full. After that, new threads are created for each new task, up to maximumPoolSize. When the queue is full and all threads are active, new task submissions are rejected. If a task finishes and another one is submitted, the Executor will try to reuse a thread from the pool (if it has not been terminated due to a timeout) instead of creating a new one.

So, in short, this is what happens:

  • If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing.
  • If corePoolSize or more threads are running, the Executor always prefers queuing a request rather than adding a new thread.
  • If a request cannot be queued, a new thread is created unless this would exceed maximumPoolSize, in which case, the task will be rejected.

We can configure all of these parameters the way we want. As we can see, this gives us a lot of power to use system resources efficiently according to our needs.

Note that the queue capacity can only be set when the ThreadPoolExecutor is constructed.
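Putting these rules together, here is a sketch of a pool where all the limits are explicit (the values are illustrative):

    // 4 core threads, at most 8 threads in total, bounded queue of 10 slots.
    // With long-running tasks submitted back to back:
    //   tasks 1-4   -> start the 4 core threads
    //   tasks 5-14  -> fill the queue
    //   tasks 15-18 -> create the 4 extra (non-core) threads
    //   task 19+    -> rejected while all threads are busy and the queue is full
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
            4,                               // corePoolSize
            8,                               // maximumPoolSize
            5, TimeUnit.SECONDS,             // keep-alive for non-core threads
            new ArrayBlockingQueue<>(10));   // queue capacity is fixed here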

Controlling Number of Threads Dynamically

There may be cases where the number of threads we initially configure is not efficient throughout the app lifecycle, and it would be nice if we could change it dynamically. Fortunately, the Executor framework allows us to do exactly that: simply call ThreadPoolExecutor#setMaximumPoolSize() at any time. This overrides any value set in the constructor, and if the new value is smaller than the current one, excess existing threads will be terminated when they next become idle.
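For example (sketch, reusing the pool from the previous snippet):

    // Raise the limit at runtime; this overrides the value from the constructor.
    pool.setMaximumPoolSize(16);

    // Lowering it also works: excess threads are terminated once they become idle.
    pool.setMaximumPoolSize(4);

There is a matching ThreadPoolExecutor#setCorePoolSize() if the number of core threads needs to change as well.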

We implemented a convenient SeekBar control with which you can dynamically change the thread pool size during a test execution. By changing it during a test, we can see in the “Dynamic Parameters” box new threads being created or terminated and new tasks using them. Remember that when increasing the maximum number of threads, those extra threads will only be created after the queue is full. In the video below we set the queue size to 150, so after changing the thread pool size to higher values, it takes a while for the executor to start creating and using new threads:

Rejection Policy

When submitting tasks to an executor, we normally expect them to be executed. But there are cases in which tasks can be rejected:

  • Tasks submitted when all threads are in use and the queue is full
  • Tasks submitted after the executor has been shut down.

Fortunately, the Executor framework provides some built-in policies to handle rejected tasks. As stated in the docs, they are:

  • AbortPolicy (default policy): the handler throws a runtime RejectedExecutionException upon rejection.
  • CallerRunsPolicy: the thread that invokes execute itself runs the task. This provides a simple feedback control mechanism that will slow down the rate that new tasks are submitted.
  • DiscardPolicy: a task that cannot be executed is simply dropped.
  • DiscardOldestPolicy: if the executor is not shut down, the task at the head of the work queue is dropped, and then execution is retried (which can fail again, causing this to be repeated.)

Apart from these policies, we can create our own custom rejection policy. For the sake of demonstration, our demo app also implements a custom rejection policy that simply aborts rejected tasks:
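A minimal sketch of what such a handler might look like (ours, not necessarily identical to the demo app's implementation):

    import java.util.concurrent.RejectedExecutionException;
    import java.util.concurrent.RejectedExecutionHandler;
    import java.util.concurrent.ThreadPoolExecutor;

    // Sketch: a custom rejection policy that simply aborts the rejected task,
    // much like the built-in AbortPolicy. Retry/persist/notify logic would go here.
    public class CustomRejectionPolicy implements RejectedExecutionHandler {
        @Override
        public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
            throw new RejectedExecutionException("Task " + r + " rejected from " + executor);
        }
    }

The handler (custom or built-in) is plugged in either through the ThreadPoolExecutor constructor or via ThreadPoolExecutor#setRejectedExecutionHandler().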

The possibility of creating custom rejection policies is very helpful. Depending on the requirements, when a task is rejected we can, for example, store it in a database to retry it later, or perhaps send a notification to the user and let them decide what to do.

Stopping an Execution

After an execution has started, we can either wait until it finishes or stop it smoothly by calling ExecutorService#shutdown(). At this point, the executor stops accepting new tasks but still processes all tasks already in the queue. We can also stop an execution abruptly by calling ExecutorService#shutdownNow(), which makes the executor stop accepting new tasks, drops the queued tasks (they are returned to the caller without running), and attempts to interrupt the tasks currently executing.
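In code (sketch):

    // Smooth stop: no new tasks are accepted, but already-queued tasks still run.
    executor.shutdown();

    // Abrupt stop: no new tasks are accepted, the tasks still waiting in the
    // queue are returned without running, and running tasks are interrupted.
    List<Runnable> neverStarted = executor.shutdownNow();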

We implemented handy buttons to stop an execution either way (smoothly or abruptly).

Demo App Examples

Let’s see some examples using the demo app. We will show different configurations and the expected results:

Example 1 – Large queue size

  • Number of tasks: 2500
  • Core thread size: 4
  • Maximum thread size: 8
  • Queue size: 2000
  • Rejection Policy: abortPolicy

The executor will initially create 4 threads and then start queuing new tasks. Since the queue is quite large, it will take a while for the executor to start creating new threads (up to 8), and it may never happen at all, since the queue will probably never get full.

Example 2 – Small queue size and changing thread pool size dynamically

  • Number of tasks: 2500
  • Core thread size: 4
  • Maximum thread size: 200
  • Queue size: 100
  • Rejection Policy: abortPolicy

The executor will initially create 4 threads and then start queuing new tasks (while processing only four at a time). Since the queue is small, it will probably fill up quickly, making the executor spawn new threads, up to 200. If 200 threads are running and the queue is still full, new task submissions will be rejected. However, if we increase the number of threads dynamically before the queue gets full, say to 1000 threads, probably no tasks will be rejected, since that will be enough to process all the tasks before the queue fills up. On the other hand, if we decrease the number of threads to a low value (e.g. 10 threads) as soon as the test starts, almost all tasks will be rejected. It is worth noting that each task takes from 5 to 10 seconds to execute; you can of course change this value at any time (see the SleepTask#run() method).
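For reference, such a task can be as simple as the following sketch (our illustration, not the demo app's exact SleepTask code):

    // Hypothetical sketch of a task that "works" by sleeping for 5 to 10 seconds.
    public class SleepTask implements Runnable {
        @Override
        public void run() {
            long millis = 5_000 + (long) (Math.random() * 5_000); // 5-10 seconds
            try {
                Thread.sleep(millis);
            } catch (InterruptedException e) {
                // Restore the interrupt flag so an abrupt shutdown can stop us early.
                Thread.currentThread().interrupt();
            }
        }
    }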

Example 3 – callerRunsPolicy and changing thread pool size dynamically

  • Number of tasks: 1000
  • Core thread size: 4
  • Maximum thread size: 8
  • Queue size: 10
  • Rejection Policy: callerRunsPolicy

The executor will initially create 4 threads and then start queuing new tasks. Since the queue is not large, it fills up quickly, making the executor create 4 more threads (reaching the maximum size of 8). After that, due to the rejection policy, tasks start being executed in the caller thread (i.e. a thread created in the DummyService class). You may notice this test takes longer, since only 8 pool threads run in parallel (4 core threads plus 4 more up to the maximum configuration), plus one task executing in the caller's thread. If we change the thread pool size dynamically via the SeekBar control to a high value (e.g. 900), we will see the test finish much more quickly, since it uses many more threads to get the job done.

When using callerRunsPolicy, if the caller thread is the main UI thread, make sure not to execute long operations on it; otherwise the GUI may freeze.

Example 4 – customRejectionPolicy

  • Number of tasks: 2500
  • Core thread size: 4
  • Maximum thread size: 200
  • Queue size: 100
  • Rejection Policy: customRejectionPolicy

This test will behave exactly the same way as Example 2, since our customRejectionPolicy also throws a runtime RejectedExecutionException. We created it just to show how easy it is to plug in a custom rejection policy.

Note: after each test finishes, if we wait a while on the test fragment, we can see the number of threads in the pool decrease until it hits the core pool size. This is due to the keep-alive time configured in the pool constructor (set to 5 seconds), meaning any non-core thread that stays idle for 5 seconds is terminated.

Conclusion

Understanding how the Executor manages its threads is an important step toward getting the most out of the framework. At first it may seem hard, but if we give it a chance, things become clear very quickly.

With the demo app we provided, we hope developers can better visualize some of the countless configuration possibilities this framework offers. You can use it and change it the way you want. And as usual, any questions are more than welcome.
