
An example of functional design

In this blog post we'll walk through the technical design of one part of a (not yet public nor completed) web application made by Futurice. The library in focus is named jobba. Using jobba we can fetch resources from backend servers in a functional way. Unfortunately jobba isn't open sourced (neither is Twitter's Stitch), but at least this "documentation" is public now.

TL;DR this is not a monad tutorial

Jobba is very similar to Facebook's Haxl and Twitter's Stitch.

Is Future functional?

To access backend resources, we could build a small library using Scala's Future directly. But is Scala's Future (or JavaScript's Promise) functional? This question is immediately followed by a meta-question: what do we mean by "functional"? If functional means that we have map, filter and reduce, then — yes — Future is a functional construct. Yet the defining property of functional programs is referential transparency, a property which enables us to reason equationally. Fancy words need examples:

If we start with a very simple example

    def f(b: Boolean): Int = ???
    val a: List[Int] = List(true, false, true) map (f _)

and ask ourselves, are the first and the third elements of the list a the same? If f is referentially transparent then we'll have

    val a: List[Int] = List(f(true), f(false), f(true))

and the answer is: yes, as f(true) == f(true) always. Scala doesn't enforce referential transparency, so we can't be a hundred percent sure. But most likely, functions from Booleans to Ints are referentially transparent.

But can we reason similarly about a program involving Futures?

    def g(b: Boolean): Future[Int] = if (b) foo else bar
    val b: Future[List[Int]] = Future.traverse(List(true, false, true)) (g _)

Can we be sure that, in the resulting list, the first and third values will always be the same (if b resolves successfully)? This is a trick question: the answer is “it depends”. If we manually evaluate the expression:

    Future.traverse(List(true, false, true)) (g _) ≡
    Future.sequence(List(true, false, true) map (g _)) ≡
    Future.sequence(List(g(true), g(false), g(true))) ≡
    Future.sequence(List(foo, bar, foo))

we see that the first and the third elements should be the same, and they will be if foo is a val. But if foo is defined as def foo = webClient.get("http://my.backend/api/foo"), then not only may the results vary, but two requests will be made to the same endpoint. And contrary to the previous example, Future-computations usually do have these kinds of side effects.
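To make the difference concrete, here is a minimal self-contained sketch, where the `requests` counter and `fooDef` are hypothetical stand-ins for the backend endpoint: a def-defined Future re-runs its side effect on every evaluation, so the traversal fires two "requests".

```scala
import java.util.concurrent.atomic.AtomicInteger
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object DefVsVal {
  // Hypothetical stand-in for hits on http://my.backend/api/foo
  val requests = new AtomicInteger(0)

  // A def re-runs its body on every evaluation: each reference is a new "request".
  def fooDef: Future[Int] = Future { requests.incrementAndGet(); 42 }

  def main(args: Array[String]): Unit = {
    def g(b: Boolean): Future[Int] = if (b) fooDef else Future.successful(0)
    val result = Await.result(Future.traverse(List(true, false, true))(g), 5.seconds)
    println(result)       // List(42, 0, 42)
    println(requests.get) // 2 — fooDef was evaluated twice
  }
}
```

Had `fooDef` been a val, the single underlying future would be shared and only one "request" would be made.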

In this small example the issue is easy to work around. But try to imagine a larger system, with a very granular backend API, where one has to make hundreds of backend requests to compose the final result. Let's take Twitter as an example: you can fetch a user's feed, where each tweet has an author, and you can fetch each author's information, to get an avatar URL. You'd like the rendered feed (the result) to be consistent, i.e. each author always has the same avatar. Or going even further: you'd like to make as few backend requests as possible. Your API might even support fetching multiple users' information with a single request. You'd like to leverage that, but you don't want to sacrifice the clarity of your code. So you could take it all into account manually:

    def usersFeed(userHandle: UserHandle)(implicit ec: ExecutionContext): Future[Html] =
     for {
       feed <- fetchFeed(userHandle)
       usersData <- fetchUsersData(feed.uniqueUserHandles)
     } yield renderFeed(feed, usersData)

And that's ok. But then you'd like to render two users' feeds on top of each other. If you do

    def twoUsersFeed(a: UserHandle, b: UserHandle)(implicit ec: ExecutionContext): Future[Html] =
     for {
       feedA <- usersFeed(a)
       feedB <- usersFeed(b)
     } yield feedA ++ feedB

you lose the desired property of consistent results (referential transparency). You also lose the property of making as few requests as possible. It is difficult to use multi-valued requests and keep the code composable.

Rather, you'd like to write more granular code and still have the above properties:

    def singleTweet(tweet: Tweet)(implicit ec: ExecutionContext): Future[Html] =
     for {
       author <- fetchUser(tweet.author)
     } yield renderTweet(tweet, author)

    def usersFeed(userHandle: UserHandle)(implicit ec: ExecutionContext): Future[Html] =
     for {
       feed <- fetchFeed(userHandle)
       renderedTweets <- Future.traverse (feed.tweets) (singleTweet _)
     } yield renderFeed(feed, renderedTweets)

The solution — Functional Future

How to proceed is actually quite obvious, if you are familiar with common FP design practices. The Future isn't a pure construct, but merely a functional-looking wrapper around side-effectful, callback-based operations. The Future isn't composable in the way we want it to be. We want to separate the what and the how of backend access: let frontend developers specify what to fetch and what to do with the results, and backend developers specify how to fetch it. Plain Future doesn't work, so we add another level of abstraction by introducing a new type, Job:

    // "frontend"
    def singleTweet(tweet: Tweet): Job[Html] =
     for {
       author <- fetchUser(tweet.author)
     } yield renderTweet(tweet, author)

    def usersFeed(userHandle: UserHandle): Job[Html] =
     for {
       feed <- fetchFeed(userHandle)
       renderedTweets <- Job.traverse (feed.tweets) (singleTweet _)
     } yield renderFeed(feed, renderedTweets)

    // "backend"
    package object executor {
     def execute[T](job: Job[T])(implicit ec: ExecutionContext): Future[T] = ???
    }

Here we have already implicitly decided that the Job and Future APIs should be similar. The difference is that we first construct a Job computation and then execute it. This is contrary to Future, where construction and execution are highly interleaved.

With the separation of construction and execution, frontend / application developers can compose Jobs (specify the what), while backend developers work on the implementation of the execute method — the how. The implementation can be written in a referentially transparent way, while also automatically optimising the execution.

What is the trick, one might ask? It's the primitive building blocks. With Future you can do virtually anything, but with Job you can only ask for a JSON document from the backend. By restricting what a Job can do, the implementation can be made fully referentially transparent, as explained later.

    object Future {
     def apply[T](body: => T)(implicit ec: ExecutionContext): Future[T]
    }

    object Job {
     def fetch[T](url: Url)(implicit reads: Reads[T]): Job[T]
    }

    val f = Future { launchNuclearMissiles() }
    val j = Job.fetch[JsValue]("https://api.github.com/orgs/futurice")

Comparison of Jobba to Haxl and Stitch

Jobba, Haxl, and Stitch are very similar libraries. Yet there are a couple of differences worth pointing out, if you are already familiar with them.

Haxl is a Haskell library by Facebook with which you can:

  • Make concurrency implicit using Applicative Functors
  • Get soundness and modularity for free from implicit caching
  • Concurrently fetch data from multiple heterogeneous sources
  • Implicitly batch multiple requests to the same data source into a single request

Stitch is a Scala library by Twitter with which you can do almost the same:

  • Make concurrency implicit using Applicative Functors
  • Don't get soundness and modularity for free from implicit caching
  • Concurrently fetch data from multiple heterogeneous sources
  • Implicitly batch multiple requests to the same data source into a single request

Unluckily Stitch isn't open source. And it doesn't have the soundness and modularity property, which is another way of saying that the execution is referentially transparent. So there are already three reasons to make our own library: we use Scala (which rules out Haxl), we want referential transparency, and we want a library we can actually use (which rules out Stitch).

Also, our environment was a bit simpler: we didn't have a batch API, and we had only a single source type, JSON over HTTP. We take advantage of that, as it simplifies the internals. So Jobba is a Scala library by us with which you can:

  • Make concurrency implicit using Applicative Functors
  • Get soundness and modularity for free from implicit caching
  • Concurrently fetch data from a single homogeneous source
  • Implicitly batch multiple requests to the same data source into a single request

Here be dragons

The implementation of Job uses the same approach as Haxl and Stitch: a free monad. There are good explanations of free monads, and free structures in general, on StackOverflow, so I won't repeat them.

The idea is to make a data type that has all the needed operations, but doesn't perform anything. Then the executor (evaluator or interpreter) will take the description of the computation and execute it.

This problem is quite similar to the Par example in Chapter 4 of Functional Programming in Scala. We abstract even more, though. Using free monads we could have totally different implementations of our execute method. In the Par example, the authors had to change the definition of the Par data type to provide a fully non-blocking Par implementation using actors.

One simple definition of Job could be:

    sealed trait Job[+T]
    case class PureJob[T](value: T) extends Job[T]
    case class BlockedJob[S,T](f: S => Job[T], job: Job[S]) extends Job[T]
    case class FetchJob[T](url: Url, reads: Reads[T]) extends Job[T]

Then we could implement the primitive operations of the Job API:

    def flatMap[S,T](f: S => Job[T], job: Job[S]): Job[T] =
     job match {
       case PureJob(value) => f(value)
       case _              => BlockedJob(f, job)
     }

    def pure[T](value: T): Job[T] =
     PureJob(value)

    def map[S, T](f: S => T, job: Job[S]): Job[T] =
     job match {
       case PureJob(value) => PureJob(f(value))
       case _              => BlockedJob((x: S) => pure(f(x)), job)
     }

    def fetch[T](url: Url)(implicit reads: Reads[T]): Job[T] =
     FetchJob(url, reads)

Here we already perform some local optimisations of the syntax structure we build up.
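The PureJob case of flatMap is the monad left-identity law applied eagerly: flatMap over an already-pure job collapses at construction time instead of allocating a BlockedJob. A minimal self-contained sketch, with Url and Reads as simplified stand-ins for the real types:

```scala
object LocalOpt {
  type Url = String
  type Reads[T] = String => T // simplified stand-in for a JSON reader

  sealed trait Job[+T]
  case class PureJob[T](value: T) extends Job[T]
  case class BlockedJob[S, T](f: S => Job[T], job: Job[S]) extends Job[T]
  case class FetchJob[T](url: Url, reads: Reads[T]) extends Job[T]

  def pure[T](value: T): Job[T] = PureJob(value)

  def flatMap[S, T](f: S => Job[T], job: Job[S]): Job[T] =
    job match {
      case PureJob(value) => f(value)          // left identity: no wrapper needed
      case _              => BlockedJob(f, job)
    }

  def main(args: Array[String]): Unit = {
    // Collapsed at construction time, before any executor runs:
    println(flatMap((n: Int) => pure(n + 1), pure(41)))

    // A fetch can't be collapsed, so it stays blocked:
    val blocked = flatMap((s: String) => pure(s), FetchJob("/x", identity[String]))
    println(blocked match {
      case BlockedJob(_, _) => "blocked"
      case _                => "other"
    })
  }
}
```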

Unfortunately, to make for-comprehensions work, we need to define map and flatMap as member methods of Job. So the real definition of the Job trait is more interleaved:

    sealed trait Job[+T] {
     def flatMap[S](f: T => Job[S]): Job[S] =
       BlockedJob(f, this)

     def map[S](f: T => S): Job[S] =
       BlockedJob[T, S]((x: T) => PureJob(f(x)), this)
    }

    case class PureJob[T](value: T) extends Job[T] {
     override def flatMap[S](f: T => Job[S]): Job[S] =
       f(value)

     override def map[S](f: T => S): Job[S] =
       PureJob(f(value))
    }
    case class BlockedJob[S,T](f: S => Job[T], job: Job[S]) extends Job[T]
    case class FetchJob[T](url: Url, reads: Reads[T]) extends Job[T]

After we have defined the Job data structure, we can look at what the singleTweet job looks like:

    scala> singleTweet(Tweet("phadej", "fun prog is fun"))
    res0: Job[Html] = BlockedJob(<function1>,FetchJob(/user/phadej,()))

As map is implemented via BlockedJob (effectively flatMap with a pure result), we see a BlockedJob as the outer constructor of the resulting job. The <function1> is the anonymous unary function (x: T) => PureJob(f(x)) that we create in map.

Next, let's write a simple executor: we squash Job into Future, turning PureJob into Future.successful and BlockedJob into Future.flatMap:

    def execute[T](job: Job[T])(implicit ec: ExecutionContext): Future[T] =
     job match {
       case PureJob(value)       => Future.successful(value)
       case BlockedJob(f, job)   => execute(job) flatMap { value => execute(f(value)) }
       case FetchJob(url, reads) => ???
     }
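Because a Job is only a description, the Future-based executor is not the only possible one. A synchronous test interpreter, sketched here with simplified Url/Reads aliases and a hypothetical canned-response map, answers fetches from memory — handy for unit-testing frontend code without a backend:

```scala
object TestExecutor {
  type Url = String
  type Reads[T] = String => T // simplified stand-in for a JSON reader

  sealed trait Job[+T]
  case class PureJob[T](value: T) extends Job[T]
  case class BlockedJob[S, T](f: S => Job[T], job: Job[S]) extends Job[T]
  case class FetchJob[T](url: Url, reads: Reads[T]) extends Job[T]

  // Same structure as the Future-based executor, but fully synchronous:
  // fetches are answered from a canned map instead of a web client.
  def execute[T](job: Job[T], responses: Map[Url, String]): T =
    job match {
      case PureJob(value)       => value
      case BlockedJob(f, j)     => execute(f(execute(j, responses)), responses)
      case FetchJob(url, reads) => reads(responses(url))
    }

  def main(args: Array[String]): Unit = {
    val job: Job[Int] =
      BlockedJob((s: String) => PureJob(s.length),
                 FetchJob("/user/phadej", identity[String]))
    println(execute(job, Map("/user/phadej" -> "Oleg"))) // length of "Oleg"
  }
}
```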

The execution of FetchJob is where the magic starts to happen. We could simply perform the actual fetch for every FetchJob, but what we really do is cache FetchJobs as we go:

    def execute[T](job: Job[T])(implicit cache: FetchCache, ec: ExecutionContext): Future[T] =
     job match {
       case PureJob(value)       => Future.successful(value)
       case BlockedJob(f, job)   => execute(job) flatMap { value => execute(f(value)) }
       case FetchJob(url, reads) => cache.cached(url, reads) { webClient.get(url)(reads) }
     }

Using this we get soundness and modularity. Yet skeptics will notice that we could rewrite the original singleTweet as:

    def singleTweet(tweet: Tweet)(implicit cache: FetchCache, ec: ExecutionContext): Future[Html] =
     for {
       author <- fetchUser(tweet.author)
     } yield renderTweet(tweet, author)

or

    implicit def jobba2execution(implicit ctx: JobbaContext): ExecutionContext = ???

    def singleTweet(tweet: Tweet)(implicit ctx: JobbaContext): Future[Html] =
     for {
       author <- fetchUser(tweet.author)
     } yield renderTweet(tweet, author)

and get the same benefits. And that is very true. For example in our application we have three caches: app-global in-memory, Redis cache for backend responses, and request-local in-memory cache for consistency. All these three could be implemented using an implicit context, as they only affect how a fetch operation is performed.

The definition type Job[+A] = JobbaContext => Future[A], which resembles the Par implementation, would work exactly as well as the implicit contexts do: not well enough.

Why not just use an implicit context?

Using Futures directly, even with implicits, we cannot affect the structure of the computation: we can't inspect it or change it. In particular, we can't bundle multiple fetches into one, to e.g. leverage Redis' MGET command. To achieve that, we must make the Job structure richer and add a few more job types:

    sealed trait Job[+T] {
     def flatMap[S](f: T => Job[S]): Job[S] = BlockedJob(f, this)
     def map[S](f: T => S): Job[S] = MapJob(f, this)
    }
    case class MapJob[S, T](f: S => T, job: Job[S]) extends Job[T]
    case class PairJob[S, T](jobS: Job[S], jobT: Job[T]) extends Job[(S, T)]
    /* PureJob, BlockedJob and FetchJob as previously */

The MapJob is quite simple: it's there to expose the map operation directly, not via pure and flatMap. The PairJob needs more explanation. If we continue with a simpler Job definition for a moment, and try to build a concurrent Job:

    def twoTweets(a: Tweet, b: Tweet): Job[Html] =
     for {
       htmlA <- singleTweet(a)
       htmlB <- singleTweet(b)
     } yield htmlA ++ htmlB

    scala> twoTweets(Tweet("phadej", "fun prog is fun"), Tweet("futurice", "RT @phadej fun prog is fun"))
    res1: Job[Html] = BlockedJob(<function1>,BlockedJob(<function1>,FetchJob(/user/phadej,())))

Here we would also expect to see FetchJob(/user/futurice), as clearly we could perform both fetches concurrently, or even bundle them into one. Applicative Functors come to the rescue. We need to modify the twoTweets definition (using scalaz's magical |@| operator):

    def applicativeTwoTweets(a: Tweet, b: Tweet): Job[Html] =
     (singleTweet(a) |@| singleTweet(b)) { (htmlA, htmlB) => htmlA ++ htmlB }

Using this we actually do worse at first:

    scala> applicativeTwoTweets(Tweet("phadej", "fun prog is fun"), Tweet("futurice", "RT @phadej fun prog is fun"))
    res0: Job[Html] = BlockedJob(<function1>,BlockedJob(<function1>,BlockedJob(<function1>,FetchJob(/user/phadej,()))))

Let's define ap in Applicative using PairJob, i.e. record that we composed Jobs applicatively. Then we'll see more of the computation's original structure:

    scala> applicativeTwoTweets(Tweet("phadej", "fun prog is fun"), Tweet("futurice", "RT @phadej fun prog is fun"))
    res0: Job[Html] =
     MapJob(<function1>,PairJob(
       MapJob(<function1>,FetchJob(/user/futurice,())),
       MapJob(<function1>,FetchJob(/user/phadej,()))))
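The |@| operator boils down to something like the following map2, sketched here with simplified constructors (Reads dropped): instead of hiding the second job inside a continuation as flatMap does, it records both jobs in a PairJob, so the executor can see them both.

```scala
object ApplicativeJob {
  sealed trait Job[+T]
  case class PureJob[T](value: T) extends Job[T]
  case class MapJob[S, T](f: S => T, job: Job[S]) extends Job[T]
  case class PairJob[S, T](jobS: Job[S], jobT: Job[T]) extends Job[(S, T)]
  case class FetchJob(url: String) extends Job[String] // Reads dropped for brevity

  // Applicative composition: both jobs stay visible in the structure.
  def map2[A, B, C](ja: Job[A], jb: Job[B])(f: (A, B) => C): Job[C] =
    MapJob((p: (A, B)) => f(p._1, p._2), PairJob(ja, jb))

  def main(args: Array[String]): Unit = {
    val j = map2(FetchJob("/user/phadej"), FetchJob("/user/futurice"))(_ + _)
    // Both FetchJobs are visible without running anything:
    println(j match {
      case MapJob(_, PairJob(a, b)) => s"$a and $b"
      case _                        => "opaque"
    })
  }
}
```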

At this point we can redo the executor — totally change its execution strategy. Make it first traverse the job structure, picking up all visible FetchJobs. Then we bundle the fetches together into a single Redis MGET command. After that, we replace the FetchJobs with their results wrapped in PureJobs, simplify the structure of the job, and loop until only a single PureJob remains. We call this executor OnionExecutor, because it peels the structure away, making the job smaller and smaller at every iteration.
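The first phase of such an executor — collecting every fetch that is visible without forcing a BlockedJob continuation — can be sketched like this (simplified constructors, Reads dropped):

```scala
object Onion {
  sealed trait Job[+T]
  case class PureJob[T](value: T) extends Job[T]
  case class BlockedJob[S, T](f: S => Job[T], job: Job[S]) extends Job[T]
  case class MapJob[S, T](f: S => T, job: Job[S]) extends Job[T]
  case class PairJob[S, T](jobS: Job[S], jobT: Job[T]) extends Job[(S, T)]
  case class FetchJob(url: String) extends Job[String] // Reads dropped for brevity

  // Phase one of the onion: gather all fetches reachable without running any
  // continuation. A BlockedJob's function is opaque, so we only look under it.
  def visibleFetches(job: Job[_]): List[String] =
    job match {
      case FetchJob(url)    => List(url)
      case MapJob(_, j)     => visibleFetches(j)
      case PairJob(js, jt)  => visibleFetches(js) ++ visibleFetches(jt)
      case BlockedJob(_, j) => visibleFetches(j)
      case PureJob(_)       => Nil
    }

  def main(args: Array[String]): Unit = {
    // The applicative two-tweets job from above, built by hand:
    val job: Job[String] =
      MapJob((p: (String, String)) => p._1 + p._2,
        PairJob(MapJob(identity[String], FetchJob("/user/futurice")),
                MapJob(identity[String], FetchJob("/user/phadej"))))
    println(visibleFetches(job))
  }
}
```

The collected urls are exactly what gets bundled into the single MGET of the next phase.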

Let's work through the two tweets job step by step:

  • we see two FetchJobs: /user/futurice and /user/phadej
  • next we bundle them together into single MGET, which hopefully returns us two values: futurice and phadej
  • after that we substitute the values into the job we started with:
    res0: Job[Html] =
     MapJob(<function1>,PairJob(
       MapJob(<function1>,PureJob(<Author futurice>)),
       MapJob(<function1>,PureJob(<Author phadej>))))
  • in this job we don't have anything blocking our computation, so we can reduce it into the final Html!

There weren't any BlockedJobs in the example job above. That means that we could render these two tweets using a single iteration of the executor loop, a single MGET — as we did!

In the real version of Job we also have FailedJob, because sometimes things go wrong, but that's all.

Conclusion

Above I showed how one can construct a powerful computation abstraction using free structures. Essentially we made an embedded language (which in turn embeds the host language to do pure computations!) for asynchronous resource retrieval. We have more control over the execution than when using Futures directly, and more importantly, the execution strategy we use is referentially transparent. We can also add more primitives if we need them.

Author

  • Oleg Grenrus
    Software Developer