Towards reactive animation in Elm

Dec 13, 2015   #FRP  #Animation  #Functional  #Reactive  #ReactJS  #Elm 

ReactJS is gradually moving towards a purely functional style of specifying application views. In recent versions, you’re encouraged to use a pure functional syntax that maps a view’s props to the appropriate virtual-dom components. Combined with Flux, this approach is getting closer to what is already established in the Elm world, where it is referred to as The Elm Architecture. This is particularly visible in the increasing emphasis on working exclusively with props and avoiding state when rendering React components, and piping all inputs and events into the “dispatcher”.

TL;DR: elm-anima is a work-in-progress library built on the concepts described in this post. Check out the examples posted there.

On the elm-discuss group, how to approach animations has been discussed in quite some depth. However, there is relatively little consensus on how best to do them, and I wasn’t happy with the level of abstraction we need to work at to get even an animation as simple as this one (see its source code).

Given how ReactJS is moving in the Elm direction, if we manage to arrive at a good approach for animations in Elm, my guess is we’ll be able to transport the concepts into ReactJS as well to good effect. A good reason to do it this way is that Elm is a statically type-checked functional language, so sloppy abstract thinking is far more likely to get caught at compile time rather than, even if you’re careful, at ship time. Elm’s virtual-dom based architecture is also fast enough to make this worth attempting.

Goals

Just as the Elm architecture provides a broad framework within which various libraries can exist to help build an application, I felt a similar approach could be useful when it came to animations.

Conal Elliott’s Fran was one of my original inspirations. However, its stated goals are a bit different, in that the intention is to let folks build up animations through composition, where an “animation” bundles the movement behaviour together with whatever is being visually presented. That has its advantages, but it doesn’t have the clean separation of concerns that the Elm architecture demands.

If you think about it, animations can be treated simply as functions from time to values. Fran itself treats them this way, as part of its approach that combines animations and their presentation using “behaviours”. 1 In the context of Elm too, we currently have a few libraries that take different approaches to the problem. 2
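As a trivial illustration of that view, here is an animation written as nothing more than a pure function of time. The name fade is made up for this post - it ramps from 0 to 1 over one second and then holds.

import Time exposing (Time, second)

-- No input, no state; just time in, value out.
fade : Time -> Float
fade t = min 1 (t / second)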

Animations become reactive when you can change their course based on user input. In some contexts, such animations are also referred to as retargetable and additive animations. The general idea is to not pre-commit to anything fixed happening over an extended period of time.

We also need to consider physics-based animations, where a set of forces is specified not in terms of instantaneous values, but as functions from the system configuration to instantaneous values. You don’t want to specify a spring effect by calculating a force vector 60 times a second. Instead, what you want to do is declare a “spring”. The user’s job stops with specifying such a configuration. The physics system should compute the behaviour based on the resultant of all such forces acting on each particle. Note that “particle” here is a generalized concept and doesn’t refer to a “point particle”. The phase position is expressed in what are called generalized coordinates in the classical Hamiltonian formulation, and isn’t restricted to physical N-dimensional space.
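To make “declare a spring” concrete, here is a hypothetical sketch - not elm-anima’s actual API - specialised to a one-dimensional Float space. The names ForceDecl and forceAt are invented for this post; the point is that forces are plain data, and the physics system derives the instantaneous force value from the current configuration on every step.

-- Hypothetical force declarations; the user describes these once and never
-- computes a force vector per frame.
type ForceDecl
    = Spring { restPosition : Float, stiffness : Float, damping : Float }
    | Constant Float

-- The physics system (not the user) evaluates a declaration against the
-- current configuration on every time step.
forceAt : ForceDecl -> { position : Float, momentum : Float } -> Float
forceAt decl phase =
    case decl of
        Spring spring ->
            -- Hooke's law, with damping proportional to momentum.
            negate (spring.stiffness * (phase.position - spring.restPosition))
                - spring.damping * phase.momentum

        Constant f ->
            f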

Supporting physics-based animations should not make simple fixed animations hard to express. For some reason this has looked like a dichotomy, and libraries so far have leaned one way or the other. Thinking in terms of physics offers a way into understanding animations that generalizes to conventional simple animations as well.

Processes over time functions

Physics-based animations need to compute the course of a particle in response to a set of forces acting on it. If you know the force values over time beforehand, the animation is deterministic. On the other hand, forces can be made to change based on user input, resulting in interactive physics.

So, at least in the context of physics, it is better to look at a “particle” as a process that transforms “input forces” into “output particle state” continuously over time. In Elm, this concept is easily expressed using the Automaton library.3

type alias Particle space = Automaton (TimeStep, List (Force space)) (TimeStep, PhasePos space)
type alias PhasePos space = {position : space, momentum : space}
type alias TimeStep = Float

In words, we’re modeling such a particle as a process that, over incremental time intervals, uses the current set of forces on it to update its position in a “phase space”. The phase space is simply the position of the particle in conjunction with its momentum. I’ll give the particle a force and a small time interval over which I know the force is acting, and I expect the particle to update its position and momentum according to its intrinsic properties.
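Here is a minimal sketch of such a particle, specialised to a one-dimensional Float space and integrated with a naive Euler step. It assumes - purely for this sketch - that a Force is just a value in the same space, and it uses Automaton.state (an initial output plus a step function) from the Automaton library. A real implementation would integrate more carefully.

import Automaton exposing (Automaton)

-- Assumed only for this sketch: a force is a vector in the same space.
type alias Force space = space

particle1D : Float -> PhasePos Float -> Particle Float
particle1D mass start =
    -- Automaton.state : b -> (a -> b -> b) -> Automaton a b
    Automaton.state (0, start) <|
        \(dt, forces) (_, phase) ->
            let momentum = phase.momentum + List.sum forces * dt
                position = phase.position + (momentum / mass) * dt
            in  (dt, { position = position, momentum = momentum })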

An individual particle is quite simple in this view. For multiple particles, we can model the relationships between them through the forces we send to each one, and have the animation process manage how each one’s phase position gets updated in response.

This model of a particle suggests that we model the general case of reactive animations like this -

type alias Animation input output = Automaton (TimeStep, input) (TimeStep, output)

with particles being specialized as -

type alias Particle space = Animation (List (Force space)) (PhasePos space)

The interesting thing here is that all animations are now guaranteed the same degree of reactivity that systems of particles have in a physics-based model. A specific animation, however, may completely ignore its input and produce a time-varying output according to any law it chooses.
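For instance, the degenerate animation that ignores its input entirely and always yields the same value is just a stateless automaton. The name constant is made up here; it is built with Automaton.pure from the Automaton library.

-- The time step is passed through untouched; the actual input is discarded.
constant : output -> Animation input output
constant x =
    Automaton.pure (\(dt, _) -> (dt, x))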

The Elm application as an Automaton

Note: This section is a re-presentation of ideas from my talk on Functional Thinking for Fun and Profit in a more Elm-friendly manner.

The main property of an Automaton is that every time something arrives on its input, it must generate an output. That is, there is a one-to-one correspondence between inputs arriving and outputs being generated. This simplification makes it easy to determine when parts of composed automatons should run.

Stepping back, it is quite simple to model an Elm application like this -

type alias Application input = Automaton input Html

where the Html is wired to the appropriate Address locations that feed into input.

This is not exactly the Elm architecture, but it becomes the Elm architecture if we split this automaton into two processes - one that computes and maintains a model, and the other that computes a view from that model.

type alias Application input model = 
    { modeller : Automaton input model
    , viewer   : Automaton model Html
    }
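
A minimal, contrived instance of this shape - with names invented purely for illustration - is a counter whose model is an Int and whose view just prints it.

import Automaton exposing (Automaton)
import Html exposing (Html)

type Input = Increment | Decrement

counterApp : Application Input Int
counterApp =
    { modeller =
        Automaton.state 0 <|
            \input count ->
                case input of
                    Increment -> count + 1
                    Decrement -> count - 1
    , viewer = Automaton.pure (\count -> Html.text (toString count))
    }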

How do we fit animations within this architecture without polluting the model with the gory details of state required to manage animations?

Conway’s law and animations

Conway’s law states that organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.

In the world of high-end animation, the task of producing an animation is seldom done by one person. We can roughly divide it into two tasks/roles/capabilities - a director and an animator. While the director role concerns itself only with configurations and their sequence, the animator is responsible for realizing these configurations as time evolves. The third bit, of course, is the actual job of presenting the visualization based on the details provided by the animator.

Applying Conway’s law in reverse, one possible factorization of the Elm application therefore is -

type alias Application input model direction viewstate =
    { modeller : Automaton input model
    , director : Automaton (input, model) direction
    , animator : Animation direction viewstate
    , viewer   : Automaton (model, viewstate) Html
    }

We introduce two new structures, direction and viewstate, that separate out visual concerns from purely application concerns, which we place within model. While this seems like a complication, it provides a structure within which to think about an application’s dynamic behaviour. 4

In the simplest case where no dynamism is required, the animator can simply be an identity transformer with the direction and viewstate types being the same.
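In code, such an animator is a one-liner (staticAnimator is just a name for this post):

-- Whatever the director says is passed straight through as the view state.
staticAnimator : Animation direction direction
staticAnimator = Automaton.pure identity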

When dynamism is required, the animator computes an instantaneous viewstate based on the current direction, and is prepared to change its course of action if the direction changes in response to user input. The input to the animator is expected to be a “sample and hold” value of the direction provided for each time step.
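As a sketch of what that looks like - again invented for this post, not elm-anima’s API - here is an animator whose direction is a target Float and whose viewstate exponentially approaches whatever target is currently directed. If the direction changes mid-flight, the animation simply re-aims; nothing about its future course is pre-committed.

followTarget : Float -> Float -> Animation Float Float
followTarget speed start =
    Automaton.state (0, start) <|
        \(dt, target) (_, current) ->
            -- Move a fraction of the remaining distance each step; the
            -- fraction is clamped so a large dt cannot overshoot the target.
            (dt, current + (target - current) * min 1 (speed * dt))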

With this factorization, it is now possible to limit the scope of all animation-related libraries and code to the animator part of the application. The rest of the application can focus on functionality (modeller) and UX (director).

Introducing elm-anima

The elm-anima library is a work-in-progress realization of this idea. It provides simple time-based animations in addition to physics-based animations, which are its main focus. It also includes some ideas for combinators that lift value animations into record animations, apply multiple value animations in parallel, and so on. None of this is in its final form, though, and I’m still in the exploratory phase.

Going forward, I hope to expand the set of builtins to provide a rich toolkit for animations. Additional higher-level combinators that help compose parts of the application together and capture common combination patterns are also worth exploring; the Focus library appears useful in this context.

The use of Automaton in this architecture also permits the introduction of caching mechanisms without user visibility, which I expect can further improve the performance of the Elm system beyond what it is capable of today. For example, in the current architecture, the view gets recomputed (though not redisplayed) even if the model doesn’t change when an input arrives. If the viewer is modeled as an Automaton, a cache could be introduced behind the scenes, even if at the user level it is expressed as a pure function. One idea I’m exploring is whether animation outputs are worth labeling with Stable and Transient tags, which can inform such a caching mechanism. It wouldn’t be worth caching transient states, and we can avoid wasted computation in long-lasting stable states.
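One possible shape for such tags, purely as a sketch of the idea -

-- Hypothetical: animators label every output so that a cache can tell whether
-- recomputing and storing the rendered view is worthwhile.
type Tagged a
    = Stable a      -- the animation has settled; the view can be cached
    | Transient a   -- still in motion; caching would be wasted effort

type alias TaggedAnimation input output = Animation input (Tagged output)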

I expect to use Elm in the animation-rich web applications I build from here on. This approach arose from my need for a good conceptual foundation to build on. I also expect this work to be portable back into ReactJS, so that the wider community of JavaScript programmers can benefit from it as well.

Comments are most welcome.

Notes


  1. That’s a simplification and not strictly true. While “behaviours” in Fran provide an explicit model of time, the function-composition approach tends to stay simple mostly when animations are treated as functions from time to value. ↩︎

  2. Here is an interesting discussion on what animations should be, in the context of Elm. ↩︎

  3. If you’re wondering why we need Automaton to map an input to an output when we already have functions: a function specifies a relationship between input and output with implied computation, whereas an automaton is a process. Given an input, a pure function is expected to associate it with the same output value irrespective of when it is called on to provide the association. An Automaton, on the other hand, stands for a process that is expected to produce an output value whenever an input arrives. It captures the fact that a computation is expected to be done whenever the input changes. If the same input arrives again, a different output value may be produced, so an automaton may have running internal state as well. ↩︎

  4. I’m ignoring effects for simplicity for the moment. It is possible to model the effects part of an application orthogonally to this system using a separate function like this -

    effects : Signal (input, model, direction) -> Signal (Task x ())
    

    where some of the tasks could also feed new input back into the application. ↩︎