
Interactive visual simulations with HTML5 and Javascript

Author
Andrija Perušić
Published on
Apr 24, 2020.
Guidelines and challenges of making real time visual physics simulations as a UI component for a web application and how our team at Pinkdroids solved it.

Introduction

Our team at Pinkdroids recently got a request from a client to create visual interactive simulations of several physics experiments as part of an educational web platform. The simulations were to be delivered as modular Vue.js components, which would later be integrated into a larger web application.

At first it seemed like a daunting task. Our first thought was to use WebGL with one of the popular rendering libraries like Pixi.js, Three.js or Babylon.js. We also initially thought it would be a good idea to use a Javascript physics engine like Cannon.js or Oimo.js. Those initial ideas had some downsides.

On top of having to learn new technologies and libraries, one of the requirements was to keep the production build size as small as possible, which left very little room for additional libraries, especially ones on the larger side, like physics, rendering or game engines. There is also a conceptual problem with using a physics engine for these simulations. Physics engines apply vector forces to rigid bodies, which can introduce numerical errors and inconsistencies into an otherwise ideal mathematical simulation. One library that seemed interesting and a good fit was Konva, a 2D canvas rendering library, but it was also full of features we didn't need.

Considering all of the above, we decided to build our components using the plain HTML Canvas 2D context API, with a little help from the Chart.js library for displaying charts. In this article I will try to explain our generalized solution and the different challenges we went through.

HTML Canvas

Let’s start from a simple Engine class and build from there:

```javascript
class Engine {
  constructor (canvas) {
    this.canvas = canvas
    this.ctx = canvas.getContext('2d')
  }

  draw = () => {
    this.ctx.clearRect(0, 0, this.canvas.width, this.canvas.height)
    ...
  }

  render = () => {
    this.draw()
    window.requestAnimationFrame(this.render)
  }
}
```

The gist of this code is that we get a reference to the canvas element, create a drawing context, and once we call the render method it starts executing in a loop. The draw method erases everything on the canvas and draws it anew; inside it, you can draw anything you like using the CanvasRenderingContext2D API. Simple enough? The thing is, it is a complete waste of resources.

The main and crucial thing you have to worry about all the time when using canvas with a 2D context is performance. We always come back to the basic fact that Javascript is a single threaded language: it has one call stack and one memory heap. When you draw on a canvas with a 2D context, all of the rendering and pixel shading operations are also done in this single precious thread. If you only need to draw something once and use it as an image, this is not a problem, but if your rendering pipeline redraws the content on every animation step, even the most advanced CPU won't be able to take the load. If you are not careful, your web app will very quickly become completely unresponsive and practically unusable. A WebGL context resolves this problem by offloading all of the rendering pipeline operations from the main thread to the Graphics Processing Unit. The downside is that it brings complexity to a whole other level, because you have to write custom shader scripts in the GLSL language. There are frameworks that hide this layer of complexity from you, but our use cases were simple enough to use the plain 2D context without bundling additional libraries.

So how can we make the most of the 2D context to get a rich and smooth experience without blocking our UI?

Multiple canvases to the rescue!

One of the most effective optimisations is to have multiple canvases. If you have only one canvas element, on every redraw your whole scene has to be redrawn, including every single object. This can be pretty expensive, and it can be avoided by separating your scene into layers. You need to figure out how to group the objects in your scene into layers and how those layers will be stacked on top of each other. In my case, most of the time, I divide the scene into a static background layer with a dynamic animated layer on top. Everything that is drawn once and never moves again goes into the background, and all the animated objects go into the dynamic layer.

The possibilities are numerous: you can have as many layers as you want. Once the scene is divided into layers, each layer is represented by a canvas element, and the canvases are absolutely positioned on top of each other. On every draw call you can selectively choose which part of the scene to re-render, which greatly reduces the amount of work that has to be done.

This is one way to implement multiple canvas support in our starting example:

```javascript
...
layers = {}

constructor (container, config) {
  this.width = config.width
  this.height = config.height
  container.style.position = 'relative'
  config.layers.forEach((layer, index) => {
    const canvas = document.createElement('canvas')
    canvas.width = config.width
    canvas.height = config.height
    canvas.style.width = '100%'
    canvas.style.position = 'absolute'
    canvas.style.zIndex = index
    container.appendChild(canvas)
    this.layers[layer] = { canvas, ctx: canvas.getContext('2d') }
    if (index === config.layers.length - 1) {
      this.eventCanvas = canvas
    }
  })
  this.container = container
  this.container.style.height = `${this.eventCanvas.clientHeight}px`
}
...
```

Now it is possible to have as many canvases as needed. We no longer pass a canvas element to the constructor; the canvases are created dynamically. Instead, we pass a parent/container node, which can be a simple div element, and a configuration object containing the canvas resolution and a list of layer names. Also notice that we chose the top layer canvas to be the one that receives mouse events, should the need arise. What else can be done to improve performance?
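To actually benefit from the layering, the draw call has to skip layers that haven't changed. Here is a minimal sketch of one way to do that with a dirty-flag scheme; the LayeredRenderer name and the invalidate method are my own illustration, not part of the original Engine:

```javascript
// Tracks which layers need redrawing so a draw call can skip the clean ones.
class LayeredRenderer {
  constructor (layers) {
    this.layers = layers          // { name: { ctx, draw } }
    this.dirty = new Set()
  }

  // Mark a layer as needing a redraw on the next frame.
  invalidate (name) {
    this.dirty.add(name)
  }

  // Redraw only the layers invalidated since the last call.
  draw () {
    for (const name of this.dirty) {
      const { ctx, draw } = this.layers[name]
      draw(ctx)
    }
    this.dirty.clear()
  }
}
```

With this scheme the static background layer would be invalidated once at startup, while the dynamic layer is invalidated on every animation step.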

60fps is so overrated.

When trying to implement animation, you might be tempted to use the setInterval function to schedule redraws, but the Window.requestAnimationFrame() method we used in the starting example should almost always be preferred. setInterval fires its callback on a fixed schedule regardless of whether the browser is ready to paint, while requestAnimationFrame runs the callback right before the next browser repaint, which makes for smoother and more efficient animation. Another advantage is that in most browsers the callbacks are completely paused when the tab is in the background, which reduces battery drain.

Even though it is clear we should use requestAnimationFrame(), there is still one tiny problem.

If there are no system performance issues, the browser repaint will always aim for 60 FPS, which means our requestAnimationFrame callback will be executed at or slightly below 60 FPS and our layers will be redrawn roughly every 16.7 milliseconds. Again, if our scene becomes more complex we will run into performance issues.

If you have ever played games, you know that in the gaming community 60 FPS is considered a must for the ideal gaming experience. But it turns out that for our simple physics simulations considerably less is more than enough while still retaining smoothness. In most of my implementations I used 20 FPS. So how can we achieve this? With a simple if statement:

```javascript
...
constructor (container, config) {
  ...
  this.frameRateInterval = Math.ceil(1000 / config.fps)
  this.lastFrameTime = performance.now()
}
...
render = time => {
  const currentTime = time || performance.now()
  const deltaT = currentTime - this.lastFrameTime
  if (deltaT >= this.frameRateInterval) {
    this.lastFrameTime = currentTime - (deltaT % this.frameRateInterval)
    this.draw()
  }
  window.requestAnimationFrame(this.render)
}
```

Now the draw method will only be executed after frameRateInterval has passed. Notice that we added an fps field to the config object.

Additional improvements

We have now added the most effective performance improvements. Here are some other tips which I also found to be important.

  • Whenever possible, try to use simpler shapes. This sounds pretty vague, so let me give you an example I encountered. I had to create a swarm of tiny electrons moving and colliding randomly. My intuitive idea was that every electron should be a tiny black circle, but as I increased the number of spawned electrons the app started to lag, and the number of electrons was still not enough to look good. Then I realised that these electrons could just be tiny squares. They were so small that the difference almost passes unnoticed. With this small change I was able to double the number of spawned electrons without losing any performance.

  • Avoid the use of shadows as much as possible, especially the shadowBlur property, which is very costly.

  • Another thing I found to improve performance is to reduce the number of mousemove events that are taken into account. Some of the simulations we developed required drag interactivity, implemented with a combination of mousedown, mousemove and mouseup events on the event canvas. The problem is that if your FPS is set to a low value, in our case 20, multiple mousemove events will often fire between two frames, and the callbacks often need to do some linear algebra calculations to move objects around the scene. This work is unnecessary because the change isn't even shown before the next frame. One way to deal with this is simply to ignore an event if the required number of milliseconds hasn't passed since the last one.
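That last tip can be sketched as a small time-based throttle. This is my own illustration, not code from the project; the handler name and interval are assumptions:

```javascript
// Wrap an event handler so events arriving less than minInterval ms apart
// are ignored. The now parameter is injectable for testing.
function throttleByTime (handler, minInterval, now = () => performance.now()) {
  let lastTime = -Infinity
  return event => {
    const currentTime = now()
    if (currentTime - lastTime >= minInterval) {
      lastTime = currentTime
      handler(event)
    }
  }
}

// At 20 FPS a new frame appears every 50 ms, so one mousemove per 50 ms is enough:
// eventCanvas.addEventListener('mousemove', throttleByTime(onDrag, 50))
```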

That’s all about performance! You can check the MDN documentation for more: https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API/Tutorial/Optimizing_canvas

Physics

These simulations require some physics calculations to be performed. Take, for example, a mathematical pendulum. To simulate it you need to calculate its position in every frame, and for that you need to know its velocity and acceleration; otherwise it is not a real simulation. There are many more examples like this that require you to calculate physical values in real time and update the objects in the scene according to the calculated values.

The good news is we already have most of the things we need. We have a render function where we calculate the time difference since the last render call (delta T, in physics terms). Now we only need to use that information to recalculate the relevant physical parameters before every render, update the scene-state values that depend on those parameters (in our pendulum example this would be the canvas x and y coordinates of the pendulum object), and only after all that render our scene. This is pretty much how all physics engines work, but ours is a much simpler implementation.

To make the code more modular, let's create a Physics class.

```javascript
class Physics {
  state = { ... }

  onUpdate = state => console.log('You should do something on update')

  update = dT => {
    ...
    this.onUpdate(this.state)
  }
}
```

We can define our own onUpdate callback, which will receive the latest state as a parameter, and pass it to the Physics class on creation. After that, we call update(deltaT) in the render method before the draw method. That's all there is to it. The situation can be a little different when your physics parameters don't depend on time, but the principle remains the same: update the physics parameters and then update your scene. Instead of doing this in the render method, it can be done in a mousemove event callback.
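As a concrete illustration, here is a sketch of what a pendulum version of such a Physics class might look like. The state fields and the semi-implicit Euler integration are my own assumptions for the example, not code from the project:

```javascript
// A mathematical pendulum integrated with semi-implicit Euler.
// theta is the angle from vertical in radians, omega the angular velocity.
class PendulumPhysics {
  constructor (length, theta) {
    this.g = 9.81
    this.length = length
    this.state = { theta, omega: 0, x: 0, y: 0 }
    this.onUpdate = () => {}
  }

  // dT is the elapsed time in milliseconds, as produced by the render loop.
  update = dT => {
    const dt = dT / 1000
    // Angular acceleration of a pendulum: -(g / L) * sin(theta)
    const alpha = -(this.g / this.length) * Math.sin(this.state.theta)
    this.state.omega += alpha * dt
    this.state.theta += this.state.omega * dt
    // Scene-state values derived from the physics parameters:
    this.state.x = this.length * Math.sin(this.state.theta)
    this.state.y = this.length * Math.cos(this.state.theta)
    this.onUpdate(this.state)
  }
}
```

The render loop would then call update(deltaT) right before draw(), and the draw code would read state.x and state.y, scaled to canvas pixels.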

Let’s see how we added some real time graphs to our simulations.

Graphing

What is a good physics simulation without at least one graph?

Understandably, one of the requirements for some of the simulations was to display a dynamic graph showing physical values in real-time sync with the animation. Drawing a line graph, or for that matter any kind of graph, is a cumbersome task with plain canvas, so after a bit of research we decided to go with Chart.js. It turned out that this library had everything we needed, and it was quite easy to implement the required features. Responsiveness and styling come out of the box, and configuration is pretty straightforward.

So how would we implement a dynamic graph with Chart.js? Let’s first see a very simple line chart creation example.

```javascript
const myChart = new Chart(ctx, {
  type: 'line',
  data: {
    datasets: [{
      data: [],
    }]
  },
  options: {...}
})
```

In the dataset's data field we have the list of line chart values. To add a value to the chart and show it, you only need to do this:

```javascript
const value = ...
myChart.data.datasets[0].data.push(value)
myChart.update()
```

It couldn't be easier! This lets us call this piece of code in our render method on each frame, or better yet in our Physics.onUpdate callback, so the graph is updated in real time with the simulation.
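One practical detail: if the simulation runs for a while, the dataset grows without bound and the chart redraws get slower. A small helper like the one below (my own addition, not from the original code) keeps the chart to a fixed window of recent values; it only relies on the data array and update() call shown above:

```javascript
// Push a new value and drop the oldest ones once maxPoints is exceeded,
// so the chart shows a sliding window of recent values.
function pushValue (chart, value, maxPoints) {
  const data = chart.data.datasets[0].data
  data.push(value)
  while (data.length > maxPoints) {
    data.shift()
  }
  chart.update()
}
```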

Accessibility

As you may assume, making a visual simulation completely accessible is not a trivial task, but it can be accomplished to some degree. I found that all the important information needed to make a canvas accessible is summarized in these two links:

http://pauljadam.com/demos/canvas.html

https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API/Tutorial/Hit_regions_and_accessibility

I would say that providing fallback content is becoming obsolete, as every mainstream browser now supports canvas.

Regarding ARIA rules, in most cases I would put an ‘img’ or a ‘button’ role on the canvas element, depending on whether it is interactive or not, and dynamically change the label to explain the current state of the simulation and the possible actions.

If your canvas is interactive and has mouse interactions, you should always add keyboard events to the canvas to enable the same interactivity with the keyboard.
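As a sketch of that idea, arrow keys can mirror a mouse drag. The step size and the moveBy function are hypothetical; note that the canvas also needs a tabindex attribute before it can receive keyboard focus:

```javascript
// Translate arrow-key presses into the same position deltas a drag produces.
// Returns a {dx, dy} delta for handled keys, or null otherwise.
function keyToDelta (key, step = 5) {
  switch (key) {
    case 'ArrowLeft': return { dx: -step, dy: 0 }
    case 'ArrowRight': return { dx: step, dy: 0 }
    case 'ArrowUp': return { dx: 0, dy: -step }
    case 'ArrowDown': return { dx: 0, dy: step }
    default: return null
  }
}

// eventCanvas.setAttribute('tabindex', '0')   // make the canvas focusable
// eventCanvas.addEventListener('keydown', event => {
//   const delta = keyToDelta(event.key)
//   if (delta) moveBy(delta)   // same code path the mousemove handler uses
// })
```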

There’s not much else to say about accessibility, except to follow the standard rules described in the two links above as best you can.

Conclusion

We have covered the main architectural parts of our solution and described some of the challenges we encountered during implementation. I hope you got some new knowledge out of this and now have a general idea of how interactive simulations like the ones in our examples can be implemented.

There are many ways the examples above can be expanded or improved, for example by adding keyboard and mouse events to the canvas or making the canvases responsive, which often means changing their resolution dynamically. I encourage you to try it yourself and experiment.