How We Used WebAssembly To Speed Up Our Web App By 20X (Case Study)


In this article, we explore how to speed up web applications by replacing slow JavaScript calculations with compiled WebAssembly.

If you haven’t heard, here’s the TL;DR: WebAssembly is a new language that runs in the browser alongside JavaScript. Yes, that’s right. JavaScript is no longer the only language that runs in the browser!

But beyond just being “not JavaScript”, its distinguishing factor is that you can compile code from languages such as C/C++/Rust (and more!) to WebAssembly and run it in the browser. Because WebAssembly is statically typed, uses a linear memory, and is stored in a compact binary format, it is also very fast, and could eventually allow us to run code at “near-native” speeds, i.e. at speeds close to what you’d get by running the binary on the command line. The ability to leverage existing tools and libraries for use in the browser, and the associated potential for speedup, are two reasons that make WebAssembly so compelling for the web.

So far, WebAssembly has been used for all sorts of applications, ranging from gaming (e.g. Doom 3), to porting desktop applications to the web (e.g. AutoCAD and Figma). It is even used outside the browser, for example as an efficient and flexible language for serverless computing.

This article is a case study on using WebAssembly to speed up a data analysis web tool. To that end, we’ll take an existing tool written in C that performs the same computations, compile it to WebAssembly, and use it to replace the slow JavaScript calculations.

Note: This article delves into some advanced topics such as compiling C code, but don’t worry if you don’t have experience with that; you will still be able to follow along and get a sense for what is possible with WebAssembly.


The web app we’ll work with is an interactive web tool that gives scientists a quick preview of the quality of their DNA sequencing data; sequencing is the process by which we read the “letters” (i.e. nucleotides) in a DNA sample.

Here’s a screenshot of the application in action:

Interactive plots showing the user metrics for assessing the quality of their data: a screenshot of the app in action. (Large preview)

We won’t go into the details of the calculations, but in a nutshell, the plots above give scientists a sense for how well the sequencing went and are used to identify data quality issues at a glance.

Although there are dozens of command line tools available to generate such quality control reports, the aim of the app is to provide an interactive preview of data quality without leaving the browser. This is especially useful for scientists who are not comfortable with the command line.

The input to the app is a plain-text file that is output by the sequencing instrument and contains a list of DNA sequences and a quality score for each nucleotide in the DNA sequences. The format of that file is known as “FASTQ”, hence the name of the app.

If you’re curious about the FASTQ format (not necessary to understand this article), check out the Wikipedia page for FASTQ. (Warning: The FASTQ file format is known in the field to induce facepalms.)

The JavaScript Implementation

In the original version of the app, the user starts by selecting a FASTQ file from their computer. With the File object, the app reads a small chunk of data starting at a random byte position (using the FileReader API). In that chunk of data, we use JavaScript to perform basic string manipulations and calculate relevant metrics. One such metric helps us track how many A’s, C’s, G’s and T’s we typically see at each position along a DNA fragment.
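
As a rough illustration of what such a metric looks like, here is a minimal sketch (not the app’s actual code; the function name is hypothetical) that counts how many of each base appear at each position across a chunk’s reads:

```javascript
// Count how many A's, C's, G's and T's appear at each position
// across a set of DNA sequences from one chunk of the file.
function countBasesPerPosition(sequences) {
  const counts = []; // counts[i] = { A, C, G, T } tallies at position i
  for (const seq of sequences) {
    for (let i = 0; i < seq.length; i++) {
      if (counts[i] === undefined) counts[i] = { A: 0, C: 0, G: 0, T: 0 };
      const base = seq[i];
      if (base in counts[i]) counts[i][base]++;
    }
  }
  return counts;
}
```

The real implementation tracks several such metrics in one pass, but the shape of the work is the same: simple per-character string processing, repeated millions of times, which is exactly the kind of hot loop WebAssembly can accelerate.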

Once the metrics are calculated for that chunk of data, we plot the results interactively with Plotly.js, and move on to the next chunk in the file. The reason for processing the file in small chunks is simply to improve the user experience: processing the whole file at once would take too long, because FASTQ files are generally in the hundreds of gigabytes. We found that a chunk size between 0.5 MB and 1 MB made the application feel more seamless and returned information to the user more quickly, but this number will vary depending on the details of your application and on how heavy the computations are.
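
The sampling step can be sketched as follows (hypothetical helper name; the injectable `random` parameter is just for determinism). We pick a random byte offset, clamp the window to the end of the file, and then hand that range to `File.slice` + `FileReader`:

```javascript
// Pick a random ~chunkSize-byte window within a file of fileSize bytes.
function randomChunkRange(fileSize, chunkSize, random = Math.random) {
  // Choose a start offset that leaves room for a full chunk when possible
  const start = Math.floor(random() * Math.max(1, fileSize - chunkSize));
  // Clamp the end of the window to the end of the file
  const end = Math.min(fileSize, start + chunkSize);
  return { start, end };
}

// In the browser, the range would then be read with the File API:
//   const { start, end } = randomChunkRange(file.size, 1e6);
//   reader.readAsText(file.slice(start, end));
```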

The architecture of our original JavaScript implementation was fairly simple:

Randomly sample from the input file, calculate metrics using JavaScript, plot the results, and loop around: the architecture of the JavaScript implementation. (Large preview)

The box in red is where we do the string manipulations to generate the metrics. That box is the more compute-intensive part of the application, which naturally made it a good candidate for runtime optimization with WebAssembly.

The WebAssembly Implementation

To explore whether we could leverage WebAssembly to speed up our web app, we searched for an off-the-shelf tool that calculates QC metrics on FASTQ files. Specifically, we sought a tool written in C/C++/Rust so that it was amenable to porting to WebAssembly, and one that was already validated and trusted by the scientific community.

After some research, we decided to go with seqtk, a commonly-used, open-source tool written in C that can help us evaluate the quality of sequencing data (and is more generally used to manipulate those data files).

Before we compile to WebAssembly, let’s first consider how we would normally compile seqtk to binary to run it on the command line. According to the Makefile, this is the gcc incantation you need:

# Compile to binary
$ gcc seqtk.c \
    -o seqtk

However, to compile seqtk to WebAssembly, we can use the Emscripten toolchain, which provides drop-in replacements for existing build tools to make working in WebAssembly easier. If you don’t have Emscripten installed, you can download a docker image we prepared on Dockerhub that has the tools you’ll need (you can also install it from scratch, but that usually takes a while):

$ docker pull robertaboukhalil/emsdk:1.38.26
$ docker run -dt --name wasm-seqtk robertaboukhalil/emsdk:1.38.26

Inside the container, we can use the emcc compiler as a replacement for gcc:

# Compile to WebAssembly
$ emcc seqtk.c \
    -o seqtk.js \
    -s USE_ZLIB=1

As you can see, the differences between compiling to binary and compiling to WebAssembly are minimal:

  1. Instead of the output being the binary file seqtk, we ask Emscripten to generate a .wasm and a .js file that handles instantiation of our WebAssembly module
  2. To support the zlib library, we use the flag USE_ZLIB; zlib is so common that it’s already been ported to WebAssembly, and Emscripten will include it for us in our project
  3. We enable Emscripten’s virtual file system, which is a POSIX-like file system (source code here), except it runs in RAM inside the browser and disappears when you refresh the page (unless you save its state in the browser using IndexedDB, but that’s for another article).

Why a virtual file system? To answer that, let’s compare how we would call seqtk on the command line vs. using JavaScript to call the compiled WebAssembly module:

# On the command line
$ ./seqtk fqchk data.fastq

# In the browser console
> Module.callMain(["fqchk", "data.fastq"])

Having access to a virtual file system is powerful because it means we don’t need to rewrite seqtk to handle string inputs instead of file paths. We can mount a chunk of data as the file data.fastq on the virtual file system and simply call seqtk’s main() function on it.
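
Putting those two steps together might look like the sketch below. Here `Module` is the object created by Emscripten’s generated seqtk.js; `FS.writeFile` and `callMain` are standard Emscripten runtime methods, though how they are exposed depends on your Emscripten version and export settings, so treat this as a sketch rather than the app’s exact code:

```javascript
// Mount a chunk of FASTQ text on the virtual file system and analyze it.
function runFqchk(Module, chunkText) {
  // Write the chunk as a file in Emscripten's in-memory (MEMFS) file system
  Module.FS.writeFile("data.fastq", chunkText);
  // Invoke seqtk's main() as if we had typed: ./seqtk fqchk data.fastq
  Module.callMain(["fqchk", "data.fastq"]);
}
```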

With seqtk compiled to WebAssembly, here’s the new architecture:

Randomly sample from the input file, calculate metrics within a WebWorker using WebAssembly, plot the results, and loop around: the architecture of the WebAssembly + WebWorkers implementation. (Large preview)

As shown in the diagram, instead of running the calculations in the browser’s main thread, we make use of WebWorkers, which allow us to run our calculations in a background thread and avoid negatively affecting the responsiveness of the browser. Specifically, the WebWorker controller launches the Worker and manages communication with the main thread. On the Worker’s side, an API executes the requests it receives.

We can then ask the Worker to run a seqtk command on the file we just mounted. When seqtk finishes running, the Worker sends the result back to the main thread via a Promise. Once it receives the message, the main thread uses the resulting output to update the charts. Just like the JavaScript version, we process the file in chunks and update the visualizations at each iteration.
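
The promise-based request/response pattern can be sketched like this (hypothetical names, not the app’s actual controller): each request carries an id, and the response with the matching id resolves the corresponding Promise on the main thread.

```javascript
// Wrap a Worker in a function that returns a Promise per request.
function makeWorkerClient(worker) {
  const pending = new Map(); // request id -> resolve function
  let nextId = 0;
  worker.onmessage = (event) => {
    const { id, result } = event.data;
    pending.get(id)(result); // resolve the Promise for this request
    pending.delete(id);
  };
  return (command, args) =>
    new Promise((resolve) => {
      const id = nextId++;
      pending.set(id, resolve);
      worker.postMessage({ id, command, args });
    });
}

// Usage on the main thread might look like:
//   const runSeqtk = makeWorkerClient(new Worker("worker.js"));
//   const output = await runSeqtk("fqchk", ["data.fastq"]);
```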

Performance Optimization

To evaluate whether using WebAssembly did any good, we compare the JavaScript and WebAssembly implementations using the metric of how many reads we can process per second. We ignore the time it takes to generate the interactive graphs, since both implementations use JavaScript for that purpose.
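
Measuring that metric amounts to a small timing harness along these lines (a sketch with hypothetical names; `processChunk` stands in for either implementation and is assumed to return the number of reads it handled):

```javascript
// Run an implementation over a set of chunks and report reads per second.
function readsPerSecond(processChunk, chunks) {
  const t0 = performance.now(); // available in browsers and in Node >= 16
  let reads = 0;
  for (const chunk of chunks) {
    reads += processChunk(chunk);
  }
  const seconds = (performance.now() - t0) / 1000;
  return reads / seconds;
}
```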

Out of the box, we already see a ~9X speedup:

Bar chart showing that we can process 9X more lines per second: using WebAssembly, we see a 9X speedup compared to our original JavaScript implementation. (Large preview)

This is already very good, considering that it was relatively easy to achieve (that is, once you know WebAssembly!).

Next, we noticed that although seqtk outputs a lot of generally useful QC metrics, many of these metrics are not actually used or graphed by our app. By removing some of the output for the metrics we didn’t need, we were able to see an even greater speedup of 13X:

Bar chart showing that we can process 13X more lines per second: removing unnecessary outputs gives us a further performance improvement. (Large preview)

This again is a great improvement given how easy it was to achieve: we literally commented out the printf statements that were not needed.

Lastly, there is one more improvement we looked into. So far, the way the app obtains the metrics of interest is by calling two different C functions, each of which calculates a different set of metrics. Specifically, one function returns information in the form of a histogram (i.e. a list of values that we bin into ranges), whereas the other function returns information as a function of DNA sequence position. Unfortunately, this means that the same chunk of file is read twice, which is unnecessary.

So we merged the code for the two functions into a single (albeit messy) function (without even having to brush up on my C!). Since the two outputs have different numbers of columns, we did some wrangling on the JavaScript side to disentangle the two. But it was worth it: doing so allowed us to achieve a >20X speedup!
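
That JavaScript-side wrangling can be sketched as follows. This is a hypothetical simplification, assuming the merged function prints both outputs as tab-separated rows and that the two row types are distinguishable by their column counts (the counts used here are made up for illustration):

```javascript
// Split the merged C function's combined output into its two metric sets,
// routing each row by how many tab-separated columns it has.
function splitMergedOutput(text, histCols = 2, posCols = 5) {
  const histogram = [];   // rows of binned values
  const perPosition = []; // rows of metrics per DNA sequence position
  for (const line of text.trim().split("\n")) {
    const cols = line.split("\t");
    if (cols.length === histCols) histogram.push(cols);
    else if (cols.length === posCols) perPosition.push(cols);
  }
  return { histogram, perPosition };
}
```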

Bar chart showing that we can process 21X more lines per second: finally, restructuring the code so that we only read through each file chunk once gives us a >20X performance improvement. (Large preview)

A Word Of Warning

Now would be a good time for a caveat. Don’t expect to always get a 20X speedup when you use WebAssembly. You might only get a 2X speedup or a 20% speedup. Or you may get a slowdown if you load very large files in memory or require a lot of communication between WebAssembly and JavaScript.


Conclusion

In short, we’ve seen that replacing slow JavaScript computations with calls to compiled WebAssembly can lead to significant speedups. Since the code needed for those computations already existed in C, we got the added benefit of reusing a trusted tool. As we also touched upon, WebAssembly won’t always be the right tool for the job (gasp!), so use it wisely.

