How We Used WebAssembly To Speed Up Our Web App By 20X (Case Study)


In this article, we explore how we can speed up web applications by replacing slow JavaScript calculations with compiled WebAssembly.

If you haven’t heard, here’s the TL;DR: WebAssembly is a new language that runs in the browser alongside JavaScript. Yes, that’s right. JavaScript is no longer the only language that runs in the browser!

But beyond just being “not JavaScript”, its distinguishing factor is that you can compile code from languages such as C/C++/Rust (and more!) to WebAssembly and run them in the browser. Because WebAssembly is statically typed, uses linear memory, and is stored in a compact binary format, it is also very fast, and could eventually allow us to run code at “near-native” speeds, i.e. at speeds close to what you’d get by running the binary on the command line. The ability to leverage existing tools and libraries for use in the browser, and the associated potential for speedup, are two reasons that make WebAssembly so compelling for the web.

So far, WebAssembly has been used for all kinds of applications, ranging from gaming (e.g. Doom 3), to porting desktop applications to the web (e.g. AutoCAD and Figma). It is even used outside the browser, for example as an efficient and flexible language for serverless computing.

This article is a case study on using WebAssembly to speed up a data analysis web tool. To that end, we’ll take an existing tool written in C that performs the same computations, compile it to WebAssembly, and use it to replace slow JavaScript calculations.

Note: This article delves into some advanced topics such as compiling C code, but don’t worry if you don’t have experience with that; you will still be able to follow along and get a sense for what is possible with WebAssembly.


The web app we’ll work with is an interactive tool that gives scientists a quick preview of the quality of their DNA sequencing data; sequencing is the process by which we read the “letters” (i.e. nucleotides) in a DNA sample.

Here’s a screenshot of the application in action:

Interactive plots showing the user metrics for assessing the quality of their data
A screenshot of the app in action (Large preview)

We won’t go into the details of the calculations, but in a nutshell, the plots above give scientists a sense for how well the sequencing went and are used to identify data quality issues at a glance.

Though there are dozens of command line tools available to generate such quality control reports, the goal of the app is to provide an interactive preview of data quality without leaving the browser. This is especially useful for scientists who are not comfortable with the command line.

The input to the app is a plain-text file that is output by the sequencing instrument and contains a list of DNA sequences and a quality score for each nucleotide in those sequences. The format of that file is known as “FASTQ”, hence the name of the app.

If you’re curious about the FASTQ format (not necessary to understand this article), check out the Wikipedia page for FASTQ. (Warning: The FASTQ file format is known in the field to induce facepalms.)

The JavaScript Implementation

In the original version of the app, the user starts by selecting a FASTQ file from their computer. With the File object, the app reads a small chunk of data starting at a random byte position (using the FileReader API). In that chunk of data, we use JavaScript to perform basic string manipulations and calculate relevant metrics. One such metric helps us track how many A’s, C’s, G’s and T’s we typically see at each position along a DNA fragment.

Once the metrics are calculated for that chunk of data, we plot the results interactively with Plotly.js, and move on to the next chunk in the file. The reason for processing the file in small chunks is simply to improve the user experience: processing the whole file at once would take too long, because FASTQ files are generally in the hundreds of gigabytes. We found that a chunk size between 0.5 MB and 1 MB would make the application more seamless and would return information to the user more quickly, but this number will vary depending on the details of your application and how heavy the computations are.
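As a sketch of the kind of per-chunk calculation described above (an illustrative reimplementation, not the app’s actual code), here is a pure function that tallies base composition by position:

```javascript
// Tally how often each nucleotide (A/C/G/T) appears at each position
// along the reads in a chunk of FASTQ text. Illustrative sketch only.
function baseCompositionByPosition(fastqChunk) {
  const counts = []; // counts[i] = { A, C, G, T } tallies at read position i
  const lines = fastqChunk.split("\n");
  // In FASTQ, each record spans 4 lines; the 2nd line is the DNA sequence.
  for (let i = 1; i < lines.length; i += 4) {
    const seq = lines[i];
    for (let pos = 0; pos < seq.length; pos++) {
      if (!counts[pos]) counts[pos] = { A: 0, C: 0, G: 0, T: 0 };
      const base = seq[pos];
      if (base in counts[pos]) counts[pos][base]++; // ignore N's etc.
    }
  }
  return counts;
}
```

In the real app, the output of a function like this would feed directly into the Plotly.js traces that are redrawn after each chunk.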

The architecture of our original JavaScript implementation was fairly simple:

Randomly sample from the input file, calculate metrics using JavaScript, plot the results, and loop around
The architecture of the JavaScript implementation (Large preview)

The box in red is where we do the string manipulations to generate the metrics. That box is the more compute-intensive part of the application, which naturally made it a good candidate for runtime optimization with WebAssembly.

The WebAssembly Implementation

To explore whether we could leverage WebAssembly to speed up our web app, we searched for an off-the-shelf tool that calculates QC metrics on FASTQ files. Specifically, we sought a tool written in C/C++/Rust so that it was amenable to porting to WebAssembly, and one that was already validated and trusted by the scientific community.

After some research, we decided to go with seqtk, a commonly-used, open-source tool written in C that can help us evaluate the quality of sequencing data (and is more generally used to manipulate those data files).

Before we compile to WebAssembly, let’s first consider how we would normally compile seqtk to binary to run it on the command line. According to the Makefile, this is the gcc incantation you need:

# Compile to binary
$ gcc seqtk.c \
    -o seqtk \
    -lz -lm

On the other hand, to compile seqtk to WebAssembly, we can use the Emscripten toolchain, which provides drop-in replacements for existing build tools to make working in WebAssembly easier. If you don’t have Emscripten installed, you can download a docker image we prepared on Dockerhub that has the tools you’ll need (you can also install it from scratch, but that usually takes a while):

$ docker pull robertaboukhalil/emsdk:1.38.26
$ docker run -dt --name wasm-seqtk robertaboukhalil/emsdk:1.38.26

Inside the container, we can use the emcc compiler as a replacement for gcc:

# Compile to WebAssembly
$ emcc seqtk.c \
    -o seqtk.js \
    -s USE_ZLIB=1 \
    -s FORCE_FILESYSTEM=1

As you can see, the differences between compiling to binary and compiling to WebAssembly are minimal:

  1. Instead of the output being the binary file seqtk, we ask Emscripten to generate a .wasm and a .js that handles instantiation of our WebAssembly module
  2. To support the zlib library, we use the flag USE_ZLIB; zlib is so common that it’s already been ported to WebAssembly, and Emscripten will include it for us in our project
  3. We enable Emscripten’s virtual file system, which is a POSIX-like file system (source code here), except it runs in RAM inside the browser and disappears when you refresh the page (unless you save its state in the browser using IndexedDB, but that’s for another article).

Why a virtual file system? To answer that, let’s compare how we would call seqtk on the command line vs. using JavaScript to call the compiled WebAssembly module:

# On the command line
$ ./seqtk fqchk data.fastq

# In the browser console
> Module.callMain(["fqchk", "data.fastq"])

Having access to a virtual file system is powerful because it means we don’t have to rewrite seqtk to handle string inputs instead of file paths. We can mount a chunk of data as the file data.fastq on the virtual file system and simply call seqtk’s main() function on it.
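For illustration, here is a minimal sketch of that mount-and-call step. The function name runFqchk is hypothetical, and depending on your Emscripten version and settings, the FS API may be exposed as a global rather than as a property of Module:

```javascript
// Sketch: mount a chunk of the user's file onto Emscripten's virtual
// (in-memory) file system, then invoke seqtk's main() on it.
// `Module` is the object created by the Emscripten-generated seqtk.js.
function runFqchk(Module, chunkText) {
  // Write the chunk to the virtual file system as data.fastq
  Module.FS.writeFile("data.fastq", chunkText);
  // Equivalent to running "./seqtk fqchk data.fastq" on the command line
  Module.callMain(["fqchk", "data.fastq"]);
}
```

Because the virtual file system lives in RAM, the write is cheap, and the same filename can be reused for every chunk.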

With seqtk compiled to WebAssembly, here’s the new architecture:

Randomly sample from the input file, calculate metrics within a WebWorker using WebAssembly, plot the results, and loop around
Architecture of the WebAssembly + WebWorkers implementation (Large preview)

As shown in the diagram, instead of running the calculations in the browser’s main thread, we make use of WebWorkers, which allow us to run our calculations in a background thread and avoid negatively affecting the responsiveness of the browser. Specifically, the WebWorker controller launches the Worker and manages communication with the main thread. On the Worker’s side, an API executes the requests it receives.

We can then ask the Worker to run a seqtk command on the file we just mounted. When seqtk finishes running, the Worker sends the result back to the main thread via a Promise. Once it receives the message, the main thread uses the resulting output to update the charts. Similar to the JavaScript version, we process the file in chunks and update the visualizations at each iteration.
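The main-thread side of that round-trip can be sketched as a Promise wrapper around postMessage. The { id, cmd, args } message shape here is an assumed protocol for illustration, not necessarily the app’s actual one:

```javascript
// Send one command to a Worker and resolve with its reply.
// Each request gets an id so replies can be matched to requests.
let nextRequestId = 0;
function callWorker(worker, cmd, args) {
  return new Promise((resolve) => {
    const id = nextRequestId++;
    const onMessage = (event) => {
      if (event.data.id !== id) return; // reply to a different request
      worker.removeEventListener("message", onMessage);
      resolve(event.data.result);
    };
    worker.addEventListener("message", onMessage);
    worker.postMessage({ id, cmd, args });
  });
}
```

On the Worker’s side, a matching onmessage handler would run the seqtk command and post back { id, result }; the main thread can then simply `await callWorker(worker, "fqchk", ["data.fastq"])` before updating the charts.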

Performance Optimization

To evaluate whether using WebAssembly did any good, we compare the JavaScript and WebAssembly implementations using the metric of how many reads we can process per second. We ignore the time it takes to generate the interactive graphs, since both implementations use JavaScript for that purpose.
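A minimal sketch of how such a throughput measurement might look, where processChunk stands in for either the JavaScript or the WebAssembly implementation (function names here are illustrative):

```javascript
// Measure reads processed per second, excluding plotting time.
// `processChunk` should process one chunk and return its number of reads.
function readsPerSecond(processChunk, chunks) {
  let reads = 0;
  const start = Date.now();
  for (const chunk of chunks) {
    reads += processChunk(chunk);
  }
  const elapsedSec = (Date.now() - start) / 1000;
  return reads / Math.max(elapsedSec, 1e-9); // guard against zero elapsed time
}
```

Running the same chunks through both implementations and comparing the two rates gives the speedup figures quoted below.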

Out of the box, we already see a ~9X speedup:

Bar chart showing that we can process 9X more lines per second
Using WebAssembly, we see a 9X speedup compared with our original JavaScript implementation. (Large preview)

That is already very good, given that it was relatively easy to achieve (that is, once you understand WebAssembly!).

Next, we noticed that although seqtk outputs a lot of generally useful QC metrics, many of these metrics are not actually used or graphed by our app. By removing some of the output for the metrics we didn’t need, we were able to see an even larger speedup of 13X:

Bar chart showing that we can process 13X more lines per second
Removing unnecessary outputs gives us further performance improvement. (Large preview)

This again is a great improvement given how easy it was to achieve: by literally commenting out printf statements that were not needed.

Finally, there is one more improvement we looked into. So far, the app obtains the metrics of interest by calling two different C functions, each of which calculates a different set of metrics. Specifically, one function returns information in the form of a histogram (i.e. a list of values that we bin into ranges), whereas the other function returns information as a function of DNA sequence position. Unfortunately, this means that the same chunk of file is read twice, which is unnecessary.

So we merged the code for the two functions into a single (albeit messy) function, without even having to brush up on my C! Since the two outputs have different numbers of columns, we did some wrangling on the JavaScript side to disentangle the two. But it was worth it: doing so allowed us to achieve a >20X speedup!
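A sketch of what that JavaScript-side wrangling might look like, assuming the two kinds of rows can be told apart by their number of columns (the column counts passed in here are made up, not seqtk’s actual output format):

```javascript
// Split the merged function's output back into its two tables by
// counting whitespace-separated fields on each line.
function disentangleOutput(stdout, histogramCols, positionCols) {
  const histogram = [];
  const perPosition = [];
  for (const line of stdout.trim().split("\n")) {
    const fields = line.trim().split(/\s+/);
    if (fields.length === histogramCols) histogram.push(fields);
    else if (fields.length === positionCols) perPosition.push(fields);
  }
  return { histogram, perPosition };
}
```

Each table can then be routed to its own plot, while the file chunk itself is only read once on the C side.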

Bar chart showing that we can process 21X more lines per second
Finally, wrangling the code such that we only read through each file chunk once gives us a >20X performance improvement. (Large preview)

A Word Of Warning

Now would be a good time for a caveat. Don’t expect to always get a 20X speedup when you use WebAssembly. You might only get a 2X speedup or a 20% speedup. Or you may get a slowdown if you load very large files in memory, or require a lot of communication between the WebAssembly and the JavaScript.


Conclusion

In short, we’ve seen that replacing slow JavaScript computations with calls to compiled WebAssembly can result in significant speedups. Since the code needed for these computations already existed in C, we got the added benefit of reusing a trusted tool. As we also touched upon, WebAssembly won’t always be the right tool for the job (gasp!), so use it wisely.

Additional Reading
