Our thoughts on the development of the JavaScript ecosystem

Radosław Miernik
Mar 25, 2021


A colleague of mine recently asked whether we consider the current state of the JavaScript ecosystem stable. As you may know, the problem of “JavaScript Fatigue” is real. If you haven’t heard about it – or you like nice drawings – consider reading this article on the Auth0 blog as a primer.
A lot has changed within the last few years. That includes the number of things I have experienced and my level of expertise. Keep that in mind, as both strongly influence my point of view.
As usual, the answer is: it depends. While plenty of positive things have happened – we’ll get to that in a minute – there is a bunch of new problems too. Still, I’d argue that, in general, it’s better.

Task runners, bundlers, tools

A few years back, setting up a project was a big deal. Of course, the tooling was already there: Grunt, gulp, and Webpack (pre-v3) did their job well. The entry barrier was low – virtually anyone could set it up.
However, when you needed a proper “production-grade” setup, it got complicated. The configuration scripts were huge, extremely complex [1], and often required a couple of dozen dependencies. We learned to deal with that pretty fast. I believe that almost everyone had their own battle-tested config and reused it everywhere.
You can still use these today – all of them got better, really. But there’s also the “new wave” of tools that focus on productivity [2]: Parcel, Rome, Webpack (v4 and later), and create-react-app. It’s even easier to start a project – most things work out of the box. Ah, and we have npx now!
The need for a “production-grade” setup is still here, stronger than ever [3]. And while it’s still hard to do it right – maybe even harder – now, for some reason, it feels like a task, not the task.
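To make the contrast concrete, here is how short a reasonable production starting point can be nowadays – a minimal sketch of a webpack (v4+) config; the entry path, output layout, and the Babel loader are assumptions to adjust per project:

```javascript
// webpack.config.js – a minimal production-grade starting point.
// Paths and the Babel loader are placeholders; adapt to your project.
const path = require('path');

module.exports = {
  mode: 'production',          // enables minification and sane defaults
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].[contenthash].js', // content hash for long-term caching
  },
  module: {
    rules: [
      { test: /\.js$/, exclude: /node_modules/, use: 'babel-loader' },
    ],
  },
};
```

Compare that with the multi-hundred-line configs of the task-runner era – most of what used to be explicit wiring is now a default.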
Separately, the IDEs (Integrated Development Environments) got better as well. To be more precise, they got smarter. The “superpower” I’m referring to is the LSP (Language Server Protocol). It’s no longer the case that if you want proper Find References support, you have to buy expensive software – thanks to the LSP, every editor can be a robust IDE immediately. And the list of implementations is really long.

Language ergonomics

Back in 2013, I was using CoffeeScript a lot, especially its literate version. I really liked the idea of a new language that borrowed a few ideas from elsewhere. Did you know that CoffeeScript has had arrow functions since its first commit, back in 2009? The same goes for LiveScript and the pipe operator three years later [4].
It was the time of anything-to-JS. Basically, every major language got itself a dialect or a subset targeting the web. And even though it added some overhead, it was great. I want to believe that only because of such experiments, we got the nice and shiny ES6 (later renamed ES2015).
Of course, syntactic sugar is… Well, just a sweetener, and we could live our lives happily, with or without it. Some of us hopped on the hype train earlier, using Babel. Or should I say 6to5? But the real power of language changes was yet to come.
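To make the “sweetener” concrete, here is roughly what early Babel (then 6to5) did with an arrow function – the desugared form below is a hand-written approximation, not Babel’s exact output:

```javascript
// ES2015 source: arrow functions capture `this` lexically.
const tallyES2015 = {
  total: 0,
  addAll(numbers) {
    numbers.forEach((n) => { this.total += n; });
  },
};

// Roughly what a transpiler emits for ES5 engines:
// the outer `this` is stashed in a helper variable instead.
const tallyES5 = {
  total: 0,
  addAll: function (numbers) {
    var _this = this;
    numbers.forEach(function (n) { _this.total += n; });
  },
};
```

Both versions behave identically – which is exactly why the sugar is “just a sweetener”, and also why transpiling it away was safe.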
In the meantime, the whole IT industry rediscovered static typing [5]. Both Dart and TypeScript were created. And while the former is largely confined to Flutter, the latter became a de facto industry standard. Even npm displays a TypeScript badge now.
There is also Flow, PureScript, and ReScript (formerly Reason). All of them bring new exciting ideas to the table and have different goals in mind. I have used all of them, especially Flow. But TypeScript has won.

Asynchronous revolution

The introduction of the Promise was the dawn of a new era. It has not only changed the language but also led to a lot of improvements and changes in the standard library. Later, the async keyword sealed the deal – asynchronous code is now easy and more popular than ever.
There’s also a downside. During the “migration period”, when async was not yet supported by any major JavaScript environment, there were only two options: accept the regenerator-runtime overhead or not use it at all.
Overall, I think it improved the code we work with. We’re finally at a point where the asynchronous code is as readable as the synchronous one. And yes, I’m willing to accept the overhead (in most cases).
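To see the readability gain, compare the same two-step flow written both ways – fetchUser and fetchPosts below are simulated stand-ins, not a real API:

```javascript
// Simulated asynchronous API – stand-ins for real network calls.
const fetchUser = (id) => Promise.resolve({ id, name: 'Ada' });
const fetchPosts = (user) => Promise.resolve([`${user.name}'s first post`]);

// Promise chain: the data flow is threaded through nested callbacks.
function firstPostThen(id) {
  return fetchUser(id).then((user) =>
    fetchPosts(user).then((posts) => posts[0])
  );
}

// async/await: reads top-to-bottom, like synchronous code.
async function firstPostAwait(id) {
  const user = await fetchUser(id);
  const posts = await fetchPosts(user);
  return posts[0];
}
```

Both functions return the same promise; only the second one reads like the synchronous code it replaces.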
As you may have heard, JavaScript was not the first. A few years earlier, the async keyword appeared in C# 5.0. More recently, it was added to Python and Rust. And Swift will get it soon too.

Flux case study

Let’s analyze how the Flux architecture changed the frontend world. Before it got any traction, most frontend applications used the MVC (Model View Controller) pattern, an MVC-like structure, or no “standardized” architecture at all.
It was astonishing. It wasn’t a new library or tool – it was the architecture. It played well with our existing ecosystem. It kind of reflected the way people think. It “clicked” with the browser’s architecture. It worked.
OK, there was the flux library. But it wasn’t that critical. The only important part was that people started thinking about it. And it led to an impressive blooming period. This first generation gave us DeLorean, Fluxxor, and Reflux. You may not have heard of these – no worries.
The second generation was the time of Redux. Here the library itself is far more important than in the Flux case, but once again, the ideas – not code – were crucial. The “magical experience” of time travel debugging was great and revived the concept of immutability [6].
Then the third generation gave us redux-saga, redux-thunk, and finally, @reduxjs/toolkit. We could use Redux with these helpers until the end of time – it works, it can be very productive, and is a kind of a “veteran” among the state management solutions.
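The core idea is small enough to sketch in a dozen lines – a toy, dependency-free store in the spirit of Redux (not its actual implementation): state lives in one place, and the only way to change it is to dispatch an action through a pure reducer.

```javascript
// A toy store in the spirit of Redux: state changes only via dispatch.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action); // the only way to change state
      listeners.forEach((listener) => listener());
    },
    subscribe(listener) {
      listeners.push(listener);
    },
  };
}

// A reducer is a pure function: (state, action) -> new state.
const counter = (state, action) =>
  action.type === 'INCREMENT' ? { count: state.count + 1 } : state;

const store = createStore(counter, { count: 0 });
store.dispatch({ type: 'INCREMENT' });
```

Because the reducer is pure, every past state can be kept around cheaply – that is the whole trick behind time travel debugging.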
Separately, there were the new wave solutions: Cycle.js, Elm, and MobX. They build on the same ideas and address the same needs but ignore the existing code. This kind of fresh-start innovation is the way to go.
The aftermath is that we saw the rise and fall of tens of excellent libraries. We cannot and shouldn’t say that it was pointless – even though most of them didn’t matter in the end, all this work gave us the humus required to grow new ideas. It’s natural. It’s organic. You cannot sow on barren ground.

People factor

Do people actually benefit from new tools or language features? One could answer that on multiple levels, but based on my experience onboarding new people – it’s worth it. Add the fact that the code is often easier to understand, and it’s even better.
Usually, people see it as “I don’t know X and I have to learn it now!” [7]. Been there, done that. It’s extremely hard to get rid of this feeling. And even harder to help someone else deal with it.
Having said that, I see that the technologies of today are often less intimidating. Perhaps it’s the quality of documentation. Or the sheer amount of tutorials. Or maybe people I met recently are far brighter than the ones I met years ago. I don’t know.

Closing thoughts

Everything shifted toward the idea of being “easy to learn and hard to master”. And that’s really, really good. We need more people playing and experimenting – that’s the easiest way of getting them in.
Of course, we do need people who are waist-deep in the programming languages theory as well. I love reading through the papers published in POPL, looking for ideas. Maybe one of these novel ideas will lead to a groundbreaking proposal or RFC (Request for Comments) for JavaScript, Python, Rust, or Swift?
Is it actually better? I think so. It feels like it.

[1] It was also caused by the goal of these tools. Back then, we used task runners – the make of the web. Nowadays, the focus has shifted towards bundlers, or even more generic tools, that manage the build process as a whole.
[2] It’s not that the others did not – they, of course, did. But now, DX (Developer Experience) is valued much higher. Even upgrading is often a breeze, especially with tailor-made tools, like codemod.
[3] JavaScript is now literally everywhere – web (duh!), cloud, desktop, edge, embedded, mobile, and, of course, server. That means we may need not one production build but several, one for each target. And it gets worse if you plan to support non-evergreen browsers or deliver resource-aware code using the upcoming Device Memory API, failIfMajorPerformanceCaveat, or hardwareConcurrency.
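A resource-aware build selection might look like the sketch below – navigator.deviceMemory (Device Memory API) and navigator.hardwareConcurrency are real browser properties, but the tier names and thresholds here are made-up examples:

```javascript
// Pick an asset tier from device capabilities. The thresholds are
// arbitrary examples; navigator.deviceMemory and
// navigator.hardwareConcurrency are real but not universally supported,
// hence the defaults.
function pickAssetTier({ deviceMemory = 4, hardwareConcurrency = 4 } = {}) {
  if (deviceMemory <= 1 || hardwareConcurrency <= 2) return 'lite';
  if (deviceMemory >= 8 && hardwareConcurrency >= 8) return 'full';
  return 'standard';
}

// In the browser, feed it the real values; elsewhere, fall back to defaults.
const tier = pickAssetTier(
  typeof navigator === 'undefined' ? {} : navigator
);
```

The point is less the exact cut-offs and more that “one production build” quietly becomes a matrix of them.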
[4] There’s a proposal to add the pipeline (pipe) operator to JavaScript.
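Until the proposal lands, the same left-to-right data flow can be emulated with a tiny helper – a common userland pattern, not part of the proposal itself; the string functions are made up for illustration:

```javascript
// A tiny stand-in for the proposed pipeline operator:
// pipe(x, f, g) expresses what `x |> f |> g` would.
const pipe = (value, ...fns) => fns.reduce((acc, fn) => fn(acc), value);

const trim = (s) => s.trim();
const words = (s) => s.split(/\s+/);
const count = (list) => list.length;

const wordCount = pipe('  how many words here  ', trim, words, count);
```

Reading top-to-bottom in the order the data flows is the whole appeal – the same reason LiveScript shipped it back in 2012.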
[5] Not only in the JavaScript world – there’s mypy and typing for Python, Sorbet for Ruby, and PHP 7.4 for, well, PHP.
[6] JavaScript was not built with immutability in mind. Of course, there are functions like Array.prototype.concat, but most operations require manual copying. I used the excellent immutable library to get rid of this tedious work at first. It works, but I consider immer strictly superior, as it’s transparent for the rest of the code.
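The “manual copying” mentioned above looks like this in plain JavaScript – the state shape is made up, and immer automates exactly this kind of spread-heavy update:

```javascript
// A made-up nested state; updating one field immutably requires
// copying every object on the path to it.
const state = {
  user: { name: 'Ada', settings: { theme: 'light', lang: 'en' } },
  posts: [],
};

// Manual immutable update: spread each level, replace one leaf.
const next = {
  ...state,
  user: {
    ...state.user,
    settings: { ...state.user.settings, theme: 'dark' },
  },
};
// With immer, the same change is one mutable-looking line inside
// produce(state, (draft) => { draft.user.settings.theme = 'dark'; }).
```

Note that untouched branches (posts) are shared between the old and new state – that structural sharing is what makes immutability affordable.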
[7] My guess is that “JavaScript Fatigue” is strongly correlated with impostor syndrome. One’s knowledge was steadily increasing, but the technology evolved so fast that one could feel less and less experienced every day, having no experience in the newest framework. And the fact that the framework was only a week old was not soothing.
Radosław Miernik

Head of Engineering at Vazco
Creator of uniforms and active Meteor contributor. Loves solving complex problems and teaching others. Outside of the office, he’s a PhD student focused on artificial intelligence for games.