There's nothing great or particularly amazing about Angular and its build processes.
Everything Angular does is fighting against its own architectural decisions. At a high level it goes like this:
We write in Typescript, we need to compile to Javascript
Typescript cannot compile our templates because they are custom templates
We need to compile templates
Typescript compiler barfs at some of our code, we need to fix the code, or the compiler, or both, or hack in between them
The resulting Javascript is hundreds of kilobytes larger than any of the competition.
We need to somehow reduce the size
Let's throw Google Closure Compiler in
It cannot reliably process JS code produced by Typescript compiler
(side note: it cannot reliably process any JS code outside of Google's 'goog:module' and Closure Library even with "SIMPLE" optimisations)
Let's create a separate tool, Tsickle, that will help convert TypeScript code into Closure-compatible Javascript code.
The tool is a hack, because Typescript compiler doesn't expose compiler-as-a-service
And also let's do "Ahead-of-time" compilation on templates etc.
So now we produce marginally less code. It takes insane amounts of time to not only build, but even rebuild and do incremental changes
Oh, let's take Google's Bazel! It promises to be so fast!
Only it doesn't know anything about compiling Typescript.
So we write new rules (aka configs) for typescript, for typescript code server, and a bunch of other stuff.
So now we will have:
Typescript compiling
Tsickle converting
Templates compiling
Closure Compiler compiling
Bazel orchestrating
So, nothing really changed (only the build chain is becoming increasingly complex and impossible to reason about), and yet "OMG Angular is so great: Google makes it possible to use it with Bazel and Closurescript instead of the circus that is Webpack and UglifyJS" 😂
Disclaimer: I talk a lot about React here, but you can substitute your favorite library: Inferno, Preact, Vue, snabbdom, virtual-dom (or non-js libraries and frameworks like Elm, Om, etc.). Similarly, replace Polymer with Vaadin, or X-Tag, or…
Brief, incomplete, and mostly incorrect history of Web Components
Ancient times
By 2011 the Internet had grown. It had Facebooks and Gmails, and Twitters, and Asanas, and Google Docs, and countless other online things that could no longer be called sites, or Single Page Applications. They were, for all intents and purposes, applications. As simple as that.
And woe was to the devs who were developing them.
State of the art for Web GUIs at the time was probably a templating language glued together by some server-side logic and/or a client-side library. Backbone if you were lucky. jQuery UI or Sencha/ExtJS if you were enterprise enough.
This was cumbersome. This was limiting. You could not prototype easily and quickly. You could not easily escape the limitations of UI libraries. etc. etc. etc.
And you were limited to the same set of HTML elements as ever: divs, ps, forms…
I think we’re stuck today in a little bit of a rut of extensibility. We wind up leaning on JavaScript to get things, because it is the Turing complete language in our environment. It is the only thing that can give us an answer when CSS and HTML fail us. So we wind up piling ourselves into the JavaScript boat. We keep piling into the JavaScript boat.
Bruce yesterday brought up the great example of an empty body tag, and sort of this pathological case of piling yourself into the JavaScript boat, where you wind up then having to go recreate all of the stuff that the browser was going to do more or less for you if you’d sent markup down the wire in order to get back to the same value that was going to be provided to you if you’d done it in markup. But you did it for a good reason. Gmail has an empty body tag, not because it’s stupid. Gmail does that because that’s how you can actually deliver the functionality of Gmail in a way that’s both meaningful and reliable and maintainable. You wind up putting all of your bets, all of your eggs, into the JavaScript basket.
…
It takes browsers time to implement stuff. We have to get it out to users, now users have to start using it, and then we have to look at our deployed population and go, “OK, now I can use this feature, I can target this feature.”
So as developers, this is the sort of things we thrive on. The faster that this crank turns, the more chances we get to see new features enter the market that we can start to target. “OK, well, this is good.”
…
So we wind up with this unspoken tension between deep pragmatism and the Platonic ideal of where we would like to be. But we don’t have a really good model for thinking about it.
…
The more of this behavior [behavior agreed-upon, known to devs, and implemented in the browser] that we take into ourselves, the more we do in JavaScript, the more we get away from the idea of shared ambiguity, which is what makes the world work.
…
We want to feed this process of progress. I think we get stuck in a place where we consider HTML5, we’re done.
And then he goes on to introduce and demo Web Components, which at the time were three different things: Scoped CSS, Shadow DOM and Web Components. (I also highly recommend his talk in general)
But then W3C happened. In true w3c fashion it went ahead and spent another 5 years defining the Platonic ideal and never feeding the progress as a result.
Cue in Facebook
Facebook application is complex. It might not look like it, but it is. Those little boxes everywhere on the page have surprisingly complex layouts which have to be repeated, and/or customized, and/or adjusted in various contexts. A developer would naturally want to do the following: take this box element, and put it here, and apply these random styles to it without disturbing anything on the page.
And it has to be reasonably fast. Because, well, DOM updates are notoriously slow, and there are countless articles detailing how you absolutely must reduce the number of times you access the DOM.
It’s so bad that innerHTML, a sign of horrible smelly code, is on par with or faster than DOM manipulation.
So, what did Facebook do? Oh, simple, they basically wrote their own implementation of Web Components entirely in Javascript. With an XML-based DSL to boot. They called it React and unleashed it unto the world in 2013.
React provides you with the following:
a way to define your own custom elements
a way to place them on the page with HTML-like syntax
a fast virtual DOM implementation that minimizes changes to the actual DOM
very few limitations on what you can do with or within components, because it’s Javascript all the way down (the DSL is a thin wrapper on top of a small number of functions)
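The virtual DOM point is worth a sketch. A toy version of the idea (nothing like React's actual reconciler, and all names here are made up) renders to plain objects and diffs two trees, emitting only the minimal set of patches:

```javascript
// Build a virtual node: a plain object, cheap to create and compare.
function h(type, props, ...children) {
  return { type, props: props || {}, children };
}

// Diff two virtual trees; return only the operations needed to update the
// real DOM. Real libraries also diff props and use keys; this does not.
function diff(oldNode, newNode, path = []) {
  if (oldNode === undefined) return [{ op: 'create', path, node: newNode }];
  if (newNode === undefined) return [{ op: 'remove', path }];
  if (typeof oldNode === 'string' || typeof newNode === 'string') {
    return oldNode === newNode ? [] : [{ op: 'replace', path, node: newNode }];
  }
  if (oldNode.type !== newNode.type) {
    return [{ op: 'replace', path, node: newNode }];
  }
  const patches = [];
  const len = Math.max(oldNode.children.length, newNode.children.length);
  for (let i = 0; i < len; i++) {
    patches.push(...diff(oldNode.children[i], newNode.children[i], path.concat(i)));
  }
  return patches;
}

const before = h('ul', null, h('li', null, 'a'), h('li', null, 'b'));
const after  = h('ul', null, h('li', null, 'a'), h('li', null, 'c'));
console.log(diff(before, after).length); // 1: only the changed text node
```

Applying such a patch list touches the real DOM only where something actually changed, which is the whole performance argument.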
No wonder React took the world by storm. Those who weren’t writing it were surely talking about it. It spawned several competitors aiming for the same feature set (Inferno, Preact) or various subsets, most notably the virtual DOM (Snabbdom, virtual-dom etc.).
2017
In 2017 React fulfills all the promises of Web Components: it lets you write performant reusable self-contained components. It can run on almost any Javascript-enabled browser (React doesn’t support IE < 9).
As the ecosystem blossomed, it went far beyond the scope of Web Components. If I’m not mistaken, CSS Modules proposal appeared because it was first implemented in, and for, React.
In 2017 Web Components are still in development, despite already spawning two versions each of two of the underlying standards.
At the time of this writing the situation for Web Components is, to put it mildly, uneven.
So what’s this broken promise I hear about?
Well, the main failure is obvious: they are nowhere to be seen. The promise of “feeding the process of progress” is unfulfilled. By their 6th year they spawned a total of 6 standards. Two of them are already deprecated. Only one major browser is committed to supporting them (sorry, Opera, you’re no longer a major browser, and you run on top of Chrome these days).
The other broken promise is the one bandied about in the Internets these days: interoperable custom components without vendor lock-ins.
And this is the one that got me writing this overly long piece of thinking out loud.
Polymer is Google’s attempt at creating a Web Components-compliant implementation:
Unlock the Power of Web Components. Polymer is a JavaScript library that helps you create custom reusable HTML elements, and use them to build performant, maintainable apps.
Polymer shows the main problem with Web Components: they are DOM.
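For contrast, here is roughly the kind of React component the next paragraph describes, sketched with a plain hyperscript helper instead of JSX so the snippet stays self-contained (all names are illustrative):

```javascript
// A virtual node is a plain object; a "component" is just a function.
function h(type, props, ...children) {
  return { type, props: props || {}, children };
}

// Style is a plain JS object; children come from mapping over an array
// and producing more nodes -- here, <p> elements.
function NameList({ names }) {
  return h('div', { style: { color: 'tomato' } },
    ...names.map(name => h('p', null, name)));
}

const vnode = NameList({ names: ['Ada', 'Grace'] });
console.log(vnode.children.map(c => c.type)); // [ 'p', 'p' ]
```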
This is a custom component. Its style is defined by a JavaScript object. Its children are defined by mapping over an array of values and producing another component. In this case it’s a <p>, but it could be anything. The component’s children are the current array values.
I assume the limitations described above were the immediate problem that Polymer faced. How do you work around this? Well, you invent your own not-really-JavaScript-but-kinda-Javascript kinda-templating-kinda-scripting kinda-language. That can only exist in strings.
Work your way through Polymer’s data system for a full overview. Below are just some examples.
Note: none of these [[]], {{}}, $= etc. are in the Web Component spec
In all seriousness though. Web Components ended up delivering hardly anything from their original promises (or have hardly answered any of the originally raised questions):
content models are strange (we’ll have to wait and see how they end up interacting in deeply nested structures)
To work around limitations (such as string attributes) libraries will (and have) come up with incompatible ways to pass data
is Polymer’s attr$='{{user.name}}' better than Vaadin’s item-label-path="name.first" or Angular’s <div *ngFor="let hero of heroes">{{hero.name}}</div> or more compatible with others?
What, where, and how should I import the entirety of a library X to deal with their weird ways of dealing with DOM limitations if I deal with multiple nested components?
DOM APIs are horrible, cumbersome, awkward and clunky. Polymer and others are bravely trying to use DOM APIs only, but even they resort to innerHTML anywhere they don’t have to put on a show (tests, for example). When Web Components take root, the web will be flooded with less performant innerHTMLs and possibly re-implementations of snabbdoms and virtual-doms (obviously incompatible)
how will this help with interoperability and vendor lock-ins if everyone chooses their own way of dealing with this?
Scoped CSS…
CSS Modules. Need I say more?
These are just a few warts I could come up with off the top of my head. I haven’t seen them truly discussed anywhere.
We’re not going to use it at all at Facebook. We’re not going to build React on it because there’s a strong model difference – imperative in Web Components to declarative in React. Web Components doesn’t have an idiomatic way to define things like where events go. How do you pass data when everything is a string? We see it more as an interop layer that lets various frameworks talk to each other.
Nowadays React lets you have them as the leaf nodes in the component tree because React assumes any component name that starts with a lowercase to be a DOM element.
There’s a lot of stuff in Web Components spec that is nice. Being able to customize how select work for example (like dropdowns). So there’s a lot of great stuff in Web Components, but it’s not gonna be the way you structure your applications. and that’s what React tries to solve: how do you structure your application, manipulate the DOM.
Web Components are more of the same regular DOM API. What I like to think of it is: it standardized the worst practices.
There are very, very few discussions about these issues except for comments on Twitter or on various articles. The consensus seems to be “Web Components are the glorious, interoperable, fast, performant future”.
tl;dr: Closure compiler makes little to no sense outside of Google's ecosystem.
Nolan Lawson did some very nice research on how various Javascript tools bundle/compile Javascript. I highly recommend it: The cost of small modules.
This article has made several rounds on Twitter and many people have asked: Why aren't more people using Closure? There are many reasons for that.
Considering the dire state of Javascript tools today, Google's Closure compiler is yet another cryptic, badly configured, half-baked tool to throw into the ever-growing pile of twigs and sticks called Javascript infrastructure.
Documentation
There's basically none.
The Quickstart shows how to create a page that contains basically exactly one line of Javascript code. Instead of showing you how to do more, the next step is Advanced compilation, which is basically Javascript one-liners interspersed with Python code to invoke the compiler service.
How do I set up a project to compile my application?
How do I properly process the multiple modules my application has?
How do I include/process third-party modules?
How do I bundle stuff?
etc.
All these are unanswered.
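For reference, a basic standalone invocation looks roughly like this. The flags are real compiler flags; the jar name and file paths are illustrative:

```shell
java -jar closure-compiler.jar \
  --compilation_level ADVANCED_OPTIMIZATIONS \
  --js src/index.js \
  --js_output_file dist/app.min.js
```

How this is supposed to scale to multiple modules, node_modules, or bundling is exactly what the docs leave unanswered.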
Let's download the app then. There could be more info.
To run this you need a library you don't need
The README for the compiler has this nugget:
If you're using globs or many files, you may start to run into problems with managing dependencies between scripts. In this case, you should use the Closure Library. It contains functions for enforcing dependencies between scripts, and Closure Compiler will re-order the inputs automatically.
If you follow the link, this is the page you land on:
The Closure Library is a broad, well-tested, modular, and cross-browser JavaScript library. You can pull just what you need from a large set of reusable UI widgets and controls, and from lower-level utilities for DOM manipulation, server communication, animation, data structures, unit testing, rich-text editing, and more.
WAT?
I want to handle my dependencies, not have a UI library. Maybe they refer to ClosureBuilder? I have no idea.
Let's maybe run it?
I have some code in my app that consists of multiple modules, relies on some third-party code (node_modules), etc. The entry point is index.js
How do I run it so that it generates code I need to run my app in the browser?
However you run it, it only produces some minified code that:
Doesn't include any dependencies (except those directly in the folder)
Throws a require is not defined error in the browser
At this point I'm ready to give up because, well, I already have my sweet setup that handles everything: it transpiles, and compiles, and minifies, and bundles all the code.
Switching to ADVANCED compilation mode may pollute your console output with multiple warnings or errors that are also not helpful in the least.
In (small) conclusion
Google's Closure compiler makes no sense outside Google infrastructure. When you have set up everything the Google way and there are people to help you along as you run into problems, you're ok.
When you're alone (as most devs are), you will either spend an unknown number of hours going through error messages and disassembling the setups of other projects (ClojureScript, Angular 2)...
Or just use the tools which you kinda sorta can setup without going insane
Building JavaScript apps is overly complex right now
Among other things The State of JS 2016 asked developers if they disagreed (1), were neutral (3), or agreed (5) with the following statement: Building JavaScript apps is overly complex right now.
A full 59% of developers agree that Building JavaScript apps is overly complex right now. Only 16% disagree.
Javascript fatigue is real, no matter how hard some people try to shrug it off (see this for an example).
We’re told: move fast and break things. For once, just for once, I would like to stop breaking things and stick to something that works. Half of the issues here is something I’ve encountered in the past two weeks alone, when trying to create a project from scratch.
Developer Experience
Developer Experience is not a lost art. It’s an art that has never been discovered. It was briefly discussed a few years ago and then forgotten. These days if you ever hear it mentioned, it’s usually in the context of APIs.
However (emphasis mine):
As software consumes the world around us, good design in our tools is a growing competitive advantage
Our tools shape our work, and great tools change the shape of our industry.
We talk a lot about the importance of a strong engineering team, but not enough about the design of our tools and the impact it has on the quality of the products we build. We should be talking about DX more, and it’s not enough to talk about UX alone.
Javascript tools are quite literally death by a thousand cuts. The whole experience of working with and building for Javascript is, at the very least, an exercise in frustration. The landscape is utterly hostile to developers. With experience you learn to navigate it, somewhat safely. Is it an experience you need, though?
Falsehoods programmers believe about Javascript tools
Tools work when you run them
Tools can be configured
In JSON
In Javascript
Can be pointed to different config file
Don’t use hidden/special files like .something.rc
Tools fail if their config is incorrect
Tools warn you about invalid values in configs
Tools ignore invalid values in config
Tools use defaults instead of invalid values in config
Tools don’t ignore valid values in config
Well, at least tools report non-config errors
At least fatal errors?
Tools propagate errors from their plugins or sub-tools
At least fatal errors? I asked that one already, didn’t I?
You can tell if an error originated in the tool, in a plugin or in a sub-tool
At least, errors are clearly stated on screen/in logs
At least, with reasonable and relevant information
Tools can be invoked from command line
Tools can be run on a list of files
With glob patterns
Minor versions of tools, or their plugins, or their sub-parts keep backwards compatibility
Tools fail if none of their requirements are satisfied
Tools fail if some of their requirements are not satisfied
Errors or warnings if this happens
There is only one way to do things
There is more than one way to do things
These many ways lead to the same result
Your tool will be relevant 5 years from now
Ok, a year from now
So, let’s talk tools.
npm
npm is the ubiquitous javascript tool. You may still run into bower occasionally, but that battle seems to have been lost.
npm is:
a package manager
a dependency tracker
a build tool (node.js’s make, if you will)
a task runner
It probably does other tasks, but these are the most important ones. It does its job quite well, and I cannot recommend this post highly enough. Still, there are gotchas.
Run whatever I think you want, not what you want
This is a relatively minor WTF, but it’s there nonetheless:
If the specified configuration param resolves unambiguously to a known configuration parameter, then it is expanded to that configuration parameter. For example:
npm ls --par
# same as:
npm ls --parseable
In the example above --par is an invalid parameter. npm will not silently ignore it (which would be bad). npm will silently expand it to whatever parameter or combination of parameters it fuzzy matched.
npm has been notoriously bad at detecting actual errors. For example, until version 3.x (!) it would not fail if its configuration file was invalid.
Depend on whatever I think you want, not what you want
Let’s consider npm install --save. This installs a dependency and adds it to the dependency list in your project’s package.json. And by saving I mean “take the list of dependencies, sort it alphabetically, and write it back to disk”.
This would not be a problem, save for this:
The npm client installs dependencies into the node_modules directory non-deterministically. This means that based on the order dependencies are installed, the structure of a node_modules directory could be different from one person to another. These differences can cause “works on my machine” bugs that take a long time to hunt down.
Many were quick to attribute the problem to the horrible programmer culture of JavaScript, where people have forgotten how to program. That’s not the case. The JavaScript community has wholeheartedly adopted Unix’s philosophy of “one package has to do one thing, and do it well”, but may have taken it to extremes.
npm and npm’s registry are essential to developers and to developer experience. The way some/many of arising issues are handled by npm’s organisation are clearly detrimental to developer experience.
Your package name doesn’t matter. Until it matters
So after a search for various keywords I found out that the module name npmjs was still available in the registry. In the four years that the registry existed, nobody had taken the effort to register it. This module was and is a perfect fit for my module.
On the 22nd I received an email from Isaac, the CEO of npm Inc (which recently raised more than 2M in funding for his company) and creator of npm, with a question:
Can you please choose another name for this module? It’s extremely confusing. Thanks.
…
It didn’t even matter how right or wrong I was for using npmjs as a module name. Isaac had clearly already decided to destroy the module, as he stated there wasn’t any negotiation and that it would be deleted no matter what.
Sounds bad, doesn’t it? Ok, what about this story then (emphasis mine)?
For a few minutes today the package “fs” was unpublished from the registry in response to a user report that it was spam. It has been restored.
More detail: the “fs” package is a non-functional package. It simply logs the word “I am fs” and exits. There is no reason it should be included in any modules. However, something like 1000 packages do mistakenly depend on “fs”, probably because they were trying to use a built-in node module called “fs”.
Should you even allow publishing a module that has the same name as an internal one?
If npmjs is confusing, how is fs not confusing?
If thousands of packages depend on it, how can you remove it considering the SNAFU you had just several months prior?
npm devs are clueless? (added 2016-12-27)
Even though npm is a rather nice package manager, its devs often behave like they haven’t got a clue as to what’s happening, or how development should happen.
Performance drop by 40% or more
At one point npm implemented a new progress bar which slowed down installation speeds by 40% or more. See the related issue. Worse still, it was now enabled by default (it had been disabled by default in the previous version).
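The workaround that made the rounds at the time uses a real npm config key:

```shell
# disable the progress bar globally to claw the install speed back
npm set progress=false
```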
You’ve got to love some comments from the npm dev team:
I’ve been aware of this as an issue for a while and the fix was literally 10 minutes or so of effort, but it hadn’t bubbled up in priority as I hadn’t realized how big an impact it was having.
Or, in a related issue (this appeared after the release):
Profiling would be grand… Put together a minimal benchmark to work against… Ideally I’d like this benchmark to be 2.x and 3.x compatible so we can directly compare different parts.
We break your stable branch, we’re not going to fix it
They are all broken in more ways than one. Let me just quote from my own experience (read the whole article for more than just this one snippet):
Let’s step back for a second, and consider:
Grunt does not transform code; it’s a “task runner”. The task it runs is called browserify
Well, the problem with the browserify task is that the task runner cannot run it. It needs a plugin called grunt-browserify to do that
Oh, and browserify has to run babel on the source code before it can do anything with it.
And the problem is that browserify cannot run babel directly. It needs babelify to work
All in all the whole toolchain to produce a Javascript file is grunt -> grunt-browserify -> browserify -> babelify -> babel. And someone in that chain decides that babel missing all of its specified plugins is not a reason to throw an error, stop and report.
Are these problems fixed now? I don’t know. I no longer even care if they are fixed or not. I got myself new shiny better toys to play with. Or did I?
webpack
Webpack is almost the latest and greatest in a web developer’s life (there’s an even more latest now, called Rollup).
On the surface it does the following: take all the modules that your app has, figure out dependencies between them, bundle them up in a single file that you can serve to your website.
All webpack understands is ES5 and some common module structures: CommonJS, AMD, etc. It can invoke so-called loaders, whose job is to take a file and produce ES5 output, which Webpack will then take and bundle.
As a result, you can somewhat easily depend on anything: modules written in ES6, or in TypeScript, or in Coffeescript, or… You can even depend on CSS files or PNGs. If there’s a loader for that type of file, Webpack can bundle it.
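To make that concrete, here is a minimal webpack-1.x-era config sketch. The loader names are the conventional ones of the time, and the paths are illustrative, not taken from any real project:

```javascript
// Each entry in `loaders` tells webpack how to turn one file type into
// something it can bundle.
const config = {
  entry: './js/app/index.js',
  output: { path: 'dist', filename: 'app.js' }, // real configs use an absolute path
  module: {
    loaders: [
      { test: /\.ts$/,  loader: 'awesome-typescript-loader' }, // TS -> ES5
      { test: /\.css$/, loader: 'style!css' },                 // CSS via JS
      { test: /\.png$/, loader: 'url-loader?limit=8192' }      // small images inlined
    ]
  }
};
```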
Boilerplates
Also as a result, Webpack’s configuration is inane. And I don’t say it lightly.
Search github for “boilerplate”, and you will come away with easily hundreds of “this is configuration you need to get you started” because it is very nearly impossible to configure webpack.
Webpack is not alone to blame for this. It’s also the fractured tools, the fractured libraries etc.
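For illustration, css-loader configuration of that era looked roughly like this (the exact parameters are illustrative, but this query-string style was genuine css-loader usage at the time):

```javascript
loaders: [{
  test: /\.css$/,
  loader: 'style!css?modules&importLoaders=1&localIdentName=[name]__[local]___[hash:base64:5]'
}]
```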
Yes, the above is configuration for css-loader. Yes, it is a string that contains URL-like structure where parameters you pass in are, well, URL query parameters. Because reasons.
There’s a reason for that, obviously. There always is.
This is a plugin. Acting as a loader. It has a pre-loader, style. And a loader. Which is a combination of two loaders, css and postcss. Oh, and we exclude autoprefixer (plugin? feature?) from the css loader.
Yes, if there’s just one loader, you can provide its parameters as a regular JSON structure (feast your eyes on the query parameters section). However, this is yet another impedance mismatch you have to deal with when trying to figure out what, where and how to configure all the moving parts.
You go and look for docs on a *-loader, and you run into anything: config as strings, config as objects, a mix of both. And if something goes wrong, you are left alone; there’s no way to know what failed.
But honestly. How much time do you have to spend to come up with this: ExtractTextPlugin.extract('style', 'css?-autoprefixer!postcss')?
Learn to love the stacktrace
Let’s pretend you are a C++ developer. You use a Makefile to invoke the gcc compiler on your source code. If there is an error in your source code, you will see the relevant error.
For some reason you will never see the stacktraces from either the make tool or gcc.
Not so in Javascript. Because internal stacktraces from the build tools are the bread and butter of everyday JS development.
Enjoy.
> webpack --config webpack.config.js --progress --colors -w -d
Hash: 6f816ab5f143490174a0
Version: webpack 1.13.2
Time: 3176ms
Asset Size Chunks Chunk Names
app.js 1.11 MB 0 [emitted] main
app.js.map 1.25 MB 0 [emitted] main
[0] multi main 28 bytes {0} [built]
+ 226 hidden modules
ERROR in ./js/app/fsm/payment-flow-fsm.ts
Module parse failed: /Users/dmitriid/Projects/project/node_modules/awesome-typescript-loader/dist/entry.js!/Users/dmitriid/Projects/project/js/app/fsm/payment-flow-fsm.ts Unexpected token (3:18)
You may need an appropriate loader to handle this file type.
SyntaxError: Unexpected token (3:18)
at Parser.pp$4.raise (/Users/dmitriid/Projects/project/node_modules/webpack/node_modules/acorn/dist/acorn.js:2221:15)
at Parser.pp.unexpected (/Users/dmitriid/Projects/project/node_modules/webpack/node_modules/acorn/dist/acorn.js:603:10)
at Parser.pp$3.parseExprAtom (/Users/dmitriid/Projects/project/node_modules/webpack/node_modules/acorn/dist/acorn.js:1822:12)
at Parser.pp$3.parseExprSubscripts (/Users/dmitriid/Projects/project/node_modules/webpack/node_modules/acorn/dist/acorn.js:1715:21)
at Parser.pp$3.parseMaybeUnary (/Users/dmitriid/Projects/project/node_modules/webpack/node_modules/acorn/dist/acorn.js:1692:19)
<...skip 20 or so lines...>
at Parser.parse (/Users/dmitriid/Projects/project/node_modules/webpack/node_modules/acorn/dist/acorn.js:516:17)
at Object.parse (/Users/dmitriid/Projects/project/node_modules/webpack/node_modules/acorn/dist/acorn.js:3098:39)
<...skip another 20 or so lines...>
at nextLoader (/Users/dmitriid/Projects/project/node_modules/webpack/node_modules/webpack-core/lib/NormalModuleMixin.js:290:3)
at /Users/dmitriid/Projects/project/node_modules/webpack/node_modules/webpack-core/lib/NormalModuleMixin.js:259:5
at Storage.finished (/Users/dmitriid/Projects/project/node_modules/webpack/node_modules/enhanced-resolve/lib/CachedInputFileSystem.js:38:16)
at /Users/dmitriid/Projects/project/node_modules/webpack/node_modules/enhanced-resolve/node_modules/graceful-fs/graceful-fs.js:78:16
at FSReqWrap.readFileAfterClose [as oncomplete] (fs.js:380:3)
@ ./js/app/index.tsx 9:25-58
ERROR in [default] /Users/dmitriid/Projects/project/js/app/fsm/payment-flow-fsm.ts:3:15
Expression expected.
The relevant ticket for this is webpack/webpack#1245. Note no one even asks the most obvious question: “why in the seven hells would I need internal stacktraces if I’m not a webpack/plugin developer?” Well, not until yours truly came along.
Can you understand what actually failed?
So, I’m running this:
> webpack --config webpack.config.js --progress --colors "-w" "-d"
Hash: 578f6adad579fede3e98
Version: webpack 1.9.6
Time: 3224ms
Asset Size Chunks Chunk Names
app.js 1.12 MB 0 [emitted] main
app.js.map 1.27 MB 0 [emitted] main
[0] multi main 28 bytes {0} [built]
+ 227 hidden modules
ERROR in ./js/app/fsm/state-manager.ts
Module not found: Error: Cannot resolve module 'machina' in /Users/dmitriid/Projects/js/app/fsm
@ ./js/app/fsm/state-manager.ts 2:14-32
Can you immediately tell me if it’s webpack or typescript failing?
As you can clearly see, the first one is a webpack error. The second one is a TypeScript error. Relevant issue: webpack/webpack#2878 (there are probably others).
All your options are belong to us
Specifically, the watch option. Could be others. I don’t know and don’t care at this point.
So, supposedly, if you provide watch: true in the Webpack configuration, Webpack will watch your files and rebuild on changes.
This is, unsurprisingly, not entirely correct. See, if you ever decide to create your own build script and invoke webpack through its node.js API, you will see that:
webpack provides two separate APIs: run and watch
they are basically identical:
you create a compiler object by invoking webpack with the webpack config
you invoke the API
you provide a callback which accepts two parameters, err and stats
Except
run ignores the watch option of the config
watch expects a first parameter with options that are already there in the config anyway
Because, I guess, reasons. And, surprisingly, the “short” version with webpack(config, callback) works as expected. Who’d a thunk it.
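In code, the asymmetry described above looks like this (webpack 1.x node API; assumes webpack is installed, callback bodies elided):

```javascript
const webpack = require('webpack');
const config = require('./webpack.config.js');

const compiler = webpack(config);

// run() ignores config.watch entirely:
compiler.run((err, stats) => { /* ... */ });

// watch() wants watch options passed again, even if they're in the config:
compiler.watch({ aggregateTimeout: 300 }, (err, stats) => { /* ... */ });

// while the short form honors the config and just works:
webpack(config, (err, stats) => { /* ... */ });
```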
Forget continuous builds (added 2016-12-27)
Just read through this issue. Tl;dr: if webpack fails, it exits with a status code of 0. Because reasons.
And yes, despite this being a major bug in the main version currently used, it has not been fixed in the two years since it was reported. Because reasons.
Nothing works out of the box anymore
Install and run? Sane defaults? These things are becoming a rare beast in the Javascript world. It seems that nothing works out of the box anymore.
In JS world, sadly, going through hoops and withholding crucial information from the developer is now the accepted norm.
Babel
The only job that Babel does is compiling the next version of JavaScript to the current version of Javascript.
Understand this: out of the box the javascript compiler does not do a single thing. You have to install a number of plugins/presets before it even does anything.
Moreover, if there are no presets and no plugins installed, babel will not even complain about it. It will just … do nothing.
Given this index.js file:
[1, 2, 3].map(n => n + 1);
Running freshly installed babel will not even warn you, and will do nothing:
> node_modules/.bin/babel index.js
[1, 2, 3].map(n => n + 1);
Only if you install a preset and specify it do you get back what you need.
This is a special preset that will contain all yearly presets so users won’t need to specify each one individually.
Does this sound like a good sane default to you? Why isn’t it the default?
Does it seem that having no presets or plugins should at least raise a warning or something?
The answer is yes in any world other than Javascript.
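For the record, the fix is two manual steps that arguably shouldn’t be necessary at all: `npm install --save-dev babel-preset-es2015` (the 2016-era preset name; later Babel versions renamed things), then a .babelrc declaring it:

```json
{
  "presets": ["es2015"]
}
```

Only now does `babel index.js` actually compile the arrow function down to ES5.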
Typescript
I’ve decided to talk about Typescript, because why not. More often than not Javascript is only used as a target language. Multiple other languages exist that compile/transpile into Javascript while promising nicer features, syntax, tools and so on and so forth.
Typescript is a superset of Javascript developed by Microsoft. It introduces type checks and various niceties into the language.
It includes a nice fast compiler which removes the need for Babel, but, obviously, it introduces a whole host of other problems.
To hook Typescript into Webpack you need a loader, and there are two: ts-loader and awesome-typescript-loader. One is recommended in the Typescript docs, the other is used in 99% of Webpack boilerplates. Good luck figuring out how they are different, what features they support or don’t support, etc.
I haven’t had much experience with ts-loader (yet?), but I’ve already run into the following with awesome-typescript-loader.
Who cares about your configs. Part I
Project from scratch. Forgot to create tsconfig.json. No errors whatsoever, obviously. Webpack fails because it cannot parse the non-processed .ts file.
Why did .ts compilation fail? Was it due to a missing tsconfig.json? Wouldn’t it be just so nice if the tools involved could report this?
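For reference, a minimal tsconfig.json that would have avoided the silent failure above (the specific option values are illustrative, not prescriptive):

```json
{
  "compilerOptions": {
    "target": "es5",
    "module": "commonjs",
    "sourceMap": true
  }
}
```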
Who cares about your configs. Part II
When trying to clean up configs I decided to move tsconfig.json to a config directory. Provided path to this file as tsconfig option as per README.
Provided the following invalid option to the compiler in the tsconfig:
{
  "compilerOptions": {
    "target": "absdefg"
  }
}
The config above was accepted, and silently ignored. Everything got compiled. Was the config file even picked up? Well, webpack didn’t fail, so probably it was (see Part I). Who ignored the error? The TS compiler? awesome-typescript-loader? No one knows, and it is impossible to find out.
Third-party modules, do you speak it? (upd. 2016-11-16)
Sooner or later you will have to import modules written in or transpiled to Javascript. Unless you develop a library with no external dependencies (quite possible) or something that only depends on other libraries written in Typescript (highly unlikely).
So, you will find yourself typing something like this into your Typescript code:
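The incantation in question is an ambient module declaration, roughly like the following (the module name here is a stand-in for whatever untyped JS dependency you are importing):

```typescript
// typings.d.ts — a shorthand ambient module declaration: it tells the
// compiler "trust me, this module exists". Everything imported from it
// is typed as `any`, so you get zero actual type checking for it.
declare module "some-untyped-library";
```

After which `import * as lib from "some-untyped-library"` compiles, with `lib` typed as `any`.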
Because reasons. Go ahead and try to make sense of the ambient modules section in the docs.
Types, types everywhere
Libraries are developed in whatever language authors prefer. To provide proper static type checking Typescript needs more than the stub module definitions. It needs actual type definitions for libraries.
Thanks to countless contributors to DefinitelyTyped there are quite a few definitions Typescript can use.
Well,…for some definition of “can”.
> npm install @types/superagent
[default] /Users/dmitriid/Projects/keyflow/keyflow-website/app/node_modules/@types/superagent/index.d.ts:83:30
Cannot find name 'Promise'.
See, despite the fact that this is not an isolated problem, there are no solutions to it.
Well, except one: maybe try a newer version of npm, namely npm 3.x.
I personally have a problem with this. node.js has this thing called Node LTS, Long Term Support. And at the time of this writing it was:
> node -v
v4.6.0
> npm -v
2.15.9
I know, I know. I probably shouldn’t run cutting edge stuff on non-cutting-edge platforms, yada yada.
The problem is there, the problem exists. And, obviously, it exists for some modules and not for others (@types/react works, @types/superagent doesn’t, ad nauseam). Because reasons.
Wrapping up
I’m not sure this warrants a conclusion. I stopped detailing my experience as I was approaching the 4000-word mark. However, there are so many more things that misbehave, run amok, break in unpredictable ways, etc. etc. etc.
I’ll leave you with these quotes:
…never have I worked in an ecosystem where the knowledge attained while becoming a master of the craft goes out of date so rapidly, or where solutions are quite so brittle.
The JS world’s obsession with small tools mean that they combine in endless permutations, causing endless issues to debug.
…
When the tower of abstractions falls down in a steaming pile and you need to figure out what’s gone wrong and fix it, you then end up sinking hours and hours into it (all the more because you don’t really understand what’s going on, because you didn’t set it up from scratch).
Or you waste a month figuring out how to plug all the tools together. If I have a complex project I know full well I’m going to want code coverage, a proper module system, minification, etc. etc. The initial time investment to investigate the options here and get it all working is faintly ridiculous compared to ecosystems like Java or .NET (or even C++, for that matter).
I’m not even going to talk about the cavalier attitude various popular parts of the ecosystem (e.g. react-router) have towards API stability.
…the JavaScript community suffers from a very serious case of NIH syndrome, compounded by a neglect for long term sustainability of software projects.
Don’t get me wrong, every single language in the 20 or so years of web development has gone through the framework phase, where people would experiment with solutions for every one of those problems.
The difference is that every one of those languages very quickly converged into good solutions and then made those good solutions into effective tools to get things done.
…
The JavaScript community, otoh, doesn’t seem to get to the converging part…
It’s been said that “Javascript fatigue” appears because developers are lazy.
It’s been said that “Javascript fatigue” is because developers don’t want to learn anything new.
These arguments are null and void…
there’s nothing lazy in trying to make your build tool work
there are exactly zero useful things to learn from that experience
The time I spent trying to figure out the exact motions of all the moving parts I could spend on learning something genuinely new.
Instead, I now have a build toolchain that I have exactly zero confidence in (because it will break unexpectedly at the very next update of any of the fourteen hundred moving parts in it).
I kinda like Elixir. I even think that Elixir is a huge leap towards the future and all the right things that should be happening in the Erlang world (see related post here). I still get irritated by it :)
What follows is a highly opinionated … erm … opinion. I am right and you are wrong ;)
It’s not the syntax that’s important, it’s the consistency
I will be hung out to dry for this, then brought in and hung out to dry again. But here goes.
Let’s start with Erlang. Erlang has one of the most concise and consistent syntaxes in programming languages.
Erlang syntax primer
1: A sequence of expressions is separated by commas:
Var = fun_call(),
Var2 = fun_call2(), Var3 = fun_call3(Param1, Param2)
2: A pattern match after which we return a value is followed by an arrow. From a programmer’s point of view, there’s no difference whether this is a pattern match introduced by a function declaration or by a case, receive or if:
function_declaration(Param1, Param2) ->
    Var = fun_call(),
    Var2 = fun_call2(), Var3 = fun_call3(Param1, Param2),
    receive
        {some, pattern} ->
            do_one(),
            do_three(),
            do_four() %% the value of this call will be returned
    end.
3: Choices are separated by semicolons. Once again, there’s no real difference (at least to the programmer) between choices in function heads (we choose between different patterns) and choices in case, if or receive:
function1({a, b}) ->
    do_a_b();
function1({c, D}) ->
    case D of
        {d, e} -> do_c_d_e();
        {f, g} -> do_c_f_g()
    end.
4: There are also blocks, and a block is terminated by an end (there’s a bit of inconsistency in blocks themselves: begin...end, case of...end, try...catch...end, if...end). E.g.:
function() ->
    begin
        case X of
            b ->
                if true -> ok end
        end
    end.
What of Elixir?
And here’s my problem with Elixir: you always have to be aware of things, of what goes where and when.
do..end vs ->
def f(a, b) do
  case a do
    {:c, :d} ->
      do_a_b_c_d()
    {:e, :f} ->
      receive do
        {:x} -> do_smth()
        {:y} -> something_else()
      end
  end
end
What? Why is there suddenly an arrow? In a language where (almost) everything is a do...end, the arrow is jarringly out of place. For the longest time I honestly thought that the only thing separating pattern matches in a case statement was indentation. Which would be weird in a language where whitespace is insignificant.
There’s an argument that these are individual patterns within a do...end block separated by newlines and denoted by arrows (see here). cond, case and receive all use that. The argument is moot, however.
Here’s how you define an anonymous function in Elixir:
a_fun = fn x -> x end
This is an even weirder construct. It’s not a do...end. But it has an arrow and an end. Moreover, regular functions are also individual patterns within a do...end block:
defmodule M do
  def x(:x), do: :x
  def x(:y), do: :y
end
And here’s how you define similar constructs with arrows:
a_fun = fn
  :x -> :x
  :y -> :y
end

result = case some_var do
  :x -> :x
  :z -> :z
end
If the arrow is just syntactic sugar for do...end, why can’t it be used in other places? If it’s used for sequences of patterns, why can’t it be used for function declarations?
There’s definitely an argument for readability. Compare:
## arrows
case some_var do
  :x -> :x
  :z ->
    some
    multiline
    statement
    :z
end

## do..end as everywhere
case some_var do
  :x, do: :x
  :z do
    some
    multiline
    statement
    :z
  end
end
But then… Why not use the more readable arrows in other places as well? ;)
## regular do..end
defmodule M do
  def x(:x), do: :x
  def x(:y) do
    some
    multiline
    statement
    :z
  end
end

## arrows
defmodule M do
  def x(:x) -> :x
  def x(:y) ->
    some
    multiline
    statement
    :z
end
So, in my opinion, the arrow is a very weird, inconsistent, hardcoded piece of syntactic sugar. And it irritates me :)
Moving on
Parentheses are optional. Except they aren’t
One of the arguably useful features of Elixir is that function calls do not require parentheses:
> String.upcase("a")
"A"
> String.upcase "a"
"A"
Except that if you want to keep your sanity instead of juggling things in your mind, you’d be better off using the parentheses anyway.
## instead of writing this
Enum.map(List.flatten([1, [2], 3]), fn x -> x * 2 end)
## you can write this
[1, [2], 3] |> List.flatten |> Enum.map(fn x -> x * 2 end)
Whatever is on the left side of the pipe will be passed as the first argument to the whatever’s on the right. Then the result of that will be passed as the first argument to the next thing. And so on.
However. Notice the example above, taken straight from the docs. List.flatten, a function, is written without parentheses. Enum.map, also a function, requires parentheses. And here’s why:
> [1, [2], 3] |> List.flatten |> Enum.map fn x -> x * 2 end
iex:14: warning: you are piping into a function call without parentheses, which may be ambiguous. Please wrap the function you are piping into in parentheses. For example:

foo 1 |> bar 2 |> baz 3

should be written as:

foo(1) |> bar(2) |> baz(3)
This is just a warning, and things can get ambiguous. That’s why you’d better always use parentheses with pipes. It gets worse though.
Remember anonymous functions we talked about earlier? Well. Let me show an example.
Here’s how a function named flatten declared in module List behaves:
Because parentheses are optional, right. And functions are first class citizens, and there’s no difference if it’s a function or a variable holding a function, right? (In the example below &List.flatten/1 is equivalent to Erlang’s fun lists:flatten/1. It’s called a capture operator.)
> a_fun = List.flatten
** (UndefinedFunctionError) undefined function List.flatten/0
    (elixir) List.flatten()

> a_fun = &List.flatten
** (CompileError) iex:19: invalid args for &, expected an expression in the format of &Mod.fun/arity, &local/arity or a capture containing at least one argument as &1, got: List.flatten()

> a_fun = &List.flatten/1
&List.flatten/1

> a_fun([1, [2], 3])
** (CompileError) iex:21: undefined function a_fun/1

> a_fun.([1, [2], 3])
[1, 2, 3]
Yup. Because parentheses are optional.
If you think it’s not a big problem, and that anonymous or “captured” functions are not that common… They may not be common, but they are still used. An empty project using Phoenix:
Every time you use a “captured” or an anonymous function, you need to keep in mind this utterly useless bit of information: they are invoked differently from the rest of the functions, they require parentheses, etc. etc.
> 1 |> &(&1 * 2).()
** (ArgumentError) cannot pipe 1 into &(&1 * 2.()), can only pipe into local calls foo(), remote calls Foo.bar() or anonymous functions calls foo.()

> 1 |> (&(&1 * 2)).()
2
Local. Remote. Anonymous. Riiiight
Compare with Erlang:
1> lists:flatten([1, [2], 3]).
[1,2,3]
2> F = fun lists:flatten/1.
#Fun<lists.flatten.1>
3> F([1, [2], 3]).
[1,2,3]
Before we move on: calls without parentheses lead to the unfortunate situation where you have no idea what you are assigning, a variable’s value or the result of a function call.
some_var = x ## is it x or x(). Let's hope the IDE tells you
And yes, this also happens.
do..end vs ,do:
This has been explained to me here. I will still mention it, because it’s annoying the hell out of me and is definitely confusing to beginners. Basically, it’s one more thing to keep track of:
def oneliner(params), do: result

def multiliner(params) do
one
two
result
end
Yup. A comma and a colon in one case. No comma and no colon in the second case. Because syntactic sugar and reasons.
Disclaimer: I love Erlang. From 2005 to 2015 I ran erlanger.ru (mostly alone, as editor-in-chief, content creator, developer — you name it). It is my favourite language, and this isn’t going to change any time soon. Exactly because I love it, I see problems with it.
Problem statement
Other languages are getting better at being Erlang than Erlang is getting better at being other languages.
Ericsson’s Baby
Erlang is first and foremost Ericsson’s baby. Thus, enterprise and telecom are the two things that influence Erlang the most. It is Erlang’s greatest strength and its greatest weakness.
The enterprise side of things made sure that Erlang is one of the most stable languages out there:
Breaking changes, if any, are introduced very rarely, with clean and understandable upgrade paths often split over multiple releases.
New features are folded into the language or the standard library after a lot of careful deliberation and on an “absolutely needed” basis. They are also introduced gradually, often over the course of several releases.
The telecom side of things made sure that Erlang is also probably the language that got more things right from the start (or nearly from the start) than any other language:
lightweight processes
messaging
immutability, gen_* patterns in OTP etc.
The problem, however, is that this no longer matters.
A Brief, Incomplete, and Mostly Wrong History of Erlang
1990s: Erlang is lightyears ahead of anyone with processes, distribution, OTP etc.
early-to-mid 2000s: Erlang is on the bleeding edge with SMP, distribution
late 2000s to mid-2010s: Erlang is … mainstream at best
Java is the new Erlang
This may be hard to swallow, but Java is the new Erlang. And other languages either piggyback on Java or are developing fast to provide competition in many areas.
Let’s look at the most common tools that everyone uses or knows about in areas where Erlang is supposed to shine: Hadoop, Kafka, Mesos, ChaosMonkey, ElasticSearch, CloudStack, HBase, <insert your project>.
For every single one of your distributed needs you will take Java. Not Erlang.
In other rapidly developing fields such as IoT, there’s Java again, C/C++, Python. Rarely, if ever, Erlang.
And the trend will only continue. Server management and monitoring tools, messaging and distribution, parallel computing—you name it—are seeing an explosion of tools, frameworks and libraries in most languages, all of them encroaching on space originally thought to be reserved for Erlang.
Erlang Is the New … Nothing, Really
While others are encroaching on its territory, Erlang has really nothing to fight back with.
Even today Erlang is a very bare-bones language. An assembly, if you will, for writing distributed services. Even if you include OTP and third-party libraries. And, as a developer, you have to write those services from scratch yourself. Not that it’s a bad thing, oh no. But in a “move fast and break things” scenario having to write lots of things from scratch may break you, and not others.
Erlang is a language for mad scientists to write crazy stuff for other mad scientists to use or to be amazed at (see my lightning talk from 2012)
Anything else? Well,
There’s Java for anything related to distribution
There are Python’s great interfaces to scientific and statistical libraries
There’s basically any language for strings, images, graphs, database interfaces—anything, I tell you
This leaves Erlang… well, in limbo.
Is There Hope Yet?
I really really don’t know. I really hope though.
Due to its heritage Erlang will evolve slowly. It’s a given, and it’s no fault of the brilliant people who develop Erlang.
Other languages for the Erlang VM? Maybe. As we have seen with Elixir, in three years it’s got more development (both in the language itself and in the available libraries) than Erlang got in 10 years. We can only hope the drive will not subside. Will there be a “Hadoop.ex” to rival the Java Hadoop? I don’t know.
Incidentally, a lot of the praise that now comes towards Elixir mentions Erlang only in passing. As in “compiles to Erlang” or “runs on the Erlang VM”. And maybe this is exactly what Erlang needs: to become for (Elixir|LFE|Efene|…) what the JVM became for (Scala|Clojure|Groovy|…)?
Erlang is far from dead. It’s alive and kicking (some butt, if need be). Will it keep doing that for long? In a “let’s choose between languages A, B, and Erlang for scenario X” discussion, will there remain a scenario X where Erlang is still relevant and/or a valid choice?
I don’t know. I sure as hell hope.
I’m trying to do some good myself. I’m not bright enough to create something like the libraries mentioned here, but there’s neo4j-erlang and pegjs for what it’s worth.
Programmers are the worst. They will vehemently defend the tool of their choice even though it’s the worst tool in the universe. See any “vs” debate in the programming community. Tabs vs. spaces. Emacs vs. vim. C++ vs. PHP. You name it.
All joking aside, I think that Javascript programmers are even worse than the worst. Here’s a little true story that happened to me three weeks ago.
We have a project which started way before Webpack was in any usable shape or form. So, it uses Grunt. It’s all been fine, and Grunt has been chugging along quite happily (chugging, because, well, you need to concat all/some/some magic number of files before it can figure out what to do with them. Yes, concat. Sigh). Until we imported a small three-file component which was written in — wait for it — ES6.
ES6 is the next version of Javascript (erm, ECMAScript) which is supported in exactly zero browsers, and exactly zero versions of any of the Javascript virtual machines. Through the magic of transpilers it can be converted to a supported version of Javascript. Therefore half of the internet publishes their code in ES6 now.
Oh my. I know what we need! We need Babel! The best tool to convert pesky ES6 into shiny ESwhatever-the-version (I’m told it’s 5).
Install Babel. Run.
>> ParseError: ‘import’ and ‘export’ may appear only with ‘sourceType: module’
Warning: Error running grunt-browserify. Use --force to continue.
Ok. Bear with me. Babel, whose job 99% of the time consists of transforming ES6 code into ES5 code no longer does this out of the box. I have no idea if it does anything out of the box anymore.
But it comes with nice presets! One of them does the job! It’s even called, erm, es2015. Whatever. Specify it in the options, run grunt, be happy.
>> ParseError: ‘import’ and ‘export’ may appear only with ‘sourceType: module’
Warning: Error running grunt-browserify. Use --force to continue.
Ok. Bear with me. A preset is an umbrella name that specifies a list of plugins Babel will apply to code. If these plugins are not present, Babel will fail silently.
Oh, it doesn’t fail in your particular setup? Oh, how so very nice to be you. The problem is: there is exactly zero info on which of the moving parts of the entire system silently swallows the error.
Let’s step back for a second, and consider:
Grunt does not transform code, it’s a “task runner”
The task it runs is called browserify
Well, the problem with the browserify task is that the task runner cannot run it. It needs a plugin called grunt-browserify to do that
Oh, and browserify has to run babel on the source code before it can do anything with it.
And the problem is that browserify cannot run babel directly. It needs babelify to work
All in all the whole toolchain to produce a Javascript file is grunt -> grunt-browserify -> browserify -> babelify -> babel. And someone in that chain decides that babel missing all of its specified plugins is not a reason to throw an error, stop and report.
Unwind. Relax. Breathe. Solve differential equations in your head. Install whatever’s needed for babel. Run grunt.
>> ParseError: ‘import’ and ‘export’ may appear only with ‘sourceType: module’
Warning: Error running grunt-browserify. Use --force to continue.
Oh, Jesus Christ on a pony! What now? Ok. I’m a smart developer. I can figure this out. I mean, if I cannot debug my own toolchain, what good am I?
So, grunt has a -d option which means debug. Awesome.
Ok. Bear with me. The debug option passed to grunt does not propagate through the mess of twigs and sticks called a toolchain. Grunt does not pass the debug option to grunt-browserify, which does not pass the debug option to browserify, which does not pass the debug option to babelify, which does not pass the debug option to babel.
You have to provide separate debug options to every piece of the toolchain and pray to god that this piece provides such an option and does not ignore it.
Let’s add debug: true to babelify.
>> ParseError: ‘import’ and ‘export’ may appear only with ‘sourceType: module’
Warning: Error running grunt-browserify. Use --force to continue.
Exactly. Was the debug: true option ignored? Or is this all the debug info I can get? I have no idea.
Ok. Bear with me. Grunt has a specific list of files and components it needs to process. I can only assume that the broken ladder of crap called the toolchain gets this list from Grunt with instructions: “These are the files. Process them.”
Despite all that, Babel by default does not process files from node_modules. Even when invoked from grunt-whatever-the-plugin-is-i-dont-care, it will not process them, and will silently skip them. You have to explicitly provide a separate global: true option in all the places in the grunt -> grunt-browserify -> babelify config where you think code from node_modules may be imported/invoked/whatever.
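Pieced together, the working configuration ends up looking something like this Gruntfile fragment (option names reflect the 2016-era grunt-browserify/babelify versions; treat it as a sketch, not gospel, and note that the file paths are invented for illustration):

```javascript
// Gruntfile.js (fragment) — the grunt -> grunt-browserify -> browserify
// -> babelify -> babel chain, with every hard-won lesson spelled out.
browserify: {
  dist: {
    src: ["app/index.js"],        // hypothetical entry point
    dest: "build/bundle.js",
    options: {
      transform: [
        ["babelify", {
          presets: ["es2015"],    // without this, Babel silently does nothing
          global: true            // without this, node_modules is silently skipped
        }]
      ],
      browserifyOptions: {
        debug: true               // source maps; grunt -d does NOT imply this
      }
    }
  }
}
```

Five tools, three places to repeat yourself, and zero of it discoverable from the error messages above.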
No, it’s Stockholm syndrome
I’ve been told that this article is a valid argument against “Javascript fatigue”.
It’s not. It’s Stockholm syndrome.
Don’t try to know everything
I’m not “using everything”. Only the absolute minimum I need to do my job. Going as far as rewriting or reimplementing things that are overly bloated.
Wait for the critical mass.
How is Grunt, browserify, Babel not critical mass?
Do exploratory toy projects
Believe me, I do. It is nigh impossible to start a toy project these days, unless you blindly copy paste an umpteenth webpack-starter-kit-for-reals-this-time-works-I-promise and pray that it works for the current minor and major versions of all the moving parts involved. Unless you have your one config file, faithfully copy-pasted all over the place.
Diversify in life
I do kendo, aikido, contemporary dance, salsa, and amateur Russian theater. How’s that for diversification? How can any of these help me with the abomination in the first part of this post?
You can always go back to fundamentals
How very condescending of you. I wrote articles (albeit in Russian) on fundamentals.
Nothing can excuse the terrible horrible mess that the state of Javascript development is in right now. Step back, and look at your tools. Really look at them. There is a reason we have a million “webpack starter packs”. Because nothing works unless you invoke a number of semi-arcane incantations with increasingly inane combinations and versions of options.
I will not even go into how half of these tools don’t support recursing directories or globs. Or how another half of them doesn’t support monitoring the file system and recompiling stuff on the fly (what? recompiling on the fly with dependency tracking wat?).
Why are you supporting this?
I’m not lazy. I don’t not want to learn
It’s been said that “Javascript fatigue” appears because developers are lazy.
It’s been said that “Javascript fatigue” is because developers don’t want to learn anything new.
These arguments are null and void. If you read the first part of the story, you’ve seen that:
there’s nothing lazy in trying to make your build tool work
there are exactly zero useful things to learn from that experience
The time I spent trying to figure out the exact motions of all the moving parts I could spend on learning something genuinely new.
Instead, I now have a build toolchain that I have exactly zero confidence in (because it will break unexpectedly at the very next update of any of the fourteen hundred moving parts in it).
And yes, I will be removing some of those moving parts. It doesn’t mean that I will enjoy it or learn anything remotely useful from it.
No, this is not really a post about the upcoming Designing for Scalability with Erlang/OTP. Erlang is nearly unique among other programming languages in that almost all of the books on it are in the good to great range. This book is going to be no exception.
No, this is not about Erlang per se, as other languages have the same problem. But Erlang is a poster child for “scalable distributed 99.999999% cloud developer-to-user ratio <insert the next current rave here>”.
Recently I tweeted:
Every single book on #Erlang spends 99% of text on reiterating the same basic principles. Most of them are worse than LYSE
This is a bit harsh. Maybe not 99%. And not necessarily worse. So, what am I complaining about? Most books mostly concern themselves with “draw some circles”. Reality is quite often “draw the rest of the owl”. There is almost no info on the intermediate steps in between.
Scalability is the capability of a system, network, or process to handle a growing amount of work, or its potential to be enlarged in order to accommodate that growth.
To scale horizontally (or scale out) means to add more nodes to a system, such as adding a new computer to a distributed software application.
To scale vertically (or scale up) means to add resources to a single node in a system, typically involving the addition of CPUs or memory to a single computer.
So, in the case of Erlang, which is described as, praised for being, marketed as being scalable, distributed, durable, resilient etc. etc. it would be really nice to read up on at least some of these things:
setting up multiple nodes
testing a distributed app
deploying a distributed app
handling failover
handling load balancing
handling netsplits (and not only in Mnesia. If we can add a process on node B to a gen_supervisor on node A, how do we handle netsplits, timeouts, restarts etc.?)
discovery of nodes
tracing
profiling
various VM options and their impact
securing connection between nodes
logging
debugging
crash dumps
remote inspection
mitigating overflowing mailboxes
SSL
sockets
working from behind firewalls
flood protection
slow requests
timeouts
sessions
latency
<add your own>
Funnily enough, out-of-the-box Erlang:
has zero answers to some of these questions, so it would be nice to find out how they can be solved (consul/etcd for node discovery? how? and similar questions)
already has excellent tools, but very little info on the best way to use them: profiling, tracing, remote inspection (redbug, do you use it? etc.)
has performance problems with poorly documented workarounds or third-party solutions for some situations (running across thousands of nodes or thousands of CPUs comes to mind)
has support for some scenarios, but very little info on them (there’s a total of about 1000 words on large-scale testing with Common Test in the docs, for example)
Unfortunately, most existing books emphasize one single aspect of Erlang beyond measure: OTP. Even though OTP is no doubt essential to creating robust scalable distributed applications, people looking for the answers to the questions above already grok OTP :) They already know how to draw the circles.
If you’re looking for answers, though, the landscape in Erlang books is quite bleak.
Among the commercially available ones, it looks like only “Mastering Erlang: Writing Real World Applications” doesn’t fall into the trap of spending all of its chapters on OTP with only one or two chapters dedicated to something else. (Update: I’m not even sure this book exists. It looks like it’s been “Coming soon” since 2010. Except perhaps here)
The other notable exception is the excellent “Erlang in Anger” which answers quite a lot of the questions above.
We’re quite ready to move beyond drawing circles. Some of us are not yet ready to draw the entire owl ;) Hjälp!
Translation of a Russian joke about Putin's answers in his yearly press-conferences.
Q: Vladimir Vladimirovich, what is two times two?
A: I'll be brief. You know, just the other day I was at the Russian Academy of Sciences
and had a discussion with many scientists there, including young scientists,
all of them very bright, by the way. As it happens, we touched upon the present problem,
discussed the current state of the country's economy; they also described their plans
for the future. Of course, the number one priority for them is the problem of relevancy;
also, just as important is the question of housing loans, but I can assure you that all
these problems can be solved and we will direct all our efforts to their resolution in
the nearest future.
Among other things this also applies to the subject you raised in your question.