1 You Don’t Know JS: Async & Performance

3 Appendix A: asynquence Library

Chapters 1 and 2 went into quite a bit of detail about typical asynchronous programming patterns and how they’re commonly solved with callbacks. But we also saw why callbacks are fatally limited in capability, which led us to Chapters 3 and 4, with Promises and generators offering a much more solid, trustable, and reason-able base to build your asynchrony on.

I referenced my own asynchronous library asynquence (http://github.com/getify/asynquence) – “async” + “sequence” = “asynquence” – several times in this book, and I want to now briefly explain how it works and why its unique design is important and helpful.

In the next appendix, we’ll explore some advanced async patterns, but you’ll probably want a library to make those palatable enough to be useful. We’ll use asynquence to express those patterns, so you’ll want to spend a little time here getting to know the library first.

asynquence is obviously not the only option for good async coding; certainly there are many great libraries in this space. But asynquence provides a unique perspective by combining the best of all these patterns into a single library, and moreover is built on a single basic abstraction: the (async) sequence.

My premise is that sophisticated JS programs often need bits and pieces of various different asynchronous patterns woven together, and this is usually left entirely up to each developer to figure out. Instead of having to bring in two or more different async libraries that focus on different aspects of asynchrony, asynquence unifies them into variated sequence steps, with just one core library to learn and deploy.

I believe the value is strong enough with asynquence to make async flow control programming with Promise-style semantics super easy to accomplish, so that’s why we’ll exclusively focus on that library here.

To begin, I’ll explain the design principles behind asynquence, and then we’ll illustrate how its API works with code examples.

3.1 Sequences, Abstraction Design

Understanding asynquence begins with understanding a fundamental abstraction: any series of steps for a task, whether they separately are synchronous or asynchronous, can be collectively thought of as a “sequence”. In other words, a sequence is a container that represents a task, and is comprised of individual (potentially async) steps to complete that task.

Each step in the sequence is controlled under the covers by a Promise (see Chapter 3). That is, every step you add to a sequence implicitly creates a Promise that is wired onto the end of the sequence. Because of the semantics of Promises, every single step advancement in a sequence is asynchronous, even if you synchronously complete the step.

Moreover, a sequence will always proceed linearly from step to step, meaning that step 2 always comes after step 1 finishes, and so on.

Of course, a new sequence can be forked off an existing sequence, meaning the fork only occurs once the main sequence reaches that point in the flow. Sequences can also be combined in various ways, including having one sequence subsumed by another sequence at a particular point in the flow.

A sequence is kind of like a Promise chain. However, with Promise chains, there is no “handle” to grab that references the entire chain. Whichever Promise you have a reference to only represents the current step in the chain plus any other steps hanging off it. Essentially, you cannot hold a reference to a Promise chain unless you hold a reference to the first Promise in the chain.
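
To see this concretely with native Promises: each then(..) call returns a brand-new promise, so unless you kept the very first promise around yourself, there is no object that stands for the chain as a whole:

```javascript
// Each then(..) returns a new promise representing only that step
// (plus whatever hangs off it); no object represents the whole chain.
var head = Promise.resolve( 1 );

var tail = head
    .then( function(v){ return v + 1; } )
    .then( function(v){ return v * 10; } );

// `tail` has no back-reference to `head`; if you didn't keep
// `head` yourself, the chain as a unit is unreachable.
tail.then( function(v){
    console.log( v );   // 20
} );
```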

There are many cases where it turns out to be quite useful to have a handle that references the entire sequence collectively. The most important of those cases is with sequence abort/cancel. As we covered extensively in Chapter 3, Promises themselves should never be able to be canceled, as this violates a fundamental design imperative: external immutability.

But sequences have no such immutability design principle, mostly because sequences are not passed around as future-value containers that need immutable value semantics. So sequences are the proper level of abstraction to handle abort/cancel behavior. asynquence sequences can be abort()ed at any time, and the sequence will stop at that point and not proceed, for any reason.
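
To sketch why a whole-sequence handle makes cancelation natural, here's a minimal conceptual model in plain Promises; this is not how asynquence is implemented, and makeSequence(..) is a hypothetical helper invented just for illustration. The idea is that a handle owning the whole chain can flip one shared flag that every future step checks:

```javascript
// Hypothetical sketch (NOT asynquence's implementation): a handle that
// owns the whole chain, so abort() can stop all future steps by
// flipping one shared flag that each step checks before running.
function makeSequence() {
    var aborted = false;
    var chain = Promise.resolve();

    return {
        then: function(step){
            chain = chain.then( function(v){
                // skip the step entirely if the sequence was aborted
                if (!aborted) return step( v );
            } );
            return this;
        },
        abort: function(){ aborted = true; }
    };
}

var log = [];
var sq = makeSequence()
    .then( function(){ log.push( "step 1" ); } )
    .then( function(){ log.push( "step 2" ); } );

// abort before the microtask queue runs the steps: neither executes
sq.abort();
```

With asynquence itself, none of this machinery is your concern; you simply call sq.abort() on the sequence handle.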

There’s plenty more reasons to prefer a sequence abstraction on top of Promise chains, for flow control purposes.

First, Promise chaining is a rather manual process – one that can get pretty tedious once you start creating and chaining Promises across a wide swath of your programs – and this tedium can act counterproductively to dissuade the developer from using Promises in places where they are quite appropriate.

Abstractions are meant to reduce boilerplate and tedium, so the sequence abstraction is a good solution to this problem. With Promises, your focus is on the individual step, and there’s little assumption that you will keep the chain going. With sequences, the opposite approach is taken, assuming the sequence will keep having more steps added indefinitely.

This abstraction complexity reduction is especially powerful when you start thinking about higher-order Promise patterns (beyond race([..]) and all([..])).

For example, in the middle of a sequence, you may want to express a step that is conceptually like a try..catch in that the step will always result in success, either the intended main success resolution or a positive nonerror signal for the caught error. Or, you might want to express a step that is like a retry/until loop, where it keeps trying the same step over and over until success occurs.

These sorts of abstractions are quite nontrivial to express using only Promise primitives, and doing so in the middle of an existing Promise chain is not pretty. But if you abstract your thinking to a sequence, and consider a step as a wrapper around a Promise, that step wrapper can hide such details, freeing you to think about the flow control in the most sensible way without being bothered by the details.
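
As a taste of what such a step wrapper hides, here's a sketch of a try..catch-like step built directly on raw Promises; tryStep(..) is a hypothetical helper invented for this illustration, converting a rejection into a success value shaped like { catch: .. }:

```javascript
// Sketch: a try..catch-like step over raw Promises. A failure is
// transformed into a *success* value of the form { catch: reason },
// so the chain always proceeds down the fulfillment path.
function tryStep(p) {
    return Promise.resolve( p ).then(
        function(v){ return v; },
        function(err){ return { catch: err }; }
    );
}

tryStep( Promise.reject( "Oops" ) )
.then( function(v){
    console.log( v );   // { catch: "Oops" }
} );
```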

Second, and perhaps more importantly, thinking of async flow control in terms of steps in a sequence allows you to abstract out the details of what types of asynchronicity are involved with each individual step. Under the covers, a Promise will always control the step, but above the covers, that step can look either like a continuation callback (the simple default), or like a real Promise, or as a run-to-completion generator, or … Hopefully, you get the picture.

Third, sequences can more easily be twisted to adapt to different modes of thinking, such as event-, stream-, or reactive-based coding. asynquence provides a pattern I call “reactive sequences” (which we’ll cover later) as a variation on the “reactive observable” ideas in RxJS (“Reactive Extensions”), that lets a repeatable event fire off a new sequence instance each time. Promises are one-shot-only, so it’s quite awkward to express repetitious asynchrony with Promises alone.
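
The one-shot nature of Promises is easy to verify: a promise latches onto its first resolution and silently ignores all later ones, so its fulfillment handler can only ever run once:

```javascript
// Promises are one-shot: only the first resolve(..) counts, and the
// fulfillment handler runs exactly once, no matter how many times
// the underlying "event" fires.
var calls = 0;

var p = new Promise( function(resolve){
    resolve( "first" );
    resolve( "second" );    // silently ignored
} );

p.then( function(v){
    calls++;
    console.log( v );       // first
} );
```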

Another alternate mode of thinking inverts the resolution/control capability in a pattern I call “iterable sequences”. Instead of each individual step internally controlling its own completion (and thus advancement of the sequence), the sequence is inverted so the advancement control is through an external iterator, and each step in the iterable sequence just responds to the next(..) iterator control.

We’ll explore all of these different variations as we go throughout the rest of this appendix, so don’t worry if we ran over those bits far too quickly just now.

The takeaway is that sequences are a more powerful and sensible abstraction for complex asynchrony than just Promises (Promise chains) or just generators, and asynquence is designed to express that abstraction with just the right level of sugar to make async programming more understandable and more enjoyable.

3.2 asynquence API

To start off, the way you create a sequence (an asynquence instance) is with the ASQ(..) function. An ASQ() call with no parameters creates an empty initial sequence, whereas passing one or more values or functions to ASQ(..) sets up the sequence with each argument representing the initial steps of the sequence.

Note: For the purposes of all code examples here, I will use the asynquence top-level identifier in global browser usage: ASQ. If you include and use asynquence through a module system (browser or server), you of course can define whichever symbol you prefer, and asynquence won’t care!

Many of the API methods discussed here are built into the core of asynquence, but others are provided through including the optional “contrib” plug-ins package. See the documentation for asynquence for whether a method is built in or defined via plug-in: http://github.com/getify/asynquence

3.2.1 Steps

If a function represents a normal step in the sequence, that function is invoked with the first parameter being the continuation callback, and any subsequent parameters being any messages passed on from the previous step. The step will not complete until the continuation callback is called. Once it’s called, any arguments you pass to it will be sent along as messages to the next step in the sequence.

To add an additional normal step to the sequence, call then(..) (which has essentially the exact same semantics as the ASQ(..) call):

ASQ(
    // step 1
    function(done){
        setTimeout( function(){
            done( "Hello" );
        }, 100 );
    },
    // step 2
    function(done,greeting) {
        setTimeout( function(){
            done( greeting + " World" );
        }, 100 );
    }
)
// step 3
.then( function(done,msg){
    setTimeout( function(){
        done( msg.toUpperCase() );
    }, 100 );
} )
// step 4
.then( function(done,msg){
    console.log( msg );         // HELLO WORLD
} );

Note: Though the name then(..) is identical to the native Promises API, this then(..) is different. You can pass as few or as many functions or values to then(..) as you’d like, and each is taken as a separate step. There’s no two-callback fulfilled/rejected semantics involved.

Unlike with Promises, where to chain one Promise to the next you have to create and return that Promise from a then(..) fulfillment handler, with asynquence, all you need to do is call the continuation callback – I always call it done() but you can name it whatever suits you – and optionally pass it completion messages as arguments.

Each step defined by then(..) is assumed to be asynchronous. If you have a step that’s synchronous, you can either just call done(..) right away, or you can use the simpler val(..) step helper:

// step 1 (sync)
ASQ( function(done){
    done( "Hello" );    // manually synchronous
} )
// step 2 (sync)
.val( function(greeting){
    return greeting + " World";
} )
// step 3 (async)
.then( function(done,msg){
    setTimeout( function(){
        done( msg.toUpperCase() );
    }, 100 );
} )
// step 4 (sync)
.val( function(msg){
    console.log( msg );
} );

As you can see, val(..)-invoked steps don’t receive a continuation callback, as that part is assumed for you – and the parameter list is less cluttered as a result! To send a message along to the next step, you simply use return.

Think of val(..) as representing a synchronous “value-only” step, which is useful for synchronous value operations, logging, and the like.

3.2.2 Errors

One important difference with asynquence compared to Promises is with error handling.

With Promises, each individual Promise (step) in a chain can have its own independent error, and each subsequent step has the ability to handle the error or not. The main reason for this semantic comes (again) from the focus on individual Promises rather than on the chain (sequence) as a whole.

I believe that most of the time, an error in one part of a sequence is generally not recoverable, so the subsequent steps in the sequence are moot and should be skipped. So, by default, an error at any step of a sequence throws the entire sequence into error mode, and the rest of the normal steps are ignored.

If you do need to have a step where its error is recoverable, there are several different API methods that can accommodate, such as try(..) – previously mentioned as a kind of try..catch step – or until(..) – a retry loop that keeps attempting the step until it succeeds or you manually break() the loop. asynquence even has pThen(..) and pCatch(..) methods, which work identically to how normal Promise then(..) and catch(..) work (see Chapter 3), so you can do localized mid-sequence error handling if you so choose.

The point is, you have both options, but the more common one in my experience is the default. With Promises, to get a chain of steps to ignore all steps once an error occurs, you have to take care not to register a rejection handler at any step; otherwise, that error gets swallowed as handled, and the sequence may continue (perhaps unexpectedly). This kind of desired behavior is a bit awkward to properly and reliably handle.
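
Here's that Promise pitfall concretely: registering a mid-chain rejection handler marks the error as handled, and the chain carries on down the success path:

```javascript
// A mid-chain rejection handler "swallows" the error as handled,
// so subsequent steps keep running on the success path.
var result;

var p = Promise.reject( new Error( "Oops" ) )
.catch( function(err){
    // the error is now considered handled...
    return "recovered";
} )
.then( function(msg){
    // ...so this step runs (perhaps unexpectedly)
    result = msg;
} );
```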

To register a sequence error notification handler, asynquence provides an or(..) sequence method, which also has an alias of onerror(..). You can call this method anywhere in the sequence, and you can register as many handlers as you’d like. That makes it easy for multiple different consumers to listen in on a sequence to know if it failed or not; it’s kind of like an error event handler in that respect.

Just like with Promises, all JS exceptions become sequence errors, or you can programmatically signal a sequence error:

var sq = ASQ( function(done){
    setTimeout( function(){
        // signal an error for the sequence
        done.fail( "Oops" );
    }, 100 );
} )
.then( function(done){
    // will never get here
} )
.or( function(err){
    console.log( err );         // Oops
} )
.then( function(done){
    // won't get here either
} );

// later

sq.or( function(err){
    console.log( err );         // Oops
} );

Another really important difference with error handling in asynquence compared to native Promises is the default behavior of “unhandled exceptions”. As we discussed at length in Chapter 3, a rejected Promise without a registered rejection handler will just silently hold (aka swallow) the error; you have to remember to always end a chain with a final catch(..).

In asynquence, the assumption is reversed.

If an error occurs on a sequence, and it at that moment has no error handlers registered, the error is reported to the console. In other words, unhandled rejections are by default always reported so as not to be swallowed and missed.

As soon as you register an error handler against a sequence, it opts that sequence out of such reporting, to prevent duplicate noise.

There may, in fact, be cases where you want to create a sequence that may go into the error state before you have a chance to register the handler. This isn’t common, but it can happen from time to time.

In those cases, you can also opt a sequence instance out of error reporting by calling defer() on the sequence. You should only opt out of error reporting if you are sure that you’re going to eventually handle such errors:

var sq1 = ASQ( function(done){
    doesnt.Exist();         // will throw exception to console
} );

var sq2 = ASQ( function(done){
    doesnt.Exist();         // will throw only a sequence error
} )
// opt-out of error reporting
.defer();

setTimeout( function(){
    sq1.or( function(err){
        console.log( err ); // ReferenceError
    } );

    sq2.or( function(err){
        console.log( err ); // ReferenceError
    } );
}, 100 );

// ReferenceError (from sq1)

This is better error handling behavior than Promises themselves have, because it’s the Pit of Success, not the Pit of Failure (see Chapter 3).

Note: If a sequence is piped into (aka subsumed by) another sequence – see “Combining Sequences” for a complete description – then the source sequence is opted out of error reporting, but now the target sequence’s error reporting or lack thereof must be considered.

3.2.3 Parallel Steps

Not all steps in your sequences will have just a single (async) task to perform; some will need to perform multiple steps “in parallel” (concurrently). A step in a sequence in which multiple substeps are processing concurrently is called a gate(..) – there’s an all(..) alias if you prefer – and is directly symmetric to native Promise.all([..]).

If all the steps in the gate(..) complete successfully, all success messages will be passed to the next sequence step. If any of them generate errors, the whole sequence immediately goes into an error state.

Consider:

ASQ( function(done){
    setTimeout( done, 100 );
} )
.gate(
    function(done){
        setTimeout( function(){
            done( "Hello" );
        }, 100 );
    },
    function(done){
        setTimeout( function(){
            done( "World", "!" );
        }, 100 );
    }
)
.val( function(msg1,msg2){
    console.log( msg1 );    // Hello
    console.log( msg2 );    // [ "World", "!" ]
} );

For illustration, let’s compare that example to native Promises:

new Promise( function(resolve,reject){
    setTimeout( resolve, 100 );
} )
.then( function(){
    return Promise.all( [
        new Promise( function(resolve,reject){
            setTimeout( function(){
                resolve( "Hello" );
            }, 100 );
        } ),
        new Promise( function(resolve,reject){
            setTimeout( function(){
                // note: we need a [ ] array here
                resolve( [ "World", "!" ] );
            }, 100 );
        } )
    ] );
} )
.then( function(msgs){
    console.log( msgs[0] ); // Hello
    console.log( msgs[1] ); // [ "World", "!" ]
} );

Yuck. Promises require a lot more boilerplate overhead to express the same asynchronous flow control. That’s a great illustration of why the asynquence API and abstraction make dealing with Promise steps a lot nicer. The improvement only grows as your asynchrony becomes more complex.

3.2.3.1 Step Variations

There are several variations in the contrib plug-ins on asynquence’s gate(..) step type that can be quite helpful:

  • any(..) is like gate(..), except just one segment has to eventually succeed to proceed on the main sequence.
  • first(..) is like any(..), except as soon as any segment succeeds, the main sequence proceeds (ignoring subsequent results from other segments).
  • race(..) (symmetric with Promise.race([..])) is like first(..), except the main sequence proceeds as soon as any segment completes (either success or failure).
  • last(..) is like any(..), except only the latest segment to complete successfully sends its message(s) along to the main sequence.
  • none(..) is the inverse of gate(..): the main sequence proceeds only if all the segments fail (with all segment error message(s) transposed as success message(s) and vice versa).

Let’s first define some helpers to make illustration cleaner:

function success1(done) {
    setTimeout( function(){
        done( 1 );
    }, 100 );
}

function success2(done) {
    setTimeout( function(){
        done( 2 );
    }, 100 );
}

function failure3(done) {
    setTimeout( function(){
        done.fail( 3 );
    }, 100 );
}

function output(msg) {
    console.log( msg );
}

Now, let’s demonstrate these gate(..) step variations:

ASQ().race(
    failure3,
    success1
)
.or( output );      // 3


ASQ().any(
    success1,
    failure3,
    success2
)
.val( function(){
    var args = [].slice.call( arguments );
    console.log(
        args        // [ 1, undefined, 2 ]
    );
} );


ASQ().first(
    failure3,
    success1,
    success2
)
.val( output );     // 1


ASQ().last(
    failure3,
    success1,
    success2
)
.val( output );     // 2

ASQ().none(
    failure3
)
.val( output )      // 3
.none(
    failure3,
    success1
)
.or( output );      // 1

Another step variation is map(..), which lets you asynchronously map elements of an array to different values, and the step doesn’t proceed until all the mappings are complete. map(..) is very similar to gate(..), except it gets the initial values from an array instead of from separately specified functions, and also because you define a single function callback to operate on each value:

function double(x,done) {
    setTimeout( function(){
        done( x * 2 );
    }, 100 );
}

ASQ().map( [1,2,3], double )
.val( output );                 // [2,4,6]

Also, map(..) can receive either of its parameters (the array or the callback) from messages passed from the previous step:

function plusOne(x,done) {
    setTimeout( function(){
        done( x + 1 );
    }, 100 );
}

ASQ( [1,2,3] )
.map( double )          // message `[1,2,3]` comes in
.map( plusOne )         // message `[2,4,6]` comes in
.val( output );         // [3,5,7]

Another variation is waterfall(..), which is kind of like a mixture of gate(..)’s message collection behavior and then(..)’s sequential processing.

Step 1 is first executed, then the success message from step 1 is given to step 2, and then both success messages go to step 3, and then all three success messages go to step 4, and so on, such that the messages sort of collect and cascade down the “waterfall”.

Consider:

function double(done) {
    var args = [].slice.call( arguments, 1 );
    console.log( args );

    setTimeout( function(){
        done( args[args.length - 1] * 2 );
    }, 100 );
}

ASQ( 3 )
.waterfall(
    double,                 // [ 3 ]
    double,                 // [ 6 ]
    double,                 // [ 6, 12 ]
    double                  // [ 6, 12, 24 ]
)
.val( function(){
    var args = [].slice.call( arguments );
    console.log( args );    // [ 6, 12, 24, 48 ]
} );

If at any point in the “waterfall” an error occurs, the whole sequence immediately goes into an error state.

3.2.3.2 Error Tolerance

Sometimes you want to manage errors at the step level and not let them necessarily send the whole sequence into the error state. asynquence offers two step variations for that purpose.

try(..) attempts a step, and if it succeeds, the sequence proceeds as normal, but if the step fails, the failure is turned into a success message formatted as { catch: .. } with the error message(s) filled in:

ASQ()
.try( success1 )
.val( output )          // 1
.try( failure3 )
.val( output )          // { catch: 3 }
.or( function(err){
    // never gets here
} );

You could instead set up a retry loop using until(..), which tries the step and if it fails, retries the step again on the next event loop tick, and so on.

This retry loop can continue indefinitely, but if you want to break out of the loop, you can call the break() flag on the completion trigger, which sends the main sequence into an error state:

var count = 0;

ASQ( 3 )
.until( double )
.val( output )                  // 6
.until( function(done){
    count++;

    setTimeout( function(){
        if (count < 5) {
            done.fail();
        }
        else {
            // break out of the `until(..)` retry loop
            done.break( "Oops" );
        }
    }, 100 );
} )
.or( output );                  // Oops

3.2.3.3 Promise-Style Steps

If you would prefer to have, inline in your sequence, Promise-style semantics like Promises’ then(..) and catch(..) (see Chapter 3), you can use the pThen and pCatch plug-ins:

ASQ( 21 )
.pThen( function(msg){
    return msg * 2;
} )
.pThen( output )                // 42
.pThen( function(){
    // throw an exception
    doesnt.Exist();
} )
.pCatch( function(err){
    // caught the exception (rejection)
    console.log( err );         // ReferenceError
} )
.val( function(){
    // main sequence is back in a
    // success state because previous
    // exception was caught by
    // `pCatch(..)`
} );

pThen(..) and pCatch(..) are designed to run in the sequence, but behave as if it was a normal Promise chain. As such, you can either resolve genuine Promises or asynquence sequences from the “fulfillment” handler passed to pThen(..) (see Chapter 3).

3.2.4 Forking Sequences

One feature that can be quite useful about Promises is that you can attach multiple then(..) handler registrations to the same promise, effectively “forking” the flow-control at that promise:

var p = Promise.resolve( 21 );

// fork 1 (from `p`)
p.then( function(msg){
    return msg * 2;
} )
.then( function(msg){
    console.log( msg );     // 42
} )

// fork 2 (from `p`)
p.then( function(msg){
    console.log( msg );     // 21
} );

The same “forking” is easy in asynquence with fork():

var sq = ASQ(..).then(..).then(..);

var sq2 = sq.fork();

// fork 1
sq.then(..)..;

// fork 2
sq2.then(..)..;

3.2.5 Combining Sequences

The reverse of fork()ing, you can combine two sequences by subsuming one into another, using the seq(..) instance method:

var sq = ASQ( function(done){
    setTimeout( function(){
        done( "Hello World" );
    }, 200 );
} );

ASQ( function(done){
    setTimeout( done, 100 );
} )
// subsume `sq` sequence into this sequence
.seq( sq )
.val( function(msg){
    console.log( msg );     // Hello World
} );

seq(..) can either accept a sequence itself, as shown here, or a function. If a function, it’s expected that the function when called will return a sequence, so the preceding code could have been done with:

// ..
.seq( function(){
    return sq;
} )
// ..

Also, that step could instead have been accomplished with a pipe(..):

// ..
.then( function(done){
    // pipe `sq` into the `done` continuation callback
    sq.pipe( done );
} )
// ..

When a sequence is subsumed, both its success message stream and its error stream are piped in.

Note: As mentioned in an earlier note, piping (manually with pipe(..) or automatically with seq(..)) opts the source sequence out of error-reporting, but doesn’t affect the error reporting status of the target sequence.

3.3 Value and Error Sequences

If any step of a sequence is just a normal value, that value is just mapped to that step’s completion message:

var sq = ASQ( 42 );

sq.val( function(msg){
    console.log( msg );     // 42
} );

If you want to make a sequence that’s automatically errored:

var sq = ASQ.failed( "Oops" );

ASQ()
.seq( sq )
.val( function(msg){
    // won't get here
} )
.or( function(err){
    console.log( err );     // Oops
} );

You also may want to automatically create a delayed-value or a delayed-error sequence. Using the after and failAfter contrib plug-ins, this is easy:

var sq1 = ASQ.after( 100, "Hello", "World" );
var sq2 = ASQ.failAfter( 100, "Oops" );

sq1.val( function(msg1,msg2){
    console.log( msg1, msg2 );      // Hello World
} );

sq2.or( function(err){
    console.log( err );             // Oops
} );

You can also insert a delay in the middle of a sequence using after(..):

ASQ( 42 )
// insert a delay into the sequence
.after( 100 )
.val( function(msg){
    console.log( msg );     // 42
} );

3.4 Promises and Callbacks

I think asynquence sequences provide a lot of value on top of native Promises, and for the most part you’ll find it more pleasant and more powerful to work at that level of abstraction. However, integrating asynquence with other non-asynquence code will be a reality.

You can easily subsume a promise (e.g., thenable – see Chapter 3) into a sequence using the promise(..) instance method:

var p = Promise.resolve( 42 );

ASQ()
.promise( p )           // could also: `function(){ return p; }`
.val( function(msg){
    console.log( msg ); // 42
} );

And to go the opposite direction and fork/vend a promise from a sequence at a certain step, use the toPromise contrib plug-in:

var sq = ASQ.after( 100, "Hello World" );

sq.toPromise()
// this is a standard promise chain now
.then( function(msg){
    return msg.toUpperCase();
} )
.then( function(msg){
    console.log( msg );     // HELLO WORLD
} );

To adapt asynquence to systems using callbacks, there are several helper facilities. To automatically generate an “error-first style” callback from your sequence to wire into a callback-oriented utility, use errfcb:

var sq = ASQ( function(done){
    // note: expecting "error-first style" callback
    someAsyncFuncWithCB( 1, 2, done.errfcb );
} )
.val( function(msg){
    // ..
} )
.or( function(err){
    // ..
} );

// note: expecting "error-first style" callback
anotherAsyncFuncWithCB( 1, 2, sq.errfcb() );

You also may want to create a sequence-wrapped version of a utility – compare to “promisory” in Chapter 3 and “thunkory” in Chapter 4 – and asynquence provides ASQ.wrap(..) for that purpose:

var coolUtility = ASQ.wrap( someAsyncFuncWithCB );

coolUtility( 1, 2 )
.val( function(msg){
    // ..
} )
.or( function(err){
    // ..
} );

Note: For the sake of clarity (and for fun!), let’s coin yet another term, for a sequence-producing function that comes from ASQ.wrap(..), like coolUtility here. I propose “sequory” (“sequence” + “factory”).

3.5 Iterable Sequences

The normal paradigm for a sequence is that each step is responsible for completing itself, which is what advances the sequence. Promises work the same way.

The unfortunate part is that sometimes you need external control over a Promise/step, which leads to awkward “capability extraction”.

Consider this Promises example:

var domready = new Promise( function(resolve,reject){
    // don't want to put this here, because
    // it belongs logically in another part
    // of the code
    document.addEventListener( "DOMContentLoaded", resolve );
} );

// ..

domready.then( function(){
    // DOM is ready!
} );

The “capability extraction” anti-pattern with Promises looks like this:

var ready;

var domready = new Promise( function(resolve,reject){
    // extract the `resolve()` capability
    ready = resolve;
} );

// ..

domready.then( function(){
    // DOM is ready!
} );

// ..

document.addEventListener( "DOMContentLoaded", ready );

Note: This anti-pattern is an awkward code smell, in my opinion, but some developers like it, for reasons I can’t grasp.

asynquence offers an inverted sequence type I call “iterable sequences”, which externalizes the control capability (it’s quite useful in use cases like the domready):

// note: `domready` here is an *iterator* that
// controls the sequence
var domready = ASQ.iterable();

// ..

domready.val( function(){
    // DOM is ready
} );

// ..

document.addEventListener( "DOMContentLoaded", domready.next );

There’s more to iterable sequences than what we see in this scenario. We’ll come back to them in Appendix B.

3.6 Running Generators

In Chapter 4, we derived a utility called run(..) which can run generators to completion, listening for yielded Promises and using them to async resume the generator. asynquence has just such a utility built in, called runner(..).

Let’s first set up some helpers for illustration:

function doublePr(x) {
    return new Promise( function(resolve,reject){
        setTimeout( function(){
            resolve( x * 2 );
        }, 100 );
    } );
}

function doubleSeq(x) {
    return ASQ( function(done){
        setTimeout( function(){
            done( x * 2 );
        }, 100 );
    } );
}

Now, we can use runner(..) as a step in the middle of a sequence:

ASQ( 10, 11 )
.runner( function*(token){
    var x = token.messages[0] + token.messages[1];

    // yield a real promise
    x = yield doublePr( x );

    // yield a sequence
    x = yield doubleSeq( x );

    return x;
} )
.val( function(msg){
    console.log( msg );         // 84
} );

3.6.1 Wrapped Generators

You can also create a self-packaged generator – that is, a normal function that runs your specified generator and returns a sequence for its completion – by ASQ.wrap(..)ing it:

var foo = ASQ.wrap( function*(token){
    var x = token.messages[0] + token.messages[1];

    // yield a real promise
    x = yield doublePr( x );

    // yield a sequence
    x = yield doubleSeq( x );

    return x;
}, { gen: true } );

// ..

foo( 8, 9 )
.val( function(msg){
    console.log( msg );         // 68
} );

There’s a lot more awesome that runner(..) is capable of, but we’ll come back to that in Appendix B.

3.7 Review

asynquence is a simple abstraction – a sequence is a series of (async) steps – on top of Promises, aimed at making working with various asynchronous patterns much easier, without any compromise in capability.

There are other goodies in the asynquence core API and its contrib plug-ins beyond what we saw in this appendix, but we’ll leave that as an exercise for the reader to go check the rest of the capabilities out.

You’ve now seen the essence and spirit of asynquence. The key takeaway is that a sequence is composed of steps, and those steps can be any of dozens of different variations on Promises, or they can be a generator-run, or… The choice is up to you: you have all the freedom to weave together whatever async flow control logic is appropriate for your tasks. No more switching libraries to catch different async patterns.

If these asynquence snippets have made sense to you, you’re now pretty well up to speed on the library; it doesn’t take that much to learn, actually!

If you’re still a little fuzzy on how it works (or why!), you’ll want to spend a little more time examining the previous examples and playing around with asynquence yourself, before going on to the next appendix. Appendix B will push asynquence into several more advanced and powerful async patterns.

4 You Don’t Know JS: Async & Performance

5 Appendix B: Advanced Async Patterns

Appendix A introduced the asynquence library for sequence-oriented async flow control, primarily based on Promises and generators.

Now we’ll explore other advanced asynchronous patterns built on top of that existing understanding and functionality, and see how asynquence makes those sophisticated async techniques easy to mix and match in our programs without needing lots of separate libraries.

5.1 Iterable Sequences

We introduced asynquence’s iterable sequences in the previous appendix, but we want to revisit them in more detail.

To refresh, recall:

var domready = ASQ.iterable();

// ..

domready.val( function(){
    // DOM is ready
} );

// ..

document.addEventListener( "DOMContentLoaded", domready.next );

Now, let’s define a sequence of multiple steps as an iterable sequence:

var steps = ASQ.iterable();

steps
.then( function STEP1(x){
    return x * 2;
} )
.then( function STEP2(x){
    return x + 3;
} )
.then( function STEP3(x){
    return x * 4;
} );

steps.next( 8 ).value;  // 16
steps.next( 16 ).value; // 19
steps.next( 19 ).value; // 76
steps.next().done;      // true

As you can see, an iterable sequence is a standards-compliant iterator (see Chapter 4). So, it can be iterated with an ES6 for..of loop, just like a generator (or any other iterable) can:

var steps = ASQ.iterable();

steps
.then( function STEP1(){ return 2; } )
.then( function STEP2(){ return 4; } )
.then( function STEP3(){ return 6; } )
.then( function STEP4(){ return 8; } )
.then( function STEP5(){ return 10; } );

for (var v of steps) {
    console.log( v );
}
// 2 4 6 8 10

Beyond the event triggering example shown in the previous appendix, iterable sequences are interesting because in essence they can be seen as a stand-in for generators or Promise chains, but with even more flexibility.

Consider a multiple Ajax request example – we’ve seen the same scenario in Chapters 3 and 4, both as a Promise chain and as a generator, respectively – expressed as an iterable sequence:

// sequence-aware ajax
var request = ASQ.wrap( ajax );

ASQ( "http://some.url.1" )
.runner(
    ASQ.iterable()

    .then( function STEP1(token){
        var url = token.messages[0];
        return request( url );
    } )

    .then( function STEP2(resp){
        return ASQ().gate(
            request( "http://some.url.2/?v=" + resp ),
            request( "http://some.url.3/?v=" + resp )
        );
    } )

    .then( function STEP3(r1,r2){ return r1 + r2; } )
)
.val( function(msg){
    console.log( msg );
} );

The iterable sequence expresses a sequential series of (sync or async) steps that looks awfully similar to a Promise chain – in other words, it’s much cleaner looking than just plain nested callbacks, but not quite as nice as the yield-based sequential syntax of generators.

But we pass the iterable sequence into ASQ#runner(..), which runs it to completion the same as if it was a generator. The fact that an iterable sequence behaves essentially the same as a generator is notable for a couple of reasons.

First, iterable sequences are kind of a pre-ES6 equivalent to a certain subset of ES6 generators, which means you can either author them directly (to run anywhere), or you can author ES6 generators and transpile/convert them to iterable sequences (or Promise chains for that matter!).

Thinking of an async-run-to-completion generator as just syntactic sugar for a Promise chain is an important recognition of their isomorphic relationship.

Before we move on, we should note that the previous snippet could have been expressed in asynquence as:

ASQ( "http://some.url.1" )
.seq( /*STEP 1*/ request )
.seq( function STEP2(resp){
    return ASQ().gate(
        request( "http://some.url.2/?v=" + resp ),
        request( "http://some.url.3/?v=" + resp )
    );
} )
.val( function STEP3(r1,r2){ return r1 + r2; } )
.val( function(msg){
    console.log( msg );
} );

Moreover, step 2 could have even been expressed as:

.gate(
    function STEP2a(done,resp) {
        request( "http://some.url.2/?v=" + resp )
        .pipe( done );
    },
    function STEP2b(done,resp) {
        request( "http://some.url.3/?v=" + resp )
        .pipe( done );
    }
)

So, why would we go to the trouble of expressing our flow control as an iterable sequence in an ASQ#runner(..) step, when it seems like a simpler/flatter asynquence chain does the job well?

Because the iterable sequence form has an important trick up its sleeve that gives us more capability. Read on.

5.1.1 Extending Iterable Sequences

Generators, normal asynquence sequences, and Promise chains, are all eagerly evaluated – whatever flow control is expressed initially is the fixed flow that will be followed.

However, iterable sequences are lazily evaluated, which means that during execution of the iterable sequence, you can extend the sequence with more steps if desired.

Note: You can only append to the end of an iterable sequence, not inject into the middle of the sequence.

Let’s first look at a simpler (synchronous) example of that capability to get familiar with it:

function double(x) {
    x *= 2;

    // should we keep extending?
    if (x < 500) {
        isq.then( double );
    }

    return x;
}

// setup single-step iterable sequence
var isq = ASQ.iterable().then( double );

for (var v = 10, ret;
    (ret = isq.next( v )) && !ret.done;
) {
    v = ret.value;
    console.log( v );
}

The iterable sequence starts out with only one defined step (isq.then(double)), but the sequence keeps extending itself under certain conditions (x < 500). Both asynquence sequences and Promise chains technically can do something similar, but we’ll see in a little bit why their capability is insufficient.

Though this example is rather trivial and could otherwise be expressed with a while loop in a generator, we’ll consider more sophisticated cases.
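For comparison, here's roughly what that while-loop generator form would look like, driven manually (a sketch; no asynquence involved):

```javascript
// the same doubling flow as a plain generator with a `while` loop --
// no dynamic self-extension of a sequence required
function *double(x) {
    while (true) {
        x *= 2;

        // reached the stopping condition?
        if (x >= 500) return x;

        x = yield x;
    }
}

var out = [];
var it = double( 10 );
var ret = it.next();

while (!ret.done) {
    out.push( ret.value );
    ret = it.next( ret.value );
}
out.push( ret.value );

console.log( out.join( " " ) );     // 20 40 80 160 320 640
```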

For instance, you could examine the response from an Ajax request and if it indicates that more data is needed, you conditionally insert more steps into the iterable sequence to make the additional request(s). Or you could conditionally add a value-formatting step to the end of your Ajax handling.

Consider:

var steps = ASQ.iterable()

.then( function STEP1(token){
    var url = token.messages[0].url;

    // was an additional formatting step provided?
    if (token.messages[0].format) {
        steps.then( token.messages[0].format );
    }

    return request( url );
} )

.then( function STEP2(resp){
    // add another Ajax request to the sequence?
    if (/foobar/.test( resp )) {
        steps.then( function STEP5(text){
            return request(
                "http://some.url.4/?v=" + text
            );
        } );
    }

    return ASQ().gate(
        request( "http://some.url.2/?v=" + resp ),
        request( "http://some.url.3/?v=" + resp )
    );
} )

.then( function STEP3(r1,r2){ return r1 + r2; } );

You can see in two different places where we conditionally extend steps with steps.then(..). And to run this steps iterable sequence, we just wire it into our main program flow with an asynquence sequence (called main here) using ASQ#runner(..):

var main = ASQ( {
    url: "http://some.url.1",
    format: function STEP4(text){
        return text.toUpperCase();
    }
} )
.runner( steps )
.val( function(msg){
    console.log( msg );
} );

Can the flexibility (conditional behavior) of the steps iterable sequence be expressed with a generator? Kind of, but we have to rearrange the logic in a slightly awkward way:

function *steps(token) {
    // **STEP 1**
    var resp = yield request( token.messages[0].url );

    // **STEP 2**
    var rvals = yield ASQ().gate(
        request( "http://some.url.2/?v=" + resp ),
        request( "http://some.url.3/?v=" + resp )
    );

    // **STEP 3**
    var text = rvals[0] + rvals[1];

    // **STEP 4**
    // was an additional formatting step provided?
    if (token.messages[0].format) {
        text = yield token.messages[0].format( text );
    }

    // **STEP 5**
    // need another Ajax request added to the sequence?
    if (/foobar/.test( resp )) {
        text = yield request(
            "http://some.url.4/?v=" + text
        );
    }

    return text;
}

// note: `*steps()` can be run by the same `ASQ` sequence
// as `steps` was previously

Setting aside the already identified benefits of the sequential, synchronous-looking syntax of generators (see Chapter 4), the steps logic had to be reordered in the *steps() generator form, to fake the dynamicism of the extendable iterable sequence steps.

What about expressing the functionality with Promises or sequences, though? You can do something like this:

var steps = something( .. )
.then( .. )
.then( function(..){
    // ..

    // extending the chain, right?
    steps = steps.then( .. );

    // ..
})
.then( .. );

The problem is subtle but important to grasp. So, consider trying to wire up our steps Promise chain into our main program flow – this time expressed with Promises instead of asynquence:

var main = Promise.resolve( {
    url: "http://some.url.1",
    format: function STEP4(text){
        return text.toUpperCase();
    }
} )
.then( function(..){
    return steps;           // hint!
} )
.then( function(msg){
    console.log( msg );
} );

Can you spot the problem now? Look closely!

There’s a race condition for sequence steps ordering. When you return steps, at that moment steps might be the originally defined promise chain, or it might now point to the extended promise chain via the steps = steps.then(..) call, depending on what order things happen.

Here are the two possible outcomes:

  • If steps is still the original promise chain, once it’s later “extended” by steps = steps.then(..), that extended promise on the end of the chain is not considered by the main flow, as it’s already tapped the steps chain. This is the unfortunately limiting eager evaluation.
  • If steps is already the extended promise chain, it works as we expect in that the extended promise is what main taps.

Other than the obvious fact that a race condition is intolerable, the first case is the concern; it illustrates eager evaluation of the promise chain. By contrast, we easily extended the iterable sequence without such issues, because iterable sequences are lazily evaluated.
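You can demonstrate that eager-evaluation pitfall with a tiny self-contained, Promises-only snippet (illustrative; the variable names echo the earlier example):

```javascript
var seen;

var steps = Promise.resolve( 1 )
.then( function(v){
    // try to "extend" the chain while it's running
    steps = steps.then( function(v2){ return v2 + 100; } );
    return v * 2;
} );

// `main` taps `steps` right away -- because of eager evaluation,
// the later extension is never seen by this consumer
var main = steps.then( function(v){
    seen = v;
    console.log( v );       // 2, not 102
} );
```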

The more dynamic you need your flow control, the more iterable sequences will shine.

Tip: Check out more information and examples of iterable sequences on the asynquence site (https://github.com/getify/asynquence/blob/master/README.md#iterable-sequences).

5.2 Event Reactive

It should be obvious from (at least!) Chapter 3 that Promises are a very powerful tool in your async toolbox. But one thing that’s clearly lacking is in their capability to handle streams of events, as a Promise can only be resolved once. And frankly, this exact same weakness is true of plain asynquence sequences, as well.

Consider a scenario where you want to fire off a series of steps every time a certain event is fired. A single Promise or sequence cannot represent all occurrences of that event. So, you have to create a whole new Promise chain (or sequence) for each event occurrence, such as:

listener.on( "foobar", function(data){

    // create a new event handling promise chain
    new Promise( function(resolve,reject){
        // ..
    } )
    .then( .. )
    .then( .. );

} );

The base functionality we need is present in this approach, but it’s far from a desirable way to express our intended logic. There are two separate capabilities conflated in this paradigm: the event listening, and responding to the event; separation of concerns would implore us to separate out these capabilities.

The carefully observant reader will see this problem as somewhat symmetrical to the problems we detailed with callbacks in Chapter 2; it’s kind of an inversion of control problem.

Imagine uninverting this paradigm, like so:

var observable = listener.on( "foobar" );

// later
observable
.then( .. )
.then( .. );

// elsewhere
observable
.then( .. )
.then( .. );

The observable value is not exactly a Promise, but you can observe it much like you can observe a Promise, so it’s closely related. In fact, it can be observed many times, and it will send out notifications every time its event ("foobar") occurs.
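To get a feel for those semantics, here's a drastically simplified, synchronous sketch of such an observable. The makeObservable() helper is hypothetical -- it's not asynquence's API or any real RP library's -- but it shows the "observe many times, notify on every event" shape:

```javascript
// each `then(..)` returns a new observable so handlers chain;
// every `emit(..)` pushes the event through all registered chains
function makeObservable() {
    var subscribers = [];

    return {
        then: function(fn){
            var next = makeObservable();
            subscribers.push( function(data){
                next.emit( fn( data ) );
            } );
            return next;
        },
        emit: function(data){
            subscribers.forEach( function(sub){
                sub( data );
            } );
        }
    };
}

var log = [];
var observable = makeObservable();

// later
observable
.then( function(v){ return v * 2; } )
.then( function(v){ log.push( "doubled: " + v ); } );

// elsewhere
observable
.then( function(v){ log.push( "raw: " + v ); } );

observable.emit( 21 );
console.log( log.join( ", " ) );    // doubled: 42, raw: 21
```

A real observable would of course deliver events asynchronously and support unsubscription, error channels, and so on.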

Tip: This pattern I’ve just illustrated is a massive simplification of the concepts and motivations behind reactive programming (aka RP), which has been implemented/expounded upon by several great projects and languages. A variation on RP is functional reactive programming (FRP), which refers to applying functional programming techniques (immutability, referential transparency, etc.) to streams of data. “Reactive” refers to spreading this functionality out over time in response to events. The interested reader should consider studying “Reactive Observables” in the fantastic “Reactive Extensions” library (“RxJS” for JavaScript) by Microsoft (http://rxjs.codeplex.com/); it’s much more sophisticated and powerful than I’ve just shown. Also, Andre Staltz has an excellent write-up (https://gist.github.com/staltz/868e7e9bc2a7b8c1f754) that pragmatically lays out RP in concrete examples.

5.2.1 ES7 Observables

At the time of this writing, there’s an early ES7 proposal for a new data type called “Observable” (https://github.com/jhusain/asyncgenerator#introducing-observable), which in spirit is similar to what we’ve laid out here, but is definitely more sophisticated.

The notion of this kind of Observable is that the way you “subscribe” to the events from a stream is to pass in a generator – actually the iterator is the interested party – whose next(..) method will be called for each event.

You could imagine it sort of like this:

// `someEventStream` is a stream of events, like from
// mouse clicks, and the like.

var observer = new Observer( someEventStream, function*(){
    var evt;
    while (evt = yield) {
        console.log( evt );
    }
} );

The generator you pass in pauses at each yield inside the while loop, waiting for the next event. The iterator attached to the generator instance will have its next(..) called each time someEventStream has a new event published, and that event data will resume your generator/iterator as the evt value.

In the subscription to events functionality here, it’s the iterator part that matters, not the generator. So conceptually you could pass in practically any iterable, including ASQ.iterable() iterable sequences.

Interestingly, there are also proposed adapters to make it easy to construct Observables from certain types of streams, such as fromEvent(..) for DOM events. If you look at a suggested implementation of fromEvent(..) in the earlier linked ES7 proposal, it looks an awful lot like the ASQ.react(..) we’ll see in the next section.

Of course, these are all early proposals, so what shakes out may very well look/behave differently than shown here. But it’s exciting to see the early alignments of concepts across different libraries and language proposals!

5.2.2 Reactive Sequences

With that crazy brief summary of Observables (and F/RP) as our inspiration and motivation, I will now illustrate an adaptation of a small subset of “Reactive Observables,” which I call “Reactive Sequences.”

First, let’s start with how to create an Observable, using an asynquence plug-in utility called react(..):

var observable = ASQ.react( function setup(next){
    listener.on( "foobar", next );
} );

Now, let’s see how to define a sequence that “reacts” – in F/RP, this is typically called “subscribing” – to that observable:

observable
.seq( .. )
.then( .. )
.val( .. );

So, you just define the sequence by chaining off the Observable. That’s easy, huh?

In F/RP, the stream of events typically channels through a set of functional transforms, like scan(..), map(..), reduce(..), and so on. With reactive sequences, each event channels through a new instance of the sequence. Let’s look at a more concrete example:

ASQ.react( function setup(next){
    document.getElementById( "mybtn" )
    .addEventListener( "click", next, false );
} )
.seq( function(evt){
    var btnID = evt.target.id;
    return request(
        "http://some.url.1/?id=" + btnID
    );
} )
.val( function(text){
    console.log( text );
} );

The “reactive” portion of the reactive sequence comes from assigning one or more event handlers to invoke the event trigger (calling next(..)).

The “sequence” portion of the reactive sequence is exactly like the sequences we’ve already explored: each step can be whatever asynchronous technique makes sense, from continuation callback to Promise to generator.

Once you set up a reactive sequence, it will continue to initiate instances of the sequence as long as the events keep firing. If you want to stop a reactive sequence, you can call stop().

If a reactive sequence is stop()’d, you likely want the event handler(s) to be unregistered as well; you can register a teardown handler for this purpose:

var sq = ASQ.react( function setup(next,registerTeardown){
    var btn = document.getElementById( "mybtn" );

    btn.addEventListener( "click", next, false );

    // will be called once `sq.stop()` is called
    registerTeardown( function(){
        btn.removeEventListener( "click", next, false );
    } );
} )
.seq( .. )
.then( .. )
.val( .. );

// later
sq.stop();

Note: The this binding reference inside the setup(..) handler is the same sq reactive sequence, so you can use the this reference to add to the reactive sequence definition, call methods like stop(), and so on.

Here’s an example from the Node.js world, using reactive sequences to handle incoming HTTP requests:

var server = http.createServer();
server.listen(8000);

// reactive observer
var request = ASQ.react( function setup(next,registerTeardown){
    server.addListener( "request", next );
    server.addListener( "close", this.stop );

    registerTeardown( function(){
        server.removeListener( "request", next );
        server.removeListener( "close", request.stop );
    } );
});

// respond to requests
request
.seq( pullFromDatabase )
.val( function(data,res){
    res.end( data );
} );

// node teardown
process.on( "SIGINT", request.stop );

The next(..) trigger can also adapt to node streams easily, using onStream(..) and unStream(..):

ASQ.react( function setup(next){
    var fstream = fs.createReadStream( "/some/file" );

    // pipe the stream's "data" event to `next(..)`
    next.onStream( fstream );

    // listen for the end of the stream
    fstream.on( "end", function(){
        next.unStream( fstream );
    } );
} )
.seq( .. )
.then( .. )
.val( .. );

You can also use sequence combinations to compose multiple reactive sequence streams:

var sq1 = ASQ.react( .. ).seq( .. ).then( .. );
var sq2 = ASQ.react( .. ).seq( .. ).then( .. );

var sq3 = ASQ.react(..)
.gate(
    sq1,
    sq2
)
.then( .. );

The main takeaway is that ASQ.react(..) is a lightweight adaptation of F/RP concepts, enabling the wiring of an event stream to a sequence, hence the term “reactive sequence.” Reactive sequences are generally capable enough for basic reactive uses.

Note: Here’s an example of using ASQ.react(..) in managing UI state (http://jsbin.com/rozipaki/6/edit?js,output), and another example of handling HTTP request/response streams with ASQ.react(..) (https://gist.github.com/getify/bba5ec0de9d6047b720e).

5.3 Generator Coroutine

Hopefully Chapter 4 helped you get pretty familiar with ES6 generators. In particular, we want to revisit the “Generator Concurrency” discussion, and push it even further.

We imagined a runAll(..) utility that could take two or more generators and run them concurrently, letting them cooperatively yield control from one to the next, with optional message passing.

In addition to being able to run a single generator to completion, the ASQ#runner(..) we discussed in Appendix A is a similar implementation of the concepts of runAll(..), which can run multiple generators concurrently to completion.

So let’s see how we can implement the concurrent Ajax scenario from Chapter 4:

ASQ(
    "http://some.url.2"
)
.runner(
    function*(token){
        // transfer control
        yield token;

        var url1 = token.messages[0]; // "http://some.url.1"

        // clear out messages to start fresh
        token.messages = [];

        var p1 = request( url1 );

        // transfer control
        yield token;

        token.messages.push( yield p1 );
    },
    function*(token){
        var url2 = token.messages[0]; // "http://some.url.2"

        // message pass and transfer control
        token.messages[0] = "http://some.url.1";
        yield token;

        var p2 = request( url2 );

        // transfer control
        yield token;

        token.messages.push( yield p2 );

        // pass along results to next sequence step
        return token.messages;
    }
)
.val( function(res){
    // `res[0]` comes from "http://some.url.1"
    // `res[1]` comes from "http://some.url.2"
} );

The main differences between ASQ#runner(..) and runAll(..) are as follows:

  • Each generator (coroutine) is provided an argument we call token, which is the special value to yield when you want to explicitly transfer control to the next coroutine.
  • token.messages is an array that holds any messages passed in from the previous sequence step. It’s also a data structure that you can use to share messages between coroutines.
  • yielding a Promise (or sequence) value does not transfer control, but instead pauses the coroutine processing until that value is ready.
  • The last returned or yielded value from the coroutine processing run will be forward passed to the next step in the sequence.
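To make the token-passing discipline concrete, here's a stripped-down, synchronous-only sketch of a runAll(..)-style round-robin scheduler (an illustration of the concept only -- the real ASQ#runner(..) also pauses on yielded Promises/sequences):

```javascript
// run multiple generators (coroutines) cooperatively: yielding
// the shared `token` transfers control to the next coroutine
function runAll() {
    var token = { messages: [] };
    var iterators = [].slice.call( arguments ).map( function(gen){
        return gen( token );
    } );
    var finalValue;

    // round-robin until all coroutines have finished
    while (iterators.length > 0) {
        var it = iterators.shift();
        var ret = it.next();

        if (ret.done) {
            // remember the last meaningful returned value
            if (ret.value !== undefined) finalValue = ret.value;
        }
        else {
            // yielded the token: go to the back of the line
            iterators.push( it );
        }
    }

    return finalValue;
}

var result = runAll(
    function*(token){
        token.messages.push( "Hello" );
        yield token;    // transfer control
        return token.messages.join( " " );
    },
    function*(token){
        token.messages.push( "World" );
        yield token;    // transfer control
    }
);

console.log( result );      // Hello World
```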

It’s also easy to layer helpers on top of the base ASQ#runner(..) functionality to suit different uses.

5.3.1 State Machines

One example that may be familiar to many programmers is state machines. You can, with the help of a simple cosmetic utility, create an easy-to-express state machine processor.

Let’s imagine such a utility. We’ll call it state(..), and will pass it two arguments: a state value and a generator that handles that state. state(..) will do the dirty work of creating and returning an adapter generator to pass to ASQ#runner(..).

Consider:

function state(val,handler) {
    // make a coroutine handler for this state
    return function*(token) {
        // state transition handler
        function transition(to) {
            token.messages[0] = to;
        }

        // set initial state (if none set yet)
        if (token.messages.length < 1) {
            token.messages[0] = val;
        }

        // keep going until final state (false) is reached
        while (token.messages[0] !== false) {
            // current state matches this handler?
            if (token.messages[0] === val) {
                // delegate to state handler
                yield *handler( transition );
            }

            // transfer control to another state handler?
            if (token.messages[0] !== false) {
                yield token;
            }
        }
    };
}

If you look closely, you’ll see that state(..) returns back a generator that accepts a token, and then it sets up a while loop that will run until the state machine reaches its final state (which we arbitrarily pick as the false value); that’s exactly the kind of generator we want to pass to ASQ#runner(..)!

We also arbitrarily reserve the token.messages[0] slot as the place where the current state of our state machine will be tracked, which means we can even seed the initial state as the value passed in from the previous step in the sequence.

How do we use the state(..) helper along with ASQ#runner(..)?

var prevState;

ASQ(
    /* optional: initial state value */
    2
)
// run our state machine
// transitions: 2 -> 3 -> 1 -> 3 -> false
.runner(
    // state `1` handler
    state( 1, function *stateOne(transition){
        console.log( "in state 1" );

        prevState = 1;
        yield transition( 3 );  // goto state `3`
    } ),

    // state `2` handler
    state( 2, function *stateTwo(transition){
        console.log( "in state 2" );

        prevState = 2;
        yield transition( 3 );  // goto state `3`
    } ),

    // state `3` handler
    state( 3, function *stateThree(transition){
        console.log( "in state 3" );

        if (prevState === 2) {
            prevState = 3;
            yield transition( 1 ); // goto state `1`
        }
        // all done!
        else {
            yield "That's all folks!";

            prevState = 3;
            yield transition( false ); // terminal state
        }
    } )
)
// state machine complete, so move on
.val( function(msg){
    console.log( msg ); // That's all folks!
} );

It’s important to note that the *stateOne(..), *stateTwo(..), and *stateThree(..) generators themselves are reinvoked each time that state is entered, and they finish when you transition(..) to another value. While not shown here, of course these state generator handlers can be asynchronously paused by yielding Promises/sequences/thunks.

The underneath hidden generators produced by the state(..) helper and actually passed to ASQ#runner(..) are the ones that continue to run concurrently for the length of the state machine, and each of them handles cooperatively yielding control to the next, and so on.

Note: See this “ping pong” example (http://jsbin.com/qutabu/1/edit?js,output) for more illustration of using cooperative concurrency with generators driven by ASQ#runner(..).

5.4 Communicating Sequential Processes (CSP)

“Communicating Sequential Processes” (CSP) was first described by C. A. R. Hoare in a 1978 academic paper (http://dl.acm.org/citation.cfm?doid=359576.359585), and later in a 1985 book (http://www.usingcsp.com/) of the same name. CSP describes a formal method for concurrent “processes” to interact (aka “communicate”) during processing.

You may recall that we examined concurrent “processes” back in Chapter 1, so our exploration of CSP here will build upon that understanding.

Like most great concepts in computer science, CSP is heavily steeped in academic formalism, expressed as a process algebra. However, I suspect symbolic algebra theorems won’t make much practical difference to the reader, so we will want to find some other way of wrapping our brains around CSP.

I will leave much of the formal description and proof of CSP to Hoare’s writing, and to many other fantastic writings since. Instead, we will try to just briefly explain the idea of CSP in as un-academic and hopefully intuitively understandable a way as possible.

5.4.1 Message Passing

The core principle in CSP is that all communication/interaction between otherwise independent processes must be through formal message passing. Perhaps counter to your expectations, CSP message passing is described as a synchronous action, where the sender process and the receiver process have to mutually be ready for the message to be passed.

How could such synchronous messaging possibly be related to asynchronous programming in JavaScript?

The concreteness of relationship comes from the nature of how ES6 generators are used to produce synchronous-looking actions that under the covers can indeed either be synchronous or (more likely) asynchronous.

In other words, two or more concurrently running generators can appear to synchronously message each other while preserving the fundamental asynchrony of the system because each generator’s code is paused (aka “blocked”) waiting on resumption of an asynchronous action.

How does this work?

Imagine a generator (aka “process”) called “A” that wants to send a message to generator “B.” First, “A” yields the message (thus pausing “A”) to be sent to “B.” When “B” is ready and takes the message, “A” is then resumed (unblocked).

Symmetrically, imagine a generator “A” that wants a message from “B.” “A” yields its request (thus pausing “A”) for the message from “B,” and once “B” sends a message, “A” takes the message and is resumed.

One of the more popular expressions of this CSP message passing theory comes from ClojureScript’s core.async library, and also from the go language. These takes on CSP embody the described communication semantics in a conduit that is opened between processes called a “channel.”

Note: The term channel is used in part because there are modes in which more than one value can be sent at once into the “buffer” of the channel; this is similar to what you may think of as a stream. We won’t go into depth about it here, but it can be a very powerful technique for managing streams of data.

In the simplest notion of CSP, a channel that we create between “A” and “B” would have a method called take(..) for blocking to receive a value, and a method called put(..) for blocking to send a value.

This might look like:

var ch = channel();

function *foo() {
    var msg = yield take( ch );

    console.log( msg );
}

function *bar() {
    yield put( ch, "Hello World" );

    console.log( "message sent" );
}

run( foo );
run( bar );
// Hello World
// "message sent"
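The channel(), put(..), take(..), and run(..) helpers in that snippet are left abstract. Here's one minimal way they could be wired up for the unbuffered (rendezvous) case -- an illustrative sketch only, far less robust than real CSP libraries such as js-csp:

```javascript
// an unbuffered channel: a rendezvous point that parks blocked
// senders (putters) and receivers (takers)
function channel() {
    return { takers: [], putters: [] };
}

// `take(..)`/`put(..)` just build instruction objects;
// `run(..)` interprets them
function take(ch) {
    return { op: "take", ch: ch };
}
function put(ch,value) {
    return { op: "put", ch: ch, value: value };
}

function run(gen) {
    var it = gen();
    step();

    function step(input) {
        var ret = it.next( input );
        if (ret.done) return;

        var instr = ret.value;

        if (instr.op === "take") {
            if (instr.ch.putters.length > 0) {
                // a sender is parked: grab its value, resume it,
                // then resume ourselves with the value
                var putter = instr.ch.putters.shift();
                putter.resume();
                step( putter.value );
            }
            else {
                // block until a sender arrives
                instr.ch.takers.push( step );
            }
        }
        else {
            if (instr.ch.takers.length > 0) {
                // a receiver is parked: hand the value over,
                // then resume ourselves
                var taker = instr.ch.takers.shift();
                taker( instr.value );
                step();
            }
            else {
                // block until a receiver arrives
                instr.ch.putters.push( { value: instr.value, resume: step } );
            }
        }
    }
}
```

With these definitions, running the foo/bar snippet above prints "Hello World" followed by "message sent", because foo blocks on its take(..) until bar's put(..) supplies the value.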

Compare this structured, synchronous(-looking) message passing interaction to the informal and unstructured message sharing that ASQ#runner(..) provides through the token.messages array and cooperative yielding. In essence, yield put(..) is a single operation that both sends the value and pauses execution to transfer control, whereas in earlier examples we did those as separate steps.

Moreover, CSP stresses that you don’t really explicitly “transfer control,” but rather you design your concurrent routines to block expecting either a value received from the channel, or to block expecting to try to send a message on the channel. The blocking around receiving or sending messages is how you coordinate sequencing of behavior between the coroutines.

Note: Fair warning: this pattern is very powerful but it’s also a little mind twisting to get used to at first. You will want to practice this a bit to get used to this new way of thinking about coordinating your concurrency.

There are several great libraries that have implemented this flavor of CSP in JavaScript, most notably “js-csp” (https://github.com/ubolonton/js-csp), which James Long (http://twitter.com/jlongster) forked (https://github.com/jlongster/js-csp) and has written extensively about (http://jlongster.com/Taming-the-Asynchronous-Beast-with-CSP-in-JavaScript). Also, it cannot be stressed enough how amazing the many writings of David Nolen (http://twitter.com/swannodette) are on the topic of adapting ClojureScript’s go-style core.async CSP into JS generators (http://swannodette.github.io/2013/08/24/es6-generators-and-csp/).

5.4.2 asynquence CSP emulation

Because we’ve been discussing async patterns here in the context of my asynquence library, you might be interested to see that we can fairly easily add an emulation layer on top of ASQ#runner(..) generator handling as a nearly perfect porting of the CSP API and behavior. This emulation layer ships as an optional part of the “asynquence-contrib” package alongside asynquence.

Very similar to the state(..) helper from earlier, ASQ.csp.go(..) takes a generator – in go/core.async terms, it’s known as a goroutine – and adapts it to use with ASQ#runner(..) by returning a new generator.

Instead of being passed a token, your goroutine receives an initially created channel (ch below) that all goroutines in this run will share. You can create more channels (which is often quite helpful!) with ASQ.csp.chan(..).

In CSP, we model all asynchrony in terms of blocking on channel messages, rather than blocking waiting for a Promise/sequence/thunk to complete.

So, instead of yielding the Promise returned from request(..), request(..) should return a channel that you take(..) a value from. In other words, a single-value channel is roughly equivalent in this context/usage to a Promise/sequence.

Let’s first make a channel-aware version of request(..):

function request(url) {
    var ch = ASQ.csp.channel();
    ajax( url ).then( function(content){
        // `putAsync(..)` is a version of `put(..)` that
        // can be used outside of a generator. It returns
        // a promise for the operation's completion. We
        // don't use that promise here, but we could if
        // we needed to be notified when the value had
        // been `take(..)`n.
        ASQ.csp.putAsync( ch, content );
    } );
    return ch;
}

From Chapter 3, “promisory” is a Promise-producing utility, “thunkory” from Chapter 4 is a thunk-producing utility, and finally, in Appendix A we invented “sequory” for a sequence-producing utility.

Naturally, we need to coin a symmetric term here for a channel-producing utility. So let’s unsurprisingly call it a “chanory” (“channel” + “factory”). As an exercise for the reader, try your hand at defining a channelify(..) utility similar to Promise.wrap(..)/promisify(..) (Chapter 3), thunkify(..) (Chapter 4), and ASQ.wrap(..) (Appendix A).
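To make the exercise concrete, here’s one possible shape such a utility might take. This is a hedged sketch, not the asynquence API: the tiny chan() object (with putAsync(..)/takeAsync()) is a standalone stand-in so the snippet can run anywhere, and channelify(..) and fakeAjax(..) are hypothetical names invented for illustration.

```javascript
// minimal stand-in for a CSP channel, just for this sketch:
// `putAsync(..)` delivers a value, `takeAsync()` returns a
// promise for the next value (real channels are richer than this)
function chan() {
    var puts = [], takes = [];
    return {
        putAsync: function(v){
            if (takes.length) takes.shift()( v );
            else puts.push( v );
        },
        takeAsync: function(){
            return new Promise( function(resolve){
                if (puts.length) resolve( puts.shift() );
                else takes.push( resolve );
            } );
        }
    };
}

// `channelify(..)` (hypothetical name) adapts a promise-returning
// function into a channel-returning one, analogous to
// promisify/thunkify from earlier chapters
function channelify(fn) {
    return function(){
        var ch = chan();
        fn.apply( null, arguments ).then( function(v){
            ch.putAsync( v );
        } );
        return ch;
    };
}

// usage sketch with a fake promise-returning "Ajax" function:
function fakeAjax(url) {
    return Promise.resolve( "response for " + url );
}

var request = channelify( fakeAjax );

request( "http://some.url.1" ).takeAsync().then( function(res){
    console.log( res ); // "response for http://some.url.1"
} );
```

The design point is the same one promisify(..) makes for Promises: the adapter keeps channel plumbing out of your call sites, so callers just take(..) from whatever request(..) hands back.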

Now consider the concurrent Ajax example using asynquence-flavored CSP:

ASQ()
.runner(
    ASQ.csp.go( function*(ch){
        yield ASQ.csp.put( ch, "http://some.url.2" );

        var url1 = yield ASQ.csp.take( ch );
        // "http://some.url.1"

        var res1 = yield ASQ.csp.take( request( url1 ) );

        yield ASQ.csp.put( ch, res1 );
    } ),
    ASQ.csp.go( function*(ch){
        var url2 = yield ASQ.csp.take( ch );
        // "http://some.url.2"

        yield ASQ.csp.put( ch, "http://some.url.1" );

        var res2 = yield ASQ.csp.take( request( url2 ) );
        var res1 = yield ASQ.csp.take( ch );

        // pass along results to next sequence step
        ch.buffer_size = 2;
        ASQ.csp.put( ch, res1 );
        ASQ.csp.put( ch, res2 );
    } )
)
.val( function(res1,res2){
    // `res1` comes from "http://some.url.1"
    // `res2` comes from "http://some.url.2"
} );

The message passing that trades the URL strings between the two goroutines is pretty straightforward. The first goroutine makes an Ajax request to the first URL, and that response is put onto the ch channel. The second goroutine makes an Ajax request to the second URL, then gets the first response res1 off the ch channel. At that point, both responses res1 and res2 are completed and ready.

If there are any remaining values in the ch channel at the end of the goroutine run, they will be passed along to the next step in the sequence. So, to pass out message(s) from the final goroutine, put(..) them into ch. As shown, to avoid the blocking of those final put(..)s, we switch ch into buffering mode by setting its buffer_size to 2 (default: 0).

Note: See many more examples of using asynquence-flavored CSP here (https://gist.github.com/getify/e0d04f1f5aa24b1947ae).

5.5 Review

Promises and generators provide the foundational building blocks upon which we can build much more sophisticated and capable asynchrony.

asynquence has utilities for implementing iterable sequences, reactive sequences (aka “Observables”), concurrent coroutines, and even CSP goroutines.

Those patterns, combined with the continuation-callback and Promise capabilities, gives asynquence a powerful mix of different asynchronous functionalities, all integrated in one clean async flow control abstraction: the sequence.


7 Appendix C: Acknowledgments

I have many people to thank for making this book title and the overall series happen.

First, I must thank my wife Christen Simpson, and my two kids Ethan and Emily, for putting up with Dad always pecking away at the computer. Even when not writing books, my obsession with JavaScript glues my eyes to the screen far more than it should. That time I borrow from my family is the reason these books can so deeply and completely explain JavaScript to you, the reader. I owe my family everything.

I’d like to thank my editors at O’Reilly, namely Simon St.Laurent and Brian MacDonald, as well as the rest of the editorial and marketing staff. They are fantastic to work with, and have been especially accommodating during this experiment into “open source” book writing, editing, and production.

Thank you to the many folks who have participated in making this book series better by providing editorial suggestions and corrections, including Shelley Powers, Tim Ferro, Evan Borden, Forrest L. Norvell, Jennifer Davis, Jesse Harlin, Kris Kowal, Rick Waldron, Jordan Harband, Benjamin Gruenbaum, Vyacheslav Egorov, David Nolen, and many others. A big thank you to Jake Archibald for writing the Foreword for this title.

Thank you to the countless folks in the community, including members of the TC39 committee, who have shared so much knowledge with the rest of us, and especially tolerated my incessant questions and explorations with patience and detail. John-David Dalton, Juriy “kangax” Zaytsev, Mathias Bynens, Axel Rauschmayer, Nicholas Zakas, Angus Croll, Reginald Braithwaite, Dave Herman, Brendan Eich, Allen Wirfs-Brock, Bradley Meck, Domenic Denicola, David Walsh, Tim Disney, Peter van der Zee, Andrea Giammarchi, Kit Cambridge, Eric Elliott, and so many others, I can’t even scratch the surface.

The You Don’t Know JS book series was born on Kickstarter, so I also wish to thank all my (nearly) 500 generous backers, without whom this book series could not have happened:

Jan Szpila, nokiko, Murali Krishnamoorthy, Ryan Joy, Craig Patchett, pdqtrader, Dale Fukami, ray hatfield, R0drigo Perez [Mx], Dan Petitt, Jack Franklin, Andrew Berry, Brian Grinstead, Rob Sutherland, Sergi Meseguer, Phillip Gourley, Mark Watson, Jeff Carouth, Alfredo Sumaran, Martin Sachse, Marcio Barrios, Dan, AimelyneM, Matt Sullivan, Delnatte Pierre-Antoine, Jake Smith, Eugen Tudorancea, Iris, David Trinh, simonstl, Ray Daly, Uros Gruber, Justin Myers, Shai Zonis, Mom & Dad, Devin Clark, Dennis Palmer, Brian Panahi Johnson, Josh Marshall, Marshall, Dennis Kerr, Matt Steele, Erik Slagter, Sacah, Justin Rainbow, Christian Nilsson, Delapouite, D.Pereira, Nicolas Hoizey, George V. Reilly, Dan Reeves, Bruno Laturner, Chad Jennings, Shane King, Jeremiah Lee Cohick, od3n, Stan Yamane, Marko Vucinic, Jim B, Stephen Collins, Ægir Þorsteinsson, Eric Pederson, Owain, Nathan Smith, Jeanetteurphy, Alexandre ELISÉ, Chris Peterson, Rik Watson, Luke Matthews, Justin Lowery, Morten Nielsen, Vernon Kesner, Chetan Shenoy, Paul Tregoing, Marc Grabanski, Dion Almaer, Andrew Sullivan, Keith Elsass, Tom Burke, Brian Ashenfelter, David Stuart, Karl Swedberg, Graeme, Brandon Hays, John Christopher, Gior, manoj reddy, Chad Smith, Jared Harbour, Minoru TODA, Chris Wigley, Daniel Mee, Mike, Handyface, Alex Jahraus, Carl Furrow, Rob Foulkrod, Max Shishkin, Leigh Penny Jr., Robert Ferguson, Mike van Hoenselaar, Hasse Schougaard, rajan venkataguru, Jeff Adams, Trae Robbins, Rolf Langenhuijzen, Jorge Antunes, Alex Koloskov, Hugh Greenish, Tim Jones, Jose Ochoa, Michael Brennan-White, Naga Harish Muvva, Barkóczi Dávid, Kitt Hodsden, Paul McGraw, Sascha Goldhofer, Andrew Metcalf, Markus Krogh, Michael Mathews, Matt Jared, Juanfran, Georgie Kirschner, Kenny Lee, Ted Zhang, Amit Pahwa, Inbal Sinai, Dan Raine, Schabse Laks, Michael Tervoort, Alexandre Abreu, Alan Joseph Williams, NicolasD, Cindy Wong, Reg Braithwaite, LocalPCGuy, Jon Friskics, Chris Merriman, John Pena, Jacob Katz, Sue Lockwood, 
Magnus Johansson, Jeremy Crapsey, Grzegorz Pawłowski, nico nuzzaci, Christine Wilks, Hans Bergren, charles montgomery, Ariel בר-לבב Fogel, Ivan Kolev, Daniel Campos, Hugh Wood, Christian Bradford, Frédéric Harper, Ionuţ Dan Popa, Jeff Trimble, Rupert Wood, Trey Carrico, Pancho Lopez, Joël kuijten, Tom A Marra, Jeff Jewiss, Jacob Rios, Paolo Di Stefano, Soledad Penades, Chris Gerber, Andrey Dolganov, Wil Moore III, Thomas Martineau, Kareem, Ben Thouret, Udi Nir, Morgan Laupies, jory carson-burson, Nathan L Smith, Eric Damon Walters, Derry Lozano-Hoyland, Geoffrey Wiseman, mkeehner, KatieK, Scott MacFarlane, Brian LaShomb, Adrien Mas, christopher ross, Ian Littman, Dan Atkinson, Elliot Jobe, Nick Dozier, Peter Wooley, John Hoover, dan, Martin A. Jackson, Héctor Fernando Hurtado, andy ennamorato, Paul Seltmann, Melissa Gore, Dave Pollard, Jack Smith, Philip Da Silva, Guy Israeli, @megalithic, Damian Crawford, Felix Gliesche, April Carter Grant, Heidi, jim tierney, Andrea Giammarchi, Nico Vignola, Don Jones, Chris Hartjes, Alex Howes, john gibbon, David J. 
Groom, BBox, Yu ‘Dilys’ Sun, Nate Steiner, Brandon Satrom, Brian Wyant, Wesley Hales, Ian Pouncey, Timothy Kevin Oxley, George Terezakis, sanjay raj, Jordan Harband, Marko McLion, Wolfgang Kaufmann, Pascal Peuckert, Dave Nugent, Markus Liebelt, Welling Guzman, Nick Cooley, Daniel Mesquita, Robert Syvarth, Chris Coyier, Rémy Bach, Adam Dougal, Alistair Duggin, David Loidolt, Ed Richer, Brian Chenault, GoldFire Studios, Carles Andrés, Carlos Cabo, Yuya Saito, roberto ricardo, Barnett Klane, Mike Moore, Kevin Marx, Justin Love, Joe Taylor, Paul Dijou, Michael Kohler, Rob Cassie, Mike Tierney, Cody Leroy Lindley, tofuji, Shimon Schwartz, Raymond, Luc De Brouwer, David Hayes, Rhys Brett-Bowen, Dmitry, Aziz Khoury, Dean, Scott Tolinski - Level Up, Clement Boirie, Djordje Lukic, Anton Kotenko, Rafael Corral, Philip Hurwitz, Jonathan Pidgeon, Jason Campbell, Joseph C., SwiftOne, Jan Hohner, Derick Bailey, getify, Daniel Cousineau, Chris Charlton, Eric Turner, David Turner, Joël Galeran, Dharma Vagabond, adam, Dirk van Bergen, dave ♥♫★ furf, Vedran Zakanj, Ryan McAllen, Natalie Patrice Tucker, Eric J. Bivona, Adam Spooner, Aaron Cavano, Kelly Packer, Eric J, Martin Drenovac, Emilis, Michael Pelikan, Scott F. 
Walter, Josh Freeman, Brandon Hudgeons, vijay chennupati, Bill Glennon, Robin R., Troy Forster, otaku_coder, Brad, Scott, Frederick Ostrander, Adam Brill, Seb Flippence, Michael Anderson, Jacob, Adam Randlett, Standard, Joshua Clanton, Sebastian Kouba, Chris Deck, SwordFire, Hannes Papenberg, Richard Woeber, hnzz, Rob Crowther, Jedidiah Broadbent, Sergey Chernyshev, Jay-Ar Jamon, Ben Combee, luciano bonachela, Mark Tomlinson, Kit Cambridge, Michael Melgares, Jacob Adams, Adrian Bruinhout, Bev Wieber, Scott Puleo, Thomas Herzog, April Leone, Daniel Mizieliński, Kees van Ginkel, Jon Abrams, Erwin Heiser, Avi Laviad, David newell, Jean-Francois Turcot, Niko Roberts, Erik Dana, Charles Neill, Aaron Holmes, Grzegorz Ziółkowski, Nathan Youngman, Timothy, Jacob Mather, Michael Allan, Mohit Seth, Ryan Ewing, Benjamin Van Treese, Marcelo Santos, Denis Wolf, Phil Keys, Chris Yung, Timo Tijhof, Martin Lekvall, Agendine, Greg Whitworth, Helen Humphrey, Dougal Campbell, Johannes Harth, Bruno Girin, Brian Hough, Darren Newton, Craig McPheat, Olivier Tille, Dennis Roethig, Mathias Bynens, Brendan Stromberger, sundeep, John Meyer, Ron Male, John F Croston III, gigante, Carl Bergenhem, B.J. 
May, Rebekah Tyler, Ted Foxberry, Jordan Reese, Terry Suitor, afeliz, Tom Kiefer, Darragh Duffy, Kevin Vanderbeken, Andy Pearson, Simon Mac Donald, Abid Din, Chris Joel, Tomas Theunissen, David Dick, Paul Grock, Brandon Wood, John Weis, dgrebb, Nick Jenkins, Chuck Lane, Johnny Megahan, marzsman, Tatu Tamminen, Geoffrey Knauth, Alexander Tarmolov, Jeremy Tymes, Chad Auld, Sean Parmelee, Rob Staenke, Dan Bender, Yannick derwa, Joshua Jones, Geert Plaisier, Tom LeZotte, Christen Simpson, Stefan Bruvik, Justin Falcone, Carlos Santana, Michael Weiss, Pablo Villoslada, Peter deHaan, Dimitris Iliopoulos, seyDoggy, Adam Jordens, Noah Kantrowitz, Amol M, Matthew Winnard, Dirk Ginader, Phinam Bui, David Rapson, Andrew Baxter, Florian Bougel, Michael George, Alban Escalier, Daniel Sellers, Sasha Rudan, John Green, Robert Kowalski, David I. Teixeira (@ditma, Charles Carpenter, Justin Yost, Sam S, Denis Ciccale, Kevin Sheurs, Yannick Croissant, Pau Fracés, Stephen McGowan, Shawn Searcy, Chris Ruppel, Kevin Lamping, Jessica Campbell, Christopher Schmitt, Sablons, Jonathan Reisdorf, Bunni Gek, Teddy Huff, Michael Mullany, Michael Fürstenberg, Carl Henderson, Rick Yoesting, Scott Nichols, Hernán Ciudad, Andrew Maier, Mike Stapp, Jesse Shawl, Sérgio Lopes, jsulak, Shawn Price, Joel Clermont, Chris Ridmann, Sean Timm, Jason Finch, Aiden Montgomery, Elijah Manor, Derek Gathright, Jesse Harlin, Dillon Curry, Courtney Myers, Diego Cadenas, Arne de Bree, João Paulo Dubas, James Taylor, Philipp Kraeutli, Mihai Păun, Sam Gharegozlou, joshjs, Matt Murchison, Eric Windham, Timo Behrmann, Andrew Hall, joshua price, Théophile Villard

This book series is being produced in an open source fashion, including editing and production. We owe GitHub a debt of gratitude for making that sort of thing possible for the community!

Thank you again to all the countless folks I didn’t name but who I nonetheless owe thanks. May this book series be “owned” by all of us and serve to contribute to increasing awareness and understanding of the JavaScript language, to the benefit of all current and future community contributors.


9 Chapter 1: Asynchrony: Now & Later

One of the most important and yet often misunderstood parts of programming in a language like JavaScript is how to express and manipulate program behavior spread out over a period of time.

This is not just about what happens from the beginning of a for loop to the end of a for loop, which of course takes some time (microseconds to milliseconds) to complete. It’s about what happens when part of your program runs now, and another part of your program runs later – there’s a gap between now and later where your program isn’t actively executing.

Practically all nontrivial programs ever written (especially in JS) have in some way or another had to manage this gap, whether that be in waiting for user input, requesting data from a database or file system, sending data across the network and waiting for a response, or performing a repeated task at a fixed interval of time (like animation). In all these various ways, your program has to manage state across the gap in time. As they famously say in London (of the chasm between the subway door and the platform): “mind the gap.”

In fact, the relationship between the now and later parts of your program is at the heart of asynchronous programming.

Asynchronous programming has been around since the beginning of JS, for sure. But most JS developers have never really carefully considered exactly how and why it crops up in their programs, or explored various other ways to handle it. The good enough approach has always been the humble callback function. Many to this day will insist that callbacks are more than sufficient.

But as JS continues to grow in both scope and complexity, to meet the ever-widening demands of a first-class programming language that runs in browsers and servers and every conceivable device in between, the pains by which we manage asynchrony are becoming increasingly crippling, and they cry out for approaches that are both more capable and more reason-able.

While this all may seem rather abstract right now, I assure you we’ll tackle it more completely and concretely as we go on through this book. We’ll explore a variety of emerging techniques for async JavaScript programming over the next several chapters.

But before we can get there, we’re going to have to understand much more deeply what asynchrony is and how it operates in JS.

9.1 A Program in Chunks

You may write your JS program in one .js file, but your program is almost certainly comprised of several chunks, only one of which is going to execute now, and the rest of which will execute later. The most common unit of chunk is the function.

The problem most developers new to JS seem to have is that later doesn’t happen strictly and immediately after now. In other words, tasks that cannot complete now are, by definition, going to complete asynchronously, and thus we will not have blocking behavior as you might intuitively expect or want.

Consider:

// ajax(..) is some arbitrary Ajax function given by a library
var data = ajax( "http://some.url.1" );

console.log( data );
// Oops! `data` generally won't have the Ajax results

You’re probably aware that standard Ajax requests don’t complete synchronously, which means the ajax(..) function does not yet have any value to return back to be assigned to the data variable. If ajax(..) could block until the response came back, then the data = .. assignment would work fine.

But that’s not how we do Ajax. We make an asynchronous Ajax request now, and we won’t get the results back until later.

The simplest (but definitely not only, or necessarily even best!) way of “waiting” from now until later is to use a function, commonly called a callback function:

// ajax(..) is some arbitrary Ajax function given by a library
ajax( "http://some.url.1", function myCallbackFunction(data){

    console.log( data ); // Yay, I gots me some `data`!

} );

Warning: You may have heard that it’s possible to make synchronous Ajax requests. While that’s technically true, you should never, ever do it, under any circumstances, because it locks the browser UI (buttons, menus, scrolling, etc.) and prevents any user interaction whatsoever. This is a terrible idea, and should always be avoided.

Before you protest in disagreement, no, your desire to avoid the mess of callbacks is not justification for blocking, synchronous Ajax.

For example, consider this code:

function now() {
    return 21;
}

function later() {
    answer = answer * 2;
    console.log( "Meaning of life:", answer );
}

var answer = now();

setTimeout( later, 1000 ); // Meaning of life: 42

There are two chunks to this program: the stuff that will run now, and the stuff that will run later. It should be fairly obvious what those two chunks are, but let’s be super explicit:

Now:

function now() {
    return 21;
}

function later() { .. }

var answer = now();

setTimeout( later, 1000 );

Later:

answer = answer * 2;
console.log( "Meaning of life:", answer );

The now chunk runs right away, as soon as you execute your program. But setTimeout(..) also sets up an event (a timeout) to happen later, so the contents of the later() function will be executed at a later time (1,000 milliseconds from now).

Any time you wrap a portion of code into a function and specify that it should be executed in response to some event (timer, mouse click, Ajax response, etc.), you are creating a later chunk of your code, and thus introducing asynchrony to your program.

9.1.1 Async Console

There is no specification or set of requirements around how the console.* methods work – they are not officially part of JavaScript, but are instead added to JS by the hosting environment (see the Types & Grammar title of this book series).

So, different browsers and JS environments do as they please, which can sometimes lead to confusing behavior.

In particular, there are some browsers and some conditions in which console.log(..) does not actually immediately output what it’s given. The main reason this may happen is that I/O is a very slow and blocking part of many programs (not just JS). So, it may perform better (from the page/UI perspective) for a browser to handle console I/O asynchronously in the background, without you perhaps even knowing that occurred.

A not terribly common, but possible, scenario where this could be observable (not from code itself but from the outside):

var a = {
    index: 1
};

// later
console.log( a ); // ??

// even later
a.index++;

We’d normally expect to see the a object be snapshotted at the exact moment of the console.log(..) statement, printing something like { index: 1 }, such that in the next statement when a.index++ happens, it’s modifying something different than, or just strictly after, the output of a.

Most of the time, the preceding code will probably produce an object representation in your developer tools’ console that’s what you’d expect. But it’s possible this same code could run in a situation where the browser felt it needed to defer the console I/O to the background, in which case it’s possible that by the time the object is represented in the browser console, the a.index++ has already happened, and it shows { index: 2 }.

It’s a moving target under what conditions exactly console I/O will be deferred, or even whether it will be observable. Just be aware of this possible asynchronicity in I/O in case you ever run into issues in debugging where objects have been modified after a console.log(..) statement and yet you see the unexpected modifications show up.

Note: If you run into this rare scenario, the best option is to use breakpoints in your JS debugger instead of relying on console output. The next best option would be to force a “snapshot” of the object in question by serializing it to a string, like with JSON.stringify(..).
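For instance, serializing at the moment of logging captures the object’s state as a string, which later mutations can’t touch:

```javascript
var a = {
    index: 1
};

// capture a string snapshot of `a` right now
var snapshot = JSON.stringify( a );

// even later
a.index++;

console.log( snapshot ); // {"index":1} -- unaffected by the mutation
console.log( a.index );  // 2
```

Because snapshot is an immutable string, it faithfully reflects a as it was at the console.log(..) statement, regardless of when the console actually renders it.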

9.2 Event Loop

Let’s make a (perhaps shocking) claim: despite clearly allowing asynchronous JS code (like the timeout we just looked at), up until recently (ES6), JavaScript itself has actually never had any direct notion of asynchrony built into it.

What!? That seems like a crazy claim, right? In fact, it’s quite true. The JS engine itself has never done anything more than execute a single chunk of your program at any given moment, when asked to.

“Asked to.” By whom? That’s the important part!

The JS engine doesn’t run in isolation. It runs inside a hosting environment, which is for most developers the typical web browser. Over the last several years (but by no means exclusively), JS has expanded beyond the browser into other environments, such as servers, via things like Node.js. In fact, JavaScript gets embedded into all kinds of devices these days, from robots to lightbulbs.

But the one common “thread” (that’s a not-so-subtle asynchronous joke, for what it’s worth) of all these environments is that they have a mechanism in them that handles executing multiple chunks of your program over time, at each moment invoking the JS engine, called the “event loop.”

In other words, the JS engine has had no innate sense of time, but has instead been an on-demand execution environment for any arbitrary snippet of JS. It’s the surrounding environment that has always scheduled “events” (JS code executions).

So, for example, when your JS program makes an Ajax request to fetch some data from a server, you set up the “response” code in a function (commonly called a “callback”), and the JS engine tells the hosting environment, “Hey, I’m going to suspend execution for now, but whenever you finish with that network request, and you have some data, please call this function back.”

The browser is then set up to listen for the response from the network, and when it has something to give you, it schedules the callback function to be executed by inserting it into the event loop.

So what is the event loop?

Let’s conceptualize it first through some fake-ish code:

// `eventLoop` is an array that acts as a queue (first-in, first-out)
var eventLoop = [ ];
var event;

// keep going "forever"
while (true) {
    // perform a "tick"
    if (eventLoop.length > 0) {
        // get the next event in the queue
        event = eventLoop.shift();

        // now, execute the next event
        try {
            event();
        }
        catch (err) {
            reportError(err);
        }
    }
}

This is, of course, vastly simplified pseudocode to illustrate the concepts. But it should be enough to help get a better understanding.

As you can see, there’s a continuously running loop represented by the while loop, and each iteration of this loop is called a “tick.” For each tick, if an event is waiting on the queue, it’s taken off and executed. These events are your function callbacks.

It’s important to note that setTimeout(..) doesn’t put your callback on the event loop queue. What it does is set up a timer; when the timer expires, the environment places your callback into the event loop, such that some future tick will pick it up and execute it.

What if there are already 20 items in the event loop at that moment? Your callback waits. It gets in line behind the others – there’s not normally a path for preempting the queue and skipping ahead in line. This explains why setTimeout(..) timers may not fire with perfect temporal accuracy. You’re guaranteed (roughly speaking) that your callback won’t fire before the time interval you specify, but it can happen at or after that time, depending on the state of the event queue.
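You can observe this yourself: a long-running synchronous chunk keeps the event loop busy, so a timer scheduled for 100ms can’t possibly fire until that chunk finishes. (The busy-wait loop here is an artificial stand-in for “other work hogging the thread.”)

```javascript
var start = Date.now();
var firedAfter; // recorded so we can inspect the actual delay

setTimeout( function(){
    firedAfter = Date.now() - start;
    // we asked for 100ms, but the busy-wait below holds the event
    // loop hostage, so this can't run until roughly 300ms (or later)
    console.log( "timer fired after ~" + firedAfter + "ms" );
}, 100 );

// simulate a long-running synchronous chunk (~300ms) that keeps
// the event loop from servicing the timer on schedule
var busyUntil = Date.now() + 300;
while (Date.now() < busyUntil) {
    // busy-wait; nothing else can run during this loop
}
```

The 100ms is a floor, not a promise: the callback fires on the first tick at or after that time when the thread is free.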

So, in other words, your program is generally broken up into lots of small chunks, which happen one after the other in the event loop queue. And technically, other events not related directly to your program can be interleaved within the queue as well.

Note: We mentioned “up until recently” in relation to ES6 changing the nature of where the event loop queue is managed. It’s mostly a formal technicality, but ES6 now specifies how the event loop works, which means technically it’s within the purview of the JS engine, rather than just the hosting environment. One main reason for this change is the introduction of ES6 Promises, which we’ll discuss in Chapter 3, because they require the ability to have direct, fine-grained control over scheduling operations on the event loop queue (see the discussion of setTimeout(..0) in the “Cooperation” section).
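We’ll dig into Promises properly in Chapter 3, but as a quick taste of that fine-grained scheduling: in any modern engine, a Promise reaction scheduled after a setTimeout(..0) still runs first, because Promise jobs are drained before the next event loop tick’s timer callbacks.

```javascript
var order = [];

setTimeout( function(){
    order.push( "timeout" );
}, 0 );

Promise.resolve().then( function(){
    order.push( "promise" );
} );

setTimeout( function(){
    // the Promise reaction ran before the earlier-scheduled timer:
    // order is: "promise", then "timeout"
    console.log( order );
}, 10 );
```
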

9.3 Parallel Threading

It’s very common to conflate the terms “async” and “parallel,” but they are actually quite different. Remember, async is about the gap between now and later. But parallel is about things being able to occur simultaneously.

The most common tools for parallel computing are processes and threads. Processes and threads execute independently and may execute simultaneously, on separate processors or even separate computers; multiple threads, however, can share the memory of a single process.

An event loop, by contrast, breaks its work into tasks and executes them in serial, disallowing parallel access and changes to shared memory. Parallelism and “serialism” can coexist in the form of cooperating event loops in separate threads.

The interleaving of parallel threads of execution and the interleaving of asynchronous events occur at very different levels of granularity.

For example:

function later() {
    answer = answer * 2;
    console.log( "Meaning of life:", answer );
}

While the entire contents of later() would be regarded as a single event loop queue entry, when thinking about a thread this code would run on, there’s actually perhaps a dozen different low-level operations. For example, answer = answer * 2 requires first loading the current value of answer, then putting 2 somewhere, then performing the multiplication, then taking the result and storing it back into answer.

In a single-threaded environment, it really doesn’t matter that the items in the thread queue are low-level operations, because nothing can interrupt the thread. But if you have a parallel system, where two different threads are operating in the same program, you could very likely have unpredictable behavior.

Consider:

var a = 20;

function foo() {
    a = a + 1;
}

function bar() {
    a = a * 2;
}

// ajax(..) is some arbitrary Ajax function given by a library
ajax( "http://some.url.1", foo );
ajax( "http://some.url.2", bar );

In JavaScript’s single-threaded behavior, if foo() runs before bar(), the result is that a has 42, but if bar() runs before foo() the result in a will be 41.

If JS events sharing the same data executed in parallel, though, the problems would be much more subtle. Consider these two lists of pseudocode tasks as the threads that could respectively run the code in foo() and bar(), and consider what happens if they are running at exactly the same time:

Thread 1 (X and Y are temporary memory locations):

foo():
  a. load value of `a` in `X`
  b. store `1` in `Y`
  c. add `X` and `Y`, store result in `X`
  d. store value of `X` in `a`

Thread 2 (X and Y are temporary memory locations):

bar():
  a. load value of `a` in `X`
  b. store `2` in `Y`
  c. multiply `X` and `Y`, store result in `X`
  d. store value of `X` in `a`

Now, let’s say that the two threads are running truly in parallel. You can probably spot the problem, right? They use shared memory locations X and Y for their temporary steps.

What’s the end result in a if the steps happen like this?

1a  (load value of `a` in `X`   ==> `20`)
2a  (load value of `a` in `X`   ==> `20`)
1b  (store `1` in `Y`   ==> `1`)
2b  (store `2` in `Y`   ==> `2`)
1c  (add `X` and `Y`, store result in `X`   ==> `22`)
1d  (store value of `X` in `a`   ==> `22`)
2c  (multiply `X` and `Y`, store result in `X`   ==> `44`)
2d  (store value of `X` in `a`   ==> `44`)

The result in a will be 44. But what about this ordering?

1a  (load value of `a` in `X`   ==> `20`)
2a  (load value of `a` in `X`   ==> `20`)
2b  (store `2` in `Y`   ==> `2`)
1b  (store `1` in `Y`   ==> `1`)
2c  (multiply `X` and `Y`, store result in `X`   ==> `20`)
1c  (add `X` and `Y`, store result in `X`   ==> `21`)
1d  (store value of `X` in `a`   ==> `21`)
2d  (store value of `X` in `a`   ==> `21`)

The result in a will be 21.

So, threaded programming is very tricky, because if you don’t take special steps to prevent this kind of interruption/interleaving from happening, you can get very surprising, nondeterministic behavior that frequently leads to headaches.

JavaScript never shares data across threads, which means that level of nondeterminism isn’t a concern. But that doesn’t mean JS is always deterministic. Remember earlier, where the relative ordering of foo() and bar() produces two different results (41 or 42)?

Note: It may not be obvious yet, but not all nondeterminism is bad. Sometimes it’s irrelevant, and sometimes it’s intentional. We’ll see more examples of that throughout this and the next few chapters.

9.3.1 Run-to-Completion

Because of JavaScript’s single-threading, the code inside of foo() (and bar()) is atomic, which means that once foo() starts running, the entirety of its code will finish before any of the code in bar() can run, or vice versa. This is called “run-to-completion” behavior.

In fact, the run-to-completion semantics are more obvious when foo() and bar() have more code in them, such as:

var a = 1;
var b = 2;

function foo() {
    a++;
    b = b * a;
    a = b + 3;
}

function bar() {
    b--;
    a = 8 + b;
    b = a * 2;
}

// ajax(..) is some arbitrary Ajax function given by a library
ajax( "http://some.url.1", foo );
ajax( "http://some.url.2", bar );

Because foo() can’t be interrupted by bar(), and bar() can’t be interrupted by foo(), this program only has two possible outcomes depending on which starts running first – if threading were present, and the individual statements in foo() and bar() could be interleaved, the number of possible outcomes would be greatly increased!

This program breaks into three chunks. Chunk 1 is synchronous (happens now), but chunks 2 and 3 are asynchronous (happen later), which means their execution will be separated by a gap of time.

Chunk 1:

var a = 1;
var b = 2;

Chunk 2 (foo()):

a++;
b = b * a;
a = b + 3;

Chunk 3 (bar()):

b--;
a = 8 + b;
b = a * 2;

Chunks 2 and 3 may happen in either-first order, so there are two possible outcomes for this program, as illustrated here:

Outcome 1:

var a = 1;
var b = 2;

// foo()
a++;
b = b * a;
a = b + 3;

// bar()
b--;
a = 8 + b;
b = a * 2;

a; // 11
b; // 22

Outcome 2:

var a = 1;
var b = 2;

// bar()
b--;
a = 8 + b;
b = a * 2;

// foo()
a++;
b = b * a;
a = b + 3;

a; // 183
b; // 180

Two outcomes from the same code means we still have nondeterminism! But it’s at the function (event) ordering level, rather than at the statement ordering level (or, in fact, the expression operation ordering level) as it is with threads. In other words, it’s more deterministic than threads would have been.

As applied to JavaScript’s behavior, this function-ordering nondeterminism is what the common term “race condition” refers to: foo() and bar() are racing against each other to see which runs first. Specifically, it’s a “race condition” because you cannot reliably predict how a and b will turn out.

Note: If there were a function in JS that somehow did not have run-to-completion behavior, we could have many more possible outcomes, right? It turns out ES6 introduces just such a thing (see Chapter 4 “Generators”), but don’t worry right now, we’ll come back to that!

9.4 Concurrency

Let’s imagine a site that displays a list of status updates (like a social network news feed) that progressively loads as the user scrolls down the list. To make such a feature work correctly, (at least) two separate “processes” will need to be executing simultaneously (i.e., during the same window of time, but not necessarily at the same instant).

Note: We’re using “process” in quotes here because they aren’t true operating system–level processes in the computer science sense. They’re virtual processes, or tasks, that represent a logically connected, sequential series of operations. We’ll simply prefer “process” over “task” because terminology-wise, it will match the definitions of the concepts we’re exploring.

The first “process” will respond to onscroll events (making Ajax requests for new content) as they fire when the user has scrolled the page further down. The second “process” will receive Ajax responses back (to render content onto the page).

Obviously, if a user scrolls fast enough, you may see two or more onscroll events fired during the time it takes to get the first response back and process it, and thus you’re going to have onscroll events and Ajax response events firing rapidly, interleaved with each other.

Concurrency is when two or more “processes” are executing simultaneously over the same period, regardless of whether their individual constituent operations happen in parallel (at the same instant on separate processors or cores) or not. You can think of concurrency then as “process”-level (or task-level) parallelism, as opposed to operation-level parallelism (separate-processor threads).

Note: Concurrency also introduces an optional notion of these “processes” interacting with each other. We’ll come back to that later.

For a given window of time (a few seconds worth of a user scrolling), let’s visualize each independent “process” as a series of events/operations:

“Process” 1 (onscroll events):

onscroll, request 1
onscroll, request 2
onscroll, request 3
onscroll, request 4
onscroll, request 5
onscroll, request 6
onscroll, request 7

“Process” 2 (Ajax response events):

response 1
response 2
response 3
response 4
response 5
response 6
response 7

It’s quite possible that an onscroll event and an Ajax response event could be ready to be processed at exactly the same moment. For example let’s visualize these events in a timeline:

onscroll, request 1
onscroll, request 2          response 1
onscroll, request 3          response 2
response 3
onscroll, request 4
onscroll, request 5
onscroll, request 6          response 4
onscroll, request 7
response 6
response 5
response 7

But, going back to our notion of the event loop from earlier in the chapter, JS is only going to be able to handle one event at a time, so either onscroll, request 2 is going to happen first or response 1 is going to happen first, but they cannot happen at literally the same moment. Just like kids at a school cafeteria, no matter what crowd they form outside the doors, they’ll have to merge into a single line to get their lunch!

Let’s visualize the interleaving of all these events onto the event loop queue.

Event Loop Queue:

onscroll, request 1   <--- Process 1 starts
onscroll, request 2
response 1            <--- Process 2 starts
onscroll, request 3
response 2
response 3
onscroll, request 4
onscroll, request 5
onscroll, request 6
response 4
onscroll, request 7   <--- Process 1 finishes
response 6
response 5
response 7            <--- Process 2 finishes

“Process 1” and “Process 2” run concurrently (task-level parallel), but their individual events run sequentially on the event loop queue.

By the way, notice how response 6 and response 5 came back out of the expected order?

The single-threaded event loop is one expression of concurrency (there are certainly others, which we’ll come back to later).

9.4.1 Noninteracting

As two or more “processes” are interleaving their steps/events concurrently within the same program, they don’t necessarily need to interact with each other if the tasks are unrelated. If they don’t interact, nondeterminism is perfectly acceptable.

For example:

var res = {};

function foo(results) {
    res.foo = results;
}

function bar(results) {
    res.bar = results;
}

// ajax(..) is some arbitrary Ajax function given by a library
ajax( "http://some.url.1", foo );
ajax( "http://some.url.2", bar );

foo() and bar() are two concurrent “processes,” and it’s nondeterminate which order they will be fired in. But we’ve constructed the program so it doesn’t matter what order they fire in, because they act independently and as such don’t need to interact.

This is not a “race condition” bug, as the code will always work correctly, regardless of the ordering.

9.4.2 Interaction

More commonly, concurrent “processes” will by necessity interact, indirectly through scope and/or the DOM. When such interaction occurs, you need to coordinate it to prevent “race conditions,” as described earlier.

Here’s a simple example of two concurrent “processes” that interact because of implied ordering, which is only sometimes broken:

var res = [];

function response(data) {
    res.push( data );
}

// ajax(..) is some arbitrary Ajax function given by a library
ajax( "http://some.url.1", response );
ajax( "http://some.url.2", response );

The concurrent “processes” are the two response() calls that will be made to handle the Ajax responses. They can happen in either-first order.

Let’s assume the expected behavior is that res[0] has the results of the "http://some.url.1" call, and res[1] has the results of the "http://some.url.2" call. Sometimes that will be the case, but sometimes they’ll be flipped, depending on which call finishes first. There’s a pretty good likelihood that this nondeterminism is a “race condition” bug.

Note: Be extremely wary of assumptions you might tend to make in these situations. For example, it’s not uncommon for a developer to observe that "http://some.url.2" is “always” much slower to respond than "http://some.url.1", perhaps by virtue of what tasks they’re doing (e.g., one performing a database task and the other just fetching a static file), so the observed ordering seems to always be as expected. Even if both requests go to the same server, and it intentionally responds in a certain order, there’s no real guarantee of what order the responses will arrive back in the browser.

So, to address such a race condition, you can coordinate ordering interaction:

var res = [];

function response(data) {
    if (data.url == "http://some.url.1") {
        res[0] = data;
    }
    else if (data.url == "http://some.url.2") {
        res[1] = data;
    }
}

// ajax(..) is some arbitrary Ajax function given by a library
ajax( "http://some.url.1", response );
ajax( "http://some.url.2", response );

Regardless of which Ajax response comes back first, we inspect the data.url (assuming one is returned from the server, of course!) to figure out which position the response data should occupy in the res array. res[0] will always hold the "http://some.url.1" results and res[1] will always hold the "http://some.url.2" results. Through simple coordination, we eliminated the “race condition” nondeterminism.

The same reasoning from this scenario would apply if multiple concurrent function calls were interacting with each other through the shared DOM, like one updating the contents of a `<div>` and the other updating the style or attributes of the `<div>` (e.g., to make the DOM element visible once it has content). You probably wouldn’t want to show the DOM element before it had content, so the coordination must ensure proper ordering interaction.

Some concurrency scenarios are always broken (not just sometimes) without coordinated interaction. Consider:

var a, b;

function foo(x) {
    a = x * 2;
    baz();
}

function bar(y) {
    b = y * 2;
    baz();
}

function baz() {
    console.log(a + b);
}

// ajax(..) is some arbitrary Ajax function given by a library
ajax( "http://some.url.1", foo );
ajax( "http://some.url.2", bar );

In this example, whether foo() or bar() fires first, it will always cause baz() to run too early (either a or b will still be undefined), but the second invocation of baz() will work, as both a and b will be available.

There are different ways to address such a condition. Here’s one simple way:

var a, b;

function foo(x) {
    a = x * 2;
    if (a && b) {
        baz();
    }
}

function bar(y) {
    b = y * 2;
    if (a && b) {
        baz();
    }
}

function baz() {
    console.log( a + b );
}

// ajax(..) is some arbitrary Ajax function given by a library
ajax( "http://some.url.1", foo );
ajax( "http://some.url.2", bar );

The if (a && b) conditional around the baz() call is traditionally called a “gate,” because we’re not sure what order a and b will arrive, but we wait for both of them to get there before we proceed to open the gate (call baz()).
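
One subtlety worth noting: if either result could legitimately be a falsy value (0, "", etc.), the truthiness test a && b would keep the gate closed forever. Here’s a sketch of a gate that counts completions instead — the `remaining` counter and the `fakeAjax(..)` stub (which simply calls back asynchronously with the URL’s length) are my own additions for illustration, not part of the book’s snippet:

```javascript
// stand-in for the book's hypothetical ajax(..): calls back asynchronously
function fakeAjax(url, cb) {
    setTimeout( function(){ cb( url.length ); }, 0 );
}

var a, b, result;
var remaining = 2;  // how many results must arrive before the gate opens

function gateDone() {
    remaining--;
    // opens exactly once, even if `a` or `b` happens to be a falsy value
    if (remaining === 0) {
        baz();
    }
}

function foo(x) {
    a = x * 2;
    gateDone();
}

function bar(y) {
    b = y * 2;
    gateDone();
}

function baz() {
    result = a + b;
    console.log( result );
}

fakeAjax( "http://some.url.1", foo );
fakeAjax( "http://some.url.2", bar );
```

The counter doesn’t care which order the results arrive in, only that both have arrived, which is exactly the “gate” semantic.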

Another concurrency interaction condition you may run into is sometimes called a “race,” but more correctly called a “latch.” It’s characterized by “only the first one wins” behavior. Here, nondeterminism is acceptable, in that you are explicitly saying it’s OK for the “race” to the finish line to have only one winner.

Consider this broken code:

var a;

function foo(x) {
    a = x * 2;
    baz();
}

function bar(x) {
    a = x / 2;
    baz();
}

function baz() {
    console.log( a );
}

// ajax(..) is some arbitrary Ajax function given by a library
ajax( "http://some.url.1", foo );
ajax( "http://some.url.2", bar );

Whichever one (foo() or bar()) fires last will not only overwrite the assigned a value from the other, but it will also duplicate the call to baz() (likely undesired).

So, we can coordinate the interaction with a simple latch, to let only the first one through:

var a;

function foo(x) {
    if (a == undefined) {
        a = x * 2;
        baz();
    }
}

function bar(x) {
    if (a == undefined) {
        a = x / 2;
        baz();
    }
}

function baz() {
    console.log( a );
}

// ajax(..) is some arbitrary Ajax function given by a library
ajax( "http://some.url.1", foo );
ajax( "http://some.url.2", bar );

The if (a == undefined) conditional allows only the first of foo() or bar() through, and the second (and indeed any subsequent) calls would just be ignored. There’s just no virtue in coming in second place!
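
A related subtlety: if undefined could ever be a legitimate value for a, sniffing a == undefined is fragile. A sketch of a latch using an explicit boolean flag instead — the `latched` flag, the `bazCalls` counter, and the `fakeAjax(..)` stub are my own additions for illustration:

```javascript
// stand-in for the book's hypothetical ajax(..): calls back asynchronously
function fakeAjax(url, cb) {
    setTimeout( function(){ cb( 10 ); }, 0 );
}

var a;
var latched = false;  // flips once; every later caller is ignored
var bazCalls = 0;

function foo(x) {
    if (!latched) {
        latched = true;
        a = x * 2;
        baz();
    }
}

function bar(x) {
    if (!latched) {
        latched = true;
        a = x / 2;
        baz();
    }
}

function baz() {
    bazCalls++;
    console.log( a );
}

fakeAjax( "http://some.url.1", foo );
fakeAjax( "http://some.url.2", bar );
```

Whichever callback fires second finds latched already true and does nothing, so baz() runs exactly once regardless of the race’s outcome.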

Note: In all these scenarios, we’ve been using global variables for simplistic illustration purposes, but there’s nothing about our reasoning here that requires it. As long as the functions in question can access the variables (via scope), they’ll work as intended. Relying on lexically scoped variables (see the Scope & Closures title of this book series), and in fact global variables as in these examples, is one obvious downside to these forms of concurrency coordination. As we go through the next few chapters, we’ll see other ways of coordination that are much cleaner in that respect.

9.4.3 Cooperation

Another expression of concurrency coordination is called “cooperative concurrency.” Here, the focus isn’t so much on interacting via value sharing in scopes (though that’s obviously still allowed!). The goal is to take a long-running “process” and break it up into steps or batches so that other concurrent “processes” have a chance to interleave their operations into the event loop queue.

For example, consider an Ajax response handler that needs to run through a long list of results to transform the values. We’ll use Array#map(..) to keep the code shorter:

var res = [];

// `response(..)` receives array of results from the Ajax call
function response(data) {
    // add onto existing `res` array
    res = res.concat(
        // make a new transformed array with all `data` values doubled
        data.map( function(val){
            return val * 2;
        } )
    );
}

// ajax(..) is some arbitrary Ajax function given by a library
ajax( "http://some.url.1", response );
ajax( "http://some.url.2", response );

If "http://some.url.1" gets its results back first, the entire list will be mapped into res all at once. If it’s a few thousand records or fewer, this is not generally a big deal. But if it’s, say, 10 million records, that can take a while to run (several seconds on a powerful laptop, much longer on a mobile device, etc.).

While such a “process” is running, nothing else in the page can happen, including no other response(..) calls, no UI updates, not even user events like scrolling, typing, button clicking, and the like. That’s pretty painful.

So, to make a more cooperatively concurrent system, one that’s friendlier and doesn’t hog the event loop queue, you can process these results in asynchronous batches, after each one “yielding” back to the event loop to let other waiting events happen.

Here’s a very simple approach:

var res = [];

// `response(..)` receives array of results from the Ajax call
function response(data) {
    // let's just do 1000 at a time
    var chunk = data.splice( 0, 1000 );

    // add onto existing `res` array
    res = res.concat(
        // make a new transformed array with all `chunk` values doubled
        chunk.map( function(val){
            return val * 2;
        } )
    );

    // anything left to process?
    if (data.length > 0) {
        // async schedule next batch
        setTimeout( function(){
            response( data );
        }, 0 );
    }
}

// ajax(..) is some arbitrary Ajax function given by a library
ajax( "http://some.url.1", response );
ajax( "http://some.url.2", response );

We process the data set in maximum-sized chunks of 1,000 items. By doing so, we ensure a short-running “process,” even if that means many more subsequent “processes,” as the interleaving onto the event loop queue will give us a much more responsive (performant) site/app.

Of course, we’re not interaction-coordinating the ordering of any of these “processes,” so the order of results in res won’t be predictable. If ordering was required, you’d need to use interaction techniques like those we discussed earlier, or ones we will cover in later chapters of this book.

We use the setTimeout(..0) (hack) for async scheduling, which basically just means “stick this function at the end of the current event loop queue.”

Note: setTimeout(..0) is not technically inserting an item directly onto the event loop queue. The timer will insert the event at its next opportunity. For example, two subsequent setTimeout(..0) calls would not be strictly guaranteed to be processed in call order, so it is possible to see various conditions like timer drift where the ordering of such events isn’t predictable. In Node.js, a similar approach is process.nextTick(..). Despite how convenient (and usually more performant) it would be, there’s not a single direct way (at least yet) across all environments to ensure async event ordering. We cover this topic in more detail in the next section.

9.5 Jobs

As of ES6, there’s a new concept layered on top of the event loop queue, called the “Job queue.” The most likely exposure you’ll have to it is with the asynchronous behavior of Promises (see Chapter 3).

Unfortunately, at the moment it’s a mechanism without an exposed API, and thus demonstrating it is a bit more convoluted. So we’re going to have to just describe it conceptually, such that when we discuss async behavior with Promises in Chapter 3, you’ll understand how those actions are being scheduled and processed.

So, the best way to think about this that I’ve found is that the “Job queue” is a queue hanging off the end of every tick in the event loop queue. Certain async-implied actions that may occur during a tick of the event loop will not cause a whole new event to be added to the event loop queue, but will instead add an item (aka Job) to the end of the current tick’s Job queue.

It’s kinda like saying, “oh, here’s this other thing I need to do later, but make sure it happens right away before anything else can happen.”

Or, to use a metaphor: the event loop queue is like an amusement park ride, where once you finish the ride, you have to go to the back of the line to ride again. But the Job queue is like finishing the ride, but then cutting in line and getting right back on.

A Job can also cause more Jobs to be added to the end of the same queue. So, it’s theoretically possible that a Job “loop” (a Job that keeps adding another Job, etc.) could spin indefinitely, thus starving the program of the ability to move on to the next event loop tick. This would conceptually be almost the same as just expressing a long-running or infinite loop (like while (true) ..) in your code.

Jobs are kind of like the spirit of the setTimeout(..0) hack, but implemented in such a way as to have a much more well-defined and guaranteed ordering: later, but as soon as possible.

Let’s imagine an API for scheduling Jobs (directly, without hacks), and call it schedule(..). Consider:

console.log( "A" );

setTimeout( function(){
    console.log( "B" );
}, 0 );

// theoretical "Job API"
schedule( function(){
    console.log( "C" );

    schedule( function(){
        console.log( "D" );
    } );
} );

You might expect this to print out A B C D, but instead it would print out A C D B, because the Jobs happen at the end of the current event loop tick, and the timer fires to schedule for the next event loop tick (if available!).
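
Since this book was written, environments have exposed exactly this kind of Job scheduling: queueMicrotask(..) (available in browsers and Node.js) adds a Job — a “microtask” in HTML-spec terminology — to the end of the current tick, and Promise reaction callbacks are scheduled the same way. So the hypothetical snippet can actually be run; the log(..) wrapper recording the order is my own addition so the result is easy to check:

```javascript
// queueMicrotask(..) schedules a Job (microtask) that runs at the end
// of the current event loop tick -- before any timer callback can run
var order = [];

function log(msg) {
    order.push( msg );
    console.log( msg );
}

log( "A" );

setTimeout( function(){
    log( "B" );             // next event loop tick
}, 0 );

queueMicrotask( function(){
    log( "C" );             // end of the current tick

    queueMicrotask( function(){
        log( "D" );         // still before the timer fires
    } );
} );

// prints: A C D B
```

Note how the nested Job (D) still beats the timer (B): the Job queue is drained completely, including any Jobs added while draining, before the event loop moves on.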

In Chapter 3, we’ll see that the asynchronous behavior of Promises is based on Jobs, so it’s important to keep clear how that relates to event loop behavior.

9.6 Statement Ordering

The order in which we express statements in our code is not necessarily the same order as the JS engine will execute them. That may seem like quite a strange assertion to make, so we’ll just briefly explore it.

But before we do, we should be crystal clear on something: the rules/grammar of the language (see the Types & Grammar title of this book series) dictate a very predictable and reliable behavior for statement ordering from the program point of view. So what we’re about to discuss are not things you should ever be able to observe in your JS program.

Warning: If you are ever able to observe compiler statement reordering like we’re about to illustrate, that’d be a clear violation of the specification, and it would unquestionably be due to a bug in the JS engine in question – one which should promptly be reported and fixed! But it’s vastly more common that you suspect something crazy is happening in the JS engine, when in fact it’s just a bug (probably a “race condition”!) in your own code – so look there first, and again and again. The JS debugger, using breakpoints and stepping through code line by line, will be your most powerful tool for sniffing out such bugs in your code.

Consider:

var a, b;

a = 10;
b = 30;

a = a + 1;
b = b + 1;

console.log( a + b ); // 42

This code has no expressed asynchrony to it (other than the rare console async I/O discussed earlier!), so the most likely assumption is that it would process line by line in top-down fashion.

But it’s possible that the JS engine, after compiling this code (yes, JS is compiled – see the Scope & Closures title of this book series!) might find opportunities to run your code faster by rearranging (safely) the order of these statements. Essentially, as long as you can’t observe the reordering, anything’s fair game.

For example, the engine might find it’s faster to actually execute the code like this:

var a, b;

a = 10;
a++;

b = 30;
b++;

console.log( a + b ); // 42

Or this:

var a, b;

a = 11;
b = 31;

console.log( a + b ); // 42

Or even:

// because `a` and `b` aren't used anymore, we can
// inline and don't even need them!
console.log( 42 ); // 42

In all these cases, the JS engine is performing safe optimizations during its compilation, as the end observable result will be the same.

But here’s a scenario where these specific optimizations would be unsafe and thus couldn’t be allowed (of course, not to say that it’s not optimized at all):

var a, b;

a = 10;
b = 30;

// we need `a` and `b` in their preincremented state!
console.log( a * b ); // 300

a = a + 1;
b = b + 1;

console.log( a + b ); // 42

Other examples where the compiler reordering could create observable side effects (and thus must be disallowed) would include things like any function call with side effects (even and especially getter functions), or ES6 Proxy objects (see the ES6 & Beyond title of this book series).

Consider:

function foo() {
    console.log( b );
    return 1;
}

var a, b, c;

// ES5.1 getter literal syntax
c = {
    get bar() {
        console.log( a );
        return 1;
    }
};

a = 10;
b = 30;

a += foo();             // 30
b += c.bar;             // 11

console.log( a + b );   // 42

If it weren’t for the console.log(..) statements in this snippet (just used as a convenient form of observable side effect for the illustration), the JS engine would likely have been free, if it wanted to (who knows if it would!?), to reorder the code to:

// ...

a = 10 + foo();
b = 30 + c.bar;

// ...

While JS semantics thankfully protect us from the observable nightmares that compiler statement reordering would seem to be in danger of, it’s still important to understand just how tenuous a link there is between the way source code is authored (in top-down fashion) and the way it runs after compilation.

Compiler statement reordering is almost a micro-metaphor for concurrency and interaction. As a general concept, such awareness can help you understand async JS code flow issues better.

9.7 Review

A JavaScript program is (practically) always broken up into two or more chunks, where the first chunk runs now and the next chunk runs later, in response to an event. Even though the program is executed chunk-by-chunk, all of them share the same access to the program scope and state, so each modification to state is made on top of the previous state.

Whenever there are events to run, the event loop runs until the queue is empty. Each iteration of the event loop is a “tick.” User interaction, IO, and timers enqueue events on the event queue.

At any given moment, only one event can be processed from the queue at a time. While an event is executing, it can directly or indirectly cause one or more subsequent events.

Concurrency is when two or more chains of events interleave over time, such that from a high-level perspective, they appear to be running simultaneously (even though at any given moment only one event is being processed).

It’s often necessary to do some form of interaction coordination between these concurrent “processes” (as distinct from operating system processes), for instance to ensure ordering or to prevent “race conditions.” These “processes” can also cooperate by breaking themselves into smaller chunks to allow other “process” interleaving.

10 You Don’t Know JS: Async & Performance

11 Chapter 2: Callbacks

In Chapter 1, we explored the terminology and concepts around asynchronous programming in JavaScript. Our focus is on understanding the single-threaded (one-at-a-time) event loop queue that drives all “events” (async function invocations). We also explored various ways that concurrency patterns explain the relationships (if any!) between simultaneously running chains of events, or “processes” (tasks, function calls, etc.).

All our examples in Chapter 1 used the function as the individual, indivisible unit of operations, whereby inside the function, statements run in predictable order (above the compiler level!), but at the function-ordering level, events (aka async function invocations) can happen in a variety of orders.

In all these cases, the function is acting as a “callback,” because it serves as the target for the event loop to “call back into” the program, whenever that item in the queue is processed.

As you no doubt have observed, callbacks are by far the most common way that asynchrony in JS programs is expressed and managed. Indeed, the callback is the most fundamental async pattern in the language.

Countless JS programs, even very sophisticated and complex ones, have been written upon no other async foundation than the callback (with of course the concurrency interaction patterns we explored in Chapter 1). The callback function is the async work horse for JavaScript, and it does its job respectably.

Except… callbacks are not without their shortcomings. Many developers are excited by the promise (pun intended!) of better async patterns. But it’s impossible to effectively use any abstraction if you don’t understand what it’s abstracting, and why.

In this chapter, we will explore a couple of those in depth, as motivation for why more sophisticated async patterns (explored in subsequent chapters of this book) are necessary and desired.

11.1 Continuations

Let’s go back to the async callback example we started with in Chapter 1, but let me slightly modify it to illustrate a point:

// A
ajax( "..", function(..){
    // C
} );
// B

// A and // B represent the first half of the program (aka the now), and // C marks the second half of the program (aka the later). The first half executes right away, and then there’s a “pause” of indeterminate length. At some future moment, if the Ajax call completes, then the program will pick up where it left off, and continue with the second half.

In other words, the callback function wraps or encapsulates the continuation of the program.

Let’s make the code even simpler:

// A
setTimeout( function(){
    // C
}, 1000 );
// B

Stop for a moment and ask yourself how you’d describe (to someone else less informed about how JS works) the way that program behaves. Go ahead, try it out loud. It’s a good exercise that will help my next points make more sense.

Most readers just now probably thought or said something to the effect of: “Do A, then set up a timeout to wait 1,000 milliseconds, then once that fires, do C.” How close was your rendition?

You might have caught yourself and self-edited to: “Do A, set up the timeout for 1,000 milliseconds, then do B, then after the timeout fires, do C.” That’s more accurate than the first version. Can you spot the difference?

Even though the second version is more accurate, both versions are deficient in explaining this code in a way that matches our brains to the code, and the code to the JS engine. The disconnect is both subtle and monumental, and is at the very heart of understanding the shortcomings of callbacks as async expression and management.

As soon as we introduce a single continuation (or several dozen as many programs do!) in the form of a callback function, we have allowed a divergence to form between how our brains work and the way the code will operate. Any time these two diverge (and this is by far not the only place that happens, as I’m sure you know!), we run into the inevitable fact that our code becomes harder to understand, reason about, debug, and maintain.

11.2 Sequential Brain

I’m pretty sure most of you readers have heard someone say (even made the claim yourself), “I’m a multitasker.” The effects of trying to act as a multitasker range from humorous (e.g., the silly patting-head-rubbing-stomach kids’ game) to mundane (chewing gum while walking) to downright dangerous (texting while driving).

But are we multitaskers? Can we really do two conscious, intentional actions at once and think/reason about both of them at exactly the same moment? Does our highest level of brain functionality have parallel multithreading going on?

The answer may surprise you: probably not.

That’s just not really how our brains appear to be set up. We’re much more single taskers than many of us (especially A-type personalities!) would like to admit. We can really only think about one thing at any given instant.

I’m not talking about all our involuntary, subconscious, automatic brain functions, such as heart beating, breathing, and eyelid blinking. Those are all vital tasks to our sustained life, but we don’t intentionally allocate any brain power to them. Thankfully, while we obsess about checking social network feeds for the 15th time in three minutes, our brain carries on in the background (threads!) with all those important tasks.

We’re instead talking about whatever task is at the forefront of our minds at the moment. For me, it’s writing the text in this book right now. Am I doing any other higher level brain function at exactly this same moment? Nope, not really. I get distracted quickly and easily – a few dozen times in these last couple of paragraphs!

When we fake multitasking, such as trying to type something at the same time we’re talking to a friend or family member on the phone, what we’re actually most likely doing is acting as fast context switchers. In other words, we switch back and forth between two or more tasks in rapid succession, simultaneously progressing on each task in tiny, fast little chunks. We do it so fast that to the outside world it appears as if we’re doing these things in parallel.

Does that sound suspiciously like async evented concurrency (like the sort that happens in JS) to you?! If not, go back and read Chapter 1 again!

In fact, one way of simplifying (i.e., abusing) the massively complex world of neurology into something I can remotely hope to discuss here is that our brains work kinda like the event loop queue.

If you think about every single letter (or word) I type as a single async event, in just this sentence alone there are several dozen opportunities for my brain to be interrupted by some other event, such as from my senses, or even just my random thoughts.

I don’t get interrupted and pulled to another “process” at every opportunity that I could be (thankfully – or this book would never be written!). But it happens often enough that I feel my own brain is nearly constantly switching to various different contexts (aka “processes”). And that’s an awful lot like how the JS engine would probably feel.

11.2.1 Doing Versus Planning

OK, so our brains can be thought of as operating in single-threaded event loop queue like ways, as can the JS engine. That sounds like a good match.

But we need to be more nuanced than that in our analysis. There’s a big, observable difference between how we plan various tasks, and how our brains actually operate those tasks.

Again, back to the writing of this text as my metaphor. My rough mental outline plan here is to keep writing and writing, going sequentially through a set of points I have ordered in my thoughts. I don’t plan to have any interruptions or nonlinear activity in this writing. But yet, my brain is nevertheless switching around all the time.

Even though at an operational level our brains are async evented, we seem to plan out tasks in a sequential, synchronous way. “I need to go to the store, then buy some milk, then drop off my dry cleaning.”

You’ll notice that this higher level thinking (planning) doesn’t seem very async evented in its formulation. In fact, it’s kind of rare for us to deliberately think solely in terms of events. Instead, we plan things out carefully, sequentially (A then B then C), and we assume to an extent a sort of temporal blocking that forces B to wait on A, and C to wait on B.

When a developer writes code, they are planning out a set of actions to occur. If they’re any good at being a developer, they’re carefully planning it out. “I need to set z to the value of x, and then x to the value of y,” and so forth.

When we write out synchronous code, statement by statement, it works a lot like our errands to-do list:

// swap `x` and `y` (via temp variable `z`)
z = x;
x = y;
y = z;

These three assignment statements are synchronous, so x = y waits for z = x to finish, and y = z in turn waits for x = y to finish. Another way of saying it is that these three statements are temporally bound to execute in a certain order, one right after the other. Thankfully, we don’t need to be bothered with any async evented details here. If we did, the code gets a lot more complex, quickly!

So if synchronous brain planning maps well to synchronous code statements, how well do our brains do at planning out asynchronous code?

It turns out that how we express asynchrony (with callbacks) in our code doesn’t map very well at all to that synchronous brain planning behavior.

Can you actually imagine having a line of thinking that plans out your to-do errands like this?

“I need to go to the store, but on the way I’m sure I’ll get a phone call, so ‘Hi, Mom’, and while she starts talking, I’ll be looking up the store address on GPS, but that’ll take a second to load, so I’ll turn down the radio so I can hear Mom better, then I’ll realize I forgot to put on a jacket and it’s cold outside, but no matter, keep driving and talking to Mom, and then the seatbelt ding reminds me to buckle up, so ‘Yes, Mom, I am wearing my seatbelt, I always do!’. Ah, finally the GPS got the directions, now…”

As ridiculous as that sounds as a formulation for how we plan our day out and think about what to do and in what order, nonetheless it’s exactly how our brains operate at a functional level. Remember, that’s not multitasking, it’s just fast context switching.

The reason it’s difficult for us as developers to write async evented code, especially when all we have is the callback to do it, is that stream-of-consciousness thinking/planning is unnatural for most of us.

We think in step-by-step terms, but the tools (callbacks) available to us in code are not expressed in a step-by-step fashion once we move from synchronous to asynchronous.

And that is why it’s so hard to accurately author and reason about async JS code with callbacks: because it’s not how our brain planning works.

Note: The only thing worse than not knowing why some code breaks is not knowing why it worked in the first place! It’s the classic “house of cards” mentality: “it works, but not sure why, so nobody touch it!” You may have heard, “Hell is other people” (Sartre), and the programmer meme twist, “Hell is other people’s code.” I believe truly: “Hell is not understanding my own code.” And callbacks are one main culprit.

11.2.2 Nested/Chained Callbacks

Consider:

listen( "click", function handler(evt){
    setTimeout( function request(){
        ajax( "http://some.url.1", function response(text){
            if (text == "hello") {
                handler();
            }
            else if (text == "world") {
                request();
            }
        } );
    }, 500 );
} );

There’s a good chance code like that is recognizable to you. We’ve got a chain of three functions nested together, each one representing a step in an asynchronous series (task, “process”).

This kind of code is often called “callback hell,” and sometimes also referred to as the “pyramid of doom” (for its sideways-facing triangular shape due to the nested indentation).

But “callback hell” actually has almost nothing to do with the nesting/indentation. It’s a far deeper problem than that. We’ll see how and why as we continue through the rest of this chapter.

First, we’re waiting for the “click” event, then we’re waiting for the timer to fire, then we’re waiting for the Ajax response to come back, at which point it might do it all again.

At first glance, this code may seem to map its asynchrony naturally to sequential brain planning.

First (now), we:

listen( "..", function handler(..){
    // ..
} );

Then later, we:

setTimeout( function request(..){
    // ..
}, 500 );

Then still later, we:

ajax( "..", function response(..){
    // ..
} );

And finally (most later), we:

if ( .. ) {
    // ..
}
else ..

But there are several problems with reasoning about this code linearly in such a fashion.

First, it’s an accident of the example that our steps are on subsequent lines (1, 2, 3, and 4…). In real async JS programs, there’s often a lot more noise cluttering things up, noise that we have to deftly maneuver past in our brains as we jump from one function to the next. Understanding the async flow in such callback-laden code is not impossible, but it’s certainly not natural or easy, even with lots of practice.

But also, there’s something deeper wrong, which isn’t evident just in that code example. Let me make up another scenario (pseudocode-ish) to illustrate it:

doA( function(){
    doB();

    doC( function(){
        doD();
    } )

    doE();
} );

doF();

While the experienced among you will correctly identify the true order of operations here, I’m betting it is more than a little confusing at first glance, and takes some concerted mental cycles to arrive at. The operations will happen in this order:

  • doA()
  • doF()
  • doB()
  • doC()
  • doE()
  • doD()

Did you get that right the very first time you glanced at the code?

OK, some of you are thinking I was unfair in my function naming, to intentionally lead you astray. I swear I was just naming in top-down appearance order. But let me try again:

doA( function(){
    doC();

    doD( function(){
        doF();
    } )

    doE();
} );

doB();

Now, I’ve named them alphabetically in order of actual execution. But I still bet, even with experience now in this scenario, tracing through the A -> B -> C -> D -> E -> F order doesn’t come naturally to many if any of you readers. Certainly your eyes do an awful lot of jumping up and down the code snippet, right?

But even if that all comes naturally to you, there’s still one more hazard that could wreak havoc. Can you spot what it is?

What if doA(..) or doD(..) aren’t actually async, the way we obviously assumed them to be? Uh oh, now the order is different. If they’re both sync (and maybe only sometimes, depending on the conditions of the program at the time), the order is now A -> C -> D -> F -> E -> B.

That sound you just heard faintly in the background is the sighs of thousands of JS developers who just had a face-in-hands moment.

Is nesting the problem? Is that what makes it so hard to trace the async flow? That’s part of it, certainly.

But let me rewrite the previous nested event/timeout/Ajax example without using nesting:

listen( "click", handler );

function handler() {
    setTimeout( request, 500 );
}

function request(){
    ajax( "http://some.url.1", response );
}

function response(text){
    if (text == "hello") {
        handler();
    }
    else if (text == "world") {
        request();
    }
}

This formulation of the code hardly has the nesting/indentation woes of its previous form, and yet it’s every bit as susceptible to “callback hell.” Why?

As we go to linearly (sequentially) reason about this code, we have to skip from one function, to the next, to the next, and bounce all around the code base to “see” the sequence flow. And remember, this is simplified code in sort of best-case fashion. We all know that real async JS program code bases are often fantastically more jumbled, which makes such reasoning orders of magnitude more difficult.

Another thing to notice: to get steps 2, 3, and 4 linked together so they happen in succession, the only affordance callbacks alone gives us is to hardcode step 2 into step 1, step 3 into step 2, step 4 into step 3, and so on. The hardcoding isn’t necessarily a bad thing, if it really is a fixed condition that step 2 should always lead to step 3.

But the hardcoding definitely makes the code a bit more brittle, as it doesn’t account for anything going wrong that might cause a deviation in the progression of steps. For example, if step 2 fails, step 3 never gets reached, nor does step 2 retry, or move to an alternate error handling flow, and so on.

All of these issues are things you can manually hardcode into each step, but that code is often very repetitive and not reusable in other steps or in other async flows in your program.
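That brittleness can be sketched concretely. The step names below (`fetchRecord(..)`, `saveRecord(..)`) are hypothetical, and the “async” step is faked with a synchronous callback purely so the snippet is self-contained; notice how the retry and error handling must be hand-wired into this one flow:

```javascript
var attempts = 0;
var log = [];

// hypothetical flaky step: fails on the first call, succeeds on the
// second (invoked synchronously here just to keep the sketch runnable)
function fetchRecord(cb) {
    attempts++;
    if (attempts < 2) cb( new Error( "network hiccup" ) );
    else cb( null, { id: 42 } );
}

function saveRecord(record,cb) {
    log.push( "saved " + record.id );
    cb( null );
}

// the retry/error wiring is hardcoded into this one flow only;
// none of it can be reused by any other sequence of steps
function runFlow(retriesLeft) {
    fetchRecord( function(err,record){
        if (err) {
            if (retriesLeft > 0) {
                log.push( "retrying" );
                runFlow( retriesLeft - 1 );
            }
            else {
                log.push( "gave up" );
            }
            return;
        }
        saveRecord( record, function(err){
            if (err) { log.push( "save failed" ); return; }
            log.push( "done" );
        } );
    } );
}

runFlow( 3 );
console.log( log ); // [ "retrying", "saved 42", "done" ]
```

If a second flow needed the same fetch-then-save shape, all of that wiring would have to be written out again, which is exactly the repetition in question.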

Even though our brains might plan out a series of tasks in a sequential type of way (this, then this, then this), the evented nature of our brain operation makes recovery/retry/forking of flow control almost effortless. If you’re out running errands, and you realize you left a shopping list at home, it doesn’t end the day because you didn’t plan that ahead of time. Your brain routes around this hiccup easily: you go home, get the list, then head right back out to the store.

But the brittle nature of manually hardcoded callbacks (even with hardcoded error handling) is often far less graceful. Once you end up specifying (aka pre-planning) all the various eventualities/paths, the code becomes so convoluted that it’s hard to ever maintain or update it.

That is what “callback hell” is all about! The nesting/indentation are basically a side show, a red herring.

And as if all that’s not enough, we haven’t even touched what happens when two or more chains of these callback continuations are happening simultaneously, or when the third step branches out into “parallel” callbacks with gates or latches, or… OMG, my brain hurts, how about yours!?

Are you catching the notion here that our sequential, blocking brain planning behaviors just don’t map well onto callback-oriented async code? That’s the first major deficiency to articulate about callbacks: they express asynchrony in code in ways our brains have to fight just to keep in sync with (pun intended!).

11.3 Trust Issues

The mismatch between sequential brain planning and callback-driven async JS code is only part of the problem with callbacks. There’s something much deeper to be concerned about.

Let’s once again revisit the notion of a callback function as the continuation (aka the second half) of our program:

// A
ajax( "..", function(..){
    // C
} );
// B

// A and // B happen now, under the direct control of the main JS program. But // C gets deferred to happen later, and under the control of another party – in this case, the ajax(..) function. In a basic sense, that sort of hand-off of control doesn’t regularly cause lots of problems for programs.

But don’t let its infrequency fool you into thinking this control switch isn’t a big deal. In fact, it’s one of the worst (and yet most subtle) problems of callback-driven design. It revolves around the idea that sometimes ajax(..) (i.e., the “party” you hand your callback continuation to) is not a function that you wrote, or that you directly control. Many times it’s a utility provided by some third party.

We call this “inversion of control,” when you take part of your program and give over control of its execution to another third party. There’s an unspoken “contract” that exists between your code and the third-party utility – a set of things you expect to be maintained.

11.3.1 Tale of Five Callbacks

It might not be terribly obvious why this is such a big deal. Let me construct an exaggerated scenario to illustrate the hazards of trust at play.

Imagine you’re a developer tasked with building out an ecommerce checkout system for a site that sells expensive TVs. You already have all the various pages of the checkout system built out just fine. On the last page, when the user clicks “confirm” to buy the TV, you need to call a third-party function (provided say by some analytics tracking company) so that the sale can be tracked.

You notice that they’ve provided what looks like an async tracking utility, probably for the sake of performance best practices, which means you need to pass in a callback function. In this continuation that you pass in, you will have the final code that charges the customer’s credit card and displays the thank you page.

This code might look like:

analytics.trackPurchase( purchaseData, function(){
    chargeCreditCard();
    displayThankyouPage();
} );

Easy enough, right? You write the code, test it, everything works, and you deploy to production. Everyone’s happy!

Six months go by and no issues. You’ve almost forgotten you even wrote that code. One morning, you’re at a coffee shop before work, casually enjoying your latte, when you get a panicked call from your boss insisting you drop the coffee and rush into work right away.

When you arrive, you find out that a high-profile customer has had his credit card charged five times for the same TV, and he’s understandably upset. Customer service has already issued an apology and processed a refund. But your boss demands to know how this could possibly have happened. “Don’t we have tests for stuff like this!?”

You don’t even remember the code you wrote. But you dig back in and start trying to find out what could have gone awry.

After digging through some logs, you come to the conclusion that the only explanation is that the analytics utility somehow, for some reason, called your callback five times instead of once. Nothing in their documentation mentions anything about this.

Frustrated, you contact customer support, who of course is as astonished as you are. They agree to escalate it to their developers, and promise to get back to you. The next day, you receive a lengthy email explaining what they found, which you promptly forward to your boss.

Apparently, the developers at the analytics company had been working on some experimental code that, under certain conditions, would retry the provided callback once per second, for five seconds, before failing with a timeout. They had never intended to push that into production, but somehow they did, and they’re totally embarrassed and apologetic. They go into plenty of detail about how they’ve identified the breakdown and what they’ll do to ensure it never happens again. Yadda, yadda.

What’s next?

You talk it over with your boss, but he’s not feeling particularly comfortable with the state of things. He insists, and you reluctantly agree, that you can’t trust them anymore (that’s what bit you), and that you’ll need to figure out how to protect the checkout code from such a vulnerability again.

After some tinkering, you implement some simple ad hoc code like the following, which the team seems happy with:

var tracked = false;

analytics.trackPurchase( purchaseData, function(){
    if (!tracked) {
        tracked = true;
        chargeCreditCard();
        displayThankyouPage();
    }
} );

Note: This should look familiar to you from Chapter 1, because we’re essentially creating a latch to handle if there happen to be multiple concurrent invocations of our callback.

But then one of your QA engineers asks, “what happens if they never call the callback?” Oops. Neither of you had thought about that.

You begin to chase down the rabbit hole, and think of all the possible things that could go wrong with them calling your callback. Here’s roughly the list you come up with of ways the analytics utility could misbehave:

  • Call the callback too early (before it’s been tracked)
  • Call the callback too late (or never)
  • Call the callback too few or too many times (like the problem you encountered!)
  • Fail to pass along any necessary environment/parameters to your callback
  • Swallow any errors/exceptions that may happen

That should feel like a troubling list, because it is. You’re probably slowly starting to realize that you’re going to have to invent an awful lot of ad hoc logic in each and every single callback that’s passed to a utility you’re not positive you can trust.
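As a rough illustration of what that ad hoc logic looks like, here’s a hypothetical guard(..) helper (not from any library) covering just two items from the list: too-many invocations and bad parameters:

```javascript
// hypothetical per-callback guard: enforce at most one invocation,
// and validate the expected parameter before trusting it
function guard(cb) {
    var called = false;
    return function(text) {
        // called too many times?
        if (called) return;
        called = true;

        // missing/bad parameters?
        if (typeof text != "string") {
            cb( new Error( "Bad response" ) );
            return;
        }

        cb( null, text );
    };
}

var results = [];

var safe = guard( function(err,text){
    results.push( err ? "error" : text );
} );

// simulate a misbehaving utility invoking the callback twice
safe( "hello" );
safe( "world" ); // duplicate: silently ignored

console.log( results ); // [ "hello" ]
```

And even this sketch says nothing about the “too late (or never)” case, which would need a timeout layered on top.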

Now you realize a bit more completely just how hellish “callback hell” is.

11.3.2 Not Just Others’ Code

Some of you may be skeptical at this point whether this is as big a deal as I’m making it out to be. Perhaps you don’t interact with truly third-party utilities much if at all. Perhaps you use versioned APIs or self-host such libraries, so that their behavior can’t be changed out from underneath you.

So, contemplate this: can you even really trust utilities that you do theoretically control (in your own code base)?

Think of it this way: most of us agree that at least to some extent we should build our own internal functions with some defensive checks on the input parameters, to reduce/prevent unexpected issues.

Overly trusting of input:

function addNumbers(x,y) {
    // + is overloaded with coercion to also be
    // string concatenation, so this operation
    // isn't strictly safe depending on what's
    // passed in.
    return x + y;
}

addNumbers( 21, 21 );   // 42
addNumbers( 21, "21" ); // "2121"

Defensive against untrusted input:

function addNumbers(x,y) {
    // ensure numerical input
    if (typeof x != "number" || typeof y != "number") {
        throw Error( "Bad parameters" );
    }

    // if we get here, + will safely do numeric addition
    return x + y;
}

addNumbers( 21, 21 );   // 42
addNumbers( 21, "21" ); // Error: "Bad parameters"

Or perhaps still safe but friendlier:

function addNumbers(x,y) {
    // ensure numerical input
    x = Number( x );
    y = Number( y );

    // + will safely do numeric addition
    return x + y;
}

addNumbers( 21, 21 );   // 42
addNumbers( 21, "21" ); // 42

However you go about it, these sorts of checks/normalizations are fairly common on function inputs, even with code we theoretically entirely trust. In a crude sort of way, it’s like the programming equivalent of the geopolitical principle of “Trust But Verify.”

So, doesn’t it stand to reason that we should do the same thing about composition of async function callbacks, not just with truly external code but even with code we know is generally “under our own control”? Of course we should.

But callbacks don’t really offer anything to assist us. We have to construct all that machinery ourselves, and it often ends up being a lot of boilerplate/overhead that we repeat for every single async callback.

The most troublesome problem with callbacks is inversion of control leading to a complete breakdown along all those trust lines.

If you have code that uses callbacks, especially but not exclusively with third-party utilities, and you’re not already applying some sort of mitigation logic for all these inversion of control trust issues, your code has bugs in it right now even though they may not have bitten you yet. Latent bugs are still bugs.

Hell indeed.

11.4 Trying to Save Callbacks

There are several variations of callback design that have attempted to address some (not all!) of the trust issues we’ve just looked at. It’s a valiant, but doomed, effort to save the callback pattern from imploding on itself.

For example, regarding more graceful error handling, some API designs provide for split callbacks (one for the success notification, one for the error notification):

function success(data) {
    console.log( data );
}

function failure(err) {
    console.error( err );
}

ajax( "http://some.url.1", success, failure );

In APIs of this design, often the failure() error handler is optional, and if not provided it will be assumed you want the errors swallowed. Ugh.

Note: This split-callback design is what the ES6 Promise API uses. We’ll cover ES6 Promises in much more detail in the next chapter.

Another common callback pattern is called “error-first style” (sometimes called “Node style,” as it’s also the convention used across nearly all Node.js APIs), where the first argument of a single callback is reserved for an error object (if any). If success, this argument will be empty/falsy (and any subsequent arguments will be the success data), but if an error result is being signaled, the first argument is set/truthy (and usually nothing else is passed):

function response(err,data) {
    // error?
    if (err) {
        console.error( err );
    }
    // otherwise, assume success
    else {
        console.log( data );
    }
}

ajax( "http://some.url.1", response );

In both of these cases, several things should be observed.

First, it has not really resolved the majority of trust issues like it may appear. There’s nothing about either callback that prevents or filters unwanted repeated invocations. Moreover, things are worse now, because you may get both success and error signals, or neither, and you still have to code around either of those conditions.
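To see why, consider this sketch, where flakyAjax(..) is a made-up stand-in for a misbehaving utility that signals both outcomes; you still have to latch the pair yourself:

```javascript
// made-up `flakyAjax(..)`: a buggy utility that fires *both* the
// success and the failure notification for the same request
function flakyAjax(url,success,failure) {
    success( "response text" );
    failure( new Error( "oops" ) );
}

var outcome = [];
var settled = false;

flakyAjax( "http://some.url.1",
    function success(data){
        if (settled) return; // latch: ignore extra signals
        settled = true;
        outcome.push( "success: " + data );
    },
    function failure(err){
        if (settled) return; // latch: ignore extra signals
        settled = true;
        outcome.push( "failure: " + err.message );
    }
);

console.log( outcome ); // [ "success: response text" ]
```

Without the settled latch, both handlers would run and the program would act on contradictory signals.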

Also, don’t miss the fact that while it’s a standard pattern you can employ, it’s definitely more verbose and boilerplate-ish without much reuse, so you’re going to get weary of typing all that out for every single callback in your application.

What about the trust issue of never being called? If this is a concern (and it probably should be!), you likely will need to set up a timeout that cancels the event. You could make a utility (proof-of-concept only shown) to help you with that:

function timeoutify(fn,delay) {
    var intv = setTimeout( function(){
            intv = null;
            fn( new Error( "Timeout!" ) );
        }, delay )
    ;

    return function() {
        // timeout hasn't happened yet?
        if (intv) {
            clearTimeout( intv );
            fn.apply( this, [ null ].concat( [].slice.call( arguments ) ) );
        }
    };
}

Here’s how you use it:

// using "error-first style" callback design
function foo(err,data) {
    if (err) {
        console.error( err );
    }
    else {
        console.log( data );
    }
}

ajax( "http://some.url.1", timeoutify( foo, 500 ) );

Another trust issue is being called “too early.” In application-specific terms, this may actually involve being called before some critical task is complete. But more generally, the problem is evident in utilities that can either invoke the callback you provide now (synchronously), or later (asynchronously).

This nondeterminism around the sync-or-async behavior is almost always going to lead to very difficult to track down bugs. In some circles, the fictional insanity-inducing monster named Zalgo is used to describe the sync/async nightmares. “Don’t release Zalgo!” is a common cry, and it leads to very sound advice: always invoke callbacks asynchronously, even if that’s “right away” on the next turn of the event loop, so that all callbacks are predictably async.

Note: For more information on Zalgo, see Oren Golan’s “Don’t Release Zalgo!” (https://github.com/oren/oren.github.io/blob/master/posts/zalgo.md) and Isaac Z. Schlueter’s “Designing APIs for Asynchrony” (http://blog.izs.me/post/59142742143/designing-apis-for-asynchrony).

Consider:

function result(data) {
    console.log( a );
}

var a = 0;

ajax( "..pre-cached-url..", result );
a++;

Will this code print 0 (sync callback invocation) or 1 (async callback invocation)? Depends… on the conditions.

You can see just how quickly the unpredictability of Zalgo can threaten any JS program. So the silly-sounding “never release Zalgo” is actually incredibly common and solid advice. Always be asyncing.

What if you don’t know whether the API in question will always execute async? You could invent a utility like this asyncify(..) proof-of-concept:

function asyncify(fn) {
    var orig_fn = fn,
        intv = setTimeout( function(){
            intv = null;
            if (fn) fn();
        }, 0 )
    ;

    fn = null;

    return function() {
        // firing too quickly, before `intv` timer has fired to
        // indicate async turn has passed?
        if (intv) {
            fn = orig_fn.bind.apply(
                orig_fn,
                // add the wrapper's `this` to the `bind(..)`
                // call parameters, as well as currying any
                // passed in parameters
                [this].concat( [].slice.call( arguments ) )
            );
        }
        // already async
        else {
            // invoke original function
            orig_fn.apply( this, arguments );
        }
    };
}

You use asyncify(..) like this:

function result(data) {
    console.log( a );
}

var a = 0;

ajax( "..pre-cached-url..", asyncify( result ) );
a++;

Whether the Ajax request is in the cache and resolves to try to call the callback right away, or must be fetched over the wire and thus complete later asynchronously, this code will always output 1 instead of 0, because result(..) cannot help but be invoked asynchronously, which means the a++ has a chance to run before result(..) does.

Yay, another trust issue “solved”! But it’s inefficient, and yet again more bloated boilerplate to weigh your project down.

That’s just the story, over and over again, with callbacks. They can do pretty much anything you want, but you have to be willing to work hard to get it, and oftentimes this effort is much more than you can or should spend on such code reasoning.

You might find yourself wishing for built-in APIs or other language mechanics to address these issues. Finally ES6 has arrived on the scene with some great answers, so keep reading!

11.5 Review

Callbacks are the fundamental unit of asynchrony in JS. But they’re not enough for the evolving landscape of async programming as JS matures.

First, our brains plan things out in sequential, blocking, single-threaded semantic ways, but callbacks express asynchronous flow in a rather nonlinear, nonsequential way, which makes reasoning properly about such code much harder. Code that’s hard to reason about is bad code that leads to bad bugs.

We need a way to express asynchrony in a more synchronous, sequential, blocking manner, just like our brains do.

Second, and more importantly, callbacks suffer from inversion of control in that they implicitly give control over to another party (often a third-party utility not in your control!) to invoke the continuation of your program. This control transfer leads us to a troubling list of trust issues, such as whether the callback is called more times than we expect.

Inventing ad hoc logic to solve these trust issues is possible, but it’s more difficult than it should be, and it produces clunkier and harder to maintain code, as well as code that is likely insufficiently protected from these hazards until you get visibly bitten by the bugs.

We need a generalized solution to all of the trust issues, one that can be reused for as many callbacks as we create without all the extra boilerplate overhead.

We need something better than callbacks. They’ve served us well to this point, but the future of JavaScript demands more sophisticated and capable async patterns. The subsequent chapters in this book will dive into those emerging evolutions.

12 You Don’t Know JS: Async & Performance

13 Chapter 3: Promises

In Chapter 2, we identified two major categories of deficiencies with using callbacks to express program asynchrony and manage concurrency: lack of sequentiality and lack of trustability. Now that we understand the problems more intimately, it’s time we turn our attention to patterns that can address them.

The issue we want to address first is the inversion of control, the trust that is so fragilely held and so easily lost.

Recall that we wrap up the continuation of our program in a callback function, and hand that callback over to another party (potentially even external code) and just cross our fingers that it will do the right thing with the invocation of the callback.

We do this because we want to say, “here’s what happens later, after the current step finishes.”

But what if we could uninvert that inversion of control? What if instead of handing the continuation of our program to another party, we could expect it to return us a capability to know when its task finishes, and then our code could decide what to do next?

This paradigm is called Promises.

Promises are starting to take the JS world by storm, as developers and specification writers alike desperately seek to untangle the insanity of callback hell in their code/design. In fact, most new async APIs being added to JS/DOM platform are being built on Promises. So it’s probably a good idea to dig in and learn them, don’t you think!?

Note: The word “immediately” will be used frequently in this chapter, generally to refer to some Promise resolution action. However, in essentially all cases, “immediately” means in terms of the Job queue behavior (see Chapter 1), not in the strictly synchronous now sense.

13.1 What Is a Promise?

When developers decide to learn a new technology or pattern, usually their first step is “Show me the code!” It’s quite natural for us to just jump in feet first and learn as we go.

But it turns out that some abstractions get lost on the APIs alone. Promises are one of those tools where it can be painfully obvious from how someone uses it whether they understand what it’s for and about versus just learning and using the API.

So before I show the Promise code, I want to fully explain what a Promise really is conceptually. I hope this will then guide you better as you explore integrating Promise theory into your own async flow.

With that in mind, let’s look at two different analogies for what a Promise is.

13.1.1 Future Value

Imagine this scenario: I walk up to the counter at a fast-food restaurant, and place an order for a cheeseburger. I hand the cashier $1.47. By placing my order and paying for it, I’ve made a request for a value back (the cheeseburger). I’ve started a transaction.

But often, the cheeseburger is not immediately available for me. The cashier hands me something in place of my cheeseburger: a receipt with an order number on it. This order number is an IOU (“I owe you”) promise that ensures that eventually, I should receive my cheeseburger.

So I hold onto my receipt and order number. I know it represents my future cheeseburger, so I don’t need to worry about it anymore – aside from being hungry!

While I wait, I can do other things, like send a text message to a friend that says, “Hey, can you come join me for lunch? I’m going to eat a cheeseburger.”

I am reasoning about my future cheeseburger already, even though I don’t have it in my hands yet. My brain is able to do this because it’s treating the order number as a placeholder for the cheeseburger. The placeholder essentially makes the value time independent. It’s a future value.

Eventually, I hear, “Order 113!” and I gleefully walk back up to the counter with receipt in hand. I hand my receipt to the cashier, and I take my cheeseburger in return.

In other words, once my future value was ready, I exchanged my value-promise for the value itself.

But there’s another possible outcome. They call my order number, but when I go to retrieve my cheeseburger, the cashier regretfully informs me, “I’m sorry, but we appear to be all out of cheeseburgers.” Setting aside the customer frustration of this scenario for a moment, we can see an important characteristic of future values: they can either indicate a success or failure.

Every time I order a cheeseburger, I know that I’ll either get a cheeseburger eventually, or I’ll get the sad news of the cheeseburger shortage, and I’ll have to figure out something else to eat for lunch.

Note: In code, things are not quite as simple, because metaphorically the order number may never be called, in which case we’re left indefinitely in an unresolved state. We’ll come back to dealing with that case later.

13.1.1.1 Values Now and Later

This all might sound too mentally abstract to apply to your code. So let’s be more concrete.

However, before we can introduce how Promises work in this fashion, we’re going to derive how to handle these future values using code we already understand: callbacks!

When you write code to reason about a value, such as performing math on a number, whether you realize it or not, you’ve been assuming something very fundamental about that value, which is that it’s a concrete now value already:

var x, y = 2;

console.log( x + y ); // NaN  <-- because `x` isn't set yet

The x + y operation assumes both x and y are already set. In terms we’ll expound on shortly, we assume the x and y values are already resolved.

It would be nonsense to expect that the + operator by itself would somehow be magically capable of detecting and waiting around until both x and y are resolved (aka ready), only then to do the operation. That would cause chaos in the program if different statements finished now and others finished later, right?

How could you possibly reason about the relationships between two statements if either one (or both) of them might not be finished yet? If statement 2 relies on statement 1 being finished, there are just two outcomes: either statement 1 finished right now and everything proceeds fine, or statement 1 didn’t finish yet, and thus statement 2 is going to fail.

If this sort of thing sounds familiar from Chapter 1, good!

Let’s go back to our x + y math operation. Imagine if there was a way to say, “Add x and y, but if either of them isn’t ready yet, just wait until they are. Add them as soon as you can.”

Your brain might have just jumped to callbacks. OK, so…

function add(getX,getY,cb) {
    var x, y;
    getX( function(xVal){
        x = xVal;
        // both are ready?
        if (y != undefined) {
            cb( x + y );    // send along sum
        }
    } );
    getY( function(yVal){
        y = yVal;
        // both are ready?
        if (x != undefined) {
            cb( x + y );    // send along sum
        }
    } );
}

// `fetchX()` and `fetchY()` are sync or async
// functions
add( fetchX, fetchY, function(sum){
    console.log( sum ); // that was easy, huh?
} );

Take just a moment to let the beauty (or lack thereof) of that snippet sink in (whistles patiently).

While the ugliness is undeniable, there’s something very important to notice about this async pattern.

In that snippet, we treated x and y as future values, and we expressed an operation add(..) that (from the outside) does not care whether x or y or both are available right away or not. In other words, it normalizes the now and later, such that we can rely on a predictable outcome of the add(..) operation.

By using an add(..) that is temporally consistent – it behaves the same across now and later times – the async code is much easier to reason about.

To put it more plainly: to consistently handle both now and later, we make both of them later: all operations become async.

Of course, this rough callbacks-based approach leaves much to be desired. It’s just a first tiny step toward realizing the benefits of reasoning about future values without worrying about the time aspect of when it’s available or not.
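To see that temporal consistency concretely, here’s a sketch that drives the earlier add(..) with one synchronous source and one asynchronous source. The getX(..) and getY(..) names and values are invented purely for illustration:

```javascript
// `add(..)` copied from the earlier snippet
function add(getX,getY,cb) {
    var x, y;
    getX( function(xVal){
        x = xVal;
        // both are ready?
        if (y != undefined) {
            cb( x + y );
        }
    } );
    getY( function(yVal){
        y = yVal;
        // both are ready?
        if (x != undefined) {
            cb( x + y );
        }
    } );
}

// hypothetical value sources, invented for illustration:
// `getX(..)` answers synchronously, `getY(..)` asynchronously
function getX(cb) {
    cb( 3 );                // value is ready *now*
}
function getY(cb) {
    setTimeout( function(){
        cb( 4 );            // value is ready *later*
    }, 50 );
}

add( getX, getY, function(sum){
    console.log( sum );     // 7 -- same outcome either way
} );
```

Swap the setTimeout(..) between the two sources, or remove it entirely, and the observable result of add(..) is unchanged.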

13.1.1.2 Promise Value

We’ll definitely go into a lot more detail about Promises later in the chapter – so don’t worry if some of this is confusing – but let’s just briefly glimpse at how we can express the x + y example via Promises:

function add(xPromise,yPromise) {
    // `Promise.all([ .. ])` takes an array of promises,
    // and returns a new promise that waits on them
    // all to finish
    return Promise.all( [xPromise, yPromise] )

    // when that promise is resolved, let's take the
    // received `X` and `Y` values and add them together.
    .then( function(values){
        // `values` is an array of the messages from the
        // previously resolved promises
        return values[0] + values[1];
    } );
}

// `fetchX()` and `fetchY()` return promises for
// their respective values, which may be ready
// *now* or *later*.
add( fetchX(), fetchY() )

// we get a promise back for the sum of those
// two numbers.
// now we chain-call `then(..)` to wait for the
// resolution of that returned promise.
.then( function(sum){
    console.log( sum ); // that was easier!
} );

There are two layers of Promises in this snippet.

fetchX() and fetchY() are called directly, and the values they return (promises!) are passed into add(..). The underlying values those promises represent may be ready now or later, but each promise normalizes the behavior to be the same regardless. We reason about X and Y values in a time-independent way. They are future values.

The second layer is the promise that add(..) creates (via Promise.all([ .. ])) and returns, which we wait on by calling then(..). When the add(..) operation completes, our sum future value is ready and we can print it out. We hide inside of add(..) the logic for waiting on the X and Y future values.

Note: Inside add(..), the Promise.all([ .. ]) call creates a promise (which is waiting on promiseX and promiseY to resolve). The chained call to .then(..) creates another promise, which the return values[0] + values[1] line immediately resolves (with the result of the addition). Thus, the then(..) call we chain off the end of the add(..) call – at the end of the snippet – is actually operating on that second promise returned, rather than the first one created by Promise.all([ .. ]). Also, though we are not chaining off the end of that second then(..), it too has created another promise, had we chosen to observe/use it. This Promise chaining stuff will be explained in much greater detail later in this chapter.
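That layering can be seen in miniature: every then(..) call creates and returns a new promise, resolved with whatever its callback returns:

```javascript
var p = Promise.resolve( 21 );

// `p2` is a *new* promise, fulfilled with the
// return value of the fulfillment callback
var p2 = p.then( function(v){
    return v * 2;
} );

p2.then( function(v){
    console.log( v );   // 42
} );
```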

Just like with cheeseburger orders, it’s possible that the resolution of a Promise is rejection instead of fulfillment. Unlike a fulfilled Promise, where the value is always programmatic, a rejection value – commonly called a “rejection reason” – can either be set directly by the program logic, or it can result implicitly from a runtime exception.

With Promises, the then(..) call can actually take two functions, the first for fulfillment (as shown earlier), and the second for rejection:

add( fetchX(), fetchY() )
.then(
    // fulfillment handler
    function(sum) {
        console.log( sum );
    },
    // rejection handler
    function(err) {
        console.error( err ); // bummer!
    }
);

If something went wrong getting X or Y, or something somehow failed during the addition, the promise that add(..) returns is rejected, and the second callback error handler passed to then(..) will receive the rejection value from the promise.

Because Promises encapsulate the time-dependent state – waiting on the fulfillment or rejection of the underlying value – from the outside, the Promise itself is time-independent, and thus Promises can be composed (combined) in predictable ways regardless of the timing or outcome underneath.

Moreover, once a Promise is resolved, it stays that way forever – it becomes an immutable value at that point – and can then be observed as many times as necessary.

Note: Because a Promise is externally immutable once resolved, it’s now safe to pass that value around to any party and know that it cannot be modified accidentally or maliciously. This is especially true in relation to multiple parties observing the resolution of a Promise. It is not possible for one party to affect another party’s ability to observe Promise resolution. Immutability may sound like an academic topic, but it’s actually one of the most fundamental and important aspects of Promise design, and shouldn’t be casually passed over.

That’s one of the most powerful and important concepts to understand about Promises. With a fair amount of work, you could ad hoc create the same effects with nothing but ugly callback composition, but that’s not really an effective strategy, especially because you have to do it over and over again.

Promises are an easily repeatable mechanism for encapsulating and composing future values.
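For example, two independent observers of the same resolved promise each see the same value, and neither can affect what the other sees:

```javascript
var p = Promise.resolve( "done" );

// two independent observations of the same resolution
p.then( function(v){
    console.log( "first observer:", v );    // first observer: done
} );
p.then( function(v){
    console.log( "second observer:", v );   // second observer: done
} );
```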

13.1.2 Completion Event

As we just saw, an individual Promise behaves as a future value. But there’s another way to think of the resolution of a Promise: as a flow-control mechanism – a temporal this-then-that – for two or more steps in an asynchronous task.

Let’s imagine calling a function foo(..) to perform some task. We don’t know about any of its details, nor do we care. It may complete the task right away, or it may take a while.

We just simply need to know when foo(..) finishes so that we can move on to our next task. In other words, we’d like a way to be notified of foo(..)’s completion so that we can continue.

In typical JavaScript fashion, if you need to listen for a notification, you’d likely think of that in terms of events. So we could reframe our need for notification as a need to listen for a completion (or continuation) event emitted by foo(..).

Note: Whether you call it a “completion event” or a “continuation event” depends on your perspective. Is the focus more on what happens with foo(..), or what happens after foo(..) finishes? Both perspectives are accurate and useful. The event notification tells us that foo(..) has completed, but also that it’s OK to continue with the next step. Indeed, the callback you pass to be called for the event notification is itself what we’ve previously called a continuation. Because completion event is a bit more focused on foo(..), which has more of our attention at present, we slightly favor completion event for the rest of this text.

With callbacks, the “notification” would be our callback invoked by the task (foo(..)). But with Promises, we turn the relationship around, and expect that we can listen for an event from foo(..), and when notified, proceed accordingly.

First, consider some pseudocode:

foo(x) {
    // start doing something that could take a while
}

foo( 42 )

on (foo "completion") {
    // now we can do the next step!
}

on (foo "error") {
    // oops, something went wrong in `foo(..)`
}

We call foo(..) and then we set up two event listeners, one for "completion" and one for "error" – the two possible final outcomes of the foo(..) call. In essence, foo(..) doesn’t even appear to be aware that the calling code has subscribed to these events, which makes for a very nice separation of concerns.

Unfortunately, such code would require some “magic” of the JS environment that doesn’t exist (and would likely be a bit impractical). Here’s the more natural way we could express that in JS:

function foo(x) {
    // start doing something that could take a while

    // make a `listener` event notification
    // capability to return

    return listener;
}

var evt = foo( 42 );

evt.on( "completion", function(){
    // now we can do the next step!
} );

evt.on( "failure", function(err){
    // oops, something went wrong in `foo(..)`
} );

foo(..) expressly creates an event subscription capability to return back, and the calling code receives and registers the two event handlers against it.

The inversion from normal callback-oriented code should be obvious, and it’s intentional. Instead of passing the callbacks to foo(..), it returns an event capability we call evt, which receives the callbacks.

But if you recall from Chapter 2, callbacks themselves represent an inversion of control. So inverting the callback pattern is actually an inversion of inversion, or an uninversion of control – restoring control back to the calling code where we wanted it to be in the first place.

One important benefit is that multiple separate parts of the code can be given the event listening capability, and they can all independently be notified of when foo(..) completes to perform subsequent steps after its completion:

var evt = foo( 42 );

// let `bar(..)` listen to `foo(..)`'s completion
bar( evt );

// also, let `baz(..)` listen to `foo(..)`'s completion
baz( evt );

Uninversion of control enables a nicer separation of concerns, where bar(..) and baz(..) don’t need to be involved in how foo(..) is called. Similarly, foo(..) doesn’t need to know or care that bar(..) and baz(..) exist or are waiting to be notified when foo(..) completes.

Essentially, this evt object is a neutral third-party negotiation between the separate concerns.
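A minimal sketch of such an event capability might look like the following. This is an invented illustration of the uninversion pattern, not how Promises are actually implemented:

```javascript
// a bare-bones listener capability, invented purely
// to illustrate returning an event subscription
function makeListener() {
    var handlers = {};
    return {
        on: function(evtName,fn){
            (handlers[evtName] = handlers[evtName] || [])
                .push( fn );
        },
        emit: function(evtName,data){
            (handlers[evtName] || []).forEach( function(fn){
                fn( data );
            } );
        }
    };
}

function foo(x) {
    var listener = makeListener();

    // start the task; signal completion asynchronously
    setTimeout( function(){
        listener.emit( "completion", x * 2 );
    }, 0 );

    return listener;
}

var evt = foo( 42 );

evt.on( "completion", function(v){
    console.log( "done:", v );  // done: 84
} );
```

Note that the calling code, not foo(..), now holds the callbacks, which is exactly the control relationship Promises formalize.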

13.1.2.1 Promise “Events”

As you may have guessed by now, the evt event listening capability is an analogy for a Promise.

In a Promise-based approach, the previous snippet would have foo(..) creating and returning a Promise instance, and that promise would then be passed to bar(..) and baz(..).

Note: The Promise resolution “events” we listen for aren’t strictly events (though they certainly behave like events for these purposes), and they’re not typically called "completion" or "error". Instead, we use then(..) to register a "then" event. Or perhaps more precisely, then(..) registers "fulfillment" and/or "rejection" event(s), though we don’t see those terms used explicitly in the code.

Consider:

function foo(x) {
    // start doing something that could take a while

    // construct and return a promise
    return new Promise( function(resolve,reject){
        // eventually, call `resolve(..)` or `reject(..)`,
        // which are the resolution callbacks for
        // the promise.
    } );
}

var p = foo( 42 );

bar( p );

baz( p );

Note: The pattern shown with new Promise( function(..){ .. } ) is generally called the “revealing constructor”. The function passed in is executed immediately (not async deferred, as callbacks to then(..) are), and it’s provided two parameters, which in this case we’ve named resolve and reject. These are the resolution functions for the promise. resolve(..) generally signals fulfillment, and reject(..) signals rejection.

You can probably guess what the internals of bar(..) and baz(..) might look like:

function bar(fooPromise) {
    // listen for `foo(..)` to complete
    fooPromise.then(
        function(){
            // `foo(..)` has now finished, so
            // do `bar(..)`'s task
        },
        function(){
            // oops, something went wrong in `foo(..)`
        }
    );
}

// ditto for `baz(..)`

Promise resolution doesn’t necessarily need to involve sending along a message, as it did when we were examining Promises as future values. It can just be a flow-control signal, as used in the previous snippet.

Another way to approach this is:

function bar() {
    // `foo(..)` has definitely finished, so
    // do `bar(..)`'s task
}

function oopsBar() {
    // oops, something went wrong in `foo(..)`,
    // so `bar(..)` didn't run
}

// ditto for `baz()` and `oopsBaz()`

var p = foo( 42 );

p.then( bar, oopsBar );

p.then( baz, oopsBaz );

Note: If you’ve seen Promise-based coding before, you might be tempted to believe that the last two lines of that code could be written as p.then( .. ).then( .. ), using chaining, rather than p.then(..); p.then(..). That would have an entirely different behavior, so be careful! The difference might not be clear right now, but it’s actually a different async pattern than we’ve seen thus far: splitting/forking. Don’t worry! We’ll come back to this point later in this chapter.

Instead of passing the p promise to bar(..) and baz(..), we use the promise to control when bar(..) and baz(..) will get executed, if ever. The primary difference is in the error handling.

In the first snippet’s approach, bar(..) is called regardless of whether foo(..) succeeds or fails, and it handles its own fallback logic if it’s notified that foo(..) failed. The same is true for baz(..), obviously.

In the second snippet, bar(..) only gets called if foo(..) succeeds, and otherwise oopsBar(..) gets called. Ditto for baz(..).

Neither approach is correct per se. There will be cases where one is preferred over the other.

In either case, the promise p that comes back from foo(..) is used to control what happens next.

Moreover, the fact that both snippets end up calling then(..) twice against the same promise p illustrates the point made earlier, which is that Promises (once resolved) retain their same resolution (fulfillment or rejection) forever, and can subsequently be observed as many times as necessary.

Whenever p is resolved, the next step will always be the same, both now and later.

13.2 Thenable Duck Typing

In Promises-land, an important detail is how to know for sure if some value is a genuine Promise or not. Or more directly, is it a value that will behave like a Promise?

Given that Promises are constructed by the new Promise(..) syntax, you might think that p instanceof Promise would be an acceptable check. But unfortunately, there are a number of reasons that’s not totally sufficient.

Mainly, you can receive a Promise value from another browser window (iframe, etc.), which would have its own Promise different from the one in the current window/frame, and that check would fail to identify the Promise instance.

Moreover, a library or framework may choose to vend its own Promises and not use the native ES6 Promise implementation to do so. In fact, you may very well be using Promises with libraries in older browsers that have no Promise at all.

When we discuss Promise resolution processes later in this chapter, it will become more obvious why it’s so important to be able to recognize and assimilate a value that’s not a genuine Promise but is Promise-like. But for now, just take my word for it that it’s a critical piece of the puzzle.

As such, it was decided that the way to recognize a Promise (or something that behaves like a Promise) would be to define something called a “thenable” as any object or function which has a then(..) method on it. It is assumed that any such value is a Promise-conforming thenable.

The general term for “type checks” that make assumptions about a value’s “type” based on its shape (what properties are present) is called “duck typing” – “If it looks like a duck, and quacks like a duck, it must be a duck” (see the Types & Grammar title of this book series). So the duck typing check for a thenable would roughly be:

if (
    p !== null &&
    (
        typeof p === "object" ||
        typeof p === "function"
    ) &&
    typeof p.then === "function"
) {
    // assume it's a thenable!
}
else {
    // not a thenable
}

Yuck! Setting aside the fact that this logic is a bit ugly to implement in various places, there’s something deeper and more troubling going on.
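In practice you’d at least wrap the check up once. The isThenable(..) name here is ours, not from any spec:

```javascript
// the same duck typing check, wrapped in a helper;
// the name `isThenable(..)` is invented for illustration
function isThenable(p) {
    return (
        p !== null &&
        (
            typeof p === "object" ||
            typeof p === "function"
        ) &&
        typeof p.then === "function"
    );
}

isThenable( { then: function(){} } );   // true
isThenable( Promise.resolve( 42 ) );    // true
isThenable( { then: 42 } );             // false
isThenable( 42 );                       // false
```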

If you try to fulfill a Promise with any object/function value that happens to have a then(..) function on it, but you weren’t intending it to be treated as a Promise/thenable, you’re out of luck, because it will automatically be recognized as thenable and treated with special rules (see later in the chapter).

This is even true if you didn’t realize the value has a then(..) on it. For example:

var o = { then: function(){} };

// make `v` be `[[Prototype]]`-linked to `o`
var v = Object.create( o );

v.someStuff = "cool";
v.otherStuff = "not so cool";

v.hasOwnProperty( "then" );     // false

v doesn’t look like a Promise or thenable at all. It’s just a plain object with some properties on it. You’re probably just intending to send that value around like any other object.

But unknown to you, v is also [[Prototype]]-linked (see the this & Object Prototypes title of this book series) to another object o, which happens to have a then(..) on it. So the thenable duck typing checks will think and assume v is a thenable. Uh oh.

It doesn’t even need to be something as directly intentional as that:

Object.prototype.then = function(){};
Array.prototype.then = function(){};

var v1 = { hello: "world" };
var v2 = [ "Hello", "World" ];

Both v1 and v2 will be assumed to be thenables. You can’t control or predict if any other code accidentally or maliciously adds then(..) to Object.prototype, Array.prototype, or any of the other native prototypes. And if the then(..) that gets added is a function that doesn’t call either of its parameters as callbacks, then any Promise resolved with such a value will just silently hang forever! Crazy.

Sound implausible or unlikely? Perhaps.

But keep in mind that there were several well-known non-Promise libraries preexisting in the community prior to ES6 that happened to already have a method on them called then(..). Some of those libraries chose to rename their own methods to avoid collision (that sucks!). Others have simply been relegated to the unfortunate status of “incompatible with Promise-based coding” in reward for their inability to change to get out of the way.

The standards decision to hijack the previously nonreserved – and completely general-purpose sounding – then property name means that no value (or any of its delegates), either past, present, or future, can have a then(..) function present, either on purpose or by accident, or that value will be confused for a thenable in Promises systems, which will probably create bugs that are really hard to track down.

Warning: I do not like how we ended up with duck typing of thenables for Promise recognition. There were other options, such as “branding” or even “anti-branding”; what we got seems like a worst-case compromise. But it’s not all doom and gloom. Thenable duck typing can be helpful, as we’ll see later. Just beware that thenable duck typing can be hazardous if it incorrectly identifies something as a Promise that isn’t.

13.3 Promise Trust

We’ve now seen two strong analogies that explain different aspects of what Promises can do for our async code. But if we stop there, we’ve missed perhaps the single most important characteristic that the Promise pattern establishes: trust.

Whereas the future values and completion events analogies play out explicitly in the code patterns we’ve explored, it won’t be entirely obvious why or how Promises are designed to solve all of the inversion of control trust issues we laid out in the “Trust Issues” section of Chapter 2. But with a little digging, we can uncover some important guarantees that restore the confidence in async coding that Chapter 2 tore down!

Let’s start by reviewing the trust issues with callbacks-only coding. When you pass a callback to a utility foo(..), it might:

  • Call the callback too early
  • Call the callback too late (or never)
  • Call the callback too few or too many times
  • Fail to pass along any necessary environment/parameters
  • Swallow any errors/exceptions that may happen

The characteristics of Promises are intentionally designed to provide useful, repeatable answers to all these concerns.

13.3.1 Calling Too Early

Primarily, this is a concern of whether code can introduce Zalgo-like effects (see Chapter 2), where sometimes a task finishes synchronously and sometimes asynchronously, which can lead to race conditions.

Promises by definition cannot be susceptible to this concern, because even an immediately fulfilled Promise (like new Promise(function(resolve){ resolve(42); })) cannot be observed synchronously.

That is, when you call then(..) on a Promise, even if that Promise was already resolved, the callback you provide to then(..) will always be called asynchronously (for more on this, refer back to “Jobs” in Chapter 1).

No more need to insert your own setTimeout(..,0) hacks. Promises prevent Zalgo automatically.
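You can observe that guarantee directly. Even a promise that’s fulfilled immediately is only ever observed asynchronously:

```javascript
var p = Promise.resolve( 42 );  // fulfilled immediately...

p.then( function(){
    console.log( "B" );         // ...but observed asynchronously
} );

console.log( "A" );

// A B  <-- never  B A
```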

13.3.2 Calling Too Late

Similar to the previous point, a Promise’s then(..) registered observation callbacks are automatically scheduled when either resolve(..) or reject(..) are called by the Promise creation capability. Those scheduled callbacks will predictably be fired at the next asynchronous moment (see “Jobs” in Chapter 1).

Synchronous observation isn’t possible, so a synchronous chain of tasks can’t run in such a way as to, in effect, “delay” another callback from happening as expected. That is, when a Promise is resolved, all then(..) registered callbacks on it will be called, in order, immediately at the next asynchronous opportunity (again, see “Jobs” in Chapter 1), and nothing that happens inside of one of those callbacks can affect/delay the calling of the other callbacks.

For example:

p.then( function(){
    p.then( function(){
        console.log( "C" );
    } );
    console.log( "A" );
} );
p.then( function(){
    console.log( "B" );
} );
// A B C

Here, "C" cannot interrupt and precede "B", by virtue of how Promises are defined to operate.

13.3.2.1 Promise Scheduling Quirks

It’s important to note, though, that there are lots of nuances of scheduling where the relative ordering between callbacks chained off two separate Promises is not reliably predictable.

If two promises p1 and p2 are both already resolved, it should be true that p1.then(..); p2.then(..) would end up calling the callback(s) for p1 before the ones for p2. But there are subtle cases where that might not be true, such as the following:

var p3 = new Promise( function(resolve,reject){
    resolve( "B" );
} );

var p1 = new Promise( function(resolve,reject){
    resolve( p3 );
} );

var p2 = new Promise( function(resolve,reject){
    resolve( "A" );
} );

p1.then( function(v){
    console.log( v );
} );

p2.then( function(v){
    console.log( v );
} );

// A B  <-- not  B A  as you might expect

We’ll cover this more later, but as you can see, p1 is resolved not with an immediate value, but with another promise p3 which is itself resolved with the value "B". The specified behavior is to unwrap p3 into p1, but asynchronously, so p1’s callback(s) are behind p2’s callback(s) in the asynchronous Job queue (see Chapter 1).

To avoid such nuanced nightmares, you should never rely on anything about the ordering/scheduling of callbacks across Promises. In fact, a good practice is not to code in such a way where the ordering of multiple callbacks matters at all. Avoid that if you can.

13.3.3 Never Calling the Callback

This is a very common concern. It’s addressable in several ways with Promises.

First, nothing (not even a JS error) can prevent a Promise from notifying you of its resolution (if it’s resolved). If you register both fulfillment and rejection callbacks for a Promise, and the Promise gets resolved, one of the two callbacks will always be called.

Of course, if your callbacks themselves have JS errors, you may not see the outcome you expect, but the callback will in fact have been called. We’ll cover later how to be notified of an error in your callback, because even those don’t get swallowed.

But what if the Promise itself never gets resolved either way? Even that is a condition that Promises provide an answer for, using a higher level abstraction called a “race”:

// a utility for timing out a Promise
function timeoutPromise(delay) {
    return new Promise( function(resolve,reject){
        setTimeout( function(){
            reject( "Timeout!" );
        }, delay );
    } );
}

// setup a timeout for `foo()`
Promise.race( [
    foo(),                  // attempt `foo()`
    timeoutPromise( 3000 )  // give it 3 seconds
] )
.then(
    function(){
        // `foo(..)` fulfilled in time!
    },
    function(err){
        // either `foo()` rejected, or it just
        // didn't finish in time, so inspect
        // `err` to know which
    }
);

There are more details to consider with this Promise timeout pattern, but we’ll come back to it later.

Importantly, we can ensure a signal as to the outcome of foo(), to prevent it from hanging our program indefinitely.

13.3.4 Calling Too Few or Too Many Times

By definition, one is the appropriate number of times for the callback to be called. The “too few” case would be zero calls, which is the same as the “never” case we just examined.

The “too many” case is easy to explain. Promises are defined so that they can only be resolved once. If for some reason the Promise creation code tries to call resolve(..) or reject(..) multiple times, or tries to call both, the Promise will accept only the first resolution, and will silently ignore any subsequent attempts.

Because a Promise can only be resolved once, any then(..) registered callbacks will only ever be called once (each).

Of course, if you register the same callback more than once (e.g., p.then(f); p.then(f);), it’ll be called as many times as it was registered. The guarantee that a response function is called only once does not prevent you from shooting yourself in the foot.
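For example, any resolution attempt after the first is simply ignored:

```javascript
var p = new Promise( function(resolve,reject){
    resolve( 1 );
    resolve( 2 );       // silently ignored
    reject( "nope" );   // also silently ignored
} );

p.then( function(v){
    console.log( v );   // 1
} );
```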

13.3.5 Failing to Pass Along Any Parameters/Environment

Promises can have, at most, one resolution value (fulfillment or rejection).

If you don’t explicitly resolve with a value either way, the value is undefined, as is typical in JS. But whatever the value, it will always be passed to all registered (and appropriate: fulfillment or rejection) callbacks, either now or in the future.

Something to be aware of: If you call resolve(..) or reject(..) with multiple parameters, all subsequent parameters beyond the first will be silently ignored. Although that might seem a violation of the guarantee we just described, it’s not exactly, because it constitutes an invalid usage of the Promise mechanism. Other invalid usages of the API (such as calling resolve(..) multiple times) are similarly protected, so the Promise behavior here is consistent (if not a tiny bit frustrating).

If you want to pass along multiple values, you must wrap them in another single value that you pass, such as an array or an object.
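For example, a hypothetical getCoords() utility (the name is invented for illustration) that logically produces two values fulfills with a single array wrapping them:

```javascript
// hypothetical utility, invented for illustration:
// fulfills with *two* logical values by wrapping
// them in one array
function getCoords() {
    return new Promise( function(resolve,reject){
        resolve( [ 10, 20 ] );
    } );
}

getCoords().then( function(coords){
    var x = coords[0], y = coords[1];
    console.log( x, y );    // 10 20
} );
```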

As for environment, functions in JS always retain their closure of the scope in which they’re defined (see the Scope & Closures title of this series), so they of course would continue to have access to whatever surrounding state you provide. Of course, the same is true of callbacks-only design, so this isn’t a specific augmentation of benefit from Promises – but it’s a guarantee we can rely on nonetheless.

13.3.6 Swallowing Any Errors/Exceptions

In the base sense, this is a restatement of the previous point. If you reject a Promise with a reason (aka error message), that value is passed to the rejection callback(s).

But there’s something much bigger at play here. If at any point in the creation of a Promise, or in the observation of its resolution, a JS exception error occurs, such as a TypeError or ReferenceError, that exception will be caught, and it will force the Promise in question to become rejected.

For example:

var p = new Promise( function(resolve,reject){
    foo.bar();  // `foo.bar` is not a function, so error!
    resolve( 42 );  // never gets here :(
} );

p.then(
    function fulfilled(){
        // never gets here :(
    },
    function rejected(err){
        // `err` will be a `TypeError` exception object
        // from the `foo.bar()` line.
    }
);

The JS exception that occurs from foo.bar() becomes a Promise rejection that you can catch and respond to.

This is an important detail, because it effectively solves another potential Zalgo moment, which is that errors could create a synchronous reaction whereas nonerrors would be asynchronous. Promises turn even JS exceptions into asynchronous behavior, thereby reducing the race condition chances greatly.

But what happens if a Promise is fulfilled, but there’s a JS exception error during the observation (in a then(..) registered callback)? Even those aren’t lost, but you may find how they’re handled a bit surprising, until you dig in a little deeper:

var p = new Promise( function(resolve,reject){
    resolve( 42 );
} );

p.then(
    function fulfilled(msg){
        foo.bar();
        console.log( msg ); // never gets here :(
    },
    function rejected(err){
        // never gets here either :(
    }
);

Wait, that makes it seem like the exception from foo.bar() really did get swallowed. Never fear, it didn’t. But something deeper is wrong, which is that we’ve failed to listen for it. The p.then(..) call itself returns another promise, and it’s that promise that will be rejected with the TypeError exception.

Why couldn’t it just call the error handler we have defined there? Seems like a logical behavior on the surface. But it would violate the fundamental principle that Promises are immutable once resolved. p was already fulfilled to the value 42, so it can’t later be changed to a rejection just because there’s an error in observing p’s resolution.

Besides that principle violation, such behavior could wreak havoc if, say, there were multiple then(..)-registered callbacks on the promise p: some would get called and others wouldn’t, and it would be very opaque as to why.

13.3.7 Trustable Promise?

There’s one last detail to examine to establish trust based on the Promise pattern.

You’ve no doubt noticed that Promises don’t get rid of callbacks at all. They just change where the callback is passed to. Instead of passing a callback to foo(..), we get something (ostensibly a genuine Promise) back from foo(..), and we pass the callback to that something instead.

But why would this be any more trustable than just callbacks alone? How can we be sure the something we get back is in fact a trustable Promise? Isn’t it basically all just a house of cards where we can trust only because we already trusted?

One of the most important, but often overlooked, details of Promises is that they have a solution to this issue as well. Included with the native ES6 Promise implementation is Promise.resolve(..).

If you pass an immediate, non-Promise, non-thenable value to Promise.resolve(..), you get a promise that’s fulfilled with that value. In other words, these two promises p1 and p2 will behave basically identically:

var p1 = new Promise( function(resolve,reject){
    resolve( 42 );
} );

var p2 = Promise.resolve( 42 );

But if you pass a genuine Promise to Promise.resolve(..), you just get the same promise back:

var p1 = Promise.resolve( 42 );

var p2 = Promise.resolve( p1 );

p1 === p2; // true

Even more importantly, if you pass a non-Promise thenable value to Promise.resolve(..), it will attempt to unwrap that value, and the unwrapping will keep going until a concrete final non-Promise-like value is extracted.
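For instance, a thenable whose “value” is itself another thenable still unwraps all the way down to the concrete value:

```javascript
// a thenable whose "value" is itself another thenable
var innerTh = {
    then: function(cb){ cb( 42 ); }
};
var outerTh = {
    then: function(cb){ cb( innerTh ); }
};

Promise.resolve( outerTh )
.then( function(v){
    console.log( v ); // 42 -- both layers unwrapped
} );
```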

Recall our previous discussion of thenables?

Consider:

var p = {
    then: function(cb) {
        cb( 42 );
    }
};

// this works OK, but only by good fortune
p
.then(
    function fulfilled(val){
        console.log( val ); // 42
    },
    function rejected(err){
        // never gets here
    }
);

This p is a thenable, but it’s not a genuine Promise. Luckily, it’s well behaved, as most thenables you encounter will be. But what if you got back instead something that looked like:

var p = {
    then: function(cb,errcb) {
        cb( 42 );
        errcb( "evil laugh" );
    }
};

p
.then(
    function fulfilled(val){
        console.log( val ); // 42
    },
    function rejected(err){
        // oops, shouldn't have run
        console.log( err ); // evil laugh
    }
);

This p is a thenable, but it’s not a very well-behaved promise. Is it malicious? Or is it just ignorant of how Promises should work? It doesn’t really matter, to be honest. In either case, it’s not trustable as is.

Nonetheless, we can pass either of these versions of p to Promise.resolve(..), and we’ll get the normalized, safe result we’d expect:

Promise.resolve( p )
.then(
    function fulfilled(val){
        console.log( val ); // 42
    },
    function rejected(err){
        // never gets here
    }
);

Promise.resolve(..) will accept any thenable, and will unwrap it to its non-thenable value. But you get back from Promise.resolve(..) a real, genuine Promise in its place, one that you can trust. If what you passed in is already a genuine Promise, you just get it right back, so there’s no downside at all to filtering through Promise.resolve(..) to gain trust.

So let’s say we’re calling a foo(..) utility and we’re not sure we can trust its return value to be a well-behaving Promise, but we know it’s at least a thenable. Promise.resolve(..) will give us a trustable Promise wrapper to chain off of:

// don't just do this:
foo( 42 )
.then( function(v){
    console.log( v );
} );

// instead, do this:
Promise.resolve( foo( 42 ) )
.then( function(v){
    console.log( v );
} );

Note: Another beneficial side effect of wrapping Promise.resolve(..) around any function’s return value (thenable or not) is that it’s an easy way to normalize that function call into a well-behaving async task. If foo(42) returns an immediate value sometimes, or a Promise other times, Promise.resolve( foo(42) ) makes sure it’s always a Promise result. And avoiding Zalgo makes for much better code.

13.3.8 Trust Built

Hopefully the previous discussion now fully “resolves” (pun intended) in your mind why the Promise is trustable, and more importantly, why that trust is so critical in building robust, maintainable software.

Can you write async code in JS without trust? Of course you can. We JS developers have been coding async with nothing but callbacks for nearly two decades.

But once you start questioning just how much you can trust the mechanisms you build upon to actually be predictable and reliable, you start to realize callbacks have a pretty shaky trust foundation.

Promises are a pattern that augments callbacks with trustable semantics, so that the behavior is more reason-able and more reliable. By uninverting the inversion of control of callbacks, we place the control with a trustable system (Promises) that was designed specifically to bring sanity to our async.

13.4 Chain Flow

We’ve hinted at this a couple of times already, but Promises are not just a mechanism for a single-step this-then-that sort of operation. That’s the building block, of course, but it turns out we can string multiple Promises together to represent a sequence of async steps.

The key to making this work is built on two behaviors intrinsic to Promises:

  • Every time you call then(..) on a Promise, it creates and returns a new Promise, which we can chain with.
  • Whatever value you return from the then(..) call’s fulfillment callback (the first parameter) is automatically set as the fulfillment of the chained Promise (from the first point).

Let’s first illustrate what that means, and then we’ll derive how that helps us create async sequences of flow control. Consider the following:

var p = Promise.resolve( 21 );

var p2 = p.then( function(v){
    console.log( v );   // 21

    // fulfill `p2` with value `42`
    return v * 2;
} );

// chain off `p2`
p2.then( function(v){
    console.log( v );   // 42
} );

By returning v * 2 (i.e., 42), we fulfill the p2 promise that the first then(..) call created and returned. When p2’s then(..) call runs, it’s receiving the fulfillment from the return v * 2 statement. Of course, p2.then(..) creates yet another promise, which we could have stored in a p3 variable.

But it’s a little annoying to have to create an intermediate variable p2 (or p3, etc.). Thankfully, we can easily just chain these together:

var p = Promise.resolve( 21 );

p
.then( function(v){
    console.log( v );   // 21

    // fulfill the chained promise with value `42`
    return v * 2;
} )
// here's the chained promise
.then( function(v){
    console.log( v );   // 42
} );

So now the first then(..) is the first step in an async sequence, and the second then(..) is the second step. This could keep going for as long as you needed it to extend. Just keep chaining off a previous then(..) with each automatically created Promise.

But there’s something missing here. What if we want step 2 to wait for step 1 to do something asynchronous? We’re using an immediate return statement, which immediately fulfills the chained promise.

The key to making a Promise sequence truly async capable at every step is to recall how Promise.resolve(..) operates when what you pass to it is a Promise or thenable instead of a final value. Promise.resolve(..) directly returns a received genuine Promise, or it unwraps the value of a received thenable – and keeps going recursively while it keeps unwrapping thenables.

The same sort of unwrapping happens if you return a thenable or Promise from the fulfillment (or rejection) handler. Consider:

var p = Promise.resolve( 21 );

p.then( function(v){
    console.log( v );   // 21

    // create a promise and return it
    return new Promise( function(resolve,reject){
        // fulfill with value `42`
        resolve( v * 2 );
    } );
} )
.then( function(v){
    console.log( v );   // 42
} );

Even though we wrapped 42 up in a promise that we returned, it still got unwrapped and ended up as the resolution of the chained promise, such that the second then(..) still received 42. If we introduce asynchrony to that wrapping promise, everything still nicely works the same:

var p = Promise.resolve( 21 );

p.then( function(v){
    console.log( v );   // 21

    // create a promise to return
    return new Promise( function(resolve,reject){
        // introduce asynchrony!
        setTimeout( function(){
            // fulfill with value `42`
            resolve( v * 2 );
        }, 100 );
    } );
} )
.then( function(v){
    // runs after the 100ms delay in the previous step
    console.log( v );   // 42
} );

That’s incredibly powerful! Now we can construct a sequence of however many async steps we want, and each step can delay the next step (or not!), as necessary.

Of course, the value passing from step to step in these examples is optional. If you don’t return an explicit value, an implicit undefined is assumed, and the promises still chain together the same way. Each Promise resolution is thus just a signal to proceed to the next step.
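To see that in action:

```javascript
var p = Promise.resolve( 21 );

p.then( function(v){
    console.log( v );   // 21
    // no explicit `return`, so the chained
    // promise fulfills with `undefined`
} )
.then( function(v){
    console.log( v );   // undefined
} );
```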

To further the chain illustration, let’s generalize a delay-Promise creation (without resolution messages) into a utility we can reuse for multiple steps:

function delay(time) {
    return new Promise( function(resolve,reject){
        setTimeout( resolve, time );
    } );
}

delay( 100 ) // step 1
.then( function STEP2(){
    console.log( "step 2 (after 100ms)" );
    return delay( 200 );
} )
.then( function STEP3(){
    console.log( "step 3 (after another 200ms)" );
} )
.then( function STEP4(){
    console.log( "step 4 (next Job)" );
    return delay( 50 );
} )
.then( function STEP5(){
    console.log( "step 5 (after another 50ms)" );
} )
...

Calling delay(200) creates a promise that will fulfill in 200ms, and then we return that from the first then(..) fulfillment callback, which causes the second then(..)’s promise to wait on that 200ms promise.

Note: As described, technically there are two promises in that interchange: the 200ms-delay promise and the chained promise that the second then(..) chains from. But you may find it easier to mentally combine these two promises together, because the Promise mechanism automatically merges their states for you. In that respect, you could think of return delay(200) as creating a promise that replaces the earlier-returned chained promise.

To be honest, though, sequences of delays with no message passing isn’t a terribly useful example of Promise flow control. Let’s look at a scenario that’s a little more practical.

Instead of timers, let’s consider making Ajax requests:

// assume an `ajax( {url}, {callback} )` utility

// Promise-aware ajax
function request(url) {
    return new Promise( function(resolve,reject){
        // the `ajax(..)` callback should be our
        // promise's `resolve(..)` function
        ajax( url, resolve );
    } );
}

We first define a request(..) utility that constructs a promise to represent the completion of the ajax(..) call:

request( "http://some.url.1/" )
.then( function(response1){
    return request( "http://some.url.2/?v=" + response1 );
} )
.then( function(response2){
    console.log( response2 );
} );

Note: Developers commonly encounter situations in which they want to do Promise-aware async flow control with utilities that are not themselves Promise-enabled (like ajax(..) here, which expects a callback). Although the native ES6 Promise mechanism doesn’t automatically solve this pattern for us, practically all Promise libraries do. They usually call this process “lifting” or “promisifying” or some variation thereof. We’ll come back to this technique later.
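As a rough sketch of what such “promisifying” helpers do, here’s a hypothetical promisify(..) (the name and helper are mine, not part of ES6), assuming the utility being adapted follows the error-first callback convention discussed later in this chapter:

```javascript
// hypothetical helper (not part of ES6): adapts a
// `fn( ..args, callback )` error-first-callback utility
// into one that returns a promise instead
function promisify(fn) {
    return function() {
        var args = [].slice.call( arguments );
        return new Promise( function(resolve,reject){
            fn.apply( null, args.concat( function(err,val){
                if (err) reject( err );
                else resolve( val );
            } ) );
        } );
    };
}

// illustration: a callback-style async utility...
function addLater(x,y,cb) {
    setTimeout( function(){ cb( null, x + y ); }, 50 );
}

// ...lifted into Promise-returning form
promisify( addLater )( 2, 3 )
.then( function(sum){
    console.log( sum ); // 5
} );
```

Real libraries add more robustness (multi-argument results, `this` binding, etc.), but the core move is the same: wrap the callback seam inside a Promise constructor.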

Using the Promise-returning request(..), we create the first step in our chain implicitly by calling it with the first URL, and chain off that returned promise with the first then(..).

Once response1 comes back, we use that value to construct a second URL, and make a second request(..) call. That second request(..) promise is returned so that the third step in our async flow control waits for that Ajax call to complete. Finally, we print response2 once it returns.

The Promise chain we construct is not only a flow control that expresses a multistep async sequence, but it also acts as a message channel to propagate messages from step to step.

What if something went wrong in one of the steps of the Promise chain? An error/exception is on a per-Promise basis, which means it’s possible to catch such an error at any point in the chain, and that catching acts to sort of “reset” the chain back to normal operation at that point:

// step 1:
request( "http://some.url.1/" )

// step 2:
.then( function(response1){
    foo.bar(); // `foo` is not defined, so error!

    // never gets here
    return request( "http://some.url.2/?v=" + response1 );
} )

// step 3:
.then(
    function fulfilled(response2){
        // never gets here
    },
    // rejection handler to catch the error
    function rejected(err){
        console.log( err ); // `ReferenceError` from `foo.bar()` error
        return 42;
    }
)

// step 4:
.then( function(msg){
    console.log( msg );     // 42
} );

When the error occurs in step 2, the rejection handler in step 3 catches it. The return value (42 in this snippet), if any, from that rejection handler fulfills the promise for the next step (4), such that the chain is now back in a fulfillment state.

Note: As we discussed earlier, when returning a promise from a fulfillment handler, it’s unwrapped and can delay the next step. That’s also true for returning promises from rejection handlers, such that if the return 42 in step 3 instead returned a promise, that promise could delay step 4. A thrown exception inside either the fulfillment or rejection handler of a then(..) call causes the next (chained) promise to be immediately rejected with that exception.
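For example, a rejection handler can defer the next step just like a fulfillment handler can:

```javascript
var p = Promise.reject( "Oops" );

p.then(
    function fulfilled(){
        // never gets here
    },
    function rejected(err){
        // returning a promise from a rejection handler
        // delays the next step, just like returning one
        // from a fulfillment handler
        return new Promise( function(resolve,reject){
            setTimeout( function(){
                resolve( 42 );
            }, 100 );
        } );
    }
)
.then( function(v){
    // runs after the 100ms delay
    console.log( v );   // 42
} );
```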

If you call then(..) on a promise, and you only pass a fulfillment handler to it, an assumed rejection handler is substituted:

var p = new Promise( function(resolve,reject){
    reject( "Oops" );
} );

var p2 = p.then(
    function fulfilled(){
        // never gets here
    }
    // assumed rejection handler, if omitted or
    // any other non-function value passed
    // function(err) {
    //     throw err;
    // }
);

As you can see, the assumed rejection handler simply rethrows the error, which ends up forcing p2 (the chained promise) to reject with the same error reason. In essence, this allows the error to continue propagating along a Promise chain until an explicitly defined rejection handler is encountered.

Note: We’ll cover more details of error handling with Promises a little later, because there are other nuanced details to be concerned about.

If a proper valid function is not passed as the fulfillment handler parameter to then(..), there’s also a default handler substituted:

var p = Promise.resolve( 42 );

p.then(
    // assumed fulfillment handler, if omitted or
    // any other non-function value passed
    // function(v) {
    //     return v;
    // }
    null,
    function rejected(err){
        // never gets here
    }
);

As you can see, the default fulfillment handler simply passes whatever value it receives along to the next step (Promise).

Note: The then(null,function(err){ .. }) pattern – only handling rejections (if any) but letting fulfillments pass through – has a shortcut in the API: catch(function(err){ .. }). We’ll cover catch(..) more fully in the next section.
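In other words, these two registrations behave identically:

```javascript
var p = Promise.reject( "Oops" );

p.then( null, function(err){
    console.log( err ); // "Oops"
} );

// shortcut for the same thing:
p.catch( function(err){
    console.log( err ); // "Oops"
} );
```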

Let’s review briefly the intrinsic behaviors of Promises that enable chaining flow control:

  • A then(..) call against one Promise automatically produces a new Promise to return from the call.
  • Inside the fulfillment/rejection handlers, if you return a value or an exception is thrown, the new returned (chainable) Promise is resolved accordingly.
  • If the fulfillment or rejection handler returns a Promise, it is unwrapped, so that whatever its resolution is will become the resolution of the chained Promise returned from the current then(..).

While chaining flow control is helpful, it’s probably most accurate to think of it as a side benefit of how Promises compose (combine) together, rather than the main intent. As we’ve discussed in detail several times already, Promises normalize asynchrony and encapsulate time-dependent value state, and that is what lets us chain them together in this useful way.

Certainly, the sequential expressiveness of the chain (this-then-this-then-this…) is a big improvement over the tangled mess of callbacks as we identified in Chapter 2. But there’s still a fair amount of boilerplate (then(..) and function(){ .. }) to wade through. In the next chapter, we’ll see a significantly nicer pattern for sequential flow control expressivity, with generators.

13.4.1 Terminology: Resolve, Fulfill, and Reject

There’s some slight confusion around the terms “resolve,” “fulfill,” and “reject” that we need to clear up, before you get too much deeper into learning about Promises. Let’s first consider the Promise(..) constructor:

var p = new Promise( function(X,Y){
    // X() for fulfillment
    // Y() for rejection
} );

As you can see, two callbacks (here labeled X and Y) are provided. The first is usually used to mark the Promise as fulfilled, and the second always marks the Promise as rejected. But what’s the “usually” about, and what does that imply about accurately naming those parameters?

Ultimately, it’s just your user code and the identifier names aren’t interpreted by the engine to mean anything, so it doesn’t technically matter; foo(..) and bar(..) are equally functional. But the words you use can affect not only how you are thinking about the code, but how other developers on your team will think about it. Thinking wrongly about carefully orchestrated async code is almost surely going to be worse than the spaghetti-callback alternatives.

So it actually does kind of matter what you call them.

The second parameter is easy to decide. Almost all literature uses reject(..) as its name, and because that’s exactly (and only!) what it does, that’s a very good choice for the name. I’d strongly recommend you always use reject(..).

But there’s a little more ambiguity around the first parameter, which in Promise literature is often labeled resolve(..). That word is obviously related to “resolution,” which is what’s used across the literature (including this book) to describe setting a final value/state to a Promise. We’ve already used “resolve the Promise” several times to mean either fulfilling or rejecting the Promise.

But if this parameter seems to be used to specifically fulfill the Promise, why shouldn’t we call it fulfill(..) instead of resolve(..) to be more accurate? To answer that question, let’s also take a look at two of the Promise API methods:

var fulfilledPr = Promise.resolve( 42 );

var rejectedPr = Promise.reject( "Oops" );

Promise.resolve(..) creates a Promise that’s resolved to the value given to it. In this example, 42 is a normal, non-Promise, non-thenable value, so the fulfilled promise fulfilledPr is created for the value 42. Promise.reject("Oops") creates the rejected promise rejectedPr for the reason "Oops".

Let’s now illustrate why the word “resolve” (such as in Promise.resolve(..)) is unambiguous and indeed more accurate, if used explicitly in a context that could result in either fulfillment or rejection:

var rejectedTh = {
    then: function(resolved,rejected) {
        rejected( "Oops" );
    }
};

var rejectedPr = Promise.resolve( rejectedTh );

As we discussed earlier in this chapter, Promise.resolve(..) will return a received genuine Promise directly, or unwrap a received thenable. If that thenable unwrapping reveals a rejected state, the Promise returned from Promise.resolve(..) is in fact in that same rejected state.

So Promise.resolve(..) is a good, accurate name for the API method, because it can actually result in either fulfillment or rejection.

The first callback parameter of the Promise(..) constructor will unwrap either a thenable (identically to Promise.resolve(..)) or a genuine Promise:

var rejectedPr = new Promise( function(resolve,reject){
    // resolve this promise with a rejected promise
    resolve( Promise.reject( "Oops" ) );
} );

rejectedPr.then(
    function fulfilled(){
        // never gets here
    },
    function rejected(err){
        console.log( err ); // "Oops"
    }
);

It should be clear now that resolve(..) is the appropriate name for the first callback parameter of the Promise(..) constructor.

Warning: The previously mentioned reject(..) does not do the unwrapping that resolve(..) does. If you pass a Promise/thenable value to reject(..), that untouched value will be set as the rejection reason. A subsequent rejection handler would receive the actual Promise/thenable you passed to reject(..), not its underlying immediate value.

But now let’s turn our attention to the callbacks provided to then(..). What should they be called (both in literature and in code)? I would suggest fulfilled(..) and rejected(..):

function fulfilled(msg) {
    console.log( msg );
}

function rejected(err) {
    console.error( err );
}

p.then(
    fulfilled,
    rejected
);

In the case of the first parameter to then(..), it’s unambiguously always the fulfillment case, so there’s no need for the duality of “resolve” terminology. As a side note, the ES6 specification uses onFulfilled(..) and onRejected(..) to label these two callbacks, so they are accurate terms.

13.5 Error Handling

We’ve already seen several examples of how Promise rejection – either intentional through calling reject(..) or accidental through JS exceptions – allows saner error handling in asynchronous programming. Let’s circle back though and be explicit about some of the details that we glossed over.

The most natural form of error handling for most developers is the synchronous try..catch construct. Unfortunately, it’s synchronous-only, so it fails to help in async code patterns:

function foo() {
    setTimeout( function(){
        baz.bar();
    }, 100 );
}

try {
    foo();
    // later throws global error from `baz.bar()`
}
catch (err) {
    // never gets here
}

try..catch would certainly be nice to have, but it doesn’t work across async operations. That is, unless there’s some additional environmental support, which we’ll come back to with generators in Chapter 4.

In callbacks, some standards have emerged for patterned error handling, most notably the “error-first callback” style:

function foo(cb) {
    setTimeout( function(){
        try {
            var x = baz.bar();
            cb( null, x ); // success!
        }
        catch (err) {
            cb( err );
        }
    }, 100 );
}

foo( function(err,val){
    if (err) {
        console.error( err ); // bummer :(
    }
    else {
        console.log( val );
    }
} );

Note: The try..catch here works only from the perspective that the baz.bar() call will either succeed or fail immediately, synchronously. If baz.bar() was itself its own async completing function, any async errors inside it would not be catchable.

The callback we pass to foo(..) expects to receive a signal of an error by the reserved first parameter err. If present, error is assumed. If not, success is assumed.

This sort of error handling is technically async capable, but it doesn’t compose well at all. Multiple levels of error-first callbacks woven together with these ubiquitous if statement checks inevitably will lead you to the perils of callback hell (see Chapter 2).

So we come back to error handling in Promises, with the rejection handler passed to then(..). Promises don’t use the popular “error-first callback” design style, but instead use “split callbacks” style; there’s one callback for fulfillment and one for rejection:

var p = Promise.reject( "Oops" );

p.then(
    function fulfilled(){
        // never gets here
    },
    function rejected(err){
        console.log( err ); // "Oops"
    }
);

While this pattern of error handling makes fine sense on the surface, the nuances of Promise error handling are often a fair bit more difficult to fully grasp.

Consider:

var p = Promise.resolve( 42 );

p.then(
    function fulfilled(msg){
        // numbers don't have string functions,
        // so will throw an error
        console.log( msg.toLowerCase() );
    },
    function rejected(err){
        // never gets here
    }
);

If the msg.toLowerCase() legitimately throws an error (it does!), why doesn’t our error handler get notified? As we explained earlier, it’s because that error handler is for the p promise, which has already been fulfilled with value 42. The p promise is immutable, so the only promise that can be notified of the error is the one returned from p.then(..), which in this case we don’t capture.

That should paint a clear picture of why error handling with Promises is error-prone (pun intended). It’s far too easy to have errors swallowed, as this is very rarely what you’d intend.

Warning: If you use the Promise API in an invalid way and an error occurs that prevents proper Promise construction, the result will be an immediately thrown exception, not a rejected Promise. For example, new Promise(null) throws a TypeError synchronously, because the constructor never gets far enough to produce a Promise that could be rejected. (By contrast, per the ES6 spec, invalid arguments to Promise.all(..) or Promise.race(..) produce rejected promises, because by that point a Promise has already been constructed.) You can’t get a rejected Promise if you don’t use the Promise API validly enough to actually construct a Promise in the first place!
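A minimal illustration of that thrown-not-rejected failure mode:

```javascript
try {
    // no valid executor function provided, so this
    // throws synchronously -- no Promise is ever made
    var p = new Promise( null );
}
catch (err) {
    console.log( err.name ); // "TypeError"
}
```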

13.5.1 Pit of Despair

Jeff Atwood noted years ago: programming languages are often set up in such a way that by default, developers fall into the “pit of despair” (http://blog.codinghorror.com/falling-into-the-pit-of-success/) – where accidents are punished – and that you have to try harder to get it right. He implored us to instead create a “pit of success,” where by default you fall into expected (successful) action, and thus would have to try hard to fail.

Promise error handling is unquestionably “pit of despair” design. By default, it assumes that you want any error to be swallowed by the Promise state, and if you forget to observe that state, the error silently languishes/dies in obscurity – usually despair.

To avoid losing an error to the silence of a forgotten/discarded Promise, some developers have claimed that a “best practice” for Promise chains is to always end your chain with a final catch(..), like:

var p = Promise.resolve( 42 );

p.then(
    function fulfilled(msg){
        // numbers don't have string functions,
        // so will throw an error
        console.log( msg.toLowerCase() );
    }
)
.catch( handleErrors );

Because we didn’t pass a rejection handler to the then(..), the default handler was substituted, which simply propagates the error to the next promise in the chain. As such, both errors that come into p, and errors that come after p in its resolution (like the msg.toLowerCase() one) will filter down to the final handleErrors(..).

Problem solved, right? Not so fast!

What happens if handleErrors(..) itself also has an error in it? Who catches that? There’s still yet another unattended promise: the one catch(..) returns, which we don’t capture and don’t register a rejection handler for.

You can’t just stick another catch(..) on the end of that chain, because it too could fail. The last step in any Promise chain, whatever it is, always has the possibility, however small, of dangling with an uncaught error stuck inside an unobserved Promise.

Sound like an impossible conundrum yet?

13.5.2 Uncaught Handling

It’s not exactly an easy problem to solve completely. There are other ways to approach it which many would say are better.

Some Promise libraries have added methods for registering something like a “global unhandled rejection” handler, which would be called instead of a globally thrown error. But their solution for how to identify an error as “uncaught” is to have an arbitrary-length timer, say 3 seconds, running from time of rejection. If a Promise is rejected but no error handler is registered before the timer fires, then it’s assumed that you won’t ever be registering a handler, so it’s “uncaught.”

In practice, this has worked well for many libraries, as most usage patterns don’t typically call for significant delay between Promise rejection and observation of that rejection. But this pattern is troublesome because 3 seconds is so arbitrary (even if empirical), and also because there are indeed some cases where you want a Promise to hold on to its rejectedness for some indefinite period of time, and you don’t really want to have your “uncaught” handler called for all those false positives (not-yet-handled “uncaught errors”).

Another more common suggestion is that Promises should have a done(..) added to them, which essentially marks the Promise chain as “done.” done(..) doesn’t create and return a Promise, so the callbacks passed to done(..) are obviously not wired up to report problems to a chained Promise that doesn’t exist.

So what happens instead? It’s treated as you might usually expect in uncaught error conditions: any exception inside a done(..) rejection handler would be thrown as a global uncaught error (in the developer console, basically):

var p = Promise.resolve( 42 );

p.then(
    function fulfilled(msg){
        // numbers don't have string functions,
        // so will throw an error
        console.log( msg.toLowerCase() );
    }
)
.done( null, handleErrors );

// if `handleErrors(..)` caused its own exception, it would
// be thrown globally here

This might sound more attractive than the never-ending chain or the arbitrary timeouts. But the biggest problem is that it’s not part of the ES6 standard, so no matter how good it sounds, at best it’s a long way off from being a reliable and ubiquitous solution.

Are we just stuck, then? Not entirely.

Browsers have a unique capability that our code does not have: they can track and know for sure when any object gets thrown away and garbage collected. So, browsers can track Promise objects, and whenever they get garbage collected, if they have a rejection in them, the browser knows for sure this was a legitimate “uncaught error,” and can thus confidently know it should report it to the developer console.

Note: At the time of this writing, both Chrome and Firefox have early attempts at that sort of “uncaught rejection” capability, though support is incomplete at best.

However, if a Promise doesn’t get garbage collected – it’s exceedingly easy for that to accidentally happen through lots of different coding patterns – the browser’s garbage collection sniffing won’t help you know and diagnose that you have a silently rejected Promise laying around.

Is there any other alternative? Yes.

13.5.3 Pit of Success

The following is just theoretical, how Promises could be someday changed to behave. I believe it would be far superior to what we currently have. And I think this change would be possible even post-ES6 because I don’t think it would break web compatibility with ES6 Promises. Moreover, it can be polyfilled/prollyfilled in, if you’re careful. Let’s take a look:

  • Promises could default to reporting (to the developer console) any rejection, on the next Job or event loop tick, if at that exact moment no error handler has been registered for the Promise.
  • For the cases where you want a rejected Promise to hold onto its rejected state for an indefinite amount of time before observing, you could call defer(), which suppresses automatic error reporting on that Promise.

If a Promise is rejected, it defaults to noisily reporting that fact to the developer console (instead of defaulting to silence). You can opt out of that reporting either implicitly (by registering an error handler before rejection), or explicitly (with defer()). In either case, you control the false positives.

Consider:

var p = Promise.reject( "Oops" ).defer();

// `foo(..)` is Promise-aware
foo( 42 )
.then(
    function fulfilled(){
        return p;
    },
    function rejected(err){
        // handle `foo(..)` error
    }
);
...

When we create p, we know we’re going to wait a while to use/observe its rejection, so we call defer() – thus no global reporting. defer() simply returns the same promise, for chaining purposes.

The promise returned from foo(..) gets an error handler attached right away, so it’s implicitly opted out and no global reporting for it occurs either.

But the promise returned from the then(..) call has no defer() or error handler attached, so if it rejects (from inside either resolution handler), then it will be reported to the developer console as an uncaught error.

This design is a pit of success. By default, all errors are either handled or reported – what almost all developers in almost all cases would expect. You either have to register a handler or you have to intentionally opt out, and indicate you intend to defer error handling until later; you’re opting for the extra responsibility in just that specific case.

The only real danger in this approach is if you defer() a Promise but then fail to actually ever observe/handle its rejection.

But you had to intentionally call defer() to opt into that pit of despair – the default was the pit of success – so there’s not much else we could do to save you from your own mistakes.

I think there’s still hope for Promise error handling (post-ES6). I hope the powers that be will rethink the situation and consider this alternative. In the meantime, you can implement this yourself (a challenging exercise for the reader!), or use a smarter Promise library that does so for you!

Note: This exact model for error handling/reporting is implemented in my asynquence Promise abstraction library, which will be discussed in Appendix A of this book.

13.6 Promise Patterns

We’ve already implicitly seen the sequence pattern with Promise chains (this-then-this-then-that flow control) but there are lots of variations on asynchronous patterns that we can build as abstractions on top of Promises. These patterns serve to simplify the expression of async flow control – which helps make our code more reason-able and more maintainable – even in the most complex parts of our programs.

Two such patterns are codified directly into the native ES6 Promise implementation, so we get them for free, to use as building blocks for other patterns.

13.6.1 Promise.all([ .. ])

In an async sequence (Promise chain), only one async task is being coordinated at any given moment – step 2 strictly follows step 1, and step 3 strictly follows step 2. But what about doing two or more steps concurrently (aka “in parallel”)?

In classic programming terminology, a “gate” is a mechanism that waits on two or more parallel/concurrent tasks to complete before continuing. It doesn’t matter what order they finish in, just that all of them have to complete for the gate to open and let the flow control through.

In the Promise API, we call this pattern all([ .. ]).

Say you wanted to make two Ajax requests at the same time, and wait for both to finish, regardless of their order, before making a third Ajax request. Consider:

// `request(..)` is a Promise-aware Ajax utility,
// like we defined earlier in the chapter

var p1 = request( "http://some.url.1/" );
var p2 = request( "http://some.url.2/" );

Promise.all( [p1,p2] )
.then( function(msgs){
    // both `p1` and `p2` fulfill and pass in
    // their messages here
    return request(
        "http://some.url.3/?v=" + msgs.join(",")
    );
} )
.then( function(msg){
    console.log( msg );
} );

Promise.all([ .. ]) expects a single argument, an array, consisting generally of Promise instances. The promise returned from the Promise.all([ .. ]) call will receive a fulfillment message (msgs in this snippet) that is an array of all the fulfillment messages from the passed in promises, in the same order as specified (regardless of fulfillment order).

Note: Technically, the array of values passed into Promise.all([ .. ]) can include Promises, thenables, or even immediate values. Each value in the list is essentially passed through Promise.resolve(..) to make sure it’s a genuine Promise to be waited on, so an immediate value will just be normalized into a Promise for that value. If the array is empty, the main Promise is immediately fulfilled.
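To illustrate that normalization, here's a hypothetical mix of a genuine Promise, a bare thenable, and an immediate value, all passed together:

```javascript
var p1 = Promise.resolve( 42 );
var th = { then: function(cb){ cb( "Hello" ); } };  // a bare thenable
var v  = 10;                                        // an immediate value

// each entry is normalized via `Promise.resolve(..)`
// before being waited on
Promise.all( [p1, th, v] )
.then( function(msgs){
    console.log( msgs );    // [42,"Hello",10]
} );
```

The ordering of the fulfillment array matches the order of the passed-in array, regardless of which entry settles first.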

The main promise returned from Promise.all([ .. ]) will only be fulfilled if and when all its constituent promises are fulfilled. If any one of those promises instead is rejected, the main Promise.all([ .. ]) promise is immediately rejected, discarding all results from any other promises.

Remember to always attach a rejection/error handler to every promise, even and especially the one that comes back from Promise.all([ .. ]).

13.6.2 Promise.race([ .. ])

While Promise.all([ .. ]) coordinates multiple Promises concurrently and assumes all are needed for fulfillment, sometimes you only want to respond to the “first Promise to cross the finish line,” letting the other Promises fall away.

This pattern is classically called a “latch,” but in Promises it’s called a “race.”

Warning: While the metaphor of “only the first across the finish line wins” fits the behavior well, unfortunately “race” is kind of a loaded term, because “race conditions” are generally taken as bugs in programs (see Chapter 1). Don’t confuse Promise.race([ .. ]) with “race condition.”

Promise.race([ .. ]) also expects a single array argument, containing one or more Promises, thenables, or immediate values. It doesn’t make much practical sense to have a race with immediate values, because the first one listed will obviously win – like a foot race where one runner starts at the finish line!

Similar to Promise.all([ .. ]), Promise.race([ .. ]) will fulfill if and when any Promise resolution is a fulfillment, and it will reject if and when any Promise resolution is a rejection.

Warning: A “race” requires at least one “runner,” so if you pass an empty array, instead of immediately resolving, the main race([..]) Promise will never resolve. This is a footgun! ES6 should have specified that it either fulfills, rejects, or just throws some sort of synchronous error. Unfortunately, because of precedent in Promise libraries predating ES6 Promise, they had to leave this gotcha in there, so be careful never to send in an empty array.

Let’s revisit our previous concurrent Ajax example, but in the context of a race between p1 and p2:

// `request(..)` is a Promise-aware Ajax utility,
// like we defined earlier in the chapter

var p1 = request( "http://some.url.1/" );
var p2 = request( "http://some.url.2/" );

Promise.race( [p1,p2] )
.then( function(msg){
    // either `p1` or `p2` will win the race
    return request(
        "http://some.url.3/?v=" + msg
    );
} )
.then( function(msg){
    console.log( msg );
} );

Because only one promise wins, the fulfillment value is a single message, not an array as it was for Promise.all([ .. ]).

13.6.2.1 Timeout Race

We saw this example earlier, illustrating how Promise.race([ .. ]) can be used to express the “promise timeout” pattern:

// `foo()` is a Promise-aware function

// `timeoutPromise(..)`, defined earlier, returns
// a Promise that rejects after a specified delay

// setup a timeout for `foo()`
Promise.race( [
    foo(),                  // attempt `foo()`
    timeoutPromise( 3000 )  // give it 3 seconds
] )
.then(
    function(){
        // `foo(..)` fulfilled in time!
    },
    function(err){
        // either `foo()` rejected, or it just
        // didn't finish in time, so inspect
        // `err` to know which
    }
);

This timeout pattern works well in most cases. But there are some nuances to consider, and frankly they apply to both Promise.race([ .. ]) and Promise.all([ .. ]) equally.

13.6.2.2 “Finally”

The key question to ask is, “What happens to the promises that get discarded/ignored?” We’re not asking that question from the performance perspective – they would typically end up garbage collection eligible – but from the behavioral perspective (side effects, etc.). Promises cannot be canceled – and shouldn’t be as that would destroy the external immutability trust discussed in the “Promise Uncancelable” section later in this chapter – so they can only be silently ignored.

But what if foo() in the previous example is reserving some sort of resource for usage, but the timeout fires first and causes that promise to be ignored? Is there anything in this pattern that proactively frees the reserved resource after the timeout, or otherwise cancels any side effects it may have had? What if all you wanted was to log the fact that foo() timed out?

Some developers have proposed that Promises need a finally(..) callback registration, which is always called when a Promise resolves, and allows you to specify any cleanup that may be necessary. This doesn’t exist in the specification at the moment, but it may come in ES7+. We’ll have to wait and see.

It might look like:

var p = Promise.resolve( 42 );

p.then( something )
.finally( cleanup )
.then( another )
.finally( cleanup );

Note: In various Promise libraries, finally(..) still creates and returns a new Promise (to keep the chain going). If the cleanup(..) function were to return a Promise, it would be linked into the chain, which means you could still have the unhandled rejection issues we discussed earlier.
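As a rough illustration of the pass-through idea only (the eventual standardized behavior may differ), such a finally(..) could be sketched on top of then(..):

```javascript
// Hypothetical sketch, not the proposed spec behavior:
// a pass-through `finally(..)` built on `then(..)`
if (!Promise.prototype.finally) {
    Promise.prototype.finally = function(cb) {
        return this.then(
            function(val){
                // run cleanup, then pass the fulfillment through
                return Promise.resolve( cb() )
                .then( function(){ return val; } );
            },
            function(err){
                // run cleanup, then re-propagate the rejection
                return Promise.resolve( cb() )
                .then( function(){ throw err; } );
            }
        );
    };
}
```

Notice that the cleanup callback gets no say over the outcome: the original fulfillment value or rejection reason simply continues down the chain.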

In the meantime, we could make a static helper utility that lets us observe (without interfering) the resolution of a Promise:

// polyfill-safe guard check
if (!Promise.observe) {
    Promise.observe = function(pr,cb) {
        // side-observe `pr`'s resolution
        pr.then(
            function fulfilled(msg){
                // schedule callback async (as Job)
                Promise.resolve( msg ).then( cb );
            },
            function rejected(err){
                // schedule callback async (as Job)
                Promise.resolve( err ).then( cb );
            }
        );

        // return original promise
        return pr;
    };
}

Here’s how we’d use it in the timeout example from before:

Promise.race( [
    Promise.observe(
        foo(),                  // attempt `foo()`
        function cleanup(msg){
            // clean up after `foo()`, even if it
            // didn't finish before the timeout
        }
    ),
    timeoutPromise( 3000 )  // give it 3 seconds
] )

This Promise.observe(..) helper is just an illustration of how you could observe the completions of Promises without interfering with them. Other Promise libraries have their own solutions. Regardless of how you do it, you’ll likely have places where you want to make sure your Promises aren’t just silently ignored by accident.

13.6.3 Variations on all([ .. ]) and race([ .. ])

While native ES6 Promises come with built-in Promise.all([ .. ]) and Promise.race([ .. ]), there are several other commonly used patterns with variations on those semantics:

  • none([ .. ]) is like all([ .. ]), but fulfillments and rejections are transposed. All Promises need to be rejected – rejections become the fulfillment values and vice versa.
  • any([ .. ]) is like all([ .. ]), but it ignores any rejections, so only one needs to fulfill instead of all of them.
  • first([ .. ]) is like a race combined with any([ .. ]): it ignores any rejections and fulfills as soon as the first Promise fulfills.
  • last([ .. ]) is like first([ .. ]), but only the latest fulfillment wins.

Some Promise abstraction libraries provide these, but you could also define them yourself using the mechanics of Promises, race([ .. ]) and all([ .. ]).

For example, here’s how we could define first([ .. ]):

// polyfill-safe guard check
if (!Promise.first) {
    Promise.first = function(prs) {
        return new Promise( function(resolve,reject){
            // loop through all promises
            prs.forEach( function(pr){
                // normalize the value
                Promise.resolve( pr )
                // whichever one fulfills first wins, and
                // gets to resolve the main promise
                .then( resolve );
            } );
        } );
    };
}

Note: This implementation of first(..) does not reject if all its promises reject; it simply hangs, much like a Promise.race([]) does. If desired, you could add additional logic to track each promise rejection and if all reject, call reject() on the main promise. We’ll leave that as an exercise for the reader.
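In the same spirit, here's one possible sketch of none([ .. ]), transposing each promise's outcome so that the built-in all([ .. ]) can do the waiting:

```javascript
// Hypothetical sketch of `none([..])` -- not a standard API
// polyfill-safe guard check
if (!Promise.none) {
    Promise.none = function(prs) {
        return Promise.all(
            prs.map( function(pr){
                // transpose: fulfillment becomes rejection,
                // and rejection becomes fulfillment
                return Promise.resolve( pr ).then(
                    function(msg){ throw msg; },
                    function(err){ return err; }
                );
            } )
        );
    };
}
```

If every promise rejects, the main promise fulfills with an array of the rejection reasons; if any promise fulfills, the main promise rejects with that fulfillment value.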

13.6.4 Concurrent Iterations

Sometimes you want to iterate over a list of Promises and perform some task against all of them, much like you can do with synchronous arrays (e.g., forEach(..), map(..), some(..), and every(..)). If the task to perform against each Promise is fundamentally synchronous, these work fine, just as we used forEach(..) in the previous snippet.

But if the tasks are fundamentally asynchronous, or can/should otherwise be performed concurrently, you can use async versions of these utilities as provided by many libraries.

For example, let’s consider an asynchronous map(..) utility that takes an array of values (could be Promises or anything else), plus a function (task) to perform against each. map(..) itself returns a promise whose fulfillment value is an array that holds (in the same mapping order) the async fulfillment value from each task:

if (!Promise.map) {
    Promise.map = function(vals,cb) {
        // new promise that waits for all mapped promises
        return Promise.all(
            // note: regular array `map(..)`, turns
            // the array of values into an array of
            // promises
            vals.map( function(val){
                // replace `val` with a new promise that
                // resolves after `val` is async mapped
                return new Promise( function(resolve){
                    cb( val, resolve );
                } );
            } )
        );
    };
}

Note: In this implementation of map(..), you can’t signal async rejection, but if a synchronous exception/error occurs inside of the mapping callback (cb(..)), the main Promise.map(..) returned promise would reject.

Let’s illustrate using map(..) with a list of Promises (instead of simple values):

var p1 = Promise.resolve( 21 );
var p2 = Promise.resolve( 42 );
var p3 = Promise.reject( "Oops" );

// double values in list even if they're in
// Promises
Promise.map( [p1,p2,p3], function(pr,done){
    // make sure the item itself is a Promise
    Promise.resolve( pr )
    .then(
        // extract value as `v`
        function(v){
            // map fulfillment `v` to new value
            done( v * 2 );
        },
        // or, map to promise rejection message
        done
    );
} )
.then( function(vals){
    console.log( vals );    // [42,84,"Oops"]
} );

13.7 Promise API Recap

Let’s review the ES6 Promise API that we’ve already seen unfold in bits and pieces throughout this chapter.

Note: The following API is native only as of ES6, but there are specification-compliant polyfills (not just extended Promise libraries) which can define Promise and all its associated behavior so that you can use native Promises even in pre-ES6 browsers. One such polyfill is “Native Promise Only” (http://github.com/getify/native-promise-only), which I wrote!

13.7.1 new Promise(..) Constructor

The revealing constructor Promise(..) must be used with new, and must be provided a function callback that is synchronously/immediately called. This function is passed two function callbacks that act as resolution capabilities for the promise. We commonly label these resolve(..) and reject(..):

var p = new Promise( function(resolve,reject){
    // `resolve(..)` to resolve/fulfill the promise
    // `reject(..)` to reject the promise
} );

reject(..) simply rejects the promise, but resolve(..) can either fulfill the promise or reject it, depending on what it’s passed. If resolve(..) is passed an immediate, non-Promise, non-thenable value, then the promise is fulfilled with that value.

But if resolve(..) is passed a genuine Promise or thenable value, that value is unwrapped recursively, and whatever its final resolution/state is will be adopted by the promise.
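For example, resolving with another Promise means adopting that Promise's eventual value, not being fulfilled with the Promise object itself:

```javascript
var inner = new Promise( function(resolve){
    setTimeout( function(){
        resolve( 42 );
    }, 10 );
} );

var p = new Promise( function(resolve){
    // `p` is not fulfilled with `inner` itself;
    // it adopts `inner`'s eventual fulfillment
    resolve( inner );
} );

p.then( function(v){
    console.log( v );   // 42
} );
```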

13.7.2 Promise.resolve(..) and Promise.reject(..)

A shortcut for creating an already-rejected Promise is Promise.reject(..), so these two promises are equivalent:

var p1 = new Promise( function(resolve,reject){
    reject( "Oops" );
} );

var p2 = Promise.reject( "Oops" );

Promise.resolve(..) is usually used to create an already-fulfilled Promise in a similar way to Promise.reject(..). However, Promise.resolve(..) also unwraps thenable values (as discussed several times already). In that case, the Promise returned adopts the final resolution of the thenable you passed in, which could either be fulfillment or rejection:

var fulfilledTh = {
    then: function(cb) { cb( 42 ); }
};
var rejectedTh = {
    then: function(cb,errCb) {
        errCb( "Oops" );
    }
};

var p1 = Promise.resolve( fulfilledTh );
var p2 = Promise.resolve( rejectedTh );

// `p1` will be a fulfilled promise
// `p2` will be a rejected promise

And remember, Promise.resolve(..) doesn’t do anything if what you pass is already a genuine Promise; it just returns the value directly. So there’s no overhead to calling Promise.resolve(..) on values that you don’t know the nature of, if one happens to already be a genuine Promise.
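In fact, passing a genuine Promise to Promise.resolve(..) just hands you the very same promise back:

```javascript
var p1 = Promise.resolve( 42 );
var p2 = Promise.resolve( p1 );

// same object, no new promise created
console.log( p1 === p2 );   // true
```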

13.7.3 then(..) and catch(..)

Each Promise instance (not the Promise API namespace) has then(..) and catch(..) methods, which allow registering of fulfillment and rejection handlers for the Promise. Once the Promise is resolved, one or the other of these handlers will be called, but not both, and it will always be called asynchronously (see “Jobs” in Chapter 1).

then(..) takes one or two parameters, the first for the fulfillment callback, and the second for the rejection callback. If either is omitted or is otherwise passed as a non-function value, a default callback is substituted respectively. The default fulfillment callback simply passes the message along, while the default rejection callback simply rethrows (propagates) the error reason it receives.
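That default rejection behavior means an error simply keeps propagating down the chain until something handles it:

```javascript
var p = Promise.reject( "Oops" );

p
// no rejection handler here, so the default one
// rethrows "Oops" to the next step
.then( function(v){
    // never called
} )
.catch( function(err){
    console.log( err );   // "Oops"
} );
```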

catch(..) takes only the rejection callback as a parameter, and automatically substitutes the default fulfillment callback, as just discussed. In other words, it’s equivalent to then(null,..):

p.then( fulfilled );

p.then( fulfilled, rejected );

p.catch( rejected ); // or `p.then( null, rejected )`

then(..) and catch(..) also create and return a new promise, which can be used to express Promise chain flow control. If the fulfillment or rejection callbacks have an exception thrown, the returned promise is rejected. If either callback returns an immediate, non-Promise, non-thenable value, that value is set as the fulfillment for the returned promise. If the fulfillment handler specifically returns a promise or thenable value, that value is unwrapped and becomes the resolution of the returned promise.
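Here are those chaining rules in a quick illustration:

```javascript
Promise.resolve( 21 )
.then( function(v){
    // immediate value: becomes the next fulfillment
    return v * 2;
} )
.then( function(v){
    // promise/thenable: unwrapped before continuing
    return Promise.resolve( v + 1 );
} )
.then( function(v){
    console.log( v );   // 43
} );
```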

13.7.4 Promise.all([ .. ]) and Promise.race([ .. ])

The static helpers Promise.all([ .. ]) and Promise.race([ .. ]) on the ES6 Promise API both create a Promise as their return value. The resolution of that promise is controlled entirely by the array of promises that you pass in.

For Promise.all([ .. ]), all the promises you pass in must fulfill for the returned promise to fulfill. If any promise is rejected, the main returned promise is immediately rejected, too (discarding the results of any of the other promises). For fulfillment, you receive an array of all the passed in promises’ fulfillment values. For rejection, you receive just the first promise rejection reason value. This pattern is classically called a “gate”: all must arrive before the gate opens.

For Promise.race([ .. ]), only the first promise to resolve (fulfillment or rejection) “wins,” and whatever that resolution is becomes the resolution of the returned promise. This pattern is classically called a “latch”: first one to open the latch gets through. Consider:

var p1 = Promise.resolve( 42 );
var p2 = Promise.resolve( "Hello World" );
var p3 = Promise.reject( "Oops" );

Promise.race( [p1,p2,p3] )
.then( function(msg){
    console.log( msg );     // 42
} );

Promise.all( [p1,p2,p3] )
.catch( function(err){
    console.error( err );   // "Oops"
} );

Promise.all( [p1,p2] )
.then( function(msgs){
    console.log( msgs );    // [42,"Hello World"]
} );

Warning: Be careful! If an empty array is passed to Promise.all([ .. ]), it will fulfill immediately, but Promise.race([ .. ]) will hang forever and never resolve.
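A quick demonstration of that empty-array difference:

```javascript
Promise.all( [] )
.then( function(msgs){
    console.log( msgs );    // [] -- fulfills right away
} );

// but this promise never resolves, so neither
// handler below will ever be called:
Promise.race( [] )
.then(
    function(){ /* never called */ },
    function(){ /* never called */ }
);
```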

The ES6 Promise API is pretty simple and straightforward. It’s at least good enough to serve the most basic of async cases, and is a good place to start when rearranging your code from callback hell to something better.

But there’s a whole lot of async sophistication that apps often demand which Promises themselves will be limited in addressing. In the next section, we’ll dive into those limitations as motivations for the benefit of Promise libraries.

13.8 Promise Limitations

Many of the details we’ll discuss in this section have already been alluded to in this chapter, but we’ll just make sure to review these limitations specifically.

13.8.1 Sequence Error Handling

We covered Promise-flavored error handling in detail earlier in this chapter. The limitations of how Promises are designed – how they chain, specifically – creates a very easy pitfall where an error in a Promise chain can be silently ignored accidentally.

But there’s something else to consider with Promise errors. Because a Promise chain is nothing more than its constituent Promises wired together, there’s no entity to refer to the entire chain as a single thing, which means there’s no external way to observe any errors that may occur.

If you construct a Promise chain that has no error handling in it, any error anywhere in the chain will propagate indefinitely down the chain, until observed (by registering a rejection handler at some step). So, in that specific case, having a reference to the last promise in the chain is enough (p in the following snippet), because you can register a rejection handler there, and it will be notified of any propagated errors:

// `foo(..)`, `STEP2(..)` and `STEP3(..)` are
// all promise-aware utilities

var p = foo( 42 )
.then( STEP2 )
.then( STEP3 );

Although it may seem sneakily confusing, p here doesn’t point to the first promise in the chain (the one from the foo(42) call), but instead to the last promise, the one that comes from the then(STEP3) call.

Also, no step in the promise chain is observably doing its own error handling. That means that you could then register a rejection error handler on p, and it would be notified if any errors occur anywhere in the chain:

p.catch( handleErrors );

But if any step of the chain in fact does its own error handling (perhaps hidden/abstracted away from what you can see), your handleErrors(..) won’t be notified. This may be what you want – it was, after all, a “handled rejection” – but it also may not be what you want. The complete lack of ability to be notified (of “already handled” rejection errors) is a limitation that restricts capabilities in some use cases.

It’s basically the same limitation that exists with a try..catch that can catch an exception and simply swallow it. So this isn’t a limitation unique to Promises, but it is something we might wish to have a workaround for.

Unfortunately, many times there is no reference kept for the intermediate steps in a Promise-chain sequence, so without such references, you cannot attach error handlers to reliably observe the errors.

13.8.2 Single Value

Promises by definition only have a single fulfillment value or a single rejection reason. In simple examples, this isn’t that big of a deal, but in more sophisticated scenarios, you may find this limiting.

The typical advice is to construct a values wrapper (such as an object or array) to contain these multiple messages. This solution works, but it can be quite awkward and tedious to wrap and unwrap your messages with every single step of your Promise chain.

13.8.2.1 Splitting Values

Sometimes you can take this as a signal that you could/should decompose the problem into two or more Promises.

Imagine you have a utility foo(..) that produces two values (x and y) asynchronously:

function getY(x) {
    return new Promise( function(resolve,reject){
        setTimeout( function(){
            resolve( (3 * x) - 1 );
        }, 100 );
    } );
}

function foo(bar,baz) {
    var x = bar * baz;

    return getY( x )
    .then( function(y){
        // wrap both values into container
        return [x,y];
    } );
}

foo( 10, 20 )
.then( function(msgs){
    var x = msgs[0];
    var y = msgs[1];

    console.log( x, y );    // 200 599
} );

First, let’s rearrange what foo(..) returns so that we don’t have to wrap x and y into a single array value to transport through one Promise. Instead, we can wrap each value into its own promise:

function foo(bar,baz) {
    var x = bar * baz;

    // return both promises
    return [
        Promise.resolve( x ),
        getY( x )
    ];
}

Promise.all(
    foo( 10, 20 )
)
.then( function(msgs){
    var x = msgs[0];
    var y = msgs[1];

    console.log( x, y );
} );

Is an array of promises really better than an array of values passed through a single promise? Syntactically, it’s not much of an improvement.

But this approach more closely embraces the Promise design theory. It’s now easier in the future to refactor to split the calculation of x and y into separate functions. It’s cleaner and more flexible to let the calling code decide how to orchestrate the two promises – using Promise.all([ .. ]) here, but certainly not the only option – rather than to abstract such details away inside of foo(..).

13.8.2.2 Unwrap/Spread Arguments

The var x = .. and var y = .. assignments are still awkward overhead. We can employ some functional trickery (hat tip to Reginald Braithwaite, @raganwald on Twitter) in a helper utility:

function spread(fn) {
    return Function.apply.bind( fn, null );
}

Promise.all(
    foo( 10, 20 )
)
.then(
    spread( function(x,y){
        console.log( x, y );    // 200 599
    } )
)

That’s a bit nicer! Of course, you could inline the functional magic to avoid the extra helper:

Promise.all(
    foo( 10, 20 )
)
.then( Function.apply.bind(
    function(x,y){
        console.log( x, y );    // 200 599
    },
    null
) );

These tricks may be neat, but ES6 has an even better answer for us: destructuring. The array destructuring assignment form looks like this:

Promise.all(
    foo( 10, 20 )
)
.then( function(msgs){
    var [x,y] = msgs;

    console.log( x, y );    // 200 599
} );

But best of all, ES6 offers the array parameter destructuring form:

Promise.all(
    foo( 10, 20 )
)
.then( function([x,y]){
    console.log( x, y );    // 200 599
} );

We’ve now embraced the one-value-per-Promise mantra, but kept our supporting boilerplate to a minimum!

Note: For more information on ES6 destructuring forms, see the ES6 & Beyond title of this series.

13.8.3 Single Resolution

One of the most intrinsic behaviors of Promises is that a Promise can only be resolved once (fulfillment or rejection). For many async use cases, you’re only retrieving a value once, so this works fine.

But there’s also a lot of async cases that fit into a different model – one that’s more akin to events and/or streams of data. It’s not clear on the surface how well Promises can fit into such use cases, if at all. Without a significant abstraction on top of Promises, they will completely fall short for handling multiple value resolution.

Imagine a scenario where you might want to fire off a sequence of async steps in response to a stimulus (like an event) that can in fact happen multiple times, like a button click.

This probably won’t work the way you want:

// `click(..)` binds the `"click"` event to a DOM element
// `request(..)` is the previously defined Promise-aware Ajax

var p = new Promise( function(resolve,reject){
    click( "#mybtn", resolve );
} );

p.then( function(evt){
    var btnID = evt.currentTarget.id;
    return request( "http://some.url.1/?id=" + btnID );
} )
.then( function(text){
    console.log( text );
} );

The behavior here only works if your application calls for the button to be clicked just once. If the button is clicked a second time, the p promise has already been resolved, so the second resolve(..) call would be ignored.

Instead, you’d probably need to invert the paradigm, creating a whole new Promise chain for each event firing:

click( "#mybtn", function(evt){
    var btnID = evt.currentTarget.id;

    request( "http://some.url.1/?id=" + btnID )
    .then( function(text){
        console.log( text );
    } );
} );

This approach will work in that a whole new Promise sequence will be fired off for each "click" event on the button.

But beyond just the ugliness of having to define the entire Promise chain inside the event handler, this design in some respects violates the idea of separation of concerns/capabilities (SoC). You might very well want to define your event handler in a different place in your code from where you define the response to the event (the Promise chain). That’s pretty awkward to do in this pattern, without helper mechanisms.

Note: Another way of articulating this limitation is that it’d be nice if we could construct some sort of “observable” that we can subscribe a Promise chain to. There are libraries that have created these abstractions (such as RxJS – http://rxjs.codeplex.com/), but the abstractions can seem so heavy that you can’t even see the nature of Promises anymore. Such heavy abstraction brings important questions to mind such as whether (sans Promises) these mechanisms are as trustable as Promises themselves have been designed to be. We’ll revisit the “Observable” pattern in Appendix B.

13.8.4 Inertia

One concrete barrier to starting to use Promises in your own code is all the code that currently exists which is not already Promise-aware. If you have lots of callback-based code, it’s far easier to just keep coding in that same style.

“A code base in motion (with callbacks) will remain in motion (with callbacks) unless acted upon by a smart, Promises-aware developer.”

Promises offer a different paradigm, and as such, the approach to the code can be anywhere from just a little different to, in some cases, radically different. You have to be intentional about it, because Promises will not just naturally shake out from the same ol’ ways of doing code that have served you well thus far.

Consider a callback-based scenario like the following:

function foo(x,y,cb) {
    ajax(
        "http://some.url.1/?x=" + x + "&y=" + y,
        cb
    );
}

foo( 11, 31, function(err,text) {
    if (err) {
        console.error( err );
    }
    else {
        console.log( text );
    }
} );

Is it immediately obvious what the first steps are to convert this callback-based code to Promise-aware code? Depends on your experience. The more practice you have with it, the more natural it will feel. But certainly, Promises don’t just advertise on the label exactly how to do it – there’s no one-size-fits-all answer – so the responsibility is up to you.

As we’ve covered before, we definitely need an Ajax utility that is Promise-aware instead of callback-based, which we could call request(..). You can make your own, as we have already. But the overhead of having to manually define Promise-aware wrappers for every callback-based utility makes it less likely you’ll choose to refactor to Promise-aware coding at all.

Promises offer no direct answer to that limitation. Most Promise libraries do offer a helper, however. But even without a library, imagine a helper like this:

// polyfill-safe guard check
if (!Promise.wrap) {
    Promise.wrap = function(fn) {
        return function() {
            var args = [].slice.call( arguments );

            return new Promise( function(resolve,reject){
                fn.apply(
                    null,
                    args.concat( function(err,v){
                        if (err) {
                            reject( err );
                        }
                        else {
                            resolve( v );
                        }
                    } )
                );
            } );
        };
    };
}

OK, that’s more than just a tiny trivial utility. However, although it may look a bit intimidating, it’s not as bad as you’d think. It takes a function that expects an error-first style callback as its last parameter, and returns a new one that automatically creates a Promise to return, and substitutes the callback for you, wired up to the Promise fulfillment/rejection.

Rather than waste too much time talking about how this Promise.wrap(..) helper works, let’s just look at how we use it:

var request = Promise.wrap( ajax );

request( "http://some.url.1/" )
.then( .. )
..

Wow, that was pretty easy!

Promise.wrap(..) does not produce a Promise. It produces a function that will produce Promises. In a sense, a Promise-producing function could be seen as a “Promise factory.” I propose “promisory” as the name for such a thing (“Promise” + “factory”).

The act of wrapping a callback-expecting function to be a Promise-aware function is sometimes referred to as “lifting” or “promisifying”. But there doesn’t seem to be a standard term for what to call the resultant function other than a “lifted function”, so I like “promisory” better as I think it’s more descriptive.

Note: Promisory isn’t a made-up term. It’s a real word, and its definition means to contain or convey a promise. That’s exactly what these functions are doing, so it turns out to be a pretty perfect terminology match!

So, Promise.wrap(ajax) produces an ajax(..) promisory we call request(..), and that promisory produces Promises for Ajax responses.

If all functions were already promisories, we wouldn’t need to make them ourselves, so the extra step is a tad bit of a shame. But at least the wrapping pattern is (usually) repeatable so we can put it into a Promise.wrap(..) helper as shown to aid our promise coding.

So back to our earlier example, we need a promisory for both ajax(..) and foo(..):

// make a promisory for `ajax(..)`
var request = Promise.wrap( ajax );

// refactor `foo(..)`, but keep it externally
// callback-based for compatibility with other
// parts of the code for now -- only use
// `request(..)`'s promise internally.
function foo(x,y,cb) {
    request(
        "http://some.url.1/?x=" + x + "&y=" + y
    )
    .then(
        function fulfilled(text){
            cb( null, text );
        },
        cb
    );
}

// now, for this code's purposes, make a
// promisory for `foo(..)`
var betterFoo = Promise.wrap( foo );

// and use the promisory
betterFoo( 11, 31 )
.then(
    function fulfilled(text){
        console.log( text );
    },
    function rejected(err){
        console.error( err );
    }
);

Of course, while we’re refactoring foo(..) to use our new request(..) promisory, we could just make foo(..) a promisory itself, instead of remaining callback-based and needing to make and use the subsequent betterFoo(..) promisory. This decision just depends on whether foo(..) needs to stay callback-based compatible with other parts of the code base or not.

Consider:

// `foo(..)` is now also a promisory because it
// delegates to the `request(..)` promisory
function foo(x,y) {
    return request(
        "http://some.url.1/?x=" + x + "&y=" + y
    );
}

foo( 11, 31 )
.then( .. )
..

While ES6 Promises don’t natively ship with helpers for such promisory wrapping, most libraries provide them, or you can make your own. Either way, this particular limitation of Promises is addressable without too much pain (certainly compared to the pain of callback hell!).

13.8.5 Promise Uncancelable

Once you create a Promise and register a fulfillment and/or rejection handler for it, there’s nothing external you can do to stop that progression if something else happens to make that task moot.

Note: Many Promise abstraction libraries provide facilities to cancel Promises, but this is a terrible idea! Many developers wish Promises had natively been designed with external cancelation capability, but the problem is that it would let one consumer/observer of a Promise affect some other consumer’s ability to observe that same Promise. This violates the future-value’s trustability (external immutability), but moreover is the embodiment of the “action at a distance” anti-pattern (http://en.wikipedia.org/wiki/Action_at_a_distance_%28computer_programming%29). Regardless of how useful it seems, it will actually lead you straight back into the same nightmares as callbacks.

Consider our Promise timeout scenario from earlier:

var p = foo( 42 );

Promise.race( [
    p,
    timeoutPromise( 3000 )
] )
.then(
    doSomething,
    handleError
);

p.then( function(){
    // still happens even in the timeout case :(
} );

The “timeout” was external to the promise p, so p itself keeps going, which we probably don’t want.

One option is to invasively define your resolution callbacks:

var OK = true;

var p = foo( 42 );

Promise.race( [
    p,
    timeoutPromise( 3000 )
    .catch( function(err){
        OK = false;
        throw err;
    } )
] )
.then(
    doSomething,
    handleError
);

p.then( function(){
    if (OK) {
        // only happens if no timeout! :)
    }
} );

This is ugly. It works, but it’s far from ideal. Generally, you should try to avoid such scenarios.

But if you can’t, the ugliness of this solution should be a clue that cancelation is a functionality that belongs at a higher level of abstraction on top of Promises. I’d recommend you look to Promise abstraction libraries for assistance rather than hacking it yourself.

Note: My asynquence Promise abstraction library provides just such an abstraction and an abort() capability for the sequence, all of which will be discussed in Appendix A.

A single Promise is not really a flow-control mechanism (at least not in a very meaningful sense), which is exactly what cancelation refers to; that’s why Promise cancelation would feel awkward.

By contrast, a chain of Promises taken collectively together – what I like to call a “sequence” – is a flow control expression, and thus it’s appropriate for cancelation to be defined at that level of abstraction.

No individual Promise should be cancelable, but it’s sensible for a sequence to be cancelable, because you don’t pass around a sequence as a single immutable value like you do with a Promise.
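To make that concrete, here’s a minimal sketch of sequence-level cancelation. This is purely illustrative and is not asynquence’s actual API: no individual Promise gets canceled, but the sequence stops advancing its steps once aborted.

```javascript
// Hypothetical helper -- NOT asynquence's real API, just a sketch of the
// idea. Each step checks a shared `aborted` flag before running, so
// cancelation lives at the sequence level while every underlying Promise
// stays externally immutable.
function cancelableSequence() {
    var steps = [].slice.call( arguments );
    var aborted = false;

    var promise = steps.reduce( function(chain,step){
        return chain.then( function(val){
            // once aborted, skip all remaining steps
            if (aborted) return val;
            return step( val );
        } );
    }, Promise.resolve() );

    return {
        then: promise.then.bind( promise ),
        abort: function(){ aborted = true; }
    };
}

var seq = cancelableSequence(
    function(){ return 21; },
    function(v){ return v * 2; }
);

// calling `seq.abort()` here would stop the steps from ever running

seq.then( function(v){
    console.log( v );       // 42
} );
```

Note the design: `abort()` flips a flag that the sequence itself consults, rather than reaching into any Promise from the outside, which preserves each Promise’s trustability.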

13.8.6 Promise Performance

This particular limitation is both simple and complex.

Comparing how many pieces are moving with a basic callback-based async task chain versus a Promise chain, it’s clear Promises have a fair bit more going on, which means they are naturally at least a tiny bit slower. Think back to just the simple list of trust guarantees that Promises offer, as compared to the ad hoc solution code you’d have to layer on top of callbacks to achieve the same protections.

More work to do, more guards to protect, means that Promises are slower as compared to naked, untrustable callbacks. That much is obvious, and probably simple to wrap your brain around.

But how much slower? Well… that’s actually proving to be an incredibly difficult question to answer absolutely, across the board.

Frankly, it’s kind of an apples-to-oranges comparison, so it’s probably the wrong question to ask. You should actually compare whether an ad hoc callback system with all the same protections manually layered in is faster than a Promise implementation.

If Promises have a legitimate performance limitation, it’s more that they don’t really offer a line-item choice as to which trustability protections you want/need or not – you get them all, always.

Nevertheless, if we grant that a Promise is generally a little bit slower than its non-Promise, non-trustable callback equivalent – assuming there are places where you feel you can justify the lack of trustability – does that mean that Promises should be avoided across the board, as if your entire application is driven by nothing but must-be-utterly-the-fastest code possible?

Sanity check: if your code is legitimately like that, is JavaScript even the right language for such tasks? JavaScript can be optimized to run applications very performantly (see Chapter 5 and Chapter 6). But is obsessing over tiny performance tradeoffs with Promises, in light of all the benefits they offer, really appropriate?

Another subtle issue is that Promises make everything async, which means that some immediately (synchronously) complete steps still defer advancement of the next step to a Job (see Chapter 1). That means that it’s possible that a sequence of Promise tasks could complete ever-so-slightly slower than the same sequence wired up with callbacks.
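A tiny illustration of that deferral (the names here are just for the demo): even an already-fulfilled Promise schedules its handler as a Job, while a plain synchronous callback runs immediately.

```javascript
var order = [];

// a synchronous, callback-based step
function syncDouble(v,cb) { cb( v * 2 ); }

syncDouble( 21, function(v){
    order.push( "callback: " + v );
} );

// an already-fulfilled Promise still defers its handler to a Job
Promise.resolve( 21 ).then( function(v){
    order.push( "promise: " + (v * 2) );
} );

order.push( "sync code done" );

console.log( order );
// ["callback: 42", "sync code done"]
// ...the "promise: 42" entry only arrives on the next Job/microtask tick
```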

Of course, the question here is this: are these potential slips in tiny fractions of performance worth all the other articulated benefits of Promises we’ve laid out across this chapter?

My take is that in virtually all cases where you might think Promise performance is slow enough to be concerned, it’s actually an anti-pattern to optimize away the benefits of Promise trustability and composability by avoiding them altogether.

Instead, you should default to using them across the code base, and then profile and analyze your application’s hot (critical) paths. Are Promises really a bottleneck, or are they just a theoretical slowdown? Only then, armed with actual valid benchmarks (see Chapter 6) is it responsible and prudent to factor out the Promises in just those identified critical areas.

Promises are a little slower, but in exchange you’re getting a lot of trustability, non-Zalgo predictability, and composability built in. Maybe the limitation is not actually their performance, but your lack of perception of their benefits?

13.9 Review

Promises are awesome. Use them. They solve the inversion of control issues that plague us with callbacks-only code.

They don’t get rid of callbacks, they just redirect the orchestration of those callbacks to a trustable intermediary mechanism that sits between us and another utility.

Promise chains also begin to address (though certainly not perfectly) a better way of expressing async flow in sequential fashion, which helps our brains plan and maintain async JS code better. We’ll see an even better solution to that problem in the next chapter!

14 You Don’t Know JS: Async & Performance

15 Chapter 4: Generators

In Chapter 2, we identified two key drawbacks to expressing async flow control with callbacks:

  • Callback-based async doesn’t fit how our brain plans out steps of a task.
  • Callbacks aren’t trustable or composable because of inversion of control.

In Chapter 3, we detailed how Promises uninvert the inversion of control of callbacks, restoring trustability/composability.

Now we turn our attention to expressing async flow control in a sequential, synchronous-looking fashion. The “magic” that makes it possible is ES6 generators.

15.1 Breaking Run-to-Completion

In Chapter 1, we explained an expectation that JS developers almost universally rely on in their code: once a function starts executing, it runs until it completes, and no other code can interrupt and run in between.

As bizarre as it may seem, ES6 introduces a new type of function that does not behave with the run-to-completion behavior. This new type of function is called a “generator.”

To understand the implications, let’s consider this example:

var x = 1;

function foo() {
    x++;
    bar();              // <-- what about this line?
    console.log( "x:", x );
}

function bar() {
    x++;
}

foo();                  // x: 3

In this example, we know for sure that bar() runs in between x++ and console.log(x). But what if bar() wasn’t there? Obviously the result would be 2 instead of 3.

Now let’s twist your brain. What if bar() wasn’t present, but it could still somehow run between the x++ and console.log(x) statements? How would that be possible?

In preemptive multithreaded languages, it would essentially be possible for bar() to “interrupt” and run at exactly the right moment between those two statements. But JS is not preemptive, nor is it (currently) multithreaded. And yet, a cooperative form of this “interruption” (concurrency) is possible, if foo() itself could somehow indicate a “pause” at that part in the code.

Note: I use the word “cooperative” not only because of the connection to classical concurrency terminology (see Chapter 1), but because as you’ll see in the next snippet, the ES6 syntax for indicating a pause point in code is yield – suggesting a politely cooperative yielding of control.

Here’s the ES6 code to accomplish such cooperative concurrency:

var x = 1;

function *foo() {
    x++;
    yield; // pause!
    console.log( "x:", x );
}

function bar() {
    x++;
}

Note: You will likely see most other JS documentation/code that will format a generator declaration as function* foo() { .. } instead of as I’ve done here with function *foo() { .. } – the only difference being the stylistic positioning of the *. The two forms are functionally/syntactically identical, as is a third function*foo() { .. } (no space) form. There are arguments for both styles, but I basically prefer function *foo.. because it then matches when I reference a generator in writing with *foo(). If I said only foo(), you wouldn’t know as clearly if I was talking about a generator or a regular function. It’s purely a stylistic preference.

Now, how can we run the code in that previous snippet such that bar() executes at the point of the yield inside of *foo()?

// construct an iterator `it` to control the generator
var it = foo();

// start `foo()` here!
it.next();
x;                      // 2
bar();
x;                      // 3
it.next();              // x: 3

OK, there’s quite a bit of new and potentially confusing stuff in those two code snippets, so we’ve got plenty to wade through. But before we explain the different mechanics/syntax with ES6 generators, let’s walk through the behavior flow:

  1. The it = foo() operation does not execute the *foo() generator yet, but it merely constructs an iterator that will control its execution. More on iterators in a bit.
  2. The first it.next() starts the *foo() generator, and runs the x++ on the first line of *foo().
  3. *foo() pauses at the yield statement, at which point that first it.next() call finishes. At the moment, *foo() is still running and active, but it’s in a paused state.
  4. We inspect the value of x, and it’s now 2.
  5. We call bar(), which increments x again with x++.
  6. We inspect the value of x again, and it’s now 3.
  7. The final it.next() call resumes the *foo() generator from where it was paused, and runs the console.log(..) statement, which uses the current value of x of 3.

Clearly, *foo() started, but did not run-to-completion – it paused at the yield. We resumed *foo() later, and let it finish, but that wasn’t even required.

So, a generator is a special kind of function that can start and stop one or more times, and doesn’t necessarily ever have to finish. While it won’t be terribly obvious yet why that’s so powerful, as we go throughout the rest of this chapter, that will be one of the fundamental building blocks we use to construct generators-as-async-flow-control as a pattern for our code.

15.1.1 Input and Output

A generator function is a special function with the new processing model we just alluded to. But it’s still a function, which means it still has some basic tenets that haven’t changed – namely, that it still accepts arguments (aka “input”), and that it can still return a value (aka “output”):

function *foo(x,y) {
    return x * y;
}

var it = foo( 6, 7 );

var res = it.next();

res.value;      // 42

We pass in the arguments 6 and 7 to *foo(..) as the parameters x and y, respectively. And *foo(..) returns the value 42 back to the calling code.

We now see a difference with how the generator is invoked compared to a normal function. foo(6,7) obviously looks familiar. But subtly, the *foo(..) generator hasn’t actually run yet as it would have with a function.

Instead, we’re just creating an iterator object, which we assign to the variable it, to control the *foo(..) generator. Then we call it.next(), which instructs the *foo(..) generator to advance from its current location, stopping either at the next yield or end of the generator.

The result of that next(..) call is an object with a value property on it holding whatever value (if anything) was returned from *foo(..). In other words, yield caused a value to be sent out from the generator during the middle of its execution, kind of like an intermediate return.
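To see the full shape of that result object (both `value` and `done`), using the same `*foo(..)`:

```javascript
function *foo(x,y) {
    return x * y;
}

var it = foo( 6, 7 );
var res = it.next();

console.log( res );     // { value: 42, done: true }
```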

Again, it won’t be obvious yet why we need this whole indirect iterator object to control the generator. We’ll get there, I promise.

15.1.1.1 Iteration Messaging

In addition to generators accepting arguments and having return values, there’s even more powerful and compelling input/output messaging capability built into them, via yield and next(..).

Consider:

function *foo(x) {
    var y = x * (yield);
    return y;
}

var it = foo( 6 );

// start `foo(..)`
it.next();

var res = it.next( 7 );

res.value;      // 42

First, we pass in 6 as the parameter x. Then we call it.next(), and it starts up *foo(..).

Inside *foo(..), the var y = x .. statement starts to be processed, but then it runs across a yield expression. At that point, it pauses *foo(..) (in the middle of the assignment statement!), and essentially requests the calling code to provide a result value for the yield expression. Next, we call it.next( 7 ), which is passing the 7 value back in to be that result of the paused yield expression.

So, at this point, the assignment statement is essentially var y = 6 * 7. Now, return y returns that 42 value back as the result of the it.next( 7 ) call.

Notice something very important but also easily confusing, even to seasoned JS developers: depending on your perspective, there’s a mismatch between the yield and the next(..) call. In general, you’re going to have one more next(..) call than you have yield statements – the preceding snippet has one yield and two next(..) calls.

Why the mismatch?

Because the first next(..) always starts a generator, and runs to the first yield. But it’s the second next(..) call that fulfills the first paused yield expression, and the third next(..) would fulfill the second yield, and so on.
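Here’s a hypothetical illustration of that offset, with three `yield`s and four `next(..)` calls:

```javascript
function *offsets() {
    var a = yield 1;        // answered by the *second* next(..)
    var b = yield a + 1;    // answered by the third
    var c = yield b + 1;    // answered by the fourth
    return c + 1;
}

var it = offsets();

it.next().value;        // 1  -- first next() just runs to the first yield
it.next( 10 ).value;    // 11 -- a = 10
it.next( 20 ).value;    // 21 -- b = 20
it.next( 30 ).value;    // 31 -- c = 30, answered by the `return`
```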

15.1.1.1.1 Tale of Two Questions

Actually, which code you’re thinking about primarily will affect whether there’s a perceived mismatch or not.

Consider only the generator code:

var y = x * (yield);
return y;

This first yield is basically asking a question: “What value should I insert here?”

Who’s going to answer that question? Well, the first next() has already run to get the generator up to this point, so obviously it can’t answer the question. So, the second next(..) call must answer the question posed by the first yield.

See the mismatch – second-to-first?

But let’s flip our perspective. Let’s look at it not from the generator’s point of view, but from the iterator’s point of view.

To properly illustrate this perspective, we also need to explain that messages can go in both directions – yield .. as an expression can send out messages in response to next(..) calls, and next(..) can send values to a paused yield expression. Consider this slightly adjusted code:

function *foo(x) {
    var y = x * (yield "Hello");    // <-- yield a value!
    return y;
}

var it = foo( 6 );

var res = it.next();    // first `next()`, don't pass anything
res.value;              // "Hello"

res = it.next( 7 );     // pass `7` to waiting `yield`
res.value;              // 42

yield .. and next(..) pair together as a two-way message passing system during the execution of the generator.

So, looking only at the iterator code:

var res = it.next();    // first `next()`, don't pass anything
res.value;              // "Hello"

res = it.next( 7 );     // pass `7` to waiting `yield`
res.value;              // 42

Note: We don’t pass a value to the first next() call, and that’s on purpose. Only a paused yield could accept such a value passed by a next(..), and at the beginning of the generator when we call the first next(), there is no paused yield to accept such a value. The specification and all compliant browsers just silently discard anything passed to the first next(). It’s still a bad idea to pass a value, as you’re just creating silently “failing” code that’s confusing. So, always start a generator with an argument-free next().

The first next() call (with nothing passed to it) is basically asking a question: “What next value does the *foo(..) generator have to give me?” And who answers this question? The first yield "Hello" expression.

See? No mismatch there.

Depending on who you think about asking the question, there is either a mismatch between the yield and next(..) calls, or not.

But wait! There’s still an extra next() compared to the number of yield statements. So, that final it.next(7) call is again asking the question about what next value the generator will produce. But there are no more yield statements left to answer, are there? So who answers?

The return statement answers the question!

And if there is no return in your generator – return is certainly not any more required in generators than in regular functions – there’s always an assumed/implicit return; (aka return undefined;), which serves the purpose of default answering the question posed by the final it.next(7) call.
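For example, with a hypothetical generator that has no explicit `return`:

```javascript
function *noReturn() {
    var x = yield 10;
    // no explicit `return`, so an implicit `return undefined;` runs here
}

var it = noReturn();

it.next();              // { value: 10, done: false }
it.next( 2 );           // { value: undefined, done: true }
```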

These questions and answers – the two-way message passing with yield and next(..) – are quite powerful, but it’s not obvious at all how these mechanisms are connected to async flow control. We’re getting there!

15.1.2 Multiple Iterators

It may appear from the syntactic usage that when you use an iterator to control a generator, you’re controlling the declared generator function itself. But there’s a subtlety that’s easy to miss: each time you construct an iterator, you are implicitly constructing an instance of the generator which that iterator will control.

You can have multiple instances of the same generator running at the same time, and they can even interact:

function *foo() {
    var x = yield 2;
    z++;
    var y = yield (x * z);
    console.log( x, y, z );
}

var z = 1;

var it1 = foo();
var it2 = foo();

var val1 = it1.next().value;            // 2 <-- yield 2
var val2 = it2.next().value;            // 2 <-- yield 2

val1 = it1.next( val2 * 10 ).value;     // 40  <-- x:20,  z:2
val2 = it2.next( val1 * 5 ).value;      // 600 <-- x:200, z:3

it1.next( val2 / 2 );                   // y:300
                                        // 20 300 3
it2.next( val1 / 4 );                   // y:10
                                        // 200 10 3

Warning: The most common usage of multiple instances of the same generator running concurrently is not such interactions, but when the generator is producing its own values without input, perhaps from some independently connected resource. We’ll talk more about value production in the next section.

Let’s briefly walk through the processing:

  1. Both instances of *foo() are started at the same time, and both next() calls reveal a value of 2 from the yield 2 statements, respectively.
  2. val2 * 10 is 2 * 10, which is sent into the first generator instance it1, so that x gets value 20. z is incremented from 1 to 2, and then 20 * 2 is yielded out, setting val1 to 40.
  3. val1 * 5 is 40 * 5, which is sent into the second generator instance it2, so that x gets value 200. z is incremented again, from 2 to 3, and then 200 * 3 is yielded out, setting val2 to 600.
  4. val2 / 2 is 600 / 2, which is sent into the first generator instance it1, so that y gets value 300, then printing out 20 300 3 for its x y z values, respectively.
  5. val1 / 4 is 40 / 4, which is sent into the second generator instance it2, so that y gets value 10, then printing out 200 10 3 for its x y z values, respectively.

That’s a “fun” example to run through in your mind. Did you keep it straight?

15.1.2.1 Interleaving

Recall this scenario from the “Run-to-completion” section of Chapter 1:

var a = 1;
var b = 2;

function foo() {
    a++;
    b = b * a;
    a = b + 3;
}

function bar() {
    b--;
    a = 8 + b;
    b = a * 2;
}

With normal JS functions, of course either foo() can run completely first, or bar() can run completely first, but foo() cannot interleave its individual statements with bar(). So, there are only two possible outcomes to the preceding program.
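Concretely, those two outcomes are:

```javascript
var a = 1;
var b = 2;

function foo() {
    a++;
    b = b * a;
    a = b + 3;
}

function bar() {
    b--;
    a = 8 + b;
    b = a * 2;
}

// outcome 1: `foo()` runs completely first
foo();
bar();
console.log( a, b );    // 11 22

// outcome 2: `bar()` runs completely first
a = 1;
b = 2;
bar();
foo();
console.log( a, b );    // 183 180
```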

However, with generators, clearly interleaving (even in the middle of statements!) is possible:

var a = 1;
var b = 2;

function *foo() {
    a++;
    yield;
    b = b * a;
    a = (yield b) + 3;
}

function *bar() {
    b--;
    yield;
    a = (yield 8) + b;
    b = a * (yield 2);
}

Depending on what respective order the iterators controlling *foo() and *bar() are called, the preceding program could produce several different results. In other words, we can actually illustrate (in a sort of fake-ish way) the theoretical “threaded race conditions” circumstances discussed in Chapter 1, by interleaving the two generator iterations over the same shared variables.

First, let’s make a helper called step(..) that controls an iterator:

function step(gen) {
    var it = gen();
    var last;

    return function() {
        // whatever is `yield`ed out, just
        // send it right back in the next time!
        last = it.next( last ).value;
    };
}

step(..) initializes a generator to create its it iterator, then returns a function which, when called, advances the iterator by one step. Additionally, the previously yielded out value is sent right back in at the next step. So, yield 8 will just become 8 and yield b will just be b (whatever it was at the time of yield).

Now, just for fun, let’s experiment to see the effects of interleaving these different chunks of *foo() and *bar(). We’ll start with the boring base case, making sure *foo() totally finishes before *bar() (just like we did in Chapter 1):

// make sure to reset `a` and `b`
a = 1;
b = 2;

var s1 = step( foo );
var s2 = step( bar );

// run `*foo()` completely first
s1();
s1();
s1();

// now run `*bar()`
s2();
s2();
s2();
s2();

console.log( a, b );    // 11 22

The end result is 11 and 22, just as it was in the Chapter 1 version. Now let’s mix up the interleaving ordering and see how it changes the final values of a and b:

// make sure to reset `a` and `b`
a = 1;
b = 2;

var s1 = step( foo );
var s2 = step( bar );

s2();       // b--;
s2();       // yield 8
s1();       // a++;
s2();       // a = 8 + b;
            // yield 2
s1();       // b = b * a;
            // yield b
s1();       // a = b + 3;
s2();       // b = a * 2;

Before I tell you the results, can you figure out what a and b are after the preceding program? No cheating!

console.log( a, b );    // 12 18

Note: As an exercise for the reader, try to see how many other combinations of results you can get back rearranging the order of the s1() and s2() calls. Don’t forget you’ll always need three s1() calls and four s2() calls. Recall the discussion earlier about matching next() with yield for the reasons why.

You almost certainly won’t want to intentionally create this level of interleaving confusion, as it creates incredibly difficult to understand code. But the exercise is interesting and instructive to understand more about how multiple generators can run concurrently in the same shared scope, because there will be places where this capability is quite useful.

We’ll discuss generator concurrency in more detail at the end of this chapter.

15.2 Generator’ing Values

In the previous section, we mentioned an interesting use for generators, as a way to produce values. This is not the main focus in this chapter, but we’d be remiss if we didn’t cover the basics, especially because this use case is essentially the origin of the name: generators.

We’re going to take a slight diversion into the topic of iterators for a bit, but we’ll circle back to how they relate to generators and using a generator to generate values.

15.2.1 Producers and Iterators

Imagine you’re producing a series of values where each value has a definable relationship to the previous value. To do this, you’re going to need a stateful producer that remembers the last value it gave out.

You can implement something like that straightforwardly using a function closure (see the Scope & Closures title of this series):

var gimmeSomething = (function(){
    var nextVal;

    return function(){
        if (nextVal === undefined) {
            nextVal = 1;
        }
        else {
            nextVal = (3 * nextVal) + 6;
        }

        return nextVal;
    };
})();

gimmeSomething();       // 1
gimmeSomething();       // 9
gimmeSomething();       // 33
gimmeSomething();       // 105

Note: The nextVal computation logic here could have been simplified, but conceptually, we don’t want to calculate the next value (aka nextVal) until the next gimmeSomething() call happens, because in general that could be a resource-leaky design for producers of more persistent or resource-limited values than simple numbers.

Generating an arbitrary number series isn’t a terribly realistic example. But what if you were generating records from a data source? You could imagine much the same code.

In fact, this task is a very common design pattern, usually solved by iterators. An iterator is a well-defined interface for stepping through a series of values from a producer. The JS interface for iterators, as it is in most languages, is to call next() each time you want the next value from the producer.

We could implement the standard iterator interface for our number series producer:

var something = (function(){
    var nextVal;

    return {
        // needed for `for..of` loops
        [Symbol.iterator]: function(){ return this; },

        // standard iterator interface method
        next: function(){
            if (nextVal === undefined) {
                nextVal = 1;
            }
            else {
                nextVal = (3 * nextVal) + 6;
            }

            return { done:false, value:nextVal };
        }
    };
})();

something.next().value;     // 1
something.next().value;     // 9
something.next().value;     // 33
something.next().value;     // 105

Note: We’ll explain why we need the [Symbol.iterator]: .. part of this code snippet in the “Iterables” section. Syntactically though, two ES6 features are at play. First, the [ .. ] syntax is called a computed property name (see the this & Object Prototypes title of this series). It’s a way in an object literal definition to specify an expression and use the result of that expression as the name for the property. Next, Symbol.iterator is one of ES6’s predefined special Symbol values (see the ES6 & Beyond title of this book series).

The next() call returns an object with two properties: done is a boolean value signaling the iterator’s complete status; value holds the iteration value.

ES6 also adds the for..of loop, which means that a standard iterator can automatically be consumed with native loop syntax:

for (var v of something) {
    console.log( v );

    // don't let the loop run forever!
    if (v > 500) {
        break;
    }
}
// 1 9 33 105 321 969

Note: Because our something iterator always returns done:false, this for..of loop would run forever, which is why we put the break conditional in. It’s totally OK for iterators to be never-ending, but there are also cases where the iterator will run over a finite set of values and eventually return a done:true.
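By contrast, here’s a minimal sketch of a finite iterator that eventually reports done:true, so a for..of loop over it stops on its own. The makeCountdown name is made up for illustration, not part of the running example:

```javascript
// a hand-rolled finite iterator; `makeCountdown` is an
// illustrative name, not from the running example
function makeCountdown(from) {
    var n = from;
    return {
        [Symbol.iterator]: function(){ return this; },
        next: function(){
            if (n < 1) {
                // signal completion
                return { done: true, value: undefined };
            }
            return { done: false, value: n-- };
        }
    };
}

var vals = [];
for (var v of makeCountdown( 3 )) {
    vals.push( v );     // no `break` needed; the iterator finishes itself
}
// vals: [3,2,1]
```

Because next() eventually returns done:true, the for..of loop terminates naturally, with no break safeguard required.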

The for..of loop automatically calls next() for each iteration – it doesn’t pass any values in to the next() – and it will automatically terminate on receiving a done:true. It’s quite handy for looping over a set of data.

Of course, you could manually loop over iterators, calling next() and checking for the done:true condition to know when to stop:

for (
    var ret;
    (ret = something.next()) && !ret.done;
) {
    console.log( ret.value );

    // don't let the loop run forever!
    if (ret.value > 500) {
        break;
    }
}
// 1 9 33 105 321 969

Note: This manual for loop approach is certainly uglier than the ES6 for..of loop syntax, but its advantage is that it affords you the opportunity to pass in values to the next(..) calls if necessary.

In addition to making your own iterators, many built-in data structures in JS (as of ES6), like arrays, also have default iterators:

var a = [1,3,5,7,9];

for (var v of a) {
    console.log( v );
}
// 1 3 5 7 9

The for..of loop asks a for its iterator, and automatically uses it to iterate over a’s values.

Note: It may seem a strange omission by ES6, but regular objects intentionally do not come with a default iterator the way arrays do. The reasons go deeper than we will cover here. If all you want is to iterate over the properties of an object (with no particular guarantee of ordering), Object.keys(..) returns an array, which can then be used like for (var k of Object.keys(obj)) { ... Such a for..of loop over an object’s keys would be similar to a for..in loop, except that Object.keys(..) does not include properties from the [[Prototype]] chain while for..in does (see the this & Object Prototypes title of this series).
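To see the difference that note describes, consider this sketch; the object and its properties are made up for illustration:

```javascript
var proto = { inherited: true };

var obj = Object.create( proto );
obj.a = 1;
obj.b = 2;

var ofKeys = [];
for (var k of Object.keys( obj )) {
    ofKeys.push( k );       // own keys only
}

var inKeys = [];
for (var k in obj) {
    inKeys.push( k );       // also picks up `inherited` from the prototype
}

// `ofKeys` holds only "a" and "b"; `inKeys` additionally holds "inherited"
```

Object.keys(..) stops at the object’s own properties, while for..in walks the [[Prototype]] chain as well.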

15.2.2 Iterables

The something object in our running example is called an iterator, as it has the next() method on its interface. But a closely related term is iterable, which is an object that contains an iterator that can iterate over its values.

As of ES6, the way to retrieve an iterator from an iterable is that the iterable must have a function on it, with the name being the special ES6 symbol value Symbol.iterator. When this function is called, it returns an iterator. Though not required, generally each call should return a fresh new iterator.
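Arrays follow that fresh-iterator convention, which is why two simultaneous traversals of the same array don’t interfere. A quick check (variable names here are illustrative):

```javascript
var a = [1,3,5,7,9];

// each call to the `Symbol.iterator` function returns a fresh iterator
var it1 = a[Symbol.iterator]();
var it2 = a[Symbol.iterator]();

var x1 = it1.next().value;      // 1
var x2 = it1.next().value;      // 3

// `it2` is independent of `it1`, so it still starts at the beginning
var y1 = it2.next().value;      // 1
```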

a in the previous snippet is an iterable. The for..of loop automatically calls its Symbol.iterator function to construct an iterator. But we could of course call the function manually, and use the iterator it returns:

var a = [1,3,5,7,9];

var it = a[Symbol.iterator]();

it.next().value;    // 1
it.next().value;    // 3
it.next().value;    // 5
..

In the previous code listing that defined something, you may have noticed this line:

[Symbol.iterator]: function(){ return this; }

That little bit of confusing code is making the something value – the interface of the something iterator – also an iterable; it’s now both an iterable and an iterator. Then, we pass something to the for..of loop:

for (var v of something) {
    ..
}

The for..of loop expects something to be an iterable, so it looks for and calls its Symbol.iterator function. We defined that function to simply return this, so it just gives itself back, and the for..of loop is none the wiser.

15.2.3 Generator Iterator

Let’s turn our attention back to generators, in the context of iterators. A generator can be treated as a producer of values that we extract one at a time through an iterator interface’s next() calls.

So, a generator itself is not technically an iterable, though it’s very similar – when you execute the generator, you get an iterator back:

function *foo(){ .. }

var it = foo();

We can implement the something infinite number series producer from earlier with a generator, like this:

function *something() {
    var nextVal;

    while (true) {
        if (nextVal === undefined) {
            nextVal = 1;
        }
        else {
            nextVal = (3 * nextVal) + 6;
        }

        yield nextVal;
    }
}

Note: A while..true loop would normally be a very bad thing to include in a real JS program, at least if it doesn’t have a break or return in it, as it would likely run forever, synchronously, and block/lock-up the browser UI. However, in a generator, such a loop is generally totally OK if it has a yield in it, as the generator will pause at each iteration, yielding back to the main program and/or to the event loop queue. To put it glibly, “generators put the while..true back in JS programming!”

That’s a fair bit cleaner and simpler, right? Because the generator pauses at each yield, the state (scope) of the function *something() is kept around, meaning there’s no need for the closure boilerplate to preserve variable state across calls.

Not only is it simpler code – we don’t have to make our own iterator interface – it actually is more reason-able code, because it more clearly expresses the intent. For example, the while..true loop tells us the generator is intended to run forever – to keep generating values as long as we keep asking for them.

And now we can use our shiny new *something() generator with a for..of loop, and you’ll see it works basically identically:

for (var v of something()) {
    console.log( v );

    // don't let the loop run forever!
    if (v > 500) {
        break;
    }
}
// 1 9 33 105 321 969

But don’t skip over for (var v of something()) ..! We didn’t just reference something as a value like in earlier examples, but instead called the *something() generator to get its iterator for the for..of loop to use.

If you’re paying close attention, two questions may arise from this interaction between the generator and the loop:

  • Why couldn’t we say for (var v of something) ..? Because something here is a generator, which is not an iterable. We have to call something() to construct a producer for the for..of loop to iterate over.
  • The something() call produces an iterator, but the for..of loop wants an iterable, right? Yep. The generator’s iterator also has a Symbol.iterator function on it, which basically does a return this, just like the something iterable we defined earlier. In other words, a generator’s iterator is also an iterable!
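You can verify that second point directly. Using the *something() generator (repeated here so the snippet stands alone), the iterator’s Symbol.iterator function just hands back the iterator itself:

```javascript
function *something() {
    var nextVal;

    while (true) {
        if (nextVal === undefined) {
            nextVal = 1;
        }
        else {
            nextVal = (3 * nextVal) + 6;
        }

        yield nextVal;
    }
}

var it = something();

// the iterator's own `Symbol.iterator` function returns itself,
// which is what lets `for..of` consume it directly
var self = it[Symbol.iterator]();
// self === it
```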

15.2.3.1 Stopping the Generator

In the previous example, it would appear the iterator instance for the *something() generator was basically left in a suspended state forever after the break in the loop was called.

But there’s a hidden behavior that takes care of that for you. “Abnormal completion” (i.e., “early termination”) of the for..of loop – generally caused by a break, return, or an uncaught exception – sends a signal to the generator’s iterator for it to terminate.

Note: Technically, the for..of loop also sends this signal to the iterator at the normal completion of the loop. For a generator, that’s essentially a moot operation, as the generator’s iterator had to complete first so the for..of loop completed. However, custom iterators might desire to receive this additional signal from for..of loop consumers.

While a for..of loop will automatically send this signal, you may wish to send the signal manually to an iterator; you do this by calling return(..).

If you specify a try..finally clause inside the generator, it will always be run even when the generator is externally completed. This is useful if you need to clean up resources (database connections, etc.):

function *something() {
    try {
        var nextVal;

        while (true) {
            if (nextVal === undefined) {
                nextVal = 1;
            }
            else {
                nextVal = (3 * nextVal) + 6;
            }

            yield nextVal;
        }
    }
    // cleanup clause
    finally {
        console.log( "cleaning up!" );
    }
}

The earlier example with break in the for..of loop will trigger the finally clause. But you could instead manually terminate the generator’s iterator instance from the outside with return(..):

var it = something();
for (var v of it) {
    console.log( v );

    // don't let the loop run forever!
    if (v > 500) {
        console.log(
            // complete the generator's iterator
            it.return( "Hello World" ).value
        );
        // no `break` needed here
    }
}
// 1 9 33 105 321 969
// cleaning up!
// Hello World

When we call it.return(..), it immediately terminates the generator, which of course runs the finally clause. Also, it sets the returned value to whatever you passed in to return(..), which is how "Hello World" comes right back out. We also don’t need to include a break now because the generator’s iterator is set to done:true, so the for..of loop will terminate on its next iteration.
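To isolate that done:true detail, here’s a small sketch; the *vals() generator and the cleanedUp flag are made up for illustration:

```javascript
var cleanedUp = false;

function *vals() {
    try {
        yield 1;
        yield 2;
    }
    finally {
        // runs even on early termination via `return(..)`
        cleanedUp = true;
    }
}

var it = vals();

var a = it.next();              // { value: 1, done: false }
var b = it.return( "bye" );     // { value: "bye", done: true }, runs `finally`
var c = it.next();              // { value: undefined, done: true }
```

Once return(..) completes the iterator, every subsequent next() call keeps reporting done:true, which is exactly why the for..of loop above terminates without a break.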

Generators owe their namesake mostly to this use of consuming produced values. But again, that’s just one of the uses for generators, and frankly not even the main one we’re concerned with in the context of this book.

But now that we more fully understand some of the mechanics of how they work, we can next turn our attention to how generators apply to async concurrency.

15.3 Iterating Generators Asynchronously

What do generators have to do with async coding patterns, fixing problems with callbacks, and the like? Let’s get to answering that important question.

We should revisit one of our scenarios from Chapter 3. Let’s recall the callback approach:

function foo(x,y,cb) {
    ajax(
        "http://some.url.1/?x=" + x + "&y=" + y,
        cb
    );
}

foo( 11, 31, function(err,text) {
    if (err) {
        console.error( err );
    }
    else {
        console.log( text );
    }
} );

If we wanted to express this same task flow control with a generator, we could do:

function foo(x,y) {
    ajax(
        "http://some.url.1/?x=" + x + "&y=" + y,
        function(err,data){
            if (err) {
                // throw an error into `*main()`
                it.throw( err );
            }
            else {
                // resume `*main()` with received `data`
                it.next( data );
            }
        }
    );
}

function *main() {
    try {
        var text = yield foo( 11, 31 );
        console.log( text );
    }
    catch (err) {
        console.error( err );
    }
}

var it = main();

// start it all up!
it.next();

At first glance, this snippet is longer, and perhaps a little more complex looking, than the callback snippet before it. But don’t let that impression get you off track. The generator snippet is actually much better! But there’s a lot going on for us to explain.

First, let’s look at this part of the code, which is the most important:

var text = yield foo( 11, 31 );
console.log( text );

Think about how that code works for a moment. We’re calling a normal function foo(..) and we’re apparently able to get back the text from the Ajax call, even though it’s asynchronous.

How is that possible? If you recall the beginning of Chapter 1, we had almost identical code:

var data = ajax( "..url 1.." );
console.log( data );

And that code didn’t work! Can you spot the difference? It’s the yield used in a generator.

That’s the magic! That’s what allows us to have what appears to be blocking, synchronous code, but it doesn’t actually block the whole program; it only pauses/blocks the code in the generator itself.

In yield foo(11,31), first the foo(11,31) call is made, which returns nothing (aka undefined), so we’re making a call to request data, but we’re actually then doing yield undefined. That’s OK, because the code is not currently relying on a yielded value to do anything interesting. We’ll revisit this point later in the chapter.

We’re not using yield in a message passing sense here, only in a flow control sense to pause/block. Actually, it will have message passing, but only in one direction, after the generator is resumed.

So, the generator pauses at the yield, essentially asking the question, “what value should I return to assign to the variable text?” Who’s going to answer that question?

Look at foo(..). If the Ajax request is successful, we call:

it.next( data );

That’s resuming the generator with the response data, which means that our paused yield expression receives that value directly, and then as it restarts the generator code, that value gets assigned to the local variable text.
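This resume-with-a-value mechanic works the same even without any asynchrony involved. Stripped to the bare minimum (a made-up generator, not the Ajax example):

```javascript
function *ask() {
    // pause here; whatever is passed to the *next* `next(..)` call
    // becomes the completion value of this whole `yield` expression
    var answer = yield "what value?";
    return answer * 2;
}

var it = ask();

var question = it.next().value;     // "what value?"
var result = it.next( 21 ).value;   // 42
```

The only difference in the Ajax version is that the it.next( data ) call happens later, from inside an async callback, rather than on the very next line.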

Pretty cool, huh?

Take a step back and consider the implications. We have totally synchronous-looking code inside the generator (other than the yield keyword itself), but hidden behind the scenes, inside of foo(..), the operations can complete asynchronously.

That’s huge! That’s a nearly perfect solution to our previously stated problem with callbacks not being able to express asynchrony in a sequential, synchronous fashion that our brains can relate to.

In essence, we are abstracting the asynchrony away as an implementation detail, so that we can reason synchronously/sequentially about our flow control: “Make an Ajax request, and when it finishes print out the response.” And of course, we just expressed two steps in the flow control, but this same capability extends without bounds, to let us express however many steps we need to.

Tip: This is such an important realization, just go back and read the last three paragraphs again to let it sink in!

15.3.1 Synchronous Error Handling

But the preceding generator code has even more goodness to yield to us. Let’s turn our attention to the try..catch inside the generator:

try {
    var text = yield foo( 11, 31 );
    console.log( text );
}
catch (err) {
    console.error( err );
}

How does this work? The foo(..) call is asynchronously completing, and doesn’t try..catch fail to catch asynchronous errors, as we looked at in Chapter 3?

We already saw how the yield lets the assignment statement pause to wait for foo(..) to finish, so that the completed response can be assigned to text. The awesome part is that this yield pausing also allows the generator to catch an error. We throw that error into the generator with this part of the earlier code listing:

if (err) {
    // throw an error into `*main()`
    it.throw( err );
}

The yield-pause nature of generators means that not only do we get synchronous-looking return values from async function calls, but we can also synchronously catch errors from those async function calls!

So we’ve seen we can throw errors into a generator, but what about throwing errors out of a generator? Exactly as you’d expect:

function *main() {
    var x = yield "Hello World";

    yield x.toLowerCase();  // cause an exception!
}

var it = main();

it.next().value;            // Hello World

try {
    it.next( 42 );
}
catch (err) {
    console.error( err );   // TypeError
}

Of course, we could have manually thrown an error with throw .. instead of causing an exception.
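That manual-throw variant might look like this sketch; the error message is made up for illustration:

```javascript
function *main() {
    var x = yield "Hello World";

    if (typeof x != "string") {
        // manually thrown, instead of triggering a runtime exception
        throw new Error( "expected a string!" );
    }

    yield x.toLowerCase();
}

var it = main();

it.next().value;            // "Hello World"

var caught;
try {
    it.next( 42 );
}
catch (err) {
    caught = err.message;   // "expected a string!"
}
```

Either way, the error propagates out of the it.next( 42 ) call, right where the iterator code can try..catch it.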

We can even catch the same error that we throw(..) into the generator, essentially giving the generator a chance to handle it but if it doesn’t, the iterator code must handle it:

function *main() {
    var x = yield "Hello World";

    // never gets here
    console.log( x );
}

var it = main();

it.next();

try {
    // will `*main()` handle this error? we'll see!
    it.throw( "Oops" );
}
catch (err) {
    // nope, didn't handle it!
    console.error( err );           // Oops
}

Synchronous-looking error handling (via try..catch) with async code is a huge win for readability and reason-ability.

15.4 Generators + Promises

In our previous discussion, we showed how generators can be iterated asynchronously, which is a huge step forward in sequential reason-ability over the spaghetti mess of callbacks. But we lost something very important: the trustability and composability of Promises (see Chapter 3)!

Don’t worry – we can get that back. The best of all worlds in ES6 is to combine generators (synchronous-looking async code) with Promises (trustable and composable).

But how?

Recall from Chapter 3 the Promise-based approach to our running Ajax example:

function foo(x,y) {
    return request(
        "http://some.url.1/?x=" + x + "&y=" + y
    );
}

foo( 11, 31 )
.then(
    function(text){
        console.log( text );
    },
    function(err){
        console.error( err );
    }
);

In our earlier generator code for the running Ajax example, foo(..) returned nothing (undefined), and our iterator control code didn’t care about that yielded value.

But here the Promise-aware foo(..) returns a promise after making the Ajax call. That suggests that we could construct a promise with foo(..) and then yield it from the generator, and then the iterator control code would receive that promise.

But what should the iterator do with the promise?

It should listen for the promise to resolve (fulfillment or rejection), and then either resume the generator with the fulfillment message or throw an error into the generator with the rejection reason.

Let me repeat that, because it’s so important. The natural way to get the most out of Promises and generators is to yield a Promise, and wire that Promise to control the generator’s iterator.

Let’s give it a try! First, we’ll put the Promise-aware foo(..) together with the generator *main():

function foo(x,y) {
    return request(
        "http://some.url.1/?x=" + x + "&y=" + y
    );
}

function *main() {
    try {
        var text = yield foo( 11, 31 );
        console.log( text );
    }
    catch (err) {
        console.error( err );
    }
}

The most powerful revelation in this refactor is that the code inside *main() did not have to change at all! Inside the generator, whatever values are yielded out is just an opaque implementation detail, so we’re not even aware it’s happening, nor do we need to worry about it.

But how are we going to run *main() now? We still have some of the implementation plumbing work to do, to receive and wire up the yielded promise so that it resumes the generator upon resolution. We’ll start by trying that manually:

var it = main();

var p = it.next().value;

// wait for the `p` promise to resolve
p.then(
    function(text){
        it.next( text );
    },
    function(err){
        it.throw( err );
    }
);

Actually, that wasn’t so painful at all, was it?

This snippet should look very similar to what we did earlier with the manually wired generator controlled by the error-first callback. Instead of an if (err) { it.throw.., the promise already splits fulfillment (success) and rejection (failure) for us, but otherwise the iterator control is identical.

Now, we’ve glossed over some important details.

Most importantly, we took advantage of the fact that we knew that *main() only had one Promise-aware step in it. What if we wanted to be able to Promise-drive a generator no matter how many steps it has? We certainly don’t want to manually write out the Promise chain differently for each generator! What would be much nicer is if there was a way to repeat (aka “loop” over) the iteration control, and each time a Promise comes out, wait on its resolution before continuing.

Also, what if the generator throws out an error (intentionally or accidentally) during the it.next(..) call? Should we quit, or should we catch it and send it right back in? Similarly, what if we it.throw(..) a Promise rejection into the generator, but it’s not handled, and comes right back out?

15.4.1 Promise-Aware Generator Runner

The more you start to explore this path, the more you realize, “wow, it’d be great if there was just some utility to do it for me.” And you’re absolutely correct. This is such an important pattern, and you don’t want to get it wrong (or exhaust yourself repeating it over and over), so your best bet is to use a utility that is specifically designed to run Promise-yielding generators in the manner we’ve illustrated.

Several Promise abstraction libraries provide just such a utility, including my asynquence library and its runner(..), which will be discussed in Appendix A of this book.

But for the sake of learning and illustration, let’s just define our own standalone utility that we’ll call run(..):

// thanks to Benjamin Gruenbaum (@benjamingr on GitHub) for
// big improvements here!
function run(gen) {
    var args = [].slice.call( arguments, 1 ), it;

    // initialize the generator in the current context
    it = gen.apply( this, args );

    // return a promise for the generator completing
    return Promise.resolve()
        .then( function handleNext(value){
            // run to the next yielded value
            var next = it.next( value );

            return (function handleResult(next){
                // generator has completed running?
                if (next.done) {
                    return next.value;
                }
                // otherwise keep going
                else {
                    return Promise.resolve( next.value )
                        .then(
                            // resume the async loop on
                            // success, sending the resolved
                            // value back into the generator
                            handleNext,

                            // if `value` is a rejected
                            // promise, propagate error back
                            // into the generator for its own
                            // error handling
                            function handleErr(err) {
                                return Promise.resolve(
                                    it.throw( err )
                                )
                                .then( handleResult );
                            }
                        );
                }
            })(next);
        } );
}

As you can see, it’s quite a bit more complex than you’d probably want to author yourself, and you especially wouldn’t want to repeat this code for each generator you use. So, a utility/library helper is definitely the way to go. Nevertheless, I encourage you to spend a few minutes studying that code listing to get a better sense of how to manage the generator+Promise negotiation.

How would you use run(..) with *main() in our running Ajax example?

function *main() {
    // ..
}

run( main );

That’s it! The way we wired run(..), it will automatically advance the generator you pass to it, asynchronously until completion.

Note: The run(..) we defined returns a promise which is wired to resolve once the generator is complete, or receive an uncaught exception if the generator doesn’t handle it. We don’t show that capability here, but we’ll come back to it later in the chapter.

15.4.1.1 ES7: async and await?

The preceding pattern – generators yielding Promises that then control the generator’s iterator to advance it to completion – is such a powerful and useful approach, it would be nicer if we could do it without the clutter of the library utility helper (aka run(..)).

There’s probably good news on that front. At the time of this writing, there’s early but strong support for a proposal for a syntactic addition in this realm for the post-ES6, ES7-ish timeframe. Obviously, it’s too early to guarantee the details, but there’s a pretty decent chance it will shake out similar to the following:

function foo(x,y) {
    return request(
        "http://some.url.1/?x=" + x + "&y=" + y
    );
}

async function main() {
    try {
        var text = await foo( 11, 31 );
        console.log( text );
    }
    catch (err) {
        console.error( err );
    }
}

main();

As you can see, there’s no run(..) call (meaning no need for a library utility!) to invoke and drive main() – it’s just called as a normal function. Also, main() isn’t declared as a generator function anymore; it’s a new kind of function: async function. And finally, instead of yielding a Promise, we await for it to resolve.

The async function automatically knows what to do if you await a Promise – it will pause the function (just like with generators) until the Promise resolves. We didn’t illustrate it in this snippet, but calling an async function like main() automatically returns a promise that’s resolved whenever the function finishes completely.
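If the proposal shakes out as shown, that automatically returned promise could be consumed directly. A speculative sketch, where the request(..) stand-in is made up so the snippet is self-contained:

```javascript
// stand-in helper, made up so the snippet is self-contained
function request(url) {
    return Promise.resolve( "response from " + url );
}

async function main() {
    var text = await request( "http://some.url.1" );
    return text.toUpperCase();
}

// calling an `async function` returns a promise for its completion
var p = main();

p.then( function(result){
    console.log( result );  // the upper-cased response text
} );
```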

Tip: The async / await syntax should look very familiar to readers with experience in C#, because it’s basically identical.

The proposal essentially codifies support for the pattern we’ve already derived, into a syntactic mechanism: combining Promises with sync-looking flow control code. That’s the best of both worlds combined, to effectively address practically all of the major concerns we outlined with callbacks.

The mere fact that such an ES7-ish proposal already exists and has early support and enthusiasm is a major vote of confidence in the future importance of this async pattern.

15.4.2 Promise Concurrency in Generators

So far, all we’ve demonstrated is a single-step async flow with Promises+generators. But real-world code will often have many async steps.

If you’re not careful, the sync-looking style of generators may lull you into complacency with how you structure your async concurrency, leading to suboptimal performance patterns. So we want to spend a little time exploring the options.

Imagine a scenario where you need to fetch data from two different sources, then combine those responses to make a third request, and finally print out the last response. We explored a similar scenario with Promises in Chapter 3, but let’s reconsider it in the context of generators.

Your first instinct might be something like:

function *foo() {
    var r1 = yield request( "http://some.url.1" );
    var r2 = yield request( "http://some.url.2" );

    var r3 = yield request(
        "http://some.url.3/?v=" + r1 + "," + r2
    );

    console.log( r3 );
}

// use previously defined `run(..)` utility
run( foo );

This code will work, but in the specifics of our scenario, it’s not optimal. Can you spot why?

Because the r1 and r2 requests can – and for performance reasons, should – run concurrently, but in this code they will run sequentially; the "http://some.url.2" URL isn’t Ajax fetched until after the "http://some.url.1" request is finished. These two requests are independent, so the better performance approach would likely be to have them run at the same time.

But how exactly would you do that with a generator and yield? We know that yield is only a single pause point in the code, so you can’t really do two pauses at the same time.

The most natural and effective answer is to base the async flow on Promises, specifically on their capability to manage state in a time-independent fashion (see “Future Value” in Chapter 3).

The simplest approach:

function *foo() {
    // make both requests "in parallel"
    var p1 = request( "http://some.url.1" );
    var p2 = request( "http://some.url.2" );

    // wait until both promises resolve
    var r1 = yield p1;
    var r2 = yield p2;

    var r3 = yield request(
        "http://some.url.3/?v=" + r1 + "," + r2
    );

    console.log( r3 );
}

// use previously defined `run(..)` utility
run( foo );

Why is this different from the previous snippet? Look at where the yield is and is not. p1 and p2 are promises for Ajax requests made concurrently (aka “in parallel”). It doesn’t matter which one finishes first, because promises will hold onto their resolved state for as long as necessary.

Then we use two subsequent yield statements to wait for and retrieve the resolutions from the promises (into r1 and r2, respectively). If p1 resolves first, the yield p1 resumes first then waits on the yield p2 to resume. If p2 resolves first, it will just patiently hold onto that resolution value until asked, but the yield p1 will hold on first, until p1 resolves.

Either way, both p1 and p2 will run concurrently, and both have to finish, in either order, before the r3 = yield request.. Ajax request will be made.

If that flow control processing model sounds familiar, it’s basically the same as what we identified in Chapter 3 as the “gate” pattern, enabled by the Promise.all([ .. ]) utility. So, we could also express the flow control like this:

function *foo() {
    // make both requests "in parallel," and
    // wait until both promises resolve
    var results = yield Promise.all( [
        request( "http://some.url.1" ),
        request( "http://some.url.2" )
    ] );

    var r1 = results[0];
    var r2 = results[1];

    var r3 = yield request(
        "http://some.url.3/?v=" + r1 + "," + r2
    );

    console.log( r3 );
}

// use previously defined `run(..)` utility
run( foo );

Note: As we discussed in Chapter 3, we can even use ES6 destructuring assignment to simplify the var r1 = .. var r2 = .. assignments, with var [r1,r2] = results.
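Here’s that destructured form as a self-contained sketch; the request(..) stand-in and the simplified one-shot driver (which, unlike the chapter’s run(..), does no error routing) are made up for illustration:

```javascript
// stand-in for the promisified Ajax helper, so this sketch runs standalone
function request(url) {
    return Promise.resolve( "response from " + url );
}

function *foo() {
    // ES6 array destructuring in place of the separate
    // `var r1 = results[0]; var r2 = results[1];` assignments
    var [r1,r2] = yield Promise.all( [
        request( "http://some.url.1" ),
        request( "http://some.url.2" )
    ] );

    return yield request(
        "http://some.url.3/?v=" + r1 + "," + r2
    );
}

// simplified driver (no error routing, unlike the full `run(..)`)
function run(gen) {
    var it = gen();
    return (function step(input){
        var next = it.next( input );
        return next.done ?
            Promise.resolve( next.value ) :
            Promise.resolve( next.value ).then( step );
    })();
}

run( foo ).then( function(r3){
    console.log( r3 );
} );
```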

In other words, all of the concurrency capabilities of Promises are available to us in the generator+Promise approach. So in any place where you need more than sequential this-then-that async flow control steps, Promises are likely your best bet.

15.4.2.1 Promises, Hidden

As a word of stylistic caution, be careful about how much Promise logic you include inside your generators. The whole point of using generators for asynchrony in the way we’ve described is to create simple, sequential, sync-looking code, and to hide as much of the details of asynchrony away from that code as possible.

For example, this might be a cleaner approach:

// note: normal function, not generator
function bar(url1,url2) {
    return Promise.all( [
        request( url1 ),
        request( url2 )
    ] );
}

function *foo() {
    // hide the Promise-based concurrency details
    // inside `bar(..)`
    var results = yield bar(
        "http://some.url.1",
        "http://some.url.2"
    );

    var r1 = results[0];
    var r2 = results[1];

    var r3 = yield request(
        "http://some.url.3/?v=" + r1 + "," + r2
    );

    console.log( r3 );
}

// use previously defined `run(..)` utility
run( foo );

Inside *foo(), it’s cleaner and clearer that all we’re doing is just asking bar(..) to get us some results, and we’ll yield-wait on that to happen. We don’t have to care that under the covers a Promise.all([ .. ]) Promise composition will be used to make that happen.

We treat asynchrony, and indeed Promises, as an implementation detail.

Hiding your Promise logic inside a function that you merely call from your generator is especially useful if you’re going to do a sophisticated series flow-control. For example:

function bar() {
    return Promise.all( [
        baz( .. )
        .then( .. ),
        Promise.race( [ .. ] )
    ] )
    .then( .. )
}

That kind of logic is sometimes required, and if you dump it directly inside your generator(s), you’ve defeated most of the reason why you would want to use generators in the first place. We should intentionally abstract such details away from our generator code so that they don’t clutter up the higher level task expression.
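Filled in with concrete pieces, such a helper might look like the following sketch; all names, URLs, and data here are made up for illustration:

```javascript
// illustrative stand-ins for async helpers
function request(url) {
    return Promise.resolve( "response from " + url );
}
function baz() {
    return Promise.resolve( "baz data" );
}

function bar() {
    // the composed Promise logic lives here, out of any generator
    return Promise.all( [
        baz()
        .then( function(v){ return v.toUpperCase(); } ),
        Promise.race( [
            request( "http://some.url.1" ),
            request( "http://some.url.2" )
        ] )
    ] )
    .then( function(results){
        return results.join( " | " );
    } );
}

// a generator could now simply `yield bar()` and stay uncluttered
bar().then( function(summary){
    console.log( summary );
} );
```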

Beyond creating code that is both functional and performant, you should also strive to make code that is as reason-able and maintainable as possible.

Note: Abstraction is not always a healthy thing for programming – many times it can increase complexity in exchange for terseness. But in this case, I believe it’s much healthier for your generator+Promise async code than the alternatives. As with all such advice, though, pay attention to your specific situations and make proper decisions for you and your team.

15.5 Generator Delegation

In the previous section, we showed calling regular functions from inside a generator, and how that remains a useful technique for abstracting away implementation details (like async Promise flow). But the main drawback of using a normal function for this task is that it has to behave by the normal function rules, which means it cannot pause itself with yield like a generator can.

It may then occur to you that you might try to call one generator from another generator, using our run(..) helper, such as:

function *foo() {
    var r2 = yield request( "http://some.url.2" );
    var r3 = yield request( "http://some.url.3/?v=" + r2 );

    return r3;
}

function *bar() {
    var r1 = yield request( "http://some.url.1" );

    // "delegating" to `*foo()` via `run(..)`
    var r3 = yield run( foo );

    console.log( r3 );
}

run( bar );

We run *foo() inside of *bar() by using our run(..) utility again. We take advantage here of the fact that the run(..) we defined earlier returns a promise which is resolved when its generator is run to completion (or errors out), so if we yield out to a run(..) instance the promise from another run(..) call, it automatically pauses *bar() until *foo() finishes.

But there’s an even better way to integrate calling *foo() into *bar(), and it’s called yield-delegation. The special syntax for yield-delegation is: yield * __ (notice the extra *). Before we see it work in our previous example, let’s look at a simpler scenario:

function *foo() {
    console.log( "`*foo()` starting" );
    yield 3;
    yield 4;
    console.log( "`*foo()` finished" );
}

function *bar() {
    yield 1;
    yield 2;
    yield *foo();   // `yield`-delegation!
    yield 5;
}

var it = bar();

it.next().value;    // 1
it.next().value;    // 2
it.next().value;    // `*foo()` starting
                    // 3
it.next().value;    // 4
it.next().value;    // `*foo()` finished
                    // 5

Note: Similar to a note earlier in the chapter where I explained why I prefer function *foo() .. instead of function* foo() .., I also prefer – differing from most other documentation on the topic – to say yield *foo() instead of yield* foo(). The placement of the * is purely stylistic and up to your best judgment. But I find the consistency of styling attractive.

How does the yield *foo() delegation work?

First, calling foo() creates an iterator exactly as we’ve already seen. Then, yield * delegates/transfers the iterator instance control (of the present *bar() generator) over to this other *foo() iterator.

So, the first two it.next() calls are controlling *bar(), but when we make the third it.next() call, now *foo() starts up, and now we’re controlling *foo() instead of *bar(). That’s why it’s called delegation – *bar() delegated its iteration control to *foo().

As soon as the it iterator control exhausts the entire *foo() iterator, it automatically returns to controlling *bar().

So now back to the previous example with the three sequential Ajax requests:

function *foo() {
    var r2 = yield request( "http://some.url.2" );
    var r3 = yield request( "http://some.url.3/?v=" + r2 );

    return r3;
}

function *bar() {
    var r1 = yield request( "http://some.url.1" );

    // "delegating" to `*foo()` via `yield*`
    var r3 = yield *foo();

    console.log( r3 );
}

run( bar );

The only difference between this snippet and the version used earlier is the use of yield *foo() instead of the previous yield run(foo).

Note: yield * yields iteration control, not generator control; when you invoke the *foo() generator, you’re now yield-delegating to its iterator. But you can actually yield-delegate to any iterable; yield *[1,2,3] would consume the default iterator for the [1,2,3] array value.

15.5.1 Why Delegation?

The purpose of yield-delegation is mostly code organization, and in that way is symmetrical with normal function calling.

Imagine two modules that respectively provide methods foo() and bar(), where bar() calls foo(). The reason the two are separate is generally because the proper organization of code for the program calls for them to be in separate functions. For example, there may be cases where foo() is called standalone, and other places where bar() calls foo().

For all these exact same reasons, keeping generators separate aids in program readability, maintenance, and debuggability. In that respect, yield * is a syntactic shortcut for manually iterating over the steps of *foo() while inside of *bar().

Such a manual approach would be especially complex if the steps in *foo() were asynchronous, which is why you’d probably need to use that run(..) utility to do it. And as we’ve shown, yield *foo() eliminates the need for a sub-instance of the run(..) utility (like run(foo)).
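To illustrate that shortcut, here’s a hypothetical sketch of *bar() driving *foo()’s iterator by hand instead of using yield * – it ignores some edge cases (forwarding throw(..) and early return(..)) that real yield-delegation also handles:

```javascript
function *foo() {
    yield 3;
    yield 4;
}

function *bar() {
    yield 1;
    yield 2;

    // rough manual equivalent of `yield *foo()`
    var it = foo();
    var res = it.next();
    while (!res.done) {
        // forward inner values out, and messages back in
        var msg = yield res.value;
        res = it.next( msg );
    }
    // `res.value` here would be `*foo()`'s return value

    yield 5;
}

console.log( [ ...bar() ] );    // [1,2,3,4,5]
```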

15.5.2 Delegating Messages

You may wonder how this yield-delegation works not just with iterator control but with the two-way message passing. Carefully follow the flow of messages in and out, through the yield-delegation:

function *foo() {
    console.log( "inside `*foo()`:", yield "B" );

    console.log( "inside `*foo()`:", yield "C" );

    return "D";
}

function *bar() {
    console.log( "inside `*bar()`:", yield "A" );

    // `yield`-delegation!
    console.log( "inside `*bar()`:", yield *foo() );

    console.log( "inside `*bar()`:", yield "E" );

    return "F";
}

var it = bar();

console.log( "outside:", it.next().value );
// outside: A

console.log( "outside:", it.next( 1 ).value );
// inside `*bar()`: 1
// outside: B

console.log( "outside:", it.next( 2 ).value );
// inside `*foo()`: 2
// outside: C

console.log( "outside:", it.next( 3 ).value );
// inside `*foo()`: 3
// inside `*bar()`: D
// outside: E

console.log( "outside:", it.next( 4 ).value );
// inside `*bar()`: 4
// outside: F

Pay particular attention to the processing steps after the it.next(3) call:

  1. The 3 value is passed (through the yield-delegation in *bar()) into the waiting yield "C" expression inside of *foo().
  2. *foo() then calls return "D", but this value doesn’t get returned all the way back to the outside it.next(3) call.
  3. Instead, the "D" value is sent as the result of the waiting yield *foo() expression inside of *bar() – this yield-delegation expression has essentially been paused while all of *foo() was exhausted. So "D" ends up inside of *bar() for it to print out.
  4. yield "E" is called inside of *bar(), and the "E" value is yielded to the outside as the result of the it.next(3) call.

From the perspective of the external iterator (it), controlling the initial generator looks no different from controlling a delegated one.

In fact, yield-delegation doesn’t even have to be directed to another generator; it can just be directed to a non-generator, general iterable. For example:

function *bar() {
    console.log( "inside `*bar()`:", yield "A" );

    // `yield`-delegation to a non-generator!
    console.log( "inside `*bar()`:", yield *[ "B", "C", "D" ] );

    console.log( "inside `*bar()`:", yield "E" );

    return "F";
}

var it = bar();

console.log( "outside:", it.next().value );
// outside: A

console.log( "outside:", it.next( 1 ).value );
// inside `*bar()`: 1
// outside: B

console.log( "outside:", it.next( 2 ).value );
// outside: C

console.log( "outside:", it.next( 3 ).value );
// outside: D

console.log( "outside:", it.next( 4 ).value );
// inside `*bar()`: undefined
// outside: E

console.log( "outside:", it.next( 5 ).value );
// inside `*bar()`: 5
// outside: F

Notice the differences in where the messages were received/reported between this example and the one previous.

Most strikingly, the default array iterator doesn’t care about any messages sent in via next(..) calls, so the values 2, 3, and 4 are essentially ignored. Also, because that iterator has no explicit return value (unlike the previously used *foo()), the yield * expression gets an undefined when it finishes.
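By contrast, a hand-rolled iterator – this one is hypothetical, not from the text – can attach a value to its done:true result, and yield * will use that as the completion value of the expression:

```javascript
function lettersWithResult() {
    var vals = [ "B", "C", "D" ], i = 0;
    return {
        // an iterable that returns itself as its own iterator
        [Symbol.iterator]: function(){ return this; },
        next: function(){
            if (i < vals.length) {
                return { value: vals[i++], done: false };
            }
            // explicit completion value, unlike the array iterator
            return { value: "done!", done: true };
        }
    };
}

function *bar() {
    var r = yield *lettersWithResult();
    console.log( "completion value:", r );  // completion value: done!
}

var it = bar();

it.next().value;    // "B"
it.next().value;    // "C"
it.next().value;    // "D"
it.next();          // completion value: done!
```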

15.5.2.1 Exceptions Delegated, Too!

In the same way that yield-delegation transparently passes messages through in both directions, errors/exceptions also pass in both directions:

function *foo() {
    try {
        yield "B";
    }
    catch (err) {
        console.log( "error caught inside `*foo()`:", err );
    }

    yield "C";

    throw "D";
}

function *bar() {
    yield "A";

    try {
        yield *foo();
    }
    catch (err) {
        console.log( "error caught inside `*bar()`:", err );
    }

    yield "E";

    yield *baz();

    // note: can't get here!
    yield "G";
}

function *baz() {
    throw "F";
}

var it = bar();

console.log( "outside:", it.next().value );
// outside: A

console.log( "outside:", it.next( 1 ).value );
// outside: B

console.log( "outside:", it.throw( 2 ).value );
// error caught inside `*foo()`: 2
// outside: C

console.log( "outside:", it.next( 3 ).value );
// error caught inside `*bar()`: D
// outside: E

try {
    console.log( "outside:", it.next( 4 ).value );
}
catch (err) {
    console.log( "error caught outside:", err );
}
// error caught outside: F

Some things to note from this snippet:

  1. When we call it.throw(2), it sends the error message 2 into *bar(), which delegates that to *foo(), which then catches it and handles it gracefully. Then, the yield "C" sends "C" back out as the return value from the it.throw(2) call.
  2. The "D" value that’s next thrown from inside *foo() propagates out to *bar(), which catches it and handles it gracefully. Then the yield "E" sends "E" back out as the return value from the it.next(3) call.
  3. Next, the exception thrown from *baz() isn’t caught in *bar() – though we did catch it outside – so both *baz() and *bar() are set to a completed state. After this snippet, you would not be able to get the "G" value out with any subsequent next(..) call(s) – they will just return undefined for value.
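That completed-state behavior from point 3 can be seen in isolation with a stripped-down (hypothetical) generator:

```javascript
function *oops() {
    yield "E";
    throw "F";
}

var it = oops();

it.next();              // { value: "E", done: false }

try {
    it.next();          // `*oops()` throws "F" out to us
}
catch (err) {
    console.log( err ); // F
}

// `*oops()` is now in a completed state:
it.next();              // { value: undefined, done: true }
```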

15.5.3 Delegating Asynchrony

Let’s finally get back to our earlier yield-delegation example with the multiple sequential Ajax requests:

function *foo() {
    var r2 = yield request( "http://some.url.2" );
    var r3 = yield request( "http://some.url.3/?v=" + r2 );

    return r3;
}

function *bar() {
    var r1 = yield request( "http://some.url.1" );

    var r3 = yield *foo();

    console.log( r3 );
}

run( bar );

Instead of calling yield run(foo) inside of *bar(), we just call yield *foo().

In the previous version of this example, the Promise mechanism (controlled by run(..)) was used to transport the value from return r3 in *foo() to the local variable r3 inside *bar(). Now, that value is just returned back directly via the yield * mechanics.

Otherwise, the behavior is pretty much identical.

15.5.4 Delegating “Recursion”

Of course, yield-delegation can keep following as many delegation steps as you wire up. You could even use yield-delegation for async-capable generator “recursion” – a generator yield-delegating to itself:

function *foo(val) {
    if (val > 1) {
        // generator recursion
        val = yield *foo( val - 1 );
    }

    return yield request( "http://some.url/?v=" + val );
}

function *bar() {
    var r1 = yield *foo( 3 );
    console.log( r1 );
}

run( bar );

Note: Our run(..) utility could have been called with run( foo, 3 ), because it supports additional parameters being passed along to the initialization of the generator. However, we used a parameter-free *bar() here to highlight the flexibility of yield *.

What processing steps follow from that code? Hang on, this is going to be quite intricate to describe in detail:

  1. run(bar) starts up the *bar() generator.
  2. foo(3) creates an iterator for *foo(..) and passes 3 as its val parameter.
  3. Because 3 > 1, foo(2) creates another iterator and passes in 2 as its val parameter.
  4. Because 2 > 1, foo(1) creates yet another iterator and passes in 1 as its val parameter.
  5. 1 > 1 is false, so we next call request(..) with the 1 value, and get a promise back for that first Ajax call.
  6. That promise is yielded out, which comes back to the *foo(2) generator instance.
  7. The yield * passes that promise back out to the *foo(3) generator instance. Another yield * passes the promise out to the *bar() generator instance. And yet again another yield * passes the promise out to the run(..) utility, which will wait on that promise (for the first Ajax request) to proceed.
  8. When the promise resolves, its fulfillment message is sent to resume *bar(), which passes through the yield * into the *foo(3) instance, which then passes through the yield * to the *foo(2) generator instance, which then passes through the yield * to the normal yield that’s waiting in the *foo(1) generator instance.
  9. That first call’s Ajax response is now immediately returned from the *foo(1) generator instance, which sends that value back as the result of the yield * expression in the *foo(2) instance, where it’s assigned to its local val variable.
  10. Inside *foo(2), a second Ajax request is made with request(..), whose promise is yielded back to the *foo(3) instance, and then yield * propagates it all the way out to run(..) (step 7 again). When the promise resolves, the second Ajax response propagates all the way back into the *foo(2) generator instance, which returns it, so the yield * expression in *foo(3) assigns it to its local val variable.
  11. Finally, the third Ajax request is made with request(..), its promise goes out to run(..), and then its resolution value comes all the way back, which is then returned so that it comes back to the waiting yield * expression in *bar().

Phew! A lot of crazy mental juggling, huh? You might want to read through that a few more times, and then go grab a snack to clear your head!
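If the async plumbing obscures the recursion itself, a synchronous analog may help – each level delegates to itself with a smaller value before yielding its own:

```javascript
function *countdown(val) {
    if (val > 1) {
        // generator recursion, no asynchrony involved
        yield *countdown( val - 1 );
    }
    yield val;
}

console.log( [ ...countdown( 3 ) ] );   // [1,2,3]
```

The innermost instance yields first, and each yield * unwinds back out one level at a time – the same shape as the promise hand-offs above, minus the waiting.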

15.6 Generator Concurrency

As we discussed in both Chapter 1 and earlier in this chapter, two simultaneously running “processes” can cooperatively interleave their operations, and many times this can yield (pun intended) very powerful asynchrony expressions.

Frankly, our earlier examples of concurrency interleaving of multiple generators showed how to make it really confusing. But we hinted that there are places where this capability is quite useful.

Recall a scenario we looked at in Chapter 1, where two different simultaneous Ajax response handlers needed to coordinate with each other to make sure that the data communication was not a race condition. We slotted the responses into the res array like this:

function response(data) {
    if (data.url == "http://some.url.1") {
        res[0] = data;
    }
    else if (data.url == "http://some.url.2") {
        res[1] = data;
    }
}

But how can we use multiple generators concurrently for this scenario?

// `request(..)` is a Promise-aware Ajax utility

var res = [];

function *reqData(url) {
    res.push(
        yield request( url )
    );
}

Note: We’re going to use two instances of the *reqData(..) generator here, but there’s no difference to running a single instance of two different generators; both approaches are reasoned about identically. We’ll see two different generators coordinating in just a bit.

Instead of having to manually sort out res[0] and res[1] assignments, we’ll use coordinated ordering so that res.push(..) properly slots the values in the expected and predictable order. The expressed logic thus should feel a bit cleaner.

But how will we actually orchestrate this interaction? First, let’s just do it manually, with Promises:

var it1 = reqData( "http://some.url.1" );
var it2 = reqData( "http://some.url.2" );

var p1 = it1.next().value;
var p2 = it2.next().value;

p1
.then( function(data){
    it1.next( data );
    return p2;
} )
.then( function(data){
    it2.next( data );
} );

*reqData(..)’s two instances are both started to make their Ajax requests, then paused with yield. Then we choose to resume the first instance when p1 resolves, and then p2’s resolution will restart the second instance. In this way, we use Promise orchestration to ensure that res[0] will have the first response and res[1] will have the second response.

But frankly, this is awfully manual, and it doesn’t really let the generators orchestrate themselves, which is where the true power can lie. Let’s try it a different way:

// `request(..)` is a Promise-aware Ajax utility

var res = [];

function *reqData(url) {
    var data = yield request( url );

    // transfer control
    yield;

    res.push( data );
}

var it1 = reqData( "http://some.url.1" );
var it2 = reqData( "http://some.url.2" );

var p1 = it1.next().value;
var p2 = it2.next().value;

p1.then( function(data){
    it1.next( data );
} );

p2.then( function(data){
    it2.next( data );
} );

Promise.all( [p1,p2] )
.then( function(){
    it1.next();
    it2.next();
} );

OK, this is a bit better (though still manual!), because now the two instances of *reqData(..) run truly concurrently, and (at least for the first part) independently.

In the previous snippet, the second instance was not given its data until after the first instance was totally finished. But here, both instances receive their data as soon as their respective responses come back, and then each instance does another yield for control transfer purposes. We then choose what order to resume them in the Promise.all([ .. ]) handler.

What may not be as obvious is that this approach hints at an easier form for a reusable utility, because of the symmetry. We can do even better. Let’s imagine using a utility called runAll(..):

// `request(..)` is a Promise-aware Ajax utility

var res = [];

runAll(
    function*(){
        var p1 = request( "http://some.url.1" );

        // transfer control
        yield;

        res.push( yield p1 );
    },
    function*(){
        var p2 = request( "http://some.url.2" );

        // transfer control
        yield;

        res.push( yield p2 );
    }
);

Note: We’re not including a code listing for runAll(..) as it is not only long enough to bog down the text, but is an extension of the logic we’ve already implemented in run(..) earlier. So, as a good supplementary exercise for the reader, try your hand at evolving the code from run(..) to work like the imagined runAll(..). Also, my asynquence library provides a previously mentioned runner(..) utility with this kind of capability already built in, and will be discussed in Appendix A of this book.

Here’s how the processing inside runAll(..) would operate:

  1. The first generator gets a promise for the first Ajax response from "http://some.url.1", then yields control back to the runAll(..) utility.
  2. The second generator runs and does the same for "http://some.url.2", yielding control back to the runAll(..) utility.
  3. The first generator resumes, and then yields out its promise p1. The runAll(..) utility does the same in this case as our previous run(..), in that it waits on that promise to resolve, then resumes the same generator (no control transfer!). When p1 resolves, runAll(..) resumes the first generator again with that resolution value, and then res[0] is given its value. When the first generator then finishes, that’s an implicit transfer of control.
  4. The second generator resumes, yields out its promise p2, and waits for it to resolve. Once it does, runAll(..) resumes the second generator with that value, and res[1] is set.

In this running example, we use an outer variable called res to store the results of the two different Ajax responses – that’s our concurrency coordination making that possible.

But it might be quite helpful to further extend runAll(..) to provide an inner variable space for the multiple generator instances to share, such as an empty object we’ll call data below. Also, it could take non-Promise values that are yielded and hand them off to the next generator.

Consider:

// `request(..)` is a Promise-aware Ajax utility

runAll(
    function*(data){
        data.res = [];

        // transfer control (and message pass)
        var url1 = yield "http://some.url.2";

        var p1 = request( url1 ); // "http://some.url.1"

        // transfer control
        yield;

        data.res.push( yield p1 );
    },
    function*(data){
        // transfer control (and message pass)
        var url2 = yield "http://some.url.1";

        var p2 = request( url2 ); // "http://some.url.2"

        // transfer control
        yield;

        data.res.push( yield p2 );
    }
);

In this formulation, the two generators are not just coordinating control transfer, but actually communicating with each other, both through data.res and the yielded messages that trade url1 and url2 values. That’s incredibly powerful!

Such realization also serves as a conceptual base for a more sophisticated asynchrony technique called CSP (Communicating Sequential Processes), which we will cover in Appendix B of this book.
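Since runAll(..) was left as an exercise, here’s one possible minimal sketch of it – my own illustration, not this book’s (or asynquence’s) implementation – supporting both behaviors just described: a yielded Promise pauses only that generator until resolution, while any other yielded value is queued as a message for the next generator and transfers control round-robin:

```javascript
function runAll() {
    var gens = [].slice.call( arguments );
    var data = {};      // shared inner variable space

    return new Promise( function(resolve,reject){
        var entries = gens.map( function(gen){
            return { it: gen( data ), started: false, done: false, inbox: [] };
        } );

        // find the next not-yet-finished generator, round-robin
        function nextAlive(i) {
            for (var k = 1; k <= entries.length; k++) {
                if (!entries[(i + k) % entries.length].done) {
                    return (i + k) % entries.length;
                }
            }
            return -1;
        }

        (function step(i){
            var entry = entries[i], res;

            try {
                if (!entry.started) {
                    entry.started = true;
                    res = entry.it.next();  // first `next()` ignores any message
                }
                else {
                    res = entry.it.next( entry.inbox.shift() );
                }
            }
            catch (err) { return reject( err ); }

            if (res.done) {
                entry.done = true;
                var n = nextAlive( i );
                return (n < 0) ? resolve( data ) : step( n );
            }

            if (res.value && typeof res.value.then == "function") {
                // Promise yielded: wait, then resume this same generator
                res.value.then( function(v){
                    entry.inbox.unshift( v );
                    step( i );
                }, reject );
            }
            else {
                // other value yielded: message-pass & transfer control
                var n2 = nextAlive( i );
                if (n2 < 0) return resolve( data );
                entries[n2].inbox.push( res.value );
                step( n2 );
            }
        })( 0 );
    } );
}

// hypothetical stand-in for a Promise-aware Ajax utility
function request(url) {
    return Promise.resolve( "resp:" + url );
}

runAll(
    function*(data){
        data.res = [];
        var url1 = yield "http://some.url.2";   // message pass + transfer
        var p1 = request( url1 );
        yield;                                  // transfer control
        data.res.push( yield p1 );
    },
    function*(data){
        var url2 = yield "http://some.url.1";   // message pass + transfer
        var p2 = request( url2 );
        yield;                                  // transfer control
        data.res.push( yield p2 );
    }
)
.then( function(data){
    console.log( data.res );
} );
```

Tracing it against the url-swapping example confirms that each generator’s yield receives the value most recently yielded by the other, and data.res ends up in the coordinated order.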

15.7 Thunks

So far, we’ve made the assumption that yielding a Promise from a generator – and having that Promise resume the generator via a helper utility like run(..) – was the best possible way to manage asynchrony with generators. To be clear, it is.

But we skipped over another pattern that has some mildly widespread adoption, so in the interest of completeness we’ll take a brief look at it.

In general computer science, there’s an old pre-JS concept called a “thunk.” Without getting bogged down in the historical nature, a narrow expression of a thunk in JS is a function that – without any parameters – is wired to call another function.

In other words, you wrap a function definition around a function call – with any parameters it needs – to defer the execution of that call, and that wrapping function is a thunk. When you later execute the thunk, you end up calling the original function.

For example:

function foo(x,y) {
    return x + y;
}

function fooThunk() {
    return foo( 3, 4 );
}

// later

console.log( fooThunk() );  // 7

So, a synchronous thunk is pretty straightforward. But what about an async thunk? We can essentially extend the narrow thunk definition to include it receiving a callback.

Consider:

function foo(x,y,cb) {
    setTimeout( function(){
        cb( x + y );
    }, 1000 );
}

function fooThunk(cb) {
    foo( 3, 4, cb );
}

// later

fooThunk( function(sum){
    console.log( sum );     // 7
} );

As you can see, fooThunk(..) only expects a cb(..) parameter, as it already has values 3 and 4 (for x and y, respectively) pre-specified and ready to pass to foo(..). A thunk is just waiting around patiently for the last piece it needs to do its job: the callback.

You don’t want to make thunks manually, though. So, let’s invent a utility that does this wrapping for us.

Consider:

function thunkify(fn) {
    var args = [].slice.call( arguments, 1 );
    return function(cb) {
        args.push( cb );
        return fn.apply( null, args );
    };
}

var fooThunk = thunkify( foo, 3, 4 );

// later

fooThunk( function(sum) {
    console.log( sum );     // 7
} );

Tip: Here we assume that the original (foo(..)) function signature expects its callback in the last position, with any other parameters coming before it. This is a pretty ubiquitous “standard” for async JS functions. You might call it “callback-last style.” If for some reason you had a need to handle “callback-first style” signatures, you would just make a utility that used args.unshift(..) instead of args.push(..).
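That “callback-first style” variant might look like this hypothetical sketch, following the same shape as thunkify(..) above:

```javascript
function thunkifyCbFirst(fn) {
    var args = [].slice.call( arguments, 1 );
    return function(cb){
        // callback goes in the first position instead of the last;
        // like the `args.push(..)` version, this assumes the thunk
        // is only called once (the `args` array is mutated)
        args.unshift( cb );
        return fn.apply( null, args );
    };
}

// a hypothetical "callback-first style" function
function fooCbFirst(cb,x,y) {
    cb( x + y );
}

var fooThunk = thunkifyCbFirst( fooCbFirst, 3, 4 );

fooThunk( function(sum){
    console.log( sum );     // 7
} );
```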

The preceding formulation of thunkify(..) takes both the foo(..) function reference, and any parameters it needs, and returns back the thunk itself (fooThunk(..)). However, that’s not the typical approach you’ll find to thunks in JS.

Instead of thunkify(..) making the thunk itself, typically – if not perplexingly – the thunkify(..) utility would produce a function that produces thunks.

Uhhhh… yeah.

Consider:

function thunkify(fn) {
    return function() {
        var args = [].slice.call( arguments );
        return function(cb) {
            args.push( cb );
            return fn.apply( null, args );
        };
    };
}

The main difference here is the extra return function() { .. } layer. Here’s how its usage differs:

var whatIsThis = thunkify( foo );

var fooThunk = whatIsThis( 3, 4 );

// later

fooThunk( function(sum) {
    console.log( sum );     // 7
} );

Obviously, the big question this snippet implies is what is whatIsThis properly called? It’s not the thunk, it’s the thing that will produce thunks from foo(..) calls. It’s kind of like a “factory” for “thunks.” There doesn’t seem to be any kind of standard agreement for naming such a thing.

So, my proposal is “thunkory” (“thunk” + “factory”). So, thunkify(..) produces a thunkory, and a thunkory produces thunks. That reasoning is symmetric to my proposal for “promisory” in Chapter 3:

var fooThunkory = thunkify( foo );

var fooThunk1 = fooThunkory( 3, 4 );
var fooThunk2 = fooThunkory( 5, 6 );

// later

fooThunk1( function(sum) {
    console.log( sum );     // 7
} );

fooThunk2( function(sum) {
    console.log( sum );     // 11
} );

Note: The running foo(..) example expects a style of callback that’s not “error-first style.” Of course, “error-first style” is much more common. If foo(..) had some sort of legitimate error-producing expectation, we could change it to expect and use an error-first callback. None of the subsequent thunkify(..) machinery cares what style of callback is assumed. The only difference in usage would be fooThunk1(function(err,sum){...

Exposing the thunkory method – instead of how the earlier thunkify(..) hides this intermediary step – may seem like unnecessary complication. But in general, it’s quite useful to make thunkories at the beginning of your program to wrap existing API methods, and then be able to pass around and call those thunkories when you need thunks. The two distinct steps preserve a cleaner separation of capability.

To illustrate:

// cleaner:
var fooThunkory = thunkify( foo );

var fooThunk1 = fooThunkory( 3, 4 );
var fooThunk2 = fooThunkory( 5, 6 );

// instead of:
var fooThunk1 = thunkify( foo, 3, 4 );
var fooThunk2 = thunkify( foo, 5, 6 );

Regardless of whether you like to deal with the thunkories explicitly or not, the usage of thunks fooThunk1(..) and fooThunk2(..) remains the same.

15.7.1 s/promise/thunk/

So what’s all this thunk stuff have to do with generators?

Comparing thunks to promises generally: they’re not directly interchangeable as they’re not equivalent in behavior. Promises are vastly more capable and trustable than bare thunks.

But in another sense, they both can be seen as a request for a value, which may be async in its answering.

Recall from Chapter 3 we defined a utility for promisifying a function, which we called Promise.wrap(..) – we could have called it promisify(..), too! This Promise-wrapping utility doesn’t produce Promises; it produces promisories that in turn produce Promises. This is completely symmetric to the thunkories and thunks presently being discussed.

To illustrate the symmetry, let’s first alter the running foo(..) example from earlier to assume an “error-first style” callback:

function foo(x,y,cb) {
    setTimeout( function(){
        // assume `cb(..)` as "error-first style"
        cb( null, x + y );
    }, 1000 );
}
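Chapter 3’s Promise.wrap(..) listing isn’t reproduced here; as a reminder of its shape, a minimal promisify(..) for “error-first style” callback functions could be sketched like this – an illustration, not the exact Chapter 3 code (the foo(..) copy below uses a shorter delay just for convenience):

```javascript
function promisify(fn) {
    // produce a promisory: call it with everything except the callback
    return function() {
        var args = [].slice.call( arguments );
        return new Promise( function(resolve,reject){
            fn.apply( null, args.concat( function(err,v){
                if (err) { reject( err ); }
                else { resolve( v ); }
            } ) );
        } );
    };
}

// same shape as the "error-first style" `foo(..)` above
function foo(x,y,cb) {
    setTimeout( function(){
        cb( null, x + y );
    }, 10 );
}

var fooPromisory = promisify( foo );

fooPromisory( 3, 4 )
.then( function(sum){
    console.log( sum );     // 7
} );
```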

Now, we’ll compare using thunkify(..) and promisify(..) (aka Promise.wrap(..) from Chapter 3):

// symmetrical: constructing the question asker
var fooThunkory = thunkify( foo );
var fooPromisory = promisify( foo );

// symmetrical: asking the question
var fooThunk = fooThunkory( 3, 4 );
var fooPromise = fooPromisory( 3, 4 );

// get the thunk answer
fooThunk( function(err,sum){
    if (err) {
        console.error( err );
    }
    else {
        console.log( sum );     // 7
    }
} );

// get the promise answer
fooPromise
.then(
    function(sum){
        console.log( sum );     // 7
    },
    function(err){
        console.error( err );
    }
);

Both the thunkory and the promisory are essentially asking a question (for a value), and respectively the thunk fooThunk and promise fooPromise represent the future answers to that question. Presented in that light, the symmetry is clear.

With that perspective in mind, we can see that generators which yield Promises for asynchrony could instead yield thunks for asynchrony. All we’d need is a smarter run(..) utility (like from before) that can not only look for and wire up to a yielded Promise but also provide a callback to a yielded thunk.

Consider:

function *foo() {
    var val = yield request( "http://some.url.1" );
    console.log( val );
}

run( foo );

In this example, request(..) could either be a promisory that returns a promise, or a thunkory that returns a thunk. From the perspective of what’s going on inside the generator code logic, we don’t care about that implementation detail, which is quite powerful!

So, request(..) could be either:

// promisory `request(..)` (see Chapter 3)
var request = Promise.wrap( ajax );

// vs.

// thunkory `request(..)`
var request = thunkify( ajax );

Finally, as a thunk-aware patch to our earlier run(..) utility, we would need logic like this:

// ..
// did we receive a thunk back?
else if (typeof next.value == "function") {
    return new Promise( function(resolve,reject){
        // call the thunk with an error-first callback
        next.value( function(err,msg) {
            if (err) {
                reject( err );
            }
            else {
                resolve( msg );
            }
        } );
    } )
    .then(
        handleNext,
        function handleErr(err) {
            return Promise.resolve(
                it.throw( err )
            )
            .then( handleResult );
        }
    );
}

Now, our generators can either call promisories to yield Promises, or call thunkories to yield thunks, and in either case, run(..) would handle that value and use it to wait for the completion to resume the generator.

Symmetry-wise, these two approaches look identical. However, we should point out that’s true only from the perspective of Promises or thunks representing the future value continuation of a generator.

From the larger perspective, thunks in and of themselves have hardly any of the trustability or composability guarantees that Promises are designed with. Using a thunk as a stand-in for a Promise in this particular generator asynchrony pattern is workable but should be seen as less than ideal when compared to all the benefits that Promises offer (see Chapter 3).

If you have the option, prefer yield pr rather than yield th. But there’s nothing wrong with having a run(..) utility which can handle both value types.

Note: The runner(..) utility in my asynquence library, which will be discussed in Appendix A, handles yields of Promises, thunks and asynquence sequences.

15.8 Pre-ES6 Generators

You’re hopefully convinced now that generators are a very important addition to the async programming toolbox. But it’s a new syntax in ES6, which means you can’t just polyfill generators like you can Promises (which are just a new API). So what can we do to bring generators to our browser JS if we don’t have the luxury of ignoring pre-ES6 browsers?

For all new syntax extensions in ES6, there are tools – the most common term for them is transpilers, for trans-compilers – which can take your ES6 syntax and transform it into equivalent (but obviously uglier!) pre-ES6 code. So, generators can be transpiled into code that will have the same behavior but work in ES5 and below.

But how? The “magic” of yield doesn’t obviously sound like code that’s easy to transpile. We actually hinted at a solution in our earlier discussion of closure-based iterators.

15.8.1 Manual Transformation

Before we discuss the transpilers, let’s derive how manual transpilation would work in the case of generators. This isn’t just an academic exercise, because doing so will actually help further reinforce how they work.

Consider:

// `request(..)` is a Promise-aware Ajax utility

function *foo(url) {
    try {
        console.log( "requesting:", url );
        var val = yield request( url );
        console.log( val );
    }
    catch (err) {
        console.log( "Oops:", err );
        return false;
    }
}

var it = foo( "http://some.url.1" );

The first thing to observe is that we’ll still need a normal foo() function that can be called, and it will still need to return an iterator. So, let’s sketch out the non-generator transformation:

function foo(url) {

    // ..

    // make and return an iterator
    return {
        next: function(v) {
            // ..
        },
        throw: function(e) {
            // ..
        }
    };
}

var it = foo( "http://some.url.1" );

The next thing to observe is that a generator does its “magic” by suspending its scope/state, but we can emulate that with function closure (see the Scope & Closures title of this series). To understand how to write such code, we’ll first annotate different parts of our generator with state values:

// `request(..)` is a Promise-aware Ajax utility

function *foo(url) {
    // STATE *1*

    try {
        console.log( "requesting:", url );
        var TMP1 = request( url );

        // STATE *2*
        var val = yield TMP1;
        console.log( val );
    }
    catch (err) {
        // STATE *3*
        console.log( "Oops:", err );
        return false;
    }
}

Note: For more accurate illustration, we split up the val = yield request.. statement into two parts, using the temporary TMP1 variable. request(..) happens in state *1*, and the assignment of its completion value to val happens in state *2*. We’ll get rid of that intermediate TMP1 when we convert the code to its non-generator equivalent.

In other words, *1* is the beginning state, *2* is the state if the request(..) succeeds, and *3* is the state if the request(..) fails. You can probably imagine how any extra yield steps would just be encoded as extra states.

Back to our transpiled generator, let’s define a variable state in the closure we can use to keep track of the state:

function foo(url) {
    // manage generator state
    var state;

    // ..
}

Now, let’s define an inner function called process(..) inside the closure which handles each state, using a switch statement:

// `request(..)` is a Promise-aware Ajax utility

function foo(url) {
    // manage generator state
    var state;

    // generator-wide variable declarations
    var val;

    function process(v) {
        switch (state) {
            case 1:
                console.log( "requesting:", url );
                return request( url );
            case 2:
                val = v;
                console.log( val );
                return;
            case 3:
                var err = v;
                console.log( "Oops:", err );
                return false;
        }
    }

    // ..
}

Each state in our generator is represented by its own case in the switch statement. process(..) will be called each time we need to process a new state. We’ll come back to how that works in just a moment.

For any generator-wide variable declarations (val), we move those to a var declaration outside of process(..) so they can survive multiple calls to process(..). But the “block scoped” err variable is only needed for the *3* state, so we leave it in place.

In state *1*, instead of yield request(..), we did return request(..). In terminal state *2*, there was no explicit return, so we just do a return; which is the same as return undefined. In terminal state *3*, there was a return false, so we preserve that.

Now we need to define the code in the iterator functions so they call process(..) appropriately:

function foo(url) {
    // manage generator state
    var state;

    // generator-wide variable declarations
    var val;

    function process(v) {
        switch (state) {
            case 1:
                console.log( "requesting:", url );
                return request( url );
            case 2:
                val = v;
                console.log( val );
                return;
            case 3:
                var err = v;
                console.log( "Oops:", err );
                return false;
        }
    }

    // make and return an iterator
    return {
        next: function(v) {
            // initial state
            if (!state) {
                state = 1;
                return {
                    done: false,
                    value: process()
                };
            }
            // yield resumed successfully
            else if (state == 1) {
                state = 2;
                return {
                    done: true,
                    value: process( v )
                };
            }
            // generator already completed
            else {
                return {
                    done: true,
                    value: undefined
                };
            }
        },
        "throw": function(e) {
            // the only explicit error handling is in
            // state *1*
            if (state == 1) {
                state = 3;
                return {
                    done: true,
                    value: process( e )
                };
            }
            // otherwise, an error won't be handled,
            // so just throw it right back out
            else {
                throw e;
            }
        }
    };
}

How does this code work?

  1. The first call to the iterator’s next() moves the generator from the uninitialized state to state 1, and then calls process() to handle that state. The return value from request(..), which is the promise for the Ajax response, is returned back as the value property from the next() call.
  2. If the Ajax request succeeds, the second call to next(..) should send in the Ajax response value, which moves our state to 2. process(..) is again called (this time with the passed in Ajax response value), and the value property returned from next(..) will be undefined.
  3. However, if the Ajax request fails, throw(..) should be called with the error, which would move the state from 1 to 3 (instead of 2). Again process(..) is called, this time with the error value. That case returns false, which is set as the value property returned from the throw(..) call.

From the outside – that is, interacting only with the iterator – this foo(..) normal function works pretty much the same as the *foo(..) generator would have worked. So we’ve effectively “transpiled” our ES6 generator to pre-ES6 compatibility!

We could then manually instantiate our generator and control its iterator – calling var it = foo("..") and it.next(..) and such – or better, we could pass it to our previously defined run(..) utility as run(foo,"..").

15.8.2 Automatic Transpilation

The preceding exercise of manually deriving a transformation of our ES6 generator to pre-ES6 equivalent teaches us how generators work conceptually. But that transformation was really intricate and very non-portable to other generators in our code. It would be quite impractical to do this work by hand, and would completely negate all the benefit of generators.

But luckily, several tools already exist that can automatically convert ES6 generators to things like what we derived in the previous section. Not only do they do the heavy lifting work for us, but they also handle several complications that we glossed over.

One such tool is regenerator (https://facebook.github.io/regenerator/), from the smart folks at Facebook.

If we use regenerator to transpile our previous generator, here’s the code produced (at the time of this writing):

// `request(..)` is a Promise-aware Ajax utility

var foo = regeneratorRuntime.mark(function foo(url) {
    var val;

    return regeneratorRuntime.wrap(function foo$(context$1$0) {
        while (1) switch (context$1$0.prev = context$1$0.next) {
        case 0:
            context$1$0.prev = 0;
            console.log( "requesting:", url );
            context$1$0.next = 4;
            return request( url );
        case 4:
            val = context$1$0.sent;
            console.log( val );
            context$1$0.next = 12;
            break;
        case 8:
            context$1$0.prev = 8;
            context$1$0.t0 = context$1$0.catch(0);
            console.log("Oops:", context$1$0.t0);
            return context$1$0.abrupt("return", false);
        case 12:
        case "end":
            return context$1$0.stop();
        }
    }, foo, this, [[0, 8]]);
});

There’s some obvious similarities here to our manual derivation, such as the switch / case statements, and we even see val pulled out of the closure just as we did.

Of course, one trade-off is that regenerator’s transpilation requires a helper library regeneratorRuntime that holds all the reusable logic for managing a general generator / iterator. A lot of that boilerplate looks different than our version, but even then, the concepts can be seen, like with context$1$0.next = 4 keeping track of the next state for the generator.

The main takeaway is that generators are not restricted to only being useful in ES6+ environments. Once you understand the concepts, you can employ them throughout your code, and use tools to transform the code to be compatible with older environments.

This is more work than just using a Promise API polyfill for pre-ES6 Promises, but the effort is totally worth it, because generators are so much better at expressing async flow control in a reason-able, sensible, synchronous-looking, sequential fashion.

Once you get hooked on generators, you’ll never want to go back to the hell of async spaghetti callbacks!

15.9 Review

Generators are a new ES6 function type that does not run-to-completion like normal functions. Instead, the generator can be paused in mid-completion (entirely preserving its state), and it can later be resumed from where it left off.

This pause/resume interchange is cooperative rather than preemptive, which means that the generator has the sole capability to pause itself, using the yield keyword, and yet the iterator that controls the generator has the sole capability (via next(..)) to resume the generator.

The yield / next(..) duality is not just a control mechanism, it’s actually a two-way message passing mechanism. A yield .. expression essentially pauses waiting for a value, and the next next(..) call passes a value (or implicit undefined) back to that paused yield expression.
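
To recap that duality with the smallest possible (purely illustrative) example:

```javascript
// The two-way exchange in miniature: `yield` sends a value out, and the
// matching `next(..)` call sends a value back in
function *swap() {
    var answer = yield "question";  // pauses; "question" goes out
    return answer * 2;              // resumes; `answer` came in via next(..)
}

var it = swap();
var msg = it.next();      // { value: "question", done: false }
var res = it.next( 21 );  // { value: 42, done: true }
```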

The key benefit of generators related to async flow control is that the code inside a generator expresses a sequence of steps for the task in a naturally sync/sequential fashion. The trick is that we essentially hide potential asynchrony behind the yield keyword – moving the asynchrony to the code where the generator’s iterator is controlled.

In other words, generators preserve a sequential, synchronous, blocking code pattern for async code, which lets our brains reason about the code much more naturally, addressing one of the two key drawbacks of callback-based async.

16 You Don’t Know JS: Async & Performance

17 Chapter 5: Program Performance

This book so far has been all about how to leverage asynchrony patterns more effectively. But we haven’t directly addressed why asynchrony really matters to JS. The most obvious explicit reason is performance.

For example, if you have two Ajax requests to make, and they’re independent, but you need to wait on them both to finish before doing the next task, you have two options for modeling that interaction: serial and concurrent.

You could make the first request and wait to start the second request until the first finishes. Or, as we’ve seen both with promises and generators, you could make both requests “in parallel,” and express the “gate” to wait on both of them before moving on.

Clearly, the latter is usually going to be more performant than the former. And better performance generally leads to better user experience.

It’s even possible that asynchrony (interleaved concurrency) can improve just the perception of performance, even if the overall program still takes the same amount of time to complete. User perception of performance is every bit as important – if not more so! – as actual measurable performance.

We want to now move beyond localized asynchrony patterns to talk about some bigger picture performance details at the program level.

Note: You may be wondering about micro-performance issues like if a++ or ++a is faster. We’ll look at those sorts of performance details in the next chapter on “Benchmarking & Tuning.”

17.1 Web Workers

If you have processing-intensive tasks but you don’t want them to run on the main thread (which may slow down the browser/UI), you might have wished that JavaScript could operate in a multithreaded manner.

In Chapter 1, we talked in detail about how JavaScript is single threaded. And that’s still true. But a single thread isn’t the only way to organize the execution of your program.

Imagine splitting your program into two pieces, and running one of those pieces on the main UI thread, and running the other piece on an entirely separate thread.

What kinds of concerns would such an architecture bring up?

For one, you’d want to know if running on a separate thread meant that it ran in parallel (on systems with multiple CPUs/cores) such that a long-running process on that second thread would not block the main program thread. Otherwise, “virtual threading” wouldn’t be of much benefit over what we already have in JS with async concurrency.

And you’d want to know if these two pieces of the program have access to the same shared scope/resources. If they do, then you have all the questions that multithreaded languages (Java, C++, etc.) deal with, such as needing cooperative or preemptive locking (mutexes, etc.). That’s a lot of extra work, and shouldn’t be undertaken lightly.

Alternatively, you’d want to know how these two pieces could “communicate” if they couldn’t share scope/resources.

All these are great questions to consider as we explore a feature added to the web platform circa HTML5 called “Web Workers.” This is a feature of the browser (aka host environment) and actually has almost nothing to do with the JS language itself. That is, JavaScript does not currently have any features that support threaded execution.

But an environment like your browser can easily provide multiple instances of the JavaScript engine, each on its own thread, and let you run a different program in each thread. Each of those separate threaded pieces of your program is called a “(Web) Worker.” This type of parallelism is called “task parallelism,” as the emphasis is on splitting up chunks of your program to run in parallel.

From your main JS program (or another Worker), you instantiate a Worker like so:

var w1 = new Worker( "http://some.url.1/mycoolworker.js" );

The URL should point to the location of a JS file (not an HTML page!) which is intended to be loaded into a Worker. The browser will then spin up a separate thread and let that file run as an independent program in that thread.

Note: The kind of Worker created with such a URL is called a “Dedicated Worker.” But instead of providing a URL to an external file, you can also create an “Inline Worker” by providing a Blob URL (another HTML5 feature); essentially it’s an inline file stored in a single (binary) value. However, Blobs are beyond the scope of what we’ll discuss here.

Workers do not share any scope or resources with each other or the main program – that would bring all the nightmares of threaded programming to the forefront – but instead have a basic event messaging mechanism connecting them.

The w1 Worker object is an event listener and trigger, which lets you subscribe to events sent by the Worker as well as send events to the Worker.

Here’s how to listen for events (actually, the fixed "message" event):

w1.addEventListener( "message", function(evt){
    // evt.data
} );

And you can send the "message" event to the Worker:

w1.postMessage( "something cool to say" );

Inside the Worker, the messaging is totally symmetrical:

// "mycoolworker.js"

addEventListener( "message", function(evt){
    // evt.data
} );

postMessage( "a really cool reply" );

Notice that a dedicated Worker is in a one-to-one relationship with the program that created it. That is, the "message" event doesn’t need any disambiguation here, because we’re sure that it could only have come from this one-to-one relationship – either it came from the Worker or the main page.

Usually the main page application creates the Workers, but a Worker can instantiate its own child Worker(s) – known as subworkers – as necessary. Sometimes it’s useful to delegate such details to a sort of “master” Worker that spawns other Workers to process parts of a task. Unfortunately, at the time of this writing, Chrome still does not support subworkers, while Firefox does.

To kill a Worker immediately from the program that created it, call terminate() on the Worker object (like w1 in the previous snippets). Abruptly terminating a Worker thread does not give it any chance to finish up its work or clean up any resources. It’s akin to you closing a browser tab to kill a page.

If you have two or more pages (or multiple tabs with the same page!) in the browser that try to create a Worker from the same file URL, those will actually end up as completely separate Workers. Shortly, we’ll discuss a way to “share” a Worker.

Note: It may seem like a malicious or ignorant JS program could easily perform a denial-of-service attack on a system by spawning hundreds of Workers, seemingly each with their own thread. While it’s true that it’s somewhat of a guarantee that a Worker will end up on a separate thread, this guarantee is not unlimited. The system is free to decide how many actual threads/CPUs/cores it really wants to create. There’s no way to predict or guarantee how many you’ll have access to, though many people assume it’s at least as many as the number of CPUs/cores available. I think the safest assumption is that there’s at least one other thread besides the main UI thread, but that’s about it.
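
A cautious approach, then, is to treat any concurrency information the environment gives you – such as navigator.hardwareConcurrency, where the browser exposes it – as only a hint. This hypothetical helper sketches such a Worker budget, always leaving a core for the main UI thread:

```javascript
// hypothetical helper: decide how many Workers to spawn, given an
// (optional) hardware concurrency hint from the environment
function workerBudget(hwConcurrency) {
    // leave one core for the main UI thread; treat the hint as a
    // ceiling, not a promise; always allow at least one Worker
    return Math.max( 1, (hwConcurrency || 2) - 1 );
}

// in a browser (hypothetical usage):
// var maxWorkers = workerBudget( navigator.hardwareConcurrency );
```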

17.1.1 Worker Environment

Inside the Worker, you do not have access to any of the main program’s resources. That means you cannot access any of its global variables, nor can you access the page’s DOM or other resources. Remember: it’s a totally separate thread.

You can, however, perform network operations (Ajax, WebSockets) and set timers. Also, the Worker has access to its own copy of several important global variables/features, including navigator, location, JSON, and applicationCache.

You can also load extra JS scripts into your Worker, using importScripts(..):

// inside the Worker
importScripts( "foo.js", "bar.js" );

These scripts are loaded synchronously, which means the importScripts(..) call will block the rest of the Worker’s execution until the file(s) are finished loading and executing.

Note: There have also been some discussions about exposing the <canvas> API to Workers, which combined with having canvases be Transferables (see the “Data Transfer” section), would allow Workers to perform more sophisticated off-thread graphics processing, which can be useful for high-performance gaming (WebGL) and other similar applications. Although this doesn’t exist yet in any browsers, it’s likely to happen in the near future.

What are some common uses for Web Workers?

  • Processing intensive math calculations
  • Sorting large data sets
  • Data operations (compression, audio analysis, image pixel manipulations, etc.)
  • High-traffic network communications
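
For instance, the “sorting large data sets” use might look like this sketch, with the Worker-side logic factored into a plain function (the file names and wiring here are hypothetical):

```javascript
// sketch: Worker-side logic for sorting a large data set off the main thread
function sortHandler(data) {
    // copy, then numeric sort -- a real Worker might do far heavier work
    return data.slice().sort( function(a,b){ return a - b; } );
}

// inside the (hypothetical) "sortworker.js" Worker file:
// addEventListener( "message", function(evt){
//     postMessage( sortHandler( evt.data ) );
// } );

// from the main program:
// var w = new Worker( "sortworker.js" );
// w.addEventListener( "message", function(evt){ /* evt.data: sorted */ } );
// w.postMessage( bigArrayOfNumbers );
```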

17.1.2 Data Transfer

You may notice a common characteristic of most of those uses, which is that they require a large amount of information to be transferred across the barrier between threads using the event mechanism, perhaps in both directions.

In the early days of Workers, serializing all data to a string value was the only option. In addition to the speed penalty of the two-way serializations, the other major negative was that the data was being copied, which meant a doubling of memory usage (and the subsequent churn of garbage collection).

Thankfully, we now have a few better options.

If you pass an object, a so-called “Structured Cloning Algorithm” (https://developer.mozilla.org/en-US/docs/Web/Guide/API/DOM/The_structured_clone_algorithm) is used to copy/duplicate the object on the other side. This algorithm is fairly sophisticated and can even handle duplicating objects with circular references. The to-string/from-string performance penalty is not paid, but we still have duplication of memory using this approach. There is support for this in IE10 and above, as well as all the other major browsers.

An even better option, especially for larger data sets, is “Transferable Objects” (http://updates.html5rocks.com/2011/12/Transferable-Objects-Lightning-Fast). What happens is that the object’s “ownership” is transferred, but the data itself is not moved. Once you transfer away an object to a Worker, it’s empty or inaccessible in the originating location – that eliminates the hazards of threaded programming over a shared scope. Of course, transfer of ownership can go in both directions.

There really isn’t much you need to do to opt into a Transferable Object; any data structure that implements the Transferable interface (https://developer.mozilla.org/en-US/docs/Web/API/Transferable) will automatically be transferred this way (currently supported in Firefox and Chrome).

For example, typed arrays like Uint8Array (see the ES6 & Beyond title of this series) are “Transferables.” This is how you’d send a Transferable Object using postMessage(..):

// `foo` is a `Uint8Array` for instance

postMessage( foo.buffer, [ foo.buffer ] );

The first parameter is the raw buffer and the second parameter is a list of what to transfer.

Browsers that don’t support Transferable Objects simply degrade to structured cloning, which means performance reduction rather than outright feature breakage.
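
If you need to know which path you’re on, one common detection trick is to attempt a transfer and check whether the original buffer ended up detached – sketched here using a MessageChannel (the helper name is mine):

```javascript
// feature-detect Transferable Object support: transfer a tiny buffer
// through a MessageChannel and see if the original ends up detached
function transferablesSupported() {
    if (typeof MessageChannel == "undefined") return false;

    var buf = new ArrayBuffer( 1 );
    var ch = new MessageChannel();

    try {
        // second argument is the transfer list
        ch.port1.postMessage( buf, [ buf ] );
    }
    catch (err) {
        return false;
    }
    finally {
        ch.port1.close();
        ch.port2.close();
    }

    // if the transfer really happened, `buf` is now detached (zero length)
    return buf.byteLength === 0;
}
```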

17.1.3 Shared Workers

If your site or app allows for loading multiple tabs of the same page (a common feature), you may very well want to reduce the resource usage of the user’s system by preventing duplicate dedicated Workers; the most common limited resource in this respect is a socket network connection, as browsers limit the number of simultaneous connections to a single host. Of course, limiting multiple connections from a client also eases your server resource requirements.

In this case, creating a single centralized Worker that all the page instances of your site or app can share is quite useful.

That’s called a SharedWorker, which you create like so (support for this is limited to Firefox and Chrome):

var w1 = new SharedWorker( "http://some.url.1/mycoolworker.js" );

Because a shared Worker can be connected to or from more than one program instance or page on your site, the Worker needs a way to know which program a message comes from. This unique identification is called a “port” – think network socket ports. So the calling program must use the port object of the Worker for communication:

w1.port.addEventListener( "message", handleMessages );

// ..

w1.port.postMessage( "something cool" );

Also, the port connection must be initialized, as:

w1.port.start();

Inside the shared Worker, an extra event must be handled: "connect". This event provides the port object for that particular connection. The most convenient way to keep multiple connections separate is to use closure (see Scope & Closures title of this series) over the port, as shown next, with the event listening and transmitting for that connection defined inside the handler for the "connect" event:

// inside the shared Worker
addEventListener( "connect", function(evt){
    // the assigned port for this connection
    var port = evt.ports[0];

    port.addEventListener( "message", function(evt){
        // ..

        port.postMessage( .. );

        // ..
    } );

    // initialize the port connection
    port.start();
} );

Other than that difference, shared and dedicated Workers have the same capabilities and semantics.

Note: Shared Workers survive the termination of a port connection if other port connections are still alive, whereas dedicated Workers are terminated whenever the connection to their initiating program is terminated.

17.1.4 Polyfilling Web Workers

Web Workers are very attractive performance-wise for running JS programs in parallel. However, you may be in a position where your code needs to run in older browsers that lack support. Because Workers are an API and not a syntax, they can be polyfilled, to an extent.

If a browser doesn’t support Workers, there’s simply no way to fake multithreading from the performance perspective. Iframes are commonly thought to provide a parallel environment, but in all modern browsers they actually run on the same thread as the main page, so they’re not sufficient for faking parallelism.

As we detailed in Chapter 1, JS’s asynchronicity (not parallelism) comes from the event loop queue, so you can force faked Workers to be asynchronous using timers (setTimeout(..), etc.). Then you just need to provide a polyfill for the Worker API. There are some listed here (https://github.com/Modernizr/Modernizr/wiki/HTML5-Cross-Browser-Polyfills#web-workers), but frankly none of them look great.

I’ve written a sketch of a polyfill for Worker here (https://gist.github.com/getify/1b26accb1a09aa53ad25). It’s basic, but it should get the job done for simple Worker support: the two-way messaging works correctly, as does "onerror" handling. You could probably also extend it with more features, such as terminate() or faked Shared Workers, as you see fit.

Note: You can’t fake synchronous blocking, so this polyfill just disallows use of importScripts(..). Another option might have been to parse and transform the Worker’s code (once Ajax loaded) to handle rewriting to some asynchronous form of an importScripts(..) polyfill, perhaps with a promise-aware interface.

17.2 SIMD

Single instruction, multiple data (SIMD) is a form of “data parallelism,” as contrasted to “task parallelism” with Web Workers, because the emphasis is not really on program logic chunks being parallelized, but rather multiple bits of data being processed in parallel.

With SIMD, threads don’t provide the parallelism. Instead, modern CPUs provide SIMD capability with “vectors” of numbers – think: type specialized arrays – as well as instructions that can operate in parallel across all the numbers; these are low-level operations leveraging instruction-level parallelism.

The effort to expose SIMD capability to JavaScript is primarily spearheaded by Intel (https://01.org/node/1495), namely by Mohammad Haghighat (at the time of this writing), in cooperation with Firefox and Chrome teams. SIMD is on an early standards track with a good chance of making it into a future revision of JavaScript, likely in the ES7 timeframe.

SIMD JavaScript proposes to expose short vector types and APIs to JS code, which on those SIMD-enabled systems would map the operations directly through to the CPU equivalents, with fallback to non-parallelized operation “shims” on non-SIMD systems.

The performance benefits for data-intensive applications (signal analysis, matrix operations on graphics, etc.) with such parallel math processing are quite obvious!

Early proposal forms of the SIMD API at the time of this writing look like this:

var v1 = SIMD.float32x4( 3.14159, 21.0, 32.3, 55.55 );
var v2 = SIMD.float32x4( 2.1, 3.2, 4.3, 5.4 );

var v3 = SIMD.int32x4( 10, 101, 1001, 10001 );
var v4 = SIMD.int32x4( 10, 20, 30, 40 );

SIMD.float32x4.mul( v1, v2 );   // [ 6.597339, 67.2, 138.89, 299.97 ]
SIMD.int32x4.add( v3, v4 );     // [ 20, 121, 1031, 10041 ]

Shown here are two different vector data types, 32-bit floating-point numbers and 32-bit integer numbers. You can see that these vectors are sized exactly to four 32-bit elements, as this matches the SIMD vector sizes (128-bit) available in most modern CPUs. It’s also possible we may see an x8 (or larger!) version of these APIs in the future.
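
To make the “shim” idea concrete, here’s a hypothetical non-parallel fallback for just the float32x4 type and its mul() operation – the same API shape as the proposal, but plain scalar math underneath:

```javascript
// hypothetical non-SIMD "shim": same API shape, no actual parallelism
var SIMDshim = {};

SIMDshim.float32x4 = function(x,y,z,w) {
    // Math.fround(..) (ES6) rounds each value to 32-bit float precision
    return [ Math.fround( x ), Math.fround( y ),
             Math.fround( z ), Math.fround( w ) ];
};

SIMDshim.float32x4.mul = function(v1,v2) {
    // element-wise multiply, one lane at a time (this is the fallback --
    // a real SIMD engine would do all four lanes in a single instruction)
    return SIMDshim.float32x4(
        v1[0] * v2[0], v1[1] * v2[1],
        v1[2] * v2[2], v1[3] * v2[3]
    );
};
```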

Besides mul() and add(), many other operations are likely to be included, such as sub(), div(), abs(), neg(), sqrt(), reciprocal(), reciprocalSqrt() (arithmetic), shuffle() (rearrange vector elements), and(), or(), xor(), not() (logical), equal(), greaterThan(), lessThan() (comparison), shiftLeft(), shiftRightLogical(), shiftRightArithmetic() (shifts), fromFloat32x4(), and fromInt32x4() (conversions).

Note: There’s an official “prollyfill” (hopeful, expectant, future-leaning polyfill) for the SIMD functionality available (https://github.com/johnmccutchan/ecmascript_simd), which illustrates a lot more of the planned SIMD capability than we’ve illustrated in this section.

17.3 asm.js

“asm.js” (http://asmjs.org/) is a label for a highly optimizable subset of the JavaScript language. By carefully avoiding certain mechanisms and patterns that are hard to optimize (garbage collection, coercion, etc.), asm.js-styled code can be recognized by the JS engine and given special attention with aggressive low-level optimizations.

Distinct from other program performance mechanisms discussed in this chapter, asm.js isn’t necessarily something that needs to be adopted into the JS language specification. There is an asm.js specification (http://asmjs.org/spec/latest/), but it’s mostly for tracking an agreed upon set of candidate inferences for optimization rather than a set of requirements of JS engines.

There’s not currently any new syntax being proposed. Instead, asm.js suggests ways to recognize existing standard JS syntax that conforms to the rules of asm.js and let engines implement their own optimizations accordingly.

There’s been some disagreement between browser vendors over exactly how asm.js should be activated in a program. Early versions of the asm.js experiment required a "use asm"; pragma (similar to strict mode’s "use strict";) to help clue the JS engine to be looking for asm.js optimization opportunities and hints. Others have asserted that asm.js should just be a set of heuristics that engines automatically recognize without the author having to do anything extra, meaning that existing programs could theoretically benefit from asm.js-style optimizations without doing anything special.

17.3.1 How to Optimize with asm.js

The first thing to understand about asm.js optimizations concerns types and coercion (see the Types & Grammar title of this series). If the JS engine has to track multiple different types of values in a variable through various operations, so that it can handle coercions between types as necessary, that’s a lot of extra work that keeps the program from being fully optimized.

Note: We’re going to use asm.js-style code here for illustration purposes, but be aware that it’s not commonly expected that you’ll author such code by hand. asm.js is more intended as a compilation target from other tools, such as Emscripten (https://github.com/kripken/emscripten/wiki). It’s of course possible to write your own asm.js code, but that’s usually a bad idea because the code is very low level and managing it can be very time consuming and error prone. Nevertheless, there may be cases where you’d want to hand tweak your code for asm.js optimization purposes.

There are some “tricks” you can use to hint to an asm.js-aware JS engine what the intended type is for variables/operations, so that it can skip these coercion tracking steps.

For example:

var a = 42;

// ..

var b = a;

In that program, the b = a assignment leaves the door open for type divergence in variables. However, it could instead be written as:

var a = 42;

// ..

var b = a | 0;

Here, we’ve used the | (“bitwise OR”) operator with the value 0, which has no effect on the value other than to make sure it’s a 32-bit integer. That code run in a normal JS engine works just fine, but when run in an asm.js-aware JS engine it can signal that b should always be treated as a 32-bit integer, so the coercion tracking can be skipped.

Similarly, the addition operation between two variables can be restricted to a more performant integer addition (instead of floating point):

(a + b) | 0

Again, the asm.js-aware JS engine can see that hint and infer that the + operation should be 32-bit integer addition because the end result of the whole expression would automatically be 32-bit integer conformed anyway.
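
Similar hints exist for floating-point values: asm.js-style code uses a unary + to mark a value as a 64-bit double, the same way | 0 marks a 32-bit integer. This sketch is written in that annotation style, but it runs as ordinary JS in any engine:

```javascript
// asm.js-style type annotations: unary `+` hints "double", `| 0` hints
// "32-bit int" -- plain JS semantics either way, just faster when
// an asm.js-aware engine recognizes the hints
function scale(x,y) {
    x = +x;         // x: always treated as a double
    y = y | 0;      // y: always treated as a 32-bit integer

    // coerce the int to a double before the floating-point multiply
    return +( x * +(y | 0) );
}
```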

17.3.2 asm.js Modules

One of the biggest drags on performance in JS is memory allocation, garbage collection, and scope access. asm.js suggests one of the ways around these issues is to declare a more formalized asm.js “module” – do not confuse these with ES6 modules; see the ES6 & Beyond title of this series.

For an asm.js module, you need to explicitly pass in a tightly conformed namespace – this is referred to in the spec as stdlib, as it should represent standard libraries needed – to import necessary symbols, rather than just using globals via lexical scope. In the base case, the window object is an acceptable stdlib object for asm.js module purposes, but you could and perhaps should construct an even more restricted one.

You also must declare a “heap” – which is just a fancy term for a reserved spot in memory where variables can already be used without asking for more memory or releasing previously used memory – and pass that in, so that the asm.js module won’t need to do anything that would cause memory churn; it can just use the pre-reserved space.

A “heap” is likely a typed ArrayBuffer, such as:

var heap = new ArrayBuffer( 0x10000 );  // 64k heap

Using that pre-reserved 64k of binary space, an asm.js module can store and retrieve values in that buffer without any memory allocation or garbage collection penalties. For example, the heap buffer could be used inside the module to back an array of 64-bit float values like this:

var arr = new Float64Array( heap );

OK, so let’s make a quick, silly example of an asm.js-styled module to illustrate how these pieces fit together. We’ll define a foo(..) that takes a start (x) and end (y) integer for a range, and calculates all the inner adjacent multiplications of the values in the range, and then finally averages those values together:

function fooASM(stdlib,foreign,heap) {
    "use asm";

    var arr = new stdlib.Int32Array( heap );

    function foo(x,y) {
        x = x | 0;
        y = y | 0;

        var i = 0;
        var p = 0;
        var sum = 0;
        var count = ((y|0) - (x|0)) | 0;

        // calculate all the inner adjacent multiplications
        for (i = x | 0;
            (i | 0) < (y | 0);
            p = (p + 8) | 0, i = (i + 1) | 0
        ) {
            // store result
            arr[ p >> 3 ] = (i * (i + 1)) | 0;
        }

        // calculate average of all intermediate values
        for (i = 0, p = 0;
            (i | 0) < (count | 0);
            p = (p + 8) | 0, i = (i + 1) | 0
        ) {
            sum = (sum + arr[ p >> 3 ]) | 0;
        }

        return +(sum / count);
    }

    return {
        foo: foo
    };
}

var heap = new ArrayBuffer( 0x1000 );
var foo = fooASM( window, null, heap ).foo;

foo( 10, 20 );      // 233

Note: This asm.js example is hand authored for illustration purposes, so it doesn’t represent the same code that would be produced from a compilation tool targeting asm.js. But it does show the typical nature of asm.js code, especially the type hinting and use of the heap buffer for temporary variable storage.

The first call to fooASM(..) is what sets up our asm.js module with its heap allocation. The result is a foo(..) function we can call as many times as necessary. Those foo(..) calls should be specially optimized by an asm.js-aware JS engine. Importantly, the preceding code is completely standard JS and would run just fine (without special optimization) in a non-asm.js engine.

Obviously, the nature of restrictions that make asm.js code so optimizable reduces the possible uses for such code significantly. asm.js won’t necessarily be a general optimization set for any given JS program. Instead, it’s intended to provide an optimized way of handling specialized tasks such as intensive math operations (e.g., those used in graphics processing for games).

17.4 Review

The first four chapters of this book are based on the premise that async coding patterns give you the ability to write more performant code, which is generally a very important improvement. But async behavior only gets you so far, because it’s still fundamentally bound to a single event loop thread.

So in this chapter we’ve covered several program-level mechanisms for improving performance even further.

Web Workers let you run a JS file (aka program) in a separate thread using async events to message between the threads. They’re wonderful for offloading long-running or resource-intensive tasks to a different thread, leaving the main UI thread more responsive.

SIMD proposes to map CPU-level parallel math operations to JavaScript APIs for high-performance data-parallel operations, like number processing on large data sets.

Finally, asm.js describes a small subset of JavaScript that avoids the hard-to-optimize parts of JS (like garbage collection and coercion) and lets the JS engine recognize and run such code through aggressive optimizations. asm.js could be hand authored, but that’s extremely tedious and error prone, akin to hand authoring assembly language (hence the name). Instead, the main intent is that asm.js would be a good target for cross-compilation from other highly optimized programming languages – for example, Emscripten (https://github.com/kripken/emscripten/wiki) transpiling C/C++ to JavaScript.

While not covered explicitly in this chapter, there are even more radical ideas under very early discussion for JavaScript, including approximations of direct threaded functionality (not just hidden behind data structure APIs). Whether that happens explicitly, or we just see more parallelism creep into JS behind the scenes, the future of more optimized program-level performance in JS looks really promising.

18 You Don’t Know JS: Async & Performance

19 Chapter 6: Benchmarking & Tuning

As the first four chapters of this book were all about performance as a coding pattern (asynchrony and concurrency), and Chapter 5 was about performance at the macro program architecture level, this chapter goes after the topic of performance at the micro level, focusing on single expressions/statements.

One of the most common areas of curiosity – indeed, some developers can get quite obsessed about it – is in analyzing and testing various options for how to write a line or chunk of code, and which one is faster.

We’re going to look at some of these issues, but it’s important to understand from the outset that this chapter is not about feeding the obsession of micro-performance tuning, like whether some given JS engine can run ++a faster than a++. The more important goal of this chapter is to figure out what kinds of JS performance matter and which ones don’t, and how to tell the difference.

But even before we get there, we need to explore how to most accurately and reliably test JS performance, because there are tons of misconceptions and myths that have flooded our collective cult knowledge base. We’ve got to sift through all that junk to find some clarity.

19.1 Benchmarking

OK, time to start dispelling some misconceptions. I’d wager the vast majority of JS developers, if asked to benchmark the speed (execution time) of a certain operation, would initially go about it something like this:

var start = (new Date()).getTime(); // or `Date.now()`

// do some operation

var end = (new Date()).getTime();

console.log( "Duration:", (end - start) );

Raise your hand if that’s roughly what came to your mind. Yep, I thought so. There’s a lot wrong with this approach, but don’t feel bad; we’ve all been there.

What did that measurement tell you, exactly? Understanding what it does and doesn’t say about the execution time of the operation in question is key to learning how to appropriately benchmark performance in JavaScript.

If the duration reported is 0, you may be tempted to believe that it took less than a millisecond. But that’s not very accurate. Some platforms don’t have single-millisecond precision, but instead only update the timer in larger increments. For example, older versions of Windows (and thus IE) had only 15ms precision, which means the operation has to take at least that long for anything other than 0 to be reported!
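You can get a rough feel for your environment’s timer granularity by spinning until the reported time changes – purely an illustrative probe, not something to build a benchmark on:

```javascript
function timerResolution() {
    var start = Date.now();
    var next = start;

    // busy-wait until the timer reports a different value
    while (next === start) {
        next = Date.now();
    }

    // the smallest tick we could observe, in milliseconds
    return next - start;
}

timerResolution();      // commonly 1 on modern platforms; ~15 on old IE/Windows
```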

Moreover, whatever duration is reported, the only thing you really know is that the operation took approximately that long on that exact single run. You have near-zero confidence that it will always run at that speed. You have no idea if the engine or system had some sort of interference at that exact moment, and that at other times the operation could run faster.

What if the duration reported is 4? Are you more sure it took about four milliseconds? Nope. It might have taken less time, and there may have been some other delay in getting either start or end timestamps.

More troublingly, you also don’t know that the circumstances of this operation test aren’t overly optimistic. It’s possible that the JS engine figured out a way to optimize your isolated test case, but in a more real program such optimization would be diluted or impossible, such that the operation would run slower than your test.

So… what do we know? Unfortunately, with those realizations stated, we know very little. Something of such low confidence isn’t even remotely good enough to build your determinations on. Your “benchmark” is basically useless. And worse, it’s dangerous in that it implies false confidence, not just to you but also to others who don’t think critically about the conditions that led to those results.

19.1.1 Repetition

“OK,” you now say, “Just put a loop around it so the whole test takes longer.” If you repeat an operation 100 times, and that whole loop reportedly takes a total of 137ms, then you can just divide by 100 and get an average duration of 1.37ms for each operation, right?

Well, not exactly.

A straight mathematical average by itself is definitely not sufficient for making judgments about performance which you plan to extrapolate to the breadth of your entire application. With a hundred iterations, even a couple of outliers (high or low) can skew the average, and then when you apply that conclusion repeatedly, you even further inflate the skew beyond credulity.

Instead of just running for a fixed number of iterations, you can instead choose to run the loop of tests until a certain amount of time has passed. That might be more reliable, but how do you decide how long to run? You might guess that it should be some multiple of how long your operation should take to run once. Wrong.

Actually, the length of time to repeat across should be based on the accuracy of the timer you’re using, specifically to minimize the chances of inaccuracy. The less precise your timer, the longer you need to run to make sure you’ve minimized the error percentage. A 15ms timer is pretty bad for accurate benchmarking; to minimize its uncertainty (aka “error rate”) to less than 1%, you need to run each cycle of test iterations for 750ms. A 1ms timer only needs a cycle to run for 50ms to get the same confidence.
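Those 750ms and 50ms numbers fall out of dividing the timer’s uncertainty – half its resolution – by the target error rate. Here’s that arithmetic sketched out (my own restatement of the reasoning, not Benchmark.js’s actual source):

```javascript
function minCycleTime(timerResolutionMs,targetErrorRate) {
    // a single reading can be off by up to half the timer's resolution
    var uncertaintyMs = timerResolutionMs / 2;

    // run long enough that the uncertainty stays under the target error rate
    return uncertaintyMs / targetErrorRate;
}

minCycleTime( 15, 0.01 );   // 750 (ms)
minCycleTime( 1, 0.01 );    // 50 (ms)
```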

But then, that’s just a single sample. To be sure you’re factoring out the skew, you’ll want lots of samples to average across. You’ll also want to understand something about just how slow the worst sample is, how fast the best sample is, how far apart those best and worst cases were, and so on. You’ll want to know not just a number that tells you how fast something ran, but also to have some quantifiable measure of how trustable that number is.
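To make those ideas slightly more concrete, here’s a rough sketch of the kind of summary you’d want from a set of timing samples – for illustrating the concepts only; use a proper tool for real testing:

```javascript
function summarize(samples) {
    var n = samples.length;
    var mean = samples.reduce( function(a,b){ return a + b; }, 0 ) / n;

    // sample variance: average squared distance from the mean
    var variance = samples.reduce( function(acc,v){
        return acc + Math.pow( v - mean, 2 );
    }, 0 ) / (n - 1);

    var stdDev = Math.sqrt( variance );

    return {
        mean: mean,
        min: Math.min.apply( null, samples ),
        max: Math.max.apply( null, samples ),
        stdDev: stdDev,
        // margin of error at ~95% confidence (1.96 standard errors)
        marginOfError: 1.96 * stdDev / Math.sqrt( n )
    };
}

// a single outlier drags the mean way up and blows out the spread
summarize( [1.3, 1.4, 1.2, 1.5, 9.7] );
```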

Also, you probably want to combine these different techniques (as well as others), so that you get the best balance of all the possible approaches.

That’s all bare minimum just to get started. If you’ve been approaching performance benchmarking with anything less serious than what I just glossed over, well… “you don’t know: proper benchmarking.”

19.1.2 Benchmark.js

Any relevant and reliable benchmark should be based on statistically sound practices. I am not going to write a chapter on statistics here, so I’ll hand wave around some terms: standard deviation, variance, margin of error. If you don’t know what those terms really mean – I took a stats class back in college and I’m still a little fuzzy on them – you are not actually qualified to write your own benchmarking logic.

Luckily, smart folks like John-David Dalton and Mathias Bynens do understand these concepts, and wrote a statistically sound benchmarking tool called Benchmark.js (http://benchmarkjs.com/). So I can end the suspense by simply saying: “just use that tool.”

I won’t repeat their whole documentation for how Benchmark.js works; they have fantastic API Docs (http://benchmarkjs.com/docs) you should read. Also there are some great (http://calendar.perfplanet.com/2010/bulletproof-javascript-benchmarks/) writeups (http://monsur.hossa.in/2012/12/11/benchmarkjs.html) on more of the details and methodology.

But just for quick illustration purposes, here’s how you could use Benchmark.js to run a quick performance test:

function foo() {
    // operation(s) to test
}

var bench = new Benchmark(
    "foo test",             // test name
    foo,                    // function to test (just contents)
    {
        // ..               // optional extra options (see docs)
    }
);

bench.hz;                   // number of operations per second
bench.stats.moe;            // margin of error
bench.stats.variance;       // variance across samples
// ..

There’s lots more to learn about using Benchmark.js besides this glance I’m including here. But the point is that it’s handling all of the complexities of setting up a fair, reliable, and valid performance benchmark for a given piece of JavaScript code. If you’re going to try to test and benchmark your code, this library is the first place you should turn.

We’re showing here the usage to test a single operation like X, but it’s fairly common that you want to compare X to Y. This is easy to do by simply setting up two different tests in a “Suite” (a Benchmark.js organizational feature). Then, you run them head-to-head, and compare the statistics to conclude whether X or Y was faster.

Benchmark.js can of course be used to test JavaScript in a browser (see the “jsPerf.com” section later in this chapter), but it can also run in non-browser environments (Node.js, etc.).

One largely untapped potential use-case for Benchmark.js is to use it in your Dev or QA environments to run automated performance regression tests against critical path parts of your application’s JavaScript. Similar to how you might run unit test suites before deployment, you can also compare the performance against previous benchmarks to monitor if you are improving or degrading application performance.

19.1.2.1 Setup/Teardown

In the previous code snippet, we glossed over the “extra options” { .. } object. But there are two options we should discuss: setup and teardown.

These two options let you define functions to be called before and after your test case runs.

It’s incredibly important to understand that your setup and teardown code does not run for each test iteration. The best way to think about it is that there’s an outer loop (repeating cycles), and an inner loop (repeating test iterations). setup and teardown are run at the beginning and end of each outer loop (aka cycle) iteration, but not inside the inner loop.
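A rough sketch of that outer/inner loop structure (illustrative only – Benchmark.js’s internals are considerably more sophisticated):

```javascript
var log = [];

for (var cycle = 0; cycle < 3; cycle++) {
    log.push( "setup" );        // once per cycle, NOT once per iteration

    for (var i = 0; i < 4; i++) {
        log.push( "test" );     // the timed test case, repeated
    }

    log.push( "teardown" );     // also once per cycle
}

// `log` now contains "setup" 3 times but "test" 12 times
```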

Why does this matter? Let’s imagine you have a test case that looks like this:

a = a + "w";
b = a.charAt( 1 );

Then, you set up your test setup as follows:

var a = "x";

Your temptation is probably to believe that a is starting out as "x" for each test iteration.

But it’s not! It’s starting a at "x" for each test cycle, and then your repeated + "w" concatenations will be making a larger and larger a value, even though you’re only ever accessing the character "w" at the 1 position.

Where this most commonly bites you is when you make side effect changes to something like the DOM, like appending a child element. You may think your parent element is set as empty each time, but it’s actually getting lots of elements added, and that can significantly sway the results of your tests.

19.2 Context Is King

Don’t forget to check the context of a particular performance benchmark, especially a comparison between X and Y tasks. Just because your test reveals that X is faster than Y doesn’t mean that the conclusion “X is faster than Y” is actually relevant.

For example, let’s say a performance test reveals that X runs 10,000,000 operations per second, and Y runs at 8,000,000 operations per second. You could claim that Y is 20% slower than X, and you’d be mathematically correct, but your assertion doesn’t hold as much water as you’d think.

Let’s think about the results more critically: 10,000,000 operations per second is 10,000 operations per millisecond, and 10 operations per microsecond. In other words, a single operation takes 0.1 microseconds, or 100 nanoseconds. It’s hard to fathom just how small 100ns is, but for comparison, it’s often cited that the human eye isn’t generally capable of distinguishing anything less than 100ms, which is one million times slower than the 100ns speed of the X operation.

Even recent scientific studies suggesting that maybe the brain can process visual input in as little as 13ms (about 8x faster than previously asserted) would mean that X is still running 130,000 times faster than the human brain can perceive a distinct thing happening. X is going really, really fast.

But more importantly, let’s talk about the difference between X and Y, the 2,000,000 operations per second difference. If X takes 100ns, and Y takes 125ns, the difference is 25ns, which in the best case is still only about one 520-thousandth of the interval the human brain can perceive.
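Working the per-operation times straight from those throughput figures:

```javascript
var NS_PER_SECOND = 1e9;

var xNs = NS_PER_SECOND / 10000000;     // 100ns per X operation
var yNs = NS_PER_SECOND / 8000000;      // 125ns per Y operation

var diffNs = yNs - xNs;                 // 25ns difference

// ratio of the ~13ms perception threshold to that difference
var ratio = (13 * 1e6) / diffNs;        // 520000
```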

What’s my point? None of this performance difference matters, at all!

But wait, what if this operation is going to happen a whole bunch of times in a row? Then the difference could add up, right?

OK, so what we’re asking then is, how likely is it that operation X is going to be run over and over again, one right after the other, and that this has to happen roughly 500,000 times just to get a sliver of a hope the human brain could perceive it. More likely, it’d have to happen 5,000,000 to 10,000,000 times together in a tight loop to even approach relevance.

While the computer scientist in you might protest that this is possible, the louder voice of realism in you should sanity check just how likely or unlikely that really is. Even if it is relevant in rare occasions, it’s irrelevant in most situations.

The vast majority of your benchmark results on tiny operations – like the ++x vs x++ myth – are just totally bogus for supporting the conclusion that X should be favored over Y on a performance basis.

19.2.1 Engine Optimizations

You simply cannot reliably extrapolate that if X was 10 microseconds faster than Y in your isolated test, that means X is always faster than Y and should always be used. That’s not how performance works. It’s vastly more complicated.

For example, let’s imagine (purely hypothetical) that you test some microperformance behavior such as comparing:

var twelve = "12";
var foo = "foo";

// test 1
var X1 = parseInt( twelve );
var X2 = parseInt( foo );

// test 2
var Y1 = Number( twelve );
var Y2 = Number( foo );

If you understand what parseInt(..) does compared to Number(..), you might intuit that parseInt(..) potentially has “more work” to do, especially in the foo case. Or you might intuit that they should have the same amount of work to do in the foo case, as both should be able to stop at the first character "f".

Which intuition is correct? I honestly don’t know. But I’ll make the case it doesn’t matter what your intuition is. What might the results be when you test it? Again, I’m making up a pure hypothetical here, I haven’t actually tried, nor do I care.

Let’s pretend the test comes back that X and Y are statistically identical. Have you then confirmed your intuition about the "f" character thing? Nope.

It’s possible in our hypothetical that the engine might recognize that the variables twelve and foo are only being used in one place in each test, and so it might decide to inline those values. Then it may realize that Number( "12" ) can just be replaced by 12. And maybe it comes to the same conclusion with parseInt(..), or maybe not.

Or an engine’s dead-code removal heuristic could kick in, and it could realize that variables X and Y aren’t being used, so declaring them is irrelevant, so it doesn’t end up doing anything at all in either test.

And all that’s just made with the mindset of assumptions about a single test run. Modern engines are fantastically more complicated than what we’re intuiting here. They do all sorts of tricks, like tracing and tracking how a piece of code behaves over a short period of time, or with a particularly constrained set of inputs.

What if the engine optimizes a certain way because of the fixed input, but in your real program you give more varied input and the optimization decisions shake out differently (or not at all!)? Or what if the engine kicks in optimizations because it sees the code being run tens of thousands of times by the benchmarking utility, but in your real program it will only run a hundred times in near proximity, and under those conditions the engine determines the optimizations are not worth it?

And all those optimizations we just hypothesized about might happen in our constrained test but maybe the engine wouldn’t do them in a more complex program (for various reasons). Or it could be reversed – the engine might not optimize such trivial code but may be more inclined to optimize it more aggressively when the system is already more taxed by a more sophisticated program.

The point I’m trying to make is that you really don’t know for sure exactly what’s going on under the covers. All the guesses and hypotheses you can muster amount to hardly anything concrete for really making such decisions.

Does that mean you can’t really do any useful testing? Definitely not!

What this boils down to is that testing not real code gives you not real results. In so much as is possible and practical, you should test actual real, non-trivial snippets of your code, and under the most realistic conditions you can manage. Only then will the results you get have a chance to approximate reality.

Microbenchmarks like ++x vs x++ are so incredibly likely to be bogus, we might as well just flatly assume them as such.

19.3 jsPerf.com

While Benchmark.js is useful for testing the performance of your code in whatever JS environment you’re running, it cannot be stressed enough that you need to compile test results from lots of different environments (desktop browsers, mobile devices, etc.) if you want to have any hope of reliable test conclusions.

For example, Chrome on a high-end desktop machine is not likely to perform anywhere near the same as Chrome mobile on a smartphone. And a smartphone with a full battery charge is not likely to perform anywhere near the same as a smartphone with 2% battery life left, when the device is starting to power down the radio and processor.

If you want to make assertions like “X is faster than Y” in any reasonable sense across more than just a single environment, you’re going to need to actually test as many of those real world environments as possible. Just because Chrome executes some X operation faster than Y doesn’t mean that all browsers do. And of course you also probably will want to cross-reference the results of multiple browser test runs with the demographics of your users.

There’s an awesome website for this purpose called jsPerf (http://jsperf.com). It uses the Benchmark.js library we talked about earlier to run statistically accurate and reliable tests, and publishes each test at an openly available URL that you can pass around to others.

Each time a test is run, the results are collected and persisted with the test, and the cumulative test results are graphed on the page for anyone to see.

When creating a test on the site, you start out with two test cases to fill in, but you can add as many as you need. You also have the ability to set up setup code that is run at the beginning of each test cycle and teardown code run at the end of each cycle.

Note: A trick for doing just one test case (if you’re benchmarking a single approach instead of a head-to-head) is to fill in the second test input boxes with placeholder text on first creation, then edit the test and leave the second test blank, which will delete it. You can always add more test cases later.

You can define the initial page setup (importing libraries, defining utility helper functions, declaring variables, etc.). There are also options for defining setup and teardown behavior if needed – consult the “Setup/Teardown” section in the Benchmark.js discussion earlier.

19.3.1 Sanity Check

jsPerf is a fantastic resource, but there are an awful lot of published tests that, when you analyze them, turn out to be quite flawed or bogus, for any of a variety of reasons outlined so far in this chapter.

Consider:

// Case 1
var x = [];
for (var i=0; i<10; i++) {
    x[i] = "x";
}

// Case 2
var x = [];
for (var i=0; i<10; i++) {
    x[x.length] = "x";
}

// Case 3
var x = [];
for (var i=0; i<10; i++) {
    x.push( "x" );
}

Some observations to ponder about this test scenario:

  • It’s extremely common for devs to put their own loops into test cases, and they forget that Benchmark.js already does all the repetition you need. There’s a really strong chance that the for loops in these cases are totally unnecessary noise.
  • The declaring and initializing of x is included in each test case, possibly unnecessarily. Recall from earlier that if x = [] were in the setup code, it wouldn’t actually be run before each test iteration, but instead once at the beginning of each cycle. That means x would continue growing quite large, not just the size 10 implied by the for loops.

So is the intent to make sure the tests are constrained only to how the JS engine behaves with very small arrays (size 10)? That could be the intent, but if it is, you have to consider if that’s not focusing far too much on nuanced internal implementation details.

On the other hand, does the intent of the test embrace the context that the arrays will actually be growing quite large? Is the JS engines’ behavior with larger arrays relevant and accurate when compared with the intended real world usage?

  • Is the intent to find out how much x.length or x.push(..) add to the performance of the operation to append to the x array? OK, that might be a valid thing to test. But then again, push(..) is a function call, so of course it’s going to be slower than [..] access. Arguably, cases 1 and 2 are fairer than case 3.

Here’s another example that illustrates a common apples-to-oranges flaw:

// Case 1
var x = ["John","Albert","Sue","Frank","Bob"];
x.sort();

// Case 2
var x = ["John","Albert","Sue","Frank","Bob"];
x.sort( function mySort(a,b){
    if (a < b) return -1;
    if (a > b) return 1;
    return 0;
} );

Here, the obvious intent is to find out how much slower the custom mySort(..) comparator is than the built-in default comparator. But by specifying the function mySort(..) as an inline function expression, you’ve created an unfair/bogus test. Here, the second case is not only testing a custom user JS function, but it’s also testing creating a new function expression for each iteration.

Would it surprise you to find out that if you run a similar test but update it to isolate only for creating an inline function expression versus using a pre-declared function, the inline function expression creation can be from 2% to 20% slower!?

Unless your intent with this test is to consider the inline function expression creation “cost,” a better/fairer test would put mySort(..)’s declaration in the page setup – don’t put it in the test setup as that’s unnecessary redeclaration for each cycle – and simply reference it by name in the test case: x.sort(mySort).
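Sketched out, that fairer arrangement would look something like this (a hypothetical layout of the jsPerf inputs):

```javascript
// page setup (declared once, outside the timed test):
function mySort(a,b) {
    if (a < b) return -1;
    if (a > b) return 1;
    return 0;
}

// test case (the only part being timed):
var x = ["John","Albert","Sue","Frank","Bob"];
x.sort( mySort );

x;      // ["Albert","Bob","Frank","John","Sue"]
```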

Building on the previous example, another pitfall is in opaquely avoiding or adding “extra work” to one test case that creates an apples-to-oranges scenario:

// Case 1
var x = [12,-14,0,3,18,0,2.9];
x.sort();

// Case 2
var x = [12,-14,0,3,18,0,2.9];
x.sort( function mySort(a,b){
    return a - b;
} );

Setting aside the previously mentioned inline function expression pitfall, the second case’s mySort(..) works here because you have provided it numbers, but it would of course have failed with strings. The first case doesn’t throw an error, but it actually behaves differently and has a different outcome! It should be obvious, but: a different outcome between two test cases almost certainly invalidates the entire test!

But beyond the different outcomes, in this case, the built-in sort(..)’s comparator is actually doing “extra work” that mySort(..) does not, in that the built-in one coerces the compared values to strings and does lexicographic comparison. The first snippet results in [-14, 0, 0, 12, 18, 2.9, 3] while the second snippet results (likely more accurately based on intent) in [-14, 0, 0, 2.9, 3, 12, 18].

So that test is unfair because it’s not actually doing the same task between the cases. Any results you get are bogus.
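You can verify the divergent outcomes directly in any JS engine (as a correctness check, not a benchmark):

```javascript
var nums = [12, -14, 0, 3, 18, 0, 2.9];

// default comparator: coerces values to strings, compares lexicographically
var lexicographic = nums.slice().sort();

// numeric comparator
var numeric = nums.slice().sort( function(a,b){ return a - b; } );

lexicographic;  // [-14, 0, 0, 12, 18, 2.9, 3]
numeric;        // [-14, 0, 0, 2.9, 3, 12, 18]
```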

These same pitfalls can even be much more subtle:

// Case 1
var x = false;
var y = x ? 1 : 2;

// Case 2
var x;
var y = x ? 1 : 2;

Here, the intent might be to test the performance impact of the coercion to a Boolean that the ? : operator will do if the x expression is not already a Boolean (see the Types & Grammar title of this book series). So, you’re apparently OK with the fact that there is extra work to do the coercion in the second case.

The subtle problem? You’re setting x’s value in the first case and not setting it in the other, so you’re actually doing work in the first case that you’re not doing in the second. To eliminate any potential (albeit minor) skew, try:

// Case 1
var x = false;
var y = x ? 1 : 2;

// Case 2
var x = undefined;
var y = x ? 1 : 2;

Now there’s an assignment in both cases, so the thing you want to test – the coercion of x or not – has likely been more accurately isolated and tested.

19.4 Writing Good Tests

Let me see if I can articulate the bigger point I’m trying to make here.

Good test authoring requires careful analytical thinking about what differences exist between two test cases and whether the differences between them are intentional or unintentional.

Intentional differences are of course normal and OK, but it’s too easy to create unintentional differences that skew your results. You have to be really, really careful to avoid that skew. Moreover, you may intend a difference but it may not be obvious to other readers of your test what your intent was, so they may doubt (or trust!) your test incorrectly. How do you fix that?

Write better, clearer tests. But also, take the time to document (using the jsPerf.com “Description” field and/or code comments) exactly what the intent of your test is, even to the nuanced detail. Call out the intentional differences, which will help others and your future self to better identify unintentional differences that could be skewing the test results.

Isolate things which aren’t relevant to your test by pre-declaring them in the page or test setup settings so they’re outside the timed parts of the test.

Instead of trying to narrow in on a tiny snippet of your real code and benchmarking just that piece out of context, tests and benchmarks are better when they include a larger (while still relevant) context. Those tests also tend to run slower, which means any differences you spot are more relevant in context.

19.5 Microperformance

OK, until now we’ve been dancing around various microperformance issues and generally looking disfavorably upon obsessing about them. I want to take just a moment to address them directly.

The first thing you need to get more comfortable with when thinking about performance benchmarking your code is that the code you write is not always the code the engine actually runs. We briefly looked at that topic back in Chapter 1 when we discussed statement reordering by the compiler, but here we’re going to suggest the compiler can sometimes decide to run different code than you wrote, not just in different orders but different in substance.

Let’s consider this piece of code:

var foo = 41;

(function(){
    (function(){
        (function(baz){
            var bar = foo + baz;
            // ..
        })(1);
    })();
})();

You may think about the foo reference in the innermost function as needing to do a three-level scope lookup. We covered in the Scope & Closures title of this book series how lexical scope works, and the fact that the compiler generally caches such lookups so that referencing foo from different scopes doesn’t really practically “cost” anything extra.

But there’s something deeper to consider. What if the compiler realizes that foo isn’t referenced anywhere else but that one location, and it further notices that the value never is anything except the 41 as shown?

Isn’t it quite possible and acceptable that the JS compiler could decide to just remove the foo variable entirely, and inline the value, such as this:

(function(){
    (function(){
        (function(baz){
            var bar = 41 + baz;
            // ..
        })(1);
    })();
})();

Note: Of course, the compiler could probably also do a similar analysis and rewrite with the baz variable here, too.

When you begin to think about your JS code as being a hint or suggestion to the engine of what to do, rather than a literal requirement, you realize that a lot of the obsession over discrete syntactic minutia is most likely unfounded.

Another example:

function factorial(n) {
    if (n < 2) return 1;
    return n * factorial( n - 1 );
}

factorial( 5 );     // 120

Ah, the good ol’ fashioned “factorial” algorithm! You might assume that the JS engine will run that code mostly as is. And to be honest, it might – I’m not really sure.

But as an anecdote, the same code expressed in C and compiled with advanced optimizations would result in the compiler realizing that the call factorial(5) can just be replaced with the constant value 120, eliminating the function and call entirely!

Moreover, some engines have a practice called “unrolling recursion,” where the engine realizes that the recursion you’ve expressed can actually be done “easier” (i.e., more optimally) with a loop. It’s possible the preceding code could be rewritten by a JS engine to run as:

function factorial(n) {
    if (n < 2) return 1;

    var res = 1;
    for (var i=n; i>1; i--) {
        res *= i;
    }
    return res;
}

factorial( 5 );     // 120

Now, let’s imagine that in the earlier snippet you had been worried about whether n * factorial(n-1) or n *= factorial(--n) runs faster. Maybe you even did a performance benchmark to try to figure out which was better. But you miss the fact that in the bigger context, the engine may not run either line of code because it may unroll the recursion!

Speaking of --, --n versus n-- is often cited as one of those places where you can optimize by choosing the --n version, because theoretically it requires less effort down at the assembly level of processing.

That sort of obsession is basically nonsense in modern JavaScript. That’s the kind of thing you should be letting the engine take care of. You should write the code that makes the most sense. Compare these three for loops:

// Option 1
for (var i=0; i<10; i++) {
    console.log( i );
}

// Option 2
for (var i=0; i<10; ++i) {
    console.log( i );
}

// Option 3
for (var i=-1; ++i<10; ) {
    console.log( i );
}

Even if you have some theory where the second or third option is more performant than the first option by a tiny bit, which is dubious at best, the third loop is more confusing because you have to start with -1 for i to account for the fact that ++i pre-increment is used. And the difference between the first and second options is really quite irrelevant.

It’s entirely possible that a JS engine may see a place where i++ is used and realize that it can safely replace it with the ++i equivalent, which means your time spent deciding which one to pick was completely wasted and the outcome moot.

Here’s another common example of silly microperformance obsession:

var x = [ .. ];

// Option 1
for (var i=0; i < x.length; i++) {
    // ..
}

// Option 2
for (var i=0, len = x.length; i < len; i++) {
    // ..
}

The theory here goes that you should cache the length of the x array in the variable len, because ostensibly it doesn’t change, to avoid paying the price of x.length being consulted for each iteration of the loop.

If you run performance benchmarks around x.length usage compared to caching it in a len variable, you’ll find that while the theory sounds nice, in practice any measured differences are statistically completely irrelevant.

In fact, in some engines like v8, it can be shown (http://mrale.ph/blog/2014/12/24/array-length-caching.html) that you could make things slightly worse by pre-caching the length instead of letting the engine figure it out for you. Don’t try to outsmart your JavaScript engine; you’ll probably lose when it comes to performance optimizations.

19.5.1 Not All Engines Are Alike

The different JS engines in various browsers can all be “spec compliant” while having radically different ways of handling code. The JS specification doesn’t require anything performance related – well, except ES6’s “Tail Call Optimization” covered later in this chapter.

The engines are free to decide that one operation will receive its attention to optimize, perhaps trading off for lesser performance on another operation. It can be very tenuous to find an approach for an operation that always runs faster in all browsers.

There’s a movement among some in the JS dev community, especially those who work with Node.js, to analyze the specific internal implementation details of the v8 JavaScript engine and make decisions about writing JS code that is tailored to take best advantage of how v8 works. You can actually achieve a surprisingly high degree of performance optimization with such endeavors, so the payoff for the effort can be quite high.

Some commonly cited examples (https://github.com/petkaantonov/bluebird/wiki/Optimization-killers) for v8:

  • Don’t pass the arguments variable from one function to any other function, as such “leakage” slows down the function implementation.
  • Isolate a try..catch in its own function. Browsers struggle with optimizing any function with a try..catch in it, so moving that construct to its own function means you contain the de-optimization harm while letting the surrounding code be optimizable.
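To make the second tip concrete, here’s a minimal sketch of isolating a try..catch. The function names (parseRecordsSlow, tryParse, parseRecords) are hypothetical, invented purely for illustration:

```js
// Harder to optimize: the whole loop lives in a function that
// contains a `try..catch`, so the entire function may be deoptimized.
function parseRecordsSlow(records) {
    var out = [];
    for (var i = 0; i < records.length; i++) {
        try {
            out.push( JSON.parse( records[i] ) );
        }
        catch (err) {
            out.push( null );
        }
    }
    return out;
}

// Contained: only the tiny `tryParse(..)` helper carries the
// `try..catch`; the loop in `parseRecords(..)` stays optimizable.
function tryParse(str) {
    try {
        return JSON.parse( str );
    }
    catch (err) {
        return null;
    }
}

function parseRecords(records) {
    var out = [];
    for (var i = 0; i < records.length; i++) {
        out.push( tryParse( records[i] ) );
    }
    return out;
}
```

Both versions behave identically; the only difference is where the try..catch construct lives, and thus how far any de-optimization spreads.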

But rather than focus on those tips specifically, let’s sanity check the v8-only optimization approach in a general sense.

Are you genuinely writing code that only needs to run in one JS engine? Even if your code is entirely intended for Node.js right now, is the assumption that v8 will always be the used JS engine reliable? Is it possible that someday a few years from now, there’s another server-side JS platform besides Node.js that you choose to run your code on? What if what you optimized for before is now a much slower way of doing that operation on the new engine?

Or what if your code always stays running on v8 from here on out, but v8 decides at some point to change the way some set of operations works such that what used to be fast is now slow, and vice versa?

These scenarios aren’t just theoretical, either. It used to be that it was faster to put multiple string values into an array and then call join("") on the array to concatenate the values than to just use + concatenation directly with the values. The historical reason for this is nuanced, but it has to do with internal implementation details about how string values were stored and managed in memory.

As a result, “best practice” advice at the time disseminated across the industry suggesting developers always use the array join(..) approach. And many followed.

Except, somewhere along the way, the JS engines changed approaches for internally managing strings, and specifically put in optimizations for + concatenation. They didn’t slow down join(..) per se, but they put more effort into helping + usage, as it was still quite a bit more widespread.

Note: The practice of standardizing or optimizing some particular approach based mostly on its existing widespread usage is often called (metaphorically) “paving the cowpath.”

Once that new approach to handling strings and concatenation took hold, unfortunately all the code out in the wild that was using array join(..) to concatenate strings was then sub-optimal.
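As a simplified sketch of the two approaches in question: both produce the same string, and only the engine-internal cost ever differed between them:

```js
var parts = [ "You ", "Don't ", "Know ", "JS" ];

// old "best practice": collect pieces in an array, then `join(..)` once
var s1 = parts.join( "" );

// direct `+` concatenation, which engines later optimized heavily
var s2 = "";
for (var i = 0; i < parts.length; i++) {
    s2 += parts[i];
}

s1 === s2;      // true
```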

Another example: at one time, the Opera browser differed from other browsers in how it handled the boxing/unboxing of primitive wrapper objects (see the Types & Grammar title of this book series). As such, their advice to developers was to use a String object instead of the primitive string value if properties like length or methods like charAt(..) needed to be accessed. This advice may have been correct for Opera at the time, but it was literally completely opposite for other major contemporary browsers, as they had optimizations specifically for the string primitives and not their object wrapper counterparts.

I think these various gotchas are at least possible, if not likely, for code even today. So I’m very cautious about making wide ranging performance optimizations in my JS code based purely on engine implementation details, especially if those details are only true of a single engine.

The reverse is also something to be wary of: you shouldn’t necessarily change a piece of code to work around one engine’s difficulty with running it in an acceptably performant way.

Historically, IE has been the brunt of many such frustrations, given that there have been plenty of scenarios in older IE versions where it struggled with some performance aspect that other major browsers of the time seemed not to have much trouble with. The string concatenation discussion we just had was actually a real concern back in the IE6 and IE7 days, where it was possible to get better performance out of join(..) than +.

But it’s troublesome to suggest that just one browser’s trouble with performance is justification for using a code approach that quite possibly could be sub-optimal in all other browsers. Even if the browser in question has a large market share for your site’s audience, it may be more practical to write the proper code and rely on the browser to update itself with better optimizations eventually.

“There is nothing more permanent than a temporary hack.” Chances are, the code you write now to work around some performance bug will probably outlive the performance bug in the browser itself.

In the days when a browser only updated once every five years, that was a tougher call to make. But as it stands now, browsers across the board are updating at a much more rapid interval (though obviously the mobile world still lags), and they’re all competing to optimize web features better and better.

If you run across a case where a browser does have a performance wart that others don’t suffer from, make sure to report it to them through whatever means you have available. Most browsers have open public bug trackers suitable for this purpose.

Tip: I’d only suggest working around a performance issue in a browser if it was a really drastic show-stopper, not just an annoyance or frustration. And I’d be very careful to check that the performance hack didn’t have noticeable negative side effects in another browser.

19.5.2 Big Picture

Instead of worrying about all these microperformance nuances, we should instead be looking at big-picture types of optimizations.

How do you know what’s big picture or not? You have to first understand if your code is running on a critical path or not. If it’s not on the critical path, chances are your optimizations are not worth much.

Ever heard the admonition, “that’s premature optimization!”? It comes from a famous quote from Donald Knuth: “premature optimization is the root of all evil.” Many developers cite this quote to suggest that most optimizations are “premature” and are thus a waste of effort. The truth is, as usual, more nuanced.

Here is Knuth’s quote, in context:

Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. [emphasis added]

(http://web.archive.org/web/20130731202547/http://pplab.snu.ac.kr/courses/adv_pl05/papers/p261-knuth.pdf, Computing Surveys, Vol 6, No 4, December 1974)

I believe it’s a fair paraphrasing to say that Knuth meant: “non-critical path optimization is the root of all evil.” So the key is to figure out if your code is on the critical path – you should optimize it! – or not.

I’d even go so far as to say this: no amount of time spent optimizing critical paths is wasted, no matter how little is saved; but no amount of optimization on noncritical paths is justified, no matter how much is saved.

If your code is on the critical path, such as a “hot” piece of code that’s going to be run over and over again, or in UX critical places where users will notice, like an animation loop or CSS style updates, then you should spare no effort in trying to employ relevant, measurably significant optimizations.

For example, consider a critical path animation loop that needs to coerce a string value to a number. There are of course multiple ways to do that (see the Types & Grammar title of this book series), but which one if any is the fastest?

var x = "42";   // need number `42`

// Option 1: let implicit coercion automatically happen
var y = x / 2;

// Option 2: use `parseInt(..)`
var y = parseInt( x, 10 ) / 2;

// Option 3: use `Number(..)`
var y = Number( x ) / 2;

// Option 4: use `+` unary operator
var y = +x / 2;

// Option 5: use `|` binary (bitwise OR) operator
var y = (x | 0) / 2;

Note: I will leave it as an exercise to the reader to set up a test if you’re interested in examining the minute differences in performance among these options.

When considering these different options, as they say, “One of these things is not like the others.” parseInt(..) does the job, but it also does a lot more – it parses the string rather than just coercing. You can probably guess, correctly, that parseInt(..) is a slower option, and you should probably avoid it.

Of course, if x can ever be a value that needs parsing, such as "42px" (like from a CSS style lookup), then parseInt(..) really is the only suitable option!
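For example, the behaviors diverge exactly when the string needs parsing rather than pure coercion (the outputs in the comments follow the standard coercion and parsing rules):

```js
var a = parseInt( "42px", 10 );  // 42 -- parses the leading digits, ignores the rest
var b = Number( "42px" );        // NaN -- coercion requires the whole string be numeric
var c = +"42px";                 // NaN -- same coercion as `Number(..)`
var d = "42px" | 0;              // 0 -- `NaN` becomes `0` in the bitwise `ToInt32` conversion
```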

Number(..) is also a function call. From a behavioral perspective, it’s identical to the + unary operator option, but it may in fact be a little slower, requiring more machinery to execute the function. Of course, it’s also possible that the JS engine recognizes this behavioral symmetry and just handles the inlining of Number(..)’s behavior (aka +x) for you!

But remember, obsessing about +x versus x | 0 is in most cases likely a waste of effort. This is a microperformance issue, and one that you shouldn’t let dictate/degrade the readability of your program.

While performance is very important in critical paths of your program, it’s not the only factor. Among several options that are roughly similar in performance, readability should be another important concern.

19.6 Tail Call Optimization (TCO)

As we briefly mentioned earlier, ES6 includes a specific requirement that ventures into the world of performance. It’s related to a specific form of optimization that can occur with function calls: tail call optimization.

Briefly, a “tail call” is a function call that appears at the “tail” of another function, such that after the call finishes, there’s nothing left to do (except perhaps return its result value).

For example, here’s a non-recursive setup with tail calls:

function foo(x) {
    return x;
}

function bar(y) {
    return foo( y + 1 );    // tail call
}

function baz() {
    return 1 + bar( 40 );   // not tail call
}

baz();                      // 42

foo(y+1) is a tail call in bar(..) because after foo(..) finishes, bar(..) is also finished except for returning the result of the foo(..) call. However, bar(40) is not a tail call because after it completes, its result value must be added to 1 before baz() can return it.

Without getting into too much nitty-gritty detail, calling a new function requires an extra amount of reserved memory to manage the call stack, called a “stack frame.” So the preceding snippet would generally require a stack frame for each of baz(), bar(..), and foo(..) all at the same time.

However, if a TCO-capable engine can realize that the foo(y+1) call is in tail position meaning bar(..) is basically complete, then when calling foo(..), it doesn’t need to create a new stack frame, but can instead reuse the existing stack frame from bar(..). That’s not only faster, but it also uses less memory.

That sort of optimization isn’t a big deal in a simple snippet, but it becomes a much bigger deal when dealing with recursion, especially if the recursion could have resulted in hundreds or thousands of stack frames. With TCO the engine can perform all those calls with a single stack frame!

Recursion is a hairy topic in JS because without TCO, engines have had to implement arbitrary (and different!) limits to how deep they will let the recursion stack get before they stop it, to prevent running out of memory. With TCO, recursive functions with tail position calls can essentially run unbounded, because there’s never any extra usage of memory!
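To see those stack limits in action, consider this sketch. The 1e6 depth is an arbitrary illustrative number; in an engine without TCO (which includes most engines at the time of writing), the deep call exhausts the stack:

```js
function countDown(n) {
    if (n === 0) return "done";
    return countDown( n - 1 );  // tail call
}

countDown( 10 );        // "done"

// in an engine without TCO, a deep enough recursion blows the stack:
try {
    countDown( 1e6 );
}
catch (err) {
    // typically a `RangeError` ("Maximum call stack size exceeded")
}
```

With TCO, that same countDown( 1e6 ) call would complete in a single stack frame.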

Consider that recursive factorial(..) from before, but rewritten to make it TCO friendly:

function factorial(n) {
    function fact(n,res) {
        if (n < 2) return res;

        return fact( n - 1, n * res );
    }

    return fact( n, 1 );
}

factorial( 5 );     // 120

This version of factorial(..) is still recursive, but it’s also optimizable with TCO, because both inner fact(..) calls are in tail position.

Note: It’s important to note that TCO only applies if there’s actually a tail call. If you write recursive functions without tail calls, the performance will still fall back to normal stack frame allocation, and the engines’ limits on such recursive call stacks will still apply. Many recursive functions can be rewritten as we just showed with factorial(..), but it takes careful attention to detail.

One reason that ES6 requires engines to implement TCO rather than leaving it up to their discretion is because the lack of TCO actually tends to reduce the chances that certain algorithms will be implemented in JS using recursion, for fear of the call stack limits.

If the lack of TCO in the engine would just gracefully degrade to slower performance in all cases, it probably wouldn’t have been something that ES6 needed to require. But because the lack of TCO can actually make certain programs impractical, it’s more an important feature of the language than just a hidden implementation detail.

ES6 guarantees that from now on, JS developers will be able to rely on this optimization across all ES6+ compliant browsers. That’s a win for JS performance!

19.7 Review

Effectively benchmarking performance of a piece of code, especially to compare it to another option for that same code to see which approach is faster, requires careful attention to detail.

Rather than rolling your own statistically valid benchmarking logic, just use the Benchmark.js library, which does that for you. But be careful about how you author tests, because it’s far too easy to construct a test that seems valid but that’s actually flawed – even tiny differences can skew the results to be completely unreliable.

It’s important to get as many test results from as many different environments as possible to eliminate hardware/device bias. jsPerf.com is a fantastic website for crowdsourcing performance benchmark test runs.

Many common performance tests unfortunately obsess about irrelevant microperformance details like x++ versus ++x. Writing good tests means understanding how to focus on big picture concerns, like optimizing on the critical path, and avoiding falling into traps like different JS engines’ implementation details.

Tail call optimization (TCO) is a required optimization as of ES6 that will make some recursive patterns practical in JS where they would have been impossible otherwise. TCO allows a function call in the tail position of another function to execute without needing any extra resources, which means the engine no longer needs to place arbitrary restrictions on call stack depth for recursive algorithms.

20 You Don’t Know JS: Async & Performance

21 Foreword

Over the years, my employer has trusted me enough to conduct interviews. If we’re looking for someone with skills in JavaScript, my first line of questioning… actually that’s not true, I first check if the candidate needs the bathroom and/or a drink, because comfort is important, but once I’m past the bit about the candidate’s fluid in/out-take, I set about determining if the candidate knows JavaScript, or just jQuery.

Not that there’s anything wrong with jQuery. It lets you do a lot without really knowing JavaScript, and that’s a feature not a bug. But if the job calls for advanced skills in JavaScript performance and maintainability, you need someone who knows how libraries such as jQuery are put together. You need to be able to harness the core of JavaScript the same way they do.

If I want to get a picture of someone’s core JavaScript skill, I’m most interested in what they make of closures (you’ve read that book of this series already, right?) and how to get the most out of asynchronicity, which brings us to this book.

For starters, you’ll be taken through callbacks, the bread and butter of asynchronous programming. Of course, bread and butter does not make for a particularly satisfying meal, but the next course is full of tasty tasty promises!

If you don’t know promises, now is the time to learn. Promises are now the official way to provide async return values in both JavaScript and the DOM. All future async DOM APIs will use them, many already do, so be prepared! At the time of writing, Promises have shipped in most major browsers, with IE shipping soon. Once you’ve finished that, I hope you left room for the next course, Generators.

Generators snuck their way into stable versions of Chrome and Firefox without too much pomp and ceremony, because, frankly, they’re more complicated than they are interesting. Or, that’s what I thought until I saw them combined with promises. There, they become an important tool in readability and maintenance.

For dessert, well, I won’t spoil the surprise, but prepare to gaze into the future of JavaScript! Features that give you more and more control over concurrency and asynchronicity.

Well, I won’t block your enjoyment of the book any longer, on with the show! If you’ve already read part of the book before reading this Foreword, give yourself 10 asynchronous points! You deserve them!

Jake Archibald<br> jakearchibald.com, [@jaffathecake](http://twitter.com/jaffathecake)<br> Developer Advocate at Google Chrome

22 You Don’t Know JS: Async & Performance

22.1 Table of Contents

  • Foreword
  • Preface
  • Chapter 1: Asynchrony: Now & Later
    • A Program In Chunks
    • Event Loop
    • Parallel Threading
    • Concurrency
    • Jobs
    • Statement Ordering
  • Chapter 2: Callbacks
    • Continuations
    • Sequential Brain
    • Trust Issues
    • Trying To Save Callbacks
  • Chapter 3: Promises
    • What is a Promise?
    • Thenable Duck-Typing
    • Promise Trust
    • Chain Flow
    • Error Handling
    • Promise Patterns
    • Promise API Recap
    • Promise Limitations
  • Chapter 4: Generators
    • Breaking Run-to-completion
    • Generator’ing Values
    • Iterating Generators Asynchronously
    • Generators + Promises
    • Generator Delegation
    • Generator Concurrency
    • Thunks
    • Pre-ES6 Generators
  • Chapter 5: Program Performance
    • Web Workers
    • SIMD
    • asm.js
  • Chapter 6: Benchmarking & Tuning
    • Benchmarking
    • Context Is King
    • jsPerf.com
    • Writing Good Tests
    • Microperformance
    • Tail Call Optimization (TCO)
  • Appendix A: asynquence Library
  • Appendix B: Advanced Async Patterns
  • Appendix C: Acknowledgments

23 You Don’t Know JS: ES6 & Beyond

<img src="cover.jpg" width="300">


Purchase digital/print copy from O’Reilly


Table of Contents

24 You Don’t Know JS: ES6 & Beyond

25 Appendix A: Acknowledgments

I have many people to thank for making this book title and the overall series happen.

First, I must thank my wife Christen Simpson, and my two kids Ethan and Emily, for putting up with Dad always pecking away at the computer. Even when not writing books, my obsession with JavaScript glues my eyes to the screen far more than it should. That time I borrow from my family is the reason these books can so deeply and completely explain JavaScript to you, the reader. I owe my family everything.

I’d like to thank my editors at O’Reilly, namely Simon St.Laurent and Brian MacDonald, as well as the rest of the editorial and marketing staff. They are fantastic to work with, and have been especially accommodating during this experiment into “open source” book writing, editing, and production.

Thank you to the many folks who have participated in making this book series better by providing editorial suggestions and corrections, including Shelley Powers, Tim Ferro, Evan Borden, Forrest L. Norvell, Jennifer Davis, Jesse Harlin, and many others. A big thank you to Rick Waldron for writing the Foreword for this title.

Thank you to the countless folks in the community, including members of the TC39 committee, who have shared so much knowledge with the rest of us, and especially tolerated my incessant questions and explorations with patience and detail. John-David Dalton, Juriy “kangax” Zaytsev, Mathias Bynens, Axel Rauschmayer, Nicholas Zakas, Angus Croll, Reginald Braithwaite, Dave Herman, Brendan Eich, Allen Wirfs-Brock, Bradley Meck, Domenic Denicola, David Walsh, Tim Disney, Peter van der Zee, Andrea Giammarchi, Kit Cambridge, Eric Elliott, André Bargull, Caitlin Potter, Brian Terlson, Ingvar Stepanyan, Chris Dickinson, Luke Hoban, and so many others, I can’t even scratch the surface.

The You Don’t Know JS book series was born on Kickstarter, so I also wish to thank all my (nearly) 500 generous backers, without whom this book series could not have happened:

Jan Szpila, nokiko, Murali Krishnamoorthy, Ryan Joy, Craig Patchett, pdqtrader, Dale Fukami, ray hatfield, R0drigo Perez [Mx], Dan Petitt, Jack Franklin, Andrew Berry, Brian Grinstead, Rob Sutherland, Sergi Meseguer, Phillip Gourley, Mark Watson, Jeff Carouth, Alfredo Sumaran, Martin Sachse, Marcio Barrios, Dan, AimelyneM, Matt Sullivan, Delnatte Pierre-Antoine, Jake Smith, Eugen Tudorancea, Iris, David Trinh, simonstl, Ray Daly, Uros Gruber, Justin Myers, Shai Zonis, Mom & Dad, Devin Clark, Dennis Palmer, Brian Panahi Johnson, Josh Marshall, Marshall, Dennis Kerr, Matt Steele, Erik Slagter, Sacah, Justin Rainbow, Christian Nilsson, Delapouite, D.Pereira, Nicolas Hoizey, George V. Reilly, Dan Reeves, Bruno Laturner, Chad Jennings, Shane King, Jeremiah Lee Cohick, od3n, Stan Yamane, Marko Vucinic, Jim B, Stephen Collins, Ægir Þorsteinsson, Eric Pederson, Owain, Nathan Smith, Jeanetteurphy, Alexandre ELISÉ, Chris Peterson, Rik Watson, Luke Matthews, Justin Lowery, Morten Nielsen, Vernon Kesner, Chetan Shenoy, Paul Tregoing, Marc Grabanski, Dion Almaer, Andrew Sullivan, Keith Elsass, Tom Burke, Brian Ashenfelter, David Stuart, Karl Swedberg, Graeme, Brandon Hays, John Christopher, Gior, manoj reddy, Chad Smith, Jared Harbour, Minoru TODA, Chris Wigley, Daniel Mee, Mike, Handyface, Alex Jahraus, Carl Furrow, Rob Foulkrod, Max Shishkin, Leigh Penny Jr., Robert Ferguson, Mike van Hoenselaar, Hasse Schougaard, rajan venkataguru, Jeff Adams, Trae Robbins, Rolf Langenhuijzen, Jorge Antunes, Alex Koloskov, Hugh Greenish, Tim Jones, Jose Ochoa, Michael Brennan-White, Naga Harish Muvva, Barkóczi Dávid, Kitt Hodsden, Paul McGraw, Sascha Goldhofer, Andrew Metcalf, Markus Krogh, Michael Mathews, Matt Jared, Juanfran, Georgie Kirschner, Kenny Lee, Ted Zhang, Amit Pahwa, Inbal Sinai, Dan Raine, Schabse Laks, Michael Tervoort, Alexandre Abreu, Alan Joseph Williams, NicolasD, Cindy Wong, Reg Braithwaite, LocalPCGuy, Jon Friskics, Chris Merriman, John Pena, Jacob Katz, Sue Lockwood, 
Magnus Johansson, Jeremy Crapsey, Grzegorz Pawłowski, nico nuzzaci, Christine Wilks, Hans Bergren, charles montgomery, Ariel בר-לבב Fogel, Ivan Kolev, Daniel Campos, Hugh Wood, Christian Bradford, Frédéric Harper, Ionuţ Dan Popa, Jeff Trimble, Rupert Wood, Trey Carrico, Pancho Lopez, Joël kuijten, Tom A Marra, Jeff Jewiss, Jacob Rios, Paolo Di Stefano, Soledad Penades, Chris Gerber, Andrey Dolganov, Wil Moore III, Thomas Martineau, Kareem, Ben Thouret, Udi Nir, Morgan Laupies, jory carson-burson, Nathan L Smith, Eric Damon Walters, Derry Lozano-Hoyland, Geoffrey Wiseman, mkeehner, KatieK, Scott MacFarlane, Brian LaShomb, Adrien Mas, christopher ross, Ian Littman, Dan Atkinson, Elliot Jobe, Nick Dozier, Peter Wooley, John Hoover, dan, Martin A. Jackson, Héctor Fernando Hurtado, andy ennamorato, Paul Seltmann, Melissa Gore, Dave Pollard, Jack Smith, Philip Da Silva, Guy Israeli, @megalithic, Damian Crawford, Felix Gliesche, April Carter Grant, Heidi, jim tierney, Andrea Giammarchi, Nico Vignola, Don Jones, Chris Hartjes, Alex Howes, john gibbon, David J. 
Groom, BBox, Yu ‘Dilys’ Sun, Nate Steiner, Brandon Satrom, Brian Wyant, Wesley Hales, Ian Pouncey, Timothy Kevin Oxley, George Terezakis, sanjay raj, Jordan Harband, Marko McLion, Wolfgang Kaufmann, Pascal Peuckert, Dave Nugent, Markus Liebelt, Welling Guzman, Nick Cooley, Daniel Mesquita, Robert Syvarth, Chris Coyier, Rémy Bach, Adam Dougal, Alistair Duggin, David Loidolt, Ed Richer, Brian Chenault, GoldFire Studios, Carles Andrés, Carlos Cabo, Yuya Saito, roberto ricardo, Barnett Klane, Mike Moore, Kevin Marx, Justin Love, Joe Taylor, Paul Dijou, Michael Kohler, Rob Cassie, Mike Tierney, Cody Leroy Lindley, tofuji, Shimon Schwartz, Raymond, Luc De Brouwer, David Hayes, Rhys Brett-Bowen, Dmitry, Aziz Khoury, Dean, Scott Tolinski - Level Up, Clement Boirie, Djordje Lukic, Anton Kotenko, Rafael Corral, Philip Hurwitz, Jonathan Pidgeon, Jason Campbell, Joseph C., SwiftOne, Jan Hohner, Derick Bailey, getify, Daniel Cousineau, Chris Charlton, Eric Turner, David Turner, Joël Galeran, Dharma Vagabond, adam, Dirk van Bergen, dave ♥♫★ furf, Vedran Zakanj, Ryan McAllen, Natalie Patrice Tucker, Eric J. Bivona, Adam Spooner, Aaron Cavano, Kelly Packer, Eric J, Martin Drenovac, Emilis, Michael Pelikan, Scott F. 
Walter, Josh Freeman, Brandon Hudgeons, vijay chennupati, Bill Glennon, Robin R., Troy Forster, otaku_coder, Brad, Scott, Frederick Ostrander, Adam Brill, Seb Flippence, Michael Anderson, Jacob, Adam Randlett, Standard, Joshua Clanton, Sebastian Kouba, Chris Deck, SwordFire, Hannes Papenberg, Richard Woeber, hnzz, Rob Crowther, Jedidiah Broadbent, Sergey Chernyshev, Jay-Ar Jamon, Ben Combee, luciano bonachela, Mark Tomlinson, Kit Cambridge, Michael Melgares, Jacob Adams, Adrian Bruinhout, Bev Wieber, Scott Puleo, Thomas Herzog, April Leone, Daniel Mizieliński, Kees van Ginkel, Jon Abrams, Erwin Heiser, Avi Laviad, David newell, Jean-Francois Turcot, Niko Roberts, Erik Dana, Charles Neill, Aaron Holmes, Grzegorz Ziółkowski, Nathan Youngman, Timothy, Jacob Mather, Michael Allan, Mohit Seth, Ryan Ewing, Benjamin Van Treese, Marcelo Santos, Denis Wolf, Phil Keys, Chris Yung, Timo Tijhof, Martin Lekvall, Agendine, Greg Whitworth, Helen Humphrey, Dougal Campbell, Johannes Harth, Bruno Girin, Brian Hough, Darren Newton, Craig McPheat, Olivier Tille, Dennis Roethig, Mathias Bynens, Brendan Stromberger, sundeep, John Meyer, Ron Male, John F Croston III, gigante, Carl Bergenhem, B.J. 
May, Rebekah Tyler, Ted Foxberry, Jordan Reese, Terry Suitor, afeliz, Tom Kiefer, Darragh Duffy, Kevin Vanderbeken, Andy Pearson, Simon Mac Donald, Abid Din, Chris Joel, Tomas Theunissen, David Dick, Paul Grock, Brandon Wood, John Weis, dgrebb, Nick Jenkins, Chuck Lane, Johnny Megahan, marzsman, Tatu Tamminen, Geoffrey Knauth, Alexander Tarmolov, Jeremy Tymes, Chad Auld, Sean Parmelee, Rob Staenke, Dan Bender, Yannick derwa, Joshua Jones, Geert Plaisier, Tom LeZotte, Christen Simpson, Stefan Bruvik, Justin Falcone, Carlos Santana, Michael Weiss, Pablo Villoslada, Peter deHaan, Dimitris Iliopoulos, seyDoggy, Adam Jordens, Noah Kantrowitz, Amol M, Matthew Winnard, Dirk Ginader, Phinam Bui, David Rapson, Andrew Baxter, Florian Bougel, Michael George, Alban Escalier, Daniel Sellers, Sasha Rudan, John Green, Robert Kowalski, David I. Teixeira (@ditma, Charles Carpenter, Justin Yost, Sam S, Denis Ciccale, Kevin Sheurs, Yannick Croissant, Pau Fracés, Stephen McGowan, Shawn Searcy, Chris Ruppel, Kevin Lamping, Jessica Campbell, Christopher Schmitt, Sablons, Jonathan Reisdorf, Bunni Gek, Teddy Huff, Michael Mullany, Michael Fürstenberg, Carl Henderson, Rick Yoesting, Scott Nichols, Hernán Ciudad, Andrew Maier, Mike Stapp, Jesse Shawl, Sérgio Lopes, jsulak, Shawn Price, Joel Clermont, Chris Ridmann, Sean Timm, Jason Finch, Aiden Montgomery, Elijah Manor, Derek Gathright, Jesse Harlin, Dillon Curry, Courtney Myers, Diego Cadenas, Arne de Bree, João Paulo Dubas, James Taylor, Philipp Kraeutli, Mihai Păun, Sam Gharegozlou, joshjs, Matt Murchison, Eric Windham, Timo Behrmann, Andrew Hall, joshua price, Théophile Villard

This book series is being produced in an open source fashion, including editing and production. We owe GitHub a debt of gratitude for making that sort of thing possible for the community!

Thank you again to all the countless folks I didn’t name but who I nonetheless owe thanks. May this book series be “owned” by all of us and serve to contribute to increasing awareness and understanding of the JavaScript language, to the benefit of all current and future community contributors.

26 You Don’t Know JS: ES6 & Beyond

27 Chapter 1: ES? Now & Future

Before you dive into this book, you should have a solid working proficiency over JavaScript up to the most recent standard (at the time of this writing), which is commonly called ES5 (technically ES 5.1). Here, we plan to talk squarely about the upcoming ES6, as well as cast our vision beyond to understand how JS will evolve moving forward.

If you are still looking for confidence with JavaScript, I highly recommend you read the other titles in this series first:

  • Up & Going: Are you new to programming and JS? This is the roadmap you need to consult as you start your learning journey.
  • Scope & Closures: Did you know that JS lexical scope is based on compiler (not interpreter!) semantics? Can you explain how closures are a direct result of lexical scope and functions as values?
  • this & Object Prototypes: Can you recite the four simple rules for how this is bound? Have you been muddling through fake “classes” in JS instead of adopting the simpler “behavior delegation” design pattern? Ever heard of objects linked to other objects (OLOO)?
  • Types & Grammar: Do you know the built-in types in JS, and more importantly, do you know how to properly and safely use coercion between types? How comfortable are you with the nuances of JS grammar/syntax?
  • Async & Performance: Are you still using callbacks to manage your asynchrony? Can you explain what a promise is and why/how it solves “callback hell”? Do you know how to use generators to improve the legibility of async code? What exactly constitutes mature optimization of JS programs and individual operations?

If you’ve already read all those titles and you feel pretty comfortable with the topics they cover, it’s time we dive into the evolution of JS to explore all the changes coming not only soon but farther over the horizon.

Unlike ES5, ES6 is not just a modest set of new APIs added to the language. It incorporates a whole slew of new syntactic forms, some of which may take quite a bit of getting used to. There’s also a variety of new organization forms and new API helpers for various data types.

ES6 is a radical jump forward for the language. Even if you think you know JS in ES5, ES6 is full of new stuff you don’t know yet, so get ready! This book explores all the major themes of ES6 that you need to get up to speed on, and even gives you a glimpse of future features coming down the track that you should be aware of.

Warning: All code in this book assumes an ES6+ environment. At the time of this writing, ES6 support varies quite a bit in browsers and JS environments (like Node.js), so your mileage may vary.

27.1 Versioning

The JavaScript standard is referred to officially as “ECMAScript” (abbreviated “ES”), and up until just recently has been versioned entirely by ordinal number (i.e., “5” for “5th edition”).

The earliest versions, ES1 and ES2, were not widely known or implemented. ES3 was the first widespread baseline for JavaScript, and constitutes the JavaScript standard for browsers like IE6-8 and older Android 2.x mobile browsers. For political reasons beyond what we’ll cover here, the ill-fated ES4 never came about.

In 2009, ES5 was officially finalized (later ES5.1 in 2011), and settled as the widespread standard for JS for the modern revolution and explosion of browsers, such as Firefox, Chrome, Opera, Safari, and many others.

Leading up to the expected next version of JS (slipped from 2013 to 2014 and then 2015), the obvious and common label in discourse has been ES6.

However, late into the ES6 specification timeline, suggestions have surfaced that versioning may in the future switch to a year-based schema, such as ES2016 (aka ES7) to refer to whatever version of the specification is finalized before the end of 2016. Some disagree, but ES6 will likely maintain its dominant mindshare over the late-change substitute ES2015. However, ES2016 may in fact signal the new year-based schema.

It has also been observed that the pace of JS evolution is much faster even than single-year versioning. As soon as an idea begins to progress through standards discussions, browsers start prototyping the feature, and early adopters start experimenting with the code.

Usually well before there’s an official stamp of approval, a feature is de facto standardized by virtue of this early engine/tooling prototyping. So it’s also valid to consider the future of JS versioning to be per-feature rather than per-arbitrary-collection-of-major-features (as it is now) or even per-year (as it may become).

The takeaway is that the version labels stop being as important, and JavaScript starts to be seen more as an evergreen, living standard. The best way to cope with this is to stop thinking about your code base as being “ES6-based,” for instance, and instead consider it feature by feature for support.
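In practice, feature-by-feature support means testing for a capability before relying on it, rather than assuming blanket “ES6 support.” Here’s a minimal sketch of that mindset, using `Object.is(..)` (discussed later in this chapter) purely as the example API:

```javascript
// sketch: detect a single feature rather than assuming
// the environment supports "ES6" wholesale
var hasObjectIs = (typeof Object.is === "function");

if (hasObjectIs) {
    // safe to rely on the API directly
    console.log( Object.is( NaN, NaN ) );   // true
}
else {
    // load a polyfill or take a fallback path
}
```

The same check-then-use shape applies to any polyfillable API; syntax features, by contrast, require a transpiler (see the next section).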

27.2 Transpiling

Made even worse by the rapid evolution of features, a problem arises for JS developers who strongly desire to use new features while at the same time being slapped with the reality that their sites/apps may need to support older browsers that lack them.

The way ES5 appears to have played out in the broader industry, the typical mindset was that code bases waited to adopt ES5 until most if not all pre-ES5 environments had fallen out of their support spectrum. As a result, many are just recently (at the time of this writing) starting to adopt things like strict mode, which landed in ES5 over five years ago.

It’s widely considered to be a harmful approach for the future of the JS ecosystem to wait around and trail the specification by so many years. All those responsible for evolving the language desire for developers to begin basing their code on the new features and patterns as soon as they stabilize in specification form and browsers have a chance to implement them.

So how do we resolve this seeming contradiction? The answer is tooling, specifically a technique called transpiling (transformation + compiling). Roughly, the idea is to use a special tool to transform your ES6 code into equivalent (or close!) matches that work in ES5 environments.

For example, consider shorthand property definitions (see “Object Literal Extensions” in Chapter 2). Here’s the ES6 form:

var foo = [1,2,3];

var obj = {
    foo     // means `foo: foo`
};

obj.foo;    // [1,2,3]

But (roughly) here’s how that transpiles:

var foo = [1,2,3];

var obj = {
    foo: foo
};

obj.foo;    // [1,2,3]

This is a minor but pleasant transformation that lets us shorten the foo: foo in an object literal declaration to just foo, if the names are the same.

Transpilers perform these transformations for you, usually in a build workflow step similar to how you perform linting, minification, and other similar operations.

27.2.1 Shims/Polyfills

Not all new ES6 features need a transpiler. Polyfills (aka shims) are a pattern for defining equivalent behavior from a newer environment into an older environment, when possible. Syntax cannot be polyfilled, but APIs often can be.

For example, Object.is(..) is a new utility for checking strict equality of two values but without the nuanced exceptions that === has for NaN and -0 values. The polyfill for Object.is(..) is pretty easy:

if (!Object.is) {
    Object.is = function(v1, v2) {
        // test for `-0`
        if (v1 === 0 && v2 === 0) {
            return 1 / v1 === 1 / v2;
        }
        // test for `NaN`
        if (v1 !== v1) {
            return v2 !== v2;
        }
        // everything else
        return v1 === v2;
    };
}

Tip: Pay attention to the outer if statement guard wrapped around the polyfill. This is an important detail, which means the snippet only defines its fallback behavior for older environments where the API in question isn’t already defined; it would be very rare that you’d want to overwrite an existing API.

There’s a great collection of ES6 shims called “ES6 Shim” (https://github.com/paulmillr/es6-shim/) that you should definitely adopt as a standard part of any new JS project!

It is assumed that JS will continue to evolve constantly, with browsers rolling out support for features continually rather than in large chunks. So the best strategy for keeping updated as it evolves is to just introduce polyfill shims into your code base, and a transpiler step into your build workflow, right now and get used to that new reality.

If you decide to keep the status quo and just wait around for all browsers without a feature supported to go away before you start using the feature, you’re always going to be way behind. You’ll sadly be missing out on all the innovations designed to make writing JavaScript more effective, efficient, and robust.

27.3 Review

ES6 (some may try to call it ES2015) is just landing as of the time of this writing, and it has lots of new stuff you need to learn!

But it’s even more important to shift your mindset to align with the new way that JavaScript is going to evolve. It’s not just waiting around for years for some official document to get a vote of approval, as many have done in the past.

Now, JavaScript features land in browsers as they become ready, and it’s up to you whether you’ll get on the train early or whether you’ll be playing costly catch-up games years from now.

Whatever labels that future JavaScript adopts, it’s going to move a lot quicker than it ever has before. Transpilers and shims/polyfills are important tools to keep you on the forefront of where the language is headed.

If there’s any narrative important to understand about the new reality for JavaScript, it’s that all JS developers are strongly implored to move from the trailing edge of the curve to the leading edge. And learning ES6 is where that all starts!

28 You Don’t Know JS: ES6 & Beyond

29 Chapter 2: Syntax

If you’ve been writing JS for any length of time, odds are the syntax is pretty familiar to you. There are certainly many quirks, but overall it’s a fairly reasonable and straightforward syntax that draws many similarities from other languages.

However, ES6 adds quite a few new syntactic forms that take some getting used to. In this chapter, we’ll tour through them to find out what’s in store.

Tip: At the time of this writing, some of the features discussed in this book have been implemented in various browsers (Firefox, Chrome, etc.), but some have only been partially implemented and many others have not been implemented at all. Your experience may be mixed trying these examples directly. If so, try them out with transpilers, as most of these features are covered by those tools. ES6Fiddle (http://www.es6fiddle.net/) is a great, easy-to-use playground for trying out ES6, as is the online REPL for the Babel transpiler (http://babeljs.io/repl/).

29.1 Block-Scoped Declarations

You’re probably aware that the fundamental unit of variable scoping in JavaScript has always been the function. If you needed to create a block of scope, the most prevalent way to do so other than a regular function declaration was the immediately invoked function expression (IIFE). For example:

var a = 2;

(function IIFE(){
    var a = 3;
    console.log( a );   // 3
})();

console.log( a );       // 2

29.1.1 let Declarations

However, we can now create declarations that are bound to any block, called (unsurprisingly) block scoping. This means all we need is a pair of { .. } to create a scope. Instead of using var, which always declares variables attached to the enclosing function (or global, if top level) scope, use let:

var a = 2;

{
    let a = 3;
    console.log( a );   // 3
}

console.log( a );       // 2

It’s not very common or idiomatic thus far in JS to use a standalone { .. } block, but it’s always been valid. And developers from other languages that have block scoping will readily recognize that pattern.

I believe this is the best way to create block-scoped variables, with a dedicated { .. } block. Moreover, you should always put the let declaration(s) at the very top of that block. If you have more than one to declare, I’d recommend using just one let.

Stylistically, I even prefer to put the let on the same line as the opening {, to make it clearer that this block is only for the purpose of declaring the scope for those variables.

{   let a = 2, b, c;
    // ..
}

Now, that’s going to look strange and it’s not likely going to match the recommendations given in most other ES6 literature. But I have reasons for my madness.

There’s another experimental (not standardized) form of the let declaration called the let-block, which looks like:

let (a = 2, b, c) {
    // ..
}

That form is what I call explicit block scoping, whereas the let .. declaration form that mirrors var is more implicit, as it kind of hijacks whatever { .. } pair it’s found in. Generally developers find explicit mechanisms a bit more preferable than implicit mechanisms, and I claim this is one of those cases.

If you compare the previous two snippet forms, they’re very similar, and in my opinion both qualify stylistically as explicit block scoping. Unfortunately, the let (..) { .. } form, the most explicit of the options, was not adopted in ES6. That may be revisited post-ES6, but for now the former option is our best bet, I think.

To reinforce the implicit nature of let .. declarations, consider these usages:

let a = 2;

if (a > 1) {
    let b = a * 3;
    console.log( b );       // 6

    for (let i = a; i <= b; i++) {
        let j = i + 10;
        console.log( j );
    }
    // 12 13 14 15 16

    let c = a + b;
    console.log( c );       // 8
}

Quick quiz without looking back at that snippet: which variable(s) exist only inside the if statement, and which variable(s) exist only inside the for loop?

The answers: the if statement contains b and c block-scoped variables, and the for loop contains i and j block-scoped variables.

Did you have to think about it for a moment? Does it surprise you that i isn’t added to the enclosing if statement scope? That mental pause and questioning – I call it a “mental tax” – comes from the fact that this let mechanism is not only new to us, but it’s also implicit.

There’s also hazard in the let c = .. declaration appearing so far down in the scope. Unlike traditional var-declared variables, which are attached to the entire enclosing function scope regardless of where they appear, let declarations attach to the block scope but are not initialized until they appear in the block.

Accessing a let-declared variable earlier than its let .. declaration/initialization causes an error, whereas with var declarations the ordering doesn’t matter (except stylistically).

Consider:

{
    console.log( a );   // undefined
    console.log( b );   // ReferenceError!

    var a;
    let b;
}

Warning: This ReferenceError from accessing too-early let-declared references is technically called a Temporal Dead Zone (TDZ) error – you’re accessing a variable that’s been declared but not yet initialized. This will not be the only time we see TDZ errors – they crop up in several places in ES6. Also, note that “initialized” doesn’t require explicitly assigning a value in your code, as let b; is totally valid. A variable that’s not given an assignment at declaration time is assumed to have been assigned the undefined value, so let b; is the same as let b = undefined;. Explicit assignment or not, you cannot access b until the let b statement is run.

One last gotcha: typeof behaves differently with TDZ variables than it does with undeclared (or declared!) variables. For example:

{
    // `a` is not declared
    if (typeof a === "undefined") {
        console.log( "cool" );
    }

    // `b` is declared, but in its TDZ
    if (typeof b === "undefined") {     // ReferenceError!
        // ..
    }

    // ..

    let b;
}

The a is not declared, so typeof is the only safe way to check for its existence or not. But typeof b throws the TDZ error because farther down in the code there happens to be a let b declaration. Oops.

Now it should be clearer why I insist that let declarations should all be at the top of their scope. That totally avoids the accidental errors of accessing too early. It also makes it more explicit when you look at the start of a block, any block, what variables it contains.

Your blocks (if statements, while loops, etc.) don’t have to share their original behavior with scoping behavior.

This explicitness on your part, which is up to you to maintain with discipline, will save you lots of refactor headaches and footguns down the line.

Note: For more information on let and block scoping, see Chapter 3 of the Scope & Closures title of this series.

29.1.1.1 let + for

The only exception I’d make to the preference for the explicit form of let declaration blocking is a let that appears in the header of a for loop. The reason may seem nuanced, but I believe it to be one of the more important ES6 features.

Consider:

var funcs = [];

for (let i = 0; i < 5; i++) {
    funcs.push( function(){
        console.log( i );
    } );
}

funcs[3]();     // 3

The let i in the for header declares an i not just for the for loop itself, but it redeclares a new i for each iteration of the loop. That means that closures created inside the loop iteration close over those per-iteration variables the way you’d expect.

If you tried that same snippet but with var i in the for loop header, you’d get 5 instead of 3, because there’d only be one i in the outer scope that was closed over, instead of a new i for each iteration’s function to close over.
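To see that contrast concretely, here’s the var form of the same loop, where every function closes over the single shared i:

```javascript
var funcs = [];

// with `var`, there is only one `i`, attached to the
// enclosing (here, global/function) scope
for (var i = 0; i < 5; i++) {
    funcs.push( function(){
        console.log( i );
    } );
}

funcs[3]();     // 5 <-- `i` is already `5` by the time any function runs
```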

You could also have accomplished the same thing slightly more verbosely:

var funcs = [];

for (var i = 0; i < 5; i++) {
    let j = i;
    funcs.push( function(){
        console.log( j );
    } );
}

funcs[3]();     // 3

Here, we forcibly create a new j for each iteration, and then the closure works the same way. I prefer the former approach; that extra special capability is why I endorse the for (let .. ) .. form. It could be argued it’s somewhat more implicit, but it’s explicit enough, and useful enough, for my tastes.

let also works the same way with for..in and for..of loops (see “for..of Loops”).

29.1.2 const Declarations

There’s one other form of block-scoped declaration to consider: the const, which creates constants.

What exactly is a constant? It’s a variable that’s read-only after its initial value is set. Consider:

{
    const a = 2;
    console.log( a );   // 2

    a = 3;              // TypeError!
}

You are not allowed to change the value the variable holds once it’s been set, at declaration time. A const declaration must have an explicit initialization. If you wanted a constant with the undefined value, you’d have to declare const a = undefined to get it.
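A quick sketch of that initialization requirement; because the missing-initializer form is a SyntaxError, it’s wrapped in eval(..) here just for demonstration:

```javascript
// `const` with no initializer fails at parse time
try {
    eval( "const b;" );     // SyntaxError
}
catch (err) {
    console.log( err instanceof SyntaxError );  // true
}

// an explicit `undefined` initializer is perfectly valid
const a = undefined;
console.log( a );           // undefined
```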

Constants are not a restriction on the value itself, but on the variable’s assignment of that value. In other words, the value is not frozen or immutable because of const, just the assignment of it. If the value is complex, such as an object or array, the contents of the value can still be modified:

{
    const a = [1,2,3];
    a.push( 4 );
    console.log( a );       // [1,2,3,4]

    a = 42;                 // TypeError!
}

The a variable doesn’t actually hold a constant array; rather, it holds a constant reference to the array. The array itself is freely mutable.
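If you want the value itself locked down, that’s a separate tool from const: Object.freeze(..) (shallowly) freezes the object, regardless of how the variable was declared. A quick sketch:

```javascript
const a = Object.freeze( [1,2,3] );

try {
    a.push( 4 );            // throws: can't add to a frozen array
}
catch (err) {
    console.log( err instanceof TypeError );    // true
}

console.log( a );           // [1,2,3] <-- unchanged
```

Note that the freeze is shallow: objects nested inside a frozen value remain mutable unless they’re frozen too.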

Warning: Assigning an object or array as a constant means that value will not be able to be garbage collected until that constant’s lexical scope goes away, as the reference to the value can never be unset. That may be desirable, but be careful if it’s not your intent!

Essentially, const declarations enforce what we’ve stylistically signaled with our code for years, where we declared a variable name of all uppercase letters and assigned it some literal value that we took care never to change. There’s no enforcement on a var assignment, but there is now with a const assignment, which can help you catch unintended changes.

const can be used with variable declarations of for, for..in, and for..of loops (see “for..of Loops”). However, an error will be thrown if there’s any attempt to reassign, such as the typical i++ clause of a for loop.
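Sketched out, the difference looks like this:

```javascript
// `for..of` gives each iteration a fresh `v` binding, and
// nothing reassigns it, so `const` works fine here:
for (const v of [1,2,3]) {
    console.log( v );       // 1 2 3
}

// but a classic `for` loop's `i++` clause reassigns `i`:
try {
    for (const i = 0; i < 3; i++) {
        // first iteration runs, then `i++` throws
    }
}
catch (err) {
    console.log( err instanceof TypeError );    // true
}
```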

29.1.2.1 const Or Not

There are rumored assumptions that a const could be more optimizable by the JS engine in certain scenarios than a let or var would be. Theoretically, the engine more easily knows the variable’s value/type will never change, so it can eliminate some possible tracking.

Whether const really helps here or this is just our own fantasies and intuitions, the much more important decision to make is if you intend constant behavior or not. Remember: one of the most important roles for source code is to communicate clearly, not only to you, but your future self and other code collaborators, what your intent is.

Some developers prefer to start out every variable declaration as a const and then relax a declaration back to a let if it becomes necessary for its value to change in the code. This is an interesting perspective, but it’s not clear that it genuinely improves the readability or reason-ability of code.

It’s not really a protection, as many believe, because any later developer who wants to change a value of a const can just blindly change const to let on the declaration. At best, it protects against accidental change. But again, other than our intuitions and sensibilities, there doesn’t appear to be an objective and clear measure of what constitutes “accidents” or prevention thereof. Similar mindsets exist around type enforcement.

My advice: to avoid potentially confusing code, only use const for variables that you’re intentionally and obviously signaling will not change. In other words, don’t rely on const for code behavior, but instead use it as a tool for signaling intent, when intent can be signaled clearly.

29.1.3 Block-scoped Functions

Starting with ES6, function declarations that occur inside of blocks are now specified to be scoped to that block. Prior to ES6, the specification did not call for this, but many implementations did it anyway. So now the specification meets reality.

Consider:

{
    foo();                  // works!

    function foo() {
        // ..
    }
}

foo();                      // ReferenceError

The foo() function is declared inside the { .. } block, and as of ES6 is block-scoped there. So it’s not available outside that block. But also note that it is “hoisted” within the block, as opposed to let declarations, which suffer the TDZ error trap mentioned earlier.
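To make that hoisting contrast with let concrete, here’s a small sketch (the `typeBefore` variable is just a helper for illustration):

```javascript
var typeBefore;

{
    // the block-scoped function is initialized at block entry,
    // so it's usable even before its declaration line...
    typeBefore = typeof foo;

    // ...whereas a `let` at this point would be in its TDZ

    function foo() {
        // ..
    }
}

console.log( typeBefore );      // "function"
```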

Block-scoping of function declarations could be a problem if you’ve ever written code like this before, and relied on the old legacy non-block-scoped behavior:

if (something) {
    function foo() {
        console.log( "1" );
    }
}
else {
    function foo() {
        console.log( "2" );
    }
}

foo();      // ??

In pre-ES6 environments, foo() would print "2" regardless of the value of something, because both function declarations were hoisted out of the blocks, and the second one always wins.

In ES6, that last line throws a ReferenceError.

29.2 Spread/Rest

ES6 introduces a new ... operator that’s typically referred to as the spread or rest operator, depending on where/how it’s used. Let’s take a look:

function foo(x,y,z) {
    console.log( x, y, z );
}

foo( ...[1,2,3] );              // 1 2 3

When ... is used in front of an array (actually, any iterable, which we cover in Chapter 3), it acts to “spread” it out into its individual values.

You’ll typically see that usage as is shown in that previous snippet, when spreading out an array as a set of arguments to a function call. In this usage, ... acts to give us a simpler syntactic replacement for the apply(..) method, which we would typically have used pre-ES6 as:

foo.apply( null, [1,2,3] );     // 1 2 3

But ... can be used to spread out/expand a value in other contexts as well, such as inside another array declaration:

var a = [2,3,4];
var b = [ 1, ...a, 5 ];

console.log( b );                   // [1,2,3,4,5]

In this usage, ... is basically replacing concat(..), as it behaves like [1].concat( a, [5] ) here.

The other common usage of ... can be seen as essentially the opposite; instead of spreading a value out, the ... gathers a set of values together into an array. Consider:

function foo(x, y, ...z) {
    console.log( x, y, z );
}

foo( 1, 2, 3, 4, 5 );           // 1 2 [3,4,5]

The ...z in this snippet is essentially saying: “gather the rest of the arguments (if any) into an array called z.” Because x was assigned 1, and y was assigned 2, the rest of the arguments 3, 4, and 5 were gathered into z.

Of course, if you don’t have any named parameters, the ... gathers all arguments:

function foo(...args) {
    console.log( args );
}

foo( 1, 2, 3, 4, 5);            // [1,2,3,4,5]

Note: The ...args in the foo(..) function declaration is usually called “rest parameters,” because you’re collecting the rest of the parameters. I prefer “gather,” because it’s more descriptive of what it does rather than what it contains.

The best part about this usage is that it provides a very solid alternative to using the long-since-deprecated arguments array – actually, it’s not really an array, but an array-like object. Because args (or whatever you call it – a lot of people prefer r or rest) is a real array, we can get rid of lots of silly pre-ES6 tricks we jumped through to make arguments into something we can treat as an array.

Consider:

// doing things the new ES6 way
function foo(...args) {
    // `args` is already a real array

    // discard first element in `args`
    args.shift();

    // pass along all of `args` as arguments
    // to `console.log(..)`
    console.log( ...args );
}

// doing things the old-school pre-ES6 way
function bar() {
    // turn `arguments` into a real array
    var args = Array.prototype.slice.call( arguments );

    // add some elements on the end
    args.push( 4, 5 );

    // filter out odd numbers
    args = args.filter( function(v){
        return v % 2 == 0;
    } );

    // pass along all of `args` as arguments
    // to `foo(..)`
    foo.apply( null, args );
}

bar( 0, 1, 2, 3 );                  // 2 4

The ...args in the foo(..) function declaration gathers arguments, and the ...args in the console.log(..) call spreads them out. That’s a good illustration of the symmetric but opposite uses of the ... operator.

Besides the ... usage in a function declaration, there’s another case where ... is used for gathering values, and we’ll look at it in the “Too Many, Too Few, Just Enough” section later in this chapter.

29.3 Default Parameter Values

Perhaps one of the most common idioms in JavaScript relates to setting a default value for a function parameter. The way we’ve done this for years should look quite familiar:

function foo(x,y) {
    x = x || 11;
    y = y || 31;

    console.log( x + y );
}

foo();              // 42
foo( 5, 6 );        // 11
foo( 5 );           // 36
foo( null, 6 );     // 17

Of course, if you’ve used this pattern before, you know that it’s both helpful and a little bit dangerous, if for example you need to be able to pass in what would otherwise be considered a falsy value for one of the parameters. Consider:

foo( 0, 42 );       // 53 <-- Oops, not 42

Why? Because the 0 is falsy, and so the x || 11 results in 11, not the directly passed in 0.

To fix this gotcha, some people will instead write the check more verbosely like this:

function foo(x,y) {
    x = (x !== undefined) ? x : 11;
    y = (y !== undefined) ? y : 31;

    console.log( x + y );
}

foo( 0, 42 );           // 42
foo( undefined, 6 );    // 17

Of course, that means that any value except undefined can be directly passed in. However, undefined will be assumed to signal, “I didn’t pass this in.” That works great unless you actually need to be able to pass undefined in.

In that case, you could test to see if the argument is actually omitted, by it actually not being present in the arguments array, perhaps like this:

function foo(x,y) {
    x = (0 in arguments) ? x : 11;
    y = (1 in arguments) ? y : 31;

    console.log( x + y );
}

foo( 5 );               // 36
foo( 5, undefined );    // NaN

But how would you omit the first x argument without the ability to pass in any kind of value (not even undefined) that signals “I’m omitting this argument”?

foo(,5) is tempting, but it’s invalid syntax. foo.apply(null,[,5]) seems like it should do the trick, but apply(..)’s quirks here mean that the arguments are treated as [undefined,5], which of course doesn’t omit.

If you investigate further, you’ll find you can only omit arguments on the end (i.e., righthand side) by simply passing fewer arguments than “expected,” but you cannot omit arguments in the middle or at the beginning of the arguments list. It’s just not possible.

There’s a principle applied to JavaScript’s design here that is important to remember: undefined means missing. That is, there’s no difference between undefined and missing, at least as far as function arguments go.

Note: There are, confusingly, other places in JS where this particular design principle doesn’t apply, such as for arrays with empty slots. See the Types & Grammar title of this series for more information.

With all this in mind, we can now examine a nice helpful syntax added as of ES6 to streamline the assignment of default values to missing arguments:

function foo(x = 11, y = 31) {
    console.log( x + y );
}

foo();                  // 42
foo( 5, 6 );            // 11
foo( 0, 42 );           // 42

foo( 5 );               // 36
foo( 5, undefined );    // 36 <-- `undefined` is missing
foo( 5, null );         // 5  <-- null coerces to `0`

foo( undefined, 6 );    // 17 <-- `undefined` is missing
foo( null, 6 );         // 6  <-- null coerces to `0`

Notice the results and how they imply both subtle differences and similarities to the earlier approaches.

x = 11 in a function declaration is more like x !== undefined ? x : 11 than the much more common idiom x || 11, so you’ll need to be careful in converting your pre-ES6 code to this ES6 default parameter value syntax.
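To make that difference concrete, here's a sketch contrasting the pre-ES6 `x || 11` idiom with the ES6 default, using a falsy-but-legitimate argument like `0`:

```javascript
function withOr(x) {
    x = x || 11;            // pre-ES6 idiom: any falsy `x` is replaced
    return x;
}

function withDefault(x = 11) {
    return x;               // ES6: only `undefined` triggers the default
}

console.log( withOr( 0 ) );         // 11 <-- falsy `0` is lost!
console.log( withDefault( 0 ) );    // 0
console.log( withOr() );            // 11
console.log( withDefault() );       // 11
```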

Note: A rest/gather parameter (see “Spread/Rest”) cannot have a default value. So, while function foo(...vals=[1,2,3]) { might seem an intriguing capability, it’s not valid syntax. You’ll need to continue to apply that sort of logic manually if necessary.

29.3.1 Default Value Expressions

Function default values can be more than just simple values like 31; they can be any valid expression, even a function call:

function bar(val) {
    console.log( "bar called!" );
    return y + val;
}

function foo(x = y + 3, z = bar( x )) {
    console.log( x, z );
}

var y = 5;
foo();                              // "bar called!"
                                    // 8 13
foo( 10 );                          // "bar called!"
                                    // 10 15
y = 6;
foo( undefined, 10 );               // 9 10

As you can see, the default value expressions are lazily evaluated, meaning they’re only run if and when they’re needed – that is, when a parameter’s argument is omitted or is undefined.

It’s a subtle detail, but the formal parameters in a function declaration are in their own scope (think of it as a scope bubble wrapped around just the ( .. ) of the function declaration), not in the function body’s scope. That means a reference to an identifier in a default value expression first matches the formal parameters’ scope before looking to an outer scope. See the Scope & Closures title of this series for more information.

Consider:

var w = 1, z = 2;

function foo( x = w + 1, y = x + 1, z = z + 1 ) {
    console.log( x, y, z );
}

foo();                  // ReferenceError

The w in the w + 1 default value expression looks for w in the formal parameters’ scope, but does not find it, so the outer scope’s w is used. Next, the x in the x + 1 default value expression finds x in the formal parameters’ scope, and luckily x has already been initialized, so the assignment to y works fine.

However, the z in z + 1 finds z as a not-yet-initialized-at-that-moment parameter variable, so it never tries to find the z from the outer scope.

As we mentioned in the “let Declarations” section earlier in this chapter, ES6 has a TDZ, which prevents a variable from being accessed in its uninitialized state. As such, the z + 1 default value expression throws a TDZ ReferenceError.

Though it’s not necessarily a good idea for code clarity, a default value expression can even be an inline function expression call – commonly referred to as an immediately invoked function expression (IIFE):

function foo( x =
    (function(v){ return v + 11; })( 31 )
) {
    console.log( x );
}

foo();          // 42

There will very rarely be any cases where an IIFE (or any other executed inline function expression) will be appropriate for default value expressions. If you find yourself tempted to do this, take a step back and reevaluate!

Warning: If the IIFE had tried to access the x identifier and had not declared its own x, this would also have been a TDZ error, just as discussed before.

The default value expression in the previous snippet is an IIFE in the sense that it’s a function that’s executed right inline, via the trailing ( 31 ). If we had left that part off, the default value assigned to x would have just been a function reference itself, perhaps like a default callback. There will probably be cases where that pattern will be quite useful, such as:

function ajax(url, cb = function(){}) {
    // ..
}

ajax( "http://some.url.1" );

In this case, we essentially want cb to default to a no-op empty function if not otherwise specified. The function expression is just a function reference, not a function call itself (no invoking () on the end of it), which accomplishes that goal.

Since the early days of JS, there’s been a little-known but useful quirk available to us: Function.prototype is itself an empty no-op function. So, the declaration could have been cb = Function.prototype and saved the inline function expression creation.
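For illustration, here's a sketch of that substitution, fleshing out the ajax(..) stub from above hypothetically:

```javascript
// `Function.prototype` is itself a callable no-op function,
// so it makes a zero-allocation default callback:
function ajax(url, cb = Function.prototype) {
    // .. (pretend some request happens here)
    cb( "response for " + url );    // safe even if no callback was passed
}

ajax( "http://some.url.1" );        // no error; default callback does nothing

ajax( "http://some.url.1", function(resp){
    console.log( resp );            // response for http://some.url.1
} );
```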

29.4 Destructuring

ES6 introduces a new syntactic feature called destructuring, which may be a little less confusing if you instead think of it as structured assignment. To understand this meaning, consider:

function foo() {
    return [1,2,3];
}

var tmp = foo(),
    a = tmp[0], b = tmp[1], c = tmp[2];

console.log( a, b, c );             // 1 2 3

As you can see, we created a manual assignment of the values in the array that foo() returns to individual variables a, b, and c, and to do so we (unfortunately) needed the tmp variable.

Similarly, we can do the following with objects:

function bar() {
    return {
        x: 4,
        y: 5,
        z: 6
    };
}

var tmp = bar(),
    x = tmp.x, y = tmp.y, z = tmp.z;

console.log( x, y, z );             // 4 5 6

The tmp.x property value is assigned to the x variable, and likewise for tmp.y to y and tmp.z to z.

Manually assigning indexed values from an array or properties from an object can be thought of as structured assignment. ES6 adds a dedicated syntax for destructuring, specifically array destructuring and object destructuring. This syntax eliminates the need for the tmp variable in the previous snippets, making them much cleaner. Consider:

var [ a, b, c ] = foo();
var { x: x, y: y, z: z } = bar();

console.log( a, b, c );             // 1 2 3
console.log( x, y, z );             // 4 5 6

You’re likely more accustomed to seeing syntax like [a,b,c] on the righthand side of an = assignment, as the value being assigned.

Destructuring symmetrically flips that pattern, so that [a,b,c] on the lefthand side of the = assignment is treated as a kind of “pattern” for decomposing the righthand side array value into separate variable assignments.

Similarly, { x: x, y: y, z: z } specifies a “pattern” to decompose the object value from bar() into separate variable assignments.

29.4.1 Object Property Assignment Pattern

Let’s dig into that { x: x, .. } syntax from the previous snippet. If the property name being matched is the same as the variable you want to declare, you can actually shorten the syntax:

var { x, y, z } = bar();

console.log( x, y, z );             // 4 5 6

Pretty cool, right?

But is { x, .. } leaving off the x: part or leaving off the : x part? We’re actually leaving off the x: part when we use the shorter syntax. That may not seem like an important detail, but you’ll understand its importance in just a moment.

If you can write the shorter form, why would you ever write out the longer form? Because that longer form actually allows you to assign a property to a different variable name, which can sometimes be quite useful:

var { x: bam, y: baz, z: bap } = bar();

console.log( bam, baz, bap );       // 4 5 6
console.log( x, y, z );             // ReferenceError

There’s a subtle but super-important quirk to understand about this variation of the object destructuring form. To illustrate why it can be a gotcha you need to be careful of, let’s consider the “pattern” of how normal object literals are specified:

var X = 10, Y = 20;

var o = { a: X, b: Y };

console.log( o.a, o.b );            // 10 20

In { a: X, b: Y }, we know that a is the object property, and X is the source value that gets assigned to it. In other words, the syntactic pattern is target: source, or more obviously, property-alias: value. We intuitively understand this because it’s the same as = assignment, where the pattern is target = source.

However, when you use object destructuring assignment – that is, putting the { .. } object literal-looking syntax on the lefthand side of the = operator – you invert that target: source pattern.

Recall:

var { x: bam, y: baz, z: bap } = bar();

The syntactic pattern here is source: target (or value: variable-alias). x: bam means the x property is the source value and bam is the target variable to assign to. In other words, object literals are target <-- source, and object destructuring assignments are source --> target. See how that’s flipped?

There’s another way to think about this syntax though, which may help ease the confusion. Consider:

var aa = 10, bb = 20;

var o = { x: aa, y: bb };
var     { x: AA, y: BB } = o;

console.log( AA, BB );              // 10 20

In the { x: aa, y: bb } line, the x and y represent the object properties. In the { x: AA, y: BB } line, the x and the y also represent the object properties.

Recall how earlier I asserted that { x, .. } was leaving off the x: part? In those two lines, if you erase the x: and y: parts in that snippet, you’re left only with aa, bb and AA, BB, which in effect – only conceptually, not actually – are assignments from aa to AA and from bb to BB.

So, that symmetry may help to explain why the syntactic pattern was intentionally flipped for this ES6 feature.

Note: I would have preferred the syntax to be { AA: x , BB: y } for the destructuring assignment, as that would have preserved consistency of the more familiar target: source pattern for both usages. Alas, I’m having to train my brain for the inversion, as some readers may also have to do.

29.4.2 Not Just Declarations

So far, we’ve used destructuring assignment with var declarations (of course, they could also use let and const), but destructuring is a general assignment operation, not just a declaration.

Consider:

var a, b, c, x, y, z;

[a,b,c] = foo();
( { x, y, z } = bar() );

console.log( a, b, c );             // 1 2 3
console.log( x, y, z );             // 4 5 6

The variables can already be declared, and then the destructuring only does assignments, exactly as we’ve already seen.

Note: For the object destructuring form specifically, when leaving off a var/let/const declarator, we had to surround the whole assignment expression in ( ), because otherwise the { .. } on the lefthand side as the first element in the statement is taken to be a block statement instead of an object.
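You can demonstrate that parsing behavior — without breaking your whole program — by deferring the parse with the Function constructor (used here purely as an illustration):

```javascript
// without ( .. ), `{ x }` at the start of a statement parses as a
// block statement, so the `= o` that follows is a syntax error:
try {
    new Function( "var o = { x: 1 }; { x } = o;" );
}
catch (err) {
    console.log( err instanceof SyntaxError );      // true
}

// wrapped in ( .. ), it parses as a destructuring assignment:
var x;
( { x } = { x: 1 } );
console.log( x );                                   // 1
```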

In fact, the assignment expressions (a, y, etc.) don’t actually need to be just variable identifiers. Anything that’s a valid assignment expression is allowed. For example:

var o = {};

[o.a, o.b, o.c] = foo();
( { x: o.x, y: o.y, z: o.z } = bar() );

console.log( o.a, o.b, o.c );       // 1 2 3
console.log( o.x, o.y, o.z );       // 4 5 6

You can even use computed property expressions in the destructuring. Consider:

var which = "x",
    o = {};

( { [which]: o[which] } = bar() );

console.log( o.x );                 // 4

The [which]: part is the computed property, which results in x – the property to destructure from the object in question as the source of the assignment. The o[which] part is just a normal object key reference, which equates to o.x as the target of the assignment.

You can use the general assignments to create object mappings/transformations, such as:

var o1 = { a: 1, b: 2, c: 3 },
    o2 = {};

( { a: o2.x, b: o2.y, c: o2.z } = o1 );

console.log( o2.x, o2.y, o2.z );    // 1 2 3

Or you can map an object to an array, such as:

var o1 = { a: 1, b: 2, c: 3 },
    a2 = [];

( { a: a2[0], b: a2[1], c: a2[2] } = o1 );

console.log( a2 );                  // [1,2,3]

Or the other way around:

var a1 = [ 1, 2, 3 ],
    o2 = {};

[ o2.a, o2.b, o2.c ] = a1;

console.log( o2.a, o2.b, o2.c );    // 1 2 3

Or you could reorder one array to another:

var a1 = [ 1, 2, 3 ],
    a2 = [];

[ a2[2], a2[0], a2[1] ] = a1;

console.log( a2 );                  // [2,3,1]

You can even solve the traditional “swap two variables” task without a temporary variable:

var x = 10, y = 20;

[ y, x ] = [ x, y ];

console.log( x, y );                // 20 10

Warning: Be careful: you shouldn’t mix in declaration with assignment unless you want all of the assignment expressions also to be treated as declarations. Otherwise, you’ll get syntax errors. That’s why in the earlier example I had to do var a2 = [] separately from the [ a2[0], .. ] = .. destructuring assignment. It wouldn’t make any sense to try var [ a2[0], .. ] = .., because a2[0] isn’t a valid declaration identifier; it also obviously couldn’t implicitly create a var a2 = [] declaration to use.

29.4.3 Repeated Assignments

The object destructuring form allows a source property (holding any value type) to be listed multiple times. For example:

var { a: X, a: Y } = { a: 1 };

X;  // 1
Y;  // 1

That also means you can both destructure a sub-object/array property and also capture the sub-object/array’s value itself. Consider:

var { a: { x: X, x: Y }, a } = { a: { x: 1 } };

X;  // 1
Y;  // 1
a;  // { x: 1 }

( { a: X, a: Y, a: [ Z ] } = { a: [ 1 ] } );

X.push( 2 );
Y[0] = 10;

X;  // [10,2]
Y;  // [10,2]
Z;  // 1

A word of caution about destructuring: it may be tempting to list destructuring assignments all on a single line as has been done thus far in our discussion. However, it’s a much better idea to spread destructuring assignment patterns over multiple lines, using proper indentation – much like you would in JSON or with an object literal value – for readability’s sake.

// harder to read:
var { a: { b: [ c, d ], e: { f } }, g } = obj;

// better:
var {
    a: {
        b: [ c, d ],
        e: { f }
    },
    g
} = obj;

Remember: the purpose of destructuring is not just less typing, but more declarative readability.

29.4.3.1 Destructuring Assignment Expressions

The assignment expression with object or array destructuring has as its completion value the full righthand object/array value. Consider:

var o = { a:1, b:2, c:3 },
    a, b, c, p;

p = { a, b, c } = o;

console.log( a, b, c );         // 1 2 3
p === o;                        // true

In the previous snippet, p was assigned the o object reference, not one of the a, b, or c values. The same is true of array destructuring:

var o = [1,2,3],
    a, b, c, p;

p = [ a, b, c ] = o;

console.log( a, b, c );         // 1 2 3
p === o;                        // true

By carrying the object/array value through as the completion, you can chain destructuring assignment expressions together:

var o = { a:1, b:2, c:3 },
    p = [4,5,6],
    a, b, c, x, y, z;

( {a} = {b,c} = o );
[x,y] = [z] = p;

console.log( a, b, c );         // 1 2 3
console.log( x, y, z );         // 4 5 4

29.4.4 Too Many, Too Few, Just Enough

With both array destructuring assignment and object destructuring assignment, you do not have to assign all the values that are present. For example:

var [,b] = foo();
var { x, z } = bar();

console.log( b, x, z );             // 2 4 6

The 1 and 3 values that came back from foo() are discarded, as is the 5 value from bar().

Similarly, if you try to assign more values than are present in the value you’re destructuring/decomposing, you get graceful fallback to undefined, as you’d expect:

var [,,c,d] = foo();
var { w, z } = bar();

console.log( c, z );                // 3 6
console.log( d, w );                // undefined undefined

This behavior follows symmetrically from the earlier stated “undefined is missing” principle.

We examined the ... operator earlier in this chapter, and saw that it can sometimes be used to spread an array value out into its separate values, and sometimes it can be used to do the opposite: to gather a set of values together into an array.

In addition to the gather/rest usage in function declarations, ... can perform the same behavior in destructuring assignments. To illustrate, let’s recall a snippet from earlier in this chapter:

var a = [2,3,4];
var b = [ 1, ...a, 5 ];

console.log( b );                   // [1,2,3,4,5]

Here we see that ...a is spreading a out, because it appears in the array [ .. ] value position. If ...a appears in an array destructuring position, it performs the gather behavior:

var a = [2,3,4];
var [ b, ...c ] = a;

console.log( b, c );                // 2 [3,4]

The var [ .. ] = a destructuring assignment spreads a out to be assigned to the pattern described inside the [ .. ]. The first part names b for the first value in a (2). But then ...c gathers the rest of the values (3 and 4) into an array and calls it c.

Note: We’ve seen how ... works with arrays, but what about with objects? It’s not an ES6 feature, but see Chapter 8 for discussion of a possible “beyond ES6” feature where ... works with spreading or gathering objects.

29.4.5 Default Value Assignment

Both forms of destructuring can offer a default value option for an assignment, using the = syntax similar to the default function argument values discussed earlier.

Consider:

var [ a = 3, b = 6, c = 9, d = 12 ] = foo();
var { x = 5, y = 10, z = 15, w = 20 } = bar();

console.log( a, b, c, d );          // 1 2 3 12
console.log( x, y, z, w );          // 4 5 6 20

You can combine the default value assignment with the alternative assignment expression syntax covered earlier. For example:

var { x, y, z, w: WW = 20 } = bar();

console.log( x, y, z, WW );         // 4 5 6 20

Be careful about confusing yourself (or other developers who read your code) if you use an object or array as the default value in a destructuring. You can create some really hard to understand code:

var x = 200, y = 300, z = 100;
var o1 = { x: { y: 42 }, z: { y: z } };

( { y: x = { y: y } } = o1 );
( { z: y = { y: z } } = o1 );
( { x: z = { y: x } } = o1 );

Can you tell from that snippet what values x, y, and z have at the end? Takes a moment of pondering, I would imagine. I’ll end the suspense:

console.log( x.y, y.y, z.y );       // 300 100 42

The takeaway here: destructuring is great and can be very useful, but it’s also a sharp sword that can cause injury (to someone’s brain) if used unwisely.

29.4.6 Nested Destructuring

If the values you’re destructuring have nested objects or arrays, you can destructure those nested values as well:

var a1 = [ 1, [2, 3, 4], 5 ];
var o1 = { x: { y: { z: 6 } } };

var [ a, [ b, c, d ], e ] = a1;
var { x: { y: { z: w } } } = o1;

console.log( a, b, c, d, e );       // 1 2 3 4 5
console.log( w );                   // 6

Nested destructuring can be a simple way to flatten out object namespaces. For example:

var App = {
    model: {
        User: function(){ .. }
    }
};

// instead of:
// var User = App.model.User;

var { model: { User } } = App;

29.4.7 Destructuring Parameters

In the following snippet, can you spot the assignment?

function foo(x) {
    console.log( x );
}

foo( 42 );

The assignment is kinda hidden: 42 (the argument) is assigned to x (the parameter) when foo(42) is executed. If parameter/argument pairing is an assignment, then it stands to reason that it’s an assignment that could be destructured, right? Of course!

Consider array destructuring for parameters:

function foo( [ x, y ] ) {
    console.log( x, y );
}

foo( [ 1, 2 ] );                    // 1 2
foo( [ 1 ] );                       // 1 undefined
foo( [] );                          // undefined undefined

Object destructuring for parameters works, too:

function foo( { x, y } ) {
    console.log( x, y );
}

foo( { y: 1, x: 2 } );              // 2 1
foo( { y: 42 } );                   // undefined 42
foo( {} );                          // undefined undefined

This technique is an approximation of named arguments (a long requested feature for JS!), in that the properties on the object map to the destructured parameters of the same names. That also means that we get optional parameters (in any position) for free, as you can see: leaving off the x “parameter” worked as we’d expect.
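For example, here's a sketch of a named-arguments-style API (the function and option names are made up for illustration); note the trailing = {} parameter default, which lets you omit the whole options object at once:

```javascript
function makeRectangle({ width = 100, height = 50, fill = "red" } = {}) {
    // each "named argument" is optional, in any position
    return { width: width, height: height, fill: fill };
}

console.log( makeRectangle() );
// { width: 100, height: 50, fill: "red" }

console.log( makeRectangle( { height: 20, width: 10 } ) );
// { width: 10, height: 20, fill: "red" }
```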

Of course, all the previously discussed variations of destructuring are available to us with parameter destructuring, including nested destructuring, default values, and more. Destructuring also mixes fine with other ES6 function parameter capabilities, like default parameter values and rest/gather parameters.

Consider these quick illustrations (certainly not exhaustive of the possible variations):

function f1([ x=2, y=3, z ]) { .. }
function f2([ x, y, ...z], w) { .. }
function f3([ x, y, ...z], ...w) { .. }

function f4({ x: X, y }) { .. }
function f5({ x: X = 10, y = 20 }) { .. }
function f6({ x = 10 } = {}, { y } = { y: 10 }) { .. }

Let’s take one example from this snippet and examine it, for illustration purposes:

function f3([ x, y, ...z], ...w) {
    console.log( x, y, z, w );
}

f3( [] );                           // undefined undefined [] []
f3( [1,2,3,4], 5, 6 );              // 1 2 [3,4] [5,6]

There are two ... operators in use here, and they’re both gathering values in arrays (z and w), though ...z gathers from the rest of the values left over in the first array argument, while ...w gathers from the rest of the main arguments left over after the first.

29.4.7.1 Destructuring Defaults + Parameter Defaults

There’s one subtle point you should be particularly careful to notice – the difference in behavior between a destructuring default value and a function parameter default value. For example:

function f6({ x = 10 } = {}, { y } = { y: 10 }) {
    console.log( x, y );
}

f6();                               // 10 10

At first, it would seem that we’ve declared a default value of 10 for both the x and y parameters, but in two different ways. However, these two different approaches will behave differently in certain cases, and the difference is awfully subtle.

Consider:

f6( {}, {} );                       // 10 undefined

Wait, why did that happen? It’s pretty clear that named parameter x is defaulting to 10 if not passed as a property of that same name in the first argument’s object.

But what about y being undefined? The { y: 10 } value is an object as a function parameter default value, not a destructuring default value. As such, it only applies if the second argument is not passed at all, or is passed as undefined.

In the previous snippet, we are passing a second argument ({}), so the default { y: 10 } value is not used, and the { y } destructuring occurs against the passed in {} empty object value.

Now, compare { y } = { y: 10 } to { x = 10 } = {}.

For the x’s form usage, if the first function argument is omitted or undefined, the {} empty object default applies. Then, whatever value is in the first argument position – either the default {} or whatever you passed in – is destructured with the { x = 10 }, which checks to see if an x property is found, and if not found (or undefined), the 10 default value is applied to the x named parameter.

Deep breath. Read back over those last few paragraphs a couple of times. Let’s review via code:

function f6({ x = 10 } = {}, { y } = { y: 10 }) {
    console.log( x, y );
}

f6();                               // 10 10
f6( undefined, undefined );         // 10 10
f6( {}, undefined );                // 10 10

f6( {}, {} );                       // 10 undefined
f6( undefined, {} );                // 10 undefined

f6( { x: 2 }, { y: 3 } );           // 2 3

It would generally seem that the defaulting behavior of the x parameter is probably the more desirable and sensible case compared to that of y. As such, it’s important to understand why and how { x = 10 } = {} form is different from { y } = { y: 10 } form.

If that’s still a bit fuzzy, go back and read it again, and play with this yourself. Your future self will thank you for taking the time to get this very subtle gotcha nuance detail straight.

29.4.7.2 Nested Defaults: Destructured and Restructured

Although it may at first be difficult to grasp, an interesting idiom emerges for setting defaults for a nested object’s properties: using object destructuring along with what I’d call restructuring.

Consider a set of defaults in a nested object structure, like the following:

// taken from: http://es-discourse.com/t/partial-default-arguments/120/7

var defaults = {
    options: {
        remove: true,
        enable: false,
        instance: {}
    },
    log: {
        warn: true,
        error: true
    }
};

Now, let’s say that you have an object called config, which has some of these applied, but perhaps not all, and you’d like to set all the defaults into this object in the missing spots, but not override specific settings already present:

var config = {
    options: {
        remove: false,
        instance: null
    }
};

You can of course do so manually, as you might have done in the past:

config.options = config.options || {};
config.options.remove = (config.options.remove !== undefined) ?
    config.options.remove : defaults.options.remove;
config.options.enable = (config.options.enable !== undefined) ?
    config.options.enable : defaults.options.enable;
...

Yuck.

Others may prefer the assign-overwrite approach to this task. You might be tempted by the ES6 Object.assign(..) utility (see Chapter 6) to clone the properties first from defaults and then overwrite them with the cloned properties from config, like so:

config = Object.assign( {}, defaults, config );

That looks way nicer, huh? But there’s a major problem! Object.assign(..) is shallow, which means when it copies defaults.options, it just copies that object reference, not deep cloning that object’s properties to a config.options object. Object.assign(..) would need to be applied (sort of “recursively”) at all levels of your object’s tree to get the deep cloning you’re expecting.
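A quick sketch makes the shallowness concrete (values here are trimmed down from the defaults/config example above):

```javascript
var defaults = { options: { remove: true, enable: false } };
var config = { options: { remove: false } };

config = Object.assign( {}, defaults, config );

// the whole `options` object reference from `config` overwrote the
// one from `defaults`, so the `enable: false` default is lost entirely:
console.log( config.options.enable );   // undefined <-- oops!
console.log( config.options.remove );   // false
```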

Note: Many JS utility libraries/frameworks provide their own option for deep cloning of an object, but those approaches and their gotchas are beyond our scope to discuss here.

So let’s examine if ES6 object destructuring with defaults can help at all:

config.options = config.options || {};
config.log = config.log || {};
({
    options: {
        remove: config.options.remove = defaults.options.remove,
        enable: config.options.enable = defaults.options.enable,
        instance: config.options.instance = defaults.options.instance
    } = {},
    log: {
        warn: config.log.warn = defaults.log.warn,
        error: config.log.error = defaults.log.error
    } = {}
} = config);

Not as nice as the false promise of Object.assign(..) (being that it’s shallow only), but it’s better than the manual approach by a fair bit, I think. It is still unfortunately verbose and repetitive, though.

The previous snippet’s approach works because I’m hacking the destructuring and defaults mechanism to do the property === undefined checks and assignment decisions for me. It’s a trick in that I’m destructuring config (see the = config at the end of the snippet), but I’m reassigning all the destructured values right back into config, via assignment references like config.options.enable.

Still too much, though. Let’s see if we can make anything better.

The following trick works best if you know that all the various properties you’re destructuring are uniquely named. You can still do it even if that’s not the case, but it’s not as nice – you’ll have to do the destructuring in stages, or create unique local variables as temporary aliases.

If we fully destructure all the properties into top-level variables, we can then immediately restructure to reconstitute the original nested object structure.

But all those temporary variables hanging around would pollute scope. So, let’s use block scoping (see “Block-Scoped Declarations” earlier in this chapter) with a general { } enclosing block:

// merge `defaults` into `config`
{
    // destructure (with default value assignments)
    let {
        options: {
            remove = defaults.options.remove,
            enable = defaults.options.enable,
            instance = defaults.options.instance
        } = {},
        log: {
            warn = defaults.log.warn,
            error = defaults.log.error
        } = {}
    } = config;

    // restructure
    config = {
        options: { remove, enable, instance },
        log: { warn, error }
    };
}

That seems a fair bit nicer, huh?

Note: You could also accomplish the scope enclosure with an arrow IIFE instead of the general { } block and let declarations. Your destructuring assignments/defaults would be in the parameter list and your restructuring would be the return statement in the function body.
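If it helps, here's one possible sketch of that arrow IIFE variation (same defaults/config shape as before, repeated so the snippet stands alone):

```javascript
var defaults = {
    options: { remove: true, enable: false, instance: {} },
    log: { warn: true, error: true }
};

var config = {
    options: { remove: false, instance: null }
};

// merge `defaults` into `config` via an arrow IIFE:
// destructuring (with default value assignments) happens in the
// parameter list; restructuring happens in the returned object literal
config = ( ({
    options: {
        remove = defaults.options.remove,
        enable = defaults.options.enable,
        instance = defaults.options.instance
    } = {},
    log: {
        warn = defaults.log.warn,
        error = defaults.log.error
    } = {}
}) => ({
    options: { remove, enable, instance },
    log: { warn, error }
}) )( config );

console.log( config.options.remove );   // false <-- kept from `config`
console.log( config.options.enable );   // false <-- filled from `defaults`
console.log( config.log.warn );         // true  <-- filled from `defaults`
```

Note that config.options.instance stays null here — as with the block-scoped version, defaults only kick in for undefined values.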

The { warn, error } syntax in the restructuring part may look new to you; that’s called “concise properties” and we cover it in the next section!

29.5 Object Literal Extensions

ES6 adds a number of important convenience extensions to the humble { .. } object literal.

29.5.1 Concise Properties

You’re certainly familiar with declaring object literals in this form:

var x = 2, y = 3,
    o = {
        x: x,
        y: y
    };

If it’s always felt redundant to say x: x all over, there’s good news. If you need to define a property that is the same name as a lexical identifier, you can shorten it from x: x to x. Consider:

var x = 2, y = 3,
    o = {
        x,
        y
    };

29.5.2 Concise Methods

In a similar spirit to concise properties we just examined, functions attached to properties in object literals also have a concise form, for convenience.

The old way:

var o = {
    x: function(){
        // ..
    },
    y: function(){
        // ..
    }
}

And as of ES6:

var o = {
    x() {
        // ..
    },
    y() {
        // ..
    }
}

Warning: While x() { .. } seems to just be shorthand for x: function(){ .. }, concise methods have special behaviors that their older counterparts don’t; specifically, the allowance for super (see “Object super” later in this chapter).

Generators (see Chapter 4) also have a concise method form:

var o = {
    *foo() { .. }
};

29.5.2.1 Concisely Unnamed

While that convenience shorthand is quite attractive, there’s a subtle gotcha to be aware of. To illustrate, let’s examine pre-ES6 code like the following, which you might try to refactor to use concise methods:

function runSomething(o) {
    var x = Math.random(),
        y = Math.random();

    return o.something( x, y );
}

runSomething( {
    something: function something(x,y) {
        if (x > y) {
            // recursively call with `x`
            // and `y` swapped
            return something( y, x );
        }

        return y - x;
    }
} );

This obviously silly code just generates two random numbers and subtracts the smaller from the bigger. But what’s important here isn’t what it does, but rather how it’s defined. Let’s focus on the object literal and function definition, as we see here:

runSomething( {
    something: function something(x,y) {
        // ..
    }
} );

Why do we say both something: and function something? Isn’t that redundant? Actually, no, both are needed for different purposes. The property something is how we can call o.something(..), sort of like its public name. But the second something is a lexical name to refer to the function from inside itself, for recursion purposes.

Can you see why the line return something(y,x) needs the name something to refer to the function? There’s no lexical name for the object, such that it could have said return o.something(y,x) or something of that sort.

That’s actually a pretty common practice when the object literal does have an identifying name, such as:

var controller = {
    makeRequest: function(..){
        // ..
        controller.makeRequest(..);
    }
};

Is this a good idea? Perhaps, perhaps not. You’re assuming that the name controller will always point to the object in question. But it very well may not – the makeRequest(..) function doesn’t control the outer code and so can’t force that to be the case. This could come back to bite you.

Others prefer to use this to define such things:

var controller = {
    makeRequest: function(..){
        // ..
        this.makeRequest(..);
    }
};

That looks fine, and should work if you always invoke the method as controller.makeRequest(..). But you now have a this binding gotcha if you do something like:

btn.addEventListener( "click", controller.makeRequest, false );

Of course, you can solve that by passing controller.makeRequest.bind(controller) as the handler reference to bind the event to. But yuck – it isn’t very appealing.

Or what if your inner this.makeRequest(..) call needs to be made from a nested function? You’ll have another this binding hazard, which people will often solve with the hacky var self = this, such as:

var controller = {
    makeRequest: function(..){
        var self = this;

        btn.addEventListener( "click", function(){
            // ..
            self.makeRequest(..);
        }, false );
    }
};

More yuck.

Note: For more information on this binding rules and gotchas, see Chapters 1-2 of the this & Object Prototypes title of this series.

OK, what does all this have to do with concise methods? Recall our something(..) method definition:

runSomething( {
    something: function something(x,y) {
        // ..
    }
} );

The second something here provides a super convenient lexical identifier that will always point to the function itself, giving us the perfect reference for recursion, event binding/unbinding, and so on – no messing around with this or trying to use an untrustable object reference.

Great!

So, now we try to refactor that function reference to this ES6 concise method form:

runSomething( {
    something(x,y) {
        if (x > y) {
            return something( y, x );
        }

        return y - x;
    }
} );

Seems fine at first glance, except this code will break. The return something(..) call will not find a something identifier, so you’ll get a ReferenceError. Oops. But why?

The above ES6 snippet is interpreted as meaning:

runSomething( {
    something: function(x,y){
        if (x > y) {
            return something( y, x );
        }

        return y - x;
    }
} );

Look closely. Do you see the problem? The concise method definition implies something: function(x,y). See how the second something we were relying on has been omitted? In other words, concise methods imply anonymous function expressions.

Yeah, yuck.

Note: You may be tempted to think that => arrow functions are a good solution here, but they’re equally insufficient, as they’re also anonymous function expressions. We’ll cover them in “Arrow Functions” later in this chapter.

The partially redeeming news is that our something(x,y) concise method won’t be totally anonymous. See “Function Names” in Chapter 7 for information about ES6 function name inference rules. That won’t help us for our recursion, but it helps with debugging at least.
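A quick sketch of that inference: the concise method still gets a name property for stack traces and debugging, even though there’s no lexical identifier we can use inside the function:

```javascript
var o = {
    something(x,y) { return x + y; }
};

// the engine infers the function's `name` from the property
console.log( o.something.name );    // "something"
```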

So what are we left to conclude about concise methods? They’re short and sweet, and a nice convenience. But you should only use them if you’re never going to need them to do recursion or event binding/unbinding. Otherwise, stick to your old-school something: function something(..) method definitions.

A lot of your methods are probably going to benefit from concise method definitions, so that’s great news! Just be careful of the few where there’s an un-naming hazard.

29.5.2.2 ES5 Getter/Setter

Technically, ES5 defined getter/setter literal forms, but they didn’t seem to get used much, mostly due to the lack of transpilers to handle that new syntax (the only major new syntax added in ES5, really). So while it’s not a new ES6 feature, we’ll briefly review that form, as it’s probably going to be much more useful with ES6 going forward.

Consider:

var o = {
    __id: 10,
    get id() { return this.__id++; },
    set id(v) { this.__id = v; }
}

o.id;           // 10
o.id;           // 11
o.id = 20;
o.id;           // 20

// and:
o.__id;         // 21
o.__id;         // 21 -- still!

These getter and setter literal forms are also present in classes; see Chapter 3.

Warning: It may not be obvious, but the setter literal must have exactly one declared parameter; omitting it or listing others is illegal syntax. The single required parameter can use destructuring and defaults (e.g., set id({ id: v = 0 }) { .. }), but the gather/rest ... is not allowed (set id(...v) { .. }).

29.5.3 Computed Property Names

You’ve probably been in a situation like the following snippet, where you have one or more property names that come from some sort of expression and thus can’t be put into the object literal:

var prefix = "user_";

var o = {
    baz: function(..){ .. }
};

o[ prefix + "foo" ] = function(..){ .. };
o[ prefix + "bar" ] = function(..){ .. };
..

ES6 adds a syntax to the object literal definition which allows you to specify an expression that should be computed, whose result is the property name assigned. Consider:

var prefix = "user_";

var o = {
    baz: function(..){ .. },
    [ prefix + "foo" ]: function(..){ .. },
    [ prefix + "bar" ]: function(..){ .. }
    ..
};

Any valid expression can appear inside the [ .. ] that sits in the property name position of the object literal definition.
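Because the snippets above use `..` placeholders, here’s a small runnable sketch (hypothetical property values) of the same idea, including a computed name produced by a method call:

```javascript
var prefix = "user_";

var o = {
    baz: function(){ return "baz"; },
    [ prefix + "foo" ]: function(){ return "foo"; },
    // any expression works, including a function call:
    [ prefix.toUpperCase() + "bar" ]: function(){ return "bar"; }
};

console.log( o.user_foo() );    // "foo"
console.log( o.USER_bar() );    // "bar"
```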

Probably the most common use of computed property names will be with Symbols (which we cover in “Symbols” later in this chapter), such as:

var o = {
    [Symbol.toStringTag]: "really cool thing",
    ..
};

Symbol.toStringTag is a special built-in value, which we evaluate with the [ .. ] syntax, so we can assign the "really cool thing" value to the special property name.
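To see the effect of that assignment, assuming the default Object.prototype.toString(..) behavior:

```javascript
var o = {
    [Symbol.toStringTag]: "really cool thing"
};

// the tag shows up in the default string representation
console.log( Object.prototype.toString.call( o ) );
// "[object really cool thing]"
```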

Computed property names can also appear as the name of a concise method or a concise generator:

var o = {
    ["f" + "oo"]() { .. },  // computed concise method
    *["b" + "ar"]() { .. }  // computed concise generator
};

29.5.4 Setting [[Prototype]]

We won’t cover prototypes in detail here, so for more information, see the this & Object Prototypes title of this series.

Sometimes it will be helpful to assign the [[Prototype]] of an object at the same time you’re declaring its object literal. The following has been a nonstandard extension in many JS engines for a while, but is standardized as of ES6:

var o1 = {
    // ..
};

var o2 = {
    __proto__: o1,
    // ..
};

o2 is declared with a normal object literal, but it’s also [[Prototype]]-linked to o1. The __proto__ property name here can also be a string "__proto__", but note that it cannot be the result of a computed property name (see the previous section).
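A runnable sketch of that linkage (hypothetical method names):

```javascript
var o1 = {
    foo() { return "o1:foo"; }
};

var o2 = {
    __proto__: o1,
    bar() { return this.foo() + " via o2"; }   // `foo` found on o1
};

console.log( o2.bar() );                             // "o1:foo via o2"
console.log( Object.getPrototypeOf( o2 ) === o1 );   // true
```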

__proto__ is controversial, to say the least. It’s a decades-old proprietary extension to JS that is finally standardized, somewhat begrudgingly it seems, in ES6. Many developers feel it shouldn’t ever be used. In fact, it’s in “Annex B” of ES6, which is the section that lists things JS feels it has to standardize for compatibility reasons only.

Warning: Though I’m narrowly endorsing __proto__ as a key in an object literal definition, I definitely do not endorse using it in its object property form, like o.__proto__. That form is both a getter and setter (again for compatibility reasons), but there are definitely better options. See the this & Object Prototypes title of this series for more information.

For setting the [[Prototype]] of an existing object, you can use the ES6 utility Object.setPrototypeOf(..). Consider:

var o1 = {
    // ..
};

var o2 = {
    // ..
};

Object.setPrototypeOf( o2, o1 );

Note: We’ll discuss Object again in Chapter 6. “Object.setPrototypeOf(..) Static Function” provides additional details on Object.setPrototypeOf(..). Also see “Object.assign(..) Static Function” for another form that relates o2 prototypically to o1.

29.5.5 Object super

super is typically thought of as being only related to classes. However, due to JS’s classless-objects-with-prototypes nature, super is equally effective, and nearly the same in behavior, with plain objects’ concise methods.

Consider:

var o1 = {
    foo() {
        console.log( "o1:foo" );
    }
};

var o2 = {
    foo() {
        super.foo();
        console.log( "o2:foo" );
    }
};

Object.setPrototypeOf( o2, o1 );

o2.foo();       // o1:foo
                // o2:foo

Warning: super is only allowed in concise methods, not regular function expression properties. It also is only allowed in super.XXX form (for property/method access), not in super() form.

The super reference in the o2.foo() method is locked statically to o2, and specifically to the [[Prototype]] of o2. super here would basically be Object.getPrototypeOf(o2) – resolves to o1 of course – which is how it finds and calls o1.foo().

For complete details on super, see “Classes” in Chapter 3.

29.6 Template Literals

At the very outset of this section, I’m going to have to call out the name of this ES6 feature as being awfully… misleading, depending on your experiences with what the word template means.

Many developers think of templates as being reusable renderable pieces of text, such as the capability provided by most template engines (Mustache, Handlebars, etc.). ES6’s use of the word template would imply something similar, like a way to declare inline template literals that can be re-rendered. However, that’s not at all the right way to think about this feature.

So, before we go on, I’m renaming it to what it should have been called: interpolated string literals (or interpoliterals for short).

You’re already well aware of declaring string literals with " or ' delimiters, and you also know that these are not smart strings (as some languages have), where the contents would be parsed for interpolation expressions.

However, ES6 introduces a new type of string literal, using the ` backtick as the delimiter. These string literals allow basic string interpolation expressions to be embedded, which are then automatically parsed and evaluated.

Here’s the old pre-ES6 way:

var name = "Kyle";

var greeting = "Hello " + name + "!";

console.log( greeting );            // "Hello Kyle!"
console.log( typeof greeting );     // "string"

Now, consider the new ES6 way:

var name = "Kyle";

var greeting = `Hello ${name}!`;

console.log( greeting );            // "Hello Kyle!"
console.log( typeof greeting );     // "string"

As you can see, we used the `..` around a series of characters, which are interpreted as a string literal, but any expressions of the form ${..} are parsed and evaluated inline immediately. The fancy term for such parsing and evaluating is interpolation (much more accurate than templating).

The result of the interpolated string literal expression is just a plain old normal string, assigned to the greeting variable.

Warning: typeof greeting == "string" illustrates why it’s important not to think of these entities as special template values, as you cannot assign the unevaluated form of the literal to something and reuse it. The `..` string literal is more like an IIFE in the sense that it’s automatically evaluated inline. The result of a `..` string literal is, simply, just a string.

One really nice benefit of interpolated string literals is they are allowed to split across multiple lines:

var text =
`Now is the time for all good men
to come to the aid of their
country!`;

console.log( text );
// Now is the time for all good men
// to come to the aid of their
// country!

The line breaks (newlines) in the interpolated string literal were preserved in the string value.

Unless they appear as explicit escape sequences in the literal, a literal \r carriage return character (code point U+000D) and a literal \r\n carriage return + line feed sequence (code points U+000D and U+000A) are both normalized to a \n line feed character (code point U+000A). Don’t worry though; this normalization is rare and would likely only happen if you copy-pasted text into your JS file.

29.6.1 Interpolated Expressions

Any valid expression is allowed to appear inside ${..} in an interpolated string literal, including function calls, inline function expression calls, and even other interpolated string literals!

Consider:

function upper(s) {
    return s.toUpperCase();
}

var who = "reader";

var text =
`A very ${upper( "warm" )} welcome
to all of you ${upper( `${who}s` )}!`;

console.log( text );
// A very WARM welcome
// to all of you READERS!

Here, the inner `${who}s` interpolated string literal was a little bit nicer convenience for us when combining the who variable with the "s" string, as opposed to who + "s". There will be cases that nesting interpolated string literals is helpful, but be wary if you find yourself doing that kind of thing often, or if you find yourself nesting several levels deep.

If that’s the case, the odds are good that your string value production could benefit from some abstractions.

Warning: As a word of caution, be very careful about the readability of your code with such new found power. Just like with default value expressions and destructuring assignment expressions, just because you can do something doesn’t mean you should do it. Never go so overboard with new ES6 tricks that your code becomes more clever than you or your other team members.

29.6.1.1 Expression Scope

One quick note about the scope that is used to resolve variables in expressions. I mentioned earlier that an interpolated string literal is kind of like an IIFE, and it turns out thinking about it like that explains the scoping behavior as well.

Consider:

function foo(str) {
    var name = "foo";
    console.log( str );
}

function bar() {
    var name = "bar";
    foo( `Hello from ${name}!` );
}

var name = "global";

bar();                  // "Hello from bar!"

At the moment the `..` string literal is expressed, inside the bar() function, the scope available to it finds bar()’s name variable with value "bar". Neither the global name nor foo(..)’s name matter. In other words, an interpolated string literal is just lexically scoped where it appears, not dynamically scoped in any way.

29.6.2 Tagged Template Literals

Again, renaming the feature for sanity’s sake: tagged string literals.

To be honest, this is one of the cooler tricks that ES6 offers. It may seem a little strange, and perhaps not all that generally practical at first. But once you’ve spent some time with it, tagged string literals may just surprise you in their usefulness.

For example:

function foo(strings, ...values) {
    console.log( strings );
    console.log( values );
}

var desc = "awesome";

foo`Everything is ${desc}!`;
// [ "Everything is ", "!"]
// [ "awesome" ]

Let’s take a moment to consider what’s happening in the previous snippet. First, the most jarring thing that jumps out is foo`Everything...`;. That doesn’t look like anything we’ve seen before. What is it?

It’s essentially a special kind of function call that doesn’t need the ( .. ). The tag – the foo part before the `..` string literal – is a function value that should be called. Actually, it can be any expression that results in a function, even a function call that returns another function, like:

function bar() {
    return function foo(strings, ...values) {
        console.log( strings );
        console.log( values );
    }
}

var desc = "awesome";

bar()`Everything is ${desc}!`;
// [ "Everything is ", "!"]
// [ "awesome" ]

But what gets passed to the foo(..) function when invoked as a tag for a string literal?

The first argument – we called it strings – is an array of all the plain strings (the stuff between any interpolated expressions). We get two values in the strings array: "Everything is " and "!".

For convenience sake in our example, we then gather up all subsequent arguments into an array called values using the ... gather/rest operator (see the “Spread/Rest” section earlier in this chapter), though you could of course have left them as individual named parameters following the strings parameter.

The argument(s) gathered into our values array are the results of the already-evaluated interpolation expressions found in the string literal. So obviously the only element in values in our example is "awesome".

You can think of these two arrays as: the values in values are the separators if you were to splice them in between the values in strings, and then if you joined everything together, you’d get the complete interpolated string value.

A tagged string literal is like a processing step after the interpolation expressions are evaluated but before the final string value is compiled, allowing you more control over generating the string from the literal.

Typically, the string literal tag function (foo(..) in the previous snippets) should compute an appropriate string value and return it, so that you can use the tagged string literal as a value just like untagged string literals:

function tag(strings, ...values) {
    return strings.reduce( function(s,v,idx){
        return s + (idx > 0 ? values[idx-1] : "") + v;
    }, "" );
}

var desc = "awesome";

var text = tag`Everything is ${desc}!`;

console.log( text );            // Everything is awesome!

In this snippet, tag(..) is a pass-through operation, in that it doesn’t perform any special modifications, but just uses reduce(..) to loop over and splice/interleave strings and values together the same way an untagged string literal would have done.

So what are some practical uses? There are many advanced ones that are beyond our scope to discuss here. But here’s a simple idea that formats numbers as U.S. dollars (sort of like basic localization):

function dollabillsyall(strings, ...values) {
    return strings.reduce( function(s,v,idx){
        if (idx > 0) {
            if (typeof values[idx-1] == "number") {
                // look, also using interpolated
                // string literals!
                s += `$${values[idx-1].toFixed( 2 )}`;
            }
            else {
                s += values[idx-1];
            }
        }

        return s + v;
    }, "" );
}

var amt1 = 11.99,
    amt2 = amt1 * 1.08,
    name = "Kyle";

var text = dollabillsyall
`Thanks for your purchase, ${name}! Your
product cost was ${amt1}, which with tax
comes out to ${amt2}.`

console.log( text );
// Thanks for your purchase, Kyle! Your
// product cost was $11.99, which with tax
// comes out to $12.95.

If a number value is encountered in the values array, we put "$" in front of it and format it to two decimal places with toFixed(2). Otherwise, we let the value pass-through untouched.

29.6.2.1 Raw Strings

In the previous snippets, our tag functions receive the first argument we called strings, which is an array. But there’s an additional bit of data included: the raw unprocessed versions of all the strings. You can access those raw string values using the .raw property, like this:

function showraw(strings, ...values) {
    console.log( strings );
    console.log( strings.raw );
}

showraw`Hello\nWorld`;
// [ "Hello
// World" ]
// [ "Hello\nWorld" ]

The raw version of the value preserves the raw escaped \n sequence (the \ and the n are separate characters), while the processed version considers it a single newline character. However, the earlier mentioned line-ending normalization is applied to both values.

ES6 comes with a built-in function that can be used as a string literal tag: String.raw(..). It simply passes through the raw versions of the strings values:

console.log( `Hello\nWorld` );
// Hello
// World

console.log( String.raw`Hello\nWorld` );
// Hello\nWorld

String.raw`Hello\nWorld`.length;
// 12

Other uses for string literal tags include special processing for internationalization, localization, and more!

29.7 Arrow Functions

We’ve touched on this binding complications with functions earlier in this chapter, and they’re covered at length in the this & Object Prototypes title of this series. It’s important to understand the frustrations that this-based programming with normal functions brings, because that is the primary motivation for the new ES6 => arrow function feature.

Let’s first illustrate what an arrow function looks like, as compared to normal functions:

function foo(x,y) {
    return x + y;
}

// versus

var foo = (x,y) => x + y;

The arrow function definition consists of a parameter list (of zero or more parameters, and surrounding ( .. ) if there’s not exactly one parameter), followed by the => marker, followed by a function body.

So, in the previous snippet, the arrow function is just the (x,y) => x + y part, and that function reference happens to be assigned to the variable foo.

The body only needs to be enclosed by { .. } if there’s more than one expression, or if the body consists of a non-expression statement. If there’s only one expression, and you omit the surrounding { .. }, there’s an implied return in front of the expression, as illustrated in the previous snippet.
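One gotcha worth a quick sketch: a concise body that should return an object literal needs ( .. ) around it, because a leading { would otherwise be parsed as the function’s block body:

```javascript
// the ( .. ) makes `{ x: .., y: .. }` an object literal expression
var makePoint = (x,y) => ({ x: x, y: y });

var p = makePoint( 1, 2 );

console.log( p.x, p.y );    // 1 2

// without the parentheses, the { .. } would be treated as a
// block body, not an object literal
```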

Here’s some other arrow function variations to consider:

var f1 = () => 12;
var f2 = x => x * 2;
var f3 = (x,y) => {
    var z = x * 2 + y;
    y++;
    x *= 3;
    return (x + y + z) / 2;
};

Arrow functions are always function expressions; there is no arrow function declaration. It also should be clear that they are anonymous function expressions – they have no named reference for the purposes of recursion or event binding/unbinding – though “Function Names” in Chapter 7 will describe ES6’s function name inference rules for debugging purposes.

Note: All the capabilities of normal function parameters are available to arrow functions, including default values, destructuring, rest parameters, and so on.

Arrow functions have a nice, shorter syntax, which makes them on the surface very attractive for writing terser code. Indeed, nearly all literature on ES6 (other than the titles in this series) seems to immediately and exclusively adopt the arrow function as “the new function.”

It is telling that nearly all examples in discussion of arrow functions are short single statement utilities, such as those passed as callbacks to various utilities. For example:

var a = [1,2,3,4,5];

a = a.map( v => v * 2 );

console.log( a );               // [2,4,6,8,10]

In those cases, where you have such inline function expressions, and they fit the pattern of computing a quick calculation in a single statement and returning that result, arrow functions indeed look to be an attractive and lightweight alternative to the more verbose function keyword and syntax.

Most people tend to ooh and aah at nice terse examples like that, as I imagine you just did!

However, I would caution you that it would seem to me somewhat a misapplication of this feature to use arrow function syntax with otherwise normal, multistatement functions, especially those that would otherwise be naturally expressed as function declarations.

Recall the dollabillsyall(..) string literal tag function from earlier in this chapter – let’s change it to use => syntax:

var dollabillsyall = (strings, ...values) =>
    strings.reduce( (s,v,idx) => {
        if (idx > 0) {
            if (typeof values[idx-1] == "number") {
                // look, also using interpolated
                // string literals!
                s += `$${values[idx-1].toFixed( 2 )}`;
            }
            else {
                s += values[idx-1];
            }
        }

        return s + v;
    }, "" );

In this example, the only modifications I made were the removal of function, return, and some { .. }, and then the insertion of => and a var. Is this a significant improvement in the readability of the code? Meh.

I’d actually argue that the lack of return and outer { .. } partially obscures the fact that the reduce(..) call is the only statement in the dollabillsyall(..) function and that its result is the intended result of the call. Also, the trained eye that is so used to hunting for the word function in code to find scope boundaries now needs to look for the => marker, which can definitely be harder to find in the thick of the code.

While not a hard-and-fast rule, I’d say that the readability gains from => arrow function conversion are inversely proportional to the length of the function being converted. The longer the function, the less => helps; the shorter the function, the more => can shine.

I think it’s probably more sensible and reasonable to adopt => for the places in code where you do need short inline function expressions, but leave your normal-length main functions as is.

29.7.1 Not Just Shorter Syntax, But this

Most of the popular attention toward => has been on saving those precious keystrokes by dropping function, return, and { .. } from your code.

But there’s a big detail we’ve skipped over so far. I said at the beginning of the section that => functions are closely related to this binding behavior. In fact, => arrow functions are primarily designed to alter this behavior in a specific way, solving a particular and common pain point with this-aware coding.

The saving of keystrokes is a red herring, a misleading sideshow at best.

Let’s revisit another example from earlier in this chapter:

var controller = {
    makeRequest: function(..){
        var self = this;

        btn.addEventListener( "click", function(){
            // ..
            self.makeRequest(..);
        }, false );
    }
};

We used the var self = this hack, and then referenced self.makeRequest(..), because inside the callback function we’re passing to addEventListener(..), the this binding will not be the same as it is in makeRequest(..) itself. In other words, because this bindings are dynamic, we fall back to the predictability of lexical scope via the self variable.

Herein we finally can see the primary design characteristic of => arrow functions. Inside arrow functions, the this binding is not dynamic, but is instead lexical. In the previous snippet, if we used an arrow function for the callback, this will be predictably what we wanted it to be.

Consider:

var controller = {
    makeRequest: function(..){
        btn.addEventListener( "click", () => {
            // ..
            this.makeRequest(..);
        }, false );
    }
};

Lexical this in the arrow function callback in the previous snippet now points to the same value as in the enclosing makeRequest(..) function. In other words, => is a syntactic stand-in for var self = this.

In cases where var self = this (or, alternatively, a function .bind(this) call) would normally be helpful, => arrow functions are a nicer alternative operating on the same principle. Sounds great, right?

Not quite so simple.

If => replaces var self = this or .bind(this) and it helps, guess what happens if you use => with a this-aware function that doesn’t need var self = this to work? You might be able to guess that it’s going to mess things up. Yeah.

Consider:

var controller = {
    makeRequest: (..) => {
        // ..
        this.helper(..);
    },
    helper: (..) => {
        // ..
    }
};

controller.makeRequest(..);

Although we invoke as controller.makeRequest(..), the this.helper reference fails, because this here doesn’t point to controller as it normally would. Where does it point? It lexically inherits this from the surrounding scope. In this previous snippet, that’s the global scope, where this points to the global object. Ugh.

In addition to lexical this, arrow functions also have lexical arguments – they don’t have their own arguments object but instead inherit their enclosing function’s – as well as lexical super and new.target (see “Classes” in Chapter 3).

So now we can conclude a more nuanced set of rules for when => is appropriate and not:

  • If you have a short, single-statement inline function expression, where the only statement is a return of some computed value, and that function doesn’t already make a this reference inside it, and there’s no self-reference (recursion, event binding/unbinding), and you don’t reasonably expect the function to ever be that way, you can probably safely refactor it to be an => arrow function.
  • If you have an inner function expression that’s relying on a var self = this hack or a .bind(this) call on it in the enclosing function to ensure proper this binding, that inner function expression can probably safely become an => arrow function.
  • If you have an inner function expression that’s relying on something like var args = Array.prototype.slice.call(arguments) in the enclosing function to make a lexical copy of arguments, that inner function expression can probably safely become an => arrow function.
  • For everything else – normal function declarations, longer multistatement function expressions, functions that need a lexical name identifier self-reference (recursion, etc.), and any other function that doesn’t fit the previous characteristics – you should probably avoid => function syntax.

Bottom line: => is about lexical binding of this, arguments, and super. These are intentional features designed to fix some common problems, not bugs, quirks, or mistakes in ES6.

Don’t believe any hype that => is primarily, or even mostly, about fewer keystrokes. Whether you save keystrokes or waste them, you should know exactly what you are intentionally doing with every character typed.

Tip: If you have a function that for any of these articulated reasons is not a good match for an => arrow function, but it’s being declared as part of an object literal, recall from “Concise Methods” earlier in this chapter that there’s another option for shorter function syntax.

If you prefer a visual decision chart for how/why to pick an arrow function:

<img src="fig1.png">

29.8 for..of Loops

Joining the for and for..in loops from the JavaScript we’re all familiar with, ES6 adds a for..of loop, which loops over the set of values produced by an iterator.

The value you loop over with for..of must be an iterable, or it must be a value which can be coerced/boxed to an object (see the Types & Grammar title of this series) that is an iterable. An iterable is simply an object that is able to produce an iterator, which the loop then uses.

Let’s compare for..of to for..in to illustrate the difference:

var a = ["a","b","c","d","e"];

for (var idx in a) {
    console.log( idx );
}
// 0 1 2 3 4

for (var val of a) {
    console.log( val );
}
// "a" "b" "c" "d" "e"

As you can see, for..in loops over the keys/indexes in the a array, while for..of loops over the values in a.

Here’s the pre-ES6 version of the for..of from that previous snippet:

var a = ["a","b","c","d","e"],
    k = Object.keys( a );

for (var val, i = 0; i < k.length; i++) {
    val = a[ k[i] ];
    console.log( val );
}
// "a" "b" "c" "d" "e"

And here’s the ES6 but non-for..of equivalent, which also gives a glimpse at manually iterating an iterator (see “Iterators” in Chapter 3):

var a = ["a","b","c","d","e"];

for (var val, ret, it = a[Symbol.iterator]();
    (ret = it.next()) && !ret.done;
) {
    val = ret.value;
    console.log( val );
}
// "a" "b" "c" "d" "e"

Under the covers, the for..of loop asks the iterable for an iterator (using the built-in Symbol.iterator; see “Well-Known Symbols” in Chapter 7), then it repeatedly calls the iterator and assigns its produced value to the loop iteration variable.

Standard built-in values in JavaScript that are by default iterables (or provide them) include:

  • Arrays
  • Strings
  • Generators (see Chapter 3)
  • Collections / TypedArrays (see Chapter 5)

Warning: Plain objects are not by default suitable for for..of looping. That’s because they don’t have a default iterator, which is intentional, not a mistake. However, we won’t go any further into those nuanced reasonings here. In “Iterators” in Chapter 3, we’ll see how to define iterators for our own objects, which lets for..of loop over any object to get a set of values we define.
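As a quick peek ahead (Chapter 3 covers this properly), here’s a minimal sketch of giving a plain object its own default iterator, using a computed concise generator for the Symbol.iterator name:

```javascript
var obj = {
    vals: [10, 20, 30],
    *[Symbol.iterator]() {
        // yield whatever values we define as our "contents"
        for (var v of this.vals) {
            yield v;
        }
    }
};

var out = [];
for (var v of obj) {
    out.push( v );
}

console.log( out );   // [10, 20, 30]
```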

Here’s how to loop over the characters in a primitive string:

for (var c of "hello") {
    console.log( c );
}
// "h" "e" "l" "l" "o"

The "hello" primitive string value is coerced/boxed to the String object wrapper equivalent, which is an iterable by default.

In for (XYZ of ABC).., the XYZ clause can either be an assignment expression or a declaration, identical to that same clause in for and for..in loops. So you can do stuff like this:

var o = {};

for (o.a of [1,2,3]) {
    console.log( o.a );
}
// 1 2 3

for ({x: o.a} of [ {x: 1}, {x: 2}, {x: 3} ]) {
  console.log( o.a );
}
// 1 2 3

for..of loops can be prematurely stopped, just like other loops, with break, continue, return (if in a function), and thrown exceptions. In any of these cases, the iterator’s return(..) function is automatically called (if one exists) to let the iterator perform cleanup tasks, if necessary.
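To see that cleanup hook fire, here’s a sketch with a hand-rolled infinite iterator; the makeIterator() helper and its wasCleanedUp() flag are purely for illustration:

```javascript
function makeIterator() {
    var i = 0, cleanedUp = false;

    return {
        // make the iterator itself iterable
        [Symbol.iterator]() { return this; },
        next() {
            return { value: i++, done: false };
        },
        // called automatically when the loop is stopped early
        return(v) {
            cleanedUp = true;
            return { value: v, done: true };
        },
        wasCleanedUp() { return cleanedUp; }
    };
}

var it = makeIterator();

for (var v of it) {
    if (v > 2) break;       // triggers `it.return(..)`
}

it.wasCleanedUp();          // true
```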

Note: See “Iterators” in Chapter 3 for more complete coverage on iterables and iterators.

29.9 Regular Expressions

Let’s face it: regular expressions haven’t changed much in JS in a long time. So it’s a great thing that they’ve finally learned a couple of new tricks in ES6. We’ll briefly cover the additions here, but the overall topic of regular expressions is so dense that you’ll need to turn to chapters/books dedicated to it (of which there are many!) if you need a refresher.

29.9.1 Unicode Flag

We’ll cover the topic of Unicode in more detail in “Unicode” later in this chapter. Here, we’ll just look briefly at the new u flag for ES6+ regular expressions, which turns on Unicode matching for that expression.

JavaScript strings are typically interpreted as sequences of 16-bit characters, which correspond to the characters in the Basic Multilingual Plane (BMP) (http://en.wikipedia.org/wiki/Plane_%28Unicode%29). But there are many Unicode characters that fall outside this range; UTF-16 represents each of them with two 16-bit code units (a surrogate pair), and so strings may have these multi-unit characters in them.

Prior to ES6, regular expressions could only match based on BMP characters, which means that those extended characters were treated as two separate characters for matching purposes. This is often not ideal.

So, as of ES6, the u flag tells a regular expression to process a string with the interpretation of Unicode (UTF-16) characters, such that such an extended character will be matched as a single entity.

Warning: Despite the name implication, “UTF-16” doesn’t strictly mean 16 bits. Modern Unicode uses 21 bits, and standards like UTF-8 and UTF-16 refer roughly to how many bits are used in the representation of a character.

An example (straight from the ES6 specification): 𝄞 (the musical symbol G-clef) is Unicode point U+1D11E (0x1D11E).

If this character appears in a regular expression pattern (like /𝄞/), the standard BMP interpretation would be that it’s two separate characters (0xD834 and 0xDD1E) to match with. But the new ES6 Unicode-aware mode means that /𝄞/u (or the escaped Unicode form /\u{1D11E}/u) will match "𝄞" in a string as a single matched character.

You might be wondering why this matters. In non-Unicode BMP mode, the pattern is treated as two separate characters, but it would still find the match in a string with the "𝄞" character in it, as you can see if you try:

/𝄞/.test( "𝄞-clef" );           // true

The length of the match is what matters. For example:

/^.-clef/.test( "𝄞-clef" );         // false
/^.-clef/u.test( "𝄞-clef" );        // true

The ^.-clef in the pattern says to match only a single character at the beginning before the normal "-clef" text. In standard BMP mode, the match fails (two characters), but with u Unicode mode flagged on, the match succeeds (one character).

It’s also important to note that u makes quantifiers like + and * apply to the entire Unicode code point as a single character, not just the lower surrogate (aka rightmost half of the symbol) of the character. The same goes for Unicode characters appearing in character classes, like /[💩-💫]/u.
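Here’s a quick sketch of that quantifier difference, using the U+1F4A9 astral symbol:

```javascript
// without `u`, `+` applies only to the trailing (low)
// surrogate code unit, so two full symbols don't match
/^💩+$/.test( "💩💩" );          // false

// with `u`, `+` applies to the whole symbol
/^💩+$/u.test( "💩💩" );         // true
```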

Note: There’s plenty more nitty-gritty details about u behavior in regular expressions, which Mathias Bynens (https://twitter.com/mathias) has written extensively about (https://mathiasbynens.be/notes/es6-unicode-regex).

29.9.2 Sticky Flag

Another flag mode added to ES6 regular expressions is y, which is often called “sticky mode.” Sticky essentially means the regular expression has a virtual anchor at its beginning that keeps it rooted to matching at only the position indicated by the regular expression’s lastIndex property.

To illustrate, let’s consider two regular expressions, the first without sticky mode and the second with:

var re1 = /foo/,
    str = "++foo++";

re1.lastIndex;          // 0
re1.test( str );        // true
re1.lastIndex;          // 0 -- not updated

re1.lastIndex = 4;
re1.test( str );        // true -- ignored `lastIndex`
re1.lastIndex;          // 4 -- not updated

Three things to observe about this snippet:

  • test(..) doesn’t pay any attention to lastIndex’s value, and always just performs its match from the beginning of the input string.
  • Because our pattern does not have a ^ start-of-input anchor, the search for "foo" is free to move ahead through the whole string looking for a match.
  • lastIndex is not updated by test(..).

Now, let’s try a sticky mode regular expression:

var re2 = /foo/y,       // <-- notice the `y` sticky flag
    str = "++foo++";

re2.lastIndex;          // 0
re2.test( str );        // false -- "foo" not found at `0`
re2.lastIndex;          // 0

re2.lastIndex = 2;
re2.test( str );        // true
re2.lastIndex;          // 5 -- updated to after previous match

re2.test( str );        // false
re2.lastIndex;          // 0 -- reset after previous match failure

And so our new observations about sticky mode:

  • test(..) uses lastIndex as the exact and only position in str to look to make a match. There is no moving ahead to look for the match – it’s either there at the lastIndex position or not.
  • If a match is made, test(..) updates lastIndex to point to the character immediately following the match. If a match fails, test(..) resets lastIndex back to 0.

Normal non-sticky patterns that aren’t otherwise ^-rooted to the start-of-input are free to move ahead in the input string looking for a match. But sticky mode restricts the pattern to matching just at the position of lastIndex.

As I suggested at the beginning of this section, another way of looking at this is that y implies a virtual anchor at the beginning of the pattern that is relative (aka constrains the start of the match) to exactly the lastIndex position.

Warning: In previous literature on the topic, it has alternatively been asserted that this behavior is like y implying a ^ (start-of-input) anchor in the pattern. This is inaccurate. We’ll explain in further detail in “Anchored Sticky” later.

29.9.2.1 Sticky Positioning

It may seem strangely limiting that to use y for repeated matches, you have to manually ensure lastIndex is in the exact right position, as it has no move-ahead capability for matching.

Here’s one possible scenario: if you know that the match you care about is always going to be at a position that’s a multiple of a number (e.g., 0, 10, 20, etc.), you can just construct a limited pattern matching what you care about, but then manually set lastIndex each time before match to those fixed positions.

Consider:

var re = /f../y,
    str = "foo       far       fad";

str.match( re );        // ["foo"]

re.lastIndex = 10;
str.match( re );        // ["far"]

re.lastIndex = 20;
str.match( re );        // ["fad"]

However, if you’re parsing a string that isn’t formatted in fixed positions like that, figuring out what to set lastIndex to before each match is likely going to be untenable.

There’s a saving nuance to consider here. y requires that lastIndex be in the exact position for a match to occur. But it doesn’t strictly require that you manually set lastIndex.

Instead, you can construct your expressions in such a way that they capture in each main match everything before and after the thing you care about, up to right before the next thing you’ll care to match.

Because lastIndex will set to the next character beyond the end of a match, if you’ve matched everything up to that point, lastIndex will always be in the correct position for the y pattern to start from the next time.

Warning: If you can’t predict the structure of the input string in a sufficiently patterned way like that, this technique may not be suitable and you may not be able to use y.

Having structured string input is likely the most practical scenario where y will be capable of performing repeated matching throughout a string. Consider:

var re = /\d+\.\s(.*?)(?:\s|$)/y,
    str = "1. foo 2. bar 3. baz";

str.match( re );        // [ "1. foo ", "foo" ]

re.lastIndex;           // 7 -- correct position!
str.match( re );        // [ "2. bar ", "bar" ]

re.lastIndex;           // 14 -- correct position!
str.match( re );        // [ "3. baz", "baz" ]

This works because I knew something ahead of time about the structure of the input string: there is always a numeral prefix like "1. " before the desired match ("foo", etc.), and either a space after it, or the end of the string ($ anchor). So the regular expression I constructed captures all of that in each main match, and then I use a matching group ( ) so that the stuff I really care about is separated out for convenience.

After the first match ("1. foo "), the lastIndex is 7, which is already the position needed to start the next match, for "2. bar ", and so on.

If you’re going to use y sticky mode for repeated matches, you’ll probably want to look for opportunities to have lastIndex automatically positioned as we’ve just demonstrated.

29.9.2.2 Sticky Versus Global

Some readers may be aware that you can emulate something like this lastIndex-relative matching with the g global match flag and the exec(..) method, as so:

var re = /o+./g,        // <-- look, `g`!
    str = "foot book more";

re.exec( str );         // ["oot"]
re.lastIndex;           // 4

re.exec( str );         // ["ook"]
re.lastIndex;           // 9

re.exec( str );         // ["or"]
re.lastIndex;           // 13

re.exec( str );         // null -- no more matches!
re.lastIndex;           // 0 -- starts over now!

While it’s true that g pattern matches with exec(..) start their matching from lastIndex’s current value, and also update lastIndex after each match (or failure), this is not the same thing as y’s behavior.

Notice in the previous snippet that "ook", located at position 6, was matched and found by the second exec(..) call, even though at the time, lastIndex was 4 (from the end of the previous match). Why? Because as we said earlier, non-sticky matches are free to move ahead in their matching. A sticky mode expression would have failed here, because it would not be allowed to move ahead.

In addition to perhaps undesired move-ahead matching behavior, another downside to just using g instead of y is that g changes the behavior of some matching methods, like str.match(re).

Consider:

var re = /o+./g,        // <-- look, `g`!
    str = "foot book more";

str.match( re );        // ["oot","ook","or"]

See how all the matches were returned at once? Sometimes that’s OK, but sometimes that’s not what you want.

The y sticky flag will give you one-at-a-time progressive matching with utilities like test(..) and match(..). Just make sure the lastIndex is always in the right position for each match!
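For example, here’s a sketch of driving that one-at-a-time matching with a loop, using the same numbered-list string from earlier:

```javascript
var re = /\d+\.\s(.*?)(?:\s|$)/y,
    str = "1. foo 2. bar 3. baz",
    results = [],
    match;

// each successful match leaves `lastIndex` positioned at the
// start of the next entry, so repeated exec(..) calls walk
// the whole string; a failed match ends the loop
while (match = re.exec( str )) {
    results.push( match[1] );
}

results;        // ["foo","bar","baz"]
```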

29.9.2.3 Anchored Sticky

As we warned earlier, it’s inaccurate to think of sticky mode as implying a pattern starts with ^. The ^ anchor has a distinct meaning in regular expressions, which is not altered by sticky mode. ^ is an anchor that always refers to the beginning of the input, and is not in any way relative to lastIndex.

Besides poor/inaccurate documentation on this topic, the confusion is unfortunately strengthened further because an older pre-ES6 experiment with sticky mode in Firefox did make ^ relative to lastIndex, so that behavior has been around for years.

ES6 elected not to do it that way. ^ in a pattern means start-of-input absolutely and only.

As a consequence, a pattern like /^foo/y will always and only find a "foo" match at the beginning of a string, if it’s allowed to match there. If lastIndex is not 0, the match will fail. Consider:

var re = /^foo/y,
    str = "foo";

re.test( str );         // true
re.test( str );         // false
re.lastIndex;           // 0 -- reset after failure

re.lastIndex = 1;
re.test( str );         // false -- failed for positioning
re.lastIndex;           // 0 -- reset after failure

Bottom line: y plus ^ plus lastIndex > 0 is an incompatible combination that will always cause a failed match.

Note: While y does not alter the meaning of ^ in any way, the m multiline mode does, such that ^ means start-of-input or start of text after a newline. So, if you combine y and m flags together for a pattern, you can find multiple ^-rooted matches in a string. But remember: because it’s y sticky, you’ll have to make sure lastIndex is pointing at the correct new line position (likely by matching to the end of the line) each subsequent time, or no subsequent matches will be made.
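Here’s a sketch of that y plus m combination; the "x: .." line format is just illustrative. The pattern consumes each line’s trailing newline, so lastIndex always lands exactly where m lets ^ match next:

```javascript
var re = /^x: (\d+)\n?/ym,      // <-- both `y` and `m`
    str = "x: 1\nx: 2\nx: 3";

re.exec( str )[1];      // "1"
re.exec( str )[1];      // "2"
re.exec( str )[1];      // "3"
```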

29.9.3 Regular Expression Flags

Prior to ES6, if you wanted to examine a regular expression object to see what flags it had applied, you needed to parse them out – ironically, probably with another regular expression – from the content of the source property, such as:

var re = /foo/ig;

re.toString();          // "/foo/ig"

var flags = re.toString().match( /\/([gim]*)$/ )[1];

flags;                  // "ig"

As of ES6, you can now get these values directly, with the new flags property:

var re = /foo/ig;

re.flags;               // "gi"

It’s a small nuance, but the ES6 specification calls for the expression’s flags to be listed in this order: "gimuy", regardless of what order the original pattern was specified with. That’s the reason for the difference between /ig and "gi".

No, the order of flags specified or listed doesn’t matter.

Another tweak from ES6 is that the RegExp(..) constructor is now flags-aware if you pass it an existing regular expression:

var re1 = /foo*/y;
re1.source;                         // "foo*"
re1.flags;                          // "y"

var re2 = new RegExp( re1 );
re2.source;                         // "foo*"
re2.flags;                          // "y"

var re3 = new RegExp( re1, "ig" );
re3.source;                         // "foo*"
re3.flags;                          // "gi"

Prior to ES6, the re3 construction would throw an error, but as of ES6 you can override the flags when duplicating.

29.10 Number Literal Extensions

Prior to ES5, number literals looked like the following – the octal form was not officially specified, only allowed as an extension that browsers had come to de facto agreement on:

var dec = 42,
    oct = 052,
    hex = 0x2a;

Note: Though you are specifying a number in different bases, the number’s mathematical value is what is stored, and the default output interpretation is always base-10. The three variables in the previous snippet all have the value 42 stored in them.

To further illustrate that 052 was a nonstandard form extension, consider:

Number( "42" );             // 42
Number( "052" );            // 52
Number( "0x2a" );           // 42

ES5 continued to permit the browser-extended octal form (including such inconsistencies), except that in strict mode, the octal literal (052) form is disallowed. This restriction was done mainly because many developers had the habit (from other languages) of seemingly innocuously prefixing otherwise base-10 numbers with 0’s for code alignment purposes, and then running into the accidental fact that they’d changed the number value entirely!

ES6 continues the legacy of changes/variations to how number literals outside base-10 numbers can be represented. There’s now an official octal form, an amended hexadecimal form, and a brand-new binary form. For web compatibility reasons, the old octal 052 form will continue to be legal (though unspecified) in non-strict mode, but should really never be used anymore.

Here are the new ES6 number literal forms:

var dec = 42,
    oct = 0o52,         // or `0O52` :(
    hex = 0x2a,         // or `0X2a` :/
    bin = 0b101010;     // or `0B101010` :/

The only decimal form allowed is base-10. Octal, hexadecimal, and binary are all integer forms.

And the string representations of these forms are all able to be coerced/converted to their number equivalent:

Number( "42" );         // 42
Number( "0o52" );       // 42
Number( "0x2a" );       // 42
Number( "0b101010" );   // 42

Though not strictly new to ES6, it’s a little-known fact that you can actually go the opposite direction of conversion (well, sort of):

var a = 42;

a.toString();           // "42" -- also `a.toString( 10 )`
a.toString( 8 );        // "52"
a.toString( 16 );       // "2a"
a.toString( 2 );        // "101010"

In fact, you can represent a number this way in any base from 2 to 36, though it’d be rare that you’d go outside the standard bases: 2, 8, 10, and 16.
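Completing the round trip, parseInt(..) (which also predates ES6) converts a string in an arbitrary radix back into a number:

```javascript
// string-in-some-base back to a number value
parseInt( "52", 8 );        // 42
parseInt( "2a", 16 );       // 42
parseInt( "101010", 2 );    // 42
```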

29.11 Unicode

Let me just say that this section is not an exhaustive everything-you-ever-wanted-to-know-about-Unicode resource. I want to cover what you need to know that’s changing for Unicode in ES6, but we won’t go much deeper than that. Mathias Bynens (http://twitter.com/mathias) has written/spoken extensively and brilliantly about JS and Unicode (see https://mathiasbynens.be/notes/javascript-unicode and http://fluentconf.com/javascript-html-2015/public/content/2015/02/18-javascript-loves-unicode).

The Unicode characters that range from 0x0000 to 0xFFFF contain all the standard printed characters (in various languages) that you’re likely to have seen or interacted with. This group of characters is called the Basic Multilingual Plane (BMP). The BMP even contains fun symbols like this cool snowman: ☃ (U+2603).

There are lots of other extended Unicode characters beyond this BMP set, which range up to 0x10FFFF. These symbols are often referred to as astral symbols, as that’s the name given to the set of 16 planes (e.g., layers/groupings) of characters beyond the BMP. Examples of astral symbols include 𝄞 (U+1D11E) and 💩 (U+1F4A9).

Prior to ES6, JavaScript strings could specify Unicode characters using Unicode escaping, such as:

var snowman = "\u2603";
console.log( snowman );         // "☃"

However, the \uXXXX Unicode escaping only supports four hexadecimal characters, so you can only represent the BMP set of characters in this way. To represent an astral character using Unicode escaping prior to ES6, you need to use a surrogate pair – basically two specially calculated Unicode-escaped characters side by side, which JS interprets together as a single astral character:

var gclef = "\uD834\uDD1E";
console.log( gclef );           // "𝄞"

As of ES6, we now have a new form for Unicode escaping (in strings and regular expressions), called Unicode code point escaping:

var gclef = "\u{1D11E}";
console.log( gclef );           // "𝄞"

As you can see, the difference is the presence of the { } in the escape sequence, which allows it to contain any number of hexadecimal characters. Because you only need six to represent the highest possible code point value in Unicode (i.e., 0x10FFFF), this is sufficient.

29.11.1 Unicode-Aware String Operations

By default, JavaScript string operations and methods are not sensitive to astral symbols in string values. So, they treat each BMP character individually, even the two surrogate halves that make up an otherwise single astral character. Consider:

var snowman = "☃";
snowman.length;                 // 1

var gclef = "𝄞";
gclef.length;                   // 2

So, how do we accurately calculate the length of such a string? In this scenario, the following trick will work:

var gclef = "𝄞";

[...gclef].length;              // 1
Array.from( gclef ).length;     // 1

Recall from the “for..of Loops” section earlier in this chapter that ES6 strings have built-in iterators. This iterator happens to be Unicode-aware, meaning it will automatically output an astral symbol as a single value. We take advantage of that using the ... spread operator in an array literal, which creates an array of the string’s symbols. Then we just inspect the length of that resultant array. ES6’s Array.from(..) does basically the same thing as [...XYZ], but we’ll cover that utility in detail in Chapter 6.

Warning: It should be noted that constructing and exhausting an iterator just to get the length of a string is quite expensive on performance, relatively speaking, compared to what a theoretically optimized native utility/property would do.

Unfortunately, the full answer is not as simple or straightforward. In addition to the surrogate pairs (which the string iterator takes care of), there are special Unicode code points that behave in other special ways, which is much harder to account for. For example, there’s a set of code points that modify the previous adjacent character, known as Combining Diacritical Marks.

Consider these two string outputs:

console.log( s1 );              // "é"
console.log( s2 );              // "é"

They look the same, but they’re not! Here’s how we created s1 and s2:

var s1 = "\xE9",
    s2 = "e\u0301";

As you can probably guess, our previous length trick doesn’t work with s2:

[...s1].length;                 // 1
[...s2].length;                 // 2

So what can we do? In this case, we can perform a Unicode normalization on the value before inquiring about its length, using the ES6 String#normalize(..) utility (which we’ll cover more in Chapter 6):

var s1 = "\xE9",
    s2 = "e\u0301";

s1.normalize().length;          // 1
s2.normalize().length;          // 1

s1 === s2;                      // false
s1 === s2.normalize();          // true

Essentially, normalize(..) takes a sequence like "e\u0301" and normalizes it to "\xE9". Normalization can even combine multiple adjacent combining marks if there’s a suitable Unicode character they combine to:

var s1 = "o\u0302\u0300",
    s2 = s1.normalize(),
    s3 = "ồ";

s1.length;                      // 3
s2.length;                      // 1
s3.length;                      // 1

s2 === s3;                      // true

Unfortunately, normalization isn’t fully perfect here, either. If you have multiple combining marks modifying a single character, you may not get the length count you’d expect, because there may not be a single defined normalized character that represents the combination of all the marks. For example:

var s1 = "e\u0301\u0330";

console.log( s1 );              // "ḛ́"

s1.normalize().length;          // 2

The further you go down this rabbit hole, the more you realize that it’s difficult to get one precise definition for “length.” What we see visually rendered as a single character – more precisely called a grapheme – doesn’t always strictly relate to a single “character” in the program processing sense.

Tip: If you want to see just how deep this rabbit hole goes, check out the “Grapheme Cluster Boundaries” algorithm (http://www.Unicode.org/reports/tr29/#Grapheme_Cluster_Boundaries).

29.11.2 Character Positioning

Similar to length complications, what does it actually mean to ask, “what is the character at position 2?” The naive pre-ES6 answer comes from charAt(..), which will not respect the atomicity of an astral character, nor will it take into account combining marks.

Consider:

var s1 = "abc\u0301d",
    s2 = "ab\u0107d",
    s3 = "ab\u{1d49e}d";

console.log( s1 );              // "abćd"
console.log( s2 );              // "abćd"
console.log( s3 );              // "ab𝒞d"

s1.charAt( 2 );                 // "c"
s2.charAt( 2 );                 // "ć"
s3.charAt( 2 );                 // "" <-- unprintable surrogate
s3.charAt( 3 );                 // "" <-- unprintable surrogate

So, is ES6 giving us a Unicode-aware version of charAt(..)? Unfortunately, no. At the time of this writing, there’s a proposal for such a utility that’s under consideration for post-ES6.

But with what we explored in the previous section (and of course with the limitations noted thereof!), we can hack an ES6 answer:

var s1 = "abc\u0301d",
    s2 = "ab\u0107d",
    s3 = "ab\u{1d49e}d";

[...s1.normalize()][2];         // "ć"
[...s2.normalize()][2];         // "ć"
[...s3.normalize()][2];         // "𝒞"

Warning: Reminder of an earlier warning: constructing and exhausting an iterator each time you want to get at a single character is… not very ideal, performance-wise. Let’s hope we get a built-in and optimized utility for this soon, post-ES6.

What about a Unicode-aware version of the charCodeAt(..) utility? ES6 gives us codePointAt(..):

var s1 = "abc\u0301d",
    s2 = "ab\u0107d",
    s3 = "ab\u{1d49e}d";

s1.normalize().codePointAt( 2 ).toString( 16 );
// "107"

s2.normalize().codePointAt( 2 ).toString( 16 );
// "107"

s3.normalize().codePointAt( 2 ).toString( 16 );
// "1d49e"

What about the other direction? A Unicode-aware version of String.fromCharCode(..) is ES6’s String.fromCodePoint(..):

String.fromCodePoint( 0x107 );      // "ć"

String.fromCodePoint( 0x1d49e );    // "𝒞"

So wait, can we just combine String.fromCodePoint(..) and codePointAt(..) to get a better version of a Unicode-aware charAt(..) from earlier? Yep!

var s1 = "abc\u0301d",
    s2 = "ab\u0107d",
    s3 = "ab\u{1d49e}d";

String.fromCodePoint( s1.normalize().codePointAt( 2 ) );
// "ć"

String.fromCodePoint( s2.normalize().codePointAt( 2 ) );
// "ć"

String.fromCodePoint( s3.normalize().codePointAt( 2 ) );
// "𝒞"

There are quite a few other string methods we haven’t addressed here, including toUpperCase(), toLowerCase(), substring(..), indexOf(..), slice(..), and a dozen others. None of these have been changed or augmented for full Unicode awareness, so you should be very careful – probably just avoid them! – when working with strings containing astral symbols.
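For instance, here’s a sketch of how slice(..) and indexOf(..), which count 16-bit code units, misbehave around an astral symbol:

```javascript
var s = "ab𝒞d";

// `slice(..)` counts code units, so it can cut the astral
// symbol's surrogate pair in half
s.slice( 1, 3 );        // "b\ud835" -- ends in a lone surrogate

// `indexOf(..)` positions are code-unit offsets, too
s.indexOf( "d" );       // 4 -- not 3, as symbol counting would suggest
```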

There are also several string methods that use regular expressions for their behavior, like replace(..) and match(..). Thankfully, ES6 brings Unicode awareness to regular expressions, as we covered in “Unicode Flag” earlier in this chapter.

OK, there we have it! JavaScript’s Unicode string support is significantly improved over pre-ES6 (though still not perfect) with the various additions we’ve just covered.

29.11.3 Unicode Identifier Names

Unicode can also be used in identifier names (variables, properties, etc.). Prior to ES6, you could do this with Unicode-escapes, like:

var \u03A9 = 42;

// same as: var Ω = 42;

As of ES6, you can also use the earlier explained code point escape syntax:

var \u{2B400} = 42;

// same as: var 𫐀 = 42;

There’s a complex set of rules around exactly which Unicode characters are allowed. Furthermore, some are allowed only if they’re not the first character of the identifier name.

Note: Mathias Bynens has a great post (https://mathiasbynens.be/notes/javascript-identifiers-es6) on all the nitty-gritty details.

The reasons for using such unusual characters in identifier names are rather rare and academic. You typically won’t be best served by writing code that relies on these esoteric capabilities.

29.12 Symbols

With ES6, for the first time in quite a while, a new primitive type has been added to JavaScript: the symbol. Unlike the other primitive types, however, symbols don’t have a literal form.

Here’s how you create a symbol:

var sym = Symbol( "some optional description" );

typeof sym;     // "symbol"

Some things to note:

  • You cannot and should not use new with Symbol(..). It’s not a constructor, nor are you producing an object.
  • The parameter passed to Symbol(..) is optional. If passed, it should be a string that gives a friendly description for the symbol’s purpose.
  • The typeof output is a new value ("symbol") that is the primary way to identify a symbol.

The description, if provided, is solely used for the stringification representation of the symbol:

sym.toString();     // "Symbol(some optional description)"

Similar to how primitive string values are not instances of String, symbols are also not instances of Symbol. If, for some reason, you want to construct a boxed wrapper object form of a symbol value, you can do the following:

sym instanceof Symbol;      // false

var symObj = Object( sym );
symObj instanceof Symbol;   // true

symObj.valueOf() === sym;   // true

Note: symObj in this snippet is interchangeable with sym; either form can be used in all places symbols are utilized. There’s not much reason to use the boxed wrapper object form (symObj) instead of the primitive form (sym). Keeping with similar advice for other primitives, it’s probably best to prefer sym over symObj.

The internal value of a symbol itself – referred to as its name – is hidden from the code and cannot be obtained. You can think of this symbol value as an automatically generated, unique (within your application) string value.

But if the value is hidden and unobtainable, what’s the point of having a symbol at all?

The main point of a symbol is to create a string-like value that can’t collide with any other value. So, for example, consider using a symbol as a constant representing an event name:

const EVT_LOGIN = Symbol( "event.login" );

You’d then use EVT_LOGIN in place of a generic string literal like "event.login":

evthub.listen( EVT_LOGIN, function(data){
    // ..
} );

The benefit here is that EVT_LOGIN holds a value that cannot be duplicated (accidentally or otherwise) by any other value, so it is impossible for there to be any confusion of which event is being dispatched or handled.

Note: Under the covers, the evthub utility assumed in the previous snippet would almost certainly be using the symbol value from the EVT_LOGIN argument directly as the property/key in some internal object (hash) that tracks event handlers. If evthub instead needed to use the symbol value as a real string, it would need to explicitly coerce with String(..) or toString(), as implicit string coercion of symbols is not allowed.
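Both points – the guaranteed uniqueness and the coercion restriction – are easy to verify:

```javascript
var s1 = Symbol( "event.login" ),
    s2 = Symbol( "event.login" );

// same description text, but never the same value
s1 === s2;                  // false

// explicit coercion works...
String( s1 );               // "Symbol(event.login)"

// ...but implicit coercion throws a TypeError
try {
    "evt: " + s1;
}
catch (err) {
    console.log( err.name );    // "TypeError"
}
```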

You may use a symbol directly as a property name/key in an object, such as a special property that you want to treat as hidden or meta in usage. It’s important to know that although you intend to treat it as such, it is not actually a hidden or untouchable property.

Consider this module that implements the singleton pattern behavior – that is, it only allows itself to be created once:

const INSTANCE = Symbol( "instance" );

function HappyFace() {
    if (HappyFace[INSTANCE]) return HappyFace[INSTANCE];

    function smile() { .. }

    return HappyFace[INSTANCE] = {
        smile: smile
    };
}

var me = HappyFace(),
    you = HappyFace();

me === you;         // true

The INSTANCE symbol value here is a special, almost hidden, meta-like property stored statically on the HappyFace() function object.

It could alternatively have been a plain old property like __instance, and the behavior would have been identical. The usage of a symbol simply improves the metaprogramming style, keeping this INSTANCE property set apart from any other normal properties.

29.12.1 Symbol Registry

One mild downside to using symbols as in the last few examples is that the EVT_LOGIN and INSTANCE variables had to be stored in an outer scope (perhaps even the global scope), or otherwise somehow stored in a publicly available location, so that all parts of the code that need to use the symbols can access them.

To aid in organizing code with access to these symbols, you can create symbol values with the global symbol registry. For example:

const EVT_LOGIN = Symbol.for( "event.login" );

console.log( EVT_LOGIN );       // Symbol(event.login)

And:

function HappyFace() {
    const INSTANCE = Symbol.for( "instance" );

    if (HappyFace[INSTANCE]) return HappyFace[INSTANCE];

    // ..

    return HappyFace[INSTANCE] = { .. };
}

Symbol.for(..) looks in the global symbol registry to see if a symbol is already stored with the provided description text, and returns it if so. If not, it creates one to return. In other words, the global symbol registry treats symbol values, by description text, as singletons themselves.

But that also means that any part of your application can retrieve the symbol from the registry using Symbol.for(..), as long as the matching description name is used.

Ironically, symbols are basically intended to replace the use of magic strings (arbitrary string values given special meaning) in your application. But you use precisely such magic description strings to uniquely identify/locate them in the global symbol registry!

To avoid accidental collisions, you’ll probably want to make your symbol descriptions quite unique. One easy way of doing that is to include prefix/context/namespacing information in them.

For example, consider a utility such as the following:

function extractValues(str) {
    var key = Symbol.for( "extractValues.parse" ),
        re = extractValues[key] ||
            /[^=&]+?=([^&]+?)(?=&|$)/g,
        values = [], match;

    while (match = re.exec( str )) {
        values.push( match[1] );
    }

    return values;
}

We use the magic string value "extractValues.parse" because it’s quite unlikely that any other symbol in the registry would ever collide with that description.
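To see the default pattern at work, here's the utility pulling values out of a simple query-string-like input (the input string is just illustrative):

```javascript
function extractValues(str) {
    var key = Symbol.for( "extractValues.parse" ),
        re = extractValues[key] ||
            /[^=&]+?=([^&]+?)(?=&|$)/g,
        values = [], match;

    while (match = re.exec( str )) {
        values.push( match[1] );
    }

    return values;
}

extractValues( "foo=42&bar=hello" );    // [ "42", "hello" ]
```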

If a user of this utility wants to override the parsing regular expression, they can also use the symbol registry:

extractValues[Symbol.for( "extractValues.parse" )] =
    /..some pattern../g;

extractValues( "..some string.." );

Aside from the assistance the symbol registry provides in globally storing these values, everything we’re seeing here could have been done by just actually using the magic string "extractValues.parse" as the key, rather than the symbol. The improvements exist at the metaprogramming level more than the functional level.

You may have occasion to use a symbol value that has been stored in the registry to look up what description text (key) it’s stored under. For example, you may need to signal to another part of your application how to locate a symbol in the registry because you cannot pass the symbol value itself.

You can retrieve a registered symbol’s description text (key) using Symbol.keyFor(..):

var s = Symbol.for( "something cool" );

var desc = Symbol.keyFor( s );
console.log( desc );            // "something cool"

// get the symbol from the registry again
var s2 = Symbol.for( desc );

s2 === s;                       // true

29.12.2 Symbols as Object Properties

If a symbol is used as a property/key of an object, it’s stored in a special way so that the property will not show up in a normal enumeration of the object’s properties:

var o = {
    foo: 42,
    [ Symbol( "bar" ) ]: "hello world",
    baz: true
};

Object.getOwnPropertyNames( o );    // [ "foo","baz" ]

To retrieve an object’s symbol properties:

Object.getOwnPropertySymbols( o );  // [ Symbol(bar) ]

This makes it clear that a property symbol is not actually hidden or inaccessible, as you can always see it in the Object.getOwnPropertySymbols(..) list.
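Symbol-keyed properties are likewise skipped by other common enumerations, such as Object.keys(..) and JSON.stringify(..), though direct access and Reflect.ownKeys(..) still reveal them:

```javascript
var bar = Symbol( "bar" );

var o = {
    foo: 42,
    [bar]: "hello world",
    baz: true
};

Object.keys( o );           // [ "foo", "baz" ]
JSON.stringify( o );        // '{"foo":42,"baz":true}'

o[bar];                     // "hello world"
Reflect.ownKeys( o );       // [ "foo", "baz", Symbol(bar) ]
```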

29.12.2.1 Built-In Symbols

ES6 comes with a number of predefined built-in symbols that expose various meta behaviors on JavaScript object values. However, these symbols are not registered in the global symbol registry, as one might expect.

Instead, they’re stored as properties on the Symbol function object. For example, in the “for..of” section earlier in this chapter, we introduced the Symbol.iterator value:

var a = [1,2,3];

a[Symbol.iterator];         // native function

The specification uses the @@ prefix notation to refer to the built-in symbols, the most common ones being: @@iterator, @@toStringTag, @@toPrimitive. Several others are defined as well, though they probably won’t be used as often.

Note: See “Well Known Symbols” in Chapter 7 for detailed information about how these built-in symbols are used for meta programming purposes.

29.13 Review

ES6 adds a heap of new syntax forms to JavaScript, so there’s plenty to learn!

Most of these are designed to ease the pain points of common programming idioms, such as setting default values to function parameters and gathering the “rest” of the parameters into an array. Destructuring is a powerful tool for more concisely expressing assignments of values from arrays and nested objects.

While features like => arrow functions appear to also be all about shorter and nicer-looking syntax, they actually have very specific behaviors that you should intentionally use only in appropriate situations.

Expanded Unicode support, new tricks for regular expressions, and even a new primitive symbol type round out the syntactic evolution of ES6.

30 You Don’t Know JS: ES6 & Beyond

31 Chapter 3: Organization

It’s one thing to write JS code, but it’s another to properly organize it. Utilizing common patterns for organization and reuse goes a long way to improving the readability and understandability of your code. Remember: code is at least as much about communicating to other developers as it is about feeding the computer instructions.

ES6 has several important features that help significantly improve these patterns, including: iterators, generators, modules, and classes.

31.1 Iterators

An iterator is a structured pattern for pulling information from a source in one-at-a-time fashion. This pattern has been around in programming for a long time. And to be sure, JS developers have been ad hoc designing and implementing iterators in JS programs since before anyone can remember, so it’s not at all a new topic.

What ES6 has done is introduce an implicit standardized interface for iterators. Many of the built-in data structures in JavaScript will now expose an iterator implementing this standard. And you can also construct your own iterators adhering to the same standard, for maximal interoperability.

Iterators are a way of organizing ordered, sequential, pull-based consumption of data.

For example, you may implement a utility that produces a new unique identifier each time it’s requested. Or you may produce an infinite series of values that rotate through a fixed list, in round-robin fashion. Or you could attach an iterator to a database query result to pull out new rows one at a time.

Although they have not commonly been used in JS in such a manner, iterators can also be thought of as controlling behavior one step at a time. This can be illustrated quite clearly when considering generators (see “Generators” later in this chapter), though you can certainly do the same without generators.
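As a minimal sketch of that unique-identifier idea (the `uniqueIDs` name and shape here are hypothetical), all it takes is an object with a next() method that never reports completion:

```javascript
var uniqueIDs = {
    // internal counter state (hypothetical sketch)
    lastID: 0,

    // conforms to the iterator pattern: a `next()` method
    // returning `{ value, done }` results
    next() {
        return { value: ++this.lastID, done: false };
    }
};

uniqueIDs.next();       // { value: 1, done: false }
uniqueIDs.next();       // { value: 2, done: false }
```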

31.1.1 Interfaces

At the time of this writing, ES6 section 25.1.1.2 (https://people.mozilla.org/~jorendorff/es6-draft.html#sec-iterator-interface) details the Iterator interface as having the following requirement:

Iterator [required]
    next() {method}: retrieves next IteratorResult

There are two optional members that some iterators are extended with:

Iterator [optional]
    return() {method}: stops iterator and returns IteratorResult
    throw() {method}: signals error and returns IteratorResult

The IteratorResult interface is specified as:

IteratorResult
    value {property}: current iteration value or final return value
        (optional if `undefined`)
    done {property}: boolean, indicates completion status

Note: I call these interfaces implicit not because they’re not explicitly called out in the specification – they are! – but because they’re not exposed as direct objects accessible to code. JavaScript does not, in ES6, support any notion of “interfaces,” so adherence for your own code is purely conventional. However, wherever JS expects an iterator – a for..of loop, for instance – what you provide must adhere to these interfaces or the code will fail.

There’s also an Iterable interface, which describes objects that must be able to produce iterators:

Iterable
    @@iterator() {method}: produces an Iterator

If you recall from “Built-In Symbols” in Chapter 2, @@iterator is the special built-in symbol representing the method that can produce iterator(s) for the object.

31.1.1.1 IteratorResult

The IteratorResult interface specifies that the return value from any iterator operation will be an object of the form:

{ value: .. , done: true / false }

Built-in iterators will always return values of this form, but more properties are, of course, allowed to be present on the return value, as necessary.

For example, a custom iterator may add additional metadata to the result object (e.g., where the data came from, how long it took to retrieve, cache expiration length, frequency for the appropriate next request, etc.).

Note: Technically, value is optional if it would otherwise be considered absent or unset, such as in the case of the value undefined. Because accessing res.value will produce undefined whether it’s present with that value or absent entirely, the presence/absence of the property is more an implementation detail or an optimization (or both), rather than a functional issue.

31.1.2 next() Iteration

Let’s look at an array, which is an iterable, and the iterator it can produce to consume its values:

var arr = [1,2,3];

var it = arr[Symbol.iterator]();

it.next();      // { value: 1, done: false }
it.next();      // { value: 2, done: false }
it.next();      // { value: 3, done: false }

it.next();      // { value: undefined, done: true }

Each time the method located at Symbol.iterator (see Chapters 2 and 7) is invoked on this arr value, it will produce a new fresh iterator. Most structures will do the same, including all the built-in data structures in JS.

However, a structure like an event queue consumer might only ever produce a single iterator (singleton pattern). Or a structure might only allow one unique iterator at a time, requiring the current one to be completed before a new one can be created.
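As a sketch of that singleton notion (the `queue` structure here is hypothetical), a producer can lazily create its one iterator and then always hand back that same one:

```javascript
var queue = {
    items: [ "a", "b" ],
    it: null,
    [Symbol.iterator]() {
        var self = this;

        // lazily create the one-and-only iterator, then always
        // hand back that same (singleton) iterator
        if (!self.it) {
            self.it = {
                [Symbol.iterator]() { return this; },
                next() {
                    return self.items.length > 0 ?
                        { value: self.items.shift(), done: false } :
                        { value: undefined, done: true };
                }
            };
        }
        return self.it;
    }
};

queue[Symbol.iterator]() === queue[Symbol.iterator]();      // true
```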

The it iterator in the previous snippet doesn’t report done: true when you receive the 3 value. You have to call next() again, in essence going beyond the end of the array’s values, to get the complete signal done: true. It may not be clear why until later in this section, but that design decision will typically be considered a best practice.

Primitive string values are also iterables by default:

var greeting = "hello world";

var it = greeting[Symbol.iterator]();

it.next();      // { value: "h", done: false }
it.next();      // { value: "e", done: false }
..

Note: Technically, the primitive value itself isn’t iterable, but thanks to “boxing”, "hello world" is coerced/converted to its String object wrapper form, which is an iterable. See the Types & Grammar title of this series for more information.

ES6 also includes several new data structures, called collections (see Chapter 5). These collections are not only iterables themselves, but they also provide API method(s) to generate an iterator, such as:

var m = new Map();
m.set( "foo", 42 );
m.set( { cool: true }, "hello world" );

var it1 = m[Symbol.iterator]();
var it2 = m.entries();

it1.next();     // { value: [ "foo", 42 ], done: false }
it2.next();     // { value: [ "foo", 42 ], done: false }
..

The next(..) method of an iterator can optionally take one or more arguments. The built-in iterators mostly do not exercise this capability, though a generator’s iterator definitely does (see “Generators” later in this chapter).

By general convention, including all the built-in iterators, calling next(..) on an iterator that’s already been exhausted is not an error, but will simply continue to return the result { value: undefined, done: true }.
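For example, with a built-in array iterator:

```javascript
var it = [1][Symbol.iterator]();

it.next();      // { value: 1, done: false }
it.next();      // { value: undefined, done: true }
it.next();      // { value: undefined, done: true } -- no error, same result
```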

31.1.3 Optional: return(..) and throw(..)

The optional methods on the iterator interface – return(..) and throw(..) – are not implemented on most of the built-in iterators. However, they definitely do mean something in the context of generators, so see “Generators” for more specific information.

return(..) is defined as sending a signal to an iterator that the consuming code is complete and will not be pulling any more values from it. This signal can be used to notify the producer (the iterator responding to next(..) calls) to perform any cleanup it may need to do, such as releasing/closing network, database, or file handle resources.

If an iterator has a return(..) present and any condition occurs that can automatically be interpreted as abnormal or early termination of consuming the iterator, return(..) will automatically be called. You can call return(..) manually as well.

return(..) will return an IteratorResult object just like next(..) does. In general, the optional value you send to return(..) would be sent back as value in this IteratorResult, though there are nuanced cases where that might not be true.

throw(..) is used to signal an exception/error to an iterator, which possibly may be used differently by the iterator than the completion signal implied by return(..). It does not necessarily imply a complete stop of the iterator as return(..) generally does.

For example, with generator iterators, throw(..) actually injects a thrown exception into the generator’s paused execution context, which can be caught with a try..catch. An uncaught throw(..) exception would end up abnormally aborting the generator’s iterator.

Note: By general convention, an iterator should not produce any more results after having called return(..) or throw(..).

31.1.4 Iterator Loop

As we covered in the “for..of” section in Chapter 2, the ES6 for..of loop directly consumes a conforming iterable.

If an iterator is also an iterable, it can be used directly with the for..of loop. You make an iterator an iterable by giving it a Symbol.iterator method that simply returns the iterator itself:

var it = {
    // make the `it` iterator an iterable
    [Symbol.iterator]() { return this; },

    next() { .. },
    ..
};

it[Symbol.iterator]() === it;       // true

Now we can consume the it iterator with a for..of loop:

for (var v of it) {
    console.log( v );
}

To fully understand how such a loop works, recall the for equivalent of a for..of loop from Chapter 2:

for (var v, res; (res = it.next()) && !res.done; ) {
    v = res.value;
    console.log( v );
}

If you look closely, you’ll see that it.next() is called before each iteration, and then res.done is consulted. If res.done is true, the expression evaluates to false and the iteration doesn’t occur.

Recall earlier that we suggested iterators should in general not return done: true along with the final intended value from the iterator. Now you can see why.

If an iterator returned { done: true, value: 42 }, the for..of loop would completely discard the 42 value and it’d be lost. For this reason, assuming that your iterator may be consumed by patterns like the for..of loop or its manual for equivalent, you should probably wait to return done: true for signaling completion until after you’ve already returned all relevant iteration values.

Warning: You can, of course, intentionally design your iterator to return some relevant value at the same time as returning done: true. But don’t do this unless you’ve documented that as the case, and thus implicitly forced consumers of your iterator to use a different pattern for iteration than is implied by for..of or its manual equivalent we depicted.

31.1.5 Custom Iterators

In addition to the standard built-in iterators, you can make your own! All it takes to make them interoperate with ES6’s consumption facilities (e.g., the for..of loop and the ... operator) is to adhere to the proper interface(s).

Let’s try constructing an iterator that produces the infinite series of numbers in the Fibonacci sequence:

var Fib = {
    [Symbol.iterator]() {
        var n1 = 1, n2 = 1;

        return {
            // make the iterator an iterable
            [Symbol.iterator]() { return this; },

            next() {
                var current = n2;
                n2 = n1;
                n1 = n1 + current;
                return { value: current, done: false };
            },

            return(v) {
                console.log(
                    "Fibonacci sequence abandoned."
                );
                return { value: v, done: true };
            }
        };
    }
};

for (var v of Fib) {
    console.log( v );

    if (v > 50) break;
}
// 1 1 2 3 5 8 13 21 34 55
// Fibonacci sequence abandoned.

Warning: If we hadn’t inserted the break condition, this for..of loop would have run forever, which is probably not the desired result in terms of breaking your program!

The Fib[Symbol.iterator]() method when called returns the iterator object with next() and return(..) methods on it. State is maintained via n1 and n2 variables, which are kept by the closure.

Let’s next consider an iterator that is designed to run through a series (aka a queue) of actions, one item at a time:

var tasks = {
    [Symbol.iterator]() {
        var steps = this.actions.slice();

        return {
            // make the iterator an iterable
            [Symbol.iterator]() { return this; },

            next(...args) {
                if (steps.length > 0) {
                    let res = steps.shift()( ...args );
                    return { value: res, done: false };
                }
                else {
                    return { done: true };
                }
            },

            return(v) {
                steps.length = 0;
                return { value: v, done: true };
            }
        };
    },
    actions: []
};

The iterator on tasks steps through functions found in the actions array property, if any, and executes them one at a time, passing in whatever arguments you pass to next(..), and returning any return value to you in the standard IteratorResult object.

Here’s how we could use this tasks queue:

tasks.actions.push(
    function step1(x){
        console.log( "step 1:", x );
        return x * 2;
    },
    function step2(x,y){
        console.log( "step 2:", x, y );
        return x + (y * 2);
    },
    function step3(x,y,z){
        console.log( "step 3:", x, y, z );
        return (x * y) + z;
    }
);

var it = tasks[Symbol.iterator]();

it.next( 10 );          // step 1: 10
                        // { value:   20, done: false }

it.next( 20, 50 );      // step 2: 20 50
                        // { value:  120, done: false }

it.next( 20, 50, 120 ); // step 3: 20 50 120
                        // { value: 1120, done: false }

it.next();              // { done: true }

This particular usage reinforces that iterators can be a pattern for organizing functionality, not just data. It’s also reminiscent of what we’ll see with generators in the next section.

You could even get creative and define an iterator that represents meta operations on a single piece of data. For example, we could define an iterator for numbers that by default ranges from 0 up to (or down to, for negative numbers) the number in question.

Consider:

if (!Number.prototype[Symbol.iterator]) {
    Object.defineProperty(
        Number.prototype,
        Symbol.iterator,
        {
            writable: true,
            configurable: true,
            enumerable: false,
            value: function iterator(){
                var i, inc, done = false, top = +this;

                // iterate positively or negatively?
                inc = 1 * (top < 0 ? -1 : 1);

                return {
                    // make the iterator itself an iterable!
                    [Symbol.iterator](){ return this; },

                    next() {
                        if (!done) {
                            // initial iteration always 0
                            if (i == null) {
                                i = 0;
                            }
                            // iterating positively
                            else if (top >= 0) {
                                i = Math.min(top,i + inc);
                            }
                            // iterating negatively
                            else {
                                i = Math.max(top,i + inc);
                            }

                            // done after this iteration?
                            if (i == top) done = true;

                            return { value: i, done: false };
                        }
                        else {
                            return { done: true };
                        }
                    }
                };
            }
        }
    );
}

Now, what tricks does this creativity afford us?

for (var i of 3) {
    console.log( i );
}
// 0 1 2 3

[...-3];                // [0,-1,-2,-3]

Those are some fun tricks, though the practical utility is somewhat debatable. But then again, one might wonder why ES6 didn’t just ship with such a minor but delightful feature easter egg!?

I’d be remiss if I didn’t at least remind you that extending native prototypes as I’m doing in the previous snippet is something you should only do with caution and awareness of potential hazards.

In this case, the chances that you’ll have a collision with other code or even a future JS feature is probably exceedingly low. But just beware of the slight possibility. And document what you’re doing verbosely for posterity’s sake.

Note: I’ve expounded on this particular technique in this blog post (http://blog.getify.com/iterating-es6-numbers/) if you want more details. And this comment (http://blog.getify.com/iterating-es6-numbers/comment-page-1/#comment-535294) even suggests a similar trick but for making string character ranges.

31.1.6 Iterator Consumption

We’ve already shown consuming an iterator item by item with the for..of loop. But there are other ES6 structures that can consume iterators.

Let’s consider the iterator attached to this array (though any iterator we choose would have the following behaviors):

var a = [1,2,3,4,5];

The ... spread operator fully exhausts an iterator. Consider:

function foo(x,y,z,w,p) {
    console.log( x + y + z + w + p );
}

foo( ...a );            // 15

... can also spread an iterator inside an array:

var b = [ 0, ...a, 6 ];
b;                      // [0,1,2,3,4,5,6]

Array destructuring (see “Destructuring” in Chapter 2) can partially or completely (if paired with a ... rest/gather operator) consume an iterator:

var it = a[Symbol.iterator]();

var [x,y] = it;         // take just the first two elements from `it`
var [z, ...w] = it;     // take the third, then the rest all at once

// is `it` fully exhausted? Yep.
it.next();              // { value: undefined, done: true }

x;                      // 1
y;                      // 2
z;                      // 3
w;                      // [4,5]

31.2 Generators

All functions run to completion, right? In other words, once a function starts running, it finishes before anything else can interrupt.

At least that’s how it’s been for the whole history of JavaScript up to this point. As of ES6, a new somewhat exotic form of function is being introduced, called a generator. A generator can pause itself in mid-execution, and can be resumed either right away or at a later time. So it clearly does not hold the run-to-completion guarantee that normal functions do.

Moreover, each pause/resume cycle in mid-execution is an opportunity for two-way message passing, where the generator can return a value, and the controlling code that resumes it can send a value back in.

As with iterators in the previous section, there are multiple ways to think about what a generator is, or rather what it’s most useful for. There’s no one right answer, but we’ll try to consider several angles.

Note: See the Async & Performance title of this series for more information about generators, and also see Chapter 4 of this current title.

31.2.1 Syntax

The generator function is declared with this new syntax:

function *foo() {
    // ..
}

The position of the * is not functionally relevant. The same declaration could be written as any of the following:

function *foo()  { .. }
function* foo()  { .. }
function * foo() { .. }
function*foo()   { .. }
..

The only difference here is stylistic preference. Most other literature seems to prefer function* foo(..) { .. }. I prefer function *foo(..) { .. }, so that’s how I’ll present them for the rest of this title.

My reason is purely didactic in nature. In this text, when referring to a generator function, I will use *foo(..), as opposed to foo(..) for a normal function. I observe that *foo(..) more closely matches the * positioning of function *foo(..) { .. }.

Moreover, as we saw in Chapter 2 with concise methods, there’s a concise generator form in object literals:

var a = {
    *foo() { .. }
};

I would say that with concise generators, *foo() { .. } is rather more natural than * foo() { .. }. So that further argues for matching the consistency with *foo().

Consistency eases understanding and learning.

31.2.1.1 Executing a Generator

Though a generator is declared with *, you still execute it like a normal function:

foo();

You can still pass it arguments, as in:

function *foo(x,y) {
    // ..
}

foo( 5, 10 );

The major difference is that executing a generator, like foo(5,10), doesn’t actually run the code in the generator. Instead, it produces an iterator that will control the generator to execute its code.

We’ll come back to this later in “Iterator Control,” but briefly:

function *foo() {
    // ..
}

var it = foo();

// to start/advance `*foo()`, call
// `it.next(..)`

31.2.1.2 yield

Generators also have a new keyword you can use inside them, to signal the pause point: yield. Consider:

function *foo() {
    var x = 10;
    var y = 20;

    yield;

    var z = x + y;
}

In this *foo() generator, the operations on the first two lines would run at the beginning, then yield would pause the generator. If and when resumed, the last line of *foo() would run. yield can appear any number of times (or not at all, technically!) in a generator.

You can even put yield inside a loop, and it can represent a repeated pause point. In fact, a loop that never completes just means a generator that never completes, which is completely valid, and sometimes entirely what you need.

yield is not just a pause point. It’s an expression that sends out a value when pausing the generator. Here’s a while..true loop in a generator that for each iteration yields a new random number:

function *foo() {
    while (true) {
        yield Math.random();
    }
}

The yield .. expression not only sends a value – yield without a value is the same as yield undefined – but also receives (e.g., is replaced by) the eventual resumption value. Consider:

function *foo() {
    var x = yield 10;
    console.log( x );
}

This generator will first yield out the value 10 when pausing itself. When you resume the generator – using the it.next(..) we referred to earlier – whatever value (if any) you resume with will replace/complete the whole yield 10 expression, meaning that value will be assigned to the x variable.
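Driving that snippet manually makes the two-way message passing concrete (the 42 resumption value is arbitrary):

```javascript
function *foo() {
    var x = yield 10;
    console.log( x );
}

var it = foo();

it.next();          // { value: 10, done: false }
it.next( 42 );      // prints: 42
```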

A yield .. expression can appear anywhere a normal expression can. For example:

function *foo() {
    var arr = [ yield 1, yield 2, yield 3 ];
    console.log( arr, yield 4 );
}

*foo() here has four yield .. expressions. Each yield results in the generator pausing to wait for a resumption value that’s then used in the various expression contexts.
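Driving those four pauses manually shows each yielded value and each resumption value in turn (the "a" through "d" resumption values are arbitrary):

```javascript
function *foo() {
    var arr = [ yield 1, yield 2, yield 3 ];
    console.log( arr, yield 4 );
}

var it = foo();

it.next();          // { value: 1, done: false }
it.next( "a" );     // { value: 2, done: false }
it.next( "b" );     // { value: 3, done: false }
it.next( "c" );     // { value: 4, done: false }
it.next( "d" );     // prints: [ "a", "b", "c" ] d
```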

yield is not technically an operator, though when used like yield 1 it sure looks like it. Because yield can be used all by itself as in var x = yield;, thinking of it as an operator can sometimes be confusing.

Technically, yield .. is of the same “expression precedence” – similar conceptually to operator precedence – as an assignment expression like a = 3. That means yield .. can basically appear anywhere a = 3 can validly appear.

Let’s illustrate the symmetry:

var a, b;

a = 3;                  // valid
b = 2 + a = 3;          // invalid
b = 2 + (a = 3);        // valid

yield 3;                // valid
a = 2 + yield 3;        // invalid
a = 2 + (yield 3);      // valid

Note: If you think about it, it makes a sort of conceptual sense that a yield .. expression would behave similar to an assignment expression. When a paused yield expression is resumed, it’s completed/replaced by the resumption value in a way that’s not terribly dissimilar from being “assigned” that value.

The takeaway: if you need yield .. to appear in a position where an assignment like a = 3 would not itself be allowed, it needs to be wrapped in a ( ).

Because of the low precedence of the yield keyword, almost any expression after a yield .. will be computed first before being sent with yield. Only the ... spread operator and the , comma operator have lower precedence, meaning they’d bind after the yield has been evaluated.

So just like with multiple operators in normal statements, another case where ( ) might be needed is to override (elevate) the low precedence of yield, such as the difference between these expressions:

yield 2 + 3;            // same as `yield (2 + 3)`

(yield 2) + 3;          // `yield 2` first, then `+ 3`
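Here's a quick sketch that runs both forms through a generator's iterator to show the difference:

```javascript
function *foo() {
    var a = yield 2 + 3;        // low precedence: same as `yield (2 + 3)`
    var b = (yield 2) + 3;      // `yield 2` first, then 3 added to the resumption value
    return [ a, b ];
}

var it = foo();

it.next();          // { value: 5, done: false }
it.next( 10 );      // { value: 2, done: false }
it.next( 10 );      // { value: [ 10, 13 ], done: true }
```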

Just like = assignment, yield is also “right-associative,” which means that multiple yield expressions in succession are treated as having been ( .. ) grouped from right to left. So, yield yield yield 3 is treated as yield (yield (yield 3)). A “left-associative” interpretation like ((yield) yield) yield 3 would make no sense.

Just like with operators, it’s a good idea to use ( .. ) grouping, even if not strictly required, to disambiguate your intent if yield is combined with other operators or yields.

Note: See the Types & Grammar title of this series for more information about operator precedence and associativity.

31.2.1.3 yield *

In the same way that the * makes a function declaration into function * generator declaration, a * makes yield into yield *, which is a very different mechanism, called yield delegation. Grammatically, yield *.. will behave the same as a yield .., as discussed in the previous section.

yield * .. requires an iterable; it then invokes that iterable’s iterator, and delegates its own host generator’s control to that iterator until it’s exhausted. Consider:

function *foo() {
    yield *[1,2,3];
}

Note: As with the * position in a generator’s declaration (discussed earlier), the * positioning in yield * expressions is stylistically up to you. Most other literature prefers yield* .., but I prefer yield *.., for very symmetrical reasons as already discussed.

The [1,2,3] value produces an iterator that will step through its values, so the *foo() generator will yield those values out as it’s consumed. Another way to illustrate the behavior is in yield delegating to another generator:

function *foo() {
    yield 1;
    yield 2;
    yield 3;
}

function *bar() {
    yield *foo();
}

The iterator produced when *bar() calls *foo() is delegated to via yield *, meaning whatever value(s) *foo() produces will be produced by *bar().

Whereas with yield .. the completion value of the expression comes from resuming the generator with it.next(..), the completion value of the yield *.. expression comes from the return value (if any) from the delegated-to iterator.

Built-in iterators generally don’t have return values, as we covered at the end of the “Iterator Loop” section earlier in this chapter. But if you define your own custom iterator (or generator), you can design it to return a value, which yield *.. would capture:

function *foo() {
    yield 1;
    yield 2;
    yield 3;
    return 4;
}

function *bar() {
    var x = yield *foo();
    console.log( "x:", x );
}

for (var v of bar()) {
    console.log( v );
}
// 1 2 3
// x: 4

While the 1, 2, and 3 values are yielded out of *foo() and then out of *bar(), the 4 value returned from *foo() is the completion value of the yield *foo() expression, which then gets assigned to x.

Because yield * can call another generator (by way of delegating to its iterator), it can also perform a sort of generator recursion by calling itself:

function *foo(x) {
    if (x < 3) {
        x = yield *foo( x + 1 );
    }
    return x * 2;
}

foo( 1 );

The result from foo(1) and then calling the iterator’s next() to run it through its recursive steps will be 24. The first *foo(..) run has x at value 1, which is x < 3. x + 1 is passed recursively to *foo(..), so x is then 2. One more recursive call results in x of 3.

Now, because x < 3 fails, the recursion stops, and return 3 * 2 gives 6 back to the previous call’s yield *.. expression, which is then assigned to x. Another return 6 * 2 returns 12 back to the previous call’s x. Finally 12 * 2, or 24, is returned from the completed run of the *foo(..) generator.

31.2.2 Iterator Control

Earlier, we briefly introduced the concept that generators are controlled by iterators. Let’s fully dig into that now.

Recall the recursive *foo(..) from the previous section. Here’s how we’d run it:

function *foo(x) {
    if (x < 3) {
        x = yield *foo( x + 1 );
    }
    return x * 2;
}

var it = foo( 1 );
it.next();              // { value: 24, done: true }

In this case, the generator doesn’t really ever pause, as there’s no yield .. expression. Instead, yield * just keeps the current iteration step going via the recursive call. So, just one call to the iterator’s next() function fully runs the generator.

Now let’s consider a generator that will have multiple steps and thus multiple produced values:

function *foo() {
    yield 1;
    yield 2;
    yield 3;
}

We already know we can consume an iterator, even one attached to a generator like *foo(), with a for..of loop:

for (var v of foo()) {
    console.log( v );
}
// 1 2 3

Note: The for..of loop requires an iterable. A generator function reference (like foo) by itself is not an iterable; you must execute it with foo() to get the iterator (which is also an iterable, as we explained earlier in this chapter). You could theoretically extend the GeneratorPrototype (the prototype of all generator functions) with a Symbol.iterator function that essentially just does return this(). That would make the foo reference itself an iterable, which means for (var v of foo) { .. } (notice no () on foo) will work.
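Here's a sketch of what that note describes. Patching the shared generator-function prototype is shown purely for illustration; mutating a built-in prototype like this is generally discouraged in real code:

```javascript
function *foo() {
    yield 1;
    yield 2;
    yield 3;
}

// Object.getPrototypeOf(foo) is the prototype shared by all generator
// functions; giving it a Symbol.iterator that just runs the generator
// makes a bare generator function reference usable as an iterable
Object.getPrototypeOf( foo )[Symbol.iterator] = function() {
    return this();
};

var vals = [];
for (var v of foo) {    // notice: no () on foo
    vals.push( v );
}
// vals: [1, 2, 3]
```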

Let’s instead iterate the generator manually:

function *foo() {
    yield 1;
    yield 2;
    yield 3;
}

var it = foo();

it.next();              // { value: 1, done: false }
it.next();              // { value: 2, done: false }
it.next();              // { value: 3, done: false }

it.next();              // { value: undefined, done: true }

If you look closely, there are three yield statements and four next() calls. That may seem like a strange mismatch. In fact, there will always be one more next() call than yield expression, assuming all are evaluated and the generator is fully run to completion.

But if you look at it from the opposite perspective (inside-out instead of outside-in), the matching between yield and next() makes more sense.

Recall that the yield .. expression will be completed by the value you resume the generator with. That means the argument you pass to next(..) completes whatever yield .. expression is currently paused waiting for a completion.

Let’s illustrate this perspective this way:

function *foo() {
    var x = yield 1;
    var y = yield 2;
    var z = yield 3;
    console.log( x, y, z );
}

In this snippet, each yield .. is sending a value out (1, 2, 3), but more directly, it’s pausing the generator to wait for a value. In other words, it’s almost like asking the question, “What value should I use here? I’ll wait to hear back.”

Now, here’s how we control *foo() to start it up:

var it = foo();

it.next();              // { value: 1, done: false }

That first next() call is starting up the generator from its initial paused state, and running it to the first yield. At the moment you call that first next(), there’s no yield .. expression waiting for a completion. If you passed a value to that first next() call, it would currently just be thrown away, because no yield is waiting to receive such a value.

Note: An early proposal for the “beyond ES6” timeframe would let you access a value passed to an initial next(..) call via a separate meta property (see Chapter 7) inside the generator.

Now, let’s answer the currently pending question, “What value should I assign to x?” We’ll answer it by sending a value to the next next(..) call:

it.next( "foo" );       // { value: 2, done: false }

Now, the x will have the value "foo", but we’ve also asked a new question, “What value should I assign to y?” And we answer:

it.next( "bar" );       // { value: 3, done: false }

Answer given, another question asked. Final answer:

it.next( "baz" );       // "foo" "bar" "baz"
                        // { value: undefined, done: true }

Now it should be clearer how each yield .. “question” is answered by the next next(..) call, and so the “extra” next() call we observed is always just the initial one that starts everything going.

Let’s put all those steps together:

var it = foo();

// start up the generator
it.next();              // { value: 1, done: false }

// answer first question
it.next( "foo" );       // { value: 2, done: false }

// answer second question
it.next( "bar" );       // { value: 3, done: false }

// answer third question
it.next( "baz" );       // "foo" "bar" "baz"
                        // { value: undefined, done: true }

You can think of a generator as a producer of values, in which case each iteration is simply producing a value to be consumed.

But in a more general sense, perhaps it’s appropriate to think of generators as controlled, progressive code execution, much like the tasks queue example from the earlier “Custom Iterators” section.

Note: That perspective is exactly the motivation for how we’ll revisit generators in Chapter 4. Specifically, there’s no reason that next(..) has to be called right away after the previous next(..) finishes. While the generator’s inner execution context is paused, the rest of the program continues unblocked, including the ability for asynchronous actions to control when the generator is resumed.
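A minimal sketch of that idea: the generator stays suspended at its yield while the rest of the program runs, and an asynchronous callback (a plain setTimeout here, just for illustration) resumes it later:

```javascript
function *main() {
    var answer = yield "what is the meaning?";
    return answer;
}

var it = main();

// run to the first yield, then pause
var step1 = it.next();      // { value: "what is the meaning?", done: false }

// nothing forces an immediate resume; the generator stays suspended
// while the rest of the program (and the event loop) keeps running
setTimeout( function(){
    it.next( 42 );          // resumes the generator asynchronously
}, 0 );
```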

31.2.3 Early Completion

As we covered earlier in this chapter, the iterator attached to a generator supports the optional return(..) and throw(..) methods. Both of them have the effect of aborting a paused generator immediately.

Consider:

function *foo() {
    yield 1;
    yield 2;
    yield 3;
}

var it = foo();

it.next();              // { value: 1, done: false }

it.return( 42 );        // { value: 42, done: true }

it.next();              // { value: undefined, done: true }

return(x) is kind of like forcing a return x to be processed at exactly that moment, such that you get the specified value right back. Once a generator is completed, either normally or early as shown, it no longer processes any code or returns any values.

In addition to return(..) being callable manually, it’s also called automatically by the ES6 constructs that consume iterators, such as the for..of loop and the ... spread operator, whenever the iteration is terminated early (by a break or an uncaught exception, for example).

The purpose for this capability is so the generator can be notified if the controlling code is no longer going to iterate over it, so that it can perhaps do any cleanup tasks (freeing up resources, resetting status, etc.). Identical to a normal function cleanup pattern, the main way to accomplish this is to use a finally clause:

function *foo() {
    try {
        yield 1;
        yield 2;
        yield 3;
    }
    finally {
        console.log( "cleanup!" );
    }
}

for (var v of foo()) {
    console.log( v );
}
// 1 2 3
// cleanup!

var it = foo();

it.next();              // { value: 1, done: false }
it.return( 42 );        // cleanup!
                        // { value: 42, done: true }

Warning: Do not put a yield statement inside the finally clause! It’s valid and legal, but it’s a really terrible idea. It acts in a sense as deferring the completion of the return(..) call you made, as any yield .. expressions in the finally clause are respected to pause and send messages; you don’t immediately get a completed generator as expected. There’s basically no good reason to opt in to that crazy bad part, so avoid doing so!

In addition to the previous snippet showing how return(..) aborts the generator while still triggering the finally clause, it also demonstrates that a generator produces a whole new iterator each time it’s called. In fact, you can use multiple iterators attached to the same generator concurrently:

function *foo() {
    yield 1;
    yield 2;
    yield 3;
}

var it1 = foo();
it1.next();             // { value: 1, done: false }
it1.next();             // { value: 2, done: false }

var it2 = foo();
it2.next();             // { value: 1, done: false }

it1.next();             // { value: 3, done: false }

it2.next();             // { value: 2, done: false }
it2.next();             // { value: 3, done: false }

it2.next();             // { value: undefined, done: true }
it1.next();             // { value: undefined, done: true }

31.2.3.1 Early Abort

Instead of calling return(..), you can call throw(..). Just like return(x) is essentially injecting a return x into the generator at its current pause point, calling throw(x) is essentially like injecting a throw x at the pause point.

Other than the exception behavior (we cover what that means to try clauses in the next section), throw(..) produces the same sort of early completion that aborts the generator’s run at its current pause point. For example:

function *foo() {
    yield 1;
    yield 2;
    yield 3;
}

var it = foo();

it.next();              // { value: 1, done: false }

try {
    it.throw( "Oops!" );
}
catch (err) {
    console.log( err ); // Exception: Oops!
}

it.next();              // { value: undefined, done: true }

Because throw(..) basically injects a throw .. in place of the paused yield 1 expression of the generator, and nothing handles this exception, it immediately propagates back out to the calling code, which handles it with a try..catch.

Unlike return(..), the iterator’s throw(..) method is never called automatically.

Of course, though not shown in the previous snippet, if a try..finally clause was waiting inside the generator when you call throw(..), the finally clause would be given a chance to complete before the exception is propagated back to the calling code.
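Here's a quick sketch of that interaction, collecting the observable steps in an array (the log variable is just for illustration): the finally clause runs before the injected exception reaches the calling code.

```javascript
var log = [];

function *foo() {
    try {
        yield 1;
    }
    finally {
        // runs before the injected exception propagates out
        log.push( "cleanup!" );
    }
}

var it = foo();

it.next();              // { value: 1, done: false }

try {
    it.throw( "Oops!" );
}
catch (err) {
    log.push( err );
}
// log: [ "cleanup!", "Oops!" ]
```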

31.2.4 Error Handling

As we’ve already hinted, error handling with generators can be expressed with try..catch, which works in both inbound and outbound directions:

function *foo() {
    try {
        yield 1;
    }
    catch (err) {
        console.log( err );
    }

    yield 2;

    throw "Hello!";
}

var it = foo();

it.next();              // { value: 1, done: false }

try {
    it.throw( "Hi!" );  // Hi!
                        // { value: 2, done: false }
    it.next();

    console.log( "never gets here" );
}
catch (err) {
    console.log( err ); // Hello!
}

Errors can also propagate in both directions through yield * delegation:

function *foo() {
    try {
        yield 1;
    }
    catch (err) {
        console.log( err );
    }

    yield 2;

    throw "foo: e2";
}

function *bar() {
    try {
        yield *foo();

        console.log( "never gets here" );
    }
    catch (err) {
        console.log( err );
    }
}

var it = bar();

try {
    it.next();          // { value: 1, done: false }

    it.throw( "e1" );   // e1
                        // { value: 2, done: false }

    it.next();          // foo: e2
                        // { value: undefined, done: true }
}
catch (err) {
    console.log( "never gets here" );
}

it.next();              // { value: undefined, done: true }

When *foo() calls yield 1, the 1 value passes through *bar() untouched, as we’ve already seen.

But what’s most interesting about this snippet is that when *foo() calls throw "foo: e2", this error propagates to *bar() and is immediately caught by *bar()’s try..catch block. The error doesn’t pass through *bar() like the 1 value did.

*bar()’s catch then does a normal output of err ("foo: e2") and then *bar() finishes normally, which is why the { value: undefined, done: true } iterator result comes back from it.next().

If *bar() didn’t have a try..catch around the yield *.. expression, the error would of course propagate all the way out, and on the way through it still would complete (abort) *bar().
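Here's a sketch of that variant, with the try..catch removed from *bar() so the error passes all the way out to the calling code, aborting *bar() on the way through:

```javascript
function *foo() {
    yield 1;
    throw "foo: e2";
}

function *bar() {
    yield *foo();

    // never gets here
}

var it = bar();

it.next();              // { value: 1, done: false }

var caught;
try {
    it.next();          // *foo() throws; nothing in *bar() catches it
}
catch (err) {
    caught = err;       // "foo: e2"
}

var after = it.next();  // { value: undefined, done: true }; *bar() was aborted
```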

31.2.5 Transpiling a Generator

Is it possible to represent a generator’s capabilities prior to ES6? It turns out it is, and there are several great tools that do so, including most notably Facebook’s Regenerator tool (https://facebook.github.io/regenerator/).

But just to better understand generators, let’s try our hand at manually converting. Basically, we’re going to create a simple closure-based state machine.

We’ll keep our source generator really simple:

function *foo() {
    var x = yield 42;
    console.log( x );
}

To start, we’ll need a function called foo() that we can execute, which needs to return an iterator:

function foo() {
    // ..

    return {
        next: function(v) {
            // ..
        }

        // we'll skip `return(..)` and `throw(..)`
    };
}

Now, we need some inner variable to keep track of where we are in the steps of our “generator”’s logic. We’ll call it state. There will be three states: 0 initially, 1 while waiting to fulfill the yield expression, and 2 once the generator is complete.

Each time next(..) is called, we need to process the next step, and then increment state. For convenience, we’ll put each step into a case clause of a switch statement, and we’ll hold that in an inner function called nextState(..) that next(..) can call. Also, because x is a variable across the overall scope of the “generator,” it needs to live outside the nextState(..) function.

Here it is all together (obviously somewhat simplified, to keep the conceptual illustration clearer):

function foo() {
    function nextState(v) {
        switch (state) {
            case 0:
                state++;

                // the `yield` expression
                return 42;
            case 1:
                state++;

                // `yield` expression fulfilled
                x = v;
                console.log( x );

                // the implicit `return`
                return undefined;

            // no need to handle state `2`
        }
    }

    var state = 0, x;

    return {
        next: function(v) {
            var ret = nextState( v );

            return { value: ret, done: (state == 2) };
        }

        // we'll skip `return(..)` and `throw(..)`
    };
}

And finally, let’s test our pre-ES6 “generator”:

var it = foo();

it.next();              // { value: 42, done: false }

it.next( 10 );          // 10
                        // { value: undefined, done: true }

Not bad, huh? Hopefully this exercise solidifies in your mind that generators are actually just simple syntax for state machine logic. That makes them widely applicable.

31.2.6 Generator Uses

So, now that we much more deeply understand how generators work, what are they useful for?

We’ve seen two major patterns:

  • Producing a series of values: This usage can be simple (e.g., random strings or incremented numbers), or it can represent more structured data access (e.g., iterating over rows returned from a database query).

Either way, we use the iterator to control a generator so that some logic can be invoked for each call to next(..). Normal iterators on data structures merely pull values without any controlling logic.

  • Queue of tasks to perform serially: This usage often represents flow control for the steps in an algorithm, where each step requires retrieval of data from some external source. The fulfillment of each piece of data may be immediate, or may be asynchronously delayed.
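For instance, here's a sketch of the first pattern, with a (hypothetical) *numbers() generator producing an open-ended series of values, where each next(..) call invokes the producing logic:

```javascript
function *numbers() {
    var n = 0;
    while (true) {
        n += 3;             // the "logic" run on each `next(..)` call
        yield n;
    }
}

var it = numbers();

var first = it.next().value;    // 3
var second = it.next().value;   // 6
var third = it.next().value;    // 9
```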

From the perspective of the code inside the generator, the details of sync or async at a yield point are entirely opaque. Moreover, these details are intentionally abstracted away, so as not to obscure the natural sequential expression of steps with such implementation complications. Abstraction also means the implementations can be swapped/refactored often without touching the code in the generator at all.

When generators are viewed in light of these uses, they become a lot more than just a different or nicer syntax for a manual state machine. They are a powerful abstraction tool for organizing and controlling orderly production and consumption of data.

31.3 Modules

I don’t think it’s an exaggeration to suggest that the single most important code organization pattern in all of JavaScript is, and always has been, the module. For myself, and I think for a large cross-section of the community, the module pattern drives the vast majority of code.

31.3.1 The Old Way

The traditional module pattern is based on an outer function with inner variables and functions, and a returned “public API” with methods that have closure over the inner data and capabilities. It’s often expressed like this:

function Hello(name) {
    function greeting() {
        console.log( "Hello " + name + "!" );
    }

    // public API
    return {
        greeting: greeting
    };
}

var me = Hello( "Kyle" );
me.greeting();          // Hello Kyle!

This Hello(..) module can produce multiple instances by being called subsequent times. Sometimes, a module is only called for as a singleton (i.e., it just needs one instance), in which case a slight variation on the previous snippet, using an IIFE, is common:

var me = (function Hello(name){
    function greeting() {
        console.log( "Hello " + name + "!" );
    }

    // public API
    return {
        greeting: greeting
    };
})( "Kyle" );

me.greeting();          // Hello Kyle!

This pattern is tried and tested. It’s also flexible enough to have a wide assortment of variations for a number of different scenarios.

One of the most common is the Asynchronous Module Definition (AMD), and another is the Universal Module Definition (UMD). We won’t cover the particulars of these patterns and techniques here, but they’re explained extensively in many places online.

31.3.2 Moving Forward

As of ES6, we no longer need to rely on the enclosing function and closure to provide us with module support. ES6 modules have first class syntactic and functional support.

Before we get into the specific syntax, it’s important to understand some fairly significant conceptual differences with ES6 modules compared to how you may have dealt with modules in the past:

  • ES6 uses file-based modules, meaning one module per file. At this time, there is no standardized way of combining multiple modules into a single file.

That means that if you are going to load ES6 modules directly into a browser web application, you will be loading them individually, not as a large bundle in a single file as has been common in performance optimization efforts.

It’s expected that the contemporaneous advent of HTTP/2 will significantly mitigate any such performance concerns, as it operates on a persistent socket connection and thus can very efficiently load many smaller files in parallel and interleaved with one another.

  • The API of an ES6 module is static. That is, you define statically what all the top-level exports are on your module’s public API, and those cannot be amended later.

Some developers are accustomed to being able to provide dynamic API definitions, where methods can be added/removed/replaced in response to runtime conditions. Either these uses will have to change to fit with ES6 static APIs, or they will have to restrain the dynamic changes to properties/methods of a second-level object.

  • ES6 modules are singletons. That is, there’s only one instance of the module, which maintains its state. Every time you import that module into another module, you get a reference to the one centralized instance. If you want to be able to produce multiple module instances, your module will need to provide some sort of factory to do it.

  • The properties and methods you expose on a module’s public API are not just normal assignments of values or references. They are actual bindings (almost like pointers) to the identifiers in your inner module definition.

In pre-ES6 modules, if you put a property on your public API that holds a primitive value like a number or string, that property assignment was by value-copy, and any internal update of a corresponding variable would be separate and not affect the public copy on the API object.

With ES6, exporting a local private variable, even if it currently holds a primitive string/number/etc., exports a binding to the variable. If the module changes the variable’s value, the external import binding now resolves to that new value.

  • Importing a module is the same thing as statically requesting it to load (if it hasn’t already). If you’re in a browser, that implies a blocking load over the network. If you’re on a server (i.e., Node.js), it’s a blocking load from the filesystem.

However, don’t panic about the performance implications. Because ES6 modules have static definitions, the import requirements can be statically scanned, and loads will happen preemptively, even before you’ve used the module.

ES6 doesn’t actually specify or handle the mechanics of how these load requests work. There’s a separate notion of a Module Loader, where each hosting environment (browser, Node.js, etc.) provides a default Loader appropriate to the environment. The importing of a module uses a string value to represent where to get the module (URL, file path, etc.), but this value is opaque in your program and only meaningful to the Loader itself.

You can define your own custom Loader if you want more fine-grained control than the default Loader affords – which is basically none, as it’s totally hidden from your program’s code.

As you can see, ES6 modules will serve the overall use case of organizing code with encapsulation, controlling public APIs, and referencing dependency imports. But they have a very particular way of doing so, and that may or may not fit very closely with how you’ve already been doing modules for years.

31.3.2.1 CommonJS

There’s a similar, but not fully compatible, module syntax called CommonJS, which is familiar to those in the Node.js ecosystem.

For lack of a more tactful way to say this, in the long run, ES6 modules essentially are bound to supersede all previous formats and standards for modules, even CommonJS, as they are built on syntactic support in the language. This will, in time, inevitably win out as the superior approach, if for no other reason than ubiquity.

We face a fairly long road to get to that point, though. There are literally hundreds of thousands of CommonJS style modules in the server-side JavaScript world, and 10 times that many modules of varying format standards (UMD, AMD, ad hoc) in the browser world. It will take many years for the transitions to make any significant progress.

In the interim, module transpilers/converters will be an absolute necessity. You might as well just get used to that new reality. Whether you author in regular modules, AMD, UMD, CommonJS, or ES6, these tools will have to parse and convert to a format that is suitable for whatever environment your code will run in.

For Node.js, that probably means (for now) that the target is CommonJS. For the browser, it’s probably UMD or AMD. Expect lots of flux on this over the next few years as these tools mature and best practices emerge.

From here on out, my best advice on modules is this: whatever format you’ve been religiously attached to with strong affinity, also develop an appreciation for and understanding of ES6 modules, such as they are, and let your other module tendencies fade. They are the future of modules in JS, even if that reality is a bit of a ways off.

31.3.3 The New Way

The two main new keywords that enable ES6 modules are import and export. There’s lots of nuance to the syntax, so let’s take a deeper look.

Warning: An important detail that’s easy to overlook: both import and export must always appear in the top-level scope of their respective usage. For example, you cannot put either an import or export inside an if conditional; they must appear outside of all blocks and functions.

31.3.3.1 exporting API Members

The export keyword is either put in front of a declaration, or used as an operator (of sorts) with a special list of bindings to export. Consider:

export function foo() {
    // ..
}

export var awesome = 42;

var bar = [1,2,3];
export { bar };

Another way of expressing the same exports:

function foo() {
    // ..
}

var awesome = 42;
var bar = [1,2,3];

export { foo, awesome, bar };

These are all called named exports, as you are in effect exporting the name bindings of the variables/functions/etc.

Anything you don’t label with export stays private inside the scope of the module. That is, although something like var bar = .. looks like it’s declaring at the top-level global scope, the top-level scope is actually the module itself; there is no global scope in modules.

Note: Modules do still have access to window and all the “globals” that hang off it, just not as lexical top-level scope. However, you really should stay away from the globals in your modules if at all possible.

You can also “rename” (aka alias) a module member during named export:

function foo() { .. }

export { foo as bar };

When this module is imported, only the bar member name is available to import; foo stays hidden inside the module.

Module exports are not just normal assignments of values or references, as you’re accustomed to with the = assignment operator. Actually, when you export something, you’re exporting a binding (kinda like a pointer) to that thing (variable, etc.).

Within your module, if you change the value of a variable you already exported a binding to, even if it’s already been imported (see the next section), the imported binding will resolve to the current (updated) value.

Consider:

var awesome = 42;
export { awesome };

// later
awesome = 100;

When this module is imported, regardless of whether that’s before or after the awesome = 100 setting, once that assignment has happened, the imported binding resolves to the 100 value, not 42.

That’s because the binding is, in essence, a reference to, or a pointer to, the awesome variable itself, rather than a copy of its value. This is a mostly unprecedented concept for JS introduced with ES6 module bindings.

Though you can clearly use export multiple times inside a module’s definition, ES6 definitely prefers the approach that a module has a single export, which is known as a default export. In the words of some members of the TC39 committee, you’re “rewarded with simpler import syntax” if you follow that pattern, and conversely “penalized” with more verbose syntax if you don’t.

A default export sets a particular exported binding to be the default when importing the module. The name of the binding is literally default. As you’ll see later, when importing module bindings you can also rename them, as you commonly will with a default export.

There can only be one default per module definition. We’ll cover import in the next section, and you’ll see how the import syntax is more concise if the module has a default export.

There’s a subtle nuance to default export syntax that you should pay close attention to. Compare these two snippets:

function foo(..) {
    // ..
}

export default foo;

And this one:

function foo(..) {
    // ..
}

export { foo as default };

In the first snippet, you are exporting a binding to the function expression value at that moment, not to the identifier foo. In other words, export default .. takes an expression. If you later assign foo to a different value inside your module, the module import still reveals the function originally exported, not the new value.

By the way, the first snippet could also have been written as:

export default function foo(..) {
    // ..
}

Warning: Although the function foo.. part here is technically a function expression, for the purposes of the internal scope of the module, it’s treated like a function declaration, in that the foo name is bound in the module’s top-level scope (often called “hoisting”). The same is true for export default class Foo... However, while you can do export var foo = .., you currently cannot do export default var foo = .. (or let or const), in a frustrating case of inconsistency. At the time of this writing, there’s already discussion of adding that capability in soon, post-ES6, for consistency’s sake.

Recall the second snippet again:

function foo(..) {
    // ..
}

export { foo as default };

In this version of the module export, the default export binding is actually to the foo identifier rather than its value, so you get the previously described binding behavior (i.e., if you later change foo’s value, the value seen on the import side will also be updated).

Be very careful of this subtle gotcha in default export syntax, especially if your logic calls for export values to be updated. If you never plan to update a default export’s value, export default .. is fine. If you do plan to update the value, you must use export { .. as default }. Either way, make sure to comment your code to explain your intent!

Because there can only be one default per module, you may be tempted to design your module with one default export of an object with all your API methods on it, such as:

export default {
    foo() { .. },
    bar() { .. },
    ..
};

That pattern seems to map closely to how a lot of developers have already structured their pre-ES6 modules, so it seems like a natural approach. Unfortunately, it has some downsides and is officially discouraged.

In particular, the JS engine cannot statically analyze the contents of a plain object, which means it cannot do some optimizations for static import performance. The advantage of having each member individually and explicitly exported is that the engine can do the static analysis and optimization.

If your API has more than one member already, it seems like these principles – one default export per module, and all API members as named exports – are in conflict, doesn’t it? But you can have a single default export as well as other named exports; they are not mutually exclusive.

So, instead of this (discouraged) pattern:

export default function foo() { .. }

foo.bar = function() { .. };
foo.baz = function() { .. };

You can do:

export default function foo() { .. }

export function bar() { .. }
export function baz() { .. }

Note: In this previous snippet, I used the name foo for the function that default labels. That foo name, however, is ignored for the purposes of export – default is actually the exported name. When you import this default binding, you can give it whatever name you want, as you’ll see in the next section.

Alternatively, some will prefer:

function foo() { .. }
function bar() { .. }
function baz() { .. }

export { foo as default, bar, baz, .. };

The effects of mixing default and named exports will be more clear when we cover import shortly. But essentially it means that the most concise default import form would only retrieve the foo() function. The user could additionally manually list bar and baz as named imports, if they want them.

You can probably imagine how tedious that’s going to be for consumers of your module if you have lots of named export bindings. There is a wildcard import form where you import all of a module’s exports within a single namespace object, but there’s no way to wildcard import to top-level bindings.

Again, the ES6 module mechanism is intentionally designed to discourage modules with lots of exports; relatively speaking, it’s desired that such approaches be a little more difficult, as a sort of social engineering to encourage simple module design in favor of large/complex module design.

I would probably recommend you not mix default export with named exports, especially if you have a large API and refactoring to separate modules isn’t practical or desired. In that case, just use all named exports, and document that consumers of your module should probably use the import * as .. (namespace import, discussed in the next section) approach to bring the whole API in at once on a single namespace.

We mentioned this earlier, but let’s come back to it in more detail. Other than the export default ... form that exports an expression value binding, all other export forms are exporting bindings to local identifiers. For those bindings, if you change the value of a variable inside a module after exporting, the external imported binding will access the updated value:

var foo = 42;
export { foo as default };

export var bar = "hello world";

foo = 10;
bar = "cool";

When you import this module, the default and bar exports will be bound to the local variables foo and bar, meaning they will reveal the updated 10 and "cool" values. The values at time of export are irrelevant. The values at time of import are irrelevant. The bindings are live links, so all that matters is what the current value is when you access the binding.
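To observe that live linkage from the importing side, consider this sketch. To keep it to a single runnable snippet, the exporting module is inlined as a data: URL and loaded with the dynamic import(..) form — a feature standardized after ES6, used here purely as an illustration convenience, not as part of the ES6 module syntax itself:

```javascript
// sketch: observing live export bindings from the importer's side;
// the exporting module is inlined as a `data:` URL, and loaded with
// dynamic `import(..)` (a post-ES6 feature) just to keep this runnable
// as a single snippet
var src = "export var count = 0;" +
    "export function inc() { count = count + 1; }";

import( "data:text/javascript," + encodeURIComponent( src ) )
.then( function(mod){
    console.log( mod.count );       // 0
    mod.inc();
    console.log( mod.count );       // 1 <-- the binding is live, not a copy
} );
```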

Warning: Two-way bindings are not allowed. If you import a foo from a module, and try to change the value of your imported foo variable, an error will be thrown! We’ll revisit that in the next section.

You can also re-export another module’s exports, such as:

export { foo, bar } from "baz";
export { foo as FOO, bar as BAR } from "baz";
export * from "baz";

Those forms are similar to just first importing from the "baz" module then listing its members explicitly for export from your module. However, in these forms, the members of the "baz" module are never imported to your module’s local scope; they sort of pass through untouched.

31.3.3.2 importing API Members

To import a module, unsurprisingly you use the import statement. Just as export has several nuanced variations, so does import, so spend plenty of time considering the following issues and experimenting with your options.

If you want to import certain specific named members of a module’s API into your top-level scope, you use this syntax:

import { foo, bar, baz } from "foo";

Warning: The { .. } syntax here may look like an object literal, or even an object destructuring syntax. However, its form is special just for modules, so be careful not to confuse it with other { .. } patterns elsewhere.

The "foo" string is called a module specifier. Because the whole goal is statically analyzable syntax, the module specifier must be a string literal; it cannot be a variable holding the string value.

From the perspective of your ES6 code and the JS engine itself, the contents of this string literal are completely opaque and meaningless. The module loader will interpret this string as an instruction of where to find the desired module, either as a URL path or a local filesystem path.

The foo, bar, and baz identifiers listed must match named exports on the module’s API (static analysis and error assertion apply). They are bound as top-level identifiers in your current scope:

import { foo } from "foo";

foo();

You can rename the bound identifiers imported, as:

import { foo as theFooFunc } from "foo";

theFooFunc();

If the module has just a default export that you want to import and bind to an identifier, you can opt to skip the { .. } surrounding syntax for that binding. The import in this preferred case gets the nicest and most concise of the import syntax forms:

import foo from "foo";

// or:
import { default as foo } from "foo";

Note: As explained in the previous section, the default keyword in a module’s export specifies a named export where the name is actually default, as is illustrated by the second more verbose syntax option. The renaming from default to, in this case, foo, is explicit in the latter syntax and is identical yet implicit in the former syntax.

You can also import a default export along with other named exports, if the module has such a definition. Recall this module definition from earlier:

export default function foo() { .. }

export function bar() { .. }
export function baz() { .. }

To import that module’s default export and its two named exports:

import FOOFN, { bar, baz as BAZ } from "foo";

FOOFN();
bar();
BAZ();

The strongly suggested approach from ES6’s module philosophy is that you only import the specific bindings from a module that you need. If a module provides 10 API methods, but you only need two of them, some believe it wasteful to bring in the entire set of API bindings.

One benefit, besides code being more explicit, is that narrow imports make static analysis and error detection (accidentally using the wrong binding name, for instance) more robust.

Of course, that’s just the standard position influenced by ES6 design philosophy; there’s nothing that requires adherence to that approach.

Many developers would be quick to point out that such approaches can be more tedious, requiring you to regularly revisit and update your import statement(s) each time you realize you need something else from a module. That tedium is the trade-off you accept in exchange for the explicitness.

In that light, the preference might be to import everything from the module into a single namespace, rather than importing individual members, each directly into the scope. Fortunately, the import statement has a syntax variation that can support this style of module consumption, called namespace import.

Consider a "foo" module exported as:

export function bar() { .. }
export var x = 42;
export function baz() { .. }

You can import that entire API to a single module namespace binding:

import * as foo from "foo";

foo.bar();
foo.x;          // 42
foo.baz();

Note: The * as .. clause requires the * wildcard. In other words, you cannot do something like import { bar, x } as foo from "foo" to bring in only part of the API but still bind to the foo namespace. I would have liked something like that, but for ES6 it’s all or nothing with the namespace import.

If the module you’re importing with * as .. has a default export, it is named default in the namespace specified. You can additionally name the default import outside of the namespace binding, as a top-level identifier. Consider a "world" module exported as:

export default function foo() { .. }
export function bar() { .. }
export function baz() { .. }

And this import:

import foofn, * as hello from "world";

foofn();
hello.default();
hello.bar();
hello.baz();

While this syntax is valid, it can be rather confusing that one method of the module (the default export) is bound at the top-level of your scope, whereas the rest of the named exports (and one called default) are bound as properties on a differently named (hello) identifier namespace.

As I mentioned earlier, my suggestion would be to avoid designing your module exports in this way, to reduce the chances that your module’s users will suffer these strange quirks.

All imported bindings are immutable and/or read-only. Consider the previous import; all of these subsequent assignment attempts will throw TypeErrors:

import foofn, * as hello from "world";

foofn = 42;         // (runtime) TypeError!
hello.default = 42; // (runtime) TypeError!
hello.bar = 42;     // (runtime) TypeError!
hello.baz = 42;     // (runtime) TypeError!

Recall earlier in the “exporting API Members” section that we talked about how the bar and baz bindings are bound to the actual identifiers inside the "world" module. That means if the module changes those values, hello.bar and hello.baz now reference the updated values.

But the immutable/read-only nature of your local imported bindings enforces that you cannot change them from the imported bindings, hence the TypeErrors. That’s pretty important, because without those protections, your changes would end up affecting all other consumers of the module (remember: singleton), which could create some very surprising side effects!

Moreover, though a module can change its API members from the inside, you should be very cautious of intentionally designing your modules in that fashion. ES6 modules are intended to be static, so deviations from that principle should be rare and should be carefully and verbosely documented.

Warning: There are module design philosophies where you actually intend to let a consumer change the value of a property on your API, or module APIs are designed to be “extended” by having other “plug-ins” add to the API namespace. As we just asserted, ES6 module APIs should be thought of and designed as static and unchangeable, which strongly restricts and discourages these alternative module design patterns. You can get around these limitations by exporting a plain object, which of course can then be changed at will. But be careful and think twice before going down that road.

Declarations that occur as a result of an import are “hoisted” (see the Scope & Closures title of this series). Consider:

foo();

import { foo } from "foo";

foo() can run because not only did the static resolution of the import .. statement figure out what foo is during compilation, but it also “hoisted” the declaration to the top of the module’s scope, thus making it available throughout the module.

Finally, the most basic form of the import looks like this:

import "foo";

This form does not actually import any of the module’s bindings into your scope. It loads (if not already loaded), compiles (if not already compiled), and evaluates (if not already run) the "foo" module.

In general, that sort of import is probably not going to be terribly useful. There may be niche cases where a module’s definition has side effects (such as assigning things to the window/global object). You could also envision using import "foo" as a sort of preload for a module that may be needed later.

31.3.4 Circular Module Dependency

A imports B. B imports A. How does this actually work?

I’ll state off the bat that designing systems with intentional circular dependency is generally something I try to avoid. That having been said, I recognize there are reasons people do this and it can solve some sticky design situations.

Let’s consider how ES6 handles this. First, module "A":

import bar from "B";

export default function foo(x) {
    if (x > 10) return bar( x - 1 );
    return x * 2;
}

Now, module "B":

import foo from "A";

export default function bar(y) {
    if (y > 5) return foo( y / 2 );
    return y * 3;
}

These two functions, foo(..) and bar(..), would work as standard function declarations if they were in the same scope, because the declarations are “hoisted” to the whole scope and thus available to each other regardless of authoring order.

With modules, you have declarations in entirely different scopes, so ES6 has to do extra work to help make these circular references work.

In a rough conceptual sense, this is how circular import dependencies are validated and resolved:

  • If the "A" module is loaded first, the first step is to scan the file and analyze all the exports, so it can register all those bindings available for import. Then it processes the import .. from "B", which signals that it needs to go fetch "B".
  • Once the engine loads "B", it does the same analysis of its export bindings. When it sees the import .. from "A", it knows the API of "A" already, so it can verify the import is valid. Now that it knows the "B" API, it can also validate the import .. from "B" in the waiting "A" module.

In essence, the mutual imports, along with the static verification that’s done to validate both import statements, virtually compose the two separate module scopes (via the bindings), such that foo(..) can call bar(..) and vice versa, just as if they had originally been declared in the same scope.

Now let’s try using the two modules together. First, we’ll try foo(..):

import foo from "A";
foo( 25 );              // 11

Or we can try bar(..):

import bar from "B";
bar( 25 );              // 11.5

By the time either the foo(25) or bar(25) calls are executed, all the analysis/compilation of all modules has completed. That means foo(..) internally knows directly about bar(..) and bar(..) internally knows directly about foo(..).

If all we need is to interact with foo(..), then we only need to import the "A" module. Likewise with bar(..) and the "B" module.

Of course, we can import and use both of them if we want to:

import foo from "A";
import bar from "B";

foo( 25 );              // 11
bar( 25 );              // 11.5

The static loading semantics of the import statement mean that the "A" and "B" modules that mutually depend on each other via import are both loaded, parsed, and compiled before either of them runs. So their circular dependency is statically resolved and this works as you’d expect.

31.3.5 Module Loading

We asserted at the beginning of this “Modules” section that the import statement uses a separate mechanism, provided by the hosting environment (browser, Node.js, etc.), to actually resolve the module specifier string into some useful instruction for finding and loading the desired module. That mechanism is the system Module Loader.

The default module loader provided by the environment will interpret a module specifier as a URL if in the browser, and (generally) as a local filesystem path if on a server such as Node.js. The default behavior is to assume the loaded file is authored in the ES6 standard module format.

Moreover, you will be able to load a module into the browser via an HTML tag, similar to how current script programs are loaded. At the time of this writing, it’s not fully clear if this tag will be <script type="module"> or <module>. ES6 doesn’t control that decision, but discussions in the appropriate standards bodies are already well along in parallel with ES6.

Whatever the tag looks like, you can be sure that under the covers it will use the default loader (or a customized one you’ve pre-specified, as we’ll discuss in the next section).

Just like the tag you’ll use in markup, the module loader itself is not specified by ES6. It is a separate, parallel standard (http://whatwg.github.io/loader/) controlled currently by the WHATWG browser standards group.

At the time of this writing, the following discussions reflect an early pass at the API design, and things are likely to change.

31.3.5.1 Loading Modules Outside of Modules

One use for interacting directly with the module loader is if a non-module needs to load a module. Consider:

// normal script loaded in browser via `<script>`,
// `import` is illegal here

Reflect.Loader.import( "foo" ) // returns a promise for `"foo"`
.then( function(foo){
    foo.bar();
} );

The Reflect.Loader.import(..) utility imports the entire module onto the named parameter (as a namespace), just like the import * as foo .. namespace import we discussed earlier.

Note: The Reflect.Loader.import(..) utility returns a promise that is fulfilled once the module is ready. To import multiple modules, you can compose promises from multiple Reflect.Loader.import(..) calls using Promise.all([ .. ]). For more information about Promises, see “Promises” in Chapter 4.

You can also use Reflect.Loader.import(..) in a real module to dynamically/conditionally load a module, where import itself would not work. You might, for instance, choose to load a module containing a polyfill for some ES7+ feature if a feature test reveals it’s not defined by the current engine.

For performance reasons, you’ll want to avoid dynamic loading whenever possible, as it hampers the ability of the JS engine to fire off early fetches from its static analysis.

31.3.5.2 Customized Loading

Another use for directly interacting with the module loader is if you want to customize its behavior through configuration or even redefinition.

At the time of this writing, there’s a polyfill for the module loader API being developed (https://github.com/ModuleLoader/es6-module-loader). While details are scarce and highly subject to change, we can explore what possibilities may eventually land.

The Reflect.Loader.import(..) call may support a second argument for specifying various options to customize the import/load task. For example:

Reflect.Loader.import( "foo", { address: "/path/to/foo.js" } )
.then( function(foo){
    // ..
} );

It’s also expected that a customization will be provided (through some means) for hooking into the process of loading a module, where a translation/transpilation could occur after load but before the engine compiles the module.

For example, you could load something that’s not already an ES6-compliant module format (e.g., CoffeeScript, TypeScript, CommonJS, AMD). Your translation step could then convert it to an ES6-compliant module for the engine to then process.

31.4 Classes

From nearly the beginning of JavaScript, syntax and development patterns have all strived (read: struggled) to put on a facade of supporting class-oriented development. With things like new and instanceof and a .constructor property, who couldn’t help but be teased that JS had classes hidden somewhere inside its prototype system?

Of course, JS “classes” aren’t nearly the same as classical classes. The differences are well documented, so I won’t belabor that point any further here.

Note: To learn more about the patterns used in JS to fake “classes,” and an alternative view of prototypes called “delegation,” see the second half of the this & Object Prototypes title of this series.

31.4.1 class

Although JS’s prototype mechanism doesn’t work like traditional classes, that doesn’t stop the strong tide of demand on the language to extend the syntactic sugar so that expressing “classes” looks more like real classes. Enter the ES6 class keyword and its associated mechanism.

This feature is the result of a highly contentious and drawn-out debate, and represents a smaller subset compromise from several strongly opposed views on how to approach JS classes. Most developers who want full classes in JS will find parts of the new syntax quite inviting, but will find important bits still missing. Don’t worry, though. TC39 is already working on additional features to augment classes in the post-ES6 timeframe.

At the heart of the new ES6 class mechanism is the class keyword, which identifies a block where the contents define the members of a function’s prototype. Consider:

class Foo {
    constructor(a,b) {
        this.x = a;
        this.y = b;
    }

    gimmeXY() {
        return this.x * this.y;
    }
}

Some things to note:

  • class Foo implies creating a (special) function of the name Foo, much like you did pre-ES6.
  • constructor(..) identifies the signature of that Foo(..) function, as well as its body contents.
  • Class methods use the same “concise method” syntax available to object literals, as discussed in Chapter 2. This also includes the concise generator form as discussed earlier in this chapter, as well as the ES5 getter/setter syntax. However, class methods are non-enumerable whereas object methods are by default enumerable.
  • Unlike object literals, there are no commas separating members in a class body! In fact, they’re not even allowed.

The class syntax definition in the previous snippet can be roughly thought of as this pre-ES6 equivalent, which probably will look fairly familiar to those who’ve done prototype-style coding before:

function Foo(a,b) {
    this.x = a;
    this.y = b;
}

Foo.prototype.gimmeXY = function() {
    return this.x * this.y;
};

In either the pre-ES6 form or the new ES6 class form, this “class” can now be instantiated and used just as you’d expect:

var f = new Foo( 5, 15 );

f.x;                        // 5
f.y;                        // 15
f.gimmeXY();                // 75

Caution! Though class Foo seems much like function Foo(), there are important differences:

  • A Foo(..) call of class Foo must be made with new, as the pre-ES6 option of Foo.call( obj ) will not work.
  • While function Foo is “hoisted” (see the Scope & Closures title of this series), class Foo is not; the extends .. clause specifies an expression that cannot be “hoisted.” So, you must declare a class before you can instantiate it.
  • class Foo in the top global scope creates a lexical Foo identifier in that scope, but unlike function Foo does not create a global object property of that name.
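A quick sketch of the hoisting point in action (the empty Foo class here is just for illustration):

```javascript
// sketch: a class declaration is not "hoisted" the way a function
// declaration is -- referencing it early throws a ReferenceError
var early;
try {
    new Foo();
}
catch (err) {
    early = err.name;
}

class Foo {}

console.log( early );           // "ReferenceError"
console.log( typeof Foo );      // "function"
```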

The established instanceof operator still works with ES6 classes, because class just creates a constructor function of the same name. However, ES6 introduces a way to customize how instanceof works, using Symbol.hasInstance (see “Well-Known Symbols” in Chapter 7).
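As a brief sketch of that customization, a class can define a static Symbol.hasInstance method to take over instanceof checks entirely (the Even class here is hypothetical):

```javascript
// sketch: customizing `instanceof` via the well-known symbol
// `Symbol.hasInstance` (the `Even` class is hypothetical)
class Even {
    static [Symbol.hasInstance](v) {
        return typeof v == "number" && v % 2 == 0;
    }
}

console.log( 4 instanceof Even );       // true
console.log( 7 instanceof Even );       // false
```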

Another way of thinking about class, which I find more convenient, is as a macro that is used to automatically populate a prototype object. Optionally, it also wires up the [[Prototype]] relationship if using extends (see the next section).

An ES6 class isn’t really an entity itself, but a meta concept that wraps around other concrete entities, such as functions and properties, and ties them together.

Tip: In addition to the declaration form, a class can also be an expression, as in: var x = class Y { .. }. This is primarily useful for passing a class definition (technically, the constructor itself) as a function argument or assigning it to an object property.
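A small sketch of the class expression form; note that, as with named function expressions, the inner name (Y here) is only visible inside the class body itself:

```javascript
// sketch: a class expression; `Y` is usable inside the class body,
// but does not become a binding in the enclosing scope
var x = class Y {
    type() { return typeof Y; }     // `Y` is visible here...
};

console.log( new x().type() );      // "function"
console.log( typeof Y );            // "undefined" -- ...but not out here
```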

31.4.2 extends and super

ES6 classes also have syntactic sugar for establishing the [[Prototype]] delegation link between two function prototypes – commonly mislabeled “inheritance” or confusingly labeled “prototype inheritance” – using the class-oriented familiar terminology extends:

class Bar extends Foo {
    constructor(a,b,c) {
        super( a, b );
        this.z = c;
    }

    gimmeXYZ() {
        return super.gimmeXY() * this.z;
    }
}

var b = new Bar( 5, 15, 25 );

b.x;                        // 5
b.y;                        // 15
b.z;                        // 25
b.gimmeXYZ();               // 1875

A significant new addition is super, which is actually something not directly possible pre-ES6 (without some unfortunate hack trade-offs). In the constructor, super automatically refers to the “parent constructor,” which in the previous example is Foo(..). In a method, it refers to the “parent object,” such that you can then make a property/method access off it, such as super.gimmeXY().

Bar extends Foo of course means to link the [[Prototype]] of Bar.prototype to Foo.prototype. So, super in a method like gimmeXYZ() specifically means Foo.prototype, whereas super means Foo when used in the Bar constructor.

Note: super is not limited to class declarations. It also works in object literals, in much the same way we’re discussing here. See “Object super” in Chapter 2 for more information.

31.4.2.1 There Be super Dragons

It is not insignificant to note that super behaves differently depending on where it appears. In fairness, most of the time, that won’t be a problem. But surprises await if you deviate from a narrow norm.

There may be cases where in the constructor you would want to reference the Foo.prototype, such as to directly access one of its properties/methods. However, super in the constructor cannot be used in that way; super.prototype will not work. super(..) means roughly to call new Foo(..), but isn’t actually a usable reference to Foo itself.

Symmetrically, you may want to reference the Foo(..) function from inside a non-constructor method. super.constructor will point at Foo(..) the function, but beware that this function can only be invoked with new. new super.constructor(..) would be valid, but it wouldn’t be terribly useful in most cases, because you can’t make that call use or reference the current this object context, which is likely what you’d want.
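To make that concrete, here’s a sketch reusing the Foo/Bar shape from earlier; the makeParent() method name is just illustrative:

```javascript
// sketch: `super.constructor` in a method points at the parent
// constructor, but it can only be invoked with `new`
class Foo {
    constructor() { this.who = "foo"; }
}

class Bar extends Foo {
    constructor() {
        super();
        this.who = "bar";
    }
    makeParent() {
        // valid, but vends a fresh instance -- it can't use `this`
        return new super.constructor();
    }
}

var b = new Bar();
console.log( b.who );               // "bar"
console.log( b.makeParent().who );  // "foo"
```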

Also, super looks like it might be driven by a function’s context just like this – that is, that they’d both be dynamically bound. However, super is not dynamic like this is. When a constructor or method makes a super reference inside it at declaration time (in the class body), that super is statically bound to that specific class hierarchy, and cannot be overridden (at least in ES6).

What does that mean? It means that if you’re in the habit of taking a method from one “class” and “borrowing” it for another class by overriding its this, say with call(..) or apply(..), that may very well create surprises if the method you’re borrowing has a super in it. Consider this class hierarchy:

class ParentA {
    constructor() { this.id = "a"; }
    foo() { console.log( "ParentA:", this.id ); }
}

class ParentB {
    constructor() { this.id = "b"; }
    foo() { console.log( "ParentB:", this.id ); }
}

class ChildA extends ParentA {
    foo() {
        super.foo();
        console.log( "ChildA:", this.id );
    }
}

class ChildB extends ParentB {
    foo() {
        super.foo();
        console.log( "ChildB:", this.id );
    }
}

var a = new ChildA();
a.foo();                    // ParentA: a
                            // ChildA: a
var b = new ChildB();
b.foo();                    // ParentB: b
                            // ChildB: b

All seems fairly natural and expected in this previous snippet. However, if you try to borrow b.foo() and use it in the context of a – by virtue of dynamic this binding, such borrowing is quite common and used in many different ways, including mixins most notably – you may find this result an ugly surprise:

// borrow `b.foo()` to use in `a` context
b.foo.call( a );            // ParentB: a
                            // ChildB: a

As you can see, the this.id reference was dynamically rebound so that : a is reported in both cases instead of : b. But b.foo()’s super.foo() reference wasn’t dynamically rebound, so it still reported ParentB instead of the expected ParentA.

Because b.foo() references super, it is statically bound to the ChildB/ParentB hierarchy and cannot be used against the ChildA/ParentA hierarchy. There is no ES6 solution to this limitation.

super seems to work intuitively if you have a static class hierarchy with no cross-pollination. But in all fairness, one of the main benefits of doing this-aware coding is exactly that sort of flexibility. Simply, class + super requires you to avoid such techniques.

The choice boils down to narrowing your object design to these static hierarchies – class, extends, and super will be quite nice – or dropping all attempts to “fake” classes and instead embrace dynamic and flexible, classless objects and [[Prototype]] delegation (see the this & Object Prototypes title of this series).

31.4.2.2 Subclass Constructor

Constructors are not required for classes or subclasses; a default constructor is substituted in both cases if omitted. However, the default substituted constructor is different for a direct class versus an extended class.

Specifically, the default subclass constructor automatically calls the parent constructor, and passes along any arguments. In other words, you could think of the default subclass constructor sort of like this:

constructor(...args) {
    super(...args);
}

This is an important detail to note. Not all class languages have the subclass constructor automatically call the parent constructor. C++ does, but Java does not. But more importantly, in pre-ES6 classes, such automatic “parent constructor” calling does not happen. Be careful when converting to ES6 class if you’ve been relying on such calls not happening.
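A sketch of that default forwarding behavior, using hypothetical Point/Pixel classes:

```javascript
// sketch: a subclass with no declared constructor gets the default
// `constructor(...args) { super(...args); }` automatically
class Point {
    constructor(x,y) {
        this.x = x;
        this.y = y;
    }
}

// no constructor declared here, so arguments pass through to `Point(..)`
class Pixel extends Point {}

var p = new Pixel( 3, 4 );
console.log( p.x, p.y );        // 3 4
```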

Another perhaps surprising deviation/limitation of ES6 subclass constructors: in a constructor of a subclass, you cannot access this until super(..) has been called. The reason is nuanced and complicated, but it boils down to the fact that the parent constructor is actually the one creating/initializing your instance’s this. Pre-ES6, it works oppositely; the this object is created by the “subclass constructor,” and then you call a “parent constructor” with the context of the “subclass” this.

Let’s illustrate. This works pre-ES6:

function Foo() {
    this.a = 1;
}

function Bar() {
    this.b = 2;
    Foo.call( this );
}

// `Bar` "extends" `Foo`
Bar.prototype = Object.create( Foo.prototype );

But this ES6 equivalent is not allowed:

class Foo {
    constructor() { this.a = 1; }
}

class Bar extends Foo {
    constructor() {
        this.b = 2;         // not allowed before `super()`
        super();            // to fix swap these two statements
    }
}

In this case, the fix is simple. Just swap the two statements in the subclass Bar constructor. However, if you’ve been relying pre-ES6 on being able to skip calling the “parent constructor,” beware because that won’t be allowed anymore.
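With the two statements swapped, the subclass constructor is valid:

```javascript
// sketch: `super()` runs first, creating/initializing `this`,
// and only then is `this` usable in the subclass constructor
class Foo {
    constructor() { this.a = 1; }
}

class Bar extends Foo {
    constructor() {
        super();
        this.b = 2;
    }
}

var b = new Bar();
console.log( b.a, b.b );        // 1 2
```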

31.4.2.3 extending Natives

One of the most heralded benefits of the new class and extends design is the ability to (finally!) subclass the built-in natives, like Array. Consider:

class MyCoolArray extends Array {
    first() { return this[0]; }
    last() { return this[this.length - 1]; }
}

var a = new MyCoolArray( 1, 2, 3 );

a.length;                   // 3
a;                          // [1,2,3]

a.first();                  // 1
a.last();                   // 3

Prior to ES6, a fake “subclass” of Array using manual object creation and linking to Array.prototype only partially worked. It missed out on the special behaviors of a real array, such as the automatically updating length property. ES6 subclasses should fully work with “inherited” and augmented behaviors as expected!
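For instance, a sketch verifying that the length property of a MyCoolArray instance updates automatically, just like a genuine array:

```javascript
// sketch: the subclass instance really is an array -- `length`
// tracks element additions automatically
class MyCoolArray extends Array {
    first() { return this[0]; }
    last() { return this[this.length - 1]; }
}

var a = new MyCoolArray();
console.log( a.length );        // 0

a.push( 1, 2, 3 );
console.log( a.length );        // 3
console.log( a.first() );       // 1
console.log( a.last() );        // 3
```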

Another common pre-ES6 “subclass” limitation is with the Error object, in creating custom error “subclasses.” When genuine Error objects are created, they automatically capture special stack information, including the line number and file where the error is created. Pre-ES6 custom error “subclasses” have no such special behavior, which severely limits their usefulness.

ES6 to the rescue:

class Oops extends Error {
    constructor(reason) {
        super(reason);
        this.oops = reason;
    }
}

// later:
var ouch = new Oops( "I messed up!" );
throw ouch;

The ouch custom error object in this previous snippet will behave like any other genuine error object, including capturing stack. That’s a big improvement!
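A quick sketch verifying that behavior:

```javascript
// sketch: the `Oops` error subclass behaves like a genuine error,
// including a captured `stack` string
class Oops extends Error {
    constructor(reason) {
        super( reason );
        this.oops = reason;
    }
}

var ouch = new Oops( "I messed up!" );

console.log( ouch instanceof Error );   // true
console.log( ouch.message );            // "I messed up!"
console.log( typeof ouch.stack );       // "string" <-- stack captured
```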

31.4.3 new.target

ES6 introduces a new concept called a meta property (see Chapter 7), in the form of new.target.

If that looks strange, it is; pairing a keyword with a . and a property name is definitely an out-of-the-ordinary pattern for JS.

new.target is a new “magical” value available in all functions, though in normal functions it will always be undefined. In any constructor, new.target always points at the constructor that new actually directly invoked, even if the constructor is in a parent class and was delegated to by a super(..) call from a child constructor. Consider:

class Foo {
    constructor() {
        console.log( "Foo: ", new.target.name );
    }
}

class Bar extends Foo {
    constructor() {
        super();
        console.log( "Bar: ", new.target.name );
    }
    baz() {
        console.log( "baz: ", new.target );
    }
}

var a = new Foo();
// Foo: Foo

var b = new Bar();
// Foo: Bar   <-- respects the `new` call-site
// Bar: Bar

b.baz();
// baz: undefined

The new.target meta property doesn’t have much purpose in class constructors, except accessing a static property/method (see the next section).

If new.target is undefined, you know the function was not called with new. You can then force a new invocation if that’s necessary.
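For example, a sketch of forcing a new invocation when the caller forgot it (the User function here is hypothetical):

```javascript
// sketch: detecting a missing `new` with `new.target`, then forcing
// a proper construction (`User` is a hypothetical function)
function User(name) {
    if (new.target === undefined) {
        // called without `new`; redirect to a real construction
        return new User( name );
    }
    this.name = name;
}

var u1 = new User( "Kyle" );
var u2 = User( "Kyle" );            // forgot `new`, still works

console.log( u1.name );             // "Kyle"
console.log( u2.name );             // "Kyle"
```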

31.4.4 static

When a subclass Bar extends a parent class Foo, we already observed that Bar.prototype is [[Prototype]]-linked to Foo.prototype. But additionally, Bar() is [[Prototype]]-linked to Foo(). That part may not have such an obvious reasoning.

However, it’s quite useful in the case where you declare static methods (not just properties) for a class, as these are added directly to that class’s function object, not to the function object’s prototype object. Consider:

class Foo {
    static cool() { console.log( "cool" ); }
    wow() { console.log( "wow" ); }
}

class Bar extends Foo {
    static awesome() {
        super.cool();
        console.log( "awesome" );
    }
    neat() {
        super.wow();
        console.log( "neat" );
    }
}

Foo.cool();                 // "cool"
Bar.cool();                 // "cool"
Bar.awesome();              // "cool"
                            // "awesome"

var b = new Bar();
b.neat();                   // "wow"
                            // "neat"

b.awesome;                  // undefined
b.cool;                     // undefined

Be careful not to fall into thinking that static members sit on the class’s prototype chain. They’re actually on the dual/parallel chain between the function constructors.
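A sketch of that dual chain (names illustrative):

```javascript
// sketch: `static` members live on the constructor-to-constructor
// [[Prototype]] chain, not on the prototype chain that instances use
class Foo {
    static cool() { return "cool"; }
}

class Bar extends Foo {}

console.log( Object.getPrototypeOf( Bar ) === Foo );    // true
console.log( Bar.cool() );                              // "cool"
console.log( new Bar().cool );                          // undefined
```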

31.4.4.1 Symbol.species Constructor Getter

One place where static can be useful is in setting the Symbol.species getter (known internally in the specification as @@species) for a derived (child) class. This capability allows a child class to signal to a parent class what constructor should be used – when not intending the child class’s constructor itself – if any parent class method needs to vend a new instance.

For example, many methods on Array create and return a new Array instance. If you define a derived class from Array, but you want those methods to continue to vend actual Array instances instead of instances of your derived class, this works:

class MyCoolArray extends Array {
    // force `species` to be parent constructor
    static get [Symbol.species]() { return Array; }
}

var a = new MyCoolArray( 1, 2, 3 ),
    b = a.map( function(v){ return v * 2; } );

b instanceof MyCoolArray;   // false
b instanceof Array;         // true

To illustrate how a parent class method can use a child’s species declaration somewhat like Array#map(..) is doing, consider:

class Foo {
    // defer `species` to derived constructor
    static get [Symbol.species]() { return this; }
    spawn() {
        return new this.constructor[Symbol.species]();
    }
}

class Bar extends Foo {
    // force `species` to be parent constructor
    static get [Symbol.species]() { return Foo; }
}

var a = new Foo();
var b = a.spawn();
b instanceof Foo;                   // true

var x = new Bar();
var y = x.spawn();
y instanceof Bar;                   // false
y instanceof Foo;                   // true

The parent class Symbol.species does return this to defer to any derived class, as you’d normally expect. Bar then overrides to manually declare Foo to be used for such instance creation. Of course, a derived class can still vend instances of itself using new this.constructor(..).

31.5 Review

ES6 introduces several new features that aid in code organization:

  • Iterators provide sequential access to data or operations. They can be consumed by new language features like for..of and ....
  • Generators are locally pause/resume capable functions controlled by an iterator. They can be used to programmatically (and interactively, through yield/next(..) message passing) generate values to be consumed via iteration.
  • Modules allow private encapsulation of implementation details with a publicly exported API. Module definitions are file-based, singleton instances, and statically resolved at compile time.
  • Classes provide cleaner syntax around prototype-based coding. The addition of super also solves tricky issues with relative references in the [[Prototype]] chain.

These new tools should be your first stop when trying to improve the architecture of your JS projects by embracing ES6.

32 You Don’t Know JS: ES6 & Beyond

33 Chapter 4: Async Flow Control

It’s no secret if you’ve written any significant amount of JavaScript that asynchronous programming is a required skill. The primary mechanism for managing asynchrony has been the function callback.

However, ES6 adds a new feature that helps address significant shortcomings in the callbacks-only approach to async: Promises. In addition, we can revisit generators (from the previous chapter) and see a pattern for combining the two that’s a major step forward in async flow control programming in JavaScript.

33.1 Promises

Let’s clear up some misconceptions: Promises are not about replacing callbacks. Promises provide a trustable intermediary – that is, between your calling code and the async code that will perform the task – to manage callbacks.

Another way of thinking about a Promise is as an event listener, on which you can register to listen for an event that lets you know when a task has completed. It’s an event that will only ever fire once, but it can be thought of as an event nonetheless.

Promises can be chained together, which can sequence a series of asynchronously completing steps. Together with higher-level abstractions like the all(..) method (in classic terms, a “gate”) and the race(..) method (in classic terms, a “latch”), promise chains provide a mechanism for async flow control.

Yet another way of conceptualizing a Promise is that it’s a future value, a time-independent container wrapped around a value. This container can be reasoned about identically whether the underlying value is final or not. Observing the resolution of a Promise extracts this value once available. In other words, a Promise is said to be the async version of a sync function’s return value.

A Promise can only have one of two possible resolution outcomes: fulfilled or rejected, with an optional single value. If a Promise is fulfilled, the final value is called a fulfillment. If it’s rejected, the final value is called a reason (as in, a “reason for rejection”). Promises can only be resolved (fulfillment or rejection) once. Any further attempts to fulfill or reject are simply ignored. Thus, once a Promise is resolved, it’s an immutable value that cannot be changed.

Clearly, there are several different ways to think about what a Promise is. No single perspective is fully sufficient, but each provides a separate aspect of the whole. The big takeaway is that they offer a significant improvement over callbacks-only async, namely that they provide order, predictability, and trustability.

33.1.1 Making and Using Promises

To construct a promise instance, use the Promise(..) constructor:

var p = new Promise( function pr(resolve,reject){
    // ..
} );

The Promise(..) constructor takes a single function (pr(..)), which is called immediately and receives two control functions as arguments, usually named resolve(..) and reject(..). They are used as:

  • If you call reject(..), the promise is rejected, and if any value is passed to reject(..), it is set as the reason for rejection.
  • If you call resolve(..) with no value, or any non-promise value, the promise is fulfilled.
  • If you call resolve(..) and pass another promise, this promise simply adopts the state – whether immediate or eventual – of the passed promise (either fulfillment or rejection).
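To illustrate that third rule, here's a minimal sketch of one promise adopting another's eventual state:

```javascript
var inner = new Promise( function pr(resolve){
    setTimeout( function(){
        resolve( 42 );
    }, 10 );
} );

// resolving with another promise adopts its
// eventual fulfillment (or rejection)
var outer = new Promise( function pr(resolve){
    resolve( inner );
} );

outer.then( function fulfilled(v){
    console.log( v );           // 42
} );
```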

Here’s how you’d typically use a promise to refactor a callback-reliant function call. If you start out with an ajax(..) utility that expects to be able to call an error-first style callback:

function ajax(url,cb) {
    // make request, eventually call `cb(..)`
}

// ..

ajax( "http://some.url.1", function handler(err,contents){
    if (err) {
        // handle ajax error
    }
    else {
        // handle `contents` success
    }
} );

You can convert it to:

function ajax(url) {
    return new Promise( function pr(resolve,reject){
        // make request, eventually call
        // either `resolve(..)` or `reject(..)`
    } );
}

// ..

ajax( "http://some.url.1" )
.then(
    function fulfilled(contents){
        // handle `contents` success
    },
    function rejected(reason){
        // handle ajax error reason
    }
);

Promises have a then(..) method that accepts one or two callback functions. The first function (if present) is treated as the handler to call if the promise is fulfilled successfully. The second function (if present) is treated as the handler to call if the promise is rejected explicitly, or if any error/exception is caught during resolution.

If one of the arguments is omitted or otherwise not a valid function – typically you’ll use null instead – a default placeholder equivalent is used. The default success callback passes its fulfillment value along and the default error callback propagates its rejection reason along.

The shorthand for calling then(null,handleRejection) is catch(handleRejection).
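For example, these two forms behave identically:

```javascript
var p = Promise.reject( "Oops" );

p.then( null, function rejected(reason){
    console.log( reason );      // "Oops"
} );

// shorthand for the above
p.catch( function rejected(reason){
    console.log( reason );      // "Oops"
} );
```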

Both then(..) and catch(..) automatically construct and return another promise instance, which is wired to receive the resolution from whatever the return value is from the original promise’s fulfillment or rejection handler (whichever is actually called). Consider:

ajax( "http://some.url.1" )
.then(
    function fulfilled(contents){
        return contents.toUpperCase();
    },
    function rejected(reason){
        return "DEFAULT VALUE";
    }
)
.then( function fulfilled(data){
    // handle data from original promise's
    // handlers
} );

In this snippet, we’re returning an immediate value from either fulfilled(..) or rejected(..), which then is received on the next event turn in the second then(..)’s fulfilled(..). If we instead return a new promise, that new promise is subsumed and adopted as the resolution:

ajax( "http://some.url.1" )
.then(
    function fulfilled(contents){
        return ajax(
            "http://some.url.2?v=" + contents
        );
    },
    function rejected(reason){
        return ajax(
            "http://backup.url.3?err=" + reason
        );
    }
)
.then( function fulfilled(contents){
    // `contents` comes from the subsequent
    // `ajax(..)` call, whichever it was
} );

It’s important to note that an exception (or rejected promise) in the first fulfilled(..) will not result in the first rejected(..) being called, as that handler only responds to the resolution of the first original promise. Instead, the second promise, which the second then(..) is called against, receives that rejection.

In this previous snippet, we are not listening for that rejection, which means it will be silently held onto for future observation. If you never observe it by calling a then(..) or catch(..), then it will go unhandled. Some browser developer consoles may detect these unhandled rejections and report them, but this is not reliably guaranteed; you should always observe promise rejections.
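One common way to observe such rejections is to end the chain with a catch(..). A minimal sketch:

```javascript
Promise.resolve( 42 )
.then( function fulfilled(v){
    // an exception here rejects the *next* promise
    // in the chain, not this one
    throw new Error( "Oops" );
} )
.catch( function rejected(err){
    console.log( err.message );     // "Oops"
} );
```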

Note: This was just a brief overview of Promise theory and behavior. For a much more in-depth exploration, see Chapter 3 of the Async & Performance title of this series.

33.1.2 Thenables

Promises are genuine instances of the Promise(..) constructor. However, there are promise-like objects called thenables that generally can interoperate with the Promise mechanisms.

Any object (or function) with a then(..) function on it is assumed to be a thenable. Any place where the Promise mechanisms can accept and adopt the state of a genuine promise, they can also handle a thenable.

Thenables are basically a general label for any promise-like value that may have been created by some other system than the actual Promise(..) constructor. In that perspective, a thenable is generally less trustable than a genuine Promise. Consider this misbehaving thenable, for example:

var th = {
    then: function thener( fulfilled ) {
        // call `fulfilled(..)` once every 100ms forever
        setInterval( fulfilled, 100 );
    }
};

If you received that thenable and chained it with th.then(..), you’d likely be surprised that your fulfillment handler is called repeatedly, when normal Promises are supposed to only ever be resolved once.

Generally, if you’re receiving what purports to be a promise or thenable back from some other system, you shouldn’t just trust it blindly. In the next section, we’ll see a utility included with ES6 Promises that helps address this trust concern.

But to further understand the perils of this issue, consider that any object in any piece of code that’s ever been defined to have a method on it called then(..) can be potentially confused as a thenable – if used with Promises, of course – regardless of if that thing was ever intended to even remotely be related to Promise-style async coding.

Prior to ES6, there was never any special reservation made on methods called then(..), and as you can imagine there’s been at least a few cases where that method name has been chosen prior to Promises ever showing up on the radar screen. The most likely case of mistaken thenable will be async libraries that use then(..) but which are not strictly Promises-compliant – there are several out in the wild.

The onus will be on you to guard against directly using values with the Promise mechanism that would be incorrectly assumed to be a thenable.

33.1.3 Promise API

The Promise API also provides some static methods for working with Promises.

Promise.resolve(..) creates a promise resolved to the value passed in. Let’s compare how it works to the more manual approach:

var p1 = Promise.resolve( 42 );

var p2 = new Promise( function pr(resolve){
    resolve( 42 );
} );

p1 and p2 will have essentially identical behavior. The same goes for resolving with a promise:

var theP = ajax( .. );

var p1 = Promise.resolve( theP );

var p2 = new Promise( function pr(resolve){
    resolve( theP );
} );

Tip: Promise.resolve(..) is the solution to the thenable trust issue raised in the previous section. Any value that you are not already certain is a trustable promise – even if it could be an immediate value – can be normalized by passing it to Promise.resolve(..). If the value is already a recognizable promise or thenable, its state/resolution will simply be adopted, insulating you from misbehavior. If it’s instead an immediate value, it will be “wrapped” in a genuine promise, thereby normalizing its behavior to be async.
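For example, normalizing a misbehaving thenable (this one calls its callback twice, synchronously, for brevity):

```javascript
var th = {
    then: function thener( fulfilled ) {
        fulfilled( 42 );
        fulfilled( 43 );        // extra call: ignored by a genuine promise
    }
};

Promise.resolve( th )
.then( function fulfilled(v){
    // called exactly once, with the first value
    console.log( v );           // 42
} );
```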

Promise.reject(..) creates an immediately rejected promise, the same as its Promise(..) constructor counterpart:

var p1 = Promise.reject( "Oops" );

var p2 = new Promise( function pr(resolve,reject){
    reject( "Oops" );
} );

While resolve(..) and Promise.resolve(..) can accept a promise and adopt its state/resolution, reject(..) and Promise.reject(..) do not differentiate what value they receive. So, if you reject with a promise or thenable, the promise/thenable itself will be set as the rejection reason, not its underlying value.
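For example:

```javascript
var fulfilledP = Promise.resolve( 42 );

Promise.reject( fulfilledP )
.catch( function rejected(reason){
    // the reason is the promise itself, not its value 42
    console.log( reason === fulfilledP );   // true
} );
```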

Promise.all([ .. ]) accepts an array of one or more values (e.g., immediate values, promises, thenables). It returns a promise back that will be fulfilled if all the values fulfill, or rejected immediately once the first of them rejects.

Starting with these values/promises:

var p1 = Promise.resolve( 42 );
var p2 = new Promise( function pr(resolve){
    setTimeout( function(){
        resolve( 43 );
    }, 100 );
} );
var v3 = 44;
var p4 = new Promise( function pr(resolve,reject){
    setTimeout( function(){
        reject( "Oops" );
    }, 10 );
} );

Let’s consider how Promise.all([ .. ]) works with combinations of those values:

Promise.all( [p1,p2,v3] )
.then( function fulfilled(vals){
    console.log( vals );            // [42,43,44]
} );

Promise.all( [p1,p2,v3,p4] )
.then(
    function fulfilled(vals){
        // never gets here
    },
    function rejected(reason){
        console.log( reason );      // Oops
    }
);

While Promise.all([ .. ]) waits for all fulfillments (or the first rejection), Promise.race([ .. ]) waits only for either the first fulfillment or rejection. Consider:

// NOTE: re-setup all test values to
// avoid timing issues misleading you!

Promise.race( [p2,p1,v3] )
.then( function fulfilled(val){
    console.log( val );             // 42
} );

Promise.race( [p2,p4] )
.then(
    function fulfilled(val){
        // never gets here
    },
    function rejected(reason){
        console.log( reason );      // Oops
    }
);

Warning: While Promise.all([]) will fulfill right away (with no values), Promise.race([]) will hang forever. This is a strange inconsistency, and speaks to the suggestion that you should never use these methods with empty arrays.

33.2 Generators + Promises

It is possible to express a series of promises in a chain to represent the async flow control of your program. Consider:

step1()
.then(
    step2,
    step1Failed
)
.then(
    function step3(msg) {
        return Promise.all( [
            step3a( msg ),
            step3b( msg ),
            step3c( msg )
        ] )
    }
)
.then(step4);

However, there’s a much better option for expressing async flow control, and it will probably be much more preferable in terms of coding style than long promise chains. We can use what we learned in Chapter 3 about generators to express our async flow control.

The important pattern to recognize: a generator can yield a promise, and that promise can then be wired to resume the generator with its fulfillment value.

Consider the previous snippet’s async flow control expressed with a generator:

function *main() {

    try {
        var ret = yield step1();
    }
    catch (err) {
        ret = yield step1Failed( err );
    }

    ret = yield step2( ret );

    // step 3
    ret = yield Promise.all( [
        step3a( ret ),
        step3b( ret ),
        step3c( ret )
    ] );

    yield step4( ret );
}

On the surface, this snippet may seem more verbose than the promise chain equivalent in the earlier snippet. However, it offers a much more attractive – and more importantly, a more understandable and reason-able – synchronous-looking coding style (with = assignment of “return” values, etc.). That’s especially true in that try..catch error handling can be used across those hidden async boundaries.

Why are we using Promises with the generator? It’s certainly possible to do async generator coding without Promises.

Promises are a trustable system that uninverts the inversion of control of normal callbacks or thunks (see the Async & Performance title of this series). So, combining the trustability of Promises and the synchronicity of code in generators effectively addresses all the major deficiencies of callbacks. Also, utilities like Promise.all([ .. ]) are a nice, clean way to express concurrency at a generator’s single yield step.

So how does this magic work? We’re going to need a runner that can run our generator, receive a yielded promise, and wire it up to resume the generator with either the fulfillment success value, or throw an error into the generator with the rejection reason.

Many async-capable utilities/libraries have such a “runner”; for example, Q.spawn(..) and my asynquence’s runner(..) plug-in. But here’s a stand-alone runner to illustrate how the process works:

function run(gen) {
    var args = [].slice.call( arguments, 1 ), it;

    it = gen.apply( this, args );

    return Promise.resolve()
        .then( function handleNext(value){
            var next = it.next( value );

            return (function handleResult(next){
                if (next.done) {
                    return next.value;
                }
                else {
                    return Promise.resolve( next.value )
                        .then(
                            handleNext,
                            function handleErr(err) {
                                return Promise.resolve(
                                    it.throw( err )
                                )
                                .then( handleResult );
                            }
                        );
                }
            })( next );
        } );
}

Note: For a more prolifically commented version of this utility, see the Async & Performance title of this series. Also, the run utilities provided with various async libraries are often more powerful/capable than what we’ve shown here. For example, asynquence’s runner(..) can handle yielded promises, sequences, thunks, and immediate (non-promise) values, giving you ultimate flexibility.

So now running *main() as listed in the earlier snippet is as easy as:

run( main )
.then(
    function fulfilled(){
        // `*main()` completed successfully
    },
    function rejected(reason){
        // Oops, something went wrong
    }
);

Essentially, anywhere that you have more than two asynchronous steps of flow control logic in your program, you can and should use a promise-yielding generator driven by a run utility to express the flow control in a synchronous fashion. This will make for much easier to understand and maintain code.

This yield-a-promise-resume-the-generator pattern is going to be so common and so powerful, the next version of JavaScript after ES6 is almost certainly going to introduce a new function type that will do it automatically without needing the run utility. We’ll cover async functions (as they’re expected to be called) in Chapter 8.
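As a forward-looking preview of that pattern (async functions weren't part of ES6, so treat this as a sketch; the step1/step2 stubs are hypothetical):

```javascript
// hypothetical stubs standing in for real async steps
function step1() { return Promise.resolve( 1 ); }
function step2(v) { return Promise.resolve( v + 1 ); }

// `await` does what `yield` + the `run(..)` utility did
async function main() {
    var ret = await step1();
    ret = await step2( ret );
    return ret;
}

main().then( function fulfilled(v){
    console.log( v );           // 2
} );
```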

33.3 Review

As JavaScript continues to mature and grow in its widespread adoption, asynchronous programming is more and more of a central concern. Callbacks are not fully sufficient for these tasks, and they fall down completely as the need grows more sophisticated.

Thankfully, ES6 adds Promises to address one of the major shortcomings of callbacks: lack of trust in predictable behavior. Promises represent the future completion value from a potentially async task, normalizing behavior across sync and async boundaries.

But it’s the combination of Promises with generators that fully realizes the benefits of rearranging our async flow control code to de-emphasize and abstract away that ugly callback soup (aka “hell”).

Right now, we can manage these interactions with the aid of various async libraries’ runners, but JavaScript is eventually going to support this interaction pattern with dedicated syntax alone!

34 You Don’t Know JS: ES6 & Beyond

35 Chapter 5: Collections

Structured collection and access to data is a critical component of just about any JS program. From the beginning of the language up to this point, the array and the object have been our primary mechanism for creating data structures. Of course, many higher-level data structures have been built on top of these, as user-land libraries.

As of ES6, some of the most useful (and performance-optimizing!) data structure abstractions have been added as native components of the language.

We’ll start this chapter first by looking at TypedArrays, technically contemporary to ES5 efforts several years ago, but only standardized as companions to WebGL and not JavaScript itself. As of ES6, these have been adopted directly by the language specification, which gives them first-class status.

Maps are like objects (key/value pairs), but instead of just a string for the key, you can use any value – even another object or map! Sets are similar to arrays (lists of values), but the values are unique; if you add a duplicate, it’s ignored. There are also weak (in relation to memory/garbage collection) counterparts: WeakMap and WeakSet.

35.1 TypedArrays

As we cover in the Types & Grammar title of this series, JS does have a set of built-in types, like number and string. It’d be tempting to look at a feature named “typed array” and assume it means an array of a specific type of values, like an array of only strings.

However, typed arrays are really more about providing structured access to binary data using array-like semantics (indexed access, etc.). The “type” in the name refers to a “view” layered on top of the bucket of bits, which is essentially a mapping of whether the bits should be viewed as an array of 8-bit signed integers, 16-bit signed integers, and so on.

How do you construct such a bit-bucket? It’s called a “buffer,” and you construct it most directly with the ArrayBuffer(..) constructor:

var buf = new ArrayBuffer( 32 );
buf.byteLength;                         // 32

buf is now a binary buffer that is 32-bytes long (256-bits), that’s pre-initialized to all 0s. A buffer by itself doesn’t really allow you any interaction except for checking its byteLength property.

Tip: Several web platform features use or return array buffers, such as FileReader#readAsArrayBuffer(..), XMLHttpRequest#send(..), and ImageData (canvas data).

But on top of this array buffer, you can then layer a “view,” which comes in the form of a typed array. Consider:

var arr = new Uint16Array( buf );
arr.length;                         // 16

arr is a typed array of 16-bit unsigned integers mapped over the 256-bit buf buffer, meaning you get 16 elements.

35.1.1 Endianness

It’s very important to understand that the arr is mapped using the endian-setting (big-endian or little-endian) of the platform the JS is running on. This can be an issue if the binary data is created with one endianness but interpreted on a platform with the opposite endianness.

Endianness means whether the low-order byte (collection of 8 bits) of a multi-byte number – such as the 16-bit unsigned ints we created in the earlier snippet – comes on the right or the left of the number’s bytes.

For example, let’s imagine the base-10 number 3085, which takes 16-bits to represent. If you have just one 16-bit number container, it’d be represented in binary as 0000110000001101 (hexadecimal 0c0d) regardless of endianness.

But if 3085 was represented with two 8-bit numbers, the endianness would significantly affect its storage in memory:

  • 0000110000001101 / 0c0d (big endian)
  • 0000110100001100 / 0d0c (little endian)

If you received the bits of 3085 as 0000110100001100 from a little-endian system, but you layered a view on top of it in a big-endian system, you’d instead see value 3340 (base-10) and 0d0c (base-16).

Little endian is the most common representation on the web these days, but there are definitely browsers where that’s not true. It’s important that you understand the endianness of both the producer and consumer of a chunk of binary data.

From MDN, here’s a quick way to test the endianness of your JavaScript:

var littleEndian = (function() {
    var buffer = new ArrayBuffer( 2 );
    new DataView( buffer ).setInt16( 0, 256, true );
    return new Int16Array( buffer )[0] === 256;
})();

littleEndian will be true or false; for most browsers, it should return true. This test uses DataView(..), which allows more low-level, fine-grained control over accessing (setting/getting) the bits from the view you layer over the buffer. The third parameter of the setInt16(..) method in the previous snippet is for telling the DataView what endianness you’re wanting it to use for that operation.

Warning: Do not confuse endianness of underlying binary storage in array buffers with how a given number is represented when exposed in a JS program. For example, (3085).toString(2) returns "110000001101", which with an assumed leading four "0"s appears to be the big-endian representation. In fact, this representation is based on a single 16-bit view, not a view of two 8-bit bytes. The DataView test above is the best way to determine endianness for your JS environment.

35.1.2 Multiple Views

A single buffer can have multiple views attached to it, such as:

var buf = new ArrayBuffer( 2 );

var view8 = new Uint8Array( buf );
var view16 = new Uint16Array( buf );

view16[0] = 3085;
view8[0];                       // 13
view8[1];                       // 12

view8[0].toString( 16 );        // "d"
view8[1].toString( 16 );        // "c"

// swap (as if endian!)
var tmp = view8[0];
view8[0] = view8[1];
view8[1] = tmp;

view16[0];                      // 3340

The typed array constructors have multiple signature variations. We’ve shown so far only passing them an existing buffer. However, that form also takes two extra parameters: byteOffset and length. In other words, you can start the typed array view at a location other than 0 and you can make it span less than the full length of the buffer.

If the buffer of binary data includes data in non-uniform size/location, this technique can be quite useful.

For example, consider a binary buffer that has a 2-byte number (aka “word”) at the beginning, followed by two 1-byte numbers, followed by a 32-bit floating point number – 8 bytes in total. Here’s how you can access that data with multiple views on the same buffer, offsets, and lengths (note that the length argument counts elements of the view’s type, not bytes):

var first = new Uint16Array( buf, 0, 1 )[0],
    second = new Uint8Array( buf, 2, 1 )[0],
    third = new Uint8Array( buf, 3, 1 )[0],
    fourth = new Float32Array( buf, 4, 1 )[0];
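Filling in concrete values for that layout makes the technique runnable end-to-end (a sketch; remember that the third view-constructor argument counts elements of the view's type, not bytes):

```javascript
var buf = new ArrayBuffer( 8 );

// write: a 2-byte word, two 1-byte values, then a 32-bit float
new Uint16Array( buf, 0, 1 )[0] = 3085;
new Uint8Array( buf, 2, 1 )[0] = 10;
new Uint8Array( buf, 3, 1 )[0] = 20;
new Float32Array( buf, 4, 1 )[0] = 1.5;

// read the same fields back through fresh views
var first = new Uint16Array( buf, 0, 1 )[0],    // 3085
    second = new Uint8Array( buf, 2, 1 )[0],    // 10
    third = new Uint8Array( buf, 3, 1 )[0],     // 20
    fourth = new Float32Array( buf, 4, 1 )[0];  // 1.5
```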

35.1.3 TypedArray Constructors

In addition to the (buffer,[offset, [length]]) form examined in the previous section, typed array constructors also support these forms:

  • constructor(length): Creates a new view over a new buffer of length bytes
  • constructor(typedArr): Creates a new view and buffer, and copies the contents from the typedArr view
  • constructor(obj): Creates a new view and buffer, and iterates over the array-like or object obj to copy its contents

The following typed array constructors are available as of ES6:

  • Int8Array (8-bit signed integers), Uint8Array (8-bit unsigned integers)
    • Uint8ClampedArray (8-bit unsigned integers, each value clamped on setting to the 0-255 range)
  • Int16Array (16-bit signed integers), Uint16Array (16-bit unsigned integers)
  • Int32Array (32-bit signed integers), Uint32Array (32-bit unsigned integers)
  • Float32Array (32-bit floating point, IEEE-754)
  • Float64Array (64-bit floating point, IEEE-754)

Instances of typed array constructors are almost the same as regular native arrays. Some differences include having a fixed length and the values all being of the same “type.”

However, they share most of the same prototype methods. As such, you likely will be able to use them as regular arrays without needing to convert.

For example:

var a = new Int32Array( 3 );
a[0] = 10;
a[1] = 20;
a[2] = 30;

a.map( function(v){
    console.log( v );
} );
// 10 20 30

a.join( "-" );
// "10-20-30"

Warning: You can’t use certain Array.prototype methods with TypedArrays that don’t make sense, such as the mutators (splice(..), push(..), etc.) and concat(..).

Be aware that the elements in TypedArrays really are constrained to the declared bit sizes. If you have a Uint8Array and try to assign something larger than an 8-bit value into one of its elements, the value wraps around so as to stay within the bit length.

This could cause problems if you were trying to, for instance, square all the values in a TypedArray. Consider:

var a = new Uint8Array( 3 );
a[0] = 10;
a[1] = 20;
a[2] = 30;

var b = a.map( function(v){
    return v * v;
} );

b;              // [100, 144, 132]

The 20 and 30 values, when squared, resulted in bit overflow. To get around such a limitation, you can use the static from(..) function available on each typed array constructor:

var a = new Uint8Array( 3 );
a[0] = 10;
a[1] = 20;
a[2] = 30;

var b = Uint16Array.from( a, function(v){
    return v * v;
} );

b;              // [100, 400, 900]

See the “Array.from(..) Static Function” section in Chapter 6 for more information about the Array.from(..) that is shared with TypedArrays. Specifically, the “Mapping” section explains the mapping function accepted as its second argument.

One interesting behavior to consider is that TypedArrays have a sort(..) method much like regular arrays, but this one defaults to numeric sort comparisons instead of coercing values to strings for lexicographic comparison. For example:

var a = [ 10, 1, 2, ];
a.sort();                               // [1,10,2]

var b = new Uint8Array( [ 10, 1, 2 ] );
b.sort();                               // [1,2,10]

The TypedArray#sort(..) takes an optional compare function argument just like Array#sort(..), which works in exactly the same way.
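For example, a descending numeric sort via a compare function:

```javascript
var b = new Uint8Array( [ 10, 1, 2 ] );

b.sort( function(x,y){
    return y - x;               // descending order
} );

Array.from( b );                // [10,2,1]
```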

35.2 Maps

If you have a lot of JS experience, you know that objects are the primary mechanism for creating unordered key/value-pair data structures, otherwise known as maps. However, the major drawback with objects-as-maps is the inability to use a non-string value as the key.

For example, consider:

var m = {};

var x = { id: 1 },
    y = { id: 2 };

m[x] = "foo";
m[y] = "bar";

m[x];                           // "bar"
m[y];                           // "bar"

What’s going on here? The two objects x and y both stringify to "[object Object]", so only that one key is being set in m.

Some have implemented fake maps by maintaining a parallel array of non-string keys alongside an array of the values, such as:

var keys = [], vals = [];

var x = { id: 1 },
    y = { id: 2 };

keys.push( x );
vals.push( "foo" );

keys.push( y );
vals.push( "bar" );

keys[0] === x;                  // true
vals[0];                        // "foo"

keys[1] === y;                  // true
vals[1];                        // "bar"

Of course, you wouldn’t want to manage those parallel arrays yourself, so you could define a data structure with methods that automatically do the management under the covers. Besides having to do that work yourself, the main drawback is that access is no longer O(1) time-complexity, but instead is O(n).
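Such a wrapper might be sketched like this (illustrative only; the `FakeMap` name is hypothetical, and note the O(n) indexOf(..) scan on every access):

```javascript
function FakeMap() {
    this._keys = [];
    this._vals = [];
}

FakeMap.prototype.set = function(key,val) {
    var idx = this._keys.indexOf( key );    // O(n) scan
    if (idx === -1) {
        this._keys.push( key );
        this._vals.push( val );
    }
    else {
        this._vals[idx] = val;
    }
};

FakeMap.prototype.get = function(key) {
    var idx = this._keys.indexOf( key );    // O(n) scan
    return (idx === -1) ? undefined : this._vals[idx];
};

var fm = new FakeMap(),
    x = { id: 1 };

fm.set( x, "foo" );
fm.get( x );                    // "foo"
```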

But as of ES6, there’s no longer any need to do this! Just use Map(..):

var m = new Map();

var x = { id: 1 },
    y = { id: 2 };

m.set( x, "foo" );
m.set( y, "bar" );

m.get( x );                     // "foo"
m.get( y );                     // "bar"

The only drawback is that you can’t use the [ ] bracket access syntax for setting and retrieving values. But get(..) and set(..) work perfectly well instead.

To delete an element from a map, don’t use the delete operator, but instead use the delete(..) method:

m.set( x, "foo" );
m.set( y, "bar" );

m.delete( y );

You can clear the entire map’s contents with clear(). To get the length of a map (i.e., the number of keys), use the size property (not length):

m.set( x, "foo" );
m.set( y, "bar" );
m.size;                         // 2

m.clear();
m.size;                         // 0

The Map(..) constructor can also receive an iterable (see “Iterators” in Chapter 3), which must produce a list of arrays, where the first item in each array is the key and the second item is the value. This format for iteration is identical to that produced by the entries() method, explained in the next section. That makes it easy to make a copy of a map:

var m2 = new Map( m.entries() );

// same as:
var m2 = new Map( m );

Because a map instance is an iterable, and its default iterator is the same as entries(), the second, shorter form is preferable.

Of course, you can just manually specify an entries list (array of key/value arrays) in the Map(..) constructor form:

var x = { id: 1 },
    y = { id: 2 };

var m = new Map( [
    [ x, "foo" ],
    [ y, "bar" ]
] );

m.get( x );                     // "foo"
m.get( y );                     // "bar"

35.2.1 Map Values

To get the list of values from a map, use values(..), which returns an iterator. In Chapters 2 and 3, we covered various ways to process an iterator sequentially (like an array), such as the ... spread operator and the for..of loop. Also, “Arrays” in Chapter 6 covers the Array.from(..) method in detail. Consider:

var m = new Map();

var x = { id: 1 },
    y = { id: 2 };

m.set( x, "foo" );
m.set( y, "bar" );

var vals = [ ...m.values() ];

vals;                           // ["foo","bar"]
Array.from( m.values() );       // ["foo","bar"]

As discussed in the previous section, you can iterate over a map’s entries using entries() (or the default map iterator). Consider:

var m = new Map();

var x = { id: 1 },
    y = { id: 2 };

m.set( x, "foo" );
m.set( y, "bar" );

var vals = [ ...m.entries() ];

vals[0][0] === x;               // true
vals[0][1];                     // "foo"

vals[1][0] === y;               // true
vals[1][1];                     // "bar"

35.2.2 Map Keys

To get the list of keys, use keys(), which returns an iterator over the keys in the map:

var m = new Map();

var x = { id: 1 },
    y = { id: 2 };

m.set( x, "foo" );
m.set( y, "bar" );

var keys = [ ...m.keys() ];

keys[0] === x;                  // true
keys[1] === y;                  // true

To determine if a map has a given key, use has(..):

var m = new Map();

var x = { id: 1 },
    y = { id: 2 };

m.set( x, "foo" );

m.has( x );                     // true
m.has( y );                     // false

Maps essentially let you associate some extra piece of information (the value) with an object (the key) without actually putting that information on the object itself.

While you can use any kind of value as a key for a map, you typically will use objects, as strings and other primitives are already eligible as keys of normal objects. In other words, you’ll probably want to continue to use normal objects for maps unless some or all of the keys need to be objects, in which case a map is more appropriate.

Warning: If you use an object as a map key and that object is later discarded (all references unset) in attempt to have garbage collection (GC) reclaim its memory, the map itself will still retain its entry. You will need to remove the entry from the map for it to be GC-eligible. In the next section, we’ll see WeakMaps as a better option for object keys and GC.

35.3 WeakMaps

WeakMaps are a variation on maps that have most of the same external behavior but differ underneath in how the memory allocation (specifically its GC) works.

WeakMaps take (only) objects as keys. Those objects are held weakly, which means if the object itself is GC’d, the entry in the WeakMap is also removed. This isn’t observable behavior, though, as the only way an object can be GC’d is if there are no more references to it – once there are no more references to it, you have no object reference left to check whether it exists in the WeakMap.

Otherwise, the API for WeakMap is similar, though more limited:

var m = new WeakMap();

var x = { id: 1 },
    y = { id: 2 };

m.set( x, "foo" );

m.has( x );                     // true
m.has( y );                     // false

WeakMaps do not have a size property or clear() method, nor do they expose any iterators over their keys, values, or entries. So even if you unset the x reference, which will remove its entry from m upon GC, there is no way to tell. You’ll just have to take JavaScript’s word for it!

Just like Maps, WeakMaps let you soft-associate information with an object. But they are particularly useful if the object is not one you completely control, such as a DOM element. If the object you’re using as a map key can be deleted and should be GC-eligible when it is, then a WeakMap is a more appropriate option.
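
For example, here is a hypothetical sketch of that pattern; clickCounts and recordClick(..) are made-up names, and plain objects stand in for DOM elements:

```javascript
// tracking click counts for elements without putting that
// data on the elements themselves; plain objects stand in
// for DOM elements here
var clickCounts = new WeakMap();

var btn1 = { tag: "button" },       // imagine: a DOM element
    btn2 = { tag: "button" };

function recordClick(el) {
    clickCounts.set( el, (clickCounts.get( el ) || 0) + 1 );
}

recordClick( btn1 );
recordClick( btn1 );
recordClick( btn2 );

clickCounts.get( btn1 );            // 2
clickCounts.get( btn2 );            // 1

// if `btn1` is later discarded, its entry becomes
// GC-eligible automatically -- no manual cleanup needed
```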

It’s important to note that a WeakMap only holds its keys weakly, not its values. Consider:

var m = new WeakMap();

var x = { id: 1 },
    y = { id: 2 },
    z = { id: 3 },
    w = { id: 4 };

m.set( x, y );

x = null;                       // { id: 1 } is GC-eligible
y = null;                       // { id: 2 } is GC-eligible
                                // only because { id: 1 } is

m.set( z, w );

w = null;                       // { id: 4 } is not GC-eligible

For this reason, WeakMaps are in my opinion better named “WeakKeyMaps.”

35.4 Sets

A set is a collection of unique values (duplicates are ignored).

The API for a set is similar to map. The add(..) method takes the place of the set(..) method (somewhat ironically), and there is no get(..) method.

Consider:

var s = new Set();

var x = { id: 1 },
    y = { id: 2 };

s.add( x );
s.add( y );
s.add( x );

s.size;                         // 2

s.delete( y );
s.size;                         // 1

s.clear();
s.size;                         // 0

The Set(..) constructor form is similar to Map(..), in that it can receive an iterable, like another set or simply an array of values. However, unlike Map(..), which expects an entries list (an array of key/value arrays), Set(..) expects a values list (an array of values):

var x = { id: 1 },
    y = { id: 2 };

var s = new Set( [x,y] );

A set doesn’t need a get(..) because you don’t retrieve a value from a set, but rather test if it is present or not, using has(..):

var s = new Set();

var x = { id: 1 },
    y = { id: 2 };

s.add( x );

s.has( x );                     // true
s.has( y );                     // false

Note: The comparison algorithm in has(..) is almost identical to Object.is(..) (see Chapter 6), except that -0 and 0 are treated as the same rather than distinct.

35.4.1 Set Iterators

Sets have the same iterator methods as maps. Their behavior is different for sets, but symmetric with the behavior of map iterators. Consider:

var s = new Set();

var x = { id: 1 },
    y = { id: 2 };

s.add( x ).add( y );

var keys = [ ...s.keys() ],
    vals = [ ...s.values() ],
    entries = [ ...s.entries() ];

keys[0] === x;
keys[1] === y;

vals[0] === x;
vals[1] === y;

entries[0][0] === x;
entries[0][1] === x;
entries[1][0] === y;
entries[1][1] === y;

The keys() and values() iterators both yield a list of the unique values in the set. The entries() iterator yields a list of entry arrays, where both items of the array are the unique set value. The default iterator for a set is its values() iterator.

The inherent uniqueness of a set is its most useful trait. For example:

var s = new Set( [1,2,3,4,"1",2,4,"5"] ),
    uniques = [ ...s ];

uniques;                        // [1,2,3,4,"1","5"]

Set uniqueness does not allow coercion, so 1 and "1" are considered distinct values.

35.5 WeakSets

Whereas a WeakMap holds its keys weakly (but its values strongly), a WeakSet holds its values weakly (there aren’t really keys).

var s = new WeakSet();

var x = { id: 1 },
    y = { id: 2 };

s.add( x );
s.add( y );

x = null;                       // `x` is GC-eligible
y = null;                       // `y` is GC-eligible

Warning: WeakSet values must be objects, not primitive values as is allowed with sets.
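
A quick sketch of that restriction in action:

```javascript
var ws = new WeakSet();

var ok = { id: 1 };
ws.add( ok );                       // fine: object value

var failed = false;
try {
    ws.add( 42 );                   // primitive value
}
catch (err) {
    failed = true;                  // TypeError thrown
}

failed;                             // true
ws.has( ok );                       // true
```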

35.6 Review

ES6 defines a number of useful collections that make working with data in structured ways more efficient and effective.

TypedArrays provide “view”s of binary data buffers that align with various numeric types, like 8-bit unsigned integers and 32-bit floats. The array access to binary data makes operations much easier to express and maintain, which enables you to more easily work with complex data like video, audio, canvas data, and so on.

Maps are key-value pairs where the key can be an object instead of just a string/primitive. Sets are unique lists of values (of any type).

WeakMaps are maps where the key (object) is weakly held, so that GC is free to collect the entry if it’s the last reference to an object. WeakSets are sets where the value is weakly held, again so that GC can remove the entry if it’s the last reference to that object.

36 You Don’t Know JS: ES6 & Beyond

37 Chapter 6: API Additions

From conversions of values to mathematic calculations, ES6 adds many static properties and methods to various built-in natives and objects to help with common tasks. In addition, instances of some of the natives have new capabilities via various new prototype methods.

Note: Most of these features can be faithfully polyfilled. We will not dive into such details here, but check out “ES6 Shim” (https://github.com/paulmillr/es6-shim/) for standards-compliant shims/polyfills.

37.1 Array

One of the most commonly extended features in JS by various user libraries is the Array type. It should be no surprise that ES6 adds a number of helpers to Array, both static and prototype (instance).

37.1.1 Array.of(..) Static Function

There’s a well known gotcha with the Array(..) constructor, which is that if there’s only one argument passed, and that argument is a number, instead of making an array of one element with that number value in it, it constructs an empty array with a length property equal to the number. This action produces the unfortunate and quirky “empty slots” behavior that’s reviled about JS arrays.

Array.of(..) replaces Array(..) as the preferred function-form constructor for arrays, because Array.of(..) does not have that special single-number-argument case. Consider:

var a = Array( 3 );
a.length;                       // 3
a[0];                           // undefined

var b = Array.of( 3 );
b.length;                       // 1
b[0];                           // 3

var c = Array.of( 1, 2, 3 );
c.length;                       // 3
c;                              // [1,2,3]

Under what circumstances would you want to use Array.of(..) instead of just creating an array with literal syntax, like c = [1,2,3]? There are two possible cases.

If you have a callback that’s supposed to wrap argument(s) passed to it in an array, Array.of(..) fits the bill perfectly. That’s probably not terribly common, but it may scratch an itch for you.

The other scenario is if you subclass Array (see “Classes” in Chapter 3) and want to be able to create and initialize elements in an instance of your subclass, such as:

class MyCoolArray extends Array {
    sum() {
        return this.reduce( function reducer(acc,curr){
            return acc + curr;
        }, 0 );
    }
}

var x = new MyCoolArray( 3 );
x.length;                       // 3 -- oops!
x.sum();                        // 0 -- oops!

var y = [3];                    // Array, not MyCoolArray
y.length;                       // 1
y.sum();                        // `sum` is not a function

var z = MyCoolArray.of( 3 );
z.length;                       // 1
z.sum();                        // 3

You can’t just (easily) create a constructor for MyCoolArray that overrides the behavior of the Array parent constructor, because that constructor is necessary to actually create a well-behaving array value (initializing the this). The “inherited” static of(..) method on the MyCoolArray subclass provides a nice solution.

37.1.2 Array.from(..) Static Function

An “array-like object” in JavaScript is an object that has a length property on it, specifically with an integer value of zero or higher.

These values have been notoriously frustrating to work with in JS; it’s been quite common to need to transform them into an actual array, so that the various Array.prototype methods (map(..), indexOf(..) etc.) are available to use with it. That process usually looks like:

// array-like object
var arrLike = {
    length: 3,
    0: "foo",
    1: "bar"
};

var arr = Array.prototype.slice.call( arrLike );

Another common task where slice(..) is often used is in duplicating a real array:

var arr2 = arr.slice();

In both cases, the new ES6 Array.from(..) method can be a more understandable, graceful, and less verbose approach:

var arr = Array.from( arrLike );

var arrCopy = Array.from( arr );

Array.from(..) looks to see if the first argument is an iterable (see “Iterators” in Chapter 3), and if so, it uses the iterator to produce values to “copy” into the returned array. Because real arrays have an iterator for those values, that iterator is automatically used.

But if you pass an array-like object as the first argument to Array.from(..), it behaves basically the same as slice() (no arguments!) or apply(..) does, which is that it simply loops over the value, accessing numerically named properties from 0 up to whatever the value of length is.

Consider:

var arrLike = {
    length: 4,
    2: "foo"
};

Array.from( arrLike );
// [ undefined, undefined, "foo", undefined ]

Because positions 0, 1, and 3 didn’t exist on arrLike, the result was the undefined value for each of those slots.

You could produce a similar outcome like this:

var emptySlotsArr = [];
emptySlotsArr.length = 4;
emptySlotsArr[2] = "foo";

Array.from( emptySlotsArr );
// [ undefined, undefined, "foo", undefined ]

37.1.2.1 Avoiding Empty Slots

There’s a subtle but important difference in the previous snippet between the emptySlotsArr and the result of the Array.from(..) call. Array.from(..) never produces empty slots.

Prior to ES6, if you wanted to produce an array initialized to a certain length with actual undefined values in each slot (no empty slots!), you had to do extra work:

var a = Array( 4 );                             // four empty slots!

var b = Array.apply( null, { length: 4 } );     // four `undefined` values

But Array.from(..) now makes this easier:

var c = Array.from( { length: 4 } );            // four `undefined` values

Warning: Using an empty slot array like a in the previous snippets would work with some array functions, but others ignore empty slots (like map(..), etc.). You should never intentionally work with empty slots, as it will almost certainly lead to strange/unpredictable behavior in your programs.

37.1.2.2 Mapping

The Array.from(..) utility has another helpful trick up its sleeve. The second argument, if provided, is a mapping callback (almost the same as the regular Array#map(..) expects) which is called to map/transform each value from the source to the returned target. Consider:

var arrLike = {
    length: 4,
    2: "foo"
};

Array.from( arrLike, function mapper(val,idx){
    if (typeof val == "string") {
        return val.toUpperCase();
    }
    else {
        return idx;
    }
} );
// [ 0, 1, "FOO", 3 ]

Note: As with other array methods that take callbacks, Array.from(..) takes an optional third argument that if set will specify the this binding for the callback passed as the second argument. Otherwise, this will be undefined.
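
For illustration, here is that third argument at work (the prefixer object is just a made-up example):

```javascript
// `this` inside the mapper is bound to `prefixer`,
// passed as Array.from(..)'s third argument
var prefixer = { prefix: ">" };

var out = Array.from( [ "a", "b" ], function mapper(v){
    return this.prefix + v;
}, prefixer );

out;                                // [">a",">b"]
```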

See “TypedArrays” in Chapter 5 for an example of using Array.from(..) in translating values from an array of 8-bit values to an array of 16-bit values.

37.1.3 Creating Arrays and Subtypes

In the last couple of sections, we’ve discussed Array.of(..) and Array.from(..), both of which create a new array in a similar way to a constructor. But what do they do in subclasses? Do they create instances of the base Array or the derived subclass?

class MyCoolArray extends Array {
    ..
}

MyCoolArray.from( [1, 2] ) instanceof MyCoolArray;  // true

Array.from(
    MyCoolArray.from( [1, 2] )
) instanceof MyCoolArray;                           // false

Both of(..) and from(..) use the constructor that they’re accessed from to construct the array. So if you use the base Array.of(..) you’ll get an Array instance, but if you use MyCoolArray.of(..), you’ll get a MyCoolArray instance.

In “Classes” in Chapter 3, we covered the @@species setting which all the built-in classes (like Array) have defined, which is used by any prototype methods if they create a new instance. slice(..) is a great example:

var x = new MyCoolArray( 1, 2, 3 );

x.slice( 1 ) instanceof MyCoolArray;                // true

Generally, that default behavior will probably be desired, but as we discussed in Chapter 3, you can override if you want:

class MyCoolArray extends Array {
    // force `species` to be parent constructor
    static get [Symbol.species]() { return Array; }
}

var x = new MyCoolArray( 1, 2, 3 );

x.slice( 1 ) instanceof MyCoolArray;                // false
x.slice( 1 ) instanceof Array;                      // true

It’s important to note that the @@species setting is only used for the prototype methods, like slice(..). It’s not used by of(..) and from(..); they both just use the this binding (whatever constructor is used to make the reference). Consider:

class MyCoolArray extends Array {
    // force `species` to be parent constructor
    static get [Symbol.species]() { return Array; }
}

var x = new MyCoolArray( 1, 2, 3 );

MyCoolArray.from( x ) instanceof MyCoolArray;       // true
MyCoolArray.of( [2, 3] ) instanceof MyCoolArray;    // true

37.1.4 copyWithin(..) Prototype Method

Array#copyWithin(..) is a new mutator method available to all arrays (including Typed Arrays; see Chapter 5). copyWithin(..) copies a portion of an array to another location in the same array, overwriting whatever was there before.

The arguments are target (the index to copy to), start (the inclusive index to start the copying from), and optionally end (the exclusive index to stop copying). If any of the arguments are negative, they’re taken to be relative from the end of the array.

Consider:

[1,2,3,4,5].copyWithin( 3, 0 );         // [1,2,3,1,2]

[1,2,3,4,5].copyWithin( 3, 0, 1 );      // [1,2,3,1,5]

[1,2,3,4,5].copyWithin( 0, -2 );        // [4,5,3,4,5]

[1,2,3,4,5].copyWithin( 0, -2, -1 );    // [4,2,3,4,5]

The copyWithin(..) method does not extend the array’s length, as the first example in the previous snippet shows. Copying simply stops when the end of the array is reached.

Contrary to what you might think, the copying doesn’t always go in left-to-right (ascending index) order. If the source and target ranges overlap, a strict left-to-right copy could repeatedly re-copy an already copied value, which is presumably not the desired behavior.

So internally, the algorithm copies in reverse order in such cases to avoid that gotcha. Consider:

[1,2,3,4,5].copyWithin( 2, 1 );     // ???

If the algorithm was strictly moving left to right, then the 2 should be copied to overwrite the 3, then that copied 2 should be copied to overwrite 4, then that copied 2 should be copied to overwrite 5, and you’d end up with [1,2,2,2,2].

Instead, the copying algorithm reverses direction and copies 4 to overwrite 5, then copies 3 to overwrite 4, then copies 2 to overwrite 3, and the final result is [1,2,2,3,4]. That’s probably more “correct” in terms of expectation, but it can be confusing if you’re only thinking about the copying algorithm in a naive left-to-right fashion.
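
Running that earlier snippet confirms the reverse-order result:

```javascript
// the overlapping ranges are copied in reverse order,
// so no already-copied value gets re-copied
var result = [1,2,3,4,5].copyWithin( 2, 1 );

result;                             // [1,2,2,3,4]
```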

37.1.5 fill(..) Prototype Method

Filling an existing array entirely (or partially) with a specified value is natively supported as of ES6 with the Array#fill(..) method:

var a = Array( 4 ).fill( undefined );
a;
// [undefined,undefined,undefined,undefined]

fill(..) optionally takes start and end parameters, which indicate a subset portion of the array to fill, such as:

var a = [ null, null, null, null ].fill( 42, 1, 3 );

a;                                  // [null,42,42,null]

37.1.6 find(..) Prototype Method

The most common way to search for a value in an array has generally been the indexOf(..) method, which returns the index the value is found at or -1 if not found:

var a = [1,2,3,4,5];

(a.indexOf( 3 ) != -1);             // true
(a.indexOf( 7 ) != -1);             // false

(a.indexOf( "2" ) != -1);           // false

The indexOf(..) comparison requires a strict === match, so a search for "2" fails to find a value of 2, and vice versa. There’s no way to override the matching algorithm for indexOf(..). It’s also unfortunate/ungraceful to have to make the manual comparison to the -1 value.

Tip: See the Types & Grammar title of this series for an interesting (and controversially confusing) technique to work around the -1 ugliness with the ~ operator.

Since ES5, the most common workaround to have control over the matching logic has been the some(..) method. It works by calling a function callback for each element, until one of those calls returns a true/truthy value, and then it stops. Because you get to define the callback function, you have full control over how a match is made:

var a = [1,2,3,4,5];

a.some( function matcher(v){
    return v == "2";
} );                                // true

a.some( function matcher(v){
    return v == 7;
} );                                // false

But the downside to this approach is that you only get the true/false indicating if a suitably matched value was found, but not what the actual matched value was.

ES6’s find(..) addresses this. It works basically the same as some(..), except that once the callback returns a true/truthy value, the actual array value is returned:

var a = [1,2,3,4,5];

a.find( function matcher(v){
    return v == "2";
} );                                // 2

a.find( function matcher(v){
    return v == 7;
} );                                // undefined

Using a custom matcher(..) function also lets you match against complex values like objects:

var points = [
    { x: 10, y: 20 },
    { x: 20, y: 30 },
    { x: 30, y: 40 },
    { x: 40, y: 50 },
    { x: 50, y: 60 }
];

points.find( function matcher(point) {
    return (
        point.x % 3 == 0 &&
        point.y % 4 == 0
    );
} );                                // { x: 30, y: 40 }

Note: As with other array methods that take callbacks, find(..) takes an optional second argument that if set will specify the this binding for the callback passed as the first argument. Otherwise, this will be undefined.

37.1.7 findIndex(..) Prototype Method

While the previous section illustrates how some(..) yields a boolean result for a search of an array, and find(..) yields the matched value itself from the array search, there’s also a need for finding the positional index of the matched value.

indexOf(..) does that, but there’s no control over its matching logic; it always uses === strict equality. So ES6’s findIndex(..) is the answer:

var points = [
    { x: 10, y: 20 },
    { x: 20, y: 30 },
    { x: 30, y: 40 },
    { x: 40, y: 50 },
    { x: 50, y: 60 }
];

points.findIndex( function matcher(point) {
    return (
        point.x % 3 == 0 &&
        point.y % 4 == 0
    );
} );                                // 2

points.findIndex( function matcher(point) {
    return (
        point.x % 6 == 0 &&
        point.y % 7 == 0
    );
} );                                // -1

Don’t use findIndex(..) != -1 (the way it’s always been done with indexOf(..)) to get a boolean from the search, because some(..) already yields the true/false you want. And don’t do a[ a.findIndex(..) ] to get the matched value, because that’s what find(..) accomplishes. And finally, use indexOf(..) if you need the index of a strict match, or findIndex(..) if you need the index of a more customized match.
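
To summarize that guidance, each tool answers a different question about the same search:

```javascript
// one search, four questions -- pick the method that
// answers the question you're actually asking
var a = [ 1, 2, 3, 4, 5 ];

function matcher(v) { return v == "2"; }

a.some( matcher );                  // true -- is there a match?
a.find( matcher );                  // 2 -- what is the matched value?
a.findIndex( matcher );             // 1 -- where is the custom match?
a.indexOf( 2 );                     // 1 -- where is the strict match?
```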

Note: As with other array methods that take callbacks, findIndex(..) takes an optional second argument that if set will specify the this binding for the callback passed as the first argument. Otherwise, this will be undefined.

37.1.8 entries(), values(), keys() Prototype Methods

In Chapter 3, we illustrated how data structures can provide a patterned item-by-item enumeration of their values, via an iterator. We then expounded on this approach in Chapter 5, as we explored how the new ES6 collections (Map, Set, etc.) provide several methods for producing different kinds of iterations.

Because it’s not new to ES6, Array might not be thought of traditionally as a “collection,” but it is one in the sense that it provides these same iterator methods: entries(), values(), and keys(). Consider:

var a = [1,2,3];

[...a.values()];                    // [1,2,3]
[...a.keys()];                      // [0,1,2]
[...a.entries()];                   // [ [0,1], [1,2], [2,3] ]

[...a[Symbol.iterator]()];          // [1,2,3]

Just like with Set, the default Array iterator is the same as what values() returns.

In “Avoiding Empty Slots” earlier in this chapter, we illustrated how Array.from(..) treats empty slots in an array as just being present slots with undefined in them. That’s actually because under the covers, the array iterators behave that way:

var a = [];
a.length = 3;
a[1] = 2;

[...a.values()];        // [undefined,2,undefined]
[...a.keys()];          // [0,1,2]
[...a.entries()];       // [ [0,undefined], [1,2], [2,undefined] ]

37.2 Object

A few additional static helpers have been added to Object. Traditionally, functions of this sort have been seen as focused on the behaviors/capabilities of object values.

However, starting with ES6, Object static functions also serve as the home for general-purpose global APIs of any sort that don’t already belong more naturally in some other location (e.g., Array.from(..) on Array).

37.2.1 Object.is(..) Static Function

The Object.is(..) static function makes value comparisons in an even more strict fashion than the === comparison.

Object.is(..) invokes the underlying SameValue algorithm (ES6 spec, section 7.2.9). The SameValue algorithm is basically the same as the === Strict Equality Comparison Algorithm (ES6 spec, section 7.2.13), with two important exceptions.

Consider:

var x = NaN, y = 0, z = -0;

x === x;                            // false
y === z;                            // true

Object.is( x, x );                  // true
Object.is( y, z );                  // false

You should continue to use === for strict equality comparisons; Object.is(..) shouldn’t be thought of as a replacement for the operator. However, in cases where you’re trying to strictly identify a NaN or -0 value, Object.is(..) is now the preferred option.

Note: ES6 also adds a Number.isNaN(..) utility (discussed later in this chapter) which may be a slightly more convenient test; you may prefer Number.isNaN(x) over Object.is(x,NaN). You can accurately test for -0 with a clumsy x == 0 && 1 / x === -Infinity, but in this case Object.is(x,-0) is much better.
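
A quick sketch of those two special-case tests side by side:

```javascript
var x = NaN, z = -0;

// identifying NaN
Number.isNaN( x );                  // true
Object.is( x, NaN );                // true

// identifying -0
z == 0 && 1 / z === -Infinity;      // true (the clumsy way)
Object.is( z, -0 );                 // true (much better)
Object.is( 0, -0 );                 // false
```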

37.2.2 Object.getOwnPropertySymbols(..) Static Function

The “Symbols” section in Chapter 2 discusses the new Symbol primitive value type in ES6.

Symbols are likely going to be mostly used as special (meta) properties on objects. So the Object.getOwnPropertySymbols(..) utility was introduced, which retrieves only the symbol properties directly on an object:

var o = {
    foo: 42,
    [ Symbol( "bar" ) ]: "hello world",
    baz: true
};

Object.getOwnPropertySymbols( o );  // [ Symbol(bar) ]

37.2.3 Object.setPrototypeOf(..) Static Function

Also in Chapter 2, we mentioned the Object.setPrototypeOf(..) utility, which (unsurprisingly) sets the [[Prototype]] of an object for the purposes of behavior delegation (see the this & Object Prototypes title of this series). Consider:

var o1 = {
    foo() { console.log( "foo" ); }
};
var o2 = {
    // .. o2's definition ..
};

Object.setPrototypeOf( o2, o1 );

// delegates to `o1.foo()`
o2.foo();                           // foo

Alternatively:

var o1 = {
    foo() { console.log( "foo" ); }
};

var o2 = Object.setPrototypeOf( {
    // .. o2's definition ..
}, o1 );

// delegates to `o1.foo()`
o2.foo();                           // foo

In both previous snippets, the relationship between o2 and o1 appears at the end of the o2 definition. More commonly, the relationship between an o2 and o1 is specified at the top of the o2 definition, as it is with classes, and also with __proto__ in object literals (see “Setting [[Prototype]]” in Chapter 2).

Warning: Setting a [[Prototype]] right after object creation is reasonable, as shown. But changing it much later is generally not a good idea and will usually lead to more confusion than clarity.

37.2.4 Object.assign(..) Static Function

Many JavaScript libraries/frameworks provide utilities for copying/mixing one object’s properties into another (e.g., jQuery’s extend(..)). There are various nuanced differences between these different utilities, such as whether a property with value undefined is ignored or not.

ES6 adds Object.assign(..), which is a simplified version of these algorithms. The first argument is the target, and any other arguments passed are the sources, which will be processed in listed order. For each source, its enumerable and own (i.e., not “inherited”) keys, including symbols, are copied as if by plain = assignment. Object.assign(..) returns the target object.

Consider this object setup:

var target = {},
    o1 = { a: 1 }, o2 = { b: 2 },
    o3 = { c: 3 }, o4 = { d: 4 };

// setup read-only property
Object.defineProperty( o3, "e", {
    value: 5,
    enumerable: true,
    writable: false,
    configurable: false
} );

// setup non-enumerable property
Object.defineProperty( o3, "f", {
    value: 6,
    enumerable: false
} );

o3[ Symbol( "g" ) ] = 7;

// setup non-enumerable symbol
Object.defineProperty( o3, Symbol( "h" ), {
    value: 8,
    enumerable: false
} );

Object.setPrototypeOf( o3, o4 );

Only the properties a, b, c, e, and Symbol("g") will be copied to target:

Object.assign( target, o1, o2, o3 );

target.a;                           // 1
target.b;                           // 2
target.c;                           // 3

Object.getOwnPropertyDescriptor( target, "e" );
// { value: 5, writable: true, enumerable: true,
//   configurable: true }

Object.getOwnPropertySymbols( target );
// [Symbol("g")]

The d, f, and Symbol("h") properties are omitted from copying; non-enumerable properties and non-owned properties are all excluded from the assignment. Also, e is copied as a normal property assignment, not duplicated as a read-only property.

In an earlier section, we showed using setPrototypeOf(..) to set up a [[Prototype]] relationship between an o2 and o1 object. There’s another form that leverages Object.assign(..):

var o1 = {
    foo() { console.log( "foo" ); }
};

var o2 = Object.assign(
    Object.create( o1 ),
    {
        // .. o2's definition ..
    }
);

// delegates to `o1.foo()`
o2.foo();                           // foo

Note: Object.create(..) is the ES5 standard utility that creates an empty object that is [[Prototype]]-linked. See the this & Object Prototypes title of this series for more information.

37.3 Math

ES6 adds several new mathematic utilities that fill in holes or aid with common operations. All of these can be manually calculated, but most of them are now defined natively so that in some cases the JS engine can either more optimally perform the calculations, or perform them with better decimal precision than their manual counterparts.

asm.js/transpiled JS code (see the Async & Performance title of this series) is the more likely consumer of many of these utilities, rather than developers calling them directly.

Trigonometry:

  • cosh(..) - Hyperbolic cosine
  • acosh(..) - Hyperbolic arccosine
  • sinh(..) - Hyperbolic sine
  • asinh(..) - Hyperbolic arcsine
  • tanh(..) - Hyperbolic tangent
  • atanh(..) - Hyperbolic arctangent
  • hypot(..) - The square root of the sum of the squares (i.e., the generalized Pythagorean theorem)

Arithmetic:

  • cbrt(..) - Cube root
  • clz32(..) - Count leading zeros in 32-bit binary representation
  • expm1(..) - The same as exp(x) - 1
  • log2(..) - Binary logarithm (log base 2)
  • log10(..) - Log base 10
  • log1p(..) - The same as log(x + 1)
  • imul(..) - 32-bit integer multiplication of two numbers

Meta:

  • sign(..) - Returns the sign of the number
  • trunc(..) - Returns only the integer part of a number
  • fround(..) - Rounds to nearest 32-bit (single precision) floating-point value
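A few of these in action:

```javascript
Math.hypot( 3, 4 );         // 5 -- square root of (3^2 + 4^2)
Math.cbrt( 27 );            // 3
Math.clz32( 1 );            // 31 -- leading zeros in 32-bit 00..001
Math.trunc( -4.7 );         // -4 -- truncates toward zero
Math.floor( -4.7 );         // -5 -- for comparison
Math.sign( -42 );           // -1
```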

37.4 Number

Importantly, for your program to properly work, it must accurately handle numbers. ES6 adds some additional properties and functions to assist with common numeric operations.

Two additions to Number are just references to the preexisting globals: Number.parseInt(..) and Number.parseFloat(..).

37.4.1 Static Properties

ES6 adds some helpful numeric constants as static properties:

  • Number.EPSILON - The difference between 1 and the smallest value greater than 1 that can be represented: 2^-52 (see Chapter 2 of the Types & Grammar title of this series regarding using this value as a tolerance for imprecision in floating-point arithmetic)
  • Number.MAX_SAFE_INTEGER - The highest integer that can “safely” be represented unambiguously in a JS number value: 2^53 - 1
  • Number.MIN_SAFE_INTEGER - The lowest integer that can “safely” be represented unambiguously in a JS number value: -(2^53 - 1) or (-2)^53 + 1.

Note: See Chapter 2 of the Types & Grammar title of this series for more information about “safe” integers.
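To see why the “safe” range matters, observe that above 2^53 - 1, distinct mathematical integers start collapsing into the same IEEE-754 double value:

```javascript
var max = Number.MAX_SAFE_INTEGER;  // 9007199254740991

max + 1;                    // 9007199254740992
max + 2;                    // 9007199254740992 -- oops, same value!

(max + 1) === (max + 2);    // true -- ambiguity beyond the safe range
```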

37.4.2 Number.isNaN(..) Static Function

The standard global isNaN(..) utility has been broken since its inception, in that it returns true for things that are not numbers, not just for the actual NaN value, because it coerces the argument to a number type (which can falsely result in a NaN). ES6 adds a fixed utility Number.isNaN(..) that works as it should:

var a = NaN, b = "NaN", c = 42;

isNaN( a );                         // true
isNaN( b );                         // true -- oops!
isNaN( c );                         // false

Number.isNaN( a );                  // true
Number.isNaN( b );                  // false -- fixed!
Number.isNaN( c );                  // false
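This fix is trivially polyfillable, because NaN is the only JS value that is not equal to itself:

```javascript
if (!Number.isNaN) {
    Number.isNaN = function(x) {
        // only NaN fails a self-equality check, and no coercion occurs
        return x !== x;
    };
}

Number.isNaN( NaN );        // true
Number.isNaN( "NaN" );      // false
```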

37.4.3 Number.isFinite(..) Static Function

There’s a temptation to look at a function name like isFinite(..) and assume it’s simply “not infinite”. That’s not quite correct, though. There’s more nuance to this new ES6 utility. Consider:

var a = NaN, b = Infinity, c = 42;

Number.isFinite( a );               // false
Number.isFinite( b );               // false

Number.isFinite( c );               // true

The standard global isFinite(..) coerces its argument, but Number.isFinite(..) omits the coercive behavior:

var a = "42";

isFinite( a );                      // true
Number.isFinite( a );               // false

You may still prefer the coercion, in which case using the global isFinite(..) is a valid choice. Alternatively, and perhaps more sensibly, you can use Number.isFinite(+x), which explicitly coerces x to a number before passing it in (see Chapter 4 of the Types & Grammar title of this series).

JavaScript number values are always floating point (IEEE-754). So the notion of determining if a number is an “integer” is not about checking its type, because JS makes no such distinction.

Instead, you need to check if there’s any non-zero decimal portion of the value. The easiest way to do that has commonly been:

x === Math.floor( x );

ES6 adds a Number.isInteger(..) helper utility that can potentially determine this quality slightly more efficiently:

Number.isInteger( 4 );              // true
Number.isInteger( 4.2 );            // false

Note: In JavaScript, there’s no difference between 4, 4., 4.0, or 4.0000. All of these would be considered an “integer”, and would thus yield true from Number.isInteger(..).

In addition, Number.isInteger(..) filters out some clearly not-integer values that x === Math.floor(x) could potentially mix up:

Number.isInteger( NaN );            // false
Number.isInteger( Infinity );       // false

Working with “integers” is sometimes an important bit of information, as it can simplify certain kinds of algorithms. JS code by itself will not run faster just from filtering for only integers, but there are optimization techniques the engine can take (e.g., asm.js) when only integers are being used.

Because of Number.isInteger(..)’s handling of NaN and Infinity values, defining an isFloat(..) utility would not be as simple as just !Number.isInteger(..). You’d need to do something like:

function isFloat(x) {
    return Number.isFinite( x ) && !Number.isInteger( x );
}

isFloat( 4.2 );                     // true
isFloat( 4 );                       // false

isFloat( NaN );                     // false
isFloat( Infinity );                // false

Note: It may seem strange, but Infinity should neither be considered an integer nor a float.

ES6 also defines a Number.isSafeInteger(..) utility, which checks to make sure the value is both an integer and within the range of Number.MIN_SAFE_INTEGER to Number.MAX_SAFE_INTEGER (inclusive):

var x = Math.pow( 2, 53 ),
    y = Math.pow( -2, 53 );

Number.isSafeInteger( x - 1 );      // true
Number.isSafeInteger( y + 1 );      // true

Number.isSafeInteger( x );          // false
Number.isSafeInteger( y );          // false

37.5 String

Strings already have quite a few helpers prior to ES6, but even more have been added to the mix.

37.5.1 Unicode Functions

“Unicode-Aware String Operations” in Chapter 2 discusses String.fromCodePoint(..), String#codePointAt(..), and String#normalize(..) in detail. They have been added to improve Unicode support in JS string values.

String.fromCodePoint( 0x1d49e );            // "𝒞"

"ab𝒞d".codePointAt( 2 ).toString( 16 );     // "1d49e"

The normalize(..) string prototype method is used to perform Unicode normalizations that either combine characters with adjacent “combining marks” or decompose combined characters.

Generally, the normalization won’t create a visible rendering effect on the string, but it does change the underlying contents, which can affect how things like the length property are reported, as well as how character access by position behaves:

var s1 = "e\u0301";
s1.length;                          // 2

var s2 = s1.normalize();
s2.length;                          // 1
s2 === "\xE9";                      // true

normalize(..) takes an optional argument that specifies the normalization form to use. This argument must be one of the following four values: "NFC" (default), "NFD", "NFKC", or "NFKD".

Note: Normalization forms and their effects on strings is well beyond the scope of what we’ll discuss here. See “Unicode Normalization Forms” (http://www.unicode.org/reports/tr15/) for more information.
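For instance, "NFD" decomposes where the default "NFC" composes:

```javascript
var s = "\xE9";                     // "é" as a single code point

s.normalize( "NFD" ).length;        // 2 -- "e" plus a combining mark
s.normalize( "NFD" ) === "e\u0301"; // true
s.normalize( "NFD" ).normalize();   // recomposed; "NFC" is the default
```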

37.5.2 String.raw(..) Static Function

The String.raw(..) utility is provided as a built-in tag function to use with template string literals (see Chapter 2) for obtaining the raw string value without any processing of escape sequences.

This function will almost never be called manually, but will be used with tagged template literals:

var str = "bc";

String.raw`\ta${str}d\xE9`;
// "\tabcd\xE9", not "  abcdé"

In the resultant string, \ and t are separate raw characters, not the one escape sequence character \t. The same is true with the Unicode escape sequence.
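Under the covers, a tag function receives the raw chunks via the strings.raw array, so String.raw(..)’s behavior can be approximated with a sketch (`rawSketch` is a made-up name):

```javascript
// Rough sketch of String.raw(..): join the raw (unprocessed) string
// chunks, interleaving the interpolated values between them.
function rawSketch(strings, ...values) {
    var out = "";
    for (var i = 0; i < strings.raw.length; i++) {
        out += strings.raw[i];          // unprocessed chunk
        if (i < values.length) {
            out += values[i];           // interpolated value
        }
    }
    return out;
}

var str = "bc";

rawSketch`\ta${str}d\xE9` === String.raw`\ta${str}d\xE9`;   // true
```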

37.5.3 repeat(..) Prototype Function

In languages like Python and Ruby, you can repeat a string as:

"foo" * 3;                          // "foofoofoo"

That doesn’t work in JS, because * multiplication is only defined for numbers, and thus "foo" coerces to the NaN number.

However, ES6 defines a string prototype method repeat(..) to accomplish the task:

"foo".repeat( 3 );                  // "foofoofoo"

37.5.4 String Inspection Functions

In addition to String#indexOf(..) and String#lastIndexOf(..) from prior to ES6, three new methods for searching/inspection have been added: startsWith(..), endsWith(..), and includes(..).

var palindrome = "step on no pets";

palindrome.startsWith( "step on" ); // true
palindrome.startsWith( "on", 5 );   // true

palindrome.endsWith( "no pets" );   // true
palindrome.endsWith( "no", 10 );    // true

palindrome.includes( "on" );        // true
palindrome.includes( "on", 6 );     // false

For all the string search/inspection methods, if you look for an empty string "", it will either be found at the beginning or the end of the string.

Warning: These methods will not by default accept a regular expression for the search string. See “Regular Expression Symbols” in Chapter 7 for information about disabling the isRegExp check that is performed on this first argument.

37.6 Review

ES6 adds many extra API helpers on the various built-in native objects:

  • Array adds of(..) and from(..) static functions, as well as prototype functions like copyWithin(..) and fill(..).
  • Object adds static functions like is(..) and assign(..).
  • Math adds static functions like acosh(..) and clz32(..).
  • Number adds static properties like Number.EPSILON, as well as static functions like Number.isFinite(..).
  • String adds static functions like String.fromCodePoint(..) and String.raw(..), as well as prototype functions like repeat(..) and includes(..).

Most of these additions can be polyfilled (see ES6 Shim), and were inspired by utilities in common JS libraries/frameworks.

38 You Don’t Know JS: ES6 & Beyond

39 Chapter 7: Meta Programming

Meta programming is programming where the operation targets the behavior of the program itself. In other words, it’s programming the programming of your program. Yeah, a mouthful, huh?

For example, if you probe the relationship between one object a and another b – are they [[Prototype]] linked? – using a.isPrototypeOf(b), this is commonly referred to as introspection, a form of meta programming. Macros (which don’t exist in JS, yet) – where the code modifies itself at compile time – are another obvious example of meta programming. Enumerating the keys of an object with a for..in loop, or checking if an object is an instance of a “class constructor”, are other common meta programming tasks.

Meta programming focuses on one or more of the following: code inspecting itself, code modifying itself, or code modifying default language behavior so other code is affected.

The goal of meta programming is to leverage the language’s own intrinsic capabilities to make the rest of your code more descriptive, expressive, and/or flexible. Because of the meta nature of meta programming, it’s somewhat difficult to put a more precise definition on it than that. The best way to understand meta programming is to see it through examples.

ES6 adds several new forms/features for meta programming on top of what JS already had.

39.1 Function Names

There are cases where your code may want to introspect on itself and ask what the name of some function is. If you ask what a function’s name is, the answer is surprisingly somewhat ambiguous. Consider:

function daz() {
    // ..
}

var obj = {
    foo: function() {
        // ..
    },
    bar: function baz() {
        // ..
    },
    bam: daz,
    zim() {
        // ..
    }
};

In this previous snippet, the answer to “what is the name of obj.foo()?” is slightly nuanced. Is it "foo", "", or undefined? And what about obj.bar() – is it named "bar" or "baz"? Is obj.bam() named "bam" or "daz"? What about obj.zim()?

Moreover, what about functions which are passed as callbacks, like:

function foo(cb) {
    // what is the name of `cb()` here?
}

foo( function(){
    // I'm anonymous!
} );

There are quite a few ways that functions can be expressed in programs, and it’s not always clear and unambiguous what the “name” of that function should be.

More importantly, we need to distinguish whether the “name” of a function refers to its name property – yes, functions have a property called name – or whether it refers to the lexical binding name, such as bar in function bar() { .. }.

The lexical binding name is what you use for things like recursion:

function foo(i) {
    if (i < 10) return foo( i * 2 );
    return i;
}

The name property is what you’d use for meta programming purposes, so that’s what we’ll focus on in this discussion.

The confusion comes because by default, the lexical name a function has (if any) is also set as its name property. Actually there was no official requirement for that behavior by the ES5 (and prior) specifications. The setting of the name property was nonstandard but still fairly reliable. As of ES6, it has been standardized.

Tip: If a function has a name value assigned, that’s typically the name used in stack traces in developer tools.

39.1.1 Inferences

But what happens to the name property if a function has no lexical name?

As of ES6, there are now inference rules which can determine a sensible name property value to assign a function even if that function doesn’t have a lexical name to use.

Consider:

var abc = function() {
    // ..
};

abc.name;               // "abc"

Had we given the function a lexical name like abc = function def() { .. }, the name property would of course be "def". But in the absence of the lexical name, intuitively the "abc" name seems appropriate.

Here are other forms that will infer a name (or not) in ES6:

(function(){ .. });                 // name:
(function*(){ .. });                // name:
window.foo = function(){ .. };      // name:

class Awesome {
    constructor() { .. }            // name: Awesome
    funny() { .. }                  // name: funny
}

var c = class Awesome { .. };       // name: Awesome

var o = {
    foo() { .. },                   // name: foo
    *bar() { .. },                  // name: bar
    baz: () => { .. },              // name: baz
    bam: function(){ .. },          // name: bam
    get qux() { .. },               // name: get qux
    set fuz(v) { .. },              // name: set fuz
    ["b" + "iz"]:
        function(){ .. },           // name: biz
    [Symbol( "buz" )]:
        function(){ .. }            // name: [buz]
};

var x = o.foo.bind( o );            // name: bound foo
(function(){ .. }).bind( o );       // name: bound

export default function() { .. }    // name: default

var y = new Function();             // name: anonymous
var GeneratorFunction =
    function*(){}.__proto__.constructor;
var z = new GeneratorFunction();    // name: anonymous

The name property is not writable by default, but it is configurable, meaning you can use Object.defineProperty(..) to manually change it if so desired.
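For example:

```javascript
function foo() {}

foo.name;                   // "foo"

// `name` is non-writable, but it is configurable:
Object.defineProperty( foo, "name", { value: "bar" } );

foo.name;                   // "bar"
```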

39.2 Meta Properties

In the “new.target” section of Chapter 3, we introduced a concept new to JS in ES6: the meta property. As the name suggests, meta properties are intended to provide special meta information in the form of a property access that would otherwise not have been possible.

In the case of new.target, the keyword new serves as the context for a property access. Clearly new is itself not an object, which makes this capability special. However, when new.target is used inside a constructor call (a function/method invoked with new), new becomes a virtual context, so that new.target can refer to the target constructor that new invoked.

This is a clear example of a meta programming operation, as the intent is to determine from inside a constructor call what the original new target was, generally for the purposes of introspection (examining typing/structure) or static property access.

For example, you may want to have different behavior in a constructor depending on if it’s directly invoked or invoked via a child class:

class Parent {
    constructor() {
        if (new.target === Parent) {
            console.log( "Parent instantiated" );
        }
        else {
            console.log( "A child instantiated" );
        }
    }
}

class Child extends Parent {}

var a = new Parent();
// Parent instantiated

var b = new Child();
// A child instantiated

There’s a slight nuance here, which is that the constructor() inside the Parent class definition is actually given the lexical name of the class (Parent), even though the syntax implies that the class is a separate entity from the constructor.

Warning: As with all meta programming techniques, be careful of creating code that’s too clever for your future self or others maintaining your code to understand. Use these tricks with caution.

39.3 Well Known Symbols

In the “Symbols” section of Chapter 2, we covered the new ES6 primitive type symbol. In addition to symbols you can define in your own program, JS predefines a number of built-in symbols, referred to as Well Known Symbols (WKS).

These symbol values are defined primarily to expose special meta properties to your JS programs, giving you more control over JS’s behavior.

We’ll briefly introduce each and discuss their purpose.

39.3.1 Symbol.iterator

In Chapters 2 and 3, we introduced and used the @@iterator symbol, automatically used by ... spreads and for..of loops. We also saw @@iterator as defined on the new ES6 collections as defined in Chapter 5.

Symbol.iterator represents the special location (property) on any object where the language mechanisms automatically look to find a method that will construct an iterator instance for consuming that object’s values. Many objects come with a default one defined.

However, we can define our own iterator logic for any object value by setting the Symbol.iterator property, even if that’s overriding the default iterator. The meta programming aspect is that we are defining behavior which other parts of JS (namely, operators and looping constructs) use when processing an object value we define.

Consider:

var arr = [4,5,6,7,8,9];

for (var v of arr) {
    console.log( v );
}
// 4 5 6 7 8 9

// define iterator that only produces values
// from odd indexes
arr[Symbol.iterator] = function*() {
    var idx = 1;
    do {
        yield this[idx];
    } while ((idx += 2) < this.length);
};

for (var v of arr) {
    console.log( v );
}
// 5 7 9

39.3.2 Symbol.toStringTag and Symbol.hasInstance

One of the most common meta programming tasks is to introspect on a value to find out what kind it is, usually to decide what operations are appropriate to perform on it. With objects, the two most common inspection techniques are toString() and instanceof.

Consider:

function Foo() {}

var a = new Foo();

a.toString();               // [object Object]
a instanceof Foo;           // true

As of ES6, you can control the behavior of these operations:

function Foo(greeting) {
    this.greeting = greeting;
}

Foo.prototype[Symbol.toStringTag] = "Foo";

Object.defineProperty( Foo, Symbol.hasInstance, {
    value: function(inst) {
        return inst.greeting == "hello";
    }
} );

var a = new Foo( "hello" ),
    b = new Foo( "world" );

b[Symbol.toStringTag] = "cool";

a.toString();               // [object Foo]
String( b );                // [object cool]

a instanceof Foo;           // true
b instanceof Foo;           // false

The @@toStringTag symbol on the prototype (or instance itself) specifies a string value to use in the [object ___] stringification.

The @@hasInstance symbol is a method on the constructor function which receives the instance object value and lets you decide by returning true or false if the value should be considered an instance or not.

Note: To set @@hasInstance on a function, you must use Object.defineProperty(..), as the default one on Function.prototype is writable: false. See the this & Object Prototypes title of this series for more information.

39.3.3 Symbol.species

In “Classes” in Chapter 3, we introduced the @@species symbol, which controls which constructor is used by built-in methods of a class that needs to spawn new instances.

The most common example is when subclassing Array and wanting to define which constructor (Array(..) or your subclass) inherited methods like slice(..) should use. By default, slice(..) called on an instance of a subclass of Array would produce a new instance of that subclass, which is frankly what you’ll likely often want.

However, you can meta program by overriding a class’s default @@species definition:

class Cool {
    // defer `@@species` to derived constructor
    static get [Symbol.species]() { return this; }

    again() {
        return new this.constructor[Symbol.species]();
    }
}

class Fun extends Cool {}

class Awesome extends Cool {
    // force `@@species` to be parent constructor
    static get [Symbol.species]() { return Cool; }
}

var a = new Fun(),
    b = new Awesome(),
    c = a.again(),
    d = b.again();

c instanceof Fun;           // true
d instanceof Awesome;       // false
d instanceof Cool;          // true

The Symbol.species setting defaults on the built-in native constructors to the return this behavior as illustrated in the previous snippet in the Cool definition. It has no default on user classes, but as shown that behavior is easy to emulate.

If you need to define methods that generate new instances, use the meta programming of the new this.constructor[Symbol.species](..) pattern instead of the hard-wiring of new this.constructor(..) or new XYZ(..). Derived classes will then be able to customize Symbol.species to control which constructor vends those instances.
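The built-in natives participate in the same mechanism; for example, an Array subclass can redirect inherited methods like map(..) back to vending plain arrays:

```javascript
class MyCoolArray extends Array {
    // force inherited methods (map, slice, etc.) to produce plain Arrays
    static get [Symbol.species]() { return Array; }
}

var a = new MyCoolArray( 1, 2, 3 ),
    b = a.map( function(v){ return v * 2; } );

a instanceof MyCoolArray;   // true
b instanceof MyCoolArray;   // false
b instanceof Array;         // true
```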

39.3.4 Symbol.toPrimitive

In the Types & Grammar title of this series, we discussed the ToPrimitive abstract coercion operation, which is used when an object must be coerced to a primitive value for some operation (such as == comparison or + addition). Prior to ES6, there was no way to control this behavior.

As of ES6, the @@toPrimitive symbol as a property on any object value can customize that ToPrimitive coercion by specifying a method.

Consider:

var arr = [1,2,3,4,5];

arr + 10;               // 1,2,3,4,510

arr[Symbol.toPrimitive] = function(hint) {
    if (hint == "default" || hint == "number") {
        // sum all numbers
        return this.reduce( function(acc,curr){
            return acc + curr;
        }, 0 );
    }
};

arr + 10;               // 25

The Symbol.toPrimitive method will be provided with a hint of "string", "number", or "default" (which should be interpreted as "number"), depending on what type the operation invoking ToPrimitive is expecting. In the previous snippet, the additive + operation has no hint ("default" is passed). A multiplicative * operation would hint "number" and a String(arr) would hint "string".

Warning: The == operator will invoke the ToPrimitive operation with no hint – the @@toPrimitive method, if any is called with hint "default" – on an object if the other value being compared is not an object. However, if both comparison values are objects, the behavior of == is identical to ===, which is that the references themselves are directly compared. In this case, @@toPrimitive is not invoked at all. See the Types & Grammar title of this series for more information about coercion and the abstract operations.

39.3.5 Regular Expression Symbols

There are four well known symbols that can be overridden for regular expression objects, which control how those regular expressions are used by the four corresponding String.prototype functions of the same name:

  • @@match: The Symbol.match value of a regular expression is the method used to match all or part of a string value with the given regular expression. It’s used by String.prototype.match(..) if you pass it a regular expression for the pattern matching.

    The default algorithm for matching is laid out in section 21.2.5.6 of the ES6 specification (https://people.mozilla.org/~jorendorff/es6-draft.html#sec-regexp.prototype-@@match). You could override this default algorithm and provide extra regex features, such as look-behind assertions.

    Symbol.match is also used by the isRegExp abstract operation (see the note in “String Inspection Functions” in Chapter 6) to determine if an object is intended to be used as a regular expression. To force this check to fail for an object so it’s not treated as a regular expression, set the Symbol.match value to false (or something falsy).

  • @@replace: The Symbol.replace value of a regular expression is the method used by String.prototype.replace(..) to replace within a string one or all occurrences of character sequences that match the given regular expression pattern.

    The default algorithm for replacing is laid out in section 21.2.5.8 of the ES6 specification (https://people.mozilla.org/~jorendorff/es6-draft.html#sec-regexp.prototype-@@replace).

    One cool use for overriding the default algorithm is to provide additional replacer argument options, such as supporting "abaca".replace(/a/g,[1,2,3]) producing "1b2c3" by consuming the iterable for successive replacement values.

  • @@search: The Symbol.search value of a regular expression is the method used by String.prototype.search(..) to search for a sub-string within another string as matched by the given regular expression.

    The default algorithm for searching is laid out in section 21.2.5.9 of the ES6 specification (https://people.mozilla.org/~jorendorff/es6-draft.html#sec-regexp.prototype-@@search).

  • @@split: The Symbol.split value of a regular expression is the method used by String.prototype.split(..) to split a string into sub-strings at the location(s) of the delimiter as matched by the given regular expression.

    The default algorithm for splitting is laid out in section 21.2.5.11 of the ES6 specification (https://people.mozilla.org/~jorendorff/es6-draft.html#sec-regexp.prototype-@@split).

Overriding the built-in regular expression algorithms is not for the faint of heart! JS ships with a highly optimized regular expression engine, so your own user code will likely be a lot slower. This kind of meta programming is neat and powerful, but it should only be used in cases where it’s really necessary or beneficial.
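As noted for @@match, setting Symbol.match to a falsy value makes the isRegExp check fail, so string methods that normally reject regular expressions will instead coerce them to strings:

```javascript
var re = /oo/;

// "foo".startsWith( re );      // TypeError: regexes disallowed here

re[Symbol.match] = false;       // defeat the isRegExp check

"/oo/baz".startsWith( re );     // true -- `re` coerced to the string "/oo/"
```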

39.3.6 Symbol.isConcatSpreadable

The @@isConcatSpreadable symbol can be defined as a boolean property (Symbol.isConcatSpreadable) on any object (like an array or other iterable) to indicate if it should be spread out if passed to an array concat(..).

Consider:

var a = [1,2,3],
    b = [4,5,6];

b[Symbol.isConcatSpreadable] = false;

[].concat( a, b );      // [1,2,3,[4,5,6]]

39.3.7 Symbol.unscopables

The @@unscopables symbol can be defined as an object property (Symbol.unscopables) on any object to indicate which properties can and cannot be exposed as lexical variables in a with statement.

Consider:

var o = { a:1, b:2, c:3 },
    a = 10, b = 20, c = 30;

o[Symbol.unscopables] = {
    a: false,
    b: true,
    c: false
};

with (o) {
    console.log( a, b, c );     // 1 20 3
}

A true in the @@unscopables object indicates the property should be unscopable, and thus filtered out from the lexical scope variables. false means it’s OK to be included in the lexical scope variables.

Warning: The with statement is disallowed entirely in strict mode, and as such should be considered deprecated from the language. Don’t use it. See the Scope & Closures title of this series for more information. Because with should be avoided, the @@unscopables symbol is also moot.

39.4 Proxies

One of the most obviously meta programming features added to ES6 is the Proxy feature.

A proxy is a special kind of object you create that “wraps” – or sits in front of – another normal object. You can register special handlers (aka traps) on the proxy object which are called when various operations are performed against the proxy. These handlers have the opportunity to perform extra logic in addition to forwarding the operations on to the original target/wrapped object.

One example of the kind of trap handler you can define on a proxy is get that intercepts the [[Get]] operation – performed when you try to access a property on an object. Consider:

var obj = { a: 1 },
    handlers = {
        get(target,key,context) {
            // note: target === obj,
            // context === pobj
            console.log( "accessing: ", key );
            return Reflect.get(
                target, key, context
            );
        }
    },
    pobj = new Proxy( obj, handlers );

obj.a;
// 1

pobj.a;
// accessing: a
// 1

We declare a get(..) handler as a named method on the handler object (second argument to Proxy(..)), which receives a reference to the target object (obj), the key property name ("a"), and the self/receiver/proxy (pobj).

After the console.log(..) tracing statement, we “forward” the operation onto obj via Reflect.get(..). We will cover the Reflect API in the next section, but note that each available proxy trap has a corresponding Reflect function of the same name.

These mappings are symmetric on purpose. The proxy handlers each intercept when a respective meta programming task is performed, and the Reflect utilities each perform the respective meta programming task on an object. Each proxy handler has a default definition that automatically calls the corresponding Reflect utility. You will almost certainly use both Proxy and Reflect in tandem.

Here’s a list of handlers you can define on a proxy for a target object/function, and how/when they are triggered:

  • get(..): via [[Get]], a property is accessed on the proxy (Reflect.get(..), . property operator, or [ .. ] property operator)
  • set(..): via [[Set]], a property value is set on the proxy (Reflect.set(..), the = assignment operator, or destructuring assignment if it targets an object property)
  • deleteProperty(..): via [[Delete]], a property is deleted from the proxy (Reflect.deleteProperty(..) or delete)
  • apply(..) (if target is a function): via [[Call]], the proxy is invoked as a normal function/method (Reflect.apply(..), call(..), apply(..), or the (..) call operator)
  • construct(..) (if target is a constructor function): via [[Construct]], the proxy is invoked as a constructor function (Reflect.construct(..) or new)
  • getOwnPropertyDescriptor(..): via [[GetOwnProperty]], a property descriptor is retrieved from the proxy (Object.getOwnPropertyDescriptor(..) or Reflect.getOwnPropertyDescriptor(..))
  • defineProperty(..): via [[DefineOwnProperty]], a property descriptor is set on the proxy (Object.defineProperty(..) or Reflect.defineProperty(..))
  • getPrototypeOf(..): via [[GetPrototypeOf]], the [[Prototype]] of the proxy is retrieved (Object.getPrototypeOf(..), Reflect.getPrototypeOf(..), __proto__, Object#isPrototypeOf(..), or instanceof)
  • setPrototypeOf(..): via [[SetPrototypeOf]], the [[Prototype]] of the proxy is set (Object.setPrototypeOf(..), Reflect.setPrototypeOf(..), or __proto__)
  • preventExtensions(..): via [[PreventExtensions]], the proxy is made non-extensible (Object.preventExtensions(..) or Reflect.preventExtensions(..))
  • isExtensible(..): via [[IsExtensible]], the extensibility of the proxy is probed (Object.isExtensible(..) or Reflect.isExtensible(..))
  • ownKeys(..): via [[OwnPropertyKeys]], the set of owned properties and/or owned symbol properties of the proxy is retrieved (Object.keys(..), Object.getOwnPropertyNames(..), Object.getOwnPropertySymbols(..), Reflect.ownKeys(..), or JSON.stringify(..))
  • enumerate(..): via [[Enumerate]], an iterator is requested for the proxy’s enumerable owned and “inherited” properties (Reflect.enumerate(..) or for..in)
  • has(..): via [[HasProperty]], the proxy is probed to see if it has an owned or “inherited” property (Reflect.has(..), Object#hasOwnProperty(..), or "prop" in obj)

Tip: For more information about each of these meta programming tasks, see the “Reflect API” section later in this chapter.

In addition to the notations in the preceding list about actions that will trigger the various traps, some traps are triggered indirectly by the default actions of another trap. For example:

var handlers = {
        getOwnPropertyDescriptor(target,prop) {
            console.log(
                "getOwnPropertyDescriptor"
            );
            return Object.getOwnPropertyDescriptor(
                target, prop
            );
        },
        defineProperty(target,prop,desc){
            console.log( "defineProperty" );
            return Object.defineProperty(
                target, prop, desc
            );
        }
    },
    proxy = new Proxy( {}, handlers );

proxy.a = 2;
// getOwnPropertyDescriptor
// defineProperty

The getOwnPropertyDescriptor(..) and defineProperty(..) handlers are triggered by the default set(..) handler’s steps when setting a property value (whether newly adding or updating). If you also define your own set(..) handler, you may or may not make the corresponding calls against context (not target!) which would trigger these proxy traps.
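To illustrate that last point, here's a sketch (a hypothetical variation on the earlier snippet, not from the original) where a custom set(..) handler makes its call against context instead of target, re-entering the defineProperty(..) trap:

```javascript
var handlers = {
        defineProperty(target,prop,desc) {
            console.log( "defineProperty" );
            return Reflect.defineProperty( target, prop, desc );
        },
        set(target,prop,val,context) {
            console.log( "set" );
            // calling against `context` (the proxy itself) re-enters
            // the `defineProperty(..)` trap; calling against `target`
            // would bypass it
            return Reflect.defineProperty( context, prop, {
                value: val, writable: true,
                enumerable: true, configurable: true
            } );
        }
    },
    proxy = new Proxy( {}, handlers );

proxy.a = 2;
// set
// defineProperty
```

Had the set(..) handler used target instead of context, only "set" would have printed.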

39.4.1 Proxy Limitations

These meta programming handlers trap a wide array of fundamental operations you can perform against an object. However, there are some operations which are not (yet, at least) available to intercept.

For example, none of these operations are trapped and forwarded from pobj proxy to obj target:

var obj = { a:1, b:2 },
    handlers = { .. },
    pobj = new Proxy( obj, handlers );

typeof obj;
String( obj );
obj + "";
obj == pobj;
obj === pobj;

Perhaps in the future, more of these underlying fundamental operations in the language will be interceptable, giving us even more power to extend JavaScript from within itself.

Warning: There are certain invariants – behaviors which cannot be overridden – that apply to the use of proxy handlers. For example, the result from the isExtensible(..) handler is always coerced to a boolean. These invariants restrict some of your ability to customize behaviors with proxies, but they do so only to prevent you from creating strange and unusual (or inconsistent) behavior. The conditions for these invariants are complicated so we won’t fully go into them here, but this post (http://www.2ality.com/2014/12/es6-proxies.html#invariants) does a great job of covering them.

39.4.2 Revocable Proxies

A regular proxy always traps for the target object, and cannot be modified after creation – as long as a reference is kept to the proxy, proxying remains possible. However, there may be cases where you want to create a proxy that can be disabled when you want to stop allowing it to proxy. The solution is to create a revocable proxy:

var obj = { a: 1 },
    handlers = {
        get(target,key,context) {
            // note: target === obj,
            // context === pobj
            console.log( "accessing: ", key );
            return target[key];
        }
    },
    { proxy: pobj, revoke: prevoke } =
        Proxy.revocable( obj, handlers );

pobj.a;
// accessing: a
// 1

// later:
prevoke();

pobj.a;
// TypeError

A revocable proxy is created with Proxy.revocable(..), which is a regular function, not a constructor like Proxy(..). Otherwise, it takes the same two arguments: target and handlers.

The return value of Proxy.revocable(..) is not the proxy itself as with new Proxy(..). Instead, it’s an object with two properties: proxy and revoke – we used object destructuring (see “Destructuring” in Chapter 2) to assign these properties to pobj and prevoke() variables, respectively.

Once a revocable proxy is revoked, any attempts to access it (trigger any of its traps) will throw a TypeError.

An example of using a revocable proxy might be handing out a proxy to another party in your application that manages data in your model, instead of giving them a reference to the real model object itself. If your model object changes or is replaced, you want to invalidate the proxy you handed out so the other party knows (via the errors!) to request an updated reference to the model.
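That use case might be sketched like this (the ModelHandle(..) wrapper and its property names are hypothetical, purely for illustration):

```javascript
function ModelHandle(model) {
    var { proxy, revoke } = Proxy.revocable( model, {} );
    return {
        handle: proxy,      // hand this out to other parties
        invalidate: revoke  // call when the model is replaced
    };
}

var model = { records: [ 1, 2, 3 ] },
    mh = ModelHandle( model );

mh.handle.records;      // [1,2,3]

// model replaced/changed, cut off the old handle:
mh.invalidate();

try {
    mh.handle.records;
}
catch (err) {
    // TypeError -- consumer knows to request a fresh handle
}
```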

39.4.3 Using Proxies

The meta programming benefits of these Proxy handlers should be obvious. We can almost fully intercept (and thus override) the behavior of objects, meaning we can extend object behavior beyond core JS in some very powerful ways. We’ll look at a few example patterns to explore the possibilities.

39.4.3.1 Proxy First, Proxy Last

As we mentioned earlier, you typically think of a proxy as “wrapping” the target object. In that sense, the proxy becomes the primary object that the code interfaces with, and the actual target object remains hidden/protected.

You might do this because you want to pass the object somewhere that can’t be fully “trusted,” and so you need to enforce special rules around its access rather than passing the object itself.

Consider:

var messages = [],
    handlers = {
        get(target,key) {
            // string value?
            if (typeof target[key] == "string") {
                // filter out punctuation
                return target[key]
                    .replace( /[^\w]/g, "" );
            }

            // pass everything else through
            return target[key];
        },
        set(target,key,val) {
            // only set unique strings, lowercased
            if (typeof val == "string") {
                val = val.toLowerCase();
                if (target.indexOf( val ) == -1) {
                    target.push(val);
                }
            }
            return true;
        }
    },
    messages_proxy =
        new Proxy( messages, handlers );

// elsewhere:
messages_proxy.push(
    "heLLo...", 42, "wOrlD!!", "WoRld!!"
);

messages_proxy.forEach( function(val){
    console.log(val);
} );
// hello
// world

messages.forEach( function(val){
    console.log(val);
} );
// hello...
// world!!

I call this proxy first design, as we interact first (primarily, entirely) with the proxy.

We enforce some special rules on interacting with messages_proxy that aren’t enforced for messages itself. We only add elements if the value is a string and is also unique; we also lowercase the value. When retrieving values from messages_proxy, we filter out any punctuation in the strings.

Alternatively, we can completely invert this pattern, where the target interacts with the proxy instead of the proxy interacting with the target. Thus, code really only interacts with the main object. The easiest way to accomplish this fallback is to have the proxy object in the [[Prototype]] chain of the main object.

Consider:

var handlers = {
        get(target,key,context) {
            return function() {
                context.speak(key + "!");
            };
        }
    },
    catchall = new Proxy( {}, handlers ),
    greeter = {
        speak(who = "someone") {
            console.log( "hello", who );
        }
    };

// setup `greeter` to fall back to `catchall`
Object.setPrototypeOf( greeter, catchall );

greeter.speak();                // hello someone
greeter.speak( "world" );       // hello world

greeter.everyone();             // hello everyone!

We interact directly with greeter instead of catchall. When we call speak(..), it’s found on greeter and used directly. But when we try to access a method like everyone(), that function doesn’t exist on greeter.

The default object property behavior is to check up the [[Prototype]] chain (see the this & Object Prototypes title of this series), so catchall is consulted for an everyone property. The proxy get() handler then kicks in and returns a function that calls speak(..) with the name of the property being accessed ("everyone").

I call this pattern proxy last, as the proxy is used only as a last resort.

39.4.3.2 “No Such Property/Method”

A common complaint about JS is that objects aren’t by default very defensive in the situation where you try to access or set a property that doesn’t already exist. You may wish to predefine all the properties/methods for an object, and have an error thrown if a nonexistent property name is subsequently used.

We can accomplish this with a proxy, either in proxy first or proxy last design. Let’s consider both.

var obj = {
        a: 1,
        foo() {
            console.log( "a:", this.a );
        }
    },
    handlers = {
        get(target,key,context) {
            if (Reflect.has( target, key )) {
                return Reflect.get(
                    target, key, context
                );
            }
            else {
                throw "No such property/method!";
            }
        },
        set(target,key,val,context) {
            if (Reflect.has( target, key )) {
                return Reflect.set(
                    target, key, val, context
                );
            }
            else {
                throw "No such property/method!";
            }
        }
    },
    pobj = new Proxy( obj, handlers );

pobj.a = 3;
pobj.foo();         // a: 3

pobj.b = 4;         // Error: No such property/method!
pobj.bar();         // Error: No such property/method!

For both get(..) and set(..), we only forward the operation if the target object’s property already exists; otherwise an error is thrown. The proxy object (pobj) is the main object code should interact with, as it intercepts these actions to provide the protections.

Now, let’s consider inverting with proxy last design:

var handlers = {
        get() {
            throw "No such property/method!";
        },
        set() {
            throw "No such property/method!";
        }
    },
    pobj = new Proxy( {}, handlers ),
    obj = {
        a: 1,
        foo() {
            console.log( "a:", this.a );
        }
    };

// setup `obj` to fall back to `pobj`
Object.setPrototypeOf( obj, pobj );

obj.a = 3;
obj.foo();          // a: 3

obj.b = 4;          // Error: No such property/method!
obj.bar();          // Error: No such property/method!

The proxy last design here is a fair bit simpler with respect to how the handlers are defined. Instead of needing to intercept the [[Get]] and [[Set]] operations and only forward them if the target property exists, we instead rely on the fact that if either [[Get]] or [[Set]] get to our pobj fallback, the action has already traversed the whole [[Prototype]] chain and not found a matching property. We are free at that point to unconditionally throw the error. Cool, huh?

39.4.3.3 Proxy Hacking the [[Prototype]] Chain

The [[Get]] operation is the primary channel by which the [[Prototype]] mechanism is invoked. When a property is not found on the immediate object, [[Get]] automatically hands off the operation to the [[Prototype]] object.

That means you can use the get(..) trap of a proxy to emulate or extend the notion of this [[Prototype]] mechanism.

The first hack we’ll consider is creating two objects which are circularly linked via [[Prototype]] (or, at least it appears that way!). You cannot actually create a real circular [[Prototype]] chain, as the engine will throw an error. But a proxy can fake it!

Consider:

var handlers = {
        get(target,key,context) {
            if (Reflect.has( target, key )) {
                return Reflect.get(
                    target, key, context
                );
            }
            // fake circular `[[Prototype]]`
            else {
                return Reflect.get(
                    target[
                        Symbol.for( "[[Prototype]]" )
                    ],
                    key,
                    context
                );
            }
        }
    },
    obj1 = new Proxy(
        {
            name: "obj-1",
            foo() {
                console.log( "foo:", this.name );
            }
        },
        handlers
    ),
    obj2 = Object.assign(
        Object.create( obj1 ),
        {
            name: "obj-2",
            bar() {
                console.log( "bar:", this.name );
                this.foo();
            }
        }
    );

// fake circular `[[Prototype]]` link
obj1[ Symbol.for( "[[Prototype]]" ) ] = obj2;

obj1.bar();
// bar: obj-1 <-- through proxy faking [[Prototype]]
// foo: obj-1 <-- `this` context still preserved

obj2.foo();
// foo: obj-2 <-- through [[Prototype]]

Note: We didn’t need to proxy/forward [[Set]] in this example, so we kept things simpler. To be fully [[Prototype]] emulation compliant, you’d want to implement a set(..) handler that searches the [[Prototype]] chain for a matching property and respects its descriptor behavior (e.g., set, writable). See the this & Object Prototypes title of this series.

In the previous snippet, obj2 is [[Prototype]] linked to obj1 by virtue of the Object.create(..) statement. But to create the reverse (circular) linkage, we create a property on obj1 at the symbol location Symbol.for("[[Prototype]]") (see “Symbols” in Chapter 2). This symbol may look sort of special/magical, but it isn’t. It just allows me a conveniently named hook that semantically appears related to the task I’m performing.

Then, the proxy’s get(..) handler looks first to see if a requested key is on the proxy. If not, the operation is manually handed off to the object reference stored in the Symbol.for("[[Prototype]]") location of target.

One important advantage of this pattern is that the definitions of obj1 and obj2 are mostly not intruded by the setting up of this circular relationship between them. Although the previous snippet has all the steps intertwined for brevity’s sake, if you look closely, the proxy handler logic is entirely generic (doesn’t know about obj1 or obj2 specifically). So, that logic could be pulled out into a simple helper that wires them up, like a setCircularPrototypeOf(..) for example. We’ll leave that as an exercise for the reader.
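As a sketch of that exercise (using the setCircularPrototypeOf(..) name suggested above, which is not a real API), the generic handler logic could be packaged like this, assuming the two object definitions are passed in as plain objects:

```javascript
var circularHandlers = {
    get(target,key,context) {
        if (Reflect.has( target, key )) {
            return Reflect.get( target, key, context );
        }
        // fake circular `[[Prototype]]`
        return Reflect.get(
            target[ Symbol.for( "[[Prototype]]" ) ],
            key,
            context
        );
    }
};

function setCircularPrototypeOf(obj1Def,obj2Def) {
    var obj1 = new Proxy( obj1Def, circularHandlers ),
        obj2 = Object.assign( Object.create( obj1 ), obj2Def );

    // fake the reverse (circular) link
    obj1Def[ Symbol.for( "[[Prototype]]" ) ] = obj2;

    return [ obj1, obj2 ];
}

var [ o1, o2 ] = setCircularPrototypeOf(
    {
        name: "obj-1",
        foo() { console.log( "foo:", this.name ); }
    },
    {
        name: "obj-2",
        bar() { console.log( "bar:", this.name ); this.foo(); }
    }
);

o1.bar();
// bar: obj-1
// foo: obj-1

o2.foo();
// foo: obj-2
```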

Now that we’ve seen how we can use get(..) to emulate a [[Prototype]] link, let’s push the hackery a bit further. Instead of a circular [[Prototype]], what about multiple [[Prototype]] linkages (aka “multiple inheritance”)? This turns out to be fairly straightforward:

var obj1 = {
        name: "obj-1",
        foo() {
            console.log( "obj1.foo:", this.name );
        },
    },
    obj2 = {
        name: "obj-2",
        foo() {
            console.log( "obj2.foo:", this.name );
        },
        bar() {
            console.log( "obj2.bar:", this.name );
        }
    },
    handlers = {
        get(target,key,context) {
            if (Reflect.has( target, key )) {
                return Reflect.get(
                    target, key, context
                );
            }
            // fake multiple `[[Prototype]]`
            else {
                for (var P of target[
                    Symbol.for( "[[Prototype]]" )
                ]) {
                    if (Reflect.has( P, key )) {
                        return Reflect.get(
                            P, key, context
                        );
                    }
                }
            }
        }
    },
    obj3 = new Proxy(
        {
            name: "obj-3",
            baz() {
                this.foo();
                this.bar();
            }
        },
        handlers
    );

// fake multiple `[[Prototype]]` links
obj3[ Symbol.for( "[[Prototype]]" ) ] = [
    obj1, obj2
];

obj3.baz();
// obj1.foo: obj-3
// obj2.bar: obj-3

Note: As mentioned in the note after the earlier circular [[Prototype]] example, we didn’t implement the set(..) handler, but it would be necessary for a complete solution that emulates [[Set]] actions as normal [[Prototype]]s behave.

obj3 is set up to multiple-delegate to both obj1 and obj2. In obj3.baz(), the this.foo() call ends up pulling foo() from obj1 (first-come, first-served, even though there’s also a foo() on obj2). If we reordered the linkage as obj2, obj1, then obj2’s foo() would have been found and used instead.

But as is, the this.bar() call doesn’t find a bar() on obj1, so it falls over to check obj2, where it finds a match.

obj1 and obj2 represent two parallel [[Prototype]] chains of obj3. obj1 and/or obj2 could themselves have normal [[Prototype]] delegation to other objects, or either could itself be a proxy (like obj3 is) that can multiple-delegate.

Just as with the circular [[Prototype]] example earlier, the definitions of obj1, obj2, and obj3 are almost entirely separate from the generic proxy logic that handles the multiple-delegation. It would be trivial to define a utility like setPrototypesOf(..) (notice the “s”!) that takes a main object and a list of objects to fake the multiple [[Prototype]] linkage to. Again, we’ll leave that as an exercise for the reader.
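And a sketch of that companion exercise: a setPrototypesOf(..) utility (again, a hypothetical name from the text) that wires up the multiple-delegation, shown here with return values instead of console output for brevity:

```javascript
var multiHandlers = {
    get(target,key,context) {
        if (Reflect.has( target, key )) {
            return Reflect.get( target, key, context );
        }
        // check each faked `[[Prototype]]`, first-come first-served
        for (var P of target[ Symbol.for( "[[Prototype]]" ) ]) {
            if (Reflect.has( P, key )) {
                return Reflect.get( P, key, context );
            }
        }
    }
};

function setPrototypesOf(mainObj,protos) {
    mainObj[ Symbol.for( "[[Prototype]]" ) ] = protos;
    return new Proxy( mainObj, multiHandlers );
}

var obj1 = {
        name: "obj-1",
        foo() { return "obj1.foo: " + this.name; }
    },
    obj2 = {
        name: "obj-2",
        bar() { return "obj2.bar: " + this.name; }
    },
    obj3 = setPrototypesOf(
        {
            name: "obj-3",
            baz() { return [ this.foo(), this.bar() ]; }
        },
        [ obj1, obj2 ]
    );

obj3.baz();
// ["obj1.foo: obj-3", "obj2.bar: obj-3"]
```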

Hopefully the power of proxies is now becoming clearer after these various examples. There are many other powerful meta programming tasks that proxies enable.

39.5 Reflect API

The Reflect object is a plain object (like Math), not a function/constructor like the other built-in natives.

It holds static functions which correspond to various meta programming tasks that you can control. These functions correspond one-to-one with the handler methods (traps) that Proxies can define.

Some of the functions will look familiar as functions of the same names on Object:

  • Reflect.getOwnPropertyDescriptor(..)
  • Reflect.defineProperty(..)
  • Reflect.getPrototypeOf(..)
  • Reflect.setPrototypeOf(..)
  • Reflect.preventExtensions(..)
  • Reflect.isExtensible(..)

These utilities in general behave the same as their Object.* counterparts. However, one difference is that the Object.* counterparts attempt to coerce their first argument (the target object) to an object if it’s not already one. The Reflect.* methods simply throw an error in that case.
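A quick illustration of that difference:

```javascript
// `Object.*` coerces a primitive first argument to its object wrapper:
Object.getPrototypeOf( "abc" ) === String.prototype;    // true

// the `Reflect.*` counterpart refuses non-objects:
try {
    Reflect.getPrototypeOf( "abc" );
}
catch (err) {
    // TypeError
}
```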

An object’s keys can be accessed/inspected using these utilities:

  • Reflect.ownKeys(..): Returns the list of all owned keys (not “inherited”), as returned by both Object.getOwnPropertyNames(..) and Object.getOwnPropertySymbols(..). See the “Property Enumeration Order” section for information about the order of keys.
  • Reflect.enumerate(..): Returns an iterator that produces the set of all non-symbol keys (owned and “inherited”) that are enumerable (see the this & Object Prototypes title of this series). Essentially, this set of keys is the same as those processed by a for..in loop. See the “Property Enumeration Order” section for information about the order of keys.
  • Reflect.has(..): Essentially the same as the in operator for checking if a property is on an object or its [[Prototype]] chain. For example, Reflect.has(o,"foo") essentially performs "foo" in o.
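A short sketch of Reflect.ownKeys(..) and Reflect.has(..) in action:

```javascript
var proto = { foo: 42 },
    o = Object.create( proto );

o.bar = "baz";
o[ Symbol( "s" ) ] = true;

Reflect.ownKeys( o );       // ["bar", Symbol(s)] -- owned only
Reflect.has( o, "foo" );    // true -- "inherited" counts
Reflect.has( o, "qux" );    // false
```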

Function calls and constructor invocations can be performed manually, separate of the normal syntax (e.g., (..) and new) using these utilities:

  • Reflect.apply(..): For example, Reflect.apply(foo,thisObj,[42,"bar"]) calls the foo(..) function with thisObj as its this, and passes in the 42 and "bar" arguments.
  • Reflect.construct(..): For example, Reflect.construct(foo,[42,"bar"]) essentially calls new foo(42,"bar").
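Both can be sketched together (Point and scaled are just illustrative names):

```javascript
function Point(x,y) {
    this.x = x;
    this.y = y;
}

function scaled(factor) {
    return { x: this.x * factor, y: this.y * factor };
}

// ~= `new Point(3,4)`
var p = Reflect.construct( Point, [ 3, 4 ] );
p instanceof Point;                     // true

// ~= `scaled.call(p,2)`
Reflect.apply( scaled, p, [ 2 ] );      // {x: 6, y: 8}
```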

Object property access, setting, and deletion can be performed manually using these utilities:

  • Reflect.get(..): For example, Reflect.get(o,"foo") retrieves o.foo.
  • Reflect.set(..): For example, Reflect.set(o,"foo",42) essentially performs o.foo = 42.
  • Reflect.deleteProperty(..): For example, Reflect.deleteProperty(o,"foo") essentially performs delete o.foo.

The meta programming capabilities of Reflect give you programmatic equivalents to emulate various syntactic features, exposing previously hidden-only abstract operations. For example, you can use these capabilities to extend features and APIs for domain specific languages (DSLs).

39.5.1 Property Ordering

Prior to ES6, the order used to list an object’s keys/properties was implementation dependent and undefined by the specification. Generally, most engines have enumerated them in creation order, though developers have been strongly encouraged not to ever rely on this ordering.

As of ES6, the order for listing owned properties is now defined (ES6 specification, section 9.1.12) by the [[OwnPropertyKeys]] algorithm, which produces all owned properties (strings or symbols), regardless of enumerability. This ordering is only guaranteed for Reflect.ownKeys(..) (and by extension, Object.getOwnPropertyNames(..) and Object.getOwnPropertySymbols(..)).

The ordering is:

  1. First, enumerate any owned properties that are integer indexes, in ascending numeric order.
  2. Next, enumerate the rest of the owned string property names in creation order.
  3. Finally, enumerate owned symbol properties in creation order.

Consider:

var o = {};

o[Symbol("c")] = "yay";
o[2] = true;
o[1] = true;
o.b = "awesome";
o.a = "cool";

Reflect.ownKeys( o );               // ["1","2","b","a",Symbol(c)]
Object.getOwnPropertyNames( o );    // ["1","2","b","a"]
Object.getOwnPropertySymbols( o );  // [Symbol(c)]

On the other hand, the [[Enumerate]] algorithm (ES6 specification, section 9.1.11) produces only enumerable properties, from the target object as well as its [[Prototype]] chain. It is used by both Reflect.enumerate(..) and for..in. The observable ordering is implementation dependent and not controlled by the specification.

By contrast, Object.keys(..) invokes the [[OwnPropertyKeys]] algorithm to get a list of all owned keys. However, it filters out non-enumerable properties and then reorders the list to match legacy implementation-dependent behavior, specifically that of JSON.stringify(..) and for..in. So, by extension, the ordering also matches that of Reflect.enumerate(..).

In other words, all four mechanisms (Reflect.enumerate(..), Object.keys(..), for..in, and JSON.stringify(..)) will match with the same implementation-dependent ordering, though they technically get there in different ways.

Implementations are allowed to match these four to the ordering of [[OwnPropertyKeys]], but are not required to. Nevertheless, you will likely observe the following ordering behavior from them:

var o = { a: 1, b: 2 };
var p = Object.create( o );
p.c = 3;
p.d = 4;

for (var prop of Reflect.enumerate( p )) {
    console.log( prop );
}
// c d a b

for (var prop in p) {
    console.log( prop );
}
// c d a b

JSON.stringify( p );
// {"c":3,"d":4}

Object.keys( p );
// ["c","d"]

Boiling this all down: as of ES6, Reflect.ownKeys(..), Object.getOwnPropertyNames(..), and Object.getOwnPropertySymbols(..) all have predictable and reliable ordering guaranteed by the specification. So it’s safe to build code that relies on this ordering.

Reflect.enumerate(..), Object.keys(..), and for..in (as well as JSON.stringify(..) by extension) continue to share an observable ordering with each other, as they always have. But that ordering will not necessarily be the same as that of Reflect.ownKeys(..). Care should still be taken in relying on their implementation-dependent ordering.

39.6 Feature Testing

What is a feature test? It’s a test that you run to determine if a feature is available or not. Sometimes, the test is not just for existence, but for conformance to specified behavior – features can exist but be buggy.

This is a meta programming technique, to test the environment your program runs in to then determine how your program should behave.

The most common use of feature tests in JS is checking for the existence of an API and if it’s not present, defining a polyfill (see Chapter 1). For example:

if (!Number.isNaN) {
    Number.isNaN = function(x) {
        return x !== x;
    };
}

The if statement in this snippet is meta programming: we’re probing our program and its runtime environment to determine if and how we should proceed.
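The same guard-then-polyfill pattern applies to any API. For instance, a sketch for Object.is(..):

```javascript
if (!Object.is) {
    Object.is = function(v1,v2) {
        // NaN is the only value not equal to itself
        if (v1 !== v1) {
            return v2 !== v2;
        }
        // distinguish `0` from `-0`
        if (v1 === 0 && v2 === 0) {
            return 1 / v1 === 1 / v2;
        }
        return v1 === v2;
    };
}

Object.is( NaN, NaN );      // true
Object.is( 0, -0 );         // false
```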

But what about testing for features that involve new syntax?

You might try something like:

try {
    a = () => {};
    ARROW_FUNCS_ENABLED = true;
}
catch (err) {
    ARROW_FUNCS_ENABLED = false;
}

Unfortunately, this doesn’t work, because our JS programs are compiled. Thus, the engine will choke on the () => {} syntax if it does not already support ES6 arrow functions. Having a syntax error in your program prevents it from running, which in turn prevents your program from responding differently depending on whether the feature is supported or not.

To meta program with feature tests around syntax-related features, we need a way to insulate the test from the initial compile step our program runs through. For instance, if we could store the code for the test in a string, then the JS engine wouldn’t by default try to compile the contents of that string, until we asked it to.

Did your mind just jump to using eval(..)?

Not so fast. See the Scope & Closures title of this series for why eval(..) is a bad idea. But there’s another option with less downsides: the Function(..) constructor.

Consider:

try {
    new Function( "( () => {} )" );
    ARROW_FUNCS_ENABLED = true;
}
catch (err) {
    ARROW_FUNCS_ENABLED = false;
}

OK, so now we’re meta programming by determining if a feature like arrow functions can compile in the current engine or not. You might then wonder, what would we do with this information?
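That try/catch generalizes into a small helper for probing any syntax feature (the syntaxSupported(..) name is just for illustration):

```javascript
function syntaxSupported(code) {
    try {
        // compile-only check; the code is never executed
        new Function( code );
        return true;
    }
    catch (err) {
        return false;
    }
}

var ARROW_FUNCS_ENABLED = syntaxSupported( "( () => {} )" ),
    DEFAULT_PARAMS_ENABLED = syntaxSupported( "(function(a = 42){})" );
```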

With existence checks for APIs, and defining fallback API polyfills, there’s a clear path for what to do with either test success or failure. But what can we do with the information that we get from ARROW_FUNCS_ENABLED being true or false?

Because the syntax can’t appear in a file if the engine doesn’t support that feature, you can’t just have different functions defined in the file with and without the syntax in question.

What you can do is use the test to determine which of a set of JS files you should load. For example, if you had a set of these feature tests in a bootstrapper for your JS application, it could then test the environment to determine if your ES6 code can be loaded and run directly, or if you need to load a transpiled version of your code (see Chapter 1).

This technique is called split delivery.

It recognizes the reality that your ES6 authored JS programs will sometimes be able to entirely run “natively” in ES6+ browsers, but other times need transpilation to run in pre-ES6 browsers. If you always load and use the transpiled code, even in the new ES6-compliant environments, you’re running suboptimal code at least some of the time. This is not ideal.

Split delivery is more complicated and sophisticated, but it represents a more mature and robust approach to bridging the gap between the code you write and the feature support in browsers your programs must run in.
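A bootstrapper for split delivery might be sketched like this (the file names and the specific tests chosen are assumptions, purely illustrative):

```javascript
function es6SyntaxSupported() {
    try {
        // a few representative ES6 syntax tests
        new Function( "( () => {} )" );
        new Function( "(function(a = 1, ...rest){})" );
        new Function( "var [ a, { b } ] = [ 1, { b: 2 } ];" );
        return true;
    }
    catch (err) {
        return false;
    }
}

function chooseBundle() {
    return es6SyntaxSupported() ?
        "app.es6.js" :          // run the authored code natively
        "app.transpiled.js";    // fall back to the transpiled build
}

// in a browser bootstrapper, you'd then inject the chosen file:
// var scr = document.createElement( "script" );
// scr.src = chooseBundle();
// document.body.appendChild( scr );
```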

39.6.1 FeatureTests.io

Defining feature tests for all of the ES6+ syntax, as well as the semantic behaviors, is a daunting task you probably don’t want to tackle yourself. Because these tests require dynamic compilation (new Function(..)), there’s some unfortunate performance cost.

Moreover, running these tests every single time your app runs is probably wasteful, as on average a user’s browser updates at most once every several weeks, and even then, new features aren’t necessarily showing up with every update.

Finally, managing the list of feature tests that apply to your specific code base – rarely will your programs use the entirety of ES6 – is unruly and error-prone.

The “https://featuretests.io” feature-tests-as-a-service offers solutions to these frustrations.

You can load the service’s library into your page, and it loads the latest test definitions and runs all the feature tests. It does so using background processing with Web Workers, if possible, to reduce the performance overhead. It also uses LocalStorage persistence to cache the results in a way that can be shared across all sites you visit which use the service, which drastically reduces how often the tests need to run on each browser instance.

You get runtime feature tests in each of your users’ browsers, and you can use those tests results dynamically to serve users the most appropriate code (no more, no less) for their environments.

Moreover, the service provides tools and APIs to scan your files to determine what features you need, so you can fully automate your split delivery build processes.

FeatureTests.io makes it practical to use feature tests for all parts of ES6 and beyond to make sure that only the best code is ever loaded and run for any given environment.

39.7 Tail Call Optimization (TCO)

Normally, when a function call is made from inside another function, a second stack frame is allocated to separately manage the variables/state of that other function invocation. Not only does this allocation cost some processing time, but it also takes up some extra memory.

A call stack chain typically has at most 10-15 jumps from one function to another and another. In those scenarios, the memory usage is not likely any kind of practical problem.

However, when you consider recursive programming (a function calling itself repeatedly) – or mutual recursion with two or more functions calling each other – the call stack could easily be hundreds, thousands, or more levels deep. You can probably see the problems that could cause, if memory usage grows unbounded.

JavaScript engines have to set an arbitrary limit to prevent such programming techniques from crashing the browser or device by running it out of memory. That’s why we get the frustrating “RangeError: Maximum call stack size exceeded” thrown if the limit is hit.

Warning: The limit of call stack depth is not controlled by the specification. It’s implementation dependent, and will vary between browsers and devices. You should never code with strong assumptions of exact observable limits, as they may very well change from release to release.

Certain patterns of function calls, called tail calls, can be optimized in a way to avoid the extra allocation of stack frames. If the extra allocation can be avoided, there’s no reason to arbitrarily limit the call stack depth, so the engines can let them run unbounded.

A tail call is a return statement with a function call, where nothing has to happen after the call except returning its value.

This optimization can only be applied in strict mode. Yet another reason to always be writing all your code as strict!

Here’s a function call that is not in tail position:

"use strict";

function foo(x) {
    return x * 2;
}

function bar(x) {
    // not a tail call
    return 1 + foo( x );
}

bar( 10 );              // 21

1 + .. has to be performed after the foo(x) call completes, so the state of that bar(..) invocation needs to be preserved.

But the following snippet demonstrates calls to foo(..) and bar(..) where both are in tail position, as they’re the last thing to happen in their code path (other than the return):

"use strict";

function foo(x) {
    return x * 2;
}

function bar(x) {
    x = x + 1;
    if (x > 10) {
        return foo( x );
    }
    else {
        return bar( x + 1 );
    }
}

bar( 5 );               // 24
bar( 15 );              // 32

In this program, bar(..) is clearly recursive, but foo(..) is just a regular function call. In both cases, the function calls are in proper tail position. The x + 1 is evaluated before the bar(..) call, and whenever that call finishes, all that happens is the return.

Proper Tail Calls (PTC) of these forms can be optimized – called tail call optimization (TCO) – so that the extra stack frame allocation is unnecessary. Instead of creating a new stack frame for the next function call, the engine just reuses the existing stack frame. That works because a function doesn’t need to preserve any of the current state, as nothing happens with that state after the PTC.

TCO means there’s practically no limit to how deep the call stack can be. That trick slightly improves regular function calls in normal programs, but more importantly opens the door to using recursion for program expression even if the call stack could be tens of thousands of calls deep.

We’re no longer relegated to simply theorizing about recursion for problem solving, but can actually use it in real JavaScript programs!

As of ES6, all PTC should be optimizable in this way, recursion or not.

39.7.1 Tail Call Rewrite

The hitch, however, is that only PTC can be optimized; non-PTC will still work of course, but will cause stack frame allocation as they always did. You’ll have to be careful about structuring your functions with PTC if you expect the optimizations to kick in.

If you have a function that’s not written with PTC, you may find the need to manually rearrange your code to be eligible for TCO.

Consider:

"use strict";

function foo(x) {
    if (x <= 1) return 1;
    return (x / 2) + foo( x - 1 );
}

foo( 123456 );          // RangeError

The call to foo(x-1) isn’t a PTC because its result has to be added to (x / 2) before returning.

However, to make this code eligible for TCO in an ES6 engine, we can rewrite it as follows:

"use strict";

var foo = (function(){
    function _foo(acc,x) {
        if (x <= 1) return acc;
        return _foo( (x / 2) + acc, x - 1 );
    }

    return function(x) {
        return _foo( 1, x );
    };
})();

foo( 123456 );          // 3810376848.5