V8.wiki.git

Repo URL: https://github.com/v8/v8.wiki.git

If V8 in a Chromium canary turns out to be crashy, it will get rolled back to the V8 version of the previous canary. It is therefore important to keep V8’s API compatible from one canary version to the next.

We continuously run a bot that signals API stability violations. It compiles Chromium’s HEAD with V8’s current canary version.

Failures of this bot are currently only FYI and no action is required. The blame list can be used to easily identify dependent CLs in case of a rollback.

If you break this bot, remember to leave a larger window between a V8 change and a dependent Chromium change next time.

1 ARM debugging with the simulator

The simulator and debugger can be very helpful when working with v8 code generation.

  • It is convenient as it allows you to test code generation without access to actual hardware.
  • No cross or native compilation is needed.
  • The simulator fully supports the debugging of generated code.

Please note that this simulator is designed for v8 purposes. Only the features used by v8 are implemented, and you might encounter unimplemented features or instructions. In this case, feel free to implement them and submit the code!

1.1 Details on the ARM Debugger

Compile the ARM simulator shell with:

make arm.debug

on an x86 host using your regular compiler.

1.1.1 Starting the Debugger

There are different ways of starting the debugger:

$ out/arm.debug/d8 --stop_sim_at <n>

The simulator will start the debugger after executing n instructions.

$ out/arm.debug/d8 --stop_at <function name>

The simulator will stop at the given JavaScript function.

You can also generate stop instructions directly in the ARM code. Stops are generated with:

Assembler::stop(const char* msg, Condition cond, int32_t code)

When the Simulator hits a stop, it will print msg and start the debugger.

1.1.2 Debugging commands

Usual commands:

Enter help in the debugger prompt to get details on available commands. These include usual gdb-like commands, such as stepi, cont, disasm, etc. If the Simulator is run under gdb, the “gdb” debugger command will give control to gdb. You can then use cont from gdb to go back to the debugger.

Debugger specific commands:

Here’s a list of the ARM debugger specific commands, along with examples.
The JavaScript file “func.js” used below contains:

function test() {
  print("In function test.");
}
test();
  • printobject <register> (alias po), will describe an object held in a register.
$ out/arm.debug/d8 func.js --stop_at test

Simulator hit stop-at 
  0xb544d6a8  e92d4902       stmdb sp!, {r1, r8, fp, lr} 
sim> print r0 
r0: 0xb547ec15 -1253577707 
sim> printobject r0 
r0: 
0xb547ec15: [Function] 
 - map = 0x0xb540ff01 
 - initial_map = 
 - shared_info = 0xb547eb2d <SharedFunctionInfo> 
   - name = #test 
 - context = 0xb60083f1 <FixedArray[52]> 
 - code = 0xb544d681 <Code> 
   #arguments: 0xb545a15d <Proxy> (callback) 
   #length: 0xb545a14d <Proxy> (callback) 
   #name: 0xb545a155 <Proxy> (callback) 
   #prototype: 0xb545a145 <Proxy> (callback) 
   #caller: 0xb545a165 <Proxy> (callback)
  • break <address>, will insert a breakpoint at the specified address.

  • del, will delete the current breakpoint.

You can have only one such breakpoint. This is useful if you want to insert a breakpoint at runtime.

$ out/arm.debug/d8 func.js --stop_at test

Simulator hit stop-at 
  0xb53a1ee8  e92d4902       stmdb sp!, {r1, r8, fp, lr} 
sim> disasm 5 
  0xb53a1ee8  e92d4902       stmdb sp!, {r1, r8, fp, lr} 
  0xb53a1eec  e28db008       add fp, sp, #8 
  0xb53a1ef0  e59a200c       ldr r2, [r10, #+12] 
  0xb53a1ef4  e28fe004       add lr, pc, #4 
  0xb53a1ef8  e15d0002       cmp sp, r2 
sim> break 0xb53a1ef8 
sim> cont 
  0xb53a1ef8  e15d0002       cmp sp, r2 
sim> disasm 5 
  0xb53a1ef8  e15d0002       cmp sp, r2 
  0xb53a1efc  359ff034       ldrcc pc, [pc, #+52] 
  0xb53a1f00  e5980017       ldr r0, [r8, #+23] 
  0xb53a1f04  e59f1030       ldr r1, [pc, #+48] 
  0xb53a1f08  e52d0004       str r0, [sp, #-4]! 
sim> break 0xb53a1f08 
setting breakpoint failed 
sim> del 
sim> break 0xb53a1f08 
sim> cont 
  0xb53a1f08  e52d0004       str r0, [sp, #-4]! 
sim> del 
sim> cont 
In function test.
  • Generated stop instructions will work as breakpoints with a few additional features.

The first argument is a help message, the second is the condition, and the third is the stop code. If a code is specified, and is less than 256, the stop is said to be “watched”, and can be disabled/enabled; a counter also keeps track of how many times the Simulator hits this code.

Suppose we are working on this v8 C++ code, which is reached when running our JavaScript file:

__ stop("My stop.", al, 123); 
__ mov(r0, r0); 
__ mov(r0, r0); 
__ mov(r0, r0);
__ mov(r0, r0);
__ mov(r0, r0); 
__ stop("My second stop.", al, 0x1); 
__ mov(r1, r1); 
__ mov(r1, r1); 
__ mov(r1, r1); 
__ mov(r1, r1);
__ mov(r1, r1); 

Here’s a sample debugging session:

We hit the first stop.

Simulator hit My stop. 
  0xb53559e8  e1a00000       mov r0, r0

We can see the following stop using disasm. The address of the message string is inlined in the code after the svc stop instruction.

sim> disasm 
  0xb53559e8  e1a00000       mov r0, r0 
  0xb53559ec  e1a00000       mov r0, r0 
  0xb53559f0  e1a00000       mov r0, r0 
  0xb53559f4  e1a00000       mov r0, r0 
  0xb53559f8  e1a00000       mov r0, r0 
  0xb53559fc  ef800001       stop 1 - 0x1 
  0xb5355a00  08338a97       stop message: My second stop 
  0xb5355a04  e1a00000       mov r1, r1 
  0xb5355a08  e1a00000       mov r1, r1 
  0xb5355a0c  e1a00000       mov r1, r1

Information can be printed for all (watched) stops which were hit at least once.

sim> stop info all 
Stop information: 
stop 123 - 0x7b:    Enabled,    counter = 1,    My stop. 
sim> cont 
Simulator hit My second stop 
  0xb5355a04  e1a00000       mov r1, r1 
sim> stop info all 
Stop information: 
stop 1 - 0x1:   Enabled,    counter = 1,    My second stop 
stop 123 - 0x7b:    Enabled,    counter = 1,    My stop.

Stops can be disabled or enabled. (Only available for watched stops.)

sim> stop disable 1 
sim> cont 
Simulator hit My stop. 
  0xb5356808  e1a00000       mov r0, r0 
sim> cont 
Simulator hit My stop. 
  0xb5356c28  e1a00000       mov r0, r0 
sim> stop info all 
Stop information: 
stop 1 - 0x1:   Disabled,   counter = 2,    My second stop 
stop 123 - 0x7b:    Enabled,    counter = 3,    My stop. 
sim> stop enable 1 
sim> cont 
Simulator hit My second stop 
  0xb5356c44  e1a00000       mov r1, r1 
sim> stop disable all 
sim> cont
In function test.

2 What is a committer?

Technically, committers are people who have write access to the V8 Git repository. Committers can submit their own patches or patches from others.

This privilege is granted with some expectation of responsibility: committers are people who care about the V8 project and want to help meet its goals. Committers are not just people who can make changes, but people who have demonstrated their ability to collaborate with the team, get the most knowledgeable people to review code, contribute high-quality code, and follow through to fix issues (in code or tests).

A committer is a contributor to the V8 project’s success and a citizen helping the project succeed. See Committers Responsibility.

3 How do I become a committer?

In a nutshell, contribute 20 non-trivial patches and get at least three different people to review them (you’ll need three people to support you). Then ask someone to nominate you. You’re demonstrating your:

  • commitment to the project (20 good patches requires a lot of your valuable time),
  • ability to collaborate with the team,
  • understanding of how the team works (policies, processes for testing and code review, etc),
  • understanding of the project’s code base and coding style, and
  • ability to write good code (last but certainly not least)

A current committer nominates you by sending an email to the committers mailing list containing:

  • your first and last name
  • your Google Code email address
  • an explanation of why you should be a committer,
  • embedded list of links to revisions (about top 10) containing your patches

Two other committers need to second your nomination. If no one objects in 5 working days (U.S.), you’re a committer. If anyone objects or wants more information, the committers discuss and usually come to a consensus (within the 5 working days). If issues cannot be resolved, there’s a vote among current committers.

Once you get approval from the existing committers, we’ll send you instructions for write access to Git. You’ll also be added to the committers mailing list.

In the worst case, this can drag out for two weeks. Keep writing patches! Even in the rare cases where a nomination fails, the objection is usually something easy to address like “more patches” or “not enough people are familiar with this person’s work.”

3.1 Setting up push access to the repository

When you are accepted as a committer make sure to set up push access to the repo.

4 Maintaining committer status

You don’t really need to do much to maintain committer status: just keep being awesome and helping the V8 project!

In the unhappy event that a committer continues to disregard good citizenship (or actively disrupts the project), we may need to revoke that person’s status. The process is the same as for nominating a new committer: someone suggests the revocation with a good reason, two people second the motion, and a vote may be called if consensus cannot be reached. I hope that’s simple enough, and that we never have to test it in practice.

(Source: inspired by http://dev.chromium.org/getting-involved/become-a-committer )

We continuously run layout tests on our FYI waterfall to prevent integration problems with Chromium.

On test failures, the bots compare the results of V8 Tip-of-Tree with Chromium’s pinned V8 version, to only flag newly introduced V8 problems (with false positives <<5%). Blame assignment is trivial as the linux release bot tests all revisions.

Commits with newly introduced failures are normally reverted to unblock auto-rolling into Chromium. In case you break layout tests and the changes are expected, follow this procedure:

  1. Land a Chromium change setting NeedsManualRebaseline for the changed tests (more).
  2. Land your V8 CL and wait 1-2 days until it cycles into Chromium.
  3. Switch NeedsManualRebaseline to NeedsRebaseline in Chromium. Tests will be automatically rebaselined.

Please associate all CLs with a BUG.

To build V8 from scratch on Linux or Mac for x64, follow these steps:

4.1 Getting the V8 source TL;DR

More in-depth information can be found here.

  1. Install Git

  2. Install depot_tools

  3. Update depot_tools by executing the following in your terminal/shell:

    gclient
  4. Go into the directory where you want to download the V8 source into and execute the following in your terminal/shell:

    fetch v8
    cd v8

Don’t simply git clone the V8 repository!

4.2 Building V8 TL;DR

More in-depth information can be found here.

  1. For macOS: install Xcode and accept its license agreement. (If you’ve installed the command-line tools separately, remove them first.)

  2. Make sure that you are in the V8 source directory. If you followed every step up until step 4 above, you’re already at the right location.

  3. Download all the build dependencies by executing the following in your terminal/shell:

    gclient sync
  4. Generate the necessary build files by executing the following in your terminal/shell:

    tools/dev/v8gen.py x64.release
  5. Compile the source by executing the following in your terminal/shell:

    ninja -C out.gn/x64.release
  6. Run the tests by executing the following in your terminal/shell:

    tools/run-tests.py --gn

Build issues? File a bug at code.google.com/p/v8/issues or ask for help on the mailing list.

5 Building V8

V8 is built with the help of GN. GN is a meta build system of sorts, as it generates build files for a number of other build systems. How you build therefore depends on what “back-end” build system and compiler you’re using.
The instructions below assume that you already have a checkout of V8 but haven’t yet installed the build dependencies.

If you intend to develop on V8, i.e., send patches and work with changelists, you will need to install the dependencies as described here.

More information on GN can be found in Chromium’s documentation or GN’s own.

5.1 Prerequisite: Build dependencies

All build dependencies are fetched by running:

gclient sync

GN itself is distributed with depot_tools.

5.2 Building

There are two workflows for building v8: a raw workflow using lower-level commands, and a convenience workflow using wrapper scripts.

5.2.0.1 Build instructions (convenience workflow)

Use a convenience script to generate your build files, e.g.:

tools/dev/v8gen.py x64.release

Call v8gen.py --help for more information. You can add an alias v8gen calling the script and also use it in other checkouts.
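
For example, a minimal shell alias might look like this (the checkout path below is a placeholder; point it at your own V8 checkout):

alias v8gen=/path/to/v8/tools/dev/v8gen.py

With the alias in place, v8gen x64.release works from any checkout.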

List available configurations (or bots from a master):

tools/dev/v8gen.py list

tools/dev/v8gen.py list -m client.v8

Build like a particular bot from waterfall client.v8 in folder foo:

tools/dev/v8gen.py -b "V8 Linux64 - debug builder" -m client.v8 foo

5.2.0.2 Build instructions (raw workflow)

First, generate the necessary build files:

gn args out.gn/foo

An editor will open for specifying the gn arguments. You can replace foo with an arbitrary directory name. Note that due to the conversion from gyp to gn, we use a separate out.gn folder, to not collide with old gyp folders. If you don’t use gyp or keep your subfolders separate, you can also use out.
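
For reference, an args.gn for a plain x64 release build might contain something like the following two lines (these mirror the command-line example below; add or remove arguments as needed):

is_debug = false
target_cpu = "x64"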

You can also pass the arguments on the command line:

gn gen out.gn/foo --args='is_debug=false target_cpu="x64" v8_target_cpu="arm64" use_goma=true'

This will generate build files for compiling V8 with the arm64 simulator in release mode using goma for compilation. For an overview of all available gn arguments run:

gn args out.gn/foo --list

5.3 Compilation

For building all of V8 run (assuming gn generated to the x64.release folder):

ninja -C out.gn/x64.release

To build specific targets like d8, add them to the command line:

ninja -C out.gn/x64.release d8

5.4 Testing

You can pass the output directory to the test driver. Other relevant flags will be inferred from the build:

tools/run-tests.py --outdir out.gn/foo

You can also test your most recently compiled build (in out.gn):

tools/run-tests.py --gn

Build issues? File a bug at code.google.com/p/v8/issues or ask for help on the mailing list.


GYP has been deprecated in favor of [[GN|Building with GN]].
***

6 Building V8

V8 is built with the help of GYP. GYP is a meta build system of sorts, as it generates build files for a number of other build systems. How you build therefore depends on what “back-end” build system and compiler you’re using.
The instructions below assume that you already have a checkout of V8 but haven’t yet installed the build dependencies.

If you intend to develop on V8, i.e., send patches and work with changelists, you will need to install the dependencies as described here.

6.1 Prerequisite: Installing GYP

First, you need GYP itself. GYP is fetched together with the other dependencies by running:

gclient sync

6.2 Building

6.2.1 GCC + make

Requires GNU make 3.81 or later. Should work with any GCC >= 4.8 or any recent clang (3.5 highly recommended). For the officially supported clang version please check V8’s DEPS file.

6.2.1.1 Build instructions

The top-level Makefile defines a number of targets for each target architecture (ia32, x64, arm, arm64) and mode (debug, optdebug, or release). So your basic command for building is:

make ia32.release

or analogously for the other architectures and modes. You can build both debug and release binaries with just one command:

make ia32

To automatically build in release mode for the host architecture:

make native

You can also build all architectures in a given mode at once:

make release

Or everything:

make

6.2.1.2 Optional parameters

  • -j specifies the number of parallel build processes. Set it (roughly) to the number of CPU cores your machine has. The GYP/make based V8 build also supports distcc, so you can compile with -j100 or so, provided you have enough machines around.

  • OUTDIR=foo specifies where the compiled binaries go. It defaults to ./out/. In this directory, a subdirectory will be created for each architecture and mode. You will find the d8 shell’s binary in foo/ia32.release/d8, for example.

  • library=shared or component=shared_library (the two are completely equivalent) builds V8 as a shared library (libv8.so).

  • soname_version=1.2.3 is only relevant for shared library builds and configures the SONAME of the library. Both the SONAME and the filename of the library will be libv8.so.1.2.3 if you specify this. Due to a peculiarity in GYP, if you specify a custom SONAME, the library’s path will no longer be encoded in the binaries, so you’ll have to run d8 as follows:

    LD_LIBRARY_PATH=out/ia32.release/lib.target out/ia32.release/d8
  • console=readline enables readline support for the d8 shell. You need readline development headers for this (libreadline-dev on Ubuntu).

  • disassembler=on enables the disassembler for release mode binaries (it’s always enabled for debug binaries). This is useful if you want to inspect generated machine code.

  • snapshot=off disables building with a heap snapshot. Compiling will be a little faster, but V8’s start up will be slightly slower.

  • gdbjit=on enables GDB JIT support.

  • liveobjectlist=on enables the Live Object List feature.

  • vfp3=off is only relevant for ARM builds with snapshot and disables the use of VFP3 instructions in the snapshot.

  • debuggersupport=off disables the JavaScript debugger.

  • werror=no omits the -Werror flag. This is especially useful for not officially supported C++ compilers (e.g. newer versions of GCC) so that compile warnings are ignored.

  • strictaliasing=off passes the -fno-strict-aliasing flag to GCC. This may help to work around build failures on officially unsupported platforms and/or GCC versions.

  • regexp=interpreted chooses the interpreted mode of the irregexp regular expression engine instead of the native code mode.

  • hardfp=on creates “hardfp” binaries on ARM.

6.2.2 Ninja

To build d8:

export GYP_GENERATORS=ninja
gypfiles/gyp_v8
ninja -C out/Debug d8

Specify out/Release for a release build. I recommend setting up an alias so that you don’t need to type out that build directory path.
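
For example, a shell alias along these lines saves some typing (the alias name is just a suggestion):

alias nd='ninja -C out/Debug'

After that, nd d8 builds d8 in the debug output directory.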

If you want to build all targets, use ninja -C out/Debug all. It’s faster to build only the target you’re working on, like d8 or unittests.

Note: You need to set v8_target_arch if you want a non-native build, i.e. either

export GYP_DEFINES="v8_target_arch=arm"
gypfiles/gyp_v8 ...

or

gypfiles/gyp_v8 -Dv8_target_arch=arm ...

6.2.2.1 Using goma (Googlers only)

To use goma you need to set the use_goma gyp define, either by passing it to gyp_v8, i.e.

gypfiles/gyp_v8 -Duse_goma=1

or by setting the environment variable $GYP_DEFINES appropriately:

export GYP_DEFINES="use_goma=1"

Note: You may need to also set gomadir to point to the directory where you installed goma, if it’s not in the default location.
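
For example, both defines can be set in one go (the goma directory below is a placeholder for wherever you installed goma):

export GYP_DEFINES="use_goma=1 gomadir=$HOME/goma"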

If you are using goma, you’ll also want to bump the job limit, i.e.

ninja -j 100 -C out/Debug d8

6.2.3 Cross-compiling

Similar to building with Clang, you can also use a cross-compiler. Just export your toolchain (CXX/LINK environment variables should be enough) and compile. For example:

export CXX=/path/to/cross-compile-g++
export LINK=/path/to/cross-compile-g++
make arm.release

6.2.4 Xcode

From the root of your V8 checkout, run either of:

gypfiles/gyp_v8 -Dtarget_arch=ia32
gypfiles/gyp_v8 -Dtarget_arch=x64

This will generate Xcode project files in gypfiles/ that you can then either open with Xcode or compile directly from the command line:

xcodebuild -project gypfiles/all.xcodeproj -configuration Release
xcodebuild -project gypfiles/all.xcodeproj

Note: If you have configured your GYP_GENERATORS environment variable, either unset it, or set it to xcode for this to work.

6.2.4.1 Custom build settings

You can export the GYP_DEFINES environment variable in your shell to configure custom build options. The syntax is GYP_DEFINES="-Dvariable1=value1 -Dvariable2=value2" and so on for as many variables as you wish. Possibly interesting options include:

  • -Dcomponent=shared_library (see library=shared in the GCC + make section above)
  • -Dconsole=readline (see console=readline)
  • -Dv8_enable_disassembler=1 (see disassembler=on)
  • -Dv8_use_snapshot='false' (see snapshot=off)
  • -Dv8_enable_gdbjit=1 (see gdbjit=on)
  • -Dv8_use_liveobjectlist=true (see liveobjectlist=on)
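
For example, to combine two of the options above (a sketch; pick whichever defines you actually need, then regenerate the project files as described above):

export GYP_DEFINES="-Dcomponent=shared_library -Dconsole=readline"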

6.2.5 Visual Studio

You need Visual Studio 2013; older versions might still work at the moment, but this will probably change soon because we intend to use C++11 features.

6.2.5.1 Prerequisites

If you are a non-googler you need to set DEPOT_TOOLS_WIN_TOOLCHAIN=0 in the CMD. For further information about building on Windows have a look at Chromium’s build instructions.

After you have created a checkout of V8, all dependencies will already be installed.

If you get errors during the build saying that ‘python’ could not be found, add python.exe to your PATH.

If you have Visual Studio 2013 and 2015 installed side-by-side, set the environment variable GYP_MSVS_VERSION to ‘2013’ so that the right project files are created.
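
For example, in the command prompt (the value follows from the note above):

set GYP_MSVS_VERSION=2013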

6.2.5.2 Building

  • If you use the command prompt:
    1. Generate project files:

      set GYP_GENERATORS=ninja
      python gypfiles\gyp_v8

      Specify the path to python.exe if you don’t have it in your PATH.
      Append -Dtarget_arch=x64 if you want to build 64bit binaries. If you switch between ia32 and x64 targets, you may have to manually delete the generated .vcproj/.sln files before regenerating them.
      Example:

      third_party/python_26/python.exe gypfiles\gyp_v8 -Dtarget_arch=x64
    2. Build:

      Either open build\All.sln in Visual Studio, or compile on the command line as follows (adapt the path as necessary, or simply put devenv.com in your PATH):

      "c:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\devenv.com" /build Release build\All.sln

      Replace Release with Debug to build in Debug mode.
      The built binaries will be in build\Release\ or build\Debug.

  • If you use cygwin, the workflow is the same, but the syntax is slightly different:
    1. Generate project files:

      export GYP_GENERATORS=ninja
      gypfiles/gyp_v8

      This will spit out a bunch of warnings about missing input files, but it seems to be OK to ignore them. (If you have time to figure this out, we’d happily accept a patch that makes the warnings go away!)

    2. Build:

      /cygdrive/c/Program\ Files\ (x86)/Microsoft\ Visual\ Studio\ 9.0/Common7/IDE/devenv.com /build Release gypfiles/all.sln

6.2.5.3 Custom build settings

See the “custom build settings” section for Xcode above.

6.2.5.4 Running tests

You can abuse the test driver’s --buildbot flag to make it find the executables where MSVC puts them:

python tools/run-tests.py --buildbot --outdir build --arch ia32 --mode Release

6.2.6 MinGW

Building on MinGW is not officially supported, but it is possible. You even have two options:

6.2.6.1 Option 1: With Cygwin Installed

Requirements:

  • MinGW
  • Cygwin, including Python
  • Python from www.python.org (yes, you need two Python installations!)

Building:

  1. Open a MinGW shell
  2. export PATH=$PATH:/c/cygwin/bin (or wherever you installed Cygwin)
  3. make ia32.release -j8

Running tests:

  1. Open a MinGW shell
  2. export PATH=/c/Python27:$PATH (or wherever you installed Python)
  3. make ia32.release.check -j8

6.2.6.2 Option 2: Without Cygwin, just MinGW

Requirements:

  • MinGW
  • Python from www.python.org

Building and testing:

  1. Open a MinGW shell
  2. tools/mingw-generate-makefiles.sh (re-run this any time a *.gyp* file changed, such as after updating your checkout)
  3. make ia32.release (unfortunately -jX doesn’t seem to work here)
  4. make ia32.release.check -j8

7 Final Note

If you have problems or questions, please file bugs or send an email to the mailing list.

Built-in functions in V8 come in different flavors with respect to their implementation, depending on their functionality, performance requirements, and sometimes plain historical development.

Some are implemented in JavaScript directly, and are compiled into executable code at runtime just like any user JavaScript. Some of them resort to so-called runtime functions for part of their functionality. Runtime functions are written in C++ and called from JavaScript through a “%”-prefix. Usually, these runtime functions are limited to V8 internal JavaScript code. For debugging purposes, they can also be called from normal JavaScript code, if V8 is run with the flag --allow-natives-syntax. Some runtime functions are directly embedded by the compiler into generated code. For a list, see src/runtime/runtime.h.
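
As a small illustration (assuming %DebugPrint is present in your checkout’s src/runtime/runtime.h, which it usually is), a runtime function can be called directly from a script when d8 is started with --allow-natives-syntax:

// example.js - run with: d8 --allow-natives-syntax example.js
var o = { x: 42 };
%DebugPrint(o);  // calls the C++ runtime function behind the %-prefix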

Other functions are implemented as built-ins, which themselves can be implemented in a number of different ways. Some are implemented directly in platform-dependent assembly. Some are implemented in CodeStubAssembler, a platform-independent abstraction. Yet others are directly implemented in C++. Built-ins are sometimes also used to implement pieces of glue code, not necessarily entire functions. For a list, see src/builtins/builtins.h.

Quick links: browse | browse bleeding edge | changes.

8 Command-Line Access

8.1 Git

See [[Using Git|Using Git]].

9 Source Code Branches

There are several different branches of V8; if you’re unsure of which version to get, you most likely want the up-to-date stable version. Have a look at our Release Process for more information about the different branches used.

You may want to follow the V8 version that Chrome is shipping on its stable (or beta) channels, see http://omahaproxy.appspot.com.

10 V8 public API compatibility

The V8 public API (basically the files under the include/ directory) may change over time. New types/methods may be added without breaking existing functionality. When we decide we want to drop some existing class or method, we first mark it with the V8_DEPRECATED macro, which causes compile-time warnings when the deprecated methods are called by the embedder. We keep the deprecated method for one branch and then remove it. E.g. if v8::CpuProfiler::FindCpuProfile was not deprecated in the 3.17 branch and was marked as V8_DEPRECATED in 3.18, it may well be removed in the 3.19 branch.
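
As an illustration only (the exact V8_DEPRECATED signature has varied between versions; check the include/ headers of your checkout), the macro boils down to a compiler deprecation attribute, so the effect on an embedder looks roughly like this sketch:

// Model of what a deprecated API method looks like to the embedder. The real
// declaration lives in V8's include/ headers and uses the V8_DEPRECATED macro
// instead of the raw attribute shown here.
class CpuProfile;  // forward declaration, just to keep the sketch self-contained

class CpuProfiler {
 public:
  [[deprecated("scheduled for removal; see the API change notes")]]
  const CpuProfile* FindCpuProfile(unsigned uid);
};

// Any embedder call to FindCpuProfile() now produces a compile-time
// deprecation warning until the call site is migrated.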

We also maintain a document mentioning important API changes for each version.

As part of the Chromium team, the V8 team is committed to preserving and fostering a diverse, welcoming community. To this end, the Chromium Code of Conduct applies to our repos and organizations, mailing lists, blog content, and any other Chromium-supported communication group, as well as any private communication initiated in the context of these spaces.

This document is intended as an introduction to writing CodeStubAssembler builtins, and is targeted towards V8 developers.

11 Builtins

In V8, builtins can be seen as chunks of code that are executable by the VM at runtime. A common use case is to implement the functions of builtin objects (such as RegExp or Promise), but builtins can also be used to provide other internal functionality (e.g. as part of the IC system).

V8’s builtins can be implemented using a number of different methods (each with different trade-offs):

  • Platform-dependent assembly language: can be highly efficient, but need manual ports to all platforms and are difficult to maintain.
  • C++: very similar in style to runtime functions and have access to V8’s powerful runtime functionality, but usually not suited to performance-sensitive areas.
  • JavaScript: concise and readable code, access to fast intrinsics, but frequent usage of slow runtime calls, subject to unpredictable performance through type pollution, and subtle issues around (complicated and non-obvious) JS semantics.
  • CodeStubAssembler: provides efficient low-level functionality that is very close to assembly language while remaining platform-independent and preserving readability.

The remaining document will focus on the latter and give a brief tutorial for developing a simple CodeStubAssembler (CSA) builtin exposed to JavaScript.

12 CodeStubAssembler

V8’s CodeStubAssembler is a custom, platform-agnostic assembler that provides low-level primitives as a thin abstraction over assembly, but also offers an extensive library of higher-level functionality.

// Low-level:
// Loads the pointer-sized data at addr into value.
Node* addr = /* ... */;
Node* value = Load(MachineType::IntPtr(), addr);

// And high-level:
// Performs the JS operation ToString(object).
// ToString semantics are specified at https://tc39.github.io/ecma262/#sec-tostring. 
Node* object = /* ... */;
Node* string = ToString(context, object);

CSA builtins run through part of the TurboFan compilation pipeline (including block scheduling and register allocation, but notably not through optimization passes) which then emits the final executable code.

13 Writing a CodeStubAssembler Builtin

In this section, we will write a simple CSA builtin that takes a single argument, and returns whether it represents the number 42. The builtin will be exposed to JS by installing it on the Math object (because we can).

This example demonstrates:

  • Creating a CSA builtin with JavaScript linkage, which can be called like a JS function.
  • Using CSA to implement simple logic: Smi and heap-number handling, conditionals, and calls to TFS builtins.
  • Using CSA Variables.
  • Installation of the CSA builtin on the Math object.

In case you’d like to follow along locally, the following code is based off revision 7a8d20a7.

13.1 Declaring MathIs42

Builtins are declared in the BUILTIN_LIST_BASE macro in src/builtins/builtins-definitions.h. To create a new CSA builtin with JS linkage and one parameter named X:

#define BUILTIN_LIST_BASE(CPP, API, TFJ, TFC, TFS, TFH, ASM, DBG)              \
  // [... snip ...]
  TFJ(MathIs42, 1, kX)                                                         \
  // [... snip ...]

Note that BUILTIN_LIST_BASE takes several different macros that denote different builtin kinds (see inline documentation for more details). CSA builtins specifically are split into:

  • TFJ: JavaScript linkage.
  • TFS: Stub linkage.
  • TFC: Stub linkage builtin requiring a custom interface descriptor (e.g. if arguments are untagged or need to be passed in specific registers).
  • TFH: Specialized stub linkage builtin used for IC handlers.

13.2 Defining MathIs42

Builtin definitions are located in src/builtins/builtins-*-gen.cc files, roughly organized by topic. Since we will be writing a Math builtin, we’ll put our definition into src/builtins/builtins-math-gen.cc.

// TF_BUILTIN is a convenience macro that creates a new subclass of the given
// assembler behind the scenes.
TF_BUILTIN(MathIs42, MathBuiltinsAssembler) {
  // Load the current function context (an implicit argument for every stub)
  // and the X argument. Note that we can refer to parameters by the names
  // defined in the builtin declaration.
  Node* const context = Parameter(Descriptor::kContext);
  Node* const x = Parameter(Descriptor::kX);

  // At this point, x can be basically anything - a Smi, a HeapNumber,
  // undefined, or any other arbitrary JS object. Let’s call the ToNumber
  // builtin to convert x to a number we can use.
  // CallBuiltin can be used to conveniently call any CSA builtin.
  Node* const number = CallBuiltin(Builtins::kToNumber, context, x);

  // Create a CSA variable to store the resulting value. The type of the
  // variable is kTagged since we will only be storing tagged pointers in it.
  VARIABLE(var_result, MachineRepresentation::kTagged);

  // We need to define a couple of labels which will be used as jump targets.
  Label if_issmi(this), if_isheapnumber(this), out(this);

  // ToNumber always returns a number. We need to distinguish between Smis
  // and heap numbers - here, we check whether number is a Smi and conditionally
  // jump to the corresponding labels.
  Branch(TaggedIsSmi(number), &if_issmi, &if_isheapnumber);

  // Binding a label begins generating code for it.
  BIND(&if_issmi);
  {
    // SelectBooleanConstant returns the JS true/false values depending on
    // whether the passed condition is true/false. The result is bound to our
    // var_result variable, and we then unconditionally jump to the out label.
    var_result.Bind(SelectBooleanConstant(SmiEqual(number, SmiConstant(42))));
    Goto(&out);
  }

  BIND(&if_isheapnumber);
  {
    // ToNumber can only return either a Smi or a heap number. Just to make sure
    // we add an assertion here that verifies number is actually a heap number.
    CSA_ASSERT(this, IsHeapNumber(number));
    // Heap numbers wrap a floating point value. We need to explicitly extract
    // this value, perform a floating point comparison, and again bind
    // var_result based on the outcome.
    Node* const value = LoadHeapNumberValue(number);
    Node* const is_42 = Float64Equal(value, Float64Constant(42));
    var_result.Bind(SelectBooleanConstant(is_42));
    Goto(&out);
  }

  BIND(&out);
  {
    Node* const result = var_result.value();
    CSA_ASSERT(this, IsBoolean(result));
    Return(result);
  }
}

13.3 Attaching Math.Is42

Builtin objects such as Math are set up mostly in src/bootstrapper.cc (with some setup occurring in .js files). Attaching our new builtin is simple:

// Existing code to set up Math, included here for clarity.
Handle<JSObject> math = factory->NewJSObject(cons, TENURED);
JSObject::AddProperty(global, name, math, DONT_ENUM);
// [... snip ...]
SimpleInstallFunction(math, "is42", Builtins::kMathIs42, 1, true);

Now that Is42 is attached, it can be called from JS:

$ out/debug/d8
d8> Math.is42(42)
true
d8> Math.is42("42.0")
true
d8> Math.is42(true)
false
d8> Math.is42({ valueOf: () => 42 })
true

13.4 Defining and calling a builtin with stub linkage

CSA builtins can also be created with stub linkage (instead of JS linkage as we used above in MathIs42). Such builtins can be useful to extract commonly-used code into a separate code object that can be used by multiple callers, while the code is only produced once. Let’s extract the code that handles heap numbers into a separate builtin called MathIsHeapNumber42, and call it from MathIs42.

Defining and using TFS stubs is easy; declarations are again placed in src/builtins/builtins-definitions.h:

#define BUILTIN_LIST_BASE(CPP, API, TFJ, TFC, TFS, TFH, ASM, DBG)              \
  // [... snip ...]
  TFS(MathIsHeapNumber42, kX)                                                  \
  TFJ(MathIs42, 1, kX)                                                         \
  // [... snip ...]

Note that currently, order within BUILTIN_LIST_BASE does matter. Since MathIs42 calls MathIsHeapNumber42, the former needs to be listed after the latter (this requirement should be lifted at some point).

The definition is also straightforward. In src/builtins/builtins-math-gen.cc:

// Defining a TFS builtin works exactly the same way as TFJ builtins.
TF_BUILTIN(MathIsHeapNumber42, MathBuiltinsAssembler) {
  Node* const x = Parameter(Descriptor::kX);
  CSA_ASSERT(this, IsHeapNumber(x));
  Node* const value = LoadHeapNumberValue(x);
  Node* const is_42 = Float64Equal(value, Float64Constant(42));
  Return(SelectBooleanConstant(is_42));
}

Finally, let’s call our new builtin from MathIs42:

TF_BUILTIN(MathIs42, MathBuiltinsAssembler) {
  // [... snip ...]
  BIND(&if_isheapnumber);
  {
    // Instead of handling heap numbers inline, we now call into our new TFS stub.
    var_result.Bind(CallBuiltin(Builtins::kMathIsHeapNumber42, context, number));
    Goto(&out);
  }
  // [... snip ...]
}

Why should you care about TFS builtins at all? Why not leave the code inline (or extracted into a helper method for better readability)?

An important reason is code space: builtins are generated at compile-time and included in the V8 snapshot, thus unconditionally taking up (significant) space in every created isolate. Extracting large chunks of commonly used code to TFS builtins can quickly lead to space savings in the 10s to 100s of KBs.

14 Basic commit guidelines

When you’re committing to the V8 repositories, ensure that you follow those guidelines:

  1. Find the right reviewer for your changes and for patches you’re asked to review.
  2. Be available on IM and/or email before and after you land the change.
  3. Watch the waterfall until all bots turn green after your change.
  4. When landing a TBR change (To Be Reviewed), make sure to notify the people whose code you’re changing. Usually just send the review e-mail.

In short, do the right thing for the project, not the easiest thing to get code committed, and above all: use your best judgement.

Don’t be afraid to ask questions. There is always someone who will immediately read messages sent to the v8-committers mailing list who can help you.

15 Changes with multiple reviewers

There are occasionally changes with a lot of reviewers on them, since sometimes several people might need to be in the loop for a change because of multiple areas of responsibility and expertise.

The problem is that without some guidelines, there’s no clear responsibility given in these reviews.

If you’re the sole reviewer on a change, you know you have to do a good job. When there are three other people, you sometimes assume that somebody else must have looked carefully at some part of the review. Sometimes all the reviewers think this and the change isn’t reviewed properly.

In other cases, some reviewers say “LGTM” for a patch, while others are still expecting changes. The author can get confused as to the status of the review, and some patches have been checked in where at least one reviewer expected further changes before committing.

At the same time, we want to encourage many people to participate in the review process and keep tabs on what’s going on.

So, here are some guidelines to help clarify the process:

  1. When a patch author requests more than one reviewer, they should make clear in the review request email what they expect the responsibility of each reviewer to be. For example, you could write this in the email:
    • larry: bitmap changes
    • sergey: process hacks
    • everybody else: FYI
  2. In this case, you might be on the review list because you’ve asked to be in the loop for multiprocess changes, but you wouldn’t be the primary reviewer and the author and other reviewers wouldn’t be expecting you to review all the diffs in detail.
  3. If you get a review that includes many other people, and the author didn’t do (1), please ask them what part you’re responsible for if you don’t want to review the whole thing in detail.
  4. The author should wait for approval from everybody on the reviewer list before checking in.
  5. People who are on a review without clear review responsibility (i.e. drive-by reviews) should be super responsive and not hold up the review. The patch author should feel free to ping them mercilessly if they are.
  6. If you’re an “FYI” person on a review and you didn’t actually review in detail (or at all), but don’t have a problem with the patch, note this. You could say something like “rubber stamp” or “ACK” instead of “LGTM.” This way the real reviewers know not to trust that you did their work for them, but the author of the patch knows they don’t have to wait for further feedback from you. Hopefully we can still keep everybody in the loop but have clear ownership and detailed reviews. It might even speed up some changes since you can quickly “ACK” changes you don’t care about, and the author knows they don’t have to wait for feedback from you.

(Adapted from: http://dev.chromium.org/developers/committers-responsibility )

The information on this page explains how to contribute to V8. Be sure to read the whole thing — including the small print at the end — before sending us a contribution.

16 Get the code

See Checking out source.

17 Before you contribute

17.1 Ask on V8’s mailing list for guidance

Before you start working on a larger contribution to V8, you should get in touch with us first through the V8 contributor mailing list so we can help out and possibly guide you. Coordinating up front makes it much easier to avoid frustration later on.

17.2 Sign the CLA

Before we can use your code you have to sign the Google Individual Contributor License Agreement, which you can do online. This is mainly because you own the copyright to your changes, even after your contribution becomes part of our codebase, so we need your permission to use and distribute your code. We also need to be sure of various other things, for instance that you’ll tell us if you know that your code infringes on other people’s patents. You don’t have to do this until after you’ve submitted your code for review and a member has approved it, but you will have to do it before we can put your code into our codebase.

Contributions made by corporations are covered by a different agreement than the one above, the Software Grant and Corporate Contributor License Agreement.

Sign them online here.

18 Submit your code

The source code of V8 follows the Google C++ Style Guide, so you should familiarize yourself with those guidelines. Before submitting code, you must pass all our tests and successfully run the presubmit checks:

tools/presubmit.py

The presubmit script uses a linter from Google, cpplint.py. External contributors can get this from here and place it in their path.

18.1 Upload to V8’s codereview tool

All submissions, including submissions by project members, require review. We use the same code-review tools and process as the Chromium project. In order to submit a patch, you need to get the depot_tools and follow these instructions on requesting a review (using your V8 workspace instead of a Chromium workspace).

18.2 Look out for breakage or regressions

Before submitting your code please check the buildbot console to see that the columns are mostly green before checking in your changes — otherwise you will not know if your changes break the build or not. When your change is committed, watch the buildbot console until the bots turn green after your change.

In general, V8 should conform to Google’s/Chrome’s C++ Style Guide for new code that is written. Your V8 code should conform to them as much as possible. There will always be cases where Google/Chrome Style Guide conformity or Google/Chrome best practices are extremely cumbersome or underspecified for our use cases. We document these exceptions here.

19 Cross-compiling with GN

First, make sure you can build with GN.

Then, add android to your .gclient configuration file.

target_os = ['android']  # Add this to get Android stuff checked out.

The target_os field is a list, so if you’re also building on unix it’ll look like this:

target_os = ['android', 'unix']  # multiple target oses

Run gclient sync, and you’ll get a large checkout under ./third_party/android_tools.

Enable developer mode on your phone or tablet, and turn on USB debugging, via instructions here. Also, get the handy adb tool on your path. It’s in your checkout at ./third_party/android_tools/sdk/platform-tools.

Use v8gen.py to generate an arm release or debug build:

tools/dev/v8gen.py arm.release

Then run gn args out.gn/arm.release and make sure you have the following keys:

target_os = "android"
target_cpu = "arm"
v8_target_cpu = "arm"
is_component_build = false

The keys should be the same for debug builds. If you are building for an arm64 device like the Pixel C, which supports 32bit and 64bit binaries, the keys should look like this:

target_os = "android"
target_cpu = "arm64"
v8_target_cpu = "arm64"
is_component_build = false

Now build:

ninja -C out.gn/arm.release d8

Using adb, copy the binary and snapshot files to the phone:

adb push out.gn/arm.release/d8 /data/local/tmp
adb push out.gn/arm.release/natives_blob.bin /data/local/tmp
adb push out.gn/arm.release/snapshot_blob.bin /data/local/tmp


rebuffat:~/src/v8$ adb shell
bullhead:/ $ cd /data/local/tmp
bullhead:/data/local/tmp $ ls
v8 natives_blob.bin snapshot_blob.bin
bullhead:/data/local/tmp $ ./d8
V8 version 5.8.0 (candidate)
d8> 'w00t!'
"w00t!"
d8> 

20 Building with Gyp

See Building with Gyp for more information on how to build.

21 Using Sourcery G++ Lite

The Sourcery G++ Lite cross compiler suite is a free version of Sourcery G++ from CodeSourcery. There is a page for the GNU Toolchain for ARM Processors. Determine the version you need for your host/target combination.

The following instructions use 2009q1-203 for ARM GNU/Linux; if you use a different version, please change the URLs and TOOL_PREFIX below accordingly.

21.1 Installing on host and target

The simplest way of setting this up is to install the full Sourcery G++ Lite package on both the host and target at the same location. This will ensure that all the libraries required are available on both sides. If you want to use the default libraries on the host, there is no need to install anything on the target.

The following script will install in /opt/codesourcery:

#!/bin/sh

sudo mkdir /opt/codesourcery
cd /opt/codesourcery
sudo chown $USERNAME .
chmod g+ws .
umask 2
wget http://www.codesourcery.com/sgpp/lite/arm/portal/package4571/public/arm-none-linux-gnueabi/arm-2009q1-203-arm-none-linux-gnueabi-i686-pc-linux-gnu.tar.bz2
tar -xvf arm-2009q1-203-arm-none-linux-gnueabi-i686-pc-linux-gnu.tar.bz2

22 Prerequisites

23 Get the code

  • Use the instructions from Using Git to get the code
  • Next, you need to add the Android dependencies:

    v8$ echo "target_os = ['android']" >> ../.gclient && gclient sync --nohooks
  • The sync will take a while the first time as it downloads the Android NDK to v8/third_party
  • If you want to use a different NDK, or you are building on Mac where you must supply your own NDK, you need to pass the path to your NDK installation when running make.

    make android_arm.release -j16 android_ndk_root=[full path to ndk]

24 Get the Android SDK

  • tested version: r15
  • download the SDK from http://developer.android.com/sdk/index.html
  • extract it
  • install the “Platform tools” using the SDK manager that you can start by running tools/android
  • now you have a platform_tools/adb binary which will be used later; put it in your PATH or remember where it is

25 Set up your device

  • Enable USB debugging (Gingerbread: Settings > Applications > Development > USB debugging; Ice Cream Sandwich: Settings > Developer Options > USB debugging)
  • connect your device to your workstation
  • make sure adb devices shows it; you may have to edit udev rules to give yourself proper permissions
  • run adb shell to get an ssh-like shell on the device. In that shell, do:

    cd /data/local/tmp
    mkdir v8
    cd v8

26 Push stuff onto the device

  • make sure your device is connected
  • from your workstation’s shell:

    adb push /file/you/want/to/push /data/local/tmp/v8/

27 Compile V8 for Android

Currently two architectures (android_arm and android_ia32) are supported, each in debug or release mode. The following steps work equally well for both ARM and ia32, on either the emulator or real devices.

  • compile:

    make android_arm.release -j16
  • push the resulting binary to the device:

    adb push out/android_arm.release/d8 /data/local/tmp/v8/d8
  • the most comfortable way to run it is from your workstation’s shell as a one-off command (rather than starting an interactive shell session on the device); that way you can use pipes or whatever to process the output as necessary:

    adb shell /data/local/tmp/v8/d8 <parameters>
  • warning: when you cancel such an “adb shell whatever” command using Ctrl+C, the process on the phone will sometimes keep running.
  • Alternatively, use the .check suffix to automatically push test binaries and test cases onto the device and run them.

    make android_arm.release.check

28 Profile

  • compile a binary, push it to the device, keep a copy of it on the host

    make android_arm.release -j16
    adb push out/android_arm.release/d8 /data/local/tmp/v8/d8-version.under.test
    cp out/android_arm.release/d8 ./d8-version.under.test
  • get a profiling log and copy it to the host:

    adb shell /data/local/tmp/v8/d8-version.under.test benchmark.js --prof
    adb pull /data/local/tmp/v8/v8.log ./
  • open v8.log in your favorite editor and edit the first line to match the full path of the d8-version.under.test binary on your workstation (instead of the /data/local/tmp/v8/ path it had on the device); a sed sketch for this step follows the list
  • run the tick processor with the host’s d8 and an appropriate nm binary:

    cp out/ia32.release/d8 ./d8  # only required once
    tools/linux-tick-processor --nm=$ANDROID_NDK_ROOT/toolchain/bin/arm-linux-androideabi-nm
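
If you prefer not to edit v8.log by hand, a sed one-liner along these lines rewrites the binary path on the first line (the paths are examples; adjust them to your setup):

sed -i '1s|/data/local/tmp/v8/d8-version.under.test|'"$PWD"'/d8-version.under.test|' v8.log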

29 Using the Static Libraries

The static libraries created by the build process are found in out/android_arm.release/obj.target/tools/gyp/. They are “thin” archives, which means that the .a files contain symbolic links to the .o files used to make the archive. This makes these libraries unusable on any machine but the one that built the library.

A program linking with V8 must link with libv8_libplatform.a, libv8_base.a, libv8_libbase.a, and one of the snapshot libraries, such as libv8_nosnapshot.a, which is produced if V8 is compiled with the snapshot=off option.

Unless V8 was compiled with the i18nsupport=off option, the program must also link with the International Components for Unicode (ICU) library found in out/android_arm.release/obj.target/third_party/icu/.
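
As a rough sketch, a link line for an embedder program might look like the following (the compiler, source file, and ICU archive names are placeholders; the library paths follow the directories mentioned above, and remember that the thin archives only work on the machine that built them):

arm-linux-androideabi-g++ -Iinclude hello-world.cc -o hello-world \
  out/android_arm.release/obj.target/tools/gyp/libv8_libplatform.a \
  out/android_arm.release/obj.target/tools/gyp/libv8_base.a \
  out/android_arm.release/obj.target/tools/gyp/libv8_libbase.a \
  out/android_arm.release/obj.target/tools/gyp/libv8_nosnapshot.a \
  out/android_arm.release/obj.target/third_party/icu/libicuuc.a \
  out/android_arm.release/obj.target/third_party/icu/libicui18n.a \
  -lpthread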

30 Compile SpiderMonkey for Lollipop

cd firefox/js/src
autoconf2.13
./configure \
  --target=arm-linux-androideabi \
  --with-android-ndk=$ANDROID_NDK_ROOT \
  --with-android-version=21 \
  --without-intl-api \
  --disable-tests \
  --enable-android-libstdcxx \
  --enable-pie
make
adb push -p js/src/shell/js /data/local/tmp/js

31 Introduction

V8 provides extensive debugging functionality to both users and embedders. Users will usually interact with the V8 debugger through the Chrome Devtools interface. Embedders (including Devtools) need to rely directly on the Inspector Protocol.

This page is intended to give embedders the basic tools they need to implement debugging support in V8.

32 Connecting to Inspector

V8’s command-line debug shell d8 includes a simple inspector integration through the InspectorFrontend and InspectorClient. The client sets up a communication channel for messages sent from the embedder to V8 in

  static void SendInspectorMessage(
      const v8::FunctionCallbackInfo<v8::Value>& args) {
    // [...] Create a StringView that Inspector can understand.
    session->dispatchProtocolMessage(message_view);
  }

while the frontend establishes a channel for messages sent from V8 to the embedder by implementing
sendResponse and sendNotification, which then forward to:

  void Send(const v8_inspector::StringView& string) {
    // [...] String transformations.
    // Grab the global property called 'receive' from the current context.
    Local<String> callback_name =
        v8::String::NewFromUtf8(isolate_, "receive", v8::NewStringType::kNormal)
            .ToLocalChecked();
    Local<Context> context = context_.Get(isolate_);
    Local<Value> callback =
        context->Global()->Get(context, callback_name).ToLocalChecked();
    // And call it to pass the message on to JS.
    if (callback->IsFunction()) {
      // [...]
      MaybeLocal<Value> result = Local<Function>::Cast(callback)->Call(
          context, Undefined(isolate_), 1, args);
    }
  }

33 Using the Inspector Protocol

Continuing with our example, d8 forwards inspector messages to JavaScript. The following code implements a basic, but fully functional interaction with inspector through d8:

// inspector-demo.js
// Receiver function called by d8.
function receive(message) {
  print(message)
}

const msg = JSON.stringify({
      id: 0,
      method: "Debugger.enable",
    });

// Call the function provided by d8.
send(msg);

// Run this file by executing 'd8 --enable-inspector inspector-demo.js'.

34 Further Documentation

A more fleshed-out example of Inspector API usage is available at test-api.js, which implements a simple debugging API for use by V8’s test suite.

V8 also contains an alternative Inspector integration at inspector-test.cc.

The Chrome Devtools wiki provides full documentation of all available functions.
JavaScript’s integration with Netscape Navigator in the mid 1990s made it much easier for web developers to access HTML page elements such as forms, frames, and images. JavaScript quickly became popular for customizing controls and adding animation and by the late 1990s the vast majority of scripts simply swapped one image for another in response to user-generated mouse events.

More recently, following the arrival of AJAX, JavaScript has become a central technology for implementing web-based applications such as our very own GMail. JavaScript programs have grown from a few lines to several hundred kilobytes of source code. While JavaScript is very efficient in doing the things it was designed to do, performance has become a limiting factor to further development of web-based JavaScript applications.

V8 is a new JavaScript engine specifically designed for fast execution of large JavaScript applications. In several benchmark tests, V8 is many times faster than JScript (in Internet Explorer), SpiderMonkey (in Firefox), and JavaScriptCore (in Safari). If your web application is bound by JavaScript execution speed, using V8 instead of your current JavaScript engine is likely to improve your application’s performance. How big the improvement is depends on how much JavaScript is executed and the nature of that JavaScript. For example, if the functions in your application tend to be run again and again, the performance improvement will be greater than if many different functions tend to run only once. The reason for this will become clearer as you read the rest of this document.

There are three key areas to V8’s performance:

35 Fast Property Access

JavaScript is a dynamic programming language: properties can be added to, and deleted from, objects on the fly. This means an object’s properties are likely to change. Most JavaScript engines use a dictionary-like data structure as storage for object properties - each property access requires a dynamic lookup to resolve the property’s location in memory. This approach makes accessing properties in JavaScript typically much slower than accessing instance variables in programming languages like Java and Smalltalk. In these languages, instance variables are located at fixed offsets determined by the compiler due to the fixed object layout defined by the object’s class. Access is simply a matter of a memory load or store, often requiring only a single instruction.

To reduce the time required to access JavaScript properties, V8 does not use dynamic lookup to access properties. Instead, V8 dynamically creates hidden classes behind the scenes. This basic idea is not new - the prototype-based programming language Self used maps to do something similar. (See for example, An Efficient Implementation of Self, a Dynamically-Typed Object-Oriented Language Based on Prototypes). In V8, an object changes its hidden class when a new property is added.

To clarify this, imagine a simple JavaScript function as follows:

function Point(x, y) {
  this.x = x;
  this.y = y;
}

When new Point(x, y) is executed a new Point object is created. When V8 does this for the first time, V8 creates an initial hidden class of Point, called C0 in this example. As the object does not yet have any properties defined the initial class is empty. At this stage the Point object’s hidden class is C0.

Executing the first statement in Point (this.x = x;) creates a new property, x, in the Point object. In this case, V8:

  • creates another hidden class C1, based on C0, then adds information to C1 that describes the object as having one property, x, the value of which is stored at offset 0 (zero) in the Point object.
  • updates C0 with a class transition indicating that if a property x is added to an object described by C0 then the hidden class C1 should be used instead of C0. At this stage the Point object’s hidden class is C1.

Executing the second statement in Point (this.y = y;) creates a new property, y, in the Point object. In this case, V8:

  • creates another hidden class C2, based on C1, then adds information to C2 that describes the object as also having property y stored at offset 1 (one) in the Point object.
  • updates C1 with a class transition indicating that if a property y is added to an object described by C1 then the hidden class C2 should be used instead of C1. At this stage the Point object’s hidden class is C2.

It might seem inefficient to create a new hidden class whenever a property is added. However, because of the class transitions the hidden classes can be reused. The next time a new Point is created no new hidden classes are created, instead the new Point object shares the classes with the first Point object. For example, if another Point object is created:

  • initially the Point object has no properties so the newly created object refers to the initial class C0.
  • when property x is added, V8 follows the hidden class transition from C0 to C1 and writes the value of x at the offset specified by C1.
  • when property y is added, V8 follows the hidden class transition from C1 to C2 and writes the value of y at the offset specified by C2.

Even though JavaScript is more dynamic than most object oriented languages, the runtime behavior of most JavaScript programs results in a high degree of structure-sharing using the above approach. There are two advantages to using hidden classes: property access does not require a dictionary lookup, and they enable V8 to use the classic class-based optimization, inline caching. For more on inline caching see Efficient Implementation of the Smalltalk-80 System.

36 Dynamic Machine Code Generation

V8 compiles JavaScript source code directly into machine code when it is first executed. There are no intermediate byte codes, no interpreter. Property access is handled by inline cache code that may be patched with other machine instructions as V8 executes.

During initial execution of the code for accessing a property of a given object, V8 determines the object’s current hidden class. V8 optimizes property access by predicting that this class will also be used for all future objects accessed in the same section of code and uses the information in the class to patch the inline cache code to use the hidden class. If V8 has predicted correctly the property’s value is assigned (or fetched) in a single operation. If the prediction is incorrect, V8 patches the code to remove the optimization.

For example, the JavaScript code to access property x from a Point object is:

point.x

In V8, the machine code generated for accessing x is:

# ebx = the point object
cmp [ebx,<hidden class offset>],<cached hidden class>
jne <inline cache miss>
mov eax,[ebx, <cached x offset>]

If the object’s hidden class does not match the cached hidden class, execution jumps to the V8 runtime system that handles inline cache misses and patches the inline cache code. If there is a match, which is the common case, the value of the x property is simply retrieved.

When there are many objects with the same hidden class the same benefits are obtained as for most static languages. The combination of using hidden classes to access properties with inline caching and machine code generation optimizes for cases where the same type of object is frequently created and accessed in a similar way. This greatly improves the speed at which most JavaScript code can be executed.

37 Efficient Garbage Collection

V8 reclaims memory used by objects that are no longer required in a process known as garbage collection. To ensure fast object allocation, short garbage collection pauses, and no memory fragmentation V8 employs a stop-the-world, generational, accurate, garbage collector. This means that V8:

  • stops program execution when performing a garbage collection cycle.
  • processes only part of the object heap in most garbage collection cycles. This minimizes the impact of stopping the application.
  • always knows exactly where all objects and pointers are in memory. This avoids falsely identifying objects as pointers which can result in memory leaks.

In V8, the object heap is segmented into two parts: new space where objects are created, and old space to which objects surviving a garbage collection cycle are promoted. If an object is moved in a garbage collection cycle, V8 updates all pointers to the object.
If you’ve read the Getting Started guide you will already be familiar with using V8 as a standalone virtual machine and with some key V8 concepts such as handles, scopes, and contexts. This document discusses these concepts further and introduces others that are key to embedding V8 within your own C++ application.

The V8 API provides functions for compiling and executing scripts, accessing C++ methods and data structures, handling errors, and enabling security checks. Your application can use V8 just like any other C++ library. Your C++ code accesses V8 through the V8 API by including the header include/v8.h.

The V8 Design Elements document provides background you may find useful when optimizing your application for V8.

38 Audience

This document is intended for C++ programmers who want to embed the V8 JavaScript engine within a C++ application. It will help you to make your own application’s C++ objects and methods available to JavaScript, and to make JavaScript objects and functions available to your C++ application.

39 Handles and Garbage Collection

A handle provides a reference to a JavaScript object’s location in the heap. The V8 garbage collector reclaims memory used by objects that can no longer be accessed. During the garbage collection process the garbage collector often moves objects to different locations in the heap. When the garbage collector moves an object the garbage collector also updates all handles that refer to the object with the object’s new location.

An object is considered garbage if it is inaccessible from JavaScript and there are no handles that refer to it. From time to time the garbage collector removes all objects considered to be garbage. V8’s garbage collection mechanism is key to V8’s performance. To learn more about it see V8 Design Elements.

There are several types of handles:

  • Local handles are held on a stack and are deleted when the appropriate destructor is called. These handles’ lifetime is determined by a handle scope, which is often created at the beginning of a function call. When the handle scope is deleted, the garbage collector is free to deallocate those objects previously referenced by handles in the handle scope, provided they are no longer accessible from JavaScript or other handles. This type of handle is used in the example in Getting Started.

Local handles have the class Local<SomeType>.

Note: The handle stack is not part of the C++ call stack, but the handle scopes are embedded in the C++ stack. Handle scopes can only be stack-allocated, not allocated with new.

  • Persistent handles provide a reference to a heap-allocated JavaScript object, just like a local handle. There are two flavors, which differ in the lifetime management of the reference they handle. Use a persistent handle when you need to keep a reference to an object for more than one function call, or when handle lifetimes do not correspond to C++ scopes. Google Chrome, for example, uses persistent handles to refer to Document Object Model (DOM) nodes. A persistent handle can be made weak, using PersistentBase::SetWeak, to trigger a callback from the garbage collector when the only references to an object are from weak persistent handles. (A minimal sketch of this lifecycle follows this list.)
    • A UniquePersistent<SomeType> handle relies on C++ constructors and destructors to manage the lifetime of the underlying object.
    • A Persistent<SomeType> can be constructed with its constructor, but must be explicitly cleared with Persistent::Reset.
  • There are other types of handles which are rarely used and which we only mention briefly here:
    • Eternal is a persistent handle for JavaScript objects that are expected to never be deleted. It is cheaper to use because it relieves the garbage collector from determining the liveness of that object.
    • Both Persistent and UniquePersistent cannot be copied, which makes them unsuitable as values in pre-C++11 standard library containers. PersistentValueMap and PersistentValueVector provide container classes for persistent values, with map- and vector-like semantics. C++11 embedders do not require these, since C++11 move semantics solve the underlying problem.
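
The following is a minimal, illustrative sketch of the persistent-handle lifecycle. The WrapperState struct and the OnWeak callback are invented for this example; it assumes the v8 namespace and an existing Isolate* isolate and Local<Object> obj, as in the other examples in this guide, and exact signatures may differ between V8 releases.

struct WrapperState {
  // Embedder-side data associated with the wrapped JavaScript object.
};

void OnWeak(const WeakCallbackInfo<WrapperState>& data) {
  // Called by the garbage collector once only weak persistent handles
  // refer to the object; clean up the embedder-side state here.
  delete data.GetParameter();
}

// Keep a reference that survives the current handle scope.
Persistent<Object> persistent(isolate, obj);

// Option 1: make the handle weak so the object may be collected,
// with OnWeak invoked when that happens.
persistent.SetWeak(new WrapperState(), OnWeak, WeakCallbackType::kParameter);

// Option 2 (instead of SetWeak): release the reference explicitly when done.
// persistent.Reset();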

Of course, creating a local handle every time you create an object can result in a lot of handles! This is where handle scopes are very useful. You can think of a handle scope as a container that holds lots of handles. When the handle scope’s destructor is called all handles created within that scope are removed from the stack. As you would expect, this results in the objects to which the handles point being eligible for deletion from the heap by the garbage collector.

Returning to our very simple example, described in Getting Started, in the following diagram you can see the handle-stack and heap-allocated objects. Note that Context::New() returns a Local handle, and we create a new Persistent handle based on it to demonstrate the usage of Persistent handles.

When the destructor, HandleScope::~HandleScope, is called the handle scope is deleted. Objects referred to by handles within the deleted handle scope are eligible for removal in the next garbage collection if there are no other references to them. The garbage collector can also remove the source_obj and script_obj objects from the heap as they are no longer referenced by any handles or otherwise reachable from JavaScript. Since the context handle is a persistent handle, it is not removed when the handle scope is exited. The only way to remove the context handle is to explicitly call Reset on it.

Note: Throughout this document the term handle refers to a local handle, when discussing a persistent handle that term is used in full.

It is important to be aware of one common pitfall with this model: you cannot return a local handle directly from a function that declares a handle scope. If you do, the local handle you’re trying to return will end up being deleted by the handle scope’s destructor immediately before the function returns. The proper way to return a local handle is to construct an EscapableHandleScope instead of a HandleScope and to call the Escape method on the handle scope, passing in the handle whose value you want to return. Here’s an example of how that works in practice:

// This function returns a new array with three elements, x, y, and z.
Local<Array> NewPointArray(int x, int y, int z) {
  v8::Isolate* isolate = v8::Isolate::GetCurrent();

  // We will be creating temporary handles so we use a handle scope.
  EscapableHandleScope handle_scope(isolate);

  // Create a new empty array.
  Local<Array> array = Array::New(isolate, 3);

  // Return an empty result if there was an error creating the array.
  if (array.IsEmpty())
    return Local<Array>();

  // Fill out the values
  array->Set(0, Integer::New(isolate, x));
  array->Set(1, Integer::New(isolate, y));
  array->Set(2, Integer::New(isolate, z));

  // Return the value through Escape.
  return handle_scope.Escape(array);
}

The Escape method copies the value of its argument into the enclosing scope, deletes all its local handles, and then gives back the new handle copy which can safely be returned.

40 Contexts

In V8, a context is an execution environment that allows separate, unrelated, JavaScript applications to run in a single instance of V8. You must explicitly specify the context in which you want any JavaScript code to be run.

Why is this necessary? Because JavaScript provides a set of built-in utility functions and objects that can be changed by JavaScript code. For example, if two entirely unrelated JavaScript functions both changed the global object in the same way then unexpected results are fairly likely to happen.

In terms of CPU time and memory, it might seem an expensive operation to create a new execution context given the number of built-in objects that must be built. However, V8’s extensive caching ensures that, while the first context you create is somewhat expensive, subsequent contexts are much cheaper. This is because the first context needs to create the built-in objects and parse the built-in JavaScript code while subsequent contexts only have to create the built-in objects for their context. With the V8 snapshot feature (activated with build option snapshot=yes, which is the default) the time spent creating the first context will be highly optimized as a snapshot includes a serialized heap which contains already compiled code for the built-in JavaScript code. Along with garbage collection, V8’s extensive caching is also key to V8’s performance, for more information see V8 Design Elements.

When you have created a context you can enter and exit it any number of times. While you are in context A you can also enter a different context, B, which means that you replace A as the current context with B. When you exit B then A is restored as the current context. This is illustrated below:
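
As a minimal sketch in code (assuming both contexts were created on the same isolate, and using the v8 namespace as in the other examples), the enter/exit sequence looks roughly like this:

Local<Context> context_a = Context::New(isolate);
Local<Context> context_b = Context::New(isolate);

context_a->Enter();   // A is now the current context.
context_b->Enter();   // B replaces A as the current context.
// ... run JavaScript in context B ...
context_b->Exit();    // A is restored as the current context.
context_a->Exit();    // no context is current any more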

Note that the built-in utility functions and objects of each context are kept separate. You can optionally set a security token when you create a context. See the Security Model section for more information.

The motivation for using contexts in V8 was so that each window and iframe in a browser can have its own fresh JavaScript environment.

41 Templates

A template is a blueprint for JavaScript functions and objects in a context. You can use a template to wrap C++ functions and data structures within JavaScript objects so that they can be manipulated by JavaScript scripts. For example, Google Chrome uses templates to wrap C++ DOM nodes as JavaScript objects and to install functions in the global namespace. You can create a set of templates and then use the same ones for every new context you make. You can have as many templates as you require. However you can only have one instance of any template in any given context.

In JavaScript there is a strong duality between functions and objects. To create a new type of object in Java or C++ you would typically define a new class. In JavaScript you create a new function instead, and create instances using the function as a constructor. The layout and functionality of a JavaScript object is closely tied to the function that constructed it. This is reflected in the way V8 templates work. There are two types of templates:

  • Function templates

A function template is the blueprint for a single function. You create a JavaScript instance of the template by calling the template’s GetFunction method from within the context in which you wish to instantiate the JavaScript function. You can also associate a C++ callback with a function template which is called when the JavaScript function instance is invoked. (A minimal sketch follows this list.)

  • Object templates

Each function template has an associated object template. This is used to configure objects created with this function as their constructor. You can associate two types of C++ callbacks with object templates:

  • accessor callbacks are invoked when a specific object property is accessed by a script
  • interceptor callbacks are invoked when any object property is accessed by a script
    Accessors and interceptors are discussed later in this document.
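
As a minimal sketch of a function template with a callback (the callback name and the global property name are invented for this example; it assumes an Isolate* isolate and a current Local<Context> context, and uses the same pre-maybe-local API style as the other examples in this guide):

void MyCallback(const FunctionCallbackInfo<Value>& args) {
  // Invoked whenever the JavaScript function created from the template is called.
  args.GetReturnValue().Set(42);
}

// ...
Local<FunctionTemplate> fn_templ = FunctionTemplate::New(isolate, MyCallback);

// Instantiate the template in the current context.
Local<Function> fn = fn_templ->GetFunction();

// Expose the function to scripts, e.g. as a global called "answer".
context->Global()->Set(String::NewFromUtf8(isolate, "answer"), fn);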

The following code provides an example of creating a template for the global object and setting the built-in global functions.

// Create a template for the global object and set the
// built-in global functions.
Local<ObjectTemplate> global = ObjectTemplate::New(isolate);
global->Set(String::NewFromUtf8(isolate, "log"), FunctionTemplate::New(isolate, LogCallback));

// Each processor gets its own context so different processors
// do not affect each other.
Persistent<Context> context = Context::New(isolate, NULL, global);

This example code is taken from JsHttpProcessor::Initializer in the process.cc sample.

42 Accessors

An accessor is a C++ callback that calculates and returns a value when an object property is accessed by a JavaScript script. Accessors are configured through an object template, using the SetAccessor method. This method takes the name of the property with which it is associated and two callbacks to run when a script attempts to read or write the property.

The complexity of an accessor depends upon the type of data you are manipulating:

42.1 Accessing Static Global Variables

Let’s say there are two C++ integer variables, x and y that are to be made available to JavaScript as global variables within a context. To do this, you need to call C++ accessor functions whenever a script reads or writes those variables. These accessor functions convert a C++ integer to a JavaScript integer using Integer::New, and convert a JavaScript integer to a C++ integer using Int32Value. An example is provided below:

void XGetter(Local<String> property,
              const PropertyCallbackInfo<Value>& info) {
  info.GetReturnValue().Set(x);
}

void XSetter(Local<String> property, Local<Value> value,
             const PropertyCallbackInfo<void>& info) {
  x = value->Int32Value();
}

// YGetter/YSetter are so similar they are omitted for brevity

Local<ObjectTemplate> global_templ = ObjectTemplate::New(isolate);
global_templ->SetAccessor(String::NewFromUtf8(isolate, "x"), XGetter, XSetter);
global_templ->SetAccessor(String::NewFromUtf8(isolate, "y"), YGetter, YSetter);
Persistent<Context> context = Context::New(isolate, NULL, global_templ);

Note that the object template in the code above is created at the same time as the context. The template could have been created in advance and then used for any number of contexts.

42.2 Accessing Dynamic Variables

In the preceding example the variables were static and global. What if the data being manipulated is dynamic, as is true of the DOM tree in a browser? Let’s imagine x and y are object fields on the C++ class Point:

class Point {
 public:
  Point(int x, int y) : x_(x), y_(y) { }
  int x_, y_;
};

To make any number of C++ point instances available to JavaScript we need to create one JavaScript object for each C++ point and make a connection between the JavaScript object and the C++ instance. This is done with external values and internal object fields.

First create an object template for the point wrapper object:

Local<ObjectTemplate> point_templ = ObjectTemplate::New(isolate);

Each JavaScript point object keeps a reference to the C++ object for which it is a wrapper in an internal field. These fields are so named because they cannot be accessed from within JavaScript; they can only be accessed from C++ code. An object can have any number of internal fields; the number of internal fields is set on the object template as follows:

point_templ->SetInternalFieldCount(1);

Here the internal field count is set to 1 which means the object has one internal field, with an index of 0, that points to a C++ object.

Add the x and y accessors to the template:

point_templ->SetAccessor(String::NewFromUtf8(isolate, "x"), GetPointX, SetPointX);
point_templ->SetAccessor(String::NewFromUtf8(isolate, "y"), GetPointY, SetPointY);

Next, wrap a C++ point by creating a new instance of the template and then setting the internal field 0 to an external wrapper around the point p.

Point* p = ...;
Local<Object> obj = point_templ->NewInstance();
obj->SetInternalField(0, External::New(isolate, p));

The external object is simply a wrapper around a void*. External objects can only be used to store reference values in internal fields. JavaScript objects cannot have references to C++ objects directly, so the external value is used as a “bridge” to go from JavaScript into C++. In that sense external values are the opposite of handles, since handles let C++ make references to JavaScript objects.

Here’s the definition of the get and set accessors for x, the y accessor definitions are identical except y replaces x:

void GetPointX(Local<String> property,
               const PropertyCallbackInfo<Value>& info) {
  Local<Object> self = info.Holder();
  Local<External> wrap = Local<External>::Cast(self->GetInternalField(0));
  void* ptr = wrap->Value();
  int value = static_cast<Point*>(ptr)->x_;
  info.GetReturnValue().Set(value);
}

void SetPointX(Local<String> property, Local<Value> value,
               const PropertyCallbackInfo<void>& info) {
  Local<Object> self = info.Holder();
  Local<External> wrap = Local<External>::Cast(self->GetInternalField(0));
  void* ptr = wrap->Value();
  static_cast<Point*>(ptr)->x_ = value->Int32Value();
}

The accessors extract the reference to the point object that was wrapped by the JavaScript object and then read or write the associated field. In this way, these generic accessors can be used on any number of wrapped point objects.

43 Interceptors

You can also specify a callback for whenever a script accesses any object property. These are called interceptors. For efficiency, there are two types of interceptors:

  • named property interceptors - called when accessing properties with string names.
    An example of this, in a browser environment, is document.theFormName.elementName.
  • indexed property interceptors - called when accessing indexed properties. An example of this, in a browser environment, is document.forms.elements[0].

The sample process.cc, provided with the V8 source code, includes an example of using interceptors. In the following code snippet SetNamedPropertyHandler specifies the MapGet and MapSet interceptors:

Local<ObjectTemplate> result = ObjectTemplate::New(isolate);
result->SetNamedPropertyHandler(MapGet, MapSet);

The MapGet interceptor is provided below:

void JsHttpRequestProcessor::MapGet(Local<String> name,
                                    const PropertyCallbackInfo<Value>& info) {
  // Fetch the map wrapped by this object.
  map<string, string> *obj = UnwrapMap(info.Holder());

  // Convert the JavaScript string to a std::string.
  string key = ObjectToString(name);

  // Look up the value if it exists using the standard STL idiom.
  map<string, string>::iterator iter = obj->find(key);

  // If the key is not present return an empty handle as signal.
  if (iter == obj->end()) return;

  // Otherwise fetch the value and wrap it in a JavaScript string.
  const string &value = (*iter).second;
  info.GetReturnValue().Set(String::NewFromUtf8(
      info.GetIsolate(), value.c_str(), String::kNormalString, static_cast<int>(value.length())));
}

As with accessors, the specified callbacks are invoked whenever a property is accessed. The difference between accessors and interceptors is that interceptors handle all properties, while accessors are associated with one specific property.

44 Security Model

The “same origin policy” (first introduced with Netscape Navigator 2.0) prevents a document or script loaded from one “origin” from getting or setting properties of a document from a different “origin”. The term origin is defined here as a combination of domain name (www.example.com), protocol (http or https) and port (for example, www.example.com:81 is not the same as www.example.com). All three must match for two webpages to be considered to have the same origin. Without this protection, a malicious web page could compromise the integrity of another web page.

In V8 an “origin” is defined as a context. Access to any context other than the one from which you are calling is not allowed by default. To access a context other than the one from which you are calling, you need to use security tokens or security callbacks. A security token can be any value but is typically a symbol, a canonical string that does not exist anywhere else. You can optionally specify a security token with SetSecurityToken when you set up a context. If you do not specify a security token V8 will automatically generate one for the context you are creating.
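
As a minimal sketch (the token value and context names are illustrative, and the v8 namespace is assumed as elsewhere in this guide), two contexts that should be able to access each other can be given the same token:

Local<Value> token = String::NewFromUtf8(isolate, "my-shared-origin");
context_a->SetSecurityToken(token);
context_b->SetSecurityToken(token);
// Code running in context_a may now access the global object of context_b
// (and vice versa), because the security tokens match.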

When an attempt is made to access a global variable the V8 security system first checks the security token of the global object being accessed against the security token of the code attempting to access the global object. If the tokens match access is granted. If the tokens do not match V8 performs a callback to check if access should be allowed. You can specify whether access to an object should be allowed by setting the security callback on the object, using the SetAccessCheckCallbacks method on object templates. The V8 security system can then fetch the security callback of the object being accessed and call it to ask if another context is allowed to access it. This callback is given the object being accessed, the name of the property being accessed, the type of access (read, write, or delete for example) and returns whether or not to allow access.

This mechanism is implemented in Google Chrome so that if security tokens do not match, a special callback is used to allow access only to the following: window.focus(), window.blur(), window.close(), window.location, window.open(), history.forward(), history.back(), and history.go().

45 Exceptions

V8 will throw an exception if an error occurs - for example, when a script or function attempts to read a property that does not exist, or if something that is not a function is called as a function.

V8 returns an empty handle if an operation did not succeed. It is therefore important that your code checks a return value is not an empty handle before continuing execution. Check for an empty handle with the Local class’s public member function IsEmpty().

You can catch exceptions with TryCatch, for example:

TryCatch trycatch(isolate);
Local<Value> v = script->Run();
if (v.IsEmpty()) {
  Local<Value> exception = trycatch.Exception();
  String::Utf8Value exception_str(exception);
  printf("Exception: %s\n", *exception_str);
  // ...
}

If the value returned is an empty handle, and you do not have a TryCatch in place, your code must bail out. If you do have a TryCatch the exception is caught and your code is allowed to continue processing.

46 Inheritance

JavaScript is a class-free, object-oriented language, and as such, it uses prototypal inheritance instead of classical inheritance. This can be puzzling to programmers trained in conventional object-oriented languages like C++ and Java.

Class-based object-oriented languages, such as Java and C++, are founded on the concept of two distinct entities: classes and instances. JavaScript is a prototype-based language and so does not make this distinction: it simply has objects. JavaScript does not natively support the declaration of class hierarchies; however, JavaScript’s prototype mechanism simplifies the process of adding custom properties and methods to all instances of an object. In JavaScript, you can add custom properties to objects. For example:

// Create an object "bicycle" 
function bicycle(){ 
} 
// Create an instance of bicycle called roadbike
var roadbike = new bicycle()
// Define a custom property, wheels, on roadbike 
roadbike.wheels = 2

A custom property added this way only exists for that instance of the object. If we create another instance of bicycle(), called mountainbike for example, mountainbike.wheels would return undefined unless the wheels property is explicitly added.

Sometimes this is exactly what is required, at other times it would be helpful to add the custom property to all instances of an object - all bicycles have wheels after all. This is where the prototype object of JavaScript is very useful. To use the prototype object, reference the keyword prototype on the object before adding the custom property to it as follows:

// First, create the "bicycle" object
function bicycle(){ 
}
// Assign the wheels property to the object's prototype
bicycle.prototype.wheels = 2

All instances of bicycle() will now have the wheels property prebuilt into them.

The same approach is used in V8 with templates. Each FunctionTemplate has a PrototypeTemplate method which gives a template for the function’s prototype. You can set properties, and associate C++ functions with those properties, on a PrototypeTemplate which will then be present on all instances of the corresponding FunctionTemplate. For example:

Local<FunctionTemplate> biketemplate = FunctionTemplate::New(isolate);
biketemplate->PrototypeTemplate()->Set(
    String::NewFromUtf8(isolate, "wheels"),
    FunctionTemplate::New(isolate, MyWheelsMethodCallback));

This causes all instances of biketemplate to have a wheels method in their prototype chain which, when called, causes the C++ function MyWheelsMethodCallback to be called.

V8’s FunctionTemplate class provides the public member function Inherit() which you can call when you want a function template to inherit from another function template, as follows:

void Inherit(Local<FunctionTemplate> parent);
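
As a rough illustration (the template names are invented for this sketch, and it reuses the MyWheelsMethodCallback from the previous example), a child template inherits the wheels method installed on its parent’s prototype template:

Local<FunctionTemplate> parent_templ = FunctionTemplate::New(isolate);
parent_templ->PrototypeTemplate()->Set(
    String::NewFromUtf8(isolate, "wheels"),
    FunctionTemplate::New(isolate, MyWheelsMethodCallback));

Local<FunctionTemplate> child_templ = FunctionTemplate::New(isolate);
child_templ->Inherit(parent_templ);
// Objects constructed from child_templ now find "wheels" on their prototype chain.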

You are working on a change. You want to evaluate code coverage for your new code.

V8 provides two ways of doing this: locally, on your machine, and with build infrastructure support.

47 Local

Relative to the root of the v8 repo, use ./tools/gcov.sh (tested on Linux). This uses GNU’s code coverage tooling and some scripting to produce an HTML report, where you can drill down into coverage info per directory and file, and then down to individual lines of code.

The script will build v8 under a separate out directory, using gcov settings. We use a separate directory to avoid clobbering your normal build settings. This separate directory is called cov - it is created immediately under the repo root. gcov.sh will then run the test suite, and produce the report. The path to the report is provided when the script completes.

If your change has architecture specific components, you can cumulatively collect coverage from architecture specific runs.

./tools/gcov.sh x64 arm

This will in-place rebuild for each architecture, clobbering the binaries from the previous run, but preserving and accumulating over the coverage results.

By default, the script collects from Release runs. If you want Debug, you may specify so:

BUILD_TYPE=Debug ./tools/gcov.sh x64 arm arm64

Running the script with no options will provide a summary of options as well.

48 Code Coverage Bot

For each change that lands, we run an x64 coverage analysis - see the coverage bot. We do not run coverage bots for other architectures.

To get the report for a particular run, you want to list the build steps, find the “gsutil coverage report” one (towards the end), and open the “report” under it.
The following samples are provided as part of the source code download:

49 process.cc

This sample provides the code necessary to extend a hypothetical HTTP request processing application - which could be part of a web server, for example - so that it is scriptable. It takes a JavaScript script as an argument, which must provide a function called Process. The JavaScript Process function can be used to, for example, collect information such as how many hits each page served by the fictional web server gets.

50 shell.cc

This sample takes filenames as arguments, then reads and executes their contents. It includes a command prompt at which you can enter JavaScript code snippets, which are then executed. In this sample additional functions like print are also added to JavaScript through the use of object and function templates.
The V8 project aims to develop a high-performance, standards-compliant ECMAScript/JavaScript implementation. This document outlines our guidelines for “language-facing” changes and the process through which they are enforced.

51 Guidelines

We strive to be responsible stewards of the JavaScript language, balancing interoperability and innovation. Our guidelines align closely with those of the Blink project. Keep in mind that these are directional beacons, not bright-line rules.

51.1 Compatibility Risk

Factors that reduce compatibility risk include:

  • Acceptance by the TC39 committee. TC39 is the primary steward of the JavaScript language. We track the progress of language features through the TC39 process and consider features that reach higher stages more stable and ready for implementation. The V8 team is actively involved in the TC39 committee and champions new features when appropriate.
  • Interest from other browser vendors. Implementations in other browsers are a clear signal of feature usefulness. In order of strength, this includes:
  1. compatible implementation in more than one engine
  2. compatible implementation in one engine
  3. implementation in one or more engines under an experimental flag
  4. other vendors expressed interest in the feature

51.2 Impact on Web Platform

In prioritizing the work of implementing new features, we will prefer those which unblock significant new capability, performance or expressiveness.

For every change we implement we want to validate our design and implementation every step of the way. As such, we aim to have a robust set of use cases for new features we implement; ideally, we want to be involved with the community, identifying groups of developers ready and willing to provide feedback for our implementations.

51.3 Technical Considerations

From day one, V8 has been all about performance. We strive to set standards for JavaScript performance across browsers, both by implementing advanced optimizations and by building robust benchmarks in our Octane benchmarking suite.

From a compatibility risk perspective, bugs and inadvertent incompatibilities in language implementations are very painful for users. Therefore we take special care to ensure high-quality implementations of JavaScript features we ship.

With this in mind, we expect all JavaScript changes implemented in V8 to be accompanied by:

  1. Assessment of the impact on the codebase. V8 is a highly complex and tightly knit codebase with many interdependencies; support cost for particularly pervasive features might be substantial.
  2. Conformance tests, ideally suitable for future inclusion in ECMA-262 test suite.
  3. Performance tests.

As the implementation of a particular feature progresses, we expect both conformance and performance test suites to progress in parallel.

52 Process

Language features implemented in V8 go through three stages: experimental implementation, staging, and shipping without a flag.

52.1 Experimental implementation

Anyone who wants to implement a feature in V8 must contact v8-users@googlegroups.com with an intent-to-implement e-mail. Then follow these steps:

  • Clarify the feature status with regard to the criteria in the guidelines on this page (TC39 acceptance, interest from browser vendors, testing plans) in an intent-to-implement e-mail.
  • Provide a design doc to clarify V8 code base impact.
  • The implementation should also consider DevTools support.
  • Implement the feature under a --harmony-X flag.
  • Develop conformance and performance tests in parallel.

52.2 Staging

At this stage, the feature becomes available in V8 under the --es-staging flag. The criteria for moving a feature to that stage are:

  • The specification of the feature is stable
    • One example of feature stability indicator is it being advanced to stage 3 of the TC39 process
  • Implementation is mostly complete; remaining issues are identified and documented
  • Conformance tests are in place
  • Performance regression tests are in place

52.3 Turning the flag on - shipping feature to the open Web

As the implementation of a feature progresses, we evaluate community feedback on feature design and implementation. The V8 team makes a decision to turn the feature on by default based on the community opinion of the feature and the technical maturity of the implementation.

Some community signals we consider before shipping:

  • The feature is on the clear track to standardization at TC39. For example, a feature spec is available and has been through several rounds of reviews, or a feature is at Stage 3+ of the TC39 process.
  • There is a clear interest in the feature from other browser vendors. For example, another engine is shipping a compatible implementation in an experimental or stable channel.

The following technical criteria must be met for shipping:

  1. The implementation is complete; any feedback received from the staged implementation is addressed.
  2. No technical debt: the V8 team is satisfied with the feature’s implementation quality (including basic DevTools support).
  3. Performance is consistent with our high-performance goals.

52.4 Prerequisites

  • V8 3.0.9 or newer
  • GDB 7.0 or newer
  • Linux OS
  • CPU with Intel-compatible architecture (ia32 or x64)

53 Introduction

The GDB JIT interface integration allows V8 to provide GDB with symbol and debugging information for native code emitted at runtime.

When the GDB JIT interface is disabled, a typical backtrace in GDB contains frames marked with ??. These frames correspond to dynamically generated code:

#8  0x08281674 in v8::internal::Runtime_SetProperty (args=...) at src/runtime.cc:3758
#9  0xf5cae28e in ?? ()
#10 0xf5cc3a0a in ?? ()
#11 0xf5cc38f4 in ?? ()
#12 0xf5cbef19 in ?? ()
#13 0xf5cb09a2 in ?? ()
#14 0x0809e0a5 in v8::internal::Invoke (construct=false, func=..., receiver=..., argc=0, args=0x0, 
    has_pending_exception=0xffffd46f) at src/execution.cc:97

However, enabling GDB JIT integration allows GDB to produce a more informative stack trace:

#6  0x082857fc in v8::internal::Runtime_SetProperty (args=...) at src/runtime.cc:3758
#7  0xf5cae28e in ?? ()
#8  0xf5cc3a0a in loop () at test.js:6
#9  0xf5cc38f4 in test.js () at test.js:13
#10 0xf5cbef19 in ?? ()
#11 0xf5cb09a2 in ?? ()
#12 0x0809e1f9 in v8::internal::Invoke (construct=false, func=..., receiver=..., argc=0, args=0x0, 
    has_pending_exception=0xffffd44f) at src/execution.cc:97

Frames still unknown to GDB correspond to native code without source information. See GDBJITInterface#KnownLimitations for more details.

GDB JIT interface is specified in the GDB documentation: http://sourceware.org/gdb/current/onlinedocs/gdb/JIT-Interface.html

54 Enabling GDB JIT integration

GDB JIT integration is currently excluded from the compilation by default and disabled at runtime. To enable it:

  1. Build V8 library with ENABLE_GDB_JIT_INTERFACE defined. If you are using scons to build V8 run it with gdbjit=on.
  2. Pass --gdbjit flag when starting V8.

To check that you have enabled GDB JIT integration correctly, try setting a breakpoint on __jit_debug_register_code. This function is invoked to notify GDB about new code objects.

55 Known Limitations

  • The GDB side of the JIT interface currently (as of GDB 7.2) does not handle registration of code objects very effectively. Each subsequent registration takes more time: with 500 registered objects each registration takes more than 50 ms; with 1000 registered code objects, more than 300 ms. This problem was reported to GDB developers (http://sourceware.org/ml/gdb/2011-01/msg00002.html) but currently there is no solution available. To reduce pressure on GDB, the current implementation of GDB JIT integration operates in two modes: default and full (enabled by the --gdbjit-full flag). In default mode V8 notifies GDB only about code objects that have source information attached (this usually includes all user scripts). In full mode it notifies GDB about all generated code objects (stubs, ICs, trampolines).

  • On x64 GDB is unable to properly unwind stack without .eh_frame section (Issue 1053 (on Google Code))

  • GDB is not notified about code deserialized from the snapshot (Issue 1054 (on Google Code))

  • Only Linux OS on Intel-compatible CPUs is supported. For different OSes either a different ELF-header should be generated or a completely different object format should be used.

  • Enabling GDB JIT interface disables compacting GC. This is done to reduce pressure on GDB as unregistering and registering each moved code object will incur considerable overhead.

  • GDB JIT integration provides only approximate source information. It does not provide any information about local variables, function arguments, stack layout, etc. It does not enable stepping through JavaScript code or setting a breakpoint on a given line. However, one can set a breakpoint on a function by its name.
  • The source code can be browsed online with Chromium Codesearch.
  • For instructions on how to set up Eclipse for V8, see this document.

This document introduces some key V8 concepts and provides a hello world example to get you started with V8 code.

55.1 Audience

This document is intended for C++ programmers who want to embed the V8 JavaScript engine within a C++ application.

56 Hello World

Let’s look at a Hello World example that takes a JavaScript statement as a string argument, executes it as JavaScript code, and prints the result to standard out.

  • An isolate is a VM instance with its own heap.
  • A local handle is a pointer to an object. All V8 objects are accessed using handles; they are necessary because of the way the V8 garbage collector works.
  • A handle scope can be thought of as a container for any number of handles. When you’ve finished with your handles, instead of deleting each one individually you can simply delete their scope.
  • A context is an execution environment that allows separate, unrelated, JavaScript code to run in a single instance of V8. You must explicitly specify the context in which you want any JavaScript code to be run.

These concepts are discussed in greater detail in the [[Embedder’s Guide|Embedder’s Guide]].

57 Run the Example

Follow the steps below to run the example yourself:

  1. Download the V8 source code by following the [[git|Using-Git]] instructions.
  2. This hello world example is compatible with version 5.8. You can check out this branch with git checkout -b 5.8 -t branch-heads/5.8
  3. Create a build configuration using the helper script: tools/dev/v8gen.py x64.release
  4. Edit the default build configuration by running gn args out.gn/x64.release. Add two lines to your configuration: is_component_build = false and v8_static_library = true.
  5. Build via ninja -C out.gn/x64.release on a Linux x64 system to generate the correct binaries.
  6. Compile hello-world.cc, linking to the static libraries created in the build process. For example, on 64bit Linux using the GNU compiler:

    g++ -I. -Iinclude samples/hello-world.cc -o hello-world -Wl,--start-group \
    out.gn/x64.release/obj/{libv8_{base,libbase,external_snapshot,libplatform,libsampler},\
    third_party/icu/libicu{uc,i18n},src/inspector/libinspector}.a \
    -Wl,--end-group -lrt -ldl -pthread -std=c++0x
  7. V8 requires its ‘startup snapshot’ to run. Copy the snapshot files to where your binary is stored:
    cp out.gn/x64.release/*.bin .
  8. For more complex code, V8 will fail without an ICU data file. Copy this file as well: cp out.gn/x64.release/icudtl.dat .
  9. Run the hello-world executable file at the command line.
    For example, on Linux, still in the V8 directory, type the following at the command line:
    ./hello-world
  10. You will see Hello, World!.

Of course this is a very simple example and it’s likely you’ll want to do more than just execute scripts as strings! For more information see the [[Embedder’s Guide|Embedder’s Guide]]. If you are looking for an example which is in sync with master simply check out the file hello-world.cc.

58 General

This article describes how ports should be handled.

59 MIPS

59.1 Straight-forward MIPS ports

  1. Do them yourself.

59.2 More complicated MIPS ports

  1. CC the MIPS team in the CL. Use the mailing list v8-mips-ports.at.googlegroups.com for that purpose.
  2. The MIPS team will provide you with a patch which you need to merge into your CL.
  3. Then land the CL.

60 PPC (not officially supported)

  1. Contact/CC the PPC team in the CL if needed. Use the mailing list v8-ppc-ports.at.googlegroups.com for that purpose.

61 x87 (not officially supported)

  1. Contact/CC the x87 team in the CL if needed. Use the mailing list v8-x87-ports.at.googlegroups.com for that purpose.

62 s390 (not officially supported)

  1. Contact/CC the s390 team in the CL if needed. Use the mailing list v8-s390-ports.at.googlegroups.com for that purpose.

63 ARM

63.1 Straight-forward ARM ports

  1. Do them yourself.

63.2 When you are lost

  1. CC the ARM team in the CL. Use the mailing list v8-arm-ports.at.googlegroups.com for that purpose.
[[images/v8logo.png]]

63.2 What is V8?

V8 is Google’s open source high-performance JavaScript engine, written in C++ and used in Google Chrome, the open source browser from Google. It implements ECMAScript as specified in ECMA-262, and runs on Windows 7 or later, macOS 10.5+, and Linux systems that use IA-32, ARM, or MIPS processors. V8 can run standalone, or can be embedded into any C++ application.

Quick links: Introduction | Getting Started with Embedding | Contributing.

In addition to this wiki, you can find more information here:

63.3 Talks

63.4 Design Docs

This documentation is aimed at C++ developers who want to use V8 in their applications, as well as anyone interested in V8’s design and performance. This document introduces you to V8, while the remaining documentation shows you how to use V8 in your code and describes some of its design details, as well as providing a set of JavaScript benchmarks for measuring V8’s performance.

64 About V8

V8 implements ECMAScript as specified in ECMA-262, 5th edition, and runs on Windows (XP or newer), Mac OS X (10.5 or newer), and Linux systems that use IA-32, x64, or ARM processors.

V8 compiles and executes JavaScript source code, handles memory allocation for objects, and garbage collects objects it no longer needs. V8’s stop-the-world, generational, accurate garbage collector is one of the keys to V8’s performance. You can learn about this and other performance aspects in V8 Design Elements.

JavaScript is most commonly used for client-side scripting in a browser, being used to manipulate Document Object Model (DOM) objects for example. The DOM is not, however, typically provided by the JavaScript engine but instead by a browser. The same is true of V8—Google Chrome provides the DOM. V8 does however provide all the data types, operators, objects and functions specified in the ECMA standard.

V8 enables any C++ application to expose its own objects and functions to JavaScript code. It’s up to you to decide on the objects and functions you would like to expose to JavaScript. There are many examples of applications that do this, for example: Adobe Flash and the Dashboard Widgets in Apple’s Mac OS X and Yahoo! Widgets.

65 Introduction

If you have a patch to the master branch (e.g. an important bug fix) that needs to be merged into one of the production V8 branches, read on.

For the examples, a branched 2.4 version of V8 will be used. Substitute “2.4” with your version number. See [[Release Process|Release Process]] for more information about version numbers.

An associated issue on Chromium’s or V8’s issue tracker is mandatory if a patch is merged. This helps with keeping track of merges.
You can use a template to create an issue.

66 What qualifies a merge candidate?

  • The patch fixes a severe bug (order of importance)
  1. Security bug
  2. Stability bug
  3. Correctness bug
  4. Performance bug
  • The patch does not alter APIs
  • The patch does not change behavior present before branch cut (except if the behavior change fixes a bug)

More information can be found on the relevant Chromium page. When in doubt, send an email to .

67 Merge process outlined

The merge process in the Chromium and V8 tracker is driven by labels in the form of

Merge-[Status]-[Branch]

The currently important labels for V8 are:

  1. Merge-Request-{Branch} initiates the process => This fix should be merged into #.#
  2. Merge-Review-{Branch} => The merge is not approved yet for #.#, e.g. because Canary coverage is missing
  3. Merge-Approved-{Branch} => Simply means that the Chrome TPMs have signed off on the merge
  4. Merge-Merged-{Branch} => When the merge is done, the Merge-Approved label is swapped with this one. {Branch} is the name/number of the V8 branch e.g. 4.3 for M-43.

68 Instructions for git using the automated script

68.1 How to check if a commit was already merged/reverted/has Canary coverage

Use mergeinfo.py to get all the commits which are connected to the HASH according to Git.

tools/release/mergeinfo.py HASH

If it tells you Is on Canary: No Canary coverage you should not merge yet, because the fix has not yet been deployed on a Canary build. A good rule of thumb is to wait at least 3 days after the fix has landed before conducting the merge.

68.2 Step 1: Run the script

Let’s assume you’re merging revision af3cf11 to branch 2.4 (please specify full git hashes - abbreviations are used here for simplicity).

tools/release/merge_to_branch.py --branch 2.4 af3cf11

Run the script with ‘-h’ to display its help message, which includes more options (e.g. you can specify a file containing your patch, or you can reverse a patch, specify a custom commit message, or resume a merging process you’ve canceled before). Note that the script will use a temporary checkout of v8 - it won’t touch your work space.
You can also merge more than one revision at once, just list them all.

tools/release/merge_to_branch.py --branch 2.4 af3cf11 cf33f1b sf3cf09

68.3 Step 2: Observe the branch waterfall

If one of the builders is not green after handling your patch, revert the merge immediately. A bot (AutoTagBot) will take care of the correct versioning after a 10 minute wait.

69 Patching a version used on Canary/Dev

In case you need to patch a Canary/Dev version (which should not happen often), follow these instructions:

69.1 Step 1: Merge to roll branch

Example version used is 2.4.4.

tools/release/roll_merge.py --branch 5.7.433 af3cf11

69.2 Step 2: Make Chromium aware of the fix

Example Chromium branch used is 2978

$ git checkout chromium/2978
$ git merge 5.7.433.1
$ git push

69.3 Step 3: The end

Chrome/Chromium should pick up the change automatically when they next build.

70 FAQ

When two people are merging at the same time, a race condition can occur in the merge scripts. If this is the case, contact and .

70.2 Is there a TL;DR?

  1. Create issue on issue tracker
  2. Check status of the fix with tools/release/mergeinfo.py
  3. Add Merge-Request-{Branch} to the issue
  4. Wait until somebody adds Merge-Approved-{Branch}
  5. Merge

70.3 Introduction

V8’s CPU & heap profilers are trivial to use from V8’s shells (see V8Profiler), but it may not be obvious how to use them with Chromium. This page should help you with that.

71 Instructions

71.1 Why is using V8’s profilers with Chromium different from using them with V8 shells?

Chromium is a complex application, unlike V8 shells. Below is the list of Chromium features that affect profiler usage:

  • each renderer is a separate process (OK, not actually each, but let’s omit this detail), so they can’t share the same log file;
  • the sandbox built around the renderer process prevents it from writing to disk;
  • Developer Tools configure profilers for their own purposes;
  • V8’s logging code contains some optimizations to simplify logging state checks.

71.2 So, how to run Chromium to get a CPU profile?

Here is how to run Chromium in order to get a CPU profile from the start of the process:

./Chromium --no-sandbox --js-flags="--logfile=%t.log --prof"

Please note that you wouldn’t see profiles in Developer Tools, because all the data is being logged to a file, not to Developer Tools.

71.2.1 Flags description

  • --no-sandbox - turns off the renderer sandbox; obviously a must-have;
  • --js-flags - this is the container for flags passed to V8:
    • --logfile=%t.log - specifies a name pattern for log files; %t gets expanded into the current time in milliseconds, so each process gets its own log file; you can use prefixes and suffixes if you want, like this: prefix-%t-suffix.log;
    • --prof - tells V8 to write statistical profiling information into the log file.

71.3 Notes

Under Windows, be sure to turn on .MAP file creation for chrome.dll, but not for chrome.exe.

72 Introduction

The V8 release process is tightly connected to Chrome’s. The V8 team is using all four Chrome release channels to push new versions to the users.

If you want to look up what V8 version is in a Chrome release you can check OmahaProxy. For each Chrome release a separate branch is created in the V8 repository to make the trace-back easier e.g. for Chrome 45.0.2413.0.

73 Canary releases

Every day a new Canary build is pushed to the users via Chrome’s Canary channel. Normally the deliverable is the latest, stable enough version from master.

Branches for a Canary normally look like this

remotes/origin/4.5.35

74 Dev releases

Every week a new Dev build is pushed to the users via Chrome’s Dev channel. Normally the deliverable includes the latest stable enough V8 version on the Canary channel.

Branches for a Dev normally look like this

remotes/origin/4.5.35

75 Beta releases

Roughly every 6 weeks a new major branch is created e.g. for Chrome 44. This is happening in sync with the creation of Chrome’s Beta channel. The Chrome Beta is pinned to the head of V8’s branch. After approx. 6 weeks the branch is promoted to Stable.

Changes are only cherry-picked onto the branch in order to stabilize the version.

Branches for a Beta normally look like this

remotes/branch-heads/4.5

They are based on a Canary branch.

76 Stable releases

Roughly every 6 weeks a new major Stable release is done. No special branch is created as the latest Beta branch is simply promoted to Stable. This version is pushed to the users via Chrome’s Stable channel.

Branches for a Stable normally look like this

remotes/branch-heads/4.5

They are promoted (reused) Beta branches.

77 Which version should I embed in my application?

The tip of the same branch that Chrome’s Stable channel uses.

We often backmerge important bug fixes to a stable branch, so if you care about stability and security and correctness, you should include those updates too – that’s why we recommend “the tip of the branch”, as opposed to an exact version.

As soon as a new branch is promoted to Stable, we stop maintaining the previous stable branch. This happens every six weeks, so you should be prepared to update at least this often.

Example: The current stable Chrome release is 44.0.2403.125, with V8 4.4.63.25. So you should embed branch-heads/4.4. And you should update to branch-heads/4.5 when Chrome 45 is released on the Stable channel.

Related: Which V8 version should I use?
V8 fully uses Chromium’s Security process. In order to report a V8 security bug please follow these steps:

  1. Check Chromium’s guidelines for security bugs
  2. Create a new bug and add the component “Blink>JavaScript”
All internal errors thrown in V8 capture a stack trace when they are created, which can be accessed from JavaScript through the error.stack property. V8 also has various hooks for controlling how stack traces are collected and formatted, and for allowing custom errors to also collect stack traces. This document outlines V8’s JavaScript stack trace API.

78 Basic stack traces

By default, almost all errors thrown by V8 have a stack property that holds the topmost 10 stack frames, formatted as a string. Here’s an example of a fully formatted stack trace:

ReferenceError: FAIL is not defined
   at Constraint.execute (deltablue.js:525:2)
   at Constraint.recalculate (deltablue.js:424:21)
   at Planner.addPropagate (deltablue.js:701:6)
   at Constraint.satisfy (deltablue.js:184:15)
   at Planner.incrementalAdd (deltablue.js:591:21)
   at Constraint.addConstraint (deltablue.js:162:10)
   at Constraint.BinaryConstraint (deltablue.js:346:7)
   at Constraint.EqualityConstraint (deltablue.js:515:38)
   at chainTest (deltablue.js:807:6)
   at deltaBlue (deltablue.js:879:2)

The stack trace is collected when the error is created and is the same regardless of where or how many times the error is thrown. We collect 10 frames because it is usually enough to be useful but not so many that it has a noticeable performance impact. You can control how many stack frames are collected by setting the variable

Error.stackTraceLimit

Setting it to 0 will disable stack trace collection. Any finite integer value will be used as the maximum number of frames to collect. Setting it to Infinity means that all frames will be collected. This variable only affects the current context; it has to be set explicitly for each context that needs a different value. (Note that what is known as a “context” in V8 terminology corresponds to a page or iframe in Google Chrome). To set a different default value that affects all contexts use the

--stack-trace-limit <value>

command-line flag to V8. To pass this flag to V8 when running Google Chrome use

--js-flags="--stack-trace-limit <value>"
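
For illustration, here is a minimal sketch (the level1/level2/level3 helpers are made up) of adjusting the limit from JavaScript in d8:

function level3() { throw new Error("boom"); }
function level2() { level3(); }
function level1() { level2(); }

// Collect at most 2 frames for errors created from now on in this context.
Error.stackTraceLimit = 2;
try {
  level1();
} catch (e) {
  print(e.stack);  // "Error: boom" plus the two topmost frames (level3, level2)
}

// Setting the limit to 0 disables stack trace collection entirely.
Error.stackTraceLimit = 0;
try {
  level1();
} catch (e) {
  print(e.stack);  // just the message line, no frames
}

// Restore the default of 10 frames.
Error.stackTraceLimit = 10;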

78.0.1 Stack trace collection for custom exceptions

The stack trace mechanism used for built-in errors is implemented using a general stack trace collection API that is also available to user scripts. The function

Error.captureStackTrace(error, constructorOpt)

adds a stack property to the given error object that will yield the stack trace at the time captureStackTrace was called. Stack traces collected through Error.captureStackTrace are immediately collected, formatted, and attached to the given error object.

The optional constructorOpt parameter allows you to pass in a function value. When collecting the stack trace all frames above the topmost call to this function, including that call, will be left out of the stack trace. This can be useful to hide implementation details that won’t be useful to the user. The usual way of defining a custom error that captures a stack trace would be:

function MyError() {
  Error.captureStackTrace(this, MyError);
  // any other initialization
}

Passing in MyError as a second argument means that the constructor call to MyError won’t show up in the stack trace.
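
As a quick illustration (using the MyError definition above; makeError is a made-up helper), the constructor call is left out of the captured trace:

function makeError() {
  // The topmost visible frame will be makeError, not MyError itself.
  return new MyError();
}

var err = makeError();
print(err.stack);
// The trace begins at makeError; the call to MyError does not appear.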

79 Customizing stack traces

Unlike Java where the stack trace of an exception is a structured value that allows inspection of the stack state, the stack property in V8 just holds a flat string containing the formatted stack trace. This is for no other reason than compatibility with other browsers. However, this is not hardcoded but only the default behavior and can be overridden by user scripts.

For efficiency stack traces are not formatted when they are captured but on demand, the first time the stack property is accessed. A stack trace is formatted by calling

Error.prepareStackTrace(error, structuredStackTrace)

and using whatever this call returns as the value of the stack property. If you assign a different function value to Error.prepareStackTrace, that function will be used to format stack traces. It will be passed the error object that it is preparing a stack trace for, and a structured representation of the stack. User stack trace formatters are free to format the stack trace however they want and even return non-string values. It is safe to retain references to the structured stack trace object after a call to prepareStackTrace completes, so that it is also a valid return value. Note that the custom prepareStackTrace function is only called once the stack property of an Error object is accessed.

The structured stack trace is an Array of CallSite objects, each of which represents a stack frame. A CallSite object defines the following methods

  • getThis: returns the value of this
  • getTypeName: returns the type of this as a string. This is the name of the function stored in the constructor field of this, if available, otherwise the object’s [[Class]] internal property.
  • getFunction: returns the current function
  • getFunctionName: returns the name of the current function, typically its name property. If a name property is not available an attempt will be made to try to infer a name from the function’s context.
  • getMethodName: returns the name of the property of this or one of its prototypes that holds the current function
  • getFileName: if this function was defined in a script returns the name of the script
  • getLineNumber: if this function was defined in a script returns the current line number
  • getColumnNumber: if this function was defined in a script returns the current column number
  • getEvalOrigin: if this function was created using a call to eval returns a CallSite object representing the location where eval was called
  • isToplevel: is this a toplevel invocation, that is, is this the global object?
  • isEval: does this call take place in code defined by a call to eval?
  • isNative: is this call in native V8 code?
  • isConstructor: is this a constructor call?

The default stack trace is created using the CallSite API so any information that is available there is also available through this API.
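
As a sketch of how these pieces fit together, using only the CallSite methods listed above, a custom formatter could return structured data instead of a string:

// Return an array of plain objects instead of the default formatted string.
Error.prepareStackTrace = function (error, structuredStackTrace) {
  return structuredStackTrace.map(function (callSite) {
    return {
      functionName: callSite.getFunctionName(),
      fileName: callSite.getFileName(),
      lineNumber: callSite.getLineNumber(),
      columnNumber: callSite.getColumnNumber()
    };
  });
};

function fail() { return new Error("oops"); }

var e = fail();
// Accessing .stack triggers the (lazy) call to Error.prepareStackTrace.
print(JSON.stringify(e.stack, null, 2));

// Remove the custom formatter to get the default string format back.
delete Error.prepareStackTrace;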

To maintain restrictions imposed on strict mode functions, frames that have a strict mode function and all frames below it (its caller, etc.) are not allowed to access their receiver and function objects. For those frames, getFunction() and getThis() will return undefined.

80 Compatibility

The API described here is specific to V8 and is not supported by any other JavaScript implementations. Most implementations do provide an error.stack property but the format of the stack trace is likely to be different from the format described here. The recommended use of this API is

  • Only rely on the layout of the formatted stack trace if you know your code is running in v8.
  • It is safe to set Error.stackTraceLimit and Error.prepareStackTrace regardless of which implementation is running your code but be aware that it will only have an effect if your code is running in V8.

81 Appendix: Stack trace format

The default stack trace format used by V8 can for each stack frame give the following information:

  • Whether the call is a construct call.
  • The type of the this value (Type).
  • The name of the function called (functionName).
  • The name of the property of this or one of its prototypes that holds the function (methodName).
  • The current location within the source (location)

Any of these may be unavailable and different formats for stack frames are used depending on how much of this information is available. If all the above information is available a formatted stack frame will look like this:

at Type.functionName [as methodName] (location)

or, in the case of a construct call

at new functionName (location)

If only one of functionName and methodName is available, or if they are both available but the same, the format will be:

at Type.name (location)

If neither is available <anonymous> will be used as the name.

The Type value is the name of the function stored in the constructor field of this. In v8 all constructor calls set this property to the constructor function, so unless this field has been actively changed after the object was created, it will hold the name of the function it was created by. If it is unavailable the [[Class]] property of the object will be used.

One special case is the global object where the Type is not shown. In that case the stack frame will be formatted as

at functionName [as methodName] (location)

The location itself has several possible formats. Most common is the file name, line and column number within the script that defined the current function

fileName:lineNumber:columnNumber

If the current function was created using eval the format will be

eval at position

where position is the full position where the call to eval occurred. Note that this means that positions can be nested if there are nested calls to eval, for instance:

eval at Foo.a (eval at Bar.z (myscript.js:10:3))

If a stack frame is within V8’s libraries the location will be

native

and if it is unavailable it will be

unknown location

This page contains a list of articles and documents from the V8 team with tips and tricks on how to write idiomatic, optimizable JavaScript, and what pitfalls to avoid.

82 Articles and blog posts

V8 includes a test framework that allows you to test the engine. The framework lets you run both our own test suites, which are included with the source code, and others, currently the Mozilla and Test262 test suites.

82.1 Running the V8 tests

Before you run the tests, you will have to build V8 with GYP using the instructions here or with GN using the instructions here - see below.

You can append .check to any build target to have tests run for it, e.g.

make ia32.release.check
make ia32.check
make release.check
make check # builds and tests everything (no dot before "check"!)

Before submitting patches, you should always run the quickcheck target, which builds a fast debug build and runs only the most relevant tests:

make quickcheck

If you built V8 using GN, you can run tests like this:

tools/run-tests.py --gn

or, if you have multiple GN configurations and don’t want to run the tests on the last compiled configuration:

tools/run-tests.py --outdir=out.gn/ia32.release

You can also run tests manually:

tools/run-tests.py --arch-and-mode=ia32.release [--outdir=foo]

Or you can run individual tests:

tools/run-tests.py --arch=ia32 cctest/test-heap/SymbolTable mjsunit/delete-in-eval

Run the script with --help to find out about its other options; --outdir defaults to out. Also note that using the cctest binary to run multiple tests in one process is not supported.

82.2 Running the Mozilla and Test262 tests

The V8 test framework comes with support for running the Mozilla as well as the Test262 test suite. To download the test suites and then run them for the first time, do the following:

tools/run-tests.py --download-data mozilla
tools/run-tests.py --download-data test262

To run the tests subsequently, you may omit the flag that downloads the test suite:

tools/run-tests.py mozilla
tools/run-tests.py test262

Note that V8 fails a number of Mozilla tests because they require Firefox-specific extensions.

82.3 Running the WebKit tests

Sometimes all of the above tests pass but WebKit build bots fail. To make sure WebKit tests pass, run:

tools/run-tests.py --progress=verbose --outdir=out --arch=ia32 --mode=release webkit --timeout=200

Replace --arch and other parameters with values that match your build options.

82.4 Running Microbenchmarks

Under test/js-perf-test we have microbenchmarks to track feature performance. There is a special runner for these, tools/run_perf.py. Run them like:

  python -u tools/run_perf.py --arch x64 \
      --binary-override-path ~/v8/out.gn/x64.release/d8 \
      test/js-perf-test/JSTests.json

If you don’t want to run all the JSTests, you can provide a filter argument:

  python -u tools/run_perf.py --arch x64 \
      --binary-override-path ~/v8/out.gn/x64.release/d8 \
      --filter JSTests/TypedArrays \
      test/js-perf-test/JSTests.json

82.5 Updating the bytecode expectations

Sometimes the bytecode expectations may change, resulting in cctest failures. To update the golden files, build test/cctest/generate-bytecode-expectations by running:

 ninja -C out.gn/x64.release generate-bytecode-expectations

and then update the default set of inputs by passing the --rebaseline flag to the generated binary:

 ./out.gn/x64.release/generate-bytecode-expectations --rebaseline

The updated goldens will be available in test/cctest/interpreter/bytecode_expectations/

83 Introduction

Currently, V8 provides support for tracing. It works automatically when V8 is embedded in Chrome through the Chrome tracing system. But you can also enable it in any standalone V8 or within an embedder that uses the Default Platform.
More details about the trace-viewer can be found here.

84 Tracing in d8

To start tracing, use the --enable-tracing option. V8 generates a v8_trace.json file that you can open in Chrome. To open it in Chrome, go to “chrome://tracing”, click on Load, and then load the v8_trace.json file.

Each trace event is associated with a set of categories; you can enable or disable the recording of trace events based on their categories. With the above flag alone, only the default categories are enabled (a set of categories that has a low overhead). To enable more categories and have finer control of the different parameters, you need to pass a config file.
An example of a config file traceconfig.json:

{
 "record_mode": "record-continuously",
 "included_categories": ["v8", "disabled-by-default-v8.runtime_stats"]
}

An example of calling d8 with tracing and a traceconfig file:

d8 --enable-tracing --trace-config=traceconfig.json

The trace config format is compatible with that of Chrome Tracing, with two differences: V8 does not support regular expressions in the included categories list, and V8 does not need an excluded categories list (it is simply ignored). As a result, a trace config file written for V8 can be reused in Chrome tracing, but a Chrome trace config file can only be reused in V8 tracing if it does not contain regular expressions.

85 Enabling Runtime Call Statistics in tracing

To get Runtime Call Statistics, please record the trace with the following 2 categories enabled: v8 and disabled-by-default-v8.runtime_stats. Each top level V8 trace event will contain the runtime statistics for the period of that event. By selecting any of those events in trace-viewer, the runtime stats table will be displayed in the lower panel; selecting multiple events will create a merged view.

86 Enabling GC Object Statistics in tracing

To get the GC Object Statistics in tracing, you need to collect a trace with the disabled-by-default-v8.gc_stats category enabled; you also need to use the following js-flags:

--track_gc_object_stats --noincremental-marking
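
For example, reusing the config file mechanism shown earlier, a trace config that enables this category might look like the following sketch (adjust the record mode to your needs):

{
 "record_mode": "record-continuously",
 "included_categories": ["disabled-by-default-v8.gc_stats"]
}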

Once you load the trace in trace-viewer, search for slices named V8.GC_Object_Stats; you will find the statistics in the lower panel. Selecting multiple slices will create a merged view.

87 How to get an issue triaged

  • V8 tracker: Set the state to Untriaged
  • Chromium tracker: Set the state to Untriaged and add the component Blink>JavaScript

88 How to assign V8 issues in the Chromium tracker

Please move issues to the V8 specialty sheriffs queue of one of the following categories:

  • Memory: component:blink>javascript status=Untriaged label:Performance-Memory
    • Will show up in this query
  • Stability: status=available,untriaged component:Blink>JavaScript label:Stability -label:Clusterfuzz
    • Will show up in this query
    • No CC needed, will be triaged by a sheriff automatically
  • Performance: ….org
  • Clusterfuzz: Set the bug to the following state:
    • label:ClusterFuzz component:Blink>JavaScript status:Untriaged
    • Will show up in this query.
    • No CC needed, will be triaged by a sheriff automatically

If you need the attention of a sheriff, please consult the rotation information.

Use the component Blink>JavaScript on all issues.

Please note that this only applies to issues tracked in the Chromium issue tracker.
TurboFan is one of V8’s optimizing compilers leveraging a concept called “Sea of Nodes”. One of V8’s blog posts offers a high level overview of TurboFan. More detail can be found in the following resources:

89 Articles and blog posts

90 Talks

91 TurboFan Design Documents

These are design documents that are mostly concerned with TurboFan internals.

92 Related Design Documents

These are design documents that also affect TurboFan in a significant way.

D8 is useful for running some JavaScript locally or debugging changes you have made to V8. A normal V8 build using [[GN|Building with GN]] for x64 will output a D8 binary in out.gn/x64.optdebug/d8. You can call D8 with the --help argument for more information about usage and flags.

92.1 Print Output

Printing output is probably going to be very important if you plan to use D8 to run JavaScript files, rather than interactively. This is very simple with the print() function:

--- test.js ---
print("Hello, World!");
out.gn/x64.optdebug/d8 test.js
> Hello, World!

92.2 Read Input

Using read() you can store the contents of a file into a variable.

d8> var license = read("LICENSE");
d8> license
"This license applies to all parts of V8 that are not externally
maintained libraries.  The externally maintained libraries used by V8
are:
... (etc.) "

Use readline() to interactively enter text:

d8> var greeting = readline();
Welcome
d8> greeting
"Welcome"

92.3 Load External Scripts

load() runs another JavaScript file in the current context, meaning that you can then access anything declared in that file.

--- util.js ---
function greet(name) {
  return "Hello, " + name;
}
d8> load('util.js');
d8> greet('World!');
"Hello, World!"

92.4 Pass Flags Into JavaScript

It’s possible to make command line arguments available to your JavaScript code at runtime with D8. Just pass them after -- on the command line and you will be able to access them at the top level of your script using the arguments object.

out.gn/x64.optdebug/d8 -- hi

You can now access an array of the arguments using the arguments object:

d8> arguments[0]
"hi"
d8>
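
A script run non-interactively can iterate over them in the same way; for example (args.js is a made-up file name):

--- args.js ---
for (var i = 0; i < arguments.length; i++) {
  print("argument " + i + ": " + arguments[i]);
}
out.gn/x64.optdebug/d8 args.js -- hi there
> argument 0: hi
> argument 1: there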

92.5 More Resources

Kevin Ennis’ D8 Guide has really good information about exploring V8 using D8.

93 Git repository

V8’s git repository is located at https://chromium.googlesource.com/v8/v8.git

V8’s master branch also has an official git mirror on GitHub: http://github.com/v8/v8-git-mirror.

Don’t just git-clone either of these URLs if you want to build V8 from your checkout; instead, follow the instructions below to get everything set up correctly.

93.1 Prerequisites

  1. Git. To install using apt-get:

    apt-get install git
  2. depot_tools. See instructions.
  3. For push access, you need to setup a .netrc file with your git password:
    1. Go to https://chromium.googlesource.com/new-password - login with your committer account (e.g. @chromium.org account, non-chromium.org ones work too). Note: creating a new password doesn’t automatically revoke any previously created passwords.
    2. Have a look at the big, grey box containing shell commands. Paste those lines into your shell.

93.2 How to start

Make sure depot_tools are up-to-date by typing once:

gclient

Then get V8, including all branches and dependencies:

mkdir ~/v8
cd ~/v8
fetch v8
cd v8

After that you’re intentionally in a detached head state.

Optionally you can specify how new branches should be tracked:

git config branch.autosetupmerge always
git config branch.autosetuprebase always

Alternatively, you can create new local branches like this (recommended):

git new-branch mywork

93.3 Staying up-to-date

Update your current branch with git pull. Note that if you’re not on a branch, git pull won’t work, and you’ll need to use git fetch instead.

git pull

Sometimes dependencies of v8 are updated. You can synchronize those by running

gclient sync

93.4 Sending code for reviewing

git cl upload

93.5 Committing

You can use the CQ checkbox on codereview for committing (preferred). See also the chromium instructions for CQ flags and troubleshooting.

If you need more trybots than the default, add the following to your commit message on rietveld (e.g. for adding a nosnap bot):

CQ_INCLUDE_TRYBOTS=tryserver.v8:v8_linux_nosnap_rel

To land manually, update your branch:

git pull --rebase origin

Then commit using

git cl land

94 For project members

94.1 Try jobs

94.1.1 Creating a try job from codereview

  1. Upload a CL to rietveld.

    git cl upload
  2. Try the CL by sending a try job to the try bots like this:

    git cl try
  3. Wait for the try bots to build and you will get an e-mail with the result. You can also check the try state at your patch on codereview.
  4. If applying the patch fails you either need to rebase your patch or specify the v8 revision to sync to:

    git cl try --revision=1234

94.1.2 Creating a try job from a local branch

  1. Commit some changes to a git branch in the local repo.
  2. Try the change by sending a try job to the try bots like this:

    git cl try
  3. Wait for the try bots to build and you will get an e-mail with the result. Note: There are issues with some of the slaves at the moment. Sending try jobs from codereview is recommended.

94.1.3 Useful arguments

The revision argument tells the try bot which revision of the code base your local changes will be applied to. Without the revision, our LKGR revision is used as the base (http://v8-status.appspot.com/lkgr).

git cl try --revision=1234

To avoid running your try job on all bots, use the --bot flag with a comma-separated list of builder names. Example:

git cl try --bot=v8_mac_rel

94.1.4 Viewing the try server

http://build.chromium.org/p/tryserver.v8/waterfall

94.1.5 Access credentials

If asked for access credentials, use your @chromium.org email address and your generated password from googlecode.com.
Today’s highly optimized virtual machines can run web apps at blazing speed. But one shouldn’t rely only on them to achieve great performance: a carefully optimized algorithm or a less expensive function can often reach many-fold speed improvements on all browsers. Chrome DevTools’ CPU Profiler helps you analyze your code’s bottlenecks. But sometimes, you need to go deeper and more granular: this is where V8’s internal profiler comes in handy.

Let’s use that profiler to examine the Mandelbrot explorer demo that Microsoft released together with IE10. After the demo release, V8 fixed a bug that slowed down the computation unnecessarily (hence the poor performance of Chrome in the demo’s blog post) and further optimized the engine, implementing a faster exp() approximation than the one the standard system libraries provide. Following these changes, the demo ran 8x faster than previously measured in Chrome.

But what if you want the code to run faster on all browsers? You should first understand what keeps your CPU busy. Run Chrome (Windows and Linux Canary) with the following command line switches, which will cause it to output profiler tick information (in the v8.log file) for the URL you specify, which in our case was a local version of the Mandelbrot demo without web workers:

$ ./chrome --js-flags="--prof" --no-sandbox http://localhost:8080/index.html

When preparing the test case, make sure it begins its work immediately upon load, and simply close Chrome when the computation is done (hit Alt+F4), so that you only have the ticks you care about in the log file. Also note that web workers aren’t yet profiled correctly with this technique.

Then, process the v8.log file with the tick-processor script that ships with V8 (or the new practical web version):

$ v8/tools/linux-tick-processor v8.log

Here’s an interesting snippet of the processed output that should catch your attention:

Statistical profiling result from null, (14306 ticks, 0 unaccounted, 0 excluded).
 [Shared libraries]:
   ticks  total  nonlib   name
   6326   44.2%    0.0%  /lib/x86_64-linux-gnu/libm-2.15.so
   3258   22.8%    0.0%  /.../chrome/src/out/Release/lib/libv8.so
   1411    9.9%    0.0%  /lib/x86_64-linux-gnu/libpthread-2.15.so
     27    0.2%    0.0%  /.../chrome/src/out/Release/lib/libwebkit.so

The top section shows that V8 is spending more time inside an OS-specific system library than in its own code. Let’s look at what’s responsible for it by examining the “bottom up” output section, where you can read indented lines as “was called by” (and lines starting with a * mean that the function has been optimized by Crankshaft):

[Bottom up (heavy) profile]:
  Note: percentage shows a share of a particular caller in the total
  amount of its parent calls.
  Callers occupying less than 2.0% are not shown.

   ticks parent  name
   6326   44.2%  /lib/x86_64-linux-gnu/libm-2.15.so
   6325  100.0%    LazyCompile: *exp native math.js:91
   6314   99.8%      LazyCompile: *calculateMandelbrot http://localhost:8080/Demo.js:215

More than 44% of the total time is spent executing the exp() function inside a system library! Adding some overhead for calling system libraries, this means about two thirds of the overall time is spent evaluating Math.exp().

If you look at the JavaScript code, you’ll see that exp() is used solely to produce a smooth grayscale palette. There are countless ways to produce a smooth grayscale palette, but let’s suppose you really really like exponential gradients. Here is where algorithmic optimization comes into play.

You’ll notice that exp() is called with an argument in the range -4 < x < 0, so we can safely replace it with its Taylor approximation for that range, which will deliver the same smooth gradient with only a multiplication and a couple of divisions:

exp(x) ≈ 1 / ( 1 - x + x*x / 2) for -4 < x < 0 
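
In code, the replacement could look something like this sketch (fastExp is a made-up helper name):

// Approximates Math.exp(x) for arguments in the range -4 < x < 0.
function fastExp(x) {
  return 1 / (1 - x + x * x / 2);
}

// Use fastExp(x) wherever the palette code previously called Math.exp(x).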

Tweaking the algorithm this way boosts performance by an extra 30% compared to the latest Canary, and by 5x compared to the system-library-based Math.exp() on Chrome Canary.

This example shows how V8’s internal profiler can help you go deeper into understanding your code bottlenecks, and that a smarter algorithm can push performance even further.

To compare VM performance on workloads that represent today’s complex and demanding web applications, one might also want to consider a more comprehensive set of benchmarks such as the Octane JavaScript Benchmark Suite.

95 Introduction

V8 has built-in support for the Linux perf tool. By default, this support is disabled, but by using the --perf-prof and --perf-prof-debug-info command line options, V8 will write out performance data during execution into a file that can be used to analyze the performance of V8’s JITted code with the Linux perf tool.

96 Setup

In order to analyze V8 JIT code with the Linux perf tool you will need to:

  • Use a very recent Linux kernel that provides high-resolution timing information to the perf tool and to V8’s perf integration in order to synchronize JIT code performance samples with the standard performance data collected by the Linux perf tool.
  • Use a very recent version of the Linux perf tool, or apply the patch that adds JIT code support to perf and build it yourself.

Install a new Linux kernel, and then reboot your machine:

sudo apt-get install linux-generic-lts-wily

Install dependencies:

sudo apt-get install libdw-dev libunwind8-dev systemtap-sdt-dev libaudit-dev libslang2-dev binutils-dev liblzma-dev

Download the kernel sources that include the latest perf tool source:

cd <path_to_kernel_checkout>
git clone --depth 1 git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git
cd tip/tools/perf
make

97 Build

To use V8’s integration with Linux perf you need to build it with the appropriate GN build flag activated. You can set “enable_profiling = true” in an existing GN build configuration.

echo "enable_profiling = true" >> out.gn/x64.release/args.gn
ninja -C out.gn/x64.release

Alternatively, you can create a new clean build configuration with only the single build flag set to enable perf support:

cd <path_to_your_v8_checkout>
tools/dev/v8gen.py x64.release
echo "enable_profiling = true" >> out.gn/x64.release/args.gn
ninja -C out.gn/x64.release

98 Running d8 with perf flags

Once you have the right kernel, perf tool, and build of V8, you can add --perf-prof to the V8 command line to record performance samples in JIT code. Here’s an example that records samples from a test JavaScript file:

cd <path_to_your_v8_checkout>
echo "(function f() { var s = 0; for (var i = 0; i < 1000000000; i++) { s += i; } return s; })();" > test.js
<path_to_kernel_checkout>/tip/tools/perf/perf record -k mono out.gn/x64.release/d8 --perf-prof test.js

99 Evaluating perf output

After execution finishes, you must combine the static information gathered from the perf tool with the performance samples output by V8 for JIT code:

<path_to_kernel_checkout>/tip/tools/perf/perf inject -j -i perf.data -o perf.data.jitted

Finally you can use the Linux perf tool to explore the performance bottlenecks in your JITted code:

<path_to_kernel_checkout>/tip/tools/perf/perf report -i perf.data.jitted

100 Introduction

V8 has built-in sample based profiling. Profiling is turned off by default, but can be enabled via the --prof command line option. The sampler records stacks of both JavaScript and C/C++ code.

101 Build

Build the d8 shell following the instructions at [[Building with GYP|Building with GYP]].

102 Command Line

To start profiling, use the --prof option. When profiling, V8 generates a v8.log file which contains profiling data.

Windows:

build\Release\d8 --prof script.js

Other platforms (replace “ia32” with “x64” if you want to profile the x64 build):

out/ia32.release/d8 --prof script.js

103 Process the Generated Output

Log file processing is done using JS scripts run by the d8 shell. For this to work, a d8 binary (or symlink, or d8.exe on Windows) must be in the root of your V8 checkout, or in the path specified by the environment variable D8_PATH. Note: this binary is just used to process the log, not for the actual profiling, so it doesn’t matter which version etc. it is.

Make sure the d8 used for analysis was not built with is_component_build!

Windows:

tools\windows-tick-processor.bat v8.log

Linux:

tools/linux-tick-processor v8.log

Mac OS X:

tools/mac-tick-processor v8.log

103.1 Snapshot-based VM build and builtins reporting

When a snapshot-based VM build is being used, code objects from a snapshot that don’t correspond to functions are reported with generic names like “A builtin from the snapshot”, because their real names are not stored in the snapshot. To see the names, the following steps must be taken:

  • the --log-snapshot-positions flag must be passed to the VM (along with --prof); this way, for deserialized objects, the (memory address, snapshot offset) pairs are emitted into the profiler log;

  • the --snapshot-log=<log file from mksnapshot> flag must be passed to the tick processor script; a log file from the mksnapshot program (a snapshot log) contains address-offset pairs for serialized objects, and their names; using the snapshot log, names can be mapped onto deserialized objects during profiler log processing; the snapshot log file is called snapshot.log and resides alongside V8’s compiled files.

An example of usage:

out/ia32.release/d8 --prof --log-snapshot-positions script.js
tools/linux-tick-processor --snapshot-log=out/ia32.release/obj.target/v8_snapshot/geni/snapshot.log v8.log

104 Programmatic Control of Profiling

If you would like to control in your application when profile samples are collected, you can do so.

First you’ll probably want to use the --noprof-auto command line switch which prevents the profiler from automatically starting to record profile ticks.

Profile ticks will not be recorded until your application specifically invokes these APIs:

  • V8::ResumeProfiler() - start/resume collection of data
  • V8::PauseProfiler() - pause collection of data

105 Example Output

Statistical profiling result from benchmarks\v8.log, (4192 ticks, 0 unaccounted, 0 excluded).

 [Shared libraries]:
   ticks  total  nonlib   name
      9    0.2%    0.0%  C:\WINDOWS\system32\ntdll.dll
      2    0.0%    0.0%  C:\WINDOWS\system32\kernel32.dll

 [JavaScript]:
   ticks  total  nonlib   name
    741   17.7%   17.7%  LazyCompile: am3 crypto.js:108
    113    2.7%    2.7%  LazyCompile: Scheduler.schedule richards.js:188
    103    2.5%    2.5%  LazyCompile: rewrite_nboyer earley-boyer.js:3604
    103    2.5%    2.5%  LazyCompile: TaskControlBlock.run richards.js:324
     96    2.3%    2.3%  Builtin: JSConstructCall
    ...

 [C++]:
   ticks  total  nonlib   name
     94    2.2%    2.2%  v8::internal::ScavengeVisitor::VisitPointers
     33    0.8%    0.8%  v8::internal::SweepSpace
     32    0.8%    0.8%  v8::internal::Heap::MigrateObject
     30    0.7%    0.7%  v8::internal::Heap::AllocateArgumentsObject
    ...


 [GC]:
   ticks  total  nonlib   name
    458   10.9%

 [Bottom up (heavy) profile]:
  Note: percentage shows a share of a particular caller in the total
  amount of its parent calls.
  Callers occupying less than 2.0% are not shown.

   ticks parent  name
    741   17.7%  LazyCompile: am3 crypto.js:108
    449   60.6%    LazyCompile: montReduce crypto.js:583
    393   87.5%      LazyCompile: montSqrTo crypto.js:603
    212   53.9%        LazyCompile: bnpExp crypto.js:621
    212  100.0%          LazyCompile: bnModPowInt crypto.js:634
    212  100.0%            LazyCompile: RSADoPublic crypto.js:1521
    181   46.1%        LazyCompile: bnModPow crypto.js:1098
    181  100.0%          LazyCompile: RSADoPrivate crypto.js:1628
    ...

106 Timeline plot

The timeline plot visualizes where V8 is spending time. This can be used to find bottlenecks and spot things that are unexpected (for example, too much time spent in the garbage collector). Data for the plot are gathered by both sampling and instrumentation. Linux with gnuplot 4.6 is required.

To create a timeline plot, run V8 as described above, with the option --log-timer-events in addition to --prof:

out/ia32.release/d8 --prof --log-timer-events script.js

The output is then passed to a plot script, similar to the tick-processor:

tools/plot-timer-events v8.log

This creates timer-events.png in the working directory, which can be opened with most image viewers.

107 Options

Since recording log output comes with a certain performance overhead, the script attempts to correct this using a distortion factor. If not specified, it tries to determine the factor automatically. You can, however, also specify the distortion factor manually:

tools/plot-timer-events --distortion=4500 v8.log

You can also manually specify a certain range for which to create the plot or statistical profile, expressed in milliseconds:

tools/plot-timer-events --distortion=4500 --range=1000,2000 v8.log
tools/linux-tick-processor --distortion=4500 --range=1000,2000 v8.log

108 HTML 5 version

Both the statistical profile and the timeline plot are available in the browser. However, the statistical profile lacks C++ symbol resolution and the JavaScript port of gnuplot performs an order of magnitude slower than the native one.

108.1 Version numbering scheme

V8 version numbers are of the form x.y.z.w, where:

  • x.y is the Chromium milestone divided by 10 (e.g. M60 → 6.0)
  • z is automatically bumped whenever there’s a new LKGR (typically a few times per day)
  • w is bumped for manually backmerged patches after a branch point

If w is 0, it’s omitted from the version number. E.g. v5.9.211 (instead of “v5.9.211.0”) gets bumped up to v5.9.211.1 after backmerging a patch.

108.2 Which V8 version should I use?

Embedders of V8 should generally use the head of the branch corresponding to the minor version of V8 that ships in Chrome.

108.2.1 Finding the minor version of V8 corresponding to the latest stable Chrome

To find out what version this is,

  1. Go to https://omahaproxy.appspot.com/
  2. Find the latest stable Chrome version in the table
  3. Check the v8_version column (to the right) on the same row

Example: at the time of this writing, the site indicates that for mac/stable, the Chrome release version is 59.0.3071.86, which corresponds to V8 version 5.9.211.31.

108.2.2 Finding the head of the corresponding branch

V8’s version-related branches do not appear in the online repository at https://chromium.googlesource.com/v8/v8.git; instead only tags appear. To find the head of that branch, go to the URL in this form:

https://chromium.googlesource.com/v8/v8.git/+/branch-heads/<minor-version>

Example: for the V8 minor version 5.9 found above, we go to https://chromium.googlesource.com/v8/v8.git/+/branch-heads/5.9, finding a commit titled “Version 5.9.211.33”. Thus, the version of V8 that embedders should use at the time of this writing is 5.9.211.33.

Caution: You should not simply find the numerically-greatest tag corresponding to the above minor V8 version, as sometimes those are not supported, e.g. they are tagged before deciding where to cut minor releases. Such versions do not receive backports or similar.

Example: the V8 tags 5.9.212, 5.9.213, 5.9.214, 5.9.214.1, …, and 5.9.223 are abandoned, despite being numerically greater than the branch head of 5.9.211.33.

108.2.3 Checking out the head of the corresponding branch

If you have the source already, you can check out the head somewhat directly. If you’ve retrieved the source using depot_tools then you should be able to do

$ git branch --remotes | grep branch-heads/

to list the relevant branches. You’ll want to check out the one corresponding to the minor V8 version you found above, and use that. The tag that you end up on is the appropriate V8 version for you as the embedder.

If you did not use depot_tools, edit .git/config and add the line below to the [remote "origin"] section:

fetch = +refs/branch-heads/*:refs/remotes/branch-heads/*

Example: for the V8 minor version 5.9 found above, we can do

$ git checkout branch-heads/5.9
HEAD is now at 8c3db649d8... Version 5.9.211.33

108.3 Background

For stability reasons, Node master uses a V8 branch that is at Stable or older. For additional integration, the V8 team builds Node with V8 master, i.e., with a V8 version that is several weeks newer than the V8 version in the official Node master.

If the v8_node_linux64_rel bot fails on the V8 Commit Queue, there is either a legitimate problem with your CL (fix it) or Node must be modified. If the Node tests failed, search for “Not OK” in the logfiles. This document describes how to reproduce the problem locally and how to make changes to V8’s Node fork if your V8 CL causes the build to fail.

Note: Patches in V8’s fork are usually cherry-picked by the person who updates V8 in Node (usually several weeks or months later). If you merged a fix to V8’s Node fork, there’s nothing else you need to do.

109 Reproduce locally

Clone V8’s Node repository and check out the lkgr branch.

git clone https://github.com/v8/node.git
git checkout -b vee-eight-lkgr origin/vee-eight-lkgr

Or, if you already have a Node checkout, add v8/node as remote.

cd $NODE
git remote add v8-fork git@github.com:v8/node.git 
git checkout -b vee-eight-lkgr v8-fork/vee-eight-lkgr

Apply your patch, i.e., replace node/deps/v8 with a copy of v8 (lkgr branch) and build Node.

$V8/tools/release/update_node.py $V8 $NODE
cd $NODE
./configure && make -j48 test

You can run single tests.

./node test/parallel/test-that-you-want.js

For debug builds, set v8_optimized_debug in common.gypi to true and run

./configure --debug && make -j48 test

To run the debug binary, run ./node_g rather than ./node.

110 Make changes to Node.js

If you need to change something in Node so your CL doesn’t break the build anymore, do the following. You need a GitHub account for this.

110.0.1 Get the Node sources

Fork V8’s Node repository on GitHub (click the fork button). Clone your Node repository and check out the lkgr branch.

git clone git@github.com:your_user_name/node.git

If you already have a checkout of your fork of Node, you do not need to fork the repo again. Instead, add v8/node as a remote:

cd $NODE
git remote add v8-fork git@github.com:v8/node.git 
git checkout -b vee-eight-lkgr v8-fork/vee-eight-lkgr

Make sure you have the correct branch and check that the current version builds and runs. Then create a new branch for your fix.

git checkout vee-eight-lkgr
./configure && make -j48 test
git checkout -b fix-something

110.0.2 Apply your patch

Replace node/deps/v8 with a copy of v8 (lkgr branch) and build Node.

$V8/tools/release/update_node.py $V8 $NODE
cd $NODE
./configure && make -j48 test

110.0.3 Make fixes to Node

Make your changes to Node (not to deps/v8) and commit them.

git commit -m "subsystem: fix something"

Note: if you make several commits, please squash them into one and format the message according to Node’s guidelines. GitHub’s review works differently from V8’s Chromium-based review, and your commit messages will end up in Node exactly as you wrote them locally (it doesn’t matter what you type in the PR message). It’s OK to force push onto your fix-something branch.

Build and run the tests again. Double check that your formatting looks like the rest of the file.

./configure && make -j48 test
make lint
git push origin fix-something

Once you have pushed the fixes to your repository, open a Pull Request on GitHub. This will send an email to the V8 node-js team. They will review and merge your PR. Once the PR is merged, you can run the CQ for your V8 commit again and land it. If you have specific questions, ping the V8 node-js team maintainers.

111 ECMAScript 402

V8 optionally implements the ECMAScript 402 API. The API is enabled by default, but can be turned off at compile time.

111.1 Prerequisites

The i18n implementation adds a dependency on ICU. If you run

make dependencies

a suitable version of ICU is checked out into third_party/icu.

111.1.1 Alternative ICU checkout

You can check out the ICU sources at a different location and define the gyp variable icu_gyp_path to point at the icu.gyp file.

111.1.2 System ICU

Last but not least, you can compile V8 against a version of ICU installed in your system. To do so, specify the gyp variable use_system_icu=1. If you also have want_separate_host_toolset enabled, the bundled ICU will still be compiled to generate the V8 snapshot. The system ICU will only be used for the target architecture.

111.2 Embedding V8

If you embed V8 in your application, but your application itself doesn’t use ICU, you will need to initialize ICU before calling into V8 by executing:

v8::V8::InitializeICU();

It is safe to invoke this method even if ICU was not compiled in; in that case it does nothing.

111.3 Compiling without i18n support

To build V8 without i18n support use

make i18nsupport=off native