For example, if a web page runs one line of code (e.g. foo(42);) only once, an interpreter can execute it very quickly, with no time wasted on compilation.
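The original example didn't survive into the text; a minimal sketch of the idea (foo is just an illustrative name, as in the post) might look like this:

```javascript
// A tiny function that the page calls exactly once. An interpreter
// can start executing this immediately; spending time compiling and
// optimizing it would cost more than it could ever save.
function foo(x) {
  return x + 1;
}

foo(42); // runs a single time, then never again
```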
But if we know the function is going to run 10,000 times, we'd want to pay the capital cost of optimization up front in exchange for higher peak performance.
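A sketch of that hot-path case (the 10,000 figure comes from the post; the function body is illustrative):

```javascript
// The same kind of function, but now on a hot path. The one-time
// cost of optimized compilation is amortized across 10,000 calls,
// so peak performance matters more than startup speed.
function foo(x) {
  return x + 1;
}

let total = 0;
for (let i = 0; i < 10000; i++) {
  total += foo(i); // an engine like V8 will tier this up to optimized code
}
console.log(total); // 50005000
```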
What if the code is running on a low-memory device such as a phone? Then we might prefer a configuration that trades peak speed for a smaller memory footprint.
Takeaway - the optimal translation of code depends on context (e.g. phone, server, IoT device, etc.); there is no single best pipeline for every environment.
With this in mind, the V8 team has been building a completely new execution pipeline. Here are the pieces:
TurboFan is an optimizing compiler.
Ignition is an interpreter with a small memory footprint and fast startup. It's integrated with TurboFan, so code that turns out to be hot (e.g. a page that runs the same functions many times) can be handed off to TurboFan for optimization. This pairing allows for many different configurations.
The pipeline is also optimized for Node.js, the server-side JavaScript runtime that embeds the V8 engine.