/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*-
 * vim: set ts=8 sts=2 et sw=2 tw=80:
 * This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
/*
 * [SMDOC] Garbage Collector
 *
 * This code implements an incremental mark-and-sweep garbage collector, with
 * most sweeping carried out in the background on a parallel thread.
 *
 * Full vs. zone GC
 * ----------------
 *
 * The collector can collect all zones at once, or a subset. These types of
 * collection are referred to as a full GC and a zone GC respectively.
 *
 * It is possible for an incremental collection that started out as a full GC
 * to become a zone GC if new zones are created during the course of the
 * collection.
 *
 * Incremental collection
 * ----------------------
 *
 * For a collection to be carried out incrementally the following conditions
 * must be met:
 *  - the collection must be run by calling js::GCSlice() rather than js::GC()
 *  - the GC parameter JSGC_INCREMENTAL_GC_ENABLED must be true.
 *
 * The last condition is an engine-internal mechanism to ensure that
 * incremental collection is not carried out without the correct barriers
 * being implemented. For more information see 'Incremental marking' below.
 *
 * If the collection is not incremental, all foreground activity happens
 * inside a single call to GC() or GCSlice(). However the collection is not
 * complete until the background sweeping activity has finished.
 *
 * An incremental collection proceeds as a series of slices, interleaved with
 * mutator activity, i.e. running JavaScript code. Slices are limited by a
 * time budget. The slice finishes as soon as possible after the requested
 * time has passed.
 *
 * Collector states
 * ----------------
 *
 * The collector proceeds through the following states, the current state
 * being held in JSRuntime::gcIncrementalState:
 *
 *  - Prepare    - unmarks GC things, discards JIT code and other setup
 *  - MarkRoots  - marks the stack and other roots
 *  - Mark       - incrementally marks reachable things
 *  - Sweep      - sweeps zones in groups and continues marking unswept zones
 *  - Finalize   - performs background finalization, concurrent with mutator
 *  - Compact    - incrementally compacts by zone
 *  - Decommit   - performs background decommit and chunk removal
 *
 * Roots are marked in the first MarkRoots slice; this is the start of the GC
 * proper. The following states can take place over one or more slices.
 *
 * In other words an incremental collection proceeds like this:
 *
 * Slice 1:   Prepare:    Starts background task to unmark GC things
 *
 *          ... JS code runs, background unmarking finishes ...
 *
 * Slice 2:   MarkRoots:  Roots are pushed onto the mark stack.
 *            Mark:       The mark stack is processed by popping an element,
 *                        marking it, and pushing its children.
 *
 *          ... JS code runs ...
 *
 * Slice 3:   Mark:       More mark stack processing.
 *
 *          ... JS code runs ...
 *
 * Slice n-1: Mark:       More mark stack processing.
 *
 *          ... JS code runs ...
 *
 * Slice n:   Mark:       Mark stack is completely drained.
 *            Sweep:      Select first group of zones to sweep and sweep them.
 *
 *          ... JS code runs ...
 *
 * Slice n+1: Sweep:      Mark objects in unswept zones that were newly
 *                        identified as alive (see below). Then sweep more
 *                        zone sweep groups.
 *
 *          ... JS code runs ...
 *
 * Slice n+2: Sweep:      Mark objects in unswept zones that were newly
 *                        identified as alive. Then sweep more zones.
 *
 *          ... JS code runs ...
 *
 * Slice m:   Sweep:      Sweeping is finished, and background sweeping
 *                        started on the helper thread.
 *
 *          ... JS code runs, remaining sweeping done on background thread ...
 *
 * When background sweeping finishes the GC is complete.
 *
 * Incremental marking
 * -------------------
 *
 * Incremental collection requires close collaboration with the mutator (i.e.,
 * JS code) to guarantee correctness.
 *
 *  - During an incremental GC, if a memory location (except a root) is
 *    written to, then the value it previously held must be marked. Write
 *    barriers ensure this.
 *
 *  - Any object that is allocated during incremental GC must start out
 *    marked.
 *
 *  - Roots are marked in the first slice and hence don't need write barriers.
 *    Roots are things like the C stack and the VM stack.
 *
 * The problem that write barriers solve is that between slices the mutator
 * can change the object graph. We must ensure that it cannot do this in such
 * a way that makes us fail to mark a reachable object (marking an unreachable
 * object is tolerable).
 *
 * We use a snapshot-at-the-beginning algorithm to do this. This means that we
 * promise to mark at least everything that is reachable at the beginning of
 * collection. To implement it we mark the old contents of every non-root
 * memory location written to by the mutator while the collection is in
 * progress, using write barriers. This is described in gc/Barrier.h.
 *
 * Incremental sweeping
 * --------------------
 *
 * Sweeping is difficult to do incrementally because object finalizers must be
 * run at the start of sweeping, before any mutator code runs. The reason is
 * that some objects use their finalizers to remove themselves from caches. If
 * mutator code were allowed to run after the start of sweeping, it could
 * observe the state of the cache and create a new reference to an object that
 * was just about to be destroyed.
 *
 * Sweeping all finalizable objects in one go would introduce long pauses, so
 * instead sweeping is broken up into groups of zones. Zones which are not yet
 * being swept are still marked, so the issue above does not apply.
 *
 * The order of sweeping is restricted by cross compartment pointers - for
 * example say that object |a| from zone A points to object |b| in zone B and
 * neither object was marked when we transitioned to the Sweep phase. Imagine
 * we sweep B first and then return to the mutator. It's possible that the
 * mutator could cause |a| to become alive through a read barrier (perhaps it
 * was a shape that was accessed via a shape table). Then we would need to
 * mark |b|, which |a| points to, but |b| has already been swept.
 *
 * So if there is such a pointer then marking of zone B must not finish before
 * marking of zone A. Pointers which form a cycle between zones therefore
 * restrict those zones to being swept at the same time, and these are found
 * using Tarjan's algorithm for finding the strongly connected components of a
 * graph.
 *
 * GC things without finalizers, and things with finalizers that are able to
 * run in the background, are swept on the background thread. This accounts
 * for most of the sweeping work.
 *
 * Reset
 * -----
 *
 * During incremental collection it is possible, although unlikely, for
 * conditions to change such that incremental collection is no longer safe. In
 * this case, the collection is 'reset' by resetIncrementalGC(). If we are in
 * the mark state, this just stops marking, but if we have started sweeping
 * already, we continue non-incrementally until we have swept the current
 * sweep group. Following a reset, a new collection is started.
 *
 * Compacting GC
 * -------------
 *
 * Compacting GC happens at the end of a major GC as part of the last slice.
 * There are three parts:
 *
 *  - Arenas are selected for compaction.
 *  - The contents of those arenas are moved to new arenas.
 *  - All references to moved things are updated.
 *
 * Collecting Atoms
 * ----------------
 *
 * Atoms are collected differently from other GC things.
 * They are contained in a special zone and things in other zones may have
 * pointers to them that are not recorded in the cross compartment pointer
 * map. Each zone holds a bitmap with the atoms it might be keeping alive, and
 * atoms are only collected if they are not included in any zone's atom
 * bitmap. See AtomMarking.cpp for how this bitmap is managed.
*/
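The write-barrier invariant described above can be illustrated with a minimal, self-contained sketch. Everything here (the `Cell` and `GC` types, `preWriteBarrier`, `store`) is invented for illustration; SpiderMonkey's real barriers live in gc/Barrier.h and are considerably more involved:

```cpp
#include <cassert>
#include <vector>

// Illustrative stand-ins, not SpiderMonkey types.
struct Cell {
  bool marked = false;
};

struct GC {
  bool collecting = false;       // true between incremental GC slices
  std::vector<Cell*> markStack;  // grey cells awaiting tracing

  // Snapshot-at-the-beginning pre-write barrier: before a non-root slot is
  // overwritten during an incremental collection, mark the value it
  // previously held so nothing reachable at the start of the GC is missed.
  void preWriteBarrier(Cell* prev) {
    if (collecting && prev && !prev->marked) {
      prev->marked = true;
      markStack.push_back(prev);  // its children get traced in a later slice
    }
  }

  // All mutator pointer stores go through a barriered helper.
  void store(Cell** slot, Cell* value) {
    preWriteBarrier(*slot);
    *slot = value;
  }
};
```

With the barrier in place, overwriting the only pointer to a cell between slices still leaves the old referent on the mark stack, so the "lost object" race described above cannot occur.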
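Sweep groups are the strongly connected components of the cross-zone pointer graph, as noted above. Below is a generic sketch of Tarjan's SCC algorithm over a small adjacency-list graph; this is not SpiderMonkey's implementation, which runs over Zone objects and their edges:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

using Graph = std::vector<std::vector<int>>;  // adjacency lists

// Tarjan's algorithm: one DFS assigns each node a discovery index and a
// low-link (the smallest index reachable without leaving the current DFS
// stack). A node whose low-link equals its own index is the root of an SCC,
// and the SCC is everything above it on the stack. SCCs are emitted in
// reverse topological order, which is the order sweeping wants.
struct Tarjan {
  const Graph& g;
  std::vector<int> index, low;
  std::vector<bool> onStack;
  std::vector<int> stack;
  std::vector<std::vector<int>> sccs;
  int counter = 0;

  explicit Tarjan(const Graph& graph)
      : g(graph),
        index(graph.size(), -1),
        low(graph.size(), 0),
        onStack(graph.size(), false) {
    for (int v = 0; v < int(g.size()); v++) {
      if (index[v] < 0) {
        dfs(v);
      }
    }
  }

  void dfs(int v) {
    index[v] = low[v] = counter++;
    stack.push_back(v);
    onStack[v] = true;
    for (int w : g[v]) {
      if (index[w] < 0) {
        dfs(w);
        low[v] = std::min(low[v], low[w]);
      } else if (onStack[w]) {
        low[v] = std::min(low[v], index[w]);
      }
    }
    if (low[v] == index[v]) {
      std::vector<int> scc;
      int w;
      do {
        w = stack.back();
        stack.pop_back();
        onStack[w] = false;
        scc.push_back(w);
      } while (w != v);
      sccs.push_back(std::move(scc));
    }
  }
};
```

In GC terms: zones whose pointers form a cycle end up in the same component and must be swept together, while components are swept in an order that never sweeps a pointee's zone before the zones that point into it have finished marking.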
using mozilla::EnumSet;
using mozilla::MakeScopeExit;
using mozilla::Maybe;
using mozilla::Nothing;
using mozilla::Some;
using mozilla::TimeDuration;
using mozilla::TimeStamp;
using JS::SliceBudget;
using JS::TimeBudget;
using JS::WorkBudget;
static_assert(std::size(slotsToThingKind) == SLOTS_TO_THING_KIND_LIMIT,
              "We have defined a slot count for each kind.");
// A table converting an object size in "slots" (increments of
// sizeof(js::Value)) to the total number of bytes in the corresponding
// AllocKind. See gc::slotsToThingKind. This primarily allows wasm jit code to
// remain compliant with the AllocKind system.
//
// To use this table, subtract sizeof(NativeObject) from your desired
// allocation size, divide by sizeof(js::Value) to get the number of "slots",
// and then index into this table. See gc::GetGCObjectKindForBytes.
const constexpr uint32_t gc::slotsToAllocKindBytes[] = {
    // These entries correspond exactly to gc::slotsToThingKind. The numeric
    // comments therefore indicate the number of slots that the "bytes" would
    // correspond to.
    // clang-format off
    /* 0 */ sizeof(JSObject_Slots0),  sizeof(JSObject_Slots2),
            sizeof(JSObject_Slots2),  sizeof(JSObject_Slots4),
    /* 4 */ sizeof(JSObject_Slots4),  sizeof(JSObject_Slots8),
            sizeof(JSObject_Slots8),  sizeof(JSObject_Slots8),
    /* 8 */ sizeof(JSObject_Slots8),  sizeof(JSObject_Slots12),
            sizeof(JSObject_Slots12), sizeof(JSObject_Slots12),
    /* 12 */ sizeof(JSObject_Slots12), sizeof(JSObject_Slots16),
             sizeof(JSObject_Slots16), sizeof(JSObject_Slots16),
    /* 16 */ sizeof(JSObject_Slots16)
    // clang-format on
};
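The slot arithmetic described in the comment above can be sketched in isolation. The concrete sizes below are assumptions for illustration (a js::Value is 8 bytes on typical builds; the header size here is made up), not the engine's real constants:

```cpp
#include <cassert>
#include <cstddef>

// Assumed sizes, for illustration only.
constexpr size_t kHeaderBytes = 32;  // stand-in for sizeof(NativeObject)
constexpr size_t kValueBytes = 8;    // stand-in for sizeof(js::Value)

// Convert a requested allocation size to the slot count used to index a
// table like gc::slotsToAllocKindBytes: strip the fixed object header, then
// count whole Value-sized slots.
constexpr size_t BytesToSlots(size_t nbytes) {
  return (nbytes - kHeaderBytes) / kValueBytes;
}
```

A lookup like gc::GetGCObjectKindForBytes would then map the resulting slot count to the smallest AllocKind bucket that holds at least that many slots, which is what the table entries above encode.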
// Please also update jit-test/tests/gc/gczeal.js when updating this help text.
// clang-format off
const char gc::ZealModeHelpText[] =
    " Specifies how zealous the garbage collector should be. Some of these modes\n"
    " can be set simultaneously, by passing multiple level options, e.g. \"2;4\"\n"
    " will activate both modes 2 and 4. Modes can be specified by name or\n"
    " number.\n"
    " \n"
    " Values:\n"
    " 0: (None) Normal amount of collection (resets all modes)\n"
    " 1: (RootsChange) Collect when roots are added or removed\n"
    " 2: (Alloc) Collect every N allocations (default: 100)\n"
    " 4: (VerifierPre) Verify pre write barriers between instructions\n"
    " 6: (YieldBeforeRootMarking) Incremental GC in two slices that yields\n"
    "    before root marking\n"
    " 7: (GenerationalGC) Collect the nursery every N nursery allocations\n"
    " 8: (YieldBeforeMarking) Incremental GC in two slices that yields\n"
    "    between the root marking and marking phases\n"
    " 9: (YieldBeforeSweeping) Incremental GC in two slices that yields\n"
    "    between the marking and sweeping phases\n"
    " 10: (IncrementalMultipleSlices) Incremental GC in many slices\n"
    " 11: (IncrementalMarkingValidator) Verify incremental marking\n"
    " 12: (ElementsBarrier) Use the individual element post-write barrier\n"
    "     regardless of elements size\n"
    " 13: (CheckHashTablesOnMinorGC) Check internal hashtables on minor GC\n"
    " 14: (Compact) Perform a shrinking collection every N allocations\n"
    " 15: (CheckHeapAfterGC) Walk the heap to check its integrity after every\n"
    "     GC\n"
    " 17: (YieldBeforeSweepingAtoms) Incremental GC in two slices that yields\n"
    "     before sweeping the atoms table\n"
    " 18: (CheckGrayMarking) Check gray marking invariants after every GC\n"
    " 19: (YieldBeforeSweepingCaches) Incremental GC in two slices that yields\n"
    "     before sweeping weak caches\n"
    " 21: (YieldBeforeSweepingObjects) Incremental GC that yields once per\n"
    "     zone before sweeping foreground finalized objects\n"
    " 22: (YieldBeforeSweepingNonObjects) Incremental GC that yields once per\n"
    "     zone before sweeping non-object GC things\n"
    " 23: (YieldBeforeSweepingPropMapTrees) Incremental GC that yields once\n"
    "     per zone before sweeping shape trees\n"
    " 24: (CheckWeakMapMarking) Check weak map marking invariants after every\n"
    "     GC\n"
    " 25: (YieldWhileGrayMarking) Incremental GC in two slices that yields\n"
    "     during gray marking\n";
// clang-format on
// The set of zeal modes that yield at specific points in collection.
static constexpr EnumSet<ZealMode> YieldPointZealModes = {
ZealMode::YieldBeforeRootMarking,
ZealMode::YieldBeforeMarking,
ZealMode::YieldBeforeSweeping,
ZealMode::YieldBeforeSweepingAtoms,
ZealMode::YieldBeforeSweepingCaches,
ZealMode::YieldBeforeSweepingObjects,
ZealMode::YieldBeforeSweepingNonObjects,
ZealMode::YieldBeforeSweepingPropMapTrees,
ZealMode::YieldWhileGrayMarking};
// The set of zeal modes that control incremental slices.
static constexpr EnumSet<ZealMode> IncrementalSliceZealModes =
YieldPointZealModes +
EnumSet<ZealMode>{ZealMode::IncrementalMultipleSlices};
// The set of zeal modes that trigger GC periodically.
static constexpr EnumSet<ZealMode> PeriodicGCZealModes =
IncrementalSliceZealModes + EnumSet<ZealMode>{ZealMode::Alloc,
ZealMode::GenerationalGC,
ZealMode::Compact};
// The set of zeal modes that are mutually exclusive. All of these trigger GC
// except VerifierPre.
static constexpr EnumSet<ZealMode> ExclusiveZealModes =
PeriodicGCZealModes + EnumSet<ZealMode>{ZealMode::VerifierPre};
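The nested set definitions above rely on mozilla::EnumSet behaving like a small bitmask with union (`+`) and membership tests. A minimal sketch of that idea using a plain bitmask follows; `Mode` and `ModeSet` are invented for illustration and are not mozilla::EnumSet:

```cpp
#include <cassert>
#include <cstdint>
#include <initializer_list>

enum class Mode : uint32_t { A, B, C, D };

// A tiny EnumSet-like wrapper over a bitmask: each enumerator owns one bit.
struct ModeSet {
  uint32_t bits = 0;

  constexpr ModeSet() = default;
  constexpr ModeSet(std::initializer_list<Mode> modes) {
    for (Mode m : modes) {
      bits |= 1u << uint32_t(m);
    }
  }

  constexpr bool contains(Mode m) const {
    return bits & (1u << uint32_t(m));
  }

  // Set union, mirroring EnumSet's operator+ as used above.
  constexpr ModeSet operator+(const ModeSet& other) const {
    ModeSet result;
    result.bits = bits | other.bits;
    return result;
  }
};
```

Composing larger sets from smaller ones this way keeps each set definition a compile-time constant, which is why the zeal-mode sets above can be `static constexpr`.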
  // Modes that trigger periodically are mutually exclusive. If we're setting
  // one of those, we first reset all of them.
  ZealMode zealMode = ZealMode(zeal);
  if (ExclusiveZealModes.contains(zealMode)) {
    for (auto mode : ExclusiveZealModes) {
      if (hasZealMode(mode)) {
clearZealMode(mode);
}
}
}
if (zealMode == ZealMode::GenerationalGC) {
evictNursery(JS::GCReason::EVICT_NURSERY);
nursery().enterZealMode();
}
bool GCRuntime::parseAndSetZeal(const char* str) {
  // Set the zeal mode from a string consisting of one or more mode specifiers
  // separated by ';', optionally followed by a ',' and the trigger frequency.
  // The mode specifiers can be a mode name or its number.
auto text = CharRange(str, strlen(str));
  CharRangeVector parts;
  if (!SplitStringBy(text, ',', &parts)) {
    return false;
}
bool GCRuntime::zealModeControlsYieldPoint() const {
  // Indicates whether a zeal mode is enabled that controls the point at which
  // the collector yields to the mutator. Yield can happen once per collection
  // or once per zone depending on the mode.
  return hasAnyZealModeOf(YieldPointZealModes);
}
bool GCRuntime::hasZealMode(ZealMode mode) const {
  static_assert(size_t(ZealMode::Limit) < sizeof(zealModeBits) * 8,
                "Zeal modes must fit in zealModeBits");
  return zealModeBits & (1 << uint32_t(mode));
}
  for (auto& marker : markers) {
    if (!marker->init()) {
      return false;
}
}
  if (!initSweepActions()) {
    return false;
}
  UniquePtr<Zone> zone = MakeUnique<Zone>(rt, Zone::AtomsZone);
  if (!zone || !zone->init()) {
    return false;
}
// The atoms zone is stored as the first element of the zones vector.
MOZ_ASSERT(zone->isAtomsZone());
MOZ_ASSERT(zones().empty());
MOZ_ALWAYS_TRUE(zones().reserve(1)); // ZonesVector has inline capacity 4.
zones().infallibleAppend(zone.release());
  // Wait for nursery background free to end and disable it to release memory.
  if (nursery().isEnabled()) {
nursery().disable();
}
  // Wait until the background finalization and allocation stops and the
  // helper thread shuts down before we forcefully release any remaining GC
  // memory.
sweepTask.join();
markTask.join();
freeTask.join();
allocTask.cancelAndWait();
  decommitTask.cancelAndWait();

#ifdef DEBUG
{
MOZ_ASSERT(dispatchedParallelTasks == 0);
AutoLockHelperThreadState lock;
MOZ_ASSERT(queuedParallelTasks.ref().isEmpty(lock));
  }
#endif
bool GCRuntime::freezeSharedAtomsZone() {
  // This is called just after permanent atoms and well-known symbols have
  // been created. At this point all existing atoms and symbols are permanent.
  //
  // This method makes the current atoms zone into a shared atoms zone and
  // removes it from the zones list. Everything in it is marked black. A new
  // empty atoms zone is created, where all atoms local to this runtime will
  // live.
  //
  // The shared atoms zone will not be collected until shutdown when it is
  // returned to the zone list by restoreSharedAtomsZone().
void GCRuntime::restoreSharedAtomsZone() {
  // Return the shared atoms zone to the zone list. This allows the contents
  // of the shared atoms zone to be collected when the parent runtime is shut
  // down.
// Insert at start to preserve invariant that atoms zones come first.
  AutoEnterOOMUnsafeRegion oomUnsafe;
  if (!zones().insert(zones().begin(), sharedAtomsZone_)) {
oomUnsafe.crash("restoreSharedAtomsZone");
}
  // Special case: if there is still an `AutoDisableGenerationalGC` active (eg
  // from the --no-ggc command-line flag), then do not allow controlling the
  // state of the nursery. Done here where cx is available.
  if (key == JSGC_NURSERY_ENABLED && cx->generationalDisabled > 0) {
    return false;
}
bool GCRuntime::setParameter(JSGCParamKey key, uint32_t value,
                             AutoLockGC& lock) {
  switch (key) {
    case JSGC_SLICE_TIME_BUDGET_MS:
      defaultTimeBudgetMS_ = value;
      break;
    case JSGC_INCREMENTAL_GC_ENABLED:
      setIncrementalGCEnabled(value != 0);
      break;
    case JSGC_PER_ZONE_GC_ENABLED:
      perZoneGCEnabled = value != 0;
      break;
    case JSGC_COMPACTING_ENABLED:
      compactingEnabled = value != 0;
      break;
    case JSGC_NURSERY_ENABLED: {
      AutoUnlockGC unlock(lock);
      setNurseryEnabled(value != 0);
      break;
    }
    case JSGC_PARALLEL_MARKING_ENABLED:
      setParallelMarkingEnabled(value != 0);
      break;
    case JSGC_INCREMENTAL_WEAKMAP_ENABLED:
      for (auto& marker : markers) {
        marker->incrementalWeakMapMarkingEnabled = value != 0;
      }
      break;
    case JSGC_SEMISPACE_NURSERY_ENABLED: {
      AutoUnlockGC unlock(lock);
      nursery().setSemispaceEnabled(value);
      break;
    }
    case JSGC_MIN_EMPTY_CHUNK_COUNT:
      setMinEmptyChunkCount(value, lock);
      break;
    default:
      if (IsGCThreadParameter(key)) {
        return setThreadParameter(key, value, lock);
      }
      if (!tunables.setParameter(key, value)) {
        return false;
      }
      updateAllGCStartThresholds();
  }

  return true;
}
bool GCRuntime::setThreadParameter(JSGCParamKey key, uint32_t value,
                                   AutoLockGC& lock) {
  if (rt->parentRuntime) {
    // Don't allow these to be set for worker runtimes.
    return false;
  }

  switch (key) {
    case JSGC_HELPER_THREAD_RATIO:
      if (value == 0) {
        return false;
      }
      helperThreadRatio = double(value) / 100.0;
      break;
    case JSGC_MAX_HELPER_THREADS:
      if (value == 0) {
        return false;
      }
      maxHelperThreads = value;
      break;
    case JSGC_MAX_MARKING_THREADS:
      maxMarkingThreads = std::min(size_t(value), MaxParallelWorkers);
      break;
    default:
      MOZ_CRASH("Unexpected parameter key");
  }
void GCRuntime::setNurseryEnabled(bool enabled) {
  if (enabled) {
    nursery().enable();
  } else {
    if (nursery().isEnabled()) {
      minorGC(JS::GCReason::EVICT_NURSERY);
      nursery().disable();
    }
  }
}
void GCRuntime::updateHelperThreadCount() {
  if (!CanUseExtraThreads()) {
    // startTask will run the work on the main thread if the count is 1.
    MOZ_ASSERT(helperThreadCount == 1);
    markingThreadCount = 1;
    return;
  }

  // Number of extra threads required during parallel marking to ensure we can
  // start the necessary marking tasks. Background free and background
  // allocation may already be running and we want to avoid these tasks
  // blocking marking. In real configurations there will be enough threads
  // that this won't affect anything.
  static constexpr size_t SpareThreadsDuringParallelMarking = 2;
  // Calculate the target thread count for parallel marking, which uses
  // separate parameters to let us adjust this independently.
markingThreadCount = std::min(cpuCount / 2, maxMarkingThreads.ref());
  // Calculate the overall target thread count taking into account the
  // separate target for parallel marking threads. Add spare threads to avoid
  // blocking parallel marking when there is other GC work happening.
size_t targetCount =
std::max(helperThreadCount.ref(),
markingThreadCount.ref() + SpareThreadsDuringParallelMarking);
  // Attempt to create extra threads if possible. This is not supported when
  // using an external thread pool.
AutoLockHelperThreadState lock;
(void)HelperThreadState().ensureThreadCount(targetCount, lock);
  // Limit all thread counts based on the number of threads available, which
  // may be fewer than requested.
size_t availableThreadCount = GetHelperThreadCount();
MOZ_ASSERT(availableThreadCount != 0);
targetCount = std::min(targetCount, availableThreadCount);
  helperThreadCount = std::min(helperThreadCount.ref(), availableThreadCount);

  if (availableThreadCount < SpareThreadsDuringParallelMarking) {
markingThreadCount = 1;
} else {
markingThreadCount =
std::min(markingThreadCount.ref(),
availableThreadCount - SpareThreadsDuringParallelMarking);
}
// Update the maximum number of threads that will be used for GC work.
maxParallelThreads = targetCount;
}
bool GCRuntime::initOrDisableParallelMarking() {
  // Attempt to initialize parallel marking state or disable it on failure.
  // This is called when parallel marking is enabled or disabled.
MOZ_ASSERT(markers.length() != 0);
  if (updateMarkersVector()) {
    return true;
}
// Failed to initialize parallel marking so disable it instead.
MOZ_ASSERT(parallelMarkingEnabled);
parallelMarkingEnabled = false;
  MOZ_ALWAYS_TRUE(updateMarkersVector());
  return false;
}
  // Update the helper thread system's global count by subtracting this
  // runtime's current contribution |reservedMarkingThreads| and adding the
  // new contribution |newCount|.
  AutoLockHelperThreadState lock;
  auto& globalCount = HelperThreadState().gcParallelMarkingThreads;
MOZ_ASSERT(globalCount >= reservedMarkingThreads);
  size_t newGlobalCount = globalCount - reservedMarkingThreads + newCount;
  if (newGlobalCount > HelperThreadState().threadCount) {
    // Not enough total threads.
    return false;
}
bool GCRuntime::updateMarkersVector() {
  MOZ_ASSERT(helperThreadCount >= 1,
             "There must always be at least one mark task");
MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
assertNoMarkingWork();
  // Limit worker count to number of GC parallel tasks that can run
  // concurrently, otherwise one thread can deadlock waiting on another.
size_t targetCount = std::min(markingWorkerCount(), getMaxParallelThreads());
  if (rt->isMainRuntime()) {
    // For the main runtime, reserve helper threads as long as parallel
    // marking is enabled. Worker runtimes may not mark in parallel if there
    // are insufficient threads available at the time.
    size_t threadsToReserve = targetCount > 1 ? targetCount : 0;
    if (!reserveMarkingThreads(threadsToReserve)) {
      return false;
}
}
  if (markers.length() > targetCount) {
    return markers.resize(targetCount);
}
  while (markers.length() < targetCount) {
    auto marker = MakeUnique<GCMarker>(rt);
    if (!marker) {
      return false;
}
#ifdef JS_GC_ZEAL
    if (maybeMarkStackLimit) {
marker->setMaxCapacity(maybeMarkStackLimit);
    }
#endif
    if (!marker->init()) {
      return false;
}
    if (!markers.emplaceBack(std::move(marker))) {
      return false;
}
}
  return true;
}
template <typename F>
static bool EraseCallback(CallbackVector<F>& vector, F callback) {
  for (Callback<F>* p = vector.begin(); p != vector.end(); p++) {
    if (p->op == callback) {
      vector.erase(p);
      return true;
}
}
  return false;
}
template <typename F>
static bool EraseCallback(CallbackVector<F>& vector, F callback, void* data) {
  for (Callback<F>* p = vector.begin(); p != vector.end(); p++) {
    if (p->op == callback && p->data == data) {
      vector.erase(p);
      return true;
}
  }
  return false;
}
void GCRuntime::removeBlackRootsTracer(JSTraceDataOp traceOp, void* data) {
  // Can be called from finalizers
MOZ_ALWAYS_TRUE(EraseCallback(blackRootTracers.ref(), traceOp));
}
bool GCRuntime::addRoot(Value* vp, const char* name) {
  /*
   * Sometimes Firefox will hold weak references to objects and then convert
   * them to strong references by calling AddRoot (e.g., via PreserveWrapper,
   * or ModifyBusyCount in workers). We need a read barrier to cover these
   * cases.
*/
MOZ_ASSERT(vp);
  Value value = *vp;
  if (value.isGCThing()) {
ValuePreWriteBarrier(value);
}
bool js::gc::IsCurrentlyAnimating(const TimeStamp& lastAnimationTime,
                                  const TimeStamp& currentTime) {
  // Assume that we're currently animating if js::NotifyAnimationActivity has
  // been called in the last second.
  static const auto oneSecond = TimeDuration::FromSeconds(1);
  return !lastAnimationTime.IsNull() &&
currentTime < (lastAnimationTime + oneSecond);
}
bool GCRuntime::shouldCompact() {
  // Compact on shrinking GC if enabled. Skip compacting in incremental GCs
  // if we are currently animating, unless the user is inactive or we're
  // responding to memory pressure.
  if (!isShrinkingGC() || !isCompactingGCEnabled()) {
    return false;
}
bool GCRuntime::triggerGC(JS::GCReason reason) {
  /*
   * Don't trigger GCs if this is being called off the main thread from
   * onTooMuchMalloc().
   */
  if (!CurrentThreadCanAccessRuntime(rt)) {
    return false;
}
  /* GC is already running. */
  if (JS::RuntimeHeapIsCollecting()) {
    return false;
}
  if (trigger.shouldTrigger) {
    // Start or continue an in progress incremental GC. We do this to try to
    // avoid performing non-incremental GCs on zones which allocate a lot of
    // data, even when incremental slices can't be triggered via scheduling
    // in the event loop.
triggerZoneGC(zone, JS::GCReason::ALLOC_TRIGGER, trigger.usedBytes,
trigger.thresholdBytes);
}
}
  // Trigger a zone GC. budgetIncrementalGC() will work out whether to do an
  // incremental or non-incremental collection.
  triggerZoneGC(zone, reason, trigger.usedBytes, trigger.thresholdBytes);
  return true;
}
bool GCRuntime::shouldDecommit() const {
  switch (gcOptions()) {
    case JS::GCOptions::Normal:
      // If we are allocating heavily enough to trigger "high frequency" GC
      // then skip decommit so that we do not compete with the mutator.
      return !schedulingState.inHighFrequencyGCMode();
    case JS::GCOptions::Shrink:
      // If we're doing a shrinking GC we always decommit to release as much
      // memory as possible.
      return true;
    case JS::GCOptions::Shutdown:
      // There's no point decommitting as we are about to free everything.