/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- */
/* vim: set ts=8 sts=2 et sw=2 tw=80: */
/* This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
//
// This file implements a garbage-cycle collector based on the paper
//
//   Concurrent Cycle Collection in Reference Counted Systems
//   Bacon & Rajan (2001), ECOOP 2001 / Springer LNCS vol 2072
//
// We are not using the concurrent or acyclic cases of that paper; so
// the green, red and orange colors are not used.
//
// The collector is based on tracking pointers of four colors:
//
// Black nodes are definitely live. If we ever determine a node is
// black, it's ok to forget about, drop from our records.
//
// White nodes are definitely garbage cycles. Once we finish with our
// scanning, we unlink all the white nodes and expect that by
// unlinking them they will self-destruct (since a garbage cycle is
// only keeping itself alive with internal links, by definition).
//
// Snow-white is an addition to the original algorithm. A snow-white node
// has reference count zero and is just waiting for deletion.
//
// Grey nodes are being scanned. Nodes that turn grey will turn
// either black if we determine that they're live, or white if we
// determine that they're a garbage cycle. After the main collection
// algorithm there should be no grey nodes.
//
// Purple nodes are *candidates* for being scanned. They are nodes we
// haven't begun scanning yet because they're not old enough, or we're
// still partway through the algorithm.
//
// XPCOM objects participating in garbage-cycle collection are obliged
// to inform us when they ought to turn purple; that is, when their
// refcount transitions from N+1 -> N, for nonzero N. Furthermore we
// require that *after* an XPCOM object has informed us of turning
// purple, they will tell us when they either transition back to being
// black (incremented refcount) or are ultimately deleted.
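// As an illustrative aside, the black/grey/white phases above can be modeled
// by a tiny, self-contained sketch of Bacon & Rajan's trial deletion. The
// names (Node, MarkGray, Scan, ScanBlack, CollectFrom) are invented for this
// sketch; the real collector works on a separately built graph and also
// unlinks the white nodes, which this sketch omits.

```cpp
#include <cassert>
#include <vector>

enum class Color { Black, Gray, White };

struct Node {
  int refCount = 0;              // external + internal references
  Color color = Color::Black;
  std::vector<Node*> children;   // each edge holds one reference
};

// Trial deletion: turn the subgraph gray, subtracting internal references.
static void MarkGray(Node* n) {
  if (n->color == Color::Gray) {
    return;
  }
  n->color = Color::Gray;
  for (Node* c : n->children) {
    c->refCount--;  // cancel the contribution of this internal edge
    MarkGray(c);
  }
}

static void ScanBlack(Node* n);

// Gray nodes with references left over are live; the rest belong to a
// garbage cycle and turn white.
static void Scan(Node* n) {
  if (n->color != Color::Gray) {
    return;
  }
  if (n->refCount > 0) {
    ScanBlack(n);
  } else {
    n->color = Color::White;
    for (Node* c : n->children) {
      Scan(c);
    }
  }
}

// Re-blacken a live subgraph, restoring the refcounts MarkGray subtracted.
static void ScanBlack(Node* n) {
  n->color = Color::Black;
  for (Node* c : n->children) {
    c->refCount++;
    if (c->color != Color::Black) {
      ScanBlack(c);
    }
  }
}

// Run the mark and scan phases from one suspected (purple) root; freeing
// the white nodes is left out of this sketch.
static void CollectFrom(Node* root) {
  MarkGray(root);
  Scan(root);
}
```

// After CollectFrom, a pure internal cycle ends up White, while any node
// reachable from an external reference ends up Black with its refcount
// restored.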
// Incremental cycle collection
//
// Beyond the simple state machine required to implement incremental
// collection, the CC needs to be able to compensate for things the browser
// is doing during the collection. There are two kinds of problems. For each
// of these, there are two cases to deal with: purple-buffered C++ objects
// and JS objects.
// The first problem is that an object in the CC's graph can become garbage.
// This is bad because the CC touches the objects in its graph at every
// stage of its operation.
//
// All cycle collected C++ objects that die during a cycle collection
// will end up actually getting deleted by the SnowWhiteKiller. Before
// the SWK deletes an object, it checks if an ICC is running, and if so,
// if the object is in the graph. If it is, the CC clears mPointer and
// mParticipant so it does not point to the raw object any more. Because
// objects could die any time the CC returns to the mutator, any time the CC
// accesses a PtrInfo it must perform a null check on mParticipant to
// ensure the object has not gone away.
//
// JS objects don't always run finalizers, so the CC can't remove them from
// the graph when they die. Fortunately, JS objects can only die during a GC,
// so if a GC is begun during an ICC, the browser synchronously finishes off
// the ICC, which clears the entire CC graph. If the GC and CC are scheduled
// properly, this should be rare.
//
// The second problem is that objects in the graph can be changed, say by
// being addrefed or released, or by having a field updated, after the object
// has been added to the graph. The problem is that ICC can miss a newly
// created reference to an object, and end up unlinking an object that is
// actually alive.
//
// The basic idea of the solution, from "An on-the-fly Reference Counting
// Garbage Collector for Java" by Levanoni and Petrank, is to notice if an
// object has had an additional reference to it created during the collection,
// and if so, don't collect it during the current collection. This avoids having
// to rerun the scan as in Bacon & Rajan 2001.
//
// For cycle collected C++ objects, we modify AddRef to place the object in
// the purple buffer, in addition to Release. Then, in the CC, we treat any
// objects in the purple buffer as being alive, after graph building has
// completed. Because they are in the purple buffer, they will be suspected
// in the next CC, so there's no danger of leaks. This is imprecise, because
// we will treat as live an object that has been Released but not AddRefed
// during graph building, but that's probably rare enough that the additional
// bookkeeping overhead is not worthwhile.
//
// For JS objects, the cycle collector is only looking at gray objects. If a
// gray object is touched during ICC, it will be made black by UnmarkGray.
// Thus, if a JS object has become black during the ICC, we treat it as live.
// Merged JS zones have to be handled specially: we scan all zone globals.
// If any are black, we treat the zone as being black.
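// The AddRef barrier described above can be sketched in miniature. Collector,
// Obj, AddRef and TreatAsLive are invented names for this illustration, not
// the actual Firefox machinery.

```cpp
#include <cassert>
#include <set>

struct Obj {
  int refCnt = 0;
};

struct Collector {
  bool collecting = false;
  std::set<Obj*> touchedDuringCollection;  // stand-in for the purple buffer
};

// AddRef with the barrier: while a collection is in progress, remember
// every object whose refcount was incremented.
static void AddRef(Collector& cc, Obj& o) {
  ++o.refCnt;
  if (cc.collecting) {
    cc.touchedDuringCollection.insert(&o);
  }
}

// After graph building, anything recorded by the barrier is treated as
// live for this collection, even if the graph says it is a garbage cycle.
static bool TreatAsLive(const Collector& cc, Obj& o) {
  return cc.touchedDuringCollection.count(&o) != 0;
}
```

// The recorded objects stay suspected, so anything wrongly kept alive this
// time is reconsidered by the next collection.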
// Safety
//
// An XPCOM object is either scan-safe or scan-unsafe, purple-safe or
// purple-unsafe.
//
// An nsISupports object is scan-safe if:
//
// - It can be QI'ed to |nsXPCOMCycleCollectionParticipant|, though
//   this operation loses ISupports identity (like nsIClassInfo).
// - Additionally, the operation |traverse| on the resulting
//   nsXPCOMCycleCollectionParticipant does not cause *any* refcount
//   adjustment to occur (no AddRef / Release calls).
//
// A non-nsISupports ("native") object is scan-safe by explicitly
// providing its nsCycleCollectionParticipant.
//
// An object is purple-safe if it satisfies the following properties:
//
// - The object is scan-safe.
//
// When we receive a pointer |ptr| via
// |nsCycleCollector::suspect(ptr)|, we assume it is purple-safe. We
// can check the scan-safety, but have no way to ensure the
// purple-safety; objects must obey, or else the entire system falls
// apart. Don't involve an object in this scheme if you can't
// guarantee its purple-safety. The easiest way to ensure that an
// object is purple-safe is to use nsCycleCollectingAutoRefCnt.
//
// When we have a scannable set of purple nodes ready, we begin
// our walks. During the walks, the nodes we |traverse| should only
// feed us more scan-safe nodes, and should not adjust the refcounts
// of those nodes.
//
// We do not |AddRef| or |Release| any objects during scanning. We
// rely on the purple-safety of the roots that call |suspect| to
// hold, such that we will clear the pointer from the purple buffer
// entry to the object before it is destroyed. The pointers that are
// merely scan-safe we hold only for the duration of scanning, and
// there should be no objects released from the scan-safe set during
// the scan.
//
// We *do* call |Root| and |Unroot| on every white object, on
// either side of the calls to |Unlink|. This keeps the set of white
// objects alive during the unlinking.
//
#if !defined(__MINGW32__)
#  ifdef WIN32
#    include <crtdbg.h>
#    include <errno.h>
#  endif
#endif
#include "base/process_util.h"
#include "mozilla/ArrayUtils.h"
#include "mozilla/AutoRestore.h"
#include "mozilla/CycleCollectedJSContext.h"
#include "mozilla/CycleCollectedJSRuntime.h"
#include "mozilla/CycleCollectorStats.h"
#include "mozilla/DebugOnly.h"
#include "mozilla/HashFunctions.h"
#include "mozilla/HashTable.h"
#include "mozilla/HoldDropJSObjects.h"
#include "mozilla/Maybe.h"

/* This must occur *after* base/process_util.h to avoid typedefs conflicts. */
#include <stdint.h>
#include <stdio.h>
// Enable assertions that are useful for diagnosing errors in graph
// construction.
// #define DEBUG_CC_GRAPH
#define DEFAULT_SHUTDOWN_COLLECTIONS 5
// One to do the freeing, then another to detect there is no more work to do.
#define NORMAL_SHUTDOWN_COLLECTIONS 2
// Cycle collector environment variables
//
// MOZ_CC_ALL_TRACES: If set to "all", any cycle collector logging done will be
// WantAllTraces, which disables various cycle collector optimizations to give a
// fuller picture of the heap. If set to "shutdown", only shutdown logging will
// be WantAllTraces. The default is none.
//
// MOZ_CC_RUN_DURING_SHUTDOWN: In non-NS_FREE_PERMANENT_DATA builds, if this is
// set, run cycle collections at shutdown.
//
// MOZ_CC_LOG_ALL: If defined, always log cycle collector heaps.
//
// MOZ_CC_LOG_SHUTDOWN: If defined, log cycle collector heaps at shutdown.
//
// MOZ_CC_LOG_SHUTDOWN_SKIP: If set to a non-negative integer value n, then skip
// logging for the first n shutdown CCs. This implies MOZ_CC_LOG_SHUTDOWN. The
// first log or two are much larger than the rest, so it can be useful to reduce
// the total size of logs if you know already that the initial logs aren't
// interesting.
//
// MOZ_CC_LOG_WINDOW_ONLY: If this is set, only log CCs where at least one DOM
// window is still alive, as determined by GetCurrentInnerOrOuterWindowCount().
// The purpose of this is to avoid useless logs when investigating intermittent
// window leaks. Note that this count is only updated in DEBUG builds, and will
// only be read on the main thread.
//
// MOZ_CC_LOG_THREAD: If set to "main", only automatically log main thread
// CCs. If set to "worker", only automatically log worker CCs. If set to "all",
// log either. The default value is "all". This must be used with either
// MOZ_CC_LOG_ALL or MOZ_CC_LOG_SHUTDOWN for it to do anything.
//
// MOZ_CC_LOG_PROCESS: If set to "main", only automatically log main process
// CCs. If set to "content", only automatically log tab CCs. If set to "all",
// log everything. The default value is "all". This must be used with either
// MOZ_CC_LOG_ALL or MOZ_CC_LOG_SHUTDOWN for it to do anything.
//
// MOZ_CC_LOG_DIRECTORY: The directory in which logs are placed (such as
// logs from MOZ_CC_LOG_ALL and MOZ_CC_LOG_SHUTDOWN, or other uses
// of nsICycleCollectorListener)
//
// MOZ_CC_DISABLE_GC_LOG: If defined, don't make a GC log whenever we make a
// cycle collector log. This can be useful for leaks that go away when shutdown
// gets slower, when the JS heap is not involved in the leak. The default is to
// make the GC log.
// Various parameters of this collector can be tuned using environment
// variables.
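// As an illustration, a tri-state variable like MOZ_CC_LOG_THREAD can be
// interpreted along these lines. LogThreadKind and ParseLogThread are
// invented names for this sketch; the real parsing lives elsewhere and
// reads the variable via PR_GetEnv.

```cpp
#include <cassert>
#include <cstring>

enum class LogThreadKind { MainOnly, WorkerOnly, All };

// Map the raw environment string to the documented tri-state; an unset or
// unrecognized value falls back to the documented default, "all".
static LogThreadKind ParseLogThread(const char* value) {
  if (value && !strcmp(value, "main")) {
    return LogThreadKind::MainOnly;
  }
  if (value && !strcmp(value, "worker")) {
    return LogThreadKind::WorkerOnly;
  }
  return LogThreadKind::All;
}
```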
////////////////////////////////////////////////////////////////////////
// Base types
////////////////////////////////////////////////////////////////////////
class PtrInfo;
class EdgePool {
 public:
  // EdgePool allocates arrays of void*, primarily to hold PtrInfo*.
  // However, at the end of a block, the last two pointers are a null
  // and then a void** pointing to the next block. This allows
  // EdgePool::Iterators to be a single word but still capable of crossing
  // block boundaries.

  // Store the null sentinel so that we can have valid iterators
  // before adding any edges and without adding any blocks.
  PtrInfoOrBlock mSentinelAndBlocks[2];
  Iterator& operator++() {
    if (!mPointer->ptrInfo) {
      // Null pointer is a sentinel for link to the next block.
      mPointer = (mPointer + 1)->block->mPointers;
    }
    ++mPointer;
    return *this;
  }

  PtrInfo* operator*() const {
    if (!mPointer->ptrInfo) {
      // Null pointer is a sentinel for link to the next block.
      return (mPointer + 1)->block->mPointers->ptrInfo;
    }
    return mPointer->ptrInfo;
  }

  bool operator==(const Iterator& aOther) const {
    return mPointer == aOther.mPointer;
  }

  bool operator!=(const Iterator& aOther) const {
    return mPointer != aOther.mPointer;
  }
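// The null-sentinel trick used by the iterator above can be reduced to a
// stripped-down sketch. Cell, Block, BlockIterator and CollectN are invented
// names; the real types are PtrInfoOrBlock, EdgePool::Block and
// EdgePool::Iterator.

```cpp
#include <cassert>
#include <vector>

struct Block;

// Each slot is either a payload pointer or, in the slot after the null
// sentinel, a pointer to the next block.
union Cell {
  int* value;  // stand-in for PtrInfo*
  Block* block;
};

struct Block {
  Cell cells[4];  // last two slots: null sentinel + next-block pointer
};

// A single-word iterator that hops across block boundaries through the
// sentinel, mirroring EdgePool::Iterator's operator++ and operator*.
struct BlockIterator {
  Cell* mPointer;

  BlockIterator& operator++() {
    if (!mPointer->value) {
      // Null is the sentinel: jump to the next block's first cell.
      mPointer = (mPointer + 1)->block->cells;
    }
    ++mPointer;
    return *this;
  }

  int* operator*() const {
    if (!mPointer->value) {
      // Sitting on the sentinel means "first element of the next block".
      return (mPointer + 1)->block->cells->value;
    }
    return mPointer->value;
  }
};

// Walk n elements, dereferencing across blocks.
static std::vector<int> CollectN(BlockIterator it, int n) {
  std::vector<int> out;
  for (int i = 0; i < n; ++i) {
    out.push_back(**it);
    ++it;
  }
  return out;
}
```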
// This structure should be kept as small as possible; we may expect
// hundreds of thousands of them to be allocated and touched
// repeatedly during each cycle collection.
class PtrInfo final {
 public:
  // mParticipant knows a more concrete type.
  void* mPointer;
nsCycleCollectionParticipant* mParticipant;
uint32_t mColor : 2;
uint32_t mInternalRefs : 30;
uint32_t mRefCount;
  PtrInfo(void* aPointer, nsCycleCollectionParticipant* aParticipant)
      : mPointer(aPointer),
        mParticipant(aParticipant),
        mColor(grey),
        mInternalRefs(0),
        mRefCount(UINT32_MAX) {
    // We initialize mRefCount to a large non-zero value so
    // that it doesn't look like a JS object to the cycle collector
    // in the case where the object dies before being traversed.
    MOZ_ASSERT(!IsGrayJS() && !IsBlackJS());
  }
// Allow NodePool::NodeBlock's constructor to compile.
PtrInfo()
: mPointer{nullptr},
mParticipant{nullptr},
mColor{0},
mInternalRefs{0},
mRefCount{0} {
MOZ_ASSERT_UNREACHABLE("should never be called");
}
  // This PtrInfo must be part of a NodePool.
  EdgePool::Iterator LastChild() const {
    CC_GRAPH_ASSERT((this + 1)->mFirstChild.Initialized());
    return (this + 1)->mFirstChild;
  }
  // This PtrInfo must be part of a NodePool.
  void SetLastChild(EdgePool::Iterator aLastChild) {
CC_GRAPH_ASSERT(aLastChild.Initialized());
(this + 1)->mFirstChild = aLastChild;
}
void PtrInfo::AnnotatedReleaseAssert(bool aCondition, const char* aMessage) {
  if (aCondition) {
    return;
  }

  const char* piName = "Unknown";
  if (mParticipant) {
    piName = mParticipant->ClassName();
  }
  nsPrintfCString msg("%s, for class %s", aMessage, piName);
  NS_WARNING(msg.get());
  CrashReporter::RecordAnnotationNSCString(
      CrashReporter::Annotation::CycleCollector, msg);
  MOZ_CRASH();
}
/**
 * A structure designed to be used like a linked list of PtrInfo, except
 * it allocates many PtrInfos at a time.
 */
class NodePool {
 private:
  // The -2 allows us to use |NodeBlockSize + 1| for |mEntries|, and fit
  // |mNext|, all without causing slop.
  enum { NodeBlockSize = 4 * 1024 - 2 };
  struct NodeBlock {
    // We create and destroy NodeBlock using moz_xmalloc/free rather than new
    // and delete to avoid calling its constructor and destructor.
    NodeBlock() : mNext{nullptr} {
      MOZ_ASSERT_UNREACHABLE("should never be called");

      // Ensure NodeBlock is the right size (see the comment on NodeBlockSize
      // above).
      static_assert(
          sizeof(NodeBlock) == 81904 ||  // 32-bit; equals 19.996 x 4 KiB pages
              sizeof(NodeBlock) ==
                  131048,  // 64-bit; equals 31.994 x 4 KiB pages
          "ill-sized NodeBlock");
    }

    ~NodeBlock() { MOZ_ASSERT_UNREACHABLE("should never be called"); }

    NodeBlock* mNext;
    PtrInfo mEntries[NodeBlockSize + 1];  // +1 to store last child of last node
  };
   private:
    // mFirstBlock is a reference to allow an Enumerator to be constructed
    // for an empty graph.
    NodeBlock*& mFirstBlock;
    NodeBlock* mCurBlock;

    // mNext is the next value we want to return, unless mNext == mBlockEnd
    // NB: mLast is a reference to allow enumerating while building!
    PtrInfo* mNext;
    PtrInfo* mBlockEnd;
    PtrInfo*& mLast;
  };
  size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const {
    // We don't measure the things pointed to by mEntries[] because those
    // pointers are non-owning.
    size_t n = 0;
    NodeBlock* b = mBlocks;
    while (b) {
      n += aMallocSizeOf(b);
      b = b->mNext;
    }
    return n;
  }
private:
NodeBlock* mBlocks;
PtrInfo* mLast;
};
struct PtrToNodeHashPolicy {
  using Key = PtrInfo*;
  using Lookup = void*;
struct WeakMapping {
  // map and key will be null if the corresponding objects are GC marked
PtrInfo* mMap;
PtrInfo* mKey;
PtrInfo* mKeyDelegate;
PtrInfo* mVal;
};
  // Try to match the size of a jemalloc bucket, to minimize slop bytes.
  // - On 32-bit platforms sizeof(nsPurpleBufferEntry) is 12, so mEntries'
  //   Segment is 16,372 bytes.
  // - On 64-bit platforms sizeof(nsPurpleBufferEntry) is 24, so mEntries'
  //   Segment is 32,760 bytes.
  static const uint32_t kEntriesPerSegment = 1365;
  static const size_t kSegmentSize =
      sizeof(nsPurpleBufferEntry) * kEntriesPerSegment;
  typedef SegmentedVector<nsPurpleBufferEntry, kSegmentSize,
                          InfallibleAllocPolicy>
      PurpleBufferVector;
  PurpleBufferVector mEntries;
  template <class PurpleVisitor>
  void VisitEntries(PurpleVisitor& aVisitor) {
    uint32_t oldLength = mEntries.Length();
    uint32_t keptLength = 0;
    auto revIter = mEntries.IterFromLast();
    auto iter = mEntries.Iter();
    // After iteration this points to the first empty entry.
    auto firstEmptyIter = mEntries.Iter();
    auto iterFromLastEntry = mEntries.IterFromLast();
    for (; !iter.Done(); iter.Next()) {
      nsPurpleBufferEntry& e = iter.Get();
      if (e.mObject) {
        if (!aVisitor.Visit(*this, &e)) {
          return;
        }
      }

      // Visit call above may have cleared the entry, or the entry was empty
      // already.
      if (!e.mObject) {
        // Try to find a non-empty entry from the end of the vector.
        for (; !revIter.Done(); revIter.Prev()) {
          nsPurpleBufferEntry& otherEntry = revIter.Get();
          if (&e == &otherEntry) {
            break;
          }
          if (otherEntry.mObject) {
            if (!aVisitor.Visit(*this, &otherEntry)) {
              return;
            }
            // Visit may have cleared otherEntry.
            if (otherEntry.mObject) {
              e.Swap(otherEntry);
              revIter.Prev();  // We've swapped this now empty entry.
              break;
            }
          }
        }
      }

      // Entry is non-empty even after the Visit call, ensure it is kept
      // in mEntries.
      if (e.mObject) {
        firstEmptyIter.Next();
        ++keptLength;
      }

      if (&e == &revIter.Get()) {
        break;
      }
    }

    // There were some empty entries.
    if (oldLength != keptLength) {
      // While visiting entries, some new ones were possibly added. This can
      // happen during CanSkip. Move all such new entries to be after other
      // entries. Note, we don't call Visit on newly added entries!
      if (&iterFromLastEntry.Get() != &mEntries.GetLast()) {
        iterFromLastEntry.Next();  // Now pointing to the first added entry.
        auto& iterForNewEntries = iterFromLastEntry;
        while (!iterForNewEntries.Done()) {
          MOZ_ASSERT(!firstEmptyIter.Done());
          MOZ_ASSERT(!firstEmptyIter.Get().mObject);
          firstEmptyIter.Get().Swap(iterForNewEntries.Get());
          firstEmptyIter.Next();
          iterForNewEntries.Next();
        }
      }
  // RemoveSkippable removes entries from the purple buffer synchronously
  // (1) if !aAsyncSnowWhiteFreeing and nsPurpleBufferEntry::mRefCnt is 0 or
  // (2) if nsXPCOMCycleCollectionParticipant::CanSkip() for the obj or
  // (3) if nsPurpleBufferEntry::mRefCnt->IsPurple() is false.
  // (4) If aRemoveChildlessNodes is true, then any nodes in the purple buffer
  //     that will have no children in the cycle collector graph will also be
  //     removed. CanSkip() may be run on these children.
  void RemoveSkippable(nsCycleCollector* aCollector, SliceBudget& aBudget,
                       bool aRemoveChildlessNodes, bool aAsyncSnowWhiteFreeing,
                       CC_ForgetSkippableCallback aCb);
////////////////////////////////////////////////////////////////////////
// Top level structure for the cycle collector.
////////////////////////////////////////////////////////////////////////
class JSPurpleBuffer;
class nsCycleCollector : public nsIMemoryReporter {
 public:
NS_DECL_ISUPPORTS
NS_DECL_NSIMEMORYREPORTER
 private:
  bool mActivelyCollecting;
  bool mFreeingSnowWhite;
  // mScanInProgress should be false when we're collecting white objects.
  bool mScanInProgress;
CycleCollectorResults mResults;
TimeStamp mCollectionStart;
/**
 * GraphWalker is templatized over a Visitor class that must provide
 * the following two methods:
 *
 * bool ShouldVisitNode(PtrInfo const *pi);
 * void VisitNode(PtrInfo *pi);
 */
template <class Visitor>
class GraphWalker {
 private:
  Visitor mVisitor;

  void DoWalk(nsDeque<PtrInfo>& aQueue);

  void CheckedPush(nsDeque<PtrInfo>& aQueue, PtrInfo* aPi) {
    if (!aPi) {
      MOZ_CRASH();
    }
    if (!aQueue.Push(aPi, fallible)) {
      mVisitor.Failed();
    }
  }

 public:
  void Walk(PtrInfo* aPi);
  void WalkFromRoots(CCGraph& aGraph);

  // copy-constructing the visitor should be cheap, and less
  // indirection than using a reference
  explicit GraphWalker(const Visitor aVisitor) : mVisitor(aVisitor) {}
};
////////////////////////////////////////////////////////////////////////
// The static collector struct
////////////////////////////////////////////////////////////////////////
namespace geckoprofiler::markers {
struct CCIntervalMarker : public mozilla::BaseMarkerType<CCIntervalMarker> {
  static constexpr const char* Name = "CC";
  static constexpr const char* Description =
      "Summary data for the core part of a cycle collection, possibly "
      "encompassing a set of incremental slices. The thread is not "
      "blocked for the entire major CC interval, only for the individual "
      "slices.";
static inline void ToParticipant(nsISupports* aPtr,
                                 nsXPCOMCycleCollectionParticipant** aCp) {
  // We use QI to move from an nsISupports to an
  // nsXPCOMCycleCollectionParticipant, which is a per-class singleton helper
  // object that implements traversal and unlinking logic for the nsISupports
  // in question.
*aCp = nullptr;
CallQueryInterface(aPtr, aCp);
}
static void ToParticipant(void* aParti, nsCycleCollectionParticipant** aCp) {
  // If the participant is null, this is an nsISupports participant,
  // so we must QI to get the real participant.
template <class Visitor>
MOZ_NEVER_INLINE void GraphWalker<Visitor>::WalkFromRoots(CCGraph& aGraph) {
nsDeque<PtrInfo> queue;
  NodePool::Enumerator etor(aGraph.mNodes);
  for (uint32_t i = 0; i < aGraph.mRootCount; ++i) {
CheckedPush(queue, etor.GetNext());
}
DoWalk(queue);
}
template <class Visitor>
MOZ_NEVER_INLINE void GraphWalker<Visitor>::DoWalk(nsDeque<PtrInfo>& aQueue) {
  // Use aQueue to match the breadth-first traversal used when we
  // built the graph, for hopefully-better locality.
  while (aQueue.GetSize() > 0) {
PtrInfo* pi = aQueue.PopFront();
/**
 * Create a new file named something like aPrefix.$PID.$IDENTIFIER.log in
 * $MOZ_CC_LOG_DIRECTORY or in the system's temp directory. No existing
 * file will be overwritten; if aPrefix.$PID.$IDENTIFIER.log exists, we'll
 * try a file named something like aPrefix.$PID.$IDENTIFIER-1.log, and so
 * on.
 */
already_AddRefed<nsIFile> CreateTempFile(const char* aPrefix) {
nsPrintfCString filename("%s.%d%s%s.log", aPrefix, mProcessIdentifier,
mFilenameIdentifier.IsEmpty() ? "" : ".",
NS_ConvertUTF16toUTF8(mFilenameIdentifier).get());
  // Get the log directory either from $MOZ_CC_LOG_DIRECTORY or from
  // the fallback directories in OpenTempFile. We don't use an nsCOMPtr
  // here because OpenTempFile uses an in/out param and getter_AddRefs
  // wouldn't work.
  nsIFile* logFile = nullptr;
  if (char* env = PR_GetEnv("MOZ_CC_LOG_DIRECTORY")) {
Unused << NS_WARN_IF(
NS_FAILED(NS_NewNativeLocalFile(nsCString(env), &logFile)));
}
  // On Android or B2G, this function will open a file named
  // aFilename under a memory-reporting-specific folder
  // (/data/local/tmp/memory-reports). Otherwise, it will open a
  // file named aFilename under "NS_OS_TEMP_DIR".
  nsresult rv =
      nsDumpUtils::OpenTempFile(filename, &logFile, "memory-reports"_ns);
  if (NS_FAILED(rv)) {
    NS_IF_RELEASE(logFile);
    return nullptr;
  }
return dont_AddRef(logFile);
}
nsresult OpenLog(FileInfo* aLog) {
  // Initially create the log in a file starting with "incomplete-".
  // We'll move the file and strip off the "incomplete-" once the dump
  // completes. (We do this because we don't want scripts which poll
  // the filesystem looking for GC/CC dumps to grab a file before we're
  // finished writing to it.)
nsAutoCString incomplete;
incomplete += "incomplete-";
incomplete += aLog->mPrefix;
MOZ_ASSERT(!aLog->mFile);
  aLog->mFile = CreateTempFile(incomplete.get());
  if (NS_WARN_IF(!aLog->mFile)) {
    return NS_ERROR_UNEXPECTED;
}
// Strip off "incomplete-".
  nsCOMPtr<nsIFile> logFileFinalDestination = CreateTempFile(aLog->mPrefix);
  if (NS_WARN_IF(!logFileFinalDestination)) {
    return NS_ERROR_UNEXPECTED;
}
nsAutoString logFileFinalDestinationName;
  logFileFinalDestination->GetLeafName(logFileFinalDestinationName);
  if (NS_WARN_IF(logFileFinalDestinationName.IsEmpty())) {
    return NS_ERROR_UNEXPECTED;
}
  if (NS_SUCCEEDED(aLog->mFile->MoveTo(/* directory */ nullptr,
                                       logFileFinalDestinationName))) {
    // Save the file path.
aLog->mFile = logFileFinalDestination;
}
// Log to the error console.
nsAutoString logPath;
aLog->mFile->GetPath(logPath);
nsAutoString msg =
aCollectorKind + u" Collector log dumped to "_ns + logPath;
  // We don't want any JS to run between ScanRoots and CollectWhite calls,
  // and since ScanRoots calls this method, better to log the message
  // asynchronously.
RefPtr<LogStringMessageAsync> log = new LogStringMessageAsync(msg);
  NS_DispatchToCurrentThread(log);
  return NS_OK;
}