深入理解 Go 语言:GC 原理及源码分析
Go 中的 runtime 类似 Java 的虚拟机,它负责管理内存分配、垃圾回收、栈处理、goroutine、channel、切片(slice)、map 和反射(reflection)等。Go 的可执行文件比相应的源代码文件大很多,就是因为 Go 的 runtime 被嵌入到了每一个可执行文件当中。
常见的几种 GC 算法:
引用计数:对每个对象维护一个引用计数,当引用该对象的对象被销毁时,引用计数减 1,当引用计数为 0 时回收该对象。
优点:对象可以很快地被回收,不会出现内存耗尽或达到某个阈值时才回收。
缺点:不能很好地处理循环引用,而且实时维护引用计数也有一定的代价(循环引用的问题见下面的示意代码)。
代表语言:Python、PHP、Swift
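下面用一小段 Go 代码示意引用计数难以处理的循环引用场景(Go 本身并不使用引用计数,这里只是借 Go 的语法说明问题,person 等名字是为举例虚构的):

package main

type person struct {
	name   string
	friend *person
}

func main() {
	a := &person{name: "a"}
	b := &person{name: "b"}
	// a 和 b 互相引用, 构成循环引用
	a.friend = b
	b.friend = a
	// 此后即使外部不再持有 a、b, 在纯引用计数的实现里二者的计数仍为 1,
	// 这块内存永远无法回收; 而标记-清除从根对象出发扫不到它们, 可以正常回收
	a, b = nil, nil
	_, _ = a, b
}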
标记-清除:从根变量开始遍历所有引用的对象,引用的对象标记为"被引用",没有被标记的进行回收。
优点:解决了引用计数的缺点。
缺点:需要STW,即要暂时停掉程序运行。
代表语言:Golang(其采用三色标记法)
分代收集:按照对象生命周期长短划分不同的代空间,生命周期长的放入老年代,生命周期短的放入新生代,不同代有不同的回收算法和回收频率。
优点:回收性能好
缺点:算法复杂
代表语言:Java
每种算法都不是完美的,都是折中的产物。
GC 流程图:
Stack scan:收集根对象(全局变量和 G stack),并开启写屏障。其中扫描全局变量、开启写屏障需要 STW,扫描 G stack 只需要停止对应的 G,耗时较短。
Mark: 扫描所有根对象, 以及根对象可以到达的所有对象, 标记它们不被回收
Mark Termination: 完成标记工作, 重新扫描部分根对象(要求STW)
Sweep: 按标记结果清扫span
从上图中我们可以看到整个GC流程会进行两次STW(Stop The World), 第一次是Mark阶段的开始, 第二次是Mark Termination阶段.
第一次STW会准备根对象的扫描, 启动写屏障(Write Barrier)和辅助GC(mutator assist).
第二次STW会重新扫描部分根对象, 禁用写屏障(Write Barrier)和辅助GC(mutator assist).
需要注意的是, 不是所有根对象的扫描都需要STW, 例如扫描栈上的对象只需要停止拥有该栈的G.
三色标记
有黑、灰、白三个集合,每种颜色的含义:
白色:对象未被标记,gcmarkBits对应的位为0
灰色:对象已被标记,但这个对象包含的子对象未标记,gcmarkBits对应的位为1
黑色:对象已被标记,且这个对象包含的子对象也已标记,gcmarkBits对应的位为1
灰色和黑色的gcmarkBits都是1,如何区分二者呢?
标记任务有标记队列,在标记队列中的是灰色,不在标记队列中的是黑色。标记过程见下图:
上图中根对象A是栈上分配的对象,H是堆中分配的全局变量,根对象A、H内部分别引用了其他对象,而这些对象内部可能还引用了其他对象,各个对象间的关系如上图所示。
- 初始状态下所有对象都是白色的。
- 接着开始扫描根对象,A、H是根对象所以被扫描到,A、H变为灰色对象。
- 接下来开始扫描灰色对象,通过A到达B,B被标注灰色,A扫描结束后被标注黑色。同理J、K都被标注灰色,H被标注黑色。
- 继续扫描灰色对象,通过B到达C,C被标注灰色,B被标注黑色;因为J、K没有引用其他对象,J、K被标注黑色,扫描结束。
- 最终,黑色的对象会被保留下来,白色对象D,E,F会被回收掉。
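为了更直观地理解上述标记流程,下面给出一段脱离 runtime 的最小示意代码:用显式的 color 字段代替 gcmarkBits,用 slice 模拟灰色对象所在的标记队列。其中 node、shade、mark 等名字都是为了说明而虚构的,并非 runtime 的真实实现:

package main

import "fmt"

type color int

const (
	white color = iota // 未标记, 对应 gcmarkBits 为 0
	grey               // 已标记, 子对象还未扫描完(在标记队列中)
	black              // 已标记, 子对象也已扫描(不在标记队列中)
)

type node struct {
	name     string
	c        color
	children []*node // 该对象引用的其他对象
}

// shade 把白色对象变灰并放入标记队列, 作用类似 runtime 中的 greyobject
func shade(obj *node, work *[]*node) {
	if obj != nil && obj.c == white {
		obj.c = grey
		*work = append(*work, obj)
	}
}

// mark 从根对象出发完成三色标记, 对应 Mark 阶段消费标记队列的过程
func mark(roots []*node) {
	var work []*node
	for _, r := range roots {
		shade(r, &work) // 1. 根对象先变灰
	}
	for len(work) > 0 {
		obj := work[0] // 2. 取出一个灰色对象
		work = work[1:]
		for _, child := range obj.children {
			shade(child, &work) // 3. 它引用到的对象变灰
		}
		obj.c = black // 4. 子对象全部入队后, 自己变黑
	}
}

func main() {
	// 构造一组简化的引用关系: A -> B -> C, H 是另一个根对象, D 没有被任何对象引用
	d := &node{name: "D"}
	c := &node{name: "C"}
	b := &node{name: "B", children: []*node{c}}
	a := &node{name: "A", children: []*node{b}}
	h := &node{name: "H"}
	mark([]*node{a, h})
	for _, n := range []*node{a, b, c, h, d} {
		fmt.Printf("%s: %d\n", n.name, n.c) // A、B、C、H 为 2(black), D 仍为 0(white), 会在 Sweep 阶段被回收
	}
}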
屏障
上图中,假如B对象变黑之后,又让B新指向一个对象G,由于B已经扫描过、不会再被重新扫描,如果也没有其他灰色对象引用G,G就会一直保持白色,最终被误回收。怎么解决这个问题呢?
最简单的方法就是STW(stop the world),也就是停止所有的协程。这个方法比较暴力,会引起程序卡顿,并不友好。更好的做法是让GC回收器满足下面两种情况之一,即可保证对象不丢失,由此引出强、弱三色不变式:
强三色不变式:黑色对象不能直接引用白色对象。
弱三色不变式:被黑色对象引用的白色对象都处于灰色对象的保护之下(可以从某个灰色对象到达)。
如何实现这两个不变式呢?这就要靠屏障机制。
Go 1.5 采用的是插入写屏障,到了 Go 1.8 改用混合写屏障(结合了插入屏障和删除屏障两种思路)。黑色对象的内存槽有两种位置:栈和堆。栈空间的特点是容量小,但要求响应速度快,函数调用、返回非常频繁,所以"插入屏障"机制不用于栈空间的对象操作,而仅用于堆空间对象的操作。
插入屏障:插入屏障只对堆上的指针写入起作用。栈空间在标记开始时先扫描一遍,标记结束时启动STW再重新扫描一遍,扫描完成后停止STW。如果对象在插入屏障期间新分配,会自动被标记成灰色,不用再重新扫描。
删除屏障:删除屏障对栈和堆都适用。在删除屏障机制下,删除一个引用时,被删除引用指向的对象会被置成灰色,后续会继续扫描该灰色对象的子对象。该方法的缺点是回收精度不够高。
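沿用上面示意代码中的 node 和 shade,可以用两个很短的函数示意这两种屏障分别在什么时机把对象标灰(同样只是示意,真实的写屏障由编译器在指针写入处插入、由 runtime 实现):

// 插入(Dijkstra)写屏障: 把新引用写入 slot 之前, 先把新的下游对象标灰,
// 黑色对象因此不会直接指向白色对象, 满足强三色不变式
func writeWithInsertBarrier(slot **node, ptr *node, work *[]*node) {
	shade(ptr, work)
	*slot = ptr
}

// 删除(Yuasa)写屏障: 覆盖 slot 原有引用之前, 先把将被删除的下游对象标灰,
// 让它处于灰色保护之下, 满足弱三色不变式
func writeWithDeleteBarrier(slot **node, ptr *node, work *[]*node) {
	shade(*slot, work)
	*slot = ptr
}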
混合屏障:
插入写屏障和删除写屏障的短板:
插入写屏障:结束时需要STW来重新扫描栈,标记栈上引用的白色对象的存活;
删除写屏障:回收精度低,GC开始时STW扫描堆栈来记录初始快照,这个过程会保护开始时刻的所有存活对象。
混合写屏障规则
具体操作:
1、GC开始将栈上的对象全部扫描并标记为黑色(之后不再进行第二次重复扫描,无需STW)。
2、GC期间,任何在栈上创建的新对象,均为黑色。
3、被删除的对象标记为灰色。
4、被添加的对象标记为灰色。
以上规则满足的是变形的弱三色不变式。
伪代码如下:
添加下游对象(当前下游对象slot, 新下游对象ptr) {
    // 1. 只要当前下游对象被移走, 就标记灰色
    标记灰色(当前下游对象slot)
    // 2. 新下游对象标记灰色
    标记灰色(新下游对象ptr)
    // 3. 完成赋值
    当前下游对象slot = 新下游对象ptr
}
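同样沿用前面示意代码中的 node 和 shade,这段伪代码对应的 Go 形式大致如下(仅为示意,并非 runtime 的真实实现):

// 混合写屏障示意: 赋值前同时把旧的下游对象和新的下游对象标灰
func writeWithHybridBarrier(slot **node, ptr *node, work *[]*node) {
	shade(*slot, work) // 1. 被移走的旧下游对象标灰
	shade(ptr, work)   // 2. 新下游对象标灰
	*slot = ptr        // 3. 完成赋值
}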
上面说到整个GC有两次STW,采用混合屏障后可以大幅压缩第二次STW的时间。
GC pacer
触发 GC 的时机:
阈值触发 gcTriggerHeap:默认在堆内存相比上次 GC 后的存活堆大约扩大一倍时,启动 GC。
定时触发 gcTriggerTime:默认 2min 触发一次 GC,见 src/runtime/proc.go 中的 forcegcperiod。
手动触发 gcTriggerCycle:调用 runtime.GC()。
当然,这个阈值是随着内存使用量的增长动态变化的。假如前一次 GC 之后的内存使用量 Hm(n-1) 为 1GB,默认 GOGC=100,那么下一次会在堆接近 Hg(2GB)的位置发起新一轮 GC。如下图:
Ht的时候开始GC,Ha的时候结束GC,Ha非常接近Hg。
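下面的小例子示意如何在程序里观察和调整这个阈值:debug.SetGCPercent 对应 GOGC,MemStats 中的 NextGC 字段大致对应图中的 Hg,runtime.GC() 则对应上面的手动触发 gcTriggerCycle:

package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

func main() {
	debug.SetGCPercent(100) // 等价于 GOGC=100: 下次 GC 的目标堆大小约为上次标记存活堆的 2 倍

	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("HeapAlloc=%d NextGC=%d NumGC=%d\n", m.HeapAlloc, m.NextGC, m.NumGC)

	runtime.GC() // 手动触发一轮 GC
}

实际排查 GC 行为时,也可以用 GODEBUG=gctrace=1 运行程序,观察每轮 GC 的触发时机和各阶段耗时。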
(1)如何保证在Ht开始gc时所有的span都被清扫完?
除了有一个后台清扫协程外,用户分配内存时也需要辅助清扫,来保证在开启下一轮 GC 时 span 都已清扫完毕。假设有 k page 的 span 需要 sweep,距离下一次 GC 还有 Ht-Hm(n-1) 的内存可供分配,那么平均每申请 1 byte 内存需要清扫 k/(Ht-Hm(n-1)) page 的 span(k 值会根据 sweep 进度更改)。
辅助清扫只在申请新 span 时才会检查,辅助清扫的触发可以看 cacheSpan 函数,触发时 G 会帮助回收"工作量"页的对象,工作量的计算公式是:
spanBytes * sweepPagesPerByte
意思是分配的大小乘以系数sweepPagesPerByte, sweepPagesPerByte的计算在函数gcSetTriggerRatio中, 公式是:
// 当前的Heap大小
heapLiveBasis := atomic.Load64(&memstats.heap_live)
// 距离触发GC的Heap大小 = 下次触发GC的Heap大小 - 当前的Heap大小
heapDistance := int64(trigger) - int64(heapLiveBasis)
heapDistance -= 1024 * 1024
if heapDistance < _PageSize {
heapDistance = _PageSize
}
// 已清扫的页数
pagesSwept := atomic.Load64(&mheap_.pagesSwept)
// 未清扫的页数 = 使用中的页数 - 已清扫的页数
sweepDistancePages := int64(mheap_.pagesInUse) - int64(pagesSwept)
if sweepDistancePages <= 0 {
mheap_.sweepPagesPerByte = 0
} else {
// 每分配1 byte(的span)需要辅助清扫的页数 = 未清扫的页数 / 距离触发GC的Heap大小
mheap_.sweepPagesPerByte = float64(sweepDistancePages) / float64(heapDistance)
}
(2)如何保证在Ha时gc都被mark完?
GC 在 Ht 开始,在到达 Hg 时要尽量标记完所有的对象,除了后台的标记协程外,还需要在分配内存时进行辅助 mark。从 Ht 到 Hg 之间的内存可供分配,这个时候还有 scanWorkExpected 个对象需要 scan,那么平均每分配 1 byte 内存需要辅助 mark 的量为 scanWorkExpected/(Hg-Ht) 个对象,scanWorkExpected 会根据 mark 进度更改。
辅助标记的触发可以查看上面的mallocgc函数, 触发时G会帮助扫描"工作量"个对象, 工作量的计算公式是:
debtBytes * assistWorkPerByte
意思是分配的大小乘以系数assistWorkPerByte, assistWorkPerByte的计算在函数revise中, 公式是:
// 等待扫描的对象数量 = 未扫描的对象数量 - 已扫描的对象数量
scanWorkExpected := int64(memstats.heap_scan) - c.scanWork
if scanWorkExpected < 1000 {
scanWorkExpected = 1000
}
// 距离触发GC的Heap大小 = 期待触发GC的Heap大小 - 当前的Heap大小
// 注意next_gc的计算跟gc_trigger不一样, next_gc等于heap_marked * (1 + gcpercent / 100)
heapDistance := int64(memstats.next_gc) - int64(atomic.Load64(&memstats.heap_live))
if heapDistance <= 0 {
heapDistance = 1
}
// 每分配1 byte需要辅助扫描的对象数量 = 等待扫描的对象数量 / 距离触发GC的Heap大小
c.assistWorkPerByte = float64(scanWorkExpected) / float64(heapDistance)
c.assistBytesPerWork = float64(heapDistance) / float64(scanWorkExpected)
根对象
在GC的标记阶段首先需要标记的就是"根对象", 从根对象开始可到达的所有对象都会被认为是存活的.
根对象包含了全局变量, 各个G的栈上的变量等, GC会先扫描根对象然后再扫描根对象可到达的所有对象.
Fixed Roots: 特殊的扫描工作 :
fixedRootFinalizers: 扫描析构器队列
fixedRootFreeGStacks: 释放已中止的G的栈
Flush Cache Roots: 释放mcache中的所有span, 要求STW
Data Roots: 扫描可读写的全局变量(data 段)
BSS Roots: 扫描未初始化(零值初始化)的全局变量(bss 段)
Span Roots: 扫描各个span中特殊对象(析构器列表)
Stack Roots: 扫描各个G的栈
标记阶段(Mark)会做其中的"Fixed Roots", "Data Roots", "BSS Roots", "Span Roots", "Stack Roots".
完成标记阶段(Mark Termination)会做其中的"Fixed Roots", "Flush Cache Roots".
对象扫描
当拿到一个对象的指针 p 时,如何找到该对象对应的 span 和 heap bitmap?以下分析基于 go1.10。
我们在内存分配部分介绍过,bitmap 区域用 2 bit 描述一个字(word),一个字节就可以描述 4 个字。2 bit 中一个表示该位置是否还需要继续 scan,另一个表示该位置是否是指针。根据地址 p 按固定偏移就可以计算出 p 对应的 heapBits:
func heapBitsForAddr(addr uintptr) heapBits {
// 2 bits per work, 4 pairs per byte, and a mask is hard coded.
off := (addr - mheap_.arena_start) / sys.PtrSize
return heapBits{(*uint8)(unsafe.Pointer(mheap_.bitmap - off/4 - 1)), uint32(off & 3)}
}
查找 p 对应的 span 更简单了,我们前面介绍过 spans 区域中记录了每个 page 对应的 span 结构,所以根据 p 计算出它落在第几个 page,就可以找到对应的 span 指针了:
mheap_.spans[(p-mheap_.arena_start)>>_PageShift]
以下分析基于 go1.11 及之后的版本。
Go 1.11 及以后的版本改用了稀疏索引的方式来管理整体的内存,堆可以超过 512GB,也允许内存空间扩展时不连续。全局的 mheap 结构中有一个 arenas 二阶数组,在 linux amd64 上,一阶只有一个 slot,二阶有 4M 个 slot,每个 slot 指向一个 heapArena 结构,每个 heapArena 结构管理 64MB 内存,所以在新的版本中,Go 可以管理 4M*64MB=256TB 内存,即覆盖目前 64 位机器 48bit 寻址空间对应的全部 256TB。通过指针加上一定的偏移量,就知道它属于哪个 64MB 的 heapArena;再对 64MB 求余,结合该 heapArena 的 spans 数组,即可知道属于哪个 mspan;结合 heapArena 的 bitmap 和对象每 8 个字节在 heapArena 中的偏移,就可以知道对象的每 8 个字节是指针还是普通数据。
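按照这个思路,下面用一段独立的小例子示意 Go 1.11+ 中由指针计算 heapArena 下标和页下标的过程,常量取 linux/amd64 下的典型值;真实实现见 runtime.arenaIndex / runtime.spanOf,其中还会处理 arenaBaseOffset、二级 arenas 数组等细节,这里省略:

package main

import "fmt"

const (
	heapArenaBytes = 64 << 20 // 每个 heapArena 管理 64MB
	pageSize       = 8 << 10  // 每页 8KB
	pagesPerArena  = heapArenaBytes / pageSize
)

func main() {
	p := uintptr(0xc000123456) // 假设这是某个堆对象的地址
	arenaIdx := p / heapArenaBytes            // 属于哪个 64MB 的 heapArena(真实实现再拆成两级下标索引 mheap_.arenas)
	pageIdx := (p / pageSize) % pagesPerArena // 在该 heapArena 的 spans 数组中的下标, 由此取到对应的 mspan
	fmt.Println("arenaIdx =", arenaIdx, "pageIdx =", pageIdx)
}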
源码分析
源码分析引自 https://www.cnblogs.com/zkweb/p/7880099.html ,讲得很详细:
go触发gc会从gcStart函数开始:
// gcStart transitions the GC from _GCoff to _GCmark (if
// !mode.stwMark) or _GCmarktermination (if mode.stwMark) by
// performing sweep termination and GC initialization.
//
// This may return without performing this transition in some cases,
// such as when called on a system stack or with locks held.
func gcStart(mode gcMode, trigger gcTrigger) {
// 判断当前G是否可抢占, 不可抢占时不触发GC
// Since this is called from malloc and malloc is called in
// the guts of a number of libraries that might be holding
// locks, don't attempt to start GC in non-preemptible or
// potentially unstable situations.
mp := acquirem()
if gp := getg(); gp == mp.g0 || mp.locks > 1 || mp.preemptoff != "" {
releasem(mp)
return
}
releasem(mp)
mp = nil
// 并行清扫上一轮GC未清扫的span
// Pick up the remaining unswept/not being swept spans concurrently
//
// This shouldn't happen if we're being invoked in background
// mode since proportional sweep should have just finished
// sweeping everything, but rounding errors, etc, may leave a
// few spans unswept. In forced mode, this is necessary since
// GC can be forced at any point in the sweeping cycle.
//
// We check the transition condition continuously here in case
// this G gets delayed in to the next GC cycle.
for trigger.test() && gosweepone() != ^uintptr(0) {
sweep.nbgsweep++
}
// 上锁, 然后重新检查gcTrigger的条件是否成立, 不成立时不触发GC
// Perform GC initialization and the sweep termination
// transition.
semacquire(&work.startSema)
// Re-check transition condition under transition lock.
if !trigger.test() {
semrelease(&work.startSema)
return
}
// 记录是否强制触发, gcTriggerCycle是runtime.GC用的
// For stats, check if this GC was forced by the user.
work.userForced = trigger.kind == gcTriggerAlways || trigger.kind == gcTriggerCycle
// 判断是否指定了禁止并行GC的参数
// In gcstoptheworld debug mode, upgrade the mode accordingly.
// We do this after re-checking the transition condition so
// that multiple goroutines that detect the heap trigger don't
// start multiple STW GCs.
if mode == gcBackgroundMode {
if debug.gcstoptheworld == 1 {
mode = gcForceMode
} else if debug.gcstoptheworld == 2 {
mode = gcForceBlockMode
}
}
// Ok, we're doing it! Stop everybody else
semacquire(&worldsema)
// 跟踪处理
if trace.enabled {
traceGCStart()
}
// 启动后台扫描任务(G)
if mode == gcBackgroundMode {
gcBgMarkStartWorkers()
}
// 重置标记相关的状态
gcResetMarkState()
// 重置参数
work.stwprocs, work.maxprocs = gcprocs(), gomaxprocs
work.heap0 = atomic.Load64(&memstats.heap_live)
work.pauseNS = 0
work.mode = mode
// 记录开始时间
now := nanotime()
work.tSweepTerm = now
work.pauseStart = now
// 停止所有运行中的G, 并禁止它们运行
systemstack(stopTheWorldWithSema)
// !!!!!!!!!!!!!!!!
// 世界已停止(STW)...
// !!!!!!!!!!!!!!!!
// 清扫上一轮GC未清扫的span, 确保上一轮GC已完成
// Finish sweep before we start concurrent scan.
systemstack(func() {
finishsweep_m()
})
// 清扫sched.sudogcache和sched.deferpool
// clearpools before we start the GC. If we wait they memory will not be
// reclaimed until the next GC cycle.
clearpools()
// 增加GC计数
work.cycles++
// 判断是否并行GC模式
if mode == gcBackgroundMode { // Do as much work concurrently as possible
// 标记新一轮GC已开始
gcController.startCycle()
work.heapGoal = memstats.next_gc
// 设置全局变量中的GC状态为_GCmark
// 然后启用写屏障
// Enter concurrent mark phase and enable
// write barriers.
//
// Because the world is stopped, all Ps will
// observe that write barriers are enabled by
// the time we start the world and begin
// scanning.
//
// Write barriers must be enabled before assists are
// enabled because they must be enabled before
// any non-leaf heap objects are marked. Since
// allocations are blocked until assists can
// happen, we want enable assists as early as
// possible.
setGCPhase(_GCmark)
// 重置后台标记任务的计数
gcBgMarkPrepare() // Must happen before assist enable.
// 计算扫描根对象的任务数量
gcMarkRootPrepare()
// 标记所有tiny alloc等待合并的对象
// Mark all active tinyalloc blocks. Since we're
// allocating from these, they need to be black like
// other allocations. The alternative is to blacken
// the tiny block on every allocation from it, which
// would slow down the tiny allocator.
gcMarkTinyAllocs()
// 启用辅助GC
// At this point all Ps have enabled the write
// barrier, thus maintaining the no white to
// black invariant. Enable mutator assists to
// put back-pressure on fast allocating
// mutators.
atomic.Store(&gcBlackenEnabled, 1)
// 记录标记开始的时间
// Assists and workers can start the moment we start
// the world.
gcController.markStartTime = now
// 重新启动世界
// 前面创建的后台标记任务会开始工作, 所有后台标记任务都完成工作后, 进入完成标记阶段
// Concurrent mark.
systemstack(startTheWorldWithSema)
// !!!!!!!!!!!!!!!
// 世界已重新启动...
// !!!!!!!!!!!!!!!
// 记录停止了多久, 和标记阶段开始的时间
now = nanotime()
work.pauseNS += now - work.pauseStart
work.tMark = now
} else {
// 不是并行GC模式
// 记录完成标记阶段开始的时间
t := nanotime()
work.tMark, work.tMarkTerm = t, t
work.heapGoal = work.heap0
// 跳过标记阶段, 执行完成标记阶段
// 所有标记工作都会在世界已停止的状态执行
// (标记阶段会设置work.markrootDone=true, 如果跳过则它的值是false, 完成标记阶段会执行所有工作)
// 完成标记阶段会重新启动世界
// Perform mark termination. This will restart the world.
gcMarkTermination(memstats.triggerRatio)
}
semrelease(&work.startSema)
}
接下来一个个分析gcStart调用的函数, 建议配合上面的GC流程图理解.
函数gcBgMarkStartWorkers用于启动后台标记任务, 先分别对每个P启动一个:
// gcBgMarkStartWorkers prepares background mark worker goroutines.
// These goroutines will not run until the mark phase, but they must
// be started while the work is not stopped and from a regular G
// stack. The caller must hold worldsema.
func gcBgMarkStartWorkers() {
// Background marking is performed by per-P G's. Ensure that
// each P has a background GC G.
for _, p := range &allp {
if p == nil || p.status == _Pdead {
break
}
// 如果已启动则不重复启动
if p.gcBgMarkWorker == 0 {
go gcBgMarkWorker(p)
// 启动后等待该任务通知信号量bgMarkReady再继续
notetsleepg(&work.bgMarkReady, -1)
noteclear(&work.bgMarkReady)
}
}
}
这里虽然为每个P启动了一个后台标记任务, 但是可以同时工作的只有25%, 这个逻辑在M获取待运行的G时调用的findRunnableGCWorker中:
// findRunnableGCWorker returns the background mark worker for _p_ if it
// should be run. This must only be called when gcBlackenEnabled != 0.
func (c *gcControllerState) findRunnableGCWorker(_p_ *p) *g {
if gcBlackenEnabled == 0 {
throw("gcControllerState.findRunnable: blackening not enabled")
}
if _p_.gcBgMarkWorker == 0 {
// The mark worker associated with this P is blocked
// performing a mark transition. We can't run it
// because it may be on some other run or wait queue.
return nil
}
if !gcMarkWorkAvailable(_p_) {
// No work to be done right now. This can happen at
// the end of the mark phase when there are still
// assists tapering off. Don't bother running a worker
// now because it'll just return immediately.
return nil
}
// 原子减少对应的值, 如果减少后大于等于0则返回true, 否则返回false
decIfPositive := func(ptr *int64) bool {
if *ptr > 0 {
if atomic.Xaddint64(ptr, -1) >= 0 {
return true
}
// We lost a race
atomic.Xaddint64(ptr, +1)
}
return false
}
// 减少dedicatedMarkWorkersNeeded, 成功时后台标记任务的模式是Dedicated
// dedicatedMarkWorkersNeeded是当前P的数量的25%去除小数点
// 详见startCycle函数
if decIfPositive(&c.dedicatedMarkWorkersNeeded) {
// This P is now dedicated to marking until the end of
// the concurrent mark phase.
_p_.gcMarkWorkerMode = gcMarkWorkerDedicatedMode
} else {
// 减少fractionalMarkWorkersNeeded, 成功是后台标记任务的模式是Fractional
// 上面的计算如果小数点后有数值(不能够整除)则fractionalMarkWorkersNeeded为1, 否则为0
// 详见startCycle函数
// 举例来说, 4个P时会执行1个Dedicated模式的任务, 5个P时会执行1个Dedicated模式和1个Fractional模式的任务
if !decIfPositive(&c.fractionalMarkWorkersNeeded) {
// No more workers are need right now.
return nil
}
// 按Dedicated模式的任务的执行时间判断cpu占用率是否超过预算值, 超过时不启动
// This P has picked the token for the fractional worker.
// Is the GC currently under or at the utilization goal?
// If so, do more work.
//
// We used to check whether doing one time slice of work
// would remain under the utilization goal, but that has the
// effect of delaying work until the mutator has run for
// enough time slices to pay for the work. During those time
// slices, write barriers are enabled, so the mutator is running slower.
// Now instead we do the work whenever we're under or at the
// utilization work and pay for it by letting the mutator run later.
// This doesn't change the overall utilization averages, but it
// front loads the GC work so that the GC finishes earlier and
// write barriers can be turned off sooner, effectively giving
// the mutator a faster machine.
//
// The old, slower behavior can be restored by setting
// gcForcePreemptNS = forcePreemptNS.
const gcForcePreemptNS = 0
// TODO(austin): We could fast path this and basically
// eliminate contention on c.fractionalMarkWorkersNeeded by
// precomputing the minimum time at which it's worth
// next scheduling the fractional worker. Then Ps
// don't have to fight in the window where we've
// passed that deadline and no one has started the
// worker yet.
//
// TODO(austin): Shorter preemption interval for mark
// worker to improve fairness and give this
// finer-grained control over schedule?
now := nanotime() - gcController.markStartTime
then := now + gcForcePreemptNS
timeUsed := c.fractionalMarkTime + gcForcePreemptNS
if then > 0 && float64(timeUsed)/float64(then) > c.fractionalUtilizationGoal {
// Nope, we'd overshoot the utilization goal
atomic.Xaddint64(&c.fractionalMarkWorkersNeeded, +1)
return nil
}
_p_.gcMarkWorkerMode = gcMarkWorkerFractionalMode
}
// 安排后台标记任务执行
// Run the background mark worker
gp := _p_.gcBgMarkWorker.ptr()
casgstatus(gp, _Gwaiting, _Grunnable)
if trace.enabled {
traceGoUnpark(gp, 0)
}
return gp
}
gcResetMarkState函数会重置标记相关的状态:
// gcResetMarkState resets global state prior to marking (concurrent
// or STW) and resets the stack scan state of all Gs.
//
// This is safe to do without the world stopped because any Gs created
// during or after this will start out in the reset state.
func gcResetMarkState() {
// This may be called during a concurrent phase, so make sure
// allgs doesn't change.
lock(&allglock)
for _, gp := range allgs {
gp.gcscandone = false // set to true in gcphasework
gp.gcscanvalid = false // stack has not been scanned
gp.gcAssistBytes = 0
}
unlock(&allglock)
work.bytesMarked = 0
work.initialHeapLive = atomic.Load64(&memstats.heap_live)
work.markrootDone = false
}
stopTheWorldWithSema函数会停止整个世界, 这个函数必须在g0中运行:
// stopTheWorldWithSema is the core implementation of stopTheWorld.
// The caller is responsible for acquiring worldsema and disabling
// preemption first and then should stopTheWorldWithSema on the system
// stack:
//
// semacquire(&worldsema, 0)
// m.preemptoff = "reason"
// systemstack(stopTheWorldWithSema)
//
// When finished, the caller must either call startTheWorld or undo
// these three operations separately:
//
// m.preemptoff = ""
// systemstack(startTheWorldWithSema)
// semrelease(&worldsema)
//
// It is allowed to acquire worldsema once and then execute multiple
// startTheWorldWithSema/stopTheWorldWithSema pairs.
// Other P's are able to execute between successive calls to
// startTheWorldWithSema and stopTheWorldWithSema.
// Holding worldsema causes any other goroutines invoking
// stopTheWorld to block.
func stopTheWorldWithSema() {
_g_ := getg()
// If we hold a lock, then we won't be able to stop another M
// that is blocked trying to acquire the lock.
if _g_.m.locks > 0 {
throw("stopTheWorld: holding locks")
}
lock(&sched.lock)
// 需要停止的P数量
sched.stopwait = gomaxprocs
// 设置gc等待标记, 调度时看见此标记会进入等待
atomic.Store(&sched.gcwaiting, 1)
// 抢占所有运行中的G
preemptall()
// 停止当前的P
// stop current P
_g_.m.p.ptr().status = _Pgcstop // Pgcstop is only diagnostic.
// 减少需要停止的P数量(当前的P算一个)
sched.stopwait--
// 抢占所有在Psyscall状态的P, 防止它们重新参与调度
// try to retake all P's in Psyscall status
for i := 0; i < int(gomaxprocs); i++ {
p := allp[i]
s := p.status
if s == _Psyscall && atomic.Cas(&p.status, s, _Pgcstop) {
if trace.enabled {
traceGoSysBlock(p)
traceProcStop(p)
}
p.syscalltick++
sched.stopwait--
}
}
// 防止所有空闲的P重新参与调度
// stop idle P's
for {
p := pidleget()
if p == nil {
break
}
p.status = _Pgcstop
sched.stopwait--
}
wait := sched.stopwait > 0
unlock(&sched.lock)
// 如果仍有需要停止的P, 则等待它们停止
// wait for remaining P's to stop voluntarily
if wait {
for {
// 循环等待 + 抢占所有运行中的G
// wait for 100us, then try to re-preempt in case of any races
if notetsleep(&sched.stopnote, 100*1000) {
noteclear(&sched.stopnote)
break
}
preemptall()
}
}
// 逻辑正确性检查
// sanity checks
bad := ""
if sched.stopwait != 0 {
bad = "stopTheWorld: not stopped (stopwait != 0)"
} else {
for i := 0; i < int(gomaxprocs); i++ {
p := allp[i]
if p.status != _Pgcstop {
bad = "stopTheWorld: not stopped (status != _Pgcstop)"
}
}
}
if atomic.Load(&freezing) != 0 {
// Some other thread is panicking. This can cause the
// sanity checks above to fail if the panic happens in
// the signal handler on a stopped thread. Either way,
// we should halt this thread.
lock(&deadlock)
lock(&deadlock)
}
if bad != "" {
throw(bad)
}
// 到这里所有运行中的G都会变为待运行, 并且所有的P都不能被M获取
// 也就是说所有的go代码(除了当前的)都会停止运行, 并且不能运行新的go代码
}
finishsweep_m函数会清扫上一轮GC未清扫的span, 确保上一轮GC已完成:
// finishsweep_m ensures that all spans are swept.
//
// The world must be stopped. This ensures there are no sweeps in
// progress.
//
//go:nowritebarrier
func finishsweep_m() {
// sweepone会取出一个未sweep的span然后执行sweep
// 详细将在下面sweep阶段时分析
// Sweeping must be complete before marking commences, so
// sweep any unswept spans. If this is a concurrent GC, there
// shouldn't be any spans left to sweep, so this should finish
// instantly. If GC was forced before the concurrent sweep
// finished, there may be spans to sweep.
for sweepone() != ^uintptr(0) {
sweep.npausesweep++
}
// 所有span都sweep完成后, 启动一个新的markbit时代
// 这个函数是实现span的gcmarkBits和allocBits的分配和复用的关键, 流程如下
// - span分配gcmarkBits和allocBits
// - span完成sweep
// - 原allocBits不再被使用
// - gcmarkBits变为allocBits
// - 分配新的gcmarkBits
// - 开启新的markbit时代
// - span完成sweep, 同上
// - 开启新的markbit时代
// - 2个时代之前的bitmap将不再被使用, 可以复用这些bitmap
nextMarkBitArenaEpoch()
}
clearpools函数会清理sched.sudogcache和sched.deferpool, 让它们的内存可以被回收:
func clearpools() {
// clear sync.Pools
if poolcleanup != nil {
poolcleanup()
}
// Clear central sudog cache.
// Leave per-P caches alone, they have strictly bounded size.
// Disconnect cached list before dropping it on the floor,
// so that a dangling ref to one entry does not pin all of them.
lock(&sched.sudoglock)
var sg, sgnext *sudog
for sg = sched.sudogcache; sg != nil; sg = sgnext {
sgnext = sg.next
sg.next = nil
}
sched.sudogcache = nil
unlock(&sched.sudoglock)
// Clear central defer pools.
// Leave per-P pools alone, they have strictly bounded size.
lock(&sched.deferlock)
for i := range sched.deferpool {
// disconnect cached list before dropping it on the floor,
// so that a dangling ref to one entry does not pin all of them.
var d, dlink *_defer
for d = sched.deferpool[i]; d != nil; d = dlink {
dlink = d.link
d.link = nil
}
sched.deferpool[i] = nil
}
unlock(&sched.deferlock)
}
startCycle标记开始了新一轮的GC:
// startCycle resets the GC controller's state and computes estimates
// for a new GC cycle. The caller must hold worldsema.
func (c *gcControllerState) startCycle() {
c.scanWork = 0
c.bgScanCredit = 0
c.assistTime = 0
c.dedicatedMarkTime = 0
c.fractionalMarkTime = 0
c.idleMarkTime = 0
// 伪装heap_marked的值如果gc_trigger的值很小, 防止后面对triggerRatio做出错误的调整
// If this is the first GC cycle or we're operating on a very
// small heap, fake heap_marked so it looks like gc_trigger is
// the appropriate growth from heap_marked, even though the
// real heap_marked may not have a meaningful value (on the
// first cycle) or may be much smaller (resulting in a large
// error response).
if memstats.gc_trigger <= heapminimum {
memstats.heap_marked = uint64(float64(memstats.gc_trigger) / (1 + memstats.triggerRatio))
}
// 重新计算next_gc, 注意next_gc的计算跟gc_trigger不一样
// Re-compute the heap goal for this cycle in case something
// changed. This is the same calculation we use elsewhere.
memstats.next_gc = memstats.heap_marked + memstats.heap_marked*uint64(gcpercent)/100
if gcpercent < 0 {
memstats.next_gc = ^uint64(0)
}
// 确保next_gc和heap_live之间最少有1MB
// Ensure that the heap goal is at least a little larger than
// the current live heap size. This may not be the case if GC
// start is delayed or if the allocation that pushed heap_live
// over gc_trigger is large or if the trigger is really close to
// GOGC. Assist is proportional to this distance, so enforce a
// minimum distance, even if it means going over the GOGC goal
// by a tiny bit.
if memstats.next_gc < memstats.heap_live+1024*1024 {
memstats.next_gc = memstats.heap_live + 1024*1024
}
// 计算可以同时执行的后台标记任务的数量
// dedicatedMarkWorkersNeeded等于P的数量的25%去除小数点
// 如果可以整除则fractionalMarkWorkersNeeded等于0否则等于1
// totalUtilizationGoal是GC所占的P的目标值(例如P一共有5个时目标是1.25个P)
// fractionalUtilizationGoal是Fractiona模式的任务所占的P的目标值(例如P一共有5个时目标是0.25个P)
// Compute the total mark utilization goal and divide it among
// dedicated and fractional workers.
totalUtilizationGoal := float64(gomaxprocs) * gcGoalUtilization
c.dedicatedMarkWorkersNeeded = int64(totalUtilizationGoal)
c.fractionalUtilizationGoal = totalUtilizationGoal - float64(c.dedicatedMarkWorkersNeeded)
if c.fractionalUtilizationGoal > 0 {
c.fractionalMarkWorkersNeeded = 1
} else {
c.fractionalMarkWorkersNeeded = 0
}
// 重置P中的辅助GC所用的时间统计
// Clear per-P state
for _, p := range &allp {
if p == nil {
break
}
p.gcAssistTime = 0
}
// 计算辅助GC的参数
// 参考上面对计算assistWorkPerByte的公式的分析
// Compute initial values for controls that are updated
// throughout the cycle.
c.revise()
if debug.gcpacertrace > 0 {
print("pacer: assist ratio=", c.assistWorkPerByte,
" (scan ", memstats.heap_scan>>20, " MB in ",
work.initialHeapLive>>20, "->",
memstats.next_gc>>20, " MB)",
" workers=", c.dedicatedMarkWorkersNeeded,
"+", c.fractionalMarkWorkersNeeded, "\n")
}
}
setGCPhase函数会修改表示当前GC阶段的全局变量和是否开启写屏障的全局变量:
//go:nosplit
func setGCPhase(x uint32) {
atomic.Store(&gcphase, x)
writeBarrier.needed = gcphase == _GCmark || gcphase == _GCmarktermination
writeBarrier.enabled = writeBarrier.needed || writeBarrier.cgo
}
gcBgMarkPrepare函数会重置后台标记任务的计数:
// gcBgMarkPrepare sets up state for background marking.
// Mutator assists must not yet be enabled.
func gcBgMarkPrepare() {
// Background marking will stop when the work queues are empty
// and there are no more workers (note that, since this is
// concurrent, this may be a transient state, but mark
// termination will clean it up). Between background workers
// and assists, we don't really know how many workers there
// will be, so we pretend to have an arbitrarily large number
// of workers, almost all of which are "waiting". While a
// worker is working it decrements nwait. If nproc == nwait,
// there are no workers.
work.nproc = ^uint32(0)
work.nwait = ^uint32(0)
}
gcMarkRootPrepare函数会计算扫描根对象的任务数量:
// gcMarkRootPrepare queues root scanning jobs (stacks, globals, and
// some miscellany) and initializes scanning-related state.
//
// The caller must have call gcCopySpans().
//
// The world must be stopped.
//
//go:nowritebarrier
func gcMarkRootPrepare() {
// 释放mcache中的所有span的任务, 只在完成标记阶段(mark termination)中执行
if gcphase == _GCmarktermination {
work.nFlushCacheRoots = int(gomaxprocs)
} else {
work.nFlushCacheRoots = 0
}
// 计算block数量的函数, rootBlockBytes是256KB
// Compute how many data and BSS root blocks there are.
nBlocks := func(bytes uintptr) int {
return int((bytes + rootBlockBytes - 1) / rootBlockBytes)
}
work.nDataRoots = 0
work.nBSSRoots = 0
// data和bss每一轮GC只扫描一次
// 并行GC中会在后台标记任务中扫描, 完成标记阶段(mark termination)中不扫描
// 非并行GC会在完成标记阶段(mark termination)中扫描
// Only scan globals once per cycle; preferably concurrently.
if !work.markrootDone {
// 计算扫描可读写的全局变量的任务数量
for _, datap := range activeModules() {
nDataRoots := nBlocks(datap.edata - datap.data)
if nDataRoots > work.nDataRoots {
work.nDataRoots = nDataRoots
}
}
// 计算扫描只读的全局变量的任务数量
for _, datap := range activeModules() {
nBSSRoots := nBlocks(datap.ebss - datap.bss)
if nBSSRoots > work.nBSSRoots {
work.nBSSRoots = nBSSRoots
}
}
}
// span中的finalizer和各个G的栈每一轮GC只扫描一次
// 同上
if !work.markrootDone {
// 计算扫描span中的finalizer的任务数量
// On the first markroot, we need to scan span roots.
// In concurrent GC, this happens during concurrent
// mark and we depend on addfinalizer to ensure the
// above invariants for objects that get finalizers
// after concurrent mark. In STW GC, this will happen
// during mark termination.
//
// We're only interested in scanning the in-use spans,
// which will all be swept at this point. More spans
// may be added to this list during concurrent GC, but
// we only care about spans that were allocated before
// this mark phase.
work.nSpanRoots = mheap_.sweepSpans[mheap_.sweepgen/2%2].numBlocks()
// 计算扫描各个G的栈的任务数量
// On the first markroot, we need to scan all Gs. Gs
// may be created after this point, but it's okay that
// we ignore them because they begin life without any
// roots, so there's nothing to scan, and any roots
// they create during the concurrent phase will be
// scanned during mark termination. During mark
// termination, allglen isn't changing, so we'll scan
// all Gs.
work.nStackRoots = int(atomic.Loaduintptr(&allglen))
} else {
// We've already scanned span roots and kept the scan
// up-to-date during concurrent mark.
work.nSpanRoots = 0
// The hybrid barrier ensures that stacks can't
// contain pointers to unmarked objects, so on the
// second markroot, there's no need to scan stacks.
work.nStackRoots = 0
if debug.gcrescanstacks > 0 {
// Scan stacks anyway for debugging.
work.nStackRoots = int(atomic.Loaduintptr(&allglen))
}
}
// 计算总任务数量
// 后台标记任务会对markrootNext进行原子递增, 来决定做哪个任务
// 这种用数值来实现锁自由队列的办法挺聪明的, 尽管google工程师觉得不好(看后面markroot函数的分析)
work.markrootNext = 0
work.markrootJobs = uint32(fixedRootCount + work.nFlushCacheRoots + work.nDataRoots + work.nBSSRoots + work.nSpanRoots + work.nStackRoots)
}
gcMarkTinyAllocs函数会标记所有tiny alloc等待合并的对象:
// gcMarkTinyAllocs greys all active tiny alloc blocks.
//
// The world must be stopped.
func gcMarkTinyAllocs() {
for _, p := range &allp {
if p == nil || p.status == _Pdead {
break
}
c := p.mcache
if c == nil || c.tiny == 0 {
continue
}
// 标记各个P中的mcache中的tiny
// 在上面的mallocgc函数中可以看到tiny是当前等待合并的对象
_, hbits, span, objIndex := heapBitsForObject(c.tiny, 0, 0)
gcw := &p.gcw
// 标记一个对象存活, 并把它加到标记队列(该对象变为灰色)
greyobject(c.tiny, 0, 0, hbits, span, gcw, objIndex)
// gcBlackenPromptly变量表示当前是否禁止本地队列, 如果已禁止则把标记任务flush到全局队列
if gcBlackenPromptly {
gcw.dispose()
}
}
}
startTheWorldWithSema函数会重新启动世界:
func startTheWorldWithSema() {
_g_ := getg()
// 禁止G被抢占
_g_.m.locks++ // disable preemption because it can be holding p in a local var
// 判断收到的网络事件(fd可读可写或错误)并添加对应的G到待运行队列
gp := netpoll(false) // non-blocking
injectglist(gp)
// 判断是否要启动gc helper
add := needaddgcproc()
lock(&sched.lock)
// 如果要求改变gomaxprocs则调整P的数量
// procresize会返回有可运行任务的P的链表
procs := gomaxprocs
if newprocs != 0 {
procs = newprocs
newprocs = 0
}
p1 := procresize(procs)
// 取消GC等待标记
sched.gcwaiting = 0
// 如果sysmon在等待则唤醒它
if sched.sysmonwait != 0 {
sched.sysmonwait = 0
notewakeup(&sched.sysmonnote)
}
unlock(&sched.lock)
// 唤醒有可运行任务的P
for p1 != nil {
p := p1
p1 = p1.link.ptr()
if p.m != 0 {
mp := p.m.ptr()
p.m = 0
if mp.nextp != 0 {
throw("startTheWorld: inconsistent mp->nextp")
}
mp.nextp.set(p)
notewakeup(&mp.park)
} else {
// Start M to run P. Do not start another M below.
newm(nil, p)
add = false
}
}
// 如果有空闲的P,并且没有自旋中的M则唤醒或者创建一个M
// Wakeup an additional proc in case we have excessive runnable goroutines
// in local queues or in the global queue. If we don't, the proc will park itself.
// If we have lots of excessive work, resetspinning will unpark additional procs as necessary.
if atomic.Load(&sched.npidle) != 0 && atomic.Load(&sched.nmspinning) == 0 {
wakep()
}
// 启动gc helper
if add {
// If GC could have used another helper proc, start one now,
// in the hope that it will be available next time.
// It would have been even better to start it before the collection,
// but doing so requires allocating memory, so it's tricky to
// coordinate. This lazy approach works out in practice:
// we don't mind if the first couple gc rounds don't have quite
// the maximum number of procs.
newm(mhelpgc, nil)
}
// 允许G被抢占
_g_.m.locks--
// 如果当前G要求被抢占则重新尝试
if _g_.m.locks == 0 && _g_.preempt { // restore the preemption request in case we've cleared it in newstack
_g_.stackguard0 = stackPreempt
}
}
重启世界后各个M会重新开始调度, 调度时会优先使用上面提到的findRunnableGCWorker函数查找任务, 之后就有大约25%的P运行后台标记任务.
后台标记任务的函数是gcBgMarkWorker:
func gcBgMarkWorker(_p_ *p) {
gp := getg()
// 用于休眠后重新获取P的构造体
type parkInfo struct {
m muintptr // Release this m on park.
attach puintptr // If non-nil, attach to this p on park.
}
// We pass park to a gopark unlock function, so it can't be on
// the stack (see gopark). Prevent deadlock from recursively
// starting GC by disabling preemption.
gp.m.preemptoff = "GC worker init"
park := new(parkInfo)
gp.m.preemptoff = ""
// 设置当前的M并禁止抢占
park.m.set(acquirem())
// 设置当前的P(需要关联到的P)
park.attach.set(_p_)
// 通知gcBgMarkStartWorkers可以继续处理
// Inform gcBgMarkStartWorkers that this worker is ready.
// After this point, the background mark worker is scheduled
// cooperatively by gcController.findRunnable. Hence, it must
// never be preempted, as this would put it into _Grunnable
// and put it on a run queue. Instead, when the preempt flag
// is set, this puts itself into _Gwaiting to be woken up by
// gcController.findRunnable at the appropriate time.
notewakeup(&work.bgMarkReady)
for {
// 让当前G进入休眠
// Go to sleep until woken by gcController.findRunnable.
// We can't releasem yet since even the call to gopark
// may be preempted.
gopark(func(g *g, parkp unsafe.Pointer) bool {
park := (*parkInfo)(parkp)
// 重新允许抢占
// The worker G is no longer running, so it's
// now safe to allow preemption.
releasem(park.m.ptr())
// 设置关联的P
// 把当前的G设到P的gcBgMarkWorker成员, 下次findRunnableGCWorker会使用
// 设置失败时不休眠
// If the worker isn't attached to its P,
// attach now. During initialization and after
// a phase change, the worker may have been
// running on a different P. As soon as we
// attach, the owner P may schedule the
// worker, so this must be done after the G is
// stopped.
if park.attach != 0 {
p := park.attach.ptr()
park.attach.set(nil)
// cas the worker because we may be
// racing with a new worker starting
// on this P.
if !p.gcBgMarkWorker.cas(0, guintptr(unsafe.Pointer(g))) {
// The P got a new worker.
// Exit this worker.
return false
}
}
return true
}, unsafe.Pointer(park), "GC worker (idle)", traceEvGoBlock, 0)
// 检查P的gcBgMarkWorker是否和当前的G一致, 不一致时结束当前的任务
// Loop until the P dies and disassociates this
// worker (the P may later be reused, in which case
// it will get a new worker) or we failed to associate.
if _p_.gcBgMarkWorker.ptr() != gp {
break
}
// 禁止G被抢占
// Disable preemption so we can use the gcw. If the
// scheduler wants to preempt us, we'll stop draining,
// dispose the gcw, and then preempt.
park.m.set(acquirem())
if gcBlackenEnabled == 0 {
throw("gcBgMarkWorker: blackening not enabled")
}
// 记录开始时间
startTime := nanotime()
decnwait := atomic.Xadd(&work.nwait, -1)
if decnwait == work.nproc {
println("runtime: work.nwait=", decnwait, "work.nproc=", work.nproc)
throw("work.nwait was > work.nproc")
}
// 切换到g0运行
systemstack(func() {
// 设置G的状态为等待中这样它的栈可以被扫描(两个后台标记任务可以互相扫描对方的栈)
// Mark our goroutine preemptible so its stack
// can be scanned. This lets two mark workers
// scan each other (otherwise, they would
// deadlock). We must not modify anything on
// the G stack. However, stack shrinking is
// disabled for mark workers, so it is safe to
// read from the G stack.
casgstatus(gp, _Grunning, _Gwaiting)
// 判断后台标记任务的模式
switch _p_.gcMarkWorkerMode {
default:
throw("gcBgMarkWorker: unexpected gcMarkWorkerMode")
case gcMarkWorkerDedicatedMode:
// 这个模式下P应该专心执行标记
// 执行标记, 直到被抢占, 并且需要计算后台的扫描量来减少辅助GC和唤醒等待中的G
gcDrain(&_p_.gcw, gcDrainUntilPreempt|gcDrainFlushBgCredit)
// 被抢占时把本地运行队列中的所有G都踢到全局运行队列
if gp.preempt {
// We were preempted. This is
// a useful signal to kick
// everything out of the run
// queue so it can run
// somewhere else.
lock(&sched.lock)
for {
gp, _ := runqget(_p_)
if gp == nil {
break
}
globrunqput(gp)
}
unlock(&sched.lock)
}
// 继续执行标记, 直到无更多任务, 并且需要计算后台的扫描量来减少辅助GC和唤醒等待中的G
// Go back to draining, this time
// without preemption.
gcDrain(&_p_.gcw, gcDrainNoBlock|gcDrainFlushBgCredit)
case gcMarkWorkerFractionalMode:
// 这个模式下P应该适当执行标记
// 执行标记, 直到被抢占, 并且需要计算后台的扫描量来减少辅助GC和唤醒等待中的G
gcDrain(&_p_.gcw, gcDrainUntilPreempt|gcDrainFlushBgCredit)
case gcMarkWorkerIdleMode:
// 这个模式下P只在空闲时执行标记
// 执行标记, 直到被抢占或者达到一定的量, 并且需要计算后台的扫描量来减少辅助GC和唤醒等待中的G
gcDrain(&_p_.gcw, gcDrainIdle|gcDrainUntilPreempt|gcDrainFlushBgCredit)
}
// 恢复G的状态到运行中
casgstatus(gp, _Gwaiting, _Grunning)
})
// 如果标记了禁止本地标记队列则flush到全局标记队列
// If we are nearing the end of mark, dispose
// of the cache promptly. We must do this
// before signaling that we're no longer
// working so that other workers can't observe
// no workers and no work while we have this
// cached, and before we compute done.
if gcBlackenPromptly {
_p_.gcw.dispose()
}
// 累加所用时间
// Account for time.
duration := nanotime() - startTime
switch _p_.gcMarkWorkerMode {
case gcMarkWorkerDedicatedMode:
atomic.Xaddint64(&gcController.dedicatedMarkTime, duration)
atomic.Xaddint64(&gcController.dedicatedMarkWorkersNeeded, 1)
case gcMarkWorkerFractionalMode:
atomic.Xaddint64(&gcController.fractionalMarkTime, duration)
atomic.Xaddint64(&gcController.fractionalMarkWorkersNeeded, 1)
case gcMarkWorkerIdleMode:
atomic.Xaddint64(&gcController.idleMarkTime, duration)
}
// Was this the last worker and did we run out
// of work?
incnwait := atomic.Xadd(&work.nwait, +1)
if incnwait > work.nproc {
println("runtime: p.gcMarkWorkerMode=", _p_.gcMarkWorkerMode,
"work.nwait=", incnwait, "work.nproc=", work.nproc)
throw("work.nwait > work.nproc")
}
// 判断是否所有后台标记任务都完成, 并且没有更多的任务
// If this worker reached a background mark completion
// point, signal the main GC goroutine.
if incnwait == work.nproc && !gcMarkWorkAvailable(nil) {
// 取消和P的关联
// Make this G preemptible and disassociate it
// as the worker for this P so
// findRunnableGCWorker doesn't try to
// schedule it.
_p_.gcBgMarkWorker.set(nil)
// 允许G被抢占
releasem(park.m.ptr())
// 准备进入完成标记阶段
gcMarkDone()
// 休眠之前会重新关联P
// 因为上面允许被抢占, 到这里的时候可能就会变成其他P
// 如果重新关联P失败则这个任务会结束
// Disable preemption and prepare to reattach
// to the P.
//
// We may be running on a different P at this
// point, so we can't reattach until this G is
// parked.
park.m.set(acquirem())
park.attach.set(_p_)
}
}
}
gcDrain函数用于执行标记:
// gcDrain scans roots and objects in work buffers, blackening grey
// objects until all roots and work buffers have been drained.
//
// If flags&gcDrainUntilPreempt != 0, gcDrain returns when g.preempt
// is set. This implies gcDrainNoBlock.
//
// If flags&gcDrainIdle != 0, gcDrain returns when there is other work
// to do. This implies gcDrainNoBlock.
//
// If flags&gcDrainNoBlock != 0, gcDrain returns as soon as it is
// unable to get more work. Otherwise, it will block until all
// blocking calls are blocked in gcDrain.
//
// If flags&gcDrainFlushBgCredit != 0, gcDrain flushes scan work
// credit to gcController.bgScanCredit every gcCreditSlack units of
// scan work.
//
//go:nowritebarrier
func gcDrain(gcw *gcWork, flags gcDrainFlags) {
if !writeBarrier.needed {
throw("gcDrain phase incorrect")
}
gp := getg().m.curg
// 看到抢占标志时是否要返回
preemptible := flags&gcDrainUntilPreempt != 0
// 没有任务时是否要等待任务
blocking := flags&(gcDrainUntilPreempt|gcDrainIdle|gcDrainNoBlock) == 0
// 是否计算后台的扫描量来减少辅助GC和唤醒等待中的G
flushBgCredit := flags&gcDrainFlushBgCredit != 0
// 是否只执行一定量的工作
idle := flags&gcDrainIdle != 0
// 记录初始的已扫描数量
initScanWork := gcw.scanWork
// 扫描idleCheckThreshold(100000)个对象以后检查是否要返回
// idleCheck is the scan work at which to perform the next
// idle check with the scheduler.
idleCheck := initScanWork + idleCheckThreshold
// 如果根对象未扫描完, 则先扫描根对象
// Drain root marking jobs.
if work.markrootNext < work.markrootJobs {
// 如果标记了preemptible, 循环直到被抢占
for !(preemptible && gp.preempt) {
// 从根对象扫描队列取出一个值(原子递增)
job := atomic.Xadd(&work.markrootNext, +1) - 1
if job >= work.markrootJobs {
break
}
// 执行根对象扫描工作
markroot(gcw, job)
// 如果是idle模式并且有其他工作, 则返回
if idle && pollWork() {
goto done
}
}
}
// 根对象已经在标记队列中, 消费标记队列
// 如果标记了preemptible, 循环直到被抢占
// Drain heap marking jobs.
for !(preemptible && gp.preempt) {
// 如果全局标记队列为空, 把本地标记队列的一部分工作分过去
// (如果wbuf2不为空则移动wbuf2过去, 否则移动wbuf1的一半过去)
// Try to keep work available on the global queue. We used to
// check if there were waiting workers, but it's better to
// just keep work available than to make workers wait. In the
// worst case, we'll do O(log(_WorkbufSize)) unnecessary
// balances.
if work.full == 0 {
gcw.balance()
}
// 从本地标记队列中获取对象, 获取不到则从全局标记队列获取
var b uintptr
if blocking {
// 阻塞获取
b = gcw.get()
} else {
// 非阻塞获取
b = gcw.tryGetFast()
if b == 0 {
b = gcw.tryGet()
}
}
// 获取不到对象, 标记队列已为空, 跳出循环
if b == 0 {
// work barrier reached or tryGet failed.
break
}
// 扫描获取到的对象
scanobject(b, gcw)
// 如果已经扫描了一定数量的对象(gcCreditSlack的值是2000)
// Flush background scan work credit to the global
// account if we've accumulated enough locally so
// mutator assists can draw on it.
if gcw.scanWork >= gcCreditSlack {
// 把扫描的对象数量添加到全局
atomic.Xaddint64(&gcController.scanWork, gcw.scanWork)
// 减少辅助GC的工作量和唤醒等待中的G
if flushBgCredit {
gcFlushBgCredit(gcw.scanWork - initScanWork)
initScanWork = 0
}
idleCheck -= gcw.scanWork
gcw.scanWork = 0
// 如果是idle模式且达到了检查的扫描量, 则检查是否有其他任务(G), 如果有则跳出循环
if idle && idleCheck <= 0 {
idleCheck += idleCheckThreshold
if pollWork() {
break
}
}
}
}
// In blocking mode, write barriers are not allowed after this
// point because we must preserve the condition that the work
// buffers are empty.
done:
// 把扫描的对象数量添加到全局
// Flush remaining scan work credit.
if gcw.scanWork > 0 {
atomic.Xaddint64(&gcController.scanWork, gcw.scanWork)
// 减少辅助GC的工作量和唤醒等待中的G
if flushBgCredit {
gcFlushBgCredit(gcw.scanWork - initScanWork)
}
gcw.scanWork = 0
}
}
markroot函数用于执行根对象扫描工作:
// markroot scans the i'th root.
//
// Preemption must be disabled (because this uses a gcWork).
//
// nowritebarrier is only advisory here.
//
//go:nowritebarrier
func markroot(gcw *gcWork, i uint32) {
// 判断取出的数值对应哪种任务
// (google的工程师觉得这种办法可笑)
// TODO(austin): This is a bit ridiculous. Compute and store
// the bases in gcMarkRootPrepare instead of the counts.
baseFlushCache := uint32(fixedRootCount)
baseData := baseFlushCache + uint32(work.nFlushCacheRoots)
baseBSS := baseData + uint32(work.nDataRoots)
baseSpans := baseBSS + uint32(work.nBSSRoots)
baseStacks := baseSpans + uint32(work.nSpanRoots)
end := baseStacks + uint32(work.nStackRoots)
// Note: if you add a case here, please also update heapdump.go:dumproots.
switch {
// 释放mcache中的所有span, 要求STW
case baseFlushCache <= i && i < baseData:
flushmcache(int(i - baseFlushCache))
// 扫描可读写的全局变量
// 这里只会扫描i对应的block, 扫描时传入包含哪里有指针的bitmap数据
case baseData <= i && i < baseBSS:
for _, datap := range activeModules() {
markrootBlock(datap.data, datap.edata-datap.data, datap.gcdatamask.bytedata, gcw, int(i-baseData))
}
// 扫描未初始化(零值初始化)的全局变量(bss段)
// 这里只会扫描i对应的block, 扫描时传入包含哪里有指针的bitmap数据
case baseBSS <= i && i < baseSpans:
for _, datap := range activeModules() {
markrootBlock(datap.bss, datap.ebss-datap.bss, datap.gcbssmask.bytedata, gcw, int(i-baseBSS))
}
// 扫描析构器队列
case i == fixedRootFinalizers:
// Only do this once per GC cycle since we don't call
// queuefinalizer during marking.
if work.markrootDone {
break
}
for fb := allfin; fb != nil; fb = fb.alllink {
cnt := uintptr(atomic.Load(&fb.cnt))
scanblock(uintptr(unsafe.Pointer(&fb.fin[0])), cnt*unsafe.Sizeof(fb.fin[0]), &finptrmask[0], gcw)
}
// 释放已中止的G的栈
case i == fixedRootFreeGStacks:
// Only do this once per GC cycle; preferably
// concurrently.
if !work.markrootDone {
// Switch to the system stack so we can call
// stackfree.
systemstack(markrootFreeGStacks)
}
// 扫描各个span中特殊对象(析构器列表)
case baseSpans <= i && i < baseStacks:
// mark MSpan.specials
markrootSpans(gcw, int(i-baseSpans))
// 扫描各个G的栈
default:
// 获取需要扫描的G
// the rest is scanning goroutine stacks
var gp *g
if baseStacks <= i && i < end {
gp = allgs[i-baseStacks]
} else {
throw("markroot: bad index")
}
// 记录等待开始的时间
// remember when we've first observed the G blocked
// needed only to output in traceback
status := readgstatus(gp) // We are not in a scan state
if (status == _Gwaiting || status == _Gsyscall) && gp.waitsince == 0 {
gp.waitsince = work.tstart
}
// 切换到g0运行(有可能会扫到自己的栈)
// scang must be done on the system stack in case
// we're trying to scan our own stack.
systemstack(func() {
// 判断扫描的栈是否自己的
// If this is a self-scan, put the user G in
// _Gwaiting to prevent self-deadlock. It may
// already be in _Gwaiting if this is a mark
// worker or we're in mark termination.
userG := getg().m.curg
selfScan := gp == userG && readgstatus(userG) == _Grunning
// 如果正在扫描自己的栈则切换状态到等待中防止死锁
if selfScan {
casgstatus(userG, _Grunning, _Gwaiting)
userG.waitreason = "garbage collection scan"
}
// 扫描G的栈
// TODO: scang blocks until gp's stack has
// been scanned, which may take a while for
// running goroutines. Consider doing this in
// two phases where the first is non-blocking:
// we scan the stacks we can and ask running
// goroutines to scan themselves; and the
// second blocks.
scang(gp, gcw)
// 如果正在扫描自己的栈则把状态切换回运行中
if selfScan {
casgstatus(userG, _Gwaiting, _Grunning)
}
})
}
}
scang函数负责扫描G的栈:
// scang blocks until gp's stack has been scanned.
// It might be scanned by scang or it might be scanned by the goroutine itself.
// Either way, the stack scan has completed when scang returns.
func scang(gp *g, gcw *gcWork) {
// Invariant; we (the caller, markroot for a specific goroutine) own gp.gcscandone.
// Nothing is racing with us now, but gcscandone might be set to true left over
// from an earlier round of stack scanning (we scan twice per GC).
// We use gcscandone to record whether the scan has been done during this round.
// 标记扫描未完成
gp.gcscandone = false
// See http://golang.org/cl/21503 for justification of the yield delay.
const yieldDelay = 10 * 1000
var nextYield int64
// 循环直到扫描完成
// Endeavor to get gcscandone set to true,
// either by doing the stack scan ourselves or by coercing gp to scan itself.
// gp.gcscandone can transition from false to true when we're not looking
// (if we asked for preemption), so any time we lock the status using
// castogscanstatus we have to double-check that the scan is still not done.
loop:
for i := 0; !gp.gcscandone; i++ {
// 判断G的当前状态
switch s := readgstatus(gp); s {
default:
dumpgstatus(gp)
throw("stopg: invalid status")
// G已中止, 不需要扫描它
case _Gdead:
// No stack.
gp.gcscandone = true
break loop
// G的栈正在扩展, 下一轮重试
case _Gcopystack:
// Stack being switched. Go around again.
// G不是运行中, 首先需要防止它运行
case _Grunnable, _Gsyscall, _Gwaiting:
// Claim goroutine by setting scan bit.
// Racing with execution or readying of gp.
// The scan bit keeps them from running
// the goroutine until we're done.
if castogscanstatus(gp, s, s|_Gscan) {
// 原子切换状态成功时扫描它的栈
if !gp.gcscandone {
scanstack(gp, gcw)
gp.gcscandone = true
}
// 恢复G的状态, 并跳出循环
restartg(gp)
break loop
}
// G正在扫描它自己, 等待扫描完毕
case _Gscanwaiting:
// newstack is doing a scan for us right now. Wait.
// G正在运行
case _Grunning:
// Goroutine running. Try to preempt execution so it can scan itself.
// The preemption handler (in newstack) does the actual scan.
// 如果已经有抢占请求, 则抢占成功时会帮我们处理
// Optimization: if there is already a pending preemption request
// (from the previous loop iteration), don't bother with the atomics.
if gp.preemptscan && gp.preempt && gp.stackguard0 == stackPreempt {
break
}
// 抢占G, 抢占成功时G会扫描它自己
// Ask for preemption and self scan.
if castogscanstatus(gp, _Grunning, _Gscanrunning) {
if !gp.gcscandone {
gp.preemptscan = true
gp.preempt = true
gp.stackguard0 = stackPreempt
}
casfrom_Gscanstatus(gp, _Gscanrunning, _Grunning)
}
}
// 第一轮休眠10毫秒, 第二轮休眠5毫秒
if i == 0 {
nextYield = nanotime() + yieldDelay
}
if nanotime() < nextYield {
procyield(10)
} else {
osyield()
nextYield = nanotime() + yieldDelay/2
}
}
// 扫描完成, 取消抢占扫描的请求
gp.preemptscan = false // cancel scan request if no longer needed
}
设置preemptscan后, 在抢占G成功时会调用scanstack扫描它自己的栈, 具体代码在newstack的抢占处理中.
扫描栈用的函数是scanstack:
// scanstack scans gp's stack, greying all pointers found on the stack.
//
// scanstack is marked go:systemstack because it must not be preempted
// while using a workbuf.
//
//go:nowritebarrier
//go:systemstack
func scanstack(gp *g, gcw *gcWork) {
if gp.gcscanvalid {
return
}
if readgstatus(gp)&_Gscan == 0 {
print("runtime:scanstack: gp=", gp, ", goid=", gp.goid, ", gp->atomicstatus=", hex(readgstatus(gp)), "\n")
throw("scanstack - bad status")
}
switch readgstatus(gp) &^ _Gscan {
default:
print("runtime: gp=", gp, ", goid=", gp.goid, ", gp->atomicstatus=", readgstatus(gp), "\n")
throw("mark - bad status")
case _Gdead:
return
case _Grunning:
print("runtime: gp=", gp, ", goid=", gp.goid, ", gp->atomicstatus=", readgstatus(gp), "\n")
throw("scanstack: goroutine not stopped")
case _Grunnable, _Gsyscall, _Gwaiting:
// ok
}
if gp == getg() {
throw("can't scan our own stack")
}
mp := gp.m
if mp != nil && mp.helpgc != 0 {
throw("can't scan gchelper stack")
}
// Shrink the stack if not much of it is being used. During
// concurrent GC, we can do this during concurrent mark.
if !work.markrootDone {
shrinkstack(gp)
}
// Scan the stack.
var cache pcvalueCache
scanframe := func(frame *stkframe, unused unsafe.Pointer) bool {
// scanframeworker会根据代码地址(pc)获取函数信息
// 然后找到函数信息中的stackmap.bytedata, 它保存了函数的栈上哪些地方有指针
// 再调用scanblock来扫描函数的栈空间, 同时函数的参数也会这样扫描
scanframeworker(frame, &cache, gcw)
return true
}
// 枚举所有调用帧, 分别调用scanframe函数
gentraceback(^uintptr(0), ^uintptr(0), 0, gp, 0, nil, 0x7fffffff, scanframe, nil, 0)
// 枚举所有defer的调用帧, 分别调用scanframe函数
tracebackdefers(gp, scanframe, nil)
gp.gcscanvalid = true
}
scanblock函数是一个通用的扫描函数, 扫描全局变量和栈空间都会用它, 和scanobject不同的是bitmap需要手动传入:
// scanblock scans b as scanobject would, but using an explicit
// pointer bitmap instead of the heap bitmap.
//
// This is used to scan non-heap roots, so it does not update
// gcw.bytesMarked or gcw.scanWork.
//
//go:nowritebarrier
func scanblock(b0, n0 uintptr, ptrmask *uint8, gcw *gcWork) {
// Use local copies of original parameters, so that a stack trace
// due to one of the throws below shows the original block
// base and extent.
b := b0
n := n0
arena_start := mheap_.arena_start
arena_used := mheap_.arena_used
// 枚举扫描的地址
for i := uintptr(0); i < n; {
// 找到bitmap中对应的byte
// Find bits for the next word.
bits := uint32(*addb(ptrmask, i/(sys.PtrSize*8)))
if bits == 0 {
i += sys.PtrSize * 8
continue
}
// 枚举byte
for j := 0; j < 8 && i < n; j++ {
// 如果该地址包含指针
if bits&1 != 0 {
// 标记在该地址的对象存活, 并把它加到标记队列(该对象变为灰色)
// Same work as in scanobject; see comments there.
obj := *(*uintptr)(unsafe.Pointer(b + i))
if obj != 0 && arena_start <= obj && obj < arena_used {
// 找到该对象对应的span和bitmap
if obj, hbits, span, objIndex := heapBitsForObject(obj, b, i); obj != 0 {
// 标记一个对象存活, 并把它加到标记队列(该对象变为灰色)
greyobject(obj, b, i, hbits, span, gcw, objIndex)
}
}
}
// 处理下一个指针下一个bit
bits >>= 1
i += sys.PtrSize
}
}
}
greyobject用于标记一个对象存活, 并把它加到标记队列(该对象变为灰色):
// obj is the start of an object with mark mbits.
// If it isn't already marked, mark it and enqueue into gcw.
// base and off are for debugging only and could be removed.
//go:nowritebarrierrec
func greyobject(obj, base, off uintptr, hbits heapBits, span *mspan, gcw *gcWork, objIndex uintptr) {
// obj should be start of allocation, and so must be at least pointer-aligned.
if obj&(sys.PtrSize-1) != 0 {
throw("greyobject: obj not pointer-aligned")
}
mbits := span.markBitsForIndex(objIndex)
if useCheckmark {
// checkmark是用于检查是否所有可到达的对象都被正确标记的机制, 仅除错使用
if !mbits.isMarked() {
printlock()
print("runtime:greyobject: checkmarks finds unexpected unmarked object obj=", hex(obj), "\n")
print("runtime: found obj at *(", hex(base), "+", hex(off), ")\n")
// Dump the source (base) object
gcDumpObject("base", base, off)
// Dump the object
gcDumpObject("obj", obj, ^uintptr(0))
getg().m.traceback = 2
throw("checkmark found unmarked object")
}
if hbits.isCheckmarked(span.elemsize) {
return
}
hbits.setCheckmarked(span.elemsize)
if !hbits.isCheckmarked(span.elemsize) {
throw("setCheckmarked and isCheckmarked disagree")
}
} else {
if debug.gccheckmark > 0 && span.isFree(objIndex) {
print("runtime: marking free object ", hex(obj), " found at *(", hex(base), "+", hex(off), ")\n")
gcDumpObject("base", base, off)
gcDumpObject("obj", obj, ^uintptr(0))
getg().m.traceback = 2
throw("marking free object")
}
// 如果对象所在的span中的gcmarkBits对应的bit已经设置为1则可以跳过处理
// If marked we have nothing to do.
if mbits.isMarked() {
return
}
// 设置对象所在的span中的gcmarkBits对应的bit为1
// mbits.setMarked() // Avoid extra call overhead with manual inlining.
atomic.Or8(mbits.bytep, mbits.mask)
// 如果确定对象不包含指针(所在span的类型是noscan), 则不需要把对象放入标记队列
// If this is a noscan object, fast-track it to black
// instead of greying it.
if span.spanclass.noscan() {
gcw.bytesMarked += uint64(span.elemsize)
return
}
}
// 把对象放入标记队列
// 先放入本地标记队列, 失败时把本地标记队列中的部分工作转移到全局标记队列, 再放入本地标记队列
// Queue the obj for scanning. The PREFETCH(obj) logic has been removed but
// seems like a nice optimization that can be added back in.
// There needs to be time between the PREFETCH and the use.
// Previously we put the obj in an 8 element buffer that is drained at a rate
// to give the PREFETCH time to do its work.
// Use of PREFETCHNTA might be more appropriate than PREFETCH
if !gcw.putFast(obj) {
gcw.put(obj)
}
}
gcDrain函数扫描完根对象, 就会开始消费标记队列, 对从标记队列中取出的对象调用scanobject函数:
// scanobject scans the object starting at b, adding pointers to gcw.
// b must point to the beginning of a heap object or an oblet.
// scanobject consults the GC bitmap for the pointer mask and the
// spans for the size of the object.
//
//go:nowritebarrier
func scanobject(b uintptr, gcw *gcWork) {
// Note that arena_used may change concurrently during
// scanobject and hence scanobject may encounter a pointer to
// a newly allocated heap object that is *not* in
// [start,used). It will not mark this object; however, we
// know that it was just installed by a mutator, which means
// that mutator will execute a write barrier and take care of
// marking it. This is even more pronounced on relaxed memory
// architectures since we access arena_used without barriers
// or synchronization, but the same logic applies.
arena_start := mheap_.arena_start
arena_used := mheap_.arena_used
// Find the bits for b and the size of the object at b.
//
// b is either the beginning of an object, in which case this
// is the size of the object to scan, or it points to an
// oblet, in which case we compute the size to scan below.
// 获取对象对应的bitmap
hbits := heapBitsForAddr(b)
// 获取对象所在的span
s := spanOfUnchecked(b)
// 获取对象的大小
n := s.elemsize
if n == 0 {
throw("scanobject n == 0")
}
// 对象大小过大时(maxObletBytes是128KB)需要分割扫描
// 每次最多只扫描128KB
if n > maxObletBytes {
// Large object. Break into oblets for better
// parallelism and lower latency.
if b == s.base() {
// It's possible this is a noscan object (not
// from greyobject, but from other code
// paths), in which case we must *not* enqueue
// oblets since their bitmaps will be
// uninitialized.
if s.spanclass.noscan() {
// Bypass the whole scan.
gcw.bytesMarked += uint64(n)
return
}
// Enqueue the other oblets to scan later.
// Some oblets may be in b's scalar tail, but
// these will be marked as "no more pointers",
// so we'll drop out immediately when we go to
// scan those.
for oblet := b + maxObletBytes; oblet < s.base()+s.elemsize; oblet += maxObletBytes {
if !gcw.putFast(oblet) {
gcw.put(oblet)
}
}
}
// Compute the size of the oblet. Since this object
// must be a large object, s.base() is the beginning
// of the object.
n = s.base() + s.elemsize - b
if n > maxObletBytes {
n = maxObletBytes
}
}
// 扫描对象中的指针
var i uintptr
for i = 0; i < n; i += sys.PtrSize {
// 获取对应的bit
// Find bits for this word.
if i != 0 {
// Avoid needless hbits.next() on last iteration.
hbits = hbits.next()
}
// Load bits once. See CL 22712 and issue 16973 for discussion.
bits := hbits.bits()
// 检查scan bit判断是否继续扫描, 注意第二个scan bit是checkmark
// During checkmarking, 1-word objects store the checkmark
// in the type bit for the one word. The only one-word objects
// are pointers, or else they'd be merged with other non-pointer
// data into larger allocations.
if i != 1*sys.PtrSize && bits&bitScan == 0 {
break // no more pointers in this object
}
// 检查pointer bit, 不是指针则继续
if bits&bitPointer == 0 {
continue // not a pointer
}
// 取出指针的值
// Work here is duplicated in scanblock and above.
// If you make changes here, make changes there too.
obj := *(*uintptr)(unsafe.Pointer(b + i))
// 如果指针在arena区域中, 则调用greyobject标记对象并把对象放到标记队列中
// At this point we have extracted the next potential pointer.
// Check if it points into heap and not back at the current object.
if obj != 0 && arena_start <= obj && obj < arena_used && obj-b >= n {
// Mark the object.
if obj, hbits, span, objIndex := heapBitsForObject(obj, b, i); obj != 0 {
greyobject(obj, b, i, hbits, span, gcw, objIndex)
}
}
}
// 统计扫描过的大小和对象数量
gcw.bytesMarked += uint64(n)
gcw.scanWork += int64(i)
}
在所有后台标记任务都把标记队列消费完毕时, 会执行gcMarkDone函数准备进入完成标记阶段(mark termination):
在并行GC中gcMarkDone会被执行两次, 第一次会禁止本地标记队列然后重新开始后台标记任务, 第二次会进入完成标记阶段(mark termination)。
// gcMarkDone transitions the GC from mark 1 to mark 2 and from mark 2
// to mark termination.
//
// This should be called when all mark work has been drained. In mark
// 1, this includes all root marking jobs, global work buffers, and
// active work buffers in assists and background workers; however,
// work may still be cached in per-P work buffers. In mark 2, per-P
// caches are disabled.
//
// The calling context must be preemptible.
//
// Note that it is explicitly okay to have write barriers in this
// function because completion of concurrent mark is best-effort
// anyway. Any work created by write barriers here will be cleaned up
// by mark termination.
func gcMarkDone() {
top:
semacquire(&work.markDoneSema)
// Re-check transition condition under transition lock.
if !(gcphase == _GCmark && work.nwait == work.nproc && !gcMarkWorkAvailable(nil)) {
semrelease(&work.markDoneSema)
return
}
// 暂时禁止启动新的后台标记任务
// Disallow starting new workers so that any remaining workers
// in the current mark phase will drain out.
//
// TODO(austin): Should dedicated workers keep an eye on this
// and exit gcDrain promptly?
atomic.Xaddint64(&gcController.dedicatedMarkWorkersNeeded, -0xffffffff)
atomic.Xaddint64(&gcController.fractionalMarkWorkersNeeded, -0xffffffff)
// 判断本地标记队列是否已禁用
if !gcBlackenPromptly {
// 本地标记队列是否未禁用, 禁用然后重新开始后台标记任务
// Transition from mark 1 to mark 2.
//
// The global work list is empty, but there can still be work
// sitting in the per-P work caches.
// Flush and disable work caches.
// 禁用本地标记队列
// Disallow caching workbufs and indicate that we're in mark 2.
gcBlackenPromptly = true
// Prevent completion of mark 2 until we've flushed
// cached workbufs.
atomic.Xadd(&work.nwait, -1)
// GC is set up for mark 2. Let Gs blocked on the
// transition lock go while we flush caches.
semrelease(&work.markDoneSema)
// 把所有本地标记队列中的对象都推到全局标记队列
systemstack(func() {
// Flush all currently cached workbufs and
// ensure all Ps see gcBlackenPromptly. This
// also blocks until any remaining mark 1
// workers have exited their loop so we can
// start new mark 2 workers.
forEachP(func(_p_ *p) {
_p_.gcw.dispose()
})
})
// 除错用
// Check that roots are marked. We should be able to
// do this before the forEachP, but based on issue
// #16083 there may be a (harmless) race where we can
// enter mark 2 while some workers are still scanning
// stacks. The forEachP ensures these scans are done.
//
// TODO(austin): Figure out the race and fix this
// properly.
gcMarkRootCheck()
// 允许启动新的后台标记任务
// Now we can start up mark 2 workers.
atomic.Xaddint64(&gcController.dedicatedMarkWorkersNeeded, 0xffffffff)
atomic.Xaddint64(&gcController.fractionalMarkWorkersNeeded, 0xffffffff)
// 如果确定没有更多的任务则可以直接跳到函数顶部
// 这样就当作是第二次调用了
incnwait := atomic.Xadd(&work.nwait, +1)
if incnwait == work.nproc && !gcMarkWorkAvailable(nil) {
// This loop will make progress because
// gcBlackenPromptly is now true, so it won't
// take this same "if" branch.
goto top
}
} else {
// 记录完成标记阶段开始的时间和STW开始的时间
// Transition to mark termination.
now := nanotime()
work.tMarkTerm = now
work.pauseStart = now
// 禁止G被抢占
getg().m.preemptoff = "gcing"
// 停止所有运行中的G, 并禁止它们运行
systemstack(stopTheWorldWithSema)
// !!!!!!!!!!!!!!!!
// 世界已停止(STW)...
// !!!!!!!!!!!!!!!!
// The gcphase is _GCmark, it will transition to _GCmarktermination
// below. The important thing is that the wb remains active until
// all marking is complete. This includes writes made by the GC.
// 标记对根对象的扫描已完成, 会影响gcMarkRootPrepare中的处理
// Record that one root marking pass has completed.
work.markrootDone = true
// 禁止辅助GC和后台标记任务的运行
// Disable assists and background workers. We must do
// this before waking blocked assists.
atomic.Store(&gcBlackenEnabled, 0)
// 唤醒所有因为辅助GC而休眠的G
// Wake all blocked assists. These will run when we
// start the world again.
gcWakeAllAssists()
// Likewise, release the transition lock. Blocked
// workers and assists will run when we start the
// world again.
semrelease(&work.markDoneSema)
// 计算下一次触发gc需要的heap大小
// endCycle depends on all gcWork cache stats being
// flushed. This is ensured by mark 2.
nextTriggerRatio := gcController.endCycle()
// 进入完成标记阶段, 会重新启动世界
// Perform mark termination. This will restart the world.
gcMarkTermination(nextTriggerRatio)
}
}
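The mark 1 to mark 2 transition above hinges on flushing every P's local work cache into the global mark queue (the forEachP / _p_.gcw.dispose() calls). The toy sketch below is my own illustration of that "local buffer, flushed to a shared list" pattern; localCache and globalWork are invented names and have nothing to do with the real gcWork type:
package main

import (
    "fmt"
    "sync"
)

// globalWork plays the role of the global mark queue.
var (
    mu         sync.Mutex
    globalWork []int
)

// localCache plays the role of a per-P work cache.
type localCache struct{ buf []int }

func (c *localCache) put(x int) { c.buf = append(c.buf, x) }

// dispose flushes the local cache to the shared list, like p.gcw.dispose().
func (c *localCache) dispose() {
    mu.Lock()
    globalWork = append(globalWork, c.buf...)
    mu.Unlock()
    c.buf = nil
}

func main() {
    caches := []*localCache{{}, {}, {}}
    for i, c := range caches {
        c.put(i * 10)
        c.put(i*10 + 1)
    }
    // The "forEachP" step: flush every local cache so the shared list is complete.
    for _, c := range caches {
        c.dispose()
    }
    fmt.Println(len(globalWork), "items visible globally") // 6 items visible globally
}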
The gcMarkTermination function performs the mark termination phase:
func gcMarkTermination(nextTriggerRatio float64) {
// World is stopped.
// Start marktermination which includes enabling the write barrier.
// 禁止辅助GC和后台标记任务的运行
atomic.Store(&gcBlackenEnabled, 0)
// 重新允许本地标记队列(下次GC使用)
gcBlackenPromptly = false
// 设置当前GC阶段到完成标记阶段, 并启用写屏障
setGCPhase(_GCmarktermination)
// 记录开始时间
work.heap1 = memstats.heap_live
startTime := nanotime()
// 禁止G被抢占
mp := acquirem()
mp.preemptoff = "gcing"
_g_ := getg()
_g_.m.traceback = 2
// 设置G的状态为等待中这样它的栈可以被扫描
gp := _g_.m.curg
casgstatus(gp, _Grunning, _Gwaiting)
gp.waitreason = "garbage collection"
// 切换到g0运行
// Run gc on the g0 stack. We do this so that the g stack
// we're currently running on will no longer change. Cuts
// the root set down a bit (g0 stacks are not scanned, and
// we don't need to scan gc's internal state). We also
// need to switch to g0 so we can shrink the stack.
systemstack(func() {
// 开始STW中的标记
gcMark(startTime)
// 必须立刻返回, 因为外面的G的栈有可能被移动, 不能在这之后访问外面的变量
// Must return immediately.
// The outer function's stack may have moved
// during gcMark (it shrinks stacks, including the
// outer function's stack), so we must not refer
// to any of its variables. Return back to the
// non-system stack to pick up the new addresses
// before continuing.
})
// 重新切换到g0运行
systemstack(func() {
work.heap2 = work.bytesMarked
// 如果启用了checkmark则执行检查, 检查是否所有可到达的对象都有标记
if debug.gccheckmark > 0 {
// Run a full stop-the-world mark using checkmark bits,
// to check that we didn't forget to mark anything during
// the concurrent mark process.
gcResetMarkState()
initCheckmarks()
gcMark(startTime)
clearCheckmarks()
}
// 设置当前GC阶段到关闭, 并禁用写屏障
// marking is complete so we can turn the write barrier off
setGCPhase(_GCoff)
// 唤醒后台清扫任务, 将在STW结束后开始运行
gcSweep(work.mode)
// 除错用
if debug.gctrace > 1 {
startTime = nanotime()
// The g stacks have been scanned so
// they have gcscanvalid==true and gcworkdone==true.
// Reset these so that all stacks will be rescanned.
gcResetMarkState()
finishsweep_m()
// Still in STW but gcphase is _GCoff, reset to _GCmarktermination
// At this point all objects will be found during the gcMark which
// does a complete STW mark and object scan.
setGCPhase(_GCmarktermination)
gcMark(startTime)
setGCPhase(_GCoff) // marking is done, turn off wb.
gcSweep(work.mode)
}
})
// 设置G的状态为运行中
_g_.m.traceback = 0
casgstatus(gp, _Gwaiting, _Grunning)
// 跟踪处理
if trace.enabled {
traceGCDone()
}
// all done
mp.preemptoff = ""
if gcphase != _GCoff {
throw("gc done but gcphase != _GCoff")
}
// 更新下一次触发gc需要的heap大小(gc_trigger)
// Update GC trigger and pacing for the next cycle.
gcSetTriggerRatio(nextTriggerRatio)
// 更新用时记录
// Update timing memstats
now := nanotime()
sec, nsec, _ := time_now()
unixNow := sec*1e9 + int64(nsec)
work.pauseNS += now - work.pauseStart
work.tEnd = now
atomic.Store64(&memstats.last_gc_unix, uint64(unixNow)) // must be Unix time to make sense to user
atomic.Store64(&memstats.last_gc_nanotime, uint64(now)) // monotonic time for us
memstats.pause_ns[memstats.numgc%uint32(len(memstats.pause_ns))] = uint64(work.pauseNS)
memstats.pause_end[memstats.numgc%uint32(len(memstats.pause_end))] = uint64(unixNow)
memstats.pause_total_ns += uint64(work.pauseNS)
// 更新所用cpu记录
// Update work.totaltime.
sweepTermCpu := int64(work.stwprocs) * (work.tMark - work.tSweepTerm)
// We report idle marking time below, but omit it from the
// overall utilization here since it's "free".
markCpu := gcController.assistTime + gcController.dedicatedMarkTime + gcController.fractionalMarkTime
markTermCpu := int64(work.stwprocs) * (work.tEnd - work.tMarkTerm)
cycleCpu := sweepTermCpu + markCpu + markTermCpu
work.totaltime += cycleCpu
// Compute overall GC CPU utilization.
totalCpu := sched.totaltime + (now-sched.procresizetime)*int64(gomaxprocs)
memstats.gc_cpu_fraction = float64(work.totaltime) / float64(totalCpu)
// 重置清扫状态
// Reset sweep state.
sweep.nbgsweep = 0
sweep.npausesweep = 0
// 统计强制开始GC的次数
if work.userForced {
memstats.numforcedgc++
}
// 统计执行GC的次数然后唤醒等待清扫的G
// Bump GC cycle count and wake goroutines waiting on sweep.
lock(&work.sweepWaiters.lock)
memstats.numgc++
injectglist(work.sweepWaiters.head.ptr())
work.sweepWaiters.head = 0
unlock(&work.sweepWaiters.lock)
// 性能统计用
// Finish the current heap profiling cycle and start a new
// heap profiling cycle. We do this before starting the world
// so events don't leak into the wrong cycle.
mProf_NextCycle()
// 重新启动世界
systemstack(startTheWorldWithSema)
// !!!!!!!!!!!!!!!
// 世界已重新启动...
// !!!!!!!!!!!!!!!
// 性能统计用
// Flush the heap profile so we can start a new cycle next GC.
// This is relatively expensive, so we don't do it with the
// world stopped.
mProf_Flush()
// 移动标记队列使用的缓冲区到自由列表, 使得它们可以被回收
// Prepare workbufs for freeing by the sweeper. We do this
// asynchronously because it can take non-trivial time.
prepareFreeWorkbufs()
// 释放未使用的栈
// Free stack spans. This must be done between GC cycles.
systemstack(freeStackSpans)
// 除错用
// Print gctrace before dropping worldsema. As soon as we drop
// worldsema another cycle could start and smash the stats
// we're trying to print.
if debug.gctrace > 0 {
util := int(memstats.gc_cpu_fraction * 100)
var sbuf [24]byte
printlock()
print("gc ", memstats.numgc,
" @", string(itoaDiv(sbuf[:], uint64(work.tSweepTerm-runtimeInitTime)/1e6, 3)), "s ",
util, "%: ")
prev := work.tSweepTerm
for i, ns := range []int64{work.tMark, work.tMarkTerm, work.tEnd} {
if i != 0 {
print("+")
}
print(string(fmtNSAsMS(sbuf[:], uint64(ns-prev))))
prev = ns
}
print(" ms clock, ")
for i, ns := range []int64{sweepTermCpu, gcController.assistTime, gcController.dedicatedMarkTime + gcController.fractionalMarkTime, gcController.idleMarkTime, markTermCpu} {
if i == 2 || i == 3 {
// Separate mark time components with /.
print("/")
} else if i != 0 {
print("+")
}
print(string(fmtNSAsMS(sbuf[:], uint64(ns))))
}
print(" ms cpu, ",
work.heap0>>20, "->", work.heap1>>20, "->", work.heap2>>20, " MB, ",
work.heapGoal>>20, " MB goal, ",
work.maxprocs, " P")
if work.userForced {
print(" (forced)")
}
print("\n")
printunlock()
}
semrelease(&worldsema)
// Careful: another GC cycle may start now.
// 重新允许当前的G被抢占
releasem(mp)
mp = nil
// 如果是并行GC, 让当前M继续运行(会回到gcBgMarkWorker然后休眠)
// 如果不是并行GC, 则让当前M开始调度
// now that gc is done, kick off finalizer thread if needed
if !concurrentSweep {
// give the queued finalizers, if any, a chance to run
Gosched()
}
}
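The timing statistics that gcMarkTermination records (pause_ns, numgc, gc_cpu_fraction, and the gctrace line printed at the end) can be observed from ordinary user code. A small example; run it with GODEBUG=gctrace=1 to also see the trace output, and note that the sample line in the comment is only an illustration, real numbers vary from run to run:
package main

import (
    "fmt"
    "runtime"
)

func main() {
    // Allocate enough to trigger a few GC cycles.
    var sink [][]byte
    for i := 0; i < 16; i++ {
        sink = append(sink, make([]byte, 1<<20))
    }
    _ = sink
    runtime.GC() // force one more cycle so the stats below are fresh

    // With GODEBUG=gctrace=1 the print code at the end of gcMarkTermination emits lines like:
    //   gc 3 @0.021s 2%: 0.018+1.2+0.067 ms clock, 0.14+0.35/1.0/2.1+0.53 ms cpu, 4->4->1 MB, 5 MB goal, 8 P
    // clock = sweep termination + concurrent mark + mark termination,
    // cpu   = sweep termination + assist/background/idle mark + mark termination.
    var ms runtime.MemStats
    runtime.ReadMemStats(&ms)
    fmt.Println("completed GC cycles:", ms.NumGC)         // memstats.numgc
    fmt.Println("total STW pause (ns):", ms.PauseTotalNs) // memstats.pause_total_ns
    fmt.Println("GC CPU fraction:", ms.GCCPUFraction)     // memstats.gc_cpu_fraction
}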
The gcSweep function wakes up the background sweeper.
The background sweeper goroutine is started by the gcenable function, which is called during program startup.
func gcSweep(mode gcMode) {
if gcphase != _GCoff {
throw("gcSweep being done but phase is not GCoff")
}
// 增加sweepgen, 这样sweepSpans中两个队列角色会交换, 所有span都会变为"待清扫"的span
lock(&mheap_.lock)
mheap_.sweepgen += 2
mheap_.sweepdone = 0
if mheap_.sweepSpans[mheap_.sweepgen/2%2].index != 0 {
// We should have drained this list during the last
// sweep phase. We certainly need to start this phase
// with an empty swept list.
throw("non-empty swept list")
}
mheap_.pagesSwept = 0
unlock(&mheap_.lock)
// 如果非并行GC则在这里完成所有工作(STW中)
if !_ConcurrentSweep || mode == gcForceBlockMode {
// Special case synchronous sweep.
// Record that no proportional sweeping has to happen.
lock(&mheap_.lock)
mheap_.sweepPagesPerByte = 0
unlock(&mheap_.lock)
// Sweep all spans eagerly.
for sweepone() != ^uintptr(0) {
sweep.npausesweep++
}
// Free workbufs eagerly.
prepareFreeWorkbufs()
for freeSomeWbufs(false) {
}
// All "free" events for this mark/sweep cycle have
// now happened, so we can make this profile cycle
// available immediately.
mProf_NextCycle()
mProf_Flush()
return
}
// 唤醒后台清扫任务
// Background sweep.
lock(&sweep.lock)
if sweep.parked {
sweep.parked = false
ready(sweep.g, 0, true)
}
unlock(&sweep.lock)
}
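The sweepgen += 2 at the top of gcSweep is what swaps the roles of the two sweepSpans buffers: the "swept, in-use" buffer is indexed by sweepgen/2%2 and the "unswept" buffer by 1-sweepgen/2%2. A minimal sketch of that index arithmetic (not runtime code, just the expressions from the source):
package main

import "fmt"

func main() {
    var sweepgen uint32 = 4 // sweepgen is always even between GCs
    fmt.Println("swept index:", sweepgen/2%2, "unswept index:", 1-sweepgen/2%2) // 0 and 1

    sweepgen += 2 // what gcSweep does: mheap_.sweepgen += 2
    fmt.Println("swept index:", sweepgen/2%2, "unswept index:", 1-sweepgen/2%2) // 1 and 0: roles swapped
}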
The background sweeper runs the bgsweep function:
func bgsweep(c chan int) {
sweep.g = getg()
// 等待唤醒
lock(&sweep.lock)
sweep.parked = true
c <- 1
goparkunlock(&sweep.lock, "GC sweep wait", traceEvGoBlock, 1)
// 循环清扫
for {
// 清扫一个span, 然后进入调度(一次只做少量工作)
for gosweepone() != ^uintptr(0) {
sweep.nbgsweep++
Gosched()
}
// 释放一些未使用的标记队列缓冲区到heap
for freeSomeWbufs(true) {
Gosched()
}
// 如果清扫未完成则继续循环
lock(&sweep.lock)
if !gosweepdone() {
// This can happen if a GC runs between
// gosweepone returning ^0 above
// and the lock being acquired.
unlock(&sweep.lock)
continue
}
// 否则让后台清扫任务进入休眠, 当前M继续调度
sweep.parked = true
goparkunlock(&sweep.lock, "GC sweep wait", traceEvGoBlock, 1)
}
}
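bgsweep deliberately does only one small unit of work between calls to Gosched and parks itself when nothing is left, so sweeping never monopolizes a P. Below is a stripped-down sketch of that pattern (my own illustration: sweepOne is an invented stand-in for gosweepone, and the real sweeper parks on sweep.lock via goparkunlock instead of exiting):
package main

import (
    "fmt"
    "runtime"
)

// sweepOne does one small unit of work and reports whether there was any.
func sweepOne(work *[]int) bool {
    if len(*work) == 0 {
        return false
    }
    *work = (*work)[1:]
    return true
}

func main() {
    work := make([]int, 5)
    done := make(chan struct{})

    go func() {
        // Like bgsweep: one unit of work, then yield the P.
        for sweepOne(&work) {
            runtime.Gosched()
        }
        close(done) // the real sweeper would park and wait for the next GC instead
    }()

    <-done
    fmt.Println("sweeping finished, remaining:", len(work))
}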
The gosweepone function takes a single span from sweepSpans and sweeps it:
//go:nowritebarrier
func gosweepone() uintptr {
var ret uintptr
// 切换到g0运行
systemstack(func() {
ret = sweepone()
})
return ret
}
The sweepone function is as follows:
// sweeps one span
// returns number of pages returned to heap, or ^uintptr(0) if there is nothing to sweep
//go:nowritebarrier
func sweepone() uintptr {
_g_ := getg()
sweepRatio := mheap_.sweepPagesPerByte // For debugging
// 禁止G被抢占
// increment locks to ensure that the goroutine is not preempted
// in the middle of sweep thus leaving the span in an inconsistent state for next GC
_g_.m.locks++
// 检查是否已完成清扫
if atomic.Load(&mheap_.sweepdone) != 0 {
_g_.m.locks--
return ^uintptr(0)
}
// 更新同时执行sweep的任务数量
atomic.Xadd(&mheap_.sweepers, +1)
npages := ^uintptr(0)
sg := mheap_.sweepgen
for {
// 从sweepSpans中取出一个span
s := mheap_.sweepSpans[1-sg/2%2].pop()
// 全部清扫完毕时跳出循环
if s == nil {
atomic.Store(&mheap_.sweepdone, 1)
break
}
// 其他M已经在清扫这个span时跳过
if s.state != mSpanInUse {
// This can happen if direct sweeping already
// swept this span, but in that case the sweep
// generation should always be up-to-date.
if s.sweepgen != sg {
print("runtime: bad span s.state=", s.state, " s.sweepgen=", s.sweepgen, " sweepgen=", sg, "\n")
throw("non in-use span in unswept list")
}
continue
}
// 原子增加span的sweepgen, 失败表示其他M已经开始清扫这个span, 跳过
if s.sweepgen != sg-2 || !atomic.Cas(&s.sweepgen, sg-2, sg-1) {
continue
}
// 清扫这个span, 然后跳出循环
npages = s.npages
if !s.sweep(false) {
// Span is still in-use, so this returned no
// pages to the heap and the span needs to
// move to the swept in-use list.
npages = 0
}
break
}
// 更新同时执行sweep的任务数量
// Decrement the number of active sweepers and if this is the
// last one print trace information.
if atomic.Xadd(&mheap_.sweepers, -1) == 0 && atomic.Load(&mheap_.sweepdone) != 0 {
if debug.gcpacertrace > 0 {
print("pacer: sweep done at heap size ", memstats.heap_live>>20, "MB; allocated ", (memstats.heap_live-mheap_.sweepHeapLiveBasis)>>20, "MB during sweep; swept ", mheap_.pagesSwept, " pages at ", sweepRatio, " pages/byte\n")
}
}
// 允许G被抢占
_g_.m.locks--
// 返回清扫的页数
return npages
}
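sweepone relies on the per-span sweepgen state machine: comparing a span's sweepgen with mheap_.sweepgen tells whether the span still needs sweeping, is currently being swept, or is already swept (these three states come from the comments in runtime/mheap.go). A small sketch, with spanState as an invented helper name:
package main

import "fmt"

func spanState(spanSweepgen, heapSweepgen uint32) string {
    switch spanSweepgen {
    case heapSweepgen - 2:
        return "needs sweeping" // candidate for sweepone or direct sweeping
    case heapSweepgen - 1:
        return "being swept" // someone won the CAS from sg-2 to sg-1
    case heapSweepgen:
        return "swept, ready for allocation"
    default:
        return "invalid"
    }
}

func main() {
    const hg = 6 // stand-in for mheap_.sweepgen (always even between GCs)
    fmt.Println(spanState(hg-2, hg))
    fmt.Println(spanState(hg-1, hg))
    fmt.Println(spanState(hg, hg))
}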
The mspan.sweep method is used to sweep a single span:
// Sweep frees or collects finalizers for blocks not marked in the mark phase.
// It clears the mark bits in preparation for the next GC round.
// Returns true if the span was returned to heap.
// If preserve=true, don't return it to heap nor relink in MCentral lists;
// caller takes care of it.
//TODO go:nowritebarrier
func (s *mspan) sweep(preserve bool) bool {
// It's critical that we enter this function with preemption disabled,
// GC must not start while we are in the middle of this function.
_g_ := getg()
if _g_.m.locks == 0 && _g_.m.mallocing == 0 && _g_ != _g_.m.g0 {
throw("MSpan_Sweep: m is not locked")
}
sweepgen := mheap_.sweepgen
if s.state != mSpanInUse || s.sweepgen != sweepgen-1 {
print("MSpan_Sweep: state=", s.state, " sweepgen=", s.sweepgen, " mheap.sweepgen=", sweepgen, "\n")
throw("MSpan_Sweep: bad span state")
}
if trace.enabled {
traceGCSweepSpan(s.npages * _PageSize)
}
// 统计已清理的页数
atomic.Xadd64(&mheap_.pagesSwept, int64(s.npages))
spc := s.spanclass
size := s.elemsize
res := false
c := _g_.m.mcache
freeToHeap := false
// The allocBits indicate which unmarked objects don't need to be
// processed since they were free at the end of the last GC cycle
// and were not allocated since then.
// If the allocBits index is >= s.freeindex and the bit
// is not marked then the object remains unallocated
// since the last GC.
// This situation is analogous to being on a freelist.
// 判断在special中的析构器, 如果对应的对象已经不再存活则标记对象存活防止回收, 然后把析构器移到运行队列
// Unlink & free special records for any objects we're about to free.
// Two complications here:
// 1. An object can have both finalizer and profile special records.
// In such case we need to queue finalizer for execution,
// mark the object as live and preserve the profile special.
// 2. A tiny object can have several finalizers setup for different offsets.
// If such object is not marked, we need to queue all finalizers at once.
// Both 1 and 2 are possible at the same time.
specialp := &s.specials
special := *specialp
for special != nil {
// A finalizer can be set for an inner byte of an object, find object beginning.
objIndex := uintptr(special.offset) / size
p := s.base() + objIndex*size
mbits := s.markBitsForIndex(objIndex)
if !mbits.isMarked() {
// This object is not marked and has at least one special record.
// Pass 1: see if it has at least one finalizer.
hasFin := false
endOffset := p - s.base() + size
for tmp := special; tmp != nil && uintptr(tmp.offset) < endOffset; tmp = tmp.next {
if tmp.kind == _KindSpecialFinalizer {
// Stop freeing of object if it has a finalizer.
mbits.setMarkedNonAtomic()
hasFin = true
break
}
}
// Pass 2: queue all finalizers _or_ handle profile record.
for special != nil && uintptr(special.offset) < endOffset {
// Find the exact byte for which the special was setup
// (as opposed to object beginning).
p := s.base() + uintptr(special.offset)
if special.kind == _KindSpecialFinalizer || !hasFin {
// Splice out special record.
y := special
special = special.next
*specialp = special
freespecial(y, unsafe.Pointer(p), size)
} else {
// This is profile record, but the object has finalizers (so kept alive).
// Keep special record.
specialp = &special.next
special = *specialp
}
}
} else {
// object is still live: keep special record
specialp = &special.next
special = *specialp
}
}
// 除错用
if debug.allocfreetrace != 0 || raceenabled || msanenabled {
// Find all newly freed objects. This doesn't have to
// be efficient; allocfreetrace has massive overhead.
mbits := s.markBitsForBase()
abits := s.allocBitsForIndex(0)
for i := uintptr(0); i < s.nelems; i++ {
if !mbits.isMarked() && (abits.index < s.freeindex || abits.isMarked()) {
x := s.base() + i*s.elemsize
if debug.allocfreetrace != 0 {
tracefree(unsafe.Pointer(x), size)
}
if raceenabled {
racefree(unsafe.Pointer(x), size)
}
if msanenabled {
msanfree(unsafe.Pointer(x), size)
}
}
mbits.advance()
abits.advance()
}
}
// 计算释放的对象数量
// Count the number of free objects in this span.
nalloc := uint16(s.countAlloc())
if spc.sizeclass() == 0 && nalloc == 0 {
// 如果span的类型是0(大对象)并且其中的对象已经不存活则释放到heap
s.needzero = 1
freeToHeap = true
}
nfreed := s.allocCount - nalloc
if nalloc > s.allocCount {
print("runtime: nelems=", s.nelems, " nalloc=", nalloc, " previous allocCount=", s.allocCount, " nfreed=", nfreed, "\n")
throw("sweep increased allocation count")
}
// 设置新的allocCount
s.allocCount = nalloc
// 判断span是否无未分配的对象
wasempty := s.nextFreeIndex() == s.nelems
// 重置freeindex, 下次分配从0开始搜索
s.freeindex = 0 // reset allocation index to start of span.
if trace.enabled {
getg().m.p.ptr().traceReclaimed += uintptr(nfreed) * s.elemsize
}
// gcmarkBits变为新的allocBits
// 然后重新分配一块全部为0的gcmarkBits
// 下次分配对象时可以根据allocBits得知哪些元素是未分配的
// gcmarkBits becomes the allocBits.
// get a fresh cleared gcmarkBits in preparation for next GC
s.allocBits = s.gcmarkBits
s.gcmarkBits = newMarkBits(s.nelems)
// 更新freeindex开始的allocCache
// Initialize alloc bits cache.
s.refillAllocCache(0)
// 如果span中已经无存活的对象则更新sweepgen到最新
// 下面会把span加到mcentral或者mheap
// We need to set s.sweepgen = h.sweepgen only when all blocks are swept,
// because of the potential for a concurrent free/SetFinalizer.
// But we need to set it before we make the span available for allocation
// (return it to heap or mcentral), because allocation code assumes that a
// span is already swept if available for allocation.
if freeToHeap || nfreed == 0 {
// The span must be in our exclusive ownership until we update sweepgen,
// check for potential races.
if s.state != mSpanInUse || s.sweepgen != sweepgen-1 {
print("MSpan_Sweep: state=", s.state, " sweepgen=", s.sweepgen, " mheap.sweepgen=", sweepgen, "\n")
throw("MSpan_Sweep: bad span state after sweep")
}
// Serialization point.
// At this point the mark bits are cleared and allocation ready
// to go so release the span.
atomic.Store(&s.sweepgen, sweepgen)
}
if nfreed > 0 && spc.sizeclass() != 0 {
// 把span加到mcentral, res等于是否添加成功
c.local_nsmallfree[spc.sizeclass()] += uintptr(nfreed)
res = mheap_.central[spc].mcentral.freeSpan(s, preserve, wasempty)
// freeSpan会更新sweepgen
// MCentral_FreeSpan updates sweepgen
} else if freeToHeap {
// 把span释放到mheap
// Free large span to heap
// NOTE(rsc,dvyukov): The original implementation of efence
// in CL 22060046 used SysFree instead of SysFault, so that
// the operating system would eventually give the memory
// back to us again, so that an efence program could run
// longer without running out of memory. Unfortunately,
// calling SysFree here without any kind of adjustment of the
// heap data structures means that when the memory does
// come back to us, we have the wrong metadata for it, either in
// the MSpan structures or in the garbage collection bitmap.
// Using SysFault here means that the program will run out of
// memory fairly quickly in efence mode, but at least it won't
// have mysterious crashes due to confused memory reuse.
// It should be possible to switch back to SysFree if we also
// implement and then call some kind of MHeap_DeleteSpan.
if debug.efence > 0 {
s.limit = 0 // prevent mlookup from finding this span
sysFault(unsafe.Pointer(s.base()), size)
} else {
mheap_.freeSpan(s, 1)
}
c.local_nlargefree++
c.local_largefree += size
res = true
}
// 如果span未加到mcentral或者未释放到mheap, 则表示span仍在使用
if !res {
// 把仍在使用的span加到sweepSpans的"已清扫"队列中
// The span has been swept and is still in-use, so put
// it on the swept in-use list.
mheap_.sweepSpans[sweepgen/2%2].push(s)
}
return res
}
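The specials handling at the start of mspan.sweep has a visible consequence for finalizers: an unmarked object that still has a finalizer is re-marked so it survives this cycle, its finalizer is queued to run, and the memory is only reclaimed in a later cycle. A small, hedged demonstration (blob is an invented type; finalizer timing is not guaranteed by the language spec, hence the timeout):
package main

import (
    "fmt"
    "runtime"
    "time"
)

type blob struct{ data [1 << 20]byte }

func main() {
    done := make(chan struct{})

    b := &blob{}
    runtime.SetFinalizer(b, func(*blob) {
        fmt.Println("finalizer ran")
        close(done)
    })
    b = nil // drop the last reference

    runtime.GC() // sweep finds the unmarked object and queues its finalizer
    select {
    case <-done:
    case <-time.After(time.Second):
        fmt.Println("finalizer did not run yet (not guaranteed)")
    }

    runtime.GC() // a later cycle can actually free the object's memory
}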
As bgsweep and the allocator code shown earlier demonstrate, the sweep phase works very lazily:
in practice a new GC cycle may need to start before the previous cycle's sweeping has finished,
which is why every GC cycle first completes the previous cycle's sweep work (the Sweep Termination phase).
This completes the analysis of the whole GC flow. Finally, here is the implementation of the write barrier function writebarrierptr:
// NOTE: Really dst *unsafe.Pointer, src unsafe.Pointer,
// but if we do that, Go inserts a write barrier on *dst = src.
//go:nosplit
func writebarrierptr(dst *uintptr, src uintptr) {
if writeBarrier.cgo {
cgoCheckWriteBarrier(dst, src)
}
if !writeBarrier.needed {
*dst = src
return
}
if src != 0 && src < minPhysPageSize {
systemstack(func() {
print("runtime: writebarrierptr *", dst, " = ", hex(src), "\n")
throw("bad pointer in write barrier")
})
}
// 标记指针
writebarrierptr_prewrite1(dst, src)
// 设置指针到目标
*dst = src
}
The writebarrierptr_prewrite1 function is as follows:
// writebarrierptr_prewrite1 invokes a write barrier for *dst = src
// prior to the write happening.
//
// Write barrier calls must not happen during critical GC and scheduler
// related operations. In particular there are times when the GC assumes
// that the world is stopped but scheduler related code is still being
// executed, dealing with syscalls, dealing with putting gs on runnable
// queues and so forth. This code cannot execute write barriers because
// the GC might drop them on the floor. Stopping the world involves removing
// the p associated with an m. We use the fact that m.p == nil to indicate
// that we are in one these critical section and throw if the write is of
// a pointer to a heap object.
//go:nosplit
func writebarrierptr_prewrite1(dst *uintptr, src uintptr) {
mp := acquirem()
if mp.inwb || mp.dying > 0 {
releasem(mp)
return
}
systemstack(func() {
if mp.p == 0 && memstats.enablegc && !mp.inwb && inheap(src) {
throw("writebarrierptr_prewrite1 called with mp.p == nil")
}
mp.inwb = true
gcmarkwb_m(dst, src)
})
mp.inwb = false
releasem(mp)
}
The gcmarkwb_m function is as follows:
func gcmarkwb_m(slot *uintptr, ptr uintptr) {
if writeBarrier.needed {
// Note: This turns bad pointer writes into bad
// pointer reads, which could be confusing. We avoid
// reading from obviously bad pointers, which should
// take care of the vast majority of these. We could
// patch this up in the signal handler, or use XCHG to
// combine the read and the write. Checking inheap is
// insufficient since we need to track changes to
// roots outside the heap.
//
// Note: profbuf.go omits a barrier during signal handler
// profile logging; that's safe only because this deletion barrier exists.
// If we remove the deletion barrier, we'll have to work out
// a new way to handle the profile logging.
if slot1 := uintptr(unsafe.Pointer(slot)); slot1 >= minPhysPageSize {
if optr := *slot; optr != 0 {
// 标记旧指针
shade(optr)
}
}
// TODO: Make this conditional on the caller's stack color.
if ptr != 0 && inheap(ptr) {
// 标记新指针
shade(ptr)
}
}
}
The shade function is as follows:
// Shade the object if it isn't already.
// The object is not nil and known to be in the heap.
// Preemption must be disabled.
//go:nowritebarrier
func shade(b uintptr) {
if obj, hbits, span, objIndex := heapBitsForObject(b, 0, 0); obj != 0 {
gcw := &getg().m.p.ptr().gcw
// 标记一个对象存活, 并把它加到标记队列(该对象变为灰色)
greyobject(obj, 0, 0, hbits, span, gcw, objIndex)
// 如果标记了禁止本地标记队列则flush到全局标记队列
if gcphase == _GCmarktermination || gcBlackenPromptly {
// Ps aren't allowed to cache work during mark
// termination.
gcw.dispose()
}
}
}