
Kubernetes Event Source Code Analysis



Using the kubectl describe [resource] command we can see Event output, and we often rely on events to troubleshoot problems. From events we can reconstruct the entire running history of a Pod, and they serve as a data source for service observability. Events therefore play a pivotal role in Kubernetes.


Events are not something only the kubelet produces: the operations around events are encapsulated in the client-go/tools/record package, so we can perfectly well write our own custom events as well.

Now let's lift the veil on events step by step.

1. Event Definition

An event is in fact also a resource object, stored in etcd through the apiserver, so we can also inspect event objects with the kubectl get event command.

Here is the YAML of one event:

apiVersion: v1
count: 1
eventTime: null
firstTimestamp: "2020-03-02T13:08:22Z"
involvedObject:
  apiVersion: v1
  kind: Pod
  name: example-foo-d75d8587c-xsf64
  namespace: default
  resourceVersion: "429837"
  uid: ce611c62-6c1a-4bd8-9029-136a1adf7de4
kind: Event
lastTimestamp: "2020-03-02T13:08:22Z"
message: Pod sandbox changed, it will be killed and re-created.
metadata:
  creationTimestamp: "2020-03-02T13:08:30Z"
  name: example-foo-d75d8587c-xsf64.15f87ea1df862b64
  namespace: default
  resourceVersion: "479466"
  selfLink: /api/v1/namespaces/default/events/example-foo-d75d8587c-xsf64.15f87ea1df862b64
  uid: 9fe6f72a-341d-4c49-960b-e185982d331a
reason: SandboxChanged
reportingComponent: ""
reportingInstance: ""
source:
  component: kubelet
  host: minikube
type: Normal           

Main fields:

  • involvedObject: the object the event is about, i.e. the resource that triggered it
  • lastTimestamp: the time the event was last triggered
  • message: a human-readable description of the event
  • metadata: the event's metadata, such as name and namespace
  • reason: the reason for the event
  • source: where the event was reported from, e.g. the kubelet on a particular node
  • type: the event type, either Normal or Warning
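Besides kubectl describe and kubectl get event, events can also be read programmatically. The following is a minimal sketch using client-go (1.17-style, non-context method signatures); the kubeconfig path and the Pod name are assumptions taken from the YAML above:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from a local kubeconfig (path is an assumption).
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List events whose involvedObject is a specific Pod; this is roughly what
	// kubectl describe shows in its Events section.
	events, err := clientset.CoreV1().Events("default").List(metav1.ListOptions{
		FieldSelector: "involvedObject.kind=Pod,involvedObject.name=example-foo-d75d8587c-xsf64",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s\t%s\t%s\t%s\n", e.LastTimestamp, e.Type, e.Reason, e.Message)
	}
}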

The full Event field definition can be found here:

types.go#L5078

Next, let's look at how an event is actually written.

2. Writing Events

1. We take the kubelet as an example to see how event writing is done.

2. The code in this article is based on Kubernetes 1.17.3.

Let's first look at the whole processing flow with a diagram:

[Figure: overall flow of how events are created, broadcast, and consumed]

Creating the client that records events:

kubelet/app/server.go#L461
// makeEventRecorder sets up kubeDeps.Recorder if it's nil. It's a no-op otherwise.
func makeEventRecorder(kubeDeps *kubelet.Dependencies, nodeName types.NodeName) {
    if kubeDeps.Recorder != nil {
        return
    }
    // event broadcaster
    eventBroadcaster := record.NewBroadcaster()
    // create the EventRecorder
    kubeDeps.Recorder = eventBroadcaster.NewRecorder(legacyscheme.Scheme, v1.EventSource{Component: componentKubelet, Host: string(nodeName)})
    // send events to the log output
    eventBroadcaster.StartLogging(klog.V(3).Infof)
    if kubeDeps.EventClient != nil {
        klog.V(4).Infof("Sending events to api server.")
        // send events to the apiserver
        eventBroadcaster.StartRecordingToSink(&v1core.EventSinkImpl{Interface: kubeDeps.EventClient.Events("")})
    } else {
        klog.Warning("No api server defined - no events will be sent to API server.")
    }
}           

makeEventRecorder first creates an EventBroadcaster instance via record.NewBroadcaster(); this is the event broadcaster, and through it the two event handlers StartLogging and StartRecordingToSink are registered, which send events to the log and to the apiserver respectively.

NewRecorder then creates an EventRecorder instance, which provides the Event, Eventf and related methods for recording events.
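For comparison, here is a minimal sketch of how a component outside the kubelet could wire up the same pieces with client-go; the clientset argument and the component name my-controller are assumptions:

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/tools/record"
	"k8s.io/klog"
)

// newRecorder wires up a Broadcaster and a Recorder the same way makeEventRecorder does.
func newRecorder(clientset kubernetes.Interface) record.EventRecorder {
	// the broadcaster fans events out to every registered handler
	eventBroadcaster := record.NewBroadcaster()
	// handler 1: write events to the log
	eventBroadcaster.StartLogging(klog.Infof)
	// handler 2: write events to the apiserver through an EventSink
	eventBroadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{
		Interface: clientset.CoreV1().Events(""),
	})
	// the recorder is the producer side: Event/Eventf push events into the broadcaster's channel
	return eventBroadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: "my-controller"})
}

The returned recorder can then be handed to any controller that needs to emit events about the objects it manages.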

EventBroadcaster

Let's look at the EventBroadcaster interface definition:

event.go#L113
// EventBroadcaster knows how to receive events and send them to any EventSink, watcher, or log.
type EventBroadcaster interface {
    //
    StartEventWatcher(eventHandler func(*v1.Event)) watch.Interface
    StartRecordingToSink(sink EventSink) watch.Interface
    StartLogging(logf func(format string, args ...interface{})) watch.Interface
    NewRecorder(scheme *runtime.Scheme, source v1.EventSource) EventRecorder
    Shutdown()
}           

The concrete implementation is the eventBroadcasterImpl struct, which implements each of these methods.

StartLogging and StartRecordingToSink are what actually consume events, while EventRecorder performs the writes; a channel in between connects the two, forming a producer-consumer model.

EventRecorder

Let's first look at the EventRecorder interface definition (event.go#L88), which provides the following four methods:

// EventRecorder knows how to record events on behalf of an EventSource.
type EventRecorder interface {
    // Event constructs an event from the given information and puts it in the queue for sending.
    // 'object' is the object this event is about. Event will make a reference-- or you may also
    // pass a reference to the object directly.
    // 'type' of this event, and can be one of Normal, Warning. New types could be added in future
    // 'reason' is the reason this event is generated. 'reason' should be short and unique; it
    // should be in UpperCamelCase format (starting with a capital letter). "reason" will be used
    // to automate handling of events, so imagine people writing switch statements to handle them.
    // You want to make that easy.
    // 'message' is intended to be human readable.
    //
    // The resulting event will be created in the same namespace as the reference object.
    Event(object runtime.Object, eventtype, reason, message string)
    // Eventf is just like Event, but with Sprintf for the message field.
    Eventf(object runtime.Object, eventtype, reason, messageFmt string, args ...interface{})
    // PastEventf is just like Eventf, but with an option to specify the event's 'timestamp' field.
    PastEventf(object runtime.Object, timestamp metav1.Time, eventtype, reason, messageFmt string, args ...interface{})
    // AnnotatedEventf is just like eventf, but with annotations attached
    AnnotatedEventf(object runtime.Object, annotations map[string]string, eventtype, reason, messageFmt string, args ...interface{})
}           

Main parameters:

  • object: corresponds to involvedObject in the event resource definition
  • eventtype: corresponds to type in the event resource definition, either Normal or Warning
  • reason: the reason for the event
  • message: the event message

A sample call is shown below.
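As a hedged example (pod stands for any *v1.Pod already held by the caller, and mountErr as well as the reason/message strings are made up for illustration):

// recorder is the EventRecorder created earlier; pod is a *v1.Pod we already hold.
// This records an event whose involvedObject references the pod, with type Warning,
// reason FailedMount, and a formatted message.
recorder.Eventf(pod, v1.EventTypeWarning, "FailedMount",
	"Unable to attach or mount volumes: %v", mountErr)

// The plain Event variant takes a fixed message string.
recorder.Event(pod, v1.EventTypeNormal, "Scheduled", "Successfully assigned pod to node")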

Now let's trace the whole process when we call Event(object runtime.Object, eventtype, reason, message string).

It turns out that all of these methods eventually call the generateEvent method:

event.go#L316
func (recorder *recorderImpl) generateEvent(object runtime.Object, annotations map[string]string, timestamp metav1.Time, eventtype, reason, message string) {
    .....
    event := recorder.makeEvent(ref, annotations, eventtype, reason, message)
    event.Source = recorder.source
    go func() {
        // NOTE: events should be a non-blocking operation
        defer utilruntime.HandleCrash()
        recorder.Action(watch.Added, event)
    }()
}           

The event is ultimately handled in a goroutine via the call to recorder.Action, which guarantees that every call to an event method is non-blocking.

makeEvent mainly constructs an event object; the event name is generated from the name in InvolvedObject plus a timestamp.

Note: for events produced by non-namespaced resources, the event's namespace is default.
func (recorder *recorderImpl) makeEvent(ref *v1.ObjectReference, annotations map[string]string, eventtype, reason, message string) *v1.Event {
    t := metav1.Time{Time: recorder.clock.Now()}
    namespace := ref.Namespace
    if namespace == "" {
        namespace = metav1.NamespaceDefault
    }
    return &v1.Event{
        ObjectMeta: metav1.ObjectMeta{
            Name:        fmt.Sprintf("%v.%x", ref.Name, t.UnixNano()),
            Namespace:   namespace,
            Annotations: annotations,
        },
        InvolvedObject: *ref,
        Reason:         reason,
        Message:        message,
        FirstTimestamp: t,
        LastTimestamp:  t,
        Count:          1,
        Type:           eventtype,
    }
}           

Tracing further into the Action method:

apimachinery/blob/master/pkg/watch/mux.go#L188:23
// Action distributes the given event among all watchers.
func (m *Broadcaster) Action(action EventType, obj runtime.Object) {
    m.incoming <- Event{action, obj}
}           

将event寫入到了一個channel裡面。

注意:

這個Action方式是

apimachinery

包中的方法,因為實作的sturt

recorderImpl

*watch.Broadcaster

作為一個匿名struct,并且在

NewRecorder

進行

Broadcaster

指派,這個

Broadcaster

其實就是

eventBroadcasterImpl

中的

Broadcaster
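A simplified view of the struct (paraphrased from client-go 1.17; imports are omitted and the field names/order are from memory, so treat this as a sketch rather than the exact source):

// recorderImpl embeds *watch.Broadcaster anonymously, so recorder.Action(...)
// resolves to (*watch.Broadcaster).Action and pushes into its incoming channel.
type recorderImpl struct {
	scheme *runtime.Scheme
	source v1.EventSource
	*watch.Broadcaster // shared with eventBroadcasterImpl, assigned in NewRecorder
	clock  clock.Clock
}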

At this point it is basically clear that the event ends up being written into the Broadcaster's incoming channel. Next, let's see how it is consumed.

3. Consuming Events

The StartLogging and StartRecordingToSink calls made in makeEventRecorder are what actually consume the events:

  • StartLogging: outputs the event directly to the log
  • StartRecordingToSink: writes the event to the apiserver

Both methods internally call StartEventWatcher, passing in an eventHandler function that processes each event:

func (e *eventBroadcasterImpl) StartEventWatcher(eventHandler func(*v1.Event)) watch.Interface {
    watcher := e.Watch()
    go func() {
        defer utilruntime.HandleCrash()
        for watchEvent := range watcher.ResultChan() {
            event, ok := watchEvent.Object.(*v1.Event)
            if !ok {
                // This is all local, so there's no reason this should
                // ever happen.
                continue
            }
            eventHandler(event)
        }
    }()
    return watcher
}           

watcher.ResultChan is where the events come out. In a goroutine, the call chain func (m *Broadcaster) loop() ==> func (m *Broadcaster) distribute(event Event) writes each event into broadcasterWatcher.result; a paraphrased sketch of this fan-out follows.
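Paraphrased from apimachinery's watch.Broadcaster (simplified; locking and the special internal marker events are omitted), the fan-out looks roughly like this:

// loop drains the incoming channel and fans each event out to all watchers.
func (m *Broadcaster) loop() {
	for event := range m.incoming {
		m.distribute(event)
	}
	m.closeAll()
}

// distribute writes the event into every registered watcher's result channel;
// the goroutine started by StartEventWatcher reads it back via watcher.ResultChan().
func (m *Broadcaster) distribute(event Event) {
	if m.fullChannelBehavior == DropIfChannelFull {
		for _, w := range m.watchers {
			select {
			case w.result <- event:
			case <-w.stopped:
			default: // drop the event if this watcher's buffer is full
			}
		}
	} else {
		for _, w := range m.watchers {
			select {
			case w.result <- event:
			case <-w.stopped:
			}
		}
	}
}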

Let's focus on the eventHandler provided by StartRecordingToSink, namely recordToSink:

func recordToSink(sink EventSink, event *v1.Event, eventCorrelator *EventCorrelator, sleepDuration time.Duration) {
    // Make a copy before modification, because there could be multiple listeners.
    // Events are safe to copy like this.
    eventCopy := *event
    event = &eventCopy
    result, err := eventCorrelator.EventCorrelate(event)
    if err != nil {
        utilruntime.HandleError(err)
    }
    if result.Skip {
        return
    }
    tries := 0
    for {
        if recordEvent(sink, result.Event, result.Patch, result.Event.Count > 1, eventCorrelator) {
            break
        }
        tries++
        if tries >= maxTriesPerEvent {
            klog.Errorf("Unable to write event '%#v' (retry limit exceeded!)", event)
            break
        }
        // Randomize the first sleep so that various clients won't all be
        // synced up if the master goes down.
        if tries == 1 {
            time.Sleep(time.Duration(float64(sleepDuration) * rand.Float64()))
        } else {
            time.Sleep(sleepDuration)
        }
    }
}           

Here the event first goes through a preprocessing step, eventCorrelator.EventCorrelate(event), whose main job is to aggregate identical events (to avoid producing too many events, which would put pressure on etcd and the apiserver and also make a pod's events hard to read).

The for loop below it performs retries, with a maximum of 12 attempts; the recordEvent call is what actually writes the event to the apiserver. A paraphrased sketch of recordEvent follows.

Event processing

Let's now look at EventCorrelate:

// EventCorrelate filters, aggregates, counts, and de-duplicates all incoming events
func (c *EventCorrelator) EventCorrelate(newEvent *v1.Event) (*EventCorrelateResult, error) {
    if newEvent == nil {
        return nil, fmt.Errorf("event is nil")
    }
    aggregateEvent, ckey := c.aggregator.EventAggregate(newEvent)
    observedEvent, patch, err := c.logger.eventObserve(aggregateEvent, ckey)
    if c.filterFunc(observedEvent) {
        return &EventCorrelateResult{Skip: true}, nil
    }
    return &EventCorrelateResult{Event: observedEvent, Patch: patch}, err
}           

It calls three methods in turn, aggregator.EventAggregate, logger.eventObserve and filterFunc, whose roles are:

1. aggregator.EventAggregate: aggregates events. If 10 similar events have appeared in the last 10 minutes (events whose key fields, other than message and the timestamps, are all identical), the aggregator sets their message to (combined from similar events) + event.Message.

2. logger.eventObserve: for identical events, as well as similar events that were merged by the aggregator, it records how many times the event has occurred by incrementing the Count field.

3. filterFunc: implements a token-bucket rate limiter; events exceeding the configured rate are dropped, which protects the apiserver (a small sketch of the idea follows this list).
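To illustrate the idea only (this is not the spam-filter code itself, and the numbers below are the client-go defaults as far as I recall, so treat them as assumptions), client-go's flowcontrol package provides the token bucket used here:

package main

import (
	"fmt"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Roughly one token every 5 minutes (QPS = 1/300) with a burst of 25,
	// which is what the event spam filter allows per event source by default.
	limiter := flowcontrol.NewTokenBucketRateLimiter(1.0/300.0, 25)

	for i := 0; i < 30; i++ {
		if limiter.TryAccept() {
			fmt.Println("event", i, "accepted")
		} else {
			// over the configured rate: the event would be dropped to protect the apiserver
			fmt.Println("event", i, "filtered")
		}
	}
}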

Let's look mainly at aggregator.EventAggregate:

func (e *EventAggregator) EventAggregate(newEvent *v1.Event) (*v1.Event, string) {
    now := metav1.NewTime(e.clock.Now())
    var record aggregateRecord
    // eventKey is the full cache key for this event
    //eventKey 是将除了時間戳外所有字段結合在一起
    eventKey := getEventKey(newEvent)
    // aggregateKey is for the aggregate event, if one is needed.
    // aggregateKey combines every field except message and the timestamps; localKey is the message
    aggregateKey, localKey := e.keyFunc(newEvent)
    // Do we have a record of similar events in our cache?
    e.Lock()
    defer e.Unlock()
    // look up the cache by aggregateKey; identical or similar events end up in this cache
    value, found := e.cache.Get(aggregateKey)
    if found {
        record = value.(aggregateRecord)
    }
    // if the last similar event is more than 10 minutes old, start a fresh localKeys set (the set stores messages)
    maxInterval := time.Duration(e.maxIntervalInSeconds) * time.Second
    interval := now.Time.Sub(record.lastTimestamp.Time)
    if interval > maxInterval {
        record = aggregateRecord{localKeys: sets.NewString()}
    }
    // Write the new event into the aggregation record and put it on the cache
    //将locakKey也就是message放入集合中,如果message相同就是覆寫了
    record.localKeys.Insert(localKey)
    record.lastTimestamp = now
    e.cache.Add(aggregateKey, record)
    // If we are not yet over the threshold for unique events, don't correlate them
    // check whether the localKeys set already holds 10 (maxEvents) similar events
    if uint(record.localKeys.Len()) < e.maxEvents {
        return newEvent, eventKey
    }
    // do not grow our local key set any larger than max
    record.localKeys.PopAny()
    // create a new aggregate event, and return the aggregateKey as the cache key
    // (so that it can be overwritten.)
    eventCopy := &v1.Event{
        ObjectMeta: metav1.ObjectMeta{
            Name:      fmt.Sprintf("%v.%x", newEvent.InvolvedObject.Name, now.UnixNano()),
            Namespace: newEvent.Namespace,
        },
        Count:          1,
        FirstTimestamp: now,
        InvolvedObject: newEvent.InvolvedObject,
        LastTimestamp:  now,
        // the message gets the prefix: (combined from similar events):
        Message:        e.messageFunc(newEvent),
        Type:           newEvent.Type,
        Reason:         newEvent.Reason,
        Source:         newEvent.Source,
    }
    return eventCopy, aggregateKey
}           

So aggregator.EventAggregate uses the cache and the localKeys set to decide whether events are similar; if 10 similar events have appeared within the last 10 minutes, they are merged and the prefix is added. logger.eventObserve then accumulates the count, and if the message is identical too, it is simply a count++. A simplified sketch of that counting step follows.
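A heavily simplified, paraphrased sketch of the counting step in logger.eventObserve; buildCountPatch and updateCache are stand-ins for the real merge-patch construction and cache bookkeeping:

// If the same eventKey was seen before, reuse the stored name/resourceVersion,
// bump Count, and emit a patch; otherwise treat it as a brand-new event.
func (e *eventLogger) eventObserve(newEvent *v1.Event, key string) (*v1.Event, []byte, error) {
	eventCopy := *newEvent
	event := &eventCopy

	if last := e.lastEventObservationFromCache(key); last.count > 0 {
		// the event (or its aggregate) already exists on the server
		event.Name = last.name
		event.ResourceVersion = last.resourceVersion
		event.FirstTimestamp = last.firstTimestamp
		event.Count = int32(last.count) + 1 // count++

		patch, err := buildCountPatch(event) // stand-in: two-way merge patch of count/lastTimestamp/message
		e.updateCache(key, event)            // stand-in: remember the new count
		return event, patch, err
	}

	// first observation: remember it so that later duplicates become patches
	e.updateCache(key, event)
	return event, nil, nil
}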

4. Summary

That is essentially the whole flow of event handling. It can be summarized in the following points, which you can also compare against the diagram earlier in the article:

1. Create the EventRecorder object and build the event object through the Event and related methods it provides.

2. The created object is sent into the channel inside the EventBroadcaster.

3. The EventBroadcaster, via a goroutine running in the background, takes events off the channel and broadcasts them to the handlers registered earlier.

4. When the logging handler receives an event, it simply prints it.

5. When the EventSink handler receives an event, it preprocesses it and then sends it to the apiserver.

6. The preprocessing consists of three actions: rate limiting, aggregation, and counting.

7. Once the apiserver has processed the event, it is stored in etcd.

Looking back over the whole flow, we can see that events are not guaranteed to be written 100% of the time (as the preprocessing shows). This is a deliberate trade-off in favor of the availability of the backing etcd service: events are generated very frequently across the cluster, especially when services are unstable, and compared with resources such as Deployments and Pods they are less critical, so some loss is accepted.

