
Android ServiceManager Startup: One Article to Fully Understand the ServiceManager Startup Flow

This article dissects the ServiceManager startup flow in detail, based on the Android 6.0 source code:
framework/native/cmds/servicemanager/
  - service_manager.c
  - binder.c
  
kernel/drivers/ (the path differs slightly across Linux kernel branches)
  - staging/android/binder.c
  - android/binder.c 
           

1. Overview

ServiceManager is the daemon process of Binder IPC and is itself a Binder service. However, it does not use libbinder's multithreaded model to talk to the Binder driver; instead, it has its own binder.c that communicates with the Binder driver directly, with a single binder_loop loop to read and process transactions. The benefit of this design is simplicity and efficiency.

ServiceManager's own job is relatively simple; its functions are looking up and registering services. In Binder IPC, the more common case is actually communication between a BpBinder and a BBinder, for example between ActivityManagerProxy and ActivityManagerService.

1.1 Flowchart

The startup process consists of the following main stages:

  1. Open the binder driver: binder_open;
  2. Register as the context manager of binder services: binder_become_context_manager;
  3. Enter an infinite loop to handle requests from clients: binder_loop;

2. Startup Process

ServiceManager is created by the init process when it parses the init.rc file. Its executable is /system/bin/servicemanager, the corresponding source file is service_manager.c, and the process name is /system/bin/servicemanager.

service servicemanager /system/bin/servicemanager
    class core
    user system
    group system
    critical
    onrestart restart healthd
    onrestart restart zygote
    onrestart restart media
    onrestart restart surfaceflinger
    onrestart restart drm
           

The entry point for starting the Service Manager is the main() method in service_manager.c, shown below:

2.1 main

[-> service_manager.c]

int main(int argc, char **argv) {
    struct binder_state *bs;
    //Open the binder driver and request a 128KB memory region [see Section 2.2]
    bs = binder_open(128*1024);
    ...

    //Become the context manager [see Section 2.3]
    if (binder_become_context_manager(bs)) {
        return -1;
    }

    selinux_enabled = is_selinux_enabled(); //whether selinux is enabled
    sehandle = selinux_android_service_context_handle();
    selinux_status_open(true);

    if (selinux_enabled > 0) {
        if (sehandle == NULL) {  
            abort(); //failed to obtain sehandle
        }
        if (getcon(&service_manager_context) != 0) {
            abort(); //failed to obtain the service_manager context
        }
    }
    ...

    //Enter an infinite loop to handle requests from clients [see Section 2.4]
    binder_loop(bs, svcmgr_handler);
    return 0;
}
           

2.2 binder_open

[-> servicemanager/binder.c]

struct binder_state *binder_open(size_t mapsize)
{
    struct binder_state *bs; //[see Section 2.2.1]
    struct binder_version vers;

    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return NULL;
    }

    //Trap into the kernel via a system call and open the Binder device driver
    bs->fd = open("/dev/binder", O_RDWR);
    if (bs->fd < 0) {
        goto fail_open; // failed to open the binder device
    }

    //Query the binder version info via the ioctl system call
    if ((ioctl(bs->fd, BINDER_VERSION, &vers) == -1) ||
        (vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION)) {
        goto fail_open; //kernel-space and user-space binder versions do not match
    }

    bs->mapsize = mapsize;
    //Memory-map via the mmap system call; the mapping size must be a multiple of the page size
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    if (bs->mapped == MAP_FAILED) {
        goto fail_map; // failed to map the binder device memory
    }

    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return NULL;
}
           

Operations involved in opening the binder driver:

First, open() is called on the binder device. The open() call traps into the Binder driver via a system call and invokes the driver-level binder_open() method, which creates a binder_proc object, assigns it to filp->private_data, and adds it to the global list binder_procs (static HLIST_HEAD(binder_procs);). Then ioctl() checks that the current user-space binder version matches the Binder driver's version.

Next, mmap() is called to set up the memory mapping. Likewise, mmap() traps into the kernel via a system call and corresponds to the driver-level binder_mmap() method, which creates a binder_buffer object and adds it to the current binder_proc's proc->buffers list.
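
For reference, here is an abridged sketch of the driver-level binder_open() as it looks on a 6.0-era kernel; most field initialization and debug bookkeeping are trimmed, so treat it as illustrative rather than the full source:

static int binder_open(struct inode *nodp, struct file *filp)
{
    struct binder_proc *proc;

    proc = kzalloc(sizeof(*proc), GFP_KERNEL); //allocate the per-process bookkeeping object
    if (proc == NULL)
        return -ENOMEM;
    get_task_struct(current);
    proc->tsk = current;
    INIT_LIST_HEAD(&proc->todo); //per-process work queue
    init_waitqueue_head(&proc->wait);

    binder_lock(__func__);
    hlist_add_head(&proc->proc_node, &binder_procs); //global list of all binder processes
    proc->pid = current->group_leader->pid;
    INIT_LIST_HEAD(&proc->delivered_death);
    filp->private_data = proc; //later ioctl()/mmap() calls retrieve the proc from here
    binder_unlock(__func__);

    return 0;
}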

2.2.1 binder_state

[-> servicemanager/binder.c]

struct binder_state
{
    int fd; // file descriptor for /dev/binder
    void *mapped; //start address of the mmap'ed region
    size_t mapsize; //size of the mapped region, 128KB by default
};
           

2.3 binder_become_context_manager

[-> servicemanager/binder.c]

int binder_become_context_manager(struct binder_state *bs) {
    //Send the BINDER_SET_CONTEXT_MGR command via ioctl [see Section 2.3.1]
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}
           

Become the context manager; there is only one such manager in the entire system. The ioctl() call traps into the kernel via a system call and corresponds to the driver-level binder_ioctl() method.

2.3.1 binder_ioctl

[-> kernel/drivers/android/binder.c]

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) {
    int ret;
    ...
    binder_lock(__func__);
    switch (cmd) {
      case BINDER_SET_CONTEXT_MGR:
          ret = binder_ioctl_set_ctx_mgr(filp); //[see Section 2.3.2]
          break;
      case ...
    }
    binder_unlock(__func__);
    return ret;
}
           

For the BINDER_SET_CONTEXT_MGR command, this ultimately calls the binder_ioctl_set_ctx_mgr() method; binder_main_lock is held throughout this process.

2.3.2 binder_ioctl_set_ctx_mgr

[-> kernel/drivers/android/binder.c]

static int binder_ioctl_set_ctx_mgr(struct file *filp)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    kuid_t curr_euid = current_euid();

    //Ensure the mgr_node object is created only once
    if (binder_context_mgr_node != NULL) {
        ret = -EBUSY; 
        goto out;
    }

    if (uid_valid(binder_context_mgr_uid)) {
        ...
    } else {
        //Use the current thread's euid as the Service Manager's uid
        binder_context_mgr_uid = curr_euid;
    }

    //Create the ServiceManager entity [see Section 2.3.3]
    binder_context_mgr_node = binder_new_node(proc, 0, 0);
    ...
    binder_context_mgr_node->local_weak_refs++;
    binder_context_mgr_node->local_strong_refs++;
    binder_context_mgr_node->has_strong_ref = 1;
    binder_context_mgr_node->has_weak_ref = 1;
out:
    return ret;
}
           

Inside the binder driver, the following static variables are defined:

// the binder_node corresponding to the service manager
static struct binder_node *binder_context_mgr_node;
// uid of the thread running the service manager
static kuid_t binder_context_mgr_uid = INVALID_UID;

This call creates the global binder_node object binder_context_mgr_node and increments its strong and weak reference counts by one each.

2.3.3 binder_new_node

[-> kernel/drivers/android/binder.c]

static struct binder_node *binder_new_node(struct binder_proc *proc,
                       binder_uintptr_t ptr,
                       binder_uintptr_t cookie)
{
    struct rb_node **p = &proc->nodes.rb_node;
    struct rb_node *parent = NULL;
    struct binder_node *node;
    //empty on first entry
    while (*p) {
        parent = *p;
        node = rb_entry(parent, struct binder_node, rb_node);

        if (ptr < node->ptr)
            p = &(*p)->rb_left;
        else if (ptr > node->ptr)
            p = &(*p)->rb_right;
        else
            return NULL;
    }

    //Allocate kernel space for the newly created binder_node
    node = kzalloc(sizeof(*node), GFP_KERNEL);
    if (node == NULL)
        return NULL;
    binder_stats_created(BINDER_STAT_NODE);
    // 将新建立的node對象添加到proc紅黑樹;
    rb_link_node(&node->rb_node, parent, p);
    rb_insert_color(&node->rb_node, &proc->nodes);
    node->debug_id = ++binder_last_id;
    node->proc = proc;
    node->ptr = ptr;
    node->cookie = cookie;
    node->work.type = BINDER_WORK_NODE;  //set the type of the binder_work
    INIT_LIST_HEAD(&node->work.entry);
    INIT_LIST_HEAD(&node->async_todo);
    return node;
}
           

This creates a binder_node structure in the Binder driver layer and records the current binder_proc in node->proc. It also initializes two list heads on the binder_node: node->work.entry and node->async_todo.
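
For context, these are the binder_node fields touched above, abridged from a 6.0-era kernel (the remaining fields, and the union that rb_node actually lives in, are omitted here):

struct binder_node {
    int debug_id;
    struct binder_work work;     //queued as BINDER_WORK_NODE when ref counts change
    struct rb_node rb_node;      //links the node into proc->nodes
    struct binder_proc *proc;    //owning process
    struct hlist_head refs;      //all binder_refs that point at this node
    int local_strong_refs;
    int local_weak_refs;
    binder_uintptr_t ptr;        //0 for the context manager
    binder_uintptr_t cookie;     //0 for the context manager
    struct list_head async_todo; //pending async (one-way) transactions
    ...
};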

2.4 binder_loop

[-> servicemanager/binder.c]

void binder_loop(struct binder_state *bs, binder_handler func) {
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    //将BC_ENTER_LOOPER指令發送給binder驅動,讓Service Manager進入循環 【見小節2.4.1】
    binder_write(bs, readbuf, sizeof(uint32_t));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr); //loop forever, repeatedly reading from and writing to binder
        if (res < 0) {
            break;
        }

        // Parse the binder data [see Section 2.5]
        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
        if (res == 0) {
            break;
        }
        if (res < 0) {
            break;
        }
    }
}
           

This enters the read/write loop. The parameter func, passed in from main(), points to svcmgr_handler.

binder_write() sends the BC_ENTER_LOOPER command to the binder driver via ioctl(); at this point only bwr.write_buffer holds data, so the driver enters the binder_thread_write() method. Execution then enters the for loop and calls ioctl() again; this time only bwr.read_buffer holds data, so the driver enters the binder_thread_read() method.

2.4.1 binder_write

[-> servicemanager/binder.c]

int binder_write(struct binder_state *bs, void *data, size_t len) {
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data; //here data is BC_ENTER_LOOPER
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    //[see Section 2.4.2]
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    return res;
}
           

This initializes bwr from the arguments passed in: write_size is 4 bytes, and write_buffer points to the start of the buffer, whose content is the BC_ENTER_LOOPER request protocol code. The bwr data is then sent to the binder driver via ioctl, which invokes its binder_ioctl method, as follows:
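
For reference, the binder_write_read structure exchanged here is defined in the binder UAPI header; the write_* fields describe the user-to-driver buffer and the read_* fields the driver-to-user buffer:

struct binder_write_read {
    binder_size_t    write_size;     //bytes available in write_buffer
    binder_size_t    write_consumed; //bytes the driver has consumed
    binder_uintptr_t write_buffer;
    binder_size_t    read_size;      //bytes available in read_buffer
    binder_size_t    read_consumed;  //bytes the driver has filled in
    binder_uintptr_t read_buffer;
};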

2.4.2 binder_ioctl

[-> kernel/drivers/android/binder.c]

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    struct binder_proc *proc = filp->private_data;
    struct binder_thread *thread;
    ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    ...

    binder_lock(__func__);
    thread = binder_get_thread(proc); //get the calling thread's binder_thread (created on first use)
    switch (cmd) {
      case BINDER_WRITE_READ:  //perform the binder read/write
          ret = binder_ioctl_write_read(filp, cmd, arg, thread); //[see Section 2.4.3]
          if (ret)
              goto err;
          break;
      case ...
    }
    ret = 0;

err:
    if (thread)
        thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
    binder_unlock(__func__);
     ...
    return ret;
}
           

2.4.3 binder_ioctl_write_read

[-> kernel/drivers/android/binder.c]

static int binder_ioctl_write_read(struct file *filp,
                unsigned int cmd, unsigned long arg,
                struct binder_thread *thread)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    void __user *ubuf = (void __user *)arg;
    struct binder_write_read bwr;

    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) { //copy the user-space data ubuf into bwr
        ret = -EFAULT;
        goto out;
    }

    if (bwr.write_size > 0) { //the write buffer has data at this point [see Section 2.4.4]
        ret = binder_thread_write(proc, thread,
                  bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
        ...
    }

    if (bwr.read_size > 0) { //the read buffer has no data at this point
        ...
    }

    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) { //copy the kernel data bwr back to user-space ubuf
        ret = -EFAULT;
        goto out;
    }
out:
    return ret;
}
           

此處将使用者空間的binder_write_read結構體 拷貝到核心空間.

2.4.4 binder_thread_write

[-> kernel/drivers/android/binder.c]

static int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread, binder_uintptr_t binder_buffer, size_t size, binder_size_t *consumed) {
  uint32_t cmd;
  void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
  void __user *ptr = buffer + *consumed;
  void __user *end = buffer + size;

  while (ptr < end && thread->return_error == BR_OK) {
    get_user(cmd, (uint32_t __user *)ptr); //fetch the command
    ptr += sizeof(uint32_t);
    switch (cmd) {
      case BC_ENTER_LOOPER:
          //Set this thread's looper state
          thread->looper |= BINDER_LOOPER_STATE_ENTERED;
          break;
      case ...;
    }
    *consumed = ptr - buffer;
  }
}
           

The cmd is taken out of bwr.write_buffer; here it is BC_ENTER_LOOPER. So this call to binder_write() from the upper layer mainly sets the current thread's looper state to BINDER_LOOPER_STATE_ENTERED.
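
The looper state bits are defined as an enum in the driver; the values below match a 6.0-era binder.c, though it is worth checking them against your kernel branch:

enum {
    BINDER_LOOPER_STATE_REGISTERED  = 0x01, //thread spawned at the driver's request
    BINDER_LOOPER_STATE_ENTERED     = 0x02, //entered via BC_ENTER_LOOPER (ServiceManager's case)
    BINDER_LOOPER_STATE_EXITED      = 0x04,
    BINDER_LOOPER_STATE_INVALID     = 0x08,
    BINDER_LOOPER_STATE_WAITING     = 0x10, //blocked waiting for work
    BINDER_LOOPER_STATE_NEED_RETURN = 0x20  //must return to user space
};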

2.5 binder_parse

[-> servicemanager/binder.c]

int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
{
    int r = 1;
    uintptr_t end = ptr + (uintptr_t) size;

    while (ptr < end) {
        uint32_t cmd = *(uint32_t *) ptr;
        ptr += sizeof(uint32_t);
        switch(cmd) {
        case BR_NOOP:  //no-op, nothing to do
            break;
        case BR_TRANSACTION_COMPLETE:
            break;
        case BR_INCREFS:
        case BR_ACQUIRE:
        case BR_RELEASE:
        case BR_DECREFS:
            ptr += sizeof(struct binder_ptr_cookie);
            break;
        case BR_TRANSACTION: {
            struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
            ...
            binder_dump_txn(txn);
            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg; 
                struct binder_io reply;
                int res;
                //[see Section 2.5.1]
                bio_init(&reply, rdata, sizeof(rdata), 4);
                bio_init_from_txn(&msg, txn); //build the binder_io msg from txn
                 //[see Section 2.6]
                res = func(bs, txn, &msg, &reply);
                //[see Section 3.4]
                binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
            }
            ptr += sizeof(*txn);
            break;
        }
        case BR_REPLY: {
            struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
            ...
            binder_dump_txn(txn);
            if (bio) {
                bio_init_from_txn(bio, txn);
                bio = 0;
            }
            ptr += sizeof(*txn);
            r = 0;
            break;
        }
        case BR_DEAD_BINDER: {
            struct binder_death *death = (struct binder_death *)(uintptr_t) *(binder_uintptr_t *)ptr;
            ptr += sizeof(binder_uintptr_t);
            // binder death notification [see Section 3.3]
            death->func(bs, death->ptr);
            break;
        }
        case BR_FAILED_REPLY:
            r = -1;
            break;
        case BR_DEAD_REPLY:
            r = -1;
            break;
        default:
            return -1;
        }
    }
    return r;
}
           

Parses the binder data. Here ptr points at the readbuf filled in by the driver (it previously held BC_ENTER_LOOPER), and func points to svcmgr_handler, so whenever a request arrives, svcmgr_handler is invoked.

2.5.1 bio_init

[-> servicemanager/binder.c]

void bio_init(struct binder_io *bio, void *data,
              size_t maxdata, size_t maxoffs)
{
    size_t n = maxoffs * sizeof(size_t);
    if (n > maxdata) {
        ...
    }

    bio->data = bio->data0 = (char *) data + n;
    bio->offs = bio->offs0 = data;
    bio->data_avail = maxdata - n;
    bio->offs_avail = maxoffs;
    bio->flags = 0;
}
           

where struct binder_io is defined as:

struct binder_io
{
    char *data;            /* pointer to read/write from */
    binder_size_t *offs;   /* array of offsets */
    size_t data_avail;     /* bytes available in data buffer */
    size_t offs_avail;     /* entries available in offsets array */

    char *data0;           /* start of the data buffer */
    binder_size_t *offs0;  /* start of the offsets array */
    uint32_t flags;
    uint32_t unused;
};
           

2.5.2 bio_init_from_txn

[-> servicemanager/binder.c]

void bio_init_from_txn(struct binder_io *bio, struct binder_transaction_data *txn)
{
    bio->data = bio->data0 = (char *)(intptr_t)txn->data.ptr.buffer;
    bio->offs = bio->offs0 = (binder_size_t *)(intptr_t)txn->data.ptr.offsets;
    bio->data_avail = txn->data_size;
    bio->offs_avail = txn->offsets_size / sizeof(size_t);
    bio->flags = BIO_F_SHARED;
}
           

将readbuf的資料賦給bio對象的data

2.6 svcmgr_handler

[-> service_manager.c]

int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si; //[see Section 2.6.1]
    uint16_t *s;
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int allow_isolated;
    ...
    
    strict_policy = bio_get_uint32(msg);
    s = bio_get_string16(msg, &len);
    ...

    switch(txn->code) {
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE: 
        s = bio_get_string16(msg, &len); //service name
        //Look up the service by name [see Section 3.1]
        handle = do_find_service(bs, s, len, txn->sender_euid, txn->sender_pid);
        //[see Section 3.1.2]
        bio_put_ref(reply, handle);
        return 0;

    case SVC_MGR_ADD_SERVICE: 
        s = bio_get_string16(msg, &len); //service name
        handle = bio_get_ref(msg); //handle [see Section 3.2.3]
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
         //Register the specified service [see Section 3.2]
        if (do_add_service(bs, s, len, handle, txn->sender_euid,
            allow_isolated, txn->sender_pid))
            return -1;
        break;

    case SVC_MGR_LIST_SERVICES: {  
        uint32_t n = bio_get_uint32(msg);

        if (!svc_can_list(txn->sender_pid)) {
            return -1;
        }
        si = svclist;
        while ((n-- > 0) && si)
            si = si->next;
        if (si) {
            bio_put_string16(reply, si->name);
            return 0;
        }
        return -1;
    }
    default:
        return -1;
    }

    bio_put_uint32(reply, 0);
    return 0;
}
           

This method implements three functions: looking up a service, registering a service, and listing all services.
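
For a feel of the client side of this protocol, here is a lookup helper modeled on svcmgr_lookup() from servicemanager's bctest.c; binder_call() (not shown here) packs the binder_io into a BC_TRANSACTION whose code is SVC_MGR_CHECK_SERVICE, and SVC_MGR_NAME is the "android.os.IServiceManager" interface token:

uint32_t svcmgr_lookup(struct binder_state *bs, uint32_t target, const char *name)
{
    uint32_t handle;
    unsigned iodata[512/4];
    struct binder_io msg, reply;

    bio_init(&msg, iodata, sizeof(iodata), 4);
    bio_put_uint32(&msg, 0);                //strict-mode header, read back by bio_get_uint32
    bio_put_string16_x(&msg, SVC_MGR_NAME); //interface token
    bio_put_string16_x(&msg, name);         //the service being looked up

    if (binder_call(bs, &msg, &reply, target, SVC_MGR_CHECK_SERVICE))
        return 0;

    handle = bio_get_ref(&reply); //the flat_binder_object written by bio_put_ref

    if (handle)
        binder_acquire(bs, handle); //take a strong reference on the handle

    binder_done(bs, &msg, &reply); //BC_FREE_BUFFER for the reply buffer

    return handle;
}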

2.6.1 svcinfo

struct svcinfo
{
    struct svcinfo *next;
    uint32_t handle; //the service's handle
    struct binder_death death;
    int allow_isolated;
    size_t len; //length of the name
    uint16_t name[0]; //service name
};
           

Each service is represented by an svcinfo structure; its handle value is determined during service registration, on the side of the process that hosts the service.

3. Core Work

servicemanager's core work is registering services and looking them up.

3.1 do_find_service

[-> service_manager.c]

uint32_t do_find_service(struct binder_state *bs, const uint16_t *s, size_t len, uid_t uid, pid_t spid)
{
    //Look up the matching service [see Section 3.1.1]
    struct svcinfo *si = find_svc(s, len);

    if (!si || !si->handle) {
        return 0;
    }

    if (!si->allow_isolated) {
        uid_t appid = uid % AID_USER;
        //Check whether the service allows access from isolated processes
        if (appid >= AID_ISOLATED_START && appid <= AID_ISOLATED_END) {
            return 0;
        }
    }

    //Check whether the caller is allowed to find this service
    if (!svc_can_find(s, len, spid)) {
        return 0;
    }
    return si->handle;
}
           

Finds the target service and returns the handle corresponding to that service.

3.1.1 find_svc

struct svcinfo *find_svc(const uint16_t *s16, size_t len)
{
    struct svcinfo *si;

    for (si = svclist; si; si = si->next) {
        //Return the match when the names are exactly identical
        if ((len == si->len) &&
            !memcmp(s16, si->name, len * sizeof(uint16_t))) {
            return si;
        }
    }
    return NULL;
}
           

Iterates over the svclist service list, searching by service name to see whether the service is already registered. If the service exists in svclist, the matching svcinfo is returned; otherwise NULL is returned.

Once the service's handle is found, bio_put_ref(reply, handle) is called to wrap the handle into the reply.

3.1.2 bio_put_ref

void bio_put_ref(struct binder_io *bio, uint32_t handle) {
    struct flat_binder_object *obj;

    if (handle)
        obj = bio_alloc_obj(bio); //[see Section 3.1.3]
    else
        obj = bio_alloc(bio, sizeof(*obj));

    if (!obj)
        return;

    obj->flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    obj->type = BINDER_TYPE_HANDLE; //the returned object is of HANDLE type
    obj->handle = handle;
    obj->cookie = 0;
}
           

3.1.3 bio_alloc_obj

static struct flat_binder_object *bio_alloc_obj(struct binder_io *bio)
{
    struct flat_binder_object *obj;
    obj = bio_alloc(bio, sizeof(*obj)); //[see Section 3.1.4]

    if (obj && bio->offs_avail) {
        bio->offs_avail--;
        *bio->offs++ = ((char*) obj) - ((char*) bio->data0);
        return obj;
    }
    bio->flags |= BIO_F_OVERFLOW;
    return NULL;
}
           

3.1.4 bio_alloc

static void *bio_alloc(struct binder_io *bio, size_t size)
{
    size = (size + 3) & (~3); //round the size up to a 4-byte boundary
    if (size > bio->data_avail) {
        bio->flags |= BIO_F_OVERFLOW;
        return NULL;
    } else {
        void *ptr = bio->data;
        bio->data += size;
        bio->data_avail -= size;
        return ptr;
    }
}
           

3.2 do_add_service

[-> service_manager.c]

int do_add_service(struct binder_state *bs,
                   const uint16_t *s, size_t len,
                   uint32_t handle, uid_t uid, int allow_isolated,
                   pid_t spid)
{
    struct svcinfo *si;

    if (!handle || (len == 0) || (len > 127))
        return -1;

    //Permission check [see Section 3.2.1]
    if (!svc_can_register(s, len, spid)) {
        return -1;
    }

    //Service lookup [see Section 3.1.1]
    si = find_svc(s, len);
    if (si) {
        if (si->handle) {
            svcinfo_death(bs, si); //service already registered: release the old entry [see Section 3.2.2]
        }
        si->handle = handle;
    } else {
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
        if (!si) {  //out of memory, cannot allocate the svcinfo
            return -1;
        }
        si->handle = handle;
        si->len = len;
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t)); //copy the service name
        si->name[len] = '\0';
        si->death.func = (void*) svcinfo_death;
        si->death.ptr = si;
        si->allow_isolated = allow_isolated;
        si->next = svclist; // svclist holds all registered services
        svclist = si;
    }

    //Send a BC_ACQUIRE command targeting handle to the binder driver via ioctl
    binder_acquire(bs, handle);
    //Send a BC_REQUEST_DEATH_NOTIFICATION command to the binder driver via ioctl, mainly for cleanup work when the service dies [see Section 3.3]
    binder_link_to_death(bs, handle, &si->death);
    return 0;
}
           

Registering a service involves the following three parts of work:

  • svc_can_register: permission check; verifies that the selinux permission is satisfied;
  • find_svc: service lookup; searches for a matching service by name;
  • svcinfo_death: service release; if a service with the same name already exists, its old entry is cleaned up first, then the current service is added to the service list svclist; finally the handle is acquired and a death notification is registered, as sketched below.
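
binder_acquire(), called at the end of do_add_service() but not shown earlier, is a thin wrapper in the same style as binder_release() in Section 3.3.6; a sketch matching servicemanager/binder.c:

void binder_acquire(struct binder_state *bs, uint32_t target)
{
    uint32_t cmd[2];
    cmd[0] = BC_ACQUIRE; //take a strong reference on the target handle
    cmd[1] = target;
    binder_write(bs, cmd, sizeof(cmd));
}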

3.2.1 svc_can_register

[-> service_manager.c]

static int svc_can_register(const uint16_t *name, size_t name_len, pid_t spid) {
    const char *perm = "add";
    //Check whether the selinux permission is satisfied
    return check_mac_perms_from_lookup(spid, perm, str8(name, name_len)) ? 1 : 0;
}
           

3.2.2 svcinfo_death

[-> service_manager.c]

void svcinfo_death(struct binder_state *bs, void *ptr) {
    struct svcinfo *si = (struct svcinfo* ) ptr;

    if (si->handle) {
        binder_release(bs, si->handle);
        si->handle = 0;
    }
}
           

3.2.3 bio_get_ref

[-> servicemanager/binder.c]

uint32_t bio_get_ref(struct binder_io *bio) {
    struct flat_binder_object *obj;

    obj = _bio_get_obj(bio);
    if (!obj)
        return 0;

    if (obj->type == BINDER_TYPE_HANDLE)
        return obj->handle;

    return 0;
}
           

3.3 binder_link_to_death

[-> servicemanager/binder.c]

void binder_link_to_death(struct binder_state *bs, uint32_t target, struct binder_death *death) {
    struct {
        uint32_t cmd;
        struct binder_handle_cookie payload;
    } __attribute__((packed)) data;

    data.cmd = BC_REQUEST_DEATH_NOTIFICATION;
    data.payload.handle = target;
    data.payload.cookie = (uintptr_t) death;
    binder_write(bs, &data, sizeof(data)); //[see Section 3.3.1]
}
           

binder_write() follows the same path as in Section 2.4.1: after entering the Binder driver it goes straight into binder_thread_write(), which handles the BC_REQUEST_DEATH_NOTIFICATION command.
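
The packed payload written here mirrors the driver's UAPI struct binder_handle_cookie:

struct binder_handle_cookie {
    __u32            handle; //the target service's handle
    binder_uintptr_t cookie; //opaque to the driver; here a struct binder_death *
} __attribute__((packed));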

3.3.1 binder_ioctl_write_read

[-> kernel/drivers/android/binder.c]

static int binder_ioctl_write_read(struct file *filp,
                unsigned int cmd, unsigned long arg,
                struct binder_thread *thread)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    void __user *ubuf = (void __user *)arg;
    struct binder_write_read bwr;

    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) { //copy the user-space data ubuf into bwr
        ret = -EFAULT;
        goto out;
    }
    if (bwr.write_size > 0) { //the write buffer has data at this point [see Section 3.3.2]
        ret = binder_thread_write(proc, thread,
                  bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
         if (ret < 0) {
              bwr.read_consumed = 0;
              if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                  ret = -EFAULT;
              goto out;
          }
    }

    if (bwr.read_size > 0) { //the read buffer has data at this point [see Section 3.3.3]
        ret = binder_thread_read(proc, thread, bwr.read_buffer,
                 bwr.read_size,
                 &bwr.read_consumed,
                 filp->f_flags & O_NONBLOCK);
        if (!list_empty(&proc->todo))  //if the process todo queue is not empty, wake up the threads waiting on it
            wake_up_interruptible(&proc->wait);
        if (ret < 0) {
            if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                ret = -EFAULT;
            goto out;
        }
    }

    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) { //copy the kernel data bwr back to user-space ubuf
        ret = -EFAULT;
        goto out;
    }
out:
    return ret;
}
           

3.3.2 binder_thread_write

[-> kernel/drivers/android/binder.c]

static int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread, binder_uintptr_t binder_buffer, size_t size, binder_size_t *consumed) {
  uint32_t cmd;
  struct binder_context *context = proc->context;
  void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
  void __user *ptr = buffer + *consumed; //ptr points at the data placed in bwr.write_buffer in Section 3.3
  void __user *end = buffer + size;
  while (ptr < end && thread->return_error == BR_OK) {
    get_user(cmd, (uint32_t __user *)ptr); //fetch BC_REQUEST_DEATH_NOTIFICATION
    ptr += sizeof(uint32_t);
    switch (cmd) {
        case BC_REQUEST_DEATH_NOTIFICATION:{ //register a death notification
            uint32_t target;
            void __user *cookie;
            struct binder_ref *ref;
            struct binder_ref_death *death;

            get_user(target, (uint32_t __user *)ptr); //fetch target
            ptr += sizeof(uint32_t);
            get_user(cookie, (void __user * __user *)ptr); //fetch the death cookie
            ptr += sizeof(void *);

            ref = binder_get_ref(proc, target); //get the target service's binder_ref

            if (cmd == BC_REQUEST_DEATH_NOTIFICATION) {
                if (ref->death) {
                    break;  //death notification already set
                }
                death = kzalloc(sizeof(*death), GFP_KERNEL);

                INIT_LIST_HEAD(&death->work.entry);
                death->cookie = cookie;
                ref->death = death;
                if (ref->node->proc == NULL) { //if the process hosting the target binder service is already dead, send the death notification now
                    ref->death->work.type = BINDER_WORK_DEAD_BINDER;
                    //If the current thread is a binder thread, add the work directly to the current thread's todo queue. Next, see [Section 3.3.3]
                    if (thread->looper & (BINDER_LOOPER_STATE_REGISTERED | BINDER_LOOPER_STATE_ENTERED)) {
                        list_add_tail(&ref->death->work.entry, &thread->todo);
                    } else {
                        list_add_tail(&ref->death->work.entry, &proc->todo);
                        wake_up_interruptible(&proc->wait);
                    }
                }
            } else {
                ...
            }
        } break;
      case ...;
    }
    *consumed = ptr - buffer;
  }
}
           

The proc and thread in this method both refer to the current servicemanager process. At this point the todo queue has data, so execution proceeds into binder_thread_read.

So in which scenario does a BINDER_WORK_DEAD_BINDER work item get added to the queue? When the process hosting a binder dies, binder_release() is called, which in turn calls binder_node_release(); that is the path that delivers the death-notification callback, as sketched below.
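
An abridged sketch of that path in a 6.0-era kernel's binder_node_release(): for every binder_ref that registered a death notification on the dying node, a BINDER_WORK_DEAD_BINDER item is queued on the referencing process's todo list (detaching the node, locking, and error paths are trimmed):

static int binder_node_release(struct binder_node *node, int refs)
{
    struct binder_ref *ref;

    //... detach the node from its dying proc ...
    hlist_for_each_entry(ref, &node->refs, node_entry) {
        refs++;
        if (!ref->death) //this ref never asked for a notification
            continue;

        if (list_empty(&ref->death->work.entry)) {
            ref->death->work.type = BINDER_WORK_DEAD_BINDER;
            //queue the death work on the process holding the ref
            //(servicemanager, for refs created via do_add_service)
            list_add_tail(&ref->death->work.entry, &ref->proc->todo);
            wake_up_interruptible(&ref->proc->wait);
        }
    }
    return refs;
}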

3.3.3 binder_thread_read

static int binder_thread_read(struct binder_proc *proc,
                  struct binder_thread *thread,
                  binder_uintptr_t binder_buffer, size_t size,
                  binder_size_t *consumed, int non_block)
{
    ...
    //Only when the current thread's todo queue is empty and its transaction_stack is also empty does it start handling process-wide work
    if (wait_for_proc_work) {
        ...
        ret = wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread));
    } else {
        ...
        ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread));
    }
    binder_lock(__func__); //acquire the lock

    if (wait_for_proc_work)
        proc->ready_threads--; //one fewer idle binder thread
    thread->looper &= ~BINDER_LOOPER_STATE_WAITING;

    while (1) {
        uint32_t cmd;
        struct binder_transaction_data tr;
        struct binder_work *w;
        struct binder_transaction *t = NULL;

        //Take the previously queued binder_work off the todo queue; its type here is BINDER_WORK_DEAD_BINDER
        if (!list_empty(&thread->todo)) {
            w = list_first_entry(&thread->todo, struct binder_work,
                         entry);
        } else if (!list_empty(&proc->todo) && wait_for_proc_work) {
            w = list_first_entry(&proc->todo, struct binder_work,
                         entry);
        }

        switch (w->type) {
            case BINDER_WORK_DEAD_BINDER: {
              struct binder_ref_death *death;
              uint32_t cmd;

              death = container_of(w, struct binder_ref_death, work);
              if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION)
                  ...
              else
                  cmd = BR_DEAD_BINDER; //this branch is taken
              put_user(cmd, (uint32_t __user *)ptr); //copy to user space [see Section 3.3.4]
              ptr += sizeof(uint32_t);

              //the cookie here is the svcinfo_death passed earlier
              put_user(death->cookie, (binder_uintptr_t __user *)ptr);
              ptr += sizeof(binder_uintptr_t);

              if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION) {
                  ...
              } else
                  list_move(&w->entry, &proc->delivered_death);
              if (cmd == BR_DEAD_BINDER)
                  goto done;
            } break;
        }
    }
    ...
    return 0;
}
           

将指令BR_DEAD_BINDER寫到使用者空間, 此處的cookie是前面傳遞的svcinfo_death. 當binder_loop下一次 執行binder_parse的過程便會處理該消息。

3.3.4 binder_parse

[-> servicemanager/binder.c]

int binder_parse(struct binder_state *bs, struct binder_io *bio, uintptr_t ptr, size_t size, binder_handler func) {
    int r = 1;
    uintptr_t end = ptr + (uintptr_t) size;

    while (ptr < end) {
        uint32_t cmd = *(uint32_t *) ptr;
        ptr += sizeof(uint32_t);
        switch(cmd) {
            case BR_DEAD_BINDER: {
                struct binder_death *death = (struct binder_death *)(uintptr_t) *(binder_uintptr_t *)ptr;
                ptr += sizeof(binder_uintptr_t);
                // binder death notification [see Section 3.3.5]
                death->func(bs, death->ptr);
                break;
            }
            ...
        }
    }
    return r;
}
           

From Section 3.2, where si->death.func = (void*) svcinfo_death; was set, we can see that death->func here executes the svcinfo_death() method.

3.3.5 svcinfo_death

[-> service_manager.c]

void svcinfo_death(struct binder_state *bs, void *ptr) {
    struct svcinfo *si = (struct svcinfo* ) ptr;

    if (si->handle) {
        binder_release(bs, si->handle);
        si->handle = 0;
    }
}
           

3.3.6 binder_release

[-> servicemanager/binder.c]

void binder_release(struct binder_state *bs, uint32_t target) {
    uint32_t cmd[2];
    cmd[0] = BC_RELEASE;
    cmd[1] = target;
    binder_write(bs, cmd, sizeof(cmd));
}
           

Writes the BC_RELEASE command to the Binder Driver; after entering the Binder Driver, this ultimately executes binder_dec_ref(ref, 1) to decrement the reference on the binder node.
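
A sketch of how binder_thread_write() dispatches the reference-counting commands on a 6.0-era kernel (error handling trimmed; exact signatures may differ slightly across kernel branches):

/* inside binder_thread_write()'s switch (cmd) */
case BC_INCREFS:
case BC_ACQUIRE:
case BC_RELEASE:
case BC_DECREFS: {
    uint32_t target;
    struct binder_ref *ref;

    get_user(target, (uint32_t __user *)ptr); //the handle written after the command
    ptr += sizeof(uint32_t);
    ref = binder_get_ref(proc, target); //look up the binder_ref for this handle

    switch (cmd) {
    case BC_ACQUIRE:
        binder_inc_ref(ref, 1, NULL); //strong reference +1
        break;
    case BC_RELEASE:
        binder_dec_ref(ref, 1); //strong reference -1; may free the ref and node
        break;
    case BC_INCREFS:
        binder_inc_ref(ref, 0, NULL); //weak reference +1
        break;
    case BC_DECREFS:
        binder_dec_ref(ref, 0);
        break;
    }
} break;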

3.4 binder_send_reply

[-> servicemanager/binder.c]

void binder_send_reply(struct binder_state *bs, struct binder_io *reply, binder_uintptr_t buffer_to_free, int status) {
    struct {
        uint32_t cmd_free;
        binder_uintptr_t buffer;
        uint32_t cmd_reply;
        struct binder_transaction_data txn;
    } __attribute__((packed)) data;

    data.cmd_free = BC_FREE_BUFFER; //free-buffer command
    data.buffer = buffer_to_free;
    data.cmd_reply = BC_REPLY; // reply command
    data.txn.target.ptr = 0;
    data.txn.cookie = 0;
    data.txn.code = 0;
    if (status) {
        data.txn.flags = TF_STATUS_CODE;
        data.txn.data_size = sizeof(int);
        data.txn.offsets_size = 0;
        data.txn.data.ptr.buffer = (uintptr_t)&status;
        data.txn.data.ptr.offsets = 0;
    } else {
        data.txn.flags = 0;
        data.txn.data_size = reply->data - reply->data0;
        data.txn.offsets_size = ((char*) reply->offs) - ((char*) reply->offs0);
        data.txn.data.ptr.buffer = (uintptr_t)reply->data0;
        data.txn.data.ptr.offsets = (uintptr_t)reply->offs0;
    }
    //communicate with the Binder driver
    binder_write(bs, &data, sizeof(data));
}
           

When binder_parse() runs in Section 2.5, it first calls svcmgr_handler() and then executes the binder_send_reply() step. This method calls binder_write() [Section 2.4.1] to enter the binder driver, sending the BC_FREE_BUFFER and BC_REPLY command protocols to the Binder driver and delivering the reply to the client; for a service lookup, the data region of data holds an object whose TYPE is HANDLE.

4. Summary

ServiceManager centrally manages all services in the system. It uses permission checks to control whether a process may register a service, and it looks up the corresponding Service by string name. Because ServiceManager registers a death notification with every service that registers with it, when a service's process dies, only ServiceManager needs to be notified. Each client can then learn about the server process by querying ServiceManager, which avoids the heavy load that would result from every client probing server processes directly.

ServiceManager startup flow:

  1. Open the binder driver and call mmap() to allocate a 128KB memory-mapped region: binder_open();
  2. Notify the binder driver to make it the context-manager daemon: binder_become_context_manager();
  3. Verify selinux permissions, deciding whether a process may register or look up a given service;
  4. Enter the loop state and wait for requests from clients: binder_loop();
  5. Service registration is keyed by service name; if the same service is already registered, the previous registration info is removed before re-registering;
  6. Death notification: when the process hosting a binder dies, binder_release() is called, then binder_node_release(); this path delivers the death-notification callback.

ServiceManager's two most central functions are service lookup and registration:

  • Registering a service: record the service name and handle, and save them into the svclist;
  • Looking up a service: query the corresponding handle by service name.
