
Binder Architecture – The Binder Driver


Kernel file structures

  1. task_struct

    The Linux kernel manages a process through the task_struct structure, known as the process descriptor; this structure contains all the information a process needs.

  2. struct file and struct files_struct

    In *nix systems, everything is a file. In the kernel, a file is described by a struct file; in user space it is represented by an integer file descriptor, which corresponds to that struct file. All the struct file objects of a process are organized in a struct files_struct, and struct task_struct has a struct files_struct *files field describing the files opened in that process (see the sketch below).
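
To make that relationship concrete, here is a minimal sketch of how an integer file descriptor reaches the kernel's struct file through task_struct->files. It assumes the classic fdtable layout; the helper name demo_fd_to_file is made up, and field/helper availability varies by kernel version.

/*
 * A minimal sketch (not from the article): how a user-space fd reaches the
 * kernel's struct file via task_struct->files. Assumes the classic fdtable
 * layout; details vary by kernel version.
 */
#include <linux/sched.h>
#include <linux/fdtable.h>
#include <linux/fs.h>
#include <linux/rcupdate.h>

static struct file *demo_fd_to_file(struct task_struct *tsk, unsigned int fd)
{
	struct files_struct *files = tsk->files;  /* the process's open-file table */
	struct fdtable *fdt;
	struct file *filp = NULL;

	rcu_read_lock();
	fdt = files_fdtable(files);
	if (fd < fdt->max_fds)
		filp = rcu_dereference(fdt->fd[fd]); /* NULL if the fd is not open */
	rcu_read_unlock();

	return filp;
}
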

Kernel memory management

Memory zones

In Linux, a process's address space is split into kernel space and user space. On a 32-bit system the virtual address space covers 0 to 4 GB, divided into two parts: the top 1 GB (virtual addresses 0xC0000000 to 0xFFFFFFFF) is reserved for the kernel and is called "kernel space", while the lower 3 GB (virtual addresses 0x00000000 to 0xBFFFFFFF) is used by each process and is called "user space". Kernel space is shared by all processes.

Based on hardware characteristics, kernel address space is further divided into several zones, mainly ZONE_DMA, ZONE_NORMAL and ZONE_HIGHMEM. Physical memory is mapped linearly into ZONE_NORMAL; in theory, if physical memory does not exceed 1 GB, the kernel's 1 GB of linear space is enough to map all of it. If physical memory is larger than 1 GB, then in order for that 1 GB of linear addresses to reach more than 1 GB of physical memory, physical memory is split into two parts. The ZONE_NORMAL region is direct-mapped and is normally 896 MB, which gives the linear relationship virtual address = physical address + PAGE_OFFSET, where PAGE_OFFSET is 3 GB (see the sketch below). The remaining 128 MB of kernel address space is called high memory; it acts as a window that is mapped dynamically, so memory beyond the direct-mapped range can still be accessed. ZONE_DMA is mainly used for hardware-specific address access.
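
The linear relationship above can be written down directly. The sketch below is illustrative only: the macro names are made up (the kernel's real equivalents are __va()/__pa()), and it assumes the usual 3G/1G split on a 32-bit kernel.

/*
 * Illustrative only: the direct (linear) mapping of ZONE_NORMAL on a 32-bit
 * kernel with the usual 3G/1G split. The kernel's real macros are __va()/__pa().
 */
#define DEMO_PAGE_OFFSET 0xC0000000UL  /* kernel space starts at 3 GB */

#define demo_va(paddr) ((void *)((unsigned long)(paddr) + DEMO_PAGE_OFFSET))
#define demo_pa(vaddr) ((unsigned long)(vaddr) - DEMO_PAGE_OFFSET)

/*
 * Example: physical 0x00100000 (1 MB) maps to kernel virtual 0xC0100000.
 * This only holds for the first ~896 MB (ZONE_NORMAL); high-memory pages
 * must be mapped on demand (kmap()/kmap_atomic()) through the 128 MB window.
 */
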

On an Android x86 emulator you can see MemTotal, HighTotal and LowTotal:

generic_x86:/ # cat /proc/meminfo
MemTotal:        1030820 kB
MemFree:          519392 kB
Buffers:            4460 kB
Cached:           325292 kB
SwapCached:            0 kB
Active:           184672 kB
Inactive:         290712 kB

HighTotal:        180104 kB
HighFree:           1132 kB
LowTotal:         850716 kB
LowFree:          518260 kB
           

mm_struct and vm_area_struct

mm_struct describes a process's virtual address space. A process's mm_struct contains information about the loaded executable image as well as the process's page directory pointer pgd. The structure also holds pointers to vm_area_struct structures, each of which represents one virtual address range of the process. A vm_area_struct in turn holds a pointer to a vm_operations_struct, which describes the operations available on that range (a small traversal sketch follows).
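
As a rough illustration of how these structures hang together, the sketch below walks a task's VMA list. It assumes the older list-based layout (mm->mmap / vm_next) and the mmap_sem name used by the kernel version discussed in this article; newer kernels organize VMAs differently.

/*
 * A minimal sketch, assuming the older VMA list layout and the mmap_sem lock
 * name; the function itself is illustrative, not kernel code.
 */
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/mm_types.h>
#include <linux/printk.h>

static void demo_dump_vmas(struct task_struct *tsk)
{
	struct mm_struct *mm = tsk->mm;
	struct vm_area_struct *vma;

	if (!mm)
		return;  /* kernel threads have no user address space */

	down_read(&mm->mmap_sem);
	for (vma = mm->mmap; vma; vma = vma->vm_next)
		pr_info("vma %lx-%lx flags %lx\n",
			vma->vm_start, vma->vm_end, vma->vm_flags);
	up_read(&mm->mmap_sem);
}
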

Binder control data structures

binder_proc

binder_proc is tied to a process: every user-space process that uses Binder has a corresponding binder_proc structure in the kernel. The binder_proc structures of all processes are chained together on a kernel hlist; the relevant structure is hlist_node (see the kernel list API for details, and the short sketch after the definition below).

struct binder_proc {
	struct hlist_node proc_node;
	struct rb_root threads;
	struct rb_root nodes;
	struct rb_root refs_by_desc;
	struct rb_root refs_by_node;
	int pid;
	struct vm_area_struct *vma;
	struct mm_struct *vma_vm_mm;
	struct task_struct *tsk;
	struct files_struct *files;
	struct hlist_node deferred_work_node;
	int deferred_work;
	void *buffer;
	ptrdiff_t user_buffer_offset;

	struct list_head buffers;
	struct rb_root free_buffers;
	struct rb_root allocated_buffers;
	size_t free_async_space;

	struct page **pages;
	size_t buffer_size;
	uint32_t buffer_free;
	struct list_head todo;
	wait_queue_head_t wait;
	struct binder_stats stats;
	struct list_head delivered_death;
	int max_threads;
	int requested_threads;
	int requested_threads_started;
	int ready_threads;
	long default_priority;
	struct dentry *debugfs_entry;
};
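
For reference, here is a minimal sketch of the hlist calls the global binder_procs list relies on. The list head demo_binder_procs and the two helpers are stand-ins, not driver code; the three-argument hlist_for_each_entry() form is the one used since Linux 3.9.

/*
 * A minimal sketch of the hlist API used for the global binder_proc list.
 * demo_* names are illustrative only.
 */
#include <linux/list.h>
#include <linux/printk.h>

static HLIST_HEAD(demo_binder_procs);

static void demo_add_proc(struct binder_proc *proc)
{
	hlist_add_head(&proc->proc_node, &demo_binder_procs);  /* as binder_open() does */
}

static void demo_list_procs(void)
{
	struct binder_proc *proc;

	hlist_for_each_entry(proc, &demo_binder_procs, proc_node)
		pr_info("binder_proc for pid %d\n", proc->pid);
}
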
           

binder_thread

The binder_thread structure corresponds to a user-space thread and describes its information. binder_proc contains a struct rb_root threads red-black tree that holds the thread information for each process. rb_node is the kernel's red-black tree node structure, and binder_thread has a struct rb_node rb_node field representing its own node in that tree (a lookup sketch follows the definition below).

struct binder_thread {
	struct binder_proc *proc;
	struct rb_node rb_node;
	int pid;
	int looper;
	struct binder_transaction *transaction_stack;
	struct list_head todo;
	uint32_t return_error; /* Write failed, return error code in read buf */
	uint32_t return_error2; /* Write failed, return error code in read */
		/* buffer. Used when sending a reply to a dead process that */
		/* we are also waiting on */
	wait_queue_head_t wait;
	struct binder_stats stats;
};
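
A minimal sketch of how such a thread is found in proc->threads by pid, modeled on the lookup half of binder_get_thread; the real function also allocates and inserts a new binder_thread when the lookup misses.

/*
 * A minimal sketch of the rb-tree lookup in proc->threads, keyed by pid,
 * in the spirit of binder_get_thread(); the allocation/insertion path is omitted.
 */
#include <linux/rbtree.h>

static struct binder_thread *demo_find_thread(struct binder_proc *proc, int pid)
{
	struct rb_node *n = proc->threads.rb_node;
	struct binder_thread *thread;

	while (n) {
		thread = rb_entry(n, struct binder_thread, rb_node);

		if (pid < thread->pid)
			n = n->rb_left;
		else if (pid > thread->pid)
			n = n->rb_right;
		else
			return thread;   /* found the calling thread */
	}
	return NULL;                     /* caller would create and insert one */
}
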
           

binder_node

binder_node represents a Binder service in the kernel, i.e. the server side; these nodes are also organized in a red-black tree.

struct binder_node {
	int debug_id;
	struct binder_work work;
	union {
		struct rb_node rb_node;
		struct hlist_node dead_node;
	};
	struct binder_proc *proc;
	struct hlist_head refs;
	int internal_strong_refs;
	int local_weak_refs;
	int local_strong_refs;
	binder_uintptr_t ptr;
	binder_uintptr_t cookie;
	unsigned has_strong_ref:1;
	unsigned pending_strong_ref:1;
	unsigned has_weak_ref:1;
	unsigned pending_weak_ref:1;
	unsigned has_async_transaction:1;
	unsigned accept_fds:1;
	unsigned min_priority:8;
	struct list_head async_todo;
};
           

binder_ref

binder_ref also represents a Binder node in the kernel, but unlike binder_node it represents the proxy (client) side. binder_node and binder_ref reference each other in a one-to-many relationship: in binder_node, the binder_refs are kept on a hash list, struct hlist_head refs, while a binder_ref holds only a single pointer to its binder_node. This matches the relationship between server and clients.

struct binder_ref {
	/* Lookups needed: */
	/*   node + proc => ref (transaction) */
	/*   desc + proc => ref (transaction, inc/dec ref) */
	/*   node => refs + procs (proc exit) */
	int debug_id;
	struct rb_node rb_node_desc;
	struct rb_node rb_node_node;
	struct hlist_node node_entry;
	struct binder_proc *proc;
	struct binder_node *node;
	uint32_t desc;
	int strong;
	int weak;
	struct binder_ref_death *death;
};
           

binder_work

binder_work represents one piece of Binder work; concretely, each ioctl produces a binder_work.

struct binder_work {
	struct list_head entry;
	enum {
		BINDER_WORK_TRANSACTION = 1,
		BINDER_WORK_TRANSACTION_COMPLETE,
		BINDER_WORK_NODE,
		BINDER_WORK_DEAD_BINDER,
		BINDER_WORK_DEAD_BINDER_AND_CLEAR,
		BINDER_WORK_CLEAR_DEATH_NOTIFICATION,
	} type;
};
           

In the kernel, these structures are related as shown in the figure below:

[Figure: relationships between the Binder driver data structures]

Binder transfer data structures

struct binder_write_read

The struct binder_write_read structure describes, for one BINDER_WRITE_READ ioctl, the data that must be copied in from user space and the data that must be returned from kernel space.

/*
 * On 64-bit platforms where user code may run in 32-bits the driver must
 * translate the buffer (and local binder) addresses appropriately.
 */

struct binder_write_read {
	binder_size_t		write_size;	     /* bytes to write */
	binder_size_t		write_consumed;	 /* bytes consumed by driver */
	binder_uintptr_t	write_buffer;
	binder_size_t		read_size;	     /* bytes to read */
	binder_size_t		read_consumed;	 /* bytes consumed by driver */
	binder_uintptr_t	read_buffer;
};
           

Binder file operations

From the definition of its struct file_operations, the binder driver supports ioctl, mmap, open, close (release), poll and flush. The three most important are open, ioctl and mmap, which we have already encountered.

static const struct file_operations binder_fops = {
	.owner = THIS_MODULE,
	.poll = binder_poll,
	.unlocked_ioctl = binder_ioctl,
	.compat_ioctl = binder_ioctl,
	.mmap = binder_mmap,
	.open = binder_open,
	.flush = binder_flush,
	.release = binder_release,
};
           

binder_open

  1. kzalloc allocates the binder_proc structure and the proc->todo list is initialized; the annotated code below walks through the rest.
static int binder_open(struct inode *nodp, struct file *filp)
{
	struct binder_proc *proc;

	proc = kzalloc(sizeof(*proc), GFP_KERNEL);    // allocate the binder_proc
	if (proc == NULL) 
		return -ENOMEM;
	get_task_struct(current);                     // take a reference on the current task
	proc->tsk = current;
	proc->vma_vm_mm = current->mm;                // mm holds the current process's memory-management info
	INIT_LIST_HEAD(&proc->todo);                  // initialize the todo list
	init_waitqueue_head(&proc->wait);             // initialize the wait queue
	proc->default_priority = task_nice(current);

	binder_lock(__func__);

	binder_stats_created(BINDER_STAT_PROC);       // count how many times the binder driver has been opened
	hlist_add_head(&proc->proc_node, &binder_procs);  // add this binder_proc to the global hlist
	proc->pid = current->group_leader->pid;
	INIT_LIST_HEAD(&proc->delivered_death);       // initialize the delivered_death list
	filp->private_data = proc;

	binder_unlock(__func__);

	if (binder_debugfs_dir_entry_proc) {
		char strbuf[11];

		snprintf(strbuf, sizeof(strbuf), "%u", proc->pid);
		proc->debugfs_entry = debugfs_create_file(strbuf, S_IRUGO,
			binder_debugfs_dir_entry_proc, proc, &binder_proc_fops);
	}

	return 0;
}
           

binder_stats

binder_stats_created(BINDER_STAT_PROC) records how many binder_proc objects have been created, i.e. how often the driver has been opened. The kernel keeps a global binder_stats structure that tracks counts for the seven kinds of binder objects.

enum binder_stat_types {
	BINDER_STAT_PROC,
	BINDER_STAT_THREAD,
	BINDER_STAT_NODE,
	BINDER_STAT_REF,
	BINDER_STAT_DEATH,
	BINDER_STAT_TRANSACTION,
	BINDER_STAT_TRANSACTION_COMPLETE,
	BINDER_STAT_COUNT
};

struct binder_stats {
	int br[_IOC_NR(BR_FAILED_REPLY) + 1];
	int bc[_IOC_NR(BC_DEAD_BINDER_DONE) + 1];
	int obj_created[BINDER_STAT_COUNT];
	int obj_deleted[BINDER_STAT_COUNT];
};

static struct binder_stats binder_stats;

static inline void binder_stats_deleted(enum binder_stat_types type)
{
	binder_stats.obj_deleted[type]++;
}

static inline void binder_stats_created(enum binder_stat_types type)
{
	binder_stats.obj_created[type]++;
}
           

binder_mmap

static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
{
	int ret;
	struct vm_struct *area;
	struct binder_proc *proc = filp->private_data;  // get the binder_proc of the current process
	const char *failure_string;
	struct binder_buffer *buffer;

	if (proc->tsk != current)
		return -EINVAL;

	if ((vma->vm_end - vma->vm_start) > SZ_4M)    // at most 4 MB may be mapped
		vma->vm_end = vma->vm_start + SZ_4M;


	vma->vm_flags = (vma->vm_flags | VM_DONTCOPY) & ~VM_MAYWRITE;

	mutex_lock(&binder_mmap_lock);
	if (proc->buffer) {                           // already mapped, bail out
		ret = -EBUSY;
		failure_string = "already mapped";
		goto err_already_mapped;
	}

	// Reserve a range of kernel virtual addresses (logical addresses only; on
	// 32-bit machines this comes from the dynamically mapped high-memory window,
	// the 64-bit case is not covered here). No physical memory is allocated yet.
	area = get_vm_area(vma->vm_end - vma->vm_start, VM_IOREMAP);
	if (area == NULL) {
		ret = -ENOMEM;
		failure_string = "get_vm_area";
		goto err_get_vm_area_failed;
	}
	proc->buffer = area->addr;     // proc->buffer now points at the reserved kernel address
	// offset between the kernel address and the user address; they back the same memory
	proc->user_buffer_offset = vma->vm_start - (uintptr_t)proc->buffer;
	mutex_unlock(&binder_mmap_lock);
	// array of struct page * describing the physical pages the kernel allocates;
	// each physical page has one struct page
	proc->pages = kzalloc(sizeof(proc->pages[0]) * ((vma->vm_end - vma->vm_start) / PAGE_SIZE), GFP_KERNEL); 
	proc->buffer_size = vma->vm_end - vma->vm_start;

	vma->vm_ops = &binder_vm_ops;
	vma->vm_private_data = proc;
	
	// Map just the first page of the binder buffer to a physical page.
	// Note the arguments: the second is 1 (allocate), and the fourth minus
	// the third is exactly PAGE_SIZE.
	if (binder_update_page_range(proc, 1, proc->buffer, proc->buffer + PAGE_SIZE, vma)) {
		ret = -ENOMEM;
		failure_string = "alloc small buf";
		goto err_alloc_small_buf_failed;
	}
	
	// the buffers allocated to each process are also kept on a list
	buffer = proc->buffer;
	INIT_LIST_HEAD(&proc->buffers);
	list_add(&buffer->entry, &proc->buffers);
	buffer->free = 1;
	
	// insert the buffer into the free_buffers red-black tree of binder_proc
	binder_insert_free_buffer(proc, buffer);
	proc->free_async_space = proc->buffer_size / 2;
	barrier();
	proc->files = get_files_struct(current);
	proc->vma = vma;
	proc->vma_vm_mm = vma->vm_mm;

	return 0;

	/* the error labels (err_alloc_small_buf_failed, err_get_vm_area_failed,
	 * err_already_mapped) are omitted in this excerpt */
}
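
The point to remember from binder_mmap is that the same physical page is visible at two virtual addresses, and the two differ by the constant user_buffer_offset. A minimal sketch of that correspondence; the helper names are made up.

/*
 * A minimal sketch: converting between the kernel-side and user-side views of
 * the mmapped binder buffer. proc->user_buffer_offset is the constant computed
 * in binder_mmap; the helper names are illustrative only.
 */
static uintptr_t demo_kernel_to_user(struct binder_proc *proc, void *kaddr)
{
	return (uintptr_t)kaddr + proc->user_buffer_offset;
}

static void *demo_user_to_kernel(struct binder_proc *proc, uintptr_t uaddr)
{
	return (void *)(uaddr - proc->user_buffer_offset);
}
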
           

binder_buffer

The binder_buffer structure describes the mmapped kernel memory.

struct binder_buffer {
	struct list_head entry; /* free and allocated entries by address */
	struct rb_node rb_node; /* free entry by size or allocated entry */
				/* by address */
	unsigned free:1;
	unsigned allow_user_free:1;
	unsigned async_transaction:1;
	unsigned debug_id:29;

	struct binder_transaction *transaction;

	struct binder_node *target_node;
	size_t data_size;
	size_t offsets_size;
	uint8_t data[0];
};
           

binder_update_page_range

The most important call in binder_mmap is binder_update_page_range. This function allocates the actual physical pages, maps them into the kernel page tables, and then maps them into the user address range as well.

if (binder_update_page_range(proc, 1, proc->buffer, proc->buffer + PAGE_SIZE, vma)) {
}
	
static int binder_update_page_range(struct binder_proc *proc, int allocate,
				    void *start, void *end,
				    struct vm_area_struct *vma)
{
	void *page_addr;
	unsigned long user_page_addr;
	struct page **page;
	struct mm_struct *mm;

	if (end <= start)
		return 0;

	if (vma)
		mm = NULL;
	else
		mm = get_task_mm(proc->tsk);

	if (mm) {
		down_write(&mm->mmap_sem);
		vma = proc->vma;
		if (vma && mm != proc->vma_vm_mm) {
			pr_err("%d: vma mm and task mm mismatch\n",
				proc->pid);
			vma = NULL;
		}
	}

	if (allocate == 0)
		goto free_range;

	// as noted above, end - start == PAGE_SIZE here, so the loop runs only once
	for (page_addr = start; page_addr < end; page_addr += PAGE_SIZE) {
		int ret;

		page = &proc->pages[(page_addr - proc->buffer) / PAGE_SIZE];

		BUG_ON(*page);
		// allocate one physical page and store its struct page pointer in proc->pages
		*page = alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO);
		ret = map_kernel_range_noflush((unsigned long)page_addr,PAGE_SIZE, PAGE_KERNEL, page);
		flush_cache_vmap((unsigned long)page_addr,(unsigned long)page_addr + PAGE_SIZE);
	
		// compute the user-space address and map the same physical page there
		user_page_addr = (uintptr_t)page_addr + proc->user_buffer_offset;
		ret = vm_insert_page(vma, user_page_addr, page[0]);
	}

	if (mm) {
		up_write(&mm->mmap_sem);
		mmput(mm);
	}
	return 0;
	
}
           

binder_ioctl

binder_ioctl supports the following commands:

#define BINDER_WRITE_READ		_IOWR('b', 1, struct binder_write_read) // binder read/write; most binder communication uses this command
#define BINDER_SET_IDLE_TIMEOUT		_IOW('b', 3, __s64)
#define BINDER_SET_MAX_THREADS		_IOW('b', 5, __u32)              // set the maximum number of binder threads
#define BINDER_SET_IDLE_PRIORITY	_IOW('b', 6, __s32)
#define BINDER_SET_CONTEXT_MGR		_IOW('b', 7, __s32)               // used by ServiceManager to register itself as the context manager
#define BINDER_THREAD_EXIT		_IOW('b', 8, __s32)                     // thread exit
#define BINDER_VERSION			_IOWR('b', 9, struct binder_version)   // protocol version
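
As a user-space example of one of the simpler commands, the sketch below issues BINDER_SET_MAX_THREADS on an already opened /dev/binder descriptor, roughly what libbinder's ProcessState does on startup; the function name and the uapi header path are assumptions, not code from this article.

/*
 * User-space sketch (not from the article): set the maximum number of binder
 * threads on an open /dev/binder fd. The header path varies by setup.
 */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>

static int demo_set_max_threads(int binder_fd, uint32_t max_threads)
{
	/* BINDER_SET_MAX_THREADS takes a pointer to a 32-bit count */
	return ioctl(binder_fd, BINDER_SET_MAX_THREADS, &max_threads);
}
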
           

binder_ioctl is not complicated overall and its structure is fairly clear. The most important function it calls is binder_ioctl_write_read, where all binder data transfer happens; we will analyze that later.

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	int ret;
	struct binder_proc *proc = filp->private_data;
	struct binder_thread *thread;
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg;

	if (unlikely(current->mm != proc->vma_vm_mm)) {
		pr_err("current mm mismatch proc mm\n");
		return -EINVAL;
	}
	trace_binder_ioctl(cmd, arg);

	// binder_stop_on_user_error == 0, so this does not block
	ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);   
	if (ret)
		goto err_unlocked;

	binder_lock(__func__);
	// look up the calling thread's binder_thread and add it to the proc->threads red-black tree
	thread = binder_get_thread(proc);
	if (thread == NULL) {
		ret = -ENOMEM;
		goto err;
	}

	switch (cmd) {
	case BINDER_WRITE_READ:
		ret = binder_ioctl_write_read(filp, cmd, arg, thread);
		if (ret)
			goto err;
		break;
	case BINDER_SET_MAX_THREADS:
		if (copy_from_user(&proc->max_threads, ubuf, sizeof(proc->max_threads))) {
			ret = -EINVAL;
			goto err;
		}
		break;
	case BINDER_SET_CONTEXT_MGR:
		ret = binder_ioctl_set_ctx_mgr(filp);
		if (ret)
			goto err;
		break;
	case BINDER_THREAD_EXIT:
		binder_debug(BINDER_DEBUG_THREADS, "%d:%d exit\n",
			     proc->pid, thread->pid);
		binder_free_thread(proc, thread);
		thread = NULL;
		break;
	case BINDER_VERSION: {
		struct binder_version __user *ver = ubuf;

		if (size != sizeof(struct binder_version)) {
			ret = -EINVAL;
			goto err;
		}
		if (put_user(BINDER_CURRENT_PROTOCOL_VERSION,
			     &ver->protocol_version)) {
			ret = -EINVAL;
			goto err;
		}
		break;
	}
	default:
		ret = -EINVAL;
		goto err;
	}
	ret = 0;
err:
	// update the thread's looper state
	if (thread)
		thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
	binder_unlock(__func__);
	wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
	if (ret && ret != -ERESTARTSYS)
		pr_info("%d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
err_unlocked:
	trace_binder_ioctl_done(ret);
	return ret;
}
           

How ServiceManager interacts with the driver

ServiceManager calls the functions below in sequence. We have already analyzed these kernel APIs, so let's see what each call actually does.

  1. open("/dev/binder", O_RDWR | O_CLOEXEC)
  2. ioctl(bs->fd, BINDER_VERSION, &vers)
  3. mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0)
  4. ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0)
  5. ioctl(bs->fd, BINDER_WRITE_READ, &bwr); the bwr data contains BC_ENTER_LOOPER
  6. ioctl(bs->fd, BINDER_WRITE_READ, &bwr)

open

open is fairly simple: as analyzed above, it creates the binder_proc, adds it to the global list, initializes the process-related information, and initializes the red-black trees.

ioctl BINDER_VERSION

The first ioctl uses the BINDER_VERSION command:

case BINDER_VERSION: {
		struct binder_version __user *ver = ubuf;

		if (size != sizeof(struct binder_version)) {
			ret = -EINVAL;
			goto err;
		}
		if (put_user(BINDER_CURRENT_PROTOCOL_VERSION,
			     &ver->protocol_version)) {
			ret = -EINVAL;
			goto err;
		}
		break;
           

The kernel writes its binder protocol version through the user-space address that was passed in, so user space can check whether the versions match (a sketch of that check follows).
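
A minimal user-space sketch of that version check, in the spirit of what servicemanager's binder_open does after opening /dev/binder; error handling is trimmed and the header path is an assumption.

/*
 * User-space sketch of the version check described above. Header path varies
 * by setup; the function name is illustrative.
 */
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>

static int demo_check_binder_version(int binder_fd)
{
	struct binder_version vers;

	if (ioctl(binder_fd, BINDER_VERSION, &vers) == -1 ||
	    vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION) {
		fprintf(stderr, "binder: kernel protocol version mismatch\n");
		return -1;
	}
	return 0;
}
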

mmap

mmap reserves the virtual address range, allocates one page of physical memory, and establishes the mapping between the kernel address and the user address.

ioctl BINDER_SET_CONTEXT_MGR

case BINDER_SET_CONTEXT_MGR:
	ret = binder_ioctl_set_ctx_mgr(filp);
	if (ret)
		goto err;
	break;	
           

binder_ioctl_set_ctx_mgr essentially does one thing: it calls binder_new_node. Note that the last two arguments are 0, 0.

  1. binder_new_node first searches the binder_proc's nodes tree for a suitable insertion position; since this is the first call, no node has been inserted into the red-black tree yet.
  2. kzalloc allocates the binder_node.
  3. The node is inserted into the red-black tree.
  4. node->debug_id = ++binder_last_id; note that binder_last_id is a global static variable, so here node->debug_id = 1.
  5. The ptr and cookie fields are assigned; both are 0, and 0 represents ServiceManager.

At this point the binder driver's first binder_node has been created.

static int binder_ioctl_set_ctx_mgr(struct file *filp)
{
	int ret = 0;
	struct binder_proc *proc = filp->private_data;
	kuid_t curr_euid = current_euid();
    
    ......
    
	binder_context_mgr_node = binder_new_node(proc, 0, 0);
	if (binder_context_mgr_node == NULL) {
		ret = -ENOMEM;
		goto out;
	}
	binder_context_mgr_node->local_weak_refs++;
	binder_context_mgr_node->local_strong_refs++;
	binder_context_mgr_node->has_strong_ref = 1;
	binder_context_mgr_node->has_weak_ref = 1;
out:
	return ret;
}

static struct binder_node *binder_new_node(struct binder_proc *proc,
					   binder_uintptr_t ptr,
					   binder_uintptr_t cookie)
{
	struct rb_node **p = &proc->nodes.rb_node;
	struct rb_node *parent = NULL;
	struct binder_node *node;

	// the first time this is called, proc->nodes is empty, so *p == NULL
	while (*p) {
		parent = *p;
		node = rb_entry(parent, struct binder_node, rb_node);

		if (ptr < node->ptr)
			p = &(*p)->rb_left;
		else if (ptr > node->ptr)
			p = &(*p)->rb_right;
		else
			return NULL;
	}

	node = kzalloc(sizeof(*node), GFP_KERNEL);
	if (node == NULL)
		return NULL;
	binder_stats_created(BINDER_STAT_NODE);
	rb_link_node(&node->rb_node, parent, p);
	rb_insert_color(&node->rb_node, &proc->nodes);
	node->debug_id = ++binder_last_id;
	node->proc = proc;
	node->ptr = ptr;
	node->cookie = cookie;
	node->work.type = BINDER_WORK_NODE;
	INIT_LIST_HEAD(&node->work.entry);
	INIT_LIST_HEAD(&node->async_todo);
	
	return node;
}
           

ioctl BINDER_WRITE_READ and BC_ENTER_LOOPER

This call finally reaches binder_ioctl_write_read, the function we skipped earlier.

The ServiceManager call

First look at the calling code; note that the data pointed to by bwr.write_buffer is readbuf[0] = BC_ENTER_LOOPER.

{
    uint32_t readbuf[32];
    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(uint32_t));
}
    
int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

    return res;
}
           

binder_ioctl_write_read

The binder ioctl enters the kernel and fetches the binder_thread of the current thread:

thread = binder_get_thread(proc);

case BINDER_WRITE_READ:
		ret = binder_ioctl_write_read(filp, cmd, arg, thread);
		if (ret)
			goto err;
		break;
           
static int binder_ioctl_write_read(struct file *filp,
				unsigned int cmd, unsigned long arg,
				struct binder_thread *thread)
{
	int ret = 0;
	struct binder_proc *proc = filp->private_data;
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg;
	struct binder_write_read bwr;

	if (size != sizeof(struct binder_write_read)) {
		ret = -EINVAL;
		goto out;
	}
	if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
		ret = -EFAULT;
		goto out;
	}

	if (bwr.write_size > 0) {
		ret = binder_thread_write(proc, thread,
					  bwr.write_buffer,
					  bwr.write_size,
					  &bwr.write_consumed);
		trace_binder_write_done(ret);
		if (ret < 0) {
			bwr.read_consumed = 0;
			if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
				ret = -EFAULT;
			goto out;
		}
	}
	if (bwr.read_size > 0) {
	   ......
	}
	
	if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
		ret = -EFAULT;
		goto out;
	}
out:
	return ret;
}

           

In binder_ioctl_write_read:

  1. copy_from_user copies the binder_write_read structure in from user space.
  2. bwr.write_size > 0 and bwr.read_size == 0, so we enter binder_thread_write.
  3. A while loop reads one 32-bit value at a time from user space, i.e. from the caller's uint32_t readbuf[32]; only readbuf[0] = BC_ENTER_LOOPER is set.
  4. There are three thread-related commands, and all of them simply OR flags into binder_thread->looper to mark the thread state (see the table below).
  5. binder_ioctl_write_read returns and the ioctl returns, so this step does not block.

binder_thread looper flags

| cmd | meaning | looper flag |
| --- | --- | --- |
| BC_REGISTER_LOOPER | a spawned (proxy) thread registers its looper | BINDER_LOOPER_STATE_REGISTERED = 0x01 |
| BC_ENTER_LOOPER | the main thread enters its loop | BINDER_LOOPER_STATE_ENTERED = 0x02 |
| BC_EXIT_LOOPER | the thread exits its loop | BINDER_LOOPER_STATE_EXITED = 0x04 |
| (driver internal) | | BINDER_LOOPER_STATE_INVALID = 0x08 |
| (driver internal) | | BINDER_LOOPER_STATE_WAITING = 0x10 |
| (driver internal) | | BINDER_LOOPER_STATE_NEED_RETURN = 0x20 |

binder_thread_write

static int binder_thread_write(struct binder_proc *proc,
			struct binder_thread *thread,
			binder_uintptr_t binder_buffer, size_t size,
			binder_size_t *consumed)
{
	uint32_t cmd;
	void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;

	while (ptr < end && thread->return_error == BR_OK) {
		if (get_user(cmd, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
		trace_binder_command(cmd);
		if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
			binder_stats.bc[_IOC_NR(cmd)]++;
			proc->stats.bc[_IOC_NR(cmd)]++;
			thread->stats.bc[_IOC_NR(cmd)]++;
		}
		
		switch (cmd) {
		......
		case BC_REGISTER_LOOPER:
			
			if (thread->looper & BINDER_LOOPER_STATE_ENTERED) {
				thread->looper |= BINDER_LOOPER_STATE_INVALID;
				
			} else if (proc->requested_threads == 0) {
				thread->looper |= BINDER_LOOPER_STATE_INVALID;
			} else {
				proc->requested_threads--;
				proc->requested_threads_started++;
			}
			thread->looper |= BINDER_LOOPER_STATE_REGISTERED;
			break;
		case BC_ENTER_LOOPER:
			
			if (thread->looper & BINDER_LOOPER_STATE_REGISTERED) {
				thread->looper |= BINDER_LOOPER_STATE_INVALID;
			}
			thread->looper |= BINDER_LOOPER_STATE_ENTERED;
			break;
		case BC_EXIT_LOOPER:
			thread->looper |= BINDER_LOOPER_STATE_EXITED;
			break;
       ......
		default:
			return -EINVAL;
		}
		*consumed = ptr - buffer;
	}
	return 0;
}
           

Summary: the BC_ENTER_LOOPER command simply marks the state of the binder_thread in the red-black tree.

ioctl(bs->fd, BINDER_WRITE_READ, &bwr)

The ServiceManager call

Now look at the calling code for this step (binder_write already sets up its own binder_write_read; one wonders whether that code could simply be reused, such easily improvable code keeps turning up). This time bwr.read_size > 0, so the call enters read mode.

struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(uint32_t));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
           

binder_ioctl_write_read

binder_ioctl -> binder_ioctl_write_read -> binder_thread_read

static int binder_ioctl_write_read(struct file *filp,
				unsigned int cmd, unsigned long arg,
				struct binder_thread *thread)
{
	/* local declarations as in the full listing above */

	if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
		ret = -EFAULT;
		goto out;
	}

	if (bwr.write_size > 0) {
	   ......
	}
	if (bwr.read_size > 0) {
		// open() was called without O_NONBLOCK, so filp->f_flags & O_NONBLOCK == 0
		ret = binder_thread_read(proc, thread, bwr.read_buffer,
					 bwr.read_size,
					 &bwr.read_consumed,
					 filp->f_flags & O_NONBLOCK);
		trace_binder_read_done(ret);
		if (!list_empty(&proc->todo))
			wake_up_interruptible(&proc->wait);
		if (ret < 0) {
			if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
				ret = -EFAULT;
			goto out;
		}
	}
	
out:
	return ret;
}
           

binder_thread_read

binder_thread_read blocks in wait_event_freezable_exclusive; at this point ServiceManager is blocked, waiting for work.

static int binder_thread_read(struct binder_proc *proc,
			      struct binder_thread *thread,
			      binder_uintptr_t binder_buffer, size_t size,
			      binder_size_t *consumed, int non_block)
{
	void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;

	int ret = 0;
	int wait_for_proc_work;

	if (*consumed == 0) {
		if (put_user(BR_NOOP, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
	}

retry:
	// on the first pass transaction_stack == NULL and the todo list is empty, so wait_for_proc_work = true
	wait_for_proc_work = thread->transaction_stack == NULL &&
				list_empty(&thread->todo);

    // 線程狀态
	thread->looper |= BINDER_LOOPER_STATE_WAITING;
	// ready_threads 計數加一 這裡是1, 表示一個等待線程
	if (wait_for_proc_work)
		proc->ready_threads++;

	binder_unlock(__func__);

	if (wait_for_proc_work) {
		if (!(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
					BINDER_LOOPER_STATE_ENTERED))) {
			wait_event_interruptible(binder_user_error_wait,
						 binder_stop_on_user_error < 2);
		}
		binder_set_nice(proc->default_priority);
		if (non_block) {
			if (!binder_has_proc_work(proc, thread))
				ret = -EAGAIN;
		} else
		    // execution blocks here; binder_has_proc_work checks whether the todo list is empty, and if so we sleep
			ret = wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread));
	} else {
		if (non_block) {
			if (!binder_has_thread_work(thread))
				ret = -EAGAIN;
		} else
			ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread));
	}
	......
}
           