This article is purely a personal study record. Most of the content comes from the work of various experts online, excerpted here and rewritten in my own way of understanding. The articles I respect and recommend most:
http://blog.csdn.net/luoshengyang/article/details/6618363 Luo Shengyang's Binder series
http://blog.csdn.net/innost/article/details/47208049 Innost's Binder walkthrough
https://my.oschina.net/youranhongcha/blog/149575 Hou Liang's Binder series
1. ProcessState
ProcessState is designed as a singleton, so a process runs its constructor only once, which in turn means the binder device is opened only once per process (for example, on a server side). The constructor runs the first time ProcessState::self() is called.
ProcessState::ProcessState()
    : mDriverFD(open_driver())   // open the binder driver in the initializer list
    , mVMStart(MAP_FAILED)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        // Mapping the device file into the process's virtual address space lets us read
        // the transaction data directly.
        // BINDER_VM_SIZE = ((1*1024*1024) - (4096*2)) = 1M - 8K
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        if (mVMStart == MAP_FAILED) {
            //......
        }
#else
        mDriverFD = -1;
#endif
    }
    //......
}
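The once-per-process guarantee can be sketched with a minimal stand-in. This is only an illustration: MockProcessState and mock_open_driver are hypothetical names, and the real ProcessState uses an explicit lock plus a global sp<ProcessState> rather than a function-local static, but the effect — the constructor, and therefore the driver open, runs exactly once no matter how many times self() is called — is the same.

```cpp
#include <cassert>

// Hypothetical stand-in for open("/dev/binder", O_RDWR): counts how often
// the driver would be opened so the once-only guarantee can be checked.
static int g_open_calls = 0;
static int mock_open_driver() { ++g_open_calls; return 42; /* fake fd */ }

class MockProcessState {
public:
    // The constructor (and thus the driver open) runs exactly once per
    // process, regardless of how many callers invoke self().
    static MockProcessState& self() {
        static MockProcessState instance;   // thread-safe since C++11
        return instance;
    }
    int driverFD() const { return mDriverFD; }
private:
    MockProcessState() : mDriverFD(mock_open_driver()) {}
    int mDriverFD;
};
```

Calling MockProcessState::self() any number of times leaves g_open_calls at 1, mirroring the fact that a process opens the binder device only once.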
Another interesting field in ProcessState is mHandleToObject, declared in the header as Vector<handle_entry> mHandleToObject. It is the table that records all BpBinder objects in this process, and it is very important: in the Binder framework, an application process finds the BpBinder it wants through a "binder handle", and as this table shows, the handle value is simply an index into the vector.
struct handle_entry {
    IBinder* binder;
    RefBase::weakref_type* refs;
};
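The handle-as-index idea can be sketched as follows. This is a simplified illustration: HandleTable, FakeBinder and the method names are hypothetical, standing in for mHandleToObject and the lookup logic in ProcessState.

```cpp
#include <vector>

struct FakeBinder { int id; };   // stand-in for BpBinder

// Hypothetical per-process table: the binder handle is simply an index
// into this vector, mirroring Vector<handle_entry> mHandleToObject.
struct HandleEntry { FakeBinder* binder; };

class HandleTable {
public:
    int add(FakeBinder* b) {            // returns the new handle
        mEntries.push_back(HandleEntry{b});
        return (int)mEntries.size() - 1;
    }
    FakeBinder* lookup(int handle) {    // handle == vector index
        if (handle < 0 || handle >= (int)mEntries.size()) return nullptr;
        return mEntries[handle].binder;
    }
private:
    std::vector<HandleEntry> mEntries;
};
```

The first object added gets handle 0, the next handle 1, and so on, which is why handle 0 conventionally refers to the service manager.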
The binder field records the BpBinder object.

2. IPCThreadState

IPCThreadState is where we finally talk to the Binder driver. It is a per-thread singleton, stored in the thread's local storage area. Its constructor and destructor are declared private, as shown below, so other classes cannot instantiate it directly; instead you obtain the instance through self().

private:
    IPCThreadState();
    ~IPCThreadState();
IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS) {   // false on the first call
restart:
        const pthread_key_t k = gTLS;
        // TLS = Thread Local Storage (the C-level counterpart of Java's ThreadLocal).
        // Since there is a pthread_getspecific here, somewhere there must be a
        // matching pthread_setspecific.
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
        if (st) return st;
        return new IPCThreadState;
    }
    //......
}
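The get-or-create TLS pattern above can be demonstrated with a minimal sketch. ThreadState and threadStateSelf are hypothetical names; the real code splits the set into the constructor and adds key-creation error handling, but the pthread_getspecific/pthread_setspecific pairing is the same.

```cpp
#include <pthread.h>
#include <cstddef>

// Hypothetical per-thread state, mirroring IPCThreadState's use of TLS.
struct ThreadState { int id; };

static pthread_key_t gTLS;
static pthread_once_t gOnce = PTHREAD_ONCE_INIT;
static void makeKey() { pthread_key_create(&gTLS, nullptr); }

// First call on a thread allocates the state and stores it in TLS;
// later calls on the same thread return the same pointer.
ThreadState* threadStateSelf() {
    pthread_once(&gOnce, makeKey);
    ThreadState* st = (ThreadState*)pthread_getspecific(gTLS);
    if (st) return st;
    st = new ThreadState{0};
    pthread_setspecific(gTLS, st);   // matches the set done in the real constructor
    return st;
}
```

Each thread gets its own ThreadState, just as each thread gets its own IPCThreadState, so per-thread mIn/mOut buffers never need locking.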
The constructor is shown below. Pay particular attention to the initialization of the private member mProcess, which records which process the current thread belongs to. Through ProcessState::self() we get the singleton ProcessState, and from it the file descriptor returned when the binder device was opened; ultimately IPCThreadState communicates with the driver through ioctl on this fd.
IPCThreadState::IPCThreadState()
    : mProcess(ProcessState::self()),   // crucial!
      mMyThreadId(androidGetTid()),
      mStrictModePolicy(0),
      mLastTransactionBinderFlags(0)
{
    pthread_setspecific(gTLS, this);    // store this in TLS so each thread has its own IPCThreadState
    clearCaller();
    mIn.setDataCapacity(256);
    mOut.setDataCapacity(256);
}
2.1 The overall flow of a client-side request from IPCThreadState to the driver. A proxy such as BpServiceManager (for example, during addService) has no transport mechanism of its own; it relies on BpBinder. So the proxy holds an mRemote, which is a BpBinder object, and calls its transact() method with a specific command code; the driver then reacts according to that code.
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        // When a service calls addService through the service manager and control
        // reaches here, mHandle is 0 and code is ADD_SERVICE_TRANSACTION (passed in
        // as a parameter above). mHandle is fixed by whatever was passed when this
        // BpBinder was created: a proxy is typically declared as
        // BpXXX : public BpInterface<IXXX>, and constructing a BpXXX requires an
        // IBinder object; in the cross-process case that object is this BpBinder.
        // The BpInterface template forwards it to BpRefBase, which ends up
        // initializing the BpBinder-related members.
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}
So BpBinder ultimately hands the work to IPCThreadState to talk to the driver: it obtains the IPCThreadState instance via self() and calls its transact().
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();
    // flags arrives as 0 here, because BpBinder's default parameter list passes 0.
    flags |= TF_ACCEPT_FDS;   // TF_ACCEPT_FDS = 0x10, from binder.h

    //......

    if (err == NO_ERROR) {
        // Prepare a binder_transaction_data, the structure that will be handed to the
        // Binder driver. This is where the data is staged for sending: handle, code
        // and data are packed up and written into IPCThreadState's mOut.
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
    //......
    // If TF_ONE_WAY (0x01) is not set, a reply is expected, so this branch is taken.
    if ((flags & TF_ONE_WAY) == 0) {
        //......
        // Wait for the reply: use the caller's reply Parcel if given, otherwise a local one.
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        //......
    } else {
        // One-way call: no reply needed.
        err = waitForResponse(NULL, NULL);
    }
    return err;
}
IPCThreadState::transact therefore has two main steps. First, the outgoing data is packed into a binder_transaction_data structure (a fixed layout, which makes it easy for the driver to parse — everyone follows the same format) and written into the Parcel mOut; this is the serialized data this IPCThreadState will push into the driver. Second, the actual ioctl exchange with the driver happens inside waitForResponse.
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;   // the payload carried through a Binder transaction

    // Initialize the structure.
    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle;
    tr.code = code;   // NOTE: this code is stored inside the binder_transaction_data —
                      // do not confuse it with cmd below!
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        // data.ipcData() holds what was written earlier, e.g. for addService:
        //   writeInt32(IPCThreadState::self()->getStrictModePolicy() | STRICT_MODE_PENALTY_GATHER);
        //   writeString16("android.os.IServiceManager");
        //   writeString16("media.player");
        //   writeStrongBinder(new MediaPlayerService());
        tr.data.ptr.buffer = data.ipcData();          // where the real payload lives
        tr.offsets_size = data.ipcObjectsCount() * sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();      // offsets of the flat binder objects
    } else if (statusBuffer) {
        //......
    } else {
        return (mLastError = err);
    }

    // Everything destined for the driver ends up in Parcel mOut.
    mOut.writeInt32(cmd);        // cmd is BC_TRANSACTION, written straight into mOut;
                                 // the driver reads it in binder_thread_write.
    mOut.write(&tr, sizeof(tr));
    return NO_ERROR;
}
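The mOut wire layout produced above — a 32-bit command followed by the transaction struct — can be sketched with a simplified stand-in. FakeTransactionData and writeTransaction are hypothetical names; the real binder_transaction_data (defined in binder.h) has many more fields, but the ordering is the point.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Simplified stand-in for binder_transaction_data: just enough fields
// to show the wire layout.
struct FakeTransactionData {
    uint32_t handle;
    uint32_t code;
};

// mOut-style append-only buffer: first the 32-bit command, then the
// transaction struct, in exactly the order writeTransactionData uses.
void writeTransaction(std::vector<uint8_t>& out, uint32_t cmd,
                      const FakeTransactionData& tr) {
    const uint8_t* p = (const uint8_t*)&cmd;
    out.insert(out.end(), p, p + sizeof(cmd));
    p = (const uint8_t*)&tr;
    out.insert(out.end(), p, p + sizeof(tr));
}
```

Because the command always precedes its payload, the driver's binder_thread_write can loop: read one command word, dispatch on it, consume the payload, repeat.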
Note that the cmd written here must not be confused with the code passed down from BpXXX at the start: this cmd is the command code used in the ioctl with the binder device file, and it has concrete definitions in the driver implementation. The writeStrongBinder call is also crucial and will be analyzed later. Next, waitForResponse.
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;
    while (1) {
        // On a later pass through this loop, bwr.write_size and bwr.read_size are
        // both 0, so IPCThreadState::talkWithDriver does nothing and returns to
        // waitForResponse, which then reads another integer out of mIn — that one
        // is BR_TRANSACTION_COMPLETE.
        // Inside the driver's binder_ioctl: with bwr.write_size == 0 and
        // bwr.read_size != 0, control goes straight into binder_thread_read.
        // At that point thread->transaction_stack != 0 and thread->todo is empty,
        // so the thread goes to sleep in
        //   wait_event_interruptible(thread->wait, binder_has_thread_work(thread))
        // waiting for the Service Manager to wake it up (note that the Service
        // Manager itself has already been woken at this point).

        // talkWithDriver does the actual interaction with the Binder driver.
        if ((err=talkWithDriver()) < NO_ERROR) break;
        // Back from the driver: mOut has been sent, and the data read from the
        // driver has been placed in mIn, so we start consuming mIn here.
        err = mIn.errorCheck();   // in the addService flow the first integer read
                                  // out of mIn is BR_NOOP, a no-op.
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = mIn.readInt32();    // the command the driver put in
        //......
        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;
        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;
        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;
        case BR_ACQUIRE_RESULT:
            {
                ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;
        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;
                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    } else {
                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(binder_size_t), this);
                    continue;
                }
            }
            goto finish;
        default:
            err = executeCommand(cmd);   // note this!
            if (err != NO_ERROR) goto finish;
            break;
        }
    }
finish:
    //......
    return err;
}
What we care about here is the send path, i.e. the call to talkWithDriver. Keep an eye on the member mProcess->mDriverFD: the exchange with the binder device file ultimately goes through this fd. The binder_write_read structure adds one more layer of packaging around the data exchanged with the driver, and as the code below shows, the exchange itself is done through ioctl; the Binder driver reacts differently depending on the cmd it receives.
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }
    // binder_write_read is the structure used to exchange data with the binder device.
    binder_write_read bwr;

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    // Fill in the write side with the data to send to the driver. This is the key
    // step: the cmd and binder_transaction_data written earlier go in via this buffer.
    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        // Fill in the receive-buffer info; any data that comes back lands directly in mIn.
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    //......

    // Return immediately if there is nothing to do.
    status_t err;
    do {
        //......
#if defined(HAVE_ANDROID_OS)
        // Note: communication with the binder device file is not via read/write
        // but via ioctl — this is the actual exchange with the driver.
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        //......
    } while (err == -EINTR);   // retry the ioctl for as long as it is interrupted

    //......

    if (err >= NO_ERROR) {
        // Back from the driver; the reply bookkeeping is in bwr, and the reply
        // data itself was written into the buffer that mIn provided.
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);   // clear mOut, ready for the next operation
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);   // record how much was read back
            mIn.setDataPosition(0);
        }
        //......
        return NO_ERROR;
    }
    // Control then returns to waitForResponse. In the addService flow, reaching
    // this point means the send half of addService is complete.
    return err;
}
Note that bwr adds yet another layer of packaging: the cmd and binder_transaction_data already sitting in mOut are attached via bwr.write_buffer = (uintptr_t)mOut.data(), and the receive side via bwr.read_buffer = (uintptr_t)mIn.data(), so both buffers are referenced from the corresponding fields of the binder_write_read structure.
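The two-buffer arrangement can be sketched with a minimal stand-in. FakeBinderWriteRead and makeBwr are hypothetical names (the real binder_write_read lives in binder.h), but the essential point holds: the structure owns no data, it only points at the mOut (write) and mIn (read) buffers.

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Simplified binder_write_read: carries no payload of its own, only
// pointers and sizes describing the caller's two buffers.
struct FakeBinderWriteRead {
    uint64_t write_buffer; size_t write_size; size_t write_consumed;
    uint64_t read_buffer;  size_t read_size;  size_t read_consumed;
};

// Set up bwr the way talkWithDriver does: write side points at mOut's
// pending bytes, read side points at mIn's capacity for replies.
FakeBinderWriteRead makeBwr(const std::vector<uint8_t>& mOut,
                            std::vector<uint8_t>& mIn) {
    FakeBinderWriteRead bwr{};
    bwr.write_buffer = (uint64_t)(uintptr_t)mOut.data();
    bwr.write_size   = mOut.size();      // driver consumes commands from here
    bwr.read_buffer  = (uint64_t)(uintptr_t)mIn.data();
    bwr.read_size    = mIn.capacity();   // driver fills replies in here
    return bwr;
}
```

A single ioctl(BINDER_WRITE_READ) can therefore both send and receive: the driver updates write_consumed and read_consumed, and userspace trims mOut and sizes mIn accordingly.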
3. ProcessState::self()->startThreadPool() and IPCThreadState::self()->joinThreadPool()

3.1 ProcessState::startThreadPool
void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true);
    }
}
As you can see, it simply calls spawnPooledThread(true).
void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        String8 name = makeBinderThreadName();
        ALOGV("Spawning new pooled thread, name=%s\n", name.string());
        // Create a pool thread and run it — much like Java's Thread.
        sp<Thread> t = new PoolThread(isMain);   // also constructs the Thread base class
        t->run(name.string());                   // PoolThread::run, i.e. the base Thread::run
    }
}
Thread::run eventually calls the subclass's threadLoop function, which here is PoolThread::threadLoop:
virtual bool threadLoop()
{
    // mIsMain is true here.
    IPCThreadState::self()->joinThreadPool(mIsMain);
    return false;
}
So this too ends up calling IPCThreadState::self()->joinThreadPool(). The standard native-side sequence is:

ProcessState::self()->startThreadPool();
IPCThreadState::self()->joinThreadPool();

In both cases the isMain parameter is true, meaning this is a Binder thread the application created on its own initiative, not one the Binder driver asked the application to spawn. Next, the implementation of IPCThreadState::joinThreadPool().
Note that the header declares it as void joinThreadPool(bool isMain = true), i.e. the parameter defaults to true.
void IPCThreadState::joinThreadPool(bool isMain)
{
    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
    set_sched_policy(mMyThreadId, SP_FOREGROUND);
    status_t result;
    do {   // the thread stays in this loop
        processPendingDerefs();
        // now get the next command to be processed, waiting if necessary
        result = getAndExecuteCommand();   // read and handle the next command
        if (result < NO_ERROR && result != TIMED_OUT
                && result != -ECONNREFUSED && result != -EBADF) {
            ......
        }
        // Let this thread exit the thread pool if it is no longer
        // needed and it is not the main process thread.
        if (result == TIMED_OUT && !isMain) {
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF);
    ......
    mOut.writeInt32(BC_EXIT_LOOPER);
    talkWithDriver(false);
}
The function ends up in an effectively infinite loop, interacting with the Binder driver through talkWithDriver — in effect, calling talkWithDriver to wait for a client request and then executeCommand to handle it. Inside getAndExecuteCommand, the request ultimately reaches BBinder::transact, which does the real processing of the client's request.
status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;
    result = talkWithDriver();
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t)) return result;
        cmd = mIn.readInt32();
        IF_LOG_COMMANDS() {
            alog << "Processing top-level Command: "
                 << getReturnString(cmd) << endl;
        }
        result = executeCommand(cmd);
        // After executing the command, ensure that the thread is returned to the
        // foreground cgroup before rejoining the pool. The driver takes care of
        // restoring the priority, but doesn't do anything with cgroups so we
        // need to take care of that here in userspace. Note that we do make
        // sure to go in the foreground after executing a transaction, but
        // there are other callbacks into user code that could have changed
        // our group so we want to make absolutely sure it is put back.
        set_sched_policy(mMyThreadId, SP_FOREGROUND);
    }
    return result;
}
So this is where executeCommand(cmd) is invoked:
status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;
    switch (cmd) {
    ......
    default:
        printf("*** BAD COMMAND %d received from Binder driver\n", cmd);
        result = UNKNOWN_ERROR;
        break;
    }
    if (result != NO_ERROR) {
        mLastError = result;
    }
    return result;
}
It takes the BR_TRANSACTION case, which we extract and analyze separately.
case BR_TRANSACTION:   // Binder communication
    {
        binder_transaction_data tr;
        result = mIn.read(&tr, sizeof(tr));
        ALOG_ASSERT(result == NO_ERROR,
            "Not enough command data for brTRANSACTION");
        if (result != NO_ERROR) break;

        Parcel buffer;
        buffer.ipcSetDataReference(
            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
            tr.data_size,
            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
            tr.offsets_size/sizeof(binder_size_t), freeBuffer, this);

        const pid_t origPid = mCallingPid;
        const uid_t origUid = mCallingUid;
        const int32_t origStrictModePolicy = mStrictModePolicy;
        const int32_t origTransactionBinderFlags = mLastTransactionBinderFlags;

        mCallingPid = tr.sender_pid;
        mCallingUid = tr.sender_euid;
        mLastTransactionBinderFlags = tr.flags;

        int curPrio = getpriority(PRIO_PROCESS, mMyThreadId);
        if (gDisableBackgroundScheduling) {
            if (curPrio > ANDROID_PRIORITY_NORMAL) {
                // We have inherited a reduced priority from the caller, but do not
                // want to run in that state in this process. The driver set our
                // priority already (though not our scheduling class), so bounce
                // it back to the default before invoking the transaction.
                setpriority(PRIO_PROCESS, mMyThreadId, ANDROID_PRIORITY_NORMAL);
            }
        } else {
            if (curPrio >= ANDROID_PRIORITY_BACKGROUND) {
                // We want to use the inherited priority from the caller.
                // Ensure this thread is in the background scheduling class,
                // since the driver won't modify scheduling classes for us.
                // The scheduling group is reset to default by the caller
                // once this method returns after the transaction is complete.
                set_sched_policy(mMyThreadId, SP_BACKGROUND);
            }
        }

        //ALOGI(">>>> TRANSACT from pid %d uid %d\n", mCallingPid, mCallingUid);
        // A command arrived and was decoded as BR_TRANSACTION; its follow-up
        // data has been read above.
        Parcel reply;
        status_t error;
        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BR_TRANSACTION thr " << (void*)pthread_self()
                << " / obj " << tr.target.ptr << " / code "
                << TypeCode(tr.code) << ": " << indent << buffer
                << dedent << endl
                << "Data addr = "
                << reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer)
                << ", offsets addr="
                << reinterpret_cast<const size_t*>(tr.data.ptr.offsets) << endl;
        }
        if (tr.target.ptr) {
            // binder_transaction_data.cookie is the key here: it carries the BBinder!
            sp<BBinder> b((BBinder*)tr.cookie);
            error = b->transact(tr.code, buffer, &reply, tr.flags);
        } else {
            /* the_context_object is a global defined in IPCThreadState.cpp,
               settable via setTheContextObject */
            error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
        }

        //ALOGI("<<<< TRANSACT from pid %d restore pid %d uid %d\n",
        //     mCallingPid, origPid, origUid);
        if ((tr.flags & TF_ONE_WAY) == 0) {
            LOG_ONEWAY("Sending reply to %d!", mCallingPid);
            if (error < NO_ERROR) reply.setError(error);
            sendReply(reply, 0);
        } else {
            LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
        }

        mCallingPid = origPid;
        mCallingUid = origUid;
        mStrictModePolicy = origStrictModePolicy;
        mLastTransactionBinderFlags = origTransactionBinderFlags;

        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BC_REPLY thr " << (void*)pthread_self() << " / obj "
                << tr.target.ptr << ": " << indent << reply << dedent << endl;
        }
    }
    break;
Next, the implementation of BBinder::transact:
status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);
    status_t err = NO_ERROR;
    switch (code) {
    case PING_TRANSACTION:
        reply->writeInt32(pingBinder());
        break;
    default:
        // Everything else goes to our own onTransact — whoever instantiated
        // the BBinder is the one that gets called!
        err = onTransact(code, data, reply, flags);
        break;
    }
    if (reply != NULL) {
        reply->setDataPosition(0);
    }
    return err;
}
Control then passes to the subclass's onTransact for the actual handling. When IPCThreadState receives a client request, it calls BBinder::transact with the relevant parameters, and BBinder::transact in turn calls, for example, BnMediaPlayerService::onTransact — and that is where the client's request is truly processed.
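The transact/onTransact dispatch described above can be sketched in miniature. FakeBBinder, FakeService and the command codes are hypothetical stand-ins: the base class handles generic cases itself (like PING_TRANSACTION) and forwards everything else to the subclass's virtual onTransact, which is exactly how BBinder hands work to a Bn-side service.

```cpp
// Hypothetical command codes standing in for PING_TRANSACTION and a
// service-specific transaction.
enum { PING = 1, ECHO = 2 };

class FakeBBinder {
public:
    virtual ~FakeBBinder() {}
    // transact handles generic cases itself and forwards everything else
    // to the subclass's onTransact, mirroring BBinder::transact.
    int transact(unsigned code, int data, int* reply) {
        switch (code) {
        case PING: *reply = 0; return 0;   // answered in the base class
        default:   return onTransact(code, data, reply);
        }
    }
protected:
    virtual int onTransact(unsigned code, int data, int* reply) {
        (void)code; (void)data; (void)reply;
        return -1;   // unknown transaction
    }
};

// A service-side subclass: whoever instantiated the BBinder gets the call.
class FakeService : public FakeBBinder {
protected:
    int onTransact(unsigned code, int data, int* reply) override {
        if (code == ECHO) { *reply = data; return 0; }
        return FakeBBinder::onTransact(code, data, reply);
    }
};
```

Calling transact(ECHO, ...) on a FakeService reaches FakeService::onTransact through the virtual dispatch, just as BR_TRANSACTION on a BnMediaPlayerService reaches BnMediaPlayerService::onTransact.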