Handler and Looper internals: a source-code walkthrough (part 2) — sending and receiving messages

The previous article started from Handler usage and traced how a Handler moves work across threads.

Link:

http://blog.csdn.net/y1962475006/article/details/52243671

This article digs into how the pieces relate and what each of them does.

1. How Handler, Looper, Message, and MessageQueue relate

Start with a UML diagram:

[UML diagram: the dependencies among Handler, Looper, MessageQueue, Message, and Thread]

Every relationship between them is a dependency:

Handler -> Looper

Handler -> MessageQueue (obtained through the Looper)

Looper -> MessageQueue

Looper -> Thread

MessageQueue -> Message

Message -> Message

Message -> Handler

Reading the UML diagram from the bottom up:

  • Message is a node in a singly linked list;
  • MessageQueue manages the Messages directly;
  • Looper holds the thread information;
  • both Looper and Handler can operate on the MessageQueue.

In other words, MessageQueue is a queue implemented on top of a singly linked list.
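Stripped down, the node structure behind those last two arrows looks like this (field names as in android.os.Message; types and access modifiers simplified for the sketch):

class Message {
    int what;       // user-defined message code
    long when;      // absolute uptime (ms) at which the message is due
    Object target;  // really android.os.Handler: the "Message -> Handler" arrow
    Message next;   // the "Message -> Message" arrow: the singly linked list
}

The next field is the whole queue: MessageQueue itself only needs to keep the head of this list.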

Starting from the constructors

1. Looper

private Looper(boolean quitAllowed) {
    mQueue = new MessageQueue(quitAllowed);
    mThread = Thread.currentThread();
}

So constructing a Looper instantiates its MessageQueue and records the current Thread. The constructor is private, which means no outside code can new a Looper; the only way to create one is the prepare() method.

private static void prepare(boolean quitAllowed) {
    if (sThreadLocal.get() != null) {
        throw new RuntimeException("Only one Looper may be created per thread");
    }
    sThreadLocal.set(new Looper(quitAllowed));
}

ThreadLocal is per-thread storage: every thread gets its own copy of the stored variable, invisible to all other threads. (Its implementation is worth a read if you're curious.)
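A quick plain-Java illustration of that per-thread behavior (nothing Android-specific):

public class ThreadLocalDemo {
    private static final ThreadLocal<String> sLocal = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        sLocal.set("main's copy");
        Thread worker = new Thread(() -> {
            System.out.println("worker sees: " + sLocal.get()); // null, not main's value
            sLocal.set("worker's copy");
            System.out.println("worker sees: " + sLocal.get()); // worker's copy
        });
        worker.start();
        worker.join();
        System.out.println("main sees: " + sLocal.get());       // still main's copy
    }
}

That is exactly how "one Looper per thread" is enforced: sThreadLocal.get() can only ever return the Looper that the current thread itself stored.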

2. MessageQueue

MessageQueue(boolean quitAllowed) {
    mQuitAllowed = quitAllowed;
    mPtr = nativeInit();
}

The constructor takes a single parameter, quitAllowed, which is literally whether the queue may quit. Set it aside for now; the interesting line is

mPtr = nativeInit();
           

nativeInit() is a native method and mPtr is a long. Here is the native side (frameworks/base/core/jni/android_os_MessageQueue.cpp):

static jlong android_os_MessageQueue_nativeInit(JNIEnv* env, jclass clazz) {
    NativeMessageQueue* nativeMessageQueue = new NativeMessageQueue();
    if (!nativeMessageQueue) {
        jniThrowRuntimeException(env, "Unable to allocate native queue");
        return 0;
    }

    nativeMessageQueue->incStrong(env);
    return reinterpret_cast<jlong>(nativeMessageQueue);
}
           

Interestingly, nativeInit() news up a NativeMessageQueue — a native-layer message queue — and returns its address cast to a jlong, which is what mPtr ends up holding. From this point on the Java-layer MessageQueue has a native peer, and it can always reach that NativeMessageQueue through the pointer stored in mPtr.

Now look at the NativeMessageQueue constructor:

NativeMessageQueue::NativeMessageQueue() :
        mPollEnv(NULL), mPollObj(NULL), mExceptionObj(NULL) {
    mLooper = Looper::getForThread();
    if (mLooper == NULL) {
        mLooper = new Looper(false);
        Looper::setForThread(mLooper);
    }
}
           

Wait — another Looper? This is not the Java-layer Looper from before; it is a C++ Looper in the native layer, and it too is bound to the current thread.

At this point the Java layer and the native layer each have a MessageQueue and a Looper, all in one-to-one correspondence with the thread. Note that the ownership runs in opposite directions: in Java the Looper owns the MessageQueue, while in native code the NativeMessageQueue owns the Looper.

[Diagram: Java layer (Looper -> MessageQueue) versus native layer (NativeMessageQueue -> Looper), both tied to the same thread]

The send path

As the previous article showed, all of Handler's send and post variants eventually funnel into handler.enqueueMessage().
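For instance, sendMessageAtTime is roughly the following (paraphrased from AOSP; the error logging is elided):

public boolean sendMessageAtTime(Message msg, long uptimeMillis) {
    MessageQueue queue = mQueue;
    if (queue == null) {
        return false;  // the real code also logs a RuntimeException warning
    }
    return enqueueMessage(queue, msg, uptimeMillis);
}

And enqueueMessage itself: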

private boolean enqueueMessage(MessageQueue queue, Message msg, long uptimeMillis) {
    msg.target = this;
    if (mAsynchronous) {
        msg.setAsynchronous(true);
    }
    return queue.enqueueMessage(msg, uptimeMillis);
}

From there, queue.enqueueMessage(msg, uptimeMillis) does the real work:

boolean enqueueMessage(Message msg, long when) {
    // A message with no target Handler cannot be dispatched; fail fast.
    if (msg.target == null) {
        throw new IllegalArgumentException("Message must have a target.");
    }
    // A message already sitting in a queue must not be enqueued twice.
    if (msg.isInUse()) {
        throw new IllegalStateException(msg + " This message is already in use.");
    }

    synchronized (this) {
        // If the looper is quitting, recycle the message and report failure.
        if (mQuitting) {
            IllegalStateException e = new IllegalStateException(
                    msg.target + " sending message to a Handler on a dead thread");
            Log.w(TAG, e.getMessage(), e);
            msg.recycle();
            return false;
        }

        // Normal path: mark the message as in use and stamp its due time.
        msg.markInUse();
        msg.when = when;
        Message p = mMessages;
        boolean needWake;
        // The message becomes the new head if the queue is empty or if it is
        // due before the current head.
        if (p == null || when == 0 || when < p.when) {
            // New head, wake up the event queue if blocked.
            msg.next = p;
            mMessages = msg;
            needWake = mBlocked;
        } else {
            // Otherwise insert mid-queue, keeping the list sorted by
            // ascending `when` — the queue is ordered by delivery time.
            // Inserted within the middle of the queue.  Usually we don't have to wake
            // up the event queue unless there is a barrier at the head of the queue
            // and the message is the earliest asynchronous message in the queue.
            needWake = mBlocked && p.target == null && msg.isAsynchronous();
            Message prev;
            for (;;) {
                prev = p;
                p = p.next;
                // Stop at the first node whose due time is later than msg's.
                if (p == null || when < p.when) {
                    break;
                }
                if (needWake && p.isAsynchronous()) {
                    needWake = false;
                }
            }
            msg.next = p; // invariant: p == prev.next
            prev.next = msg;
        }

        // We can assume mPtr != 0 because mQuitting is false.
        // Wake the (possibly sleeping) native poll if necessary.
        if (needWake) {
            nativeWake(mPtr);
        }
    }
    return true;
}

Messages are inserted in timestamp order: MessageQueue is a time-ordered queue built on a singly linked list. Keep that in mind — it matters later.
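Reduced to its essence, that insertion is an ordered singly-linked-list insert (a sketch mirroring the logic above, not AOSP code):

// Minimal node: only the fields the insertion needs.
class Msg { long when; Msg next; }

static Msg insertByTime(Msg head, Msg msg) {
    if (head == null || msg.when < head.when) {
        msg.next = head;  // the earliest message becomes the new head
        return msg;
    }
    Msg prev = head;
    // Walk past every node due at or before msg.when (stable for ties).
    while (prev.next != null && prev.next.when <= msg.when) {
        prev = prev.next;
    }
    msg.next = prev.next;
    prev.next = msg;
    return head;
}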

Whether the queue actually gets woken is decided by needWake. For a mid-queue insert it is true only when the queue is currently blocked, a sync barrier sits at the head (target == null), and this msg is the earliest asynchronous message; inserting a new head into a blocked queue wakes it as well.
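As an aside on those barriers: a sync barrier is posted through the hidden MessageQueue.postSyncBarrier() — ViewRootImpl does this around traversals so rendering work isn't starved — and only messages flagged asynchronous may pass it. Since API 28 you can create such messages yourself (a usage sketch, assuming an Android context with android.os.Handler, android.os.Looper, and android.util.Log on hand):

// Messages and posts from this handler are setAsynchronous(true) automatically,
// so they can still be dispatched while a barrier stalls ordinary messages.
Handler asyncHandler = Handler.createAsync(Looper.getMainLooper());
asyncHandler.post(() -> Log.d("BarrierDemo", "delivered even past a sync barrier"));

Back to the wake path — here is the native implementation: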

static void android_os_MessageQueue_nativeWake(JNIEnv* env, jclass clazz, jlong ptr) {
    NativeMessageQueue* nativeMessageQueue = reinterpret_cast<NativeMessageQueue*>(ptr);
    nativeMessageQueue->wake();
}
           

Note how the NativeMessageQueue is recovered from the mPtr handed down from the Java layer; in fact every native method here except nativeInit() takes mPtr, which confirms that the Java layer finds its C++ counterpart through that pointer. Now NativeMessageQueue::wake():

void NativeMessageQueue::wake() {
    mLooper->wake();
}
           

It simply calls the native Looper's wake().

Looking back over the send path: enqueueMessage does nothing more than insert the Message into the Java-side linked list; all the native layer contributes is deciding whether to wake the underlying queue.

Reading messages

As the previous article covered, messages are read by Looper.loop(), which spins in an infinite loop and pulls each message out with queue.next().
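Paraphrased (logging and the slow-dispatch diagnostics omitted), the core of loop() is:

for (;;) {
    Message msg = queue.next();      // may block inside nativePollOnce
    if (msg == null) {
        return;                      // next() only returns null when the looper is quitting
    }
    msg.target.dispatchMessage(msg); // hand the message to its Handler
    msg.recycleUnchecked();          // put it back into the message pool
}

Now the next() method itself: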

Message next() {
    // Return here if the message loop has already quit and been disposed.
    // This can happen if the application tries to restart a looper after quit
    // which is not supported.
    final long ptr = mPtr;
    if (ptr == 0) {
        return null;
    }

    int pendingIdleHandlerCount = -1; // -1 only during first iteration
    int nextPollTimeoutMillis = 0;
    // Enter the poll loop.
    for (;;) {
        if (nextPollTimeoutMillis != 0) {
            Binder.flushPendingCommands();
        }
        // Block for up to nextPollTimeoutMillis; 0 means don't block at all.
        nativePollOnce(ptr, nextPollTimeoutMillis);

        synchronized (this) {
            // Try to retrieve the next message.  Return if found.
            final long now = SystemClock.uptimeMillis();
            Message prevMsg = null;
            Message msg = mMessages;
            // If the queue is non-empty and a sync barrier sits at the head,
            // scan forward for the next asynchronous message.
            if (msg != null && msg.target == null) {
                // Stalled by a barrier.  Find the next asynchronous message in the queue.
                do {
                    prevMsg = msg;
                    msg = msg.next;
                } while (msg != null && !msg.isAsynchronous());
            }
            if (msg != null) {
                // The candidate message's due time is still in the future:
                // compute how long to wait before it becomes deliverable.
                if (now < msg.when) {
                    // Next message is not ready.  Set a timeout to wake up when it is ready.
                    nextPollTimeoutMillis = (int) Math.min(msg.when - now, Integer.MAX_VALUE);
                } else {
                    // The message is due: unlink it from the list, mark it
                    // in use, and hand it back to loop().
                    // Got a message.
                    mBlocked = false;
                    if (prevMsg != null) {
                        prevMsg.next = msg.next;
                    } else {
                        mMessages = msg.next;
                    }
                    msg.next = null;
                    if (DEBUG) Log.v(TAG, "Returning message: " + msg);
                    msg.markInUse();
                    return msg;
                }
            } else {
                // Queue is empty: wait indefinitely (idle wait) until woken.
                // No more messages.
                nextPollTimeoutMillis = -1;
            }
            // ... quit check and IdleHandler processing omitted ...
        }
    }
}

The comments above cover most of the retrieval logic. nativePollOnce is a native method that says how long to wait (sleep, effectively); nextPollTimeoutMillis is the wait for the next iteration, where 0 means don't wait at all and -1 means wait indefinitely. But isn't waiting forever just a dead loop — wouldn't it trigger an ANR? The key is the difference between busy waiting and idle waiting. Think of a power-saving escalator: when nobody is on it, it stands still and only starts when someone steps on — that is an idle wait; when the escalator is full and newcomers must queue for a free spot, that is a busy wait.
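To feel the difference in plain Java, LockSupport has the same park/unpark shape (a toy demo, not framework code; a real loop would re-check a condition, since park can also return spuriously):

import java.util.concurrent.locks.LockSupport;

public class WaitDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread looperLike = new Thread(() -> {
            System.out.println("parking (idle wait, no CPU burned)...");
            LockSupport.park();            // ~ epoll_wait with timeout -1
            System.out.println("woken up");
        });
        looperLike.start();
        Thread.sleep(500);
        LockSupport.unpark(looperLike);    // ~ nativeWake() when a message arrives
        looperLike.join();
    }
}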

Here is nativePollOnce:

static void android_os_MessageQueue_nativePollOnce(JNIEnv* env, jobject obj,
        jlong ptr, jint timeoutMillis) {
    NativeMessageQueue* nativeMessageQueue = reinterpret_cast<NativeMessageQueue*>(ptr);
    nativeMessageQueue->pollOnce(env, obj, timeoutMillis);
}
           

which forwards to:

void NativeMessageQueue::pollOnce(JNIEnv* env, jobject pollObj, int timeoutMillis) {
    mPollEnv = env;
    mPollObj = pollObj;
    mLooper->pollOnce(timeoutMillis);
    mPollObj = NULL;
    mPollEnv = NULL;

    if (mExceptionObj) {
        env->Throw(mExceptionObj);
        env->DeleteLocalRef(mExceptionObj);
        mExceptionObj = NULL;
    }
}
           

So it ends up calling the native Looper's pollOnce().

The native-layer Looper

At this point you may ask: why, after analyzing MessageQueue's native methods, did both the send path and the read path stop right at the Looper?

That was deliberate. As shown above, sending a message means dropping it into the Java-side linked list and then having the native Looper wake the thread; reading a message means the Java MessageQueue takes out the next due message while the native Looper handles the waiting. Both paths end at the native Looper, which is what actually puts the thread to sleep and wakes it.

So let's look at the native Looper's constructor (on Android 5.0 and later it lives in system/core/libutils/Looper.cpp):

Looper::Looper(bool allowNonCallbacks) :
        mAllowNonCallbacks(allowNonCallbacks), mSendingMessage(false),
        mPolling(false), mEpollFd(-1), mEpollRebuildRequired(false),
        mNextRequestSeq(0), mResponseIndex(0), mNextMessageUptime(LLONG_MAX) {
    mWakeEventFd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
    LOG_ALWAYS_FATAL_IF(mWakeEventFd < 0, "Could not make wake event fd: %s",
                        strerror(errno));

    AutoMutex _l(mLock);
    rebuildEpollLocked();
}
           

This is built on Linux's epoll mechanism. epoll is I/O multiplexing: you register file descriptors and the events you care about on them (readable, writable, hangup, and so on), then wait; the call returns when an event fires and otherwise blocks until it is woken or times out. The details are beyond this article, but keep an eye on mWakeEventFd — it is the fd used for waking the loop.
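For a Java-side intuition, java.nio's Selector exposes the same wait-with-timeout-plus-wakeup pattern that epoll_wait and mWakeEventFd implement underneath (an analogy only, not the framework's code):

import java.nio.channels.Selector;

public class SelectorWakeDemo {
    public static void main(String[] args) throws Exception {
        final Selector selector = Selector.open();
        Thread waker = new Thread(() -> {
            try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
            selector.wakeup();            // ~ Looper::wake() writing mWakeEventFd
        });
        waker.start();
        long t0 = System.currentTimeMillis();
        selector.select(10_000);          // ~ epoll_wait with a 10-second timeout
        System.out.println("returned after " + (System.currentTimeMillis() - t0) + " ms");
        waker.join();
        selector.close();
    }
}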

Now go straight to Looper::pollOnce() and Looper::wake().

int Looper::pollOnce(int timeoutMillis, int* outFd, int* outEvents, void** outData) {
    int result = 0;
    for (;;) {
        while (mResponseIndex < mResponses.size()) {
            const Response& response = mResponses.itemAt(mResponseIndex++);
            int ident = response.request.ident;
            if (ident >= 0) {
                int fd = response.request.fd;
                int events = response.events;
                void* data = response.request.data;
#if DEBUG_POLL_AND_WAKE
                ALOGD("%p ~ pollOnce - returning signalled identifier %d: "
                        "fd=%d, events=0x%x, data=%p",
                        this, ident, fd, events, data);
#endif
                if (outFd != NULL) *outFd = fd;
                if (outEvents != NULL) *outEvents = events;
                if (outData != NULL) *outData = data;
                return ident;
            }
        }

        if (result != 0) {
#if DEBUG_POLL_AND_WAKE
            ALOGD("%p ~ pollOnce - returning result %d", this, result);
#endif
            if (outFd != NULL) *outFd = 0;
            if (outEvents != NULL) *outEvents = 0;
            if (outData != NULL) *outData = NULL;
            return result;
        }

        result = pollInner(timeoutMillis);
    }
}
           

Skip the response handling at the top for now and go to the last statement, result = pollInner(timeoutMillis), since that is the only place the timeout comes into play:

int Looper::pollInner(int timeoutMillis) {
#if DEBUG_POLL_AND_WAKE
    ALOGD("%p ~ pollOnce - waiting: timeoutMillis=%d", this, timeoutMillis);
#endif

    // Adjust the timeout based on when the next message is due.
    if (timeoutMillis != 0 && mNextMessageUptime != LLONG_MAX) {
        nsecs_t now = systemTime(SYSTEM_TIME_MONOTONIC);
        int messageTimeoutMillis = toMillisecondTimeoutDelay(now, mNextMessageUptime);
        if (messageTimeoutMillis >= 0
                && (timeoutMillis < 0 || messageTimeoutMillis < timeoutMillis)) {
            timeoutMillis = messageTimeoutMillis;
        }
#if DEBUG_POLL_AND_WAKE
        ALOGD("%p ~ pollOnce - next message in %" PRId64 "ns, adjusted timeout: timeoutMillis=%d",
                this, mNextMessageUptime - now, timeoutMillis);
#endif
    }

    // Poll.
    int result = POLL_WAKE;
    mResponses.clear();
    mResponseIndex = 0;

    // We are about to idle.
    mPolling = true;

    struct epoll_event eventItems[EPOLL_MAX_EVENTS];
    // Blocks here. When a registered event fires or the timeout expires,
    // epoll_wait returns; eventCount > 0 means eventItems holds the ready fds.
    int eventCount = epoll_wait(mEpollFd, eventItems, EPOLL_MAX_EVENTS, timeoutMillis);

    // No longer idling.
    mPolling = false;

    // Acquire lock.
    mLock.lock();

    // Rebuild epoll set if needed.
    if (mEpollRebuildRequired) {
        mEpollRebuildRequired = false;
        rebuildEpollLocked();
        goto Done;
    }
    // A negative count means the poll failed.
    // Check for poll error.
    if (eventCount < 0) {
        if (errno == EINTR) {
            goto Done;
        }
        ALOGW("Poll failed with an unexpected error: %s", strerror(errno));
        result = POLL_ERROR;
        goto Done;
    }
    // A count of zero means the wait timed out.
    // Check for poll timeout.
    if (eventCount == 0) {
#if DEBUG_POLL_AND_WAKE
        ALOGD("%p ~ pollOnce - timeout", this);
#endif
        result = POLL_TIMEOUT;
        goto Done;
    }

    // Handle all events.
#if DEBUG_POLL_AND_WAKE
    ALOGD("%p ~ pollOnce - handling events from %d fds", this, eventCount);
#endif
// A positive count: walk the ready fds. If mWakeEventFd is among them and its
// event is EPOLLIN (data available to read), drain it via awoken().
    for (int i = 0; i < eventCount; i++) {
        int fd = eventItems[i].data.fd;
        uint32_t epollEvents = eventItems[i].events;
        if (fd == mWakeEventFd) {
            if (epollEvents & EPOLLIN) {
                awoken();
            } else {
                ALOGW("Ignoring unexpected epoll events 0x%x on wake event fd.", epollEvents);
            }
        } else {
            ssize_t requestIndex = mRequests.indexOfKey(fd);
            if (requestIndex >= 0) {
                int events = 0;
                if (epollEvents & EPOLLIN) events |= EVENT_INPUT;
                if (epollEvents & EPOLLOUT) events |= EVENT_OUTPUT;
                if (epollEvents & EPOLLERR) events |= EVENT_ERROR;
                if (epollEvents & EPOLLHUP) events |= EVENT_HANGUP;
                pushResponse(events, mRequests.valueAt(requestIndex));
            } else {
                ALOGW("Ignoring unexpected epoll events 0x%x on fd %d that is "
                        "no longer registered.", epollEvents, fd);
            }
        }
    }
Done: ;

    // Invoke pending message callbacks.
    mNextMessageUptime = LLONG_MAX;
    while (mMessageEnvelopes.size() != 0) {
        nsecs_t now = systemTime(SYSTEM_TIME_MONOTONIC);
        const MessageEnvelope& messageEnvelope = mMessageEnvelopes.itemAt(0);
        if (messageEnvelope.uptime <= now) {
            // Remove the envelope from the list.
            // We keep a strong reference to the handler until the call to handleMessage
            // finishes.  Then we drop it so that the handler can be deleted *before*
            // we reacquire our lock.
            { // obtain handler
                sp<MessageHandler> handler = messageEnvelope.handler;
                Message message = messageEnvelope.message;
                mMessageEnvelopes.removeAt(0);
                mSendingMessage = true;
                mLock.unlock();

#if DEBUG_POLL_AND_WAKE || DEBUG_CALLBACKS
                ALOGD("%p ~ pollOnce - sending message: handler=%p, what=%d",
                        this, handler.get(), message.what);
#endif
                handler->handleMessage(message);
            } // release handler

            mLock.lock();
            mSendingMessage = false;
            result = POLL_CALLBACK;
        } else {
            // The last message left at the head of the queue determines the next wakeup time.
            mNextMessageUptime = messageEnvelope.uptime;
            break;
        }
    }

    // Release lock.
    mLock.unlock();

    // Invoke all response callbacks.
    for (size_t i = 0; i < mResponses.size(); i++) {
        Response& response = mResponses.editItemAt(i);
        if (response.request.ident == POLL_CALLBACK) {
            int fd = response.request.fd;
            int events = response.events;
            void* data = response.request.data;
#if DEBUG_POLL_AND_WAKE || DEBUG_CALLBACKS
            ALOGD("%p ~ pollOnce - invoking fd event callback %p: fd=%d, events=0x%x, data=%p",
                    this, response.request.callback.get(), fd, events, data);
#endif
            // Invoke the callback.  Note that the file descriptor may be closed by
            // the callback (and potentially even reused) before the function returns so
            // we need to be a little careful when removing the file descriptor afterwards.
            int callbackResult = response.request.callback->handleEvent(fd, events, data);
            if (callbackResult == 0) {
                removeFd(fd, response.request.seq);
            }

            // Clear the callback reference in the response structure promptly because we
            // will not clear the response vector itself until the next poll.
            response.request.callback.clear();
            result = POLL_CALLBACK;
        }
    }
    return result;
}
           

The awoken() method:

void Looper::awoken() {
#if DEBUG_POLL_AND_WAKE
    ALOGD("%p ~ awoken", this);
#endif

    uint64_t counter;
    TEMP_FAILURE_RETRY(read(mWakeEventFd, &counter, sizeof(uint64_t)));
}

All it does is read mWakeEventFd, draining the eventfd counter so the fd stops reporting as readable.

So the blocking chain runs from Java's Looper.loop() -> MessageQueue.next() -> nativePollOnce() -> native Looper::pollOnce() -> Looper::pollInner() -> epoll_wait(): everything ultimately parks inside epoll.

Now Looper::wake():

void Looper::wake() {
#if DEBUG_POLL_AND_WAKE
    ALOGD("%p ~ wake", this);
#endif

    uint64_t inc = 1;
    ssize_t nWrite = TEMP_FAILURE_RETRY(write(mWakeEventFd, &inc, sizeof(uint64_t)));
    if (nWrite != sizeof(uint64_t)) {
        if (errno != EAGAIN) {
            LOG_ALWAYS_FATAL("Could not write wake signal to fd %d: %s",
                    mWakeEventFd, strerror(errno));
        }
    }
}
           

This code is simple: it just writes to mWakeEventFd. Once the fd becomes readable, the kernel reports it to epoll_wait, which returns with that fd the next time (or the very moment) the thread is waiting. Replaying the read path as pseudocode:

Looper.loop() {
    while (true) {
        Message msg = MessageQueue.next() {
            while (true) {
                block with the computed timeout;
                look at the message at the head of the list;
                if (it is due) {
                    return it immediately;
                } else {
                    update the timeout for the next block;
                    continue;
                }
            }
        }
        dispatch msg;
    }
}

Note the two nested infinite loops. Because the list is sorted by ascending time, a due message is dequeued and dispatched immediately; anything not yet due leaves the thread sitting in epoll_wait until either the timeout expires (epoll_wait returns 0) or a new deliverable message is enqueued (a write lands on mWakeEventFd). While it sits there the CPU is free to do other work — this is an idle wait, and "blocked" here only means the thread that operates this MessageQueue is parked. When the timeout fires or a new message arrives, epoll_wait returns, the thread becomes active again, and the loop continues.

Along the way you may have spotted a native-layer Message and MessageHandler as well. They do exist, and they play the same role as their Java counterparts, but they are used to service file descriptors from other sources (input devices and the like), so we can ignore them here.

Summary

That covers the send and read paths through both the Java-layer and native-layer source:

The Java layer handles inserting into and removing from the Message linked list; the native layer handles waking and sleeping the thread.
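In application terms, this machinery is what stands behind the everyday worker-thread pattern (HandlerThread bundles the Looper.prepare()/loop() boilerplate; a usage sketch assuming android.os.HandlerThread, android.os.Handler, and android.util.Log):

// A worker thread with its own Looper: post() enqueues into that thread's
// MessageQueue, and the loop()/next()/epoll machinery delivers it there.
HandlerThread worker = new HandlerThread("worker");
worker.start();                                   // runs Looper.prepare() + Looper.loop()
Handler handler = new Handler(worker.getLooper());
handler.postDelayed(
        () -> Log.d("Demo", "runs on 'worker' one second from now"),
        1000);                                    // when = uptimeMillis() + 1000
// later, when done: worker.quitSafely();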

One last diagram:

[Diagram: the send flow (green) and the read flow (yellow) across the Java and native layers]

Green marks the send flow, yellow the read flow.