The camera stack, surveyed from top to bottom, is enormous: App -> Framework -> CameraServer -> Camera HAL, and inside the HAL process the pipeline, the nodes that hook up the various algorithms, and further down the ISP, 3A, and drivers. Understanding all of it is no small task. Still, with the goal of mastering it we chip away piece by piece; every part understood is one less unknown, and our skills steadily grow along the way.
In this article we look at what happens when the HAL finishes processing a frame and returns the data through processCaptureResult, the callback defined by the HIDL interface. The Android 8.0 source I use is shared via Baidu net-disk; the download link and password are in the article "Android 8.0系統源碼分析--開篇".
The camera system also provides many dump facilities: you can dump metadata and dump buffers, which is a great help when analyzing problems.
1. On MTK platforms, the kernel logic in imagesensor.c writes all camera sensor info to /proc/driver/camera_info, so `cat /proc/driver/camera_info` shows the details directly: the sensor type (e.g. IMX386_mipi_raw), the supported resolutions, and other raw information.
2. Running `adb shell dumpsys media.camera > camerainfo.txt` collects the camera metadata; try it yourself, a great deal of camera metadata is easily obtained this way.
3. android-8.0.0\system\media\camera\docs\docs.html defines all the AOSP metadata; double-click to open it in a browser and you can see every tag. Each SoC vendor of course adds some of its own. Metadata names are joined with underscores: for example android.scaler.cropRegion becomes ANDROID_SCALER_CROP_REGION, which we can use in the HAL process to query the current cropRegion value.
4. Google provides demos based on Camera API2: android-Camera2Basic (basic preview and capture) and android-Camera2Video (video recording).
The HIDL interfaces are defined in ICameraDevice.hal, ICameraDeviceSession.hal, and ICameraDeviceCallback.hal, under hardware\interfaces\camera\device\xx, where xx is the version number; the source I downloaded has versions 1.0 and 3.2. Let's look at the 3.2 definition of ICameraDeviceCallback.hal:
package android.hardware.camera.device@3.2;

import android.hardware.camera.common@1.0::types;

/**
 *
 * Callback methods for the HAL to call into the framework.
 *
 * These methods are used to return metadata and image buffers for a completed
 * or failed captures, and to notify the framework of asynchronous events such
 * as errors.
 *
 * The framework must not call back into the HAL from within these callbacks,
 * and these calls must not block for extended periods.
 *
 */
interface ICameraDeviceCallback {

    /**
     * processCaptureResult:
     *
     * Send results from one or more completed or partially completed captures
     * to the framework.
     * processCaptureResult() may be invoked multiple times by the HAL in
     * response to a single capture request. This allows, for example, the
     * metadata and low-resolution buffers to be returned in one call, and
     * post-processed JPEG buffers in a later call, once it is available. Each
     * call must include the frame number of the request it is returning
     * metadata or buffers for. Only one call to processCaptureResult
     * may be made at a time by the HAL although the calls may come from
     * different threads in the HAL.
     *
     * A component (buffer or metadata) of the complete result may only be
     * included in one process_capture_result call. A buffer for each stream,
     * and the result metadata, must be returned by the HAL for each request in
     * one of the processCaptureResult calls, even in case of errors producing
     * some of the output. A call to processCaptureResult() with neither
     * output buffers or result metadata is not allowed.
     *
     * The order of returning metadata and buffers for a single result does not
     * matter, but buffers for a given stream must be returned in FIFO order. So
     * the buffer for request 5 for stream A must always be returned before the
     * buffer for request 6 for stream A. This also applies to the result
     * metadata; the metadata for request 5 must be returned before the metadata
     * for request 6.
     *
     * However, different streams are independent of each other, so it is
     * acceptable and expected that the buffer for request 5 for stream A may be
     * returned after the buffer for request 6 for stream B is. And it is
     * acceptable that the result metadata for request 6 for stream B is
     * returned before the buffer for request 5 for stream A is. If multiple
     * capture results are included in a single call, camera framework must
     * process results sequentially from lower index to higher index, as if
     * these results were sent to camera framework one by one, from lower index
     * to higher index.
     *
     * The HAL retains ownership of result structure, which only needs to be
     * valid to access during this call.
     *
     * The output buffers do not need to be filled yet; the framework must wait
     * on the stream buffer release sync fence before reading the buffer
     * data. Therefore, this method should be called by the HAL as soon as
     * possible, even if some or all of the output buffers are still in
     * being filled. The HAL must include valid release sync fences into each
     * output_buffers stream buffer entry, or -1 if that stream buffer is
     * already filled.
     *
     * If the result buffer cannot be constructed for a request, the HAL must
     * return an empty metadata buffer, but still provide the output buffers and
     * their sync fences. In addition, notify() must be called with an
     * ERROR_RESULT message.
     *
     * If an output buffer cannot be filled, its status field must be set to
     * STATUS_ERROR. In addition, notify() must be called with a ERROR_BUFFER
     * message.
     *
     * If the entire capture has failed, then this method still needs to be
     * called to return the output buffers to the framework. All the buffer
     * statuses must be STATUS_ERROR, and the result metadata must be an
     * empty buffer. In addition, notify() must be called with a ERROR_REQUEST
     * message. In this case, individual ERROR_RESULT/ERROR_BUFFER messages
     * must not be sent.
     *
     * Performance requirements:
     *
     * This is a non-blocking call. The framework must handle each CaptureResult
     * within 5ms.
     *
     * The pipeline latency (see S7 for definition) should be less than or equal to
     * 4 frame intervals, and must be less than or equal to 8 frame intervals.
     *
     */
    processCaptureResult(vec<CaptureResult> results);

    /**
     * notify:
     *
     * Asynchronous notification callback from the HAL, fired for various
     * reasons. Only for information independent of frame capture, or that
     * require specific timing. Multiple messages may be sent in one call; a
     * message with a higher index must be considered to have occurred after a
     * message with a lower index.
     *
     * Multiple threads may call notify() simultaneously.
     *
     * Buffers delivered to the framework must not be dispatched to the
     * application layer until a start of exposure timestamp (or input image's
     * start of exposure timestamp for a reprocess request) has been received
     * via a SHUTTER notify() call. It is highly recommended to dispatch this
     * call as early as possible.
     *
     * ------------------------------------------------------------------------
     * Performance requirements:
     *
     * This is a non-blocking call. The framework must handle each message in 5ms.
     */
    notify(vec<NotifyMsg> msgs);

};
These long comment blocks make it clear how important the interfaces are. Thorough comments are a good habit, and reading Android source is genuinely pleasant: naming, formatting, comments, packaging, classification, everything feels right. No wonder this code became the standard!
Now for the topic of this article. ICameraDeviceCallback is the callback interface defined in HIDL, and processCaptureResult is the method the HAL uses to call back into CameraServer. On the CameraServer side the callback class is Camera3Device: during openCamera the newly constructed Camera3Device is initialized, its initialize method connects to the HAL, and when the session is obtained it passes itself as the callback object down to the HAL, so the HAL later calls back into Camera3Device's processCaptureResult method.
Let's first revisit Camera3Device's initialize method. Camera3Device.cpp lives at frameworks\av\services\camera\libcameraservice\device3\Camera3Device.cpp, and the source of initialize is:
status_t Camera3Device::initialize(sp<CameraProviderManager> manager) {
    ATRACE_CALL();
    Mutex::Autolock il(mInterfaceLock);
    Mutex::Autolock l(mLock);
    ALOGV("%s: Initializing HIDL device for camera %s", __FUNCTION__, mId.string());
    if (mStatus != STATUS_UNINITIALIZED) {
        CLOGE("Already initialized!");
        return INVALID_OPERATION;
    }
    if (manager == nullptr) return INVALID_OPERATION;
    sp<ICameraDeviceSession> session;
    ATRACE_BEGIN("CameraHal::openSession");
    status_t res = manager->openSession(mId.string(), this,
            /*out*/ &session);
    ATRACE_END();
    if (res != OK) {
        SET_ERR_L("Could not open camera session: %s (%d)", strerror(-res), res);
        return res;
    }
    res = manager->getCameraCharacteristics(mId.string(), &mDeviceInfo);
    if (res != OK) {
        SET_ERR_L("Could not retrive camera characteristics: %s (%d)", strerror(-res), res);
        session->close();
        return res;
    }
    std::shared_ptr<RequestMetadataQueue> queue;
    auto requestQueueRet = session->getCaptureRequestMetadataQueue(
        [&queue](const auto& descriptor) {
            queue = std::make_shared<RequestMetadataQueue>(descriptor);
            if (!queue->isValid() || queue->availableToWrite() <= 0) {
                ALOGE("HAL returns empty request metadata fmq, not use it");
                queue = nullptr;
                // don't use the queue onwards.
            }
        });
    if (!requestQueueRet.isOk()) {
        ALOGE("Transaction error when getting request metadata fmq: %s, not use it",
                requestQueueRet.description().c_str());
        return DEAD_OBJECT;
    }
    auto resultQueueRet = session->getCaptureResultMetadataQueue(
        [&queue = mResultMetadataQueue](const auto& descriptor) {
            queue = std::make_unique<ResultMetadataQueue>(descriptor);
            if (!queue->isValid() || queue->availableToWrite() <= 0) {
                ALOGE("HAL returns empty result metadata fmq, not use it");
                queue = nullptr;
                // Don't use the queue onwards.
            }
        });
    if (!resultQueueRet.isOk()) {
        ALOGE("Transaction error when getting result metadata queue from camera session: %s",
                resultQueueRet.description().c_str());
        return DEAD_OBJECT;
    }
    mInterface = std::make_unique<HalInterface>(session, queue);
    std::string providerType;
    mVendorTagId = manager->getProviderTagIdLocked(mId.string());
    return initializeCommonLocked();
}
This method calls manager->openSession(mId.string(), this, &session) to open the session. manager is the sp<CameraProviderManager> parameter; the second argument to openSession is the callback interface, passed as this; the third is an output parameter that is assigned, via a callback, once the session has been created in the HAL process. Let's look at the implementation of CameraProviderManager's openSession method:
status_t CameraProviderManager::openSession(const std::string &id,
        const sp<hardware::camera::device::V3_2::ICameraDeviceCallback>& callback,
        /*out*/
        sp<hardware::camera::device::V3_2::ICameraDeviceSession> *session) {
    std::lock_guard<std::mutex> lock(mInterfaceMutex);
    auto deviceInfo = findDeviceInfoLocked(id,
            /*minVersion*/ {3,0}, /*maxVersion*/ {4,0});
    if (deviceInfo == nullptr) return NAME_NOT_FOUND;
    auto *deviceInfo3 = static_cast<ProviderInfo::DeviceInfo3*>(deviceInfo);
    Status status;
    hardware::Return<void> ret;
    ret = deviceInfo3->mInterface->open(callback, [&status, &session]
            (Status s, const sp<device::V3_2::ICameraDeviceSession>& cameraSession) {
        status = s;
        if (status == Status::OK) {
            *session = cameraSession;
        }
    });
    if (!ret.isOk()) {
        ALOGE("%s: Transaction error opening a session for camera device %s: %s",
                __FUNCTION__, id.c_str(), ret.description().c_str());
        return DEAD_OBJECT;
    }
    return mapToStatusT(status);
}
Here deviceInfo3->mInterface->open crosses over HIDL into the CameraHalServer process.
Now back to Camera3Device's processCaptureResult method:
// Only one processCaptureResult should be called at a time, so
// the locks won't block. The locks are present here simply to enforce this.
hardware::Return<void> Camera3Device::processCaptureResult(
        const hardware::hidl_vec<
                hardware::camera::device::V3_2::CaptureResult>& results) {
    if (mProcessCaptureResultLock.tryLock() != OK) {
        // This should never happen; it indicates a wrong client implementation
        // that doesn't follow the contract. But, we can be tolerant here.
        ALOGE("%s: callback overlapped! waiting 1s...",
                __FUNCTION__);
        if (mProcessCaptureResultLock.timedLock(1000000000 /* 1s */) != OK) {
            ALOGE("%s: cannot acquire lock in 1s, dropping results",
                    __FUNCTION__);
            // really don't know what to do, so bail out.
            return hardware::Void();
        }
    }
    for (const auto& result : results) {
        processOneCaptureResultLocked(result);
    }
    mProcessCaptureResultLock.unlock();
    return hardware::Void();
}
As you can see, the method is very concise, and there is a lesson in that: callbacks at important module boundaries should be exactly like this, simple and obvious, so that when we implement features of our own, colleagues on other modules can immediately understand the author's intent at the seams between modules. The method simply loops over each result and hands it to processOneCaptureResultLocked for further processing:
void Camera3Device::processOneCaptureResultLocked(
        const hardware::camera::device::V3_2::CaptureResult& result) {
    camera3_capture_result r;
    status_t res;
    r.frame_number = result.frameNumber;
    hardware::camera::device::V3_2::CameraMetadata resultMetadata;
    if (result.fmqResultSize > 0) {
        resultMetadata.resize(result.fmqResultSize);
        if (mResultMetadataQueue == nullptr) {
            return; // logged in initialize()
        }
        if (!mResultMetadataQueue->read(resultMetadata.data(), result.fmqResultSize)) {
            ALOGE("%s: Frame %d: Cannot read camera metadata from fmq, size = %" PRIu64,
                    __FUNCTION__, result.frameNumber, result.fmqResultSize);
            return;
        }
    } else {
        resultMetadata.setToExternal(const_cast<uint8_t *>(result.result.data()),
                result.result.size());
    }
    if (resultMetadata.size() != 0) {
        r.result = reinterpret_cast<const camera_metadata_t*>(resultMetadata.data());
        size_t expected_metadata_size = resultMetadata.size();
        if ((res = validate_camera_metadata_structure(r.result, &expected_metadata_size)) != OK) {
            ALOGE("%s: Frame %d: Invalid camera metadata received by camera service from HAL: %s (%d)",
                    __FUNCTION__, result.frameNumber, strerror(-res), res);
            return;
        }
    } else {
        r.result = nullptr;
    }
    std::vector<camera3_stream_buffer_t> outputBuffers(result.outputBuffers.size());
    std::vector<buffer_handle_t> outputBufferHandles(result.outputBuffers.size());
    for (size_t i = 0; i < result.outputBuffers.size(); i++) {
        auto& bDst = outputBuffers[i];
        const StreamBuffer &bSrc = result.outputBuffers[i];
        ssize_t idx = mOutputStreams.indexOfKey(bSrc.streamId);
        if (idx == NAME_NOT_FOUND) {
            ALOGE("%s: Frame %d: Buffer %zu: Invalid output stream id %d",
                    __FUNCTION__, result.frameNumber, i, bSrc.streamId);
            return;
        }
        bDst.stream = mOutputStreams.valueAt(idx)->asHalStream();
        buffer_handle_t *buffer;
        res = mInterface->popInflightBuffer(result.frameNumber, bSrc.streamId, &buffer);
        if (res != OK) {
            ALOGE("%s: Frame %d: Buffer %zu: No in-flight buffer for stream %d",
                    __FUNCTION__, result.frameNumber, i, bSrc.streamId);
            return;
        }
        bDst.buffer = buffer;
        bDst.status = mapHidlBufferStatus(bSrc.status);
        bDst.acquire_fence = -1;
        if (bSrc.releaseFence == nullptr) {
            bDst.release_fence = -1;
        } else if (bSrc.releaseFence->numFds == 1) {
            bDst.release_fence = dup(bSrc.releaseFence->data[0]);
        } else {
            ALOGE("%s: Frame %d: Invalid release fence for buffer %zu, fd count is %d, not 1",
                    __FUNCTION__, result.frameNumber, i, bSrc.releaseFence->numFds);
            return;
        }
    }
    r.num_output_buffers = outputBuffers.size();
    r.output_buffers = outputBuffers.data();
    camera3_stream_buffer_t inputBuffer;
    if (result.inputBuffer.streamId == -1) {
        r.input_buffer = nullptr;
    } else {
        if (mInputStream->getId() != result.inputBuffer.streamId) {
            ALOGE("%s: Frame %d: Invalid input stream id %d", __FUNCTION__,
                    result.frameNumber, result.inputBuffer.streamId);
            return;
        }
        inputBuffer.stream = mInputStream->asHalStream();
        buffer_handle_t *buffer;
        res = mInterface->popInflightBuffer(result.frameNumber, result.inputBuffer.streamId,
                &buffer);
        if (res != OK) {
            ALOGE("%s: Frame %d: Input buffer: No in-flight buffer for stream %d",
                    __FUNCTION__, result.frameNumber, result.inputBuffer.streamId);
            return;
        }
        inputBuffer.buffer = buffer;
        inputBuffer.status = mapHidlBufferStatus(result.inputBuffer.status);
        inputBuffer.acquire_fence = -1;
        if (result.inputBuffer.releaseFence == nullptr) {
            inputBuffer.release_fence = -1;
        } else if (result.inputBuffer.releaseFence->numFds == 1) {
            inputBuffer.release_fence = dup(result.inputBuffer.releaseFence->data[0]);
        } else {
            ALOGE("%s: Frame %d: Invalid release fence for input buffer, fd count is %d, not 1",
                    __FUNCTION__, result.frameNumber, result.inputBuffer.releaseFence->numFds);
            return;
        }
        r.input_buffer = &inputBuffer;
    }
    r.partial_result = result.partialResult;
    processCaptureResult(&r);
}
The first line assigns the member frame_number. As we saw last time when covering the RequestThread preview loop, this field is critical: it is how the CameraServer and CameraHalServer processes match a result back to its request! Next, based on if (result.fmqResultSize > 0), the metadata is read. Then come the outputBuffers: the loop pulls each StreamBuffer out of result.outputBuffers and calls mInterface->popInflightBuffer(result.frameNumber, bSrc.streamId, &buffer) to fetch the buffer pointer that the HAL has filled. This buffer pointer is the data carrier we ultimately want; as covered in the previous article, Android 8.0系統源碼分析--Camera RequestThread預覽循環源碼分析, it was stored in the member mInflightBufferMap when the buffers were packaged before the request was sent to the HAL process, and here the reverse happens and it is taken back out. Then the inputBuffer is handled, and with that the camera3_capture_result argument is fully assembled, so processCaptureResult is called to process the frame result. That method is an overload of the same name taking a const camera3_capture_result *result, with source as follows:
void Camera3Device::processCaptureResult(const camera3_capture_result *result) {
    ATRACE_CALL();
    status_t res;
    uint32_t frameNumber = result->frame_number;
    if (result->result == NULL && result->num_output_buffers == 0 &&
            result->input_buffer == NULL) {
        SET_ERR("No result data provided by HAL for frame %d",
                frameNumber);
        return;
    }
    if (!mUsePartialResult &&
            result->result != NULL &&
            result->partial_result != 1) {
        SET_ERR("Result is malformed for frame %d: partial_result %u must be 1"
                " if partial result is not supported",
                frameNumber, result->partial_result);
        return;
    }
    bool isPartialResult = false;
    CameraMetadata collectedPartialResult;
    CaptureResultExtras resultExtras;
    bool hasInputBufferInRequest = false;
    // Get shutter timestamp and resultExtras from list of in-flight requests,
    // where it was added by the shutter notification for this frame. If the
    // shutter timestamp isn't received yet, append the output buffers to the
    // in-flight request and they will be returned when the shutter timestamp
    // arrives. Update the in-flight status and remove the in-flight entry if
    // all result data and shutter timestamp have been received.
    nsecs_t shutterTimestamp = 0;
    {
        Mutex::Autolock l(mInFlightLock);
        ssize_t idx = mInFlightMap.indexOfKey(frameNumber);
        if (idx == NAME_NOT_FOUND) {
            SET_ERR("Unknown frame number for capture result: %d",
                    frameNumber);
            return;
        }
        InFlightRequest &request = mInFlightMap.editValueAt(idx);
        ALOGVV("%s: got InFlightRequest requestId = %" PRId32
                ", frameNumber = %" PRId64 ", burstId = %" PRId32
                ", partialResultCount = %d, hasCallback = %d",
                __FUNCTION__, request.resultExtras.requestId,
                request.resultExtras.frameNumber, request.resultExtras.burstId,
                result->partial_result, request.hasCallback);
        // Always update the partial count to the latest one if it's not 0
        // (buffers only). When framework aggregates adjacent partial results
        // into one, the latest partial count will be used.
        if (result->partial_result != 0)
            request.resultExtras.partialResultCount = result->partial_result;
        // Check if this result carries only partial metadata
        if (mUsePartialResult && result->result != NULL) {
            if (result->partial_result > mNumPartialResults || result->partial_result < 1) {
                SET_ERR("Result is malformed for frame %d: partial_result %u must be in"
                        " the range of [1, %d] when metadata is included in the result",
                        frameNumber, result->partial_result, mNumPartialResults);
                return;
            }
            isPartialResult = (result->partial_result < mNumPartialResults);
            if (isPartialResult) {
                request.collectedPartialResult.append(result->result);
            }
            if (isPartialResult && request.hasCallback) {
                // Send partial capture result
                sendPartialCaptureResult(result->result, request.resultExtras,
                        frameNumber);
            }
        }
        shutterTimestamp = request.shutterTimestamp;
        hasInputBufferInRequest = request.hasInputBuffer;
        // Did we get the (final) result metadata for this capture?
        if (result->result != NULL && !isPartialResult) {
            if (request.haveResultMetadata) {
                SET_ERR("Called multiple times with metadata for frame %d",
                        frameNumber);
                return;
            }
            if (mUsePartialResult &&
                    !request.collectedPartialResult.isEmpty()) {
                collectedPartialResult.acquire(
                        request.collectedPartialResult);
            }
            request.haveResultMetadata = true;
        }
        uint32_t numBuffersReturned = result->num_output_buffers;
        if (result->input_buffer != NULL) {
            if (hasInputBufferInRequest) {
                numBuffersReturned += 1;
            } else {
                ALOGW("%s: Input buffer should be NULL if there is no input"
                        " buffer sent in the request",
                        __FUNCTION__);
            }
        }
        request.numBuffersLeft -= numBuffersReturned;
        if (request.numBuffersLeft < 0) {
            SET_ERR("Too many buffers returned for frame %d",
                    frameNumber);
            return;
        }
        camera_metadata_ro_entry_t entry;
        res = find_camera_metadata_ro_entry(result->result,
                ANDROID_SENSOR_TIMESTAMP, &entry);
        if (res == OK && entry.count == 1) {
            request.sensorTimestamp = entry.data.i64[0];
        }
        // If shutter event isn't received yet, append the output buffers to
        // the in-flight request. Otherwise, return the output buffers to
        // streams.
        if (shutterTimestamp == 0) {
            request.pendingOutputBuffers.appendArray(result->output_buffers,
                    result->num_output_buffers);
        } else {
            returnOutputBuffers(result->output_buffers,
                    result->num_output_buffers, shutterTimestamp);
        }
        if (result->result != NULL && !isPartialResult) {
            if (shutterTimestamp == 0) {
                request.pendingMetadata = result->result;
                request.collectedPartialResult = collectedPartialResult;
            } else if (request.hasCallback) {
                CameraMetadata metadata;
                metadata = result->result;
                sendCaptureResult(metadata, request.resultExtras,
                        collectedPartialResult, frameNumber,
                        hasInputBufferInRequest);
            }
        }
        removeInFlightRequestIfReadyLocked(idx);
    } // scope for mInFlightLock
    if (result->input_buffer != NULL) {
        if (hasInputBufferInRequest) {
            Camera3Stream *stream =
                    Camera3Stream::cast(result->input_buffer->stream);
            res = stream->returnInputBuffer(*(result->input_buffer));
            // Note: stream may be deallocated at this point, if this buffer was the
            // last reference to it.
            if (res != OK) {
                ALOGE("%s: RequestThread: Can't return input buffer for frame %d to"
                        " its stream:%s (%d)", __FUNCTION__,
                        frameNumber, strerror(-res), res);
            }
        } else {
            ALOGW("%s: Input buffer should be NULL if there is no input"
                    " buffer sent in the request, skipping input buffer return.",
                    __FUNCTION__);
        }
    }
}
第一步還是取幀号,isPartialResult是指部分,目前對Partial的具體含義還不是很了解,可能的情況比如拍HDR,需要采集三幀,三幀的FrameNumber相同,這三幀一起才能解析合成一幀圖檔,是以三幀中的每一幀就是Partial的意思了。shutterTimestamp一直不為0,該值是從HAL過來的,為什麼一直不為0還需要往HAL那麼追究,它不為0導緻if (shutterTimestamp == 0)判斷為false,則進入else分支,調用returnOutputBuffers歸還buffer,緊接着的下面幾句,當result->result非空并且目前的回調幀不是部分結果時(if (result->result != NULL && !isPartialResult)),就調用sendCaptureResult把結果回傳給APP。
Let's first look at the logic of returnOutputBuffers, then analyze how sendCaptureResult passes the result back to the app. The source of returnOutputBuffers:
void Camera3Device::returnOutputBuffers(
        const camera3_stream_buffer_t *outputBuffers, size_t numBuffers,
        nsecs_t timestamp) {
    for (size_t i = 0; i < numBuffers; i++) {
        Camera3Stream *stream = Camera3Stream::cast(outputBuffers[i].stream);
        status_t res = stream->returnBuffer(outputBuffers[i], timestamp);
        // Note: stream may be deallocated at this point, if this buffer was
        // the last reference to it.
        if (res != OK) {
            ALOGE("Can't return buffer to its stream: %s (%d)",
                    strerror(-res), res);
        }
    }
}
The buffers are returned one by one in the loop; each buffer belongs to a stream, and the stream's returnBuffer method is called directly. That method is implemented in the Camera3Stream base class, at frameworks\av\services\camera\libcameraservice\device3\Camera3Stream.cpp:
status_t Camera3Stream::returnBuffer(const camera3_stream_buffer &buffer,
        nsecs_t timestamp) {
    ATRACE_CALL();
    Mutex::Autolock l(mLock);
    // Check if this buffer is outstanding.
    if (!isOutstandingBuffer(buffer)) {
        ALOGE("%s: Stream %d: Returning an unknown buffer.", __FUNCTION__, mId);
        return BAD_VALUE;
    }
    /**
     * TODO: Check that the state is valid first.
     *
     * <HAL3.2 IN_CONFIG and IN_RECONFIG in addition to CONFIGURED.
     * >= HAL3.2 CONFIGURED only
     *
     * Do this for getBuffer as well.
     */
    status_t res = returnBufferLocked(buffer, timestamp);
    if (res == OK) {
        fireBufferListenersLocked(buffer, /*acquired*/false, /*output*/true);
    }
    // Even if returning the buffer failed, we still want to signal whoever is waiting for the
    // buffer to be returned.
    mOutputBufferReturnedSignal.signal();
    removeOutstandingBuffer(buffer);
    return res;
}
This method delegates to returnBufferLocked. We covered fireBufferListenersLocked last time: it only matters for API1, API2 no longer uses that interface. returnBufferLocked is overridden by subclasses, so let's look at its implementation in Camera3OutputStream:
status_t Camera3OutputStream::returnBufferLocked(
        const camera3_stream_buffer &buffer,
        nsecs_t timestamp) {
    ATRACE_CALL();
    status_t res = returnAnyBufferLocked(buffer, timestamp, /*output*/true);
    if (res != OK) {
        return res;
    }
    mLastTimestamp = timestamp;
    mFrameCount++;
    return OK;
}
It in turn calls returnAnyBufferLocked, which takes us back to the parent class Camera3IOStreamBase:
status_t Camera3IOStreamBase::returnAnyBufferLocked(
        const camera3_stream_buffer &buffer,
        nsecs_t timestamp,
        bool output) {
    status_t res;
    // returnBuffer may be called from a raw pointer, not a sp<>, and we'll be
    // decrementing the internal refcount next. In case this is the last ref, we
    // might get destructed on the decStrong(), so keep an sp around until the
    // end of the call - otherwise have to sprinkle the decStrong on all exit
    // points.
    sp<Camera3IOStreamBase> keepAlive(this);
    decStrong(this);
    if ((res = returnBufferPreconditionCheckLocked()) != OK) {
        return res;
    }
    sp<Fence> releaseFence;
    res = returnBufferCheckedLocked(buffer, timestamp, output,
            &releaseFence);
    // Res may be an error, but we still want to decrement our owned count
    // to enable clean shutdown. So we'll just return the error but otherwise
    // carry on
    if (releaseFence != 0) {
        mCombinedFence = Fence::merge(mName, mCombinedFence, releaseFence);
    }
    if (output) {
        mHandoutOutputBufferCount--;
    }
    mHandoutTotalBufferCount--;
    if (mHandoutTotalBufferCount == 0 && mState != STATE_IN_CONFIG &&
            mState != STATE_IN_RECONFIG && mState != STATE_PREPARING) {
        /**
         * Avoid a spurious IDLE->ACTIVE->IDLE transition when using buffers
         * before/after register_stream_buffers during initial configuration
         * or re-configuration, or during prepare pre-allocation
         */
        ALOGV("%s: Stream %d: All buffers returned; now idle", __FUNCTION__,
                mId);
        sp<StatusTracker> statusTracker = mStatusTracker.promote();
        if (statusTracker != 0) {
            statusTracker->markComponentIdle(mStatusId, mCombinedFence);
        }
    }
    mBufferReturnedSignal.signal();
    if (output) {
        mLastTimestamp = timestamp;
    }
    return res;
}
First returnBufferPreconditionCheckLocked validates the preconditions: if the state is wrong or the count of handed-out buffers is zero, something is broken and an error code is returned immediately. If the check passes, returnBufferCheckedLocked continues the work; its implementation is again pushed down into the subclass Camera3OutputStream:
status_t Camera3OutputStream::returnBufferCheckedLocked(
        const camera3_stream_buffer &buffer,
        nsecs_t timestamp,
        bool output,
        /*out*/
        sp<Fence> *releaseFenceOut) {
    (void)output;
    ALOG_ASSERT(output, "Expected output to be true");
    status_t res;
    // Fence management - always honor release fence from HAL
    sp<Fence> releaseFence = new Fence(buffer.release_fence);
    int anwReleaseFence = releaseFence->dup();
    /**
     * Release the lock briefly to avoid deadlock with
     * StreamingProcessor::startStream -> Camera3Stream::isConfiguring (this
     * thread will go into StreamingProcessor::onFrameAvailable) during
     * queueBuffer
     */
    sp<ANativeWindow> currentConsumer = mConsumer;
    mLock.unlock();
    ANativeWindowBuffer *anwBuffer = container_of(buffer.buffer, ANativeWindowBuffer, handle);
    /**
     * Return buffer back to ANativeWindow
     */
    if (buffer.status == CAMERA3_BUFFER_STATUS_ERROR) {
        // Cancel buffer
        ALOGW("A frame is dropped for stream %d", mId);
        res = currentConsumer->cancelBuffer(currentConsumer.get(),
                anwBuffer,
                anwReleaseFence);
        if (res != OK) {
            ALOGE("%s: Stream %d: Error cancelling buffer to native window:"
                    " %s (%d)", __FUNCTION__, mId, strerror(-res), res);
        }
        notifyBufferReleased(anwBuffer);
        if (mUseBufferManager) {
            // Return this buffer back to buffer manager.
            mBufferReleasedListener->onBufferReleased();
        }
    } else {
        if (mTraceFirstBuffer && (stream_type == CAMERA3_STREAM_OUTPUT)) {
            {
                char traceLog[48];
                snprintf(traceLog, sizeof(traceLog), "Stream %d: first full buffer\n", mId);
                ATRACE_NAME(traceLog);
            }
            mTraceFirstBuffer = false;
        }
        /* Certain consumers (such as AudioSource or HardwareComposer) use
         * MONOTONIC time, causing time misalignment if camera timestamp is
         * in BOOTTIME. Do the conversion if necessary. */
        res = native_window_set_buffers_timestamp(mConsumer.get(),
                mUseMonoTimestamp ? timestamp - mTimestampOffset : timestamp);
        if (res != OK) {
            ALOGE("%s: Stream %d: Error setting timestamp: %s (%d)",
                    __FUNCTION__, mId, strerror(-res), res);
            return res;
        }
        res = queueBufferToConsumer(currentConsumer, anwBuffer, anwReleaseFence);
        if (res != OK) {
            ALOGE("%s: Stream %d: Error queueing buffer to native window: "
                    "%s (%d)", __FUNCTION__, mId, strerror(-res), res);
        }
    }
    mLock.lock();
    // Once a valid buffer has been returned to the queue, can no longer
    // dequeue all buffers for preallocation.
    if (buffer.status != CAMERA3_BUFFER_STATUS_ERROR) {
        mStreamUnpreparable = true;
    }
    if (res != OK) {
        close(anwReleaseFence);
    }
    *releaseFenceOut = releaseFence;
    return res;
}
mConsumer is the sp<Surface> that was passed into the constructor when the stream was created; it is the source of the buffers we dequeue! Now that the buffer has been filled by the HAL's ISP, 3A, and the various algorithms, it needs to be queued back to its home Surface for display or for saving the photo. When the buffer status (buffer.status) is normal, the else branch runs and queueBufferToConsumer returns the buffer:
status_t Camera3OutputStream::queueBufferToConsumer(sp<ANativeWindow>& consumer,
        ANativeWindowBuffer* buffer, int anwReleaseFence) {
    return consumer->queueBuffer(consumer.get(), buffer, anwReleaseFence);
}
Very simple: it calls queueBuffer on the ANativeWindow object to return the buffer. ANativeWindow is the graphics interface defined for OpenGL; on Android it is implemented by Surface and SurfaceFlinger, one producing buffers and the other consuming them.
Having covered buffer return, we climb back up to Camera3Device's processCaptureResult and continue with the implementation of sendCaptureResult:
void Camera3Device::sendCaptureResult(CameraMetadata &pendingMetadata,
        CaptureResultExtras &resultExtras,
        CameraMetadata &collectedPartialResult,
        uint32_t frameNumber,
        bool reprocess) {
    if (pendingMetadata.isEmpty())
        return;
    Mutex::Autolock l(mOutputLock);
    // TODO: need to track errors for tighter bounds on expected frame number
    if (reprocess) {
        if (frameNumber < mNextReprocessResultFrameNumber) {
            SET_ERR("Out-of-order reprocess capture result metadata submitted! "
                    "(got frame number %d, expecting %d)",
                    frameNumber, mNextReprocessResultFrameNumber);
            return;
        }
        mNextReprocessResultFrameNumber = frameNumber + 1;
    } else {
        if (frameNumber < mNextResultFrameNumber) {
            SET_ERR("Out-of-order capture result metadata submitted! "
                    "(got frame number %d, expecting %d)",
                    frameNumber, mNextResultFrameNumber);
            return;
        }
        mNextResultFrameNumber = frameNumber + 1;
    }
    CaptureResult captureResult;
    captureResult.mResultExtras = resultExtras;
    captureResult.mMetadata = pendingMetadata;
    // Append any previous partials to form a complete result
    if (mUsePartialResult && !collectedPartialResult.isEmpty()) {
        captureResult.mMetadata.append(collectedPartialResult);
    }
    captureResult.mMetadata.sort();
    // Check that there's a timestamp in the result metadata
    camera_metadata_entry timestamp = captureResult.mMetadata.find(ANDROID_SENSOR_TIMESTAMP);
    if (timestamp.count == 0) {
        SET_ERR("No timestamp provided by HAL for frame %d!",
                frameNumber);
        return;
    }
    mTagMonitor.monitorMetadata(TagMonitor::RESULT,
            frameNumber, timestamp.data.i64[0], captureResult.mMetadata);
    insertResultLocked(&captureResult, frameNumber);
}
首先還是給幀号指派,然後根據函數入參填充一個CaptureResult對象,用于回調結果,最後調用insertResultLocked将指派好的CaptureResult結果對象插入到結果隊列上,insertResultLocked方法的源碼如下:
void Camera3Device::insertResultLocked(CaptureResult *result,
        uint32_t frameNumber) {
    if (result == nullptr) return;
    camera_metadata_t *meta = const_cast<camera_metadata_t *>(
            result->mMetadata.getAndLock());
    set_camera_metadata_vendor_id(meta, mVendorTagId);
    result->mMetadata.unlock(meta);
    if (result->mMetadata.update(ANDROID_REQUEST_FRAME_COUNT,
            (int32_t*)&frameNumber, 1) != OK) {
        SET_ERR("Failed to set frame number %d in metadata", frameNumber);
        return;
    }
    if (result->mMetadata.update(ANDROID_REQUEST_ID, &result->mResultExtras.requestId, 1) != OK) {
        SET_ERR("Failed to set request ID in metadata for frame %d", frameNumber);
        return;
    }
    // Valid result, insert into queue
    List<CaptureResult>::iterator queuedResult =
            mResultQueue.insert(mResultQueue.end(), CaptureResult(*result));
    ALOGVV("%s: result requestId = %" PRId32 ", frameNumber = %" PRId64
            ", burstId = %" PRId32, __FUNCTION__,
            queuedResult->mResultExtras.requestId,
            queuedResult->mResultExtras.frameNumber,
            queuedResult->mResultExtras.burstId);
    mResultSignal.signal();
}
The method validates its arguments, appends the incoming result to the end of the mResultQueue result queue, and finally calls mResultSignal.signal() to tell a blocked thread that a message has arrived. Who is it notifying? The FrameProcessorBase frame-processing thread, which was constructed and started in CameraDeviceClient's initializeImpl method during initialization. Let's recap. The source of CameraDeviceClient's initializeImpl:
template<typename TProviderPtr>
status_t CameraDeviceClient::initializeImpl(TProviderPtr providerPtr) {
    ATRACE_CALL();
    status_t res;
    res = Camera2ClientBase::initialize(providerPtr);
    if (res != OK) {
        return res;
    }
    String8 threadName;
    mFrameProcessor = new FrameProcessorBase(mDevice);
    threadName = String8::format("CDU-%s-FrameProc", mCameraIdStr.string());
    mFrameProcessor->run(threadName.string());
    mFrameProcessor->registerListener(FRAME_PROCESSOR_LISTENER_MIN_ID,
            FRAME_PROCESSOR_LISTENER_MAX_ID,
            /*listener*/this,
            /*sendPartials*/true);
    return OK;
}
Notice that a thread name is passed when mFrameProcessor's run method is called, so you can find this thread by listing all threads of the current process with ps -T. Then mFrameProcessor->registerListener is called to register the callback interface. Looking at its four parameters: FRAME_PROCESSOR_LISTENER_MIN_ID and FRAME_PROCESSOR_LISTENER_MAX_ID are defined in the CameraDeviceClient.h header, with FRAME_PROCESSOR_LISTENER_MIN_ID = 0 and FRAME_PROCESSOR_LISTENER_MAX_ID = 0x7fffffffL; they bound the range of request IDs this listener wants to receive, and since that range covers every non-negative ID, it is certainly enough for any need. The third parameter is the callback object itself; this is passed here, so once the FrameProcessorBase thread finishes processing a frame result it calls back into the CameraDeviceClient class, through the onResultAvailable method. The fourth parameter indicates whether partial-result callbacks should be delivered; it is hard-coded to true, since the framework must support all kinds of use cases, and partial-result delivery is certainly one of them.
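The (minId, maxId, listener, sendPartials) semantics just described can be sketched with a simplified registry. The names below are hypothetical, and the real implementation additionally holds each listener through a weak pointer:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of registerListener's range semantics: a listener is
// invoked only for requestIds in [minId, maxId), and partial results are
// delivered only if it asked for them. Names are hypothetical.
class ListenerRegistry {
    interface FilteredListener { void onResultAvailable(int requestId); }

    private static class RangeListener {
        final int minId, maxId;
        final FilteredListener listener;
        final boolean sendPartials;
        RangeListener(int minId, int maxId, FilteredListener l, boolean sendPartials) {
            this.minId = minId; this.maxId = maxId;
            this.listener = l; this.sendPartials = sendPartials;
        }
    }

    private final List<RangeListener> listeners = new ArrayList<>();

    void registerListener(int minId, int maxId, FilteredListener l, boolean sendPartials) {
        listeners.add(new RangeListener(minId, maxId, l, sendPartials));
    }

    /** Returns how many listeners the result was delivered to. */
    int dispatch(int requestId, boolean isPartialResult) {
        int delivered = 0;
        for (RangeListener r : listeners) {
            // Same filter as processListeners: in range, and either a full
            // result or a listener that opted in to partials.
            if (requestId >= r.minId && requestId < r.maxId
                    && (!isPartialResult || r.sendPartials)) {
                r.listener.onResultAvailable(requestId);
                delivered++;
            }
        }
        return delivered;
    }
}
```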
With FrameProcessorBase's initialization covered, let's continue with its main loop, threadLoop. The source of that method is as follows:
bool FrameProcessorBase::threadLoop() {
status_t res;
sp<CameraDeviceBase> device;
{
device = mDevice.promote();
if (device == 0) return false;
}
res = device->waitForNextFrame(kWaitDuration);
if (res == OK) {
processNewFrames(device);
} else if (res != TIMED_OUT) {
ALOGE("FrameProcessorBase: Error waiting for new "
"frames: %s (%d)", strerror(-res), res);
}
return true;
}
Here device->waitForNextFrame(kWaitDuration) is called to check whether the result queue has data. kWaitDuration is the wait interval, 10 milliseconds, defined in the FrameProcessorBase.h header as follows:
static const nsecs_t kWaitDuration = 10000000; // 10 ms
The device here is the Camera3Device object; let's look at its implementation of waitForNextFrame, whose source is as follows:
status_t Camera3Device::waitForNextFrame(nsecs_t timeout) {
status_t res;
Mutex::Autolock l(mOutputLock);
while (mResultQueue.empty()) {
res = mResultSignal.waitRelative(mOutputLock, timeout);
if (res == TIMED_OUT) {
return res;
} else if (res != OK) {
ALOGW("%s: Camera %s: No frame in %" PRId64 " ns: %s (%d)",
__FUNCTION__, mId.string(), timeout, strerror(-res), res);
return res;
}
}
return OK;
}
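The wait-in-a-loop idiom above — re-check the queue after every wakeup, and return distinctly on timeout — maps directly onto Java's monitor wait. A minimal sketch under hypothetical names (the real code uses Condition::waitRelative and status_t return codes):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch of Camera3Device::waitForNextFrame's timed wait.
class FrameWaiter {
    private final Queue<Object> results = new ArrayDeque<>();
    private final Object outputLock = new Object();   // plays the role of mOutputLock

    void postResult(Object r) {
        synchronized (outputLock) {
            results.add(r);
            outputLock.notifyAll();   // counterpart of mResultSignal.signal()
        }
    }

    /** Returns true if a result is available, false on timeout (like TIMED_OUT). */
    boolean waitForNextFrame(long timeoutMs) {
        long deadline = System.nanoTime() + timeoutMs * 1_000_000L;
        synchronized (outputLock) {
            // Loop, because a wakeup does not guarantee the queue is non-empty.
            while (results.isEmpty()) {
                long remainingMs = (deadline - System.nanoTime()) / 1_000_000L;
                if (remainingMs <= 0) return false;    // caller treats this like TIMED_OUT
                try {
                    outputLock.wait(remainingMs);      // counterpart of waitRelative()
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;                      // treat interruption as an error return
                }
            }
        }
        return true;
    }
}
```

Exactly as in threadLoop, a false return here need not be fatal: the caller can simply loop and wait again.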
The logic here is clear: if the result queue mResultQueue is not empty, the method returns immediately, because there is data to process; if it is empty, the thread waits up to kWaitDuration (10 ms) on the condition variable. Whatever the return value, it never affects the FrameProcessorBase loop: even if one frame's data goes wrong, the frame-processing thread cannot let that make it exit, and it must keep cycling normally. Once data is available, processNewFrames is called to fetch and process it. The source of processNewFrames is as follows:
void FrameProcessorBase::processNewFrames(const sp<CameraDeviceBase> &device) {
status_t res;
ATRACE_CALL();
CaptureResult result;
ALOGV("%s: Camera %s: Process new frames", __FUNCTION__, device->getId().string());
while ( (res = device->getNextResult(&result)) == OK) {
// TODO: instead of getting frame number from metadata, we should read
// this from result.mResultExtras when CameraDeviceBase interface is fixed.
camera_metadata_entry_t entry;
entry = result.mMetadata.find(ANDROID_REQUEST_FRAME_COUNT);
if (entry.count == 0) {
ALOGE("%s: Camera %s: Error reading frame number",
__FUNCTION__, device->getId().string());
break;
}
ATRACE_INT("cam2_frame", entry.data.i32[0]);
if (!processSingleFrame(result, device)) {
break;
}
if (!result.mMetadata.isEmpty()) {
Mutex::Autolock al(mLastFrameMutex);
mLastFrame.acquire(result.mMetadata);
}
}
if (res != NOT_ENOUGH_DATA) {
ALOGE("%s: Camera %s: Error getting next frame: %s (%d)",
__FUNCTION__, device->getId().string(), strerror(-res), res);
return;
}
return;
}
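The drain idiom in processNewFrames — keep popping results until the device reports there is nothing left, and bail out of the loop as soon as a single frame fails — can be sketched generically. This is a hypothetical helper, not an AOSP API:

```java
import java.util.Queue;
import java.util.function.Predicate;

// Hypothetical sketch of processNewFrames' drain loop: poll until empty
// (analogous to getNextResult() returning NOT_ENOUGH_DATA), stopping early
// if processing a single frame fails (processSingleFrame returning false).
class FrameDrainer {
    static <T> int drain(Queue<T> queue, Predicate<T> processSingleFrame) {
        int processed = 0;
        T result;
        while ((result = queue.poll()) != null) {       // like getNextResult() == OK
            if (!processSingleFrame.test(result)) {
                break;                                  // like the break on failure above
            }
            processed++;
        }
        return processed;
    }
}
```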
The while condition here checks whether another frame can still be fetched; each result is written into the local result variable and then handed to processSingleFrame. The source of processSingleFrame is as follows:
bool FrameProcessorBase::processSingleFrame(CaptureResult &result,
const sp<CameraDeviceBase> &device) {
ALOGV("%s: Camera %s: Process single frame (is empty? %d)",
__FUNCTION__, device->getId().string(), result.mMetadata.isEmpty());
return processListeners(result, device) == OK;
}
This method is very simple: it just forwards to processListeners. The source of processListeners is as follows:
status_t FrameProcessorBase::processListeners(const CaptureResult &result,
const sp<CameraDeviceBase> &device) {
ATRACE_CALL();
camera_metadata_ro_entry_t entry;
// Check if this result is partial.
bool isPartialResult =
result.mResultExtras.partialResultCount < mNumPartialResults;
// TODO: instead of getting requestID from CameraMetadata, we should get it
// from CaptureResultExtras. This will require changing Camera2Device.
// Currently Camera2Device uses MetadataQueue to store results, which does not
// include CaptureResultExtras.
entry = result.mMetadata.find(ANDROID_REQUEST_ID);
if (entry.count == 0) {
ALOGE("%s: Camera %s: Error reading frame id", __FUNCTION__, device->getId().string());
return BAD_VALUE;
}
int32_t requestId = entry.data.i32[0];
List<sp<FilteredListener> > listeners;
{
Mutex::Autolock l(mInputMutex);
List<RangeListener>::iterator item = mRangeListeners.begin();
// Don't deliver partial results to listeners that don't want them
while (item != mRangeListeners.end()) {
if (requestId >= item->minId && requestId < item->maxId &&
(!isPartialResult || item->sendPartials)) {
sp<FilteredListener> listener = item->listener.promote();
if (listener == 0) {
item = mRangeListeners.erase(item);
continue;
} else {
listeners.push_back(listener);
}
}
item++;
}
}
ALOGV("%s: Camera %s: Got %zu range listeners out of %zu", __FUNCTION__,
device->getId().string(), listeners.size(), mRangeListeners.size());
List<sp<FilteredListener> >::iterator item = listeners.begin();
for (; item != listeners.end(); item++) {
(*item)->onResultAvailable(result);
}
return OK;
}
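One detail worth isolating is the promote-or-erase idiom in the middle of processListeners: listeners are held through weak pointers, and any entry whose owner has gone away is removed from the list during iteration, while the live ones are collected so they can be invoked outside the lock. A Java sketch using WeakReference (hypothetical class, standing in for the wp<>/promote() pair):

```java
import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical sketch of processListeners' cleanup idiom: weakly held
// listeners, with dead entries erased in place during iteration.
class WeakListenerList<T> {
    private final List<WeakReference<T>> refs = new ArrayList<>();

    void add(T listener) {
        refs.add(new WeakReference<>(listener));
    }

    /** Collects live listeners, pruning entries whose target was collected. */
    List<T> collectLive() {
        List<T> live = new ArrayList<>();
        Iterator<WeakReference<T>> it = refs.iterator();
        while (it.hasNext()) {
            T l = it.next().get();       // analogous to item->listener.promote()
            if (l == null) {
                it.remove();             // analogous to mRangeListeners.erase(item)
                continue;
            }
            live.add(l);
        }
        return live;
    }

    int size() { return refs.size(); }
}
```

Collecting into a local list first, as processListeners does, means the actual onResultAvailable calls happen without holding mInputMutex.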
該方法中就是将所有注冊的listeners取出來,調用它的onResultAvailable方法了,而listeners我們前面已經講過了,就是CameraDeviceClient類,接着繼續來看CameraDeviceClient類的onResultAvailable方法,源碼如下:
void CameraDeviceClient::onResultAvailable(const CaptureResult& result) {
ATRACE_CALL();
ALOGV("%s", __FUNCTION__);
// Thread-safe. No lock necessary.
sp<hardware::camera2::ICameraDeviceCallbacks> remoteCb = mRemoteCallback;
if (remoteCb != NULL) {
remoteCb->onResultReceived(result.mMetadata, result.mResultExtras);
}
}
The remoteCb here takes us back into the camera application process: it is the CameraDeviceCallbacks object, an inner class of CameraDeviceImpl. The source of the CameraDeviceCallbacks class is as follows:
public class CameraDeviceCallbacks extends ICameraDeviceCallbacks.Stub {
@Override
public IBinder asBinder() {
return this;
}
@Override
public void onDeviceError(final int errorCode, CaptureResultExtras resultExtras) {
if (DEBUG) {
Log.d(TAG, String.format(
"Device error received, code %d, frame number %d, request ID %d, subseq ID %d",
errorCode, resultExtras.getFrameNumber(), resultExtras.getRequestId(),
resultExtras.getSubsequenceId()));
}
synchronized (mInterfaceLock) {
if (mRemoteDevice == null) {
return; // Camera already closed
}
switch (errorCode) {
case ERROR_CAMERA_DISCONNECTED:
CameraDeviceImpl.this.mDeviceHandler.post(mCallOnDisconnected);
break;
default:
Log.e(TAG, "Unknown error from camera device: " + errorCode);
// no break
case ERROR_CAMERA_DEVICE:
case ERROR_CAMERA_SERVICE:
mInError = true;
final int publicErrorCode = (errorCode == ERROR_CAMERA_DEVICE) ?
StateCallback.ERROR_CAMERA_DEVICE :
StateCallback.ERROR_CAMERA_SERVICE;
Runnable r = new Runnable() {
@Override
public void run() {
if (!CameraDeviceImpl.this.isClosed()) {
mDeviceCallback.onError(CameraDeviceImpl.this, publicErrorCode);
}
}
};
CameraDeviceImpl.this.mDeviceHandler.post(r);
break;
case ERROR_CAMERA_REQUEST:
case ERROR_CAMERA_RESULT:
case ERROR_CAMERA_BUFFER:
onCaptureErrorLocked(errorCode, resultExtras);
break;
}
}
}
@Override
public void onRepeatingRequestError(long lastFrameNumber) {
if (DEBUG) {
Log.d(TAG, "Repeating request error received. Last frame number is " +
lastFrameNumber);
}
synchronized (mInterfaceLock) {
// Camera is already closed or no repeating request is present.
if (mRemoteDevice == null || mRepeatingRequestId == REQUEST_ID_NONE) {
return; // Camera already closed
}
checkEarlyTriggerSequenceComplete(mRepeatingRequestId, lastFrameNumber);
mRepeatingRequestId = REQUEST_ID_NONE;
}
}
@Override
public void onDeviceIdle() {
if (DEBUG) {
Log.d(TAG, "Camera now idle");
}
synchronized (mInterfaceLock) {
if (mRemoteDevice == null) return; // Camera already closed
if (!CameraDeviceImpl.this.mIdle) {
CameraDeviceImpl.this.mDeviceHandler.post(mCallOnIdle);
}
CameraDeviceImpl.this.mIdle = true;
}
}
@Override
public void onCaptureStarted(final CaptureResultExtras resultExtras, final long timestamp) {
int requestId = resultExtras.getRequestId();
final long frameNumber = resultExtras.getFrameNumber();
if (DEBUG) {
Log.d(TAG, "Capture started for id " + requestId + " frame number " + frameNumber);
}
final CaptureCallbackHolder holder;
synchronized (mInterfaceLock) {
if (mRemoteDevice == null) return; // Camera already closed
// Get the callback for this frame ID, if there is one
holder = CameraDeviceImpl.this.mCaptureCallbackMap.get(requestId);
if (holder == null) {
return;
}
if (isClosed()) return;
// Dispatch capture start notice
holder.getHandler().post(
new Runnable() {
@Override
public void run() {
if (!CameraDeviceImpl.this.isClosed()) {
final int subsequenceId = resultExtras.getSubsequenceId();
final CaptureRequest request = holder.getRequest(subsequenceId);
if (holder.hasBatchedOutputs()) {
// Send derived onCaptureStarted for requests within the batch
final Range<Integer> fpsRange =
request.get(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE);
for (int i = 0; i < holder.getRequestCount(); i++) {
holder.getCallback().onCaptureStarted(
CameraDeviceImpl.this,
holder.getRequest(i),
timestamp - (subsequenceId - i) *
NANO_PER_SECOND / fpsRange.getUpper(),
frameNumber - (subsequenceId - i));
}
} else {
holder.getCallback().onCaptureStarted(
CameraDeviceImpl.this,
holder.getRequest(resultExtras.getSubsequenceId()),
timestamp, frameNumber);
}
}
}
});
}
}
@Override
public void onResultReceived(CameraMetadataNative result,
CaptureResultExtras resultExtras) throws RemoteException {
int requestId = resultExtras.getRequestId();
long frameNumber = resultExtras.getFrameNumber();
if (DEBUG) {
Log.v(TAG, "Received result frame " + frameNumber + " for id "
+ requestId);
}
synchronized (mInterfaceLock) {
if (mRemoteDevice == null) return; // Camera already closed
// TODO: Handle CameraCharacteristics access from CaptureResult correctly.
result.set(CameraCharacteristics.LENS_INFO_SHADING_MAP_SIZE,
getCharacteristics().get(CameraCharacteristics.LENS_INFO_SHADING_MAP_SIZE));
final CaptureCallbackHolder holder =
CameraDeviceImpl.this.mCaptureCallbackMap.get(requestId);
final CaptureRequest request = holder.getRequest(resultExtras.getSubsequenceId());
boolean isPartialResult =
(resultExtras.getPartialResultCount() < mTotalPartialCount);
boolean isReprocess = request.isReprocess();
// Check if we have a callback for this
if (holder == null) {
if (DEBUG) {
Log.d(TAG,
"holder is null, early return at frame "
+ frameNumber);
}
mFrameNumberTracker.updateTracker(frameNumber, /*result*/null, isPartialResult,
isReprocess);
return;
}
if (isClosed()) {
if (DEBUG) {
Log.d(TAG,
"camera is closed, early return at frame "
+ frameNumber);
}
mFrameNumberTracker.updateTracker(frameNumber, /*result*/null, isPartialResult,
isReprocess);
return;
}
Runnable resultDispatch = null;
CaptureResult finalResult;
// Make a copy of the native metadata before it gets moved to a CaptureResult
// object.
final CameraMetadataNative resultCopy;
if (holder.hasBatchedOutputs()) {
resultCopy = new CameraMetadataNative(result);
} else {
resultCopy = null;
}
// Either send a partial result or the final capture completed result
if (isPartialResult) {
final CaptureResult resultAsCapture =
new CaptureResult(result, request, resultExtras);
// Partial result
resultDispatch = new Runnable() {
@Override
public void run() {
if (!CameraDeviceImpl.this.isClosed()) {
if (holder.hasBatchedOutputs()) {
// Send derived onCaptureProgressed for requests within
// the batch.
for (int i = 0; i < holder.getRequestCount(); i++) {
CameraMetadataNative resultLocal =
new CameraMetadataNative(resultCopy);
CaptureResult resultInBatch = new CaptureResult(
resultLocal, holder.getRequest(i), resultExtras);
holder.getCallback().onCaptureProgressed(
CameraDeviceImpl.this,
holder.getRequest(i),
resultInBatch);
}
} else {
holder.getCallback().onCaptureProgressed(
CameraDeviceImpl.this,
request,
resultAsCapture);
}
}
}
};
finalResult = resultAsCapture;
} else {
List<CaptureResult> partialResults =
mFrameNumberTracker.popPartialResults(frameNumber);
final long sensorTimestamp =
result.get(CaptureResult.SENSOR_TIMESTAMP);
final Range<Integer> fpsRange =
request.get(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE);
final int subsequenceId = resultExtras.getSubsequenceId();
final TotalCaptureResult resultAsCapture = new TotalCaptureResult(result,
request, resultExtras, partialResults, holder.getSessionId());
// Final capture result
resultDispatch = new Runnable() {
@Override
public void run() {
if (!CameraDeviceImpl.this.isClosed()) {
if (holder.hasBatchedOutputs()) {
// Send derived onCaptureCompleted for requests within
// the batch.
for (int i = 0; i < holder.getRequestCount(); i++) {
resultCopy.set(CaptureResult.SENSOR_TIMESTAMP,
sensorTimestamp - (subsequenceId - i) *
NANO_PER_SECOND / fpsRange.getUpper());
CameraMetadataNative resultLocal =
new CameraMetadataNative(resultCopy);
TotalCaptureResult resultInBatch = new TotalCaptureResult(
resultLocal, holder.getRequest(i), resultExtras,
partialResults, holder.getSessionId());
holder.getCallback().onCaptureCompleted(
CameraDeviceImpl.this,
holder.getRequest(i),
resultInBatch);
}
} else {
holder.getCallback().onCaptureCompleted(
CameraDeviceImpl.this,
request,
resultAsCapture);
}
}
}
};
finalResult = resultAsCapture;
}
holder.getHandler().post(resultDispatch);
// Collect the partials for a total result; or mark the frame as totally completed
mFrameNumberTracker.updateTracker(frameNumber, finalResult, isPartialResult,
isReprocess);
// Fire onCaptureSequenceCompleted
if (!isPartialResult) {
checkAndFireSequenceComplete();
}
}
}
@Override
public void onPrepared(int streamId) {
final OutputConfiguration output;
final StateCallbackKK sessionCallback;
if (DEBUG) {
Log.v(TAG, "Stream " + streamId + " is prepared");
}
synchronized (mInterfaceLock) {
output = mConfiguredOutputs.get(streamId);
sessionCallback = mSessionStateCallback;
}
if (sessionCallback == null) return;
if (output == null) {
Log.w(TAG, "onPrepared invoked for unknown output Surface");
return;
}
final List<Surface> surfaces = output.getSurfaces();
for (Surface surface : surfaces) {
sessionCallback.onSurfacePrepared(surface);
}
}
@Override
public void onRequestQueueEmpty() {
final StateCallbackKK sessionCallback;
if (DEBUG) {
Log.v(TAG, "Request queue becomes empty");
}
synchronized (mInterfaceLock) {
sessionCallback = mSessionStateCallback;
}
if (sessionCallback == null) return;
sessionCallback.onRequestQueueEmpty();
}
/**
* Called by onDeviceError for handling single-capture failures.
*/
private void onCaptureErrorLocked(int errorCode, CaptureResultExtras resultExtras) {
final int requestId = resultExtras.getRequestId();
final int subsequenceId = resultExtras.getSubsequenceId();
final long frameNumber = resultExtras.getFrameNumber();
final CaptureCallbackHolder holder =
CameraDeviceImpl.this.mCaptureCallbackMap.get(requestId);
final CaptureRequest request = holder.getRequest(subsequenceId);
Runnable failureDispatch = null;
if (errorCode == ERROR_CAMERA_BUFFER) {
// Because 1 stream id could map to multiple surfaces, we need to specify both
// streamId and surfaceId.
List<Surface> surfaces =
mConfiguredOutputs.get(resultExtras.getErrorStreamId()).getSurfaces();
for (Surface surface : surfaces) {
if (!request.containsTarget(surface)) {
continue;
}
if (DEBUG) {
Log.v(TAG, String.format("Lost output buffer reported for frame %d, target %s",
frameNumber, surface));
}
failureDispatch = new Runnable() {
@Override
public void run() {
if (!CameraDeviceImpl.this.isClosed()) {
holder.getCallback().onCaptureBufferLost(
CameraDeviceImpl.this,
request,
surface,
frameNumber);
}
}
};
// Dispatch the failure callback
holder.getHandler().post(failureDispatch);
}
} else {
boolean mayHaveBuffers = (errorCode == ERROR_CAMERA_RESULT);
// This is only approximate - exact handling needs the camera service and HAL to
// disambiguate between request failures to due abort and due to real errors. For
// now, assume that if the session believes we're mid-abort, then the error is due
// to abort.
int reason = (mCurrentSession != null && mCurrentSession.isAborting()) ?
CaptureFailure.REASON_FLUSHED :
CaptureFailure.REASON_ERROR;
final CaptureFailure failure = new CaptureFailure(
request,
reason,
/*dropped*/ mayHaveBuffers,
requestId,
frameNumber);
failureDispatch = new Runnable() {
@Override
public void run() {
if (!CameraDeviceImpl.this.isClosed()) {
holder.getCallback().onCaptureFailed(
CameraDeviceImpl.this,
request,
failure);
}
}
};
// Fire onCaptureSequenceCompleted if appropriate
if (DEBUG) {
Log.v(TAG, String.format("got error frame %d", frameNumber));
}
mFrameNumberTracker.updateTracker(frameNumber, /*error*/true, request.isReprocess());
checkAndFireSequenceComplete();
// Dispatch the failure callback
holder.getHandler().post(failureDispatch);
}
}
}
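In the batched branches of onCaptureStarted and onResultReceived above, each request in a batch gets a derived timestamp, back-dated from the reported timestamp by one frame interval (NANO_PER_SECOND divided by the upper bound of CONTROL_AE_TARGET_FPS_RANGE) per position before the reported subsequence ID. The arithmetic, isolated under a hypothetical class name:

```java
// Isolates the derived-timestamp formula used for batched (high-speed)
// requests: request i in the batch is back-dated from the batch's timestamp
// by (subsequenceId - i) frame intervals at the upper target FPS. The frame
// number is similarly derived as frameNumber - (subsequenceId - i).
class BatchTimestamps {
    static final long NANO_PER_SECOND = 1_000_000_000L;

    static long derivedTimestamp(long timestamp, int subsequenceId, int i, int fpsUpper) {
        // Same expression as the AOSP code, with long arithmetic throughout.
        return timestamp - (subsequenceId - i) * NANO_PER_SECOND / fpsUpper;
    }
}
```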
Look at the two parameters passed back: a CameraMetadataNative and a CaptureResultExtras — there is no buffer data in them at all. So how do we get the preview or capture data? This is a design change in the API2 architecture: unlike API1, the callback interface no longer carries buffer data directly. To obtain the preview or capture result buffers, use an ImageReader. There are plenty of tutorials online on retrieving preview and capture buffers with ImageReader, so I won't repeat them here; the key points are to hand the Surface of a constructed ImageReader down via createCaptureSession, and to override the onImageAvailable method of OnImageAvailableListener. Once returnBuffer (covered earlier) has handed the buffer back to the Surface, the display system invokes onImageAvailable, and we can retrieve the buffer we want there.
That wraps up this section: we walked through the source for the entire path a completed result takes through CameraServer until it reaches the camera application. After reading this post carefully, I hope you have a deeper understanding of the logic and flow of the CameraServer process.