
Reading the OpenCore Code: The Implementation of PVPlayer

1 Components of the Player

The OpenCore player is built from pvplayer/Android.mk, which produces the shared library libopencoreplayer.so. This library contains two parts: one is the player engine, the other is the player built for Android, which is in fact an adapter. The engine lives under engines/player; the adapter lives under android.

2 The Player Engine

The OpenCore Player Engine exposes a clean, well-defined interface. On top of this interface, different systems can implement their own players as their needs dictate. The layout of the engines directory is as follows:

engines/player/

|-- Android.mk

|-- build

|   |-- linux_nj

|   |-- make

|   `-- makefile.conf

|-- config

|   `-- linux_nj

|-- include

|   |-- pv_player_datasink.h

|   |-- pv_player_datasinkfilename.h

|   |-- pv_player_datasinkpvmfnode.h

|   |-- pv_player_datasource.h

|   |-- pv_player_datasourcepvmfnode.h

|   |-- pv_player_datasourceurl.h

|   |-- pv_player_events.h

|   |-- pv_player_factory.h

|   |-- pv_player_interface.h

|   |-- pv_player_license_acquisition_interface.h

|   |-- pv_player_registry_interface.h

|   |-- pv_player_track_selection_interface.h

|   `-- pv_player_types.h

|-- sample_app

|   |-- Android.mk

|   |-- build

|   |-- sample_player_app_release.txt

|   `-- src

|-- src

|   |-- pv_player_datapath.cpp

|   |-- pv_player_datapath.h

|   |-- pv_player_engine.cpp

|   |-- pv_player_engine.h

|   |-- pv_player_factory.cpp

|   |-- pv_player_node_registry.h

|   `-- pv_player_sdkinfo.h

`-- test

    |-- Android.mk

    |-- build

    |-- config

    `-- src

Here, engines/player/include holds the interface headers, while engines/player/src holds the sources and private headers. The main headers serve the following purposes (a compact sketch of the resulting class hierarchy follows the list):

pv_player_types.h: defines common data structures and enumerations.

pv_player_events.h: defines UUIDs and error codes.

pv_player_datasink.h: a data sink is the output side of media data; defines the class PVPlayerDataSink, the base class for media data output, used as an interface.

pv_player_datasinkfilename.h: defines PVPlayerDataSinkFilename, derived from PVPlayerDataSink.

pv_player_datasinkpvmfnode.h: defines PVPlayerDataSinkPVMFNode, derived from PVPlayerDataSink.

pv_player_datasource.h: a data source is the input side of media data; defines the class PVPlayerDataSource, the base class for media data input, used as an interface.

pv_player_datasourcepvmfnode.h: defines PVPlayerDataSourcePVMFNode, derived from PVPlayerDataSource.

pv_player_datasourceurl.h: defines PVPlayerDataSourceURL, derived from PVPlayerDataSource.

pv_player_interface.h: defines the player interface PVPlayerInterface, a pure interface class.

pv_player_factory.h: defines the factory class PVPlayerFactory, used to create and destroy PVPlayerInterface instances.
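In outline, the sink/source hierarchy described above looks like this (a sketch of the relationships only, not the actual declarations from the headers):

class PVPlayerDataSink { /* abstract interface: media data output */ };
class PVPlayerDataSinkFilename   : public PVPlayerDataSink   { /* sink to a file name */ };
class PVPlayerDataSinkPVMFNode   : public PVPlayerDataSink   { /* sink to a PVMF node */ };

class PVPlayerDataSource { /* abstract interface: media data input */ };
class PVPlayerDataSourcePVMFNode : public PVPlayerDataSource { /* source from a PVMF node */ };
class PVPlayerDataSourceURL      : public PVPlayerDataSource { /* source from a URL or file path */ };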

In fact, the main implementation file in engines/player/src is pv_player_engine.cpp, which defines the class PVPlayerEngine. PVPlayerEngine derives from PVPlayerInterface and is the concrete implementation: when PVPlayerFactory creates a PVPlayerInterface, the object actually constructed is a PVPlayerEngine, as the usage sketch below shows.
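A usage sketch (assuming the engine headers are on the include path, and treating the observer variable names as placeholders; the factory signature is as declared in pv_player_factory.h): the factory hands back the interface type, but the object behind it is a PVPlayerEngine.

#include "pv_player_factory.h"
#include "pv_player_interface.h"

PVPlayerInterface* player =
    PVPlayerFactory::CreatePlayer(&cmdStatusObserver,   // PVCommandStatusObserver: command completions
                                  &errorEventObserver,  // PVErrorEventObserver: error events
                                  &infoEventObserver);  // PVInformationalEventObserver: informational events
// ... drive the player through its interface ...
PVPlayerFactory::DeletePlayer(player);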


The Player Engine implementation contains the codec and stream-control functionality, while the media outputs must be supplied from outside. The methods PVPlayerInterface defines roughly follow the order in which they are used; the main ones are:

PVCommandId AddDataSource(PVPlayerDataSource& aDataSource, const OsclAny* aContextData = NULL);

PVCommandId Init(const OsclAny* aContextData = NULL);

PVCommandId AddDataSink(PVPlayerDataSink& aDataSink, const OsclAny* aContextData = NULL);

PVCommandId Prepare(const OsclAny* aContextData = NULL);

PVCommandId Start(const OsclAny* aContextData = NULL);

PVCommandId Pause(const OsclAny* aContextData = NULL);

PVCommandId Resume(const OsclAny* aContextData = NULL);

PVCommandId Stop(const OsclAny* aContextData = NULL);

PVCommandId RemoveDataSink(PVPlayerDataSink& aDataSink, const OsclAny* aContextData = NULL);

PVCommandId Reset(const OsclAny* aContextData = NULL);

PVCommandId RemoveDataSource(PVPlayerDataSource& aDataSource, const OsclAny* aContextData = NULL); 

The DataSink here may cover two parts, the video output and the audio output. pv_player_types.h defines the player's state machine, whose states are prefixed PVP_STATE_:

typedef enum

{

    PVP_STATE_IDLE        = 1,

    PVP_STATE_INITIALIZED = 2,

    PVP_STATE_PREPARED    = 3,

    PVP_STATE_STARTED     = 4,

    PVP_STATE_PAUSED      = 5,

    PVP_STATE_ERROR       = 6

} PVPlayerState; 

Each PVPlayerInterface operation, when it succeeds, can advance the player's state machine. The player starts in PVP_STATE_IDLE; after Init it enters PVP_STATE_INITIALIZED; AddDataSink may then be called (the state remains PVP_STATE_INITIALIZED); after Prepare it enters PVP_STATE_PREPARED; after Start it enters PVP_STATE_STARTED; from there Pause moves it to PVP_STATE_PAUSED.

PVP_STATE_STARTED and PVP_STATE_PAUSED are the two playback states; Pause and Resume switch between them.

During playback, calling Stop returns the player to PVP_STATE_INITIALIZED, and calling Reset followed by RemoveDataSource brings it back to PVP_STATE_IDLE. The whole sequence, condensed, is sketched below.
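A sketch of the usual call order and the state reached after each command completes (assuming player is the PVPlayerInterface* obtained from PVPlayerFactory::CreatePlayer, and dataSource/videoSink/audioSink are already-configured source and sinks). In real code every call is asynchronous: it returns a PVCommandId immediately, and the next command should only be issued after PVCommandStatusObserver::CommandCompleted() reports success.

player->AddDataSource(dataSource);   // PVP_STATE_IDLE (unchanged)
player->Init();                      // -> PVP_STATE_INITIALIZED
player->AddDataSink(videoSink);      // state unchanged
player->AddDataSink(audioSink);      // state unchanged
player->Prepare();                   // -> PVP_STATE_PREPARED
player->Start();                     // -> PVP_STATE_STARTED
player->Pause();                     // -> PVP_STATE_PAUSED
player->Resume();                    // -> PVP_STATE_STARTED
player->Stop();                      // -> PVP_STATE_INITIALIZED
player->RemoveDataSink(videoSink);
player->RemoveDataSink(audioSink);
player->Reset();                     // -> PVP_STATE_IDLE
player->RemoveDataSource(dataSource);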

3 Android Player Adapter

The android directory defines the player adapter. It mainly contains the following files:

   android

   |-- Android.mk

   |-- android_audio_mio.cpp

   |-- android_audio_mio.h

   |-- android_audio_output.cpp

   |-- android_audio_output.h

   |-- android_audio_output_threadsafe_callbacks.cpp

   |-- android_audio_output_threadsafe_callbacks.h

   |-- android_audio_stream.cpp

   |-- android_audio_stream.h

   |-- android_log_appender.h

   |-- android_surface_output.cpp

   |-- android_surface_output.h

   |-- mediascanner.cpp

   |-- metadatadriver.cpp

   |-- metadatadriver.h

   |-- playerdriver.cpp

   |-- playerdriver.h

   `-- thread_init.cpp 

This Android player "adapter" calls the OpenCore Player Engine interface and implements the interface required by Android's media player service; the end product is PVPlayer, and PVPlayer in fact derives from MediaPlayerInterface.

In the implementation, a PlayerDriver is built first; PVPlayer then delegates to this PlayerDriver for the actual work. The overall structure is shown in the figure below:

[Figure: structure of the PVPlayer adapter; PVPlayer (a MediaPlayerInterface) drives a PlayerDriver, which in turn drives the engine's PVPlayerInterface]

Operations on the PlayerDriver are carried out through commands, which are defined in playerdriver.h:

enum player_command_type {

    PLAYER_QUIT                     = 1,

    PLAYER_SETUP                    = 2,

    PLAYER_SET_DATA_SOURCE          = 3,

    PLAYER_SET_VIDEO_SURFACE        = 4,

    PLAYER_SET_AUDIO_SINK           = 5,

    PLAYER_INIT                     = 6,

    PLAYER_PREPARE                  = 7,

    PLAYER_START                    = 8,

    PLAYER_STOP                     = 9,

    PLAYER_PAUSE                    = 10,

    PLAYER_RESET                    = 11,

    PLAYER_SET_LOOP                 = 12,

    PLAYER_SEEK                     = 13,

    PLAYER_GET_POSITION             = 14,

    PLAYER_GET_DURATION             = 15,

    PLAYER_GET_STATUS               = 16,

    PLAYER_REMOVE_DATA_SOURCE       = 17,

    PLAYER_CANCEL_ALL_COMMANDS      = 18,

}; 

These commands are generally thin wrappers around the corresponding PVPlayerInterface calls. For a relatively simple operation such as pausing playback, the system runs through the following steps:

1. The pause function of PVPlayer (in playerdriver.cpp):

status_t PVPlayer::pause()
{
    LOGV("pause");
    return mPlayerDriver->enqueueCommand(new PlayerPause(0,0));
}

This calls its member mPlayerDriver (of type PlayerDriver) and pushes a PlayerPause command onto the command queue; the concrete command classes are defined in playerdriver.h.

2. PlayerDriver's enqueueCommand indirectly dispatches to the handleXxx functions; for a PlayerPause command the function called is handlePause:

void PlayerDriver::handlePause(PlayerPause* ec)

{

    LOGV("call pause");

    mPlayer->Pause(0);

    FinishSyncCommand(ec);

}

The mPlayer here is a PVPlayerInterface pointer; through it the call reaches the PVPlayerEngine class inside the OpenCore Player Engine.

A major task of this player adapter is to convert the media outputs defined by the Android framework (both the audio output and the video output) into the form the OpenCore Player Engine expects. The two important classes here are AndroidSurfaceOutput, implemented in android_surface_output.cpp, and AndroidAudioOutput, implemented in android_audio_output.cpp.

For setting up the video output, the class PlayerDriver defines three members:

    PVPlayerDataSink        *mVideoSink;

    PVMFNodeInterface       *mVideoNode;

    PvmiMIOControl          *mVideoOutputMIO;

mVideoSink has type PVPlayerDataSink, the interface class defined by the Player Engine. mVideoNode has type PVMFNodeInterface, defined in pvmf_node_interface.h under pvmi/pvmf/include; this is the common interface every PVMF node must inherit. mVideoOutputMIO has type PvmiMIOControl, also defined under pvmi/pvmf/include; this is the control interface for media input/output (MIO).

1. PVPlayer's setVideoSurface sets the surface used for video output; its parameter is an ISurface pointer:

status_t PVPlayer::setVideoSurface(const sp<ISurface>& surface)
{
    LOGV("setVideoSurface(%p)", surface.get());
    mSurface = surface;
    return OK;
}

setVideoSurface only records the surface in PVPlayer's mSurface member; the actual work of setting the video output surface is done in run_set_video_surface():

void PVPlayer::run_set_video_surface(status_t s, void *cookie)
{
    LOGV("run_set_video_surface s=%d", s);
    if (s == NO_ERROR) {
        PVPlayer *p = (PVPlayer*)cookie;
        if (p->mSurface == NULL) {
            run_set_audio_output(s, cookie);
        } else {
            p->mPlayerDriver->enqueueCommand(new PlayerSetVideoSurface(p->mSurface, run_set_audio_output, cookie));
        }
    }
}

The command used here is PlayerSetVideoSurface, which ultimately reaches PlayerDriver's handleSetVideoSurface function.

2. handleSetVideoSurface is implemented as follows:

void PlayerDriver::handleSetVideoSurface(PlayerSetVideoSurface* ec)
{
    int error = 0;

    mVideoOutputMIO = new AndroidSurfaceOutput(ec->surface());
    mVideoNode = PVMediaOutputNodeFactory::CreateMediaOutputNode(mVideoOutputMIO);
    mVideoSink = new PVPlayerDataSinkPVMFNode;

    ((PVPlayerDataSinkPVMFNode *)mVideoSink)->SetDataSinkNode(mVideoNode);
    ((PVPlayerDataSinkPVMFNode *)mVideoSink)->SetDataSinkFormatType(PVMF_YUV420);

    OSCL_TRY(error, mPlayer->AddDataSink(*mVideoSink, ec));
    OSCL_FIRST_CATCH_ANY(error, commandFailed(ec));
}

Here the member mVideoOutputMIO (of type PvmiMIOControl) is created first, as an AndroidSurfaceOutput; that class inherits PvmiMIOControl and can therefore be used as one. PVMediaOutputNodeFactory::CreateMediaOutputNode then creates mVideoNode, of type PVMFNodeInterface. Next, mVideoSink is created as a PVPlayerDataSinkPVMFNode; PVPlayerDataSinkPVMFNode inherits PVPlayerDataSink, so it can be used as one. Finally, SetDataSinkNode makes mVideoNode the data output node of mVideoSink.


In practice, for video output the essential work is all done in the class AndroidSurfaceOutput, whose main job is to make Android's ISurface serve as the Player Engine's output. The final AddDataSink call registers mVideoSink as an output of the PVPlayerInterface.

android_surface_output.cpp implements the class AndroidSurfaceOutput, which acts as an "adapter" between the OpenCore Player Engine's video output and Android's display output. AndroidSurfaceOutput itself inherits PvmiMIOControl, and its constructor takes an ISurface parameter; the class implements each PvmiMIOControl interface in terms of that ISurface.

--------------------------------------------------------------------------------

As we know, MediaPlayerInterface is the key bridging interface in the Android framework: the several players underneath are all derived from it. While working on FLAC earlier I had already looked at parts of the OGG player, but that covered only audio, not video. Below is a brief look at the most complex of them, PVPlayer.

class PVPlayer : public MediaPlayerInterface
{
public:
    PVPlayer();
    virtual ~PVPlayer();

    virtual status_t initCheck();                                            // 1
    virtual status_t setDataSource(const char *url);                         // 2
    virtual status_t setDataSource(int fd, int64_t offset, int64_t length);  // 2
    virtual status_t setVideoSurface(const sp<ISurface>& surface);           // 3
    virtual status_t prepare();                                              // 4
    virtual status_t prepareAsync();                                         // 5
    virtual status_t start();                                                // 5
    virtual status_t stop();                                                 // 6
    virtual status_t pause();                                                // 6
    virtual bool     isPlaying();
    virtual status_t seekTo(int msec);
    virtual status_t getCurrentPosition(int *msec);
    virtual status_t getDuration(int *msec);
    virtual status_t reset();
    virtual status_t setLooping(int loop);
    virtual player_type playerType() { return PV_PLAYER; }

    // make available to PlayerDriver
    void sendEvent(int msg, int ext1=0, int ext2=0) { MediaPlayerBase::sendEvent(msg, ext1, ext2); }

private:
    static void do_nothing(status_t s, void *cookie, bool cancelled) { }
    static void run_init(status_t s, void *cookie, bool cancelled);
    static void run_set_video_surface(status_t s, void *cookie, bool cancelled);
    static void run_set_audio_output(status_t s, void *cookie, bool cancelled);
    static void run_prepare(status_t s, void *cookie, bool cancelled);

    PlayerDriver* mPlayerDriver;
    char *        mDataSourcePath;
    bool          mIsDataSourceSet;
    sp<ISurface>  mSurface;
    int           mSharedFd;
    status_t      mInit;
    int           mDuration;

#ifdef MAX_OPENCORE_INSTANCES
    static volatile int32_t sNumInstances;
#endif
};

1. virtual status_t initCheck(): this is called when the player is created, and can do some basic initialization work, as shown below:

sp<MediaPlayerBase> createPlayer(player_type playerType, void* cookie,
        notify_callback_f notifyFunc)
{
    sp<MediaPlayerBase> p;
    switch (playerType) {
        case PV_PLAYER:
            LOGV(" create PVPlayer");
            p = new PVPlayer();
            break;
        case SONIVOX_PLAYER:
            LOGV(" create MidiFile");
            p = new MidiFile();
            break;
        case VORBIS_PLAYER:
            LOGV(" create VorbisPlayer");
            p = new VorbisPlayer();
            break;
    }
    if (p != NULL) {
        if (p->initCheck() == NO_ERROR) {
            p->setNotifyCallback(cookie, notifyFunc);
        } else {
            p.clear();
        }
    }
    if (p == NULL) {
        LOGE("Failed to create player object");
    }
    return p;
}

Note that setNotifyCallback is also called here along with it.

2. The two setDataSource overloads: one opens a file and plays from the beginning; the other starts from an offset in the middle of a file.

status_t MediaPlayerService::Client::setDataSource(const char *url)
{
    LOGV("setDataSource(%s)", url);
    if (url == NULL)
        return UNKNOWN_ERROR;

    if (strncmp(url, "content://", 10) == 0) {
        // get a filedescriptor for the content Uri and
        // pass it to the setDataSource(fd) method
        String16 url16(url);
        int fd = android::openContentProviderFile(url16);
        if (fd < 0)
        {
            LOGE("Couldn't open fd for %s", url);
            return UNKNOWN_ERROR;
        }
        setDataSource(fd, 0, 0x7fffffffffLL); // this sets mStatus
        close(fd);
        return mStatus;
    } else {
        player_type playerType = getPlayerType(url);
        LOGV("player type = %d", playerType);

        // create the right type of player
        sp<MediaPlayerBase> p = createPlayer(playerType);
        if (p == NULL) return NO_INIT;

        if (!p->hardwareOutput()) {
            mAudioOutput = new AudioOutput();
            static_cast<MediaPlayerInterface*>(p.get())->setAudioSink(mAudioOutput);
        }

        // now set data source
        LOGV(" setDataSource");
        mStatus = p->setDataSource(url);
        if (mStatus == NO_ERROR) mPlayer = p;
        return mStatus;
    }
}

status_t MediaPlayerService::Client::setDataSource(int fd, int64_t offset, int64_t length)
{
    LOGV("setDataSource fd=%d, offset=%lld, length=%lld", fd, offset, length);
    struct stat sb;
    int ret = fstat(fd, &sb);
    if (ret != 0) {
        LOGE("fstat(%d) failed: %d, %s", fd, ret, strerror(errno));
        return UNKNOWN_ERROR;
    }

    LOGV("st_dev  = %llu", sb.st_dev);
    LOGV("st_mode = %u", sb.st_mode);
    LOGV("st_uid  = %lu", sb.st_uid);
    LOGV("st_gid  = %lu", sb.st_gid);
    LOGV("st_size = %llu", sb.st_size);

    if (offset >= sb.st_size) {
        LOGE("offset error");
        ::close(fd);
        return UNKNOWN_ERROR;
    }
    if (offset + length > sb.st_size) {
        length = sb.st_size - offset;
        LOGV("calculated length = %lld", length);
    }

    player_type playerType = getPlayerType(fd, offset, length);
    LOGV("player type = %d", playerType);

    // create the right type of player
    sp<MediaPlayerBase> p = createPlayer(playerType);
    if (p == NULL) return NO_INIT;

    if (!p->hardwareOutput()) {
        mAudioOutput = new AudioOutput();
        static_cast<MediaPlayerInterface*>(p.get())->setAudioSink(mAudioOutput);
    }

    // now set data source
    mStatus = p->setDataSource(fd, offset, length);
    if (mStatus == NO_ERROR) mPlayer = p;
    return mStatus;
}

3. setVideoSurface:

status_t MediaPlayerService::Client::setVideoSurface(const sp<ISurface>& surface)
{
    LOGV("[%d] setVideoSurface(%p)", mConnId, surface.get());
    sp<MediaPlayerBase> p = getPlayer();
    if (p == 0) return UNKNOWN_ERROR;
    return p->setVideoSurface(surface);
}

As this shows, the framework does not actually do the display work for you; many implementations of p->setVideoSurface(surface) are effectively empty. The function is just a hook that hands you the upper layer's sp<ISurface>; what you draw on it is your own business: call it and there is a surface, don't and there isn't. So the display work still has to be done by the player itself.

4 and 5. Both cover the preparation work between setting the source and starting playback; one is synchronous, the other asynchronous. MediaPlayerService::decode() below exercises the asynchronous path:

sp<IMemory> MediaPlayerService::decode(const char* url, uint32_t *pSampleRate,
        int* pNumChannels, int* pFormat)
{
    LOGV("decode(%s)", url);
    sp<MemoryBase> mem;
    sp<MediaPlayerBase> player;

    // Protect our precious, precious DRMd ringtones by only allowing
    // decoding of http, but not filesystem paths or content Uris.
    // If the application wants to decode those, it should open a
    // filedescriptor for them and use that.
    if (url != NULL && strncmp(url, "http://", 7) != 0) {
        LOGD("Can't decode %s by path, use filedescriptor instead", url);
        return mem;
    }

    player_type playerType = getPlayerType(url);
    LOGV("player type = %d", playerType);

    // create the right type of player
    sp<AudioCache> cache = new AudioCache(url);
    player = android::createPlayer(playerType, cache.get(), cache->notify);
    if (player == NULL) goto Exit;
    if (player->hardwareOutput()) goto Exit;

    static_cast<MediaPlayerInterface*>(player.get())->setAudioSink(cache);

    // set data source
    if (player->setDataSource(url) != NO_ERROR) goto Exit;

    LOGV("prepare");
    player->prepareAsync();

    LOGV("wait for prepare");
    if (cache->wait() != NO_ERROR) goto Exit;

    LOGV("start");
    player->start();

    LOGV("wait for playback complete");
    if (cache->wait() != NO_ERROR) goto Exit;

    mem = new MemoryBase(cache->getHeap(), 0, cache->size());
    *pSampleRate = cache->sampleRate();
    *pNumChannels = cache->channelCount();
    *pFormat = cache->format();
    LOGV("return memory @ %p, sampleRate=%u, channelCount = %d, format = %d",
            mem->pointer(), *pSampleRate, *pNumChannels, *pFormat);

Exit:
    if (player != 0) player->reset();
    return mem;
}

The flow here has several steps: prepare, wait for prepare to complete, start, then wait for playback to complete; after that the decoded data is available. The interface functions above contain little of substance, so let us look at how PVPlayer is actually implemented, starting with its private functions:

static void do_nothing(status_t s, void *cookie, bool cancelled) { }

static void run_init(status_t s, void *cookie, bool cancelled);

static void run_set_video_surface(status_t s, void *cookie, bool cancelled);

static void run_set_audio_output(status_t s, void *cookie, bool cancelled);

static void run_prepare(status_t s, void *cookie, bool cancelled);

and several private members:

PlayerDriver* mPlayerDriver; // the driver around the whole PV playback engine

char * mDataSourcePath;      // the data source

bool mIsDataSourceSet;       // flag: has a data source been set

sp<ISurface> mSurface;       // the display surface

int mSharedFd;               // presumably a file descriptor

status_t mInit;              // a status flag

int mDuration;               // playback duration of the file

Let's look at the implementation:

// ----------------------------------------------------------------------------
// implement the Packet Video player
// ----------------------------------------------------------------------------

PVPlayer::PVPlayer()
{
    LOGV("PVPlayer constructor");
    mDataSourcePath = NULL;
    mSharedFd = -1;
    mIsDataSourceSet = false;
    mDuration = -1;
    mPlayerDriver = NULL;

    LOGV("construct PlayerDriver");
    mPlayerDriver = new PlayerDriver(this);
    LOGV("send PLAYER_SETUP");
    mInit = mPlayerDriver->enqueueCommand(new PlayerSetup(0,0));
}

status_t PVPlayer::initCheck()
{
    return mInit;
}

PVPlayer::~PVPlayer()
{
    LOGV("PVPlayer destructor");
    if (mPlayerDriver != NULL) {
        PlayerQuit quit = PlayerQuit(0,0);
        mPlayerDriver->enqueueCommand(&quit); // will wait on mSyncSem, signaled by player thread
    }
    free(mDataSourcePath);
    if (mSharedFd >= 0) {
        close(mSharedFd);
    }
}

status_t PVPlayer::setDataSource(const char *url)
{
    LOGV("setDataSource(%s)", url);
    if (mSharedFd >= 0) {
        close(mSharedFd);
        mSharedFd = -1;
    }
    free(mDataSourcePath);
    mDataSourcePath = NULL;

    // Don't let somebody trick us in to reading some random block of memory
    if (strncmp("sharedfd://", url, 11) == 0)
        return android::UNKNOWN_ERROR;

    mDataSourcePath = strdup(url);
    return OK;
}

// Note that mDataSourcePath is only cached here; it is handed to OpenCore
// later, when the PLAYER_SET_DATA_SOURCE command runs in prepare()/prepareAsync().
status_t PVPlayer::setDataSource(int fd, int64_t offset, int64_t length)
{
    // This is all a big hack to allow PV to play from a file descriptor.
    // Eventually we'll fix PV to use a file descriptor directly instead
    // of using mmap().
    LOGV("setDataSource(%d, %lld, %lld)", fd, offset, length);
    if (mSharedFd >= 0) {
        close(mSharedFd);
        mSharedFd = -1;
    }
    free(mDataSourcePath);
    mDataSourcePath = NULL;

    char buf[80];
    mSharedFd = dup(fd);
    sprintf(buf, "sharedfd://%d:%lld:%lld", mSharedFd, offset, length);
    mDataSourcePath = strdup(buf);
    return OK;
}

Then:

status_t PVPlayer::setVideoSurface(const sp<ISurface>& surface)
{
    LOGV("setVideoSurface(%p)", surface.get());
    mSurface = surface;
    return OK;
}

Then comes prepare(). If setDataSource() did not do much, this is where things get busy:

status_t PVPlayer::prepare()
{
    status_t ret;

    // We need to differentiate the two valid use cases for prepare():
    // 1. new PVPlayer/reset()->setDataSource()->prepare()
    // 2. new PVPlayer/reset()->setDataSource()->prepare()/prepareAsync()
    //    ->start()->...->stop()->prepare()
    // If data source has already been set previously, no need to run
    // a sequence of commands and only the PLAYER_PREPARE command needs
    // to be run.
    if (!mIsDataSourceSet) {  // first check whether the source has been set, or needs setting again
        // set data source: if needed, send the set-source command first
        LOGV("prepare");
        LOGV(" data source = %s", mDataSourcePath);
        ret = mPlayerDriver->enqueueCommand(new PlayerSetDataSource(mDataSourcePath,0,0));
        if (ret != OK)
            return ret;

        // init: then the initialization command
        LOGV(" init");
        ret = mPlayerDriver->enqueueCommand(new PlayerInit(0,0));
        if (ret != OK)
            return ret;

        // set video surface, if there is one
        if (mSurface != NULL) {
            LOGV(" set video surface");
            ret = mPlayerDriver->enqueueCommand(new PlayerSetVideoSurface(mSurface,0,0));
            if (ret != OK)
                return ret;
        }

        // set audio output
        // If we ever need to expose selectable audio output setup, this can be broken
        // out. In the meantime, however, system audio routing APIs should suffice.
        LOGV(" set audio sink");
        ret = mPlayerDriver->enqueueCommand(new PlayerSetAudioSink(mAudioSink,0,0));
        if (ret != OK)
            return ret;

        // New data source has been set successfully.
        mIsDataSourceSet = true;
    }

    // prepare: only after the whole series is done is the prepare command sent
    LOGV(" prepare");
    return mPlayerDriver->enqueueCommand(new PlayerPrepare(0,0));
}

The asynchronous version involves callbacks:

status_t PVPlayer::prepareAsync()
{
    LOGV("prepareAsync");
    status_t ret = OK;

    if (!mIsDataSourceSet) {  // If data source has NOT been set.
        // Set our data source as cached in setDataSource() above.
        LOGV(" data source = %s", mDataSourcePath);
        // a callback, run_init, is registered here: it says what to do
        // once the set-data-source command completes
        ret = mPlayerDriver->enqueueCommand(new PlayerSetDataSource(mDataSourcePath,run_init,this));
        mIsDataSourceSet = true;
    } else {  // If data source has been already set.
        // No need to run a sequence of commands.
        // The only command needed to run is PLAYER_PREPARE.
        ret = mPlayerDriver->enqueueCommand(new PlayerPrepare(do_nothing, NULL));
    }

    return ret;
}

The initialization callback:

void PVPlayer::run_init(status_t s, void *cookie, bool cancelled)
{
    LOGV("run_init s=%d, cancelled=%d", s, cancelled);
    if (s == NO_ERROR && !cancelled) {
        PVPlayer *p = (PVPlayer*)cookie;
        // after init completes there is a further step: run_set_video_surface
        p->mPlayerDriver->enqueueCommand(new PlayerInit(run_set_video_surface, cookie));
    }
}

void PVPlayer::run_set_video_surface(status_t s, void *cookie, bool cancelled)
{
    LOGV("run_set_video_surface s=%d, cancelled=%d", s, cancelled);
    if (s == NO_ERROR && !cancelled) {
        // If we don't have a video surface, just skip to the next step.
        PVPlayer *p = (PVPlayer*)cookie;
        if (p->mSurface == NULL) {
            run_set_audio_output(s, cookie, false);
        } else {
            // after setting the video surface, run_set_audio_output still follows
            p->mPlayerDriver->enqueueCommand(new PlayerSetVideoSurface(p->mSurface, run_set_audio_output, cookie));
        }
    }
}

void PVPlayer::run_set_audio_output(status_t s, void *cookie, bool cancelled)
{
    LOGV("run_set_audio_output s=%d, cancelled=%d", s, cancelled);
    if (s == NO_ERROR && !cancelled) {
        PVPlayer *p = (PVPlayer*)cookie;
        p->mPlayerDriver->enqueueCommand(new PlayerSetAudioSink(p->mAudioSink, run_prepare, cookie));
    }
}

void PVPlayer::run_prepare(status_t s, void *cookie, bool cancelled)
{
    LOGV("run_prepare s=%d, cancelled=%d", s, cancelled);
    if (s == NO_ERROR && !cancelled) {
        PVPlayer *p = (PVPlayer*)cookie;
        p->mPlayerDriver->enqueueCommand(new PlayerPrepare(do_nothing,0));
    }
}

Only at the very end comes do_nothing: the work the synchronous prepare() does in one pass has been split into a chain of steps. Spelled out, the chain is:
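Each arrow fires when the previous command completes (names exactly as in the code above):

PlayerSetDataSource(mDataSourcePath, run_init)
  -> run_init:              enqueue PlayerInit(run_set_video_surface)
  -> run_set_video_surface: enqueue PlayerSetVideoSurface(run_set_audio_output)
                            (or call run_set_audio_output directly if there is no surface)
  -> run_set_audio_output:  enqueue PlayerSetAudioSink(run_prepare)
  -> run_prepare:           enqueue PlayerPrepare(do_nothing)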

The rest is mostly straightforward: send command after command. The real work happens in PlayerDriver.

Now a brief look at PlayerDriver, the member that actually realizes playback. First, it is a manager: it manages the whole OpenCore framework and the final output MIOs. Second, it is asynchronous, built around a command queue. There is a lot of code in it, but the idea is clear, so we will not list it all; the main playback functionality is wrapped behind the interface called PVPlayerInterface. A self-contained sketch of the queue pattern follows.
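To make the queue mechanics concrete, here is a minimal, self-contained sketch of the same pattern in standard C++. This is not PV's code: the real PlayerDriver runs on OSCL threads and semaphores, distinguishes synchronous from asynchronous commands, and queues real command objects rather than closures; this sketch makes every command synchronous and serializes callers for brevity.

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

class CommandQueueSketch {
public:
    CommandQueueSketch() : mQuit(false), mThread([this] { loop(); }) {}

    ~CommandQueueSketch() {
        enqueue([this] { mQuit = true; });  // a "PlayerQuit"-like command, handled in order
        mThread.join();
    }

    // enqueueCommand analogue: returns once the command has been *handled*
    void enqueue(std::function<void()> command) {
        std::unique_lock<std::mutex> lock(mLock);
        bool done = false;
        // capturing locals by reference is safe here: we block below until done
        mQueue.push([&] { command(); done = true; });
        mWork.notify_one();
        mDone.wait(lock, [&] { return done; });  // FinishSyncCommand analogue
    }

private:
    void loop() {
        std::unique_lock<std::mutex> lock(mLock);
        while (!mQuit) {
            mWork.wait(lock, [this] { return !mQueue.empty(); });
            std::function<void()> cmd = std::move(mQueue.front());
            mQueue.pop();
            cmd();               // the handleXxx analogue runs on this thread
            mDone.notify_all();  // wake the caller waiting in enqueue()
        }
    }

    bool mQuit;
    std::mutex mLock;
    std::condition_variable mWork, mDone;
    std::queue<std::function<void()>> mQueue;
    std::thread mThread;  // declared last so it starts after the other members
};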

Let us first analyze its video display. When the set-video-surface command arrives, there is handling like this:

// if no device-specific MIO was created, use the generic one
if (mio == NULL) {
    LOGW("Using generic video MIO");
    mio = new AndroidSurfaceOutput();
}

// initialize the MIO parameters
status_t ret = mio->set(mPvPlayer, command->surface(), mEmulation);
if (ret != NO_ERROR) {
    LOGE("Video MIO set failed");
    commandFailed(command);
    delete mio;
    return;
}

mVideoOutputMIO = mio;
mVideoNode = PVMediaOutputNodeFactory::CreateMediaOutputNode(mVideoOutputMIO);
mVideoSink = new PVPlayerDataSinkPVMFNode;
((PVPlayerDataSinkPVMFNode *)mVideoSink)->SetDataSinkNode(mVideoNode);
((PVPlayerDataSinkPVMFNode *)mVideoSink)->SetDataSinkFormatType(PVMF_YUV420);
OSCL_TRY(error, mPlayer->AddDataSink(*mVideoSink, command));

This mPlayer is a PVPlayerInterface member. In OpenCore everything is ultimately wrapped into nodes; a MIO can belong to a node, and the MIO is the part mainly responsible for talking to the hardware. These lines first create an AndroidSurfaceOutput MIO, configure its basic properties via the set() function, and then create an output node from the MIO: at that point it is a proper node. Once created, the node is added into the data path and given a few basic properties. With that, our output node is part of the data path. So how does the MIO itself work?

After the MIO is created, there is this function:

status_t AndroidSurfaceOutput::set(PVPlayer* pvPlayer, const sp<ISurface>& surface, bool emulation)
{
    mPvPlayer = pvPlayer;
    mSurface = surface;
    mEmulation = emulation;
    return NO_ERROR;
}

This function sets the video output MIO's most important attributes: the pvPlayer, the surface, and a last parameter that presumably indicates whether we are running in the emulator.

Let us look at this MIO. MIOs have largely been covered before (though perhaps never posted on the blog). One is generally derived from these interfaces:

public OsclTimerObject, public PvmiMIOControl,

public PvmiMediaTransfer, public PvmiCapabilityAndConfig

But as a video output MIO, this one has a few functions of its own:

// For frame buffer

virtual bool initCheck();

virtual PVMFStatus writeFrameBuf(uint8* aData, uint32 aDataLen, const PvmiMediaXferHeader& data_header_info);

virtual void postLastFrame();

virtual void closeFrameBuf();

bool GetVideoSize(int *w, int *h);

Let us analyze them one by one.

First, initCheck(). When does this function get called?

// create a frame buffer for software codecs
OSCL_EXPORT_REF bool AndroidSurfaceOutput::initCheck()
{
    // initialize only when we have all the required parameters:
    // if no video-related parameter has changed, there is nothing to do, just return
    if (!checkVideoParameterFlags())
        return mInitialized;

    // release resources if previously initialized (free the previously allocated buffers)
    closeFrameBuf();

    // reset flags in case display format changes in the middle of a stream
    resetVideoParameterFlags();

    // copy parameters in case we need to adjust them:
    // pick up the new widths and heights (both the video's and the display's;
    // landscape mode can be handled this way too)
    int displayWidth = iVideoDisplayWidth;
    int displayHeight = iVideoDisplayHeight;
    int frameWidth = iVideoWidth;
    int frameHeight = iVideoHeight;
    int frameSize;

    // RGB-565 frames are 2 bytes/pixel (our data is 16-bit RGB-565 pixels);
    // "& -2" rounds to an even number
    displayWidth = (displayWidth + 1) & -2;
    displayHeight = (displayHeight + 1) & -2;
    frameWidth = (frameWidth + 1) & -2;
    frameHeight = (frameHeight + 1) & -2;
    frameSize = frameWidth * frameHeight * 2;

    // create frame buffer heap and register with surfaceflinger
    // (allocate kBufferCount frames' worth of memory)
    mFrameHeap = new MemoryHeapBase(frameSize * kBufferCount);
    if (mFrameHeap->heapID() < 0) {
        LOGE("Error creating frame buffer heap");
        return false;
    }

    // after allocating, describe the memory to the buffer heap:
    // its pixel format and its display/frame dimensions
    ISurface::BufferHeap buffers(displayWidth, displayHeight,
            frameWidth, frameHeight, PIXEL_FORMAT_RGB_565, mFrameHeap);
    // then register the buffers with SurfaceFlinger
    mSurface->registerBuffers(buffers);

    // create frame buffers: mFrameBuffers[i] holds the start offset of frame i
    for (int i = 0; i < kBufferCount; i++) {
        mFrameBuffers[i] = i * frameSize;
    }

    // initialize software color converter
    iColorConverter = ColorConvert16::NewL();
    iColorConverter->Init(displayWidth, displayHeight, frameWidth, displayWidth, displayHeight, displayWidth, CCROTATE_NONE);
    iColorConverter->SetMemHeight(frameHeight);
    iColorConverter->SetMode(1);

    LOGV("video = %d x %d", displayWidth, displayHeight);
    LOGV("frame = %d x %d", frameWidth, frameHeight);
    LOGV("frame #bytes = %d", frameSize);

    mFrameBufferIndex = 0;
    mInitialized = true;
    mPvPlayer->sendEvent(MEDIA_SET_VIDEO_SIZE, iVideoDisplayWidth, iVideoDisplayHeight);
    return mInitialized;
}

With that, the setup is done.
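As a quick aside, the (x + 1) & -2 idiom used above can be checked in isolation: -2 is ...11111110 in two's complement, so the AND clears the lowest bit, and the +1 makes the result round up to the nearest even number. A tiny standalone demonstration:

#include <stdio.h>

int main() {
    // prints 174 -> 174, 175 -> 176, 176 -> 176, 177 -> 178
    for (int x = 174; x <= 177; x++)
        printf("%d -> %d\n", x, (x + 1) & -2);
    return 0;
}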

The second function: when is writeFrameBuf used? From the earlier look at MIOs we know that MIO data transfer goes through queues, and messages and data can travel in the same queue. In the MIO's writeAsync function there is a case like this:

case PVMI_MEDIAXFER_FMT_TYPE_DATA:
    switch (aFormatIndex)
    {
    case PVMI_MEDIAXFER_FMT_INDEX_FMT_SPECIFIC_INFO:
        // ... (handling omitted) ...
    case PVMI_MEDIAXFER_FMT_INDEX_DATA:
        // data contains the media bitstream.

        // Verify the state
        if (iState != STATE_STARTED)
        {
            PVLOGGER_LOGMSG(PVLOGMSG_INST_REL, iLogger, PVLOGMSG_ERR,
                (0, "AndroidSurfaceOutput::writeAsync: Error - Invalid state"));
            status = PVMFErrInvalidState;
        }
        else
        {
            //printf("V WriteAsync { seq=%d, ts=%d }\n", data_header_info.seq_num, data_header_info.timestamp);

            // Call playback to send data to IVA for Color Convert
            status = writeFrameBuf(aData, aDataLen, data_header_info);

            PVLOGGER_LOGMSG(PVLOGMSG_INST_REL, iLogger, PVLOGMSG_ERR,
                (0, "AndroidSurfaceOutput::writeAsync: Playback Progress - frame %d", iFrameNumber++));
        }
        break;

This is where our screen-drawing function is invoked, i.e. the moment a frame of data is received. Note that audio/video synchronization has essentially already been done within the node framework: any data received here is data that must be displayed, so there is no A/V sync to worry about at this point.

OSCL_EXPORT_REF PVMFStatus AndroidSurfaceOutput::writeFrameBuf(uint8* aData, uint32 aDataLen, const PvmiMediaXferHeader& data_header_info)
{
    if (mSurface == 0) return PVMFFailure;

    if (++mFrameBufferIndex == kBufferCount) mFrameBufferIndex = 0;
    iColorConverter->Convert(aData, static_cast<uint8*>(mFrameHeap->base()) + mFrameBuffers[mFrameBufferIndex]);

    // post to SurfaceFlinger
    mSurface->postBuffer(mFrameBuffers[mFrameBufferIndex]);
    return PVMFSuccess;
}

The function itself is simple: convert the frame data into the format the display supports, then call postBuffer and it is shown.

Look at the next function:

// post the last video frame to refresh screen after pause
void AndroidSurfaceOutput::postLastFrame()
{
    mSurface->postBuffer(mFrameBuffers[mFrameBufferIndex]);
}

It posts the buffer to the screen. It can be called while paused, so the video frame stays on screen unchanged.

OSCL_EXPORT_REF void AndroidSurfaceOutput::closeFrameBuf()
{
    LOGV("closeFrameBuf");
    if (!mInitialized) return;
    mInitialized = false;

    if (mSurface.get()) {
        LOGV("unregisterBuffers");
        mSurface->unregisterBuffers();
        mSurface.clear();
    }

    // free frame buffers
    LOGV("free frame buffers");
    for (int i = 0; i < kBufferCount; i++) {
        mFrameBuffers[i] = 0;
    }

    // free heaps
    LOGV("free mFrameHeap");
    mFrameHeap.clear();

    // free color converter
    if (iColorConverter != 0)
    {
        LOGV("free color converter");
        delete iColorConverter;
        iColorConverter = 0;
    }
}

As mentioned earlier, this function is a simple cleanup operation.

OSCL_EXPORT_REF bool AndroidSurfaceOutput::GetVideoSize(int *w, int *h)
{
    *w = iVideoDisplayWidth;
    *h = iVideoDisplayHeight;
    return iVideoDisplayWidth != 0 && iVideoDisplayHeight != 0;
}