Please credit the source when reposting: http://blog.csdn.net/adits/article/details/8242146
Development environment
1. Host system: Ubuntu 10.10
2. Android version: 4.0.3 (Linux kernel 3.0.8)
Overview
The Android audio system is large and complex, spanning Java applications, the Java framework layer, JNI, the native services (AudioFlinger and AudioPolicyService), the hardware abstraction layer (HAL), ALSA-LIB, and the ALSA driver.
This article first analyzes how the audio system starts up and loads its modules, then walks through the call flow of one Java API in detail; building on that, it naturally adds playback and recording support for USB audio devices to the Android system.
The article is organized into the following parts:
1. Startup flow of the native services.
1.1 AudioFlinger startup and the HAL modules it loads.
1.2 AudioPolicyService startup and the HAL modules it loads.
2. A detailed walk-through of the Java API setDeviceConnectionState(), while adding USB audio playback and recording support to the system.
3. A brief look at ALSA-LIB and how to write the asound.conf configuration file.
4. Re-reading the hardware parameters of the USB audio device.
Details
1. Startup flow of the native services.
Both audio services, AudioFlinger and AudioPolicyService, are started when the Android system boots.
After the Linux kernel finishes booting, it starts Android's init process (system/core/init/init.c):
int main(int argc, char **argv)
{
.....
init_parse_config_file("/init.rc");
.....
}
The init.rc file lists many services that must be started at boot, among them the multimedia service mediaserver.
This service is defined in frameworks/base/media/mediaserver/main_mediaserver.cpp, and it is here that the two native audio services, AudioFlinger and AudioPolicyService, are started:
int main(int argc, char** argv)
{
.....
AudioFlinger::instantiate(); // instantiate AudioFlinger
.....
AudioPolicyService::instantiate(); // instantiate AudioPolicyService
.....
}
1.1 AudioFlinger startup and the HAL modules it loads.
As shown above, AudioFlinger::instantiate() is called to instantiate AudioFlinger. The function is nowhere to be found in AudioFlinger.cpp, so it must come from a parent class. AudioFlinger has several parents, and it is not immediately obvious which one defines instantiate(), so let's just search:
grep -rn "instantiate" frameworks/base/
The search quickly locates the definition of instantiate() in the header ./frameworks/base/include/binder/BinderService.h:
template<typename SERVICE>
class BinderService
{
public:
    static status_t publish() {
        sp<IServiceManager> sm(defaultServiceManager());
        return sm->addService(String16(SERVICE::getServiceName()), new SERVICE());
    }
    ......
    static void instantiate() { publish(); }
    .....
};
A template is used here, so we need to figure out what SERVICE is.
AudioFlinger is defined in AudioFlinger.h, which happens to include the header BinderService.h:
class AudioFlinger :
public BinderService<AudioFlinger>,
public BnAudioFlinger
{
friend class BinderService<AudioFlinger>;
public:
static char const* getServiceName() { return "media.audio_flinger"; }
.....
}
So AudioFlinger inherits from BinderService, passing itself (AudioFlinger) as the SERVICE template parameter. The first argument to addService() calls AudioFlinger's static member function getServiceName() to obtain the service name; the second argument creates an AudioFlinger instance. In other words, instantiate() simply registers the AudioFlinger service with the service manager.
Since AudioFlinger is instantiated at this point, let's see what initialization work its constructor does.
AudioFlinger::AudioFlinger()
: BnAudioFlinger(),
mPrimaryHardwareDev(0), mMasterVolume(1.0f), mMasterMute(false), mNextUniqueId(1),
mBtNrecIsOff(false)
{
}
The constructor itself does nothing of consequence, so we can skip it. Because this is the first time the AudioFlinger service is started, AudioFlinger::onFirstRef() is called next: AudioFlinger ultimately derives from RefBase, whose onFirstRef() hook runs when the first strong pointer (sp<>) to the object is created, which happens inside addService(); the log output confirms this.
void AudioFlinger::onFirstRef()
{
......
for (size_t i = 0; i < ARRAY_SIZE(audio_interfaces); i++) {
const hw_module_t *mod;
audio_hw_device_t *dev;
rc = load_audio_interface(audio_interfaces[i], &mod,&dev);
.....
mAudioHwDevs.push(dev); // store the device obtained via load_audio_interface()
                        // in mAudioHwDevs, a vector of audio_hw_device_t*
.....
}
As its name suggests, load_audio_interface() loads an audio interface:
static int load_audio_interface(const char *if_name, const hw_module_t **mod,
audio_hw_device_t **dev)
{
......
rc = hw_get_module_by_class(AUDIO_HARDWARE_MODULE_ID, if_name, mod);
if (rc)
goto out;
rc = audio_hw_device_open(*mod, dev);
.....
}
First, hw_get_module_by_class() fetches the audio module with ID AUDIO_HARDWARE_MODULE_ID; this ID is defined in the header hardware/libhardware/include/hardware/audio.h. The same header defines a very important structure, struct audio_hw_device, which contains many function interfaces (function pointers):
struct audio_hw_device {
struct hw_device_t common;
/**
* used by audio flinger to enumerate what devices are supported by
* each audio_hw_device implementation.
*
* Return value is a bitmask of 1 or more values of audio_devices_t
*/
uint32_t (*get_supported_devices)(const struct audio_hw_device *dev);
/**
* check to see if the audio hardware interface has been initialized.
* returns 0 on success, -ENODEV on failure.
*/
int (*init_check)(const struct audio_hw_device *dev);
......
/* set/get global audio parameters */
int (*set_parameters)(struct audio_hw_device *dev, const char *kv_pairs);
.....
/** This method creates and opens the audio hardware output stream */
int (*open_output_stream)(struct audio_hw_device *dev, uint32_t devices,
int *format, uint32_t *channels,
uint32_t *sample_rate,
struct audio_stream_out **out);
......
/** This method creates and opens the audio hardware input stream */
int (*open_input_stream)(struct audio_hw_device *dev, uint32_t devices,
int *format, uint32_t *channels,
uint32_t *sample_rate,
audio_in_acoustics_t acoustics,
struct audio_stream_in **stream_in);
.....
}
So where is the module with ID AUDIO_HARDWARE_MODULE_ID actually defined? Being a HAL module, it must live somewhere under the hardware directory:
$ grep -rn AUDIO_HARDWARE_MODULE_ID hardware/
hardware/libhardware_legacy/audio/audio_hw_hal.cpp:602:id: AUDIO_HARDWARE_MODULE_ID,
hardware/libhardware/modules/audio/audio_hw.c:435: .id = AUDIO_HARDWARE_MODULE_ID,
hardware/libhardware/include/hardware/audio.h:37:
#define AUDIO_HARDWARE_MODULE_ID "audio"
The search shows that the module with ID AUDIO_HARDWARE_MODULE_ID is defined in two places. Which one is actually used?
Let's first check which libraries the two files are built into.
The Android.mk files show that audio_hw.c is built into the audio_policy.stub module and from there into libhardware;
likewise, audio_hw_hal.cpp is built into the libaudiopolicy_legacy module and from there into libhardware_legacy;
and both libhardware and libhardware_legacy are used by AudioFlinger.
The log output confirms that the libaudiopolicy_legacy module is in use, i.e. the module defined in hardware/libhardware_legacy/audio/audio_hw_hal.cpp.
After obtaining the HAL audio module, audio_hw_device_open() is executed next to open the device. This function is also defined in the audio.h header; its body is:
/** convenience API for opening and closing a supported device */
static inline int audio_hw_device_open(const struct hw_module_t* module,
struct audio_hw_device** device)
{
return module->methods->open(module, AUDIO_HARDWARE_INTERFACE,
(struct hw_device_t**)device);
}
struct hw_module_t is defined in the header hardware/libhardware/include/hardware/hardware.h and embeds a struct hw_module_methods_t. That structure is trivial, holding a single function pointer, open:
typedef struct hw_module_methods_t {
/** Open a specific device */
int (*open)(const struct hw_module_t* module, const char* id,
struct hw_device_t** device);
} hw_module_methods_t;
With the concrete module identified, it is easy to pin down the implementation behind the open function pointer:
struct legacy_audio_module HAL_MODULE_INFO_SYM = {
module: {
common: {
tag: HARDWARE_MODULE_TAG,
version_major: 1,
version_minor: 0,
id: AUDIO_HARDWARE_MODULE_ID,
name: "LEGACY Audio HW HAL",
author: "The Android Open Source Project",
methods: &legacy_audio_module_methods,
dso : NULL,
reserved : {0},
},
},
};
static struct hw_module_methods_t legacy_audio_module_methods = {
open: legacy_adev_open
};
So the open() implementation is the legacy_adev_open() function!
static int legacy_adev_open(const hw_module_t* module, const char* name,
hw_device_t** device)
{
struct legacy_audio_device *ladev;
int ret;
if (strcmp(name, AUDIO_HARDWARE_INTERFACE) != 0) // it seems AUDIO_HARDWARE_INTERFACE is used for little more than checking that the caller really wants the audio hardware interface
return -EINVAL;
.....
ladev->device.get_supported_devices = adev_get_supported_devices;
ladev->device.init_check = adev_init_check;
.....
ladev->device.open_output_stream = adev_open_output_stream;
ladev->device.close_output_stream = adev_close_output_stream;
ladev->device.open_input_stream = adev_open_input_stream;
ladev->device.close_input_stream = adev_close_input_stream;
.....
ladev->hwif = createAudioHardware();
.....
*device = &ladev->device.common; // hand the device info back up the call
                                 // chain to load_audio_interface(), where it
                                 // is saved in mAudioHwDevs for AudioFlinger
return 0;
}
This function mostly performs initialization, wiring the function pointers to their concrete implementations; createAudioHardware(), however, should do more.
Start with the return value of createAudioHardware(). struct legacy_audio_device is defined as follows:
struct legacy_audio_device {
struct audio_hw_device device;
struct AudioHardwareInterface *hwif;
};
So createAudioHardware() returns a hardware device interface, an AudioHardwareInterface.
AudioHardwareInterface is an abstract class declared in hardware_legacy/AudioHardwareInterface.h, which audio_hw_hal.cpp includes. (A C struct calling into a C++ class was a first for me, although structs and classes look much alike.) So I really want to know what createAudioHardware() does.
First: where is createAudioHardware() defined, how many definitions are there, and which one is actually called?
The declaration in AudioHardwareInterface.h is not inside any class; it sits in the android_audio_legacy namespace, the same namespace audio_hw_hal.cpp uses:
namespace android_audio_legacy {
.....
extern "C" AudioHardwareInterface* createAudioHardware(void);
}; // namespace android
A search finds four definitions of createAudioHardware():
$ grep -rn createAudioHardware hardware/ --exclude-dir=.svn
hardware/alsa_sound/AudioHardwareALSA.cpp:45:
android_audio_legacy::AudioHardwareInterface *createAudioHardware(void) {
hardware/msm7k/libaudio-qsd8k/AudioHardware.cpp:2021:
extern "C" AudioHardwareInterface* createAudioHardware(void) {
hardware/msm7k/libaudio-qdsp5v2/AudioHardware.cpp:337:
extern "C" AudioHardwareInterface* createAudioHardware(void) {
hardware/msm7k/libaudio/AudioHardware.cpp:1132:
extern "C" AudioHardwareInterface* createAudioHardware(void) {
Only AudioHardwareALSA.cpp includes the header hardware_legacy/AudioHardwareInterface.h and returns an AudioHardwareInterface object from the android_audio_legacy namespace, so its createAudioHardware() is most likely the implementation used; the log output confirms this.
Looking at AudioHardwareALSA.cpp, it is easy to see that the function ultimately constructs an AudioHardwareALSA object.
The AudioHardwareALSA constructor:
AudioHardwareALSA::AudioHardwareALSA() :
mALSADevice(0),
mAcousticDevice(0)
{
......
int err = hw_get_module(ALSA_HARDWARE_MODULE_ID,
(hw_module_t const**)&module);
if (err == 0) {
hw_device_t* device;
err = module->methods->open(module, ALSA_HARDWARE_NAME, &device);
if (err == 0) {
mALSADevice = (alsa_device_t *)device;
mALSADevice->init(mALSADevice, mDeviceList);
.....
err = hw_get_module(ACOUSTICS_HARDWARE_MODULE_ID,
(hw_module_t const**)&module);
if (err == 0) {
hw_device_t* device;
err = module->methods->open(module, ACOUSTICS_HARDWARE_NAME, &device);
.....
}
The macro ALSA_HARDWARE_MODULE_ID is defined in the header hardware/alsa_sound/AudioHardwareALSA.h.
The module is described by a hw_module_t structure, defined in the header hardware/libhardware/include/hardware/hardware.h.
In the constructor, hw_get_module() is first called to fetch the ALSA hardware module with ID ALSA_HARDWARE_MODULE_ID; it looks like we are about to enter the large and powerful ALSA audio subsystem!
A quick search confirms that the concrete HAL implementation for ALSA_HARDWARE_MODULE_ID lives in hardware/alsa_sound/alsa_default.cpp:
$ grep -rn ALSA_HARDWARE_MODULE_ID hardware/ --exclude-dir=.svn
hardware/alsa_sound/AudioHardwareALSA.h:39:#define ALSA_HARDWARE_MODULE_ID "alsa"
hardware/alsa_sound/alsa_default.cpp:59:
id : ALSA_HARDWARE_MODULE_ID,
hardware/alsa_sound/AudioHardwareALSA.cpp:150:
int err = hw_get_module(ALSA_HARDWARE_MODULE_ID,
The module's definition is then easy to find:
extern "C" const hw_module_t HAL_MODULE_INFO_SYM = {
tag : HARDWARE_MODULE_TAG,
version_major : 1,
version_minor : 0,
id : ALSA_HARDWARE_MODULE_ID,
name : "ALSA module",
author : "Wind River",
methods : &s_module_methods,
dso : 0,
reserved : { 0, },
};
s_module_methods is defined as:
static hw_module_methods_t s_module_methods = {
open : s_device_open
};
And s_device_open() is implemented as follows:
static int s_device_open(const hw_module_t* module, const char* name,
                         // somewhat puzzling: this open() implementation does
                         // nothing with the name (ALSA_HARDWARE_NAME) passed in
                         hw_device_t** device) // device returns the module info
{
alsa_device_t *dev;
dev = (alsa_device_t *) malloc(sizeof(*dev));
if (!dev) return -ENOMEM;
memset(dev, 0, sizeof(*dev));
/* initialize the procs */
dev->common.tag = HARDWARE_DEVICE_TAG;
dev->common.version = 0;
dev->common.module = (hw_module_t *) module;
dev->common.close = s_device_close;
dev->init = s_init;
dev->open = s_open;
dev->close = s_close;
dev->route = s_route;
*device = &dev->common; // hand this module's info back to the caller
return 0;
}
The analysis above shows exactly how module->methods->open is dispatched.
Next, the ALSA HAL module is initialized.
This involves the structure variable mALSADevice, a struct alsa_device_t defined in the header hardware/alsa_sound/AudioHardwareALSA.h:
struct alsa_device_t {
hw_device_t common;
status_t (*init)(alsa_device_t *, ALSAHandleList &);
status_t (*open)(alsa_handle_t *, uint32_t, int);
status_t (*close)(alsa_handle_t *);
status_t (*route)(alsa_handle_t *, uint32_t, int);
};
This structure merely declares a set of function-call interfaces, all of which are given concrete implementations here. So mALSADevice->init() dispatches to s_init():
static status_t s_init(alsa_device_t *module, ALSAHandleList &list)
{
list.clear();
snd_pcm_uframes_t bufferSize = _defaultsOut.bufferSize;
for (size_t i = 1; (bufferSize & ~i) != 0; i <<= 1)
bufferSize &= ~i;
_defaultsOut.module = module;
_defaultsOut.bufferSize = bufferSize;
list.push_back(_defaultsOut);
bufferSize = _defaultsIn.bufferSize;
.....
list.push_back(_defaultsIn);
.....
}
Here _defaultsOut and _defaultsIn are stored in the ALSA handle list, ALSAHandleList.
First we need to pin down what _defaultsOut and _defaultsIn actually are:
static alsa_handle_t _defaultsOut = {
module : 0,
devices : android_audio_legacy::AudioSystem::DEVICE_OUT_ALL, // all supported
                                                             // output devices
curDev : 0,
curMode : 0,
handle : 0, // PCM node
format : SND_PCM_FORMAT_S16_LE, // AudioSystem::PCM_16_BIT
channels : 2,
sampleRate : DEFAULT_SAMPLE_RATE,
latency : 200000, // Desired Delay in usec
bufferSize : DEFAULT_SAMPLE_RATE / 5, // Desired Number of samples
modPrivate : 0,
};
static alsa_handle_t _defaultsIn = {
module : 0,
devices : android_audio_legacy::AudioSystem::DEVICE_IN_ALL, // all supported
                                                            // input devices
curDev : 0,
curMode : 0,
handle : 0, // PCM node
format : SND_PCM_FORMAT_S16_LE, // AudioSystem::PCM_16_BIT
channels : 2, // channel count: 1 = mono, 2 = stereo. If this does not match
              // the USB audio device actually used, the device will not work.
sampleRate : DEFAULT_SAMPLE_RATE, // sample rate; if it does not match the USB
                                  // audio device actually used, the sound will be distorted
latency : 250000, // Desired Delay in usec
bufferSize : 2048, // Desired Number of samples
modPrivate : 0,
};
And what is ALSAHandleList?
ALSAHandleList is a List template type defined in the header hardware/alsa_sound/AudioHardwareALSA.h:
typedef List<alsa_handle_t> ALSAHandleList;
So it is simply a list of struct alsa_handle_t, and _defaultsOut and _defaultsIn are exactly such structure variables:
struct alsa_handle_t {
alsa_device_t * module;
uint32_t devices;
uint32_t curDev;
int curMode;
snd_pcm_t * handle; // PCM node
snd_pcm_format_t format;
uint32_t channels;
uint32_t sampleRate;
unsigned int latency; // Delay in usec
unsigned int bufferSize; // Size of sample buffer
void * modPrivate;
};
This is how the ALSA HAL obtains the initial hardware parameters for the output and input audio channels; it never attempts to change them afterwards (for smartphones and tablets that is indeed unnecessary). Therefore, when extending Android with USB audio support, we have to consider changing the channels and sampleRate parameters on the fly.
This completes the analysis of the first startup of the AudioFlinger service!
1.2 AudioPolicyService startup and the HAL modules it loads.
The startup of AudioPolicyService resembles that of AudioFlinger, so the analysis will be brief.
First, the definition of the AudioPolicyService class (AudioPolicyService.h), given mainly for the instantiate() call discussed below:
class AudioPolicyService :
public BinderService<AudioPolicyService>, // inherits BinderService,
                                          // passing itself (AudioPolicyService)
                                          // as the SERVICE template parameter
public BnAudioPolicyService,
// public AudioPolicyClientInterface,
public IBinder::DeathRecipient
{
friend class BinderService<AudioPolicyService>;
public:
// for BinderService
static const char *getServiceName() { return "media.audio_policy"; }
.....
}
Following the earlier analysis, the service is started through AudioPolicyService::instantiate(),
which eventually reaches the AudioPolicyService constructor:
AudioPolicyService::AudioPolicyService()
: BnAudioPolicyService() , mpAudioPolicyDev(NULL) , mpAudioPolicy(NULL)
{
char value[PROPERTY_VALUE_MAX];
const struct hw_module_t *module;
int forced_val;
int rc;
Mutex::Autolock _l(mLock);
// start tone playback thread
mTonePlaybackThread = new AudioCommandThread(String8(""));
// start audio commands thread
mAudioCommandThread = new AudioCommandThread(String8("ApmCommandThread"));
/* instantiate the audio policy manager */
rc = hw_get_module(AUDIO_POLICY_HARDWARE_MODULE_ID, &module);
if (rc)
return;
rc = audio_policy_dev_open(module, &mpAudioPolicyDev);
LOGE_IF(rc, "couldn't open audio policy device (%s)", strerror(-rc));
if (rc)
return;
rc = mpAudioPolicyDev->create_audio_policy(mpAudioPolicyDev, &aps_ops, this,
&mpAudioPolicy);
LOGE_IF(rc, "couldn't create audio policy (%s)", strerror(-rc));
if (rc)
return;
rc = mpAudioPolicy->init_check(mpAudioPolicy);
.....
}
(1) First, the tone-playback thread and the audio-command thread are started; both are created as AudioCommandThread objects.
The AudioCommandThread class is defined in frameworks/base/services/audioflinger/AudioPolicyService.h
as a private inner class of AudioPolicyService.
Once created, the AudioCommandThread object enters an endless loop, waiting for events to process:
bool AudioPolicyService::AudioCommandThread::threadLoop()
{
nsecs_t waitTime = INT64_MAX;
mLock.lock();
while (!exitPending())
{
while(!mAudioCommands.isEmpty()) {
.....
switch (command->mCommand) {
.....
case SET_PARAMETERS: {
ParametersData *data = (ParametersData *)command->mParam;
LOGV("AudioCommandThread() processing set parameters string %s, io %d",
data->mKeyValuePairs.string(), data->mIO);
command->mStatus = AudioSystem::setParameters(data->mIO, data->mKeyValuePairs);
if (command->mWaitStatus) {
command->mCond.signal();
mWaitWorkCV.wait(mLock);
}
delete data;
}break;
.....
}
Only one case of the switch statement is listed here, because it is needed later when we analyze the setDeviceConnectionState() call flow.
When command->mCommand is SET_PARAMETERS, setParameters() in the libmedia library (frameworks/base/media/libmedia/AudioSystem.cpp) is called for further processing.
(2) Next, hw_get_module() is called to fetch the HAL audio-policy module with ID AUDIO_POLICY_HARDWARE_MODULE_ID. This macro is defined in the header hardware/libhardware/include/hardware/audio_policy.h.
The AUDIO_POLICY_HARDWARE_MODULE_ID module also has two implementations; the log output again confirms that the one in libhardware_legacy is used:
$ grep -rn AUDIO_POLICY_HARDWARE_MODULE_ID hardware/ --exclude-dir=.svn
hardware/libhardware_legacy/audio/audio_policy_hal.cpp:414:
id: AUDIO_POLICY_HARDWARE_MODULE_ID,
hardware/libhardware/modules/audio/audio_policy.c:318:
.id = AUDIO_POLICY_HARDWARE_MODULE_ID,
The AUDIO_POLICY_HARDWARE_MODULE_ID module defined in audio_policy_hal.cpp looks like this:
struct legacy_ap_module HAL_MODULE_INFO_SYM = {
module: {
common: {
tag: HARDWARE_MODULE_TAG,
version_major: 1,
version_minor: 0,
id: AUDIO_POLICY_HARDWARE_MODULE_ID,
name: "LEGACY Audio Policy HAL",
author: "The Android Open Source Project",
methods: &legacy_ap_module_methods,
dso : NULL,
reserved : {0},
},
},
};
(3) Then audio_policy_dev_open() is called (defined in the header hardware/libhardware/include/hardware/audio_policy.h).
Look at its parameters first: the first is the module fetched above; the second, mpAudioPolicyDev, is a struct audio_policy_device pointer declared in the header AudioPolicyService.h, while struct audio_policy_device itself is defined in the header audio_policy.h:
struct audio_policy_device {
struct hw_device_t common;
int (*create_audio_policy)(const struct audio_policy_device *device,
struct audio_policy_service_ops *aps_ops,
void *service,
struct audio_policy **ap);
.....
}
Finally, the implementation of audio_policy_dev_open():
/** convenience API for opening and closing a supported device */
static inline int audio_policy_dev_open(const hw_module_t* module,
struct audio_policy_device** device)
{
return module->methods->open(module, AUDIO_POLICY_INTERFACE,
(hw_device_t**)device);
}
From the analysis above, the open function pointer here resolves to legacy_ap_dev_open():
static int legacy_ap_dev_open(const hw_module_t* module, const char* name,
hw_device_t** device) // the device parameter returns the module info
{
struct legacy_ap_device *dev;
if (strcmp(name, AUDIO_POLICY_INTERFACE) != 0) // the name parameter
                                               // (AUDIO_POLICY_INTERFACE)
                                               // is used only for this check
return -EINVAL;
dev = (struct legacy_ap_device *)calloc(1, sizeof(*dev));
if (!dev)
return -ENOMEM;
dev->device.common.tag = HARDWARE_DEVICE_TAG;
dev->device.common.version = 0;
dev->device.common.module = const_cast<hw_module_t*>(module);
dev->device.common.close = legacy_ap_dev_close;
dev->device.create_audio_policy = create_legacy_ap;
dev->device.destroy_audio_policy = destroy_legacy_ap;
*device = &dev->device.common; // hand this module's info back to the caller
return 0;
}
(4) The create_audio_policy() function pointer invoked next, mpAudioPolicyDev->create_audio_policy(), is concretely create_legacy_ap().
Its second parameter, &aps_ops, is a struct audio_policy_service_ops variable, the operations interface of APS (AudioPolicyService); since the address of aps_ops is passed, the callee ends up using the implementations these entries have inside APS:
namespace {
struct audio_policy_service_ops aps_ops = {
open_output : aps_open_output,
open_duplicate_output : aps_open_dup_output,
close_output : aps_close_output,
suspend_output : aps_suspend_output,
restore_output : aps_restore_output,
open_input : aps_open_input,
close_input : aps_close_input,
set_stream_volume : aps_set_stream_volume,
set_stream_output : aps_set_stream_output,
set_parameters : aps_set_parameters,
get_parameters : aps_get_parameters,
start_tone : aps_start_tone,
stop_tone : aps_stop_tone,
set_voice_volume : aps_set_voice_volume,
move_effects : aps_move_effects,
};
}; // namespace <unnamed>
struct audio_policy_service_ops is defined in the header hardware/libhardware/include/hardware/audio_policy.h and groups the audio-control callback functions; the aps_ops table thus serves the HAL layer.
The fourth parameter is &mpAudioPolicy. mpAudioPolicy is a struct audio_policy pointer (AudioPolicyService.h); struct audio_policy is likewise defined in audio_policy.h. The interfaces we will use are:
struct audio_policy {
/*
* configuration functions
*/
/* indicate a change in device connection status */
int (*set_device_connection_state)(struct audio_policy *pol,
audio_devices_t device,
audio_policy_dev_state_t state,
const char *device_address);
.....
/* check proper initialization */
int (*init_check)(const struct audio_policy *pol);
.....
}
Now let's look at the concrete implementation behind the create_audio_policy() function pointer:
static int create_legacy_ap(const struct audio_policy_device *device,
struct audio_policy_service_ops *aps_ops,
void *service,
struct audio_policy **ap)
{
struct legacy_audio_policy *lap;
int ret;
if (!service || !aps_ops)
return -EINVAL;
lap = (struct legacy_audio_policy *)calloc(1, sizeof(*lap));
if (!lap)
return -ENOMEM;
lap->policy.set_device_connection_state = ap_set_device_connection_state;
......
lap->policy.init_check = ap_init_check;
lap->policy.get_output = ap_get_output;
lap->policy.start_output = ap_start_output;
lap->policy.stop_output = ap_stop_output;
lap->policy.release_output = ap_release_output;
lap->policy.get_input = ap_get_input;
lap->policy.start_input = ap_start_input;
lap->policy.stop_input = ap_stop_input;
lap->policy.release_input = ap_release_input;
.....
lap->service = service; // APS
lap->aps_ops = aps_ops; // implemented inside APS
lap->service_client =
new AudioPolicyCompatClient(aps_ops, service);
if (!lap->service_client) {
ret = -ENOMEM;
goto err_new_compat_client;
}
lap->apm = createAudioPolicyManager(lap->service_client);
......
*ap = &lap->policy; // return the address of this audio policy through *ap, back into mpAudioPolicy
......
}
This function creates two important things that deserve a closer look: an AudioPolicyCompatClient object, and the audio policy manager created by createAudioPolicyManager().
First, the AudioPolicyCompatClient object. The class is defined in the header hardware/libhardware_legacy/audio/AudioPolicyCompatClient.h:
namespace android_audio_legacy {
class AudioPolicyCompatClient : public AudioPolicyClientInterface {
// its parent is AudioPolicyClientInterface, matching the parameter type of
// AudioPolicyManagerBase::
// AudioPolicyManagerBase(AudioPolicyClientInterface *clientInterface) below
public:
AudioPolicyCompatClient(struct audio_policy_service_ops *serviceOps,
void *service) :
mServiceOps(serviceOps) , mService(service) {} // serviceOps = aps_ops,
                                               // service = this (APS)
......
private:
struct audio_policy_service_ops* mServiceOps;
void* mService;
......
}
The constructor simply initializes the two private members mServiceOps and mService; the newly created object is then passed as the sole argument to createAudioPolicyManager() to create the audio policy manager.
However, createAudioPolicyManager() has several definitions; the search results:
$ grep -rn createAudioPolicyManager hardware/ --exclude-dir=.svn
hardware/alsa_sound/AudioPolicyManagerALSA.cpp:31:
extern "C" android_audio_legacy::AudioPolicyInterface*
createAudioPolicyManager(
android_audio_legacy::AudioPolicyClientInterface *clientInterface)
hardware/libhardware_legacy/audio/AudioPolicyManagerDefault.cpp:24:
extern "C" AudioPolicyInterface*
createAudioPolicyManager(AudioPolicyClientInterface *clientInterface)
hardware/msm7k/libaudio-qsd8k/AudioPolicyManager.cpp:39:
extern "C" AudioPolicyInterface*
createAudioPolicyManager(AudioPolicyClientInterface *clientInterface)
hardware/msm7k/libaudio-qdsp5v2/AudioPolicyManager.cpp:39:
extern "C" AudioPolicyInterface*
createAudioPolicyManager(AudioPolicyClientInterface *clientInterface)
hardware/msm7k/libaudio/AudioPolicyManager.cpp:35:
extern "C" AudioPolicyInterface*
createAudioPolicyManager(AudioPolicyClientInterface *clientInterface)
Although createAudioPolicyManager() has several definitions, they all end up constructing an AudioPolicyManagerBase object. Taking AudioPolicyManagerALSA as the example:
AudioPolicyManagerALSA::AudioPolicyManagerALSA(
android_audio_legacy::AudioPolicyClientInterface *clientInterface)
: AudioPolicyManagerBase(clientInterface) // clientInterface is precisely the
  // AudioPolicyCompatClient object, and AudioPolicyCompatClient implements
  // the pure virtual interface of AudioPolicyClientInterface
{
}
AudioPolicyManagerBase::AudioPolicyManagerBase(
AudioPolicyClientInterface *clientInterface)
:
#ifdef AUDIO_POLICY_TEST
Thread(false),
#endif //AUDIO_POLICY_TEST
mPhoneState(AudioSystem::MODE_NORMAL), mRingerMode(0),
mLimitRingtoneVolume(false), mLastVoiceVolume(-1.0f),
mTotalEffectsCpuLoad(0), mTotalEffectsMemory(0),
mA2dpSuspended(false)
{
mpClientInterface = clientInterface; // mpClientInterface: used later to call back into APS
for (int i = 0; i < AudioSystem::NUM_FORCE_USE; i++) {
mForceUse[i] = AudioSystem::FORCE_NONE;
}
initializeVolumeCurves();
// devices available by default are speaker, ear piece and microphone
mAvailableOutputDevices = AudioSystem::DEVICE_OUT_EARPIECE |
AudioSystem::DEVICE_OUT_SPEAKER; // available output audio devices
mAvailableInputDevices = AudioSystem::DEVICE_IN_BUILTIN_MIC; // available input audio devices
......
mHardwareOutput = mpClientInterface->openOutput(&outputDesc->mDevice,
&outputDesc->mSamplingRate,
&outputDesc->mFormat,
&outputDesc->mChannels,
&outputDesc->mLatency,
outputDesc->mFlags);
......
setOutputDevice(mHardwareOutput, (uint32_t)AudioSystem::DEVICE_OUT_SPEAKER,
true);
......
}
mpClientInterface->openOutput() first calls back into AudioPolicyCompatClient's openOutput():
audio_io_handle_t AudioPolicyCompatClient::openOutput(uint32_t *pDevices,
uint32_t *pSamplingRate,
uint32_t *pFormat,
uint32_t *pChannels,
uint32_t *pLatencyMs,
AudioSystem::output_flags flags)
{
return mServiceOps->open_output(mService, pDevices, pSamplingRate, pFormat,
pChannels, pLatencyMs,
(audio_policy_output_flags_t)flags);
}
As analyzed earlier, when the AudioPolicyCompatClient object was created, mServiceOps was initialized with the APS struct audio_policy_service_ops variable aps_ops; so the open_output() entry of that table is invoked, which is concretely aps_open_output():
static audio_io_handle_t aps_open_output(void *service,
uint32_t *pDevices,
uint32_t *pSamplingRate,
uint32_t *pFormat,
uint32_t *pChannels,
uint32_t *pLatencyMs,
audio_policy_output_flags_t flags)
{
sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
if (af == NULL) {
LOGW("%s: could not get AudioFlinger", __func__);
return 0;
}
return af->openOutput(pDevices, pSamplingRate, pFormat, pChannels,
pLatencyMs, flags);
}
It is easy to see that this lands in AudioFlinger's openOutput():
int AudioFlinger::openOutput(uint32_t *pDevices,
uint32_t *pSamplingRate,
uint32_t *pFormat,
uint32_t *pChannels,
uint32_t *pLatencyMs,
uint32_t flags)
{
......
audio_hw_device_t *outHwDev;
......
outHwDev = findSuitableHwDev_l(*pDevices);
if (outHwDev == NULL)
return 0;
status = outHwDev->open_output_stream(outHwDev, *pDevices, (int *)&format,
&channels, &samplingRate, &outStream);
......
return 0;
}
struct audio_hw_device_t is a structure defined in the header hardware/libhardware/include/hardware/audio.h;
struct audio_hw_device was already covered when we analyzed AudioFlinger's load_audio_interface().
audio_hw_device_t* AudioFlinger::findSuitableHwDev_l(uint32_t devices)
{
/* first matching HW device is returned */
for (size_t i = 0; i < mAudioHwDevs.size(); i++) { // as analyzed earlier, mAudioHwDevs holds the available HAL audio devices
audio_hw_device_t *dev = mAudioHwDevs[i];
if ((dev->get_supported_devices(dev) & devices) == devices)
return dev;
}
return NULL;
}
As analyzed earlier, the get_supported_devices() function pointer here dispatches to adev_get_supported_devices() in audio_hw_hal.cpp, shown below, which lists every output/input audio device the system supports. Accordingly, we will add the USB audio device names here following the same naming pattern, along with their definitions elsewhere.
static uint32_t adev_get_supported_devices(const struct audio_hw_device *dev)
{
/* XXX: The old AudioHardwareInterface interface is not smart enough to
* tell us this, so we'll lie and basically tell AF that we support the
* below input/output devices and cross our fingers. To do things properly,
* audio hardware interfaces that need advanced features (like this) should
* convert to the new HAL interface and not use this wrapper. */
return (/* OUT */
AUDIO_DEVICE_OUT_EARPIECE |
AUDIO_DEVICE_OUT_SPEAKER |
AUDIO_DEVICE_OUT_WIRED_HEADSET |
AUDIO_DEVICE_OUT_WIRED_HEADPHONE |
AUDIO_DEVICE_OUT_AUX_DIGITAL |
AUDIO_DEVICE_OUT_ANLG_DOCK_HEADSET |
AUDIO_DEVICE_OUT_DGTL_DOCK_HEADSET |
AUDIO_DEVICE_OUT_ALL_SCO |
AUDIO_DEVICE_OUT_DEFAULT |
/* IN */
AUDIO_DEVICE_IN_COMMUNICATION |
AUDIO_DEVICE_IN_AMBIENT |
AUDIO_DEVICE_IN_BUILTIN_MIC |
AUDIO_DEVICE_IN_WIRED_HEADSET |
AUDIO_DEVICE_IN_AUX_DIGITAL |
AUDIO_DEVICE_IN_BACK_MIC |
AUDIO_DEVICE_IN_ALL_SCO |
AUDIO_DEVICE_IN_DEFAULT);
}
Once a suitable device is found, outHwDev->open_output_stream() is called to open that device's output stream; by the same reasoning as before, this dispatches to adev_open_output_stream() in audio_hw_hal.cpp:
static int adev_open_output_stream(struct audio_hw_device *dev,
uint32_t devices,
int *format,
uint32_t *channels,
uint32_t *sample_rate,
struct audio_stream_out **stream_out)
{
struct legacy_audio_device *ladev = to_ladev(dev);
status_t status;
struct legacy_stream_out *out;
int ret;
out = (struct legacy_stream_out *)calloc(1, sizeof(*out));
if (!out)
return -ENOMEM;
out->legacy_out = ladev->hwif->openOutputStream(devices, format, channels,
sample_rate, &status);
......
}
As analyzed earlier, ladev->hwif refers to the AudioHardwareALSA object, so ladev->hwif->openOutputStream() ends up in AudioHardwareALSA::openOutputStream():
android_audio_legacy::AudioStreamOut *
AudioHardwareALSA::openOutputStream(uint32_t devices,
int *format,
uint32_t *channels,
uint32_t *sampleRate,
status_t *status)
{
......
// Find the appropriate alsa device
for(ALSAHandleList::iterator it = mDeviceList.begin(); // mDeviceList stores the
                                                       // handles describing the
                                                       // output and input audio
                                                       // channels (two in total)
it != mDeviceList.end(); ++it)
if (it->devices & devices) {
err = mALSADevice->open(&(*it), devices, mode()); // by the time open() is
                                                  // called, we already know
                                                  // whether an input or output
                                                  // channel is being opened
if (err) break;
out = new AudioStreamOutALSA(this, &(*it)); // the object itself is not the
    // point; constructing it also constructs its parent ALSAStreamOps,
    // which is used later (mParent = this, i.e. AudioHardwareALSA)
err = out->set(format, channels, sampleRate);
break;
}
......
}
From the startup analysis of AudioHardwareALSA we know that mDeviceList stores the handles describing the output and input audio channels.
mALSADevice is the operations interface of the concrete ALSA device selected during initialization, so mALSADevice->open() dispatches to the ALSA module's s_open():
static status_t s_open(alsa_handle_t *handle, uint32_t devices, int mode)
{
// Close off previously opened device.
// It would be nice to determine if the underlying device actually
// changes, but we might be recovering from an error or manipulating
// mixer settings (see asound.conf).
//
s_close(handle); // first close any previously opened audio channel
LOGD("open called for devices %08x in mode %d...", devices, mode);
const char *stream = streamName(handle);
const char *devName = deviceName(handle, devices, mode);
int err;
for (;;) {
// The PCM stream is opened in blocking mode, per ALSA defaults. The
// AudioFlinger seems to assume blocking mode too, so asynchronous mode
// should not be used.
err = snd_pcm_open(&handle->handle, devName, direction(handle),
                   // handle->handle stores the PCM node obtained from ALSA-LIB
                   SND_PCM_ASYNC);
if (err == 0) break;
// See if there is a less specific name we can try.
// Note: We are changing the contents of a const char * here.
char *tail = strrchr(devName, '_');
if (!tail) break;
*tail = 0;
}
if (err < 0) {
// None of the Android defined audio devices exist. Open a generic one.
devName = "default";
err = snd_pcm_open(&handle->handle, devName, direction(handle), 0);
}
if (err < 0) {
LOGE("Failed to Initialize any ALSA %s device: %s",
stream, strerror(err));
return NO_INIT;
}
err = setHardwareParams(handle);
if (err == NO_ERROR) err = setSoftwareParams(handle);
LOGI("Initialized ALSA %s device %s", stream, devName);
handle->curDev = devices;
handle->curMode = mode;
return err;
}
(1) First, streamName() is called to get the name of the audio stream:
const char *streamName(alsa_handle_t *handle)
{
return snd_pcm_stream_name(direction(handle));
}
snd_pcm_stream_name() is an ALSA-LIB API, defined in external/alsa-lib/src/pcm/pcm.c.
To understand the stream name we first have to look at direction():
snd_pcm_stream_t direction(alsa_handle_t *handle)
{
return (handle->devices & android_audio_legacy::AudioSystem::DEVICE_OUT_ALL) ? SND_PCM_STREAM_PLAYBACK
: SND_PCM_STREAM_CAPTURE;
}
So direction() simply returns the direction of the PCM stream (playback or capture).
Its return value becomes the argument of snd_pcm_stream_name():
const char *snd_pcm_stream_name(snd_pcm_stream_t stream)
{
if (stream > SND_PCM_STREAM_LAST)
return NULL;
return snd_pcm_stream_names[stream];
}
static const char *const snd_pcm_stream_names[] = {
STREAM(PLAYBACK),
STREAM(CAPTURE),
};
Fine, so the stream name is either PLAYBACK or CAPTURE.
(2) Next, deviceName() is called to build the device name. This is important: it gives us the naming rules for the USB audio devices we will later add to asound.conf.
const char *deviceName(alsa_handle_t *handle, uint32_t device, int mode)
{
static char devString[ALSA_NAME_MAX];
int hasDevExt = 0;
strcpy(devString, devicePrefix[direction(handle)]);
for (int dev = 0; device && dev < deviceSuffixLen; dev++)
if (device & deviceSuffix[dev].device) {
ALSA_STRCAT (devString, deviceSuffix[dev].suffix);
device &= ~deviceSuffix[dev].device;
hasDevExt = 1;
}
if (hasDevExt) switch (mode) {
case android_audio_legacy::AudioSystem::MODE_NORMAL:
ALSA_STRCAT (devString, "_normal")
;
break;
case android_audio_legacy::AudioSystem::MODE_RINGTONE:
ALSA_STRCAT (devString, "_ringtone")
;
break;
case android_audio_legacy::AudioSystem::MODE_IN_CALL:
ALSA_STRCAT (devString, "_incall")
;
break;
};
return devString;
}
The character array devString holds the device name.
The device prefix is copied into devString first. Consequently, the name of any output (playback) device starts with AndroidPlayback, and the name of any input (capture) device starts with AndroidCapture.
static const char *devicePrefix[SND_PCM_STREAM_LAST + 1] = {
/* SND_PCM_STREAM_PLAYBACK : */"AndroidPlayback",
/* SND_PCM_STREAM_CAPTURE : */"AndroidCapture",
};
Next, the matching suffixes are looked up in the deviceSuffix array and appended to devString.
/* The following table(s) need to match in order of the route bits
*/
static const device_suffix_t deviceSuffix[] = {
{android_audio_legacy::AudioSystem::DEVICE_OUT_EARPIECE, "_Earpiece"},
{android_audio_legacy::AudioSystem::DEVICE_OUT_SPEAKER, "_Speaker"},
{android_audio_legacy::AudioSystem::DEVICE_OUT_BLUETOOTH_SCO, "_Bluetooth"},
{android_audio_legacy::AudioSystem::DEVICE_OUT_WIRED_HEADSET, "_Headset"},
{android_audio_legacy::AudioSystem::DEVICE_OUT_BLUETOOTH_A2DP, "_Bluetooth-A2DP"},
};
struct device_suffix_t is defined as follows:
struct device_suffix_t {
const android_audio_legacy::AudioSystem::audio_devices device;
const char *suffix;
};
PS: we will also need to add an entry for the USB AUDIO device to this array, and to define the USB AUDIO device constants in the same places where devices such as DEVICE_OUT_EARPIECE are defined:
1.frameworks/base/media/java/android/media/AudioSystem.java
2.frameworks/base/media/java/android/media/AudioManager.java
3.hardware/libhardware_legacy/include/hardware_legacy/AudioSystemLegacy.h
4.system/core/include/system/audio.h
As an aside, Android defines its audio devices as follows (AudioSystem.java):
public static final int DEVICE_OUT_EARPIECE = 0x1; // 0x1 << 0
public static final int DEVICE_OUT_SPEAKER = 0x2; // 0x1 << 1
public static final int DEVICE_OUT_WIRED_HEADSET = 0x4; // 0x1 << 2
public static final int DEVICE_OUT_WIRED_HEADPHONE = 0x8; // 0x1 << 3
public static final int DEVICE_OUT_BLUETOOTH_SCO = 0x10; // 0x1 << 4
public static final int DEVICE_OUT_BLUETOOTH_SCO_HEADSET = 0x20; // 0x1 << 5
public static final int DEVICE_OUT_BLUETOOTH_SCO_CARKIT = 0x40; // 0x1 << 6
public static final int DEVICE_OUT_BLUETOOTH_A2DP = 0x80; // 0x1 << 7
public static final int DEVICE_OUT_BLUETOOTH_A2DP_HEADPHONES = 0x100;
// 0x100 << 0
public static final int DEVICE_OUT_BLUETOOTH_A2DP_SPEAKER = 0x200; // 0x100 << 1
public static final int DEVICE_OUT_AUX_DIGITAL = 0x400; // 0x100 << 2
public static final int DEVICE_OUT_ANLG_DOCK_HEADSET = 0x800; // 0x100 << 3
public static final int DEVICE_OUT_DGTL_DOCK_HEADSET = 0x1000; // 0x1000 << 0
public static final int DEVICE_OUT_DEFAULT = 0x8000; // 0x1000 << 3
// input devices
public static final int DEVICE_IN_COMMUNICATION = 0x10000; // 0x10000 << 0
public static final int DEVICE_IN_AMBIENT = 0x20000; // 0x10000 << 1
public static final int DEVICE_IN_BUILTIN_MIC1 = 0x40000; // 0x10000 << 2
public static final int DEVICE_IN_BUILTIN_MIC2 = 0x80000; // 0x10000 << 3
public static final int DEVICE_IN_MIC_ARRAY = 0x100000; // 0x100000 << 0
public static final int DEVICE_IN_BLUETOOTH_SCO_HEADSET = 0x200000;
// 0x100000 << 1
public static final int DEVICE_IN_WIRED_HEADSET = 0x400000; // 0x100000 << 2
public static final int DEVICE_IN_AUX_DIGITAL = 0x800000; // 0x100000 << 3
As the number of devices grows, it becomes hard to guarantee that the number of zeros in the literals on the right-hand side stays correct. Defining them with shift expressions, as shown in the comments, is less error-prone.
Once a device suffix has been matched, the variable hasDevExt is set to 1, meaning a mode extension (_normal, _ringtone, or _incall) will also be appended.
At this point, the PCM node name of a device is fully formed.
(3) Execution then reaches the ALSA-LIB API snd_pcm_open(), which takes the device name devName just built as one of its arguments. The call goes through ALSA-LIB and down into the ALSA driver to open the specified audio device. If opening the specified device fails, the default audio device is opened instead.
(4) If the device is opened successfully, execution continues to setHardwareParams(), which sets the hardware parameters: buffer size, sample rate, channel count, audio format, and so on. These parameters are all defined in struct alsa_handle_t, which was covered when analyzing the initialization function s_init(); their values are specified in the default device configurations _defaultsOut and _defaultsIn.
However, when a USB AUDIO input device is used, the channel count and sample rate in the default input configuration may well differ from those of the actual USB AUDIO device, leaving the device unusable or distorting the audio. Therefore, before setHardwareParams() runs, and when we know the input channel is being opened (the default output configuration is fine), we must probe these two hardware parameters of the USB AUDIO device actually in use and re-assign the channel count and sample rate in _defaultsIn. This is analyzed in detail later.
status_t setHardwareParams(alsa_handle_t *handle)
{
......
unsigned int requestedRate = handle->sampleRate;
......
err = snd_pcm_hw_params_set_channels(handle->handle, hardwareParams,
handle->channels);
......
err = snd_pcm_hw_params_set_rate_near(handle->handle, hardwareParams,
&requestedRate, 0);
......
}
(5) Finally, execution reaches setSoftwareParams(), which sets the software parameters. No problems were encountered here when adding the USB AUDIO device, so it is not analyzed further.
2. The JAVA API setDeviceConnectionState() call flow in detail.
With the two most complex native services analyzed, the rest is much lighter going.
The JAVA API setDeviceConnectionState() is declared in frameworks/base/media/java/android/media/AudioSystem.java.
public static native int setDeviceConnectionState(int device, int state, String device_address);
The first parameter is the identifier of the audio device to open; it is passed all the way down to the ALSA module (alsa_default.cpp).
The second parameter indicates whether that device is available.
The third parameter is the device address, usually empty.
The java keyword native tells us this calls into the corresponding JNI code (frameworks/base/core/jni/android_media_AudioSystem.cpp).
static int
android_media_AudioSystem_setDeviceConnectionState(JNIEnv *env, jobject thiz,
jint device, jint state, jstring device_address)
{
const char *c_address = env->GetStringUTFChars(device_address, NULL);
int status = check_AudioSystem_Command(
AudioSystem::setDeviceConnectionState(static_cast <audio_devices_t>(device),
static_cast <audio_policy_dev_state_t>(state),
c_address));
env->ReleaseStringUTFChars(device_address, c_address);
return status;
}
This in turn calls setDeviceConnectionState() in the libmedia library (frameworks/base/media/libmedia/AudioSystem.cpp).
status_t AudioSystem::setDeviceConnectionState(audio_devices_t device,
audio_policy_dev_state_t state,
const char *device_address)
{
const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
const char *address = "";
if (aps == 0) return PERMISSION_DENIED;
if (device_address != NULL) {
address = device_address;
}
return aps->setDeviceConnectionState(device, state, address);
}
This clearly calls into AudioPolicyService's setDeviceConnectionState():
status_t AudioPolicyService::setDeviceConnectionState(audio_devices_t device,
audio_policy_dev_state_t state,
const char *device_address)
{
......
return mpAudioPolicy->set_device_connection_state(mpAudioPolicy, device,
state, device_address);
}
From the earlier analysis of the AudioPolicyService startup flow, mpAudioPolicy points to the audio policy module defined in audio_policy_hal.cpp, so mpAudioPolicy->set_device_connection_state() resolves to ap_set_device_connection_state():
static int ap_set_device_connection_state(struct audio_policy *pol,
audio_devices_t device,
audio_policy_dev_state_t state,
const char *device_address)
{
struct legacy_audio_policy *lap = to_lap(pol);
return lap->apm->setDeviceConnectionState(
(AudioSystem::audio_devices)device,
(AudioSystem::device_connection_state)state,
device_address);
}
Likewise, from the AudioPolicyService startup analysis, lap->apm points to an AudioPolicyManagerBase object, so lap->apm->setDeviceConnectionState() calls AudioPolicyManagerBase::setDeviceConnectionState():
status_t AudioPolicyManagerBase::setDeviceConnectionState(AudioSystem::audio_devices device,
AudioSystem::device_connection_state state,
const char *device_address)
{
......
// handle output devices
if (AudioSystem::isOutputDevice(device)) { // if this is an output device
......
switch (state)
{
// handle output device connection
case AudioSystem::DEVICE_STATE_AVAILABLE:
if (mAvailableOutputDevices & device) {
LOGW("setDeviceConnectionState() device already connected: %x", device);
return INVALID_OPERATION;
}
LOGV("setDeviceConnectionState() connecting device %x", device);
// register new device as available
mAvailableOutputDevices |= device; // add the device to the available-devices mask
.....
// handle output device disconnection
case AudioSystem::DEVICE_STATE_UNAVAILABLE: {
if (!(mAvailableOutputDevices & device)) {
LOGW("setDeviceConnectionState() device not connected: %x", device);
return INVALID_OPERATION;
}
LOGV("setDeviceConnectionState() disconnecting device %x", device);
// remove device from available output devices
mAvailableOutputDevices &= ~device; // remove the device from the available-devices mask
......
}
// request routing change if necessary
uint32_t newDevice = getNewDevice(mHardwareOutput, false);
......
updateDeviceForStrategy();
setOutputDevice(mHardwareOutput, newDevice);
// if the output device is USB AUDIO (USB playback), the input device should be the USB MIC
if (device == AudioSystem::DEVICE_OUT_WIRED_HEADSET) {
device = AudioSystem::DEVICE_IN_WIRED_HEADSET;
} else if (device == AudioSystem::DEVICE_OUT_BLUETOOTH_SCO ||
device == AudioSystem::DEVICE_OUT_BLUETOOTH_SCO_HEADSET ||
device == AudioSystem::DEVICE_OUT_BLUETOOTH_SCO_CARKIT) {
device = AudioSystem::DEVICE_IN_BLUETOOTH_SCO_HEADSET;
} else {
return NO_ERROR;
}
}
// handle input devices
if (AudioSystem::isInputDevice(device)) { // if this is an input device
switch (state)
{
// handle input device connection
case AudioSystem::DEVICE_STATE_AVAILABLE: {
if (mAvailableInputDevices & device) {
LOGW("setDeviceConnectionState() device already connected: %d", device);
return INVALID_OPERATION;
}
mAvailableInputDevices |= device;
}
break;
// handle input device disconnection
case AudioSystem::DEVICE_STATE_UNAVAILABLE: {
if (!(mAvailableInputDevices & device)) {
LOGW("setDeviceConnectionState() device not connected: %d", device);
return INVALID_OPERATION;
}
mAvailableInputDevices &= ~device;
} break;
default:
LOGE("setDeviceConnectionState() invalid state: %x", state);
return BAD_VALUE;
}
audio_io_handle_t activeInput = getActiveInput();
if (activeInput != 0) {
AudioInputDescriptor *inputDesc = mInputs.valueFor(activeInput);
uint32_t newDevice = getDeviceForInputSource(inputDesc->mInputSource);
if (newDevice != inputDesc->mDevice) {
LOGV("setDeviceConnectionState() changing device from %x to %x for input %d",
inputDesc->mDevice, newDevice, activeInput);
inputDesc->mDevice = newDevice;
AudioParameter param = AudioParameter();
param.addInt(String8(AudioParameter::keyRouting), (int)newDevice);
mpClientInterface->setParameters(activeInput, param.toString());
}
}
return NO_ERROR;
}
LOGW("setDeviceConnectionState() invalid device: %x", device);
return BAD_VALUE;
}
(1) When the current device is an output device, execution reaches getNewDevice(), which obtains the new device to be passed as the second argument of the output-device setter setOutputDevice().
uint32_t AudioPolicyManagerBase::getNewDevice(audio_io_handle_t output, bool fromCache)
{
uint32_t device = 0;
AudioOutputDescriptor *outputDesc = mOutputs.valueFor(output);
// check the following by order of priority to request a routing change if necessary:
// 1: the strategy enforced audible is active on the output:
// use device for strategy enforced audible
// 2: we are in call or the strategy phone is active on the output:
// use device for strategy phone
// 3: the strategy sonification is active on the output:
// use device for strategy sonification
// 4: the strategy media is active on the output:
// use device for strategy media
// 5: the strategy DTMF is active on the output:
// use device for strategy DTMF
if (outputDesc->isUsedByStrategy(STRATEGY_ENFORCED_AUDIBLE)) { // check, in priority order, whether one of the
// five strategies, STRATEGY_ENFORCED_AUDIBLE, is in use
device = getDeviceForStrategy(STRATEGY_ENFORCED_AUDIBLE, fromCache); // if so, obtain the device for it;
// USB AUDIO support needs to be added there
} else if (isInCall() ||
outputDesc->isUsedByStrategy(STRATEGY_PHONE)) {
device = getDeviceForStrategy(STRATEGY_PHONE, fromCache);
} else if (outputDesc->isUsedByStrategy(STRATEGY_SONIFICATION)) {
device = getDeviceForStrategy(STRATEGY_SONIFICATION, fromCache);
} else if (outputDesc->isUsedByStrategy(STRATEGY_MEDIA)) {
device = getDeviceForStrategy(STRATEGY_MEDIA, fromCache);
} else if (outputDesc->isUsedByStrategy(STRATEGY_DTMF)) {
device = getDeviceForStrategy(STRATEGY_DTMF, fromCache);
}
......
return device;
}
......
uint32_t AudioPolicyManagerBase::getDeviceForStrategy(routing_strategy strategy, bool fromCache)
{
uint32_t device = 0;
if (fromCache) {
LOGV("getDeviceForStrategy() from cache strategy %d, device %x", strategy, mDeviceForStrategy[strategy]);
return mDeviceForStrategy[strategy];
}
switch (strategy) {
case STRATEGY_DTMF:
if (!isInCall()) {
// when off call, DTMF strategy follows the same rules as MEDIA strategy
device = getDeviceForStrategy(STRATEGY_MEDIA, false);
break;
}
// when in call, DTMF and PHONE strategies follow the same rules
// FALL THROUGH
case STRATEGY_PHONE: // my box has no telephony, so USB AUDIO support need not be added under this strategy
// for phone strategy, we first consider the forced use and then the available devices by order
// of priority
......
break;
case STRATEGY_SONIFICATION: // my box has no telephony, so USB AUDIO support need not be added under this strategy
// If incall, just select the STRATEGY_PHONE device: The rest of the behavior is handled by
// handleIncallSonification().
......
case STRATEGY_ENFORCED_AUDIBLE:
// strategy STRATEGY_ENFORCED_AUDIBLE uses same routing policy as STRATEGY_SONIFICATION
// except when in call where it doesn't default to STRATEGY_PHONE behavior
......
case STRATEGY_MEDIA: { // the media-playback strategy; USB AUDIO support must be added here
uint32_t device2 = mAvailableOutputDevices & AudioSystem::DEVICE_OUT_WIRED_HEADPHONE;
if (device2 == 0) {
device2 = mAvailableOutputDevices & AudioSystem::DEVICE_OUT_WIRED_HEADSET;
}
.......
// code added to support the USB AUDIO device
if (device2 == 0) {
device2 = mAvailableOutputDevices & AudioSystem::DEVICE_OUT_USB_AUDIO;
}
// end
.......
}
(2) With the new device obtained, execution continues to updateDeviceForStrategy(), which updates the device associated with each audio strategy.
void AudioPolicyManagerBase::updateDeviceForStrategy()
{
for (int i = 0; i < NUM_STRATEGIES; i++) {
mDeviceForStrategy[i] = getDeviceForStrategy((routing_strategy)i, false);
}
}
Since this function simply calls getDeviceForStrategy(), analyzed just above, we do not go deeper.
(3) After the devices are updated, execution continues to setOutputDevice(), which sets the output device using the newly obtained device as its second argument (the first argument is the output audio channel obtained when the AudioPolicyManagerBase object was created). This function is important: it calls down into the ALSA module (alsa_default.cpp).
void AudioPolicyManagerBase::setOutputDevice(audio_io_handle_t output, uint32_t device, bool force, int delayMs)
{
LOGV("setOutputDevice() output %d device %x delayMs %d", output, device, delayMs);
AudioOutputDescriptor *outputDesc = mOutputs.valueFor(output);
......
// do the routing
AudioParameter param = AudioParameter();
param.addInt(String8(AudioParameter::keyRouting), (int)device); // the key signals that re-routing is requested,
// the value is the identifier of the audio device to open
mpClientInterface->setParameters(mHardwareOutput, param.toString(), delayMs); // calls from the HAL layer back up into the frameworks layer
......
}
setOutputDevice() is called with only two arguments, yet its prototype has four. Puzzling at first sight; it puzzled me for quite a while too. It turns out that in its declaration (AudioPolicyManagerBase.h) the last two parameters have default values, and in C++ parameters with default values may be omitted at the call site.
void setOutputDevice(audio_io_handle_t output, uint32_t device, bool force = false, int delayMs = 0);
The key point of this function is the call to mpClientInterface->setParameters().
The first argument, mHardwareOutput, is the output audio channel.
The third argument, delayMs, is the wait time, here the default value 0.
From the earlier analysis, mpClientInterface is an AudioPolicyCompatClient object, so mpClientInterface->setParameters() calls AudioPolicyCompatClient::setParameters():
void AudioPolicyCompatClient::setParameters(audio_io_handle_t ioHandle,
const String8& keyValuePairs,
int delayMs)
{
mServiceOps->set_parameters(mService, ioHandle, keyValuePairs.string(),
// when the APS object was created, the third argument of create_audio_policy(), this (the APS itself),
// was passed down and used to initialize mService when this AudioPolicyCompatClient was constructed
delayMs);
}
mServiceOps was set, when the AudioPolicyCompatClient was created, to point to APS's struct audio_policy_service_ops variable aps_ops, so mServiceOps->set_parameters() calls APS's aps_set_parameters():
static void aps_set_parameters(void *service, audio_io_handle_t io_handle,
const char *kv_pairs, int delay_ms)
{
AudioPolicyService *audioPolicyService = (AudioPolicyService *)service;
audioPolicyService->setParameters(io_handle, kv_pairs, delay_ms);
}
顯然将調到APS類的setParameters()函數。
void AudioPolicyService::setParameters(audio_io_handle_t ioHandle,
const char *keyValuePairs,
int delayMs)
{
mAudioCommandThread->parametersCommand((int)ioHandle, keyValuePairs,
delayMs);
}
mAudioCommandThread is the audio command thread started when the APS object was created. mAudioCommandThread->parametersCommand() builds a set-parameters command from its second argument and hands it to the thread's processing function threadLoop().
status_t AudioPolicyService::AudioCommandThread::parametersCommand(int ioHandle,
const char *keyValuePairs,
int delayMs)
{
status_t status = NO_ERROR;
AudioCommand *command = new AudioCommand();
command->mCommand = SET_PARAMETERS;
ParametersData *data = new ParametersData();
data->mIO = ioHandle;
data->mKeyValuePairs = String8(keyValuePairs); // keyValuePairs holds the audio parameters param passed down from AudioPolicyManagerBase
command->mParam = data;
if (delayMs == 0) {
command->mWaitStatus = true;
} else {
command->mWaitStatus = false;
}
Mutex::Autolock _l(mLock);
insertCommand_l(command, delayMs); // insert the command for the thread's processing function threadLoop()
......
}
// insertCommand_l() must be called with mLock held
void AudioPolicyService::AudioCommandThread::insertCommand_l(AudioCommand *command, int delayMs)
{
......
// acquire wake lock to make sure delayed commands are processed
if (mName != "" && mAudioCommands.isEmpty()) {
acquire_wake_lock(PARTIAL_WAKE_LOCK, mName.string());
}
// check same pending commands with later time stamps and eliminate them
for (i = mAudioCommands.size()-1; i >= 0; i--) {
AudioCommand *command2 = mAudioCommands[i];
......
switch (command->mCommand) {
case SET_PARAMETERS: {
ParametersData *data = (ParametersData *)command->mParam;
ParametersData *data2 = (ParametersData *)command2->mParam;
if (data->mIO != data2->mIO) break;
LOGV("Comparing parameter command %s to new command %s",
data2->mKeyValuePairs.string(), data->mKeyValuePairs.string());
AudioParameter param = AudioParameter(data->mKeyValuePairs);
AudioParameter param2 = AudioParameter(data2->mKeyValuePairs);
for (size_t j = 0; j < param.size(); j++) {
String8 key;
String8 value;
param.getAt(j, key, value);
for (size_t k = 0; k < param2.size(); k++) {
String8 key2;
String8 value2;
param2.getAt(k, key2, value2);
if (key2 == key) {
param2.remove(key2);
LOGV("Filtering out parameter %s", key2.string());
break;
}
}
}
// if all keys have been filtered out, remove the command.
// otherwise, update the key value pairs
if (param2.size() == 0) {
removedCommands.add(command2);
} else {
data2->mKeyValuePairs = param2.toString();
}
} break;
......
}
The insert function AudioCommandThread::insertCommand_l() and the processing function AudioCommandThread::threadLoop() are connected through the mAudioCommands list.
bool AudioPolicyService::AudioCommandThread::threadLoop()
{
nsecs_t waitTime = INT64_MAX;
mLock.lock();
while (!exitPending())
{
while(!mAudioCommands.isEmpty()) {
nsecs_t curTime = systemTime();
// commands are sorted by increasing time stamp: execute them from index 0 and up
if (mAudioCommands[0]->mTime <= curTime) {
AudioCommand *command = mAudioCommands[0];
mAudioCommands.removeAt(0);
mLastCommand = *command;
switch (command->mCommand) {
......
case SET_PARAMETERS: {
ParametersData *data = (ParametersData *)command->mParam;
LOGV("AudioCommandThread() processing set parameters string %s, io %d",
data->mKeyValuePairs.string(), data->mIO);
command->mStatus = AudioSystem::setParameters(data->mIO, data->mKeyValuePairs);
if (command->mWaitStatus) {
command->mCond.signal();
mWaitWorkCV.wait(mLock);
}
delete data;
}break;
......
}
The thread's processing function does not set the parameters itself; it delegates the actual work to AudioSystem::setParameters():
status_t AudioSystem::setParameters(audio_io_handle_t ioHandle, const String8& keyValuePairs) {
const sp<IAudioFlinger>& af = AudioSystem::get_audio_flinger();
if (af == 0) return PERMISSION_DENIED;
return af->setParameters(ioHandle, keyValuePairs);
}
And we are back in AudioFlinger!
status_t AudioFlinger::setParameters(int ioHandle, const String8& keyValuePairs)
{
......
// check calling permissions
if (!settingsAllowed()) {
return PERMISSION_DENIED;
}
// ioHandle == 0 means the parameters are global to the audio hardware interface
if (ioHandle == 0) { // ioHandle is the first argument, mHardwareOutput, passed when
// AudioPolicyManagerBase::setOutputDevice() called mpClientInterface->setParameters();
// it has more than one possible value, and checking the log confirms it is 0 here.
// The global input/output audio devices will be changed.
AutoMutex lock(mHardwareLock);
mHardwareStatus = AUDIO_SET_PARAMETER;
status_t final_result = NO_ERROR;
for (size_t i = 0; i < mAudioHwDevs.size(); i++) { // re-set the parameters of every audio device
audio_hw_device_t *dev = mAudioHwDevs[i];
result = dev->set_parameters(dev, keyValuePairs.string());
final_result = result ?: final_result;
}
......
}
From the earlier analysis of the AudioFlinger and AudioPolicyService startup flows, mAudioHwDevs holds the devices defined by the audio module implemented in audio_hw_hal.cpp, so dev->set_parameters() is adev_set_parameters():
static int adev_set_parameters(struct audio_hw_device *dev, const char *kvpairs)
{
struct legacy_audio_device *ladev = to_ladev(dev);
return ladev->hwif->setParameters(String8(kvpairs));
}
Likewise, from the AudioFlinger analysis, ladev->hwif points to an object of the ALSA audio class AudioHardwareALSA, so ladev->hwif->setParameters() is AudioHardwareALSA's setParameters(). It is declared in the header hardware/alsa_sound/AudioHardwareALSA.h.
class ALSAStreamOps
{
public:
ALSAStreamOps(AudioHardwareALSA *parent, alsa_handle_t *handle);
virtual ~ALSAStreamOps();
status_t set(int *format, uint32_t *channels, uint32_t *rate);
status_t setParameters(const String8& keyValuePairs);
......
}
This setParameters() is implemented in hardware/alsa_sound/ALSAStreamOps.cpp, which provides the ALSA stream operation interfaces.
status_t ALSAStreamOps::setParameters(const String8& keyValuePairs)
{
AudioParameter param = AudioParameter(keyValuePairs);
String8 key = String8(AudioParameter::keyRouting);
status_t status = NO_ERROR;
int device;
LOGV("setParameters() %s", keyValuePairs.string());
if (param.getInt(key, device) == NO_ERROR) {
AutoMutex lock(mLock);
mParent->mALSADevice->route(mHandle, (uint32_t)device, mParent->mode());
param.remove(key);
}
if (param.size()) {
status = BAD_VALUE;
}
return status;
}
The route() function should look familiar.
mParent is the AudioHardwareALSA pointer declared in AudioHardwareALSA.h, initialized when an ALSAStreamOps instance is created. ALSAStreamOps is never instantiated directly, but it is inherited by both AudioStreamOutALSA and AudioStreamInALSA, so an ALSAStreamOps object is constructed whenever one of those subclasses is. AudioStreamOutALSA, one of those subclasses, is created when the AudioPolicyManagerBase object is created: the open-output-channel operation propagates all the way down to AudioHardwareALSA's channel-opening code, where the object is constructed. So at this point mParent points to the AudioHardwareALSA object and is not null.
AudioHardwareALSA * mParent;
mALSADevice was likewise given a struct alsa_device_t value when the AudioHardwareALSA object was created:
struct alsa_device_t {
hw_device_t common;
status_t (*init)(alsa_device_t *, ALSAHandleList &);
status_t (*open)(alsa_handle_t *, uint32_t, int);
status_t (*close)(alsa_handle_t *);
status_t (*route)(alsa_handle_t *, uint32_t, int);
};
These ALSA function pointers already point to the concrete implementations (alsa_default.cpp), so the call lands in s_route() in the HAL-layer ALSA module (alsa_default.cpp):
static status_t s_route(alsa_handle_t *handle, uint32_t devices, int mode)
{
LOGD("route called for devices %08x in mode %d...", devices, mode);
//@eric:20110316
//When RingCall come,AudioHardWareALSA Set RINGMODE and will call this func.
//FIXME:I think Our Audio Device only has one handle, so we can not reopen it
if (handle->handle && handle->curDev == devices /*&& handle->curMode == mode*/) return NO_ERROR;
return s_open(handle, devices, mode);
}
which in turn calls s_open(). s_open() was analyzed earlier, so it is not repeated here.
3. A brief look at ALSA-LIB and how to write the asound.conf configuration file.
ALSA (Advanced Linux Sound Architecture) is a large, complex, and powerful audio system, far more capable than the earlier OSS (Open Sound System). Because the ALSA driver is so sprawling, using it directly is inconvenient for developers, so ALSA-LIB, an API layer over the driver, came into being: we talk to the ALSA driver indirectly through the ALSA-LIB API.
For a much fuller introduction to ALSA, see its official site (http://www.alsa-project.org/main/index.php/Main_Page).
As noted when analyzing the ALSA module (alsa_default.cpp), s_open() calls the ALSA-LIB API snd_pcm_open(), and it is this function that actually opens the audio channel:
snd_pcm_open(&handle->handle, devName, direction(handle), SND_PCM_ASYNC);
snd_pcm_open() is defined in external/alsa-lib/src/pcm/pcm.c.
int snd_pcm_open(snd_pcm_t **pcmp, const char *name,
snd_pcm_stream_t stream, int mode)
{
int err;
assert(pcmp && name);
err = snd_config_update();
if (err < 0)
return err;
return snd_pcm_open_noupdate(pcmp, snd_config, name, stream, mode, 0);
}
(1) The configuration-update function snd_config_update().
/**
* \brief Updates #snd_config by rereading the global configuration files (if needed).
* \return A non-negative value if successful, otherwise a negative error code.
* \retval 0 No action is needed.
* \retval 1 The configuration tree has been rebuilt.
*
* The global configuration files are specified in the environment variable
* \c ALSA_CONFIG_PATH. If this is not set, the default value is
* "/usr/share/alsa/alsa.conf".
*
* \warning If the configuration tree is reread, all string pointers and
* configuration node handles previously obtained from this tree become invalid.
*/
int snd_config_update(void)
{
int err;
#ifdef HAVE_LIBPTHREAD
pthread_mutex_lock(&snd_config_update_mutex);
#endif
err = snd_config_update_r(&snd_config, &snd_config_global_update, NULL);
#ifdef HAVE_LIBPTHREAD
pthread_mutex_unlock(&snd_config_update_mutex);
#endif
return err;
}
which calls on to snd_config_update_r():
/**
* \brief Updates a configuration tree by rereading the configuration files (if needed).
* \param _top Address of the handle to the top level node.
* \param _update Address of a pointer to private update information.
* \param cfgs A list of configuration file names, delimited with ':'.
* If \p cfgs is set to \c NULL, the default global configuration
* file is used ("/usr/share/alsa/alsa.conf").
* \return A non-negative value if successful, otherwise a negative error code.
* \retval 0 No action is needed.
* \retval 1 The configuration tree has been rebuilt.
*
* The global configuration files are specified in the environment variable
* \c ALSA_CONFIG_PATH.
*
* \warning If the configuration tree is reread, all string pointers and
* configuration node handles previously obtained from this tree become invalid.
*/
int snd_config_update_r(snd_config_t **_top, snd_config_update_t **_update, const char *cfgs) // cfgs = NULL
{
......
configs = cfgs;
if (!configs) {
configs = getenv(ALSA_CONFIG_PATH_VAR); // ???
if (!configs || !*configs) {
configs = ALSA_CONFIG_PATH_DEFAULT; // the macro ALSA_CONFIG_PATH_DEFAULT
// is /usr/share/alsa/alsa.conf
.......
}
So the configuration file /usr/share/alsa/alsa.conf is read; in the source tree this is external/alsa-lib/src/conf/alsa.conf. While alsa.conf is parsed, the file it includes, /etc/asound.conf, is parsed as well. According to the ALSA project site, asound.conf is the global configuration file.
There is plenty of material about asound.conf online; the official page is http://www.alsa-project.org/alsa-doc/alsa-lib/pcm_plugins.html.
The Android sources also contain ready-made examples, e.g. Samsung's audio configuration file device/samsung/crespo/asound.conf.
A pcm node is precisely where the hardware parameters of a concrete audio device are configured. The configurable parameters are:
pcm.name {
type hw # Kernel PCM
card INT/STR # Card name (string) or number (integer)
[device INT] # Device number (default 0)
[subdevice INT] # Subdevice number (default -1: first available)
[sync_ptr_ioctl BOOL] # Use SYNC_PTR ioctl rather than the direct mmap access for control structures
[nonblock BOOL] # Force non-blocking open mode
[format STR] # Restrict only to the given format
[channels INT] # Restrict only to the given channels
[rate INT] # Restrict only to the given rate
}
One example:
pcm.AndroidCapture_Usb-audio_normal { // the PCM node for the capture device
type hooks
slave.pcm "hw:1,0" // 1: sound card 1 (card 0 is the built-in card); 0: device number 0 on card 1
}
In AndroidCapture_Usb-audio_normal, the prefix "AndroidCapture" and the extension "_normal" are fixed; the suffix "_Usb-audio" is what we configure in alsa_default.cpp:
/* The following table(s) need to match in order of the route bits
*/
static const device_suffix_t deviceSuffix[] = {
{android_audio_legacy::AudioSystem::DEVICE_OUT_EARPIECE, "_Earpiece"},
{android_audio_legacy::AudioSystem::DEVICE_OUT_SPEAKER, "_Speaker"},
{android_audio_legacy::AudioSystem::DEVICE_OUT_BLUETOOTH_SCO, "_Bluetooth"},
{android_audio_legacy::AudioSystem::DEVICE_OUT_WIRED_HEADSET, "_Headset"},
{android_audio_legacy::AudioSystem::DEVICE_OUT_BLUETOOTH_A2DP, "_Bluetooth-A2DP"},
{android_audio_legacy::AudioSystem::DEVICE_OUT_USB_AUDIO, "_Usb-audio"},
{android_audio_legacy::AudioSystem::DEVICE_IN_USB_AUDIO, "_Usb-audio"},
};
Do not specify the channel count or sample rate of the USB AUDIO device in this configuration file, because different USB AUDIO devices have different channel counts and sample rates. When the input device is opened for recording, these two hardware parameters must be re-set from the USB AUDIO device actually in use: if channels does not match the real device, the device fails to open; if rate does not match, the sound is distorted. For the output channel, the default values work fine.
4. Re-reading the hardware parameters of the USB AUDIO device.
As mentioned above, the channel count and sample rate can be re-set in s_open() in alsa_default.cpp, before setHardwareParams() is called to set the hardware parameters.
4.1 There are three ways to obtain the hardware parameters of the USB AUDIO device actually in use.
First method: write your own string-parsing routine and read them from /proc/asound/card1/stream0 (only the capture section needs to be read).
$ cat /proc/asound/card1/stream0
SAGE Technology SAGE AirMouse at usb-0000:00:1d.3-2, full speed : USB Audio
Playback:
Status: Stop
Interface 2
Altset 1
Format: S16_LE
Channels: 1
Endpoint: 6 OUT (NONE)
Rates: 16000
Capture:
Status: Stop
Interface 1
Altset 1
Format: S16_LE
Channels: 1 // this channel count differs from the default of 2
Endpoint: 5 IN (NONE)
Rates: 16000 // this sample rate differs from the default of 48000
Second method: write your own code to read them from the kernel USB driver (this is the method our project used). It mainly involves the following files:
sound/usb/card.c
drivers/hid/hid-input.c
Third method: call the ALSA-LIB API. Because it directly uses existing system interfaces, it is a reliable approach. It mainly involves the following files:
4.2 Restoring the input channel's channel count and sample rate to the defaults.
When opening the audio device via the specified PCM node fails, the system falls back to opening the default audio channel (alsa_default.cpp):
for (;;) {
// The PCM stream is opened in blocking mode, per ALSA defaults. The
// AudioFlinger seems to assume blocking mode too, so asynchronous mode
// should not be used.
err = snd_pcm_open(&handle->handle, devName, direction(handle),
.......
}
if (err < 0) {
// here, restore the input channel's channel count and sample rate to the defaults
// None of the Android defined audio devices exist. Open a generic one.
devName = "default";
err = snd_pcm_open(&handle->handle, devName, direction(handle), 0);
}
PS: only call setDeviceConnectionState() to open the output audio channel; do not open the input channel yourself, because voice-chat applications (such as Skype) open it automatically. We only need to force the USB MIC as the input channel when the USB AUDIO output channel is opened; that is enough to enable voice chat through the USB MIC.
That concludes this post.