Understanding the Audio System, Part 2: audioserver & AudioPolicyService


Part 2: audioserver & AudioPolicyService

We know that the two core pieces of the Audio system are AudioPolicy and AudioFlinger.

AudioPolicy plays the role of the strategist: it decides the policies and parameters used when playing audio.
AudioFlinger plays the role of the general: it executes according to the strategist's plans.

+ Summary of the AudioPolicy startup flow

  1. Starting AudioPolicy and AudioFlinger

    main_audioserver.cpp is compiled into the executable audioserver.

    Its main thread mainly monitors the state of the child process,
    while the child process calls AudioFlinger::instantiate() and AudioPolicyService::instantiate()
    to initialize AudioFlinger and AudioPolicyService respectively.

    It also initializes the voice-recognition service, SoundTriggerHwService::instantiate(),

    and the VR audio module, VRAudioServiceNative::instantiate(); these are not our focus for now and will be analyzed later.

  2. AudioPolicyService::instantiate()

    This first registers a service named media.audio_policy. During service registration,
    AudioPolicyService::onFirstRef() is invoked first; it is the core of the AudioPolicyService startup code.

  3. AudioPolicyService::onFirstRef()

    Its main work is:
    (1) Create three AudioCommandThreads, named "ApmTone", "ApmAudio" and "ApmOutput".
    (2) Instantiate the AudioPolicyClient object.
    (3) Initialize the AudioPolicyManager, passing it the AudioPolicyClient object.
    (4) Initialize AudioPolicyEffects (the audio effects).

  4. During AudioPolicyManager initialization

    (1) Parse the audio configuration files (audio_policy.xml / audio_policy.conf),
    extracting the output devices the current system supports (mAvailableOutputDevices),
    the input devices (mAvailableInputDevices) and the default output device (mDefaultOutputDevice).

    (2) If no configuration file can be loaded, call config.setDefault() to initialize mHwModules,

    configuring a module named "primary" with AUDIO_DEVICE_OUT_SPEAKER as the default output device and AUDIO_DEVICE_IN_BUILTIN_MIC as the default input device.

    (3) Load each module by its getName(), call loadHwModule_l to initialize the HwModule, and call openDevice to initialize the HAL-side audio configuration;

    the HAL returns the audio operation methods to the upper layer and initializes the input stream list streams_input_cfg_list and the output stream list streams_output_cfg_list.

    (4) Initialize the default volume level parameters, including the volume levels for voice calls, system sounds, ringtones, alarms, notifications, Bluetooth, the dial pad, etc.

    (5) Allocate a thread to every input and output device, together with the corresponding input and output streams,

    calling different functions depending on the audio type (Playback, OFFLOAD, DIRECT, and so on).

    (6) Finally, all of this information is stored in HwModule[], and all output devices are stored in mDeviceForStrategy[].

    At this point, every input and output device has its corresponding stream and its own thread.

  5. AudioPolicyEffects initialization

    Parse the audio_effects.conf file to obtain and load the effect libraries supported by the system.
    Initialize each effect's parameters and bind each effect to the corresponding input and output streams,
    so that when the upper layer initializes and uses an effect, the corresponding threadLoop calls the process_l effect-processing function.
    Create an AudioFlinger client and bind the effects to that AudioFlinger client.

Next, let's walk through the code in detail:

1. AudioPolicyService startup flow

1.1 main_audioserver.cpp

During system initialization, init parses the rc files and starts the corresponding services based on them.

# @ \frameworks\av\media\audioserver\audioserver.rc
service audioserver /system/bin/audioserver    # define a service named audioserver
    class main                                 # set the class name to main
    user audioserver                           # owner is audioserver
    # media gid needed for /dev/fm (radio) and for /data/misc/media (tee)
    group audio camera drmrpc inet media mediadrm net_bt net_bt_admin net_bw_acct oem_2901   # groups
    ioprio rt 4                                # io scheduling priority 4
    writepid /dev/cpuset/foreground/tasks /dev/stune/foreground/tasks
    onrestart restart audio-hal-2-0            # when this process restarts, run "restart audio-hal-2-0"

As we can see, audioserver.rc starts the /system/bin/audioserver executable at boot, as a service named audioserver.

The code for the audioserver executable lives in main_audioserver.cpp, as follows:

// @ \frameworks\av\media\audioserver\main_audioserver.cpp
#define LOG_TAG "audioserver"

#include <fcntl.h>
#include <sys/prctl.h>
#include <sys/wait.h>
#include <cutils/properties.h>

#include <binder/IPCThreadState.h>   // binder IPC
#include <binder/ProcessState.h>     // process management, fork()
#include <binder/IServiceManager.h>  // class IServiceManager : public IInterface
                                     // provides getService, checkService, addService, ...
// from LOCAL_C_INCLUDES
#include "AudioFlinger.h"            // class android::AudioFlinger
#include "AudioPolicyService.h"      // class android::AudioPolicyService
#include "AAudioService.h"           // class android::AAudioService
#include "SoundTriggerHwService.h"   // sound trigger (voice recognition) module
#ifdef VRAUDIOSERVICE_ENABLE
#include "VRAudioService.h"          // VR audio module
#endif

using namespace android;

int main(int argc __unused, char **argv)
{
    // If a SIGPIPE arrives during socket communication it would kill the process.
    // To avoid that, catch or ignore it; setting SIG_IGN keeps the server running.
    signal(SIGPIPE, SIG_IGN);

    // (declared earlier in the full AOSP file:)
    bool doLog = (bool) property_get_bool("ro.test_harness", 0);
    pid_t childPid;

    // create the child process
    if (doLog && (childPid = fork()) != 0) {
        // in the parent process
        // (1) Instantiate ProcessState: open the /dev/binder driver and keep the
        //     binder device fd for later binder communication
        sp<ProcessState> proc(ProcessState::self());
        // (2) create a thread pool in ProcessState
        ProcessState::self()->startThreadPool();
        // (3) join the current thread to the pool; the parent's endless loop keeps running
        IPCThreadState::self()->joinThreadPool();
        for (;;) {
            siginfo_t info;
            // (4) wait in the parent for the state of child process childPid.
            //     Monitored child events: WEXITED (exited), WSTOPPED (stopped),
            //     WCONTINUED (continued after a stop)
            int ret = waitid(P_PID, childPid, &info, WEXITED | WSTOPPED | WCONTINUED);
            // (5) if the parent catches an EINTR error:
            //     the parent's waitid slow system call was interrupted by the system,
            //     so the call returns an error with errno set to EINTR
            //     ("Interrupted system call"). This has no effect on the parent, so continue.
            if (ret == EINTR) {
                continue;
            }
            // (6) on any other unknown error, exit the audioserver process
            if (ret < 0) {
                break;
            }
            // (7) decode the captured child process state
            char buffer[32];
            const char *code;
            switch (info.si_code) {
            case CLD_EXITED:    code = "CLD_EXITED";    break; // child terminated
            case CLD_KILLED:    code = "CLD_KILLED";    break; // abnormal termination (no core)
            case CLD_DUMPED:    code = "CLD_DUMPED";    break; // abnormal termination (core dumped)
            case CLD_STOPPED:   code = "CLD_STOPPED";   break; // child stopped
            case CLD_TRAPPED:   code = "CLD_TRAPPED";   break; // child is being traced by a debugger
            case CLD_CONTINUED: code = "CLD_CONTINUED"; break; // stopped child has continued
            default:
                snprintf(buffer, sizeof(buffer), "unknown (%d)", info.si_code);
                code = buffer;
                break;
            }
            // (8) get the child's resource usage and log it
            struct rusage usage;
            getrusage(RUSAGE_CHILDREN, &usage);
            ALOG(LOG_ERROR, "media.log", "pid %d status %d code %s user %ld.%03lds sys %ld.%03lds",
                    info.si_pid, info.si_status, code,
                    usage.ru_utime.tv_sec, usage.ru_utime.tv_usec / 1000,
                    usage.ru_stime.tv_sec, usage.ru_stime.tv_usec / 1000);
            // (9) get the binder IServiceManager object.
            //     The "ServiceManager process" is a daemon; defaultServiceManager() returns
            //     an instance of the C++ IServiceManager class.
            sp<IServiceManager> sm = defaultServiceManager();
            // (10) get the service named "media.log" and dump it
            sp<IBinder> binder = sm->getService(String16("media.log"));
            if (binder != 0) {
                Vector<String16> args;
                binder->dump(-1, args);
            }
        }
    } else {
        // all other services: in the child process, start the remaining services
        // (1) if the parent's media.log dies, kill the child;
        //     but if the child dies first, don't kill the parent
        if (doLog) {
            prctl(PR_SET_PDEATHSIG, SIGKILL); // if parent media.log dies before me, kill me also
            setpgid(0, 0);                    // but if I die first, don't kill my parent
        }
        // (2) Instantiate ProcessState: open /dev/binder and keep the fd for binder IPC
        sp<ProcessState> proc(ProcessState::self());
        // (3) get the binder IServiceManager object
        sp<IServiceManager> sm = defaultServiceManager();
        ALOGI("ServiceManager: %p", sm.get());
        // (4) start AudioFlinger
        AudioFlinger::instantiate();
        // (5) start AudioPolicyService
        AudioPolicyService::instantiate();
        // (6) start the sound trigger (voice recognition) module
        SoundTriggerHwService::instantiate();
#ifdef VRAUDIOSERVICE_ENABLE
        // (7) if VR is supported, start the VR audio module
        VRAudioServiceNative::instantiate();
#endif
        // (8) create a thread pool and join the current thread to it
        ProcessState::self()->startThreadPool();
        IPCThreadState::self()->joinThreadPool();
    }
}

1.1.1 In the parent process

  1. Instantiate ProcessState: open the /dev/binder driver and save the binder device's file descriptor for later binder communication.

  2. Create a thread pool in ProcessState.

  3. Join the current thread to the pool, which means the parent's endless loop keeps running.

  4. Wait in the parent for the state of child process childPid (a standalone sketch of this monitoring loop follows the list).

  5. If the parent catches an EINTR error:

    the parent's waitid slow system call was interrupted by the system,
    so the call returns an error with errno set to EINTR ("Interrupted system call").
    This situation has no effect on the parent, so it simply continues.

  6. On any other unknown error, exit the audioserver process:

    if (ret < 0) break;

  7. Decode the captured child process state.

  8. Get the child's resource usage and log it.

  9. Get the binder IServiceManager object.

    The "ServiceManager process" is a daemon; defaultServiceManager() returns an instance of the C++ IServiceManager class.

  10. Get the service named "media.log".
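
To see the parent's monitoring pattern in isolation, here is a minimal standalone sketch (plain Linux C++, no Android dependencies; the file name and printed text are mine) that forks a child and decodes its state changes with waitid(), just like the loop above:

// waitid_demo.cpp : compile with  g++ -o waitid_demo waitid_demo.cpp
#include <cerrno>
#include <csignal>
#include <cstdio>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    pid_t childPid = fork();
    if (childPid == 0) {            // child: stand-in for the real worker process
        sleep(1);
        _exit(7);                   // terminate with a visible status code
    }
    for (;;) {                      // parent: the monitoring loop
        siginfo_t info;
        int ret = waitid(P_PID, childPid, &info, WEXITED | WSTOPPED | WCONTINUED);
        if (ret < 0) {
            if (errno == EINTR) continue;   // interrupted slow syscall: just retry
            break;                          // real error: give up
        }
        const char *code = "unknown";
        switch (info.si_code) {
        case CLD_EXITED:    code = "CLD_EXITED";    break;
        case CLD_KILLED:    code = "CLD_KILLED";    break;
        case CLD_DUMPED:    code = "CLD_DUMPED";    break;
        case CLD_STOPPED:   code = "CLD_STOPPED";   break;
        case CLD_TRAPPED:   code = "CLD_TRAPPED";   break;
        case CLD_CONTINUED: code = "CLD_CONTINUED"; break;
        }
        printf("pid %d status %d code %s\n", info.si_pid, info.si_status, code);
        if (info.si_code == CLD_EXITED || info.si_code == CLD_KILLED ||
                info.si_code == CLD_DUMPED) {
            break;                  // the child is gone; stop monitoring
        }
    }
    return 0;
}

Running it prints something like "pid 12345 status 7 code CLD_EXITED", which is exactly the information audioserver's parent logs to media.log.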

1.1.2 In the child process

  1. If the parent's media.log dies, kill the child; but if the child dies first, don't kill the parent.
  2. Instantiate ProcessState: open the /dev/binder driver and save the binder device's fd for later binder communication.
  3. Get the binder IServiceManager object.
  4. Start AudioFlinger.
  5. Start AudioPolicyService.
  6. Start the sound trigger (voice recognition) module.
  7. If VR is supported, start the VR audio module.
  8. Create a thread pool for binder communication and join the current thread to it.

As you can see, AudioFlinger and AudioPolicyService actually live in the same process,

and AudioPolicyService is started via AudioPolicyService::instantiate().

1.2 AudioPolicyService::instantiate()

// @ \frameworks\native\include\binder\BinderService.h
template<typename SERVICE>
class BinderService
{
public:
    static status_t publish(bool allowIsolated = false) {
        sp<IServiceManager> sm(defaultServiceManager());
        // add a service
        return sm->addService(
                String16(SERVICE::getServiceName()),
                new SERVICE(), allowIsolated);
    }

    static void instantiate() { publish(); }
};

So calling AudioPolicyService::instantiate() is really a call to addService().

The addService() path does three things:

1.2.1 getServiceName(): get the name of the service to create

It registers a service named "media.audio_policy":

// @ \frameworks\av\services\audiopolicy\service\AudioPolicyService.h
// for BinderService
static const char *getServiceName() ANDROID_API { return "media.audio_policy"; }

1.2.2 new SERVICE()

This calls the constructor, which does some initialization:

// @ \frameworks\av\services\audiopolicy\service\AudioPolicyService.cpp
AudioPolicyService::AudioPolicyService()
    : BnAudioPolicyService(),
      mpAudioPolicyDev(NULL), mpAudioPolicy(NULL),
      mAudioPolicyManager(NULL), mAudioPolicyClient(NULL),
      mPhoneState(AUDIO_MODE_INVALID)
{
}

1.2.3 Calling AudioPolicyService::onFirstRef()

Because addService() wraps the new AudioPolicyService in an sp<> strong-reference pointer,
the first time the AudioPolicyService object is referenced its AudioPolicyService::onFirstRef() is called:

// @ \frameworks\av\services\audiopolicy\service\AudioPolicyService.cpp
void AudioPolicyService::onFirstRef()
{
    {
        // 1. create the AudioCommandThread threads
        // start tone playback thread: plays tones ("tone" as in musical tone)
        mTonePlaybackThread = new AudioCommandThread(String8("ApmTone"), this);
        // start audio commands thread: executes audio commands
        mAudioCommandThread = new AudioCommandThread(String8("ApmAudio"), this);
        // start output activity command thread: executes audio output commands
        mOutputCommandThread = new AudioCommandThread(String8("ApmOutput"), this);

        // 2. instantiate the AudioPolicyClient object
        mAudioPolicyClient = new AudioPolicyClient(this);
        // 3. instantiate the AudioPolicyManager object
        mAudioPolicyManager = createAudioPolicyManager(mAudioPolicyClient);
        // ---> return new AudioPolicyManager(clientInterface);
    }
    // 4. initialize the audio effects
    // load audio processing modules
    sp<AudioPolicyEffects> audioPolicyEffects = new AudioPolicyEffects();
    {
        mAudioPolicyEffects = audioPolicyEffects;
    }
}
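
The onFirstRef() hook itself is inherited from RefBase: it fires exactly once, when the first strong pointer takes hold of the object, which here happens inside addService(). A toy sketch of that mechanism (plain C++ mimicking the RefBase behavior, not the real Android classes):

#include <cstdio>

// Toy stand-in for android::RefBase: fires onFirstRef() when the first
// strong reference is taken, which is exactly the moment AudioPolicyService
// runs its startup code.
class ToyRefBase {
public:
    virtual ~ToyRefBase() = default;
    void incStrong() {
        if (mStrong++ == 0) onFirstRef();   // only on the very first reference
    }
protected:
    virtual void onFirstRef() {}
private:
    int mStrong = 0;
};

class ToyPolicyService : public ToyRefBase {
protected:
    void onFirstRef() override { printf("onFirstRef: startup work runs here\n"); }
};

int main() {
    ToyPolicyService service;
    service.incStrong();   // first strong ref -> onFirstRef() fires
    service.incStrong();   // later refs do nothing special
    return 0;
}

This is why merely constructing AudioPolicyService with new is not enough to start it: the startup work only runs once the object is wrapped in a strong pointer.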
1.2.3.1 The main line: createAudioPolicyManager(mAudioPolicyClient)

We can see that the main line of audio initialization runs through the AudioPolicyManager constructor.

Its main work is:

  1. Parse the XML / CONF configuration file.
    The results are stored in mHwModules, mAvailableOutputDevices and mAvailableInputDevices;
    audio is initialized from the parsed information, with mHwModules the key structure.
  2. Parse the volume levels in audio_policy_volumes.xml and default_volume_tables.xml and initialize the system with them.
  3. Obtain a class EngineInstance object.
  4. Obtain the class ManagerInterfaceImpl : public AudioPolicyManagerInterface interface.
  5. Store the AudioPolicyManager object in mApmObserver.
  6. Check whether audio has a primary output (PrimaryOutput).
  7. Bind the available output streams, obtain each stream's operation functions, and allocate a separate thread to each output stream.
  8. Bind the available input streams, obtain each stream's operation functions, and allocate a separate thread to each input stream.
  9. Check that every input and output device is bound to its input/output streams, and check that the default output device is reachable.
  10. Update the device list, saving all device routes in the mDeviceForStrategy[] array.

At this point all of the input and output streams have been created.

return new AudioPolicyManager(clientInterface);

// @ \frameworks\av\services\audiopolicy\managerdefault\AudioPolicyManager.cpp
AudioPolicyManager::AudioPolicyManager(AudioPolicyClientInterface *clientInterface)
    : mLimitRingtoneVolume(false), mLastVoiceVolume(-1.0f),
      mA2dpSuspended(false),
      mAudioPortGeneration(1),
      mBeaconMuteRefCount(0),
      mBeaconPlayingRefCount(0),
      mBeaconMuted(false),
      mTtsOutputAvailable(false),
      mMasterMono(false),
      mMusicEffectOutput(AUDIO_IO_HANDLE_NONE),
      mHasComputedSoundTriggerSupportsConcurrentCapture(false)
{
    mUidCached = getuid();               // UID of the caller, i.e. who runs the program
    mpClientInterface = clientInterface; // save the AudioPolicyClient object

    // (declared in the full file:) whether the speaker path uses DRC
    bool speakerDrcEnabled = false;

    // step 1 : parse the XML / CONF configuration file
#ifdef USE_XML_AUDIO_POLICY_CONF
    // (1) with XML configuration, parse the audio_policy_configuration.xml file
    //     #define AUDIO_POLICY_XML_CONFIG_FILE_NAME "audio_policy_configuration.xml"
    // (2) create a collection of audio_stream_type_t streams to hold the
    //     stream info parsed from the xml
    mVolumeCurves = new VolumeCurvesCollection();
    // parse results land in mHwModules, mAvailableOutputDevices and mAvailableInputDevices
    AudioPolicyConfig config(mHwModules, mAvailableOutputDevices, mAvailableInputDevices,
                             mDefaultOutputDevice, speakerDrcEnabled,
                             static_cast<VolumeCurvesCollection *>(mVolumeCurves));
    // (3) walk the directories "/odm/etc", "/vendor/etc/audio", "/vendor/etc", "/system/etc";
    //     if an "audio_policy_configuration.xml" is found and parsed successfully, break out
    if (deserializeAudioPolicyXmlConfig(config) != NO_ERROR) {
#else
    // step 1. with a conf file configuration:
    // (1) create the stream collection; parse results land in mHwModules,
    //     mAvailableOutputDevices and mAvailableInputDevices
    mVolumeCurves = new StreamDescriptorCollection();
    AudioPolicyConfig config(mHwModules, mAvailableOutputDevices, mAvailableInputDevices,
                             mDefaultOutputDevice, speakerDrcEnabled);
    // (2) load and parse the conf files
    //     #define AUDIO_POLICY_CONFIG_FILE        "/system/etc/audio_policy.conf"
    //     #define AUDIO_POLICY_VENDOR_CONFIG_FILE "/vendor/etc/audio_policy.conf"
    if ((ConfigParsingUtils::loadConfig(AUDIO_POLICY_VENDOR_CONFIG_FILE, config) != NO_ERROR)
            && (ConfigParsingUtils::loadConfig(AUDIO_POLICY_CONFIG_FILE, config) != NO_ERROR)) {
#endif
        ALOGE("could not load audio policy configuration file, setting defaults");
        // initialize audio from the defaults; the key structure is mHwModules
        config.setDefault();
    }

    // step 2. parse the volume levels from audio_policy_volumes.xml and
    //         default_volume_tables.xml and initialize the system with them
    // must be done after reading the policy (since conditionned by Speaker Drc Enabling)
    mVolumeCurves->initializeVolumeCurves(speakerDrcEnabled);

    // step 3. obtain an EngineInstance object
    // Once policy config has been parsed, retrieve an instance of the engine and initialize it.
    audio_policy::EngineInstance *engineInstance = audio_policy::EngineInstance::getInstance();

    // step 4. obtain the class ManagerInterfaceImpl : public AudioPolicyManagerInterface interface,
    //         defined at @ \frameworks\av\services\audiopolicy\enginedefault\src\Engine.h
    // Retrieve the Policy Manager Interface
    mEngine = engineInstance->queryInterface<AudioPolicyManagerInterface>();
    // ---> return &mManagerInterface;

    // step 5. store this AudioPolicyManager in mApmObserver
    mEngine->setObserver(this);

    // step 6. check whether audio has a primary output (PrimaryOutput)
    status_t status = mEngine->initCheck();
    // ---> return hasPrimaryOutput() ? NO_ERROR : NO_INIT;
    // ---> return mPrimaryOutput != 0;  // @ \frameworks\av\services\audiopolicy\managerdefault\AudioPolicyManager.h
    (void) status;
    ALOG_ASSERT(status == NO_ERROR, "Policy engine not initialized(err=%d)", status);

    // step 7. start binding the available output devices
    // mAvailableOutputDevices and mAvailableInputDevices now contain all attached devices
    // open all output streams needed to access attached devices
    audio_devices_t outputDeviceTypes = mAvailableOutputDevices.types();
    audio_devices_t inputDeviceTypes = mAvailableInputDevices.types() & ~AUDIO_DEVICE_BIT_IN;
    for (size_t i = 0; i < mHwModules.size(); i++) {
        // opens library files named like audio.primary.msm8937.so:
        // audio.primary, audio.a2dp, audio.usb, audio.remote.submix, audio.stub
        // (the names passed in are primary, a2dp, usb, remote.submix, stub)
        // loadHwModule initializes the audio module
        mHwModules[i]->mHandle = mpClientInterface->loadHwModule(mHwModules[i]->getName());

        // open all output streams needed to access attached devices
        // except for direct output streams that are only opened when they are actually
        // required by an app.
        // This also validates mAvailableOutputDevices list
        for (size_t j = 0; j < mHwModules[i]->mOutputProfiles.size(); j++) {
            const sp<IOProfile> outProfile = mHwModules[i]->mOutputProfiles[j];

            if (!outProfile->hasSupportedDevices()) {
                ALOGW("Output profile contains no device on module %s", mHwModules[i]->getName());
                continue;
            }
            if ((outProfile->getFlags() & AUDIO_OUTPUT_FLAG_TTS) != 0) {
                mTtsOutputAvailable = true;
            }
            if ((outProfile->getFlags() & AUDIO_OUTPUT_FLAG_DIRECT) != 0) {
                continue;
            }
            // on first use, getSupportedDeviceForType() returns the matching output
            // device type; the supported device types are defined in audio_policy.conf
            audio_devices_t profileType = outProfile->getSupportedDevicesType();
            if ((profileType & mDefaultOutputDevice->type()) != AUDIO_DEVICE_NONE) {
                profileType = mDefaultOutputDevice->type();
            } else {
                // chose first device present in profile's SupportedDevices also part of
                // outputDeviceTypes
                profileType = outProfile->getSupportedDeviceForType(outputDeviceTypes);
            }
            // save the output device list into mProfile, together with the device's
            // channel mask, sampling rate, format, etc.
            sp<SwAudioOutputDescriptor> outputDesc = new SwAudioOutputDescriptor(outProfile,
                                                                                 mpClientInterface);
            // --------> mProfile(outProfile)
            //           AudioOutputDescriptor(outProfile, mpClientInterface)
            //           =======> clears the whole audio configuration, then
            //                    port->pickAudioProfile(mSamplingRate, mChannelMask, mFormat);
            //                    ======> format = formatToCompare;
            //                            channelMask = pickedChannelMask;
            //                            samplingRate = pickedSamplingRate;
            //                            LOGV("%s Port[nm:%s] profile rate=%d, format=%d, channels=%d",
            //                                 __FUNCTION__, mName.string(), samplingRate, channelMask, format);
            // <--------

            // get the list of output devices
            const DeviceVector &supportedDevices = outProfile->getSupportedDevices();
            const DeviceVector &devicesForType = supportedDevices.getDevicesFromType(profileType);
            String8 address = devicesForType.size() > 0 ? devicesForType.itemAt(0)->mAddress
                                                        : String8("");
            outputDesc->mDevice = profileType;
            audio_config_t config = AUDIO_CONFIG_INITIALIZER;
            config.sample_rate = outputDesc->mSamplingRate;
            config.channel_mask = outputDesc->mChannelMask;
            config.format = outputDesc->mFormat;
            audio_io_handle_t output = AUDIO_IO_HANDLE_NONE;
            // open the output stream: obtain each stream's operation functions and
            // allocate a separate thread to each output stream
            status_t status = mpClientInterface->openOutput(outProfile->getModuleHandle(),
                                                            &output,
                                                            &config,
                                                            &outputDesc->mDevice,
                                                            address,
                                                            &outputDesc->mLatency,
                                                            outputDesc->mFlags);
            outputDesc->mSamplingRate = config.sample_rate;
            outputDesc->mChannelMask = config.channel_mask;
            outputDesc->mFormat = config.format;

            for (size_t k = 0; k < supportedDevices.size(); k++) {
                ssize_t index = mAvailableOutputDevices.indexOf(supportedDevices[k]);
                // give a valid ID to an attached device once confirmed it is reachable
                if (index >= 0 && !mAvailableOutputDevices[index]->isAttached()) {
                    mAvailableOutputDevices[index]->attach(mHwModules[i]);
                }
            }
            if (mPrimaryOutput == 0 &&
                    outProfile->getFlags() & AUDIO_OUTPUT_FLAG_PRIMARY) {
                mPrimaryOutput = outputDesc;
            }
            // finally, save outputDesc under the handle output
            addOutput(output, outputDesc);
            setOutputDevice(outputDesc, outputDesc->mDevice, true, 0, NULL, address.string());
        }

        // step 8. start binding the available input devices (the input side is similar)
        // open input streams needed to access attached devices to validate
        // mAvailableInputDevices list
        for (size_t j = 0; j < mHwModules[i]->mInputProfiles.size(); j++) {
            const sp<IOProfile> inProfile = mHwModules[i]->mInputProfiles[j];
            // chose first device present in profile's SupportedDevices also part of
            // inputDeviceTypes: get the supported device list
            audio_devices_t profileType = inProfile->getSupportedDeviceForType(inputDeviceTypes);
            // create an AudioInputDescriptor object and save the input device in it
            sp<AudioInputDescriptor> inputDesc = new AudioInputDescriptor(inProfile);
            inputDesc->mDevice = profileType;

            // find the address
            DeviceVector inputDevices = mAvailableInputDevices.getDevicesFromType(profileType);
            // the inputs vector must be of size 1, but we don't want to crash here
            String8 address = inputDevices.size() > 0 ? inputDevices.itemAt(0)->mAddress
                                                      : String8("");
            ALOGV("  for input device 0x%x using address %s", profileType, address.string());
            ALOGE_IF(inputDevices.size() == 0, "Input device list is empty!");

            audio_config_t config = AUDIO_CONFIG_INITIALIZER;
            config.sample_rate = inputDesc->mSamplingRate;
            config.channel_mask = inputDesc->mChannelMask;
            config.format = inputDesc->mFormat;
            audio_io_handle_t input = AUDIO_IO_HANDLE_NONE;
            // open the input stream and obtain its operation methods
            status_t status = mpClientInterface->openInput(inProfile->getModuleHandle(),
                                                           &input,
                                                           &config,
                                                           &inputDesc->mDevice,
                                                           address,
                                                           AUDIO_SOURCE_MIC,
                                                           AUDIO_INPUT_FLAG_NONE);
            if (status == NO_ERROR) {
                const DeviceVector &supportedDevices = inProfile->getSupportedDevices();
                for (size_t k = 0; k < supportedDevices.size(); k++) {
                    ssize_t index = mAvailableInputDevices.indexOf(supportedDevices[k]);
                    // give a valid ID to an attached device once confirmed it is reachable
                    if (index >= 0) {
                        sp<DeviceDescriptor> devDesc = mAvailableInputDevices[index];
                        if (!devDesc->isAttached()) {
                            devDesc->attach(mHwModules[i]);
                            devDesc->importAudioPort(inProfile, true);
                        }
                    }
                }
                mpClientInterface->closeInput(input);
            }
        }
    }

    // step 9. check that every input and output device has been bound to its streams
    // make sure all attached devices have been allocated a unique ID
    for (size_t i = 0; i < mAvailableOutputDevices.size();) {
        if (!mAvailableOutputDevices[i]->isAttached()) {
            ALOGW("Output device %08x unreachable", mAvailableOutputDevices[i]->type());
            mAvailableOutputDevices.remove(mAvailableOutputDevices[i]);
            continue;
        }
        // The device is now validated and can be appended to the available devices of the engine
        mEngine->setDeviceConnectionState(mAvailableOutputDevices[i],
                                          AUDIO_POLICY_DEVICE_STATE_AVAILABLE);
        i++;
    }
    for (size_t i = 0; i < mAvailableInputDevices.size();) {
        if (!mAvailableInputDevices[i]->isAttached()) {
            ALOGW("Input device %08x unreachable", mAvailableInputDevices[i]->type());
            mAvailableInputDevices.remove(mAvailableInputDevices[i]);
            continue;
        }
        // The device is now validated and can be appended to the available devices of the engine
        mEngine->setDeviceConnectionState(mAvailableInputDevices[i],
                                          AUDIO_POLICY_DEVICE_STATE_AVAILABLE);
        i++;
    }

    // confirm the default output device is reachable
    // make sure default device is reachable
    if (mDefaultOutputDevice == 0 || mAvailableOutputDevices.indexOf(mDefaultOutputDevice) < 0) {
        ALOGE("Default device %08x is unreachable", mDefaultOutputDevice->type());
    }
    ALOGE_IF((mPrimaryOutput == 0), "Failed to open primary output");

    // step 10. update the device list: save all device routes in the mDeviceForStrategy[] array
    updateDevicesAndOutputs();
    // ------->
    // void AudioPolicyManager::updateDevicesAndOutputs()
    // {
    //     for (int i = 0; i < NUM_STRATEGIES; i++) {
    //         mDeviceForStrategy[i] = getDeviceForStrategy((routing_strategy)i, false /*fromCache*/);
    //     }
    //     mPreviousOutputs = mOutputs;
    // }
    // <-------
}
1.2.3.1.1 Parsing the audio_policy_configuration.xml / audio_policy.conf configuration files
  1. XML parsing: deserializeAudioPolicyXmlConfig
    This function walks the directories "/odm/etc", "/vendor/etc/audio", "/vendor/etc" and "/system/etc" in turn;
    once an "audio_policy_configuration.xml" file is found and parsed successfully, it breaks out of the loop.

// @ \frameworks\av\services\audiopolicy\managerdefault\AudioPolicyManager.cpp
#define AUDIO_POLICY_XML_CONFIG_FILE_NAME "audio_policy_configuration.xml"

#ifdef USE_XML_AUDIO_POLICY_CONF
// Treblized audio policy xml config will be located in /odm/etc or /vendor/etc.
static const char *kConfigLocationList[] =
        {"/odm/etc", "/vendor/etc/audio", "/vendor/etc", "/system/etc"};
static const int kConfigLocationListSize =
        (sizeof(kConfigLocationList) / sizeof(kConfigLocationList[0]));

static status_t deserializeAudioPolicyXmlConfig(AudioPolicyConfig &config) {
    char audioPolicyXmlConfigFile[AUDIO_POLICY_XML_CONFIG_FILE_PATH_MAX_LENGTH];
    status_t ret;
    for (int i = 0; i < kConfigLocationListSize; i++) {
        PolicySerializer serializer;
        // try "/odm/etc", "/vendor/etc/audio", "/vendor/etc", "/system/etc" in turn;
        // stop as soon as one parses successfully
        snprintf(audioPolicyXmlConfigFile, sizeof(audioPolicyXmlConfigFile),
                 "%s/%s", kConfigLocationList[i], AUDIO_POLICY_XML_CONFIG_FILE_NAME);
        ret = serializer.deserialize(audioPolicyXmlConfigFile, config);
        if (ret == NO_ERROR) {
            break;
        }
    }
    return ret;
}
#endif

The audio_policy_configuration.xml file (its full markup is not reproduced here) mainly configures the following:

  • Mix port configuration (mixPorts)

    1). audio parameters for playback of the primary output type
    2). audio parameters for playback of the deep_buffer type
    3). audio parameters for playback of the compressed_offload type, plus the parameters used when playing mp3 / aac / aac_lc files
    4). voice_tx: capture-side audio parameters during a call
    5). primary input: capture-side audio parameters
    6). voice_rx: network-side input audio parameters during a call

  • Output device port configuration (devicePorts)

    1). earpiece configuration
    2). earpiece configuration plus gain configuration
    3). wireless headset configuration
    4). wired headset configuration
    5). Bluetooth-related configuration
    6). call capture configuration
    7). call primary MIC capture configuration
    8). call secondary MIC capture configuration
    9). headset MIC capture configuration
    10). Bluetooth headset MIC capture configuration
    11). call receive-side configuration

  • Audio route configuration: in each route, sources are the ports feeding the route and sink is the port it drives

    1. Sink: earpiece "Earpiece"
      supported sources: "primary output, deep_buffer, BT SCO Headset Mic"
    2. Sink: speaker "Speaker"
      supported sources: "primary output, deep_buffer, compressed_offload, BT SCO Headset Mic, Telephony Rx"
    3. Sink: wired headset "Wired Headset"
      supported sources: "primary output, deep_buffer, compressed_offload, BT SCO Headset Mic, Telephony Rx"
    4. Sink: wired headphone "Wired Headphone"
      supported sources: "primary output, deep_buffer, compressed_offload, BT SCO Headset Mic, Telephony Rx"
    5. Sink: call uplink "Telephony Tx"
      supported sources: voice_tx
    6. Sink: primary input "primary input"
      supported sources: "Built-In Mic, Built-In Back Mic, Wired Headset Mic, BT SCO Headset Mic"
    7. Sink: call uplink "Telephony Tx"
      supported sources: "Built-In Mic, Built-In Back Mic, Wired Headset Mic, BT SCO Headset Mic"
    8. Sink: call downlink "voice_rx"
      supported sources: Telephony Rx
  • HDMI audio output configuration

  • Other audio policy files included:

    a2dp_audio_policy_configuration.xml,
    usb_audio_policy_configuration.xml,
    r_submix_audio_policy_configuration.xml

  • Volume parameter files included:

    audio_policy_volumes.xml, default_volume_tables.xml
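
Since the file's markup is not reproduced above, here is a minimal hedged sketch of the audio_policy_configuration.xml structure being described; the port names and values below are illustrative, following the common AOSP layout rather than any specific device:

<audioPolicyConfiguration version="1.0" xmlns:xi="http://www.w3.org/2001/XInclude">
    <globalConfiguration speaker_drc_enabled="true"/>
    <modules>
        <module name="primary" halVersion="2.0">
            <attachedDevices>
                <item>Speaker</item>
                <item>Built-In Mic</item>
                <item>Built-In Back Mic</item>
            </attachedDevices>
            <defaultOutputDevice>Speaker</defaultOutputDevice>
            <mixPorts>
                <mixPort name="primary output" role="source" flags="AUDIO_OUTPUT_FLAG_PRIMARY">
                    <profile name="" format="AUDIO_FORMAT_PCM_16_BIT"
                             samplingRates="44100,48000" channelMasks="AUDIO_CHANNEL_OUT_STEREO"/>
                </mixPort>
                <mixPort name="primary input" role="sink">
                    <profile name="" format="AUDIO_FORMAT_PCM_16_BIT"
                             samplingRates="8000,16000,48000" channelMasks="AUDIO_CHANNEL_IN_MONO"/>
                </mixPort>
            </mixPorts>
            <devicePorts>
                <devicePort tagName="Speaker" type="AUDIO_DEVICE_OUT_SPEAKER" role="sink"/>
                <devicePort tagName="Built-In Mic" type="AUDIO_DEVICE_IN_BUILTIN_MIC" role="source"/>
            </devicePorts>
            <routes>
                <route type="mix" sink="Speaker" sources="primary output"/>
                <route type="mix" sink="primary input" sources="Built-In Mic"/>
            </routes>
        </module>
    </modules>
    <xi:include href="audio_policy_volumes.xml"/>
    <xi:include href="default_volume_tables.xml"/>
</audioPolicyConfiguration>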



  1. CONF parsing: ConfigParsingUtils::loadConfig()

    We covered the xml configuration first, but on our MSM8937 platform
    the USE_XML_AUDIO_POLICY_CONF macro is not defined, so by default the conf file is used;
    on this platform we generally configure via the audio_policy.conf file.

    Let's first look at what an audio_policy.conf file looks like:

# Global configuration section:
# - lists input and output devices always present on the device
#   as well as the output device selected by default.
#   Devices are designated by a string that corresponds to the enum in audio.h
# - defines whether the speaker output path uses DRC
#   "TRUE" means DRC is enabled, "FALSE" or omission means DRC isn't used.
global_configuration {
  attached_output_devices AUDIO_DEVICE_OUT_EARPIECE|AUDIO_DEVICE_OUT_SPEAKER|AUDIO_DEVICE_OUT_TELEPHONY_TX
  default_output_device AUDIO_DEVICE_OUT_SPEAKER
  attached_input_devices AUDIO_DEVICE_IN_BUILTIN_MIC|AUDIO_DEVICE_IN_BACK_MIC|AUDIO_DEVICE_IN_REMOTE_SUBMIX|AUDIO_DEVICE_IN_FM_TUNER|AUDIO_DEVICE_IN_VOICE_CALL|AUDIO_DEVICE_IN_TELEPHONY_RX
  speaker_drc_enabled TRUE
}

# audio hardware module section: contains descriptors for all audio hw modules present on the
# device. Each hw module node is named after the corresponding hw module library base name.
# For instance, "primary" corresponds to audio.primary.<device>.so.
# The "primary" module is mandatory and must include at least one output with
# AUDIO_OUTPUT_FLAG_PRIMARY flag.
# Each module descriptor contains one or more output profile descriptors and zero or more
# input profile descriptors. Each profile lists all the parameters supported by a given output
# or input stream category.
# The "channel_masks", "formats", "devices" and "flags" are specified using strings corresponding
# to enums in audio.h and audio_policy.h. They are concatenated by use of "|" without space or "\n".
audio_hw_modules {
  primary {
    outputs {
      primary {
        sampling_rates 44100|48000
        channel_masks AUDIO_CHANNEL_OUT_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_OUT_EARPIECE|AUDIO_DEVICE_OUT_SPEAKER|AUDIO_DEVICE_OUT_WIRED_HEADSET|AUDIO_DEVICE_OUT_WIRED_HEADPHONE|AUDIO_DEVICE_OUT_LINE|AUDIO_DEVICE_OUT_ALL_SCO|AUDIO_DEVICE_OUT_AUX_DIGITAL|AUDIO_DEVICE_OUT_PROXY|AUDIO_DEVICE_OUT_FM
        flags AUDIO_OUTPUT_FLAG_PRIMARY|AUDIO_OUTPUT_FLAG_FAST
      }
      raw {
        sampling_rates 48000
        channel_masks AUDIO_CHANNEL_OUT_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_OUT_EARPIECE|AUDIO_DEVICE_OUT_SPEAKER|AUDIO_DEVICE_OUT_WIRED_HEADSET|AUDIO_DEVICE_OUT_WIRED_HEADPHONE|AUDIO_DEVICE_OUT_LINE|AUDIO_DEVICE_OUT_ALL_SCO|AUDIO_DEVICE_OUT_AUX_DIGITAL|AUDIO_DEVICE_OUT_PROXY
        flags AUDIO_OUTPUT_FLAG_FAST|AUDIO_OUTPUT_FLAG_RAW
      }
      deep_buffer {
        sampling_rates 44100|48000
        channel_masks AUDIO_CHANNEL_OUT_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_OUT_SPEAKER|AUDIO_DEVICE_OUT_EARPIECE|AUDIO_DEVICE_OUT_WIRED_HEADSET|AUDIO_DEVICE_OUT_WIRED_HEADPHONE|AUDIO_DEVICE_OUT_LINE|AUDIO_DEVICE_OUT_ALL_SCO|AUDIO_DEVICE_OUT_AUX_DIGITAL|AUDIO_DEVICE_OUT_PROXY
        flags AUDIO_OUTPUT_FLAG_DEEP_BUFFER
      }
      direct_pcm {
        sampling_rates 8000|11025|16000|22050|32000|44100|48000|64000|88200|96000|176400|192000
        channel_masks AUDIO_CHANNEL_OUT_MONO|AUDIO_CHANNEL_OUT_STEREO|AUDIO_CHANNEL_OUT_2POINT1|AUDIO_CHANNEL_OUT_QUAD|AUDIO_CHANNEL_OUT_PENTA|AUDIO_CHANNEL_OUT_5POINT1|AUDIO_CHANNEL_OUT_6POINT1|AUDIO_CHANNEL_OUT_7POINT1
        formats AUDIO_FORMAT_PCM_16_BIT|AUDIO_FORMAT_PCM_24_BIT_PACKED|AUDIO_FORMAT_PCM_8_24_BIT
        devices AUDIO_DEVICE_OUT_SPEAKER|AUDIO_DEVICE_OUT_EARPIECE|AUDIO_DEVICE_OUT_WIRED_HEADSET|AUDIO_DEVICE_OUT_WIRED_HEADPHONE|AUDIO_DEVICE_OUT_LINE|AUDIO_DEVICE_OUT_ALL_SCO|AUDIO_DEVICE_OUT_PROXY|AUDIO_DEVICE_OUT_AUX_DIGITAL
        flags AUDIO_OUTPUT_FLAG_DIRECT
      }
      compress_offload {
        sampling_rates 8000|11025|12000|16000|22050|24000|32000|44100|48000|64000|88200|96000|176400|192000
        channel_masks AUDIO_CHANNEL_OUT_MONO|AUDIO_CHANNEL_OUT_STEREO|AUDIO_CHANNEL_OUT_2POINT1|AUDIO_CHANNEL_OUT_QUAD|AUDIO_CHANNEL_OUT_PENTA|AUDIO_CHANNEL_OUT_5POINT1|AUDIO_CHANNEL_OUT_6POINT1|AUDIO_CHANNEL_OUT_7POINT1
        formats AUDIO_FORMAT_MP3|AUDIO_FORMAT_AC3|AUDIO_FORMAT_E_AC3|AUDIO_FORMAT_FLAC|AUDIO_FORMAT_ALAC|AUDIO_FORMAT_APE|AUDIO_FORMAT_AAC_LC|AUDIO_FORMAT_AAC_HE_V1|AUDIO_FORMAT_AAC_HE_V2|AUDIO_FORMAT_WMA|AUDIO_FORMAT_WMA_PRO|AUDIO_FORMAT_VORBIS|AUDIO_FORMAT_AAC_ADTS_LC|AUDIO_FORMAT_AAC_ADTS_HE_V1|AUDIO_FORMAT_AAC_ADTS_HE_V2
        devices AUDIO_DEVICE_OUT_SPEAKER|AUDIO_DEVICE_OUT_EARPIECE|AUDIO_DEVICE_OUT_WIRED_HEADSET|AUDIO_DEVICE_OUT_WIRED_HEADPHONE|AUDIO_DEVICE_OUT_LINE|AUDIO_DEVICE_OUT_ALL_SCO|AUDIO_DEVICE_OUT_AUX_DIGITAL|AUDIO_DEVICE_OUT_PROXY
        flags AUDIO_OUTPUT_FLAG_DIRECT|AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD|AUDIO_OUTPUT_FLAG_NON_BLOCKING
      }
      incall_music {
        sampling_rates 8000|16000|48000
        channel_masks AUDIO_CHANNEL_OUT_MONO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_OUT_SPEAKER|AUDIO_DEVICE_OUT_EARPIECE|AUDIO_DEVICE_OUT_WIRED_HEADSET|AUDIO_DEVICE_OUT_WIRED_HEADPHONE|AUDIO_DEVICE_OUT_LINE|AUDIO_DEVICE_OUT_ALL_SCO
        flags AUDIO_OUTPUT_FLAG_DIRECT|AUDIO_OUTPUT_FLAG_INCALL_MUSIC
      }
      voice_tx {
        sampling_rates 8000|16000|48000
        channel_masks AUDIO_CHANNEL_OUT_STEREO|AUDIO_CHANNEL_OUT_MONO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_OUT_TELEPHONY_TX
      }
      voip_rx {
        sampling_rates 8000|16000
        channel_masks AUDIO_CHANNEL_OUT_MONO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_OUT_SPEAKER|AUDIO_DEVICE_OUT_EARPIECE|AUDIO_DEVICE_OUT_WIRED_HEADSET|AUDIO_DEVICE_OUT_WIRED_HEADPHONE|AUDIO_DEVICE_OUT_LINE|AUDIO_DEVICE_OUT_ALL_SCO
        flags AUDIO_OUTPUT_FLAG_DIRECT|AUDIO_OUTPUT_FLAG_VOIP_RX
      }
    }
    inputs {
      primary {
        sampling_rates 8000|11025|12000|16000|22050|24000|32000|44100|48000
        channel_masks AUDIO_CHANNEL_IN_MONO|AUDIO_CHANNEL_IN_STEREO|AUDIO_CHANNEL_IN_FRONT_BACK
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_IN_WIRED_HEADSET|AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET|AUDIO_DEVICE_IN_FM_TUNER|AUDIO_DEVICE_IN_VOICE_CALL
      }
      surround_sound {
        sampling_rates 8000|11025|12000|16000|22050|24000|32000|44100|48000
        channel_masks AUDIO_CHANNEL_IN_MONO|AUDIO_CHANNEL_IN_STEREO|AUDIO_CHANNEL_IN_FRONT_BACK|AUDIO_CHANNEL_INDEX_MASK_3|AUDIO_CHANNEL_INDEX_MASK_4|AUDIO_CHANNEL_IN_5POINT1
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_IN_BUILTIN_MIC|AUDIO_DEVICE_IN_BACK_MIC
      }
      voice_rx {
        sampling_rates 8000|16000|48000
        channel_masks AUDIO_CHANNEL_IN_STEREO|AUDIO_CHANNEL_IN_MONO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_IN_TELEPHONY_RX
      }
    }
  }
  a2dp {
    outputs {
      a2dp {
        sampling_rates 44100
        channel_masks AUDIO_CHANNEL_OUT_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_OUT_ALL_A2DP
      }
    }
    inputs {
      a2dp {
        sampling_rates 44100|48000
        channel_masks AUDIO_CHANNEL_IN_MONO|AUDIO_CHANNEL_IN_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_IN_BLUETOOTH_A2DP
      }
    }
  }
  usb {
    outputs {
      usb_accessory {
        sampling_rates 44100
        channel_masks AUDIO_CHANNEL_OUT_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_OUT_USB_ACCESSORY
      }
      usb_device {
        sampling_rates dynamic
        channel_masks dynamic
        formats dynamic
        devices AUDIO_DEVICE_OUT_USB_DEVICE
      }
    }
    inputs {
      usb_device {
        sampling_rates dynamic
        channel_masks AUDIO_CHANNEL_IN_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_IN_USB_DEVICE
      }
    }
  }
  r_submix {
    outputs {
      submix {
        sampling_rates 48000
        channel_masks AUDIO_CHANNEL_OUT_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_OUT_REMOTE_SUBMIX
      }
    }
    inputs {
      submix {
        sampling_rates 48000
        channel_masks AUDIO_CHANNEL_IN_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_IN_REMOTE_SUBMIX
      }
    }
  }
}

As you can see, audio_policy.conf is much like the xml: it mainly configures the formats for each audio scenario.

The parsing in ConfigParsingUtils::loadConfig() is, put plainly, a text-matching pass over the conf file.
The strings it matches are defined as follows:

@ \frameworks\av\services\audiopolicy\common\managerdefinitions\include\audio_policy_conf.h
#define AUDIO_HARDWARE_MODULE_ID_MAX_LEN 32

#define AUDIO_POLICY_CONFIG_FILE "/system/etc/audio_policy.conf"
#define AUDIO_POLICY_VENDOR_CONFIG_FILE "/vendor/etc/audio_policy.conf"

// global configuration
#define GLOBAL_CONFIG_TAG "global_configuration"
#define ATTACHED_OUTPUT_DEVICES_TAG "attached_output_devices"
#define DEFAULT_OUTPUT_DEVICE_TAG "default_output_device"
#define ATTACHED_INPUT_DEVICES_TAG "attached_input_devices"
#define SPEAKER_DRC_ENABLED_TAG "speaker_drc_enabled"
#define AUDIO_HAL_VERSION_TAG "audio_hal_version"

// hw modules descriptions
#define AUDIO_HW_MODULE_TAG "audio_hw_modules"
#define OUTPUTS_TAG "outputs"
#define INPUTS_TAG "inputs"
#define SAMPLING_RATES_TAG "sampling_rates"
#define FORMATS_TAG "formats"
#define CHANNELS_TAG "channel_masks"
#define DEVICES_TAG "devices"
#define FLAGS_TAG "flags"
#define APM_DEVICES_TAG "devices"
#define APM_DEVICE_TYPE "type"
#define APM_DEVICE_ADDRESS "address"
#define MIXERS_TAG "mixers"
#define MIXER_TYPE "type"
#define MIXER_TYPE_MUX "mux"
#define MIXER_TYPE_MIX "mix"
#define GAINS_TAG "gains"
#define GAIN_MODE "mode"
#define GAIN_CHANNELS "channel_mask"
#define GAIN_MIN_VALUE "min_value_mB"
#define GAIN_MAX_VALUE "max_value_mB"
#define GAIN_DEFAULT_VALUE "default_value_mB"
#define GAIN_STEP_VALUE "step_value_mB"
#define GAIN_MIN_RAMP_MS "min_ramp_ms"
#define GAIN_MAX_RAMP_MS "max_ramp_ms"
#define DYNAMIC_VALUE_TAG "dynamic" // special value for "channel_masks", "sampling_rates" and
                                    // "formats" in outputs descriptors indicating that supported
                                    // values should be queried after opening the output.

The parsing source code is as follows:

// @ \frameworks\av\services\audiopolicy\common\managerdefinitions\src\ConfigParsingUtils.cpp
// parameters passed in:
// #define AUDIO_POLICY_CONFIG_FILE "/system/etc/audio_policy.conf"
// #define AUDIO_POLICY_VENDOR_CONFIG_FILE "/vendor/etc/audio_policy.conf"
//static
status_t ConfigParsingUtils::loadConfig(const char *path, AudioPolicyConfig &config)
{
    cnode *root;
    char *data;

    // (1) open the config file and build the cnode tree
    data = (char *)load_file(path, NULL);
    root = config_node("", "");
    config_load(root, data);

    // (2) parse the contents under the audio_hw_modules node
    HwModuleCollection hwModules;
    loadHwModules(root, hwModules, config);

    // (3) parse the contents under the global_configuration node
    // legacy audio_policy.conf files have one global_configuration section, attached to primary.
    loadGlobalConfig(root, config, hwModules.getModuleFromName(AUDIO_HARDWARE_MODULE_ID_PRIMARY));

    // (4) save the parsed contents
    config.setHwModules(hwModules);

    // (5) release the tree and the file buffer
    config_free(root);
    free(root);
    free(data);

    ALOGI("loadAudioPolicyConfig() loaded %s\n", path);
    return NO_ERROR;
}

The loadHwModules() function:

//static
void ConfigParsingUtils::loadHwModules(cnode *root, HwModuleCollection &hwModules,
                                       AudioPolicyConfig &config)
{
    cnode *node = config_find(root, AUDIO_HW_MODULE_TAG); // find the "audio_hw_modules" node
    if (node == NULL) {
        return;
    }
    node = node->first_child;
    while (node) {
        sp<HwModule> module = new HwModule(node->name);
        if (loadHwModule(node, module, config) == NO_ERROR) {
            hwModules.add(module);
        }
        node = node->next;
    }
}

The loadHwModule() function:

//static
status_t ConfigParsingUtils::loadHwModule(cnode *root, sp<HwModule> &module,
                                          AudioPolicyConfig &config)
{
    status_t status = NAME_NOT_FOUND;
    cnode *node = config_find(root, DEVICES_TAG);   // parse "devices"
    if (node != NULL) {
        node = node->first_child;
        DeviceVector devices;
        while (node) {
            // iterate over all child nodes
            ALOGV("loadHwModule() loading device %s", node->name);
            status_t tmpStatus = loadHwModuleDevice(node, devices);
            node = node->next;
        }
        module->setDeclaredDevices(devices);
    }
    node = config_find(root, OUTPUTS_TAG);          // parse the "outputs" node contents
    if (node != NULL) {
        node = node->first_child;
        while (node) {
            // iterate over all child nodes
            ALOGV("loadHwModule() loading output %s", node->name);
            status_t tmpStatus = loadHwModuleProfile(node, module, AUDIO_PORT_ROLE_SOURCE);
            node = node->next;
        }
    }
    node = config_find(root, INPUTS_TAG);           // parse the "inputs" node contents
    if (node != NULL) {
        node = node->first_child;
        while (node) {
            ALOGV("loadHwModule() loading input %s", node->name);
            status_t tmpStatus = loadHwModuleProfile(node, module, AUDIO_PORT_ROLE_SINK);
            if (status == NAME_NOT_FOUND || status == NO_ERROR) {
                status = tmpStatus;
            }
            node = node->next;
        }
    }
    loadModuleGlobalConfig(root, module, config);   // parse this module's global_configuration
    return status;
}

// Step 1. parse the global_configuration node
//static
void ConfigParsingUtils::loadModuleGlobalConfig(cnode *root, const sp<HwModule> &module,
                                                AudioPolicyConfig &config)
{
    cnode *node = config_find(root, GLOBAL_CONFIG_TAG); // conf step 3. find the global_configuration node
    DeviceVector declaredDevices;
    if (module != NULL) {
        declaredDevices = module->getDeclaredDevices();
    }
    node = node->first_child;
    while (node) {
        if (strcmp(ATTACHED_OUTPUT_DEVICES_TAG, node->name) == 0) {
            // conf step 4. found the "attached_output_devices" node
            DeviceVector availableOutputDevices;
            loadDevicesFromTag(node->value, availableOutputDevices, declaredDevices);
            ALOGV("loadGlobalConfig() Attached Output Devices %08x", availableOutputDevices.types());
            // save the "attached_output_devices" contents into the availableOutputDevices vector
            config.addAvailableOutputDevices(availableOutputDevices);
        } else if (strcmp(DEFAULT_OUTPUT_DEVICE_TAG, node->name) == 0) {
            // conf step 5. found the "default_output_device" node
            audio_devices_t device = AUDIO_DEVICE_NONE;
            deviceFromString(node->value, device);
            if (device != AUDIO_DEVICE_NONE) {
                sp<DeviceDescriptor> defaultOutputDevice = new DeviceDescriptor(device);
                // set the "default_output_device" contents; here the default
                // device is AUDIO_DEVICE_OUT_SPEAKER
                config.setDefaultOutputDevice(defaultOutputDevice);
                ALOGV("loadGlobalConfig() mDefaultOutputDevice %08x", defaultOutputDevice->type());
            }
        } else if (strcmp(ATTACHED_INPUT_DEVICES_TAG, node->name) == 0) {
            // conf step 6. found the "attached_input_devices" node
            DeviceVector availableInputDevices;
            loadDevicesFromTag(node->value, availableInputDevices, declaredDevices);
            ALOGV("loadGlobalConfig() Available InputDevices %08x", availableInputDevices.types());
            // save the attached_input_devices contents in availableInputDevices
            config.addAvailableInputDevices(availableInputDevices);
        } else if (strcmp(AUDIO_HAL_VERSION_TAG, node->name) == 0) {
            // conf step 7. found the "audio_hal_version" node
            uint32_t major, minor;
            sscanf((char *)node->value, "%u.%u", &major, &minor);
            module->setHalVersion(major, minor);
            ALOGV("loadGlobalConfig() mHalVersion = major %u minor %u", major, minor);
        }
        node = node->next;
    }
}

//static
void ConfigParsingUtils::loadGlobalConfig(cnode *root, AudioPolicyConfig &config,
                                          const sp<HwModule>& primaryModule)
{
    cnode *node = config_find(root, GLOBAL_CONFIG_TAG); // conf step 1. find the "global_configuration" node
    if (node == NULL) {
        return;
    }
    node = node->first_child;
    while (node) {
        if (strcmp(SPEAKER_DRC_ENABLED_TAG, node->name) == 0) {
            // conf step 2. parse the "speaker_drc_enabled" string
            bool speakerDrcEnabled;
            if (utilities::convertTo<std::string, bool>(node->value, speakerDrcEnabled)) {
                ALOGV("loadGlobalConfig() mSpeakerDrcEnabled = %d", speakerDrcEnabled);
                config.setSpeakerDrcEnabled(speakerDrcEnabled);
            }
        }
        node = node->next;
    }
    loadModuleGlobalConfig(root, primaryModule, config); // parse the "global_configuration" node contents
}
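
As a mental model for the cnode API used above (config_load(), config_find(), first_child / next), here is a small self-contained sketch, assuming only the libcutils config_utils.h interface, of how a conf section maps onto the node tree:

// Sketch: walking a conf file with the libcutils config_utils API.
// The text below uses the same "name { child value }" syntax as audio_policy.conf.
#include <cutils/config_utils.h>
#include <cstdio>
#include <cstdlib>

int main() {
    char text[] = "global_configuration {\n"
                  "  default_output_device AUDIO_DEVICE_OUT_SPEAKER\n"
                  "}\n";
    cnode *root = config_node("", "");
    config_load(root, text);    // builds the tree in place (the buffer is modified)
    cnode *global = config_find(root, "global_configuration");
    if (global != NULL) {
        for (cnode *n = global->first_child; n != NULL; n = n->next) {
            // prints: default_output_device = AUDIO_DEVICE_OUT_SPEAKER
            printf("%s = %s\n", n->name, n->value);
        }
    }
    config_free(root);
    free(root);
    return 0;
}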

The only puzzling point in the code above is that loadModuleGlobalConfig() ends up being called twice, first from ConfigParsingUtils::loadHwModule and then from ConfigParsingUtils::loadGlobalConfig.

The comment in loadConfig() hints at the reason: legacy audio_policy.conf files carry a single top-level global_configuration section that is attached to the primary module, while a module node may also carry its own global_configuration section, so both paths have to be checked.

Good, that is the end of the audio conf parsing; let's return to 1.2.3.1 and continue with the code.

1.2.3.1.2 config.setDefault(): initializing mHwModules

This initializes a module named "primary",

adds the default output device: mDeviceType = AUDIO_DEVICE_OUT_SPEAKER;
and the default input device: mDeviceType = AUDIO_DEVICE_IN_BUILTIN_MIC;

and finally adds the module to the mHwModules vector.

@ \frameworks\av\services\audiopolicy\common\managerdefinitions\include\AudioPolicyConfig.h
void setDefault(void)
{
    mDefaultOutputDevices = new DeviceDescriptor(AUDIO_DEVICE_OUT_SPEAKER);
    // ---> default output device: mDeviceType = AUDIO_DEVICE_OUT_SPEAKER;
    sp<HwModule> module;
    sp<DeviceDescriptor> defaultInputDevice = new DeviceDescriptor(AUDIO_DEVICE_IN_BUILTIN_MIC);
    // ---> default input device: mDeviceType = AUDIO_DEVICE_IN_BUILTIN_MIC;

    // add AUDIO_DEVICE_OUT_SPEAKER and AUDIO_DEVICE_IN_BUILTIN_MIC to the supported-device lists
    mAvailableOutputDevices.add(mDefaultOutputDevices);
    mAvailableInputDevices.add(defaultInputDevice);

    module = new HwModule("primary");
    // ----> initializes mName = "primary"; halVersionMajor = 0; halVersionMinor = 0

    // configure the output devices this module supports
    sp<OutputProfile> outProfile;
    outProfile = new OutputProfile(String8("primary"));
    outProfile->attach(module);
    outProfile->addAudioProfile(
            new AudioProfile(AUDIO_FORMAT_PCM_16_BIT, AUDIO_CHANNEL_OUT_STEREO, 44100));
    outProfile->addSupportedDevice(mDefaultOutputDevices);
    outProfile->setFlags(AUDIO_OUTPUT_FLAG_PRIMARY);
    module->mOutputProfiles.add(outProfile);

    // configure the input devices this module supports
    sp<InputProfile> inProfile;
    inProfile = new InputProfile(String8("primary"));
    inProfile->attach(module);
    inProfile->addAudioProfile(
            new AudioProfile(AUDIO_FORMAT_PCM_16_BIT, AUDIO_CHANNEL_IN_MONO, 8000));
    inProfile->addSupportedDevice(defaultInputDevice);
    module->mInputProfiles.add(inProfile);

    // add the module to mHwModules
    mHwModules.add(module);
}
1.2.3.1.3 Parsing the volume levels in audio_policy_volumes.xml and default_volume_tables.xml and initializing the system

Before walking the parsing, let's look at what these xml files contain.

Earlier we met two interesting files, audio_policy_volumes.xml and default_volume_tables.xml.
As the data below shows, they configure the volume levels for each real-world scenario.

  • audio_policy_volumes.xml covers the following scenarios:
    (1) voice calls: the volume levels of each output device for AUDIO_STREAM_VOICE_CALL
    (2) system sounds: the volume levels of each output device for AUDIO_STREAM_SYSTEM
    (3) ringtones: the volume levels for AUDIO_STREAM_RING
    (4) alarms: the volume levels for AUDIO_STREAM_ALARM
    (5) notifications: the volume levels for AUDIO_STREAM_NOTIFICATION
    (6) Bluetooth calls: the volume levels for AUDIO_STREAM_BLUETOOTH_SCO
    (7) dial pad tones: the volume levels for AUDIO_STREAM_DTMF
    (8) and a few other scenarios: AUDIO_STREAM_TTS, AUDIO_STREAM_ACCESSIBILITY, AUDIO_STREAM_REROUTING and AUDIO_STREAM_PATCH.
@ \frameworks\av\services\audiopolicy\config\audio_policy_volumes.xml
<volume-curve markup not reproduced here; each curve is a list of <point> entries of the form "index,millibel", e.g. 0,-4200 / 33,-2800 / 66,-1400 / 100,0 for a typical curve that rises from -42 dB to 0 dB>
  • default_volume_tables.xml mainly holds the stock Android volume curve configuration.
<again lists of "index,millibel" points per curve, e.g. 1,-5800 / 20,-4000 / 60,-1700 / 100,0>

The parsing goes as follows:

@ \frameworks\av\services\audiopolicy\common\managerdefinitions\src\StreamDescriptor.cpp
void StreamDescriptorCollection::initializeVolumeCurves(bool isSpeakerDrcEnabled)
{
    // fill the two-dimensional (stream x device category) array of curve points
    for (int i = 0; i < AUDIO_STREAM_CNT; i++) {
        for (int j = 0; j < DEVICE_CATEGORY_CNT; j++) {
            setVolumeCurvePoint(static_cast<audio_stream_type_t>(i),
                                static_cast<device_category>(j),
                                Gains::sVolumeProfiles[i][j]);
        }
    }
    // configure the system from the parsed array; this is where the
    // "speaker_drc_enabled TRUE" setting from the conf earlier is applied
    // Check availability of DRC on speaker path: if available, override some of the speaker curves
    if (isSpeakerDrcEnabled) {
        setVolumeCurvePoint(AUDIO_STREAM_SYSTEM, DEVICE_CATEGORY_SPEAKER,
                            Gains::sDefaultSystemVolumeCurveDrc);
        setVolumeCurvePoint(AUDIO_STREAM_RING, DEVICE_CATEGORY_SPEAKER,
                            Gains::sSpeakerSonificationVolumeCurveDrc);
        setVolumeCurvePoint(AUDIO_STREAM_ALARM, DEVICE_CATEGORY_SPEAKER,
                            Gains::sSpeakerSonificationVolumeCurveDrc);
        setVolumeCurvePoint(AUDIO_STREAM_NOTIFICATION, DEVICE_CATEGORY_SPEAKER,
                            Gains::sSpeakerSonificationVolumeCurveDrc);
        setVolumeCurvePoint(AUDIO_STREAM_MUSIC, DEVICE_CATEGORY_SPEAKER,
                            Gains::sSpeakerMediaVolumeCurveDrc);
        setVolumeCurvePoint(AUDIO_STREAM_ACCESSIBILITY, DEVICE_CATEGORY_SPEAKER,
                            Gains::sSpeakerMediaVolumeCurveDrc);
    }
}
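
To make the "index,millibel" pairs concrete: each curve maps a volume index position (0-100) to a gain in millibels (1 mB = 1/100 dB), the lookup interpolates linearly between points, and the result is converted from dB to a linear amplitude. A small illustrative sketch of that math (the real lookup lives in the Gains / volume-curve code; the curve values below are just an example):

#include <cmath>
#include <cstdio>
#include <initializer_list>

struct CurvePoint { int index; int mB; };  // volume-index position (0..100), gain in millibels

// Linear interpolation between the two surrounding curve points,
// which is what the volume code does conceptually; returns decibels.
static float curveToDb(const CurvePoint *curve, int nbPoints, int idx) {
    if (idx <= curve[0].index) return curve[0].mB / 100.0f;
    for (int i = 1; i < nbPoints; i++) {
        if (idx <= curve[i].index) {
            float frac = float(idx - curve[i - 1].index) /
                         float(curve[i].index - curve[i - 1].index);
            return (curve[i - 1].mB + frac * (curve[i].mB - curve[i - 1].mB)) / 100.0f;
        }
    }
    return curve[nbPoints - 1].mB / 100.0f;
}

int main() {
    // example curve: -42 dB at the bottom of the slider up to 0 dB at the top
    const CurvePoint curve[] = { {0, -4200}, {33, -2800}, {66, -1400}, {100, 0} };
    for (int idx : {0, 50, 100}) {
        float db = curveToDb(curve, 4, idx);
        printf("index %3d -> %7.2f dB -> linear amplitude %.4f\n",
               idx, db, powf(10.0f, db / 20.0f));
    }
    return 0;
}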
1.2.3.1.4 loadHwModule: initializing an audio module

The names passed in are primary, a2dp, usb, remote.submix and stub.

  1. First match by name and check whether the module has already been loaded.
  2. Call openDevice to initialize the audio information and operation methods, saved into dev.
  3. Set the hardware status to AUDIO_HW_INIT, call dev->initCheck(), then set the status to AUDIO_HW_IDLE.
  4. Configure the audio volume.

mHwModules[i]->mHandle = mpClientInterface->loadHwModule(mHwModules[i]->getName());  // "primary", as noted earlier

After a chain of calls this eventually reaches AudioFlinger::loadHwModule_l:

@ \src\frameworks\av\services\audioflinger\AudioFlinger.cpp
// loadHwModule_l() must be called with AudioFlinger::mLock held
audio_module_handle_t AudioFlinger::loadHwModule_l(const char *name)
{
    // check whether the module has already been loaded
    for (size_t i = 0; i < mAudioHwDevs.size(); i++) {
        if (strncmp(mAudioHwDevs.valueAt(i)->moduleName(), name, strlen(name)) == 0) {
            ALOGW("loadHwModule() module %s already loaded", name);
            return mAudioHwDevs.keyAt(i);
        }
    }

    // Resolve the name passed down from above (here audio.primary; AudioPolicyManager.cpp
    // walks into this function once per supported module) and initialize the audio
    // information and operation methods, saved into dev, so the driver's methods
    // can be reached directly through dev.
    sp<DeviceHalInterface> dev;
    int rc = mDevicesFactoryHal->openDevice(name, &dev);

    // set hardware status to AUDIO_HW_INIT, call dev->initCheck(),
    // then set the status back to AUDIO_HW_IDLE
    mHardwareStatus = AUDIO_HW_INIT;
    rc = dev->initCheck();
    // ----> @ \hardware\qcom\audio\hal\audio_hw.c
    // ----> adev->device.init_check = adev_init_check;
    mHardwareStatus = AUDIO_HW_IDLE;

    // configure the audio volume
    // Check and cache this HAL's level of support for master mute and master
    // volume.  If this is the first HAL opened, and it supports the get
    // methods, use the initial values provided by the HAL as the current
    // master mute and volume settings.
    AudioHwDevice::Flags flags = static_cast<AudioHwDevice::Flags>(0);
    {   // scope for auto-lock pattern
        AutoMutex lock(mHardwareLock);
        mHardwareStatus = AUDIO_HW_SET_MASTER_VOLUME;
        if (OK == dev->setMasterVolume(mMasterVolume)) {
            flags = static_cast<AudioHwDevice::Flags>(flags |
                    AudioHwDevice::AHWD_CAN_SET_MASTER_VOLUME);
        }
        mHardwareStatus = AUDIO_HW_SET_MASTER_MUTE;
        if (OK == dev->setMasterMute(mMasterMute)) {
            flags = static_cast<AudioHwDevice::Flags>(flags |
                    AudioHwDevice::AHWD_CAN_SET_MASTER_MUTE);
        }
        mHardwareStatus = AUDIO_HW_IDLE;
    }

    audio_module_handle_t handle = (audio_module_handle_t) nextUniqueId(AUDIO_UNIQUE_ID_USE_MODULE);
    mAudioHwDevs.add(handle, new AudioHwDevice(handle, name, dev, flags));

    ALOGI("loadHwModule() Loaded %s audio interface, handle %d", name, handle);
    return handle;
}
  1. mDevicesFactoryHal->openDevice(name, &dev)
    mDevicesFactoryHal is an object of the DevicesFactoryHalInterface type;
    the function actually called is DevicesFactoryHalLocal::openDevice:

@ \frameworks\av\services\audioflinger\AudioFlinger.h
class AudioFlinger :
    public BinderService<AudioFlinger>,
    public BnAudioFlinger
{
    sp<DevicesFactoryHalInterface> mDevicesFactoryHal;
};

@ \frameworks\av\media\libaudiohal\DevicesFactoryHalLocal.cpp
status_t DevicesFactoryHalLocal::openDevice(const char *name, sp<DeviceHalInterface> *device) {
    audio_hw_device_t *dev;
    status_t rc = load_audio_interface(name, &dev);
    if (rc == OK) {
        *device = new DeviceHalLocal(dev);
    }
    return rc;
}

@ \frameworks\av\media\libaudiohal\DevicesFactoryHalLocal.cpp
static status_t load_audio_interface(const char *if_name, audio_hw_device_t **dev)
{
    const hw_module_t *mod;
    int rc;

    // find the hw module by class; this loads audio.<if_name>.<board>.so
    rc = hw_get_module_by_class(AUDIO_HARDWARE_MODULE_ID, if_name, &mod);
    if (rc) {
        goto out;
    }
    // open the device through the module's open method
    rc = audio_hw_device_open(mod, dev);
out:
    return rc;
}

In the hardware layer, the audio_module struct matching AUDIO_HARDWARE_MODULE_ID is found, and eventually its adev_open() method is called.

@ \src\hardware\qcom\audio\hal\audio_hw.c
static struct hw_module_methods_t hal_module_methods = {
    .open = adev_open,
};

struct audio_module HAL_MODULE_INFO_SYM = {
    .common = {
        .tag = HARDWARE_MODULE_TAG,
        .module_api_version = AUDIO_MODULE_API_VERSION_0_1,
        .hal_api_version = HARDWARE_HAL_API_VERSION,
        .id = AUDIO_HARDWARE_MODULE_ID,
        .name = "QCOM Audio HAL",
        .author = "The Linux Foundation",
        .methods = &hal_module_methods,
    },
};
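
For orientation, a short sketch of the caller's side of this HAL loading path, using the libhardware entry points shown above (the property-based choice of the audio.primary.<variant>.so library name varies by Android version, as summarized in the comments):

// Conceptual caller-side view of the HAL loading path, using the real
// libhardware entry points (signatures from hardware/libhardware).
#include <hardware/hardware.h>
#include <hardware/audio.h>

int open_primary_hal(audio_hw_device_t **dev)
{
    const hw_module_t *mod;
    // "audio" + "primary": dlopen()s audio.primary.<variant>.so (variant taken
    // from properties such as ro.product.board) and resolves HAL_MODULE_INFO_SYM
    int rc = hw_get_module_by_class(AUDIO_HARDWARE_MODULE_ID, "primary", &mod);
    if (rc != 0) {
        return rc;
    }
    // audio_hw_device_open() is an inline helper in hardware/audio.h that calls
    // mod->methods->open(mod, AUDIO_HARDWARE_INTERFACE, ...), i.e. adev_open()
    return audio_hw_device_open(mod, dev);
}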

In the adev_open() method:

As you can see, the open method mostly performs audio initialization;
the key part is initializing the adev->streams_output_cfg_list and adev->streams_input_cfg_list lists.

@ \src\hardware\qcom\audio\hal\audio_hw.c
static int adev_open(const hw_module_t *module, const char *name,
                     hw_device_t **device)
{
    if (strcmp(name, AUDIO_HARDWARE_INTERFACE) != 0)
        return -EINVAL;

    adev = calloc(1, sizeof(struct audio_device));

    // fill in the audio_hw_device_t operation table returned to the upper layer
    adev->device.common.tag = HARDWARE_DEVICE_TAG;
    adev->device.common.version = AUDIO_DEVICE_API_VERSION_2_0;
    adev->device.common.module = (struct hw_module_t *)module;
    adev->device.common.close = adev_close;

    adev->device.init_check = adev_init_check;
    adev->device.set_voice_volume = adev_set_voice_volume;
    adev->device.set_master_volume = adev_set_master_volume;
    adev->device.get_master_volume = adev_get_master_volume;
    adev->device.set_master_mute = adev_set_master_mute;
    adev->device.get_master_mute = adev_get_master_mute;
    adev->device.set_mode = adev_set_mode;
    adev->device.set_mic_mute = adev_set_mic_mute;
    adev->device.get_mic_mute = adev_get_mic_mute;
    adev->device.set_parameters = adev_set_parameters;
    adev->device.get_parameters = adev_get_parameters;
    adev->device.get_input_buffer_size = adev_get_input_buffer_size;
    adev->device.open_output_stream = adev_open_output_stream;
    adev->device.close_output_stream = adev_close_output_stream;
    adev->device.open_input_stream = adev_open_input_stream;
    adev->device.close_input_stream = adev_close_input_stream;
    adev->device.create_audio_patch = adev_create_audio_patch;
    adev->device.release_audio_patch = adev_release_audio_patch;
    adev->device.get_audio_port = adev_get_audio_port;
    adev->device.set_audio_port_config = adev_set_audio_port_config;
    adev->device.dump = adev_dump;

    /* Set the default route before the PCM stream is opened */
    adev->mode = AUDIO_MODE_NORMAL;
    adev->active_input = NULL;
    adev->primary_output = NULL;
    adev->out_device = AUDIO_DEVICE_NONE;
    adev->bluetooth_nrec = true;
    adev->acdb_settings = TTY_MODE_OFF;
    adev->allow_afe_proxy_usage = true;
    adev->bt_sco_on = false;
    /* adev->cur_hdmi_channels = 0; by calloc() */
    adev->snd_dev_ref_cnt = calloc(SND_DEVICE_MAX, sizeof(int));
    voice_init(adev);
    list_init(&adev->usecase_list);
    adev->cur_wfd_channels = 2;
    adev->offload_usecases_state = 0;
    adev->is_channel_status_set = false;
    adev->perf_lock_opts[0] = 0x101;
    adev->perf_lock_opts[1] = 0x20E;
    adev->perf_lock_opts_size = 2;

    /* Loads platform specific libraries dynamically */
    adev->platform = platform_init(adev);

    // load the visualizer effect library if present
    if (access(VISUALIZER_LIBRARY_PATH, R_OK) == 0) {
        adev->visualizer_lib = dlopen(VISUALIZER_LIBRARY_PATH, RTLD_NOW);
        ALOGV("%s: DLOPEN successful for %s", __func__, VISUALIZER_LIBRARY_PATH);
        adev->visualizer_start_output =
                (int (*)(audio_io_handle_t, int))dlsym(adev->visualizer_lib,
                                                       "visualizer_hal_start_output");
        adev->visualizer_stop_output =
                (int (*)(audio_io_handle_t, int))dlsym(adev->visualizer_lib,
                                                       "visualizer_hal_stop_output");
    }
    audio_extn_init(adev);
    audio_extn_listen_init(adev, adev->snd_card);
    audio_extn_gef_init(adev);
    audio_extn_hw_loopback_init(adev);

    // load the offload effects bundle if present
    if (access(OFFLOAD_EFFECTS_BUNDLE_LIBRARY_PATH, R_OK) == 0) {
        adev->offload_effects_lib = dlopen(OFFLOAD_EFFECTS_BUNDLE_LIBRARY_PATH, RTLD_NOW);
        ALOGV("%s: DLOPEN successful for %s", __func__, OFFLOAD_EFFECTS_BUNDLE_LIBRARY_PATH);
        adev->offload_effects_start_output =
                (int (*)(audio_io_handle_t, int, struct mixer *))dlsym(adev->offload_effects_lib,
                        "offload_effects_bundle_hal_start_output");
        adev->offload_effects_stop_output =
                (int (*)(audio_io_handle_t, int))dlsym(adev->offload_effects_lib,
                        "offload_effects_bundle_hal_stop_output");
        adev->offload_effects_set_hpx_state =
                (int (*)(bool))dlsym(adev->offload_effects_lib,
                        "offload_effects_bundle_set_hpx_state");
        adev->offload_effects_get_parameters =
                (void (*)(struct str_parms *, struct str_parms *))dlsym(adev->offload_effects_lib,
                        "offload_effects_bundle_get_parameters");
        adev->offload_effects_set_parameters =
                (void (*)(struct str_parms *))dlsym(adev->offload_effects_lib,
                        "offload_effects_bundle_set_parameters");
    }
    audio_extn_ds2_enable(adev);

    *device = &adev->device.common;

    // the key step: initialize the streams_output_cfg_list and
    // streams_input_cfg_list lists from the platform configuration
    audio_extn_utils_update_streams_cfg_lists(adev->platform, adev->mixer,
                                              &adev->streams_output_cfg_list,
                                              &adev->streams_input_cfg_list);

    audio_device_ref_count++;

    // allow the period sizes to be tuned via system properties
    char value[PROPERTY_VALUE_MAX];
    int trial;
    if (property_get("vendor.audio_hal.period_size", value, NULL) > 0) {
        trial = atoi(value);
        if (period_size_is_plausible_for_low_latency(trial)) {
            pcm_config_low_latency.period_size = trial;
            pcm_config_low_latency.start_threshold = trial / 4;
            pcm_config_low_latency.avail_min = trial / 4;
            configured_low_latency_capture_period_size = trial;
        }
    }
    if (property_get("vendor.audio_hal.in_period_size", value, NULL) > 0) {
        trial = atoi(value);
        if (period_size_is_plausible_for_low_latency(trial)) {
            configured_low_latency_capture_period_size = trial;
        }
    }
    if (property_get("vendor.audio_hal.period_multiplier", value, NULL) > 0) {
        af_period_multiplier = atoi(value);
        if (af_period_multiplier < 0)
            af_period_multiplier = 2;
        else if (af_period_multiplier > 4)
            af_period_multiplier = 4;
        ALOGV("new period_multiplier = %d", af_period_multiplier);
    }
    adev->multi_offload_enable = property_get_bool("vendor.audio.offload.multiple.enabled", false);
    pthread_mutex_unlock(&adev_init_lock);

    if (adev->adm_init)
        adev->adm_data = adev->adm_init();

    qahwi_init(*device);
    audio_extn_perf_lock_init();
    audio_extn_adsp_hdlr_init(adev->mixer);

    audio_extn_snd_mon_init();
    pthread_mutex_lock(&adev->lock);
    audio_extn_snd_mon_register_listener(adev, adev_snd_mon_cb);
    adev->card_status = CARD_STATUS_ONLINE;
    pthread_mutex_unlock(&adev->lock);
    audio_extn_sound_trigger_init(adev); /* dependent on snd_mon_init() */

    /* Allocate memory for Device config params */
    adev->device_cfg_params = (struct audio_device_config_param *)
            calloc(platform_get_max_codec_backend(), sizeof(struct audio_device_config_param));
    if (adev->device_cfg_params == NULL)
        ALOGE("%s: Memory allocation failed for Device config params", __func__);

    ALOGV("%s: exit", __func__);
    return 0;
}
1.2.3.1.5 Binding the available output streams: mpClientInterface->openOutput

step 1. the first opened device is designated as the primary output device

step 2. find the most suitable output device
step 3. open the output stream
step 4. create the thread for the output stream and add it to mMmapThreads
step 5. for the other output stream types, create the corresponding OffloadThread, DirectOutputThread or MixerThread

@ \frameworks\av\services\audiopolicy\managerdefault\AudioPolicyManager.cpp
status_t status = mpClientInterface->openOutput(outProfile->getModuleHandle(),
        &output, &config, &outputDesc->mDevice, address,
        &outputDesc->mLatency, outputDesc->mFlags);
------------------------------------------------------------------------------------
@ \frameworks\av\services\audiopolicy\service\AudioPolicyClientImpl.cpp
status_t AudioPolicyService::AudioPolicyClient::openOutput(audio_module_handle_t module,
        audio_io_handle_t *output, audio_config_t *config, audio_devices_t *devices,
        const String8& address, uint32_t *latencyMs, audio_output_flags_t flags)
{
    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
    return af->openOutput(module, output, config, devices, address, latencyMs, flags);
}
------------------------------------------------------------------------------------
@ \frameworks\av\media\libaudioclient\IAudioFlinger.cpp
virtual status_t openOutput(audio_module_handle_t module, audio_io_handle_t *output,
        audio_config_t *config, audio_devices_t *devices, const String8& address,
        uint32_t *latencyMs, audio_output_flags_t flags)
{
    // (argument marshalling elided in this excerpt)
    status_t status = remote()->transact(OPEN_OUTPUT, data, &reply);
}

status_t BnAudioFlinger::onTransact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch (code) {
    case OPEN_OUTPUT: {
        CHECK_INTERFACE(IAudioFlinger, data, reply);
        audio_module_handle_t module = (audio_module_handle_t)data.readInt32();
        audio_config_t config = {};
        if (data.read(&config, sizeof(audio_config_t)) != NO_ERROR) {
            ALOGE("b/23905951");
        }
        audio_devices_t devices = (audio_devices_t)data.readInt32();
        String8 address(data.readString8());
        audio_output_flags_t flags = (audio_output_flags_t)data.readInt32();
        uint32_t latencyMs = 0;
        audio_io_handle_t output = AUDIO_IO_HANDLE_NONE;
        status_t status = openOutput(module, &output, &config, &devices, address, &latencyMs, flags);
        ALOGV("OPEN_OUTPUT output, %d", output);
        reply->writeInt32((int32_t)status);
        if (status == NO_ERROR) {
            reply->writeInt32((int32_t)output);
            reply->write(&config, sizeof(audio_config_t));
            reply->writeInt32(devices);
            reply->writeInt32(latencyMs);
        }
        return NO_ERROR;
    } break;
    default:
        return BBinder::onTransact(code, data, reply, flags);
    }
}
------------------------------------------------------------------------------------
@ \frameworks\av\services\audioflinger\AudioFlinger.cpp
status_t AudioFlinger::openOutput(audio_module_handle_t module, audio_io_handle_t *output,
        audio_config_t *config, audio_devices_t *devices, const String8& address,
        uint32_t *latencyMs, audio_output_flags_t flags)
{
    ALOGI("openOutput() this %p, module %d Device %x, SamplingRate %d, Format %#08x, Channels %x, flags %x",
          this, module, (devices != NULL) ? *devices : 0,
          config->sample_rate, config->format, config->channel_mask, flags);

    sp<ThreadBase> thread = openOutput_l(module, output, config, *devices, address, flags);
    if (thread != 0) {
        if ((flags & AUDIO_OUTPUT_FLAG_MMAP_NOIRQ) == 0) {
            PlaybackThread *playbackThread = (PlaybackThread *)thread.get();
            *latencyMs = playbackThread->latency();

            // notify client processes of the new output creation
            playbackThread->ioConfigChanged(AUDIO_OUTPUT_OPENED);

            // step 1. Designate the first opened device as the primary output device.
            // the first primary output opened designates the primary hw device
            if ((mPrimaryHardwareDev == NULL) && (flags & AUDIO_OUTPUT_FLAG_PRIMARY)) {
                ALOGI("Using module %d as the primary audio interface", module);
                mPrimaryHardwareDev = playbackThread->getOutput()->audioHwDev;

                AutoMutex lock(mHardwareLock);
                mHardwareStatus = AUDIO_HW_SET_MODE;
                mPrimaryHardwareDev->hwDevice()->setMode(mMode);
                mHardwareStatus = AUDIO_HW_IDLE;
            }
        } else {
            MmapThread *mmapThread = (MmapThread *)thread.get();
            mmapThread->ioConfigChanged(AUDIO_OUTPUT_OPENED);
        }
        return NO_ERROR;
    }
    return NO_INIT;
}
------------------------------------------------------------------------------------
sp<AudioFlinger::ThreadBase> AudioFlinger::openOutput_l(audio_module_handle_t module,
        audio_io_handle_t *output, audio_config_t *config, audio_devices_t devices,
        const String8& address, audio_output_flags_t flags)
{
    // step 2. Find the most suitable output device.
    AudioHwDevice *outHwDev = findSuitableHwDev_l(module, devices);

    mHardwareStatus = AUDIO_HW_OUTPUT_OPEN;
    // step 3. Open the output stream.
    status_t status = outHwDev->openOutputStream(
            &outputStream, *output, devices, flags, config, address.string());
    mHardwareStatus = AUDIO_HW_IDLE;

    if (status == NO_ERROR) {
        if (flags & AUDIO_OUTPUT_FLAG_MMAP_NOIRQ) {
            // step 4. Create the thread for this output stream and add it to mMmapThreads.
            sp<MmapPlaybackThread> thread = new MmapPlaybackThread(this, *output, outHwDev,
                    outputStream, devices, AUDIO_DEVICE_NONE, mSystemReady);
            mMmapThreads.add(*output, thread);
            ALOGV("openOutput_l() created mmap playback thread: ID %d thread %p", *output, thread.get());
            return thread;
        } else {
            sp<PlaybackThread> thread;
            // step 5. For the other output stream types, create the matching
            // OffloadThread, DirectOutputThread or MixerThread.
            if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
                thread = new OffloadThread(this, outputStream, *output, devices, mSystemReady);
                ALOGV("openOutput_l() created offload output: ID %d thread %p", *output, thread.get());
            } else if ((flags & AUDIO_OUTPUT_FLAG_DIRECT)
                    || !isValidPcmSinkFormat(config->format)
                    || !isValidPcmSinkChannelMask(config->channel_mask)) {
                thread = new DirectOutputThread(this, outputStream, *output, devices, mSystemReady);
                ALOGV("openOutput_l() created direct output: ID %d thread %p", *output, thread.get());
            } else {
                thread = new MixerThread(this, outputStream, *output, devices, mSystemReady);
                ALOGV("openOutput_l() created mixer output: ID %d thread %p", *output, thread.get());
            }
            mPlaybackThreads.add(*output, thread);
            return thread;
        }
    }
    return 0;
}
  1. Finding the most suitable output device module: findSuitableHwDev_l()
@ \frameworks\av\services\audioflinger\AudioFlinger.cpp
AudioHwDevice* AudioFlinger::findSuitableHwDev_l(audio_module_handle_t module, audio_devices_t devices)
{
    // Compatibility with old policy managers:
    // if module is 0, the request comes from an old policy manager and we should load well known modules
    if (module == 0) {
        ALOGW("findSuitableHwDev_l() loading well know audio hw modules");
        // load the audio modules
        for (size_t i = 0; i < arraysize(audio_interfaces); i++) {
            loadHwModule_l(audio_interfaces[i]);
        }
        // then try to find a module supporting the requested device.
        for (size_t i = 0; i < mAudioHwDevs.size(); i++) {
            AudioHwDevice *audioHwDevice = mAudioHwDevs.valueAt(i);
            sp<DeviceHalInterface> dev = audioHwDevice->hwDevice();
            uint32_t supportedDevices;
            // query the list of supported devices
            if (dev->getSupportedDevices(&supportedDevices) == OK &&
                    (supportedDevices & devices) == devices) {
                return audioHwDevice;
            }
        }
    } else {
        // check a match for the requested module handle
        AudioHwDevice *audioHwDevice = mAudioHwDevs.valueFor(module);
        if (audioHwDevice != NULL) {
            return audioHwDevice;
        }
    }
    return NULL;
}
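The key line above is the subset test on the device bitmasks. A small illustration (supportsAll is my own helper name, not an AOSP symbol):

#include <system/audio.h>

// A module qualifies only when every requested device bit is present
// in its supported mask.
static bool supportsAll(uint32_t supportedDevices, audio_devices_t requested) {
    return (supportedDevices & requested) == (uint32_t)requested;
}
// supported = SPEAKER | WIRED_HEADSET:
//   requested SPEAKER                 -> true
//   requested SPEAKER | BLUETOOTH_SCO -> false (the SCO bit is missing)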
  1. Opening the output stream: outHwDev->openOutputStream()

    Creates an AudioStreamOut object and calls its open() method.

@ \frameworks\av\services\audioflinger\AudioHwDevice.cpp
status_t AudioHwDevice::openOutputStream(AudioStreamOut **ppStreamOut, audio_io_handle_t handle,
        audio_devices_t devices, audio_output_flags_t flags,
        struct audio_config *config, const char *address)
{
    struct audio_config originalConfig = *config;
    // create an AudioStreamOut object
    AudioStreamOut *outputStream = new AudioStreamOut(this, flags);

    // Try to open the HAL first using the current format.
    ALOGV("openOutputStream(), try sampleRate %d, Format %#x, channelMask %#x",
          config->sample_rate, config->format, config->channel_mask);
    status_t status = outputStream->open(handle, devices, config, address);

    *ppStreamOut = outputStream;
    return status;
}

The open() method is implemented as follows:

status_t AudioStreamOut::open(audio_io_handle_t handle, audio_devices_t devices,
        struct audio_config *config, const char *address)
{
    sp<StreamOutHalInterface> outStream;
    audio_output_flags_t customFlags = (config->format == AUDIO_FORMAT_IEC61937)
            ? (audio_output_flags_t)(flags | AUDIO_OUTPUT_FLAG_IEC958_NONAUDIO)
            : flags;
    // open the output stream (adding the IEC958_NONAUDIO flag when the format is IEC61937)
    int status = hwDev()->openOutputStream(handle, devices, customFlags, config, address, &outStream);
    ALOGV("AudioStreamOut::open(), HAL returned stream %p, sampleRate %d, Format %#x, channelMask %#x, status %d",
          outStream.get(), config->sample_rate, config->format, config->channel_mask, status);

    // If the HAL does not recognize AUDIO_FORMAT_IEC61937, retry as AUDIO_FORMAT_PCM_16_BIT.
    // Some HALs may not recognize AUDIO_FORMAT_IEC61937. But if we declare it as PCM then it will probably work.
    if (status != NO_ERROR && config->format == AUDIO_FORMAT_IEC61937) {
        struct audio_config customConfig = *config;
        customConfig.format = AUDIO_FORMAT_PCM_16_BIT;
        status = hwDev()->openOutputStream(handle, devices, customFlags, &customConfig, address, &outStream);
        ALOGV("AudioStreamOut::open(), treat IEC61937 as PCM, status = %d", status);
    }
    // once opened successfully, query the size of one audio frame
    if (status == NO_ERROR) {
        stream = outStream;
        status = stream->getFrameSize(&mHalFrameSize);
    }
    return status;
}
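For reference, getFrameSize() on a PCM stream boils down to channel count times bytes per sample. A hedged illustration using the system audio helpers:

#include <system/audio.h>

// Illustration only: one stereo 16-bit PCM frame is 2 channels * 2 bytes = 4 bytes.
size_t frameSize = audio_channel_count_from_out_mask(AUDIO_CHANNEL_OUT_STEREO)
                 * audio_bytes_per_sample(AUDIO_FORMAT_PCM_16_BIT);   // == 4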

The difference between AUDIO_FORMAT_IEC61937 and AUDIO_FORMAT_PCM_16_BIT:

IEC61937: non-linear PCM-encoded audio bits. It is a specification jointly defined by Sony and Philips, also known as S/PDIF, and is used to carry compressed audio data or decompressed stereo data.

PCM_16_BIT: plain 16-bit pulse-code-modulated audio.

Inside hwDev()->openOutputStream():

@ /frameworks/av/services/audioflinger/AudioStreamOut.cpp
hwDev()->openOutputStream   ------>  returns a DeviceHalInterface object, so this calls:

@ /frameworks/av/media/libaudiohal/include/media/audiohal/DeviceHalInterface.h
virtual status_t openOutputStream(
        audio_io_handle_t handle,
        audio_devices_t devices,
        audio_output_flags_t flags,
        struct audio_config *config,
        const char *address,
        sp<StreamOutHalInterface> *outStream) = 0;

which lands in:
@ /frameworks/av/media/libaudiohal/DeviceHalLocal.cpp
status_t DeviceHalLocal::openOutputStream(
        audio_io_handle_t handle, audio_devices_t devices, audio_output_flags_t flags,
        struct audio_config *config, const char *address,
        sp<StreamOutHalInterface> *outStream) {
    audio_stream_out_t *halStream;
    ALOGV("open_output_stream handle: %d devices: %x flags: %#x srate: %d format %#x channels %x address %s",
          handle, devices, flags, config->sample_rate, config->format, config->channel_mask, address);
    int openResut = mDev->open_output_stream(mDev, handle, devices, flags, config, &halStream, address);
    if (openResut == OK) {
        *outStream = new StreamOutHalLocal(halStream, this);
    }
    ALOGV("open_output_stream status %d stream %p", openResut, halStream);
    return openResut;
}

This finally reaches the hardware layer (audio_hw_device_t *mDev):
@ /hardware/qcom/audio/hal/audio_hw.c
adev->device.open_output_stream = adev_open_output_stream;
adev->device.open_input_stream  = adev_open_input_stream;

int adev_open_output_stream(struct audio_hw_device *dev, audio_io_handle_t handle,
        audio_devices_t devices, audio_output_flags_t flags, struct audio_config *config,
        struct audio_stream_out **stream_out, const char *address __unused)
{
    bool is_hdmi = devices & AUDIO_DEVICE_OUT_AUX_DIGITAL;
    bool is_usb_dev = audio_is_usb_out_device(devices) &&
                      (devices != AUDIO_DEVICE_OUT_USB_ACCESSORY);
    bool direct_dev = is_hdmi || is_usb_dev;

    // allocate a stream_out object
    out = (struct stream_out *)calloc(1, sizeof(struct stream_out));
    out->flags = flags;
    out->devices = devices;
    out->dev = adev;
    format = out->format = config->format;
    out->sample_rate = config->sample_rate;
    out->channel_mask = config->channel_mask;
    if (out->channel_mask == AUDIO_CHANNEL_NONE)
        out->supported_channel_masks[0] = AUDIO_CHANNEL_OUT_STEREO;
    else
        out->supported_channel_masks[0] = out->channel_mask;
    out->handle = handle;
    out->bit_width = CODEC_BACKEND_DEFAULT_BIT_WIDTH;

    out->compr_config.codec = (struct snd_codec *)calloc(1, sizeof(struct snd_codec));
    out->stream.pause  = out_pause;
    out->stream.resume = out_resume;
    out->stream.flush  = out_flush;
    if (out->flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
        out->stream.set_callback = out_set_callback;
        out->stream.drain = out_drain;
        out->usecase = get_offload_usecase(adev, true /* is_compress */);
        ALOGV("Compress Offload usecase .. usecase selected %d", out->usecase);
    } else {
        out->usecase = get_offload_usecase(adev, false /* is_compress */);
        ALOGV("non-offload DIRECT_usecase ... usecase selected %d ", out->usecase);
    }

    if ((config->offload_info.format & AUDIO_FORMAT_MAIN_MASK) == AUDIO_FORMAT_PCM) {
        /* Based on platform support, configure appropriate alsa format for corresponding
         * hal input format. */
        out->compr_config.codec->format = hal_format_to_alsa(config->offload_info.format);
        out->hal_op_format = alsa_format_to_hal(out->compr_config.codec->format);
        out->hal_ip_format = out->format;
        /* for direct non-compress playback populate bit_width based on selected alsa format as
         * hal input format and alsa format might differ based on platform support. */
        out->bit_width = audio_bytes_per_sample(out->hal_op_format) << 3;
        out->compr_config.fragments = DIRECT_PCM_NUM_FRAGMENTS;
        /* Check if alsa session is configured with the same format as HAL input format,
         * if not then derive correct fragment size needed to accomodate the
         * conversion of HAL input format to alsa format. */
        audio_extn_utils_update_direct_pcm_fragment_size(out);
    } else if (audio_extn_passthru_is_passthrough_stream(out)) {
        out->compr_config.fragment_size = audio_extn_passthru_get_buffer_size(&config->offload_info);
        out->compr_config.fragments = COMPRESS_OFFLOAD_NUM_FRAGMENTS;
    } else {
        out->compr_config.fragment_size = platform_get_compress_offload_buffer_size(&config->offload_info);
        out->compr_config.fragments = COMPRESS_OFFLOAD_NUM_FRAGMENTS;
    }

    ........ // a large chunk is omitted here; it mainly fills in `out` according to the audio type

    out->stream.common.get_sample_rate = out_get_sample_rate;
    out->stream.common.set_sample_rate = out_set_sample_rate;
    out->stream.common.get_buffer_size = out_get_buffer_size;
    out->stream.common.get_channels = out_get_channels;
    out->stream.common.get_format = out_get_format;
    out->stream.common.set_format = out_set_format;
    out->stream.common.standby = out_standby;
    out->stream.common.dump = out_dump;
    out->stream.common.set_parameters = out_set_parameters;
    out->stream.common.get_parameters = out_get_parameters;
    out->stream.common.add_audio_effect = out_add_audio_effect;
    out->stream.common.remove_audio_effect = out_remove_audio_effect;
    out->stream.get_latency = out_get_latency;
    out->stream.set_volume = out_set_volume;
    out->stream.write = out_write;
    out->stream.get_render_position = out_get_render_position;
    out->stream.get_next_write_timestamp = out_get_next_write_timestamp;
    out->stream.get_presentation_position = out_get_presentation_position;

    config->format = out->stream.common.get_format(&out->stream.common);
    config->channel_mask = out->stream.common.get_channels(&out->stream.common);
    config->sample_rate = out->stream.common.get_sample_rate(&out->stream.common);

    // once all parameters are configured, lock the stream
    lock_output_stream(out);
    // register a callback: out_snd_mon_cb() fires automatically on stream/card state changes
    audio_extn_snd_mon_register_listener(out, out_snd_mon_cb);  ---> return add_listener(stream, cb);

    // out->stream now carries the full set of audio ops and parameters
    *stream_out = &out->stream;
}

The out_snd_mon_cb callback mainly reads the audio card's online state:

if the card is OFFLINE it calls out_on_error, which checks whether the stream is in
AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD mode; if not, the stream is put into standby.

static void out_snd_mon_cb(void *stream, struct str_parms *parms)
{
    if (parse_snd_card_status(parms, &card, &status) < 0)
        return;
    // --------> *status = !strcmp(state, "ONLINE") ? CARD_STATUS_ONLINE : CARD_STATUS_OFFLINE;
    if (status == CARD_STATUS_OFFLINE)
        out_on_error(stream);   // -----> out_standby(&out->stream.common);
}

To sum up: what the upper layer mainly gets out of opening the stream is this table of operation methods, saved in outputStream.
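To make that concrete, here is a hedged sketch of how a caller drives the HAL stream once *stream_out has been filled in; playSome and its buffer arguments are hypothetical, the function-pointer table is the real one populated above:

#include <hardware/audio.h>

// Sketch: every call goes through the ops table that adev_open_output_stream()
// installed, ending up in the out_* functions.
void playSome(struct audio_stream_out *s, const void *pcmBuffer, size_t pcmBytes) {
    uint32_t rate = s->common.get_sample_rate(&s->common);  // -> out_get_sample_rate()
    (void)rate;
    ssize_t written = s->write(s, pcmBuffer, pcmBytes);     // -> out_write()
    (void)written;
    s->common.standby(&s->common);                          // -> out_standby()
}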

  1. Creating the thread for the output stream: MmapPlaybackThread()

From step 2 above we know that each output stream has its own set of operation methods.

Calling MmapPlaybackThread here gives each output stream its own thread: an MMAP stream's
thread is stored via mMmapThreads.add(*output, thread), while the other playback threads
are stored via mPlaybackThreads.add(*output, thread).

AudioFlinger::MmapPlaybackThread::MmapPlaybackThread(
        const sp<AudioFlinger>& audioFlinger, audio_io_handle_t id,
        AudioHwDevice *hwDev, AudioStreamOut *output,
        audio_devices_t outDevice, audio_devices_t inDevice, bool systemReady)
    : MmapThread(audioFlinger, id, hwDev, output->stream, outDevice, inDevice, systemReady),
      mStreamType(AUDIO_STREAM_MUSIC), mStreamVolume(1.0), mStreamMute(false), mOutput(output)
{
    snprintf(mThreadName, kThreadNameLength, "AudioMmapOut_%X", id);
    mChannelCount = audio_channel_count_from_out_mask(mChannelMask);
    mMasterVolume = audioFlinger->masterVolume_l();
    mMasterMute = audioFlinger->masterMute_l();
    if (mAudioHwDev) {
        if (mAudioHwDev->canSetMasterVolume()) {
            mMasterVolume = 1.0;
        }
        if (mAudioHwDev->canSetMasterMute()) {
            mMasterMute = false;
        }
    }
}

The core of the construction is the delegation
MmapThread(audioFlinger, id, hwDev, output->stream, outDevice, inDevice, systemReady)
----> ThreadBase(audioFlinger, id, outDevice, inDevice, MMAP, systemReady).

We won't dig into the threads themselves for now; they will be analyzed in detail later.

1.2.3.2 new AudioPolicyEffects(): parsing the effects config files

AudioPolicyEffects exists mainly to parse third-party effect libraries and expose the corresponding interfaces to them.

As the code shows, it first tries to parse the XML file; if loading fails it falls back
to the .conf file. In our build it is actually the .conf file that is used; no XML
config file exists on the device.

// @ \frameworks\av\services\audiopolicy\service\AudioPolicyEffects.cpp
AudioPolicyEffects::AudioPolicyEffects()
{
    status_t loadResult = loadAudioEffectXmlConfig();
    if (loadResult < 0) {
        ALOGW("Failed to load XML effect configuration, fallback to .conf");
        // load automatic audio effect modules
        if (access(AUDIO_EFFECT_VENDOR_CONFIG_FILE, R_OK) == 0) {
            loadAudioEffectConfig(AUDIO_EFFECT_VENDOR_CONFIG_FILE);
        } else if (access(AUDIO_EFFECT_DEFAULT_CONFIG_FILE, R_OK) == 0) {
            loadAudioEffectConfig(AUDIO_EFFECT_DEFAULT_CONFIG_FILE);
        }
    } else if (loadResult > 0) {
        ALOGE("Effect config is partially invalid, skipped %d elements", loadResult);
    }
}
1.2.3.2.1 Loading the XML config file: loadAudioEffectXmlConfig()

Since our build does not take the XML path (there is no audio_effects.xml file),
the code is only listed here for reference; the .conf path is analyzed in detail below.

status_t AudioPolicyEffects::loadAudioEffectXmlConfig() {
    auto result = effectsConfig::parse();
    auto loadProcessingChain = [](auto& processingChain, auto& streams) {
        for (auto& stream : processingChain) {
            auto effectDescs = std::make_unique<EffectDescVector>();
            for (auto& effect : stream.effects) {
                effectDescs->mEffects.add(new EffectDesc{effect.get().name.c_str(), effect.get().uuid});
            }
            streams.add(stream.type, effectDescs.release());
        }
    };
    loadProcessingChain(result.parsedConfig->preprocess, mInputSources);
    loadProcessingChain(result.parsedConfig->postprocess, mOutputStreams);
    // Casting from ssize_t to status_t is probably safe, there should not be more than 2^31 errors
    return result.nbSkippedElement;
}
  1. effectsConfig::parse()

The function is declared as follows; by default it parses "/vendor/etc/audio_effects.xml":

@ /frameworks/av/media/libeffects/config/include/media/EffectsConfig.h
/** Default path of effect configuration file. */
constexpr char DEFAULT_PATH[] = "/vendor/etc/audio_effects.xml";
ParsingResult parse(const char* path = DEFAULT_PATH);

Its implementation:

ParsingResult parse(const char* path) {
    // step 1. instantiate an XMLDocument and load "/vendor/etc/audio_effects.xml"
    XMLDocument doc;
    doc.LoadFile(path);

    auto config = std::make_unique<Config>();
    size_t nbSkippedElements = 0;
    auto registerFailure = [&nbSkippedElements](bool result) {
        nbSkippedElements += result ? 0 : 1;
    };

    for (auto& xmlConfig : getChildren(doc, "audio_effects_conf")) {
        // Parse libraries
        for (auto& xmlLibraries : getChildren(xmlConfig, "libraries")) {
            for (auto& xmlLibrary : getChildren(xmlLibraries, "library")) {
                registerFailure(parseLibrary(xmlLibrary, &config->libraries));
            }
        }
        // Parse effects
        for (auto& xmlEffects : getChildren(xmlConfig, "effects")) {
            for (auto& xmlEffect : getChildren(xmlEffects)) {
                registerFailure(parseEffect(xmlEffect, config->libraries, &config->effects));
            }
        }
        // Parse pre processing chains
        for (auto& xmlPreprocess : getChildren(xmlConfig, "preprocess")) {
            for (auto& xmlStream : getChildren(xmlPreprocess, "stream")) {
                registerFailure(parseStream(xmlStream, config->effects, &config->preprocess));
            }
        }
        // Parse post processing chains
        for (auto& xmlPostprocess : getChildren(xmlConfig, "postprocess")) {
            for (auto& xmlStream : getChildren(xmlPostprocess, "stream")) {
                registerFailure(parseStream(xmlStream, config->effects, &config->postprocess));
            }
        }
    }
    return {std::move(config), nbSkippedElements};
}
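getChildren/parseLibrary and friends are thin wrappers over tinyxml2, which EffectsConfig is built on. As a rough sketch of what such a walk looks like (dumpLibraries is my own illustrative helper, not an AOSP function; the tag and attribute names follow the audio_effects_conf schema):

#include <cstdio>
#include <tinyxml2.h>

// Walk <audio_effects_conf><libraries><library name=... path=.../> and print them.
int dumpLibraries(const char *path) {
    tinyxml2::XMLDocument doc;
    if (doc.LoadFile(path) != tinyxml2::XML_SUCCESS) return -1;
    auto *root = doc.FirstChildElement("audio_effects_conf");
    if (root == nullptr) return -1;
    for (auto *libs = root->FirstChildElement("libraries"); libs != nullptr;
         libs = libs->NextSiblingElement("libraries")) {
        for (auto *lib = libs->FirstChildElement("library"); lib != nullptr;
             lib = lib->NextSiblingElement("library")) {
            printf("library %s -> %s\n", lib->Attribute("name"), lib->Attribute("path"));
        }
    }
    return 0;
}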
1.2.3.2.2 Loading the vendor conf file: loadAudioEffectConfig(AUDIO_EFFECT_VENDOR_CONFIG_FILE)

The conf path is "/vendor/etc/audio_effects.conf".

In the source tree the file is \hardware\qcom\audio\configs\msm8937\audio_effects.conf.
Before studying how it is parsed, let's look at what it contains, which will help the later analysis.

As you can see, the conf file mainly lists the effect libraries the platform supports.

  1. libraries node

    The library file behind each effect scenario.

  2. pre_processing node

    The default pre-processing library; if the audio HAL implements the default software
    audio pre-processing effects, it is added to the "libraries" section of audio_effects.conf.

  3. effects node

    Lists the effects to load; each entry must contain a "library" and a "uuid".
    The "library" value must be one of the libraries declared under the libraries node above,
    and the uuid uniquely identifies one concrete effect: it is the implementation-specific
    UUID assigned by the effect vendor, not the generic effect-type UUID.

@ \src\hardware\qcom\audio\configs\msm8937\audio_effects.conf
# List of effect libraries to load. Each library element must contain a "path"
# element giving the full path of the library .so file.
#    libraries {
#        <lib name> {
#            path <lib path>
#        }
#    }

# the library behind each effect scenario
libraries {
  bundle {
    path /vendor/lib/soundfx/libbundlewrapper.so
  }
  reverb {                    # reverberation
    path /vendor/lib/soundfx/libreverbwrapper.so
  }
  qcbassboost {
    path /vendor/lib/soundfx/libqcbassboost.so
  }
  qcvirt {
    path /vendor/lib/soundfx/libqcvirt.so
  }
  qcreverb {
    path /vendor/lib/soundfx/libqcreverb.so
  }
  visualizer_sw {
    path /vendor/lib/soundfx/libvisualizer.so
  }
  visualizer_hw {
    path /vendor/lib/soundfx/libqcomvisualizer.so
  }
  downmix {
    path /vendor/lib/soundfx/libdownmix.so
  }
  loudness_enhancer {         # loudness enhancement
    path /vendor/lib/soundfx/libldnhncr.so
  }
  proxy {
    path /vendor/lib/soundfx/libeffectproxy.so
  }
  offload_bundle {
    path /vendor/lib/soundfx/libqcompostprocbundle.so
  }
  audio_pre_processing {
    path /vendor/lib/soundfx/libqcomvoiceprocessing.so
  }
}

# Default pre-processing library. Add to audio_effect.conf "libraries" section if
# audio HAL implements support for default software audio pre-processing effects
#
# pre_processing {
#   path /vendor/lib/soundfx/libaudiopreprocessing.so
# }

# list of effects to load. Each effect element must contain a "library" and a "uuid" element.
# The value of the "library" element must correspond to the name of one library element in the "libraries" element.
# The name of the effect element is indicative, only the value of the "uuid" element designates the effect.
# The uuid is the implementation specific UUID as specified by the effect vendor. This is not the generic effect type UUID.
#    effects {
#        <effect name> {
#            library <library name>
#            uuid <effect uuid>
#        }
#        ...
#    }

effects {
  # additions for the proxy implementation: a proxy effect carries a SW sub-effect
  # ("libsw") and a HW sub-effect ("libhw") as nodes under it
  bassboost {
    library proxy
    uuid 14804144-a5ee-4d24-aa88-0002a5d5c51b
    libsw {
      library qcbassboost
      uuid 23aca180-44bd-11e2-bcfd-0800200c9a66
    }
    libhw {
      library offload_bundle
      uuid 2c4a8c24-1581-487f-94f6-0002a5d5c51b
    }
  }
  virtualizer {
    library proxy
    uuid d3467faa-acc7-4d34-acaf-0002a5d5c51b
    libsw {
      library qcvirt
      uuid e6c98a16-22a3-11e2-b87b-f23c91aec05e
    }
    libhw {
      library offload_bundle
      uuid 509a4498-561a-4bea-b3b1-0002a5d5c51b
    }
  }
  equalizer {
    library proxy
    uuid c8e70ecd-48ca-456e-8a4f-0002a5d5c51b
    libsw {
      library bundle
      uuid ce772f20-847d-11df-bb17-0002a5d5c51b
    }
    libhw {
      library offload_bundle
      uuid a0dac280-401c-11e3-9379-0002a5d5c51b
    }
  }
  volume {
    library bundle
    uuid 119341a0-8469-11df-81f9-0002a5d5c51b
  }
  reverb_env_aux {
    library proxy
    uuid 48404ac9-d202-4ccc-bf84-0002a5d5c51b
    libsw {
      library qcreverb
      uuid a8c1e5f3-293d-43cd-95ec-d5e26c02e217
    }
    libhw {
      library offload_bundle
      uuid 79a18026-18fd-4185-8233-0002a5d5c51b
    }
  }
  reverb_env_ins {
    library proxy
    uuid b707403a-a1c1-4291-9573-0002a5d5c51b
    libsw {
      library qcreverb
      uuid 791fff8b-8129-4655-83a4-59bc61034c3a
    }
    libhw {
      library offload_bundle
      uuid eb64ea04-973b-43d2-8f5e-0002a5d5c51b
    }
  }
  reverb_pre_aux {
    library proxy
    uuid 1b78f587-6d1c-422e-8b84-0002a5d5c51b
    libsw {
      library qcreverb
      uuid 53ef1db5-c0c0-445b-b060-e34d20ebb70a
    }
    libhw {
      library offload_bundle
      uuid 6987be09-b142-4b41-9056-0002a5d5c51b
    }
  }
  reverb_pre_ins {
    library proxy
    uuid f3e178d2-ebcb-408e-8357-0002a5d5c51b
    libsw {
      library qcreverb
      uuid b08a0e38-22a5-11e2-b87b-f23c91aec05e
    }
    libhw {
      library offload_bundle
      uuid aa2bebf6-47cf-4613-9bca-0002a5d5c51b
    }
  }
  visualizer {
    library proxy
    uuid 1d0a1a53-7d5d-48f2-8e71-27fbd10d842c
    libsw {
      library visualizer_sw
      uuid d069d9e0-8329-11df-9168-0002a5d5c51b
    }
    libhw {
      library visualizer_hw
      uuid 7a8044a0-1a71-11e3-a184-0002a5d5c51b
    }
  }
  downmix {
    library downmix
    uuid 93f04452-e4fe-41cc-91f9-e475b6d1d69f
  }
  hw_acc {
    library offload_bundle
    uuid 7d1580bd-297f-4683-9239-e475b6d1d69f
  }
  loudness_enhancer {
    library loudness_enhancer
    uuid fa415329-2034-4bea-b5dc-5b381c8d1e2c
  }
  aec {
    library audio_pre_processing
    uuid 0f8d0d2a-59e5-45fe-b6e4-248c8a799109
  }
  ns {
    library audio_pre_processing
    uuid 1d97bb0b-9e2f-4403-9ae3-58c2554306f8
  }
}

# Default pre-processing effects. Add to audio_effect.conf "effects" section if
# audio HAL implements support for them.
#
# agc {
#   library pre_processing
#   uuid aa8130e0-66fc-11e0-bad0-0002a5d5c51b
# }
# aec {
#   library pre_processing
#   uuid bb392ec0-8d4d-11e0-a896-0002a5d5c51b
# }
# ns {
#   library pre_processing
#   uuid c06c8400-8e06-11e0-9cb6-0002a5d5c51b
# }

# Audio preprocessor configurations.
# The pre processor configuration consists in a list of elements each describing
# pre processor settings for a given input source. Valid input source names are:
# "mic", "camcorder", "voice_recognition", "voice_communication"
# Each input source element contains a list of effects elements. The name of the effect
# element must be the name of one of the effects in the "effects" list of the file.
# Each effect element may optionally contain a list of parameters and their
# default value to apply when the pre processor effect is created.
# A parameter is defined by a "param" element and a "value" element. Each of these elements
# consists in one or more elements specifying a type followed by a value.
# The types defined are: "int", "short", "float", "bool" and "string"
# When both "param" and "value" are a single int, a simple form is allowed where just
# the param and value pair is present in the parameter description

# aec and ns effects are added for voice_communication, which this board supports
pre_processing {
  voice_communication {
    aec {
    }
    ns {
    }
  }
}

## TODO: add default audio pre processor configurations after debug and tuning phase

Having read the conf file: it mainly declares which .so libraries to load and the uuid of each effect.

status_t AudioPolicyEffects::loadAudioEffectConfig(const char *path)
{
    // step 1: load the conf file from /vendor/etc/audio_effects.conf
    data = (char *)load_file(path, NULL);

    // step 2: parse the conf file into config nodes
    root = config_node("", "");
    config_load(root, data);

    Vector<EffectDesc *> effects;
    // step 3: parse the "effects" node of the conf and store the result in `effects`
    loadEffects(root, effects);
    =====================》
    +   cnode *node = config_find(root, EFFECTS_TAG);
    +   while (node) {
    +       EffectDesc *effect = loadEffect(node);
    +       ----------->
    +       |   AudioEffect::stringToGuid(node->value, &uuid)
    +       |   return new EffectDesc(root->name, uuid);
    +       |   //---> mName=qcbassboost, mUuid=23aca180-44bd-11e2-bcfd-0800200c9a66
    +       <-----------
    +       if (effect == NULL) {
    +           node = node->next;
    +           continue;
    +       }
    +       // `effects` is a vector of EffectDesc; each parsed conf entry is stored in it
    +       effects.add(effect);
    +       node = node->next;
    +   }
    《====================

    // step 4: input-stream effects; parse the "pre_processing" node of the conf
    loadInputEffectConfigurations(root, effects);
    =====================》
    +   cnode *node = config_find(root, PREPROCESSING_TAG);
    +   while (node) {
    +       audio_source_t source = inputSourceNameToEnum(node->name);
    +       EffectDescVector *desc = loadEffectConfig(node, effects);
    +       mInputSources.add(source, desc);
    +       node = node->next;
    +   }
    +
    +   // Automatic input effects are configured per audio_source_t
    +   KeyedVector< audio_source_t, EffectDescVector* > mInputSources;
    《====================

    // step 5: output-stream effects
    loadStreamEffectConfigurations(root, effects);
    =====================》
    +   cnode *node = config_find(root, OUTPUT_SESSION_PROCESSING_TAG);
    +   while (node) {
    +       audio_stream_type_t stream = streamNameToEnum(node->name);
    +       if (stream == AUDIO_STREAM_PUBLIC_CNT) {
    +           ALOGW("loadStreamEffectConfigurations() invalid output stream %s", node->name);
    +           node = node->next;
    +           continue;
    +       }
    +       ALOGV("loadStreamEffectConfigurations() loading output stream %s", node->name);
    +       EffectDescVector *desc = loadEffectConfig(node, effects);
    +       mOutputStreams.add(stream, desc);
    +       node = node->next;
    +   }
    《====================

    // step 6: free the EffectDesc objects
    for (size_t i = 0; i < effects.size(); i++) {
        delete effects[i];
    }
    config_free(root);
    free(root);
    free(data);
    return NO_ERROR;
}
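The stringToGuid() call inside loadEffect() is what turns the textual uuid from the conf file into the binary effect_uuid_t. A hedged re-implementation of that conversion (parseUuid is illustrative, not the AOSP symbol):

#include <cstdio>
#include <cstdint>
#include <hardware/audio_effect.h>   // effect_uuid_t

// Parse "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" into the five uuid fields.
static int parseUuid(const char *str, effect_uuid_t *guid) {
    unsigned tl, tm, thv, cs, n[6];
    if (sscanf(str, "%08x-%04x-%04x-%04x-%02x%02x%02x%02x%02x%02x",
               &tl, &tm, &thv, &cs, &n[0], &n[1], &n[2], &n[3], &n[4], &n[5]) != 10)
        return -1;
    guid->timeLow = tl;
    guid->timeMid = (uint16_t)tm;
    guid->timeHiAndVersion = (uint16_t)thv;
    guid->clockSeq = (uint16_t)cs;
    for (int i = 0; i < 6; i++) guid->node[i] = (uint8_t)n[i];
    return 0;
}
// parseUuid("23aca180-44bd-11e2-bcfd-0800200c9a66", &uuid) yields qcbassboost's mUuid.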
  1. loadEffectConfig(node, effects)

AudioPolicyEffects::EffectDescVector *AudioPolicyEffects::loadEffectConfig(
        cnode *root, const Vector<EffectDesc *>& effects)
{
    EffectDescVector *desc = new EffectDescVector();
    while (node) {
        // look for the matching library entry
        for (size_t i = 0; i < effects.size(); i++) {
            if (strncmp(effects[i]->mName, node->name, EFFECT_STRING_LEN_MAX) == 0) {
                ALOGV("loadEffectConfig() found effect %s in list", node->name);
                break;
            }
        }
        EffectDesc *effect = new EffectDesc(*effects[i]);   // deep copy
        loadEffectParameters(node, effect->mParams);
        ===============》
            effect_param_t *param = loadEffectParameter(node);
            params.add(param);
        《==============
        ALOGV("loadEffectConfig() adding effect %s uuid %08x", effect->mName, effect->mUuid.timeLow);
        desc->mEffects.add(effect);
        node = node->next;
    }
    return desc;
}
1.2.3.2.3 Effect library initialization (LoudnessEnhancer as an example)

The sources of the libraries declared above live under the \src\frameworks\av\media\libeffects directory;
each effect has its own subdirectory there.

We take @ src\frameworks\av\media\libeffects\loudness as the example:

@ \src\frameworks\av\media\libeffects\loudness\EffectLoudnessEnhancer.cpp

In this file the core is these two structs:

// effect_handle_t interface implementation for DRC effect
const struct effect_interface_s gLEInterface = {
    LE_process,        // the main effect-processing function
    LE_command,        // command channel between system and algorithm: sets the effect's
                       // parameters and state, tells it about the current track and device
    LE_getDescriptor,  // returns the effect's descriptor struct
    NULL,
};

// audio_effect_library_t is the standard third-party effect-library API
// This is the only symbol that needs to be exported
__attribute__ ((visibility ("default")))
audio_effect_library_t AUDIO_EFFECT_LIBRARY_INFO_SYM = {
    .tag = AUDIO_EFFECT_LIBRARY_TAG,
    .version = EFFECT_LIBRARY_API_VERSION,
    .name = "Loudness Enhancer Library",
    .implementor = "The Android Open Source Project",
    .create_effect = LELib_Create,    // creates the effect: does the initialization and
                                      // provides the effect_handle_t interface
    .release_effect = LELib_Release,
    .get_descriptor = LELib_GetDescriptor,
};

As you can see, when the library is opened and an effect instance is created, .create_effect = LELib_Create is invoked automatically.

Inside the create function:

//--- Effect Library Interface Implementation
int LELib_Create(const effect_uuid_t *uuid, int32_t sessionId __unused,
                 int32_t ioId __unused, effect_handle_t *pHandle) {
    // check the requested uuid against this library's descriptor
    memcmp(uuid, &gLEDescriptor.uuid, sizeof(effect_uuid_t));
    // allocate a LoudnessEnhancerContext
    LoudnessEnhancerContext *pContext = new LoudnessEnhancerContext;
    // store the core processing interface gLEInterface in the context and mark it uninitialized
    pContext->mItfe = &gLEInterface;
    pContext->mState = LOUDNESS_ENHANCER_STATE_UNINITIALIZED;
    pContext->mCompressor = NULL;
    // initialize the LoudnessEnhancer effect, mainly its parameters, into the context
    ret = LE_init(pContext);
    // hand the context back through pHandle and mark the effect initialized
    *pHandle = (effect_handle_t)pContext;
    pContext->mState = LOUDNESS_ENHANCER_STATE_INITIALIZED;
    ALOGV(" LELib_Create context is %p", pContext);
    return 0;
}
  1. LE_init() is implemented as follows
    It mainly initializes the effect's parameters.

int LE_init(LoudnessEnhancerContext *pContext)
{
    ALOGV("LE_init(%p)", pContext);
    pContext->mConfig.inputCfg.accessMode = EFFECT_BUFFER_ACCESS_READ;
    pContext->mConfig.inputCfg.channels = AUDIO_CHANNEL_OUT_STEREO;
    pContext->mConfig.inputCfg.format = AUDIO_FORMAT_PCM_16_BIT;
    pContext->mConfig.inputCfg.samplingRate = 44100;
    pContext->mConfig.inputCfg.bufferProvider.getBuffer = NULL;
    pContext->mConfig.inputCfg.bufferProvider.releaseBuffer = NULL;
    pContext->mConfig.inputCfg.bufferProvider.cookie = NULL;
    pContext->mConfig.inputCfg.mask = EFFECT_CONFIG_ALL;
    pContext->mConfig.outputCfg.accessMode = EFFECT_BUFFER_ACCESS_ACCUMULATE;
    pContext->mConfig.outputCfg.channels = AUDIO_CHANNEL_OUT_STEREO;
    pContext->mConfig.outputCfg.format = AUDIO_FORMAT_PCM_16_BIT;
    pContext->mConfig.outputCfg.samplingRate = 44100;
    pContext->mConfig.outputCfg.bufferProvider.getBuffer = NULL;
    pContext->mConfig.outputCfg.bufferProvider.releaseBuffer = NULL;
    pContext->mConfig.outputCfg.bufferProvider.cookie = NULL;
    pContext->mConfig.outputCfg.mask = EFFECT_CONFIG_ALL;

    pContext->mTargetGainmB = LOUDNESS_ENHANCER_DEFAULT_TARGET_GAIN_MB;
    float targetAmp = pow(10, pContext->mTargetGainmB/2000.0f); // mB to linear amplification
    ALOGV("LE_init(): Target gain=%dmB <=> factor=%.2fX", pContext->mTargetGainmB, targetAmp);

    if (pContext->mCompressor == NULL) {
        pContext->mCompressor = new le_fx::AdaptiveDynamicRangeCompression();
        pContext->mCompressor->Initialize(targetAmp, pContext->mConfig.inputCfg.samplingRate);
    }
    LE_setConfig(pContext, &pContext->mConfig);
    return 0;
}
  1. Who calls .create_effect = LELib_Create? The Java call flow
    The framework-level Java effect code lives under the
    \src\frameworks\base\media\java\android\media\audiofx\ directory.

Look at \frameworks\base\media\java\android\media\audiofx\LoudnessEnhancer.java:
when an app wants this effect it instantiates a LoudnessEnhancer object; its constructors are:

@ \src\frameworks\av\media\libeffects\data\audio_effects.conf
@ \src\frameworks\base\media\java\android\media\audiofx\AudioEffect.java
// EFFECT_TYPE_LOUDNESS_ENHANCER is the UUID the upper layer uses to pick the effect;
// it matches the one in audio_effects.conf.
public static final UUID EFFECT_TYPE_LOUDNESS_ENHANCER = UUID.fromString("fe3199be-aed0-413f-87bb-11260eb63cf1");
public static final UUID EFFECT_TYPE_NULL = UUID.fromString("ec7178ec-e5e1-4432-a3f4-4657e6795210");
// audioSession is the effect session id; 0 means a global (output-mix) effect
// ------------------------------->
// LoudnessEnhancer extends AudioEffect.
@ \frameworks\base\media\java\android\media\audiofx\LoudnessEnhancer.java
public class LoudnessEnhancer extends AudioEffect {
    private final static String TAG = "LoudnessEnhancer";

    public LoudnessEnhancer(int audioSession)
            throws IllegalStateException, IllegalArgumentException,
                   UnsupportedOperationException, RuntimeException {
        super(EFFECT_TYPE_LOUDNESS_ENHANCER, EFFECT_TYPE_NULL, 0, audioSession);
    }

    public LoudnessEnhancer(int priority, int audioSession)
            throws IllegalStateException, IllegalArgumentException,
                   UnsupportedOperationException, RuntimeException {
        super(EFFECT_TYPE_LOUDNESS_ENHANCER, EFFECT_TYPE_NULL, priority, audioSession);
    }
}

LoudnessEnhancer extends AudioEffect:

its constructor calls super(EFFECT_TYPE_LOUDNESS_ENHANCER, EFFECT_TYPE_NULL, 0, audioSession),
i.e. the parent-class constructor, passing in the UUID and the session id.

The parent constructor is as follows:

the AudioEffect parent class loads libaudioeffect_jni.so to initialize the native side,
and the constructor calls into JNI via native_setup, passing the effect's UUID.

@ \frameworks\base\media\java\android\media\audiofx\AudioEffect.java
public class AudioEffect {
    static {
        System.loadLibrary("audioeffect_jni");
        native_init();
    }

    /* UUID for Loudness Enhancer */
    public static final UUID EFFECT_TYPE_LOUDNESS_ENHANCER =
            UUID.fromString("fe3199be-aed0-413f-87bb-11260eb63cf1");

    public AudioEffect(UUID type, UUID uuid, int priority, int audioSession)
            throws IllegalArgumentException, UnsupportedOperationException, RuntimeException {
        // native initialization; the effect UUID is passed down
        int initResult = native_setup(new WeakReference<AudioEffect>(this),
                type.toString(), uuid.toString(), priority, audioSession,
                id, desc, ActivityThread.currentOpPackageName());
        mId = id[0];
        mDescriptor = desc[0];
        synchronized (mStateLock) {
            mState = STATE_INITIALIZED;
        }
    }
}

On the native side, the JNI code is at: @ \frameworks\base\media\jni\audioeffect\android_media_AudioEffect.cpp

Its main work:
// clear the current AudioEffect settings
// set up a helper class whose main member is the effect_callback_cookie
// create an AudioEffect object
// check that the AudioEffect initialized properly
// attach the AudioEffect to the Java object

@ \src\frameworks\base\media\jni\audioeffect\Android.bp
name: "libaudioeffect_jni"

@ \frameworks\base\media\jni\audioeffect\android_media_AudioEffect.cpp
static jint android_media_AudioEffect_native_setup(JNIEnv *env, jobject thiz, jobject weak_this,
        jstring type, jstring uuid, jint priority, jint sessionId, jintArray jId,
        jobjectArray javadesc, jstring opPackageName)
{
    ALOGV("android_media_AudioEffect_native_setup");
    sp<AudioEffect> lpAudioEffect;
    // clear the current AudioEffect settings
    setAudioEffect(env, thiz, 0);

    typeStr = env->GetStringUTFChars(type, NULL);  // the effect UUID (EFFECT_TYPE_LOUDNESS_ENHANCER)
    uuidStr = env->GetStringUTFChars(uuid, NULL);  // EFFECT_TYPE_NULL

    // set up the JNI storage; its main member is the effect_callback_cookie
    lpJniStorage = new AudioEffectJniStorage();
    lpJniStorage->mCallbackData.audioEffect_class = (jclass)env->NewGlobalRef(fields.clazzEffect);
    // we use a weak reference so the AudioEffect object can be garbage collected.
    lpJniStorage->mCallbackData.audioEffect_ref = env->NewGlobalRef(weak_this);
    ALOGV("setup: lpJniStorage: %p audioEffect_ref %p audioEffect_class %p, &mCallbackData %p",
          lpJniStorage, lpJniStorage->mCallbackData.audioEffect_ref,
          lpJniStorage->mCallbackData.audioEffect_class, &lpJniStorage->mCallbackData);

    // create the native AudioEffect object
    lpAudioEffect = new AudioEffect(typeStr, String16(opPackageNameStr.c_str()), uuidStr,
            priority, effectCallback, &lpJniStorage->mCallbackData,
            (audio_session_t)sessionId, AUDIO_IO_HANDLE_NONE);
    // check that the AudioEffect initialized properly
    lStatus = translateError(lpAudioEffect->initCheck());

    // get the effect descriptor
    desc = lpAudioEffect->descriptor();
    // attach the AudioEffect to the Java object
    setAudioEffect(env, thiz, lpAudioEffect);
    return (jint) AUDIOEFFECT_SUCCESS;
}

// The table below shows that native_setup maps to android_media_AudioEffect_native_setup.
// Dalvik VM type signatures
static const JNINativeMethod gMethods[] = {
    {"native_init",        "()V",  (void *)android_media_AudioEffect_native_init},
    {"native_setup",       "(Ljava/lang/Object;Ljava/lang/String;Ljava/lang/String;II[I[Ljava/lang/Object;Ljava/lang/String;)I",
                                   (void *)android_media_AudioEffect_native_setup},
    {"native_finalize",    "()V",  (void *)android_media_AudioEffect_native_finalize},
    {"native_release",     "()V",  (void *)android_media_AudioEffect_native_release},
    {"native_setEnabled",  "(Z)I", (void *)android_media_AudioEffect_native_setEnabled},
    {"native_getEnabled",  "()Z",  (void *)android_media_AudioEffect_native_getEnabled},
    {"native_hasControl",  "()Z",  (void *)android_media_AudioEffect_native_hasControl},
    {"native_setParameter","(I[BI[B)I",  (void *)android_media_AudioEffect_native_setParameter},
    {"native_getParameter","(I[BI[B)I",  (void *)android_media_AudioEffect_native_getParameter},
    {"native_command",     "(II[BI[B)I", (void *)android_media_AudioEffect_native_command},
    {"native_query_effects", "()[Ljava/lang/Object;", (void *)android_media_AudioEffect_native_queryEffects},
    {"native_query_pre_processing", "(I)[Ljava/lang/Object;", (void *)android_media_AudioEffect_native_queryPreProcessings},
};
  1. Creating the AudioEffect object: new AudioEffect()

step 1 : obtain the IAudioFlinger object over binder

step 2 : call audioFlinger->createEffect(), passing mDescriptor, which carries the effect UUID
step 3 : this goes through IAudioFlinger's createEffect()
step 4 : which crosses binder into AudioFlinger::createEffect()
step 5 : AudioFlinger checks whether the caller is the audio policy manager
step 6 : if the UUID is not null, fetch the effect's descriptor
step 7 : if the UUID is null, query the number of effects into numEffects, iterate to find the best match, and return its descriptor
step 8 : find the output stream matching the effect
step 9 : if no output was found, pick the most suitable one by walking the threads with the given sessionId
step 10 : from the chosen stream, find its thread
step 11 : create an AudioFlinger client: sp<Client> client
step 12 : bind the effect to the AudioFlinger client and hand the thread's id back to the caller
step 13 : return the effect handle created on that thread

// \frameworks\base\media\jni\audioeffect\android_media_AudioEffect.cpp
lpAudioEffect = new AudioEffect(typeStr, String16(opPackageNameStr.c_str()),
        uuidStr, priority, effectCallback, &lpJniStorage->mCallbackData,
        (audio_session_t)sessionId, AUDIO_IO_HANDLE_NONE);

// \src\frameworks\av\media\libaudioclient\AudioEffect.cpp
AudioEffect::AudioEffect(const effect_uuid_t *type, const String16& opPackageName,
        const effect_uuid_t *uuid, int32_t priority, effect_callback_t cbf, void* user,
        audio_session_t sessionId, audio_io_handle_t io)
    : mStatus(NO_INIT), mOpPackageName(opPackageName)
{
    ALOGV("Constructor string\n - type: %s\n - uuid: %s", typeStr, uuidStr);
    if (stringToGuid(typeStr, &type) == NO_ERROR) {
        pType = &type;
    }
    if (stringToGuid(uuidStr, &uuid) == NO_ERROR) {
        pUuid = &uuid;
    }
    mStatus = set(pType, pUuid, priority, cbf, user, sessionId, io);
}
-------------->
// \src\frameworks\av\media\libaudioclient\AudioEffect.cpp
status_t AudioEffect::set(const effect_uuid_t *type, const effect_uuid_t *uuid,
        int32_t priority, effect_callback_t cbf, void* user,
        audio_session_t sessionId, audio_io_handle_t io)
{
    sp<IEffect> iEffect;
    ALOGV("set %p mUserData: %p uuid: %p timeLow %08x", this, user, type, type ? type->timeLow : 0);
    if (mIEffect != 0) {
        ALOGW("Effect already in use");
        return INVALID_OPERATION;
    }

    // step 1: obtain the IAudioFlinger object over binder
    const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
    ---------->
        // \frameworks\av\media\libaudioclient\AudioSystem.cpp
        sp<IServiceManager> sm = defaultServiceManager();
        do {
            binder = sm->getService(String16("media.audio_flinger"));
            if (binder != 0) break;
            ALOGW("AudioFlinger not published, waiting...");
            usleep(500000); // 0.5 s
        } while (true);
    <----------

    mPriority = priority;
    mCbf = cbf;
    mUserData = user;
    mSessionId = sessionId;

    memset(&mDescriptor, 0, sizeof(effect_descriptor_t));
    mDescriptor.type = *(type != NULL ? type : EFFECT_UUID_NULL);
    mDescriptor.uuid = *(uuid != NULL ? uuid : EFFECT_UUID_NULL);

    mIEffectClient = new EffectClient(this);
    mClientPid = IPCThreadState::self()->getCallingPid();

    // step 2: ask AudioFlinger to create the effect; mDescriptor carries the UUID
    iEffect = audioFlinger->createEffect((effect_descriptor_t *)&mDescriptor,
            mIEffectClient, priority, io, mSessionId, mOpPackageName, mClientPid,
            &mStatus, &mId, &enabled);

    mEnabled = (volatile int32_t)enabled;
    cblk = iEffect->getCblk();
    mIEffect = iEffect;
    mCblkMemory = cblk;
    mCblk = static_cast<effect_param_cblk_t*>(cblk->pointer());
    int bufOffset = ((sizeof(effect_param_cblk_t) - 1) / sizeof(int) + 1) * sizeof(int);
    mCblk->buffer = (uint8_t *)mCblk + bufOffset;

    IInterface::asBinder(iEffect)->linkToDeath(mIEffectClient);
    ALOGV("set() %p OK effect: %s id: %d status %d enabled %d pid %d",
          this, mDescriptor.name, mId, mStatus, mEnabled, mClientPid);

    if (mSessionId > AUDIO_SESSION_OUTPUT_MIX) {
        AudioSystem::acquireAudioSessionId(mSessionId, mClientPid);
    }
    return mStatus;
}

step 3 : through IAudioFlinger's createEffect():

@ \src\frameworks\av\media\libaudioclient\include\media\IAudioFlinger.h
class IAudioFlinger : public IInterface
{
public:
    DECLARE_META_INTERFACE(AudioFlinger);
    virtual sp<IEffect> createEffect(
            effect_descriptor_t *pDesc,
            const sp<IEffectClient>& client,
            int32_t priority,
            // AudioFlinger doesn't take over handle reference from client
            audio_io_handle_t output,
            audio_session_t sessionId,
            const String16& callingPackage,
            pid_t pid,
            status_t *status,
            int *id,
            int *enabled) = 0;
}
-------------------->
@ \src\frameworks\av\media\libaudioclient\IAudioFlinger.cpp
virtual sp<IEffect> createEffect(
        effect_descriptor_t *pDesc, const sp<IEffectClient>& client, int32_t priority,
        audio_io_handle_t output, audio_session_t sessionId, const String16& opPackageName,
        pid_t pid, status_t *status, int *id, int *enabled)
{
    Parcel data, reply;
    sp<IEffect> effect;
    data.writeInterfaceToken(IAudioFlinger::getInterfaceDescriptor());
    data.write(pDesc, sizeof(effect_descriptor_t));
    data.writeStrongBinder(IInterface::asBinder(client));
    data.writeInt32(priority);
    data.writeInt32((int32_t) output);
    data.writeInt32(sessionId);
    data.writeString16(opPackageName);
    data.writeInt32((int32_t) pid);

    // step 4: cross binder into AudioFlinger::createEffect()
    // @ \src\frameworks\av\services\audioflinger\AudioFlinger.cpp
    status_t lStatus = remote()->transact(CREATE_EFFECT, data, &reply);

    lStatus = reply.readInt32();
    int tmp = reply.readInt32();
    if (id != NULL) {
        *id = tmp;
    }
    tmp = reply.readInt32();
    if (enabled != NULL) {
        *enabled = tmp;
    }
    effect = interface_cast<IEffect>(reply.readStrongBinder());
    reply.read(pDesc, sizeof(effect_descriptor_t));
    if (status != NULL) {
        *status = lStatus;
    }
    return effect;
}
----------->
step 4 : the binder call lands in AudioFlinger::createEffect()
// @ \src\frameworks\av\services\audioflinger\AudioFlinger.cpp
sp<IEffect> AudioFlinger::createEffect(
        effect_descriptor_t *pDesc, const sp<IEffectClient>& effectClient, int32_t priority,
        audio_io_handle_t io, audio_session_t sessionId, const String16& opPackageName,
        pid_t pid, status_t *status, int *id, int *enabled)
{
    const uid_t callingUid = IPCThreadState::self()->getCallingUid();
    ALOGV("createEffect pid %d, effectClient %p, priority %d, sessionId %d, io %d, factory %p",
          pid, effectClient.get(), priority, sessionId, io, mEffectsFactoryHal.get());

    // step 5: check whether the caller is the audio policy manager
    // Session AUDIO_SESSION_OUTPUT_STAGE is reserved for output stage effects
    // that can only be created by audio policy manager (running in same process)
    if (sessionId == AUDIO_SESSION_OUTPUT_STAGE && getpid_cached != pid) {
        lStatus = PERMISSION_DENIED;
        goto Exit;
    }

    {
        if (!EffectsFactoryHalInterface::isNullUuid(&pDesc->uuid)) {
            // step 6: if a uuid is specified, request the effect descriptor
            lStatus = mEffectsFactoryHal->getDescriptor(&pDesc->uuid, &desc);
        } else {
            // if uuid is not specified, look for an available implementation
            // of the required type in effect factory
            // step 7: query the number of effects into numEffects
            lStatus = mEffectsFactoryHal->queryNumberEffects(&numEffects);
            // walk the effects and pick the most suitable one via the session id
            for (uint32_t i = 0; i < numEffects; i++) {
                lStatus = mEffectsFactoryHal->getDescriptor(i, &desc);
                if (memcmp(&desc.type, &pDesc->type, sizeof(effect_uuid_t)) == 0) {
                    // If matching type found save effect descriptor. If the session is
                    // 0 and the effect is not auxiliary, continue enumeration in case
                    // an auxiliary version of this effect type is available
                    found = true;
                    d = desc;
                    if (sessionId != AUDIO_SESSION_OUTPUT_MIX ||
                            (desc.flags & EFFECT_FLAG_TYPE_MASK) == EFFECT_FLAG_TYPE_AUXILIARY) {
                        break;
                    }
                }
            }
            // For same effect type, chose auxiliary version over insert version if
            // connect to output mix (Compliance to OpenSL ES)
            if (sessionId == AUDIO_SESSION_OUTPUT_MIX &&
                    (d.flags & EFFECT_FLAG_TYPE_MASK) != EFFECT_FLAG_TYPE_AUXILIARY) {
                desc = d;
            }
        }

        // step 8: find the output stream matching the effect
        // return effect descriptor
        *pDesc = desc;
        if (io == AUDIO_IO_HANDLE_NONE && sessionId == AUDIO_SESSION_OUTPUT_MIX) {
            // if the output returned by getOutputForEffect() is removed before we lock the
            // mutex below, the call to checkPlaybackThread_l(io) below will detect it
            // and we will exit safely
            io = AudioSystem::getOutputForEffect(&desc);
            ALOGV("createEffect got output %d", io);
        }

        Mutex::Autolock _l(mLock);

        // If output is not specified try to find a matching audio session ID in one of the
        // output threads.
        // If output is 0 here, sessionId is neither SESSION_OUTPUT_STAGE nor SESSION_OUTPUT_MIX
        // because of code checking output when entering the function.
        // Note: io is never 0 when creating an effect on an input
        // step 9: no output found yet, so walk all playback/record/mmap threads by sessionId
        if (io == AUDIO_IO_HANDLE_NONE) {
            // look for the thread where the specified audio session is present
            // thread with same effect session is preferable
            for (size_t i = 0; i < mPlaybackThreads.size(); i++) {
                uint32_t sessionType = mPlaybackThreads.valueAt(i)->hasAudioSession(sessionId);
                if (sessionType != 0) {
                    io = mPlaybackThreads.keyAt(i);
                    if ((sessionType & ThreadBase::EFFECT_SESSION) != 0) {
                        break;
                    }
                }
            }
            if (io == AUDIO_IO_HANDLE_NONE) {
                for (size_t i = 0; i < mRecordThreads.size(); i++) {
                    if (mRecordThreads.valueAt(i)->hasAudioSession(sessionId) != 0) {
                        io = mRecordThreads.keyAt(i);
                        break;
                    }
                }
            }
            if (io == AUDIO_IO_HANDLE_NONE) {
                for (size_t i = 0; i < mMmapThreads.size(); i++) {
                    if (mMmapThreads.valueAt(i)->hasAudioSession(sessionId) != 0) {
                        io = mMmapThreads.keyAt(i);
                        break;
                    }
                }
            }
            // If no output thread contains the requested session ID, default to
            // first output. The effect chain will be moved to the correct output
            // thread when a track with the same session ID is created
            if (io == AUDIO_IO_HANDLE_NONE && mPlaybackThreads.size() > 0) {
                io = mPlaybackThreads.keyAt(0);
            }
            ALOGV("createEffect() got io %d for effect %s", io, desc.name);
        }

        // step 10: from the chosen stream, find its thread
        ThreadBase *thread = checkRecordThread_l(io);
        if (thread == NULL) {
            thread = checkPlaybackThread_l(io);
            if (thread == NULL) {
                thread = checkMmapThread_l(io);
                if (thread == NULL) {
                    ALOGE("createEffect() unknown output thread");
                    lStatus = BAD_VALUE;
                    goto Exit;
                }
            }
        } else {
            // Check if one effect chain was awaiting for an effect to be created on this
            // session and used it instead of creating a new one.
            sp<EffectChain> chain = getOrphanEffectChain_l(sessionId);
            if (chain != 0) {
                Mutex::Autolock _l(thread->mLock);
                thread->addEffectChain_l(chain);
            }
        }

        // step 11: create an AudioFlinger client
        sp<Client> client = registerPid(pid);
        ----->
            client = new Client(this, pid);
            =====> mAudioFlinger(audioFlinger)
                   mMemoryDealer = new MemoryDealer(heapSize, "AudioFlinger::Client");
            <====
            mClients.add(pid, client);
        <-----

        // step 12: bind the effect to the AudioFlinger client; the thread's id goes back in `id`
        // create effect on selected output thread
        bool pinned = (sessionId > AUDIO_SESSION_OUTPUT_MIX) && isSessionAcquired_l(sessionId);
        handle = thread->createEffect_l(client, effectClient, priority, sessionId,
                                        &desc, enabled, &lStatus, pinned);
        if (handle != 0 && id != NULL) {
            *id = handle->id();
        }
    }

    *status = lStatus;
    // step 13: return the effect handle created on that thread
    return handle;
}
  1. Fetching the effect descriptor: mEffectsFactoryHal->getDescriptor(&pDesc->uuid, &desc)

mEffectsFactoryHal->getDescriptor(&pDesc->uuid, &desc);
---------->
@ \frameworks\av\services\audioflinger\AudioFlinger.h
sp<EffectsFactoryHalInterface> mEffectsFactoryHal;
---------->
@ \frameworks\av\media\libaudiohal\include\media\audiohal\EffectsFactoryHalInterface.h
class EffectsFactoryHalInterface : public RefBase
{
public:
    // Returns the number of different effects in all loaded libraries.
    virtual status_t queryNumberEffects(uint32_t *pNumEffects) = 0;
    // Returns a descriptor of the next available effect.
    virtual status_t getDescriptor(uint32_t index, effect_descriptor_t *pDescriptor) = 0;
    virtual status_t getDescriptor(const effect_uuid_t *pEffectUuid, effect_descriptor_t *pDescriptor) = 0;
}
---------->
@ \frameworks\av\media\libeffects\factory\EffectsFactory.c
int EffectGetDescriptor(const effect_uuid_t *uuid, effect_descriptor_t *pDescriptor)
{
    int ret = init();   // initialize the factory first: load the effects conf file
    // --------> EffectLoadEffectConfig(); updateNumEffects(); <--------
    pthread_mutex_lock(&gLibLock);
    ret = findEffect(NULL, uuid, &l, &d);   // find the effect matching the UUID
    if (ret == 0) {
        *pDescriptor = *d;                  // and copy out its descriptor
    }
    pthread_mutex_unlock(&gLibLock);
    return ret;
}
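EffectGetDescriptor() belongs to the libeffects factory C API, which also exposes enumeration. A hedged sketch of listing what the factory loaded from audio_effects.conf (listEffects is my own wrapper; the EffectQuery* calls are the factory API):

#include <cstdio>
#include <media/EffectsFactoryApi.h>

// Enumerate every effect the factory knows about and print its descriptor.
int listEffects() {
    uint32_t n = 0;
    if (EffectQueryNumberEffects(&n) != 0) return -1;
    for (uint32_t i = 0; i < n; i++) {
        effect_descriptor_t d;
        if (EffectQueryEffect(i, &d) == 0)
            printf("effect %u: %s (%s)\n", i, d.name, d.implementor);
    }
    return 0;
}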
  1. Finding the output stream matching the effect: AudioSystem::getOutputForEffect(&desc)
    The request travels from AudioSystem through the IAudioPolicyService binder interface
    into AudioPolicyManager; the getDevicesForStream path traced below shows the same
    round trip and how the manager picks devices for a stream.

io = AudioSystem::getOutputForEffect(&desc);

@ \src\frameworks\av\media\libaudioclient\AudioSystem.cpp
status_t AudioSystem::getStreamVolumeIndex(audio_stream_type_t stream, int *index, audio_devices_t device)
{
    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
    return aps->getStreamVolumeIndex(stream, index, device);
}

@ \frameworks\av\media\libaudioclient\IAudioPolicyService.cpp
virtual audio_devices_t getDevicesForStream(audio_stream_type_t stream)
{
    Parcel data, reply;
    data.writeInterfaceToken(IAudioPolicyService::getInterfaceDescriptor());
    data.writeInt32(static_cast<uint32_t>(stream));
    remote()->transact(GET_DEVICES_FOR_STREAM, data, &reply);
    return (audio_devices_t) reply.readInt32();
}

@ \frameworks\av\services\audiopolicy\managerdefault\AudioPolicyManager.cpp
audio_devices_t AudioPolicyManager::getDevicesForStream(audio_stream_type_t stream) {
    // By checking the range of stream before calling getStrategy, we avoid
    // getStrategy's behavior for invalid streams. getStrategy would do a ALOGE
    // and then return STRATEGY_MEDIA, but we want to return the empty set.
    audio_devices_t devices = AUDIO_DEVICE_NONE;
    for (int curStream = 0; curStream < AUDIO_STREAM_FOR_POLICY_CNT; curStream++) {
        if (!streamsMatchForvolume(stream, (audio_stream_type_t)curStream)) {
            continue;
        }
        routing_strategy curStrategy = getStrategy((audio_stream_type_t)curStream);
        audio_devices_t curDevices = getDeviceForStrategy((routing_strategy)curStrategy, false /*fromCache*/);
        SortedVector<audio_io_handle_t> outputs = getOutputsForDevice(curDevices, mOutputs);
        for (size_t i = 0; i < outputs.size(); i++) {
            sp<SwAudioOutputDescriptor> outputDesc = mOutputs.valueFor(outputs[i]);
            if (outputDesc->isStreamActive((audio_stream_type_t)curStream)) {
                curDevices |= outputDesc->device();
            }
        }
        devices |= curDevices;
    }
    /* Filter SPEAKER_SAFE out of results, as AudioService doesn't know about it
       and doesn't really need to. */
    if (devices & AUDIO_DEVICE_OUT_SPEAKER_SAFE) {
        devices |= AUDIO_DEVICE_OUT_SPEAKER;
        devices &= ~AUDIO_DEVICE_OUT_SPEAKER_SAFE;
    }
    return devices;
}
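The SPEAKER_SAFE filtering at the end is pure bit manipulation; a worked example with an assumed input mask:

#include <system/audio.h>

// Assume the loop produced SPEAKER_SAFE | WIRED_HEADSET:
audio_devices_t devices = (audio_devices_t)
        (AUDIO_DEVICE_OUT_SPEAKER_SAFE | AUDIO_DEVICE_OUT_WIRED_HEADSET);
if (devices & AUDIO_DEVICE_OUT_SPEAKER_SAFE) {
    devices = (audio_devices_t)((devices | AUDIO_DEVICE_OUT_SPEAKER)
                                & ~AUDIO_DEVICE_OUT_SPEAKER_SAFE);
}
// devices is now SPEAKER | WIRED_HEADSET: the safe-speaker bit is folded into SPEAKER.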
1.2.3.2.4 Invoking the effect library

Section 1.2.3.2.3 covered how the effect library is initialized.

We know that every effect ends up attached to a playback thread,
so when an effect needs to run, it is naturally driven from inside that thread.

@ \frameworks\av\media\libeffects\loudness\EffectLoudnessEnhancer.cpp
// effect_handle_t interface implementation for DRC effect
const struct effect_interface_s gLEInterface = {
    LE_process,
    LE_command,
    LE_getDescriptor,
    NULL,
};

Staying with LoudnessEnhancer: as said above, the actual processing happens in LE_process. So who calls it?

@ \src\frameworks\av\services\audioflinger\AudioFlinger.h
DefaultKeyedVector< audio_io_handle_t, sp<PlaybackThread> > mPlaybackThreads;

mPlaybackThreads is a vector of playback threads; PlaybackThread is implemented in:
@ \src\frameworks\av\services\audioflinger\Threads.cpp
bool AudioFlinger::PlaybackThread::threadLoop()
{
    while (!exitPending()) {
        Vector< sp<EffectChain> > effectChains;
        // handle pending audio config events
        processConfigEvents_l();
        saveOutputTracks();

        // mMixerStatusIgnoringFastTracks is also updated internally
        mMixerStatus = prepareTracks_l(&tracksToRemove);

        if (mBytesRemaining == 0) {
            // for non-OFFLOAD / non-DIRECT threads, run the effect chains' process_l()
            // right before writing, which is where the effect processing happens
            // only process effects if we're going to write
            if (mSleepTimeUs == 0 && mType != OFFLOAD && mType != DIRECT) {
                for (size_t i = 0; i < effectChains.size(); i++) {
                    effectChains[i]->process_l();   // this is where LE_process eventually runs
                }
            }
        }
        // OFFLOAD and DIRECT threads likewise call process_l() on their own chains
        if (mType == OFFLOAD || mType == DIRECT) {
            for (size_t i = 0; i < effectChains.size(); i++) {
                effectChains[i]->process_l();
            }
        }
    }
}
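For a feel of what process_l() ultimately drives, here is a hedged sketch of a process() hook with the same shape as LE_process; sketch_process is mine, and where the real LE_process runs the adaptive compressor, a fixed gain with clamping stands in, assuming the stereo s16 buffers set up in LE_init():

#include <cerrno>
#include <cstdint>
#include <hardware/audio_effect.h>   // effect_handle_t, audio_buffer_t

static int32_t sketch_process(effect_handle_t /*self*/,
                              audio_buffer_t *in, audio_buffer_t *out) {
    if (in == NULL || out == NULL || in->frameCount != out->frameCount)
        return -EINVAL;
    const float gain = 2.0f;                            // placeholder for the adaptive gain
    for (size_t i = 0; i < in->frameCount * 2; i++) {   // 2 = stereo channels
        int32_t v = (int32_t)(in->s16[i] * gain);
        if (v > 32767) v = 32767;
        else if (v < -32768) v = -32768;                // clamp to the 16-bit range
        out->s16[i] = (int16_t)v;
    }
    return 0;
}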

Questions that came up during the analysis but are not yet fully understood, noted here to study later:

  1. binder IPC
  2. ProcessState and IPCThreadState (process management)
  3. SoundTriggerHwService (voice trigger / speech recognition)
  4. AudioFlinger itself
  5. VRAudioService (virtual-reality audio)
  6. C++ threads: \src\frameworks\av\services\audioflinger\Threads.cpp

