Introduction
In the previous articles I introduced how to use WebRTC's Native APIs, and through them you should already have a feel for how these APIs are normally used. Starting with this article, I will describe how I override the default implementations behind the Native API; this article first covers how to feed audio and video data from Java into the WebRTC library. My other notes on using WebRTC in Java are collected in <在 Java 中使用 WebRTC> (Using WebRTC in Java); readers interested in this topic can browse it there. The source code for this article can be obtained via the official-account QR code at the bottom of the article, or as a paid download.
Capturing Audio and Video Data
Capturing Audio Data from Java
Interface Overview
When introducing how to create the PeerConnectionFactory, we mentioned the AudioDeviceModule interface: it is what WebRTC uses to capture audio. By implementing this interface ourselves we can inject a custom audio capture module into WebRTC. Let's first take a quick look at what this interface contains.
// Only the key parts are kept here
class AudioDeviceModule : public rtc::RefCountInterface {
public:
// This callback is the key to audio capture: whenever new audio data is available, wrap it in the expected format and deliver it through this callback
// Full-duplex transportation of PCM audio
virtual int32_t RegisterAudioCallback(AudioTransport* audioCallback) = 0;
// Enumerate the available audio input/output devices; since we proxy the whole audio capture (playout) module, it is enough for these functions to report a single device
// Device enumeration
virtual int16_t PlayoutDevices() = 0;
virtual int16_t RecordingDevices() = 0;
virtual int32_t PlayoutDeviceName(uint16_t index,
char name[kAdmMaxDeviceNameSize],
char guid[kAdmMaxGuidSize]) = 0;
virtual int32_t RecordingDeviceName(uint16_t index,
char name[kAdmMaxDeviceNameSize],
char guid[kAdmMaxGuidSize]) = 0;
// When audio capture or playout is needed, the upper layer selects the device to use through the functions below; since the enumeration functions above report only one device, the upper layer will always use that device
// Device selection
virtual int32_t SetPlayoutDevice(uint16_t index) = 0;
virtual int32_t SetPlayoutDevice(WindowsDeviceType device) = 0;
virtual int32_t SetRecordingDevice(uint16_t index) = 0;
virtual int32_t SetRecordingDevice(WindowsDeviceType device) = 0;
// Audio transport initialization
virtual int32_t PlayoutIsAvailable(bool* available) = 0;
virtual int32_t InitPlayout() = 0;
virtual bool PlayoutIsInitialized() const = 0;
virtual int32_t RecordingIsAvailable(bool* available) = 0;
virtual int32_t InitRecording() = 0;
virtual bool RecordingIsInitialized() const = 0;
// Interfaces that start recording/playout
// Audio transport control
virtual int32_t StartPlayout() = 0;
virtual int32_t StopPlayout() = 0;
virtual bool Playing() const = 0;
virtual int32_t StartRecording() = 0;
virtual int32_t StopRecording() = 0;
virtual bool Recording() const = 0;
// The remaining part is playout-related; I did not use it
// Audio mixer initialization
virtual int32_t InitSpeaker() = 0;
virtual bool SpeakerIsInitialized() const = 0;
virtual int32_t InitMicrophone() = 0;
virtual bool MicrophoneIsInitialized() const = 0;
// Speaker volume controls
virtual int32_t SpeakerVolumeIsAvailable(bool* available) = 0;
virtual int32_t SetSpeakerVolume(uint32_t volume) = 0;
virtual int32_t SpeakerVolume(uint32_t* volume) const = 0;
virtual int32_t MaxSpeakerVolume(uint32_t* maxVolume) const = 0;
virtual int32_t MinSpeakerVolume(uint32_t* minVolume) const = 0;
// Microphone volume controls
virtual int32_t MicrophoneVolumeIsAvailable(bool* available) = 0;
virtual int32_t SetMicrophoneVolume(uint32_t volume) = 0;
virtual int32_t MicrophoneVolume(uint32_t* volume) const = 0;
virtual int32_t MaxMicrophoneVolume(uint32_t* maxVolume) const = 0;
virtual int32_t MinMicrophoneVolume(uint32_t* minVolume) const = 0;
// Speaker mute control
virtual int32_t SpeakerMuteIsAvailable(bool* available) = 0;
virtual int32_t SetSpeakerMute(bool enable) = 0;
virtual int32_t SpeakerMute(bool* enabled) const = 0;
// Microphone mute control
virtual int32_t MicrophoneMuteIsAvailable(bool* available) = 0;
virtual int32_t SetMicrophoneMute(bool enable) = 0;
virtual int32_t MicrophoneMute(bool* enabled) const = 0;
// Stereo support
virtual int32_t StereoPlayoutIsAvailable(bool* available) const = 0;
virtual int32_t SetStereoPlayout(bool enable) = 0;
virtual int32_t StereoPlayout(bool* enabled) const = 0;
virtual int32_t StereoRecordingIsAvailable(bool* available) const = 0;
virtual int32_t SetStereoRecording(bool enable) = 0;
virtual int32_t StereoRecording(bool* enabled) const = 0;
// Playout delay
virtual int32_t PlayoutDelay(uint16_t* delayMS) const = 0;
};
Implementation
After this quick look at AudioDeviceModule you probably already have an idea of how to proceed. Since I only need audio capture here, I implemented just a few of its interfaces. In short, my approach is to create a thread inside the AudioDeviceModule; when StartRecording is called, that thread starts invoking the relevant Java code at a fixed interval to fetch PCM audio data, and then hands the data over through the registered callback. Below is the core of my implementation.
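Before diving into the C++ side, it may help to see a rough sketch of what the Java class being called could look like. This is only inferred from the JNI method lookups in the wrapper below (samplingFrequency with signature ()I and capture with signature (I)Ljava/nio/ByteBuffer;); the class name AudioCapturer and the helper fillWithPcm are placeholders of mine, not names from the original project.
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Placeholder class name; the native wrapper only cares about the two methods below.
public class AudioCapturer {
    private ByteBuffer buffer;

    // Matches the JNI signature "()I" looked up by the C++ wrapper.
    public int samplingFrequency() {
        return 48000;
    }

    // Matches "(I)Ljava/nio/ByteBuffer;": fill a direct ByteBuffer with
    // sizeInBytes bytes of 16-bit little-endian PCM and return it.
    public ByteBuffer capture(int sizeInBytes) {
        if (buffer == null || buffer.capacity() != sizeInBytes) {
            buffer = ByteBuffer.allocateDirect(sizeInBytes).order(ByteOrder.LITTLE_ENDIAN);
        }
        buffer.clear();
        fillWithPcm(buffer); // application-specific: microphone, audio file, mixer output, ...
        return buffer;
    }

    private void fillWithPcm(ByteBuffer target) {
        // Intentionally left empty in this sketch; a microphone-backed example follows later.
    }
}
Because the native side reads the data via GetDirectBufferAddress/GetDirectBufferCapacity, the returned buffer must be a direct ByteBuffer whose capacity matches the requested size.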
// First, I defined two lower-level interfaces that correspond to the Java-side interfaces
class Capturer {
public:
virtual bool isJavaWrapper() {
return false;
}
virtual ~Capturer() {}
// Returns the sampling frequency in Hz of the audio data that this
// capturer produces.
virtual int SamplingFrequency() = 0;
// Replaces the contents of |buffer| with 10ms of captured audio data
// (see FakeAudioDevice::SamplesPerFrame). Returns true if the capturer can
// keep producing data, or false when the capture finishes.
virtual bool Capture(rtc::BufferT<int16_t> *buffer) = 0;
};
class Renderer {
public:
virtual ~Renderer() {}
// Returns the sampling frequency in Hz of the audio data that this
// renderer receives.
virtual int SamplingFrequency() const = 0;
// Renders the passed audio data and returns true if the renderer wants
// to keep receiving data, or false otherwise.
virtual bool Render(rtc::ArrayView<const int16_t> data) = 0;
};
// The implementations of these two interfaces are shown below
class JavaAudioCapturerWrapper final : public FakeAudioDeviceModule::Capturer {
public:
// The constructor mainly stores a global reference to the Java audio capturer class and looks up the methods we need
JavaAudioCapturerWrapper(jobject audio_capturer)
: java_audio_capturer(audio_capturer) {
WEBRTC_LOG("Instance java audio capturer wrapper.", INFO);
JNIEnv *env = ATTACH_CURRENT_THREAD_IF_NEEDED();
audio_capture_class = env->GetObjectClass(java_audio_capturer);
sampling_frequency_method = env->GetMethodID(audio_capture_class, "samplingFrequency", "()I");
capture_method = env->GetMethodID(audio_capture_class, "capture", "(I)Ljava/nio/ByteBuffer;");
WEBRTC_LOG("Instance java audio capturer wrapper end.", INFO);
}
// The destructor releases the Java references
~JavaAudioCapturerWrapper() {
JNIEnv *env = ATTACH_CURRENT_THREAD_IF_NEEDED();
if (audio_capture_class != nullptr) {
env->DeleteLocalRef(audio_capture_class);
audio_capture_class = nullptr;
}
if (java_audio_capturer) {
env->DeleteGlobalRef(java_audio_capturer);
java_audio_capturer = nullptr;
}
}
bool isJavaWrapper() override {
return true;
}
// Call the Java-side method to get the sampling rate; I call the Java method once and cache the value
int SamplingFrequency() override {
if (sampling_frequency_in_hz == 0) {
JNIEnv *env = ATTACH_CURRENT_THREAD_IF_NEEDED();
this->sampling_frequency_in_hz = env->CallIntMethod(java_audio_capturer, sampling_frequency_method);
}
return sampling_frequency_in_hz;
}
// Call the Java method to fetch PCM data; note that it must return 16-bit little-endian PCM
bool Capture(rtc::BufferT<int16_t> *buffer) override {
buffer->SetData(
FakeAudioDeviceModule::SamplesPerFrame(SamplingFrequency()), // this function computes the size of the data buffer
[&](rtc::ArrayView<int16_t> data) { // receives a data block of the size specified by the previous argument
JNIEnv *env = ATTACH_CURRENT_THREAD_IF_NEEDED();
size_t length;
jobject audio_data_buffer = env->CallObjectMethod(java_audio_capturer, capture_method,
data.size() * 2); // the Java side works in bytes, hence size * 2
void *audio_data_address = env->GetDirectBufferAddress(audio_data_buffer);
jlong audio_data_size = env->GetDirectBufferCapacity(audio_data_buffer);
length = (size_t) audio_data_size / 2; // one int16 is 2 bytes
memcpy(data.data(), audio_data_address, length * 2);
env->DeleteLocalRef(audio_data_buffer);
return length;
});
return buffer->size() == buffer->capacity();
}
private:
jobject java_audio_capturer;
jclass audio_capture_class;
jmethodID sampling_frequency_method;
jmethodID capture_method;
int sampling_frequency_in_hz = 0;
};
size_t FakeAudioDeviceModule::SamplesPerFrame(int sampling_frequency_in_hz) {
return rtc::CheckedDivExact(sampling_frequency_in_hz, kFramesPerSecond);
}
constexpr int kFrameLengthMs = 10; // capture data every 10 ms
constexpr int kFramesPerSecond = 1000 / kFrameLengthMs; // number of frames captured per second
// The renderer actually does nothing ^.^
class DiscardRenderer final : public FakeAudioDeviceModule::Renderer {
public:
explicit DiscardRenderer(int sampling_frequency_in_hz)
: sampling_frequency_in_hz_(sampling_frequency_in_hz) {}
int SamplingFrequency() const override {
return sampling_frequency_in_hz_;
}
bool Render(rtc::ArrayView<const int16_t>) override {
return true;
}
private:
int sampling_frequency_in_hz_;
};
// Next is the core implementation of the AudioDeviceModule; I use WebRTC's EventTimerWrapper and its cross-platform thread library to call the Java capture function periodically
std::unique_ptr<webrtc::EventTimerWrapper> tick_;
rtc::PlatformThread thread_;
// Constructor
FakeAudioDeviceModule::FakeAudioDeviceModule(std::unique_ptr<Capturer> capturer,
std::unique_ptr<Renderer> renderer,
float speed)
: capturer_(std::move(capturer)),
renderer_(std::move(renderer)),
speed_(speed),
audio_callback_(nullptr),
rendering_(false),
capturing_(false),
done_rendering_(true, true),
done_capturing_(true, true),
tick_(webrtc::EventTimerWrapper::Create()),
thread_(FakeAudioDeviceModule::Run, this, "FakeAudioDeviceModule") {
}
// Essentially just sets rendering_ to true
int32_t FakeAudioDeviceModule::StartPlayout() {
rtc::CritScope cs(&lock_);
RTC_CHECK(renderer_);
rendering_ = true;
done_rendering_.Reset();
return 0;
}
// Essentially just sets rendering_ to false
int32_t FakeAudioDeviceModule::StopPlayout() {
rtc::CritScope cs(&lock_);
rendering_ = false;
done_rendering_.Set();
return 0;
}
// Essentially just sets capturing_ to true
int32_t FakeAudioDeviceModule::StartRecording() {
rtc::CritScope cs(&lock_);
WEBRTC_LOG("Start audio recording", INFO);
RTC_CHECK(capturer_);
capturing_ = true;
done_capturing_.Reset();
return 0;
}
// Essentially just sets capturing_ to false
int32_t FakeAudioDeviceModule::StopRecording() {
rtc::CritScope cs(&lock_);
WEBRTC_LOG("Stop audio recording", INFO);
capturing_ = false;
done_capturing_.Set();
return 0;
}
// Set the EventTimer frequency and start the worker thread
int32_t FakeAudioDeviceModule::Init() {
RTC_CHECK(tick_->StartTimer(true, kFrameLengthMs / speed_));
thread_.Start();
thread_.SetPriority(rtc::kHighPriority);
return 0;
}
// Save the upper layer's audio callback; we will use it later to deliver the captured audio data
int32_t FakeAudioDeviceModule::RegisterAudioCallback(webrtc::AudioTransport *callback) {
rtc::CritScope cs(&lock_);
RTC_DCHECK(callback || audio_callback_);
audio_callback_ = callback;
return 0;
}
bool FakeAudioDeviceModule::Run(void *obj) {
static_cast<FakeAudioDeviceModule *>(obj)->ProcessAudio();
return true;
}
void FakeAudioDeviceModule::ProcessAudio() {
{
rtc::CritScope cs(&lock_);
if (needDetachJvm) {
WEBRTC_LOG("In audio device module process audio", INFO);
}
auto start = std::chrono::steady_clock::now();
if (capturing_) {
// Capture 10ms of audio. 2 bytes per sample.
// Fetch audio data from the Java side
const bool keep_capturing = capturer_->Capture(&recording_buffer_);
uint32_t new_mic_level;
if (keep_capturing) {
// Deliver the audio data through the callback: data pointer, number of samples, bytes per sample, channel count, sampling rate, delays, etc.
audio_callback_->RecordedDataIsAvailable(
recording_buffer_.data(), recording_buffer_.size(), 2, 1,
static_cast<const uint32_t>(capturer_->SamplingFrequency()), 0, 0, 0, false, new_mic_level);
}
// If no more audio data is available, stop capturing
if (!keep_capturing) {
capturing_ = false;
done_capturing_.Set();
}
}
if (rendering_) {
size_t samples_out;
int64_t elapsed_time_ms;
int64_t ntp_time_ms;
const int sampling_frequency = renderer_->SamplingFrequency();
// Pull audio data to play from the upper layer
audio_callback_->NeedMorePlayData(
SamplesPerFrame(sampling_frequency), 2, 1, static_cast<const uint32_t>(sampling_frequency),
playout_buffer_.data(), samples_out, &elapsed_time_ms, &ntp_time_ms);
// Render the audio data
const bool keep_rendering = renderer_->Render(
rtc::ArrayView<const int16_t>(playout_buffer_.data(), samples_out));
if (!keep_rendering) {
rendering_ = false;
done_rendering_.Set();
}
}
auto end = std::chrono::steady_clock::now();
auto diff = std::chrono::duration<double, std::milli>(end - start).count();
if (diff > kFrameLengthMs) {
WEBRTC_LOG("JNI capture audio data timeout, real capture time is " + std::to_string(diff) + " ms", DEBUG);
}
// If the AudioDeviceModule is about to be destroyed, detach the worker thread from the JVM
if (capturer_->isJavaWrapper() && needDetachJvm && !detached2Jvm) {
DETACH_CURRENT_THREAD_IF_NEEDED();
detached2Jvm = true;
} else if (needDetachJvm) {
detached2Jvm = true;
}
}
// Wait until the timer fires; once 10 ms have passed, the next audio-processing round is triggered
tick_->Wait(WEBRTC_EVENT_INFINITE);
}
// Destructor
FakeAudioDeviceModule::~FakeAudioDeviceModule() {
WEBRTC_LOG("In audio device module FakeAudioDeviceModule", INFO);
StopPlayout(); // stop playout
StopRecording(); // stop recording
needDetachJvm = true; // ask the worker thread to detach from the JVM
while (!detached2Jvm) { // busy-wait until the worker thread has detached
}
WEBRTC_LOG("In audio device module after detached2Jvm", INFO);
thread_.Stop(); // stop the worker thread
WEBRTC_LOG("In audio device module ~FakeAudioDeviceModule finished", INFO);
}
Incidentally, on the Java side I pass the audio data through direct memory (a direct ByteBuffer), mainly because it avoids extra memory copies.
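To illustrate this, here is one possible microphone-backed capturer built on the standard javax.sound.sampled API that reuses a single direct, little-endian ByteBuffer. It is a minimal sketch of my own, not code from the project; the class name and buffer sizing are assumptions based on the 48 kHz / 10 ms / 16-bit mono framing described above.
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.TargetDataLine;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class MicrophoneCapturer {
    private static final int SAMPLE_RATE = 48000;
    // 10 ms of 16-bit mono PCM: (48000 / 100) samples * 2 bytes.
    private static final int FRAME_BYTES = SAMPLE_RATE / 100 * 2;

    private final TargetDataLine line;
    private final byte[] tmp = new byte[FRAME_BYTES];
    private final ByteBuffer buffer =
            ByteBuffer.allocateDirect(FRAME_BYTES).order(ByteOrder.LITTLE_ENDIAN);

    public MicrophoneCapturer() throws LineUnavailableException {
        // 16-bit, mono, signed, little-endian, matching what the native side expects.
        AudioFormat format = new AudioFormat(SAMPLE_RATE, 16, 1, true, false);
        line = AudioSystem.getTargetDataLine(format);
        line.open(format, FRAME_BYTES * 4);
        line.start();
    }

    public int samplingFrequency() {
        return SAMPLE_RATE;
    }

    // Called from the native worker thread roughly every 10 ms.
    public ByteBuffer capture(int sizeInBytes) {
        int toRead = Math.min(sizeInBytes, tmp.length);
        int read = line.read(tmp, 0, toRead);
        buffer.clear();
        buffer.put(tmp, 0, read);
        return buffer;
    }
}
Since the native wrapper derives the data length from the direct buffer's capacity, reusing one buffer sized exactly for a 10 ms frame keeps the JNI path allocation-free.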
Capturing Video Data from Java
Capturing video data from Java is very similar to capturing audio data, except that the video capture module is injected when the VideoSource is created. One more thing to note is that the VideoCapturer must be created on the signaling thread.
//...
video_source = rtc->CreateVideoSource(rtc->CreateFakeVideoCapturerInSignalingThread());
//...
FakeVideoCapturer *RTC::CreateFakeVideoCapturerInSignalingThread() {
if (video_capturer) {
return signaling_thread->Invoke<FakeVideoCapturer *>(RTC_FROM_HERE,
rtc::Bind(&RTC::CreateFakeVideoCapturer, this,
video_capturer));
} else {
return nullptr;
}
}
The VideoCapturer interface does not require us to implement much either; the key pieces are the main loop, start, and stop. Here is my implementation.
// Constructor
FakeVideoCapturer::FakeVideoCapturer(jobject video_capturer)
: running_(false),
video_capturer(video_capturer),
is_screen_cast(false),
ticker(webrtc::EventTimerWrapper::Create()),
thread(FakeVideoCapturer::Run, this, "FakeVideoCapturer") {
// Cache the Java methods that will be used
JNIEnv *env = ATTACH_CURRENT_THREAD_IF_NEEDED();
video_capture_class = env->GetObjectClass(video_capturer);
get_width_method = env->GetMethodID(video_capture_class, "getWidth", "()I");
get_height_method = env->GetMethodID(video_capture_class, "getHeight", "()I");
get_fps_method = env->GetMethodID(video_capture_class, "getFps", "()I");
capture_method = env->GetMethodID(video_capture_class, "capture", "()Lpackage/name/of/rtc4j/model/VideoFrame;");
width = env->CallIntMethod(video_capturer, get_width_method);
previous_width = width;
height = env->CallIntMethod(video_capturer, get_height_method);
previous_height = height;
fps = env->CallIntMethod(video_capturer, get_fps_method);
// Declare the delivered data format as YUV420 (I420)
static const cricket::VideoFormat formats[] = {
{width, height, cricket::VideoFormat::FpsToInterval(fps), cricket::FOURCC_I420}
};
SetSupportedFormats({&formats[0], &formats[arraysize(formats)]});
// Set the main-loop interval according to the FPS reported by the Java side
RTC_CHECK(ticker->StartTimer(true, rtc::kNumMillisecsPerSec / fps));
thread.Start();
thread.SetPriority(rtc::kHighPriority);
// The Java side sends JPEG images, so libjpeg-turbo is used here to decompress them into YUV420
decompress_handle = tjInitDecompress();
WEBRTC_LOG("Create fake video capturer, " + std::to_string(width) + ", " + std::to_string(height), INFO);
}
// Destructor
FakeVideoCapturer::~FakeVideoCapturer() {
thread.Stop();
SignalDestroyed(this);
// Release Java references
JNIEnv *env = ATTACH_CURRENT_THREAD_IF_NEEDED();
if (video_capture_class != nullptr) {
env->DeleteLocalRef(video_capture_class);
video_capture_class = nullptr;
}
// Release the JPEG decompressor
if (decompress_handle) {
if (tjDestroy(decompress_handle) != 0) {
WEBRTC_LOG("Release decompress handle failed, reason is: " + std::string(tjGetErrorStr2(decompress_handle)),
ERROR);
}
}
WEBRTC_LOG("Free fake video capturer", INFO);
}
bool FakeVideoCapturer::Run(void *obj) {
static_cast<FakeVideoCapturer *>(obj)->CaptureFrame();
return true;
}
void FakeVideoCapturer::CaptureFrame() {
{
rtc::CritScope cs(&lock_);
if (running_) {
int64_t t0 = rtc::TimeMicros();
JNIEnv *env = ATTACH_CURRENT_THREAD_IF_NEEDED();
// Fetch one frame (a JPEG image) from the Java side
jobject java_video_frame = env->CallObjectMethod(video_capturer, capture_method);
if (java_video_frame == nullptr) { // if the returned frame is null, deliver a pure black frame
rtc::scoped_refptr<webrtc::I420Buffer> buffer = webrtc::I420Buffer::Create(previous_width,
previous_height);
webrtc::I420Buffer::SetBlack(buffer);
OnFrame(webrtc::VideoFrame(buffer, (webrtc::VideoRotation) previous_rotation, t0), previous_width,
previous_height);
return;
}
// The Java side uses direct memory to transfer the image
jobject java_data_buffer = env->CallObjectMethod(java_video_frame, GET_VIDEO_FRAME_BUFFER_GETTER_METHOD());
auto data_buffer = (unsigned char *) env->GetDirectBufferAddress(java_data_buffer);
auto length = (unsigned long) env->CallIntMethod(java_video_frame, GET_VIDEO_FRAME_LENGTH_GETTER_METHOD());
int rotation = env->CallIntMethod(java_video_frame, GET_VIDEO_FRAME_ROTATION_GETTER_METHOD());
int width;
int height;
// Decompress the JPEG header to get the width and height
tjDecompressHeader(decompress_handle, data_buffer, length, &width, &height);
previous_width = width;
previous_height = height;
previous_rotation = rotation;
// Decompress and deliver the YUV420 data with strides aligned to 32; 32-alignment makes encoding more efficient, and VideoToolbox encoding on macOS requires it
rtc::scoped_refptr<webrtc::I420Buffer> buffer =
webrtc::I420Buffer::Create(width, height,
width % 32 == 0 ? width : width / 32 * 32 + 32,
(width / 2) % 32 == 0 ? (width / 2) : (width / 2) / 32 * 32 + 32,
(width / 2) % 32 == 0 ? (width / 2) : (width / 2) / 32 * 32 + 32);
uint8_t *planes[] = {buffer->MutableDataY(), buffer->MutableDataU(), buffer->MutableDataV()};
int strides[] = {buffer->StrideY(), buffer->StrideU(), buffer->StrideV()};
tjDecompressToYUVPlanes(decompress_handle, data_buffer, length, planes, width, strides, height,
TJFLAG_FASTDCT | TJFLAG_NOREALLOC);
env->DeleteLocalRef(java_data_buffer);
env->DeleteLocalRef(java_video_frame);
// OnFrame is the call that hands the frame over to WebRTC
OnFrame(webrtc::VideoFrame(buffer, (webrtc::VideoRotation) rotation, t0), width, height);
}
}
ticker->Wait(WEBRTC_EVENT_INFINITE);
}
// Start
cricket::CaptureState FakeVideoCapturer::Start(
const cricket::VideoFormat &format) {
// SetCaptureFormat(&format); calling this causes a crash on CentOS
running_ = true;
SetCaptureState(cricket::CS_RUNNING);
WEBRTC_LOG("Start fake video capturing", INFO);
return cricket::CS_RUNNING;
}
// Stop
void FakeVideoCapturer::Stop() {
running_ = false;
// SetCaptureFormat(nullptr); calling this causes a crash on CentOS
SetCaptureState(cricket::CS_STOPPED);
WEBRTC_LOG("Stop fake video capturing", INFO);
}
// YUV420
bool FakeVideoCapturer::GetPreferredFourccs(std::vector<uint32_t> *fourccs) {
fourccs->push_back(cricket::FOURCC_I420);
return true;
}
// Delegate to the default implementation
void FakeVideoCapturer::AddOrUpdateSink(rtc::VideoSinkInterface<webrtc::VideoFrame> *sink,
const rtc::VideoSinkWants &wants) {
cricket::VideoCapturer::AddOrUpdateSink(sink, wants);
}
void FakeVideoCapturer::RemoveSink(rtc::VideoSinkInterface<webrtc::VideoFrame> *sink) {
cricket::VideoCapturer::RemoveSink(sink);
}
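To complete the picture, here is a hedged sketch of the Java-side video capturer and the VideoFrame model it returns, inferred only from the JNI lookups above (getWidth/getHeight/getFps/capture plus the buffer, length, and rotation getters used in CaptureFrame). The getter names, the placeholder class names, and the JPEG-producing helper are my assumptions, not the project's actual code.
import java.nio.ByteBuffer;

// VideoFrame.java, in the package referenced by the JNI signature above.
public class VideoFrame {
    private final ByteBuffer buffer; // direct buffer holding a JPEG image
    private final int length;        // number of valid bytes in the buffer
    private final int rotation;      // 0 / 90 / 180 / 270

    public VideoFrame(ByteBuffer buffer, int length, int rotation) {
        this.buffer = buffer;
        this.length = length;
        this.rotation = rotation;
    }

    public ByteBuffer getBuffer() { return buffer; }
    public int getLength() { return length; }
    public int getRotation() { return rotation; }
}

// VideoCapturer.java: the four methods the native wrapper looks up.
public class VideoCapturer {
    public int getWidth() { return 1280; }
    public int getHeight() { return 720; }
    public int getFps() { return 30; }

    // Called once per frame from the native capture thread. Returning null makes
    // the native side deliver a black frame, as CaptureFrame() above shows.
    public VideoFrame capture() {
        ByteBuffer jpeg = grabJpegIntoDirectBuffer(); // application-specific (camera, screen, file, ...)
        return jpeg == null ? null : new VideoFrame(jpeg, jpeg.remaining(), 0);
    }

    private ByteBuffer grabJpegIntoDirectBuffer() {
        // Intentionally left empty in this sketch: encode a frame to JPEG and copy it
        // into a direct ByteBuffer so the native side can read it without copying.
        return null;
    }
}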
That wraps up how to get audio and video data from the Java side. As you can see, it is not particularly difficult; treat this as a starting point, and hopefully my implementation helps you understand this part of the flow more quickly.
About This Article
More worthwhile articles are collected in 貝貝貓的文章目錄.
Copyright notice: Unless otherwise stated, all articles on this blog are released under the BY-NC-SA license. Please credit the source when republishing.
Attribution note: This article was written based on the references listed below, which may involve copying, modification, or adaptation. All images come from the Internet; if anything infringes your rights, please contact me and I will remove it immediately.
References
[1] JNI的替代者—使用JNA訪問Java外部功能接口
[2] Linux共享對象之編譯參數fPIC
[3] Android JNI 使用總結
[4] FFmpeg 倉庫