
Android Tech Share | Super Simple: Adding a Beauty Filter to Android WebRTC

  • Analyzing the video capture and rendering pipeline

    Before adding the filter feature, we need a basic understanding of how WebRTC captures video.

WebRTC defines a VideoCapturer interface, which declares the camera operations: initialization, starting preview, stopping preview, and disposal.
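For reference, the interface looks roughly like this (rendered here in Kotlin; the upstream class is Java, so treat this as a sketch of its shape rather than the exact source):

interface VideoCapturer {
    fun initialize(
        surfaceTextureHelper: SurfaceTextureHelper,
        applicationContext: Context,
        capturerObserver: CapturerObserver
    )
    fun startCapture(width: Int, height: Int, framerate: Int)
    fun stopCapture()
    fun changeCaptureFormat(width: Int, height: Int, framerate: Int)
    fun dispose()
    val isScreencast: Boolean
}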

The base implementation is CameraCapturer, with two concrete subclasses, Camera1Capturer and Camera2Capturer; there is even a screen-sharing implementation (ScreenCapturerAndroid).
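The createVideoCapture() used below is the author's own helper; one plausible implementation (a sketch, assuming it lives in an Activity so applicationContext is in scope, and picking the front camera via Camera2Enumerator):

fun createVideoCapture(): CameraVideoCapturer? {
    val enumerator = Camera2Enumerator(applicationContext)
    // Prefer a front-facing camera, the usual case for beauty filters.
    val deviceName = enumerator.deviceNames.firstOrNull { enumerator.isFrontFacing(it) }
        ?: enumerator.deviceNames.firstOrNull()
        ?: return null
    return enumerator.createCapturer(deviceName, null /* CameraEventsHandler */)
}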

Starting video capture in WebRTC is then very simple:

val videoCapture = createVideoCapture()
videoSource = factory.createVideoSource(videoCapture.isScreencast)
videoCapture.initialize(surfaceTextureHelper, applicationContext, videoSource?.capturerObserver)
videoCapture.startCapture(480, 640, 30)
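Here surfaceTextureHelper and factory are assumed to already exist; for completeness, they are typically created like this (a sketch of the usual PeerConnectionFactory setup):

val eglBase = EglBase.create()
val surfaceTextureHelper = SurfaceTextureHelper.create("CaptureThread", eglBase.eglBaseContext)

PeerConnectionFactory.initialize(
    PeerConnectionFactory.InitializationOptions.builder(applicationContext)
        .createInitializationOptions()
)
val factory = PeerConnectionFactory.builder().createPeerConnectionFactory()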

Let's focus on the VideoSource class and its capturerObserver.

VideoSource's internal CapturerObserver implements the following method:

@Override
public void onFrameCaptured(VideoFrame frame) {
  final VideoProcessor.FrameAdaptationParameters parameters =
      nativeAndroidVideoTrackSource.adaptFrame(frame);
  synchronized (videoProcessorLock) {
    if (videoProcessor != null) {
      videoProcessor.onFrameCaptured(frame, parameters);
      return;
    }
  }
  VideoFrame adaptedFrame = VideoProcessor.applyFrameAdaptationParameters(frame, parameters);
  if (adaptedFrame != null) {
    nativeAndroidVideoTrackSource.onFrameCaptured(adaptedFrame);
    adaptedFrame.release();
  }
}

Every captured video frame is delivered to this onFrameCaptured callback, where crop/scale adaptation is applied before the frame is handed to the native layer through nativeAndroidVideoTrackSource.

The key piece is the VideoProcessor object, which, as far as I can tell, was added in February 2019. VideoSource exposes a setVideoProcessor method for installing one. As the method above shows, if a VideoProcessor is set, the frame is routed to its onFrameCaptured; otherwise it goes straight to the native layer.
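Installing or removing a processor is a single call; passing null restores the direct path (filterProcessor below is a placeholder for any VideoProcessor implementation):

videoSource?.setVideoProcessor(filterProcessor) // frames now flow through the processor
videoSource?.setVideoProcessor(null)            // back to the direct-to-native path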

VideoProcessor is therefore a very convenient hook for processing frames before they are sent. Let's look at the interface first.

public interface VideoProcessor extends CapturerObserver {
  public static class FrameAdaptationParameters {
    ...

    public FrameAdaptationParameters(int cropX, int cropY, int cropWidth, int cropHeight,
        int scaleWidth, int scaleHeight, long timestampNs, boolean drop) {
      ...
    }
  }

  default void onFrameCaptured(VideoFrame frame, FrameAdaptationParameters parameters) {
    VideoFrame adaptedFrame = applyFrameAdaptationParameters(frame, parameters);
    if (adaptedFrame != null) {
      onFrameCaptured(adaptedFrame);
      adaptedFrame.release();
    }
  }

  ...
}

Note that the two-argument onFrameCaptured(frame, parameters) called from VideoSource is not CapturerObserver's onFrameCaptured, so the frame does not go to the native layer right away; this default method first applies the crop/scale parameters to the VideoFrame and only then passes it down.

So this is exactly where we can apply beauty-filter processing to the video frames.

class FilterProcessor : VideoProcessor {

    private var videoSink: VideoSink? = null

    override fun onCapturerStarted(success: Boolean) {
    }

    override fun onCapturerStopped() {
    }

    override fun onFrameCaptured(frame: VideoFrame) {
        val newFrame = frame // TODO: apply beauty-filter processing to the VideoFrame here
        videoSink?.onFrame(newFrame)
    }

    override fun setSink(sink: VideoSink?) {
        // The sink renders the frame and passes it into the native layer.
        videoSink = sink
    }
}

val videoCapture = createVideoCapture()
videoSource = factory.createVideoSource(videoCapture.isScreencast)
videoSource?.setVideoProcessor(FilterProcessor()) // install the processor
videoCapture.initialize(surfaceTextureHelper, applicationContext, videoSource?.capturerObserver)
videoCapture.startCapture(480, 640, 30)
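For completeness, the matching teardown when capture is no longer needed (a sketch; stop before disposing):

videoCapture.stopCapture()
videoCapture.dispose()
videoSource?.dispose()
surfaceTextureHelper.dispose()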

For the beauty effect itself, you can use GPUImage or a commercial beauty SDK.
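As a rough illustration of the GPUImage route (a sketch only: it assumes the frame has already been converted to a Bitmap called inputBitmap, and uses a Gaussian blur as a crude stand-in for real skin smoothing; a production pipeline would stay on the GPU texture rather than round-tripping through Bitmap):

val gpuImage = GPUImage(context)
gpuImage.setFilter(GPUImageGaussianBlurFilter())
val filtered: Bitmap = gpuImage.getBitmapWithFilterApplied(inputBitmap)

An alternative to VideoProcessor is to wrap the CapturerObserver itself: proxy the observer that gets passed to initialize(), process each frame (delegated here to an RTCVideoEffector), and forward the result to the original observer: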

public class CapturerObserverProxy implements CapturerObserver {
    public static final String TAG = CapturerObserverProxy.class.getSimpleName();

    private CapturerObserver originalObserver;
    private RTCVideoEffector videoEffector;

    public CapturerObserverProxy(final SurfaceTextureHelper surfaceTextureHelper,
                                 CapturerObserver observer,
                                 RTCVideoEffector effector) {

        this.originalObserver = observer;
        this.videoEffector = effector;

        // Initialize the effector on the capture thread (the SurfaceTextureHelper's handler thread).
        final Handler handler = surfaceTextureHelper.getHandler();
        ThreadUtils.invokeAtFrontUninterruptibly(handler, () ->
                videoEffector.init(surfaceTextureHelper)
        );
    }

    @Override
    public void onCapturerStarted(boolean success) {
        this.originalObserver.onCapturerStarted(success);
    }

    @Override
    public void onCapturerStopped() {
        this.originalObserver.onCapturerStopped();
    }

    @Override
    public void onFrameCaptured(VideoFrame frame) {
        if (this.videoEffector.needToProcessFrame()) {
            // Convert to I420, run the effect, then wrap the result in a new VideoFrame.
            VideoFrame.I420Buffer originalI420Buffer = frame.getBuffer().toI420();
            VideoFrame.I420Buffer effectedI420Buffer =
                    this.videoEffector.processByteBufferFrame(
                            originalI420Buffer, frame.getRotation(), frame.getTimestampNs());

            VideoFrame effectedVideoFrame = new VideoFrame(
                    effectedI420Buffer, frame.getRotation(), frame.getTimestampNs());
            // toI420() retained the buffer, so release our reference once it is no longer needed.
            originalI420Buffer.release();
            this.originalObserver.onFrameCaptured(effectedVideoFrame);
        } else {
            this.originalObserver.onFrameCaptured(frame);
        }
    }
}
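Wiring it up from Kotlin (a sketch; videoEffector is assumed to be an RTCVideoEffector instance, the effect engine the proxy delegates to):

val observerProxy = CapturerObserverProxy(
    videoCapturerSurfaceTextureHelper,
    videoSource.capturerObserver, // the original observer that feeds the native layer
    videoEffector
)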

Finally, pass the proxy to the capturer in place of the original observer:

videoCapturer.initialize(videoCapturerSurfaceTextureHelper, context, observerProxy);