
Using PaddleHub (PaddlePaddle) to Map a Video's Motion to Shadow Puppetry Frame by Frame and Synthesize a Video

Table of Contents

  • Preface
  • I. Environment Setup
  • II. Experiment Process
    • 1. Preparing the materials
    • 2. Human body keypoint detection
      • 2.1 Code
      • 2.2 Result
    • 3. Mapping the pose onto the shadow puppet
      • 3.1 Code
      • 3.2 Result
    • 4. Mapping the video's motion to shadow puppetry and synthesizing the video
      • 4.1 Code
      • 4.2 Result
    • 5. Video demo
  • Summary

Preface

Using PaddleHub, we detect the human body's keypoints and connect them to obtain the skeleton of each limb; overlaying shadow-puppet material onto the skeleton yields a shadow-puppet figure. Finally, converting the video's consecutive frames one by one produces the "shadow play" effect.


I. Environment Setup

Here is the environment I used:

Software & Environment
Python 3.7.0
PyCharm 2019.3.3

First, install PaddlePaddle and PaddleHub via pip:

pip install paddlepaddle
pip install paddlehub

Once that is done, use PaddleHub to install the human body keypoint detection model human_pose_estimation_resnet50_mpii:

hub install human_pose_estimation_resnet50_mpii==1.1.1

Reference:

AI shadow puppetry, preserving a vanishing art: https://aistudio.baidu.com/aistudio/projectdetail/764130?fromQRCode=1&shared=1

II. Experiment Process

1. Preparing the materials

First, create the following folders:

Folder  Purpose
work/imgs  stores the source images
work/output_pose  stores the keypoint-annotated images
work/mp4_img  stores the frames exported from the video
work/mp4_img_analysis  stores the shadow-puppet mapping results for the video frames
work/shadow_play_material  stores the shadow-puppet material images

The material images in shadow_play_material can be downloaded from the "Files" tab of the project linked above, under work/shadow_play_material. They are essential for composing the shadow-puppet figure.

Likewise, download the shadow-play background image background.jpg from the project's work directory; it is also required for composing the shadow-puppet images.
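The folders above can also be created from Python in one go; a small convenience sketch using only the standard library:

```python
import os

# Create every working folder listed in the table above
folders = [
    "work/imgs",                  # source images
    "work/output_pose",           # keypoint-annotated images
    "work/mp4_img",               # frames exported from the video
    "work/mp4_img_analysis",      # shadow-puppet mapping results
    "work/shadow_play_material",  # shadow-puppet material images
]
for folder in folders:
    os.makedirs(folder, exist_ok=True)  # no error if it already exists
```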

2. Human body keypoint detection

Put the source images into work/imgs. After detection, the corresponding annotated images are saved to work/output_pose.

2.1 Code

import os
import cv2
import paddlehub as hub
import matplotlib.pyplot as plt
from matplotlib.image import imread
import numpy as np

def show_img(img_path, size=8):
    '''
        Display an image read from a file path
    '''
    im = imread(img_path)
    plt.figure(figsize=(size, size))
    plt.axis("off")
    plt.imshow(im)


def img_show_bgr(image, size=8):
    '''
        Display an image read by OpenCV (BGR channel order)
    '''
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    plt.figure(figsize=(size, size))
    plt.imshow(image)

    plt.axis("off")
    plt.show()

show_img('work/imgs/2.jpg')
pose_estimation = hub.Module(name="human_pose_estimation_resnet50_mpii")
result = pose_estimation.keypoint_detection(paths=['work/imgs/2.jpg'], visualization=True, output_dir="work/output_pose/")
show_img('work/output_pose/2.jpg')

2.2 Result

Comparing the images before and after detection, the right-hand image shows that the body keypoints have been detected and marked.

(Screenshot: the source image next to the keypoint-annotated output)
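The code in the next section pulls keypoint coordinates out of the value returned by keypoint_detection. As a reference for that access pattern: the module returns a list with one dict per input image, whose "data" entry maps MPII keypoint names to [x, y] pixel coordinates. The coordinates below are invented purely for illustration:

```python
# Mock of the structure returned by keypoint_detection (coordinates invented)
result = [{
    "data": {
        "head_top": [320, 60], "upper_neck": [320, 140],
        "left_shoulder": [260, 160], "right_shoulder": [380, 160],
        "left_elbow": [240, 240], "right_elbow": [400, 240],
        "left_wrist": [230, 320], "right_wrist": [410, 320],
        "pelvis": [320, 360], "left_hip": [290, 360], "right_hip": [350, 360],
        "left_knee": [285, 470], "right_knee": [355, 470],
        "left_ankle": [280, 580], "right_ankle": [360, 580],
    }
}]

# Access pattern used throughout the code below:
head_x, head_y = result[0]["data"]["head_top"]
print(head_x, head_y)  # 320 60
```

These fifteen names are exactly the ones get_combine_img reads later in the article.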

3. Mapping the pose onto the shadow puppet

To achieve the shadow-play effect, we first parse the position of every body keypoint, then use that joint information to compute the position and rotation of each shadow-puppet limb, so that the puppet stays in sync with the body.

Two keypoints are enough to determine a limb's length and rotation angle. The length tells us how to scale the material image. Given the rotation angle, we first rotate the material about its center, then compute the rotated image's displacement to obtain the final mapped keypoint position. Mapping every material image onto its corresponding limb produces the motion-mapping effect.
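As a concrete sketch of that computation: the snippet below derives a limb's length and rotation angle from two keypoints using atan2 instead of the branch-per-quadrant logic of the article's get_angle, but it follows the same convention (a limb pointing straight down is 0 degrees, one pointing right is 270).

```python
import math

def limb_length_and_angle(first_point, second_point):
    """Length and rotation angle (degrees, 0-360) of the limb running
    from first_point to second_point; straight down is 0 degrees."""
    dx = second_point[0] - first_point[0]
    dy = second_point[1] - first_point[1]
    length = math.hypot(dx, dy)
    # Image y grows downward, so atan2(-dx, dy) yields 0 for "down",
    # 90 for "left", 180 for "up" and 270 for "right"
    angle = math.degrees(math.atan2(-dx, dy)) % 360
    return length, angle

print(limb_length_and_angle((100, 100), (100, 200)))  # limb pointing down: (100.0, 0.0)
print(limb_length_and_angle((100, 100), (200, 100)))  # limb pointing right: angle 270
```

This matches how append_img_by_sk_points later uses the two values: the length drives cv2.resize and the angle feeds the rotation matrix.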

3.1 Code

import os
import cv2
import paddlehub as hub
import matplotlib.pyplot as plt
from matplotlib.image import imread
import numpy as np

def show_img(img_path, size=8):
    '''
        Display an image read from a file path
    '''
    im = imread(img_path)
    plt.figure(figsize=(size, size))
    plt.axis("off")
    plt.imshow(im)


def img_show_bgr(image, size=8):
    '''
        Display an image read by OpenCV (BGR channel order)
    '''
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    plt.figure(figsize=(size, size))
    plt.imshow(image)

    plt.axis("off")
    plt.show()

pose_estimation = hub.Module(name="human_pose_estimation_resnet50_mpii")

def get_true_angel(value):
    '''
    Convert radians to degrees
    '''
    return value / np.pi * 180


def get_angle(x1, y1, x2, y2):
    '''
    Compute the rotation angle (degrees, 0-360) of the limb from
    (x1, y1) to (x2, y2); pointing straight down is 0 degrees
    '''
    dx = abs(x1 - x2)
    dy = abs(y1 - y2)
    result_angle = 0
    if x1 == x2:
        if y1 > y2:
            result_angle = 180
    else:
        if y1 != y2:
            the_angle = int(get_true_angel(np.arctan(dx / dy)))
        if x1 < x2:
            if y1 > y2:
                result_angle = -(180 - the_angle)
            elif y1 < y2:
                result_angle = -the_angle
            elif y1 == y2:
                result_angle = -90
        elif x1 > x2:
            if y1 > y2:
                result_angle = 180 - the_angle
            elif y1 < y2:
                result_angle = the_angle
            elif y1 == y2:
                result_angle = 90

    if result_angle < 0:
        result_angle = 360 + result_angle
    return result_angle


def rotate_bound(image, angle, key_point_y):
    '''
    Rotate the image and return the key point's offset
    '''
    # Image dimensions
    (h, w) = image.shape[:2]
    # Rotation center
    (cx, cy) = (w / 2, h / 2)
    # The key point must lie on the vertical line through the center
    (kx, ky) = cx, key_point_y
    d = abs(ky - cy)

    # Build the rotation matrix
    M = cv2.getRotationMatrix2D((cx, cy), -angle, 1.0)
    cos = np.abs(M[0, 0])
    sin = np.abs(M[0, 1])

    # Compute the new bounds of the rotated image
    nW = int((h * sin) + (w * cos))
    nH = int((h * cos) + (w * sin))

    # Compute the key point's displacement after rotation
    move_x = nW / 2 + np.sin(angle / 180 * np.pi) * d
    move_y = nH / 2 - np.cos(angle / 180 * np.pi) * d

    # Adjust the translation part of the rotation matrix (t_x, t_y)
    M[0, 2] += (nW / 2) - cx
    M[1, 2] += (nH / 2) - cy

    return cv2.warpAffine(image, M, (nW, nH)), int(move_x), int(move_y)


def get_distences(x1, y1, x2, y2):
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

def append_img_by_sk_points(img, append_img_path, key_point_y, first_point, second_point, append_img_reset_width=None, append_img_max_height_rate=1, middle_flip=False, append_img_max_height=None):
    '''
    Scale and rotate a limb material image, then paste it onto img between the two skeleton keypoints
    '''
    append_image = cv2.imdecode(np.fromfile(append_img_path, dtype=np.uint8), cv2.IMREAD_UNCHANGED)

    # Scale according to the limb length
    sk_height = int(get_distences(first_point[0], first_point[1], second_point[0], second_point[1]) * append_img_max_height_rate)
    # Cap the height if a maximum is given
    if append_img_max_height:
        sk_height = min(sk_height, append_img_max_height)

    sk_width = int(
        sk_height / append_image.shape[0] * append_image.shape[1]) if append_img_reset_width is None else int(
        append_img_reset_width)
    if sk_width <= 0:
        sk_width = 1
    if sk_height <= 0:
        sk_height = 1

    # Map the key point's y coordinate into the resized image
    key_point_y_new = int(key_point_y / append_image.shape[0] * sk_height)
    # Resize the material image
    append_image = cv2.resize(append_image, (sk_width, sk_height))

    img_height, img_width, _ = img.shape
    # Flip the material horizontally depending on which side of the image
    # the keypoints fall on; mainly for the head, which faces left by default
    if middle_flip:
        middle_x = int(img_width / 2)
        if first_point[0] < middle_x and second_point[0] < middle_x:
            append_image = cv2.flip(append_image, 1)

    # Rotation angle
    angle = get_angle(first_point[0], first_point[1], second_point[0], second_point[1])
    append_image, move_x, move_y = rotate_bound(append_image, angle=angle, key_point_y=key_point_y_new)
    app_img_height, app_img_width, _ = append_image.shape

    zero_x = first_point[0] - move_x
    zero_y = first_point[1] - move_y

    (b, g, r) = cv2.split(append_image)
    # Paste only pixels whose red channel lies in (200, 230), clipped
    # to the bounds of the target image
    for i in range(0, r.shape[0]):
        for j in range(0, r.shape[1]):
            if 230 > r[i][j] > 200 and 0 <= zero_y + i < img_height and 0 <= zero_x + j < img_width:
                img[zero_y + i][zero_x + j] = append_image[i][j]
    return img


body_img_path_map = {
    "right_hip": "./work/shadow_play_material/right_hip.jpg",
    "right_knee": "./work/shadow_play_material/right_knee.jpg",
    "left_hip": "./work/shadow_play_material/left_hip.jpg",
    "left_knee": "./work/shadow_play_material/left_knee.jpg",
    "left_elbow": "./work/shadow_play_material/left_elbow.jpg",
    "left_wrist": "./work/shadow_play_material/left_wrist.jpg",
    "right_elbow": "./work/shadow_play_material/right_elbow.jpg",
    "right_wrist": "./work/shadow_play_material/right_wrist.jpg",
    "head": "./work/shadow_play_material/head.jpg",
    "body": "./work/shadow_play_material/body.jpg"
}


def get_combine_img(img_path, pose_estimation=pose_estimation, body_img_path_map=body_img_path_map, backgroup_img_path='work/background.jpg'):
    '''
    Detect the body keypoints in the image, map the shadow-puppet limbs onto them, and return the original image concatenated with the result
    '''
    result = pose_estimation.keypoint_detection(paths=[img_path])
    image = cv2.imread(img_path)

    # Background image
    backgroup_image = cv2.imread(backgroup_img_path)
    image_flag = cv2.resize(backgroup_image, (image.shape[1], image.shape[0]))

    # Minimum material width
    min_width = int(get_distences(result[0]['data']['head_top'][0], result[0]['data']['head_top'][1],
                                  result[0]['data']['upper_neck'][0], result[0]['data']['upper_neck'][1]) / 3)

    # Right thigh
    append_img_reset_width = max(int(get_distences(result[0]['data']['pelvis'][0], result[0]['data']['pelvis'][1],
                                                   result[0]['data']['left_hip'][0],
                                                   result[0]['data']['right_hip'][1]) * 1.6), min_width)
    image_flag = append_img_by_sk_points(image_flag, body_img_path_map['right_hip'], key_point_y=10,
                                         first_point=result[0]['data']['right_hip'],
                                         second_point=result[0]['data']['right_knee'],
                                         append_img_reset_width=append_img_reset_width)

    # Right calf
    append_img_reset_width = max(int(get_distences(result[0]['data']['pelvis'][0], result[0]['data']['pelvis'][1],
                                                   result[0]['data']['left_hip'][0],
                                                   result[0]['data']['right_hip'][1]) * 1.5), min_width)
    image_flag = append_img_by_sk_points(image_flag, body_img_path_map['right_knee'], key_point_y=10,
                                         first_point=result[0]['data']['right_knee'],
                                         second_point=result[0]['data']['right_ankle'],
                                         append_img_reset_width=append_img_reset_width)

    # Left thigh
    append_img_reset_width = max(int(get_distences(result[0]['data']['pelvis'][0], result[0]['data']['pelvis'][1],
                                                   result[0]['data']['left_hip'][0],
                                                   result[0]['data']['left_hip'][1]) * 1.6), min_width)
    image_flag = append_img_by_sk_points(image_flag, body_img_path_map['left_hip'], key_point_y=0,
                                         first_point=result[0]['data']['left_hip'],
                                         second_point=result[0]['data']['left_knee'],
                                         append_img_reset_width=append_img_reset_width)

    # Left calf
    append_img_reset_width = max(int(get_distences(result[0]['data']['pelvis'][0], result[0]['data']['pelvis'][1],
                                                   result[0]['data']['left_hip'][0],
                                                   result[0]['data']['left_hip'][1]) * 1.5), min_width)
    image_flag = append_img_by_sk_points(image_flag, body_img_path_map['left_knee'], key_point_y=10,
                                         first_point=result[0]['data']['left_knee'],
                                         second_point=result[0]['data']['left_ankle'],
                                         append_img_reset_width=append_img_reset_width)

    # Right upper arm
    image_flag = append_img_by_sk_points(image_flag, body_img_path_map['left_elbow'], key_point_y=25,
                                         first_point=result[0]['data']['right_shoulder'],
                                         second_point=result[0]['data']['right_elbow'], append_img_max_height_rate=1.2)

    # Right forearm
    append_img_max_height = int(get_distences(result[0]['data']['right_shoulder'][0], result[0]['data']['right_shoulder'][1],
                                              result[0]['data']['right_elbow'][0], result[0]['data']['right_elbow'][1]) * 1.6)
    image_flag = append_img_by_sk_points(image_flag, body_img_path_map['left_wrist'], key_point_y=10,
                                         first_point=result[0]['data']['right_elbow'],
                                         second_point=result[0]['data']['right_wrist'], append_img_max_height_rate=1.5,
                                         append_img_max_height=append_img_max_height)

    # Left upper arm
    image_flag = append_img_by_sk_points(image_flag, body_img_path_map['right_elbow'], key_point_y=25,
                                         first_point=result[0]['data']['left_shoulder'],
                                         second_point=result[0]['data']['left_elbow'], append_img_max_height_rate=1.2)

    # Left forearm
    append_img_max_height = int(get_distences(result[0]['data']['left_shoulder'][0], result[0]['data']['left_shoulder'][1],
                                              result[0]['data']['left_elbow'][0], result[0]['data']['left_elbow'][1]) * 1.6)
    image_flag = append_img_by_sk_points(image_flag, body_img_path_map['right_wrist'], key_point_y=10,
                                         first_point=result[0]['data']['left_elbow'],
                                         second_point=result[0]['data']['left_wrist'], append_img_max_height_rate=1.5,
                                         append_img_max_height=append_img_max_height)

    # Head
    image_flag = append_img_by_sk_points(image_flag, body_img_path_map['head'], key_point_y=10,
                                         first_point=result[0]['data']['head_top'],
                                         second_point=result[0]['data']['upper_neck'], append_img_max_height_rate=1.2,
                                         middle_flip=True)

    # Torso
    append_img_reset_width = max(int(get_distences(result[0]['data']['left_shoulder'][0], result[0]['data']['left_shoulder'][1],
                                                   result[0]['data']['right_shoulder'][0], result[0]['data']['right_shoulder'][1]) * 1.2),
                                 min_width * 3)
    image_flag = append_img_by_sk_points(image_flag, body_img_path_map['body'], key_point_y=20,
                                         first_point=result[0]['data']['upper_neck'],
                                         second_point=result[0]['data']['pelvis'],
                                         append_img_reset_width=append_img_reset_width, append_img_max_height_rate=1.2)

    result_img = np.concatenate((image, image_flag), axis=1)

    return result_img

pos_img_path = 'work/output_pose/2.jpg'

result_img = get_combine_img(pos_img_path, pose_estimation, body_img_path_map)
img_show_bgr(result_img, size=10)

3.2 Result

(Screenshots: shadow-puppet mapping results, source image on the left and puppet rendering on the right)

4. Mapping the video's motion to shadow puppetry and synthesizing the video

Before this step, find a video to use as source material. I went to Bilibili's dance section, picked a dancer's video, and used its first minute.

(For testing, avoid long videos: my one-minute clip at 60 fps produced 3644 frames, and an i5-7300HQ CPU took 75 minutes to process them all.)

Save the video frame by frame as images, analyze the body pose in each frame and convert it into a shadow-puppet pose, then combine the analyzed frames back into a video.

Put the source video in the work directory.

4.1 Code

The imports and helper functions in this step (show_img, img_show_bgr, get_true_angel, get_angle, rotate_bound, get_distences, append_img_by_sk_points, body_img_path_map, and get_combine_img) are identical to those in 3.1 and are not repeated here; the new video-processing code follows.

# Source video file
input_video = 'work/test1.mp4'

def transform_video_to_image(video_file_path, img_path):
    '''
    Save every frame of the video as an image
    '''
    video_capture = cv2.VideoCapture(video_file_path)
    fps = video_capture.get(cv2.CAP_PROP_FPS)
    count = 0
    while(True):
        ret, frame = video_capture.read()
        if ret:
            cv2.imwrite(img_path + '%d.jpg' % count, frame)
            count += 1
        else:
            break
    video_capture.release()
    print('Saved %d video frames' % count)
    return fps

# Save every frame of the video as an image
fps = transform_video_to_image(input_video, 'work/mp4_img/')

def analysis_pose(input_frame_path, output_frame_path, is_print=True):
    '''
    Analyze the body pose in each frame, convert it into a shadow-puppet pose, and save the result
    '''
    file_items = os.listdir(input_frame_path)
    file_len = len(file_items)
    for i, file_item in enumerate(file_items):
        if is_print:
            print(i+1,'/', file_len, ' ', os.path.join(output_frame_path, file_item))
        combine_img = get_combine_img(os.path.join(input_frame_path, file_item))
        cv2.imwrite(os.path.join(output_frame_path, file_item), combine_img)

# Analyze the body pose in each frame, convert it into a shadow-puppet pose, and save the result
analysis_pose('work/mp4_img/', 'work/mp4_img_analysis/', is_print=False)


def combine_image_to_video(comb_path, output_file_path, fps=30, is_print=False):
    '''
        Combine the images into a video
    '''
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')  # lowercase 'mp4v' for .mp4 output

    file_items = os.listdir(comb_path)
    file_len = len(file_items)
    # print(comb_path, file_items)
    if file_len > 0:
        temp_img = cv2.imread(os.path.join(comb_path, file_items[0]))
        img_height, img_width = temp_img.shape[0], temp_img.shape[1]

        out = cv2.VideoWriter(output_file_path, fourcc, fps, (img_width, img_height))

        for i in range(file_len):
            pic_name = os.path.join(comb_path, str(i) + ".jpg")
            if is_print:
                print(i + 1, '/', file_len, ' ', pic_name)
            img = cv2.imread(pic_name)
            out.write(img)
        out.release()

# Combine the images into a video
combine_image_to_video('work/mp4_img_analysis/', 'work/mp4_analysis.mp4', fps)

4.2 Result

(Screenshots: sample frames from the synthesized video, original frame on the left and shadow-puppet rendering on the right)

5. Video demo

https://www.bilibili.com/video/BV1XN411f7dT/

Summary

That covers using PaddleHub (PaddlePaddle) to map a video's motion to shadow puppetry frame by frame and synthesize the result into a video. This article is only a brief introduction to how PaddleHub can be used; if anything here is poorly written, your suggestions are very welcome.