
Client-Side Streaming with librtmp: Establishing the Connection, Sending Codec Configuration, Sending A/V Packets, and Closing the Connection

Live streaming is generally done over RTMP, a TCP-based application-layer protocol. librtmp is an implementation of RTMP that hides the protocol details behind a simple API.

This article shows how to publish audio and video with librtmp; it does not dig into the details of the RTMP protocol itself.

Establishing the Connection

First, include the rtmp.h header and set up the connection:

// Allocate the RTMP object
_rtmp = RTMP_Alloc();
// Initialize it
RTMP_Init(_rtmp);
// Set the connection timeout, in seconds
_rtmp->Link.timeout = 30;
// Set the publishing URL
RTMP_SetupURL(_rtmp, "rtmp://xxx");
// Enable write mode -- required for publishing, do not forget it
RTMP_EnableWrite(_rtmp);
// Connect to the server
if (!RTMP_Connect(_rtmp, NULL)) {
    return -1;
}
// Create the stream
if (!RTMP_ConnectStream(_rtmp, 0)) {
    return -1;
}

Sending Codec Configuration

Before sending any audio or video packets, we must first send the codec configuration of the streams. This configuration is essential: without it the decoder cannot decode anything.

Video Configuration

For video streams encoded in the common H.264 format, the configuration is called the AVCDecoderConfigurationRecord; its structure is specified in detail in ISO/IEC 14496-15 (AVC file format).

Note that if we are pushing a video file, ffmpeg can read the record directly from the file, where it is stored in extradata. If the data comes from a live encoder, the AVCDecoderConfigurationRecord must be assembled from the SPS and PPS that the encoder outputs, for example:

- (int)buildAVCConfig:(char *)body
                  sps:(char *)sps spsLen:(int)spsLen
                  pps:(char *)pps ppsLen:(int)ppsLen {
    int i = 0;
    /* AVCDecoderConfigurationRecord */
    body[i++] = 0x01;               // configurationVersion
    body[i++] = sps[1];             // AVCProfileIndication
    body[i++] = sps[2];             // profile_compatibility
    body[i++] = sps[3];             // AVCLevelIndication
    body[i++] = 0xff;               // reserved + lengthSizeMinusOne = 3 (4-byte NALU lengths)
    /* SPS */
    body[i++] = 0xe1;               // reserved + numOfSequenceParameterSets = 1
    body[i++] = (spsLen >> 8) & 0xff;
    body[i++] = spsLen & 0xff;
    memcpy(&body[i], sps, spsLen);
    i += spsLen;
    /* PPS */
    body[i++] = 0x01;               // numOfPictureParameterSets = 1
    body[i++] = (ppsLen >> 8) & 0xff;
    body[i++] = ppsLen & 0xff;
    memcpy(&body[i], pps, ppsLen);
    i += ppsLen;
    return i;                       // total length of the record
}

Once the video configuration has been built, it can be packaged and sent. RTMP payloads use the FLV tag format, so a tag header must be prepended. Example:

- (void)sendVideoInfo:(void *)data size:(int)size {
    RTMPPacket *packet = (RTMPPacket *)malloc(RTMP_HEAD_SIZE + size + 5);
    memset(packet, 0, RTMP_HEAD_SIZE);
    packet->m_body = (char *)packet + RTMP_HEAD_SIZE;
    packet->m_nBodySize = size + 5;
    char *body = packet->m_body;
    memset(body, 0, size + 5);
    int i = 0;
    // High nibble 1 = keyframe, low nibble 7 = AVC
    body[i++] = 0x17;
    // AVCPacketType 0 = AVC sequence header
    body[i++] = 0x00;
    // CompositionTime, 3 bytes
    body[i++] = 0x00;
    body[i++] = 0x00;
    body[i++] = 0x00;
    // Copy the AVCDecoderConfigurationRecord
    memcpy(&body[i], data, size);

    packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
    packet->m_nChannel = 0x04;
    packet->m_nTimeStamp = 0;
    packet->m_hasAbsTimestamp = 0;
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
    packet->m_nInfoField2 = _rtmp->m_stream_id;
    RTMP_SendPacket(_rtmp, packet, true);
    free(packet);
}

Audio Configuration

For audio streams encoded in AAC, the configuration is called the AudioSpecificConfig; its structure is specified in detail in ISO/IEC 14496-3 (Audio).

Likewise, if we are pushing a file, ffmpeg can read it directly. If the data comes from a live encoder, the AudioSpecificConfig must be assembled from the sample rate, channel count, and so on.

The next step is packaging and sending. Example:

- (void)sendAudioInfo:(void *)data size:(int)size {
    RTMPPacket *packet = (RTMPPacket *)malloc(RTMP_HEAD_SIZE + size + 2);
    memset(packet, 0, RTMP_HEAD_SIZE);
    packet->m_body = (char *)packet + RTMP_HEAD_SIZE;
    packet->m_nBodySize = size + 2;
    char *body = packet->m_body;
    memset(body, 0, size + 2);
    int i = 0;
    // 0xAF: AAC, 44 kHz, 16-bit, stereo
    body[i++] = 0xAF;
    // AACPacketType 0 = AAC sequence header
    body[i++] = 0x00;
    // Copy the AudioSpecificConfig
    memcpy(&body[i], data, size);

    packet->m_packetType = RTMP_PACKET_TYPE_AUDIO;
    packet->m_nChannel = 0x04;
    packet->m_nTimeStamp = 0;
    packet->m_hasAbsTimestamp = 0;
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
    packet->m_nInfoField2 = _rtmp->m_stream_id;
    RTMP_SendPacket(_rtmp, packet, true);
    free(packet);
}

Sending Audio and Video Packets

Video Packets

When sending video data, note one thing: many video encoders already prepend the 4-byte NALU size to each output packet. In that case, the first four bytes must be skipped before packaging.

- (void)sendVideoData:(char *)data size:(int)size pts:(long)pts key:(BOOL)isKey {
    // Skip the 4-byte NALU size prefix if the encoder already added one
//    data += 4;
//    size -= 4;
    RTMPPacket *packet = (RTMPPacket *)malloc(RTMP_HEAD_SIZE + size + 9);
    memset(packet, 0, RTMP_HEAD_SIZE);
    packet->m_body = (char *)packet + RTMP_HEAD_SIZE;
    packet->m_nBodySize = size + 9;
    char *body = packet->m_body;
    memset(body, 0, size + 9);
    int i = 0;
    // High nibble: 1 = keyframe, 2 = inter frame; low nibble 7 = AVC
    body[i++] = isKey ? 0x17 : 0x27;
    // AVCPacketType 1 = AVC NALU
    body[i++] = 0x01;
    // CompositionTime, 3 bytes
    body[i++] = 0x00;
    body[i++] = 0x00;
    body[i++] = 0x00;
    // NALU size, big-endian
    body[i++] = (size >> 24) & 0xff;
    body[i++] = (size >> 16) & 0xff;
    body[i++] = (size >> 8) & 0xff;
    body[i++] = size & 0xff;
    memcpy(&body[i], data, size);

    packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
    packet->m_nInfoField2 = _rtmp->m_stream_id;
    packet->m_nChannel = 0x04;
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
    packet->m_hasAbsTimestamp = 0;
    packet->m_nTimeStamp = pts;
    RTMP_SendPacket(_rtmp, packet, true);
    free(packet);
}

Audio Packets

Audio data packets are sent much like the audio configuration. Example:

- (void)sendAudioData:(char *)data size:(int)size pts:(long)pts {
    RTMPPacket *packet = (RTMPPacket *)malloc(RTMP_HEAD_SIZE + size + 2);
    memset(packet, 0, RTMP_HEAD_SIZE);
    packet->m_body = (char *)packet + RTMP_HEAD_SIZE;
    packet->m_nBodySize = size + 2;
    char *body = packet->m_body;
    memset(body, 0, size + 2);
    int i = 0;
    // 0xAF: AAC, 44 kHz, 16-bit, stereo
    body[i++] = 0xAF;
    // AACPacketType 1 = raw AAC frame
    body[i++] = 0x01;
    memcpy(&body[i], data, size);

    packet->m_packetType = RTMP_PACKET_TYPE_AUDIO;
    packet->m_nChannel = 0x04;
    packet->m_nTimeStamp = pts;
    packet->m_hasAbsTimestamp = 0;
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
    packet->m_nInfoField2 = _rtmp->m_stream_id;
    RTMP_SendPacket(_rtmp, packet, true);
    free(packet);
}

Closing the Connection

When streaming ends, close the connection:

// Close the connection
RTMP_Close(_rtmp);
// Free the RTMP object
RTMP_Free(_rtmp);
_rtmp = NULL;
