Reference articles: "WebRTC研究:包組時間差計算-InterArrival" and "WebRTC研究:Trendline濾波器-TrendlineEstimator", both by 劍癡乎.
I found them well written; the figures here are taken from those articles.
PS: There is still a lot I haven't fully understood; these notes only record the rough flow.
1 Computing receive_time from the RTP extension header Sequence Number and the RTCP Feedback packet
Let's look at what the actual packets look like.
"bede" is the 1-byte extension identifier. The extension ID here is 3 (it is negotiated per session and can change; you can find it in the WebRTC SDP). Extension Data takes 2 bytes and holds the transport-wide sequence number, incrementing from 1.
![](https://img.laitimes.com/img/9ZDMuAjOiMmIsIjOiQnIsIyZuBnL4QGZjRzMkFzYyETYlZTMidjMjRzN5EDMjdTO2EWYkFzLc52YucWbp5GZzNmLn9Gbi1yZtl2Lc9CX6MHc0RHaiojIsJye.png)
WebRTC defines the RTCP transport feedback packet with FMT = 15 and PT = 205.
Packet Chunks: they record the status of each packet, i.e. whether and when the receiver got each RTP packet. You can think of them as ACKs for RTP packets that carry time offsets.
Each Packet Chunk takes 2 bytes, and there are three kinds.
1 Run Length Chunk: says how many consecutive packets share the same arrival status.
Layout: a leading 0 bit | 2-bit status | 13-bit run length.
Status values:
00 - Packet not received
01 - Packet received, small delta
10 - Packet received, large or negative delta
11 - [Reserved]
2 One-bit Status Vector Chunk: 1 bit per status, so it describes 14 packets. Layout: the prefix bits "10" + 14 status bits; recognized by masking with 0x8000 (top bit set, second bit clear). Here 0 means not received and 1 means received (small delta).
3 Two-bit Status Vector Chunk: 2 bits per status, describing 7 packets. Layout: the prefix bits "11" + 7 two-bit statuses; recognized by masking with 0xC000.
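The three chunk layouts above can be sketched as a small decoder. This is an illustrative decoder written from the bit layouts described here, not WebRTC's actual parser (all names are mine):

```cpp
#include <cstdint>
#include <vector>

enum Status { kNotReceived = 0, kSmallDelta = 1, kLargeOrNegativeDelta = 2 };

// Decode one 16-bit packet chunk into per-packet statuses.
std::vector<Status> DecodeChunk(uint16_t chunk) {
  std::vector<Status> out;
  if ((chunk & 0x8000) == 0) {
    // Run Length Chunk: 0 | 2-bit status | 13-bit run length.
    Status s = static_cast<Status>((chunk >> 13) & 0x3);
    uint16_t run = chunk & 0x1FFF;
    out.assign(run, s);
  } else if ((chunk & 0xC000) == 0x8000) {
    // One-bit Status Vector Chunk: "10" + 14 one-bit symbols.
    for (int i = 13; i >= 0; --i)
      out.push_back(((chunk >> i) & 1) ? kSmallDelta : kNotReceived);
  } else {
    // Two-bit Status Vector Chunk: "11" + 7 two-bit symbols.
    for (int i = 6; i >= 0; --i)
      out.push_back(static_cast<Status>((chunk >> (2 * i)) & 0x3));
  }
  return out;
}
```

For example, 0x2005 is a run-length chunk meaning "5 packets received with small delta".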
recv delta: the per-packet arrival-time offsets (one per "received" status).
zero padding: zero bytes appended for alignment.
The later computation mainly uses the Reference Time. (?)
2 Inter-arrival packet-group delta computation: InterArrival
The network delay filter, i.e. the Trendline filter, needs three inputs: the send-time delta (timestamp_delta), the arrival-time delta (arrival_time_delta) and the packet-group size delta (packet_size_delta) -- the last one does not seem to be used (?).
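As a rough sketch of what these three deltas mean (this is not WebRTC's InterArrival class; the struct and function names are mine), each one is simply a difference between two consecutive packet groups:

```cpp
#include <cstdint>

struct PacketGroup {
  int64_t last_send_ms;     // send time of the last packet in the group
  int64_t last_arrival_ms;  // arrival time of the last packet in the group
  int64_t size_bytes;       // accumulated payload size of the group
};

struct Deltas {
  int64_t send_delta_ms;     // cf. timestamp_delta
  int64_t arrival_delta_ms;  // cf. arrival_time_delta
  int64_t size_delta_bytes;  // cf. packet_size_delta
};

// Deltas between the previous and the current packet group.
Deltas ComputeGroupDeltas(const PacketGroup& prev, const PacketGroup& cur) {
  return {cur.last_send_ms - prev.last_send_ms,
          cur.last_arrival_ms - prev.last_arrival_ms,
          cur.size_bytes - prev.size_bytes};
}
```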
bool calculated_deltas = inter_arrival_for_packet->ComputeDeltas(
    packet_feedback.sent_packet.send_time,  // send time of the RTP packet
    packet_feedback.receive_time,           // arrival time at the receiver
    at_time,                                // local time when the RTCP feedback is processed
    packet_size.bytes(),
    &send_delta, &recv_delta, &size_delta);
In TransportFeedbackAdapter::ProcessTransportFeedbackInner():
packet_feedback.receive_time =
    current_offset_ + packet_offset.RoundDownTo(TimeDelta::Millis(1));
const TimeDelta delta = feedback.GetBaseDelta(last_timestamp_)
                            .RoundDownTo(TimeDelta::Millis(1));
current_offset_ += delta;
current_offset_ comes from the reference time; it looks as if the recv deltas are not added here (?) -- to be confirmed.
base_time_ticks_ = ByteReader<int32_t, 3>::ReadBigEndian(&payload[12]);
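For intuition, here is a hedged sketch of how per-packet receive times could be reconstructed from the feedback packet's 24-bit reference time (in 64 ms units) and the recv deltas (in 250 µs units). The function name and the accumulation loop are my illustration, not WebRTC code:

```cpp
#include <cstdint>
#include <vector>

// Reconstruct absolute receive times (in microseconds) from the base
// reference time plus accumulated recv deltas.
std::vector<int64_t> ReceiveTimesUs(int32_t base_time_ticks,
                                    const std::vector<int16_t>& delta_ticks) {
  int64_t t_us = static_cast<int64_t>(base_time_ticks) * 64000;  // 64 ms ticks
  std::vector<int64_t> out;
  for (int16_t d : delta_ticks) {
    t_us += static_cast<int64_t>(d) * 250;  // each recv delta tick is 250 us
    out.push_back(t_us);
  }
  return out;
}
```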
How is a new packet group decided? A group is the set of RTP packets sent within 5 ms. (Looking at the current code, it no longer seems to be a plain 5 ms check.)
bool InterArrival::NewTimestampGroup(int64_t arrival_time_ms,
uint32_t timestamp) const {
if (current_timestamp_group_.IsFirstPacket()) {
return false;
} else if (BelongsToBurst(arrival_time_ms, timestamp)) {
return false;
} else {
uint32_t timestamp_diff =
timestamp - current_timestamp_group_.first_timestamp;
return timestamp_diff > kTimestampGroupLengthTicks;
}
}
What is the value of kTimestampGroupLengthTicks?
constexpr int kTimestampGroupLengthTicks =
    (kTimestampGroupLengthMs << kInterArrivalShift) / 1000;
= (5 << (18 + 8)) / 1000 = 335544 (integer division truncates 335544.32)
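The arithmetic can be checked directly, assuming kInterArrivalShift = 26 (the 18-bit abs-send-time fraction upshifted by 8 bits, as above):

```cpp
#include <cstdint>

// 5 ms expressed in upshifted abs-send-time ticks.
constexpr int kTimestampGroupLengthMs = 5;
constexpr int kInterArrivalShift = 18 + 8;  // 26
constexpr int64_t kTimestampGroupLengthTicks =
    (static_cast<int64_t>(kTimestampGroupLengthMs) << kInterArrivalShift) /
    1000;  // (5 << 26) / 1000, integer division
```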
3 The arrival-time filter, i.e. the Trendline filter
d(i) is the network delay variation of packet group i: d(i) = (arrival-time delta) - (send-time delta).
Overview: a least-squares linear regression computes the slope (trend), which tells which network state we are in: normal, overusing or underusing.
The fitted points are (arrival time of the group relative to the first packet, smoothed accumulated delay).
// The three network states; Overusing means the network is overloaded, i.e. congested.
enum BandwidthUsage { kBwNormal = 0, kBwUnderusing = 1, kBwOverusing = 2 };
TrendlineEstimator::UpdateTrendline(recv_delta_ms, send_delta_ms, send_time_ms,
                                    arrival_time_ms, packet_size);
{
  // 1 Per-group delay variation, accumulated.
  const double delta_ms = recv_delta_ms - send_delta_ms;
  accumulated_delay_ += delta_ms;
  // 2 Exponentially smooth the accumulated delay.
  smoothed_delay_ = smoothing_coef_ * smoothed_delay_ +
                    (1 - smoothing_coef_) * accumulated_delay_;
  // 3 Store the sample (x = arrival time since the first packet, y = smoothed delay).
  delay_hist_.emplace_back(
      static_cast<double>(arrival_time_ms - first_arrival_time_ms_),
      smoothed_delay_, accumulated_delay_);
  // 4 Least-squares fit:
  //   0 < trend < 1 -> the delay increases, queues are filling up
  //   trend == 0    -> the delay does not change
  //   trend < 0     -> the delay decreases, queues are being emptied
  trend = LinearFitSlope(delay_hist_).value_or(trend);
  // 5 Classify the network state.
  Detect(trend, send_delta_ms, arrival_time_ms);
}
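LinearFitSlope is an ordinary least-squares fit. A minimal standalone version over (x, y) pairs, using slope = Σ(x−x̄)(y−ȳ) / Σ(x−x̄)², might look like this (WebRTC's version operates on the delay_hist_ entries instead):

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Least-squares slope of y over x; returns 0 for degenerate inputs.
double LinearFitSlope(const std::vector<std::pair<double, double>>& pts) {
  if (pts.size() < 2) return 0.0;
  double sx = 0, sy = 0;
  for (const auto& p : pts) { sx += p.first; sy += p.second; }
  const double mx = sx / pts.size();
  const double my = sy / pts.size();
  double num = 0, den = 0;
  for (const auto& p : pts) {
    num += (p.first - mx) * (p.second - my);
    den += (p.first - mx) * (p.first - mx);
  }
  return den == 0 ? 0.0 : num / den;
}
```

A rising smoothed delay gives a positive slope, which is the "trend" fed to Detect().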
TrendlineEstimator::Detect(double trend, double ts_delta, int64_t now_ms) {
  const double modified_trend =
      std::min(num_of_deltas_, kMinNumDeltas) * trend * threshold_gain_;  // gain is 4.0
  // time_over_using_ and overuse_counter_ gate the decision; overuse is the main case.
  if (modified_trend > threshold_) {
    time_over_using_ += ts_delta;
    // Only when time_over_using_ exceeds the threshold (10 ms)
    // and the trend has not decreased since the previous sample:
    hypothesis_ = BandwidthUsage::kBwOverusing;
  } else if (modified_trend < -threshold_) {
    hypothesis_ = BandwidthUsage::kBwUnderusing;
  } else {
    hypothesis_ = BandwidthUsage::kBwNormal;
  }
}
threshold_ itself is adapted in TrendlineEstimator::UpdateThreshold.
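A hedged sketch of that threshold update: as far as I can tell the defaults are k_up = 0.0087, k_down = 0.039, a [6, 600] clamp and a 15 ms adapt-offset guard, but treat all of these constants as assumptions rather than confirmed values:

```cpp
#include <algorithm>
#include <cmath>

// Adaptive threshold: drift toward |modified_trend|, faster downward than
// upward, skipping sudden spikes. Constants are assumed defaults.
double UpdateThreshold(double threshold, double modified_trend,
                       double time_delta_ms) {
  const double kMaxAdaptOffsetMs = 15.0;
  if (std::fabs(modified_trend) > threshold + kMaxAdaptOffsetMs)
    return threshold;  // do not adapt to sudden spikes
  const double k = std::fabs(modified_trend) < threshold ? 0.039 : 0.0087;
  threshold += k * (std::fabs(modified_trend) - threshold) * time_delta_ms;
  return std::min(std::max(threshold, 6.0), 600.0);  // clamp to [6, 600]
}
```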
4 The rate controller: AimdRateControl
// Summary: no overuse -> additive increase; overuse -> multiplicative decrease; when the available bandwidth changes or is unknown -> restart from slow start (multiplicative increase).
// A rate control implementation based on additive increases of
// bitrate when no over-use is detected and multiplicative decreases when
// over-uses are detected. When we think the available bandwidth has changed or
// is unknown, we will switch to a "slow-start mode" where we increase
// multiplicatively.
class AimdRateControl {
}
enum class RateControlState { kRcHold, kRcIncrease, kRcDecrease };
AimdRateControl::Update
--| AimdRateControl::ChangeBitrate
constexpr double kDefaultBackoffFactor = 0.85;
LinkCapacityEstimator link_capacity_;  // link capacity estimate
AimdRateControl::ChangeBitrate(
{
  // estimated_throughput starts at 300 kbps; the throughput-based limit
  // is 1.5 * estimated throughput + 10 kbps.
  DataRate estimated_throughput =
      input.estimated_throughput.value_or(latest_estimated_throughput_);
  ChangeState(input, at_time);
  const DataRate troughput_based_limit =
      1.5 * estimated_throughput + DataRate::KilobitsPerSec(10);
  case RateControlState::kRcIncrease:
    if (estimated_throughput > link_capacity_.UpperBound())
    {
      // ...
    }
    if (current_bitrate_ < troughput_based_limit) {
      if (link_capacity_.has_estimate()) {
        // Additive increase: we are close to the estimated link capacity.
        DataRate additive_increase =
            AdditiveRateIncrease(at_time, time_last_bitrate_change_);
        increased_bitrate = current_bitrate_ + additive_increase;
      } else {
        // Multiplicative increase (roughly 8% per second) -- a lot of computation!
      }
      new_bitrate = std::min(increased_bitrate, troughput_based_limit);
    }
  case RateControlState::kRcDecrease: {
    decreased_bitrate = estimated_throughput * beta_;  // beta_ = 0.85
    if (decreased_bitrate > current_bitrate_) {
      if (link_capacity_.has_estimate()) {
        decreased_bitrate = beta_ * link_capacity_.estimate();
      }
    }
    // Avoid increasing the rate when over-using.
    if (decreased_bitrate < current_bitrate_) {
      new_bitrate = decreased_bitrate;
    }
    link_capacity_.OnOveruseDetected(estimated_throughput);
    rate_control_state_ = RateControlState::kRcHold;
  }
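The whole increase/decrease logic above can be condensed into one illustrative AIMD step. This is a simplification, not AimdRateControl itself: the 0.85 beta and the ~8%-per-second multiplicative factor come from the notes above, while the 4000 bps-per-second additive floor is an assumed placeholder:

```cpp
#include <algorithm>
#include <cmath>

// One simplified AIMD update of the target bitrate.
double AimdStep(double bitrate_bps, double throughput_bps, bool overuse,
                bool near_capacity, double since_last_ms) {
  const double kBeta = 0.85;  // multiplicative decrease factor
  if (overuse)                // kRcDecrease
    return std::min(bitrate_bps, kBeta * throughput_bps);
  if (near_capacity) {        // kRcIncrease, link capacity is known
    const double kIncreaseRateBpsPerSec = 4000;  // assumed floor value
    return bitrate_bps + kIncreaseRateBpsPerSec * since_last_ms / 1000.0;
  }
  // kRcIncrease, capacity unknown: multiplicative, about 8% per second,
  // with the elapsed time capped at 1 s.
  const double alpha =
      std::pow(1.08, std::min(since_last_ms, 1000.0) / 1000.0);
  return alpha * bitrate_bps;
}
```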
void AimdRateControl::ChangeState(const RateControlInput& input,
                                  Timestamp at_time) {
  // Network state normal: if the rate-control state is Hold, switch it to Increase.
  // Network state overusing: the rate-control state becomes Decrease.
  // Network state underusing: the rate-control state becomes Hold.
}
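The transitions in those comments can be written out as a small pure function (a sketch; the real ChangeState mutates member state instead of returning a value):

```cpp
enum class BandwidthUsage { kBwNormal, kBwUnderusing, kBwOverusing };
enum class RateControlState { kRcHold, kRcIncrease, kRcDecrease };

// Map the detector's network state to the next rate-control state:
// overuse forces Decrease, underuse forces Hold (let queues drain),
// normal promotes Hold to Increase and otherwise keeps the current state.
RateControlState NextState(BandwidthUsage usage, RateControlState current) {
  switch (usage) {
    case BandwidthUsage::kBwOverusing:
      return RateControlState::kRcDecrease;
    case BandwidthUsage::kBwUnderusing:
      return RateControlState::kRcHold;
    case BandwidthUsage::kBwNormal:
      return current == RateControlState::kRcHold
                 ? RateControlState::kRcIncrease
                 : current;
  }
  return current;
}
```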
5 The resulting bitrate finally drives the Pacer, the encoder and FEC
RtpTransportControllerSend::PostUpdates(
{
pacer()->SetPacingRates(update.pacer_config->data_rate(),
update.pacer_config->pad_rate());
}
The delay-based bitrate: SendSideBandwidthEstimation::UpdateDelayBasedEstimate() sets delay_based_limit_.
The loss-based bitrate is updated in SendSideBandwidthEstimation::UpdatePacketsLost().