
How Much Do You Know About IO Models (2)

Table of Contents

  • 1. Introduction
  • 2. Socket Programming Basics
  • 3. Synchronous Blocking IO
  • 4. Synchronous Non-blocking IO
  • 5. IO Multiplexing
  • 6. Verifying the I/O Model
    • 6.1 Verifying the system calls made by synchronous blocking I/O
    • 6.2 Verifying the system calls made by I/O multiplexing
  • 7. Summary

1. Introduction

The previous article in this series, IO 模型知多少 (1), was fairly theory-heavy, and many readers said it was hard to follow. This time let's switch angles and analyze IO models from the code side.

2. Socket Programming Basics

Before we start, let's go over a couple of things you need to know up front:

socket: the word literally means an electrical outlet. In computer communication, a socket is a convention, or mechanism, by which machines exchange data: through sockets a computer can receive data from other computers and send data to them. Just as plugging into an outlet gets you power from the grid, an application that wants to exchange data with a remote machine has to connect to the network, and the socket is the tool it uses to make that connection.

Beyond that, you need to know the basic flow of socket programming:

[Figure: the basic socket programming flow (server: socket → bind → listen → accept → read/write; client: socket → connect → write/read)]
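To make the flow concrete, here is a minimal blocking client. This sketch is mine rather than part of the original samples; it assumes the servers below, which all listen on 127.0.0.1:5001 (the verification section later simply uses nc as the client), and the class and method names are illustrative only.

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

public static class BlockingClient
{
    public static void Run()
    {
        //1. Create the TCP client socket
        var clientSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        //2. Connect to the server
        clientSocket.Connect(new IPEndPoint(IPAddress.Loopback, 5001));
        //3. Send a request
        clientSocket.Send(Encoding.UTF8.GetBytes("hello"));
        //4. Receive the reply (blocks until data arrives)
        var buffer = new byte[512];
        int readLength = clientSocket.Receive(buffer);
        Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, readLength));
        clientSocket.Close();
    }
}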

3. Synchronous Blocking IO

First, a quick recap of the concept: an IO operation is blocking IO if the thread that issues the IO call stays in a waiting state from the moment it makes the system call until the kernel finishes the IO operation and returns the result.

public static void Start()
{
    //1. Create the TCP server socket
    var serverSocket = new Socket(AddressFamily.InterNetwork, 
                                   SocketType.Stream, ProtocolType.Tcp);
    var ipEndpoint = new IPEndPoint(IPAddress.Loopback, 5001);
    //2. Bind to the IP endpoint
    serverSocket.Bind(ipEndpoint);
    //3. Start listening, with the given backlog
    serverSocket.Listen(10);   
    Console.WriteLine($"服務端已啟動({ipEndpoint})-等待連接配接...");

    while(true)
    {
        //4. Wait for a client connection
        var clientSocket = serverSocket.Accept();//blocks
        Console.WriteLine($"{clientSocket.RemoteEndPoint}-已連接配接");
        Span<byte> buffer = new Span<byte>(new byte[512]);
        Console.WriteLine($"{clientSocket.RemoteEndPoint}-開始接收資料...");
        int readLength = clientSocket.Receive(buffer);//blocks
        var msg = Encoding.UTF8.GetString(buffer.ToArray(), 0, readLength);
        Console.WriteLine($"{clientSocket.RemoteEndPoint}-接收資料:{msg}");
        var sendBuffer = Encoding.UTF8.GetBytes($"received:{msg}");
        clientSocket.Send(sendBuffer);    
    }
}
           
[Figure: console output of the blocking server handling a client]

The code is simple enough that the comments tell the story, and the run output is shown in the figure above. A few points deserve emphasis, though:

  • At the accept call, serverSocket.Accept(), the thread blocks!
  • At the receive call, clientSocket.Receive(buffer), the thread blocks!

What problems does this cause?

  • The next connection request can be accepted only after the current client's data has been read
  • Each connection receives data only once

4. Synchronous Non-blocking IO

Reading this, you might say these two problems are easy to fix: just spin up a new thread to receive the data. That gives us the improved code below.

public static void Start2()
{
    //1. Create the TCP server socket
    var serverSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream,
                                   ProtocolType.Tcp);
    var ipEndpoint = new IPEndPoint(IPAddress.Loopback, 5001);
    //2. Bind to the IP endpoint
    serverSocket.Bind(ipEndpoint);
    //3. Start listening, with the given backlog
    serverSocket.Listen(10);
    Console.WriteLine($"服務端已啟動({ipEndpoint})-等待連接配接...");
    while(true)
    {
        //4. Wait for a client connection
        var clientSocket = serverSocket.Accept();//blocks
        //5. Hand the connection off to a new thread
        Task.Run(() => ReceiveData(clientSocket));
    }
}

private static void ReceiveData(Socket clientSocket)
{
    Console.WriteLine($"{clientSocket.RemoteEndPoint}-已連接配接");
    Span<byte> buffer = new Span<byte>(new byte[512]);
    while(true)
    {
        if(clientSocket.Available == 0)
            continue;//no data yet: keep polling (busy-wait)
        Console.WriteLine($"{clientSocket.RemoteEndPoint}-開始接收資料...");
        int readLength = clientSocket.Receive(buffer);//blocks
        var msg = Encoding.UTF8.GetString(buffer.ToArray(), 0, readLength);
        Console.WriteLine($"{clientSocket.RemoteEndPoint}-接收資料:{msg}");
        var sendBuffer = Encoding.UTF8.GetBytes($"received:{msg}");
        clientSocket.Send(sendBuffer);
    }
}
           
[Figure (animated): CPU usage climbing steeply after only four client connections]

Indeed, multithreading solves both problems above. But if you watch the animation, you should notice something: with only four client connections established, CPU usage already shoots straight up.

The root cause is that the server still uses the blocking IO model; to work around the blocking, each connection's thread polls in a loop, which issues a stream of useless system calls and keeps pushing the CPU up.
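Incidentally, at the system-call level "non-blocking" means putting the socket itself into non-blocking mode (note the commented-out serverSocket.Blocking = false line in the next example): calls such as Accept and Receive then return immediately with a WouldBlock error instead of waiting, and the application has to keep asking. A rough sketch of that pattern, my own illustration using the same 127.0.0.1:5001 endpoint and using directives as the samples above:

var serverSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
serverSocket.Blocking = false;                    // non-blocking at the syscall level
serverSocket.Bind(new IPEndPoint(IPAddress.Loopback, 5001));
serverSocket.Listen(10);
while (true)
{
    try
    {
        var clientSocket = serverSocket.Accept(); // returns immediately; throws if nothing is pending
        // ... hand clientSocket off for receiving and sending ...
    }
    catch (SocketException e) when (e.SocketErrorCode == SocketError.WouldBlock)
    {
        // nothing to accept yet; looping like this is exactly the kind of
        // repeated polling that drives CPU usage up
    }
}

Either way the polling burden stays on the application; the next section hands the waiting over to the kernel instead.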

5. IO Multiplexing

Now that we know the cause, let's rework the code to handle connections, receives, and sends asynchronously.

public static class NioServer 
{  
	private static ManualResetEvent _acceptEvent = new ManualResetEvent(true);  
	private static ManualResetEvent _readEvent = new ManualResetEvent(true);  

	public static void Start ()  
	{  
		//1. Create the TCP server socket
		var serverSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream,
		                               ProtocolType.Tcp);
		// serverSocket.Blocking = false;//set the socket to non-blocking
		var ipEndpoint = new IPEndPoint(IPAddress.Loopback, 5001);
		//2. Bind to the IP endpoint
		serverSocket.Bind(ipEndpoint);
		//3. Start listening, with the given backlog
		serverSocket.Listen(10);
		Console.WriteLine($"服務端已啟動({ipEndpoint})-等待連接配接...");
	
		while(true)  
		{  
			_acceptEvent.Reset(); //reset the event
			serverSocket.BeginAccept(OnClientConnected, serverSocket);
			_acceptEvent.WaitOne(); //block until a client connects
		}  
	}
  
	private static void OnClientConnected(IAsyncResult ar)  
	{  
		_acceptEvent.Set(); //a client has connected: release the accept loop
		var serverSocket = ar.AsyncState as Socket;  
		Debug.Assert(serverSocket != null, nameof(serverSocket) + " != null");   
		
		var clientSocket = serverSocket.EndAccept(ar);  
		Console.WriteLine($ "{clientSocket.RemoteEndPoint}-已連接配接");  
		
		while(true)  
		{  
			_readEvent.Reset(); //reset the event
			var stateObj = new StateObject { ClientSocket = clientSocket };
			clientSocket.BeginReceive(stateObj.Buffer, 0, stateObj.Buffer.Length,
			                       SocketFlags.None, OnMessageReceived, stateObj);
			_readEvent.WaitOne(); //block until the current message has been handled
		}  
	}  
	
	private static void OnMessageReceived(IAsyncResult ar)  
	{  
		var state = ar.AsyncState as StateObject;  
		Debug.Assert(state != null, nameof(state) + " != null");  
		var receiveLength = state.ClientSocket.EndReceive(ar);  
		
		if(receiveLength > 0)  
		{   
			var msg = Encoding.UTF8.GetString(state.Buffer, 0, receiveLength);   
			Console.WriteLine($ "{state.ClientSocket.RemoteEndPoint}-接收資料:{msg}");   
			
			var sendBuffer = Encoding.UTF8.GetBytes($"received:{msg}");
			state.ClientSocket.BeginSend(sendBuffer, 0, sendBuffer.Length, 
			                    SocketFlags.None, SendMessage, state.ClientSocket);  
		} 
	}  
	
	private static void SendMessage(IAsyncResult ar)  
	{  
		var clientSocket = ar.AsyncState as Socket;  
		Debug.Assert(clientSocket != null, nameof(clientSocket) + " != null");  
		clientSocket.EndSend(ar);  
		_readEvent.Set(); //send finished: release the receive loop
	}
} 

public class StateObject
{  
	// Client socket.  
	public Socket ClientSocket = null;  
	// Size of receive buffer.  
	public const int BufferSize = 1024;  
	// Receive buffer.  
	public byte[] Buffer = new byte[BufferSize]; 
}
           

First, the results: as the figure below shows, apart from some CPU jitter while connections are being established, CPU usage stays flat and low during the receive and send phases.

[Figure: CPU usage of the asynchronous server — a brief spike while connections are established, then flat and low during receive/send]

Analyzing the code, we find the following (a Task-based sketch follows the list for comparison):

  • CPU usage comes down, but code complexity goes up.
  • Client connections are handled with the asynchronous APIs BeginAccept and EndAccept
  • Data is received with the asynchronous APIs BeginReceive and EndReceive
  • Data is sent with the asynchronous APIs BeginSend and EndSend
  • ManualResetEvent is used for thread synchronization, so threads don't spin idly
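Purely for comparison (this sketch is mine, not part of the original sample), the same structure can be written with the Task-based APIs, where the await points replace the ManualResetEvent signaling; on Linux these asynchronous socket operations go through the same epoll machinery that section 6 uncovers for the Begin/End calls. The method names StartAsync/HandleClientAsync are illustrative, and the same using directives as above plus System.Threading.Tasks are assumed:

public static async Task StartAsync()
{
    var serverSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    serverSocket.Bind(new IPEndPoint(IPAddress.Loopback, 5001));
    serverSocket.Listen(10);
    while (true)
    {
        // awaiting does not park a thread; the continuation runs when the socket is ready
        var clientSocket = await serverSocket.AcceptAsync();
        _ = HandleClientAsync(clientSocket);
    }
}

private static async Task HandleClientAsync(Socket clientSocket)
{
    var buffer = new byte[512];
    int read;
    while ((read = await clientSocket.ReceiveAsync(new ArraySegment<byte>(buffer), SocketFlags.None)) > 0)
    {
        var msg = Encoding.UTF8.GetString(buffer, 0, read);
        var reply = Encoding.UTF8.GetBytes($"received:{msg}");
        await clientSocket.SendAsync(new ArraySegment<byte>(reply), SocketFlags.None);
    }
    clientSocket.Close();
}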

Now you may be wondering: which IO multiplexing model does this actually correspond to?

Good question. Let's find out.

6. Verifying the I/O Model

To determine which IO model an application uses, we just need to see which system calls it makes at runtime. On Linux we can use the strace command to trace the system calls and signals of a given program.

6.1 Verifying the system calls made by synchronous blocking I/O

Use VSCode Remote to connect to a Linux machine, create a project named Io.Demo containing the blocking IO code above, and start tracing with:

shengjie@ubuntu:~/coding/dotnet$ ls 
Io.Demo 
shengjie@ubuntu:~/coding/dotnet$ strace -ff -o Io.Demo/strace/io dotnet run --project Io.Demo/
Press any key to start! 
服務端已啟動(127.0.0.1:5001)-等待連接配接... 
127.0.0.1:36876-已連接配接 
127.0.0.1:36876-開始接收資料... 
127.0.0.1:36876-接收資料:1
           

In another terminal, run nc localhost 5001 to simulate a client connection.

shengjie@ubuntu:~/coding/dotnet/Io.Demo$ nc localhost 5001
1
received:1
           

Use the netstat command to inspect the established connections.

shengjie@ubuntu:/proc/3763$ netstat -natp | grep 5001 
(Not all processes could be identified, non-owned process info  
will not be shown, you would have to be root to see it all.) 
tcp     0    0  127.0.0.1:5001      0.0.0.0:*            LISTEN      3763/Io.Demo      
tcp     0    0  127.0.0.1:36920     127.0.0.1:5001       ESTABLISHED 3798/nc        
tcp     0    0  127.0.0.1:5001      127.0.0.1:36920      ESTABLISHED 3763/Io.Demo 
           

In another terminal, run ps -h | grep dotnet to find the process ID.

shengjie@ubuntu:~/coding/dotnet/Io.Demo$ ps -h | grep dotnet  
3694 pts/1   S+    0:11 strace -ff -o Io.Demo/strace/io dotnet run --project Io.Demo/ 
3696 pts/1   Sl+   0:01 dotnet run --project Io.Demo/ 
3763 pts/1   Sl+   0:00 /home/shengjie/coding/dotnet/Io.Demo/bin/Debug/netcoreapp3.0/Io.Demo  
3779 pts/2   S+    0:00 grep --color=auto dotnet 
shengjie@ubuntu:~/coding/dotnet$ ls Io.Demo/strace/ # list the generated syscall trace files 
io.3696  io.3702  io.3708  io.3714  io.3720  io.3726  io.3732  io.3738  io.3744  io.3750  io.3766  io.3772  io.3782  io.3827 
io.3697  io.3703  io.3709  io.3715  io.3721  io.3727  io.3733  io.3739  io.3745  io.3751  io.3767  io.3773  io.3786  io.3828 
io.3698  io.3704  io.3710  io.3716  io.3722  io.3728  io.3734  io.3740  io.3746  io.3752  io.3768  io.3774  io.3787 
io.3699  io.3705  io.3711  io.3717  io.3723  io.3729  io.3735  io.3741  io.3747  io.3763  io.3769  io.3777  io.3797 
io.3700  io.3706  io.3712  io.3718  io.3724  io.3730  io.3736  io.3742  io.3748  io.3764  io.3770  io.3780  io.3799 
io.3701  io.3707  io.3713  io.3719  io.3725  io.3731  io.3737  io.3743  io.3749  io.3765  io.3771  io.3781  io.3800
           

So the process ID is 3763. Run the following commands in turn to see the process's threads and the file descriptors it has open:

shengjie@ubuntu:~/coding/dotnet/Io.Demo$ cd /proc/3763  # enter the process directory
shengjie@ubuntu:/proc/3763$ ls 
attr    cmdline     environ io     mem     ns       pagemap   sched   smaps_rollup syscall    wchan 
autogroup  comm       exe   limits   mountinfo  numa_maps   patch_state schedstat stack     task 
auxv    coredump_filter fd    loginuid  mounts   oom_adj    personality sessionid stat     timers 
cgroup   cpuset      fdinfo  map_files mountstats oom_score   projid_map  setgroups statm     timerslack_ns 
clear_refs cwd       gid_map maps    net     oom_score_adj root     smaps   status    uid_map 
shengjie@ubuntu:/proc/3763$ ll task # list the threads started by this process 
total 0 
dr-xr-xr-x 9 shengjie shengjie 0  5 月  10  16:36  ./
dr-xr-xr-x 9 shengjie shengjie 0  5 月  10  16:34  ../
dr-xr-xr-x 7 shengjie shengjie 0  5 月  10  16:36  3763/
dr-xr-xr-x 7 shengjie shengjie 0  5 月  10  16:36  3765/
dr-xr-xr-x 7 shengjie shengjie 0  5 月  10  16:36  3766/
dr-xr-xr-x 7 shengjie shengjie 0  5 月  10  16:36  3767/
dr-xr-xr-x 7 shengjie shengjie 0  5 月  10  16:36  3768/
dr-xr-xr-x 7 shengjie shengjie 0  5 月  10  16:36  3769/
dr-xr-xr-x 7 shengjie shengjie 0  5 月  10  16:36  3770/
shengjie@ubuntu:/proc/3763$ ll fd # list the file descriptors this process has open 
total 0 
dr-x------ 2 shengjie shengjie  0  5 月  10  16:36  ./
dr-xr-xr-x 9 shengjie shengjie  0  5 月  10  16:34  ../
lrwx------ 1 shengjie shengjie 64  5 月  10  16:37  0 ->  /dev/pts/1 
lrwx------ 1 shengjie shengjie 64  5 月  10  16:37  1 ->  /dev/pts/1 
lrwx------ 1 shengjie shengjie 64  5 月  10  16:37  10 ->  'socket:[44292]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  100 ->  /dev/random 
lrwx------ 1 shengjie shengjie 64  5 月  10  16:37  11 ->  'socket:[41675]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  13 ->  'pipe:[45206]' 
l-wx------ 1 shengjie shengjie 64  5 月  10  16:37  14 ->  'pipe:[45206]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  15 ->  /home/shengjie/coding/dotnet/Io.Demo/bin/Debug/netcoreapp3.0/Io.Demo.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  16 ->  /home/shengjie/coding/dotnet/Io.Demo/bin/Debug/netcoreapp3.0/Io.Demo.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  17 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Runtime.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  18 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Console.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  19 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Threading.dll 
lrwx------ 1 shengjie shengjie 64  5 月  10  16:37  2 ->  /dev/pts/1 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  20 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Runtime.Extensions.dll 
lrwx------ 1 shengjie shengjie 64  5 月  10  16:37  21 ->  /dev/pts/1 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  22 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Text.Encoding.Extensions.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  23 ->  /dev/urandom 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  24 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Net.Sockets.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  25 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Net.Primitives.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  26 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/Microsoft.Win32.Primitives.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  27 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Diagnostics.Tracing.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  28 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Threading.Tasks.dll 
lrwx------ 1 shengjie shengjie 64  5 月  10  16:37  29 ->  'socket:[43429]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  3 ->  'pipe:[42148]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  30 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Threading.ThreadPool.dll 
lrwx------ 1 shengjie shengjie 64  5 月  10  16:37  31 ->  'socket:[42149]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  32 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Memory.dll 
l-wx------ 1 shengjie shengjie 64  5 月  10  16:37  4 ->  'pipe:[42148]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  42 ->  /dev/urandom 
lrwx------ 1 shengjie shengjie 64  5 月  10  16:37  5 ->  /dev/pts/1 
lrwx------ 1 shengjie shengjie 64  5 月  10  16:37  6 ->  /dev/pts/1 
lrwx------ 1 shengjie shengjie 64  5 月  10  16:37  7 ->  /dev/pts/1 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  9 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Private.CoreLib.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  99 ->  /dev/urandom
           

From this output, the .NET Core console app starts several threads and has sockets open on file descriptors 10, 11, 29, and 31. So which one is listening on port 5001?

shengjie@ubuntu:~/coding/dotnet/Io.Demo$ cat /proc/net/tcp | grep 1389  # tcp entries for port 5001 (0x1389 is 5001 in hex)   
 4: 0100007F:1389  00000000:0000  0A  00000000:00000000  00:00000000  00000000  1000     0  43429  1  0000000000000000  100  0  0  10  0              
12: 0100007F:9038  0100007F:1389  01  00000000:00000000  00:00000000  00000000  1000     0  44343  1  0000000000000000  20  4  30  10  -1             
13: 0100007F:1389  0100007F:9038  01  00000000:00000000  00:00000000  00000000  1000     0  42149  1  0000000000000000  20  4  29  10  -1
           

In /proc/net/tcp, addresses and ports are shown in hex (0100007F:1389 is 127.0.0.1:5001) and the column holding 43429 is the socket's inode. So the socket with inode 43429 is the one listening on port 5001, which matches the line lrwx------ 1 shengjie shengjie 64 5月 10 16:37 29 -> 'socket:[43429]' in the fd listing above. In other words, the socket listening on port 5001 corresponds to file descriptor 29.

Of course, we can also find the answer in the log files recorded under the strace directory. As noted earlier, a socket server generally goes through the socket -> bind -> listen -> accept -> read -> write sequence, so we can grep for these keywords to locate the relevant system calls.

shengjie@ubuntu:~/coding/dotnet/Io.Demo$ grep 'bind' strace/ -rn 
strace/io.3696:4570:bind(10, {sa_family=AF_UNIX, sun_path="/tmp/dotnet-diagnostic-3696-327175-socket"}, 110) = 0 
strace/io.3763:2241:bind(11, {sa_family=AF_UNIX, sun_path="/tmp/dotnet-diagnostic-3763-328365-socket"}, 110) = 0 
strace/io.3763:2949:bind(29, {sa_family=AF_INET, sin_port=htons(5001), sin_addr=inet_addr("127.0.0.1")}, 16) = 0 
strace/io.3713:4634:bind(11, {sa_family=AF_UNIX, sun_path="/tmp/dotnet-diagnostic-3713-327405-socket"}, 110) = 0
           

From this we can tell that in the trace file of the main thread, io.3763, file descriptor 29 was bound to the socket listening on 127.0.0.1:5001, and that the other two sockets .NET Core creates automatically are diagnostics-related.

Next, let's focus on the system calls made by thread 3763.

shengjie@ubuntu:~/coding/dotnet/Io.Demo$ cd strace/
shengjie@ubuntu:~/coding/dotnet/Io.Demo/strace$ cat io.3763  # only the relevant excerpt 
socket(AF_INET, SOCK_STREAM|SOCK_CLOEXEC, IPPROTO_TCP) = 29 
setsockopt(29, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0 
bind(29, {sa_family=AF_INET, sin_port=htons(5001), sin_addr=inet_addr("127.0.0.1")}, 16) = 0 
listen(29, 10)   
write(21, "\346\234\215\345\212\241\347\253\257\345\267\262\345\220\257\345\212\250(127.0.0.1:500"..., 51) = 51 
accept4(29, {sa_family=AF_INET, sin_port=htons(36920), sin_addr=inet_addr("127.0.0.1")}, [16], SOCK_CLOEXEC) = 31 
write(21, "127.0.0.1:36920-\345\267\262\350\277\236\346\216\245\n", 26) = 26 
write(21, "127.0.0.1:36920-\345\274\200\345\247\213\346\216\245\346\224\266\346\225\260\346"..., 38) = 38 
recvmsg(31, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="1\n", iov_len=512}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 2 
write(21, "127.0.0.1:36920-\346\216\245\346\224\266\346\225\260\346\215\256\357\274\2321"..., 34) = 34 
sendmsg(31, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="received:1\n", iov_len=11}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 11 
accept4(29, 0x7fecf001c978, [16], SOCK_CLOEXEC) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGWINCH {si_signo=SIGWINCH, si_code=SI_KERNEL} ---
           

Here we can pick out the key system calls: socket, bind, listen, accept4, recvmsg, and sendmsg. With the man command we can check the descriptions of accept4 and recvmsg:

shengjie@ubuntu:~/coding/dotnet/Io.Demo/strace$ man accept4 
If no pending connections are present on the queue, and the socket is not marked as nonblocking, accept() blocks the caller until a  
      connection is present. 
	  
shengjie@ubuntu:~/coding/dotnet/Io.Demo/strace$ man recvmsg 
If no messages are available at the socket, the receive calls wait for a message to arrive, unless the socket is nonblocking (see fcntl(2))
           

In other words, accept4 and recvmsg are blocking system calls; they are exactly what serverSocket.Accept() and clientSocket.Receive() translate into, confirming that this example is indeed blocking IO.

6.2 Verifying the system calls made by I/O multiplexing

Now verify the I/O multiplexing code above in the same way. The steps are similar:

shengjie@ubuntu:~/coding/dotnet$ strace -ff -o Io.Demo/strace2/io dotnet run --project Io.Demo/
Press any key to start! 
服務端已啟動(127.0.0.1:5001)-等待連接配接... 
127.0.0.1:37098 -已連接配接 
127.0.0.1:37098 -接收資料:1  

127.0.0.1:37098 -接收資料:2  

shengjie@ubuntu:~/coding/dotnet/Io.Demo$ nc localhost 5001 
1
received:1 
2
received:2  

shengjie@ubuntu:/proc/2449$ netstat -natp | grep 5001 
(Not all processes could be identified, non-owned process info  
will not be shown, you would have to be root to see it all.) 
tcp     0    0  127.0.0.1:5001      0.0.0.0:*           LISTEN      2449/Io.Demo      
tcp     0    0  127.0.0.1:5001      127.0.0.1:56296     ESTABLISHED 2449/Io.Demo      
tcp     0    0  127.0.0.1:56296     127.0.0.1:5001      ESTABLISHED 2499/nc     
 
shengjie@ubuntu:~/coding/dotnet/Io.Demo$ ps -h | grep dotnet  
2400 pts/3   S+    0:10 strace -ff -o ./Io.Demo/strace2/io dotnet run --project Io.Demo/
2402 pts/3   Sl+   0:01 dotnet run --project Io.Demo/
2449 pts/3   Sl+   0:00 /home/shengjie/coding/dotnet/Io.Demo/bin/Debug/netcoreapp3.0/Io.Demo  
2516 pts/5   S+    0:00 grep --color=auto dotnet   


shengjie@ubuntu:~/coding/dotnet/Io.Demo$ cd /proc/2449/
shengjie@ubuntu:/proc/2449$ ll task 
total 0 
dr-xr-xr-x 11 shengjie shengjie 0  5 月  10  22:15  ./
dr-xr-xr-x  9 shengjie shengjie 0  5 月  10  22:15  ../
dr-xr-xr-x  7 shengjie shengjie 0  5 月  10  22:15  2449/
dr-xr-xr-x  7 shengjie shengjie 0  5 月  10  22:15  2451/
dr-xr-xr-x  7 shengjie shengjie 0  5 月  10  22:15  2452/
dr-xr-xr-x  7 shengjie shengjie 0  5 月  10  22:15  2453/
dr-xr-xr-x  7 shengjie shengjie 0  5 月  10  22:15  2454/
dr-xr-xr-x  7 shengjie shengjie 0  5 月  10  22:15  2455/
dr-xr-xr-x  7 shengjie shengjie 0  5 月  10  22:15  2456/
dr-xr-xr-x  7 shengjie shengjie 0  5 月  10  22:15  2459/
dr-xr-xr-x  7 shengjie shengjie 0  5 月  10  22:15  2462/
shengjie@ubuntu:/proc/2449$ ll fd 
total 0 
dr-x------ 2 shengjie shengjie  0  5 月  10  22:15  ./
dr-xr-xr-x 9 shengjie shengjie  0  5 月  10  22:15  ../
lrwx------ 1 shengjie shengjie 64  5 月  10  22:16  0  ->  /dev/pts/3 
lrwx------ 1 shengjie shengjie 64  5 月  10  22:16  1  ->  /dev/pts/3 
lrwx------ 1 shengjie shengjie 64  5 月  10  22:16  10  ->  'socket:[35001]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  100  ->  /dev/random 
lrwx------ 1 shengjie shengjie 64  5 月  10  22:16  11  ->  'socket:[34304]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  13  ->  'pipe:[31528]' 
l-wx------ 1 shengjie shengjie 64  5 月  10  22:16  14  ->  'pipe:[31528]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  15  ->  /home/shengjie/coding/dotnet/Io.Demo/bin/Debug/netcoreapp3.0/Io.Demo.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  16  ->  /home/shengjie/coding/dotnet/Io.Demo/bin/Debug/netcoreapp3.0/Io.Demo.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  17  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Runtime.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  18  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Console.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  19  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Threading.dll 
lrwx------ 1 shengjie shengjie 64  5 月  10  22:16  2  ->  /dev/pts/3 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  20  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Runtime.Extensions.dll 
lrwx------ 1 shengjie shengjie 64  5 月  10  22:16  21  ->  /dev/pts/3 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  22  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Text.Encoding.Extensions.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  23  ->  /dev/urandom 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  24  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Net.Sockets.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  25  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Net.Primitives.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  26  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/Microsoft.Win32.Primitives.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  27  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Diagnostics.Tracing.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  28  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Threading.Tasks.dll 
lrwx------ 1 shengjie shengjie 64  5 月  10  22:16  29  ->  'socket:[31529]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  3  ->  'pipe:[32055]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  30  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Threading.ThreadPool.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  31  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Collections.Concurrent.dll 
lrwx------ 1 shengjie shengjie 64  5 月  10  22:16  32  ->  'anon_inode:[eventpoll]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  33  ->  'pipe:[32059]' 
l-wx------ 1 shengjie shengjie 64  5 月  10  22:16  34  ->  'pipe:[32059]' 
lrwx------ 1 shengjie shengjie 64  5 月  10  22:16  35  ->  'socket:[35017]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  36  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Memory.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  37  ->  /dev/urandom 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  38  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Diagnostics.Debug.dll 
l-wx------ 1 shengjie shengjie 64  5 月  10  22:16  4  ->  'pipe:[32055]' 
lrwx------ 1 shengjie shengjie 64  5 月  10  22:16  5  ->  /dev/pts/3 
lrwx------ 1 shengjie shengjie 64  5 月  10  22:16  6  ->  /dev/pts/3 
lrwx------ 1 shengjie shengjie 64  5 月  10  22:16  7  ->  /dev/pts/3 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  9  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Private.CoreLib.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  99  ->  /dev/urandom 
shengjie@ubuntu:/proc/2449$ cat /proc/net/tcp | grep 1389   
 0: 0100007F:1389  00000000:0000  0A  00000000:00000000  00:00000000  00000000  1000     0  31529  1  0000000000000000  100  0  0  10  0              
 8: 0100007F:1389  0100007F:DBE8 01  00000000:00000000  00:00000000  00000000  1000     0  35017  1  0000000000000000  20  4  29  10  -1             
12: 0100007F:DBE8 0100007F:1389  01  00000000:00000000  00:00000000  00000000  1000     0  28496  1  0000000000000000  20  4  30  10  -1  
           

Grep the logs in the strace2 directory to find the file descriptor of the socket listening on localhost:5001.

shengjie@ubuntu:~/coding/dotnet/Io.Demo$ grep 'bind' strace2/ -rn
strace2/io.2449:2243:bind(11, {sa_family=AF_UNIX, sun_path="/tmp/dotnet-diagnostic-2449-23147-socket"}, 110) = 0
strace2/io.2449:2950:bind(29, {sa_family=AF_INET, sin_port=htons(5001), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
strace2/io.2365:4568:bind(10, {sa_family=AF_UNIX, sun_path="/tmp/dotnet-diagnostic-2365-19043-socket"}, 110) = 0
strace2/io.2420:4634:bind(11, {sa_family=AF_UNIX, sun_path="/tmp/dotnet-diagnostic-2420-22262-socket"}, 110) = 0
strace2/io.2402:4569:bind(10, {sa_family=AF_UNIX, sun_path="/tmp/dotnet-diagnostic-2402-22042-socket"}, 110) = 0
           

It is again file descriptor 29, and the relevant calls are recorded in the io.2449 trace file. Opening it, we find the following system calls:

shengjie@ubuntu:~/coding/dotnet/Io.Demo$ cat strace2/io.2449 # only the relevant system calls
socket(AF_INET, SOCK_STREAM|SOCK_CLOEXEC, IPPROTO_TCP) = 29
setsockopt(29, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
bind(29, {sa_family=AF_INET, sin_port=htons(5001), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
listen(29, 10) 
accept4(29, 0x7fa16c01b9e8, [16], SOCK_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailable)
epoll_create1(EPOLL_CLOEXEC)            = 32
epoll_ctl(32, EPOLL_CTL_ADD, 29, {EPOLLIN|EPOLLOUT|EPOLLET, {u32=0, u64=0}}) = 0
accept4(29, 0x7fa16c01cd60, [16], SOCK_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailable)
           

Here accept4 returns -1 (EAGAIN) immediately instead of blocking, and file descriptor 29, the socket listening on 127.0.0.1:5001, ends up being passed to epoll_ctl and registered on file descriptor 32, the instance created by epoll_create1. It is descriptor 32 that epoll_wait then blocks on while waiting for connection requests. We can grep the epoll-related system calls to confirm this:

shengjie@ubuntu:~/coding/dotnet/Io.Demo$ grep 'epoll' strace2/ -rn
strace2/io.2459:364:epoll_ctl(32, EPOLL_CTL_ADD, 35, {EPOLLIN|EPOLLOUT|EPOLLET, {u32=1, u64=1}}) = 0
strace2/io.2462:21:epoll_wait(32, [{EPOLLIN, {u32=0, u64=0}}], 1024, -1) = 1
strace2/io.2462:42:epoll_wait(32, [{EPOLLOUT, {u32=1, u64=1}}], 1024, -1) = 1
strace2/io.2462:43:epoll_wait(32, [{EPOLLIN|EPOLLOUT, {u32=1, u64=1}}], 1024, -1) = 1
strace2/io.2462:53:epoll_wait(32, 
strace2/io.2449:3033:epoll_create1(EPOLL_CLOEXEC)            = 32
strace2/io.2449:3035:epoll_ctl(32, EPOLL_CTL_ADD, 33, {EPOLLIN|EPOLLET, {u32=4294967295, u64=18446744073709551615}}) = 0
strace2/io.2449:3061:epoll_ctl(32, EPOLL_CTL_ADD, 29, {EPOLLIN|EPOLLOUT|EPOLLET, {u32=0, u64=0}}) = 0
           

So we can conclude that the asynchronous NioServer example is using IO multiplexing, specifically the epoll model.

As for the epoll calls themselves, the man command shows the descriptions of the epoll_create1, epoll_ctl, and epoll_wait system calls:

shengjie@ubuntu:~/coding/dotnet/Io.Demo/strace$ man epoll_create
DESCRIPTION
       epoll_create() creates a new epoll(7) instance.  Since Linux 2.6.8, the size argument is ignored, but must be
       greater than zero; see NOTES below.
	   
       epoll_create() returns a file descriptor referring to the new epoll instance.  This file descriptor  is  used
	   for  all  the subsequent calls to the epoll interface.

shengjie@ubuntu:~/coding/dotnet/Io.Demo/strace$ man epoll_ctl
DESCRIPTION
       This  system  call  performs  control  operations on the epoll(7) instance referred to by the file descriptor
       epfd.  It requests that the operation op be performed for the target file descriptor, fd.
	   
       Valid values for the op argument are:
	   
       EPOLL_CTL_ADD
				Register the target file descriptor fd on the epoll instance referred to by the file  descriptor  epfd
				and associate the event event with the internal file linked to fd.
				
	   EPOLL_CTL_MOD
                Change the event event associated with the target file descriptor fd.
				
	   EPOLL_CTL_DEL
                Remove  (deregister)  the  target file descriptor fd from the epoll instance referred to by epfd.  The
				event is ignored and can be NULL (but see BUGS below).
				
shengjie@ubuntu:~/coding/dotnet/Io.Demo/strace$ man epoll_wait
DESCRIPTION
       The  epoll_wait()  system  call  waits for events on the epoll(7) instance referred to by the file descriptor
       epfd.  The memory area pointed to by events will contain the events that will be available  for  the  caller.
       Up to maxevents are returned by epoll_wait().  The maxevents argument must be greater than zero.
	   
       The  timeout  argument  specifies  the number of milliseconds that epoll_wait() will block.  Time is measured
       against the CLOCK_MONOTONIC clock.  The call will block until either:
	   
       *  a file descriptor delivers an event;
	   
       *  the call is interrupted by a signal handler; or
	   
       *  the timeout expires.
           

In short, epoll introduces a separate file descriptor (the epoll instance) to do the waiting on behalf of the ordinary socket descriptors: the sockets themselves stay non-blocking, they are registered on the epoll instance with epoll_ctl, and a single epoll_wait call blocks until one of them has an event (or the timeout expires) and reports which descriptors are ready to be handled. That is exactly the sequence in the trace above: socket -> bind -> listen, then epoll_create1 and epoll_ctl to register descriptor 29, and epoll_wait to wait for events, which is what makes a non-blocking threading model possible.
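To see the multiplexing idea expressed directly in C#, here is a small sketch of my own (not from the original article) using Socket.Select, the classic select-style interface in .NET: one call hands a whole set of sockets to the kernel and returns only the ones that are ready, so a single thread can serve many connections without busy polling. It assumes the same 127.0.0.1:5001 endpoint and additionally needs using System.Collections.Generic;:

var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
listener.Bind(new IPEndPoint(IPAddress.Loopback, 5001));
listener.Listen(10);
var sockets = new List<Socket> { listener };
var buffer = new byte[512];
while (true)
{
    // Select trims the list down to the sockets that are ready to read,
    // so pass a copy each round and keep the full set in `sockets`.
    var readable = new List<Socket>(sockets);
    Socket.Select(readable, null, null, -1);   // one call waits on all sockets at once
    foreach (var socket in readable)
    {
        if (socket == listener)
        {
            sockets.Add(listener.Accept());    // ready, so this will not block now
        }
        else
        {
            int read = socket.Receive(buffer); // ready, so this will not block now
            if (read == 0) { sockets.Remove(socket); socket.Close(); continue; }
            var msg = Encoding.UTF8.GetString(buffer, 0, read);
            socket.Send(Encoding.UTF8.GetBytes($"received:{msg}"));
        }
    }
}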

7. Summary

Writing this article deepened my own understanding of I/O models, but since my knowledge of Linux is limited, there are bound to be omissions; corrections are welcome.

I also can't help admiring how powerful Linux is: the everything-is-a-file design philosophy means everything leaves a trace you can follow. And now that .NET is fully cross-platform, it is well worth getting familiar with Linux.