
Improving Redis Performance by Binding It to a CPU Core

Author: 中启乘数科技

1. Background

Redis processes commands on a single thread, so it can only make use of one CPU core. We can therefore improve performance by binding the redis-server process to one specific core, which avoids the cost of the scheduler migrating it between cores.

2. Commands for binding the CPU

You can use taskset to bind an already-running redis-server process to a specified CPU core.

First, inspect the CPU topology with lscpu:

[redis@csu06 bin]$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                40
On-line CPU(s) list:   0-39
Thread(s) per core:    2
Core(s) per socket:    10
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 85
Model name:            Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz
Stepping:              7
CPU MHz:               1001.055
CPU max MHz:           3200.0000
CPU min MHz:           1000.0000
BogoMIPS:              4800.00
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              1024K
L3 cache:              14080K
NUMA node0 CPU(s):     0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38
NUMA node1 CPU(s):     1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx
est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs_enhanced ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities           

As the output shows, this machine has two physical CPUs (sockets), each with 10 cores; hyper-threading doubles that to 20 logical CPUs per socket, giving 40 logical CPUs numbered 0-39.
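
Note that on this box the logical CPU numbering interleaves across the NUMA nodes rather than running contiguously: even IDs sit on node0 and odd IDs on node1. A quick sanity check (a sketch specific to this topology; other BIOSes enumerate CPUs contiguously) reproduces the node0 list from the lscpu output:

```shell
# Even-numbered logical CPUs belong to NUMA node0 on this machine;
# this reproduces the "NUMA node0 CPU(s)" line from lscpu above.
seq 0 39 | awk '$1 % 2 == 0' | paste -sd, -
# -> 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38
```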

Then use taskset to bind the redis-server process to a specific core. First look up its PID:

[redis@csu06 bin]$ ps -ef|grep redis-server
redis    12287  4895  0 23:51 pts/3    00:00:00 grep --color=auto redis-server
redis    20676     1 36 20:55 ?        01:03:35 ./redis-server 127.0.0.1:6379           

The output shows the redis-server PID is 20676. Bind that process to core 2 with:

[redis@csu06 bin]$ taskset -pc 2 20676
pid 20676's current affinity list: 0-39
pid 20676's new affinity list: 2           
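
The new affinity can also be verified straight from procfs, independently of taskset. The sketch below inspects the current shell ($$) as a stand-in for the redis-server PID (20676 on the box above):

```shell
# Cpus_allowed_list shows the set of cores the kernel may schedule this
# process on; after the taskset call above, PID 20676 would show "2".
grep Cpus_allowed_list /proc/$$/status
```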

Alternatively, you can use numactl to bind the service process to a specified core (and NUMA node) when starting it:

numactl -C 2 --membind=0 ./redis-server redis.conf           

In the command above, the 2 in -C 2 pins the process to logical CPU 2, and the 0 in --membind=0 selects the NUMA node to allocate memory from; on a typical two-socket server the NUMA nodes are numbered 0 and 1. Be sure to bind the memory to the NUMA node that owns the chosen core (per the lscpu output above, CPU 2 is on node0).
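
For repeatable startups, the numactl invocation can be wrapped in a small launch script (a sketch; the paths and the choice of core 2 / node 0 are assumptions mirroring the example above):

```shell
#!/bin/sh
# Launch redis-server pinned to logical CPU 2, with memory allocated
# from NUMA node 0 (the node that owns CPU 2 on this machine).
# Adjust the core, node, and paths for your own topology.
exec numactl -C 2 --membind=0 ./redis-server redis.conf
```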

3. Performance results

The Redis server under test was version 3.2; the redis-benchmark tool came from Redis 6.0, which supports multi-threaded load generation (--threads).

Benchmark results:

Operation | QPS without binding | QPS with binding | Improvement
SET       | 202404              | 216694           | 7.1%
HSET      | 199310              | 213106           | 6.9%
GET       | 211963              | 229100           | 8.1%

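The improvement column follows directly from the two QPS columns; for example, for SET:

```shell
# Percentage improvement of bound vs. unbound SET QPS from the table above.
awk 'BEGIN { printf "%.1f%%\n", (216694 - 202404) / 202404 * 100 }'
# -> 7.1%
```
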
3.1 Without CPU binding

SET:

[root@csudev04 tmp]# ./redis-benchmark -h 10.197.160.6 -p 6379 -t SET -d 188 -c 40 -n 10000000 --threads 40
====== SET ======
  10000000 requests completed in 49.41 seconds
  40 parallel clients
  188 bytes payload
  keep alive: 1
  host configuration "save": 900 1 300 10 60 10000
  host configuration "appendonly": no
  multi-thread: yes
  threads: 40

Latency by percentile distribution:
0.000% <= 0.087 milliseconds (cumulative count 2)
50.000% <= 0.191 milliseconds (cumulative count 5320713)
75.000% <= 0.223 milliseconds (cumulative count 7563067)
87.500% <= 0.247 milliseconds (cumulative count 8993858)
93.750% <= 0.263 milliseconds (cumulative count 9532146)
96.875% <= 0.279 milliseconds (cumulative count 9759961)
98.438% <= 0.295 milliseconds (cumulative count 9858470)
99.219% <= 0.335 milliseconds (cumulative count 9922985)
99.609% <= 0.383 milliseconds (cumulative count 9963005)
99.805% <= 0.431 milliseconds (cumulative count 9982927)
99.902% <= 0.463 milliseconds (cumulative count 9991493)
99.951% <= 0.487 milliseconds (cumulative count 9995351)
99.976% <= 0.519 milliseconds (cumulative count 9997949)
99.988% <= 0.551 milliseconds (cumulative count 9998889)
99.994% <= 0.599 milliseconds (cumulative count 9999437)
99.997% <= 0.631 milliseconds (cumulative count 9999720)
99.998% <= 0.687 milliseconds (cumulative count 9999856)
99.999% <= 0.839 milliseconds (cumulative count 9999924)
100.000% <= 1.231 milliseconds (cumulative count 9999962)
100.000% <= 1.415 milliseconds (cumulative count 9999981)
100.000% <= 1.543 milliseconds (cumulative count 9999991)
100.000% <= 1.591 milliseconds (cumulative count 9999996)
100.000% <= 1.615 milliseconds (cumulative count 9999998)
100.000% <= 1.879 milliseconds (cumulative count 9999999)
100.000% <= 3.183 milliseconds (cumulative count 10000000)
100.000% <= 3.183 milliseconds (cumulative count 10000000)

Cumulative distribution of latencies:
0.177% <= 0.103 milliseconds (cumulative count 17704)
64.586% <= 0.207 milliseconds (cumulative count 6458617)
98.800% <= 0.303 milliseconds (cumulative count 9880044)
99.744% <= 0.407 milliseconds (cumulative count 9974401)
99.969% <= 0.503 milliseconds (cumulative count 9996878)
99.995% <= 0.607 milliseconds (cumulative count 9999518)
99.999% <= 0.703 milliseconds (cumulative count 9999876)
99.999% <= 0.807 milliseconds (cumulative count 9999916)
99.999% <= 0.903 milliseconds (cumulative count 9999933)
99.999% <= 1.007 milliseconds (cumulative count 9999944)
100.000% <= 1.103 milliseconds (cumulative count 9999951)
100.000% <= 1.207 milliseconds (cumulative count 9999960)
100.000% <= 1.303 milliseconds (cumulative count 9999971)
100.000% <= 1.407 milliseconds (cumulative count 9999980)
100.000% <= 1.503 milliseconds (cumulative count 9999988)
100.000% <= 1.607 milliseconds (cumulative count 9999997)
100.000% <= 1.703 milliseconds (cumulative count 9999998)
100.000% <= 1.903 milliseconds (cumulative count 9999999)
100.000% <= 4.103 milliseconds (cumulative count 10000000)

Summary:
  throughput summary: 202404.58 requests per second
  latency summary (msec):
          avg       min       p50       p95       p99       max
        0.190     0.080     0.191     0.263     0.319     3.183           

HSET:

[root@csudev04 tmp]# ./redis-benchmark -h 10.197.160.6 -p 6379 -t HSET -d 188 -c 40 -n 10000000 --threads 40
====== HSET ======
  10000000 requests completed in 50.17 seconds
  40 parallel clients
  188 bytes payload
  keep alive: 1
  host configuration "save": 900 1 300 10 60 10000
  host configuration "appendonly": no
  multi-thread: yes
  threads: 40

Latency by percentile distribution:
0.000% <= 0.087 milliseconds (cumulative count 9)
50.000% <= 0.191 milliseconds (cumulative count 5196902)
75.000% <= 0.231 milliseconds (cumulative count 7773520)
87.500% <= 0.255 milliseconds (cumulative count 9026213)
93.750% <= 0.271 milliseconds (cumulative count 9493641)
96.875% <= 0.287 milliseconds (cumulative count 9715493)
98.438% <= 0.311 milliseconds (cumulative count 9848800)
99.219% <= 0.351 milliseconds (cumulative count 9922917)
99.609% <= 0.399 milliseconds (cumulative count 9964413)
99.805% <= 0.439 milliseconds (cumulative count 9981221)
99.902% <= 0.471 milliseconds (cumulative count 9990359)
99.951% <= 0.503 milliseconds (cumulative count 9995407)
99.976% <= 0.535 milliseconds (cumulative count 9997739)
99.988% <= 0.591 milliseconds (cumulative count 9998830)
99.994% <= 0.663 milliseconds (cumulative count 9999405)
99.997% <= 0.807 milliseconds (cumulative count 9999698)
99.998% <= 0.871 milliseconds (cumulative count 9999853)
99.999% <= 0.967 milliseconds (cumulative count 9999924)
100.000% <= 1.159 milliseconds (cumulative count 9999964)
100.000% <= 1.319 milliseconds (cumulative count 9999981)
100.000% <= 1.391 milliseconds (cumulative count 9999992)
100.000% <= 1.423 milliseconds (cumulative count 9999996)
100.000% <= 1.439 milliseconds (cumulative count 9999998)
100.000% <= 1.463 milliseconds (cumulative count 9999999)
100.000% <= 1.479 milliseconds (cumulative count 10000000)
100.000% <= 1.479 milliseconds (cumulative count 10000000)

Cumulative distribution of latencies:
0.135% <= 0.103 milliseconds (cumulative count 13463)
62.429% <= 0.207 milliseconds (cumulative count 6242943)
98.220% <= 0.303 milliseconds (cumulative count 9821963)
99.684% <= 0.407 milliseconds (cumulative count 9968429)
99.954% <= 0.503 milliseconds (cumulative count 9995407)
99.990% <= 0.607 milliseconds (cumulative count 9999014)
99.995% <= 0.703 milliseconds (cumulative count 9999489)
99.997% <= 0.807 milliseconds (cumulative count 9999698)
99.999% <= 0.903 milliseconds (cumulative count 9999890)
99.999% <= 1.007 milliseconds (cumulative count 9999938)
100.000% <= 1.103 milliseconds (cumulative count 9999955)
100.000% <= 1.207 milliseconds (cumulative count 9999968)
100.000% <= 1.303 milliseconds (cumulative count 9999979)
100.000% <= 1.407 milliseconds (cumulative count 9999994)
100.000% <= 1.503 milliseconds (cumulative count 10000000)

Summary:
  throughput summary: 199310.39 requests per second
  latency summary (msec):
          avg       min       p50       p95       p99       max
        0.193     0.080     0.191     0.279     0.343     1.479           

GET:

10000000 requests completed in 47.18 seconds
  40 parallel clients
  188 bytes payload
  keep alive: 1
  host configuration "save": 900 1 300 10 60 10000
  host configuration "appendonly": no
  multi-thread: yes
  threads: 40

Latency by percentile distribution:
0.000% <= 0.087 milliseconds (cumulative count 46)
50.000% <= 0.183 milliseconds (cumulative count 5424899)
75.000% <= 0.215 milliseconds (cumulative count 7745874)
87.500% <= 0.231 milliseconds (cumulative count 8751165)
93.750% <= 0.247 milliseconds (cumulative count 9406289)
96.875% <= 0.263 milliseconds (cumulative count 9706206)
98.438% <= 0.295 milliseconds (cumulative count 9862450)
99.219% <= 0.335 milliseconds (cumulative count 9931485)
99.609% <= 0.375 milliseconds (cumulative count 9965478)
99.805% <= 0.415 milliseconds (cumulative count 9982072)
99.902% <= 0.447 milliseconds (cumulative count 9990406)
99.951% <= 0.479 milliseconds (cumulative count 9995434)
99.976% <= 0.511 milliseconds (cumulative count 9997591)
99.988% <= 0.575 milliseconds (cumulative count 9998849)
99.994% <= 0.639 milliseconds (cumulative count 9999426)
99.997% <= 0.759 milliseconds (cumulative count 9999709)
99.998% <= 0.807 milliseconds (cumulative count 9999856)
99.999% <= 0.895 milliseconds (cumulative count 9999924)
100.000% <= 1.119 milliseconds (cumulative count 9999963)
100.000% <= 1.311 milliseconds (cumulative count 9999981)
100.000% <= 1.431 milliseconds (cumulative count 9999991)
100.000% <= 1.679 milliseconds (cumulative count 9999996)
100.000% <= 1.727 milliseconds (cumulative count 9999998)
100.000% <= 1.991 milliseconds (cumulative count 9999999)
100.000% <= 2.103 milliseconds (cumulative count 10000000)
100.000% <= 2.103 milliseconds (cumulative count 10000000)

Cumulative distribution of latencies:
0.744% <= 0.103 milliseconds (cumulative count 74442)
71.773% <= 0.207 milliseconds (cumulative count 7177317)
98.807% <= 0.303 milliseconds (cumulative count 9880672)
99.794% <= 0.407 milliseconds (cumulative count 9979436)
99.972% <= 0.503 milliseconds (cumulative count 9997249)
99.992% <= 0.607 milliseconds (cumulative count 9999153)
99.996% <= 0.703 milliseconds (cumulative count 9999559)
99.999% <= 0.807 milliseconds (cumulative count 9999856)
99.999% <= 0.903 milliseconds (cumulative count 9999930)
99.999% <= 1.007 milliseconds (cumulative count 9999942)
100.000% <= 1.103 milliseconds (cumulative count 9999959)
100.000% <= 1.207 milliseconds (cumulative count 9999972)
100.000% <= 1.303 milliseconds (cumulative count 9999980)
100.000% <= 1.407 milliseconds (cumulative count 9999988)
100.000% <= 1.503 milliseconds (cumulative count 9999994)
100.000% <= 1.703 milliseconds (cumulative count 9999997)
100.000% <= 1.807 milliseconds (cumulative count 9999998)
100.000% <= 2.007 milliseconds (cumulative count 9999999)
100.000% <= 2.103 milliseconds (cumulative count 10000000)

Summary:
  throughput summary: 211963.20 requests per second
  latency summary (msec):
          avg       min       p50       p95       p99       max
        0.181     0.080     0.183     0.255     0.319     2.103           

3.2 With CPU binding

SET:

[root@csudev04 tmp]# ./redis-benchmark -h 10.197.160.6 -p 6379 -t SET -d 188 -c 40 -n 10000000 --threads 40
====== SET ======
  10000000 requests completed in 46.15 seconds
  40 parallel clients
  188 bytes payload
  keep alive: 1
  host configuration "save": 900 1 300 10 60 10000
  host configuration "appendonly": no
  multi-thread: yes
  threads: 40

Latency by percentile distribution:
0.000% <= 0.087 milliseconds (cumulative count 2)
50.000% <= 0.183 milliseconds (cumulative count 5705298)
75.000% <= 0.207 milliseconds (cumulative count 7720789)
87.500% <= 0.223 milliseconds (cumulative count 8954458)
93.750% <= 0.231 milliseconds (cumulative count 9387762)
96.875% <= 0.247 milliseconds (cumulative count 9790129)
98.438% <= 0.255 milliseconds (cumulative count 9859058)
99.219% <= 0.279 milliseconds (cumulative count 9937894)
99.609% <= 0.295 milliseconds (cumulative count 9971292)
99.805% <= 0.303 milliseconds (cumulative count 9982787)
99.902% <= 0.319 milliseconds (cumulative count 9994444)
99.951% <= 0.327 milliseconds (cumulative count 9996516)
99.976% <= 0.343 milliseconds (cumulative count 9997861)
99.988% <= 0.399 milliseconds (cumulative count 9998786)
99.994% <= 0.519 milliseconds (cumulative count 9999390)
99.997% <= 0.631 milliseconds (cumulative count 9999711)
99.998% <= 0.807 milliseconds (cumulative count 9999854)
99.999% <= 1.239 milliseconds (cumulative count 9999924)
100.000% <= 1.543 milliseconds (cumulative count 9999963)
100.000% <= 2.039 milliseconds (cumulative count 9999981)
100.000% <= 2.167 milliseconds (cumulative count 9999992)
100.000% <= 2.207 milliseconds (cumulative count 9999996)
100.000% <= 2.231 milliseconds (cumulative count 9999998)
100.000% <= 3.887 milliseconds (cumulative count 9999999)
100.000% <= 4.543 milliseconds (cumulative count 10000000)
100.000% <= 4.543 milliseconds (cumulative count 10000000)

Cumulative distribution of latencies:
0.048% <= 0.103 milliseconds (cumulative count 4762)
77.208% <= 0.207 milliseconds (cumulative count 7720789)
99.828% <= 0.303 milliseconds (cumulative count 9982787)
99.989% <= 0.407 milliseconds (cumulative count 9998856)
99.994% <= 0.503 milliseconds (cumulative count 9999370)
99.995% <= 0.607 milliseconds (cumulative count 9999548)
99.998% <= 0.703 milliseconds (cumulative count 9999814)
99.999% <= 0.807 milliseconds (cumulative count 9999854)
99.999% <= 0.903 milliseconds (cumulative count 9999873)
99.999% <= 1.007 milliseconds (cumulative count 9999884)
99.999% <= 1.103 milliseconds (cumulative count 9999904)
99.999% <= 1.207 milliseconds (cumulative count 9999921)
99.999% <= 1.303 milliseconds (cumulative count 9999931)
99.999% <= 1.407 milliseconds (cumulative count 9999937)
100.000% <= 1.503 milliseconds (cumulative count 9999958)
100.000% <= 1.607 milliseconds (cumulative count 9999968)
100.000% <= 1.703 milliseconds (cumulative count 9999973)
100.000% <= 2.007 milliseconds (cumulative count 9999977)
100.000% <= 2.103 milliseconds (cumulative count 9999986)
100.000% <= 3.103 milliseconds (cumulative count 9999998)
100.000% <= 4.103 milliseconds (cumulative count 9999999)
100.000% <= 5.103 milliseconds (cumulative count 10000000)

Summary:
  throughput summary: 216694.12 requests per second
  latency summary (msec):
          avg       min       p50       p95       p99       max
        0.177     0.080     0.183     0.239     0.271     4.543           

HSET:

[root@csudev04 tmp]# ./redis-benchmark -h 10.197.160.6 -p 6379 -t HSET -d 188 -c 40 -n 10000000 --threads 40
====== HSET ======
  10000000 requests completed in 46.92 seconds
  40 parallel clients
  188 bytes payload
  keep alive: 1
  host configuration "save": 900 1 300 10 60 10000
  host configuration "appendonly": no
  multi-thread: yes
  threads: 40

Latency by percentile distribution:
0.000% <= 0.087 milliseconds (cumulative count 1)
50.000% <= 0.183 milliseconds (cumulative count 5438986)
75.000% <= 0.207 milliseconds (cumulative count 7525817)
87.500% <= 0.231 milliseconds (cumulative count 9208067)
93.750% <= 0.239 milliseconds (cumulative count 9548825)
96.875% <= 0.247 milliseconds (cumulative count 9743486)
98.438% <= 0.263 milliseconds (cumulative count 9890471)
99.219% <= 0.279 milliseconds (cumulative count 9940448)
99.609% <= 0.295 milliseconds (cumulative count 9976057)
99.805% <= 0.303 milliseconds (cumulative count 9986733)
99.902% <= 0.311 milliseconds (cumulative count 9992865)
99.951% <= 0.319 milliseconds (cumulative count 9995870)
99.976% <= 0.335 milliseconds (cumulative count 9997677)
99.988% <= 0.431 milliseconds (cumulative count 9998806)
99.994% <= 0.559 milliseconds (cumulative count 9999418)
99.997% <= 0.631 milliseconds (cumulative count 9999713)
99.998% <= 0.823 milliseconds (cumulative count 9999848)
99.999% <= 1.263 milliseconds (cumulative count 9999924)
100.000% <= 1.503 milliseconds (cumulative count 9999965)
100.000% <= 1.575 milliseconds (cumulative count 9999981)
100.000% <= 1.671 milliseconds (cumulative count 9999991)
100.000% <= 1.759 milliseconds (cumulative count 9999996)
100.000% <= 1.823 milliseconds (cumulative count 9999998)
100.000% <= 1.887 milliseconds (cumulative count 9999999)
100.000% <= 3.047 milliseconds (cumulative count 10000000)
100.000% <= 3.047 milliseconds (cumulative count 10000000)

Cumulative distribution of latencies:
0.011% <= 0.103 milliseconds (cumulative count 1138)
75.258% <= 0.207 milliseconds (cumulative count 7525817)
99.867% <= 0.303 milliseconds (cumulative count 9986733)
99.987% <= 0.407 milliseconds (cumulative count 9998662)
99.992% <= 0.503 milliseconds (cumulative count 9999156)
99.996% <= 0.607 milliseconds (cumulative count 9999588)
99.998% <= 0.703 milliseconds (cumulative count 9999815)
99.998% <= 0.807 milliseconds (cumulative count 9999845)
99.999% <= 0.903 milliseconds (cumulative count 9999867)
99.999% <= 1.007 milliseconds (cumulative count 9999888)
99.999% <= 1.103 milliseconds (cumulative count 9999910)
99.999% <= 1.207 milliseconds (cumulative count 9999919)
99.999% <= 1.303 milliseconds (cumulative count 9999929)
99.999% <= 1.407 milliseconds (cumulative count 9999943)
100.000% <= 1.503 milliseconds (cumulative count 9999965)
100.000% <= 1.607 milliseconds (cumulative count 9999984)
100.000% <= 1.703 milliseconds (cumulative count 9999991)
100.000% <= 1.807 milliseconds (cumulative count 9999997)
100.000% <= 1.903 milliseconds (cumulative count 9999999)
100.000% <= 3.103 milliseconds (cumulative count 10000000)

Summary:
  throughput summary: 213106.03 requests per second
  latency summary (msec):
          avg       min       p50       p95       p99       max
        0.180     0.080     0.183     0.239     0.271     3.047           

GET:

[root@csudev04 tmp]# ./redis-benchmark -h 10.197.160.6 -p 6379 -t GET -d 188 -c 40 -n 10000000 --threads 40
====== GET ======
  10000000 requests completed in 43.65 seconds
  40 parallel clients
  188 bytes payload
  keep alive: 1
  host configuration "save": 900 1 300 10 60 10000
  host configuration "appendonly": no
  multi-thread: yes
  threads: 40

Latency by percentile distribution:
0.000% <= 0.087 milliseconds (cumulative count 5)
50.000% <= 0.167 milliseconds (cumulative count 5189215)
75.000% <= 0.191 milliseconds (cumulative count 7809609)
87.500% <= 0.207 milliseconds (cumulative count 9113370)
93.750% <= 0.215 milliseconds (cumulative count 9530184)
96.875% <= 0.223 milliseconds (cumulative count 9785681)
98.438% <= 0.231 milliseconds (cumulative count 9917091)
99.219% <= 0.239 milliseconds (cumulative count 9970378)
99.805% <= 0.247 milliseconds (cumulative count 9988411)
99.902% <= 0.255 milliseconds (cumulative count 9993815)
99.951% <= 0.263 milliseconds (cumulative count 9995744)
99.976% <= 0.295 milliseconds (cumulative count 9997829)
99.988% <= 0.351 milliseconds (cumulative count 9998803)
99.994% <= 0.527 milliseconds (cumulative count 9999392)
99.997% <= 0.639 milliseconds (cumulative count 9999725)
99.998% <= 0.759 milliseconds (cumulative count 9999850)
99.999% <= 1.919 milliseconds (cumulative count 9999925)
100.000% <= 4.207 milliseconds (cumulative count 9999962)
100.000% <= 6.023 milliseconds (cumulative count 9999981)
100.000% <= 6.063 milliseconds (cumulative count 9999993)
100.000% <= 6.079 milliseconds (cumulative count 9999996)
100.000% <= 6.087 milliseconds (cumulative count 9999998)
100.000% <= 6.095 milliseconds (cumulative count 9999999)
100.000% <= 7.327 milliseconds (cumulative count 10000000)
100.000% <= 7.327 milliseconds (cumulative count 10000000)

Cumulative distribution of latencies:
0.115% <= 0.103 milliseconds (cumulative count 11469)
91.134% <= 0.207 milliseconds (cumulative count 9113370)
99.981% <= 0.303 milliseconds (cumulative count 9998071)
99.990% <= 0.407 milliseconds (cumulative count 9999049)
99.993% <= 0.503 milliseconds (cumulative count 9999342)
99.995% <= 0.607 milliseconds (cumulative count 9999538)
99.998% <= 0.703 milliseconds (cumulative count 9999825)
99.999% <= 0.807 milliseconds (cumulative count 9999865)
99.999% <= 0.903 milliseconds (cumulative count 9999884)
99.999% <= 1.007 milliseconds (cumulative count 9999898)
99.999% <= 1.103 milliseconds (cumulative count 9999910)
99.999% <= 1.207 milliseconds (cumulative count 9999918)
99.999% <= 1.903 milliseconds (cumulative count 9999920)
99.999% <= 2.007 milliseconds (cumulative count 9999925)
99.999% <= 4.103 milliseconds (cumulative count 9999933)
100.000% <= 5.103 milliseconds (cumulative count 9999966)
100.000% <= 6.103 milliseconds (cumulative count 9999999)
100.000% <= 8.103 milliseconds (cumulative count 10000000)

Summary:
  throughput summary: 229100.33 requests per second
  latency summary (msec):
          avg       min       p50       p95       p99       max
        0.167     0.080     0.167     0.215     0.231     7.327           

4. Summary

Modern servers have many cores. Binding the Redis service process to one specific core keeps the scheduler from migrating it between cores, which preserves cache locality and improves performance. It can also serve as an emergency measure: when a Redis instance is approaching its throughput limit, binding its process to a dedicated CPU core with taskset bought roughly 7% more performance in our tests.
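
Note that a taskset call only affects the running process and is lost on restart. If Redis is managed by systemd, the binding can be made persistent with a unit drop-in (a sketch: the unit name "redis" is an assumption, the NUMA directives require systemd 243+, and the values mirror the core-2/node-0 example above):

```shell
# Hypothetical: persist the pinning for a systemd-managed "redis" unit.
sudo systemctl edit redis
# Then add to the drop-in file:
#   [Service]
#   CPUAffinity=2
#   NUMAPolicy=bind
#   NUMAMask=0
sudo systemctl restart redis
```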
