
Enabling TRIM on Linux

For an SSD, long-term use that exhausts the space on the disk's free lists severely degrades write performance (even if 90% of the disk space is still free).

Note:

The root cause is that an SSD stores data in flash. Every time a previously used block is rewritten, it must first be erased and then written again; this duplicated IO consumes extra system resources. A mechanical hard disk, by contrast, overwrites data directly without an erase step. Consequently, under frequent-write (overwrite) workloads, or once no free-list space remains, an SSD suffers serious IO degradation.

1. On Linux, the TRIM feature can be enabled so that the system automatically regenerates the free lists.
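Besides the mount-time discard option described below, a trim can also be triggered by hand. A minimal sketch, assuming a mounted, TRIM-capable root filesystem and a util-linux new enough to ship fstrim:

fstrim -v /    # -v reports how many bytes were discarded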

How to enable TRIM:

1. The ext4 filesystem is recommended.

2. The kernel must be newer than 2.6.28.

3. Run hdparm -I /dev/sda to check whether the drive supports TRIM:

* Data Set Management TRIM supported (this line indicates support)
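For example (the device name is only an assumption; substitute your own):

hdparm -I /dev/sda | grep -i trim
# expected on a TRIM-capable drive:
#    *    Data Set Management TRIM supported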

4. Add the discard parameter to the filesystem entry in fstab:

/dev/sda1 / ext4 discard,defaults 0 1
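The option can be applied without a reboot by remounting; a sketch, assuming the root filesystem from the fstab line above:

mount -o remount,discard /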

5. For the swap partition, reduce unnecessary writes by lowering swappiness:

echo 1 > /proc/sys/vm/swappiness
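To make the value persist across reboots, it can also be added to sysctl.conf (a sketch; /etc/sysctl.conf is the conventional path on RHEL):

echo "vm.swappiness = 1" >> /etc/sysctl.conf
sysctl -p    # reload the setting immediately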

2. The noop scheduling algorithm is recommended.

Linux has several different disk schedulers, which are responsible for determining in which order read and write requests to the disk are handled. Using the noop scheduler means that Linux will simply handle requests in the order they are received, without giving any consideration to where the data physically resides on the disk. This is good for solid-state drives because they have no moving parts, and seek times are identical for all sectors on the disk.

The IO scheduler is the kernel component that commits reads and writes to disks – the intention of providing different schedulers is to allow better optimisation for different classes of workload.

Without a scheduler, if one process was reading from one part of the disk, and another writing to a different part, the heads would have to seek back and forth across the disk for every operation. The scheduler's main goal is to optimise disk access times.

An I/O scheduler can use the following techniques to improve performance:

* Request merging: the scheduler merges adjacent requests together to reduce disk seeking.

* Elevator: the scheduler orders requests based on their physical location on the block device, and basically tries to seek in one direction as much as possible.

* Prioritisation: the scheduler has complete control over how it prioritises requests, and can do so in a number of ways.

All I/O schedulers should also take into account resource starvation, to ensure requests eventually do get serviced!

There are currently 4 available:

* No-op scheduler

* Anticipatory IO scheduler (AS)

* Deadline scheduler

* Complete Fair Queueing scheduler (CFQ)

No-op scheduler: this scheduler only implements request merging.

Anticipatory IO scheduler (AS): this is the default scheduler in older 2.6 kernels – if you've not specified one, this is the one that will be loaded. It implements request merging, a one-way elevator, read and write request batching, and attempts some anticipatory reads by holding off a bit after a read batch if it thinks a user is going to ask for more data. It tries to optimise for physical disks by avoiding head movements if possible – one downside is that it probably gives highly erratic performance on database or storage systems.

Deadline scheduler: the deadline scheduler implements request merging and a one-way elevator, and additionally imposes a deadline on all requests to prevent resource starvation. Because writes return almost immediately under Linux, with the actual data being held in cache, the deadline scheduler will also prefer readers – as long as the deadline for a write request hasn't passed. The kernel docs suggest this is the preferred scheduler for database systems, especially if you have TCQ-aware disks.

Complete Fair Queueing scheduler (CFQ): the complete fair queueing scheduler implements both request merging and the elevator, and attempts to give all users of a particular device the same number of IO requests over a particular time interval. This should make it more efficient for multiuser systems.

The most reliable way to change schedulers is to set the kernel option "elevator" at boot time. You can set it to one of "as", "cfq", "deadline" or "noop" to select the appropriate scheduler.
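On a grub-based system this means appending the option to the kernel line; the sketch below is illustrative only (the kernel version and root device are placeholders):

# /boot/grub/grub.conf
kernel /vmlinuz-2.6.32 ro root=/dev/sda1 elevator=noop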

It seems that under more recent 2.6 kernels (2.6.11, possibly earlier), you can change the scheduler at runtime by echoing the name of the scheduler into /sys/block/$devicename/queue/scheduler, where the device name is the basename of the block device, e.g. "sda" for /dev/sda.
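For example, assuming the SSD is sda:

cat /sys/block/sda/queue/scheduler    # the active scheduler is shown in brackets
echo noop > /sys/block/sda/queue/scheduler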

I've not personally done any testing on this, so I can't speak from experience yet. The anticipatory scheduler is the default one for a reason, however – it is optimised for the common case. If you've only got a single-disk system (i.e. no RAID, hardware or software), then this scheduler is probably the right one for you. If it's a multiuser system, you will probably find CFQ or deadline providing better performance, and the numbers seem to back deadline giving the best performance for database systems.

The noop scheduler has minimal CPU overhead in managing the queues, and may be well suited to systems with low seek times, such as an SSD, or to systems using a hardware RAID controller, which often has its own IO scheduler designed around the RAID semantics.

3. Use the wiper tool to re-clear (trim) the SSD.

wiper.sh ships with the hdparm tool, but neither RHEL 5 nor RHEL 6 includes it by default, so building hdparm from source is recommended.
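A minimal sketch of building from source (the version number below is only an example; check the hdparm project page on SourceForge for the current release):

tar xzf hdparm-9.43.tar.gz
cd hdparm-9.43
make && make install
# wiper.sh is found in the wiper/ subdirectory of the source tree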