
This is an old question that I've seen from time to time. My understanding of it is rather limited (I read about the differences a long time ago, but the details never really stuck).
As I understand it:

- "buffers" are used by programs with active I/O operations, i.e. data waiting to be written to disk
- "cache" is the result of completed I/O operations, i.e. buffers that have been flushed, or data read from disk to satisfy a request
The "cached" total will also include some other memory allocations, such as any tmpfs filesystems. To see this in effect, try:
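The original example commands did not survive here; the following is a sketch of the kind of test described, assuming the stock /dev/shm tmpfs mount found on most Linux systems and root access for drop_caches:

```shell
free -m
# copy 100MB into a tmpfs (RAM-backed) filesystem
dd if=/dev/zero of=/dev/shm/zero.file bs=1M count=100
sync; echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null
free -m    # "cached" still includes the 100MB tmpfs file
rm /dev/shm/zero.file
sync; echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null
free -m    # "cached" has dropped by ~100MB
```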
You will see the "cached" value drop by the 100MB that you copied to the RAM-based filesystem (assuming there was enough free RAM; you might find some of it ended up in swap if the machine is already over-committed in terms of memory use). The "sync; echo 3 > /proc/sys/vm/drop_caches" before each call to free writes anything pending in the write buffers (the sync) and clears all cached/buffered disk blocks from memory, so free will only be reporting other allocations in its "cached" value.
The RAM used by virtual machines (such as those running under VMware) will also be counted in free's "cached" value, as will RAM used by currently open memory-mapped files.
So it isn't quite as simple as "buffers counts pending file/network writes and cached counts recently read/written blocks held in RAM to save future physical reads", though for most purposes that simpler description will do.
Warning: this explains a forceful method, not recommended on a production server! You've been warned; don't blame me if something goes wrong.
To understand this, you can force your system to delegate as much memory as possible to <code>cache</code>, then drop the cached file:
Preamble

Before doing the test, you can open another window and run the following, to follow the evolution of swap in real time:
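The monitoring command itself is missing above; a minimal stand-in, assuming a Linux /proc/meminfo, that prints swap usage once per second:

```shell
# print swap totals every second; stop with Ctrl-C
while true; do
    grep -E '^Swap(Total|Free)' /proc/meminfo
    sleep 1
done
```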
Note: you must have as much free disk space in the current directory as you have mem+swap.
The demo

Note: the host on which I did this is heavily used; the effect will be more significant on a really quiet machine.
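The demo commands themselves did not survive extraction; a sketch of the kind of run described (the scratch file name and size are assumptions, and dropping the caches needs root):

```shell
free -m                                      # baseline
# write a large scratch file; the written pages land in the page cache
dd if=/dev/zero of=big.file bs=1M count=1024
free -m                                      # "cached" has grown
sync                                         # flush pending write buffers
echo 3 | sudo tee /proc/sys/vm/drop_caches   # drop page cache, dentries, inodes
free -m                                      # "cached" shrinks again
rm big.file
```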
A buffer is something that has yet to be "written" to disk. A cache is something that has been "read" from the disk and stored for later use.
The difference between cache and buffer:

Cache: a high-speed cache; a small but very fast memory sitting between the CPU and main memory. Because the CPU is much faster than main memory, fetching data directly from memory means the CPU has to wait a number of cycles; the cache holds data the CPU has just used or uses repeatedly, so when the CPU needs that data again it can be served straight from the cache, reducing wait time and improving system efficiency. Cache is further divided into level-1 cache (L1 cache) and level-2 cache (L2 cache). L1 cache is integrated inside the CPU; L2 cache was originally soldered onto the motherboard, but nowadays it too is integrated into the CPU, with common L2 cache sizes of 256KB or 512KB.

Buffer: a region used to hold data in transit between devices of different speeds, or between devices of different priorities. A buffer reduces how long processes have to wait on each other, so that while data is being read in from a slow device, the process driving a fast device need not be interrupted.
buffer and cache in free (both occupy memory):

buffer: memory used as the buffer cache, the read/write buffer for block devices
cache: memory used as the page cache, the filesystem's cache
If the cache value is large, many files are being cached. If the frequently accessed files can all be held in cache, the disk read I/O (the bi column) will be very small.

To distinguish them in one sentence: reads go through the cache, writes go through the buffer.
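The figures free reports come straight from /proc/meminfo; a quick way to inspect the relevant fields directly (field names as found on a typical Linux):

```shell
# show the kernel's own accounting of free memory, buffers, and page cache
grep -E '^(MemFree|Buffers|Cached)' /proc/meminfo
```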
The biggest difference between the page cache and the buffer cache: the page cache caches file data, while the buffer cache caches device data!
<a target="_blank" href="http://os.51cto.com/art/200709/56603.htm">Reference 2</a>