
How Many Maps And Reduces

Reading this article should make clear why setting dfs.block.size too large is also a bad idea!

new-mr-api parameters that affect split size

  – mapred.max.split.size maximum size of a split; default: Long.MAX_VALUE

  – mapred.min.split.size minimum size of a split; default: 1

• new-mr-api split-size formula

  – splitSize = max(minSize, min(maxSize, blockSize))

  – minSize = ${mapred.min.split.size}

  – maxSize = ${mapred.max.split.size}

• Lowering mapred.max.split.size increases the number of maps (set mapred.max.split.size somewhat smaller)

• Raising mapred.min.split.size decreases the number of maps (set mapred.min.split.size larger than blockSize; the larger it gets, the fewer maps)
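The formula above can be sketched in a few lines; this is an illustration of the rule, not Hadoop source code, and the 128 MB block size is just an example value:

```python
# Minimal sketch of the new-mr-api split-size rule:
#   splitSize = max(minSize, min(maxSize, blockSize))

LONG_MAX = 2**63 - 1  # default mapred.max.split.size

def split_size(block_size, min_size=1, max_size=LONG_MAX):
    """Compute an input split size the way the new MapReduce API does."""
    return max(min_size, min(max_size, block_size))

MB = 1024 * 1024

# With the defaults, the split size equals the block size.
assert split_size(128 * MB) == 128 * MB

# Lowering mapred.max.split.size below blockSize -> smaller splits, more maps.
assert split_size(128 * MB, max_size=64 * MB) == 64 * MB

# Raising mapred.min.split.size above blockSize -> larger splits, fewer maps.
assert split_size(128 * MB, min_size=256 * MB) == 256 * MB
```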

Test: http://www.51testing.com/?uid-445759-action-viewspace-itemid-809639

Partitioning your job into maps and reduces

Picking the appropriate size for the tasks in your job can radically change Hadoop's performance. Increasing the number of tasks increases the framework overhead, but improves load balancing and lowers the cost of failures. At one extreme is the 1 map / 1 reduce case, where nothing is distributed. At the other extreme, 1,000,000 maps / 1,000,000 reduces, the framework runs out of resources for the overhead.

Number of Maps

The number of maps is usually driven by the number of DFS blocks in the input files, which is why people adjust their DFS block size to adjust the number of maps. The right level of parallelism for maps seems to be around 10-100 maps per node, although it has been taken up to 300 or so for very CPU-light map tasks. Task setup takes a while, so it is best if each map takes at least a minute to execute.

Actually controlling the number of maps is subtle. The mapred.map.tasks parameter is just a hint to the InputFormat about the number of maps. The default InputFormat behavior is to split the total number of bytes into the right number of fragments. However, in the default case the DFS block size of the input files is treated as an upper bound for input splits. A lower bound on the split size can be set via mapred.min.split.size. Thus, if you expect 10 TB of input data and have 128 MB DFS blocks, you'll end up with about 82,000 maps, unless mapred.map.tasks is even larger. Ultimately the InputFormat determines the number of maps.
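The "82k maps" figure is easy to check with back-of-the-envelope arithmetic, assuming the default one-split-per-block behavior:

```python
# Map count is roughly input size divided by the DFS block size
# when each HDFS block becomes one input split (the default).

MB = 1024 * 1024
TB = 1024 * 1024 * MB

input_size = 10 * TB
block_size = 128 * MB

num_maps = input_size // block_size
assert num_maps == 81920  # i.e. roughly 82k map tasks
```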

The number of map tasks can also be increased manually using the JobConf's conf.setNumMapTasks(int num). This can be used to increase the number of map tasks, but will not set the number below that which Hadoop determines via splitting the input data.
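To see why the hint can raise but never lower the map count, here is a sketch of how the old-API FileInputFormat (as I understand it) folds the requested task count into a "goal size" that is still capped by the block size and floored by mapred.min.split.size; the sizes below are made-up example values:

```python
# Sketch of old-API FileInputFormat split sizing: the numMapTasks hint
# sets a goal size, but blockSize and mapred.min.split.size still win.

def old_api_split_size(total_size, num_maps_hint, block_size, min_size=1):
    goal_size = total_size // max(1, num_maps_hint)
    return max(min_size, min(goal_size, block_size))

GB = 1024**3
MB = 1024**2

# Asking for only 4 maps over 1 GB with 128 MB blocks still yields
# 128 MB splits, i.e. 8 maps -- the hint cannot go below the split count.
assert old_api_split_size(1 * GB, 4, 128 * MB) == 128 * MB

# Asking for 16 maps shrinks the splits to 64 MB and does raise the count.
assert old_api_split_size(1 * GB, 16, 128 * MB) == 64 * MB
```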

Number of Reduces

The right number of reduces seems to be 0.95 or 1.75 * (nodes * mapred.tasktracker.tasks.maximum). With a factor of 0.95, all of the reduces can launch immediately and start transferring map outputs as the maps finish. With 1.75, the faster nodes will finish their first round of reduces and launch a second round, doing a much better job of load balancing.
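The rule of thumb above as arithmetic; the node count and per-node slot limit are hypothetical example values:

```python
# Reduce-count rule of thumb: 0.95 or 1.75 * (nodes * slots per node).

nodes = 20
slots_per_node = 4  # hypothetical mapred.tasktracker.tasks.maximum

low = int(0.95 * nodes * slots_per_node)   # all reduces launch in one wave
high = int(1.75 * nodes * slots_per_node)  # two waves, better load balancing

assert low == 76
assert high == 140
```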

Currently the number of reduces is limited to roughly 1000 by the buffer size for the output files (io.buffer.size * 2 * numReduces << heapSize). This will be fixed at some point, but until it is it provides a pretty firm upper bound.

The number of reduces also controls the number of output files in the output directory, but usually that is not important because the next map/reduce step will split them into even smaller splits for the maps.

The number of reduce tasks can also be increased in the same way as the map tasks, via JobConf's conf.setNumReduceTasks(int num).

http://wiki.apache.org/hadoop/HowManyMapsAndReduces

What is your typical block size in HDFS of your production clusters?

I am primarily looking for the block size distribution in production clusters. From your Hadoop deployment experience, what are the min, average, and max block sizes you have seen?

The default block size is, of course, 64MB for newly created files. We (Cloudera) generally recommend starting at 128MB instead. While the block size affects (best-case) sequential read and write sizes, it also has a direct impact on map task performance due to how input splits are calculated by default. Generally, you'll get a single input split for each HDFS block (modulo all the ways you can change this). What you're looking for is to amortize the cost of JVM startup and scheduler overhead over the length of the job. In other words, if you have a small block size, each map task has very little to do; it schedules the task, finds a machine, starts a JVM, processes a very small amount of data, and exits. As CPUs get faster, the individual map task run time gets shorter and the cost of having more tasks gets higher. You want to find a balance where each task is able to process a reasonable[1] amount of data while still getting the benefits of parallelism.

The smaller the block size, the more tasks you get, and the more scheduling activity occurs. You don't want jobs with hundreds of thousands of tasks (unless the input data size genuinely calls for that many), but you don't want so few that you don't take advantage of all slots on the cluster. You may be tempted to aim for each job utilizing 100% of the cluster, no more, no less, but in a multitenant environment where there is slot contention that doesn't work either. All of this is very overwhelming, so what's the right answer? Start with 128MB and observe. Remember that the block size is per file, not for all of HDFS. The dfs.block.size parameter only affects the size of newly created files that don't specifically set a block size.
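The tradeoff described above is easy to quantify; this rough illustration (with an example 1 TB dataset) assumes the default one-split-per-block behavior:

```python
# For a fixed dataset, halving the block size doubles the map task count,
# so fixed per-task overhead (JVM startup, scheduling) is paid twice as often.

MB = 1024**2
TB = 1024**2 * MB

data = 1 * TB
assert data // (128 * MB) == 8192    # map tasks at the recommended 128 MB
assert data // (64 * MB) == 16384    # map tasks at the 64 MB default
```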

I also want to caution against over-thinking / over-tuning something like this. You don't want to micromanage the block size of each dataset in the cluster. You probably have better fish to fry.

[1] I get that "reasonable" is terribly subjective. My personal bar is that each task takes at least 30 seconds.

Ref: http://www.quora.com/HDFS/What-is-your-typical-block-size-in-HDFS-of-your-production-clusters#
