
TensorFlow tuning error: Resource exhausted: OOM when allocating tensor with shape

Error message

Resource exhausted: OOM when allocating tensor with shape[200,256,28,28] …

This is a common problem when tuning hyperparameters, caused by running out of GPU memory. My GPU has 8 GB of VRAM; setting IMAGES_PER_GPU = 2 triggered this error, and changing it back to 1 made the error disappear (i.e. the batch size was reduced).
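The IMAGES_PER_GPU knob above follows the Mask R-CNN-style config convention, where the effective batch size is the product of two fields. A minimal standalone sketch (class and attribute names are illustrative, not from a specific library):

```python
class TrainConfig:
    """Sketch of a Mask R-CNN-style training config (hypothetical names)."""
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1  # lowered from 2 to fit an 8 GB GPU

    @property
    def batch_size(self):
        # Activation memory per step scales with this product, so lowering
        # either field directly reduces GPU memory pressure.
        return self.GPU_COUNT * self.IMAGES_PER_GPU
```

Halving IMAGES_PER_GPU roughly halves the activations held per training step, which is usually the quickest way out of an OOM.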

Solutions

  1. Reduce the batch size.
  2. Find the layer where GPU memory runs out. If it is a fully connected layer, reduce that layer's dimensionality, e.g. from 2048 to 1024.
  3. Add pooling layers to shrink the feature maps throughout the network.
  4. Reduce the input image size.
  5. If money is no object, switch to a machine with a bigger GPU.
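Why options 1, 3, and 4 all help comes down to the same arithmetic: a dense float32 tensor costs 4 bytes per element, so the failing allocation in the trace below, shape [32, 32, 417, 417], needs about 680 MiB on its own. A small sketch of that calculation:

```python
def tensor_bytes(shape, dtype_bytes=4):
    """Bytes needed for one dense tensor of the given shape (float32 by default)."""
    n = 1
    for d in shape:
        n *= d
    return n * dtype_bytes

# The tensor from the OOM message below.
failing = tensor_bytes([32, 32, 417, 417])
print(failing)  # 712249344 bytes, about 679.25 MiB

# The same layer at half the batch size needs exactly half the memory.
halved = tensor_bytes([16, 32, 417, 417])
```

Note that 712249344 bytes is exactly the largest chunk in the allocator dump further down (679.25 MiB), so each of the listed fixes works by shrinking one of the factors in this product.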
For example, this training run with a batch size of 256 produced the trace below:

# python train.py --logdir myLog --batch_size 256 --dropout_rate 0.5


OP_REQUIRES failed at conv_ops.cc:636 : Resource exhausted: OOM when allocating tensor with shape[32,32,417,417] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Traceback (most recent call last):
 
    callbacks=[logging, checkpoint])
  File "D:\Anaconda3\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "D:\Anaconda3\lib\site-packages\keras\engine\training.py", line 1415, in fit_generator
    initial_epoch=initial_epoch)
  File "D:\Anaconda3\lib\site-packages\keras\engine\training_generator.py", line 213, in fit_generator
    class_weight=class_weight)
  File "D:\Anaconda3\lib\site-packages\keras\engine\training.py", line 1215, in train_on_batch
    outputs = self.train_function(ins)
  File "D:\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py", line 2666, in __call__
    return self._call(inputs)
  File "D:\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py", line 2636, in _call
    fetched = self._callable_fn(*array_vals)
  File "D:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1382, in __call__
    run_metadata_ptr)
  File "D:\Anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 519, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[32,32,417,417] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
	 [[Node: conv2d_2/convolution = Conv2D[T=DT_FLOAT, _class=["loc:@batch_normalization_2/cond/FusedBatchNorm/Switch"], data_format="NHWC", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 2, 2, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](zero_padding2d_1/Pad, conv2d_2/kernel/read)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
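The hint refers to the TF 1.x `RunOptions` proto, which has a `report_tensor_allocations_upon_oom` field. Assuming a raw session loop (with Keras, the options would have to be threaded through the backend), the config fragment looks roughly like this; `sess` and `train_op` stand in for your own session and op:

```python
import tensorflow as tf

# TF 1.x: ask the runtime to dump live tensor allocations when an OOM occurs.
run_options = tf.RunOptions(report_tensor_allocations_upon_oom=True)
sess.run(train_op, options=run_options)
```

With this set, the OOM error message includes a per-tensor allocation listing instead of only the failing shape.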
T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 1 Chunks of size 16384 totalling 16.0KiB
2018-09-26 18:50:05.482594: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 8 Chunks of size 21504 totalling 168.0KiB
2018-09-26 18:50:05.482884: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 2 Chunks of size 32768 totalling 64.0KiB
2018-09-26 18:50:05.483090: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 8 Chunks of size 43008 totalling 336.0KiB
2018-09-26 18:50:05.483276: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 5 Chunks of size 65024 totalling 317.5KiB
2018-09-26 18:50:05.483457: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 2 Chunks of size 73728 totalling 144.0KiB
2018-09-26 18:50:05.483656: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 8 Chunks of size 86016 totalling 672.0KiB
2018-09-26 18:50:05.483844: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 3 Chunks of size 129792 totalling 380.3KiB
2018-09-26 18:50:05.484411: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 11 Chunks of size 131072 totalling 1.38MiB
2018-09-26 18:50:05.484719: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 1 Chunks of size 196608 totalling 192.0KiB
2018-09-26 18:50:05.484902: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 5 Chunks of size 259584 totalling 1.24MiB
2018-09-26 18:50:05.485216: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 3 Chunks of size 294912 totalling 864.0KiB
2018-09-26 18:50:05.485494: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 1 Chunks of size 454400 totalling 443.8KiB
2018-09-26 18:50:05.485748: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 3 Chunks of size 519168 totalling 1.49MiB
2018-09-26 18:50:05.486063: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 11 Chunks of size 524288 totalling 5.50MiB
2018-09-26 18:50:05.486245: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 1 Chunks of size 786432 totalling 768.0KiB
2018-09-26 18:50:05.486419: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 4 Chunks of size 1038336 totalling 3.96MiB
2018-09-26 18:50:05.486590: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 12 Chunks of size 1179648 totalling 13.50MiB
2018-09-26 18:50:05.486764: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 1 Chunks of size 1817088 totalling 1.73MiB
2018-09-26 18:50:05.486934: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 3 Chunks of size 2076672 totalling 5.94MiB
2018-09-26 18:50:05.487432: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 7 Chunks of size 2097152 totalling 14.00MiB
2018-09-26 18:50:05.487719: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 12 Chunks of size 4718592 totalling 54.00MiB
2018-09-26 18:50:05.487982: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 1 Chunks of size 7268352 totalling 6.93MiB
2018-09-26 18:50:05.488284: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 8 Chunks of size 18874368 totalling 144.00MiB
2018-09-26 18:50:05.488560: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 1 Chunks of size 431485952 totalling 411.50MiB
2018-09-26 18:50:05.488842: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 1 Chunks of size 712249344 totalling 679.25MiB
2018-09-26 18:50:05.489097: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:678] Sum Total of in-use chunks: 1.32GiB
2018-09-26 18:50:05.489374: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:680] Stats: 
Limit:                  3211594956
InUse:                  1415122432
MaxInUse:               2420054016
NumAllocs:                    1707
MaxAllocSize:            712249344
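The Stats block is printed in raw bytes; converting it to GiB (simple arithmetic on the numbers above, nothing framework-specific) makes it easier to read:

```python
GiB = 2 ** 30

# Values copied from the allocator's Stats block above.
stats = {
    "Limit": 3211594956,        # memory the BFC allocator may use, ~3 GiB
    "InUse": 1415122432,        # matches "Sum Total of in-use chunks: 1.32GiB"
    "MaxInUse": 2420054016,     # peak usage during this run
    "MaxAllocSize": 712249344,  # the single biggest chunk, the failing tensor
}
for name, value in stats.items():
    print(f"{name}: {value / GiB:.2f} GiB")
```

Note the limit here (~3 GiB) is well below the card's physical 8 GB; the allocator only gets the memory left after other processes and any per-process memory fraction are accounted for.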

