
Torch learning-from-scratch notes series

Tensor

The tensor is the most important data structure in Torch; it is designed specifically for numeric data.

Multi-dimensional matrix

A tensor can be thought of as a multi-dimensional matrix. When processing data we often store it in a four-dimensional tensor of shape [N, C, H, W]: N is the batch size, C is the number of channels (also called the number of feature maps), and H×W is the image height and width. Unlike C arrays, tensor indices start at 1, not at 0.
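For example (the concrete shape below is just an illustrative assumption, not from the original text):

```lua
-- a batch of 8 RGB images of size 32x32, laid out as [N, C, H, W]
x = torch.Tensor(8, 3, 32, 32)

-- indexing starts at 1: first image, first channel, top-left pixel
x[1][1][1][1] = 0.5
```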
           

The tensor types include:

–ByteTensor – contains unsigned chars

–CharTensor – contains signed chars

–ShortTensor – contains shorts

–IntTensor – contains ints

–LongTensor – contains longs

–FloatTensor – contains floats

–DoubleTensor – contains doubles

How to define a tensor:

x = torch.Tensor(n1, n2, n3, ..., nk)

x:size() returns the size of tensor x

x:size(n) returns the size of the n-th dimension of x

x:dim() returns k, the number of dimensions of x

How to access an element of a tensor

x[n][k], or (more slowly) x:storage()[x:storageOffset() + (n-1)*x:stride(1) + (k-1)*x:stride(2)]
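As a sketch, both access paths read the same element (a 3x4 tensor is assumed for illustration):

```lua
x = torch.Tensor(3, 4):zero()
x[2][3] = 7   -- direct indexing, 1-based

-- equivalent raw-storage access using the offset and strides (slower)
local n, k = 2, 3
local v = x:storage()[x:storageOffset() + (n-1)*x:stride(1) + (k-1)*x:stride(2)]
-- v == 7
```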

A tensor is in fact a particular way of viewing a storage: a storage only represents a chunk of memory, while the tensor interprets this chunk of memory as having dimensions.

Initializing a tensor:

x = torch.Tensor(4,5)
s = x:storage()
for i=1,s:size() do -- fill up the Storage
  s[i] = i
end
> x -- s is interpreted by x as a 2D matrix
  1   2   3   4   5
  6   7   8   9  10
 11  12  13  14  15
 16  17  18  19  20
[torch.DoubleTensor of dimension 4x5]
x = torch.Tensor(4,5)
i = 0

x:apply(function()
  i = i + 1
  return i
end)

> x
  1   2   3   4   5
  6   7   8   9  10
 11  12  13  14  15
 16  17  18  19  20
[torch.DoubleTensor of dimension 4x5]

> x:stride()
 5
 1  -- element in the last dimension are contiguous!
[torch.LongStorage of size 2]

Most numeric operations are implemented only for FloatTensor and DoubleTensor. The other tensor types are useful if you want to save memory.

Set the default tensor type with the following command:

torch.setdefaulttensortype('torch.FloatTensor')

Assigning values to a tensor

x = torch.Tensor(5):zero()          -- create and zero

x:zero()                            -- zero in place

x:narrow(1, 2, 3):fill(1)           -- fill elements 2..4 with 1

y = torch.Tensor(x:size()):copy(x)  -- deep copy

y = x:clone()                       -- deep copy, shorter

torch.Tensor() creates a new tensor object and allocates new memory.

torch.Tensor(tensor) behaves like a pointer: the newly created tensor shares the same memory as the original one.

x = torch.Tensor(3,4)
y = torch.Tensor(x)
y[3][2] = 1
> x[3][2]
1
x = torch.Tensor(torch.LongStorage({4,4,3,2}))
> x:size()
 4
 4
 3
 2
[torch.LongStorage of size 4]

torch.Tensor(storage,[storageOffset,sizes,[strides]])

s=torch.Storage(10)

x = torch.Tensor(s, 1, torch.LongStorage{2,5}) -- x is another way of viewing s

x and s share the same memory.

torch.Tensor(table)

> torch.Tensor({{1,2,3,4}, {5,6,7,8}})
 1  2  3  4
 5  6  7  8
[torch.DoubleTensor of dimension 2x4]
           

Tensor operations / manipulation

  1. clone()

    y = x:clone(); x and y do not share memory (deep copy)

  2. contiguous(): if the underlying storage is already contiguous, no copy is made; otherwise the data is copied

    If the given Tensor contents are contiguous in memory, returns the exact same Tensor (no memory copy).

    Otherwise (not contiguous in memory), returns a clone (memory copy).

  3. t(): transpose
x = torch.Tensor(2,3):fill(1)
> x
 1  1  1
 1  1  1
[torch.DoubleTensor of dimension 2x3]

-- x is contiguous, so y points to the same thing
y = x:contiguous():fill(2)
> y
 2  2  2
 2  2  2
[torch.DoubleTensor of dimension 2x3]

-- contents of x have been changed
> x
 2  2  2
 2  2  2
[torch.DoubleTensor of dimension 2x3]

-- x:t() is not contiguous, so z is a clone
z = x:t():contiguous():fill(3.14)
> z
 3.1400  3.1400
 3.1400  3.1400
 3.1400  3.1400
[torch.DoubleTensor of dimension 3x2]

-- contents of x have not been changed
> x
 2  2  2
 2  2  2
[torch.DoubleTensor of dimension 2x3]
  4. type(): x:type() and torch.type(x) return the type name; x:type('torch.FloatTensor') converts
  5. typeAs(tensor)
x = torch.Tensor(3):fill(3.14)
> x
 3.1400
 3.1400
 3.1400
[torch.DoubleTensor of dimension 3]

-- converting to the tensor's current type returns the same tensor (shared memory)
y = x:type('torch.DoubleTensor')
> y
 3.1400
 3.1400
 3.1400
[torch.DoubleTensor of dimension 3]

-- zero y contents
y:zero()

-- contents of x have been changed
> x
 0
 0
 0
[torch.DoubleTensor of dimension 3]
  6. isTensor(obj)

Tensor size

  1. nDimension(), dim()
  2. size(dim), size()
  3. #self: x = torch.Tensor(2,3); #x --> 2 3
  4. stride(dim), stride()
  5. storage()
  6. isContiguous()
  7. isSize(storage): whether the tensor's dimensions match the sizes listed in the given LongStorage
  8. isSameSizeAs(tensor)
  9. nElement(): the number of elements in the tensor
  10. storageOffset():

    Return the first index (starting at 1) used in the tensor's storage.
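A minimal sketch of storageOffset(), using narrow, which shifts the view into the storage:

```lua
x = torch.Tensor(5):zero()
y = x:narrow(1, 3, 2)       -- a view of elements 3 and 4 of x
print(y:storageOffset())    -- 3: y begins at the 3rd element of x's storage
```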

Referencing a tensor to an existing tensor or chunk of memory

Shallow copy: both tensors share the same memory (no memory copy).

y=torch.Storage(10)

x=torch.Tensor()

x:set(y, 1, 10) -- or, equivalently:

x = torch.Tensor(y, 1, 10)

–set(Tensor)

x=torch.Tensor(2,5):fill(3.14)

y=torch.Tensor():set(x)

y:zero()

x is now all zeros as well.

–isSetTo(tensor)

Returns true if and only if the two tensors have the same size, stride, storage address, and storage offset.

x = torch.Tensor(2,5)

y = torch.Tensor()

y:isSetTo(x) -- false

y:set(x)

y:isSetTo(x) -- true

y:t():isSetTo(x) -- false: the strides differ

–[self] set(storage, [storageOffset, sizes, [strides]])

s=torch.Storage(10):fill(1)

sz=torch.LongStorage({2,5})

x=torch.Tensor()

x:set(s,1,sz)

–Deep copy (copy) and initialization

[self]copy(tensor)

x=torch.Tensor(4):fill(1)

y=torch.Tensor(2,2):copy(x)

[self] fill(value)

resizeAs(tensor)

resize(sizes)

–Extracting sub-tensors

[self] narrow(dim, index, size): shallow copy (shares storage)

x = torch.Tensor(5,6):zero()

y=x:narrow(1,2,3)

[Tensor] sub(dim1s, dim1e ... [, dim4s [, dim4e]]): shallow copy
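A short sketch of sub, which like narrow returns a shallow view:

```lua
x = torch.Tensor(5, 6):zero()
y = x:sub(2, 4, 3, 5):fill(1)   -- rows 2..4, columns 3..5; shares storage with x
-- the corresponding region of x is now filled with 1 as well
```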

[Tensor] select(dim, index): shallow; returns the slice at the given index with dimension dim removed

x=torch.Tensor(5,6):zero()

y=x:select(1,2):fill(2)

[Tensor] [{ dim1,dim2,… }] or [{ {dim1s,dim1e}, {dim2s,dim2e} }]

x[{ {}, {4,5} }] -- all rows, columns 4 to 5
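A few more table-indexing forms as a sketch:

```lua
x = torch.Tensor(5, 6):zero()
x[{ 1, 3 }] = 7          -- single element (1,3)
x[{ 2, {2,4} }] = 1      -- row 2, columns 2..4
y = x[{ {}, {4,5} }]     -- all rows, columns 4..5 (a shallow view)
```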

index(dim, index): deep copy (returns a new tensor)

x=torch.rand(5,5)

y=x:index(1,torch.LongTensor{3,1})

indexCopy(dim,index,tensor)

indexAdd(dim,index,tensor)

indexFill(dim,index,val)

gather(dim,index)
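A brief sketch of indexFill and gather; for dim = 1, gather picks result[i][j] = x[index[i][j]][j]:

```lua
x = torch.rand(5, 5)

-- fill rows 1 and 3 of x with -1
x:indexFill(1, torch.LongTensor{1, 3}, -1)

-- for each column j, take x[index[i][j]][j]
y = x:gather(1, torch.LongTensor{{1, 2, 3, 4, 5}, {2, 3, 4, 5, 1}})
```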