
[Deep Learning] Creating tensors: from numpy to tensor, tf.zeros(), tf.ones(), tf.fill(), and random initialization

1. From numpy to tensor: tf.convert_to_tensor() accepts numpy arrays as well as plain Python lists.

C:\Users\hasee>ipython

Python 3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)]

Type 'copyright', 'credits' or 'license' for more information

IPython 6.4.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import tensorflow as tf

G:\Users\hasee\Anaconda3\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.

from ._conv import register_converters as _register_converters

In [2]: import numpy as np

In [3]: tf.convert_to_tensor(np.ones([2,3]))

2019-07-26 09:55:04.608429: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2

Out[3]:
<tf.Tensor: shape=(2, 3), dtype=float64, numpy=
array([[1., 1., 1.],
       [1., 1., 1.]])>

In [4]: tf.convert_to_tensor(np.zeros([2,3]))

Out[4]:
<tf.Tensor: shape=(2, 3), dtype=float64, numpy=
array([[0., 0., 0.],
       [0., 0., 0.]])>

In [5]: tf.convert_to_tensor([1,2])

Out[5]:
<tf.Tensor: shape=(2,), dtype=int32, numpy=array([1, 2])>

In [6]: tf.convert_to_tensor([1,2.])

Out[6]:
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([1., 2.], dtype=float32)>

In [7]: tf.convert_to_tensor([[1],[2.]])

Out[7]:
<tf.Tensor: shape=(2, 1), dtype=float32, numpy=
array([[1.],
       [2.]], dtype=float32)>
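
The dtype follows the source: a numpy float array stays float64, while Python ints and floats default to int32 and float32. A minimal check of this, assuming nothing beyond the calls already shown:

import numpy as np
import tensorflow as tf

print(tf.convert_to_tensor(np.ones([2, 3])).dtype)  # float64, inherited from numpy
print(tf.convert_to_tensor([1, 2]).dtype)           # int32, the default for Python ints
print(tf.convert_to_tensor([1, 2.]).dtype)          # float32, because of the 2.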

2. tf.zeros(): note that the argument passed to zeros() is a shape, not a value; every element of the result is 0.

In [8]: tf.zeros([])

Out[8]:
<tf.Tensor: shape=(), dtype=float32, numpy=0.0>

In [9]: tf.zeros([1])

Out[9]:
<tf.Tensor: shape=(1,), dtype=float32, numpy=array([0.], dtype=float32)>

In [10]: tf.zeros([2,2])

Out[10]:
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[0., 0.],
       [0., 0.]], dtype=float32)>

In [11]: tf.zeros([2,3,3])

Out[11]:
<tf.Tensor: shape=(2, 3, 3), dtype=float32, numpy=
array([[[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.]],

       [[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.]]], dtype=float32)>

tf.zeros_like(): creates a zero tensor with the same shape (and dtype) as an existing tensor.

In [12]: a=tf.zeros([2,3,3])

In [13]: tf.zeros_like(a)

Out[13]:
<tf.Tensor: shape=(2, 3, 3), dtype=float32, numpy=
array([[[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.]],

       [[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.]]], dtype=float32)>

In [14]: tf.zeros(a.shape)

Out[14]:
<tf.Tensor: shape=(2, 3, 3), dtype=float32, numpy=
array([[[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.]],

       [[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.]]], dtype=float32)>
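
As the two outputs above show, tf.zeros_like(a) and tf.zeros(a.shape) agree here. The difference is that zeros_like also copies the dtype of its input, which matters for integer tensors; a small sketch (the tensor b below is my own example, not from the transcript):

import tensorflow as tf

b = tf.constant([[1, 2], [3, 4]])  # an int32 tensor
print(tf.zeros_like(b).dtype)      # int32, copied from b
print(tf.zeros(b.shape).dtype)     # float32, the default dtype of tf.zeros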

tf.ones(): same idea, but every element is 1.

In [15]: tf.ones(1)

Out[15]:
<tf.Tensor: shape=(1,), dtype=float32, numpy=array([1.], dtype=float32)>

In [16]: tf.ones([])

Out[16]:
<tf.Tensor: shape=(), dtype=float32, numpy=1.0>

In [17]: tf.ones([2])

Out[17]:
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([1., 1.], dtype=float32)>

In [18]: tf.ones([2,3])

Out[18]:
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[1., 1., 1.],
       [1., 1., 1.]], dtype=float32)>

In [19]: tf.ones_like(a)

Out[19]:
<tf.Tensor: shape=(2, 3, 3), dtype=float32, numpy=
array([[[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]],

       [[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]]], dtype=float32)>
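
A quick shape check of the three smaller calls above (a minimal sketch; tf.ones(1) builds a length-1 vector, tf.ones([]) a scalar):

import tensorflow as tf

print(tf.ones(1).shape)    # (1,)  a length-1 vector
print(tf.ones([]).shape)   # ()    a scalar
print(tf.ones([2]).shape)  # (2,)  a length-2 vector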

tf.fill(): fills a tensor of the given shape with a single value; the dtype is inferred from that value.

In [20]: tf.fill([2,2],0)

Out[20]:
<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
array([[0, 0],
       [0, 0]])>

In [21]: tf.fill([2,2],0.)

Out[21]:
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[0., 0.],
       [0., 0.]], dtype=float32)>

In [22]: tf.fill([2,2],1)

Out[22]:
<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
array([[1, 1],
       [1, 1]])>

In [23]: tf.fill([2,2],9)

Out[23]:
<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
array([[9, 9],
       [9, 9]])>
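
The fill value decides the dtype: 0, 1 and 9 give int32 tensors, while 0. gives float32. For a single repeated value, tf.fill([2,2], 9) behaves like a broadcast tf.constant; a small sketch of the comparison (my own, not from the transcript):

import tensorflow as tf

filled = tf.fill([2, 2], 9)
const = tf.constant(9, shape=[2, 2])           # the scalar 9 broadcast to shape (2, 2)
print(tf.reduce_all(tf.equal(filled, const)))  # tf.Tensor(True, shape=(), dtype=bool)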

Random initialization with a given mean and standard deviation

1) Normal distribution

tf.random.truncated_normal() is the recommended choice: samples more than two standard deviations from the mean are dropped and re-drawn, which keeps the initial weights away from extreme values.

In [24]: tf.random.normal([2,2],mean=1,stddev=1)

Out[24]:
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[ 0.74498326,  1.2692506 ],
       [-0.186342  ,  2.4405448 ]], dtype=float32)>

In [25]: tf.random.normal([2,2])

Out[25]:
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[-0.8279127 ,  0.4940451 ],
       [-0.5491317 ,  0.34313682]], dtype=float32)>

In [26]: tf.random.truncated_normal([2,2],mean=1,stddev=1)

Out[26]:
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[ 0.6495031, -0.7680944],
       [ 0.5350886,  1.9835862]], dtype=float32)>
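
Truncated normal is the usual choice for initializing weight matrices; a minimal sketch, where the layer sizes 784 and 256 are hypothetical and not from the transcript:

import tensorflow as tf

w = tf.random.truncated_normal([784, 256], mean=0.0, stddev=0.1)  # weight matrix
b = tf.zeros([256])                                               # bias vector
print(w.shape, b.shape)  # (784, 256) (256,)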

2) Uniform distribution

In [27]: tf.random.uniform([2,2],minval=0,maxval=1)

Out[27]:
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[0.47823155, 0.5105146 ],
       [0.94305336, 0.24052179]], dtype=float32)>

In [28]: tf.random.uniform([2,2],minval=1,maxval=100)

Out[28]:
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[63.2407  , 90.102486],
       [89.598114, 65.97715 ]], dtype=float32)>
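
tf.random.uniform can also draw integers when a dtype is passed (maxval is exclusive); a small sketch, my own example rather than part of the transcript:

import tensorflow as tf

r = tf.random.uniform([2, 2], minval=0, maxval=10, dtype=tf.int32)  # integers in [0, 10)
print(r.dtype)  # int32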

Application: random permutation (random shuffling). A shuffled index tensor can be used to reorder data and labels consistently (see the sketch after the transcript).

In [29]: idx=tf.range(10)

In [30]: idx=tf.random.shuffle(idx)

In [31]: idx

Out[31]:
(the ten integers 0 to 9 in a random order; shape (10,), dtype int32; the exact permutation changes on every run)
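
A typical use of the shuffled indices is to reorder a dataset and its labels with the same permutation via tf.gather. A minimal sketch (the feature tensor and labels below are hypothetical, not from the transcript):

import tensorflow as tf

features = tf.random.normal([10, 4])   # 10 hypothetical samples
labels = tf.range(10)                  # 10 hypothetical labels
idx = tf.random.shuffle(tf.range(10))  # one shared permutation
features = tf.gather(features, idx)    # rows reordered
labels = tf.gather(labels, idx)        # labels stay aligned with their rows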

How scalars show up in deep learning: the loss computed below reduces to a single scalar.

out=tf.random.uniform([4,10])

In [34]: out

Out[34]:
<tf.Tensor: shape=(4, 10), dtype=float32, numpy=
array([[0.24594736, 0.1750027 , 0.38304508, 0.20767057, 0.6574384 ,
        0.95960414, 0.9286723 , 0.21315277, 0.38152838, 0.70844436],
       [0.6412004 , 0.15667486, 0.37563896, 0.36031973, 0.391747  ,
        0.68108964, 0.19414902, 0.00660384, 0.8796239 , 0.49293447],
       [0.5348973 , 0.0479815 , 0.9055302 , 0.7179408 , 0.06859863,
        0.35083127, 0.32901812, 0.6015886 , 0.7346724 , 0.40839958],
       [0.33565128, 0.34543478, 0.49530745, 0.2669555 , 0.7481148 ,
        0.22204006, 0.94577575, 0.9286548 , 0.7074944 , 0.7671062 ]],
      dtype=float32)>

In [35]: y=tf.range(4)

In [36]: y=tf.one_hot(y,depth=10)

In [37]: y

Out[37]:
<tf.Tensor: shape=(4, 10), dtype=float32, numpy=
array([[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 1., 0., 0., 0., 0., 0., 0.]], dtype=float32)>

In [38]: loss=tf.keras.losses.mse(y,out)

In [39]: loss

Out[39]:
(a shape (4,) float32 tensor: one mean-squared error per sample; the values depend on the random out above)

In [40]: loss=tf.reduce_mean(loss)

In [41]: loss

Out[41]:
(a scalar: shape (), dtype float32, the mean of the four per-sample errors)
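
Putting the last few steps together, the shape flow is: (4, 10) predictions and one-hot labels, then (4,) per-sample errors, then a () scalar loss. A minimal recap sketch using the same calls as above (batch size 4, 10 classes):

import tensorflow as tf

out = tf.random.uniform([4, 10])          # stand-in network output
y = tf.one_hot(tf.range(4), depth=10)     # one-hot labels, shape (4, 10)
per_sample = tf.keras.losses.mse(y, out)  # shape (4,): one error per sample
loss = tf.reduce_mean(per_sample)         # shape (): a single scalar
print(per_sample.shape, loss.shape)       # (4,) ()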