Approximating the sin Function with PyTorch

Contents

  • 1. Introduction
  • 2. Method One: Manual Gradients
  • 3. Method Two: Autograd
  • 4. Summary

1. Introduction

This article approximates the sin function in two ways. Both treat the task as a small machine-learning problem: using Python's torch module, we fit a third-order polynomial to sin and learn its coefficients by gradient descent.
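Concretely, both methods fit the same model. Written out, the polynomial and the sum-of-squares loss that the training loops below minimize are

$$\hat{y} = a + b x + c x^2 + d x^3, \qquad L = \sum_{i=1}^{2000} \left(\hat{y}_i - \sin x_i\right)^2,$$

with the $x_i$ sampled uniformly on $[-\pi, \pi]$.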

2. Method One: Manual Gradients

# This example uses torch to approximate the sin function.
# A third-order polynomial is fitted to sin, machine-learning style.


import torch
import math

dtype = torch.float  # the tensor data type

device = torch.device("cpu")  # the device to compute on
# device = torch.device("cuda:0")  # Uncomment this to run on GPU

# Create input and target data
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
# torch.linspace works like numpy's linspace

y = torch.sin(x)  # the target values, as a tensor

# Randomly initialize the weights from a standard Gaussian distribution;
# each starts random and is then improved by learning.
a = torch.randn((), device=device, dtype=dtype)
b = torch.randn((), device=device, dtype=dtype)
c = torch.randn((), device=device, dtype=dtype)
d = torch.randn((), device=device, dtype=dtype)


learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y (also a tensor)
    # using the third-order polynomial.
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print the loss (sum of squared errors)
    loss = (y_pred - y).pow(2).sum().item()
    if t % 100 == 99:
        print(t, loss)

    # Backprop: compute gradients of the loss with respect to a, b, c, d
    grad_y_pred = 2.0 * (y_pred - y)
    grad_a = grad_y_pred.sum()
    grad_b = (grad_y_pred * x).sum()
    grad_c = (grad_y_pred * x ** 2).sum()
    grad_d = (grad_y_pred * x ** 3).sum()

    # Update the weights by gradient descent, once per iteration
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b
    c -= learning_rate * grad_c
    d -= learning_rate * grad_d

# The final fitted polynomial
print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
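Where do the grad_a … grad_d lines come from? Since $L = \sum_i (\hat{y}_i - y_i)^2$ and $\hat{y}_i = a + b x_i + c x_i^2 + d x_i^3$, the chain rule gives

$$\frac{\partial L}{\partial a} = \sum_i 2(\hat{y}_i - y_i), \qquad \frac{\partial L}{\partial b} = \sum_i 2(\hat{y}_i - y_i)\,x_i, \qquad \frac{\partial L}{\partial c} = \sum_i 2(\hat{y}_i - y_i)\,x_i^2, \qquad \frac{\partial L}{\partial d} = \sum_i 2(\hat{y}_i - y_i)\,x_i^3,$$

which is exactly what the backprop block above computes.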

Output:

99 676.0404663085938
199 478.38140869140625
299 339.39117431640625
399 241.61537170410156
499 172.80801391601562
599 124.37007904052734
699 90.26084899902344
799 66.23435974121094
899 49.30537033081055
999 37.37403106689453
1099 28.96288299560547
1199 23.031932830810547
1299 18.848905563354492
1399 15.898048400878906
1499 13.81600570678711
1599 12.34669017791748
1699 11.309612274169922
1799 10.57749080657959
1899 10.060576438903809
1999 9.695555686950684
Result: y = -0.03098311647772789 + 0.852223813533783 x + 0.005345103796571493 x^2 + -0.09268788248300552 x^3      
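For reference, the Taylor expansion sin x ≈ x − x³/6 ≈ x − 0.1667x³ has different coefficients than the learned ones (b ≈ 0.85, d ≈ −0.093): gradient descent here minimizes the squared error over the whole interval [−π, π] rather than matching sin locally at 0. A minimal sanity-check sketch of the fit, using the coefficients printed above (rounded by hand):

import torch
import math

x = torch.linspace(-math.pi, math.pi, 2000)
# Coefficients from the run above, rounded
a, b, c, d = -0.0310, 0.8522, 0.0053, -0.0927
y_pred = a + b * x + c * x ** 2 + d * x ** 3
# Maximum absolute error over the interval; it is largest near the
# endpoints +/- pi and should be on the order of 0.2
print((y_pred - torch.sin(x)).abs().max().item())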

3. Method Two: Autograd

import torch
import math

dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0")  # Uncomment this to run on GPU

# Create Tensors to hold input and outputs.
# By default, requires_grad=False, which indicates that we do not need to
# compute gradients with respect to these Tensors during the backward pass.
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

# Create random Tensors for weights. For a third order polynomial, we need
# 4 weights: y = a + b x + c x^2 + d x^3
# Setting requires_grad=True indicates that we want to compute gradients with
# respect to these Tensors during the backward pass.
a = torch.randn((), device=device, dtype=dtype, requires_grad=True)
b = torch.randn((), device=device, dtype=dtype, requires_grad=True)
c = torch.randn((), device=device, dtype=dtype, requires_grad=True)
d = torch.randn((), device=device, dtype=dtype, requires_grad=True)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y using operations on Tensors.
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss using operations on Tensors.
    # Now loss is a scalar (0-dimensional) Tensor.
    # loss.item() gets the scalar value held in the loss.
    loss = (y_pred - y).pow(2).sum()
    if t % 100 == 99:
        print(t, loss.item())

    # Use autograd to compute the backward pass. This call will compute the
    # gradient of loss with respect to all Tensors with requires_grad=True.
    # After this call a.grad, b.grad, c.grad and d.grad will be Tensors holding
    # the gradient of the loss with respect to a, b, c, d respectively.
    loss.backward()

    # Manually update weights using gradient descent. Wrap in torch.no_grad()
    # because weights have requires_grad=True, but we don't need to track this
    # in autograd.
    with torch.no_grad():
        a -= learning_rate * a.grad
        b -= learning_rate * b.grad
        c -= learning_rate * c.grad
        d -= learning_rate * d.grad

        # Manually zero the gradients after updating weights
        a.grad = None
        b.grad = None
        c.grad = None
        d.grad = None

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')

Output:

99 1702.320556640625
199 1140.3609619140625
299 765.3402709960938
399 514.934326171875
499 347.6383972167969
599 235.80038452148438
699 160.98876953125
799 110.91152954101562
899 77.36819458007812
999 54.883243560791016
1099 39.79965591430664
1199 29.673206329345703
1299 22.869291305541992
1399 18.293842315673828
1499 15.214327812194824
1599 13.1397705078125
1699 11.740955352783203
1799 10.796865463256836
1899 10.159022331237793
1999 9.727652549743652
Result: y = 0.019909318536520004 + 0.8338049650192261 x + -0.0034346890170127153 x^2 + -0.09006795287132263 x^3      
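As a variant, the manual update loop in the second method can be delegated to an optimizer from torch.optim. A minimal sketch using torch.optim.SGD with the same model, data, and learning rate as above:

import torch
import math

dtype = torch.float
device = torch.device("cpu")

x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

a = torch.randn((), device=device, dtype=dtype, requires_grad=True)
b = torch.randn((), device=device, dtype=dtype, requires_grad=True)
c = torch.randn((), device=device, dtype=dtype, requires_grad=True)
d = torch.randn((), device=device, dtype=dtype, requires_grad=True)

optimizer = torch.optim.SGD([a, b, c, d], lr=1e-6)
for t in range(2000):
    y_pred = a + b * x + c * x ** 2 + d * x ** 3
    loss = (y_pred - y).pow(2).sum()
    optimizer.zero_grad()   # clear the gradients from the previous step
    loss.backward()         # compute new gradients via autograd
    optimizer.step()        # apply the SGD update to a, b, c, d

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')

This removes the torch.no_grad() block and the manual zeroing of the gradients, and switching to a different update rule (e.g. torch.optim.Adam) becomes a one-line change.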

4. Summary

Both methods fit the same third-order polynomial and converge to a similar loss (about 9.7) and similar coefficients. The difference lies in how the gradients are obtained: the first method derives and codes them by hand, while the second lets autograd compute them with a single loss.backward() call, an approach that scales to models where deriving gradients manually would be impractical.