PyTorch Study Notes - 02

Getting Started with PyTorch - 1

Posted by hmoytx on May 7, 2018

Quick Start

Tensor

Tensor is the core data structure in PyTorch; you can think of it as a high-dimensional array. It is very similar to NumPy's ndarray, but the key difference is that a Tensor can be accelerated on a GPU. Apart from that, the basic operations of the two are largely the same. Let's demonstrate the basics in IPython:

In [1]: import torch

In [2]: import torch as t

In [3]: x = t.Tensor(5,5)  # an uninitialized 5x5 matrix (values are whatever was in memory)

In [4]: x
Out[4]:
tensor([[ 1.6907e-01,  1.7743e+28,  1.1673e-32,  6.7685e+22,  2.9185e+32],
        [ 5.3723e+19,  4.4721e+21,  4.8365e+30,  3.0135e+29,  6.6135e+19],
        [ 4.3845e+31,  1.8886e+31,  7.1849e+22,  1.0086e-32,  1.7443e+28],
        [ 1.8970e+31,  1.2424e+22,  2.5072e-12,  4.5144e+27,  9.1041e-12],
        [ 6.2609e+22,  4.7428e+30,  1.5555e+28,  2.7259e+20,  8.1330e+32]])

In [5]: x = t.rand(5,5)  # uniform distribution on [0, 1); t.randn() samples from a standard normal

In [6]: x
Out[6]:
tensor([[ 0.5446,  0.5416,  0.0336,  0.6157,  0.0828],
        [ 0.9987,  0.2257,  0.6390,  0.4904,  0.9439],
        [ 0.7656,  0.6790,  0.3703,  0.4811,  0.8022],
        [ 0.0563,  0.5616,  0.7834,  0.6812,  0.5832],
        [ 0.7723,  0.1685,  0.7272,  0.0102,  0.3450]])
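As the comment notes, `t.rand` samples uniformly from [0, 1) while `t.randn` samples from a standard normal distribution. A quick sketch of the difference:

```python
import torch as t

u = t.rand(1000)    # uniform samples on [0, 1)
n = t.randn(1000)   # standard normal samples: mean ~0, std ~1

# Uniform samples always stay inside [0, 1)
assert u.min().item() >= 0.0 and u.max().item() < 1.0
# Normal samples routinely fall outside that interval,
# with mean close to 0 and standard deviation close to 1
print(n.mean().item(), n.std().item())
```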

Addition comes in several forms:

In [7]: y = t.rand(5,5)

In [8]: x+y     # add directly with the + operator
Out[8]:
tensor([[ 0.6177,  0.6424,  0.3739,  0.6916,  0.2209],
        [ 1.3712,  0.5577,  1.3626,  1.0993,  1.5942],
        [ 1.5684,  0.7533,  1.0628,  1.1381,  1.2293],
        [ 0.1066,  1.5505,  1.0467,  1.5348,  1.4244],
        [ 1.0477,  0.2311,  1.0946,  0.6907,  0.8061]])

In [9]: t.add(x,y) # returns a new tensor; you can also pass an output tensor: t.add(x, y, out=res)
Out[9]:
tensor([[ 0.6177,  0.6424,  0.3739,  0.6916,  0.2209],
        [ 1.3712,  0.5577,  1.3626,  1.0993,  1.5942],
        [ 1.5684,  0.7533,  1.0628,  1.1381,  1.2293],
        [ 0.1066,  1.5505,  1.0467,  1.5348,  1.4244],
        [ 1.0477,  0.2311,  1.0946,  0.6907,  0.8061]])
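The comment above mentions the `out=` form. A minimal sketch of pre-allocating a result tensor and writing the sum into it:

```python
import torch as t

x = t.rand(5, 5)
y = t.rand(5, 5)

res = t.empty(5, 5)   # pre-allocate the result buffer
t.add(x, y, out=res)  # write x + y into res instead of allocating a new tensor

assert t.equal(res, x + y)
```

This is mainly useful when you want to reuse the same buffer across many operations and avoid repeated allocations.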

In [10]: y.add(x)   # returns a new tensor; does not modify y itself
Out[10]:
tensor([[ 0.6177,  0.6424,  0.3739,  0.6916,  0.2209],
        [ 1.3712,  0.5577,  1.3626,  1.0993,  1.5942],
        [ 1.5684,  0.7533,  1.0628,  1.1381,  1.2293],
        [ 0.1066,  1.5505,  1.0467,  1.5348,  1.4244],
        [ 1.0477,  0.2311,  1.0946,  0.6907,  0.8061]])

In [11]: y
Out[11]:
tensor([[ 0.0731,  0.1009,  0.3402,  0.0759,  0.1381],
        [ 0.3724,  0.3321,  0.7236,  0.6089,  0.6503],
        [ 0.8028,  0.0743,  0.6925,  0.6570,  0.4271],
        [ 0.0503,  0.9889,  0.2633,  0.8536,  0.8411],
        [ 0.2754,  0.0626,  0.3674,  0.6805,  0.4611]])

In [12]: y.add_(x) # modifies y in place
Out[12]:
tensor([[ 0.6177,  0.6424,  0.3739,  0.6916,  0.2209],
        [ 1.3712,  0.5577,  1.3626,  1.0993,  1.5942],
        [ 1.5684,  0.7533,  1.0628,  1.1381,  1.2293],
        [ 0.1066,  1.5505,  1.0467,  1.5348,  1.4244],
        [ 1.0477,  0.2311,  1.0946,  0.6907,  0.8061]])

In [13]: y
Out[13]:
tensor([[ 0.6177,  0.6424,  0.3739,  0.6916,  0.2209],
        [ 1.3712,  0.5577,  1.3626,  1.0993,  1.5942],
        [ 1.5684,  0.7533,  1.0628,  1.1381,  1.2293],
        [ 0.1066,  1.5505,  1.0467,  1.5348,  1.4244],
        [ 1.0477,  0.2311,  1.0946,  0.6907,  0.8061]])
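The trailing underscore is a general PyTorch convention: methods ending in `_` modify the tensor in place, while the plain version returns a new tensor and leaves the original untouched. A small sketch:

```python
import torch as t

y = t.ones(3)
x = t.full((3,), 2.0)

out = y.add(x)                 # out-of-place: y is left unchanged
assert t.equal(y, t.ones(3))
assert t.equal(out, t.full((3,), 3.0))

y.add_(x)                      # in-place: y itself is modified
assert t.equal(y, t.full((3,), 3.0))

y.zero_()                      # other in-place ops follow the same convention
assert t.equal(y, t.zeros(3))
```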

Some other tensor-creation operations:

In [15]: a = t.ones(5) # a Tensor filled with ones

In [16]: a
Out[16]: tensor([ 1.,  1.,  1.,  1.,  1.])

In [18]: b = t.zeros(5) # a Tensor filled with zeros

In [18]: b
Out[18]: tensor([ 0.,  0.,  0.,  0.,  0.])

Tensors also support NumPy-style operations:

In [19]: x[:, 1] # select a single column
Out[19]: tensor([ 0.5416,  0.2257,  0.6790,  0.5616,  0.1685])
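A few more NumPy-style indexing operations, sketched on a small tensor:

```python
import torch as t

x = t.arange(25, dtype=t.float32).reshape(5, 5)

col = x[:, 1]        # second column
row = x[2]           # third row
block = x[1:3, 1:3]  # 2x2 sub-block

assert col.shape == (5,)
assert row.shape == (5,)
assert block.shape == (2, 2)
```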


In [20]: import numpy as np

In [21]: c = np.ones(5)

In [22]: c
Out[22]: array([ 1.,  1.,  1.,  1.,  1.])

In [23]: d = t.from_numpy(c)

In [24]: d
Out[24]: tensor([ 1.,  1.,  1.,  1.,  1.], dtype=torch.float64)
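Note that `t.from_numpy` shares memory with the source array: mutating one side is visible from the other. A quick check:

```python
import numpy as np
import torch as t

c = np.ones(5)
d = t.from_numpy(c)        # d shares c's memory buffer

c[0] = 7.0                 # mutate the NumPy array...
assert d[0].item() == 7.0  # ...and the tensor sees the change

d[1] = 3.0                 # mutation works in the other direction too
assert c[1] == 3.0
```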

My machine cannot use a GPU, so the following is only a brief introduction.

A Tensor can be moved to the GPU with the .cuda() method to speed up computation. Note that when the tensors involved are small, using the GPU is not recommended: the overhead of transferring data dominates, and the GPU's advantage only shows with large-scale data and complex computation.
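Since I can't run this on a GPU here, the following is only a sketch of the usual device-agnostic pattern: it falls back to the CPU when no GPU is available, so the same code runs either way.

```python
import torch as t

# Pick the GPU when available, otherwise fall back to the CPU
device = t.device("cuda" if t.cuda.is_available() else "cpu")

x = t.rand(5, 5, device=device)  # create the tensor directly on the device
y = t.rand(5, 5).to(device)      # or move an existing tensor over

z = x + y                        # the addition runs wherever x and y live
assert z.device.type == device.type
```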