Torch - Basic and NN Basic

Torch - Basic

1. Really Basic

import torch

x = torch.rand(3, 2)
x

 0.6748  0.2027
 0.5566  0.6601
 0.8654  0.0982
[torch.FloatTensor of size 3x2]
y = torch.ones(x.size())
y

 1  1
 1  1
 1  1
[torch.FloatTensor of size 3x2]
z = x + y
z

 1.6748  1.2027
 1.5566  1.6601
 1.8654  1.0982
[torch.FloatTensor of size 3x2]
z[0]

 1.6748
 1.2027
[torch.FloatTensor of size 2]
z[:, 1:]

 1.2027
 1.6601
 1.0982
[torch.FloatTensor of size 3x1]
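
The operator form and the functional form of the same operation are interchangeable; a minimal sketch comparing them (z1 and z2 are new names used only for this check):

z1 = x + y
z2 = torch.add(x, y)
torch.equal(z1, z2)

True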

2. In-place operations (trailing _)

# Return a new tensor z + 1
z.add(1)

 2.6748  2.2027
 2.5566  2.6601
 2.8654  2.0982
[torch.FloatTensor of size 3x2]

# z tensor is unchanged
z
 1.6748  1.2027
 1.5566  1.6601
 1.8654  1.0982
[torch.FloatTensor of size 3x2]

# Add 1 and update z tensor in-place
z.add_(1)

 2.6748  2.2027
 2.5566  2.6601
 2.8654  2.0982
[torch.FloatTensor of size 3x2]

# z has been updated
z

 2.6748  2.2027
 2.5566  2.6601
 2.8654  2.0982
[torch.FloatTensor of size 3x2]
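
The trailing-underscore convention applies to most element-wise operations, not just add; a minimal sketch on a throwaway tensor w, so that z keeps the values used in the next section:

w = torch.ones(2, 2)
w.mul_(3)    # multiply in place: every element becomes 3
w.fill_(0)   # overwrite in place: every element becomes 0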

3. Reshape

z.size()

torch.Size([3, 2])
z.resize_(2, 3)

 2.6748  2.2027  2.5566
 2.6601  2.8654  2.0982
[torch.FloatTensor of size 2x3]
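
resize_ modifies z in place; view returns a reshaped tensor that shares the same data and leaves z untouched, which is usually what you want. A minimal sketch using the z from above:

z.view(3, 2)   # back to 3 rows, 2 columns
z.view(6)      # flatten into a 1-D tensor of 6 elements
z.view(-1, 3)  # -1 lets Torch infer that dimension (here 2)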

4. NumPy to Torch and back

import numpy as np

# define a NumPy array
a = np.random.rand(4, 3)
a

array([[ 0.07329374,  0.70278791,  0.05379227],
       [ 0.84907361,  0.06458597,  0.08650546],
       [ 0.17677941,  0.40166556,  0.67643291],
       [ 0.83582946,  0.63455498,  0.05706477]])

# Torch tensor from the NumPy array
b = torch.from_numpy(a)
b

 0.0733  0.7028  0.0538
 0.8491  0.0646  0.0865
 0.1768  0.4017  0.6764
 0.8358  0.6346  0.0571
[torch.DoubleTensor of size 4x3]
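
from_numpy keeps the dtype, so the float64 array becomes a DoubleTensor. Casting it, for example with .float(), produces a FloatTensor copy that no longer shares memory with a; a minimal sketch (b32 is a new name):

b32 = b.float()   # FloatTensor copy; changing b32 does not affect a or b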

# NumPy array from the Torch tensor
b.numpy()

array([[ 0.07329374,  0.70278791,  0.05379227],
       [ 0.84907361,  0.06458597,  0.08650546],
       [ 0.17677941,  0.40166556,  0.67643291],
       [ 0.83582946,  0.63455498,  0.05706477]])

The underlying memory is shared between the NumPy array and the Torch tensor, so an in-place change to one is visible in the other.

# Multiply PyTorch Tensor by 2, in place
b.mul_(2)

 0.1466  1.4056  0.1076
 1.6981  0.1292  0.1730
 0.3536  0.8033  1.3529
 1.6717  1.2691  0.1141
[torch.DoubleTensor of size 4x3]

# The NumPy array reflects the tensor's new values
a

array([[ 0.14658749,  1.40557582,  0.10758454],
       [ 1.69814723,  0.12917195,  0.17301091],
       [ 0.35355882,  0.80333112,  1.35286582],
       [ 1.67165893,  1.26910996,  0.11412954]])
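
The sharing works in the other direction as well: an in-place NumPy update to a is visible through b. A minimal sketch (the printed values are simply the ones above plus 1):

# In-place NumPy update; no new buffer is allocated
a += 1
b

 1.1466  2.4056  1.1076
 2.6981  1.1292  1.1730
 1.3536  1.8033  2.3529
 2.6717  2.2691  1.1141
[torch.DoubleTensor of size 4x3]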

Torch - NN Basic

import torch.nn as nn

net = nn.Module()                       # bare container module to hold the layer
net.fc1 = nn.Linear(784, 128)
net.fc1.weight.data.normal_(std=0.01)   # initialize weights from N(0, 0.01), in place
net.fc1.bias.data.fill_(0)              # zero the biases, in place
print(net.fc1.weight)
print(net.fc1.bias)
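
To sanity-check the layer, a random batch can be pushed through it; a minimal sketch (the batch size 64 is arbitrary, and the input is wrapped in a Variable to match the pre-0.4 API used on this page):

from torch.autograd import Variable

inp = Variable(torch.randn(64, 784))   # 64 flattened 28x28 inputs
out = net.fc1(inp)
out.size()

torch.Size([64, 128])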

Torch - Autograd Basic

from torch.autograd import Variable

x = torch.ones(2, 2)
x = Variable(x, requires_grad=True)
x
Variable containing:
 1  1
 1  1
[torch.FloatTensor of size 2x2]


y = 3*(x+1)**2
y
Variable containing:
 12  12
 12  12
[torch.FloatTensor of size 2x2]

z = y.mean()
z
Variable containing:
 12
[torch.FloatTensor of size 1]
# grad_fn shows the function that generated this Variable
y.grad_fn
# <MulBackward0 at 0x7f3762fdc080>
z.grad_fn
# <MeanBackward1 at 0x7f3762fdc208>
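# x itself is a leaf Variable created by the user, so it has no grad_fn
x.grad_fn
# None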
x.grad        # None - no gradients have been computed yet
z.backward()  # backpropagate dz/dx and accumulate it into x.grad
x.grad
Variable containing:
 3  3
 3  3
[torch.FloatTensor of size 2x2]
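
Gradients accumulate across calls to backward(), so they are normally reset between training iterations; a minimal sketch (z2 is a new name for a second forward pass over the same x):

z2 = (3 * (x + 1) ** 2).mean()
z2.backward()
x.grad                # every entry is now 6: the previous 3 plus the new 3
x.grad.data.zero_()   # reset the accumulated gradients in place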
