In 2019, the battle for ML frameworks has two main contenders: PyTorch and TensorFlow. PyTorch is seeing growing adoption among researchers and students thanks to its ease of use, while in industry TensorFlow is currently still the platform of choice.
Some of the key advantages of PyTorch are:
PyTorch's popularity in research is growing. The plot below shows the monthly number of mentions of the word "PyTorch" as a percentage of all mentions among deep learning frameworks. There is a steep upward trend for PyTorch on arXiv in 2019, reaching almost 50%.
Dynamic graph generation, tight Python language integration, and a relatively simple API make PyTorch an excellent platform for research and experimentation.
PyTorch provides a very clean interface for choosing the right combination of tools to install. Below is a snapshot of the selector and the corresponding install command. Stable represents the most recently tested and supported version of PyTorch and should be suitable for most users. Preview is available if you want the latest builds, which are not fully tested and supported. You can choose between Anaconda (recommended) and pip installation packages, with support for various CUDA versions as well.
Now we will discuss the key PyTorch library modules, Tensors, Autograd, Optimizers, and Neural Networks (nn), which are essential for creating and training neural networks.
Tensors are the workhorse of PyTorch. We can think of tensors as multi-dimensional arrays, and PyTorch has an extensive library of operations on them, provided by the torch module. PyTorch tensors are very close to the very popular NumPy arrays; in fact, PyTorch features seamless interoperability with NumPy. Compared with NumPy arrays, PyTorch tensors have the added advantage that both the tensors and the related operations can run on the CPU or GPU. The second important feature PyTorch provides is that tensors can keep track of the operations performed on them, which helps to compute gradients or derivatives of an output with respect to any of its inputs.
Tensor refers to the generalization of vectors and matrices to an arbitrary number of dimensions. The dimensionality of a tensor coincides with the number of indexes used to refer to scalar values within the tensor. A tensor of order zero (0D tensor) is just a number, or a scalar. A tensor of order one (1D tensor) is an array of numbers, or a vector. Similarly, a second-order tensor (2D) is an array of vectors, or a matrix.
Now let us create a tensor in PyTorch.
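A minimal sketch of this step, assuming a 3×3 shape (nine elements):

import torch

t = torch.ones(3, 3)   # 2D tensor with nine elements, all set to 1.0
print(t)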
After importing the torch module, we call the function torch.ones, which creates a 2D tensor with nine elements, each filled with the value 1.0.
Other ways to create tensors include torch.zeros, which creates a zero-filled tensor, and torch.randn, which fills a tensor with samples drawn from a standard normal distribution.
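For instance (shapes chosen arbitrarily for illustration):

z = torch.zeros(2, 3)   # 2x3 tensor filled with zeros
r = torch.randn(2, 3)   # 2x3 tensor with samples from a standard normal distribution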
Each tensor has an associated type and size. The default tensor type when you use the torch.Tensor constructor is torch.FloatTensor. However, you can convert a tensor to a different type (float, long, double, etc.) by specifying it at initialization or later using one of the typecasting methods. There are two ways to specify the initialization type: either by directly calling the constructor of a specific tensor type, such as FloatTensor or LongTensor, or by using the special method torch.tensor() and providing the dtype.
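A short illustration of both approaches:

a = torch.FloatTensor([1, 2, 3])                  # construct a float32 tensor directly
b = torch.tensor([1, 2, 3], dtype=torch.int64)    # specify the dtype at creation
c = b.float()                                     # cast an existing tensor to float32
print(a.dtype, b.dtype, c.dtype)                  # torch.float32 torch.int64 torch.float32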
We often need to find the maximum item in a tensor as well as the index that contains the maximum value. These can be done with the max() and argmax() functions. We can also use item() to extract a standard Python value from a tensor containing a single value.
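For example:

v = torch.tensor([2.0, 7.0, 3.0])
print(v.max())            # tensor(7.) - the maximum value
print(v.argmax())         # tensor(1)  - the index of the maximum value
print(v.max().item())     # 7.0        - a plain Python float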
Most functions that operate on a tensor and return a tensor create a new tensor to store the result. If you need an in-place function, look for a function with an appended underscore (_); for example, Tensor.transpose_() will do an in-place transpose of a tensor.
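For example:

m = torch.ones(2, 3)
m.transpose_(0, 1)        # in-place transpose
print(m.shape)            # torch.Size([3, 2])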
Converting between tensors and NumPy arrays is very simple using torch.from_numpy() and Tensor.numpy().
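For example:

import numpy as np

n = np.ones(3)
t = torch.from_numpy(n)   # NumPy array -> PyTorch tensor (shares the same memory)
n2 = t.numpy()            # PyTorch tensor -> NumPy array (shares the same memory)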
Another common operation is reshaping a tensor. This is one of the most frequently used operations and is very useful too. We can do this with either view() or reshape().
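For example:

x = torch.arange(6)      # tensor([0, 1, 2, 3, 4, 5])
a = x.view(2, 3)         # reinterpret the same memory as a 2x3 tensor
b = x.reshape(3, 2)      # same data as a 3x2 tensor; may copy if necessary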
Tensor.reshape() and Tensor.view(), though, are not the same. Tensor.view() works only on contiguous tensors and will never copy memory; it will raise an error on a non-contiguous tensor. You can, however, make the tensor contiguous by calling contiguous() and then call view(). Tensor.reshape() will work on any tensor and will make a copy if needed. PyTorch supports broadcasting similar to NumPy. Broadcasting allows you to perform elementwise operations between tensors of different shapes. Refer to the PyTorch documentation on broadcasting semantics for the details.
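A small illustration of broadcasting:

m = torch.ones(2, 3)               # shape (2, 3)
v = torch.tensor([10., 20., 30.])  # shape (3,)
print(m + v)                       # v is broadcast across the rows; result has shape (2, 3)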
Three attributes which uniquely define a tensor are:
device: Where the tensor's physical memory is actually stored, e.g., on a CPU or a GPU. The torch.device contains a device type ('cpu' or 'cuda') and an optional device ordinal for the device type.
layout: How we logically interpret this physical memory. The most common layout is a strided tensor. Strides are a list of integers: the k-th stride represents the jump in memory necessary to go from one element to the next one in the k-th dimension of the tensor.
dtype: What is actually stored in each element of the tensor? This could be floats, integers, etc. PyTorch has nine different data types.
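For example, inspecting these attributes on a small CPU tensor:

t = torch.randn(2, 3)
print(t.device)    # cpu
print(t.layout)    # torch.strided
print(t.stride())  # (3, 1) - memory jumps for each dimension
print(t.dtype)     # torch.float32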
Autograd is PyTorch's automatic differentiation system. What does automatic differentiation do? Given a network, it calculates the gradients automatically. When computing the forward pass, autograd simultaneously performs the requested computations and builds up a graph representing the function that computes the gradient.
PyTorch tensors can remember where they come from, in terms of the operations and parent tensors that originated them, and they can automatically provide the chain of derivatives of such operations with respect to their inputs. This is achieved by setting the requires_grad attribute to True:
t = torch.tensor([1.0, 0.0], requires_grad=True)
After calculating the gradients, the value of the derivative is automatically populated as a grad attribute of the tensor. For any composition of functions with any number of tensors with requires_grad=True, PyTorch computes the derivatives throughout the chain of functions and accumulates their values in the grad attribute of those tensors.
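A minimal sketch of this:

t = torch.tensor([1.0, 0.0], requires_grad=True)
loss = (t ** 2).sum()   # a simple scalar function of t
loss.backward()         # compute d(loss)/dt
print(t.grad)           # tensor([2., 0.]) - accumulated in the grad attribute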
Optimizers are used to update the weights and biases, i.e. the internal parameters of a model, in order to reduce the error. Please refer to my other article for more details.
PyTorch has a torch.optim package with various optimization algorithms like SGD (Stochastic Gradient Descent), Adam, RMSprop, etc.
Let us see how we can create one of the provided optimizers, such as SGD or Adam.
import torch.optim as optim
params = torch.tensor([1.0, 0.0], requires_grad=True)
learning_rate = 1e-3

## SGD
optimizer = optim.SGD([params], lr=learning_rate)

## Adam
optimizer = optim.Adam([params], lr=learning_rate)
Without using optimizers, we would need to manually update the model parameters by something like:
with torch.no_grad():   # parameter updates should not be tracked by autograd
    for params in model.parameters():
        params -= params.grad * learning_rate
We can use the step() method from our optimizer to take an optimization step, instead of manually updating each parameter.
optimizer.step()
The value of params is updated when step() is called. The optimizer looks into params.grad and updates params by subtracting learning_rate times grad from it, exactly as in the manual update above (in the case of SGD).
The torch.optim module helps us abstract away the specific optimization scheme: we just pass a list of params. Since there are multiple optimization schemes to choose from, we only need to pick one for our problem, and the underlying PyTorch library does the rest of the magic for us.
In PyTorch the torch.nn package defines a set of modules which are similar to the layers of a neural network. A module receives input tensors and computes output tensors. The torch.nn package also defines a set of useful loss functions that are commonly used when training neural networks.
The steps for building a neural network are:
1. Define the network architecture (layers and forward pass).
2. Compute the loss and backpropagate the error gradients.
3. Update the network parameters with an optimizer.
Let us follow the above steps and create a simple neural network in PyTorch.
We call our neural network Net here.
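A minimal sketch of such a class is shown below; the size of the output layer (10 hidden units to a single output) is an assumption:

import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.hl = nn.Linear(1, 10)    # hidden layer: 1 input feature, 10 outputs
        self.ol = nn.Linear(10, 1)    # output layer (assumed: 10 hidden units to 1 output)
        self.relu = nn.ReLU()         # activation function

    def forward(self, x):
        hidden = self.hl(x)
        activation = self.relu(hidden)
        output = self.ol(activation)
        return output

net = Net()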
We're inheriting from nn.Module. Combined with super().__init__(), this creates a class that tracks the architecture and provides a lot of useful methods and attributes.
Our neural network Net has one hidden layer, self.hl, and one output layer, self.ol.
self.hl = nn.Linear(1, 10)
This line creates a module for a linear transformation with 1 input and 10 outputs. It also automatically creates the weight and bias tensors. You can access the weight and bias tensors once the network net is created, via net.hl.weight and net.hl.bias.
We have defined the activation function using self.relu = nn.ReLU().
PyTorch networks created with nn.Module must have a forward() method defined. It takes in a tensor x and passes it through the operations you defined in the __init__ method.
def forward(self, x):
    hidden = self.hl(x)
    activation = self.relu(hidden)
    output = self.ol(activation)
    return output
We can see that the input tensor goes through the hidden layer, then the activation function (ReLU), and then the output layer.
Here we have to calculate the error, or loss, and backpropagate the error gradient to update our weight parameters.
A loss function takes the (output, target) pair and computes a value that estimates how far away the output is from the target. There are several different loss functions under the torch.nn package. A simple one is nn.MSELoss, which computes the mean-squared error between the input and the target.
output = net(input)
loss_fn = nn.MSELoss()
loss = loss_fn(output, target)
A simple function call, loss.backward(), propagates the error. Don't forget to clear the existing gradients first, though; otherwise the new gradients will be accumulated into the existing ones. After calling loss.backward(), have a look at the hidden layer bias gradients before and after the backward call.
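A minimal sketch of this check, assuming the net, input, and target tensors from above:

net.zero_grad()                   # clear any existing gradients
print(net.hl.bias.grad)           # None (or zeros) before the backward call

loss = loss_fn(net(input), target)
loss.backward()
print(net.hl.bias.grad)           # now populated with the computed gradients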
So after calling backward(), we see that the gradients have been calculated for the hidden layer.
We have already seen how the optimizer helps us to update the parameters of the model.
# create your optimizer
optimizer = optim.Adam(net.parameters(), lr=1e-2)
optimizer.zero_grad()             # zero the gradient buffers
output = net(input)               # calculate output
loss = loss_fn(output, target)    # calculate loss
loss.backward()                   # calculate gradients
optimizer.step()                  # update parameters
Now with our basic steps (1, 2, 3) complete, we just need to iteratively train our neural network to find the minimum loss. So we run the training_loop for many epochs until we minimize the loss.
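The training_loop itself is a plain Python function; a minimal sketch, assuming a signature built from the arguments listed below, could look like this:

def training_loop(n_epochs, optimizer, model, loss_fn, inputs, target):
    for epoch in range(1, n_epochs + 1):
        optimizer.zero_grad()             # clear gradients from the previous iteration
        output = model(inputs)            # forward pass
        loss = loss_fn(output, target)    # compute the loss
        loss.backward()                   # backpropagate the error
        optimizer.step()                  # update the parameters
        if epoch % 300 == 0:
            print(f"Epoch {epoch}, Loss {loss.item():.4f}")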
Let us run our neural network to train on input x_t and target y_t.
We call training_loop for 1500 epochs and pass all the other arguments, like optimizer, model, loss_fn, inputs, and target. After every 300 epochs we print the loss, and we can see it decreasing after every iteration. It looks like our very basic neural network is learning.
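A sketch of the call, assuming x_t and y_t have already been created as the training input and target tensors:

training_loop(
    n_epochs=1500,
    optimizer=optimizer,
    model=net,
    loss_fn=loss_fn,
    inputs=x_t,     # training inputs (assumed shape: (N, 1))
    target=y_t,     # training targets (assumed shape: (N, 1))
)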
Plotting the model output (black crosses) against the target data (red circles), the model seems to learn quickly.
So far we have discussed the basic or essential elements of PyTorch to get you started. Creating machine-learning-based solutions for real problems involves significant effort in data preparation. PyTorch, however, provides many tools to make data loading easier and more readable, like torchvision, torchtext, and torchaudio for working with image, text, and audio data respectively.
Before I finish the article, I also want to mention a very important tool called TensorBoard. Training machine learning models is often very hard, and a tool that helps in visualizing the model and understanding the training progress is always needed when we encounter problems.
TensorBoard helps to log events from our model training, including various scalars (e.g. accuracy, loss), images, histograms, etc. Since the release of PyTorch 1.2.0, TensorBoard is a PyTorch built-in feature. Please follow the official tutorials for the installation and use of TensorBoard in PyTorch.
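A minimal sketch of logging a training loss with the built-in SummaryWriter (the log directory name is arbitrary):

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("runs/experiment_1")
for epoch in range(10):
    loss = 1.0 / (epoch + 1)                    # placeholder value for illustration
    writer.add_scalar("Loss/train", loss, epoch)
writer.close()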
Thanks for the read. See you soon with another post.