Now that we know WTF a tensor is, and have seen how Numpy's ndarray can be used to represent tensors, let's switch gears and see how they are represented in PyTorch.
PyTorch has made an impressive dent in the machine learning scene since Facebook open-sourced it in early 2017. It may not have the widespread adoption that TensorFlow has -- which was initially released well over a year prior, enjoys the backing of Google, and had the luxury of establishing itself as the gold standard as a new wave of neural network tools was being ushered in -- but the attention that PyTorch receives, especially in the research community, is quite real. Much of this attention comes from both its relationship to the original Torch library and its dynamic computation graph.
As excited as I have recently been by turning my own attention to PyTorch, this is not really a PyTorch tutorial; it's more of an introduction to PyTorch's Tensor class, which is reasonably analogous to Numpy's ndarray.
Tensor (Very) Basics
So let's take a look at some of PyTorch's tensor basics, starting with creating a tensor (using the Tensor class):
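Something like the following creates a 2x3 tensor from a nested list (the values here are arbitrary):

```python
import torch

# Create a 2x3 tensor from a nested list of (arbitrary) values
t = torch.Tensor([[1, 2, 3],
                  [4, 5, 6]])
print(t)
# tensor([[1., 2., 3.],
#         [4., 5., 6.]])
```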
You can transpose a tensor in one of two ways:
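A minimal sketch, reusing the 2x3 tensor from above:

```python
import torch

t = torch.Tensor([[1, 2, 3],
                  [4, 5, 6]])

# Two equivalent ways to transpose a 2-D tensor
print(t.t())
print(t.transpose(0, 1))
# Both print:
# tensor([[1., 4.],
#         [2., 5.],
#         [3., 6.]])
```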
Both produce the same output, shown in the comments above. Note that neither call changes the original tensor; each returns a transposed view of the same data.
Reshape a tensor with view:
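For example, reshaping the 2x3 tensor into a 3x2 one:

```python
import torch

t = torch.Tensor([[1, 2, 3],
                  [4, 5, 6]])

# view returns a new tensor with the same data but a different shape
print(t.view(3, 2))
# tensor([[1., 2.],
#         [3., 4.],
#         [5., 6.]])
```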
And another example:
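Here, passing -1 lets view infer the remaining dimension from the tensor's total number of elements:

```python
import torch

t = torch.Tensor([[1, 2, 3],
                  [4, 5, 6]])

# -1 tells view to infer that dimension (6 elements / 1 row = 6 columns)
print(t.view(1, -1))
# tensor([[1., 2., 3., 4., 5., 6.]])
```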
It should be obvious that the mathematical conventions followed by Numpy carry over to PyTorch tensors (specifically, I'm referring to row and column notation).
Create a tensor and fill it with zeros (you can accomplish something similar with torch.ones):
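A sketch using torch.zeros (the 2x4 shape is arbitrary):

```python
import torch

# A 2x4 tensor of zeros; torch.ones(2, 4) works the same way
z = torch.zeros(2, 4)
print(z)
# tensor([[0., 0., 0., 0.],
#         [0., 0., 0., 0.]])
```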
Create a tensor with random values drawn from the standard normal distribution:
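For example, using torch.randn:

```python
import torch

# 2x4 tensor of values drawn from the standard normal distribution
r = torch.randn(2, 4)
print(r)
# values differ on every run
```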
Shape, dimensions, and datatype of a tensor object:
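A minimal sketch of the relevant attributes and methods (the dtype attribute assumes a reasonably recent PyTorch):

```python
import torch

t = torch.randn(2, 4)

print(t.shape)   # torch.Size([2, 4])
print(t.size())  # torch.Size([2, 4]) -- equivalent to .shape
print(t.dim())   # 2 (the number of dimensions)
print(t.dtype)   # torch.float32
```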
It should also be obvious that, beyond mathematical concepts, a number of programmatic and instantiation similarities exist between the ndarray and Tensor implementations.
You can slice PyTorch tensors the same way you slice ndarrays, which should be familiar to anyone who has sliced other Python structures:
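For instance, with a 3x3 tensor of arbitrary values:

```python
import torch

t = torch.Tensor([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])

print(t[1])         # second row:   tensor([4., 5., 6.])
print(t[:, 2])      # third column: tensor([3., 6., 9.])
print(t[0:2, 0:2])  # top-left 2x2 block
```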
Tensor To and From Numpy
You can easily create a tensor from an ndarray, and vice versa. These operations are fast, since the data of both structures will share the same memory space, so no copying is involved -- an obviously efficient approach.
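A sketch of the round trip, using torch.from_numpy and the numpy() method (note that this memory sharing applies to CPU tensors):

```python
import numpy as np
import torch

arr = np.ones((2, 3))

# ndarray -> tensor: shares memory with arr, no copy
t = torch.from_numpy(arr)

# tensor -> ndarray: also shares memory, no copy
back = t.numpy()

# Because no copy was made, mutating one is visible in the other
arr[0, 0] = 99
print(t[0, 0])  # tensor(99., dtype=torch.float64)
```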
Basic Tensor Operations
Here are a few tensor operations, which you can compare with Numpy implementations for fun. First up is the cross product:
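A sketch with two arbitrary 3-vectors (compare with np.cross):

```python
import torch

a = torch.Tensor([1, 0, 0])
b = torch.Tensor([0, 1, 0])

# Cross product of two 3-vectors
print(torch.cross(a, b, dim=0))
# tensor([0., 0., 1.])
```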
Next is the matrix product:
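For example, with torch.mm (the @ operator is equivalent for 2-D tensors):

```python
import torch

a = torch.Tensor([[1, 2],
                  [3, 4]])
b = torch.Tensor([[5, 6],
                  [7, 8]])

# Matrix multiplication of two 2x2 tensors
print(torch.mm(a, b))
# tensor([[19., 22.],
#         [43., 50.]])
```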
And finally, elementwise multiplication:
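Using the same two tensors, the * operator (or torch.mul) multiplies element by element:

```python
import torch

a = torch.Tensor([[1, 2],
                  [3, 4]])
b = torch.Tensor([[5, 6],
                  [7, 8]])

# Elementwise (Hadamard) product
print(a * b)
# tensor([[ 5., 12.],
#         [21., 32.]])
```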
A Word About GPUs
PyTorch tensors have inherent GPU support. Specifying that a tensor should be stored in GPU memory, with calculations performed on the GPU's CUDA cores, is easy: the torch.cuda package can help determine whether a GPU is available (via torch.cuda.is_available()), and a tensor's cuda() method copies that tensor to the GPU.
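A minimal sketch that falls back to the CPU when no GPU is present:

```python
import torch

if torch.cuda.is_available():
    t = torch.Tensor([1, 2, 3])

    # Copy the tensor to GPU memory; subsequent ops on it run on the GPU
    t_gpu = t.cuda()
    print(t_gpu.device)  # e.g. cuda:0
else:
    print("No CUDA-capable GPU available; tensors stay on the CPU")
```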
Related:
- WTF is a Tensor?!?
- Getting Started with PyTorch Part 1: Understanding How Automatic Differentiation Works
- A Simple Starter Guide to Build a Neural Network