PyTorch is similar to NumPy in the way it handles computations, but it has strong GPU support. Like NumPy, it has a C backend, so both are much faster than pure Python. NumPy can be GPU-accelerated (with some extra code), but it doesn't have the built-in GPU support that PyTorch or TensorFlow do. PyTorch, finally, was specifically tailored for GPU functionality in Python.
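To see that similarity concretely, here is a minimal sketch of the same computation in NumPy and PyTorch (the shapes and values are just illustrative):

```python
import numpy as np
import torch

# The same expression, written against either library
a_np = np.ones((2, 3))
b_np = a_np * 2 + 1          # NumPy

a_t = torch.ones(2, 3)
b_t = a_t * 2 + 1            # identical syntax in PyTorch

# The GPU support is one method call away, when a GPU is present
if torch.cuda.is_available():
    a_t = a_t.to("cuda")
```

The point is that most NumPy habits transfer directly; the main addition is the explicit device placement.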
TensorFlow, on the other hand, is written mainly in C++ and CUDA, and was not created specifically for Python. It provides APIs in C, C++, Java, and Go, and there is community support for Haskell and Rust. So with TF you are not restricted to Python. Even if the syntax differs a bit across languages, the concepts are the same.
Now, PyTorch offers deep neural network functionality, and that is why it is often compared with TensorFlow, sklearn, etc. Moreover, TensorFlow has a peculiar logic of its own (with concepts like placeholders, sessions, etc.), so it requires some extra effort to learn. If you already know NumPy (which is my underlying assumption about the dear reader), it is easier to switch to PyTorch than to TensorFlow, which is why PyTorch is gaining popularity so fast. Because Google has used TensorFlow for so long, it is very easy to deploy algorithms with it, so you can think of it as more product-oriented. Logically, you want to be able to deploy the algorithms you create (you can check out TensorFlow Serving for more on that). PyTorch, on the other hand, is more recent, so it does not yet have the full range of capabilities of other packages. I am sure this will change; they are on their way to catching up. TensorFlow also has the tf.contrib module: code contributed by the community that will probably be embedded into core TF. The point is that the TensorFlow community is stronger because TF is older. I don't know if PyTorch will catch up to the TF community; that depends on the users and on adoption.
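To illustrate that "peculiar logic", here is a minimal sketch of TensorFlow's placeholder/session workflow. It is written against the 1.x-style API (reached through tensorflow.compat.v1 so it still runs under TF 2); the shapes and values are purely illustrative:

```python
import numpy as np
import tensorflow.compat.v1 as tf  # the 1.x-style API

tf.disable_eager_execution()

# A placeholder is a promise to feed data later; nothing runs yet.
x = tf.placeholder(tf.float32, shape=(None, 5))
w = tf.Variable(tf.ones((5, 10)))
y = tf.matmul(x, w)  # this only builds a graph node

# Only inside a Session does the graph actually execute.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(y, feed_dict={x: np.ones((3, 5), dtype=np.float32)})
```

This build-the-graph-first, run-it-later style is the extra effort mentioned above; in PyTorch (as in NumPy) every line executes immediately.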
So, my verdict would be that TensorFlow has kind of stood the test of time (if you can use that expression for a framework that is not that old itself) and is still more widely adopted. It has more capabilities and better scalability for larger projects (projects at the scale of Google). PyTorch is surely gaining momentum, as it is easier to learn, but it doesn't have the same level of integration. It is very good for small projects that need to be done fast, but it is not yet optimal for product deployment.
import torch
from torch.autograd import Variable

batch_size = 32
input_shape = 5
output_shape = 10

X = Variable(torch.randn(batch_size, input_shape))
y = Variable(torch.randn(batch_size, output_shape), requires_grad=False)

# A single linear layer; any architecture would do here
model = torch.nn.Sequential(
    torch.nn.Linear(input_shape, output_shape),
)
loss_function = torch.nn.MSELoss()
learning_rate = 0.001

for i in range(10):
    y_pred = model(X)
    loss = loss_function(y_pred, y)
    # Zero gradients before the backward pass
    model.zero_grad()
    # Backpropagate to compute gradients
    loss.backward()
    # Update weights with plain gradient descent
    for param in model.parameters():
        param.data -= learning_rate * param.grad.data
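For reference, the manual weight update in the loop above is exactly what torch.optim.SGD does for you. A minimal sketch with the same (illustrative) shapes:

```python
import torch

model = torch.nn.Sequential(torch.nn.Linear(5, 10))
loss_function = torch.nn.MSELoss()
# The optimizer owns the update rule and the learning rate
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)

X = torch.randn(32, 5)
y = torch.randn(32, 10)

for _ in range(10):
    optimizer.zero_grad()            # replaces model.zero_grad()
    loss = loss_function(model(X), y)
    loss.backward()
    optimizer.step()                 # replaces the manual parameter loop
```

Using an optimizer object also makes it trivial to swap in Adam, RMSprop, etc. without touching the loop.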