Hi guys,
First of all, I'd like to say that notebook 3 is cool, but there seems to be an issue in a code cell in the "Evaluation model" section. The first cell in this section:
test_loss, test_acc = 0, 0
model_lenet5_v1_mnist_loaded.to(device)
model_lenet5_v1_mnist_loaded.eval()
with torch.inference_mode():
    for X, y in test_dataloader:
        X, y = X.to(device), y.to(device)
        y_pred = model_lenet5_v1_mnist_loaded(X)
        test_loss += loss_fn(y_pred, y)
        test_acc += accuracy(y_pred, y)
    test_loss /= len(test_dataloader)
    test_acc /= len(test_dataloader)
print(f"Test loss: {test_loss: .5f} | Test acc: {test_acc: .5f}")
moves X and y to the device on every iteration, but in the next cell:
import pandas as pd
import seaborn as sn
from sklearn.metrics import accuracy_score, confusion_matrix

y_pred = []
y_test = []
for x, y in test_dataloader:
    y_pred.append(model_lenet5_v1_mnist_loaded(x).detach().numpy())
    y_test.append(y.detach().numpy())
y_pred = np.concatenate(y_pred)
y_test = np.concatenate(y_test)
x stays on the CPU while the model is on the GPU, so if you run this example on a GPU the cell fails with a device-mismatch error. The line with the forward pass should move x to the device and then move the result back to the CPU before converting it to NumPy.
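A corrected version of that cell might look like the sketch below. To keep it self-contained, `model` and `test_dataloader` here are hypothetical stand-ins for the notebook's `model_lenet5_v1_mnist_loaded` and its test loader; only the `x.to(device)` / `.cpu()` pattern is the actual suggested fix.

```python
import numpy as np
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-ins so the sketch runs on its own:
# `model` plays the role of model_lenet5_v1_mnist_loaded,
# `test_dataloader` the role of the notebook's test set loader.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(4, 3).to(device)
test_dataloader = DataLoader(
    TensorDataset(torch.randn(32, 4), torch.randint(0, 3, (32,))),
    batch_size=8,
)

y_pred = []
y_test = []
model.eval()
with torch.no_grad():
    for x, y in test_dataloader:
        x = x.to(device)                            # move the batch to the model's device
        logits = model(x)
        y_pred.append(logits.detach().cpu().numpy())  # bring results back to the CPU before NumPy
        y_test.append(y.numpy())                      # labels never left the CPU

y_pred = np.concatenate(y_pred)
y_test = np.concatenate(y_test)
print(y_pred.shape, y_test.shape)
```

With this change the cell works the same way on CPU and GPU, since every tensor handed to NumPy is explicitly on the CPU.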
Could you please check it and fix it in the notebook?