Convolutional neural networks (CNNs) are central to image classification. ResNet addresses the vanishing-gradient problem that arises as network depth increases through its residual blocks. This paper compares two ResNet models, ResNet18 and ResNet50, on the CIFAR10 dataset, which contains 60,000 images evenly distributed across 10 classes. The experiment includes data preprocessing steps such as normalization and augmentation, and training is optimized with the cross-entropy loss function. Results indicate that ResNet18 outperformed ResNet50: for ResNet18, training loss decreased from 1.8464 to 0.2006 and training accuracy increased from 0.3311 to 0.9286, with a test accuracy of 0.8294; for ResNet50, training loss fell from 5.3736 to 0.4618 and training accuracy rose from 0.0989 to 0.8377, with a test accuracy of 0.7604. One possible explanation is that ResNet50, with far more parameters, is more prone to overfitting on a dataset as small as CIFAR10. Differences in hyperparameter settings and the tuning of data augmentation may also contribute.
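The role of residual blocks in preventing vanishing gradients can be illustrated with a toy one-dimensional sketch (not the paper's code): a deep chain of scalar `tanh` layers, with and without an identity skip connection. The weights, depth, and activation here are illustrative assumptions only, but they show why the "+ x" term in a residual block keeps the backpropagated gradient from shrinking multiplicatively toward zero.

```python
import math

def grad_plain(x, weights):
    # Plain chain: out = tanh(w * in) at each layer.
    # The total gradient is a product of per-layer local gradients,
    # each of magnitude < 1 here, so it shrinks exponentially with depth.
    g = 1.0
    for w in weights:
        pre = w * x
        g *= w * (1.0 - math.tanh(pre) ** 2)  # d tanh(w*x)/dx
        x = math.tanh(pre)
    return g

def grad_residual(x, weights):
    # Residual chain: out = in + tanh(w * in) at each layer.
    # The identity skip adds "+1" to every local gradient factor,
    # so the product cannot collapse toward zero.
    g = 1.0
    for w in weights:
        pre = w * x
        g *= 1.0 + w * (1.0 - math.tanh(pre) ** 2)
        x = x + math.tanh(pre)
    return g

weights = [0.5] * 30  # 30 layers with modest weights (illustrative values)
print(grad_plain(1.0, weights))     # vanishingly small
print(grad_residual(1.0, weights))  # stays well away from zero
```

Running both functions on the same 30-layer chain shows the plain gradient collapsing to a vanishingly small value while the residual gradient stays above 1, which is the mechanism the abstract credits to ResNet's residual blocks.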