
🖼️ Training ResNet-18 for CIFAR-10 Image Classification

Built with Python, PyTorch, and Scikit-learn.


This project implements ResNet-18 from scratch in PyTorch and trains it on the CIFAR-10 dataset to achieve high accuracy in image classification.


📝 Project Description

This repository provides a complete implementation of the ResNet-18 architecture, a deep residual network renowned for its simplicity and effectiveness in image classification tasks. The model is trained on the CIFAR-10 dataset, which contains 60,000 32x32 color images in 10 classes. The focus of this project is to build and train the ResNet-18 model from the ground up, achieve high accuracy, and explore the model's learned feature space.
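
For reference, the sketch below shows what a minimal CIFAR-10-style ResNet-18 looks like in PyTorch: a 3x3 stem (no max-pool, since the inputs are only 32x32) followed by four stages of two residual blocks each. The class and method names here (`BasicBlock`, `ResNet18`, `_make_layer`) are illustrative and may not match this repository's exact implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BasicBlock(nn.Module):
    """Two 3x3 convolutions with an identity (or 1x1 projection) shortcut."""

    def __init__(self, in_planes, planes, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3,
                               stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.shortcut = nn.Sequential()
        if stride != 1 or in_planes != planes:
            # Projection shortcut when the spatial size or channel count changes.
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, planes, kernel_size=1,
                          stride=stride, bias=False),
                nn.BatchNorm2d(planes),
            )

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + self.shortcut(x)  # residual connection
        return F.relu(out)


class ResNet18(nn.Module):
    """ResNet-18 adapted for 32x32 CIFAR-10 inputs (3x3 stem, no max-pool)."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.in_planes = 64
        self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.layer1 = self._make_layer(64, stride=1)
        self.layer2 = self._make_layer(128, stride=2)
        self.layer3 = self._make_layer(256, stride=2)
        self.layer4 = self._make_layer(512, stride=2)
        self.fc = nn.Linear(512, num_classes)

    def _make_layer(self, planes, stride):
        # Each stage of ResNet-18 stacks two BasicBlocks.
        layers = [BasicBlock(self.in_planes, planes, stride),
                  BasicBlock(planes, planes, 1)]
        self.in_planes = planes
        return nn.Sequential(*layers)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.layer4(self.layer3(self.layer2(self.layer1(out))))
        out = F.adaptive_avg_pool2d(out, 1).flatten(1)  # 512-d penultimate features
        return self.fc(out)
```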

🎯 Objectives

  • Model: Implement the ResNet-18 architecture from scratch in PyTorch.
  • Task: Train the model on the CIFAR-10 dataset to classify images into one of 10 categories.
  • Feature Exploration: Visualize and analyze the feature space using t-SNE and k-Nearest Neighbors (k-NN) algorithms.

🛠️ Prerequisites

Ensure you have the following dependencies installed before running the project:

  • Python 3.6+: The programming language used for the project.
  • PyTorch: The deep learning framework employed for building and training ResNet-18.
  • Scikit-learn: For implementing the t-SNE algorithm and k-NN classifiers.

Install the required libraries via pip:

```bash
pip install torch scikit-learn
```

📊 Model Performance

The ResNet-18 model, after training on the CIFAR-10 dataset, achieves the following accuracy:

| Model     | Accuracy |
|-----------|----------|
| ResNet-18 | 90.71%   |
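
The exact training recipe lives in this repository's code; the snippet below is only a typical CIFAR-10 training setup (SGD with momentum, cosine learning-rate schedule, standard crop/flip augmentation). It uses `torchvision` for data loading, which is not listed in the prerequisites above, and the hyperparameter values are illustrative rather than the ones used to obtain the reported accuracy:

```python
import torch
import torchvision
import torchvision.transforms as T

# NOTE: illustrative hyperparameters; the repository's actual configuration may differ.
transform_train = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    T.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])
train_set = torchvision.datasets.CIFAR10("./data", train=True, download=True,
                                         transform=transform_train)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128,
                                           shuffle=True, num_workers=2)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = ResNet18(num_classes=10).to(device)  # model sketched above
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)

for epoch in range(100):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```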

🔍 Exploring Feature Space

Understanding the feature space learned by a deep network can provide insights into how the model discriminates between different classes. In this project, we explore the outputs of the penultimate layer (the layer just before the final classification layer) using the following methods:

  • t-SNE
  • k-NN

Feature Space Definition

The feature space refers to the high-dimensional space where the outputs of the neurons in the layer just before the final classification layer reside. Visualizing this space provides valuable insights into the model's internal representations.
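
One common way to collect these penultimate-layer activations in PyTorch is a forward hook on the final fully connected layer, which captures the 512-dimensional vectors fed into it. The sketch below assumes the `ResNet18` model and `device` from the earlier snippets and a hypothetical `test_loader` over CIFAR-10 test images; adapt the module name to the repository's actual model:

```python
import torch

features, targets = [], []

def save_features(module, inputs, output):
    # `inputs[0]` is the pooled feature vector fed into the final classifier.
    features.append(inputs[0].detach().cpu())

# Hook on the final fully connected layer so we capture its *input*,
# i.e. the penultimate-layer representation.
handle = model.fc.register_forward_hook(save_features)

model.eval()
with torch.no_grad():
    for images, labels in test_loader:
        model(images.to(device))
        targets.append(labels)

handle.remove()
features = torch.cat(features).numpy()  # shape: (num_samples, 512)
targets = torch.cat(targets).numpy()
```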

Feature Space Visualization

🌀 t-SNE Visualization

t-SNE (t-distributed Stochastic Neighbor Embedding) is used to reduce the dimensionality of the feature space and project it into a 2D plane for visualization. This technique helps in understanding how well the model separates different classes in the feature space.

(Figure: t-SNE projection of the penultimate-layer feature space)
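
A projection of this kind can be produced with scikit-learn's `TSNE`. The sketch below assumes the `features` and `targets` arrays extracted above and uses `matplotlib` (not listed in the prerequisites) for plotting; the perplexity and other settings are illustrative:

```python
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Project the 512-d feature vectors down to 2-D for visualization.
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(features)

plt.figure(figsize=(8, 8))
scatter = plt.scatter(embedding[:, 0], embedding[:, 1],
                      c=targets, cmap="tab10", s=4)
plt.legend(*scatter.legend_elements(), title="CIFAR-10 class")
plt.title("t-SNE of penultimate-layer features")
plt.show()
```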

🧩 k-Nearest Neighbors (k-NN)

k-NN is applied in the feature space to examine how samples from the same class cluster together, giving a measure of the compactness and separability of the feature vectors the model produces.

(Figure: k-NN visualization of the penultimate-layer feature space)
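
As a rough sketch, a k-NN classifier can be fit directly on the extracted feature vectors with scikit-learn; the train/test split and `n_neighbors` value below are illustrative and not necessarily the evaluation protocol used in this repository:

```python
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Split the extracted feature vectors and fit a k-NN classifier on them.
X_train, X_test, y_train, y_test = train_test_split(
    features, targets, test_size=0.2, random_state=0, stratify=targets)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print(f"k-NN accuracy in feature space: {knn.score(X_test, y_test):.4f}")
```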
