Knowledge distillation

Implementation of a simple example of knowledge distillation. Knowledge distillation is a process in machine learning where a small, more efficient model is trained to mimic the behavior of a larger, more complex model. The goal is to transfer the knowledge learned by the larger model to the smaller one while reducing model size and computational requirements. This technique is often used to make models practical on resource-limited devices or to accelerate inference time in production.
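
The soft-target formulation from the referenced paper boils down to a few lines. Below is a minimal PyTorch sketch of the combined distillation loss; the temperature `T` and weighting `alpha` are illustrative defaults, not necessarily the values used in this repository.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Soft-target distillation loss: KL term on softened logits + cross-entropy.

    T and alpha are illustrative defaults, not the exact values of this repo.
    """
    # Soft targets: KL divergence between temperature-softened student and
    # teacher distributions, scaled by T^2 so gradient magnitudes stay
    # comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy with the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```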

  • Reference: paper
  • Dataset: custom dataset for a classification task
  • Teacher model: ResNet-152
  • Student model: Conv-Net with three convolution blocks and two fully-connected layers (sketched below)
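
A student of this shape could look roughly like the following sketch; the channel widths, kernel sizes, and assumed 3×64×64 input resolution are illustrative guesses rather than the exact architecture used here.

```python
import torch.nn as nn

class StudentConvNet(nn.Module):
    """Minimal sketch of a student with three conv blocks and two FC layers.

    Layer sizes and the assumed 64x64 RGB input are illustrative, not the
    exact architecture of this repo.
    """
    def __init__(self, num_classes):
        super().__init__()

        def block(c_in, c_out):
            # conv -> batch norm -> ReLU -> downsample by 2
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            )

        self.features = nn.Sequential(block(3, 32), block(32, 64), block(64, 128))
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256),  # assumes 64x64 inputs -> 8x8 feature maps
            nn.ReLU(inplace=True),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```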

Teacher network pretraining results

Student network training results

  • black line - with teacher
  • blue line - without teacher
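
The two curves differ only in the loss driving the student update. The sketch below, reusing the `distillation_loss` and `StudentConvNet` sketches above with placeholder data and hyperparameters, shows one training pass in each setting: the distilled run backpropagates the combined soft/hard loss, while the baseline uses plain cross-entropy.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet152

NUM_CLASSES = 10     # placeholder, not the real class count of the custom dataset
USE_TEACHER = True   # True -> "with teacher" curve, False -> "without teacher" curve

# Synthetic stand-in for the custom classification dataset (64x64 RGB images).
dataset = TensorDataset(torch.randn(32, 3, 64, 64),
                        torch.randint(0, NUM_CLASSES, (32,)))
train_loader = DataLoader(dataset, batch_size=8)

teacher = resnet152(num_classes=NUM_CLASSES).eval()  # pretrained weights would be loaded here
student = StudentConvNet(NUM_CLASSES)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for images, labels in train_loader:
    student_logits = student(images)
    if USE_TEACHER:
        # Distillation: soft targets from the frozen teacher + hard labels.
        with torch.no_grad():
            teacher_logits = teacher(images)
        loss = distillation_loss(student_logits, teacher_logits, labels)
    else:
        # Baseline: plain cross-entropy, no teacher involved.
        loss = F.cross_entropy(student_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```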
