# ML Practicum: Image Classification

## Leveraging Pretrained Models
Training a convolutional neural network to perform image classification tasks
typically requires an extremely large amount of training data, and can be very
time-consuming, taking days or even weeks to complete. But what if you could
leverage existing image models trained on enormous datasets, such as those
available via
[TensorFlow-Slim](https://github.com/tensorflow/models/tree/master/research/slim),
and adapt them for use in your own classification tasks?
One common technique for leveraging pretrained models is *feature extraction*:
retrieving intermediate representations produced by the pretrained model, and
then feeding these representations into a new model as input. For example, if
you're training an image-classification model to distinguish different types of
vegetables, you could feed training images of carrots, celery, and so on, into a
pretrained model, and then extract the features from its final convolution
layer, which capture all the information the model has learned about the images'
higher-level attributes: color, texture, shape, etc. Then, when building your
new classification model, instead of starting with raw pixels, you can use these
extracted features as input, and add your fully connected classification layers
on top. To increase performance when using feature extraction with a pretrained
model, engineers often *fine-tune* the weight parameters applied to the
extracted features.
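As a concrete sketch of both stages, the following (hypothetical) example uses the Keras API rather than TensorFlow-Slim. The base network (MobileNetV2), input size, and five-class output are illustrative assumptions, and `weights=None` stands in for the pretrained ImageNet weights you would load in practice with `weights="imagenet"`:

```python
import tensorflow as tf

# Illustrative sketch: MobileNetV2 stands in for any pretrained image model.
# In practice, pass weights="imagenet" to load pretrained weights;
# weights=None keeps this sketch self-contained (no download).
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights=None)

# Stage 1 -- feature extraction: freeze the pretrained convolutional base
# and train only the new classification layers stacked on top of it.
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),       # pool extracted feature maps
    tf.keras.layers.Dense(5, activation="softmax"), # e.g., 5 vegetable classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Stage 2 -- fine-tuning: unfreeze the top of the base model and continue
# training with a low learning rate so the pretrained weights shift gently.
base.trainable = True
for layer in base.layers[:-20]:   # keep all but the last 20 layers frozen
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy")
```

The low learning rate in the second `compile` call matters: large updates would overwrite the pretrained features rather than adapt them.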
For a more in-depth exploration of feature extraction and fine-tuning when using
pretrained models, see the following Exercise.
**Key Terms**

- feature extraction
- fine tuning