Day 2: Setting Up a Model to Classify Images


Welcome to Day 2 of our series on integrating machine learning into your mobile app. Today, you'll learn how to set up a pre-trained model to classify images. We'll use TensorFlow.js for React Native and Core ML for native iOS development.


What You’ll Learn Today

  1. How to download a pre-trained image classification model.
  2. How to load the model with TensorFlow.js and Core ML.
  3. How to test the model by classifying sample images.

Step 1: Download a Pre-Trained Model

We'll use MobileNet, a lightweight image classification model designed for mobile devices.

  • For TensorFlow.js, there's nothing to download manually: the @tensorflow-models/mobilenet package fetches the model weights automatically the first time you load it.
  • For Core ML, download MobileNetV2.mlmodel from Apple's Core ML models page (https://developer.apple.com/machine-learning/models/).

Step 2: Using TensorFlow.js

1. Install Required Libraries

The code in this tutorial also imports @tensorflow/tfjs and react-native-image-picker, so install all four packages:

npm install @tensorflow/tfjs @tensorflow/tfjs-react-native @tensorflow-models/mobilenet react-native-image-picker

Note: @tensorflow/tfjs-react-native has a few peer dependencies (such as expo-gl and an async-storage package); if npm warns about missing peers, follow the package's README to install them.

2. Load the MobileNet Model

Create a new file ImageClassifier.js:

import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-react-native';
import * as mobilenet from '@tensorflow-models/mobilenet';
import { decodeJpeg } from '@tensorflow/tfjs-react-native';

let model;

export const loadModel = async () => {
  // Wait for the TensorFlow.js backend to finish initializing.
  await tf.ready();
  model = await mobilenet.load();
  console.log('MobileNet model loaded!');
};

export const classifyImage = async (base64Image) => {
  if (!model) {
    console.error('Model not loaded.');
    return [];
  }
  // MobileNet expects an image tensor, not raw image data, so decode the
  // base64-encoded JPEG bytes into a tensor first (assumes a JPEG image).
  const imageBytes = tf.util.encodeString(base64Image, 'base64');
  const imageTensor = decodeJpeg(imageBytes);
  const predictions = await model.classify(imageTensor);
  tf.dispose(imageTensor); // release the tensor's memory
  return predictions;
};
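
Side note: the hosted MobileNet weights are downloaded over the network at runtime. If you later want to ship a converted model inside the app bundle instead, tfjs-react-native provides bundleResourceIO. Here's a minimal sketch, assuming hypothetical model files under assets/model/ produced by the tensorflowjs converter:

import * as tf from '@tensorflow/tfjs';
import { bundleResourceIO } from '@tensorflow/tfjs-react-native';

// Hypothetical asset paths for a converted model bundled with the app.
const modelJson = require('./assets/model/model.json');
const modelWeights = require('./assets/model/group1-shard1of1.bin');

export const loadBundledModel = async () => {
  await tf.ready();
  // bundleResourceIO builds an IO handler that reads the bundled assets.
  return tf.loadGraphModel(bundleResourceIO(modelJson, modelWeights));
};

(You may need to add 'bin' to Metro's assetExts so the weight file gets bundled.)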

3. Integrate the Classifier in the App

Update App.js:

import React, { useEffect, useState } from 'react';
import { View, Text, StyleSheet, Button, Image } from 'react-native';
import { loadModel, classifyImage } from './ImageClassifier';
import { launchImageLibrary } from 'react-native-image-picker';

const App = () => {
  const [predictions, setPredictions] = useState([]);
  const [imageUri, setImageUri] = useState(null);

  useEffect(() => {
    loadModel();
  }, []);

  const pickImage = () => {
    // Request base64 data so the image bytes can be decoded into a tensor.
    launchImageLibrary({ mediaType: 'photo', includeBase64: true }, (response) => {
      if (response.assets && response.assets.length > 0) {
        const asset = response.assets[0];
        setImageUri(asset.uri);
        classify(asset.base64);
      }
    });
  };

  const classify = async (base64) => {
    const predictions = await classifyImage(base64);
    setPredictions(predictions);
  };

  return (
    <View style={styles.container}>
      <Text style={styles.title}>Image Classifier</Text>
      <Button title="Pick an Image" onPress={pickImage} />
      {imageUri && <Image source={{ uri: imageUri }} style={styles.image} />}
      {predictions.map((p, index) => (
        <Text key={index} style={styles.prediction}>
          {p.className}: {(p.probability * 100).toFixed(2)}%
        </Text>
      ))}
    </View>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
    padding: 20,
  },
  title: {
    fontSize: 24,
    fontWeight: 'bold',
    marginBottom: 20,
  },
  image: {
    width: 200,
    height: 200,
    marginVertical: 20,
  },
  prediction: {
    fontSize: 16,
    marginVertical: 5,
  },
});

export default App;

4. Run the App

npx react-native run-android
npx react-native run-ios
  • Pick an image and view the predictions below it.
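
Tip: the first prediction after launch may feel slow because the TensorFlow.js backend finishes initializing on first use. As an optional tweak (not part of the original tutorial), you could warm the model up at the end of loadModel in ImageClassifier.js:

// Optional warm-up sketch: classify a dummy tensor once so the first
// real prediction isn't slowed by backend initialization.
const warmup = tf.zeros([224, 224, 3], 'int32');
await model.classify(warmup);
tf.dispose(warmup);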

Step 3: Using Core ML

1. Add a Core ML Model

  • Drag the MobileNetV2.mlmodel file you downloaded in Step 1 into your Xcode project, and make sure it's added to your app target so Xcode generates the MobileNetV2 Swift class.

2. Load the Model in Swift

Open ViewController.swift:

import UIKit
import CoreML
import Vision

class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    // Wrap the generated MobileNetV2 class in a Vision container so it can
    // be used with VNCoreMLRequest. (The parameterless MobileNetV2() init
    // is deprecated; the configuration-based init is preferred.)
    let model = try? VNCoreMLModel(for: MobileNetV2(configuration: MLModelConfiguration()).model)

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    func classifyImage(_ image: UIImage) {
        guard let model = model else { return }

        // Build a Vision request whose completion handler receives the
        // classification results, sorted by confidence.
        let request = VNCoreMLRequest(model: model) { (request, error) in
            if let results = request.results as? [VNClassificationObservation] {
                // Log the top three predictions.
                for result in results.prefix(3) {
                    print("\(result.identifier): \(result.confidence * 100)%")
                }
            }
        }

        guard let cgImage = image.cgImage else { return }
        // For simplicity this ignores image orientation; production code
        // should also pass the CGImagePropertyOrientation.
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try? handler.perform([request])
    }

    @IBAction func pickImage(_ sender: UIButton) {
        let picker = UIImagePickerController()
        picker.delegate = self
        picker.sourceType = .photoLibrary
        present(picker, animated: true, completion: nil)
    }

    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
        if let image = info[.originalImage] as? UIImage {
            classifyImage(image)
        }
        dismiss(animated: true, completion: nil)
    }
}

3. Test the App

  • Run the app on an iOS device or simulator.
  • Pick an image from the gallery.
  • View the classification results in the Xcode console.
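
The code above logs predictions to the console. If you'd rather show them on-screen, here's a minimal sketch that replaces the print loop inside the request's completion handler, assuming a hypothetical resultsLabel UILabel outlet wired up in your storyboard:

// Hypothetical: display the top prediction in a UILabel instead of the
// console. Assumes @IBOutlet weak var resultsLabel: UILabel! is connected.
let request = VNCoreMLRequest(model: model) { [weak self] request, _ in
    guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
    // Vision may call this handler off the main thread; hop back before touching UIKit.
    DispatchQueue.main.async {
        self?.resultsLabel.text = String(format: "%@: %.1f%%", top.identifier, top.confidence * 100)
    }
}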


Summary

Today, you integrated a pre-trained image classification model into your app. Users can now pick an image and see predictions in real time.

What’s Next: Tomorrow, you’ll take it further by running a pre-trained object detection model in your app.

Stay tuned for Day 3: Running a Pre-Trained Object Detection Model.

