Welcome to Day 3 of integrating machine learning into your mobile app! Today, you’ll implement object detection using pre-trained models. Object detection is a key feature for apps that require real-time analysis of images to identify and locate objects.
What You’ll Learn Today
- Download and set up an object detection model.
- Implement object detection using TensorFlow.js in React Native.
- Use Core ML for object detection on iOS.
- Test the app and visualize detection results.
Step 1: Download a Pre-Trained Object Detection Model
- TensorFlow.js: Use the Coco SSD model. There is nothing to download manually; cocoSsd.load() fetches the weights the first time it runs (see the snippet after this list).
- Core ML: Use a compatible object detection model like YOLO or MobileNetSSD from Apple's Core ML model gallery (https://developer.apple.com/machine-learning/models/).
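cocoSsd.load() also accepts an optional base option, documented in the coco-ssd package, that lets you trade accuracy for speed on low-end devices. A short sketch:

import * as cocoSsd from '@tensorflow-models/coco-ssd';

const loadAccurateModel = async () => {
  // 'lite_mobilenet_v2' (the default) is the smallest and fastest base;
  // 'mobilenet_v2' is heavier but more accurate.
  return cocoSsd.load({ base: 'mobilenet_v2' });
};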
Step 2: Using TensorFlow.js for Object Detection
1. Install Required Libraries
npm install @tensorflow/tfjs @tensorflow/tfjs-react-native @tensorflow-models/coco-ssd react-native-image-picker
Note: @tensorflow/tfjs is a required peer of @tensorflow/tfjs-react-native, and react-native-image-picker is used in the app code below. @tensorflow/tfjs-react-native has additional peer dependencies (for example @react-native-async-storage/async-storage and react-native-fs); check its README for the full platform setup.
2. Set Up the Object Detection Model
Create a new file ObjectDetector.js:
import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-react-native';
import * as cocoSsd from '@tensorflow-models/coco-ssd';
import { decodeJpeg } from '@tensorflow/tfjs-react-native';

let model;

// Initialize the TensorFlow.js runtime and load the Coco SSD model once at startup.
export const loadModel = async () => {
  await tf.ready();
  model = await cocoSsd.load();
  console.log('Coco SSD model loaded!');
};

// Takes a base64-encoded JPEG string and returns the model's predictions.
export const detectObjects = async (base64Image) => {
  if (!model) {
    console.error('Model not loaded.');
    return [];
  }
  // decodeJpeg expects raw JPEG bytes, so convert the base64 string first.
  const imageBytes = new Uint8Array(tf.util.encodeString(base64Image, 'base64').buffer);
  const imageTensor = decodeJpeg(imageBytes);
  const predictions = await model.detect(imageTensor);
  imageTensor.dispose(); // Free the memory held by the decoded image tensor.
  return predictions;
};
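For reference, model.detect resolves to an array of plain objects; the shape below matches the coco-ssd documentation (the values here are made up):

// Illustrative output of model.detect — not real results.
const examplePredictions = [
  { bbox: [28, 40, 120, 260], class: 'person', score: 0.89 }, // bbox = [x, y, width, height] in pixels
  { bbox: [210, 90, 75, 60], class: 'dog', score: 0.72 },
];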
3. Integrate Object Detection in the App
Update App.js:
import React, { useEffect, useState } from 'react';
import { View, Text, StyleSheet, Button, Image } from 'react-native';
import { loadModel, detectObjects } from './ObjectDetector';
import * as ImagePicker from 'react-native-image-picker';

const App = () => {
  const [imageUri, setImageUri] = useState(null);
  const [detections, setDetections] = useState([]);

  // Load the model once when the app starts.
  useEffect(() => {
    loadModel();
  }, []);

  const pickImage = () => {
    // includeBase64 is required so the raw image data can be handed to the model.
    ImagePicker.launchImageLibrary({ mediaType: 'photo', includeBase64: true }, (response) => {
      if (response.assets && response.assets.length > 0) {
        const { uri, base64 } = response.assets[0];
        setImageUri(uri);
        runDetection(base64);
      }
    });
  };

  const runDetection = async (imageData) => {
    const results = await detectObjects(imageData);
    setDetections(results);
  };

  return (
    <View style={styles.container}>
      <Text style={styles.title}>Object Detection</Text>
      <Button title="Pick an Image" onPress={pickImage} />
      {imageUri && <Image source={{ uri: imageUri }} style={styles.image} />}
      {detections.map((detection, index) => (
        <Text key={index} style={styles.detection}>
          {detection.class}: {Math.round(detection.score * 100)}% (X: {Math.round(detection.bbox[0])}, Y: {Math.round(detection.bbox[1])})
        </Text>
      ))}
    </View>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
    padding: 20,
  },
  title: {
    fontSize: 24,
    fontWeight: 'bold',
    marginBottom: 20,
  },
  image: {
    width: 200,
    height: 200,
    marginVertical: 20,
  },
  detection: {
    fontSize: 16,
    marginVertical: 5,
  },
});

export default App;
4. Run the App
npx react-native run-android
npx react-native run-ios
- Pick an image and view detected objects with their confidence scores.
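Printing coordinates is enough to verify the pipeline, but today's goals also include visualizing results. Below is a minimal sketch of a bounding-box overlay. It assumes the photo is displayed at exactly 200x200 (for example with resizeMode 'stretch' on the Image), and the imageWidth/imageHeight props are placeholders for the picked photo's real pixel size, which react-native-image-picker returns as width and height on the asset. The component name DetectionOverlay is just for illustration:

import React from 'react';
import { View } from 'react-native';

// Must match styles.image above.
const DISPLAY_SIZE = 200;

// Scales each bbox from decoded-image pixels to the displayed size
// and renders it as an absolutely positioned red border.
const DetectionOverlay = ({ detections, imageWidth, imageHeight }) => (
  <View style={{ position: 'absolute', width: DISPLAY_SIZE, height: DISPLAY_SIZE }}>
    {detections.map((d, i) => {
      const [x, y, w, h] = d.bbox;
      return (
        <View
          key={i}
          style={{
            position: 'absolute',
            left: (x / imageWidth) * DISPLAY_SIZE,
            top: (y / imageHeight) * DISPLAY_SIZE,
            width: (w / imageWidth) * DISPLAY_SIZE,
            height: (h / imageHeight) * DISPLAY_SIZE,
            borderWidth: 2,
            borderColor: 'red',
          }}
        />
      );
    })}
  </View>
);

Wrap the Image and this overlay in a shared View so the absolutely positioned boxes sit on top of the photo.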
Step 3: Using Core ML for Object Detection
1. Add a Core ML Object Detection Model
- Drag and drop the .mlmodel file into your Xcode project. Xcode generates a Swift class for it automatically (YOLOv3Tiny in the code below), so the class name in code must match the model you added.
2. Integrate the Model
Open ViewController.swift:
import UIKit
import CoreML
import Vision
class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    // YOLOv3Tiny is the class Xcode generated from the .mlmodel file.
    let model = try? VNCoreMLModel(for: YOLOv3Tiny(configuration: MLModelConfiguration()).model)

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    func detectObjects(_ image: UIImage) {
        guard let model = model else { return }
        let request = VNCoreMLRequest(model: model) { request, error in
            // Object detection models vended through Vision return
            // VNRecognizedObjectObservation, which carries labels and a bounding box.
            if let results = request.results as? [VNRecognizedObjectObservation] {
                for result in results {
                    let label = result.labels.first?.identifier ?? "Unknown"
                    let confidence = result.confidence * 100
                    print("\(label): \(String(format: "%.1f", confidence))%")
                }
            }
        }
        guard let cgImage = image.cgImage else { return }
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        // perform(_:) is synchronous; in production, run it off the main thread.
        try? handler.perform([request])
    }

    @IBAction func pickImage(_ sender: UIButton) {
        let picker = UIImagePickerController()
        picker.delegate = self
        picker.sourceType = .photoLibrary
        present(picker, animated: true, completion: nil)
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        if let image = info[.originalImage] as? UIImage {
            detectObjects(image)
        }
        dismiss(animated: true, completion: nil)
    }
}
3. Test the App
- Run the app on an iOS device or simulator.
- Select an image and view detected objects in the Xcode console.
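One note for when you move beyond console output: Vision reports each observation's boundingBox in normalized coordinates (0 to 1) with the origin at the bottom-left, while UIKit's origin is top-left. A minimal sketch of the conversion, using Vision's VNImageRectForNormalizedRect helper (the function name pixelRect is just for illustration):

import UIKit
import Vision

// Convert a normalized Vision bounding box to UIKit pixel coordinates.
// VNImageRectForNormalizedRect scales to pixels but keeps Vision's
// bottom-left origin, so the y-axis still has to be flipped for UIKit.
func pixelRect(for observation: VNRecognizedObjectObservation,
               imageWidth: Int, imageHeight: Int) -> CGRect {
    let rect = VNImageRectForNormalizedRect(observation.boundingBox, imageWidth, imageHeight)
    return CGRect(x: rect.origin.x,
                  y: CGFloat(imageHeight) - rect.origin.y - rect.height,
                  width: rect.width,
                  height: rect.height)
}

You can then draw these rects in an overlay view or CALayer on top of the picked image.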
SEO Optimization for This Tutorial
Keywords: TensorFlow.js object detection, Core ML object detection, mobile object detection, Coco SSD TensorFlow.js, Core ML YOLO integration.
Meta Description: Learn how to integrate object detection into your mobile app using TensorFlow.js and Core ML. Step-by-step guide with sample code for React Native and iOS.
Summary
Today, you added object detection to your app using pre-trained models. Your app can now identify and locate objects in images, reporting a class label, bounding box, and confidence score for each detection.
What’s Next: Tomorrow, you’ll implement advanced use cases like face recognition or emotion detection.
Stay tuned for Day 4: Face Recognition or Emotion Detection.