Today, we’ll implement basic voice recognition with the React Native Voice library and text-to-speech output with the Expo Speech API. By the end of this session, you’ll have a functional setup for capturing and processing voice input.
1. Choosing the Right Library
Expo Speech API
- Ideal for quick setups in Expo-based projects.
- Simplified integration for text-to-speech output; it does not transcribe speech to text.
- Documentation: https://docs.expo.dev/versions/latest/sdk/speech/
React Native Voice
- More versatile, wrapping the platforms' native speech-to-text (voice recognition) APIs.
- Works in bare React Native projects and in Expo projects with a development build (EAS); it does not run inside Expo Go.
- Documentation: https://github.com/react-native-voice/voice
2. Setting Up the Environment
Step 1: Create a New React Native Project
If you don’t have a project set up, initialize one with Expo CLI (recent versions of Expo CLI replace expo init with create-expo-app):

npx create-expo-app voice-recognition-app
cd voice-recognition-app
Step 2: Install Required Dependencies
- For Expo Speech API: No additional installation is needed.
- For React Native Voice (published on npm as @react-native-voice/voice; an Expo config example follows below):

npm install @react-native-voice/voice
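In an Expo project, the library also provides an Expo config plugin that injects the native permission strings at build time. Below is a minimal sketch of an app.config.js entry; the plugin option names (microphonePermission, speechRecognitionPermission) are taken from the library’s README, so verify them against the version you install:

// app.config.js – registers the React Native Voice config plugin (sketch, not a full config)
export default {
  name: 'voice-recognition-app',
  slug: 'voice-recognition-app',
  plugins: [
    [
      '@react-native-voice/voice',
      {
        // Permission strings shown to the user; adjust the wording for your app.
        microphonePermission: 'Allow the app to access the microphone',
        speechRecognitionPermission: 'Allow the app to securely recognize user speech',
      },
    ],
  ],
};

Because the module contains native code, you’ll also need a development build (for example via EAS) rather than Expo Go.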
Step 3: Set Up Permissions
Voice recognition requires microphone access:
- Android: Add the following to AndroidManifest.xml:

<uses-permission android:name="android.permission.RECORD_AUDIO" />

- iOS: Add this to Info.plist (speech recognition on iOS also needs a usage description for the speech recognizer):

<key>NSMicrophoneUsageDescription</key>
<string>This app requires microphone access for voice commands.</string>
<key>NSSpeechRecognitionUsageDescription</key>
<string>This app uses speech recognition to process voice commands.</string>

If you use the Expo config plugin shown above, these entries are generated for you at build time.
3. Implementing Voice Recognition
Option 1: Using Expo Speech API
The Expo Speech API provides text-to-speech only: it reads text aloud but does not transcribe speech. Use it for spoken responses alongside a recognition library such as React Native Voice.
Example:
import React from 'react';
import { Button, View } from 'react-native';
import * as Speech from 'expo-speech';

export default function App() {
  // Read a greeting aloud using the device's text-to-speech engine.
  const speak = () => {
    Speech.speak("Hello, how can I help you?");
  };

  return (
    <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}>
      <Button title="Speak" onPress={speak} />
    </View>
  );
}
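Speech.speak also accepts an options object for language, pitch, rate, and completion callbacks, and Speech.stop() cancels playback. A short sketch (the option values below are arbitrary examples):

import * as Speech from 'expo-speech';

// Speak a phrase in a specific language with adjusted pitch and speaking rate.
const speakWithOptions = () => {
  Speech.speak("Your command has been processed.", {
    language: 'en-US', // BCP-47 language tag
    pitch: 1.1,        // 1.0 is the default pitch
    rate: 0.9,         // 1.0 is the default speaking rate
    onDone: () => console.log('Finished speaking'),
  });
};

// Stop any speech that is currently playing.
const stopSpeaking = () => Speech.stop();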
Option 2: Using React Native Voice
This library captures spoken input and returns it as text.
Example:
import React, { useEffect, useState } from 'react';
import { Button, View, Text, PermissionsAndroid, Platform } from 'react-native';
import Voice from '@react-native-voice/voice';

export default function App() {
  const [recognizedText, setRecognizedText] = useState("");

  useEffect(() => {
    // Register the results handler once, and clean up listeners on unmount.
    Voice.onSpeechResults = (event) => {
      if (event.value && event.value.length > 0) {
        setRecognizedText(event.value[0]);
      }
    };
    return () => {
      Voice.destroy().then(Voice.removeAllListeners);
    };
  }, []);

  const startListening = async () => {
    if (Platform.OS === 'android') {
      const granted = await PermissionsAndroid.request(
        PermissionsAndroid.PERMISSIONS.RECORD_AUDIO,
        {
          title: "Microphone Permission",
          message: "This app needs access to your microphone to recognize speech.",
          buttonPositive: "OK",
        }
      );
      if (granted !== PermissionsAndroid.RESULTS.GRANTED) {
        return;
      }
    }
    try {
      await Voice.start("en-US");
    } catch (error) {
      console.error("Failed to start voice recognition:", error);
    }
  };

  return (
    <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}>
      <Button title="Start Listening" onPress={startListening} />
      <Text style={{ marginTop: 20 }}>Recognized Text: {recognizedText}</Text>
    </View>
  );
}
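In a real screen you’ll also want to stop recognition explicitly. Here is a minimal sketch of a stop handler built on the library’s Voice.stop() call; the button wiring in the comment is just an illustration:

// Stop the active recognition session; any results already received stay in state.
const stopListening = async () => {
  try {
    await Voice.stop();
  } catch (error) {
    console.error("Failed to stop voice recognition:", error);
  }
};

// Example wiring inside the component's JSX:
// <Button title="Stop Listening" onPress={stopListening} />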
4. Testing Your Setup
Step 1: Run the App
Start the development server:

npx expo start

Note: React Native Voice contains native code, so it will not load inside Expo Go. Run the app from a development build instead (for example, npx expo run:android, npx expo run:ios, or an EAS build).
Step 2: Test Voice Recognition
- Expo Speech API: Tap the button and listen to the text-to-speech output.
- React Native Voice: Speak into your microphone and watch the recognized text appear on screen.
5. Key Considerations
- Languages Supported: Ensure the API or library supports your target languages (a quick availability check is sketched after this list).
- Permissions: Always handle microphone permissions gracefully.
- Error Handling: Implement fallback messages for unrecognized input (covered in Day 9).
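For example, with expo-speech you can check which text-to-speech voices are installed on the device before committing to a locale. A minimal sketch, assuming you only need to log the available language tags:

import * as Speech from 'expo-speech';

// Log the distinct language tags of the text-to-speech voices on this device.
const logAvailableLanguages = async () => {
  const voices = await Speech.getAvailableVoicesAsync();
  const languages = [...new Set(voices.map((voice) => voice.language))];
  console.log('Available TTS languages:', languages);
};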
Next Steps
On Day 3, we’ll focus on capturing and analyzing voice input, extracting meaningful insights, and building actionable features from it.