Voice commands enable hands-free interaction by mapping spoken phrases to specific app actions. On Day 4, we’ll implement functionality that allows users to control app features using their voice.
1. Understanding Voice Command Mapping
Voice commands involve three stages (a minimal sketch follows this list):
- Capturing Input: Convert speech to text.
- Matching Commands: Compare the recognized text to a predefined set of commands.
- Triggering Actions: Execute corresponding app functionalities.
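In code, the loop boils down to these three pieces. Here is a minimal sketch with a hard-coded phrase standing in for the speech-to-text result (the real capture step is wired up in section 3):
// 1. Capture: in the real app this text comes from the speech recognizer.
const recognized = "please open settings";

// 2. Match: compare the recognized text against the known command phrases.
const knownCommands = ["open settings", "show weather", "play music"];
const match = knownCommands.find((cmd) => recognized.toLowerCase().includes(cmd));

// 3. Trigger: run the action tied to the matched phrase.
if (match) {
  console.log(`Executing action for: ${match}`);
} else {
  console.log("Command not recognized.");
}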
2. Setting Up Command Handlers
Step 1: Define a Command List
Create an object that maps command phrases to their corresponding actions:
const commands = {
  "open settings": () => console.log("Navigating to Settings..."),
  "show weather": () => console.log("Fetching Weather Data..."),
  "play music": () => console.log("Playing Music..."),
};
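Each value is a plain function, so a matched command can be invoked directly:
commands["play music"](); // logs "Playing Music..."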
Step 2: Match Recognized Text to Commands
Write a function to match the recognized text:
const handleCommand = (text) => {
  const command = Object.keys(commands).find((cmd) =>
    text.toLowerCase().includes(cmd)
  );
  if (command) {
    commands[command]();
  } else {
    console.log("Command not recognized.");
  }
};
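Because the matcher lowercases the input and only checks that the phrase is contained in it, it tolerates extra words and capitalization:
handleCommand("Hey, open settings please"); // logs "Navigating to Settings..."
handleCommand("What time is it?");          // logs "Command not recognized."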
3. Capturing Voice Input and Executing Commands
Complete Implementation
import React, { useState, useEffect } from 'react';
import { Button, View, Text, StyleSheet, Platform, PermissionsAndroid } from 'react-native';
import Voice from '@react-native-voice/voice';

export default function App() {
  const [recognizedText, setRecognizedText] = useState("");
  const [isListening, setIsListening] = useState(false);

  // Map of spoken phrases to the actions they trigger.
  const commands = {
    "open settings": () => console.log("Navigating to Settings..."),
    "show weather": () => console.log("Fetching Weather Data..."),
    "play music": () => console.log("Playing Music..."),
  };

  // Find the first command phrase contained in the recognized text and run its action.
  const handleCommand = (text) => {
    const command = Object.keys(commands).find((cmd) =>
      text.toLowerCase().includes(cmd)
    );
    if (command) {
      commands[command]();
    } else {
      console.log("Command not recognized.");
    }
  };
  const startListening = async () => {
    // Android requires an explicit runtime permission for the microphone.
    if (Platform.OS === 'android') {
      const granted = await PermissionsAndroid.request(
        PermissionsAndroid.PERMISSIONS.RECORD_AUDIO,
        {
          title: "Microphone Permission",
          message: "This app requires access to your microphone for voice commands.",
          buttonPositive: "OK",
        }
      );
      if (granted !== PermissionsAndroid.RESULTS.GRANTED) {
        return;
      }
    }
    try {
      setIsListening(true);
      await Voice.start("en-US");
    } catch (error) {
      console.error("Error starting voice recognition:", error);
      setIsListening(false);
    }
  };

  const stopListening = async () => {
    setIsListening(false);
    await Voice.stop();
  };

  // Register the speech-results listener once and clean it up on unmount.
  useEffect(() => {
    Voice.onSpeechResults = (event) => {
      const text = event.value?.[0] ?? "";
      setRecognizedText(text);
      handleCommand(text);
      setIsListening(false);
    };
    return () => {
      Voice.destroy().then(Voice.removeAllListeners);
    };
  }, []);
  return (
    <View style={styles.container}>
      <Text style={styles.instructions}>
        {isListening ? "Listening..." : "Press the button and speak a command"}
      </Text>
      <Button
        title={isListening ? "Stop Listening" : "Start Listening"}
        onPress={isListening ? stopListening : startListening}
      />
      <Text style={styles.resultText}>Recognized Text: {recognizedText}</Text>
    </View>
  );
}

const styles = StyleSheet.create({
  container: { flex: 1, justifyContent: "center", alignItems: "center" },
  instructions: { fontSize: 18, marginBottom: 20 },
  resultText: { fontSize: 20, marginTop: 20 },
});
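The listener above only reacts to successful results. @react-native-voice/voice also exposes an onSpeechError event, which can be registered in the same useEffect to reset the UI when recognition fails; a minimal, optional sketch:
// Optional: inside the same useEffect, reset the listening state if recognition fails.
Voice.onSpeechError = (event) => {
  console.log("Speech recognition error:", event.error);
  setIsListening(false);
};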
4. Adding Navigation Actions
Step 1: Integrate React Navigation
Install React Navigation and dependencies:
npm install @react-navigation/native @react-navigation/native-stack react-native-screens react-native-safe-area-context react-native-gesture-handler react-native-reanimated
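The navigation commands in the next step assume the app is wrapped in a navigator that registers "Settings" and "Weather" screens. A minimal sketch, assuming the voice screen from section 3 is exported as VoiceScreen and the screen components shown here are placeholders:
import React from 'react';
import { Text } from 'react-native';
import { NavigationContainer } from '@react-navigation/native';
import { createNativeStackNavigator } from '@react-navigation/native-stack';
import VoiceScreen from './VoiceScreen'; // the voice component built in section 3

const Stack = createNativeStackNavigator();

// Placeholder screens -- replace with your real Settings and Weather screens.
const SettingsScreen = () => <Text>Settings</Text>;
const WeatherScreen = () => <Text>Weather</Text>;

export default function App() {
  return (
    <NavigationContainer>
      <Stack.Navigator>
        <Stack.Screen name="Home" component={VoiceScreen} />
        <Stack.Screen name="Settings" component={SettingsScreen} />
        <Stack.Screen name="Weather" component={WeatherScreen} />
      </Stack.Navigator>
    </NavigationContainer>
  );
}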
Step 2: Add Navigation to the Command List
Update the commands object so that each action receives the navigation object:
const commands = {
  "open settings": (navigation) => navigation.navigate("Settings"),
  "show weather": (navigation) => navigation.navigate("Weather"),
};
Then pass the navigation object into handleCommand so it can be forwarded to the matched command:
const handleCommand = (text, navigation) => {
  const command = Object.keys(commands).find((cmd) =>
    text.toLowerCase().includes(cmd)
  );
  if (command) {
    commands[command](navigation);
  } else {
    console.log("Command not recognized.");
  }
};
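One way to supply the navigation object, assuming the voice screen is rendered inside the navigator from Step 1, is the useNavigation hook from @react-navigation/native. A minimal sketch, where the button stands in for the speech-results callback:
import React from 'react';
import { Button } from 'react-native';
import { useNavigation } from '@react-navigation/native';

// Hypothetical, stripped-down voice screen showing only how the navigation
// object reaches handleCommand; the full screen from section 3 works the same way.
export default function VoiceScreen() {
  const navigation = useNavigation();

  const commands = {
    "open settings": (nav) => nav.navigate("Settings"),
    "show weather": (nav) => nav.navigate("Weather"),
  };

  const handleCommand = (text, nav) => {
    const command = Object.keys(commands).find((cmd) =>
      text.toLowerCase().includes(cmd)
    );
    if (command) {
      commands[command](nav);
    } else {
      console.log("Command not recognized.");
    }
  };

  // In the full app, handleCommand(text, navigation) is called from Voice.onSpeechResults.
  return (
    <Button
      title="Simulate 'open settings'"
      onPress={() => handleCommand("open settings", navigation)}
    />
  );
}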
5. Testing the Implementation
Step 1: Start the App
Run the development server:
expo start
Step 2: Test Voice Commands
- Speak commands like “Open settings” or “Show weather.”
- Confirm that the corresponding actions or navigation occur.
6. Key Considerations
- Command Variations: Use NLP libraries or APIs to match synonyms and phrasing variations (e.g., “Go to settings” instead of “Open settings”); a lightweight alternative is a synonym table, sketched after this list.
- Command Conflicts: When phrases overlap, give the more specific command priority or require unique wording so the intended action wins.
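Short of a full NLP library, a plain synonym table in front of the matcher handles simple phrasing variations (the phrases below are just examples):
// Map alternative phrasings onto the canonical command keys.
const synonyms = {
  "go to settings": "open settings",
  "what's the weather": "show weather",
  "start music": "play music",
};

// Rewrite the recognized text to a canonical phrase before matching.
const normalizeCommand = (text) => {
  const lower = text.toLowerCase();
  const alias = Object.keys(synonyms).find((phrase) => lower.includes(phrase));
  return alias ? synonyms[alias] : lower;
};

// Feed the normalized text into handleCommand, e.g.:
// handleCommand(normalizeCommand("Go to settings"), navigation); // triggers "open settings"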
7. Key Concepts Covered
- Mapping voice commands to app actions.
- Capturing and processing voice input.
- Adding navigation functionality to voice commands.
Next Steps
On Day 5, we’ll implement text-to-speech functionality to provide spoken feedback for user commands.
SEO Keywords: React Native voice commands, app actions from voice input, React Native Voice tutorial, voice-controlled navigation, integrating voice recognition in React Native.