Voice-based navigation enhances accessibility and usability by allowing users to navigate between app screens with simple voice commands. On Day 6, we’ll integrate voice commands with React Navigation to enable screen transitions.
1. Setting Up React Navigation
Step 1: Install React Navigation and Dependencies
Install the required packages for navigation:
npm install @react-navigation/native @react-navigation/stack react-native-screens react-native-safe-area-context react-native-gesture-handler react-native-reanimated
Step 2: Configure React Navigation
Wrap your app in a NavigationContainer and define your screens:
import React from 'react';
import { NavigationContainer } from '@react-navigation/native';
import { createStackNavigator } from '@react-navigation/stack';
import HomeScreen from './HomeScreen';
import SettingsScreen from './SettingsScreen';

const Stack = createStackNavigator();

export default function App() {
  return (
    &lt;NavigationContainer&gt;
      &lt;Stack.Navigator initialRouteName="Home"&gt;
        &lt;Stack.Screen name="Home" component={HomeScreen} /&gt;
        &lt;Stack.Screen name="Settings" component={SettingsScreen} /&gt;
      &lt;/Stack.Navigator&gt;
    &lt;/NavigationContainer&gt;
  );
}
2. Setting Up Voice-Based Navigation
Step 1: Modify Command List
Add navigation commands to the command list:
const commands = {
  "go to home": (navigation) => navigation.navigate("Home"),
  "open settings": (navigation) => navigation.navigate("Settings"),
};
Step 2: Integrate Navigation into Voice Handling
Pass the navigation prop to the command handler:
const handleCommand = (text, navigation) => {
  const command = Object.keys(commands).find((cmd) =>
    text.toLowerCase().includes(cmd)
  );
  if (command) {
    commands[command](navigation);
  } else {
    Speech.speak("Sorry, I did not understand that command.");
  }
};
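Matching with includes is sensitive to punctuation and casing in the recognized text ("Open Settings, please!" contains "open settings" only after cleanup). One possible refinement is to normalize both the utterance and the command phrases before comparing. A minimal sketch; the normalize and matchCommand helpers are illustrative, not part of either library:

```javascript
// Illustrative helper: normalize a speech result before matching.
// Lowercases, strips punctuation, and collapses whitespace.
const normalize = (text) =>
  text
    .toLowerCase()
    .replace(/[^\p{L}\p{N}\s]/gu, "")
    .replace(/\s+/g, " ")
    .trim();

// Return the first registered phrase contained in the utterance, or null.
const matchCommand = (text, commandPhrases) =>
  commandPhrases.find((cmd) => normalize(text).includes(normalize(cmd))) ?? null;

// Example: "Open Settings, please!" still matches "open settings".
matchCommand("Open Settings, please!", ["go to home", "open settings"]);
```

The caller can then look the matched phrase up in the commands object exactly as before.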
3. Complete Implementation
App.js
import React, { useState, useEffect } from 'react';
import { Button, View, Text, StyleSheet, Platform, PermissionsAndroid } from 'react-native';
import { NavigationContainer } from '@react-navigation/native';
import { createStackNavigator } from '@react-navigation/stack';
import Voice from '@react-native-voice/voice';
import * as Speech from 'expo-speech';

const Stack = createStackNavigator();

function HomeScreen({ navigation }) {
  const [recognizedText, setRecognizedText] = useState("");
  const [isListening, setIsListening] = useState(false);

  const commands = {
    "go to home": () => navigation.navigate("Home"),
    "open settings": () => navigation.navigate("Settings"),
  };

  const handleCommand = (text) => {
    const command = Object.keys(commands).find((cmd) =>
      text.toLowerCase().includes(cmd)
    );
    if (command) {
      commands[command]();
    } else {
      Speech.speak("Sorry, I did not understand that command.");
    }
  };

  // Register the speech-result listener once, instead of reassigning it on
  // every render, and tear it down when the screen unmounts.
  useEffect(() => {
    Voice.onSpeechResults = (event) => {
      const text = event.value[0];
      setRecognizedText(text);
      handleCommand(text);
      setIsListening(false);
    };
    return () => {
      Voice.destroy().then(Voice.removeAllListeners);
    };
  }, []);

  const startListening = async () => {
    if (Platform.OS === 'android') {
      const granted = await PermissionsAndroid.request(
        PermissionsAndroid.PERMISSIONS.RECORD_AUDIO,
        {
          title: "Microphone Permission",
          message: "This app requires access to your microphone for voice commands.",
        }
      );
      if (granted !== PermissionsAndroid.RESULTS.GRANTED) {
        return;
      }
    }
    try {
      setIsListening(true);
      await Voice.start("en-US");
    } catch (error) {
      console.error("Error starting voice recognition:", error);
      setIsListening(false);
    }
  };

  const stopListening = async () => {
    setIsListening(false);
    try {
      await Voice.stop();
    } catch (error) {
      console.error("Error stopping voice recognition:", error);
    }
  };

  return (
    &lt;View style={styles.container}&gt;
      &lt;Text style={styles.instructions}&gt;
        {isListening ? "Listening..." : "Press the button and speak a command"}
      &lt;/Text&gt;
      &lt;Button
        title={isListening ? "Stop Listening" : "Start Listening"}
        onPress={isListening ? stopListening : startListening}
      /&gt;
      &lt;Text style={styles.resultText}&gt;Recognized Text: {recognizedText}&lt;/Text&gt;
    &lt;/View&gt;
  );
}

function SettingsScreen() {
  return (
    &lt;View style={styles.container}&gt;
      &lt;Text style={styles.instructions}&gt;Settings Screen&lt;/Text&gt;
    &lt;/View&gt;
  );
}

export default function App() {
  return (
    &lt;NavigationContainer&gt;
      &lt;Stack.Navigator initialRouteName="Home"&gt;
        &lt;Stack.Screen name="Home" component={HomeScreen} /&gt;
        &lt;Stack.Screen name="Settings" component={SettingsScreen} /&gt;
      &lt;/Stack.Navigator&gt;
    &lt;/NavigationContainer&gt;
  );
}

const styles = StyleSheet.create({
  container: { flex: 1, justifyContent: "center", alignItems: "center" },
  instructions: { fontSize: 18, marginBottom: 20 },
  resultText: { fontSize: 20, marginTop: 20 },
});
4. Testing Voice Navigation
Step 1: Start the Development Server
Run the app:
npx expo start
Step 2: Test Voice Commands
- Speak commands like “Go to home” or “Open settings.”
- Verify that the app navigates to the correct screen.
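On-device testing needs a microphone, but the command-matching logic itself can be exercised off-device with plain Node once it is factored into a pure function. A minimal sketch, assuming that refactor; the dispatch function and the recording callbacks are illustrative names, not part of the app above:

```javascript
// A pure version of the command dispatcher, factored out so it can be
// tested without a device, microphone, or navigator.
const dispatch = (text, commands) => {
  const phrase = Object.keys(commands).find((cmd) =>
    text.toLowerCase().includes(cmd)
  );
  if (phrase) {
    commands[phrase]();
    return phrase;
  }
  return null; // caller speaks the fallback message in this case
};

// Record which screen a command would navigate to, instead of navigating.
let navigatedTo = null;
const commands = {
  "go to home": () => { navigatedTo = "Home"; },
  "open settings": () => { navigatedTo = "Settings"; },
};

dispatch("Please open settings", commands); // navigatedTo === "Settings"
```

In the app, the recording callbacks would simply be replaced with the real navigation.navigate calls.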
5. Key Considerations
- Multiple Commands per Screen: Support variations like “Go to settings” or “Settings screen.”
- Feedback: Confirm navigation success with spoken feedback:
Speech.speak("Navigating to Settings.");
- Default Fallback: Provide a default action for unrecognized commands.
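The first consideration can be handled with a small alias table that maps several spoken variations onto one screen name. A hedged sketch; the ALIASES table and resolveScreen helper are illustrative, not part of any library:

```javascript
// Illustrative alias table: several spoken phrases resolve to one screen.
const ALIASES = {
  Home: ["go to home", "home screen", "take me home"],
  Settings: ["open settings", "go to settings", "settings screen"],
};

// Resolve an utterance to a screen name, or null if nothing matches.
const resolveScreen = (text) => {
  const lower = text.toLowerCase();
  for (const [screen, phrases] of Object.entries(ALIASES)) {
    if (phrases.some((p) => lower.includes(p))) return screen;
  }
  return null;
};

// resolveScreen("Could you open settings?") -> "Settings"
// resolveScreen("take me home")             -> "Home"
```

The handler would then call navigation.navigate(screen) on a non-null result and speak the fallback message otherwise, which also gives one natural place to add the spoken confirmation.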
6. Key Concepts Covered
- Voice-controlled navigation between app screens.
- Integrating React Navigation with voice commands.
- Handling navigation feedback with text-to-speech.
Next Steps
On Day 7, we’ll integrate the app with third-party APIs like Google Assistant or Alexa for extended voice-controlled functionalities.