Remember the Rabbit R1 and how it sucked? This is an attempt at making a handheld AI assistant that runs fully offline.
How much experience does your group have? Does the project use anything (art, music, starter kits) you didn't create?
It uses a Raspberry Pi and a Waveshare 3.5-inch screen, running a prebuilt Raspberry Pi OS image. Everything else is built on top of that.
What challenges did you encounter?
The code runs fine, but ALSA (Advanced Linux Sound Architecture), broken on Linux as usual, fails to deliver microphone input to Python's speech_recognition library. The code will have to be rewritten without speech_recognition, or modified to route audio through PulseAudio (the good guy who fixed audio on Linux).
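One workaround worth trying before ripping out speech_recognition entirely is pointing it at an explicit capture device (e.g. the PulseAudio bridge) instead of the broken ALSA default. This is a minimal sketch, not the project's actual code; the device names and the `pick_device_index` helper are assumptions, and `speech_recognition` plus `pyaudio` are assumed to be installed:

```python
def pick_device_index(names, keyword):
    """Return the index of the first audio device whose name contains
    keyword (case-insensitive), or None if no device matches."""
    for i, name in enumerate(names):
        if keyword.lower() in name.lower():
            return i
    return None


# Hypothetical usage with hardware attached (requires speech_recognition + pyaudio):
#
# import speech_recognition as sr
# names = sr.Microphone.list_microphone_names()
# idx = pick_device_index(names, "pulse")  # prefer the PulseAudio bridge over raw ALSA
# recognizer = sr.Recognizer()
# with sr.Microphone(device_index=idx) as source:
#     audio = recognizer.listen(source)
```

Selecting the "pulse" device (when it shows up in the device list) hands the capture path to PulseAudio, which often sidesteps ALSA's per-application device problems.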
Also, since Waveshare has not updated the code for its products in 2 years (5 years for my model), getting the display to work on newer versions of Debian turns into driver hell. It takes real effort to get even 10 fps of refresh rate out of the screen (measured with a Bad Apple playback test).
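For what it's worth, on recent Raspberry Pi OS releases the mainline `piscreen` device tree overlay can sometimes drive 3.5-inch ILI9486 SPI panels without Waveshare's abandoned LCD-show scripts. This is a hedged sketch only; whether it works depends on the exact panel revision, and the SPI speed and rotation values are guesses to tune:

```
# /boot/firmware/config.txt (or /boot/config.txt on older images)
# Assumes the panel is a piscreen-compatible ILI9486 clone -- not guaranteed
# for every Waveshare revision. Higher SPI speeds raise fps but can corrupt
# the image if the panel or wiring can't keep up.
dtoverlay=piscreen,speed=24000000,rotate=90
```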