Developing a microcontroller-based electronic instrument inspired by the theremin for real-time audio synthesis and control. The system uses proximity sensing and multimodal inputs to drive low-latency audio generation, signal mixing, and real-time visualization on an ESP32 dual-core platform.
The classic theremin is one of the earliest electronic instruments. It is controlled entirely without physical contact, using the player's hand proximity to antennas. This project reimagines that concept for modern electronic music, combining proximity sensing with additional control interfaces to enable real-time synthesis of EDM-style sounds.
Beyond the musical application, this project serves as a platform to explore embedded control, real-time signal processing, and interactive physical computing on resource-constrained hardware.
The instrument is built around an ESP32 dual-core microcontroller. Core 0 handles GUI rendering, sensor reading, and interrupt handling at 60 Hz. Core 1 runs the audio synthesis and sample playback pipeline with minimal latency. The two cores communicate through shared memory and hardware timers.
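The shared-memory handoff between the two cores can be sketched as a single-producer/single-consumer ring buffer: the control core pushes parameter updates, and the audio core pops them without ever blocking. This is an illustrative sketch in portable C (the names, buffer size, and use of plain `volatile` indices are assumptions; real ESP32 firmware would typically use C11 atomics or a FreeRTOS queue):

```c
#include <stdint.h>
#include <stdbool.h>

#define RB_SIZE 16  /* power of two so wraparound is a cheap mask */

typedef struct {
    volatile uint32_t head;  /* written only by the producer core */
    volatile uint32_t tail;  /* written only by the consumer core */
    float data[RB_SIZE];     /* e.g. pitch or filter parameter updates */
} ring_buffer_t;

/* Producer side: returns false (drops the update) when full,
   so the audio path is never blocked waiting on the GUI core. */
static bool rb_push(ring_buffer_t *rb, float v) {
    uint32_t next = (rb->head + 1) & (RB_SIZE - 1);
    if (next == rb->tail) return false;  /* full */
    rb->data[rb->head] = v;
    rb->head = next;
    return true;
}

/* Consumer side: returns false when there is nothing to read. */
static bool rb_pop(ring_buffer_t *rb, float *out) {
    if (rb->tail == rb->head) return false;  /* empty */
    *out = rb->data[rb->tail];
    rb->tail = (rb->tail + 1) & (RB_SIZE - 1);
    return true;
}
```

Dropping updates on overflow (rather than blocking) is the usual choice here: a stale pitch value for one frame is inaudible, while a stalled audio core is not.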
The multimodal input system includes a time-of-flight sensor for proximity-based pitch control, a Hall effect sensor, microphone input, and physical controls (buttons, dials, switches) for mode selection and parameter tuning. Outputs include audio via the onboard DAC, a 60 Hz display for waveform visualization, and status LEDs.
The primary challenge is achieving low-latency audio generation on an embedded platform. Humans are extremely sensitive to audio latency: anything above roughly 10 ms feels sluggish for a musical instrument. This requires careful optimization of the signal processing pipeline, interrupt priorities, and inter-core communication to keep the audio path fast while still reading sensors and updating the display.
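The arithmetic behind that budget is simple: with block-based audio output, the output buffer alone contributes `block_samples / sample_rate` seconds of latency before any sensing or processing delay is counted. A quick sketch (the 44.1 kHz rate and block sizes below are assumptions for illustration, not the project's chosen values):

```c
#include <stdint.h>

/* Latency contributed by one audio output block, in milliseconds. */
static float block_latency_ms(uint32_t block_samples, uint32_t sample_rate_hz) {
    return 1000.0f * (float)block_samples / (float)sample_rate_hz;
}
```

At 44.1 kHz, a 64-sample block costs about 1.5 ms, while a 512-sample block already costs about 11.6 ms, blowing the ~10 ms budget on buffering alone. This is why the block size, sensor polling, and inter-core handoff all have to be budgeted together.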
This project is in its early design phase. The system architecture and block diagram have been defined, and initial hardware prototyping is underway. Next steps include core firmware development, sensor integration, and audio synthesis algorithm implementation.
This page will be updated with hardware photos, waveform captures, and demo videos as the project progresses.
While this looks like a music project on the surface, the underlying skills transfer directly to robotics: real-time embedded control on a dual-core MCU, low-latency sensor fusion, interrupt-driven firmware design, and human-in-the-loop interaction. These are the same problems that make robot control hard — just expressed through a different medium.