# Software for multitrack audio and servo/LED playback over DMX



## DrNeon (Dec 23, 2020)

I've decided to build my dream decoration of a motion-sensing skeleton band, and while I have a very rudimentary version running, I now want to go down the rabbit-hole of animatronics, etc. So here goes:

Goal: Skeleton props, each run by a battery-powered ESP8266 connected to a PCA9685 controlling some LEDs and servos, plus a PIR motion sensor. When the first skeleton prop sees motion, it starts a multitrack audio file (with all tracks muted) playing on a Raspberry Pi Zero W and then selectively unmutes the track associated with that particular skeleton. It also selectively enables a sequence of LED and servo motions corresponding to that skeleton (played along with the audio file). Subsequent motion detected by other skeletons then unmutes their associated audio tracks and enables their LED/servo playback as well. So basically a sequencer is playing multitrack audio plus LED and servo sequences that are enabled selectively by motion events at each skeleton. Once playback for a particular song is done, a new song/sequence is cued up, waiting for a motion event to start it.
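The trigger logic above is basically a small state machine. A minimal sketch in Python (names and action strings are illustrative, not from any particular software):

```python
# First motion event starts playback with every track muted; later events
# unmute the track (and DMX output) belonging to the prop that saw motion.
class Show:
    def __init__(self, tracks):
        self.tracks = list(tracks)   # one audio track per skeleton prop
        self.playing = False
        self.unmuted = set()

    def motion(self, prop):
        """Called when a prop's PIR sensor fires; returns the actions to take."""
        actions = []
        if prop not in self.tracks:
            return actions
        if not self.playing:
            self.playing = True
            actions.append("start-playback-all-muted")
        if prop not in self.unmuted:
            self.unmuted.add(prop)
            actions.append(f"unmute:{prop}")
            actions.append(f"enable-dmx:{prop}")
        return actions

    def song_finished(self):
        """Re-arm for the next song/sequence."""
        self.playing = False
        self.unmuted.clear()
```

Whatever sequencer ends up doing the playback, it needs to expose hooks equivalent to `motion()` and `song_finished()`.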

What I have now: I have two different setups.

A) My original setup uses a Raspberry Pi as an MQTT broker, running Node-RED to respond to MQTT messages by driving ecasound to play back and selectively unmute tracks in a multitrack audio file. Each prop has an ESP8266 "node" that communicates with the Pi over MQTT, sending messages when its PIR motion sensor is triggered and receiving messages to turn on LEDs and motors. While this setup works for the motion-triggered multitrack audio playback, the MQTT communication is too slow for servo and music-synced LED control, and there is no software to sequence events timed to the music.
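The Pi-side listener in a setup like A is only a few lines. A sketch, assuming the paho-mqtt library and a made-up topic scheme `props/<node>/motion` (the post doesn't give the actual topics):

```python
"""Pi-side MQTT listener sketch for setup A.

The "props/<node>/motion" topic layout is my own invention for
illustration; adapt it to whatever the nodes actually publish.
"""

def node_from_topic(topic: str) -> str:
    """Pull the prop name out of a topic like 'props/drummer/motion'."""
    parts = topic.split("/")
    if len(parts) != 3 or parts[0] != "props" or parts[2] != "motion":
        raise ValueError(f"unexpected topic: {topic!r}")
    return parts[1]

if __name__ == "__main__":
    import paho.mqtt.client as mqtt  # pip install paho-mqtt

    def on_message(client, userdata, msg):
        node = node_from_topic(msg.topic)
        # Here you'd unmute that node's audio track and enable its outputs.
        print(f"motion at {node}")

    # Note: paho-mqtt 2.x wants mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    client = mqtt.Client()
    client.on_message = on_message
    client.connect("localhost", 1883)
    client.subscribe("props/+/motion")
    client.loop_forever()
```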

B) My new setup is built around using ArtNet to remotely control DMX channels on the ESP8266. In other words, the ESP8266 is a DMX receiver (over WiFi) and controls the LEDs/servos this way. The ESP8266 still has the motion sensor, but how that trigger is used to unmute tracks and start/stop playback depends on the sequencer software. I am currently using QLC+ as the control software, as it sort of allows multitrack audio and can provide DMX signals synced with audio. The DMX from QLC+ -> ESP8266 is working nicely, and the multitrack audio playback seems OK, but even though QLC+ has S/M (solo/mute) parameters for each track, I don't think they can be changed during playback or configured remotely (which sort of defeats the purpose).
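For anyone rolling their own sender alongside (or instead of) QLC+: the ArtDmx packet an ArtNet receiver expects is simple enough to build by hand. This follows the protocol-version-14 layout as I understand it; double-check against the Art-Net spec before relying on it.

```python
import socket
import struct

ARTNET_PORT = 6454  # standard Art-Net UDP port

def artdmx_packet(universe: int, dmx: bytes, sequence: int = 0) -> bytes:
    """Build an ArtDmx packet carrying up to 512 DMX channel values."""
    if len(dmx) % 2:
        dmx += b"\x00"                          # spec wants an even data length
    if not 2 <= len(dmx) <= 512:
        raise ValueError("DMX payload must be 2..512 bytes")
    return (
        b"Art-Net\x00"                          # packet ID
        + struct.pack("<H", 0x5000)             # OpCode: ArtDmx (little-endian)
        + struct.pack(">H", 14)                 # protocol version (big-endian)
        + bytes([sequence & 0xFF, 0])           # sequence, physical port
        + struct.pack("<H", universe & 0x7FFF)  # SubUni + Net (little-endian)
        + struct.pack(">H", len(dmx))           # data length (big-endian)
        + dmx
    )

def send_dmx(ip: str, universe: int, dmx: bytes) -> None:
    """Fire one DMX frame at a node, e.g. send_dmx("192.168.1.50", 0, bytes(512))."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(artdmx_packet(universe, dmx), (ip, ARTNET_PORT))
    sock.close()
```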

So my main question is: what software would work well for multitrack audio playback, with solo/mute that can be changed during playback via a remote interface, as well as sequencing ability, with tracks capable of sending DMX information over ArtNet (and those tracks also able to be muted/unmuted [or enabled/disabled] during playback via a remote interface)?


----------



## Ghost-0-Coaster (Sep 13, 2020)

Hey DrNeon, 
I'm just getting started with animatronics; this Halloween I did a ghost-themed roller coaster in my front yard. I had finagled a multitrack audio setup that I used, as well as a triggered audio track for the ghoul in the coaster. That had to be a separate system since the car moved along the track, so I got a little creative and ended up using a small FM transmitter to broadcast that audio to a digital mixer. I'm a sound engineer by trade and already had this in my arsenal. It worked out great, and I was able to raise and lower the levels of the different "tracks" depending on whether a lot of people were gathered or not.

Not sure what your budget is, but one option is getting a digital mixer. The model I have is the Behringer X Air 18. I believe you can control the volume and mutes, as well as other parameters, via MIDI and QLC+. They also make mixers with fewer channels that are cheaper. You could probably also mix the band on the fly with an iPad. Another cool thing is that you get multiple outputs, so you could do a surround-sound type of deal, with individual speakers behind each "performer". A lot of what you have going on is outside my particular wheelhouse, but the audio part is my jam. If you're interested in going down that road, I can be of assistance. I plan on bumping up my game for next Halloween, so I might need some help myself!
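Worth noting that the X Air series can also be scripted over plain OSC (UDP), which is what the remote apps use. A sketch: the `/ch/NN/mix/on` address and port 10024 are from Behringer's X Air OSC documentation as I recall it, so verify against the official docs before relying on them (0 = mute, 1 = unmute).

```python
import socket
import struct

def osc_message(address: str, value: int) -> bytes:
    """Encode a minimal OSC message with a single int32 argument."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",i") + struct.pack(">i", value)

def set_channel_on(mixer_ip: str, channel: int, on: bool) -> None:
    """Mute/unmute one mixer channel, e.g. set_channel_on("192.168.1.70", 1, False)."""
    msg = osc_message(f"/ch/{channel:02d}/mix/on", 1 if on else 0)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(msg, (mixer_ip, 10024))  # X Air OSC port (per Behringer docs)
    sock.close()
```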

Sounds like a cool project!


----------



## BobNJ (Jun 25, 2015)

You didn't specify how many audio tracks you want to use for your project. Take a look at the Tsunami Super WAV Trigger at SparkFun. It supports multiple WAV files that can be manipulated on the fly.


----------



## David_AVD (Nov 9, 2012)

This sounds like something that could be written using the BASS audio library, as it supports multi-channel audio and control over every channel. Not sure how much you're into programming.


----------



## DrNeon (Dec 23, 2020)

So I have a setup that seems to work (I just need to get the actual props). A Raspberry Pi runs an MQTT broker and, using Node-RED, listens for messages indicating "who" saw motion. Nodes consist of a battery-powered NodeMCU board and a separate 16-channel PWM board (PCA9685), with a motion sensor going into the NodeMCU and some servos and RGB LEDs on the PCA9685. When a NodeMCU sees motion, it becomes "activated" and posts to an MQTT topic, and the Pi then starts playback of multiple audio tracks using ecasound, with all tracks muted except the one corresponding to whoever saw motion.

Simultaneously, playback of multiple MIDI files starts; each MIDI file contains CC messages that are translated to ArtNet data via midimonster. Activated nodes receive the ArtNet data and respond accordingly. Any subsequent node that sees motion becomes activated (i.e. starts responding to ArtNet data) and also unmutes one of the audio tracks being played by ecasound. Each MIDI file corresponds to a unique node/ArtNet universe, played back through a unique MIDI port created by midimonster.
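For anyone wanting to reproduce the CC-to-ArtNet leg, a rough sketch of what one node's slice of a MIDIMonster config might look like. The option names, channel syntax, and range notation here are from my reading of the MIDIMonster backend docs and may not be exact; check the `midi` and `artnet` backend READMEs before use, and the IP/port values are examples only.

```ini
; MIDIMonster sketch: MIDI CCs in -> Art-Net out, one instance per node
[backend artnet]
bind = 0.0.0.0

[artnet art1]
universe = 0
destination = 192.168.1.50   ; this node's IP (example address)

[midi mid1]
read = 128:0                 ; ALSA client:port your MIDI player writes to

[map]
; CCs 0..15 on MIDI channel 0 drive DMX channels 1..16
mid1.ch0.cc{0..15} > art1.{1..16}
```

One such `[artnet artN]`/`[midi midN]` pair per skeleton keeps the universes and MIDI ports separate, matching the one-file-per-node scheme described above.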


The MIDI data is generated from a QLC+ show that contains all the sequencing, etc. (running on my Windows machine). I use the MIDI output plugin and record the MIDI data with a MIDI editor program in Windows. Playback on the Pi is done with aplaymidi for now, though it's a bit tricky to get everything perfectly in sync due to delays (fractions of a second) between programs starting. Someone on a different forum suggested that using Python for MIDI playback may help with this, but I haven't looked into it yet.
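The Python suggestion boils down to scheduling every event against one shared start time instead of letting separate player processes free-run. The tick-to-seconds math is simple; here's a stdlib-only sketch of it (in practice a library like mido would handle the file parsing, and this assumes a single fixed tempo):

```python
def ticks_to_seconds(ticks: int, tempo_us_per_beat: int, ppq: int) -> float:
    """Convert MIDI delta ticks to seconds for a given tempo and resolution."""
    return ticks * tempo_us_per_beat / (ppq * 1_000_000)

def schedule(events, tempo_us_per_beat=500_000, ppq=480):
    """Turn (delta_ticks, message) pairs into (absolute_seconds, message) pairs.

    Defaults: 500000 us/beat = 120 BPM, 480 pulses per quarter note.
    """
    t = 0.0
    out = []
    for delta, msg in events:
        t += ticks_to_seconds(delta, tempo_us_per_beat, ppq)
        out.append((t, msg))
    return out
```

To keep multiple files in lockstep, take a single `time.monotonic()` reference before anything starts, then have every stream sleep until `start + t` before sending each message, rather than launching one player process per file.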


The next step is to get skeletons...plastic skeletons that is.


----------



## hudtechllc (Sep 3, 2015)

This is what I'm using.
https://i.ytimg.com/an_webp/Jk3Zsyr...I-6i4AG&rs=AOn4CLDbagdhNxqSw7KtQElV3iDf1ZCnHA


----------

