
[aprssig] New APRS virtual reality Idea!

Scott Miller scott at opentrac.org
Thu Sep 20 20:43:04 UTC 2007


I've been trying to find some information, but this is one of those 
cases where there's too much of the WRONG information out there... there 
are tons of 3D audio processing chips, but they're all intended for 
decoding surround-sound streams.

What I'm trying to find is a chip (or DSP code) that'll take a mono 
source and at least X/Y inputs for the position.

Come to think of it, maybe it's not really so hard.  Use the Pythagorean 
theorem to get the distance between the 'source' and each ear, and the 
speed of sound gives you the delay for each headphone speaker.  Run the 
source audio through a delay line and pick your taps based on the 
required delay.
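That delay-line approach can be sketched in a few lines. Everything here is an assumed setup, not anything from a shipping product: an 8 kHz sample rate, 0.18 m ear spacing, and the listener at the origin facing +Y with the ears on the X axis.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at ~20 C
SAMPLE_RATE = 8000       # Hz; assumed rate for narrowband voice
EAR_SPACING = 0.18       # m; assumed ear-to-ear distance

def ear_delays(src_x, src_y):
    """Delay in samples from a virtual source at (src_x, src_y) to each ear.
    Listener is at the origin facing +Y, ears on the X axis."""
    delays = []
    for ear_x in (-EAR_SPACING / 2, EAR_SPACING / 2):
        dist = math.hypot(src_x - ear_x, src_y)          # Pythagorean theorem
        delays.append(round(dist / SPEED_OF_SOUND * SAMPLE_RATE))
    return tuple(delays)                                 # (left, right)

def spatialize(mono, src_x, src_y):
    """Pick taps from a delay line: the farther ear hears the signal later."""
    left_d, right_d = ear_delays(src_x, src_y)
    left = [0.0] * left_d + mono[:len(mono) - left_d]
    right = [0.0] * right_d + mono[:len(mono) - right_d]
    return left, right
```

For a source due east (positive X) the right-ear delay comes out smaller, so the right channel leads.  And a source 10 m away needs only about 230 samples of history at 8 kHz, which squares with the "few kbytes of RAM" estimate below.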

Can anyone think of a reason that wouldn't work?  Doing it with speakers 
(rather than headphones) would complicate things, but maybe not so much 
if you knew how to do the proper surround encoding.

Might take a few kbytes of RAM to implement, depending on the spatial 
resolution needed.

I wonder how much difference the acoustic model of your head makes - if 
your head is in the way of the signal, do you need to attenuate one 
channel to keep your brain from thinking something's wrong?

Scott
N1VG

Jim wrote:
> This is scary...!
> 
> I was talking about exactly the same thing with a friend of mine the other
> day.
> 
> Expand it to the mobile "room" and quadraphonic (surround sound) becomes a
> real possibility. Most cars already have the speakers.
> 
> If tied into the direction of travel from a GPS, it could all be relative to
> the vehicle.
> 
> Alas I don't have any DSP knowledge to know where to start.
> 
> 
> 
> Jim, G1HUL
> 
> 
> -----Original Message-----
> From: aprssig-bounces at lists.tapr.org [mailto:aprssig-bounces at lists.tapr.org]
> On Behalf Of Robert Bruninga
> Sent: 20 September 2007 19:35
> To: 'TAPR APRS Mailing List'
> Subject: [aprssig] New APRS virtual reality Idea!
> 
> This is a NEAT idea for anyone who wants a fun project
> and knows how to write some DSP software:
> 
> Imagine an APRS product that works like this:
> 
> Imagine wearing a pair of headphones. 
> Close your eyes and face north.
> When an APRS user with a D7 HT speaks,
> You HEAR him in the direction where he is.
> 
> If he is to the East, you hear him to the right.
> If he is to the west, you hear him to your left.
> Anywhere in between, and the earphones are phased so that you hear his
> direction.
> 
> Now, too bad the APRS PTT mode does not put the position data at the FRONT
> of a packet, but at the end.  At the front, you could know who is talking
> from where and you could then phase delay
> his voice to create the correct virtual position.   But it isn't.
> The position is at the end.
> 
> So, given this end-PTT limitation, then here is how I would implement this
> and it also makes it simpler.
> 
> 1) Pass the voice through both earphones in MONO.
> 2) When the PTT mode packet comes in
> 3) Send a "roger-beep" to the earphones.
>    (A) Phased to indicate direction 
>    (B) Tone frequency to indicate distance.
> 
> High tone means close.  Low tone means far.  Any other tone in between...
> 
> Once that is working, make it proportional to own-heading, and now you can
> "see APRS in the dark"...
> 
> Bob, WB4APR
> 
> 
> _______________________________________________
> aprssig mailing list
> aprssig at lists.tapr.org
> https://lists.tapr.org/cgi-bin/mailman/listinfo/aprssig
> 
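Bob's roger-beep scheme quoted above could be sketched like this. The particular mappings are my own assumptions, not from his post: an equal-power pan law for bearing (relative to own heading) and a linear 1200-to-300 Hz sweep for distance, clamped at an assumed 50 km.

```python
import math

SAMPLE_RATE = 8000                 # Hz; assumed
NEAR_HZ, FAR_HZ = 1200.0, 300.0    # assumed tone range: high = close, low = far
MAX_RANGE_KM = 50.0                # assumed distance where the tone bottoms out

def roger_beep(bearing_deg, distance_km, duration=0.25):
    """Stereo beep: amplitude panned by bearing relative to own heading,
    tone frequency mapped from distance (high = close, low = far)."""
    # (B) Distance -> frequency, linearly interpolated and clamped.
    frac = min(distance_km, MAX_RANGE_KM) / MAX_RANGE_KM
    freq = NEAR_HZ + (FAR_HZ - NEAR_HZ) * frac
    # (A) Bearing -> equal-power pan: 0 = center, 90 = hard right, 270 = hard left.
    # Note front/back (0 vs 180) is ambiguous without a head model.
    pan = math.sin(math.radians(bearing_deg))     # -1 (left) .. +1 (right)
    l_gain = math.cos((pan + 1) * math.pi / 4)
    r_gain = math.sin((pan + 1) * math.pi / 4)
    n = int(duration * SAMPLE_RATE)
    tone = [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]
    return [s * l_gain for s in tone], [s * r_gain for s in tone]
```

Making it proportional to own heading, per Bob's last step, is then just subtracting the vehicle's GPS course from the station's absolute bearing before calling this.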




