iOS App Acoustically Measures Distances Up To 25 Meters
n01 writes "A recently published app for the iOS platform uses the propagation of sound waves to measure distances of up to 25 meters in a dual-device mode. The technique works by repeatedly sending a chirp signal from the master device, to which the other (reflector) device synchronizes itself and then replies in a similar fashion. A novel combination of techniques has been engineered to enhance robustness in noisy environments, such as using a signal with optimal autocorrelation properties, semi-automatic frequency calibration, and averaging over multiple cycles."
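The summary is light on detail, but the core trick (matched-filter time-of-flight ranging against a known chirp) can be sketched in a few lines of NumPy. Everything below, the sample rate, the sweep band, the simulated 10 m path and the noise level, is illustrative and not taken from the app:

```python
import numpy as np

FS = 44_100          # sample rate (Hz), typical for phone audio
C = 343.0            # speed of sound in air at ~20 °C (m/s)

def linear_chirp(f0, f1, duration, fs=FS):
    """Linear frequency sweep from f0 to f1 Hz."""
    t = np.arange(int(duration * fs)) / fs
    return np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * duration) * t ** 2))

def estimate_tof(recording, template, fs=FS):
    """Locate the template in the recording by its cross-correlation peak."""
    corr = np.correlate(recording, template, mode="valid")
    return np.argmax(np.abs(corr)) / fs

# Simulate a chirp arriving after a 10 m one-way flight (~29 ms) plus noise.
template = linear_chirp(2000, 6000, 0.05)
delay = int(round(10.0 / C * FS))
recording = np.zeros(delay + len(template))
recording[delay:] = template
recording += 0.1 * np.random.default_rng(0).standard_normal(len(recording))

distance = estimate_tof(recording, template) * C   # ≈ 10 m
```

A real implementation also has to handle the reflector's turnaround latency and the two devices' clock offset, which is presumably what the synchronization step in the summary is for.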
XO (OLPC) (Score:5, Informative)
Re:Not impressive (Score:5, Informative)
Yep, neat, but not exactly groundbreaking. The OLPC has had such an application for years.
Just use a damn tape measure! (Score:0, Informative)
Fuck, it's even easier just to use a damn tape measure. You don't have to synchronize them or any of this bullshit. Not only is a tape measure easier, but it's a fuck of a lot cheaper, too. Now you don't have to drop at least $1000 on some Apple devices.
It's one thing to use technology when it simplifies some existing task or job, but it's just fucking stupid when you use technology that only makes a simple task even more awkward, difficult and expensive.
Re:Not impressive (Score:4, Informative)
GPS-containing units had better be able to do that, or they'd never get your location down to something reasonable.
Re:Not impressive (Score:4, Informative)
You just say "I started transmitting at x and you received it at y. (y - x) × speed of sound at sea level = your result."
And then "your result" has at minimum a wavelength or two of precision, which sucks mightily at audio frequencies. This is why they use a nonperiodic (in this case chirped) waveform and correlation instead of "I started transmitting". You could have read this [wikimedia.org], at least, before making an ass of yourself.
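The parent's precision point is easy to check numerically: a pure tone's autocorrelation has near-full-height peaks one period apart, so a correlator can lock onto the wrong cycle, while a chirp of the same length compresses to a single dominant peak. A rough NumPy sketch (the frequencies and durations are arbitrary choices, not the app's):

```python
import numpy as np

FS = 44_100
N = 2205                       # 50 ms of samples
t = np.arange(N) / FS

tone = np.sin(2 * np.pi * 4000 * t)                        # periodic waveform
chirp = np.sin(2 * np.pi * (2000 * t + 40_000 * t ** 2))   # 2 -> 6 kHz sweep

def sidelobe_ratio(x, guard=20):
    """Largest autocorrelation peak outside the main lobe, relative to it."""
    ac = np.correlate(x, x, mode="full")
    centre = len(ac) // 2
    side = np.concatenate([ac[:centre - guard], ac[centre + guard + 1:]])
    return np.abs(side).max() / ac[centre]

tone_ratio = sidelobe_ratio(tone)    # close to 1: ambiguous to within a cycle
chirp_ratio = sidelobe_ratio(chirp)  # small: one unambiguous peak
```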
Not that it's so novel as they try to make it sound, either -- SONAR and RADAR guys did all that long ago, and you'd get the basics needed to implement it in your first semester of DSP in any EE program. In fact, if they're even doing "semiautomatic frequency calibration", they're obviously using linear chirps -- exponential chirps are relatively immune to Doppler or other frequency shifts, and since there's no analog design, are no harder to implement -- suggesting they haven't had (or slept through) any formal education in the field.
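As a toy check of the chirp comparison above, treating a Doppler shift to first order as a multiplicative scaling of frequency: scaling an exponential sweep's frequency law by a constant factor is exactly a time shift of the same law, which a correlator absorbs for free, whereas the same scaling changes a linear sweep's slope and leaves a residual mismatch. (Strictly, only hyperbolic chirps are invariant under true time-compression Doppler; the numbers below are made up.)

```python
import numpy as np

f0, f1, T = 2000.0, 6000.0, 0.05      # made-up sweep: 2 -> 6 kHz over 50 ms

def f_lin(t):
    """Instantaneous frequency of a linear chirp."""
    return f0 + (f1 - f0) * t / T

def f_exp(t):
    """Instantaneous frequency of an exponential (geometric) chirp."""
    return f0 * (f1 / f0) ** (t / T)

alpha = 1.02                               # frequency scaling factor
tau = T * np.log(alpha) / np.log(f1 / f0)  # equivalent time shift
t = np.linspace(0.0, T, 1000)

# Exponential law: scaling by alpha is exactly a shift by tau.
exp_residual = np.max(np.abs(alpha * f_exp(t) - f_exp(t + tau)))
# Linear law: the same shift cannot absorb the slope change.
lin_residual = np.max(np.abs(alpha * f_lin(t) - f_lin(t + tau)))
```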
It just bugs me when people who know even less run down a decent, if not outstanding, project like this with their own mix of even lamer approaches ("just as good!") and pie-in-the-sky fantasy ("then I'll get excited").
Re:"Novel"? Really? (Score:5, Informative)
Keep in mind that all that 'specialised equipment' evolved out of a need to improve the simpler predecessor systems.
Sonar and sonic range-finding systems use all that 'extra equipment' to achieve ranges far in excess of 25 m, in media much more variable than air. The impulse responses of miniature consumer-grade condenser microphones and speakers are more than adequate for use in air within an octave of the audible spectrum. The speakers in the iPhone are primarily limited by their output power, and the fairly omnidirectional microphones may lack overall sensitivity, but both are simple parameters that really only end up reducing the total available range and accuracy (as compared to specialised custom hardware using the same algorithmic solutions).
Applying the design principles of a specialised system to an iPhone implementation would be very unlikely to reveal anything unknown to someone in the industry. This is very similar to the early-stage engineering "proofs of concept" that are used to test various parameters within a system design, without the interactive complexity of implementing the entire system.
There is nothing within this extremely simple setup that hasn't been done as part of a larger system design. A single (consumer-grade) speaker + microphone used in transmissive, active-echo, or passive-echolocation mode is not unusual. Considering the iPhone has more than enough processing power to implement all the standard approaches (correlation, convolution, deconvolution, filtering, impulse response measurement, etc.), there is no real need to be 'clever' as such.
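One of the standard approaches listed above, impulse-response measurement with a swept-sine probe and regularised frequency-domain deconvolution, fits in a few lines of NumPy. The two-tap "room" below is fabricated purely for illustration:

```python
import numpy as np

FS = 44_100
N = 4096
t = np.arange(N) / FS
T = N / FS

# Probe: linear chirp sweeping most of the audio band (100 Hz -> 20 kHz).
f0, f1 = 100.0, 20_000.0
probe = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * T) * t ** 2))

# Toy "room": a direct path plus one echo 50 samples later at half amplitude.
h_true = np.zeros(128)
h_true[0], h_true[50] = 1.0, 0.5
recorded = np.convolve(probe, h_true)

# Regularised deconvolution: h = irfft( Y X* / (|X|^2 + eps) ).
L = len(recorded)
X = np.fft.rfft(probe, L)
Y = np.fft.rfft(recorded, L)
eps = 1e-3 * np.max(np.abs(X)) ** 2
h_est = np.fft.irfft(Y * np.conj(X) / (np.abs(X) ** 2 + eps), L)[:128]
```

The recovered taps come back slightly below 1.0 and 0.5 because the probe doesn't cover the whole band; the regularisation term eps keeps the division stable at frequencies where the probe has no energy.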
'Back in the day', when trying to do this in real time on a 10 MIPS DSP with moving objects, it was much more important to come up with better algorithms and shortcuts. Today the same job could easily be done with standard textbook methods on a processor a hundred times more powerful.
I see patents pop up all the time that describe things that are far from novel. Most of those patents are 'invented' by people with no real experience in the given fields, i.e. ideas that seem like earth-shattering discoveries to the uninitiated but are really just standard techniques used by properly skilled engineers.
I'm not saying that this iPhone app is bad/good, just that it is VERY unlikely to contain any actual improvements to the current state of the art (or the state of the art 20 years ago for that matter). I say this, because there is no real need to do anything new to achieve the results that they are claiming.
BTW, in the past I've worked on sonar/radar systems for air, ocean and rock. The biggest problem in 'noisy' environments is a lack of output level. Multipath isn't a major problem for a point-to-point (i.e. line-of-sight, shortest-path) ranging device, unless you're talking about waveguide shapes/sizes over long distances.