Music Listeners Test 128kbps vs. 256kbps AAC

notthatwillsmith writes "Maximum PC did double-blind testing with ten listeners in order to determine whether or not normal people could discern the quality difference between the new 256kbps iTunes Plus files and the old, DRM-laden 128kbps tracks. But wait, there's more! To add an extra twist, they also tested Apple's default iPod earbuds vs. an expensive pair of Shure buds to see how much of an impact earbud quality had on the detection rate."
  • by Anonymous Coward on Thursday May 31, 2007 @08:31PM (#19346105)
    I can tell the difference between 192 and 128 kbps. It's all in the treble, which sounds less bright at 128 kbps. It's very easy to detect lower bit rates if you concentrate on the treble.
  • by garett_spencley ( 193892 ) on Thursday May 31, 2007 @08:45PM (#19346227) Journal
    Despite what Apple charges for a set of its replacement buds, the earphones that come with 90 percent of the digital media players on the market are throw-away items--they're only in the box so you'll have something to listen to when you bring the player home.

    I'm a musician. I've recorded and released an album [cdbaby.com] (sorry for the shameless plug but it's only to put my post in context - honest). I own expensive studio earphones, have experience mixing and mastering etc.

    I don't own a 5th generation iPod but I do own an iPod Shuffle that has since stopped playing MP3s. It still works as a storage device and I still have the headphones. I held on to the headphones because I prefer them over all the other earbuds I have. They don't beat the studio headphones, but I would not consider them "throw-aways"; they're pretty good quality, and I've started using them with all of my portable devices. I would generally agree that most earbuds that come with CD players, and probably many other MP3 players, are of relatively low quality, but I was very impressed with the ones that came with the iPod Shuffle. I will never throw them away.
  • Re:Synopsis (Score:5, Interesting)

    by no_opinion ( 148098 ) on Thursday May 31, 2007 @08:46PM (#19346233)
    Not only that, but audio professionals typically do codec and compression tests using an ABX test.

    http://en.wikipedia.org/wiki/ABX_test [wikipedia.org]

    This would have been more interesting if they had used a statistically valid sample size and not only compared 128 to 256, but also to lossless.
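
    For illustration, here is a minimal Python sketch of the ABX protocol the comment refers to: on each trial X is secretly A or B, and a one-sided binomial test tells you whether the listener beat chance. The ask_listener helper is a hypothetical stand-in for real listener input, not part of any actual test harness.

        import random
        from math import comb

        def ask_listener():
            # Stub: replace with the real listener's answer ("is X the same as A?").
            return random.random() < 0.5

        def abx_session(trials=16):
            """Run ABX trials; count how often the listener identifies X."""
            correct = 0
            for _ in range(trials):
                x_is_a = random.random() < 0.5        # X is secretly A or B
                correct += (ask_listener() == x_is_a)
            return correct

        def p_value(correct, trials=16):
            """Chance of scoring >= `correct` out of `trials` by pure guessing."""
            return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

    With 16 trials, 12 or more correct gives p < 0.05, which is roughly the bar a statistically valid version of the Maximum PC test would need to clear.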
  • treble troubles (Score:4, Interesting)

    by fred fleenblat ( 463628 ) on Thursday May 31, 2007 @09:03PM (#19346353) Homepage
    Personally, what I find unsatisfying about compressed music is that the treble is the first thing to go: even at high bit rates, AAC and MP3 each seem to make cymbals, brushes, triangles, and synthetic tones in the high registers all sound equally like white noise.

    I found a tonality frequency setting in LAME that seemed to cure this problem, but neither iTunes nor ITMS seems to let you adjust or purchase based on this issue.

    Perhaps not everyone is sensitive to this, but maybe there are other settings or aspects of compression that other people are sensitive to and I am not... which leads to the possible conclusion that compressed music might be made better by personalizing each rip to the hearing response of the listener, rather than compromising on an average human hearing model.
  • Re:The results... (Score:4, Interesting)

    by Anonymous Coward on Thursday May 31, 2007 @09:20PM (#19346463)
    Many similar tests have shown that most humans have trouble detecting any change in audio quality above 160-192 kbps in MP3s. A quick web search will show that even "audiophiles" really can't discern the difference. 128 kbps has a clear "tinny" quality that disappears as the bit rate goes up. Based on this, I believe that 256 kbps tracks would never be accurately distinguished from the original CDs. Clearly that comparison should have been part of this test. The idea that "lossy" means "audible" has not been proven in any real-world tests.

  • Re:Synopsis (Score:4, Interesting)

    by QuietLagoon ( 813062 ) on Thursday May 31, 2007 @09:28PM (#19346529)
    The new standard for research methodology: find ten people at the corner Starbucks and ask them to help you with an "article" you're writing.

    This is what the Internet has reduced us to: it does not matter if it is correct, so long as it is delivered quickly.

  • Age and music choice (Score:4, Interesting)

    by Charles Dodgeson ( 248492 ) * <jeffrey@goldmark.org> on Thursday May 31, 2007 @09:29PM (#19346541) Homepage Journal
    The unexpected age result (that older people were better at telling the bitrates apart) may well be a consequence of music choice. Each subject picked their own music, and it is very clear that these quality differences are more noticeable in some types of music than in others. The first time I played an iTunes-purchased classical piece on a cheap component stereo system, I thought something was broken. I hadn't noticed a problem with most popular music, but I find some jazz and most classical digitized at 128 kbps unlistenable on my low-end component stereo.
  • Close but not quite (Score:2, Interesting)

    by progprog ( 1016317 ) on Thursday May 31, 2007 @09:32PM (#19346553) Homepage

    One of their key ideas was having the participants submit music they were intimately familiar with. Unfortunately, they didn't take the idea to its logical conclusion: testing each participant only with the songs he or she submitted. They could at least have published statistics on how participants performed on the songs they submitted.

    I find it easy to tell the difference between, say, lossless (or even 320 kbps) and 128/192 kbps when listening to music I'm very familiar with. But give me a set of random songs I've never heard before and I'd have a much harder time. You don't have to be an audiophile - you just have to be paying attention.

    My grievance with low bit rates and/or inferior sound equipment is simply that you won't know what you are missing. And I'm not one of those gold-plated cable audiophiles either -- my "serious" listening equipment is the Etymotics ER4s with a headphone amp. Used for lossless songs, of course.

  • Better for albums (Score:5, Interesting)

    by AlpineR ( 32307 ) <wagnerr@umich.edu> on Thursday May 31, 2007 @09:33PM (#19346555) Homepage
    The big difference that the 256 Kb/s + DRM-free option makes for me is that now I'll buy albums from iTunes Store. Previously I would use iTunes to buy one to three tracks if there was some artist I liked but didn't want a whole album from. But usually I order the CDs online for $8 to $14, rip them to AAC at 192 Kb/s, and put the disc away to collect dust on my overflowing CD rack. Now I can get higher quality cheaper and faster.

    Yes, ideally I would rip all my music to a lossless format. And ideally everything would be available on SACD at 2822 kHz rather than 44.1 kHz CDs. But that's just not practical with my 500+ album collection. It'd fill up my laptop's hard drive real quick and allow me to put only a fraction onto my iPod.

    I'm also disappointed that the article only tested the tracks on iPods with earbuds. Most of my listening is on a decent stereo system fed from my laptop. Ripping is about convenience, not portability. I only use my iPod when riding the Metro or an airplane. With all the outside noise the bitrate doesn't matter.

    And being DRM-free isn't just a matter of idealism. I get frustrated when I go to burn an MP3 CD for my car and discover that one of the tracks I selected is DRMed. Sure there are ways to get around it, but it's just not worth the bother.

    AlpineR
  • Re:The results... (Score:5, Interesting)

    by tackledingleberry ( 1109949 ) on Thursday May 31, 2007 @09:34PM (#19346569)
    The MPEG community uses a MUSHRA test* to judge the quality of new codecs and to decide on bitrates, etc. If there are n codecs under test, then the subject can switch A/B-style between n+2 different versions of the same piece of music: the n codec versions plus the reference (lossless) version, which appears twice, once labelled as the reference and once unlabelled among the unknowns. The subject does not know which unknown is which. The task is to rate (0-100) each of the unknown tracks based on how similar it is to the reference track. One important thing to remember is that the subject must rate similarity, rather than "quality" or anything else. A certain codec could, for instance, add a load of warm bass to a piece of music, making it more pleasurable (maybe) to listen to, but decreasing its similarity to the reference piece. The idea is that the subject should be able to pick the hidden reference out of the unknowns (giving it a score of 100) and then rate all of the other unknowns in terms of similarity to the reference. The codec with the highest score wins. This type of test would be carried out for each of a number of pieces of music, with a lot of listeners.

    * sorry, I've no good link- it's in ITU-R BS.1534-1 "Method for the subjective assessment of intermediate quality level of coding systems".
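
    As a rough sketch of the trial structure described above (not the standard's actual tooling), assuming a rate_similarity stub in place of the listener's 0-100 judgment:

        import random

        def mushra_trial(reference, codec_versions):
            """codec_versions maps codec name -> its decoded version of the clip.
            The unknowns are all n codec versions plus a hidden copy of the
            reference; the labelled reference is also available for comparison."""
            stimuli = dict(codec_versions)
            stimuli["hidden_reference"] = reference
            order = list(stimuli)
            random.shuffle(order)   # the subject must not know which is which
            # Rate each unknown 0-100 for similarity to the labelled reference;
            # a reliable subject should give the hidden reference ~100.
            return {name: rate_similarity(stimuli[name], reference) for name in order}

        def rate_similarity(clip, reference):
            return random.randint(0, 100)   # stub: replace with real listener ratings

    Averaging these scores per codec across many clips and listeners yields the final ranking the comment describes.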
  • Re:The results... (Score:5, Interesting)

    by Proudrooster ( 580120 ) on Thursday May 31, 2007 @09:40PM (#19346607) Homepage
    True, generally the only time you will notice the difference is if the track has a crowd clapping or drum-kit (hi-hat) cymbals. At 128k I think cymbals sound horrible and undefined. At 192k I start to be less annoyed.
  • Re:The results... (Score:5, Interesting)

    by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Thursday May 31, 2007 @10:00PM (#19346757) Homepage Journal

    We theorize that the Apple buds were less capable of reproducing high frequencies and that this weakness amplified the listeners' perception of aliasing in the compressed audio signal. But that's just a theory.

    Can anyone explain this to me? I know what aliasing is; basically it's when your top frequencies hit the Nyquist limit and kind of bounce back downward (how's that for scientific?), and I know what it sounds like. However, the last time I checked, you'd remove aliasing by cutting high frequencies out of the final analog wave with a lowpass filter. Unless something's radically changed since then, wouldn't the presumably lower-response Apple buds actually show less aliasing than the expensive ones that can better reproduce the higher (and unwanted) frequencies?

    Or have I been trolled into reasoning with audiophiles? If that's the case, let me know so I can pack up and go home.
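
    For what it's worth, the fold-back effect the parent describes is easy to demonstrate numerically; a minimal NumPy sketch (the rates here are arbitrary illustrations, nothing to do with the article's files):

        import numpy as np

        fs = 8000                              # sample rate; Nyquist limit is 4000 Hz
        t = np.arange(fs) / fs                 # one second of sample times
        tone = np.sin(2 * np.pi * 5000 * t)    # 5 kHz tone, 1 kHz above Nyquist

        spectrum = np.abs(np.fft.rfft(tone))
        print(np.argmax(spectrum) * fs / len(tone))   # ~3000 Hz: 5000 folds to fs - 5000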

  • In my studio... (Score:3, Interesting)

    by Khyber ( 864651 ) <techkitsune@gmail.com> on Thursday May 31, 2007 @11:18PM (#19347369) Homepage Journal
    I have NO accurate speakers. Instead I cut even more costs and just have a few separate stereos with different speakers hooked up. I use my high-quality Shure studio headphones for recording; then, when I'm done, I play it back on all three systems and note how it sounds on each, so I know what to expect over a wide range of speakers/amplifiers (car amp with house speakers, car speakers with house amp, etc.). I listen to it as if I'm hearing it out of Joe Sixpack's home stereo rig, so I know what the average consumer is likely to hear. Saves me money (pawn shops serve my purposes VERY well) and works out pretty damned well in the end. One thing I don't do is compress the fuck out of my music. No compressors, thank you. If my meter clips red, I lower the main until everything pans out. I hate metal music nowadays that SOUNDS LIKE IT'S IN ALL CAPS due to the compressors used to sustain a louder volume level without clipping the meter.
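
    As a sketch of that "lower the main" approach, i.e., one overall gain change so peaks never clip, instead of a compressor squashing the dynamics (the headroom figure is an arbitrary assumption, not anyone's actual setting):

        import numpy as np

        def lower_the_main(mix, headroom_db=0.3):
            """Scale the whole mix so its peak sits just below full scale (1.0)."""
            peak = np.max(np.abs(mix))
            if peak == 0:
                return mix
            target = 10 ** (-headroom_db / 20)   # e.g. -0.3 dBFS
            return mix * (target / peak)         # one gain change; dynamics intact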
  • by WidescreenFreak ( 830043 ) on Friday June 01, 2007 @12:43AM (#19347893) Homepage Journal
    Actually, I notice a huge difference between 128K and 192K when listening to classical music. For music that doesn't contain the brashness of percussion or brass instruments, the distortion at lower encoding levels is fairly tolerable; brass instruments (and cymbals) in particular, however, are unbearably distorted at 128K but come across rather cleanly above 192K. I've finally accepted that a variable rate between 224K and 320K is where I need to encode my tracks in order to make them as close to the original CDs as I can tolerate without using the actual CDs.
  • by markh1967 ( 315861 ) on Friday June 01, 2007 @04:29AM (#19348951)
    I've been very skeptical about these subjective tests ever since I read about one in 'New Scientist' many years ago.

    Back when the great audiophile debate was between CD and vinyl, New Scientist magazine put a load of audiophiles to the test by playing them the same piece of music from CD and then from vinyl, asking them to identify which version came from which medium and to describe the differences between them.

    What they didn't tell them was that they had simply played the same CD track twice, so any differences they thought they heard were purely imagined; that didn't stop them from describing in some detail how the two tracks varied, though.

  • Not quite... (Score:4, Interesting)

    by Ayanami Rei ( 621112 ) * <rayanami AT gmail DOT com> on Friday June 01, 2007 @11:46AM (#19352549) Journal
    Nearly all music is recorded and processed at 48 kHz. The Red Book standard unfortunately went with 44.1 (for some esoteric reason having to do with syncing with an analog video standard or something back in the 80s). So there's at least a down-conversion from 48 to 44.1, which isn't the end of the world, but you lose some fidelity in the process since it's really hard to do that "right" (and it's only recently that people have stopped using Lagrange interpolation and started using truncated sinc functions or polyphase filters to do a decent job without it taking 50 forevers).
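
    For reference, here is a one-liner version of that down-conversion using SciPy's polyphase resampler, one of the "decent job" approaches mentioned; the white-noise input is just a stand-in for real audio:

        import numpy as np
        from scipy.signal import resample_poly

        fs_in, fs_out = 48000, 44100             # the ratio reduces to 147/160
        x = np.random.randn(fs_in)               # one second of stand-in audio
        y = resample_poly(x, up=147, down=160)   # polyphase low-pass + rate change
        print(len(y))                            # 44100 samples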
  • Re:Not quite... (Score:5, Interesting)

    by omeomi ( 675045 ) on Friday June 01, 2007 @12:50PM (#19353555) Homepage
    The Red Book standard unfortunately went with 44.1 (for some esoteric reason having to do with syncing with an analog video standard or something back in the 80s).

    Huh, you're right...

    http://www.cs.columbia.edu/~hgs/audio/44.1.html [columbia.edu]

    I always assumed that 44.1kHz was chosen because they took the necessary (Nyquist) sample rate to be able to record up to 20kHz (40kHz), and added a bit for good measure. There's always been that rumor that the time length of a CD was chosen to be able to fit Beethoven's Ninth Symphony, so I always figured they knew they wanted 16 bit, and a length of about 74 minutes, and just picked the >40kHz sampling rate that would get them there with that fancy new "CD" technology that was being developed. I'm happy to know that we're all using 44.1kHz for an even stupider reason ;-).
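
    The arithmetic behind the video-recorder story on that Columbia page is tidy enough to check: early digital audio was stored on video tape with 3 samples per usable video line, and both TV standards land on exactly 44,100 samples per second.

        ntsc = 60 * 245 * 3   # 60 fields/s x 245 usable lines/field x 3 samples/line
        pal = 50 * 294 * 3    # 50 fields/s x 294 usable lines/field x 3 samples/line
        print(ntsc, pal)      # 44100 44100 -- one rate that fits either system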
