Music Listeners Test 128kbps vs. 256kbps AAC

Posted by CowboyNeal
from the perfect-pitches dept.
notthatwillsmith writes "Maximum PC did double-blind testing with ten listeners in order to determine whether or not normal people could discern the quality difference between the new 256kbps iTunes Plus files and the old, DRM-laden 128kbps tracks. But wait, there's more! To add an extra twist, they also tested Apple's default iPod earbuds vs. an expensive pair of Shure buds to see how much of an impact earbud quality had on the detection rate."
  • Hmmm (Score:0, Insightful)

    by Anonymous Coward on Thursday May 31, 2007 @07:26PM (#19346069)
    The people who care about this already have their minds made up. The people who don't, well, they don't.
  • Re:Of course.. (Score:3, Insightful)

    by Divebus (860563) on Thursday May 31, 2007 @07:32PM (#19346119)

    Test confirms the generally known (but debatable) points:
    1. Not many people can detect the improvement from a higher bitrate.
    2. Expensive earbuds are way better than the default ones.
    3. 128kbps AAC isn't all that bad.

  • Not worth it? (Score:4, Insightful)

    by Lost Engineer (459920) on Thursday May 31, 2007 @07:35PM (#19346133)
    FTFA

    we just don't think DRM-free tracks alone are worth paying an extra 30 cents a track for.
    Have fun buying your album again to play it on your cell phone's MP3 player.
  • Cost and quality (Score:3, Insightful)

    by eebra82 (907996) on Thursday May 31, 2007 @07:36PM (#19346149) Homepage
    "Eight of the 10 participants expressed a preference for the higher-bit rate songs while listening with the Apple buds, compared to only six who picked the higher-quality track while listening to the Shure's."

    I don't buy this. I have a friend who claims to be an audiophile - and he is - with sound equipment worth well over $40,000. He states that the more expensive and professional your gear is, the easier it is to spot low-quality music.

    So the article contradicts his statement, and I have to agree with him on this one. Logically speaking, professional speakers should produce results far closer to the source than ones that aren't.
  • Re:Synopsis (Score:5, Insightful)

    by Ohreally_factor (593551) on Thursday May 31, 2007 @07:56PM (#19346317) Journal
    The new standard for research methodology: find 10 people at the corner Starbucks and ask them to help with an "article" you're writing.

    Oh, and while we're at it, let's throw another variable into the mix! That'll make it even more scientifical! (And that's not even getting into any other variables that slipped in through carelessness.)

    Frankly, I wouldn't trust these MPC bozos to tell me if it was raining while I was urinating on their backs.
  • Re:The results... (Score:5, Insightful)

    by dangitman (862676) on Thursday May 31, 2007 @07:58PM (#19346323)

    We're all for DRM-free music, but 256Kb/s still seems like a pretty low bit rate--especially when you're using a lossy codec.

    Are they on crack? 256 Kbps is quite a high bitrate for a lossy CODEC. Their wording is also really bizarre. A low bitrate would be worse for a lossless track, because an uncompressed or lossless track, by definition, should have a much higher bitrate than a track compressed with a lossy CODEC.

    Do they even know what they are talking about?

  • Re:Synopsis (Score:3, Insightful)

    by timeOday (582209) on Thursday May 31, 2007 @08:06PM (#19346377)
    I agree the comparison to lossless would be interesting.

    As for ABX, it seems like the most demanding possible test, which I agree makes it attractive in theory. But in real life, the relevant question is "does this sound good?" without a back-to-back reference sample for comparison. I also keep my photo collection in .jpg. Can I see the JPEG distortion if I do a 1:1 blowup and carefully compare to a TIFF image? Sure. But at normal viewing size and distance, it just doesn't bother me, and that's my personal benchmark. Neither am I shelling out big bucks for a Blu-ray player, even though I can see DVD compression artifacts if I really try.

    Hearing capability is also very individual. I'll be the first to admit my hearing isn't great. Even a simple test on yourself is more valuable than a statistically large sample of people who aren't you.

    As for Apple's new offering, I wouldn't pay 3x for a difference that I personally would only maybe be able to detect in a back-to-back comparison that will never happen.
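
    If you do want to run that kind of self-test, a bare-bones ABX loop is only a few lines of Python. This is just a sketch: the play() callback is a placeholder you would replace with a call to your own player, and the clip labels are hypothetical, not anything from the article.

        import random

        def run_abx(play, trials=16):
            """Run `trials` ABX trials and report how many the listener got right."""
            correct = 0
            for i in range(1, trials + 1):
                x = random.choice("AB")            # hidden identity of clip X this trial
                play("A"); play("B"); play(x)       # listener hears A, then B, then the unknown X
                guess = input(f"Trial {i}: was X clip A or B? ").strip().upper()
                correct += (guess == x)
            print(f"{correct}/{trials} correct")
            return correct

        if __name__ == "__main__":
            # Placeholder playback -- swap in your own player (e.g. a subprocess call).
            run_abx(lambda clip: print(f"[play clip {clip} here]"))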

  • by pev (2186) on Thursday May 31, 2007 @08:17PM (#19346445) Homepage

    Logically speaking, professional speakers should produce results far closer to the source than the ones that aren't.

    Er, WTF? Audiophiles don't use 'professional' kit; they buy posh, shiny audiophile setups. If you want to listen to music as the recording engineer intended, buy a set of decent powered studio monitors for far less than the supposed audiophile setups cost. You'll be far closer to the intended sound than with the artificially tuned response you get from consumer gear. And yes, audiophiles are consumers too, just consumers with more cash to blow than common sense.

    As a bonus thought: instead of spending 10K on hardware, put some of that effort into the acoustics of your listening space.

    ~Pev
  • by raehl (609729) on Thursday May 31, 2007 @08:31PM (#19346551) Homepage
    I have a friend who claims to be an audiophile - and he is - with sound equipment worth well over $40,000.

    I can't tell if you're being sarcastic or not. Assuming you're not... having $40,000 in sound equipment says about as much about your ability to judge sound quality as spending $300 on Celine Dion tickets says about your taste in music.
  • Re:Synopsis (Score:3, Insightful)

    by Babbster (107076) on Thursday May 31, 2007 @09:04PM (#19346789) Homepage

    The sample-set should also include musicians and audiophiles into the mix. They are far more likely to give an objective opinion compared to people randomly pulled off the street.

    Bullshit. First of all, the testing procedure should be designed to eliminate subjectivity. That's the purpose of double-blind testing. Second, why would anyone but a musician or audiophile care what a musician or audiophile has to say on this issue? Are they experts on hearing? The latter group would be particularly useless since, in the main, "audiophiles" tend to equate spending more money to a better listening experience, even if they can't demonstrate the difference to the unwashed masses.

    No, unless you're specifically looking for the differences between the way an "audiophile" perceives different bitrates and how an average person does, such "experts" would be the last people you'd want involved in a test like this.

    Of course, the whole thing is pointless because even if their test setup was perfect the small sample size renders the results anecdotal at best.
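
    For what it's worth, you can put a rough number on "anecdotal". Assuming a listener who hears no difference picks at random, the chance of the quoted results (8 of 10 with the Apple buds, 6 of 10 with the Shure buds) arising by guessing is a one-line binomial tail. A quick sketch; the figures come straight from the article quote above, nothing else is assumed:

        from math import comb

        def tail(k, n, p=0.5):
            """Chance of k or more 'correct' picks out of n if every pick is a coin flip."""
            return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

        print(f"8+ of 10 by chance:   {tail(8, 10):.3f}")   # ~0.055 -- borderline at best
        print(f"6+ of 10 by chance:   {tail(6, 10):.3f}")   # ~0.377 -- indistinguishable from guessing
        print(f"80+ of 100 by chance: {tail(80, 100):.1e}") # a larger sample would be far more decisive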
  • by Anonymous Coward on Thursday May 31, 2007 @09:10PM (#19346831)
    Another idea that occurred to me about older people picking the higher bitrate: I'd guess the younger generation has likely grown up more accustomed to hearing lossy MP3s as "normal", whereas the older people probably even owned some vinyl in their lives.

    (not to say vinyl is the best quality among anything, or whatever, but it sure beats MP3)
  • by Anonymous Coward on Thursday May 31, 2007 @09:16PM (#19346879)
    It seems obvious to me that they do NOT know what they were doing. RTFA or not. (Guess which I chose?)

    10 subjects is hardly enough to prove ANYTHING, other than that they have no idea how to perform a remotely rigorous scientific analysis. You can expect 2 idiots, 3 to be biased, 4 to be honest, and 1 to lie.

    I think 100 would begin to scratch the surface. I'm not trying to be a snarky science dick; this is self-evident. This is epinions.com bullshit.

    Show me 10 people who have iPods and I'll show you 5 AOL users. (lol)
  • Re:The results... (Score:4, Insightful)

    by artisteeternite (638994) on Thursday May 31, 2007 @09:27PM (#19346959)

    They tested music ripped from CD and encoded by iTunes. That makes this test irrelevant to the music from the iTunes Store, since that music comes from the original masters (higher quality than the CD) and is encoded using customised settings (per-album or per-song), while iTunes uses some fairly general settings.

    So then it seems there would be an even more noticeable difference between 128Kb/s and 256Kb/s in this test. Which means: if, even with this lower-quality 128Kb/s rip, the testing showed the difference in quality isn't worth an extra 30 cents, doesn't it still hold that a higher-quality 128Kb/s track purchased from iTunes would be even closer in quality to the 256Kb/s track, and still not worth the extra 30 cents?

    If ripping a CD in iTunes at 128Kb/s creates a lower-quality track than purchasing a 128Kb/s track from the iTunes Store, then I think testing with CD rips actually adds more weight to the argument that the 256Kb/s tracks are not worth an extra 30 cents.

  • Re:Synopsis (Score:3, Insightful)

    by tchdab1 (164848) on Thursday May 31, 2007 @09:49PM (#19347115) Homepage
    Yes. They say...

    We'd be more excited if Apple increased the bit rate even further, or--even better--if they used a lossless format.

    But then they don't test their assumption.

    How ascientific. Excitement is all mental anyways.
  • Mod Parent Up (Score:2, Insightful)

    by Anonymous Coward on Thursday May 31, 2007 @09:49PM (#19347127)
    If you spend a huge amount of money on a particular thing, you have a vested interest in that thing. Much of the audiophile world falls into this category. One example of this is audiophiles who buy expensive power cords for their amps etc., which plug in to the wall. I'm not sure what they think is behind the wall...
  • by baka_vic (769186) on Thursday May 31, 2007 @10:02PM (#19347255)

    That's it. "best" Not "like the original", which is a poor substitute for "best".

    The problem is, "best" is subjective. One person's "best" is not the same as another's. When comparing against the original, we have a baseline to measure against.

    An example of this would be that different codecs preserve certain frequencies differently, and different people are more sensitive to changes in different frequencies. If it just happens that a codec doesn't preserve the particular frequencies you are sensitive to, then of course you will feel that codec is bad.

    Of course, I'm oversimplifying things. Factors like the music, the speakers/headphones, etc., all play a part in how you perceive the quality of a codec.

    So, basically, the idea is: when doing testing that's meant to be relevant to others, we need to test against the original. But if you're testing to see which codec is best for your own personal use, then yes, use the codec that sounds "best" to you.
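
    One crude way to do the "compare against the original" measurement objectively is to decode the lossy file back to WAV and look at the error against the source, band by band. This is only a sketch: "original.wav" and "decoded.wav" are placeholder names, and it assumes both files are 16-bit PCM at the same rate and sample-aligned (real lossy decodes usually need the encoder delay trimmed first).

        import wave
        import numpy as np

        def load_mono(path):
            """Read a 16-bit PCM WAV and mix it down to a mono float array."""
            with wave.open(path, "rb") as w:
                rate = w.getframerate()
                raw = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
                mono = raw.reshape(-1, w.getnchannels()).mean(axis=1)
            return mono, rate

        ref, rate = load_mono("original.wav")   # placeholder: the source rip
        test, _ = load_mono("decoded.wav")      # placeholder: the lossy file decoded back to WAV
        n = min(len(ref), len(test))
        ref, test = ref[:n], test[:n]

        freqs = np.fft.rfftfreq(n, d=1.0 / rate)
        spec_ref = np.abs(np.fft.rfft(ref))
        spec_err = np.abs(np.fft.rfft(ref - test))

        # Error energy relative to the signal in a few coarse bands.
        for lo, hi in [(20, 500), (500, 2000), (2000, 8000), (8000, 16000)]:
            band = (freqs >= lo) & (freqs < hi)
            ratio = spec_err[band].sum() / (spec_ref[band].sum() + 1e-12)
            print(f"{lo:>5}-{hi:<5} Hz: error/signal = {ratio:.3f}")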

  • by liftphreaker (972707) on Thursday May 31, 2007 @10:39PM (#19347477)
    This test, to a large extent, tells us about the output of the particular encoders and settings used, rather than about the difference between 128k and 256k encoding as such. For a really meaningful test, we would have to ensure that each song was encoded with exactly the same settings apart from the bitrate.

    I can create 256k MP3s that sound worse than 128k MP3s, both from the same WAV. There are a large number of options you can tweak in the encoding process that really affect the output.
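
    The quickest way to convince yourself of that is to encode one WAV several ways and listen. A sketch, assuming the lame command-line encoder is installed and "input.wav" is your own file; -b (CBR bitrate) and -V (VBR quality) are standard lame switches, but the particular combinations below are just examples:

        import subprocess

        SOURCE = "input.wav"   # placeholder source file
        JOBS = [
            (["-b", "128"], "cbr128.mp3"),   # constant 128 kbps
            (["-b", "256"], "cbr256.mp3"),   # constant 256 kbps
            (["-V", "2"],   "vbr_v2.mp3"),   # VBR, higher quality target
            (["-V", "6"],   "vbr_v6.mp3"),   # VBR, lower quality target
        ]

        for flags, out in JOBS:
            subprocess.run(["lame", *flags, SOURCE, out], check=True)
            print("wrote", out)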
  • Re:The results... (Score:4, Insightful)

    by Richard_J_N (631241) on Thursday May 31, 2007 @10:48PM (#19347545)
    It's a matter of personal taste, but I was given a pair of very expensive noise-blocking earbuds, and I *hate* them. Firstly, to block the noise, you have to jam them into your ears till it hurts. And then the "soundstage" is moved to directly between the earbuds, so the orchestra sounds like it is inside my head(*). Ugh. I tend to prefer lightweight in-ear headphones with a folding headband for travel (much more comfortable), and proper full-size headphones (not necessarily especially expensive) for non-mobile listening. On aircraft, I've given up on classical music completely.

    (*) If you're interested in this effect, try playing with sox and the "earwax" effect. Some samples are on the web too.
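
    For the curious: sox really does ship an "earwax" effect that applies that sort of headphone crossfeed. A one-liner sketch with placeholder file names (and, if I remember right, it expects 44.1 kHz stereo input):

        import subprocess

        # sox <in> <out> <effect>: "earwax" nudges the stereo image out of your head on headphones.
        subprocess.run(["sox", "in.wav", "out.wav", "earwax"], check=True)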
  • Re:The results... (Score:3, Insightful)

    by vought (160908) on Thursday May 31, 2007 @11:43PM (#19347895)
    Aah, you mean cymbals?


    Yeah. And vinyl.

    Sorry, SueAnn - you lost me when you misspelled vinyl.
  • Re:The results... (Score:2, Insightful)

    by FlyingCheese (883571) on Thursday May 31, 2007 @11:46PM (#19347909)
    That depends on what the files are being used for. Lower bitrate for a lossless encoder is bad for portable devices because it's harder on the processor, which kills battery life.
  • Re:AAC a standard? (Score:3, Insightful)

    by CronoCloud (590650) on Friday June 01, 2007 @12:45AM (#19348191)
    http://en.wikipedia.org/wiki/Advanced_Audio_Coding [wikipedia.org]

    Outside of Apple, the biggest supporter seems to be Sony. Both the PS3 and PSP (and newer Sony phones and Walkmans) can play it, and it's the default codec for the PS3's CD-ripping feature. So the DRM-less iTunes songs will benefit PS3/PSP owners quite a bit, allowing them to buy songs from iTunes and use them on their machines.

    AAC is also the standard audio codec used with MPEG-4/H.264 video.

  • Re:The results... (Score:5, Insightful)

    by Yoozer (1055188) on Friday June 01, 2007 @01:19AM (#19348305) Homepage

    The open reel tape used in the studio was recorded at either 15 or 30 ips.

    And you had to either be pretty wealthy to use virgin tape or hope the previous recordings had been properly wiped. It's an analog medium whose main advantage is that overdriving the inputs gives a nice effect ("warmth") - compared to early digital boxes, which just clipped and truncated instead of dithering. Every time you play or record tape, it degrades a little bit; surely you know of the multitracking on Bohemian Rhapsody that went on and on until the tape was nearly transparent.

    Furthermore, vinyl is lowpass-filtered at 16 kHz anyway. Gone are the harmonics. The higher fidelity is in the first few playings; after that, the medium degrades. What use is something that will only play properly 10-20 times?

    good CrO2 tape and a quality recording and playback deck and you really couldn't tell the difference between live and tape.

    Live sound is always a compromise: always an unpredictable venue, crowd, and response (and, in the worst case, a clueless mixing engineer or band member who decides that eleven is just not enough for his guitar); soundchecks just can't fix this.

    There is absolutely nothing wrong with digital. The whole 24/96 deal is a godsend because it means much more headroom. Having it in digital format means that you can play and record without ghosts from the past, without degradation. This led some engineers to add noise afterwards to get rid of the "sterility" - but what they call sterility is simply an unheard-of silence that couldn't be had previously. Engineers back in the day would have killed for the possibilities we have now.

    As for sounding plastic, I think you're confusing the medium with the mixing. Are you familiar with the term "loudness wars"?
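
    On the headroom point, the arithmetic is simple: linear PCM gains roughly 6 dB of theoretical dynamic range per bit. A back-of-the-envelope sketch, generic numbers only, not tied to any particular recording:

        def dynamic_range_db(bits):
            """Theoretical SNR of a full-scale sine over quantization noise: 6.02*bits + 1.76 dB."""
            return 6.02 * bits + 1.76

        for bits in (16, 24):
            print(f"{bits}-bit PCM: ~{dynamic_range_db(bits):.0f} dB")  # ~98 dB vs ~146 dB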
  • by TjOeNeR (1110041) on Friday June 01, 2007 @01:38AM (#19348423)
    I'm new at Slashdot (just pointing it out, so don't shoot me). I've been a fan of music all my life. Sometimes it's just obligatory to own a lost track from some obscure dance album, and all you can find is some lousy MP3 that's been chewed up and spat back out. But then again, you've got the track, so what's to complain about? Sure, it could have been better quality, but you still have it, and that's the point. If I rip my own CDs I use lame with -V2 because I like the quality; it sounds better to me than -V4. But sometimes you're just happy to hear the song at all, even if it's on the radio. And what shitty quality is that? (No DAB here.)
  • Re:The results... (Score:3, Insightful)

    by Viv (54519) on Friday June 01, 2007 @01:38AM (#19348431)

    Can anyone explain this to me? I know what aliasing is; basically it's when your top frequencies hit the Nyquist limit and kind of bounce back downward (how's that for scientific?), and I know what it sounds like. However, the last time I checked, you'd remove aliasing by cutting high frequencies out of the final analog wave with a lowpass filter. Unless something's radically changed since then, wouldn't the presumably lower-response Apple buds actually show less aliasing than the expensive ones that can better reproduce the higher (and unwanted) frequencies?

    The aliasing happens when you do the analog-to-digital conversion; if aliasing exists in the digital recording, it's going to be there irrespective of what kind of device is attached to the D/A converter on output.

    Also, the effects of aliasing won't be heard exclusively at the higher frequencies; the way aliasing works is that frequencies above the Nyquist frequency get "folded" about the Nyquist frequency. For example, if a component at fN+c is present when you sample (fN being the Nyquist frequency), it will get folded back to fN-c.

    Example: you're sampling at 44kHz (roughly CD audio's 44.1kHz), giving an fN of 22kHz. For some reason, power leaked into the A/D at 39kHz. Aliasing occurs, and that power gets "folded" back onto 44kHz - 39kHz = 5kHz. You'll more than likely hear that whether you have the stock buds or the OMGZORAWESOME $400 earphones. Of course, if it leaked in at 22.1kHz, it'd be folded back onto 21.9kHz, and there maybe only the better earphones would make it audible.

    Personally, my interpretation is, "That word you keep using... I do not think it means what you think it means." (this being directed at the article author, not at you.)
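
    To make the folding concrete, here is a tiny numerical sketch using the same numbers as the example above (a 39 kHz tone sampled at 44 kHz with no anti-alias filter). The energy lands at 5 kHz in the digital signal itself, before any earphones are involved:

        import numpy as np

        fs = 44_000      # sample rate from the example above (CD audio is really 44.1 kHz)
        f_in = 39_000    # "leaked" ultrasonic tone, well above the 22 kHz Nyquist limit
        n = 4_400        # number of samples, chosen so 5 kHz lands exactly on an FFT bin

        t = np.arange(n) / fs
        x = np.sin(2 * np.pi * f_in * t)     # sampled with no anti-alias filtering

        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        print(f"strongest component in the sampled signal: {freqs[spectrum.argmax()]:.0f} Hz")  # 5000 Hz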
  • Re:The results... (Score:5, Insightful)

    by cheater512 (783349) <nick@nickstallman.net> on Friday June 01, 2007 @05:34AM (#19349513) Homepage
    I wouldn't draw any conclusions. There were only 10 people tested.
  • Re:The results... (Score:1, Insightful)

    by Anonymous Coward on Friday June 01, 2007 @08:09AM (#19350473)
    I'm guessing that if you're jamming them into your ears until they hurt, then you're not doing it correctly. I have a pair of Etymotics and they have never hurt my ears. On the contrary: as the GP stated, they do more to protect your ears than anything, methinks - you're able to play sound at a lower volume, rather than having to crank it up to 11 to be able to discern any sort of detail.

    But that's just my dos centavos.
  • Re:The results... (Score:3, Insightful)

    by CaseyB (1105) on Friday June 01, 2007 @08:39AM (#19350805)
    You're preaching to the deaf. All the rational arguments in the world aren't going to convince the $400-volume-knob crowd that the godless computers aren't ripping the color, warmth, texture, flavour, and smell out of their wax-cylinder and vacuum tube audio.

    After all, just look at this chart: you can clearly see how digital audio is ultimately a series of ugly, jagged, sharp steps, while analogue audio is infinitely variable...
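
    For anyone who honestly believes the stair-step picture, here is a small sketch of what reconstruction actually does: band-limited (sinc) interpolation of the samples of a 1 kHz tone tracks the original smooth waveform closely. Illustrative numbers only; a real DAC's reconstruction filter is an oversampled/analog approximation of the same idea.

        import numpy as np

        fs = 44_100      # sample rate
        f = 1_000        # a 1 kHz tone, well below Nyquist
        n = 200          # a few milliseconds of samples

        k = np.arange(n)
        samples = np.sin(2 * np.pi * f * k / fs)      # the "steps" the skeptics point at

        # Band-limited (sinc) reconstruction, evaluated on a 10x finer time grid.
        t_fine = np.arange(n * 10) / (10 * fs)
        recon = np.array([np.sum(samples * np.sinc(fs * t - k)) for t in t_fine])
        truth = np.sin(2 * np.pi * f * t_fine)

        mid = slice(len(t_fine) // 4, 3 * len(t_fine) // 4)   # ignore edge truncation
        print(f"max deviation from the ideal sine (middle half): {np.abs(recon - truth)[mid].max():.1e}")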
  • by rising_hope (900951) on Friday June 01, 2007 @11:29AM (#19353207)
    This is really true. In a good classical music recording, the recording equipment needs to be very sensitive and precise. If you can't hear chairs shifting, performers breathing, and the reverb off the walls of the room, you're just not going to get the dynamic range the music requires. Either the recording sucks, your playback setup (speakers/playback unit) sucks, or the volume is set too low. Strings, in particular, depend heavily on that dynamic range. On the other hand, when you're listening to pop and other types of music, generally the more muted/padded the studio's room sound, the "cleaner" the overall result is considered. Two drastically different musical forms, to be sure.
