Music Listeners Test 128kbps vs. 256kbps AAC

notthatwillsmith writes "Maximum PC did double-blind testing with ten listeners in order to determine whether or not normal people could discern the quality difference between the new 256kbps iTunes Plus files and the old, DRM-laden 128kbps tracks. But wait, there's more! To add an extra twist, they also tested Apple's default iPod earbuds vs. an expensive pair of Shure buds to see how much of an impact earbud quality had on the detection rate."
  • The results... (Score:5, Informative)

    by Kagura ( 843695 ) on Thursday May 31, 2007 @07:27PM (#19346071)
    Apple's iTunes store--in partnership with EMI--is now hawking DRM-free music at twice the bit rate of its standard fare (256Kb/s vs. 128Kb/s) and charging a $0.30-per-track premium for it. We're all for DRM-free music, but 256Kb/s still seems like a pretty low bit rate--especially when you're using a lossy codec.

    So we decided to test a random sample of our colleagues to see if they could detect any audible difference between a song ripped from a CD and encoded in Apple's lossy AAC format at 128Kb/s, and the same song ripped and encoded in lossy AAC at 256Kb/s.

    Our 10 test subjects range in age from 23 to 56. Seven of the 10 are male. Eight are editors by trade; two are art directors. Four participants have musical backgrounds (defined as having played an instrument and/or sung in a band). We asked each participant to provide us with a CD containing a track they considered themselves to be intimately familiar with. We used iTunes to rip the tracks and copied them to a fifth-generation 30GB iPod. We were hoping participants would choose a diverse collection of music, and they did: Classical, jazz, electronica, alternative, straight-ahead rock, and pop were all represented; in fact, country was the only style not in the mix. (See the chart at the end of the story for details.)

    We hypothesized that no one would be able to discern the difference using the inexpensive earbuds (MSRP: $29) that Apple provides with its product, so we also acquired a set of high-end Shure SE420 earphones (MSRP: $400). We were confident that the better phones would make the task much easier, since they would reveal more flaws in the songs encoded at lower bit rates.

    METHODOLOGY

    We asked each participant to listen with the Apple buds first and to choose between Track A, Track B, or to express no preference. We then tested using the SE420's and asked the participant to choose between Track C, Track D, or to express no preference. The tests were administered double-blind, meaning that neither the test subject nor the person conducting the test knew which tracks were encoded at which bit rates.
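    [A minimal Python sketch of the blinding step described above, with hypothetical file names: a third party shuffles the label assignment and seals the key, so neither the tester nor the subject knows which label is which bitrate.]

        import random

        def blind_assignment(files):
            # files: {"128kbps": "song_128.m4a", "256kbps": "song_256.m4a"}
            labels = ["Track A", "Track B"]
            random.shuffle(labels)
            key = dict(zip(labels, files))  # label -> bitrate, sealed until after the session
            playlist = {label: files[bitrate] for label, bitrate in key.items()}
            return playlist, key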

    The biggest surprise of the test actually disproved our hypothesis: Eight of the 10 participants expressed a preference for the higher-bit rate songs while listening with the Apple buds, compared to only six who picked the higher-quality track while listening to the Shure's. Several of the test subjects went so far as to tell us they felt more confident expressing a preference while listening to the Apple buds. We theorize that the Apple buds were less capable of reproducing high frequencies and that this weakness amplified the listeners' perception of aliasing in the compressed audio signal. But that's just a theory.
    LEAVE IT TO THE OLD FOGEYS

    Age also factored differently than we expected. Our hearing tends to deteriorate as we get older, but all three of our subjects who are over 40 years old (and the oldest listener in the next-oldest bracket) correctly identified the higher bit-rate tracks using both the Apple and the Shure earphones. Three of the four subjects aged between 31 and 40 correctly identified the higher bit-rate tracks with the Apple earbuds, but only two were successful with the Shures. Two of three under-30 subjects picked the higher-quality tracks with the Apples, but only one of them made the right choice with the Shures. All four musicians picked the higher-quality track while listening to the Apples, and three of the four were correct with the Shures.

    Despite being less able to detect the bit rate of the songs while listening to the Shure SE420 earphones, eight of 10 subjects expressed a preference for them over the Apple buds. Several people commented on the Shures' ability to block extraneous noise. While listening to the SE420s, one person remarked "Wow, I'd forgotten that wood-block sound was even in this song." Another said "The difference between the Shure earphones and the Apple earbuds was more significant than the difference between the song encoded at 128Kb/s and the one recorded
    • Re:The results... (Score:5, Informative)

      by TheRaven64 ( 641858 ) on Thursday May 31, 2007 @07:52PM (#19346287) Journal

      It would be crazy to pay that premium if you're going to buy the entire album.
      DRM'd and DRM-free albums cost the same, so there is no reason to buy the DRM'd version if you're buying a whole album.

      In the end, Apple's move doesn't change our opinion that the best way to acquire digital music remains buying the CD:
      They tested music ripped from CD and encoded by iTunes. That makes this test irrelevant to the music from the iTunes Store, since that music comes from the original masters (higher quality than the CD) and is encoded using customised settings (per-album or per-song), while the iTunes application rips with fairly generic settings.

      In my own, completely unscientific tests, the 256Kb/s tracks are noticeably better. I upgraded a couple of albums yesterday and discovered I could hear the lyrics clearly in a few places where they had been obscured by instrumentals in one of them. The difference is only noticeable if you are specifically listening for it, though; I wouldn't be able to tell you the bitrate in a blind listening (hearing them one after the other, I probably could).

      Having the songs DRM-free is definitely worth it though. I stopped buying music from iTMS when I started owning multiple portable devices that could play back AAC, but not Apple DRM.

      • Re:The results... (Score:4, Insightful)

        by artisteeternite ( 638994 ) on Thursday May 31, 2007 @09:27PM (#19346959)

        They tested music ripped from CD and encoded by iTunes. That makes this test irrelevant to the music from the iTunes Store, since that music comes from the original masters (higher quality than the CD) and is encoded using customised settings (per-album or per-song), while the iTunes application rips with fairly generic settings.

        So then, it seems there would be an even more noticeable difference between 128Kb/s and 256Kb/s. Which means: if the research showed that the quality difference isn't worth an extra 30 cents even with this lower-quality 128Kb/s rip, then doesn't it still hold that a higher-quality 128Kb/s track purchased from iTunes would be even closer in quality to the 256Kb/s track, and still not worth the extra 30 cents?

        If ripping a CD to iTunes at 128Kb/s creates a lower quality track than purchasing a 128Kb/s track from the iTunes Store, then I think ripping from a CD to iTunes actually adds more weight to the argument that the 256Kb/s tracks are not worth an extra 30 cents.

      • Re:The results... (Score:5, Informative)

        by omeomi ( 675045 ) on Thursday May 31, 2007 @10:26PM (#19347405) Homepage
        That makes this test irrelevant to the music from the iTunes Store, since that music comes from the original masters (higher quality than the CD)

        Do you have any actual evidence that iTunes tracks are encoded from master tracks that are higher quality than CD (i.e. greater than 44.1kHz/16bit)? I have a hunch they're encoded from the same 44.1kHz/16bit file that you'd get if you ripped the CD yourself... In fact, I know they've done exactly this in at least one case: my own album. But I'm not signed to a major label, so it's possible things are different, but I doubt it...
        • Re:The results... (Score:5, Informative)

          by node 3 ( 115640 ) on Thursday May 31, 2007 @10:39PM (#19347471)
          When Jobs introduced the music store, he stated that this is exactly the case. It's not universal, but for some (many? a few? most? I have no idea) they went back to the original masters and used those for the iTunes Music Store.
        • Re:The results... (Score:5, Informative)

          by ElephanTS ( 624421 ) on Friday June 01, 2007 @02:16AM (#19348605)
          No, you're quite correct. The GP seems to think that they break out the 1/2" masters to re-encode for iTunes. They don't. Much music from the last 15 years was mastered digitally at 48 or 44.1kHz, and there isn't even a higher-quality master to be had.

          - an audio pro

      • by flyingfsck ( 986395 ) on Thursday May 31, 2007 @10:58PM (#19347627)
        "I could hear the lyrics clearly in a few places"

        Hmm, I guess that is only because you don't have low-skin-effect Monster cables on your earbuds... ;)

        Anyhoo, if you can hear a difference, then you haven't gone to enough heavy metal shows.
    • Re:The results... (Score:5, Insightful)

      by dangitman ( 862676 ) on Thursday May 31, 2007 @07:58PM (#19346323)

      We're all for DRM-free music, but 256Kb/s still seems like a pretty low bit rate--especially when you're using a lossy codec.

      Are they on crack? 256 Kbps is quite a high bitrate for a lossy CODEC. Their wording is also really bizarre. A low bitrate would be worse for a lossless track, because an uncompressed or lossless track, by definition, should have a much higher bitrate than a track compressed with a lossy CODEC.

      Do they even know what they are talking about?

      • by Anonymous Coward
        It seems obvious to me they do NOT know what they were doing.
        RTFA or not. ( Guess which I chose? )

        10 subjects is hardly enough to prove ANYTHING, other than that they have no idea how to perform a remotely rigorous scientific analysis.

        You can expect 2 idiots, 3 to be biased, 4 to be honest, and 1 to lie.

        I think 100 would begin to scratch the surface. I'm not trying to be a snarky science dick; this is self-evident. This is epinion.com bullshit.

        Show me 10 people who have iPods and I'll show you 5 AOL users.
    • Re:The results... (Score:4, Interesting)

      by Anonymous Coward on Thursday May 31, 2007 @08:20PM (#19346463)
      Many similar tests have proven that most humans have trouble detecting any change in audio quality above 160-192Kbps in MP3s. A quick web search will show that even "audiophiles" really can't discern the difference. 128 has a clear "tinny" quality that disappears as the bit rate goes up. Based on this, I believe that 256Kb/s tracks, compared with the original CDs, would never be accurately identified. Clearly this should have been a part of this test. The idea that "lossy" means "audible" has not been proven in any real-world tests.

      • Re:The results... (Score:5, Interesting)

        by Proudrooster ( 580120 ) on Thursday May 31, 2007 @08:40PM (#19346607) Homepage
        True, the only time you will generally notice the difference is if the track has a crowd clapping or drumkit (hi-hat) cymbals. At 128k I think cymbals sound horrible and undefined. At 192k I start not to be less annoyed.
        • by WidescreenFreak ( 830043 ) on Thursday May 31, 2007 @11:43PM (#19347893) Homepage Journal
          Actually, I notice a huge difference between 128K and 192K when listening to classical music. For music that doesn't contain the brashness of percussion or brass instruments, the distortion at lower encoding levels is fairly tolerable; however, brass instruments (including brass cymbals) in particular are unbearably distorted when 128K is used but come across rather cleanly when >192K is used. I've finally accepted that a variable rate between 224K and 320K is where I need to encode my tracks in order to make them as close to the original CDs as I can tolerate without using the actual CDs.
          • by Ceriel Nosforit ( 682174 ) on Friday June 01, 2007 @07:18AM (#19350065)
            Classical music usually has a wide dynamic range whilst most of the rest doesn't. The audio engineer working on a pop track runs everything through an audio volume-level compressor, bringing every sound to more or less the same volume level. In classical music it is quite normal to play certain things at the level of a whisper.

            This means that most popular music never uses the bits that would represent those low-volume whispers, confining itself to loud shouts and blaring synths, so a lot of the 'bandwidth' on a CD is wasted. Classical music, on the other hand, uses most of the available range thanks to sane, sparing use of audio level compressors. When that wide-dynamic-range signal is then data-compressed, it requires a lot more storage space than the popular music would.
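            [To make the dynamic-range point concrete, a toy numpy sketch - the tanh stage is a crude stand-in for a real level compressor - showing that compression shrinks the peak-to-RMS gap:]

                import numpy as np

                fs = 44100
                t = np.arange(fs) / fs
                # A 440 Hz tone whose volume swings between a whisper (0.05) and full scale:
                level = 0.05 + 0.95 * np.sin(2 * np.pi * 0.5 * t) ** 2
                dynamic = level * np.sin(2 * np.pi * 440 * t)
                compressed = np.tanh(4 * dynamic) / np.tanh(4)  # crude soft compression

                def crest_db(x):
                    # Peak-to-RMS ratio in dB; smaller means less dynamic range.
                    return 20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2)))

                print(round(crest_db(dynamic), 1), round(crest_db(compressed), 1))
                # the compressed signal's crest factor comes out several dB smaller
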
        • Parse error (Score:5, Funny)

          by Kadin2048 ( 468275 ) * <slashdot@kadin.xoxy@net> on Friday June 01, 2007 @12:07AM (#19347995) Homepage Journal
          At 192k I start not to be less annoyed.

          I don't think that means what you think it means.

    • Re:The results... (Score:5, Interesting)

      by tackledingleberry ( 1109949 ) on Thursday May 31, 2007 @08:34PM (#19346569)
      The MPEG community uses a MUSHRA test* to judge the quality of new codecs and to decide on bitrates, etc. If there are n codecs under test, then the subject can switch A-B style between n+2 different versions of the same piece of music: the n codecs plus a reference or lossless version, without knowing which is which. He can also switch to one which he knows is the reference track (so the reference track is in there twice, labelled in one case and not in the other). The task is to rate (0-100) each of the unknown tracks based on how similar it is to the reference track.

      One important thing to remember about the task is that the subject must rate similarity, rather than 'quality' or anything else. A certain codec could, for instance, add a load of warm bass to a piece of music, making it more pleasurable (maybe) to listen to, but decreasing its similarity to the reference piece. The idea is that the subject should be able to pick the reference track from the unknowns (giving it a score of 100) and then rate all of the other unknowns in terms of similarity to the reference. The codec with the highest score wins. This type of test would be carried out for each of a number of pieces of music, with a lot of listeners.

      * sorry, I've no good link- it's in ITU-R BS.1534-1 "Method for the subjective assessment of intermediate quality level of coding systems".
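      [A rough Python sketch of one MUSHRA-style trial as described - illustrative only; the real BS.1534 procedure also specifies a low-passed anchor signal, and get_rating here stands in for actual playback and a slider UI:]

        import random

        def mushra_trial(codec_versions, reference, get_rating):
            # codec_versions: {name: audio}; an unlabelled copy of the
            # reference is hidden among the unknowns.
            stimuli = dict(codec_versions, hidden_reference=reference)
            order = list(stimuli)
            random.shuffle(order)
            # For each unknown the subject may switch freely to the labelled
            # reference, then rates similarity to it on a 0-100 scale.
            return {name: get_rating(stimuli[name], reference) for name in order}

        # A reliable subject scores hidden_reference at or near 100; the codec
        # with the highest scores across many listeners and excerpts wins.
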
    • Re:The results... (Score:5, Interesting)

      by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Thursday May 31, 2007 @09:00PM (#19346757) Homepage Journal

      We theorize that the Apple buds were less capable of reproducing high frequencies and that this weakness amplified the listeners' perception of aliasing in the compressed audio signal. But that's just a theory.

      Can anyone explain this to me? I know what aliasing is; basically it's when your top frequencies hit the Nyquist limit and kind of bounce back downward (how's that for scientific?), and I know what it sounds like. However, the last time I checked, you'd remove aliasing by cutting high frequencies out of the final analog wave with a lowpass filter. Unless something's radically changed since then, wouldn't the presumably lower-response Apple buds actually show less aliasing than the expensive ones that can better reproduce the higher (and unwanted) frequencies?

      Or have I been trolled into reasoning with audiophiles? If that's the case, let me know so I can pack up and go home.

      • Re: (Score:3, Insightful)

        by Viv ( 54519 )

        Can anyone explain this to me? I know what aliasing is; basically it's when your top frequencies hit the Nyquist limit and kind of bounce back downward (how's that for scientific?), and I know what it sounds like. However, the last time I checked, you'd remove aliasing by cutting high frequencies out of the final analog wave with a lowpass filter. Unless something's radically changed since then, wouldn't the presumably lower-response Apple buds actually show less aliasing than the expensive ones that can b

      • Re:The results... (Score:5, Informative)

        by dmayle ( 200765 ) on Friday June 01, 2007 @02:40AM (#19348709) Homepage Journal

        I can explain this to you, but it will probably be easier to use an analogy to get the point across.

        We know that a listening device (in this case earphones) has a certain frequency response, and can introduce noise into the source. Some listening devices produce less noise, and have more accurate frequency responses. In terms of simple examples, think: (Speaker > Landline > Mobile > Tin-can phone) (I know, the phones have sound systems behind them that affect the sound, but you get my point.).

        Well, you know what? This is also true of encoding audio in a lossy format. So, instead of thinking about the aliasing, imagine that we are encoding into another format. In the case of the Apple phones, think of the transitions as (Source -> 128k AAC -> 192k MP3 (the Apple phones)) versus (Source -> 256k AAC -> 192k MP3 (the Apple phones)). Since additional noise is being introduced into the system, it should be pretty obvious which comes from the higher quality source. If we imagine the Shure headphones as having a perfect response, it will be (Source -> 128k AAC -> FLAC) versus (Source -> 256k AAC -> FLAC). There is no additional noise added, so you have to discern entirely based on the difference between the two AAC files.

        To get back to the issue of aliasing, aliasing is what happens when a signal of one frequency gets recorded in a medium without enough precision to record that frequency. The Nyquist limit says that to capture a given frequency, you need a sampling rate of at least twice that frequency (so a 5kHz sound can be captured in a 10kHz recording), but that assumes the recording is in phase with the sound, so it's a little more complicated than that. In any case, you can think of aliasing as the "beat" between two different frequencies. For example, if you listen to a sound at 3000 Hz and one at 3100 Hz at the same time, you will hear a 100 Hz "beat" that is the difference between the two. However, if you listen to the 3000 Hz frequency, and then the 3100 Hz frequency, you might not be able to tell the difference between the two. It's only when playing the two sounds together that you hear the beat (just like you won't notice aliasing unless you actually record it into another format).
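        [The 3000/3100 Hz example checks out numerically; a small numpy sketch: by the identity sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2), the sum is a 3050 Hz carrier inside an envelope whose magnitude peaks 100 times per second.]

            import numpy as np

            fs = 44100
            t = np.arange(fs) / fs  # one second at the CD sample rate
            mix = np.sin(2 * np.pi * 3000 * t) + np.sin(2 * np.pi * 3100 * t)

            # The mix never escapes the |cos| envelope of the 50 Hz half-difference,
            # which peaks twice per cycle -- the audible 100 Hz beat.
            envelope = 2 * np.abs(np.cos(2 * np.pi * 50 * t))
            print(np.all(np.abs(mix) <= envelope + 1e-9))  # True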

    • Re:The results... (Score:4, Informative)

      by squizzar ( 1031726 ) on Thursday May 31, 2007 @09:13PM (#19346859)
      There are so many factors involved with these things that it is very hard to make a judgement. A well-organised test would specifically select songs that do not compress well with lossy codecs. It is conceivable that music with a fairly even PSD (power spectral density) would not compress as well as one with a PSD that focusses more on certain areas, since the amount of information stored would have to be spread across a greater range. Hence the higher bitrate should sound better because more detail is preserved. Think speech quality (telephone, AM radio) vs. CD quality: it sounds like the original, but the detail is all missing. That's what that extra bitrate adds back in. 128K is acceptable to the majority of people out there. Some people are more sensitive: I know people who work in professional audio and they can't stand 128K. Personally, the vast majority of the time I can't tell. I generally use OGG at around 160Kbit, and when an MP3 gets played I do get a sense that it is not quite the same, but it's not conclusive - it could just be the encoder used.
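      [An illustration of the PSD point - a sketch using scipy, with toy stand-ins for "focused" and "even" spectra rather than real music:]

        import numpy as np
        from scipy.signal import welch

        fs = 44100
        t = np.arange(fs) / fs
        tonal = np.sin(2 * np.pi * 440 * t)                  # energy piled at one frequency
        flat = np.random.default_rng(0).standard_normal(fs)  # energy spread evenly

        for name, x in (("tonal", tonal), ("noise-like", flat)):
            _, pxx = welch(x, fs=fs)
            pxx = pxx / pxx.sum()
            # Spectral flatness = geometric mean / arithmetic mean; near 1 means even PSD.
            flatness = np.exp(np.mean(np.log(pxx + 1e-30))) / np.mean(pxx)
            print(name, round(float(flatness), 3))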

      The headphones do make a difference. I used the stock headphones with my portable music player, dropped them in/on/off something and broke them, and got a set of Sennheiser ear buds. They do not cost $400. The interesting thing is I perceived the same effect as the people in the test: a reduction in bass 'kick' but a clearer response. There is definitely a lot to be said for good quality listening equipment, but in that arena, proper over-the-ear headphones are the only way to go. They aren't that practical, though. The standard ear buds don't have the high-frequency response and clarity you can get from slightly more expensive ones. Spending as much on your ear buds as on the player itself seems a little excessive, though. You could probably get a larger-size player and decent headphones, use FLAC, and get better quality than 256K MP3 through a set of very expensive ear buds. Also, you are going to be even more upset when they end up in your beer or something.

      Finally, spotting mp3 artefacts is a strange thing. I'd never noticed any (at 128K) until someone pointed out the sound to me (usually it's cymbals). From then on, it became much clearer, and now I notice it a lot more (again it's mostly cymbals). Some songs are more susceptible than others, again I guess it is related to the make-up of the music.

      Essentially I have come to the conclusion that: OGG sounds better than MP3 (although some of the audio professionals I know think the opposite); ear buds can only go so far and break - not worth spending a fortune on, but worth spending a little; and if you _really_ want to hear stuff at the finest detail, you should invest in some good over-the-ear headphones. It's a different experience: the noise occlusion, crisp, clear sound, and defined and powerful bass. The main thing you notice is that strong bass does not corrupt the higher frequencies, giving a very different overall feel of the sound, one that is, in my opinion, quite unique.
    • Re:The results... (Score:5, Informative)

      by callmevinny ( 1101147 ) on Thursday May 31, 2007 @10:13PM (#19347329)
      This is a population proportion test.

      Is the quality level distinguishable such that the
      proportion of people detecting it is greater than
      a coin toss (p = 0.5)?

      The hypothesis:
      Null : p = 0.5 The quality is not distinguishable
      Alternative : p != 0.5 The quality is distinguishable

      This is, arguably, a two-tailed test. We wish to see if the
      null hypothesis is rejected.

      The test has a requirement that:

      np >= 5 and
      n(1-p) >= 5

      p = 0.5
      n = sample size = 10

      In both cases np = 10 x 0.5 = 5 so we barely make it.
      and have an approximately normal distribution.

      p_bar = sample proportion = 0.6 (the Shure sample)

      sigma_p_bar = sqrt(p(1-p)/n) = 0.158

      95% confidence interval: alpha = 0.05, two-tailed means
      use alpha/2 = 0.025 as rejection region on both ends of the
      normal distribution.

      z_0.025 = 1.96

      Right-tail rejection value:
      p_bar_alpha/2 = p + z_0.025 x sigma_p_bar = 0.5 + 1.96 x 0.158
      p_bar_alpha/2 = 0.809

      Left-tail:
      p_bar_alpha/2 = p - z_0.025 x sigma_p_bar = 0.5 - 1.96 x 0.158
      p_bar_alpha/2 = 0.190

      Decision rule:
      If p_bar is greater than 0.809 or less than 0.19 we can
      reject the null hypothesis and declare distinguishable
      quality.

      Since p_bar = 0.6 the null hypothesis is not rejected and
      there is no statistical evidence that the quality was
      distinguishable.

      For p_bar = 0.8 (the other sample, with the Apple buds)
      the null hypothesis is also not rejected. Just barely, though.

      The problem is the sample size is just too small to try
      and prove anything with any statistical validity.

      Although I suspect the article was written more as a
      case to generate ad revenue and perhaps push Shure
      headphones.
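      [The arithmetic above reproduces in a few lines of standard-library Python - just a check of the numbers, not new data:]

        from math import sqrt

        p0, n, z = 0.5, 10, 1.96                 # null proportion, sample size, z_0.025
        sigma = sqrt(p0 * (1 - p0) / n)          # 0.158
        lo, hi = p0 - z * sigma, p0 + z * sigma  # 0.190 and 0.809

        for correct in (6, 8):                   # the Shure and Apple results
            p_bar = correct / n
            print(p_bar, "reject null?", p_bar < lo or p_bar > hi)
        # 0.6 reject null? False
        # 0.8 reject null? False  (0.8 < 0.809 -- just barely)
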
  • Synopsis (Score:5, Informative)

    by sc0p3 ( 972992 ) <jaredbroad@gmailREDHAT.com minus distro> on Thursday May 31, 2007 @07:28PM (#19346075) Homepage Journal
    8/10 Picked High Bit Rate with Apple Headphones
    6/10 Picked High Bit Rate with Shure Headphones


    100% certainty that a 10-person sample set is too small for a yes-no experiment.
    • by phasm42 ( 588479 )
      The result isn't as useful without knowing how those that didn't pick the high bit rate were split up. Out of the 4 that didn't pick high bit rate with Shure headphones, how many picked low bit rate, and how many couldn't tell the difference?
    • Re:Synopsis (Score:5, Interesting)

      by no_opinion ( 148098 ) on Thursday May 31, 2007 @07:46PM (#19346233)
      Not only that, but audio professionals typically do codec and compression tests using an ABX test.

      http://en.wikipedia.org/wiki/ABX_test [wikipedia.org]

      This would have been more interesting if they had used a statistically valid sample size and not only compared 128 to 256, but also to lossless.
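      [For reference, a single ABX trial reduces to a forced choice; a minimal sketch, where play() and ask() are hypothetical stand-ins for audio playback and user input:]

        import random

        def abx_trial(play, ask, sample_a, sample_b):
            # The subject hears A, B, then an unknown X, and must say which X was.
            x_is_a = random.random() < 0.5
            play(sample_a)
            play(sample_b)
            play(sample_a if x_is_a else sample_b)  # X
            return ask() == ("A" if x_is_a else "B")

        # A subject who hears no difference is right about half the time over many
        # trials; a binomial test on the hit count gives the significance.
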
      • Re: (Score:3, Insightful)

        by timeOday ( 582209 )
        I agree the comparison to lossless would be interesting.

        As for ABX, it seems like the most demanding possible test, which I agree makes it attractive in theory. But in real life, the relevant question is "does this sound good" without a back-to-back reference sample for comparison. I also keep my photo collection in .jpg. Can I see the jpg distortion if I do a 1:1 blowup and carefully compare to a TIFF image? Sure. But at normal viewing size and distance, it just doesn't bother me, and that's my pers

      • Re: (Score:3, Insightful)

        by tchdab1 ( 164848 )
        Yes. They say...

        We'd be more excited if Apple increased the bit rate even further, or--even better--if they used a lossless format.

        But then they don't test their assumption.

        How ascientific. Excitement is all mental anyways.
    • Re:Synopsis (Score:5, Insightful)

      by Ohreally_factor ( 593551 ) on Thursday May 31, 2007 @07:56PM (#19346317) Journal
      The new standard for research methodology: finding 10 people at the corner Starbucks and asking them to help you with an "article" you're writing.

      Oh, and while we're at it, let's throw another variable into the mix! That'll make it even more scientifical! (And that's not even getting into any other variables that slipped in through carelessness.)

      Frankly, I wouldn't trust these MPC bozos to tell me if it was raining while I was urinating on their backs.
      • Re:Synopsis (Score:4, Interesting)

        by QuietLagoon ( 813062 ) on Thursday May 31, 2007 @08:28PM (#19346529)
        The new standard for research methodology: finding 10 people at the corner Starbucks and asking them to help you with an "article" you're writing.

        This is what the Internet has reduced us to: it does not matter if it is correct, so long as it is delivered quickly.

    • The sample set should also include musicians and audiophiles in the mix. They are far more likely to give an objective opinion compared to people randomly pulled off the street. Both know what to listen for and are well tuned to finding the distortion that is inherent in lossy compression.

      In my personal experience, I have listened to mp3 as well as other competing formats for over 10 years and it is very easy for me to discern the difference in bitrates. I wasn't able to do this at first, but I tuned m
      • Re: (Score:3, Insightful)

        by Babbster ( 107076 )

        The sample set should also include musicians and audiophiles in the mix. They are far more likely to give an objective opinion compared to people randomly pulled off the street.

        Bullshit. First of all, the testing procedure should be designed to eliminate subjectivity. That's the purpose of double-blind testing. Second, why would anyone but a musician or audiophile care what a musician or audiophile has to say on this issue? Are they experts on hearing? The latter group would be particularly useless

    • 100% certainty that a 10-person sample set is too small for a yes-no experiment.

      Probability that a bunch of editors will fuck up the statistical design of an experiment: 96.2% Seriously, you're writing for a magazine with decent readership, and you can't spend a week finding 90 more people at a coffeeshop who are willing to listen to music for 15 minutes apiece? Possibly get some statistical validity?

      I'd give this shit the "honorable mention" you-suck ribbon at a 5th grade science fair.

    • 100% certainty that a 10-person sample set is too small for a yes-no experiment

      Really? We are testing the hypothesis that people can tell 128k and 256k apart. If the hypothesis is false, then it will be 50/50 whether they get one right or not. The chances of getting 8 or more right out of 10 when an individual trial has probability 1/2 are C(10,8)(1/2)^8(1/2)^2 + C(10,9)(1/2)^9(1/2) + C(10,10)(1/2)^10. That's 56/1024, or about 5.5%. That's pretty good grounds for rejecting the null hypothesis that they were just guessing.

      For the other tes
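      [A standard-library check of that tail sum (math.comb needs Python 3.8+):]

        from math import comb

        n = 10
        tail = sum(comb(n, k) for k in (8, 9, 10)) / 2 ** n
        print(tail)  # 0.0546875, i.e. about 5.5%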

  • Test confirms the generally known (but debatable) points:
    1. Not many can detect the improvement of higher kbps
    2. Expensive earbuds are way better than the default ones.

    But what do you do with this fanboi? "One of the two people who expressed a preference for Apple's product told us 'It seemed like I got better kick from the bass.'" I hope he was completely deaf.
    • Hey.. even cheap earbuds [computerbrain.com] are better than the default ones.
    • Re: (Score:3, Insightful)

      by Divebus ( 860563 )

      Test confirms the generally known (but debatable) points:
      1. Not many can detect the improvement of higher kbps
      2. Expensive earbuds are way better than the default ones.

      3. 128kbps AAC isn't all that bad.

  • by emptybody ( 12341 ) on Thursday May 31, 2007 @07:33PM (#19346125) Homepage Journal
    That article doesn't provide enough data to draw any conclusions.
    Maybe they should go back to Statistics 101.
    • by jcgf ( 688310 )
      This is Maximum PC here. They're not about accurate statistics, they're about:

      1. convincing everyone that they absolutely need to be concerned with details down to the areal density of their hard disks when building PCs to sell more expensive drives

      2. complaining about inaccurate benchmarks, yet still using them to judge products

      3. selling the latter half of the magazine for ads

      4. ??????

      5. profit

  • Not worth it? (Score:4, Insightful)

    by Lost Engineer ( 459920 ) on Thursday May 31, 2007 @07:35PM (#19346133)
    FTFA

    we just don't think DRM-free tracks alone are worth paying an extra 30 cents a track for.
    Have fun buying your album again to play it on your cell phone's MP3 player.
    • The fact I couldn't play the music on my (Nokia) phone's built-in music player was the reason I stopped buying from iTMS. I'll probably start again now. 256Kb/s AAC is the same quality as the music I've ripped from CD, and the convenience is a huge incentive.
  • Cost and quality (Score:3, Insightful)

    by eebra82 ( 907996 ) on Thursday May 31, 2007 @07:36PM (#19346149) Homepage
    "Eight of the 10 participants expressed a preference for the higher-bit rate songs while listening with the Apple buds, compared to only six who picked the higher-quality track while listening to the Shure's."

    I don't buy this. I have a friend who claims to be an audiophile - and he is - with sound equipment worth well over $40,000. He states that the more expensive and professional your gear is, the easier it is to spot low quality music.

    So the article contradicts his statement, and I have to agree with him on this one. Logically speaking, professional speakers should produce results far closer to the source than those that aren't.
    • by FightCopyright ( 1098295 ) on Thursday May 31, 2007 @07:49PM (#19346265)
      Yeah, well I used to have a gf who claimed to be a Scientologist, and she gave over $40k to the church. She states that some alien is responsible for blowing up volcanos that created humans, but you know what... the bitch is just wrong.
    • You're assuming A) a minimum level of quality, so that "better" means "better in all areas", not just "better on average", and B) that it's impossible for deficiencies in one area to complement improvements in another. In short, while what you say is generally true, there are a lot of variables.

      For example, the data loss in AAC encoding is most noticeable at higher frequencies; it's possible that A) the Apple headphones have better clarity at higher frequencies than the Shure headphones, or B) that the Apple he
    • Re: (Score:3, Insightful)

      by pev ( 2186 )

      Logically speaking, professional speakers should produce results far closer to the source than the ones that aren't.

      Er, WTF? Audiophiles don't use 'professional' kit; they buy posh, shiny audiophile setups. If you want to listen to music as the recording engineer intended, buy a set of decent powered studio monitors for far less than supposed audiophile setups. You'll be far closer to the intended sound than any artificial response you get from consumer gear. And yes, audiophiles are consumers too, just consu

      • Kinda, sorta, but not really.

        Many recording engineers preview their mixes on the most atrocious speakers they can find to check that it still sounds OK on that kind of equipment. It will sound much better on better gear and they know it - but they know how 90% of people will listen to it and want to cater to that possibility. It's not about recapturing the way they intended it (that is in their head, not on a studio monitor or an audiophile rig). (Why else do you think pretty much all the CDs released have
      • Re:Cost and quality (Score:5, Informative)

        by adminstring ( 608310 ) on Thursday May 31, 2007 @09:19PM (#19346901)
        I agree with your statement that audiophiles don't use "professional" equipment, but I disagree with your statement that studio monitors will give you the sound that the recording engineer intended. This is because, as you imply, there is a distinct difference between accurate speakers and good-sounding speakers, and recording studios use accurate speakers, while consumers, even audiophiles, are better off with good-sounding speakers.

        If you're working in a recording studio, you want accuracy at all costs. You must hear everything distinctly, because you need to make important decisions based on what you hear. If "it sounds great" is all you are getting from your speakers, you won't make those tough decisions (more cymbals, different reverb, more compression on the vocals, or whatever); you'll just leave it alone and it won't be as good as it could be. However, those extremely accurate speakers that are perfect for recording studio use are NOT pleasant for casual listening. Everything is too crisp and sharp, and they will tend to make you want a break from all that detail.

        When I'm working on a mix in the studio, I want everything in very crisp detail so I can make judgments; when I'm listening to the final product, I want the music to "hang together" and present itself to me as a coherent whole. There are other differences between studio monitors and "normal" speakers (for example, consistency of frequency response) but this relatively subjective factor of detailed sound vs. coherent sound is one of the more important ones I have experienced.

        The recording engineer did not intend for you to listen to the music on studio monitors. Studio monitors are a tool with a specific use, and that use is not everyday listening. The attributes of a good studio monitor just don't match up with the attributes of a good audiophile speaker. This is why audiophiles buy certain kinds of speakers, and recording engineers buy other kinds. I've been lucky enough to own both kinds of speakers, and I've tried using them for the wrong purpose with less-than-stellar results. Mixes made on good-sounding speakers are inconsistent on other speakers, and music played through accurate speakers isn't as pleasant to the ear.
        • In my studio... (Score:3, Interesting)

          by Khyber ( 864651 )
          I have NO accurate speakers. Instead I cut even more costs and just have a few separate stereos with different speakers hooked up. I use my high-quality Shure studio headphones for recording, then when I'm done, I play it back on all three systems, and I note just how it sounds on each system, so I know over a wide range of speakers/amplifiers (from car amp to house speakers, car speakers to house amp, etc.) what I can expect to hear. I listen to it as if I'm hearing it out of Joe Sixpack's home stereo rig,
    • Re: (Score:3, Insightful)

      by raehl ( 609729 )
      I have a friend who claims to be an audiophile - and he is - with sound equipment worth well over $40,000.

      I can't tell if you're being sarcastic or not. Assuming you're not... ...having $40,000 in sound equipment says about as much about your ability to judge sound quality as spending $300 on Celine Dion tickets says about your taste in music.
  • Humidity?? (Score:5, Funny)

    by TopSpin ( 753 ) * on Thursday May 31, 2007 @07:36PM (#19346153) Journal
    Clearly these tests are inadequate, or at least they haven't disclosed enough information on the testing conditions. As any true audiophile knows, headphone performance is strongly affected by atmospheric conditions; I'll bet that if they had bothered to maintain proper water vapor saturation levels in the test facility, the complete inadequacy of the ear buds would have been obvious to everyone involved, because sensory receptors (hair cells) in the human ear only achieve full sensitivity under controlled conditions.

    No doubt they also failed to account for magnetic field alignment; the flaws of low bit rate reproductions are much easier to perceive when the listener is not aligned with Earth's natural axial vectors. The solenoidal force lines ruin the high band pass attenuation of any digital audio and will make both low and high bit rate reproductions equally poor, so naturally there wasn't a strong correlation among the test subjects.

    Idiots.

    </sarcasm>
  • Hardly a conclusive or thorough study - were it really double-blind, some subjects should have heard two 128 Kb/s tracks, while others heard two 256 Kb/s tracks, and there should have been a "no difference" option. Also, some types of music, or some particular musicians, make it much easier to discern difference between bitrates, but every subject listened to a different song.

    Personally, I can tell the difference between 128 and 256 versions of most Radiohead songs, where there are frequently numerous l
    • Re: (Score:3, Funny)

      by Anonymous Coward

      while Coldplay songs, which are more simplistic, make it harder to discern.
      I prefer Coldplay @ 0kb/s
  • by rueger ( 210566 ) on Thursday May 31, 2007 @07:43PM (#19346211) Homepage
    Judging by the comments from the six people who actually got to read the article I'm glad it got slashdotted before I wasted my time on it.
  • by garett_spencley ( 193892 ) on Thursday May 31, 2007 @07:45PM (#19346227) Journal
    Despite what Apple charges for a set of its replacement buds, the earphones that come with 90 percent of the digital media players on the market are throw-away items--they're only in the box so you'll have something to listen to when you bring the player home.

    I'm a musician. I've recorded and released an album [cdbaby.com] (sorry for the shameless plug but it's only to put my post in context - honest). I own expensive studio earphones, have experience mixing and mastering etc.

    I don't own a 5th-generation iPod, but I do own an iPod Shuffle that has since stopped playing MP3s. It still works as a storage device and I still have the headphones. I held on to the headphones because I prefer them over all other ear buds I have. They don't beat the studio headphones, but I would not consider them "throw-aways". I found they're pretty good quality and I began using them with all of my portable devices. I would generally agree that most ear buds that come with CD players and probably many other MP3 players are of relatively low quality, but I was very impressed with the ones that came with the iPod Shuffle. I will never throw them away.
    • by dn15 ( 735502 )
      I agree that they're at least decent. I won't pretend they're top-of-the-line, but I've heard much worse from the cheap-o end of the earbud/headphone market.
    • Re: (Score:3, Funny)

      Your shameless plug just sold at least one of your CDs. And ignore the AC's comment--they obviously didn't give your music a listen. You are most certainly a musician. I, however, am a drummer, which just means I hang out with musicians. *BADDA BOOM*
  • To our subjects' ears, there wasn't a tremendous distinction between the tracks encoded at 128Kb/s and those encoded at 256Kb/s. None of them were absolutely sure about their choices with either set of earphones, even after an average of five back-to-back A/B listening tests... We'd be more excited if Apple increased the bit rate even further, or--even better--if they used a lossless format.

    Ok, so by DOUBLING the bitrate, there was only a marginal increase in quality... to the point where on a good set of he
  • treble troubles (Score:4, Interesting)

    by fred fleenblat ( 463628 ) on Thursday May 31, 2007 @08:03PM (#19346353) Homepage
    Me, personally, what I find unsatisfying about compressed music is that the treble is the first thing to go, and even at high bit rates AAC and MP3 each seem to just make all cymbals, brushes, triangles, and synthetic tones in the high registers sound equally like white noise.

    I found a tonality frequency setting in LAME that seemed to cure this problem, but neither iTunes nor ITMS seems to let you adjust or purchase based on this issue.

    Perhaps not everyone is sensitive to this, but maybe there are other settings or aspects of compression that other people are sensitive to which I am not...leading one to the possible conclusion that compressed music might be made better by personalizing each rip to the hearing response of the listener rather than compromising on an average human hearing model.
    • Most of my friends seem to have quite a bit of hearing loss (all under 25). I don't seem to have much, though, and I've worked in steam turbine and gas turbine power plants (exceedingly loud places). If these test subjects were anything like my friends they have to turn up music so loud that it is impossible to tell the difference between a cell phone speaker and an Imax theater.
  • by cciRRus ( 889392 ) on Thursday May 31, 2007 @08:07PM (#19346379)

    Maximum PC did double-blind testing with ten listeners in order to determine whether or not normal people could discern the quality difference between the new 256kbps iTunes Plus files and the old, DRM-laden 128kbps tracks.
    Shouldn't that be a "double-deaf" test?
  • Age and music choice (Score:4, Interesting)

    by Charles Dodgeson ( 248492 ) * <jeffrey@goldmark.org> on Thursday May 31, 2007 @08:29PM (#19346541) Homepage Journal
    The unexpected age results (that older people were better at telling the difference between the bitrates) may well be a consequence of music choice. Each subject picked their own music, and it is very clear that these quality differences are more noticeable in some types of music than in others. The first time I played an iTunes-purchased classical piece on a cheap component stereo system, I thought something was broken. I hadn't noticed a problem with most popular music, but I find some jazz and most classical digitized at 128Kb/s unlistenable on my low-end component stereo.
  • Close but not quite (Score:2, Interesting)

    by progprog ( 1016317 )

    One of their key ideas was having the participants submit music they were intimately familiar with. Unfortunately, they should have taken the idea to its logical conclusion: testing each participant only with the song they submitted. Also, they could have at least published statistics on how participants performed on the song they submitted.

    I find it easy to tell the difference between say lossless or even 320 and 128/192 when listening to music I'm very familiar with. But give me a set of random s

  • Better for albums (Score:5, Interesting)

    by AlpineR ( 32307 ) <wagnerr@umich.edu> on Thursday May 31, 2007 @08:33PM (#19346555) Homepage
    The big difference that the 256 Kb/s + DRM-free option makes for me is that now I'll buy albums from iTunes Store. Previously I would use iTunes to buy one to three tracks if there was some artist I liked but didn't want a whole album from. But usually I order the CDs online for $8 to $14, rip them to AAC at 192 Kb/s, and put the disc away to collect dust on my overflowing CD rack. Now I can get higher quality cheaper and faster.

    Yes, ideally I would rip all my music to a lossless format. And ideally everything would be available on SACD at 2822kHz rather than 44.1kHz CDs. But that's just not practical with my 500+ album collection. It'd fill up my laptop's hard drive real quick and allow me to put only a fraction onto my iPod.

    I'm also disappointed that the article only tested the tracks on iPods with earbuds. Most of my listening is on a decent stereo system fed from my laptop. Ripping is about convenience, not portability. I only use my iPod when riding the Metro or an airplane. With all the outside noise the bitrate doesn't matter.

    And being DRM-free isn't just a matter of idealism. I get frustrated when I go to burn an MP3 CD for my car and discover that one of the tracks I selected is DRMed. Sure there are ways to get around it, but it's just not worth the bother.

    AlpineR
  • It only matters what you hear with your music and your listening conditions.

    I sometimes like to listen to classical on a cheapish low-end component stereo. At 128Kb/s, the quality is so noticeably bad for me as to make it pretty awful. But I don't have that problem with many other types of music under other listening conditions (car, iPod, computer speakers). So when I get a chance (I'm travelling now), I'll see what 256k does for me under the conditions that matter. The results may mean that I'll buy

  • I figured the best thing to use for evaluation was some music I was already very familiar with: "Hunky Dory" by David Bowie (1971). A very well-recorded album featuring all styles from straight-forward rock to lush orchestrations.

    (good deal too; the LP was just $9.99, though the individual tracks were $1.29)

    Must admit I was not disappointed. Previously I've rarely bought any music from iTunes - just my own CDs ripped to MP3 at 192 or 256K. The 256K AAC sounded great on the big speakers. Very clean and
  • by liftphreaker ( 972707 ) on Thursday May 31, 2007 @10:39PM (#19347477)
    This test, to a large extent, tells us about the output of the codecs rather than about the differences between 128k and 256k encoding. For a really meaningful test, we must ensure that each song was encoded using the exact same settings.

    I can create 256k MP3s which sound worse than 128k MP3s, both from the same WAV. There are a large number of customizations you can use in the encoding process which can really affect the output.
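    [The fix for anyone reproducing this is to hold every encoder setting constant and vary only the bitrate; a sketch assuming ffmpeg with an AAC encoder is on the PATH, with made-up file names:]

        import subprocess

        SOURCE = "track.wav"  # hypothetical ripped source
        for bitrate in ("128k", "256k"):
            subprocess.run(
                ["ffmpeg", "-y", "-i", SOURCE,
                 "-c:a", "aac", "-b:a", bitrate,
                 f"track_{bitrate}.m4a"],
                check=True,
            )
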
  • by flyingfsck ( 986395 ) on Thursday May 31, 2007 @10:51PM (#19347563)
    Both kinds of music were missing: They had neither Country nor Western music in the test.

    What is America coming to?
  • by johnpod ( 977795 ) on Thursday May 31, 2007 @10:52PM (#19347567)
    The Hydrogenaudio forums provide a lot of very good information, including well-designed double-blind comparisons between codecs and bit rates. See this page for details and links to other testing sites: http://wiki.hydrogenaudio.org/index.php?title=Listening_Tests [hydrogenaudio.org] All in all an excellent resource for any serious listener.
  • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Friday June 01, 2007 @01:15AM (#19348289) Journal
    First, to those who made comments about 128k encoding: you may be thinking of MP3. (Or maybe not, who knows...) From what I've heard, AAC, Vorbis, and AC3 all sound better than MP3 at similar bitrates.

    Second, I remember there was a comment on Slashdot a while back, before they actually came out with these, and I want to confirm... Apparently, CDs are recorded at a certain physical bitrate/frequency, and there are digital masters which are at a higher rate... it's late, so I'm not entirely coherent, but think of it as somewhat equivalent to the resolution of a DVD (quality of video is proportional to resolution (HD vs. normal) and bitrate). The point was that 256k may actually sound better than a CD, since it comes from a better source than a CD.

    If so, this whole test is BS, since they did not do a comparison of CD vs 128k (either iTunes-DRM'd or custom-ripped) vs 256k (un-DRM'd, from the iTunes store). Specifically, I'd want to hear 256k vs CD. But at the same time, I don't know if any iPod, or specifically the one they are using, would be able to handle the higher resolution. If not, you'd have to specifically check your soundcard, too.

    And finally, again vaguely remembering this from a Slashdot comment (so correct me if I'm wrong), but there was some comment about "The 30c may seem small, but imagine buying a whole album at these prices..." And I seem to remember that a full album is always $9.99. Still high compared to, for example, the minimum you'd pay for a FLAC-encoded album at MagnaTune, but if you're buying a whole album (and if that's accurate), you may as well just opt for un-DRM'd -- especially if it sounds better than a CD (which would probably cost more anyway.)

    But then, of course, I'd like to hear a much bigger study, with more rigorous controls in place. 10 people is just not enough, no matter how you set it up.

    And personally, if I had any money to spend on music, I'd be buying un-DRM'd stuff. But probably not from iTunes -- not till there's a web interface (or at least an API) that doesn't require me to download their client software. After all, if I'm buying a single file, the point of the client software is to implement the DRM, and if I buy the un-DRM'd version... Not that it shouldn't also work in the iTunes client, but it'd be nice for it to work natively in AmaroK, or just in a browser.
  • by kinglink ( 195330 ) on Friday June 01, 2007 @01:46AM (#19348463)
    "And as much as we dislike DRM, we just don't think DRM-free tracks alone are worth paying an extra 30 cents a track for.. It would be crazy to pay that premium if you're going to buy the entire album. We'd be more excited if Apple increased the bit rate even further, or--even better--if they used a lossless format."

    First off, I've yet to see a lossless format that WORKS. And by 'works' I mean easily convertible into MP3/AAC so it can be used on a portable player I already own. I've seen APE and FLAC; both are too much hassle, and the APE files I got were in Japanese. Here's a little fact: APE doesn't necessarily know how to correctly encode Japanese into ID3. End result? Buffer overflow, bad data. Oh, and if they work? They are larger than MP3s and AAC. Lossless codec means all the data has to remain; trust me, that's not a good thing when coupled with all the other little hassles it has.

    Second, it'd be crazy to spend 99 cents just to license your files so that you can only use them as Apple approves. Paying money to crack the music so I can use it as I want is illegal according to them, so why am I paying the money to get locked into their plan? However, DRM-free music is easily worth 1 dollar and 30 cents because it's mine (it's AAC, but I can live with that). I don't have to ask permission to use it in another player; I don't have to ask permission to convert it to a data format I choose. Personally I'm fine with 192 for most recordings; I'm not an audiophile, I'm just a listener. If you want the highest-grade data, or are an audiophile, you'll be buying CDs or fully lossless data; you're not going to fuck with iTunes anyway.

    BTW, their other idea is to get rid of the Apple iBuds and get quality receivers. Hint: isn't this what produced the less distinguishable results? I don't exactly see why getting a "higher quality" headset would be desirable if it creates less of a difference instead of more of a difference between two bit rates. Higher quality means I should hear everything. If you are asking people "can you hear the difference?", they already should be listening as hard as they can. The theory they try to explain it with doesn't make much sense. They are telling us 30 cents doesn't make a difference, but they are trying to sell us on dropping 400 bucks on noise-reducing headsets you can get for around 100 if you're clever. Hell, they are EARBUDS!!! So far I've noticed two things about earbuds: they are uncomfortable, and they are worthless compared to my headphones. If you're talking about noise-reducing earbuds, just be smart and buy a good set of headphones.

    Overall a throw-away article. I'm still only going to buy DRM-less music (I expect you out there to do the same; I'm assuming 30 cents won't kill you, but that's your choice), and I hope to start soon, if Apple ever puts the DRM-less music out and has the music I listen to (so far, not really). I'm assuming you all are still buying music like you were going to. The only minds this article changes are the cavemen hiding under rocks who still scream "ahhh, CDs bad", and they're still trying to figure out our compooters, so showing them the internet might not be smart just yet.
  • by markh1967 ( 315861 ) on Friday June 01, 2007 @03:29AM (#19348951)
    I've been very skeptical about these subjective tests ever since I read about one in 'New Scientist' many years ago.

    Back when the great audiophile debate was between CD and vinyl, New Scientist magazine put a load of audiophiles to the test by playing them the same piece of music from CD and then from vinyl, and asked them to identify which version was from which medium and describe the differences between them.

    What they didn't tell them was that they simply played the same CD track twice, so any differences they thought they heard were purely imagined; that didn't stop them from describing in some detail how the two tracks varied, though.
