NVIDIA Challenges Apple's iPad Benchmarks

MojoKid writes "At the iPad unveiling last week, Apple flashed up a slide claiming that the iPad 2 was 2x as fast as NVIDIA's Tegra 3, and that the new iPad would be 4x more powerful than Team Green's best tablet. NVIDIA's response boils down to: 'it's flattering to be compared to you, but how about a little data on which tests you ran and how you crunched the numbers?' NVIDIA is right to call Apple out on the meaningless nature of such a comparison, and the company is likely feeling a bit dogpiled given that TI was waving unverified webpage benchmarks around less than two weeks ago. That said, the Imagination Technologies (PowerVR) GPUs built into the iPad 2 and the new iPad both utilize tile-based rendering. In some ways, 2012 is a repeat of 2001 — memory bandwidth is at an absolute premium because adding more bandwidth has a direct impact on power consumption. The GPU inside NVIDIA's Tegra 2 and Tegra 3 is a traditional immediate-mode design, which means it's subject to significant overdraw, especially at higher resolutions. Apple's comparisons may be bogus, but the Tegra 3 bandwidth issues they indirectly point to aren't. It will be interesting to see NVIDIA's next move and what their rumored Tegra 3+ chip might bring."
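To make the overdraw point concrete, here is a minimal back-of-the-envelope sketch (all figures are illustrative assumptions, not measurements from either chip): an immediate-mode GPU pays memory bandwidth for every overdrawn fragment, while a tile-based deferred renderer like PowerVR's resolves visibility on-chip and writes each screen pixel roughly once per frame.

# Rough framebuffer write-bandwidth estimate. All inputs are
# hypothetical round numbers chosen for illustration only.

def fb_write_bandwidth_gb_s(width, height, bytes_per_pixel, fps, overdraw):
    """Approximate color-buffer write traffic in GB/s."""
    fragments_written = width * height * overdraw
    return fragments_written * bytes_per_pixel * fps / 1e9

# 1280x800 (a common Tegra-tablet resolution), 32-bit color, 30 fps.
# An immediate-mode GPU writes every overdrawn fragment...
imr = fb_write_bandwidth_gb_s(1280, 800, 4, 30, overdraw=3.0)
# ...while a tile-based deferred renderer writes each pixel ~once.
tbdr = fb_write_bandwidth_gb_s(1280, 800, 4, 30, overdraw=1.0)
print(f"immediate-mode: ~{imr:.2f} GB/s, tile-based: ~{tbdr:.2f} GB/s")
# immediate-mode: ~0.37 GB/s, tile-based: ~0.12 GB/s

The absolute numbers are toys; the point is the multiplier: whatever the average overdraw is, an immediate-mode design pays for it in scarce, power-expensive memory bandwidth, and the cost grows with resolution.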
  • by imagined.by ( 2589739 ) on Sunday March 11, 2012 @12:29PM (#39318537)
    The irony in this is that this is coming from a company that presented chunks of wood as their next-gen graphics cards.
    • Re:This is funny. (Score:5, Insightful)

      by arbiter1 ( 1204146 ) on Sunday March 11, 2012 @12:43PM (#39318617)
      Yeah, it's also ironic that the company claiming its chip is 4x faster was known to cheat on benchmarks years ago to make its systems look faster than they were.
      • Re:This is funny. (Score:5, Insightful)

        by DJRumpy ( 1345787 ) on Sunday March 11, 2012 @01:09PM (#39318755)

        One also has to consider that the older iPad 2 wiped the floor with the Tegra 3, so why would they think that twice the performance is 'meaningless'? Considering Apple typically doesn't play too loose with marketing statistics for metrics like battery life, real-world performance, etc., I don't find this to be a stretch. It will be interesting to see the real-world benchmarks when the hardware arrives.

        http://hothardware.com/Reviews/Asus-Eee-Pad-Transformer-Prime-Preview/?page=7 [hothardware.com]

        • Re:This is funny. (Score:5, Interesting)

          by Anonymous Coward on Sunday March 11, 2012 @01:26PM (#39318851)

          Wiped the floor with the Tegra 3? I'm sorry, but meaningless benchmarks are meaningless. I hold both Tegra 2 (Ice Cream Sandwich) and iPad 2 devices in my hands at this very minute, and I can tell you that there is essentially no noticeable difference between the two in terms of responsiveness or 3D performance from the point of view of the end user (and that's despite the iPad 2 having a significantly lower-resolution screen than the Tegra 2 device; the latter has 30% more pixels than the iPad 2 does).

          For the iPad 2 to "wipe the floor" with Tegra 3, it would have to be significantly slower than Tegra 2, and it isn't. Hence, these benchmarks can be nothing other than complete nonsense.

          • by jo_ham ( 604554 )

            But your highly scientific benchmark is?

            • Re:This is funny. (Score:5, Informative)

              by ozmanjusri ( 601766 ) <aussie_bob@nOspAm.hotmail.com> on Sunday March 11, 2012 @06:11PM (#39320741) Journal
              This is a Tegra 3 to A5 benchmark:

              http://hardware.slashdot.org/story/11/12/01/1518244/nvidias-tegra-3-outruns-apples-a5-in-first-benchmarks [slashdot.org].

              Note that in the Apple infographic, they're claiming a 2X speed advantage over the Tegra 3 with the iPad 2's A5. That doesn't appear to be true, though the A5 has some advantages.

              Whether the A5X has picked up enough ground to quadruple the Tegra's performance remains to be seen, and given Apple's debunked claims about the A5, it seems unlikely.

              • Re: (Score:3, Insightful)

                by otuz ( 85014 )

                According to the first link in your linked article, the iPad 2 beat the Tegra 3 by a very good margin: http://hothardware.com/Reviews/Asus-Eee-Pad-Transformer-Prime-Preview/?page=7 [hothardware.com]

                • It shows two OpenGL-based benchmarks where the iPad 2 was faster than the Asus Transformer Prime running Android 3.2 Honeycomb, not A5X vs. Tegra 3. Other benchmarks show different aspects of the two systems, including many where the positions are reversed. It's also worth noting that the 3D subsystem in Android was overhauled for ICS, which is likely to perform better than Honeycomb in the same test.

                  It may well be that the PowerVR GPUs are faster than NVidia's conventional GPUs, but that's still only one part

              • Re: (Score:3, Informative)

                by dwightk ( 415372 )

                soo... I'm guessing you just read the headline and skipped the: "*Update - 3/9/12: We became aware of an error in calculation for our GLBenchmark Egypt Offscreen results here and have since updated the chart above. As you can see, the iPad 2 boasts a significant performance advantage in this test versus the Tegra 3-powered Transformer Prime."

          • by poly_pusher ( 1004145 ) on Sunday March 11, 2012 @01:52PM (#39319009)
            Since you have both, could you run the GLBenchmark Egypt test for comparison?
          • Ummm... Are you testing this by playing the same game on both?

        • Re:This is funny. (Score:4, Insightful)

          by cheesybagel ( 670288 ) on Sunday March 11, 2012 @01:28PM (#39318869)
          Apple doesn't play too loose with marketing statistics? You're simply forgetting the late PowerPC era, when a water-cooled Apple system was slower than an air-cooled Intel PC.
          • Is there some commercial or ad you are referring to?

            • Re: (Score:3, Informative)

              by Narcocide ( 102829 )

              No, he's referring to a conspicuous weakness of the final lines of (quad-core, btw) G5 Macs compared to the company's own first competing Intel offerings. Another not-so-well-known weakness is that they also drew more juice under load than most full-sized refrigerators.

              • by Anonymous Coward

                I believe the parent is pointing to the fact that Apple doesn't appear to have put out any misleading marketing materials claiming that PPC was dominating Intel's chipsets on the G5. Is there some marketing benchmark out there that Apple lied about?

              • No, he's referring to a conspicuous weakness of the final lines of (quad-core, btw) G5 Macs compared to the company's own first competing Intel offerings. Another not-so-well-known weakness is that they also drew more juice under load than most full-sized refrigerators.

                The G5s were faster than the P4s of the same period, but the Core Duo was a significantly different animal from the P4s that were around when the G5 came out. I see no conflict here. Intel learned some things from the competition and stepped up their game with the Core series of processors.

            • Not sure about the water-cooled system comment, but they did add "4G" to the iPhone 4S with an OS patch... everyone knows the software didn't upgrade the hardware to LTE.

              • by jo_ham ( 604554 )

                I guess it depends what the carriers are calling "4G". I assume the menu displays whatever the carrier has termed 4G, since the 4S supports most of those "3.5G" technologies that have been rebranded as 4G.

                Although sometimes software upgrades can upgrade hardware: remember the enforced $1.99 charge for the 802.11n patch on some early systems that shipped with draft-n hardware but no software support? (Yes, yes, I know that's not what happened with the 4S.)

                • My point is they changed it with a software update, whereas the non-4G-but-marketed-as-4G Android phones have always been that way.

                  Sketchy marketing if you ask me.

                  (Whether or not AT&T was behind the decision doesn't matter, because Apple included it in THEIR software update.)

                  • AT&T convinced Apple that HSPA+ was 4G. That's all.
                    • by Belial6 ( 794905 )
                      Did the battery die when they did that? Because the Apple fanboys were insistent that 4G drained batteries faster, and that was why Apple didn't support it.
          • Apple in those days would also use a benchmark utility that was optimized for its chip, while the Intel/AMD-based computers it competed against just ran some program downloaded from the web with almost zero optimization for the CPU it was running on.
          • Apple doesn't play too loose with marketing statistics? You're simply forgetting the late PowerPC era, when a water-cooled Apple system was slower than an air-cooled Intel PC.

            That is a bogus point. Those water-cooled G5s were the standard shipping system. It's entirely fair to compare a stock Mac against a stock PC.

            The real "engineering" of the PPC vs. x86 comparison was in the benchmarking utility. IIRC, Apple used a very old version of ByteMarks that was compiled/optimized for the 486 even though it was running on a Pentium at the time. When ByteMarks was recompiled to optimize for the Pentium, the PPC advantage faded.

        • Re:This is funny. (Score:5, Interesting)

          by ozmanjusri ( 601766 ) <aussie_bob@nOspAm.hotmail.com> on Sunday March 11, 2012 @05:42PM (#39320525) Journal

          Considering Apple typically doesn't play too loose with marketing statistics

          What planet are you from?

          "After a legal complaint by 70-year-old William Gillis over the "twice as fast for half the price" statement found in iPhone 3G marketing, Apple responded with a 9-page, 32-point rebuttal—one paragraph of which included this overly harsh, but very telling, statement:

          Plaintiff's claims, and those of the purported class, are barred by the fact that the alleged deceptive statements were such that no reasonable person in Plaintiff's position could have reasonably relied on or misunderstood Apple's statements as claims of fact.

          In other words, if you believe what Apple says in an Apple ad, you are not a reasonable person.

          http://gizmodo.com/5101110/apple-no-reasonable-person-should-trust-their-marketing

        • by EEPROMS ( 889169 )
          One small problem with those benchmarks: they're all single-threaded. A single core in a Tegra 3 tablet is slower than a single core in an iPad, but when you add all four cores in the Tegra 3 tablet together, they work out faster. So testing single cores, yes, the Apple iPad 2 would win, but when you multi-thread the tests (you need ICS for that, plus the newest barely-out-of-beta benchmark tools), things look very different.
    • by fuzzyfuzzyfungus ( 1223518 ) on Sunday March 11, 2012 @12:48PM (#39318653) Journal

      The irony in this is that this is coming from a company that presented chunks of wood as their next-gen graphics cards.

      Hey! Some of us care about 'Green Computing' here, you earth-raping performance whore!

      • by Surt ( 22457 )

        Sure, wooden graphics cards don't use as much electricity, but they have to cut down trees to make them. All in all, the wooden graphics cards are actually worse for the environment.

      • Hey! Some of us care about 'Green Computing' here, you earth-raping performance whore!

        Which is why I only use Radeon HD 5xxx [wikipedia.org] cards.

    • Re:This is funny. (Score:5, Informative)

      by mTor ( 18585 ) on Sunday March 11, 2012 @02:03PM (#39319075)

      The irony in this is that this is coming from a company that presented chunks of wood as their next-gen graphics cards.

      I had no idea what you were talking about but a quick search showed this:

      http://semiaccurate.com/2009/10/01/nvidia-fakes-fermi-boards-gtc/ [semiaccurate.com]

      LOL... Nvidia faked a graphics board with a piece of PCB-looking plastic/wood that was screwed to the side of a PC with common hardware-store grade wood screws.

    • Re: (Score:3, Insightful)

      Comment removed based on user account deletion
      • by jo_ham ( 604554 )

        Actually I am looking at a Galaxy SII as my next phone.

        Currently using an iPhone 3GS and the upgrade choice (I'll buy the phone outright either way) is between the iPhone 4S and the Samsung Galaxy SII. Both are about even in the running so far.

      • Not true

        Steve shit into a box, put a dock connector on it, and even hardcore Apple zealots said fuck no. It was off the market in like six months.

        Can we be done with this meme? Sheesh.

  • by blahbooboo ( 839709 ) on Sunday March 11, 2012 @12:31PM (#39318543)

    Does using the tablet give smooth and instant responsiveness? At the end of the day, that's all that matters. Tegra 100 or iPad 100 won't matter if the OS that uses it isn't smooth and doesn't keep up with the user's interactions. Consumers just care about the experience; how they get there isn't of interest to anyone other than nerds.

    • What about iOS/Android gamers? Some of those games are pretty taxing and require pretty heavy-duty GPUs to run smoothly...

      • by UnknowingFool ( 672806 ) on Sunday March 11, 2012 @12:52PM (#39318675)
        From what we know, the A5X is pretty much the same as the A5 except it uses 4 PowerVR SGX543 cores instead of 2. This 4-core GPU configuration is the same as the PS Vita's, albeit the Vita uses a 4-core ARM CPU and drives a smaller 960 × 544 qHD screen. Comparatively, given the hardware, the Vita should beat the iPad for graphically intense games. For Angry Birds, it may not make much of a difference. At the present time, we don't know if Apple tweaked the A5X in other ways to boost game performance.
        • by samkass ( 174571 ) on Sunday March 11, 2012 @01:04PM (#39318725) Homepage Journal

          From what we know, the A5X is pretty much the same as the A5 except it uses 4 PowerVR SGX543 cores instead of 2. This 4-core GPU configuration is the same as the PS Vita's, albeit the Vita uses a 4-core ARM CPU and drives a smaller 960 × 544 qHD screen. Comparatively, given the hardware, the Vita should beat the iPad for graphically intense games. For Angry Birds, it may not make much of a difference. At the present time, we don't know if Apple tweaked the A5X in other ways to boost game performance.

          The "New iPad" also has twice as much RAM as a Vita (1GB vs 512MB), which could make a significant difference to practical gaming capability. As you note, as well, we have no idea what else Apple tweaked in the chip. Combined with the difficulty in an apples-to-apples comparison between two very different devices, it'll be hard to ever know how different the raw specs are. I think it's reasonable to say, though, that the "New iPad" will be excellent for gaming, as will a Vita.

      • by fuzzyfuzzyfungus ( 1223518 ) on Sunday March 11, 2012 @12:54PM (#39318681) Journal
        Given Apple's (relative) hardware homogeneity (certainly more than Android, though the steadily accumulating pile of older iDevices is inevitable and not going away just yet...), I assume that iOS games will largely tax the GPU as hard as possible but not try overshooting (just as console games generally push right to the edge, since the edge is a known quantity). It will be interesting to see whether the new 'retina display' iPads end up getting titles that sacrifice complexity in other areas to push native resolution, or whether we'll see a lot of 'well, it's smoothly upsampled, but fundamentally the same resolution as the iPad N-1' stuff...

        One thing that I don't think has come up yet, but would be interesting to see, is whether Nvidia tries to turn their disadvantage into a bonus by doing more aggressive power scaling...

        If, as TFA suggests, Tegra parts are held back by memory bandwidth because faster buses are power hungry, this suggests that they might be able to substantially speed-bump their parts when the device is on AC power or otherwise not power constrained. So long as the switchover is handled reasonably elegantly, that could turn out to be an advantage in the various HDMI dock/computer-replacement/etc. scenarios...
      • How about the much-derided Flash? I have a Sony Tablet S [wikipedia.org], which sports a Tegra 2, and it rarely gets used for gaming aside from the kid playing Angry Birds. But it does get used for YouTube and Comedy Central programming a lot. However, it stutters on most video playback when I visit the non-mobile, non-app YouTube site, or if, heaven forbid, I haven't scrolled precisely to where I can see only the video on The Daily Show/Colbert Report and none of the ads at the bottom.

        I'm still inclined to believe it'
        • Interesting; even the first-generation Snapdragon in my old Desire renders Flash video smoothly... Sounds like Tegra might have a few driver issues...?

        • by marsu_k ( 701360 )
          That's odd; I regularly watch the Flash-based versions of The Daily Show and Colbert on my Transformer (the original one, so Tegra 2) and it seems to play them just fine, fullscreen included. Then again, the version of Android that Asus ships is pretty much plain vanilla; I don't know how many "enhancements" Sony has added to it.
        • Which browser are you using? It probably is a browser performance issue.
    • So why did Apple show their benchmark?

    • I like my iPad 1, though it's sluggish. I am (too) anxiously awaiting two new iPads due this Friday. I even kept the running commentary on the announcement up in a browser window (yes, I felt a bit dirty afterward). When I heard the proclamation of the speed difference, that certainly seemed to imply 4-core processing. At least, that was in the realm of possibility (4 CPU cores and 4 GPU cores vs. the Tegra). I'm not convinced now that the claim is valid except for very special conditions with a hos

    • Consumers just care about the experience; how they get there isn't of interest to anyone other than nerds.

      True, but then why is Apple boasting about 2x, 4x, whatever?

      RT.

      • Consumers just care about the experience; how they get there isn't of interest to anyone other than nerds.

        True, but then why is Apple boasting about 2x, 4x, whatever?

        Ah, because performance is important, except when Apple doesn't have it?

    • Does using the tablet give smooth and instant responsiveness? At the end of the day, that's all that matters.

      Well, my experience with the iPad 2 is that it has frequent stalls and responsiveness problems, so I guess it matters, and I guess Apple has not addressed it. Good.

    • You mean numbers are meaningless because Apple customers can't count past 2? (Joke, guys, joke. Never mind that Apple marketing apparently can't count past two [cnn.com].)

      My take on it: 1) Apple needs Qualcomm's LTE modem, leaving little room for alternatives for that tablet model. 2) Apple wants to keep its margins up, and having only one processor design saves some money there. 3) Reality distortion goes into overdrive to distract Apple fans from the reality that the iPad WiFi model is actually slower than the new rou

    • Does using the tablet give smooth and instant responsiveness? At the end of the day, that's all that matters. Tegra 100 or iPad 100 won't matter if the OS that uses it isn't smooth and doesn't keep up with the user's interactions. Consumers just care about the experience; how they get there isn't of interest to anyone other than nerds.

      I don't do games on a tablet, but the ICS update significantly improved this on my TF101 (with Tegra 2). I own an iPad 2 too, and right now I prefer using the TF101 as my main tablet.

  • PowerVR, eh? (Score:4, Interesting)

    by msobkow ( 48369 ) on Sunday March 11, 2012 @12:44PM (#39318623) Homepage Journal

    I didn't know the PowerVR chips were still around. I had one of the early video cards based on the technology for my PC years ago. It worked ok, but that was long before things like shaders were an issue.

    Still, we are talking about a portable device, so I'd think battery life would be more important than having the latest whizz-bang shaders. And just look at all the grief people are having with the Android lineup due to shader differences between vendors.

    Thank God I focus on business programming, not video games. I've yet to hear of ANY tablet or smartphone having problems displaying graphs and charts.

    • Re:PowerVR, eh? (Score:4, Informative)

      by JimCanuck ( 2474366 ) on Sunday March 11, 2012 @12:57PM (#39318691)

      PowerVR GPUs are integrated into a lot of ARM processors used by many mobile companies. It's not a secret, but only Apple-related articles like to poke fun at it. PowerVR went from being a "brand name" to being the developer behind a lot of the graphics in everything from PCs to game consoles to HDTVs to cell phones.

      For that matter, Samsung had been integrating the two together since before the iPhone in any flavor came out, and continues to do so.

    • PowerVR hasn't shown their face on the PC side in years; but they are something of an 800lb gorilla in power-constrained GPU environments. Not the only player; but a lot of ARM SoCs of various flavors include them. Intel even enlisted them, rather than its in-house designs or the traditional PC guys, for a number of its very-low-power Atom parts...
    • PowerVR left the PC graphics business a long time ago; they have instead focused mainly on mobile devices. Like ARM, PowerVR does not make products but licenses its designs to others. The PS Vita uses the same graphics setup as the new iPad: 4 SGX543 cores. TI has used PowerVR in the last several generations of OMAP products.
    • I didn't know the PowerVR chips were still around.

      PowerVR has been doing very well in mobile devices for years. ARM's Mali is just starting to take away some market share from them, but before that most ARM SoCs came with a PowerVR GPU. Like ARM, they license their designs to anyone willing to pay, so if you wanted to make an SoC, a PowerVR GPU was an obvious choice - nVidia only uses the Tegra GPUs in their own chips; they don't license them. Now it's a less obvious choice, because you can license both CPU and GPU designs from ARM and the latest Mali st

      • by msobkow ( 48369 )

        3D data visualization does not require anything more complex than Gouraud shading, which does not rely on programmable shaders. You don't need ultra-realistic visuals to garner the gist of a 3D visualization. In fact, I'd argue that using fancy shaders on such data would skew the correlation between shading and depth/Z-coordinate information.

        Note that the problem being encountered by game developers is not an inconsistent or unreliable implementation of the built-in OpenGL shaders, but the p

  • by Anonymous Coward

    Bought a Galaxy Tab for the Tegra 2; was so utterly disappointed. The real-world performance was atrocious even compared to devices it was officially benchmarked better against. Sold it within 3 months. Still waiting on a great Android tablet...

  • by fuzzyfuzzyfungus ( 1223518 ) on Sunday March 11, 2012 @12:45PM (#39318633) Journal
    Just ask Intel about Apple's benchmarking strategy: For years, the finest in graphic design publicly asserted that PPC was so bitchin' that it was pretty much just letting Intel and x86 live because killing your inferiors is bad taste. Then, one design win, and x86 is suddenly eleventy-billion percent faster than that old-and-busted PPC legacy crap.

    Or ask Amazon: Amazon releases 'Kindle' e-reader device. His Steveness declares "Nobody reads". And now Apple is pushing books, newspapers, and their own pet proprietary publishing platform...

    Cheer up, emo Nvidia, all you have to do is sell Apple a Tegra N SoC, or even just the rights to include your GPU in their AN SoC, and Tim Cook will personally explain to the world that PowerVR GPUs are slow, weak, make you 30% less creative and are produced entirely from conflict minerals.
    • by TheRaven64 ( 641858 ) on Sunday March 11, 2012 @01:07PM (#39318745) Journal

      Just ask Intel about Apple's benchmarking strategy: For years, the finest in graphic design publicly asserted that PPC was so bitchin' that it was pretty much just letting Intel and x86 live because killing your inferiors is bad taste. Then, one design win, and x86 is suddenly eleventy-billion percent faster than that old-and-busted PPC legacy crap.

      This wasn't totally misleading. The G4 was slightly faster than equivalent Intel chips when it was launched and AltiVec was a lot better than SSE for a lot of things. More importantly, AltiVec was actually used, while a lot of x86 code was still compiled using scalar x87 floating point stuff. Things like video editing - which Apple benchmarked - were a lot faster on PowerPC because of this. It didn't matter that hand-optimised code for x86 could often beat hand-optimised code for PowerPC, it mattered that code people were actually running was faster on PowerPC. After about 800MHz, the G4 didn't get much by way of improvements and the G5, while a nice chip, was expensive and used too much power for portables. The Pentium M was starting to push ahead of the PowerPC chips Apple was using in portables (which got a tiny speed bump but nothing amazing) and the Core widened the gap. By the Core 2, the gap was huge.

      It wasn't just one design win, it was that the PowerPC chips for mobile were designs that competed well with the P2 and P3, but were never improved beyond that. The last few speedbumps were so starved for memory bandwidth that they came with almost no performance increase. Between the P3 and the Core 2, Intel had two complete microarchitecture designs and one partial redesign. Freescale had none and IBM wasn't interested in chips for laptops.

      • by epine ( 68316 ) on Sunday March 11, 2012 @04:05PM (#39319845)

        This wasn't totally misleading. The G4 was slightly faster than equivalent Intel chips when it was launched and AltiVec was a lot better than SSE for a lot of things. More importantly, AltiVec was actually used, while a lot of x86 code was still compiled using scalar x87 floating point stuff.

        This was totally misleading, for any informed definition of misleading.

        Just as there are embarrassingly parallel algorithms, there are embarrassingly wide instruction mixes. In the P6 architecture there was a three-uop/cycle retirement gate, with a fat queue in front. If your instruction mix had any kind of stall (dependency chain, memory access, branch mispredict), the retirement usually caught up before the queue was filled. In the rare case (Steve Jobs' favorite Photoshop filter) where the instruction mix could sustain a retirement rate of 4 instructions per cycle, x86 showed badly against PPC. Conversely, on bumpy instruction streams full of execution hazards, x86 compared favourably since it had superior OOO head-room.

        CoreDuo rebalanced the architecture primarily by adding a fair amount of micro-op fusing, so that one retirement slot effectively retired two instructions (without increasing the amount of retirement dependency checking in that pipeline stage). In some ways, the maligned x86 architecture starts to shine when your implementation adds the fancy trick of micro-op fusion, since the RMW addressing mode is fused at the instruction level. In RISC these instructions are split up into separate read and write portions. That was an asset at many lithographic nodes. But not at the CoreDuo node, as history recounts. Now x86 has caught up on the retirement side, and PPC is panting for breath on the fetch stream (juggling two instructions where x86 encodes only one).

        The multitasking agility of x86 was also heavily and happily used. It happens not to show up in pure Photoshop kernels. Admittedly, SSE was pretty pathetic in the early incarnations. Intel decided to add it to the instruction set, but implemented it double pumped (two dispatch cycles per SSE operation). Of course they knew that future devices would double the dispatch width, so this was a way to crack the chicken and egg problem. Yeah, it was an ugly slow iterative process.

        The advantage of PPC was never better than horses for courses, and PPC was picky about the courses. It really liked a groomed track.

        x86 hardly gave a damn about a groomed track. It had deep OOO resources all the way through the cache hierarchy to main memory and back. The P6 was the generation where how you handled erratic memory latency mattered more for important workloads (ever heard of a server?) than the political correctness of your instruction encoding.

        Apple never faltered in waving around groomed track benchmark numbers as if the average Mac user sat around and ran Photoshop blur filters 24 by 7. That was Apple's idea of a server workload.

        mov eax, [esi] ; load: read the shared memory location into a register
        inc eax ; modify: increment in the register
        mov [esi], eax ; store: write the result back to memory

        That's a RISC program in x86 notation. Whether the first and second use of [esi] amounts to the same memory location as any other memory access that OOO might interleave is a big problem. That's a lot of hazard detection to do to maintain four-wide retirement.

        Here is a CISC program in x86 notation. I can't show it to you in PPC notation, since PPC is a proper subset minus this feature.

        inc dword ptr [esi] ; one read-modify-write instruction; the memory hazard check can be done once

        Clearly, with a clever implementation, you can arrange that the hazard check against potentially interleaved accesses to memory is performed once, not twice. It takes a lot of transistors to reach the blissful state of clever implementation. That's precisely the story of CoreDuo. It finally hit the bliss threshold (helped greatly that the Prescott people and their marketing overlords were busy walking the green plank).

        Did Apple tell any of this story in vaguely the same way? Nooooo. It waved around one embarrassingly wide instruction stream that appealed to cool people until it turned blue in the face.

        Cure for the blue face: make an about face.

        Do I trust this new iPad 3 benchmark? Hahahahahaha. You know, I've never let out my inner six year old in 5000 posts, but it feels good.

    • by perpenso ( 1613749 ) on Sunday March 11, 2012 @01:42PM (#39318943)
      Having back-in-the-day written a fair bit of code that ran on both PPC and Intel x86, including a bit of assembly for both, I'd agree that Apple's comparisons were more a work of marketing than engineering, but PPC legitimately had its moments. Apple used phrases like "up to twice as fast," and there were certainly cases where this was true; however, these tended to be very specialized situations where the underlying algorithm played to the natural strengths of the PPC architecture. Such cases do not represent more general code and common algorithms. In general, my recollection of those days is that PPC had about a 25% performance advantage over x86. However, this advantage was nullified by Intel's ability to reach much higher clock rates (see the quick arithmetic sketch at the end of this comment).

      Overall, as a Mac game developer, it took a bit of effort to get Mac ports on par with their PC counterparts. One caveat here, emphasis on "port": the games tended to have been written with only x86 in mind. Contrary to popular belief, it is entirely possible to write code in high-level languages that favors one architecture over another, CISC or RISC, etc. So the x86 side may have had an advantage in that the code was naturally written to favor that architecture. However, a counterpoint would be that we did profile extensively and rewrite perfectly working original code where we thought we could leverage the PPC architecture. This included dropping down to assembly when compilers could not leverage the architecture properly. Still, this only achieved parity.

      Again, note this was back-in-the-day, games that were not using a GPU. So it was more of a CPU v CPU comparison.
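      To make the clock-rate point concrete, a quick arithmetic sketch (the 25% figures are the rough recollection above, not measured data): a per-clock advantage is exactly cancelled by an equal clock-rate deficit, since throughput is work-per-cycle times cycles-per-second.

      # Illustrative only: both 25% figures are the recollection above.
      ppc_work_per_cycle = 1.25 # ~25% more work per clock than x86
      x86_work_per_cycle = 1.00
      ppc_clock_ghz = 1.00 # hypothetical clocks chosen to show the cancellation
      x86_clock_ghz = 1.25 # Intel reaching ~25% higher clock rates

      ppc_throughput = ppc_work_per_cycle * ppc_clock_ghz # 1.25 (relative units)
      x86_throughput = x86_work_per_cycle * x86_clock_ghz # 1.25 (relative units)
      print(ppc_throughput == x86_throughput) # True: the advantage is nullified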
      • Out of curiosity, did the Mac sales bring in enough revenue to be worth all the costs of doing the porting?

        • Out of curiosity, did the Mac sales bring in enough revenue to be worth all the costs of doing the porting?

          I never saw financials, but the publisher continued to support the Mac throughout the PPC era. Now, in the x86 era, it's a little bit easier to do the port and the Mac market is many times larger. I think doing a Mac version of a game today is much less risky.

  • by volcanopele ( 537152 ) on Sunday March 11, 2012 @01:03PM (#39318721)
    The graphics capabilities of both the iPad (2nd and 3rd gen) and Tegra 3 tablets are more than adequate for playing high-quality games. At the very least, direct ports from the last console generation (like GTA III and The Bard's Tale) run just fine on both types of tablet. The problem is not the GPU of either Apple's or Google's tablets. The problem is money: how much are developers willing to spend on producing a game where the max selling price is ~$10 (I've only seen >$15 on the Final Fantasy ports)? This limits the scope of mobile games on either OS to pretty tech demos (like Infinity Blade), games designed for the lowest common device (think Gameloft's games), cheaply designed casual games that don't push the GPU in the slightest (Angry Birds, Jetpack Joyride), or ports of older games (FF Tactics, GTA III, The Bard's Tale).
    Don't get me wrong, I love gaming on my iPad (or at least I like it enough to have no desire to get a PS Vita), but there are few games that truly push the GPU because there is just no money in doing so. Until people are willing to pay $30-40 for a top-notch game on their mobile device, we won't see them.

    And before someone says that touchscreens are another factor: please, that's only a problem with ports (or developers who think touchscreen games are just like console or handheld games without thinking *cough* EA Sports *cough*). Fighting games that require you to hit a bunch of virtual buttons are wretched on a touchscreen device. Fighting games like Infinity Blade are pretty fun because they take advantage of the touch screen rather than treating it like a virtual controller. I actually did like GTA III, but I often had to find alternative ways to complete missions, because running and gunning was more difficult than using the sniper rifle.

  • Nvidia is stupid enough to take the bait. Good job.

  • The old iPad 2 is faster than the Tegra 3, according to Ars Technica, so it should make sense that the new iPad is even faster. I can't find the link, but I saw it a few days ago, maybe here in a comment.

    • Well, there was this Slashdot article [slashdot.org]; however, the summary is misleading: it claims the Tegra 3 beat the A5, but reading the article, it appears that the A5 beat the Tegra 3. For the most part, the two could not be compared, as the Tegra 3 ran Android benchmarks which cannot be applied to Apple and the A5.
    • The old iPad 2 is faster than the Tegra 3, according to Ars Technica, so it should make sense that the new iPad is even faster. I can't find the link, but I saw it a few days ago, maybe here in a comment.

      Why is everyone focused on benchmarks when they have been notoriously unreliable? There are games available for both platforms, e.g. Riptide GP, and there are YouTube videos of them being played side by side. The Tegra 3 version had more detail and was on a higher-resolution platform. Seems to me that Tegra 3 won that bout. There are other examples of applications on both platforms. Unless your favorite application is a benchmark, let's hear about real applications that exist and are being used by actual pe

  • by RalphBNumbers ( 655475 ) on Sunday March 11, 2012 @01:31PM (#39318887)

    We recently saw a graphics benchmark of the A5 vs. the Tegra 3 posted to /., and the A5 beat the Tegra in real-world-ish benchmarks, and more than doubled its score in fill rate. [hothardware.com]

    The A5X is basically just the A5 with twice as many GPU cores, and graphics problems tend to be embarrassingly parallel, so unless it scales up really poorly with those extra cores (due to shared bandwidth limitations or poor geometry scaling), it should have no problem beating the Tegra 3 by 2x, especially in terms of fill rate.

    And when you quadruple the number of pixels on your screen, as Apple just did, which measurement matters? Fill rate.
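    For scale, simple arithmetic on the published screen resolutions bears this out (the frame-rate and overdraw figures below are hypothetical placeholders, not benchmarks):

    # Pixel counts come from the published screen specs; fps and
    # overdraw are hypothetical placeholders for illustration.
    ipad2_pixels = 1024 * 768 # 786,432
    new_ipad_pixels = 2048 * 1536 # 3,145,728
    print(new_ipad_pixels / ipad2_pixels) # 4.0: four times the pixels to fill

    # Holding frame rate and overdraw constant, the required fill
    # rate scales linearly with pixel count:
    fps = 30 # hypothetical target frame rate
    overdraw = 2.5 # hypothetical average overdraw for a game scene
    print(new_ipad_pixels * fps * overdraw / 1e6) # ~236 Mpix/s of fill needed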

  • by TraumaHound ( 30184 ) on Sunday March 11, 2012 @01:33PM (#39318899)

    Considering that these graphics benchmarks from Anandtech [anandtech.com] show the iPad 2 GPU handily beating a Tegra 3, it doesn't seem like much of a stretch that the iPad 3 GPU should beat it further.

  • by milkmage ( 795746 ) on Sunday March 11, 2012 @02:14PM (#39319157)

    Tegra smegma, A5X, tri-dual-octo-quad-core, ACME RX3200 Rocket Skates, GigaHertzMegaPixelsPerSecond: my-asshole graphics specs are irrelevant.
    The ONLY thing that matters is how it works when it's in your hands.

    Does it drive 2048x1536 at least as well as the iPad 2? Yes or no.

    The way I see it, neither NVIDIA nor Apple can say anything about relative performance, because there is nothing using Tegra at that resolution. You can benchmark/extrapolate all you want, but all that matters is the real world.

    The "quad-core A5X GPU" damn well better be faster, because it's driving 4x as many pixels.

  • I wonder how well they both fare with heavy use of alpha blending. I know this will cause big problems for the tile-based PowerVR chips.
