What Makes Apple's Power Mac G5 Processor So Hot

An anonymous reader writes "58 million transistors can drive a lot of power. Apparently, Apple appreciated the choices IBM processor architects made when designing the 970 family. This article provides the 64-bit architecture big picture for the 970 family (A.K.A. the Power Mac G5) and the critical issues in IBM's 64-bit POWER designs, covering 32-bit compatibility, power management, and processor bus design."
This discussion has been archived. No new comments can be posted.

  • by WormholeFiend ( 674934 ) on Monday October 25, 2004 @11:07AM (#10620819)
    If it's so hot, maybe it's not cool enough.

  • by Anonymous Coward on Monday October 25, 2004 @11:07AM (#10620821)
    Insufficient cooling?
  • by Sensible Clod ( 771142 ) on Monday October 25, 2004 @11:08AM (#10620832) Homepage
    Basically, we took one of our superchips that go into superservers, with a gitastic cache and frontside bus, stripped it down a bit so we don't cut into our own market, and gave it a new name. Isn't that cool?
    • Basically, we took one of our superchips that go into superservers, with a gitastic cache and frontside bus, stripped it down a bit so we don't cut into our own market, and gave it a new name. Isn't that cool?


      For us Apple users - you bet!

      Well, except for the fact that the processor's so hot, but you know what I mean :)
    • by John Whitley ( 6067 ) on Monday October 25, 2004 @01:56PM (#10622593) Homepage
      Wow. Artful and elegant rebalancing of engineering tradeoffs for very different markets, reduced to a knee-jerk oversimplification in one fell swoop. And it got a +5 Insightful for that, to boot. Here are some reasons why the "stripped it down a bit so we don't cut into our own market" statement is ridiculous:
      1. If selling POWER series chips to Apple was going to undermine IBM's server business, IBM would have a hell of a lot more to worry about from the plain 'ol x86 market.
      2. IBM's POWER-series chips are designed to trade away ultra-high-speed clock rates in favor of low failure rates. The design rule (feature size on chip) is pulled back from the bleeding edge and other layout techniques are employed to make these processors rock solid, to avoid costly downtime from hardware failures in business servers.
      3. These days Apple is well known for its forays into the cluster computing space -- but that's a far cry from the sort of transactional throughput capacity of IBM's high-end servers. I.e. not the same markets!
  • 64 bit integers (Score:5, Insightful)

    by Xpilot ( 117961 ) on Monday October 25, 2004 @11:10AM (#10620847) Homepage
    From the article:
    ...64-bit processors also accelerate complex mathematical calculations through their ability to perform calculations directly on 64-bit numbers...
    Don't they mean 64-bit integers? Since floating point registers in most modern CPUs are 64-bit wide already.

    • Addressing (Score:5, Informative)

      by vlad_petric ( 94134 ) on Monday October 25, 2004 @11:24AM (#10620963) Homepage
      Most importantly, it can address up to 2^64 bytes of memory. And yes, that generally implies 64-bit integer GPRs. BTW, vector operations on x86 (MMX) also operate with 64-bit registers, but a 32-bit x86 can still only access 4G of memory (64G if you use the PAE extension "hack").
    • Re:64 bit integers (Score:5, Informative)

      by Waffle Iron ( 339739 ) on Monday October 25, 2004 @12:20PM (#10621614)
      Since floating point registers in most modern CPUs are 64-bit wide already.

      Actually, since most modern CPUs are x86 variants, the floating point registers are usually 80 bits wide (and have been since the 1981 introduction of the 8087).

      As far as "complex mathematical calculations" go, 64-bit integers aren't really that big a deal. It's pretty rare to need integers bigger than 2^32 but no bigger than 2^64; floating point usually handles big numbers more flexibly.

      The big deal with 64-bit CPUs is 64-bit address pointers and operations on them (which usually aren't more complex than adding and shifting).

  • For a moment I thought the question was about temperature, as in "What makes AMD processors so hot?", but the article barely touches on temperature issues (just a suggestion to check the liquid cooling -- could that be a hint?)
  • ob Memory (Score:5, Funny)

    by GillBates0 ( 664202 ) on Monday October 25, 2004 @11:16AM (#10620902) Homepage Journal
    18 exabytes ought to be enough for anybody

    -GillBates0, 2004.

    • Re:ob Memory (Score:5, Informative)

      by Mark_in_Brazil ( 537925 ) on Monday October 25, 2004 @12:00PM (#10621410)
      18 exabytes ought to be enough for anybody

      -GillBates0, 2004
      So where does it end?

      This page [plexos.com] makes a fairly convincing argument that 256 bit CPUs should be enough (basically, there would be no way to exhaust the amount of memory a 256 bit CPU could access, because the number of memory locations is about the same as the number of atoms in the universe).

      --Mark
  • by suso ( 153703 ) on Monday October 25, 2004 @11:18AM (#10620913) Journal
    They should make the G5 powerbooks have a teflon underside.

    "Turn it over, and you can cook dinner".
  • by trigeek ( 662294 ) on Monday October 25, 2004 @11:18AM (#10620914)
    One side-effect of 64-bit computing that I don't hear a lot of discussion about is the increase in the size of a pointer. A standard implementation of a linked list of integers will now be 50 to 100% larger (depending on whether you use 32- or 64-bit integers), simply because the pointers take up more space. If I bought a 64-bit system simply because it's the "best", but only got 1GB of RAM, I have less useful memory, because the pointers take up more of my physical RAM. Do the architects of these systems take this into account?
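    The pointer-growth arithmetic above can be sketched with a quick back-of-the-envelope model (hypothetical: it assumes 4-byte ints, natural alignment, and no allocator overhead):

```python
# Hypothetical model of a singly linked list node holding one int:
# struct { int data; node *next; }, padded to the alignment of its
# largest member. Assumes 4-byte ints and no per-allocation overhead.
def node_size(int_bytes, ptr_bytes):
    align = max(int_bytes, ptr_bytes)
    raw = int_bytes + ptr_bytes
    # round the raw struct size up to a multiple of its alignment
    return ((raw + align - 1) // align) * align

size32 = node_size(4, 4)              # typical 32-bit ABI: 8 bytes
size64 = node_size(4, 8)              # typical 64-bit ABI: 16 bytes
growth = (size64 - size32) / size32   # fractional growth per node
```

    With a 4-byte payload the node doubles from 8 to 16 bytes, which is the 100% end of the parent comment's 50-to-100% estimate; larger payloads dilute the pointer's share and give smaller growth.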
    • by Eccles ( 932 ) on Monday October 25, 2004 @11:30AM (#10621023) Journal
      In most meaningful, sizeable programs, pointers aren't a significant chunk of memory usage. (And for small programs, it doesn't matter.) I would think most modern apps consume most of their memory storing images, which aren't affected by the 32->64 change.

      Also, 64-bit pointers allow you to go from a max of 4GB of RAM to 16 billion GB, so the assumption is memory prices will keep dropping and you'll have much more than twice as much RAM on your 64-bit system anyway.
      • by aphor ( 99965 )

        I would think most modern apps consume most of their memory storing images, which aren't affected by the 32->64 change.

        You may forget that a bitblit, or bulk memory copy operation, can be accomplished in half the time using the same number of 64-bit registers as 32-bit registers. How do you think common operations like scaling and color transformation will be affected by the increased register size and memory IO path? In my experience (UltraSPARC real-world apps like GIMP and OpenSSL) most bulk i

    • After reading the previous post, I somehow got to thinking about viagra spam... "Increase The Size Of Your Pointer Now! Buy A 64-Bit PowerMac G5 And Get That Extra Size Today!"
    • A linked list is just a container, so yes, if you're using a linked list to store small bits of data, the pointers will eat a bunch of space.

      But then again, you don't have to use linked lists. The C++ standard template library has all sorts of wonderful containers that may be better suited than simple linked-lists. Java has some neat containers as well.

      • by SnapShot ( 171582 ) on Monday October 25, 2004 @12:06PM (#10621479)
        The C++ standard template library uses MAGIC PIXIE DUST and MOONBEAMS for the data structures. Java Containers, on the other hand, are held together with PURE THOUGHTS and KITTEN WHISKERS. That's how they avoid using pointers... ;-)

        • As an aside, did you know that Java has garbage-collection as a side-effect of using pure thoughts to hold its data structures together? And the kitten whiskers are what allows the JIT compiler to work so well.

          On the other side of the fence, C++'s usage of magic pixie dust and moonbeams is what gives it its flexibility -- and by that I mean the ability to program in the procedural style or object-oriented style or template style, etc. The downside is that magic and moonbeams aren't really...reliable. So yo
    • by Gopal.V ( 532678 ) on Monday October 25, 2004 @11:32AM (#10621047) Homepage Journal
      The instruction sets are also generally 64 bit... so you end up with less disk space as well :) .. add the extra space when loading these into RAM ..

      The notable exception is ARM's Thumb instruction set [embedded.com] (it's cool).

      The sad part: "my address bus is bigger than yours" is going the way of "I have more MHz than you" as parallel CPUs (multi-core or otherwise) become cheaper.. 90% of our tasks are better done in parallel than on a single fast chip. Hell, half of our tasks really don't need anything beyond a 300/400 MHz clock.
    • by HeghmoH ( 13204 ) on Monday October 25, 2004 @11:32AM (#10621049) Homepage Journal
      On Mac OS X, dynamic memory has a granularity of 16 bytes. That means that if your linked list node is only 8 bytes (4 bytes data, 4 bytes pointer) then it will get a 16-byte allocation anyway, and you'll waste the extra space. Using 16 bytes per node won't hurt at all. Allocation overhead makes the standard linked list a fairly wasteful way to store data anyway.
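      The 16-byte allocation quantum described above can be modeled with a small helper (a sketch; the 16-byte figure is the Mac OS X malloc behavior the parent comment describes, not something this snippet verifies):

```python
def alloc_size(request, quantum=16):
    # An allocator that hands out blocks rounded up to its quantum:
    # even an 8-byte linked-list node consumes a full 16-byte block.
    return ((request + quantum - 1) // quantum) * quantum
```

      Under this allocator, alloc_size(8) and alloc_size(16) both come to 16 bytes, so growing a node from 8 to 16 bytes on a 64-bit build costs nothing extra, exactly as the parent argues.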
      • One of the better approaches, though a little more complicated, is linked arrays. They have the nice advantage of being speedy to iterate through and can be extended as the room in the current arrays runs out.
    • Since the AMD64 instruction set can also run code compiled for 32 bit, it can also work with 32-bit pointers. However, any code optimized for AMD64 will probably use 64-bit pointers throughout.

      I guess if you plan on shelling out for 64 bit, you should plan on getting more RAM with it.
      • It wouldn't be safe to assume a pointer can be stored in 32-bits in 64-bit mode. The OS can map your data anywhere in the 64-bit space [which maps physically to a 40-bit bus, at least on my AMD64]. It's also not portable to do that. Even if your OS maps stuff within 4GB [e.g. top 32-bits are zero] another 64-bit box [sparc, ppc, etc..] may not [and likely would not] do that.

        If you build in 32-bit mode [e.g. -m32] you lose the major benefits of the 64 which is namely the extra registers.

        Tom
    • A standard implementation of a linked list of integers will now be 50 to 100% larger

      That's assuming each node has a 'next' pointer. That's how they teach it in CS classes, but in the real world it makes more sense to use an index into an allocator. It requires cleverness from the language and library designers, of course, but there isn't necessarily a size increase.
    • Re: (Score:2, Interesting)

      Comment removed based on user account deletion
      • by spaceyhackerlady ( 462530 ) on Monday October 25, 2004 @11:56AM (#10621372)
        On Silicon Graphics 64-bit machines, this was solved by having two ABIs, one 32-bit, one 64-bit.

        Sun machines with UltraSPARC processors do this too. They run 64-bit kernels, and applications are 32 bits. Unless you actually need 64 bits, in which case you feed the compiler some different options and it makes a 64-bit executable for you.

        Both Solaris and Linux do it the same way. When you build a kernel for Linux on an UltraSPARC machine the part about kernel support for different kinds of executables offers you (among other options) 32 bit ELF (which you need), 64 bit ELF (optional), and Solaris emulation (never tried it...).

        ...laura

    • If I bought a 64 bit system, simply because it's the "Best", but only got 1GB of RAM, I have less useful memory, because the pointers take up all of my physical RAM. Do the architects of these systems take this into account?

      Umm, duh? You increase your footprint by a small amount (say 10%) to get access to a flat 16EB address space. With memory running $200/G for the good stuff, why not?

    • by Anonymous Coward
      This is an excellent observation. Please forgive me if this seems offtopic -- my primary development platform is PC, however I've got quite a bit of experience working on 64 bit.

      From a developer perspective (at least in software I've worked on) programmers often do not realize that a vast number of projects took dependencies on 4 byte pointers.

      Structure alignment, pointer size, etc. have plagued all projects I've worked on, even after a cursory perusal of the code indicated "it looks good". Lots of bugs
  • by green pizza ( 159161 ) on Monday October 25, 2004 @11:27AM (#10620992) Homepage
    Does anyone have the numbers to compare how many watts of power the G5 uses vs a similar AthlonXP or AMD64? Ie, I'd like to see how a 2.0 or 2.5 GHz G5 compares to a 2.0 or 2.5 GHz AMD processor.
    • by Anonymous Coward on Monday October 25, 2004 @11:59AM (#10621392)
      It's hard to make a comparison because for some reason IBM/Apple doesn't want to release official measurements for power usage. Which is strange, because they should do really well in that measurement compared to AMD and Intel. Here are AMD's and Intel's official numbers:

      2.4 GHz A64- 89 W
      3.4 GHz P4(Northwood)- 89 W
      3.4 GHz P4(Prescott)- 103 W

      Best guess on the 2.5 GHz G5 is around 65 W.
    • First off, power consumption for a processor is P = 0.5*C*V^2*F, where C is capacitance, V is voltage, and F is frequency. So if you can find the capacitance you can get a pretty good estimate of the processor's power needs.
      From Intel's datasheets [intel.com]: P4 90 nm (prescott) 520-550 models 84 W of design power (what Intel recommends the heatsink be able to pull).
      550-560 models 115 W of design power.

      From AMD's datasheets [amd.com]: design power (measured with max amplitude and nominal voltage) is 89 watts for all power ratings
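      The P = 0.5*C*V^2*F estimate above is easy to plug numbers into (the capacitance and voltage below are made-up illustrative values, not measured figures for any real chip):

```python
def dynamic_power(c_farads, v_volts, f_hz):
    # Classic CMOS dynamic-power approximation: P = 1/2 * C * V^2 * f
    return 0.5 * c_farads * v_volts ** 2 * f_hz

# Hypothetical effective switched capacitance of 24 nF at 1.3 V, 2.5 GHz:
watts = dynamic_power(24e-9, 1.3, 2.5e9)   # ~50.7 W
```

      Because power scales with V^2, dropping the core voltage is the biggest lever: the same hypothetical chip at 1.0 V would dissipate about 30 W at the same clock.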
  • Nitpicking... (Score:5, Insightful)

    by Tristandh ( 723519 ) on Monday October 25, 2004 @11:27AM (#10620998)
    Capable of addressing an astronomical 18 billion GB, or 18 exabytes, of memory,

    I know the first 2 digits are 1 and 8, but 2^64 bytes is still 'only' 16 exabytes...
    • Re:Nitpicking... (Score:5, Informative)

      by TonyZahn ( 534930 ) on Monday October 25, 2004 @11:54AM (#10621342) Homepage
      If you had read the sidebar, you'd see the article addressed the confusion between base-2 and base-10 number names, and introduced the prefixes "kibi-" and "gibi-", which should be familiar to /.ers.

      When they say 18 exabytes, they're talking base-10, otherwise they would have used the "gibi-" equivalent (exibytes?)
      • When they say 18 exabytes, they're talking base-10, otherwise they would have used the "gibi-" equivalent (exibytes?)

        If they meant base 10, they should have used existing conventions (10^x) rather than trying to shoehorn this kibi stuff into usage. Common usage is that an exabyte is 2^60 bytes. Nobody in their right mind uses "exbibyte" for that. The standard is something that NIST is trying to force on the community without any sort of support -- they're exceeding their mandate.

    • Not really; it's almost an abuse of language to call 1024 bytes a kilobyte. Look at the National Institute of Standards and Technology for the definition of international units [nist.gov]. 1024 bytes is a kibibyte rather than a kilobyte. Kilo really means 1000.

      So, 2^64 = 1.84467e19 bytes = 1.84467e16 KB = 1.84467e13 MB = 1.84467e10 GB = 18.4467e09 GB = 18 billion GB
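      The conversion above is easy to check directly (decimal prefixes throughout, with the binary figure for comparison):

```python
total_bytes = 2 ** 64                 # full 64-bit address space
decimal_gb = total_bytes // 10 ** 9   # "billion-byte" gigabytes: ~18.4 billion
binary_eib = total_bytes // 2 ** 60   # exactly 16 EiB in binary units
```

      So the article's "18 billion GB / 18 exabytes" is the decimal reading, while the grandparent's 16 exabytes is the binary (exbibyte) reading; both describe the same 2^64 bytes.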

  • In the past few years, PC fanatics have gone to great lengths with extreme clocking and tweaking to push speeds to the limit, and modders have run into and pretty much conquered heat issues along with other roadblocks -- technology that has since been adopted by PC and PC-part manufacturers. I'm no Mac expert, but even thumbing through the latest magazines I have yet to stumble upon a "Mac" mod or a how-to on making them extreme. Is this due to proprietary or licensing issues? Or can a Mac be reverse engineered
    • Fans and cooling (Score:5, Interesting)

      by 2nd Post! ( 213333 ) <gundbear.pacbell@net> on Monday October 25, 2004 @12:07PM (#10621483) Homepage
      Your iMac G5 has two fans. Not much space left for additional cooling, really, without interfering with the current cooling setup.

      Your PowerMac G5 has nine fans. Again, not much space left for additional cooling without interfering.

      And get this, the PowerMac G5 already uses a liquid cooling setup. The only possible additional mod is to hook the current setup to a reservoir and radiator on the outside of the case, as the inside already has a radiator per CPU and something like a 120mm fan per CPU.
    • Oh, and I found this [centurytel.net]. What's interesting is that this mod is to make the Mac even quieter, and not more performant.
  • For the record... (Score:2, Redundant)

    by lamz ( 60321 ) *
    18 billion GB, or 18 exabytes, huh? That ought to be enough for everyone.
  • by renoX ( 11677 ) on Monday October 25, 2004 @12:12PM (#10621526)
    The *BIG* thing in "64-bit computing" for x86 is not in fact the 64-bitness but the doubling of the number of GPRs!

    As the PPC instruction set is sane (x86 is not, urgh), besides the extra instructions needed for 64-bit computing, there are very few differences between a PPC running 64-bit code and a PPC running 32-bit code -- unless of course you have an app which needs more than 4GB of memory or does lots of 64-bit integer calculation..
  • It used to be... (Score:3, Interesting)

    by Balthisar ( 649688 ) on Monday October 25, 2004 @12:32PM (#10621737) Homepage
    It used to be that Mac fanatics would be proud about how little current the PPC used -- and consequently how little heat it gave off -- when compared with the Intel-style architecture. Guess we can't make that argument any more.

    Now that G5's are liquid cooled, it makes me wonder if a 2.5GHz G5 is *really* a 2.5GHz G5, or if it's an overclocked 1.8GHz chip. You know, overclockers really pump things up with cool liquid cooling stuff. What's the fastest a 2.5GHz G5 could run with a traditional cooling system, like a fan and heatsink?

    Oh, one more thing before I'm modded as a troll: my G4 PowerBook is my 8th Macintosh. What I'm asking is genuine curiosity.
    • Re:It used to be... (Score:4, Informative)

      by dhovis ( 303725 ) on Monday October 25, 2004 @02:51PM (#10623176)

      I have mod points, but I figured I'd answer your question instead.

      When you do a die shrink, you can lower the power required at a particular clock rate, or you can run at a higher clock speed with the same power dissipated. So when IBM went from 130nm for the 970 to 90nm for the 970FX, the top clock speed went up from 2GHz to 2.5GHz. Other than the process change, I believe there were very few changes to the chip.

      Now, when you go from 130nm to 90nm, the linear dimension across the chip is ~70% of what it was, and the area of the chip is (70%)^2, or about 50% of the previous chip.

      Let's use some numbers; these may not be 100% accurate, but they'll explain the basic concept. The 2GHz 970 had a die size of about 121mm^2 and put out a maximum of 42W. That is about 350mW/mm^2. If we assume that the 2.5GHz 970FX has that same power consumption, but has a die size of 60mm^2, then the 970FX will produce 700mW/mm^2. So you have the same amount of power, but you are trying to suck it out of a smaller piece of silicon. So you need much more efficient cooling to keep the chip temperature the same. Hence the liquid cooling system in the dual 2.5GHz G5.
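      The power-density arithmetic above works out as follows (the die sizes and wattage are the comment's own rough figures, not official specs):

```python
def power_density_mw_per_mm2(watts, die_area_mm2):
    # Heat flux the cooler must pull out of the die, in mW per mm^2
    return watts * 1000.0 / die_area_mm2

d_970   = power_density_mw_per_mm2(42, 121)  # 130nm 970:  ~347 mW/mm^2
d_970fx = power_density_mw_per_mm2(42, 60)   # 90nm 970FX: ~700 mW/mm^2
```

      Same total wattage, roughly double the heat flux -- which is why the shrink demanded better cooling per unit area rather than less cooling overall.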

    • Re:It used to be... (Score:3, Interesting)

      by Balthisar ( 649688 )

      I guess I should have been a little clearer. I know that the 2.5GHz chip is rated at 2.5GHz. I also know that clock speed alone does not a system make. I realize the same amount of power has to be pumped through smaller pipes and be dissipated. I realize it's quiet, and I miss the whisper-quietness of my graphite iMac (Mac #6).

      I guess the question I'm asking -- aside from being a question -- is really meant to be thought provoking. Does the use of a liquid cooling system fundamentally change the de facto

  • by C.Batt ( 715986 ) on Monday October 25, 2004 @12:36PM (#10621782) Homepage Journal
    Introduction to 64-bit computing [ibm.com]

    There's an informative link at the bottom of the article for those requiring a bit more insight into the effects of 64-bit computing. /wishes he had exabytes of memory right now... VS.NET on WinXP is a PIG!
  • by shed ( 68365 ) on Monday October 25, 2004 @01:25PM (#10622283)
    From the article:



    The quest for CPU power has been largely defeated by bloated software in applications and operating systems. Some programs I wrote in Basic on an Apple II ran faster than when written in a modern language on a G4 Dual-processor Mac with hardware 1,000 times faster.


    Come on. What language are we talking about here? My basement collection includes a II+, a IIe, two IIcs and a Franklin compatible. I challenge anyone to come up with a program in Applesoft that runs faster on one of my museum pieces than on a modern Mac using C++, Java or even Perl. I mod his article -1 for troll.


    While software has become bloated and to some extent inefficient, people often forget that we expect a lot more from our computers now than the single-tasking 80 column display days.

    • the single-tasking 80 column display days

      _80_ columns?! Why, you young punks had it good! When I was a boy, we only had 40 columns, and we felt grateful, as our fathers had no video display at all. Grandfathers? They used their fingers or an abacus.

      You've come a long way, baby.
  • by Mr. Slurpee ( 97260 ) on Monday October 25, 2004 @02:59PM (#10623249) Homepage
    for literal heat, this puppy is pretty hot.

    my dual 2.5GHz PowerMac G5 [apple.com] idles at 52C (125F) on CPU A and 50C (122F) on CPU B. the memory controller is actually one of the hotter things, it idles at 62C (143F). however, it's not the hottest thing, of course: at full load (DVD rip+encode or playing 15 videos at once + MP3 + tasks + flicking around Exposé) both CPUs have hit a max of 83C (181F) (the computer is supposed to automatically sleep around 90C or so).

    so why so effing hot? i mean, this idles at the max temp my athlon 2500 peaks at! it certainly idles at a hotter temp than it needs to, but i have no problem with that: the system runs the fans dynamically to keep the noise down, so at idle it's not as cool as it could be. the difference in noise in my room when i sleep the athlon is ridiculous - the G5 sounds like a slightly loud external hard drive that's spun up. the system also has a liquid cooling system [overclockers.com.au] to quench the processors. this seems to just keep the processors within their range. the value that i see in it is the response to new heat - the CPU temps flick around a lot and are very responsive to load and the loss of load. after ramping up the CPUs to >80C, it takes about three or four seconds after the load drops for the CPU temps to drop 15-20C, then maybe a total of ten or twelve seconds to drop to idle temp.

    for some real-world perspective... a DVD rip+encode with HandBrake [m0k.org] using the ffmpeg engine, MP3 audio, 2-pass encoding, and gunning for your average 700MB movie size (800-1300kbps?) takes slightly less than the length of the DVD. an hour and a half long movie took about an hour and fifteen minutes to get onto my hard drive. MP3 ripping in iTunes will run up to 28x, but it's not fully loading the processors, so i wonder about a drive read bottleneck. the first night i got it, i was at a loss for how to really test the speed on it, so i just decided to open up a shitload of videos. basically i played a DVD (fluff, the GPU does that), opened up something in VLC, opened up about 13 videos in QuickTime of various sizes and formats, played some MP3 music (fluff again, that's ball sweat to a cutting-edge proc), and still had enough processing power to comfortably navigate files, chat, browse web pages, and flick around Exposé [apple.com]. around all of these things plus one is when a few of the videos would start stuttering and Exposé would start dropping frames to keep collapse speed uniform. anything past this would really start robbing time from videos.

    all in all? it's fast. it's quiet. it gets hot, but it takes care of itself. coming from a 375MHz G3-upgraded PowerMac 7600 (vintage '98), i'm not doing too shabby. i just decided i'd scramjet at mach 7 to the top of the pack and then sit there for another few years.
