
Apple, ARM, and Intel 246

Hugh Pickens writes "Jean-Louis Gassée says Apple and Samsung are engaged in a knives-out smartphone war. But when it comes to chips, the two companies must pretend to be civil because Samsung is the sole supplier of ARM-based processors for the iPhone. So why hasn't Intel jumped at the chance to become Apple's ARM source? 'The first explanation is architectural disdain,' writes Gassée. 'Intel sees "no future for ARM," it's a culture of x86 true believers. And they have a right to their conviction: With each iteration of its manufacturing technology, Intel has full control over how to improve its processors.' Next is pride. Intel would have to accept Apple's design and 'pour' it into silicon — it would become a lowly 'merchant foundry.' Intel knows how to design and manufacture standard parts, but it has little experience manufacturing other people's custom designs or pricing them. But the most likely answer to the Why-Not-Intel question is money. Intel meticulously tunes the price points for its processors to generate the revenue that will fund development. Intel's published prices range from a 'low' $117 for a Core i3 processor to $999 for a top-of-the-line Core i7 device. Compare this to iSuppli's estimate for the cost of the A6 processor: $17.50. Even if more A6 chips could be produced per wafer — an unproven assumption — Intel's revenue per A6 wafer start would be much lower than with their x86 microprocessors. In Intel's perception of reality, this would destroy the business model. 'For all of Intel's semiconductor design and manufacturing feats, its processors suffer from a genetic handicap: They have to support the legacy x86 instruction set, and thus they're inherently more complicated than legacy-free ARM devices, they require more transistors, more silicon. Intel will argue, rightly, that they'll always be one technological step ahead of the competition, but is one step enough for x86 chips to beat ARM microprocessors?'"
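For illustration, here is a hedged back-of-envelope version of that revenue-per-wafer argument. Only the chip prices ($17.50 for the A6, $117-$999 for Intel's desktop parts) come from the summary; the dies-per-wafer, yield, and average-selling-price figures below are invented assumptions:

```python
# Back-of-envelope sketch of the revenue-per-wafer argument.
# Die counts, yield, and the $300 average x86 price are illustrative
# assumptions; only the $17.50 A6 estimate comes from the summary.

def wafer_revenue(dies_per_wafer, yield_rate, price_per_chip):
    """Revenue from one wafer start, given yield and selling price."""
    return dies_per_wafer * yield_rate * price_per_chip

a6_wafer  = wafer_revenue(dies_per_wafer=600, yield_rate=0.85, price_per_chip=17.50)
x86_wafer = wafer_revenue(dies_per_wafer=200, yield_rate=0.85, price_per_chip=300.00)

print(f"A6-style wafer:  ~${a6_wafer:,.0f}")   # ~$8,925
print(f"x86-style wafer: ~${x86_wafer:,.0f}")  # ~$51,000
```

Even with generous assumptions for the small die, the wafer earns far less when it is filled with $17 chips, which is the business-model objection Gassée describes.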
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Complicated Story (Score:2, Insightful)

    by TaoPhoenix ( 980487 )

    I can't find the angle here.

    "Legacy Free" vs "Costs".

    "Legacy Free" is a nice sounding term for "won't run $hit". So much for your 1,000 app and app-lets you rely on, Business.

    So I give up on this story and will let the rest of y'all thrash it out.

    • Re:Complicated Story (Score:4, Interesting)

      by Sir_Sri ( 199544 ) on Monday October 22, 2012 @05:31PM (#41734613)

      "Won't run shit" is interesting. With Windows forking into ARM and x86 (or AMD64/IA64, whatever you want to call it) versions, the writing may be on the wall for Intel. If one of the ARM guys can produce chips that will do the 150-200 dollar price bracket as well as Intel chips can on Windows, this becomes a whole other ball game.

      I'm not sure we're anywhere near there yet. But with Qualcomm feasting on the remains of AMD, Samsung producing millions of parts a year and a few others with them, it's entirely possible that within the next 10 years ARM will be a major competitor to x86. Which is why MS is forking - it's going to confuse the hell out of consumers and is, from an end-user perspective, a terrible idea to go out and buy a Windows RT anything on Friday (Windows 8 launch day), but MS plans to support their ugly bastard for a long time, so who knows. And in 3 or 4 years, when we see Windows 9 roll around, we may have enough software that has been compiled for and runs on both that your 'won't run shit' assertion would no longer apply.

      • With Windows forking into ARM and x86 (or AMD64/IA64, whatever you want to call it) versions

        Windows dropped IA64 support, like it did PPC, Alpha and MIPS before.

        • by Sir_Sri ( 199544 )

          Right, I confused the Itanium instruction set with whatever intel brands its 64 bit ISA.

          • Re: (Score:3, Insightful)

            by drinkypoo ( 153816 )

            Right, I confused the Itanium instruction set with whatever intel brands its 64 bit ISA.

            Just remember, ia64 == iTanic == shit sandwich, amd64 is where it's at. Which is why it's so heartbreaking to see AMD so far to the rear in terms of performance today.

      • If one of the ARM guys can produce chips that will do the 150-200 dollar price bracket as well as Intel chips can on Windows

        If the queen had nuts she'd be king.

      • by gr8_phk ( 621180 )

        I'm not sure we're anywhere near there yet. But with Qualcomm feasting on the remains of AMD, Samsung producing millions of parts a year and a few others with them, it's entirely possible that within the next 10 years ARM will be a major competitor to x86.

        ARM is competing with itself. With all those companies making ARM chips, they have significant price competition, which will lead to reduced R&D budgets. Meanwhile, if you want top performance there is only one game in town, and they get to charge top dollar.

    • by Jeremi ( 14640 )

      "Legacy Free" is a nice sounding term for "won't run $hit". So much for your 1,000 app and app-lets you rely on, Business.

      I think that's less of a problem in the cell-phone market than in the desktop market.

      In the cell-phone market, increasingly there is an App Store type service that automatically upgrades people's installed applications as necessary, so the onus is no longer on the user to do the work.

    • by jon3k ( 691256 )
    It can run a web browser, so somewhere between today and 5-10 years from now it'll run almost everything most people need (see: Live365). Other than the fraction of a percent doing CAD or 3D modeling, etc. And that's a long time for ARM to come a long way in terms of power.
  • Wow, that last article looks like a really good Markov Chain [wikipedia.org] generator (or whatever the kids these days are using).

  • by preaction ( 1526109 ) on Monday October 22, 2012 @05:21PM (#41734505)

    The war between CORE and ARM raged across thousands of worlds, ravaging the galaxy. Neither would waver in their belief in their own supremacy. For each side, the only acceptable outcome is the complete elimination of the other.

  • "Genetic Handicap" (Score:3, Interesting)

    by Anonymous Coward on Monday October 22, 2012 @05:24PM (#41734547)

    "For all of Intel's semiconductor design and manufacturing feats, its processors suffer from a genetic handicap: They have to support the legacy x86 instruction set, and thus they're inherently more complicated than legacy-free ARM devices"

    Oh shut up. This argument comes up every time there's an ARM vs Intel debate. And you know what? Intel is pushing hard and successfully into ARM's territory, and ARM has yet to hit back with any chip that can compete with Intel in servers, high-end laptops, etc. And that's WITH Intel's huge profit margins. ARM certainly doesn't have the profit margins to spare in any price war. Intel is a huge monster to defeat, and its supposed handicap means far less worry for programmers, unlike trying to support the million-and-growing ARM SoCs out there, which is a nightmare.

    • Are you really so ignorant as to not recognize that ARM isn't in the 'server or high end laptop' world?

      They're cheap and low power. Perfect for small mobile devices.

      However, I really don't understand why Intel won't play both sides of the fence. Why not build an ARM line/factory to offer both types of chips? Take away business from competitors. Get the past to pay for the future.

      • by foniksonik ( 573572 ) on Monday October 22, 2012 @08:21PM (#41736181) Homepage Journal

        I do believe that ARM is not a chip or even a product. It's an architecture that is licensed to others. This means that the company behind ARM makes money on every chip regardless of price. They don't care if it costs $17 or $170 to manufacture and distribute; they have little overhead, so it's almost all profit at this point.

        Intel, OTOH, sells chips. They have much higher manufacturing and sales costs.
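        A minimal sketch of the contrast being drawn here (the royalty rate and the Intel cost figure below are placeholder assumptions, not published numbers):

        ```python
        # Two business models, roughly: ARM licenses designs and collects a small
        # royalty on every chip; Intel sells the chips it fabricates itself.
        # The 1.5% royalty rate and the $60 manufacturing cost are invented
        # placeholders for illustration only.

        def arm_royalty(chip_price, royalty_rate=0.015):
            """ARM-style: a small cut of each chip sold, with near-zero marginal cost."""
            return chip_price * royalty_rate

        def intel_chip_profit(chip_price, cost_to_make):
            """Intel-style: revenue per chip, minus fab and sales costs."""
            return chip_price - cost_to_make

        print(arm_royalty(17.50))              # ~$0.26 on a cheap SoC
        print(arm_royalty(170.00))             # ~$2.55 on an expensive one
        print(intel_chip_profit(300.0, 60.0))  # $240.0, but only on chips Intel fabs and sells
        ```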

      • by gr8_phk ( 621180 )

        However, I really don't understand why Intel won't play both sides of the fence. Why not build an ARM line/factory to offer both types of chips? Take away business from competitors. Get the past to pay for the future.

        ARM is killing all the other companies in a blood bath of competition. Meanwhile, Intel is absolutely dominating in fabrication. The foundries don't feel a need to compete with Intel because x86 isn't their market, so they don't worry about it.

  • by Anonymous Coward on Monday October 22, 2012 @05:24PM (#41734549)

    Intel's published prices range from a 'low' $117 for a Core i3 processor.

    What about atom? You know, the processor produced by Intel, specifically for the same markets that ARM are dominating now.

  • by Elbereth ( 58257 ) on Monday October 22, 2012 @05:27PM (#41734577) Journal

    Intel has made ARM processors in the past (xScale [wikipedia.org]), and, apparently, still retains an ARM license. Intel has manufactured RISC chips, as well (i960, for example). There is absolutely no reason why Intel wouldn't/couldn't produce an ARM chip, if they wanted to. There's just no reason to do so.

    Also, using the Core i3 as an example of Intel's "low-end" is not very fair. Intel's low-end chips are the Pentium and Celeron, not the i3. The Atom is the closest thing to a competitor to the ARM chips. Pricing for Atom chips varies extensively, from $20 to $100, depending on features.

    • by amorsen ( 7485 )

      Intel has made ARM processors in the past (xScale [wikipedia.org]), and, apparently, still retains an ARM license.

      They were crap though. I have an XScale-based PDA lying around somewhere. They were truly the Netbursts of the ARM world: high clock speed and power consumption but low performance.

      • by Amouth ( 879122 ) on Monday October 22, 2012 @05:54PM (#41734847)

        Compared to the Samsung ARM chips of the same era, the XScale blew the doors off them in performance clock for clock, and at that time no one did well with power consumption except when asleep.

      • by tlhIngan ( 30335 ) <slashdot@worf.ERDOSnet minus math_god> on Monday October 22, 2012 @06:14PM (#41735023)

        Intel has made ARM processors in the past (xScale [wikipedia.org]), and, apparently, still retains an ARM license.

        They were crap though. I have an XScale-based PDA lying around somewhere. They were truly the Netbursts of the ARM world: high clock speed and power consumption but low performance

        Intel sold the ARM license to Marvell who owns the architectural license to it. Intel does re-license back the Xscale core for some of their networking processors though.

        As for Xscale being crap - back in the day, StrongARM and Xscale were the top of the line - the PXA255 being one of the fastest ARM chips around. The next-generation chip was supposed to be even faster, but Intel sold it to Marvell who doesn't seem to have done anything with it.

        While StrongARM was pushing 200MHz, other ARMs were barely breaking 133MHz and not very fast at it. When the PXA255 upped it to 400, it was no competition. Then ARM decided they had enough of being outclassed by Intel and designed some decent ARM11 cores and continued onward with the Cortex series.

  • Ironic (Score:5, Interesting)

    by fm6 ( 162816 ) on Monday October 22, 2012 @05:33PM (#41734647) Homepage Journal

    If Gassée is right about "architectural disdain" then it's kind of ironic. Intel itself exhibited the same disdain for x86 architecture when they initially refused to make their first 64-bit chip, the Itanium, backward compatible with it. It was only after AMD demonstrated that the architecture still had legs that they brought it to the 64-bit world — after wasting billions on Itanium development.

    Those that forget history, yada yada.

    • It's ironic that you posted that ironic comment, as it's ironic that Gassée would be right after being so spectacularly wrong about a similar topic.

      "I once preached peaceful coexistence with Windows. You may laugh at my expense - I deserve it."
      -- Jean-Louis Gassée, CEO Be, Inc.

    • Re:Ironic (Score:4, Informative)

      by SpazmodeusG ( 1334705 ) on Tuesday October 23, 2012 @12:07AM (#41737713)

      You have that completely backwards. The first Itaniums WERE backwards compatible with IA-32 (x86) at the hardware level. It was later Itaniums that ditched backwards compatibility in favour of the software based IA-32 Execution Layer.

  • what am I missing? (Score:4, Interesting)

    by ThorGod ( 456163 ) on Monday October 22, 2012 @05:34PM (#41734667) Journal

    Apple's the one currently manufacturing their A6 chips for $17, while the comparable Intel chip retails for much more?

    Isn't this more a statement of how well Apple's vertical integration of chip manufacturing went?

  • by ebunga ( 95613 ) on Monday October 22, 2012 @05:41PM (#41734739)

    Wow, ARM people are just like Java cultists... calling everything else legacy.

    • Microsoft has long referred to anything not-Microsoft as legacy. It's nothing new to absolutists.

  • by ericloewe ( 2129490 ) on Monday October 22, 2012 @05:42PM (#41734751)

    It's been pretty much proven that the "x86 legacy baggage" or however you want to put it does not seriously affect Intel's Atom for phones.

    http://www.anandtech.com/show/6330/the-iphone-5-review/10 [anandtech.com]

    The RAZR i, which has an Atom processor, beats the A6, the best performer in the ARM field, most of the time in non-GPU tasks (the one area it is lacking is GPU power), while power consumption is average for a phone. Android adds additional overhead not present in iOS, too.

    If anyone can work miracles and cram x86 into a phone, it's Intel. As ARM designs have to start dealing with greater complexity, Intel can apply their immense experience with x86 and improve performance without dramatically increasing power consumption.

    With some more work, I can see Atom beating the hell out of any ARM design in the same power envelope. I'll give it one or two generations.

    • That's some serious ass whipping. No power numbers though. At $7B per new fab, just wondering who has the muscle to compete with Intel.

    • by rsborg ( 111459 )

      If anyone can work miracles and cram x86 into a phone, it's Intel.

      Then why haven't they yet? It's not like they haven't been trying for years. Why did Apple (and Google) have to create the market that MS and Intel now feel they need to invade?

      Intel is the challenger here, and ARM has proven itself several billion times over.

      • by rsmith-mac ( 639075 ) on Monday October 22, 2012 @07:22PM (#41735609)

        "Trying" is probably an overstatement in this case. Intel has a well-devised plan to get there, but it's a plan that involves them taking one step at a time. First they needed the Atom CPU design, then they needed to get it integrated into a true SoC, then they need to integrate their own GPU, etc.

        Intel Atom roadmap [anandtech.com]

        Silvermont is where Intel makes their architectural leap over ARMv7 (Cortex) with the new Atom architecture coupled with Intel's own, higher performance GPUs. Then in 2014 Intel does Airmont, where Atom gets promoted to first-class status in Intel's fabs, jumping to new process nodes at the same time as Core. If all goes to plan, at this point Intel will be roughly a node ahead of the competition with an architecture as good as or better than any planned ARMv7 designs. This is the tick-tock strategy in full swing, the same strategy that is currently bludgeoning AMD to death.

        So Intel may be the challenger here, but never underestimate them. Their fabs are unrivaled and they can afford to hire some of the best architects on Earth. If Intel does their homework and doesn't screw up, they're a very dangerous foe. The only place Intel can't (or won't) go is into low-margin products, and as bad as competition from Intel would be, the ARM partners don't want to sacrifice their margins too much just to scare off Intel. It would be a Pyrrhic victory.

        • But who is going to buy their chips?

          Not Apple. Not Samsung.

          Intel doesn't have a major buyer lined up. They'll have to license the architecture out for pennies or make a new market.

          Could be a case of too little too late unless they are willing and able to do low margin, high volume.

  • by scheme ( 19778 ) on Monday October 22, 2012 @05:43PM (#41734769)

    'For all of Intel's semiconductor design and manufacturing feats, its processors suffer from a genetic handicap: They have to support the legacy x86 instruction set, and thus they're inherently more complicated than legacy-free ARM devices, they require more transistors, more silicon.

    Intel and AMD x86 processors moved on to using micro-ops and RISC-like operations internally years ago. The only disadvantage nowadays is a small translator that converts x86 machine code into micro-ops. Compared to the actual logic or cache on the CPU, the number of transistors the translation takes is minimal and not a big deal, especially when you consider the size of CPUs nowadays.

    • EXACTLY. ARM's architecture may provide a slight advantage for low-power use compared to x86. But it's very, very slight. Certainly, Intel's advantage in process technology would outweigh ARM's advantage in architecture. The only real reason x86 hasn't competed with ARM so far in very-low-power is that no one has tried hard enough. There's finally enough demand for higher-end low-power chips that Intel is taking notice. I think Intel is also taking notice because they don't like seeing an ARM-based so

  • Comment removed (Score:3, Insightful)

    by account_deleted ( 4530225 ) on Monday October 22, 2012 @05:56PM (#41734861)
    Comment removed based on user account deletion
  • by faragon ( 789704 ) on Monday October 22, 2012 @06:00PM (#41734903) Homepage
    This article shows the obvious: excluding caches, performance per transistor in Intel x86 CPUs is very low. For example, the current best performance per clock in Intel CPUs comes from AVX (Core i5/i7, Sandy and Ivy Bridge), delivering up to 8 FLOPs per cycle with AVX SIMD opcodes (2 SIMD ALUs), while previous generations managed just 4 FLOPs per cycle with SSE2/3/4 (just 1 SIMD ALU). That's miserable (back in 2000, the PlayStation 2 was already capable of FMAC opcodes with 8 FLOPs/clock per SIMD ALU!!!). For example, similar performance (4 FLOPs per cycle with one SIMD ALU) can be had at a fraction of the wafer area.

    Here is a $50 ARM general-purpose multicore CPU example for matching the $999 performance of the fastest Intel Core i7 (e.g. i7-3770K: 3.9GHz peak, 4 CPUs, 8 threads, 2 SIMD ALUs/CPU = 8 SIMD ALUs = 64 FLOPs/clock -> 3.9*10^9 Hz * 64 FLOPs/clock = 249.6 GFLOPS [intel.com]); a quick sanity check of the arithmetic appears below:
    • 4 x ARM OoOE cores (e.g. Cortex-A9-like) at 2.0GHz with 2 SIMD FMAC-capable ALUs per CPU (16 FLOPs/clock per ALU, i.e. 2 ALUs = 32 FLOPs/clock -> 4 * 2.0*10^9 * 2 * 16 = 256 GFLOPS)
    • 4 * 32KB + 4*32KB (256KB) L1 full-speed code and data cache
    • 4 * 256KB (1MB) L2 half-speed cache
    • 2 MB L3 half-speed cache
    • 2 or 3 lane ring bus (cheaper interconnect).

    Increased integer and load/store performance could be achieved with pipeline and issue/execution modifications, using more functional units. The limit is keeping the OoOE logic simple enough to avoid wasting transistors executing tons of instructions unnecessarily.
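    A quick sanity check of the peak-FLOPS arithmetic above (a sketch only; the clock rates, core counts and per-ALU FLOP figures are the parent comment's assumptions, not measured numbers):

    ```python
    # Peak GFLOPS = cores * clock (GHz) * FLOPs issued per core per cycle.
    # Figures below are the parent comment's assumptions, not benchmarks.
    def peak_gflops(cores, clock_ghz, flops_per_core_per_cycle):
        return cores * clock_ghz * flops_per_core_per_cycle

    # Core i7-3770K as described above: 4 cores, 3.9GHz peak,
    # 2 AVX SIMD ALUs * 8 FLOPs = 16 FLOPs per core per cycle.
    i7 = peak_gflops(cores=4, clock_ghz=3.9, flops_per_core_per_cycle=16)

    # Hypothetical ARM design from the comment: 4 cores, 2.0GHz,
    # 2 FMAC SIMD ALUs * 16 FLOPs = 32 FLOPs per core per cycle.
    arm = peak_gflops(cores=4, clock_ghz=2.0, flops_per_core_per_cycle=32)

    print(i7, arm)  # 249.6 256.0 (theoretical peak GFLOPS only)
    ```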

  • Intel has some of the best (the best?) fabs in the world, and has chips that use a smaller process than what other companies are pushing out, right? So why can't they make a small, power-efficient chip that can at least meet (if not beat) the offerings from ARM and its licensees?

    To put it another way, Wikipedia tells me that ARM Holdings has 2,000 employees and a revenue of about 490 million pounds (in 2011). Intel has 100,000 employees and a 2011 revenue of 54 billion dollars (about 34 billion pounds). How

    • Intel has some of the best (the best?) fabs in the world, and has chips that use a smaller process than what other companies are pushing out, right? So why can't they make a small, power-efficient chip that can at least meet (if not beat) the offerings from ARM and its licensees?

      From what I've read on AnandTech [anandtech.com], low-power Haswell chips, which are due out in the middle of next year, might meet your criteria. I'd be very surprised if Broadwell (the 14nm die shrink of 22nm Haswell) doesn't.

      • So, they promise that the chip they'll release next year can compete with the chips ARM is selling* now? (And, yeah, that would be the first time Intel overpromised on power consumption... Forget about last year, and the year before that, and...)

        * OK, ARM doesn't sell chips, I know, but they are getting made now, with inferior processes and bigger feature sizes, and are still competitive with the offering Intel has for next year.

  • by epine ( 68316 ) on Monday October 22, 2012 @06:06PM (#41734963)

    The old 80386 based on the "complex" x86 instruction set had 275,000 transistors. Intel is now making chips with 2.6 billion transistors and somehow what they once implemented as one functional unit within a budget of 0.000275 billion transistors is holding them back?

    Certainly they would rather do a few things differently had they been worried about 2013 back in 1978. Transistor count is the least of the matter. What buggers up x86 is the number of active transistors handling the instruction stream at each instruction cycle. There's no way to align variable-length instructions without active transistors (regardless of whether the transistors involved amount to a wart on a small toe of a juvenile mosquito).
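    A toy illustration of that alignment point (a sketch; the byte-length rule below is invented, not real x86 encoding): with variable-length instructions, the start of instruction N is only known after decoding the lengths of everything before it, while fixed-width code needs nothing but index arithmetic.

    ```python
    # Toy model of instruction-boundary finding. The length rule is invented
    # for illustration; real x86 length decoding is far messier.

    def boundaries_variable(code, length_of):
        """Each instruction's start depends on the lengths of all prior ones."""
        offsets, pc = [], 0
        while pc < len(code):
            offsets.append(pc)
            pc += length_of(code[pc])  # must decode before the next start is known
        return offsets

    def boundaries_fixed(code, width=4):
        """Fixed-width ISA: boundaries are pure arithmetic, computable in parallel."""
        return list(range(0, len(code), width))

    fake_length = lambda opcode: 1 + (opcode % 4)  # invented 1-4 byte "encoding"
    stream = bytes([0x90, 0x0F, 0x05, 0xC3, 0x89, 0xE5, 0x55, 0x5D])

    print(boundaries_variable(stream, fake_length))  # [0, 1, 5, 7]
    print(boundaries_fixed(stream))                  # [0, 4]
    ```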

    The x86 story bugs the hell out of me. Considering how well it has actually held up for 45 years (and counting), it's one of the ugly-duckling success stories of all time (hint: it wasn't so ugly after all).

    It was also a founding member of the Steve Jobs reality distortion field. I'm concerned his posthumous aura will continue to glow with the uplift of falsehood. He should be credited more for what he accomplished than the lies he polished to get there.

    It wasn't just Steve, it was the entire RISC consortium manufacturing an Achilles heel out of whole cloth. Far closer to the truth of the matter is that x86 has a much higher design cost than an orthogonal clean-sheet alternative. The design cost was a small multiple. Intel's resources were a large multiple. It didn't go well for RISC. The much vaunted DEC Alpha had a metal connect layer for single-cycle carry-add propagation that forever segregated it from the mass-consumer price point. It was the instruction set. No, it was the instruction set aided by a titanium stent.

    Also, the RISC design advantage does not extend to the memory cache and system bus design. These are a bear to design well for any instruction set. The RISC people moaned about the exceptional Pentium Pro performance level on server workloads (it was the first memory bus from Intel that didn't totally suck). Well, Intel broke into the server market with their crappy old x86 instruction set by grafting it onto a titanium alloy cache hierarchy and bus controller (with multiple dies grafted into the same chip package at enormous expense). Cache latency and branch prediction absolutely dwarf instruction set as the big thing to worry about since around this time. If Steve hadn't grabbed onto the inferiority of CISC around this time, it might have died a timely death.

    In low power applications, ARM has a real advantage, enough to win a huge market share at race-to-the-bottom price points. How much does the cost of a CPU influence a handset? How much everything else? I've put $300 Intel CPUs in $2000 boxes. I've put $250 Intel CPUs in $1000 boxes. I've put $60 CPUs in $500 boxes. A $16 CPU in a phone that retails for $600 for just a few months, before landing in the discount bin? I'm sure Intel wants a huge slice of that.

    One reason Intel has held their ground is that the Cortex-A15 (out-of-order superscalar multiprocessor) is starting to look a lot like the old Pentium Pro. Sure the instruction set is modern and clean (though it took ARM surprisingly long to come up with the mixed 16/32 bit instruction encoding format due to misguided ideological purity; how many active transistors does it take to determine whether the next 32 bit chunk from the instruction stream is one lump or two? More or less than the number of active transistors in the icache devoted to storing common instructions bloated to 32 bits just because?). But all the rest of the issues are pretty much the same: branch prediction stalls, cache snooping, and memory path latency.

    From Intel's perspective, an ugly instruction set is good for business. (Then they went on a jag thinking that if ugly is good, atrocious is better, and the Itanium was hatched with a jackhammer from a mastodon egg.)

    After another three die shrinks, when half the processor implements on-demand power management, and most of the other half provides task-specialized execution units, is the instruction set going to matter a hill of beans for anything other than legacy lock-in?

    • by Animats ( 122034 ) on Monday October 22, 2012 @06:51PM (#41735331) Homepage

      Far closer to the truth of the matter is that x86 has a much higher design cost than an orthogonal clean-sheet alternative.

      True. Years ago I went to a talk where the head of the Pentium Pro design team showed a graph of the number of engineers working on the project. It peaked around 3,000. Nobody had ever had a CPU design team that big before.

      The variable length instruction alignment problem of x86, although ugly, isn't a huge consumer of transistors. AMD dealt with it by expanding instructions to fixed length when loaded into cache. Intel dealt with it by sometimes starting ambiguous cases in parallel and discarding the bogus results later. The downside of fixed-length instructions, as in RISC machines, is code bloat - PowerPC code is about twice as big as x86 code, which impacts cache miss rate.

      While one instruction per clock RISC CPUs (low-end MIPS and DEC Alpha parts, and the Atmel AVR series are examples) are simple, superscalar machines executing more than one instruction per clock are almost as complex as x86 CPUs. That's why RISC stopped being a win.

      Harry Pyle was developing the instruction set [computerhistory.org] for the Datapoint 2200 in his dorm room at Case Tech in Cleveland in the late 1960s. Same building I was in; different floor. That led to the 8008 and the 8080 and the 80286 and the 80386 and ...

  • by __aailob1448 ( 541069 ) on Monday October 22, 2012 @06:15PM (#41735041) Journal

    For a long time, single-core applications were the rule, so the CPU MHz race was on. Once that ended around 3GHz, the pressure was on for programmers to make code better at dividing the load between multiple cores.

    It turns out that ARM does well at lower frequencies and delivers the best performance-per-watt ratio. Also, it turns out that once all your code is written for 2, 4 or 8+ cores, it doesn't matter much if your CPUs are clocked at 1.3GHz (A6/Snapdragon) instead of 2.6GHz (i7 in the 2012 MacBook Pro).
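    As a rough illustration (an idealized sketch that assumes perfect parallel scaling and identical work per clock per core; the core counts are hypothetical, not any specific A6 or i7 configuration):

    ```python
    # Idealized throughput model: ignores memory, thermal and software limits,
    # and assumes the workload splits perfectly across cores.

    def aggregate_ghz(cores, clock_ghz):
        """Total cycles per second across all cores, in GHz."""
        return cores * clock_ghz

    many_slow = aggregate_ghz(cores=4, clock_ghz=1.3)  # 5.2
    few_fast  = aggregate_ghz(cores=2, clock_ghz=2.6)  # 5.2

    print(many_slow, few_fast)  # equal only under these idealized assumptions
    ```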

    And if you're doing mobile, where battery life is a big factor, you need the ppw ratio more than anything, so you go ARM.

    On mobile, Intel is in a similar situation now to the one they were in against AMD back in the AMD64 days. Their current models (Atom) are inferior but competitive. They dominate servers and desktops, which gives them a secure base to experiment from, and I expect their mobile offerings in the next 5 years to bridge the gap with ARM.

    Will they win? I have no clue. They might crush ARM or become the premier ARM licensee with the best ARM chips. Either way, Intel is going to lead.

  • by steveha ( 103154 ) on Monday October 22, 2012 @06:20PM (#41735071) Homepage

    Intel wants to be the only company that can meet your needs. That way, they can make you pay premium prices for their chips. This is perfectly understandable; that is what is best for Intel.

    Apple wants to be vertically integrated. They want full control over everything they do. Partly this is so they can keep as much as possible of the money they collect; partly this is so that they can guarantee excellent quality and excellent availability. This is what is best for Apple, and it isn't bad for their customers either.

    Intel does not want to become just another ARM source, competing on price with all the others. But Apple will never lock themselves in to depending on Intel for mobile chips, when ARM chips have been shown to be more than adequate. And Apple would not be investing in custom ARM chips if it was planning to adopt Intel mobile chips.

    People keep pointing out that Intel's mobile x86 chips are competitive with ARM. That won't cut it. Intel's chips would have to be better, and so much better that the risk of depending on Intel is worth it.

    That was the case for the PowerPC to x86 transition! Intel's chips were so much better than PowerPC for laptops that it was worth getting into an entangling relationship with Intel. AMD was not able to guarantee delivery of the massive quantities of chips Apple was planning to sell, and Intel was, so AMD wasn't really an option... but at least they served to keep Intel from trying to charge totally outrageous prices for their chips; there was always a credible threat of going to AMD.

    Hmm. It's looking like AMD is going to crater in spectacular fashion soon. I wonder if Apple will make a serious attempt to buy what's left of the company. That would enable Apple to make its own x86 chips! Eh, probably not. AMD is behind Intel on process, so switching to AMD chips would mean taking a hit on performance, power use, or both.

    The "SemiAccurate" web site thinks that Apple will transition to using ARM chips for laptops [semiaccurate.com], not just for mobile devices, once ARM chips are good enough (which they will be soon). So, transitioning away from x86 and to, say, multi-core 64-bit ARM chips is another way Apple can untangle from Intel.

    Apple may not be in a big hurry to actually complete the transition away from Intel chips; just a credible threat of switching to ARM chips might be enough to negotiate good prices on x86 chips. That would leave lower power consumption as the main reason to go to ARM, but a laptop's display is probably the worst power drain, especially with a Retina display.

    steveha

    • by steveha ( 103154 ) on Monday October 22, 2012 @06:51PM (#41735335) Homepage

      Also note that Apple has people paying $2500 and up for the Mac Pro, and $1000 and up for laptops. But mobile devices are closer to $500, and the Android competition is hitting the $200 price point.

      There just isn't as much room to pay top dollar prices for Intel parts in the mobile space.

      So even if Intel mobile x86 parts are slightly faster than the ARM chips, will Intel be happy selling at prices competitive with ARM prices? History suggests "no". The cheapest Atom chips are around $20 but Intel makes those suck, just as much as Intel can get away with.

      Intel is the master of segmenting markets. Different chips at different price points have different features enabled. Cheaper chips are as crippled as possible, to encourage you to buy a more expensive chip. For example, Intel doesn't support virtualization features on their less-expensive chips; and Intel mostly reserves support for ECC RAM to only the Xeon processors.

      (In contrast, AMD puts full functionality in all their parts; they are #2 and they are trying harder [follisinc.com] to please the customer. That is how you can get an HP Proliant MicroServer with a 1.5 GHz dual-core AMD Turion processor for $320 at Newegg [newegg.com], with full support for virtualization and ECC RAM. I cannot imagine a MicroServer with equal or better Intel parts hitting that price point.)

      Intel will try to balance the functionality it allows into the mobile chips against the price it can get. Apple just wants the best chips for the cheapest price. These two goals are not in alignment.

  • Tis a fool.... (Score:2, Interesting)

    Tis a fool who looks for logic in the chambers of the human heart. Or from Cupertino. And that's not a dig, Apple fans, that's just the truth. Apple will dump Intel when they feel like it, for reasons that they alone decide.

    Apple is a bit like the interrogator in 1984. They believe they can levitate off the ground and float around the room should they choose to, and what the outside world thinks makes no difference at all.

    • by mjwx ( 966435 )

      Tis a fool who looks for logic in the chambers of the human heart. Or from Cupertino. And that's not a dig, Apple fans, that's just the truth. Apple will dump Intel when they feel like it, for reasons that they alone decide.

      Apple is a bit like the interrogator in 1984. They believe they can levitate off the ground and float around the room should they choose to, and what the outside world thinks makes no difference at all.

      This.

      It's Intel looking at the big picture. Samsung was one of Apple's biggest suppliers; look at what Apple tried to do to them (although it did backfire horribly for Apple, you can't count on that happening every time). Apple is turning out to be a riskier partner than Microsoft was.

  • by PPH ( 736903 )

    ... the politics between Apple, Samsung, Microsoft, Intel, ARM and others, you should be working at the United Nations on a solution for eternal world peace.

    There is so much sub rosa crap (not all of it ethical or legal) going on between the players we may never know the truth.

  • by Roogna ( 9643 )

    Don't know what Jean-Louis is talking about, as there were press releases and everything not long ago about Apple ramping up production at TSMC foundries. Don't think they feel they need Samsung or Intel for their ARM production.

  • by Ancient_Hacker ( 751168 ) on Tuesday October 23, 2012 @11:28AM (#41741629)

    This could be a replay of the old days of mainframes. At more than one company, the engineers came up with mainframes on a desk, but the marketers could not see selling a desktop mainframe at the old 7-digit prices. So they just kept making the big boxes, till their eventual death. This happened to CDC, Data General, Digital, and Perkin-Elmer, to name a few. Intel will undoubtedly survive, but it could be a long, painful decline or change of direction. The "new architecture" fanatics there probably don't have much traction after the Itanium disaster.
