Apple Replaces Last Remaining Intel-Made Component In M2 MacBook Air (macrumors.com) 87

In the M2 MacBook Air, Apple has replaced an Intel-made component responsible for controlling the USB and Thunderbolt ports with a custom-made controller, meaning the last remnants of Intel are now fully out of the latest Mac. MacRumors reports: Earlier this month, the repair website iFixit shared a teardown of the new "MacBook Air," revealing a look inside the completely redesigned machine. One subtle detail that went largely unnoticed was that unlike previous Macs, the latest "MacBook Air" introduces custom-made controllers for the USB and Thunderbolt ports. iFixit mentioned it in their report, noting they located a "seemingly Apple-made Thunderbolt 3 driver, instead of the Intel chips we're familiar with." The new component was shared on Twitter earlier today, where it received more attention. Few details are known about the controllers, including whether they're custom-made by Apple or a third party.

Comments Filter:
  • by NaCh0 ( 6124 ) on Tuesday July 26, 2022 @06:50PM (#62736674) Homepage

    I'm sure this is okay if you like running macos.

    Custom silicon is usually a nightmare for hardware hackers who want to expand usage of their machines.

    • I'm sure this is okay if you like running macos.

      Custom silicon is usually a nightmare for hardware hackers who want to expand usage of their machines.

      A little bus-sniffing and it's all figured out.

      It's not that complex of a chip. The Asahi folks will take about 2 hours to reverse-engineer it.

      • It would be nice if Apple put in UEFI as an option. The closest to this is the environment that Parallels provides, which can boot Ubuntu and other arm64-based operating systems.

    • Re:Custom chips (Score:4, Informative)

      by ArchieBunker ( 132337 ) on Tuesday July 26, 2022 @07:06PM (#62736728)

      These days OSX is closer to classic unix than Linux currently is. The thing that really pisses me off about Linux is the inconsistency even within the same distribution across different architectures. Take a headless Ubuntu x64 box and something tiny like a Raspberry Pi or a RISC-V board. You'd think setting a static IP address would be the same across these machines? It's still Ubuntu 22-based, but nope, you won't find any real docs. Just a stickied post on some forum where some brave soul has figured out which ifconfig replacement they're using this week. NetworkManager? Connman? Netplan? Sometimes it's not even one of those three but something else entirely. You know what they NEVER FUCKING USE ANYMORE? Good old ifconfig. No, that makes too much goddamn sense.
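
      For what it's worth, here is roughly what the same "static address on one interface" task looks like under two of the stacks named above. This is only a sketch: the interface name enp1s0, the 192.0.2.x addresses, the connection name, and the file name are placeholders, and the details vary by Ubuntu release.

      # Netplan (Ubuntu server images): drop a YAML file, then apply it
      # /etc/netplan/01-static.yaml
      network:
        version: 2
        ethernets:
          enp1s0:
            dhcp4: false
            addresses: [192.0.2.10/24]
            routes:
              - to: default
                via: 192.0.2.1
            nameservers:
              addresses: [192.0.2.1]
      $ sudo netplan apply

      # NetworkManager (Ubuntu desktop images): the same thing via nmcli
      $ nmcli con mod "Wired connection 1" ipv4.method manual ipv4.addresses 192.0.2.10/24 ipv4.gateway 192.0.2.1 ipv4.dns 192.0.2.1
      $ nmcli con up "Wired connection 1"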

      • Re:Custom chips (Score:4, Insightful)

        by K. S. Kyosuke ( 729550 ) on Tuesday July 26, 2022 @08:26PM (#62736858)
        Why would you use ifconfig if it's incompatible with Linux's networking model? Shouldn't ip be used these days?
        • Re: (Score:2, Interesting)

          Because ifconfig worked for decades. There was no reason for a replacement. The ip command does the exact same thing but with a slightly different syntax, for... reasons? Same with nslookup, traceroute, and route. They worked for decades, but they were “old” and had to be rewritten.

          Googling for help on these topics now requires a date parameter. Setting a static IP in 2014? Not relevant today. 2017? Still not relevant. 2021? Might work. Meanwhile I bring up an OpenBSD box and don’t have to worry

          • Re:Custom chips (Score:4, Insightful)

            by K. S. Kyosuke ( 729550 ) on Wednesday July 27, 2022 @12:33AM (#62737196)

            The ip command does the exact same thing

            No, it doesn't? That was the whole point of it?

            • No, it doesn't? That was the whole point of it?

              It doesn't allow me to set static IP addresses from the command line using a slightly different syntax than ifconfig?
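
              For the simple static case, the "slightly different syntax" looks roughly like this (just a sketch; eth0 and the 10.84.0.x addresses are placeholders):

              # old net-tools style
              ifconfig eth0 10.84.0.1 netmask 255.255.255.0 up
              route add default gw 10.84.0.254

              # iproute2 equivalent
              ip addr add 10.84.0.1/24 dev eth0
              ip link set eth0 up
              ip route add default via 10.84.0.254

              The usual argument for the switch isn't the basic case above, but that ip also exposes things Linux's ifconfig never handled well (multiple addresses per interface, policy routing, the netlink-based kernel interface).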

          • by DrYak ( 748999 ) on Wednesday July 27, 2022 @05:39AM (#62737600) Homepage

            Because ifconfig worked for decades. There was no reason for a replacement.

            Yes, there is. In short: network configuration is nowadays very dynamic.

            ifconfig dates back to an era where you merely had to set a fixed IP on your Ethernet interface, of which there was exactly one on your motherboard. At best you got an extra network card with one or two ports, and a BIOS option let you disable the onboard one.

            In short, ifconfig was good for "set eth0 to 10.84.0.1".

            Fast-forward to modern days, and the vast majority of users have laptops and other hardware where the network is dynamic.
            That means Wi-Fi, WAN over 3G/4G/5G, docks, USB network adapters (i.e. purely plug-and-play), and various VPNs started simultaneously on an as-needed basis.
            Workstations themselves now feature multiple Ethernet interfaces on buses that give no guarantee about which will show up first (names like eth0, eth1, etc. don't make sense anymore).
            You basically have network interfaces randomly popping in and out that need to be configured ad hoc.

            One could handle that with a fuckton of bash or perl scripts and a dozen additional tools (iwconfig, etc.), but that would mean each distro having its own set of custom scripts.

            Instead, a couple of frameworks for dynamically managing the network have emerged. NetworkManager and ConnMan are the ones that have gotten the most traction, with others trailing behind.

            The ip command does the exact same thing but with a slightly different syntax, for... reasons? Same with nslookup, traceroute, and route. They worked for decades, but they were “old” and had to be rewritten.

            No. Not because they were old, but because:
            - they come from the older, much simpler days of "set a static IP on a single interface with a standard name";
            - the networking stack in the Linux kernel has itself changed (also to adapt to modern networking needs), and the "ip" suite of tools is geared toward those newer interfaces.

            How fucking dare shit like systemd make changes to my resolv.conf

            Because DHCP, VPNs, etc. are all things that can update the way your computer resolves network names.
            So resolv.conf is not set in stone anymore, but under the control of the current network manager (which in turn can also override its content with entries of your choosing).
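
            As a concrete example of "entries of your choosing" (a sketch, assuming NetworkManager is the manager in charge; the connection name and resolver address are placeholders), you can pin your own DNS servers per connection instead of hand-editing resolv.conf:

            $ nmcli con mod "Wired connection 1" ipv4.dns "192.0.2.53" ipv4.ignore-auto-dns yes
            $ nmcli con up "Wired connection 1"
            $ resolvectl status   # if systemd-resolved owns /etc/resolv.conf, check the result here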

            Also, systemd is generally designed for simple unattended autoconfiguration.
            Its network management is best suited to a container that just needs to bring up its virtual network and not deal with complex networking after booting, so it's a very poor choice to deploy on a user machine.

            Googling for help on these topics now requires a date parameter. Setting a static IP in 2014? Not relevant today. 2017? Still not relevant. 2021? Might work.

            Personal tip:
            go check Arch Linux's documentation (and I say that as someone running openSUSE Tumbleweed and Debian; Manjaro ARM on a PineBook Pro is the closest I've come to Arch).
            It's well maintained, up to date, and clear. It clearly documents both NetworkManager and ConnMan (and was how I got eduroam to work in my early days on Sailfish OS).

            Forget about Google; most of the time the top answers will point to some StackExchange or similar Q&A forum, which at best will be completely outdated, and at worst plain wrong or cargo-culted.

            Meanwhile I bring up an OpenBSD box and don’t have to worry about ..

            ...and will not be able (not without a ton of custom scripts) to have that laptop automatically connect to a specific set of public networks when it sees them (but with a lower priority than your home network) and then automatically start a VPN over them.

            I am not making this up; this is how an entire country (Switzerland) is dealing with students: in addition to the interoperable eduroam that y

            • > Fast-forward to modern days, and the vast majority
              > of users have laptops and other hardware where the
              > network is dynamic.

              And even in these "modern days" the vast majority of Linux boxes are servers that will sit in a rack or on a hypervisor, in a network environment that will remain unchanged if not for the lifetime of the server, then definitely for the lifetime of the service you're running on it. And if you're changing the networking, you're probably reformatting and rebuilding the whole thing. So if

              • But *why* would you use ifconfig if ip can do everything that ifconfig does, and then some? Especially when distributions have already migrated to using ip even for static settings?
        • Why would you use ifconfig if it's incompatible with Linux's networking model?

          In what way is ifconfig incompatible with Linux's networking model? And why didn't anyone notice this for 30 years?

          • Uh... people noticed twenty years ago; that's when ip replaced ifconfig, and it replaced it precisely because they noticed. (BTW, note that I'm talking specifically about ifconfig on Linux, not necessarily about any random ifconfig from a different system.)
            • OK, you're just being vague on purpose. If you have problems with ifconfig, you should say what they are; otherwise gtfo.

      • These days OSX is closer to classic unix than Linux is currently.

        Not surprising, considering macOS is a certified UNIX system [opengroup.org].

      • I run Debian Buster on both rpi4 and amd64 platforms, and the only difference is how kernel command-line parameters get set (and repo strings, obviously). I literally have a single switch in my Puppet configs to distinguish amd64 from aarch64. /etc/network/interfaces works as expected.
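
        For reference, the classic ifupdown stanza in question looks something like this (a sketch only; the interface name and addresses are placeholders, and dns-nameservers needs the resolvconf package):

        # /etc/network/interfaces
        auto eth0
        iface eth0 inet static
            address 192.0.2.10
            netmask 255.255.255.0
            gateway 192.0.2.1
            dns-nameservers 192.0.2.1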

        Raspbian and Ubuntu like to be weird, and that's their right. So I don't use them.

        This has nothing to do with Linux, which is still a monolithic kernel like UNIX, not a strange Mach/XNU sorta microkernel-ish thingamabob.

    • by ledow ( 319597 )

      I pity anyone who thinks they're a hardware hacker trying to get the most out of their machines, and then goes out and buys a Mac to hack...

  • My biggest question is: Why is this not just part of the M2 die?

    Any chip designers care to speculate?

    • by BillTheKatt ( 537517 ) on Tuesday July 26, 2022 @07:26PM (#62736758)
      Possible reasons could be:

      A - Chip includes significant power-related circuitry, which works better at older/lower resolution nodes. For example, if you're moving around a lot of power, like Thunderbolt most definitely does, it's easier to fab the circuits that control the power at an older or lower resolution (bigger) process node, like 90 nm or 130nm. That's (one reason) why many power drivers and other circuitry use the older nodes. You can mix and match nodes (5nm CPU, 90nm TB controller), but only if you're doing chiplets.

      B - They had a 3rd party design the chip for them, and they wanted to reduce the co-dependence. So they make the CPU, 3rd party makes the TB controller. If they're late, no delay on the CPU.

      C - They want the option to dual-source the TB controller. Perhaps they think their fab might not be able to make enough. As a separate chip, they can stick an Intel controller there, or one of their own (or 3rd party) and have more supply if one or the other runs out of chips.

      D - Cost. The newest nodes (5nm) cost $$$$$$$ per piece. Older nodes like 40nm, 90nm or 130nm are much much cheaper. Yes you get more for your money, but it's not a linear cost. The newest nodes are MUCH more expensive.

      But A is probably the answer. TB, USB, PCIe are all power-related, and their circuits work better on older (bigger) nodes. There's a good video on Youtube about why car chips and older nodes are in such demand and why that's not going away anytime soon.
      • There's a good video on Youtube about why car chips and older nodes are in such demand and why that's not going away anytime soon.

        Wow, all great points! Thanks for the Brain Cells!!!

        And I agree that the Sheer Power being slogged-around is likely the driving (heheh!) reason that this is a separate package.

        And you're right about Automotive Drivers being used a lot. When I was designing some Embedded Products that had to drive some fairly beefy solenoids, I immediately started looking at automotive parts by ST and others. They just have all the necessary level-translation, line-driving, and other Protection stuff all figured out!

      • by dgatwood ( 11270 )

          A - Chip includes significant power-related circuitry, which works better at older/lower resolution nodes. For example, if you're moving around a lot of power, like Thunderbolt most definitely does, it's easier to fab the circuits that control the power at an older or lower resolution (bigger) process node, like 90 nm or 130nm. That's (one reason) why many power drivers and other circuitry use the older nodes. You can mix and match nodes (5nm CPU, 90nm TB controller), but only if you're doing chiplets.

          I'd be surprised if the same chip manages power and 40 Gbps data, but I could be wrong. I mean, just providing multiple voltages on the output alone means a buck converter with lots of components. Maybe the chip manages it, but I doubt it is involved in the actual power flow. Also, I would think that ultra-high-speed data and low-resolution chips would tend not to go hand-in-hand, but I could be wrong.

          My guess is that the answer is either:

          • E. They wanted to move off of the Intel part if they could, but wanted to maintain exact pin compatibility with the Intel chip initially, just in case their new silicon didn't work, thus ensuring that they wouldn't have to rev the board layout (or worse, the CPU *and* the board layout) if it didn't work or had serious bugs discovered late in the development cycle.
          • F. They needed different numbers of chips for different hardware configurations, and they didn't want to bake that hardware into a CPU that will be used in multiple hardware configurations. For example, Intel has at least three different controllers: JHL8340 (1 port), JHL8440 (4 ports), and JHL8540 (2 ports).

          Possibly both.

          Guessing from the iFixit teardown of the previous (M1) Air, it looks to me like those duties are actually shared by the M1 SoC, plus a few additional components:

          1. Intel JHL8040R Thunderbolt 4 Retimer (x2) (basically a Thunderbolt 4 extender/repeater)

          2. Texas Instruments CD3217B12 – USB and power delivery IC

          3. (Maybe) Apple 1096 & 1097 – Likely PMICs

          4. (Maybe) Siliconix 7655 – 40A battery MOSFET

          https://www.ifixit.com/News/46... [ifixit.com]

  • by awwshit ( 6214476 ) on Tuesday July 26, 2022 @07:04PM (#62736720)

    Apple raises its ARM, gives Intel the finger.

  • I'd be curious to know if they've actually ditched Intel, or whether they've just had Intel private-label either a modest variant of the part they were using or a specifically binned version of it. Obviously Apple would think nothing of dropping Intel if they felt it was in their interest; but they also don't have an obvious incentive to do so unless it's on a part they think they can do better or someone else can do cheaper; and they are a sufficiently influential
  • by quenda ( 644621 ) on Tuesday July 26, 2022 @09:08PM (#62736948)

    I have just received word that the Tim Cook has dissolved the ITU and USB Implementers Forum permanently.

    The last remnants of the old order have been swept away.

    That's impossible. How will the CEO maintain standards cooperation without the bureaucracy?

    The product managers now have direct control over their territories.

    Fear will keep the suppliers in line - fear of this $2T Behemoth.
