Apple Replaces Last Remaining Intel-Made Component In M2 MacBook Air (macrumors.com) 87
In the M2 MacBook Air, Apple has replaced an Intel-made component responsible for controlling the USB and Thunderbolt ports with a custom-made controller, meaning the last remnants of Intel are now fully out of the latest Mac. MacRumors reports: Earlier this month, the repair website iFixit shared a teardown of the new "MacBook Air," revealing a look inside the completely redesigned machine. One subtle detail that went largely unnoticed was that unlike previous Macs, the latest "MacBook Air" introduces custom-made controllers for the USB and Thunderbolt ports. iFixit mentioned it in their report, noting they located a "seemingly Apple-made Thunderbolt 3 driver, instead of the Intel chips we're familiar with." The new component was shared on Twitter earlier today, where it received more attention. Few details are known about the controllers, including whether they're custom-made by Apple or a third party.
Re: (Score:1, Troll)
I was looking to get a new laptop not too long ago, and if I didn't need Thunderbolt support, would have likely gone with a Ryzen based system.
And that is ontopic, how, exactly?
Re: (Score:2, Insightful)
I was looking to get a new laptop not too long ago, and if I didn't need Thunderbolt support, would have likely gone with a Ryzen based system.
And that is ontopic, how, exactly?
Really, Mods?!?
Parent posts a clearly Flamebait Comment. I call it out as "Offtopic" (pretty mild rebuke, actually).
But *I* am Punish-Modded "Troll"?!?
Seriously?
Re:Now license it to AMD (Score:5, Insightful)
I have mod points at the moment. That's not the problem. The problem is that nearly all the people with moderator points who see an interesting thread want to post in it, and the threads they don't want to post in, they likely don't want to read, either. So the only people who can use their mod points are the folks who have multiple sockpuppet accounts, and thus can afford to not post with one of them for a while. So yes, sockpuppet accounts are a problem, but the problem runs a bit deeper, and is a bit more fundamental than just "sockpuppet accounts have mod points".
Re:Now license it to AMD (Score:5, Funny)
I once posted a logical fix to the mod point problem and mods. It was modded down.
Re: (Score:2)
Mod parent. Down, of course.
Re: (Score:1)
Nearly all of the people with mod accounts are sock puppets. It’s becoming a problem.
The entire commenting structure of Slashdot, with "Mod Points" and Attack-Cowards, is pretty fucking ridiculous.
Add to that the rampant Sockpuppetry, and finally, No Unicode Support(!), text styling that is more raw-HTML chore (so much fun on a phone!) than feature, and the whole frickin' raison d'être (no Unicode, remember?) of the site is getting really hard to grok. . .
Re: (Score:3)
Re: (Score:2)
Flamebait? I don't see it. I would not have moderated that comment, personally. Probably wouldn't have modded your comment either. You should probably just take a deep breath and not worry so much about slashdot moderation. In fact, I probably should take a little break from slashdot myself.
It just looked to me like you were setting up a platform war. Sorry if I misunderstood.
Re:Now license it to AMD (Score:5, Funny)
Re: (Score:2)
I was looking to get a new laptop not too long ago, and if I didn't need Thunderbolt support, would have likely gone with a Ryzen based system.
And that is ontopic, how, exactly?
Uh, because it's about a non-Intel Thunderbolt implementation? An implementation in the M2 MacBook Air?
Re: (Score:2)
Upcoming Zen4 motherboards seem to be supporting TB4.
Re: (Score:1)
TB3 and TB4 licensing are free.
Upcoming Zen4 motherboards seem to be supporting TB4.
Existing Apple Silicon ones do that already.
Re: (Score:2)
Existing Apple Silicon ones do that already.
How is that relevant to
I was looking to get a new laptop not too long ago, and if I didn't need Thunderbolt support, would have likely gone with a Ryzen based system.
They were interested in AMD TB support, not Apple TB support.
One could say "Intel ones do that already" too, but it'd be equally irrelevant.
Re: (Score:1)
What is Thunderbolt actually used for?!
Re:Now license it to AMD (Score:5, Informative)
Docking stations are back, baby. Not those sweet old clunky things that sat under your laptop, but a single cable that connects power, 2-3 video outputs (real ones too, not those compressed-over-USB ones), plus USB 3 ports, sound, a card reader, all the fixins. They work well in my experience.
External GPUs, high-speed SSD drive bays, high-speed capture devices, anything where 40Gbps of bandwidth to an external device would be handy.
Re: (Score:3)
Yep.
I have a little one, about the size of one of the famous antique Nokia phones, and it's got ports galore on a single USB-C, with PD passthrough.
I've used it on my gaming laptop, on my Steam Deck, on my smartphone, it "just works".
I have horrible memories of early USB2 VGA adaptors and docking stations and their incredibly poor performance and flakiness, but nowadays I use that adaptor to pass my Vive Pro VR headset through to my laptop, no problem at all.
It's even got a VGA out alongside everything else.
Re: (Score:3)
Literally everything. Want to connect two 4K displays to a single connector, then plug an SSD into the end of the chain, and still have bandwidth? A laptop dock with numerous HDMI, USB, and ethernet ports? Thunderbolt. A 10Gb adapter with full throughput? Won't even break a sweat.
PCIe (Score:2)
What is Thunderbolt actually used for?!
Anything that one could plug into a PCIe bus.
Thunderbolt is merely multiplexing a few DisplayPort video outputs and PCIe packets onto a single USB-C cable.
So all the bells and whistles that the others are mentioning in this thread: multiple displays, docks with all the ports that are missing on the ultra-thin laptop, external GPUs for gaming, etc.
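On Linux you can actually watch that multiplexing happen, because tunneled devices eventually show up on the ordinary PCIe bus. A minimal sketch, assuming a Linux box with the bolt daemon installed and a Thunderbolt dock attached (the 0-1 device ID is just an example; yours will differ):

    # list Thunderbolt devices known to the bolt connection manager
    boltctl list

    # the same devices, as exposed by the kernel's thunderbolt bus driver
    ls /sys/bus/thunderbolt/devices
    cat /sys/bus/thunderbolt/devices/0-1/device_name

    # once the tunnel is authorized, the dock's controllers appear as
    # plain PCIe devices (USB host controllers, NICs, etc.)
    lspci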
Custom chips (Score:3)
I'm sure this is okay if you like running macos.
Custom silicon is usually a nightmare for hardware hackers who want to expand usage of their machines.
Re: (Score:1)
I'm sure this is okay if you like running macos.
Custom silicon is usually a nightmare for hardware hackers who want to expand usage of their machines.
A little Bus-sniffing and all is figured-out.
It's not that complex of a chip. The Asahi folks will take about 2 hours to reverse-engineer it.
Re: (Score:1)
It would be nice if Apple put in UEFI as an option. The closest to this is the environment that Parallels provides, which can boot Ubuntu and other arm64 based operating systems.
Re: (Score:2)
It would make all the work that the Asahi Linux people are doing a lot easier.
Re: (Score:2)
It would make all the work that the Asahi Linux people are doing a lot easier.
Agreed. But they seem to be doing ok so far!
Re:Custom chips (Score:4, Informative)
These days OSX is closer to classic unix than Linux is currently. The thing that really pisses me off about Linux is the inconsistency, even on the same distribution across different architectures. Take a headless Ubuntu x64 box and something tiny like a Raspberry Pi or a RISC-V board. You'd think setting a static IP address would be the same across these machines? It's still Ubuntu 22 based, but nope, you won't find any real docs. Just a stickied post from some forum where some brave soul has figured out which ifconfig replacement they're using this week. NetworkManager? Connman? Netplan? Sometimes it's not even those three but something different. You know what they NEVER FUCKING USE ANYMORE? Good old ifconfig. No, that makes too much goddamn sense.
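For what it's worth, you can usually work out which of those frameworks is actually in charge before going forum-diving. A rough sketch, with the caveat that unit and file names vary by distro and release:

    # which network services are actually running?
    systemctl is-active NetworkManager systemd-networkd connman

    # on Ubuntu, netplan is just a front end; its YAML says which
    # backend renders the config ("renderer: networkd" or NetworkManager)
    cat /etc/netplan/*.yaml

    # if systemd-networkd is in charge, this lists managed interfaces
    networkctl list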
Re:Custom chips (Score:4, Insightful)
Re: (Score:2, Interesting)
Because ifconfig worked for decades. There was no reason for a replacement. The ip command does the exact same thing but with a slightly different syntax, for... reasons? Same with nslookup, traceroute, and route. They worked for decades, but they were "old" and had to be rewritten.
Googling for help on these topics now requires a date parameter. Setting a static IP in 2014? Not relevant today. 2017? Still not relevant. 2021? Might probably work. Meanwhile I bring up an OpenBSD box and don't have to worry
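For the simple case the two really are just a syntax apart, though ip also covers things net-tools never learned. A minimal sketch, assuming an interface named eth0 and example addresses (both need root, and neither change survives a reboot — persistence is exactly where the distros diverge):

    # net-tools style
    ifconfig eth0 10.84.0.5 netmask 255.255.255.0 up

    # iproute2 style
    ip addr add 10.84.0.5/24 dev eth0
    ip link set eth0 up

    # ...but ip also handles what ifconfig can't, e.g. a second address
    # on the same interface without eth0:0-style aliases
    ip addr add 10.84.0.6/24 dev eth0
    ip addr show eth0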
Re:Custom chips (Score:4, Insightful)
The ip command does the exact same thing
No, it doesn't? That was the whole point of it?
Re: (Score:2)
No, it doesn't? That was the whole point of it?
It doesn't allow me to set static ip addresses from the command line using a slightly different syntax than ifconfig?
Re: (Score:3)
Use cases have evolved (Score:5, Insightful)
Because ifconfig worked for decades. There was no reason for a replacement.
Yes, there is. In short: network configuration is nowadays very dynamic.
ifconfig dates back to an era where you merely had to set a fixed IP on your Ethernet interface, of which there was exactly one on your motherboard. At best you got an extra network card with one or two ports, and BIOS options let you disable the onboard one.
In short, ifconfig was good for "set eth0 to 10.84.0.1".
Fast-forward to modern days, and the vast majority of users have laptops and other hardware where the network is dynamic.
That means wifi, that means WAN over 3G/4G/5G, that means docks, that means USB network adapters (i.e., purely plug-and-play), etc., plus starting various VPNs simultaneously on an as-needed basis.
Workstations themselves now feature multiple ethernet interfaces over buses that make no guarantee about which will show up first (numbers like eth0, eth1, etc. don't make sense anymore).
You basically have network interfaces randomly popping in and out, which need to be configured ad hoc.
One could handle that with a fuckton of bash or perl scripts and dozens of additional tools (iwconfig, etc.), but that would mean each distro has its own set of custom scripts.
Instead, a couple of frameworks for dynamically managing the network have emerged. NetworkManager and ConnMan are the ones that have gained the most traction, with others in the tail end.
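As a taste of what such a framework buys you, here is roughly how the static-IP case and the "prefer home wifi over public wifi" case look with NetworkManager's CLI. A sketch only; the interface, connection, and SSID names are made up:

    # a persistent static-address profile on a wired interface
    nmcli con add type ethernet ifname eth0 con-name lab \
        ipv4.method manual ipv4.addresses 10.84.0.5/24 \
        ipv4.gateway 10.84.0.1 ipv4.dns 10.84.0.1

    # known wifi networks with priorities: home wins when both are visible
    nmcli con modify HomeWifi   connection.autoconnect-priority 20
    nmcli con modify PublicWifi connection.autoconnect-priority 10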
The ip command does the exact same thing but with a slightly different syntax, for... reasons? Same with nslookup, traceroute, and route. They worked for decades, but they were "old" and had to be rewritten.
No. Not because they were old. Because:
- they came from the older, much simpler days of "set a static IP on a single interface with a standard name"
- the networking stack in the Linux kernel has itself changed (also to adapt to the modern needs of networking), and the "ip" suite of tools is geared toward those newer kernel interfaces.
How fucking dare shit like systemd make changes to my resolv.conf
Because DHCP, VPNs, etc. are all things that can change the way your computer resolves network names.
So resolv.conf is not set in stone anymore, but under the control of the current network manager (which in turn can also override its content with entries of your choosing).
Also, systemd is generally designed for simple unattended autoconfiguration.
Its network management is best suited to a container that just needs to bring up its virtual network and not deal with complex networking after booting, so it's a very poor choice to deploy on a user machine.
Googling for help on these topics now requires a date parameter. Setting a static IP in 2014? Not relevant today. 2017? Still not relevant. 2021? Might probably work.
Personal tip:
Go check Arch Linux's documentation (and I say that as someone running openSUSE Tumbleweed and Debian; Manjaro ARM on a Pinebook Pro is the closest I've come to Arch).
It's well maintained, up to date, and clear. It clearly documents NetworkManager and ConnMan (and was how I got eduroam to work in my early days on Sailfish OS).
Forget about Google; most of the time the top answers will point to some StackExchange or similar Q&A forum which at best will be completely outdated, and at worst plain wrong or cargo-culted.
Meanwhile I bring up an OpenBSD box and don’t have to worry about ..
...and will not be able (not without a ton of custom scripts) to have that laptop automatically connect to a specific set of public networks when it sees them (but with a lower priority than your home network) and then automatically start a VPN over them.
I am not making this up, this is how an entire country (Switzerland) is dealing with students: in addition to the interoperable eduroam that y
Re: (Score:2)
> Fast-forward to modern days, and the vast majority
> of users have laptops and other hardware where the
> network is dynamic.
And even in these "modern days" the vast majority of Linux boxes are servers that will sit in a rack or on a hypervisor, in a network environment that will remain unchanged if not for the lifetime of the server, then definitely for the lifetime of the service you're running on it. And if you're changing the networking, you're probably reformatting and rebuilding the whole thing. So if
Re: (Score:2)
Re: (Score:1)
Why would you use ifconfig if it's incompatible with Linux's networking model?
In what way is ifconfig incompatible with Linux's networking model? And why didn't anyone notice this for 30 years?
Re: (Score:2)
Re: (Score:1)
OK, you're just speaking vaguely on purpose. If you have problems with ifconfig, you should say what they are; otherwise gtfo.
Re: (Score:3)
These days OSX is closer to classic unix than Linux is currently.
Not surprising, considering macOS is a certified UNIX system [opengroup.org].
Re: (Score:2)
I run Debian Buster on both rpi4 and amd64 platforms and the only difference is how kernel commandline parameters get set (and repo strings obviously). I literally have a switch in my puppet configs to diff amd64 and aarch64. /etc/network/interfaces works as expected.
Raspbian and Ubuntu like to be weird, and that's their right. So I don't use them.
This has nothing to do with Linux, which is still a monolithic kernel like UNIX, not a strange mach/xnu sorta microkernelish thingamabob.
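For reference, the classic Debian setup the parent is describing is just a few lines of ifupdown config. A minimal static stanza with example addresses (bring it up with ifup eth0 afterwards):

    # /etc/network/interfaces (excerpt)
    auto eth0
    iface eth0 inet static
        address 10.84.0.5
        netmask 255.255.255.0
        gateway 10.84.0.1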
Re: (Score:2)
I pity anyone who thinks they're a hardware hacker trying to get the most of their machines, and then goes out and buys a Mac to hack...
Why is this not integrated? (Score:2)
My biggest question is: Why is this not just part of the M2 die?
Any chip designers care to speculate?
Re:Why is this not integrated? (Score:5, Informative)
A - Chip includes significant power-related circuitry, which works better at older/lower resolution nodes. For example, if you're moving around a lot of power, like Thunderbolt most definitely does, it's easier to fab the circuits that control the power at an older or lower resolution (bigger) process node, like 90 nm or 130nm. That's (one reason) why many power drivers and other circuitry use the older nodes. You can mix and match nodes (5nm CPU, 90nm TB controller), but only if you're doing chiplets.
B - They had a 3rd party design the chip for them, and they wanted to reduce the co-dependence. So they make the CPU, 3rd party makes the TB controller. If they're late, no delay on the CPU.
C - They want the option to dual-source the TB controller. Perhaps they think their fab might not be able to make enough. As a separate chip, they can stick an Intel controller there, or one of their own (or 3rd party) and have more supply if one or the other runs out of chips.
D - Cost. The newest nodes (5nm) cost $$$$$$$ per piece. Older nodes like 40nm, 90nm or 130nm are much much cheaper. Yes you get more for your money, but it's not a linear cost. The newest nodes are MUCH more expensive.
But A is probably the answer. TB, USB, PCIe are all power-related, and their circuits work better on older (bigger) nodes. There's a good video on Youtube about why car chips and older nodes are in such demand and why that's not going away anytime soon.
Re: (Score:2)
There's a good video on Youtube about why car chips and older nodes are in such demand and why that's not going away anytime soon.
Wow, all great points! Thanks for the Brain Cells!!!
And I agree that the Sheer Power being slogged-around is likely the driving (heheh!) reason that this is a separate package.
And you're right about Automotive Drivers being used a lot. When I was Designing some Embedded Products that had to drive some fairly beefy solenoids, I immediately started looking at automotive parts by ST and others. They just have all the necessary level-translation, line-driving, and other Protection stuff all figured out!
Re: (Score:1)
And that is ontopic, how, exactly?
Because it is an engineering decision as to why Apple would go to the trouble of creating a separate component, rather than just throwing the Intel chip overboard and integrating the functionality into the M2 SoC.
Sounds much more on-topic than whether someone chooses a MacBook Pro or a Ryzen system.
Re: (Score:2)
A - Chip includes significant power-related circuitry, which works better at older/lower resolution nodes. For example, if you're moving around a lot of power, like Thunderbolt most definitely does, it's easier to fab the circuits that control the power at an older or lower resolution (bigger) process node, like 90 nm or 130nm. That's (one reason) why many power drivers and other circuitry use the older nodes. You can mix and match nodes (5nm CPU, 90nm TB controller), but only if you're doing chiplets.
I'd be surprised if the same chip manages power and 40 Gbps data, but I could be wrong. I mean, just providing multiple voltages on the output alone means a buck converter with lots of components. Maybe the chip manages it, but I doubt it is involved in the actual power flow. Also, I would think that ultra-high-speed data and low-resolution chips would tend not to go hand-in-hand, but I could be wrong.
My guess is that the answer is either:
Re: (Score:2)
A - Chip includes significant power-related circuitry, which works better at older/lower resolution nodes. For example, if you're moving around a lot of power, like Thunderbolt most definitely does, it's easier to fab the circuits that control the power at an older or lower resolution (bigger) process node, like 90 nm or 130nm. That's (one reason) why many power drivers and other circuitry use the older nodes. You can mix and match nodes (5nm CPU, 90nm TB controller), but only if you're doing chiplets.
I'd be surprised if the same chip manages power and 40 Gbps data, but I could be wrong. I mean, just providing multiple voltages on the output alone means a buck converter with lots of components. Maybe the chip manages it, but I doubt it is involved in the actual power flow. Also, I would think that ultra-high-speed data and low-resolution chips would tend not to go hand-in-hand, but I could be wrong.
My guess is that the answer is either:
Possibly both.
Guessing from the iFixit teardown of the previous (M1) Air, it looks to me like those duties are actually shared by the M1 SoC, plus a few additional components:
1. Intel JHL8040R Thunderbolt 4 Retimer (x2) (basically a Thunderbolt 4 extender/repeater)
2. Texas Instruments CD3217B12 – USB and power delivery IC
3. (Maybe) Apple 1096 & 1097 – Likely PMICs
4. (Maybe) Siliconix 7655 – 40A battery MOSFET
https://www.ifixit.com/News/46... [ifixit.com]
Re: Why is this not integrated? (Score:2)
Apple to Intel (Score:5, Funny)
Apple raises its ARM, gives Intel the finger.
Re: (Score:2)
Apple raises its ARM, gives Intel the finger.
Perfect!
Re: (Score:2)
It was RISCy but Apple pulled it off.
Re: (Score:2)
It was RISCy but Apple pulled it off.
Good one, too!
Re: (Score:1)
Apple bites its Thumb at Intel
Re: (Score:3)
You have any actual stats that Apple's user base is shrinking?
I know there's a lot of rep for form over function at Apple (not helped by the era of Jony Ive), but I can take my M1 laptop to work, program all day, and take it home without plugging it in and without the fans turning on once.
Say what you want about part of the Apple base; that's a pretty cool laptop.
Return of the PC or Mac debate (Score:2)
You have any actual stats that Apple's user base is shrinking?
To be fair, Apple's Mac sales doubled when they switched to Intel. The entire PC-or-Mac debate disappeared; you could have both - natively - on the same machine with Apple's Boot Camp utility.
Today, although Microsoft Windows is available for ARM, there is no Boot Camp. So the PC or Mac debate has reappeared, and we saw how that went last time.
Hopefully an ARM Boot Camp that allows Windows on ARM is planned, and they are just waiting for the Microsoft/Qualcomm exclusivity deal to end.
Is Windows ARM really a thing?! (Score:2)
Today, although Microsoft Windows is available for ARM,
Is it really a thing?
I kind of understood that the (only) big appeal of Windows is that you can run a giant pile of legacy software, all in the form of proprietary blobs.
That's partially due to Microsoft trying to keep some modest level of API backward compatibility, and partially due to always having run on the same lineage of x86 architecture, backward compatible at the instruction-set level all the way back to the original 8088*.
Hopefully an ARM Boot Camp that allows Windows ARM is planned, that they are just waiting for the Microsoft/Qualcomm exclusive deal to end.
But if you're going to be relying on emulation anyway to access all the legacy
Re: (Score:2)
I kind of understood that the (only) big appeal of Windows is that you can run a giant pile of legacy software ...
There is also more new software for Windows than macOS.
... all in the form of proprietary blobs. That's partially due to Microsoft trying to keep some modest level of API backward compatibility, and partially due to always having run on the same lineage of x86 architecture, backward compatible at the instruction-set level all the way back to the original 8088*.
Sort of. There has been one architecture change: 32-bit x86 to 64-bit x86. If one was very lucky, it was a recompile of the existing code; more likely, some changes were necessary to make the code 64-bit clean. ARM is a similar effort. Again, if one is very lucky, it is a recompile of the existing code; more than likely, some changes to make the code cross-platform clean will be needed. In any case, most of the actively supported 32-bit code was cleaned up
Re: (Score:2)
the PPC vs. Intel debate is VERY different from the Apple Silicon (aka ARM) vs. Intel one, since one is incapable of running both sets of applications without a significant amount of translation (which also incurs a significant performance hit), vs. one that COULD run some applications natively with a minimal amount of translation (and even with full translation the performance hit is so minor
Re: (Score:2)
Apple's sales doubled mostly because it allowed for a MUCH easier transition from Windows to OSX, so more applications were available.
The doubling of Mac users that I am referring to was from the plateau established by Mac OS X under PowerPC, not the lower level of Mac OS Classic (9.x). The doubling I am referring to occurred entirely under Intel based Macs.
the PPC vs. Intel debate is VERY different from the Apple Silicon (aka ARM) vs. Intel one, since one is incapable of running both sets of applications without a significant amount of translation (which also incurs a significant performance hit)
Yes, Rosetta 2 allows for a one-time binary-to-binary translation on first execution. Note that not all code can be translated; Rosetta 2 has limitations. Hence the need for a native ARM Windows. However, that is a different topic. It's not Intel vs. ARM; it's Windows or Mac that is the pr
Re: (Score:2)
Re: (Score:1)
It's easy to ditch the USB controller when you're a company who refuses to give people what they want: USB ports!
AFAICT, every single non-laptop Mac still has "USB" (USB-A) ports. And every single USB-C port is but a $2 adapter away from being a precious crippled little USB-A port.
Grow up. Time marched on. Try to keep up!
Re:Easy to ditch when.... (Score:5, Informative)
It's easy to ditch the USB controller when you're a company who refuses to give people what they want: USB ports!
AFAICT, every single non-laptop Mac still has "USB" (USB-A) ports. And every single USB-C port is but a $2 adapter away from being a precious crippled little USB-A port.
Grow up. Time marched on. Try to keep up!
The problem is, every single USB-A thumb drive that somebody gives you for any reason is a $2 adapter (that you don't have with you) away from connecting to your USB-C port. As much as I think USB-C is a great port, I do still wish Apple included at least one USB-A port in every machine, especially laptops.
Re: (Score:2)
It's easy to ditch the USB controller when you're a company who refuses to give people what they want: USB ports!
AFAICT, every single non-laptop Mac still has "USB" (USB-A) ports. And every single USB-C port is but a $2 adapter away from being a precious crippled little USB-A port.
Grow up. Time marched on. Try to keep up!
The problem is, every single USB-A thumb drive that somebody gives you for any reason is a $2 adapter (that you don't have with you) away from connecting to your USB-C port. As much as I think USB-C is a great port, I do still wish Apple included at least one USB-A port in every machine, especially laptops.
Sorry. As an owner of the "incoming" technology, it falls to you to have a couple of USB-C to USB-A widgets in your stash. I don't know of anyone who travels with a laptop that doesn't have some "luggage" (backpack, briefcase, fanny pack, computer bag, etc.) with them whenever they have the laptop along. The constant bullshit about "What if I forget...?" is crap. I bet you never forget the AC adapter, do you?
Re: (Score:2)
It's easy to ditch the USB controller when you're a company who refuses to give people what they want: USB ports!
AFAICT, every single non-laptop Mac still has "USB" (USB-A) ports. And every single USB-C port is but a $2 adapter away from being a precious crippled little USB-A port.
Grow up. Time marched on. Try to keep up!
The problem is, every single USB-A thumb drive that somebody gives you for any reason is a $2 adapter (that you don't have with you) away from connecting to your USB-C port. As much as I think USB-C is a great port, I do still wish Apple included at least one USB-A port in every machine, especially laptops.
Sorry. As an owner of the "incoming" technology, it falls to you to have a couple of USB-C to USB-A widgets in your stash. I don't know of anyone who travels with a laptop that doesn't have some "luggage" (backpack, briefcase, fanny pack, computer bag, etc.) with them whenever they have the laptop along. The constant bullshit about "What if I forget...?" is crap. I bet you never forget the AC adapter, do you?
I have an M1 MacBook Pro. Depending on what I'm doing, I could lose the power supply for a couple of days and not miss it unless I have to use my laptop's battery to charge my phone. So the answer is "That depends on whether I have my camera bag with me or not." :-)
Re: (Score:2)
It's easy to ditch the USB controller when you're a company who refuses to give people what they want: USB ports!
AFAICT, every single non-laptop Mac still has "USB" (USB-A) ports. And every single USB-C port is but a $2 adapter away from being a precious crippled little USB-A port.
Grow up. Time marched on. Try to keep up!
The problem is, every single USB-A thumb drive that somebody gives you for any reason is a $2 adapter (that you don't have with you) away from connecting to your USB-C port. As much as I think USB-C is a great port, I do still wish Apple included at least one USB-A port in every machine, especially laptops.
Sorry. As an owner of the "incoming" technology, it falls to you to have a couple of USB-C to USB-A widgets in your stash. I don't know of anyone who travels with a laptop that doesn't have some "luggage" (backpack, briefcase, fanny pack, computer bag, etc.) with them whenever they have the laptop along. The constant bullshit about "What if I forget...?" is crap. I bet you never forget the AC adapter, do you?
I have an M1 MacBook Pro. Depending on what I'm doing, I could lose the power supply for a couple of days and not miss it unless I have to use my laptop's battery to charge my phone. So the answer is "That depends on whether I have my camera bag with me or not." :-)
While an impressive real-world example of the incredible battery-life of Apple Silicon Laptops, I still think it is reasonable to expect that, for the next few years before USB-C becomes the Default, and the USB-A becomes as oft-encountered as a Parallel Port, you should take a minute to at least throw one of those tiny, inexpensive USB adapters in your pocket, in your car, or somewhere.
https://www.amazon.com/Adapter... [amazon.com]
I mean, really. It just doesn't make for a compelling reason to have a single connector d
Meanwhile (Score:3)
I still think it is reasonable to expect that, for the next few years before USB-C becomes the Default, and the USB-A becomes as oft-encountered as a Parallel Port, you should take a minute to at least throw one of those tiny, inexpensive USB adapters in your pocket, in your car, or somewhere.
Meanwhile, countless other companies have jumped onto the USB-C bandwagon (because, after all, it is a very useful port) but still also feature at least a USB-A port (and often a few other legacy ports: often HDMI, sometimes even a pop-out ethernet one).
Everyone else: cool new stuff, we're supporting it (but you can also plug legacy stuff if needed).
Apple: we're switching to the new stuff exclusively and you ARE GOING to like it.
I mean, really. It just doesn't make for a compelling reason to have a single connector dictate packaging design decisions; especially not on something like a MacBook Air. And even on the MacBook Pro, it isn't worth dedicating precious I/O resources and Connector-Space on a USB-A Port, which has, at best, 1/4 the I/O bandwidth of a TB 3 or 4 Port, and with none of the additional versatility
Come on, we aren't talking about adding USB-A ports to smartwatches. We're talki
Re: (Score:2)
But hey, what should I expect from a manufacturer that has ditched audio jacks as fast as they could.
What are you talking about?
AFAIK, every single Mac, including Laptops, has at least an audio output (3.5 mm) headphone/line jack. And the newer ones are specifically designed to be able to drive higher-impedance earbuds/IEMs/headphones.
Actually, I think they may be TRRS jacks, that support microphone input as well. But I am not positive of that.
The laptops all have multiple-microphone arrays, multi-driver speakers that are actually listenable, and the newest ones all(?) have that wide-format camera, with
Audio jacks (Score:2)
But hey, what should I expect from a manufacturer that has ditched audio jacks as fast as they could.
What are you talking about?
I am poking fun at Apple for removing the standard audio jack from their smartphones.
So something that was instantaneous (want to put your music on speakers or anything else? just plug in a cable)
is now becoming a much more complex task
(at best you need to set up and authorize a Bluetooth connection, which can come with its own can of worms; at worst you spend a bunch of time searching online help resources to debug why two of your devices refuse to pair with each other).
AFAIK, every single Mac, including Laptops, has at least an audio output (3.5 mm) headphone/line jack.
So, Apple is okay with puttin
Re: (Score:2)
While an impressive real-world example of the incredible battery-life of Apple Silicon Laptops, I still think it is reasonable to expect that, for the next few years before USB-C becomes the Default, and the USB-A becomes as oft-encountered as a Parallel Port, you should take a minute to at least throw one of those tiny, inexpensive USB adapters in your pocket, in your car, or somewhere.
The thing is, Apple started its transition to USB-C seven years ago, and we're still not even close to USB-C being the default. Seven years in, it feels like we're not much closer than when we started, which makes Apple's decision to drop USB-A ports in 2015 seem seriously problematic. You don't replace an entrenched standard overnight, and USB-A had almost two decades as the main connector before Apple shipped its first USB-C ports (1996 to 2015). May
Re: (Score:2)
Maybe in another decade, USB-C will be the default, but making Mac users suffer for almost two decades just so Apple could save a single port is, frankly, abusive.
And that is Apple's fault, how, exactly?
AFAICT, it is the "Main-Stream Industry" that is taking far, far, FAR too long to transition to USB-C. It's like somewhere in China there's a veritable mountain of already-manufactured PCB-Mount USB-A and USB-B connectors, and by Shiva, they aren't going to let the crapola laptop and peripheral Designers have a decent cost on USB-C Connectors until that gigantic mountain of old connectors and cables is used up!
Seriously, other than appearing on a bunch of aging periph
Re: (Score:2)
Maybe in another decade, USB-C will be the default, but making Mac users suffer for almost two decades just so Apple could save a single port is, frankly, abusive.
And that is Apple's fault, how, exactly?
AFAICT, it is the "Main-Stream Industry" that is taking far, far, FAR too long to transition to USB-C.
ROFL. No. This isn't an industry problem. It's an Apple problem. Apple grossly underestimated just how much USB infrastructure the average person uses, largely because Apple's market tends to heavily lean towards people who don't ever plug anything into their computers. As a result, they tend to run ahead of the curve very badly with pretty much every transition (except USB-C on iPhone, because that would cost them too much MFi revenue).
The reality of the matter is that most people don't replace stuff
Re: (Score:2)
USB-C isn't that great. It breaks WAY too easily. That fragile thin little duck-bill inside the center of the host port. Instead of putting it in the cable where it's easy to replace, they put it embedded in the host hardware where it's damn near impossible to replace when it breaks.
What the heck do you do with your devices? I've abused the heck out of USB-C ports and I've never broken one, even after multiple cable trips. Maybe, just maybe the problem is that whoever manufactured your device used a crappy, low-quality connector with a thin shell that allows too much vertical play.
The only safe way to avoid having a center pin on the device side would be to make the connector MUCH bigger so that it can have a pin and a ring around that pin. I doubt anybody would adopt such a connect
Has anyone decapped one? (Score:2)
Intel will no longer be of any concern to us. (Score:4, Funny)
I have just received word that Tim Cook has dissolved the ITU and the USB Implementers Forum permanently.
The last remnants of the old order have been swept away.
That's impossible. How will the CEO maintain standards cooperation without the bureaucracy?
The product managers now have direct control over their territories.
Fear will keep the suppliers in line - fear of this $2T Behemoth.