
Apple Is Going To Make It Harder to Hack iPhones With Zero-Click Attacks 60

Apple is going to make one of the most powerful types of attacks on iPhones much harder to pull off in an upcoming update of iOS. From a report: The company quietly made a new change in the way it secures the code running in its mobile operating system. The change is in the beta version of the next iOS version, 14.5, meaning it is currently slated to be added to the final release. Several security researchers who specialize in finding vulnerabilities in and crafting exploits for iOS believe this new mitigation will make it much harder for hackers to take control of an iPhone with a technique known as a zero-click (or 0-click) exploit, which allows a hacker to take over an iPhone with no interaction from the target. Apple also told Motherboard it believes the changes will impact 0-click attacks.

"It will definitely make 0-clicks harder. Sandbox escapes too. Significantly harder," a source who develops exploits for government customers told Motherboard, referring to "sandboxes" which isolate applications from each other in an attempt to stop code from one program interacting with the wider operating system. Motherboard granted multiple exploit developers anonymity to speak more candidly about sensitive industry issues. Like the name suggests, zero-click attacks allow hackers to break into a target without needing the victim to interact with anything, such as a malicious phishing link. This means that the attack is generally harder for the targeted user to detect. These are generally very sophisticated attacks. These attacks may now become much rarer, according to several security researchers who look for vulnerabilities in iOS.

Comments Filter:
  • Maybe the world at large should be cribbing security for their own systems from Apple if it's that good?

    • Re: (Score:2, Funny)

      by Anonymous Coward

      Maybe the world at large should be cribbing security for their own systems from Apple if it's that good?

      They are improving the security of their product; why does this annoy you?

    • by jellomizer ( 103300 ) on Monday February 22, 2021 @01:58PM (#61090468)

      Companies in general should be more open and willing to share their security mechanisms with each other.
      The bad actors tend to share their information with each other, freely and openly. This allows them to be one step ahead of the systems that are trying to not get hacked.
      How many of the major hacks in history came from some slightly tweaked script? The hackers didn't worry about stepping on someone else's patent or IP. They got the code, tweaked it a bit, and deployed it.

      If companies decided to be free and open with their security, and all involved were active with it, then we might be able to get a lot more secure systems.

      • Right now I'm engaged in a 13-week exploitation event called a CTF.
        Each week the challenges get a little bit harder, as a new protection is added or weakness is removed. To progress, each week I have to learn or use a different method to get around some protection mechanism.

        For example in the first week I might do a very simple buffer overflow.
        Perhaps the program is supposed to compare the entered password to the correct password. I might attack that by overwriting the "string compare" function pointer wi

    • Apple didn't invent Pointer Authentication Codes. They were added to ARMv8.3 back in 2017, so really anything using a modern ARM chip has been able to use them for years now. The catch is that software needs to be updated to use them, and there aren't any extremely visible benefits to users, so the rollout has been slow. Apple doesn't even have them enabled on the ARM Macs and those were basically just released.
      • All of Apple's binaries for their ARM version of macOS are already compiled for arm64e. Early versions of Xcode 12 beta didn't allow compilation for this target, but the latest 12.4 release does. If security that uses this is not enabled yet, I imagine it will be at some point.

  • Still realistically possible then?
    • by Rosco P. Coltrane ( 209368 ) on Monday February 22, 2021 @01:27PM (#61090374)

      It's Apple users we're talking about here. One click is one click too many. They're quite safe if zero-click attacks are protected against.

      • I've always said the perfect iPhone would be a razor-sharp featureless visual-acoustic mirror with a big central Apple logo to drool or cum on. ;)

        The S one castrates you in your pocket, so a non-issue for iUsers.
        And when you put the Max to your ear, it lops your arm clean off. (You're bleeding wrong. It's only a flesh wound!)

    • Of course. Not even Apple is arrogant enough to claim that their devices are outright impossible to crack.
      • Officially, no. But as recently as 2016 one Apple lobbyist* tried to convince me that there were no exploits on iPhones whatsoever. He was actually annoyed that I would even question the infallible security of Apple products.

        * not sure what else to call an official Apple representative at a SpaWar conference, but lobbyist seems to fit.

      • Unless you are the "user", of course.

    • Yes, still realistically possible, but they'll also deploy a more complicated way to disable GPS, WiFi and Bluetooth. So it's a wash.

  • Spoiler (Score:5, Informative)

    by Anonymous Coward on Monday February 22, 2021 @01:26PM (#61090370)

    It makes pointers require authentication using some of the high order bits of a 64-bit address. This means buffer overflow payloads can't jump to arbitrary code sections. (A problem ASLR also tried to help prevent.) Of course now key management has to be rock solid because that's the next place attackers will be looking to crack.

    • Re:Spoiler (Score:4, Insightful)

      by dgatwood ( 11270 ) on Monday February 22, 2021 @01:36PM (#61090390) Homepage Journal

      Who here is having flashbacks to 24-bit addressing and all the chaos when Apple moved to 32-bit addressing? Using address bits for something other than address information feels somehow wrong to me. :-)

      • by mccalli ( 323026 )
        Absolutely my first thought when I heard of this. Going to make the 128bit iPhone 15 fun to develop for...
      • IBM Mainframes have entered the chat. 24Bit, then 31Bit, (1 of the bits was used to indicate it wasn't 24 bit or something like that)
        • Re:Spoiler (Score:4, Insightful)

          by Rockoon ( 1252108 ) on Monday February 22, 2021 @02:40PM (#61090618)

          IBM Mainframes have entered the chat. 24Bit, then 31Bit, (1 of the bits was used to indicate it wasn't 24 bit or something like that)

          If you are to support more than one word size within a wider word such as 32-bit, you would use stop-bit encoding for overall computational efficiency. So 24-bit values would have their 25th bit set, 25-bit values would have their 26th bit set, and so on up to 31-bit values, which would have their 32nd bit set; thus 32-bit values are not supported.

          Of course this is wholly incompatible with using the extra bits for anything after using them all for encoding the stop bit and only the stop bit.

          But honestly, 64-bit addressing ain't going anywhere, even if machines go full-on 128-bit words it doesn't have to be reflected in the address space. There is almost no chance that anyone reading this actually has 64 address lines within their Intel, AMD, Apple, or ARM CPU. We are talking about 16 million terabytes here while the world still struggles with deciding between 0.8%, 1.6%, or 3.2% of a single terabyte for their main system.

          • by Halo1 ( 136547 )

            In fairness, it's about virtual address space, not physical memory size. Even then, at this point in time it indeed seems like we'll never need the full 64 bits for that. And I guess that constitutes my Bill Gates moment.

            16 bits was a big problem, but it took quite a while (1970 to 1990) before we really needed 32 bits. Remember that memory size is exponential in the number of address bits. Currently we are up to maybe 36 bits except for exotic applications, and that is after 30 years. 48 bits is a long, long way away.

            The ratio between memory size and speed has been steadily changing over the decades. In the 1970s, a machine could sweep through all of its (tiny) memory in about 100ms, typically. Today it is a few seconds. At 48

            • It is an idiot decision of the x64 architecture to insist that the upper bits are zero and not to use them.

              It is not a requirement that the upper bits be zero. It is a requirement that the upper bits be considered. This is how you implement a logical 64-bit address space on top of a smaller physical one, after all.

              That's how AMD64 addressing is, and not only do I see no problem with it, I also don't see how it could be any other way, so who's the idiot?

              As far as pointer "tagging" .. understand that that is a bodge to solve only some of the wider problem, the need to tag memory, that is every memory access is ty

                • Sometimes you do not know statically which subtype a pointer points to. So even with a decent language like Java or .Net there is a place for "allocated memories foreknowledge". Then come the real programming languages of the current era: JavaScript and Python.

                  I think the AMD "64-bit" could have been defined as 48-bit quite well. Ignore the top-order bits. Plus a check-and-dereference instruction. The only downside is it slightly limits attempts to make C code secure by randomizing address spaces.

                I like your

      • by tlhIngan ( 30335 )

        Who here is having flashbacks to 24-bit addressing and all the chaos when Apple moved to 32-bit addressing? Using address bits for something other than address information feels somehow wrong to me. :-)

        Chances are it's not likely to happen anytime soon.

        24 bit craziness happened because the original 68000 only had 24 address lines so even though the register was 32-bits, the top 8 bits were ignored. The 68030 made the address bus a full 32-bits wide only a few years after the Mac abused the top 8 bits for it

        • I mean, for a workstation, 128-256GiB of RAM is already pretty normal, and up to 1 or 2 TiB is not outside the realms of possibility. The Mac Pro already has an option for 1.5TiB of RAM.

          Given 5 years of Moore's law, that puts low-end workstations at around 1.3TiB-2.6TiB, and high-end ones at around 15TiB of RAM.

          Of course, that's still well below 256TiB, but it does make this a problem in 11 years' time.

          • We're still talking about mobile phones here so the workstation growth comparison is a bit aggressive.
            • Apple's macOS binaries are compiled for arm64e, so yes, we are talking desktops and workstations here.

      • It's not an address, it's a pointer. Pointers in the past have been implemented as simple addresses, now they're encoded as somewhat more complex data structures that put use to all 64 bits available, rather than only the lower 42.

      • Comment removed based on user account deletion
    • My understanding is that the keys are only in hardware on the ARM CPU, and I believe have some random salt added at system startup. Pointers are signed by special CPU instructions, so there are no keys in memory that software could access. The idea is that an attacker could trick software into corrupting the stack with a buffer overflow and generate a pointer, but because the pointer wasn't written through one of the special pointer-signing instructions, the pointer is useless.
  • Didn't Amazon infamously get to patent one-click purchasing? These hackers should instead patent zero-click shopping and make a "legitimate" living as patent trolls. There are legal ways to make a living as assholes.

    • There are legal ways to make a living as assholes.

      Yes, it's called capitalism.

      • by Tablizer ( 95088 )

        I'm not sure capitalism generates more jerks than socialism, but with more oversight it is possible to reduce jerks either way. There is a break-even point, though, where the cost of oversight is more than the cost of jerk-hood. And I believe the USA is too far on the side of insufficient oversight at this time.

  • No more one-click attacks: now iOS will require you to click twice before running malware.

    "Are you sure you want to run 'FluffyBunnyMalware' on this computer?"
    (click)
    "Are you really really sure you want to run 'FluffyBunnyMalware' on this computer?"
    (click)
    "Thank you- 'FluffyBunnyMalware' has been successfully loaded! Please enter your banking details."

  • by superdave80 ( 1226592 ) on Monday February 22, 2021 @02:35PM (#61090590)

    The company quietly made a new change...

    I don't know why the term 'quietly' keeps getting added to news reports, but it needs to go. It doesn't mean anything. Do you need them to have an employee standing on the street corner yelling, "HERE ARE THE FUCKING CHANGES WE ARE GOING TO MAKE!!!! PAY ATTENTION!!!"?

    • Because it makes for good sensationalist headlines. It makes it look like a company that isn't publicly declaring every single change or test it makes in every single one of its beta versions, in real time, is somehow "hiding" something.

      And besides, everyone here knows that Apple can do no right. Your average basement-dwelling slashdotter would still find a way to disparage and belittle Apple and its customers even if they came up with the freaking cure for cancer.

      Just read the posts in this thread so f

    • I guess quietly means not publishing this on the list of new 'features' such as "aNiMaTeD eMoJiEeZ!!1!!1" (just an example I pulled out of my ass).

  • Is that this is to also stop iPhone owners from jailbreaking their own devices.

    Don't forget that "improving security" includes against YOU.

  • This is excellent. But I don't want to deal with iOS anymore due to its limited functionality. Can we get something like this for Android? I wish Android weren't developed by an ad company, where adding security hurts their own bottom line.
  • Back to Basics (Score:4, Informative)

    by ytene ( 4376651 ) on Monday February 22, 2021 @06:18PM (#61091318)
    If you click through the vice.com article, you end up at an interesting piece over at Citizen Lab [citizenlab.ca].

    To cut to the chase, that Citizen Lab article explains that Apple's iMessage application, unlike many (most) other iOS applications, is not sand-boxed. What this means is that someone at Apple decided that because iMessage was an Apple application and because they controlled the source code and shipped it with iOS, they did not have to obey the same basic security protocols with iMessage that they force on less trusted code.

    It would seem that it did not occur to Apple that someone might be able to craft a malicious iMessage payload that could force an error in their code, allowing a handset to be compromised without user interaction.

    Whilst the novelty of the zero-day vulnerability in iMessage might be something we'd be willing to give some form of allowance for [I haven't seen the details, so it's far too early to say], the root cause of the problem here might actually be some of Apple's own internal security practices.

    I am reminded of the collective surprise we felt way back, when it emerged that Apple's "screening" of iOS applications for the App Store came down to confirming the code had been compiled via Apple's XCode IDE and precious little else.

    Come on, Apple. It's 2021. You really have no excuse for not sand-boxing applications any more. It might be depressing, but today if you're a platform vendor [and to be fair to Apple, we need to make the same demands of Google [Android], Microsoft [Windows] and Torvalds [Linux kernel]], we need our platform providers to be moving towards a model in which *ALL* code is considered hostile.

    If the default configuration of the platform were to run all user-space code in a micro-VM, and the platform developers put some serious effort into ensuring that the security wrapper around that VM could control precisely what the contained code could do, then we might be able to think of our platforms as being a bit more secure.

    I know this is even more old-fashioned, but maybe we should think about assessing software on something like a refreshed version of TCSEC [wikipedia.org]?
  • They'll be still vulnerable to my minus one click attack.

    It attacks you even before you don't click.
