
iPhone Zero-Click Wi-Fi Exploit is One of the Most Breathtaking Hacks Ever (arstechnica.com) 114

Dan Goodin, writing for ArsTechnica: Earlier this year, Apple patched one of the most breathtaking iPhone vulnerabilities ever: a memory corruption bug in the iOS kernel that gave attackers remote access to the entire device -- over Wi-Fi, with no user interaction required at all. Oh, and exploits were wormable -- meaning radio-proximity exploits could spread from one nearby device to another, once again, with no user interaction needed. This Wi-Fi packet of death exploit was devised by Ian Beer, a researcher at Project Zero, Google's vulnerability research arm. In a 30,000-word post published on Tuesday afternoon, Beer described the vulnerability and the proof-of-concept exploit he spent six months developing single-handedly. Almost immediately, fellow security researchers took notice.

"This is a fantastic piece of work," Chris Evans, a semi-retired security researcher and executive and the founder of Project Zero, said in an interview. "It really is pretty serious. The fact you don't have to really interact with your phone for this to be set off on you is really quite scary. This attack is just you're walking along, the phone is in your pocket, and over Wi-Fi someone just worms in with some dodgy Wi-Fi packets." Beer's attack worked by exploiting a buffer overflow bug in a driver for AWDL, an Apple-proprietary mesh networking protocol that makes things like Airdrop work. Because drivers reside in the kernel -- one of the most privileged parts of any operating system -- the AWDL flaw had the potential for serious hacks. And because AWDL parses Wi-Fi packets, exploits can be transmitted over the air, with no indication that anything is amiss.

  • by Big Bipper ( 1120937 ) on Wednesday December 02, 2020 @02:40PM (#60786792)
    Could this be how the police managed to crack some of those iPhones they were so interested in cracking?
    • You are very probably right.

      If they haven't, you can bet they'll be using it now.

      • by 93 Escort Wagon ( 326346 ) on Wednesday December 02, 2020 @03:15PM (#60786988)

        This was patched quite a while ago - it was fixed with iOS 13.5 (current version is 14.2). So it's unlikely to be useful for the cops at this point.

        Of course, we have no idea if anyone other than Ian Beer knew about this. It's notable, though, that the time period involved was also when we found out that iOS exploits were deemed worthless by companies that traffic in such things.

        • It's still 100% useful for iPhones that can't be upgraded past iOS 13.4.

          • by tlhIngan ( 30335 )

            It's still 100% useful for iPhones that can't be upgraded past iOS 13.4.

            Which is none. The only iPhones it would be useful on are iPhones that can't run iOS 13, which I think is up to the iPhone 5, as iOS 13 went fully 64-bit only. So a phone from 2012 or so.

            In fact, everything from the iPhone X and below already has a permanent flaw in the silicon that means it's always possible to bypass the boot process

          • Apparently it was also fixed in 12.4.7, hence the vast majority of operational iPhones should be protected.
          • It's still 100% useful for iPhones that can't be upgraded past iOS 13.4.

            Sorry, no.

            Apple just released iOS 12.4.7, which has this patch. I installed it on my iPhone 6+ a few days ago.

            That covers back to iPhone 5, IIRC.

    • I'd say 90% chance this was not used in the cases you refer to.

      What makes this particularly interesting is that there is no need to touch the device. Law enforcement in possession of a phone, with a warrant, don't need that. Many other less powerful exploits would do just fine.

      Also, this is new. There's no reason to think cops figured this out before Ian Beer did.

      It's possible, but it's unlikely.

      • Law enforcement in possession of a phone, with a warrant, don't need that.

        Phones with a passcode are encrypted by default. They might need that. If you gain root, you have access to the unencrypted filesystem.

        • That doesn't mean you need to be able to take control of it from afar, without touching the phone and without the user knowing that you did anything.

          It has an encryption key, just like a car has a key.
          It's the difference between being able to hot wire a car vs being able to remotely call the car and have it start. It's a heck of a lot easier to hot wire it, to make use of the fact that you have physical access to it!

          Having physical possession of the phone, the cops can not only press buttons, they can conn

          • by raymorris ( 2726007 ) on Wednesday December 02, 2020 @05:58PM (#60787490) Journal

            Here's an example to demonstrate how one would use the unfettered physical access that the FBI or whomever has in these cases. This is what I did, which may need modification to work on a particular version of a particular phone running a particular version of the software. It worked just like this when I did it a long time ago.

            Attach leads to the memory chip to be able to read and write it.
            An RPi works for this.

            Plug in an external keyboard. Except this external keyboard is really an Arduino running a very, very simple program.

            Read the memory from the phone or tablet onto the RPi.
            You don't care if it's encrypted or not.

            The Arduino enters the PIN "0000", which is wrong.
            Read the memory again and see which bytes changed.
            (These are the bytes that count how many wrong PINs have been tried.)

            Write back the original contents of memory to those bytes that changed (except for a couple of bytes like the frame pointer, leave those alone, or not).
            This sets the "incorrect PIN attempts" counter back to zero!
            We don't have to understand what's in memory. It could be encrypted, we don't care - we just set it back to whatever it was before we tried an incorrect PIN.

            The Arduino tries 0001.

            Reset the bytes of memory that we recorded earlier, the "incorrect PIN attempts" counter.

            The Arduino tries 0002.

            Reset the bytes of memory that we recorded earlier, the "incorrect PIN attempts" counter.

            That cracked the device overnight when I did it.

            When you have unfettered physical access, you can do a lot of things.
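
            A toy simulation of that snapshot/diff/restore loop, for anyone who wants to see the logic concretely. Everything here is a software stand-in (the "device" is a struct, "entering a PIN" is a function call, the secret and offsets are made up); the real attack needs the chip-off rig described above.

```c
/*
 * Toy simulation of the snapshot/diff/restore loop above. The "device"
 * is a struct and "entering a PIN" is a function call -- stand-ins for
 * the Arduino keyboard and the RPi reading the memory chip.
 */
#include <stdio.h>
#include <string.h>

#define MEM_SIZE     64
#define SECRET_PIN   427          /* unknown to the attacker */
#define MAX_ATTEMPTS 10
#define COUNTER_OFF  17           /* where the firmware keeps its counter */

static unsigned char mem[MEM_SIZE];   /* simulated device memory */

/* Simulated firmware: checks a PIN, bumps the counter on failure. */
static int try_pin(int pin)
{
    if (mem[COUNTER_OFF] >= MAX_ATTEMPTS) return -1;   /* locked out */
    if (pin == SECRET_PIN) return 1;
    mem[COUNTER_OFF]++;
    return 0;
}

int main(void)
{
    unsigned char snapshot[MEM_SIZE];
    memset(mem, 0xAB, MEM_SIZE);
    mem[COUNTER_OFF] = 0;

    /* Snapshot memory, burn one wrong attempt, then diff to locate the
       bytes that changed: that's the attempt counter. */
    memcpy(snapshot, mem, MEM_SIZE);
    try_pin(0);
    int found = -1;
    for (int i = 0; i < MEM_SIZE; i++)
        if (mem[i] != snapshot[i]) found = i;
    printf("attempt counter found at offset %d\n", found);

    /* Brute force: restore the counter byte before every guess, so the
       device never sees more than one failed attempt. */
    for (int pin = 0; pin <= 9999; pin++) {
        mem[found] = snapshot[found];
        if (try_pin(pin) == 1) {
            printf("PIN is %04d\n", pin);
            return 0;
        }
    }
    return 1;
}
```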

            • Or if you have a kernel hack you can drop via WiFi you can just make the phone "AirDrop" all its data and there's no need to crack anything. Best part is if you're CBP you could do this to everyone passing through a port of entry and they wouldn't even know it had happened.
            • by AmiMoJo ( 196126 )

              Must have been a pretty old iPhone. For years now Apple has been using its "secure enclave", which is just their re-branded version of a standard ARM feature that lets things like passcodes and attempt counters be stored inside the CPU in a special area that is physically protected from tampering. It has its own CPU core and internal work RAM.

              So monitoring main memory doesn't help at all; all the checks and the attempt counter are inside the secure part of the chip. Android devices use the same method. PCs
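
              In sketch form, here's why that defeats the snapshot trick above: when the counter and the comparison live behind a verify-only interface, there is nothing in host memory to snapshot and restore. Simulation only; se_verify_pin is an invented name, and real secure elements also rate-limit and can wipe keys on lockout.

```c
/*
 * Conceptual counterpart to the toy above: if the attempt counter and
 * the secret live inside a secure element, snapshotting and restoring
 * host memory achieves nothing, because the only exposed interface is
 * a verify call.
 */
#include <stdio.h>

static int se_counter = 0;          /* "inside the chip", not host RAM */
static const int SE_PIN = 427;      /* ditto */

static int se_verify_pin(int pin)   /* the only thing the host sees */
{
    if (se_counter >= 10) return -1;                  /* hard lockout */
    if (pin == SE_PIN) { se_counter = 0; return 1; }
    se_counter++;                                     /* host can't reset this */
    return 0;
}

int main(void)
{
    for (int pin = 0; pin <= 9999; pin++) {
        int r = se_verify_pin(pin);
        if (r == 1)  { printf("cracked: %04d\n", pin); return 0; }
        if (r == -1) { printf("locked out after %d guesses\n", pin); return 1; }
    }
    return 1;
}
```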

                • PC software rarely uses the TPM when it should. When it does, it communicates with the TPM over a bus. That bus can be sniffed. Intel is talking about moving the TPM to the CPU die. That gets rid of the bus, but increases the possibilities for side channels.

                For iPhone and Android, don't assume an operation is done in the TPM just because it should be. :)

                • by AmiMoJo ( 196126 )

                  AMD has had the TPM in Ryzen parts since day 1.

                  Sniffing the bus doesn't help you, the attempt counter and rate limiter are still inside the chip.

                  • First - no, Ryzen does not have a TPM on-chip. It can connect to a TPM on the motherboard, if the motherboard has one (most do). Ryzen, like all recent processors, has firmware available to emulate a TPM, which we call fTPM.

                    Secondly, it's entertaining when you tell me how I do my job. I should have you as a guest speaker some time when I teach.

    • by DontBeAMoran ( 4843879 ) on Wednesday December 02, 2020 @05:20PM (#60787384)

      If the police couldn't crack an iPhone on their own, they're fucking idiots. I once dropped my old iPhone 4 on the floor and it cracked on its own.

  • by nospam007 ( 722110 ) * on Wednesday December 02, 2020 @02:45PM (#60786816)

    His old job must have been awesome.

  • Nobody really uses iPhones or WiFi anyway. Sarcasm aside, that's an incredible find for the researcher.
    • He must be pretty awesome to be allowed to spend 6 months in pursuit of one exploit in the first place. I don't know how many *governments* would sponsor that. (Plenty would pay for it afterwards, however.)
  • Can someone explain to me WHY buffer overflow hacks are still goddamn everywhere?

    • Memory management is hard.

      • Re:Like I am six (Score:5, Insightful)

        by sixoh1 ( 996418 ) on Wednesday December 02, 2020 @03:01PM (#60786918) Homepage

        Memory management is hard.

        More correctly, memory management in software is complex, offloading it to hardware is necessary for optimization on low-power processors, and hardware is also quite difficult to do well. So you have two axes of optimization, each of which is a somewhat zero-sum game in terms of overall device performance (both MIPS/fps and power consumption/battery life), and so there's a lot of pressure on developers (again, both the OS/app/software team and the hardware designers) to squeeze performance metrics over security.

        Optimally, for most /. folks we'd like to have a choice of security vs. performance, like you get with the choice to run SELinux or something, but for simplicity's sake our "benevolent" overlord product companies prefer eye-catching features and good performance reviews from ArsTechnica or WSJ. If we could somehow have a generic security penetration metric that was equivalent to the Dhrystone-MIPS or Fortnite frames-per-second specs, then we might see that tradeoff done better.

        • I read "two axes of optimization" as in the plural of ax and imagined system engineers angrily chopping iPhones. It was a pleasant mental image.

        • Yes, memory management is hard, but what's wrong with a bounds check before it hits the FREAKING KERNEL??

          • Re:Like I am six (Score:5, Insightful)

            by Darinbob ( 1142669 ) on Wednesday December 02, 2020 @04:29PM (#60787232)

            Because a lot of stuff is in C, and these problems occur in libraries and not in a kernel. It's also good to have hardware where you can mark regions as not-executable, which many PC people may think is normal but actually many chips don't do this. Also good to have hardware to do an easy stack overflow check, which many CPUs don't have either. A lot of this is easy if you're using a full virtual memory system with paging, but many smaller systems don't have this and it can be a big performance hit.

            So this collides with requirements of "must be small" and "must be fast" and "must meet our deadline". Add onto that environment developers who learned on PCs and don't realize that security is important and that the system won't do this for them. This even happens in higher-level languages, when people believe the stories that this category of security hole is impossible in C++ or Java.

            • I get small. I get fast.

              I don't get: Not looking or knowing how your libs stuff the kernel.

              Knock over the kernel, and all bets are off, you likely have root, and can gleefully use whatever remained somewhat intact in the state machine. Maybe I'm old and remember patching compilers and libs with assembler until the next lib fix came.

              Or maybe it's just general anguish that QC steps don't catch what amounts to a truly fatal error.

              • Wait for it...

                Own the libs! /s

                Sometimes NIH is worth it.

                Fully characterizing externally developed libraries both in terms of how to apply them competently and in terms of determining their quality can be expensive. In high-authority applications such as kernels, it may be worthwhile to do NIH.

              • Following up on this: we've known about buffer overflows for decades. Why haven't all the libraries been patched? Or at least have some list of libraries which are safe (either because their potential buffer overflows have been patched, or because they don't deal with buffers). Is it *really* that expensive in computational terms to prevent overflows?

                • There are indeed theorists who have made OSes whose chains of authority absolutely prevent overflows. But these are monolithic, often database-focused operating system implementations lacking even basic graphical content.

                  The layers above PC hardware were defined by the brittleness of IBM's original PC and XT. How interrupt tables and MMUs were connected was defined back in that era, and has since mightily and vastly evolved. Nonetheless, there are a few supported *methods* that still work. Much of the evils foi

            • Well now we know how Apple Silicon is so fast! They must have hired a bunch of engineers away from Intel.
          • I think "complexity" implies integration complexity. For example code in module A doesn't need to bounds check (and can therefor be faster or use less power) because the data was already checked by module B. Then somebody else decided that module B wasn't really necessary or the data wasn't used in module B so it didn't need to be checked. I can imagine for something extremely low-level like wifi, bounds checking a lot might be expensive, and writing in some high level language with bounds checking built
          • The most straightforward boundary check is expensive when you're doing it a million times a second or whatever.

            In this case, it's wifi packets. Let's say you want to be able to rx on the local network at up to 100 Mbps. That means something like 10,000 packets per second. 10,000 checks per second is going to have a CPU hit.

            That said, GCC and some other compilers have some pretty effective mitigations built in. When I was studying buffer overflows, exploiting some programs for practice, I had to turn off th
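
            Presumably that mitigation is the stack protector. A minimal example of the textbook pattern it catches; the file name and build flags here are illustrative, not from the post above:

```c
/*
 * Build once with and once without the stack protector:
 *
 *   gcc -fno-stack-protector overflow.c      # long input corrupts silently
 *   gcc -fstack-protector-strong overflow.c  # aborts: "stack smashing detected"
 */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    char buf[8];
    if (argc < 2) return 1;
    strcpy(buf, argv[1]);   /* no bounds check: the classic overflow */
    printf("copied: %s\n", buf);
    return 0;
}
```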

      • Memory management is hard. Let's go shopping!

    • Honey, you should not be using naughty words like "goddamn" - now go sit in the timeout chair for five minutes. (Hey, you said "Like I am six"...)

      It happens because almost everything important in systems programming is still written in C. That is great for portability and to establish a lingua franca for kernel hacking alpha geeks. Performance is great, at least when things are implemented by said geeks. But C is a bad language for strict buffer length checks, particularly when you use it in a Wild West c
      • I never understood why nobody has added strict buffer length checks in C. No high-level functionality would be affected. Unless your code does something really bad, like expecting arrays passed into a method to have the size of a pointer, it should just work. New code could use this new safe mode without problems. (See the sketch below for how this usually gets retrofitted today.)
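
        In lieu of a language change, this usually gets retrofitted as a convention: a fat-pointer ("slice") type that carries its length, plus checked accessors. A minimal sketch; slice_t and slice_get are invented names for illustration, not a standard API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

/* A "fat pointer": the buffer and its length travel together. */
typedef struct { unsigned char *ptr; size_t len; } slice_t;

static unsigned char slice_get(slice_t s, size_t i)
{
    assert(i < s.len);      /* the check C itself never performs */
    return s.ptr[i];
}

int main(void)
{
    unsigned char raw[4] = { 1, 2, 3, 4 };
    slice_t s = { raw, sizeof raw };
    printf("%d\n", slice_get(s, 3));   /* fine */
    printf("%d\n", slice_get(s, 4));   /* aborts instead of corrupting */
    return 0;
}
```

        The catch is exactly what the reply below raises: every access pays for the check, and only code that opts in is protected.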
        • Re:Like I am six (Score:5, Informative)

          by Darinbob ( 1142669 ) on Wednesday December 02, 2020 @04:34PM (#60787250)

          How would this be done? And if it can be done, how will it affect performance? It would be a new C standard, or even a new language (i.e., Rust). The same features that allow a simple buffer overflow error are also essential in other areas (memcpy into the middle of a buffer, using mbufs, etc.). Very often people have tried this or come up with new languages which are then claimed to solve the problem, and yet the new stuff does not catch on because of the drawbacks that come with it.

          A lot of compilers, systems, and libraries that do come with extra safety features often also come with options to turn off the safety, for performance reasons. I.e., the safety may be on during developer builds but turned off for production.

          One big solution - don't insist on hiring the cheapest uneducated trainee coders that you can, and don't insist on tight deadlines to get the work done. Security is expensive. Also don't rely on automatic tools to get the job done automatically, thus allowing cheap ass coders. Even with great tools to prevent security holes you still need experts to do the work. There is no magic bullet to substitute for experience or attention to detail.

          • A language which is syntactically the same to C but with slightly different memcpy function etc would be a good thing. Coding in a language that doesn't check for buffer overflows means you have to be careful every second, every day. It's like cars without rear seatbelts. My dad's first car ('89 Seat Ibiza) was sold without rear seatbelts, in the frickin' EU no less, but it was considered "normal". Just don't crash, become a more careful and experienced driver etc etc. And it also had no catalytic converter
            • Though static analysis tools can find a lot of these errors. I find them a bit annoying, since false positives on things like Coverity show up far more often than legitimate bugs, so they end up training devs to ignore the warnings.

              • Disclaimer: I've spent twelve years working with Coverity.

                This should not be happening. Good modern static analyzers like Coverity (and, yes, some of the competitors) normally have very low false positive rates. There are essentially two sources of "noise" in these findings. The first is, that in order to actually be able to finish analysis on very large applications, heuristics are sometimes used instead of exact calculations. This should get you an FP rate of about 5% maximum. The other reason is t

                • Well, the current project using it seems to be much worse than previous Coverity issues I've seen, and I'm not sure why. The team that controls it, which has no vested interest in actually shipping products, has turned on ALL features and is running it even on the Linux kernel and third-party libraries.

                  Latest issues seem to be with state machines. In state X, variable V is allocated and then freed in state Y. Yet Coverity sees that in the switch statement there is a case for state Z and so declares that ther

                  • Having a poor SAST deployment has to be intolerably frustrating. One trend that I have seen over the past twelve years is that many enterprises attempt large-scale deployments where the people who control (and pay for) it are not the people who are actually using it, and this is a huge barrier to success. I'm sorry you had that experience. My email is my first initial and last name at synopsys.com. Feel free to reach out to me. Especially if you have a support ticket.

                    I'm not surprised that SAST solu

                    • It's a simple state machine, third party zlib code. States being things like start of buffer, somewhere in the middle of a buffer, end of buffer. But I have seen several times where it seems like Coverity assumes every case in a switch statement can be executed without regard to whether earlier code has limited the possible cases.
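
                      The shape of that false positive, reduced to a hand-made example (not zlib's actual code): a path-insensitive view assumes any case label can execute in any iteration, so it can flag a NULL dereference on an ordering the state machine never takes.

```c
#include <stdio.h>
#include <stdlib.h>

enum state { START, MIDDLE, END };

static void run(void)
{
    enum state st = START;
    char *buf = NULL;

    for (int i = 0; i < 3; i++) {
        switch (st) {
        case START:
            buf = malloc(64);   /* only ever allocated here */
            if (!buf) return;
            st = MIDDLE;
            break;
        case MIDDLE:
            st = END;
            break;
        case END:
            buf[0] = 0;         /* safe: END is only reachable after START ran, */
            free(buf);          /* but a path-insensitive analyzer can "see"    */
            buf = NULL;         /* END happening first, with buf still NULL     */
            st = START;
            break;
        }
    }
    free(buf);                  /* clean up if we stopped mid-cycle */
}

int main(void) { run(); return 0; }
```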

                    • Back to our previous comment, best not to analyze zlib at all as it has already been analyzed (many times) by Coverity and the true positive defects have been fixed. Therefore, the only thing left is the false positives. If you do analyze it you will get 100% false positives. Maybe there were initially 100 defects of which 85 are real. But those 85 got fixed and the 15 false positives are left. Yuck. Pruning false positives is one of the hardest things to do in a static analyzer. If not done well, yo
        • by MobyDisk ( 75490 )

          There's a gazillion libraries that do that. But developers remove them from final production code.

      • it's just that people

        This is all you had to say. C is a tool, for better or worse. Because you can do something stupid with said tool doesn't mean that the tool, itself, is bad. That we have competitions (Obfuscated C) to see how stupid people can be with a given tool still doesn't mean C, itself, is bad. The bottom line is still people.

        • Yes but the tool doesn't account for human stupidity, so it shouldn't be used by most humans. It's like safety scissors; it's not the scissors' fault per se .. but most people would be safer using safety scissors. Given that not every coder is constantly thinking about security, we're better off if our applications were coded in a language that was less forgiving. Of course we should vet the coders as much as possible, but that's not practical (you want to keep interviewing/testing people? making sure the

    • by ytene ( 4376651 )
      Guess: buffer overflow and related attacks are more likely to be found where the programming language in which they were written leaves the programmer to perform all necessary memory buffer checks themselves.

      Although we don't know at this stage, chances are that the software containing the vulnerability was written in Objective-C, Apple's own take on the C programming language. It uses features such as null pointers, which in essence means that the language can make it easier to allow vulnerabilities to
      • Re:Like I am six (Score:5, Insightful)

        by sjames ( 1099 ) on Wednesday December 02, 2020 @03:16PM (#60786992) Homepage Journal

        That's not necessarily true in drivers. How is a language going to detect that I have created a 512 byte buffer but told the HARDWARE it is free to DMA 712 bytes (or worse, I told it 512, but due to errata, it transfers 712 in some corner case)?

        Hacking the compiler to do the checks is just moving the problem; that would mean having to write one-off drivers for the compiler and making the errors there.
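
        A runnable simulation of that point, with memset standing in for the DMA engine: the language sees a 512-byte buffer, but the "hardware" writes wherever and however much it was told, without ever going through the language. All names and sizes here are made up for the sketch.

```c
#include <stdio.h>
#include <string.h>

/* Stand-in for programming a DMA engine: "device, write up to len bytes
 * starting at dst". The device doesn't know or care what size C thinks
 * the buffer is. */
static void fake_dma(unsigned char *dst, size_t len)
{
    memset(dst, 0x41, len);
}

int main(void)
{
    unsigned char arena[712];               /* pretend this is heap layout  */
    unsigned char *rx_buf   = arena;        /* the driver's 512-byte buffer */
    unsigned char *neighbor = arena + 512;  /* someone else's 200 bytes     */

    memset(neighbor, 0x00, 200);
    fake_dma(rx_buf, 712);                  /* told the hardware 712, oops  */

    printf("neighbor[0] after DMA: 0x%02x\n", neighbor[0]);   /* 0x41 */
    return 0;
}
```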

      • Re:Like I am six (Score:5, Insightful)

        by _xeno_ ( 155264 ) on Wednesday December 02, 2020 @03:22PM (#60787034) Homepage Journal

        Although we don't know at this stage, chances are that the software containing the vulnerability was written in Objective-C, Apple's own take on the C programming language.

        We know it was in a kernel driver. Kernel drivers in macOS and iOS are generally written in C or C++ - Apple's kernel for both is based on BSD. (Apparently Apple's developer kits for kernel extensions use a subset of C++ - I've never touched that code so I don't know.) A lot of the low-level stuff in macOS and iOS is written in C; it's only the application API that's in Objective-C.

        Someone is probably going to point out that technically Objective-C isn't Apple's creation. However the version they use is full of custom extensions, so Apple Objective-C probably won't work in the GCC Objective-C compiler. (Or at least didn't used to - I've tried in the past.)

        The underlying bug is apparently even more hilarious than anyone has mentioned yet, though: apparently the code checks to see if the packet is oversized, logs an error if it is, and then memcpy's the entire oversized packet into the too-small buffer anyway. It sounds like the bug is yet another "goto fail" where the error handling code is there, but was never properly tested, so it doesn't actually work and ends up making the error condition fail catastrophically.
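
        Reduced to a skeleton, that pattern looks roughly like this (a paraphrased shape based on the description above, not Apple's actual source):

```c
#include <stdio.h>
#include <string.h>

#define BUF_LEN 64

static void handle_packet(const unsigned char *pkt, size_t len)
{
    unsigned char buf[BUF_LEN];

    if (len > BUF_LEN)
        printf("error: oversized packet (%zu bytes)\n", len);
        /* ...but nothing stops the copy; the one-line fix is `return;` */

    memcpy(buf, pkt, len);   /* executes unconditionally */
    printf("parsed %zu bytes, first byte 0x%02x\n", len, buf[0]);
}

int main(void)
{
    unsigned char pkt[128] = { 0xAA };
    handle_packet(pkt, 32);   /* fine */
    /* handle_packet(pkt, 128) would log the error and then smash the
       stack anyway -- the error path exists but doesn't bail out. */
    return 0;
}
```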

      • There is a hardware equivalent: the DEC Alpha.

        If the code runs without crashing, it means it is secure. Because no matter how many catch handlers you have installed, no matter how strictly you handled errors, it will supersede everything and crash. Divide by zero, sqrt of negative numbers: it does not set NaN or -Inf in the double, it crashes.

        The doc says it supports IEEE this or Posix that, but in reality nothing works. Everything crashes.

        Amazingly we eventually managed to get our software running r

        • Which tells you a lot about today's system programmers: They design programs that are memory-unsafe on purpose.
          • I think that what it tells us is that system programming is difficult. The kernel is a real-time system. "If it runs long, it's wrong." Parts of the kernel are memory-unsafe because a memory-safe version would be too slow to be useful, even if you could implement such things. The kernel normally runs in a processor mode where less safety is available (i.e. you are directly accessing memory rather than relying on a memory management unit). If by "today's system programmers" you mean people like Linus Torva
            • The decision was based on legacy reasons. Legacy is a dubious excuse for doing things, unless we are talking about file formats and such. The decision was in fact taken lightly. When Linus decided to use C for Linux, Linux was still a hobby project. You can have memory safety on by default and disable it when needed.
              • What decision was based on legacy reasons? Having the WiFi drivers in the kernel? Not having an MMU available when running Kernel mode? And "legacy reasons" aren't necessarily bad reasons. Rewriting a mature piece of code brings its own set of challenges.
        • OMG, we had a developer early on who implemented the integer divide-by-zero exception to return 0. Seriously brain dead. And now it's difficult to change because errors will just start appearing everywhere in code that's been "working" unless there's a very lengthy regression and code coverage push. And everyone just treated him like the "expert" even though he created so much chaos in his wake in everything he did. Early startup days sort of thing, nobody code reviews anything because you need to have a

      • Re:Like I am six (Score:4, Insightful)

        by Darinbob ( 1142669 ) on Wednesday December 02, 2020 @04:38PM (#60787262)

        Also, the very concept of allowing nearby mobile phones to send you pictures without your approval is just begging for someone to break in. The first thing I did when a coworker forced a picture onto my phone was to disable the AirDrop feature.

    • Programmers are people and people tend to be impatient and easily distracted. Basically, if you write a program, you need to consider all possible paths through a piece of code at all times. This is tedious work. Instead of making sure that nothing bad can happen in a particular code sequence, i.e. considering ALL possible ways this code can execute, programmers often finish the "important" code paths first and defer checking the less interesting code paths. Also very often, deferred code paths are not deve
      • I think this post clearly misrepresents (and underestimates) the complexity of the problem. There are often millions of possible ways a program can execute. It's not possible to think through all of them. So you have to use some simplifying approximations. And every once in a while, you get one of those wrong. In this case, the programmer clearly *did* think of the possibility (they logged the error). Their error handling just wasn't robust enough as they still clobbered the memory.
    • Because nobody invented on-silicon memory management.

      That's when you get a MALLOC instruction. And it returns a read-only token. And you can only access memory with these tokens. And the kernel and user spaces use this mechanism exclusively.
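
      A software sketch of that idea. All names here (token_t, mem_alloc, mem_read, mem_write) are invented for illustration; capability hardware research (e.g. CHERI) is the closest real-world relative, doing the equivalent check in silicon on every access rather than in a library call.

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct { unsigned id; } token_t;   /* opaque handle, not a raw pointer */

#define MAX_ALLOCS 16
static struct { unsigned char *base; size_t len; } table[MAX_ALLOCS];
static unsigned next_id;

token_t mem_alloc(size_t len)
{
    assert(next_id < MAX_ALLOCS);
    table[next_id].base = malloc(len);
    assert(table[next_id].base != NULL);
    table[next_id].len = len;
    return (token_t){ next_id++ };
}

unsigned char mem_read(token_t t, size_t off)
{
    assert(t.id < next_id && off < table[t.id].len);  /* the "silicon" check */
    return table[t.id].base[off];
}

void mem_write(token_t t, size_t off, unsigned char v)
{
    assert(t.id < next_id && off < table[t.id].len);
    table[t.id].base[off] = v;
}

int main(void)
{
    token_t t = mem_alloc(16);
    mem_write(t, 0, 42);
    printf("%d\n", mem_read(t, 0));
    mem_write(t, 16, 1);   /* out of bounds: the token check aborts */
    return 0;
}
```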

      • Well I'm sure that *has* been invented and is in use *somewhere*. Of course, if you do this, there's no reason to believe that your hardware will be any more bug-free than equivalent software. Except you can't update it. Taking your idea to the logical conclusion, we could put the whole OS kernel in hardware...then it would automatically be bug free...except, well, it won't be.
  • by JBMcB ( 73720 ) on Wednesday December 02, 2020 @02:50PM (#60786846)

    Ostensibly, iOS is based on Darwin, which uses the XNU hybrid microkernel, based on Mach. Drivers run non-privileged in Mach, and in microkernels in general. Maybe they changed it in iOS, but that isn't a minor change.

    • iOS is not a microkernel-based OS.

      Drivers run with full kernel privileges.

        • Wikipedia says iOS uses Mach. So, citation needed.

        • Re: Xnu (Score:5, Interesting)

          by _xeno_ ( 155264 ) on Wednesday December 02, 2020 @03:37PM (#60787084) Homepage Journal

          Actually, Wikipedia says iOS uses XNU, Apple's UNIX-like kernel [wikipedia.org]. While it's in part based on Mach, it also takes some code from FreeBSD. It's a hybrid kernel, not a strict microkernel.

          However Apple has started trying to move more drivers out into userspace, probably due to the existence of bugs like this. But some unknown percentage of drivers still exist in kernel space, and it's not really surprising that for performance reasons, low level drivers on a phone would still be in kernel space.

          • by AmiMoJo ( 196126 )

            For once Microsoft is ahead of the curve here. Most drivers in Windows are outside the kernel.

            Strange times.

            • I don't think it's so strange at all. Historically phones had much less CPU power than desktop machines, and kernel drivers are more CPU-efficient. So it was probably a necessary trade-off. Also, Windows machines are often more valuable attack targets.
    • Re:Xnu (Score:4, Interesting)

      by sixoh1 ( 996418 ) on Wednesday December 02, 2020 @03:07PM (#60786946) Homepage

      I haven't tried to write driver-level code on iOS, but on OS X Kernel Extensions (KEXTs) based on IOKit (a restricted subset of C++) are quite privileged and from a symbol table point of view have a lot of deep hook access. Apple is trying to solve that by moving driver code into the new user-level "System Extensions" format, which I assume actually does have a much more limited operating envelope (some light reading on this topic [apple.com]).

      From the practical perspective, this could have existed (possibly, likely did) on OS X too since, as you point out, the XNU kernel has always been a somewhat shared component between iOS and OS X. Since Apple does not distribute the Darwin tools for iOS like they do for OS X, we'll just have to guess as to what common vulnerabilities exist.

  • How many currently used iPhones are not receiving updates? I know on Android it would be pretty significant since most manufacturers stop providing updates after a couple of years and there are plenty of older Android handsets still in use. Is it the same for Apple? Do they only patch older handsets to slow them down or are they still patched after 2-3 years?
    • I can't speak to the number of iPhones in active use that can't be updated, but the iPhone 6s, released in Sept 2015, still gets iOS 14 updates, presumably at least until the next major update, which should put it around 6 years of updates.

    • My old iPhone 6 that I use as a webcam just received a point upgrade. The phone was released in 2014.
    • by berj ( 754323 )

      A patch for this also went out for iOS 12 so that covers devices all the way back to the iPhone 5s and the iPad Mini 2. Both of which were released in 2013.

      It's uncertain if this bug also affects earlier versions. But I'd be surprised if an appreciable number of people were using older devices than that.

    • How many currently used iPhones are not receiving updates? I know on Android it would be pretty significant since most manufacturers stop providing updates after a couple of years and there are plenty of older Android handsets still in use. Is it the same for Apple? Do they only patch older handsets to slow them down or are they still patched after 2-3 years?

      My 6 year old iPhone 6, which is stuck on iOS 12, just received an update the other day to iOS 12.4.7; which contains a patch for this vulnerability. Even after Apple no longer officially supplies regular updates to a device (usually at least 4-5 years), they have a longstanding policy of issuing updates to old OS versions when necessary to patch serious vulnerabilities.

      BTW, that patch to iOS 12 covers iPhones back to the iPhone 5. That is pretty much 99.9% of all iPhones still in use.

      How does that compare

  • Linking to an article that links to the actual article.

  • I'm still getting nagged about free safety recalls for my 20+ year old POS vehicle.

    Technology companies are currently using the fact that their software is a buggy, insecure POS which places users at unnecessary risk of harm as a marketing tool to beat people into upgrading, as if we should be thankful patches are made available to us and not outraged the problem was allowed to exist in the first place.

    Apple, Microsoft... all of these Android and feature phone vendors abandoning security updates the second phones come off the line... They should all be forced by law to fix security bugs for everyone for as long as the devices are used and do so on their own dime.

    • by berj ( 754323 )

      Not to get in the way of a good rant but this was patched for devices going back 7 years, some of which can't use the latest or even the second latest major iOS release.

      Should they go back further? Probably, yes (though I'd be really surprised to see someone using an iPhone 5 or 4 as a daily driver these days). I don't know about other vendors (actually I do.. and the picture ain't pretty) but Apple is most certainly not "abandoning security updates the second phones come off the line".

      • I have an iPod Touch I still use daily in my car that's approaching a decade old. Not a phone, but it runs the same OS, and the Wi-Fi is still active to update the music I keep on it. The damn thing will probably outlive me at this rate.

    • Google doesn't abandon their own devices the moment they come off the line (the assembly line is stopped). They support them for 18 months after that. 18. whole. months. /s
      • This is not true. Using the first-generation Nexus 7 tablet as an example: they OTA'd a broken build that made the device nearly useless, and that clearly was tested for at most a full 12 minutes before deployment, then dropped support.
    • I like to waste the time of those people. Go on eBay or Bring a Trailer and find a Ferrari VIN. Keep that handy for your next warranty scam call.

    • I'm still getting nagged about free safety recalls for my 20+ year old POS vehicle.

      Someone overly concerned about your safety and the safety of others with a 2-ton piece of metal you drive at highway speeds? What a bunch of assholes.

      Technology companies are currently using the fact that their software is a buggy, insecure POS which places users at unnecessary risk of harm as a marketing tool to beat people into upgrading, as if we should be thankful patches are made available to us and not outraged the problem was allowed to exist in the first place.

      Yeah, perhaps you're right. Consumers would be so much better off if those human developers just convinced everyone they were fucking perfect, and never made any bugs or patches public. Ever. Yes, let's demand everyone and everything execute nothing but a flawless victory when designing and manufacturing hyper-complex computing devices.

      Apple, Microsoft... all of these Android and feature phone vendors abandoning security updates the second phones come off the line... They should all be forced by law to fix security bugs for everyone for as long as the devices are used and do so on their own dime.

      Microsoft shut down

    • Apple, Microsoft... all of these Android and feature phone vendors abandoning security updates the second phones come off the line... They should all be forced by law to fix security bugs for everyone for as long as the devices are used and do so on their own dime.

      Keep Apple out of this rant.

      My 6 year old iPhone 6 just received an OS update to its "obsolete" OS (iOS 12), specifically to address this issue.

      Now what?

  • by DontBeAMoran ( 4843879 ) on Wednesday December 02, 2020 @03:54PM (#60787138)

    ... the phone is in your pocket, and over Wi-Fi someone just worms in with some dodgy Wi-Fi packets.

    You see? It just works!

    Sent from my iPho{#`%${%&`+'${`%&NO CARRIER

  • Don't think so - I have had no problems breathing while this exploit was concocted.

  • Very Nice.

  • Those who have played "Watch Dogs" probably thought at some time "How unrealistic is this depiction, as if you could hack some arbitrary bystander's phone without any interaction within a jiffy..." - but then, of course, you learn that the daily reality of IT-Insecurity is just like this.
  • by humankind ( 704050 ) on Wednesday December 02, 2020 @07:44PM (#60787760) Journal

    IMO if you leave WiFi enabled on your phone when you're outside your home/business, that's insecure. Don't do it.

    • If you go to your friends house and use their WPA2/PSK network, why would it be any less secure than using your own WPA2/PSK network? And there's no reason that this bug couldn't (and doesn't) exist in the driver for the cellular radio. In that case, exploiting it would require more hardware but wouldn't otherwise be any harder. So by that logic, you shouldn't connect your phone to the cellular network either? That's not the same as joining an unencrypted WiFi network (where malicious actors can discove
    • by antdude ( 79039 )

      I disable mine when not in use to save battery power, even in my own nest. I hope Apple releases updated patches for older iOS devices like the iPad Air and iPhones (6+ and 4S).
