macOS 12.3 Will Break Cloud-Storage Features Used By Dropbox and OneDrive (arstechnica.com)

If you're using either Dropbox or Microsoft OneDrive to sync files on a Mac, you'll want to pay attention to the release notes for today's macOS 12.3 beta: the update is deprecating a kernel extension used by both apps to download files on demand. Ars Technica reports: The extension means that files are available when you need them but don't take up space on your disk when you don't. Apple says that "both service providers have replacements for this functionality currently in beta." Both Microsoft and Dropbox started alerting users to this change before the macOS beta even dropped. Dropbox's page is relatively sparse. The page notifies users that Dropbox's online-only file functionality will break in macOS 12.3 and that a beta version of the Dropbox client with a fix will be released in March.

Microsoft's documentation for OneDrive's Files On-Demand feature is more detailed. It explains that Microsoft will be using Apple's File Provider extensions for future OneDrive versions, that the new Files On-Demand feature will be on by default, and that Files On-Demand will be supported in macOS 12.1 and later.

In addition to integrating better with the Finder (also explained by Microsoft), using modern Apple extensions should reduce the number of obnoxious permission requests each app generates. The extensions should also reduce the likelihood that a buggy or compromised kernel extension can expose your data or damage your system. But the move will also make those apps a bit less flexible -- Microsoft says that the new version of Files On-Demand can't be disabled. That might be confusing if you expect to have a full copy of your data saved to your disk even when you're offline.
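For context, the userland replacement Apple is pushing is the File Provider framework. Below is a minimal sketch of how a sync client might register a File Provider domain; the identifier and display name are hypothetical, not Dropbox's or Microsoft's actual values.

```swift
import FileProvider

// Hypothetical domain for illustration only; a real client supplies its
// own identifier and display name.
let domain = NSFileProviderDomain(
    identifier: NSFileProviderDomainIdentifier(rawValue: "com.example.clouddrive"),
    displayName: "Example Cloud Drive"
)

// Registering the domain makes it show up in the Finder; the system then
// calls into the app's extension to materialize file contents on demand,
// with no kernel extension involved.
NSFileProviderManager.add(domain) { error in
    if let error = error {
        print("Failed to register file provider domain: \(error)")
    }
}
```

The on-demand download logic itself lives in an app extension adopting NSFileProviderReplicatedExtension; the sketch above only covers registration.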

  • by hey! ( 33014 ) on Thursday January 27, 2022 @07:52PM (#62213867) Homepage Journal

    ... to Apple, but "deprecating" an API shouldn't be able to "break" anything.

    • The API in question is already deprecated, and has been for quite some time. Microsoft and Dropbox have known about this change for more than a year and haven't taken the time to update their software.

      Apple are now finally removing the API altogether (as they always planned to do), and that's what is going to break Dropbox and OneDrive syncing.

      Dropbox have not even updated their software for native Apple Silicon support on Macs, when the hardware has been available to developers since June 2020. I'm not at all surprised that they have also not updated their app to not rely on a deprecated kernel extension...

      • by hey! ( 33014 )

        I guess there's a reason people never bother reading the summary.

      • Producing a superior replacement doesn't work - lots of places cling to the old junk and never bother to implement modern improvements.
        Deprecating the legacy junk doesn't work, they keep using it anyway.
        The only thing that works is totally removing the legacy junk and dragging people kicking and screaming into the modern world.

USB didn't really take off until Apple made it mandatory and removed legacy serial/parallel/etc. ports.
Most Windows apps are still 32-bit; Apple forced developers to move to 64-bit.
A lot of places (Slashdot included) are still using IPv4 and have not implemented IPv6.
A lot of sites still don't support TLS 1.3, HTTP/2, etc.

        • by Anonymous Coward

It may not just be Apple. Sometimes governments come into the picture. The EU, for example. Remember when the EU announced that phone makers had better standardize on chargers, or the EU would standardize things for them?

This solved a TON of headaches. No more having to remember not just the right shape (barrel connector, edge connector, 30-pin) but also the right voltage and polarity (some had the tip positive, some the sleeve positive). We went from hundreds of different phone charger types to three: Lightning, micro-USB, and USB-C.

        • by Anonymous Coward on Thursday January 27, 2022 @10:56PM (#62214137)

          Ken, is this one of your socks?

USB didn't really take off until Apple made it mandatory and removed legacy serial/parallel/etc. ports.

That's laughable. Apple was busy pushing their own alternative to USB, FireWire, and failing miserably. Yes, Macs had USB, but so did everyone else. Their tiny market share did nothing to push the standard forward.

          They did force people to throw away a lot of perfectly good equipment for no good reason. Same with their premature removal of the floppy drive. Either Apple users didn't have any data worth saving or they screwed over a lot of people. I have no idea why anyone would stick with a Mac having lived through that.

          Well, I guess that they found the right audience. Remember how upgrades to OS X broke all of your old applications, forcing you to upgrade those as well? I remember all the time I spent hunting down legacy Mac software for people who needed OS X reinstalled or who bought a used Mac from someone who figured out the scam and got a Dell instead but couldn't upgrade to . That was a real pain.

Most Windows apps are still 32-bit; Apple forced developers to move to 64-bit.

Which was stupid. Windows software often comes in both 32- and 64-bit releases because there is still a ton of perfectly good hardware running 32-bit versions of Windows. Why force users to buy new hardware for no good reason?

          Producing a superior replacement doesn't work

You don't know what superior means. Upgrading for the sake of upgrading is idiotic. I know of a business that still runs on a Commodore 64 with custom software written by the owner. It's been working perfectly for nearly 40 years. It handles point of sale and inventory. There is literally nothing better that can replace it. There are authors and journalists who still use various old computers, from IBM 5150s to TRS-80 Model 100s. They don't want a new laptop. It's inherently inferior to them.

          Just try taking the RPN calculators from some of the guys here because you think newer is always better.

          Deprecating the legacy junk doesn't work, they keep using it anyway.

That's because it's not junk! Idiotic thinking like that is the reason companies in the '90s lost billions on failed COBOL-to-Java projects. I'll bet there are a lot of companies right now that are thanking their lucky stars that their tech debt isn't locked up in a half-assed Java replacement!

FireWire was never an alternative to USB; it was pitched as an alternative to external SCSI for high-throughput devices like storage, scanners, video cameras, etc.
USB was the replacement for low-performance devices like keyboards, mice, serial ports, parallel ports, etc.

Everyone had USB, but Apple were the first to *REQUIRE* USB and not provide anything else by default.
In those days people still bought PS/2 mice and keyboards, parallel-port printers, etc. even for use exclusively with machines that had USB, because they were familiar. Hardly anyone bought USB peripherals, and many devices were not available in USB form. Most USB-equipped machines of the day had the ports empty; some people even ran Windows 3.1, early 95, or NT4 with no USB support, so the ports weren't active even if they were physically present. A lot of OEMs never even bothered connecting the USB headers from the motherboard to accessible ports on the outside of the case.

Yes, Apple were the first to remove floppy drives by default too. You could still add a floppy drive if you wanted (you still can even today); they just weren't supplied by default. Most people moved on and embraced the superior replacements.

The same is true of everything else: USB-to-serial/parallel/PS/2 adapters still exist, but the presence of USB and the removal of legacy ports by default makes people take notice and start using the new ports.
If you kept providing the legacy ports, people would just keep using them out of habit and never even consider that better alternatives might exist.

            • by dhaen ( 892570 )
FireWire got a little boost when the iPod was released, as it was the only way to synchronise. I remember fitting a FireWire card to my son's PC for that very reason.
              • by tlhIngan ( 30335 )

FireWire got a little boost when the iPod was released, as it was the only way to synchronise. I remember fitting a FireWire card to my son's PC for that very reason.

Well, at the time the iPod was released, USB 1.1 was normal. Syncing gigs of music took hours. FireWire was the only way to do it in a reasonable time of a few minutes. Things only started getting interesting with the 3rd-generation iPod, which supported both USB 2.0 and FireWire together. USB 2.0 was pretty new at the time as well, but at least

                • I think you've got an amusing sense of reality.

                  USB took off due to support in Windows 98, not because of the iMac.
Prior to Windows 98, USB support on Windows 95 ranged from non-existent to bad.
The Bondi blue iMac that stirred up so much interest (for you) existed at a time when 4 out of every 100 computers were Macs.

                  The industry didn't move an inch for them.
USB being on the Mac helped the industry in general, as it could be used for both PCs and Macs. A company no longer had to make a PS/2/serial/parallel version and/or a separate ADB version.
                    • Like I said, the movements of an industry were not dictated by 4 out of every 100 computers.
                      It's a great mythology Apple superfans tell themselves, but it isn't realistic at all.

Were that the case, FireWire peripherals would have been a lot more prevalent.
Macs are a particular market, one that caters to high-end folks. There isn't a lot of overlap. Monitors haven't made an en masse transition to DisplayPort, and FireWire never became the predominant peripheral bus, even for what it was intended for.

                      Th
The Bondi blue iMac that stirred up so much interest (for you) existed at a time when 4 out of every 100 computers were Macs.

And five out of 100 computers actually had USB devices connected to them. Four of them iMacs.

FireWire (IEEE 1394) was great, but it was oriented toward a higher end than USB 1.1 (which was the USB that became popular). Only with USB 3 did USB really start to catch up and have external hard drives fast enough to be more than just archival storage. At the design level, USB is a mess, as it's fundamentally a non-stop polling architecture with a strict host->client relationship, and each new revision appears to be slapped onto an architecture that wasn't designed to be expandable. FireWire

That's laughable. Apple was busy pushing their own alternative to USB, FireWire, and failing miserably. Yes, Macs had USB, but so did everyone else. Their tiny market share did nothing to push the standard forward.

I remember you could buy USB mice, but only in fruity iMac colours. And you could buy a CD recorder, but only in fruity iMac colours.

            • You remember that, because that's all you were looking for.

              Fancy PC USB mice predated that shitty hockey puck by a year.
              A year later, Microsoft sold more USB mice than Apple would produce in the next 15 years combined.
              Because- ya. Windows 98.
              Microsoft didn't make those mice because of Apple.

You need to learn to evaluate your concept of causality against how non-objective your own perspective is, because you live in an alternate fucking history.
        • Producing a superior replacement doesn't work - lots of places cling to the old junk and never bother to implement modern improvements.

          It also depends on who benefits from the improvements. There are substantial security & reliability advantages of reducing the proliferation of kernel extensions and doing things in userland. That improvement doesn't accrue to the developer adopting the replacement, they get more or less the same thing from a new API and don't have anything to "show" for it.

          • by tlhIngan ( 30335 )

            It also depends on who benefits from the improvements. There are substantial security & reliability advantages of reducing the proliferation of kernel extensions and doing things in userland. That improvement doesn't accrue to the developer adopting the replacement, they get more or less the same thing from a new API and don't have anything to "show" for it.

Apple has been tightening down the screws on kernel extensions over the years - you used to be able to load any one you wanted (I had to adapt a few

        • Re: (Score:2, Informative)

USB didn't really take off until Apple made it mandatory and removed legacy serial/parallel/etc. ports.

What a load of horse shit. USB was off the ground and running. You can say that Apple was the first company with the balls to tell their customers to go fuck themselves and remove all alternatives, a move they've repeated since, but they're in no way responsible for "USB taking off".

Most Windows apps are still 32-bit; Apple forced developers to move to 64-bit.

          See above.
          The single thing I hate most about my Mac is that it can't run 32-bit apps. But Apple's gonna Apple.
          They didn't force developers to move to 64-bit, they forced Apple developers to start compiling 64-bit apps, somethin

          • IPv6 isn't "superior" to IPv4. As a Network Engineer, I can tell you it's not implemented precisely because it takes a lot of steps backwards in trade for more address space. It's a pain in the ass, and its RFCs were written by committees of fucking morons.

Sounds like your job is more difficult with IPv6, not that IPv6 is "inferior".

            • I didn't say it was inferior. I said it wasn't superior.
You trade a lot of questionable design decisions for more IP space. That's not better or worse. It merely is what it is.
              • And the other advantages of IPv6 mean nothing to you then.
                • Name one.
                    • No more NAT (Network Address Translation)

                      Auto-configuration.

                      No more private address collisions.

                      Better multicast routing.

                      Simpler header format.

                      Simplified, more efficient routing.

                      True quality of service (QoS), also called "flow labeling"

                      Built-in authentication and privacy support.

Again, the problem isn't that IPv6 doesn't offer advantages. You refuse to acknowledge them because your job is harder.

                    • No more NAT (Network Address Translation)

                      Idiocy.
NAT is and always was optional. It's a technology that uses IPv4; it is not part of IPv4.
It's directly analogous to IPv6 NAT (yes, that exists), only needed less often.

                      Auto-configuration.

                      Ya, no.
                      SLAAC is near-useless in a residential internet scenario. You talk to your ISP via DHCPv6. It's as "auto-configured" as IPv4.
                      You'll end up using SLAAC on your LAN, of course, but at that point, what's the fucking difference? There isn't one.
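The auto-configuration under dispute here is SLAAC's interface-identifier derivation. A minimal sketch of the classic modified EUI-64 scheme (RFC 4291), which builds the low 64 bits of an address from the interface's MAC; note that modern stacks often substitute random privacy identifiers (RFC 4941) instead. The MAC value is made up for illustration.

```swift
import Foundation

// Modified EUI-64: how classic SLAAC derives the low 64 bits of an IPv6
// address from a 48-bit MAC (RFC 4291). Modern stacks frequently use
// random "privacy" identifiers instead (RFC 4941); this is the textbook case.
func eui64InterfaceID(mac: [UInt8]) -> [UInt8] {
    precondition(mac.count == 6, "expected a 48-bit MAC address")
    // Split the MAC in half and insert 0xFFFE in the middle.
    var id = Array(mac[0..<3]) + [0xFF, 0xFE] + Array(mac[3..<6])
    id[0] ^= 0x02  // flip the universal/local bit
    return id
}

let mac: [UInt8] = [0x00, 0x1A, 0x2B, 0x3C, 0x4D, 0x5E]  // hypothetical MAC
let iid = eui64InterfaceID(mac: mac)
// With prefix 2001:db8::/64 this yields 2001:db8::21a:2bff:fe3c:4d5e
print(iid.map { String(format: "%02x", $0) }.joined(separator: ":"))
```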

                      No more private address collisions.

                      Hahaha- ya, replaced with an fe80:: address that makes your eyes bleed.

                      Better multicast routing.

                      No. ML

Idiocy. NAT is and always was optional. It's a technology that uses IPv4; it is not part of IPv4. It's directly analogous to IPv6 NAT (yes, that exists), only needed less often.

NAT is not optional if you only have one IP address and more than one device. Like in many homes and small businesses before IPv6. Are you sure you are a "network engineer"? You do know that this was a problem, right? The fact that you would argue NAT in IPv4 is "optional" is idiotic. As for the rest of your comment, your argument is still "I cannot see any benefit because me, me, me."

                    • NAT is not optional if you only have one IP address and more than one device.

                      No shit, Sherlock.

Like in many homes and small businesses before IPv6.

                      Was always a matter of how much you wanted to pay. Still is today.
IP addresses are a capital cost (and indeed, one of my organization's most valuable assets).

Are you sure you are a "network engineer"?

                      Sure am. And the chances are about 98% that I make more money than you doing it, so cut the ridiculous snide remarks, and instead ask yourself how you're being stupid.

You do know that this was a problem, right?

                      Negative. It was never a problem.
IPv4 exhaustion is at the RIR level, not the local level.
                      We have *hundreds of thousands* of available IPs, and we're not even a big ISP.

Where is IPv4 superior? And don't say NAT, because NAT is a hack. IPv6 is a simpler protocol, and it's got a lot more flexibility. I've been using it for over ten years on industrial IoT devices, and sometimes some gateway devices use IPv4, and the IPv4 is a lot more effort to work with. I.e., forget all about subnet sizing with IPv6; they're all /64 subnets, and only someone with a legacy mindset would try to use a different subnet length on IPv6. The only drawback is that you can't memorize your organization's addres

            • Anyone who uses the words, "NAT is a hack" isn't versed enough in the technologies at play to have a discussion with.

You have no fucking idea what the differences between EtherType 0x86DD and 0x0800 are.
IPv6, notably, requires non-static header parsing for every frame. That's the literal opposite of "simpler".

              I'm glad that you've been "using it on industrial IoT devices", but I can tell right away your toying around with IPv6 is amateur level at best.
              • by Bert64 ( 520050 )

Non-static header parsing? What are you talking about?
                If you're referring to extension headers, then you only need to parse them if you actually need to care about them, which often simply isn't true. The basic IP header is simpler.

                NAT adds a level of unnecessary complexity, breaks a lot of existing protocols, and has resulted in the adoption of inferior replacements designed to work around the limitations caused by widespread NAT.

IPv4 is designed to work without NAT, just like IPv6 is; the difference is th

Non-static header parsing? What are you talking about? If you're referring to extension headers, then you only need to parse them if you actually need to care about them, which often simply isn't true. The basic IP header is simpler.

Oh jesus. Come on dude.
The Next Header field points either to an extension header or to the next protocol header. The Layer 3 header only gives you a payload length that includes the extension headers.
This means you can only find the start of the actual payload by parsing the entire chain of Next Headers.
The situation you describe is the situation for IPv4, not IPv6.

                  Get the fuck out of here, dude. You're batting way the fuck out of your league.
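Concretely, the chain-walking being described looks something like the sketch below. It is deliberately simplified: it handles only the common chained extension headers, and real code must also special-case AH (whose length field counts 4-octet units) and the fixed-size Fragment header.

```swift
// Simplified sketch of walking an IPv6 packet's Next Header chain to
// find the upper-layer payload. Not production code: AH (protocol 51)
// and Fragment (44) need their own handling.
func payloadStart(of packet: [UInt8]) -> (nextHeader: UInt8, payloadOffset: Int)? {
    guard packet.count >= 40 else { return nil }   // fixed 40-byte IPv6 header
    var next = packet[6]                           // Next Header field
    var offset = 40
    // Hop-by-Hop (0), Routing (43), and Destination Options (60) share the
    // layout: [next header][length in 8-octet units, excluding the first 8].
    let chained: Set<UInt8> = [0, 43, 60]
    while chained.contains(next) {
        guard offset + 2 <= packet.count else { return nil }
        let len = (Int(packet[offset + 1]) + 1) * 8
        guard offset + len <= packet.count else { return nil }
        next = packet[offset]
        offset += len
    }
    return (next, offset)  // e.g. (6, 40) for plain TCP with no extensions
}
```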

                  NAT adds a level of unnecessary complexity, breaks a lot of existing protocols, and has resulted in the adoption of inferior replacements designed to work around the limitations caused by widespread NAT.

                  NAT breaks protocols that encode an address into the protocol. Bein

          • The single thing I hate most about my Mac is that it can't run 32-bit apps. But Apple's gonna Apple.

            I have seen stupid arguments against Macs, but that must be one of the stupidest ones.

            • And I've seen some ignorant shills, but you're absolutely one of the most ignorant ones.

              Applications don't stop being useful because company policy says so.
Processors that support both 32- and 64-bit modes have the benefit of using the best tool for the job. Moving to 64-bit for the sake of moving is fucking stupid.
So now, instead of first-party support for 32-bit apps, we have apps that have to be run in a fucking virtualized version of macOS, or CrossOver's magic 32-on-64-bit Wine preloader.

              It's fuckin
              • by Bert64 ( 520050 )

And when exactly is a 32-bit mode a better tool for the job than a 64-bit one?

In theory, running in a 32-bit mode gives you slightly lower memory usage for the one application. But in practice, there are significantly more downsides.

If your OS supports both 64-bit and 32-bit applications, then it will require two sets of all shared libraries. Running a single 32-bit application on an otherwise 64-bit system requires loading these duplicate libraries, completely eliminating any memory savings you might have gained.

And when exactly is a 32-bit mode a better tool for the job than a 64-bit one?

                  Oh boy.
                  The fact that you ask that question means you shouldn't be thinking you know enough about anything to be concocting opinions on this matter.

Moving from 32-bit to 64-bit pointers increases memory usage significantly.
On the "pro" side, it means more memory is addressable.
Often it means more data can be stuffed into registers.
And that's it.
On the "con" side, it means more memory is used (pointers are double the size now) and cache is now half as useful.

This means an application can run faster in 64-bit mode, or it can run faster in 32-bit mode. It all depends on the complex dance between how it behaves in cache and how much of an advantage you get with the bigger registers.
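A rough illustration of the cache-density side of that trade-off, for a 64-bit Swift target with a hypothetical pointer-heavy node type:

```swift
// Cache-density illustration: a pointer-heavy node doubles in size when
// pointers go from 4 to 8 bytes, so fewer nodes fit per 64-byte cache line.
struct Node {
    var key: Int32
    var left: UnsafeMutablePointer<Node>?
    var right: UnsafeMutablePointer<Node>?
}

// On a 64-bit target each pointer is 8 bytes:
print(MemoryLayout<UnsafeRawPointer>.size)  // 8
print(MemoryLayout<Node>.stride)            // 24 (4 + 4 padding + 8 + 8)
// With 32-bit pointers the same fields would be 4 + 4 + 4 = 12 bytes,
// so a 64-byte cache line would hold 5 nodes instead of 2 - the
// "cache is half as useful" point, for pointer-chasing workloads.
```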

                  • by Bert64 ( 520050 )

This means an application can run faster in 64-bit mode, or it can run faster in 32-bit mode. It all depends on the complex dance between how it behaves in cache and how much of an advantage you get with the bigger registers.

The vast majority of things run faster in 64-bit mode on ARM or x86, not just due to the larger address space but due to other improvements made when the architectures were moved to 64-bit.
Linux has an x32 ABI which allows some of the benefits of 64-bit on x86 (e.g., more registers and some other stuff) to be used from 32-bit code while still using 32-bit pointers; there was talk of dropping this support as hardly anyone uses it.

                    Yes, it will require two sets of shared libraries. If 8 trillion Linux distributions and Microsoft can handle it, so can Apple.

Most Linux distributions don't have multilib support by default; you have to explic

The vast majority of things run faster in 64-bit mode on ARM or x86, not just due to the larger address space but due to other improvements made when the architectures were moved to 64-bit.

                      It is true that most things do run faster in 64-bit mode. But not all.
                      It's highly dependent on the workload, as I said before.
                      The other improvements claim is silly. It's merely the fact that you have more registers and can thus stuff more data in 0-cycle SRAM.
                      Even in 32-bit mode, fetches are 64-bit. The memory bus of the CPU has *always* been larger than 32-bit.

Linux has an x32 ABI which allows some of the benefits of 64-bit on x86 (e.g., more registers and some other stuff) to be used from 32-bit code while still using 32-bit pointers; there was talk of dropping this support as hardly anyone uses it.

                      Uh, no. Lol. That is *not* what x32 ABI is.
                      x32 ABI allows for 32-bit pointer use (via long-mode segmentation).
                      The code is still 64-bit. You can

Apple started saying in 2015 that kernel extensions were being deprecated, as it was risky letting third-party developers get kernel rights. Google moved off of them in 2017. 2019 was to be the end, but Microsoft, Dropbox, and others requested (and received) exceptions to continue under a special signing key and testing program. Two years will be up in May.
        • by dgatwood ( 11270 )

I don't think this is a certificate issue. Apple allows (allowed?) unsigned kexts if they were installed prior to when signing was enforced, and supports checking whether a certificate was valid at the time when an app was first installed, rather than expiring the app as soon as its signing certificate becomes invalid. So why would an expiring certificate prevent a kext from working (as opposed to merely preventing new installations)? That's contrary to my understanding of how Apple's security architecture

      • Re: (Score:2, Informative)

        by thegarbz ( 1787294 )

I think this points to more of a problem with Apple's "agile" changes to its OS. The dropping of kernel extensions started with Catalina, meaning they were in deprecated status for only a couple of years. This is a significant deviation by Apple compared to every other OS, where deprecated extensions largely continue to work for 5 to 10 years or even longer.

        I'm not surprised that Microsoft and Dropbox didn't do anything about that within a year.

      • by dgatwood ( 11270 )

        The API in question is already deprecated, and has been for quite some time. Microsoft and Dropbox have known about this change for more than a year and haven't taken the time to update their software.

        Ah, but was the replacement API available and working reliably more than a year ago? Did Apple tell them when the old API would become unavailable?

        Apple are now finally removing the API altogether (as they always planned to do), and that's what is going to break Dropbox and OneDrive syncing.

        Dropbox have not even updated their software for native Apple Silicon support on Macs, when the hardware has been available to developers since June 2020.

        The reason they didn't update their software for Apple Silicon is because non-Apple kexts aren't allowed on Apple Silicon. So supporting that hardware amounts to a complete rewrite of the functionality in question. That could easily take two years even if the APIs they needed were already available and working perfectly in June 2020.

        I'm not at all surprised that they have also not updated their app to not rely on a deprecated kernel extension...

        I'm not surprised by softwa

        • Kernel Extensions (kexts) were deprecated in macOS Catalina, released in October 2019. They then transitioned to unsupported in macOS Big Sur, released in November 2020.

          Now, this breaking functionality in Dropbox is not what it seems on the surface.
          I don't know what is actually changing behind the scenes, but the outcome is that Smart Sync (which is what Dropbox calls syncing cloud-only files to the local disk, on demand) is still working if you double-click on a file in the Finder. It's only when you try t

  • Wish Apple could make something like Google's MacFUSE, so third parties can attach their drives and such via a known mechanism. This also would allow for use of other filesystems with macOS like ZFS, btrfs, ext4, writable NTFS, and so on. This way, it is up to the third party what it chooses to present through the mechanism.

One is left wondering if the macfuse.kext will continue to work in macOS 12.3, or whether they have removed some of the interfaces it depends on.
OneDrive and Dropbox are in this position because they'd been using kernel extensions despite knowing they would be deprecated in 2019. They got an exception to keep using them for another 2 years. Now, to run at all, it requires a reboot into recovery mode and changing the security policy. So, no, FUSE's days are also numbered unless it too moves off of a kernel extension.
  • by msauve ( 701917 ) on Thursday January 27, 2022 @08:38PM (#62213967)
    >the page notifies users that Dropbox's online-only file functionality will break in macOS 12.3

    Nope. It's MacOS which breaks functionality, not Dropbox.

    A long, long, time ago, Apple was all about standards and compatibility. They used to complain (rightly) about Microsoft's "embrace, extend, extinguish." They've since learned what makes money, and are now doing the same shit.
    • A long, long, time ago, Apple was all about standards and compatibility.

      I don't know when that was and I've been using Macs since 1990. There was a time in the early aughts when they focused a bit more on standards and compatibility, but that was pretty self-serving. Apple has always been willing to adopt standards and be compatible when they like the technology and stubbornly proprietary when they think their solution is better. I still think that's better than Microsoft's attitude where they want standards and compatibility—they just want to be the one controlling those

      • by msauve ( 701917 )
        >Apple has always...

        Gonna need specifics, especially in comparison to competing tech co's (IBM, DEC, HP, Tandy, Commodore, ...).
    • > Nope. It's MacOS which breaks functionality, not Dropbox.

      Apple removes legacy stuff from the OS usually after a sufficient time for the developer to prepare for it.
That's why I use Apple products a lot.
      If I need a way to support obsolete software/hardware I use Linux.

      • by msauve ( 701917 )
        >Apple removes legacy stuff from the OS usually after a sufficient time for the developer to prepare for it.

        And forces its customers to repurchase now-obsolete software, retire no-longer-updated hardware, and purchase the next cycle of stuff.

        I understand why developers love them, you get to go along for the ride.
In 2015 Apple was telling developers to stop writing insecure code that ran with kernel privileges. It's been 7 years of nagging, and they couldn't pull it off. This isn't like "oh, this new UI framework better aligns with our future design aesthetic, and one day the old one will go away" (yes, that too happened); it is "don't rootkit people's devices!"
Both Dropbox and MS have to update their software. Neither of them is forcing you to buy new hardware.
    • A long, long, time ago, Apple was all about standards and compatibility.

      Apple has literally never cared about either of those things, period. I was an Apple User in the ][ days, and also the Macintosh II days, and I've had a couple of OSX machines, still have one in fact, and an accelerated Mac SE I sold once and then the buyer kept telling me they didn't have time for me to drop it off and now I still have it because they stopped returning my messages. Very weird, but thanks for the money I guess. ANYHOO, at no time has Apple ever given a fuck about a standard.

And how is a kernel extension change, which only worked on macOS before, a "standard"? How is this any different from when Windows or Linux changes their kernels?
At least they know what would break and got out ahead of it. Practically every major OS X upgrade breaks Kontakt or Ableton or some other program that relies on low-latency audio hardware access. This happens OVER AND OVER, so if they're checking for such conflicts, they're doing a piss-poor job of it.

    • by E-Lad ( 1262 )

      Have you considered the possibility that the devs of Kontakt and Ableton are using undocumented or non-public APIs, which is often a cause of such a scenario?

  • by Malc ( 1751 ) on Friday January 28, 2022 @02:24AM (#62214387)

    If you're using beta software then expect things to break. In this case, it sounds like there will be updates available from Dropbox and Microsoft in time for the OS update's final release.

Meanwhile, developers should keep their toolchains up to date and pay attention to the warnings the compiler spits out. Apple signals deprecations early this way for the people who don't read the documentation.
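For illustration, this is how a deprecation surfaces at build time in Swift; the function names here are hypothetical, not real Apple APIs:

```swift
// Marking the old entry point deprecated makes every call site emit a
// compiler warning at build time, long before the symbol is removed.
@available(macOS, deprecated: 12.0, message: "Use syncOnDemand() instead")
func legacyKextSync() { /* old code path */ }

func syncOnDemand() { /* replacement code path */ }

legacyKextSync()
// warning: 'legacyKextSync()' was deprecated in macOS 12.0: Use syncOnDemand() instead
```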

... that they want to lock Apple users in.
