As an Apple hater, I disagree. (Score:5, Insightful)
I loathe Apple. They are probably one of the most detestable companies in the technology sector right now. I see them as a modern version of 90s Microsoft.
But this? I think this is a move in the right direction. The added security benefits of sandboxing far outweigh any negative consequences for the few developers too lazy to implement something Apple has been telling them to implement for the better part of a year (at least according to the OS X review from Ars Technica a few days ago).
Maybe, but my application is one of those that cannot be sandboxed.
Did you know there is no setting which allows an application to write files in a user-selected folder? No, you have to ask the user to approve every file save manually. That is hard when some of your customers want your application to record audio into 500 mono .wav files. Although I believe you can get a temporary exception from Apple, so you can change your application to comply; in other words, pull it from the app store.
Did you know there is no setting which allows an application to write files in a user-selected folder? No, you have to ask the user to approve every file save manually.
That's not true at all. The standard com.apple.security.files.user-selected.read-write entitlement can handle that very easily. All you have to do is use a standard open dialog to let the user choose a folder, and then write arbitrary crap into that folder or any subfolder within it. Then, save a security-scoped bookmark to that folder if you need the access to persist across relaunches.
Where things get awkward with that arrangement is when the user copies those files to another machine or restores from a backup. At that point, you'll have to ask the user to open the folder containing the file "foo.wav" or whatever.
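Concretely, the setup described above comes down to a small entitlements fragment. This is a sketch of the relevant .entitlements plist keys; the read-write key is the one named above, and the bookmarks key is, to my recollection, what permits app-scoped security-scoped bookmarks so the grant survives a relaunch:

```xml
<!-- App Sandbox entitlements fragment (sketch): sandboxing on,
     user-selected read/write access, app-scoped bookmarks. -->
<key>com.apple.security.app-sandbox</key>
<true/>
<key>com.apple.security.files.user-selected.read-write</key>
<true/>
<key>com.apple.security.files.bookmarks.app-scope</key>
<true/>
```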
I was thinking about a larger blockquote, but that choice bit above will suffice. "Those who don't understand UNIX are doomed to reinvent it - poorly." How they got to this point from Darwin is beyond me.
It's actually pretty straightforward. The UNIX security model sucks. It assumes that attacks come from the outside, and is designed to protect the user from other users on the same system. In the UNIX model, everything run by a particular user has the same rights as the user. In practice, that just isn't a viable security model anymore.
Consider this scenario: you have a web browser. When everything is working, you trust that the browser is not malicious, so you run it as yourself. Later, you go to a web page and, because of a bug in that browser, somebody is able to execute arbitrary code. Under the UNIX model, that browser can send all your files to a server in Croatia, encrypt them, and extort money from you in exchange for getting your data back.
The only way to prevent such a scenario in a traditional UNIX permission system is to run each application as a separate user. That might be practical for a power user, but it would be insane for most folks. And if you ever wanted to open that JPEG file that you saved with the web browser, you'd have to go in and either change the owner (Finder running as root is a terrifying thought) or set really scary ACLs. No matter how you cut it, that's not user-friendly.
A modern security model must be fundamentally built on the principle of distrust. Distrust everything. Any app could potentially become malicious at any time, whether because the app developer put in a backdoor or because somebody exploited a buffer overflow. It is, therefore, the responsibility of the operating system to not only protect the user from other users on the system, but also from flaws in other applications being run by the same user.
The result is a sandboxing model, in which applications are allowed to open only files that the user has explicitly authorized them to open. Although the user sees a standard file open dialog, when running in a sandbox, the application is not in charge of displaying that dialog. Instead, a system daemon called pboxd (the "powerbox daemon") displays the dialog. When the user chooses to allow that application access to a resource, that daemon then extends the application's sandbox to allow access to that file. In this way, the application has access to exactly the files or folders to which the user has granted it access. No more, no less.
Such a security model is really the only sane security model you can come up with. By using user intent rather than an arcane set of permissions, the user is able to open files in whatever application the user chooses, trusting the operating system to ensure that those applications do not have access to files that the user has not allowed those applications to open. This significantly reduces the benefits gained from attacking security holes in an application.
That's not to say that some apps don't need broader access (e.g. Finder), but it is a worthwhile goal to minimize the number of apps with that level of access, as they are the juiciest targets for attack.
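The open-dialog flow described above can be modeled in miniature: the sandboxed code never opens files on its own; a broker outside the sandbox shows the dialog, and only what the user picks becomes reachable. This is an illustrative Python sketch, not Apple's implementation; the Powerbox and SandboxedApp names are invented, and in the real system enforcement happens in the kernel, not in the app's address space.

```python
class Powerbox:
    """Stands in for pboxd: runs outside the sandbox and shows the
    open dialog (simulated here by a canned user choice)."""

    def __init__(self, user_choice):
        self._user_choice = user_choice

    def request_open(self):
        # In the real system this is where the dialog appears and the
        # sandbox profile is extended to cover the chosen path.
        return self._user_choice


class SandboxedApp:
    def __init__(self, broker):
        self._broker = broker
        self._granted = set()  # paths the sandbox currently allows

    def open_via_dialog(self):
        path = self._broker.request_open()
        self._granted.add(path)
        return path

    def read(self, path):
        # Access to anything the user never picked is refused.
        if path not in self._granted:
            raise PermissionError(f"sandbox denies access to {path}")
        return f"contents of {path}"  # simulated read


broker = Powerbox(user_choice="/Users/alice/notes.txt")
app = SandboxedApp(broker)
granted = app.open_via_dialog()
print(app.read(granted))                   # allowed: the user picked it
try:
    app.read("/Users/alice/secrets.txt")   # never granted
except PermissionError as e:
    print("denied:", e)
```

The point of the pattern is that a compromised app can only reach what the user already handed it; exactly the property the parent describes.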
Instead, a system daemon called pboxd (the "powerbox daemon") displays the dialog.
So attackers then target that daemon as it has system-wide access? Marvellous.
I don't have a Lion or Mountain Lion system in front of me, but I think that pboxd is a "system daemon" only in the sense that it's a daemon supplied as part of the system, and that it's run as the user, not as root. So attackers could target it, but they wouldn't get root access if they succeeded, they'd just get the same access an un-sandboxed user process would have.
Correct. But these days malicious software doesn't actually need root access any more. Unsandboxed access is probably "good enough".
Which means Apple had better have made pboxd pretty secure against attack. It's probably easier to make pboxd reasonably secure than to make all their frameworks and all their own apps reasonably secure, and to ensure that third-party apps are that secure, at least as long as the languages being used for development allow code unfettered read-write access to the writable portion of the address space.
Slight correction for you - it's sandboxd which is the golden ticket target - that's the core of the security model. All that attacking the powerbox daemon will do is give you write access to sections of the disk you wouldn't otherwise have. Attacking the sandbox itself gives you free rein.
So attackers then target that daemon as it has system-wide access? Marvellous.
First, it's running as the user, just outside the sandbox. Second, it has a very small attack surface—all it does is accept a request for either a load dialog or a save dialog—which should make it very difficult to attack. Third, it is not a daemon in the traditional sense. You cannot connect to it except from a process running as the user. Thus, it should not be possible to compromise it until after you have already compromised some other process running as that user.
The only way to prevent such a scenario in a traditional UNIX permission system is to run each application as a separate user.
Let me introduce you to an interesting filesystem flag called SetGID.
I think he's quite aware of it. However, if the goal is to keep stuff that you run from getting access to files to which you have read access, set-GID doesn't help unless the permission system can say "a process with UID XXX only has access to this file if its group set includes group YYY", and the standard user/group/world permission set doesn't support that:
$ ls -l /tmp/doesntwork
----r--r--  1 gharris  staff  7 Jul 28 14:41 /tmp/doesntwork
$ id
uid=XXX(gharris) gid=20(staff) groups=20(staff),...
$ cat /tmp/doesntwork
cat: /tmp/doesntwork: Permission denied
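The rule this transcript demonstrates - exactly one permission class applies, and for the file's owner the group bits are never consulted - can be written down directly. A simplified Python sketch of the POSIX read-permission check (ignoring root, ACLs, and other refinements):

```python
def may_read(mode, is_owner, in_group):
    """Simplified POSIX access check for read permission.

    mode is the permission field, e.g. 0o044 for ----r--r--.
    Exactly one class applies: owner if you own the file, else group
    if you're a member, else other. The classes do not stack.
    """
    if is_owner:
        return bool(mode & 0o400)
    if in_group:
        return bool(mode & 0o040)
    return bool(mode & 0o004)


# gharris owns /tmp/doesntwork (mode ----r--r--) and is in group staff,
# yet the read fails: the owner class applies and it grants nothing.
print(may_read(0o044, is_owner=True, in_group=True))    # False
# A different staff member, who is not the owner, could read it:
print(may_read(0o044, is_owner=False, in_group=True))   # True
```

Which is exactly why set-GID tricks can't express "deny the owner unless some group condition holds": the owner's own permission bits always win.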
The UNIX security model sucks. It assumes that attacks come from the outside, and is designed to protect the user from other users on the same system. In the UNIX model, everything run by a particular user has the same rights as the user. In practice, that just isn't a viable security model anymore.
The key is "anymore", so it's perhaps better stated as "the UNIX security model is no longer sufficient" - and I'd rephrase that as "the time-sharing security model is no longer sufficient", as that model predates UNIX and continues to exist in some present-day operating systems other than UN*Xes. It was pretty good for machines at that time, but, in a world with lots more application developers, more naive users, and access to the Intertubes being common, it's not so good any more.
...in a world with lots more application developers, more naive users, and access to the Intertubes being common, it's not so good any more.
I'm not sure I agree with that. There was always a high percentage of naïve users. I remember the UNIX users of a couple of decades back, and sure, some of us were compiling our own tools and stuff, but 90% of them were just professors running PINE (which had at least a couple of remote security holes over the years). And the larger number of developers actually makes things better, not worse.
...in a world with lots more application developers, more naive users, and access to the Intertubes being common, it's not so good any more.
I'm not sure I agree with that. There was always a high percentage of naïve users.
That's only one part of it. If you don't have a lot of malware out there, and you don't have Intertubes access to provide a bigger attack surface, the naive users matter less.
I remember the UNIX users of a couple of decades back,
UNIX isn't 20 years old. UNIX is over 40 years old, and the security model is even older, dating back at least to Multics, and possibly earlier (I forget how TOPS-10 handled permissions, for example).
And the larger number of developers actually makes things better, not worse.
O RLY?
The more apps that serve a particular purpose, the fewer users that use any one of them, and thus the fewer users that will be affected by an attack on it.
And, if the percentage of developers who are malicious is constant (or increases over time), the more developers, the more malicious developers.
UNIX isn't 20 years old. UNIX is over 40 years old...
That was an arbitrary number, and was not intended to imply that UNIX had only been around for twenty years. The point of that number was to show that naïve users are not a new problem in the UNIX world.
And, if the percentage of developers who are malicious is constant (or increases over time), the more developers, the more malicious developers.
True, but protecting against malicious apps isn't the intended purpose of sandboxing. A malicious app, given that the user will happily grant it access to whatever files it asks for, can still do damage within that grant; sandboxing is about containing honest apps that get exploited.
"All my life I wanted to be someone; I guess I should have been more specific."
-- Jane Wagner