Apple Security Blunder Exposes Lion Login Passwords In Clear Text 205
An anonymous reader writes "An Apple programmer, apparently by accident, left a debug flag open in the most recent version of its Mac OS X operating system. In specific configurations, applying the OS X Lion update 10.7.3 turns on a system-wide debug log file that contains the login passwords of every user who has logged in since the update was applied. The passwords are stored in clear text."
Do they have a build process? (Score:5, Insightful)
When I build a system for Linux distribution, I use scripts to configure the options on the build server. I don't use manually specified configurations from developer workstations.
Doesn't Apple grasp the concept of source code versioning and build management? Or was the debug flag in question hard-coded in the source rather than specified as a build option? If so, Apple needs to revisit its coding structure and figure out how to set BUILD TIME options instead of hard-coding them.
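As a sketch of what the parent means by build-time options — everything here (the `BUILD_DEBUG` variable, the generated module name) is a hypothetical illustration, not Apple's actual build system:

```python
import os

# Hypothetical build step: read the debug option from the build
# environment and emit a generated config module. The flag lives in
# the build server's configuration, never hard-coded in shipped source.
DEBUG = os.environ.get("BUILD_DEBUG", "0") == "1"

def write_build_config(path="build_config.py"):
    with open(path, "w") as f:
        # repr(DEBUG) is 'True' or 'False', which is valid Python.
        f.write(f"DEBUG_LOGGING = {DEBUG!r}\n")

write_build_config()
```

The shipped code imports the generated module, so a developer workstation's settings can't leak into a release: only what the build server writes at build time takes effect.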
Re:malware (Score:4, Insightful)
Considering that this only affects people who used FileVault encryption on their Mac prior to Lion, then upgraded to Lion but kept their folders encrypted with the legacy version of FileVault, I hardly think this will be a popular vector for any attacks, malware or otherwise.
We have QA processes which automatically detect (Score:5, Insightful)
Does Apple have no such thing? This leads me to think that Apple either has no development lifecycle or, if it has one, follows it only half-heartedly.
Re:We have QA processes which automatically detect (Score:4, Insightful)
I've been working in software verification, here and there, for a double-digit number of years, on a number of products. I've seen programmers do things in development that they forgot to clean out before release that would curl your hair — especially the ones fresh out of school, who don't have a lot of experience. "Oh, I'll put in these debug lines just for now." No wrappers or conditional compilation of any kind, so they leak into the final product with no one the wiser.
Another commenter pointed out that a proper assurance test would look for rogue files. That works for unauthorized/unspecified log files, such as in this case, if the organization has good specifications and tight testing. I'm not in a position to comment about Apple's coverage in this area. The problem is that other debug statements could make unauthorized entries into authorized logs, and who would catch it?
What I found most effective was peer code review, especially if you had the coder equivalent of the BOFH in the audience to catch crap like this. There's nothing like people seeing "release" code with debug stuff not stubbed out.
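To illustrate the "wrappers" the comment above says were missing — a minimal sketch (the function names and logging setup are invented for illustration, not anyone's real code) of routing debug lines through a gated logger so a release configuration can silence them globally:

```python
import logging

# Release default: only WARNING and above are emitted, so debug lines
# that slip through never reach a log file in production.
logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("myapp")

def authenticate(user, password):
    # Diagnostics go through the logger, not bare print() calls.
    # The password itself is deliberately NOT logged: even behind a
    # debug gate, secrets should never be written to a log.
    log.debug("authenticate() called for user=%s", user)
    return user == "alice" and password == "secret"

print(authenticate("alice", "secret"))  # prints True, and no debug line
```

A debug build would flip the level to `logging.DEBUG` in one place; the point is that the gate is centralized rather than depending on each developer remembering to delete their lines.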
Re:Do they have a build process? (Score:5, Insightful)
There is no way you can protect yourself against a careless developer.
Of course there is. It's called "code review".
Just a bug (Score:5, Insightful)
Somehow I have a feeling that if this same kind of "bug" had been found in another operating system, such as one coming from Redmond, the discussion and media coverage would have been quite different, and there would have been many more Slashdot comments on this story.
We are talking about passwords stored in clear text, no fix yet, and, based on the article, no assurance that a fix would remove existing copies of the unencrypted passwords. For a company that was wondering how to spend $100 billion. What a joke.
Re:malware (Score:2, Insightful)
Slow down here, chief; these aren't Linux users you're talking about. Apple's upgrade is easy: it asks if you want to upgrade the encryption to match Lion's. If you said no, then you're exposed, since you're using old code. They're not asking you to recompile your kernel here. I've never met anyone serious about encryption who stays versions behind.
Re:Do they have a build process? (Score:5, Insightful)
That's option B, option A is called "Open Source".
Which works as a distributed form of... wait for it... code review.
timestamp it and salt it and then hash it? (Score:5, Insightful)
First, if you timestamp it, you don't need to salt it. The password would effectively have a lifetime of minutes at best, so adding a salt doesn't improve anything.
Second, your idea ruins the whole point of using a trapdoor function (what the internet means by "hash"). The point of the trapdoor function is that the server doesn't have to have your password stored on it, because you can just verify the password presented by comparing a hashed form of the presented password to the hash you have stored.
But with a time+password hashing scheme, the server must know the user's password, because each time the user logs in, it must construct a new hash from the password and the current time.
So, if your server is going to know the password, just use a shared secret system like SRP. Then you get two-way mutual authentication too.
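The trapdoor-function argument above can be sketched concretely. This is only a minimal illustration of salted-hash verification — the function names are invented, and real systems use a slow KDF (bcrypt, scrypt, PBKDF2) rather than a single SHA-256, and SRP (mentioned above) is a genuinely different protocol:

```python
import hashlib
import hmac
import os

def make_record(password):
    # The server stores only (salt, hash); the password is never kept.
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest

def verify(salt, digest, attempt):
    # Re-derive the hash from the presented password and compare in
    # constant time; no stored plaintext is needed.
    candidate = hashlib.sha256(salt + attempt.encode()).digest()
    return hmac.compare_digest(candidate, digest)

salt, digest = make_record("hunter2")
print(verify(salt, digest, "hunter2"))  # True
print(verify(salt, digest, "wrong"))    # False
```

The contrast with the time+password scheme is that here the server only ever re-runs the one-way function on what the client presents; it never needs the password as an input it supplies itself.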
Re:malware (Score:4, Insightful)
The problem with Linux and generalizations is that there's more than one Linux.
I've never recompiled a kernel (that I'm aware of).
Re:timestamp it and salt it and then hash it? (Score:4, Insightful)
The point of the trapdoor function is that the server doesn't have to have your password stored on it, because you can just verify the password presented by comparing a hashed form of the presented password to the hash you have stored.
But with a time+password hashing scheme, the server must know the user's password, because each time the user logs in, it must construct a new hash from the password and the current time.
Not true. The server can just store the salt plus the hash of the password. When a client wants to authenticate, the server sends the salt plus a random one-time value. The client uses the salt to generate the password hash, concatenates the random value with that hash, hashes the whole thing, and sends the result to the server. The server likewise computes the hash of the random value concatenated with its stored password hash, then compares that with what the client submitted. If it matches, the client had the right password. The hash sent by the client cannot be used for replay attacks, because the random value will be different for the next login. The server does not know the user's password, and the hash is salted, making it hard to recover the password (which helps if it was reused on other sites). A stolen password hash can still be used to authenticate through this login algorithm, however.
There are probably better solutions, but this is enough to prove that the server does not have to know the user's password even if timestamps or one-time random values are used.
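The scheme described above can be sketched as follows. This is a hedged illustration of the comment's protocol, not a vetted design — the names are invented, and note the caveat from the comment: the stored hash is password-equivalent here, so stealing it lets an attacker log in (which is one reason real protocols such as SRP exist):

```python
import hashlib
import hmac
import os

def sha256(data):
    return hashlib.sha256(data).digest()

def enroll(password):
    # Server stores (salt, H(salt + password)); never the password.
    salt = os.urandom(16)
    return salt, sha256(salt + password.encode())

def client_response(salt, nonce, password):
    # Client rebuilds the salted password hash, then binds it to the
    # server's one-time nonce so the response can't be replayed.
    pw_hash = sha256(salt + password.encode())
    return sha256(nonce + pw_hash)

def server_check(stored_hash, nonce, response):
    # Server recomputes the same value from its stored hash alone.
    expected = sha256(nonce + stored_hash)
    return hmac.compare_digest(expected, response)

salt, stored = enroll("hunter2")
nonce = os.urandom(16)  # fresh per login attempt
print(server_check(stored, nonce, client_response(salt, nonce, "hunter2")))  # True
print(server_check(stored, nonce, client_response(salt, nonce, "wrong")))    # False
```

A replayed response fails on the next attempt because the server draws a fresh nonce each time, exactly as the comment argues.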