Private Cloud Compute Security Guide
security.apple.com
280 points by djoldman 12 hours ago
There's something missing from this discussion.
What really matters isn't how secure this is on an absolute scale, or how much one can trust Apple.
Rather, we should weigh this against what other cloud providers offer.
The status quo for every other provider is: "this data is just lying around on our servers. The only thing preventing an employee from accessing it is that it would be a violation of policy (and might be caught in an internal audit)." Most providers also carve out several cases where they can look at your data, for support, debugging, or analytics purposes.
So even though the punchline of "you still need to trust Apple" is technically true, this is qualitatively different because what would need to occur for Apple to break their promises here is so much more drastic. For other services to leak their data, all it takes is for one employee to do something they shouldn't. For Apple, it would require a deliberate compromise of the entire stack at the hardware level.
This is far harder to pull off and far more difficult to hide, and therefore Apple's security posture is qualitatively better than Google's, Meta's, or Microsoft's.
If you want to keep your data local and trust no one, sure, fine: then you don't need to trust anyone at all. But presuming you (a) are going to use cloud services and (b) care about privacy, Apple has a compelling value proposition.
Sibling comments point out (and I believe, corrections are welcome) that all that theater is still no protection against Apple themselves, should they want to subvert the system in an organized way. They’re still fully in control. There is, for example, as far as I understand it, still plenty of attack surface for them to run different software than they say they do.
What they are doing by this is of course to make any kind of subversion a hell of a lot harder and I welcome that. It serves as a strong signal that they want to protect my data and I welcome that. To me this definitely makes them the most trusted AI vendor at the moment by far.
As soon as you start going down the rabbit hole of state sponsored supply chain alteration, you might as well just stop the conversation. There's literally NOTHING you can do to stop that specific attack vector.
History has shown, at least to date, Apple has been a good steward. They're as good a vendor to trust as anyone. Given a huge portion of their brand has been built on "we don't spy on you" - the second they do they lose all credibility, so they have a financial incentive to keep protecting your data.
Apple have name/address/credit-card/IMEI/IMSI tuples stored for every single Apple device. iMessage and FaceTime leak numbers, so they know who you talk to. They have real-time location data. They get constant pings when you do anything on your device. Their applications bypass firewalls and VPNs. If you don't opt out, they have full unencrypted device backups, chat logs, photos and files. They made a big fuss about protecting you from Facebook and Google, then built their own targeted ad network. Opting out of all tracking doesn't really do that. And even if you trust them despite all of this, they've repeatedly failed to protect users even from external threats. The endless parade of iMessage zero-click exploits was ridiculous and preventable, CKV only shipped this year and isn't even on by default, and so on.
Apple have never been punished by the market for any of these things. The idea that they will "lose credibility" if they livestream your AI interactions to the NSA is ridiculous.
> They made a big fuss about protecting you from Facebook and Google, then built their own targeted ad network.
What kind of targeted advertising am I getting from Apple as a user of their products? Genuinely curious. I’ll wait.
The rest of your comment may be factually accurate, but it isn’t relevant for “normal” users, only for those hyper-aware of their privacy. Don’t get me wrong, I appreciate knowing this detail, but you also need to realize that there are degrees of privacy.
> What kind of targeted advertising am I getting from Apple as a user of their products?
https://support.apple.com/guide/iphone/control-how-apple-del...
In the App Store and Apple News, your search and download history may be used to serve you relevant search ads. In Apple News and Stocks, ads are served based partly on what you read or follow. This includes publishers you’ve enabled notifications for and the type of publishing subscription you have.
> If you don't opt out, they have full unencrypted device backups, chat logs, photos and files.
Also, full disk encryption is opt-in for macOS. But that's probably not because Apple wants you to be insecure; more likely they want to make it easier for users to recover data if they forget a login or backup password they set years ago.
> real-time location data
Locations are end to end encrypted.
> Also full disk encryption is opt-in for macOS. But the answer isn't that Apple wants you to be insecure, they just probably want to make it easier for their users to recover data if they forget a login password or backup password they set years ago.
"If you have a Mac with Apple silicon or an Apple T2 Security Chip, your data is encrypted automatically."
The non-removable storage is, I believe, encrypted using a key specific to the Secure Enclave, which is cleared on factory reset. APFS does allow for other levels of protection, though (such as protecting a significant portion of the system with a key derived from the initial password/passcode, which is only enabled while the screen is unlocked).
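To make the idea concrete, here's a toy two-level key hierarchy in Python. This is an illustrative sketch of the general pattern, not Apple's actual design: a random volume key encrypts data at rest, and a key-encryption key (KEK) derived from the user's passcode "wraps" the volume key, so destroying the wrapped key material effectively erases the disk. The XOR wrap and PBKDF2 parameters are simplifications; real designs use AES key wrap and hardware-bound keys.

```python
import hashlib
import os
import secrets

def derive_kek(passcode: str, salt: bytes) -> bytes:
    # Stretch the passcode into a 32-byte key-encryption key.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 200_000)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

volume_key = secrets.token_bytes(32)  # in hardware, this never leaves the Secure Enclave
salt = os.urandom(16)
wrapped = xor(volume_key, derive_kek("123456", salt))  # toy wrap; real designs use AES key wrap

# Only the correct passcode recovers the volume key:
assert xor(wrapped, derive_kek("123456", salt)) == volume_key
assert xor(wrapped, derive_kek("000000", salt)) != volume_key
```

Note the consequence: a factory reset only has to destroy `wrapped` (and the enclave key), not overwrite the whole disk.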
It's disingenuous to compare Apple's advertising to Facebook and Google.
Apple does first party advertising for two relatively minuscule apps.
Facebook and Google power the majority of the world's online advertising, have multiple data sharing agreements, widely deployed tracking pixels, allow for browser fingerprinting and are deeply integrated into almost all ecommerce platforms and sites.
They have not been punished because they have not abused their access to that data.
Some might call this abuse: https://news.ycombinator.com/item?id=42069588
Didn’t Edward Snowden reveal that Apple provides direct access to the NSA for mass surveillance?
> allows officials to collect material including search history, the content of emails, file transfers and live chats
> The program facilitates extensive, in-depth surveillance on live communications and stored information. The law allows for the targeting of any customers of participating firms who live outside the US, or those Americans whose communications include people outside the US.
> It was followed by Yahoo in 2008; Google, Facebook and PalTalk in 2009; YouTube in 2010; Skype and AOL in 2011; and finally Apple, which joined the program in 2012. The program is continuing to expand, with other providers due to come online.
https://www.theguardian.com/world/2013/jun/06/us-tech-giants...
That seemed to be puffery about a database used to store subpoena requests. You have "direct access" to a service if it has a webpage you can submit subpoenas to.
Didn’t Apple famously refuse the FBI’s request to unlock the San Bernardino attacker’s iPhone? The FBI ended up hiring an Australian company, which used a Mozilla bug that allowed unlimited password guesses without the phone wiping.
If the NSA had that info, why go through the trouble?
> If the NSA had that info, why go through the trouble?
To defend the optics of a backdoor that they actively rely on?
If Apple and the NSA are in cahoots, it's not hard to imagine them anticipating this kind of event and leveraging it for plausible deniability. I'm not saying this is necessarily what happened, but we'd need more evidence than just the first-party admission of two parties that stand to gain from privacy theater.
> There's literally NOTHING you can do to stop that specific attack vector.
E2E encryption. It might not be applicable to remote execution of AI payloads, but it is applicable to most everything else, from messaging to storage.
Even if the client hardware and/or software is also an actor in your threat model, that can be eliminated, or at least mitigated, with a verifiably trusted piece of equipment. Open hardware is an alternative, and some states build their entire hardware stack to eliminate such threats. With even one trusted device, mitigations are possible (e.g. an external network filter).
E2E does not protect metadata, at least not without significant overheads and system redesigns. And metadata is as important as data in messaging and storage.
> And metadata is as important as data in messaging and storage.
Is it? I guess this really depends. For E2E storage (e.g. as offered by Proton with openpgpjs), what metadata would be of concern? File size? File type cannot be inferred, and file names could be encrypted if that's a threat in your model.
The most valuable "metadata" in this context is typically with whom you're communicating/collaborating and when and from where. It's so valuable it should just be called data.
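The metadata point can be made concrete with a toy relay server. In this sketch (an illustration, not any real system's design), the relay never decrypts payloads, yet its operator still accumulates a log of who talks to whom, when, and how much; exactly the social-graph data the comment above calls "so valuable it should just be called data."

```python
import time

class ToyRelay:
    """Models an E2E messaging server: payloads are opaque ciphertext,
    but routing metadata is visible to whoever runs the relay."""

    def __init__(self):
        self.metadata_log = []  # what the operator (or a subpoena) can see
        self.mailboxes = {}     # ciphertext queued for each recipient

    def deliver(self, sender: str, recipient: str, ciphertext: bytes) -> None:
        # The relay never sees plaintext, yet it still learns
        # who talks to whom, when, and roughly how much.
        self.metadata_log.append((sender, recipient, time.time(), len(ciphertext)))
        self.mailboxes.setdefault(recipient, []).append(ciphertext)

relay = ToyRelay()
relay.deliver("alice", "bob", b"\x9f\x02\x4c...opaque ciphertext...")
sender, recipient, when, size = relay.metadata_log[0]
assert (sender, recipient) == ("alice", "bob")  # the social graph leaks anyway
```

Hiding this metadata requires redesigns like mixnets or sealed-sender schemes, which is the "significant overheads" point above.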
How is this relevant to private cloud storage?
No point in storing data if it is never shared with anyone else.
Whom it is shared with can reveal the intent of the data.