
A lot of other freedoms are being abused and always have been, but somehow we don't go and ban kitchen knives, as having them around is valuable. This is a false dichotomy. Systems can be secure and trusted by the user without having to cede control, and some risks are just not worth eliminating.

Most importantly - it's the user who needs to know whether their system has been tampered with, not apps.




> somehow we don't go and ban kitchen knives

False analogy. You can’t have your kitchen knife exploited by a hacker team in North Korea, who shotgun attacks half of the public Internet infrastructure and uses the proceeds to fund the national nuclear program, can you? (I somewhat exaggerate, but you get the idea.)

> Systems can be secure and trusted by the user without having to cede control

In an ideal world where users have infinite information and infinite capability to process and internalize it to become an infosec expert, sure. I don’t know about you, but most of us don’t live in that world.

I agree it’s not perfect. Having to use Liquid Glass and being unable to install custom watch faces is ridiculous. There’s probably an opportunity for a hardened OS that interested parties can trust not to be maliciously altered, and that also doesn't force so many constraints onto users the way current walled gardens do. But a fully open OS, plus an ordinary user who has no time or willingness to casually become a tptacek on the side, in addition to a completely unrelated full-time job that’s getting more competitive due to LLMs and whatnot, seems more like a disaster than a utopia.


> You can’t have your kitchen knife exploited by a hacker team in North Korea, who shotgun attacks half of the public Internet infrastructure and uses the proceeds to fund the national nuclear program, can you? (I somewhat exaggerate, but you get the idea.)

Isn’t the status quo that you need to intentionally choose to allow this?


On iOS, the worst you can do is not update your OS and thus be vulnerable to exploits. There is no setting that a casual user could be socially engineered into enabling that would allow the OS to be patched.

Yes (well, kinda - attested systems can be and are vulnerable too), and remote attestation is completely orthogonal to that threat anyway. Securing the boot chain does not involve letting apps verify the environment they run in, it's an extra (anti-)feature that's built on top of secure boot chains.
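To make the distinction concrete, here's a toy Python sketch of what a secure boot chain does on its own: each stage checks a measurement of the next stage before handing over control, and the only party informed of a mismatch is the device itself (and thus the user). No remote party is involved anywhere. All stage names and contents below are made up for illustration, not any real firmware's layout:

```python
# Toy model of secure-boot verification: compare each stage's hash against
# a known-good value before running it. A real chain uses signatures and
# hardware roots of trust, but the shape of the check is the same.
import hashlib

def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# Illustrative stage images; expected digests would be provisioned at build time.
bootloader = b"bootloader v1"
kernel = b"kernel v1"
expected = {
    "bootloader": digest(bootloader),
    "kernel": digest(kernel),
}

def verify_chain(stages: dict[str, bytes]) -> bool:
    """Return True only if every stage matches its expected digest."""
    return all(digest(blob) == expected[name] for name, blob in stages.items())

assert verify_chain({"bootloader": bootloader, "kernel": kernel})
# A tampered kernel fails verification, so the device can refuse to boot:
assert not verify_chain({"bootloader": bootloader, "kernel": b"patched kernel"})
```

Note that nothing in this check reports anything to an app or a server; that reporting step is the separate feature under discussion.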

It's also really incredible how people can see "user being in control" and just immediately jump to "user having to be an infosec expert", as if one implied the other. You can't really discuss things in good faith in such a climate :(


Bootloader patching is just what you chose to use in your original false analogy. Letting apps verify the environment they run in is just as critical for guaranteeing digital identity. It’s all pieces of the same puzzle.

It's not. I can guarantee my identity by e.g. scanning my ID card on a system with absolutely no secure boot chain. I can also guarantee a secure boot chain with my patched bootloader. Neither of these things require apps to verify the environment they run in.

> I can guarantee my identity by e.g. scanning my ID card on a system with absolutely no secure boot chain.

Your ID card is on your phone. Go ahead: guarantee that you’re not using a duplicate of someone else’s ID card, and that no one could duplicate your card, with a mainstream, widely available consumer phone.

> I can also guarantee a secure boot chain with my patched bootloader.

Go ahead, show how your grandma automatically guarantees to interested parties that I or whoever else didn’t patch her bootloader to run a backdoored OS, while using a mainstream, widely available consumer phone.

> Neither of these things require apps to verify the environment they run in.

Demonstrate a mainstream, widely available consumer phone that does these things without requiring apps to verify the environment they run in.

We can continue this infinitely, but if you keep making sweeping contrarian statements without contributing the proof required then it’s just not worth it.


> Your ID card is on your phone.

No, it's not. It lies on the desk next to me right now. I can communicate with it over NFC and I can't duplicate it. There's a debit card next to it and the same applies there, though the debit card can also be read with a smartcard reader, which my ID can't.
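The reason reading such a card over NFC doesn't let you duplicate it is that the card never reveals its secret; it only answers challenges computed inside its secure element. A toy sketch of that shape, with HMAC standing in for the card's real asymmetric keypair (nothing here is an actual smartcard protocol):

```python
# Toy challenge-response model of a smartcard: the secret stays inside the
# card; the terminal and any eavesdropper only ever see nonce/response pairs.
import hashlib
import hmac
import os

class IdCard:
    def __init__(self) -> None:
        self._secret = os.urandom(32)  # never leaves the secure element

    def respond(self, nonce: bytes) -> bytes:
        # Real cards sign the challenge with a private key; HMAC is a stand-in.
        return hmac.new(self._secret, nonce, hashlib.sha256).digest()

card = IdCard()
nonce = os.urandom(16)
response = card.respond(nonce)
assert len(response) == 32
# Replaying a captured response fails against a fresh challenge:
assert card.respond(os.urandom(16)) != response
```

This is what makes the card copy-resistant in a way a static number isn't, and none of it requires the verifying app to attest the phone it runs on.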

> guarantees to interested parties

The only interested party is my grandma, and she'll come to me to help her because her phone will stop working when the boot chain gets compromised (as it should).

> Demonstrate a mainstream, widely available consumer phone that does these things without requiring apps to verify the environment they run in.

Pretty much all of them today? Letting apps verify the environment is an extra feature built on top of secure boot chains, not the other way around. We're only having this discussion because having secure boot chains enables app attestation to work in the first place, and letting the user patch things is just a matter of key management policies. If you think these are "sweeping contrarian statements", you may want to spend some time learning how these things work.
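The layering can be sketched in a few lines of toy Python. HMAC with a device-held key stands in for the asymmetric attestation key real hardware uses, and none of the names below are any platform's actual API. The point the sketch makes: attestation doesn't distinguish secure from insecure, it distinguishes vendor-approved from everything else.

```python
# Toy model: remote attestation as a layer on top of measured boot.
import hashlib
import hmac

DEVICE_KEY = b"burned-in device key"  # held by hardware, not by the user

def boot_state_digest(stages: list[bytes]) -> bytes:
    """Fold the measurements of each boot stage into one digest."""
    h = hashlib.sha256()
    for stage in stages:
        h.update(hashlib.sha256(stage).digest())
    return h.digest()

def attest(stages: list[bytes], nonce: bytes) -> bytes:
    # The device "signs" (here: MACs) its measured boot state plus a nonce.
    return hmac.new(DEVICE_KEY, boot_state_digest(stages) + nonce, hashlib.sha256).digest()

def app_verifies(token: bytes, known_good: bytes, nonce: bytes) -> bool:
    # A remote app accepts only the vendor's known-good measurement.
    expected = hmac.new(DEVICE_KEY, known_good + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(token, expected)

stock = [b"bootloader v1", b"kernel v1"]
patched = [b"bootloader v1", b"user-patched kernel"]
nonce = b"app-chosen nonce"
known_good = boot_state_digest(stock)

assert app_verifies(attest(stock, nonce), known_good, nonce)
# A user-patched system fails the app's check even if it's perfectly secure:
assert not app_verifies(attest(patched, nonce), known_good, nonce)
```

Whose measurements count as `known_good`, and whose keys can sign a boot chain at all, is exactly the key-management policy question; the cryptography is the same either way.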

This is not a technical problem; the technical aspects were solved a long time ago. This is a social/political problem of who holds power over whom.


> but somehow we don't go and ban kitchen knives, as having them around is valuable

Some countries do :) Though I think physical analogies are misleading in a lot of ways here.

> Systems can be secure and trusted by the user without having to cede control, and some risks are just not worth eliminating.

Secure, yes, trustworthy to a random developer looking at your device, no. They're entirely separate concepts.

> Most importantly - it's the user who needs to know whether their system has been tampered with, not apps.

Expecting users to know things does a lot of heavy lifting here.


I never mentioned users having to know things (what you quoted was about the user being informed whether their system is compromised, which is the job of a secure boot chain). The user being in control means the user can decide whom to trust. They may end up choosing Google, Apple, Microsoft, etc., and that's fine as long as they have a choice. Most users won't even bother to choose, and that's fine too; but with remote attestation, it's not the user who decides, even if they want to. And we don't need random developers looking at our devices to consider them trustworthy; it's none of their business, and it's a big mistake to let them.

> what you quoted was about the user getting informed whether their system is compromised, which is the job of a secure boot chain

The user being informed means they have to know what a compromised system would entail. That alone is a huge and, frankly, impossible thing to expect from regular people.

> Most users won't even be bothered to choose and that's fine too, but with remote attestation, it's not the user who decides even if they want to.

> And we don't need random developers looking at our devices to consider them trustworthy, it's none of their business and it's a big mistake to let them.

Then you can't demand those developers trust your device.


> That alone is a huge and frankly impossible thing to expect from regular people.

The systems used by regular people could just refuse to boot further when detecting a compromise, so I'm not sure where this comes from. We have prior art for that too. This is still orthogonal to letting users who want to patch things patch them, and not letting the apps verify what environment they run in. It's all compatible with each other, and with both regular and power users.

> Then you can't demand those developers trust your device.

Somehow we could for decades. Whether we'll still be able to in the future depends only on how much noise and friction we'll make about it now.


> This is still orthogonal to letting users who want to patch things patch them, and not letting the apps verify what environment they run in. It's all compatible with each other, and with both regular and power users.

No, they're fundamentally opposed to each other. The entire point is that developers don't want their apps patched by just anyone, especially not malicious actors. A small minority of power users will inevitably get caught in the crossfire.

> Somehow we could for decades. Whether we'll still be able to in the future depends only on how much noise and friction we'll make about it now.

No, you really couldn't. A past lack of technical means doesn't mean anyone trusted your device, nor that we had use cases where this was important. (It was also usually solved with external hardware, physical dongles and whatnot.)


> The entire point is that developers don't want their apps patched

That's exactly what I'm trying to say. The entire point is not to secure the user, it's to secure the apps. It's working against the user's interest, as letting the user lie to apps is essential to the user's agency. The technical means used to achieve this could also work for the user and ensure their security without compromising their agency, but that's not what happens on mainstream platforms.

> No, you really couldn't.

Yes, you could, exactly as you describe: it was used only where it mattered, and in other cases they just had no choice. Today the friction is so low that even the McDonald's app will refuse to work on a device it considers untrustworthy. The user does not benefit from that at all.


> as letting the user lie to apps is essential to the user's agency.

You do understand that in this case the user's agency has a very clear limit?

Tampering with electronic identity software is not a fundamental right, the same way that tampering with your ID card or passport isn't.

> [...] and in other cases they just had no choice.

QED. Not that they wouldn't or didn't want to.


App attestation does not stop at legally binding identity software, and legally binding identity software can be serviced without app attestation. I accept not being able to tamper with my ID card, I may say it's "mine" but it ultimately belongs to the government; I don't accept not being able to tamper with my computers, they wouldn't belong to me anymore if that was the case.

> Not that they wouldn't or didn't want to.

Of course, but my devices' purpose isn't to grant wishes to corporations. In an ideal world they would still have no other choice. Unfortunately, the more people use platforms that let apps attest the execution environment, the less leverage we have against them.


> I accept not being able to tamper with my ID card, I may say it's "mine" but it ultimately belongs to the government; I don't accept not being able to tamper with my computers, they wouldn't belong to me anymore if that was the case.

So where does a digital ID card fit in your model? It's the government's but on your computer.


I have a digital ID card on my desk right now. It does not need to be stored on the phone, which already has all the means necessary to communicate with the card. In fact, if it were in a slightly different form factor I could even put it physically into my phone, which happens to have a built-in smartcard reader. That would still be a more reasonable solution than apps: the card wouldn't be strongly coupled to a complex device that can break or be compromised in various ways (some of which can't be solved with attestation), and it would maintain a clear separation between what's mine and what's the government's. What exactly would I, as a user, gain by muddling that distinction?


