I'd like to be able to run PageKite front-ends on an untrustworthy, potentially hostile end node. For example, I have a virtual private server that is very cheap and provides a lot of bandwidth, but I fear the provider might snoop on the network traffic, the hard disk, or even the RAM.
The HTTPS and SSL traffic that goes through PageKite is already encrypted end-to-end, so the front-end cannot eavesdrop on it.
The remaining challenge is to secure the pagekite.rc config file on the front-end. As it stands, an attacker who can read the SECRET in the config file could redirect my kites (on this front-end, or on any other front-end that uses the same secret).
The proposed alternative is to store a public-key fingerprint in the config file, rather than the SECRET (which effectively serves as a private key). The PageKite back-end would authenticate to the front-end by signing a challenge with the private key. An attacker who can read the config file would not be able to authenticate as a back-end to any other PageKite front-end.
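A rough sketch of how that could work follows. This is not PageKite's actual protocol; the Ed25519 keys, the SHA-256 fingerprint, and the challenge format are assumptions made purely for illustration:

    # Illustrative sketch only (not PageKite's real protocol): the front-end's
    # config stores just a fingerprint of the back-end's public key, and the
    # back-end proves possession of the private key by signing a challenge.
    import os, hashlib
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)
    from cryptography.exceptions import InvalidSignature

    # Back-end: generates and keeps the private key.
    private_key = Ed25519PrivateKey.generate()
    public_bytes = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw)

    # Front-end config: only the fingerprint is stored, never a usable secret.
    configured_fingerprint = hashlib.sha256(public_bytes).hexdigest()

    # Authentication exchange:
    challenge = os.urandom(32)               # front-end sends a random challenge
    signature = private_key.sign(challenge)  # back-end signs it with the private key

    # Front-end: check the presented key against the fingerprint, then the signature.
    if hashlib.sha256(public_bytes).hexdigest() != configured_fingerprint:
        raise SystemExit("unknown public key")
    try:
        Ed25519PublicKey.from_public_bytes(public_bytes).verify(signature, challenge)
        print("back-end authenticated")
    except InvalidSignature:
        print("authentication failed")

The point is that the fingerprint alone cannot be used to sign challenges, so copying it out of the config file gains an attacker nothing on other front-ends.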
Comments
The benefits, however, are limited: it may be better to simply give each front-end a throw-away shared secret (one you don't reuse anywhere else).
Rearchitecting pagekite.py so it can associate shared secrets with front-ends instead of with kites/back-ends might make sense anyway, and would allow for this sort of thing.
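A minimal illustration of that idea, assuming a hypothetical per-front-end secret table (this is not pagekite.py's actual data model; the names and the HMAC construction are invented for the example):

    # Hypothetical sketch: key shared secrets by front-end rather than by kite,
    # so a secret stolen from one front-end is useless against any other.
    import hmac, hashlib, os

    # One throw-away secret per front-end, never reused elsewhere.
    frontend_secrets = {
        "cheap-vps.example.com":    os.urandom(32),
        "trusted-home.example.com": os.urandom(32),
    }

    def sign_tunnel_request(frontend, kite_name, nonce):
        # The back-end proves knowledge of the secret it shares with this
        # particular front-end only.
        secret = frontend_secrets[frontend]
        msg = ("%s:%s" % (kite_name, nonce)).encode()
        return hmac.new(secret, msg, hashlib.sha256).hexdigest()

    # An attacker who reads the config on cheap-vps.example.com learns only that
    # front-end's secret and cannot impersonate the back-end elsewhere.
    sig = sign_tunnel_request("cheap-vps.example.com", "mykite.pagekite.me",
                              os.urandom(16).hex())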
But the bandwidth is much more limited.
Zero-hop I2P would be ideal for this. But then there's no reason to use PageKite. And I2P is a bit more complex, on top of that.