encrypted file and directory names?
Adam Spiers
pass at adamspiers.org
Sun Feb 5 22:22:37 CET 2017
On Sun, Feb 05, 2017 at 08:26:18AM +0000, Brian Candler wrote:
>On 05/02/2017 03:53, HacKan Iván wrote:
>>I thought the same, but implementing it is a real pain in the ass.
>>
>>I'm currently working on something I'll send soon, and then I'm
>>gonna work on an extension to do just that :)
>
>
>If this is implemented I'd definitely prefer to see it as an
>extension, because I like the way pass works today. My threat model
>is different to yours :-)
I can totally sympathise with that. An extension would be fine for
me.
>I'd say that the main benefit of putting separate passwords in
>separate files is that you can have independent changes to the git
>repository and they are less likely to cause merge conflicts.
>If you added a single encrypted file at the top of the repository,
>mapping password name to token, that benefit would be lost.
Not really. Firstly, changes within existing files (e.g. changes to
passwords) would not require any change to the common encrypted index
file. Secondly, whilst you are right that the use of a single
encrypted index file would mean that additions / deletions of files
(including renames) could cause merge conflicts, that was just my
first naive proposal for how to implement this. I am sure it is
possible to come up with a smarter design that minimises or even
eliminates this merge conflict issue; here are some initial
suggestions ...
The first thing to note is that if the mechanism for calculating
obfuscated filenames is a simple hash such as SHA-256, then in order
to implement
pass show google.com
we simply perform SHA-256 on "google.com", and then look for a file
called
~/.password-store/d4c9d9027326271a89ce51fcaf328ed673f17be33469ff979e8ab8dd501e664f
in the store and decrypt that. In that case, there is no need for any
index, so there is no risk of merge conflicts. However, this prevents
traversal of the unencrypted pass-name (filename) namespace, so it
would break functionality like:
pass find google
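(For concreteness, the basic digest lookup itself is trivial. Here's a rough shell sketch; the store path follows the example above, and the use of GNU coreutils' sha256sum is my assumption:)

```shell
#!/bin/sh
# Hypothetical sketch: map an unencrypted pass-name to its obfuscated
# filename by hashing it with SHA-256 (no salt yet -- see the caveat
# about dictionary attacks further down).
name="google.com"
digest=$(printf '%s' "$name" | sha256sum | cut -d' ' -f1)
printf '%s\n' "$HOME/.password-store/$digest"
```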
But this can be solved easily. One possibility would be to store each
pass-name within its corresponding encrypted file. In fact this is
more or less what https://www.passwordstore.org/ already recommends,
where it describes the multi-line strategy:
For example, Amazon/bookreader might look like this:
Yw|ZSNH!}z"6{ym9pI
URL: *.amazon.com/*
Username: AmazonianChicken at example.com
Secret Question 1: What is your childhood best friend's most bizarre superhero fantasy? Oh god, Amazon, it's too awful to say...
Phone Support PIN #: 84719
Then running "pass find" would decrypt every file in the store to find
the one(s) you are looking for. Of course this would slow it down a
lot. But "pass grep" already has the same complexity, so if that's
tolerable (which it probably is, given that most stores presumably
won't have more than a few hundred entries at most), then perhaps
that's not a big deal.
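(Such a decrypt-everything "pass find" might look roughly like the sketch below. PASSWORD_STORE_DIR and the gpg flags are my assumptions, not pass's actual internals:)

```shell
#!/bin/sh
# Hypothetical sketch of "pass find PATTERN" when filenames are
# digests: every entry must be decrypted and searched, just like
# "pass grep" today.
find_in_store() {
    pattern=$1
    store="${PASSWORD_STORE_DIR:-$HOME/.password-store}"
    for f in "$store"/*.gpg; do
        [ -e "$f" ] || continue                # store may be empty
        if gpg --quiet --decrypt "$f" 2>/dev/null | grep -q -- "$pattern"
        then
            printf '%s\n' "$f"                 # matching entry
        fi
    done
}
```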
If this increased complexity were an issue, one solution would be to
keep the index but minimise the frequency of merge conflicts, by
splitting the index into buckets hashed by (say) the first character
of the unencrypted pass-name (filename), or by its length. Then you'd
only get a merge conflict when two or more changes affected pass-names
with the same first character, or of the same length.
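(A sketch of the bucketing idea, with a hypothetical `.index-X` naming scheme of my own invention:)

```shell
#!/bin/sh
# Hypothetical sketch: instead of one index file, bucket index entries
# by the first character of the unencrypted pass-name, so that
# concurrent additions / deletions only conflict when they land in the
# same bucket.
bucket_for() {
    first=$(printf '%s' "$1" | cut -c1)
    printf '.index-%s\n' "$first"     # e.g. ".index-g" for google.com
}
```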
But actually, a better approach would be to keep the index as a single
encrypted file but simply avoid committing it to git. Ta-da! No merge
conflicts :-) But, I hear you cry, how would changes to the index file
in one git working tree get propagated to the index file in another
(remote) git working tree? Simple: the index can be rebuilt
automatically each time the store changes. So effectively it would be
nothing more than an encrypted cache of the mapping between pass-names
and digests. I think this is a much cleaner solution. It could even be
automated using git hooks.
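(Here's a rough sketch of such a rebuild, assuming each decrypted entry embeds its pass-name in a "Name:" field, one way of doing which is described below. The store layout and gpg invocation are my assumptions:)

```shell
#!/bin/sh
# Hypothetical sketch: rebuild the pass-name -> digest mapping as a
# local, uncommitted cache by decrypting each entry and reading its
# embedded "Name:" field.
rebuild_index() {
    store="${PASSWORD_STORE_DIR:-$HOME/.password-store}"
    for f in "$store"/*.gpg; do
        [ -e "$f" ] || continue                # store may be empty
        name=$(gpg --quiet --decrypt "$f" 2>/dev/null |
               sed -n 's/^Name: //p')
        [ -n "$name" ] && printf '%s %s\n' "$name" "${f##*/}"
    done
    # In practice the output would itself be encrypted, and the cache
    # file excluded from git (e.g. via .gitignore).
}
```

This could run from a git post-merge / post-commit hook so the cache stays fresh automatically.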
In order to be able to build this index, we'd need to store each
pass-name within the encrypted file, similar to the suggestion in the
multi-line approach above, either by reducing the URL to a canonical
form like "amazon.com" and computing the digest of that, or by relying
on the presence of a separate, manually entered "Name" field, e.g.
Yw|ZSNH!}z"6{ym9pI
Name: amazon.com
URL: *.amazon.com/*
Username: AmazonianChicken at example.com
Secret Question 1: What is your childhood best friend's most bizarre superhero fantasy? Oh god, Amazon, it's too awful to say...
Phone Support PIN #: 84719
Some policy would have to be decided in advance for how this canonical
form is calculated. It would probably be best to use the "Name" field
if present, and otherwise fall back to massaging the URL pattern into
canonical form.
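(One possible such policy, sketched in shell -- the "Name:" / "URL:" field names and the exact massaging rules are just illustrative assumptions:)

```shell
#!/bin/sh
# Hypothetical sketch of the canonicalisation policy: prefer an
# explicit "Name:" field; otherwise massage the "URL:" pattern into a
# bare hostname by stripping any scheme, leading wildcard, and path.
canonical_name() {
    entry=$1                                   # decrypted entry text
    name=$(printf '%s\n' "$entry" | sed -n 's/^Name: //p')
    if [ -n "$name" ]; then
        printf '%s\n' "$name"
        return
    fi
    printf '%s\n' "$entry" |
        sed -n 's/^URL: //p' |
        sed -e 's|^[a-z]*://||' -e 's/^\*\.//' -e 's|/.*$||'
}
```

So an entry containing only "URL: *.amazon.com/*" would canonicalise to "amazon.com".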
Finally, I should note that there is another problem with using a
straight digest algorithm like SHA-256: it's vulnerable to dictionary
attacks and rainbow tables. For example an attacker could precompute
the SHA-256 digests for all the embarrassing or sensitive websites
they can think of, and then search for filenames matching those
digests within any store they get hold of.
A simple defence against this would be to generate a secret master
passphrase for the store, which would be stored in a separate
encrypted file, and then add that to each pass-name when generating
its digest. So for example if the passphrase was
2T803$7e$D%2Rq!
then to calculate the name of the file storing your google.com
secrets, you'd calculate the SHA-256 of
2T803$7e$D%2Rq!google.com
and then consequently look in
~/.password-store/92b93018be81372c7d04192dde1eb5b55d8007137dcccbae264f97419f2513b0
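(In shell, the salted digest calculation is a one-liner; wrapping it in a function for clarity. Whether the hashed input has a trailing newline would need to be pinned down as part of the scheme; this sketch assumes none:)

```shell
#!/bin/sh
# Hypothetical sketch: prefix each pass-name with a per-store secret
# passphrase before hashing, so that precomputed digest tables are
# useless without the passphrase.
salted_digest() {
    salt=$1
    name=$2
    printf '%s%s' "$salt" "$name" | sha256sum | cut -d' ' -f1
}
```

With the example values above, salted_digest '2T803$7e$D%2Rq!' google.com should reproduce the digest in that path.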
I expect there are some weaknesses with this approach which someone
more versed in cryptography than me would balk at, but then they'd
probably be able to suggest a fix. And anyway, this already sounds
like it would provide good enough protection for most cases, unless
you're worried about certain government agencies figuring out which
sites you're storing credentials for ;-)
>In fact,
>you might as well just keep all your passwords in a single file
>(instead of name -> token it would contain name -> password)
No, because then you'd be *guaranteed* a merge conflict *every* time
you made concurrent commits to the git repo from different remotes.
Other smaller disadvantages include:
- It's clearly inefficient to decrypt / encrypt the entire store
every time you want to read or write a single entry.
- This would expose *all* your credentials to an untrustworthy
sysadmin in a single go for the duration that they are in
memory unencrypted, since the sysadmin can read the memory
of any running process. Only decrypting a single file would
force them to use another attack, such as tty-snooping to obtain
your GPG passphrase.