Discussion:
[kitten] Concerns about draft-ietf-kitten-krb-spake-preauth-04
Sam Hartman
2018-01-25 18:51:39 UTC
Permalink
Hi.
I came across the SPAKE preauthentication draft because IANA asked me to
look at it.
I was late and they sent it along to Larry before I could get around to
it.


I like the general approach. I ran into two specific issues, and only
conducted a brief review of the document.
I support a solution along these lines, but I suspect we're a few
revisions away from something solid here.


I have two initial concerns.

First, the draft claims to provide the strengthen reply key facility.
However as far as I can tell from section 8, the draft entirely replaces
the reply key. The strengthen reply key facility is only appropriate if
the original reply key is mixed into the resulting reply key.
This is important because after the reply key has been replaced,
knowledge of the reply key does not imply anything about authentication.
I believe this mechanism provides the replace reply key facility not the
strengthen reply key facility.

I also have concerns about the proposal to make Kerberos checksums
deterministic.

Permitting non-deterministic checksums was an intentional decision
during the development of RFC 3961.
The theoretical basis behind deterministic checksums is dubious at best;
there are far safer models for non-deterministic checksums, just as
there are better theoretical (and practical) models behind
non-deterministic encryption.

During the SHA-3 competition, there was discussion of whether NIST
should look at parameterized/non-deterministic hashing. The conclusion
was not to do so, but none of the discussions I saw seemed to deny the
value of non-deterministic hashing.

We revisited this decision yet again when we drafted RFC 6113, and ran
into exactly the same issue that I think drove the authors to want to
change Kerberos checksums: the desire to create an incremental checksum
of the conversation in a manner similar to TLS.
After careful discussion, we abandoned that approach.
It would forbid non-deterministic checksums and would require exporting
partial checksum contexts in KDC cookies.

Even if we're going to revisit that decision again, this document is not
the appropriate place to do so.
That's a significant change to RFC 3961 and an explicit reversal of a
decision that has been reviewed within the community multiple times.

Such a change should be in its own standards track document not combined
with this specification.

Beyond these specific concerns, I'm nervous about the alignment of this
spec and RFC 6113. Doubtless part of that is my involvement in 6113
speaking; I certainly don't have anything substantiated. I would ask
others to review for alignment with 6113.

I expect to fully review the document within the next week.

--Sam
Greg Hudson
2018-01-25 20:07:20 UTC
Permalink
Post by Sam Hartman
First, the draft claims to provide the strengthen reply key facility.
However as far as I can tell from section 8, the draft entirely replaces
the reply key. The strengthen reply key facility is only appropriate if
the original reply key is mixed into the resulting reply key.
This is important because after the reply key has been replaced,
knowledge of the reply key does not imply anything about authentication.
I believe this mechanism provides the replace reply key facility not the
strengthen reply key facility.
The replacement key K'[0] is PRF+(initial-reply-key, stuff) where stuff
includes the SPAKE shared result. Do you think that is not an adequate
way of mixing in the initial reply key?
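To make the mixing concrete, here is a rough Python sketch of that derivation. It models the per-enctype RFC 3961 PRF as HMAC-SHA-256 and uses placeholder byte strings for the keys and the label; the real draft derives K'[0] with the enctype's own PRF+ over a structure that includes the SPAKE shared result.

```python
import hashlib
import hmac

def prf_plus(key: bytes, input_data: bytes, n: int) -> bytes:
    """RFC 6113-style PRF+: concatenate counter-prefixed PRF outputs.

    The per-enctype RFC 3961 PRF is modeled here as HMAC-SHA-256,
    an illustrative stand-in, not the real per-enctype function."""
    out = b""
    counter = 1
    while len(out) < n:
        out += hmac.new(key, bytes([counter]) + input_data,
                        hashlib.sha256).digest()
        counter += 1
    return out[:n]

# Both inputs contribute: the initial reply key is the PRF key, and the
# SPAKE shared result is part of the PRF input, so K'[0] depends on both.
initial_reply_key = b"\x01" * 32    # placeholder for the long-term reply key
spake_result = b"\x02" * 32         # placeholder for the shared group element
k_prime_0 = prf_plus(initial_reply_key, b"SPAKEkey" + spake_result, 32)
```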
Post by Sam Hartman
I also have concerns about the proposal to make Kerberos checksums
deterministic.
Please have a look at
https://www.ietf.org/mail-archive/web/kitten/current/msg06416.html and
say which direction appeals to you the most. I went with this approach
because it reflected WG consensus at the time, and tried to be up front
about it (I sent
https://www.ietf.org/mail-archive/web/kitten/current/msg06427.html with
a clear subject line).
Post by Sam Hartman
Permitting non-deterministic checksums was an intentional decision
during the development of RFC 3961.
The theoretical basis behind deterministic checksums is dubious at best;
there are far safer models for non-deterministic checksums, just as
there are better theoretical (and practical) models behind
non-deterministic encryption.
We have had a deterministic checksum tightly coupled to every enctype
after single-DES, and have used only those tightly-coupled checksums in
standards documents for many years. We also haven't ever introduced a
checksum type negotiation mechanism other than "this enctype implies
this mandatory-to-implement checksum." So I don't agree with this at
all; I think the value of admitting non-deterministic checksums is
highly dubious, and that it's vanishingly unlikely that we would ever
migrate towards non-deterministic checksums in the future.

If I could go back in time, I would strongly argue for deterministic
keyed checksums to be a function in the enctype profile, and for
checksum types to not be a thing. Having a type registry purely for
unkeyed checksums would be okay if we had a need for it. But having
checksum types with different properties lumped together, in a separate
registry from enctypes, is how we got CVE-2010-1324 and friends.
Post by Sam Hartman
We revisited this decision yet again when we drafted RFC 6113, and ran
into exactly the same issue that I think drove the authors to want to
change Kerberos checksums: the desire to create an incremental checksum
of the conversation in a manner similar to TLS.
After careful discussion, we abandoned that approach.
It would forbid non-deterministic checksums and would require exporting
partial checksum contexts in KDC cookies.
I do recall that an earlier iteration of 6113 would have required
storing either a partial checksum context or a partial conversation
transcript in the KDC cookie, but the transcript checksum mechanism in
the SPAKE draft doesn't require exporting partial contexts.
Post by Sam Hartman
Such a change should be in its own standards track document not combined
with this specification.
I am a little surprised to hear that. We don't have a history of being
sticklers about document scope; for example, RFC 6806 introduced
PA-REQ-ENC-PA-REP and FAST negotiation, despite being primarily about
referrals.
Sam Hartman
2018-01-29 13:14:24 UTC
Permalink
Post by Sam Hartman
First, the draft claims to provide the strengthen reply key
facility. However as far as I can tell from section 8, the draft
entirely replaces the reply key. The strengthen reply key
facility is only appropriate if the original reply key is mixed
into the resulting reply key. This is important because after
the reply key has been replaced, knowledge of the reply key does
not imply anything about authentication. I believe this
mechanism provides the replace reply key facility not the
strengthen reply key facility.
Greg> The replacement key K'[0] is PRF+(initial-reply-key, stuff)
Greg> where stuff includes the SPAKE shared result. Do you think
Greg> that is not an adequate way of mixing in the initial reply
Greg> key?

I'm sorry, I misread the spec.
We are in agreement that strengthen reply key is appropriate.
Post by Sam Hartman
I also have concerns about the proposal to make Kerberos
checksums deterministic.
Greg> Please have a look at
Greg> https://www.ietf.org/mail-archive/web/kitten/current/msg06416.html
Greg> and say which direction appeals to you the most.

I was leaning toward option 3: use a checksum independent of RFC
3961. I had planned to propose that in a more detailed review.
I agree option 2 would be more convenient but also agree that we'd need
to make sure the prf has the required properties.
I'd be happy to meet with you and see if we think that's the case if
you're interested in that.


Greg> I went with
Greg> this approach because it reflected WG consensus at the time,
Greg> and tried to be up front about it (I sent
Greg> https://www.ietf.org/mail-archive/web/kitten/current/msg06427.html
Greg> with a clear subject line).

I understand. I think the issues I'm bringing up have not been
adequately considered by the WG. I think a lot of people involved in
those earlier discussions including the design of RFC 3961 are not
currently involved in the WG (or the IETF). I would not generally
describe myself as involved in the IETF these days. So, I think it is
reasonable for the WG to consider whether this new information causes
the WG to change consensus.
Post by Sam Hartman
Permitting non-deterministic checksums was an intentional
decision during the development of RFC 3961. The theoretical
basis behind deterministic checksums is dubious at best; there
are far safer models for non-deterministic checksums, just as
there are better theoretical (and practical) models behind
non-deterministic encryption.
Greg> We have had a deterministic checksum tightly coupled to every
Greg> enctype after single-DES, and have used only those
Greg> tightly-coupled checksums in standards documents for many
Greg> years.

It sounds like you believe that the point of permitting
non-deterministic checksums was to support the existing DES checksum
types. While we certainly needed to do that, there was a desire to
permit things like salting a checksum or selecting a function from a
family of functions if general attacks against a fully known function
got stronger.

Around the time of the MD5 and SHA-1 concerns, this felt like a very
important potential fallback.
I think it still is.

Greg> We also haven't ever introduced a checksum type
Greg> negotiation mechanism other than "this enctype implies this
Greg> mandatory-to-implement checksum." So I don't agree with this
Greg> at all; I think the value of admitting non-deterministic
Greg> checksums is highly dubious, and that it's vanishingly
Greg> unlikely that we would ever migrate towards non-deterministic
Greg> checksums in the future.


Greg> If I could go back in time, I would strongly argue for
Greg> deterministic keyed checksums to be a function in the enctype
Greg> profile, and for checksum types to not be a thing.

If I could go back in time, I would certainly argue for checksum types
to not be a thing.

You seem to be conflating checksum type negotiation with deterministic
checksums. I'm assuming if we wanted a checksum that was not
deterministic, we'd either introduce it at the same time as an enctype
migration or if we were responding to an attack, introduce a new enctype
explicitly for the checksum type.

I think it's fairly unlikely we're going to introduce a new enctype that
sees wide use until we see some sort of attack we're responding to.
I think whether we look at nondeterministic checksums will depend on the
state of our confidence in our hashes at the time.

Greg> Having a
Greg> type registry purely for unkeyed checksums would be okay if we
Greg> had a need for it. But having checksum types with different
Greg> properties lumped together, in a separate registry from
Greg> enctypes, is how we got CVE-2010-1324 and friends.

We're in agreement that RFC 1510's handling of key types, enctypes and
checksum types was a huge mess. We tried to clean that up with RFC 3961
but were unable to remove checksum types because of legacy. RFC 3961
does not come out and say that for new enctypes you have one checksum
type per enctype, but that's the only sensible thing to do.
Post by Sam Hartman
Such a change should be in its own standards track document not
combined with this specification.
Greg> I am a little surprised to hear that. We don't have a history
Greg> of being sticklers about document scope; for example, RFC 6806
Greg> introduced PA-REQ-ENC-PA-REP and FAST negotiation, despite
Greg> being primarily about referrals.

I don't think those changes were reversing previous consensus.
I think that if the WG is going to reverse a consensus decision it has
revisited multiple times before, writing a document to call that out and
to get the review from people who thought about those issues is
valuable.
I have no concerns about lumping in non-controversial changes into other
documents.
Greg Hudson
2018-01-31 21:40:56 UTC
Permalink
On 01/29/2018 08:14 AM, Sam Hartman wrote:
Post by Sam Hartman
I agree option 2 would be more convenient but also agree that we'd need
to make sure the prf has the required properties.
I'd be happy to meet with you and see if we think that's the case if
you're interested in that.
After a phone discussion with Sam:

* PRF might not have the required properties, although it would probably
still be difficult to attack. SPAKE2's security proof relies on a
random oracle (implemented using a hash) over the concatenation of the
party identities, the public keys, the initial secret, and the shared
group element. A PRF is supposed to emulate a random oracle if the
function is chosen randomly from the PRF family, but in this case we're
choosing it based on a low-entropy secret. It seems unlikely that an
attacker could make any headway since a large part of the PRF input
string (the shared group element) is unknown, but the security proof
most likely does not work.
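For reference, the random oracle in the SPAKE2 proof is applied over roughly this input (a sketch only: the separator and encodings are placeholders, and SHA-256 stands in for the oracle; the CFRG SPAKE2 draft fixes the exact lengths and ordering):

```python
import hashlib

# Shape of the SPAKE2 final random-oracle input:
# H(idA || idB || X* || Y* || w || K), where X*/Y* are the public values,
# w the initial secret, and K the shared group element.
def spake2_shared_key(id_a: bytes, id_b: bytes, pub_a: bytes, pub_b: bytes,
                      w: bytes, shared_elem: bytes) -> bytes:
    transcript = b"|".join([id_a, id_b, pub_a, pub_b, w, shared_elem])
    return hashlib.sha256(transcript).digest()
```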

* If we want to hew closely to the SPAKE2 security proof, we probably
need to bring an unkeyed hash into the operation (as the current draft
already uses PRF+ instead of a hash function for the final key
derivation). There are two possibilities for choosing the unkeyed hash,
the second of which I hadn't considered in detail:

1. Choose the hash based on the group number (make it part of the
group definition). There are two concerns here:
1a. If the KDC issues an optimistic challenge and the client rejects
it (going back to issuing a SPAKESupport), the hash function might
change. We could fix this by not including rejected optimistic
challenges in the transcript, which should be fine; the subsequent group
negotiation should still be secure.
1b. There's no hard guarantee that the group's hash function output
size is as large as the enctype's random-to-key input size, although
SHA-256's output size would likely be good enough for the foreseeable
future. So we would have to specify an exception case when the hash
output size isn't big enough, or use a hash extension scheme, which is
easy to do (it would look a lot like PRF+) but is fiddly.

2. Choose the hash based on the enctype, essentially defining a
SPAKE-specific extension to RFC 3961 in the form of a mapping from
enctype number to hash function. Mapping this way means we can ensure
that the chosen hash function output size is at least as large as the
enctype random-to-key input size, for each enctype. We would probably
want to use SHA-256 for every existing enctype, even for enctypes which
use SHA-1 or something worse internally.
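Option (2) could look something like the following sketch. The mapping table is an illustrative assumption, not anything the draft specifies; the enctype numbers are from the IANA Kerberos encryption type registry (17/18 are aes*-cts-hmac-sha1-96, 19/20 the SHA-2 enctypes), and everything maps to SHA-256, whose 32-byte output covers the random-to-key input size of each of these enctypes.

```python
import hashlib

# Hypothetical SPAKE enctype-to-hash mapping (illustrative only).
SPAKE_HASH_FOR_ENCTYPE = {
    17: hashlib.sha256,  # SHA-256 even though the enctype uses SHA-1 internally
    18: hashlib.sha256,
    19: hashlib.sha256,
    20: hashlib.sha256,
}

def spake_hash(enctype: int, data: bytes) -> bytes:
    """Hash data with the SPAKE hash registered for this enctype.

    An enctype with no registered mapping simply cannot be used with
    SPAKE, avoiding any implicit-default interoperability gamble."""
    try:
        h = SPAKE_HASH_FOR_ENCTYPE[enctype]
    except KeyError:
        raise ValueError("enctype %d has no SPAKE hash mapping" % enctype)
    return h(data).digest()
```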

One question which informs the choice between (1) and (2) is whether
bigger, harder groups (P-521 as compared to P-256, for example) benefit
from larger intermediate hashes. I don't think they do, as everything
gets hashed down to the enctype random-to-key input size in the end.

* Using recursive hash operations to limit how much the KDC has to
remember should be fine, under the random oracle assumption.
random-oracle(random-oracle(A|B)|C) should look the same to an attacker
as random-oracle(A|B|C), assuming the intermediate hash size is too
large to practically attack.
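As a sketch of that recursive construction (with SHA-256 standing in for the random oracle; nothing here is the draft's exact encoding):

```python
import hashlib

def transcript_update(state: bytes, message: bytes) -> bytes:
    """Fold one message into a running transcript checksum.

    The KDC only has to carry the fixed-size state across round trips
    (e.g. inside its cookie), never the whole conversation."""
    return hashlib.sha256(state + message).digest()

# H(H(H(initial | A) | B) | C): each update absorbs one exchanged message.
state = bytes(32)                   # illustrative all-zeros initial state
for msg in (b"A", b"B", b"C"):
    state = transcript_update(state, msg)
```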
Post by Sam Hartman
You seem to be conflating checksum type negotiation with deterministic
checksums. I'm assuming if we wanted a checksum that was not
deterministic, we'd either introduce it at the same time as an enctype
migration or if we were responding to an attack, introduce a new enctype
explicitly for the checksum type.
So the concern here (clarified via the call) is that we could find a
theoretical attack on existing uses of checksums, which could be
resolved using non-deterministic MACs, and perhaps isn't bad enough to
create an immediate practical attack but is bad enough that we want to
migrate away from deterministic MACs via new enctypes. I continue to
think this scenario too unlikely to worry about, but Sam having raised
the objection does motivate me to more strongly consider substituting an
unkeyed hash for the draft's existing key derivation and transcript
checksum methods.
Simo Sorce
2018-01-31 22:10:54 UTC
Permalink
Post by Greg Hudson
On 01/29/2018 08:14 AM, Sam Hartman wrote:
Post by Sam Hartman
I agree option 2 would be more convenient but also agree that we'd need
to make sure the prf has the required properties.
I'd be happy to meet with you and see if we think that's the case if
you're interested in that.
* PRF might not have the required properties, although it would probably
still be difficult to attack. SPAKE2's security proof relies on a
random oracle (implemented using a hash) over the concatenation of the
party identities, the public keys, the initial secret, and the shared
group element. A PRF is supposed to emulate a random oracle if the
function is chosen randomly from the PRF family, but in this case we're
choosing it based on a low-entropy secret. It seems unlikely that an
attacker could make any headway since a large part of the PRF input
string (the shared group element) is unknown, but the security proof
most likely does not work.
* If we want to hew closely to the SPAKE2 security proof, we probably
need to bring an unkeyed hash into the operation (as the current draft
already uses PRF+ instead of a hash function for the final key
derivation). There are two possibilities for choosing the unkeyed hash,
1. Choose the hash based on the group number (make it part of the
1a. If the KDC issues an optimistic challenge and the client rejects
it (going back to issuing a SPAKESupport), the hash function might
change. We could fix this by not including rejected optimistic
challenges in the transcript, which should be fine; the subsequent group
negotiation should still be secure.
1b. There's no hard guarantee that the group's hash function output
size is as large as the enctype's random-to-key input size, although
SHA-256's output size would likely be good enough for the foreseeable
future. So we would have to specify an exception case when the hash
output size isn't big enough, or use a hash extension scheme, which is
easy to do (it would look a lot like PRF+) but is fiddly.
2. Choose the hash based on the enctype, essentially defining a
SPAKE-specific extension to RFC 3961 in the form of a mapping from
enctype number to hash function. Mapping this way means we can ensure
that the chosen hash function output size is at least as large as the
enctype random-to-key input size, for each enctype. We would probably
want to use SHA-256 for every existing enctype, even for enctypes which
use SHA-1 or something worse internally.
The one concern here is: what happens if the hash function turns out to
be broken down the line?
How would new and old code negotiate what hash to use?

Or is the thinking that we'd add new enctypes if the hash function is
broken anyway, so new enctypes will be mapped to a new hash function?

What happens when a new enctype is introduced? Will there also have to
be an explicit allocation of a hash type to be used for SPAKE?
Will it be implicit? At least SHA-256, or whatever the enctype uses if
"better"?
Post by Greg Hudson
One question which informs the choice between (1) and (2) is whether
bigger, harder groups (P-521 as compared to P-256, for example) benefit
from larger intermediate hashes. I don't think they do, as everything
gets hashed down to the enctype random-to-key input size in the end.
* Using recursive hash operations to limit how much the KDC has to
remember should be fine, under the random oracle assumption.
random-oracle(random-oracle(A|B)|C) should look the same to an attacker
as random-oracle(A|B|C), assuming the intermediate hash size is too
large to practically attack.
Post by Sam Hartman
You seem to be conflating checksum type negotiation with deterministic
checksums. I'm assuming if we wanted a checksum that was not
deterministic, we'd either introduce it at the same time as an enctype
migration or if we were responding to an attack, introduce a new enctype
explicitly for the checksum type.
So the concern here (clarified via the call) is that we could find a
theoretical attack on existing uses of checksums, which could be
resolved using non-deterministic MACs, and perhaps isn't bad enough to
create an immediate practical attack but is bad enough that we want to
migrate away from deterministic MACs via new enctypes. I continue to
think this scenario too unlikely to worry about, but Sam having raised
the objection does motivate me to more strongly consider substituting an
unkeyed hash for the draft's existing key derivation and transcript
checksum methods.
I do not have an issue with this, assuming we do not corner ourselves
with the choice of hash.

Simo.
--
Simo Sorce
Sr. Principal Software Engineer
Red Hat, Inc
Greg Hudson
2018-01-31 22:56:01 UTC
Permalink
On 01/31/2018 05:10 PM, Simo Sorce wrote:
>> 2. Choose the hash based on the enctype [...]
Post by Simo Sorce
The one concern here is: what happens if the hash function turns out to
be broken down the line?
How would new and old code negotiate what hash to use?
Or is the thinking that we'd add new enctypes if the hash function is
broken anyway, so new enctypes will be mapped to a new hash function?
We would have to migrate to a new enctype to change the hash function we
use in SPAKE. That's a high cost in some scenarios--like if someone
discovers a horrible preimage attack on SHA-256, but SHA-1 still appears
as preimage-resistant as it does today, so that aes-sha1 still seems
like a decent enctype to use for every purpose other than SPAKE. But I
think those scenarios are vanishingly unlikely.
Post by Simo Sorce
What happens when a new enctype is introduced? Will there also have to
be an explicit allocation of a hash type to be used for SPAKE?
Will it be implicit? At least SHA-256, or whatever the enctype uses if
"better"?
When a new enctype is defined, we would also have to populate its entry
in the SPAKE mapping of enctype to hash function (which could be an IANA
registry). If there's no mapping for an enctype then that enctype can't
be used with SPAKE. But it should be easy to update the SPAKE
enctype-to-hash registry in the same document as we define a new enctype.

There is the option of specifying "SHA-256 as long as the key size is <=
256 bits unless there's an explicit mapping," but that option runs a
serious interoperability risk--one party might use the SHA-256 default
(through negligence or some time window between the enctype assignment
and the SPAKE mapping update) while the other party uses a different,
explicitly specified hash function.
Simo Sorce
2018-02-01 14:10:11 UTC
Permalink
Post by Greg Hudson
On 01/31/2018 05:10 PM, Simo Sorce wrote:
>> 2. Choose the hash based on the enctype [...]
Post by Simo Sorce
The one concern here is: what happens if the hash function turns out to
be broken down the line?
How would new and old code negotiate what hash to use?
Or is the thinking that we'd add new enctypes if the hash function is
broken anyway, so new enctypes will be mapped to a new hash function?
We would have to migrate to a new enctype to change the hash function we
use in SPAKE. That's a high cost in some scenarios--like if someone
discovers a horrible preimage attack on SHA-256, but SHA-1 still appears
as preimage-resistant as it does today, so that aes-sha1 still seems
like a decent enctype to use for every purpose other than SPAKE. But I
think those scenarios are vanishingly unlikely.
Post by Simo Sorce
What happens when a new enctype is introduced? Will there also have to
be an explicit allocation of a hash type to be used for SPAKE?
Will it be implicit? At least SHA-256, or whatever the enctype uses if
"better"?
When a new enctype is defined, we would also have to populate its entry
in the SPAKE mapping of enctype to hash function (which could be an IANA
registry). If there's no mapping for an enctype then that enctype can't
be used with SPAKE. But it should be easy to update the SPAKE
enctype-to-hash registry in the same document as we define a new enctype.
It is a bit annoying to have to do this for each new enctype ... just
for a pre-auth mechanism.
I wonder if we couldn't simply add a field in the first message that
specifies the hash we are using.
The server can then just refuse operations if it doesn't want to use
the hash the client selected.
This would allow smooth transitions from "current-hash" to "new-hash"
in case the current hash is not totally broken right away but, like
SHA-1, gets weaker and weaker and we plan a replacement.

It would mean potentially allowing clients to use a new enctype with a
keysize longer than the current hash, but I would assume we would add a
new hash option as that enctype is introduced and clients would do the
same. The server can simply refuse the older hash type with newer
enctypes, while still allowing older hash with older enctypes for
clients that have not caught up.

I do not know if this is desirable or whether it can pose issues
(downgrade attacks?), but I thought I'd mention it as an option, to
allow for a smoother transition in the future.
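A rough sketch of the refusal policy described above (the hash names, the "new enctype" set, and the policy table are all invented for illustration; nothing here comes from the draft):

```python
# Hypothetical KDC-side policy for a client-proposed SPAKE hash.
SUPPORTED_HASHES = {"sha256", "sha384"}
OLDER_HASHES = {"sha256"}           # refused with newer enctypes only

def kdc_accepts(enctype: int, client_hash: str, new_enctypes: set) -> bool:
    if client_hash not in SUPPORTED_HASHES:
        return False
    # Refuse the older hash with newer enctypes, while still allowing it
    # with older enctypes for clients that have not caught up.
    if enctype in new_enctypes and client_hash in OLDER_HASHES:
        return False
    return True
```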
Post by Greg Hudson
There is the option of specifying "SHA-256 as long as the key size is <=
256 bits unless there's an explicit mapping," but that option runs a
serious interoperability risk--one party might use the SHA-256 default
(through negligence or some time window between the enctype assignment
and the SPAKE mapping update) while the other party uses a different,
explicitly specified hash function.
Indeed, I wouldn't do this without the option I mention above.

Simo.
--
Simo Sorce
Sr. Principal Software Engineer
Red Hat, Inc
Sam Hartman
2018-02-01 17:17:45 UTC
Permalink
Simo> It is a bit annoying to have to do this for each new enctype
Simo> ... just for a pre-auth mechanism. I wonder if we couldn't
Simo> simply add a field in the first message that specifies the
Simo> hash we are using. The server can then just refuse operations
Simo> if it doesn't want to use the hash the client selected. This
Simo> would allow smooth transitions from "current-hash" to
Simo> "new-hash" in case current hash is not totally broken right
Simo> away, but like SHA-1 gets weaker and weaker and we plan
Simo> replacement.

I think that the idea of combining the hash definition with the group
definition is about the same as this, only a bit simpler.

My ranked options are:

* Combine choice of hash with choice of group (Kerberos SPAKE groups
include a hash function in their definition). Requires changing the
spec to restart the hash when a KDC rejects an optimistic group
offer. Greg and I believe the security of this is fine.

* Register a mapping of SPAKE hashes to enctypes. The major
advantage I see is that it can (I think) maintain interop with the
existing protocol. The downside is that in some cases you might have
to register a new enctype or accept a non-ideal SPAKE hash.

* Use the PRF. I think that while the concerns are real, the security
is fine.

* Your proposal of hash in first message

* Get sufficient review and make RFC 3961 hashes deterministic

* Find other options

* Give up on SPAKE
Simo Sorce
2018-02-01 19:11:39 UTC
Permalink
Post by Sam Hartman
Simo> It is a bit annoying to have to do this for each new enctype
Simo> ... just for a pre-auth mechanism. I wonder if we couldn't
Simo> simply add a field in the first message that specifies the
Simo> hash we are using. The server can then just refuse operations
Simo> if it doesn't want to use the hash the client selected. This
Simo> would allow smooth transitions from "current-hash" to
Simo> "new-hash" in case current hash is not totally broken right
Simo> away, but like SHA-1 gets weaker and weaker and we plan
Simo> replacement.
I think that the idea of combining the hash definition with the group
definition is about the same as this, only a bit simpler.
* Combine choice of hash with choice of group (Kerberos SPAKE groups
include a hash function in their definition). Requires changing the
spec to restart the hash when a KDC rejects an optimistic group
offer. Greg and I believe the security of this is fine.
* Registering a mapping of SPAKE hashes to enctypes. The major
advantage I see is that it can (I think) maintain interop with the
existing protocol. The down side is that in some cases you might have
to register a new enctype or accept a non-ideal SPAKE hash.
* Use the PRF. I think that while the concerns are real, the security
is fine
* Your proposal of hash in first message
I think the above options are all fine by me, maybe not in the same
exact order, but they all work.
Post by Sam Hartman
* Get sufficient review and make RFC 3961 hashes deterministic
* Find other options
* Give up on SPAKE
I'd rather not go into the last three, but I am open to other options
that have substantial advantages and no big downsides compared to the
first four on the table, if they are worth spending time on.

Simo.
--
Simo Sorce
Sr. Principal Software Engineer
Red Hat, Inc
Nathaniel McCallum
2018-02-01 20:15:58 UTC
Permalink
I agree with Simo. The first two options are probably the best. I
don't have a strong opinion between them. However, I suspect that we
aren't worried about compatibility at this point (nobody ships this).
Post by Simo Sorce
Post by Sam Hartman
Simo> It is a bit annoying to have to do this for each new enctype
Simo> ... just for a pre-auth mechanism. I wonder if we couldn't
Simo> simply add a field in the first message that specifies the
Simo> hash we are using. The server can then just refuse operations
Simo> if it doesn't want to use the hash the client selected. This
Simo> would allow smooth transitions from "current-hash" to
Simo> "new-hash" in case current hash is not totally broken right
Simo> away, but like SHA-1 gets weaker and weaker and we plan
Simo> replacement.
I think that the idea of combining the hash definition with the group
definition is about the same as this, only a bit simpler.
* Combine choice of hash with choice of group (Kerberos SPAKE groups
include a hash function in their definition). Requires changing the
spec to restart the hash when a KDC rejects an optimistic group
offer. Greg and I believe the security of this is fine.
* Registering a mapping of SPAKE hashes to enctypes. The major
advantage I see is that it can (I think) maintain interop with the
existing protocol. The down side is that in some cases you might have
to register a new enctype or accept a non-ideal SPAKE hash.
* Use the PRF. I think that while the concerns are real, the security
is fine
* Your proposal of hash in first message
I think the above options are all fine by me, maybe not in the same
exact order, but they all work.
Post by Sam Hartman
* Get sufficient review and make RFC 3961 hashes deterministic
* Find other options
* Give up on SPAKE
I'd rather not go into the last three, but I am open to other options
that have substantial advantages and no big downsides compared to the
first four on the table, if they are worth spending time on.
Simo.
--
Simo Sorce
Sr. Principal Software Engineer
Red Hat, Inc
Benjamin Kaduk
2018-02-05 03:06:43 UTC
Permalink
I also think that we should probably go with one of the first two
options. There is some appeal to attaching the hash to the SPAKE
group, since both are new in this spec and we don't have to
awkwardly try to force enctypes to also include a new thing.
The downside is of course that we might choose a hash that is "too
small" for some hypothetical new enctype that uses large keys, but
presumably we will notice the conflict if such a new enctype arises,
and can make new group entries (even if they are the same underlying
groups) with larger hashes.

-Ben
Post by Nathaniel McCallum
I agree with Simo. The first two options are probably the best. I
don't have a strong opinion between them. However, I suspect that we
aren't worried about compatibility at this point (nobody ships this).
Post by Simo Sorce
Post by Sam Hartman
Simo> It is a bit annoying to have to do this for each new enctype
Simo> ... just for a pre-auth mechanism. I wonder if we couldn't
Simo> simply add a field in the first message that specifies the
Simo> hash we are using. The server can then just refuse operations
Simo> if it doesn't want to use the hash the client selected. This
Simo> would allow smooth transitions from "current-hash" to
Simo> "new-hash" in case current hash is not totally broken right
Simo> away, but like SHA-1 gets weaker and weaker and we plan
Simo> replacement.
I think that the idea of combining the hash definition with the group
definition is about the same as this, only a bit simpler.
* Combine choice of hash with choice of group (Kerberos SPAKE groups
include a hash function in their definition). Requires changing the
spec to restart the hash when a KDC rejects an optimistic group
offer. Greg and I believe the security of this is fine.
* Registering a mapping of SPAKE hashes to enctypes. The major
advantage I see is that it can (I think) maintain interop with the
existing protocol. The downside is that in some cases you might have
to register a new enctype or accept a non-ideal SPAKE hash.
* Use the PRF. I think that while the concerns are real, the security
is fine
* Your proposal of hash in first message
I think the above options are all fine by me, maybe not in the same
exact order, but they all work.
Post by Sam Hartman
* Get sufficient review and make RFC 3961 hashes deterministic
* Find other options
* Give up on SPAKE
I'd rather not go into the last three, but I am open to other options
that have substantial advantages and no big downside compared to the
first four on the table, such that it is worth spending time on them.
Simo.
--
Simo Sorce
Sr. Principal Software Engineer
Red Hat, Inc
Nathaniel McCallum
2018-02-05 16:11:00 UTC
Permalink
Unless I'm missing something, I don't think this is necessary. When
the hash is associated with the group choice, the output size of the
hash is correlated to the strength of the group used. If a new enctype
arises with stronger keys, increasing the hash alone won't increase
the security of the SPAKE exchange itself. Therefore, an entirely new
group is needed.
Post by Benjamin Kaduk
I also think that we should probably go with one of the first two
options. There is some appeal to attaching the hash to the SPAKE
group, since they both are things new in this spec and we don't have
to awkwardly try to force enctypes to also include a new thing.
The downside is of course that we might choose a hash that is "too
small" for some hypothetical new enctype that uses large keys, but
presumably we will notice the conflict if such a new enctype arises,
and can make new group entries (even if they are the same underlying
groups) with larger hashes.
-Ben
Post by Nathaniel McCallum
I agree with Simo. The first two options are probably the best. I
don't have a strong opinion between them. However, I suspect that we
aren't worried about compatibility at this point (nobody ships this).
Post by Simo Sorce
Post by Sam Hartman
Simo> It is a bit annoying to have to do this for each new enctype
Simo> ... just for a pre-auth mechanism. I wonder if we couldn't
Simo> simply add a field in the first message that specifies the
Simo> hash we are using. The server can then just refuse operations
Simo> if it doesn't want to use the hash the client selected. This
Simo> would allow smooth transitions from "current-hash" to
Simo> "new-hash" in case current hash is not totally broken right
Simo> away, but like SHA-1 gets weaker and weaker and we plan
Simo> replacement.
I think that the idea of combining the hash definition with the group
definition is about the same as this, only a bit simpler.
* Combine choice of hash with choice of group (Kerberos SPAKE groups
include a hash function in their definition). Requires changing the
spec to restart the hash when a KDC rejects an optimistic group
offer. Greg and I believe the security of this is fine.
* Registering a mapping of SPAKE hashes to enctypes. The major
advantage I see is that it can (I think) maintain interop with the
existing protocol. The downside is that in some cases you might have
to register a new enctype or accept a non-ideal SPAKE hash.
* Use the PRF. I think that while the concerns are real, the security
is fine
* Your proposal of hash in first message
I think the above options are all fine by me, maybe not in the same
exact order, but they all work.
Post by Sam Hartman
* Get sufficient review and make RFC 3961 hashes deterministic
* Find other options
* Give up on SPAKE
I'd rather not go into the last three, but I am open to other options
that have substantial advantages and no big downside compared to the
first four on the table, such that it is worth spending time on them.
Simo.
--
Simo Sorce
Sr. Principal Software Engineer
Red Hat, Inc
Greg Hudson
2018-02-03 18:19:16 UTC
Permalink
On 02/01/2018 12:17 PM, Sam Hartman wrote:
* Combine choice of hash with choice of group (Kerberos SPAKE groups
Post by Sam Hartman
include a hash function in their definition). Requires changing the
spec to restart the hash when a KDC rejects an optimistic group
offer. Greg and I believe the security of this is fine.
Under this choice the transcript hash cannot be started until a group is
selected, so in the normal message flow we won't initialize the
transcript until the KDC sends its challenge message. This works
because the client and KDC both still have access to the client's
support message (the client because it is stateful, and the KDC because
the support message is present in the request the KDC is processing).
We can concatenate the support and challenge message together and update
the transcript hash with both at once, saving a hash operation.

For the case where the group's hash function doesn't output enough bytes
for the enctype's random-to-key function, I currently have this text:

[as the last field of the hash input:]
* A single-byte block counter, with the initial value 0x01.

If the hash output is too small for the encryption type's key
generation seed length, the block counter value is incremented and
the hash function re-computed to produce as many blocks as are
required. The result is truncated to the key generation seed length,
and the random-to-key function is used to produce the key value.
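
A minimal sketch of this block-counter expansion (illustrative Python;
the function name and the shape of the fixed input are assumptions,
not from the draft):

```python
import hashlib

def expand_to_seed_length(hash_name, fixed_input, seed_len):
    """Sketch of the proposed expansion: append a single-byte counter
    (starting at 0x01) to the fixed hash input and re-hash with
    incremented counters until enough output is accumulated, then
    truncate to the key generation seed length."""
    out = b""
    counter = 1
    while len(out) < seed_len:
        h = hashlib.new(hash_name)
        h.update(fixed_input + bytes([counter]))
        out += h.digest()
        counter += 1
    # random-to-key would then map this seed to the actual key
    return out[:seed_len]
```

For SHA-256 with a 32-byte seed length, a single block suffices; a
48-byte seed length (e.g. for a larger enctype) takes two blocks with
the second truncated.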

For edwards25519 and P-256, SHA-256 is a clear best choice. From a
security perspective, I believe SHA-256 is also an adequate choice for
P-384 and P-521 because we're likely going to truncate down to 128 or
256 bits for the key anyway. But if someone is using P-384 or P-521,
they're probably doing so for compliance with some kind of 192-bit or
256-bit security standard, and the use of SHA-256 might raise a flag
because of its 128-bit collision resistance. So I lean towards
specifying SHA-384 for P-384 and SHA-512 for P-521. If someone is using
those groups, they are already wasting a bunch of bytes on unnecessarily
large public keys anyway.
Nathaniel McCallum
2018-02-03 18:46:52 UTC
Permalink
I agree regarding hash choices. However, as I was thinking about the hint,
it does open the door to a downgrade attack. If the hint isn't in the
transcript, we can't tell if it was modified. As the protocol currently is
defined, any attempt to modify the SPAKESupport message will break the
final key derivation.

So in my mind, the value of using a hint needs to be weighed against this
cost.

What is the precise reason the hint can't be included in the transcript?

On Feb 3, 2018 1:19 PM, "Greg Hudson" <***@mit.edu> wrote:

On 02/01/2018 12:17 PM, Sam Hartman wrote:
* Combine choice of hash with choice of group (Kerberos SPAKE groups
Post by Sam Hartman
include a hash function in their definition). Requires changing the
spec to restart the hash when a KDC rejects an optimistic group
offer. Greg and I believe the security of this is fine.
Under this choice the transcript hash cannot be started until a group is
selected, so in the normal message flow we won't initialize the
transcript until the KDC sends its challenge message. This works
because the client and KDC both still have access to the client's
support message (the client because it is stateful, and the KDC because
the support message is present in the request the KDC is processing).
We can concatenate the support and challenge message together and update
the transcript hash with both at once, saving a hash operation.

For the case where the group's hash function doesn't output enough bytes
for the enctype's random-to-key function, I currently have this text:

[as the last field of the hash input:]
* A single-byte block counter, with the initial value 0x01.

If the hash output is too small for the encryption type's key
generation seed length, the block counter value is incremented and
the hash function re-computed to produce as many blocks as are
required. The result is truncated to the key generation seed length,
and the random-to-key function is used to produce the key value.

For edwards25519 and P-256, SHA-256 is a clear best choice. From a
security perspective, I believe SHA-256 is also an adequate choice for
P-384 and P-521 because we're likely going to truncate down to 128 or
256 bits for the key anyway. But if someone is using P-384 or P-521,
they're probably doing so for compliance with some kind of 192-bit or
256-bit security standard, and the use of SHA-256 might raise a flag
because of its 128-bit collision resistance. So I lean towards
specifying SHA-384 for P-384 and SHA-512 for P-521. If someone is using
those groups, they are already wasting a bunch of bytes on unnecessarily
large public keys anyway.
Greg Hudson
2018-02-03 21:24:48 UTC
Permalink
Post by Nathaniel McCallum
I agree regarding hash choices. However, as I was thinking about the
hint, it does open the door to a downgrade attack. If the hint isn't in
the transcript, we can't tell if it was modified. As the protocol
currently is defined, any attempt to modify the SPAKESupport message
will break the final key derivation.
A pa-hint would be used only when a KDC advertises an RFC 6113
authentication set containing SPAKE and one or more other preauth
mechanisms. (As far as I know, nobody implements RFC 6113
authentication sets at this time.) The pa-hint would be used by a
client to decide whether to select between this authentication set, and
another authentication set which perhaps does not use SPAKE. The
pa-hint would allow the client to determine ahead of time whether SPAKE
group negotiation will succeed. Without the pa-hint, the client might
only find out that SPAKE won't work after it has already processed an
earlier part of the authentication set, perhaps asking for user input.

If the client does decide to use this authentication set, it will use
the normal message flow which begins with a client's SPAKESupport
message. This support message will be included in the transcript, so
group negotiation will be protected.

Including the pa-hint in the transcript would pose some difficulty for
the KDC. The KDC does not know at the time of the pa-hint what group
will be negotiated (so it doesn't know what hash function to use, if the
hash function is a property of the group). It also might not want to
use network bytes on a cookie at this point, since it doesn't know if
the client understands SPAKE at all. So I would expect a KDC to instead
reconstruct its pa-hint later on when it processes the client's support
message. That would mean changes to the KDC configuration could break
authentications in progress.

(It's possible that you were thinking of a rejected KDC optimistic
challenge rather than a pa-hint; there, the problem is that the hash function
might change when the selected group changes. If the client rejects an
optimistic KDC challenge, it will fall back to the normal message flow
which still protects group negotiation.)

(It's also possible that you were thinking of the initial pa-value that
gets included in a preauth_required error that simply lists SPAKE as one
of several preauth types. That would be understandable because the MIT
krb5 KDC code calls the preauth_required method-data a "hint list", but
that is not what an RFC 6113 pa-hint is. Currently the initial SPAKE
pa-value is empty unless the KDC issues an optimistic challenge, and we
are not proposing to change that.)
Nathaniel McCallum
2018-02-04 04:02:30 UTC
Permalink
Got it. Thank you for the clarification.
Post by Greg Hudson
Post by Nathaniel McCallum
I agree regarding hash choices. However, as I was thinking about the
hint, it does open the door to a downgrade attack. If the hint isn't in
the transcript, we can't tell if it was modified. As the protocol
currently is defined, any attempt to modify the SPAKESupport message
will break the final key derivation.
A pa-hint would be used only when a KDC advertises an RFC 6113
authentication set containing SPAKE and one or more other preauth
mechanisms. (As far as I know, nobody implements RFC 6113
authentication sets at this time.) The pa-hint would be used by a
client to decide whether to select between this authentication set, and
another authentication set which perhaps does not use SPAKE. The
pa-hint would allow the client to determine ahead of time whether SPAKE
group negotiation will succeed. Without the pa-hint, the client might
only find out that SPAKE won't work after it has already processed an
earlier part of the authentication set, perhaps asking for user input.
If the client does decide to use this authentication set, it will use
the normal message flow which begins with a client's SPAKESupport
message. This support message will be included in the transcript, so
group negotiation will be protected.
Including the pa-hint in the transcript would pose some difficulty for
the KDC. The KDC does not know at the time of the pa-hint what group
will be negotiated (so it doesn't know what hash function to use, if the
hash function is a property of the group). It also might not want to
use network bytes on a cookie at this point, since it doesn't know if
the client understands SPAKE at all. So I would expect a KDC to instead
reconstruct its pa-hint later on when it processes the client's support
message. That would mean changes to the KDC configuration could break
authentications in progress.
(It's possible that you were thinking of a rejected KDC optimistic
challenge rather than a pa-hint; there, the problem is that the hash function
might change when the selected group changes. If the client rejects an
optimistic KDC challenge, it will fall back to the normal message flow
which still protects group negotiation.)
(It's also possible that you were thinking of the initial pa-value that
gets included in a preauth_required error that simply lists SPAKE as one
of several preauth types. That would be understandable because the MIT
krb5 KDC code calls the preauth_required method-data a "hint list", but
that is not what an RFC 6113 pa-hint is. Currently the initial SPAKE
pa-value is empty unless the KDC issues an optimistic challenge, and we
are not proposing to change that.)
Greg Hudson
2018-02-05 06:00:19 UTC
Permalink
This option seems to have the most initial WG support. So I have put
together proposed changes to the spec:

https://github.com/greghudson/ietf/pull/4/commits/77753ee1c901ff771cba46b8c16d801fd8c74676

I have also updated my Python implementation to produce updated test
vectors, and my C implementation (for MIT krb5) to verify them.

Substantial bits of new text:

[In the section "SPAKE Pre-Authentication Message Protocol":]
Each group definition specifies an associated hash function, which
will be used for transcript protection and key derivation.

[In the section "Second Pass":]
The client and KDC will each initialize a transcript hash [xref]
using the hash function associated with the chosen group, and update it
with the concatenation of the DER-encoded PA-SPAKE messages sent by
the client and the KDC.

[In the section "Optimizations":]
If the group chosen by the challenge message is supported by
the client, the client MUST skip to the third pass by issuing an
AS-REQ with a PA-SPAKE message using the response choice. In this case
no SPAKESupport message is sent by the client, so the first update to
the transcript hash contains only the KDC's optimistic challenge. If
the KDC's chosen group is not supported by the client, the client MUST
continue to the second pass. In this case both the client and KDC MUST
reinitialize the transcript hash for the client's support message.
Clients MUST support this optimization.

[In the section "Transcript Hash":]
When the transcript hash is updated with an octet string input, the
new value is the hash function computed over the concatenation of the
old value and the input.
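
A quick sketch of this update rule (illustrative names; SHA-256 and the
empty initial value are chosen only for the example, not taken from the
draft):

```python
import hashlib

def update_transcript(old_value, new_input, hash_name="sha256"):
    # new transcript value = H(old value || input)
    return hashlib.new(hash_name, old_value + new_input).digest()

# Normal flow (sketch): one update with support || challenge,
# then a second update with the client's pubkey value.
transcript = update_transcript(b"", b"support-msg" + b"challenge-msg")
transcript = update_transcript(transcript, b"pubkey-value")
```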

In the normal message flow or with the second optimization described
in [xref], the transcript hash is first updated with the concatenation
of the client's support message and the KDC's challenge, and then
updated a second time with the client's pubkey value. It therefore
incorporates the client's supported groups, the KDC's chosen group,
the KDC's initial second-factor messages, and the client and KDC
public values. Once the transcript hash is finalized, it is used
without change for all key derivations [xref].

If the first optimization described in [xref] is used successfully,
the transcript hash is updated first with the KDC's challenge message,
and second with the client's pubkey value.

If the first optimization is used unsuccessfully (i.e. the client does
not accept the KDC's selected group), the transcript hash is computed
as in the normal message flow, without including the KDC's optimistic
challenge.

[In the section "Key Derivation":]

[As the fourth field in the hash input:]
The PRF+ output used to compute the initial secret input w as
specified in [xref].

[As the ninth and final field in the hash input:]
A single-byte block counter, with the initial value 0x01.

If the hash output is too small for the encryption type's key
generation seed length, the block counter value is incremented and the
hash function re-computed to produce as many blocks as are required.
The result is truncated to the key generation seed length, and the
random-to-key function is used to produce the key value.

The section "Update to Checksum Specifications" is removed, and with it
the prohibition against using SPAKE with single-DES enctypes.

For the pa-hint for authentication sets, I realized that we probably
want to include second factor info as well as KDC group support, so I
will address that in a separate update and thread.
Nathaniel McCallum
2018-02-05 16:08:00 UTC
Permalink
This all looks good. Thanks!
Post by Greg Hudson
This option seems to have the most initial WG support. So I have put
https://github.com/greghudson/ietf/pull/4/commits/77753ee1c901ff771cba46b8c16d801fd8c74676
I have also updated my Python implementation to produce updated test
vectors, and my C implementation (for MIT krb5) to verify them.
[In the section "SPAKE Pre-Authentication Message Protocol":]
Each group definition specifies an associated hash function, which
will be used for transcript protection and key derivation.
[In the section "Second Pass":]
The client and KDC will each initialize a transcript hash [xref]
using the hash function associated with the chosen group, and update it
with the concatenation of the DER-encoded PA-SPAKE messages sent by
the client and the KDC.
[In the section "Optimizations":]
If the group chosen by the challenge message is supported by
the client, the client MUST skip to the third pass by issuing an
AS-REQ with a PA-SPAKE message using the response choice. In this case
no SPAKESupport message is sent by the client, so the first update to
the transcript hash contains only the KDC's optimistic challenge. If
the KDC's chosen group is not supported by the client, the client MUST
continue to the second pass. In this case both the client and KDC MUST
reinitialize the transcript hash for the client's support message.
Clients MUST support this optimization.
[In the section "Transcript Hash":]
When the transcript hash is updated with an octet string input, the
new value is the hash function computed over the concatenation of the
old value and the input.
In the normal message flow or with the second optimization described
in [xref], the transcript hash is first updated with the concatenation
of the client's support message and the KDC's challenge, and then
updated a second time with the client's pubkey value. It therefore
incorporates the client's supported groups, the KDC's chosen group,
the KDC's initial second-factor messages, and the client and KDC
public values. Once the transcript hash is finalized, it is used
without change for all key derivations [xref].
If the first optimization described in [xref] is used successfully,
the transcript hash is updated first with the KDC's challenge message,
and second with the client's pubkey value.
If the first optimization is used unsuccessfully (i.e. the client does
not accept the KDC's selected group), the transcript hash is computed
as in the normal message flow, without including the KDC's optimistic
challenge.
[In the section "Key Derivation":]
[As the fourth field in the hash input:]
The PRF+ output used to compute the initial secret input w as
specified in [xref].
[As the ninth and final field in the hash input:]
A single-byte block counter, with the initial value 0x01.
If the hash output is too small for the encryption type's key
generation seed length, the block counter value is incremented and the
hash function re-computed to produce as many blocks as are required.
The result is truncated to the key generation seed length, and the
random-to-key function is used to produce the key value.
The section "Update to Checksum Specifications" is removed, and with it
the prohibition against using SPAKE with single-DES enctypes.
For the pa-hint for authentication sets, I realized that we probably
want to include second factor info as well as KDC group support, so I
will address that in a separate update and thread.
Greg Hudson
2018-02-05 20:32:05 UTC
Permalink
Post by Greg Hudson
[As the fourth field in the hash input:]
The PRF+ output used to compute the initial secret input w as
specified in [xref].
[...]
Post by Greg Hudson
If the hash output is too small for the encryption type's key
generation seed length, the block counter value is incremented and the
hash function re-computed to produce as many blocks as are required.
The result is truncated to the key generation seed length, and the
random-to-key function is used to produce the key value.
When
the hash is associated with the group choice, the output size of the
hash is correlated to the strength of the group used. If a new enctype
arises with stronger keys, increasing the hash alone won't increase
the security of the SPAKE exchange itself. Therefore, an entirely new
group is needed.
Ben may have been thinking about the protocol working at all (easily
solved by hash extension), but Nathaniel's response made me think about
the possibility that the SPAKE exchange might reduce the work an
attacker would require to discover the reply key, in the case where the
initial reply key was high-entropy to start with. As Nathaniel notes,
for this possibility we have to consider the group as well as the hash
function.

To make a long story short, I think to be excruciatingly correct we
should compute KRB-FX-CF2 of the derived key with the initial reply key,
in case w (as represented by the PRF output used to compute it) is
shorter than the initial reply key was and therefore contains lower
entropy. So we would have:

reply-key <- KRB-FX-CF2(initial-reply-key,
random-to-key(H(...|w|K|...)),
pepper1, pepper2)

That way we can't possibly make the reply key any worse. This is only
really an issue for a future where attackers can do 2^128 brute force
work (enough to break ECDLP on edwards25519 or P-256), but we may as
well get it right.
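
For concreteness, RFC 6113 defines KRB-FX-CF2(key1, key2, pepper1,
pepper2) as random-to-key(PRF+(key1, pepper1) XOR PRF+(key2, pepper2)).
A rough stand-in sketch, with HMAC-SHA256 substituting for the
enctype-specific RFC 3961 PRF and random-to-key omitted (so this is
illustrative, not interoperable code):

```python
import hmac, hashlib

def prf_plus(key, pepper, nbytes):
    # PRF+ per RFC 6113: concatenate PRF(key, counter || pepper) blocks
    # with a counter starting at 1; HMAC-SHA256 stands in for the real
    # enctype PRF in this sketch
    out = b""
    counter = 1
    while len(out) < nbytes:
        out += hmac.new(key, bytes([counter]) + pepper, hashlib.sha256).digest()
        counter += 1
    return out[:nbytes]

def krb_fx_cf2(key1, key2, pepper1, pepper2, keylen=32):
    # XOR the two PRF+ streams; the combined key is at least as strong
    # as the stronger of the two input keys
    a = prf_plus(key1, pepper1, keylen)
    b = prf_plus(key2, pepper2, keylen)
    return bytes(x ^ y for x, y in zip(a, b))
```

In the derivation proposed above, key1 would be the initial reply key
and key2 the SPAKE-derived key, so a weaker SPAKE result cannot degrade
an already high-entropy reply key.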
Nathaniel McCallum
2018-02-05 21:39:16 UTC
Permalink
Agreed.
Post by Greg Hudson
Post by Greg Hudson
[As the fourth field in the hash input:]
The PRF+ output used to compute the initial secret input w as
specified in [xref].
[...]
Post by Greg Hudson
If the hash output is too small for the encryption type's key
generation seed length, the block counter value is incremented and the
hash function re-computed to produce as many blocks as are required.
The result is truncated to the key generation seed length, and the
random-to-key function is used to produce the key value.
When
the hash is associated with the group choice, the output size of the
hash is correlated to the strength of the group used. If a new enctype
arises with stronger keys, increasing the hash alone won't increase
the security of the SPAKE exchange itself. Therefore, an entirely new
group is needed.
Ben may have been thinking about the protocol working at all (easily
solved by hash extension), but Nathaniel's response made me think about
the possibility that the SPAKE exchange might reduce the work an
attacker would require to discover the reply key, in the case where the
initial reply key was high-entropy to start with. As Nathaniel notes,
for this possibility we have to consider the group as well as the hash
function.
To make a long story short, I think to be excruciatingly correct we
should compute KRB-FX-CF2 of the derived key with the initial reply key,
in case w (as represented by the PRF output used to compute it) is
shorter than the initial reply key was and therefore contains lower
reply-key <- KRB-FX-CF2(initial-reply-key,
random-to-key(H(...|w|K|...)),
pepper1, pepper2)
That way we can't possibly make the reply key any worse. This is only
really an issue for a future where attackers can do 2^128 brute force
work (enough to break ECDLP on edwards25519 or P-256), but we may as
well get it right.
Sam Hartman
2018-02-13 14:17:50 UTC
Permalink
Nathaniel> Agreed.

The changes look good, including the discussion of using krb-fx-cf2 to
protect initial high-entropy reply keys.
I agree that high entropy reply keys are unlikely in cases where SPAKE
is valuable, but it's relatively easy to do.

Benjamin Kaduk
2018-02-06 01:00:23 UTC
Permalink
Post by Greg Hudson
Post by Greg Hudson
[As the fourth field in the hash input:]
The PRF+ output used to compute the initial secret input w as
specified in [xref].
[...]
Post by Greg Hudson
If the hash output is too small for the encryption type's key
generation seed length, the block counter value is incremented and the
hash function re-computed to produce as many blocks as are required.
The result is truncated to the key generation seed length, and the
random-to-key function is used to produce the key value.
When
the hash is associated with the group choice, the output size of the
hash is correlated to the strength of the group used. If a new enctype
arises with stronger keys, increasing the hash alone won't increase
the security of the SPAKE exchange itself. Therefore, an entirely new
group is needed.
Ben may have been thinking about the protocol working at all (easily
solved by hash extension), but Nathaniel's response made me think about
the possibility that the SPAKE exchange might reduce the work an
attacker would require to discover the reply key, in the case where the
initial reply key was high-entropy to start with. As Nathaniel notes,
for this possibility we have to consider the group as well as the hash
function.
Right, I mean the case where the group size matches the SPAKE hash, but
the enctype is bigger than both, though you have taken this even a
little further than I had in mind. (But it still makes sense!)
Post by Greg Hudson
To make a long story short, I think to be excruciatingly correct we
should compute KRB-FX-CF2 of the derived key with the initial reply key,
in case w (as represented by the PRF output used to compute it) is
shorter than the initial reply key was and therefore contains lower
reply-key <- KRB-FX-CF2(initial-reply-key,
random-to-key(H(...|w|K|...)),
pepper1, pepper2)
That way we can't possibly make the reply key any worse. This is only
really an issue for a future where attackers can do 2^128 brute force
work (enough to break ECDLP on edwards25519 or P-256), but we may as
well get it right.
Yup.
Sam Hartman
2018-02-01 13:58:59 UTC
Permalink
Yesterday I also discussed my RFC 6113 alignment concerns with Greg.

If I were more involved in implementing Kerberos, I probably would be
arguing for a somewhat different approach--focusing on less duplication
of facilities provided in RFC 6113. However, Greg reviewed the history,
and I have confidence that the issues I'm thinking about were
considered by those involved in the design of the proposal. So with
one exception, with regard to RFC 6113 alignment I believe my concerns
have already been considered and rejected.

I brought up one issue that I don't think has been adequately
considered: the decision not to include a pa-hint in this proposal.

Under RFC 6113, the preauthentication types that participate in a
preauthentication set are permitted to include a hint in the first
message.
The intent is for a client to know whether it has the necessary
facilities (CAs, algorithm support, access to appropriate tokens, etc)
in order to succeed at using a preauthentication set before starting.

We're trying to avoid the following:

* A KDC proposes an authentication set with SPAKE as the second element

* The first member of that authentication set requires user
interaction. So the client interacts with the user.

* Then the client discovers that it doesn't share a group in common with
the KDC.

The intent of RFC 6113 is that a client know what user interaction will
be required and whether it has the necessary support prior to starting
an authentication set. I don't think SPAKE meets this intent. There
are mechanisms prior to RFC 6113 that also don't provide this facility.
However, I think it would be relatively easy to define a hint message
including the set of groups the KDC supports to work in this case.

I realize that in the current draft, group offers are always sent from
the client to the KDC.
I note that beyond a relatively small specification complexity burden
(defining a message similar to an existing message), the complexity
burden of sometimes getting groups from the KDC would only be suffered
by implementations that use SPAKE in authentication sets.
Greg Hudson
2018-02-01 16:45:02 UTC
Permalink
Post by Sam Hartman
I brought up one issue that I don't think has been adequately
considered: the decision not to include a pa-hint in this proposal.
[...]
Post by Sam Hartman
I note that beyond a relatively small specification complexity burden
(defining a message similar to an existing message), the complexity
burden of sometimes getting groups from the KDC would only be suffered
by implementations that use SPAKE in authentication sets.
I think we can just reuse SPAKESupport, so this should be easy to
specify. And yes, a KDC or client which doesn't implement
authentication sets doesn't have to do anything. I can write up the text.

My inclination is to specify that the pa-hint should not affect the
transcript, since it occurs in an earlier phase of operation
(authentication set selection), and including it in the transcript would
make life harder for the KDC. The client will send its own SPAKESupport
message once SPAKE preauth begins, and that message will be included in
the transcript.
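A hedged sketch of that rule (the hash choice and framing here are
invented stand-ins, not the draft's encoding): the pa-hint from the
auth-set selection phase stays out of the transcript, while the client's
own SPAKESupport message and later SPAKE messages feed into it.

```python
import hashlib

def transcript_hash(messages):
    """Fold each in-protocol message into a running SHA-256 transcript."""
    h = hashlib.sha256()
    for msg in messages:
        h.update(len(msg).to_bytes(4, "big"))   # simple length-prefix framing
        h.update(msg)
    return h.hexdigest()

pa_hint = b"kdc hint: groups 1,2"          # selection phase: not hashed
client_support = b"client SPAKESupport"    # SPAKE phase: hashed
kdc_challenge = b"SPAKEChallenge"          # SPAKE phase: hashed

t = transcript_hash([client_support, kdc_challenge])
# The hint never enters the computation, so the KDC need not retain or
# replay it in order to agree with the client on the transcript.
print(t != transcript_hash([pa_hint, client_support, kdc_challenge]))   # -> True
```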
Nathaniel McCallum
2018-02-01 20:27:40 UTC
Permalink
If we reuse SPAKESupport, then we can never differentiate between the
client and server messages. I'm not objecting, just noting.
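To illustrate the point (encoding simplified to JSON here rather than
the draft's actual DER, purely for demonstration): if both directions
reuse the same structure, the encoded bytes carry no marker saying which
side sent them.

```python
import json

def encode_spake_support(groups):
    """Stand-in for the encoding of a SPAKESupport-style message."""
    return json.dumps({"groups": sorted(groups)}).encode()

client_msg = encode_spake_support([1, 2])   # client's offer
kdc_hint = encode_spake_support([2, 1])     # KDC's hypothetical hint
print(client_msg == kdc_hint)   # -> True: indistinguishable on the wire
```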
Post by Greg Hudson
Post by Sam Hartman
I brought up one issue that I don't think has been adequately
considered: the decision not to include a pa-hint in this proposal.
[...]
Post by Sam Hartman
I note that beyond a relatively small specification complexity burden
(defining a message similar to an existing message), the complexity
burden of sometimes getting groups from the KDC would only be suffered
by implementations that use SPAKE in authentication sets.
I think we can just reuse SPAKESupport, so this should be easy to
specify. And yes, a KDC or client which doesn't implement
authentication sets doesn't have to do anything. I can write up the text.
My inclination is to specify that the pa-hint should not affect the
transcript, since it occurs in an earlier phase of operation
(authentication set selection), and including it in the transcript would
make life harder for the KDC. The client will send its own SPAKESupport
message once SPAKE preauth begins, and that message will be included in
the transcript.
_______________________________________________
Kitten mailing list
https://www.ietf.org/mailman/listinfo/kitten
Benjamin Kaduk
2018-02-02 04:06:18 UTC
Permalink
Post by Greg Hudson
Post by Sam Hartman
I brought up one issue that I don't think has been adequately
considered: the decision not to include a pa-hint in this proposal.
[...]
Post by Sam Hartman
I note that beyond a relatively small specification complexity burden
(defining a message similar to an existing message), the complexity
burden of sometimes getting groups from the KDC would only be suffered
by implementations that use SPAKE in authentication sets.
I think we can just reuse SPAKESupport, so this should be easy to
specify. And yes, a KDC or client which doesn't implement
authentication sets doesn't have to do anything. I can write up the text.
My inclination is to specify that the pa-hint should not affect the
transcript, since it occurs in an earlier phase of operation
(authentication set selection), and including it in the transcript would
make life harder for the KDC. The client will send its own SPAKESupport
message once SPAKE preauth begins, and that message will be included in
the transcript.
This is probably fine, and we can reiterate in the security
considerations that the hints are just hints (i.e., not
authoritative), and could be wrong.

-Ben