Full Disclosure
mailing list archives
Re: Time Expiry Alogorithm??
From: "Pavel Kankovsky" <peak () argo troja mff cuni cz>
Date: Sun, 21 Nov 2004 19:55:35 +0100 (CET)
On Fri, 19 Nov 2004, Anders Langworthy wrote:
> > If a certain deterministic computation (e.g. decryption) can be made
> > in time T, then it can be made in any time T' > T.
>
> This is true for breaking a cipher by brute force, but it doesn't
> account for (stop looking at me) somehow incorporating a timestamp into
> the encryption scheme to prevent 'legit' decryption after a certain time.
As you yourself pointed out, the timestamp has to be some kind of
unforgeable "trusted timestamp". Such a scheme is not a "deterministic
computation" from the message recipient's point of view, because the other
party behaves nondeterministically (in the sense that the recipient cannot
predict its exact future behaviour from known information alone).
Anyway, a replay attack (record the "trusted timestamp" and reuse it
later) is still possible. It is even worse when generic timestamps, not
dependent on the message, are used, because the enemy can gather and
record timestamps in advance. Therefore we need a special timestamp for
every encrypted message.
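A per-message timestamp of this kind can be sketched as the TTP binding the
current time to the specific ciphertext, so a recorded timestamp is useless
for any other message. This is only an illustration, with an HMAC standing
in for the TTP's real signature; all names and keys here are invented:

```python
import hashlib
import hmac
import time

TTP_KEY = b"ttp-demo-signing-key"  # stand-in for the TTP's secret (illustrative)

def issue_timestamp(ciphertext: bytes) -> tuple[float, bytes]:
    """The TTP binds the current time to *this* ciphertext; a tag
    recorded for one message cannot be replayed for another."""
    t = time.time()
    tag = hmac.new(TTP_KEY, ciphertext + repr(t).encode(), hashlib.sha256).digest()
    return t, tag

def verify_timestamp(ciphertext: bytes, t: float, tag: bytes) -> bool:
    expected = hmac.new(TTP_KEY, ciphertext + repr(t).encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

A timestamp issued for one ciphertext verifies only against that ciphertext;
presenting it with a different message fails the check.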
And this is the point where the "timestamp" part becomes superfluous: we
can simply break the decryption key into two parts (neither of them
sufficient to decrypt the message alone), give one part to the recipient,
and give the other part to a trusted third party which guarantees
1. to hand its part over when the recipient asks before the expiration
time, and 2. to discard it and give it to no one after the expiration time.
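The key-splitting step can be sketched as a 2-of-2 XOR secret sharing: one
share goes to the recipient, the other to the TTP, and neither share alone
carries any information about the key. A minimal sketch (function names are
illustrative):

```python
import os

def split_key(key: bytes) -> tuple[bytes, bytes]:
    """2-of-2 XOR secret sharing: each share alone is a uniformly
    random string and reveals nothing about the key."""
    share_recipient = os.urandom(len(key))            # recipient's share
    share_ttp = bytes(a ^ b for a, b in zip(key, share_recipient))  # TTP's share
    return share_recipient, share_ttp

def combine(share1: bytes, share2: bytes) -> bytes:
    """XOR the two shares back together to recover the key."""
    return bytes(a ^ b for a, b in zip(share1, share2))
```

After the expiration time the TTP destroys its share, and the recipient's
share alone is just random bytes.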
We can use any conventional encryption because we are unable to stop the
recipient from recording all the inputs (or even the output) and repeating
the decryption... unless the recipient decrypts and views the message on
*the sender's* TCB (rather than on his or her own computer). But there is
little need to invent new, complex cryptographic schemes if the sender's
TCB is used, because the sender's TCB can implement arbitrary access
control of the sender's choice.
> I'm going to disagree as politely as possible. As an example, using RSA
> with 1024 bit keys allows for around 10^150 possible primes. Compare
> this to the 10^70 some atoms in the known universe to see how
> disgustingly big that number is. Cracking this encryption scheme by
> searching the keyspace is laughable.
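The quoted count can be sanity-checked with the prime number theorem,
pi(x) ~ x / ln(x): each prime factor of a 1024-bit RSA modulus is roughly
512 bits, so the number of candidate primes is about 2^512 / ln(2^512).
A quick back-of-the-envelope sketch:

```python
import math

bits = 512  # each prime factor of an RSA-1024 modulus is ~512 bits
# Prime number theorem: pi(x) ~ x / ln(x)
approx_primes = 2 ** bits / (bits * math.log(2))
print(f"~10^{int(math.log10(approx_primes))} primes below 2^{bits}")
# -> ~10^151 primes below 2^512
```

So "around 10^150" is the right order of magnitude, and indeed dwarfs the
~10^70 atoms figure.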
There are many things that can go wrong: gradual improvement of
factorization algorithms (very likely, IMHO) can erode the strength of
shorter keys, a major breakthrough (quantum computing?) can kill RSA
with one mighty blow, your PRNG used to generate keys can be weaker than
expected...
> Mathematically, this is a very remote possibility, as factoring primes
> is probably an NP problem, and P is probably not NP. Neither of these
> has been proven, however.
According to my vague recollection of what I heard from people more
skilled in complexity theory, P != NP implies the existence of an
infinite hierarchy of complexity classes between P and NP-complete
(Ladner's theorem), and factorization (of composite numbers, of course;
factorization of primes is trivial... unless you are Bill Gates (*)) is
suspected to lie in one of those intermediate classes: harder than P but
easier than NP-complete.
(*) Bill Gates, "The Road Ahead," p. 265:
The obvious mathematical breakthrough [to break modern encryption]
would be development of an easy way to factor large prime numbers.
> Using larger keys will still provide a measure of security.
Not for ciphertexts already encrypted with shorter keys.
Pavel Kankovsky aka Peak  [ Boycott Microsoft: http://www.vcnet.com/bms ]
"Resistance is futile. Open your source code and prepare for assimilation."
_______________________________________________
Full-Disclosure - We believe in it.
Charter: http://lists.netsys.com/full-disclosure-charter.html