






These brief lecture notes are intended to help you focus on the main concepts that we have covered in class. Their structure follows closely that of the lectures. These notes are not a substitute for your own notes – they are not as comprehensive as your notes should be, and they may contain a lot of typos.
We write x ← A(params) to mean that we obtain x by running algorithm A on input params. Note that A may be randomized. If S is a finite set, we write x ← S to mean that we select x uniformly at random from S. If x and y are strings, we write x||y for the concatenation of x and y.
In the early days of cryptography, the design of cryptographic systems followed a trial-and-error approach: first design the system, then wait for someone to break it, then patch it, wait for another break, and so on, until no more attacks can be found. Unfortunately, this approach offers minimal security guarantees: all one can say is that the people who have tried to attack the system so far did not manage to find a working attack. The defining characteristic of modern cryptography is its use of security models for defining the security of various primitives and protocols^1 and, perhaps
^1 Throughout these notes I will refer only to primitives and ignore protocols, but all of the discussion applies equally well to protocols.
more importantly, their use in proving security. Next, I describe at a high level what a security model is and then give several examples. Before one can even define a security model, one has to fix the syntax of the primitive, that is, to specify precisely what algorithms implement the primitive. A security model for a primitive then makes precise two things: what the adversary is allowed to do when attacking the primitive, and what it means for the adversary to break the primitive (the winning condition).
Throughout this document we describe several of the most common security models used in modern cryptography. How do we know that a security model does indeed capture the security of the primitive? Unfortunately, there is no good answer to this question; here we have to rely on the experience and scrutiny of the cryptographic community. It may seem that security models suffer from the same problem as the first cryptosystems, but a weakness found in a model leads to refining the model itself rather than patching individual schemes one attack at a time.
4 Encryption
In this section we define a couple of security models for encryption and show how one can use these models to argue that a scheme is insecure. We start with asymmetric encryption and then move to symmetric encryption.
Syntax. For example, an asymmetric encryption scheme Π is given by three probabilistic polynomial-time algorithms (Kg, Enc, Dec): one for key generation, one for encryption, and one for decryption.
Indistinguishability under chosen-plaintext attack (IND-CPA). The security game is as follows. First, the adversary receives as input a public key pk (generated by running Kg). Then the adversary outputs two equal-length messages m0 and m1 and receives the encryption c∗ of the message mb, where b is a bit chosen uniformly at random. Finally, the adversary outputs a bit d, his guess as to what b is. The adversary wins if b = d. We say that the scheme is IND-CPA secure if no efficient adversary can win the game with probability significantly better than 1/2.
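To make the game concrete, here is a minimal Python sketch of it. The interface, a scheme object with Kg/Enc/Dec methods and an adversary object with choose/guess methods, is purely illustrative and not taken from the notes.

    import secrets

    def ind_cpa_game(scheme, adversary):
        pk, sk = scheme.Kg()                    # challenger generates a key pair
        m0, m1 = adversary.choose(pk)           # adversary picks two equal-length messages
        b = secrets.randbits(1)                 # challenger flips a uniform bit b
        c_star = scheme.Enc(pk, (m0, m1)[b])    # challenge ciphertext: encryption of m_b
        d = adversary.guess(c_star)             # adversary outputs its guess d
        return d == b                           # the adversary wins iff d = b

In this phrasing, the scheme is IND-CPA secure if no efficient adversary makes this function return True with probability significantly better than 1/2.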
Example 1. The definition above is not very intuitive: it takes a while both to convince oneself that it captures a good intuition of security and to get used to working with it. We start with the following simple example. Assume that an encryption scheme Π = (Kg, Enc, Dec) is such that the encryption function leaks the first bit of the encrypted message. Formally, this means that there exists an algorithm E which, given an encryption c of some message m, outputs the first bit of m. Clearly, such a scheme is insecure, and we show that it can be proven insecure using the model above. To do so, we must exhibit an adversary that wins the game above. The adversary is quite simple and works as follows. It first receives a public key pk (as specified by the game). Then it outputs the pair of messages 00000000 and 11111111 and receives back a ciphertext c∗. It then runs E on c∗ to obtain a bit d (the first bit of the plaintext encrypted in c∗) and outputs d as its guess bit. Now let us argue that this adversary wins the IND-CPA security game with probability 1. We analyze separately the cases where the bit b selected by the game is 0 and 1. If b is 0, then c∗ is an encryption of 00000000; then E(c∗) = 0, which means that the adversary outputs 0 as its guess bit (which equals b). The analysis for the case b = 1 is similar. So the adversary wins with probability 1.
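The Example 1 adversary can be written against the game sketched above as follows; the leak algorithm E is assumed to be given and to return the first bit of the underlying plaintext as an integer.

    class FirstBitAdversary:
        def __init__(self, E):
            self.E = E                          # E(c) -> first bit of the encrypted message

        def choose(self, pk):
            # Two equal-length messages that differ in their first bit.
            return "00000000", "11111111"

        def guess(self, c_star):
            # The leaked first bit of the challenge plaintext equals b.
            return self.E(c_star)

Since E(c∗) is exactly the bit b, ind_cpa_game(scheme, FirstBitAdversary(E)) returns True with probability 1.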
Example 2. In the second example we prove that an encryption scheme whose encryption function is deterministic is not IND-CPA secure. The problem with deterministic encryption is that two encryptions of the same message yield the same ciphertext, and the model defined above captures this problem. Consider an encryption scheme where the encryption algorithm Enc is deterministic. The following adversary breaks the scheme with probability 1. The adversary receives the public encryption key pk from the challenger and selects two different messages, say 000000 and 111111 (any pair of distinct messages would do). Then the adversary sends these two messages and receives back a ciphertext c∗ which encrypts one of them. The goal of the adversary is to determine which of the two messages was encrypted. The adversary can immediately determine this by computing c0 = Enc(pk, m0) and c1 = Enc(pk, m1); if c∗ = c0 then the adversary outputs 0, otherwise it outputs 1. It is immediate that this adversary always wins the IND-CPA game.
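A sketch of the Example 2 adversary, using the same illustrative interface; it simply re-encrypts one of the candidate messages, which is possible because Enc is deterministic and uses only the public key.

    class DeterministicEncAdversary:
        def __init__(self, scheme):
            self.scheme = scheme

        def choose(self, pk):
            self.pk = pk
            self.m0, self.m1 = "000000", "111111"   # any two distinct messages work
            return self.m0, self.m1

        def guess(self, c_star):
            # Enc is deterministic, so re-encrypting m0 reproduces c_star iff b = 0.
            return 0 if c_star == self.scheme.Enc(self.pk, self.m0) else 1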
Indistinguishability under chosen-ciphertext attack (IND-CCA).
The security model above does not account for one important ability that the adversary may have when encryption is used in applications. Specifically, it may be possible for an adversary to trick the receiver into decrypting ciphertexts of his choice. Clearly, if this is the case, it is desirable that the adversary does not obtain any information about the plaintexts underlying ciphertexts for which he did not see the decryptions. A model that captures this intuition can be obtained from the IND-CPA model by giving the adversary the ability to see decryptions of ciphertexts he chooses. The resulting security game is as follows. First, the adversary receives as input an encryption key pk. Then the adversary outputs a ciphertext c and receives the result of the decryption p ← Dec(sk, c); he can repeat this step as many times as he wants. Next, the adversary outputs his two messages m0 and m1 and receives the encryption c∗ of the message mb (where b is a bit chosen uniformly at random). The adversary is allowed further decryption queries, with the restriction that he is not allowed to obtain a decryption of c∗. Finally, the adversary outputs a bit d (which represents his guess as to what bit b is). The adversary wins if he guesses the bit b correctly, i.e. if b = d. We say that the scheme is IND-CCA secure if no efficient adversary can win the game with probability significantly better than 1/2 (the probability with which he can simply guess b).
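A minimal sketch of the IND-CCA game in the same style: the only change with respect to the IND-CPA sketch is the decryption oracle, which answers every query except the challenge ciphertext. The two-argument choose/guess interface is again illustrative.

    import secrets

    def ind_cca_game(scheme, adversary):
        pk, sk = scheme.Kg()
        state = {"c_star": None}

        def dec_oracle(c):
            if c == state["c_star"]:
                raise ValueError("decryption of the challenge ciphertext is not allowed")
            return scheme.Dec(sk, c)

        m0, m1 = adversary.choose(pk, dec_oracle)          # first phase of decryption queries
        b = secrets.randbits(1)
        state["c_star"] = scheme.Enc(pk, (m0, m1)[b])      # challenge ciphertext
        d = adversary.guess(state["c_star"], dec_oracle)   # more queries allowed, except c_star
        return d == b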
Extra-bit encryption. This example shows that the security model defined above may be somewhat too strong. Namely, we construct an encryption scheme that should probably be considered secure, but which can be proven insecure in the above model. Consider some IND-CCA secure encryption scheme (Kg, Enc, Dec), and construct a new encryption scheme (Kg′, Enc′, Dec′) in which encryption simply appends one (useless) bit to the ciphertext and decryption ignores this bit. Formally: Kg′ is simply Kg; Enc′(pk, m) computes c ← Enc(pk, m) and outputs c||1, i.e. the concatenation of c with the bit 1; the decryption algorithm ignores the last bit (and does not check whether it is 1) and decrypts the rest: Dec′(sk, c||b) outputs Dec(sk, c). With this benign-looking modification, the resulting scheme is no longer IND-CCA secure. The problem in this case is one of malleability: the adversary, given a ciphertext, can create a different ciphertext for the same plaintext. The following attacker shows that the scheme (Kg′, Enc′, Dec′) is not IND-CCA secure. The adversary receives the public key pk, selects two different messages m0 and m1, and sends them to the challenger. The adversary receives a ciphertext of the form c∗ = c||1. He then replaces the last bit and submits c||0 for decryption; since c||0 is different from the challenge ciphertext c∗, this query is allowed, and its answer is the plaintext mb. Comparing the answer with m0 and m1 and outputting the corresponding bit, the adversary wins the IND-CCA game with probability 1.
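A sketch of this attacker, assuming for concreteness that ciphertexts are bit strings so that the last character can be replaced; the class name is illustrative.

    class ExtraBitAdversary:
        def choose(self, pk, dec_oracle):
            self.m0, self.m1 = "000000", "111111"
            return self.m0, self.m1

        def guess(self, c_star, dec_oracle):
            # c_star has the form c || "1"; c || "0" is a *different* ciphertext,
            # so the oracle will decrypt it, yet it decrypts to the same plaintext m_b.
            mauled = c_star[:-1] + "0"
            return 0 if dec_oracle(mauled) == self.m0 else 1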
The ElGamal encryption scheme is not IND-CCA secure. Recall that the ElGamal encryption scheme works in a group G with a generator g. Key generation selects a random exponent x and outputs (g^x, x), where g^x is the public key and x is the secret key. To encrypt a message m ∈ G under a public key X, the encryption algorithm selects a random exponent r and outputs (g^r, X^r · m). To decrypt a ciphertext (R, C) with secret key x, output C · R^(−x). The observation that leads to the attack is that this encryption scheme is malleable. Informally, this means that given a ciphertext c, an adversary can create a ciphertext c′ which encrypts a plaintext related in a known way to the plaintext that underlies c. Specifically, consider a ciphertext c = (R, C) which encrypts some message m, and let h be an arbitrary group element. Then c′ = (R, h · C) encrypts the message h · m. The following IND-CCA adversary uses this property to break the ElGamal cryptosystem. The adversary receives an encryption key pk and selects two different group elements g0 and g1, which he sends to the challenger. He receives in return the encryption c∗ = (R∗, C∗) of gb. Then the adversary selects a random element h in G, prepares the ciphertext c′ = (R∗, h · C∗), and submits it for decryption (which is allowed, since c′ is different from c∗). He receives in return a group element g∗. If g∗/h = g0 the adversary outputs 0 as his guess, and if g∗/h = g1 he outputs 1. Now we argue that the adversary wins this game with probability 1. As before, assume that the bit b selected by the challenger is 0. Then the challenge is an encryption of g0, so the ciphertext c′ that the adversary creates is an encryption of h · g0; therefore g∗ = h · g0 and the adversary correctly outputs 0. The analysis is similar for b = 1, and we conclude that the adversary wins with probability 1.
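The following toy Python sketch illustrates the attack end to end. The group, the parameters, and the function names (kg, enc, dec, elgamal_cca_attack) are illustrative choices that do not appear in the notes, and the parameters are far too small to be secure; the point is only to make the malleability step explicit.

    import secrets

    P = 2**127 - 1     # a Mersenne prime; toy modulus, not a secure choice
    G = 3              # toy generator choice; correctness does not depend on it

    def kg():
        x = secrets.randbelow(P - 2) + 1
        return pow(G, x, P), x                            # public key g^x, secret key x

    def enc(X, m):
        r = secrets.randbelow(P - 2) + 1
        return pow(G, r, P), (pow(X, r, P) * m) % P       # (g^r, X^r * m)

    def dec(x, ct):
        R, C = ct
        return (C * pow(R, -x, P)) % P                    # C * R^(-x)

    def elgamal_cca_attack(dec_oracle, c_star, g0, g1):
        R_star, C_star = c_star
        h = pow(G, secrets.randbelow(P - 2) + 1, P)       # random group element h
        g_star = dec_oracle((R_star, (h * C_star) % P))   # c' != c_star, decrypts to h * m_b
        return 0 if (g_star * pow(h, -1, P)) % P == g0 else 1

For instance, with (X, x) = kg(), g0 = pow(G, 5, P), g1 = pow(G, 7, P) and c_star = enc(X, g1), calling elgamal_cca_attack(lambda c: dec(x, c), c_star, g0, g1) returns 1.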
5 Efficient adversaries and negligible functions
In cryptography, security of systems is rarely absolute. In the security models described in the previous sections we were somewhat informal in defining security, but we captured the following idea: we said that a system is secure if no efficient adversary can win the game that defines security with probability significantly better than 1/2. We need to specify two things more precisely: first, what the class of efficient adversaries is, and second, what it means for the advantage of the adversary to be significantly better than some constant (1/2 in the case of encryption).
Efficient adversaries are modeled as probabilistic polynomial-time algorithms, i.e. algorithms whose running time is bounded by a polynomial in the security parameter n. For the second point we use the notion of a negligible function: a function f is negligible if it is eventually smaller than the inverse of every polynomial. Formally, f is negligible if

(∀p)(∃np ∈ N)(∀n ≥ np) f(n) ≤ 1/p(n)

In the above, the first quantification is over all polynomials p. The two definitions go hand in hand. Consider an efficient adversary that breaks the system only with negligible probability. Then it is not possible to amplify its advantage by simply repeating the attack (polynomially many times). Next we prove this for the case of repeating the attack twice; the argument extends with very little modification to the general case. Consider an adversary against a secure system, which means that the probability that the adversary breaks the system is upper bounded by a negligible function, say fA(n). We want to show that repeating the attack twice still yields a negligible success probability. The probability that the adversary breaks the system by carrying out two attacks is easily seen to be at most 2fA(n). We want to show that this function is negligible (the argument immediately extends to any constant number of repetitions, and even to polynomially many repetitions). Fix some polynomial p. Then g(n) = 2 · p(n) is also a polynomial. Since fA(n) is negligible, there exists some constant ng such that
(∀n ≥ ng) fA(n) ≤ 1/g(n)
It immediately follows that
(∀n ≥ ng) 2 · fA(n) ≤ 2/g(n) = 1/p(n)

Since p was an arbitrary polynomial, this shows that 2 · fA(n) is negligible.
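To see how the argument extends to polynomially many repetitions, here is the same calculation with a polynomial bound t(n) on the number of attempts (the symbol t is not used in the original argument). By a union bound, the success probability of t(n) repeated attacks is at most t(n) · fA(n). Fix any polynomial p and set g(n) = t(n) · p(n), again a polynomial. Since fA is negligible, there exists ng such that

(∀n ≥ ng) fA(n) ≤ 1/g(n) = 1/(t(n) · p(n)), and hence (∀n ≥ ng) t(n) · fA(n) ≤ 1/p(n),

so t(n) · fA(n) is negligible as well.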
(A remark on the IND-CCA game: the adversary may ask for decryptions of any ciphertexts he wants at any point in the game. The only ciphertext he is not allowed to query is the challenge ciphertext c∗, which is the ciphertext whose security we actually model.)
6 Hash functions
The Merkle-Damgård construction. Let f : {0,1}^(n+l) → {0,1}^n be a compression function. The Merkle-Damgård construction is a way to build a hash function out of a compression function. The constructed hash function Hf works for messages whose length is a multiple of l. Let M = M1||M2||...||Mk be a message of length kl (each Mi is a block of l bits). The value Hf(M) = Hk+1 is obtained as follows: set H1 = IV (a fixed initialization vector) and, for i = 1, ..., k, compute Hi+1 = f(Hi||Mi); the output is Hk+1.
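A minimal sketch of the iteration in Python, modeling bit strings as Python strings of '0'/'1'; the function name md_hash and the return of the whole chain of intermediate values are illustrative conveniences.

    def md_hash(f, IV, M, l):
        # f maps (n + l)-bit strings to n-bit strings; IV is a fixed n-bit string.
        assert len(M) % l == 0                               # length must be a multiple of l
        blocks = [M[i:i + l] for i in range(0, len(M), l)]   # M = M_1 || ... || M_k
        H = [IV]                                             # H_1 = IV
        for Mi in blocks:
            H.append(f(H[-1] + Mi))                          # H_{i+1} = f(H_i || M_i)
        return H[-1], H                                      # H_f(M) = H_{k+1}, plus the chain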
We have shown that if f is collision resistant (i.e. it is difficult to find M, N with M ≠ N but f(M) = f(N)), then Hf defined as above is also collision resistant (for messages of the same length). The proof proceeds as follows. Assume that there exists an adversary A that breaks the collision resistance of Hf. Adversary A finds two messages M = M1||M2||...||Mk and N = N1||N2||...||Nk such that M ≠ N yet Hf(M) = Hf(N). Let H1, H2, ..., Hk+1 and K1, K2, ..., Kk+1 be the intermediate chaining values for M and N, respectively; in particular H1 = K1 = IV and Hk+1 = Kk+1. The following algorithm B finds a collision for f out of the collision for Hf.
for i = k downto 1 do: if Hi||Mi ≠ Ki||Ni then output the pair Hi||Mi, Ki||Ni and stop.
At the moment a pair is output we know that Hi+1 = Ki+1, i.e. f(Hi||Mi) = f(Ki||Ni), so the two distinct strings collide under f. Moreover, since M ≠ N, there is some index i with Mi ≠ Ni, so B always outputs a collision.
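A sketch of algorithm B in the same style, reusing md_hash from the sketch above: given M ≠ N of equal length with Hf(M) = Hf(N), the first differing pair of compression-function inputs found while going from i = k down to 1 is a collision for f.

    def find_f_collision(f, IV, M, N, l):
        _, H = md_hash(f, IV, M, l)                             # chain H_1, ..., H_{k+1} for M
        _, K = md_hash(f, IV, N, l)                             # chain K_1, ..., K_{k+1} for N
        Ms = [M[i:i + l] for i in range(0, len(M), l)]
        Ns = [N[i:i + l] for i in range(0, len(N), l)]
        for i in range(len(Ms), 0, -1):                         # i = k downto 1
            x, y = H[i - 1] + Ms[i - 1], K[i - 1] + Ns[i - 1]   # H_i || M_i and K_i || N_i
            if x != y:
                return x, y                                     # f(x) = f(y) but x != y
        return None                                             # unreachable when M != N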