1. 19 Sep, 2018 1 commit
  2. 09 Sep, 2018 1 commit
  3. 17 Aug, 2018 6 commits
    • crypto: skcipher - fix crash flushing dcache in error path · 7e179bff
      Eric Biggers authored
      commit 8088d3dd upstream.
      
      scatterwalk_done() is only meant to be called after a nonzero number of
      bytes have been processed, since scatterwalk_pagedone() will flush the
      dcache of the *previous* page.  But in the error case of
      skcipher_walk_done(), e.g. if the input wasn't an integer number of
      blocks, scatterwalk_done() was actually called after advancing 0 bytes.
      This caused a crash ("BUG: unable to handle kernel paging request")
      during '!PageSlab(page)' on architectures like arm and arm64 that define
      ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
      page-aligned as in that case walk->offset == 0.
      
      Fix it by reorganizing skcipher_walk_done() to skip the
      scatterwalk_advance() and scatterwalk_done() if an error has occurred.
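
      [Editor's note: an illustrative sketch of the reorganized exit path, not
      the actual upstream diff.  Variable and structure names are simplified;
      only scatterwalk_advance() and scatterwalk_done() are real helpers.]

          /* Only touch the scatterwalk when bytes were actually processed. */
          if (unlikely(err < 0))
                  goto finish;    /* error: 0 bytes advanced, no dcache flush */

          n = walk->nbytes - err;                 /* bytes actually processed */
          scatterwalk_advance(&walk->out, n);
          scatterwalk_done(&walk->out, 1, walk->total - n);

      finish:
          /* common cleanup: free buffers, return the error code */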
      
      This bug was found by syzkaller fuzzing.
      
      Reproducer, assuming ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE:
      
      	#include <linux/if_alg.h>
      	#include <sys/socket.h>
      	#include <unistd.h>
      
      	int main()
      	{
      		struct sockaddr_alg addr = {
      			.salg_type = "skcipher",
      			.salg_name = "cbc(aes-generic)",
      		};
      		char buffer[4096] __attribute__((aligned(4096))) = { 0 };
      		int fd;
      
      		fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
      		bind(fd, (void *)&addr, sizeof(addr));
      		setsockopt(fd, SOL_ALG, ALG_SET_KEY, buffer, 16);
      		fd = accept(fd, NULL, NULL);
      		write(fd, buffer, 15);
      		read(fd, buffer, 15);
      	}
      Reported-by: Liu Chao <liuchao741@huawei.com>
      Fixes: b286d8b1 ("crypto: skcipher - Add skcipher walk interface")
      Cc: <stable@vger.kernel.org> # v4.10+
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      7e179bff
    • crypto: skcipher - fix aligning block size in skcipher_copy_iv() · 0f2981ee
      Eric Biggers authored
      commit 0567fc9e upstream.
      
      The ALIGN() macro needs to be passed the alignment, not the alignmask
      (which is the alignment minus 1).
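
      [Editor's note: a small userspace demonstration of the difference, not
      part of the original patch.  The ALIGN() definition below mirrors the
      kernel macro.]

          #include <stdio.h>

          /* Round x up to a multiple of 'a'; 'a' must be the alignment itself. */
          #define ALIGN(x, a)  (((x) + (a) - 1) & ~((a) - 1))

          int main(void)
          {
                  unsigned int alignmask = 15;    /* alignment 16, mask 15 */

                  /* wrong: passing the mask */
                  printf("%u\n", ALIGN(17u, alignmask));      /* prints 17 */
                  /* right: passing mask + 1, i.e. the alignment */
                  printf("%u\n", ALIGN(17u, alignmask + 1));  /* prints 32 */
                  return 0;
          }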
      
      Fixes: b286d8b1 ("crypto: skcipher - Add skcipher walk interface")
      Cc: <stable@vger.kernel.org> # v4.10+
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      0f2981ee
    • crypto: ablkcipher - fix crash flushing dcache in error path · 68432fd1
      Eric Biggers authored
      commit 318abdfb upstream.
      
      Like the skcipher_walk and blkcipher_walk cases:
      
      scatterwalk_done() is only meant to be called after a nonzero number of
      bytes have been processed, since scatterwalk_pagedone() will flush the
      dcache of the *previous* page.  But in the error case of
      ablkcipher_walk_done(), e.g. if the input wasn't an integer number of
      blocks, scatterwalk_done() was actually called after advancing 0 bytes.
      This caused a crash ("BUG: unable to handle kernel paging request")
      during '!PageSlab(page)' on architectures like arm and arm64 that define
      ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
      page-aligned as in that case walk->offset == 0.
      
      Fix it by reorganizing ablkcipher_walk_done() to skip the
      scatterwalk_advance() and scatterwalk_done() if an error has occurred.
      Reported-by: Liu Chao <liuchao741@huawei.com>
      Fixes: bf06099d ("crypto: skcipher - Add ablkcipher_walk interfaces")
      Cc: <stable@vger.kernel.org> # v2.6.35+
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      68432fd1
    • crypto: blkcipher - fix crash flushing dcache in error path · 2cde72d9
      Eric Biggers authored
      commit 0868def3 upstream.
      
      Like the skcipher_walk case:
      
      scatterwalk_done() is only meant to be called after a nonzero number of
      bytes have been processed, since scatterwalk_pagedone() will flush the
      dcache of the *previous* page.  But in the error case of
      blkcipher_walk_done(), e.g. if the input wasn't an integer number of
      blocks, scatterwalk_done() was actually called after advancing 0 bytes.
      This caused a crash ("BUG: unable to handle kernel paging request")
      during '!PageSlab(page)' on architectures like arm and arm64 that define
      ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
      page-aligned as in that case walk->offset == 0.
      
      Fix it by reorganizing blkcipher_walk_done() to skip the
      scatterwalk_advance() and scatterwalk_done() if an error has occurred.
      
      This bug was found by syzkaller fuzzing.
      
      Reproducer, assuming ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE:
      
      	#include <linux/if_alg.h>
      	#include <sys/socket.h>
      	#include <unistd.h>
      
      	int main()
      	{
      		struct sockaddr_alg addr = {
      			.salg_type = "skcipher",
      			.salg_name = "ecb(aes-generic)",
      		};
      		char buffer[4096] __attribute__((aligned(4096))) = { 0 };
      		int fd;
      
      		fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
      		bind(fd, (void *)&addr, sizeof(addr));
      		setsockopt(fd, SOL_ALG, ALG_SET_KEY, buffer, 16);
      		fd = accept(fd, NULL, NULL);
      		write(fd, buffer, 15);
      		read(fd, buffer, 15);
      	}
      Reported-by: Liu Chao <liuchao741@huawei.com>
      Fixes: 5cde0af2 ("[CRYPTO] cipher: Added block cipher type")
      Cc: <stable@vger.kernel.org> # v2.6.19+
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      2cde72d9
    • crypto: vmac - separate tfm and request context · e7aefb13
      Eric Biggers authored
      commit bb296481 upstream.
      
      syzbot reported a crash in vmac_final() when multiple threads
      concurrently use the same "vmac(aes)" transform through AF_ALG.  The bug
      is pretty fundamental: the VMAC template doesn't separate per-request
      state from per-tfm (per-key) state like the other hash algorithms do,
      but rather stores it all in the tfm context.  That's wrong.
      
      Also, vmac_final() incorrectly zeroes most of the state including the
      derived keys and cached pseudorandom pad.  Therefore, only the first
      VMAC invocation with a given key calculates the correct digest.
      
      Fix these bugs by splitting the per-tfm state from the per-request state
      and using the proper init/update/final sequencing for requests.
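
      [Editor's note: a schematic of the split, not the actual upstream diff.
      Struct and field names below are illustrative and the array sizes are
      approximate.]

          /* Keyed state, computed once per tfm by setkey() and shared
           * read-only by all requests using that key: */
          struct vmac_tfm_ctx_sketch {
                  struct crypto_cipher *cipher;   /* underlying AES tfm  */
                  u64 nhkey[32];                  /* derived NH keys     */
                  u64 polykey[2], l3key[2];       /* derived hash keys   */
          };

          /* Per-request state, kept in the shash_desc context so that
           * concurrent requests on the same tfm no longer clobber it: */
          struct vmac_desc_ctx_sketch {
                  u8 partial[VMAC_NHBYTES];       /* pending partial block */
                  unsigned int partial_size;      /* bytes in 'partial'    */
                  u64 polytmp[2];                 /* running hash state    */
          };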
      
      Reproducer for the crash:
      
          #include <linux/if_alg.h>
          #include <sys/socket.h>
          #include <unistd.h>
      
          int main()
          {
                  int fd;
                  struct sockaddr_alg addr = {
                          .salg_type = "hash",
                          .salg_name = "vmac(aes)",
                  };
                  char buf[256] = { 0 };
      
                  fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
                  bind(fd, (void *)&addr, sizeof(addr));
                  setsockopt(fd, SOL_ALG, ALG_SET_KEY, buf, 16);
                  fork();
                  fd = accept(fd, NULL, NULL);
                  for (;;)
                          write(fd, buf, 256);
          }
      
      The immediate cause of the crash is that vmac_ctx_t.partial_size exceeds
      VMAC_NHBYTES, causing vmac_final() to memset() a negative length.
      
      Reported-by: syzbot+264bca3a6e8d645550d3@syzkaller.appspotmail.com
      Fixes: f1939f7c ("crypto: vmac - New hash algorithm for intel_txt support")
      Cc: <stable@vger.kernel.org> # v2.6.32+
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      e7aefb13
    • crypto: vmac - require a block cipher with 128-bit block size · ef70d145
      Eric Biggers authored
      commit 73bf20ef upstream.
      
      The VMAC template assumes the block cipher has a 128-bit block size, but
      it failed to check for that.  Thus it was possible to instantiate it
      using a 64-bit block size cipher, e.g. "vmac(cast5)", causing
      uninitialized memory to be used.
      
      Add the needed check when instantiating the template.
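
      [Editor's note: a minimal sketch of such a check, not the actual
      upstream diff; 'alg' stands for the underlying cipher algorithm being
      looked up while the template is instantiated.]

          err = -EINVAL;
          if (alg->cra_blocksize != 16)   /* VMAC is defined over 128-bit blocks */
                  goto out_put_alg;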
      
      Fixes: f1939f7c ("crypto: vmac - New hash algorithm for intel_txt support")
      Cc: <stable@vger.kernel.org> # v2.6.32+
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      ef70d145
  4. 03 Aug, 2018 2 commits
  5. 22 Jul, 2018 1 commit
  6. 17 Jul, 2018 1 commit
    • crypto: x86/salsa20 - remove x86 salsa20 implementations · 19f39eff
      Eric Biggers authored
      commit b7b73cd5 upstream.
      
      The x86 assembly implementations of Salsa20 use the frame base pointer
      register (%ebp or %rbp), which breaks frame pointer convention and
      breaks stack traces when unwinding from an interrupt in the crypto code.
      Recent (v4.10+) kernels will warn about this, e.g.
      
      WARNING: kernel stack regs at 00000000a8291e69 in syzkaller047086:4677 has bad 'bp' value 000000001077994c
      [...]
      
      But after looking into it, I believe there's very little reason to still
      retain the x86 Salsa20 code.  First, these are *not* vectorized
      (SSE2/SSSE3/AVX2) implementations, which would be needed to get anywhere
      close to the best Salsa20 performance on any remotely modern x86
      processor; they're just regular x86 assembly.  Second, it's still
      unclear whether anyone is actually using the kernel's Salsa20 at all,
      especially given that now ChaCha20 is supported too, and with much more
      efficient SSSE3 and AVX2 implementations.  Finally, in benchmarks I did
      on both Intel and AMD processors with both gcc 8.1.0 and gcc 4.9.4, the
      x86_64 salsa20-asm is actually slightly *slower* than salsa20-generic
      (~3% slower on Skylake, ~10% slower on Zen), while the i686 salsa20-asm
      is only slightly faster than salsa20-generic (~15% faster on Skylake,
      ~20% faster on Zen).  The gcc version made little difference.
      
      So, the x86_64 salsa20-asm is pretty clearly useless.  That leaves just
      the i686 salsa20-asm, which based on my tests provides a 15-20% speed
      boost.  But that's without updating the code to not use %ebp.  And given
      the maintenance cost, the small speed difference vs. salsa20-generic,
      the fact that few people still use i686 kernels, the doubt that anyone
      is even using the kernel's Salsa20 at all, and the fact that a SSE2
      implementation would almost certainly be much faster on any remotely
      modern x86 processor yet no one has cared enough to add one yet, I don't
      think it's worthwhile to keep.
      
      Thus, just remove both the x86_64 and i686 salsa20-asm implementations.
      
      Reported-by: syzbot+ffa3a158337bbc01ff09@syzkaller.appspotmail.com
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      19f39eff
  7. 03 Jul, 2018 1 commit
    • X.509: unpack RSA signatureValue field from BIT STRING · af20e4ec
      Maciej S. Szmigiero authored
      commit b65c32ec upstream.
      
      The signatureValue field of an X.509 certificate is encoded as a BIT STRING.
      For RSA signatures this BIT STRING is of the so-called primitive subtype, which
      contains a u8 prefix indicating the count of unused bits in the encoding.
      
      We have to strip this prefix from signature data, just as we already do for
      key data in x509_extract_key_data() function.
      
      This wasn't noticed earlier because this prefix byte is zero for RSA key
      sizes divisible by 8. Since BIT STRING is a big-endian encoding, adding zero
      prefixes has no bearing on its value.
      
      The signature length, however, was incorrect, which is a problem for RSA
      implementations that need it to be exactly correct (like AMD CCP).
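
      [Editor's note: a minimal sketch of the stripping, mirroring what
      x509_extract_key_data() already does for key data; it is not the actual
      upstream diff and the variable names are illustrative.]

          /* A primitive BIT STRING starts with one octet giving the number of
           * unused bits; for a valid RSA signature it must be 0.  Skip it so
           * only the raw signature bytes (and length) are recorded. */
          if (vlen < 1 || *(const u8 *)value != 0)
                  return -EBADMSG;
          value++;
          vlen--;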
      Signed-off-by: Maciej S. Szmigiero <mail@maciej.szmigiero.name>
      Fixes: c26fd69f ("X.509: Add a crypto key parser for binary (DER) X.509 certificates")
      Cc: stable@vger.kernel.org
      Signed-off-by: James Morris <james.morris@microsoft.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      af20e4ec
  8. 30 May, 2018 1 commit
    • PKCS#7: fix direct verification of SignerInfo signature · e47c1bf9
      Eric Biggers authored
      [ Upstream commit 6459ae38 ]
      
      If none of the certificates in a SignerInfo's certificate chain match a
      trusted key, nor is the last certificate signed by a trusted key, then
      pkcs7_validate_trust_one() tries to check whether the SignerInfo's
      signature was made directly by a trusted key.  But, it actually fails to
      set the 'sig' variable correctly, so it actually verifies the last
      signature seen.  That will only be the SignerInfo's signature if the
      certificate chain is empty; otherwise it will actually be the last
      certificate's signature.
      
      This is not by itself a security problem, since verifying any of the
      certificates in the chain should be sufficient to verify the SignerInfo.
      Still, it's not working as intended so it should be fixed.
      
      Fix it by setting 'sig' correctly for the direct verification case.
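
      [Editor's note: schematically the fix amounts to the line below, in the
      branch of pkcs7_validate_trust_one() that verifies the SignerInfo
      directly against a trusted key; an editorial sketch, not the quoted
      upstream diff.]

          /* Verify the SignerInfo's own signature, not whatever 'sig' was
           * left pointing at by the certificate chain walk: */
          sig = sinfo->sig;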
      
      Fixes: 757932e6 ("PKCS#7: Handle PKCS#7 messages that contain no X.509 certs")
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      e47c1bf9
  9. 16 May, 2018 1 commit
  10. 01 May, 2018 1 commit
  11. 12 Apr, 2018 1 commit
    • crypto: aes-generic - build with -Os on gcc-7+ · 7cae67e3
      Arnd Bergmann authored
      
      [ Upstream commit 148b974d ]
      
      While testing other changes, I discovered that gcc-7.2.1 produces badly
      optimized code for aes_encrypt/aes_decrypt. This is especially true when
      CONFIG_UBSAN_SANITIZE_ALL is enabled, where it leads to extremely
      large stack usage that in turn might cause kernel stack overflows:
      
      crypto/aes_generic.c: In function 'aes_encrypt':
      crypto/aes_generic.c:1371:1: warning: the frame size of 4880 bytes is larger than 2048 bytes [-Wframe-larger-than=]
      crypto/aes_generic.c: In function 'aes_decrypt':
      crypto/aes_generic.c:1441:1: warning: the frame size of 4864 bytes is larger than 2048 bytes [-Wframe-larger-than=]
      
      I verified that this problem exists on all architectures that are
      supported by gcc-7.2, though arm64 in particular is less affected than
      the others. I also found that gcc-7.1 and gcc-8 do not show the extreme
      stack usage but still produce worse code than earlier versions for this
      file, apparently because of optimization passes that generally provide
      a substantial improvement in object code quality but understandably fail
      to find any shortcuts in the AES algorithm.
      
      Possible workarounds include
      
      a) disabling -ftree-pre and -ftree-sra optimizations, this was an earlier
         patch I tried, which reliably fixed the stack usage, but caused a
         serious performance regression in some versions, as later testing
         found.
      
      b) disabling UBSAN on this file or all ciphers, as suggested by Ard
         Biesheuvel. This would lead to massively better crypto performance in
         UBSAN-enabled kernels and avoid the stack usage, but there is a concern
         over whether we should exclude arbitrary files from UBSAN at all.
      
      c) Forcing the optimization level in a different way. Similar to a),
         but rather than deselecting specific optimization stages,
         this now uses "gcc -Os" for this file, regardless of the
         CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE/SIZE option. This is a reliable
         workaround for the stack consumption on all architectures, and I've
         retested the performance results now on x86, cycles/byte (lower is
         better) for cbc(aes-generic) with 256 bit keys:
      
      			-O2     -Os
      	gcc-6.3.1	14.9	15.1
      	gcc-7.0.1	14.7	15.3
      	gcc-7.1.1	15.3	14.7
      	gcc-7.2.1	16.8	15.9
      	gcc-8.0.0	15.5	15.6
      
      This implements option c) by forcing -Os on all compiler
      versions starting with gcc-7.1. As a workaround for PR83356, it would
      only be needed for gcc-7.2+ with UBSAN enabled, but since it also shows
      better performance on gcc-7.1 without UBSAN, it seems appropriate to
      use the faster version here as well.
      
      Side note: during testing, I also played with the AES code in libressl,
      which had a similar performance regression from gcc-6 to gcc-7.2,
      but was three times slower overall. It might be interesting to
      investigate that further and possibly port the Linux implementation
      into that.
      
      Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83356
      Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83651
      Cc: Richard Biener <rguenther@suse.de>
      Cc: Jakub Jelinek <jakub@gcc.gnu.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      7cae67e3
  12. 08 Apr, 2018 3 commits
  13. 19 Mar, 2018 1 commit
  14. 03 Mar, 2018 1 commit
  15. 28 Feb, 2018 4 commits
    • PKCS#7: fix certificate blacklisting · 29e76b21
      Eric Biggers authored
      commit 29f4a67c upstream.
      
      If there is a blacklisted certificate in a SignerInfo's certificate
      chain, then pkcs7_verify_sig_chain() sets sinfo->blacklisted and returns
      0.  But, pkcs7_verify() fails to handle this case appropriately, as it
      actually continues on to the line 'actual_ret = 0;', indicating that the
      SignerInfo has passed verification.  Consequently, PKCS#7 signature
      verification ignores the certificate blacklist.
      
      Fix this by not considering blacklisted SignerInfos to have passed
      verification.
      
      Also fix the function comment with regards to when 0 is returned.
      
      Fixes: 03bb7931 ("PKCS#7: Handle blacklisted certificates")
      Cc: <stable@vger.kernel.org> # v4.12+
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      29e76b21
    • PKCS#7: fix certificate chain verification · 1a1f7f72
      Eric Biggers authored
      commit 971b42c0 upstream.
      
      When pkcs7_verify_sig_chain() is building the certificate chain for a
      SignerInfo using the certificates in the PKCS#7 message, it is passing
      the wrong arguments to public_key_verify_signature().  Consequently,
      when the next certificate is supposed to be used to verify the previous
      certificate, the next certificate is actually used to verify itself.
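
      [Editor's note: an editorial sketch of the intended call, not the quoted
      upstream diff.  In the chain walk, 'x509' is the certificate being
      verified and 'p' is the next (issuer) certificate.]

          /* Verify x509's signature with p's public key -- not p's own
           * signature with p's own key, which is what the bug amounted to: */
          ret = public_key_verify_signature(p->pub, x509->sig);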
      
      An attacker can use this bug to create a bogus certificate chain that
      has no cryptographic relationship between the beginning and end.
      
      Fortunately I couldn't quite find a way to use this to bypass the
      overall signature verification, though it comes very close.  Here's the
      reasoning: due to the bug, every certificate in the chain beyond the
      first actually has to be self-signed (where "self-signed" here refers to
      the actual key and signature; an attacker might still manipulate the
      certificate fields such that the self_signed flag doesn't actually get
      set, and thus the chain doesn't end immediately).  But to pass trust
      validation (pkcs7_validate_trust()), either the SignerInfo or one of the
      certificates has to actually be signed by a trusted key.  Since only
      self-signed certificates can be added to the chain, the only way for an
      attacker to introduce a trusted signature is to include a self-signed
      trusted certificate.
      
      But, when pkcs7_validate_trust_one() reaches that certificate, instead
      of trying to verify the signature on that certificate, it will actually
      look up the corresponding trusted key, which will succeed, and then try
      to verify the *previous* certificate, which will fail.  Thus, disaster
      is narrowly averted (as far as I could tell).
      
      Fixes: 6c2dc5ae ("X.509: Extract signature digest and make self-signed cert checks earlier")
      Cc: <stable@vger.kernel.org> # v4.7+
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      1a1f7f72
    • X.509: fix NULL dereference when restricting key with unsupported_sig · 99b2095a
      Eric Biggers authored
      commit 4b34968e upstream.
      
      The asymmetric key type allows an X.509 certificate to be added even if
      its signature's hash algorithm is not available in the crypto API.  In
      that case 'payload.data[asym_auth]' will be NULL.  But the key
      restriction code failed to check for this case before trying to use the
      signature, resulting in a NULL pointer dereference in
      key_or_keyring_common() or in restrict_link_by_signature().
      
      Fix this by returning -ENOPKG when the signature is unsupported.
      
      Reproducer when all the CONFIG_CRYPTO_SHA512* options are disabled and
      keyctl has support for the 'restrict_keyring' command:
      
          keyctl new_session
          keyctl restrict_keyring @s asymmetric builtin_trusted
          openssl req -new -sha512 -x509 -batch -nodes -outform der \
              | keyctl padd asymmetric desc @s
      
      Fixes: a511e1af ("KEYS: Move the point of trust determination to __key_link()")
      Cc: <stable@vger.kernel.org> # v4.7+
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      99b2095a
    • X.509: fix BUG_ON() when hash algorithm is unsupported · dcb04cc7
      Eric Biggers authored
      commit 437499ee upstream.
      
      The X.509 parser mishandles the case where the certificate's signature's
      hash algorithm is not available in the crypto API.  In this case,
      x509_get_sig_params() doesn't allocate the cert->sig->digest buffer;
      this part seems to be intentional.  However,
      public_key_verify_signature() is still called via
      x509_check_for_self_signed(), which triggers the 'BUG_ON(!sig->digest)'.
      
      Fix this by making public_key_verify_signature() return -ENOPKG if the
      hash buffer has not been allocated.
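
      [Editor's note: schematically, the guard amounts to the check below at
      the start of public_key_verify_signature(); an editorial sketch, not the
      quoted upstream diff.]

          /* The digest was never computed because the hash algorithm is not
           * available -- report "no package" instead of hitting BUG_ON(): */
          if (!sig->digest)
                  return -ENOPKG;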
      
      Reproducer when all the CONFIG_CRYPTO_SHA512* options are disabled:
      
          openssl req -new -sha512 -x509 -batch -nodes -outform der \
              | keyctl padd asymmetric desc @s
      
      Fixes: 6c2dc5ae ("X.509: Extract signature digest and make self-signed cert checks earlier")
      Reported-by: Paolo Valente <paolo.valente@linaro.org>
      Cc: Paolo Valente <paolo.valente@linaro.org>
      Cc: <stable@vger.kernel.org> # v4.7+
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      dcb04cc7
  16. 22 Feb, 2018 1 commit
  17. 16 Feb, 2018 6 commits
  18. 13 Feb, 2018 1 commit
  19. 03 Feb, 2018 3 commits
  20. 17 Jan, 2018 1 commit
    • crypto: algapi - fix NULL dereference in crypto_remove_spawns() · 3662493d
      Eric Biggers authored
      commit 9a006742 upstream.
      
      syzkaller triggered a NULL pointer dereference in crypto_remove_spawns()
      via a program that repeatedly and concurrently requests AEADs
      "authenc(cmac(des3_ede-asm),pcbc-aes-aesni)" and hashes "cmac(des3_ede)"
      through AF_ALG, where the hashes are requested as "untested"
      (CRYPTO_ALG_TESTED is set in ->salg_mask but clear in ->salg_feat; this
      causes the template to be instantiated for every request).
      
      Although AF_ALG users really shouldn't be able to request an "untested"
      algorithm, the NULL pointer dereference is actually caused by a
      longstanding race condition where crypto_remove_spawns() can encounter
      an instance which has had spawn(s) "grabbed" but hasn't yet been
      registered, resulting in ->cra_users still being NULL.
      
      We probably should properly initialize ->cra_users earlier, but that
      would require updating many templates individually.  For now just fix
      the bug in a simple way that can easily be backported: make
      crypto_remove_spawns() treat a NULL ->cra_users list as empty.
      Reported-by: syzbot <syzkaller@googlegroups.com>
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      3662493d
  21. 10 Jan, 2018 2 commits
    • crypto: pcrypt - fix freeing pcrypt instances · 7156c794
      Eric Biggers authored
      commit d76c6810 upstream.
      
      pcrypt is using the old way of freeing instances, where the ->free()
      method specified in the 'struct crypto_template' is passed a pointer to
      the 'struct crypto_instance'.  But the crypto_instance is being
      kfree()'d directly, which is incorrect because the memory was actually
      allocated as an aead_instance, which contains the crypto_instance at a
      nonzero offset.  Thus, the wrong pointer was being kfree()'d.
      
      Fix it by switching to the new way of freeing aead_instances, where the
      ->free() method is specified in the aead_instance itself.
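
      [Editor's note: an editorial sketch of the shape of the fix, not the
      quoted upstream diff; pcrypt_free() names a destructor that takes the
      aead_instance and frees the whole allocation.]

          /* Register the destructor on the aead_instance itself, so the
           * pointer that gets kfree()'d is the one that was allocated: */
          inst->free = pcrypt_free;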
      Reported-by: syzbot <syzkaller@googlegroups.com>
      Fixes: 0496f560 ("crypto: pcrypt - Add support for new AEAD interface")
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      7156c794
    • crypto: chacha20poly1305 - validate the digest size · 9c36498f
      Eric Biggers authored
      commit e57121d0 upstream.
      
      If the rfc7539 template was instantiated with a hash algorithm with
      digest size larger than 16 bytes (POLY1305_DIGEST_SIZE), then the digest
      overran the 'tag' buffer in 'struct chachapoly_req_ctx', corrupting the
      subsequent memory, including 'cryptlen'.  This caused a crash during
      crypto_skcipher_decrypt().
      
      Fix it by, when instantiating the template, requiring that the
      underlying hash algorithm has the digest size expected for Poly1305.
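
      [Editor's note: a minimal sketch of such a check at template
      instantiation time, not the quoted upstream diff; 'poly' stands for the
      hash algorithm being looked up for the Poly1305 part.]

          err = -EINVAL;
          if (poly->digestsize != POLY1305_DIGEST_SIZE)   /* must be 16 bytes */
                  goto out_put_poly;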
      
      Reproducer:
      
          #include <linux/if_alg.h>
          #include <sys/socket.h>
          #include <unistd.h>
      
          int main()
          {
                  int algfd, reqfd;
                  struct sockaddr_alg addr = {
                          .salg_type = "aead",
                          .salg_name = "rfc7539(chacha20,sha256)",
                  };
                  unsigned char buf[32] = { 0 };
      
                  algfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
                  bind(algfd, (void *)&addr, sizeof(addr));
                  setsockopt(algfd, SOL_ALG, ALG_SET_KEY, buf, sizeof(buf));
                  reqfd = accept(algfd, 0, 0);
                  write(reqfd, buf, 16);
                  read(reqfd, buf, 16);
          }
      Reported-by: syzbot <syzkaller@googlegroups.com>
      Fixes: 71ebc4d1 ("crypto: chacha20poly1305 - Add a ChaCha20-Poly1305 AEAD construction, RFC7539")
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      9c36498f