1. 01 May, 2012 7 commits
    • l2tp: remove unused stats from l2tp_ip socket · c8657fd5
      James Chapman authored
      The l2tp_ip socket currently maintains packet/byte stats in its
      private socket structure, but these counters are never exposed to
      userspace and so serve no purpose. They were also updated without
      SMP protection. This patch simply removes them.
      
      While here, change a couple of internal __u32 variables to u32.
      Signed-off-by: James Chapman <jchapman@katalix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • l2tp: Use ip4_datagram_connect() in l2tp_ip_connect() · de3c7a18
      James Chapman authored
      Clean up the l2tp_ip code to make use of an existing IPv4 support
      function.
      Signed-off-by: James Chapman <jchapman@katalix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
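      A rough sketch of what such a cleanup looks like (hypothetical and
      simplified; the real l2tp_ip_connect() also records the peer's
      connection ID and rehashes the socket): the open-coded route
      lookup and address bookkeeping collapse into a call to the generic
      IPv4 helper.

      static int l2tp_ip_connect(struct sock *sk, struct sockaddr *uaddr,
                                 int addr_len)
      {
              struct sockaddr_l2tpip *lsa = (struct sockaddr_l2tpip *)uaddr;
              int rc;

              if (addr_len < sizeof(*lsa))
                      return -EINVAL;

              /* ip4_datagram_connect() validates the address and does
               * the route lookup, filling in the inet source and
               * destination addresses under the socket lock.
               */
              rc = ip4_datagram_connect(sk, uaddr, addr_len);
              if (rc < 0)
                      return rc;

              /* L2TP-specific state (e.g. the peer connection ID from
               * lsa->l2tp_conn_id) would be recorded here.
               */
              return 0;
      }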
    • l2tp: fix locking of 64-bit counters for smp · 5de7aee5
      James Chapman authored
      L2TP uses 64-bit counters, but since they are not updated
      atomically, they must be made SMP-safe. This patch addresses that.
      Signed-off-by: James Chapman <jchapman@katalix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
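      For context, a standard kernel pattern for SMP-safe 64-bit
      counters is u64_stats_sync; the sketch below uses made-up
      structure and function names and illustrates the problem being
      fixed, not necessarily the exact approach this patch takes. On
      32-bit machines a 64-bit increment is two stores, so readers retry
      on a seqcount until they see a consistent snapshot.

      #include <linux/u64_stats_sync.h>

      struct tunnel_stats {
              u64 tx_packets;
              u64 tx_bytes;
              struct u64_stats_sync syncp;
      };

      /* Writer side: assumes updates for a given tunnel are already
       * serialized (e.g. per-cpu data or the socket lock), so only
       * readers need the seqcount.
       */
      static void tunnel_update_tx(struct tunnel_stats *stats,
                                   unsigned int len)
      {
              u64_stats_update_begin(&stats->syncp);
              stats->tx_packets++;
              stats->tx_bytes += len;
              u64_stats_update_end(&stats->syncp);
      }

      /* Reader side: loop until a consistent 64-bit snapshot is seen. */
      static void tunnel_read_tx(struct tunnel_stats *stats,
                                 u64 *packets, u64 *bytes)
      {
              unsigned int start;

              do {
                      start = u64_stats_fetch_begin(&stats->syncp);
                      *packets = stats->tx_packets;
                      *bytes = stats->tx_bytes;
              } while (u64_stats_fetch_retry(&stats->syncp, start));
      }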
    • net: makes skb_splice_bits() aware of skb->head_frag · 1d0c0b32
      Eric Dumazet authored
      __skb_splice_bits() can check whether the skb to be spliced has its
      skb->head mapped to a page fragment instead of a kmalloc() area.
      
      If so, we can avoid copying the skb head and instead take a
      reference on the underlying page.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: Maciej Żenczykowski <maze@google.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Cc: Tom Herbert <therbert@google.com>
      Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Cc: Ben Hutchings <bhutchings@solarflare.com>
      Cc: Matt Carlson <mcarlson@broadcom.com>
      Cc: Michael Chan <mchan@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
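      A hedged sketch of the core idea, with a hypothetical helper name
      (not the actual patch): when skb->head was carved out of a page
      fragment, the linear data can be handed to the pipe by reference
      rather than copied into a freshly allocated page.

      #include <linux/skbuff.h>
      #include <linux/mm.h>

      /* Return the page backing the skb's linear area, with an extra
       * reference taken for the pipe buffer, or NULL if the head was
       * kmalloc()ed and the caller must fall back to copying.
       */
      static struct page *linear_to_page_ref(struct sk_buff *skb,
                                             unsigned int *poffset)
      {
              struct page *page;

              if (!skb->head_frag)
                      return NULL;

              page = virt_to_head_page(skb->head);
              *poffset = skb->data - (unsigned char *)page_address(page);
              get_page(page);         /* pipe buffer now holds a reference */
              return page;
      }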
    • tcp: makes tcp_try_coalesce aware of skb->head_frag · 329033f6
      Eric Dumazet authored
      TCP coalesce can check whether the skb to be merged has its
      skb->head mapped to a page fragment instead of a kmalloc() area.
      
      Until now, coalescing had to be disabled for such skbs, for
      performance reasons.
      
      We 'upgrade' skb->head into a fragment in itself.
      
      This reduces the number of cache misses when the user copies the
      data, since fewer sk_buffs are fetched.
      
      This makes the receive and ofo queues shorter and thus reduces
      cache line misses in the TCP stack.
      
      This is a followup to the patch "net: allow skb->head to be a page
      fragment".
      
      Tested with a tg3 NIC, with GRO on and off. We can see the
      "TCPRcvCoalesce" counter being incremented.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: Maciej Żenczykowski <maze@google.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Cc: Tom Herbert <therbert@google.com>
      Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Cc: Ben Hutchings <bhutchings@solarflare.com>
      Cc: Matt Carlson <mcarlson@broadcom.com>
      Cc: Michael Chan <mchan@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
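      A hedged sketch of the merge step (hypothetical helper, with
      simplified truesize accounting): if the destination skb has a free
      fragment slot and the source skb's head is a page fragment, append
      that head as one more fragment instead of refusing to coalesce.

      #include <linux/skbuff.h>
      #include <linux/mm.h>

      static bool try_coalesce_head(struct sk_buff *to, struct sk_buff *from)
      {
              unsigned int len = skb_headlen(from);
              unsigned int offset;
              struct page *page;

              if (!from->head_frag ||
                  skb_shinfo(to)->nr_frags + 1 > MAX_SKB_FRAGS)
                      return false;

              page = virt_to_head_page(from->head);
              offset = from->data - (unsigned char *)page_address(page);

              /* Take an extra page reference for 'to'; the caller still
               * frees 'from' normally, dropping its own reference.
               */
              get_page(page);
              skb_fill_page_desc(to, skb_shinfo(to)->nr_frags,
                                 page, offset, len);
              to->len      += len;
              to->data_len += len;
              to->truesize += len;    /* approximate for this sketch */
              return true;
      }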
    • net: make GRO aware of skb->head_frag · d7e8883c
      Eric Dumazet authored
      GRO can check whether the skb to be merged has its skb->head mapped
      to a page fragment instead of a kmalloc() area.
      
      We 'upgrade' skb->head into a fragment in itself.
      
      This avoids the frag_list fallback and permits building a true GRO
      skb (one sk_buff and up to 16 fragments), using less memory.
      
      This reduces the number of cache misses when the user copies the
      data, since a single sk_buff is fetched.
      
      This is a followup to the patch "net: allow skb->head to be a page
      fragment".
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: Maciej Żenczykowski <maze@google.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Cc: Tom Herbert <therbert@google.com>
      Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Cc: Ben Hutchings <bhutchings@solarflare.com>
      Cc: Matt Carlson <mcarlson@broadcom.com>
      Cc: Michael Chan <mchan@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: allow skb->head to be a page fragment · d3836f21
      Eric Dumazet authored
      skb->head is currently allocated from kmalloc(). This is
      convenient, but has the drawback that the data cannot be converted
      to a page fragment when needed.
      
      There are three spots where this hurts:
      
      1) GRO aggregation
      
       When a linear skb must be appended to another skb, GRO uses the
      frag_list fallback, which is very inefficient since we keep every
      struct sk_buff around. So drivers that enable GRO but deliver
      linear skbs to the network stack aren't getting GRO's full power.
      
      2) splice(socket -> pipe)
      
       We must copy the linear part to a page fragment.
       This rather defeats the purpose of splice() (its zero-copy claim).
      
      3) TCP coalescing
      
       Recently introduced, this permits grouping several contiguous
      segments into a single skb. This shortens queue lengths, saves
      kernel memory, and greatly reduces the probability of TCP
      collapses. This coalescing doesn't work on linear skbs (or we
      would need to copy data, which would be too slow).
      
      Given all these issues, this patch introduces the possibility of
      having skb->head be a fragment in itself. We use a new skb flag,
      skb->head_frag, to carry this information.
      
      build_skb() is changed to accept a frag_size argument. Drivers
      willing to provide a page fragment instead of kmalloc() data set it
      to a non-zero value, the fragment size.
      
      Then, in situations where we need to convert the skb head to a frag
      in itself, we can check whether skb->head_frag is set and avoid the
      copies or the various fallbacks we have.
      
      This means drivers currently using frags could be updated to avoid
      the current skb->head allocation and reduce their memory footprint
      (aka skb truesize); that's 512 or 1024 bytes saved per skb. This
      also makes bpf/netfilter faster, since the 'first frag' will be
      part of the skb linear part, with no need to copy data.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: Maciej Żenczykowski <maze@google.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Cc: Tom Herbert <therbert@google.com>
      Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Cc: Ben Hutchings <bhutchings@solarflare.com>
      Cc: Matt Carlson <mcarlson@broadcom.com>
      Cc: Michael Chan <mchan@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
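      A hedged sketch of the driver-side usage described above
      (hypothetical rx helper; the buffer is assumed to have been carved
      out of a page with room reserved for struct skb_shared_info at its
      end, which frag_size must cover): passing a non-zero frag_size to
      build_skb() marks the resulting skb->head as a page fragment, so
      the stack can later take page references on it instead of copying.

      #include <linux/skbuff.h>

      static struct sk_buff *rx_build_skb_from_frag(void *frag,
                                                    unsigned int frag_size,
                                                    unsigned int pkt_len)
      {
              struct sk_buff *skb;

              /* A non-zero frag_size tells build_skb() that 'frag' is
               * part of a page, so it sets skb->head_frag = 1; passing
               * 0 means a kmalloc()ed buffer, as before.
               */
              skb = build_skb(frag, frag_size);
              if (!skb)
                      return NULL;

              /* The page reference is handed over to the skb: freeing
               * the skb will put_page() the fragment instead of
               * kfree()ing the head.
               */
              skb_reserve(skb, NET_SKB_PAD);
              skb_put(skb, pkt_len);
              return skb;
      }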
  2. 30 Apr, 2012 3 commits
  3. 29 Apr, 2012 5 commits
  4. 27 Apr, 2012 4 commits
    • tipc: Reject payload messages with invalid message type · aad58547
      Allan Stephens authored
      Add a check to ensure TIPC sockets reject incoming payload messages
      that have an unrecognized message type.
      
      Remove the old open question about whether TIPC_ERR_NO_PORT is the
      proper return value. It is appropriate here, since there are valid
      instances where another node can make use of the reply, and at this
      point in time the host is already broadcasting TIPC data, so there
      are no real security concerns.
      Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
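      A hedged sketch of such a check (hypothetical filter function; the
      payload message types and error code are existing TIPC constants
      from net/tipc/msg.h and linux/tipc.h): anything outside the known
      payload types is bounced back to the sender.

      /* Return TIPC_OK to accept a payload message, or an error code
       * that causes it to be rejected back to its sender.
       */
      static u32 filter_payload_msg_type(struct tipc_msg *msg)
      {
              switch (msg_type(msg)) {
              case TIPC_CONN_MSG:
              case TIPC_MCAST_MSG:
              case TIPC_NAMED_MSG:
              case TIPC_DIRECT_MSG:
                      return TIPC_OK;
              default:
                      /* Unrecognized type: reject. The sender can make
                       * use of the rejection, and nothing sensitive is
                       * disclosed (see the commit message above).
                       */
                      return TIPC_ERR_NO_PORT;
              }
      }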
    • net: cleanups in sock_setsockopt() · 82981930
      Eric Dumazet authored
      Use the min_t()/max_t() macros, reformat two comments, and use
      !!test_bit() to match !!sock_flag().
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
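      For readers unfamiliar with these helpers, a brief hedged
      illustration of the kind of change meant here (representative
      fragments in the style of sock_setsockopt()/sock_getsockopt(), not
      the actual diff): min_t()/max_t() clamp a value with an explicit
      type, and the double negation normalizes a bit test to 0/1.

      /* Before: open-coded clamping with implicit types.
       *      if (val > sysctl_wmem_max)
       *              val = sysctl_wmem_max;
       * After: the type and the intent are explicit.
       */
      val = min_t(u32, val, sysctl_wmem_max);

      /* SO_SNDBUF may never shrink below its floor. */
      sk->sk_sndbuf = max_t(u32, val * 2, SOCK_MIN_SNDBUF);

      /* Normalize to 0/1, matching the !!sock_flag() style nearby. */
      v.val = !!test_bit(SOCK_PASSCRED, &sock->flags);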
    • crush: include header for global symbols · feb50ac1
      hartleys authored
      Include the header to pick up the definitions of the global
      symbols.
      
      This quiets the following sparse warnings:
      
      warning: symbol 'crush_find_rule' was not declared. Should it be static?
      warning: symbol 'crush_do_rule' was not declared. Should it be static?
      Signed-off-by: H Hartley Sweeten <hsweeten@visionengravers.com>
      Cc: Sage Weil <sage@newdream.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
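      The general pattern, as a small hedged illustration with generic
      file names (not the crush sources): sparse warns whenever a .c
      file defines an external symbol without having seen a prototype
      for it, and including the symbol's own header resolves that.

      /* foo.h - declares the intentionally global symbol */
      int foo_do_rule(int x);

      /* foo.c - without the #include below, sparse reports:
       *   warning: symbol 'foo_do_rule' was not declared. Should it be static?
       */
      #include "foo.h"

      int foo_do_rule(int x)
      {
              return 2 * x;
      }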
    • ipv6: RTAX_FEATURE_ALLFRAG causes inefficient TCP segment sizing · 67469601
      Eric Dumazet authored
      Quoting Tore Anderson from
      https://bugzilla.kernel.org/show_bug.cgi?id=42572 :
      
      When RTAX_FEATURE_ALLFRAG is set on a route, the effective TCP
      segment size does not take into account the size of the IPv6
      Fragmentation header that needs to be included in outbound packets,
      causing every transmitted TCP segment to be fragmented across two
      IPv6 packets, the latter of which contains only 8 bytes of actual
      payload.
      
      RTAX_FEATURE_ALLFRAG is typically set on a route in response to
      receiving an ICMPv6 Packet Too Big message indicating a Path MTU of
      less than 1280 bytes. 1280 bytes is the minimum IPv6 MTU; however,
      ICMPv6 PTBs with MTU < 1280 are still valid, in particular when an
      IPv6 packet is sent to an IPv4 destination through a stateless
      translator. Any ICMPv4 Need To Fragment packets originating from
      the IPv4 part of the path will be translated to ICMPv6 PTBs, which
      may then indicate an MTU of less than 1280.
      
      The Linux kernel refuses to reduce the effective MTU to anything
      below 1280 bytes; instead it sets it to exactly 1280 bytes, and
      RTAX_FEATURE_ALLFRAG is also set. However, the TCP segment size
      appears to be set to 1240 bytes (1280 Path MTU - 40 bytes of IPv6
      header), instead of 1232 (additionally taking into account the 8
      bytes required by the IPv6 Fragmentation extension header).
      
      This in turn results in rather inefficient transmission, as every
      transmitted TCP segment is now split into two fragments containing
      1232+8 bytes of payload.
      
      After this patch, all outgoing packets that include a Fragmentation
      header are "atomic" or "non-fragmented" fragments, i.e., they have
      both Offset=0 and More Fragments=0.
      
      With help from David S. Miller.
      Reported-by: Tore Anderson <tore@fud.no>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Maciej Żenczykowski <maze@google.com>
      Cc: Tom Herbert <therbert@google.com>
      Tested-by: Tore Anderson <tore@fud.no>
      Signed-off-by: David S. Miller <davem@davemloft.net>
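      The arithmetic at the heart of the bug, as a hedged sketch
      (hypothetical helper; dst_allfrag() and struct frag_hdr are real
      kernel names): when every packet on the route must carry a
      Fragmentation header, its 8 bytes must come out of the segment
      size as well.

      #include <net/dst.h>    /* dst_allfrag() tests RTAX_FEATURE_ALLFRAG */
      #include <net/ipv6.h>   /* struct frag_hdr (8 bytes) */

      static unsigned int ipv6_tcp_seg_space(const struct dst_entry *dst,
                                             unsigned int mtu)
      {
              /* Space left for the TCP segment (header + payload) once
               * the fixed IPv6 header is accounted for:
               * 1280 - 40 = 1240.
               */
              unsigned int seg = mtu - sizeof(struct ipv6hdr);

              /* With RTAX_FEATURE_ALLFRAG, every packet also carries
               * the 8-byte Fragmentation extension header, leaving
               * 1280 - 40 - 8 = 1232. Omitting this subtraction is what
               * split every segment across two IPv6 packets.
               */
              if (dst_allfrag(dst))
                      seg -= sizeof(struct frag_hdr);

              return seg;
      }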
  5. 26 Apr, 2012 13 commits
  6. 25 Apr, 2012 1 commit
  7. 24 Apr, 2012 7 commits