1. 30 May, 2018 1 commit
  2. 28 Jun, 2017 1 commit
  3. 20 Jun, 2017 2 commits
  4. 02 May, 2017 1 commit
  5. 13 Feb, 2017 2 commits
  6. 20 Jan, 2017 1 commit
  7. 06 Jan, 2017 1 commit
  8. 19 Dec, 2016 1 commit
  9. 10 Nov, 2016 1 commit
  10. 07 Nov, 2016 2 commits
    • swiotlb: Add support for DMA_ATTR_SKIP_CPU_SYNC · 0443fa00
      Alexander Duyck authored
      As a first step toward making DMA_ATTR_SKIP_CPU_SYNC apply to architectures
      beyond just ARM, I need to make the swiotlb respect the flag.  In order to
      do that I also need to update swiotlb-xen, since it makes heavy use of this
      functionality.
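
      For context, a minimal driver-side sketch of what the attribute means
      (illustrative only, not part of this patch; the helper name is made up):

          #include <linux/dma-mapping.h>

          /* Map a receive buffer without the implicit CPU cache sync, then
           * hand ownership to the device explicitly before DMA starts. */
          static int example_map_rx_buffer(struct device *dev, void *buf,
                                           size_t len, dma_addr_t *handle)
          {
              *handle = dma_map_single_attrs(dev, buf, len, DMA_FROM_DEVICE,
                                             DMA_ATTR_SKIP_CPU_SYNC);
              if (dma_mapping_error(dev, *handle))
                  return -ENOMEM;

              dma_sync_single_for_device(dev, *handle, len, DMA_FROM_DEVICE);
              return 0;
          }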
      
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad@kernel.org>
    • swiotlb-xen: Enforce return of DMA_ERROR_CODE in mapping function · 76418421
      Alexander Duyck authored
      The mapping function should always return DMA_ERROR_CODE when a mapping has
      failed as this is what the DMA API expects when a DMA error has occurred.
      The current function for mapping a page in Xen was returning either
      DMA_ERROR_CODE or 0 depending on where it failed.
      
      On x86 DMA_ERROR_CODE is 0, but on other architectures such as ARM it is
      ~0. We need to make sure we return the same error value if either the
      mapping failed or the device is not capable of accessing the mapping.
      
      If we are returning DMA_ERROR_CODE as our error value, we can drop the
      function for checking the error code, as the default is to compare the
      return value against DMA_ERROR_CODE when no such function is defined.
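
      As an aside, a hedged caller-side sketch of why this matters (illustrative,
      not from the patch): with no ->mapping_error op defined, dma_mapping_error()
      falls back to comparing the handle against DMA_ERROR_CODE, so a mapper that
      returns a bare 0 on failure would go unnoticed on ARM.

          #include <linux/dma-mapping.h>

          /* Illustrative helper name; error detection here relies on the
           * mapper returning DMA_ERROR_CODE (0 on x86, ~0 on ARM) on failure. */
          static int example_map(struct device *dev, struct page *page,
                                 size_t size, dma_addr_t *handle)
          {
              *handle = dma_map_page(dev, page, 0, size, DMA_TO_DEVICE);
              if (dma_mapping_error(dev, *handle))
                  return -EIO;
              return 0;
          }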
      
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad@kernel.org>
  11. 04 Aug, 2016 1 commit
    • dma-mapping: use unsigned long for dma_attrs · 00085f1e
      Krzysztof Kozlowski authored
      The dma-mapping core and the implementations do not change the DMA
      attributes passed by pointer.  Thus the pointer can point to const data.
      However, the attributes do not have to be a bitfield.  Instead, a plain
      unsigned long will do fine:
      
      1. This is just simpler, both in terms of reading the code and setting
         attributes.  Instead of initializing local attributes on the stack
         and passing a pointer to them to dma_set_attr(), just set the bits.
      
      2. It brings safety and, in effect, const correctness, because the
         attributes are passed by value.
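
      A small before/after sketch of the resulting API (the attribute and the
      allocation size chosen here are illustrative):

          #include <linux/dma-mapping.h>
          #include <linux/sizes.h>

          static void *example_alloc(struct device *dev, dma_addr_t *handle)
          {
              /* Old style (removed by this patch):
               *     DEFINE_DMA_ATTRS(attrs);
               *     dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &attrs);
               *     return dma_alloc_attrs(dev, SZ_4K, handle, GFP_KERNEL, &attrs);
               */

              /* New style: attributes are just bits in an unsigned long. */
              return dma_alloc_attrs(dev, SZ_4K, handle, GFP_KERNEL,
                                     DMA_ATTR_NO_KERNEL_MAPPING);
          }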
      
      Semantic patches for this change (at least most of them):
      
          virtual patch
          virtual context
      
          @r@
          identifier f, attrs;
      
          @@
          f(...,
          - struct dma_attrs *attrs
          + unsigned long attrs
          , ...)
          {
          ...
          }
      
          @@
          identifier r.f;
          @@
          f(...,
          - NULL
          + 0
           )
      
      and
      
          // Options: --all-includes
          virtual patch
          virtual context
      
          @r@
          identifier f, attrs;
          type t;
      
          @@
          t f(..., struct dma_attrs *attrs);
      
          @@
          identifier r.f;
          @@
          f(...,
          - NULL
          + 0
           )
      
      Link: http://lkml.kernel.org/r/1468399300-5399-2-git-send-email-k.kozlowski@samsung.com
      Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
      Acked-by: Vineet Gupta <vgupta@synopsys.com>
      Acked-by: Robin Murphy <robin.murphy@arm.com>
      Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no>
      Acked-by: Mark Salter <msalter@redhat.com> [c6x]
      Acked-by: Jesper Nilsson <jesper.nilsson@axis.com> [cris]
      Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> [drm]
      Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
      Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
      Acked-by: Fabien Dessenne <fabien.dessenne@st.com> [bdisp]
      Reviewed-by: Marek Szyprowski <m.szyprowski@samsung.com> [vb2-core]
      Acked-by: David Vrabel <david.vrabel@citrix.com> [xen]
      Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> [xen swiotlb]
      Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
      Acked-by: Richard Kuo <rkuo@codeaurora.org> [hexagon]
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
      Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [s390]
      Acked-by: Bjorn Andersson <bjorn.andersson@linaro.org>
      Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no> [avr32]
      Acked-by: Vineet Gupta <vgupta@synopsys.com> [arc]
      Acked-by: Robin Murphy <robin.murphy@arm.com> [arm64 and dma-iommu]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  12. 23 Oct, 2015 2 commits
  13. 10 Sep, 2015 1 commit
    • dma-mapping: consolidate dma_{alloc,free}_{attrs,coherent} · 6894258e
      Christoph Hellwig authored
      Since 2009 we have a nice asm-generic header implementing lots of DMA API
      functions for architectures using struct dma_map_ops, but unfortunately
      it's still missing a lot of APIs that all architectures still have to
      duplicate.
      
      This series consolidates the remaining functions, although we still need
      arch opt outs for two of them as a few architectures have very
      non-standard implementations.
      
      This patch (of 5):
      
      The coherent DMA allocator works the same over all architectures supporting
      dma_map operations.
      
      This patch consolidates them and converges the minor differences:
      
       - the debug_dma helpers are now called from all architectures, including
         those that were previously missing them
        - dma_alloc_from_coherent and dma_release_from_coherent are now always
          called from the generic alloc/free routines instead of the ops;
          dma-mapping-common.h always includes dma-coherent.h to get the
          definitions for them, or the stubs if the architecture doesn't
          support this feature
        - checks for ->alloc / ->free presence are removed.  There is only one
          instance of dma_map_ops without them (mic_dma_ops) and that one is
          x86 only anyway.
      
      Besides that, only x86 needs special treatment to substitute a default
      device if none is passed and to tweak the gfp_flags.  An optional arch
      hook is provided for that.
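
      A condensed sketch of what the consolidated allocator looks like after
      this change (error paths and the arch hook are simplified here; treat the
      exact body as an approximation, not the in-tree code):

          static inline void *dma_alloc_attrs(struct device *dev, size_t size,
                                              dma_addr_t *dma_handle, gfp_t flag,
                                              struct dma_attrs *attrs)
          {
              struct dma_map_ops *ops = get_dma_ops(dev);
              void *cpu_addr;

              /* Per-device coherent pools are now tried first on every arch. */
              if (dma_alloc_from_coherent(dev, size, dma_handle, &cpu_addr))
                  return cpu_addr;

              /* Optional arch hook: x86 substitutes a default device and
               * tweaks the gfp flags; everyone else just returns true. */
              if (!arch_dma_alloc_attrs(&dev, &flag))
                  return NULL;

              cpu_addr = ops->alloc(dev, size, dma_handle, flag, attrs);
              /* debug_dma bookkeeping now happens here for all architectures. */
              debug_dma_alloc_coherent(dev, size, *dma_handle, cpu_addr);
              return cpu_addr;
          }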
      
      [linux@roeck-us.net: fix build]
      [jcmvbkbc@gmail.com: fix xtensa]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
      Signed-off-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  14. 08 Sep, 2015 1 commit
    • xen: Make clear that swiotlb and biomerge are dealing with DMA address · 32e09870
      Julien Grall authored
      The swiotlb is required when programming a DMA address on ARM when a
      device is not protected by an IOMMU.
      
      In this case, the DMA address should always be equal to the machine
      address.  For DOM0 memory, Xen ensures this by keeping an identity mapping
      between the guest address and the host address.  However, when mapping a
      foreign grant reference, the 1:1 model doesn't work.
      
      For ARM guests, most of the callers of pfn_to_mfn expect to get a GFN
      (Guest Frame Number), i.e. a PFN (Page Frame Number) from the Linux point
      of view, given that all ARM guests are auto-translated.
      
      Even though the name pfn_to_mfn is misleading, we need to ensure that
      those callers get a GFN and not, by mistake, an MFN.  In practice I
      haven't seen errors related to this, but we should fix it for the sake
      of correctness.
      
      In order to fix the implementation of pfn_to_mfn on ARM in a follow-up
      patch, we have to introduce new helpers to return the DMA address for a
      PFN and the inverse.
      
      On x86, the new helpers will be aliases of pfn_to_mfn and mfn_to_pfn.
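
      On x86 the aliases could look like the following sketch (the helper names
      pfn_to_bfn/bfn_to_pfn are an assumption; they are not spelled out in this
      excerpt):

          /* x86: the "bus frame number" is simply the machine frame number. */
          #define pfn_to_bfn(pfn)    pfn_to_mfn(pfn)
          #define bfn_to_pfn(bfn)    mfn_to_pfn(bfn)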
      
      The helpers will be used in swiotlb and xen_biovec_phys_mergeable.
      
      This is necessary in the latter because we have to ensure that the
      biovec code will not try to merge a biovec using a foreign page with
      another using Linux memory.
      
      Lastly, the helper mfn_to_local_pfn has been renamed to bfn_to_local_pfn
      given that the only usage was in swiotlb.
      Signed-off-by: Julien Grall <julien.grall@citrix.com>
      Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
  15. 06 May, 2015 1 commit
  16. 11 Dec, 2014 1 commit
  17. 10 Dec, 2014 1 commit
  18. 04 Dec, 2014 6 commits
  19. 30 Jan, 2014 1 commit
    • xen: swiotlb: handle sizeof(dma_addr_t) != sizeof(phys_addr_t) · e17b2f11
      Ian Campbell authored
      The use of phys_to_machine and machine_to_phys in the phys<=>bus
      conversions causes us to lose the top bits of the DMA address if the size
      of a DMA address is not the same as the size of the physical address.
      
      This can happen in practice on ARM where foreign pages can be above 4GB even
      though the local kernel does not have LPAE page tables enabled (which is
      totally reasonable if the guest does not itself have >4GB of RAM). In this
      case the kernel still maps the foreign pages at a phys addr below 4G (as it
      must) but the resulting DMA address (returned by the grant map operation) is
      much higher.
      
      This is analogous to a hardware device which has its view of RAM mapped up
      high for some reason.
      
      This patch makes I/O to foreign pages (specifically blkif) work on 32-bit ARM
      systems with more than 4GB of RAM.
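
      A sketch of the fixed phys-to-bus conversion described above (simplified;
      treat it as an approximation of the patch): the machine frame number is
      widened to dma_addr_t before shifting, so the top bits survive when
      dma_addr_t is wider than phys_addr_t.

          static inline dma_addr_t xen_phys_to_bus(phys_addr_t paddr)
          {
              unsigned long mfn = pfn_to_mfn(PFN_DOWN(paddr));
              /* Widen before shifting; a phys_addr_t-sized shift would lose
               * the top bits of the foreign page's DMA address. */
              dma_addr_t dma = (dma_addr_t)mfn << PAGE_SHIFT;

              dma |= paddr & ~PAGE_MASK;
              return dma;
          }
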
      Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  20. 15 Nov, 2013 1 commit
  21. 29 Oct, 2013 1 commit
  22. 25 Oct, 2013 3 commits
  23. 24 Oct, 2013 1 commit
  24. 10 Oct, 2013 1 commit
    • swiotlb-xen: use xen_alloc/free_coherent_pages · 1b65c4e5
      Stefano Stabellini authored
      Use xen_alloc_coherent_pages and xen_free_coherent_pages to allocate or
      free coherent pages.
      
      We need to be careful handling the pointer returned by
      xen_alloc_coherent_pages, because on ARM the pointer is not equal to
      phys_to_virt(*dma_handle).  In fact virt_to_phys only works for kernel
      direct-mapped RAM.  In the ARM case the pointer could be an ioremap
      address, so passing it to virt_to_phys would give you a physical address
      that doesn't correspond to it.
      
      Make xen_create_contiguous_region take a phys_addr_t as start parameter to
      avoid the virt_to_phys calls which would be incorrect.
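
      A minimal sketch of the allocation rule above (illustrative only; the
      helper name is made up and the surrounding swiotlb-xen code is omitted):

          static void *example_alloc_coherent(struct device *hwdev, size_t size,
                                              dma_addr_t *dma_handle, gfp_t flags,
                                              struct dma_attrs *attrs)
          {
              /* Keep the pointer the allocator hands back ... */
              void *vaddr = xen_alloc_coherent_pages(hwdev, size, dma_handle,
                                                     flags, attrs);

              /* ... and do NOT reconstruct it from the DMA handle:
               * phys_to_virt() is only valid for direct-mapped RAM, while on
               * ARM vaddr may be an ioremap address. */
              return vaddr;
          }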
      
      Changes in v6:
      - remove extra spaces.
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  25. 09 Oct, 2013 1 commit
  26. 10 Oct, 2013 1 commit
    • xen/arm,arm64: enable SWIOTLB_XEN · 83862ccf
      Stefano Stabellini authored
      Xen on arm and arm64 needs SWIOTLB_XEN: when running on Xen we need to
      program the hardware with mfns rather than pfns for dma addresses.
      Remove SWIOTLB_XEN dependency on X86 and PCI and make XEN select
      SWIOTLB_XEN on arm and arm64.
      
      At the moment we always rely on swiotlb-xen, but when Xen starts
      supporting hardware IOMMUs we'll be able to avoid it, conditional on the
      presence of an IOMMU on the platform.
      
      Implement xen_create_contiguous_region on arm and arm64: for the moment
      we assume that dom0 has been mapped 1:1 (physical addresses == machine
      addresses), therefore we don't need to call XENMEM_exchange.  Simply
      return the physical address as the dma address.
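
      Under that assumption the ARM implementation can be sketched roughly as
      follows (the prototype here follows the phys_addr_t form mentioned in the
      swiotlb-xen commit listed above; details of the in-tree version may
      differ):

          int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
                                           unsigned int address_bits,
                                           dma_addr_t *dma_handle)
          {
              /* Dom0 is mapped 1:1, so the physical address already is the
               * machine (DMA) address; no XENMEM_exchange is needed. */
              *dma_handle = pstart;
              return 0;
          }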
      
      Initialize the xen-swiotlb from xen_early_init (before the native
      dma_ops are initialized) and set xen_dma_ops to &xen_swiotlb_dma_ops.
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      
      
      Changes in v8:
      - assume dom0 is mapped 1:1, no need to call XENMEM_exchange.
      
      Changes in v7:
      - call __set_phys_to_machine_multi from xen_create_contiguous_region and
      xen_destroy_contiguous_region to update the P2M;
      - don't call XENMEM_unpin, it has been removed;
      - call XENMEM_exchange instead of XENMEM_exchange_and_pin;
      - set nr_exchanged to 0 before calling the hypercall.
      
      Changes in v6:
      - introduce and export xen_dma_ops;
      - call xen_mm_init as an arch_initcall.
      
      Changes in v4:
      - remove redefinition of DMA_ERROR_CODE;
      - update the code to use XENMEM_exchange_and_pin and XENMEM_unpin;
      - add a note about hardware IOMMU in the commit message.
      
      Changes in v3:
      - code style changes;
      - warn on XENMEM_put_dma_buf failures.
  27. 09 Oct, 2013 1 commit
  28. 02 Oct, 2013 1 commit
    • tracing/events: Add bounce tracing to swiotlb · 2b2b614d
      Zoltan Kiss authored
      Ftrace is currently not able to detect when SWIOTLB has to do double buffering.
      Under Xen you can only see it indirectly in function_graph, when
      xen_swiotlb_map_page() doesn't stop after range_straddles_page_boundary(), but
      calls spinlock functions, memcpy() and xen_phys_to_bus() as well. This patch
      introduces the swiotlb:swiotlb_bounced event, which also prints out the
      following information to help you find out why the bouncing happened:
      
      dev_name: 0000:08:00.0 dma_mask=ffffffffffffffff dev_addr=9149f000 size=32768
      swiotlb_force=0
      
      If you use Xen, and (dev_addr + size + 1) > dma_mask, the buffer is out of the
      device's DMA range. If swiotlb_force == 1, you should really change the kernel
      parameters. Otherwise, the buffer is not contiguous in mfn space.
      Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
      [v1: Don't print 'swiotlb_force=X', just print swiotlb_force if it is enabled]
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  29. 09 Aug, 2013 1 commit