    drm: Preserve SHMLBA bits in hash key for _DRM_SHM mappings. · f1a2a9b6
    David Miller authored
    
    
    Platforms such as sparc64 have D-cache aliasing issues.  We
    cannot allow virtual mappings in different contexts to be placed
    such that two different cache lines can hold the same backing
    data.  Updates made through one cache line won't be seen by
    accesses through the other cache line.
    
    Code in sparc64 and other architectures solves this problem by
    making sure that all userland mappings of MAP_SHARED objects have
    the same virtual address base.  They implement this by keying
    off of the page offset and using it to choose a suitably
    consistent virtual address for mmap() requests.
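    
    As a rough illustration of that address-selection rule (a hedged
    sketch, not the actual sparc64 code; SHMLBA_EXAMPLE, PAGE_SHIFT_EX
    and colour_align() are made-up names):
    
        /* Align a candidate mmap() address so its low SHMLBA bits match
         * the cache colour implied by the requested page offset.  The
         * constants below are assumptions, not real sparc64 values.
         */
        #define SHMLBA_EXAMPLE  (16UL * 1024)   /* assumed D-cache colour span */
        #define PAGE_SHIFT_EX   12              /* assumed 4K pages */
    
        static unsigned long colour_align(unsigned long addr, unsigned long pgoff)
        {
                unsigned long base = (addr + SHMLBA_EXAMPLE - 1) & ~(SHMLBA_EXAMPLE - 1);
                unsigned long off  = (pgoff << PAGE_SHIFT_EX) & (SHMLBA_EXAMPLE - 1);
    
                /* Every mapping of the same pgoff now shares addr % SHMLBA. */
                return base + off;
        }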
    
    Making things even worse, getting this wrong on sparc64 can result
    in hangs during DRM lock acquisition.  This is because, at least on
    UltraSPARC-III, normal loads consult the D-cache but atomics such
    as 'cas' (which is what cmpxchg() is implemented using) only consult
    the L2 cache.  So if a D-cache alias is inserted, the load can
    see different data than the atomic, and we'll loop forever because
    the atomic compare-and-exchange will never complete successfully.
    
    So to make this all work properly, we need to make sure that the
    hash key computed by drm_map_handle() preserves the SHMLBA-relevant
    bits, and that's what this patch does for _DRM_SHM mappings.
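    
    A minimal sketch of that idea (hedged, not the exact
    drm_map_handle() change; hash_token() and shm_map_key() are made-up
    names, reusing the illustrative SHMLBA_EXAMPLE and PAGE_SHIFT_EX
    from the sketch above):
    
        /* Illustrative stand-in hash, not the DRM hashtab code. */
        static unsigned long hash_token(unsigned long token)
        {
                return token * 0x9e3779b97f4a7c15ULL;
        }
    
        static unsigned long shm_map_key(unsigned long user_token, int is_shm)
        {
                unsigned long key = hash_token(user_token) >> PAGE_SHIFT_EX;
    
                if (is_shm && SHMLBA_EXAMPLE > (1UL << PAGE_SHIFT_EX)) {
                        unsigned long mask = (SHMLBA_EXAMPLE >> PAGE_SHIFT_EX) - 1;
    
                        /* Force the key's low colour bits to match the
                         * token's, so the pgoff handed back to mmap()
                         * still carries the colour information. */
                        key = (key & ~mask) | ((user_token >> PAGE_SHIFT_EX) & mask);
                }
                return key;
        }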
    
    As a historical note, many years ago this bug didn't exist, because
    we simply used the low 32 bits of the address as the hash and hoped
    for the best.  That preserved the SHMLBA bits properly.  But when
    the hashtab code was added to DRM, this was no longer the case.
    
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Dave Airlie <airlied@redhat.com>