Fourth International Workshop

OMHI

On-chip memory hierarchies and interconnects:
organization, management and implementation


August 2015, Vienna, Austria

to be held in conjunction with Euro-Par 2015

 




IMPORTANT DATES

Paper Submission: June 2, 2015

Paper Notification: June 30, 2015

Early Registration: July 17, 2015

Workshop Date: August 24, 2015

Camera-Ready: October 2, 2015





INFORMATION FOR THE AUTHORS

Submit your paper here.

Accepted papers presented at the workshop have been published in Lecture Notes in Computer Science (LNCS), volume 9523, which is available online.

A best paper award will be given to the best paper (or papers) presented at the workshop, as judged by the Program Chair in collaboration with the Program Committee.




WELCOME

about the workshop

The Fourth International Workshop on On-chip memory hierarchies and interconnects: organization, management and implementation (OMHI 2015) will be held in Vienna, Austria. The workshop is organized in conjunction with Euro-Par, the annual series of international conferences dedicated to the promotion and advancement of all aspects of parallel computing.


Performance of current chip multiprocessors (CMPs), whether consisting only of CPU cores or of heterogeneous processors, is mainly dominated by data access latencies and bandwidth constraints. To alleviate these problems, current multi-core and many-core processors include large amounts of on-chip memory storage, organized either as caches or as main memory. Cores fetch the requested data by traversing an on-chip interconnect, so access latencies depend on both the on-chip memory hierarchy and the interconnect design, and they can grow dramatically as core counts increase. Thus, new cache hierarchy and interconnect organizations for CPUs are required to address this problem.
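
To make the latency argument concrete, the host-side sketch below is a standard pointer-chasing microbenchmark (not tied to any particular platform; the buffer size, seed, and file name are illustrative choices). It issues a long chain of dependent loads over a buffer larger than a typical last-level cache, so each access pays the full latency of whichever level of the on-chip hierarchy, and of the interconnect, serves it:

    // latency_chase.cu -- illustrative pointer-chasing sketch; all sizes are arbitrary.
    #include <algorithm>
    #include <chrono>
    #include <cstdint>
    #include <cstdio>
    #include <numeric>
    #include <random>
    #include <vector>

    int main() {
        // A randomly shuffled index ring defeats hardware prefetching, so each
        // dependent load exposes the raw latency of the level holding the line.
        const size_t n = 1 << 24;                 // 64 MiB of indices: larger than a typical LLC
        std::vector<uint32_t> next(n);
        std::iota(next.begin(), next.end(), 0u);
        std::shuffle(next.begin(), next.end(), std::mt19937(42));

        volatile uint32_t idx = 0;
        auto t0 = std::chrono::steady_clock::now();
        for (size_t i = 0; i < n; ++i)
            idx = next[idx];                      // each load depends on the previous one
        auto t1 = std::chrono::steady_clock::now();

        double ns = std::chrono::duration<double, std::nano>(t1 - t0).count() / n;
        std::printf("average load-to-use latency: %.1f ns\n", ns);
        return 0;
    }

Shrinking n until the buffer fits in each cache level reproduces the familiar latency staircase that motivates the hierarchy and interconnect designs discussed here.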


In contrast, the main problem in GPUs is memory bandwidth rather than latency. Current GPUs are designed to hide memory latencies through fine-grained multithreading. The main goal of GPU on-chip memories is to reduce off-chip memory traffic, so optimizing GPU programs for cache locality is important. As heterogeneous CPU-GPU systems proliferate in the market, the memory system must be designed to support both types of memory organization efficiently.
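
As one illustration of locality-oriented GPU programming, the CUDA sketch below is a textbook shared-memory tiled matrix multiply (the tile size, problem size, and names are arbitrary choices, not recommendations). It stages tiles of the operands in on-chip shared memory so that every value fetched from off-chip DRAM is reused TILE times:

    // tiled_matmul.cu -- illustrative sketch; assumes n is a multiple of TILE.
    #include <cstdio>
    #include <cuda_runtime.h>

    constexpr int TILE = 16;

    // Each thread block stages TILE x TILE tiles of A and B into on-chip shared
    // memory, so each element fetched from off-chip DRAM is reused TILE times.
    __global__ void matmul(const float* A, const float* B, float* C, int n) {
        __shared__ float As[TILE][TILE];
        __shared__ float Bs[TILE][TILE];

        int row = blockIdx.y * TILE + threadIdx.y;
        int col = blockIdx.x * TILE + threadIdx.x;
        float acc = 0.0f;

        for (int t = 0; t < n / TILE; ++t) {
            As[threadIdx.y][threadIdx.x] = A[row * n + t * TILE + threadIdx.x];
            Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * n + col];
            __syncthreads();                       // tile fully staged on chip
            for (int k = 0; k < TILE; ++k)
                acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
            __syncthreads();                       // done with this tile
        }
        C[row * n + col] = acc;
    }

    int main() {
        const int n = 512;
        size_t bytes = (size_t)n * n * sizeof(float);
        float *A, *B, *C;
        cudaMallocManaged(&A, bytes);
        cudaMallocManaged(&B, bytes);
        cudaMallocManaged(&C, bytes);
        for (int i = 0; i < n * n; ++i) { A[i] = 1.0f; B[i] = 2.0f; }

        dim3 block(TILE, TILE), grid(n / TILE, n / TILE);
        matmul<<<grid, block>>>(A, B, C, n);
        cudaDeviceSynchronize();
        std::printf("C[0] = %.1f (expected %.1f)\n", C[0], 2.0f * n);

        cudaFree(A); cudaFree(B); cudaFree(C);
        return 0;
    }

Relative to a naive kernel that reads every operand directly from global memory, the tiling cuts off-chip traffic by roughly a factor of TILE, which is precisely the kind of on-chip-memory effect the workshop targets.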


Regarding technology, current SRAM technologies present important design challenges in terms of density and high leakage currents, making it unlikely that future cache hierarchies will be implemented with SRAM technology alone, especially in the context of multi- and many-core processors. Instead, alternative technologies addressing leakage and density are being explored in large CMPs, which enables the design of alternative on-chip hierarchies. Finally, to take advantage of these complex hierarchies, efficient management is required, including, among others, thread allocation policies, cache management strategies, and NoC design, in both 2D and 3D designs.
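
As a small example of how cache management strategies can be evaluated, the host-side sketch below is a toy set-associative cache model with LRU replacement (the geometry and the traced access pattern are purely hypothetical). It shows a working set slightly larger than the cache thrashing under LRU, exactly the kind of behavior alternative management policies aim to avoid:

    // lru_cache_model.cu -- toy set-associative cache model; parameters are hypothetical.
    #include <cstdint>
    #include <cstdio>
    #include <list>
    #include <vector>

    struct SetAssocCache {
        // Illustrative geometry: 64-byte lines, 8-way, 512 sets (~256 KiB total).
        static constexpr int LINE = 64, WAYS = 8, SETS = 512;
        std::vector<std::list<uint64_t>> sets{SETS};  // front = most recently used tag
        long hits = 0, misses = 0;

        void access(uint64_t addr) {
            uint64_t line = addr / LINE;
            auto& set = sets[line % SETS];
            uint64_t tag = line / SETS;
            for (auto it = set.begin(); it != set.end(); ++it) {
                if (*it == tag) {                 // hit: move tag to the MRU position
                    set.splice(set.begin(), set, it);
                    ++hits;
                    return;
                }
            }
            ++misses;                             // miss: insert at MRU, evict the LRU tag
            set.push_front(tag);
            if ((int)set.size() > WAYS) set.pop_back();
        }
    };

    int main() {
        SetAssocCache cache;
        // Two sweeps over a 1 MiB region at 64-byte stride: the working set
        // (16384 lines) exceeds capacity (4096 lines), so under LRU every line
        // is evicted before its reuse and the second sweep also misses entirely.
        for (int rep = 0; rep < 2; ++rep)
            for (uint64_t a = 0; a < (1 << 20); a += 64) cache.access(a);
        std::printf("hits=%ld misses=%ld\n", cache.hits, cache.misses);
        return 0;
    }

Swapping the replacement or insertion policy in a model like this is a cheap first step before committing a management strategy to RTL or full-system simulation.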


Authors are invited to submit high quality papers representing original work in (but not limited to) the following topics:


  1. On-chip memory hierarchy organizations: homogeneous and heterogeneous technologies, including persistent memories.

  2. On-chip memory management (e.g., prefetching, fairness-aware strategies).

  3. Memory hierarchy aware programming in CMPs and GPUs.

  4. Thread allocation to cores, scheduling, and workload balancing.

  5. Coherence problems in heterogeneous GPUs and tile-based CMPs.

  6. Cache hierarchy-network co-design.

  7. Studies of real applications and programmability issues for both CMPs and GPUs considering the memory subsystem.

  8. Efficient network design with emerging technologies (photonics and wireless).

  9. Tradeoffs among performance, energy and area.

  10. 3D stacked architectures and their usage for memory purposes in CMPs and GPUs.