Third International Workshop

OMHI

On-chip memory hierarchies and interconnects: organization, management and implementation


August 2014, Porto, Portugal

to be held in conjunction with Euro-Par 2014

 




IMPORTANT DATES

Paper Submission (extended): June 9, 2014

Paper Notification: July 9, 2014

Early Registration: July 25, 2014

Camera-Ready: October 3, 2014





INFORMATION FOR THE AUTHORS

Accepted papers presented at the workshop have been published in Lecture Notes in Computer Science (LNCS), volume 8806.

A best paper award will be given to the paper (or papers) presented at the workshop, as judged by the Program Chair in collaboration with the Program Committee.




WELCOME

about the workshop

The Third International Workshop on On-chip memory hierarchies and interconnects: organization, management and implementation (OMHI 2014) will be held in Porto, Portugal. The workshop is organized in conjunction with Euro-Par, the annual series of international conferences dedicated to the promotion and advancement of all aspects of parallel computing.


Performance of current chip multiprocessors (CMPs), whether consisting only of CPU cores or of heterogeneous CPU/GPGPU processors, is mainly dominated by data access latencies and bandwidth constraints. To alleviate this problem, current multi-core and many-core processors include large amounts of on-chip memory storage, organized either as caches or as main memory. Cores fetch the requested data by traversing an on-chip interconnect, so access latencies are largely determined by the on-chip memory hierarchy and the interconnect design, and they can grow dramatically as the core count increases. New cache hierarchy and interconnect organizations are therefore required to address this problem.
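To make this scaling argument concrete, the short Python sketch below (purely illustrative, not part of the call) estimates how zero-load latency grows in a 2D mesh network-on-chip as the core count increases; the per-hop router and link delays are hypothetical placeholders.

import itertools

def average_hops(k: int) -> float:
    """Average Manhattan distance between all ordered pairs of tiles in a k x k mesh."""
    tiles = list(itertools.product(range(k), range(k)))
    total = sum(abs(ax - bx) + abs(ay - by)
                for (ax, ay), (bx, by) in itertools.product(tiles, tiles))
    return total / len(tiles) ** 2

ROUTER_DELAY_CYCLES = 3  # hypothetical per-hop router pipeline delay
LINK_DELAY_CYCLES = 1    # hypothetical per-hop link traversal delay

for k in (2, 4, 8, 16):  # 4, 16, 64 and 256 cores
    hops = average_hops(k)
    cycles = hops * (ROUTER_DELAY_CYCLES + LINK_DELAY_CYCLES)
    print(f"{k * k:4d} cores: {hops:5.2f} average hops, ~{cycles:5.1f} cycles zero-load latency")

The average hop count in a k x k mesh grows roughly as 2k/3, i.e. with the square root of the number of cores, which is one concrete way in which interconnect latency worsens at higher core counts.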

On-chip cache hierarchies have typically been built with Static Random Access Memory (SRAM), the fastest existing electronic memory technology. However, SRAM presents important design challenges in terms of density and leakage currents, which makes it unlikely that future cache hierarchies will be implemented with SRAM alone, especially in multi- and many-core processors. Instead, alternative technologies that address leakage and density, such as eDRAM or STT-RAM, are being explored for large CMPs, enabling the design of alternative on-chip hierarchies. Finally, taking advantage of these complex hierarchies requires efficient management, including, among others, thread allocation policies, cache management strategies, and network-on-chip (NoC) design, in both 2D and 3D implementations.
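As a back-of-envelope illustration of the density/leakage trade-off mentioned above, the following Python sketch compares SRAM, eDRAM and STT-RAM last-level caches under hypothetical, order-of-magnitude relative figures; the numbers are placeholders chosen for illustration, not taken from the call or from any specific process technology.

# Hypothetical relative technology parameters, for illustration only.
technologies = {
    "SRAM":    {"cell_area": 1.00, "leakage_per_bit": 1.00, "access_latency": 1.0},
    "eDRAM":   {"cell_area": 0.30, "leakage_per_bit": 0.10, "access_latency": 2.0},
    "STT-RAM": {"cell_area": 0.25, "leakage_per_bit": 0.02, "access_latency": 3.0},
}

AREA_BUDGET_SRAM_MB = 8  # hypothetical: on-chip area that would hold 8 MB of SRAM

for name, t in technologies.items():
    capacity_mb = AREA_BUDGET_SRAM_MB / t["cell_area"]
    # Static power scales with capacity x per-bit leakage (very rough model).
    relative_static_power = capacity_mb * t["leakage_per_bit"] / AREA_BUDGET_SRAM_MB
    print(f"{name:8s}: ~{capacity_mb:5.1f} MB in the same area, "
          f"relative static power {relative_static_power:4.2f}, "
          f"relative access latency {t['access_latency']:.1f}x")

Even under such rough assumptions, denser and lower-leakage cells provide substantially more capacity per unit area at the cost of higher access latency, which is the kind of trade-off that motivates the alternative hierarchies and management policies discussed above.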

This workshop will provide a forum for engineers and scientists to address these challenges and to present new ideas for on-chip memory hierarchies and interconnects, focusing on organization, management and implementation.


Authors are invited to submit high-quality papers representing original work on (but not limited to) the following topics:


  1. On-chip memory hierarchy organizations: homogeneous and heterogeneous technologies, including persistent memories.

  2. On-chip memory management: prefetching, replacement algorithms, data replication and promotion.

  3. Thread allocation to cores, scheduling, workload balancing and programming.

  4. Coherence problems in heterogeneous GPGPUs as well as in tile-based CMPs.

  5. Cache hierarchy/coherence protocol/network co-design.

  6. Cache-aware performance studies for real applications and programmability issues.

  7. Efficient network design with emerging technologies (photonics and wireless).

  8. Power and energy management.

  9. Tradeoffs among performance, energy and area.

  10. Moving data between on-chip and off-chip memories.

  11. 3D-stacked memory organizations.