The computing landscape is undergoing rapid and ongoing change, driven by the relentless need to improve energy efficiency, memory capacity, and computational throughput at every level of the architectural hierarchy. The mounting volume of data that today's systems must organize poses new challenges that can no longer be solved with classical, homogeneous designs. Advances in all of these areas have made heterogeneous systems the norm rather than the exception.

Heterogeneous computing leverages a diverse set of compute units (CPU, GPU, FPGA, TPU, DPU, etc.), memory technologies (HBM, persistent memory, coherent PCIe protocols, etc.), and hierarchical system designs to accelerate a wide range of applications. Emerging and established areas such as AI, Big Data, Cloud Computing, Edge Computing, Real-Time Systems, and High-Performance Computing have seen real benefits from heterogeneous computer architectures. In addition, a new wave of accelerators based on dataflow architectures, rather than the traditional von Neumann model, is sure to bring additional challenges and opportunities.

These new heterogeneous architectures often also require new applications and programming models to fully utilize their capabilities. This workshop focuses on understanding the implications of heterogeneous designs at all levels of the computing system stack: hardware, compiler optimizations, application porting, and programming environments for current and emerging systems in all of the areas mentioned above. It seeks to ground heterogeneous system design research in studies of application kernels and/or whole applications, and to shed light on new tools, libraries, and runtime systems that improve the performance and productivity of applications on heterogeneous systems.

The goal of this workshop is to bring together researchers and practitioners at the forefront of heterogeneous computing to discuss the opportunities and challenges in future heterogeneous system design, and thereby help shape the next trends in this area.
Topics of interest include (but are not limited to):

- Applications for GPU-based systems and hybrid/heterogeneous systems
- Techniques for optimizing kernels for execution on GPUs, FPGAs, TPUs, DPUs, and emerging heterogeneous platforms
- Strategies for programming heterogeneous systems using high-level models such as OpenMP, OpenACC, SYCL, oneAPI, Kokkos, and RAJA, and low-level models such as OpenCL and CUDA
- Methods and tools to tackle challenges arising from heterogeneity in AI/ML/DL, Big Data, Cloud Computing, Edge Computing, Real-Time Systems, and High-Performance Computing
- Strategies for application behavior characterization and performance optimization for accelerators
- Models of application performance on heterogeneous and accelerated HPC systems
- Compiler optimizations and tuning for heterogeneous systems, including parallelization, loop transformations, locality optimizations, and vectorization
- Implications of workload characterization for heterogeneous and accelerated architecture design
- Benchmarking and performance evaluation of heterogeneous systems at all levels of the system stack
- Tools and techniques that address both performance and correctness to assist application development for accelerators and heterogeneous processors
- System software techniques to abstract application domain-specific functionalities for accelerators
- Innovative uses of heterogeneous computing in AI for science, and optimizations for AI
- Design and use of domain-specific functionalities on accelerators
Paper Tracks:
There are two paper tracks available for AsHES’24:

1) Full paper track: 8–10 pages, including citations.
2) Short paper track: maximum of 4 pages, including citations; meant to highlight early investigations of innovative ideas.
Submitted papers will undergo a single-blind review process, so authors do not need to anonymize their submissions.