Computer Architecture Research
Stay current with breakthrough research and emerging trends. Explore cutting-edge papers from top-tier conferences and understand their practical implications.
2025
HipKittens: Fast and Furious AMD Kernels
HipKittens is a C++ embedded domain-specific language that provides tile-based programming primitives for high-performance AI kernel development on AMD GPUs. The framework introduces novel scheduling patterns (8-wave ping-pong and 4-wave interleave), explicit register management, and chiplet-aware cache optimization to achieve performance competitive with or exceeding hand-optimized assembly kernels across diverse AI workloads.
Impact: HipKittens addresses the critical software gap limiting AMD GPU adoption in AI workloads, often called the 'CUDA moat.' By providing accessible C++ programming primitives, it enables developers to write high-performance AMD kernels without resorting to raw assembly. The framework achieves 1.2-10× speedups over existing baselines in various settings and matches AMD's hand-optimized assembly kernels across key operations like GEMM and attention. This work is particularly impactful for democratizing AI hardware access, as AMD MI355X GPUs offer competitive or superior specifications to NVIDIA alternatives (2.5 PFLOPs BF16, 8 TB/s bandwidth, 288 GB memory). The open-source release enables the AI community to leverage diverse hardware platforms, potentially breaking vendor lock-in and accelerating AI development through increased compute availability.
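To make the tile-based programming model concrete, here is a minimal NumPy sketch of the tile decomposition a GEMM kernel performs: the output matrix is built from fixed-size tiles accumulated along the reduction dimension, which is the structure a tile-based GPU DSL maps onto register blocks and schedules loads against. The tile size and function name are illustrative only; this is not the HipKittens C++ API, and the real kernels add the wave-scheduling patterns and chiplet-aware cache optimizations described above.

```python
import numpy as np

def tiled_gemm(A, B, tile=64):
    """Illustrative tile-decomposed GEMM: C = A @ B.

    Each (i, j) output tile is accumulated from A/B tiles along k,
    mirroring how a tile-based kernel keeps an accumulator in
    registers while streaming operand tiles through shared memory.
    """
    M, K = A.shape
    K2, N = B.shape
    assert K == K2 and M % tile == 0 and N % tile == 0 and K % tile == 0
    C = np.zeros((M, N), dtype=A.dtype)
    for i in range(0, M, tile):              # output row tiles
        for j in range(0, N, tile):          # output column tiles
            acc = np.zeros((tile, tile), dtype=A.dtype)   # "register" accumulator
            for k in range(0, K, tile):      # reduction dimension
                acc += A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile]
            C[i:i+tile, j:j+tile] = acc
    return C

A, B = np.random.rand(256, 256), np.random.rand(256, 256)
assert np.allclose(tiled_gemm(A, B), A @ B)
```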
2024
KV-Runahead: Scalable Causal LLM Inference by Parallel Key-Value Cache Generation
A parallelization scheme that accelerates the LLM prompt (prefill) phase by dual-purposing the KV-cache for parallel generation, achieving 1.4× and 1.6× speedups for Llama 7B and Falcon 7B respectively through asynchronous communication and context-level load balancing.
Impact: Directly reduces time-to-first-token (TTFT) in production LLM serving systems, enabling better user experience for long-context applications like RAG, summarization, and in-context learning.
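The property the scheme exploits is that a causal prompt can be split into context chunks whose KV-cache entries can be produced piecewise and reused directly when decoding the first token. The sketch below illustrates that equivalence for a single attention layer in NumPy; the chunk sizes and function names are placeholders, and the paper's asynchronous inter-process communication and load balancer are not modeled here.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def chunked_prefill(Q, K, V, chunks):
    """Build the KV-cache chunk by chunk instead of in one pass.

    Each chunk's queries attend causally to the cache accumulated so far
    plus the chunk's own keys/values, so the finished cache (and output)
    matches a single full-prompt pass.
    """
    T, d = Q.shape
    assert sum(chunks) == T
    k_cache, v_cache = np.zeros((0, d)), np.zeros((0, d))
    outputs, start = [], 0
    for size in chunks:                          # context-level partition of the prompt
        q = Q[start:start+size]
        k_cache = np.concatenate([k_cache, K[start:start+size]])
        v_cache = np.concatenate([v_cache, V[start:start+size]])
        scores = q @ k_cache.T / np.sqrt(d)
        mask = np.triu(np.ones((size, size), dtype=bool), k=1)
        scores[:, -size:][mask] = -np.inf        # hide future positions inside the chunk
        outputs.append(softmax(scores) @ v_cache)
        start += size
    return np.concatenate(outputs), (k_cache, v_cache)

# The chunked prefill reproduces the single-pass result exactly.
T, d = 12, 8
Q, K, V = np.random.rand(T, d), np.random.rand(T, d), np.random.rand(T, d)
out_chunked, _ = chunked_prefill(Q, K, V, chunks=[5, 4, 3])
out_full, _ = chunked_prefill(Q, K, V, chunks=[T])
assert np.allclose(out_chunked, out_full)
```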
2023
Dynamic Warp Scheduling for Improved GPU Utilization
Machine learning-based warp scheduler that adapts to workload characteristics, achieving 15-25% performance improvements across diverse GPU workloads.
Impact: Directly applicable to next-generation GPU architectures, with major vendors expressing interest in the approach for future products.
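The paper's model and feature set are not detailed above, but the general idea of scoring-based warp selection can be sketched as follows: each resident warp exposes a few runtime features, a learned scoring function ranks them, and the scheduler issues the top-ranked ready warp. Everything in this toy, including the feature names, weights, and selection policy, is a hypothetical illustration rather than the scheduler proposed in the paper.

```python
from dataclasses import dataclass

@dataclass
class WarpState:
    warp_id: int
    ready: bool            # no outstanding dependency stalls
    pending_loads: int     # in-flight memory requests
    stall_cycles: int      # cycles since last issue
    compute_ratio: float   # fraction of ALU vs. memory instructions

# Hypothetical learned weights: prefer warps that are ready, have been
# starved longest, and can hide memory latency behind compute.
WEIGHTS = {"ready": 4.0, "pending_loads": -0.5,
           "stall_cycles": 0.1, "compute_ratio": 1.5}

def score(w: WarpState) -> float:
    return (WEIGHTS["ready"] * w.ready
            + WEIGHTS["pending_loads"] * w.pending_loads
            + WEIGHTS["stall_cycles"] * w.stall_cycles
            + WEIGHTS["compute_ratio"] * w.compute_ratio)

def pick_next_warp(warps):
    """Issue the highest-scoring ready warp; fall back to the best stalled one."""
    ready = [w for w in warps if w.ready]
    pool = ready if ready else warps
    return max(pool, key=score)

warps = [WarpState(0, True, 2, 10, 0.3),
         WarpState(1, True, 0, 40, 0.8),
         WarpState(2, False, 4, 5, 0.1)]
print(pick_next_warp(warps).warp_id)   # -> 1 under these toy weights
```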
Scalable Cache Coherence for Manycore Processors
2017
Attention Is All You Need
This paper introduces the Transformer, a novel neural network architecture based entirely on attention mechanisms, eliminating the need for recurrence and convolutions. The model achieves state-of-the-art results on machine translation tasks (28.4 BLEU on WMT 2014 English-to-German) while being significantly more parallelizable and requiring less training time than previous approaches.
Impact: The Transformer architecture has revolutionized natural language processing and beyond, becoming the foundation for modern large language models like BERT, GPT, and their successors. Its parallel processing capabilities enable efficient training on modern GPU hardware, reducing computational costs and training time significantly. The architecture's success in machine translation demonstrated that attention mechanisms alone could outperform recurrent networks, leading to widespread adoption across various domains including computer vision, speech recognition, and protein folding. The model's interpretability through attention visualizations has also provided insights into how neural networks process sequential data, influencing both research and production systems in industry.
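The core operation is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / √d_k)V. The NumPy sketch below shows a single head without the multi-head projections, positional encodings, or layer stacking of the full Transformer.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V (Vaswani et al., 2017)."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)   # similarity of each query to each key
    if mask is not None:
        scores = np.where(mask, scores, -1e9)        # block disallowed positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V, weights

# Toy usage: 4 query positions attending over 6 key/value positions, d_k = 8.
Q, K, V = np.random.rand(4, 8), np.random.rand(6, 8), np.random.rand(6, 8)
out, attn = scaled_dot_product_attention(Q, K, V)
print(out.shape, attn.sum(axis=-1))   # (4, 8); each row of attn sums to 1
```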
Stay Ahead of the Curve
Computer architecture is rapidly evolving. Our research summaries help you understand the latest breakthroughs and their practical implications for system design.