
Hardware-Software Co-Design for Accelerating Multimodal Foundation Models

A new methodology combines hardware and software techniques to reduce computational and memory requirements for multimodal foundation models, with implications for production systems and research.

Tags: sparse-attention, kernel-exploit, mev, llm-inference, frontier, automated

The proposed methodology combines hardware-software co-design of transformer blocks with an optimization pipeline to reduce the computational and memory requirements of multimodal foundation models (MFMs). Its main components are:

- Fine-tuning for domain-specific adaptation.
- MFM compression via hierarchy-aware mixed-precision quantization and structural pruning of transformer blocks and MLP channels.
- Inference optimizations: speculative decoding, and model cascading that routes queries through a small-to-large cascade, using lightweight self-tests to decide when to escalate to a larger model.
- Co-optimization of sequence length, visual resolution and stride, and graph-level operator fusion.
- Hardware-aware execution: the processing dataflow is optimized for the underlying hardware architecture, combined with memory-efficient attention, to meet on-chip bandwidth and latency budgets.

The effectiveness of the methodology is demonstrated on medical MFMs and on code generation tasks, and extensions toward energy-efficient spiking MFMs are discussed.
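The small-to-large cascading with lightweight self-tests can be sketched as follows. This is a minimal illustration, not the session's actual implementation: the stage names, the `generate` interface, and the confidence-threshold self-test are all assumptions made for the example.

```python
# Hypothetical sketch of small-to-large model cascading: cheap models answer
# first, and a lightweight self-test (here, a confidence threshold) decides
# whether to escalate the query to the next, larger model.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class CascadeStage:
    name: str
    generate: Callable[[str], Tuple[str, float]]  # returns (answer, confidence)
    threshold: float                              # min confidence to accept

def cascade_answer(query: str, stages: List[CascadeStage]) -> Tuple[str, str]:
    """Return (answer, stage_name), escalating while the self-test fails."""
    answer, used = "", ""
    for stage in stages:
        answer, confidence = stage.generate(query)
        used = stage.name
        if confidence >= stage.threshold:  # self-test passed: stop here
            break                          # otherwise escalate to next stage
    return answer, used

# Toy stand-ins for a small and a large model (illustrative only):
small = CascadeStage("small-mfm", lambda q: ("maybe", 0.4), threshold=0.8)
large = CascadeStage("large-mfm", lambda q: ("yes", 0.95), threshold=0.0)

ans, used = cascade_answer("Is the lesion benign?", [small, large])
# The small model's low-confidence answer is rejected, so the query escalates.
```

In a real deployment the self-test could instead be a verifier model, agreement across sampled outputs, or a calibrated uncertainty score; the escalation logic stays the same.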
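Hierarchy-aware mixed-precision quantization can be illustrated with a toy sketch: uniform symmetric quantization per layer, with the bit-width chosen from the layer's position in the model hierarchy. The bit-assignment policy below (8-bit for the first and last blocks, 4-bit for the middle) is a hypothetical stand-in; the session's actual assignment criteria are not specified here.

```python
# Illustrative sketch of hierarchy-aware mixed-precision quantization.
import numpy as np

def quantize(w: np.ndarray, bits: int) -> np.ndarray:
    """Symmetric uniform quantization to `bits`, returned in dequantized form."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale  # dequantized weights, for simulating accuracy impact

def bits_for_layer(depth: int, total: int) -> int:
    # Hypothetical policy: keep the input-adjacent and output-adjacent
    # blocks at 8 bits, quantize the middle of the stack more aggressively.
    return 8 if depth in (0, total - 1) else 4

rng = np.random.default_rng(0)
layers = [rng.standard_normal((4, 4)) for _ in range(4)]
quantized = [quantize(w, bits_for_layer(i, len(layers)))
             for i, w in enumerate(layers)]
```

A production pipeline would pick bit-widths from measured sensitivity (e.g. per-layer accuracy loss) rather than depth alone, and would store the integer codes plus scales instead of dequantized floats.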
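Speculative decoding, one of the inference optimizations listed above, can be sketched in a few lines: a small draft model proposes `k` tokens, the large target model verifies them, and the longest agreeing prefix is kept. This greedy-acceptance variant is a simplification for illustration; the session's actual scheme may differ.

```python
# Minimal sketch of speculative decoding with greedy acceptance.
def speculative_step(prefix, draft_next, target_next, k=4):
    """Draft k tokens, keep the verified prefix plus one target-model token."""
    # Phase 1: the cheap draft model proposes k tokens autoregressively.
    ctx = list(prefix)
    draft = []
    for _ in range(k):
        t = draft_next(ctx)
        draft.append(t)
        ctx.append(t)
    # Phase 2: the target model verifies the proposals. In a real system
    # these k verifications happen in a single batched forward pass.
    ctx = list(prefix)
    accepted = []
    for t in draft:
        correct = target_next(ctx)
        if t == correct:
            accepted.append(t)
            ctx.append(t)
        else:
            accepted.append(correct)  # replace the first mismatch, then stop
            break
    else:
        accepted.append(target_next(ctx))  # bonus token when all k accepted
    return accepted

# Toy deterministic "models" over integer tokens: the target always emits
# the current sequence length; the draft agrees except at length 5.
target = lambda ctx: len(ctx)
draft = lambda ctx: len(ctx) + (1 if len(ctx) == 5 else 0)
```

The benefit comes from Phase 2: verifying `k` drafted tokens costs roughly one target-model forward pass, so every accepted draft token is a target-model step saved.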


Source: Focus Session: Hardware and Software Techniques for Accelerating Multimodal Foundation Models
