Abstract
Deep neural networks (DNNs) have become indispensable in many real-life applications such as natural language
processing and autonomous systems. However, deploying DNNs on resource-constrained devices, e.g.,
RISC-V platforms, remains challenging due to the high computational and memory demands of fully connected
(FC) layers, which dominate resource consumption. Low-rank factorization (LRF) offers an effective approach
to compressing FC layers, but the vast design space of LRF solutions involves complex tradeoffs among
FLOPs, memory size, inference time, and accuracy, making the LRF process complex and time-consuming. This
article introduces an end-to-end LRF design space exploration methodology and a specialized design tool for
optimizing FC layers on RISC-V processors. Using the Tensor Train Decomposition (TTD) offered by the TensorFlow
T3F library, the proposed work prunes the LRF design space by excluding, first, inefficient decomposition shapes
and, second, solutions with poor inference performance on RISC-V architectures. Compiler optimizations are
then applied to enhance custom T3F layer performance, minimizing inference time and boosting computational
efficiency. On average, our TT-decomposed layers run 3× faster than IREE and 8× faster than Pluto on the
same compressed model. This work provides an efficient solution for deploying DNNs on edge and embedded
devices powered by RISC-V architectures.
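The compression tradeoff described above can be illustrated with the simplest low-rank factorization of an FC layer: a truncated SVD splits the weight matrix into two thin factors, trading accuracy for fewer parameters and FLOPs (TTD generalizes this idea to a chain of tensor cores). This is a minimal sketch, not the paper's tool; the layer dimensions and rank below are hypothetical.

```python
import numpy as np

# Hypothetical FC layer: 1024 inputs -> 512 outputs.
rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 512))
x = rng.standard_normal((1, 1024))

# Truncated SVD yields the best rank-r approximation W ~= U @ V.
r = 64
U_full, s, Vt = np.linalg.svd(W, full_matrices=False)
U = U_full[:, :r] * s[:r]   # shape (1024, r)
V = Vt[:r, :]               # shape (r, 512)

# The factored layer computes (x @ U) @ V instead of x @ W,
# replacing one large matmul with two small ones.
y_full = x @ W
y_lowrank = (x @ U) @ V

params_full = W.size              # 1024 * 512 = 524288
params_lowrank = U.size + V.size  # 1024*64 + 64*512 = 98304
print(params_full, params_lowrank)
```

The rank r controls the tradeoff the abstract refers to: smaller ranks shrink memory and FLOPs further but increase the approximation error, which is why the design space of decomposition shapes and ranks must be explored rather than picked blindly.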
| Original language | English |
|---|---|
| Article number | 171 |
| Pages (from-to) | 1-34 |
| Journal | ACM Transactions on Embedded Computing Systems |
| Volume | 24 |
| Issue number | 6 |
| DOIs | |
| Publication status | Published - 24 Oct 2025 |