An Accuracy-Preserving Neural Network Compression via Tucker Decomposition | IEEE Transactions on Sustainable Computing | 2024 |
Co-Exploring Structured Sparsification and Low-Rank Tensor Decomposition for Compact DNNs | TNNLS | 2024 |
Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization | ISCA | 2024 |
Coarse-To-Fine Tensor Trains for Compact Visual Representations | ICML | 2024 |
Position: Tensor Networks are a Valuable Asset for Green AI | ICML | 2024 |
Compression-aware Training of Neural Networks using Frank-Wolfe | arXiv | 2024 |
Unified Low-rank Compression Framework for Click-through Rate Prediction | KDD ADS | 2024 |
A Practical Approach for Employing Tensor Train Decomposition in Edge Devices | IJPP | 2024 |
Structure-Preserving Network Compression Via Low-Rank Induced Training Through Linear Layers Composition | arXiv | 2024 |
LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models | NAACL | 2024 |
CoMERA: Computing- and Memory-Efficient Training via Rank-Adaptive Tensor Optimization | arXiv | 2024 |
FLORA: Fine-grained Low-Rank Architecture Search for Vision Transformer | WACV | 2024 |
FLoRA: Low-Rank Core Space for N-dimension | arXiv | 2024 |
Reduced storage direct tensor ring decomposition for convolutional neural networks compression | arXiv | 2024 |
Federated Learning Using Coupled Tensor Train Decomposition | arXiv | 2024 |
Neural Network Compression Based on Tensor Ring Decomposition | TNNLS | 2024 |
Enhanced network compression through tensor decompositions and pruning | TNNLS | 2024 |
Enhancing GAN Performance Through Neural Architecture Search and Tensor Decomposition | ICASSP | 2024 |
Deep Convolutional Neural Network Compression Method: Tensor Ring Decomposition with Variational Bayesian Approach | Neural Processing Letters | 2024 |
Deep Learning Model Compression With Rank Reduction in Tensor Decomposition | TNNLS | 2023 |
Mixed-TD: Efficient Neural Network Accelerator with Layer-Specific Tensor Decomposition | FPL | 2023 |
SVD-NAS: Coupling Low-Rank Approximation and Neural Architecture Search | WACV | 2023 |
How Informative is the Approximation Error from Tensor Decomposition for Neural Network Compression? | ICLR | 2023 |
Tensor shape search for efficient compression of tensorized data and neural networks | Applied Soft Computing | 2023 |
FacT: Factor-Tuning for Lightweight Adaptation on Vision Transformer | AAAI | 2023 |
Compressing convolutional neural networks with hierarchical Tucker-2 decomposition | Applied Soft Computing | 2023 |
An effective low-rank compression with a joint rank selection followed by a compression-friendly training | Neural Networks | 2023 |
Joint matrix decomposition for deep convolutional neural networks compression | Neurocomputing | 2023 |
Training Acceleration of Low-Rank Decomposed Networks using Sequential Freezing and Rank Quantization | arXiv | 2023 |
HODEC: Towards Efficient High-Order DEcomposed Convolutional Neural Networks | CVPR | 2022 |
Low-rank lottery tickets: finding efficient low-rank neural networks via matrix differential equations | NeurIPS | 2022 |
BATUDE: Budget-Aware Neural Network Compression Based on Tucker Decomposition | AAAI | 2022 |
Convolutional Neural Network Compression through Generalized Kronecker Product Decomposition | AAAI | 2022 |
Towards Compact Neural Networks via End-to-End Training: A Bayesian Tensor Approach with Automatic Rank Determination | SIMODS | 2022 |
Deep neural network compression by Tucker decomposition with nonlinear response | Knowledge-Based Systems | 2022 |
Nested compression of convolutional neural networks with Tucker-2 decomposition | IJCNN | 2022 |
PSM-nets: Compressing Neural Networks with Product of Sparse Matrices | IJCNN | 2022 |
A Design Space Exploration Methodology for Enabling Tensor Train Decomposition in Edge Devices | SAMOS | 2022 |
Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition | NeurIPS | 2021 |
Deeply Shared Filter Bases for Parameter-Efficient Convolutional Neural Networks | NeurIPS | 2021 |
Towards Efficient Tensor Decomposition-Based DNN Model Compression with Optimization Framework | CVPR | 2021 |
Deep Convolutional Neural Network Compression via Coupled Tensor Decomposition | JSTSP | 2021 |
Low-Rank Compression of Neural Nets: Learning the Rank of Each Layer | CVPR | 2020 |
Learning Low-rank Deep Neural Networks via Singular Vector Orthogonality Regularization and Singular Value Sparsification | CVPRW | 2020 |
Holistic CNN Compression via Low-Rank Decomposition with Knowledge Transfer | TPAMI | 2019 |
Efficient Neural Network Compression | CVPR | 2019 |
ADA-Tucker: Compressing deep neural networks via adaptive dimension adjustment tucker decomposition | Neural Networks | 2019 |
Learning Filter Basis for Convolutional Neural Network Compression | ICCV | 2019 |
Automated Multi-Stage Compression of Neural Networks | ICCVW | 2019 |
Compressing Deep Models using Multi Tensor Train Decomposition | ICCAIS | 2019 |
Compressing Fully Connected Layers using Kronecker Tensor Decomposition | ICCSNT | 2019 |
Adaptive Mixture of Low-Rank Factorizations for Compact Neural Modeling | OpenReview | 2019 |
Wide Compression: Tensor Ring Nets | CVPR | 2018 |
Extreme Network Compression via Filter Group Approximation | ECCV | 2018 |
On Compressing Deep Models by Low Rank and Sparse Decomposition | CVPR | 2017 |
Factorized Convolutional Neural Networks | ICCVW | 2017 |
Accelerating Very Deep Convolutional Networks for Classification and Detection | TPAMI | 2016 |
Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications | ICLR | 2016 |
Accelerating Convolutional Neural Networks for Mobile Applications | ACM MM | 2016 |
Ultimate tensorization: compressing convolutional and FC layers alike | NIPSW | 2016 |
Speeding-up Convolutional Neural Networks Using Fine-tuned CP-Decomposition | ICLR | 2015 |
Speeding up Convolutional Neural Networks with Low Rank Expansions | arXiv | 2014 |