SLAM: Structured Linear Algebra for Sustainable Machine Learning

Cristian Rusu, University of Bucharest, 2020-2022

General Information

  • Structured Linear Algebra for Sustainable Machine Learning (SLAM) is a “Young Teams” research project funded through UEFISCDI:

    • code PN-III-P1-1.1-TE-2019-1843
    • contract number TE16/2020
    • 24-month duration (September 2020 – August 2022)
    • 429,498 RON funding (approx. 88,000 EUR)
  • Brief description: This project develops new numerical methods based on structured linear algebra, with applications to efficient algorithms for signal processing, machine learning, and artificial intelligence. The objective is to lower the costs associated with modern machine learning methods, which is essential to ensure their widespread adoption and long-term sustainability. As machine learning methods rely heavily on numerical linear algebra (mostly matrix computations), in this project we propose new numerical methods based on novel structured matrices to reduce the storage and computational complexity of training and running machine learning models.

Team

Description

This project develops new numerical methods based on structured linear algebra with applications to efficient algorithms for signal processing, machine learning, and artificial intelligence. State-of-the-art machine learning techniques are revolutionizing many fields (computer science, biology, economics, and business, to name a few) and society at large by taking over increasingly complex tasks from human operators.

While it is true that these technological breakthroughs have immense benefits and promise to revolutionize the global economy, they unfortunately also exhibit at least one major, potentially crippling, drawback: the high cost of operating state-of-the-art machine learning methods, which require specialized hardware infrastructure and large amounts of energy (direct monetary costs, plus indirect costs in the form of damage to the environment). We are quickly reaching a situation where only a few large companies or well-funded research groups can afford to run large-scale state-of-the-art machine learning methods. Furthermore, even in these cases, the damage done to the environment through CO2 emissions is unacceptable.

Therefore, lowering the costs associated with modern machine learning methods is essential to ensure their widespread adoption and long-term sustainability. As machine learning methods rely heavily on numerical linear algebra (mostly matrix computations), in this project we propose new numerical methods based on novel structured matrices to reduce the storage and computational complexity of training and running machine learning models. We expect our work to have a direct impact on the latest machine learning techniques, such as optimization methods, dictionary learning for sparse representations, kernel methods, and neural networks.
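To make the complexity argument concrete, the NumPy sketch below (an illustration of the general principle only, not code from the project's papers; all names are ours) replaces a dense orthogonal transform with a short product of Givens rotations, in the spirit of the fast approximations of orthogonal matrices studied in the papers listed below.

    import numpy as np

    def apply_givens_product(rotations, x):
        """Apply a product of Givens rotations to a vector x.

        Each rotation (i, j, theta) touches only coordinates i and j, so the
        total cost is O(g) for g rotations, versus O(n^2) for a dense matrix;
        storage drops from n^2 entries to 3g numbers.
        """
        y = x.copy()
        for i, j, theta in rotations:
            c, s = np.cos(theta), np.sin(theta)
            yi, yj = y[i], y[j]
            y[i] = c * yi - s * yj
            y[j] = s * yi + c * yj
        return y

    n = 1024
    rng = np.random.default_rng(0)
    x = rng.standard_normal(n)

    # Dense orthogonal transform: ~n^2 (about a million) multiplications.
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    y_dense = Q @ x

    # Structured surrogate built from g = 64 rotations: ~4g multiplications.
    rotations = [(*rng.choice(n, size=2, replace=False), rng.uniform(0, 2 * np.pi))
                 for _ in range(64)]
    y_fast = apply_givens_product(rotations, x)

The open question, and the subject of the papers on fast orthogonal approximations below, is how to choose the indices and angles so that such a short product approximates a given transform (for example, the eigenspace used in PCA) as closely as possible.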

Objectives

The project has two overarching objectives:

  • Develop new structured linear algebra methods with applications to:
    • The design of new preconditioners for linear systems (see the sketch after this list)
    • Efficient factorizations relevant to eigenvalue and singular value decompositions
    • Non-negative matrix factorizations
  • Extend structured matrices to optimization techniques:
    • Structured gradient descent and quasi-Newton methods
    • Structured matrices for kernel methods
    • Structured matrices for neural networks
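For the preconditioning objective, the SciPy sketch below shows the mechanism a preconditioner exploits; it uses a textbook Jacobi (diagonal) preconditioner as a stand-in, not one of the new designs developed in this project.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import cg, LinearOperator

    n = 500
    # An ill-conditioned symmetric positive definite tridiagonal system A x = b.
    main = np.linspace(1.0, 1e4, n)
    A = diags([main, -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1], format="csr")
    b = np.ones(n)

    iterations = {"plain": 0, "jacobi": 0}
    def counter(key):
        def cb(_xk):
            iterations[key] += 1
        return cb

    # Plain conjugate gradient on the original system.
    x_plain, _ = cg(A, b, callback=counter("plain"))

    # Jacobi preconditioning: M approximates A^{-1} by inverting diag(A),
    # which clusters the spectrum near 1 and cuts the iteration count.
    M = LinearOperator((n, n), matvec=lambda v: v / main)
    x_prec, _ = cg(A, b, M=M, callback=counter("jacobi"))

    print(iterations)  # the preconditioned run converges in far fewer steps

A good preconditioner must also be cheap to apply, which is exactly where structured matrices come in: the aim is to design preconditioners that are more powerful than diagonal scaling while remaining fast to apply.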

Papers

The following papers summarize the results of this research project. Source code for each paper is freely available on GitHub.

  • C. Rusu and L. Rosasco, Constructing fast approximate eigenspaces with application to the fast graph Fourier transforms
    published in IEEE Transactions on Signal Processing 69, 5037-5050
    available online here
  • C. Rusu, An iterative coordinate descent algorithm to compute sparse low-rank approximations
    published in IEEE Signal Processing Letters 29
    available online here
  • C. Rusu, An iterative Jacobi-like algorithm to compute a few sparse eigenvalue-eigenvector pairs
    available online here
  • C. Rusu and P. Irofti, Efficient and parallel separable dictionary learning
    published in IEEE 27th International Conference on Parallel and Distributed Systems (ICPADS)
    available online here
  • C. Rusu and L. Rosasco, Fast approximation of orthogonal matrices and application to PCA
    published in Signal Processing 194
    available online here
  • C. Rusu, Learning multiplication-free linear transformations
    published in Digital Signal Processing 126
    available online here
  • P. Irofti, C. Rusu and A. Pătraşcu, Dictionary learning with uniform sparse representations for anomaly detection
    published in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
    available online here
  • D. Ilie-Ablachim and C. Rusu, Real-time preconditioners for solving systems of linear equations
    under development 2022
  • D. Ilie-Ablachim, I. Culic, and C. Rusu, Online preconditioner design for kernel machines
    under development 2022
  • D. Ilie-Ablachim and C. Rusu, New deterministic algorithms for online PCA
    under development 2022
  • C. Rusu and N. Gonzalez-Prelcic, A novel approach for unit-modulus least-squares optimization problems
    under development 2022
  • N. Gonzalez-Prelcic, E. Dominguez-Jimenez and C. Rusu, Multicoset-based deterministic measurement matrices for compressed sensing of sparse multiband signals
    under development 2022
  • J. Palacios, N. Gonzalez-Prelcic and C. Rusu, Multidimensional orthogonal matching pursuit: theory and application to high accuracy joint localization and communication at mmWave
    under development 2022