Algebraic Initialization Strategies for Large-Scale Tensor Decompositions

Tuesday, May 23, 2023 3:00 PM to 3:20 PM · 20 min. (Europe/Berlin)
Hall Z - 3rd Floor
Focus Session
AI Applications, ML Systems and Tools, Numerical Libraries

Information

Extracting meaningful information from data is the objective in countless applications in signal processing and data analysis. Key in these fields is the interpretability of the underlying components, which can be rank-1 or low multilinear rank terms depending on the problem at hand. The components can be found via a tensor decomposition, which is typically computed via optimization starting from a random initial guess. To reduce the computation time, parallelization or randomization can be used to speed up the optimization algorithm for large-scale tensors. Alternatively, the multilinear structure can be exploited by using a multistage approach. First, the tensor is compressed to a small core tensor using well-studied algorithms such as the multilinear singular value decomposition. Next, an initialization for a decomposition of the core tensor is generated, which is then used as the starting point for an optimization step. Finally, a refinement step is performed after undoing the compression.

While these techniques can reduce the time for a single decomposition, tens, hundreds, or even thousands of (random) initializations are often required due to the nonconvexity and difficulty of the optimization problem. Therefore, good initialization strategies are often key.

In this talk, we show how to use algebraic techniques that leverage the required interpretability or uniqueness conditions. As an example, we focus on the computation of a difficult type of decomposition: the block term decomposition. We show that this decomposition can be formulated as a low (matrix) rank approximation problem, which relies on a standard singular value decomposition, followed by a generalized eigenvalue decomposition. Both can be computed using standard sequential or parallel algorithms. This way, the total time can be reduced by several orders of magnitude, making the block term decomposition a practically viable technique for analyzing large-scale tensors.
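To make the SVD-then-GEVD pattern concrete, the following is a minimal Python (NumPy/SciPy) sketch of the classical algebraic initialization for the simplest related case, a rank-R canonical polyadic decomposition with rank-1 terms; the block term decomposition discussed in the talk follows the same pattern but is not reproduced here. The function name cpd_gevd_init and all library choices are illustrative assumptions of the editor, not code from the speakers, and the sketch assumes a real, essentially noise-free third-order tensor with R no larger than its first two dimensions.

import numpy as np
from scipy.linalg import eig, khatri_rao

def cpd_gevd_init(T, R):
    # Algebraic initialization of a rank-R CPD of a third-order tensor T
    # (shape I x J x K): SVD-based compression followed by a generalized
    # eigenvalue decomposition of two slices of the compressed core.
    I, J, K = T.shape

    # Step 1: compress each mode with a truncated SVD of its unfolding
    # (a truncated multilinear singular value decomposition).
    U1 = np.linalg.svd(T.reshape(I, J * K, order='F'),
                       full_matrices=False)[0][:, :R]
    U2 = np.linalg.svd(np.moveaxis(T, 1, 0).reshape(J, I * K, order='F'),
                       full_matrices=False)[0][:, :R]
    U3 = np.linalg.svd(np.moveaxis(T, 2, 0).reshape(K, I * J, order='F'),
                       full_matrices=False)[0][:, :2]
    G = np.einsum('ijk,ia,jb,kc->abc', T, U1, U2, U3)   # core of size R x R x 2

    # Step 2: generalized eigenvalue decomposition of the two core slices,
    # G[:, :, 0] x = lambda * G[:, :, 1] x; the eigenvectors expose one factor.
    _, X = eig(G[:, :, 0], G[:, :, 1])
    X = np.real(X)   # real, noise-free data assumed; complex pairs need more care

    # Step 3: recover the factor matrices (up to scaling and permutation).
    B = U2 @ np.linalg.inv(X).T          # second factor from the eigenvectors
    A = U1 @ (G[:, :, 0] @ X)            # first factor from a compressed slice
    T3 = T.reshape(I * J, K, order='F')  # mode-3 unfolding, (I*J) x K
    C = np.linalg.lstsq(khatri_rao(B, A), T3, rcond=None)[0].T   # least squares

    return A, B, C   # starting point for the subsequent optimization step

In the exact case these factors already recover the decomposition; with noise, or for a block term decomposition, they serve only as the initialization that is subsequently refined by the optimization and refinement steps described above.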
Format
On-site, On Demand
Beginner Level: 50%
Intermediate Level: 30%
Advanced Level: 20%
