Linear Algebra and Neural Approaches for Representation Learning
It is well known that the performance of a machine learning model depends heavily on the data representation used as its input.
Representation learning has long been one of the most important topics in machine learning, and its methods have evolved from traditional approaches that refine features via dimensionality reduction to approaches that learn features directly with neural networks (i.e., deep learning).
This talk will introduce some of the speaker's major work on representation learning, spanning both linear-algebra and neural approaches. It will cover sub-topics including traditional feature refinement methods, co-embedding, data visualisation, and neural representations for locally structured, sequential and graph data, as well as their use in real-world AI and data analysis applications.
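To make the contrast between the two families of approaches concrete, the minimal sketch below (an illustration only, not taken from the talk) places a classical linear-algebra method, PCA computed via the SVD, next to a tiny neural encoder (a one-hidden-layer autoencoder trained by gradient descent); both map the same toy data to a two-dimensional representation. All data shapes and hyperparameters here are assumptions.

```python
# Illustrative sketch only (not from the talk): a linear-algebra representation
# (PCA via SVD) versus a neural one (a tiny autoencoder trained by gradient
# descent). Shapes and hyperparameters are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))            # toy data: 500 samples, 10 features
X = X - X.mean(axis=0)                    # centre the data
k = 2                                     # target representation dimension

# --- Linear-algebra route: PCA via the singular value decomposition ---
U, S, Vt = np.linalg.svd(X, full_matrices=False)
Z_pca = X @ Vt[:k].T                      # project onto top-k principal directions

# --- Neural route: one-hidden-layer autoencoder, squared reconstruction loss ---
W_enc = rng.normal(scale=0.1, size=(10, k))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(k, 10))   # decoder weights
lr = 1e-3
for _ in range(2000):
    Z = np.tanh(X @ W_enc)                # nonlinear 2-D codes
    X_hat = Z @ W_dec                     # linear reconstruction
    err = X_hat - X
    grad_dec = Z.T @ err / len(X)         # gradient of mean squared error w.r.t. decoder
    grad_Z = err @ W_dec.T * (1 - Z**2)   # backpropagate through the tanh nonlinearity
    grad_enc = X.T @ grad_Z / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

Z_neural = np.tanh(X @ W_enc)             # learned neural representation
print(Z_pca.shape, Z_neural.shape)        # both (500, 2)
```

The PCA route produces a fixed linear projection determined entirely by the data's covariance structure, whereas the autoencoder learns a nonlinear mapping whose quality depends on optimisation, which is the broad trade-off the abstract alludes to.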