Learning Structures from High-dimensional Data
By S. Zhang
Sixin Zhang tells ADASP how to characterize learnable structures from high-dimensional data.
Machine learning plays an important role in artificial intelligence and data science. A core problem in learning is to characterize learnable structures in high-dimensional data, which are formed by a large number of variables. This is particularly challenging due to limited data observations and limited machine computation. I shall present my contributions to three basic questions: What are the structures of data? Can certain structures of data be learnt? How can they be learnt? I shall present maximum-entropy models to characterize coherent structures of high-dimensional data such as textures and turbulent flows. This modeling problem is closely related to deep learning for image recognition and to transform learning for inverse problems such as source separation. Lastly, I shall present distributed and stochastic optimization algorithms to address large-scale learning problems.
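To illustrate the maximum-entropy principle mentioned above (this is a toy sketch, not the speaker's model for textures or turbulence): among all distributions satisfying a set of moment constraints, the maximum-entropy one belongs to an exponential family, and its Lagrange multipliers can be fitted by moment matching. The state space, target mean, and learning rate below are all invented for illustration.

```python
# Toy maximum-entropy model: find the distribution p over states
# {0, 1, 2, 3} with maximal entropy subject to the single moment
# constraint E[x] = target_mean. The solution has the exponential-
# family form p(x) ∝ exp(lam * x); we fit the multiplier lam by
# gradient ascent on the (concave) dual, whose gradient is the gap
# between the target moment and the model moment.
import numpy as np

states = np.arange(4)   # toy state space (illustrative)
target_mean = 2.3       # moment constraint (illustrative)

lam = 0.0
for _ in range(2000):
    p = np.exp(lam * states)
    p /= p.sum()                              # normalize to a distribution
    model_mean = (p * states).sum()           # current model moment E_p[x]
    lam += 0.1 * (target_mean - model_mean)   # dual gradient ascent step

print(round((p * states).sum(), 3))  # model mean matches the constraint, ≈ 2.3
```

Maximum-entropy models of real textures or turbulent flows follow the same pattern but with far richer constraint statistics (e.g. wavelet-based moments) and high-dimensional state spaces, which is where the large-scale optimization algorithms in the talk come in.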
Sixin Zhang obtained his PhD in Computer Science (2010-2016) at the Courant Institute of Mathematical Sciences, NYU, advised by Yann LeCun. After graduation, he worked as a postdoctoral researcher at ENS Paris in France with Stéphane Mallat on wavelet analysis and deep learning. He then worked for one year at Peking University (Center for Data Science) in China as a research associate. He is now a postdoctoral researcher at CNRS, IRIT, Université de Toulouse, in the group of Cédric Févotte.