ABSTRACT
=========
In this talk, we present our recent work on computing the intrinsic dimensionality of large-scale datasets. Our work builds on an axiomatization by V. Pestov and its adaptation to geometric datasets by Hanika et al. We will explain how we made this concept measurable for datasets with hundreds of millions of data points. Furthermore, we will discuss how this concept can be applied to the realm of graph learning. To this end, we study how the intrinsic dimensionality of graph data is connected to the success of classification via graph neural networks.