
Abstract:
AI and machine learning are commonly described as "black boxes" that are efficient but opaque. While complete opacity would be an exaggeration, it is true that many explainability methods rely on a form of reverse engineering: we try to infer the model from its (partial, intermediate, or final) results. These methods are typically based on large-scale, efficient matrix manipulation.
Graphs and their extensions have been shown to be visualisable and interpretable, even at large scales. In their classical formulation, they are very similar to matrices, yet also versatile: they can be directed, weighted, multilayer, temporal, etc. Each of these extensions gives rise to interesting algorithmic and data-driven questions.
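To make the graph-matrix correspondence concrete, here is a minimal illustrative sketch (not taken from the talk): a small directed, weighted graph encoded as an adjacency matrix, on which standard matrix operations recover graph properties. The edge set and all variable names are hypothetical.

```python
import numpy as np

# A small directed, weighted graph as an edge -> weight mapping.
edges = {(0, 1): 2.0, (1, 2): 0.5, (2, 0): 1.0, (0, 2): 3.0}

# Adjacency matrix: entry (i, j) holds the weight of edge i -> j, 0 if absent.
n = 3
A = np.zeros((n, n))
for (i, j), w in edges.items():
    A[i, j] = w

# Matrix operations recover graph properties:
# row sums of the boolean matrix give out-degrees, ...
out_degrees = (A > 0).sum(axis=1)
# ... and powers of the boolean matrix count paths of a given length.
B = (A > 0).astype(int)
paths_len2 = B @ B  # entry (i, j) counts directed paths of length 2 from i to j
```

It is precisely this correspondence that lets graph questions be phrased as efficient matrix computations, while the graph view remains the one that is visualisable and interpretable.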
To date, few machine learning methods harness the expressive power of graphs, in part because graph algorithms are complex and typically have polynomial running times, which is incompatible with the scale of the data at hand.
However, the situation has changed: (i) the impact of AI on society makes it no longer acceptable to favour efficiency over explainability, and (ii) recent advances in algorithmic methods on graphs demonstrate that, due to the nature of real-world graphs, even some NP-hard problems become tractable.
The aim of this talk is to explore this avenue of research. We will discuss the state of the art as well as some past results in real-world (temporal) graph modelling and in explainability, and will then discuss some recent results on pattern mining on temporal graphs.
