ACONTA: Augmented Complex Networks - Trustworthy Analysis
Thanks to increased technology-based interaction (e.g., email, WhatsApp, internet fora) and technology-based data capture tools (e.g., sensors, BT/RFID badges, automated video encoding), it is increasingly possible to observe network change in real time. In this talk, I will focus on a statistical model developed to predict and explain networks that change in continuous time (or where only the order of the interaction events is known). The graphs are represented as a list of "relational events", where each event includes a time stamp (at which the interaction occurred), a sender ID, a receiver ID, and, if available, attributes of the event and/or of the sender or receiver. This type of model allows a researcher to predict the next event: who will send the next edge, who will receive it, and at what time will the interaction occur? By building a model that fits the dynamics of the network well, we arrive at a statistical understanding of the social drivers of the interaction dynamics in the specific network. I will demonstrate this type of model on several datasets and, in particular, focus on the use of regularisation to extract highly explainable and (relatively) parsimonious final models while maintaining prediction accuracy.
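The relational event representation described above can be sketched in a few lines. This is a minimal illustration, not the talk's model: the statistic names ("inertia", "reciprocity") follow common relational event model terminology, and the weights in the scoring rule are arbitrary assumptions chosen for the example.

```python
# Minimal sketch of a relational event list and a toy next-event score.
from collections import Counter
from dataclasses import dataclass

@dataclass
class RelationalEvent:
    time: float      # time stamp at which the interaction occurred
    sender: str
    receiver: str

def next_dyad_scores(events, actors):
    """Score each candidate (sender, receiver) dyad using two classic
    relational-event statistics computed from the event history.
    The weights 1.0 and 0.5 are illustrative, not estimated."""
    past = Counter((e.sender, e.receiver) for e in events)
    scores = {}
    for s in actors:
        for r in actors:
            if s == r:
                continue
            inertia = past[(s, r)]        # repeated past interaction s -> r
            reciprocity = past[(r, s)]    # past interaction in reverse
            scores[(s, r)] = 1.0 * inertia + 0.5 * reciprocity
    return scores

events = [
    RelationalEvent(1.0, "a", "b"),
    RelationalEvent(2.0, "a", "b"),
    RelationalEvent(3.0, "b", "a"),
]
scores = next_dyad_scores(events, ["a", "b", "c"])
best = max(scores, key=scores.get)
print(best)  # ('a', 'b'): strongest combination of inertia and reciprocity
```

In a real relational event model the weights on such statistics are estimated from data (e.g., by maximising a partial likelihood), and regularisation as mentioned in the abstract would shrink or drop uninformative statistics.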
Explainability aims to facilitate the understanding of various aspects of a model, leading to information that can be used by many actors: the data scientist, the manager, the domain expert, the user of the model, etc. Explainability can be seen as an active feature of a model, denoting any action taken by a model to clarify or detail its internal functions. In this talk we discuss the need to establish explainability approaches inherent to the field of interaction network analysis, helping to produce emerging explanations from topological information extracted from the network structure that can be combined with those obtained from contextual information analysis. This is particularly necessary when the analysis of the interaction network is part of a decision system. To illustrate this, we analyse a tweet propagation network constructed around two influencers holding opposing opinions on a given topic. We propose a method for detecting modifications of users' opinions in relation to several nodal and topological features, integrating elements obtained from the analysis of the propagation network with textual content analysis. We aim to find actions that explain the stability or change of users' opinions, allowing us to understand emerging opinions and their evolution.
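The idea of combining a nodal feature with textual content to flag an opinion change can be sketched as follows. This is a hedged toy illustration, not the method from the talk: the feature names, the polarity representation, and the threshold are all assumptions made for the example.

```python
# Toy sketch: flag an opinion change by comparing a user's average text
# polarity before and after a cutoff time, alongside a simple nodal
# feature (which influencer the user retweets most).
from collections import Counter

def dominant_influencer(retweeted_influencers):
    """Nodal feature: the influencer this user retweets most often."""
    return Counter(retweeted_influencers).most_common(1)[0][0]

def opinion_changed(timed_polarities, cutoff, threshold=0.5):
    """timed_polarities: list of (time, polarity in [-1, 1]) per tweet.
    Returns True if the mean polarity shifts by more than `threshold`
    across the cutoff time (an illustrative criterion)."""
    before = [p for t, p in timed_polarities if t <= cutoff]
    after = [p for t, p in timed_polarities if t > cutoff]
    if not before or not after:
        return False
    mean = lambda xs: sum(xs) / len(xs)
    return abs(mean(after) - mean(before)) > threshold

user_polarity = [(1, 0.8), (2, 0.6), (8, -0.7), (9, -0.5)]
print(opinion_changed(user_polarity, cutoff=5))          # True: clear shift
print(dominant_influencer(["inflA", "inflA", "inflB"]))  # inflA
```

An explainable pipeline in the spirit of the abstract would then relate such flags to topological events in the propagation network (e.g., a change in which influencer's cascade the user joins).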
When dealing with safety assessment and cybersecurity aspects of critical and complex industrial systems and critical infrastructures, and the essential functionalities they support, two main challenges have to be considered. One is complexity along many dimensions (model complexity, computational complexity, ...); the other is the huge number of scenarios one has to assess regarding the dynamics of an accident, the sophistication of the attacks, and the various ways these scenarios may unfold in detail. In physical safety assessment, the threats are mainly stochastic in nature, whereas in the domain of cybersecurity, the threats are based on deterministic layers of attacks that may combine cyber and physical malevolent actions to target an asset. In both domains, complex networks have demonstrated their ability to model scenarios and turn out to be a quite practical framework for gaining insight into systems or infrastructures under physical or cyber threats, sometimes combined. Indeed, for safety assessment problems, we have metrics that quantify the risk in terms of probabilities or frequencies based on the probabilities of individual scenarios (probabilistic safety assessment). These metrics are good enough to give an idea of the risk, given some configuration of the system, and are used on a daily basis for decision-making in what we call risk-informed applications. For cybersecurity, we are used to sorting scenarios by their likelihood, considering the vulnerabilities, the digital security criteria, and the level and capacities of the attacker, which means that we deal mainly with non-rare probabilities based on deterministic analyses and expert judgement. When we combine cybersecurity and physical threats, new metrics are needed that can measure the resilience of the systems and assess their robustness against both stochastic failures and targeted attacks.
Fortunately, complex networks can serve to model these scenarios, and their metrics and centralities can be used to measure the resilience and robustness of these critical infrastructures or systems.
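The contrast between stochastic failures and targeted attacks can be illustrated with a standard complex-networks exercise: remove nodes either at random or by highest degree, and track the fraction of nodes remaining in the largest connected component. This is a generic sketch, not a method from the talk, and uses no external libraries.

```python
# Robustness comparison on a small undirected graph: size of the largest
# connected component after a random node failure vs. a targeted
# (highest-degree) attack.
import random
from collections import deque

def largest_component_fraction(adj, removed):
    """Fraction of all nodes in the largest connected component of the
    graph restricted to nodes not in `removed` (BFS over components)."""
    alive = set(adj) - removed
    best, seen = 0, set()
    for start in alive:
        if start in seen:
            continue
        size, queue = 0, deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v in alive and v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best / len(adj)

# Star graph: hub 0 connected to leaves 1..9 -- fragile to targeted attack.
adj = {0: set(range(1, 10))}
for leaf in range(1, 10):
    adj[leaf] = {0}

targeted = largest_component_fraction(adj, removed={0})  # remove the hub
random.seed(0)
rand_node = random.choice([n for n in adj if n != 0])
rand_fail = largest_component_fraction(adj, removed={rand_node})
print(targeted, rand_fail)  # 0.1 vs 0.9: removing the hub shatters the star
```

Repeating the removal over many nodes and plotting the component fraction against the fraction removed yields the classic robustness curves used to compare random failures with degree- or centrality-targeted attacks.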
Anomaly detection has been a field of intense research over the last decades, first for vector data and more recently for relational data. In the first case, where the elements (or instances) are described by feature vectors, the task aims at finding vectors in the space that are "far" from the others, and different criteria have been proposed to quantify this notion of being "far" from the others. Anomalies are thus defined as substantial variations from the norm. In the case of relational data, the methods proposed in the literature rarely aim at detecting the same type of anomalies, but they can be classified into different categories depending on the kind of graph they deal with. In this talk, dedicated to graph-based anomaly detection, we will illustrate these different approaches with a focus on the detection of contextual anomalies in attributed graphs.
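The notion of a contextual anomaly in an attributed graph can be sketched concretely: a node whose attribute value is normal globally but deviates strongly from its neighbours' values. The deviation score below is a simple illustrative choice, not a specific method from the talk.

```python
# Sketch of contextual anomaly detection in an attributed graph: score
# each node by how far its attribute lies from the mean of its
# neighbours' attributes.
def neighborhood_deviation(adj, attr):
    """For each node, |own value - mean of neighbour values|."""
    scores = {}
    for node, neighbors in adj.items():
        if not neighbors:
            scores[node] = 0.0
            continue
        mean = sum(attr[v] for v in neighbors) / len(neighbors)
        scores[node] = abs(attr[node] - mean)
    return scores

# Two communities with attribute values around 30 and around 100; node
# "x" has value 100 but sits inside the low-value community, so it is
# globally normal yet locally (contextually) anomalous.
adj = {
    "a": {"b", "x"}, "b": {"a", "x"}, "x": {"a", "b"},
    "c": {"d"}, "d": {"c"},
}
attr = {"a": 30, "b": 32, "x": 100, "c": 100, "d": 98}
scores = neighborhood_deviation(adj, attr)
flagged = max(scores, key=scores.get)
print(flagged)  # x
```

A purely vector-based detector would not flag "x", since the value 100 is common in the dataset; it is anomalous only relative to its graph context, which is exactly what distinguishes contextual anomalies in attributed graphs.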