Interpretable Network Representations


Networks (or, interchangeably, graphs) are ubiquitous across science and engineering: social networks, collaboration networks, protein-protein interaction networks, and infrastructure networks, among many others. Machine learning on graphs, especially network representation learning, has shown remarkable performance in graph-related tasks such as node/graph classification, graph clustering, and link prediction. These tasks are closely tied to Web applications, especially social network analysis and recommender systems. For example, node classification and graph clustering are widely used for community detection, and link prediction plays a vital role in friend or item recommendation. Beyond performance, it is equally crucial for individuals to understand the behavior of machine learning models and to be able to explain how these models arrive at a certain decision. Such needs have motivated many studies on interpretability in machine learning. Specifically, for social network analysis, we may need to know why certain users (or groups) are classified or clustered together by a machine learning model, or why a friend recommendation system considers some users similar enough to recommend that they connect with each other. Under such circumstances, an interpretable network representation is necessary: it should carry graph information at a level understandable by humans.
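As a concrete illustration of the kind of interpretability discussed above, a classic link-prediction heuristic, common neighbors, produces scores a user can understand directly ("you share k friends with this person"). The following is a minimal sketch on a toy friendship graph; the node names and edges are purely hypothetical, not drawn from the tutorial:

```python
# Toy friendship graph as an adjacency-set dictionary (hypothetical data).
adj = {
    "ana": {"bob", "cat", "dan"},
    "bob": {"ana", "cat"},
    "cat": {"ana", "bob", "eve"},
    "dan": {"ana"},
    "eve": {"cat"},
}

def common_neighbor_score(u, v):
    """Number of shared neighbors: a human-readable link-prediction score."""
    return len(adj[u] & adj[v])

# Rank non-adjacent pairs by score (candidate friend recommendations).
nodes = sorted(adj)
candidates = [
    (u, v, common_neighbor_score(u, v))
    for i, u in enumerate(nodes)
    for v in nodes[i + 1:]
    if v not in adj[u]
]
candidates.sort(key=lambda t: -t[2])
print(candidates)  # highest-scoring recommended pairs first
```

Unlike an opaque embedding, each recommendation here comes with its own explanation: the list of shared friends that produced the score.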

In this tutorial, we will (1) define interpretability and go over its definitions within different contexts in studies of networks; (2) review and summarize various interpretable network representations; (3) discuss connections to network embedding, graph summarization, and network visualization methods; (4) discuss explainability in Graph Neural Networks, as such techniques are often perceived to have limited interpretability; and (5) highlight open research problems and future research directions. The tutorial is designed for researchers, graduate students, and practitioners in areas such as graph mining, machine learning on graphs, and machine learning interpretability. Few prerequisites are required for The Web Conference participants to attend.

Tutorial Outline

  • Introduction slides
  • Interpretability in Network Settings
    • Network Properties
    • Spectral Properties
    • Relationship Between a Network and its Subgraphs
  • Interpretable Network Representations
    • Graph Summarization Methods
    • Network Embedding Methods
    • Network Visualization
  • Demo
  • Graph Neural Networks and their Explainability
  • Q & A

Video Teaser

IGR Teaser


Shengmin Jin

Shengmin Jin is a research associate in Computer and Information Science and Engineering at Syracuse University. His research interests include large-scale graph mining and graph representation. His work has been published in data mining venues including TKDE, KDD, WSDM, ICDM, and CIKM. Shengmin has been a teaching assistant for graduate courses including social media mining, data mining, and algorithms at Syracuse University. He was the recipient of the 2016 Syracuse outstanding achievement award in graduate study. Before joining Syracuse University, he received his B.S. degree in Mathematics from Fudan University. More information can be found [here](

Danai Koutra

Danai Koutra is an Associate Director of the Michigan Institute for Data Science (MIDAS) and an Associate Professor in Computer Science and Engineering at the University of Michigan, where she leads the Graph Exploration and Mining at Scale (GEMS) Lab. She is also an Amazon Scholar. Her research focuses on practical and scalable methods for large-scale real networks, and her interests include graph summarization, graph representation learning, knowledge graph mining, similarity and alignment, and anomaly detection. She has won an NSF CAREER award, an ARO Young Investigator award, the 2020 SIGKDD Rising Star Award, research faculty awards from Google, Amazon, Facebook, and Adobe, a Precision Health Investigator award, the 2016 ACM SIGKDD Dissertation award, and an honorable mention for the SCS Doctoral Dissertation Award (CMU). She holds one 'rate-1' patent on bipartite graph alignment, and has multiple papers in top data mining conferences, including 8 award-winning papers. She is the Secretary of the new SIAG on Data Science, an Associate Editor of ACM TKDD, a track co-chair for the 'Social Network Analysis and Graph Algorithms' track at TheWebConf 2022, and has served multiple times on the organizing committees of all the major data mining conferences (e.g., ACM SIGKDD, ACM WSDM, SIAM SDM, ECML/PKDD, ACM CIKM, IEEE ICDM). She has also co-organized 7 tutorials and 6 workshops.

Reza Zafarani

Reza Zafarani is an Assistant Professor in the Department of Electrical Engineering and Computer Science at Syracuse University. His research interests are in data mining, machine learning, social media mining, and social network analysis. His research has been published at major academic venues and highlighted in various scientific outlets. He is the principal author of "Social Media Mining: An Introduction," a textbook published by Cambridge University Press. He is the recipient of the NSF CAREER award, the President's Award for Innovation, and an outstanding teaching award at Arizona State University. Reza has served on numerous program and organizing committees of machine learning and data mining conferences. He is currently an associate editor for SIGKDD Explorations and Frontiers in Communication. More information can be found [here](