PyWhy Causality in Practice

PyWhy Causality in Practice is a new talk series focusing on causality and machine learning, especially from a practical perspective. The series will feature tutorials and presentations about PyWhy libraries, as well as talks by external speakers working on causal inference.


Causal Representation Learning: Discovery of the Hidden World

Causality is a fundamental notion in science, engineering, and even in machine learning. Causal representation learning aims to reveal the underlying high-level hidden causal variables and their relations. It can be seen as a special case of causal discovery, whose goal is to recover the underlying causal structure or causal model from observational data. The modularity property of a causal system implies properties of minimal changes and independent changes of causal representations, and in this talk, we show how such properties make it possible to recover the underlying causal representations from observational data with identifiability guarantees: under appropriate assumptions, the learned representations are consistent with the underlying causal process. Various problem settings are considered, involving independent and identically distributed (i.i.d.) data, temporal data, or data with distribution shift as input. We demonstrate when identifiable causal representation learning can benefit from flexible deep learning and when suitable parametric assumptions have to be imposed on the causal process, with various examples and applications.
The talk will include a description of the causal-learn package in PyWhy. Learn more: https://github.com/py-why/causal-learn
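For readers who want to try causal-learn ahead of the talk, below is a minimal sketch of running the PC algorithm on a small synthetic dataset. The simulated chain X0 -> X1 -> X2 and the parameter choices are purely illustrative and are not taken from the talk.

```python
import numpy as np
from causallearn.search.ConstraintBased.PC import pc

# Illustrative toy data: a linear chain X0 -> X1 -> X2 with Gaussian noise.
rng = np.random.default_rng(0)
x0 = rng.normal(size=1000)
x1 = 0.8 * x0 + rng.normal(size=1000)
x2 = 0.8 * x1 + rng.normal(size=1000)
data = np.column_stack([x0, x1, x2])

# Run the PC algorithm (Fisher-z conditional independence test by default).
cg = pc(data, alpha=0.05)

# cg.G holds the estimated graph (a CPDAG over the three variables).
print(cg.G)
```

On this toy data the recovered skeleton should connect X0-X1 and X1-X2, matching the simulated chain.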

Speaker: Kun Zhang is currently on leave from Carnegie Mellon University (CMU), where he is an associate professor of philosophy and an affiliate faculty member in the machine learning department; he is serving as a professor, the acting chair of the machine learning department, and the director of the Center for Integrative AI at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI). He develops methods for making causality transparent by torturing various kinds of data and investigates machine learning problems, including transfer learning, representation learning, and reinforcement learning, from a causal perspective. He frequently serves as a senior area chair, area chair, or senior program committee member for major conferences in machine learning and artificial intelligence, including UAI, NeurIPS, ICML, IJCAI, AISTATS, and ICLR. He was a general and program co-chair of the first Conference on Causal Learning and Reasoning (CLeaR 2022), a program co-chair of the 38th Conference on Uncertainty in Artificial Intelligence (UAI 2022), and is a general co-chair of UAI 2023.
Join the live seminar on Monday, January 29, 2024 at 8:00am Pacific / 11:00am Eastern / 4:00pm GMT / 9:30pm IST.


EconML library and what's new in v0.15

EconML is a Python package that implements several cutting-edge causal inference estimators on top of flexible machine learning methods. In this talk, Keith Battocchi, software engineer at Microsoft Research New England and lead developer for EconML, presents a brief overview of EconML followed by a closer look at several major new features in EconML v0.15.
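As a quick taste of how EconML layers causal estimators on top of machine learning models, here is a minimal sketch of estimating a treatment effect with LinearDML on synthetic data. The data-generating process and the choice of nuisance models are illustrative assumptions, not part of the talk or tied to the v0.15 features it covers.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from econml.dml import LinearDML

# Illustrative synthetic data: outcome Y, continuous treatment T, features X.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
T = X[:, 0] + rng.normal(size=2000)
Y = 2.0 * T + X[:, 1] + rng.normal(size=2000)  # true effect of T on Y is 2.0

# Double machine learning: flexible nuisance models, linear final stage.
est = LinearDML(
    model_y=RandomForestRegressor(random_state=0),
    model_t=RandomForestRegressor(random_state=0),
    random_state=0,
)
est.fit(Y, T, X=X)

# Estimated treatment effects for the first few units, with confidence intervals.
print(est.effect(X[:5]))
print(est.effect_interval(X[:5]))
```

The estimated effects should be close to the true coefficient of 2.0, with the random forests soaking up the confounding through X.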


LLMs for causal inference

Emre Kiciman, Senior Principal Researcher at Microsoft, talks about pywhy-llm, a new experimental library that focuses on using large language models for causality.