Representation learning for sequential decision-making
One of the fundamental goals of AI and machine learning is to support decision-making. In many applications, decision-making policies cannot be learned through trial and error. For example, in healthcare, experimentation with clinical decisions is strictly regulated. As a result, there is great interest in using so-called observational (non-experimental) data from historical patients to find better treatment policies.
Learning optimal decision-making policies from observational data is intimately tied to causality. In particular, we want to be sure that associations between historical decisions and outcomes are not spurious, but representative of what would happen if we were to act on them in practice. For example, the observed correlation between national chocolate consumption and the number of Nobel prizes awarded to laureates from different countries does not mean that eating more chocolate improves our chances of winning future prizes. To ensure success, it is critical that methods applied to this task preserve "causal sufficiency"---that they do not leave out information needed to identify such associations.
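The spurious-correlation problem above can be illustrated with a minimal simulation (a hypothetical toy example, not part of the project itself): a hidden confounder drives both an observed "treatment" and an outcome, producing a strong observational correlation even though the treatment has no causal effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Unobserved confounder (e.g. national wealth) drives both variables.
z = rng.normal(size=n)
x = z + 0.5 * rng.normal(size=n)  # "treatment" (e.g. chocolate consumption)
y = z + 0.5 * rng.normal(size=n)  # outcome (e.g. Nobel prizes); X does NOT cause Y

# Observational correlation is strong despite no causal link.
obs_corr = np.corrcoef(x, y)[0, 1]

# Intervening on X (drawing it independently of Z) breaks the association.
x_do = rng.normal(size=n)
int_corr = np.corrcoef(x_do, y)[0, 1]

print(f"observational corr(X, Y):  {obs_corr:.2f}")   # strong
print(f"interventional corr(X, Y): {int_corr:.2f}")   # near zero
```

A policy learned naively from the observational pairs (X, Y) would recommend increasing X, while the interventional distribution shows this has no effect---the gap that causal methods must account for.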
In this project, we will study representation learning in the context of sequential decision-making. This requires understanding the mathematical difference between correlation and causation, and the ability to apply this understanding in the development of new methods. The project will touch on reinforcement learning, representation learning, and causal inference.