Reinforcement Learning for Social Policies


Social scientific research on optimal taxation is well established. However, because real-world experiments are scarce and societies vary widely, identifying a universal taxation scheme is difficult. Additionally, the dynamic nature of setting taxes alongside other policies (e.g., health or education spending) makes it hard for a government official, or any human, to oversee all the complex connections and unintended consequences in the economy. This limited oversight capacity leads to suboptimal policymaking.

Recently, computer scientists developed an algorithm called “The AI Economist,” which uses a two-level deep reinforcement learning (RL) approach: it trains a virtual government official (the first level) to learn optimal policies while observing economic agents acting in the economy (the second level). Both the official and the agents adapt to the simulated world they inhabit. Existing simulations show that the AI Economist (i.e., the virtual government official) finds tax policies with better trade-offs than existing human-designed models. Although this result is promising, these simulations need more realism before they can be tested for real-life policymaking. Thus, the overarching goal of this project is to increase the realism of existing algorithms by

  • using real-world data and
  • extending the AI Economist beyond tax policy to other public policies, such as health and education policies.
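The two-level structure described above can be illustrated with a toy loop: an inner level where agents best-respond to the current tax, and an outer level where a planner searches over tax rates. The skills, utility function, and welfare weighting below are illustrative assumptions, not the actual AI Economist implementation (which trains deep RL policies at both levels).

```python
# Toy two-level loop: inner level = adaptive agents, outer level = planner.
# All numbers and functional forms here are illustrative assumptions.
SKILLS = [1.0, 2.0, 4.0]  # hypothetical productivity of three agents

def agent_labor(skill, tax_rate):
    """Inner level: an agent picks labor hours maximizing a toy utility,
    after-tax income minus a quadratic effort cost."""
    candidates = [h / 10 for h in range(11)]  # hours in 0.0 .. 1.0
    return max(candidates,
               key=lambda h: skill * h * (1.0 - tax_rate) - h ** 2)

def social_welfare(tax_rate):
    """Planner objective: total post-redistribution income plus an extra
    weight on the worst-off agent (a crude equality term)."""
    hours = [agent_labor(s, tax_rate) for s in SKILLS]
    incomes = [s * h * (1.0 - tax_rate) for s, h in zip(SKILLS, hours)]
    rebate = sum(s * h * tax_rate for s, h in zip(SKILLS, hours)) / len(SKILLS)
    final = [i + rebate for i in incomes]
    return sum(final) + 2.0 * min(final)

def plan_tax():
    """Outer level: the planner searches over flat tax rates; in the real
    AI Economist both levels are learned jointly by deep RL."""
    return max((r / 100 for r in range(100)), key=social_welfare)
```

Even in this crude sketch, the planner must account for the agents' behavioral response: raising the tax rate changes how much labor each agent supplies, which is exactly the feedback loop that makes joint training of both levels necessary.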

Project goals

  1. To train and evaluate the AI Economist as described in [2], finding the optimal allocation of resources to maximize social welfare.
  2. To improve the realism of the first goal by using real-world data on human living conditions.
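As a concrete example of the social welfare objective in goal 1, the AI Economist paper [2] optimizes a product of equality and productivity, with equality derived from the Gini index. The sketch below assumes that form; the function names are illustrative.

```python
def gini(incomes):
    """Gini coefficient computed from the mean absolute pairwise
    income difference (assumes a positive mean income)."""
    n = len(incomes)
    mean = sum(incomes) / n
    diffs = sum(abs(a - b) for a in incomes for b in incomes)
    return diffs / (2 * n * n * mean)

def social_welfare(incomes):
    """Equality times productivity: equality rescales the Gini index so
    that a perfectly equal economy scores 1 and a maximally unequal
    one (all income held by a single agent) scores 0."""
    n = len(incomes)
    equality = 1.0 - (n / (n - 1)) * gini(incomes)
    return equality * sum(incomes)
```

Under this objective, a perfectly equal economy is scored purely by its total output, while concentrating all income in one agent drives welfare to zero, so the learned tax policy must trade the two off.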


Qualifications

  • Studies in computer science, physics, or mathematics
  • Courses in machine learning and AI
  • Courses in statistics/mathematics
  • Good programming skills (preferably in Python, TensorFlow/PyTorch)
  • Motivated, creative, and focused, with strong problem-solving skills


Date range: 
November 2020 to November 2023