Stochastic Simulation and Reinforcement Learning
The Latin American Regional Section of the International Association for Statistical Computing (IASC-LARS), the IASC-LARS School on Computational Statistics and Data Science, the International Association for Statistical Computing (IASC), and the International Statistical Institute (ISI) are pleased to invite postgraduate and undergraduate students to attend the IASC-LARS Course “Stochastic Simulation and Reinforcement Learning”.
The IASC-LARS Courses aim (1) to spread the knowledge base and advances in Statistical Computing in Latin America and the world, (2) to provide an overview of the state of the art of ongoing research in computational statistics, (3) to provide an overall perspective of the application of computational statistics to data science problems, (4) to present applications where computational statistics has been crucial to solving real-life problems, and (5) to increase the number of researchers and practitioners in computational statistics and data science.
The course will be team-taught by Dr. L. Enrique Sucar (INAOE), Dr. David F. Muñoz-Negrón (ITAM), and Dr. Eduardo F. Morales (INAOE).
Objectives:
- to introduce graphical models: Bayesian classifiers, Bayesian networks and graphical causal models, including inference and learning techniques.
- to gain some practical experience with Weka and Hugin.
- to learn the basic concepts behind Reinforcement Learning (RL).
- to understand the main algorithms in RL.
Preliminary Agenda (Mexico City local time, UTC/GMT -6 hours)
TRACK 1- Probabilistic graphical models: principles and applications
(Dr. L. Enrique Sucar – INAOE)
- Introduction: probabilistic graphical models, types of graphs.
- Bayesian classifiers: naive Bayesian classifier, TAN and BAN models, multidimensional and hierarchical classification (a short code sketch follows this list).
- Bayesian networks: parameters and structure learning.
- Graphical causal models: causal Bayesian networks, causal reasoning, learning causal models.
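For participants who want a feel for the classifier material before the course, the following is a minimal naive Bayesian classifier sketch on synthetic data. It uses Python and scikit-learn purely to keep the example self-contained; the hands-on sessions themselves use Weka and Hugin, and the data, model choice, and split below are assumptions made only for this illustration.

```python
# Minimal naive Bayesian classifier sketch (illustrative only; the synthetic
# data and train/test split are assumptions, not course material).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Two-class synthetic data: each class is a Gaussian blob in two dimensions.
X0 = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
X1 = rng.normal(loc=2.0, scale=1.0, size=(200, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# GaussianNB assumes the features are conditionally independent given the
# class, which is exactly the naive Bayes assumption covered in this track.
clf = GaussianNB()
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```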
TRACK 2 – Stochastic simulation: output analysis and applications
(Dr. David F. Muñoz-Negrón – ITAM)
- Transient and steady-state simulation
- Bootstrap (a short code sketch follows this list)
- Markov chain Monte Carlo
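As a small preview of the bootstrap topic above, the sketch below computes a percentile bootstrap confidence interval for a mean. The synthetic sample, number of resamples, and confidence level are illustrative assumptions, not material taken from the course.

```python
# Minimal bootstrap sketch: percentile confidence interval for the mean
# (the exponential sample, 5000 resamples, and 95% level are assumptions).
import numpy as np

rng = np.random.default_rng(1)
sample = rng.exponential(scale=2.0, size=100)   # stand-in for observed data

n_boot = 5000
boot_means = np.empty(n_boot)
for b in range(n_boot):
    # Resample the data with replacement and record the statistic of interest.
    resample = rng.choice(sample, size=sample.size, replace=True)
    boot_means[b] = resample.mean()

lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"point estimate: {sample.mean():.3f}")
print(f"95% percentile bootstrap interval: ({lo:.3f}, {hi:.3f})")
```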
TRACK 3 – Reinforcement learning
(Dr. Eduardo F. Morales – INAOE)
- Introduction to Reinforcement Learning (RL): Markov process, reward function, Bellman equation.
- Markov decision process solution methods: dynamic programming, Monte Carlo, and temporal-difference learning (a short code sketch follows this list)
- Strategies for large state spaces: eligibility traces, model-based RL, function approximation
- Deep RL: deep learning and RL.
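To give a concrete flavour of temporal-difference learning from the list above, here is a minimal tabular Q-learning sketch on a toy chain environment. The environment, reward structure, and hyperparameters are assumptions made only for this illustration and are not taken from the course.

```python
# Minimal tabular Q-learning sketch (temporal-difference learning).
# The 5-state chain, reward of 1 at the goal, and the hyperparameters
# below are illustrative assumptions, not course material.
import numpy as np

n_states, n_actions = 5, 2          # states 0..4; actions: 0 = left, 1 = right
goal = n_states - 1                 # reaching the last state ends the episode

def step(state, action):
    """Deterministic chain dynamics with reward 1 for reaching the goal."""
    nxt = min(state + 1, goal) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == goal else 0.0
    return nxt, reward, nxt == goal

rng = np.random.default_rng(2)
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1   # step size, discount, exploration rate

for episode in range(500):
    s = int(rng.integers(goal))          # exploring starts: random non-goal state
    done = False
    while not done:
        # Epsilon-greedy action selection.
        if rng.random() < epsilon:
            a = int(rng.integers(n_actions))
        else:
            a = int(np.argmax(Q[s]))
        s_next, r, done = step(s, a)
        # Temporal-difference update toward the Bellman target
        # r + gamma * max_a' Q(s', a').
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

print("greedy policy for non-goal states (0 = left, 1 = right):",
      np.argmax(Q[:goal], axis=1))
```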
Preliminary Course Timetable
Time | April 17, 2021 | April 18, 2021 |
08:00 – 09:30 | TRACK 1 | TRACK 2 |
09:30 – 09:45 | Break | Break |
09:45 – 11:45 | TRACK 1 | TRACK 3 |
11:45 – 12:00 | Break | Break |
12:00 – 13:30 | TRACK 1 | TRACK 3 |
15:00 – 16:30 | TRACK 2 | TRACK 3 |
Total hours | 6hrs 30min | 6hrs 30min |
References
- L. E. Sucar, “Probabilistic Graphical Models: Principles and Applications”, Springer, 2015.