Games, Decisions & Networks Research Lab

Artificial intelligence (AI) applications are becoming increasingly prevalent in social systems such as the digital economy and urban mobility. These social systems are now socio-technical systems, comprising both human and AI-powered decision-makers. This raises new questions about how AI-powered decision-makers will interact with each other and with humans in such emerging dynamical systems, and how we can apply control-theoretic methodologies for their reliable integration into our society. Therefore, at the GDN Lab,

  • We focus on developing a foundational understanding of learning and autonomy in complex, dynamic, and multi-agent systems.
  • We develop new methodologies for analyzing, controlling, and optimizing socio-technical systems.
  • We apply these methodologies to specific problems in urban mobility, robotics, and the digital economy.

As examples of our recent research projects, see the preprint arXiv:2111.11742, the poster, and the video recording below:

M. O. Sayin, F. Parise, and A. Ozdaglar, “Fictitious play in zero-sum stochastic games,” SIAM Journal on Control and Optimization, 2022.
University of Minnesota, Institute for Mathematics & Its Applications, Data Science Seminar in Apr. 2023
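The paper above studies fictitious play in zero-sum stochastic games. As a minimal illustrative sketch only (classical fictitious play in a two-player zero-sum matrix game, not the stochastic-game algorithm from the paper), the dynamics can be written as:

```python
import numpy as np

# Row player's payoff matrix for matching pennies (a zero-sum game).
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def fictitious_play(A, T=20000):
    """Classical fictitious play: at each round, every player
    best-responds to the empirical frequency of the opponent's
    past actions."""
    m, n = A.shape
    row_counts = np.zeros(m)  # empirical action counts, row player
    col_counts = np.zeros(n)  # empirical action counts, column player
    row_counts[0] += 1        # arbitrary initial actions
    col_counts[0] += 1
    for _ in range(T):
        p = row_counts / row_counts.sum()  # row player's empirical mix
        q = col_counts / col_counts.sum()  # column player's empirical mix
        # Row player maximizes expected payoff A @ q; in a zero-sum
        # game, the column player minimizes p @ A.
        row_counts[np.argmax(A @ q)] += 1
        col_counts[np.argmin(p @ A)] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

p, q = fictitious_play(A)
# In matching pennies, the empirical frequencies converge to the
# unique mixed equilibrium (1/2, 1/2) for each player.
```

By Robinson's classical result, the empirical frequencies of fictitious play converge to equilibrium in any two-player zero-sum matrix game; the stochastic-game setting of the paper extends such dynamics to state-dependent stage games.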

We are looking for new team members. Please get in touch with us if you are interested!


Recent News

  • [Jun. 2024] Our paper titled “(Smooth) fictitious-play in identical-interest stochastic games with independent continuation-payoff estimates” was accepted to Applied and Computational Mathematics.
  • [Jun. 2024] Our paper titled “Strategizing against Q-learners: A control-theoretical approach” was accepted to IEEE Control Systems Letters (L-CSS).
  • [Apr. 2024] A new version of the preprint “Decentralized learning for stochastic games: Beyond zero sum and identical interest” is available at arXiv:2310.07256v2 with a new title “Finite-horizon approximations and episodic equilibrium for stochastic games”. [url]
  • [Mar. 2024] A new preprint is available: Y. Arslantas, E. Yuceel, and M. O. Sayin, “Strategizing against Q-learners: A control-theoretical approach,” available at arXiv:2403.08906, 2024. [url]
  • [Mar. 2024] Dr. Sayin gave an invited talk titled “Taming the Wild West of AI: Game Theory for Predictable AI Interactions” at the Workshop on Vector Optimization, Active Learning, Design of Experiments, Game Theory and Their Applications.
  • [Feb. 2024] A new preprint is available: A. S. Donmez, Y. Arslantas, and M. O. Sayin, “Team collaboration vs competition: New fictitious play dynamics for multi-team zero-sum games,” available at arXiv:2402.02147, 2024. [url]
  • [Jan. 2024] A new version of the preprint “Efficient-Q learning in stochastic games” is available at arXiv:2302.09806v2 with a new title “Logit-Q dynamics for efficient learning in stochastic teams”. [url]
  • [Dec. 2023] Dr. Sayin gave an invited talk titled “Convergence of Heterogeneous Learning Dynamics in Zero-sum Stochastic Games” at the IEEE Conference on Decision and Control, “Workshop on Control, Game, and Learning Theory for Security and Privacy”. [url]
  • [Nov. 2023] A new preprint is available: Y. Arslantas, E. Yuceel, Y. Yalin, and M. O. Sayin, “Convergence of Heterogeneous Learning Dynamics in Zero-sum Stochastic Games,” available at arXiv:2311.00778. [url]
  • [Oct. 2023] A new Python library for learning in Markov games is available on GitHub. [url]
  • [Oct. 2023] A new preprint is available: M. O. Sayin, “Decentralized Learning for Stochastic Games: Beyond Zero Sum and Identical Interest,” available at arXiv:2310.07256. [url]
  • [Oct. 2023] Dr. Sayin gave an invited talk titled “Efficient-Q Learning for Stochastic Games” at the Annual INFORMS Meeting, “Algorithmic Learning in Games” Session.
  • [Sep. 2023] Dr. Sayin gave an invited talk titled “Convergent Heterogeneous Learning for Zero-sum Stochastic Games” at the Annual Allerton Conference.
  • [Aug. 2023] Our paper titled “Episodic logit-Q dynamics for efficient learning in stochastic teams” will appear in IEEE CDC’23. [url]