Sample-Based Approximation of Nash in Large Many-Player Games via Gradient Descent

Nash equilibrium is a central concept in game theory. Several Nash solvers exist, but none scale to normal-form games with many actions and many players, especially those whose payoff tensors are too large to store in memory. In this work, we propose an approach that iteratively improves an approximation to a Nash equilibrium through joint play. It accomplishes this by tracing a previously established homotopy that defines a continuum of equilibria of the entropy-regularized game under decreasing levels of regularization. This continuum asymptotically approaches the limiting logit equilibrium, proven by McKelvey and Palfrey (1995) to be unique in almost all games, thereby partially circumventing the well-known equilibrium selection problem of many-player games. To encourage iterates to remain near this path, we efficiently minimize average deviation incentive via stochastic gradient descent, intelligently sampling entries of the payoff tensor as needed. Monte Carlo estimates of the stochastic gradient from joint play are biased due to the appearance of a nonlinear max operator in the objective, so we introduce additional innovations into the algorithm to alleviate this gradient bias. The descent process can also be viewed as repeatedly constructing and reacting to a polymatrix approximation of the game. In these respects, our proposed approach, average deviation incentive descent with adaptive sampling (ADIDAS), is most similar to three classical approaches, namely homotopy-type, Lyapunov, and iterative polymatrix solvers. The lack of local convergence guarantees for biased gradient descent prevents guaranteed convergence to Nash; however, we demonstrate through extensive experiments the ability of this approach to approximate a unique Nash equilibrium in normal-form games with as many as seven players and twenty-one actions (several billion outcomes), which are orders of magnitude larger than those tractable with prior algorithms.
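To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of deterministic average-deviation-incentive minimization at a fixed entropy-regularization temperature, for a small normal-form game whose payoff tensors fit in memory. The full ADIDAS algorithm additionally anneals the temperature along the homotopy and replaces the exact tensor contractions with adaptive Monte Carlo sampling; both are omitted here, and helper names such as `deviation_payoffs`, `adi`, and `solve` are illustrative assumptions rather than the paper's API.

```python
# Minimal sketch, assuming full payoff tensors fit in memory (no sampling, fixed tau).
import jax
import jax.numpy as jnp


def deviation_payoffs(payoff_i, strategies, i):
    """Expected payoff to player i for each of its pure actions,
    given the other players' mixed strategies."""
    t = payoff_i
    # Contract opponents' axes from the last axis backwards so axis indices stay valid.
    for j in reversed(range(len(strategies))):
        if j != i:
            t = jnp.tensordot(t, strategies[j], axes=([j], [0]))
    return t  # shape (num_actions_i,)


def adi(strategies, payoffs, tau):
    """Entropy-regularized average deviation incentive:
    mean_i [ tau * logsumexp(u_i / tau) - x_i . u_i - tau * H(x_i) ],
    where u_i are player i's deviation payoffs against x_{-i}."""
    total = 0.0
    for i, payoff_i in enumerate(payoffs):
        u = deviation_payoffs(payoff_i, strategies, i)
        x = strategies[i]
        br_value = tau * jax.scipy.special.logsumexp(u / tau)  # soft best-response value
        entropy = -jnp.sum(x * jnp.log(x + 1e-12))
        own_value = jnp.dot(x, u) + tau * entropy               # regularized current value
        total += br_value - own_value
    return total / len(payoffs)


def solve(payoffs, num_actions, tau=0.1, lr=0.5, steps=2000):
    """Gradient descent on the regularized ADI over softmax-parameterized strategies."""
    logits = [jnp.zeros(a) for a in num_actions]  # uniform initial strategies

    def loss(logits):
        strategies = [jax.nn.softmax(l) for l in logits]
        return adi(strategies, payoffs, tau)

    grad_fn = jax.jit(jax.grad(loss))
    for _ in range(steps):
        grads = grad_fn(logits)
        logits = [l - lr * g for l, g in zip(logits, grads)]
    return [jax.nn.softmax(l) for l in logits]


if __name__ == "__main__":
    # Toy 3-player game with random payoff tensors of shape (2, 3, 2).
    num_actions = (2, 3, 2)
    keys = jax.random.split(jax.random.PRNGKey(0), len(num_actions))
    payoffs = [jax.random.uniform(k, num_actions) for k in keys]
    x = solve(payoffs, num_actions)
    print("approximate logit equilibrium:", [jnp.round(s, 3) for s in x])
    print("residual regularized ADI:", adi(x, payoffs, tau=0.1))
```

Parameterizing strategies by logits keeps iterates on the probability simplex without an explicit projection step; this is a simplification of the simplex-constrained updates described in the paper, adopted here only to keep the sketch short.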

