Non-atomic game

In game theory, a non-atomic game (NAG) is a generalization of the normal-form game to a situation in which there are so many players that they can be treated as a continuum. NAGs were introduced by David Schmeidler,[1] who extended the theorem on the existence of a Nash equilibrium, which John Nash originally proved for finite games, to NAGs.

Motivation

Schmeidler motivates the study of NAGs as follows:[1]

"Nonatomic games enable us to analyze a conflict situation where the single player has no influence on the situation but the aggregative behavior of "large" sets of players can change the payoffs. The examples are numerous: Elections, many small buyers from a few competing firms, drivers that can choose among several roads, and so on."

Definitions

In a standard ("atomic") game, the set of players is a finite set. In a NAG, the set of players is an infinite and continuous set $P$, which can be modeled e.g. by the unit interval $[0,1]$. There is a Lebesgue measure defined on the set of players, which represents how many players of each "type" there are.

Each player can choose one of $m$ actions ("pure strategies"). Note that the set of actions, in contrast to the set of players, remains finite as in standard games. Players can also choose a mixed strategy - a probability distribution over actions. A strategy profile is a measurable function from the set of players $P$ to the set of probability distributions on actions; the function assigns to each point $p$ in $P$ a probability distribution $x(p)$, representing the fact that the infinitesimal player $p$ has chosen the mixed strategy $x(p)$.

Let $x$ be a strategy profile. The choice of an infinitesimal player $p$ has no effect on the general outcome, but it affects his own payoff. Specifically, for each pure action $j$ in $\{1,\dots,m\}$ there is a function $u_j$ that maps each player $p$ in $P$ and each strategy profile $x$ to the utility that player $p$ receives when he plays $j$ and all the other players play as in $x$. As player $p$ plays a mixed strategy $x(p)$, his payoff is the inner product $x(p)\cdot u(p,x)$.
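
The following is a minimal sketch (not from the source) of how these objects can be represented concretely when the players are grouped into finitely many types, as in the section on a finite number of types below; the names `lengths`, `u` and `profile`, and the congestion-style payoff, are illustrative assumptions.

```python
import numpy as np

# Assumed finite-type discretization: each type occupies a sub-interval of
# [0, 1] of the given length, and a strategy profile assigns one mixed
# strategy (a distribution over the m actions) to each type.
m = 2                                   # number of pure actions
lengths = np.array([0.5, 0.5])          # measure (mass) of each type

def aggregate(profile):
    """Action-integrals of the profile: total mass of players on each action."""
    return lengths @ profile            # vector of length m

def u(t, profile):
    """Hypothetical utility vector for a player of type t: the payoff of each
    action decreases with the mass of players already using it."""
    return -aggregate(profile)

profile = np.array([[0.7, 0.3],         # mixed strategy of type 0
                    [0.2, 0.8]])        # mixed strategy of type 1

# Payoff of an infinitesimal player of type 0: the inner product x(p)·u(p,x).
print(profile[0] @ u(0, profile))
```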

A strategy profile $x$ is called pure if $x(p)$ is a pure strategy for almost every $p$ in $P$.

A strategy profile $x$ is called an equilibrium if, for almost every player $p$ and every mixed strategy $y$, it holds that $x(p)\cdot u(p,x)\geq y\cdot u(p,x)$.
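
Continuing the sketch above (and reusing its assumed `lengths`, `u` and `profile`), the equilibrium condition can be checked type by type; since the payoff is linear in the player's own mixed strategy, it suffices to compare against pure deviations.

```python
def is_equilibrium(profile, tol=1e-9):
    """Check that each type's mixed strategy does at least as well as every
    pure action (and hence as every mixed strategy y)."""
    for t in range(len(lengths)):
        util = u(t, profile)            # utility of each pure action for type t
        if profile[t] @ util < util.max() - tol:
            return False                # some pure deviation is strictly better
    return True

print(is_equilibrium(profile))                      # False for the profile above
print(is_equilibrium(np.array([[0.5, 0.5],
                               [0.5, 0.5]])))       # True: a symmetric equilibrium
```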

Existence of equilibrium

David Schmeidler proved the following theorems for the case $P=[0,1]$:[1]

Theorem 1. If for all $p$ the function $u(p,\cdot)$ is weakly continuous from $L^1([0,1])$ to $\mathbb{R}$, and for all $x$ and $i$, $j$ the set $\{p \mid u_i(p,x) > u_j(p,x)\}$ is measurable, then an equilibrium exists.

The proof uses the Glicksberg fixed-point theorem.

Theorem 2. If, in addition to the above conditions, $u(p,x)$ depends only on the action-integrals of the strategy profile, that is, on $\left(\int_P x_j(t)\,\mathrm{d}t\right)_{j\in\{1,\dots,m\}}$, then a pure-strategy equilibrium exists.
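
For illustration (an example not taken from the source), a congestion-style payoff of the form

$$u_j(p,x) = -c_j\!\left(\int_P x_j(t)\,\mathrm{d}t\right), \qquad j\in\{1,\dots,m\},$$

with each $c_j$ continuous and nondecreasing, satisfies the additional condition of Theorem 2: the payoff of action $j$ depends on the profile only through the total mass of players choosing $j$.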

The proof uses a theorem by Robert Aumann. The additional condition in Theorem 2 is essential: there is an example of a game satisfying the conditions of Theorem 1 that has no pure-strategy equilibrium. Schmeidler also showed that Nash's equilibrium theorem follows as a corollary of Theorem 2. Specifically, given a finite normal-form game $G$ with $n$ players, one can construct a non-atomic game $H$ such that each player in $G$ corresponds to a sub-interval of $P$ of length $1/n$. The utility function is defined in a way that satisfies the conditions of Theorem 2. A pure-strategy equilibrium in $H$ corresponds to a Nash equilibrium (with possibly mixed strategies) in $G$.
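
As a hedged illustration of how such an embedding can be set up (the exact construction appears in Schmeidler's paper;[1] the notation below is not from the source), one may take

$$P_i=\left[\tfrac{i-1}{n},\tfrac{i}{n}\right),\qquad \bar{x}_i=n\int_{P_i}x(t)\,\mathrm{d}t,\qquad u_j(p,x)=g_i\!\left(j,\bar{x}_{-i}\right)\quad\text{for }p\in P_i,$$

where $g_i(j,\bar{x}_{-i})$ is the expected payoff of player $i$ in $G$ from the pure action $j$ when every other player $k$ plays the mixed strategy $\bar{x}_k$. The payoff then depends on the profile only through the integrals over the intervals $P_k$, and averaging a pure-strategy equilibrium of $H$ over each interval yields a (possibly mixed) Nash equilibrium of $G$.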

Finite number of types

A special case of the general model is that in which there is a finite set $T$ of player types. Each player type $t$ is represented by a sub-interval $P_t$ of the set of players $P$. The length of the sub-interval represents the mass of players of that type. For example, it is possible that $1/2$ of the players are of type $1$, $1/3$ are of type $2$, and $1/6$ are of type $3$. Players of the same type have the same utility function, but they may choose different strategies.
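
A minimal sketch of this representation (illustrative names; types are indexed from 0 here):

```python
import numpy as np

# Each type occupies a sub-interval of [0, 1] whose length equals the mass of
# players of that type; the shares 1/2, 1/3, 1/6 are the example from the text.
shares = [1/2, 1/3, 1/6]
bounds = np.concatenate(([0.0], np.cumsum(shares)))
intervals = [(float(a), float(b)) for a, b in zip(bounds[:-1], bounds[1:])]
print(intervals)                 # [(0.0, 0.5), (0.5, 0.833...), (0.833..., 1.0)]

def type_of(p):
    """Return the (0-based) type of the player sitting at point p of [0, 1)."""
    return int(np.searchsorted(bounds, p, side="right")) - 1

print(type_of(0.25), type_of(0.6), type_of(0.9))   # 0 1 2
```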

Nonatomic congestion games

A special sub-class of nonatomic games consists of the nonatomic variants of congestion games (NCGs). This special case can be described as follows.

  • There is a finite set $E$ of congestible elements (e.g. roads or resources).
  • There are $n$ types of players. For each type $i$ there is a number $r_i$, representing the amount of players of that type (the rate of traffic for that type).
  • For each type $i$ there is a set $S_i$ of possible strategies (possible subsets of $E$).
  • Different players of the same type may choose different strategies. For every strategy $s$ in $S_i$, let $x_{i,s}$ denote the amount of players of type $i$ using strategy $s$. By definition, $\sum_{s\in S_i} x_{i,s} = r_i$. We denote $x_s := \sum_{i: s\in S_i} x_{i,s}$.
  • For each element $e$ in $E$, the load on $e$ is defined as the total amount of players using $e$, that is, $x_e = \sum_{s\ni e} x_s$. The delay experienced by players using $e$ is given by a delay function $d_e$. This function must be monotone, positive, and continuous.
  • The total disutility of each player choosing strategy $s$ is the sum of the delays on all elements in the subset $s$: $d_s(x) = \sum_{e\in s} d_e(x_e)$.
  • A strategy profile is an equilibrium if for every player type $i$, and for every two strategies $s_1, s_2$ in $S_i$: if $x_{i,s_1} > 0$, then $d_{s_1}(x) \leq d_{s_2}(x)$. That is: if a positive measure of players of type $i$ choose $s_1$, then no other possible strategy would give them a strictly lower delay (a code sketch of this condition follows the list).
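
The following is a minimal, self-contained sketch of this equilibrium condition for a tiny NCG with a single player type and two parallel roads (illustrative numbers and names, in the spirit of Pigou's example; not from the source):

```python
# Two congestible elements ("roads") a and b between the same endpoints, so
# the strategies of the single type are the singleton subsets {a} and {b}.
delays = {                      # delay functions d_e (continuous and nondecreasing here)
    "a": lambda load: 1.0,      # wide road: constant delay 1
    "b": lambda load: load,     # narrow road: delay equal to its load
}
strategies = [frozenset({"a"}), frozenset({"b"})]
rate = 1.0                      # total mass r of players of the single type

def loads(x):
    """x maps each strategy to the mass of players using it; return per-element loads."""
    return {e: sum(mass for s, mass in x.items() if e in s) for e in delays}

def strategy_delay(s, x):
    """Total delay of strategy s: the sum of element delays along s."""
    ld = loads(x)
    return sum(delays[e](ld[e]) for e in s)

def is_equilibrium(x, tol=1e-9):
    """Every strategy carrying positive mass must be no slower than any alternative."""
    best = min(strategy_delay(s, x) for s in strategies)
    return all(mass <= tol or strategy_delay(s, x) <= best + tol
               for s, mass in x.items())

everyone_on_b = {strategies[0]: 0.0, strategies[1]: rate}
split_evenly  = {strategies[0]: 0.5, strategies[1]: 0.5}
print(is_equilibrium(everyone_on_b))   # True: both roads have delay 1
print(is_equilibrium(split_evenly))    # False: road b is strictly faster (0.5 < 1)
```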

NCGs were first studied by Milchtaich,[2] Friedman[3] and Blonski.[4] Roughgarden and Tardos[5] studied the price of anarchy in NCGs.

Computing an equilibrium in an NCG can be rephrased as a convex optimization problem, and thus it can be solved in weakly polynomial time (e.g. by the ellipsoid method). Fabrikant, Papadimitriou and Talwar[6] presented a strongly polynomial-time algorithm for finding a pure Nash equilibrium (PNE) in the special case of network NCGs. In this special case there is a graph $G$; for each type $i$ there are two nodes $s_i$ and $t_i$ of $G$; and the set of strategies available to type $i$ is the set of all paths from $s_i$ to $t_i$. If the utility functions of all players are Lipschitz continuous with constant $L$, then their algorithm computes an $\varepsilon$-approximate PNE in strongly polynomial time - polynomial in $n$, $L$ and $1/\varepsilon$.
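
As a sketch of the convex-programming reformulation, one can use the standard potential-function characterization (equilibria minimize the Beckmann-style potential $\Phi(x)=\sum_{e}\int_0^{x_e} d_e(t)\,\mathrm{d}t$ over feasible strategy masses) and solve the two-road example above numerically; the solver choice and numbers are illustrative, not from the source.

```python
import numpy as np
from scipy.optimize import minimize

rate = 1.0                               # total mass of the single player type

def potential(x):
    """Beckmann-style potential for the two roads above:
    integral of 1 up to x_a  plus  integral of t up to x_b."""
    x_a, x_b = x
    return x_a + 0.5 * x_b ** 2

result = minimize(
    potential,
    x0=np.array([0.5, 0.5]),             # start from an even split
    bounds=[(0.0, None), (0.0, None)],   # nonnegative masses
    constraints=[{"type": "eq", "fun": lambda x: x.sum() - rate}],
)
print(result.x)                          # ≈ [0, 1]: at equilibrium all traffic uses road b
```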

Generalizations

The two theorems of Schmeidler can be generalized in several ways (see the final remarks of [1]):

  • In Theorem 2, instead of requiring that $u(p,x)$ depends only on $\int_P x$, one can require that $u(p,x)$ depends only on $\int_{P_1} x, \ldots, \int_{P_k} x$, where $P_1,\dots,P_k$ are Lebesgue-measurable subsets of $P$.
  • In Theorem 1, instead of requiring that each player's strategy space is a simplex, it is sufficient to require that each player's strategy space is a compact convex subset of $\mathbb{R}^m$. If the additional assumption of Theorem 2 holds, then there exists an equilibrium in which the strategy of almost every player $p$ is an extreme point of the strategy space of $p$.

References

  1. ^ a b c d Schmeidler, David (1973-04-01). "Equilibrium points of nonatomic games". Journal of Statistical Physics. 7 (4): 295–300. Bibcode:1973JSP.....7..295S. doi:10.1007/BF01014905. ISSN 1572-9613.
  2. ^ Milchtaich, Igal (1996). "Congestion Models of Competition". The American Naturalist. 147 (5): 760–783. Bibcode:1996ANat..147..760M. doi:10.1086/285878. ISSN 0003-0147. JSTOR 2463089. S2CID 55004212.
  3. ^ Friedman, Eric J. (1996-09-01). "Dynamics and Rationality in Ordered Externality Games". Games and Economic Behavior. 16 (1): 65–76. doi:10.1006/game.1996.0074. ISSN 0899-8256.
  4. ^ Blonski, Matthias (1999-08-01). "Anonymous Games with Binary Actions". Games and Economic Behavior. 28 (2): 171–180. doi:10.1006/game.1998.0699. ISSN 0899-8256.
  5. ^ Roughgarden, Tim; Tardos, Éva (2004-05-01). "Bounding the inefficiency of equilibria in nonatomic congestion games". Games and Economic Behavior. 47 (2): 389–403. doi:10.1016/j.geb.2003.06.004. ISSN 0899-8256. S2CID 10778635.
  6. ^ Fabrikant, Alex; Papadimitriou, Christos; Talwar, Kunal (2004-06-13). "The complexity of pure Nash equilibria". Proceedings of the thirty-sixth annual ACM symposium on Theory of computing. STOC '04. New York, NY, USA: Association for Computing Machinery. pp. 604–612. doi:10.1145/1007352.1007445. ISBN 978-1-58113-852-8. S2CID 1037326.