Mechanism Design: Fundamentals and Applications
Ebook · 243 pages · 2 hours

About this ebook

What Is Mechanism Design


In economics and game theory, mechanism design is a subfield that takes an objectives-first approach to designing economic mechanisms or incentives, aiming to achieve desired goals in strategic settings where players are assumed to act rationally. It is sometimes called reverse game theory because it starts from the desired end of the game and works backwards. It has a wide range of applications in economics and politics, including market design, auction theory, and social choice theory, as well as in networked systems.


How You Will Benefit


(I) Insights and validations about the following topics:


Chapter 1: Mechanism Design


Chapter 2: Laplace's Equation


Chapter 3: Likelihood Function


Chapter 4: Navier-Stokes Equations


Chapter 5: Maximum Likelihood Estimation


Chapter 6: Sufficient Statistic


Chapter 7: Linear Elasticity


Chapter 8: Fisher Information


Chapter 9: Implicit Function Theorem


Chapter 10: Kullback-Leibler Divergence


(II) Answers to the public's top questions about mechanism design.


(III) Real-world examples of the use of mechanism design in many fields.


Who This Book Is For


Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond a basic knowledge of mechanism design.


What is Artificial Intelligence Series


The artificial intelligence book series provides comprehensive coverage of more than 200 topics. Each ebook covers a specific artificial intelligence topic in depth, written by experts in the field. The series aims to give readers a thorough understanding of the concepts, techniques, history, and applications of artificial intelligence. Topics covered include machine learning, deep learning, neural networks, computer vision, natural language processing, robotics, ethics, and more. The ebooks are written for professionals, students, and anyone interested in learning about the latest developments in this rapidly advancing field.
The artificial intelligence book series provides an in-depth yet accessible exploration, from fundamental concepts to state-of-the-art research. With more than 200 volumes, readers gain a thorough grounding in all aspects of artificial intelligence. The ebooks are designed to build knowledge systematically, with later volumes building on the foundations laid by earlier ones. This comprehensive series is an indispensable resource for anyone seeking to develop expertise in artificial intelligence.

Language: English
Release date: Jun 27, 2023


    Book preview

    Mechanism Design - Fouad Sabry

    Chapter 1: Mechanism design

    In strategic settings where participants act rationally, the branch of economics and game theory known as mechanism design takes an objectives-first approach to building economic mechanisms or incentives that achieve desired goals. It is sometimes called reverse game theory because it begins at the end of the game and works backwards. It has broad applications in economics and politics, including market design, auction theory, and social choice theory, as well as in networked systems (internet interdomain routing, sponsored search auctions).

    Mechanism design studies solution concepts for a class of games with private information. As Leonid Hurwicz explained, in a design problem the goal function is the given, while the mechanism is the unknown. The design problem is therefore the inverse of traditional economic theory, which is typically devoted to analyzing the performance of a given mechanism. Two features distinguish these games:

    that the designer chooses the game structure rather than inheriting it

    that the designer is interested in the game's outcome

    Leonid Hurwicz, Eric Maskin, and Roger Myerson won the 2007 Nobel Memorial Prize in Economic Sciences for having laid the foundations of mechanism design theory.

    In an interesting class of Bayesian games, one player, called the principal, would like to condition his behaviour on information privately known to the other players. For example, the principal would like to know the true condition of a used car that a salesman is offering. He cannot learn anything simply by asking the salesman, because it is in the salesman's interest to exaggerate. In mechanism design, however, the principal has one advantage: he can design a game whose rules influence the other players to act as he would like.

    Without mechanism design theory, the principal's problem would be difficult to solve. He would have to consider all possible games and choose the one that best influences the other players' strategies, and he would risk being misled by agents who may lie to him. Thanks to mechanism design, and particularly the revelation principle, the principal need only consider games in which agents truthfully report their private information.

    A mechanism design game is a game of private information in which one of the players, called the principal, chooses the payoff structure.

    Following Harsanyi (1967), the agents receive secret messages from nature containing information relevant to their payoffs.

    For example, a message may contain information about an agent's preferences or the quality of a good for sale.

    We call this information the agent's type (usually denoted \theta, with the corresponding space of types \Theta).

    Agents then report a type to the principal (usually denoted with a hat, \hat\theta), which can be a strategic lie.

    On the basis of the reported types, the principal and the agents are paid according to the payoff structure chosen by the principal.

    The timing of the game is as follows:

    The principal commits to a mechanism y() that grants an outcome y as a function of the reported types

    The agents report, possibly dishonestly, a type profile \hat\theta

    The mechanism is executed (agents receive outcome y(\hat\theta))

    In order to understand who gets what, it is common to divide the outcome y into a goods allocation and a money transfer,

    y(\theta) = \{x(\theta),\, t(\theta)\}, \quad x \in X,\ t \in T

    where x stands for an allocation of goods rendered or received as a function of type, and t stands for a monetary transfer as a function of type.
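
    A minimal sketch in Python may make this timeline concrete: the principal commits to an allocation rule x and a transfer rule t, agents submit (possibly untruthful) reports, and the mechanism y = (x, t) is executed on those reports. The rules below are hypothetical, chosen only for illustration (a single good awarded to the highest report, with a second-price-style payment).

    def allocation(reports):
        # x(theta_hat): award the single good to the agent with the highest report.
        winner = max(range(len(reports)), key=lambda i: reports[i])
        return [1 if i == winner else 0 for i in range(len(reports))]

    def transfer(reports):
        # t(theta_hat): the winner pays the second-highest report; others pay nothing.
        ranked = sorted(reports, reverse=True)
        winner = reports.index(ranked[0])
        return [-ranked[1] if i == winner else 0.0 for i in range(len(reports))]

    def mechanism(reports):
        # y(theta_hat) = (x(theta_hat), t(theta_hat))
        return allocation(reports), transfer(reports)

    print(mechanism([0.3, 0.8]))  # ([0, 1], [0.0, -0.3])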

    As a benchmark, the designer often defines what should happen under full information.

    Define a social choice function f(\theta ) mapping the (true) type profile directly to the allocation of goods received or rendered, f(\theta ):\Theta \rightarrow X

    A mechanism, on the other hand, maps the reported type profile to an outcome (again, both a goods allocation x and a money transfer t)

    y(\hat\theta)\colon \Theta \rightarrow Y

    If the mechanism is well-behaved, it induces a Bayesian game (a game of private information) with a Bayesian Nash equilibrium. In equilibrium, agents choose their reports strategically as a function of their type:

    \hat\theta(\theta)

    Solving for the Bayesian equilibrium in such a setting is difficult, because it involves solving for agents' best-response strategies and for the best inference from a possible strategic lie. Thanks to a sweeping result called the revelation principle, no matter the mechanism, the designer can confine attention to equilibria in which agents truthfully report their types. The revelation principle states that to every Bayesian Nash equilibrium there corresponds a Bayesian game with the same equilibrium outcome in which players truthfully report their types from the outset.

    This is enormously useful. The principle allows one to solve for a Bayesian equilibrium by assuming that all players truthfully report their types (subject to an incentive compatibility constraint), eliminating the need to consider strategic lying in one fell swoop.

    The proof is quite direct.

    Consider a Bayesian game in which an agent's strategy and payoff depend on its type and on the strategies of the other agents, u_{i}\left(s_{i}(\theta_{i}),s_{-i}(\theta_{-i}),\theta_{i}\right).

    By definition, agent i's equilibrium strategy s_{i}(\theta_{i}) is Nash in expected utility:

    s_{i}(\theta_{i}) \in \arg\max_{s'_{i}\in S_{i}} \sum_{\theta_{-i}} p(\theta_{-i}\mid\theta_{i})\, u_{i}\left(s'_{i},s_{-i}(\theta_{-i}),\theta_{i}\right)

    One need only define a mechanism that induces agents to choose the same equilibrium; the easiest such mechanism is one that commits to playing the agents' equilibrium strategies for them.

    y(\hat\theta)\colon \Theta \rightarrow S(\Theta) \rightarrow Y

    Under such a mechanism the agents naturally find it optimal to reveal their types, since the mechanism plays the strategies they would have found optimal anyway.

    Formally, choose y(\theta ) such that

    \begin{aligned}\theta_{i} \in {} &\arg\max_{\theta'_{i}\in\Theta} \sum_{\theta_{-i}} p(\theta_{-i}\mid\theta_{i})\, u_{i}\left(y(\theta'_{i},\theta_{-i}),\theta_{i}\right) \\ &= \sum_{\theta_{-i}} p(\theta_{-i}\mid\theta_{i})\, u_{i}\left(s_{i}(\theta_{i}),s_{-i}(\theta_{-i}),\theta_{i}\right)\end{aligned}

    The designer of a mechanism generally hopes either

    to design a mechanism y() that implements a social choice function

    to find the mechanism y() that maximizes some value criterion (e.g. profit)

    To implement a social choice function f(\theta) is to find some transfer function t(\theta) that motivates agents to choose f(\theta).

    Formally, if the goods allocation induced by the mechanism's equilibrium strategy profile is the same as that prescribed by a social choice function, f(\theta) = x\left(\hat\theta(\theta)\right),

    we say that the mechanism implements the social choice function.

    Thanks to the revelation principle, the designer can usually find a transfer function t(\theta) that implements a social choice function by solving an associated truthtelling game.

    That is, if agents find it optimal to report their types truthfully, \hat\theta(\theta) = \theta,

    we say that such a mechanism is truthfully implementable (or simply implementable).

    The task is then to solve for a truthfully implementable t(\theta ) and impute this transfer function to the original game.

    An allocation x(\theta ) is truthfully implementable if there exists a transfer function t(\theta ) such that

    u(x(\theta),t(\theta),\theta) \geq u(x(\hat\theta),t(\hat\theta),\theta) \quad \forall\ \theta,\hat\theta \in \Theta

    This is also known as the IC constraint (incentive compatibility).
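
    To make the IC constraint concrete, here is a minimal sketch, assuming a finite type space and a quasilinear utility u(x, t, theta) = theta * x + t; the allocation and transfer schedules are hypothetical, chosen only to illustrate the inequality above.

    types = [1.0, 2.0, 3.0]                  # the type space Theta
    x = {1.0: 0.0, 2.0: 0.5, 3.0: 1.0}       # allocation x(theta)
    t = {1.0: 0.0, 2.0: -0.5, 3.0: -1.5}     # transfer t(theta)

    def u(alloc, pay, theta):
        # quasilinear utility (an assumption for this sketch)
        return theta * alloc + pay

    def truthfully_implementable(types, x, t):
        # u(x(theta), t(theta), theta) >= u(x(theta_hat), t(theta_hat), theta)
        # for every true type theta and every possible report theta_hat
        return all(
            u(x[theta], t[theta], theta) >= u(x[rep], t[rep], theta)
            for theta in types for rep in types
        )

    print(truthfully_implementable(types, x, t))  # True for this schedule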

    In applications, the IC condition is the key to describing the shape of t(\theta ) in any useful way.

    In some applications, the transfer function can even be isolated analytically.

    Additionally, if agents have the option of opting out of the game, a participation (individual rationality) constraint can be added.

    Consider a setting in which all agents have a type-contingent utility function u(x,t,\theta ) .

    Consider also a goods allocation x(\theta) that is vector-valued with dimension k (allowing for k goods), and assume it is piecewise continuous with respect to its arguments.

    The function x(\theta ) is implementable only if

    \sum_{k=1}^{n} \frac{\partial}{\partial\theta}\left(\frac{\partial u/\partial x_{k}}{\left|\partial u/\partial t\right|}\right)\frac{\partial x}{\partial\theta} \geq 0

    whenever x=x(\theta ) and t=t(\theta ) and x is continuous at \theta .

    This is a necessary condition, derived from the first- and second-order conditions of the agent's optimization problem under the assumption of truth-telling.

    Its meaning can be understood in two pieces. The first piece says that the agent's marginal rate of substitution (MRS) increases as a function of type,

    \frac{\partial}{\partial\theta}\left(\frac{\partial u/\partial x_{k}}{\left|\partial u/\partial t\right|}\right) = \frac{\partial}{\partial\theta}\mathrm{MRS}_{x,t}

    In short, agents will not report truthfully unless the mechanism offers higher types a better deal; otherwise, higher types facing a mechanism that punishes high reports would lie and declare themselves to be lower types, violating the truthtelling IC constraint. The second piece is a monotonicity condition on the allocation, \partial x/\partial\theta,

    which must be (weakly) positive: higher types must be given more of the good.

    There is potential for the two pieces to interact.

    If for some range of types the contract offered less quantity to higher types, \partial x/\partial\theta < 0, the mechanism could compensate by giving higher types a discount.

    But such a contract is already available to low-type agents, so this solution is pathological.

    Such a solution can nonetheless emerge in the process of searching for a mechanism.

    In that case the allocation must be "ironed." In a multiple-good setting the designer can also give agents an incentive to trade off between goods, offering more of one item in exchange for less of another (e.g. margarine in place of butter).

    Multiple-good mechanisms remain an ongoing problem in mechanism design theory.

    Mechanism design papers usually make two assumptions to ensure implementability:

    \frac{\partial}{\partial\theta}\frac{\partial u/\partial x_{k}}{\left|\partial u/\partial t\right|} > 0 \quad \forall k

    This is known by several names: the single-crossing condition, the sorting condition, and the Spence-Mirrlees condition. It means the utility function is shaped so that the agent's MRS is increasing in type.

    \exists\, K_{0}, K_{1} \text{ such that } \left|\frac{\partial u/\partial x_{k}}{\partial u/\partial t}\right| \leq K_{0} + K_{1}|t|

    This technical condition bounds the rate of growth of the MRS.

    These assumptions are sufficient to ensure that any monotonic x(\theta) is implementable (i.e., a t(\theta) exists that can implement it).

    In addition, in the single-good setting the single-crossing condition is sufficient to ensure that only a monotonic x(\theta) is implementable, so the designer can confine his search to a monotonic x(\theta).
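
    A minimal sketch of how this narrows the designer's search, assuming the two conditions above hold, a single good, and a finite grid of types: implementability of a candidate x(theta) comes down to x being (weakly) increasing in theta. The allocation rules below are hypothetical.

    def is_monotone(type_grid, x):
        # True if x(theta) is non-decreasing over an increasing grid of types.
        values = [x(theta) for theta in sorted(type_grid)]
        return all(a <= b for a, b in zip(values, values[1:]))

    print(is_monotone([0.0, 0.5, 1.0], lambda theta: theta ** 2))   # True: implementable
    print(is_monotone([0.0, 0.5, 1.0], lambda theta: 1.0 - theta))  # False: not implementable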

    A classic result due to Vickrey (1961) states that the seller can expect the same revenue from any member of a broad class of auctions, and that this revenue is the best the seller can achieve, when:

    the buyers have identical valuation functions (which may be a function of type);

    the buyers' types are independently distributed;

    the buyers' types are drawn from a continuous distribution;

    the type distribution has the monotone hazard rate property; and

    the mechanism sells the good to the buyer with the highest valuation.

    The last condition is crucial to the theorem. An implication is that for the seller to achieve higher revenue he must take a chance on giving the item to an agent with a lower valuation, which usually means he must risk not selling the item at all.
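
    The role of truthfulness in Vickrey's setting can be illustrated with a minimal sketch of a sealed-bid second-price auction, assuming hypothetical valuations and bids: a bidder's payoff is never improved by deviating from a truthful bid.

    def second_price_payoff(my_bid, my_value, other_bids):
        # Payoff in a second-price auction (ties resolved in the bidder's favour here).
        highest_other = max(other_bids)
        if my_bid >= highest_other:
            return my_value - highest_other  # win and pay the second-highest bid
        return 0.0                           # lose and pay nothing

    my_value = 10.0
    other_bids = [6.0, 8.0]
    truthful = second_price_payoff(my_value, my_value, other_bids)

    # No alternative bid does strictly better than bidding one's true value.
    assert all(
        second_price_payoff(b, my_value, other_bids) <= truthful
        for b in [0.0, 5.0, 8.0, 12.0, 100.0]
    )
    print(truthful)  # 2.0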

    Clarke (1971) and Groves extended the auction model first proposed by Vickrey (1961) to address the public choice problem of whether to undertake a public project, such as building a bridge, whose cost is shared by all agents. The resulting Vickrey-Clarke-Groves (VCG) mechanism can induce agents with privately known valuations to choose the socially efficient allocation of the public good. In other words, under certain conditions (for example, quasilinear utility, or when budget balance is not required) the tragedy of the commons can be circumvented.

    Consider a setting in which I agents have quasilinear utility with private valuations v(x,t,\theta), where the currency t enters utility linearly.

    The VCG designer constructs an incentive-compatible (hence truthfully implementable) mechanism to obtain the true type profile, from which the designer implements the socially optimal allocation

    x_{I}^{*}(\theta) \in \underset{x\in X}{\operatorname{arg\,max}} \sum_{i\in I} v(x,\theta_{i})

    The cleverness of the VCG mechanism lies in the way it motivates truthful revelation: it eliminates incentives to misreport by charging each agent the cost of the distortion his report causes.
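
    A minimal sketch of the idea for a binary public project (build or not), assuming quasilinear utility and hypothetical valuations; in this sketch the project cost enters social welfare directly rather than through explicit cost shares, and each agent's Clarke (pivot) payment equals the externality his report imposes on the others.

    def vcg_public_project(valuations, cost):
        # Choose the efficient decision and charge each agent its Clarke payment.
        def best_welfare(vals):
            # Welfare of the better decision for these agents: build (sum - cost) or not (0).
            return max(sum(vals) - cost, 0.0)

        build = sum(valuations) - cost >= 0.0  # socially efficient decision
        payments = []
        for i in range(len(valuations)):
            others = [v for j, v in enumerate(valuations) if j != i]
            welfare_others_at_choice = (sum(others) - cost) if build else 0.0
            # Clarke payment: the distortion agent i's presence imposes on the others.
            payments.append(best_welfare(others) - welfare_others_at_choice)
        return build, payments

    # With reports [40, 30, 5] and a cost of 60, the project is built and only
    # the pivotal agents pay a positive tax: (True, [25.0, 15.0, 0.0]).
    print(vcg_public_project([40.0, 30.0, 5.0], cost=60.0))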
