Manipulations of Equivalent Preference in Parallel Allocation Mechanism

The parallel mechanism is a decentralized mechanism for allocating indivisible items to agents that can take into account computational efficiency, economic benefits, and social equality. However, like most decentralized allocation mechanisms, the parallel protocol is not strategy-proof. In this paper, assuming the manipulator has additive preferences with possible indifferences between single objects, we study the most basic manipulation problem under the parallel allocation mechanism. For any given set of items, we prove that agent 1 can determine in polynomial time whether all objects in the set can be guaranteed. In addition, we give an algorithm for the pessimism case and prove its correctness, completeness, and polynomial time complexity.


Introduction
The problem of resource allocation is an important issue in computer science and economics. Accordingly, research in artificial intelligence theory and technology has paid increasing attention to the design of allocation systems for multiple self-interested agents. Relevant research starts from various realistic environmental constraints, including paid or unpaid allocation [1,2], whether the price of resources is restricted [3], centralized or decentralized allocation [4][5][6], and whether the resources to be allocated are divisible [7][8][9][10][11] or indivisible [12][13][14]. Such work designs and analyzes allocation systems for multiple self-interested agents at the level of practical operation (especially at the level of concrete computation), taking into account each individual's private interests and the resulting rational behavior (that is, a self-interested agent pursues the maximization of its private interests).
We mainly study how to design an allocation system for multiple self-interested agents in two respects: (1) in the case where the self-interested agents are all honest, whether allocation results that take into account economic benefits and social equality can be computed in polynomial time; and (2) if a self-interested agent attempts to find dishonest action plans (such as manipulation [15] and collusion [16]) that can bring additional benefits, whether this computational task is hard. In this paper, we focus on the second aspect, that is, we analyze the computational complexity of a self-interested agent manipulating the allocation result.
Bouveret and Lang studied a sequential resource allocation mechanism [2]. Under this sequential mechanism, no agent needs to submit any information before the allocation process begins; each agent simply picks her favorite resource from the remaining objects in turn, according to a specified order. Kalinowski et al. [17] then proved some of Bouveret and Lang's conjectures about the optimal agent order, and analyzed some computational problems related to manipulation under the sequential mechanism [18]. In addition, Huang et al. [19] studied a new parallel resource allocation mechanism, and, under the assumption that all participating agents have strict preferences, Ref. [20] analyzed the computational complexity of manipulation problems under this mechanism.
In this paper, we consider a restricted model and focus on equivalent preferences of the manipulator. It is assumed that every other agent always acts truthfully, i.e., she asks for her favorite remaining item in each round, according to a linear order; agent 1 is a manipulator who knows this fact (including the other agents' preference orders) and has additive preferences with possible indifferences between single objects (i.e., equivalent preferences). For any given set of items, we prove that agent 1 can determine in polynomial time whether all objects in the set can be guaranteed. In addition, we give an algorithm for the pessimism case and prove its correctness, completeness, and polynomial time complexity.

Model and Notations
There is a set of m ≥ 2 indivisible and distinct items O = {o_1, …, o_m}. The items are distributed to a set of agents N = {1, …, n} according to a preference profile ≽ = (≽_1, …, ≽_n), where ≽_i denotes the weak preference order of agent i over O. Let P denote the set of all possible preference profiles for the agents in N.
Let u_i : 2^O → R_+ be the utility function of agent i ∈ N. For any object o ∈ O, we also write u_i({o}) as u_i(o). In this paper, we assume that u_i(∅) = 0 and that u_i is additive, i.e., u_i(S) = Σ_{o ∈ S} u_i(o) for any set S ⊆ O. We write o_1 ≽_i o_2 to mean that agent i values object o_1 at least as much as object o_2, and we use ≻_i for the strict part, i.e., o_1 ≻_i o_2 if and only if o_1 ≽_i o_2 but not o_2 ≽_i o_1. In addition, ∼_i denotes agent i's indifference relation, i.e., o_1 ∼_i o_2 if and only if u_i(o_1) = u_i(o_2); thus ≽_i induces equivalence classes on O. We denote the preferences o_1 ≻ o_2 ∼ o_3 by the list 1: {o_1} ≻ {o_2, o_3} for short. In this paper, we assume that only agent i = 1 has nontrivial equivalence classes, i.e., the preference profile is ≽ = (≽_1, ≻_2, …, ≻_n). For i ∈ N ∖ {1}, let rank_i(o) ∈ {1, …, |O|} denote the rank of object o in agent i's preference. Let E(o) denote the set of objects that have the same value as o for agent 1.

The parallel allocation mechanism proceeds as follows. In each round, each agent reports a favorite remaining item. If an item is reported by only one agent, the item is assigned to that agent; otherwise, when more than one agent asks for the same object, the winner of a simple game (in which each agent has the same probability of winning) gets the object. This process is repeated as long as there are remaining items. We say agent i has pessimism if and only if agent i never gets an object when more than one agent (including agent i) asks for it, that is, agent i always loses the simple game.
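The mechanism described above can be sketched as a short simulation. This is a minimal illustration, not the paper's formal model; the function name and data representation (preference lists, most preferred first) are our own, and pessimism is modeled by letting the designated agent lose every tie:

```python
import random

def parallel_allocate(objects, preferences, pessimistic_agent=None):
    """Simulate the parallel allocation mechanism.

    preferences[i] is agent i's preference as an ordered list of objects
    (most preferred first).  Each round every agent reports her favorite
    remaining object; an uncontested object goes to its requester, a
    contested object goes to a uniformly random requester, except that a
    pessimistic_agent loses every tie she is involved in.
    """
    remaining = set(objects)
    allocation = {i: [] for i in preferences}
    while remaining:
        # each agent asks for her favorite remaining object
        requests = {}
        for agent, pref in preferences.items():
            ask = next(o for o in pref if o in remaining)
            requests.setdefault(ask, []).append(agent)
        for obj, askers in requests.items():
            if len(askers) == 1:
                winner = askers[0]
            else:
                # under pessimism, agent `pessimistic_agent` never wins a tie
                non_pessimistic = [a for a in askers if a != pessimistic_agent]
                winner = random.choice(non_pessimistic or askers)
            allocation[winner].append(obj)
            remaining.discard(obj)
    return allocation
```

For instance, with two agents whose top choice coincides, a pessimistic agent 1 loses every contested round and keeps only the uncontested objects.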
A strategy over some O′ ⊆ O is a finite sequence s(1), …, s(k) such that s(i) ∈ O′ and s(i) ≠ s(j) for any 1 ≤ i ≠ j ≤ k. Intuitively, assuming that the set of remaining objects is O′, s(i) specifies the object that agent 1 reports in the ith round from now on. Some strategies may fail because an object that agent 1 intends to report has already been allocated. We say a strategy s is well-defined if, for 1 ≤ i ≤ k, object s(i) is still available in the ith round from now on, and no object remains available after the kth round. In the rest of this paper, we only consider well-defined strategies, and a strategy over O is called a strategy for short. We use (N, O, ≽, T) to denote a manipulation problem for the manipulator agent 1, consisting of N, O, ≽ and a set of target objects T ⊆ O. A strategy s is successful for (N, O, ≽, T) if, assuming agent 1 follows s and has pessimism, she obtains every object in T.
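Whether a given strategy is successful can be checked by a direct simulation against truthful opponents. The following is a sketch under our own naming and representation conventions (it returns False as soon as the strategy asks for an already-allocated object, i.e., when the strategy is not well-defined in that respect):

```python
def is_successful(strategy, target, objects, others_prefs):
    """Check whether agent 1's strategy guarantees every object in
    `target`, assuming the other agents report truthfully and agent 1
    has pessimism (she loses every contested object).

    `strategy` is the ordered list of objects agent 1 asks for;
    `others_prefs[i]` is agent i's strict preference list,
    most preferred first.
    """
    remaining = set(objects)
    won = set()
    for ask in strategy:
        if ask not in remaining:
            return False          # strategy asks an unavailable object
        # truthful agents ask for their favorite remaining object
        others_ask = {next(o for o in pref if o in remaining)
                      for pref in others_prefs.values()}
        if ask not in others_ask:
            won.add(ask)          # uncontested: agent 1 gets it
        # all requested objects leave the pool this round
        remaining -= others_ask
        remaining.discard(ask)
    return set(target) <= won
```

For example, if agent 2's favorite object is also agent 1's request, pessimism means agent 1 cannot secure it, whereas asking for an object agent 2 ranks lower succeeds.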
It is assumed that agent 2 always acts truthfully, i.e., she asks for her favorite remaining item in each round, according to a strict preference order (i.e., different objects yield different utilities: u_2(o_i) ≠ u_2(o_j) for any 1 ≤ i ≠ j ≤ |O|). Agent 1 is a manipulator who knows this fact, and she has a weak preference order over O. Let E(T) denote the family of target sets equivalent to T for agent 1, obtained by replacing objects of T with equally valued objects. If B ∈ E(T) is the set whose objects have the largest ranks in agent 2's preference among all sets in E(T) (i.e., agent 2's least preferred equivalent objects), then we call B the boundary set for solving (N, O, ≽, T).
Theorem 1. Given a manipulation problem (N, O, ≽, T) with T ⊆ O, if B ∈ E(T) is the boundary set and there is no successful strategy solving (N, O, ≽, B), then for any set T′ ∈ E(T), there is also no successful strategy solving (N, O, ≽, T′).
Proof sketch. Consider n = 2 and any set T′ ∈ E(T) with T′ ∼ B. By the definition of the boundary set, each object of B is ranked no higher in agent 2's preference than its equivalent counterpart in T′. For every round k ≥ 1, let D_k(T′) denote the set of equivalent target objects of T′ that must be obtained no later than round k; then, by Remark 1, |⋃_{1≤j≤k} D_j(T′)| ≥ |⋃_{1≤j≤k} D_j(B)| for any k ≥ 1. Suppose the objects of the boundary set B cannot all be allocated to agent 1, i.e., there is no successful strategy that obtains all objects in B. Then, by Remark 1, there is some round k with more target objects due than rounds available, i.e., |⋃_{1≤j≤k} D_j(B)| > k; by the inequality above, the same holds for T′. In conclusion, if the objects of the boundary set B cannot all be obtained by a strategy for (N, O, ≽, B), then the objects of any equivalent set T′ ∼ B cannot all be allocated to agent 1. ∎ We develop Algorithm 1 to find a successful strategy if one exists. This algorithm is adapted from Ref. [20].
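Theorem 1 reduces the n = 2 case to testing a single representative of E(T). The following sketch computes such a representative by swapping each target object for the equally valued object that agent 2 likes least; the function name, tie handling, and the dictionary representation of agent 1's utilities are our own assumptions, and this is not the paper's Algorithm 1:

```python
def boundary_set(target, u1, pref2):
    """For each object in `target`, substitute the unused object of equal
    value under agent 1's utility `u1` that agent 2 ranks lowest.
    `pref2` is agent 2's strict preference list, most preferred first."""
    chosen = set()
    boundary = []
    for t in target:
        # agent 1's equivalence class of t, listed in agent 2's order
        cls = [o for o in pref2 if u1[o] == u1[t] and o not in chosen]
        pick = cls[-1]            # agent 2's least preferred equivalent
        chosen.add(pick)
        boundary.append(pick)
    return boundary
```

Intuitively, these objects are contested latest by the truthful agent, so they are the easiest equivalent targets for agent 1 to secure; if even they cannot all be obtained, no equivalent set can.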

Manipulations for N-Agents
For further research, we study the setting with more agents, where there is still only one manipulator and the other agents behave honestly. In the following, we have a set of objects O = {o_1, …, o_m}, a set of agents N = {1, …, n}, and a profile ≽ = (≽_1, ≻_2, …, ≻_n). Agent 1 is the manipulator, who knows the preference orders of the other agents and has a weak preference order. We use a two-layer model to solve the manipulation problem (N, O, ≽, T): (1) the outer layer judges whether the manipulation problem (N, O, ≽, T) has a successful strategy; (2) the inner layer finds the equivalent target set of each round.

The outer layer: we use R_k, G_k, D_k and A_k to describe the representation of a successful strategy for the kth round of the parallel allocation process:
• R_k denotes the set of items remaining after k − 1 rounds,
• G_k denotes the equivalent target set of objects found in round k,
• D_k denotes the set of items that must be obtained by round k (inclusive),
• A_k denotes the set of items acquired by the other agents in round k. Formally, let N′ = N ∖ {1}.

The inner layer: for round k, we use R̄_k, B̄_k, Ā_k, D̄_k and Ḡ_k to describe the requirements for finding the equivalent target set in round k:
• R̄_k denotes the set of objects that can remain after round k,
• B̄_k denotes the set of objects that may become the boundary target set after round k,
• Ā_k denotes the set of objects that may be asked by the other agents in round k,
• D̄_k denotes the set of objects that must appear no later than round k,
• Ḡ_k denotes the set of equivalent target objects in round k. Formally, let N′ = N ∖ {1}.

The above gives the formal definitions of the manipulation-problem-solving model, for any k ≥ i ≥ 1. Based on the above statements and assumptions, there is a successful strategy s′ for (N, O, ≽, T̄′) starting by asking the items in Ḡ′.
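One round of the outer layer amounts to a simple set update: the truthful agents each take their favorite remaining object, and those objects plus agent 1's request leave the pool. The sketch below uses our own symbol names (R for the remaining set, A_k for the other agents' acquisitions), since the original notation is not fixed here:

```python
def outer_layer_round(R, s_k, others_prefs):
    """One outer-layer round: given the remaining set R and agent 1's
    request s_k, compute A_k (the objects taken by the truthful agents,
    who each ask for their favorite object still in R) and the next
    remaining set R \ (A_k ∪ {s_k})."""
    A_k = {next(o for o in pref if o in R)
           for pref in others_prefs.values()}
    R_next = (R - A_k) - {s_k}
    return A_k, R_next
```

Iterating this update over rounds k = 1, 2, … yields the sequence of remaining sets against which the inner layer searches for the equivalent target set.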
Based on the above, if the manipulator asks for the items specified by s′ in every round i < k, then the object o′ is not asked by any agent j ∈ {2, …, n} in the kth round. Let s be the strategy that asks for the item specified by s′ in every round i < k, and asks for o′ in the kth round. Then s is a successful strategy for (N, O, ≽, T̄).