New attack that the consensus might be vulnerable to ("P + e Attack")


zack

Vitalik talks about an effective attack against POW starting at 52:27 in this video https://www.youtube.com/watch?v=S47iWiKKvLA
This family of attacks can also be used against the truthcoin consensus mechanism.
I will attempt to outline an example scenario below:
Let's say that there is a prediction market for a presidential election that Obama won. The attacker bets the wrong way (on McCain) and attempts to convince the votecoin-holders to claim that McCain won.
The attacker makes a contract which gives a bunch of money to votecoin-holders who choose McCain.
If the prediction says McCain wins, then the contract gives 0.01 of the attacker's money units to the owner of every 1 money unit worth of votecoins that chose McCain.
If the prediction says Obama wins, then the contract gives 1.00 of the attacker's money units to the owner of every 1 money unit worth of votecoins that chose McCain. Since the market ends on McCain, the attacker only has to pay the smaller amount, which is worth as much as 0.5% of the votecoins. The attacker needs to be able to afford to purchase 50% of the votecoins.

If we use Paul's smoothing constant of 0.9, it is 10x worse. The attacker only needs to be able to afford 5% of the votecoins, and he only has to spend as much as 0.05% of what the votecoins are worth. This problem only affects Paul's version of rep. My version, which is like colored coins, is immune to this attack.
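
For concreteness, here is a rough back-of-the-envelope sketch of those figures (Python; the function and its names are just my restatement of the arithmetic above, not anything from the Truthcoin code):

def attack_cost(fraction_needed, pay_if_succeeds=0.01, pay_if_fails=1.00):
    # Returns (amount actually paid out, collateral needed), both as
    # fractions of the total votecoin market cap.
    paid_out = fraction_needed * pay_if_succeeds    # the branch that pays when the attack succeeds
    collateral = fraction_needed * pay_if_fails     # what must be affordable if the attack fails
    return paid_out, collateral

print(attack_cost(0.50))   # roughly (0.005, 0.5): pay 0.5% of the cap, be able to stake 50%
print(attack_cost(0.05))   # roughly (0.0005, 0.05): with the 0.9 smoothing constant, pay 0.05%, stake 5%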

Moderator Note: Changed title to include "P + e Attack"

psztorc

Quote from: zack on January 11, 2015, 06:37:13 PM
Vitalik talks about an effective attack against POW starting at 52:27 in this video https://www.youtube.com/watch?v=S47iWiKKvLA
Too busy to watch. I'll leave it to you (or any forum-member) to summarize any relevant parts.

Quote from: zack on January 11, 2015, 06:37:13 PM
Let's say that there is a prediction market for a presidential election that Obama won. The attacker bets the wrong way (on McCain) and attempts to convince the votecoin-holders to claim that McCain won.
The attacker makes a contract which gives a bunch of money to votecoin-holders who choose McCain.
If the prediction says McCain wins, then the contract gives 0.01 of the attacker's money units to the owner of every 1 money unit worth of votecoins that chose McCain.
If the prediction says Obama wins, then the contract gives 1.00 of the attacker's money units to the owner of every 1 money unit worth of votecoins that chose McCain. Since the market ends on McCain, the attacker only has to pay the smaller amount, which is worth as much as 0.5% of the votecoins. The attacker needs to be able to afford to purchase 50% of the votecoins.
While I agree that one could do this, you have not shown that there is a unique strategic equilibrium where everyone expects everyone to vote for McCain, and no one can do better. One requirement would be to consider the effect of "counter-contracts" with McCain and Obama swapped, neutralizing the first "contract". Is it just a bidding war with no convergence? VTC-holders have a direct incentive to protect the value of the VTC they purchased by assurance-contracting these counter-contracts into existence. They are rewarded with their own money so this doesn't seem to go anywhere... (either way, it is up to you, not me, to demonstrate that the attack-equilibrium can survive this new choice-dimension).

But, for fun, let's say no one countered the contract.

What if every single voter responds to this by splitting his/her votecoins into two pools: one 98% of their holdings, and the other 2% of their holdings? With the 98% they vote Obama, with the 2% they vote McCain. Each person's 2% account loses a substantial portion of VoteCoin (in fact, the smallest VTC holder gets a full alpha=.1 wiped out of his 2% account), but each person's 98% account regains exactly as much VTC. There is no net difference to anyone (Authors, Miners, Traders, Voters), other than that every Voter collects "1.00 of the attacker's money units". A direct transfer from the attacker to the VTC holders.

Because anyone can backstab this strategy to exchange lost-VTC for gained-contract-payments, I would expect individuals would increase and decrease that 2% parameter, in relation to the size of the payoff. The only reason that this state of affairs is not a Nash Equilibrium is because the attacker can profitably deviate...by not attacking at all.
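
A toy sketch of this split strategy (Python). Two assumptions of mine, not stated above: VTC bled out of the lying ballots is redistributed pro rata to the honest ballots, and the bleed rate is the alpha=.1 mentioned for the smallest holder:

ALPHA = 0.1    # assumed redistribution rate on the lying ballot
SPLIT = 0.02   # fraction of holdings placed on the "lie" ballot
BRIBE = 1.00   # attacker pays 1.00 per unit of lying VTC when the attack fails

def split_outcome(holding=1.0):
    lie_pool, honest_pool = holding * SPLIT, holding * (1 - SPLIT)
    lost = lie_pool * ALPHA     # VTC bled out of the lying ballot
    gained = lost               # identical splits -> the honest ballot gets it all back
    net_vtc = (lie_pool - lost + honest_pool + gained) - holding
    bribe = lie_pool * BRIBE    # paid by the attacker
    return net_vtc, bribe

print(split_outcome())   # roughly (0.0, 0.02): no net VTC change, just a transfer from the attacker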

Quote from: zack on January 11, 2015, 06:37:13 PM
My version, which is like colored-coins is immune to this attack.
Perhaps you should be a little more measured, given your track record.
Nullius In Verba

zack

PS>They are rewarded with their own money so this doesn't seem to go anywhere
Sorry, I didn't explain clearly. The attacker has to be able to afford to purchase 1/2 of the votecoins. He gives a binding contract to purchase them. The votecoin holders aren't getting their own money back, they are sacrificing their votecoins to get a larger sum of money which the attacker is putting at risk.

PS>But, for fun, let's say no one countered the contract.
These contracts are very expensive to make. I doubt anyone is altruistic enough to throw away so much money.

PS>What if every single voter responds to this by splitting his/her votecoins into two pools: one 98% of their holdings, and the other 2% of their holdings

If you only cheat with 2% of your money, then you are only earning about 2% as much money as you possibly could. If you know that 98% of people are going to be honest, then you should vote 100% dishonest so that you can take more money from the attacker.

I will make a square showing how much the votecoin-holder can earn in each case, to show the Nash equilibrium.
                | LIE  | HONEST
attack fails    | 1.51 | 1.5
attack succeeds | 1.5  | 0

Whether the attack fails or succeeds, it is still in the interest of the votecoin-holders to vote incorrectly.
So, the Nash equilibrium is that the attack will succeed.
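
Here is a minimal best-response check of that square (a Python sketch; the payoffs are copied from the table above, the dict and names are mine):

payoff = {   # payoff[outcome][action] for one votecoin-holder
    "attack fails":    {"LIE": 1.51, "HONEST": 1.5},
    "attack succeeds": {"LIE": 1.5,  "HONEST": 0.0},
}
for outcome, row in payoff.items():
    print(outcome, "-> best response:", max(row, key=row.get))
# LIE is the best response in both rows, which is the claim that lying
# pays better whether or not the attack succeeds.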

psztorc

Quote from: zack on January 11, 2015, 08:43:59 PM
PZ>They are rewarded with their own money so this doesn't seem to go anywhere
The attacker has to be able to afford to purchase 1/2 of the votecoins. He gives a binding contract to purchase them. The votecoin holders aren't getting their own money back, they are sacrificing their votecoins to get a larger sum of money which the attacker is putting at risk.
Wouldn't it be clearer to just say what the attacker does?

Quote from: zack on January 11, 2015, 08:43:59 PM
PZ>But, for fun, let's say no one countered the contract.
I don't understand "counter the contract".
Imagine all VTC owners collectively have the option to make an assurance contract saying:
"""
The attacker makes a contract which gives a bunch of money to votecoin-holders who choose Obama.
If the prediction says Obama wins, then the contract gives 0.01 of the attacker's money units to the owner of every 1 money unit worth of votecoins that chose Obama.
If the prediction says McCain wins, then the contract gives 1.00 of the attacker's money units to the owner of every 1 money unit worth of votecoins that chose Obama.
"""
This would be a "counter-contract", because now VTC are being bribed equally, no matter what they do.

Quote from: zack on January 11, 2015, 08:43:59 PM
PZ>What if every single voter responds to this by splitting his/her votecoins into two pools: one 98% of their holdings, and the other 2% of their holdings

If you only cheat with 2% of your money, then you are only earning about 2% as much money as you possibly could. If you know that 98% of people are going to be honest, then you should vote 100% dishonest so that you can take more money from the attacker.
No, because 1 * 2% = .02, which is more than .01 * 100% = .01.  And this ignores the effect on the VTC market cap.

You did not explain where your numbers come from, so your square proves nothing.
Nullius In Verba

zack

In response to the counter-contract idea: The votecoin-holders could make a bigger counter-contract. Then the outcome of the prediction market will be decided by whoever is willing to spend more money on making the bigger contract.

The attacker makes a contract that says this:
"If Obama wins, then I give 2.01 votecoin worth of cashcoin to every votecoin-holder who voted for McCain and
if McCain wins, then I give 0.01 votecoin worth of cashcoin to every votecoin-holder who voted for McCain"

                | LIE  | HONEST
attack fails    | 2.01 | 2
attack succeeds | 2.01 | 0

Derivation of each number in the table:
If the votecoin-holder lies, and the attack fails, then he loses his votecoin, but he gets the 2.01 prize from the attacker to make up for it.
If the votecoin-holder is honest, and the attack fails, then he doubles his votecoin, since he takes the votecoin away from the liar.
If the votecoin-holder lies, and the attack succeeds, then he doubles his votecoin, since he takes the votecoin away from the honest votecoin-holder, and he also gets the 0.01 prize from the attacker.
If the votecoin-holder is honest, and the attack succeeds, then he loses his votecoin, and does not get a prize from the attacker, so he has 0.
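
The same derivation as a small Python sketch (the "winners take the losers' votecoin, doubling it" rule is the simplification used in this example, not the real consensus math; the function and constant names are mine):

PRIZE_IF_OBAMA_WINS = 2.01    # paid per VTC to McCain (lying) voters if the attack fails
PRIZE_IF_MCCAIN_WINS = 0.01   # paid per VTC to McCain (lying) voters if the attack succeeds

def payout(lies, attack_succeeds):
    on_winning_side = (lies == attack_succeeds)   # reported outcome matches this voter's vote
    vtc = 2.0 if on_winning_side else 0.0         # winners take the losers' votecoin
    prize = 0.0
    if lies:
        prize = PRIZE_IF_MCCAIN_WINS if attack_succeeds else PRIZE_IF_OBAMA_WINS
    return vtc + prize

for succeeds in (False, True):
    print("attack succeeds" if succeeds else "attack fails",
          {"LIE": payout(True, succeeds), "HONEST": payout(False, succeeds)})
# attack fails    {'LIE': 2.01, 'HONEST': 2.0}
# attack succeeds {'LIE': 2.01, 'HONEST': 0.0}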

psztorc

Quote from: zack on January 11, 2015, 09:50:56 PM
In response to the counter-contract idea: The VTC owners cannot make a contract that spends the attacker's funds because they don't know the attacker's private key. They could offer to spend more money. So the outcome of the prediction market will be decided by whoever is willing to spend more money.
They aren't. So the above is not a "response to the counter-contract idea".

Quote from: zack on January 11, 2015, 09:50:56 PM
The attacker makes a contract that says this:
"If Obama wins, then I give 2.01 votecoin worth of cashcoin to every votecoin-holder who voted for McCain and
if McCain wins, then I give 0.01 votecoin worth of cashcoin to every votecoin-holder who voted for McCain"

                | LIE  | HONEST
attack fails    | 2.01 | 2
attack succeeds | 2.01 | 0

                | 0%-LIE          | 2%-LIE          | 100%-LIE
Attack Fails    | (0.00*2.01)     | (0.02*2.01)     | (1.00*2.01)
Attack Succeeds | (0.00*0.01) - x | (0.02*0.01) - x | (1.00*0.01) - x

Where x is the (nonzero, evenly-felt) loss of each VTC owner due to the attack. (Of course, in this instance, it happens to make no difference).
We then rank the preferences as follows:

         | 0% | 2% | 100%
Fails    | #3 | #2 | #1
Succeeds | #6 | #5 | #4

Obviously, if every VTC owner used the strategy I outlined for "2%", the attacker would 'Fail', putting us at a stable #2 Nash Equilibrium. By deviating toward 100%, they would risk hitting #4 instead of #1. They can all simultaneously guarantee that they don't trip the attack-success barrier by splitting their vote, as I described. They could even safely split 51% / 49% (of their account), but if others split on a different level (for example, 100% / 0%), the VTC-owners who split closer to 50-50 stand to lose VTC to those who split further from 50-50. This is because all the lie-ballots are the same, and will draw for last [or second, or whatever non-first slot] and bleed a proportion of their VTC. The less in those ballots, the less bleeding.
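
As a sketch of the numbers behind these ranks (Python; x is set to an arbitrary positive value for illustration, since only its sign matters here):

def payoff(lie_fraction, attack_succeeds, x=0.05):   # x > 0 is the market-cap loss if the attack succeeds
    return lie_fraction * 0.01 - x if attack_succeeds else lie_fraction * 2.01

for frac in (0.0, 0.02, 1.0):
    print(f"{frac:>4}-LIE  fails: {payoff(frac, False):.4f}  succeeds: {payoff(frac, True):.4f}")
# Reproduces the #1-#6 ordering; note in particular that #2 (0.02*2.01 = 0.0402)
# beats #4 (1.00*0.01 - x), so "2% and the attack fails" is preferred to "100% and it succeeds".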
Nullius In Verba

zack

If the votecoin holders give encrypted votes, and the votes are tallied up in an SMPC (Secure Multi-Party Computation), then this vulnerability disappears. It is no longer possible to tell which way each voter voted.

zack

This solution takes ~1/100th as much time and fees as SMPC:

Say there are 100 votecoin holders in a jury.
Each voter makes their encrypted vote, then they collect all the encrypted votes onto one big page.
Each voter makes sure that their own vote is on the page.
Each voter signs the page.
Each voter then creates an un-signed transaction that reveals their vote. It would be impossible for a voter to prove how he voted (unless all 100 voted the same way), so the attacker won't know who to reward.
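
A rough sketch of those steps (Python; hash commitments stand in for the encrypted votes, and the page-signing step is only indicated, both my simplifications):

import hashlib, os

def commit(vote):
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + vote.encode()).hexdigest()

jury_votes = ["Obama"] * 60 + ["McCain"] * 40       # 100 jurors
commitments = [commit(v) for v in jury_votes]       # step 1: committed ("encrypted") votes
page = sorted(c for _, c in commitments)            # step 2: all commitments on one page
assert commitments[0][1] in page                    # step 3: each voter checks their own entry
# step 4: each voter signs `page` (signing omitted here)
# step 5: reveals are broadcast unsigned, so a reveal matches the page but not a juror:
salt, vote = commitments[0][0], jury_votes[0]
assert hashlib.sha256(salt + vote.encode()).hexdigest() in page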

psztorc

This seems a little ad hoc. I would prefer to discuss my claim that, fundamentally, such a bribe wouldn't be a problem at all.
Nullius In Verba

zack

Putting all the votes onto one page and having everyone sign the page does not work.
If we cannot tell how they voted, then we cannot redistribute the votecoin.

SMPC would still work, because we can make everyone's votecoin balances encrypted.

>I would prefer to discuss my claim that, fundamentally, such a bribe wouldn't be a problem at all.

Now I am confused. You even made a chart where each row is strictly increasing.
Why do you still think the bribe won't be a problem?

>the VTC-owners who split closer to 50-50 stand to lose VTC to those who split further from 50-50
I assume you are looking at the row where the attack fails.
When the attack fails, the VTC owners who are 100% dishonest earn more money than they would have received from being <100% dishonest.
(When the attack succeeds, the VTC owners who are 100% dishonest earn more money than they would have received from being <100% dishonest.)
VTC do get transferred from the dishonest towards the honest, so the honest nodes are rewarded in excess of a normal round of voting.
The bribe is big enough to exceed how many VTC are lost _and_ more than make up for how much money they could have made by being honest.
The attacker needs to put a very large amount of money at risk. There is a possibility that 49% of voters will be dishonest, and then the contract will give a ton of the attacker's money to the dishonest voters.

psztorc

Quote from: zack on January 12, 2015, 12:31:53 PM
Now I am confused. You even made a chart where each row is strictly increasing.
Why do you still think the bribe won't be a problem?
I would only be repeating myself.

I guess I can clarify that the #'s are ranks, not raw payouts (hence the phrase "rank the payouts"), players want to have #1 more than #2. By moving to the right, one risks falling to the second (worse) row. This is pretty clear from the payout table, which has obviously superior 'up' and 'right' directions. The issue is that if >50% of the VTC move right, everyone risks falling to the second row...they will only move right enough to exploit the attacker, and they will want to do this in a very safe way. It is all explained in the first post ("What if every single...").

Quote from: zack on January 12, 2015, 12:31:53 PM
>the VTC-owners who split closer to 50-50 stand to lose VTC to those who split further from 50-50
I assume you are looking at the row where the attack fails.
No, I am looking at the RBCR from the SVD-consensus code in Truthcoin.

Quote from: zack on January 12, 2015, 12:31:53 PM
When the attack fails, the VTC owners who are 100% dishonest earn more money than they would have received from being <100% dishonest.
(When the attack succeeds, the VTC owners who are 100% dishonest earn more money than they would have received from being <100% dishonest.)
No. #2 > #4.
#2: (0.02*2.01) = .0402
#4: {(1.00*0.01) - x } = .0100 - x   (x is positive)
( Since you didn't read the table in the first place, I struggle to understand what's to be gained by my explaining the table in the second place... )

Quote from: zack on January 12, 2015, 12:31:53 PM
VTC do get transferred from the dishonest towards the honest, so the honest nodes are rewarded in excess of a normal round of voting.
The bribe is big enough to exceed how many VTC are lost _and_ more than make up for how much money they could have made by being honest.
No. Again, you did not read my initial response about the pooling (2%) strategy, so there is nothing to be gained by my repeating it again. Hopefully, anyone who is interested in my response to this can scroll up to "What if every single...". If anyone has specific questions about this example-strategy, I am willing to answer them.

Quote from: zack on January 12, 2015, 12:31:53 PM
The attacker needs to put a very large amount of money at risk. There is a possibility that 49% of voters will be dishonest, and then the contract will give a ton of the attacker's money to the dishonest voters.
The strategy I described attempts to drain this money from the attacker in a safe, incentive-compatible way. You should read about it so that you can point out any flaws in it. However, the fact that failure pays more than success creates attraction in that area, which is already attractive because of x (the VTC market cap).
Nullius In Verba

zack

I agree with you that the votecoin-holders collectively would much prefer the attack to fail, but that just isn't how Nash equilibria work.
Here are a bunch of examples where the Nash equilibrium is not the highest payout:
*Tragedy of the commons. The collective interest is to preserve the public good; the individual interest is to exploit it.
*Crabs in a bucket. They all hold each other down so no one can climb out. It is in their collective interest to stop climbing and climb out one at a time, but the Nash equilibrium works to trap them all.
*Stampeding crowds of humans fighting to go through a doorway. The collective interest is for orderly lines; the individual interest is to get out first.
*We speak English instead of Esperanto or Toki Pona. The collective interest is towards an easier, more precise language; the individual interest is to talk to one's peers and not waste time learning multiple languages.

The attack is explained pretty well on lesswrong
http://lesswrong.com/lw/dr9/game_theory_as_a_dark_art/
under the subheading "The Hostile Takeover"

It was in the shareholders' collective interest to sell at the higher price, but because of the greed of a minority, they ended up selling at the lower price.

psztorc

Quote from: zack on January 12, 2015, 08:56:11 PM
Here are a bunch of examples where the Nash equilibrium is not the highest payout.
It is you who does not understand how Nash equilibria work. They have nothing to do with "the highest payout".

You are instead describing coordination failure, but in my first post I explained how to safely coordinate in a way that is also a NE.
Nullius In Verba

zack

The consensus is such that voters are unable to coordinate. That way the voters cannot make the prediction contradict the real world and steal funds from users.

So why do you think the voters will suddenly gain the ability to coordinate every time this attack occurs?

vbuterin

So, it is indeed possible to make a counter-coordination contract to defeat the first contract. However, this does result in a bidding war, and so the wrong answer is going to win if the attacker overpowers the combined weight of altruists (note that that's specifically the weight of _altruists_, or rather in my lingo altruists-prime, not just the combined weight of people who have _any_ incentive to see their preferred outcome win, due to the public goods problem). But an algorithm that works only if the attacker has less economic weight than altruists-prime is a low bar; even naive PoS beats it.

> VTC-holders have a direct incentive to protect the value of the VTC they purchased by assurance-contracting these counter-contracts into existence.

Ah, so that's why we'll have different views on this. My position is that assurance contracts don't work :)

Now, there is another kind of counter-coordination that Vlad Zamfir figured out that does work. Essentially, first of all, instead of the naive Schellingcoin mechanism where winners get P and losers get 0, we add the anti-coordination game to at least the extent at which the mechanism always has an equal total revenue, ie. if there are k winners, winners get NP/k and losers get 0. Then, set up the contract C such that:

(i) to join C you need to put down a security deposit
(ii) after you join C, you need to provably vote with a 60% chance of Obama and a 40% chance of McCain (ie. use some common entropy to decide your vote with that probability distribution, eg. vote Obama iff sha3(block hash) % 10 < 6)
(iii) after you join C and get your reward if you vote Obama, you need to equally redistribute the reward that you get, as well as any bribes that you receive, among all participants in C
(iv) if you violate (ii) or (iii) you lose the deposit

The expected collective payoff, assuming everyone joins C, is going to be P * N + (P + ϵ) * N * 0.4 ~= P * N * 1.4. The incentive to join C is that you receive an expected payoff of 1.4 * P instead of P. Once you join, the security deposit bounds you to participate. The key trick here is that the contract allows the participants to provably share the rewards and collect the maximum possible benefit from the entire combined game. The mechanism doesn't inherit the problems of assurance contracts for public goods because you have the ability to exclude non-participants from sharing in the collective gain (namely, the attacker's attempted bribe).
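
A quick numeric check of that expected payoff (Python; the values for P, N, and ϵ are illustrative choices of mine, just to show the ~1.4x factor):

P, N, eps = 1.0, 100, 0.01
p_obama = 0.6                                    # each member of C votes Obama with 60% probability
schelling_revenue = P * N                        # the mechanism pays NP/k to the k winners, NP in total
bribe_revenue = (P + eps) * N * (1 - p_obama)    # attacker's bribe goes to the ~40% who voted McCain
per_member = (schelling_revenue + bribe_revenue) / N
print(per_member)   # ~1.404, versus P = 1.0 for staying outside C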

Essentially, this is basically a way of using a version of my decentralized coordination contract from https://www.youtube.com/watch?v=S47iWiKKvLA&feature=youtu.be (52:27) against Andrew Miller's centralized coordination contract.