New attack that the consensus might be vulnerable to ("P + e Attack")

psztorc

Quote from: zack on January 13, 2015, 03:20:06 AM
The consensus is such that voters are unable to coordinate. That way the voters cannot make the prediction contradictory to the real world and steal funds from users.

So why do you think the voters will suddenly gain the ability to coordinate every time this attack occurs?
It is about information sets...the true ballot (" b* ") is known to everyone, actual votes cast are unknown, and there is an incentive to become a double-agent (which discourages cartels).

Voters can coordinate on b*, because there is low-cost/publicly-available information for them to use. Coordinating a lie is difficult because the only info-source (the liar himself) is suspect (because of RBCR).

The coordination-scheme I proposed uses only public information (in your attack, you assumed that all Voters would learn of the existence of your attack-contract).
Nullius In Verba

psztorc

Quote from: vbuterin on January 13, 2015, 07:59:56 PM
So, it is indeed possible to make a counter-coordination contract to defeat the first contract. However, this does result in a bidding war, and so the wrong answer is going to win if the attacker overpowers the combined weight of altruists (note that that's specifically the weight of _altruists_, or rather in my lingo altruists-prime, not just the combined weight of people who have _any_ incentive to see their preferred outcome win, due to the public goods problem). But an algorithm that works only if the attacker has less economic weight than altruists-prime is a low bar; even naive PoS beats it.
I do agree that the counter-contracts create a bidding war, and that bidding wars are a low bar. Although in this case the right answer is winning by default, and in cases of a tie, I readily agree that it would hardly be satisfying if I stopped here.

Quote from: vbuterin on January 13, 2015, 07:59:56 PM
> VTC-holders have a direct incentive to protect the value of the VTC they purchased by assurance-contracting these counter-contracts into existence.

Ah, so that's why we'll have different views on this. My position is that assurance contracts don't work :)
Oh, what a noble mind is here o'erthrown 8)

Quote from: vbuterin on January 13, 2015, 07:59:56 PM
Now, there is another kind of counter-coordination that Vlad Zamfir figured out that does work. Essentially, first of all, instead of the naive Schellingcoin mechanism where winners get P and losers get 0, we add the anti-coordination game to at least the extent at which the mechanism always has an equal total revenue, ie. if there are k winners, winners get NP/k and losers get 0. Then, set up the contract C such that:

(i) to join C you need to put down a security deposit
(ii) after you join C, you need to provably vote with a 60% chance of Obama and a 40% chance of McCain (ie. use some common entropy to decide your vote with that probability distribution, eg. vote Obama iff sha3(block hash) % 10 < 6)
(iii) after you join C and get your reward if you vote Obama, you need to equally redistribute the reward that you get, as well as any bribes that you receive, among all participants in C
(iv) if you violate (ii) or (iii) you lose the deposit

The expected collective payoff, assuming everyone joins C, is going to be P * N + (P + ϵ) * N * 0.4 ~= P * N * 1.4. The incentive to join C is that you receive an expected payoff of 1.4 * P instead of P. Once you join, the security deposit bounds you to participate. The key trick here is that the contract allows the participants to provably share the rewards and collect the maximum possible benefit from the entire combined game. The mechanism doesn't inherit the problems of assurance contracts for public goods because you have the ability to exclude non-participants from sharing in the collective gain (namely, the attacker's attempted bribe).

Essentially, this is basically a way of using a version of my decentralized coordination contract from https://www.youtube.com/watch?v=S47iWiKKvLA&feature=youtu.be (52:27) against Andrew Miller's centralized coordination contract.
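The arithmetic in the quoted scheme can be made concrete with a short sketch of clause (ii)'s entropy-based vote and the per-voter expected payoff. All numbers here are hypothetical; `P` is the base reward and `eps` the attacker's sweetener:

```python
import hashlib

def vote_from_entropy(block_hash: bytes) -> str:
    # Clause (ii): vote Obama iff sha3(block hash) % 10 < 6,
    # i.e. a provable 60/40 split driven by common entropy.
    digest = int.from_bytes(hashlib.sha3_256(block_hash).digest(), "big")
    return "Obama" if digest % 10 < 6 else "McCain"

def expected_payoff(P: float, eps: float) -> float:
    # With everyone in C: the 60% Obama voters split the fixed pot N*P
    # (so each winner's share is P/0.6), and the 40% McCain voters each
    # collect the attacker's bribe P+eps; C pools and redistributes both.
    return 0.6 * (P / 0.6) + 0.4 * (P + eps)

print(vote_from_entropy(b"example block hash"))
print(expected_payoff(100.0, 1.0))  # P + 0.4*(P + eps) = 140.4, i.e. ~1.4*P
```

This reproduces the quoted estimate: the pooled expected payoff is roughly 1.4P per participant, versus P for staying out.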
Such a scheme strikes me as remarkably similar to my original response (which everyone seems to be overlooking!! ...it begins with: "What if every single voter...."). Instead of joining with some probability, everyone partially joins, and instead of using a new smart contract to redistribute the proceeds, each person partially benefits. From my tinkering, I confidently guess that a safe equilibrium exists.

I would have been very interested (but overwhelmingly surprised) if no such counter-scheme existed or could be discovered...it occurred to me that, if so, the mere existence of Ethereum could invalidate many blockchain consensus schemes (even those powering Ethereum itself), as well as rival Ethereum contracts. In fact, precisely because of that reductio ad absurdum, I wasn't so worried.

I enjoyed your presentation...lots of practical game theory. I'm glad to see you excited about these Kaldor-Hicks/compensation-principle puzzles...it suits Ethereum.
Nullius In Verba

zack

I don't think Vlad's counter-contract will work.

On a normal round, everyone puts M money at risk, and gets M money back. If nearly half the players lie, then the other half win nearly 2M, and the liars get none.
Attacker offers P+e to liars if the attack fails, where P=2M.
He offers e to liars if the attack succeeds.
Joining C puts a bond at risk of size B >> 2M.
People in C are expected to vote honestly 60% of the time, and to lie 40% of the time.

                | Join C and lie | Join C and conform | Don't join C, honest 100% | Don't join C, lie 100% |
Attack fails    | -B + P + e     | 1.2M + 0.4P + 0.4e | 2M                        | P + e                  |
Attack succeeds | -B + 2M + e    | 0.8M + 0.4e        |                           | 2M + e                 |

The biggest number in each row is "Don't join C and lie 100%" (beating "conform" by just 0.6e when the attack fails), so the Nash equilibrium is that the attack will succeed.

I think this is a bidding war where the attacker has to be willing to spend 1.0 in attack-bribes to overtake 0.6 spent on defence-bribes.
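zack's table can be reproduced mechanically; with any B >> 2M and a small e, the last column dominates both rows. Numbers are hypothetical, and the blank honest-voter cell in the "attack succeeds" row is taken as 0 (honest stake lost):

```python
M, e = 100.0, 1.0
P = 2 * M          # attacker's headline bribe equals the honest winnings
B = 10 * M         # security deposit for joining C, chosen so B >> 2M

payoffs = {
    "attack fails": {
        "join C and lie":            -B + P + e,
        "join C and conform":        1.2 * M + 0.4 * P + 0.4 * e,
        "don't join C, honest 100%": 2 * M,
        "don't join C, lie 100%":    P + e,
    },
    "attack succeeds": {
        "join C and lie":            -B + 2 * M + e,
        "join C and conform":        0.8 * M + 0.4 * e,
        "don't join C, honest 100%": 0.0,  # blank in the table; assumed here
        "don't join C, lie 100%":    2 * M + e,
    },
}

for row, cells in payoffs.items():
    best = max(cells, key=cells.get)
    print(f"{row}: best = {best}")  # both rows: don't join C, lie 100%
```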

vbuterin

Quote
What if every single voter responds to this by splitting his/her votecoins into two pools: one 98% of their holdings, and the other 2% of their holdings? With the 98% they vote Obama, with the 2% they vote McCain. Each person's 2% account loses a substantial portion of VoteCoin (in fact, the smallest VTC holder gets a full alpha=.1 wiped out of his 2% account), but each person's 98% account regains exactly as much VTC. There is no net difference to anyone (Authors, Miners, Traders, Voters), other than that every Voter collects "1.00 of the attacker's money units". A direct transfer from the attacker to the VTC holders.

So, the only issue there is, why would each voter do that? If it is better to do 98/2 in favor of Obama rather than 100/0, why not do 0/100 in favor of McCain? It's certainly in the collective interest to do 98/2, but not in the individual interest. The difference between this and the counter-coordination contract approach is that with the counter-coordination contract, by joining the contract you are pre-committing to vote 98/2 (or 60/40 as I suggested), and it's this pre-commitment that guarantees that others will pre-commit to share bribes with you. It's a different game, because it's two-stage.

Quote from: psztorc on January 13, 2015, 08:36:16 PM
I would have been very interested (but overwhelmingly surprised) if no such counter-scheme existed or could be discovered...it occurred to me that, if so, the mere existence of Ethereum could invalidate many blockchain consensus schemes (even those powering Ethereum itself), as well as rival Ethereum contracts. In fact, precisely because of that reductio ad absurdum, I wasn't so worried.

So, the problem here is that you are assuming Ethereum is necessary for secure coordination. It's not. It's necessary for _the average person_ to easily engage in secure coordination. Large businesses and governments can perform any of these attacks with no Ethereum required just by making promises and using their reputation as collateral.

Quote
I don't think Vlad's counter-contract will work.

So, to give what I think is a summary, if the attacker instead promises substantially more than P as a bribe if the mechanism loses, then it will indeed make more sense to not participate and lie, because by lying you're guaranteed 100% of the attacker's reward instead of just 40% of it. Okay, I'll be awaiting Vlad's response :)

psztorc

Quote from: vbuterin on January 13, 2015, 09:25:25 PM
So, the only issue there is, why would each voter do that? If it is better to do 98/2 in favor of Obama rather than 100/0, why not do 0/100 in favor of McCain? It's certainly in the collective interest to do 98/2, but not in the individual interest.
No, not in this case. I explain in sentence 4 and elaborate in 5 and 6:
Quote from: psztorc on January 11, 2015, 10:26:05 PM
They could even safely split 51% 49% (of their account), but if others split on a different level (for example, 100% 0%), the VTC-owners who split closer to 50-50 stand to lose VTC to those who split further from 50-50. This is because all the lie-ballots are the same, and will draw for last [or second, or whatever non-first slot] and bleed a proportion of their VTC. Less in those ballots, the less bleeding.
Voters want to coordinate a globally identical split-% (even if they don't want to split at all, the split would be 0, 100%). They coordinate on a level which doesn't allow the attack to succeed, primarily because it pays more (but also because it hurts them less).


Quote from: vbuterin on January 13, 2015, 09:25:25 PM
The difference between this and the counter-coordination contract approach is that with the counter-coordination contract, by joining the contract you are pre-committing to vote 98/2 (or 60/40 as I suggested), and it's this pre-commitment that guarantees that others will pre-commit to share bribes with you. It's a different game, because it's two-stage.
Is there an echo in here?
Quote from: psztorc on January 13, 2015, 08:36:16 PM
...instead of using a new smart contract to redistribute the proceeds, each person partially benefits.


Quote from: vbuterin on January 13, 2015, 09:25:25 PM
Quote from: psztorc on January 13, 2015, 08:36:16 PM
I would have been very interested (but overwhelmingly surprised) if no such counter-scheme existed or could be discovered...it occurred to me that, if so, the mere existence of Ethereum could invalidate many blockchain consensus schemes (even those powering Ethereum itself), as well as rival Ethereum contracts. In fact, precisely because of that reductio ad absurdum, I wasn't so worried.

So, the problem here is that you are assuming Ethereum is necessary for secure coordination. It's not. It's necessary for _the average person_ to easily engage in secure coordination. Large businesses and governments can perform any of these attacks with no Ethereum required just by making promises and using their reputation as collateral.
I am not assuming that at all, what gave you that idea? I am merely saying that, if you could build a PoW-killer, or a Truthcoin-killer, then that doesn't leave much hope for any blockchain or smart contract (they can all be killed with the killer).
Nullius In Verba

zack

Quote from: psztorc on January 14, 2015, 06:03:28 PM
I am merely saying that, if you could build a PoW-killer, or a Truthcoin-killer, then that doesn't leave much hope for any blockchain or smart contract (they can all be killed with the killer).

Correct. That is why there is so much controversy about proof of stake.
I think it needs to work this way: if two people are in disagreement, then the one who is willing to burn more money wins. No one will waste money on a P+e, e attack; they will just burn the money to get their way.

I am trying to implement proof of stake because it is needed for truthcoin.

vbuterin

Quote
By deviating toward 100%, they would risk hitting #4 instead of #1

In my models I generally assume that the good guys are all infinitely small and thus have zero individual incentive to benefit the collective good; if a mechanism does not work under this assumption then it's only as strong as it is monopolistic and that's not really a good place to be. I once did a statistical analysis of the level of combined incentive of this altruism-prime effect (that's the sum over all nodes of "probability_of_being_pivotal(node) * incentive(node)") on the Ethereum crowdsale database and found that it can be overcome by an attacker with 8% of stake in the absolute best case, and often less than 1%.
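The altruism-prime quantity vbuterin describes is just a sum of pivotality-weighted incentives; a minimal sketch, with entirely hypothetical numbers standing in for the crowdsale data:

```python
def altruism_prime(nodes):
    # Sum over all nodes of probability_of_being_pivotal(node) * incentive(node).
    # `nodes` is a list of (probability_of_being_pivotal, incentive) pairs;
    # in practice both inputs come from a stake-distribution analysis.
    return sum(p_pivotal * incentive for p_pivotal, incentive in nodes)

# Hypothetical distribution: 1000 tiny holders plus 3 large ones.
nodes = [(0.0001, 1.0)] * 1000 + [(0.05, 10.0)] * 3
print(altruism_prime(nodes))  # approximately 0.1 + 1.5 = 1.6
```

An attacker only has to outbid this combined quantity, not the total stake, which is why the measured figure can be so small (8% of stake in the best case, per the quoted analysis).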

Quoteif others split on a different level (for example, 100% 0%), the VTC-owners who split closer to 50-50 stand to lose VTC to those who split further from 50-50. This is because all the lie-ballots are the same, and will draw for last [or second, or whatever non-first slot] and bleed a proportion of their VTC. Less in those ballots, the less bleeding.

So, there are two arguments here. The first argument is the individual incentive without looking at each individual's influence on the system (as I generally prefer to). Here, there is a simple appeal to linearity: if the attacker's bribe makes B more profitable than A, then B will also be more profitable than 0.49 * B + 0.51 * A. The second argument is what happens when we do look at the individual incentive. Then, we have a situation where the equilibrium is 49/51, and it's the individual's choice of whether to vote probabilistically, vote B or vote A. If the individual's vote power is less than 1%, then if the attacker's bribe exceeds the reward that's all you need for the attack to succeed. If the individual's vote power is greater than 1%, then the absolutely optimal strategy seems to be to try to target a 49.999999/50.000001 split, but then your mechanism becomes infinitely fragile (assurance contracts also have this problem; sure, you _can_ force absolutely everyone to contribute and that resolves my incentive concerns, but then if even one person does not pay up the whole thing breaks), and because of imperfect information you get right back to this "low probability of being pivotal" problem that allows attackers to succeed just fine with even a moderate extra bribe.

Quote from: psztorc on January 14, 2015, 06:03:28 PM
I am not assuming that at all, what gave you that idea? I am merely saying that, if you could build a PoW-killer, or a Truthcoin-killer, then that doesn't leave much hope for any blockchain or smart contract (they can all be killed with the killer).

Actually, no. PoS is not a multi-equilibrium system in the same way that PoW is; you slash double-signers so the attacker still massively loses even if they win (there is a slight expansion to the weak subjectivity norm by which you can wait for a few minutes to check if a fork refuses to include evidence, and refuse to accept it if it does). But yeah, PoW is, as John Maynard Keynes would say, a barbarous relic.

psztorc

#22
It seems that you've misunderstood my point of view in several ways:

Quote from: vbuterin on January 25, 2015, 03:39:42 PM
In my models I generally assume that the good guys are all infinitely small and thus have zero individual incentive to benefit the collective good; if a mechanism does not work under this assumption then it's only as strong as it is monopolistic and that's not really a good place to be. I once did a statistical analysis of the level of combined incentive of this altruism-prime effect (that's the sum over all nodes of "probability_of_being_pivotal(node) * incentive(node)") on the Ethereum crowdsale database and found that it can be overcome by an attacker with 8% of stake in the absolute best case, and often less than 1%.
Those payoffs are all individual payoffs (as always)...


Quote from: vbuterin on January 25, 2015, 03:39:42 PM
So, there are two arguments here. The first argument is the individual incentive without looking at each individual's influence on the system (as I generally prefer to). Here, there is a simple appeal to linearity: if the attacker's bribe makes B more profitable than A, then B will also be more profitable than 0.49 * B + 0.51 * A.
I explained why the payoff is nonlinear two weeks ago:
Quote from: psztorc on January 11, 2015, 10:26:05 PM
...They could even safely split 51% 49% (of their account), but if others split on a different level (for example, 100% 0%), the VTC-owners who split closer to 50-50 stand to lose VTC to those who split further from 50-50. This is because all the lie-ballots are the same, and will draw for last [or second, or whatever non-first slot] and bleed a proportion of their VTC. Less in those ballots, the less bleeding.
This also explains why the [.50-e vs .50+e] (such as .49 vs .51) equilibrium, while worst for the attacker and best for the Voters, is itself unstable and will collapse to a unique e* (itself proportional to the magnitude of the bribe).


Quote from: vbuterin on January 25, 2015, 03:39:42 PM
The second argument is what happens when we do look at the individual incentive. Then, we have a situation where the equilibrium is 49/51, and it's the individual's choice of whether to vote probabilistically, vote B or vote A. If the individual's vote power is less than 1%, then if the attacker's bribe exceeds the reward that's all you need for the attack to succeed. If the individual's vote power is greater than 1%, then the absolutely optimal strategy seems to be to try to target a 49.999999/50.000001 split, but then your mechanism becomes infinitely fragile (assurance contracts also have this problem; sure, you _can_ force absolutely everyone to contribute and that resolves my incentive concerns, but then if even one person does not pay up the whole thing breaks), and because of imperfect information you get right back to this "low probability of being pivotal" problem that allows attackers to succeed just fine with even a moderate extra bribe.
I wrote about "splitting" one's vote, precisely to avoid this problem and introduce a stable equilibrium.
Quote from: psztorc on January 11, 2015, 07:47:03 PM
What if every single voter responds to this by splitting his/her votecoins into two pools: one 98% of their holdings, and the other 2% of their holdings? ...


Quote from: vbuterin on January 25, 2015, 03:39:42 PM
Quote from: psztorc on January 14, 2015, 06:03:28 PM
I am not assuming that at all, what gave you that idea? I am merely saying that, if you could build a PoW-killer, or a Truthcoin-killer, then that doesn't leave much hope for any blockchain or smart contract (they can all be killed with the killer).
Actually, no. PoS is not a multi-equilibrium system in the same way that PoW is; you slash double-signers so the attacker still massively loses even if they win (there is a slight expansion to the weak subjectivity norm by which you can wait for a few minutes to check if a fork refuses to include evidence, and refuse to accept it if it does). But yeah, PoW is, as John Maynard Keynes would say, a barbarous relic.
Firstly, I don't feel you've demonstrated a difference between PoW and PoS (double-hashers also "lose" in PoW). Perhaps you can formally define "not a multi-equilibrium system"? But I was referring not to consensus algorithms but instead to smart contracts in general (data feeds, hedging contracts, autonomous agents, as they could all be bribed or leeched / self-referenced into oblivion, assuming that such attacks were profitable [as you suggest]).
Nullius In Verba

vbuterin

#23
Quote
I wrote about "splitting" one's vote, precisely to avoid this problem and introduce a stable equilibrium.

Yeah, so the problem with splitting one's vote is that the mechanism is still fragile. Here's how. Suppose that I precommit to making a 100/0 split, and shut off my computer and go away. Then the community has the incentive to create a split that leads to a maximally close to 50/50 outcome without me. If your argument is correct, then they will succeed. However, I collect twice as many bribes as they do, so assuming bribes exceed intrinsic revenue doing what I did is a dominant strategy. This is a similar argument to one of the secondary reasons why assurance contracts don't work: even if it looks like you're pivotal, you're actually (very very probably) not, because if you disappear others will have the incentive to pick up the slack.
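vbuterin's fragility argument can be put in toy form: if the community targets some aggregate lie-share and compensates for a defector who precommits to 100/0, the defector collects a disproportionate share of the bribes. All numbers are hypothetical:

```python
N = 10            # voters with equal holdings (hypothetical)
target = 0.30     # aggregate lie-share the community wants to hit
bribe_rate = 1.0  # bribe paid per unit of holdings voted with the lie

defector = 1.0                              # precommits 100/0 and walks away
others = (target * N - defector) / (N - 1)  # the rest compensate downward

print(f"defector's bribe:   {defector * bribe_rate:.3f}")
print(f"each other's bribe: {others * bribe_rate:.3f}")
```

Since the defector's payout strictly exceeds a conformer's whenever bribes exceed intrinsic revenue, defecting to 100/0 dominates, which is the fragility being claimed.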

Quote from: psztorc on January 11, 2015, 10:26:05 PM
...They could even safely split 51% 49% (of their account), but if others split on a different level (for example, 100% 0%), the VTC-owners who split closer to 50-50 stand to lose VTC to those who split further from 50-50. This is because all the lie-ballots are the same, and will draw for last [or second, or whatever non-first slot] and bleed a proportion of their VTC. Less in those ballots, the less bleeding.

So, if the voters that split 50/50 get less than the voters that go 100/0 (which I will accept; since by voting 100/0 you are exerting influence over the result and hence increasing your probability of winning). But then doesn't that make 50/50 not a stable equilibrium?

Quote
Quote from: vbuterin on January 25, 2015, 03:39:42 PM
Quote from: psztorc on January 14, 2015, 06:03:28 PM
I am not assuming that at all, what gave you that idea? I am merely saying that, if you could build a PoW-killer, or a Truthcoin-killer, then that doesn't leave much hope for any blockchain or smart contract (they can all be killed with the killer).
Actually, no. PoS is not a multi-equilibrium system in the same way that PoW is; you slash double-signers so the attacker still massively loses even if they win (there is a slight expansion to the weak subjectivity norm by which you can wait for a few minutes to check if a fork refuses to include evidence, and refuse to accept it if it does). But yeah, PoW is, as John Maynard Keynes would say, a barbarous relic.
Firstly, I don't feel you've demonstrated a difference between PoW and PoS (double-hashers also "lose" in PoW). Perhaps you can formally define "not a multi-equilibrium system"? But I was referring not to consensus algorithms but instead to smart contracts in general (data feeds, hedging contracts, autonomous agents, as they could all be bribed or leeched / self-referenced into oblivion, assuming that such attacks were profitable [as you suggest]).

PoW doesn't have a concept of double-hashing but it does have a concept of "wrong-hashing" (mining on the wrong fork). If you mine on the wrong fork and it wins, you win, and if you mine on the right fork and the wrong fork wins, you lose. By "not a multi-equilibrium system" I mean "the correct way to behave remains the optimal way to behave (assuming bribes less than some security margin) regardless of what everyone else is doing".
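The fork-choice situation vbuterin describes is a coordination game: your best response depends on which fork you expect to win, so "everyone on A" and "everyone on B" are both equilibria. A toy sketch:

```python
# Toy PoW fork-choice game: reward 1 if the fork you mined wins, else 0.
def best_response(p_A_wins: float) -> str:
    # Mine whichever fork you expect to win (ties broken toward B here).
    return "A" if p_A_wins > 0.5 else "B"

print(best_response(0.9))  # expecting A to win -> mine A
print(best_response(0.1))  # expecting B to win -> mine B
```

This is the "multi-equilibrium" property: the optimal action tracks expectations about everyone else, rather than being fixed regardless of what others do.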

I do have an alternative design that "rescues" Schelling-like schemes to some degree, although at some cost of ugliness and (yay!) quasi-subjectivity; look out for an upcoming blog post :)

psztorc

You dropped a "[ quote ]"; I added it back for you.

Quote from: vbuterin on January 26, 2015, 04:12:13 AM
Yeah, so the problem with splitting one's vote is that the mechanism is still fragile. Here's how. Suppose that I precommit to making a 100/0 split, and shut off my computer and go away. Then the community has the incentive to create a split that leads to a maximally close to 50/50 outcome without me. If your argument...
My position has nothing to do with precommitment, and has never been that voters will want to approach a 50% 50% split.

To restate my position yet again, in yet another way, consider this timeline: "Bribe" -> "Voters use { bribe magnitude, VTC market cap } to compute optimal split e* ( which will always be between .000 and .500, and in practice might be between .001 and .030 )" -> "all Voters split their VTC holdings by e*" -> "bribe fails (and pays out big), resolution succeeds".

Anyone who splits too low (closer to 0-100) passed up on free bribe money, and anyone who split too high (closer to 50-50) effectively sold VTC at a cheaper-than-market price.

If Voters are too lazy to do the computation for e*, they might be in the first case (and lose out on bribe money), but they will pick up new VTC for free (the total-market-value-of-the-free-VTC will be lower than the bribe-one-could-have-received-if-one-computed-e*). The bribe is a buy offer like any other...the Outcome-Resolution process still works as intended. So even lazy voters will just split 0-100 and profit at the briber's expense.
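psztorc's timeline leaves the e* computation unspecified; as a purely illustrative rule (not the actual Truthcoin computation), voters might scale the split with the bribe's relative size while keeping the lie-side safely below 50%:

```python
def optimal_split(bribe: float, market_cap: float, cap: float = 0.03) -> float:
    # Illustrative only: e* grows with the bribe's size relative to the VTC
    # market cap (psztorc says e* is proportional to the bribe's magnitude),
    # but is capped so the lie-ballots can never approach a winning share.
    return min(cap, bribe / market_cap)

print(optimal_split(bribe=1_000.0, market_cap=1_000_000.0))    # 0.001
print(optimal_split(bribe=500_000.0, market_cap=1_000_000.0))  # capped at 0.03
```

The `cap` of 0.03 is a hypothetical choice matching the ".001 and .030" range psztorc mentions above.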


Quote from: vbuterin on January 26, 2015, 04:12:13 AM
So, if the voters that split 50/50 get less than the voters that go 100/0 (which I will accept; since by voting 100/0 you are exerting influence over the result and hence increasing your probability of winning). But then doesn't that make 50/50 not a stable equilibrium?
To repeat myself yet again, I have never claimed that 50-50 is a stable equilibrium. I did say "safely split 51% 49%", but by "safely" I meant that the outcome-resolution process would be "safe" from the "attack" (the bribe). If one finishes the sentence, it is overwhelmingly clear that the "safe 51% 49%" is a non-equilibrium and completely hypothetical situation.


Quote from: vbuterin on January 26, 2015, 04:12:13 AM
PoW doesn't have a concept of double-hashing but it does have a concept of "wrong-hashing" (mining on the wrong fork). If you mine on the wrong fork and it wins, you win, and if you mine on the right fork and the wrong fork wins, you lose. By "not a multi-equilibrium system" I mean "the correct way to behave remains the optimal way to behave (assuming bribes less than some security margin) regardless of what everyone else is doing".
I still see no difference. Because PoW is cumulative, this is (in expectation) exactly how PoW already works (PoW is "not a multi-equilibrium system"). You are most likely to win if you mine on the longest chain (given that you do not control 51%).


Quote from: vbuterin on January 26, 2015, 04:12:13 AM
I do have an alternative design that "rescues" Schelling-like schemes to some degree, although at some cost of ugliness and (yay!) quasi-subjectivity; look out for an upcoming blog post :)
I often feel patronized when people give me unsolicited advice (even when the advice is given privately and is ultimately very helpful). However, at my own peril, let me suggest that you consider taking a break from Theory, which has made many a young genius useless.

"""
In the 1920s, there was a dinner at which the physicist Robert W. Wood was asked to respond to a toast ... 'To physics and metaphysics.'

Now by metaphysics was meant something like philosophy—truths that you could get to just by thinking about them. Wood took a second, glanced about him, and answered along these lines:

The physicist has an idea, he said. The more he thinks it through, the more sense it makes to him. He goes to the scientific literature, and the more he reads, the more promising the idea seems. Thus prepared, he devises an experiment to test the idea. The experiment is painstaking. Many possibilities are eliminated or taken into account; the accuracy of the measurement is refined. At the end of all this work, the experiment is completed and ... the idea is shown to be worthless. The physicist then discards the idea, frees his mind (as I was saying a moment ago) from the clutter of error, and moves on to something else. The difference between physics and metaphysics, Wood concluded, is that the metaphysicist has no laboratory.
"""
-Carl Sagan
Nullius In Verba

vbuterin

Quote
Anyone who splits too low (closer to 0-100) passed up on free bribe money, and anyone who split too high (closer to 50-50) effectively sold VTC at a cheaper-than-market price.

So I suppose it's the latter part of the claim that I don't see being the case at all. As I see it, whatever slight nonlinearity exists in the payout (and in any case a sufficiently high bribe will outweigh this nonlinearity) actually works against the mechanism, as pushing your vote further toward one end or the other further increases the probability that the side that you are favoring wins.

Quote
I still see no difference. Because PoW is cumulative, this is (in expectation) exactly how PoW already works (PoW is "not a multi-equilibrium system"). You are most likely to win if you mine on the longest chain (given that you do not control 51%).

Except that "mining on what is currently the longest chain" is NOT always the optimal behavior. If the current longest chain is A, and you expect in the near future that everyone else will switch to B, then it is your incentive to switch to B.

psztorc

#26
Quote from: vbuterin on January 26, 2015, 04:37:37 PM
Quote
Anyone who splits too low (closer to 0-100) passed up on free bribe money, and anyone who split too high (closer to 50-50) effectively sold VTC at a cheaper-than-market price.
So I suppose it's the latter part of the claim that I don't see being the case at all. As I see it, whatever slight nonlinearity exists in the payout (and in any case a sufficiently high bribe will outweigh this nonlinearity) actually works against the mechanism, as pushing your vote further toward one end or the other further increases the probability that the side that you are favoring wins.
It might, if you made a bunch of extra assumptions about "trembling hands" or miscommunications, but I am not talking probability one bit. These are all pure strategies. No randomness, no mixing, no variability, no probabilities. In equilibrium (with no profitable deviations, no regrets) the bribe fails to achieve its objective, and it fails with certainty.


Quote from: vbuterin on January 26, 2015, 04:37:37 PM
Quote
I still see no difference. Because PoW is cumulative, this is (in expectation) exactly how PoW already works (PoW is "not a multi-equilibrium system"). You are most likely to win if you mine on the longest chain (given that you do not control 51%).
Except that "mining on what is currently the longest chain" is NOT always the optimal behavior. If the current longest chain is A, and you expect in the near future that everyone else will switch to B, then it is your incentive to switch to B.
I agree that this is a distinction between PoW and PoS, that expectations of the future matter less in PoS, and I agree that expectations can be manipulated using bribes. Well done.

However, my original argument stands: In Bitcoin-only, one cannot construct such a smart-contract bribe. However, in Ethereum, one can. Would Ethereum smart contracts attack each other in endless cycles, making the platform useless?
Nullius In Verba

vbuterin

Quote from: psztorc on January 26, 2015, 05:59:49 PM
Quote from: vbuterin on January 26, 2015, 04:37:37 PM
Quote
Anyone who splits too low (closer to 0-100) passed up on free bribe money, and anyone who split too high (closer to 50-50) effectively sold VTC at a cheaper-than-market price.
So I suppose it's the latter part of the claim that I don't see being the case at all. As I see it, whatever slight nonlinearity exists in the payout (and in any case a sufficiently high bribe will outweigh this nonlinearity) actually works against the mechanism, as pushing your vote further toward one end or the other further increases the probability that the side that you are favoring wins.
It might, if you made a bunch of extra assumptions about "trembling hands" or miscommunications, but I am not talking probability one bit. These are all pure strategies. No randomness, no mixing, no variability, no probabilities. In equilibrium (with no profitable deviations, no regrets) the bribe fails to achieve its objective, and it fails with certainty.

Right, so I do assume miscommunications, trembling hands and just plain bounded rationality prohibiting non-obvious game-theoretic reasoning deeper than a few steps. Perhaps this is the fundamental difference between our approaches.

But I still am not convinced of one thing. Even if you are correct, and there is a nonlinearity favoring moderate strategies over extreme ones, I still think that the derivative d(intrinsic reward)/d(% of money voting 1) is bounded, and so if the attacker credibly commits to a bribe whose value exceeds that upper bound, people will have the incentive to go 100-0 in favor of the attacker.
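The bounded-derivative point above can be sketched numerically. The intrinsic-reward curve below is a made-up concave stand-in, and the bribe rate is likewise an assumption; the only thing the sketch relies on is that the curve's slope has a finite upper bound that the bribe exceeds:

```python
# Numeric sketch of the bounded-derivative argument. r(x) = x - 0.25*x**2
# (x = fraction of weight voting honestly) is an assumed toy reward
# curve whose slope never exceeds 1; the bribe rate of 1.5 per unit of
# weight is also an assumption, chosen to exceed that bound.

def intrinsic(x: float) -> float:
    # toy intrinsic reward; slope r'(x) = 1 - 0.5*x is bounded by 1
    return x - 0.25 * x ** 2

BRIBE_RATE = 1.5  # attacker's credibly committed payment per unit weight

# For every split x, shifting a small weight d from honest to attacker
# gains more in bribe than it loses in intrinsic reward, so the
# profitable deviation is to go all the way to 100-0 for the attacker.
d = 0.01
for i in range(1, 101):
    x = i / 100
    intrinsic_loss = intrinsic(x) - intrinsic(x - d)
    assert BRIBE_RATE * d > intrinsic_loss
```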

Quote
However, my original argument stands: In Bitcoin-only, one cannot construct such a smart-contract bribe. However, in Ethereum, one can. Would Ethereum smart contracts attack each other in endless cycles, making the platform useless?

So I think this is where I got the idea that you were implying Ethereum is necessary for secure coordination and credible commitment; maybe you weren't, it doesn't matter much. The issue I have is, if a particular smart contract is attackable, and if we agree that game-theoretic incentive incompatibility implies that the contract will eventually be profitably attacked, then in the presence of Ethereum that smart contract will be attacked by Ethereum and in the absence of Ethereum it will be attacked by credible commitment schemes using plain old real-world trusted parties (eg. lawyers, a Codius multisig with parties from five different countries, etc). And if a smart contract is not (profitably) attackable, then it will be fine under both models.

psztorc

Quote from: vbuterin on January 27, 2015, 07:20:09 AM
Quote from: psztorc on January 26, 2015, 05:59:49 PM
It might, if you made a bunch of extra assumptions about "trembling hands" or miscommunications, but I am not talking probability one bit. These are all pure strategies. No randomness, no mixing, no variability, no probabilities. In equilibrium (with no profitable deviations, no regrets) the bribe fails to achieve its objective, and it fails with certainty.

Right, so I do assume miscommunications, trembling hands and just plain bounded rationality prohibiting non-obvious game-theoretic reasoning deeper than a few steps. Perhaps this is the fundamental difference between our approaches.
This is not "a difference between our approaches", fundamental or otherwise. My model neither requires nor contains uncertainty (and trembling hands and bounded rationality would not introduce any, either), so there is no "probability that the side that you are favoring wins". If you introduced a new model with miscommunications in a way that contained uncertainty, you could tautologically introduce uncertainty in the outcome. But you could do that anywhere, with anything, which is what I was trying to say: you can have uncertainty in the outcome if you create uncertainty in the inputs. I was (charitably) using your example to show that your point was irrelevant.

But suppose you had introduced uncertainty...as the split approached 50-50, whatever (tiny) uncertainties you had contrived to introduce (people voting the wrong way by accident, despite being paid not to do this) would contribute to a larger failure probability (if the split is 2-98 and the uncertainties can only shift NORM(0%,3%), the attack still fails with near-certainty, but with a 49-51 split the attack may succeed with some "probability"). Even if you trembled into an "I'll just join the attacker because I'm very confused and want the bribe" strategy, others have a profitable reaction to that tremble (and will be trembling the other way anyway).

For reasonable bribes, you can change the attack-hit-rate from .001% to .002%, but only by assuming that Voters are careless (which would introduce uncertainty anyway, making the bribe irrelevant). Additional uncertainties-conditional-on-bribes are not necessary, because the strategy I outlined survives (and thrives) under voter communication.
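The split-sensitivity claim can be checked with a quick Monte Carlo, reusing the same contrived NORM(0%, 3%) shift as the noise model (the shift itself remains an assumption for the sake of argument):

```python
# Monte-Carlo sketch of the split-sensitivity point: a Normal(0, 3%)
# shift is applied to the final honest vote share, and the attack
# "succeeds" when the shifted honest share falls below 50%.
import random

def attack_success_rate(honest_share: float, sigma: float = 0.03,
                        trials: int = 100_000) -> float:
    wins = sum(
        1 for _ in range(trials)
        if honest_share + random.gauss(0.0, sigma) < 0.5
    )
    return wins / trials

# A 98-2 honest split sits ~16 standard deviations above the 50%
# threshold, so the attack essentially never succeeds; a 51-49 split
# sits only a third of a standard deviation above it, so it often does.
assert attack_success_rate(0.98) < 0.001
assert attack_success_rate(0.51) > 0.25
```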

Quote from: vbuterin on January 27, 2015, 07:20:09 AM
But I still am not convinced of one thing. Even if you are correct, and there is a nonlinearity favoring moderate strategies over extreme ones, I still think that the derivative d(intrinsic reward)/d(% of money voting 1) is bounded, and so if the attacker credibly commits to a bribe whose value exceeds that upper bound, people will have the incentive to go 100-0 in favor of the attacker.
That's true, but the bribe would have to be a credible purchase of 50% of the VTC. If one could commit to that, they could do the same attack just by buying, not bribing. That attack is completely different from the one proposed here (and I'm not very worried about it).

Quote from: vbuterin on January 27, 2015, 07:20:09 AM
Quote
However, my original argument stands: In Bitcoin-only, one cannot construct such a smart-contract bribe. However, in Ethereum, one can. Would Ethereum smart contracts attack each other in endless cycles, making the platform useless?

So I think this is where I got the idea that you were implying Ethereum is necessary for secure coordination and credible commitment; maybe you weren't, it doesn't matter much. The issue I have is, if a particular smart contract is attackable, and if we agree that game-theoretic incentive incompatibility implies that the contract will eventually be profitably attacked, then in the presence of Ethereum that smart contract will be attacked by Ethereum and in the absence of Ethereum it will be attacked by credible commitment schemes using plain old real-world trusted parties (eg. lawyers, a Codius multisig with parties from five different countries, etc). And if a smart contract is not (profitably) attackable, then it will be fine under both models.
That's nice logic, but "plain old real-world trusted parties (eg. lawyers)" are in fact very limited in what they can do, even if we ignore the tremendous costs involved. (Codius would seem to overlap almost-completely with Ethereum, not lawyers). Bitcoin would be 51% attacked to death if we had a real-world trusted alternative...because the coins would have no value, and so there would be no way to pay for the PoW-clock. The long arm of the law can't reach into a coder's imagination.
Nullius In Verba