Messages - vbuterin

#1
Quote from: psztorc on October 15, 2014, 01:19:23 AM
I don't really want to get into it now (have lots of stuff to do, and want to be able to have my full say in writing), but it is actually a problem that BTSX : BitUSD tracks as well as it does. Instead, BitUSD should be permanently cheaper, for as long as there is technical and social risk associated with the BitsharesX project. "How much cheaper" is set by the market itself, but it might need to be quite substantial at first (imagine early Bitcoin, worth essentially nothing).

But you actually have your answer, I think: BitsharesX tried "markets which were open indefinitely" and no one wanted what it was selling (which is precisely because $1 was too expensive for 1 Bit$). The volume was microscopic (probably all devs or testers) and usage was zero.

So there is an interesting problem (or, depending on how you look at it, conclusion) with this style of reasoning. If you have any argument A that states that the price must be less than $1 because of risk, and you model it and determine that the price should be $k < 1, then the price in the best case will be $k; but then you can apply argument A again at that price, and because of risk the price should be $k^2. Repeat by induction and the price approaches zero.

So "trading at a discount" is not an appropriate conclusion for an asset that has higher risk and lower reward. Either complete collapse is, or if the asset is useful for some specific reason, then low volume.
#2
Yep, this is pretty similar to an idea that I've seen in Bitshares circles; it's one of the only ways to do it.

The other strategy that Bitshares uses is to do trading in batches: txs get committed in phase 2k, revealed in phase 2k+1, and then sorted by price and evaluated simultaneously at the end. One optimization (not sure whether or not they do it) is to have round k+1 of committing and round k of revealing happen at the same time.
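
A minimal sketch of that batched flow (structure and names are my own illustration, not Bitshares' actual code): orders are committed as hashes in one phase, revealed in the next, then sorted by price and evaluated together, so intra-batch ordering gives front-runners nothing to exploit.

```python
# Illustrative sketch of batched commit-reveal trading (hypothetical names,
# not Bitshares' implementation). Phase 2k collects hashes; phase 2k+1
# collects the matching plaintext orders; settlement sorts and evaluates
# the whole batch at once.
import hashlib

def commitment(order: str, salt: str) -> str:
    """Hash that hides the order until the reveal phase."""
    return hashlib.sha256((order + salt).encode()).hexdigest()

class BatchMarket:
    def __init__(self):
        self.commitments = set()  # phase 2k: hashes only
        self.revealed = []        # phase 2k+1: (price, order) pairs

    def commit(self, h: str):
        self.commitments.add(h)

    def reveal(self, order: str, salt: str, price: float):
        # Only orders matching a prior commitment are accepted.
        if commitment(order, salt) in self.commitments:
            self.revealed.append((price, order))

    def settle(self):
        # End of batch: sort by price and evaluate everything simultaneously.
        return sorted(self.revealed, key=lambda po: po[0], reverse=True)

m = BatchMarket()
m.commit(commitment("BUY 10 BitUSD", "s3cret"))  # phase 2k
m.reveal("BUY 10 BitUSD", "s3cret", 0.98)        # phase 2k+1
print(m.settle())
```

The pipelining optimization mentioned above would just mean running two such batches a phase apart, committing round k+1 while round k reveals.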

Quote
Under your proposal, maximum trade frequency would decrease from roughly (1 / second) to (1 / 10 minutes), a factor of 600.

That's why you use a faster block time, say 12s. I'm actually in the process of discussing the safety of our own algorithms for this with some academics at Cornell right now; I'll let you know the results if you are interested. I think something like 5 blocks for a commit+reveal phase can get you down to a 90s average confirmation time.

Quote
Wouldn't it be possible for front-runners to hash all {market, share} combinations, which would prevent the hash from really hiding anything? You could add salt, but that would cost more blockchain-space.

Sure, but that would cost a large fee. The fee can be made proportional to the size of the trade if desired, although at some privacy cost (you would need to supply at least a rough idea of the size of the trade at commit time to make the fee enforceable). If you don't like that then you can have a two-level fee: a fee for not revealing during the reveal period, and a much higher fee for never revealing. This lets you enforce the smaller fee at reveal time without subjecting users to too much risk, since they only need to pay the larger fee if they are malfeasant.
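
A sketch of that two-level fee with a salted commitment (fee values, names, and the escrow mechanics are assumptions for illustration, not a specified design):

```python
# Sketch of a salted commitment with the two-level fee described above.
# All constants and names are hypothetical.
import hashlib, os

FEE_RATE = 0.001      # fee proportional to declared trade size
LATE_FEE = 1.0        # small fee: revealed, but after the reveal period
NO_REVEAL_FEE = 50.0  # much larger fee: never revealed

def commit_trade(trade: bytes, approx_size: float):
    salt = os.urandom(16)  # defeats brute-force hashing of {market, share} combos
    h = hashlib.sha256(trade + salt).hexdigest()
    # The rough size is declared at commit time so the proportional fee is
    # enforceable (the privacy cost mentioned above).
    bond = approx_size * FEE_RATE + NO_REVEAL_FEE  # escrowed up front
    return h, salt, bond

def refund(bond: float, approx_size: float, revealed: bool, on_time: bool) -> float:
    """How much of the escrowed bond the trader gets back."""
    if revealed and on_time:
        return bond - approx_size * FEE_RATE             # just the trade fee
    if revealed:
        return bond - approx_size * FEE_RATE - LATE_FEE  # small penalty
    return 0.0                                           # forfeits the large fee
```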
#3
Quote from: psztorc on January 26, 2015, 05:59:49 PM
Quote from: vbuterin on January 26, 2015, 04:37:37 PM
Quote
Anyone who splits too low (closer to 0-100) passed up on free bribe money, and anyone who split too high (closer to 50-50) effectively sold VTC at a cheaper-than-market price.
So I suppose it's the latter part of the claim that I don't see being the case at all. As I see it, whatever slight nonlinearity exists in the payout (and in any case a sufficiently high bribe will outweigh this nonlinearity) actually works against the mechanism, as pushing your vote further toward one end or the other further increases the probability that the side that you are favoring wins.
It might, if you made a bunch of extra assumptions about "trembling hands" or miscommunications, but I am not talking probability one bit. These are all pure strategies. No randomness, no mixing, no variability, no probabilities. In equilibrium (with no profitable deviations, no regrets) the bribe fails to achieve its objective, and it fails with certainty.

Right, so I do assume miscommunications, trembling hands, and just plain bounded rationality prohibiting non-obvious game-theoretic reasoning deeper than a few steps. Perhaps this is the fundamental difference between our approaches.

But I still am not convinced of one thing. Even if you are correct, and there is a nonlinearity favoring moderate strategies over extreme ones, I still think that the derivative d(intrinsic reward)/d(% of money voting 1) is bounded, and so if the attacker credibly commits to a bribe whose value exceeds that upper bound, people will have the incentive to go 100-0 in favor of the attacker.
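
A minimal formalization of that argument (symbols mine): let x be the fraction of money voting for the attacker's outcome, R(x) the intrinsic reward with |R'(x)| <= M, and b > M the attacker's per-unit bribe rate. Then

```latex
\[
\frac{d}{dx}\Bigl[\,b\,x + R(x)\,\Bigr] \;=\; b + R'(x) \;\ge\; b - M \;>\; 0,
\]
```

so each voter's payoff is strictly increasing in x, and the optimum is the corner x = 1: voting 100-0 in favor of the attacker.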

Quote
However, my original argument stands: In Bitcoin-only, one cannot construct such a smart-contract bribe. However, in Ethereum, one can. Would Ethereum smart contracts attack each other in endless cycles, making the platform useless?

So I think this is where I got the idea that you were implying Ethereum is necessary for secure coordination; maybe you weren't, it doesn't matter much. The issue I have is this: if a particular smart contract is attackable, and if we agree that game-theoretic incentive incompatibility implies that the contract will eventually be profitably attacked, then in the presence of Ethereum that smart contract will be attacked via Ethereum, and in the absence of Ethereum it will be attacked via credible commitment schemes using plain old real-world trusted parties (eg. lawyers, a Codius multisig with parties from five different countries, etc). And if a smart contract is not (profitably) attackable, then it will be fine under both models.
#4
Quote
Anyone who splits too low (closer to 0-100) passed up on free bribe money, and anyone who split too high (closer to 50-50) effectively sold VTC at a cheaper-than-market price.

So I suppose it's the latter part of the claim that I don't see being the case at all. As I see it, whatever slight nonlinearity exists in the payout (and in any case a sufficiently high bribe will outweigh this nonlinearity) actually works against the mechanism, as pushing your vote further toward one end or the other further increases the probability that the side that you are favoring wins.

Quote
I still see no difference. Because PoW is cumulative, this is (in expectation) exactly how PoW already works (PoW is "not a multi-equilibrium system"). You are most likely to win if you mine on the longest chain (given that you do not control 51%).

Except that "mining on what is currently the longest chain" is NOT always the optimal behavior. If the current longest chain is A, and you expect in the near future that everyone else will switch to B, then it is your incentive to switch to B.
#5
Quote
I wrote about "splitting" one's vote, precisely to avoid this problem and introduce a stable equilibrium.

Yeah, so the problem with splitting one's vote is that the mechanism is still fragile. Here's how. Suppose that I precommit to making a 100/0 split, and shut off my computer and go away. Then the community has the incentive to create a split that leads to an outcome as close to 50/50 as possible without me. If your argument is correct, then they will succeed. However, I collect twice as many bribes as they do, so, assuming bribes exceed intrinsic revenue, doing what I did is a dominant strategy. This is similar to one of the secondary reasons why assurance contracts don't work: even if it looks like you're pivotal, you're actually (very very probably) not, because if you disappear others will have the incentive to pick up the slack.

Quote from: psztorc on January 11, 2015, 10:26:05 PM
...They could even safely split 51% 49% (of their account), but if others split on a different level (for example, 100% 0%), the VTC-owners who split closer to 50-50 stand to lose VTC to those who split further from 50-50. This is because all the lie-ballots are the same, and will draw for last [or second, or whatever non-first slot] and bleed a proportion of their VTC. Less in those ballots, the less bleeding.

So, the voters that split 50/50 get less than the voters that go 100/0 (which I will accept, since by voting 100/0 you are exerting influence over the result and hence increasing your probability of winning). But doesn't that make 50/50 not a stable equilibrium?

Quote
Quote from: vbuterin on January 25, 2015, 03:39:42 PM
Quote from: psztorc on January 14, 2015, 06:03:28 PM
I am not assuming that at all, what gave you that idea? I am merely saying that, if you could build a PoW-killer, or a Truthcoin-killer, then that doesn't leave much hope for any blockchain or smart contract (they can all be killed with the killer).
Actually, no. PoS is not a multi-equilibrium system in the same way that PoW is; you slash double-signers so the attacker still massively loses even if they win (there is a slight expansion to the weak subjectivity norm by which you can wait for a few minutes to check if a fork refuses to include evidence, and refuse to accept it if it does). But yeah, PoW is, as John Maynard Keynes would say, a barbarous relic.
Firstly, I don't feel you've demonstrated a difference between PoW and PoS (double-hashers also "lose" in PoW). Perhaps you can formally define "not a multi-equilibrium system"? But I was referring not to consensus algorithms but instead to smart contracts in general (data feeds, hedging contracts, autonomous agents, as they could all be bribed or leeched / self-referenced into oblivion, assuming that such attacks were profitable [as you suggest]).

PoW doesn't have a concept of double-hashing but it does have a concept of "wrong-hashing" (mining on the wrong fork). If you mine on the wrong fork and it wins, you win, and if you mine on the right fork and the wrong fork wins, you lose. By "not a multi-equilibrium system" I mean "the correct way to behave remains the optimal way to behave (assuming bribes less than some security margin) regardless of what everyone else is doing".

I do have an alternative design that "rescues" Schelling-like schemes to some degree, although at some cost of ugliness and (yay!) quasi-subjectivity; look out for an upcoming blog post :)
#6
Quote
By deviating toward 100%, they would risk hitting #4 instead of #1

In my models I generally assume that the good guys are all infinitely small and thus have zero individual incentive to benefit the collective good; if a mechanism does not work under this assumption then it's only as strong as it is monopolistic and that's not really a good place to be. I once did a statistical analysis of the level of combined incentive of this altruism-prime effect (that's the sum over all nodes of "probability_of_being_pivotal(node) * incentive(node)") on the Ethereum crowdsale database and found that it can be overcome by an attacker with 8% of stake in the absolute best case, and often less than 1%.
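
To make the altruism-prime quantity concrete, here is a toy computation on a made-up Pareto stake distribution (I don't have the crowdsale data; both the distribution and the pivot model below are stand-ins, so the 8%/1% figures above do not fall out of this):

```python
# Toy computation of the "altruism-prime" sum described above:
# sum over all nodes of probability_of_being_pivotal(node) * incentive(node).
# Stake distribution and pivot model are illustrative stand-ins only.
import random

random.seed(0)
stakes = sorted((random.paretovariate(1.2) for _ in range(10_000)), reverse=True)
total = sum(stakes)

def prob_pivotal(stake: float) -> float:
    # Crude stand-in: chance that the outcome lands within `stake` of a tie,
    # with outcome noise on the order of sqrt(total) (a CLT-style scale).
    return min(1.0, stake / total ** 0.5)

def incentive(stake: float) -> float:
    # Each node's benefit from the collective good, proportional to holdings.
    return stake

altruism_prime = sum(prob_pivotal(s) * incentive(s) for s in stakes)
print(altruism_prime / total)  # combined defensive weight, as a fraction of stake
```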

Quote
if others split on a different level (for example, 100% 0%), the VTC-owners who split closer to 50-50 stand to lose VTC to those who split further from 50-50. This is because all the lie-ballots are the same, and will draw for last [or second, or whatever non-first slot] and bleed a proportion of their VTC. Less in those ballots, the less bleeding.

So, there are two arguments here. The first argument considers the individual incentive without looking at each individual's influence on the system (as I generally prefer to do). Here, there is a simple appeal to linearity: if the attacker's bribe makes B more profitable than A, then B will also be more profitable than 0.49 * B + 0.51 * A. The second argument is what happens when we do look at the individual's influence. Then, we have a situation where the equilibrium is 49/51, and it's the individual's choice whether to vote probabilistically, vote B or vote A. If the individual's vote power is less than 1%, then the attacker's bribe exceeding the reward is all you need for the attack to succeed. If the individual's vote power is greater than 1%, then the absolutely optimal strategy seems to be to try to target a 49.999999/50.000001 split, but then your mechanism becomes infinitely fragile (assurance contracts also have this problem: sure, you _can_ force absolutely everyone to contribute, and that resolves my incentive concerns, but then if even one person does not pay up the whole thing breaks). And because of imperfect information you get right back to this "low probability of being pivotal" problem, which allows attackers to succeed just fine with even a moderate extra bribe.
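
The linearity appeal in one line (notation mine): with a payoff U that is linear in the vote mix and a bribe making U(B) > U(A),

```latex
\[
U\bigl(0.49\,B + 0.51\,A\bigr) \;=\; 0.49\,U(B) + 0.51\,U(A) \;<\; U(B),
\]
```

so any interior split is dominated by voting B outright.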

Quote from: psztorc on January 14, 2015, 06:03:28 PM
I am not assuming that at all, what gave you that idea? I am merely saying that, if you could build a PoW-killer, or a Truthcoin-killer, then that doesn't leave much hope for any blockchain or smart contract (they can all be killed with the killer).

Actually, no. PoS is not a multi-equilibrium system in the same way that PoW is; you slash double-signers so the attacker still massively loses even if they win (there is a slight expansion to the weak subjectivity norm by which you can wait for a few minutes to check if a fork refuses to include evidence, and refuse to accept it if it does). But yeah, PoW is, as John Maynard Keynes would say, a barbarous relic.
#7
Quote
What if every single voter responds to this by splitting his/her votecoins into two pools: one 98% of their holdings, and the other 2% of their holdings? With the 98% they vote Obama, with the 2% they vote McCain. Each person's 2% account loses a substantial portion of VoteCoin (in fact, the smallest VTC holder gets a full alpha=.1 wiped out of his 2% account), but each person's 98% account regains exactly as much VTC. There is no net difference to anyone (Authors, Miners, Traders, Voters), other than that every Voter collects "1.00 of the attacker's money units". A direct transfer from the attacker to the VTC holders.

So, the only issue there is, why would each voter do that? If it is better to do 98/2 in favor of Obama rather than 100/0, why not do 0/100 in favor of McCain? It's certainly in the collective interest to do 98/2, but not in the individual interest. The difference between this and the counter-coordination contract approach is that with the counter-coordination contract, by joining the contract you are pre-committing to vote 98/2 (or 60/40 as I suggested), and it's this pre-commitment that guarantees that others will pre-commit to share bribes with you. It's a different game, because it's two-stage.

Quote from: psztorc on January 13, 2015, 08:36:16 PM
I would have been very interested (but overwhelmingly surprised) if no such counter-scheme existed or could be discovered...it occurred to me that, if so, the mere existence of Ethereum could invalidate many blockchain consensus schemes (even those powering Ethereum itself), as well as rival Ethereum contracts. In fact, precisely because of that reductio ad absurdum, I wasn't so worried.

So, the problem here is that you are assuming Ethereum is necessary for secure coordination. It's not. It's necessary for _the average person_ to easily engage in secure coordination. Large businesses and governments can perform any of these attacks with no Ethereum required just by making promises and using their reputation as collateral.

Quote
I don't think Vlad's counter-contract will work.

So, to give what I think is a summary, if the attacker instead promises substantially more than P as a bribe if the mechanism loses, then it will indeed make more sense to not participate and lie, because by lying you're guaranteed 100% of the attacker's reward instead of just 40% of it. Okay, I'll be awaiting Vlad's response :)
#8
So, it is indeed possible to make a counter-coordination contract to defeat the first contract. However, this does result in a bidding war, and so the wrong answer is going to win if the attacker overpowers the combined weight of altruists (note that that's specifically the weight of _altruists_, or rather in my lingo altruists-prime, not just the combined weight of people who have _any_ incentive to see their preferred outcome win, due to the public goods problem). But an algorithm that works only if the attacker has less economic weight than altruists-prime is a low bar; even naive PoS beats it.

> VTC-holders have a direct incentive to protect the value of the VTC they purchased by assurance-contracting these counter-contracts into existence.

Ah, so that's why we'll have different views on this. My position is that assurance contracts don't work :)

Now, there is another kind of counter-coordination that Vlad Zamfir figured out that does work. Essentially, instead of the naive Schellingcoin mechanism where winners get P and losers get 0, we add the anti-coordination game at least to the extent that the mechanism always has the same total revenue, ie. if there are k winners, winners get NP/k each and losers get 0. Then, set up the contract C such that:

(i) to join C you need to put down a security deposit
(ii) after you join C, you need to provably vote with a 60% chance of Obama and a 40% chance of McCain (ie. use some common entropy to decide your vote with that probability distribution, eg. vote Obama iff sha3(block hash) % 10 < 6)
(iii) after you join C, if you vote Obama and receive your reward, you must redistribute that reward, as well as any bribes that you receive, equally among all participants in C
(iv) if you violate (ii) or (iii) you lose the deposit

The expected collective payoff, assuming everyone joins C, is going to be P * N + (P + ϵ) * N * 0.4 ~= P * N * 1.4. The incentive to join C is that you receive an expected payoff of 1.4 * P instead of P. Once you join, the security deposit bounds you to participate. The key trick here is that the contract allows the participants to provably share the rewards and collect the maximum possible benefit from the entire combined game. The mechanism doesn't inherit the problems of assurance contracts for public goods because you have the ability to exclude non-participants from sharing in the collective gain (namely, the attacker's attempted bribe).
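
A toy simulation of C (all numbers illustrative; sha256 stands in for sha3, and the entropy is salted per voter so the 60/40 split is realized across members, which is what the payoff arithmetic above assumes):

```python
# Toy simulation of contract C described above. All numbers illustrative.
import hashlib

N, P, EPSILON = 100, 10.0, 0.1  # members, base reward, attacker's margin

def vote(block_hash: bytes, voter_id: int) -> str:
    # Rule (ii): vote Obama with 60% probability using common entropy.
    # sha256 stands in for sha3; the per-voter salt spreads the 60/40 split
    # across members rather than making everyone vote identically.
    h = int.from_bytes(
        hashlib.sha256(block_hash + bytes([voter_id % 256])).digest(), "big")
    return "Obama" if h % 10 < 6 else "McCain"

votes = [vote(b"some block hash", i) for i in range(N)]
obama = votes.count("Obama")

# The mechanism pays a fixed total of N*P to the winners; Obama wins here.
mechanism_reward = N * P              # collected by the ~60% Obama voters
bribes = (P + EPSILON) * (N - obama)  # attacker pays the ~40% McCain voters

# Rule (iii): everything is pooled and shared equally among C's members.
per_member = (mechanism_reward + bribes) / N
print(per_member)  # ~ 1.4 * P, versus P for not joining C
```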

Essentially, this is a way of using a version of my decentralized coordination contract from https://www.youtube.com/watch?v=S47iWiKKvLA&feature=youtu.be (52:27) against Andrew Miller's centralized coordination contract.
#9
Off Topic / Re: Vitalik on funding public goods
September 12, 2014, 08:32:16 PM
Quote from: psztorc on September 12, 2014, 06:30:07 PM
Ok, I strongly believe in heterogeneous preferences, but I think your model setup might represent an AC reasonably well. I'm still not sure about your conclusions.

I was completely with you until this point:
Quote from: vbuterin on September 11, 2014, 10:38:42 PM
3. The utility of contributing is pV - C * 0.5. Hence, someone will contribute if 2pV > C.

It seems to me that p has again shifted its meaning. You've asserted something-like: "people will independently derive the value of p, then roll a dice to decide if they will contribute", but here you say that "someone will contribute if 2pV > C". If everyone's p is the same, either everyone will contribute or no one will? It seemed before that everyone was indifferent to contributing (as long as enough people did), which implied the mixed strategy. Also V is always > C, so for p>=.5 everyone will contribute, seemingly.

My feeling is still that in the last round, people will contribute the C. They may experience regret either way (donating or otherwise). Imagine V = 100 and C is 2. Then regret for the success case ("too many" people donated) is -2 ("I could have saved that 2!") but for the fail case is -98 ("I really needed that lighthouse, why didn't I just donate the 2!?"). Thus it seems that even your constrained model would have everyone donating when C < V/2. Possibly, even when C > V/2 everyone would donate (as, by definition, agents do act to maximize their utility).

Your case had a very high value for V/C, so your DAC in my model would work up to ~2500 people. Note also that regret for the fail case is -98 only when a member is pivotal; in the case where the contract would have failed with or without them, there is no regret.

Quote
this assumes that you can look at how people feel about {V1, C1, "Digging a gigantic hole in the Atacama Desert and then filling it back up."} and generalize it to {V2, C2, "Building an Earth Asteroid Deflector to protect the planet from destruction."}, which I feel is pretty much impossible.

V1 ~= 0. V2 ~= ∞. And there will be many cases in between. People will probably figure out the values in between using linear regression.

So, on the main point, I guess the main impasse is how to reconcile:

1. The probability of contributing is 1/k
2. A person contributes if 2pV > C, and does not otherwise

The only tool that game theory has for solving this class of problems is the mixed-strategy Nash equilibrium, ie. a set of probabilities such that no player benefits from unilaterally deviating from their strategy. So, intuitively, the goal is to prove (or disprove) that if everyone contributes with probability 1/k, then 2pV = C. My explanation is that this is the situation an entrepreneur wants, since anything else is not a stable equilibrium.

What alternative equilibrium do you propose in my model? One where k ~= 1? In that case, p will be the inverse square root of the number of irrational people, which certainly is more manageable by a constant factor, and what we should really focus attention on is not the model of "there exist N people" but rather "there exists an infinite number of people with a power law distribution of V values for the public good"; I think that might be more where the uncertainty that I am getting at comes from. But in that case, I am pretty convinced that a p ~= 1/sqrt(N) factor is going to appear in there for similar reasons.
#10
Off Topic / Re: Vitalik on funding public goods
September 11, 2014, 10:38:42 PM
Actually, just to move this forward, how about I propose a formal model for the game that I am discussing, and we'll see which parts of it you disagree with. We'll also limit ourselves to assurance contracts to simplify things; if we agree on the economics of the AC then we can move on to the DAC.

1. There exist N players, each of whom receives $V utility from the production of a hypothetical public good.
2. An assurance contract is set up where people can contribute either $0 or $C.
3. If more than N/k people contribute, the funds are sent to the entrepreneur, otherwise they are sent back to the donors. k is set by the entrepreneur, because the entrepreneur knows from prior experience that each person has a probability of 1/k of contributing (the reason why the entrepreneur wants to set the threshold to N/k is so that the threshold is right at the top of the bell curve for the probability distribution of total contributions, maximizing the probability that someone is pivotal and therefore maximizing the incentive to contribute). As another consequence of this optimization on the part of the entrepreneur, the probability of success is 0.5.
4. The game lasts for R rounds, round 1 ... round R. People who have not yet contributed can become contributors during any round.

Note that there are plenty of simplifications here. If you think that given these simplifications my analysis is correct, but under your preferred simplifications my analysis is not correct, then we can focus on the simplifications. If you think that my analysis is not correct even given the simplifications, then we move on.

1. There is no incentive to contribute in rounds 1 ... R-1 (this is because you have more information in round R, and because contributing earlier means that you are pushing the probability of success toward the right side of the Gaussian, where the derivative of the probability of success is lower, so fewer people will contribute)
2. Let p be the probability of being pivotal.
3. The utility of contributing is pV - C * 0.5. Hence, someone will contribute if 2pV > C.

The stable equilibrium is the one where 2pV = C, so some people contribute and some do not, and the equilibrium probability of contributing is 1/k. If more than 1/k of people contribute, then the Gaussian will move to the right, so the threshold will no longer be at the top of the Gaussian, so p will be lower and thus 2pV < C, so others will be less likely to contribute, compensating (the same result, except you are expected to pay more); fewer than 1/k of people contributing has the same effect, except that instead of compensating it drives the success probability to zero (which nobody wants).
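
A numerical check of that condition (illustrative parameters; the log-gamma form just avoids floating-point underflow for large N):

```python
# Pivot probability in the N-player model above, assuming each player
# contributes independently with probability 1/k and the threshold is N/k.
from math import lgamma, log, exp

def pivot_probability(N: int, k: float) -> float:
    """P(exactly threshold-1 of the other N-1 players contribute)."""
    q = 1.0 / k
    m = round(N / k) - 1  # contributions still needed from the others
    n = N - 1
    log_p = (lgamma(n + 1) - lgamma(m + 1) - lgamma(n - m + 1)
             + m * log(q) + (n - m) * log(1 - q))
    return exp(log_p)

N, k, V = 10_000, 10.0, 100.0
p = pivot_probability(N, k)  # ~ 1/sqrt(N): the peak of the Gaussian
print(p, 2 * p * V)          # the C at which 2pV = C, i.e. the equilibrium
```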

This equilibrium does not exist if there are no values of C and k such that 2pV = C.
#11
Off Topic / Re: Vitalik on funding public goods
September 11, 2014, 08:11:38 PM
Sorry, I did indeed use p in two contradictory ways. There is P, probability of participation, and p, probability of being pivotal. There is also the probability of success, but I've fixed that to 0.5 (since we can adjust the funding threshold up or down to make it so). Does that sound reasonable?

Quote
Or, when you say "everyone participating with some probability p" does that mean that everyone calculates the same probability [for example, p=15%], and then rolls a 100-sided dice and contributes if it comes up 15 or lower?

That is indeed my model.

Quote
that you intend to generalize from one DAC-project "build a lighthouse in New Haven, CT" to another "build a dam in Sandouping"?

There exist entrepreneurs, who calibrate DACs to have a 0.5 probability of success (so as to maximize each person's probability of being pivotal). There are going to be many of these games, with different thresholds. So I am seeing this as a situation where there are many different DACs constantly popping up around the world and people have plenty of experience seeing how often they end up succeeding and how often they end up having pivotal members.

Quote
Let's say it is the deadline. Will not all players donate (their marginal benefit - epsilon)? Their utility increases either way. In your framework, this is because they now believe that p1 = zero.

No, because everyone else is playing at the same time as them. At the deadline, the game is a single-round game, so the mixed-strategy-equilibrium model is the right one to take.

Quote
I intended to build a little "realism" into this: agents may try to save on their donations ('quasi-free-ride'), by attempting to generate additional interest/community in a market earlier, with a credible signal.

Sure, but priming the pump in this context is a public good. So we can't count on it to have that much of an effect.

I fully agree on heterogeneity of preferences, I just think that most public goods are NOT of the form where five people's utilities can add up to more than 30% of the total cost of a PG.

Quote
I know, my way is a little better because it encourages earlier donations and aggregates info on the project's feasibility.

Actually, I think prediction market-based incentivization is unfortunately worse, because at least DACs use the Gaussian distribution to create a leveraging effect where everyone's incentive is magnified by a factor of sqrt(N) due to the pivot effect, whereas the prediction market is basically just a donation scheme. A fully trustworthy donation scheme that has awesome anti-cheating and quality assurance properties, but a donation scheme nonetheless. People won't donate unless V > C, rather than the pV > C that DACs provide.
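
A sketch of where the sqrt(N) factor comes from (my reconstruction of the arithmetic, not from the post): in a plain donation scheme a contribution of C buys roughly a 1/N share of the good, worth about V/N to you; under a DAC, the pivot probability sits at the peak of the Gaussian of total contributions, so

```latex
\[
p \;\approx\; \frac{1}{\sqrt{2\pi\,N\,q(1-q)}} \;=\; \Theta\!\Bigl(\tfrac{1}{\sqrt{N}}\Bigr),
\qquad
\frac{pV}{V/N} \;=\; \Theta\!\bigl(\sqrt{N}\bigr),
\]
```

giving an expected benefit of contributing that is larger by a factor of roughly sqrt(N).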
#12
Off Topic / Re: Vitalik on funding public goods
September 10, 2014, 01:15:15 AM
First of all, an important point is that all I am doing here is following a mixed strategy equilibrium model. It is obvious that no one participating, and everyone participating, are not Nash equilibria, so the equilibrium involves everyone participating with some probability p with 0 < p < 1. Everyone's thinking is independent, and the sum of many independent random variables is approximately Gaussian. Yes, in the real world some people are richer or more interested in the public good than others, but I don't think that affects the model much. (I suppose with some wealth distributions you might get a kind of weird stepladder effect, but the level of skewness needed there feels like very precise laboratory conditions that are going to end up being very rarely true. If what you're saying is that DACs depend on a particular highly skewed stepladder distribution of wealth and benefit in order to be viable, and you think the real world semi-often fits that model, then I suppose we can discuss that; I am open to the possibility that they are viable as a niche tool.)

> However, all 9 individuals would believe that p1=zero, because <20$ has been raised so far

p = 0 is NOT a stable Nash equilibrium, as I said above, so you certainly cannot assume p1 = zero. Even if <$20 has been raised so far, the relevant question is what the probability is that $20 will be raised by the deadline.

> I also think that you sometimes change what you mean by "P" (or "V" originally?). Sometimes it is someone's belief today, sometimes it is a kind of global omnipotent belief, sometimes it involves post-fundraise perfect hindsight of the group, or involves the CLT. This may be the source of some of your confusion.

V = the utility of the public good to each person
p = the probability that you are going to be pivotal, calculated from the Gaussian distribution of the mixed-strategy Nash equilibrium

I did not use capital P anywhere I can see.

Just to be clear, here is my theory on DACs: in most cases in reality, pV << C. No one participating is not a Nash equilibrium, and everyone participating is not a Nash equilibrium. What is going to happen is that the mixed strategy equilibrium will end up at slightly less than 0.5 (since at exactly 0.5 participating is a losing proposition), and so the entrepreneurs will end up having to disburse funds more often than they receive them (if the probabilities are not 0.5, similar calculations apply, just with some slightly different constant scaling factors). As a result, no entrepreneurs will want to try to make a DAC in the first place. This conclusion seems to match current reality.

Also, please note that I am NOT criticizing Truthcoin as a means of public goods incentivization; I realize that it does not work the same way, I am focusing strictly on DACs.
#13
Off Topic / Re: Vitalik on funding public goods
September 08, 2014, 03:54:19 AM
Quote from: psztorc on September 07, 2014, 07:43:33 PM
No, it is impossible to make a superfluous contribution. Even donations of one cent are incentive compatible. (By "contributors", I was referring to cash-donators ["contributors pledge"]).

Suppose there is a DAC, let's say the target is $100000, and $25000 has been donated so far, and you are wondering whether or not to donate $1000. Let W be the total value that everyone is going to donate at the end not including you.

1. If W < $99000, then you get 2x back
2. If $99000 <= W < $100000, then you get 0x back but are pivotal, so it was a good decision for you to contribute.
3. If W >= $100000, then you get 0x back, and the lighthouse gets built anyway, so you lose out on $1000 by contributing vs not contributing.

Hence, you want to maximize your probability of being in (1) or (2) (ideally (2)) over (3).
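
The same three cases as a tiny function (numbers from the example above; the function name is mine):

```python
# Direct transcription of the three cases above. W is the total everyone
# else donates; the 2x refund on failure is taken from the post as given.
def outcome_of_contributing(W: float, c: float = 1_000.0,
                            target: float = 100_000.0) -> str:
    if W < target - c:
        return "(1) fundraise fails: you get 2x back"
    if W < target:
        return "(2) you are pivotal: 0x back, but the lighthouse gets built"
    return "(3) superfluous: 0x back, built anyway; you lose your $1000"

for W in (50_000.0, 99_500.0, 120_000.0):
    print(W, "->", outcome_of_contributing(W))
```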

Quote
It is also clear that, under an AC, each of the 9 individuals would contribute $1 at t=1 (as this increases their utility by +4 * Probability(SuccessfulFundraise), which is always a positive number) ... nowhere did anyone calculate their probability of being pivotal,

The expected payoff of contributing is -c * p2 + (p2 - p1) * u, where u is the utility of the public good, c is the contribution cost, p2 is the probability of a successful fundraise with you, and p1 is the probability of a successful fundraise with-or-without you. p2 - p1 is precisely the probability of being pivotal. Here, c = 1, u = 5, so each of the 9 people will contribute $1 only if they believe that (p2 - p1) / p2 is at least 1/5.
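
Spelling the threshold step out, with c = 1 and u = 5:

```latex
\[
-c\,p_2 + (p_2 - p_1)\,u \;\ge\; 0
\quad\Longleftrightarrow\quad
\frac{p_2 - p_1}{p_2} \;\ge\; \frac{c}{u} \;=\; \frac{1}{5}.
\]
```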

Note that Gaussians are superexponential, so the derivative drops to zero faster than the principal; hence, you are not going to get an effect where you can increase (p2 - p1) / p2 by dropping c to near-zero and then step c up over time; in fact, just the opposite.

Quote
they cannot observe any preferences other than their own

Now that is false. If DACs are useful at all, we certainly expect them to be used much more often than one round.
#14
Off Topic / Re: Vitalik on funding public goods
September 07, 2014, 05:28:17 PM
Quote
You seem to be limiting yourself to the case where each donor must pledge the same quantity of money. Is it news to you that most real-world assurance contracts, such as those on kickstarter.com, allow individuals to contribute different amounts of money? Digitally, individuals could register many "copies" of themselves to donate more than once.

Sure, I accept that, but I don't see how that changes the model. You can even model a user deciding how much to contribute from a continuous choice set as a set of independent users deciding whether or not to contribute 2^k, 2^(k-1), 2^(k-2), etc.

Quote
why you think N is large

Most public goods in the real world have very very large N; for example, scientific research has N = 7000000000+, national defense has N = 100000 to 1500000000, open source software development has N in the high thousands to low millions, etc. N is basically the number of people that benefit from your public good. Even after you correct for wealth concentration and preference concentration and treat corporations as monoliths, the effective N is still at least in the thousands.

If you are trying to apply dominant assurance contracts to the public goods problem of large members of an industry donating to a common lobbying group, then sure, they work. They'll also work for people in a tiny village maintaining a local park. But not anything larger scale.

Quote
If contributors pledge less than their marginal benefit (which they would), these contributors win whether the DAC raises enough $ or not. Contributors therefore do not care whether or not the good is built, and therefore, whether or not they are pivotal. They can't lose.

You're missing the case where users' contributions could be superfluous.
#15
Off Topic / Re: Vitalik on funding public goods
September 06, 2014, 07:47:34 PM
Quote from: psztorc on September 04, 2014, 03:17:46 PM
However, you'd like to add one thing to this model: that preferences are such that only some probability of individuals would choose to contribute any money.

More precisely, that there exists some set of N people, each of whom may or may not choose to contribute money, each with some probability of contributing.

Quote
Forgive me, but are you aware that the CLT refers to the average of 'a sample of ~30 random variables'? I hope this does not appear patronizing, but I cannot understand why you would so-confidently employ the phrasing "you can get". To belabor the point, if x = random_variable_with_some_nonTaleb_distribution, and y = average( roughly_more_than_30_x's ), then the distribution of several y's would be approximately normally distributed. x, however, would not, it would still be distributed however it was distributed initially.

So, the y's are the total donations: sums of N individual x's. The mean (and threshold) is V. The distribution of y has mean V and standard deviation proportional to sqrt(N), and a probability of ~1/sqrt(N) of being exactly equal to V. There are no subsets involved.

Quote
Quote from: vbuterin on September 04, 2014, 03:30:28 AM
In the second case, it makes sense to wait until you have more information before doing everything,
I don't see why.

Because at the last second you have more information about how many people already contributed, and therefore the probability that you will be pivotal.