Vitalik on funding public goods


zack

I was talking with Vitalik about using truthcoin to fund public goods, and he is convinced that it is no more effective than an assurance contract. This is his explanation: https://forum.ethereum.org/discussion/747/im-not-understanding-why-dominant-assurance-contracts-are-so-special

I had thought that Truthcoin is far superior at funding public goods compared to dominant assurance contracts for these reasons:

1) (the most important difference)
Dominant Assurance Contracts only pay money to the entrepreneur if enough money has been raised.
Truthcoin would only pay the entrepreneur upon successful delivery of the final product.

2)
Dominant Assurance Contracts can only reward a single entrepreneur. He has a monopoly on creating the final product.
Truthcoin would allow anyone to deliver the product and collect the reward.
It is free market competition.

3)
The reward for failure is a constant in Dominant Assurance Contracts. The first person to purchase a share pays the same amount as the 10th person. Buying shares on the first day is identical to buying them on the last day. There is an incentive to wait as long as possible, to reduce the chance that your money gets trapped for a long time.
Rewards in truthcoin are dynamically changing. On a day where it looks like the final product will succeed, then success shares get more expensive, and failure shares get cheaper. Usually, investors have better prices the earlier they invest.

4)
Dominant Assurance Contracts are off-chain. So it is easy to lie about the state of the contract.
Truthcoin shares are on-chain. So everyone knows exactly how much funds are invested.

psztorc

I'm not understanding what he doesn't understand. The explanation and argument are easily accessible on wikipedia: http://en.wikipedia.org/wiki/Assurance_contract#Dominant_assurance_contracts

From glancing at his comments, he seems to have derailed completely when he says "By central limit theorem p ~= 1 / sqrt(N)", as the distribution of "the average of N draws from the set of members' reservation prices" has absolutely no relevance whatsoever.

Nonetheless he stumbles into a conclusion that "people have the incentive to contribute if...they are 'pivotal' ". This state of affairs is exactly what the D in DAC attempts to solve: with the entrepreneur reimbursing donors, those potential donors who value the good at all always have the incentive to contribute something (up to their valuation), whether they are "pivotal" or not (whether they are "anything" or not).

Truthcoin's Public Goods concept (the market where you can't sell, Schelling States, etc) is theoretically even better than the basic DAC, because you don't need to trust a third-party / legal system to decide whether or not the public good has been provided. You also don't need to trust a bank / lawyer / escrow with your money, nor do you need it to calculate and write out the checks to the entrepreneurs / funders / providers. There are other benefits: cost, prices-as-forecasts (of the project's completion), etc.
Nullius In Verba

vbuterin

See, the problem with the economic argument is that it's exactly like the argument for N-round prisoner's dilemma resulting in both parties defecting every round starting from the first: it works in theory, but fails utterly in practice because of bounded rationality and foggy information issues. The thing is, real life is not accurately modeled by a scenario where N people are sitting in a room, everyone has perfect information, and things happen slowly enough for exactly the minimum number of contributions to get made and for things to stop there.

A more accurate model of real life is a fog: everyone else will participate or not participate with some probability, out of those individual probabilities you can get a distribution for the collective total (a Gaussian with median V and standard deviation sqrt(N)), and given that distribution you can determine your probability of being pivotal and thus decide the outcome. The mathematical conclusion, that dominant or standard assurance contracts work only for public goods with a return factor greater than sqrt(N), follows as in my above linked forum post.

Now, that's a one-round scenario where everyone either puts their funds in at the start or does not. There are two kinds of multi-round scenarios:

1. You can withdraw your contribution
2. You can't

In the first case, the expected result is that the total donation will end up buzzing around V, as if it is higher it is rational to withdraw and if it is lower it is rational to pledge, but eventually there is going to be a "last round" due to a deadline and network latency bounds, at which point the game is equivalent to a one-round scenario. In the second case, it makes sense to wait until you have more information before doing anything, so everyone will wait until the last second before either participating or not participating. Thus, in both cases my Gaussian model still works.

zack

Thanks for joining the forum.

Your analysis is heavily dependent upon this fact: Everyone knows precisely how much the public good will cost to produce.
In practice this is rarely true; almost no one knows how much it will cost until the work is done.

2 basic scenarios could occur:
(1) The investors think they need $100,000 to get the software written, but there is a developer somewhere willing to do it for only $30,000.
They will raise money for a while, and as soon as they pass $30,000, the developer will do the work and claim the prize.

(2) The investors think they only need $100,000, but it is actually a lot more expensive. No developer is willing to do it for less than $300,000.
They would raise money to $100,000, wait a few months, and then the entrepreneur would lose his investment. The investors would all get their money back with interest.

psztorc

Welcome to the forum.

Quote from: vbuterin on September 04, 2014, 03:30:28 AM
See, the problem with the economic argument is that it's exactly like the argument for N-round prisoner's dilemma resulting in both parties defecting every round starting from the first: it works in theory, but fails utterly in practice because of bounded rationality and foggy information issues.
When has the N-round prisoner's dilemma ever failed to play out as expected? True prisoner's dilemmas occur frequently in life, but are very hard to construct experimentally. When real people know the game will end soon in finite time, they do start defecting immediately (2:08).

Quote from: vbuterin on September 04, 2014, 03:30:28 AM
The thing is, real life is not accurately modeled by a scenario where N people are sitting in a room, everyone has perfect information, and things happen slowly enough for exactly the minimum number of contributions to get made and for things to stop there.

A more accurate model of real life is a fog: everyone else will participate or not participate with some probability,
If I understand you correctly, you don't have a problem with "use of oversimplified models to focus on and understand a part of reality". However, you'd like to add one thing to this model: that preferences are such that only some probability of individuals would choose to contribute any money.

Quote from: vbuterin on September 04, 2014, 03:30:28 AM
out of those individual probabilities you can get a distribution for the collective total (a Gaussian with median V and standard deviation sqrt(N))
Forgive me, but are you aware that the CLT refers to the average of 'a sample of ~30 random variables'? I hope this does not appear patronizing, but I cannot understand why you would so-confidently employ the phrasing "you can get". To belabor the point, if x = random_variable_with_some_nonTaleb_distribution, and y = average( roughly_more_than_30_x's ), then the distribution of several y's would be approximately normally distributed. x, however, would not, it would still be distributed however it was distributed initially.

Again, I only say this because I don't see why you would assume that we only care about separating the population of potential contributors into groups of more than 30, examining what the 'average participation' would be of these groups, and then focusing on what a group of these groups might do. I don't really even know why you would subset the potentially-contributing-population at all, or why any uncertainty matters to the potential-contributors. Moreover, assurance contracts are relevant in cases with >1 potential-contributor, not 30 or any other number. It is for these reasons I am assuming that you misinterpreted the CLT.

Quote from: vbuterin on September 04, 2014, 03:30:28 AM
and given that distribution you can determine your probability of being pivotal and thus decide the outcome. The mathematical conclusion, that dominant or standard assurance contracts work only for public goods with a return factor greater than sqrt(N), follows as in my above linked forum post.
Assuming that individuals don't mind locking up their money for a short period (and this assumption is what is addressed by the D-AC), there is no reason for contributors to care about how likely the project is to succeed (whether they are "pivotal"). They can only win or remain where they are.

Quote from: vbuterin on September 04, 2014, 03:30:28 AM
Now, that's a one-round scenario where everyone either puts their funds in at the start or does not. There are two kinds of multi-round scenarios:

1. You can withdraw your contribution
2. You can't
Actually only the 2nd would be considered an assurance contract ("In a binding way").

Quote from: vbuterin on September 04, 2014, 03:30:28 AM
In the second case, it makes sense to wait until you have more information before doing anything,
I don't see why.

Quote from: vbuterin on September 04, 2014, 03:30:28 AM
so everyone will wait until the last second before either participating or not participating.
In the DAC this is false, and in the scheme I proposed for Truthcoin it is the reverse.

vbuterin

Quote from: psztorc on September 04, 2014, 03:17:46 PM
However, you'd like to add one thing to this model: that preferences are such that only some probability of individuals would choose to contribute any money.

More precisely, that there exists some set of N people, each of whom may or may not choose to contribute money, and each of whom has some probability of contributing money or not.

Quote
Forgive me, but are you aware that the CLT refers to the average of 'a sample of ~30 random variables'? I hope this does not appear patronizing, but I cannot understand why you would so-confidently employ the phrasing "you can get". To belabor the point, if x = random_variable_with_some_nonTaleb_distribution, and y = average( roughly_more_than_30_x's ), then the distribution of several y's would be approximately normally distributed. x, however, would not, it would still be distributed however it was distributed initially.

So, the y's are the total donations, of which there are N. The mean (and threshold) is V. The distribution of y's has mean V and standard deviation sqrt(N), and a probability of ~1/sqrt(N) of being exactly equal to V. There are no subsets involved.

Quote
Quote from: vbuterin on September 04, 2014, 03:30:28 AM
In the second case, it makes sense to wait until you have more information before doing anything,
I don't see why.

Because at the last second you have more information about how many people already contributed, and therefore the probability that you will be pivotal.

psztorc

Quote from: vbuterin on September 06, 2014, 07:47:34 PM
Quote
Forgive me, but are you aware that the CLT refers to the average of 'a sample of ~30 random variables'? I hope this does not appear patronizing, but I cannot understand why you would so-confidently employ the phrasing "you can get". To belabor the point, if x = random_variable_with_some_nonTaleb_distribution, and y = average( roughly_more_than_30_x's ), then the distribution of several y's would be approximately normally distributed. x, however, would not, it would still be distributed however it was distributed initially.

So, the y's are the total donations, of which there are N. The mean (and threshold) is V. The distribution of y's has mean V and standard deviation sqrt(N), and a probability of ~1/sqrt(N) of being exactly equal to V. There are no subsets involved.
You seem to be limiting yourself to the case where each donor must pledge the same quantity of money. Is it news to you that most real-world assurance contracts, such as those on kickstarter.com, allow individuals to contribute different amounts of money? Digitally, individuals could register many "copies" of themselves to donate more than once.

It would be a shame if you followed Tabarrok down some impractical comparative-statics-exploration-section of his paper, for essentially no reason.

If not, are you assuming that the rv "P" simply has this distribution? That would seem to be limiting and rather pointless. Otherwise (if you invoke the CLT) I would like to know what you took a sample of N's from, why you think N is large, and why you care about the distribution of y=mean( those samples ).

Quote from: vbuterin on September 06, 2014, 07:47:34 PM
Quote
Quote from: vbuterin on September 04, 2014, 03:30:28 AM
In the second case, it makes sense to wait until you have more information before doing anything,
I don't see why.

Because at the last second you have more information about how many people already contributed, and therefore the probability that you will be pivotal.
If contributors pledge less than their marginal benefit (which they would), these contributors win whether the DAC raises enough $ or not. Contributors therefore do not care whether or not the good is built, and therefore, whether or not they are pivotal. They can't lose.

vbuterin

Quote
You seem to be limiting yourself to the case where each donor must pledge the same quantity of money. Is it news to you that most real-world assurance contracts, such as those on kickstarter.com, allow individuals to contribute different amounts of money? Digitally, individuals could register many "copies" of themselves to donate more than once.

Sure, I accept that, but I don't see how that changes the model. You can even model a user deciding how much to contribute from a continuous choice set as a set of independent users deciding whether or not to contribute 2^k, 2^(k-1), 2^(k-2), etc.

Quote
why you think N is large

Most public goods in the real world have very very large N; for example, scientific research has N = 7000000000+, national defense has N = 100000 to 1500000000, open source software development has N in the high thousands to low millions, etc. N is basically the set of people that benefit from your public good. Even after you correct for wealth concentration and preference concentration and treat corporations as monoliths, the effective N is still at least in the thousands.

If you are trying to apply dominant assurance contracts to the public goods problem of large members of an industry donating to a common lobbying group, then sure, they work. They'll also work for people in a tiny village maintaining a local park. But not anything larger scale.

Quote
If contributors pledge less than their marginal benefit (which they would), these contributors win whether the DAC raises enough $ or not. Contributors therefore do not care whether or not the good is built, and therefore, whether or not they are pivotal. They can't lose.

You're missing the case where users' contributions could be superfluous.

psztorc

Quote from: vbuterin on September 07, 2014, 05:28:17 PM
Quote
If contributors pledge less than their marginal benefit (which they would), these contributors win whether the DAC raises enough $ or not. Contributors therefore do not care whether or not the good is built, and therefore, whether or not they are pivotal. They can't lose.

You're missing the case where users' contributions could be superfluous.
No, it is impossible to make a superfluous contribution. Even donations of one cent are incentive compatible. (By "contributors", I was referring to cash-donators ["contributors pledge"]).


It is wise of you to ask for help; you do seem to be very confused. Let's try an example of a simple AC first:

[1] There are 10 people.
[2] 9 of the 10 each value a Lighthouse at $5. They would be indifferent between getting $5 cash today, or having a lighthouse magically appear today. Choosing between $4 and a Lighthouse, they'd take the Lighthouse, and choosing between $6 and the Lighthouse, they'd take the $6.
[3] The 10th person values the Lighthouse at 0 (they don't want it).
[4] A Lighthouse would cost $20 to build today.

Clearly, no one will privately build the lighthouse, because each individual will calculate ($5 > $20) = FALSE.

It is also clear that, under an AC, each of the 9 individuals would contribute $1 at t=1 (as this increases their utility by +4 * Probability(SuccessfulFundraise), which is always a positive number). At t=2 they might increase their contribution to $2 each, then $3 each. 9*3 = $27 would be raised, and (as 27 > 20) the surplus would be proportionally refunded ($7/9 ≈ $0.78 to each of the 9 donors) and the lighthouse constructed. Each of the 9 donors would have his or her name inscribed on the inside of the lighthouse. Each would benefit 5 - (3 - 0.78) = +$2.78; in other words, as purely a result of the AC existing, each individual would gain happiness equal to "the happiness they would have gained from magically receiving $2.78 right now".
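The arithmetic of this simple-AC example can be tallied mechanically (a sketch using only the numbers above):

```python
# Verifying the simple-AC lighthouse arithmetic: 9 donors, each valuing the
# lighthouse at $5, pledge $3 apiece; the $20 build cost is deducted and the
# surplus refunded pro rata.
donors, value, pledge, cost = 9, 5.0, 3.0, 20.0

raised = donors * pledge              # $27 raised in total
refund = (raised - cost) / donors     # $7/9, about $0.78 back to each donor
net_gain = value - (pledge - refund)  # utility gain per donor

print(f"raised=${raised:.2f}, refund=${refund:.2f}, net gain=${net_gain:.2f}")
```

The net gain per donor comes out to about $2.78, matching the figure in the example.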

Note that nowhere did anyone calculate their probability of being pivotal, nor did anyone invoke the CLT (which would be inappropriate, as (N=10) < 30). Even if we had a total of 100 people, 90 valuing the L at $5, and 10 at $0, the players could not themselves invoke the CLT, as they cannot observe any preferences other than their own. They would have N=1 observation of P (which would be 1.00 if they were in the group of 90, and 0.00 if they were in the group of 10).

Suppose that they could (!) magically, and unrealistically, observe some kind of anonymous distribution of P. They could average 100 P's and get a single N'=1 observation of a normally distributed y. Y would have a mean of 0.9 and a sd of 0.03 by CLT, but what would anyone do with this information? They could look directly at the distribution of P, and learn much more.
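The distinction being drawn here, that the CLT shapes the distribution of means rather than of the underlying draws, is easy to see numerically. A sketch, using an exponential distribution as a hypothetical stand-in for whatever the raw draws actually look like:

```python
import random, statistics

def skew(vs):
    """Sample skewness: E[(v - mean)^3] / sd^3. Zero for a symmetric
    (e.g. Gaussian) distribution, ~2 for an exponential."""
    m, s = statistics.mean(vs), statistics.pstdev(vs)
    return sum((v - m) ** 3 for v in vs) / (len(vs) * s ** 3)

random.seed(1)
xs = [random.expovariate(1.0) for _ in range(30_000)]               # raw draws: skewed
ys = [statistics.mean(xs[i:i + 30]) for i in range(0, 30_000, 30)]  # 30-draw means

print(f"skew of raw draws:     {skew(xs):.2f}")  # stays near 2
print(f"skew of 30-draw means: {skew(ys):.2f}")  # much closer to 0 (Gaussian-ish)
```

The raw draws keep their original skewed shape; only the averages of 30-draw samples head toward a Gaussian, which is psztorc's point about what the CLT does and does not license.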

For the Dominant AC we continue:

[5] For liquidity purposes, all individuals everywhere do not enjoy it when their money is tied up. They dislike even making a pledge (which they would get back if not enough money is raised) which locks up money for a single day. They dislike being in this state of affairs (that of a locked dollar) for a single day as much as they dislike "permanently losing $0.05 per locked dollar during the course of a single day".

Now we have a problem, because the 9 can no longer increase their utility with certainty. In fact, possibly none will contribute.

This is what the DAC solves. One new 11th individual says: "I will risk my own $10, to try and raise $32 total ($12 for me, $20 for the lighthouse). Our contract runs all day today and tomorrow." The loss is bounded at $0.10 per dollar, yet gains from the entrepreneur's $10 could total (5/31.999)*10 = $1.56 for a donation of the contributor's full $5 which "almost made it but didn't". The contributors are back in win-win territory: all 9 donate $3.56, $32 is raised, the entrepreneur gains $1.50 = $12 - ($10 + (10*0.05*1)) (the contract ended during the first day), and the 9 contributors each gain $1.44 = $5.00 - $3.56.
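The DAC payoffs can likewise be tallied directly (a sketch with the numbers from the example; the $0.05 per locked dollar per day penalty is the assumption introduced in [5]):

```python
# Verifying the DAC arithmetic: an entrepreneur stakes $10 to raise $32
# ($12 for himself, $20 for the lighthouse), with his stake locked one day.
donors, value = 9, 5.0
stake, target, build_cost = 10.0, 32.0, 20.0
lockup_rate, days = 0.05, 1  # $0.05 per locked dollar per day, one day

pledge = target / donors                  # about $3.56 from each donor
entrepreneur_cut = target - build_cost    # $12
lockup_cost = stake * lockup_rate * days  # $0.50 on the staked $10
entrepreneur_gain = entrepreneur_cut - (stake + lockup_cost)
donor_gain = value - pledge

print(f"pledge=${pledge:.2f}, entrepreneur gains ${entrepreneur_gain:.2f}, "
      f"each donor gains ${donor_gain:.2f}")
```

The entrepreneur nets $1.50 and each donor about $1.44, matching the example's figures.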

vbuterin

Quote from: psztorc on September 07, 2014, 07:43:33 PM
No, it is impossible to make a superfluous contribution. Even donations of one cent are incentive compatible. (By "contributors", I was referring to cash-donators ["contributors pledge"]).

Suppose there is a DAC, let's say the target is $100000, $25000 has been donated so far, and you are wondering whether or not to donate $1000. Let W be the total amount that everyone else is going to have donated by the end, not including you.

1. If W < $99000, then you get 2x back
2. If $99000 <= W < $100000, then you get 0x back but are pivotal, so it was a good decision for you to contribute.
3. If W >= $100000, then you get 0x back, and the lighthouse gets built anyway, so you lose out on $1000 by contributing vs not contributing.

Hence, you want to maximize your probability of being in (1) or (2) (ideally (2)) over (3).

Quote
It is also clear that, under an AC, each of the 9 individuals would contribute $1 at t=1 (as this increases their utility by +4 * Probability(SuccessfulFundraise), which is always a positive number) ... nowhere did anyone calculate their probability of being pivotal,

The net expected payoff of contributing is -c * p2 + (p2 - p1) * u, where u is the utility of the public good, c is the contribution cost, p2 is the probability of a successful fundraise with you, and p1 is the probability of a successful fundraise with-or-without you. p2 - p1 is precisely the probability of being pivotal. Here, c = 1 and u = 5, so each of the 9 people will contribute $1 only if they believe that (p2 - p1) / p2 is at least 1/5.
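Under the refund assumption (a failed fundraise returns your pledge), this comparison reduces to: contributing pays p2*(u - c) in expectation, abstaining pays p1*u, and the difference is positive exactly when (p2 - p1)/p2 exceeds c/u. A quick sketch, with hypothetical probabilities:

```python
def contribution_gain(c, u, p1, p2):
    """Expected gain from contributing c (refunded on failure) vs abstaining:
    p2*(u - c) - p1*u, positive iff (p2 - p1)/p2 > c/u."""
    return p2 * (u - c) - p1 * u

# With c=1, u=5 (the lighthouse numbers) the threshold is (p2 - p1)/p2 > 1/5:
print(contribution_gain(1, 5, p1=0.5, p2=0.7))  # (0.7-0.5)/0.7 ~ 0.29 > 0.2: positive
print(contribution_gain(1, 5, p1=0.6, p2=0.7))  # (0.7-0.6)/0.7 ~ 0.14 < 0.2: negative
```

The p1 and p2 values here are made up for illustration; the thread's disagreement is precisely over what beliefs about p1 and p2 are reasonable.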

Note that Gaussians are superexponential, so the derivative drops to zero faster than the principal; hence, you are not going to get an effect where you can increase (p2 - p1) / p2 by dropping c to near-zero and then step c up over time; in fact, just the opposite.

Quote
they cannot observe any preferences other than their own

Now that is false. If DACs are useful at all, we certainly expect them to be used much more often than once.

zack

The creator of the public good does NOT get paid upon reaching some target amount of funds. Instead he gets paid when he completes the public good.

Quote from: vbuterin on September 08, 2014, 03:54:19 AM
Suppose there is a DAC, let's say the target is $100000

What does "target" mean in this context? If we are using truthcoin, there is no way to hard-code a "target" into the funding of a public good.
If I bet a ton of money against Hillary, will I eventually reach a "target" amount of money that causes her to win the election? no.

Donating to a truthcoin public good is similar to giving money to a time-limited bounty. So if someone completes the bounty within the time-limit, then they can take your money. Truthcoin is more powerful than a time-limited bounty because it allows for integration of a few more parties, like the entrepreneur, and competing engineers.

A couple counter-examples to disprove the existence of "target":
1) It is possible to under-fund a public good with truthcoin, and the good still gets built. Imagine the good costs the engineer $10,000, but only $1,000 was donated. There is nothing stopping the engineer from building the good and collecting the $1,000, even though he takes a large loss.

2) It is possible to over-fund a good, and the good doesn't get built. The community thinks software will cost $10,000, but it really only costs $1000. If no one writes the software by the time-limit, then the $10,000 plus interest is given back to the investors, and the entrepreneur loses everything.

psztorc

These might help you:

1] The CLT does not magically transform "every random distribution in the world of more than 30 elements" into "a Gaussian distribution". If it did, the current wealth distribution (for example) would be somehow magically impossible. Instead, the CLT refers to the distribution of "good means" (ie those with at least 30 observations, from the same nonTaleb distribution).

2] Preferences ("P") are not "Gaussian". Even if they happened to be non-ordinal and distributed this way, we would, epistemologically, have no way of ever knowing this. We can observe player i's contribution at time t, and use that as evidence of P_i > P_contribution during t, but that single inference is nowhere near observing the entire distribution of preferences themselves (which would be akin to being able to read, with certainty, everyone's mind at all times).

3] Not only are preferences unobservable, but contrary to what you indicate (irrelevantly) about the Gaussian step up, preferences are themselves a function of information, ie preferences can be a function of signals that others have sent about their preferences. I can prefer to attend a party if and only if my friend expresses an interest in also attending. They are also a function of time (I may want to go to the party with my friend at first, but later back out). The DAC's major strength is that it is robust to people's changing beliefs about each other.

4] You are correct when you say that "[they] will contribute $1 only if they believe that (p2 - p1) / p2 is equal to at least 1/5". However, all 9 individuals would believe that p1 = zero, because <$20 has been raised so far, so (p2 - p1) / p2 is 1 (which guarantees donations while total_donations < $20 and while those donations are small enough not to change (p2 - p1) / p2). If you then said that there are many information-frictions in getting this ball rolling, you would be right: those are exactly what the DAC (not AC) is designed to address. So I introduced this toy problem to build to the DAC (where, you'll notice, your "at least (1/5)" criticism doesn't apply).

I tried to make that example clear, and took time out of my day to help you understand the importance of the DAC, because I think that you may be in a position to advance its use. I would appreciate it if you gave the example a second read.

I also think that you sometimes change what you mean by "P" (or "V" originally?). Sometimes it is someone's belief today, sometimes it is a kind of global omnipotent belief, sometimes it involves post-fundraise perfect hindsight of the group, or involves the CLT. This may be the source of some of your confusion.

vbuterin

First of all, an important point is that all I am doing here is following a mixed strategy equilibrium model. It is obvious that no one participating, and everyone participating, are not Nash equilibria, so the equilibrium involves everyone participating with some probability p with 0 < p < 1. Everyone's thinking is independent, and the sum of random variables is a Gaussian. Yes, in the real world some people are richer or more interested in the public good than others, but I don't think that affects the model much (I suppose with some wealth distributions you might get a kind of weird stepladder effect, but the level of skewedness needed there feels like very precise laboratory conditions that are going to end up being very rarely true; if what you're saying is that DACs depend on a particular highly skewed stepladder distribution of wealth and benefit in order to be viable, and you think the real world semi-often describes that model, then I suppose we can discuss that, I am open to the possibility that they are viable as a niche tool).
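vbuterin's mixed-strategy claim can be made concrete with a toy solver (my own sketch, under assumed parameters: N identical players, each pledging one unit with probability p, a success threshold of k pledges, and refunds on failure). The symmetric equilibrium p* is the point where contributing and abstaining pay the same:

```python
import math

def sf(n, p, k):
    """P(Binomial(n, p) >= k)."""
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

def gain(p, n, k, c, u):
    """Expected gain from contributing vs abstaining when the other n-1
    players each contribute with probability p (pledges refunded on failure)."""
    p2 = sf(n - 1, p, k - 1)  # success probability if you contribute
    p1 = sf(n - 1, p, k)      # success probability if you abstain
    return p2 * (u - c) - p1 * u

def mixed_equilibrium(n, k, c, u, iters=60):
    """Bisect for the indifference point: gain is positive for small p
    (pivotality dominates) and negative near p = 1 (contribution wasted)."""
    lo, hi = 1e-6, 1 - 1e-6
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if gain(mid, n, k, c, u) > 0 else (lo, mid)
    return (lo + hi) / 2

# 10 players, 6 pledges needed, cost 1, value 5: p* comes out near 0.7.
p_star = mixed_equilibrium(n=10, k=6, c=1, u=5)
print(f"symmetric mixed equilibrium: p* = {p_star:.3f}")
```

This only demonstrates that an interior mixed equilibrium exists in the toy setup; whether real donors behave like randomizing identical players is the substance of psztorc's objection below.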

> However, all 9 individuals would believe that p1=zero, because <20$ has been raised so far

p = 0 is NOT a stable Nash equilibrium as I said above, so you certainly cannot assume p1 = zero. Even if <$20 has been raised so far, the relevant question is the probability that $20 will be raised by the deadline.

> I also think that you sometimes change what you mean by "P" (or "V" originally?). Sometimes it is someone's belief today, sometimes it is a kind of global omnipotent belief, sometimes it involves post-fundraise perfect hindsight of the group, or involves the CLT. This may be the source of some of your confusion.

V = the utility of the public good to each person
p = the probability that you are going to be pivotal, calculated from the Gaussian distribution of the mixed-strategy Nash equilibrium

I did not use capital P anywhere I can see.

Just to be clear, here is my theory on DACs: in most cases in reality, pV << C. No one participating is not a Nash equilibrium, everyone participating is not a Nash equilibrium. What is going to happen is that the mixed strategy equilibrium will end up at slightly less than 0.5 (since at exactly 0.5 participating is a losing proposition), and so the entrepreneurs will end up having to disburse funds more often than they receive them (if the probabilities are not 0.5, similar calculations apply, it'll just have some slightly different constant scaling factors). As a result of this, no entrepreneurs will want to try to make a DAC in the first place. This conclusion seems to match current reality.

Also, please note that I am NOT criticizing Truthcoin as a means of public goods incentivization; I realize that it does not work the same way, I am focusing strictly on DACs.

psztorc

> First of all, an important point is that all I am doing here is following a mixed strategy equilibrium model. It is obvious that no one participating, and everyone participating, are not Nash equilibria, so the equilibrium involves everyone participating with some probability p with 0 < p < 1.
I'm not sure that that follows. In my example above, the individuals either just participated or didn't. Is p a distribution of preferences (values in range(0,1) which are different for each person)? Or, when you say "everyone participating with some probability p", does that mean that everyone calculates the same probability [for example, p=15%], and then rolls a 100-sided die and contributes if it comes up 15 or lower?

> Everyone's thinking is independent,
Haha. But for now we'll take it arguendo,  :)

> and the sum of random variables is a Gaussian.
The sum of >30 nonTaleb rv's is Gaussian, but it seems now that you intend to generalize from one DAC-project "build a lighthouse in New Haven, CT" to another "build a dam in Sandouping"? Otherwise how will you observe the rvs and use them later? Such a generalization certainly seems ambitious.

> Yes, in the real world some people are richer or more interested in the public good than others, but I don't think that affects the model much
My guess is that it actually does. If everyone felt the same way about the PG, people wouldn't complain about funding it via taxation. The complaint is that some must buy what they do not want, while there also exist unused Pareto improvements. The entrepreneurs would be better than the government at finding 'the vital few' for each new project.

> if what you're saying is that DACs depend on a particular highly skewed stepladder distribution of wealth and benefit in order to be viable
I don't: it can be any distribution, even your assumption of "equal caring". I do think the marginal advantage of a DAC over taxation increases with (let's call it) the 'benefit heterogeneity'. We more-homogeneously benefit from an interstate highway system our personal / commerce / military organizations can use ("...but who would build the roads?"), but where is funding for the drudge work of Bitcoin unit testing?

> p = 0 is NOT a stable Nash equilibrium as I said above, so you certainly cannot assume p1 = zero.
I think this is an example of p and V shifting their meanings. You say that p is "the probability that you are going to be pivotal", and yet here you are using p as though it were a strategy (because you describe it as a NE, which is defined by a set of strategies). If they are the same, you are implying that people can each choose their p (which I don't understand). In paragraph 1 I was also confused over p, which seemed to be determined externally from the game setup somehow.

> Even if <$20 has been raised so far, the relevant question is what the probability is $20 will be raised by the deadline.
Let's say it is the deadline. Will not all players donate (their marginal benefit - epsilon)? Their utility increases either way. In your framework, this is because they now believe that p1 = zero.

I intended to build a little "realism" into this: agents may try to save on their donations ('quasi-free-ride'), by attempting to generate additional interest/community in a market earlier, with a credible signal. The entrepreneur himself may do this. These are just my personal expectations and observations of kickstarter.

> Just to be clear, here is my theory on DACs: in most cases in reality, pV << C.
The V depends on "which public good", of course.

> the entrepreneurs will end up having to disburse funds more often than they receive them (if the probabilities are not 0.5, similar calculations apply, it'll just have some slightly different constant scaling factors). As a result of this, no entrepreneurs will want to try to make a DAC in the first place. This conclusion seems to match current reality.
I believe Tabarrok says that it is a good thing that entrepreneurs do not create DAC's they think will lose, as it saves everyone the trouble of considering worthless DACs.

> Also, please note that I am NOT criticizing Truthcoin as a means of public goods incentivization; I realize that it does not work the same way, I am focusing strictly on DACs.
I know, my way is a little better because it encourages earlier donations and aggregates info on the project's feasibility.

zack

Quote from: vbuterin on September 10, 2014, 01:15:15 AM
Also, please note that I am NOT criticizing Truthcoin as a means of public goods incentivization; I realize that it does not work the same way, I am focusing strictly on DACs.

Quote from: psztorc on September 10, 2014, 01:56:49 PM
I know, my way is a little better because it encourages earlier donations and aggregates info on the project's feasibility.

Since prediction markets work better, why discuss DACs at all?
It is like a discussion on how to rub sticks most effectively to start a fire.