TruthcoinTalk
Other => Off Topic => Topic started by: zack on September 03, 2014, 03:32:55 pm

I was talking with Vitalik about using truthcoin to fund public goods, and he is convinced that it is no more effective than an assurance contract. This is his explanation: https://forum.ethereum.org/discussion/747/imnotunderstandingwhydominantassurancecontractsaresospecial
I had thought that Truthcoin is far superior at funding public goods compared to dominant assurance contracts for these reasons:
1) (the most important difference)
Dominant Assurance Contracts only pay money to the entrepreneur if enough money has been raised.
Truthcoin would only pay the entrepreneur upon successful delivery of the final product.
2)
Dominant Assurance Contracts can only reward a single entrepreneur. He has a monopoly on creating the final product.
Truthcoin would allow anyone to deliver the product and collect the reward.
It is free market competition.
3)
The reward for failure is a constant in Dominant Assurance Contracts. The first person to purchase a share pays the same amount as the 10th person. Buying shares on the first day is identical to buying them on the last day. There is an incentive to wait as long as possible, to reduce the chance that your money gets trapped for a long time.
Rewards in truthcoin are dynamically changing. On a day when it looks like the final product will succeed, success shares get more expensive and failure shares get cheaper. Usually, investors get better prices the earlier they invest.
4)
Dominant Assurance Contracts are off-chain, so it is easy to lie about the state of the contract.
Truthcoin shares are on-chain, so everyone knows exactly how much money is invested.

I'm not understanding what he doesn't understand. The explanation and argument are easily accessible on wikipedia: http://en.wikipedia.org/wiki/Assurance_contract#Dominant_assurance_contracts
From glancing at his comments, he seems to have derailed completely when he says "By central limit theorem p ~= 1 / sqrt(N)", as the distribution of "the average of N draws from the set of members' reservation prices" has absolutely no relevance whatsoever.
Nonetheless he stumbles into a conclusion that "people have the incentive to contribute if...they are 'pivotal' ". This state of affairs is exactly what the D in DAC attempts to solve: with the entrepreneur reimbursing donors, those potential donors who value the good at all always have the incentive to contribute something (up to their valuation), whether they are "pivotal" or not (whether they are "anything" or not).
Truthcoin's Public Goods concept (the market where you can't sell, Schelling States, etc.) is theoretically even better than the basic DAC, because you don't need to trust a third party / legal system to decide whether or not the public good has been provided. You also don't need to trust a bank / lawyer / escrow with your money, nor do you need it to calculate and write out the checks to the entrepreneurs / funders / providers. There are other benefits: cost, prices-as-forecasts (of the project's completion), etc.

See, the problem with the economic argument is that it's exactly like the argument for the N-round prisoner's dilemma resulting in both parties defecting every round starting from the first: it works in theory, but fails utterly in practice because of bounded rationality and foggy information issues. The thing is, real life is not accurately modeled by a scenario where N people are sitting in a room, everyone has perfect information, and things happen slowly enough for exactly the minimum number of contributions to get made and for things to stop there.
A more accurate model of real life is a fog: everyone else will participate or not participate with some probability; out of those individual probabilities you can get a distribution for the collective total (a Gaussian with median V and standard deviation sqrt(N)), and given that distribution you can determine your probability of being pivotal and thus decide the outcome. The mathematical conclusion, that dominant or standard assurance contracts work only for public goods with a return factor greater than sqrt(N), follows as in my above linked forum post.
Now, that's a one-round scenario where everyone either puts their funds in at the start or does not. There are two kinds of multi-round scenarios:
1. You can withdraw your contribution
2. You can't
In the first case, the expected result is that the total donation will end up buzzing around V, as if it is higher it is rational to withdraw and if it is lower it is rational to pledge; but eventually there is going to be a "last round" due to a deadline and network latency bounds, at which point the game is equivalent to a one-round scenario. In the second case, it makes sense to wait until you have more information before doing anything, so everyone will wait until the last second before either participating or not participating. Thus, in both cases my Gaussian model still works.

Thanks for joining the forum.
Your analysis is heavily dependent upon this assumption: everyone knows precisely how much the public good will cost to produce.
In practice this is rarely true; almost no one knows how much it will cost until it is done.
2 basic scenarios could occur:
(1) The investors think they need $100,000 to get the software written, but there is a developer somewhere willing to do it for only $30,000.
They will raise money for a while, and as soon as they pass $30,000, the developer will do the work and claim the prize.
(2) The investors think they only need $100,000, but it is actually a lot more expensive. No developer is willing to do it for less than $300,000.
They would raise money to $100,000, wait a few months, and then the entrepreneur would lose his investment. The investors would all get their money back with interest.

Welcome to the forum.
See, the problem with the economic argument is that it's exactly like the argument for the N-round prisoner's dilemma resulting in both parties defecting every round starting from the first: it works in theory, but fails utterly in practice because of bounded rationality and foggy information issues.
When has the N-round prisoner's dilemma ever failed to play out as expected? True prisoner's dilemmas (http://robinhanson.typepad.com/files/threeworldscollide.pdf) occur frequently in life, but are very hard to construct experimentally. When real people know the game will end soon in finite time, they do start defecting immediately (https://www.youtube.com/v/XHKan75x7GI?start=100&end=137&autoplay=1) (2:08).
The thing is, real life is not accurately modeled by a scenario where N people are sitting in a room, everyone has perfect information, and things happen slowly enough for exactly the minimum number of contributions to get made and for things to stop there.
A more accurate model of real life is a fog: everyone else will participate or not participate with some probability,
If I understand you correctly, you don't have a problem with "use of oversimplified models to focus on and understand a part of reality". However, you'd like to add one thing to this model: that preferences are such that only some probability of individuals would choose to contribute any money.
out of those individual probabilities you can get a distribution for the collective total (a Gaussian with median V and standard deviation sqrt(N))
Forgive me, but are you aware that the CLT refers to the average of 'a sample of ~30 random variables'? I hope this does not appear patronizing, but I cannot understand why you would so confidently employ the phrasing "you can get". To belabor the point, if x = random_variable_with_some_nonTaleb_distribution, and y = average( roughly_more_than_30_x's ), then the distribution of several y's would be approximately normally distributed. x, however, would not; it would still be distributed however it was distributed initially.
Again, I only say this because I don't see why you would assume that we only care about separating the population of potential contributors into groups of more than 30, examining what the 'average participation' would be of these groups, and then focusing on what a group of these groups might do. I don't really even know why you would subset the potentially-contributing population at all, or why any uncertainty matters to the potential contributors. Moreover, assurance contracts are relevant in cases with >1 potential contributor, not 30 or any other number. It is for these reasons I am assuming that you misinterpreted the CLT.
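The x-versus-y distinction above is easy to demonstrate. A quick sketch (the exponential distribution and the sample sizes are arbitrary illustrative choices): raw draws from a skewed, finite-variance distribution stay skewed, while averages of 30 such draws concentrate into an approximate Gaussian whose spread shrinks by sqrt(30):

```python
import random
import statistics

random.seed(42)

# x: raw draws from a skewed (but finite-variance, "non-Taleb") Exp(1).
xs = [random.expovariate(1.0) for _ in range(10_000)]

# y: averages of 30 draws each -- this is what the CLT describes.
ys = [statistics.fmean(random.expovariate(1.0) for _ in range(30))
      for _ in range(10_000)]

# The raw draws keep their original spread (sd ~ 1.0); the averages
# cluster near the mean 1.0 with sd ~ 1/sqrt(30) ~ 0.18.
print(round(statistics.stdev(xs), 2), round(statistics.stdev(ys), 2))
```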
and given that distribution you can determine your probability of being pivotal and thus decide the outcome. The mathematical conclusion, that dominant or standard assurance contracts work only for public goods with a return factor greater than sqrt(N), follows as in my above linked forum post.
Assuming that individuals don't mind locking up their money for a short period (and this assumption is what is addressed by the DAC), there is no reason for contributors to care about how likely the project is to succeed (whether they are "pivotal"). They can only win or remain where they are.
Now, that's a one-round scenario where everyone either puts their funds in at the start or does not. There are two kinds of multi-round scenarios:
1. You can withdraw your contribution
2. You can't
Actually only the 2nd would be considered an assurance contract (http://en.wikipedia.org/wiki/Assurance_contract) ("In a binding way").
In the second case, it makes sense to wait until you have more information before doing anything,
I don't see why.
so everyone will wait until the last second before either participating or not participating.
In the DAC this is false, and in the scheme I proposed for Truthcoin it is the reverse.

However, you'd like to add one thing to this model: that preferences are such that only some probability of individuals would choose to contribute any money.
More precisely, that there exists some set of N people, each of which may or may not choose to contribute money, and each of whom has some probability of contributing money or not.
Forgive me, but are you aware that the CLT refers to the average of 'a sample of ~30 random variables'? I hope this does not appear patronizing, but I cannot understand why you would so confidently employ the phrasing "you can get". To belabor the point, if x = random_variable_with_some_nonTaleb_distribution, and y = average( roughly_more_than_30_x's ), then the distribution of several y's would be approximately normally distributed. x, however, would not; it would still be distributed however it was distributed initially.
So, the y's are the total donations, of which there are N. The mean (and threshold) is V. The distribution of y's has mean V and standard deviation sqrt(N), and a probability of ~1/sqrt(N) of being exactly equal to V. There are no subsets involved.
In the second case, it makes sense to wait until you have more information before doing anything,
I don't see why.
Because at the last second you have more information about how many people already contributed, and therefore the probability that you will be pivotal.

Forgive me, but are you aware that the CLT refers to the average of 'a sample of ~30 random variables'? I hope this does not appear patronizing, but I cannot understand why you would so confidently employ the phrasing "you can get". To belabor the point, if x = random_variable_with_some_nonTaleb_distribution, and y = average( roughly_more_than_30_x's ), then the distribution of several y's would be approximately normally distributed. x, however, would not; it would still be distributed however it was distributed initially.
So, the y's are the total donations, of which there are N. The mean (and threshold) is V. The distribution of y's has mean V and standard deviation sqrt(N), and a probability of ~1/sqrt(N) of being exactly equal to V. There are no subsets involved.
You seem to be limiting yourself to the case where each donor must pledge the same quantity of money. Is it news to you that most real-world assurance contracts, such as those on kickstarter.com, allow individuals to contribute different amounts of money? Digitally, individuals could register many "copies" of themselves to donate more than once.
It would be a shame if you followed Tabarrok down some impractical comparative-statics exploration section of his paper, for essentially no reason.
If not, are you assuming that the rv "P" simply has this distribution? That would seem to be limiting and rather pointless. Otherwise (if you invoke the CLT) I would like to know what you took a sample of N's from, why you think N is large, and why you care about the distribution of y=mean( those samples ).
In the second case, it makes sense to wait until you have more information before doing anything,
I don't see why.
Because at the last second you have more information about how many people already contributed, and therefore the probability that you will be pivotal.
If contributors pledge less than their marginal benefit (which they would), these contributors win whether the DAC raises enough $ or not. Contributors therefore do not care whether or not the good is built, and therefore, whether or not they are pivotal. They can't lose.

You seem to be limiting yourself to the case where each donor must pledge the same quantity of money. Is it news to you that most real-world assurance contracts, such as those on kickstarter.com, allow individuals to contribute different amounts of money? Digitally, individuals could register many "copies" of themselves to donate more than once.
Sure, I accept that, but I don't see how that changes the model. You can even model a user deciding how much to contribute from a continuous choice set as a set of independent users deciding whether or not to contribute 2^k, 2^(k-1), 2^(k-2), etc.
why you think N is large
Most public goods in the real world have very very large N; for example, scientific research has N = 7000000000+, national defense has N = 100000 to 1500000000, open source software development has N in the high thousands to low millions, etc. N is basically the set of people that benefit from your public good. Even after you correct for wealth concentration and preference concentration and treat corporations as monoliths, the effective N is still at least in the thousands.
If you are trying to apply dominant assurance contracts to the public goods problem of large members of an industry donating to a common lobbying group, then sure, they work. They'll also work for people in a tiny village maintaining a local park. But not anything larger scale.
If contributors pledge less than their marginal benefit (which they would), these contributors win whether the DAC raises enough $ or not. Contributors therefore do not care whether or not the good is built, and therefore, whether or not they are pivotal. They can't lose.
You're missing the case where users' contributions could be superfluous.

If contributors pledge less than their marginal benefit (which they would), these contributors win whether the DAC raises enough $ or not. Contributors therefore do not care whether or not the good is built, and therefore, whether or not they are pivotal. They can't lose.
You're missing the case where users' contributions could be superfluous.
No, it is impossible to make a superfluous contribution. Even donations of one cent are incentive compatible. (By "contributors", I was referring to cash-donators ["contributors pledge"]).
It is wise of you to ask for help; you do seem to be very confused. Let's try an example of a simple AC first:
[1] There are 10 people.
[2] 9 of the 10 each value a Lighthouse at $5. They would be indifferent between getting $5 cash today, or having a lighthouse magically appear today. Choosing between $4 and a Lighthouse, they'd take the Lighthouse, and choosing between $6 and the Lighthouse, they'd take the $6.
[3] The 10th person values the Lighthouse at 0 (they don't want it).
[4] A Lighthouse would cost $20 to build today.
Clearly, no one will privately build the lighthouse, because each individual will calculate ($5 > $20) = FALSE.
It is also clear that, under an AC, each of the 9 individuals would contribute $1 at t=1 (as this increases their utility by +4 * Probability(SuccessfulFundraise), which is always a positive number). At t=2 they might increase their contribution to $2 each, then $3 each. 9 * $3 = $27 would be raised, and (as 27 > 20) the excess would proportionally be refunded ($7/9 ≈ $0.78 to each of the 9 donors) and the lighthouse constructed. Each of the 9 donors would have his or her name inscribed on the inside of the lighthouse. Each would benefit $5 - ($3 - $0.78) = +$2.78; in other words, as purely a result of the AC existing, each individual would gain happiness equal to "the happiness they would have gained from magically receiving $2.78 right now".
Note that nowhere did anyone calculate their probability of being pivotal, nor did anyone invoke the CLT (which would be inappropriate, as (N=10) < 30). Even if we had a total of 100 people, 90 valuing the L at $5, and 10 at $0, the players could not themselves invoke the CLT, as they cannot observe any preferences other than their own. They would have N=1 observation of P (which would be 1.00 if they were in the group of 90, and 0.00 if they were in the group of 10).
Suppose that they could (!) magically and unrealistically observe some kind of anonymous distribution of P. They could average 100 P's and get a single N'=1 observation of a normally distributed y. Y would have a mean of .9 and a sd of .03 by CLT, but what would anyone do with this information? They could look directly at the distribution of P, and learn much more.
For the Dominant AC we continue:
[5] For liquidity purposes, all individuals everywhere do not enjoy it when their money is tied up. They dislike even making a pledge (which they would get back if not enough money is raised) which locks up money for a single day. They dislike being in this state of affairs (that of a locked dollar) for a single day as much as they dislike "permanently losing $0.05 during the course of a single day".
Now we have a problem, because the 9 can no longer increase their utility with certainty. In fact, possibly none will contribute.
This is what the DAC solves. One new 11th individual says: "I will risk my own $10, to try and raise $32 total ($12 for me, $20 for the lighthouse). Our contract runs all day today and tomorrow." The loss is bounded at $0.10 / dollar, yet gains from the entrepreneur's $10 could total (5/31.999) * $10 ≈ $1.56 for a donation of the contributor's full $5 which "almost made it but didn't". The contributors are back in win-win territory, all 9 donate $3.56, $32 is raised, and the entrepreneur gains $1.50 = $12 - ($10 + (10 * .05 * 1)) (the contract ended during the first day) and the 9 contributors gain $1.44 = $5.00 - $3.56.
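The arithmetic in the lighthouse example can be replayed in a few lines (variable names are mine; every figure comes from the toy setup above):

```python
# Figures from the lighthouse example: 9 donors valuing it at $5 each,
# a $20 lighthouse, a $32 target, a $10 entrepreneur stake, and a
# lock-up disutility of $0.05 per dollar per day.
VALUE, COST, TARGET, STAKE, LOCK_COST = 5.00, 20.00, 32.00, 10.00, 0.05
donors, days_locked = 9, 1

pledge = TARGET / donors                  # ~$3.56 from each donor
entrepreneur_fee = TARGET - COST - STAKE  # $2 left over, i.e. $12 - $10 stake
entrepreneur_gain = entrepreneur_fee - STAKE * LOCK_COST * days_locked
donor_gain = VALUE - pledge

print(round(pledge, 2))             # 3.56
print(round(entrepreneur_gain, 2))  # 1.5
print(round(donor_gain, 2))         # 1.44
```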

No, it is impossible to make a superfluous contribution. Even donations of one cent are incentive compatible. (By "contributors", I was referring to cash-donators ["contributors pledge"]).
Suppose there is a DAC, let's say the target is $100000, and $25000 has been donated so far, and you are wondering whether or not to donate $1000. Let W be the total value that everyone is going to donate at the end not including you.
1. If W < $99000, then you get 2x back
2. If $99000 <= W < $100000, then you get 0x back but are pivotal, so it was a good decision for you to contribute.
3. If W >= $100000, then you get 0x back, and the lighthouse gets built anyway, so you lose out on $1000 by contributing vs not contributing.
Hence, you want to maximize your probability of being in (1) or (2) (ideally (2)) over (3).
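The three cases read directly as a branch (the function name is mine; the $1000 contribution, $100000 target, and 2x refund are the hypothetical figures above):

```python
def dac_outcome(w, contribution=1_000, target=100_000):
    """Your result as a function of W, the total everyone else
    ends up pledging (hypothetical figures from the example)."""
    if w + contribution < target:          # case 1: fundraise fails
        return ("refunded with bonus", 2 * contribution)
    if w < target:                         # case 2: you were pivotal
        return ("pivotal, good is built", 0)
    return ("superfluous, good was built anyway", 0)  # case 3

print(dac_outcome(50_000))
print(dac_outcome(99_500))
print(dac_outcome(120_000))
```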
It is also clear that, under an AC, each of the 9 individuals would contribute $1 at t=1 (as this increases their utility by +4 * Probability(SuccessfulFundraise), which is always a positive number) ... nowhere did anyone calculate their probability of being pivotal,
The expected payoff of contributing is -c * p2 + (p2 - p1) * u, where u is the utility of the public good, c is the contribution cost, p2 is the probability of a successful fundraise with you, and p1 is the probability of a successful fundraise with-or-without you. p2 - p1 is precisely the probability of being pivotal. Here, c = 1, u = 5, so each of the 9 people will contribute $1 only if they believe that (p2 - p1) / p2 is equal to at least 1/5.
Note that Gaussians are super-exponential, so the derivative drops to zero faster than the principal; hence, you are not going to get an effect where you can increase (p2 - p1) / p2 by dropping c to near-zero and then stepping c up over time; in fact, just the opposite.
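The decision rule above (contribute only when the pivotal share of success, (p2 - p1) / p2, is at least c/u) can be sketched as a tiny helper (a sketch under the thread's assumptions; the function name is mine):

```python
def should_contribute(c, u, p2, p1):
    """Expected net payoff of pledging, relative to abstaining:
    pay c when the fundraise succeeds (prob p2), gain u when the
    pledge is pivotal (prob p2 - p1). Positive iff
    (p2 - p1) / p2 > c / u."""
    return -c * p2 + (p2 - p1) * u > 0

# With c = 1 and u = 5, the pivotal share of success must exceed 1/5:
print(should_contribute(1, 5, p2=0.5, p1=0.35))  # pivotal ratio 0.30
print(should_contribute(1, 5, p2=0.5, p1=0.45))  # pivotal ratio 0.10
```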
they cannot observe any preferences other than their own
Now that is false. If DACs are useful at all, we certainly expect them to be used much more often than one round.

The creator of the public good does NOT get paid upon reaching some target amount of funds. Instead he gets paid when he completes the public good.
Suppose there is a DAC, let's say the target is $100000
What does "target" mean in this context? If we are using truthcoin, there is no way to hardcode a "target" into the funding of a public good.
If I bet a ton of money against Hillary, will I eventually reach a "target" amount of money that causes her to win the election? No.
Donating to a truthcoin public good is similar to giving money to a time-limited bounty. So if someone completes the bounty within the time limit, then they can take your money. Truthcoin is more powerful than a time-limited bounty because it allows for integration of a few more parties, like the entrepreneur, and competing engineers.
A couple counterexamples to disprove the existence of "target":
1) It is possible to underfund a public good with truthcoin, and the good still gets built. Imagine the good costs the engineer $10,000, but only $1000 were donated. There is nothing stopping the engineer from building the good and collecting the $1000, even though he takes a large loss.
2) It is possible to overfund a good, and the good doesn't get built. The community thinks software will cost $10,000, but it really only costs $1000. If no one writes the software by the time limit, then the $10,000 + interest are given back to the investors, and the entrepreneur loses everything.

These might help you:
1] The CLT does not magically transform "every random distribution in the world of more than 30 elements" into "a Gaussian distribution". If it did, the current wealth distribution (for example) would be somehow magically impossible. Instead, the CLT refers to the distribution of "good means" (i.e. those with at least 30 observations, from the same non-Taleb distribution).
2] Preferences ("P") are not "Gaussian". Even if they happened to be non-ordinal and distributed this way, we would, epistemologically, have no way of ever knowing this. We can observe player i's contribution at time t, and use that as evidence of P_i > P_contribution during t, but that single inference is nowhere near observing the entire distribution of preferences themselves (which would be akin to being able to read, with certainty, everyone's mind at all times).
3] Not only are preferences unobservable, but contrary to what you indicate (irrelevantly) about the Gaussian step-up, preferences are themselves a function of information, i.e. preferences can be a function of signals that others have sent about their preferences. I can prefer to attend a party if and only if my friend expresses an interest in also attending. They are also a function of time (I may want to go to the party with my friend at first, but later back out). The DAC's major strength is that it is robust to people's changing beliefs about each other.
4] You are correct when you say that "[they] will contribute $1 only if they believe that (p2 - p1) / p2 is equal to at least 1/5". However, all 9 individuals would believe that p1 = zero, because <$20 has been raised so far, so (p2 - p1) / p2 is 1 (which guarantees donations while total_donations < 20 and while those donations won't change ((p2 - p1) / p2), [in other words: small ones]). If you then said that there are many information frictions in getting this ball rolling, you would be right: those are exactly what the DAC (not AC) is designed to address. So I introduced this toy problem to build to the DAC (where, you'll notice, your "at least (1/5)" criticism doesn't apply).
I tried to make that example clear, and took time out of my day to help you understand the importance of the DAC, because I think that you may be in a position to advance its use. I would appreciate it if you gave the example a second read.
I also think that you sometimes change what you mean by "P" (or "V" originally?). Sometimes it is someone's belief today, sometimes it is a kind of global omnipotent belief, sometimes it involves post-fundraise perfect hindsight of the group, or involves the CLT. This may be the source of some of your confusion.

First of all, an important point is that all I am doing here is following a mixed strategy equilibrium model. It is obvious that no one participating, and everyone participating, are not Nash equilibria, so the equilibrium involves everyone participating with some probability p with 0 < p < 1. Everyone's thinking is independent, and the sum of random variables is a Gaussian. Yes, in the real world some people are richer or more interested in the public good than others, but I don't think that affects the model much (I suppose with some wealth distributions you might get a kind of weird stepladder effect, but the level of skewedness needed there feels like very precise laboratory conditions that are going to end up being very rarely true; if what you're saying is that DACs depend on a particular highly skewed stepladder distribution of wealth and benefit in order to be viable, and you think the real world semi-often describes that model, then I suppose we can discuss that, I am open to the possibility that they are viable as a niche tool).
> However, all 9 individuals would believe that p1=zero, because <$20 has been raised so far
p = 0 is NOT a stable Nash equilibrium as I said above, so you certainly cannot assume p1 = zero. Even if <$20 has been raised so far, the relevant question is the probability that $20 will be raised by the deadline.
> I also think that you sometimes change what you mean by "P" (or "V" originally?). Sometimes it is someone's belief today, sometimes it is a kind of global omnipotent belief, sometimes it involves postfundraise perfect hindsight of the group, or involves the CLT. This may be the source of some of your confusion.
V = the utility of the public good to each person
p = the probability that you are going to be pivotal, calculated from the Gaussian distribution of the mixedstrategy Nash equilibrium
I did not use capital P anywhere I can see.
Just to be clear, here is my theory on DACs: in most cases in reality, pV << C. No one participating is not a Nash equilibrium, everyone participating is not a Nash equilibrium. What is going to happen is that the mixed strategy equilibrium will end up at slightly less than 0.5 (since at exactly 0.5 participating is a losing proposition), and so the entrepreneurs will end up having to disburse funds more often than they receive them (if the probabilities are not 0.5, similar calculations apply, it'll just have some slightly different constant scaling factors). As a result of this, no entrepreneurs will want to try to make a DAC in the first place. This conclusion seems to match current reality.
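The "disburse more often than they receive" claim is simple expected value. A sketch with made-up symmetric numbers (not figures from the thread): if the success probability q sits below 0.5 and the entrepreneur's fee on success roughly matches the bonus pool forfeited on failure, expected profit is negative:

```python
def entrepreneur_ev(q, fee, bonus_pool):
    """Entrepreneur's expected profit: collect `fee` when the
    fundraise succeeds (prob q), forfeit `bonus_pool` to the
    refunded pledgers when it fails (prob 1 - q)."""
    return q * fee - (1 - q) * bonus_pool

# Success probability just under 0.5, symmetric stakes: the expected
# profit is negative, so no one offers the DAC in the first place.
print(entrepreneur_ev(0.45, 10, 10))
```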
Also, please note that I am NOT criticizing Truthcoin as a means of public goods incentivization; I realize that it does not work the same way, I am focusing strictly on DACs.

> First of all, an important point is that all I am doing here is following a mixed strategy equilibrium model. It is obvious that no one participating, and everyone participating, are not Nash equilibria, so the equilibrium involves everyone participating with some probability p with 0 < p < 1.
I'm not sure that that follows. In my example above, the individuals either just participated or didn't. Is p a distribution of preferences (values in range(0,1) which are different for each person)? Or, when you say "everyone participating with some probability p" does that mean that everyone calculates the same probability [for example, p=15%], and then rolls a 100-sided dice and contributes if it comes up 15 or lower?
> Everyone's thinking is independent,
Haha. But for now we'll take it arguendo, :)
> and the sum of random variables is a Gaussian.
The sum of >30 non-Taleb rv's is approximately Gaussian, but it seems now that you intend to generalize from one DAC project "build a lighthouse in New Haven, CT" to another "build a dam in Sandouping"? Otherwise how will you observe the rvs and use them later? Such a generalization certainly seems ambitious.
> Yes, in the real world some people are richer or more interested in the public good than others, but I don't think that affects the model much
My guess is that it actually does. If everyone felt the same way about the PG, people wouldn't complain about funding it via taxation. The complaint is that some must buy what they do not want, while there also exist unused Pareto improvements. The entrepreneurs would be better than the government at finding 'the vital few' for each new project.
> if what you're saying is that DACs depend on a particular highly skewed stepladder distribution of wealth and benefit in order to be viable
I don't: it can be any distribution, even your assumption of "equal caring". I do think the marginal advantage of a DAC over taxation increases with (let's call it) the 'benefit heterogeneity'. We more homogeneously benefit from an interstate highway system our personal / commerce / military organizations can use ("...but who would build the roads?"), but where is funding for the drudge work of Bitcoin unit testing?
> p = 0 is NOT a stable Nash equilibrium as I said above, so you certainly cannot assume p1 = zero.
I think this is an example of p and V shifting their meanings. You say that p is "the probability that you are going to be pivotal", and yet here you are using p as though it were a strategy (because you describe it as a NE, which is defined by a set of strategies). If they are the same, you are implying that people can each choose their p (which I don't understand). In paragraph 1 I was also confused over p, which seemed to be determined externally from the game setup somehow.
> Even if <$20 has been raised so far, the relevant question is the probability that $20 will be raised by the deadline.
Let's say it is the deadline. Will not all players donate (their marginal benefit - epsilon)? Their utility increases either way. In your framework, this is because they now believe that p1 = zero.
I intended to build a little "realism" into this: agents may try to save on their donations ('quasi-free-ride'), by attempting to generate additional interest/community in a market earlier, with a credible signal. The entrepreneur himself may do this. These are just my personal expectations and observations of kickstarter.
> Just to be clear, here is my theory on DACs: in most cases in reality, pV << C.
The V depends on "which public good", of course.
> the entrepreneurs will end up having to disburse funds more often than they receive them (if the probabilities are not 0.5, similar calculations apply, it'll just have some slightly different constant scaling factors). As a result of this, no entrepreneurs will want to try to make a DAC in the first place. This conclusion seems to match current reality.
I believe Tabarrok says that it is a good thing that entrepreneurs do not create DACs they think will lose, as it saves everyone the trouble of considering worthless DACs.
> Also, please note that I am NOT criticizing Truthcoin as a means of public goods incentivization; I realize that it does not work the same way, I am focusing strictly on DACs.
I know, my way is a little better because it encourages earlier donations and aggregates info on the project's feasibility.

> Also, please note that I am NOT criticizing Truthcoin as a means of public goods incentivization; I realize that it does not work the same way, I am focusing strictly on DACs.
> I know, my way is a little better because it encourages earlier donations and aggregates info on the project's feasibility.
Since prediction markets work better, why discuss DACs at all?
It is like a discussion on how to rub sticks most effectively to start a fire.

> Since prediction markets work better, why discuss DACs at all?
Your OP, which created this thread, referenced Vitalik's beliefs about financing public goods (as does the Thread Title). So that is literally what we are talking about.
Probably another key detail would be that DACs are a years-old established concept, whereas the TDAC smart contract I wrote about is an entirely new (untested) and theoretical concept. As it itself is an extension of a DAC, it would seem reasonable to move the conversation from AC to DAC to TDAC.

Sorry, I did indeed use p in two contradictory ways. There is P, probability of participation, and p, probability of being pivotal. There is also the probability of success, but I've fixed that to 0.5 (since we can adjust the funding threshold up or down to make it so). Does that sound reasonable?
> Or, when you say "everyone participating with some probability p" does that mean that everyone calculates the same probability [for example, p=15%], and then rolls a 100-sided dice and contributes if it comes up 15 or lower?
That is indeed my model.
> that you intend to generalize from one DAC project "build a lighthouse in New Haven, CT" to another "build a dam in Sandouping"?
There exist entrepreneurs, who calibrate DACs to have a 0.5 probability of success (so as to maximize each person's probability of being pivotal). There are going to be many of these games, with different thresholds. So I am seeing this as a situation where there are many different DACs constantly popping up around the world and people have plenty of experience seeing how often they end up succeeding and how often they end up having pivotal members.
> Let's say it is the deadline. Will not all players donate (their marginal benefit − epsilon)? Their utility increases either way. In your framework, this is because they now believe that p1 = zero.
No, because everyone else is playing at the same time as them. At the deadline, the game is a single-round game, so the mixed-strategy-equilibrium model is the right one to take.
> I intended to build a little "realism" into this: agents may try to save on their donations ('quasi-free-ride'), by attempting to generate additional interest/community in a market earlier, with a credible signal.
Sure, but priming the pump in this context is a public good. So we can't count on it to have that much of an effect.
I fully agree on heterogeneity of preferences, I just think that most public goods are NOT of the form where five people's utilities can add up to more than 30% of the total cost of a PG.
> I know, my way is a little better because it encourages earlier donations and aggregates info on the project's feasibility.
Actually, I think prediction-market-based incentivization is unfortunately worse, because at least DACs use the Gaussian distribution to create a leveraging effect where everyone's incentive is magnified by a factor of sqrt(N) due to the pivot effect, whereas the prediction market is basically just a donation scheme. A fully trustworthy donation scheme that has awesome anti-cheating and quality assurance properties, but a donation scheme nonetheless. People won't donate unless V > C, rather than the pV > C that DACs provide.

Actually, just to move this forward, how about I'll propose a formal model for the game that I am discussing, and we'll see which parts of it you disagree with. We'll also limit ourselves to assurance contracts to simplify things; if we agree on the economics of the AC then we can move on to the DAC.
1. There exist N players, each of which receive $V utility from the production of a hypothetical public good.
2. An assurance contract is set up where people can contribute either $0 or $C.
3. If more than N/k people contribute, the funds are sent to the entrepreneur, otherwise they are sent back to the donors. k is set by the entrepreneur, because the entrepreneur knows from prior experience that each person has a probability of 1/k of contributing (the reason why the entrepreneur wants to set the threshold to N/k is so that the threshold is right at the top of the bell curve for the probability distribution of total contributions, maximizing the probability that someone is pivotal and therefore maximizing the incentive to contribute). As another consequence of this optimization on the part of the entrepreneur, the probability of success is 0.5.
4. The game lasts for R rounds, round 1 ... round R. People who have not yet contributed can become contributors during any round.
Note that there are plenty of simplifications here. If you think that given these simplifications my analysis is correct, but under your preferred simplifications my analysis is not correct, then we can focus on the simplifications. If you think that my analysis is not correct even given the simplifications, then we move on.
1. There is no incentive to contribute in rounds 1 ... R−1 (this is because you have more information in round R, and because contributing earlier means that you are pushing the probability of success toward the right side of the gaussian, where the derivative of the probability of success is lower, so fewer people will contribute)
2. Let p be the probability of being pivotal.
3. The utility of contributing is pV − C * 0.5. Hence, someone will contribute if 2pV > C.
The stable equilibrium is the one where 2pV = C, so some people contribute and some do not, and the equilibrium probability of contributing is 1/k. If more than 1/k people contribute, then the Gaussian will move to the right, so the threshold will no longer be at the top of the Gaussian, so p will be lower and thus 2pV < C so others will be less likely to contribute to compensate (so it would be the same result except you are expected to pay more); less than 1/k people contributing has the same result, except instead of compensating it drives the success probability to zero (which nobody wants).
This equilibrium does not exist if there are no values of C and k such that 2pV = C.
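To make the model concrete, here is a small numerical sketch (my own, not from the thread) of points 1–3 using exact binomial probabilities; the values of N, k, and V are arbitrary illustrative choices:

```python
from math import comb

def pivot_probability(N, q):
    """P(exactly T - 1 of the other N - 1 players contribute), with T = round(N * q)."""
    T = round(N * q)
    return comb(N - 1, T - 1) * q ** (T - 1) * (1 - q) ** (N - T)

def success_probability(N, q):
    """P(at least T of all N players contribute)."""
    T = round(N * q)
    return sum(comb(N, j) * q ** j * (1 - q) ** (N - j) for j in range(T, N + 1))

# Illustrative values (my own choices): 1000 players, k = 2, V = 100.
N, k, V = 1000, 2, 100.0
q = 1 / k                      # equilibrium probability of contributing
p = pivot_probability(N, q)
print(f"pivot probability p = {p:.4f}")                          # shrinks like 1/sqrt(N)
print(f"success probability = {success_probability(N, q):.3f}")  # near 0.5 by construction
print(f"2pV = {2 * p * V:.2f}  (contribute iff 2pV > C)")
```

With these numbers the threshold sits at the top of the binomial, so the success probability comes out close to 0.5, and the largest cost C that still satisfies 2pV > C falls as N grows.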

Ok, I strongly believe in heterogeneous preferences, but I think your model setup might represent an AC reasonably well. I'm still not sure about your conclusions.
I was completely with you until this point:
> 3. The utility of contributing is pV − C * 0.5. Hence, someone will contribute if 2pV > C.
It seems to me that p has again shifted its meaning. You've asserted something like: "people will independently derive the value of p, then roll a dice to decide if they will contribute", but here you say that "someone will contribute if 2pV > C". If everyone's p is the same, either everyone will contribute or no one will? It seemed before that everyone was indifferent to contributing (as long as enough people did), which implied the mixed strategy. Also V is always > C, so for p>=.5 everyone will contribute, seemingly.
My feeling is still that in the last round, people will contribute the C. They may experience regret either way (donating or otherwise). Imagine V = 100 and C is 2. Then regret for the success case ("too many" people donated) is 2 ("I could have saved that 2!") but for the fail case is 98 ("I really needed that lighthouse, why didn't I just donate the 2!?"). Thus it seems that even your constrained model would have everyone donating when C < V/2. Possibly, even when C > V/2 everyone would donate (as, by definition, agents do act to maximize their utility).
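As a quick sketch of the regret arithmetic in this example (my own formalization, not from the thread): a minimax-regret donor compares the worst-case regret of each action, and donates exactly when C < V − C, i.e. when C < V/2.

```python
# Regret arithmetic for the V = 100, C = 2 example above.
V, C = 100, 2
regret_donate  = C        # success would have happened anyway: "I could have saved that 2!"
regret_abstain = V - C    # the contract failed: "why didn't I just donate the 2!?"
# Minimax regret: donate whenever C < V - C, i.e. whenever C < V/2.
print(regret_donate, regret_abstain, regret_donate < regret_abstain)  # 2 98 True
```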

> There exist entrepreneurs, who calibrate DACs to have a 0.5 probability of success (so as to maximize each person's probability of being pivotal). There are going to be many of these games, with different thresholds. So I am seeing this as a situation where there are many different DACs constantly popping up around the world and people have plenty of experience seeing how often they end up succeeding and how often they end up having pivotal members.
I'm feeling very confident that you are going to want to abandon this premise. Entrepreneurs may fund many DACs, but this will drive C down, and (V−C) up (increasing the likelihood of success, but decreasing your "p"). More importantly, this assumes that you can look at how people feel about {V1, C1, "Digging a gigantic hole in the Atacama Desert and then filling it back up."} and generalize it to {V2, C2, "Building an Earth Asteroid Deflector to protect the planet from destruction."}, which I feel is pretty much impossible.

> Ok, I strongly believe in heterogeneous preferences, but I think your model setup might represent an AC reasonably well. I'm still not sure about your conclusions.
> I was completely with you until this point:
> 3. The utility of contributing is pV − C * 0.5. Hence, someone will contribute if 2pV > C.
> It seems to me that p has again shifted its meaning. You've asserted something like: "people will independently derive the value of p, then roll a dice to decide if they will contribute", but here you say that "someone will contribute if 2pV > C". If everyone's p is the same, either everyone will contribute or no one will? It seemed before that everyone was indifferent to contributing (as long as enough people did), which implied the mixed strategy. Also V is always > C, so for p>=.5 everyone will contribute, seemingly.
> My feeling is still that in the last round, people will contribute the C. They may experience regret either way (donating or otherwise). Imagine V = 100 and C is 2. Then regret for the success case ("too many" people donated) is 2 ("I could have saved that 2!") but for the fail case is 98 ("I really needed that lighthouse, why didn't I just donate the 2!?"). Thus it seems that even your constrained model would have everyone donating when C < V/2. Possibly, even when C > V/2 everyone would donate (as, by definition, agents do act to maximize their utility).
Your case had a very high value for V/C, so your DAC in my model would work up to ~2500 people. Note also that regret for the fail case is 98 only when a member is pivotal; in the case where the contract would have failed with or without them, there is no regret.
> this assumes that you can look at how people feel about {V1, C1, "Digging a gigantic hole in the Atacama Desert and then filling it back up."} and generalize it to {V2, C2, "Building an Earth Asteroid Deflector to protect the planet from destruction."}, which I feel is pretty much impossible.
V1 ~= 0. V2 ~= ∞. And there will be many cases in between. People will probably figure out the values in between using linear regression.
So, on the main point, I guess the main impasse is how to reconcile:
1. The probability of contributing is 1/k
2. A person contributes if 2pV > C, and does not otherwise
The only tool that game theory has for solving this class of problems is the mixed-strategy Nash equilibrium, i.e. a set of probabilities such that there is no benefit from unilateral deviation from everyone's strategy. So, intuitively, the goal is to prove (or disprove) that if everyone contributes with probability 1/k, then 2pV = C. My explanation for that is that that situation is the situation that an entrepreneur wants, since anything else is not a stable equilibrium.
What alternative equilibrium do you propose in my model? One where k ~= 1? In that case, p will be the inverse square root of the number of irrational people, which certainly is more manageable by a constant factor, and what we should really focus attention on is not the model of "there exist N people" but rather "there exists an infinite number of people with a power law distribution of V values for the public good"; I think that might be more where the uncertainty that I am getting at comes from. But in that case, I am pretty convinced that a p ~= 1/sqrt(N) factor is going to appear in there for similar reasons.

> Note also that regret for the fail case is 98 only when a member is pivotal; in the case where the contract would have failed with or without them, there is no regret.
That does seem like the case, doesn't it? But, in a world of homogeneous preferences, where everyone shares the same V, would pay the same C, and independently constructs the same p, each of these individuals is empirically a copy of the same person. If they all share exactly the same preferences and cost/benefits, and the same reasoning (in construction of p), they might all be simultaneously and equally responsible for the contract's success or failure. Even if no one contributed, "everyone" might be pivotal. This might be what, for example, Eliezer Yudkowsky would argue. I don't think it is particularly important either way.
> So, on the main point, I guess the main impasse is how to reconcile:
> 1. The probability of contributing is 1/k
> 2. A person contributes if 2pV > C, and does not otherwise
Yes, because there is a clear contradiction between "a probability of contributing" and "contributes if".
Six pure outcomes, V ranking highest, V−C ranking second, and getting nothing the lowest:
                    | No Donate | Donate
No Lighthouse       |     0     |   0
Pivotal Lighthouse  |     0     |   1
Lighthouse          |     2     |   1
In this setup, someone would care about their likelihood of being in the "Lighthouse" state, but not in the "Pivotal Lighthouse" state (under trembling-hand, you'd donate as long as you weren't in "Lighthouse"). I think it is your obsession with 'being pivotal' which explains why your ideas are so different.
Edit: I had mislabeled the rows. Fixed now.
Edit 2: Elaborated "In this setup..." point.
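The payoff table above can also be written out as code (a sketch with my own labeling; payoffs are the ordinal ranks from the post, 2 = V, 1 = V − C, 0 = nothing):

```python
# Ordinal payoffs from the 3-state table: rank 2 = V, rank 1 = V - C, rank 0 = nothing.
payoff = {
    ("No Lighthouse",      "No Donate"): 0,
    ("No Lighthouse",      "Donate"):    0,
    ("Pivotal Lighthouse", "No Donate"): 0,
    ("Pivotal Lighthouse", "Donate"):    1,
    ("Lighthouse",         "No Donate"): 2,
    ("Lighthouse",         "Donate"):    1,
}
# Donating only changes your payoff in two states: it gains you a rank in
# "Pivotal Lighthouse" (0 -> 1) and costs you a rank in "Lighthouse" (2 -> 1).
for state in ("No Lighthouse", "Pivotal Lighthouse", "Lighthouse"):
    gain = payoff[(state, "Donate")] - payoff[(state, "No Donate")]
    print(state, gain)
```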