Quote
5. Every user who correctly submitted a value between the 25th and 75th percentile gains a reward of N tokens (which we'll call "schells")
Quote
At the end of the epoch (or, more precisely, at the point of the first "ping" during the next epoch), everyone who submitted a value for P between the 25th and 75th percentile, weighted by deposit, gets their deposit back plus a small reward, everyone else gets their deposit minus a small penalty, and the median value is taken to be the true UScent/wei price. Everyone who failed to submit a valid value for P gets their deposit back minus a small penalty.
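The epoch settlement described in the quote above can be sketched in a few lines. This is a hypothetical illustration, not code from SchellingCoin: the `Submission` type, the `settle_epoch` helper, and the flat `reward`/`penalty` amounts are all assumptions made for the example.

```python
# Sketch of the deposit-weighted percentile settlement described above.
# All names and parameters here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Submission:
    voter: str
    value: float      # claimed UScent/wei price
    deposit: float

def settle_epoch(subs, reward=1.0, penalty=1.0):
    """Return (median_price, payouts) for one epoch."""
    ordered = sorted(subs, key=lambda s: s.value)
    total = sum(s.deposit for s in ordered)
    payouts = {}
    cum = 0.0
    median = None
    for s in ordered:
        lo, hi = cum, cum + s.deposit
        # this submission's deposit-weighted percentile band
        if lo < 0.75 * total and hi > 0.25 * total:
            payouts[s.voter] = s.deposit + reward   # inside 25th-75th band
        else:
            payouts[s.voter] = s.deposit - penalty  # outside the band
        if median is None and hi >= 0.5 * total:
            median = s.value                        # deposit-weighted median
        cum = hi
    return median, payouts
```

With five equal deposits and values 1..5, the middle three voters fall inside the band and are rewarded, the two extremes are penalized, and the median (3) becomes the epoch's price.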
Quote from: psztorc on June 30, 2014, 12:14:59 AM
Now I'm going to repeat some things I said before:
Quote from: martinBrown
TruthCoin is basically an extension of SchellingCoin.
Quote from: martinBrown on June 27, 2014, 04:14:46 AM
"Whatever helps you think about it. I think SchellingCoin makes no sense, and misuses the Schelling Point idea, which requires a completely symmetric and simultaneous game with one information set and multiple equilibria all with exactly the same payout. Nothing even close to what Vitalik wrote about."
"Price-pegging" by portfolio replication is totally different, would be better called price-tracking. Tokens are issued which would track the price of BTC (or USD, or any other asset) through a decentralized hedging contract, described by vbuterin in SchellingCoin.
Quote
Thus, knowing only that the only value that other people's answers are going to be biased towards is the actual wei/UScent, the rational choice to vote for in order to maximize one's chance of being near-median is the wei/UScent itself. Hence, it's in everyone's best interests to come together and all provide their best estimate of the wei/UScent price. An interesting philosophical point is that this is also the same way that proof-of-work blockchains work, except that in that case what you are voting on is the time order of transactions instead of some particular numeric value;
Quote
In an attempt to measure the players' morality, Tyler uses the eigenmorality idea from before. The extent to which player A "cooperates" with player B is simply measured by the percentage of times A cooperates. ... This then gives us a "cooperation matrix," whose (i,j) entry records the total amount of niceness that player i displayed to player j. Diagonalizing that matrix, and taking its largest eigenvector, then gives us our morality scores.
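The "diagonalize the cooperation matrix, take its largest eigenvector" step above can be made concrete with a small numerical toy. The 3-player matrix below is made-up data, and power iteration is one standard way (an implementation choice here, not something the quote specifies) to extract the principal eigenvector.

```python
# Toy eigenmorality computation: principal eigenvector of a cooperation
# matrix via power iteration. The matrix entries are invented example data.
import numpy as np

# C[i, j] = fraction of rounds in which player i cooperated with player j
C = np.array([
    [0.0, 0.9, 0.8],   # player 0 is mostly nice
    [0.7, 0.0, 0.6],
    [0.1, 0.2, 0.0],   # player 2 mostly defects
])

def principal_eigenvector(M, iters=1000):
    """Dominant eigenvector of a nonnegative matrix (Perron-Frobenius)."""
    v = np.ones(M.shape[0])
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v)
    return v

scores = principal_eigenvector(C)
```

The circularity the quote mentions is exactly the fixed point being computed: a player's score is the niceness it shows to others, weighted by *their* scores, and the eigenvector is the self-consistent assignment.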
Quote
At first glance, the above definitions sound ludicrously circular—even Orwellian—but we now know that all that's needed to unravel the circularity is a principal eigenvector computation on the matrix of trust. And the computation of such an eigenvector need be no more "Orwellian" than ... well, Google. If enough people want it, then we have the tools today to put flesh on these definitions, to give them agency: to build a crowd-sourced deliberative democracy, one that "usually just works" in much the same way Google usually just works.
Quote
Now, would those with axes to grind try to subvert such a system the instant it went online? Certainly. ... So there would arise a parallel world of trust and consensus and "expertise," mutually-reinforcing yet nearly disjoint from the world of the real. But here's the thing: anyone would be able to see, with the click of a mouse, the extent to which this parallel world had diverged from the real one. ... The deniers and their think-tanks would be exposed to the sun; they'd lose their thin cover of legitimacy.
Quote
But the point of an eigentrust system wouldn't be to convince everyone. As long as I'm fantasizing, the point would be that, once people's individual decisions did give rise to a giant connected trust component, the recommendations of that component could acquire the force of law. The formation of the giant component would be the signal that there's now enough of a consensus to warrant action, despite the continuing existence of a vocal dissenting minority—that the minority has, in effect, withdrawn itself from the main conversation and retreated into a different discourse. ... This is still democracy; it's just democracy enhanced by linear algebra.
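The "giant connected component as a signal for action" idea above is, mechanically, just a connectivity check on the trust graph. A minimal sketch, under assumptions the quote does not fix: trust edges are treated as undirected for connectivity, and the supermajority threshold (0.67 here) is arbitrary.

```python
# Sketch of the giant-component signal: does the largest (weakly)
# connected component of the trust graph cover a supermajority of nodes?
# The 0.67 threshold is an illustrative assumption.
from collections import defaultdict

def giant_component(edges, n_nodes, threshold=0.67):
    """Return the largest component if it covers >= threshold of all
    nodes, else None (no consensus signal)."""
    adj = defaultdict(set)
    for a, b in edges:              # ignore edge direction for connectivity
        adj[a].add(b)
        adj[b].add(a)
    seen, best = set(), set()
    for start in range(n_nodes):
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:                # iterative DFS over this component
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        if len(comp) > len(best):
            best = comp
    return best if len(best) >= threshold * n_nodes else None
```

A chain linking 5 of 6 participants clears the threshold and yields a signal; two disjoint pairs do not.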
Quote
Other people will object that, while we should use the Internet to improve the democratic process, the idea we're looking for is not eigentrust or eigenmorality but rather prediction markets. Such markets would allow us to, as my friend Robin Hanson advocates, "vote on values but bet on beliefs." For example, a country could vote for the conditional policy that, if business-as-usual is predicted to cause sea levels to rise at least 4 meters by the year 2200, then an aggressive emissions reduction plan will be triggered, but not otherwise. But as for the prediction itself, that would be left to a futures market: a place where, unlike with voting, there's a serious penalty for being wrong, namely losing your shirt. If the futures market assigned the prediction at least such-and-such a probability, then the policy tied to that prediction would become law.
...
But just like Google, whatever its flaws, works well enough for you to use it dozens of times per day, so a crowd-sourced eigendemocracy might —just might— work well enough to save civilization.
Quote
Moving on to eigendemocracy, here I think the biggest problem is one pointed out by commenter Rahul. Namely, an essential aspect of how Google is able to work so well is that people have reasons for linking to webpages other than boosting those pages' Google rank. In other words, Google takes a link structure that already exists, independently of its ranking algorithm, and that (as the economists would put it) encodes people's "revealed preferences," and exploits that structure for its own purposes.
...
By contrast, consider an eigendemocracy, with a giant network encoding who trusts whom on what subject. If the only reason why this trust network existed was to help make political decisions, then gaming the system would probably be rampant: people could simply decide first which political outcome they wanted, then choose the "experts" such that claiming to "trust" them would do the most for their favored outcome. It follows that this system can only improve on ordinary democracy if the trust network has some other purpose, so that the participants have an actual incentive to reveal the truth about who they trust. So, how would an eigendemocracy suss out the truth about who trusts whom on which subject? I don't have a very good answer to this, and am open to suggestions.
Quote
That "trust" network discussion reminds me of the bitcoin network somehow...
...
(1) This seems very similar to the block chain innovation used in bitcoin.
Quote
Another problem that I have with TruthCoin though... Doesn't it assume that someone could "buy truth"? Let's say that TC's market cap is 1M ether. I could then create a contract for 2M ether, and buy enough TC to swing the vote in my favour. I'd lose all my TC because of that, and probably kill the whole TC currency, but I'd be awarded 2M for the contract fulfillment. In other words - isn't TC able to protect a contract only up to an amount equal to TC's market cap?
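The attack sketched in that post reduces to back-of-the-envelope arithmetic: compare the cost of buying a voting majority against the contract payoff at stake. The 51% majority fraction and the assumption that the attacker's purchased TC become worthless are both simplifications made for this illustration.

```python
# Back-of-the-envelope check of the "buy truth" attack described above.
# All figures and the 51% majority assumption are hypothetical.
def attack_is_profitable(tc_market_cap, contract_value,
                         majority_fraction=0.51):
    cost = majority_fraction * tc_market_cap   # buy a vote majority
    # worst case for the attacker: the purchased TC end up worthless
    return contract_value > cost

# 1M-ether market cap vs. a 2M-ether contract: the attack pays.
```

On these numbers the poster's worry holds: a contract worth more than (roughly) the market cap makes dishonest voting profitable even if the attacker's stake is destroyed.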
Quote
> TruthCoin voter decisions are not used to adjudicate the outcomes of prediction markets in other coins.
Well, this is a direct quote from the TruthCoin whitepaper. Article IV, Page 19, section (b).(i.):
"Most critically of all, this system will have to sign Withdrawal Transactions so that users can bring Bitcoin out of this system and back into their personal wallets."
Also, straight from the Abstract on Page 1:
"Bitcoin users can create PMs on any subject, or trade anonymously within any PM, and all PMs enjoy low fees and permanent market liquidity through a LMSR market maker."
So it seems that you missed something about TruthCoin.
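The abstract quoted above mentions an LMSR (logarithmic market scoring rule) market maker. For readers unfamiliar with it, here is a minimal sketch of the standard LMSR cost function; the liquidity parameter `b` is an arbitrary choice for the example, and the helper names are mine, not TruthCoin's.

```python
# Minimal sketch of Hanson's LMSR cost function, C(q) = b * ln(sum_i e^{q_i/b}),
# which the TruthCoin abstract refers to. Parameter b sets market liquidity.
import math

def lmsr_cost(q, b=100.0):
    """Cost function over outstanding share quantities q (one per outcome)."""
    m = max(q)                                    # subtract max for stability
    return b * (m / b + math.log(sum(math.exp((qi - m) / b) for qi in q)))

def lmsr_trade_cost(q_before, q_after, b=100.0):
    """What a trader pays to move outstanding shares from q_before to q_after."""
    return lmsr_cost(q_after, b) - lmsr_cost(q_before, b)
```

This is how the "permanent market liquidity" claim works: the market maker always quotes a price (the cost-function difference), so a counterparty is never needed.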
Quote
... assuming a market with a clear outcome, this system produces an iterated game of chicken between coalitions of voters holding shares in each outcome. If either coalition is able to convince the other that they are absolutely going to spend their N/2 votes on their preferred outcome, the other side is incentivized to back down and concede to prevent losing their bond. It is only by repeated play, in which participants have a reputation to maintain that will be damaged if they vote for a patently incorrect outcome, that this game can be avoided. Thus we think this approach is less desirable as it requires tracking reputations for all participants in the market and not just a small number of adjudicators.
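The game of chicken described above has a simple payoff structure, sketched below. The prize and bond sizes are illustrative assumptions; the point is only the shape of the incentives: if the other coalition credibly commits to holding, conceding strictly beats holding.

```python
# Toy payoff table for the voters' game of chicken described above.
# Payoffs are (coalition A, coalition B); prize and bond are illustrative.
def payoffs(a_holds, b_holds, prize=100, bond=10):
    if a_holds and b_holds:
        return (-bond, -bond)   # stand-off: both sides forfeit their bonds
    if a_holds:
        return (prize, 0)       # B concedes, A's outcome wins
    if b_holds:
        return (0, prize)       # A concedes, B's outcome wins
    return (0, 0)               # both concede: no one forces the outcome
```

Both "one side holds, the other concedes" cells are equilibria, which is exactly why a credible commitment by either coalition decides the game, and why the quoted author falls back on reputation from repeated play to rule out the incorrect equilibrium.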