Categories
blog

A Philosophy of Blockchain: Is Secrecy a Danger?

Proof of stake could endanger the equality of the blockchain and hidden centralizations could endanger its trustlessness. However, there’s another innovation that may endanger both…

written by Shannon Appelcline

Upon inventing Bitcoin, Satoshi Nakamoto created an open ledger that anyone could write to as long as they followed the consensus rules. This design revealed two crucial elements of blockchain design. First, it declared the equality of the blockchain: anyone could see anything on the blockchain thanks to its permissionless design; and anyone could add any valid transaction to the blockchain thanks to its censorship resistance. Second, it demonstrated the trustlessness of the blockchain: anyone could verify that both the blocks and their transactions were validly constructed.

But the founding principles of a community are constantly endangered as it grows and evolves. As we’ve written in past philosophy articles, we feel that proof of stake could endanger the equality of the blockchain and that hidden centralizations could endanger its trustlessness. However, there’s another innovation that may endanger both: secrecy.

A Confidential Possibility

There has been a bit of secrecy in Bitcoin from the start, as Satoshi Nakamoto states in the original paper: “The necessity to announce all transactions publicly precludes [traditional privacy, which limits information about an exchange to the parties involved and a trusted third party], but privacy can still be maintained by breaking the flow of information in another place: by keeping public keys anonymous.”

from Satoshi Nakamoto’s paper. https://bitcoin.org/bitcoin.pdf

However, the blockchain is not truly anonymous. At best, it’s pseudonymous and even that’s quite fragile. It depends on strict key hygiene, where everyone constantly creates new keys, and even then there’s the danger of correlation if someone can detect clusters of addresses and connect any of them to a real-world identity.

The quest for privacy beyond Nakamoto’s pseudonymity has loomed large as Bitcoin has matured. In 2013, Greg Maxwell proposed CoinJoin as one of the first solutions; it simply mixed together bitcoins, making it harder to correlate them. That same year Adam Back detailed “bitcoins with homomorphic value”, which would eventually become the Confidential Transactions of Blockstream’s Elements Project. Back took a different tack by blinding the contents of a transaction, so that people outside the transaction could only see that it occurred (and what the mining fee was). The fact that later non-Confidential Transactions could leak information about previous Confidential Transactions is probably what led to the creation of fully privacy-oriented blockchains, such as Monero in 2014 and Zcash in 2016, each of which took different approaches to secrecy.

Obviously, there is interest in increased blockchain privacy: it’s been one of the driving forces for cryptocurrency adoption. This transactional secrecy has a variety of advantages, the most crucial of which is fungibility: with true privacy, it becomes impossible to trace the provenance of an individual transaction, which is crucial for a working currency; without it, the cryptocurrency in individual transactions could be censored if the network did not like who used it or how it was used.

However, we must balance this growing philosophical desire for complete secrecy with the philosophies that have been part of the blockchain from the start. Secrecy may actually enhance some of the blockchain’s core ideals, such as its censorship resistance. And it doesn’t hurt others, such as the blockchain’s trustlessness: protocols like Blockstream’s Confidential Transactions were explicitly designed to balance out the inputs and outputs of a transaction, allowing verification by anyone.
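
That balancing act can be illustrated with a toy Pedersen-style commitment. This is a minimal sketch over integers modulo a prime, with made-up generators; real Confidential Transactions use elliptic-curve commitments plus range proofs, but the homomorphic idea is the same: a verifier can check that inputs and outputs balance without seeing any amounts.

```python
# Toy Pedersen-style commitments over integers mod a prime.
# NOT secure and NOT the actual Confidential Transactions scheme;
# the modulus and "generators" below are illustrative values.
P = 2**61 - 1          # a Mersenne prime (toy modulus)
G, H = 3, 7            # two toy "generators", assumed independent

def commit(amount: int, blind: int) -> int:
    """C = g^amount * h^blind (mod p) hides `amount` behind `blind`."""
    return (pow(G, amount, P) * pow(H, blind, P)) % P

# A 10-coin input split into outputs of 6 and 4.
r_in, r_out1, r_out2 = 11, 5, 6      # blinding factors; 11 = 5 + 6
c_in = commit(10, r_in)
c_out = (commit(6, r_out1) * commit(4, r_out2)) % P

# Homomorphism: if amounts and blinds both balance, commitments match,
# so anyone can verify the transaction balances without seeing amounts.
assert c_in == c_out
```

Because the commitments multiply together, an outside verifier checks only that the input commitment equals the product of the output commitments; the amounts themselves stay hidden.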

But that’s not to say that secrecy doesn’t have problems of its own.

The Dangers of Cryptocurrency Secrecy

One of the original goals of Bitcoin (and other cryptocurrencies) was to give power back to the people. In the physical world, we’ve lost agency to corporations, government, and plutocrats. The blockchain gave that back to us in part due to its transparency. It suddenly became possible to require that transactions of public entities be public in a way that we never could have considered in traditional financial systems. We could require that proxies publicly reveal their votes, that elected officials detail their contributions, and that corporations declare transactions related to their advertisements, their guarantees, and their certifications — and many of these revelations could be verified through the blockchain itself.

But now, as a shroud of secrecy is spreading across blockchains, expectations of transparency are rapidly fading. If cryptocurrency becomes as opaque as traditional currency, then the opportunity to demand transparency, to truly change the rules of the game, will evaporate.

Confidential transactions and privacy-protecting digital currencies are being advertised as a way for us to have privacy, but it’s them, the rich and the powerful, who will make the greatest use of this power. We already see this in the opaque finances of the physical world, at places like Deutsche Bank, which is facing legal action as the result of laundering twenty billion dollars of Russian money. If the transparency of the blockchain becomes opaque, it’ll happen there too. The rich and the powerful will hide their transactions so that they can maintain the influence and authority they’ve gathered in the physical world and extend it to the digital world — using the very tool that’s supposed to reverse those trends.

In addition, secrecy may turn cryptocurrencies into what people fear. There has long been concern over criminal uses of the blockchain, but the transparency and pseudonymity of most blockchains have worked against that — and in fact have made criminals vulnerable when they mistakenly thought they were safe. Cryptocurrency secrecy could let them in.

Perhaps we, as a blockchain community, will assess these costs as acceptable given the privacy gains for the common person. Or perhaps not. But the problems become even greater when moving from cryptocurrency to the wider world of digital assets, a topic that’s dear to us at Bitmark.

The Dangers of Digital Asset Secrecy

Bitmark defines and defends digital property through the Bitmark Property System, which allows people to register their digital assets and data, then to license, sell, loan, or otherwise leverage that digital property for the good of both themselves and our society. KKBOX’s use of the Bitmark blockchain to record royalties for the use of digital music shows how this system can help individual musicians, while UC Berkeley and Pfizer have demonstrated the benefits of recording health data permissions to support health studies and clinical trials that could contribute to the whole world. But for digital assets to have value, it’s vitally important that ownership records be public, not secret.

Last year we wrote about the case of Shepard Fairey, whose famous “Hope” stencil portrait of Barack Obama became the source of a legal dispute because Fairey didn’t license the original photographic source. That case study demonstrates our need to know who owns something so that we can license it (or purchase it or borrow it): secrecy works against the interests of both asset holders and hopeful licensees. The music industry offers another use case: rights information should be stored in electronic data, but it’s often wrong, which has left artists unable to collect billions of dollars in royalties.

This problem of determining asset ownership is so large that it became a major focus of the US Copyright Office in the ’00s. As corporations like Google tried to turn physical assets into digital assets, they ran into a major problem with “orphan works”, where they couldn’t discover who the rights-holder for an asset was, and so were unable to attain permission (or refusal) to use the work. The Copyright Office was thus tasked with determining whether these orphaned works still served to “promote the progress of Science”, one of the major purposes for copyright in the United States. Their conclusion was:

“Both the use of individual orphan works and mass digitization offer considerable opportunities for the diffusion of creativity and learning. Too often, however, the public is deprived of the full benefit of such uses, not because rightsholders and users cannot agree to terms, but because a lack of information or inefficiencies in the licensing process prevent such negotiations from occurring in the first place. As countries around the world are increasingly recognizing, these obstacles to clearance are highly detrimental to a well-functioning copyright system in the twenty-first century. The Office thus agrees that a solution for the United States is ‘desperately need[ed]’ …”

In other words, the US Copyright Office recognized that society and its institutions needed to be able to discover both the attributes and ownership of assets, so that it could better itself and reach its full potential.

Obviously, having ownership information easily available could have helped Shepard Fairey and Google, who each wanted to reuse existing assets. It could have helped musicians, who desired to receive payments for their music. But it goes far beyond that and far beyond these cases of accidental secrecy. By knowing who owns resources, a society can find those resources when it needs them — whether it be iron ore required for construction or health data needed to solve a medical problem. By knowing who owns items, a society can contact the owners of those items — perhaps because those items are surprisingly dangerous (due to a recall) or perhaps because they’re surprisingly valuable (due to a need). Finally, as the US Copyright Office noted, registration of ownership can allow negotiation, a necessary element to resolve negative externalities related to a marketplace, as discussed in the Coase Theorem. Having purposeful secrecy would directly contradict all of these use cases, which is why it’s even more problematic for digital assets than for simple cryptocurrency.

The US Copyright Office suggested legalistic methods to solve this problem. But there’s another, better solution, one that can avoid works being orphaned or misplaced in the first place: technology. It’s the solution offered by the Bitmark Property System, which organizes and codifies the ownership of digital assets on a property rights blockchain for the good of both the rights holders and our society. By maintaining these deeds in the public eye, not in secrecy, we can enable all of these use cases: Fairey’s artistic reimagination of a photo, Google’s digitization of classic works, and our society’s ability to locate, recall, or purchase items of importance.

Conclusion

Many people laud the privacy of the blockchain, something that was possible once upon a time when transactions depended on cash in the physical world, but which is becoming increasingly difficult in a world of electronic banking on the internet.

But, we should be aware that secrecy of this sort has very real consequences. Some people might value their privacy enough to empower the plutocratic powers of the physical world in cyberspace, though we think there’s real weight to both sides of the argument. But when we delve into the wider world of digital assets, we think this position becomes increasingly dangerous.

Which is why the Bitmark Property System is open and transparent, just as Bitcoin was in its original design.

Further Reading

Alt, Casey, Sean Moss-Pultz, Amy Whitaker, & Timothy Chen. November 2016. “Defining Property in the Digital Environment”.
Bitmark_defining-property-dig-env.pdf.

Bitmark. Retrieved July 2019. “Why Property Rights Matter”. Bitmark. https://bitmark.com/en/property-blockchain/why-property-rights-matter.

Bitmark. October 2018. “How to Use the Blockchain to Riff Artwork, Sell PDFs, and Otherwise Gain Economic Control of Your…” Hackernoon. https://hackernoon.com/bitmark-how-to-use-the-blockchain-for-property-rights-ecf9f5e67e77.

Blockstream. Retrieved 2019. “Elements by Blockstream”. The Elements Project. https://elementsproject.org/.

Deahl, Dani. May 2019. “Metadata is the Biggest Little Problem Plaguing the Music Industry”. The Verge. https://www.theverge.com/2019/5/29/18531476/music-industry-song-royalties-metadata-credit-problems.

Hsiang-Yun L. February 2019. “Coase Theorem in the World of Data Breaches”. Human Rights at the Digital Age. https://techandrights.tech.blog/2019/02/22/coase-theorem-in-the-world-of-data-breaches/.

Maxwell, Greg. August 2013. “CoinJoin: Bitcoin Privacy for the Real World”. Bitcoin Talk. https://bitcointalk.org/index.php?topic=279249.0.

Nakamoto, Satoshi. October 2008. “Bitcoin: A Peer-to-Peer Electronic Cash System”. https://bitcoin.org/bitcoin.pdf.

Poelstra, Andrew, Adam Back, Mark Friedenbach, Gregory Maxwell, and Pieter Wuille. 2017. “Confidential Assets.” Blockstream. https://blockstream.com/bitcoin17-final41.pdf.

US Copyright Office. June 2015. “Orphan Works and Mass Digitization”. Copyright.gov. https://www.copyright.gov/orphan/reports/orphan-works2015.pdf.

Van Wirdum, Aaron. November 2015. “Is Bitcoin Anonymous? A Complete Beginner’s Guide”. Bitcoin Magazine. https://bitcoinmagazine.com/articles/is-bitcoin-anonymous-a-complete-beginner-s-guide-1447875283.

Van Wirdum, Aaron. June 2016. “Confidential Transactions: How Hiding Transaction Amounts Increases Bitcoin Privacy”. Bitcoin Magazine. https://bitcoinmagazine.com/articles/confidential-transactions-how-hiding-transaction-amounts-increases-bitcoin-privacy-1464892525.

By Bitmark Inc. on August 29, 2019.

A Philosophy of Blockchain: Do You Have Hidden Centralizations?

It’s hard to avoid hidden centralizations, particularly when you’re creating code that’s being used by a network…

written by Shannon Appelcline

There is no doubt that decentralization was one of the core philosophies of Bitcoin (and thus the blockchain). In his original white paper, Satoshi Nakamoto wrote that Bitcoin could enable “any two willing parties to transact directly with each other without the need for a trusted third party”. Over time, this idea has become a touchstone for blockchain technology: the Nakamoto Consensus protocol ensures that a blockchain is created in a decentralized way, and anyone can then validate transactions to ensure that the blockchain remains trusted.

Why is decentralization important to blockchains? Satoshi Nakamoto alludes to the intent in the original Bitcoin white paper, saying that it would be problematic if “the fate of the entire money system depend[ed] on” a centralized authority. This is certainly a crucial, if pragmatic, reason to avoid centralization: if the health and viability of that authority fails, then so does the entire system.

However, the Cypherpunks, who prefigured the creation of Bitcoin, offered more philosophical reasons behind the logic of decentralization. In “A Cypherpunk’s Manifesto”, Eric Hughes discussed how the traditional right of privacy was being eroded as commerce moved into electronic realms, saying: “We cannot expect governments, corporations, or other large, faceless organizations to grant us privacy out of their beneficence.” Perhaps more importantly: we can’t depend on the people who run those organizations and corporations to protect our privacy. Even if we trust a centralized authority when we give them our data, we can’t guarantee that we will trust them in five, ten, or fifty years when new executives take over. The cypherpunks saw the need to sidestep those centralizations to ensure that the privacy of the physical world was replicated on the internet, but the mere desire for decentralization doesn’t ensure it.

Despite our best intentions, hidden centralizations have crept into blockchains. They take power from the people, which was core to the conception of the blockchain, and instead give it to small and powerful groups of elites. Many people fear the 51% attack, where a majority of miners could take over a blockchain, because any proof-of-work blockchain has a hidden centralization in just 51% of its mining power. However, there are even more insidious possibilities. Protocol designers, code programmers, and even blockchain voters can all be part of small, hidden centralizations, where a small subset of blockchain users suddenly becomes an authority.

Different blockchains have approached this problem in different ways, either fighting the hidden centralizations of miners, coders, or voters or embracing them.

But even for those of us who believe that these hidden centralizations violate the core philosophies of the blockchain, it’s a difficult battle.

Programmers & Bitcoin

Bitcoin offers the best example of the battle over hidden centralization, both because it’s the largest blockchain and because it has grappled with the issue the most — largely focused on the hidden centralization of Bitcoin protocols and code.

Theoretically, any code that aligns with the Bitcoin consensus protocols can be used to interact with the Bitcoin blockchain, but its technological future is primarily directed by Bitcoin Core — the codebase ultimately derived from the original code of Satoshi Nakamoto, and now used by approximately three-quarters of Bitcoin nodes. This code thus represents Bitcoin’s first hidden centralization.

Though Bitcoin Core has a small number of “core” team members, led by Wladimir van der Laan, the code is maintained on Github to maximize community involvement. This has allowed for hundreds of contributors: theoretically any Bitcoin aficionado can join in. However, even for the most popular and most successful blockchain, reality doesn’t always line up with desire. Just 67 contributors have made more than a dozen commits, just 45 have made more than twenty, just 23 more than a hundred. Only a dozen or so people have full commit permissions on the Bitcoin Github repository.

Fortunately, Bitcoin’s centralization begins to recede when you move away from the Bitcoin Core code to the underlying consensus protocols. Changes are initiated as Bitcoin Improvement Proposals (BIPs), which support widespread community involvement. They then continue through a process of “rough consensus”, where major objections are heard and resolved. Only afterward are BIPs introduced as actual code.

If this rough consensus were the final word on which BIPs were eventually introduced into code, then the hidden centralizations within Bitcoin’s consensus protocols would be greatly minimized. However, there’s another step: Bitcoin nodes must indicate that they’ve adopted a new protocol element through signaling, as detailed in BIP9. Signaling was never intended to be anything but a marker to show which nodes had adopted the newest protocols … until the great Bitcoin block-size debate. Different interest groups fought over how large blocks on the Bitcoin blockchain should be, and during this conflict, BIP9’s signaling methodology became a de facto voting process for how the debate should be resolved. Essentially, it was “vote-by-work”, since the votes were determined based on the power of the miners creating blocks.
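
The signaling mechanism can be sketched as a simple tally. The 95% threshold and 2016-block window below are the BIP9 parameters used on Bitcoin mainnet, but this is an illustrative model, not Bitcoin Core’s implementation, and the block versions are invented:

```python
# Sketch of BIP9-style signaling: within a 2016-block retarget window,
# a deployment locks in if at least 95% of blocks set its version bit.
WINDOW = 2016
THRESHOLD = 1916  # 95% of 2016, the BIP9 mainnet threshold

def bit_is_set(version: int, bit: int) -> bool:
    """Check whether a block's version number signals on `bit`."""
    return (version >> bit) & 1 == 1

def locks_in(block_versions: list, bit: int) -> bool:
    """Count signaling blocks across one full window against the threshold."""
    assert len(block_versions) == WINDOW
    signaling = sum(1 for v in block_versions if bit_is_set(v, bit))
    return signaling >= THRESHOLD

# Invented data: 1950 of 2016 blocks signal on bit 1, so it locks in.
versions = [0b10] * 1950 + [0] * 66
assert locks_in(versions, bit=1)
```

Treating these counts as votes, as happened during the block-size debate, effectively weights the outcome by hash power, since miners who produce more blocks cast more “ballots”.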

In other words: there are hidden centralizations all the way down. Even if “core” coders don’t have ultimate power on the Bitcoin blockchain (and there’s certainly disagreement here), then miners might, because they can limit what consensus protocols go into actual use. They’re both centralizations, and like Eric Hughes said, you can’t expect such authorities to respect the best interests of the rest of a network.

As the block-size debate continued, a User Activated Soft Fork, or UASF, attempted to shift the decision-making power all the way down to the end-user, but was superseded by the adoption of BIP91 before the UASF could come to fruition. Nonetheless, it was an important statement of the power of the people at the foundation of the blockchain (and an important step away from centralization).

The long and bloody debates over block size resulted in the creation of Bitcoin Cash, a variant of the Bitcoin currency. This fundamental and irrevocable disagreement demonstrated that even Bitcoin doesn’t deal perfectly with the problems that can be introduced by centralized code and protocols. But it’s certainly the blockchain that’s done the most to battle its hidden centralizations.

Others of us have further to go.

Programmers & Ethereum

Ethereum is the second most popular blockchain after Bitcoin, focused on distributed computing rather than just cryptocurrency transfer. Like Bitcoin, Ethereum has its own Github, its own improvement proposals (called EIPs), and a relatively small team of developers — though Ethereum has twice as many regular developers as Bitcoin, according to a Cointelegraph report. So, there’s certainly been care and effort in managing Ethereum’s hidden centralizations of code and protocol. However, unlike Bitcoin, Ethereum has its own non-profit: the Ethereum Foundation. This non-profit is a different sort of hidden centralization that could exert a lot of control over the future of the network — a problematic possibility that blossomed in the 2016 DAO incident.

The DAO, a “decentralized autonomous organization”, was a new business model controlled by smart contracts. It allowed investors to place their money into an organization that could then make investments of its own, all run from an open and borderless blockchain. It could have been the future of business on the internet.

Unfortunately, the Turing-complete nature of Ethereum’s expansive programming language ultimately spelled the DAO’s doom. Programs written in Turing-complete languages are difficult or impossible to formally verify, which means there’s no way to prove that they will do what they’re supposed to. Or, if you prefer: they can have hidden bugs. And that was the case with The DAO. Participants invested $150 million in the decentralized autonomous organization, and then a hacker used an exploit to transfer out $50 million of that money. The bug, it should be clear, was in the DAO software, not in Ethereum itself.
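
The DAO’s bug was a recursive-call (“reentrancy”) flaw: the contract sent funds before updating its books, so a malicious callback could withdraw again. Here is a toy Python simulation of that pattern — plain Python standing in for Solidity, with invented names, purely for illustration:

```python
# Toy simulation of the recursive-withdrawal ("reentrancy") pattern:
# the vault pays out *before* zeroing the caller's balance, so a
# callback that re-enters withdraw() drains more than it deposited.

class Vault:
    def __init__(self, balances):
        self.balances = dict(balances)
        self.paid_out = 0

    def withdraw(self, who, callback):
        if self.balances[who] > 0:
            amount = self.balances[who]
            self.paid_out += amount      # the "send" happens first...
            callback()                   # ...handing control to the caller...
            self.balances[who] = 0       # ...before the balance is zeroed

vault = Vault({"attacker": 50})
calls = 0

def attack():
    """Malicious callback: re-enter withdraw() before the balance updates."""
    global calls
    if calls < 2:
        calls += 1
        vault.withdraw("attacker", attack)

vault.withdraw("attacker", attack)
assert vault.paid_out == 150   # deposited 50, drained 150
```

The fix, in any language, is the same checks-effects-interactions discipline: update the balance before handing control to outside code.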

Blockchains are merciless. There are no take-backs. If you send money to the wrong place, don’t recover your change, mess up a hash … or write a program wrong, that’s on you. The autonomy of decentralization becomes a harsh reality if you make a mistake. So, under the core philosophy of the blockchain, that $50 million should have been gone. As some people said in the wake of the DAO incident: “code is law”.

That’s not what happened. Instead, the Ethereum Foundation proposed a hard fork to wind back time and get the lost money back to the DAO’s contributors. The whole community voted using a brand-new “Carbonvote” system, which was essentially vote-with-stake: yes and no votes were counted based on the amount of cryptocurrency held by the voting address. 89% of voters agreed to roll back the DAO losses, but it still raised the spectre of hidden centralization. How much authority did the Ethereum Foundation exert by making and pushing the proposal? And, would they do it again? In the worst case, Ethereum had revealed a hidden centralization in the small group of people controlling the Foundation, while in the best case they’d merely shown that 51% of voters could make unprecedented changes to the blockchain.
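
Vote-with-stake can be sketched in a few lines. The addresses and balances below are invented for illustration; the point is that a tally weighted by holdings lets one large holder outweigh everyone else, which is exactly the centralization risk described above:

```python
# Sketch of Carbonvote-style vote-with-stake: each address votes yes
# or no, weighted by the cryptocurrency held at that address.
# Addresses and balances are made up for illustration.

def tally(balances: dict, votes: dict) -> tuple:
    """Return (yes_weight, no_weight), each the sum of voters' holdings."""
    yes = sum(balances[a] for a, v in votes.items() if v == "yes")
    no = sum(balances[a] for a, v in votes.items() if v == "no")
    return yes, no

balances = {"0xa1": 900.0, "0xb2": 60.0, "0xc3": 40.0}
votes = {"0xa1": "yes", "0xb2": "no", "0xc3": "yes"}

yes, no = tally(balances, votes)
assert (yes, no) == (940.0, 60.0)   # one whale dominates the outcome
```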

Either way, the centralization was real. The Ethereum Classic blockchain was the result: a new cryptocurrency spun off of Ethereum for people who felt that the DAO rollback had violated the philosophies of the blockchain. (Like the somewhat similar Bitcoin Cash fork, the new cryptocurrency has proven considerably less valuable than its parent.)

Programmers & Bitmark

At Bitmark, we’ve developed our own blockchain for the specific purpose of building a digital property system: the Bitmark blockchain, focused on the registration and management of digital property and assets. It allows users to transfer and license these various properties, controlling and monetizing their digital assets. Much as with the cryptocurrencies of Bitcoin and Ethereum, it’s crucially important that this network be truly decentralized, so that its users can trust the neutrality and openness of the blockchain.

And here we’ve discovered what the creators of Bitcoin and Ethereum already know: it’s hard to avoid hidden centralizations, particularly when you’re creating code that’s being used by a network. As with the other blockchain communities, our code is available on Github. We also have a brand-new Bitmark Upgrade Proposal (BUP) system, where community members can suggest upgrades to the Bitmark Algorithms. But that’s not enough on its own: several external developers have forked our core code, but to date all of the commits are from our own engineering team, while the BUP system is too new to have generated proposals.

Though Bitmark has had strong success with partners like KKBOX (who uses the Bitmark Property System to record rights to digitally streaming music) and Chibitronics (who registers Bitmark certificates to verify the authenticity of their hardware), making additions to a blockchain’s core code (or its algorithms) requires a totally different sort of community. We’ve thus experimented with other methodologies, such as seeding our coding community via a bug bounty program. A few community members have already been paid out for reporting small bugs in our web app. These bug reports aren’t Github commits or BUPs, but they’re a first step toward decentralizing our coding work.

After we’ve welcomed more external coders into our community, we’ll eventually need mechanisms to decide what actually gets added to our code and our protocols. Bitcoin and Ethereum both developed somewhat ad hoc systems to allow voting on contentious protocols: Bitcoin used a signalling system that wasn’t intended for voting, while Ethereum’s Carbonvote was created to resolve an immediate crisis. A more thoughtful system, not created due to immediate exigencies, could more carefully consider the best way to manage collective choice, whether it be rough consensus, work voting, stake voting, or something else.

Conclusion

Unlike the philosophy discussed in our last article, on proof of work vs proof of stake, the blockchain community is much more settled on the advantages of decentralization.

In spite of that, blockchains have big problems with centralized authorities; they’re just somewhat hidden.

The central power of the Ethereum Foundation to pursue a radical reversion of the Ethereum blockchain was an eye opener for many in the blockchain community, but it’s just a singular example of a more endemic problem, one that generates serious questions that we all need to consider.

How can we reduce the centralizations inherent in miners, protocol developers, coders, and blockchain VIPs?

How can we develop protocols of collective choice that maintain the power of the people without subjecting it to the tyranny of the majority?

In other words, how can we truly meet the blockchain’s original goals of decentralization in reality?

Further Reading

Bitcoin. Retrieved June 2019. “Bitcoin Improvement Proposals”. Github. https://github.com/bitcoin/bips.

Bitmark. Retrieved June 2019. “Bitmark Inc. Repos”. Github. https://github.com/bitmark-inc.

Bitmark. Retrieved June 2019. “Bug Bounty Program”. Bitmark. https://docs.bitmark.com/learning-bitmark/contributing-to-bitmark/bug-bounty-program.

Caffyn, Grace. August 2015. “What is the Bitcoin Block Size Debate and Why Does It Matter?” Coindesk. https://www.coindesk.com/what-is-the-bitcoin-block-size-debate-and-why-does-it-matter.

Electric Capital. March 2019. “Dev Report”. Medium. https://medium.com/@ElectricCapital/dev-report-476df4ff1fd2.

Hughes, Eric. March 1993. “A Cypherpunk’s Manifesto”. Cypherpunk Mailing List. Archived on https://github.com/NakamotoInstitute/nakamotoinstitute.org/blob/master/sni/static/docs/cypherpunk-manifesto.txt.

Falkon, Samuel. December 2017. “The Story of the DAO — Its History and Consequences.” Medium. https://medium.com/swlh/the-story-of-the-dao-its-history-and-consequences-71e6a8a551ee.

Hall, Christopher. April 2019. “BUP 001: BUP Process”. Github. https://github.com/bitmark-property-system/bups/blob/master/bup-0001-draft.markdown.

Lapowsky, Issie. September 2017. “The Feds Promised to Protect Dreamer Data. Now What?” Wired. https://www.wired.com/story/daca-trump-dreamer-data/.

Lombrozo, Eric. June 2017. “Forking, Signaling, and Activation”. Medium. https://medium.com/@elombrozo/forks-signaling-and-activation-d60b6abda49a.

Nakamoto, Satoshi. October 2008. “Bitcoin: A Peer-to-Peer Electronic Cash System”. https://bitcoin.org/bitcoin.pdf.

SFOX. April 2019. “Bitcoin Governance: What Are BIPS and How Do They Work?” Medium. https://blog.sfox.com/bitcoin-governance-what-are-bips-and-how-do-they-work-276cbaebb068.

Wilcke, Jeffrey. July 2016. “To Fork or Not to Fork”. Ethereum Blog. https://blog.ethereum.org/2016/07/15/to-fork-or-not-to-fork/.

Zmudzinski, Adrian. March 2019. “Ethereum Has More than Twice as Many Core Devs Per Month as Bitcoin, Report Says”. Cointelegraph. https://cointelegraph.com/news/ethereum-has-more-than-twice-as-many-core-devs-per-month-as-bitcoin-report.

By Bitmark Inc. on August 15, 2019.

A Philosophy of Blockchain: What Would Satoshi Nakamoto Think of Proof of Stake?

We’ve been somewhat surprised by how many people ask us why we don’t use proof of stake instead. There are a few reasons, but most importantly…

written by Shannon Appelcline

The blockchain began with the Bitcoin ledger, which started on January 3, 2009, making it just more than ten years old. That’s still very young for a major new field of computer technology. It’s like databases in 1970, before the relational database model or the structured query language appeared, or like the TCP/IP-based internet in 1993, just a couple of years after the advent of the web page.

Given the youth of the field, it’s not surprising that we’re still exploring different routes forward: finding out how to best design and best use the immutable consensus ledgers that represent a whole new way to record and store information.

We’re still trying to decide what the philosophy of blockchain is, and what it should be.

Proving the Blockchain

One of the blockchain’s current technological discussions focuses on consensus: how do you decide what can be added to the blockchain? What can you do to make sure the blockchain is fairly constructed and remains immutable? This question has long been viewed through the lens of The Byzantine Generals’ Problem, which requires the consensus of selected “generals”, but accepts that some might be adversaries. A system is considered Byzantine Fault Tolerant if it can remain true despite the failure of some percentage of its components.

Traditional solutions to this problem predate the blockchain: Practical Byzantine Fault Tolerance (pBFT) achieves consensus by having community members engage in multiple rounds of voting until they reach sufficient agreement, even if some members are not responding or are responding maliciously. It was groundbreaking, but it also has severe limitations: because of its extensive communication requirements, pBFT only works with a limited number of established actors. You can only have so many generals, and they must be known in advance.
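The classic Byzantine fault-tolerance bound behind pBFT can be stated in a few lines of code. This is an illustrative sketch of the arithmetic only (not any real pBFT implementation): a network of n replicas can tolerate f Byzantine faults only when n ≥ 3f + 1, and a pBFT-style quorum needs 2f + 1 matching replies so that any two quorums overlap in at least one honest replica.

```python
# Illustrative sketch of the BFT bounds, not a real pBFT implementation.

def max_faults_tolerated(n: int) -> int:
    """Largest f such that n >= 3f + 1 still holds."""
    return (n - 1) // 3

def quorum_size(n: int) -> int:
    """Matching replies needed so any two quorums share an honest replica."""
    f = max_faults_tolerated(n)
    return 2 * f + 1

# With 4 generals you can survive 1 traitor; with 100, up to 33.
assert max_faults_tolerated(4) == 1
assert max_faults_tolerated(100) == 33
assert quorum_size(4) == 3
```

The `(n - 1) // 3` floor is why the number of generals must be fixed and known in advance: the tolerance guarantee is defined relative to the total membership.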

Despite these limitations, pBFT (or close variants) has come into use on some permissioned blockchains. The federated consensus used by Blockstream’s Liquid network is one such example: it’s a private network where cryptocurrency exchanges come together to support arbitrage between their companies, so a permissioned solution works well. The Hyperledger Fabric blockchain is similarly built around pBFT.

However, Satoshi Nakamoto realized that entirely different solutions would be required for permissionless public blockchains built on trustless transactions. One of the largest innovations of Bitcoin was thus his “Nakamoto Consensus” protocol. It contains multiple elements, including block proposer selection (where someone is given the opportunity to suggest a block) and block inclusion (where the block is added to a blockchain, which may or may not become the main chain, depending on a stochastic process). Some people also consider Bitcoin’s scarcity and rewards structures to be part of the protocol.

It’s that first element of Nakamoto Consensus, block proposer selection, that really catches people’s attention and drives many of the discussions over the future of blockchains, because it’s core to the idea of permissionless blockchains such as Bitcoin. Technically, you could choose a block proposer through a random or round-robin mechanism. (In fact, pBFT does the latter.) That works for permissioned blockchains, but it quickly falls apart on permissionless ones, where anyone could create any number of accounts to increase their odds of proposing blocks. As a result, Nakamoto Consensus required a Sybil defense mechanism to prevent people from gaining advantage in the consensus by creating numerous accounts. That’s what a good block proposer selection method does.

Two Sybil defense mechanisms have achieved strong attention: proof of work and proof of stake, both of which control Sybils by making them expensive. (There are also other proposer selection mechanisms such as proof of activity, proof of authority, proof of burn, and proof of capacity, but they haven’t yet achieved the same attention.) Though proof of work was Satoshi Nakamoto’s original Sybil defense, proof of stake is quickly gaining on it. There are now even systems that combine proof of stake with traditional BFT systems — such as the EOS blockchain, which uses proof of stake to vote for the “generals”, who then engage in BFT voting to achieve consensus.

So how do the two most popular Sybil defense systems work?

Proof of work is the traditional method of Sybil defense on a blockchain, used by Bitcoin and (at the moment) by Ethereum. Every participant is given the opportunity to solve a math problem. Whoever manages to do so is allowed to propose a block for the blockchain, contributing to the consensus and the growth of the ledger. Sybil defense comes from the fact that these calculations are very hard, so proposing blocks has real costs in energy.

Proof of Work: Every participant is given the opportunity to solve a math problem. (Photo by Dimitar Belchev on Unsplash)
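The “math problem” can be sketched in a few lines. This is a toy model, not Bitcoin’s exact rules: search for a nonce whose SHA-256 hash of the block data falls below a difficulty target. The asymmetry is the point: finding the nonce is expensive, while checking it is a single hash.

```python
# Toy proof-of-work sketch (illustrative only, not Bitcoin's actual protocol).
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce whose hash falls below the difficulty target."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # proof found: expensive to produce...
        nonce += 1

def verify(block_data: bytes, nonce: int, difficulty_bits: int) -> bool:
    """...but cheap for anyone else to check."""
    digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < 2 ** (256 - difficulty_bits)

nonce = mine(b"block #1", difficulty_bits=16)
assert verify(b"block #1", nonce, difficulty_bits=16)
```

Raising `difficulty_bits` doubles the expected work per bit, which is how a real network tunes block production to its total hash power.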

Proof of stake is a newer Sybil defense, first used in production by Peercoin and being studied by Ethereum for a future release. Here, a random participant is chosen to create a new block based on their “stakes” in the system, usually defined as either the quantity or the age of their cryptocurrency holdings. In other words, participants randomly get to make blocks if they have either a lot of digital assets or a lot of old digital assets; this is the core cost that prevents attackers from making numerous Sybil accounts. Early proof-of-stake mechanisms presumed that stakeholders were inherently incentivized to produce correct blocks, but that assumption enabled certain attacks (most notably the “nothing at stake” problem), so newer systems create security by adding punishment: if a participant incorporates fraudulent transactions, they lose part of their stake and the ability to create future blocks.

Proof of Stake: a random participant is chosen to create a new block based on their “stakes” in the system, as either quantity or age of cryptocurrency holdings. (Photo by Dmitry Moraine on Unsplash)
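The selection-plus-punishment idea above can be sketched simply. This is an illustrative model only (neither Peercoin’s nor Ethereum’s actual algorithm, and the names and numbers are invented): the chance of proposing the next block is proportional to stake, and misbehavior is punished by “slashing” part of that stake.

```python
# Illustrative stake-weighted proposer selection with slashing (a sketch,
# not any production proof-of-stake algorithm).
import random

stakes = {"alice": 60, "bob": 30, "carol": 10}  # hypothetical holdings

def pick_proposer(stakes: dict, rng: random.Random) -> str:
    """Choose a block proposer with probability proportional to stake."""
    names = list(stakes)
    return rng.choices(names, weights=[stakes[n] for n in names], k=1)[0]

def slash(stakes: dict, validator: str, fraction: float = 0.5) -> None:
    """Punish a validator who signed a fraudulent block."""
    stakes[validator] -= int(stakes[validator] * fraction)

rng = random.Random(42)
picks = [pick_proposer(stakes, rng) for _ in range(10_000)]
# alice holds 60% of the stake, so she proposes roughly 60% of blocks.
assert 0.55 < picks.count("alice") / len(picks) < 0.65
```

The sketch also makes the philosophical objection concrete: whoever starts with the most stake proposes the most blocks, earns the most rewards, and so accumulates still more stake.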

There are advantages and disadvantages to each system; however, to truly assess which is best for the blockchain community requires going back to the beginning and remembering the technology’s philosophical underpinnings.

Reconsidering the Philosophy

In recent years, Bitcoin discussion has mainly focused on its price, while wider blockchain discussions tend to concentrate on what can be done with the technology. But that omits a crucial topic: why the blockchain was created in the first place.

Satoshi Nakamoto’s original white paper on Bitcoin is heavy on technology and light on philosophy, but it offers a few clues with its discussions of “peer-to-peer” networks and its move away from “central authority”. Bitcoin was about giving power to the people, so that they could transact currency without having to depend on either a corporation or the state. As such, it was an outgrowth of the cypherpunks, who had been working on digital cash solutions for some time. They advocated for privacy and fought against government control and against censorship.

The philosophical underpinning of blockchain can easily be derived from the intersection of cypherpunk ideals with the decentralized, peer-to-peer technology imagined by Nakamoto. This meeting of ideas suggests a world where everyone is an equal: where everyone has the opportunity to contribute to the consensus, and where they can all interact on a level playing field.

It’s about reversing the plutocratic and autocratic trends of the physical world, where very small numbers of people have great influence and power, and instead creating a place where everyone has autonomy and agency.

At Bitmark, we abstract the core philosophies of the blockchain by saying that it should be open (so that anyone can access it), borderless (so that real-world barriers don’t impact our virtual equality), censorship-resistant (so that no one can prevent another person’s participation), and permissionless (so that anyone can add to the consensus).

Watch this great and very accessible video about the “Five Pillars of Open Blockchains”

And that brings us back to the question of proof of stake versus proof of work. For a Sybil defense system to truly uphold the original principles of blockchain, it needs to be a system that anyone can join, and the two most popular Sybil defense systems are not equals in this regard.

Comparing the Protocols

Bitmark has its own stake in this debate because we’ve created our own blockchain, the Bitmark Property System. Where the Bitcoin blockchain secures digital money, the Bitmark blockchain secures property rights, allowing people to own, transfer, and generate income from their data and other digital assets. In constructing the Bitmark public blockchain, we had to consider many questions about the philosophy of the blockchain; we then used the answers to guide the design of our blockchain. One of our decisions was to use proof of work, and we’ve been somewhat surprised by how many people ask us why we don’t use proof of stake instead. There are a few reasons, but most importantly:

We don’t believe that proof of stake matches the original ideals of Bitcoin, which are also our own ideals in creating the Bitmark blockchain.

In short, proof of stake reverses the egalitarian ideals of the blockchain. Certainly, miners have a lot of power in proof-of-work blockchains, but we know people who have purchased mining rigs solely to ensure that they would always have a voice on the blockchain. Though they might only rarely produce a block, they can ensure that their transactions can never be censored. Every single person has that possibility on the Bitcoin blockchain (albeit at a higher price now than in its early days, due to its success).

In contrast,

proof of stake consolidates power in the hands of the few — the old and the rich — exactly mimicking the real-world environment that Bitcoin was trying to overthrow.

Photo by Hunters Race on Unsplash

It prevents the many from participating, it allows the rich to get richer, and it creates new dangers of censorship. Though we speak of this at a personal level, every corporation, organization, and government should have the same concerns: newcomers could face new barriers to entry because a competitor could prevent them from participating in a proof-of-stake blockchain. Proof of stake has the very real possibility of creating digital plutocracies, and even absent other concerns, that dramatic change in philosophy would be enough for us (and we suspect for many blockchain enthusiasts) to abandon proof of stake entirely.

The other major issue with proof of stake is that it’s poorly tested. We have faith in proof of work because it’s been the backbone of Bitcoin and Ethereum for years; it’s processed three-quarters of a billion transactions. In contrast, Peercoin, even with a market cap of $10 million, is averaging about a dozen transactions an hour. Overall, Peercoin has accumulated just a few million transactions, a tiny fraction of what Bitcoin and Ethereum have done. Meanwhile, Ethereum has spent over five years working through a few different proof-of-stake mechanisms. Partway through that time, in 2016, Vitalik Buterin said: “After years of research, one thing has become clear: proof of stake is non-trivial — so non-trivial that some even consider it impossible.”

It’s possible that at some time in the future, someone will come up with a well-tested, well-reviewed proof-of-stake protocol that also answers the problems that proof-of-stake mechanisms currently have in regard to consolidation of power. But the time is not now.

None of this should suggest that proof of work is without challenges, because there are many. We’re aware that energy usage has often been a complaint, but it’s not one we find particularly credible; just as we think that people should have autonomy in the blockchain world, we think they should have that control in the real world too, and that means not censoring or controlling their energy usage. This sort of autonomy has been a general rule in any free society: for example, we are aware of the environmental costs of cars and their dangers to occupants, other drivers, and pedestrians; but for the most part we don’t try to control car usage by limiting it, we simply work to make it more efficient and safer. Similarly, we can work to make energy production cleaner and more renewable, but as a free society we shouldn’t place limitations on how people use their power.

But there are other challenges to proof of work. The threat of a 51% attack is something that keeps us up at night. Further, as a company working on a proof-of-work blockchain, we have to ask whether there is room for a third major proof-of-work network, following Bitcoin and Ethereum. Or would a proliferation of proof-of-work networks dilute the miner base of each, making them more susceptible to an association of miners who could jump from blockchain to blockchain to engage in 51% attacks?

For the moment, we prefer the better known and better tested solution, but we should be aware of the dangers of any method for controlling Sybils in a permissionless system.

Conclusion

On balance, Bitmark feels that the advantages of proof of work currently remain greater than those of proof of stake. But we find the philosophical issues even more important: we would need to see a proof-of-stake mechanism that was aligned more closely with the blockchain’s original ideals before we were willing to adopt it, even if it was well-tested and offered clear improvements over current proof-of-work systems.

Certainly, we understand that other groups might balance the advantages and disadvantages of these two Sybil defense mechanisms in different ways.

But ultimately, we believe it comes down to that philosophical question: like the founders of blockchain, are you interested in empowering the people? Or, does your personal philosophy lie elsewhere?

Photo by Jed Villejo on Unsplash

We think philosophies are important, and we expect to continue to address the topic in future articles about choices in blockchain design. Future topics we’re considering include hidden centralizations, the inclusion of tokens and scripting languages, the publication of information, and whether you should just use the Bitcoin blockchain (or no blockchain at all).

Further Reading

Beyer, Stefan. April 2019. “Proof-of-Work is not a Consensus Protocol: Understanding the Basics of Blockchain Consensus”. Medium. https://medium.com/cryptronics/proof-of-work-is-not-a-consensus-protocol-understanding-the-basics-of-blockchain-consensus-30aac7e845c8.

Bitmark. Retrieved June 2018. “Bitmark Blockchain: Technical Overview”. Bitmark. https://bitmark.com/en/property-blockchain/bitmark-blockchain.

Buterin, Vitalik. October 2014. “Slasher Ghost, and Other Developments in Proof of Stake”. Ethereum Blog. https://blog.ethereum.org/2014/10/03/slasher-ghost-developments-proof-stake/.

Castro, Miguel and Barbara Liskov. February 1999. “Practical Byzantine Fault Tolerance.” Proceedings of the Third Symposium on Operating Systems Design and Implementation. http://pmg.csail.mit.edu/papers/osdi99.pdf.

Daily Bit. April 2018. “9 Types of Consensus Mechanisms that You Don’t Know About”. Medium. https://medium.com/the-daily-bit/9-types-of-consensus-mechanisms-that-you-didnt-know-about-49ec365179da.

Ethereum. March 2019 Update. “Proof of Stake FAQ”. Github. https://github.com/ethereum/wiki/wiki/Proof-of-Stake-FAQ.

Hall, Christopher, Casey Alt, Lê Quý Quốc Cường, and Sean Moss-Pultz. November 2017. “Bitmark: The Property System for the Digital Environment”. Bitmark. Bitmark_technical-white-paper.pdf.

Hammerschmidt, Chris. January 2017. “Consensus in Blockchain Systems. In Short.” Medium. https://medium.com/@chrshmmmr/consensus-in-blockchain-systems-in-short-691fc7d1fefe.

Jenks, Tyler. March 2018. “Pros and Cons of Different Blockchain Consensus Protocols”. Very. https://www.verypossible.com/blog/pros-and-cons-of-different-blockchain-consensus-protocols.

Nakamoto, Satoshi. October 2008. “Bitcoin: A Peer-to-Peer Electronic Cash System”. https://bitcoin.org/bitcoin.pdf.

Torpey, Kyle. April 2018. “What Are the Philosophical Underpinnings of Bitcoin?” Bitcoin Market Journal. https://www.bitcoinmarketjournal.com/cypherpunk-bitcoin/.

Vaidya, Kiran. November 2016. “The Byzantine Generals’ Problem”. Medium. https://medium.com/all-things-ledger/the-byzantine-generals-problem-168553f31480.

By Bitmark Inc. on August 1, 2019.

Technology is Not Magic: The Hacker’s Point of View — Bitmark Ambassador “bunnie” Huang

“One of the reasons I am so passionate about open source, is that I worry that, if people believe that technology is magic, then we find ourselves in a dangerous situation. We essentially become slaves to the technology…”

Andrew “bunnie” Huang is a renowned hacker, author, researcher, and activist

The Bitmark Ambassador series highlights innovators who understand the importance of property rights in the modern digital environment. They are industry pioneers — artists, lawyers, scientists, health researchers, hackers, makers and creators.

Andrew “bunnie” Huang is a renowned hacker, author, researcher, and activist. He is best known for his open hardware designs: the Chumby (app-playing alarm clock), Chibitronics (peel-and-stick electronics for craft), and Novena (DIY laptop). His book on reverse engineering, Hacking the Xbox, is a widely respected tool for hardware hackers. He serves as a Research Affiliate for the MIT Media Lab and a technical advisor for several startups including Bitmark and MAKE magazine. bunnie received his PhD in Electrical Engineering from MIT and currently lives in Singapore where he runs a private product design studio, Kosagi.

Watch bunnie’s talk to learn more about his new project! (betrusted)

Throughout his various projects to empower fellow hackers, journalists, and women, one core value recurs: “The Importance of Free Will”.

“I really value free will. A lot of times at the end of the day, part of the idea of seeing the world as a hacker and not seeing the labels on things — that’s kind of the essence of free will.”

In his Bitmark Ambassador video, bunnie raises an interesting question about the people behind large organizations and companies that create rules and define structure. These people are no better than us — we all have the intelligence and capability to question what we are led to believe. We do not need to settle for blind acceptance.

“I really hope in the future we can always find a way to preserve free will. And a lot of the idea behind open source and sharing and sharing the idea of hacking is teaching people how to have that sense of free will and independence, that ability to control their destiny.”

bunnie tells us that if technology makes people feel trapped or lost, there is a path to understanding it. That is how a hacker looks at technology: seeing it for what it really is, not merely what it’s packaged to be.

“That kind of experience of being able to just kind of touch the hardware and play around with it, break it, fix it, kind of got me over even the notion that technology is magic. Technology is something that you can understand.”

Enjoy “Technology is Not Magic” below and let us know how technology impacts your perspective on the world.

Set the quality to HD for this inspirational video!

More about bunnie:

▪ He filed a lawsuit against the U.S. government arguing that Section 1201 of the Digital Millennium Copyright Act stifles innovation and free speech.

▪ He worked with a PhD candidate at the MIT Media Lab to develop programmable circuit stickers that encourage more girls to experiment with electronics and physical computing.

▪ He created a reference design for a cheap Geiger counter with the goal of helping citizens detect environmental radiation resulting from the Fukushima Daiichi disaster in Japan.

▪ He teamed up with NSA whistleblower Edward Snowden to develop Introspection Engine, an iPhone case for journalists and human rights activists that detects if their devices are secretly transmitting Wi-Fi, cellular, Bluetooth, or GPS signals when they shouldn’t be.

By Bitmark Inc. on July 16, 2019.

Under-the-Radar Health Information Markets: the Supply, the Demand, and the Exploited.

Nowadays, it is no secret that healthcare providers such as hospitals can store and utilize individuals’ health information. Hospitals keep records of individuals so that diagnoses can be based on more complete information, and some countries even have health information exchange systems that share records among hospitals for the same purpose.

Yet there are also under-the-radar health information markets that are growing rapidly by consuming your health data without your awareness or explicit consent. In the following paragraphs, I will examine the players in this market from the perspectives of supply and demand. I will then wrap up by highlighting the high risks that individuals face.

The Supply: Who is accessing and supplying your health information without your consent?

Health Data Brokerage Industry

In general, data brokers are entities that collect information about individuals and sell that data to other data brokers, companies, and individuals. Health data brokers are those who focus particularly on health information. In the US, health data brokers can legally buy and sell anonymous (de-identified) data under the Health Insurance Portability and Accountability Act (HIPAA), as well as non-anonymous health data not covered by that privacy standard, including what you put into search engines and health websites [1].

“Your medical data is for sale — all of it.”

— The Guardian

One of the biggest health data brokers in the field is IMS Health (now called “IQVIA” after a merger). According to Forbes, IMS claimed it “processes data from more than 45 billion healthcare transactions annually and collects information from more than 780,000 different streams of data worldwide” [2]. It is noteworthy that data brokers have no direct relationship with the people whose data they collect, meaning that people tend to be unaware that their data is being collected and sold.

Health Data Breaches

Throughout history, one of the most common ways for criminals to get something valuable has been stealing, and in the age of the internet, stealing takes the form of data breaches. According to Forbes, healthcare is now the most cyber-attacked industry. In the United States alone, between 2009 and 2017, 2,181 healthcare data breaches resulted in the exposure of 176,709,305 healthcare records, a figure equivalent to 54.25% of the country’s population [3]. In 2016, nine times more medical records than financial records were breached [4]. It is also noteworthy that 75% of those records were exposed or stolen as a result of hacking or IT incidents, a signal of how much value criminals see in them [5].

Every year, with the exception of 2015, the number of healthcare data breaches (in the USA) has increased, rising from 199 breaches in 2010 to 344 breaches in 2017.

Apart from the United States, Australia and Singapore have also recently faced serious health data breaches. The Office of the Australian Information Commissioner revealed in July 2018 that there had been more than 300 major data breaches that year, with the healthcare sector the worst hit at 49 of them [6]. Singapore, meanwhile, suffered one of the worst cyberattacks in its history this year: hackers invaded the computers of SingHealth, Singapore’s largest group of healthcare institutions, and stole the health records of 1.5 million patients, including Prime Minister Lee Hsien Loong [7].

Darknet Market

The darknet market, also known as the “dark web” or the “deep web”, can be seen as an online form of black market. Many of the health records from the previously mentioned data breaches end up on darknet markets for sale.

“Stolen health credentials can go for $10 each, about 10 or 20 times the value of a U.S. credit card number.” — PhishLabs.

On the dark web, complete health records normally contain an individual’s name, date of birth, social security number, and medical information. Such records can sell for as much as $60 a piece, whereas stolen credit cards sell for just $1 to $3 [8]. The prices might vary due to the number of items available in the package, characteristic of the victim, the source of the stolen data and the underground reputation of the sellers [9].

Source: Redsocks Malicious Threat Detection (11th Apr 2018), Dark Web: The Harmful Business of Medical Data. Available at: https://www.redsocks.eu/blog-2/dark-web-the-harmful-business-of-medical-data/

According to the Guardian, a darknet trader even claimed to have access to any Australian’s Medicare details and could supply them upon request. The price for an Australian’s Medicare card details was 0.0089 bitcoin, equivalent to US$22 at the time [10].


The Demand: Who is buying your health information without your consent?

Medical Identity Theft

Medical identity theft, as defined by the World Privacy Forum, occurs when “someone uses a person’s identity without the person’s consent to obtain medical services or goods, or uses the person’s identity information to make false claims for medical services or goods” [11].

In the US, medical records are in great demand from cybercriminals because they contain valuable personal information, such as name, address, date of birth, and Social Security Number, all in one record [12]. With such information, criminals can obtain prescription-only drugs or specialized medical equipment, and then sell them on the black market.

Pharmaceutical Companies

The pharmaceutical industry has traditionally depended on aggressive marketing to promote its products. However, traditional commercial methods no longer seem to do the trick. In particular, companies are failing to engage with patients when they look for information about symptoms in the early stages [13]. By accessing more health information about individuals, companies can gain better insight into the market and into how best to interact with patients and consumers [14].

Besides marketing, pharmaceutical companies have spent the past decade incorporating real-world data into clinical studies to prove the value of their drugs. Between 2010 and 2016, the average cost of bringing a drug to market increased by 33%, yet average peak sales decreased by 49%. Meanwhile, the market for precision medicine is expected to grow from $39 billion in 2015 to $87.7 billion by 2023 [15]. IMS Health, for instance, claims that pharmaceutical sales and marketing are a key part of its business, and its data also helps big pharma justify drug prices by demonstrating their effectiveness [16].


The Exploited: High risks, yet low (if any) returns for individuals

Your health information cannot exist without you. Yet, other people are benefiting from it instead of you.

All the health information mentioned above, whether exposed in a data breach or purchased by pharmaceutical companies, is generated by individuals. Therefore, I believe it is fair to argue that individuals, not the data brokers or the hackers, have the most at stake, yet, as shown, they receive the least benefit from the market.

Privacy is at stake

Most current legal protections (e.g., HIPAA) focus on removing personally identifiable information, such as name, phone number, address, and date of birth, from health records. Health data brokers, for instance, tend to deal only in such de-identified health information when running their business. However, it is critical to realize that this method is no longer enough to secure one’s privacy, as it is possible to re-identify data that were de-identified. One popular way to do so is by combining databases to fill in the blanks, a technique also known as “mosaicking” [17].
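Mosaicking is easy to demonstrate. The sketch below uses entirely made-up records: a de-identified medical dataset still carries quasi-identifiers (ZIP code, birth date, sex), and joining those against a public roster such as a voter roll can restore names.

```python
# Toy linkage ("mosaicking") attack on invented data, for illustration only.
deidentified_records = [
    {"zip": "02139", "dob": "1945-07-21", "sex": "F", "diagnosis": "diabetes"},
    {"zip": "94103", "dob": "1980-01-02", "sex": "M", "diagnosis": "asthma"},
]
public_roster = [  # e.g. a voter roll or a scraped profile dump
    {"name": "Jane Doe", "zip": "02139", "dob": "1945-07-21", "sex": "F"},
]

def reidentify(records, roster):
    """Join 'anonymous' records to names via shared quasi-identifiers."""
    matches = []
    for rec in records:
        for person in roster:
            if all(rec[k] == person[k] for k in ("zip", "dob", "sex")):
                matches.append({"name": person["name"],
                                "diagnosis": rec["diagnosis"]})
    return matches

assert reidentify(deidentified_records, public_roster) == [
    {"name": "Jane Doe", "diagnosis": "diabetes"}
]
```

No field in the medical dataset is a name, yet the combination of three mundane attributes is often unique enough to single a person out.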

“Enough anonymous data gathered over time will eventually contain enough clues to re-identify nearly anyone who has received medical care, posing a big potential threat to privacy [18].”

The Australian government, for instance, published medical billing records covering 2.9 million people on its open data website; those data were later found to be re-identifiable using known information about the individuals [19]. With the increasing popularity of consumer genomics, researchers have found that “more than 60 per cent of Americans with European ancestry can be identified through their DNA using open genetic genealogy databases, regardless of whether they’ve ever sent in a spit kit” [20]. In the graph below, Bloomberg shows how someone can re-identify your medical records in five simple steps.

Source: Bloomberg Research

Paying a high price as a medical identity theft victim

In the US, it is estimated that medical identity theft costs a victim about $13,500 to resolve [21]. Unlike traditional financial identity theft, medical identity theft is more difficult to discover and deal with. One of the main reasons is that health information tends to be very private and unchangeable: one cannot simply cancel one’s demographic data, family history, insurance information, or medication.

Once you become a victim of medical identity theft, doctors might update your health records with the imposter’s medical information, which can lead to false treatment for you and medical bills that you have to pay for [22].

What’s it in for the individuals?

Bearing such costs and risks, one would assume that there must be something in it for the individuals. But in reality, I have never been rewarded (in any form) by hospitals, pharmaceutical companies, or health data brokers for the use of my valuable health information, and I believe that is the experience of almost everyone out there.

To conclude, our health information (in many forms) is in fact traded around far more than we expect, both legally and illegally. From data brokers to hackers, entities get hold of valuable, sensitive health data and make profits from it. I believe the very first step is to raise public awareness and to empower individuals to demand better control over their health information.


References:

[1] Fast Company (1st Apr 2018). Can this app that lets you sell your health data cut your health costs. Available at: https://www.fastcompany.com/40512559/can-this-app-that-lets-you-sell-your-health-data-cut-your-health-costs
[2] Forbes (6th Jan 2014). Company that knows what drugs everyone takes going public. Available at: https://www.forbes.com/sites/adamtanner/2014/01/06/company-that-knows-what-drugs-everyone-takes-going-public/#2f37caf24c90
[3] HIPAA Journal. Healthcare Data Breach Statistics. Available at: https://www.hipaajournal.com/healthcare-data-breach-statistics/
[4] Forbes (Dec 2017). The Real Threat Of Identity Theft Is In Your Medical Records, Not Credit Cards. Available at: https://www.forbes.com/sites/forbestechcouncil/2017/12/15/the-real-threat-of-identity-theft-is-in-your-medical-records-not-credit-cards/#5c7f7fa01b59
[5] HIPAA Journal (Sep 2018). Study Reveals 70% Increase in Healthcare Data Breaches Between 2010 and 2017. Available at: https://www.hipaajournal.com/study-reveals-70-increase-in-healthcare-data-breaches-between-2010-and-2017/
[6] News.Com.AU (31st Jul 2018). Health sector tops the list as Australians hit by 300 data breaches since February. Available at: https://www.news.com.au/technology/online/hacking/health-sector-tops-the-list-as-australians-hit-by-300-data-breaches-since-february/news-story/5e95c47694418ad072bf34d872e22124
[7] The Straits Times (Jul 2018). Personal info of 1.5m SingHealth patients, including PM Lee, stolen in Singapore’s worst cyber attack. Available at: https://www.straitstimes.com/singapore/personal-info-of-15m-singhealth-patients-including-pm-lee-stolen-in-singapores-most
[8] Fast Company (2016). On the Dark Web, Medical Records Are a Hot Commodity. Available at: https://www.fastcompany.com/3061543/on-the-dark-web-medical-records-are-a-hot-commodity
[9] Redsocks Malicious Threat Detection (Apr 2018). Dark Web: The Harmful Business of Medical Data. Available at: https://www.redsocks.eu/blog-2/dark-web-the-harmful-business-of-medical-data/
[10] The Guardian (Jul 2018). The Medicare machine: patient details of ‘any Australian’ for sale on darknet. Available at: https://www.theguardian.com/australia-news/2017/jul/04/the-medicare-machine-patient-details-of-any-australian-for-sale-on-darknet
[11] World Privacy Forum. Medical Identity Theft. Available at: https://www.worldprivacyforum.org/category/med-id-theft/
[12] Entefy (Dec 2017). Medical records fetch a premium on the black market. Then along comes blockchain. Available at: https://www.entefy.com/blog/post/500/medical-records-fetch-a-premium-on-the-black-market-then-along-comes-blockchain
[13] McKinsey & Company (May 2016). How pharma companies can better understand patients. Available at: https://www.mckinsey.com/industries/pharmaceuticals-and-medical-products/our-insights/how-pharma-companies-can-better-understand-patients
[14] Lewis, R. J., Weintraub, S., Sitler, B., McHugh, J., Zan, R., & Morales, S. (2015). Results: The Future of Pharmaceutical and Healthcare Marketing.
[15] Deloitte (2017). Life Sciences and Health Care Predictions 2022. Available at: https://www2.deloitte.com/uk/en/pages/life-sciences-and-healthcare/articles/healthcare-and-life-sciences-predictions.html
[16] Fortune (9th Feb 2018). This Little-Known Firm Is Getting Rich Off Your Medical Data. Available at: http://fortune.com/2016/02/09/ims-health-privacy-medical-data/
[17] Forbes (2016). The Big Data Era of Mosaicked Deidentification: Can We Anonymize Data Anymore? Available at: https://www.forbes.com/sites/kalevleetaru/2016/08/24/the-big-data-era-of-mosaicked-deidentification-can-we-anonymize-data-anymore/#802d2be3f1e2
[18] The Century Foundation (2017). Strengthening Protection of Patient Medical Data. Available at: https://tcf.org/content/report/strengthening-protection-patient-medical-data/?agreed=1
[19] The Guardian (Jul 2018). ‘Data is a fingerprint’: why you aren’t as anonymous as you think online. Available at: https://www.theguardian.com/world/2018/jul/13/anonymous-browsing-data-medical-records-identity-privacy
[20] Wired (2018). Genome Hackers Show No One’s DNA Is Anonymous Anymore. Available at: https://www.wired.com/story/genome-hackers-show-no-ones-dna-is-anonymous-anymore/
[21] AARP (2017). Medical Identity Theft: It Can Cost You Thousands. Available at: https://states.aarp.org/medical-identity-theft-can-cost-thousands/
[22] Panda Security. Identity Theft. Available at: https://www.pandasecurity.com/mediacenter/news/identity-theft-statistics/

By Hsiang-Yun L. on July 01, 2019.

Blocktrend Today’s Q&A With Bitmark CEO Sean

A few months ago, our CEO Sean did an interview with Astro Hsu for Astro’s publication Blocktrend Today. Astro is Taiwan’s top blockchain writer and influencer, whose Chinese-language blockchain newsletter has thousands of paid subscribers. Below is the English translation of that interview:

Bitmark is an independent public blockchain. Its biggest difference from other blockchains is that Bitmark has not issued its own cryptocurrency; instead, it uses bitcoin to reward miners. Users can also use it to issue music, business cards, and other digital files. In 2017, Bitmark received support from Alibaba’s Taiwan Entrepreneur Fund.

In this interview, we’ll be talking with Bitmark CEO Sean Moss-Pultz. He is also responsible for guiding the technology and data R&D of HTC’s blockchain phone, the Exodus 1.

Sean is an American, and is married to a Taiwanese woman, with whom he speaks Chinese and English. This interview was conducted in both Chinese and English.

Astro: We’ll start with the simplest, yet most difficult question to answer, okay? In terms of everyday life, what are Blockchain’s biggest use cases?

Sean: As more and more physical things are digitized, person-to-person interactions are becoming “less warm”. Blockchain can bring back this warmth.

Half a year ago Pochang Wu gave me one of his CDs. I went and bought a new CD player for the occasion, and listened to it with my son. Physical CDs are special. Without them, it’s really hard to wrap music up and give it to a friend. Of course, in the era of streaming we can give KKBOX gift cards, but it’s just not the same. With a CD, people feel like they’re getting a special gift. A gift card doesn’t have that significance; it’s not something you can really collect.

Blockchain allows us to create a sense of gift giving in the era of digital music, so that music can be collected again.

Astro: Bringing about the feeling of giving gifts by hand, but in the digital world, that sounds very abstract. How can blockchain do this?

Sean: With blockchain, you can really easily trace back something’s origin. People who obtain music not only know who originally published the files, but also know who has transferred the files. If your music was given to you directly by the producer and without any middleman, then this is just like getting an autograph, it’s very meaningful.
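The provenance Sean describes can be pictured as a simple chain of transfer records, each linking back to the one before it. This is a hypothetical sketch, not Bitmark’s actual record format:

```python
import hashlib
import json

def transfer(chain, new_owner):
    """Append a transfer record that links back to the previous record's hash."""
    prev = chain[-1]
    prev_hash = hashlib.sha256(json.dumps(prev, sort_keys=True).encode()).hexdigest()
    chain.append({"owner": new_owner, "prev": prev_hash})
    return chain

# The producer issues the asset, then hands it directly to a fan.
chain = [{"owner": "producer", "prev": None}]
transfer(chain, "fan")

# Anyone can walk the chain back to its origin and verify each link.
print([record["owner"] for record in chain])  # ['producer', 'fan']
```

Because each record commits to the hash of the record before it, tampering with any step of the history breaks every later link, which is what makes the “autograph” traceable.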

Astro: Perhaps later when getting an important person’s business card, people might screenshot its blockchain transfer record and share it on social media to show off. This could be the feeling of “gift delivery by hand” in the digital world. In that way, what does Bitmark use blockchain to do?

Sean: Bitmark built its own blockchain and uses it to register people’s digital property rights, somewhat like an intellectual property office. Before, we could only duplicate or share digital assets. Now, with a clear record of digital data’s provenance and transfer history, we can finally allow people to formally authorize the use of their digital property.

Bitmark is collaborating with KKFARM, the digital publication platform and subsidiary of KKBOX, to bring digital property registration to music publishing software. As the paperwork is digitized, efficiency will improve and producers will receive compensation more quickly.

Now, Bitmark is also collaborating with HTC to allow consumers to register their data under their names.

Astro: Producers often need to authorize the use of their music, so using blockchain to digitize that process is easy to understand. However, what’s the use of Bitmark allowing consumers to register their data under their names? Is it also to authorize information?

Sean: When the Cambridge Analytica scandal happened in 2018, Facebook apologized and was fined. But consumers didn’t receive any reparations from Facebook, because the data was never clearly theirs to begin with.

Consumers provide their data in exchange for Facebook’s services, and Facebook ought to fulfill its data management responsibility. However, who really owns the data has never had a clear boundary: consumers and Facebook both claim ownership.

The same situation happens in hospitals. Do medical records actually belong to hospitals or patients? I think people should confront this problem, and blockchain is what we can use to solve it.

In the future, consumers just have to take one more step, registering their data under their own names, and things can become very different. Data property rights will then leave no room for uncertainty, and will be convenient to authorize.

Once data property rights belong to consumers and only authorized enterprises can use the data, one can assume that enterprises suffering data breaches will have to pay consumers reparations, just as a bank that gets hacked and sustains losses must repay its depositors.

Astro: Do you think companies will have to buy or rent data from consumers, rather than directly taking and using it as they do now? And how can consumers register data under their own names?

Sean: Data is an important digital asset, and by now everybody knows that data can be used to make money. So data has to be treated like copyrights or land: the property rights and the ways to authorize them should all be clearly defined.

In the past nobody did this, because blockchain technology didn’t exist yet. People had no way to register their data piece by piece, so data property rights remained entirely contested: every party thinks the data should be theirs, and in the end companies like Facebook and Google have the advantage.

The Bitmark blockchain helps register all kinds of digital property rights. We are going to embed our services into an HTC phone, and all the data it produces will be immediately registered to the consumer. Consumers won’t notice any change; the data property rights will simply register themselves.

In the future, there will be more and more research institutions and enterprises that want to purchase data from consumers. Besides HTC’s phone, Bitmark is also collaborating with KKBOX to allow producers to authorize music. In the healthcare sphere, we are going to directly embed into health apps or hospitals’ systems. If institutions have research demands, they just have to get paid or unpaid authorization from consumers.

Astro: I originally wanted to ask: with more than 1,000 different blockchains on the market, how are people supposed to choose the best one, or the one best suited to them? Now it seems the question answers itself. People aren’t taking the initiative to choose a blockchain; they might have no idea which blockchain they are using, because blockchain is quietly sneaking into everybody’s devices and apps, right?

Sean: Right. Blockchain is an underlying technology, people won’t know which blockchain their underlying technology uses, just like people usually don’t know whether the chips in their own phones are actually manufactured by TSMC or Samsung.

Whether we’re collaborating with HTC, KKBOX, or other health apps, blockchain will slowly make its way into people’s daily lives through these companies.

Astro: When consumers first register a lot of data on the Bitmark blockchain, will there be any privacy issues?

Sean: This is a common misperception. Consumers aren’t giving their data to Bitmark; they’re just registering their data’s property rights on the Bitmark blockchain. What appears on the chain is just a random-looking hash, so there are no privacy issues. It’s like a land management bureau registering land titles: you give the bureau your property rights information, but the bureau doesn’t own your land.

In the future, data will live on consumers’ phones or in companies’ data centers, not on the Bitmark blockchain itself. Protecting the genuine security of that data is extremely important, but it is beyond the scope of Bitmark’s control.
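The distinction Sean draws, registering a fingerprint of the data rather than the data itself, can be sketched in a few lines. The record layout and field names below are invented for illustration and are not Bitmark’s actual registration format:

```python
import hashlib

def register_asset(data: bytes, owner: str) -> dict:
    """Build a registration record containing only a fingerprint of the data.

    The SHA-256 hash identifies the data uniquely, but the data itself
    never leaves the owner's device.
    """
    return {"owner": owner, "fingerprint": hashlib.sha256(data).hexdigest()}

health_data = b"heart rate log: 62, 64, 61, ..."
record = register_asset(health_data, owner="consumer-public-key")

# The record reveals nothing about the underlying data:
# reversing a SHA-256 hash is computationally infeasible.
print(record)
```

Anyone holding the original data can recompute the hash and prove it matches the registered record, while the record alone discloses nothing.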

Astro: Everybody is looking for suitable situations to use blockchain. Currently, do the companies Bitmark is collaborating with have anything in common with each other?

Sean: They’ve all encountered problems due to the lack of clarity surrounding digital property rights. Corporations will do whatever they can to own data for themselves, but data is generated by consumers. Bitmark helps consumers register data’s digital property rights, and creates a platform for exchanging data, allowing consumers to authorize its use, thus establishing a completely new data trade standard.

Astro: This is the last question. When do you think blockchain will be universally used?

Sean: Even I don’t know. This is just like asking a newly hatched chick “when did you think you were going to hatch?” The chick just knows that it always wants to hatch, but it doesn’t know when it will.

This interview barely discussed cryptocurrency. Even so, it was very worthwhile to discuss pure use cases of blockchain on their own.

The Bitmark blockchain helps people establish property rights over their digital assets, dovetailing with HTC’s blockchain phone ambition to “let go of data.” Once consumers own their data property rights, the next immediate question is how to authorize the data’s use.

This is an important milestone. In the future, when companies want to use users’ data, they will need permission and will have to pay compensation. Sean therefore predicts that commercial data agents will emerge, responsible for matching companies with consumers: helping enterprises obtain authorization and make payments, and helping consumers find the buyers that best match their needs, for example those offering the highest prices.

Furthermore, this could also shake today’s tech giants. For Facebook and Google alike, data is the goose that lays the golden eggs, and they won’t want to give it up overnight. This is the innovator’s dilemma. Because of this, the tech giants might not go with the new generation’s flow. On the contrary, governments, which don’t currently rely on consumer data to make money, and startups, which have no data yet, can open their arms and embrace the new trend.

The revolution may begin with the emergence of the blockchain phone and its first small group of users. Phones are where people spend most of their time, and blockchain’s entry into phones will not be limited to managing cryptocurrencies; it can also let people manage their own data rights more conveniently.

Subtly applying Bitmark’s technology to devices and applications is the next key step towards giving power back to the people.

Here is a link for the original publication (in Chinese): https://blocktrend.today/03-12-2019-interview-bitmark-ceo-sean-moss-pultz

Subscribe to BlockTrend Today Newsletter (in Chinese): https://blocktrend.today/member-plan


By Simon Imbot on May 03, 2019.

With Data Anonymization Becoming A Myth, How Do We Protect Ourselves In This World Of Data?

With humanity moving into the world of big data, it has become increasingly challenging, if not impossible, for individuals to “stay anonymous”.

Every day we generate large amounts of data, representing many aspects of our lives. We are constantly told that our data is magically safe to release as long as it is “de-identified”. In reality, however, our data and privacy are constantly exposed and abused. In this article, I will discuss the risks of de-identified data and then examine the extent to which existing regulations effectively secure privacy. Lastly, I will argue that individuals should take a more proactive role in claiming rights over the data they generate, regardless of how identifiable it is.

What can go wrong with “de-identified” data?

Most institutions, companies, and governments collect personal information. When it comes to data privacy and protection, many of them assure customers that only “de-identified” data will be shared or released. However, it is critical to realize that de-identification is not a magic process and cannot fully prevent someone from linking data back to individuals, for example via linkage attacks. Moreover, there are new types of personal data, like genomic data, that simply cannot be de-identified.

Linkage attacks can re-identify you by combining datasets.

A linkage attack takes place when someone uses indirect identifiers, also called quasi-identifiers, to re-identify individuals in an anonymized dataset by combining that data with another dataset. The quasi-identifiers here refer to the pieces of information that are not themselves unique identifiers but can become significant when combined with other quasi-identifiers [1].
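To make this concrete, here is a toy linkage attack in Python. Every record and name below is fabricated for illustration, but the mechanics are the same: two datasets that each seem harmless alone are joined on shared quasi-identifiers.

```python
# "Anonymized" hospital data: direct identifiers removed,
# but quasi-identifiers (ZIP code, birth date, sex) remain.
hospital_visits = [
    {"zip": "02138", "dob": "1945-07-31", "sex": "M", "diagnosis": "hypertension"},
    {"zip": "90210", "dob": "1980-01-15", "sex": "F", "diagnosis": "asthma"},
]

# A public dataset (e.g. a voter roll) that carries names
# alongside the same quasi-identifiers.
voter_roll = [
    {"name": "J. Smith", "zip": "02138", "dob": "1945-07-31", "sex": "M"},
    {"name": "A. Jones", "zip": "60601", "dob": "1975-03-02", "sex": "F"},
]

def link(anonymous, public, keys=("zip", "dob", "sex")):
    """Re-identify anonymous rows whose quasi-identifiers match a public row."""
    index = {tuple(row[k] for k in keys): row["name"] for row in public}
    return [
        {**row, "name": index[tuple(row[k] for k in keys)]}
        for row in anonymous
        if tuple(row[k] for k in keys) in index
    ]

for match in link(hospital_visits, voter_roll):
    print(match["name"], "->", match["diagnosis"])  # J. Smith -> hypertension
```

Neither dataset discloses a diagnosis-with-name on its own; the join is what breaks the anonymity, which is exactly what happened in the real cases that follow.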

One of the earliest linkage attacks happened in the United States in 1997. The Massachusetts State Group Insurance Commission released hospital visit data to researchers for the purpose of improving healthcare and controlling costs. The governor at the time, William Weld, reassured the public that patient privacy was well protected, as direct identifiers were deleted. However, Latanya Sweeney, an MIT graduate student at the time, was able to find William Weld’s personal health records by combining this hospital visit database with an electoral database she bought for only US$ 20 [2].

Another famous linkage attack involved the Netflix Prize. In October 2006, Netflix announced a one-million-dollar prize for improving its movie recommendation service, publishing movie-rating data from around 500,000 customers collected between 1998 and 2005 [3]. Netflix, much like the governor of Massachusetts, reassured customers that there were no privacy concerns because “all identifying information has been removed”. However, A. Narayanan and V. Shmatikov later published the research paper “How To Break Anonymity of the Netflix Prize Dataset”, showing how they successfully identified the Netflix records of non-anonymous IMDb users, uncovering information that could not be determined from their public IMDb ratings [4].

Some, if not all, data can never be truly anonymous.

Genomic data is some of the most sensitive and personal information a person can have. With the price of and time required for sequencing a human genome dropping rapidly over the past 20 years, people now only need to pay about US$1,000 and wait less than two weeks to have their genome sequenced [5]. Many other companies, such as 23andMe, also offer cheaper and faster genotyping services that tell customers about their ancestry, health, traits, etc. [6]. It has never been easier or cheaper for individuals to generate their genomic data, but this convenience also creates unprecedented risks.

Unlike blood test results, which effectively expire, genomic data changes little over an individual’s lifetime and therefore has long-lived value [7]. Moreover, genomic data is highly distinguishable, and various scientific papers have shown that it is impossible to make genomic data fully anonymous. For instance, Gymrek et al. (2013) demonstrate that surnames can be recovered from personal genomes by linking “anonymous” genomes with public genetic databases [8]. Lippert et al. (2017) also challenge current concepts of genomic privacy by showing that de-identified genomes can be identified by inferring phenotypic measurements such as physical traits and demographic information [9]. In short, once someone has your genome sequence, regardless of its level of identifiability, your most personal data is out of your hands for good, unless you could change your genome the way you would apply for a new credit card or email address.

That is to say, we as individuals have to acknowledge that just because our data is de-identified doesn’t mean our privacy or identity is secure. We must learn from linkage attacks and from genomic science that what used to be considered anonymous can be easily re-identified using new technologies and tools. Therefore, we should proactively own and protect all of our data before, not after, our privacy is irreversibly out the window.

Unfortunately, existing laws and privacy policies might protect your data far less than you imagine.

Once you understand how NOT anonymous your data really is, you might wonder how existing laws and regulations keep de-identified data safe. The answer, surprisingly, is that they don’t.

Due to the common misunderstanding that de-identification magically makes personal data safe to release, most regulations, at both the national and company levels, do not cover data that doesn’t relate to an identifiable person.

At the national level

In the United States, the Privacy Rule of the Health Insurance Portability and Accountability Act (HIPAA) protects all “Individually Identifiable Health Information” (or Protected Health Information, PHI) held or transmitted by a covered entity or its business associate, in any form or medium. PHI includes many common identifiers such as name, address, birth date, and Social Security Number [10]. It is noteworthy, however, that there are no restrictions on the use or disclosure of de-identified health information. In Taiwan, one of the leading democratic countries in Asia, the Personal Information Protection Act covers personal information such as name, date of birth, ID number, passport number, characteristics, fingerprints, marital status, family, education, occupation, medical records, and medical treatment [11]. However, the Act likewise doesn’t clarify the rights concerning “de-identified” data. Even the European Union, which has some of the most comprehensive data protection legislation, states in its General Data Protection Regulation (GDPR) that “the principles of data protection should therefore not apply to anonymous information, namely information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable” [12].

Source: Privacy on iPhone — Private Side (https://www.youtube.com/watch?v=A_6uV9A12ok)

At the company level

A company’s privacy policy is, to some extent, the last line of defense for an individual’s rights to data. Whenever we use an application or device, we are compelled to agree to its privacy policy and express our consent. However, the biggest technology companies, whose businesses largely depend on exploiting users’ data, tend to exclude “de-identified data” from their privacy policies as well.

Apple, despite positioning itself as one of the biggest champions of data privacy, states in its privacy policy that it may “collect, use, transfer, and disclose non-personal information for any purpose” [13]. Google also mentions that it may share non-personally identifiable information publicly and with partners like publishers, advertisers, developers, or rights holders [14]. Facebook, the company that has caused massive privacy concerns over the past year, openly states that it provides advertisers with reports about the kinds of people seeing their ads and how the ads are performing, while assuring users that it doesn’t share information that personally identifies them. Fitbit, which reportedly holds 150 billion hours of anonymized heart data from its users [15], states that it may share non-personal information that is “aggregated or de-identified so that it cannot reasonably be used to identify an individual” [16].

Overall, no government or company currently protects individuals’ de-identified data, despite the foreseeable privacy abuses if or when such data is linked back to individuals in the future. In other words, none of these institutions can be held legally accountable if de-identified data is later re-identified. The risks fall solely on individuals.

Individuals should have full control over, and legal recourse regarding, the data they generate, regardless of its level of identifiability.

Acknowledging that advances in fields like artificial intelligence make complete anonymity less and less possible, I argue that all data generated by an individual should be treated as personal data, regardless of its current level of identifiability. In a democratic society governed by the rule of law, such a new way of viewing personal data will need to come from both bottom-up public awareness and top-down regulation.

As the saying goes, “preventing diseases is better than curing them.” Institutions should focus on preventing foreseeable privacy violations when “anonymous” data gets re-identified. One of the first steps can be publicly recognizing the risks of de-identified data and including it in data security discussions. Ultimately, institutions will be expected to establish and abide by data regulations that apply to all types of personally generated data regardless of identifiability.

As for individuals, who generate data every day, they should take their digital lives much more seriously and be proactive in understanding their rights. As stated previously, when supposedly anonymous data is somehow linked back to somebody, it is the individual, not the institution, who bears the cost of the privacy violation. Therefore, as more new apps and devices appear, individuals need to go beyond blindly accepting terms and conditions without reading them, and acknowledge the degree of privacy risk to which they are agreeing. Non-profit organizations such as Privacy International, Tactical Technology Collective, and the Electronic Frontier Foundation are good places to start learning more about these issues.

Overall, as we continue to navigate the ever-changing technological landscape, individuals can no longer afford to ignore the power of data and the risks it can bring. The data anonymity problems addressed in this article are just a few examples of what we are exposed to in our everyday lives. It is therefore critical for people to claim full control of, and adequate legal protection for, their data. Only by doing so can humanity truly enjoy the convenience of innovative technologies without compromising our fundamental rights and freedoms.

Reference

[1] Privitar (Feb 2017). Think your ‘anonymised’ data is secure? Think again. Available at: https://www.privitar.com/listing/think-your-anonymised-data-is-secure-think-again
[2] Privitar (Feb 2017). Think your ‘anonymised’ data is secure? Think again. Available at: https://www.privitar.com/listing/think-your-anonymised-data-is-secure-think-again
[3] A. Narayanan and V. Shmatikov (2008). Robust De-anonymization of Large Sparse Datasets. Available at: https://www.cs.utexas.edu/~shmat/shmat_oak08netflix.pdf
[4] A. Narayanan and V. Shmatikov (2007). How To Break Anonymity of the Netflix Prize Dataset. Available at: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.100.3581&rep=rep1&type=pdf
[5] Helix. Support Page. Available at: https://support.helix.com/s/article/How-long-does-it-take-to-sequence-my-sample
[6] 23andMe Official Website. Available at: https://www.23andme.com/
[7] F. Dankar et al. (2018). The development of large-scale de-identified biomedical databases in the age of genomics — principles and challenges. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5894154/
[8] Gymrek et al. (2013). Identifying personal genomes by surname inference. Available at: https://www.ncbi.nlm.nih.gov/pubmed/23329047
[9] Lippert et al. (2017). Identification of individuals by trait prediction using whole-genome sequencing data. Available at: https://www.pnas.org/content/pnas/early/2017/08/29/1711125114.full.pdf
[10] US Department of Health and Human Services. Summary of the HIPAA Privacy Rule. Available at: https://www.hhs.gov/hipaa/for-professionals/privacy/laws-regulations/index.html
[11] Laws and regulations of ROC. Personal Information Protection Act. Available at: https://law.moj.gov.tw/Eng/LawClass/LawAll.aspx?PCode=I0050021
[12] GDPR. Recital 26. Available at: https://gdpr-info.eu/recitals/no-26/
[13] Apple Inc. Privacy Policy. Available at: https://www.apple.com/legal/privacy/en-ww/
[14] Google. Privacy & Terms (effective Jan 2019). Available at: https://policies.google.com/privacy?hl=en&gl=tw#footnote-info
[15] BoingBoing (Sep 2018). Fitbit has 150 billion hours of “anonymized” health data. Available at: https://boingboing.net/2018/09/05/fitbit-has-150-billions-hours.html
[16] Fitbit. Privacy Policy (effective Sep 2018). Available at: https://www.fitbit.com/legal/privacy-policy#info-we-collect

By Hsiang-Yun L. on April 29, 2019.

Blockchain Startups With Real World Applications

You might have already heard, but Bitmark has been selected as one of twelve startups to participate in the 2019 UC Berkeley Blockchain Xcelerator! We’re very excited, and would like to thank Blockchain at Berkeley, The Sutardja Center for Entrepreneurship and Technology, and the Haas School of Business for this opportunity to connect with the extensive resources that the Berkeley and Silicon Valley communities can provide.

Over the course of the next few weeks we’ll be meeting with advisors, mentors, and industry experts, attending weekly pitch sessions and speaker sessions. The accelerator offers the opportunity to receive an investment of up to $200k USD from the X-Fund, a VC focused on investing in UC Berkeley’s Blockchain ecosystem and emerging technologies. We’ll also have the potential to win additional investments from partner funds.

What makes this accelerator so unique is that its leadership seeks to push blockchain technology beyond the hype of cryptocurrency and further its adoption as a practical tool. This first batch of teams consists of startups that are more than just ICOs and quick ways to make some cash. We have all demonstrated the ability to offer concrete new ways to use blockchain to solve real problems and create new value.

Bitmark is certain that data is the world’s next major asset class. We use blockchain to defend the evolution of property rights: from physical and intellectual property to data and digital property. To read more about us and our fellow teams, check out:

Meet the teams: Berkeley Blockchain Xcelerator kicks off with first batch – Berkeley Blockchain…
On March 19, the Berkeley Blockchain Xcelerator welcomed its first batch of teams to the recently launched accelerator…

xcelerator.berkeley.edu

By Simon Imbot on April 26, 2019.

Photo by Curtis MacNewton on Unsplash

How To Use Blockchain To Make Your Data Less Tragic

Written By Shannon Appelcline

The problem begins two hundred years and at least two technological revolutions before the blockchain. Because grazing lands were held in common, individuals had no incentive to use those fields appropriately. Farmers allowed their animals to overgraze, eventually ruining the land because each individual sought to maximize their own benefit.

This is the Tragedy of the Commons, as first detailed by William Forster Lloyd in 1833. The Tragedy describes the problem of using an openly accessible resource, where any individual can benefit from using the resource but the costs of that use are borne by the entire group. The selfish and destructive usage that naturally results is the Tragedy of the Commons.

“The vast innovations and expansions of the modern age have now brought the Tragedy to new fields.”

Most classic examples of the Tragedy of the Commons are ecological, focused on topics like overgrazing, overfishing, and overpopulation. However, the vast innovations and expansions of the modern age have now brought the Tragedy to new fields. Much of our society now operates on interconnected computer technologies that are full of common resources. It flows through shared fiber and routers; it operates on shared software; and it transmits and uses data that has been shared, whether we intend it or not. The Tragedy of the Commons tells us that all of these openly accessible resources are likely to be abused to the point of destruction; the whole internet might be one problem away from a complete breakdown.

The Heartbleed bug of 2014 offers one of the clearest examples to date of how the Tragedy of the Commons impacts our shared online resources. Heartbleed was a critical security bug accidentally introduced into OpenSSL, the open-source software used to secure most communications on the internet, as part of a “heartbeat” extension released in 2012. Though the heartbeat code was reviewed when it was incorporated into OpenSSL, the process was much less rigorous than the extensive security reviews that SSL implementations had required in their earliest days. That’s because in the 2010s, OpenSSL was being maintained by just a single full-time developer and a small group of volunteers. A problem like Heartbleed, where a single mistake compromised half a million certificates and uncountable “secure” connections, was thus almost inevitable. The entire internet was using the shared resource of the OpenSSL code, but no one was supporting it properly: this is the definition of the Tragedy of the Commons.

Open source software is just one of the twenty-first century’s tragic commons. Many of the shared resources that comprise the internet have already proven vulnerable. Asymmetric DSL lines can get clogged by uploads, while entire neighborhoods see their internet slow down every Saturday and Sunday night. Forums created for communities can be destroyed by spammers trying to make a buck.

And then there’s a digital resource that many people don’t think about: data.

The Data Commons & Digital Property

Most internet users don’t realize that their data is quickly becoming a commons too. Unfortunately, when we upload data to the internet we usually forfeit our exclusive ownership. Obviously, when we write blogs, post images, or tweet messages, they might get copied or reused by others. Some of this replication is supported by the law, some by terms of service, and some not at all, but it happens nonetheless.

“Most internet users don’t even realize that their data is quickly becoming a commons too.”

However, the data joining this new commons goes far beyond the material that we explicitly post. Exercise trackers record where we are; internet searches reveal our interests; and voice assistants do both. Using this data, aggregators can create models of who we are, what we want, and what we might do — without our permission, and beyond our control.

Our health information is becoming part of the data commons too. That includes our DNA information, which is some of the most intimate and personally identifiable data out there. Despite that, people are now sending their DNA off to companies and uploading it to publicly searchable websites. These genomic data commons have already led to uses beyond the dreams of submitters, such as when the Sacramento County Sheriff’s department searched the GEDMatch database for the identity of the Golden State Killer — a burglar and serial killer who terrorized California in the 1970s. They were able to match records of 10 to 20 distant relatives and eventually built a family tree that revealed the killer. Obviously, tracking down a serial killer is a societal good, but it shows how data placed in a commons can be used for far different purposes than was intended.

In other words, the classic Tragedy of the Commons applies to this data commons. Our data gets used and reused, diminishing its value while the commons get abused, likely leading to their ultimate destruction. There’s no transparency, and we have no control.

This Tragedy of the Commons is just one example of a negative externality, where we as individuals are impacted by transactions between other people. The lack of concern for the data commons also generates other externalities, such as the loss of privacy and the possibility of financial losses when our data is breached — and huge data breaches impacting millions of people have been happening regularly, with some of the largest occurring at Equifax, Marriott, and Target. This adds insult to injury; even once your data has become valueless, you can still be harmed by malicious actors stealing and selling your personal data.

So how do we solve the tragedy of the data commons? How do we take back our control of our data? We can do so by reaching back to classic property law, newly updated for the digital age, which permits us to register our data as digital property. Doing so allows us to prove that we’re the owners of that data. We can prove whether specific uses were licensed (or not!), and we can demand that our data be returned to us if it’s being used in some unlawful way.
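To make the idea concrete, here is a minimal sketch of data registration. This is purely illustrative and not Bitmark's actual protocol; the `Registry` class and its methods are hypothetical names. The key idea is that a cryptographic fingerprint lets an owner stake a claim to data without publishing the data itself:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """A content fingerprint identifies the data without revealing it."""
    return hashlib.sha256(data).hexdigest()

class Registry:
    """Toy property registry mapping data fingerprints to owners."""
    def __init__(self):
        self.records = {}

    def register(self, data: bytes, owner: str) -> str:
        fp = fingerprint(data)
        # First registrant wins; later claims on the same data are rejected.
        if fp in self.records:
            raise ValueError("already registered")
        self.records[fp] = owner
        return fp

    def owner_of(self, data: bytes):
        return self.records.get(fingerprint(data))

registry = Registry()
registry.register(b"my exercise log, week 7", "alice")
print(registry.owner_of(b"my exercise log, week 7"))  # alice
```

A real system would add digital signatures, timestamps, and transfer records so that ownership and licensing can be proven to third parties; this sketch only shows the ownership-lookup idea.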

Even better, the Coase Theorem tells us that registering property can also help to resolve other externalities as long as all parties are able to freely negotiate. If our data is registered as digital property, then we will have recourse when a company loses our data to a breach, because they will have done us definable harm. Not only can we take back what is ours, but we can enforce better use of the commons itself.

Property rights are crucial to our modern society, and the example of the data commons shows us why: they allow us to exert property rights when our data is used, sold, compiled, or even lost without our permission.

The Data Commons & The Blockchain

Turning data into registered digital property solves the basic tragedy of the data commons, but it creates a new problem as well, because that data has to be recorded somewhere by someone. This suggests the need for a centralized authority, but that goes against one of the core advantages of the internet: the openness that led to most of its innovative growth.

“When you add regulation, you typically have to add centralized management as well.”

As it happens, centralization is an almost inevitable byproduct of any traditional solution to the Tragedy of the Commons. When Lloyd originally wrote about the Tragedy, he stated that it largely resulted from lack of regulation, and when you add regulation, you typically have to add centralized management as well. So how do we regulate property rights in the data commons without turning to a centralized authority?

The answer is one of the newest technologies of the last ten years: the blockchain.

The blockchain originated with Bitcoin, but is now the heart of a variety of use cases from smart contracts to decentralized identities to name services. A blockchain is essentially a permanent, distributed ledger. In other words, it’s a big database that everyone can write to, but that no one can erase. From the viewpoint of the Tragedy of the Commons, the crucial innovation of the blockchain is that it’s built upon consensus rules. The way in which data is added to the blockchain is defined by clear rules that everyone knows. These rules can be changed, but that takes consensus too.
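The “big database that everyone can write to, but that no one can erase” idea can be sketched in a few lines. This is a deliberately simplified model (no real blockchain uses this exact format): each block commits to the hash of the block before it, so anyone can verify the chain, and any attempt to rewrite history is detectable:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a canonical serialization of the block's contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: str) -> None:
    # Each new block records the hash of the previous block.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "data": data})

def verify(chain: list) -> bool:
    """The consensus rule anyone can check: every block must commit
    to the hash of the block before it."""
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, "alice registers dataset X")
append_block(chain, "alice licenses X to bob")
assert verify(chain)

chain[0]["data"] = "mallory registers dataset X"  # tamper with history
assert not verify(chain)
```

The tampering attempt fails because changing an old block changes its hash, breaking the commitment stored in every later block; that is what makes the ledger effectively permanent.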

The blockchain thus offers a solution to the tragedy of the data commons while still maintaining the innovative open nature of the internet. Its consensus rules are regulations, but they’re regulations that are executed by the distributed network itself, rather than a central entity.

Obviously, blockchains can’t solve every Tragedy of the Commons, but they are a solution for things that can be put on a blockchain, including those smart contracts, those decentralized identities, those name services, and more generally … all of our data.

The Data Commons & Bitmark

This discussion isn’t just theoretical. The use of a blockchain to record digital property rights is the heart of the Bitmark Property System: it’s already managing numerous sorts of data, from health information to music royalties. There were many reasons to adopt this particular solution, and foiling the Tragedy of the Commons is definitely one of them.

However, Bitmark’s work on digital property rights is just the first step in solving the tragedy of the data commons. To reach its fullest success requires buy-in from companies, governments, and individuals. Fortunately, the tides have been shifting in recent years: a variety of groups are now looking at self-sovereign solutions of this type, where control is granted to people, not companies, governments, or organizations.

The EU’s 2018 GDPR has gone the furthest in giving people control over their data. It empowers people in Europe to know how their personally identifiable data is being used and gives them the power to retrieve it if necessary. The GDPR defines personally identifiable data more as a human right than a property right, but it’s a clear step in the same direction: once we’ve recognized people’s personally identifiable data as their own, recognizing their registered data is an obvious next step, and one that Bitmark is ready for.

We’ll see plenty of other Tragedies of the Commons on the internet as the online world continues to mature, and it seems likely that the blockchain will be a good solution for many of them.

How might blockchains, or the approach of consensus rules, apply to other shared resources like open software, shared bandwidth, and community forums? That’s exactly the sort of question we should be asking to ensure the future of the internet.

Further Reading

Armerding, Taylor (2018). “The 18 Biggest Data Breaches of the 21st Century”. CSO Online. Retrieved from https://www.csoonline.com/article/2130877/the-biggest-data-breaches-of-the-21st-century.html.

Davidow, Bill (2012). “The Tragedy of the Internet Commons”. The Atlantic. Retrieved from https://www.theatlantic.com/technology/archive/2012/05/the-tragedy-of-the-internet-commons/257290/.

Hsiang-Yun L. (2019). “Coase Theorem in the World of Data Breaches”. Human Rights at the Digital Age. Retrieved from https://techandrights.tech.blog/2019/02/22/coase-theorem-in-the-world-of-data-breaches/.

Lloyd, William Forster (1833). Two Lectures on the Checks to Population. Retrieved from https://en.wikisource.org/wiki/Two_Lectures_on_the_Checks_to_Population.

Synopsys (2017). “The Heartbleed Bug”. Retrieved from http://heartbleed.com/.

Zhang, Sarah (2018). “How a Genealogy Website Led to the Alleged Golden State Killer”. The Atlantic. Retrieved from https://www.theatlantic.com/science/archive/2018/04/golden-state-killer-east-area-rapist-dna-genealogy/559070/.

By Simon Imbot on April 20, 2019.

Coase Theorem in the World of Data Breaches


“This is a really serious security issue, and we’re taking it really seriously. … I’m glad we found this, but it definitely is an issue that this happened in the first place.”

— Facebook CEO Mark Zuckerberg

(after the company’s security breach that exposed the personal information of 30 million users.[1])

We now live in a world of data. Every single day, each one of us generates some very personal data about what we see, where we go, who we talk to, what we think and even who we are. Data is quickly becoming one of the most critical factors of production in the current market economy. Yet, it also brings negative externalities that cannot and should not be ignored for the market to function effectively. Many economists have proposed theories and tools to tackle the problem of externalities. In this article, I am going to specifically focus on the solution proposed by Ronald Coase in 1960, and show how the theory can be applied to the modern world of data.

When the Market Fails

Before diving into the Coase Theorem, we first need to talk about “externality”, which can be defined as the positive or negative consequences of economic activities on third parties [2]. An externality is considered a form of market failure, as it is a spillover effect of the consumption or production of a good that is not reflected in the price of that good [3]. That is, the market equilibrium fails to capture and reflect the real cost/benefit of economic activity. Everyday externalities that people encounter include air pollution and cigarette smoking. Another classic example of a negative externality is described by Garrett Hardin in his paper “The Tragedy of the Commons”, which discusses how individuals tend to exploit shared resources until demand greatly outweighs supply and the resource becomes unavailable to the whole [4].

Pollution is a classic example of a negative externality.

Coase Theorem: Assigning Property Rights to Tackle Externalities

Prior to Ronald H. Coase, who was awarded the Nobel Prize in Economics in 1991, economists were inclined to consider corrective government actions the solution to externalities: for instance, setting numerical limits on activities with external effects (command-and-control regulation), subsidizing the consumption of positive externalities, or internalizing the externalities through the price system (a Pigouvian tax). However, in his 1960 publication “The Problem of Social Cost”, Coase argues that there is a real danger that such government intervention in the economic system in fact leads to the protection of those responsible for harmful effects [5]. Instead, he suggests that the market can potentially solve the problem of externalities by itself if property rights are complete and parties can negotiate costlessly.

“We may speak of a person owning land and using it as a factor of production but what the land-owner in fact possesses is the right to carry out a circumscribed list of actions.”

— Coase, Ronald (1960). The Problem of Social Cost.

To see how this economic theory can be applied to a real-world problem, let’s take a quick look into the Cap-and-Trade system.

Cap-and-Trade: A real-world application of Coase Theorem

Facing the global challenge of climate change, the European Union created the world’s first international Emissions Trading System (ETS) in 2005 with the goal of reducing greenhouse gas emissions. The EU ETS works on a cap-and-trade principle: a cap is set on the total amount of certain greenhouse gases that can be emitted by installations in the system. The cap is reduced over time so that total emissions fall. Within the cap, companies can receive or buy emission allowances, which they can trade with one another as needed [6]. In other words, the cap in effect represents the right to emit certain greenhouse gases, while the trading reflects the negotiations that Coase argues can lead to more efficient market allocation.
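A toy calculation, with entirely made-up numbers, shows why the trading matters. Suppose two firms must jointly meet a cap, but firm A can abate emissions far more cheaply than firm B. Trading lets the cheap abater do more of the cutting and sell its spare allowances, so the same cap is met at lower total cost:

```python
# Toy cap-and-trade model (hypothetical numbers, not real ETS data).
def abatement_cost(tons: float, unit_cost: float) -> float:
    """Cost of abating (not emitting) the given number of tons."""
    return tons * unit_cost

baseline = {"A": 100.0, "B": 100.0}  # tons each firm emits with no policy
unit_cost = {"A": 10.0, "B": 40.0}   # cost per ton abated
# Cap of 140 tons total, issued as 70 allowances per firm.

# Without trading: each firm abates down to its own 70-ton allowance.
no_trade = sum(abatement_cost(baseline[f] - 70.0, unit_cost[f])
               for f in baseline)

# With trading: A (the cheap abater) abates all 60 required tons and
# sells its 30 spare allowances to B, which abates nothing.
# Total emissions are still 40 + 100 = 140 tons, exactly the cap.
with_trade = abatement_cost(60.0, unit_cost["A"])

print(no_trade)    # 1500.0
print(with_trade)  # 600.0
```

The cap (the property right) fixes total emissions either way; the Coasean negotiation only changes who does the abating, cutting society’s cost of compliance from 1500 to 600 in this example.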

“Trading brings flexibility that ensures emissions are cut where it costs least to do so. A robust carbon price also promotes investment in clean, low-carbon technologies.”

— The European Commission

According to the EU, the ETS has shown good results: the cap on emissions from power stations and other fixed installations is being reduced by 1.74% every year between 2013 and 2020 [7], and by 2030 emissions are projected to be 43% lower than in 2005 [8].

Coase Theorem in the World of Data Breaches

Living in the age of big data, we find data breaches increasingly common in our daily lives. According to the Identity Theft Resource Center, the number of significant breaches at US businesses, government agencies, and other organizations reached 1,300 in 2017, compared to fewer than 200 in 2005 [9]. This increase is partly due to the fact that the world’s volume of data has grown exponentially over the past decade, giving cybercriminals a greater opportunity to expose massive volumes of data in a single breach [10].

Although a data breach is normally defined as an “incident” in which information is stolen or taken from a system without the knowledge or authorization of the system’s owner [11], I suggest viewing data breaches (especially those involving personal information) as a modern form of negative externality. When the data that institutions capture from individuals to run their businesses gets breached, individuals suffer spillover effects in the form of privacy and financial losses. Yet the liability for such harm is not clearly defined, and is therefore not taken into account within the market mechanism.

“We found no evidence that any developer was aware of this bug, or abusing the API, and we found no evidence that any Profile data was misused.”

— Google Official Statement disclosing the data leak affecting up to 500,000 accounts [12].

Take Facebook’s security breach in September 2018 as an example. 30 million people (more than the whole population of Australia) had their names and contact details leaked, and 14 million of them further had sensitive information (including gender, locale/language, relationship status, religion, hometown, self-reported current city, birthdate, education, work, etc.) exposed to the attackers [13]. Given the significant harm that this “incident” brought to people’s privacy, what Facebook did was apologize, saying that it was “a breach of trust” and promising to “do better” for its users [14]. Yet no matter how sincere those apologies might be, they cannot and will not solve the core of the problem.

Data breaches cause great harm to society as well as individuals. However, such negative externalities are not well captured and reflected in the market.

This is not, however, to suggest solving data breaches with one-size-fits-all government regulation, because according to Coase, we need to recognize the reciprocal nature of the problem. That is, a data breach cannot happen without Facebook failing to secure its data, but at the same time, it also cannot take place without users willingly putting their data into the platform. What is missing here, based on the Coase Theorem, is a clear definition of the rights to data.

In the World of Data where Property Rights are defined and defended

Based on the Coase Theorem, the property rights to data in fact refer to the rights to carry out a circumscribed list of actions. Such actions might include:

  • Right to control access to one’s data
  • Right to monetize one’s data
  • Right to donate/give away one’s data
  • Right to defend the privacy of one’s data
  • …

When the above rights to data are clearly defined, individuals are empowered with legal recourse and bargaining power against a “data breach incident” that infringes their rights. In the case of Facebook, for example, users would be able to confront Facebook in court over its failure (intentional or not) to defend their data privacy and to use their data only for permitted purposes. Even before a data breach occurs (which seems inevitable for centralized data storage), users could already negotiate terms with Facebook for the potential risks Facebook exposes them to. Facing such confrontation and consequences, Facebook would be forced to better account for the costs and risks it bears when storing and utilizing its users’ data. This might lead to a change of business model for Facebook, or to a new user-platform relationship in which Facebook openly compensates users for the risks they are exposed to.

In short, as Coase argued, once the rights to data are clarified, parties can openly negotiate terms and compensation for the resulting negative externalities, just as we do with greenhouse gases, and thereby arrive at a better market equilibrium.

One baby step at a time to tackle market failures in the world of data

The Facebook data breach is not the first of its kind, and unfortunately it will not be the last. In fact, it is estimated that data breaches will only become more frequent, bigger, and more expensive in the near future. Therefore, although the Coase Theorem, like all economic theories, has its limitations in real-world applications, it still sheds light on how defining the rights to data can be the first step toward solving digital-world negative externalities such as data breaches and enabling a better-functioning market mechanism in the long term.

References

[1] The New York Times (Sep 2018). Facebook Security Breach Exposes Accounts of 50 Million Users. Available at: https://www.nytimes.com/2018/09/28/technology/facebook-hack-data-breach.html

[2] Quickonomics. Positive Externalities vs Negative Externalities. Available at: https://quickonomics.com/positive-externalities-vs-negative-externalities/

[3] Intelligent Economist. Introduction to externalities. Available at: https://www.intelligenteconomist.com/externalities/

[4] Investopedia. Tragedy Of The Commons. Available at: https://www.investopedia.com/terms/t/tragedy-of-the-commons.asp

[5] Coase, Ronald (1960). The Problem of Social Cost.

[6] European Commission. EU Emissions Trading System (EU ETS). Available at: https://ec.europa.eu/clima/policies/ets_en

[7] European Commission. EU Emissions Trading System (EU ETS). Available at: https://ec.europa.eu/clima/sites/clima/files/factsheet_ets_en.pdf

[8] European Commission. EU Emissions Trading System (EU ETS). Available at: https://ec.europa.eu/clima/policies/ets_en

[9] Priceonomics. Why Security Breaches Just Keep Getting Bigger and More Expensive. Available at: https://priceonomics.com/why-security-breaches-just-keep-getting-bigger-and/

[10] Digital Guardian (Jan 2019). The History of Data Breaches. Available at: https://digitalguardian.com/blog/history-data-breaches

[11] Trend Micro. Data Breach. Available at: http://www.trendmicro.tw/vinfo/us/security/definition/data-breach

[12] Google (Oct 2018). Project Strobe: Protecting your data, improving our third-party APIs, and sunsetting consumer Google+. Available at: https://www.blog.google/technology/safety-security/project-strobe/

[13] Facebook Newsroom (Oct 2018), An Update on the Security Issue. Available at: https://newsroom.fb.com/news/2018/10/update-on-security-issue/

[14] The Verge (Mar 2018). Mark Zuckerberg apologizes for Facebook’s data privacy scandal in full-page newspaper ads. Available at: https://www.theverge.com/2018/3/25/17161398/facebook-mark-zuckerberg-apology-cambridge-analytica-full-page-newspapers-ads

By Hsiang-Yun L. on February 26, 2019.