Discussions of the risks posed by the rising influence of online services in our everyday lives tend to revolve around privacy. The business model of dominant digital platforms relies on collecting and processing enormous amounts of private data. To mine users for valuable information, Big Tech companies employ various funnels that direct the stream of online data into their closed silos, where it can be extracted, analyzed, and finally monetized for their sole benefit.

Lack of privacy is not, however, the only problem we should associate with Big Tech dominance over our digital lives. Another, closely related and highly problematic issue is data accessibility. Once your data enters a corporate silo, your control over it diminishes considerably.

While users of closed platforms usually retain legal ownership of their data, they have very limited say over how it is used and processed. The use of their data is governed by long-winded and often deliberately vague Terms of Service, written by lawyers whose main goal is to defend the interests of the platform’s owners. The content hosted on the service is carefully scrutinized for breaches of policy and scraped for any information that might be useful to advertisers. (Regulations like the GDPR or the CCPA may have curbed some of the more egregious advertising practices, but they cannot resolve the power and information asymmetries between users and data-driven companies.)

Service providers dictate not only what kind of data can be stored and shared on their platforms, but also how it can be accessed and processed. They usually force users to access their data with specific tools and very rarely allow third-party software to interact with their services on a user’s behalf. Signing up for a closed service therefore means giving up at least part of your agency.

Users of such services put themselves at the mercy of arbitrary decisions by the platform’s owners, with predictable consequences. News media and popular discussion forums are full of reports and personal accounts of users being banned from a service, or temporarily cut off from their data, over supposed breaches of the terms of use.

Facebook, YouTube, and Twitter, to name just a few examples, have permanently cut off thousands of people from the services they provide, usually without any warning, leaving the banned users with limited options to appeal the decision and no redress. Quite often bans apply at once to a large number of people fitting a certain profile, without taking their individual circumstances into account. And it’s not just social media platforms: in 2019, the Microsoft-owned GitHub blocked developers in Iran, Syria, and Crimea from accessing their private repositories.

Bans imposed by digital platforms can have severe consequences that go far beyond not being able to access your information. They can limit the user’s ability to fulfill their various obligations towards others (e.g. complete a work project or a school assignment) and can cut them off from their social circle. With so many services being interconnected or using the same access credentials, a ban from one platform can also mean the inability to use a vast array of other, seemingly independent services.

Bans imposed by dominant digital platforms can be directed not only at persons but also at software. While there are legitimate reasons for removing an app from a repository, such as preventing the spread of dangerous malware, such bans can also be used to stifle competing platforms or to reduce users’ agency. Examples of the latter include Google banning ad-blocking apps from its Play Store, Apple restricting access to VPN applications for its Chinese customers, and every major platform banning Parler.

Sometimes such bans result not from a careful examination of the relevant facts but from the arbitrary decision of a bot, as when Google banned a video app from its Play Store for an “offensive” description because it listed support for the popular “.ass” subtitle file format. (This was not an isolated incident: Google’s policy bot regularly blocks apps for superficial reasons that later prove to be erroneous.)

Corporate censorship may anger us, but it should not come as a surprise. Those who control the code and the underlying infrastructure, also control access to data. The closed source, corporate-run, silo-like environment by its very nature is antithetical to the free flow of information.

At Golem Foundation we believe in the universal rights of data accessibility and data portability. We believe that users should be free to do what they want with their data, to decide with whom they share it, and who can process it. They should not be restricted in their ability to access their data, move it between different storage options, or interact with it using the tools of their choice. Vendor lock-ins, outside content moderation, and denial of service do not fit our view of an open, decentralized, user-controlled Internet.

This is why we are developing Wildland as a backend-agnostic, censorship-resistant, open-source protocol. The self-defined data containers that form the building blocks of Wildland can be stored anywhere you want: on your hard drive, on your home NAS, or on one of the storage backends available through the Wildland marketplace. You can easily replicate your containers across several backends, each with its own access policies, and move your data between storage options without having to update its location in the file system. And with backend stacking, you will be able to encrypt your data on the fly and turn any backend into end-to-end encrypted storage. In short, with Wildland, we are putting you in charge of your data. Just as it should be. After all, it’s your data.
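To make the ideas above concrete, here is a minimal Python sketch of what backend-agnostic containers, replication, and backend stacking could look like. Everything in it, including the class names, methods, and the toy XOR "encryption", is a hypothetical illustration of the concepts, not Wildland’s actual API or implementation.

```python
# Hypothetical sketch of backend-agnostic containers with replication
# and backend stacking. All names are illustrative assumptions, not
# the real Wildland interfaces.
from dataclasses import dataclass, field
from typing import Dict, List


class Backend:
    """A minimal storage backend (stands in for a disk, NAS, or cloud bucket)."""

    def __init__(self) -> None:
        self._blobs: Dict[str, bytes] = {}

    def put(self, path: str, data: bytes) -> None:
        self._blobs[path] = data

    def get(self, path: str) -> bytes:
        return self._blobs[path]  # raises KeyError if the path is absent


class EncryptedBackend(Backend):
    """Backend stacking: wrap any backend so data is encrypted on the fly.
    A toy XOR cipher stands in for real end-to-end encryption."""

    def __init__(self, inner: Backend, key: bytes) -> None:
        self.inner, self.key = inner, key

    def _xor(self, data: bytes) -> bytes:
        return bytes(b ^ self.key[i % len(self.key)] for i, b in enumerate(data))

    def put(self, path: str, data: bytes) -> None:
        self.inner.put(path, self._xor(data))  # inner backend sees only ciphertext

    def get(self, path: str) -> bytes:
        return self._xor(self.inner.get(path))


@dataclass
class Container:
    """A self-defined data container replicated across several backends.
    The logical path stays stable even when a backend disappears."""

    name: str
    backends: List[Backend] = field(default_factory=list)

    def write(self, path: str, data: bytes) -> None:
        for backend in self.backends:  # replicate to every backend
            backend.put(path, data)

    def read(self, path: str) -> bytes:
        for backend in self.backends:  # serve from the first backend that has it
            try:
                return backend.get(path)
            except KeyError:
                continue
        raise FileNotFoundError(path)


if __name__ == "__main__":
    local = Backend()
    nas = EncryptedBackend(Backend(), key=b"secret")
    notes = Container("notes", backends=[local, nas])

    notes.write("/todo.txt", b"ship the Wildland post")
    assert notes.read("/todo.txt") == b"ship the Wildland post"

    # If one backend cuts us off, the replica still serves the data.
    del local._blobs["/todo.txt"]
    assert notes.read("/todo.txt") == b"ship the Wildland post"
```

The point of the sketch is the shape of the design: the container addresses data by its own paths, so replicas can be added, encrypted, or dropped without the user-facing location ever changing.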