Data collection and privacy concerns
Data privacy is no longer an obscure concern. With the emergence of Web 2.0, people use the internet more and more frequently, and our primary online behavior has shifted from passively consuming information (searching, reading blogs, watching videos) to actively creating it. With the bulk of data a regular person generates each day, they become increasingly susceptible to the data privacy issues circling the internet.
Personal digital assistants like Alexa and Google Home are common household items these days. Facial and voice recognition technologies have made major strides, learning to distinguish different racial and cultural features; voice assistants can now understand multiple accents with accuracy above 90%. It is easier to collect clean, high-quality data sets today than ever before. These assistants are constantly listening to and recording our conversations in exchange for providing us with better services. While the comfort and possibilities of such services do enrich people's lives, the accompanying concerns often go ignored.
Today, data is the most valuable asset a company has, and its value is constantly rising. Companies like Google, Facebook, and Amazon have built vast empires on top of the data economy. With the rise of AI and ML technologies, the technocrats of the world were quick to see the value these technologies would bring, and for them to grow and run, they need fuel in the form of data. AI systems consume ever more data in different forms to learn, which in turn leads corporations to identify and implement new ways of collecting it. As demand for data as a commodity has grown, its supply has risen just as steeply.
Data privacy issues
There are many issues that can be outlined with respect to communication and information technologies in general.
Many people believe the issue with data privacy is primarily ad targeting: corporations know our likes and preferences and show us advertisements based on them. That is not false, but it is far from the whole truth.
Another major concern is behavior conditioning: channeling your beliefs so that you update your likes and preferences, and not just for targeted marketing, but for who you vote for or the causes you believe in. The infamous Cambridge Analytica scandal surrounding the US elections is a prime example of such a concern put into practice.
Another major issue of data privacy is the mishandling of data, or data breaches. While we believe our data to be safe and secure on the servers of big tech companies like Google and Facebook, it is often vulnerable to breaches, data piracy, and the unethical distribution or sale of data. For example, in 2016 Uber was hacked, exposing the personal information of 57 million users. Instead of disclosing the breach, Uber paid the hackers $100,000 to delete the data.
Many services make our information publicly accessible for other users to consume. This enables people to collect, copy, publish, and distribute anyone's data, which can have serious consequences for one's reputation. In 2018, Amazon's Alexa recorded an entire conversation of a family in Portland, Ore., and sent it to one of their contacts, giving rise to a massive debate about data rights.
Data collected online is stored nearly indefinitely, making it difficult, if not impossible, to delete or otherwise get rid of this information.
Behavior conditioning - history and potency
Though there are many issues concerning data privacy, I wish to focus on behavior conditioning, the process of programming human behavior. The idea was first explored by B.F. Skinner, a 20th-century psychologist who conducted a series of experiments on rats to study what he called operant conditioning. In his experiments, rats kept in a controlled environment were motivated to respond to a special kind of stimulus, a 'reinforcing stimulus,' by being offered a reward. After going through the process many times, the rats kept performing the reinforced behavior as their normal behavior, even once the reward was gone.
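As an illustrative aside (not drawn from Skinner's actual data), the core loop of operant conditioning can be sketched as a toy simulation in which each rewarded action makes that action more likely the next time. The function name, rates, and probabilities below are all invented for illustration:

```python
import random

def simulate_operant_conditioning(trials=1000, learning_rate=0.05, seed=42):
    """Toy model of operant conditioning: an agent's probability of
    pressing a lever rises every time the press is rewarded."""
    random.seed(seed)
    press_probability = 0.1  # initially the agent rarely presses the lever
    for _ in range(trials):
        if random.random() < press_probability:  # the agent presses the lever
            # a reward is delivered, reinforcing the behavior:
            # the probability moves a step closer to 1.0
            press_probability += learning_rate * (1.0 - press_probability)
    return press_probability

# After many reinforced trials, the behavior becomes near-automatic
final_probability = simulate_operant_conditioning()
print(round(final_probability, 3))
```

The point of the sketch is the feedback loop: the reward does not merely accompany the behavior, it makes the behavior itself more probable, until the behavior persists on its own.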
Fast forward to the 21st century: today we are those rats, and the likes on Instagram are the little pieces of cheese that keep us in the system while we are being experimented upon. All the data we are willingly or unknowingly giving away is being used to run Skinner-like experiments on us, reinforcing our behavior into whatever the corporations want.
For example, let's say I like wearing blue jackets. Based on my online history, marketing agents would know this 'private' information and show me ads for blue jackets. But suppose that, for some reason, they want to sell a yellow jacket instead. I start seeing blogs and articles in my feed such as "Why yellow is the new trend" or "10 celebrities pulling off yellow jackets." This nudges me toward a yellow jacket instead of a blue one, and I might just end up buying one, with my personal data used as the tool to do so. This is a simple example of how behavior programming operates in humans. Now suppose the question is not about blue or yellow jackets, but about more consequential topics, like my political inclination or my views on contested issues.
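To make the jacket example concrete, here is a minimal, hypothetical sketch of how a feed ranker could quietly privilege an advertiser's target topic over the user's actual preference. Every name and weight here is invented; real recommendation systems are far more complex, but the nudge works on the same principle:

```python
def rank_feed(items, user_preference, target, nudge_weight=2.0):
    """Score items by relevance to the user's known preference,
    but add a hidden boost for the advertiser's target topic."""
    def score(item):
        s = 1.0 if user_preference in item else 0.0
        if target in item:
            s += nudge_weight  # the nudge: target content outranks preference
        return s
    return sorted(items, key=score, reverse=True)

feed = [
    "Classic blue jackets on sale",
    "Why yellow is the new trend",
    "10 celebrities pulling off yellow jackets",
    "Weather report",
]

# The user prefers blue, but yellow content floats to the top of the feed
for item in rank_feed(feed, user_preference="blue", target="yellow"):
    print(item)
```

From the user's side nothing looks amiss: the feed still appears "personalized," yet what it actually optimizes for is no longer the user's preference.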
It is true that, like every technology, behavior conditioning is also used to accomplish positive goals: helping users lose weight, improve their sleep cycle, or attend meetings on time. It offers many positive possibilities and use cases which, implemented successfully, would benefit many users. But the dark side of this method is deeply concerning, especially since many companies and individuals are already applying it, and we as consumers of online services are already participating in these experiments with little awareness or ability to opt out. The scale at which these methods can operate, involving millions of users at a time, is also alarming. We are among those rats as long as we have a cell phone in our pocket or a laptop in our backpack. Many of us are already responding to reinforced behavior without realizing our participation. The real food for thought is how long and how effectively these techniques will be used in the coming years, and to what effect.