The evolution of technology and the internet has been nothing short of miraculous. Yet the meteoric rise of wealth, influence, and scandal surrounding technology companies large and small is making people reconsider their impact. Customers, businesses, and even governments are taking steps to map and understand the wide-reaching net of "Big Tech," some hoping to pursue legislative and regulatory solutions. But it may be too late if the data collection model is already in its last stages.
Big Tech is described like any other "big." Big Pharma, Big Tobacco, Big Oil: these organizations become incredibly wealthy, are generally few in number, and hold almost monopolistic control over the development, design, and profitability of their industry. Companies like Amazon, Apple, and Facebook race toward $1T valuations and create the world's wealthiest people; of the 48 million Americans with smart home speakers, 71% use Amazon's Alexa.
Americans connect with technology on a minute-by-minute basis, an engagement commonly referred to as "screen time." From the alarm clock (or for many, the smartphone) when we wake, to the cars that transport us, to a television or radio before we sleep, technology has infected almost everything we do. And the big companies know it. That's why they track you.
Pandora’s Box Opens
Cambridge Analytica is a British company that became a household name early in 2018 when concerning news broke in The New York Times and The Observer. The company specialized in digital and social media data mining: the practice of creating information profiles for users of a particular platform by routing those users through its system. This demographic information is commonly sold to advertisers to enable more precise targeting. A common example was the "quiz": through Facebook, you were shown a link to a quiz, and all you had to do was give it permission to see information on your profile. It seemed simple, and it gave the user a chance to approve or reject the harvesting of their private information.
What made CA's news in March of 2018 so contentious was the allegation that it had lied to or misled both Facebook and its users by harvesting more data than either knew of or approved: specifically, data from users who never took the "quiz" but were simply connected to someone who did. This generated plenty of discussion in the media, online, at the dinner table, and of course in Washington, D.C.
In part because of CA's work for then-candidate Trump's 2016 presidential campaign, the Senate opened investigations into the events surrounding CA: whether it had violated privacy rights, and what part Facebook had played, even if only through negligence. The hearings produced several rounds of testimony, with the CEOs of Facebook, Twitter, and Google invited to testify about safety and privacy concerns surrounding user data.
Our Concern Is Likely Too Late
The hearings on Capitol Hill in April 2018 arguably did more harm than good to the questions we the people charged our leadership in Washington with answering. It was toe-curlingly clear to any tech-friendly viewer that our leadership hadn't the slightest clue how this technology worked, and so could not duly assure the American people that their privacy and personal information were safe, or under the safe control of the companies holding them.
Sadly, our well-intentioned concern may be a day late and a dollar short. Data mining research shows that Facebook alone averages roughly 4.1M posts every minute. Each of these posts can contain a location, the type of device used to post, and everything about the person who posted it. This collection creates a mountain of data so large there is no real way to measure it accurately. It is stored and combined with tools that make it usable or sellable, like the "algorithm." While that word gets much of the attention, algorithms do not factor much into the privacy issue. They do use personal data to target advertising and other promotions, but by design an algorithm can only operate on data the user feeds it. In other words, the user is implicitly permitting the company to use that information to personalize the experience. The proverbial "fine print" details how and in what ways this data is used.
But as the technology has become more complicated and powerful, it has also become more vulnerable. As was the case with Cambridge Analytica, the primary user was unaware that he or she was implicitly giving CA access to their connected network, without the permission of anyone in that network. The episode spurred controversy in Europe, too, lending urgency to the GDPR (General Data Protection Regulation), an attempt to legislate against unwanted access to, or unverifiable permission to use, a user's data. While the regulation satisfied much of the EU's citizenry, it did little to address outright theft or deceptive analytics companies.
Facebook, Google, and even telecommunications companies have lately been beset by controversies ranging from hacks to unauthorized location tracking. These technologies are so invasive that law enforcement agencies around the world use them as better tools than their own surveillance. So what must change to avoid a world where all the personal information we store electronically or share socially on the internet is a Golden Corral buffet for dark-web hackers and hostile foreign powers?
Change Is Brewing
Regardless of legislators, the EU, or even hackers, Big Tech companies are feeling pressure from their users. Some companies now provide updates that let customers delete whole time periods of history or specific searches, and some are switching to an "opt-in only" model that lets customers better control who has access to their personal information, for how long, and for what it can specifically be used.
Consumers are also becoming more educated. While some choose to delete their platforms altogether, others are taking the time to understand the terms and conditions to which they are agreeing. Part of the appeal of allowing a platform to track your data is to "personalize" your experience: everything from personalized ads to recommendations for certain concerts or restaurants, discounts, coupon promotions, and more. Personal responsibility weighs heavily in any solution to the Big Tech privacy issue. A source from the telecommunications industry, who works with privacy-conscious customers and asked to remain anonymous, sees a two-fold solution:
It’s a combination [of personal responsibility and legislation]...You have the option to clear data and caches but [few] go through that work, especially if they’re not conscious of the information being stored….Limiting what kind of information ‘Big Tech’ can keep through legislation would help, but people need to be actively aware [of where and how their content is used.]
Washington and several state legislatures have been addressing problems as specific as an employer's request for a social media account password, or the use of social media access and data in divorce proceedings. These and other measures are steps in the right direction toward protecting users' privacy under the Fourth Amendment to the United States Constitution. Consumers are getting smarter about how much and what they put online, parents are limiting social media access and screen time for young children, and the free-market principles of supply and demand should resolve this issue. Is it too late? Only one way to find out. *posts tweet*
TCC is an online news and opinion publication sponsored by IMC Branding Co, based in Tampa, Florida, designed to give our readers something better than the partisan media of today.