As The Before Times began to crumble this spring, with the revelation that the coronavirus was here to stay, the physical world gave way to a modified existence. The changes came quickly: cafe chatter died down, hand sanitizer disappeared from shelves, and we adopted medical-grade handwashing techniques, believing they might be enough to keep our bodies free of the virus. And then it all stopped.
When quarantine hit, our physical lives shrank. But our digital lives were ready for that moment, honed over the past decade, a period in which social media companies became some of the world's largest businesses and the most dominant forces in our daily lives.
We connected in 2020 because we needed social media this year. We were glued to our phones because, well, what else did we have left?
We had our close circles of family and friends, only some of whom we could see in person. But mostly, we had our digital lives. According to eMarketer estimates, the average American adult spent an additional 23 minutes per day on their smartphone in 2020, and 11 more minutes per day on social media alone. Year over year for the third quarter of 2020, Pinterest's daily active user base grew by 37%, Twitter's by 29%, Snapchat's by 18%, and Facebook's by 12%.
As the economy shifted online, Silicon Valley companies saw their stock prices skyrocket. But they also faced an intense backlash from Washington, which has peppered the tech industry with investigations and lawsuits. Even Madison Avenue demanded that social platforms act more responsibly.
Intensified content moderation
The Covid-19 pandemic unleashed a flood of health-related and political disinformation. In response, the social platforms adopted stricter policies to protect their users, such as labeling and deleting inaccurate posts and surfacing reliable information from trusted sources on the pandemic and the elections. When access to accurate information became a matter of life and death, social media companies finally took it more seriously.
When Twitter applied its policies on disinformation and incitement to violence to President Donald Trump's account at the end of May, labeling his tweets as rule-breaking or inaccurate, the entire industry reacted. Snapchat stopped promoting Trump's account, Twitch suspended the president, Reddit banned hate speech, and Facebook was briefly boycotted by over 1,000 advertisers for its failure to keep hate speech and misinformation off its platform.
Misinformation remains a prevalent problem on almost every social media platform, but after years of pretending they are not "media companies," tech companies – including Facebook – have finally taken a more hands-on role in policing their platforms. That said, even as policies have changed, the platforms remain reluctant to share their data with researchers, so it is difficult to quantify how much safer or better informed users are after these changes.
While Democrats and Republicans have different reasons for being skeptical of Silicon Valley, each party has turned its anger on Big Tech. In the social media space, Democrats largely believe the platforms are not doing enough to curb the spread of hate, extremism and disinformation, while Republicans baselessly claim the platforms have a liberal bias in how they moderate content. Either way, the techlash is in full swing in Washington.
Facebook's Mark Zuckerberg, Google's Sundar Pichai and Twitter's Jack Dorsey each testified before Congress in recent months over content moderation as well as, for Zuckerberg and Pichai, allegations of anti-competitive behavior – the government has now filed multiple antitrust lawsuits against Facebook and Google. The Facebook suits target its dominance of social media, while the Google cases focus on its dominance of search and digital advertising.