The graduation season is upon us when commencement speakers will once again task college graduates with the mission to change the world. This once meant inventing new products or processes, creating new art forms, or expressing individual ideas to improve the lives of others. Now it increasingly means launching a startup that incorporates its world-changing intentions in its mission statement.
When that world-changing mission is coupled with an entrepreneurial spirit that urges employees to move fast and deal with the consequences later, little room remains to deliberate over risk. The globally networked digital economy offers companies the opportunity to truly change the world in just a few short years. This creates tremendous opportunities, but also breeds serious business and social risks.
Dominant young companies like Facebook, Amazon, and Uber achieved success by rushing forward rather than waiting for societal and governmental approval. Yet the more forcefully a company asserts a mission to change the world, the more seriously it should apply risk management strategies to both the intended and unintended consequences of its organizational objectives.
Facebook softened its original developer-facing motto "Move Fast and Break Things" to the cautionary "Move Fast with a Stable Infrastructure" in 2014 and, in light of more recent criticism, to the more socially palatable "We Learn as We Go." Expect this motto to change again; it does not inspire confidence when politicians are asking whether you can self-regulate.
Setting the Fulcrum for Balancing Trust with Regulatory Control
Companies once grew at a slower pace than legislative processes. There was an opportunity to debate negative outcomes of business practices and set regulatory controls before the damage could become too widespread. The opposite is true today. The speed of business innovation and expansion is far faster than our increasingly sluggish parliamentary procedures and political deadlocks.
This growing disparity in speed means a greater degree of trust must be placed in the hands of budding entrepreneurs and the executives of rising unicorns. Building a risk-aware culture is a challenge for any business, but it is even more daunting for the new breed of world-changing companies built on management trends that encourage employees to make mistakes and learn as they go.
Collecting personal data on customers and prospects is not new, but it will become increasingly important for remaining competitive. It does not matter whether you sell cars, prescribe medication, or provide entertainment: every business in the digital economy will benefit from catering to personal interests and needs. A personalized customer experience offers progress for providers and end users alike, but every business in this new economy will have to carefully manage the risks that accompany handling personal data.
This new era, however, also ushers in weightier responsibilities when companies state a strategic objective to guide users toward a self-defined greater good. Facebook founder Mark Zuckerberg’s bold message to the world is “It’s not enough to just connect people, we have to make sure that those connections are positive.”
We have entered an age when corporate boards and executives are inviting the additional responsibilities and burdens of addressing social demands. Companies are not only collecting, analyzing, and communicating data, but also guiding users and prescribing answers with AI tools.
Western economies, and the United States in particular, prefer to cultivate entrepreneurship with a light-touch regulatory environment. The swift emergence of world-changing businesses targeting global audiences tests this preference when fundamental rights are at stake.
I am not a supporter of regulatory expansion, with its looming threat of stifling creativity. But watching several congressmen at the Zuckerberg hearing threaten to abandon their light-touch regulatory stance seemed hollow and passé when Zuckerberg was practically begging for regulation.
The Internet is growing in importance around the world in people’s lives, and I think that it is inevitable that there will need to be some regulation. — Mark Zuckerberg during his appearance before the U.S. House Energy and Commerce Committee on April 11, 2018
Cynics note Facebook would certainly welcome new regulation that could lock in its current dominant status while setting a higher bar for other developing social media companies. But Zuckerberg’s statements are more likely an admission that the responsibilities of taking on a broad social agenda are either too burdensome or too tempting for abuse.
The question for legislative bodies is where to set the fulcrum that balances trust in a cultivating business environment for entrepreneurs against regulatory oversight that protects fundamental individual rights. Historically, regulation addresses the externalities that may not be the primary concerns of companies. Now legislators must address the risks associated with companies that offer what are perceived to be public platforms for conducting business and social interaction.
The current top concerns for western democracies are (1) the use of personal information, and (2) the control of content. The first is an issue of privacy rights. The second tests the boundaries of free speech rights.
How the EU Got the First Step Right with GDPR
Back in 1995, the public Internet was on the rise as a new frontier that left many questioning the opportunities and the risks it posed. As a consultant with International Data Corporation at the time, I had a contract with the European Commission to report to the Directorate-General for Communications Networks, Content and Technology on developing technology trends as they considered setting policy directions for the newly-formed European Union.
While presenting the latest activities of Netscape, Yahoo, Compuserve and industry exchanges, Vlassis Venner (my best recollection of the Director-General’s name) interrupted me with a quizzical look and asked, “John, what are we going to do about this?”
I was dumbfounded by the question, multiple thoughts running through my head: Information wants to be free. Does he want to stop this progress? I am glad I live in the U.S., the land of the free, the bold, and the brave, who will embrace this change with a sense of opportunity and progress.
After two decades, I realize I owe Mr. Venner an apology for my thoughts. I had assumed he, as a government power broker, sought to impose control over information and manage who received and benefited from the flow of data. Now I know he may have been more prescient and concerned about privacy rights for individuals.
The EU’s GDPR (General Data Protection Regulation) is the first major legislation and regulation for democratic countries in the digital economy that empowers the individual to protect their personal information and right to privacy. The EU got it right by focusing on a core principle for individual protection rather than imposing restrictions and controls on companies.
I am sorry Mr. Venner.
Facebook Triggers the Next Phase of Individual Protections
After the high-profile appearance of Mark Zuckerberg before both House and Senate committees, the United States is certain to follow GDPR with its own legislation to protect privacy rights. But the Facebook case also calls for more serious discussion and a second phase of legislative action to protect a second pillar of individual rights — free speech.
Zuckerberg assertively declares that his company has a higher duty to ensure the connections and communications on the Facebook platform are positive, yet he leaves "positive" undefined and outlines no clear approach to managing the risks that come with this responsibility.
To date, the U.S. Congress has chosen to let companies define and manage the gray lines of what is considered acceptable Internet content. This forces the battlegrounds of political discussions and social outrage into individual companies where the loudest voices, individual dictates, or group think decision biases may endanger broader concepts of free speech.
How will Microsoft ban offensive language across its product lines? How will Twitter and Snapchat identify and control hate speech and bullying?
The United States currently contends with outrage over free speech zones, political speech deemed hate speech, dog whistles, and forced group identity. If the changing perceptions of what constitutes acceptable discourse in the U.S. are so unruly among 326 million people, how can Facebook possibly take on the risks of defining, identifying, and managing content for over two billion users in more than 200 countries? Stating an altruistic mission to enable positive connections is one thing. Taking on the risks of controlling Russian meddlers in US and European elections, Diamond and Silk in the US, anti-Rohingya propaganda in Myanmar, and German joke tellers targeting foreign leaders is another.
Who will set the risk management standards? The U.S. public turns to Congress. Congress defers to companies. Companies rely on their users.
In response to Senator Mazie Hirono’s question about racially-targeted real estate advertising, Zuckerberg noted “most of the enforcement today is still that our community flags issues for us when that comes up.”
In response to Senator Thune's questions about the steps Facebook takes when evaluating the line between legitimate political discourse and hate speech, Zuckerberg offered this statement:
So, from the beginning of the company in 2004 — I started in my dorm room; it was me and my roommate. We didn't have A.I. technology that could look at the content that people were sharing. So — so we basically had to enforce our content policies reactively.
People could share what they wanted, and then, if someone in the community found it to be offensive or against our policies, they'd flag it for us, and we'd look at it reactively. Now, increasingly, we're developing A.I. tools that can identify certain classes of bad activity proactively and flag it for our team at Facebook.
Later, he notes:
Hate speech — I am optimistic that, over a 5 to 10-year period, we will have A.I. tools that can get into some of the nuances — the linguistic nuances of different types of content to be more accurate in flagging things for our systems.
But, today, we're just not there on that. So a lot of this is still reactive. People flag it to us. We have people look at it. We have policies to try to make it as not subjective as possible. But, until we get it more automated, there is a higher error rate than I'm happy with.
Any concerns here?
The added risks of taking on a socially conscious mission statement are not limited to digital businesses. Starbucks states its mission is "to inspire and nurture the human spirit – one person, one cup and one neighborhood at a time." The company was comfortable with that statement until it had to shut down its stores for a full day of employee sensitivity training.
A new breed of socially conscious companies is taking hold. Intentions beyond financial viability are admirable, but they must come with a recognition of the responsibility to integrate risk management principles that proactively anticipate and manage the weighty risks involved. The missteps of Facebook and Starbucks are two examples of risk management failure that show building an effective risk culture is part of promoting a social agenda.
Government has a role in ensuring companies are prepared to meet these challenges not by imposing controls on companies but by reinforcing individual rights. GDPR is the first phase that emphasizes protections for personal information and privacy rights. The U.S. can use the Facebook discussion to shift into a second phase that better defines and sets protections for free speech.