Facebook is preparing to pay a multi-billion-dollar fine and is dealing with ongoing ire from all corners over its user privacy lapses, the viral spread of lies during elections, and the delivery of ads in ways that skew along gender and racial lines. To grapple with these problems (and to get ahead
of the bad PR they created), Chief Executive Mark Zuckerberg has proposed that
governments get together and set some laws and regulations for Facebook to
follow.

But Zuckerberg should be aiming higher. The question isn’t just what rules a reformed Facebook should follow. The bigger question is what all
the big tech companies’ relationships with users should look like. The
framework needed can’t be created out of whole cloth just by new government
regulation; it has to be grounded in professional ethics.

Doctors and lawyers, as they became increasingly professionalized
in the 19th century, developed formal ethical codes that became the seeds of
modern-day professional practice. Tech-company professionals should follow
their example. An industry-wide code of ethics could guide companies through
the big questions of privacy and harmful content.

State governments made compliance with these codes mandatory to
get a license to practice medicine or law. Lawyers’ ethics require that they
meet obligations — sometimes called “fiduciary” duties — of confidentiality, loyalty and care. Modern-day medical ethics are framed to include autonomy (i.e., respect for individual self-determination), “non-maleficence”
(Hippocrates’ “first, do no harm”), beneficence and justice — concepts that
reflect the same kinds of values.

Drawing on Yale law professor Jack Balkin’s concept of “information fiduciaries,” I have proposed that the tech companies
develop an industry-wide code of ethics that they can unite behind in
implementing their censorship and privacy policies — as well as any other
information policies that may affect individuals.

Just like legal and medical practitioners, tech companies are
knowledge specialists, so it makes sense to obligate them to develop standards
of good ethical practice for gathering and using data about you. (They can
begin by Googling medical and legal ethics codes!) An ethical code also doesn’t
require legislation or regulation to be put into place; the companies could
adopt it on their own. But it would be no surprise if a well-developed ethical
code ended up being backed by law and regulation. That’s what ultimately
happened with doctors and lawyers.

The ethics of the “information fiduciary” have to apply to all people, not just a company’s customers or subscribers. (Facebook, for example, collects data on non-Facebook users, and to some degree can’t help doing so.) Even if companies can’t stop gathering such data, they certainly can be obligated to treat users and non-users alike. They should also be duty-bound to treat them with care (don’t allow individuals’ data to be used in ways that harm them; don’t serve them content or ads that are false or misleading), loyalty (don’t put company interests ahead of the well-being of the individuals whose data you hold), and, perhaps most important, confidentiality. That last duty means, at a minimum: Don’t share individuals’ data with other companies without their knowing, specific consent. And don’t share individuals’ data with governments unless those governments have sought the information in ways consistent with international rights guarantees and norms of due process.

An industry-wide — and, ideally, society-wide — recognition of the
tech companies’ duties of confidentiality, care and loyalty has another benefit. It can give the companies legal standing to fight for user interests in the face of government demands for individuals’ private data. More broadly, it might also give the companies standing to fight censorship of
content that individuals have the right to produce, to seek and to read, as
allowed by the United Nations Universal Declaration of Human Rights and other
international rights instruments.

But the companies shouldn’t stop with building an ethical
framework. They should also convene forums (my preferred model is the
U.N.-backed Internet Governance Forum) through which governments, communities,
individuals and other stakeholders can raise ethical criticisms and concerns
directly with the companies and one another. These forums should be global, with
low barriers to participation. (It follows that any tech-ethics framework
should be open to amendment based on critical feedback from these forums or
from elsewhere.) At worst, such a forum allows stakeholders to let off steam;
at best, it can enable people who care about the internet and its services to
identify emerging problems and solutions quickly.

None of this will end criticism of the big internet companies.
When they remove content — abiding by either law or their own content policies
— they invariably will get three reactions: You censored too much! You didn’t
censor enough! You censored the wrong stuff!

Still, it’s better to let the companies try to keep their services from being overrun with informational garbage. If we’re smart, we’ll recognize that they’ll never be perfect, or even perfectly consistent.

It’s fashionable to suppose that all tech companies are amoral and
selfish — and certainly some have given us good reason to think so. But I think
it’s useful to begin by assuming that they want to do good and to act in good faith. Most of their missteps, I believe, are grounded not in sociopathy or malice but in the arrogance that springs from their conviction that their own intentions are beneficent.

That arrogance has been shaken by the “techlash,” which is all to the good. If the newly chastened companies like Facebook are now ready to do whatever it takes to “friend” us, drafting and adhering to a code of ethics isn’t asking too much. Commit to that, and I’ll accept the request.

