November 06, 2017

We think it’s a harmless way to stay in touch with friends. The truth is far more destructive. Award-winning writer John Lanchester investigates how Mark Zuckerberg’s Harvard jape grew into the biggest surveillance enterprise in the history of mankind.

At the end of June, Mark Zuckerberg announced that Facebook had hit a new level: 2bn monthly active users. That number, the company’s preferred “metric” when measuring its own size, means 2bn different people used Facebook in the preceding month. It is hard to grasp just how extraordinary that is. Bear in mind that thefacebook — its original name — was launched exclusively for Harvard students in 2004. No human enterprise, no new technology or utility or service, has ever been adopted so widely so quickly. The speed of uptake far exceeds that of the internet itself, let alone ancient technologies such as television, cinema or radio.

Also amazing: as Facebook has grown, its users’ reliance on it has also grown. The increase in numbers is not, as one might expect, accompanied by a lower level of engagement. More does not mean worse — or worse, at least, from Facebook’s point of view. On the contrary. In the far distant days of October 2012, when Facebook hit 1bn users, 55% of them were using it every day.

At 2bn, 66% are. Its user base is growing at 17% a year — which you’d have thought impossible for a business already so enormous. Facebook’s biggest rival for logged-in users is YouTube, owned by its deadly rival Alphabet (the company formerly known as Google), in second place with 1.5bn monthly users. Three of the next four biggest apps, or services, or whatever one wants to call them, are WhatsApp, Messenger and Instagram, with 1.2bn, 1.2bn, and 700m users respectively.

Those three entities have something in common: they are all owned by Facebook. No wonder the company is the fourth most valuable in the world, with a market capitalisation of $505bn.

Zuckerberg’s news about Facebook’s size came with an announcement that may or may not prove to be significant. He said that the company was changing its “mission statement”, its version of the canting pieties beloved of corporate America. Facebook’s mission used to be “making the world more open and connected”.

A non-Facebooker reading that is likely to ask: why? Connection is presented as an end in itself, an inherently and automatically good thing. Is it, though? Flaubert was sceptical about trains because he thought (in Julian Barnes’s paraphrase) that “the railway would merely permit more people to move about, meet and be stupid together”. You don’t have to be as misanthropic as Flaubert to wonder if something similar isn’t true about connecting people on Facebook.

For instance, Facebook is generally agreed to have played a big, perhaps even a crucial, role in the election of Donald Trump. The benefit to humanity is not clear. This thought, or something like it, seems to have occurred to Zuckerberg, because the new mission statement spells out a reason for all this connectedness. It says that the new mission is to “give people the power to build community and bring the world closer together”.

Hmm. Alphabet’s mission statement, “to organise the world’s information and make it universally accessible and useful”, came accompanied by the maxim “Don’t be evil”, which has been the source of a lot of ridicule: Steve Jobs called it “bullshit”. Which it is, but it isn’t only bullshit. Plenty of companies, indeed entire industries, base their business model on being evil. This is especially an issue in the world of the internet. Internet companies are working in a field that is poorly understood by customers and regulators. The stuff they’re doing, if they’re any good at all, is by definition new. In that overlapping area of novelty and ignorance and unregulation, it’s well worth reminding employees not to be evil, because if the company succeeds and grows, plenty of chances to be evil are going to come along.

Google and Facebook have both been walking this line from the beginning. Their styles of doing so are different. An internet entrepreneur I know has had dealings with both companies. “YouTube knows they have lots of dirty things going on and are keen to try and do some good to alleviate it,” he told me. I asked what he meant by “dirty”. “Terrorist and extremist content, stolen content, copyright violations. That kind of thing. But Google, in my experience, knows that there are ambiguities, moral doubts, around some of what they do, and at least they try to think about it. Facebook just doesn’t care. When you’re in a room with them you can tell. They’re” — he took a moment to find the right word — “scuzzy.”

That might sound harsh. There have, however, been ethical problems and ambiguities about Facebook since the moment of its creation. The scene is as it was recounted in The Social Network, Aaron Sorkin’s movie about the birth of Facebook. While in his first year at Harvard, Zuckerberg suffered a romantic rebuff. Who wouldn’t respond to this by creating a website where undergraduates’ pictures are placed side by side so that users of the site can vote for the one they find more attractive?

Jesse Eisenberg’s brilliant portrait of Zuckerberg in The Social Network is misleading, as Antonio Garcia Martinez, a former Facebook manager, argues in Chaos Monkeys, his entertainingly caustic book about his time at the company. The movie Zuckerberg is a highly credible character, a computer genius located somewhere on the autistic spectrum with minimal to non-existent social skills. But that’s not what the man is really like. In real life, Zuckerberg was studying for a degree with a double concentration in computer science and — this is the part people tend to forget — psychology.

People on the spectrum have a limited sense of how other people’s minds work; they lack a “theory of mind”, it has been said. Zuckerberg, not so much. He is very well aware of how people’s minds work and, in particular, of the social dynamics of popularity and status. The initial launch of Facebook was limited to people with a Harvard email address; the intention was to make access to the site seem exclusive and aspirational. (And also to control site traffic so that the servers never went down. Psychology and computer science, hand in hand.)

Then it was extended to other elite campuses in the US. When it launched in the UK, it was limited to Oxbridge and the LSE. The idea was that people wanted to look at what other people like them were doing, to see their social networks, to compare, to boast and show off, to give full rein to every moment of longing and envy, to keep their noses pressed against the sweet-shop window of others’ lives.

This focus attracted the attention of Facebook’s first external investor, the now notorious Silicon Valley billionaire Peter Thiel. Again, The Social Network gets it right: Thiel’s $500,000 investment in 2004 was crucial to the success of the company. But there was a particular reason Facebook caught Thiel’s eye, rooted in a byway of intellectual history. In the course of his studies at Stanford — he majored in philosophy — Thiel became interested in the work of the US-based French philosopher René Girard. Girard’s big idea was something he called “mimetic desire”.

Human beings are born with a need for food and shelter. Once these fundamental necessities of life have been acquired, we look around us at what other people are doing, and wanting, and we copy them. In Thiel’s words, the idea is “that imitation is at the root of all behaviour”. Thiel said: “Social media proved to be more important than it looked because it’s about our natures.” We are keen to be seen as we want to be seen, and Facebook is the most popular tool humanity has ever had with which to do that.

The view of human nature implied by these ideas is pretty dark. If all people want to do is go and look at other people so that they can compare themselves to them, then Facebook doesn’t really have to take too much trouble over humanity’s welfare, since all the bad things that happen to us are things we are doing to ourselves. For all the corporate uplift of its mission statement, Facebook is a company whose essential premise is misanthropic. It is perhaps for that reason that Facebook, more than any other company of its size, has a thread of malignity running through its story.

The high-profile, tabloid version of this has come in the form of incidents such as the live-streaming of rapes, suicides, murders and cop killings. But this is one of the areas where Facebook seems, to me, relatively blameless. People live-stream these terrible things over the site because it has the biggest audience; if Snapchat or Periscope were bigger, they’d be doing it there instead.

In many other areas, however, the site is far from blameless. The highest-profile recent criticisms of the company stem from its role in Trump’s election. There are two components to this, one of them implicit in the nature of the site, which has an inherent tendency to fragment and atomise its users into like-minded groups. The mission to “connect” turns out to mean, in practice, connect with people who agree with you. We can’t prove just how dangerous these “filter bubbles” are to our societies, but it seems clear that they are having a severe impact on our increasingly fragmented polity. Our conception of “we” is becoming narrower.

This fragmentation created the conditions for the second strand of Facebook’s culpability in the Anglo-American political disasters of the past year. The portmanteau terms for these developments are “fake news” and “post-truth”, and they were made possible by the retreat from a general agora of public debate into separate ideological bunkers.

In the open air, fake news can be debated and exposed; on Facebook, if you aren’t a member of the community being served the lies, you’re quite likely never to know that they are in circulation. It’s crucial to this that Facebook has no financial interest in telling the truth. No company better exemplifies the internet-age dictum that if the product is free, you are the product. Facebook’s customers aren’t the people who are on the site: its customers are the advertisers who use its network and who relish its ability to direct ads to receptive audiences. Why would Facebook care if the news streaming over the site is fake? Its interest is in the targeting, not in the content.

Fake news is not, as Facebook has acknowledged, the only way it was used to influence the outcome of the 2016 presidential election. On January 6, 2017, the director of national intelligence in the US published a report saying that the Russians had waged an internet disinformation campaign to damage Hillary Clinton and help Trump. “Moscow’s influence campaign followed a Russian messaging strategy that blends covert intelligence operations — such as cyber-activity — with overt efforts by Russian government agencies, state-funded media, third-party intermediaries, and paid social media users or ‘trolls’,” the report said.

In September, details of what the Russians had done started coming out. Kremlin-connected propaganda outfits bought $100,000 of Facebook advertising and used it to target 10m Americans. The strategy was much sneakier than just taking out ads saying “Vote Trump”: the ads focused instead on exacerbating existing social and political divisions inside America. The Russians created pages to spread inflammatory content about border security, black activism and benefit fraud, among other topics.

There was fake news about Muslim men claiming benefits for multiple wives; there was also a staged scene from New York’s Union Square, in which a Muslim man pretended to be assaulted by a bystander (actually an actor), in order to see whether passers-by intervened. There was a ton of anti-Hillary stuff, too, of course — but the cunning thing was the way it stoked the anger on both sides. The evidence was so clear that even Zuckerberg had to acknowledge it. “After the election, I made a comment that I thought the idea misinformation on Facebook changed the outcome of the election was a crazy idea,” he said last month. “Calling that crazy was dismissive and I regret it. This is too important an issue to be dismissive.”

The company is promising to treat this set of problems as seriously as it treats other problems such as malware, account hacking and spam. We’ll see. One man’s fake news is another’s truth-telling, and Facebook works hard at avoiding responsibility for the content on its site — except for sexual content, about which it is super-stringent. Nary a nipple on show. It’s a bizarre set of priorities, which only makes sense in an American context, where any whiff of explicit sexuality would immediately give the site a reputation for unwholesomeness. Photos of breastfeeding women are banned and rapidly get taken down. Lies and propaganda are fine.

The key to understanding this is to think about what advertisers want: they don’t want to appear next to pictures of breasts because it might damage their brands, but they don’t mind appearing alongside lies because the lies might be helping them find the consumers they’re trying to target. In Move Fast and Break Things, his polemic against the “digital-age robber barons”, Jonathan Taplin points to an analysis on Buzzfeed: “In the final three months of the US presidential campaign, the top-performing fake election news stories on Facebook generated more engagement than the top stories from major news outlets such as The New York Times, The Washington Post, Huffington Post, NBC News and others.” This doesn’t sound like a problem Facebook will be in any hurry to fix.

The fact is that fraudulent content, and stolen content, are rife on Facebook, and the company doesn’t really mind, because it isn’t in its interest to mind. An illuminating YouTube video from Kurzgesagt, a German outfit that makes high-quality short explanatory films, notes that in 2015, 725 of Facebook’s top 1,000 most-viewed videos were stolen from the people who created them. This is another area where Facebook’s interests contradict society’s. We may collectively have an interest in sustaining creative and imaginative work in many different forms and on many platforms. Facebook doesn’t. It has two priorities, as Martinez explains in Chaos Monkeys: growth and monetisation. It simply doesn’t care where the content comes from.

Zuckerberg himself has spoken up on this issue, in a Facebook post addressing the question of “Facebook and the election”. “Of all the content on Facebook, more than 99% of what people see is authentic,” he claimed. “Only a very small amount is fake news and hoaxes.” More than one Facebook user pointed out that in their own news feed, Zuckerberg’s post about authenticity ran next to fake news. In one case, the fake story pretended to be from the TV sports channel ESPN. When it was clicked on, it took users to an ad selling a diet supplement.

A neutral observer might wonder if Facebook’s attitude to content creators is sustainable. Facebook needs content, obviously, because that’s what the site consists of: content that other people have created. It’s just that it isn’t too keen on anyone apart from Facebook making any money from that content. Over time, that attitude is profoundly destructive to the creative and media industries. Access to an audience — that unprecedented 2bn-and-counting people — is a wonderful thing, but Facebook isn’t in any hurry to help you make money from it. If the content providers all eventually go broke, well, that might not be too much of a problem. There are, for now, lots of willing providers: anyone on Facebook is, in a sense, working for Facebook, adding value to the company.

Taplin has worked in academia and in the film industry. The reason he feels so strongly about these questions is that he started out in the music business, as a tour manager for acts including Bob Dylan and the Band, and was on hand to watch the business being destroyed by the internet. What had been a $20bn industry in the US in 1999 was a $7bn industry 15 years later. He saw musicians who had made a good living become destitute. That didn’t happen because people had stopped listening to their music — more people than ever were listening to it — but because music had become something people expected to be free. YouTube is the biggest source of music in the world, playing billions of tracks annually, but in 2015 musicians earned less from it and from its ad-supported rivals than they earned from sales of vinyl. Not CDs and recordings in general: vinyl.

Something similar has happened in the world of journalism. Facebook is, in essence, an advertising company that is indifferent to the content on its site except insofar as it helps to target and sell advertisements. A version of Gresham’s law is at work, in which fake news, which gets more clicks and is free to produce, drives out real news, which often tells people things they don’t want to hear, and is expensive to produce. In addition, Facebook uses an extensive set of tricks to increase its traffic and the revenue it makes from targeting ads, at the expense of the news-making institutions whose content it hosts. Its news feed directs traffic at you based not on your interests, but on how to make the maximum amount of advertising revenue from you.

In the early years of Facebook, Zuckerberg was much more interested in the growth side of the company than in the monetisation. That changed when Facebook went in search of its big payday at the initial public offering (IPO). This is a huge turning point for any start-up: in the case of many tech-industry workers, the hope and expectation associated with “going public” is what attracted them to their firm in the first place, and/or what has kept them glued to their workstations. It’s the point where the notional money of an early-days business turns into the real cash of a public company. When the time came for the IPO, Facebook needed to turn from a company with amazing growth to one that was making amazing money. It was already making some, thanks to its sheer size, but not enough to guarantee a truly spectacular valuation on launch. It was at this stage that the question of how to monetise Facebook got Zuckerberg’s full attention. It’s interesting, and to his credit, that he hadn’t put too much focus on it before — perhaps because he isn’t particularly interested in money per se. But he does like to win.

The solution was to take the huge amount of information Facebook has about its “community” and use it to let advertisers target ads with a specificity never known before, in any medium. Martinez: “It can be demographic in nature (eg 30- to 40-year-old females), geographic (people within five miles of Sarasota, Florida), or even based on Facebook profile data (do you have children; ie, are you in the mommy segment?).”

That was the first part of the monetisation process for Facebook, when it turned its gigantic scale into a machine for making money. The company offered advertisers an unprecedentedly precise tool for targeting their ads at particular consumers. (Particular segments of voters too can be targeted with complete precision. One instance from 2016 was an anti-Clinton ad repeating a notorious speech she made in 1996 on the subject of “super-predators”. The ad was sent to African-American voters in areas where the Republicans were trying, successfully, as it turned out, to suppress the Democrat vote. Nobody else saw the ads.)

The second big shift around monetisation came in 2012 when internet traffic began to switch away from desktop computers towards mobile devices. If you do most of your online reading on a desktop, you are in a minority.

The switch was a potential disaster for all businesses that relied on internet advertising, because people don’t much like mobile ads, and were far less likely to click on them than on desktop ads. Facebook solved the problem by means of a technique called “onboarding”. As Martinez explains it, the best way to think about this is to consider our various kinds of name and address.

“For example,” he writes, “if Bed, Bath and Beyond wants to get my attention with one of its wonderful 20%-off coupons, it calls out:
Antonio Garcia Martinez, 1 Clarence Place #13, San Francisco, CA 94107.

If it wants to reach me on my mobile device, my name there is:
38400000-8cf0-11bd-b23e-10b96e40000d

On my laptop, my name is this:
07J6yJPMB9juTowar.AWXGQnGPA1MCmThgb9wN4vLoUpg.BUUtWg.rg.FTN.0.AWUxZtUf

“This is the content of the Facebook retargeting cookie, which is used to target ads at you based on your mobile browsing,” Martinez continues. “Each of these keys is associated with a wealth of our personal behaviour data: every website we’ve been to, many things we’ve bought in physical stores, and every app we’ve used and what we did there … The biggest thing going on in marketing right now, what is generating tens of billions of dollars in investment and endless scheming inside the bowels of Facebook, Google, Amazon and Apple, is how to tie these different sets of names together, and who controls the links.”
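Martinez’s point — that one person goes by several machine “names”, and that the money is in linking them — can be sketched in code. The class below is a hypothetical illustration, not Facebook’s actual system; every identifier string and method name is invented.

```python
# Hypothetical sketch of "onboarding": linking one person's several
# identifiers (postal identity, mobile advertising ID, browser cookie)
# into a single profile that ads can be targeted against.

class IdentityGraph:
    def __init__(self):
        self.links = {}     # any known identifier -> canonical profile id
        self.profiles = {}  # canonical profile id -> set of identifiers

    def link(self, *identifiers):
        """Merge all given identifiers into one profile."""
        # Reuse an existing profile if any identifier is already known.
        profile_id = None
        for ident in identifiers:
            if ident in self.links:
                profile_id = self.links[ident]
                break
        if profile_id is None:
            profile_id = f"profile-{len(self.profiles)}"
            self.profiles[profile_id] = set()
        for ident in identifiers:
            self.links[ident] = profile_id
            self.profiles[profile_id].add(ident)
        return profile_id

    def same_person(self, a, b):
        """True if both identifiers resolve to the same profile."""
        return self.links.get(a) is not None and self.links.get(a) == self.links.get(b)

graph = IdentityGraph()
postal = "Antonio Garcia Martinez, 1 Clarence Place #13"
mobile = "38400000-8cf0-mobile-device-id"
cookie = "07J6yJPMB9ju-browser-cookie"
graph.link(postal, mobile)   # offline data broker ties address to phone
graph.link(mobile, cookie)   # cross-device match ties phone to laptop
print(graph.same_person(postal, cookie))  # True
```

The commercial value lies entirely in the `link` step: each new identifier tied to the profile brings its own trail of behaviour data with it.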

Facebook already had a huge amount of information about people and their social networks and their professed likes and dislikes. After waking up to the importance of monetisation, it added to its own data a huge new store of data about offline, real-world behaviour, acquired through partnerships with big companies such as Experian, which have been monitoring consumer purchases for decades via their relationships with direct-marketing firms, credit card companies and retailers.

There doesn’t seem to be a one-word description of these firms: “consumer credit agencies” or something similar about sums it up. Their reach is much broader than that might make it sound, though. Experian says its data is based on more than 850m records and claims to have information on 49m UK individuals living in 26m households.

These firms know all there is to know about your name and address, your income and level of education, your relationship status, plus everywhere you’ve ever paid for anything with a card. Facebook could now put your identity together with the unique device identifier on your phone.

It puts that together with the rest of your online activity: not just every site you’ve ever visited, but every click you’ve ever made. All this information is used to sell you things via online ads.

The ads work on two models. In one of them, advertisers ask Facebook to target consumers from a particular demographic. But Facebook also delivers ads via a process of online auctions, which happen in real time whenever you click on a website. Because every website you’ve ever visited (more or less) has planted a cookie on your web browser, when you go to a new site, there is a real-time auction, in millionths of a second, to decide what your eyeballs are worth and what ads should be served to them, based on what your interests, and income level and whatnot, are known to be. This is the reason ads have that disconcerting tendency to follow you around, so that you look at a new telly or a pair of shoes or a holiday destination, and they’re still turning up on every site you visit weeks later. This was how, by chucking talent and resources at the problem, Facebook was able to turn mobile from a potential revenue disaster to a great hot steamy geyser of profit.
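The real-time auction described above can be illustrated with a toy second-price model, in which the winner pays the runner-up’s bid. The bidder names and bid values here are invented; real ad exchanges add budgets, floor prices and quality scores, and run across many parties in milliseconds.

```python
# Toy sketch of a real-time ad auction (simplified second-price rules):
# the highest bidder wins the ad slot but pays the second-highest bid.

def run_auction(bids):
    """bids: dict of advertiser -> bid for this user's attention.
    Returns (winner, price)."""
    if not bids:
        return None, 0.0
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top_bid = ranked[0]
    # Sole bidder pays their own bid; otherwise the runner-up's bid.
    price = ranked[1][1] if len(ranked) > 1 else top_bid
    return winner, price

# A user profiled as "recently browsed headphones, affluent" arrives:
bids = {"headphone_brand": 2.40, "travel_site": 1.10, "tv_retailer": 0.85}
winner, price = run_auction(bids)
print(winner, price)  # headphone_brand 1.1
```

The profile built from your browsing history is what sets those bids, which is why the headphones you looked at once keep outbidding everyone else for your eyeballs.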

What this means is that even more than it is in the advertising business, Facebook is in the surveillance business. Facebook, in fact, is the biggest surveillance-based enterprise in the history of mankind. It knows far, far more about you than the most intrusive government has ever known about its citizens. It’s amazing that people haven’t really understood this about the company. I’ve spent time thinking about Facebook, and the thing I keep coming back to is that its users don’t realise what it is the company does.

What Facebook does is watch you, and then use what it knows about you and your behaviour to sell ads. I’m not sure there has ever been a more complete disconnect between what a company says it does — “connect”, “build communities” — and the commercial reality. Note that the company’s knowledge about its users isn’t used merely to target ads, but to shape the flow of news to them.

Since there is so much content posted on the site, the algorithms used to filter and direct that content are the thing that determines what you see: people think their news feed is largely to do with their friends and interests, and it sort of is, with the crucial proviso that it is their friends and interests as mediated by the commercial interests of Facebook. Your eyes are directed towards the place where they are most valuable for Facebook.

I’m left wondering what will happen when and if this $500bn penny drops. As Tim Wu shows in his energetic and original book, The Attention Merchants, there is a suggestive pattern here: that a boom is more often than not followed by a backlash, that a period of explosive growth triggers a public and sometimes legislative reaction. Wu’s first example is the draconian anti-poster laws introduced in early 20th-century Paris (and still in force — one reason the city is, by contemporary standards, undisfigured by ads).

Facebook seems vulnerable to a backlash. One place it is likely to begin is in the core area of its business model: ad-selling. The advertising it sells is “programmatic”, that is, determined by computer algorithms that match the customer to the advertiser and deliver ads accordingly, via targeting and/or online auctions.

The problem with this, from the customer’s point of view — remember, the customer here is the advertiser, not the Facebook user — is that a lot of the clicks on these ads are fake. There is a mismatch of interests here. Facebook wants clicks, because that’s how it gets paid: when ads are clicked on. But what if the clicks aren’t real, but are instead automated clicks from fake accounts run by computer bots?

This is a well-known problem, which particularly affects Google, because it’s easy to set up a site, allow it to host programmatic ads, then set up a bot to click on those ads, and collect the money that comes rolling in. On Facebook, the fraudulent clicks are more likely to be from competitors trying to drive each other’s costs up.

The industry publication Ad Week estimates the annual cost of click fraud at $7bn. One single fraud site, Methbot, whose existence was exposed at the end of last year, uses a network of hacked computers to generate $3m-$5m of fraudulent clicks every day. Estimates of fraudulent traffic’s market share are variable, with some guesses coming in at about 50%; some website owners say their own data indicates a fraudulent-click rate of 90%. This is by no means entirely Facebook’s problem, but it isn’t hard to imagine how it could lead to a big revolt against “ad tech”, as this technology is generally known, on the part of the companies who are paying for it.
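A crude version of click-fraud detection — flagging accounts that click implausibly often — might look like the sketch below. The threshold and log data are invented; production systems weigh many more signals, such as IP ranges, timing patterns and device fingerprints.

```python
# Naive sketch of click-fraud detection: flag any account whose click
# rate in a single hour exceeds a plausibility threshold.

from collections import Counter

def suspicious_clickers(click_log, max_clicks_per_hour=60):
    """click_log: list of (account_id, hour) tuples for ad clicks."""
    counts = Counter(click_log)
    return sorted({acct for (acct, hour), n in counts.items()
                   if n > max_clicks_per_hour})

log = ([("bot-7", 14)] * 500      # a bot hammering ads at 2pm
       + [("user-1", 14)] * 3     # ordinary users
       + [("user-2", 15)] * 5)
print(suspicious_clickers(log))  # ['bot-7']
```

The arms race comes from the fact that fraudsters know roughly where such thresholds sit, and spread their clicks across hacked machines — as Methbot did — to stay under them.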

I’ve heard academics in the field say that there is a form of corporate groupthink in the world of the big buyers of advertising, who are currently responsible for directing large parts of their budgets towards Facebook. That mindset could change. Also, many of Facebook’s metrics are tilted to catch the light at the angle that makes them look shiniest. A video is counted as “viewed” on Facebook if it runs for three seconds, even if the user is scrolling past it in their news feed and even if the sound is off. If counted by the techniques that are used to count television audiences, many Facebook videos with hundreds of thousands of “views” would have no viewers at all.
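The gap between Facebook’s three-second definition of a “view” and a television-style audience count can be made concrete with a toy calculation. The session data and the stricter threshold below are illustrative only.

```python
# Sketch of why the metric's definition matters: the same watch data
# counted under a 3-second rule versus a stricter 30-second, sound-on rule.

def count_views(sessions, min_seconds, require_sound):
    """sessions: list of (seconds_watched, sound_on) pairs."""
    return sum(1 for watched, sound_on in sessions
               if watched >= min_seconds and (sound_on or not require_sound))

# Users scrolling a feed: most linger a few silent seconds, one watches.
sessions = [(3, False), (4, False), (35, True), (2, False), (31, False)]
print(count_views(sessions, 3, False))  # 4 -> the headline "views" figure
print(count_views(sessions, 30, True))  # 1 -> a TV-style audience count
```

Same behaviour, same data; the fourfold difference is produced entirely by the definition the counter chooses.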

A customer revolt could overlap with a backlash from regulators and governments. Google and Facebook have what amounts to a monopoly on digital advertising. That monopoly power is becoming more and more important as advertising spend migrates online. Between them, they have already destroyed large sections of the newspaper industry. Facebook has done a huge amount to lower the quality of public debate and to ensure that it is easier than ever before to tell what Hitler approvingly called “big lies” and broadcast them to a big audience. The company has no business need to care about that, but it is the kind of issue that could attract the attention of regulators.

That isn’t the only external threat to the Google/Facebook duopoly. The US attitude to antitrust law was shaped by Robert Bork, the judge whom Reagan nominated for the Supreme Court, but the Senate failed to confirm. Bork’s most influential legal stance came in the area of competition law. He promulgated the doctrine that the only form of anti-competitive action that matters concerns the prices paid by consumers. His idea was that if the price is falling that means the market is working, and no questions of monopoly need be addressed. This philosophy still shapes regulatory attitudes in the US and it’s the reason Amazon, for instance, has been left alone by regulators despite the manifestly monopolistic position it holds in the world of online retail, books especially.

The big internet enterprises seem invulnerable on these narrow grounds. Or they do until you consider the question of individualised pricing. The huge data trail we all leave behind as we move around the internet is increasingly used to target us with prices that aren’t like the tags attached to goods in a shop. On the contrary, they are dynamic, moving with our perceived ability to pay. Four researchers based in Spain studied the phenomenon by creating automated personas to behave as if, in one case, “budget conscious” and in another “affluent”, and then checking to see if their different behaviour led to different prices.

It did: a search for headphones returned a set of results that were, on average, four times more expensive for the affluent persona. A hotel-booking site charged higher fares to the affluent consumer. In general, the location of the searcher caused prices to vary by as much as 166%. So in short, yes, personalised prices are a thing, and the ability to create them depends on tracking us across the internet. That seems, to me, a prima facie violation of the American post-Bork monopoly laws, focused as they are entirely on price. It’s sort of funny, and also sort of grotesque, that an unprecedentedly huge apparatus of consumer surveillance is fine, apparently, but an unprecedentedly huge apparatus of consumer surveillance that results in some people paying higher prices may well be illegal.
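The researchers’ method — automated personas with different browsing histories querying the same product and comparing the quotes — can be sketched as follows. The pricing rules, personas and markups are invented for illustration; they are not the actual logic of any retailer or of the study.

```python
# Hypothetical sketch of persona-based price testing: a toy pricing
# engine marks prices up for "affluent" signals, and two automated
# personas query it with the same product to expose the difference.

def quoted_price(base_price, persona):
    """Toy pricing engine: markup driven by browsing and location signals."""
    markup = 1.0
    if "luxury" in persona.get("recent_sites", []):
        markup *= 1.5   # browsing history suggests ability to pay
    if persona.get("location") == "high-income-area":
        markup *= 1.2   # geography suggests ability to pay
    return round(base_price * markup, 2)

budget = {"recent_sites": ["coupons"], "location": "average-area"}
affluent = {"recent_sites": ["luxury"], "location": "high-income-area"}

print(quoted_price(100.0, budget))    # 100.0
print(quoted_price(100.0, affluent))  # 180.0
```

The experimental design matters more than the engine: because only the persona’s trail differs between queries, any price gap can be attributed to tracking.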

Perhaps the biggest potential threat to Facebook is that its users might go off it. Two billion monthly active users is a lot of people, and the “network effects” — the scale of the connectivity — are, obviously, extraordinary.

But there are other internet companies that connect people on the same scale — Snapchat has 173m daily users, Twitter 328m monthly users — and, as we’ve seen in the disappearance of MySpace, the one-time leader in social media, when people change their minds about a service, they can go off it hard and fast.

For that reason, were it to be generally understood that Facebook’s business model is based on surveillance, the company would be in danger. The one time Facebook did poll its users about the surveillance model was in 2012, when it proposed a change to its terms and conditions — the change that underpins the current template for its use of data.

The result of the poll was clear: 90% of the vote was against the changes. Facebook went ahead and made them anyway, on the grounds that so few people had voted. No surprise there, neither in the users’ distaste for surveillance nor in the company’s indifference to that distaste. But this is something that could change.

The other thing that could happen at the level of individual users is that people stop using Facebook because it makes them unhappy. Earlier this year, in a paper from the American Journal of Epidemiology, researchers found, quite simply, that the more people use Facebook, the more unhappy they are. In addition, they found that the positive effect of real-world interactions, which enhance wellbeing, was accurately paralleled by the “negative associations of Facebook use”.

In effect, people were swapping real relationships that made them feel good for time on Facebook, which made them feel bad. That’s my gloss, rather than that of the scientists, who take the trouble to make it clear that this is a correlation rather than a definite causal relationship, but they did go so far — unusually far — as to say that the data “suggests a possible trade-off between offline and online relationships”. This isn’t the first time something like this effect has been found. To sum up: there is a lot of research showing that Facebook makes people feel like shit. So maybe, one day, people will stop using it.

What, though, if none of the above happens? What if advertisers don’t rebel, governments don’t act, users don’t quit, and the good ship Zuckerberg and all who sail in her continues blithely on? We should look again at that figure of 2bn monthly active users. The total number of people who have any access to the internet — as broadly defined as possible, to include the slowest dial-up speeds and creakiest developing-world mobile service, as well as people who have access but don’t use it — is 3.5bn. Of those, about 750m are in China and Iran, which block Facebook.

Russians, about 100m of whom are on the net, tend not to use Facebook because they prefer their native copycat site VKontakte. So put the potential audience for the site at 2.6bn. In developed countries where Facebook has been present for years, use of the site peaks at about 75% of the population (that’s in the US). That would imply a total potential audience for Facebook of 1.95bn. At 2bn monthly active users, Facebook has already gone past that number, and is running out of connected humans.
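That back-of-envelope arithmetic can be checked in a few lines. The figures below are the article's own estimates (in billions of people), rounded as in the text, not fresh data:

```python
# The article's potential-audience arithmetic, figures in billions.
total_online = 3.5        # everyone with any internet access, broadly defined
china_iran = 0.75         # online population of China and Iran, which block Facebook
russia = 0.1              # Russians online, who mostly use VKontakte instead

reachable = total_online - china_iran - russia   # 2.65; the article rounds to 2.6
reachable = 2.6

peak_share = 0.75         # peak national penetration Facebook has reached (in the US)
potential_audience = reachable * peak_share
print(f"{potential_audience:.2f}bn")             # 1.95bn: below the 2bn already active
```

On these assumptions the ceiling (1.95bn) sits below the current monthly active user count (2bn), which is the article's point: growth from the existing connected population is essentially exhausted.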

Whatever comes next will take us back to those two pillars of the company: growth and monetisation. Growth can only come from connecting new areas of the planet. An early experiment came in the form of Free Basics, a programme offering internet connectivity to remote villages in India, with the proviso that the range of sites on offer should be controlled by Facebook. “Who could possibly be against this?” Zuckerberg wrote in The Times of India. The answer: lots and lots of angry Indians. The government ruled that Facebook shouldn’t be able to “shape users’ internet experience” by restricting access to the broader internet. A Facebook board member tweeted: “Anti-colonialism has been economically catastrophic for the Indian people for decades. Why stop now?”

So the growth side of the equation is not without its challenges, technological as well as political. Google, which has a similar running-out-of-humans problem, is working on Project Loon, “a network of balloons travelling on the edge of space, designed to extend internet connectivity to people in rural and remote areas worldwide”. Facebook is working on a project involving a solar-powered drone called the Aquila, which has the wingspan of a commercial airliner, weighs less than a car and, when cruising, uses less energy than a microwave oven.

The idea is that it will circle remote, currently unconnected areas of the planet, for flights that last as long as three months at a time. It connects users via laser and was developed in Bridgwater, Somerset. (Amazon’s drone programme is based in the UK too, near Cambridge. Our legal regime is pro-drone.) Even the most hardened Facebook sceptic has to be a little bit impressed by the ambition and energy. But the fact remains that the next 2bn users are going to be hard to find.

That’s growth, which will mainly happen in the developing world. Here in the rich world, the focus is more on monetisation, and it’s in this area that I have to admit something that is probably already apparent. I am scared of Facebook. The company’s ambition, its ruthlessness and its lack of a moral compass scare me. It goes back to that moment of its creation, Zuckerberg at his keyboard after a few drinks creating a website to compare people’s appearance, not for any real reason other than that he was able to do it. That’s the crucial thing about Facebook, the main thing that isn’t understood about its motivation: it does things because it can.

Zuckerberg knows how to do something, and other people don’t, so he does it. Motivation of that type doesn’t work in the Hollywood version of life, so Aaron Sorkin had to give Zuck a motive to do with social aspiration and rejection. But that’s wrong, completely wrong. He isn’t motivated by that kind of garden-variety psychology. He does this because he can, and justifications about “connection” and “community” are ex post facto rationalisations. The drive is simpler and more basic. That’s why the impulse to growth has been so fundamental to the company, which is, in many respects, more like a virus than it is like a business. Grow and multiply and monetise. Why? There is no why.

Automation and artificial intelligence are going to have a big impact in all kinds of worlds. These technologies are new and real and they are coming soon. Facebook is deeply interested in these trends. We don’t know where this is going, we don’t know what the social costs and consequences will be, we don’t know what will be the next area of life to be hollowed out, the next business model to be destroyed, the next company to go the way of Polaroid or the next business to go the way of journalism or the next set of tools and techniques to become available to the people who used Facebook to manipulate the elections of 2016.

One of the things that really stands out about the Russian use of Facebook during the US election is how it drew all the things I’ve mentioned together. It focused on American fragmentation, and sought to exacerbate the country’s social and political divides. It used Facebook’s algorithmic targeting to focus on what it already knew people thought, and gave them more of the same. It used falsehoods, knowing that the company had no real interest in weeding them out. It manipulated people’s feelings. The people behind that campaign had done a better job of studying Facebook’s innate amorality and potential for misuse than anyone in government.

We just don’t know what’s next, but we know it’s likely to be consequential, and that a big part will be played by the world’s biggest social network. On the evidence of Facebook’s actions so far, it’s impossible to face this prospect without unease.


Source: The Times