Learn to code. And cook, perform open-heart surgery, write kanji, design efficient thermodynamics, blog…
For such an interesting and useful debate, some remarkably silly arguments are being advanced. I think the coders need to lift their heads from their screens and spend a year learning the humanities.
Clay Johnson (@cjoh) believes that coding is like literacy: if you don’t learn it, you’re shut out of the world. Matt Galligan (@mg) suggests it is like cooking: it doesn’t matter that you can’t compete against Jamie Oliver with a whisk, simply being knowledgeable is important. (I’ll link to their tweets shortly.)
Both are deeply smart. Yet I beg to differ on two grounds.
First, we live in a resource-constrained world, and one cannot pursue everything one would like. I’ve never read Plutarch despite knowing I’d be a better person for it.
Second, there is a value to concentrating one’s efforts where there is the biggest payoff; the idea of comparative advantage. An old economics textbook — was it Samuelson? — used the example of why President Roosevelt oughtn’t type his own letters even if he is a faster typist than his secretary.
I have no principled objection to learning to code, either rudimentarily or more seriously, if that is what one wishes. But insisting that it is somehow essential to learn is ridiculous.
Surely the same arguments could be made for other things that affect us on an everyday level, such as food (learn to farm!) or health (learn to sequence genomes!). We drive cars: must we learn how they work? We surf the internet: must we tinker with IPv6 header fields and the protocol stack? Where does it end?
Benjamin Franklin detested that schools in his day taught Latin and Greek as standard fare — far better to learn living languages to actually talk to people, he recommended. I’m not as narrowly practical as that. (After all, it let scholars across Europe communicate, as the term “Latin Quarter” in Paris suggests. And it opens the mind to a world of great works.) Yet there is a lot of sense to the idea that we should be discriminating with our time.
Yes, yes. I understand that as more facets of life are dominated by computers, with algorithms making decisions that once were done by people, it is essential that the public has a basic understanding of how software works, so that they can appreciate its limitations — and can act on that knowledge as citizens, voters, consumers, parents, etc. I get it. (I’m even writing a book on big-data that deals with this.)
Still, the principles of software, albeit useful to be familiar with, hold no sacrosanct importance that they should jump the queue of priorities; coding can hardly claim some sort of categorical imperative that elevates it to something any honorable person must know. Rather, it is like most other things in life: nice to know if you can, but one can avail oneself of the marketplace to bring in the skill when it’s needed. Typists never needed to know the innards of a mechanical typewriter. Newspaper subscribers don’t need to learn about presses, or the stylebook, or HTML5.
What may be most surprising is that people are surprised. So coders urge everyone to code. Priests want parishioners to pray. Boy scouts want us to camp. Generals ask that we be prepared to defend. When you’re a hammer, everything looks like a nail. Frogs see the universe as a pond. Lawyers want us to code too, but not in the way software engineers mean.
In the Middle Ages, music was the fourth of the seven liberal arts that all educated men needed to learn. Should today’s computer scientists think less of themselves if they cannot sight-read a staff?
Even as someone who casts a jaundiced eye at the use of data and analytics in business, I can’t help but think that the introductory message on Vodafone’s help line just couldn’t be worse.
Calling today to solve a small problem with my new account, I am accosted by the “perky British ‘It Girl’” voice. She’s enthusing that the new Samsung smartphone is about to be released. But Vodafone already knows who I am — they can detect that from the cellphone I am calling from. They also know my account information. Certainly they know that I just signed an 18-month contract for an iPhone. Why should I care about a Samsung phone?
The answer is that I shouldn’t — and they know it. In their dunderhead minds, this was probably just an advertising deal in which they agreed to bombard callers to the service line with the advert in return for some dosh. But it doesn’t seem very cost-effective. Why not use the moment to call to my attention something that I might actually be interested in buying? Instead, Vodafone is “training” me to ignore its marketing messages, since they are not relevant.
So Vodafone gets some short-term lucre, but annoys its customers and creates psychological incentives to disregard its adverts, causing longer-term harm.
What is most pathetic about this is that it need not happen. Who is to say that there ought to be one advert for all callers? Wouldn’t segmentation make more sense, with six or thirty or two hundred different messages?
It underscores that the entities most teeming with data can be the stupidest at using it.
Grrr… I’m still on hold!
If there ever was a case for basic analytics and personalization, this is it. For such a smart machine in so many ways, my iPad couldn’t be dumber when it comes to its recommendations.
The app store and newsstand apparently think it’s OK to make recommendations of items that I already own. On the newsstand, it foists ads for The Economist — seemingly unaware that I already have it on the device, and that I’m a full subscriber. Being bombarded with a useless ad might seem to only cost me something (ie, my attention), but it costs Apple something too: an occasion to show me something relevant, like a subscription offer to The Atlantic or The New York Review of Books, which I may indeed want.
The app store is just as bad. One day I buy an app, and the next day the app store still tries to recommend it to me. It does this even though it knows that I already own it: it marks “installed” in the box where the price usually is.
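The fix is almost embarrassingly simple. A sketch of the filter Apple evidently skips (a toy, of course, not Apple’s actual system):

```python
# A toy illustration: before ranking recommendations, drop any item the store
# already knows the user owns. The titles here are just sample data.

def recommend(candidates: list[str], owned: set[str], k: int = 3) -> list[str]:
    """Return up to k candidate items the user does not already own."""
    return [item for item in candidates if item not in owned][:k]

owned = {"The Economist", "Instapaper"}
candidates = ["The Economist", "The Atlantic", "NY Review of Books", "Instapaper"]
print(recommend(candidates, owned))  # → ['The Atlantic', 'NY Review of Books']
```

One set-membership check per candidate: the information is already in the system, as the “installed” label proves.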
Are we so inured to information-technology not working that we fail to care when it confirms our presumption? There are two reasons why Apple’s failure to incorporate users’ information into what it recommends is more than just sloppy system design.
First, Apple’s brand promises a premium service and excellent design. Steve Jobs built the company’s reputation on that and trounced rivals. The lack of personalization leads Apple to fall short of the standard it sets for itself.
Second, Apple’s ignorance hurts us both. The company’s fortunes are tied to software and services atop the device. So it effectively forgoes revenue opportunities whenever it tries to sell me something that I already have. Yet as a customer, the irrelevant ads in effect “train” me to give Apple less of my attention when I interact with the service, since I don’t expect the ads to be as useful.
Ultimately, the problem underscores that many people may want personalization and targeted advertising when it brings them value. For the owner of an iPad who wants to cut through the chaff and add functionality to the device, Apple’s use of my data is useful to me. The episode shows that customers can be just as angry when an expected personalization falls short as when it creepily appears where none was expected.
Those obsessed with Mammon will read Facebook’s IPO prospectus for what it says about making money. Others of us with a more geeky bent will pore over what it reveals about how the company handles data. It starts with arresting stats: 845 million monthly active users; 100 billion friendships; and, every day, 250 million photos uploaded and 2.7 billion likes or comments.
But that is just the eye-candy. The substance is buried deep in the prose, under the heading “Data Management and Personalization Technologies.” Get a load of this:
“loading a user’s home page typically requires accessing hundreds of servers, processing tens of thousands of individual pieces of data, and delivering the information selected in less than one second. In addition, the data relationships have grown exponentially and are constantly changing.”
And then there is this:
“We use a proprietary distributed system that is able to query thousands of pieces of content that may be of interest to an individual user to determine the most relevant and timely stories and deliver them to the user in milliseconds.”
“We store more than 100 petabytes (100 quadrillion bytes) of photos and videos.”
“We use an advanced click prediction system that weighs many real-time updated features using automated learning techniques. Our technology incorporates the estimated click-through rate with both the advertiser’s bid and a user relevancy signal to select the optimal ads to show.”
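That last passage describes what amounts to an expected-value auction. A hedged sketch of the idea (the weighting details are my guess; Facebook discloses only the three ingredients):

```python
# A toy version of the ranking the prospectus describes: combine a predicted
# click-through rate, the advertiser's bid, and a user-relevancy signal into
# one score, then show the highest-scoring ad. All numbers are invented.

def ad_score(predicted_ctr: float, bid: float, relevancy: float) -> float:
    """Expected value of showing an ad, weighted by user relevancy."""
    return predicted_ctr * bid * relevancy

ads = [
    {"name": "shoes",  "ctr": 0.020, "bid": 0.50, "rel": 0.9},
    {"name": "travel", "ctr": 0.015, "bid": 1.20, "rel": 0.4},
    {"name": "games",  "ctr": 0.030, "bid": 0.40, "rel": 1.0},
]
best = max(ads, key=lambda a: ad_score(a["ctr"], a["bid"], a["rel"]))
print(best["name"])  # → games
```

Note how the highest bidder loses: a big bid cannot compensate for an ad nobody relevant will click. That is the whole business, in one multiplication.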
But my favorite is this:
“Our research and development expenses were $87 million, $144 million, and $388 million for 2009, 2010, and 2011, respectively.”
So R&D expenses grew nearly four-and-a-half-fold between 2009 and 2011. Considering Facebook had $1 billion in profit on $3.7 billion of revenue last year, the company’s research budget came to over 10% of sales. This is very healthy (albeit natural, perhaps, for a company boasting such hefty profit margins). According to the OECD, the top 100 R&D-intensive companies in the IT and telecoms sectors spend an average of nearly 7% of revenue on R&D.
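The back-of-envelope sums, for anyone who wants to check them against the prospectus figures:

```python
# Checking the arithmetic on Facebook's disclosed R&D figures.
rd_2009, rd_2011 = 87e6, 388e6   # R&D expenses, from the prospectus
revenue_2011 = 3.7e9             # 2011 revenue

growth = rd_2011 / rd_2009       # growth in R&D spending, 2009 to 2011
share = rd_2011 / revenue_2011   # R&D as a share of 2011 revenue
print(f"{growth:.1f}x, {share:.1%}")  # → 4.5x, 10.5%
```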
Most of the fruits of the R&D are probably kept internal and covered under trade secrets. But for that generous sum, the prospectus informs us:
“As of December 31, 2011, we had 56 issued patents and 503 filed patent applications in the United States and 33 corresponding patents and 149 filed patent applications in foreign countries relating to social networking, web technologies and infrastructure, and related technologies. Our issued patents expire between May 2016 and June 2031.”
But the most interesting thing is how much was not exposed in the prospectus. In a section where Facebook purported to explain its analytics, with an example of how it uses elements on a webpage to determine what ads to show (page 87), the example was so juvenile as to be meaningless.
It is actually funny the way Facebook keeps quiet on analytics, considering that the first time the word appears is on page 12, when Facebook cites it as one of the “risk factors” that could ruin the business:
“our inability to improve our analytics and measurement solutions that demonstrate the value of our ads and other commercial content”
Though it is loath to make too much of it, since it is its main source of value, Facebook is an analytics company before anything else. Google might have been the world’s first big-data IPO. Facebook may be the first analytics one. But you wouldn’t know it from its IPO prospectus.
The always insightful Pete Warden recently penned a blog post on “What the Sumerians can teach us about data.” There is much to praise and react to in his analysis. But I’m struck in particular by a semantic matter: does Pete really mean “data” or “information”? I usually hate this genre of challenge; it’s the most tedious in our business. But this time it deserves to be raised.
The reason is that the idea of quantification is really a phenomenon of the Middle Ages in Europe (laying to rest the old canard that they were “dark ages” devoid of progress). The period of antiquity, on the other hand, is typified by man describing his world as one of qualities. (Remember Plato’s “forms”? And Aristotle’s taxonomies of just about everything?)
To be sure, in the area of money we can talk about quantification and thus data as we think about it today. But in many of Pete’s terrific examples of how the Sumerians recorded their world — in the “fixed media” of clay tablets and the like — I am unsure if the term data fits.
Ought “writing” be considered data? If so, how about cave paintings? Surely the Egyptian hieroglyphs imparted information — but should we call it “data” per se? The only way to answer that question is to define data.
The word data is the plural of datum, neuter past participle of the Latin dare, “to give”, hence “something given,” instructs Wikipedia. “1. Facts and statistics collected together for reference or analysis. 2. The quantities, characters, or symbols on which operations are performed by a computer, being stored and transmitted in the form of…” reports a Google definition.
Building on the idea that data may be something different than just recording information, at what point does something go from being simply info to data?
I have a few ideas on how to answer this — I am scribbling away on a large work that looks at this topic among others. But I’m not quite ready to share it with the world, since the thoughts are still fermenting. In the meantime, Pete’s post is a wonderful look at how an early society recorded and used information. Among my favorite points:
* “Written records remove the problem of fallible memories, but replaces it with a second-degree question of provenance. How do you know the data accurately reflects what happened?”
* “We still have a disturbing tendency to trust anything that’s recorded, without understanding the subjective process that went into creating the record.”
* “The main way Sumerians protected the integrity of their data was through curses. This may seem laughable to a modern audience, but I don’t think we’re so different. Do you expect the FBI to actually raid your house if you copy that VHS tape?”
* “In the absence of real answers, we’ll take bogus ones painted with a veneer of data, just like the Sumerians.”
* “If there’s any way you can, please think about how to open up data you control, it’s the best way to pass it on to posterity.”
Having pointed out what I enjoyed most, let me close on a final quibble. Pete writes:
“The Sumerians recorded everything on stone or clay tablets … This data exhaust gives a rich view into trade, worship, life, death, medicine and almost every other aspect of the Sumerian’s world.”
It is absolutely not “data exhaust” in the way that the term has come to be known (and how I helped popularize it in a report a few years ago). The idea was information produced as a byproduct of interacting with information, which itself could be collected and analyzed. The simplest example is tracking readers’ activities to reveal to website visitors the most-read articles, as a simple heuristic for what might interest them.
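The most-read-articles example can be sketched in a few lines (the article names are made up; the point is that the counts are a side effect of reading, not something anyone set out to record):

```python
# A minimal sketch of the "data exhaust" idea: page views, logged as a
# byproduct of reading, recycled into a most-read list for other visitors.
from collections import Counter

views: Counter = Counter()

def record_view(article_id: str) -> None:
    views[article_id] += 1  # exhaust: emitted as a side effect of reading

def most_read(k: int = 3) -> list[str]:
    return [article for article, _ in views.most_common(k)]

for a in ["budget", "election", "budget", "sport", "budget", "election"]:
    record_view(a)
print(most_read(2))  # → ['budget', 'election']
```

The defining feature is that nobody sat down to write “budget is popular” on a tablet; the signal fell out of the activity itself. That is exactly what the Sumerian records lack.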
What Pete describes, and what the Sumerians recorded, was information (or perhaps data) pure and simple. No “exhaust” about it — other than that the tablets had been thrown away by the Sumerians before modern archeologists dug them up.
But all this ranting is only meant to add momentum to my appreciation for Pete’s splendid work in this post and others!
How to craft rules for information access in a BigData world? It is a hard question. But how not to is far clearer.
According to a new US government policy, lawyers representing Guantanamo prisoners are allowed to read Wikileaks’ classified US documents — but not print or save them. The actual policy “guidance” is here (from Politico) and an analysis by Politico’s Josh Gerstein is here.
Are the US officials that devised this policy out of their minds? How could anyone rationally adopt such an inherently inconsistent policy?
If the lawyers cannot read the material, they are blocked from accessing pertinent information that is already in the public domain, which could help them prepare a defense. Allowing access is only sensible. To do otherwise would be to deny reality (that the material is widely available), and might deny justice too.
However, crippling that access by placing arbitrary restrictions on its use makes no sense whatsoever. Why? On what basis is one allowed to read but not print or save? Surely the US does not mean for the frailty of a person’s memory to govern how material is put to use. But that is the policy’s effect.
The irony is that the current policy is actually a slightly more rational shift from previous rules that forbade any access at all. It underscores the fact that the government has no clue how to respond to the new world of BigData leaks.
And it is a longstanding problem. Just this month, the US officially released the trove of documents known as the Pentagon Papers — 40 years after they appeared in the New York Times. (The AP’s story is here.) The Economist, in an article last month about it (“The open society and its ostriches”) argued that the way to think about these cases is that “the illegal disclosures in effect declassify the information.”
When the contradiction between futile policies and the reality on the ground grows so wide as to be preposterous — as it is now — something has to give. It will be the rules, of course, that go. But with government, this takes a long time.
Scott McNealy, the co-founder and longtime boss of Sun Microsystems, was famous for his “top ten” riffs on tech trends. Today he’s recreated one on Twitter (follow @scottmcnealy), reprising his famous remark from 1999: “You have zero privacy anyway. Get over it.”
Here’s a compilation of the tweets (followed by a quick analysis relating it to Sony’s Stringer on security):
* * *
Top 10 signs you no longer have privacy and should get over it:
10. The guy behind the McDonalds counter greets you with, “Would you like a salad to help you with your constipation?”
9. A Google search on “white only clubs” has just one result: TaylorMade.
8. Your soon to be ex-spouse produces your iPhone GPS database in settlement hearings.
7. The TSA stops molesting and radiating your 82 year old mom because she is clearly not going to hijack that plane.
6. 20 neighbors show up at same Groupon inspired Spearmint Rhino happy hour in Vegas.
5. IRS starts auditing folks who don’t pay income taxes, not the folks who pay the most.
4. Local police become largest purchaser of camera equipped UAV’s.
3. Your parents require your Facebook, laptop, and phone passwords and actually review your online activity regularly. And you are 40.
2. The UPS driver delivers your small package to your door and, with a smile and wink, asks if you would like batteries with that.
1. Twitter starts suggesting Tweets for you, and they are perfect and better than your own.
* * *
As in 1999, McNealy is right on fact, wrong on what to do about it (as critics argued at the time). Not ensuring some protections is irrational. But whether he’s right or not is beside the point. It is refreshing when a top executive calls it as he sees it — and a bit silly when people quibble with the wording rather than the larger point itself.
Here, I’m thinking of Sony’s boss, Howard Stringer, who recently described the PlayStation Network hack in words that were sure to see him eviscerated among tech journos. “Nobody’s system is 100 percent secure,” he said in a conference call. “This is a hiccup in the road to a network future.” (in Bloomberg’s piece). “It’s not a brave new world; it’s a bad new world,” he said (in the WSJ piece).
Stringer has been pounced on by some in the press. He shouldn’t be. Though we’ve known the point he raises for a long time, it is still quite right.