
Archive for the ‘analytics’ Category

Can a cellphone carrier be stupider?

May 10, 2012

Even as someone with a jaundiced view of the use of data and analytics in business, I can't help but think that the introductory message on Vodafone's help line just couldn't be worse.

Calling today to solve a small problem with my new account, I am accosted by a perky British "It Girl" voice, enthusing that the new Samsung smartphone is about to be released. But Vodafone already knows who I am: they can detect that from the cellphone I am calling from. They also know my account information. Certainly they know that I just signed an 18-month contract for an iPhone. Why should I care about a Samsung phone?

The answer is that I shouldn't, and they know it. In their dunderheaded minds, this was probably just an advertising deal: bombard callers to the service line with the advert in return for some dosh. But it doesn't seem very cost-effective. Why not use the moment to call to my attention something that I might actually be interested in buying? Instead, Vodafone is "training" me to ignore its marketing messages, since they are not relevant.

So Vodafone gets some short-term lucre, but it annoys its customers and creates psychological incentives to disregard its adverts, doing longer-term harm.

What is most pathetic about this is that it need not happen. Who is to say that there ought to be one advert for all callers? Wouldn't segmentation make more sense: six or thirty or two hundred different messages, matched to what the carrier already knows about each caller?
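Nothing sophisticated is required. A few lines of routing logic against the caller's account record would do; here is a minimal sketch, with every field, segment, and message name invented for illustration:

    # Hypothetical sketch: choose the hold-line message from the caller's
    # account record instead of playing one advert to everyone.
    # Every field and message name here is invented.

    def pick_message(account):
        # A caller 17 months into an 18-month iPhone contract is no
        # prospect for a new Samsung handset.
        if account["device"] == "iPhone" and account["months_left"] > 12:
            return "accessories_and_data_addons"
        if account["months_left"] <= 2:
            return "handset_upgrade_offers"  # now a Samsung advert makes sense
        return "generic_service_message"

    caller = {"device": "iPhone", "months_left": 17}
    print(pick_message(caller))  # -> accessories_and_data_addons

Even this crude split would beat one advert for everybody; real segmentation would layer in usage, spend, and handset age.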

It underscores that the entities most teeming with data can be the stupidest at using it.

Grrr… I’m still on hold!


Help! My iPad is dumber than I am!

April 2, 2012

If there ever was a case for basic analytics and personalization, this is it. For such a smart machine in so many ways, my iPad couldn’t be dumber when it comes to its recommendations.

The App Store and Newsstand apparently think it is OK to recommend items that I already own. Newsstand foists ads for The Economist on me, seemingly unaware that I already have it on the device and that I am a full subscriber. Being bombarded with a useless ad might seem to cost only me something (ie, my attention), but it costs Apple something too: an occasion to show me something relevant, like a subscription offer for The Atlantic or The New York Review of Books, which I may indeed want.

The App Store is just as bad. One day I buy an app, and the next day the store still recommends it to me. It does this even though it plainly knows that I already own it: it marks "installed" in the box where the price usually appears.
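The missing step is embarrassingly simple: before ranking anything, subtract the items the device reports as installed. A toy sketch of that filter follows; it is not Apple's actual pipeline, and every identifier is made up:

    # Toy sketch: never recommend what the user already owns.
    # Not Apple's pipeline; all identifiers are made up.

    def recommend(candidates, owned_ids, k=5):
        fresh = [item for item in candidates if item["id"] not in owned_ids]
        return sorted(fresh, key=lambda item: item["score"], reverse=True)[:k]

    owned = {"economist"}  # titles the device reports as installed
    candidates = [
        {"id": "economist", "score": 0.9},  # already installed, never show
        {"id": "atlantic", "score": 0.7},
        {"id": "nybooks", "score": 0.6},
    ]
    print(recommend(candidates, owned))  # only atlantic and nybooks remain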

Are we so inured to information technology not working that we no longer care when it confirms that presumption? There are two reasons why Apple's failure to incorporate users' information into its recommendations is more than just sloppy system design.

First, Apple’s brand promises a premium service and excellent design. Steve Jobs built the company’s reputation on that and trounced rivals. The lack of personalization leads Apple to fall short of the standard it sets for itself.

Second, Apple's ignorance hurts us both. The company's fortunes are tied to the software and services atop the device, so it effectively forgoes a revenue opportunity whenever it tries to sell me something that I already have. And the irrelevant ads in effect "train" me, the customer, to give Apple less of my attention when I interact with the service, since I no longer expect the ads to be useful.

Ultimately, the problem underscores that many people may want personalization and targeted advertising when it brings them value. For the owner of an iPad who wants to cut through the chaff and add functionality to the device, Apple's use of personal data is a benefit. The episode shows that customers can be just as angry when expected personalization falls short as when it creepily appears where they do not expect it.

Categories: analytics, Apple, iPad

What Facebook’s IPO reveals about big-data analytics

February 10, 2012

Those obsessed with Mammon will read Facebook's IPO prospectus for what it says about making money. Others of us with a more geeky bent will pore over what it reveals about how the company handles data. It starts with arresting stats: 845 million monthly active users; 100 billion friendships; and, every day, 250 million photos uploaded and 2.7 billion likes or comments.

But that is just the eye-candy. The substance is buried deep in the prose, under the heading “Data Management and Personalization Technologies.” Get a load of this:

“loading a user’s home page typically requires accessing hundreds of servers, processing tens of thousands of individual pieces of data, and delivering the information selected in less than one second. In addition, the data relationships have grown exponentially and are constantly changing.”

And then there is this:

“We use a proprietary distributed system that is able to query thousands of pieces of content that may be of interest to an individual user to determine the most relevant and timely stories and deliver them to the user in milliseconds.”

And this:

“We store more than 100 petabytes (100 quadrillion bytes) of photos and videos.”

And this:

“We use an advanced click prediction system that weighs many real-time updated features using automated learning techniques. Our technology incorporates the estimated click-through rate with both the advertiser’s bid and a user relevancy signal to select the optimal ads to show.”
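That passage describes what sounds like a standard expected-value auction: each candidate ad is scored by its bid, weighted by the predicted probability of a click and a relevance signal, and the highest scores win. A toy illustration of the principle (nothing here is Facebook's actual code):

    # Toy expected-value ad ranking: score = bid * predicted CTR * relevance.
    # Purely illustrative; not Facebook's system.

    def rank_ads(ads):
        def score(ad):
            return ad["bid"] * ad["p_click"] * ad["relevance"]
        return sorted(ads, key=score, reverse=True)

    ads = [
        {"name": "shoes", "bid": 2.00, "p_click": 0.01, "relevance": 0.5},
        {"name": "camera", "bid": 1.00, "p_click": 0.05, "relevance": 0.9},
    ]
    for ad in rank_ads(ads):
        print(ad["name"])  # camera wins: lower bid, far higher expected value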

But my favorite is this:

“Our research and development expenses were $87 million, $144 million, and $388 million for 2009, 2010, and 2011, respectively.”

So R&D expenses grew almost four-and-a-half-fold between 2009 and 2011. Considering Facebook had $1 billion in profit on $3.7 billion of revenue last year, the company's research budget came to more than 10% of sales. That is very healthy (albeit natural, perhaps, for a company boasting such hefty profit margins). According to the OECD, the top 100 R&D-intensive companies in the IT and telecoms sectors spend an average of nearly 7% of revenue on R&D.

Most of the fruits of that R&D are probably kept internal and covered by trade secrets. But for that generous sum, the prospectus informs us:

“As of December 31, 2011, we had 56 issued patents and 503 filed patent applications in the United States and 33 corresponding patents and 149 filed patent applications in foreign countries relating to social networking, web technologies and infrastructure, and related technologies. Our issued patents expire between May 2016 and June 2031.”

But the most interesting thing is how much was not exposed in the prospectus. In the section where Facebook purported to explain its analytics, the example it gave of how elements on a webpage determine which ads to show (page 87) was so juvenile as to be meaningless.

It is actually funny how quiet Facebook keeps on analytics, considering that the first time the word appears is on page 12, where Facebook cites it as one of the "risk factors" that could ruin the business:

“our inability to improve our analytics and measurement solutions that demonstrate the value of our ads and other commercial content”

Though it is loath to make too much of this, since analytics is its main source of value, Facebook is an analytics company before anything else. Google might have been the world's first big-data IPO; Facebook may be the first analytics one. But you wouldn't know it from its IPO prospectus.

Categories: analytics, big data, Facebook

What donations tell us about … more donations

April 26, 2011

One of the most impressive trends of the past decade (and, more broadly, of the past century) has been the rise of the NGO. In the 1990s they mushroomed like start-ups and attracted "social entrepreneurs." The bigger shift today is that running one is no longer a person's full-time job: now actual entrepreneurs toiling at start-ups have their own philanthropic gig on the side. A computer went from a 2-ton, $2 million, room-sized machine to a pocket-sized thing. So did non-profit organizations.

I recently scribbled a few thoughts about the data dimensions of responding to Japan’s crisis for The Economist’s website: “The information equation” on April 24th. I was impressed that a private-sector company was playing the role that a governmental organization or NGO might play. (It’s a Google.org project, to be exact.)

Among the things I learned was that Google collected $5.5 million in donations through its crisis-response page. A small but not insignificant haul. But it got me thinking. The world of big data is about learning new things from information that is otherwise invisible to the naked eye. What could the donation data tell us about how to solicit charitable contributions more effectively? Specifically, as I wrote in the penultimate paragraph of the article:

The donation data may offer a chance to learn new things about how people contribute. For example, what is the average amount? Does the distribution follow a bell curve (ie, a normal distribution), in which a few give very little, a few give a lot, and the majority donate around $15? Or is it a power law, in which two or three extremely rich donors and a handful of generous ones are followed by a long tail of $2 contributions? Did people donate using PayPal or credit cards? What time of day do they give? Is it after they have read a news story or clicked a link within an e-mail? The information would help fundraisers tailor how they make their appeals. And the data can be broken down by country or even city via Internet Protocol addresses.
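One crude way to tell those two shapes apart: in a bell curve the mean and the median sit close together, whereas under a power law a few huge gifts drag the mean far above the median and the top 1% of donors supply an outsized share of the total. A sketch with simulated donations (the real test would, of course, need Google's actual numbers):

    # Sketch: separate bell-curve from power-law giving with two summary
    # statistics. Simulated data only; a real analysis needs Google's numbers.
    import random
    import statistics

    random.seed(42)
    bell = [max(1.0, random.gauss(15, 5)) for _ in range(10000)]   # most give about $15
    power = [2 * random.paretovariate(1.5) for _ in range(10000)]  # long tail of ~$2 gifts

    for name, gifts in (("bell curve", bell), ("power law", power)):
        gifts = sorted(gifts, reverse=True)
        top1_share = sum(gifts[:len(gifts) // 100]) / sum(gifts)
        print(f"{name}: mean ${statistics.mean(gifts):.2f}, "
              f"median ${statistics.median(gifts):.2f}, "
              f"top 1% gives {top1_share:.0%} of the total")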

I've asked Google's hyper-helpful PR team to run the idea past their number-crunchers, in the hope of getting access to the findings so I can write a story about them. It's sort of like Google Flu Trends, but for charities. The information would be highly valuable for NGOs, particularly one that is dear to my heart, International Bridges to Justice (where I proudly serve on the board).

Categories: analytics, NGOs