Luddites,
reverse-Luddites,
and value

“Luddite” sounds like an indestructible substance over which wars are fought in the Marvel Universe. 

The word originally referred to workers in early 19th century Nottinghamshire, England, who protested low wages and poor working conditions by destroying their employers’ machines. Today, Luddite has morphed into a reference to people who oppose technological advances.

Contemporary Luddite-type voices arise with nearly every innovation, from smartphones, to digital music, to self-driving cars, to gene splicing. Concerns range from “it’s bad for character” to “you shouldn’t mess with nature” to “it’ll put people out of work.” The first two have yet to hold up, but the last often does. It’s difficult to name a field where technology has not eliminated jobs. To be sure, technology also creates jobs, though it is by no means a lateral shift. It’s not as if the average laid-off assembly line worker can move into a newly created animator job at Pixar.

Job loss is, unfortunately, a brutal reality of progress. Despite what some politicians aver, the majority of phased-out American jobs have not gone to immigrants or overseas companies, but to automation and digitalization. That doesn’t lessen the heartbreak and very real consequences of losing a job or a once-thought secure career. It only explains the underlying why.

When grocery chains install self-checkout lanes, more than a few customers express indignation. In something of a reverse-Luddite concern, some feel that they are taking over a task once performed by employees and are thus entitled to a discount. It’s more complicated than they realize, of course. If there are savings, they may come in the form of an avoided or delayed price increase rather than a discount. Perhaps more important, the if in if there are savings is a sizable if. According to the academic journal The Conversation, self-scanners do not reduce a store’s costs at all. In fact, The Atlantic recently reported that for many stores self-scanners increase costs by providing a too-tempting, easy way to steal.

Yet “we deserve a discount for doing the store’s work” may not reflect economic naiveté so much as not seeing value in a brand. That should be of concern to all, not least the financial services industry. 

It goes without saying that digital technology has eliminated or reduced the need for many a position in the banking industry. Outside of investment advice, consultation, and large commercial deals, fewer and fewer banking transactions require interaction with a live body. Portable devices, PCs, and ATMs pretty well cover the gamut from account set-up, to loan applications, to deposits, to payments, even to cross-selling. Today, a loyal customer might realistically have no need to set foot in a bank lobby for years. Clients using a bank’s technology might, like grocery shoppers, begin to wonder if they, not the bank, are doing most of the work—and, therefore, if they’re really getting enough value for fees paid.

That, of course, can lead to price-shopping, a brand’s worst enemy.

Banks in general have long faced the challenge of the parity-product, utility, or necessary-evil perception. Slogans like “The friendly bank” do nothing to help and, worse than going unnoticed, risk inducing eye-rolls. 

Client-banker relationships can offer an effective differentiator, but are inconsistent and tend to build loyalty not to a bank but to an employee—who may move on and take clients along. More disconcerting, however, is that client-banker relationships today are giving way to technology that becomes the client’s primary point of contact with the bank.

Replacing bankers with screens requires a user interface that conveys perceived brand value. As I have written before, it is difficult but not unattainable. One need look no further than the fierce loyalty users have for their Android or iOS devices, which are, with all due respect to your smartphone of choice, all but parity products.

Every digital banking innovation deserves to be heralded as a significant step forward in efficiency, accuracy, and service. From a marketing standpoint, it’s important to communicate the advantages of technological advances often—and convincingly—in order to ensure that clients continue to see value. That can help ward off potential Luddite and reverse-Luddite protests.

Of course, it bears remembering that ensuring clients see value begins with ensuring that financial institutions first deliver it.

Posted in Uncategorized by Matt. No Comments

Google’s Duplex et al:
Wow or Yikes?

I have empathy for writers of futuristic science fiction. It’s almost impossible to correctly envision the future.

You may have noticed, for instance, that 2001 came and went without the discovery of a monolith on the moon.

But last week, Google’s I/O Conference apparently conjured up visions of 2001: A Space Odyssey’s rogue computer HAL for a few attendees and bloggers. 

It began when Google CEO Sundar Pichai played a recording of a phone call to a salon to book a haircut appointment. Normally such a conversation would be unremarkable, and this one would have been, too—had not the caller been an artificial intelligence app called Duplex. Astonishingly, the salon employee who took the call couldn’t tell. Duplex handles conversation adeptly and naturally, even tossing in convincing ums and ahs, hesitations, and tonal variations. Even after being clued in, audience members had a hard time believing the caller wasn’t a real person.

To hear Duplex make the salon appointment, click here. To hear it make a dinner reservation, click here.

It’s a marvelous technological achievement with untold potential to be useful. Yet, perhaps even predictably, no small number of people were creeped out. NPR reported:

While Google wowed developers with the realness of the bot’s speech, many observers immediately took issue with how the technology apparently tricked the human on the line.

“Google Assistant making calls pretending to be human not only without disclosing that it’s a bot, but adding ‘ummm’ and ‘aaah’ to deceive the human on the other end with the room cheering it… horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing,” tweeted Zeynep Tufekci, a professor at the University of North Carolina at Chapel Hill who studies the social impacts of technology.

“As digital technologies become better at doing human things, the focus has to be on how to protect humans, how to delineate humans and machines, and how to create reliable signals of each—see 2016. This is straight up, deliberate deception. Not okay,”

Entrepreneur and writer Anil Dash agreed: “This stuff is really, really basic, but: any interaction with technology or the products of tech companies must exist within a context of informed consent. Something like #GoogleDuplex fails this test, _by design_. That’s an unfixable flaw.”

To which I say, oh come on. 

Development is in the early stages. If the thought of unknowingly speaking with a machine need really be troubling, and I’m not convinced it need be, it’s a simple matter to program Duplex to open a conversation by identifying itself. Let’s not lose sight of the Wow! by inventing pseudo-moral dilemmas.

Last year, another AI experiment was blown out of proportion in the media. Perhaps you heard about it: Facebook shut down a pair of AI-esque bots because they had invented their own secret language and, for all we knew, were plotting humankind’s overthrow. At least, that’s what you might have thought from irresponsible headlines and stories. In fact, Facebook was conducting an experiment to see if the AIs could manage a simple negotiation. In the process, the bots improvised their own shorthand, which, when you think about it, is remarkable. Facebook stopped the experiment not because a program whose bots it couldn’t understand was dangerous, but because it was useless.

A few weeks ago I wrote about the Cambridge Analytica situation, a case in which it’s too early to tell if fears and accusations are grounded, hyperbolic, or a bit of both. In the meantime, Mark Zuckerberg appears to be getting serious about privacy. I suppose being called before a Senate panel can do that to a person. As I write, Facebook staffers are busily digging through a mountain of apps—a fishing expedition to find out who has been on a fishing expedition, as it were. Facebook’s Newsroom page just announced, 

Facebook will investigate all the apps that had access to large amounts of information before we changed our platform policies in 2014—significantly reducing the data apps could access. [Zuckerberg] also made clear that where we had concerns about individual apps we would audit them—and any app that either refused or failed an audit would be banned from Facebook.

The investigation process is in full swing, and it has two phases. To date thousands of apps have been investigated and around 200 have been suspended—pending a thorough investigation into whether they did in fact misuse any data. Where we find evidence that these or other apps did misuse data, we will ban them and notify people via this website. It will show people if they or their friends installed an app that misused data before 2015—just as we did for Cambridge Analytica.

And not just Facebook. Chances are you, as I, have of late received privacy policy statements or reassurances from search engines, social media sites, financial institutions, and others. Try clicking on a link that takes you to Twitter, and you’ll be asked to click OK on a screen that says, “By playing this video you agree to Twitter’s use of cookies. This may include analytics, personalization, and ads.” 

Where will it end? I’m expecting a privacy statement from the pizza delivery dude any day now.

I don’t know whether or not all of the preventive disclosures are overkill, but if they are I don’t mind. Something about “safe” scoring a little higher than “sorry” on the Preferred Outcome Scale.


Bungled bank robberies

There’s nothing quite so entertaining as an incompetent bank robber. Alas, as digital banking marches forward, we may see fewer and fewer of them. Before the art all but disappears, here are some favorite tales of mishaps and ineptitude.

• Never underestimate a teller. In Dallas, a would-be robber demanded a teller hand over the money in her till. Fine, she said, but first she would need to see two forms of ID. Which—I’m not making this up—the man produced. She took her time copying down his information, giving police plenty of time to greet him as he exited the bank. AOL.

• The Royal Bank of Scotland in the town of Rothesay has two unique robbery prevention devices. One is a revolving door. A trio of men bent on robbing the branch entangled themselves in it and needed help from bank staff to free themselves. A bit embarrassed, they left to regroup. They successfully navigated the door on their second foray, although it took some doing to convince amused bank personnel that they were serious about committing robbery. It was then that the second prevention device, a counter, went to work. It dealt a broken ankle to the robber who tried to jump over it. His accomplices tried to flee, but in their haste forgot to beware the revolving door. They remained trapped until police came. Anvari.

• After dashing out of a Virginia Beach bank with stolen cash, a robber decided he’d better return and retrieve his robbery note. Whew, he must have thought, that was close. His next thought might have been, Crap! I left the car keys at the bank. Opting not to return for them, he ran home and told his roommate that someone had stolen the car. The roommate, who was the car’s owner, reported the alleged theft. Police found the car, matched it to the keys left at the bank, and arrested the robber. The roommate was no doubt relieved to be spared the uncomfortable “you’ll need to find someplace else to live” conversation. Dumb Criminals.

• The car keys thing may be going around. In January of this year in Taylorsville, Utah, a man left his car keys at the credit union he had just robbed. Taking off on foot, he snagged and tore his bagful of cash. He could only watch (and, possibly, swear a blue streak) as the wind carried some of his booty into the hands of eager passersby and the rest of it down a storm drain. To add to his ill luck, police promptly apprehended him. With this incident and his already-lengthy criminal history, I suspect he is in jail as I write. Miami Herald.

• The note seemed clear enough: “I have a gun. Gimme your money or else.” But to the robber’s bafflement, the teller handed back the note unread. This was Harbor Bank of Maryland, she explained, and she couldn’t accept a Maryland National transaction slip (on which he’d written the unread demand). The man took his note and left, leaving the teller unaware that she had just thwarted a robbery attempt. We’d be unaware, too, and the would-be robber might not have been apprehended if not for a woman who, waiting in line behind him, read the note over his shoulder. Baltimore Sun.

• Two of my favorites happened close to home in Salt Lake City. In one, a man handed a robbery note to a teller only to learn, the hard way, that two armed FBI agents were in line behind him. In the other, a man ordered a teller to empty her till into a paper sack. When she handed him back the sack, he shoved it down the front of his pants and fled the bank. Seconds later, he, too, learned something the hard way: Dye packs burn at about 400 degrees. Apprehending the man wasn’t difficult. Neither was identifying him. Personal conversations.

• Thieves who rely on robbery notes would do well to invest in blank paper. A man arrested in Englewood, Colorado, wrote the robbery note on one of his personalized checks. To his credit, first he blacked out his name. Not to his credit, he didn’t black it out very well. Neowin.

• Another fellow at least had the good sense to use a nameless starter check. But the thing about checks, even starter checks, is that they have account numbers. Tracing the check’s account number to the thief was an easy matter. Barstool Sports.

• A Pennsylvania robber was smart enough not to use a personalized document, but he managed to make up for it in other ways. Asking the teller for a blank deposit slip, he wrote, “Just give me the money and nothing else will happen”—and then signed his own name. Norfolk Daily News.

I hope you enjoyed reading these anecdotes as much as I enjoyed sharing them. Fair being fair, I’m going to wrap up with a story where ineptitude took place on the side of the law. It happened a while ago, so I won’t be embarrassing any of Idaho’s finest.

You have doubtless heard of Butch Cassidy, famous for making withdrawals at gunpoint. (His former hideout is a few hours’ drive from my home.) In 1896, Cassidy and two accomplices robbed the Bank of Montpelier, Idaho, making off on horseback with $13, worth roughly $352 today. Instead of chasing the robbers on horseback as any sane person would do, the first responding deputy hopped on one of them thar newfangled bicycles. He didn’t get far. 

Every town is entitled to its claim to fame. Montpelier celebrates the robbery with an annual Butch Cassidy Cook Off and Reenactment. The town even has its own Butch Cassidy Museum, and, out front, a Hollywood-style sidewalk star pays tribute to the robbery.


The CLOUD Act
or
Look what rode
in on the omnibus

You may have heard of the Fourth Amendment. It says, in essence, that if the cops want to search your home, they’d better convince a judge to issue a warrant.

In theory, anyhow. It’s another matter in practice, for things have changed since the Amendment’s ratification in 1791. For instance, we didn’t have trash collection back then. Madison might have been amused—or appalled—to see that, nearly two centuries later, none other than the U.S. Supreme Court would have to weigh in on when cops can legally dig through your trash.[1]

There were other things besides trash collection that no one foresaw in 1791, such as hidden cameras and microphones, tracking devices, radar that detects motion through walls, and camera-carrying drones, to name a few. And we didn’t have cloud technology.

Warrants collided with the cloud in Microsoft Corporation v. United States of America.

It all began with a 2013 drug trafficking case in New York. The government jumped through the right hoops and obtained a warrant for documents stored on Microsoft servers. Microsoft said—I’m paraphrasing somewhat liberally here—“Sure, fine, you can search our servers located in the U.S., but some of the data you want resides on servers in Ireland, where you have no jurisdiction, so, no.”

A series of suits followed. A federal magistrate judge ordered Microsoft to furnish the data residing in Ireland. Microsoft appealed to a federal district judge, who agreed with the magistrate judge. Next Microsoft appealed to the Second Circuit Court of Appeals. This time, Ireland weighed in, saying they’d kind of like a say as to who accesses data stored on their soil. The appeals court overturned the earlier decisions.

So, the government took the matter to the Supreme Court, which heard arguments in February.

A decision is due later this year. Or not.

Microsoft and the government have since filed a motion with the Supreme Court that says, and again I paraphrase, “Yeah, about that? Never mind.”

That’s because the situation appears resolved by the CLOUD Act, a bill quietly attached to the 2,232-page, $1.3 trillion omnibus spending bill that U.S. Congress passed last month. “CLOUD” is for “Clarifying Lawful Overseas Use of Data.”[2] The CLOUD Act lets “qualifying foreign governments” and the United States access information from servers on each other’s respective soil.

In what might at first blush seem a sudden turnabout, the government and Microsoft both support the CLOUD Act. So do Apple, Google, and others. But that would be curious only if Microsoft’s earlier resistance had been grounded on Fourth Amendment principles. It now appears that Microsoft et al may have been less concerned about individual privacy rights and more concerned about being sued or prosecuted for handing over information. The CLOUD Act provides tech companies complete protection from civil and criminal actions for compliance with government requests.

There are a couple of wrinkles.

Under the CLOUD Act, a “qualifying foreign government” can demand and obtain access to your U.S.-based records without approval from the United States. And, as pointed out by the Electronic Frontier Foundation (EFF) on February 8, not all qualifying governments have privacy laws as stringent as ours.

And then, this question popped into my devious, loophole-seeking mind: Couldn’t a U.S. president skip the whole obtaining-a-warrant thing by asking a foreign country to obtain data—from servers in the U.S.? Apparently the same thing has popped into other devious, loophole-seeking minds. As EFF reported on March 22, this and other issues have more than a few people concerned.

But perhaps we are being unduly paranoid. Never in history have high-ranking U.S. officials abused their power. Right?


[1] Answer: The moment you leave it at the curb. See California v. Greenwood.

[2] Congress and their acronyms. So cute.


Welcome to the
Facebook Follies

I needn’t tell readers of this blog what apparently came as news to a U.S. Senator: Advertising revenues keep the lights on at Facebook and account for Mark Zuckerberg’s $62.2 billion net worth.

Nor need I point out that the more targeted an advertising medium, the more valuable it is to advertisers, and that it is with targeting that Facebook shines. 

With every Facebook action, and with every personality test (Which Muppet are you?), users reveal a good deal more about themselves than their fondness for kitten videos. Facebook abounds with opportunities to disclose your age, location, interests, reading choices, product preferences, religion, sexual orientation, political leanings, eating habits, TV and movie favorites, clothing preferences, music choices, favorite activities, travel habits, marital status, and more. That data is compiled, and it is sortable. 

So, say your product is ideal for married vegan Trekkies who like reggae, drive a Prius, and own a dog. Facebook lets you select and show your ads only to people fitting that profile. (I’m not making that up.) That much seems to upset a lot of people, though it needn’t. Each Facebook user is a data point among billions. Advertisers aren’t interested in peering into your individual life. They’re interested in not wasting money trying to sell steaks to vegans.
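Conceptually, that kind of profile matching is simple filtering. Here is a minimal sketch in Python; the field names, users, and criteria are invented for illustration, and Facebook’s real systems are of course vastly more sophisticated:

```python
# Hypothetical ad-targeting sketch: keep only users whose compiled
# profile data matches every one of the advertiser's criteria.
users = [
    {"name": "A", "diet": "vegan", "married": True,
     "music": "reggae", "car": "Prius", "pet": "dog"},
    {"name": "B", "diet": "omnivore", "married": False,
     "music": "jazz", "car": "sedan", "pet": "cat"},
]

# The advertiser's target profile from the example above.
target = {"diet": "vegan", "married": True,
          "music": "reggae", "car": "Prius", "pet": "dog"}

# Select users matching every criterion; only they would see the ad.
audience = [u for u in users
            if all(u.get(key) == value for key, value in target.items())]
```

The point of the sketch is the mechanism, not the scale: an advertiser never sees user “A” individually; the platform simply routes the ad to whoever falls inside the filter.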

Data-driven targeting benefits users, too. It cuts down the number of irrelevant ads showing up in your feed. (Yes, without it, you’d see even more irrelevant ads.) It lets you enjoy Facebook—and oodles of content that come with it—without having to shell out. If the thought of your data being amassed no matter how it’s used creeps you out on general principle, that’s one thing. Otherwise, Facebook data gathering is arguably helpful.

Then came Cambridge Analytica.

The seeds for trouble spilled onto rich soil shortly after academic psychologist and data scientist Aleksandr Kogan obtained a boatload of data from Facebook. He obtained it in accordance with Facebook policy, so that much wasn’t the problem. The problem was that he then turned around and gave the data, which wasn’t his to give, to British political consulting firm Cambridge Analytica.

There’s a reason Facebook is in hot water even though it was Kogan who broke the rules. “Unlike other recent privacy breakdowns,” wrote TIME’s Lisa Eadicicco earlier this month,

“… thieves or hackers did not steal information. [Facebook] actually just handed the data over, then didn’t watch where it went.” [Italics added.]

What puts Facebook in even hotter water is that Cambridge Analytica’s clients didn’t use the data to sell mac and cheese or hand soap, but to promote political causes and candidates—from Brexit, to Ted Cruz, to Donald Trump.

(Time to pause for a disclaimer: This isn’t about Brexit or Trump. It’s about data.)

The way Cambridge Analytica may have applied the data has people upset. The New York Times painted a scary picture:

One recent advertising product on Facebook is the so-called “dark post”: A newsfeed message seen by no one aside from the users being targeted. With the help of Cambridge Analytica, Mr. Trump’s digital team used dark posts to serve different ads to different potential voters, aiming to push the exact right buttons for the exact right people at the exact right times.

Imagine the full capability of this kind of “psychographic” advertising. In future Republican campaigns, a pro-gun voter whose Ocean score ranks him high on neuroticism could see storm clouds and a threat: The Democrat wants to take his guns away. A separate pro-gun voter deemed agreeable and introverted might see an ad emphasizing tradition and community values, a father and son hunting together.

In this election, dark posts were used to try to suppress the African-American vote. According to Bloomberg, the Trump campaign sent ads reminding certain selected black voters of Hillary Clinton’s infamous “super predator” line. It targeted Miami’s Little Haiti neighborhood with messages about the Clinton Foundation’s troubles in Haiti after the 2010 earthquake. Federal Election Commission rules are unclear when it comes to Facebook posts, but even if they do apply and the facts are skewed and the dog whistles loud, the already weakening power of social opprobrium is gone when no one else sees the ad you see—and no one else sees “I’m Donald Trump, and I approved this message.”

(Time for another disclaimer: This isn’t about the Republican Party, either. Examples focus on the GOP because in the U.S. Cambridge Analytica refuses to work for other parties.)

The fear is less that dark posts might change minds and more that they might push fence-sitting minds to the message-sender’s side. Cambridge Analytica reportedly knows how to identify and push the hot buttons of large numbers of people by sending them tailored messages. If they present misleading or even false information, there’s pretty much no one to call them on it, because those likely to object don’t see those messages.

This, as reported by Reuters, has not helped ease concerns:

The suspended chief executive of Cambridge Analytica said in a secretly recorded video broadcast on Tuesday that his UK-based political consultancy’s online campaign played a decisive role in U.S. President Donald Trump’s 2016 election victory.

Yet some voices are skeptical.

Vox quite bluntly states, “There’s nearly no evidence these ads could change your voting preferences or behavior.”

To be sure, advertising is oft accused of persuasion power it doesn’t have. And as yet no hard data support the claim that dark posts affected the outcome of the Brexit vote or the U.S. 2016 elections. Consider, for instance, that the first U.S. politician to retain Cambridge Analytica was Ted Cruz. As you may have heard, Cruz didn’t secure the nomination.

For that matter, targeted messaging is nothing new. The only difference is that technology can amass data faster, in greater volume, and in near real-time; has sharpened marketers’ aim; and facilitates matching messages to audiences in a way never before seen.

But it’s equally true that it’s premature to dismiss claims about dark posts’ potential to influence undecideds. It may simply be that the technique is so new that there hasn’t been time to execute valid tests. We can assuredly expect those tests very soon.

On a lighter note

Shall we end on a lighter note? Here are three of my favorite questions put to Mark Zuckerberg by U.S. Senators in last week’s hearing:

Is Twitter the same as what you do? —Senator Lindsey Graham, R, South Carolina

I’m communicating with my friends on Facebook, and indicate that I love a certain kind of chocolate. And, all of a sudden, I start receiving advertisements for chocolate. What if I don’t want to receive those commercial advertisements? —Senator Bill Nelson, D, Florida

How do you sustain a business model in which users don’t pay for your service? —Senator Orrin Hatch, R, Utah (where I live). (Zuckerberg: Senator, we run ads.)

How reassuring it is to know that powerful people who don’t understand Facebook are investigating Facebook on our behalf.
