March 5, 2020

What's more trustworthy: Garbage language or a Google algorithm?


This post originally appeared in the March 5, 2020 issue of The Content Technologist with the email subject line "A matter of trust in people, not machines."

Feeling passionately about what I read is a gift rather than a distraction. I have to remember that.

In the past two weeks I’ve let my brain chew endlessly on material I Did Not Like. It’s fairly unusual, as I avoid Hate Reading. I have no time for fusty yelling old men who are upset at their impending irrelevance, so I avoid their watering holes. I do not watch or trust TV news or their attendant websites and fan cultures, full stop. I am never going to like 95% of New York Times editorial opinions, so I don’t click on links in that section unless I’m feeling self-destructive. (Hate Reading also drives truly loathsome digital business practices, but that’s another story for another time.)

Thankfully there is still plenty to read without giving pageviews to the A-holes, but when I expect to like something and I don’t … it’s terrible! Not everything is agreeable. I get distracted. I seethe. I am disappointed. “You are wrong where you should have been right,” I tell the author in my head. “Let me list all of the reasons why you are wrong. Why did I trust you with my time and my brain in the first place?”

“Don’t let it get to you,” I tell myself. “They are entitled to their opinions and have zero impact on your plans today.”

“But this person is smart and trustworthy and has a massive audience and they are saying silly things!” I Gollum-argue back.

And so on until it’s time to write a weekly newsletter.


Playing the language game: Chapter one

What roiled me first: an old-man-yells-at-cloud rant on corporate “garbage language.” Molly Young, who is most definitely not an old man and someone I’ve followed off and on since her Tumblr days, wrote about the meaninglessness of corporate jargon. The topic is certainly well-trodden (hello, Mr. Orwell) and certainly worthy of attention in many cases. I have written before of my hatred of words like “leverage” and “utilize” when you can just fucken say “use,” man.

Young references Anna Wiener’s recent memoir Uncanny Valley, which terms corporate jargon “garbage language.” Both Young and Wiener are frustrated with the corporate obfuscation of meaning, whether it’s in tech or marketing or anything touched by an MBA. Young and Wiener are also frustrated that not everyone is as adept with plain language as they are — not everyone is a writer.

As a comms and publishing person, one of my hardest career lessons was discovering that not everyone is focused on the English language. Despite the many “I am silently correcting your grammar” mugs out there, most people are not writers or editors! Writers and editors work to make published English easier for everyone who isn’t one.

A man reads from a paper, "You are technically correct. The best kind of correct." Gif from The Simpsons

The English language is confusing and hardly a perfect communication system in itself. So different cultures develop shorthand and vernacular.

Workplace culture is its own thing. People who have studied the workplace have invented their own shorthand/jargon. Using corporate jargon can be a status symbol — sound like you have an MBA even if you don’t! — and a way to hedge when you’re uncertain. Especially if you like and respect the people you’re working with, corporate vernacular can actually be useful. It can be a tool for making the workplace easier.

In many cases, the vernacular is worth complaining about! For fuck’s sake, both “insights” and “findings” have fewer letters and make more grammatical sense than “learnings.” But some, like RACI — which Young calls out specifically — can be helpful systems for organizing projects.

Accountable or informed?

If you’re not familiar, RACI can be a marvelous tool when you have far too many people who want to have input on a particular project, and you need to narrow down the number of voices who are keeping you from getting things done. It’s a nice way of telling your executive, “I know you’ve been invited to work on projects like this in the past, but in this case, we’re going to let a smaller team have accountability.” Or, “Look, we value your opinion, but you’re in the Informed bucket because you keep asking questions that don’t matter right now and I want to go home.”
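For the unfamiliar, the “thing” is, physically, usually just a chart: a matrix mapping tasks to people, with one of the four letters in each cell. A minimal sketch in Python (the tasks and names here are invented for illustration):

```python
# A RACI matrix is just a lookup: (task, person) -> role.
# R = Responsible (does the work), A = Accountable (owns the outcome),
# C = Consulted (gives input), I = Informed (kept in the loop).

raci = {
    "Write homepage copy":   {"Dana": "R", "Priya": "A", "Exec": "I"},
    "Approve site redesign": {"Dana": "C", "Priya": "R", "Exec": "A"},
}

def role(task, person):
    """Return a person's role on a task, or None if they're not involved."""
    return raci.get(task, {}).get(person)

print(role("Write homepage copy", "Exec"))  # the exec is merely Informed: "I"
```

That’s the whole trick: one letter per person per task, so everyone can see at a glance who decides and who just gets the memo.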

Young recreates this exchange:

CO-WORKER: Going forward, we’ll be using a RACI for all projects.
MOLLY: What’s a RACI?
CO-WORKER: RACI stands for “Responsible, Accountable, Consulted, and Informed.” The RACI will be distributed around so that we’re all aligned and on the same page.
MOLLY: But what is this thing, like, physically? Is it a chart?
CO-WORKER: It’s hard to explain.

I empathize with the point of view of the co-worker, most likely a project manager of some sort. It sounds like there were too many people involved, and the co-worker was trying to create some kind of order to get work done.

In my view, the Molly character in this story is being a dismissive jerk who isn’t working with her team. The RACI is “this thing.” She’s not even listening to the co-worker, who sounds fed up with Molly’s questioning. Molly doesn’t trust her co-worker simply because she’s not using the same vernacular. To me, this reads as “You have a different professional vocabulary than I do, so I value you less.”

Why did this get to me? I once had a similar encounter with a coworker. I was putting together a RACI for a massive web redesign project. My coworker scrunched her nose as if smelling something foul and said, curtly, “[In my former position in the highly successful content marketing department] they had RACIs too. They were so corporate. I didn’t like them.”

I thought: thanks for completely dismissing my work on getting internal support and processes for your project, colleague! Great to know that we’re on the same team.

I thought: I’ll mark you as Informed, then.

I thought all kinds of other sarcastic thoughts that made me feel nasty and gross. And it sucked because I thought we were on the same team. I was a writer and editor (albeit working on a different team at this organization), and she was an editor. We were both editors! We should be on the same team. Even if I were in sales and you were clinging to the idea that salespeople are evil (ugh, please let that go; we’re all just trying to get by), it’s not helping anyone to throw up roadblocks if we’re both trying to accomplish the same goal.

Can’t we all just get along?

One of the 20th century’s worst business practices is the idea that internal groups should compete with each other: that sales and editorial should never talk, or that one team has bad ideas. For some reason, legacy media still encourages this constant internal drama. It’s the same system that champions Hate Reading.

A sign of a terrible company culture is when colleagues default to shitting on other teams or teammates or to shitting on clients. In any team, you won’t agree with every decision or every client comment, but it’s way easier if you’re generally accepting of the words and systems others use.

Or, get out and do your own thing (says the independent consultant who is independent for a reason). I mean, I’m guilty of all of the above dismissiveness. But. I dunno. Don’t crow about it, I guess. (But am I guilty of crowing too?)

In a gif, Gollum says "You don't have any friends. Nobody likes you!"
Keeping an eye on the bigger picture of communication culture

There are all kinds of alarms to be raised about productivity culture. About corporate culture. About tech culture. About the culture of American capitalism. About cultish startups. About the goal of revenue above all things. About words we use that have no meaning. About the words we use that do.

The corporate vocabularies and vernaculars that your colleagues use aren’t worthy of raising those alarms. Your teammates are just trying to get through their days, and if they’re not massively unethical, give them the benefit of the doubt. If you don’t understand a term, do your own research (and not just on Wikipedia) about why they might be using that term or process.

To be fair, I also spew sarcasm at people who say, “Everyone should learn to code!” Coding is great for people who like to code! All those delicious coding languages come with their own possibilities and their structures and I understand how that can be incredibly fulfilling.

In a diverse society, not everyone needs to learn how to write elegant code. Some people learn to write elegant language. Some people develop elegant systems to organize other people. You can find elegance and grace in any role.

Life’s hard enough without your coworkers being dicks about how you communicate. Conversation and curiosity — and not the disingenuous questions above — go a long way.

No matter where you are, trust your colleagues. If you have disagreements, raise them gracefully. Complaining about MBA vernacular doesn’t make you a courageous whistleblower. Don’t be a dick.

All that said — Young’s other points about Away and using business language to be punitive or obscure: I agree with those! But it’s not helpful to dismiss words you don’t understand entirely as “garbage language” if your true issue is with the way you’re being treated. Blame the cause, not the symptom, to use a garbage language cliche.

The cause is an overbroad argument for technological dominance: Chapter two

I’m a fan of Avinash Kaushik, whose plain-language explanations of web analytics have gone a long way toward eliminating unnecessary jargon in corporate situations. Pretty much the only reason I’m able to work in analytics as an English major is because of Kaushik (and a few additional teachers).

Kaushik advocates explaining analytics in human terms.

A couple of things: Kaushik’s cultural experience is entirely different from mine, and he comes with an entirely different perspective. I respect that. Also, he’s a company man! He works at Google. They have treated him well.

As much as I like web analytics as a tool for understanding human behavior, and as much as I like diffusing analytics into plain language, I have a hard time with his most recent email newsletter about trusting the Black Box of the algorithm. In it, he wrote:

The Machine Learning challenge landscape is multi-dimensional, but perhaps the key challenge is that we, more often than not, don't really understand how algorithms do what they do.

Kaushik’s argument is that we don’t always understand why technologies like flight and space travel work, but they do, and we couldn’t imagine life without them. It’s a more traditional technologist’s argument: The ultimate benefit to society is worth the trials and errors on the way.

You could argue that flight and space travel are technologies that have ultimately contributed to climate change and are they really good to begin with? etc. etc. Not gonna do that today!

My gut response has far more to do with my well-documented Problem with Authority.

I agree that we don’t really understand how machine learning works or how algorithms tweak themselves. We don’t!

But we do understand the inputs that humans build into algorithms, and without transparency into those inputs (i.e., Google describing what goes into its algorithm, which it used to do) — it’s super difficult to trust the results.

By “super difficult” I mean: no, I don’t think it’s possible to ever trust the results of a computation without understanding all of the inputs. Even if the results of the computation are pretty good, as Google search results are, it’s fair to ask questions about the process. I don’t think that we should dismiss the entirety of data collection and algorithms, as many privacy advocates believe we should. But framing opposition to data collection and the computer processing of marketing and search results as “an impediment to progress” is massively hard to swallow.
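The point about inputs can be made with a toy example. Suppose a ranking score is just a weighted sum of observable signals, where the weights are the hidden part. Everyone can see the signals, but whoever sets the weights decides who wins — a sketch, with entirely made-up signals and numbers:

```python
# Toy ranking: score = weighted sum of signals. The *signals* are visible;
# the *weights* are the hidden part of the black box. (All values invented.)

HIDDEN_WEIGHTS = {"relevance": 0.6, "freshness": 0.1, "backlinks": 0.3}

def score(page_signals):
    """Combine observable signals using weights outsiders never see."""
    return sum(HIDDEN_WEIGHTS[k] * v for k, v in page_signals.items())

page_a = {"relevance": 0.9, "freshness": 0.2, "backlinks": 0.4}
page_b = {"relevance": 0.5, "freshness": 0.9, "backlinks": 0.4}

# Under these weights page_a outranks page_b; tilt the weights toward
# freshness instead and page_b wins — same observable inputs, opposite result.
print(score(page_a), score(page_b))
```

Nothing here is magic: change three numbers nobody outside the company ever sees, and the “objective” ranking flips. That’s why trusting the output without seeing the inputs is a leap of faith.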

Who decides what goes into the black box of the algorithm?

Average people have zero insight into how any of the many algorithms that shape their lives work — Google algorithms or any of the others out there. Even if user signals, such as where people click or what they like, affect an algorithm, users have no say in how much power we assign to that algorithm, what biases the algorithm incorporates, or how that algorithm shapes our lives.

Even someone like me, who spends time decoding ranking factors and has a sophisticated understanding of Google search, can’t swallow “we just have to let the machine work and we will never know.” To me, that’s treating the algorithm as magic.

Gif of Mickey Mouse sorcerer magically adding arms to a broom

Machine learning isn’t magic. Algorithms are built by people and can be broken down by people.

There are plenty of factors in each algorithm that aren’t publicly available. I can study the Google algorithm all I want, looking at word count and E-A-T and every clue that Googlers like Gary Illyes and John Mueller drop, or at what outsiders like Rand Fishkin say. But none of those tell the full story.

Those inputs and processes are protected with intellectual property laws and non-disclosure agreements and hidden by capitalists paranoid about competition. How can we trust an algorithm’s result when we have no idea who asked the question, how they asked it, or what the intended result was?

Politely refusing to trust the machine

Content algorithms are amazing and helpful and ultimately beneficial to society. But the argument of “trust the black box and trust the process” shuts down a necessary ethical conversation. Who gets to decide what goes into the black box? Those people all work for Google and they will be sued if they tell you.

So, Avinash, I’m going to trust you generally, but not on this one. Dismissing questions about technology as an “impediment to progress” is magical thinking.

Magic does not exist. Algorithms are not magic.

With those larger ethical questions — which have nothing to do with my teammates and which Kaushik will probably never read — I’m ok with putting up the stonewall. I’m all right with taking the Molly Young approach and directly challenging, “ok, but what is this thing, like, physically?” I’m ok with being difficult. I’m ok with being a dick.


Related articles on The Content Technologist

The business of content | AI | big tech


Want more Content Technologist in your inbox every Thursday? Forever free for the first 1,000 subscribers.