





  09:52 AM

Carrying on with adventures using the Tumblr API. (Part 1, Part 2)

As noted, I decided that I wanted to create a local HTML file out of my downloaded/exported Tumblr posts. In my initial cut, I iterated over the list of TumblrPost instances that I'd assembled from the downloaded posts and then wrote out a bunch of hard-coded HTML. This worked, but it was inflexible, to say the least—what if I wanted to reorder items or something?

So I fell back on yet another old habit. I created a "template" of the HTML block that I wanted, using known strings in the template that I could swap out for content. Here's the HTML template layout, where strings like %%%posttitle%%% and %%%posturl%%% are placeholders for where I want the HTML to go:
<!-- tumblr_block_template.html -->
<div class="post">
    <div class="posttitle">%%%posttitle%%%</div>
    <div class="postdate">%%%postdate%%%</div>
    <div class="posttext">%%%posttext%%%</div>
    <div class="postsource">%%%postsource%%%</div>
    <div class="posturl"><a href="%%%posturl%%%"
        target="_blank">%%%posturl%%%</a></div>
    <div class="postctr">[%%%postcounter%%%]&nbsp;
        <span class="posttype">%%%posttype%%%</span>
    </div>
</div>
The idea is to read the template, read each TumblrPost item, swap the appropriate member in for each placeholder, and build up a series of these blocks. Here's the code to read the template and build the blocks of content:
html_output = ''

html_file = open('c:\\Tumblr\\tumblr_block_template.html', 'r')
html_block_template = html_file.read()
html_file.close()

ctr = 0
for p in sorted_posts:
    new_html_block = html_block_template
    ctr += 1
    new_html_block = new_html_block.replace('%%%posttitle%%%', p.post_title)
    new_html_block = new_html_block.replace('%%%postdate%%%', p.post_date)
    new_html_block = new_html_block.replace('%%%posttext%%%', p.post_text)
    new_html_block = new_html_block.replace('%%%postsource%%%', p.post_source)
    new_html_block = new_html_block.replace('%%%posturl%%%', p.post_url)
    new_html_block = new_html_block.replace('%%%posttype%%%', p.post_type)
    new_html_block = new_html_block.replace('%%%postcounter%%%', str(ctr))
    html_output += new_html_block
To embed these <div> blocks into an HTML file, I did the same thing again—I created a template .html file that looks like this:
<!-- tumblr_template.html -->
<html>
<head>
  <link rel="stylesheet" href="tumbl_posts.css" type="text/css">
  <meta http-equiv="content-type" content="text/html;charset=utf-8">
</head>
<body>
<h1>Tumblr Posts</h1>
%%%posts%%%
</body>
</html>
With this in hand, I can read the template .html file and do the swap thing again, and then write out a new file. To actually write the file, I generated a timestamp to use as part of the file name: 'tumbl_bu-' plus %Y-%m-%d-%H-%M-%S plus '.html'.

There was one complication. I got some errors while writing the file out, which turned out to be an issue with Unicode encoding—apparently certain cites that I pasted into Tumblr contain characters that can’t be converted to ASCII, which is the default encoding for writing out a file. The solution is to use the codecs module, which lets you specify the encoding when you open the output file. (It’s possible that this is a problem only in Python 2.x.)
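If you want to see the issue in isolation, here's a minimal sketch of the fix (Python 2.x assumed; the file name is just for illustration):
import codecs

text = u'Curly quotes \u201clike these\u201d defeat the default ASCII encoding'

# A plain open('...', 'w').write(text) raises UnicodeEncodeError in Python 2.x;
# codecs.open lets you specify the encoding up front.
with codecs.open('c:\\Tumblr\\unicode_test.html', 'w', 'utf-8-sig') as f:
    f.write(text)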

Here’s the complete listing for the Python script. (I wrapped some of the lines in a Python-legal way to squeeze them for the blog.)
import datetime, json, requests
import codecs # For converting Unicode in source

class TumblrPost:
    def __init__(self,
                 post_url,
                 post_date,
                 post_text,
                 post_source,
                 post_title,
                 post_type):
        self.post_url = post_url
        self.post_date = post_date
        self.post_text = post_text
        self.post_source = post_source
        self.post_type = post_type
        if post_title is None or post_title == '':
            self.post_title = ''
        else:
            self.post_title = post_title

all_posts = []      # List to hold instances of the TumblrPost class
html_output = ''    # String to hold the formatted HTML for all the posts
folder_name = 'C:\\Tumblr\\'

# Get the text posts and add them as TumblrPost objects to the all_posts list
print "Fetching text entries ..."
request_url = 'http://api.tumblr.com/v2/blog/mikepope.tumblr.com/posts/text?api_key=[MY_KEY]'
offset = 0
posts_still_left = True
while posts_still_left:
    request_url += "&offset=" + str(offset)
    print "\tFetching text entries (%i) ..." % offset
    tumblr_response = requests.get(request_url).json()
    total_posts = tumblr_response['response']['total_posts']
    for post in tumblr_response['response']['posts']:
        # See https://www.tumblr.com/docs/en/api/v2#text-posts
        p = TumblrPost(post['post_url'],
                       post['date'],
                       post['body'], '',
                       post['title'],
                       'text')    # No source for text posts
        all_posts.append(p)
    offset += 20
    if offset > total_posts:
        posts_still_left = False

# Get the quote posts and add them as TumblrPost objects to the all_posts list.
print "Fetching quote entries ..."
request_url = 'http://api.tumblr.com/v2/blog/mikepope.tumblr.com/posts/quote?api_key=[MY_KEY]'
offset = 0
posts_still_left = True
while posts_still_left:
    request_url += "&offset=" + str(offset)
    print "\tFetching quote entries (%i) ..." % offset
    tumblr_response = requests.get(request_url).json()
    total_posts = tumblr_response['response']['total_posts']
    for post in tumblr_response['response']['posts']:
        # See https://www.tumblr.com/docs/en/api/v2#quote-posts
        p = TumblrPost(post['post_url'],
                       post['date'],
                       post['text'],
                       post['source'], '',
                       'quote')   # No title for quote posts
        all_posts.append(p)
    offset += 20
    if offset > total_posts:
        posts_still_left = False

sorted_posts = sorted(all_posts,
                      key=lambda tpost: tpost.post_date,
                      reverse=True)

print "Creating HTML file ..."

# Read a file that contains the HTML layout of the posts,
# with placeholders for individual bits of data
html_file = open(folder_name + 'tumblr_block_template.html', 'r')
html_block_template = html_file.read()
html_file.close()

ctr = 0
for p in sorted_posts:
    new_html_block = html_block_template
    ctr += 1
    new_html_block = new_html_block.replace('%%%posttitle%%%', p.post_title)
    new_html_block = new_html_block.replace('%%%postdate%%%', p.post_date)
    new_html_block = new_html_block.replace('%%%posttext%%%', p.post_text)
    new_html_block = new_html_block.replace('%%%postsource%%%', p.post_source)
    new_html_block = new_html_block.replace('%%%posturl%%%', p.post_url)
    new_html_block = new_html_block.replace('%%%postcounter%%%', str(ctr))
    new_html_block = new_html_block.replace('%%%posttype%%%', p.post_type)
    html_output += new_html_block

# The template has a placeholder for the content that's generated dynamically
html_file = open(folder_name + 'tumblr_template.html', 'r')
html_file_contents = html_file.read()
html_file.close()
html_file_contents = html_file_contents.replace('%%%posts%%%', html_output)

# Open (i.e., create) a new file with the ability to write Unicode.
# See http://stackoverflow.com/questions/934160/write-to-utf-8-file-in-python
file_timestamp = datetime.datetime.now().strftime('%Y-%m-%d-%H-%M-%S')
with codecs.open(folder_name +
                 'tumbl_bu-' +
                 file_timestamp +
                 '.html', 'w', "utf-8-sig") \
        as new_html_file:
    new_html_file.write(html_file_contents)

print 'Done!'



  09:17 PM

I wonder how many people do this. Let’s say I’m reading something on Wikipedia, and a paragraph includes a link that’s seductively drawing my attention away from the current article. In a show of resistance to ADHD, I won’t just click that link—instead, I’ll Ctrl+click it, thus opening the linked page in another tab “for later.”

After some amount of reading, I’ll have, oh, a dozen tabs open in the browser:


Or 20. Or 30. In another exhibit of discipline, I will occasionally drag all of these open tabs from the many and various browser windows I have open into a single browser window. Now, that’s organized.

Perhaps it’s the “for later” part that I’m wondering about. I just checked some of the pages in those tabs in the screenshot. As near as I can tell, the oldest one goes back about three months. Here’s a sampling of the pages I currently have open:
  • The Tumblr API reference
  • Three (!) articles on time perspective.
  • An article on how to use Twitter for business.
  • The article “Complements and Dummies” by John Lawler, a linguist.
  • An article on high-impact training in 4 minutes.
  • An article on how to create effective to-do lists.
  • An article on how to adjust the aim of the headlight on a motorcycle.
  • The syllabus, wiki, and video page for a Coursera course I’m taking.
  • A Wikipedia article about the 1952 steel strike (related to the previous).
You can see that these are all pages that I want to keep handy, ready to read when I get a few spare minutes.

My officemate and I were talking about this today, and it turns out he does something similar. My collection of open tabs has survived several computer reboots (thanks, Chrome!), and my officemate confirms that his collection has persisted through a number of upgrades to Firefox.

It seems like a logical approach would be to bookmark these pages, either in the browser, or using some sort of bookmarking site like Pinterest or (ha) Delicious. Or heck, OneNote or EverNote.

But in my case, tossing a link into any of these is almost the equivalent of throwing it into a black hole. Yes, I have the link, but I don’t make a habit of going back to my saved links and looking for things that had struck my fancy days or weeks or months ago.

No, the habit of keeping these pages open seems to act as a kind of short-term bookmarking. Now and then I might actually click on one of the tabs just to remind myself of why I have all these pages open. For the most part, any given page still looks interesting, so I don’t want to close it. After all, I still intend to read that page Real Soon Now.



  10:51 AM

I’m not sure whether this is an eggcorn or just a homonym mistake whose tense logic amused me. I was reading an article and ran across the following (picture here in case they edit the text later):

(The text of interest says “the diatribe was entirely representative of the reality, which is bared out not only by the aforementioned Pew poll, but another Pew poll”)

The author intended to bear out, meaning to “substantiate, confirm” (see definition 30). One reason to suspect that this is an eggcorn is that, as with eggcorns generally, the word substitution sort of makes sense: to bare out could mean, perhaps with a little squinting, something along the lines of “to make bare,” hence perhaps to make obvious.

And as I say, I liked the logic of the past tense. The past participle of bear out is borne out (or born out). Thus this sentence was intended to read “… which is born(e) out not only by …”. But if you substitute bare, you’ve got a verb with a regular past tense, so it is inevitably bared out.

Eggcorns are interesting because they offer a tiny peek into how speakers parse and interpret things they hear. (And they are primarily based in sound, not reading.) Chris Waigl maintains a great database of eggcorns that’s fascinating to browse through for just this reason.

You don’t find eggcorns—or whatever this mistake is—in formal articles nearly as often as you do in blog posts or other unedited material. So this is, I think, a real find. :-)



  12:14 PM

This is part 2. (See part 1.)

Previously on Playing with the Tumblr API:

“I have a Tumblr blog …”
“Tumblr’s search feature is virtually useless …”
“However, Tumblr does support a nice RESTful API …”
“I wanted to build an HTML page out of my posts …”


As I described last time, once you’ve registered an app and gotten a client ID and secret key, it’s very easy to call the Tumblr API to read posts:
http://api.tumblr.com/v2/blog/mikepope.tumblr.com/posts/text?api_key=secret-key
This API returns up to 20 posts at a time. Each response includes information about how many total posts there are that match your request criteria. So to get all the posts, you make this request in a loop, adding an offset value that you increment each time, and stopping when you’ve hit the total number of posts. Here’s one way to do that in Python:
import json, requests

request_url = 'http://api.tumblr.com/v2/blog/mikepope.tumblr.com/posts/text?api_key=key'
offset = 0
posts_still_left = True
while posts_still_left:
    request_url += "&offset=" + str(offset)
    tumblr_response = requests.get(request_url).json()
    total_posts = tumblr_response['response']['total_posts']
    for post in tumblr_response['response']['posts']:
        # Do something with the JSON info here
        pass
    offset += 20
    if offset > total_posts:
        posts_still_left = False
I’m using the awesome requests library (motto: “HTTP for Humans”) to make the API requests. The response is in JSON. In raw Python, the return value is typed as requests.models.Response, but calling its json() method converts that to a dict. You can then easily pluck out the values you want. Here, for example, I’m extracting the value of the total_posts field. Inside the response element there’s a posts array that contains the guts of each of the 20 posts that the response returns.
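For reference, here's roughly the shape of the dict that comes back, trimmed down to just the fields this project cares about (values elided):
# Trimmed-down sketch of the converted response:
# {
#     'response': {
#         'total_posts': ...,
#         'posts': [
#             {'post_url': '...', 'date': '...', 'title': '...', 'body': '...'},
#             # ... up to 20 of these per request
#         ]
#     }
# }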

Normalizing Post Info

I noted before that I was interested in 2 (text, quote) of the 8 types of posts that Tumblr supports, and that different post types return somewhat different info. The JSON for a Tumblr post contains a lot of information—much of it metadata like state (published, queued), note_count, tags, and other stuff that, while essential to Tumblr’s purposes, did not interest me personally. I’m interested in just these things: post_url, date, title, and body (text posts) or source (quote posts).

To normalize this information, I fell back on old habits: I created a TumblrPost class in Python and defined members that accommodated all of the JSON values I was interested in across both post types:
class TumblrPost:
    def __init__(self, post_url, post_date, post_text, post_source, post_title):
        self.post_url = post_url
        self.post_date = post_date
        self.post_text = post_text
        self.post_source = post_source
        if post_title is None or post_title == '':
            self.post_title = ''
        else:
            self.post_title = post_title
Should I want at some point to accommodate additional types of posts, I can add members to this class. I guess.

Having this class lets me read the raw JSON in a loop and create an instance of the class for each Tumblr post I read. I can then just add the new instance to a Python list. My code to read text posts looks like the following:
all_posts = []      # List to hold instances of the TumblrPost class

request_url = 'http://api.tumblr.com/v2/blog/mikepope.tumblr.com/posts/text?api_key=key'
offset = 0
posts_still_left = True
while posts_still_left:
    request_url += "&offset=" + str(offset)
    print "\tFetching text entries (%i) ..." % offset
    tumblr_response = requests.get(request_url).json()
    total_posts = tumblr_response['response']['total_posts']
    for post in tumblr_response['response']['posts']:
        p = TumblrPost(post['post_url'], post['date'], post['body'], '', post['title'])
        all_posts.append(p)
    offset += 20
    if offset > total_posts:
        posts_still_left = False

Reading both Text and Quote Posts

So that took care of reading text posts. As I say, quote posts have a slightly different JSON layout, such that reading the JSON and instantiating a TumblrPost instance looks like this (no body, but a source):
p = TumblrPost(post['post_url'], post['date'], post['text'], post['source'], '')
I debated whether to tweak my loop logic to try to accommodate both text-type and quote-type requests in the same loop. In that case, the loop has to a) issue a slightly different request (with /quote? instead of /text?) and then b) extract slightly different JSON when creating the TumblrPost instances. This would require a variable to track which type of post I was reading and then some if logic to branch to the appropriate request and appropriate instantiation logic. Bah. In the end, I just copied this loop (*gasp!*) and changed the couple of affected lines.
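For what it's worth, a parameterized version might have looked roughly like the following sketch. (This is my speculation, not code from the actual script; the fetch_posts name is made up, and it reuses the requests library and the TumblrPost class from above.)
import requests

def fetch_posts(post_type):
    # Hypothetical helper: fetch every post of one type ('text' or 'quote')
    # and return a list of TumblrPost instances (class defined earlier)
    base_url = ('http://api.tumblr.com/v2/blog/mikepope.tumblr.com/posts/'
                + post_type + '?api_key=key')
    posts = []
    offset = 0
    total_posts = None
    while total_posts is None or offset < total_posts:
        response = requests.get(base_url + '&offset=' + str(offset)).json()
        total_posts = response['response']['total_posts']
        for post in response['response']['posts']:
            if post_type == 'text':
                p = TumblrPost(post['post_url'], post['date'],
                               post['body'], '', post['title'])
            else:
                # Quote posts: text and source, but no title
                p = TumblrPost(post['post_url'], post['date'],
                               post['text'], post['source'], '')
            posts.append(p)
        offset += 20
    return posts

all_posts = fetch_posts('text') + fetch_posts('quote')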

Next up: Creating the actual HTML, and then done, whew.



  03:05 PM

One of the delights of my job has always been the chance to work with people from all over, and I mean, like, from all over the globe. A nice side effect is that people bring their unique brands of English with them, affording endless opportunities to listen to, read, and think about the vast dialectal variations in our language.

One of our developers has the task of sending out a biweekly email with tips and tricks about using our tools. He happens to be from Sri Lanka, so his English is primarily informed by British usage, and the subject line of his email read “Tip of the Fortnight.” Apparently having second thoughts after the email went out, he popped into my office and asked “Will people understand the term fortnight?”

I think it’s safe to say that literate Americans understand fortnight just fine. But it’s not a term that many Americans produce, I think. I lived in England for a couple of years, and I got very used to expressions like a fortnight’s holiday, but even with this exposure, the term never entered my active vocabulary.

His question, tho, sent me on a bit of a quest to try to determine what the, you know, isogloss is for fortnight. Right across the hall from me is a Canadian, so I asked him. Nope, he said, they don’t use it. My wife has cousins in Australia, so I sent a query off to one of them. Oh, yes, they use it all the time, she said. In fact, she asked, what do you say in the States when you're referring to something on a two-weekly basis? Good question, which underscored why fortnight is such a handy word. I mean, really: how do you phrase "Tip of the Fortnight" in American English?

The word has a long history—according to the OED, it goes back to Old English (first cite 1000), and if I read their note right, Tacitus referred to a Germanic way of reckoning time by nights. (Interestingly, the most recent cite in the OED is for 1879, not that they really needed a cite more recent than that for a term that is in everyday use in Britain.)

I looked in a couple of dictionaries, but neither of them indicated anything along the lines of “chiefly Br.”, as they occasionally will with a regional term. The two usage guides I have handy, Garner and the MWDEU, are both silent on the term. (I slightly expected Garner to comment on the term’s use in, say, legal writing, but nope.)

But I’ll stick to my now-anecdotally based theory that fortnight is just not used much in North American English. Still, I don’t think my colleague had much to worry about regarding the subject line of his email. As I say, I’m pretty sure that my American and Canadian colleagues recognize the term. And of course, many others come from places where it’s a perfectly normal word, and like the cousin, they might wonder why we don't adopt such an obviously useful term.



  10:43 PM

I have a Tumblr blog where I stash interesting (to me) quotes and citations that I've run across in my readings. Tumblr has some nice features, including a queue that lets you schedule a posting for "next Tuesday" or a date and time that you pick.


Tumblr Woes

However, Tumblr’s search feature is virtually useless, which I sorely miss when I want to find something I posted in the distant past. As near as I can tell, their search looks only for tags, and even then (again, AFAICT) it doesn't scope the search to just one (my) blog.

In theory, I can display the blog and use the browser's Ctrl+F search to look for words. Tumblr supports infinite scroll, but, arg, in such a way that Ctrl+F searches cannot find posts that I can plainly see right in the browser.

When search proved useless, I thought I might be able to download all my posts and then search them locally. However, and again AFAICT, Tumblr has no native support for exporting your posts. There once was a utility/website that someone had built that allowed you to export your blog, but it's disappeared.[1]

APIs to the Rescue

However, Tumblr does support a nice RESTful API. Since I've been poking around a bit with Python, it seemed like an interesting project to write a Python script to make up for these Tumblr deficiencies. I initially thought I'd write a search script, but I ended up writing a script to export my particular blog to an HTML file, which actually solves both of my frustrations—search and export/backup.

Like other companies, Tumblr requires you to register your application (e.g. "mike's Tumblr search"), and in exchange they give you OAuth credentials that consist of a "consumer key" and a "secret key." You use these keys (most of the time) to establish your bona fides to Tumblr when you make requests using the API.

(Side note: They basically have three levels of auth. Some APIs require no key; some require just the secret key; and some require that you use the Tumblr keys in conjunction with OAuth to get a temporary access key. This initially puzzled me, but it soon became clear that their authentication/authorization levels correspond with how public the information is that you want to work with. To get completely public info, like the blog avatar, requires no auth. In contrast, using the API to post to the blog or edit a post requires full-on OAuth.)

Tasks I Needed to Perform

The mechanics of what I wanted to do—namely, get all the posts—are trivially easy. For example, to get a list of posts of type "text" (more on that in a moment), you do this:

http://api.tumblr.com/v2/blog/mikepope.tumblr.com/posts/text?api_key=secret-key

This returns 20 posts' worth of information in a JSON block that's well documented, and which includes paging markers so that you can get the next 20 posts, etc. In a narrow sense, all I needed to do was to issue a request in a loop to get the blog posts page by page, concatenate everything together, and write it out. I’d then have a “backup”—or at least a copy, even if it was a big ol’ JSON mess—of my entries, and these would be somewhat searchable.
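In sketch form, that minimal loop-and-dump version might look something like this (the output file name is just for illustration, and this covers only one post type):
import json, requests

request_url = 'http://api.tumblr.com/v2/blog/mikepope.tumblr.com/posts/text?api_key=secret-key'
all_raw_posts = []
offset = 0
total_posts = None
while total_posts is None or offset < total_posts:
    response = requests.get(request_url + '&offset=' + str(offset)).json()
    total_posts = response['response']['total_posts']
    all_raw_posts.extend(response['response']['posts'])
    offset += 20

# Dump the accumulated JSON to a local file (not pretty, but at least searchable)
with open('c:\\Tumblr\\tumblr_dump.json', 'w') as f:
    json.dump(all_raw_posts, f, indent=2)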

As it happens, you use different queries to get different types of posts. Tumblr supports eight types of posts—text, quote, link, answer, video, audio, photo, and chat. Each type requires a separate query[2], and each returns a slightly different block of JSON. For just basic read-and-dump, it’s a matter of looping through again, but this time with a slightly different query.

So that’s the basics. As noted, I got this idea that I wanted to build an HTML page out of my posts, and that complicated things. But not too terribly much. (I’m using Python, after all, haha). More on that soon.

Update: Part 2 is now up.

[1] One of the original reasons I got interested in writing this blog, in fact, was that LiveJournal did not support any form of search way back in 2001.

[2] Their docs suggest that if type is left blank, they'll return everything, but that was not my experience.



  11:15 AM

Let's start with bump. Among its definitions is "raise" or "rise," along these lines. There is some subtlety here to the definition; there's a connotation of a non-linear increase, as a bump might appear on a graph.

Anyway, by this definition, if something increases in speed, that would be a … speed bump, right? That's how the author of an article in Ars Technica intended it:
The Web is going to get faster in the very near future. And sadly, this is rare enough to be news.

The speed bump won't be because our devices are getting faster, but they are. It won't be because some giant company created something great, though they probably have. […]
Except ... not. A speed bump does precisely the opposite: it's a device designed specifically to reduce speed. (On a recent trip to Costa Rica, we learned that a speed bump there is referred to as a reductor de velocidad, an admirably straightforward term.)

Obviously, if you back up and read the sentence again, you get the intent. And perhaps the term speed bump in its traffic-calming sense isn't known as widely as I imagine, and therefore would not cause many people to, um, slow down. But a simple edit—e.g., "The bump in speed"—would have fixed this small ambiguity.

It never hurts to have someone else read through your text. You never know when their slightly different understanding of the world will send them off in the wrong direction based on what you've written.

And now, back to actually reading the article, which is actually quite fascinating.



  11:02 PM

One kind of writing error (I am tempted to put that into quotation marks) that editors catch is the so-called dangling modifier. In this construction, a modifying phrase appears, on close inspection, to have no antecedent. Here's an example:
Walking up the driveway, the flowers looked beautiful.
The dangling aspect is this: who is it that's walking up the driveway? IOW, what does "Walking up the driveway" actually modify? It sure isn't the flowers.

Once you're attuned to dangling modifiers, you'll find them everywhere. For example, I hear them in radio ads all the time. Not long ago, I found this example on a poster in our bus station that was advertising Montana tourism:


(In case you can't read it, it says "As a kid the Mission Mountains were my backyard.")

And that's just the thing: dangling modifiers are quite common. Merriam-Webster's Dictionary of English Usage has a whole slew of examples that go back to the 17th century. Under most circumstances, listeners or readers seem to have no trouble mentally filling in the missing antecedent from context. Indeed, MWDEU observes that "... they may hardly be noticeable except to the practicing rhetorician or usage expert." That's certainly been my experience as an editor—I've not only had to point out dangling modifiers to writers, but I've often had to go through the exercise of explaining why they're (nominally) wrong, as I've done here.

But sometimes opening modifiers do sow confusion. I was reading a movie review by David Denby in The New Yorker today and was taken aback by a dangler. The movie concerns two men (Steve Coogan and Rob Brydon) who are touring around Italy in a car. Here's the sentence that struck me:
Ogling the scenery in "The Trip to Italy," you wonder if the men's small car—a Mini Cooper—will drive off the edge of a cliff, or if, when they board a yacht in the Golfo dei Poeti, someone will fall overboard and drown.
Who exactly is doing the ogling here? The nearest noun (well, pronoun) is "you." Am I doing the ogling? The next available noun is "the men's car," which is not likely to be ogling. Is it maybe "the men" (only making a brief appearance in the genitive) or maybe just "they" (which does appear as a subject in one of the clauses) who are ogling?

Perhaps I'm making too much of this, and Denby really does mean me-the-viewer. But the whole sentence—or the opening modifier, anyway—threw me enough that I had to stop and think about it for a considerable time. And another editorial rule suggests that if your readers have to stop and think, the sentence isn't working.



  12:04 PM

I share an office with a fellow writer—let's call him Colleague B. We work on the same team, and thus we do joint planning and work and reporting. For example, every Monday afternoon we have a look at the upcoming week and plan our work. And on Fridays, we put together a joint status report that rolls up all the things we actually worked on.

The nature of our work, however, adds a certain chaos factor to our planning. On Mondays, we can certainly attempt to plan out what we need to do for the week. But every day—literally every day, and sometimes more than once a day—something new pops up. People send us email requesting a review of some documentation, or a developer will stick his head in the office and want input on some UI, or a bug will come in from a customer, or ... well, the possibilities are wide and varied, but there's always something.

Now, Colleague B is, by his free admission, a bit OCD. He is consistent and orderly both about our planning and our reporting, and he has that thing where he intuitively understands the delta between today and some upcoming date. Me, I'm a bit more on the other side, and my sense of time and dates is referred to around our household as "magical thinking."

Colleague B is not a big fan of the daily dose of chaos. Here we've planned out our week on Monday, and people keep coming in and asking for stuff. As he says, in his ideal world, people who have something for us would get in line, and we'll get to them when we're done with what we're working on.

On the other hand, I don't mind nearly as much the drop-what-you're-doing interruptions. I'm apparently happy to put aside the thing I'm working on in order to work on this new thing, or at least, till some other yet newer request comes in.

The world of computers has an analog for us: Colleague B is FIFO: first in, first out. Take a number, and we'll service you in order. In computing terms, FIFO describes a queue. Me, I'm LIFO: last in, first out. Like stacking trays in a cafeteria—the last on the stack is the first one off, and indeed, in computing terms, LIFO describes a stack.
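In Python terms (a toy illustration only, not anything from our actual workflow):
from collections import deque

# Colleague B is FIFO: take a number and get served in order (a queue)
work_queue = deque()
work_queue.append('review the docs')
work_queue.append('look at the new UI')
next_up = work_queue.popleft()   # 'review the docs'

# Me, I'm LIFO: the newest request gets handled first (a stack)
work_stack = []
work_stack.append('review the docs')
work_stack.append('look at the new UI')
next_up = work_stack.pop()       # 'look at the new UI'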

Happily, it turns out that this combination of work styles works out well. Colleague B works his way through our Monday list, and odds are good that by Friday, items can be checked off. But at the same time, we've handled a half dozen or so new jobs that came up during the week, things we had no idea about on Monday. In fact, Colleague B says that occasionally he'll finish up something and go read email, and by the time he's become aware of some new request, I've already handled it.

Of course, there's a certain amount of literary license here. It's not as if Colleague B won't handle ad-hoc queries with alacrity, and it's not as if I'm unable to handle anything other than whatever the most recent emergency is. Still, programmers know that sometimes the right data structure is a queue and sometimes it's a stack. As long as there are two of us, and as long as we divvy up the work correctly, we can handle pretty much all of it.



  09:50 AM

As has been discussed at great length over the years, English has no gender-neutral way to use a pronoun for a singular and sexed thing:

Everyone should bring [his|his or her] own lunch.

In previous eras, people didn't really blink at using the masculine generically:

To each his own.

... and some still maintain that this is fine, although the insistence that his in such contexts is gender-neutral is easily shown to be questionable:

A nurse is expected to provide his own stethoscope.

Anyway, vernacular English has solved this problem for centuries (like, at least as far back as Shakespeare) by using so-called singular they:

Everyone should bring their own lunch.

Still, as widely used as singular they has been, proscriptions on it have been strong for formal writing. Garner doesn't like it, tho he allows that they is sometimes the "most convenient solution" and that using the singular in some cases can result in "deranged" sentences. Seemingly falling for the Recency Illusion, Garner says that "they has increasingly moved toward singular senses," and "nothing that a grammarian says will change [these developments]."[1]

Chicago 16 is pretty clear: "Although they and their have become common in informal usage [Recency Illusion again?--M], neither is considered acceptable in formal writing." Our own style guide at work: "Avoid using they or them to refer to a singular noun of indeterminate sex. You can usually accomplish this by changing the noun to a plural. In other cases, you can rewrite a sentence to avoid the need for a pronoun altogether."

This last indicates to me how strong the proscription is—rather than take the chance of using a vernacular usage, you write around the need for a pronoun altogether.

Anyway, all of this came to my mind again when I saw an article today about poets, a group known to consist of both male- and female-type people. Observe this interesting struggle by the author:
If you’ve ever been to a poetry reading, the following scene will be familiar. After being introduced, a poet steps onstage and engages the audience with some light social speech. Maybe they* talk about their forthcoming book, [...]
Take note of the asterisk, which leads to the following footnote:
* I'm using "they" as the singular gender-neutral pronoun here to avoid suggesting that "Poet Voice" is a gendered thing (it's not), and also to avoid the clunkiness of "his or her."
This strikes me as an interesting development. Garner often labels entries in his usage guide using the following index:

Stage 1: New form/innovation used by a small number of users.
Stage 2: Vernacular for speech, not acceptable in standard usage.
Stage 3: Commonplace but avoided in careful usage.
Stage 4: Virtually universally used but still decried by SNOOTs.
Stage 5: Universal.

The article seems like evidence that singular they is hovering somewhere between stages 2 and 3.

The question is how many people would really notice the use of they here if the author had not gone to such pains to point it out. Are we really at Stage 3 or 3-1/2? John McIntyre, who knows a thing or two about copyediting, advises singular they, to which one of his commenters says "A year or two ago I gave in and started using 'they' in the singular. What a relief! It's easy to use and only occasionally sounds awkward. There's no going back for me now."


[1] The use of singular they is often ascribed to a sensitivity to sexism in language, but I doubt that Shakespeare or Austen was much concerned about that.
