A Rebuke to the Cocky Writers Who Incorrectly Assumed the Name of Today’s New iPhone

If you go to Google and do a search for “iPhone 8” you will see a lot of search results. I’ll briefly list a small sampling of them:

  • telegraph.co.uk: iPhone 8: Release date and time, price, latest features and news.
  • 9to5mac.com: Check out our top stories on iPhone 8.
  • theverge.com: Apple iPhone 8 event: start time, live stream, and live blog.
  • macrumors.com: iPhone 8 Manufacturing Issues May Lead to Extended Supply Shortages and Shipping Delays.
  • theinquirer.net: The long awaited iPhone 8 is officially launching later today.

This list could go on and on. It’s fair to say that the entire industry that’s covering this event has been incorrectly calling D22 the iPhone 8. Well guess what? There will be an iPhone 8, but it will not be the D22 model. That’s going to be a source of confusion for a lot of readers who took mainstream media at face value.

If you wrote about the iPhone 8, shame on you. Not only did you not pick the right name, but you didn’t even pick a name that makes sense. It wasn’t even a good guess. At what point in iPhone history has the name of the iPhone gone from one number to its consecutive number in a single year? Never. It’s always been a tick-tock cycle. iPhone 3G, iPhone 3GS; iPhone 4, iPhone 4S; iPhone 5, iPhone 5S; iPhone 6, iPhone 6S. If you wanted to guess at a name based on the past decade of momentum, you would’ve called this the iPhone 7S. But you cleverly figured on your own that since this was a bigger release this time, Apple was going to just skip all convention and call D22 the iPhone 8. And then everyone jumped on the bandwagon and it became a whole thing.

There’s an air of confidence and cockiness here that smells exceedingly foul to my nostrils. Being smug and right is tolerable, but being smug and wrong is intolerable.

“Well, we had to call it something, what did you want us to call it?” Well, calling it the 7S would be a start. That way if you were wrong, you would at least be giving Apple the credit of assuming it to be consistent in its naming schema. Or you could call it the “2017 iPhone” or even just the “new iPhone.” If you didn’t care about verbose headlines, you could call it “The iPhone We Believe to Be Called the iPhone 8.” Most anything beats coming up with a name whose schema you derived purely on your own, whilst being cocky about it, whilst being wrong about it. You can pick two of those three and come out ok. Pick all three, and you fail.

Oh, I know. It’s all about the search engines. And once everyone else in the media jumps on the bandwagon and “iPhone 8” becomes the search term, that’s the term you have to compete on. I’m not criticizing the underdogs who went with the herd because it was that or starve. I’m criticizing the people who sat down 6-12 months ago and decided that this was what they were going to call it.

Because they were badly wrong.

Apple’s Obsession with Secrecy Is Not Unhealthy 

Manton Reece:

Worked on a blog post yesterday arguing that the iOS 11 GM leak reaction is overblown, just more unhealthy fallout from Apple’s obsession with secrecy. This morning, scrapped the draft. I have other work to do. Looking forward to tomorrow’s event!

It’s not secrecy for the sake of secrecy. To quote what I wrote this spring after the Talk Show interview at WWDC:

Interesting to hear Schiller explain the reason why Apple hates leaks so much: because it steals the thunder from the developer teams who worked so hard and want to see people’s delight when their work first sees the light of day.

We don’t chide simply because we like the idea of keeping secrets. It’s because leaking like this is disrespectful to the team that built the thing that’s leaked. It’s not fair to them.

Kudos to Manton for ditching his draft.

Bundling the New iPhone Pro with Apple Music and iCloud Storage 

Vlad Savov, writing at The Verge:

Here are three things I’m not interested in: a $1,000 iPhone, Apple Music, and paid iCloud storage. I’m neither an Apple hater nor a cheapskate; I simply prefer to spend my money on more tangible and lasting things than ephemeral subscriptions or the early-adopter tax for the latest and greatest iPhone of 2017. But an analyst note from Barclays has thrown up an intriguing possibility that could, all of a sudden, kindle my interest in Apple’s services and new hardware: what if Apple bundled free Apple Music and iCloud storage with the purchase of the top new iPhone model?

If you’re already paying monthly for Apple Music and iCloud Storage, this would be an appealing deal indeed.

This part caught my eye though:

Personally, I’d prefer that Apple offers free Apple Music for the lifetime of the $1,000 iPhone (and maybe limit it to that one specific device) over a one-year subscription, but I can see both methods having the desired effect.

An Apple Music subscription is $9.99 per month. The 200GB iCloud storage plan is $2.99 per month. These two services would cost $1,000 after 77 months, or 6.4 years. If this new iPhone Pro came with these services for free for the lifetime of the device, it would be cheaper to buy this iPhone than to pay for the services with no iPhone. There’s zero chance Apple’s offering a lifetime subscription like this. That’s a deal too good to be true by an order of magnitude.
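
The back-of-the-envelope math is easy enough to check (using the subscription prices listed above):

```javascript
// Combined monthly cost of the two subscriptions, in dollars.
const monthly = 9.99 + 2.99; // Apple Music + 200GB iCloud storage

// Months until the subscriptions alone add up to the phone's price.
const months = 1000 / monthly;

console.log(months.toFixed(1));        // ≈ 77.0 months
console.log((months / 12).toFixed(1)); // ≈ 6.4 years
```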

Weird how everyone’s calling this “the $1,000 iPhone,” by the way. Is that really going to be the base price? That low?

Why Talk of a $1,000 iPhone Is Overblown 

Great piece by Jan Dawson at Tech.pinions about all the different ways to rationalize buying an iPhone that costs a few hundred bucks more than we’re used to paying. I’m not planning on getting this device this year — price aside, I think the supply constraint is going to make it extremely difficult to get. But no final decisions until after tomorrow’s keynote.

Hat tip to The Loop for the link.

A Requiem for the iPhone’s Home Button 

David Pierce, writing for Wired:

Ask any Android user what it’s like when the screen freezes, and you’re either stuck waiting for something to happen or pressing the power button to reboot your phone and be done with it. Or when you’re in a full-screen app and can’t figure out how to leave it. These are the little things that critics like to say separate iOS from Android in the first place, the “you already know how to use it” believers. The reason you already know how to use it? There’s a button, right there in front of your face.

OK, sure, so the iPhone 7’s home “button” wasn’t really a button at all. It was just a dedicated spot on the bottom of the device where Apple’s haptic feedback engines made you think you were clicking a button when you pressed it. And when the phone froze, it did too. There was still something soothing to pressing the button underneath a frozen GarageBand, hoping it would eventually take you home. Maybe it was just a placebo, like pressing the Close Door button in an elevator even when you know full well it won’t actually Close Door any faster. It still helped you feel in control, rather than existing at the mercy of software.

I disagree with most of this piece, but this one part was interesting. With a physical button, you could override a frozen screen; with a virtual button, you can’t. Here’s the thing though: I own an iPhone 7 and I can’t recall the last time it froze. The inability to “parachute” out of a frozen app isn’t an issue for me, because I never need it. I daresay that’s true of most users on most iOS apps. The parachute was nice in the early days, when the software wasn’t as mature and the hardware limits were much easier to reach, but nowadays, how many people actually get a frozen iPhone screen?

Safari Takes More Memory Than Chrome

There’s been conjecture of late that Safari would be more popular if it supported favicons in its tabs. For light users of the web who surf and do not build, it’s conceivable that this could indeed be the deciding factor between Safari and Chrome.

For those who are building the future World Wide Web however, there’s a more substantial reason why Safari is unusable. Here’s a screenshot of my Activity Monitor from this past Wednesday. As you can see, Safari was taking nearly 18GB of memory, some of which was virtual memory, since my MacBook Pro has only 16GB of physical memory. Virtual memory is substantially slower than physical memory, which means my computer was running at a crawl. The only thing to do was to quit Safari.

I've blurred out a work-related domain, because I'm super private like that.

What was I doing that caused Safari to need this much memory? I was running Webpack Hot Module Replacement in a VueJS application. This is standard stuff for Chrome though; I’m running HMR more hours than not when I’m at my computer. But I had Safari open on this particular day because I was trying to troubleshoot a bug that was specific to Safari, which is pretty much all that I ever use Safari for.1 And Safari just couldn’t handle it. This wasn’t a fluke, either. Every time I open up Safari with HMR running, this happens.

That Safari gradually demands more and more memory is not unique to me, either. Andrew Hodgkinson writes at Stack Exchange:

I’ve seen this ever since Safari 8 on OS X 10.9.5, OS X 10.10.x all versions with Safari 8 and sadly in Safari 9 on all El Capitan betas to date, too. The Safari memory leak in my case is severe and Safari has to be completely quit & restarted often. It only seems to happen if you tend to have a few windows open a lot, which you “reuse”; but overall, Safari just grows and grows (by many GB).

This jibes with my experience. I don’t need many Safari tabs open for this problem to occur. I can have 20 or 30 Chrome tabs open, no problem, but if I have just 2-3 Safari tabs open and one of them has an HMR socket connection, Safari’s toast. Andrew goes on:

Suggestions about “putting in more memory” are absurd. I’ve a 16GB MacBook Pro laptop which is the maximum configuration of soldered-on (“pro” my rear-end!) memory which Apple provide. It simply isn’t possible to add more. Memory pressure and slowdown tend to get critical when Safari exceeds 10GB. I did once persist to the point where it was using over 13GB. When restarted with all tabs manually revisited to ensure all pages are loaded, it’ll go back to about 2.5Gb. A leak of that size is utterly indefensible.

This is a stark change in behaviour from Safari 7, which behaved basically fine in this regard - yet there are surprisingly few reports of it online. It isn’t a subtle problem and Safari 8 has been around for ages. Others would have noticed, yet few report it.

You know why few people report it? Because no serious frontend developer uses Safari. Safari’s great if you’re just casually surfing around. It’s unusable if you’re trying to get work done. At least, the kind of work that the people in my world do.

  1. As an aside, one great way to learn to dislike a piece of software is to only open it when you’re trying to fix bugs that are specific to that software. As far as perceptions go, I view Safari on equal footing with Firefox and Internet Explorer for this reason. ↩︎

The Net Neutrality Hearing That Never Happened 

David Shepardson, writing for the Union Leader:

A U.S. House committee said on Wednesday it has cancelled a planned hearing on Sept. 7 on the future of internet access rules after no companies publicly committed to appearing.

Among those who had been invited in late July to share thoughts before the U.S. House Energy and Commerce Committee were the chief executives of Alphabet Inc., Facebook Inc., AT&T Inc. and Verizon Communications Inc.

How to know when someone’s virtue signaling: when their words don’t match their actions. These tech leaders don’t really care about net neutrality. They say they do because it’s the expedient thing to do. But when it actually comes to it, they recognize correctly that net neutrality is a farce. They had their chance, and they didn’t show up. Congress extended the deadline and they still didn’t show up. Not one single company. This is one of the clearest cases of virtue signaling and partisan politics you’ll ever see. It’s laughable.

Ajit Pai:

We are flip-flopping for one reason and one reason alone. President Obama told us to do so.

Pretty much.

What Happened Last Night? 

Last night got weird. There was undoubtedly alcohol involved. So much for complaining about typos. Last night’s tweetstorm was riddled with them. If you look at the replies to the original tweet, they’re overwhelmingly reproachful, and rightfully so. Rick had this to say:

Delete your account. Between you and activist Cook this Mac since 1990 has about had it with Apple and it’s [sic] fan boys.

I agree that it’s getting ridiculous. Lots and lots of alienation going on right now. It’s not helping anyone. Let’s find ways to respectfully disagree and move on, please?

Climate Change, Hurricanes, and Crickets 

Scott Adams:

So why is the biggest story in the world conspicuously missing from the news? Keep in mind that climate change is still the biggest story even if the hurricanes are NOT telling us something new. The public wants to know how big the threat is. We’re scared!!!

Instead of that news, we get mostly crickets.

But why?

Brilliant analysis. In other news, Adams is probably not going to be making a $10 million donation to climate change studies anytime soon.

Is Antitrust the Answer? 

Ben Thompson:

That Google, Facebook, Amazon, and other platforms are as powerful as they are is not due to their having acted illegally but rather to the fundamental nature of the Internet and the way it has reorganized value chains in industry after industry.

Moreover, these platforms have far more positive outcomes than distribution-based monopolies ever did: the consumer experience is better, and there are huge new opportunities to build new businesses (especially serving niches completely ignored in a distribution-based world) on top of them. That is a good thing, worth preserving.

It’s always refreshing when Ben articulates at length exactly how I’ve felt about something. The mantra that monopolies are inherently, categorically evil is less persuasive in the era of Internet companies and economics. Times are changing. It’s exciting.

How to Change Slack’s Font Family to San Francisco 

Yesterday I made some modifications to my gist to reflect Slack’s ever-changing DOM IDs. While I was at it, I changed the font family from Slack-Lato to San Francisco. It looks a million times better. I’m in love with Slack once more. If you’re on a Mac using Slack’s desktop application, check it out.

AMP Is Good for Users 

Last week, Nick Heer had this to say about AMP:

Google is the world’s most-used search engine and they’ve restricted one of their most prominent features to sites that use AMP, their own fork of HTML.

All other things being equal between two web results, if one of them implements AMP and the other one does not, I want the one that loads faster. Don’t you? AMP loads faster by many multiples. When I’m doing a Google Search on my phone and see that the desired result implements AMP, it makes me smile every time. And that’s the point. Google Search has to do just one thing really well: show the user the result they want, as fast as possible. Anything that gets closer to that goal is a win, something to celebrate.

Granted, AMP is Google’s thing, but that doesn’t automatically make Google evil for percolating its implementers to the top of search results and into its Top News Carousel. This isn’t about keeping curmudgeonly site owners happy. It’s about making users happy. AMP is one of the best things to happen to Google Search in recent memory.

I quibble over calling this a “fork of HTML,” too. In his footnote, Nick had this to say:

I got a little bit of pushback on Hacker News and Twitter last time I wrote this. Just to be clear: AMP’s specifications require that pages link to this script: https://cdn.ampproject.org/v0.js. For a page to be valid AMP HTML, it must include that JavaScript file, which is hosted by Google.

It’s true that many elements on an AMP page would fail to render without this JavaScript file, because there are custom (though not invalid) attributes required to replace the regular ones. But that could be said of any JavaScript-rich application that’s powered by Angular, React, or Vue. In other words, it’s not a fork of HTML any more than are the world’s most popular JavaScript frameworks. Now granted, unlike these other JavaScript frameworks, you cannot self-host this JavaScript at present; it must reside on Google’s servers. But you can still pore over every line of it and make pull requests to it at its open source GitHub repo. It’s not a black box. You know exactly what you’re getting yourself into when you include this JavaScript file.

Nick doesn’t mention it here, but to address one final criticism I’ve seen: if we’re going to knock on AMP because it doesn’t take visitors to the actual website and thereby hurts the business model of the website, then we need to be consistent and knock on Apple News too, because it does the exact same thing. In fact, all the complaints I’ve seen about AMP apply to Apple News too.

The High Cost of Working 40 Hours a Week 

Daryush Valizadeh:

Secondly, I realized that my unique male brain is not designed to pay attention for prolonged periods of time in cooperative environments. I did not want to sit down continuously for hours, wait my turn to speak, and pretend to be “nice” to others. I also saw the females present not as my intellectual equals—even if they were smart—but as fodder for sexual fantasies that gave me distracting boners. The women seemed to greatly enjoy this environment, probably because men treated them with great respect just because they were women, regardless of their intellectual contributions.

Sounds like a real adventurer, which clashes with his about page, where he discusses his concern that “elimination of sex roles” tends to unleash women’s “promiscuity and other negative behaviors that block family formation.” How’s your family formation and non-promiscuity working out for you, Daryush?

Also, anyone who balks at a productive 40-hour workweek is bad company. I’m glad I don’t have to work with this guy.

Switching from Jekyll to Hugo: How I Made My Static Blog’s Local Build Speed 100x Faster

The past few months I’ve been in something of a conundrum with Drinking Caffeine. As the post count hit 100, 200, and then 300, Jekyll was increasingly slowing down to a crawl in its ability to rebuild the site. Not just during deploys, but during local builds in edit mode. Every time I would make a change to a markdown file, it would take about 10 seconds to update the local HTML preview. It was getting ridiculous. Having to wait 10 seconds before changes can be verified in a web browser is a deal breaker.

Whilst musing about my predicament last weekend, I considered moving to WordPress. Here were the pros:

  1. WordPress is powered by MySQL for its user-generated data. Databases were designed to be lightning fast with thousands and millions of rows. This would solve my speed issue overnight.
  2. WordPress is very popular. It’s responsible for 27% of the web. It has a lot of momentum and it’s getting better all the time. There are great hosting solutions for it such as WPEngine. Lots of tech sites are powered by it.

Here were the cons:

  1. Like any database-powered CMS, I would lose a number of things with switching from Jekyll. These included:
    • Version control via git.
    • The ability to edit directly in my IDE of choice. I would have to return to the pain of a browser application (or MarsEdit) in place of the much more elegant YAML front matter.
  2. Then there’s the concern about security. A statically-generated HTML site has virtually zero concerns in this regard, whereas a CMS with a web login interface exposes this vulnerability; especially when it’s software as popular as WordPress where everyone knows that the login is /wp-admin. Yes, you can change that default URL as well as install 2FA plugins. But the point is that with static, you don’t have to even mess with that, and with non-static, you do.
  3. Lastly there’s the consideration of hosting costs. Services like GitHub Pages and GitLab Pages let you host static sites on their servers for free. That’s a far better deal than paying $348/year at WPEngine for the WordPress equivalent, especially if your blog is a side project that you do for fun.

I concluded that static was the way to go, but I still needed to find a way to get faster builds locally. I looked into a piece called Increase Jekyll Build Speed to no avail. Even incremental builds didn’t seem to make a difference, likely because I’ve been using things like Jekyll Archives which means that incremental builds can’t really be that incremental.

At this point I decided it was time to find a completely new build system. Googling around, I found this piece by Eli Williamson at Netlify:

Jekyll holds it’s [sic] position at #1 as the most popular static site generator. That’s no surprise, considering it was created and fantastically supported by none other than GitHub.


Since we last reported on the top ten static site generators back in 2016, Hugo has jumped up the list to #2. Built around Google’s Go programming language, it’s blazing fast. This is no accident, it was engineered for speed (massive Hugo sites can be built in milliseconds) - even Smashing Magazine, with a seemingly endless well of articles and knowledge, recently switched to Hugo and experienced incredible reduction in build times and fantastic increases in flexibility. Learn more about their switch here.

Hugo sounded promising, so I explored it more. It turns out that Hugo is faster — much, much faster. This is because unlike Jekyll, which is built upon the scripting language Ruby, Hugo is built upon the compiled language Go. When it comes to file processing, scripting languages are pretty lousy compared to compiled ones, at least in this case. I decided to switch from Jekyll to Hugo. The transition took me a little over 14 hours to complete. The devil’s in the details, some of which I’ll highlight in a moment, but here’s the important part: my local builds have gone down from 10 seconds to 100 milliseconds. That’s a 100x increase in speed. Looking at the Codeship logs for two specific deploys, the production-mode build time has gone from 9.621s to .775s, roughly a 12x increase. Hugo is clearly the victor. I’m never returning to Jekyll.

Whilst saying this, I want to make something clear. I’m not trying to throw shade at Jekyll itself. Rather, I want to point out that compiled languages, not scripting languages, are the right choice as the underlying architecture for a static site builder. I suspect that Jekyll will remain the #1 most popular static site generator for some time to come, for the simple reason that GitHub Pages offers it as the only choice on its platform.1 If you’re into the idea of using a static site generator, you most likely use and love GitHub, so it’s a very natural fit. I do not think GitHub will offer support for Hugo any time soon, because GitHub would have to build up a robust Go infrastructure, and its culture is Ruby. Making architectural accommodations for Hugo does not seem to be a wise use of GitHub’s resources, considering that GitHub Pages is a free service. That said, GitLab Pages has native support for Hugo, so competition might eventually drive GitHub to offer something similar.

Now, for a few technical details of the transition:

  • I had to change the syntax for all of my internal linking. This was a pretty simple search-and-replace of the Jekyll link syntax for the Hugo equivalent. The hugo import jekyll command was a convenient jumpstart for transitioning the project, but it fell short in many ways, and this was one of them.
  • I also had to migrate a lot of the URLs of old posts by hand. Jekyll determined a post’s URL by its filename, with the post’s front matter optionally overriding it. Hugo determines a post’s URL by its title, with the post’s front matter optionally overriding it.
  • I dropped the hour, minute, and second from my publish time. Hugo botched my dates in the hugo import jekyll command (it assumed my front matter timestamps were in standard UTC timezone, whereas they were in US Central, as specified in my Jekyll’s _config.yml file). Rather than go back and correct all of those by hand, I decided to pick my battles elsewhere and just drop this feature. The exact timestamp was never fully accurate anyway, because it got generated at initial compose time, not publish time.
  • I tried using server-side syntax highlighting for my code snippets, but Pygments doesn’t support all the languages I would like, plus it was buggy in both Tmux sessions and in Codeship deploys. I decided to give PrismJS a chance instead. So far I’m liking it a lot. Especially the new syntax for invoking it — I’m now doing markdown’s standard triple backticks instead of the Jekyll-specific snippet highlighting syntax.
  • I couldn’t retain my old month-by-month archive — at least, not without a lot of custom work. I decided to go with standard pagination that comes with Hugo.
  • I also couldn’t retain my old feed URLs, so I consolidated those to a single URL.

As you can see above, Hugo is limited and substantially more challenging in certain respects where Jekyll is not. Thus you wind up with opinions like the one in this reddit thread:

I have also done websites in Hugo; the inflexibility of golang’s template processor was extremely limiting to the point that I would never use Hugo again.

To me, Hugo’s challenges are worth it because it is so much faster. But the average content owner who writes a couple posts per month, and hasn’t yet (and possibly never will) hit the slowness of a 300+ post Jekyll site, will likely prefer the greater flexibility and simpler directory structure of Jekyll.

That said, I found Hugo’s tooling to be a lot better. It has HMR built right into it; when you edit a post and save your file, the browser page updates the resulting HTML content without doing a full refresh. The same holds true for any changes that occur to any referenced CSS or JS. It’s incredible. Also, creating a new post doesn’t require any custom tooling. I had this whole initialization script that I was using for Jekyll. Now it’s simply:

hugo new post/some-new-headline.md

With that, depending on how I’ve set up my archetype, I’ll have exactly the initial post data that I want. No custom initialization script necessary.
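
For context, a Hugo archetype is just a template file that seeds the front matter of each new post. A minimal sketch of what such a file might look like in archetypes/post.md (this is illustrative, not my actual archetype; the exact fields are up to you):

```toml
+++
title = "{{ replace .Name "-" " " | title }}"
date = "{{ .Date }}"
draft = true
+++
```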

If you’re thinking of starting a new static project, go with Hugo. It’s blazing fast.

  1. Unless you handle your builds elsewhere and then simply push the built HTML to GitHub Pages, as I do. ↩︎

Spotify’s Music Limit 

Adam Engst, writing at TidBITS:

Spotify never explains why changing an arbitrary number in the code from 10,000 to 50,000 (Google Play Music’s limit) or 100,000 (Apple Music’s limit) would somehow hurt the experience for those who don’t want to add that many tracks. A Spotify music collection is just a list of tracks, so it’s hard to imagine how allowing that list to exceed 10,000 could cause any problems.

I can think of all kinds of reasons why “an arbitrary number in the code” could have huge ramifications. To use a networking analogy, imagine if your software were architected in such a way that you couldn’t do variable-length subnet masking. It’s fair to conjecture that Spotify would have to drastically change its architecture to accommodate that 1%, and the amount of work involved would not come close to paying off.

You have to love how uninformed customers think they understand the inner workings of a company’s software better than the people inside the company.

Vívoactive 3 Introduces Garmin Pay 

From the Garmin product page for Vívoactive 3:

vívoactive 3 is our first wearable to feature Garmin Pay, which lets you pay for purchases with your watch. Use it just about anywhere you can tap your card to pay. So, if you left your wallet in your locker or just forgot it, that post-run morning caffè latte can still be yours. Just tap and go.

Sounds like Garmin has been losing sales due to the Apple Watch and decided it needs to compete with similar features.

The Case for Consolidating Your Company’s Repos 

Every sentence in this piece by David MacIver is gold and will resonate with anyone who’s had to deal with multiple intertwined repos. David writes:

When you have multiple projects across multiple repos, “Am I using the right version of this code?” is a question you always have to be asking yourself – if you’ve made a change in one repository, is it picked up in a dependent one? If someone else has made changes in two repositories have you updated both of them? etc.

You can partly fix this with tooling, but people mostly don’t and when they do the tooling is rarely perfect, so it remains a constant low grade annoyance and source of wasted time.

This is the voice of experience. If in doubt, use one repo. If not in doubt, still consider using just one repo.

Facebook Now Blocks Ads from Fake News Pages 

Shannon Liao, writing for The Verge:

Facebook added an additional defense today against the spread of fake news and viral hoaxes. If a Facebook page is found to be repeatedly sharing false stories, the company will ban the page from advertising on Facebook.

This sounds good in theory, but it’s a slippery slope. What happens when 5% of the population doesn’t think the news is fake? 10%? 49%? It’s fully within Facebook’s prerogative to do this, and it’s understandable, but there’s the potential for great mischief here.

RSS Change for Drinking Caffeine

I am deprecating all current feeds at Drinking Caffeine and replacing them with a single feed at /feed.xml. If you’re reading the site from a feed reader, you’ll want to change your URL to that.

Sorry for the hassle! I’m switching from Jekyll to Hugo1 and while Hugo is better at most things, it’s worse at customizable feeds.

  1. More on this later. ↩︎

Privnote

Privnote is a drop-dead simple service that lets you “share a confidential note via a web link that will self-destruct after it is read by your intended recipient.” Very cool.

Handling Click Events with Dismissible HTML Modals

It is often the case in web applications that designers must afford the user a progressive disclosure of complexity. One of the great ways to achieve this without disrupting the user experience is with HTML modals. The term modal means different things to different people in the web community, but when I use it here, I’m referring to exactly this: any UI element that is overlaid atop the current UI layer, regardless of whether it be a small detail or a fully enveloping layer, such that the only way to dismiss it is to either click outside of or mouse away from said modal.

There are three ways that modals can be triggered into view:

  1. Automatically. An example would be a modal that appears immediately upon logging in, that prompts a free user to upgrade to the site’s premium tier. Automatic modals should generally be avoided, although they have their rare use cases.
  2. Upon hover. Despite the popularity of this option, a compelling argument can be made that modals should never be triggered upon hover, because no UX should be triggered by hover, only UI (such as a link changing color). Here are a few reasons why triggering UX changes on hover is a bad idea:
    1. Because it increases code complexity and technical debt from a developer’s standpoint. There is no such thing as “hover” on a touchscreen device. Since most web applications must support touch devices nowadays, the code complexity greatly increases if an action must occur on hover for a non-touch device but on tap for a touch device.
    2. Because it is difficult to interpret user intent when hovering, but it is never difficult to determine user intent on click. When a user hovers over item B, that might be because they are simply trying to go from item A to item C. Because of this ambiguity, we have pieces of software such as the hoverIntent jQuery plugin. What these plugins do is set a timer from the moment that the user mouses over something. When the timer is up, it checks to see if the user is still mousing over that something. If the user is, it triggers the desired action, otherwise it does nothing. No such timer is necessary when the action is triggered with a click. In other words, hover is slower than click.
    3. Because a web application feels much more solid when no UX can be triggered via hover. I freely admit that this last argument is touchy-feely but I think it’s important for that very reason; ultimately we judge a UI/UX by how it makes us feel. If you are looking for an example that eschews hover UX very well, look no further than Jira. Even the drop-downs on its sales page cannot be triggered on hover; only on click. Doesn’t that have a nice feel to it?
  3. Upon click. This is the gold standard. HTML modals that appear on click are non-intrusive and they are triggered by an action that is clearly intended by the user. In situations where the modal serves as a peek into something deeper, the first click triggers the modal, and the second click sends the user to a page that contains the modal’s contents in greater detail. If Stack Overflow were to change its user modals that occur on hover to instead occur on click (if you’re unfamiliar, mouse over Ruby Velhuis to see what I am referring to), the site would be the better for it. In Ruby’s example, first click would trigger the modal, and the second click would take you to his profile.
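The timer trick that hover-intent plugins use, described in point 2 above, can be sketched in a few lines of plain JavaScript. This is an illustration of the idea only; the function name and signature are mine, not the actual hoverIntent plugin’s API:

```javascript
// Fire `action` only if the pointer is still over the element after
// `delay` milliseconds. Mousing away before the delay cancels it.
function onHoverIntent(element, action, delay = 200) {
  let timer = null;
  element.addEventListener('mouseover', () => {
    timer = setTimeout(action, delay);
  });
  element.addEventListener('mouseout', () => {
    // the pointer was just passing through; cancel the action
    clearTimeout(timer);
  });
}
```

Note the cost: every hover now carries a built-in delay, which is exactly why hover is slower than click.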

Having thus briefly described modals, we now come to the heart of today’s subject, which is this: from a technical standpoint, how do you handle the dismissal of these modals? There are three ways to do this that I have encountered.

The first method is to give the modal focus when it appears and dismiss it on the blur event. Here’s a simple example:

  <div class="modal" tabindex="-1">I am a modal!</div>

  // Writing vanilla JS in real life instead of using a
  // proper framework is an abomination. Example only!
  const modal = document.querySelector('.modal');

  const showModal = () => {
    modal.style.display = 'block';
    modal.focus(); // tabindex="-1" makes the div focusable
  };

  const hideModal = () => {
    modal.style.display = 'none';
  };

  modal.addEventListener('blur', hideModal);

This is great for a very simple modal that does not contain HTML form tags that accept focus. Envision a menu that is the visual equivalent of right-clicking on the desktop or a Finder window in macOS — that sort of simple menu. A practical web example would be the “more options” of an Instagram post when viewing it at Instagram.com. One very nice thing about this method is just how fragile focus is. It’s easily broken. You lose it as soon as you click something within the menu. You lose it if you switch to a different tab in your browser, or to a different app. This transient nature is also similar to right-clicking on macOS: switching to a different space causes the menu to disappear. The main downside to using this solution is just how limited the items inside the modal must be. If the modal contains form elements or its own set of clickable actions that upon click should not dismiss the modal, this solution won’t work.

The second method is by stopping the upward DOM propagation of click events that occur from within the modal itself, and setting a click listener on the overall document. For a generic code example, I can’t improve on this StackOverflow answer. Assume that #menucontainer is the identifier for your modal in that scenario. When you set this up, any click events that get triggered on the document are guaranteed to have not occurred from within the modal. This is a terrible solution though, and the reason for its terribleness is pontificated quite nicely at this CSS Tricks piece. Stopping propagation of an event is never necessary and leads to many potential pitfalls. I recommend reading this CSS Tricks piece in its entirety to get a firm grasp as to exactly why this is so.1 To quote a pertinent part from it:

Modifying a single, fleeting event might seem harmless at first, but it comes with risks. When you alter the behavior that people expect and that other code depends on, you’re going to have bugs. It’s just a matter of time.

Not only is it “just a matter of time” until you have bugs with event.stopPropagation(), but unless you have rigorous test coverage on your UI, this is the sort of thing that will silently break and be very difficult to debug once you discover it later, depending on how nested your DOM is. You’ll have to step through each layer of DOM nodes to see where the click event propagation is getting stopped.
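To make the failure mode concrete, here is a toy model of DOM bubbling (plain objects with a parent chain, not the real DOM API) showing how one stopPropagation() call silently starves a document-level listener:

```javascript
// dispatch() walks up the parent chain calling each node's handler,
// loosely mimicking DOM bubbling, until a handler stops the event.
function dispatch(node, event) {
  for (let n = node; n && !event.stopped; n = n.parent) {
    if (n.onclick) n.onclick(event);
  }
}

const log = [];
const documentNode = {
  onclick: () => log.push('document saw the click'),
};
const modal = {
  parent: documentNode,
  // the "dismiss on outside click" pattern from the SO answer:
  onclick: (e) => e.stopPropagation(),
};
const button = {
  parent: modal,
  onclick: () => log.push('button clicked'),
};

const event = { stopped: false, stopPropagation() { this.stopped = true; } };
dispatch(button, event);
// The document-level listener never runs. Any other code attached at
// the document level — analytics, a second modal's own outside-click
// handler — breaks silently, with nothing logged anywhere.
```

After this runs, log contains only 'button clicked'; the document handler never fired, and nothing tells you why.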

This brings us to the third and gold standard for handling the dismissal of HTML modals, which is by manually detecting whether a given click occurs inside or outside the modal. The popular examples are all in jQuery but few serious web applications use jQuery, so I’ll give an example in VueJS instead, in Single File Component parlance:

  <template>
    <div class="modal" v-show="showModal">I am a modal</div>
  </template>

  <script>
  import eventHub from '../eventhub';

  export default {
    name: 'Modal',
    created() {
      // this event gets sent when a click occurs elsewhere that
      // should open up this modal
      eventHub.$on('show-modal', this.show);
    },
    data() {
      return {
        showModal: false
      };
    },
    methods: {
      // named show()/hide() rather than showModal() to avoid
      // colliding with the showModal data property
      show() {
        this.showModal = true;
        document.addEventListener('click', this.documentClick);
      },
      hide() {
        document.removeEventListener('click', this.documentClick);
        this.showModal = false;
      },
      documentClick(e) {
        // a click anywhere outside the modal dismisses it
        if (e.target !== this.$el &&
            !this.$el.contains(e.target)) {
          this.hide();
        }
      }
    }
  };
  </script>
For the more curious, the eventhub import is nothing more than a singleton instance of Vue, shared between all components within the application. This is standard practice. The eventhub.js file contains nothing more than this:

import Vue from 'vue';

export default new Vue();

If you’re wanting to adapt this to something other than VueJS, the main part you need to worry about in the example above is the contents of the documentClick() method. The Node.contains() method is a standard method and you can use this in any framework — React, Angular, etc. As long as you have the event object and the modal’s node object, you’re good to use this.
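Stripped of the Vue plumbing, the outside-click check reduces to a single predicate. The function name here is mine, not from any framework; it takes the event’s target and the modal’s root node, mirroring the documentClick() logic above:

```javascript
// Returns true when a click landed outside the modal.
// `target` is event.target; `modal` is the modal's root DOM node.
// Node.contains() returns true for the node itself and any descendant,
// so the explicit !== check is technically redundant but reads clearly.
function isOutsideClick(target, modal) {
  return target !== modal && !modal.contains(target);
}

// Framework-agnostic wiring (hypothetical showModal/hideModal helpers):
//
//   document.addEventListener('click', (e) => {
//     if (isOutsideClick(e.target, modal)) hideModal();
//   });
```

Because the listener sits on the document and merely inspects each click, nothing about the event’s normal propagation is altered.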

  1. As a bonus, this piece will also introduce you to the little-known defaultPrevented property of event objects. ↩︎

The Outline’s Linkbait Piece on GarageBand 

Yesterday, Stephen Hackett linked to an Outline piece titled Why Are There So Many Knobs in GarageBand?. Aside from the headline, the word GarageBand occurs just twice in the article, and those are throwaways. There are many mentions of other applications and screenshots of them, but there are no screenshots of GarageBand. In other words, John Lagomarsino threw in an Apple application to make the headline catchy but then proceeded to talk about other audio software.

GarageBand is an excellent application. It’s not a fraction as skeuomorphic as the screenshots in this article. I’ve spent about five hours this week in GarageBand playing around with some mixes, and the interface is more or less exactly what I would expect and desire from an Apple audio application in 2017. The fact that John can’t critique it is telling. It’s a very well designed application that’s been through multiple revisions.

Linkbait aside, I found this to be inaccurate as well. John writes:

Faders, switches, knobs, needles twitching between numbers on a volume meter — they’re all there. Except you have to control them with a mouse.

First, how many people use GarageBand on a Mac versus on an iPad or an iPhone? Some of them are using a mouse. Many of them are using touch, and knobs make perfect sense on a touchscreen device.

Second, while it’s somewhat cumbersome to the uninitiated to use a mouse on a knob, it’s difficult for me to think of a UI control that would be its superior. What does John want here? A slider? A numerical input? The nice thing about a knob is that it gives you a 270° spin. If you’re used to controlling things with knobs, you can just look at a knob and know exactly where it’s set, in a way that’s not as visually intuitive with a slider or numerical input or anything else I can think of.

I have yet to find a piece from The Outline that I liked.


Spectre was one of the truly great video games of the ‘90s. Today I discovered an online embedded version of it and it’s exactly how I remembered it. Reading through the user manual, I realize that there were all kinds of features I hadn’t ever used, too. I’ve previously talked about the dangers of remembering past software too fondly but in this case, it’s well founded. Definitely worth checking out over the weekend, regardless of whether you’ve played it before.

82 Percent of Computer Science Majors Are Male 

Gaby Galvin, writing for U.S. News last year:

The gender gap in computing jobs has gotten worse in the last 30 years, even as computer science job opportunities expand rapidly, according to new research from Accenture and Girls Who Code.

In 1984, 37 percent of computer science majors were women, but by 2014 that number had dropped to 18 percent, according to the study. The computing industry’s rate of U.S. job creation is three times the national average, but if trends continue, the study estimates that women will hold only 20 percent of computing jobs by 2025.

Women are 55 percent of undergraduate students in college, but only 18 percent of computer science majors are women. Why isn’t that number 55?

If you think that one hundred percent of that 37 percent delta is explainable in terms of gender bias then you believe in Santa Claus.