Monday, April 30, 2018

Intel delays Cannon Lake processors →

Malcolm Owen for AppleInsider:

Revealed during Intel’s quarterly earnings report, the chip giant revealed it would continue to focus on shipping chips that use the established 14-nanometer process this year, reports PC Gamer. While next-generation chips using a 10nm production process will ship this year, Intel is instead shifting high volume manufacturing into 2019.


Intel CEO Brian Krzanich advised the change in pace was caused through issues achieving suitably high yields of 10nm chips. Rather than try to achieve high volume production this year, and potentially waste considerable portions of wafers used in manufacturing, the company is instead taking time to fix issues before attempting mass production.

Bad news for all PC makers, and another perfect example of the entire industry’s reliance on Intel. Apple’s rumored switch to ARM may be worth the headache.

Sunday, April 29, 2018

It has been 229 days since AirPower was announced →

Nick Heer for Pixel Envy:

Imagine an alternate universe where the AirPower and the wireless charging case for the AirPods weren’t announced until, say, the opening keynote of WWDC this year with same-day availability. Sure, buyers of iPhones and Apple Watches that were released last year would have to suffer through several tedious months of wondering why Apple didn’t make their own charging pad because many of the ones out there right now aren’t very good, but the reaction to its then-immediate availability would have been a classic example of underpromising and overdelivering.

Good points by Nick.

There’s one thing I noticed, too: we haven’t received the typical “product needs a little more time” statement from Apple, like when AirPods were delayed for about two months. If this silence is any indication, something completely unforeseen happened with AirPower; otherwise, Apple wouldn’t have announced it so far in advance. Then again, they have gotten into the rhythm of announcing products early over the past couple of years. I don’t know if it’s Tim’s call in particular, but it’s quite a shift from announcing and releasing in the same day or week like they used to.

Friday, April 27, 2018

Apple should reinvent home networking

I started writing this article about a month ago. Here’s part of the original intro:

You may have been saddened to hear that Apple has no plans to update their wireless networking/storage line of equipment known as AirPort, but I think something bigger could be brewing in Cupertino.

Now in the wake of AirPort’s official demise, my thoughts remain unchanged on how and why Apple should reinvent home networking, because it’s needed more than ever.

Like any Apple product, AirPort routers offered a simple interface for configuration, employing Apple’s ‘it just works’ mentality. I have never owned an AirPort product, but from what I know, the entire line handled the basics swimmingly. But the reality is: the demand on our home networks and the Internet is only growing, and AirPort was essentially a hobby, similar to the original Apple TV.

In a world where everything is connected, in a time where privacy and security are more important than ever, Apple should seize the opportunity to offer a modern wireless networking solution that also takes advantage of their flourishing ecosystem.

Read on

Thursday, April 26, 2018

Apple officially discontinues AirPort line of products →

Rene Ritchie asked Apple what’s up with their AirPort line of products (AirPort Express, AirPort Extreme, and AirPort Time Capsule Wi-Fi routers). Here’s the official word from Apple:

“We’re discontinuing the Apple AirPort base station products. They will be available through Apple.com, Apple’s retail stores and Apple Authorized Resellers while supplies last.”

The writing has been on the wall, as the AirPort line has become increasingly stagnant. The timing of this news is apropos, as I have been working on an article detailing the case for Apple to reinvent home networking.

Apple could seize the moment and create a modern Wi-Fi system at a time that would be advantageous for them and their customers. I look forward to publishing my thoughts soon, as I’m not so sure Apple is bowing forever out of this business.

Wednesday, April 25, 2018

How to get Workflows for your iPhone and iPad →

It’s time for some more Workflow-goodness, similar to my post from the other day.

Matthew Cassinelli for iMore:

Workflow for iPhone and iPad is Apple’s powerful automation app, letting you create or get other people’s workflows that you can use to speed up tasks on your devices.

But you don’t have to be able to create workflows to benefit from them – you can add them from the Gallery or import them from other people, just run those, and still get a lot of benefit from using Workflow.

Workflow is a really powerful app that was purchased by Apple. I have come to rely on it heavily.

Fun fact: Matthew was on the Workflow team before Apple bought the app (and a little after), so he’s the perfect person to write this. As a matter of fact, if you’re itching for more advanced iOS automation techniques, check out his personal blog.

Tuesday, April 24, 2018

Speed up Apple Watch software updates by disabling Bluetooth →

Christian Zibreg for iDownloadBlog discovered a faster way to update his Apple Watch:

Disabling Bluetooth on your paired iPhone at the right time will force your Apple Watch to connect to your iPhone via the faster Wi-Fi protocol.

Read through to find out exactly when you need to disable Bluetooth during the update process for this to work.

This is great, because when I update mine, I swear I’ve been transported back to 1998 with a 56k modem. I have always wondered why Apple doesn’t broker this process over Wi-Fi by default; it sure would make for a better experience. I’ll have to give this trick a try with the next update.

Monday, April 23, 2018

Michael Rockwell’s Workflow Toolkit →

I discovered Michael Rockwell’s blog, Initial Charge, last week (he’s also the creator of #OpenWeb). Upon perusing his site, I came across ‘The Toolkit’, which is his list of publishing workflows for the … well … Workflow app.

I’m particularly fond of the ‘Push To Ulysses’ flow, which I even used to write this post. So meta. Here’s Michael’s description of it:

Push To Ulysses: When viewing a webpage in Safari, initiate Push to Ulysses from Workflow’s action extension. A new sheet will be opened in Ulysses with my template for publishing Linked List items. If activated with text selected on the webpage, that text will be placed in a blockquote within the body of the template.

There are quite a few more, so if you’re a web publisher, head on over and check them out.

The Subscription Age

If history has taught us anything, it’s that quite a number of folks don’t like to pay outright for digital content and services. Ever since the dawn of widespread Internet adoption in the ’90s, people have always figured out ways to get content for free: from early peer-to-peer file sharing services such as Napster and Kazaa, to the more modern BitTorrent, to questionable streaming setups built on Kodi. But now there’s a new age upon us. It’s an age so convenient that we’re willing to forgo alternative means and pony up! Yes indeed, it’s The Subscription Age.

Read on

Thursday, April 19, 2018

Siri isn’t dumb, she’s less consistent

Everyone loves to hate on Siri. The common trope is that she’s dumb, or simply not up to par with the other voice assistants (namely Alexa and Google Assistant). I believe this perception largely comes from Siri’s greatest opportunity for improvement: general knowledge. 1

Table Stakes

Let’s first address the table stakes among digital assistants — weather, sports, news, smart home functions, etc. I feel they all do these jobs equally well, with only minor differences.

For example: let’s say my living room Lutron Caséta dimmer is at 5%, but I want to raise it to 100%. If I tell Alexa to “turn on the living room lights”, Alexa is smart enough to interpret my intent as a human would and just raise the lights. A human might have more snark at first. Siri, on the other hand, does not understand my intent. If I issue the same command to her, she does nothing because the lights are already on. Like a child, she might as well be saying “the lights are already on, duh”. I must specifically ask Siri to “set the lights to 100%” or some variation.

It’s a little annoyance, and although I prefer Alexa’s handling of the situation, there is still feature parity here.
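The gap between the two readings of “turn on the lights” boils down to intent handling. Here’s a toy sketch of the difference (purely illustrative, not either company’s actual logic):

```python
def alexa_style_turn_on(brightness):
    """Interpret intent: "turning on" a dim light means full brightness."""
    if brightness < 100:
        return 100  # raise the lights, as a human would expect
    return brightness  # already fully on; nothing to do


def siri_style_turn_on(brightness):
    """Literal reading: any nonzero brightness already counts as "on"."""
    if brightness == 0:
        return 100
    return brightness  # "the lights are already on, duh"
```

With the dimmer at 5%, the first function returns 100 while the second leaves it at 5 — exactly the behavior described above.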

General Knowledge

By contrast, I feel this is the main area in which Siri lacks consistent feature parity with the others. Even in my own circle of friends and family, the questions that fail the most fall into this category. These are usually questions I would never ask Siri myself, since I know she can’t answer them accurately (if at all). Here are just a few examples, comparing Siri and Alexa.

Are tomatoes a fruit?

  • Siri: Wolfram Alpha results with no direct answer to the question.
  • Alexa: “Yes, a tomato is a fruit.”

What is the largest freshwater lake in the world?

  • Siri: “Here’s what I found on the web.”
  • Alexa: “The largest freshwater lake by area is Lake Superior, at 31,795.5 square miles.”

What time is Brooklyn Nine-Nine on?

  • Siri: “Sorry, I couldn’t find anything called ‘Brooklyn Nine-Nine’ playing nearby.”
  • Alexa: “Season five of Brooklyn Nine-Nine airs on Fox Tuesdays at 9:30pm Eastern and 8:30pm Central.”

Now, I will say that Siri answered most of my general knowledge questions correctly (about 70% of them) as I was looking for the above examples. However, every time Siri answers incorrectly or in an unexpected way, trust in the service takes another hit.

The negative perception of Siri will continue to grow until Apple addresses this area and others (hopefully in some capacity at this year’s WWDC). This isn’t Siri’s only problem, but I think it’s the biggest one. Drastically reducing the fallbacks to web searches (like those above) is another. As Siri and voice input are increasingly positioned at the forefront of new computing methods, the last thing Apple needs is to be thought of as behind. Does this all make Siri dumb? No. It makes her less consistent.


  1. General Knowledge. /salute 

Monday, April 16, 2018

Apple explains how Personalized Hey Siri works →

Apple’s latest entry into their Machine Learning Journal details how they personalized the Hey Siri trigger phrase for engaging the personal assistant. Here are a few interesting tidbits.

[…] Unintended activations occur in three scenarios – 1) when the primary user says a similar phrase, 2) when other users say “Hey Siri,” and 3) when other users say a similar phrase. The last one is the most annoying false activation of all. In an effort to reduce such False Accepts (FA), our work aims to personalize each device such that it (for the most part) only wakes up when the primary user says “Hey Siri.” […]

I love the candidness of the writers here. I can also relate to the first scenario. Let’s just say I’ve learned how often I say the phrase “Are you serious?”, because about 75% of the time I do, Siri thinks I’m trying to activate her. It’s fairly annoying on multiple levels.

On Siri enrollment and learning:

[…] During explicit enrollment, a user is asked to say the target trigger phrase a few times, and the on-device speaker recognition system trains a PHS speaker profile from these utterances. This ensures that every user has a faithfully-trained PHS profile before he or she begins using the “Hey Siri” feature; thus immediately reducing IA rates. However, the recordings typically obtained during the explicit enrollment often contain very little environmental variability. […]

And:

This brings to bear the notion of implicit enrollment, in which a speaker profile is created over a period of time using the utterances spoken by the primary user. Because these recordings are made in real-world situations, they have the potential to improve the robustness of our speaker profile. The danger, however, lies in the handling of imposter accepts and false alarms; if enough of these get included early on, the resulting profile will be corrupted and not faithfully represent the primary users’ voice. The device might begin to falsely reject the primary user’s voice or falsely accept other imposters’ voices (or both!) and the feature will become useless.

Heh. Maybe this explains my “Are you serious?” problem.
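The enrollment scheme described in the excerpts can be sketched in a few lines. This is a hypothetical toy model — the class name, thresholds, and use of cosine similarity over embedding vectors are my assumptions, not Apple’s implementation: a profile is seeded from explicit enrollment utterances, and real-world utterances are only folded in when they match the profile very strongly, guarding against the imposter-accept corruption the journal warns about.

```python
import math


def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


class SpeakerProfile:
    """Toy speaker profile: the mean of accepted utterance embeddings."""

    def __init__(self, accept_threshold=0.8, implicit_threshold=0.95):
        self.vectors = []
        self.accept_threshold = accept_threshold      # wake up for this speaker
        self.implicit_threshold = implicit_threshold  # safe enough to learn from

    def enroll_explicit(self, utterances):
        # "Say 'Hey Siri' a few times": seed the profile directly.
        self.vectors.extend(utterances)

    def _mean(self):
        n = len(self.vectors)
        return [sum(v[i] for v in self.vectors) / n
                for i in range(len(self.vectors[0]))]

    def matches(self, utterance):
        # Does this utterance sound like the primary user?
        return cosine(self._mean(), utterance) >= self.accept_threshold

    def enroll_implicit(self, utterance):
        # Only fold in real-world utterances that score very highly,
        # to avoid corrupting the profile with imposter accepts.
        if cosine(self._mean(), utterance) >= self.implicit_threshold:
            self.vectors.append(utterance)
            return True
        return False
```

The stricter `implicit_threshold` is the interesting design choice: it trades away some environmental variability in exchange for keeping imposters out of the profile, which is exactly the tension the journal entry describes.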

They go on to explain improving speaker recognition, model training, and more. As with all of Apple’s Machine Learning Journal entries, this one is very technical in content, but these peeks behind the curtain are highly interesting to say the least.

One thing I didn’t see mentioned was how microphone quality and quantity improve recognition. For instance, Hey Siri works spookily well on HomePod, with its seven microphones. However, I assume they aren’t using Personalized Hey Siri on HomePod, since it’s a communal device with multiple users, so the success rate may be implicitly higher already. Either way, I wish my iPhone would hear me just as well.