The Orbit20™ Arrived Today, and I Want to Share my Thoughts

Last week, I posted about my experience with HIMS Inc. and the nightmare I went through to get my SmartBeetle braille display repaired, only to have it break again and discover it was effectively totaled. You can go back and find that post if you like, but the short version is that it was a horrible experience, HIMS Inc. is the Yellow Eyes to my Winchester Brothers, and there’s no changing my mind. The morning after I posted that, I ordered the Orbit20™, priced at $449 before tax and shipping. As I’m drafting this, I’ve had the device for just under three hours, which is long enough for me to have formed my opinion and feel comfortable sharing it.

Why This Over Other Reviews?

The other reviews I’ve seen go through the unboxing of the Orbit20™, call it a nifty device, then complain about the lack of cursor routing keys. Since I’d seen those reviews, I was well warned before I purchased. The absence does impact some features, but more on that later. Also, I’m not going to spend time on the unboxing, because that’s information you can find on the Orbit Reader 20™ product page. In fact, if you don’t get all the items in the box, you’re supposed to call the seller. What you’ll find here are my impressions from using the device.

At the time of this writing, I’ve only tested the device as a Bluetooth display for my iPhone XS running iOS 12.1.2. I figure that’s what most of my readers will be interested in, and I don’t imagine the standalone reader and notetaking capabilities would be much different from this experience.

About The Layout

For purposes of this post, the important thing to understand about the layout of the Orbit20™ is that the six keys for entering braille are at the very top of the unit; there is a circle of arrows with a select button in the center, positioned between and slightly below the six input keys; the spacebar is below the circle of arrows; and dots 7 and 8 are on the left and right sides of the spacebar respectively. This means that dots 7 and 8 are well below the main six input keys, and I’ll get to exactly why that’s important shortly. The device’s exterior is made entirely of plastic, and let’s skip the Greek that is actual dimensions and just say that the unit is a little chunky in the hand compared to the standards of a lot of electronics you see these days. The positive is that it actually feels durable, which is a nice contrast to a lot of today’s electronics.

Reading and Interacting

Since I’m using this to interact with my iPhone, the way I use the Orbit20™ is probably different from the way someone who uses it to interact with a PC might experience this product. If you are doing that, you’re welcome to share your experiences in the comments, just be respectful.

14 Vs. 20 Cells

I noticed this difference right away between the Orbit20™ and my SmartBeetle. Basically, there’s practically portable, and then there’s sacrificing functionality in the name of portability. At 20 cells, you can read most app names without panning the screen, unless there’s a notification badge. With 14 cells, I often had to pan at least twice to read many app names. This difference really comes into play when I’m reading email, messages, or social media posts, especially when the content has multiple emojis, the descriptions of which can end up feeling like their own paragraph.

Basic Navigation

Most of the navigation can be performed using the circle arrows. Push the right arrow to move to the next item on screen, the left arrow for previous, and the select button to do the double-tap. You can use the up and down arrows to move between items based on the setting of the rotor. You can also perform these actions with the six main keys and the spacebar, but you’ll be switching between your left and right hand a lot, so the other hand can do the reading.

Commands Using Dots 7 or 8

Dots 7 and 8 are to the left and right of the spacebar, which is below the six main input keys and circle of arrows. This means there’s a good two to three inches your fingers have to cover to input a keyboard sequence like Command-Enter, which you do by pushing dot 1 + dot 7 + spacebar, releasing, and then pushing either dot 8 + spacebar or dot 1 + dot 5 + spacebar. This is a keyboard shortcut commonly used in apps like WhatsApp and Facebook Messenger. It’s perfectly doable, but I find it easier to just find and press the actual send button.


Just Text Input

Regular text (braille) input is one of the best parts of the display. The keys fit the fingers well, and press with minimal effort. You can do most of it without moving your fingers away from these keys.


This part isn’t so nice. I said I wouldn’t go on about the lack of cursor routing keys, but the fact is this absence impacts the process of proofreading something that has been written. With that said, iOS lets you customize many commands for a braille display, so this can be worked around with a little time and patience.

All in All, a Good Device

Overall, the Orbit Reader is a good device. Its high points for me are the 20 braille cells and the sharpness of the braille. It’s also extremely responsive to screen changes and button presses. Its low points for me are the layout of some of the buttons and the noise it makes when the braille refreshes. Out of all of the displays I’ve owned, this one is extremely loud. It sounds a bit like a fly trapped in a window screen on a summer evening. This is, however, a worthwhile buy if you use braille or know someone who does.

Why You Shouldn’t Buy a SmartBeetle, and Why HIMS Inc. Needs to Be Taken Down a Notch

As a blind person who is a long-time tech user, I can tell you that the software and special equipment I need to be independent is an investment. Sure, I’m saved the expense of a car, auto insurance, and the cost of maintenance and licensing fees, but these costs get replaced with the costs of my screen reader and refreshable braille display. Until recently, a screen reader cost just about as much as the computer I wanted to access, and I had to buy upgrades every so often to make sure I could keep accessing new versions of mainstream applications.

With many operating systems including built-in screen readers, it has become more affordable to obtain access to computers, so long as you’re the kind of user who can get by on text-to-speech feedback for the contents displayed on a computer screen. I am not this kind of user, and the cost of a refreshable braille display remains high. The SmartBeetle, the kind of display I owned until recently, sold for $1,345, and that was one of the cheapest units; it now sells for $995. In other words, readers, it’s an investment. Unlike a screen reader that only provides spoken versions of visual elements, the braille display gives me tangible renderings of things on the screen. I relied on it to check the spelling of people’s names and email addresses, to proofread documents, to privately read communications from friends, family, and coworkers, and to engage in other activities where it would not be beneficial to have the contents of my screen spoken. With that said, when a piece of equipment that carries such a huge workload breaks, problems will be had.

Six months ago, the SmartBeetle broke. Specifically, it stopped allowing me to connect to devices via Bluetooth, which is essential to its functioning. If this were a device used by the majority of the population, getting in touch with the company’s technical support team would yield a timely response, and it would not be okay for such communications to go unanswered. But braille displays are not used by a majority of the population, and the SmartBeetle, which is manufactured by HIMS Inc., is no exception. This apparently gives HIMS Inc. license to have a lax standard of support team responsiveness. I sent several emails and made several phone calls, all of which detailed my problem, and got no response.

When I finally did get a response, the first thing that happened was I had to go through all the troubleshooting steps listed in the user guide with the customer service person. This is annoying but standard no matter what, so I cooperated. This yielded the response that the manufacturer in Korea would need to be contacted to find out what to do. At a company that sells popular devices like computers, I would have been given a support ticket, which would have allowed me to track the progress of my support request. It also would have given the next support agent a frame of reference when I reached out. None of these things happened, there was no follow-up from the support department, and I sent several more emails and made more phone calls and left messages that got no response. Meanwhile, all of the activities I described above are lessened in quality for me.

Just before this past Thanksgiving, I finally got in touch with a support agent, and it was determined that the Bluetooth board had gone out and would need to be replaced. The cost of the replacement part was $170. Just as a point of reference, you can buy a fairly kickass set of Bluetooth headphones for that price. So for the entire repair process, I had to pay $20 to ship the SmartBeetle to the company for repairs, $170 for the Bluetooth board, $85 for an hour’s labor, $20 to ship the device back, and $20 to rescue the device from a UPS center when they couldn’t deliver the package to me at my house. This totals $315 for repairs, about a quarter of the original price of $1,345, or a third of the current $995 price for a new SmartBeetle.

Two weeks later, one of the cells in the device stopped functioning. This is kind of like what happens when some of the pixels in your TV screen go out. This time the cost of replacement parts would be $780, and that excludes any labor, shipping, and transportation costs involved. When I told the support agent I had just sent the device in for repairs and paid over $300, the response I got was, “Bummer.” It turns out there is a 90-day warranty, but it only covers the part that was replaced. This means I would now be putting more than the cost of a new unit into fixing this one. To any of you car owners, this means the device has been effectively totaled. When I pointed this out, I was encouraged to buy a new unit because it was the better deal, never mind that this is damage that could have been caused during the repair and shipping process.

If this were an iPhone and I had gotten a customer experience of a quality that compares to Apple’s, it would be a no-brainer. Unfortunately, I have six months of no response, ball-dropping, and a company that seems to feel it’s okay to charge more for repairs than for a replacement unit. The worst part is that HIMS Inc. and companies like it have gotten so used to people having to put up with their antics that they don’t even care that I’m less than pleased with them. I’m posting my experience on all of their dealers’ sites, I’ve hit their Facebook page, I will probably be targeting their Google Maps page if they have one, and the only result is catharsis for me. HIMS and companies like it need to be taken down a notch. They cannot continue to treat people like this. Even if a limited market means a higher price for products, that doesn’t mean you can treat your customers like total shit.

In this case, I found a replacement device for $449 from a company that supports its products. It’s still more money than I wanted to spend, but it’s better than giving these losers any more money. In short, reader, don’t buy a SmartBeetle, and don’t buy from HIMS Inc. The product doesn’t last, and the customer support is lousy.

Instagram Adds Support for Alt Text, But It Still Doesn’t Have My Attention as a Blind Person Interested in Photography

Yesterday, Instagram announced two new features it was adding to make the platform more accessible to visually impaired users. First, newly uploaded photos would have automatically generated alt (descriptive) text added to them. This feature is powered by the same technology that Facebook added to its platform a few years ago, which attempts to determine and name the objects in posted photos, and make those results available to screen readers so users of the software can benefit from the description. These days, that technology has advanced enough so that a lot of the memes people post and share on both of these platforms can be enjoyed by visually impaired users.

Second, users of Instagram have the option to add alt text to photos they post, potentially providing a more detailed description for their visually impaired followers. This is similar to the way Twitter and Mastodon have decided to handle making images accessible. While this is a huge step forward for the platform, and while I had lots of fun testing the feature on my own Instagram account, there are a couple things that need to be worked out before I adopt it as my main outlet for nurturing my interest in photography.

Complicated Execution

The process for adding custom alt text to photos when you post them on Instagram is somewhat complicated. In addition to the normal steps for posting a photo, such as adding filters and captions, you actually have to open the advanced settings at the bottom of the posting screen, add the alt text, then share the photo. To be fair, this is still easier than Twitter’s execution, which requires you to go into the accessibility section of your Twitter settings and enable the option to add alt text before it even becomes available in the tweet posting box. Mastodon offers the option to add a description to photos right out of the box, which makes it the best of the bunch, but all three of these platforms are handling the ability to add custom descriptions better than Facebook. Facebook does allow custom alt text to be added to photos, but at the time of this writing, this help article explains that this is only possible from a computer. After expressing my displeasure about this on the Facebook Accessibility page, I received a response that the version of the iOS app released today does allow the editing of alt text, though in an extremely complicated way.

Alt Text Does Not Transfer

My interest in photography is a growing thing, and I want to share the experience with as many people on as many platforms as possible, and I want to make sure it’s an accessible experience. When I found out that Facebook wouldn’t let me edit the alt text of photos, I thought: Hmm. Instagram lets you cross-post to a number of services, including Facebook. I bet if I add the alt text on Instagram and cross-post to Facebook, that’ll solve the problem.

Unfortunately, no. When I went to my Facebook timeline and looked at the photos, I either got the message that no alt text was available, or the automatically generated stuff that Facebook’s been putting out for a few years now. The alt text was also not posted to Twitter. One possible work-around is to configure an IFTTT applet that posts Instagram photos as native Twitter photos, but I haven’t tested this yet. It’s worth noting that if you cross-post from Twitter to Mastodon, the alt text is transferred 98% of the time.


Instagram’s support for automatic alt text and its giving users the ability to add custom alt text to photos is a huge step for accessibility. However, as a blind person who is posting more photos and wants to make sure their posts are accessible on all platforms, the experience is missing a couple of key features I need for it to become my main platform for sharing my experience as I develop my interest in photography. Right now, I’m separately posting to each network, and adding descriptions in the main post section of Facebook. There’s more effort involved, but the end result, to me, is worth the extra energy.

Pet Peeve: Calling Something a Hack When It’s Not

We’ve all been in the position of looking up how to do something with one of our electronics on the Internet. These days, it seems that many search results have the word “hack” in them, as in “Five Amazon Echo Hacks You Didn’t Know About”. I remember a time when a title like that meant you’d be clicking a link to a set of directions to make your device do something the manufacturer hadn’t intended, and may even frown upon. Now, it’s just a way to get people to click on a rehashing of the directions that come with the equipment. In other words, these aren’t hacks. The manufacturer wanted you to use these features. If anything, they’re lesser-known features. Calling a documented feature a hack is like calling mayonnaise a secret sauce when you slap it on a ham sandwich. It doesn’t change anything, and it makes you look pretentious in the bargain.

A Tip of my Hat to Two Sites in the Blindness Community Promoting Gender Inclusion

It’s a good thing when any website offers options beyond the binary when it asks for gender. When sites within a smaller community, such as the one made up of people who are blind or share any other disability, do this, it’s even better. Blind Bargains did this in their most recent survey to collect feedback about their podcast. When I tweeted my pleasure at this discovery, I received a reply from one of my followers that RS Games, a site that offers accessible board and card games for people with visual impairments, now gives its players the option to identify as nonbinary. As a long-time player of RS Games, I can tell you this is a major relief after years of having to go by “it” because I refused to choose either of two options when playing. If you know of any other blindness-related sites that are promoting this sort of inclusion, please post them in the comments section below, or mention them to me on Mastodon or Twitter. You can also use the contact form in the navigation menu at the top of this page to send an email if you want to keep yourself private. This way, we can make sure everyone gets the credit they deserve and promote inclusion. Thank you in advance.

Here is More Information About Google Docs for WordPress

Hello Followers,

When I posted about finding a Google Docs extension for WordPress, I got lots of questions. Here is the link to the article that taught me how to use it. To this person’s advice, I’d like to add that it is a good idea to upload photos to the media library of WP, rather than insert them into Docs. This makes sure that when you add the alt text, it stays alt text and doesn’t become a file title written in scripture.

…for Google Docs is a new add-on that’ll make your life a whole lot easier. Here’s how you can use this tool for your website or blog.
— Read on

Add Descriptions to Pictures we Share: Why People Don’t, What the Benefits are, and Two Rules for making them Count

A baby dragon on it's trainer's arm, ready to hunt.


Posting pictures is something we do every day. Maybe it’s a cute outfit your child’s wearing. It could be a ginormous sandwich you’re having for lunch. Possibly, it’s a cloud you truly think looks like a dinosaur. No matter what it is, the process usually goes something like this.

  • Take the picture or pictures.
  • Call up the share sheet on your mobile device, and choose your social network.
  • Insert your commentary with appropriate hashtags, and hit the post button.
  • Wait for likes and comments.

Most people who go through this process miss a step. That step is adding a description (also known as alternative or alt text) to the photo. If you’re reading this post, it’s probably because you have decided that you want to be adding alt text (descriptions) to your photos, and you want to make them as effective as possible for people viewing your posts. In other words, you’ve typed very specific search terms into the search engine of your choice because this is something you’re actively looking for. If this isn’t the case, and you’re just learning about alt text for the first time, don’t worry. Here is a page that offers a summary of what alternative text is, how to add it to photos, and includes other tips for making sure your content is as accessible to your audience as possible. Since I don’t believe in rehashing content that is already available and well-written, I’m going to assume that you’ve either read the page I’ve just linked to, or that you’ve at least done your research into the platform you’ve chosen to use to find out what it offers for adding descriptions to your pictures. With that said, I’ll be talking about the experience I had that made me realize why most people never think to add descriptions to their photos, describe two benefits to doing so, and talk about the following two rules, which are actually more like guidelines, for making your descriptions as effective as possible:

  1. Be as concise and detailed as possible.
  2. Adjust your description according to your purpose and audience.

Why Aren’t People Describing Their Photos?

Remember the process I said you go through to share your photos? More specifically, remember the step that references the share sheet on your mobile device? Here’s what I found out while researching platforms to share my own experiences with photography. When you choose a photo from your mobile device and use the share sheet to post it, the option to add alternative text or a description for the photo isn’t available. This is true even if the platform in question gives users the ability to include this information with their posts when made through the website or app for that platform. This means that people are not always making the decision to not include descriptions with their photos, but rather that they are not being given the choice at all because they’re taking the most user friendly and direct route to sharing their content.

Again, if you’re reading this, it’s because you’ve decided that including descriptions with your photos is something you should and want to do. You have your own reasons and you want to make sure you’re being effective. If you’re still making that decision, or just in case you like to be reminded that taking the extra steps to add descriptions is worth it, the next section describes two benefits of this process.

Two Benefits to Adding Descriptions to Your Pictures

The first benefit to adding descriptions by using alternative text to your pictures is that it makes your content accessible to everyone, including viewers with visual impairments. With the increasing adoption of universal design by large technology companies, more visually impaired people than ever have access to the Internet and its content. In a digital world where descriptions of photos are desired but not largely available, you can stand out as a person who is aware of the different needs of others and/or technologically savvy just by taking the extra steps to add alternative text to your photos. You’re also making your content more accessible to search engines.

If you take a few seconds to think about it, you’ll realize making sure your photos have appropriate descriptions can make it easier for people to find your content. Search engines provide results based on the text a user types into the search box. While it is possible to filter results to images, those images are found based on the text in the search box. In other words, describing your images lets the search engine properly index them and lead people to your content. With these things in mind, let’s talk about how to make sure the descriptions are as effective as possible.

Be Detailed and Concise

A baby dragon on its trainer's arm, ready to hunt.

Most people tend to think the title of an image is enough of a description. I like dragons, and I tend to use them for examples when I can. The original title of this picture is “Baby Dragon”. If a picture’s worth a thousand words and we’re only using two to describe it, it should be immediately clear that something important is being left out. For starters, “baby dragon” can mean a lot of things. It could mean a dragon just hatched, it could mean a small, not-yet-vicious dragon that someone thinks they can tame, or it could and does mean that it is a small dragon on its trainer’s arm, ready to hunt. What’s important is that we, the ones posting the picture, make sure to provide the relevant details in as few words as possible. The way this is done depends on the content being posted.
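If you publish somewhere you control the markup, such as a self-hosted WordPress blog, this difference lives in the image’s alt attribute. Here’s a minimal HTML sketch using the dragon picture from this section; the file name is made up for illustration:

```html
<!-- Title-only description: two words standing in for a thousand -->
<img src="baby-dragon.jpg" alt="Baby Dragon">

<!-- Concise but detailed: the relevant facts in one short sentence -->
<img src="baby-dragon.jpg"
     alt="A baby dragon on its trainer's arm, ready to hunt.">
```

A screen reader speaks the alt attribute in place of the image, so the second version is the one that actually carries the picture’s meaning to a blind visitor.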

For example, if the content is a meme, both text and details of the picture should be included in the alt text. Similarly, if the picture is all text, something that occurs on both Twitter and Instagram, the text of the picture needs to be included in the alt text. This allows our audience to have quick access to the information. This also means that people in the photo who are an important part of the photo should be identified, and that screen shots should include descriptions of the important parts of the screen. This last is especially true if you are posting how-to articles so viewers can make sure they’re on the correct step in the guide.

In regard to being concise, a lot can be done simply by not repeating details that are in the text of your post. For example, if your post text says, “Lovely night at the beach” and your picture is of the beach at nighttime, you can leave “beach” and “night” out of the description, and spend more time describing the other elements like the colors of the sunset or how much moon is visible.

Adjust Your Description according to your Purpose and Audience

An orange dragon flying through the sky with the sun in the background.

While you were choosing a platform to share your content, I imagine an important step for you was figuring out which of them would be most accessible to your intended audience. I chose WordPress because it lets me build a website I know is accessible to people who use screen readers, people don’t need an account to read my stuff, and I can post my posts to other social networks. We go through a similar process when describing photos.

I described the baby dragon just by saying “a baby dragon,” but I did not include details like color. As a dragon enthusiast who interacts with other dragon enthusiasts, I can tell you that a lot of significance is placed on the color of the dragon, as well as whether it is Western, Eastern, or Celtic, as this distinction indicates physical characteristics we would expect to find. For the record, the dragon used to start this section is a Western-style dragon, a detail I left out in the alt text. How much detail and what details are necessary will largely depend on who you want to reach.

If you’re posting detailed computer how-tos with screenshots, you’ll probably want to include the name of the screen you’re on, the available options, and any messages that appear if the screenshot shows the result of an action or series of actions, so that someone following the steps in your guide can compare their results. If you’re an artist communicating with other artists, you will likewise need to adjust the level of detail in your descriptions; the folks over at have this down to a science.


This post described a reason why people don’t add alternative text (descriptions) to photos they upload, the benefits of doing so, and two guidelines for making the descriptions meaningful to the intended audience. Just like any other rules, there are exceptions. There are computer programs that provide descriptions of photos based on complex formulas, but those formulas still don’t always manage to adjust for context to communicate the best meaning behind a picture. Until the day machines can accurately interpret context, it’s up to us to make sure we’re providing quality descriptions.

Three Barriers I Encountered as a Blind Person to Setting Up Face ID, and How I Trained My Eyes to Use Attention Detection on iPhone XS


Face ID for iPhones has been available for over a year now. Because I learned my lesson about early adopting from the Apple Watch, I decided to stick with a device that still let me unlock my phone with my fingerprint. With the release of this year’s line of iPhones, one thing was made very clear: Face ID isn’t going away in the near future. So, feeling secure thanks to Apple’s excellent return policy, I figured it couldn’t hurt to try it out.

Okay, you’re blind, but you’re into photography. What’s the big deal?

As it turns out, there’s really no big deal at all. When I take a picture, my goal is to get the object or objects centered in the frame and get the picture. Face ID works just like that once it’s set up. Jonathan Mosen published a getting started guide whose directions and security advisories are still current, so I don’t see a need to rehash that here. There’s really only one point on which I disagree with him, and we’ll get to that in a minute. What you’ll find here are some considerations for setting this feature up if you’re totally blind, followed by a description of a training method for using the phone’s attention detection feature.

Roadblocks to Setting It Up

The getting started guide I’ve just linked to details the setting up of Face ID with VoiceOver. While it is a very simple process, almost simpler than setting up Touch ID, there are some potential barriers that could impact one’s user experience and first impressions of this new feature. The first has to do with the attention detection feature.

It’s Disabled by Default for a Reason

When you set up Face ID with VoiceOver on, you get a dialog that tells you that attention detection is disabled, and that you can enable it in Face ID and Passcode settings if you wish. Since I made the mistake of enabling it without any sort of training and had to deal with the results, I feel comfortable telling you that the best thing to do is leave attention detection disabled until you’ve finished setting up your iPhone, which includes but is not limited to the setting up of all accounts, as well as two-factor authentication apps and password managers. Some of these apps make you verify your identity after attention detection is enabled, but trust me when I say that extra bit of effort is a lot easier to swallow than the frustration you’ll experience otherwise. Once you’ve read the training method section of this post, you may wish to consider enabling attention detection if for no other reason than leaving it disabled has security implications. The next issue has to do with lighting.

Finding light

I’ve been dealing with facial recognition apps for a while now, and proper lighting is important. One of the implications of the condition that causes me to be totally blind is that I have light perception on some days, and I am completely without it on others. The result is I need a reliable way to find light. Seeing AI has a light detection feature, and it lets me do that. It operates on a C scale, playing a higher note on the scale when brighter light is detected; the note becomes extremely high and atonal if the light is too bright. For the record, the best light for facial recognition is indicated by an E on the scale. For those of you who are unfamiliar with musical scales, this is the first note you sing in many songs, including but not limited to “Mary Had a Little Lamb,” which many people tend to sing in the key of C for some reason. Since I had an iPhone before, I was able to map out my apartment to find the best lighting prior to the new arrival, but you can do this any time before entering the setup screen. The final barrier has to do with just how to move your face.

Like clockwork? Not exactly.

I said earlier that I disagreed with Mr. Mosen on one point in his getting started guide, and here it is. In his guide, Mr. Mosen says,

Imagine that your nose is a hand of an analogue clock. Your nose needs to reach as many points on the clock as possible. So, after double-tapping the “get started” button, and waiting for confirmation that your head is positioned correctly, point your nose up to 12 o’clock, then move it around to 3 or 4. Point it down to six o’clock. Move your head in the opposite direction, so it reaches 9 or 8. Then conclude by moving it up to 12 again.

Here’s my problem, and I realize it may be a personal one. A clock is a two-dimensional surface, but the circle in which you need to move your head to set up Face ID is actually three-dimensional. There are lots of blind people, myself included, who have trouble interpreting two-dimensional representations of three-dimensional space and objects. This makes maps and diagrams especially useless for me. When I tried to follow those directions and get my nose to 6 o’clock, my head ran into my right shoulder, and I got stuck at four or five o’clock. With some help from the VoiceOver prompts, as well as relating it to my own experiences, I came up with the following:

Imagine that your head is a joystick on a game controller or old-style arcade machine. A joystick rests in a total of nine positions: center, forward, back, left, right, forward and left, forward and right, backward and left, and backward and right. Start with your head in the center, then move it through the remaining eight positions to capture your face, making sure you don’t move outside the phone’s field of vision. If you do, VoiceOver will let you know, and you’ll just have to reposition your head to continue. Once you’ve completed the process and finished setting up the rest of your iPhone, it’s time to train yourself to use attention detection.

How I Trained My Unruly Eyes

Another implication of my visual condition is that I have Nystagmus, which for purposes of this discussion means I have absolutely no control over my eye movements. This is what the eye doctors have always told me, this is what I told anyone who asked, and this is what we all believed. Aside from people getting upset because they think I’m rolling my eyes at them, it hasn’t caused me too much trouble. If my experience with Face ID and attention detection shows anything, it’s that I may have more control over my eyes than I thought. Here’s the process I went through, and I’m betting some of you will be able to do this too.

Taking Selfies to Find the Camera

You might not have realized it, but the iPhone’s front camera has an extremely bright flash. It’s so bright that even though I didn’t have light perception yesterday, I could feel the heat from it. In my case, I still have my eyes rather than prosthetics, so all the nerves are still intact. I spent a good half hour taking selfie after selfie until I could consistently get the heat of the flash in one or the other of my eyes. You can double-check this by going through your photos with VoiceOver, which will notify you if there are blinking eyes, something that tends to happen when a bright light hits them. The next step was to enable attention detection and go through the same process until I could consistently unlock the phone.

Making my eyes move where I want when I want

Here’s the thing to remember: eyes, regardless of whether or not they are performing their intended function, are a body part. This means, at least for me, that I can make my eyes move in conjunction with another body part, my hands and arms in this case. By holding my phone in both hands at or around the center of my body, I was able to make my eyes go toward the middle of the phone, first to find the flash, and then to get that satisfying click sound that means my phone is unlocked. I then had to keep doing it until I could unlock my phone in an amount of time comparable to what it takes me to use Touch ID.


This post described three barriers I encountered while setting up Face ID on my iPhone, and how I worked around them. I then explained how I trained myself to use the attention detection feature to get the most security possible from the device. At this point, I can unlock the phone consistently with Face ID and attention detection turned on. I still have the occasional failure, but I used to get those all the time with Touch ID, too. I still haven’t made up my mind on whether or not I like Face ID, but I still have thirteen days. Most telling, though, is the fact that I have not brought myself to wipe my old iPhone just yet.