Pet Peeve: Calling Something a Hack When It’s Not

We’ve all been in the position of looking up how to do something with one of our electronics on the Internet. These days, it seems that many search results have the word “hack” in them, as in “Five Amazon Echo Hacks You Didn’t Know About”. I remember a time when a title like that meant you’d be clicking a link to a set of directions to make your device do something the manufacturer hadn’t intended, and might even frown upon. Now, it’s just a way to get people to click on a rehashing of the directions that come with the equipment. In other words, these aren’t hacks. The manufacturer wanted you to use these features. If anything, they’re lesser-known features. Calling a documented feature a hack is like calling mayonnaise a secret sauce when you slap it on a ham sandwich. It doesn’t change anything, and it makes you look pretentious in the bargain.

A Tip of my Hat to Two Sites in the Blindness Community Promoting Gender Inclusion

It’s a good thing when any website offers options beyond the binary when it asks for gender. When sites within a smaller community, such as one formed around blindness or any other shared disability, do the same, it’s even better. Blind Bargains did this in their most recent survey to collect feedback about their podcast. When I tweeted my pleasure at this discovery, I received a reply from one of my followers that RS Games, a site that offers accessible board and card games for people with visual impairments, now gives its players the option to identify as nonbinary. As a longtime player of RS Games, I can tell you this is a major relief after years of having to go by “it” because I refused to choose either of the two options when playing. If you know of any other blindness-related sites that are promoting this sort of inclusion, please post them in the comments section below, or mention them to me on Mastodon or Twitter. You can also use the contact form in the navigation menu at the top of this page to send an email if you want to keep yourself private. This way, we can make sure everyone gets the credit they deserve and promote inclusion. Thank you in advance.

Here is More Information About Google Docs for WordPress

Hello Followers,

When I posted about finding a Google Docs extension for WordPress, I got lots of questions. Here is the link to the article that taught me how to use it. To the author’s advice, I’d like to add that it is a good idea to upload photos to the WordPress media library rather than insert them into Docs. This makes sure that when you add the alt text, it stays alt text and doesn’t become a file title written in script.

“… for Google Docs is a new add-on that’ll make your life a whole lot easier. Here’s how you can use this tool for your website or blog.”
— Read on

Add Descriptions to Pictures We Share: Why People Don’t, What the Benefits Are, and Two Rules for Making Them Count

A baby dragon on its trainer's arm, ready to hunt.


Posting pictures is something we do every day. Maybe it’s a cute outfit your child’s wearing. It could be a ginormous sandwich you’re having for lunch. Possibly, it’s a cloud you truly think looks like a dinosaur. No matter what it is, the process usually goes something like this.

  • Take the picture or pictures.
  • Call up the share sheet on your mobile device, and choose your social network.
  • Insert your commentary with appropriate hashtags, and hit the post button.
  • Wait for likes and comments.

Most people who go through this process miss a step. That step is adding a description (also known as alternative or alt text) to the photo. If you’re reading this post, it’s probably because you have decided that you want to add alt text (descriptions) to your photos, and you want to make them as effective as possible for people viewing your posts. In other words, you’ve typed very specific search terms into the search engine of your choice because this is something you’re actively looking for. If this isn’t the case, and you’re just learning about alt text for the first time, don’t worry. Here is a page that offers a summary of what alternative text is, how to add it to photos, and other tips for making sure your content is as accessible to your audience as possible. Since I don’t believe in rehashing content that is already available and well-written, I’m going to assume that you’ve either read the page I’ve just linked to, or that you’ve at least researched the platform you’ve chosen to use to find out what it offers for adding descriptions to your pictures. With that said, I’ll be talking about the experience that made me realize why most people never think to add descriptions to their photos, describe two benefits of doing so, and talk about the following two rules, which are actually more like guidelines, for making your descriptions as effective as possible:

  1. Be as concise and detailed as possible.
  2. Adjust your description according to your purpose and audience.

Why Aren’t People Describing Their Photos?

Remember the process I said you go through to share your photos? More specifically, remember the step that references the share sheet on your mobile device? Here’s what I found out while researching platforms to share my own experiences with photography. When you choose a photo from your mobile device and use the share sheet to post it, the option to add alternative text or a description for the photo isn’t available. This is true even if the platform in question gives users the ability to include this information with posts made through the website or app for that platform. This means that people are not always deciding to leave descriptions out of their photos; rather, they are never given the choice at all, because they’re taking the most user-friendly and direct route to sharing their content.

Again, if you’re reading this, it’s because you’ve decided that including descriptions with your photos is something you should and want to do. You have your own reasons and you want to make sure you’re being effective. If you’re still making that decision, or just in case you like to be reminded that taking the extra steps to add descriptions is worth it, the next section describes two benefits of this process.

Two Benefits to Adding Descriptions to Your Pictures

The first benefit of adding descriptions to your pictures by using alternative text is that it makes your content accessible to everyone, including viewers with visual impairments. With the increasing adoption of universal design by large technology companies, more visually impaired people than ever have access to the Internet and its content. In a digital world where descriptions of photos are desired but not widely available, you can stand out as a person who is aware of the different needs of others and/or technologically savvy just by taking the extra steps to add alternative text to your photos. You’re also making your content more accessible to search engines.

If you take a few seconds to think about it, you’ll realize making sure your photos have appropriate descriptions can make it easier for people to find your content. Search engines provide results based on the text a user types into the search box. While it is possible to filter results to images, those images are found based on the text in the search box. In other words, describing your images lets the search engine properly index them and lead people to your content. With these things in mind, let’s talk about how to make sure the descriptions are as effective as possible.
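To make the indexing point concrete, here is a toy sketch of how an indexer could pull descriptions out of a page using Python’s built-in HTML parser. This is only an illustration of the idea, not how any real search engine works, and the snippet and its alt text are invented for the example:

```python
from html.parser import HTMLParser

# Toy illustration: collect the alt text from <img> tags the way an
# indexer might, so the words in the description become searchable.
class AltTextCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.alts = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            alt = dict(attrs).get("alt")
            if alt:
                self.alts.append(alt)

# A made-up post: the picture's description lives in the alt attribute.
snippet = (
    '<p>Lovely night at the beach.</p>'
    '<img src="beach.jpg" alt="A crescent moon over calm waves at sunset">'
)
collector = AltTextCollector()
collector.feed(snippet)
print(collector.alts)  # the description is now plain, indexable text
```

Without the alt attribute, all the indexer would see for that image is a file name.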

Be Detailed and Concise

A baby dragon on its trainer's arm, ready to hunt.

Most people tend to think the title of an image is enough of a description. I like dragons, and I tend to use them for examples when I can. The original title of this picture is “Baby Dragon”. If a picture’s worth a thousand words, and we’re only using two to describe it, it should be immediately clear that something important is being left out. For starters, “baby dragon” can mean a lot of things. It could mean a dragon that just hatched, it could mean a small, not-yet-vicious dragon that someone thinks they can tame, or it could (and here does) mean a small dragon on its trainer’s arm, ready to hunt. What’s important is that we, the ones posting the picture, make sure to provide the relevant details in as few words as possible. The way this is done depends on the content that is being posted.

For example, if the content is a meme, both the text and the details of the picture should be included in the alt text. Similarly, if the picture is all text, something that occurs on both Twitter and Instagram, the text of the picture needs to be included in the alt text. This gives our audience quick access to the information. This also means that people who are an important part of the photo should be identified, and that screenshots should include descriptions of the important parts of the screen. This last point is especially true if you are posting how-to articles, so viewers can make sure they’re on the correct step in the guide.

In regard to being concise, a lot can be done simply by not repeating details that are in the text of your post. For example, if your post text says, “Lovely night at the beach” and your picture is of the beach at nighttime, you can leave “beach” and “night” out of the description, and spend more time describing the other elements like the colors of the sunset or how much moon is visible.

Adjust Your Description According to Your Purpose and Audience

An orange dragon flying through the sky with the sun in the background.

While you were choosing a platform to share your content, I imagine an important step for you was figuring out which of them would be most accessible to your intended audience. I chose WordPress because it lets me build a website I know is accessible to people who use screen readers, people don’t need an account to read my stuff, and I can cross-post to other social networks. We go through a similar process when describing photos.

I described the baby dragon just by saying “a baby dragon,” but I did not include details like color. As a dragon enthusiast who interacts with other dragon enthusiasts, I can tell you that a lot of significance is placed on the color of the dragon, as well as whether it is Western, Eastern, or Celtic, as this distinction indicates physical characteristics we would expect to find. For the record, the dragon used to start this section is a Western-style dragon, a detail I left out in the alt text. How much detail and what details are necessary will largely depend on who you want to reach.

If you’re posting detailed computer how-tos with screenshots, you’ll probably want to include the name of the screen you’re on, the available options, and any messages that appear if the screenshot shows the result of an action or series of actions, so that someone following the steps in your guide can compare their results. If you’re an artist communicating with other artists, you will likewise need to adjust the level of detail in your descriptions; the folks over at have this down to a science.


This post described a reason as to why people don’t add alternative text (descriptions) to photos they upload, the benefits of doing so, and two guidelines for making the descriptions meaningful to the intended audience. Just like any other rules, there are exceptions. There are computer programs that provide descriptions of photos based on complex formulas, but said formulas still don’t always manage to adjust for context to communicate the best meanings behind a picture. Until the day when machines can accurately interpret context, it’s up to us to make sure we’re providing quality descriptions.

Three Barriers I Encountered as a Blind Person to Setting up Face ID, and How I Trained My Eyes to Use Attention Detection on iPhone XS


Face ID for iPhones has been available for over a year now. Because I learned my lesson about early adoption from the Apple Watch, I decided to stick with a device that still let me unlock my phone with my fingerprint. With the release of this year’s line of iPhones, one thing was made very clear: Face ID isn’t going away in the near future. So, feeling secure thanks to Apple’s excellent return policy, I figured it couldn’t hurt to try it out.

Okay, you’re blind, but you’re into photography. What’s the big deal?

As it turns out, there’s really no big deal at all. When I take a picture, my goal is to get the object or objects centered in the frame and take the shot. Face ID works just like that once it’s set up. Jonathan Mosen published a getting started guide whose directions and security advisories are still current, so I don’t see a need to rehash them here. There’s really only one point on which I disagree with him, and we’ll get to that in a minute. What you’ll find here are some considerations for setting this feature up if you’re totally blind, followed by a description of a training method for using the phone’s attention detection feature.

Roadblocks to setting it Up

The getting started guide I’ve just linked to details setting up Face ID with VoiceOver. While it is a very simple process, almost simpler than setting up Touch ID, there are some potential barriers that could impact one’s user experience and first impressions of this new feature. The first one has to do with the attention detection feature.

It’s Disabled by Default for a Reason

When you set up Face ID with VoiceOver on, you get a dialog that tells you that attention detection is disabled, and that you can enable it in Face ID and Passcode settings if you wish. Since I made the mistake of enabling it without any sort of training and had to deal with the results, I feel comfortable telling you that the best thing to do is leave attention detection disabled until you’ve finished setting up your iPhone, which includes but is not limited to the setting up of all accounts, as well as two-factor authentication apps and password managers. Some of these apps make you verify your identity after attention detection is enabled, but trust me when I say that extra bit of effort is a lot easier to swallow than the frustration you’ll experience otherwise. Once you’ve read the training method section of this post, you may wish to consider enabling attention detection if for no other reason than leaving it disabled has security implications. The next issue has to do with lighting.

Finding light

I’ve been dealing with facial recognition apps for a while now, and proper lighting is important. One of the implications of the condition that causes me to be totally blind is that I have light perception on some days, and I am completely without it on others. The result is that I need a reliable way to find light. Seeing AI has a light detection feature that lets me do just that. It operates on a C scale, playing a higher note when brighter light is detected; the note becomes extremely high and atonal if the light is too bright. For the record, the best light for facial recognition is indicated by an E on the scale. For those of you who are unfamiliar with musical scales, this is the first note you sing in many songs, including but not limited to “Mary Had a Little Lamb,” which many people tend to sing in the key of C for some reason. Since I had an iPhone before, I was able to map out my apartment to find the best lighting prior to the new arrival, but you can do this any time before entering the setup screen. The final barrier has to do with just how to move your face.

Like clockwork? Not exactly.

I said earlier that I disagreed with Mr. Mosen on one point in his getting started guide, and here it is. In his guide, Mr. Mosen says,

Imagine that your nose is a hand of an analogue clock. Your nose needs to reach as many points on the clock as possible. So, after double-tapping the “get started” button, and waiting for confirmation that your head is positioned correctly, point your nose up to 12 o’clock, then move it around to 3 or 4. Point it down to six o’clock. Move your head in the opposite direction, so it reaches 9 or 8. Then conclude by moving it up to 12 again.

Here’s my problem, and I realize it may be a personal one. A clock is a two-dimensional surface, but the circle in which you need to move your head to set up Face ID is actually three-dimensional. There are lots of blind people, myself included, who have trouble interpreting two-dimensional representations of three-dimensional space and objects. This makes maps and diagrams especially useless for me. When I tried to follow those directions and get my nose to 6 o’clock, my head ran into my right shoulder, and I got stuck at 4 or 5 o’clock. With some help from the VoiceOver prompts, as well as relating it to my own experiences, I came up with the following:

Imagine that your head is a joystick on a game controller or old-style arcade machine. A joystick moves in a total of nine directions: Center, forward, back, left, right, forward and left, forward and right, backward and left, and backward and right. Start with your head in the center, then move it through the remaining eight positions to capture your face, making sure you don’t move outside the phone’s field of vision. If you do, VoiceOver will let you know, and you’ll just have to reposition your head to continue. Once you’ve completed the process and finished setting up the rest of your iPhone, it’s time to train yourself to use attention detection.

How I Trained My Unruly Eyes

Another implication of my visual condition is that I have nystagmus, which for purposes of this discussion means I have absolutely no control over my eye movements. This is what the eye doctors have always told me, this is what I told anyone who asked, and this is what we all believed. Aside from people getting upset because they think I’m rolling my eyes at them, it hasn’t caused me too much trouble. If my experience with Face ID and Attention Detection shows anything, it’s that I may have more control over it than I thought. Here’s the process I went through, and I’m betting some of you will be able to do this too.

Taking Selfies to Find the Camera

You might not have realized it, but the iPhone’s front camera has an extremely bright flash. It’s so bright that even though I didn’t have light perception yesterday, I could feel the heat from it. In my case, I still have my eyes rather than prosthetics, so all the nerves are still intact. I spent a good half hour taking selfie after selfie until I could consistently get the heat of the flash in one or the other of my eyes. You can double-check this by going through your photos with VoiceOver, which will notify you if there are blinking eyes, something that tends to happen when a bright light hits them. The next step was to enable Attention Detection and go through the same process until I could consistently unlock the phone.

Making my eyes move where I want when I want

Here’s the thing to remember: Eyes, regardless of whether or not they are performing their intended function, are a body part. This means, at least for me, that I can make my eyes move in conjunction with another body part, my hands and arms in this case. By holding my phone in both hands at or around the center of my body, I was able to make my eyes go toward the middle of the phone to first find the flash, and to then get that satisfying click sound that means my phone is unlocked. I then had to keep doing it until I could unlock my phone in an amount of time comparable to the time it takes me to use Touch ID.


This post described three barriers I encountered while setting up Face ID on my iPhone, and how I worked around them. I then explained how I trained myself to use the Attention Detection feature to get the most security possible from the device. At this point, I can unlock the phone consistently with Face ID and the Attention feature turned on. I still have occasional failures, but I used to get those all the time with Touch ID too. I still haven’t made up my mind on whether or not I like Face ID, but I still have thirteen days. Most telling, though, is the fact that I have not brought myself to wipe my old iPhone just yet.

Two Shortcuts for Using Emojis on iOS


For those of you who have been following along, I’ve decided to make it a goal to make emojis a bigger part of my self expression. The biggest reason for this is that they seem to be more universally understood than regular words, even though my screen reader has an assigned verbal expression for each emoji. How do I know? I’ve never seen a social networking post or text message that was read to me as, “Going to a funeral today. Face with tears of joy.” To those of you who are visual, this message would look like, “Going to a funeral today. 😂”

When I first proposed the emoji goal, the response I got was something like, “Why would you want to use those? They take so long to type.” The truth is, if you’re using a touch screen device with a screen reader like VoiceOver on iPhone, using emojis can be a lengthy process. This post describes two shortcuts you can use to type emojis on your iPhone more quickly and efficiently, and without the installation of third-party tools.

The Simplest Solution is Sometimes the Best

Most people have forgotten, but emoticons were the first emojis. At a glance, they are made by combining two or more punctuation marks. Here is a complete list for you. On iPhone, when you type an emoticon and insert a space, it is automatically replaced with an appropriate emoji, assuming the device’s autocorrect feature, which comes up again in the next section, is enabled. You can make a lot of the most common emojis this way. If you’re looking to use more complex emojis, continue to the next section.
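The idea behind that substitution can be sketched in a few lines of code. This is only an illustration of the concept, not Apple’s actual mechanism, and the mapping table below is my own invention:

```python
# Toy sketch of emoticon-to-emoji substitution: when a known emoticon
# appears as its own word, swap in the matching emoji. The mapping is
# illustrative only, not Apple's real replacement table.
EMOTICON_TO_EMOJI = {
    ":-)": "🙂",
    ":-(": "☹️",
    "<3": "❤️",
}

def replace_emoticons(text: str) -> str:
    # Split on spaces, replace any token that matches a known emoticon,
    # and stitch the message back together.
    words = text.split(" ")
    return " ".join(EMOTICON_TO_EMOJI.get(word, word) for word in words)

print(replace_emoticons("Great show tonight :-)"))  # Great show tonight 🙂
```

The real feature works as you type, replacing the emoticon the moment you press space, but the lookup-and-swap idea is the same.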

How to Use Text Replacement to Quickly Type Emojis

What is Text Replacement?

Here is an article that explains what text replacement is and how to use it. You may wish to read it before proceeding to the steps below, if for no other reason than it provides an alternative to my explanation style. Let me just say that before the days of third-party keyboards and Braille Screen Input for VoiceOver users, this was one of the quickest ways of typing on a touch screen. Now… the fun stuff.

What to Do

Have you read the above link on text replacement yet? If not, this is me strongly recommending that you take the time to go back and do so. … … … Okay, I can see forward is the only direction you’re interested in going, so here we go.

For this tutorial, we’ll be telling our iPhone that when we type “lcf” (without quotes) followed by a space, it will be replaced with 😭. Once you do that, you’ll be able to create as many text shortcuts as you like for your own favorite emojis.

  1. Go to Settings➡️General➡️Keyboards. If you do it right, you should get the screen shown here. The keyboards section of the iOS settings. Available options are keyboards, hardware keyboard, text replacement, and options for autocorrect.
  2. Next, tap the text replacement option in the middle right of the screen. You should get a screen like this. The main text replacement screen, displaying the add and edit buttons, as well as the keyboard shortcuts added so far.
  3. You then need to tap the add (+) button in the top right. You should have this screen. The add shortcut screen with blank fields. The software keyboard is showing, and the phrase field is currently editing.
  4. Fill out the fields as shown here. The screen has 😭 in the phrase field, and lcf in the shortcut field.
  5. Finally, tap the save button in the top right. Now, the next time you type “lcf” followed by a space, you should get 😭.

Now It’s Your Turn

You should now be able to make your own shortcuts. You can use them to type one emoji like 💩, or a series of emojis like 🦂▶️🐸. The only limits are those of your own creativity. The best part: these are backed up in iCloud, so your shortcuts go from device to device.

For My First Post, Training Myself to Interact with Pictures

The General Idea

For those of you who don’t know, this site is mostly about my goal of becoming a blind photographer. I decided that the first step I should take was to train myself to interact with pictures. Places like Twitter are rich with pictures of all kinds, and I always spent my energy ignoring them unless they had descriptions, an occurrence that is rare going on nonexistent. It is one of those situations where the world wasn’t going to change unless I did. So, just in case you missed my blurb about it on the home page, I began using Seeing AI to analyze pictures.

I almost Overdid It

Let’s face it, folks. Not every picture that gets posted on social media is interesting, just like not every chicken strip is nice and crispy. After fifty or sixty selfies, cat pics, memes, etc., the process of analyzing each picture gets BORING!!! If you’re one of the people whose selfies I ran across, don’t take it personally. I just overdid things, and analyzing pics went from being a fun thing to do on a Saturday afternoon to something a little too close to working in a processing center. If I was going to keep working toward my goal, I needed to make sure I had a reason to keep going. That reason, as it turns out, is the same one that encourages people to share pictures in the first place: the social benefits of sharing your experiences.

Building an Interactive Road to That First Milestone

At the same time I decided to start this thing, I was also becoming involved with Mastodon, another microblogging service. I could go on and on about the differences between it and Twitter, but the main one is that the culture on Mastodon supports every kind of social group you can imagine, including but not limited to aspiring blind photographers. The process was simple. I invited people to send me pics; I would analyze them and try to guess what each picture was. If I got it right, everyone went away happy. If not, the person had to tell me what the picture was, and then we could have a good laugh over the errors of computers. That’s what I’ve been doing for the last month or so. That, and programming friends into my camera for future use, a process I will describe in a future post. For now, I’m continuing to interact with people and their pics, and it has two results. First, it keeps me interacting with photos and keeps me engaged. Second, it raises awareness of how AI helps people, and gives people an idea of how to describe their pictures when they post them, a topic I will also cover in a future post.

What’s the Next Step?

The next step is to get myself used to incorporating emojis into my self-expression. A picture is worth a thousand words, and emojis are just little pictures, aren’t they? Maybe I’ll write an entire post in emojis.