NC_2023_12_03

2023, Allison Sheridan
NosillaCast Apple Podcast
http://podfeet.com




[0:00] Music.

[0:10] It's Sunday, December 3rd, 2023, and this is show number 969.
Well, we've got a great big show, so let's kick right into it.

Unmute Presents – our community festive gift guide

https://unmute.show/2023/11/30/unmute-presents-our-community-festive-gift-guide/


[0:19] You've heard contributions on the NosillaCast from both Marty Sobo and Michael Babcock.
Together, they host the podcast called Unmute Presents, which is a tech podcast with an ever-so-slight blind accessibility slant.

[0:31] They asked me to contribute to their community festive gift guide, and it was a special episode they did, and I was more than glad to contribute.
I listened to that show and I added a couple of items that they had on the show to my own holiday wish list.
So whether you're sighted or not, you should really go check out the link in the show notes to Unmute Presents, our community festive gift guide, or you can look for Unmute Presents in your podcatcher of choice.

Bartender 5 – ScreenCastsONLINE Video Tutorial

https://www.podfeet.com/blog/2023/12/bartender-5-sco/


[0:54] I recently published an article telling you of all of the wonderful new features of Bartender 5, one of the most useful apps you can get for your Mac.
I was so excited about the new features in Bartender 5 that I called dibs on doing the video tutorial for ScreenCastsOnline subscribers.
That tutorial is now up.
In this tutorial, I demonstrate the basics of Bartender that haven't changed, like moving less commonly used items to the secondary menu bar.
But then I get into the fun new stuff.
I show you exactly how to style your menu bar in creative ways and how to add triggers to cause your menu bar to change.
I teach you how to trigger menu bar changes based on what apps you're running, your battery status, location, time of day, which network you're attached to, and even by a script.
This was one of the easiest tutorials to create that I've ever done for ScreenCastsOnline because the tool is rock solid and such an important part of my workflow.
One of the reasons I love doing these tutorials is because I learn every single little detail of how to use the tool I'm teaching.
You can get a free 7-day trial of ScreenCastsOnline at ScreenCastsOnline.com.

[1:57] And see this tutorial and all of the current back catalog of tutorials.

Tiny Mac Tips – Part 8 of X

https://www.podfeet.com/blog/2023/11/tiny-mac-tips-part-eight/


[2:02] Music.

[2:11] So I'm back with part 8 of my tiny Mac tips. You may remember that this is an ongoing series I started in order to teach Jill from the Northwoods how to move from being an adequate Mac user to a proficient one.
In case you missed the earlier installments, I've included links to the first 7 installments in this episode, what I'm calling part 8 of X.
Alright, let's start out with the Quick Actions menu.
In the Finder, there's a nifty little thing called the Quick Actions menu.
You access it by right-clicking or control-clicking or two-finger tapping, whichever way you want to call it, on any file.
The Quick Actions menu is contextual, so the options revealed to you will be different depending on the type of file you've selected.
So let's go through some of the different ones that you can do.
If you choose Quick Actions on an image file, you'll be able to rotate left, open markup to annotate the image, create a PDF of the image, convert the image, or remove the background.
Now, most of those are obvious, but the last two weren't to me.
Convert Image allows you to change to a JPEG, a PNG, or an HEIF.
You can also adjust the image size and choose whether or not to preserve the metadata.
Remove Background was even more mysterious. I read on several reputable websites that it should do what it says on the tin.
Given a photo with a prominent subject and a somewhat continuous background, it should preserve the subject while removing the background and save it out as a transparent PNG.
I took a lot of different image types, with varying degrees of obvious subject-background contrast.

[3:39] Not one single image I tested did anything at all.
No transparent PNG, but also no error and no message.
I even tried a portrait mode photo and I had no joy.
When Sandy did her proofreading that she does on all of my blog posts, she said, I don't know what your problem is.
It worked for me. And she sent a couple of photos and showed how well it worked.
So I tried it on a different Mac with the exact same photo, and it worked on the first try.
So I'm not quite sure why I had trouble on one Mac, but I thought I'd mention it.
But if you need to remove background, it's right there in this contextual menu called Quick Actions.
Now, if you have an audio file that's not an MP3, the Quick Actions menu will offer to let you trim the file without even opening QuickTime.
It launches a little floating window with the trim bars on either side of the waveform.
What I can't explain is why it doesn't work on MP3 files.
I was able to trim an M4A, a WAV, and an AIFF, but the Quick Actions menu only said Customize when right-clicking on an MP3.

[4:41] By the way, Customize is available in all of these. It takes you to System Settings where you can add some types of shortcuts to the Quick Actions menu.
Now, PDFs have two interesting options with the Quick Actions menu.
If you select just one PDF, you can immediately go into markup with the PDF without even opening Preview.
But if you select two or more PDFs and then open Quick Actions menu, you get a completely different option.
You'll see Create PDF, which means in that one click, you'll be able to combine all of the PDFs you selected into one single PDF. I think that's a pretty cool trick.

[5:16] Now, have you ever taken a movie where you're looking straight down on the subject and it gets saved in the wrong orientation because the internal gyroscope of the phone doesn't know which way is up?
Well, with the Quick Actions menu, you can not only trim video files just like you can with audio files, but you can also rotate them to the left just like you can do with image files.

[5:35] Now, if you'd like to always have access to the Quick Actions without even having to open the menu with a right click on your file, there's a way to see the actions available to you right in the Finder window.
With the Finder window open, go to the View menu in the menu bar and choose Show View Options, or use Command J.
In that menu, make sure the box is checked that says Show Preview Column.
In addition to showing you a larger preview of your file, underneath that, you can see and select the options I've just described from the Quick Actions menu.
You can read more about the Quick Actions menu at support.apple.com and I've got a link directly to that in the show notes.

[6:13] All right, let's take on a new tiny tip. In macOS System Settings, we have a Displays section.
Now this allows you to change the resolution of your displays, just like it says.
The normal view for displays shows your display or displays across the top, and below that you have four icons illustrating how your display will change depending on the resolution.
Now, for a nerd, it's almost insulting how cartoony this is.
On the left, it says larger text, and on the right it says more space.
And in case that's not obvious enough, the text inside these four icons goes from large to small.
Now, if you'd like to work with real resolutions instead of cartoons, go to the bottom of this same window and click the Advanced tab.
The overlay that comes up has three fun toggles that control how your Mac reacts if there's an iPad nearby or even another Mac.
You know, things like letting your cursor slide back and forth between the devices.
And don't be distracted by that fancy new stuff in this menu.
Look at the top of this menu and you'll see a toggle that says Show Resolutions as List.
Flip that on, and you'll be taken back to the normal display screen, and instead of seeing those four cartoons, you'll see six resolutions, or at least that's what my MacBook Pro shows.
I'm not sure everybody gets six, but I see six in the list.
Not only that, you'll see a toggle to show all resolutions.
When I do that with my 14-inch MacBook Pro, I get 22 resolutions from which to choose.

[7:40] Now, people have been paying for third-party apps to have access to more screen resolutions for years, and now in macOS it's built right in.
I know some of those apps allow you to get even more resolutions, but this is a lot to choose from.
Now, I know I mocked Apple for giving us cartoons by default, but I do think that cartoons are probably a lot more helpful to normal people than this giant list of options.

[8:03] All right, for our next tip we're going to talk about why double dashes turn into one long em dash.
Bart and I record our Programming by Stealth podcast together, which you should totally listen to, it's awesome.
We're both reading along with his tutorial show notes, and we're both able to edit them at the same time.
If you're curious how we do that, we use Git, which is a version control system mostly for programmers, but you can use it just for text files like Bart and I do for programming by Stealth.
Anywho, one of my jobs while Bart is teaching me is to be proofreading the notes.
Recently he was explaining a terminal command, and it required a flag with a double dash in front of it.
What he meant to write was the double dash in front of the flag, but what came out was an em dash. I've run into this problem many times before, and you probably have too, where you're trying to type two dashes and macOS changes them into an em dash.
I explained to Bart that I know how to fix it. You just type them really, really slowly. You go dash...

[9:23] Dash, and then it lets you keep two of them. Well Bart taught me a tip, and now I'm going to pass it along to you, and it is a tiny tip.
It turns out there's a system setting that controls this behavior. He said he'd just not gotten around to changing the setting on his new Mac. It's one of the things he always does on his Macs, and that's why it slipped through on this one version of the show notes: he just hadn't done this setting yet.
If you open System Settings, Keyboard, then under Text Input, you'll see Input Sources.
Then you'll see your language, in my case it says US for US English.
Finally, you'll see an Edit button. On the overlay that pops up, it will show all input sources on the left with a big list of automatic features that you may or may not like about macOS.

[10:07] By default, macOS corrects spelling automatically, it capitalizes words automatically, it shows inline predictive text, it adds periods if you do a double space, and finally, there's one that says, use smart quotes and dashes.
Now, it's interesting that they combine smart quotes and dashes into one toggle switch. You can't do these separately.
Both of these happen to bother programmers, so I'm thinking maybe that's why they're stuck together.
In software development, smart or curly quotes instead of the tiny vertical tick quotes can really mess up your code.
And of course, if you're trying to document a command for the terminal, you often use flags that are called with a double dash, and you simply cannot type that in a normal text editor if smart quotes and dashes are enabled.
Now, I know this is supposed to be just tiny Mac tips, but if you'd also like to disable this feature in iPadOS, it's in a slightly different spot and with a different name.
I don't know why, it's almost like they don't think the same people might use the same platforms. Anyway, you'll find the option in Settings, General, Keyboard, but the toggle is called Smart Punctuation.
Now, on both platforms, if you really do want to type a proper em dash, you could do it by simply holding down Shift Option and tapping the minus or dash key on your keyboard.

[11:24] All right, next tip. Have you ever accidentally opened a whole slew of windows at once?
I did this just the other day, and I was planning on doing this tip, but it just happened and so I have a great example.
I had selected all of the files in a folder and I wanted to open the Get Info window on just one of them.
Without realizing they were all selected, I suddenly had 61 info windows open entirely filling my screen.

[11:50] Luckily, I knew how to close them all in a single click. I held down our old friend the option key and clicked the red close button.
And it's amazing how quickly they all closed.
This tiny tip extends far beyond the Finder. It works pretty much in every app in macOS that obeys the user interface guidelines.
So if you need to close a bunch of, say, Word documents, that would work as well.
You can hold down Option, hit the red dot on one of them, and they should all close.
I'm going to wind up this tiny Mac Tips article by telling you about the best keyboard shortcuts post I've ever seen, and it's not mine.
There's a ton of these out there, but Daniel Alm, who's the developer of the Timing app, starts out slow and builds up the concepts step by step and then even explains the funny symbols for things like Control and Option.
I had no idea the option key symbol was taken from railroad switches.
Get it? It's like an option. Now I know which one's which. Anyway, if you're just learning the Mac, having a good guide to understanding keyboard shortcuts will be very handy. If you're a keyboard shortcut junkie already, you might want to keep this one in your back pocket to show your friends.

OCR PDFs with Free Open Source Tools on a Mac with a Shell Script

https://www.podfeet.com/blog/2023/12/ocr-pdf-mac/


[12:57] Last week, George from Tulsa gave us a great explanation of how he solved his problem of converting gigabytes of image-only PDFs to be searchable by using open-source, free optical character recognition, or OCR, software.
He explained that he's a Linux user with Linux Mint Cinnamon, so his explanation had an ever-so-slight Linux bias.
Now, George's goal was to be able to search his PDFs, but in applying OCR to his image files, he gained something else.
A searchable PDF is an accessible PDF. If we can search a file, that means the text is there for screen readers like VoiceOver to be able to read it.
That is a huge deal and you should consider doing that with your PDFs, especially in the business environment.
While George gave us the steps to install and use the open source tools to OCR files on Linux, he also wrote, if you're geeky and love playing with computers, you might be able to get Tesseract and OCR MyPDF to run on a Mac using Mac ports or Homebrew.
One of the things I really enjoy about using a Mac is that we have a flavor of Unix under the hood, which means we get to take advantage of many of the cool open-source tools Linux people get to play with.
Our Windows brothers and sisters get to play too, because of the Windows subsystem for Linux.
Now you know I had to try to see if I could get the same tools working on my Mac. I mean, George had thrown the gauntlet down there, right?
I was hoping it would be super complicated and I'd have to do a whole bunch of work, and it would give me fodder for a long, drawn-out blog post.

[14:26] Sadly, it was very easy using the tips from George to convert unsearchable, inaccessible PDFs into glorious, searchable, accessible PDFs. But don't worry.
Even though replicating what George did was super easy, I decided to take it up a notch so this will be a nice, meaty story.

[14:44] George's instructions came at the right time for me. I recently downloaded a user manual for the automated pet feeder I told you about a few weeks back, maybe a few months ago, and I needed to be able to search it, but the darn thing hadn't been OCR'd.
I had a problem to be solved.
Now, you may recall George described two different open source tools he downloaded to do the OCR dance, Tesseract and OCR My PDF.
If your document is an image, then all you need is Tesseract.
But if you want to OCR a document in PDF format, you're going to need to use both.

[15:17] Now, I do want to make a warning here. I am going to get nerdy.
I'm gonna get really nerdy, and I think it's good and I think it's fun and I really had a good time, but I am gonna warn you here.
We have a little bit lighter content coming up later on in the show, so if you want to take a little nap during this part, that's okay. But I had a lot of fun, and that's what makes me enjoy doing the show, is when I'm having fun, so I hope that's okay. All right, to do this exercise yourself, you're going to need to do one thing that sounds super complicated, but it's actually quite easy.
You need to install something called Homebrew, and you have to do it from the command line in the terminal.
We've actually walked through this before on the NosillaCast, and so maybe one of these times you're going to get excited and go ahead and do this.
It's really easy, but I'm going to explain it again.
I want you to think about Homebrew as being like the App Store, except it's on the command line. So we're going to install Homebrew, and then a very simple command will let you install any app that's available inside Homebrew. First, you're going to go to the Homebrew website at brew.sh.

[16:18] See, it's even a short URL, it's really easy. On that page, you're going to see a long, gloppy terminal command.
It's got all kinds of stuff in it. It's got curl and incomprehensible words, it's wonderful. To the right of that, you're going to see a copy button, and I want you to click it to copy the command.
Now you open the Terminal application, which is buried in your Applications folder, inside the Utilities folder. Maybe I said that backwards.
Utilities is inside Applications, and Terminal is inside Utilities.

[16:48] Alright, you've copied the command, just paste it into your Terminal, and then hit Enter.
That's literally all there is to installing Homebrew. Copy, paste, enter.
I should warn you, you're going to see lots more unintelligible stuff fly by on your screen, but just don't worry your pretty little head about it.
Now once that's done, installing Tesseract and OCR My PDF is just as easy as installing Homebrew itself.
To install an app with Homebrew, you simply type brew install and the name of your app in the terminal.
So to install Tesseract, we'll use all lowercase and type brew install tesseract.
Then for OCRmyPDF, again in all lowercase, we're going to type brew install ocrmypdf.
Now, if you thought you saw a lot of glop flying by when you installed Homebrew, wait until you see how much goes by when you install OCR My PDF.
What you're mostly seeing is what's called dependencies. These are the other applications, called libraries in this kind of context.
Anyway, these are other applications on which OCR My PDF relies.
I think it's possible that one of the dependencies installed with OCRmyPDF is actually Tesseract. But I installed Tesseract first to test, so I'm not really sure whether you need to do it ahead of time or just this one time.
Alright, to recap though, we've typed in three terminal commands, and we are 100% ready to OCR our image and PDF files for free.
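To put all three in one place, here they are roughly as they'd be typed into the Terminal. The Homebrew install line is the one copied from brew.sh at the time I wrote this, so please grab the current version from that page rather than trusting this transcript:

  /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
  brew install tesseract
  brew install ocrmypdf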

[18:14] George's problem was that he had scanned documents that were saved as image files, so they weren't even PDFs yet.
To convert his image files to searchable and accessible PDFs, he used Tesseract.
But he said the command can't be invoked directly, natively, so he had to invoke it by using ocrmypdf.
However, I found that on macOS, I could use Tesseract natively on an image file.
I took a screenshot, I saved it with the name og.png for original.
I then ran the very simple command that George had given us, tesseract og.png new.

[18:49] This created a file called new.txt with all of the text of the image file.
Now, that's not exactly what I was trying to do, but it was interesting that by default, Tesseract could create text files for us.
I should mention that a lovely gentleman named Frank made a comment on George's post last week that you could do exactly that, that you could create text files directly using Tesseract the way I just described.
Alright, but I want to make the output a PDF.
All we have to do is slap the word PDF in all caps on the end of that command.
So to summarize, we can tell the command Tesseract to take og.png as the input file, new as the output file, and then pdf as the format.
We don't have to put the file extension on the file called new because it'll be added automatically. So, we write tesseract og.png new pdf.
And that's all there is to it. We now have a fully searchable and accessible PDF called new.pdf.
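Spelled out, the two Tesseract commands from this section look like this, where og.png is just my example screenshot name:

  tesseract og.png new        # writes new.txt with the recognized text
  tesseract og.png new pdf    # writes new.pdf, a searchable PDF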

[19:50] That wasn't that hard, right? I know I said it was super nerdy, but that was really pretty easy.
We downloaded three things by just putting in commands in the terminal, and we ran a terminal command.
That's it. And we've got it. And we've done it for free.
While OCRing an image file was fun, I more often run across unsearchable PDFs.
I mentioned earlier that I have a user manual for the cat feeder from PetLibro that's not searchable.
It came up because I had an issue with the cat feeder, and support told me to reset it.
Well, I had to scan the entire manual with my eyeballs to find out where they described the reset process.
I mean, come on, who has that kind of time? I wanted to do Command-F, look for reset, and find it right away.
I really needed this manual to be searchable.
Well, it's time to use George's recommendation of OCR My PDF to, well, OCR our PDFs.
He gave us his simple command, which is only slightly more complex than the one we just made up for Tesseract.
He invokes the ocrmypdf command, and he gives it the flag --output-type, which I can now spell because of that setting I just changed, and then he gives the file type we want, which is pdf, this time weirdly in lowercase.
Then he gave it his input file named 1.pdf and his output file 2.pdf.
So the whole command is ocrmypdf --output-type pdf 1.pdf 2.pdf.
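Written out on one line, with George's hard-coded file names, it looks like this:

  ocrmypdf --output-type pdf 1.pdf 2.pdf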

[21:14] I tested this command with my cat feeder manual, and it took 40 seconds to scan and OCR the 22-page PDF.
It successfully OCR'd my PDF, but I was kind of surprised to see scads of error messages in the terminal as it ran.
Every error message was identical, complaining that some image object had no attribute.
I put an example of one in the show notes, but I am definitely not going to read it to you.
As all of the errors seemed to be associated with images in the PDF, I wasn't terribly concerned.
I opened the PDF, and it looked exactly as it did before I ran the OCR process, except it was searchable and the text was selectable just as I'd wanted.
Now in this file, there were images in the original PDF, and in some cases there were numbers with little dotted lines pointing to parts on the cat feeder.
And I'm thinking perhaps OCRmyPDF was annoyed by those.
In any case, if you get errors on embedded images or drawings inside your PDF, you really don't need to be surprised, and I don't think it causes a problem.
Okay, you can stop here if you want to, but I didn't.
I had installed two app libraries using Homebrew.
I was able to replicate George's success on Linux in converting both image and non-searchable PDFs into searchable and accessible PDFs.

[22:33] I was even able to turn them into plain text files if so desired.
The whole process of learning, figuring this all out, and installing everything took me maybe 20 minutes if I round up.
But what fun would it be if I stopped right there?

[22:47] In George's article, he explained that he created a folder in which he drops the file he wants to OCR.
He changes the name to 1.pdf so he can use it and run his hard-coded command, which saves the output as 2.pdf.
And I'm assuming he's gonna need to change the name of that file back to something he really wants it to be and put it back where it belongs.
In that original folder that I was just talking about that he created, he also keeps a text file with his hard-coded command, so he doesn't have to remember it.
He can open it up, copy it, paste it into Terminal, and it runs.
While this works well enough and is certainly repeatable, I wanted to try to automate the process.
I didn't want to always have to use the same folder or name the file 1.pdf.
I wanted the freedom to have this work anywhere on my Mac with files of any name.
Often I spend hours automating something that takes very little time for me to do, but I do it often enough that getting it automated is worth the trouble.
This is not one of those times. I hardly ever need to OCR files.
Seriously, it comes up once a blue moon.
And yet for some reason, this idea just tickled me, the idea of automating it.
It was a challenge, and it sounded like fun.
In Programming by Stealth, Bart has been teaching us about automating things on the command line, so this gave me a perfect opportunity to practice some of our new skills.

[24:03] For those amongst us who are not programmers, but have managed to get this far by installing two app libraries on the command line, the next step isn't that big of a leap.
Whatever you can type in as a command in the terminal, you can put into a shell script, which is like a little automating program, and then you can run it all in one go.
We already know how to run the commands to OCR our files, so why not slap them together into a shell script to make our lives easier?
Since Bart taught us how to write Bash scripts in Programming by Stealth installments 143 through 154, I decided to make my script in Bash.
My goals in the automation of George's process were as follows.

[24:42] Allow the script to run on any file, any PDF file, in any folder.
Allow the PDF to have any name we like.
Have the script export the OCRed version of the PDF into the same folder as the original, but I wanted to tack "-ocr" onto the end of the file name.
This way I'd be able to tell the two files apart, and I wouldn't risk overwriting the original file.
If I succeeded at these goals, I wanted it to run inside Keyboard Maestro, but that was a stretch goal.
So, our script is going to run on any file in any folder or directory.
In order to build the name of the output file, we're going to need to extract the directory path from the input file and save that out as a variable.
We're going to strip off the .pdf at the end of the input file, and then we're going to build the output file name by adding together that directory path to the original file, the input file name, and then we're going to add "-ocr.pdf"

[25:36] to the end of the name of the output file. Still with me?
You got that? It's not too hard, but we're going to get there in little steps.
Scripting languages like Bash and AppleScript take the first input to a command, and they give that variable name $1.
So variable names always have dollars in them once you've assigned them.
Before you assign them, they don't. It's kind of weird. I'm going to call my script file ocrpdf.sh.

[26:04] So the way you run a script file is you put dot slash in front of the name.
What I want to be able to do is write dot slash OCRPDF dot sh myfile.pdf.
So myfile's just gonna be a placeholder. It's gonna be called whatever we want it to be.
Now, when we run that command, myfile.pdf will automatically be assigned the variable name $1 in our script.
But we don't want to use that name because it can get reassigned, so let's create our own variable name.
That's gonna be the first command in our script. I'm going to call it inputName.
So it's going to say inputName equals $1, so I'm taking $1 and shoving it into inputName.
Now $inputName will be the full path to the file name.
For example, if my file is on the desktop, $inputName would be slash Users slash allison slash Desktop slash myfile.pdf.
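In the script itself, that first line is simply the following; the comment shows the kind of value it would hold for a file sitting on my Desktop:

  inputName=$1    # e.g. /Users/allison/Desktop/myfile.pdf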

[26:55] Okay cool. When we tell the script to write the output file, we're going to need to tell the script where to write that file, which we've already decided is going to be right back into the same directory as the input file.
We can extract the directory name from $inputName so we have it ready for the output file.
Luckily, there's a built-in command in Bash called dirname that'll grab that directory name for us.
I'll create a variable imaginatively also called dirname.
So I say dirname equals $dirname $inputName. There's a bunch of parentheses around it, but you can read it in the show notes.
I don't need to say all these things exactly right, but basically we're going to say dirname and an input name and that'll give us the directory name that we want and shove it into that variable.
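For those reading along, that line with its parentheses looks roughly like this; the exact variable spelling in the show notes script may differ slightly:

  dirName=$(dirname "$inputName")    # e.g. /Users/allison/Desktop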
Now this is swell that we have this full path name from our input file.
For our next trick, we need to extract the file name for the input file without its extension.
If we can do that, then we can use the directory, the original input file name, plus dash ocr.pdf to be the name of the output file.
To get the input filename without that directory path and without the .pdf, we can use another nifty little built-in command called basename.
So inputBasename is what I'm going to call my final input base name without all the other stuff on it. inputBasename equals the basename of $inputName with the .pdf stripped off.
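In script form, that's something like:

  inputBasename=$(basename "$inputName" .pdf)    # e.g. myfile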

[28:15] So now we've got that input basename that's just by itself, and we've got the directory name by itself.
Now, ideally, since OCRmyPDF can OCR image files too, I should write this generically so it could be a PNG, a JPEG, or even a TIFF file, but I'm going to leave that for another day. This is complex enough.
The last thing we need to do to build the output filename is to slap "-ocr.pdf"

[28:40] on the end of it. I decided to create a variable called $add for the additional text.
So add equals, quote, -ocr.pdf, unquote. We now have all of the building blocks to create the output file name.
We have $dirname is the original directory path where we're going to write the output file.

[28:59] $inputBaseName is just the name of that file without the path or file extension.
And $add is the "-ocr.pdf" that we're going to pop on the end so we don't overwrite the original file, and so we can tell which file has been OCRed.
To build the output file name, we need to concatenate all of this together.
Well, concatenation is a fancy word for adding it all into one long string of text.
You can do it, you can actually look it up in Excel. That's kind of a fun command to go play with.
In Bash, you put the variable names inside squirrely brackets with the dollar on the outside, and then any plain text just gets thrown in there without any of these brackets.
We want the directory name followed by a slash, then the input base name without the path or file extension, and then we want our added text dash ocr.pdf on the end.
I am definitely not gonna read this command because it's getting real gloppy now. But again, this is all documented in the show notes.
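Here's roughly what that gloppy concatenation looks like, with the $add variable from a moment ago included:

  add="-ocr.pdf"
  outputName="${dirName}/${inputBasename}${add}"    # e.g. /Users/allison/Desktop/myfile-ocr.pdf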
We're now ready to add the last and most important bit of our script.
We need to actually, in the script, tell it to run the ocrmypdf command.
We'll run it essentially like George did it originally, but instead of using one.pdf and two.pdf as our file names, we're going to use our fancy new variables $inputName and $outputName instead.
So the final command in this long script is ocrmypdf --output-type pdf $inputName $outputName.

[30:18] Ta-da!
I put the entire text of the completed script in the show notes and I'm definitely not going to read that to you.
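For anyone following along in text, here's a minimal sketch of the script as described in this segment; the exact version in the show notes may differ slightly in variable names:

  #!/usr/bin/env bash
  # ocrpdf.sh -- OCR a PDF and save a searchable copy next to the original
  inputName=$1
  dirName=$(dirname "$inputName")
  inputBasename=$(basename "$inputName" .pdf)
  add="-ocr.pdf"
  outputName="${dirName}/${inputBasename}${add}"
  ocrmypdf --output-type pdf "$inputName" "$outputName"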
All right, I sent this script off to George to run on Linux without any instructions and And that succeeded just about as well as you would have expected.
He didn't know how to use it. So let's step through the instructions on how to use the script.
First you can install Homebrew as I explained earlier. You can install OCRMyPDF as I explained earlier.
You're going to create the script by copying the text in this article and pasting it into a text file called ocrpdf.sh.
You can call it whatever you want, but if you use that name, it'll be easier to follow along.
In the terminal, you're going to go to the directory where you saved the script and change the permissions on the file so that it's executable by entering chmod plus x ocrpdf.sh.
I know that doesn't sound like it makes any sense, but it's changing the permissions so that it's allowed to be executed.
In the terminal, we run scripts by typing ./ before the script name.
I mentioned that a little bit ago.
This script requires an input file, so we need to run the script and tell it which file is the input file.
If your script is in the same directory as the PDF you want to OCR, and if, for example, the original file is called myfile.pdf, you would type ./ocrpdf.sh

[31:42] myfile.pdf. That's all you have to type, and it should create a file called myfile-ocr.pdf in the same directory that's fully searchable and accessible.
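Put together, the one-time setup and a sample run look like this, assuming the script and myfile.pdf sit in the same directory:

  chmod +x ocrpdf.sh
  ./ocrpdf.sh myfile.pdf    # creates myfile-ocr.pdf in the same directory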
Now, if the file isn't in the same directory as the script, enter the full path name for the file.
Now, that can be hard to type sometimes, so if you're on a Mac, you can just drag the file that you want to OCR into the terminal after you type ./ocrpdf.sh

[32:10] And it'll automatically put the full path into the terminal, including the file name.
I don't know if that works on Linux, but it is a godsend because I can never remember how to put the path names in correctly.
Let me tell you, this is my first time teaching other people how to write terminal commands and how to create a shell script, so I think it's highly likely that I've left out some steps or I've made a boo-boo or two in the instructions.
Please go gently on me, but do correct me or ask me questions if this doesn't work for you.
Now, I'm going to tell you a little secret. Only the people who stayed awake this long are going to get to hear it.
I did not write this all in one fell swoop. I made lots of mistakes and I had to look a lot of stuff up.
It's a very short script, but it took me quite a long time to write it.
Perhaps that's not a surprise to anyone that I made this many mistakes and I had to look a lot of stuff up.
But how I looked them up might be a surprise. In the past, I've gone to the Googles, put in a search term, then scrolled through the results looking for answers from Stack Overflow.
That's a site where programmers ask and answer questions on coding.
Sometimes I'd get lucky, and I'd find the answer on Stack Overflow, but often I'd have to search over and over to get the answer I needed.
This time around, I asked ChatGPT the questions instead.
I used Microsoft Edge as my Chromium browser rather than Google Chrome, and Edge has ChatGPT built right into the Bing search engine.
The advantages of using ChatGPT are many-fold.

[33:37] First of all, you get several summary-level answers. Each answer has a footnote, and it tells you the source.
I can see on one question I got seven answers, and the first couple were from superuser.com and stackoverflow.com.

[33:50] I can actually click on the link to the answer I'm interested in, and then I can read the question and answer in full context.
I can see how many people upvoted that. Was that a good answer?
Having it summarized and having quick access to the source, to me, is much better than a giant list of results from Google.
And ChatGPT remembers what you're talking about, too.
In a few instances, I'd ask it a question, and then I'd need to refine it by saying, I'm on a Mac, I forgot to tell you.
I didn't have to repeat the question. All I had to write was, now answer for macOS.
And it would say, oh, I'm sorry, I gave you Linux instructions.
Here's the answer in macOS.
Now, the answers are wrong about as often as people are wrong when they answer on the original websites, but I found it very quick to work my way through the answers that weren't exactly what I was looking for.
I was also able to command tab to work away on something else during the 15 to 30 seconds it took ChatGPT to craft the answers to my questions.
Now, I didn't rely wholly on ChatGPT inside Bing to do my work, but rather it helped me build up each piece.
I enjoyed having what Microsoft likes to call co-pilot by my side.

[35:01] The bottom line is I really enjoyed figuring out how to write a shell script on my Mac to automate the process of OCRing PDFs.
Perhaps it's a bit nerdy for you, but it makes me really feel powerful to be able to do this.
I remember a day when I used to want to automate things because all of the cool kids were doing it, but I couldn't figure out what to automate, and even if I did think of something, I didn't have the technical chops to pull it off.
I think that's what programming is all about for me. Now I have an itch I want to scratch, and I know if I try to use the tools Bart and the other NosillaCastaways have taught me, I'll be smarter when I'm done.
One more thing. As I was writing this up, I kept thinking, I'm not worthy to teach this stuff, and I bet there's a better, more elegant way to solve this problem.
I wrote it up anyway, because Bart constantly says in Programming by Stealth that there are often many right ways to do something.
Even if his solution might be more elegant than mine, it doesn't make mine less.
This lesson that he keeps teaching me is probably my favorite thing about learning from Bart.
Believe it or not, there is a part 2 to this article. After I got my little shell script running, I decided I could figure out how to put it into Keyboard Maestro, so I don't even need to launch the terminal to run it.
The solution is super cool, and it was really fun, so stay tuned for part 2.

Support the Show

https://podfeet.com/patreon


[36:21] Now, I know the holidays are fully upon us and you probably have a lot of financial demands on you, but if you'd like to help fill the stocking of a lovely podcaster who entertains and educates you without fail every single week, please consider becoming a patron of the Podfeet podcast.
You can do that by going to podfeet.com slash patreon, where you can choose a dollar amount that works for you and your family.
It truly shows me your appreciation for the content we provide here.

Tom Mattock on Alt Text (Alt Tags) on Images in Social Media (no blog post)


[36:47] Today, I'd like to welcome to the show a gentleman you've heard from before, but this time we're gonna have a little conversation.
Welcome officially Tom Mattock to the NosillaCast.
Hello, thank you for letting me be here, it's great.
Well, Tom and I have been engaged with a few other folks, including John Gruber on Mastodon about the subject of adding alt text, which is also called alt tags, to images when you post them to social media.
Since Tom is blind, we thought it might be helpful for us to learn from him what that alt text is, why it's important, who it helps, and hopefully he'll give us a few tips on how to create good alt text, and maybe even some tools to help us do that.
So with that grand introduction, Tom, why don't you first tell us a little bit about yourself?
Well, let's see. I'm 50 years old. I graduated from Perkins School for the Blind back in 1993, went to college, graduated, and I'm now working at Walmart, been there since 2002, and married my best friend back in 2015.
Oh, nice. And, uh... Now, you're blind, as we said up front, but you've got a couple of other interesting things in your background from your earlier life.

[37:55] Yes, when I was four years old, I was going to visit my grandparents.
My mother was giving birth to our third child, and I got sick down there.
I got some kind of influenza, and couldn't breathe for about eight or nine minutes, and I was in a coma for about a year.
A year? And, yes, a whole year I was in a coma.
And I wasn't supposed to be here, but I've recovered, and I graduated high school, and I graduated college, and all these other things.
Holy cow. So you came by everything easily?
Not really, but yeah.

[38:29] Now, you were telling me that you have cerebral palsy as well, is that a result of that?
Yes, cerebral palsy is technically oxygen starvation to the brain, and since I didn't have any oxygen, that's how that came about.
Now technically, I learned in college, during a term paper, that cerebral palsy is technically before, during, or after birth, up to a year old, where I was four years old, it's technically not cerebral palsy, but everything else says it is, so that's how they classify it.
Okay, well I don't suppose it's all that important at this point to name it, right?
Right, right, right. So you have- I just have what I have and I do pretty good, I think.
Yeah, it sure sounds like it. So you're in a wheelchair and you have some trouble with your dexterity with your hands, is that correct? Yes, yes. Okay.
I always feel not worthy when I talk to somebody like you.
It's like, I'm whining because, you know, like my feet hurt because I walked too far at Disneyland yesterday, you know?
Well, I wouldn't want to be there either way, it's too many people, too crowded, I can get to use the chair.

But people in wheelchairs get to go to the front of the line, that's just a little perk. Yes, I don't like that, I really, that's a pet peeve of mine.
Really? I really don't. Yeah. Huh.
I don't know, I think you should get a few perks, but anyway, we're not here to talk about that.
Let's first, I've talked about alt text a fair amount on the show, but I've never done it from scratch.
So, what is alt text? Can you explain it to us?
What alt text is, is when you have a picture and you tap on a certain part of the screen.
I'm not sure how people who are sighted do it.
I know I can tap on a picture and it says,

[40:13] put in a description, and I can type in a description of what the picture is. And when I post it with apps like Mona for Mastodon, it comes over into the alt text part, so anybody with a screen reader can read what I wrote.

[40:28] Do they know what the picture is if they can't see it? What's the point? I always say a picture is worth a thousand words, you know, but if you can't see the picture, what's the point? There are no words, you don't know what the words are.
So, when you receive a picture for me, when you see one of my posts, how is it that you hear my alt text with your screen reader?
Yeah, voiceover on the iPhone will read the alt text automatically when the picture's there. It's like part of the picture.
Okay. And then when you go to save the picture, if the person does it right, like you've been doing lately, it copies it directly to the caption field of the picture, so I don't need to do any extra work.
I can just mark it as a favorite and it goes up on my Apple TV for other people to see and enjoy.
Oh, wow. Oh, that's interesting. So you can save the photo and it actually saves it in. I've got to try that. I did not know it could do that.
Now, we've been describing it in the context of Mona, but it's for Mastodon, which is a darling of the blind community.
I mean, I don't think I've ever seen an app like it.
People just embraced it because they wrote it very much with accessibility in mind, so that's pretty cool.
So it allows you to hear it when you do it, and that's important to you because otherwise it's just a meaningless post.
Yes. I mean, it depends. I mean, the person can write something around it, but you don't know what the picture is.

[41:57] You know, you can say, oh, here's a nice picture.
We were at Disney World last night or Disneyland last night.
Here's our picture from our, but we don't know what's in the picture.

[42:07] I would have no idea that the picture you did last night was of the Disney castle all lit up with the fireworks and the lights in the background. I'd have no idea.
So, if somebody does do, I mean, it is possible to write a post that includes a very specific description, but I feel like it's just contextually different.
So in my post, I said, you know, it was really cool to see the Disney Castle all lit up with the Christmas lights at night, but in my description, I said, they're white dangly lights, and the, I think I said the building is blue, and it's against a jet black sky, and you know, that's different than the way you would write it just to say, here's a post.

[42:46] Right, exactly. Because everybody else seeing your post, they see the pictures, you wouldn't have to give those descriptions for somebody who could see it. Right, right.
Now, I feel like it's becoming- And I just learned yesterday, I'm sorry.
Go ahead. I just learned yesterday that when I send a picture to people with a caption in it, and on a family group, they don't see the captions automatically.
Right. I did not know that. So that's an interesting thing, and I'm going to turn us like 90 degrees from where we were supposed to go, just because I ended up in a conversation with somebody who was part of this thread that we had with John Gruber and asked a question about alt text, and so that's how I got involved in this conversation.
But this one very angry blind person started writing back and forth to me, and it was very interesting.
They said that they purposely put content into the alt text because that way the sighted people who don't bother to read their descriptions will miss something.
And it was like, it was such a, just such an angry perspective.
It really surprised me, but it came in the context of, I was saying, well, I don't actually normally read the alt text.
And they said, well, that's what really makes me mad.
And it's not revealed to us automatically. We just don't see it.
And this person came back and said, yeah, that's just those terrible app developers not revealing the alt text to everybody.

[44:15] And well, except for Mona, because Mona, if you send a picture to Mona without captions, it will say, hey, do you want to add a caption? Do you want to add alt text to it?
I don't think it does that for me because there's a setting in Mona.
Oh, you can set it to do that? There's a setting in Settings that's automatically on?
Right, it's a setting that says remind to send with alt text if you're sending a picture. Oh, that's cool, I'm gonna turn that on.
But this person was mad that the sighted people weren't seeing it when they see a post. They were angry with the developers for that. But to me, if I can see this picture of Disneyland with the white lights and the black sky behind it,
It's redundant information, and actually it would be kind of annoying to also see the alt text.
Exactly. And I don't know how you would see that. Would it be like alongside of it?
Well, what's nice about Mastodon is if you, and actually in Mona, I think it actually says alt on the picture, so if you want to see it, you can see it. Oh.
And VoiceOver reads it automatically, so I don't know that there's a separate field.
Yeah, so like Steve posted a picture of a giant Santa, and I can see in the bottom left-hand corner in Mona, I can see a little, like a little text box, and if I click that, I can see it says, photo of me and my dog Tessa in front of a blow-up Santa lawn decoration that stands about 25 feet tall. We appear small.
So that- Oh, cool, I gotta get that picture.

[45:41] I'll tag you in it. No, yeah, no, I follow Steve, too, so I'll get that off of Mona.
Is that on Mastodon? Yeah, yeah, it was. Yeah, I'll go get that later.
Just a day or so ago. But I mean, we have the access to it, but it's not in our face, and this person kind of was annoyed that we weren't being forced to look at it all the time, which I thought was just— Well, why would you, though?
I mean, if you can see a picture, why would you need the extra thousand words when you can see the picture? Exactly.
Now, one thing that— But you do have the option to see it, so that's good at least.
Yeah, yeah, I do like that, and that's definitely not true in all apps.
So you started talking about some tools to allow us to add these captions more easily.
Yes. What do you do? Or what's available to us that I don't know about?
Well, there's one called Be My Eyes.

[46:28] It's the iPhone app. Yes, it's an iPhone app, and what it does is, up until recently, you could call in to get help with a picture, like, what's in this picture? And either AI or a person could help you with different things. But they really bumped up the AI recently, so you can look at a picture.
And it will analyze the stuff in the background.
I'll tell you, I'll send it to you later, I should send it to you, what it said about your picture of Disneyland. When I put it in, I was just curious what it was gonna say.
And it puts in so many more things in the background.
And it's an amazing way to recognize it. There was a post I saw a few weeks ago from Doctor Who, do you know that show, Doctor Who?
Yeah, yeah. Well, there was a picture of one of the creatures, and I'm like, I wonder if this is going to recognize it.
So I put it through Be My Eyes, and it knew what the creature was called. Oh, wow.
It's just great. It just adds a more rich description to whatever a person could ever think to write.

[47:40] So I'm a little confused. I'm a little confused. So Be My Eyes is an app that runs on the iPhone, and so you're using it in order to find out what's in an image.
I thought you were going to be telling me how I can put better tools.
Okay, you're talking about the other way around. Got it. Yeah, so what you do is, instead of sharing it, first you share the picture to Be My Eyes, and then Be My Eyes will look at the picture and it'll go, this is a description of the picture, and it will pick out the background and what the sky looks like and what the castle looks like, and then there are lights bursting over it, and it just goes really, really, really intense.
And... So, this might give sighted people the excuse of we don't need to do captions if you have these tools. What do we need to bother for?
No, because what you want to do is save us the work of having to do that.
If you do it first, if you would, like say you took your picture, you ran it through Be My Eyes, and then you would put that text, your alt text, along with whatever you wrote, we wouldn't have to do that extra step, in my opinion.
It's just, you know, it's just better.

[48:54] It's just better descriptions. It seems to me that if the person describing the photo does take a little bit of effort to write a sentence or two, you can pick out what's important. Why was this interesting?
Like what Steve did with his photo of Santa and he and our dog Tessa, we appear small.
That might not be something that AI would pick out, but that was the point of the photo was for you to realize that this Santa was ridiculously large.
Right. But that's why I like to put both. I usually put what the person does, and if I want more description, then I run it through Be My Eyes.
So, how do you use your iPhone to look with Be My Eyes to look at a photo if the photo's on your iPhone?
So, I take the picture, I save it, I can open it on Mona, let's say, and then I, you can tap on it, you go to share.

[49:46] And then you can send it to Be My Eyes to analyze it more, and then I can see the original alt text that whatever you wrote before I do that.
And I'm like, okay, that's good, but I want a little bit more.
And the other thing is, if you want to know, like, you know, what color is Tessa?
Your dog, Tessa, right? You're not going to say that because you're assuming people are going to see that, right? Right, or it was unimportant to the story.
I know, but what if I want to know? Sure. You know, because you're going to look at it, you go, oh, that's a brown dog named Tessa.
But I'm not going to know what color the dog is, but if I could say to the picture, what color is the dog in the image?
And Be My Eyes will come back and go, oh, it's a brown dog.

[50:29] Okay. With red spots or whatever, if you want more information.
And then I can choose to add that to my copy of the captions or not.
I can figure out what's important for me to save for myself.
Okay. Okay. I got you. I got you. So, from the sighted person's perspective, to create captions that are good, or alt text that's good, we're going to use these words all interchangeably because they pretty much are.
What are you looking for in an alt text? What makes a good alt text?

[51:06] My first thought would be, look at the picture, what's the first thing that jumps out at you?
Is it the background, is it the sky, is it the castle that's all lit up?
What are you seeing yourself when you first look at the picture, what's your first gut reaction?
You go, oh, that's cool, those lights over the castle are cool, and maybe it's the location because you don't, I can see a castle, but I'm like, well, what is it?
Is that Ireland, is that where Bart is? Is that Disney World? Where is that castle?
So it's important to put where the place is, because I'm not coming from around there.
I don't, I've never been to Disneyland, so I've never seen it, so I wouldn't know what it is.
So that's important to put what it is and where it is, and then anything that makes it stand out, maybe.
Okay, okay. So if you were, maybe just imagine you're trying to describe the photo to somebody over the phone.
Yes, that's a good thought, yes. Like why was this interesting?
I saw this really cool photo last night. I was looking on Threads while I was hanging out in a line at Disneyland, and it looked like a moon shape over the water, but there was this bridge, and it was actually a reflection of an arch. Just the way I'm describing it to you now, that's what you would put into the alt text.
That's perfect, yes, that's exactly what you would put in the alt text.
And what do you say to people that say it's too much trouble?

[52:33] Well, I understand it's a little extra work, because you just want to get the picture, oh, this is a cool picture, I'm going to share it to the world. That's fine.
But then you're lucky, you know. And there are a lot of people on Mastodon who say, well, I'm not going to re-boost your stuff if you don't put alt text in it.
The way I look at it, and I did a talk on this a hundred years ago at, I think it was Blog World Expo at the time, I entitled my talk Increase Your Audience Size Through Accessibility.
So everybody wants more followers, right? Everybody wants more people to see their posts.
People say they're still on ex-Twitter because they have so many followers, even though they know that 80% are bots.
But they want more followers. People love that. And it's like, you want to know a way to reach, I don't know, a couple million extra people? Why don't you throw an alt text on that, you know? Right. And alt text isn't just for Mastodon either. It's on X, it's on Facebook, any place that you can put a picture, you can add alt text. Yeah, it's in our Slack, and we have a bunch of blind people in our Slack, so don't be forgetting those folks either. The podfeet.com Slack, like I'm talking about, isn't something I've taken on yet.
I've just recently gotten into Discord.
Oh cool. And it's in Discord too, right?

[53:58] I don't know. I haven't gotten that far yet. I just started it like a few weeks ago.
But I mean, you can put all text on your images in Discord, I believe, right?
I would assume. I haven't tried it. I haven't tried that. I wouldn't be surprised if you can.
I bet you could, because that's just an image. I mean, like I said, if the image has the caption already in it and you share it to Discord, I guess it would go over.
Okay. Okay. So back up on that. That was something that came up that I didn't realize.
Is if you're on the iPhone and you have a photo up, if you swipe up, you can see like the date the photo was taken, you know, what kind of camera, you can see a little map.
And that whole thing, there should be a caption field just before all that.
Correct, it says add a caption.
Yeah. And so I had never realized this. I occasionally put things in the caption field, but if I put a caption on a photo, in my experience testing this yesterday, it doesn't stick. When I go away from the photo and I come back, it's gone. But you have to hit Done. Once you're done typing it, you have to hit Done. Okay.

[55:08] Pat and Steve in front of a white Polestar. We took a picture for Bart, because Bart's going to be buying a Polestar and we just happened to run across one.
So, Polestar EV.
Okay, that's good enough. So I'm going to tap Done. Maybe I never hit Done before.
So now you're saying if I share this to Mona, or how about in a text message to you? That's supposed to work, right?
I suppose, but I can't get to my text right now, but yeah. Well, we'll leave that for everybody to find out whether this worked.
I told Tom he wasn't allowed to have anything playing back on the recording at the same time, so he's not listening to voiceover or anything like that.

[55:51] Yeah, I just got the text. Okay, don't look. I heard my sound.
Don't, yeah. I won't look at it. I won't look at it. I won't.
So that's interesting. It never occurred to me that I could do that there.
And now what's really cool, if this works, and I assume it does, because you said it does.
If this works, it actually does look like it's stuck this time.
So if you put the caption on the photo in photos, then you can take that same photo, send it to Mastodon, send it to Slack, send it to Discord, send it to Facebook, it would actually be already there.
Yes. Interesting.
Huh. I've got to test that out, because that's a big thing. I mostly do it on my Mac because I want to use my clipboard manager to save that text and save the text of my message, because it's two separate things.
So I'm going back and forth, back and forth, back and forth doing it.
But if I could just put it in the photo...
On the Mac, does the caption field show up there?
Let's see. Oh, well, I'm syncing that photo. This is real-time excitement, everyone. Let's see what happens. Let's see if it's there.

[56:56] I do not see a caption field. There's a title field. Let me see if I do get info. What have I got?
I got a title. I got a location.
It says it's 24 millimeters, the keyword tells me who's there, it tells me I was in the Mickey and Friends and Pixar Pals parking structure, and nothing about that caption.
Yeah, I know. I have sent the picture with captions to my family.
They say they can't see the captions either.

[57:25] But what if that's just a problem with the iPhone sending texts?
Well, my impression has been that Apple has two completely different teams working on Photos for the Mac and Photos for the iPhone, and that they occasionally have coffee together and go, oh, that People and Places thing, or People and Pets thing, maybe we should sync that between them.
But like, I've got a title... I don't have a Mac, so I can't check that, yeah. I've got a title field on the Mac, which I fill out all the time so that I can find my photos, but that doesn't pass through to the iPhone, and the iPhone's got captions.
There's no title field either, and I've never seen a title field on the iPhone.
Nope, there isn't one. So that's interesting.
I think, I mean, if this caption thing is gonna be somewhere, I think having it on the iPhone is probably a better place, because most people take a photo and post it, right?
So I think those of us that are immediately going over to a Mac are probably fewer and farther between. Right.

[58:29] So, that's pretty cool. Is there anything else that you wanted to tell us?
Because we're kind of coming up on our time here. Anything else we should be thinking about?
Oh, I want to stick one in and then I'll let you answer my question.
Tom and Kevin Jones have both given me positive reinforcement publicly for good captions when I do them.
And I got to tell you, it makes me want to do it 10 times more.
When I go, wow, I made a difference. Somebody was able to get my content who otherwise wouldn't have been able to see what I was trying to talk about. And I'm very responsive to a pat on my little pumpkin head, so keep doing that when I do a good job.
Yes. I saw a post on Mastodon last night talking about this, and someone said to think of it like this: you see a screenshot of a weather forecast. Would you rather have the description your phone reads to you be "this is a screenshot of a weather forecast," or "the temperature the next three days will be 55 and sunny"?

[59:30] Which would you rather see, you know? One gives you actionable information and one does not.
Now I'm going to declare one thing I never describe.
I will say Discord logo if that's what I'm posting a picture of.
Because it doesn't matter whether it's a D or whatever, I know it looks like a little controller thing, that doesn't matter, right? Please tell me I don't have to describe logos.
Well, but I don't know, but see that's where, if you put that in a picture, I could run that through Be My Eyes and then Be My Eyes could describe it if I needed it to.
I don't know what that description, I don't know what that is.
Yeah, but it doesn't matter. If I saw it once, and then I saw it a second and third time, I'd go, okay, I know what that is, I don't need that described.
But if it's the first time, I might wanna know what it is.
Yeah. But the second or third time, maybe not. But that's where, like you said, where some of these kind of tools can come in handy and tell you what it is.
Let's try it more. Well, on my blog posts, if I'm using just the NosillaCast logo, it just says "NosillaCast logo."
There's a logo? Yeah. Every episode of the NosillaCast that I post a blog post for has the NosillaCast logo on it, and I just wrote "NosillaCast logo."
You don't know that, I think... I didn't know there was a logo.
Sorry? I didn't know there was a logo.

[1:00:56] You know what I should do, though, now that I think about it?
Those logos are all stored in WordPress, and I don't have to write them each time.
I'm going to go back and fix all of them. So, like, the Programming By Stealth logo is very cute.
I should describe that one. The Security Bits logo is very cute too.
What you could do is do one of your text expanders for it, so you don't have to type it each time.
Yeah, well, no, it's already embedded in the photo. I just point to it in WordPress.
I don't upload it every time. So I don't know. I'm just being lazy the first time. So I could fix that.
I'm going to fix that, but when it's generic, it's like, I've got a Bartender logo on a post I just did, and it's just going to say "Bartender logo," to be fair.
Right, right. Well, no, I did say it had a little tuxedo person on it.
So I guess I did do a better job. All right, I'm conflicted, you can tell. That's fine. You're the host, you can do what you want.
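Since those shared logos live in the WordPress media library rather than in each post, the alt text only has to be fixed once per image. Here's a minimal Python sketch of how that could be scripted, assuming the standard WordPress REST API with an Application Password; the site URL, media ID, credentials, and description are hypothetical placeholders, not the real podfeet.com setup.

import requests

SITE = "https://example.com"                      # hypothetical site, not podfeet.com
MEDIA_ID = 1234                                   # hypothetical attachment ID of one logo
AUTH = ("editor-user", "application-password")    # hypothetical Application Password

# Update the media item's alt text once; every post that reuses this image benefits.
requests.post(
    f"{SITE}/wp-json/wp/v2/media/{MEDIA_ID}",
    auth=AUTH,
    json={"alt_text": "Describe the logo artwork here."},
)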
You're the host you can do what you want Well, and everybody can do what they want Nobody has to do alt text.
But if you want more people to enjoy your content, which is the whole point of posting things publicly.
Maybe you should consider doing it right yeah. My wife is in here yesterday doing pictures and i said i get back and i post this is it yes and then i would dictate it into the field is that sound good to go yep.

[1:02:09] Okay, and then I run it through Be My Eyes, and it would tell me more, like, a man with a beard and a woman with a pink dress, and I'm like, oh, I didn't even know what she was wearing today.
I didn't even ask what she was wearing. I should have asked her yesterday.
The picture told me when I posted the pictures yesterday.
Oh, nice, nice. Well, this has been really helpful. I like your perspective.
You're one of those positive people that make me more encouraged and interested in doing things to make my content more accessible because I do want more people to enjoy my content.
The angry people... You're very good at that. Oh, thank you. The angry people, not so much. "Oh, this software doesn't work for the blind, so don't try it."
Yeah, it's always top of mind. I feel a little bit guilty. I'm going to be talking this week about a battery pack, and it's got a big, bright display, and I talk about how awesome the display is, and I never point out, yeah, you're not going to be able to read that if you're blind. But Kevin Jones just said to test Be My Eyes on it to see if it works, so maybe when we're done recording you can help me learn how to do that.
So this has been really fun. Does it chime when you put it on?
Does it make any noise? No, no. No? Then you haven't got a chance.
Okay. But there are battery packs that do and so that's definitely a better way to go. Yeah, there's some that beep, some that vibrate. There's all kinds of things that they do.

[1:03:32] Yeah, not this one. No. But I do need to cut us off. If people want to follow you on Mastodon, what's your Mastodon handle?

IGuy7200 at DragonsCave.space; that's DragonsCave.space. Okay, I will make sure there is a link in the show notes to that, and thank you so much for coming on the show.
This was really fun, to actually get to talk to you. Appreciate it. Nice meeting you, too.

[1:04:00] Well, I wanted to add one more thing. In my conversation with Tom, you'll remember he was using Be My Eyes to add captions using AI to his photos before posting.

As you should never do live during a recording, I installed the app on my iPhone, and I didn't seem to have the same user interface in Be My Eyes as Tom did, so we got kind of confused in the middle there.
Now I'm going to do a tutorial on how to get this to work, but for those who are curious right now, I figured out what was wrong.
When you first open Be My Eyes, you tell it whether you'll be requiring assistance or whether you're willing to give assistance to someone else over the phone.
I told it I would require assistance so they would know I was blind, but then I logged in with the account I created back in 2015 in hopes of providing other people with assistance.
Sandy Foster did this as well back then, but we've never been called, which is very sad.
But anyway, logging into this existing account flipped the app away from needing assistance and back to wanting to provide assistance.
So I had to delete the app and log in with a fresh account where I told it I did need assistance, and now I was able to see the AI tools that Tom can see.
Spoiler, what Tom taught me with Be My Eyes is nothing short of amazing.
The AI descriptions are scary, they're so good.
But you're going to have to wait for me to write my tutorial so you can hear more about it.
Well, that's going to wind us up for this week. Did you know you can email me at alison at podfeet.com anytime you like? If you have a question or a suggestion, just send it on over.
You can follow me on Mastodon at podfeet at chaos.social.

[1:05:30] Remember, everything good starts with podfeet.com. If you want to join in the conversation, you can join our Slack at podfeet.com slash slack, where you can talk to me and all of the other lovely Nosilla Castaways.
You can support the show at podfeet.com slash Patreon, or with a one-time donation at podfeet.com slash PayPal.
And if you want to join in the fun of the live show, head on over to podfeet.com slash live on Sunday nights at 5 p.m. Pacific time and join the friendly and enthusiastic Nosilla Castaways.

[1:05:56] Music.