NC_2023_04_23
[0:00] Music.
[0:10] Sunday, April 23, 2023, and this is show number 937. This week our guest on Chit Chat Across the Pond is your favorite psychological scientist,
CCATP #766 — Dr. Maryanne Garry on Influencing Delusions About Highly Complex Skills
[0:22] Dr. Maryanne Garry of the University of Waikato in New Zealand. Dr. Garry and four of her colleagues recently published a paper in Royal Society Open Science entitled Trivially Informative Semantic Context Inflates People's Confidence They Can Perform a Highly Complex Skill. It's a big fancy title, but the experiment builds on previous studies demonstrating that people have highly inflated beliefs about their capabilities at highly complex tasks for which they are entirely unqualified. In particular, a high percentage of people have high confidence that they could land a commercial plane with no help from the tower if there was an emergency and the pilot was incapacitated. In the study that Maryanne and her co-workers did, they tested whether watching a short, trivially informative video of two pilots landing a plane would influence that confidence level.
Would it make them more confident if they saw a video that gave them no instruction whatsoever? Would that change their confidence level? As always with Dr. Garry, you'll learn a lot, you'll laugh along with us, and your dreams will be crushed as only she can.
You can find Chit Chat Across the Pond Lite in your podcatcher of choice.
Of course, there's a link in the show notes.
Tesla Full “Self-Driving” Beta 11 Wins the Most Improved Award
[1:36] When I was 12 years old, my family had a collie puppy named Charlie who was in sore need of some training, so my parents signed me and Charlie up for obedience classes.
When it came time for the final graduation ceremony, which involved the incredibly complex procedure of walking around a circular track, Charlie stopped halfway through and he pooped on the track.
Despite that, Charlie was awarded the Most Improved trophy. And I actually still have that trophy.
There's a picture of it in the show notes, but it's actually right behind me as I'm recording for the live audience.
Anyway, I tell that story because I'm here to give Tesla's full self-driving beta the Most Improved award.
[2:13] For those who haven't been following along, about a year ago, Steve and I were both able to get into Tesla's full self-driving beta program by competing against a rating system in the car and achieving 98% or higher on the test over the course of a couple of weeks.
Full self-driving beta means actually driving itself on city streets, not just freeways.
[2:33] Now after testing full self-driving for a few weeks way back then, my assessment of its driving skills was that it drove like a student driver who is also drunk.
Seriously, it was bad. It drove at full speed through dips.
When it made left turns into roads with a median separator, it would just drive right at the median, forcing us to wrest control from the car.
It was super tentative turning into intersections; it kind of would go, er, er, er, er, you know. Oh, it was terrible, terrible turning.
It came to a stop at lights way too quickly, and then it accelerated way too slowly.
So you're basically sure somebody's gonna be honking at you because it was so slow getting away from lights. One time, halfway through a turn, it actually just gave up and gave control of the car back to Steve, and that was really nerve-wracking partway through an intersection.
One of the many things I enjoy about owning a Tesla, though, is that they send out regular software updates, even if you're not on the beta. Makes the car feel new when it happens.
A great example was when they added camera views from the side of the car to the display.
Now when we engage the turn signal, we get a rearward view to see the lane beside us.
It's a much safer way to see the lane you want to change into, rather than turning your head around in speeding traffic.
[3:49] Now Tesla often moves things around on the display, which can be a little annoying.
They move the garage door opening button pretty much every time they do an update.
I think they do it just to keep our minds sharp.
[3:59] Anyway, when Steve and I tested full self-driving beta, it was FSD version 10, and they've been sending out minor updates for the past year or so, which made very slight improvements in the full self-driving experience. I'd say the student driver maybe stopped drinking hard liquor and instead was only drinking high alcohol content IPA beer. Better, but still terrifying. Just recently though, Tesla shipped out FSD version 11 and it's definitely a marked improvement. I'm ready to declare that while it still feels like a student driver, it's like a good student driver who is actually completely sober.
It's that much better than FSD 10. I'll still call it a student driver because it doesn't drive quite like an experienced driver and I'll get into some details of what I mean.
I decided to take a drive and I'm going to illustrate these improvements by describing this recent trip where I let the car drive me to the Apple Store.
First it drove west down my block and it gently stopped at the four-way stop sign.
A pretty good start.
Now, Tesla got in trouble recently for letting the car do a rolling stop, say, at less than five miles per hour while driving. So now the Tesla comes to a sarcastically complete stop.
[5:15] In any case, it waited till the cars who'd gotten there first took their turns, and then with no hesitation, it accelerated appropriately into that right turn. None of that eer, eer, eer that it used to do before. Now, after a couple of other turns, it drove up to a stoplight, and it waited for the light to change. Now you have to really pay attention even when waiting at a light because the car's just going to absolutely go as soon as the light turns green and as soon as the cross traffic clears. My car and I wanted to turn left, but there was a car straight across from us, and my car pulled a little bit into the intersection as a driver should and waited to see what the other car opposing us was going to do. The driver had their left turn signal on, but they did not move into the intersection. The Tesla waited and didn't make a move. At this point I took over and I executed the turn myself. I'm not going to count that against the car as a mistake, because how many times have you seen someone with a turn signal on who makes a completely different move? You can't count on that car to be turning. So I'm not going to count that as a mistake, but I was a little worried that people would be impatient behind me, and I made the turn myself. I re-engaged full self-driving, and then we traveled down a very wide road that has a fair amount of traffic on it. It's pretty busy. In the middle of the block, a pedestrian dashed out into the street and jaywalked diagonally across in front of me. The car gently slowed down well in advance, and when the pedestrian had finished crossing, appropriately accelerated back up to the speed limit.
[6:41] I was particularly pleased with this maneuver. Not only did it not kill the pedestrian (that's table stakes, right?), but it also didn't panic. In FSD 10, it would often set off an audible alarm and slow down quite violently at the slightest provocation. It was quite unnerving when, as a human, you could tell the situation was not a crisis and would clear well before your arrival.
So it was really great to see FSD-11 treat this potentially dangerous situation cautiously, but not overly so.
[7:10] Another bothersome thing with FSD 10 was that it was truly terrible when making a right turn into traffic. When the intersection was clear, it would do that thing of inching out in little jerks, much like a student driver, eventually starting to make the turn, but then overcorrecting to the right and then back to the left, only then starting to accelerate. In my test drive, the car needed to turn right onto a very busy two-lane road. It was also tricky because to our left, the cars were coming over a hill which reduced the time to react. I'm happy to say that it did admirably. It inched into the intersection just enough and when it had the visibility it needed and saw that the road was clear, it accelerated quite rapidly into the correct lane. It was very comforting for it to do it quickly. You know what I mean?
Like if you're going to go, you should just go. And it did. In Teslas, I think even without full self-driving, you can enable a feature where it makes a little bong sound when the light turns green. It's pretty accurate if you're going straight ahead and the light turns, but it wasn't very accurate on the left turn arrows. It might tell you to go, send you the bong, when you're in the left turn lane, but it was the straight-through light that had actually turned green. We, the Tesla and I, drove up to a red left turn arrow. It waited until the left turn arrow turned green, and then it accelerated very smoothly into the correct lane.
[8:30] Now remember I said earlier that FSD 10 had trouble recognizing medians on left turns?
I was very relieved not to have to wrest control of my car as it cleared the physical median with plenty of space. I drove a bit farther, and as I came up to a red light on this two-lane road, someone in a large SUV had parked right at the intersection in front of a fire hydrant.
Sorry, Bodhi. Not only that, he opened the driver's side door all the way up and got out of the car and stood there in the lane. At this point he was blocking easily a third of the lane that I was currently driving in, or I should say the Tesla was driving in. The Tesla did not panic.
It noticed that the lane was blocked well before it would require a hard stop and instead rolled to a stop a good 10 feet before this silly man. Now this man continued to partially block our lane, so the Tesla started trying to edge to the left, but it didn't seem super confident about the maneuver. There was also a line of cars coming up along the left of me, so I decided I'd give it a hand and I took over. Again, it didn't make any mistakes, but I just got a little bit anxious and took over.
[9:35] Now, remember I said that FSD 11 is like a student driver, even a good student driver, but it doesn't drive as a seasoned driver would?
A good example of that was when we turned onto Sepulveda Boulevard, which is a major thoroughfare with three lanes.
The left and middle lanes travel along pretty nicely, but the right lane is problematic.
You can see it has lots of dips for rain gutters.
Hey, we had rain this year, so don't make fun of us, but we do have rain gutters just in case. So it has lots of these dips at every intersection, and on top of that it has lots of business entrances and intersections where cars slow down and make turns.
It is easily the worst lane to be in. Nobody wants to drive in this lane.
[10:17] Well, the Tesla really liked that far right lane. I signaled to move to the center lane and it obeyed me.
It made a very smooth lane change.
As soon as I'd driven about a block in that center lane, it said, yeah, I'm going back into that right lane again.
Now, it wasn't technically wrong, but no human drivers were choosing that lane for obvious reasons, because it was an annoying lane.
I let it drive that way for the rest of the ride, and it made no errors.
While I still find it stressful to let the car drive itself, overall, I wouldn't say it made any outright mistakes on this particular drive.
I did find that it drove faster than I'm comfortable with at certain times.
However, every time it seemed too fast, I checked, and it was driving at or under the posted speed limit.
I think I might be a little old lady driver, so take that for what you will.
Later, Steve and I took a drive together in my car where he was in the driver's seat with full self-driving in control.
In this drive, we experienced more of those I wouldn't have done it that way events.
For example, there were a couple of areas where we know the traffic backs up.
So if we were driving, we would get into the correct lane a mile or more before an upcoming turn.
But that Tesla would tootle along in the wrong lane until it was actually necessary to change lanes.
Again, not technically a mistake, but it wasn't what a human driver who was familiar with the roads would have done.
Now, I guess it isn't familiar with the roads, right? It's figuring it out as it goes.
[11:45] Now, perhaps the most obvious example of student driver feel is on curvy roads.
Experienced drivers will hug the inside of a turn, but the Tesla always goes for the middle of the lane no matter what. So this isn't dangerous per se, but it feels like a loss of control, as though the car's going to slide out of the curve, like it's not going to stay tight. You know, normal human drivers, we hug the inside of that lane. So it was, again, just not the way we would have done it. Now, it also has trouble when lanes get super wide.
On the particular drive where Steve was behind the wheel, there's a park that lets out onto a busy two-lane road. At the park exit, they widened the lane to allow drivers to merge in more easily.
Well, all the Tesla knows, though, is that the lane it's driving in is normal width, it's going along normal, normal, normal, and then suddenly it grows to almost twice as wide as normal and then narrows back down.
The only thing that Tesla knows to do is drive right down the middle.
So it starts off to the left, because it's in a narrower lane. As the lane gets wider, it drifts farther and farther to the right, and then it has to come back in to the left again.
So this could mislead drivers behind it into thinking the car was moving to the right to make a right turn, because that's what a human driver would be doing.
As it starts to narrow, though, the Tesla continues straight while still staying in the middle of the lane.
Again, not the way a human would drive it.
[13:08] It also made the same mistake it has made on this drive since we started testing FSD.
One of the left turns is onto a road with a painted, not physical median.
In the US, painted medians are designated by a double-double yellow line.
While it appears to have learned not to drive over the physical medians, it ran right over the end of that painted median just like it did on FSD 10.
[13:31] Now, I briefly let the car drive me on the freeway as well. With full self-driving, working with navigation, the car got itself onto the freeway and then tried to move into the carpool lane.
That makes sense since a Model 3, you know, Tesla electric vehicle is eligible for carpool access, but what it didn't know is I never put the carpool access car stickers on my car.
Now, you'll probably mock me for this, but they were purple.
My car is red. It would have just looked terrible. Plus, I very rarely drive by myself when I'm in a lot of traffic, so it's not a big deal.
Anyway, the car didn't know it wasn't allowed in that lane.
Now, it did do a couple of other lane changes on the freeway, and I wasn't completely happy with how it performed.
As with FSD 10, it still moved into lanes where it was impolitely close to the driver coming up from behind.
Maybe not technically dangerous, but definitely kind of like how a jerk would change lanes.
[14:26] We later learned that there are several different settings for full self-driving in FSD 11.
You can choose Chill, Average, or Assertive.
I changed it from Average down to Chill, and now it says, in this profile, your Model 3 will have a larger follow distance and perform fewer speed changes.
I haven't had the nerve to try it again on the freeway, but perhaps it will drive a little bit more like an old lady like me with this change in settings.
I really hate to think what assertive would be like.
The bottom line is that Full Self-Driving 11 is a huge improvement over Full Self-Driving 10.
I was beginning to doubt our driverless future, but this update renews that hope.
[15:06] A couple of weeks ago, a darling six-year-old boy we knew named Caden was killed in a car accident.
This is why I believe so strongly in supporting efforts to bring us true self-driving cars as soon as possible.
CSUN ATC 2023: Glean Personal Study Tool
[15:21] Well, I'm kneeling on the floor with Helena Harrison from a company called Glean.
She suggested kneeling and I thought that was a lot of fun, so we're going to do this interview in a little bit more casual style.
Glean is a company that helps people take notes. And if you're in school or you go to a lot of teams meetings and you need to be able to take notes, but you find yourself distracted because you're taking notes and you miss the point, Glean is designed to kind of help you with that.
Is that a good description of it? Yes, absolutely.
It records all of your audio and then you can add your notes as you go along and you'll be able to come back to your notes later to expand when you've got more time. It takes that pressure away of you having to sort of write everything down during your meeting or during your lecture or your class.
So I've seen applications like Notability, which is a note-taking app that also does audio, so you can grab a section of your text.
[16:15] And then find out what they said right then. This is sort of the other way around. The primary focus is the audio, but you're putting in short notes to say, go look at this. Is that a good way to describe it? Yes, I would actually think that the primary focus is really more your notes, because that's what you want. You want to sort of have a really nice summary of your class or your meeting afterwards, but the audio is there to really help you expand your notes afterwards. Afterwards, but during, while I'm in there, I can just click a button that says important or follow up, or I could just type in confused. That was one of the buttons that you had in there, which I really liked. I could have used that in school. Like, I want to go back and listen to that two or three more times.
Yes, absolutely. Yes, so you can add your notes during, yes. And it's just recording in the background there. Now, where does it do the recording?
[17:01] It does a recording on the cloud, so it's a cloud-based software, but the nice thing about being on the cloud, it means that you can access it anywhere, so not just on your own laptop, but you could access it on your friend's laptop, or you could access it on your iPad, or your phone.
What's the security around that?
Amazon Web Services is what we use, and everything is encrypted at rest.
We obviously made sure that everything was as secure as it possibly could be.
That is the right answer to the question. So, I'm looking at the interface right now, and she's imported the slides into the note-taking app. She's got, let's see, I'm going to actually press buttons here. On the right-hand side, there's kind of an interesting little interface that shows the audio with little highlights for each section where she took some notes about what was going on in there. But it also does, let me see if I get it right, speech-to-text.
Is that right?
It does do speech to text, yes. So after you've actually done your audio recording, once you've stopped your recording, you can then click a little button that says convert to text, and that will convert all of your audio into a text format.
[18:09] So this is cool because there's a column where the notes that you've taken are.
So let's see, there's a note I'm looking at that says ISS, and if I click on that, I can see where in the audio that was spoken. And then, can I, there's a play button, I'm just guessing at the interface, I can hit play and I should be able to hear what was said about the International Space Station.
[18:32] So that's pretty cool. So you can kind of go back and forth.
You did one other thing with the text. Can you explain that?
You started in the text transcription and you were able to do something with that too?
Yes, of course. So I can also go through my notes and click on a note and it will take me straight to that part of the transcript.
And if I think that part of the transcript is really important, or I want to sort of take it out of my transcript and pop it into my notes, I can. I can just copy and paste it across.
So it saves me having to write it all down.
Of course, once it's in there, if it's a quote, I'm going to leave it exactly as it is, or it might be that I want to go and edit it, and reword it and make it my own.
Then do you export your notes from this? What do you do afterwards?
Yes, you can export it to anywhere. So basically, you copy.
[19:15] There's a reading view in here that takes all of the audio away, and you're just left with your slides if you've got them.
You don't have to have slides. It's definitely your slides, and your headings, and your notes.
Then you copy that across.
Because you're copying it, you can pop it anywhere, it's not restricted to Google Docs or Word. You can pop it into a nursing journal or a media diary, basically anywhere you like that accepts text. This is very cool. So what does it cost to use Glean? And again, this is for business, it's for school, wherever you need to be taking notes, what does it cost? It's $129 a year, or you can do a monthly subscription instead, which is £12, $12 a month. That actually sounds like a pretty good price. I could have used that in a lot of classes I took in college. Thank you very much, Helena. This was really cool.
My pleasure and thank you. Thank you. It's very nice of you to come and talk to me at my stand. Oh, and the name of the company is Glean. And what is the website?
Glean. Oh, hang on a minute. Glean.co. Glean.co. That's the website, isn't it? Yeah, just Glean.co.
Nice and easy. Very cool. Thank you very much.
Thank you. Thank you very much. Nice to meet you.
Support the Show
[20:23] The week after Terry Austin made me buy Hush for $50, I conveniently mentioned that in my plug for folks to support the show. Guess what happened? Both John Murray and Bill Reveal went over to podfeet.com slash paypal and they donated collectively more than enough money to cover Hush. Their generosity and show of support is overwhelming. I should mention, this isn't even the first time the two of them have donated. If you'd like to be awesome like John and Bill, please consider giving a one-time donation or donations on a schedule of your choosing to show your support of the work that we do at the Podfeet Podcast. Remember, you can do it at podfeet.com slash paypal.
Retrobatch for Automating Image Manipulation
[21:04] Remember a few months ago when I spent a stupid amount of time automating the incredibly complex procedure of unchecking a box in Preview's export window to remove the alpha channel from PNG files?
Well, the problem to be solved is that images with an alpha channel have transparency, and if they're dark images, they're impossible to see if the viewer is using dark mode on their device. So when I would tweet out or toot out on Mastodon a link to one of the shows, if it had transparency in it, if it had an alpha channel, it would be impossible for those using dark mode to actually see what I'd posted. Now, I've told you about a couple of solutions, but this week I solved that problem and a bigger problem using an amazing tool called RetroBatch (boy, that's hard to say), RetroBatch Pro from Flying Meat Software. Now, the problem to be solved this week, separate from the alpha channel problem, is creating effective featured images for my blog posts.
You know how if someone posts a link on social media, it expands to show the title and an image?
Well, people have figured out that posts with featured images are much more likely to cause the reader on social media to follow through and look at the link.
If you create your blog posts properly in your content management system, such as WordPress, you get to control what image is shown when that link is posted.
[22:24] Now, technically, I can slap any old image I like as the featured image in WordPress, but whether it looks good when you see it is a whole nother thing.
For example, if my image isn't at least 400 pixels tall, Facebook won't render anything at all.
And if it's the wrong aspect ratio, WordPress, in cahoots with my theme, I'm really not sure who to blame here, will crop the image.
In spite of spending a lot of time trying to create a repeatable process to make uncropped, good-looking featured images, I have not succeeded until now.
What I do know from talking to my theme vendor is that if I don't have a sidebar on my theme, which I don't, then my theme will crop images to 1040 by 650.
Now that means there's no need for me to upload anything bigger than that.
Now that aspect ratio is a little bit wonky and I'd prefer two to one, so my goal was to create featured images at 1040 by 520.
[23:19] Now let's say I find a company's logo for the featured image so they get some visual juice from the review. Last week, for example, Sandy reviewed a product from Anker and I ran into the problem that I always run into. The logo file was very high resolution, but it was the wrong aspect ratio. At 3357 by 800, it was more than 4 to 1. Now if I plop that very high resolution image into WordPress as the featured image, it gets cropped, so all you see is N-K-I.
[23:50] Now think about that. There isn't even an I in the name Anker. It truncated the E, so I'm seeing the middle N-K and part of the capital E.
Well, the process to make the featured image look nice is unpredictable, tedious, and error-prone.
After literally years of attempts to make it a reliable process, about a week ago I came up with a less terrible than the other methods process.
Using Affinity Photo, I created a two to one, 1040 by 520 rectangle in white, and I saved it as a preset.
So when I open Affinity Photo, if I then open the preset, I can drag my image onto the new image file and start dragging it around on that white rectangle. So there's a white rectangle underneath the image file.
If my image file is too big, like say the Anker logo, I have to resize it until it fits either the width or height and then get it centered properly.
When I think it looks good enough, I have to export the file and save it with a new name and either save this Affinity Photo file or delete it.
It's still annoying and it's still time consuming, but it's not as bad as all of the other methods I had tried.
You might ask why I chose a white background when dark mode folks are people too.
[25:02] It's because I had to pick something, okay? I'm not going to modify the background for every single image. Anyway, this image size problem is triply aggravating because it shows up just often enough and it always rears its ugly head right when I'm finally done with an article.
And that makes it even more frustrating. So imagine I spend hours and hours crafting a story and adding screenshots, entering alt tags so our screen reader friends can enjoy the images.
I make sure the grammar and spelling are correct. I'm double-checking links to sources.
I publish. I push it up to WordPress. I throw in the featured image. And then I have to stop, because the featured image looks poopy. I scream out in despair every single time.
[25:45] Well, after the most recent problem with the Anker logo, I went to Mastodon and I asked my followers, is there some way to automate a solution to this? The wonderful Greg Scown responded.
You might know that name, he's the co-founder of Smile, the people who make the most awesome text expander software. He suggested I take a look at Acorn, as it supports AppleScript.
I don't know much about AppleScript, but I know people who do, so I thought maybe it was worth a shot. Acorn is an image editor, by the way, that you probably have already heard of. I trotted off to the Flying Meat website to take a fresh look at Acorn. The last version I paid for was version 3, and it appears that developer Gus Mueller has been very busy since I last used Acorn, as he's on version 7 now. Anyway, as I started looking at Acorn, I realized Gus has another app, and it's called RetroBatch. When I was on the Automators podcast back in February, Rosemary brought up the app RetroBatch for automating image manipulation. You know me, someone says this is fun for automation, and it's an immediate download for me. I didn't have time to play with it right back then in February.
So I put "play with RetroBatch (Rosemary)" on my to-do list, so I'd remember who told me about it and remember to do it.
Just like the other 24 items languishing on my to-do list, it never got done.
[27:05] Now I did some reading and I learned that RetroBatch allows you to automate complex image manipulation according to rules you provide. That sounded like it might be the right tool to solve my problem with featured images. I tooted back to Greg that he'd help me go in the right direction and I tagged the Flying Meat Mastodon account with my response.
Imagine my delight when Gus responded with a screenshot of exactly how RetroBatch would help me solve my problem. I bought the Pro version of RetroBatch because the particular task I wanted to perform was going to require the use of rules, so some if-then-else kind of conditions. It was also going to require a wee bit of JavaScript. By wee bit, I mean a microscopic bit. RetroBatch is $20 while RetroBatch Pro is $40. You can do an upgrade from regular RetroBatch up to the Pro version if you want.
We'll get into the differences towards the end of this article, but I wanna start by walking you through the RetroBatch interface as I describe how it solved my problem.
The RetroBatch interface reminds me a lot of Audio Hijack in that you drag nodes, little rounded squares, onto a canvas and then connection lines appear between them indicating the direction and path of your workflow.
Down the left sidebar are groups of nodes to choose from. In the center is the canvas where you build the workflow.
On the right side is a contextual inspector palette and below that is an image preview window.
I'll get into the details as we go through the example.
[28:34] Before I could start automating my solution, I had to figure out exactly what I wanted the solution to provide. Now, images that need to be modified fall into four different buckets based on their sizes, and the modifications are different for each of these four scenarios.
That's why I was going to have to write these rules. Of course, being a nerd, I drew it up as a truth table. Now, you can't see the truth table because you're listening, so I'll describe it to you in words instead.
The four conditions are, number one, the image is bigger in both dimensions than my targeted size, 1040 by 520.
In that case, I need to scale down once in each direction until both dimensions are no larger than that target dimension.
The second scenario is the image is wider than 1040, but shorter than 520.
I want to scale the width down to 1040. What if the image is narrower than 1040, but it's taller than 520?
Those images, I want to scale the height down to 520.
Finally, what if the image is smaller in both dimensions than the desired 1040 by 520?
In that case, I don't want to apply any scaling at all.
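If it helps to see that truth table as code, here's a rough sketch of the decision logic in plain JavaScript. This is just my own illustration to make the four scenarios concrete; it's not anything RetroBatch actually runs, the planScaling function is made up for the example, and how ties at exactly 1040 or 520 get handled is my guess.

// A sketch of my truth table: given an image's width and height,
// which Scale nodes should it pass through before margins are added?
const TARGET_W = 1040, TARGET_H = 520;

function planScaling(w, h) {
  if (w > TARGET_W && h > TARGET_H) {
    // Scenario 1: too big in both directions, so scale in each direction
    // until neither dimension is larger than the target.
    return ["scale width down to 1040", "scale height down to 520"];
  } else if (w > TARGET_W) {
    // Scenario 2: too wide but short enough, so only the width gets scaled.
    return ["scale width down to 1040"];
  } else if (h > TARGET_H) {
    // Scenario 3: narrow enough but too tall, so only the height gets scaled.
    return ["scale height down to 520"];
  }
  // Scenario 4: smaller than the target in both directions, so no scaling at all.
  return [];
}

console.log(planScaling(3357, 800)); // the Anker logo lands in scenario 1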
All right, with my truth table set aside, that gets us the image scaled properly, but we need to do something to add width or height to the dimension where it's deficient.
In Gus's response on Mastodon, he explained that the command is to add margin.
[29:55] So before diving into these four scenarios at once to follow my truth table, I started with one test case.
The Anker logo that I talked about was both wider and taller than my target dimensions.
I knew that it was more than a two to one aspect ratio, so I knew that if I scaled the width to 1040, it wouldn't be tall enough.
So the margin would have to be added to the top and bottom.
[30:17] On the left sidebar, I flipped open the group of nodes for read images, and I dragged in the read individual files node.
With this node selected, the inspector palette over on the right changed to show an area where I could add some representative files to be tested.
It's a great way to run tests repeatedly on multiple test files.
This is where I would eventually drag in all four different options.
But for now, I just dragged the Anker logo over into the inspector palette.
In the little read individual files node, it now says one file.
That tells you that one file was matched for that node. That little indicator can be very important as you're debugging your workflows in RetroBatch.
If a node isn't doing what you expect, it might say zero files, which means it doesn't have a match, so it won't function properly for you.
After I dragged my one file in, at the top of the window, RetroBatch showed the warning, add a write node to save your files somewhere. Good to have that reminder, but I'll get to that in a minute. My next step was to scale the image down until the width was 1040. Under Transform, I found the Scale node and dragged it in right next to the Read Individual Files node. When I dropped the Scale node to the right of the Read node, a flowing line with little arrows appeared, connecting the two nodes together very nicely.
So I could tell that was going from reading in to doing scaling.
[31:41] Now with the Scale node selected, that inspector palette changed to show my options to control how to do the scale. I used the dropdown to change the scaling from percentage to fixed width and entered 1040. Okay, we're doing pretty good here.
Also in the inspector palette, I checked the box to tell it only to scale smaller.
Now the last thing I would ever want it to do is upscale my images. If it's too small in either direction, or even both, I'll add the margin to make it big enough in both directions.
[32:12] So far I've been opening up these little groups on the left sidebar like opening transform to find scale, but you don't actually have to do that. If that seems a little bit tedious, you can add a node of your choosing by right clicking on an existing node and choosing from the pop-up menu. I added the Adjust Margins node right after the Scale node, doing just that.
Now this is where the wee bit of JavaScript comes in, and I don't think I would have ever figured this out if Gus hadn't spoon-fed me the solution through Mastodon.
With Adjust Margins selected, by default the Inspector palette lets you define the number of pixels you want to add to the left, bottom, right, and top of the image. Well, that makes perfect sense. But when this automation runs, I won't know how many pixels to add, and I don't even know which edge is going to require some margin.
I need RetroBatch to figure that out on its own.
[33:04] So Adjust Margin also allows you to add a percentage of width, height, short side, or long side, or use JavaScript expression. Turns out we're going to use JavaScript expression.
But don't be intimidated. It's super easy. The JavaScript-ness of it is actually hidden from us.
It's really more like simple algebra. In the left and right margin boxes, we simply type 1040 minus w, all divided by 2. That means we want the width to be 1040 pixels when we're done, so we subtract the width of the image from 1040 and divide it by 2. Simple, right? Then we add that value as a margin on the left and right.
Likewise, if we want to add half the difference between the height and 520 to the top and bottom, we can just enter the top and bottom margins of 520 minus h, all that divided by 2.
There, you've written a JavaScript expression. I promised it was easy, didn't I?
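To show that the math does what we want, here's a quick worked example using the Anker logo dimensions from earlier. This is just my own back-of-the-envelope JavaScript, not RetroBatch's code, and the variable names are mine.

const targetW = 1040, targetH = 520;

// The Anker logo starts at 3357 by 800. The Scale node fixes the width at 1040,
// so the height shrinks proportionally.
const scaledW = targetW;                             // 1040
const scaledH = Math.round(800 * (targetW / 3357));  // roughly 248

// The Adjust Margins expressions then pad whichever dimension came up short.
const leftRight = (targetW - scaledW) / 2;           // (1040 - 1040) / 2 = 0
const topBottom = (targetH - scaledH) / 2;           // (520 - 248) / 2 = 136

console.log(leftRight, topBottom); // 0 pixels left and right, 136 pixels of white top and bottom

Add 136 pixels of white to the top and bottom of a 1040 by 248 image and you're back to exactly 1040 by 520.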
[33:58] Finally, I selected the color white again for the margin to be added.
The final step is to add a Write Images node to the end of our workflow just like the warning told us to.
Because we read in the image, we did a bunch of manipulation, we need to write it back out.
As you might have figured out, we have some fun options in the Inspector palette for write images.
We can have the automation ask for an output folder when run, we can have the output folder open when the export finishes, and we can overwrite existing images if we want.
We can add a suffix or prefix to the file name as well.
[34:30] Now, I was going to add a suffix with something brilliant and witty like modified. While you certainly could do that, there's a drop down with a massive number of other options. Remember, RetroBatch is really for image manipulation, not modifying silly logos.
Because of that, RetroBatch offers you options for the suffix or prefix to include things like capture date, author, copyright, keywords, bits per channel, pixel depth, and more. In that long list of options to add to the filename, I also found image height and image width. This is perfect for my needs. Since it knows h and w (we've been using them in our equations), I can have my exported images suffixed with the 1040 by 520, so I can clearly tell them apart from my originals.
Now, there's really one more important part of the RetroBatch interface I want you to know about.
If you select an image node, such as that initial read node or the ending write node, in the bottom right, you'll see the image preview window for that stage of your workflow.
So let's say you just have the read node selected.
I can see the Anker logo with its very wide aspect ratio, and it's got no white space above or below it.
There's a zoom slider to help you see the image at a reasonable magnification for its size.
[35:49] In the very bottom right is a little info button that'll pop up what looks like some of the EXIF data for the image, you know, all that nerdy goodness photographers care about.
It also shows the dimensions of the input file in there, and I was able to see that the Anker logo is 3357 by 800 pixels.
So that's my unmodified original.
By the way, I had to zoom down to 22% to see it in that viewing box.
Now, if I click on the Write node instead, in that image preview I can see that now the image is around a 2 to 1 aspect ratio, and it has a nice white margin on the top and bottom.
[36:23] So, if I click on the info button, I can see the resulting image will now be 1040 by 520 and will have that information tacked onto the file name.
This is a great way to see your workflow is doing exactly what you want before you even bother having it execute.
I mentioned up front that you can drag a pile of images in to test the workflow you've created. If you do that, you'll see them all lined up in that image preview window, and you can tap through them to verify that each will be modified to your desires. Now it is time for the moment of truth. At the top of the window is a play button that will run your automation, or you can use command-enter to make it go. I was delighted to see my 1040 by 520 image squirt out to the folder I had requested with the suffix that I wanted. Now my workflow at this point takes an image, scales it to the correct width, and adds the margin to the top and bottom. But this workflow doesn't know anything about my truth table yet, so it doesn't know how to do different things depending on the dimensions of the image. Adding rules was crucial to my workflow because I needed RetroBatch to figure out whether the images are taller or wider than the desired aspect ratio, and only then scale the images and finally add those margins. Now, rules in RetroBatch look a lot like the rules we're familiar with from making smart folders in the Finder or smart albums in Apple Photos, so you'll be very familiar with how they look. Now I do want to mention that rules, again, are only available in the Pro version of RetroBatch.
[37:48] I do have trouble pronouncing that, don't I? Anyway, I created four rules to parse the images into the four scenarios of my truth table using image pixel width and height and whether they were greater than my desired dimensions.
When I was done with the final design of my featured image workflow, I tested it with a bunch of representative files.
[38:06] Now, by a stroke of luck, I just happened to include one image that had that pesky alpha channel.
So it looked really silly after going through the RetroBatch workflow.
It was the right dimensions, it had the white margin just as appropriate, but in the middle of the image, you could see right through it because it was transparent.
I created a new RetroBatch workflow, completely separate from the first one, to see whether it might be a better way to solve my pesky alpha channel problem than all the other methods I'd created.
In the discussion forums for RetroBatch, I found out that to remove the alpha channel, you can simply add a matte node.
My test workflow was very simple, with only four nodes: read in the image, set a rule to check whether it has any transparent pixels, slap on a matte if it does, and then write out the image. Easy peasy.
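If you're curious what a matte actually does to each pixel, here's my rough mental model sketched in JavaScript. This is only a conceptual illustration of flattening onto an opaque white background, not how RetroBatch implements its matte node.

// Flatten one RGBA pixel onto opaque white, which removes its transparency.
// Channel values run from 0 to 255, including the alpha channel.
function matteOntoWhite([r, g, b, a]) {
  const alpha = a / 255;
  const blend = (c) => Math.round(alpha * c + (1 - alpha) * 255);
  return [blend(r), blend(g), blend(b), 255]; // the result is fully opaque
}

console.log(matteOntoWhite([0, 0, 0, 128])); // half-transparent black becomes a mid-gray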
I saved out my workflow and called it no-alpha.retrobatch.
Then I discovered RetroBatch allows you to export it as a droplet.
I exported it as a droplet and suddenly I had an app.
I dragged the transparent PNG onto my droplet and boom, the transparency was gone.
Now I knew this droplet was a keeper, so I held down the command key and I dragged the app into my Finder window's toolbar.
Now, if I ever have a transparent image I want to use on the web, I can simply drag it onto the app in the toolbar and boom, I am done.
So freaking easy. I am just so excited. This is so much easier than all of the other 28 ways I tried to do this.
[39:35] All right, now that I knew how to fix the alpha channel problem, I just added that same set of steps to my featured image workflow.
So now if it finds a transparent image, it just slaps on the matte right before it squirts it out.
Now the workflow is pretty cool looking and it's very readable.
It starts with the read node, then it branches into the four rules of my truth table.
The first rule goes through the two scaling nodes, since those images are too big in both directions.
The two rules that grab images that are too big in one dimension or the other only go through one scale node.
Finally, the last rule has no scaling at all because those images are too small in both directions.
[40:13] After going through that process, all four rules converge to just one Adjust Margins node, one Matte node, and then to the final Write Images node.
The only thing I wish it had was some indication of what the nodes are doing without having to select each node and look at the Inspector palette.
For example, every Rule node just says Rules. It would be nifty if I could enter a name for each node, like too big in w and h.
I asked Gus whether there was a way to name the rule nodes, or add some kind of text to them so you could see which rule did what at a glance without opening the nodes.
I was delighted with his response. He wrote back, you know, that's a good idea I hadn't considered, and nobody has asked for it yet.
I'll see what I can do about it in a future release.
How cool is that? Anyway, after I'd run all my tests, I exported my featured image workflow as a droplet, and it also earned a treasured spot in my Finder toolbar.
I am so excited about this workflow, and I couldn't wait to use it.
I went back to the post about Hush by Terry Austin because the Hush logo looked really poopy on my featured images.
The logo is an 850 pixel rounded square, which is super high res, but my theme in WordPress cropped the top and bottom of it.
I dragged the Hush logo onto my featured image droplet, dragged the resulting image into WordPress, and now it looks fantastic.
[41:38] All right, now that I've told you about how I used RetroBatch to solve my problem, let's chat just a little bit about what else it can do to kind of trigger whether you'd be interested in it. I mentioned that there's a pro and regular version of RetroBatch.
If you're using the regular version of RetroBatch, you can see the nodes you would have access to if you had the pro version. They're clearly marked so Gus isn't trying to trick you, but he lets you play with them to see how they would work if you upgraded to pro.
[42:04] Now, RetroBatch is really well documented, and in the documentation I found a listing of which features are available in both Pro and Regular and what's available just in Pro.
There are 56 nodes available for both versions, and 23 more are available in the Pro version.
I highly recommend going to this link in the docs to see everything RetroBatch can do.
It might inspire you to see what more you can do to automate your photo workflow.
Now, as I said, the documentation for RetroBatch is excellent.
If you write documentation yourself, you might actually be interested in checking out the tools Gus uses to create his.
The docs for RetroBatch are written in mkdocs, which is an open-source static site generator, and the theme is open-source, and the site on which he hosts the theme is also free.
You can find all of the links about these documentation tools in his docs at flyingmeat.com.
Alright, back to using the tool.
As you lay out nodes, I said that these connection lines automatically appear. But sometimes it's a little hard to move a node so that the connection lines between nodes are exactly what you want them to be. If you require more granular control, RetroBatch allows you to draw the connection lines by hand. In Preferences, there's a checkbox on the General tab to allow manual connections with a control-drag. I found that toggling this on and off while I was working allowed me to get the connection lines exactly where I needed them.
[43:28] Now when I was working on RetroBatch on my laptop, I had to shrink the window width to fit my smaller screen.
My workflow was partially hidden under the right sidebar section, and I couldn't scroll to see it.
I was going to mention here that it was a problem you might run into, but first I wrote to Gus about it.
He responded immediately, and he said I'd uncovered a bug. But before I could even respond to his email, he sent me a second email telling me he fixed it and sent me an updated version of the app. This guy is amazing.
I mentioned twice already how well-documented RetroBatch is.
I was searching for the right terminology to describe the Inspector Palette, and I discovered that just about anything you want to do has multiple ways to do it.
For example, I said earlier that to add a read node, I used the left sidebar and I dragged it in, and then I dragged images into the Inspector Palette.
Turns out, you can just drag an image, or a selection of images, or even a folder containing images right onto the canvas and RetroBatch will automatically create that read node and populate the inspector palette.
It's a way easier way to do it.
If that's not the way you want to do it, you could also use the Add Node command in the Edit menu.
If you like the sidebar, you can even double click a node to add it to the canvas.
By the way, if you want to duplicate a node, you can hold down the option key while dragging on a node.
I used that last trick once to actually copy a node from one workflow to another and it worked.
If you don't like one way of doing things in RetroBatch, there's probably another way.
[44:57] Now, I want to tell you, I did run my usual elementary-level test of VoiceOver with RetroBatch, and I was able to pull in an image with a read node, add an adjustment, and write the file out to a known location. I was able to interact with the inspector palette as well. I'm sure there are ways that the interface could be improved for VoiceOver, but at a fundamental level, I didn't run into any showstoppers. The fact that Gus gives you multiple ways to add nodes means that when one method didn't work, the menu method did work with VoiceOver. If you're a VoiceOver user, I'd definitely try it yourself before taking my word for it that it'll work for you. Now the bottom line is, I know my particular problem to be solved probably isn't a problem any of you have, but if you do any kind of image manipulation that you do repeatedly or need to do on a series of images, RetroBatch might help you automate your workflow and apply your changes consistently. Whether it's blur effects, color adjustments, sharpening, transforming, adding color effects, manipulating metadata, or adding a watermark, RetroBatch can help you with your work. I am astonished at how responsive and skilled Gus Mueller is, and I'm a huge fan of well-documented tools. Check out the free 14-day trial of RetroBatch at flyingmeat.com.
[46:10] Well, that's going to wind us up for this week. Did you know you can email me at allison at podfeet.com anytime you like, and I will probably answer. If you have a question or a suggestion, just send it on over. You can follow me on Mastodon at podfeet at chaos.social. Remember, everything good starts with podfeet.com.
If you want to join in the fun of the conversation, you can join our Slack community over at podfeet.com slash slack, where you can talk to me and all of the other lovely NosillaCastaways, even Bart from time to time.
You can support the show at podfeet.com slash patreon or with a one-time donation like John and Bill did, over at podfeet.com slash paypal.
And if you want to join in the fun of the live show, head on over to podfeet.com slash live on Sunday nights at 5 p.m. Pacific Time and join the friendly and enthusiastic NosillaCastaways.
[46:53] Music.