NC_2025_03_23

This episode reviews the Audio-Technica 2100X mic, continues the Programming by Stealth series on Jekyll, and features interviews on underwater tech and EV charging solutions, along with a reflection on automation tools and AI transcription.

2025, Allison Sheridan
NosillaCast Apple Podcast

Automatic Shownotes

Chapters

0:00 
NC_2025_03_23
0:46 
PBS 178 of X: Getting Started with Jekyll Pages (GitHub Pages)
2:20 
CES 2025: Komatsu Underwater Remote Controlled Dredge
4:35 
COVESA mixup
7:02 
CES 2025: PlugP2P Peer-to-Peer EV Charging
13:07 
Automation Gives Me All the Feels
29:00 
CES 2025: MOTER Next Gen Car Data for Auto Insurers
34:09 
Support the Show
34:30 
CCATP #811 — Adam Engst on Measuring AI Transcription Accuracy

Long Summary

In this episode, I dive into a diverse set of topics while recording remotely at Lindsay and Nolan's house, watching their kids during their 10-year anniversary trip. I share my experience using the Audio-Technica 2100X mic, which simplifies my setup for on-the-road podcasting. In our ongoing Programming by Stealth series, Bart and I continue our exploration of Jekyll, focusing on its build process and naming conventions to better manage files, particularly the confusing use of the term "assets." We work through practical examples that expose some of the complexity within Jekyll, preparing for the more advanced topics we plan to tackle in future episodes.

Transitioning to CES, I recount my experience interviewing exhibitors on the giant show floor. I faced the challenge of interviewing someone who had never been interviewed before about an impressive piece of underwater remote-controlled construction equipment designed to tackle climate-related dredging issues efficiently. Madeline Pierce shares key insights on how the technology helps restore ecosystems and prepare for natural disasters, driving home the significance of intelligent systems that operate underwater.

I narrate a humorous incident where I accidentally ended up at the wrong event, leading to insightful interviews about innovative electric vehicle charging solutions and discussions of how the automotive industry is changing in the face of new technologies. This leads me to an enlightening conversation with Ann Campbell of PlugP2P, a platform connecting electric vehicle charger owners with those in need, creating a marketplace that benefits everyone by allowing hosts to set their own rates for sharing their charging outlets.

As I reflect on the automation journey Bart and I embarked on years ago through Programming by Stealth, I share personal anecdotes that highlight the power of automation in daily tasks, particularly when it comes to file management and audio editing. These small yet impactful automations bring joy to my workflow, showcasing the layers of knowledge that come with our programming education. I recount struggles and triumphs with automating mundane tasks using TextExpander and Hazel, emphasizing how seemingly small solutions can result in significant efficiency gains over time.

Additionally, I venture into the fascinating world of AI transcription technologies alongside TidBITS' Adam Engst, exploring the accuracy and nuances of various platforms, including Apple's built-in features. We discuss the challenges and philosophical implications surrounding AI-generated content and how language models like ChatGPT and Perplexity transform our interaction with information retrieval, from static searches to dynamic conversations.

Throughout this episode, I contemplate the changing landscape of information gathering and retrieval while engaging with various innovative technologies that are shaping our everyday interactions. I encourage listeners to reflect on their own journeys with technology and automation, inviting them to share their thoughts and experiences as we continue navigating this evolving space together.

Brief Summary

In this episode, I record remotely at Lindsay and Nolan's, sharing my experience with the Audio-Technica 2100X mic. Bart and I continue our Programming by Stealth series, tackling Jekyll's build process.
At CES, I interview Madeline Pierce about underwater remote-controlled technology for climate-related dredging, and I discuss electric vehicle charging solutions with Ann Campbell from PlugP2P.
I reflect on our automation journey, highlighting the efficiency brought by small tools like TextExpander and Hazel. Additionally, I explore AI transcription with TidBITS' Adam Engst, considering the evolving impact of AI on information retrieval. I encourage listeners to share their own technological experiences.

Tags

Lindsay
Nolan
Audio-Technica 2100X
Programming by Stealth
Jekyll
CES
Madeline Pierce
remote-controlled technology
electric vehicle charging
automation journey
TextExpander
Hazel
AI transcription
Adam Engst
information retrieval

Transcript

[0:00]
NC_2025_03_23
[0:00]Music.
[0:11]2025, and this is show number 1037. We're down at Lindsay and Nolan's house right now, watching their kids while they're off on a 10-year anniversary little vacation in Costa Rica, so you might notice a little difference in my audio as a result. I'm actually using an Audio-Technica 2100X, I think it's called these days. It's a mic I've had before. I had a real old one, and I decided it just simplifies everything for the road. So I'm just using it through USB-C, and I think it sounds okay. It's not quite the Heil PR 40, but it'll do in a pinch.
[0:46]
PBS 178 of X: Getting Started with Jekyll Pages (GitHub Pages)
[0:46]In our previous installment of Programming by Stealth, Bart taught us how to install Ruby, install Bundler, install Gems, and then build a very simple website using Jekyll as our static site generator, and had that website actually show up in GitHub. In this most recent installment of our Jekyll mini-series, Bart explains Jekyll's build process, which is mostly automated by how you name things and the content of the files you create, like adding YAML frontmatter. Then we spend some quality time bemoaning how the Jekyll developers reuse the word "assets" to mean two different things. Bart avoids some of the associated confusion by creating some naming conventions of our own. We get to do a worked example where we learn a little bit about pages in Jekyll and do a few things the hard way that we'll redo the easy way in upcoming installments. If you're following along in real time, note we will not be recording another episode of Programming by Stealth for six weeks because of birthday celebrations on both sides of the pond and my trip to Japan. But of course you can find Programming by Stealth number 178 at a link in the show notes or in your podcatcher of choice or of course at pbs.bartificer.net.
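To give a concrete taste of what "the build is driven by how you name things and the content of the files" means, here's a minimal sketch of a Jekyll page. The file name about.md and the layout name are placeholders for this example, not details from the episode; the YAML frontmatter between the triple dashes is what tells Jekyll to process the file at all:

    ---
    layout: default
    title: About
    ---

    # About

    Jekyll renders this Markdown file as about.html in the built site.
    A file with no frontmatter block is copied through to the site untouched.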
[1:57]Our first CES interview this week is going to be from the giant show floor. As is often the case, the person staffing the booth was reticent to do an interview as she'd never done one before. It's not like being at the press events. Those people are ready to be recorded and interviewed in lots of ways. You might want to go watch the video of this next one as it's a giant piece of equipment that's pretty dramatic to see.
[2:20]
CES 2025: Komatsu Underwater Remote Controlled Dredge
[2:24]We are possibly interviewing someone about the, well, we're definitely interviewing them, but we're interviewing them about possibly the largest piece of equipment I've seen here. We are looking at a giant piece of underwater remote-controlled construction equipment, and I'm here with Madeline Pierce, who's going to tell us a little bit about it. So what was the problem that this was designed to solve? So this helps with underwater dredging. And what do we dredge for? You're just removing material from the river. Yeah, but I think there's something, it's a climate problem or helping with it. So you can help restore ecosystems or help prepare for natural disasters. Oh, okay. All right. So what we're looking at, well, actually, it looks like one of my grandson's little pieces of construction equipment, but this thing is massive. What is it, like 35, 40, 30 feet long? And I see a pole sticking up out of it. What's the pole for? So the pole is for GPS. It helps communicate with the tele-remote controller. Okay, so we've got somebody with a little remote control, a little joystick that's driving all of this, right? And so that must limit the depth you can get to with this. Yeah, so this can be operated up to seven meters underwater. That's still pretty big. And if you're doing a river, that's probably, well, I guess there's big rivers we should talk about, but yeah.
Now, this isn't the first one of these Komatsu's made. No, so there was a diesel mechanical version that has been made since the 70s, and it had a snorkel to allow air to get to the engine. So that limits it even more on depth, right? Yeah, and eventually we want this to get to 50 meters. Oh wow, now how would you do that over what you're doing today? They want to remove the GPS mast so it'll be tethered with a buoy at the top above the water to help communicate with the controller. Wow. One of the first questions Steve asked when we got in here was how did they get this into CES, because I'm guessing this weighs a couple of pounds too. Do you have any idea how much it weighs? It weighs 30,000 kilograms. Holy cow. This thing is really impressive. Well, I love the idea of this. This is all electric then, right? Yes, completely electric. Forgot to mention that. So it's every little girl's dream to be able to drive a piece of construction equipment underwater. I think this is really cool. Thank you for joining us. Thank you.
[4:35]
COVESA mixup
[4:39]Before I play the next set of CES interviews, I've got to tell you a funny story. I've mentioned that we get most of the interviews during three press events, CES Unveiled, Pepcom, and Showstoppers. These events are held in hotel ballrooms, and we usually hang out with Dave Hamilton, Chuck Joiner, Pilot Pete, and Norbert Frasa before and after the events. I think it was on the third day when we went to Showstoppers that for some reason we weren't with our merry gang. So Steve and I were on our own to navigate to the hotel and find the room. Some of the events require you to pick up a badge before entering, so we weren't surprised when there was a table to do just that.
Oddly, the woman helping us said she didn't see our names. She wasn't perturbed and just printed us badges anyway. We went into the ballroom and started our usual wandering of the aisles of vendors searching for interesting products to learn about. The first one of those interviews you're going to hear is with PlugP2P. Now, Steve found this booth and was very excited to have me chat with the founders about an innovative solution for EV charging. After we interviewed them, we found a few more interesting products and we did more interviews. But we started noticing a couple of strange things. First of all, they were all about the automotive industry. Secondly, they were all talking about Michigan. We even found a booth up front with a giant map of Michigan on the wall. We went to talk to the woman staffing the booth, and being from Michigan myself, we had a nice chat. I learned from her that her organization is trying to get more businesses to invest in the state of Michigan, specifically in transportation. So, okay, that explains the heavier-than-expected automotive angle this year. Finally, after the fourth person I talked to, I noticed that every person we'd been talking to had used a word I didn't recognize. The word was COVESA. I asked the last person, what the heck is this word? I keep hearing people say COVESA. What does that mean? He gave me kind of a blank stare of confusion and said, COVESA is the sponsor of this event?
We weren't at Showstoppers at all. We'd gone into the wrong room. This also explains why we hadn't seen any of our usual gang, or Tom Merritt, Rob Dunwood, and the rest of the DTNS crew, who should have been in the room. We usually run into them as we're wandering around. This also explains why the woman out front didn't have badges for us. We actually got two really good interviews that you're going to hear tonight, and then we went on to Showstoppers for the rest of the evening. With that, let's learn all about PlugP2P.
[7:02]
CES 2025: PlugP2P Peer-to-Peer EV Charging
[7:01]Music.
[7:07]We're big fans of electric vehicles, and we really believe in being able to charge anywhere. And one of the problems to be solved is, what if you're, I don't know, going to a ball game or something like that, and you need to charge, but you also need to park? It's possible that PlugP2P might have the answer to this. So I'm talking to Ann Campbell, the founder of PlugP2P, and she's going to tell us all about their idea of how this can be solved. Yes. Okay, great. Thank you for stopping by. PlugP2P is a mobile app that helps connect people who own an EV charging device. And it could be an electrical outlet, because most people have a mobile charger, or it can be an actual, you know, Tesla charger or a J1772 charger. So we connect people who own those charging devices. They use them a couple times a week. They just sit there unused most of the time. There are over four million of them in the US today, and we want to sort of create a marketplace so that people who have a charger and people who need a charger can find each other. And, you know, we think this is going to be really helpful to people who drive EVs, because they charge at home most of the time. But sometimes, you know, they could be visiting someone. They could be going to a game somewhere. They might be spending the afternoon at the park at a soccer game. And it would be nice if they could just kind of park across the street or down the street or around the block and let their vehicle charge while they're sitting there watching that football game for four hours.
[8:29]And everybody wins, you know. So wait a minute, we're all larcenists, so we want the money. So I've got a Tesla charger in my garage, and I'm a nice person, but I pay $0.52 a kilowatt hour during the peak time and $0.26 during the off time. So I'm going to want some compensation. What's your idea there? So you set your calendar, you set your price, you get to decide. The host is in full control of the situation. When someone wants to come to your house to charge, you can decide to accept or decline that charge event. And they know what your calendar is. They know how much you charge. And you're going to charge. Market discovery is something that we enable. So if you can charge $20 an hour during that football game that's right around the corner from your house, you're going to do that, and you should do that. And guess what, we're able to do that. Because they're also parking their car, aren't they? Absolutely. So it's a win-win for everybody. And, you know, our app will allow you to kind of discover what people are willing to pay. And you should be compensated because you're sharing your property.
You're giving them an electric charge. So there's really a win-win benefit all around by using our app. So plugp2p.com. Check it out. That's really perfect. You're ready to wind this up and I've still got more excitement about this. So I like the idea that I get to set the charging. So if it's a children's soccer game across the street, I might get a buck an hour, but if I'm at SoFi Stadium and I'm down the block there, I'm going to make bank for that, because I'm now allowing them to park and charge their electric vehicle at the same time. That's right. And if you're at a highway exit ramp on Memorial Day weekend, you might get 20 bucks an hour if somebody really, really needs to charge. And it could be an electrical outlet. You don't even have to own an EV charger. So you as a host, there are people who drive EVs that have mobile chargers that plug into a regular electrical outlet. You just need to have it checked by an electrician, make sure it's safe, and we do ask you those questions. And so we just want it to be a really positive, outstanding experience for everybody. That's interesting. So you're only going to get maybe five or six miles per hour of charge on a 110 outlet.
[10:44]If they also get parking during an event like that, that's worth money. That's interesting. So I might not even be somebody who owns an electric vehicle, doesn't even have a charger, but I've got an outlet, or maybe I've got a dryer outlet. Maybe they've got the charger, so some things like that. So you've built the infrastructure for all of this math to be done, so I don't have to worry my pretty little head about how to do it, but all I do is set the charge. That's right. That's right. You just tell us what your actions are. I said charge. Charge, I meant money. That's right. You decide your schedule. You decide how much you want to charge. And you put it out there. It's like you're selling something on eBay or Airbnb or Uber. This is what I have to offer. This is what I want to charge. And then you find matches. Guests will be able to find you if you have what they need. And the beauty of a marketplace is it allows buyers and sellers to find each other and negotiate a price. Oh, right. And so if you set your price at $50 an hour and you get no customers, you know that if I want to be in this business, I've got to drop my price. Some weekends you might be able to get $50 an hour. Some weekends it might be $2 or $5 or $3, whatever it is. You'll find out. I love this. So the site is PlugP2P.com, peer-to-peer.
[11:55]But I'm sorry, is this already available today? So we're launching in the next couple of months. We've been working on the software for about a year now. We're putting the final bells and whistles on it. We want it to be a really amazing user experience. And we think everybody's going to love it when we get it out there. So it'll be downloadable in the Apple Store and the Android Play Store. It's for iOS and Android. So look for it in a couple of months, but you can go to PlugP2P.com to give us your email. We'll let you know when it's going to be launched so we can keep you in the loop. You can sign up and you can either be a charge guest or a charge host. You can be both. And, you know, we're just going to create this awesome ecosystem so that people can find each other and start EV charging and solve this big problem that the whole society is dealing with. This is fantastic, Ann. I expect a personal phone call when this all goes live, but this is great. One more time, I'm going to plug PlugP2P.com. You'll learn all about it. You can email us. We'll answer any questions. Give us your email, and we'll keep you posted when we go live. Perfect. Thank you. Right. Thank you.
[13:07]
Automation Gives Me All the Feels
[13:10]When Bart and I hatched our plan for him to teach me and the audience how to program, he called it Programming by Stealth for a very specific reason. His plan was to sneak up on us by teaching us a single language first, and then when we weren't noticing, keep adding to our toolkit, until we turned around one day and realized we were developers. He started with simple HTML to create webpages, then some CSS to style our webpages. But what fun is a webpage without some JavaScript to make it do things?
[13:36]Over the years, Bart has continued to build on the lessons learned from before while introducing new concepts and new languages to us. He isn't teaching us languages. He's teaching us how to think like developers. He's teaching us that powerful tools are within our reach. He's teaching us how to learn. Long before Programming by Stealth was a twinkle in our eyes, I remember people raving about how fun it was to automate things. I wanted to be one of the cool kids who did that, but for the life of me, I couldn't think of anything I needed automated. Now I feel like everywhere I look, there's some tedious task to be eliminated and improved. It's like I got a pair of glasses that let me see what was there all along.
[14:14]Now, while I've accomplished some complicated things that make me quite proud, sometimes it's the little things that make me feel all warm and fuzzy. Let me give you a tiny but lovely example. Creating video tutorials for ScreenCastsOnline is a lot of work, and the software we use to record, edit, and produce the videos, ScreenFlow, is a bit twitchy and unreliable at times. To combat the fear of losing all my work, I store my ScreenFlow file locally, and every time I finish a chunk of work on it, maybe 5, 10, or 15 minutes' worth, I make a copy of that file to a folder in Dropbox. Since I'm creating multiple files of the same name, I need to differentiate them. For a long time, I manually appended the date and time to the end of the file's name. For example, let's say I'm working on a tutorial for Audacity, the one I just finished, and I make a copy of the file at 4:55 p.m. on March 15th. I would change the name to Audacity 2025-03-15 1655.screenflow. A few years ago, as my automation chops were heating up, I decided to automate it a bit by using a TextExpander snippet. In TextExpander, there's a little calendar icon and a clock icon. Each of those will place little tokens that will automatically insert the date and time.
[15:30]Under the calendar, you can drop in the year in YYYY format, another token for the month in MM, and the day in DD. I chose to put dashes between them to make it look nice. The clock does the same kind of thing with hour, minute, and second. My saves are far enough apart that the seconds aren't important, so my appended text is just HHMM.
[15:50]With this fancy text expander snippet, I can make a copy of the Audacity file in Dropbox, hit Enter to select the text of the file name, Command-Right Arrow to get to the end of the name, hit the space bar, and then type my text expander abbreviation to append the date and time. I was quite pleased with my new automation.
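If you're more comfortable thinking in code than in snippet tokens, the same naming scheme maps directly onto a date-format pattern. Here's a minimal Python sketch of the stamp the snippet produces (the file name is just the Audacity example from above):

    from datetime import datetime

    # YYYY-MM-DD from the calendar tokens, HHMM from the clock tokens
    stamp = datetime.now().strftime("%Y-%m-%d %H%M")
    print(f"Audacity {stamp}.screenflow")  # e.g. Audacity 2025-03-15 1655.screenflow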
[16:08]I went happily along with that solution for years, but just last week I started wondering, is there a way to do it even more automatically? Hazel from Noodlesoft is a fantastic app whose job is to watch folders and take action on files when they arrive in those folders. I've been automating things with Hazel for ages, but for some reason, it just hadn't occurred to me to use it for this automation. Something about the way Bart has been layering tool after tool on the Programming by Stealth audience just kind of starts your brain working differently. Hazel has a couple of ways I could automate appending the date and time to my copied ScreenFlow files. For both methods, you start by telling Hazel which folder you want to watch. Then you add rules to the folder and you name them. I called my rule addDate. I know, I'm inventive that way. All right, next you tell Hazel the conditions under which it should apply the rule. For this tutorial, my main file will simply be called audacity.screenflow. So my condition is simply that it only applies the rule to files of that exact name. I can't have it look generically for any files ending in, say, .screenflow, because the copies that are renamed with the date will also end in .screenflow. Hazel would just keep appending the date and time indefinitely to all the files. I'll have to change this condition for the next tutorial I create, and perhaps there might be a way to automate even that step, but changing it once for every two-week project is not onerous. Now that we've taught Hazel how to recognize the file to rename, we need to tell Hazel how to append the date and time, under the heading "do the following to the matched file or folder."
[17:38]Hazel has built-in tokens for just this purpose that are very similar to what you see in TextExpander. There's a dropdown I changed to "rename," and it offers "with pattern." To the right, it automatically drops in two tokens, name and extension. With that default, it would rename the file with the same name and the same extension. Clicking into this area pops up a window where you can add all kinds of other tokens. For my needs, I wanted the "date added" token, and from there I can use the "edit pattern" option on the token to define exactly how I want the date and time to look. It's really easy. Now, I'm going to make a little confession. I didn't use this easy drag-and-drop interface of Hazel with its little tokens. I was busy working on my video tutorial, so I outsourced the solution to my assistant, Claude.ai. Claude suggested I use Hazel's ability to run a script instead, and offered me an AppleScript to perform the same function. I've had enough experience with AppleScript to understand what Claude wrote for me, so I felt confident it wasn't going to do any damage if it made a mistake. Claude adds comments for each function, making it pretty easy to understand what it's doing. Now, I'm not going to go through the script it wrote, but I put the full text of the script into the show notes if you're interested. I know this is an itty-bitty automation created with a simple drag-and-drop interface, or a six-line AppleScript if you're fancy. But every single time it renamed my file when I copied it into Dropbox, I smiled. You simply can't buy that kind of joy.
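Claude's actual AppleScript is in the show notes; just to illustrate the shape of the automation, here's a hypothetical Python equivalent of what the Hazel rule does. The Dropbox folder path is made up for the example:

    from datetime import datetime
    from pathlib import Path

    # Hypothetical watched folder; Hazel supplies the real one in its rule.
    WATCHED = Path.home() / "Dropbox" / "ScreenFlow Backups"

    def append_datestamp(folder: Path, name: str = "audacity.screenflow") -> None:
        src = folder / name
        if not src.exists():
            return  # nothing to do until the next copy lands
        stamp = datetime.now().strftime("%Y-%m-%d %H%M")
        # The renamed copy no longer matches the exact-name condition,
        # so the rule can't fire on it again and again.
        src.rename(folder / f"{src.stem} {stamp}{src.suffix}")

    append_datestamp(WATCHED)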
[19:06]Now, another fun automation I did recently involved using my Stream Deck for audio editing. I don't do a lot of fancy stuff with my Stream Deck, but I've programmed a few buttons that I use all the time. My favorite one is to switch from headphones to speakers for audio playback. A lot of podcasters heavily edit their audio, but I don't really do that, partly because I find it dreadfully boring and time-consuming. If I've just done an hour-long interview with someone, the last thing I want to do is go back and listen to it again, but more slowly. The other reason I don't do it is because most of my guests are clear and concise speakers who don't fumble for words. Bart is a great example. Off the cuff, he rarely makes any kind of a mistake or an um or an uh. If anyone makes ums and uhs, it's usually me. When I'm recording solo segments, I start and stop when I flub up, and I just delete the flub and jump back in. I find that much easier than trying to splice out mistakes.
[19:58]But a while back, I had Pat Dengler on the show, and afterwards, she was kind of sad. She was disappointed in how many ums and uhs she did. I decided to make her happy and remove as many as I could from the recording. In ScreenFlow, I have segments of me talking where it's not practical to stop and edit along the way as I do for the podcast, so there I do have to edit after the fact. But Don McAllister wrote a series of Keyboard Maestro macros that give us keystrokes for editing. They're simply marvelous. My favorite one lets you select a region of the recording, and in one keystroke, it cuts, rewinds a smidge, and then plays. It's so fast to let you know whether you've done a good job on the cut, or maybe cut a word off too soon or too late. When I started editing my recording with Pat using my audio software, Hindenburg, I really, truly missed Don McAllister's Keyboard Maestro macros. I'd drag across an um in the waveform with my cursor, move my hands to the keyboard, hit Command-X to cut the region, move my hand back to the trackpad, click to place the cursor where I wanted to start playing, move my hands back to the keyboard, and hit the spacebar to play. The tedium was overwhelming.
[21:05]Now, Bart has often talked about how the best programming is when you're scratching your own itch. This one itched real bad. But he has also instilled in us the faith in ourselves that we can probably automate our way out of the most annoying things that we're facing. You've heard me refer to Hindenburg about 358 times over the past years, but you'll notice I've never explained much about how it works, or recorded a tutorial for ScreenCastsOnline about it. That's because much of it is still mysterious to me, and I've never taken the time to buckle down and learn the whole darn thing. The good news is, I pointed Jill from the Northwoods to Hindenburg early on in her podcasting journey, and she watched all of their videos and just consumed every bit she could to try to learn everything about Hindenburg. So now, when I get stuck, I just ask her questions about Hindenburg. It's way easier than learning it yourself.
[21:53]Now, my goal at this point was to replicate what I have over in ScreenFlow with Don's Keyboard Maestro macro. I wanted a single action to cut, rewind, and play. Cut is easy with Command-X, but that rewind-and-play part was going to take some sleuthing. While Hindenburg is a highly complex and capable tool, I found out that much of its capability is hidden, so you have to know what's there and know what it's called in order to even search for it in the very annoying documentation. Preferences for Hindenburg are very meager, with only three tabs, but on the middle tab called Interface, I found a setting called Pre-Roll Time in Milliseconds, and it was set to 3000. And it kind of smelled like what I was looking for, like back up three seconds. A full three seconds of rewind is way more than I would want, so I set it to 1000 for one second. Okay, that's great. But how do I make pre-roll go? This is a setting for how long, but how do I make it pre-roll? I flipped through every single menu in the menu bar, and there was absolutely no mention of this mythical pre-roll. One of the reasons Hindenburg is so difficult to learn is that there is no user manual to speak of. Instead, they have little video guides with no words, just music playing while they show you something. I like a video tutorial as much as the next woman, but they're really hard to search. I eventually found the right video, and after all that searching for how to activate pre-roll, they typed in text on top of the video: the letter P.
Well, geez, that could have been a menu bar item, couldn't it? Well, armed with how to cut with Command-X and how to pre-roll with P to hear the audio after the cut, I added a multi-action button to my Stream Deck. I named it Cut plus Pre-Roll because I'm inventive, as we've already mentioned in this episode. I dragged in two system hotkeys, one for Command-X and one for the letter P. I tested the button, and it worked perfectly to cut and pre-roll my audio.
[23:48]I was so enchanted with the new button that I made another button, just a single-action button for play/pause, where the hotkey is just the spacebar. That might seem like a trivial button since the spacebar is pretty easy to use, but I found I could do my editing with my left hand on the Stream Deck and my right hand on the trackpad doing the selecting. It was easier to use my left hand to play/pause than to give my right hand two different things to do. I was so excited about this itty-bitty little automation that I made a little video to show you that is, of course, in the show notes. In the video, my button has a generic icon, but I got frisky while writing up my joy about this automation for you, and I decided to make my own button. Stream Deck has a little web app that lets you make icons for yourself, or you can choose from their library, but I wanted my own. I didn't see anything that inspired me in the library of icons, so I used The Noun Project, which I've talked about before, to find a nice icon of a pair of scissors and the letter P. In The Noun Project, you can choose a color for the icon you've chosen, and I chose to make them red. I plopped them into the web app and I made my little button icon. So it's basically just a pair of scissors and the letter P.
[24:54]Well, I've got to tell you, this made me so happy. I didn't hate cutting all of the ums and uhs that Pat said. And while I was there, I got rid of all of mine. And I think I might have done more than she did. Again, it's the automation giving me joy that I really want to get across to you. Here's my last example. Back in 2016, I wrote an article about a little app I wrote for myself to add drop shadows to my images for blog posts. If you look back on my posts from back then, the images look pretty fuzzy, but they do have a nice drop shadow. Since then, Helma helped me add some CSS code, that's Cascading Style Sheets, to my theme so that any image I upload has a drop shadow. And with MarsEdit, I can tell it to double the resolution, and that's why all of the images now look really crisp. I bring all this up because I got a comment on my article from 2016 just a week or so ago. The person's name is Scott, and he was asking for help getting my little drop shadow app to work. This required some real dusting off of the old brain cells as I re-read my own post on how I created it. There's an open source library called ImageMagick, with a K, that does most of the work on the command line. But I used Automator to package the ImageMagick script into an app. I explained how to build this yourself, but I also gave a download of my Automator app. Scott was getting an error when he ran the download, and he posted the error.
[26:15]Now, if you ask me what I did last week, I pretty much can't tell you. I could definitely not tell you what I talked about on the podcast last week, so the chance I'd be able to remember what I did nine years ago was pretty slim. If it weren't for Bart instilling this love of coding, that itch Scott put in my brain to figure it out would have gone unscratched. Instead, I spent a short bit of time asking Claude what Scott's error was trying to tell us, and a long bit of time writing an addendum to my article and enhancing my app quite a bit. The easy answer was that Scott hadn't selected a file before asking the app to add the drop shadow. The long answer is that it was never going to work under macOS Sequoia because of a permission dance he would need to do to use it. Oh, and if you're on Apple Silicon, my shell script inside Automator wouldn't work either. I only knew that because I discovered a while back on another coding project that Homebrew moved where it installs apps with the switch to Apple Silicon. As I worked through writing up the addendum instructions, I was very frustrated that I hadn't put a full screenshot of the Automator app into the show notes, so I was kind of guessing at parts of it. Since 2016, I've discovered both Shottr and CleanShot X, which will take scrolling screenshots, so I was able to use one of them to add the entire Automator screenshot in all its glory to the addendum. That way, if somebody didn't want to download and run an app from a clearly unidentified developer, they could rebuild it themselves.
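My actual Automator workflow is in the article, but to give you a flavor of the moving parts, here's a hypothetical Python wrapper around ImageMagick's classic drop-shadow recipe. The shadow parameters are illustrative rather than the ones my app uses, and the two candidate paths reflect the Homebrew location change I mentioned, /usr/local on Intel Macs versus /opt/homebrew on Apple Silicon:

    import os
    import subprocess
    import sys

    # Homebrew installs ImageMagick under /usr/local on Intel Macs
    # and /opt/homebrew on Apple Silicon.
    CANDIDATES = ["/opt/homebrew/bin/magick", "/usr/local/bin/magick"]
    MAGICK = next((p for p in CANDIDATES if os.path.exists(p)), "magick")

    def add_drop_shadow(src: str, dst: str) -> None:
        # Clone the image, turn the clone into a soft black shadow,
        # slide it behind the original, and flatten the layers.
        subprocess.run([
            MAGICK, src,
            "(", "+clone", "-background", "black", "-shadow", "60x5+4+4", ")",
            "+swap", "-background", "none", "-layers", "merge", "+repage",
            dst,
        ], check=True)

    if __name__ == "__main__":
        if len(sys.argv) != 3:
            sys.exit("Usage: dropshadow.py input.png output.png")
        add_drop_shadow(sys.argv[1], sys.argv[2])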
[27:39]Now, after I added the 2025 update to the article, I was very proud of myself, and I commented on the blog post to alert Scott that I'd answered his questions. And wouldn't you know it, after thanking me quite profusely, he asked for an enhancement to the app. Since he hit a brick wall when he tried double-clicking on the downloaded app, he asked whether I could make it politely ask for a file instead of throwing an incomprehensible error. Well, you know that sounded like fun, right? I didn't go as far as Scott asked, mostly because I got lazy, but I think I addressed the spirit of his request. Now, if you double-click on the app, instead of throwing an incomprehensible error, it simply tells you to drag a file onto the app for it to add the drop shadow. I had a lot of fun dusting off this nine-year-old project, and it made me feel good that Scott was getting value out of it. If you'd like to try my little drop shadow app for yourself, you can download it with its readme file at the link in the show notes. Now, back to the point of the story. Before Bart and I started Programming by Stealth in October of 2015, I was simply someone who wished they could think up something to automate. After nearly a decade of Bart giving us this slow and steady instruction in so many languages, tools, and concepts, everywhere I look, I see opportunities to automate. While I'm working on these automations, and when I succeed, I feel really good. I feel accomplished. I feel powerful. I have all the feels.
[29:00]
CES 2025: MOTER Next Gen Car Data for Auto Insurers
[29:04]I'm in the MOTER booth, M-O-T-E-R, with Af Waswasi. And the pitch here says MOTER is the bridge between automakers and insurers. So I'm thinking this is going to be controversial and awesome. What do you have to tell us about today, Af? So basically, we get our data from the car using COVESA standard signals, and then our SDK libraries, the software we create, basically.
We have driver scoring, driver coaching, and then accident and event detection. So basically, you have our software in the car, and as you drive, for each trip, it would score you and check your driving behavior. I don't like that. I'm a terrible driver. So that'll help you become a better driver and safer also. So does it gamify it? Is that part of it? So, think of it as every time you drive better, your score increases, so you pay less premium. That's kind of the idea. So as I'm driving, and actually it's very strange for Steve and me that as we're watching the video, the videos are all of cars driving right where we live, in our neighborhood. So I've got this system in my car and I'm going to be driving along. I'm paying attention. I'm not paying attention. I'm going to get a score. And whatever I do is being reported to my insurance company. No. So whatever you do is going to go to the cloud, where for now, it's mostly rideshare, rentals, fleets, things like that. So imagine a fleet manager wants to see who are his best drivers and worst drivers. He can easily go look at the data and see each driver's scoring.
[30:53]And their driving events, like speeding, tailgating, hard braking, you know, and accidents, if they cause any. And I'm okay as long as somebody else is getting monitored, just not me. But no, it doesn't go directly to an insurance company? It does not. Oh, okay. It doesn't go right away. So that's another thing. It would depend on who the customer is. So one of our customers, for example, is a non-emergency medical fleet. So, like, not 911, but, you know, someone wants to go to the hospital, someone's in a wheelchair and they need those. We install those in their vehicles also to monitor how good those drivers are. Because maybe someone elderly, someone is sick or injured, and so... Make sure they're safe. Yeah, basically. Make sure the driver is not driving recklessly or...
[31:49]Okay, so let's talk a little bit more about how this works. I see two pieces of hardware here. I see a cluster module and a console map. I've got a bunch of wires. I've got some Ethernet going on here. What is all this hardware? So, imagine these are the car entertainment systems in the middle. And so our product is basically the software that's installed in these. So that one over there acts like the car ECU that's sending the signals to the car software. And then our software is in there. So it scores the driver and also sends driver coaching messages and information. So all you have to do is just turn on your car and start driving, and it takes care of it. So this is a hardware simulation we're looking at right here. This is not something I've got to install in my car if I'm one of these drivers. Like, imagine this is a car and that's another car. I got you, I got it. So we see somebody driving along, and maybe they're not paying attention to the road. It's going to give them some sort of signal to say, hey, you know, you want to look back at the road. And you're watching the score. Speeding right now. So the driver was speeding, so you get that recorded. And then sometimes, like a lane departure, they go, and then you can go back and look at your trip, see what happened. Like, you look at what caused your score to drop or increase.
[33:06]So at this point, you're looking at companies that you can partner with to put this in cars, say, if you could get Uber to do it. Or, well, that doesn't work as well because it's not really a fleet, but maybe somewhere where they control the vehicle, right? Yeah, we want to support as many different platforms as we can so that they can just install our software and then do whatever they want to do with it. It's got a little Big Brother to it. I'm not going to lie. Little bit, little bit. But I don't know. Again, my premise is humans shouldn't be driving at all. So if this helps us to be better drivers, maybe that's the right thing. Yeah, I would say it's more on your side than against you. It's more to help the driver than actually be taxing to the driver. Butter side up. Butter side up. All right. Thank you very much, Af. This is very interesting. The company name is MOTER, M-O-T-E-R. If people want to learn more, where would they go? MOTER. Very good. Thank you. Thank you so much. Appreciate it.
[34:09]
Support the Show
[34:13]Have you been enjoying your ad-free feed of the NosillaCast and all of the other lovely shows here at the Podfeet Podcasts? You know, they may be ad-free, but they aren't cost-free. If you'd like to help us keep these shows funded, please consider becoming a patron by going to podfeet.com slash Patreon.
[34:30]
CCATP #811 — Adam Engst on Measuring AI Transcription Accuracy
[34:29]Music.
[34:37]Well, the delightful Adam Engst of TidBITS is back again with us this week, and he, in the last couple of weeks, has let me help him conduct some experiments trying to find the accuracy levels of different AIs' ability to do transcription. And today we're going to start by exploring his findings, but I have a feeling we're going to wander off that path as well.
[34:57]Gee, are you suggesting that I don't usually stay on topic? I love our nonlinear conversation. Yeah.
[35:04]Nonlinear is the way to go, man. So, okay, yeah. So what happened was, when Apple introduced the audio transcription feature in Notes, I thought, well, that's a great idea. You know, why wouldn't you want to record a presentation you were at or you were listening to or something like that, and then get a transcript of it? I mean, I'm a journalist, right? Like, I need to check what people say, not go on my memory. And so I did this, and then, like, the rabbit hole just started. Because, like, there's Audio Hijack, it can do transcription too. And oh, but wait, Notes works on the Mac and the iPhone, I wonder if those are different. And then you told me about MacWhisper, and, like, well, clearly I better check MacWhisper too. And so I kind of got this idea that I could figure out the accuracy level of one of these things. This turned out to be vastly harder than I'd anticipated, because, like, what is accurate, right? So what I'd been recording was Apple presentations, you know, because they're relatively clean. Like, at first, like the WWDC, you know, and iPhone releases, they're clean, right? I mean, those are scripted. But there's no transcripts of them that you can just download. But then, you know, like an Apple earnings call, you know, the first part is scripted, but then you get into all the back and forth of the analysts, and it just goes off the rails.
[36:30]So it all worked, but how do you check how accurate it is? And then I remembered that NPR does transcripts for all their shows, at least some of their shows. And so I found a Short Wave podcast episode, and, you know, I had transcripts and everything like that. And I recorded it. And of course, I recorded it in multiple ways, because if you want to use Notes, you actually have to literally record sound coming out of a speaker, whereas Audio Hijack can just, like, suck in the digital sound without actually ever playing it.
[37:01]MacWhisper needs files. So I recorded it in all these ways, and then I transcribed it. And then, again, started trying to figure out, how do you compare? And I actually started with ChatGPT, which gave me some answers that looked plausible at first. But every time I asked in a slightly different way, I got wildly different answers. Oh, you were asking it which one of these two is more accurate? Yeah, because I had the official one. So did you give it the audio file too? No. No, no. I just gave it the transcript. So I said, here's the official transcript, and here's five others. Compare them on how many words are different, how the punctuation is different, capitalization, that kind of stuff. Okay. Trying to give it some sort of metrics? Yeah, yeah. And eventually, it said, oh, it was calculating word error rate. And it turns out word error rate is a thing. I didn't know. So, it turns out WER is an equation of, let's see, it's the substituted words, the missing words, and the inserted words, divided by the total number of words in the official version. Okay. So, it doesn't worry about punctuation, doesn't worry about capitalization, things like that. Good, good.
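For anyone who wants to see that definition in action, here's a minimal sketch of a WER calculator in Python. The tokenization at the top, lowercasing and stripping punctuation, is a judgment call, and judgment calls like that are exactly where different calculators can come up with different numbers for the same transcripts:

    import re

    def wer(reference: str, hypothesis: str) -> float:
        """Word error rate: (substitutions + deletions + insertions) / reference words."""
        ref = re.findall(r"[a-z0-9']+", reference.lower())
        hyp = re.findall(r"[a-z0-9']+", hypothesis.lower())
        # Levenshtein distance over words via dynamic programming.
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # word deleted
                              d[i][j - 1] + 1,         # word inserted
                              d[i - 1][j - 1] + cost)  # word substituted
        return d[len(ref)][len(hyp)] / max(len(ref), 1)

    # A dropped word out of four gives a WER of 25%.
    print(wer("the annual exercise begins", "the exercise begins"))  # 0.25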
[38:17]And so, that helped a lot, because then I had something I could tell it to do, which it sort of knew how to do, and then it became consistent. Okay. And I also learned that there were calculators where you could just paste in two transcripts and it would calculate this for you. The W-E-R? Yeah, the word error rate. None of them agreed.
They're driving me up the wall. You know, they were ballpark, and they sort of went in the same order, you know, like this one was more accurate than that one and things like that, but they just about never agreed on the numbers. Let me ask, is it a percentage? Yes. So how close were they in percentage? I mean, a word error rate would be like 5%, 3%, it'd be a small number, hopefully. Yeah, they were all within, you know, 1% to 4% difference. Okay, but you wanted scientific evidence. It just seemed a little fuzzy to me. So I'm like, come on. And what I realized eventually was this was a fool's errand, and I was the fool. Always get her while she's drinking.
[39:26]I was drinking coffee right when he said that. It did not come out of my nose, but close. So, the problem was, I actually did an error check on the official transcript. So, I played the audio and read along with it. Luckily, it was only like 12 minutes or something, so I was able to do that. I was going to say, that's really the only metric you could trust: I'm reading it, I'm looking at it, I can tell what it's saying, right? And you know what? There were at least four errors in the official transcript. They were missing words. And those were errors that were true errors. There were other ones where I realized, suddenly, well, now there's decisions you have to make. So if I say, just right there, I said, "so, so if I make." Right. I repeated the word so. Right. Should a transcript include that? If you want to be technically accurate, yes. If you want it to be good for the reader, definitely not. No. And so, yeah. So it turns out that Notes, both on Mac and iOS, is pretty verbatim. If you repeat a word, it will pick it up twice. Audio Hijack and MacWhisper, which actually both use Whisper behind the scenes...
They're like, you don't really need to see this. I've actually really appreciated that in the interviews that I do at CES. I am not very articulate in those interviews. I'm often repeating something or I'm going uh, uh, uh, and it's just not there in the transcript. You sound great. I've listened to some of those. I listen to them and I'm just like, oh man. So yeah, so it turns out...
And I went in thinking this was a computer problem. You know, like one of those, you know, you plug in the numbers and it spits out an answer and you're done. You can do a graph, you can do math. You could do, right. You know, like there should be a number. Sure. And what I came to realize, and I mean, there's still some useful information that came out of it. You know, I did get this nice chart and a table of all the things and was able to come to some conclusions. MacWhisper is probably the best, sort of unsurprisingly, and that's what it does. And interestingly, Notes on iOS and Notes on macOS are different. macOS does better. Really? Yeah. Don't know why. Maybe it's running locally. Same recording. Maybe it's running locally and it's RAM-based and that sort of thing. I don't know. Who knows? Very interesting. So, you know, that's on an iPhone 16 Pro versus an M1 MacBook Air. So maybe even a different Mac might make a difference. I don't know. I asked you about that when I did it, because I was running it on my M3 MacBook Pro and on my M2 MacBook Air. And I was asking you whether you thought it would be different. And we didn't think it would be, but maybe.
Again, I just don't know. And that's sort of what this is coming down to, is that we're seeing that more and more of the world's problems, in some sense, don't have fixed, easy answers. We've moved beyond fixed, easy answers. Those are all... we've solved those. This is computers, though. It's got to be exact. It's got to be exact! No, no, it doesn't. And the fact that there's these decisions that you have to make is part of why it's not exact. Even the two calculators where you just pasted in your text and they claimed to calculate word error rate, they didn't agree. So clearly they made some different decisions on how to do this.
And again, what is right? I don't know. They all agreed when there were exactly four errors; they all got the same result on that. But as soon as you got into actual real-world situations, it was much trickier. And of course, that is professional audio from NPR. It's as good audio as you're going to get. It's an official transcript, which I believe was created by a person, at least to some extent, because the missing words were not missed by any of the AIs. Oh, and these weren't just uhs and ums. Nope, this was, like, yeah, they were talking about something being an annual exercise. And I forget whether it was both "annual" and "exercise," or just one, or just "annual." But basically, that word just disappeared from the official transcript. Okay, which looks like human error now. Right, it was very clear in the audio. There was no reason for it to have been missed. So, there were a few things like that. You do realize how scary this is. Now you can tell it's human because it made a mistake.
[44:36]So, yeah. So, it just became this really interesting kind of philosophical issue of, well, what is the truth in some sense? What is the answer?
[44:52]And, you know, I've been thinking about this more and more, because for basically the last year and a few months, I've been using Perplexity, Arc Search, which has this AI-driven Browse for Me, and now ChatGPT to do searches. And so, I've been looking more into the search space, and not so much in an official, comparative way, where, like, oh, let me run the search across all four of them and see what happens.
[45:26]But just, like, is it solving my problem? Is it answering my question? Yeah. Let me ask you a question. I don't know what people mean when they say using it as search. How is it different than giving it a prompt? It seems like the same thing. Well, so Perplexity is just a search engine, right? I mean, or 99% of it. Oh, okay. So Perplexity is a search engine. And so when you put a prompt into Perplexity, it's like typing a search into Google. It will do a search based on your prompt, then it will collect all the results, and it will use them as context to answer the question in your prompt. How is that not just doing a prompt? That sounds like the same thing to me. Oh, no, no, no. Because imagine ChatGPT before it had access to the web, right? Perplexity is literally doing a search in the background, whereas ChatGPT, before it had access to the web, all it could do was base its answer on its training model.
Okay. I didn't actually realize that had changed. It's not just using its training data. So it can answer current events now? ChatGPT is doing full web searches when you ask. Perplexity always does them, or almost always does them. You can ask Perplexity to do certain things where it will just be generative AI, because there's sort of no search involved. You know, I mean, Perplexity will do just, you know, a write-me-a-limerick kind of thing. But ChatGPT in the past was just completely the other side, would only do generative AI, and only recently, in November, I guess, did they open it up to search. Okay, that explains some of the results I've been getting. I didn't notice a big shift, but now that I think about it... Yeah, you should be able to see it. It should say "searching the web" when you type in certain kinds of prompts in ChatGPT, and then it will give you sources. Because both of them will give you sources to what they found and incorporated into their answer.
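What Adam is describing is often called retrieval-augmented generation, and the flow is simple enough to sketch. This is a hypothetical outline, not anybody's real API; search_web and complete are stand-ins for whatever search backend and language model a tool like Perplexity actually uses:

    from dataclasses import dataclass

    @dataclass
    class Result:
        url: str
        snippet: str

    def search_web(query: str) -> list[Result]:
        # Stub: a real tool would call a search API here.
        return [Result(url="https://example.com", snippet="...")]

    def complete(prompt: str) -> str:
        # Stub: a real tool would call a language model here.
        return "..."

    def answer_with_search(question: str) -> str:
        # Step 1: an ordinary web search runs first.
        results = search_web(question)
        # Step 2: the results become context for the generative step.
        context = "\n\n".join(f"{r.url}: {r.snippet}" for r in results)
        prompt = (
            "Using only the sources below, answer the question and "
            f"cite your sources.\n\nSources:\n{context}\n\nQuestion: {question}"
        )
        # Step 3: the model generates the answer, grounded in the search results.
        return complete(prompt)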
That's interesting. I know when I used to do it from the web interface, I used to see that, but in the app, I don't. I'm not sure. I'm not using the app primarily. I'm not even sure I have it downloaded on my other machine. It only works on Apple Silicon, so I can't run it on my 27-inch iMac.
[47:57]So I'm just using ChatGPT in the browser in an Arc tab. Okay. All right. And then the third one is Arc Search, which is the iOS version of Arc. So all my tabs are synced at all times so that I can just tap, tap, tap. So Arc is a browser. Arc is a browser. But they added this feature called Browse for Me, where you type in your search, or they've got a very nice way to talk to it, and it does exactly what the others are doing, where it goes out and does a search, runs your prompt against the context of that search, and then gives you answers with source links. So when I go to just my URL bar in Safari, with my default set as Google for search, and I type in a term and it does that Gemini thing at the beginning, is that the same sort of search that you're talking about? Honestly, not sure. I gave up on Google so long ago that I'm not really up on what they're doing these days. Brave Search, which I used until I switched to Perplexity, for a couple years before that, Brave Search does an AI summary, but it's so short that it's often not useful.
[49:23]Okay. How are you doing it? Let me guess, in Arc, the Arc web browser, does it allow you to choose Perplexity as one of the options for your search? Is that how you're using Perplexity? I'm trying to. So I'm currently using ChatGPT.
[49:43]And I'm hesitating because I can't remember whether it lets you choose or if I coded it in myself. Because it's a Chromium browser, you can set up your own search engines very easily, and I can't remember if I did that. And I think with ChatGPT, I had to have an extension, a Chrome extension. Okay, just because in Safari, my only choices are Google, Yahoo, Bing, DuckDuckGo, and Ecosia, whatever that is. But I don't think I can choose. Oh, don't use Ecosia. Horrible, horrible. But I can't choose Perplexity. No, no. You're limited to what Apple will let you do. Yeah. And so, that's one of the reasons I don't use Safari. So, what's interesting about these is I feel like there's a bit of a sea change happening. So, I mean, do you remember using Archie and Veronica? No. I remember the comic book when I was a little kid. Yeah. So, Archie was the search engine for FTP sites. Oh, wow. In, like, the early 90s, Veronica was the search engine for Gopher sites.
And then the web comes along, and then we get AltaVista and Yahoo, and then eventually Google. But you had this sort of interesting jump where, like, FTP sites, you could just navigate, it was literally a file system, and you know, you just go into folders and there'll be more folders and documents there. Gopher, and then TurboGopher, which was the client from the University of Minnesota, sort of made that more fluid, in the sense that you could be navigating through this Gopher space, which would have documents and whatnot, but it was still very much this hierarchical list, and you kind of go through. And then of course the web pops up, and suddenly we've got full text, right? With links and everything. And so that was the evolution. And this feels like it's sort of the next thing, where instead of finding a single page with text on it, it's finding a bunch of pages and collating their information in a way that it can then get to the answer. So you don't have to go and look at each page to see if it actually answers your questions. Okay. Because, I mean, that was the big win of Google, right? PageRank. The idea of PageRank was that you were likely to get, in the first couple of results, the answer to your question. Right.
[52:22]That seems to have gone away. Part of the reason why I don't use Google anymore. Yeah, it has gone away. And part of it is also that, I haven't thought this through, but I'm just saying this out loud. I wonder if keywords are no longer enough. In other words, we've been trained to do keyword searches.
[52:45]And so with a keyword search, you know, Google's going to go, ah, that's a keyword, and that's a keyword, and that's a keyword; here are the pages that have those keywords on them and that do all my other things in terms of being useful, blah, blah, blah, so that they meet the PageRank. Whereas what I find with the more AI-driven search engines is that it's much more of a conversation, because it's more like a chatbot, right? You don't usually give it just keywords; you usually say what it is you're trying to accomplish, and that makes a difference in how it's going to answer. And that makes a big difference because, of course, it's generative AI. It's going to be doing next-token lookups, and the more you give it, the more statistically appropriate those tokens can be to the needs of the question. Right. So, you're saying this feels like a sea change to you. In what way?
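Before Adam's answer, a toy sketch of that next-token point. The probability tables below are invented purely for illustration; a real model conditions on the entire prompt rather than looking up a table, but the effect is the same: a richer prompt narrows the distribution of plausible continuations.

    import random

    def next_token(context, table):
        """Pick a next token from a toy probability table keyed by the prompt."""
        dist = table.get(context, {"<unknown>": 1.0})
        tokens = list(dist)
        weights = list(dist.values())
        return random.choices(tokens, weights=weights, k=1)[0]

    toy = {
        # A bare keyword leaves many continuations equally plausible...
        "fridge": {"magnet": 0.4, "repair": 0.3, "recipes": 0.3},
        # ...while a fuller prompt concentrates the probability mass.
        "my GE fridge stopped cooling but the freezer works": {
            "control board": 0.7, "damper": 0.2, "compressor": 0.1,
        },
    }

    print(next_token("fridge", toy))
    print(next_token("my GE fridge stopped cooling but the freezer works", toy))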
[53:54]Well, because, and I understand the various issues underlying this, but the fact is, when you do a search, you're trying to answer a question most of the time. I mean, there are searches that aren't questions, like navigation: I just want to go to this website, I don't know what its URL is, so take me there.
[54:16]And then there are simple, very simple fact searches. You know, what is the world record in the mile? You know, 3:43.13. But then there are a lot of things where...
[54:31]The search is really starting something. So, for instance: I am looking for information about this particular model of GE refrigerator, because I have one and it's not working in this particular way; tell me what might be wrong. Different. Yeah, that's really different. That's not a simple lookup; it's got to explore a lot to get to that answer. It's got to explore a lot. And then it tells me what it thinks might be wrong, and I'm like, yeah, I don't think that's it, because the freezer's still working. The bottom freezer's still working, so clearly the compressor's not gone. Oh, okay, then let me adjust my answer to focus in on the other things that could be wrong. And I'm not saying it's necessarily going to get the answer right, but it's a more exploratory and participatory approach to information gathering. It's also like dealing with somebody who's pretty smart but has zero ego. So when you tell them, yeah, no, this model doesn't have that kind of compressor, it goes, oh, yeah, you're right. I'm sorry. I'm sorry. Let me approach this a different way. Unlike Siri, which never apologizes. Yeah, that's really what it is. We just want her to apologize.
[56:00]I find that interesting because yesterday I was using Claude to help me with some coding. I told it what I wanted to do, and it said, okay, here's how you could do that. And I said, yeah, that didn't work. And it goes, okay, I understand, well, how about if you try this? And I did it again. And then it goes, yeah, clearly none of these paths are going to work; let's take a whole new approach. It wasn't upset, it wasn't saying, well, why didn't it work? It just came back and said, let's take another angle on this: you could hard-code this thing, and here's how you could do it. Infinitely patient. Yep. Yeah. Infinitely patient. And it's also, you know, again...
[56:39]It's very important that these things provide sources, because... I hate the word hallucinate. I think it's a stupid word. I do too. They make mistakes. They make mistakes, and partly they can make mistakes because people make mistakes; sometimes their data is wrong. One of the things that drove me off of Brave Search to begin with is not that Brave Search was doing a bad job, but that the stuff I was getting was so stupid. The results, the actual pages I was reading, I'm like, well, that's dumb. I was searching for things in my own field, right? Like, I was talking about Apple stuff and iPhone and whatnot, and I was usually looking for confirmation, or the nudge of the one thing I'd forgotten, those kinds of things. So I had a pretty good sense of what quality was, and I was not finding it in the stuff that was popping up in the search engines. So it wasn't like the original sources were necessarily good, which can result in bad AI responses. But I still want to be able to check them, because sometimes, again, the AI just got some bad data somewhere and put it together in a stupid way, and now you've got some really incorrect stuff and you need to check that. But most of the time, I don't care that much, right?
[58:04]When I'm asking about what might be wrong with my refrigerator, I'm basically trying to determine, is this something I can fix myself or do I need to call a repair person right away? Right. And the chances that it's going to be super wrong on that are probably... well, it doesn't know your skill set. I mean, if you tell it that you have a sponge and a piece of electrical tape, it's going to go, yeah, no, you can't do this. But in fact, you can tell it your skill set, and it will take that into account, as opposed to having to watch a YouTube video and get 15 minutes in to determine that, no, the guy's not going to talk about your particular problem. The people who do YouTube videos on repair stuff, that is actually fabulous, but a lot of the time it's not actually what you need, because it's a slightly different model, or your particular problem is different enough, et cetera, et cetera. But with the chatbots, you know, the AI search engines, you can literally do things like, I'm having trouble taking this thing apart. Are there any tricks?
[59:17]And because it has the context of what you've been searching for all the way back, it can say, oh, well, yes, these kinds of connectors can be a problem; try doing X, Y, or Z. And again, I do fix a fair amount of things. I'm not great at it. I'm usually successful, but it takes me a while. Successful and great being two different things. Yeah, great meaning, oh, you do this, click, click, click, and you're done. Successful is, an hour later, ah, I finally beat it into submission. But a lot of times it really is the trick, that experience of, yeah, yeah, here's how you get it off. If you have to spend 15 minutes trying to figure out how to get one little piece off, you feel you've wasted a lot of time, whereas something can just tell you, oh, yeah, there's a little tab underneath you've got to push. Thank you. I couldn't see the tab. One of my favorite anecdotes is that somebody snapped off the antenna on my old '76 Honda Civic, and I went down to the store and bought another one, and I went under the console, and I unplugged the connector, and I unscrewed it from the roof, and I pulled the cable all the way out before I thought, boy, I bet my dad would have tied something to that before he pulled it out. And I called him and told him that. He said, Allison, how many times do you think I forgot to do that before I learned to always do that? You know, I've never forgotten that again.
[1:00:47]That's really true. You know, I grew up on a farm, so I have a fair amount of those kinds of skills. But as I said, particularly working with small stuff you can't see real well, you just go, oh, just tell me the trick. And in fact, to go with the story: the fridge really did die on Monday. The guy was actually able to come on Tuesday, which was great, and luckily it was cold enough to move everything outside and not have a problem. Well, we can't do that; we don't live where you do. Yeah, we don't live in sunny Southern California. And he determined that it was the control board that was the problem, and I watched him take it off, including these little clips that you had to use pliers on. I'm like, oh, that would have taken me I don't know how long to figure out, those little clips. A lot of times I think what we really want to know is, should I just pull harder, or is that going to break it? Yes, yes. Precisely. So in any event, he eventually figures out the control board, and he's looking it up on his phone and having a little trouble finding it. So I look it up on my phone on RepairClinic, and I find the thing, and he's like, well, it'll be $380 for the part and the service visit; we can probably get it in three or four days, and I can probably be back another couple of days after that. I'm like, a week without a fridge is not going to be fun. Looks like RepairClinic can send it to me tomorrow.
[1:02:12]And I can put it in myself because I have watched you with the little clippy things.
[1:02:18]So by Wednesday we had a working fridge again. And so had you gotten to that point with using an AI search agent? In that case, no, I'd gotten to the point of calling him. The repair person, right. Yeah, because I was like, okay, the various things it's talking about are things I can't just solve with RepairClinic. And it had mentioned the control board, but I was like, control board? I don't do the electronic stuff. I mean, I'm good at plugging and playing, but if it's actually a dead control board, I don't know how to figure that out. And to be fair, the repair guy was great. He was looking up wiring diagrams and stuff like that, and he showed me on the back of the control board there were two little dark spots at solder joints where things had probably gotten too hot at some point. And it's like, okay. And the part that really pissed me off: I had power-cycled the fridge multiple times at this point, right? Because that's what you do. It's always just unplug it and plug it back in, right? Right. And so he unplugs all the connectors from the control board, and he's looking at it in the back, and he puts it back together, and he plugs them all in, and the fridge starts working. I'm like, seriously? No. And there were no capacitors on it anywhere; there was no way it could have stored anything. I was like, I have had this thing sitting unplugged for a day, and when I plugged it in to show him, it was still broken. So in any event, I still got the new control board and all that, because I didn't trust it.
[1:03:47]But it was very helpful. I didn't trust the old control board. Yeah, you said "I didn't trust it"; I wanted to make sure we kept that straight. He was good. Yeah, no, I trusted him, but I didn't trust the old control board to keep working. Right, right. But again, getting to that point was the tricky part, because I was like, well, it has these symptoms. I can only tell you what the symptoms are. And that was useful, and that was a discussion. It was not a keyword search.
[1:04:23]Right. And the thing is the ability to go back and forth with it. Like you say, you can go to Google search and search for a term, and it'll spit out some answers, and then you want to ask the follow-on question, and you're starting from scratch again. It's sort of like talking to Siri. But with these chat clients, it assumes it knows all this information. I talked on my show about asking it a question, and it says, well, you know, you could use TextExpander for that, and I see you've got it on your system. And I was like, what? How do you know that? And it says, because you told me. And it says, go back and look in your search history. And sure enough, I had said, I'm trying to make a TextExpander snippet that does this. It basically keeps an inventory of everything you've ever told it. Well, in fact, as I'm starting to write about this, I'm actually going back and looking; both Perplexity and ChatGPT keep sort of a library of your past searches, which is kind of interesting because, again, you might pick them up. Like, I've gone back and continued a conversation.
[1:05:31]Because sometimes I don't want to start from scratch. I'd already had some discussion about something, I've gotten somewhere, and now I want to pick that up and go further. So that's kind of an interesting thing. The other thing Perplexity does, which I've been finding more and more helpful, is it suggests four or five prompts to continue. It's helping you continue the conversation. Oh, yeah. It'll say, did you want to know a little bit more about this thing I just told you? Yeah. And I've been finding that more and more useful for certain kinds of open-ended things, where I'm like, tell me about this, I don't really know very much about this. And it will tell me something, and then it will make some suggestions later on that actually are not bad.
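Mechanically, that kind of "memory" within a conversation usually comes from the client resending the accumulated exchange with every request. A minimal Python sketch, where ask_model() is a hypothetical stand-in for whatever chat API sits behind the scenes:

    def ask_model(messages):
        # Hypothetical stand-in: a real client would send this whole list
        # to the service and return the model's reply.
        return f"(reply that can draw on all {len(messages)} messages so far)"

    history = []

    def chat(user_text):
        history.append({"role": "user", "content": user_text})
        reply = ask_model(history)  # the full history rides along every turn
        history.append({"role": "assistant", "content": reply})
        return reply

    chat("I'm trying to make a TextExpander snippet that does this.")
    chat("Will that work on iOS too?")  # the model can 'see' the earlier turn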
[1:06:20]Now, in your musings as you're working on this topic, you were starting to talk about this as a cultural technology. Can you expand on what you mean by that? Yeah, so a woman named Alison Gopnik came up with this term, and she's arguing that generative AI, in essence, because it's been trained on everything it can possibly find, is a cultural technology in the sense that we are using it to extract specific bits and pieces of human knowledge. In that sense, it's sort of like Wikipedia.
[1:07:07]Wikipedia is a large collection of human knowledge, and when we go and read an article about something in Wikipedia, we're extracting that information from the sum total of human knowledge that's been recorded there. And it was an interesting way of thinking about it, because it gets away a little bit from the intelligent agent concept, which I've had more trouble with. I mean, I haven't yet seen any of the agent-y stuff doing something that even makes sense to me. Why would you have something do this? They've given examples like, book me plane tickets. I'm like, I barely trust myself to book plane tickets. I'm not going to trust some random thing. Every time I book a plane flight, I'm like, well, let's see if there's a cheaper one. What about this? What if I did these other hours? That's the wrong seat. That's a stopover in a weird place. That's too close. How could it possibly know those things about me when I barely know them about myself?
[1:08:19]And I don't even know them until I'm presented with something. I'm like, oh, that doesn't seem like a good time. It just feels problematic to me in that regard. Whereas this way of plumbing human knowledge... I mean, what's Google if not this grand sum of human knowledge that you can search for bits and pieces of? Well, now we're not searching for the bits and pieces; we're asking questions of it. Which is sort of the same thing, with a jump. I see what you're saying. Because we're having a conversation with it, not giving it, you know, "Audacity envelope tool version 3.7.1." We're asking, hey, I want to change the volume of a track, and I want to put control points in it, and I'm using Audacity. How do I do that? I was just helping a friend.
[1:09:19]Controlling his Mac remotely, and he'd been receiving spam that triggered more spam to a web forum that I manage, and I'm like, how are you doing this? I sort of figured it out; I knew roughly what was going on, but he uses Apple Mail, which I don't. So I'm on the spot, we're literally talking, and I'm controlling his computer, and I'm like, man, how do you find the freaking raw source of an email message in Apple Mail? So I just asked ChatGPT. And it told me, and I'm like, okay, thank you. Now I know. Yes, I could have done a search in Google and probably found three or four things that might have told me if I went and read them. But I just needed to know what the menu structure was to get to that command. And there were a couple of other little things like that, because I'm not as familiar with Apple Mail as I could be, because it's not the app I use. So to be able to get to that information... again, I wanted answers. I didn't want references.
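(For anyone who wants the answer Adam was after: in Apple Mail that command is View > Message > Raw Source, or Command-Option-U on the keyboard.)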
[1:10:33]Yeah, right, right. Right? Yeah. I mean, it's doing the same thing. It's going off and reading and finding all of these references, like you say, but then it's telling us the answer in the way we need to hear it, not the way it was typed into a webpage the day the person wrote it. And keep in mind, the answer may come from a webpage that has almost nothing to do with what you're interested in. Or it's wikiHow.
[1:11:06]You can't see Adam, but his head is exploding as I said that. I mean, I just hate it. They always come up at the top, too. That's the thing that drives me crazy. And it's literally never the one I want to read. Precisely. So that's just it: particularly when you're searching for something... I mean, that's in some ways a bad example, because that probably would have come up relatively quickly in a search. But sometimes you're looking for something no one's written an article about, because it's not that interesting; it's some little piece you're going to find referenced in something else. I do this a lot in TidBITS. I don't know if anyone notices or appreciates it, or if the search engines or the chatbots notice or appreciate it: I like to try to put additional information into articles. I'm writing about something, but I will mention other things on the way to getting there, so that those are recorded in a place where they can be found.
[1:12:08]Or just kind of absorbed by the reader. Readers go, oh, they have an article about such and such, and by the time they're done, they also know these three or four other little facts, which weren't absolutely necessary to get to the end, but they're there now. I think about this a lot, because when I go to a YouTube video to see how to do X, all I want is that. When I go to a recipe, all I want is the recipe. But when I write, I'm the person who's going to tell you the story of how I got there. I am the exact opposite of the kind of site I would ever want to look for.
[1:12:48]I've started teasing myself in my own articles, saying, anybody else would say, click here, here, here, there's the answer to your question. But that's not what I'm going to do today. Let's start back in 1958 when I was born, and I'm going to tell you how I got to the point of telling this story today. But I do feel like giving that context can be important to helping people understand why the answer is what it is. It's a fine line because, right, you don't want to go back to 1958, but you do want to provide some context. And the food blogs are actually a good example of that. In the last couple of years, I've become a huge fan of Deb Perelman and Smitten Kitten... Kitchen, sorry, can I get that right? My wife and I made that mistake very early on, calling it Smitten Kitten, and now we can't stop.
[1:13:41]We do stuff like that. We always talk about Avocado's Number. Yes, precisely. You could make a great guacamole for Smitten Kitchen with that. But in any event, she does a wonderful job of having a small amount of lead-up or verbiage before the recipe, so that you will read it, and it's amusing and entertaining and well written. And again, additional facts show up in there. Whereas most of the time when I go to a food blog, I am mashing that jump-to-recipe button as fast as I can, because I do not want to hear what hubby and the kids think of it. I open up the app Paprika and tell it to import the recipe, and I get that immediately.
[1:14:33]Well, I don't let Paprika import anything until I've gotten to make sure the recipe's okay. So that's the problem. So yeah, that's part of it. But anyway, I do think it's a fine line. But this is a little bit of what we're looking for. Someone was complaining to me recently, or commenting on TidBITS Talk, about how they did a search for, I think it was HP printer help, and then they called the phone number that Google gave them, and it was a scammer. Oh. And I'm like, that's really a Google fail. How could Google possibly not put up an official phone number? And I did the search, and I got official phone numbers. But it just goes to show, again, there's no real answer anymore. Everything's a little fuzzy. It's personalized and customized and a little different. So who knows what this person had searched for in the past, such that Google thought differently about giving them a scammer rather than the right answer.
[1:15:36]Oh, he really likes the scammers. Okay, then. When we were texting back and forth, or emailing, about this idea of a cultural technology, you quoted the woman you just referred to, Alison Gopnik from UC Berkeley, and she told the story of the stone soup. And that really struck me as a way to explain what you're talking about with a cultural technology and AI. Can you just talk through what she wrote? I'll put a link in the show notes to her article. A little bit, though actually the stone soup analogy didn't quite work as well for me.
[1:16:12]But basically the idea... I mean, the stone soup fable is, a traveler comes to a village and he's like, hey, let's make some soup. And people are like, oh, we don't have anything. And he's like, well, I've got these magic stones. And he puts the stones in a pot with some water and says, oh, this would be great if we just had a carrot or two. And someone's like, oh, I've got a carrot. And by the end, people have provided all of the parts to actually make a really good soup. And I think what she's suggesting is that we are adding all the little bits and pieces to make generative AI actually useful. We're turning it into soup by providing all of the extra little bits. Oh, wouldn't it be nice if it knew about such and such? Wouldn't it be great if it was trained on that? I'm going to tell it about my issue over here. And everything sort of goes into the pot. And what you end up with is more valuable than the parts and pieces you put in.
[1:17:16]I like that perspective. I'm also choosing to use Claude because it's not training on what I'm asking it. And that's usually an option for most of them at this point. Well, it's opt-out on some of them. On ChatGPT it's opt-out, but on Claude it's opt-in. And so I'm choosing not to let it train on my own data. But, you know, I probably could, because asking stupid questions about how to write something in jQuery is probably not a national secret I need to protect. It is an interesting question. I've been using this Lex word processor, which has a chatbot attached to it; you can ask it questions about what you're writing at all times, and you can feed it context. In other words, you can say, include these web pages as context, and this PDF I've uploaded, and things like that. And it's not sharing stuff, but someone was like, oh, aren't you worried about putting what you're writing into a chatbot where you can't control it? I'm like, everything I write is for the public, I mean, literally published. So no, I'm not in the slightest bit concerned about that. If I was writing my journal that I didn't want anyone to ever read, yeah, I probably wouldn't put it into even an online word processor. But you do have to figure out whether or not you actually care about some of these things, because it is a little bit about providing the carrot for the soup.
[1:18:44]Right. Yeah. One of the things I was thinking about with this: there was a study done recently looking at the drop-off over time in information being contributed to Stack Overflow, mapped against when AI started to kick in. There had been a steady decline in the number of people participating and adding information to Stack Overflow, going down at a fairly constant rate. But the minute ChatGPT hits the market, it just plummets. And they did a good job of doing a comparison study in countries where ChatGPT is not available, and the drop-off did not happen there, so they're fairly confident in their data. But part of it was like, well, wait a minute, does that mean we won't have new information if we stop contributing? What they found, though, was that this drop-off was in languages that are fully established.
[1:19:43]And so they said, if you look at JavaScript, it's been around a long time; HTML's pretty much figured out. People are just asking the same questions over and over again, and people are answering the same things, or saying, oh, it's already been answered, go to this answer. But if you look at some of the newer languages, they aren't seeing that kind of drop-off. So that gave me a little bit of confidence. Maybe what we're doing right now is, I'm writing an article about how to turn on some accessibility feature in System Settings on macOS, but you've already written it up, and MacRumors or one of the other sites has already written it up, and it's probably written up in 100,000 different places. Maybe that's not what we need to be doing anymore, because that information already exists; you could just ask an AI about it, because the world knowledge exists. Shut up and talk about something new. Well, we can always hope. I mean, the cultural well is deep and is getting deeper, but it is definitely a situation where pretty much anything you think of writing about has been written about already.
[1:20:54]Very quickly: my wife's aunt is a cookbook writer and has been since the 70s. She's written many cookbooks. And she said Google was what did it for her. She's like, I can type in basically the ingredients I want to use, and I'm thinking of a new recipe, and someone's already made it. So there was no point then? Right. She used to invent recipes, but it turns out there are no new recipes, in some basic fashion. So again, Google was one too: it's a cultural technology. It's giving us access to human knowledge, and the chatbots are just a slightly different approach to that. That said, I still believe you want to be careful when you ask them for recipes. They don't really taste so good.
[1:21:49]Well, I know, Adam, you and I both like to be precise and have data and analysis, and we've basically made this a discussion of, boy, this is a squishy thing now. So squishy, so squishy. We're going to have to learn to live with uncertainty. Oh, that's uncomfortable. Well, thanks for coming on. As always, this was a blast. And everybody should go to tidbits.com and read... I assume there's going to be an article on this subject coming out eventually. Sooner or later. To resolve the squishy bits. Yeah, right. Once I come up with the answer, we'll be good.
[1:22:25]All right. We'll talk to you again soon. Thank you. Well, that is going to wind us up for this week. Did you know you can email me at allison@podfeet.com anytime you like? If you have a question or a suggestion, just send it on over. Remember, everything good starts with podfeet.com. You can follow me on Mastodon at podfeet.com/mastodon. If you want to listen to the podcast on YouTube, like all the cool kids do now, you can go to podfeet.com/youtube. If you want to join in the conversation, you can join our Slack community, which is hoppin' fun, at podfeet.com/slack. You can talk to me there along with all of the other lovely Nosilla Castaways. You can support the show at podfeet.com/patreon, as I mentioned earlier, or with a one-time donation at podfeet.com/donate using Apple Pay or any credit card, or use podfeet.com/paypal. And if you want to join in the fun of the live show, head on over to podfeet.com/live on Sunday nights at 5 p.m. Pacific Time and join the friendly and enthusiastic Nosilla Castaways. If you'd been here tonight, you would have gotten to talk to my grandchildren, and I know all of you wish you'd been there. Anyway, thanks for listening.
[1:23:32]Music.
