Good morning from Charlottesville, Virginia! ☕️
In an email to customers, Amazon announced that it would be ending service for Kindle devices older than the 2012 edition. Those devices will lose access to the Kindle Store.
Something that’s always kind of bugged me about technology is how it often marches forward at the expense of older technologies: the backward compatibility problem. We have old storage mechanisms, like floppy drives, that folks can no longer use — and by use I mean get their old data off of them if they want it. I have a collection of random detritus I’ve been carrying around with me for years and years, and I’ve had to move it from storage mechanism to storage mechanism manually because I knew that old tech would fall out of favor at some point.
A dev team is probably being held back by supporting older models, or their plans for an upcoming release dropped the requirement to support them. Regardless, it’s a real bummer. I know, I know, folks can read in the Kindle app or use the website if they’d prefer, but that’s not the point. Is it too much to ask for your reader device to work forever? Maybe. But it would be nice if Amazon could keep the older devices from becoming e-waste, which you know is gonna happen.
Instead of wiping out jobs, AI is shifting the tasks of developers. They are doing less routine coding work and devoting more of their schedule to overseeing swarms of AI-powered code-writing agents — autonomous bots that can complete tasks. Engineers, in turn, are spending more time designing the structure of software and generating ideas.
I see this daily at WillowTree. We’ve been using LLMs to drive development to greater and greater effect. Sure, we look the code over, and make changes, but that happens with human developers as well — ever get PR feedback?
The point is I’m still employed and doing exactly what the article says: I’m orchestrating the LLM to do work for me. I’m not using it to swarm on tasks yet, but I do point it at tickets in JIRA and let it go to work. It works really well if the tickets are well defined and have enough detail that the LLM doesn’t need additional input.
I’m still not using AI directly in my personal projects because I love the challenge of writing code. Sure, I get frustrated and struggle, just look at my last post about finally getting something to work in Stream for Mac as an example, but I just love the work.
I’ll continue to use LLMs at work as long as they’ll have me and continue to learn new stuff on my own time in my own apps.
After almost twenty years on the platform, EFF is logging off of X. This isn’t a decision we made lightly, but it might be overdue. The math hasn’t worked out for a while now.
More folks need to follow the EFF’s lead and get the heck out of X hell. It’s a cesspool of right-wing loonies and tech bros.
Come join us in the Fediverse. Mastodon is an incredible replacement for X, and it’s not nearly as difficult to join and understand as many folks have led you to believe.
If you’d like to get an easy start just pull down the Mastodon iOS or Android apps and use those to create your account. Once you better understand the Mastodon communities you can choose to switch to a different server — or instance in Mastodon parlance — or stay on the main Mastodon instance. No harm, no foul.
If you get there feel free to reach out to me! I’d love to chat with you and answer any questions you may have — as long as you’re ok with me not knowing the answer. 😃
The free AI already on your Mac.
macOS Tahoe ships with a 3B parameter LLM. apfel gives you CLI access with one brew install. No model downloads, no API keys, no configuration needed, just works.
So, this is kinda nifty! Unlock the LLM already on your computer! Why the heck not? It’s there. Might as well use it, right? 👍🏼
Michael J. Fox is alive and well, the “Back to the Future” icon assured fans on Threads after CNN sparked a death scare by releasing a video on its content platforms titled “Remembering the life of actor Michael J. Fox.”
I’m sorry he had to see this. All the major news organizations probably have something like this already set aside for famous people’s deaths. It’s cold and impersonal, but it’s the way these things work.
I’m happy to hear Marty McFly hasn’t left us yet. He seems to be a really great person and he’s definitely made my life richer.
Back to the Future Part III is still the best of the three. 😄 (Go ahead and @ me.)
Tool and Puscifer frontman Maynard James Keenan has shared a message of support for his former military academy prep school friend, General Randy George, who was recently driven out of his position as Army Chief of Staff during the early stages of the USA’s conflict with Iran.
I knew Mr. Keenan had some military experience, but I didn’t know the extent of it. It’s nice to see him publicly support his friend like this.
I hope they get together and have a few beers, or perhaps some wine?
I’ve been so proud of my reading workflow, using Feedbin as a repository for all the newsletters I get, that I missed the other important part of that workflow: I open ReadKit once a day, read the items in my story list that interest me, and then close the iPad and go about my day. I am not looking for updates throughout the day, or using the app as a read-later service—in fact, my default view only shows me items from the past 48 hours—but as the true successor of that old morning newspaper.
I read part of this and sent a Mastodon post to Mr. Snell pointing him to Stream, but that was before reading the entire piece, which was a mistake. He’s looking for something different, not a River of News-style reader, which is what Stream was built for.
I’m also a subscriber to the site so I get a private feed of podcasts and on the latest Six Colors Podcast he and Dan talk about his reading setup and what he thinks might be his perfect setup. It sounds to me like he’d love to have Google Reader back. It had some features other feed readers typically don’t have, like searching for keywords and building a feed from that. The benefits of a backend service, if you can afford to run one.
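That Google Reader feature — search all your feeds for a keyword and get the matches back as a new feed — is conceptually pretty simple. Here’s a minimal, hypothetical Python sketch of the idea (the sample feed and the `keyword_feed` function are mine, purely for illustration, not anything Google Reader actually ran):

```python
# Sketch of a Google Reader-style "keyword feed": take an RSS feed,
# keep only the items whose title or description mention a keyword,
# and emit a new RSS document of just those items.
import xml.etree.ElementTree as ET

# A made-up sample feed for demonstration purposes.
SAMPLE_RSS = """<rss version="2.0"><channel>
<title>Example Feed</title>
<item><title>Mastodon tips</title><description>Joining the Fediverse</description></item>
<item><title>Cooking notes</title><description>A soup recipe</description></item>
<item><title>RSS lives</title><description>Mastodon and feed readers</description></item>
</channel></rss>"""

def keyword_feed(rss_xml: str, keyword: str) -> str:
    """Return a new RSS document containing only the items matching keyword."""
    root = ET.fromstring(rss_xml)
    channel = root.find("channel")
    for item in list(channel.findall("item")):
        # Join the text of every child element (title, description, ...).
        text = " ".join(elem.text or "" for elem in item)
        if keyword.lower() not in text.lower():
            channel.remove(item)
    return ET.tostring(root, encoding="unicode")

filtered = keyword_feed(SAMPLE_RSS, "mastodon")
print(filtered.count("<item>"))  # 2 of the 3 items mention "mastodon"
```

A real backend service would do this continuously across many subscribed feeds and serve the result at its own feed URL — which is exactly the kind of thing that’s easy for a hosted service and awkward for a standalone client app.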
The perfect app for an AI to do for you is a demo app. Yesterday I wrote about making WordPress boom with new apps for writers that run in the web ecosystem, not as plug-ins, in JS running in the browser, or on the desktop, any desktop, that would work too. Probably would be fine to put an MCP shell around it so it can be in AI-internal scripts.
Yes, LLMs are great for this! I can see exactly what Dave is trying to do but some folks may see it as him trying to pull a fast one on them. I don’t see it that way at all. I believe he’s genuinely trying to make the web better for writers. Why else would he go to the trouble to build so many open source projects over the years? His WordLand project is worth your time. It’s a nice, very small, writing surface built just for writers.
Apple’s online store in the U.S. is currently showing delivery estimates of up to 4-5 months for many Mac mini and Mac Studio configurations with upgraded amounts of RAM. The delays are occurring amid a severe global memory chip shortage driven by surging demand from companies building AI servers that require large amounts of RAM.
Darned AI companies! This is, at some point, going to make computers outrageously expensive. That may not be the case in today’s Apple ecosystem but it’s coming. I need to pull the trigger on a new box before they’re out of reach.
