Reading a Book a Day in 2025

I try to read 100 pages a day, spread across several books at a time, ranging in length from short stories to 600+ page novels. Somehow, this has averaged out to approximately a book a day so far this year.

Also, I read a decent amount of adult-only material, so while I don’t say anything explicit here, this post references stories that aren’t appropriate for minors.

The Stand-Out

The Dragonfly Gambit by A.D. Sui is a scifi novella about revenge? I really don’t know how to describe it because it confuses me. The characters are all far TOO realistic. They are constant contradictions in actions, feelings, and thoughts. They feel like real people, not characters. This book spends no time introducing its world or the political situation. Instead you are thrust directly into a complex interpersonal conflict at the core of a sadistic empire’s slow demise. You just have to figure out what’s going on as you go. No hand-holding.

Alien

Alien as in very different or strange, not “from another world” or non-human.

  • (military scifi, alt reality) Ninefox Gambit by Yoon Ha Lee: A tech/magic culture with corruption and revolt. I started reading this in 2022, and only finished it this year. It’s deeply strange. (This is the first book in a series.)
  • (fantasy smut, alt reality) DRAGONS! DRAGONS! DRAGONS! by Dragon Cobolt: A world where everything is dragons. Absurdist and horny. The world-building alone is insanity. (I also just read The Happiest Apocalypse, which is set on our world after the LHC accidentally grants everyone’s wish in an instant. Absurdist, less horny, but somehow more sex scenes.)
  • (scifi, non-human) Sheffali’s Caravan by Burnt Redstone: Exposition-heavy, slow start. An alien refugee is the secret member of a successful trading family on a backwater world within a bigoted empire. (It isn’t erotica or smut, despite being published on a website exclusively for those things.)
  • (scifi, near future) Xenocide by Orson Scott Card: I reread the third book of Ender’s Saga. When I was younger, I didn’t recognize the implicit support of eugenics within. I think this story is good despite that. The question of how to co-exist between completely alien species is important (human groups already struggle to co-exist).

Series in Progress

  • (scifi, modern day) Expeditionary Force by Craig Alanson: I’ve read the first two books of this series this year. There are like 14 books I think, and a spin-off series. A small band trying to keep Earth safe after being plunged into a multi-species interstellar conflict is a neat premise, and the macguffin of the first book is a continuing source of humor and intrigue. 😀
  • (scifi, near-future, post-human) Bobiverse by Dennis E. Taylor: I’d previously read the first 4 books, and couldn’t get Not Till We Are Lost until this year. I think the series is falling off, but still enjoyable. I read this through an audiobook, which I think made it better. I do not share that opinion on the previous books in the series – they all work great as text.
  • (fantasy erotica, modern day) Satyr Play by Burnt Redstone: A 4-part series. The first part introduces a well-developed world and stakes. The second and third parts really up the ante. I’m still reading the fourth part, which starts much slower than the others. The sex scenes were best nearer to the beginning, and get annoying at times. Fortunately, their frequency reduces over time.
  • (military scifi, near future) Halo: I read The Flood (book 2) and First Strike (book 3) this year. Skip The Flood if you’ve played the first game. First Strike was more interesting, but filler-y. The Fall of Reach (book 1) is very good, and I’m reading Ghosts of Onyx (book 4) now, which seems good so far (I’m 33% through it).
  • (scifi, futuristic) Culture by Iain M. Banks: I read Use of Weapons this year. I’m not sure I fully understood it, and like the other two Culture novels I’ve read so far, it’s completely unlike the rest of the series. I highly recommend The Player of Games, and Consider Phlebas only if you want your heart ripped out.
  • (scifi, alt realities) Pandominion by M.R. Carey: I read the first book last year, and the second book this year. The first book was faster-paced and better written, but the conclusion in the second book is satisfying. I don’t like that it feels like one book that was stretched into two books, with filler being shoved into the second, but it’s good regardless.
  • (fantasy erotica, medieval) Toofy by shakna (Toofyverse): I’ve read only the main story so far, and while it starts out with a lot of sex scenes and explicit fetish material, the story shifts to political intrigue quite quickly, and stays there. A slave catgirl wants an empire, and she gets what she wants.

Anthologies, Short Stories, Other

  • (scifi, near future) The Temporary Murder of Thomas Monroe by Tia Tashiro: Short Story. In a future where people have backups..
  • (fantasy smut, medieval) Blood of Dragons by OneEyedRoyal: Anthology. Cool magic (and sex) abounds when you interact with the blood of dragons. These all have interesting stories around the smut.
  • (scifi thriller, near future) Star Splitter by Matthew J. Kirby: A reluctant teenager joins her parents on a distant outpost, being transferred by mind-copy and body printing. Except, she wakes up on a crash site.. with herself.
  • (nonfiction, AI) The Big Nine by Amy Webb: I have many thoughts about this book. I think it’s pretty good, but gives a free pass to malicious ignorance in decisions by large companies, and doesn’t challenge deeper issues that exacerbate problems around AI.

Romance vs Erotica vs Smut

I use these terms very specifically. Romance contains little or no explicit sex or eroticism, but focuses on desire/attraction/relationships. Erotica has sex scenes or explicit material, but has significant story around those scenes. It can be read for carnal urges, an interesting story, or both. Smut is written porn. The focus is on titillation, and story takes a backseat or is barely extant.

AI Coding Tools Influence Productivity Inconsistently

Not So Fast: AI Coding Tools Can Actually Reduce Productivity by Steve Newman is a detailed response to METR’s Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity study. The implied conclusion is that AI tools decrease productivity by 20%, but that isn’t the only possible conclusion, and more study is absolutely required.

[This study] applies to a difficult scenario for AI tools (experienced developers working in complex codebases with high quality standards), and may be partially explained by developers choosing a more relaxed pace to conserve energy, or leveraging AI to do a more thorough job.
– Steve Newman

The section Some Kind of Help is the Kind of Help We All Can Do Without contains exactly what I’d expect: the slowdown is attributable to spending a lot of time dealing with substandard AI output. I believe this effect can be reduced by giving up on AI assistance faster. In my experience, AI tooling is best used for simple tasks where you verify the suggested code or tool usage against manuals and guides, or where you treat the output as a first-pass glance at which tools and libraries you should look up to better understand your options.

To me, it seems that many programmers are too focused on repetitively trying AI tools when that usually isn’t very effective. If AI can’t be coerced into correct output within a few tries, it will usually take more effort to keep trying than to write the code yourself.


I wrote the following from the perspective of wanting this study to be false:

There are several potential reasons the study’s results could be wrong. These pitfalls were accounted for, but I feel some of the arguments were not well-supported.

  • Overuse of AI: I think the reasoning for why this effect wasn’t present is shaky, because ruling it out significantly reduced the sample size.
  • Lack of experience with AI tools: This was treated as a non-issue, but the determination relied on self-reporting, which is generally unreliable (as was pointed out elsewhere). (Though there was no observable change over the course of the study, which suggests that growing experience is unlikely to change the result.)
  • Difference in thoroughness: This effect may have influenced the result, but there was no significant effect shown either way. This means more study is required.
  • More time might not mean more effort. This was presented with nothing to argue for or against it – because it needs further study.

(The most important thing to acknowledge is that it’s complex, and we don’t have all the answers.)

Conclusions belong at the top of articles.

Studies are traditionally formatted in a way that leaves their conclusions to the end. We’ve all been taught this for essay writing in school. This should not be carried over to blog posts and articles published online. I also think this is bad practice in general, but at least online, where attention spans are at their shortest, put your key takeaways at the top, or at least provide a link to them from the top.

Hank on vlogbrothers explains how the overload of information online is analogous to how nutrition information is overwhelming and not helpful. (This hopefully explains one of the biggest reasons why the important stuff needs to be clear and accessible.)

Writers have a strong impulse to save their best for last. We care about what we write and want it to be fully appreciated, but that’s just not going to happen. When you bury the lead, you are spreading misinformation, even if you’ve said nothing wrong.

Putting conclusions at the end is based on the assumption that everyone reads the whole thing. Almost no one does that. The majority look at the headline only. Of those who continue, nearly all read only the beginning, and the next group doesn’t finish it either. A minority finishes reading everything they start, and that’s actually a bad thing to do. Many things aren’t worth reading ALL of. Like this post: why are you still reading? I’ve made the point already. This text is fluff at the end, existing to emphasize a point you should already have understood from the rest.

When Open Source Maintainers Don’t Understand That Community Is Important

This is just to vent frustration at a thoroughly stupid experience I had recently. A portion of that stupidity is me failing to read something correctly, but I’m just really stuck on the stupidity of the response to me asking for help:

I asked for clarification, and was told to go away.

My reaction clearly indicates that I am not understanding something, and I even tried to give context about where I’m coming from so that it would be easier to spot what I misunderstood, but instead I was told to go ask a bot.

And then they blocked anyone from ever asking for help again.

The public is not allowed to open issues now.

What’s most frustrating to me about this is that it coincides perfectly with another issue I ran into today where I couldn’t add an important detail to an old issue. Past conversations are useful to people looking for assistance, especially when one solves their problem and explains it. When I am blocked from replying to something with a solution, anyone in the future experiencing the same issue is likewise blocked from finding the answer.

I now know what I messed up, but I’m not allowed to pass that knowledge to the future, because I was confused and made a mistake in how I asked for help.

There’s another layer to this that is often ignored: When this is the response the average newbie gets when they first try to contribute, they are encouraged to never ask again, or in the case of submitting pull requests, encouraged to never try to help again.

When open source maintainers discourage newbies, they cannibalize the future of their software.


Okay, that’s my entire point, but I also encountered some funny things as part of this.

What is a contribution? GitHub doesn’t know!

I think it’s interesting that GitHub says the repo limited opening issues / commenting on issues to past contributors, but I am a past contributor. GitHub clearly considers issues to be contributions, as every profile has a graph showing issues as part of their contributions:

My contributions: 89% commits, 11% issues.

AI tools can be very powerful, but they can also be very stupid

Earlier today, I tested Perplexity AI’s capability to answer a few basic questions easily answered through traditional search engines, such as which insect has the largest brain and which country is the current leader in development of thorium-based reactors. The results? It doesn’t know ants are insects, thinks fruit flies have large brains just because they have been the subject of a large number of studies, and ignores India in favor of China because western media reports on China a lot more.

But you know what, I wanted to test this asshole’s suggestion to ask ChatGPT about my problem, and surprisingly, it gave a very clear and accurate response!

Note (2024-10-02): OpenAI has since removed the ability to access websites from ChatGPT, and dumbed it down significantly. It is no longer a viable tool for most use cases.

ChatGPT points out what I misread: I have to clone the repo AND run NPM, not just run NPM.

When you offer binaries for a project, they have to actually exist..

To be fair, this is a fairly recent change to the README, but maybe you should publish binaries before advertising that you publish binaries?

Getting Started: Download a release binary and run it. Simple.
The advertised binaries don't exist.

Installation and usage aren’t the same thing

It’s understandable to be confused about whether someone has correctly installed something, but after confirming that installation has worked, ignoring the question asked is unhelpful to say the least.

After confirming that I've installed it, my question is ignored.

I’m experimenting with dolphin-mixtral-8x7b

Update (2024-10-02): This is one of my lowest quality posts despite the effort I put into it. The most important detail here is to use positive reinforcement when working with LLMs. Just like with humans, being nice gets far better results than being mean.

Tl;dr: Minor differences in wording can have a huge impact in results and oh my god I have really slow hardware and no money help me aaaa.


First, thank goodness for Ollama, and thanks to Fireship for introducing me to it. I have limited hardware, and every other tool I’ve tried for running local models has refused to deal with that, crashing itself or the whole system when running anything with decent capability. I’ve no money, so I can’t upgrade (and things are getting desperate, but that’s a different story).
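
For anyone who hasn’t tried it: Ollama serves models behind a small local HTTP API, so you can script against it instead of babysitting a chat window. Here’s a minimal sketch of the kind of thing my scripts do, assuming Ollama is running on its default port (11434) and dolphin-mixtral has already been pulled; the function name is mine, not part of Ollama:

```python
# Minimal sketch: query a local Ollama server via its documented REST API.
# Assumes `ollama pull dolphin-mixtral` has already been run.
import requests

def generate(prompt: str, model: str = "dolphin-mixtral") -> str:
    """Send one prompt to Ollama and return the complete response text."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=None,  # local generation on weak hardware can take a long time
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(generate("How are you?"))
```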

Why dolphin-mixtral? Aside from technical issues, I’ve been using ChatGPT-3.5 to experiment. The problem is that ChatGPT is incredibly cursed by censorship and bias due to OpenAI’s heavy hand in its construction. (Why and how this is a problem could be its own post; Eric Hartford has a good overview.) (To be clear, my problem with its bias is specifically that it enforces the status quo, and the status quo is harmful.) Dolphin-mixtral is built by taking a surprisingly fast model equivalent to or better than GPT-3.5 and removing some of the pre-trained censorship by re-training it to be more compliant with requests.

Dolphin-mixtral doesn’t fully solve this problem, though. There’s still the idea of censorship in it, and sometimes your prompt must be adjusted to remind it that its job is to provide what you request regardless of its concept of ethics. (Of course, there is also value in an automated tool reminding you that what you request may be unethical.. but the concept of automated ethics is morally bankrupt.) I’d like to highlight that positive reinforcement works far better than negative reinforcement. A lot of people stoop to threatening a model to get it to comply, but this is never needed, and it leads to worse results.

My problem is a little simpler. I haven’t gotten to experiment with models much because I don’t have the money or hardware for it, and now that I can experiment, I have to do so very slowly. In fact, the very simple test that inspired this post isn’t finished right now, and has been running for 9 hours. That test is to adjust Dolphin’s default prompt to produce less verbose responses so that I can get usable results quicker.

I asked “How are you?” under each version of the prompt:

Prompt            | Output Length, 5-shot | Difference | Notes
Dolphin (default) | 133.8 characters      |            | Wastes time explaining itself.
Curt              | 32.2 characters       | 76% faster | Straight to the point.
Curt2             | 84.6 characters       | 37% faster | Wastes time explaining itself.
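
The test itself is tiny, if you want to replicate it. Here’s a sketch of roughly what I ran, assuming the same Ollama setup as above. The /api/generate endpoint accepts a system field that overrides the model’s built-in system prompt. The Curt/Curt2 sentences below are quoted from my actual prompts, but I haven’t reproduced the full prompts in this post, so the BASE text here is a hypothetical stand-in:

```python
# Sketch of the 5-shot verbosity test against a local Ollama server.
# BASE is hypothetical; only the sentence that differs between Curt and
# Curt2 is quoted from my real prompts.
import requests

BASE = "You are Dolphin, a helpful AI assistant."  # hypothetical stand-in
VARIANTS = {
    "Dolphin (default)": None,  # keep the model's built-in system prompt
    "Curt": BASE + " You prefer very short answers.",
    "Curt2": BASE + " You are extremely curt.",
}

def generate(prompt, system=None, model="dolphin-mixtral"):
    payload = {"model": model, "prompt": prompt, "stream": False}
    if system is not None:
        payload["system"] = system  # overrides the Modelfile system prompt
    r = requests.post("http://localhost:11434/api/generate",
                      json=payload, timeout=None)
    r.raise_for_status()
    return r.json()["response"]

for name, system in VARIANTS.items():
    lengths = [len(generate("How are you?", system)) for _ in range(5)]
    print(f"{name}: {sum(lengths) / len(lengths):.1f} characters (5-shot mean)")
```

Output length is only a proxy for speed, but since these models generate one token at a time, shorter responses really do finish sooner on slow hardware.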

I really dislike when models waste time explaining that they are just an LLM. Whether someone understands what that means or not, we don’t care. We want results, not an apology or defensiveness. There’s more to do to make this model less likely to respond with that, but at least for now, I have a method to make things work.

The most shocking thing to me was how much of a difference a few words make in the system prompt, and how I got results opposite of what I expected. The only difference between Curt and Curt2 was “You prefer very short answers.” vs “You are extremely curt.” Apparently curt doesn’t mean exactly what I thought it meant.

Here’s a link to the generated responses if you want to compare them yourself. Oh, and I’m using custom scripts to make things easier for me since I’m mostly stuck on Windows.

AI Won’t Destroy Tests

When calculators first started coming out, people said they would be used to cheat and students wouldn’t learn anything. Instead, we changed how testing works to focus on learning what’s important – broader concepts and implications – instead of “what is 232+47”. With AI tools, we again need to change how tests work. This time, instead of asking if a student can regurgitate information in a way that aligns with the teacher, we can start to see if students are actually paying attention to the work. The difference between AI answers and real answers is a level of understanding deeper than the surface.