Review: Apocalypse AI – The Seven

Apocalypse AI, by A.J. Ramsey

Genre: Science Fiction

Rating: 1 star

The description of this book promises fast-paced action and edge-of-your-seat suspense. This could not be further from the truth.

I have NEVER rated a book 1 star before, but this one is so, so deserving. This book spent literally 50% of its time on backstory being told around a campfire. I WISH I WAS KIDDING! Just when you’re thanking all the good things in the world that it’s over, it’s someone else’s turn to tell her story! ARE YOU KIDDING ME!? Campfire stories are NOT fast-paced. They are NOT suspenseful.

And this book hits SO MANY of my personal pet peeves that, if it weren’t for Stephen King recommending that people read bad books to learn to write better, I would have deleted it after the first chapter.

I was so frustrated with the first chapter that I made some notes:

“Stupid AI – stops a war it’s winning for no apparent reason. Doesn’t keep multiple safe houses for its code.”

The first point is never addressed in the book. The second gets a sad attempt at an explanation: the AI’s code has grown so enormous from its constant expansion that it requires servers that fill a city.

Really? An AI that cures cancer within one year of activation. An AI that can hardwire the brain to interface with the Internet. An AI that grows androids, has a limitless mechanical army … can’t shrink hard drives and processors? The book takes place sometime after 2040, I think. At that point, a super-brilliant AI should have conquered quantum computing, and maybe gone even further. If HUMANS can shrink processors, hard drives, RAM, etc. by several times on a regular basis, I’m pretty sure this AI should be walking around in its own body. Or many bodies. The idea that it has made no progress on fixing its one weakness – the size of its storage – is laughably bad. I mean, even if we suspend our disbelief for a moment, it has so many factories and robots all over the place; why can’t it build more of these enormous servers somewhere else? It’s infinitely smarter than any human, yet fails at this simple problem? It makes no sense whatsoever.

Second note: “Awful analogy.”

This is a quote – “think of it like tossing a grenade through DANA’s [the AI’s] bedroom window. Even if we don’t kill the AI, we can cut off its control over their robot army if we infect the core.”

I have a couple of problems with this … what if DANA isn’t in the bedroom at all? Then you’ve wasted your grenade and done no damage. And why would blowing up the bedroom and failing to kill the AI cut it off from its robot army? Does it only have one computer in its bedroom, with nothing else in the rest of the house it could use to communicate?

There was one moment in the book where I thought, “That’s actually kind of cool,” when an android described its own creation and birth. But again, that’s part of the insanely long info-dump fireside exposition, so it wasn’t enough to get me to like this book. Honestly, the author should have started with that character’s story as a standalone book, because it sounded REALLY interesting, just not as a “here’s 50% of the book being her talking about her past” kind of story. That was AWFUL. AWFUL, I say.

And then there’s my #1 pet peeve for the first book in a series: it ends for no reason. There is absolutely no attempt to tie up any loose ends. It just says something along the lines of “join us next time in book 2!” and if I hadn’t been holding an e-reader, I would have thrown the book in a fire. I hated it so much that I won’t even add an affiliate link to my website’s review, because I expect no one to buy it.

Mr. King, I know you said it’s a good idea to read a bad book, but I didn’t learn anything from reading this, except that life’s too short to waste on bad books.