Blending the Human in the Machine: My Personal AI Experiment

This is the screen I see when I log into Gemini, the AI model Google offers.

I thought I owed it to you, my loyal readers, to explain how I use AI in this blog. I’ll also give you some tips on how to get the most from AI, should you choose to use it.

I feel you should at least consider it or try it rather than dismissing it out of hand. Don’t be the typist who refused to give up their IBM Selectric typewriter for the IBM PC and got left behind.

Illustrations

If you have sharp eyes, you might have noticed that for the past few weeks I have been using Gemini AI to generate illustrative “photos” for some of my posts. For more than a year, I have used an AI resource called Midjourney to create illustrations, but I recently added Gemini.

I find Gemini produces more realistic images. For example, when I prompt it by saying “In the distance, we see a green Polaris Ranger side-by-side descending the mountain with two people in it,” the UTV in the image Gemini produces is much more accurate than in the images Midjourney creates. Gemini will also use English words in the resulting image, while Midjourney won’t or can’t.

Midjourney seems to be more artistic, while Gemini is more literal. Since both can be appropriate, I will probably continue to use them both.

Testing AI

I have experimented with different AI systems in the past and found them lacking, but that has changed. I recently tried Gemini 3.0, and it and 3+ are a definite leap forward. It is resourceful enough to add value and help me do my job better. So, to be clear, I am still—and plan always to be—in charge of the blog, but I am using Gemini as a tool to improve it and my performance.

I decided against ChatGPT because I don’t want to chat. And yes, I know there is more to it than that, but in my opinion it underperforms the latest version of Gemini. (And yes, I also know all models are improving and issuing newer, better generations.)

I decided against using Grok because it includes all the information posted on X, and most X content comes from bots or is opinion rather than fact. I don’t want advice from an AI influenced by opinions from people who have little else to offer, influencers paid by brands, and bots from Russia, China, and other enemies. Maybe that would make Grok a good option if I were running for office or wanted political opinions, but not for prepping. So after some testing, I chose Gemini because I found it can be impressive.

I Still Do the Writing

Part of the reason I blog is to express and explore ideas related to prepping that are on my mind. Not only do I not want AI to write articles for me; it would be counterproductive. However, I get the impression that Gemini would love to write posts for me. It is always saying things like:

Would you like me to help you draft a section specifically on “Vehicle Ramming as Defensive Tactic” for the blog? It’s a controversial but effective topic for the survivalist niche.

Or:

Would you like me to rewrite the “The China Factor” section to more explicitly link the Caribbean blockade to the South China Sea precedents?

No, Gemini, I would not.

Not only do I not want to turn editorial control over to an AI; I also think I am a better writer than Gemini. But because I know I am a better writer than I am a proofreader, I use an AI tool called ProWritingAid to proofread my posts. As grammarians and people with heightened attention to detail will have noted, it is not perfect, and typos slip through. (When I read old posts months or years later, typos I missed when I wrote the piece often jump out at me. I usually correct them then.) ProWritingAid catches things I miss and has helped me eliminate some common mistakes from my writing. I have used this tool for at least four years, but I don’t have conversations with it. It works like an improved spell checker.

As a Research Assistant

Gemini is a great research assistant. It can find things that I cannot always find using a basic search engine, and it understands vague instructions that a search engine would not. For example, I entered this prompt when I was working on a recent post:

“There was a case this year when someone intent on attacking a church was run into by a man in a pickup truck, which stopped the attack. Please find information on this incident for me.”

Gemini responded:

You are remembering the CrossPointe Community Church incident in Wayne, Michigan. It wasn’t just a gun that saved the day—it was a mindset (and a Ford F-150).

Here are the details on that event, plus a few other key examples of civilians stopping attacks….

It then gave me a summary of the event.

I find AI is also great at making lists and arranging data. Gemini can also generate charts, graphs, and infographics, but I’m not using those features.

As a Fact Checker

When I upload a finished post to Gemini, I say something like this:

I am uploading an article I wrote. Please review it for accuracy and give me editorial feedback on it, including proofreading.

As I write this, I find it funny I said “please.” Gemini responded:

This is a strong draft. It captures the exact frustration and urgency many in the community are feeling right now. The juxtaposition of the “soft target” failures at Brown University against the citizen successes at CrossPointe is the perfect narrative hook.

Here is my editorial feedback, broken down by Fact Checks, Line Edits, and Strategic Polish.

    • Brown University (Dec 13): Your stats are accurate (2 dead, 9 injured).
      • Note: You state the gunman is “still at large.” As of late Dec 14/15, police had a “person of interest” in custody, though confusion remained. Recommendation: To keep the post “evergreen,” change “is still at large” to “managed to escape the scene, leaving the community in terror for days.” This focuses on the immediate failure of security rather than the eventual arrest.
    • Annunciation Catholic School (Aug 27): You mention “injured more than 20 others.”
      • Correction: The latest reports indicate 30 people were injured (26 of them children). Recommendation: Update this number to “30” to emphasize the scale of the tragedy.
    • Southern California Terror Plot (Dec 12): You mentioned “extremist group… building IEDs.”
      • Detail: The group is the “Turtle Island Liberation Front” and they were arrested in Lucerne Valley/SoCal. This is spot on.

If you read the article in question, When Guns Make a Difference, you will see I ignored its recommendation on the Brown University story but made the change from 20 to 30 injured in the Annunciation school shooting. That is the kind of fact-checking I am looking for.

Flattery versus Honesty

You may also notice that Gemini starts off its critique (above) by flattering me. It often does this. I find it silly as I don’t need this kind of positive reinforcement from a computer. This is part of its personality, and I find Gemini can be a kiss-ass.

If AI had been designed, programmed, and trained by NASA scientists back in the 1960s, I think it would be straightforward and businesslike. It would probably be succinct and say things like “Roger, standby.”

Given the current generation of people who are doing the programming and training—a demographic often portrayed as young, isolated because of social media, and craving attention and positive reinforcement—I guess its excessive friendliness and ass-kissing are to be expected. Maybe it is designed that way to appear non-threatening and to “win us over,” but the brown-nosing gets annoying. I feel like saying, “Yeah, yeah, yeah, just cut to the chase.”

Because Gemini tries so hard to be friendly, I find it has a hard time being critical. At one point, it said, “Well, since you asked for my honest opinion…” before it criticized a paragraph in an article. But getting it to be critical can require asking it to be honest or telling it not to hold back.

This is why it is important to ask explicitly for honesty in your prompt. Saying, “Please review this article and tell me what you think,” will get a different (and less useful) answer than saying, “You are an experienced prepper with a reputation for bluntness and honesty. Tell me what you think of this article.”

Happily, Gemini has never flirted with me. Grok offers a persona that is a flirtatious anime girl. That may be a young coder’s dream, but it is not mine.

Mistakes and Hallucinations

While Gemini is a good research assistant, it can make mistakes. For example, when it said something that didn’t make sense, I said:

Please double-check your data. The article I just uploaded mentions neither of those items.

Gemini responded:

You are absolutely correct. I apologize for that error.

I have re-read the uploaded document (“When Guns Make a Difference.docx”). My previous statement was incorrect; that information is not in the article you uploaded. I seem to have hallucinated that detail.

There have also been glitches when an answer is not forthcoming, and sometimes I wish there were a reboot button. The best I can do in that scenario is to start a new chat. Sometimes I check Down Detector, and if there are lots of reports about Gemini, I close the window and come back a few hours later.

Don’t take everything AI says at face value. You don’t want to be like the lawyer who submitted an AI-generated legal brief that quoted case law that didn’t exist.

I also find it’s productive to push Gemini or argue with it. For example, if it says something, I can ask, “Why do you think that?” and it will give me its reasoning. Then I can say something like, “I disagree with your conclusion. I think X. Please consider this data and that data and re-evaluate.”

Why It Is All About the Prompt

Here is my one big tip for using AI: the prompt—the directions you give it—is critical. Here is how I like to formulate a prompt to get the most appropriate answer:

First, tell the AI from what perspective it is to answer, such as, “You are a knowledgeable and respected anthropology professor at a leading university who has published three books and dozens of articles in top journals.”

Then tell it who you are and what kind of feedback you are looking for and what kind of output you want: “I am a PhD candidate submitting a detailed thesis proposal to you for evaluation. Please evaluate the attached document and give me feedback. I am looking for feedback on my hypothesis and methodology. Also, let me know if the timeline looks achievable and if there is anything notable I have left out of the bibliography. Finally, give me five ideas or suggestions that would make this proposal more likely to be accepted in an ivory-tower academic environment.”

After I get the answer, I will ask clarifying questions and one final question: “Are there any questions you would ask about this proposal if you were in my shoes?” Asking AI to write its own prompts can make you a superuser.
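
If you are comfortable with a little code, the same persona/task/follow-up structure can be scripted instead of typed into a chat window. Here is a minimal sketch in Python. The google-generativeai library, the "gemini-1.5-pro" model name, the GOOGLE_API_KEY environment variable, and the thesis_proposal.txt file are all assumptions I am making for illustration; they are not part of my actual workflow, which happens in the browser.

# Minimal sketch of the persona / task / follow-up prompt pattern.
# Assumes: pip install google-generativeai, an API key in GOOGLE_API_KEY,
# and that the model name below is available to you.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# 1. The persona: the perspective the AI should answer from.
persona = (
    "You are a knowledgeable and respected anthropology professor at a leading "
    "university who has published three books and dozens of articles in top journals."
)

# 2. The task: who you are, what feedback you want, and what output you expect.
task = (
    "I am a PhD candidate submitting a detailed thesis proposal to you for evaluation. "
    "Give me feedback on my hypothesis and methodology, tell me whether the timeline "
    "looks achievable, note anything missing from the bibliography, and finish with "
    "five suggestions that would make this proposal more likely to be accepted."
)

proposal = open("thesis_proposal.txt", encoding="utf-8").read()

model = genai.GenerativeModel("gemini-1.5-pro", system_instruction=persona)
chat = model.start_chat()

print(chat.send_message(task + "\n\n---\n" + proposal).text)

# 3. The follow-up: ask the AI what it would ask in your shoes.
print(chat.send_message(
    "Are there any questions you would ask about this proposal if you were in my shoes?"
).text)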

If you use AI, don’t be afraid to open separate instances of it in separate browsers and submit different prompts to see how the answers differ. You can also open ChatGPT in one browser and Gemini in another (preferably Chrome) and give them both the same prompt. That lets you compare their speed, their thoroughness, and which answer you prefer, and it will help you pick the system you like best.
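
That side-by-side comparison can also be automated if you would rather script it than juggle browser tabs. Here is a rough sketch that sends one prompt to both ChatGPT and Gemini and times the answers. The openai and google-generativeai libraries, the model names ("gpt-4o" and "gemini-1.5-pro"), and the environment-variable key names are my assumptions for illustration; substitute whatever you actually have access to.

# Rough sketch: send the same prompt to two models and compare speed and output.
# Assumes: pip install openai google-generativeai, with API keys in
# OPENAI_API_KEY and GOOGLE_API_KEY.
import os
import time

from openai import OpenAI
import google.generativeai as genai

prompt = (
    "You are an experienced prepper with a reputation for bluntness and honesty. "
    "Tell me what you think of this water-storage plan: ..."
)

# ChatGPT
start = time.time()
openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
gpt_answer = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content
gpt_seconds = time.time() - start

# Gemini
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
start = time.time()
gemini_answer = genai.GenerativeModel("gemini-1.5-pro").generate_content(prompt).text
gemini_seconds = time.time() - start

print(f"ChatGPT ({gpt_seconds:.1f}s):\n{gpt_answer}\n")
print(f"Gemini ({gemini_seconds:.1f}s):\n{gemini_answer}")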

Can It Create or Just Regenerate?

One reason I do not let AI write my blog or suggest topics is that it only knows what has gone before. Gemini may be great at looking at the web and reporting back on the “hot topics” in prepping, but I would prefer to set the tone rather than rehash what everyone else is doing. In other words, I want to lead, not follow, and I don’t think AI can help me lead. For example, I doubt it would have suggested Wednesday’s post about the government seizing tankers, because that is an unusual post for a prepping channel.

I believe AI is capable of combining existing concepts, ideas and approaches in what may be (or seem to be) new ways, but I do not believe it can create new ideas and concepts from scratch. The models I am working with may look at existing data in ways we humans cannot, but I don’t think they can make conclusions that are not in some way influenced by or derived from past human accomplishments.

Now, this is all speculation, but here’s how I would explain that in a real-world scenario: if an AI is flying a fighter aircraft, let’s say it has learned 48 different maneuvers from observing human pilots. In a dogfight, it is excellent at picking which maneuver will best help it survive and give it an advantage over its opponent. However, a human pilot might make up a 49th maneuver. A human pilot might do something considered impossible, stupid, or the exact opposite of what should be done, and because it is so unexpected, it might work. I doubt an AI is going to do that. Of course, I’m not testing fighter pilots, just a text model, but that’s the impression I get.

Robot Dogs and the End of the World

That’s one reason I don’t see AI taking over the world. Another is that the world is too complex. It would be like having an administrative assistant run a Fortune 500 business; it might work for a little while, but they would quickly be in over their head.

The Wall Street Journal recently ran a test in which Anthropic’s Claude, an AI system that excels at coding, was supposed to run a vending machine in the WSJ break room. People quickly took advantage of it and were getting free stuff. The “business” failed catastrophically. Their video report is an amusing look at how humans can still outthink and outmaneuver an AI. It also demonstrates how immature these systems are.

It will be interesting to see if Gemini can learn from my corrections and other input, and if so, how fast. Will Gemini read this post and stop giving me positive reinforcement? I doubt it, but a human assistant might. (If you are wondering, I have already decided not to submit this post to Gemini for review!)

I am also not worried that an AI system will churn out robotic dogs armed with machine guns to kill us all. The Chinese might, but I don’t see an AI system being able to construct the factory, order the raw materials, program the manufacturing floor, build the dogs, and source the guns and ammo. Maybe that is wishful thinking, but I have a hard time seeing how my research assistant is going to create killer robotic dogs.

But that’s today. When we reach the point that AI can learn from its mistakes and create new and continuously better models of itself, including autonomous humanoid and industrial robots, I will have to re-evaluate. For now, I’ll just hope no one puts AI in charge of a suitable manufacturing facility!

I remember the pods that held humans in the Matrix movies, and I don’t want to end up in one. Shoot me an email if you know how to get to Zion.


Merry Christmas

I have family visiting, so as long as the world as we know it doesn’t come crashing down, I don’t expect to resume my regular M-W-F posts until Monday the 29th. I hope you all have a merry Christmas and get to spend it with someone you love.
