Wanting to know more about how Artificial intelligence (AI) works and tips for using it effectively for genealogy, I took Steve Little’s AI Course at the GRIP Genealogy Institute, “AI Genealogy Seminars: From Basics to Breakthroughs.” Steve recommended Ethan Mollick’s 2024 book Co-Intelligence: Living and Working with AI. I found the book to be perfect for my level of understanding and full of examples. Humorous at times, Mollick gives us much food for thought. Writing about the ever-changing landscape of AI is a challenge, but Co-Intelligence provides us with some broad principles that will still be valid with updated AI systems.
Ethan Mollick is an educator and has embraced the use of AI with his students at Wharton, where he is a professor of management specializing in entrepreneurship and innovation.
In the introduction, Mollick writes:
After a few hours of using generative AI systems, there will come a moment when you realize that Large Language Models (LLMs), the new form of AI that powers services like ChatGPT, don’t act like you expect a computer to act. Instead they act more like a person. It dawns on you that you are interacting with something new, something alien, and that things are about to change.
In this review, I'll share some applications for our family history work based on my own experience with AI.
Part I
Co-Intelligence is divided into two parts. The first three chapters comprise part I, which gives a brief history of AI and how to use it. AI first entered the world in the 1950s with Alan Turing's work in theoretical computer science. Development had its ups and downs until about 2010, when businesses began using AI but the average user did not. In 2017, transformer technology was developed, a huge shift. This gave rise to the Large Language Model (LLM), which could predict the next token (a word or part of a word).
A huge breakthrough for us as consumers was OpenAI's introduction of ChatGPT in November 2022. Because of the simple user interface, we could now easily experiment with AI. AI had largely been in the background of our work as genealogists, powering indexing efforts at FamilySearch and Ancestry; now we could experiment ourselves. Millions of people signed on to ChatGPT and asked it to create poems, write essays, and much more. I joined the crowd in playing with the LLM but didn't use it for any type of genealogy process.
Now that I actively use AI in my work, I found Chapter 3, "Four Rules of Working with AI," particularly helpful. It only takes a few weird answers to realize that chatting with an LLM is more complicated than you initially thought.
Four Rules of Working with AI
One. Always invite AI to the table
Mollick emphasizes the need to experiment to learn what AI can actually do that will prove useful. We need to understand its limitations, abilities, and subtle nuances as we work with this tool. Rather than getting frustrated when we don't get the response we expected, we can use that experience to learn the AI's strengths and weaknesses.
What does this mean for us as genealogists? Whenever we have a task, we can ask ourselves if AI can help. Many tedious tasks, such as transcriptions, can be done well by AI, but it is up to us to discover how accurate that transcription is. Mollick describes this as the jagged frontier. We don’t know how well a particular LLM will work with a task until we try it. As technology improves, something we couldn’t do yesterday could very well be possible tomorrow.
Two. Be the human in the loop
Mollick makes several interesting observations about working with AI. First, it is more concerned with pleasing you than being accurate, which is why AI will often hallucinate or give a plausible answer that is completely wrong. For instance, in working with a Widow’s Pension application, I asked an LLM to summarize the details from the document. The answer looked great until I checked the actual document and saw that every date was wrong! I then asked it to use only the dates on the document, which resulted in an accurate summary. We can’t assume anything when working with AI.
We provide crucial oversight, our unique perspective, our critical thinking skills, our human values, and our ethics. The more I work with AI in my research, the more I see how important my training and experience are. The LLM is simply using its training data to predict the next word. Although it is very good at this, AI doesn't have my thousand-plus hours of analyzing records to make genealogical conclusions.
Three. Treat AI like a person
Mollick explains that since AI is eager to please us, we need to tell it what kind of persona it should take. If we’re doing research, we can tell it, “You are an expert genealogist.” That is clear and specific and hopefully will guide the AI appropriately when helping us with a task. I start all my genealogy-related prompts by giving the LLM this persona.
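For example, a research prompt might begin: "You are an expert genealogist specializing in nineteenth-century United States records. Review the attached deed transcription and suggest follow-up sources." The exact wording here is my own illustration, but the principle is Mollick's: a clear, specific persona guides the AI toward the kind of help we want.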
Four. Assume this is the worst AI you’ll ever use
This speaks to the ever-advancing world of AI. The major companies are competing to provide the best LLM possible. It’s exciting to see each new model that we can then experiment with to try new tasks. If the current LLM can’t handle transcribing a deed, perhaps the next model will handle it with ease.
As I’ve been experimenting with using AI in the Research Like a Pro process, I’ve had many successes but also some epic failures. I can’t wait to see if some of those failures turn around with a better AI.
Part II
In part II of Co-Intelligence, Mollick provides us with several ways to consider AI in our work.
AI as a person
Mollick stresses that AI is unpredictable, something I have seen repeatedly. In our institute course, we tried inputting the same prompts into the LLMs, and no one received the same answer. This is very different from using a calculator, which will always give you the same answer to a problem.
Since AI doesn't act like traditional software, treat it like a human. Understand the strengths and weaknesses of each model. Currently, OpenAI's GPT-4o, Anthropic's Claude 3.5 Sonnet, and Google's Gemini are popular LLMs. I've tried having each perform the same task, and it's fascinating to see how they differ.
AI as a creative
In this section, Mollick explores the concept of hallucinations, in which LLMs make up answers. He posits that this may stem from the training data, which can introduce biases into the answers. He also warns about small hallucinations, like the incorrect dates in my pension application experiment.
A good way to use AI is to brainstorm ideas. Perhaps you need a new way of thinking about your research question. You could ask the LLM to propose ideas to solve your conundrum, and perhaps it will give you a new perspective. AI is trained on a huge amount of information, but we have to know how to work with it and ask the right questions.
AI as a co-worker
Anytime we have a mundane, tedious task we can ask ourselves if AI could do this quicker. Mollick suggests we divide our tasks into “just me” tasks and “me and AI” tasks. In my world as a professional genealogist, a “just me” task would be analyzing the research and then writing the report, making appropriate connections. A “me and AI” task would be taking that same report and asking AI to create a summary in a bulleted list.
AI as a tutor
Our go-to tools for learning something new have been Google or YouTube videos, but you can also ask an LLM how to do something. Often, the response seems quite good but may need some tweaking. For instance, I was writing an email and needed a quick list of steps for sharing Ancestry DNA results. I asked the LLM, and it gave me a nice bulleted list. Trying it out, though, I saw that some steps were missing and others needed adjusting.
In our GRIP course, we experimented with summarizing class transcripts to create our syllabus. There is no end to how AI can help us learn. We can ask it to summarize a long article for us. If we’re looking at a document that we’re unsure of how to interpret in its historical context, we can ask AI. The response may be incredibly helpful or not – the vagaries of this new tool.
AI as a coach
This section discusses the concept of expertise. Can AI help us become experts in our field? We learn and do our best when we receive feedback. If you're writing your first research report, you have no idea if it is good or not. Receiving feedback from an expert genealogist who has written many reports will help you get up to speed much faster than working on your own. If used appropriately, AI may be able to provide feedback in many areas of our work as genealogists. I uploaded a recent report and asked the LLM to suggest sections that needed further discussion. I was pleasantly surprised by the suggestions, which were exactly right.
AI as our future
Mollick wraps up Co-Intelligence with four scenarios for the future and what each could mean for humanity.
- As Good as It Gets
- Slow Growth
- Exponential Growth
- The Machine God
What will the future hold for using AI in genealogy and family history? We’ve just begun to scratch the surface. What is certain is that change is coming! I highly recommend Co-Intelligence as a way to ease your way into using AI in all areas of your life, but particularly in becoming a better genealogist and family historian.
Best of luck in all your genealogy efforts!
Learn More
Learn more about using AI tools in our hands-on workshop, Research Like a Pro with AI.