Note: This article was published previously on the Genealogy’s Star blog. Image created with Microsoft’s Image Creator. To view Part 2 and Part 3 of this series, visit genealogysstar.blogspot.com.
Somewhere between its appearance in Isaac Asimov’s 1950 book, I, Robot, and the latest Star Wars movie or series, artificial intelligence became a threat to civilization as we know it for newscasters and pundits around the world. In a recent class at a local FamilySearch Center, I referred to some of the current advances in artificial intelligence, and one of the class participants asked if there was going to be anything left for genealogists to do. I didn’t have a complete or even an accurate answer to that question. But as is always the case when I don’t have an immediate answer to a question, I could not help but continue to think about the consequences of the present and future advances of artificial intelligence and the effect they may have on genealogical research.
I have recently addressed some of the issues of genealogy and AI (I am tired of typing out the complete term) in blog posts. Here are the links.
- https://genealogysstar.blogspot.com/2023/03/artificial-intelligence-compared.html
- https://genealogysstar.blogspot.com/2023/03/despite-limitations-artificial.html
- https://genealogysstar.blogspot.com/2023/03/the-real-limitations-of-artificial.html
- https://genealogysstar.blogspot.com/2023/02/how-will-artificial-intelligence-affect.html
- https://genealogysstar.blogspot.com/2022/12/will-i-be-replaced-by-artificial.html
Yes, AI is a threat to civilization as we know it, in the same way that fire, gunpowder, atomic energy, and a vast number of other developments have affected civilization. Forty-one years ago, or so, when I started my genealogical research into my own family, I lived in an almost completely paper-based society. The only technology I was using was microfilm, invented back in 1839 by John Dancer. See The History Of Microfilm | Learn The Past, Present, And Future. I remember trying to use the United States Federal Census on microfilm and immediately becoming discouraged because it was hard to read, and I did not yet know about Soundex. Here is a short explanation of Soundex from a Google search.
The Soundex is a phonetic algorithm for indexing names by sound, as pronounced in English. It was developed by Robert C. Russell and Margaret King Odell and patented in 1918 and 1922. The National Archives’ Soundex indexes of US Census records begin with the 1880 census.
The Soundex is a coded surname index (using the first letter of the last name and three digits) based on the way a name sounds rather than the way it’s spelled. Surnames that sound the same but are spelled differently – such as Smith and Smyth – have the same code and are filed together.
The Soundex indexes include heads of households and persons of different surnames in each household.
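To make the indexing rule concrete, here is a minimal Python sketch of the standard American Soundex coding (first letter kept, remaining consonants mapped to digits, duplicates and vowels dropped). The function name and the sample surnames are my own illustration, not part of the quoted explanation.

```python
def soundex(name: str) -> str:
    """Return the American Soundex code for a surname: the first letter
    of the name followed by three digits, e.g. 'Smith' -> 'S530'."""
    # Digit assignments from the standard Soundex tables.
    codes = {}
    for letters, digit in (("BFPV", "1"), ("CGJKQSXZ", "2"), ("DT", "3"),
                           ("L", "4"), ("MN", "5"), ("R", "6")):
        for letter in letters:
            codes[letter] = digit
    name = "".join(ch for ch in name.upper() if ch.isalpha())
    if not name:
        return ""
    digits = []
    prev = codes.get(name[0], "")   # code of the most recent coded letter
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != prev:   # skip vowels; drop adjacent duplicate codes
            digits.append(code)
        if ch not in "HW":          # H and W do not separate duplicate codes
            prev = code
    # Keep the first letter, pad with zeros, and truncate to four characters.
    return (name[0] + "".join(digits) + "000")[:4]

# Smith and Smyth share S530, as in the explanation above; Ashcraft (A261)
# shows the rule that H and W do not separate letters with the same code.
for surname in ("Smith", "Smyth", "Robert", "Rupert", "Ashcraft"):
    print(surname, soundex(surname))
```

Run as written, this prints S530 for both Smith and Smyth, which is exactly why those spellings file together in the census index.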
Back in the 1950s, AI was just beginning to capture the interest of those outside of the academic community. Here is a quote about the beginnings of AI.
In the first half of the 20th century, science fiction familiarized the world with the concept of artificially intelligent robots. It began with the “heartless” Tin Man from The Wizard of Oz and continued with the humanoid robot that impersonated Maria in Metropolis. By the 1950s, we had a generation of scientists, mathematicians, and philosophers with the concept of artificial intelligence (or AI) culturally assimilated in their minds. One such person was Alan Turing, a young British polymath who explored the mathematical possibility of artificial intelligence. Turing suggested that humans use available information as well as reason in order to solve problems and make decisions, so why can’t machines do the same thing? This was the logical framework of his 1950 paper, Computing Machinery and Intelligence, in which he discussed how to build intelligent machines and how to test their intelligence.
See SITNFlash. 2017. “The History of Artificial Intelligence.” Science in the News (blog). August 28, 2017. https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/.
Computing machines were (and still are) viewed as a threat to jobs, culture, morality, and the rest of the world as we know it, long before the internet and Google. The first part of my life, up until about 1972, was spent in this pre-computer age, and thinking about what would have been different if I had had access to computers during high school still disturbs me. Here is a timeline of AI from ChatGPT (which is currently integrated into my Chrome browser).
1943-1955:
- 1943: Warren McCulloch and Walter Pitts introduce the concept of artificial neural networks.
- 1950: Alan Turing proposes the “Turing Test” as a measure of machine intelligence.
- 1951: Christopher Strachey develops a checkers (draughts) program, and Dietrich Prinz an early chess program, for the Ferranti Mark I computer.
1956-1979:
- 1956: John McCarthy organizes the Dartmouth Conference, widely considered the birth of AI as a field.
- 1958: Frank Rosenblatt invents the Perceptron, a fundamental model for machine learning.
- 1966: The concept of “ELIZA,” a natural language processing program, is introduced by Joseph Weizenbaum.
- 1969: Shakey, an autonomous robot, is developed at Stanford Research Institute, showcasing the potential of AI in robotics.
- 1970: Terry Winograd develops SHRDLU, an early natural language understanding program.
- 1972: MYCIN, an early expert system for diagnosing blood infections, is developed at Stanford.
1980-1999:
- 1980: XCON (R1), one of the first commercial expert systems, is deployed at Digital Equipment Corporation to configure computer orders.
- 1997: IBM’s Deep Blue defeats world chess champion Garry Kasparov, marking a milestone in AI’s ability to defeat human experts.
- 1993: Rodney Brooks, who introduced behavior-based robotics in the 1980s, begins development of the humanoid robot “Cog” at MIT.
2000-2010:
- 2002: DARPA announces the Grand Challenge, a competition for autonomous vehicles first held in 2004.
- 2005: Stanford University introduces the Stanford Autonomous Helicopter project, demonstrating advanced machine learning techniques.
- 2006: Geoffrey Hinton introduces deep learning algorithms that revolutionize speech and image recognition.
2011-2021:
- 2011: IBM’s Watson defeats human champions on the quiz show “Jeopardy!”
- 2015: OpenAI, an AI research organization, is founded by Elon Musk, Sam Altman, and others.
- 2016: DeepMind’s AlphaGo defeats world Go champion Lee Sedol, demonstrating AI’s progress in complex strategy games.
- 2017: AlphaGo Zero is developed, surpassing its predecessor without any human data or guidance.
- 2020: GPT-3 (Generative Pre-trained Transformer 3) is released by OpenAI, showcasing advanced natural language processing capabilities.
- 2021: Researchers develop models capable of generating highly realistic deepfake images and videos, raising concerns about misinformation.
2022-Present:
- Ongoing advancements in AI research and applications across various fields, including healthcare, finance, transportation, and more.
- Continued efforts to address ethical considerations, privacy concerns, and the responsible deployment of AI technologies.
- Ongoing development of advanced AI models, reinforcement learning techniques, and exploration of AI’s potential in solving complex problems.
Please note that this timeline provides a broad overview of key milestones in AI history and is not exhaustive. AI research and advancements continue at a rapid pace, and there may be significant developments beyond my knowledge cutoff in September 2021.
You are welcome to fact check any of the entries. Some of the dates are approximate for ongoing events.
Genealogy is an information pursuit, and it is bound by its very nature to be affected by AI. The recent developments in AI make this inevitable. Stay tuned for further comments.
– James Tanner