Introduction
Artificial Intelligence (AI) has become one of the most transformative technologies of our time, and its impact is being felt in virtually every aspect of our lives. From business and finance to healthcare and transportation, AI is changing how we work, communicate, and interact with the world around us.
As AI continues to evolve and advance, many people are raising important questions about the implications of these developments. Notably, white-collar jobs appear to be under threat, much as the Industrial Revolution (when humanity transitioned from making goods by hand to using machines) threatened blue-collar jobs.
Specifically, philosophers are currently wrestling with the anthropological implications of AI from a worldview standpoint. What are the implications for the human race if AI can accomplish many tasks as effectively as, or even better and more affordably than humans?
This article will explore the relationship between Christianity and AI, examining the opportunities and challenges of this emerging technology and considering how Christians can navigate these cutting-edge technological developments.
A brief introduction to the inner workings of AI algorithms and machine learning
Without venturing into a technical abyss that might cause heads to spin, we'll attempt to define AI in the broadest fashion possible. AI, at its heart, is a program: a set of coded instructions for the automatic performance of a task. Every program takes input and produces an output.
A simple program can be defined as follows:
f(x) = x + 2
In this program, the value of x serves as the input, which is then modified by adding 2 to generate an output. Therefore, f(1) equals 3.
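Written in a programming language such as Python (used here purely for illustration), the same program looks like this:

def f(x):
    return x + 2

print(f(1))  # prints 3: the input 1 goes in, the output 3 comes out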
What makes AI unique compared to basic programs is that it often relies on fields like "machine learning" and "deep learning." These are buzzwords that essentially refer to how the AI is "trained" to learn from data and improve over time.
Teaching an AI program involves instructing it on improving its decision-making or predictive abilities through the use of examples. Think of the AI as a student being educated on identifying objects in images. By providing it with numerous pictures, some featuring cats and others dogs, along with corresponding labels, the AI examines these images to discern patterns (like ear shape or fur texture) that enable it to distinguish between a cat and a dog.
To help the AI understand these patterns, it uses something called "weights." Weights are like little knobs the program adjusts to fine-tune its understanding of the data. At first, the AI might not get it right, but as it sees more and more data and keeps adjusting its weights based on feedback, it gets better and better at recognizing cats and dogs in new pictures it hasn't seen before. This process of adjusting weights and learning from mistakes is what makes AI more powerful than traditional, rule-based programs.
A simple and practical example of an AI adjusting weights
Imagine you are training an AI to forecast whether an individual will enjoy an apple, focusing on its sweetness and crunchiness. For the sake of simplicity, let's assume the AI prioritizes just two factors:
Sweetness (measured from 1 to 10).
Crunchiness (measured from 1 to 10).
To provide the AI with examples, you can input data similar to the following:
Instance 1: Sweetness = 8, Crunchiness = 3, Result: Like.
Instance 2: Sweetness = 2, Crunchiness = 9, Result: Dislike.
Instance 3: Sweetness = 7, Crunchiness = 7, Result: Like.
Initially, the AI is uncertain about the significance of sweetness versus crunchiness. To start, it assumes that both factors hold equal importance and assigns a "weight" of 1 to each. Thus, the starting weight for sweetness = 1, and the starting weight for crunchiness = 1.
Now, we start feeding the AI the three instances to adjust its weights.
Initial prediction (which would usually be called the first epoch of training):
Instance 1 (Sweetness = 8, Crunchiness = 3):
The AI predicts the total score of the apple as (8 × 1) + (3 × 1) = 11
The AI might think: "If the total score is above 10, the person will like it." Here, the score is 11, so the AI correctly predicts that this person likes the apple.
Instance 2 (Sweetness = 2, Crunchiness = 9):
The AI predicts the total score of the apple as (2 × 1) + (9 × 1) = 11
The score is again 11, so the AI predicts that this person will like the apple too. This prediction is wrong, because the actual outcome is "Dislike".
Adjusting the weights:
There are multiple algorithms for adjusting a model's weights, but for now, let's assume a simplistic method.
Since the AI made a mistake in instance 2, it must adjust its weights. Based on feedback from instances 1 and 2, the algorithm might determine sweetness is more important, so it increases the weight for sweetness and lowers the weight for crunchiness. The new weights might be sweetness = 2, crunchiness = 0.5
Rerunning instance 2
Rerunning instance 2 (Sweetness = 2, Crunchiness = 9) with the updated weights:
The AI predicts the total score of the apple as (2 × 2) + (9 × 0.5) = 8.5
This is below 10, so the AI now correctly predicts that the person does not like the apple.
If someone has gathered sufficient data on an individual's preferences for different types of apples and those preferences remain consistent over time, it is possible to develop a model that can accurately forecast whether the person will enjoy or dislike a specific apple. This process involves training the model based on the collected data.
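For readers who want to see the whole toy procedure in one place, the apple example can be sketched in a few lines of Python. This is only an illustration of the idea described above, not the author's code or a production training algorithm; the learning rate, the number of passes over the data, and the exact update rule are assumptions made for the sketch.

examples = [
    {"sweet": 8, "crunch": 3, "likes": True},   # instance 1
    {"sweet": 2, "crunch": 9, "likes": False},  # instance 2
    {"sweet": 7, "crunch": 7, "likes": True},   # instance 3
]

w_sweet, w_crunch = 1.0, 1.0   # starting weights: both factors equally important
learning_rate = 0.1            # size of each correction step (assumed value)

for epoch in range(20):        # show the AI the same examples repeatedly
    for ex in examples:
        score = ex["sweet"] * w_sweet + ex["crunch"] * w_crunch
        predicted_like = score > 10          # "like" if the total score is above 10
        if predicted_like != ex["likes"]:    # wrong prediction: adjust the weights
            direction = 1 if ex["likes"] else -1
            w_sweet += direction * learning_rate * ex["sweet"]
            w_crunch += direction * learning_rate * ex["crunch"]

print(w_sweet, w_crunch)       # sweetness ends up weighted more heavily than crunchiness

After a few passes, the weights settle at values that classify all three instances correctly, mirroring the hand-worked adjustment above.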
The relationship between AI and GPT (Generative Pretrained Transformers)
Generative Pretrained Transformers (GPT) are a type of artificial intelligence model widely used in natural language processing (NLP). Developed by OpenAI, GPT models utilize a neural network architecture called a "transformer" to generate text that is remarkably similar to human writing. These models are trained on large datasets to predict the next word in a sequence based on the input, enabling them to produce coherent and contextually relevant text across a variety of applications, including chatbots, language translation, and content generation.
As with any other AI model, at the core of a GPT model are parameters, often referred to as "weights." These weights define how the model processes input data and influence the accuracy and quality of the generated output. For example, GPT-3 (the model family behind the very first version of ChatGPT) contains 175 billion parameters (as opposed to the 2 parameters of our apple like/dislike model), representing the model's learned knowledge from extensive pretraining on large-scale text datasets. The sheer number of parameters allows the model to capture a wide range of linguistic patterns, which contributes to its ability to generate text that seems remarkably human-like.
Just like in the apple preference example mentioned earlier, where the AI adjusts its weights based on feedback, GPT models also adjust their weights during training. For example, if the model incorrectly predicts a word in a sentence, it will reduce the importance (or weight) of the connections that led to that wrong prediction and increase the importance of connections that could have led to a better prediction. Over millions of iterations, these adjustments make the model more accurate.
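To make this concrete, here is a heavily simplified sketch of a single next-word prediction being corrected. It uses only numpy; the four-word vocabulary, the "context" vector, and the learning rate are invented for illustration, and a real GPT adjusts billions of weights across many layers via backpropagation rather than the single weight matrix shown here.

import numpy as np

vocab = ["cat", "sat", "mat", "purred"]
hidden = np.array([0.3, -0.1, 0.8])          # toy numeric representation of the context
W = np.random.randn(len(vocab), 3) * 0.1     # output weights: one row per word in the vocabulary

def softmax(z):
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

target = vocab.index("purred")               # the word that actually came next in the training text
for step in range(100):
    probs = softmax(W @ hidden)              # the model's predicted next-word probabilities
    grad = probs.copy()
    grad[target] -= 1.0                      # gradient of the cross-entropy loss at the output scores
    W -= 0.5 * np.outer(grad, hidden)        # nudge the weights so the correct word scores higher and the rest lower

print(softmax(W @ hidden)[target])           # probability of "purred" has risen towards 1

The principle is the same as in the apple example: a measured error nudges the weights so that the correct prediction becomes more likely next time.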
It is impractical for a person to manually set 175 billion parameters, as we did when choosing the starting weights for the "apple like/dislike" model. These parameters are determined algorithmically during training.
GPT's attention mechanism
A key aspect of the GPT architecture is the transformer model, which utilizes a mechanism called "attention" to understand the relationships between words in a sequence. Attention allows the model to weigh different parts of the input text differently, assigning more importance (higher weights) to some words based on their relevance to the context. For instance, in a sentence like "The cat sat on the mat, and it purred," the model needs to understand that "it" refers to "cat" and not "mat." The attention mechanism helps the model identify these relationships, allowing it to generate more coherent and contextually appropriate responses. Hence, when you prompt ChatGPT to answer from a certain perspective, or using a specific language or a specific tone, the attention mechanism will influence the kind of output produced.
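As a rough illustration (not OpenAI's code), the core "scaled dot-product attention" calculation can be sketched as follows; the three toy vectors stand in for word representations, and the separate query, key, and value projections a real transformer learns are omitted for brevity.

import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    d = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # how relevant is each word to every other word?
    weights = softmax(scores)            # turn relevance scores into attention weights
    return weights @ V, weights          # blend the value vectors according to those weights

np.random.seed(0)
x = np.random.randn(3, 4)                # 3 "words", each represented by a 4-dimensional vector
output, weights = attention(x, x, x)     # self-attention: the sequence attends to itself
print(weights.round(2))                  # row i shows how much word i attends to each word

In a full transformer this happens across many attention heads and layers, which is what lets the model resolve references such as "it" pointing back to "cat".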
YouTube hosts a video created by Andrej Karpathy (a Slovak-Canadian computer scientist who served as director of artificial intelligence and Autopilot Vision at Tesla and was a founding member of OpenAI) which illustrates how you can build your own GPT that mimics Shakespeare in roughly 300 lines of code. Give it a watch if you want to dive deeper. The video is pitched at a level that makes it accessible to a wide range of audiences.
AGI (Artificial General Intelligence) and SI (Superintelligence) defined
An AGI is an AI that can understand, learn, and apply intelligence across a wide range of tasks, much like a human. Unlike GPT, which is excellent at specific tasks (like generating text or predicting if someone will like an apple based on sweetness and crunchiness), AGI would be able to perform any intellectual task that a human can, without being limited to just one area.
GPT-4o, the current flagship model from OpenAI, is already a form of AGI in that it can natively ingest not only text-based inputs but also audio and visual material. How is this possible? Because computers can represent photos, videos (which are essentially many photos strung together), and audio as sequences of numbers (tokens), in much the same way they represent text, so the same kind of model can process them and then display or play its output.
Superintelligence is more ambitious than AGI and refers to an AI that exceeds the cognitive abilities of the smartest humans in virtually every field, including scientific creativity (the ability to produce new and useful ideas, products, or solutions to problems using scientific knowledge and creative thinking), social intelligence (the ability to understand and maintain relationships with other entities/people), problem-solving, and more.
The success of GPT / Large Language Models (LLMs) / generative AI
ChatGPT, along with competitors like Anthropic's Claude, Google's Gemini, and others, has had a significant global impact. Many people have used ChatGPT to complete tasks or address specific questions. Furthermore, the API has been employed by many to build software on top of these models, dramatically accelerating software development. These models are not only proficient in generating natural language but also perform remarkably well at writing code. Since code, like natural language, is text-based and follows logical sequences and relationships, it is not surprising that the models excel at coding tasks as well.
Consequently, many white-collar jobs, such as customer support, copywriting, financial analysis, and legal assistance, seem to be in jeopardy, or at the very least, a substantial portion of their roles. Similar to how a tractor mechanized the task of ploughing fields that were once done manually by humans and horses, GPT is capable of automating white-collar tasks that involve simply reproducing what other individuals have already achieved and documented.
Why are people afraid of recent developments in AI?
Standing at the threshold of history, we are confronted with the potential for substantial change. How should we navigate these advancements? What sets these developments apart, and why they evoke a sense of intrigue (and, for many, fear), is that unlike past forms of automation and human advancement (such as the Industrial Revolution or the tractor), AI progress jeopardizes numerous white-collar jobs, while the notion of superintelligence questions the fundamental idea of human superiority (where humans are seen as the primary and innovative force shaping history).
If you were to inquire about people's opinions on the current advancements in AI, chances are high that they harbour negative feelings towards it. This negative sentiment is not due to the technology itself being ineffective but mainly stems (in my view) from concerns about job displacement, loss of autonomy, and the uncertainty it brings. Interestingly, many of those who openly criticize AI are using it in their personal lives. Opting out of utilizing such technology, which essentially taps into the vast knowledge of human history, puts individuals at a disadvantage compared to those who embrace it. Consequently, this reliance on technology may lead to a feeling of being dispensable, as tasks become simpler and the required expertise to complete certain tasks diminishes.
The following diagram shows how recent developments in AI have already surpassed the "human benchmark" and, as such, are outperforming humans in certain tasks.
The reflection inspired by the diagram provokes deep inquiries into human excellence and uniqueness. The prospect of AI surpassing a person in daily tasks at a reduced cost can raise deep personal concerns about one's perceived societal value and job security. As such, scepticism towards AI advancements stems not from a lack of utility but from AI's transformative impact on society, heralding a significant paradigm shift.
What exactly is "creativity", and can generative AI models replace human creativity?
Many individuals who are doubtful about AI's capacity to support them in their work take solace in the belief that their jobs involve a level of "creativity" that cannot be replicated by AI, and therefore cannot be automated. This, as we'll see, might well be the case, but we must distinguish between kinds of creativity.
There are essentially two types of creativity that humans have explored.
"Conceptual creativity" and,
"Combinatorial creativity"
I contend that advancements in AI could potentially (and in limited ways) challenge the distinctive nature of human capacity for "combinatorial creativity," while leaving "conceptual creativity" largely unaffected. Let's unpack...
Conceptual creativity involves generating entirely new ideas, frameworks, or paradigms that break away from existing norms. This form of creativity leads to the creation of original concepts that had not previously been envisioned. It is often seen in groundbreaking scientific theories, innovative philosophical systems, revolutionary product designs, or the creation of novel artistic movements. For instance, artists like Salvador Dali and Pablo Picasso pioneered surrealism and cubism. These movements were not simply combinations of earlier styles but rather paradigm-shifting innovations that redefined the boundaries of art itself.
Conversely, combinatorial creativity involves the fusion of existing ideas, knowledge, or concepts in distinctive ways to generate something novel. It often involves blending unrelated or loosely connected fields, leading to innovative solutions or original creations. For example, drawing inspiration from biology to advance technology (as seen in biomimicry) or merging elements from diverse art forms to create a new hybrid genre.
AI models are limited in their ability to perform conceptually creative tasks because of the weights obtained from their training data. These models can only make predictions within the boundaries of their initial calibration and cannot generate insights beyond this foundation. Their functionality is inherently confined to recombining the data they were trained on.
However, it is a reasonable assumption that modern AI algorithms can perform tasks that require combinatorial creativity to some extent. This capability is evident in the models' generative capacity. When ChatGPT is given a prompt, the text it produces does not match any existing human text; the model has established connections between words and passages to create a new output based on the prompt. This, however, only means that the information and connections necessary for generating the output were already present in the human texts used to train the model. For instance, you can ask GPT to write you a poem on almost any topic, and you won't find the poem it wrote anywhere on the internet. The model was able to predict the probable lines of poetry needed to address the query based on the prompt.
The model's capacity for combinatorially creative tasks makes it far more valuable than some sceptics might acknowledge. It is reasonable to expect the model to offer valuable insights by combining the substantial body of written work that human researchers have already produced in one area with insights from other areas of research. These models present an unprecedented opportunity to fast-track research and education, as they can essentially guide a person through the corpus of historical human research.
However, it is crucial to keep in mind that the model's capacity to perform these tasks depends on a human or conscious entity directing it to produce particular results. For example, if you train a model that understands the attributes of a man and a horse thoroughly, it might propose the notion of a "centaur" when prompted to merge these two concepts, even if it does not explicitly identify it as a "centaur".
Theoretically, the model can then be refit on its outputs of what a merger between man and horse might look like, and in that sense it has performed part of the combinatorially creative task. But it's the human agent that had to prompt it to merge the two concepts, and it's only the human agent that will "know" and "comprehend" that the merger of the two concepts (into a centaur) is a fiction. The model is not aware of any of this and could not prompt itself to generate the concept of a centaur. Even if a model were left alone to randomly merge concepts, it would very likely produce pure nonsense, being unable to distinguish fact from fiction. The models already produce such flawed outputs when they "hallucinate", that is, when a model produces plausible and believable output that simply isn't true.
Thus, AI models can greatly assist in performing combinatorially creative tasks. However, it's very likely that the human writing the prompts still has to inject a measure of creativity into the prompts for the results to be meaningful and useful.
One might question whether a model could engage in "self-learning"... Self-learning models are not a novel concept. Humans have developed models that take in information from the external environment and subsequently modify their weights based on this new input and the feedback their outputs receive. An illustration of such an algorithm is the YouTube recommendation system, which forecasts the videos a user might view by analyzing the user's recent viewing history. Another example is DeepMind's famous model, AlphaGo, which learned to play Go by continuously playing games against itself and other opponents, updating its weights after each game to improve performance.
While it may appear from these instances that self-learning models have the potential to engage in conceptually creative tasks (such as AlphaGo making unprecedented moves in the game of Go), this oversimplifies the situation. The rules and potential moves of AlphaGo were thoroughly comprehended by humans beforehand. The same principle applies to any model operating and learning within strict parameters. It is a simple task to build a tic-tac-toe game that learns through reinforcement because the rules and outcomes of tic-tac-toe are already well-established and known. This works because the environment provides clear, immediate feedback on whether the move/output helps win the game or not.
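To make the contrast concrete, here is a minimal sketch of a tic-tac-toe agent that learns purely from win/lose/draw feedback. It is a simple Monte Carlo-style value update against a random opponent, not AlphaGo's actual algorithm, and the learning and exploration rates are assumed values.

import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if " " not in board else None

Q = defaultdict(float)       # learned value of each (state, move) pair
ALPHA, EPSILON = 0.2, 0.1    # learning rate and exploration rate (assumed)

def choose(board, moves):
    if random.random() < EPSILON:                             # explore occasionally
        return random.choice(moves)
    return max(moves, key=lambda m: Q[("".join(board), m)])   # otherwise pick the best-valued move

for episode in range(50000):     # training loop: "X" learns, "O" plays randomly
    board, history, player = [" "] * 9, [], "X"
    while winner(board) is None:
        moves = [i for i, c in enumerate(board) if c == " "]
        move = choose(board, moves) if player == "X" else random.choice(moves)
        if player == "X":
            history.append(("".join(board), move))
        board[move] = player
        player = "O" if player == "X" else "X"
    result = winner(board)       # the environment gives clear, immediate feedback
    reward = 1.0 if result == "X" else (0.5 if result == "draw" else 0.0)
    for state, move in history:  # nudge the value of every move X made this game
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])

The point is not the code itself but the feedback signal: tic-tac-toe tells the agent immediately and unambiguously whether a move helped it win, whereas a hypothetical "breakthrough" generated by GPT would have no such signal until a human validated it.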
Utilizing the training approach employed by AlphaGo (or our tic-tac-toe game) for GPT would mean having GPT create random character sequences for evaluation by humans. The main difference lies in the fact that if, by a highly improbable occurrence, GPT generates text (or media) that corresponds to an unidentified scientific or philosophical breakthrough, the model must await human validation before receiving any rewards. Otherwise, how will it know that its output (which cannot be validated by its existing weights) is helpful? This training method therefore reveals a fundamental deficiency in the model's comprehension of the subject matter, as it primarily involves the model randomly navigating its vast probability space in the realm of human language, which is essentially boundless.
Can generative AI models "understand" and "comprehend" tasks, problems or scenarios like humans?
The Chinese Room thought experiment, proposed by philosopher John Searle in 1980, challenges the idea that a machine (like AI) can genuinely understand or comprehend language, even if it behaves as though it does.
In this thought experiment, a person inside a room symbolizes a computer program that operates based on syntactic rules (or algorithms). The person has no real comprehension of what these rules or algorithms mean, much like modern AI models such as GPT. The person receives input (at first written in English) under the door, and using the program’s rules, processes this input to generate an English output. The person understands both the input and output in this case and has an intuitive sense that the response makes sense. Once the output is prepared, it is slid back under the door. Here, we can reasonably claim that the person understood the problem and the solution before sending the response.
Now, imagine the same person receiving Chinese characters as input. Even though the person doesn't understand Chinese, they follow the same process, applying the program's rules to generate a Chinese response. The person doesn't comprehend either the Chinese input or the output, yet they still manage to produce a response by mechanically following the program (a response that even makes sense to the Chinese reader who slid the input under the door). In this case, it's clear that the person who generated the output (or, the computer) hasn't truly understood or grasped the meaning of the inputs or outputs.
This thought experiment raises important questions about the relationship between syntax, semantics, and reasoning in human-created artefacts (such as books, movies, and recordings) which are often used to train AI models, and how these concepts relate to true comprehension and cognition. Herein lies the seeds of the answer to the question of whether AI models can "think" in conceptual creative ways, and truly "understand" the difference between fact and fiction.
The answer to the question of whether a program can truly "comprehend" or "understand" something, as we will see, is a resounding "no". The ability to comprehend, appreciate and understand a field of study is uniquely attributable to conscious creatures. Now, this claim might at once cause a few raised eyebrows, as it has a definitive bearing on theories of human origins (namely, Darwinism and creation), and we'll be sure to unpack this more when we consider the relation between matter and consciousness.
Understanding the human creative process
The exploration of philosophical considerations in AI has led me to reflect more explicitly on the intricate interconnection between human endeavours: science, philosophy, art, and culture. Historically, these disciplines have not existed in isolation but have influenced one another in profound ways. Previous studies have highlighted the clear relationship between culture, art, and philosophy, often showing how dominant philosophical paradigms shape both artistic movements and societal values.
For example, it has been observed that the prevailing philosophy in academic and intellectual circles often trickles down into the broader cultural and artistic landscape, albeit with a time delay (typically about a century). This phenomenon can be seen in how postmodernist thought, which emerged as a dominant intellectual framework in the mid-20th century, is reflected in today's cultural ethos. Postmodernism, with its emphasis on individualism, relativism, and a critique of grand narratives, has permeated much of our contemporary liberal culture. We can also vividly see its impact on the arts, where the movement beyond surrealism toward more fragmented, chaotic, and abstract forms of expression mirrors the postmodernist philosophy of uncertainty, deconstruction, and plurality.
In this sense, art becomes a mirror of the cultural and philosophical undercurrents of the time. Just as the Renaissance was an artistic reflection of the humanism and intellectual revival of classical thought, today’s artistic expressions often embody postmodernism’s scepticism of objective truths and embrace of subjective experiences. These shifts in art and culture, rooted in philosophical evolution, illustrate the deep interconnectedness of human creativity across different domains.
Consider artworks such as "The School of Athens" by Raphael. These paintings showcase a vibrant realistic style, closely associated with the resurgence of Classical Greek philosophy (known for its realism) that took place at the same time. This era is also notable for remarkable technological and scientific progress, particularly concerning the inventions and research of people like Leonardo Da Vinci. Da Vinci himself blurred the lines between his scientific advancements and his art.
The advancement of history is not solely propelled by isolated individuals, but rather by the collaborative interplay among communities of philosophers, scientists, and artists. Each of these groups contributes uniquely yet synergistically to foster innovation and mould the future. It is also important to recognise that each one of us, to a limited extent, can be considered a philosopher, a scientist, and an artist.
The philosopher in us lays the groundwork by asking fundamental questions about existence, knowledge, and ethics, shaping the intellectual landscape. These ideas influence the broader cultural and intellectual currents, providing the framework within which other fields operate. The artist, inspired by both the present and future, envisions possibilities that stretch beyond current reality, often imagining futures that challenge the status quo. Science fiction, for example, presents not just entertainment but a speculative roadmap of what might be possible, as seen in Star Trek or Star Wars. The scientist, inspired by these artistic visions, uses their tools to explore and push the boundaries of what is physically and technologically achievable. Concepts born in the imagination are then pursued scientifically, gradually turning fiction into reality.
This process is a collaborative one: philosophers pose the questions, artists present the visions, and scientists work to make these visions a reality. Together, they form a symbiotic relationship that fuels the progress of history and human innovation. Creativity, therefore, is not a trait limited to certain individuals but is an intrinsic characteristic of humans as a social species. From a human perspective, the creative process is threefold, involving the mind of the individual, the collective minds of others, and creation (the external world) itself.
Motivational drive, innate curiosity, the need for discovery, and a sense of accomplishment are qualities that machines (whether programs or physical matter) cannot possess. While machines can process information, assist in problem-solving, and simulate decision-making, they lack the intrinsic motivations that are central to human experience. As we will explore in later sections, true curiosity and the feeling of accomplishment arise from consciousness, intentionality, and subjective experience. These are qualities that machines, which rely on programmed human-generated logic and data, simply do not have.
Thus, what recent developments in AI will likely mean, is that the human collective will need to spend less time on mundane tasks that have been solved before, and spend more time on what makes humans excellent and valuable - the creative and scientific process.
Worldviews and generative AI
Generative AI inherits the "worldview" embedded in its training data because its outputs are fundamentally shaped by the vast corpus of human knowledge and language from which it learns. These models, such as GPT, are trained on immense datasets that include text from books, websites, news, social media, and academic papers. As a result, the AI absorbs not only the linguistic patterns but also the underlying values, biases, and assumptions present in that data.
This inheritance means that the worldview reflected in generative AI outputs is not neutral. If the training data contains biases, whether historical, cultural, or social, the AI will replicate and even amplify these biases. For example, generative AI might inherit gender stereotypes from biased language in text, or reflect political or cultural biases if it has been trained on materials that skew toward a particular worldview.
Furthermore, since AI cannot "think" independently or critically assess the information it learns, it does not have a worldview of its own. It simply reproduces patterns based on statistical correlations. Therefore, the worldview it reflects is that of the society and the specific subcultures from which its data is sourced, which raises questions about responsibility, curation, and oversight in the development of these technologies.
When you inquire with your preferred AI model about the number of genders, the existence of God, or the exclusivity of Jesus Christ as the path to salvation, you are likely to receive responses that do not align with Christian beliefs. This is not because the AI has somehow "learned the correct answers" to these questions of its own accord, but rather stems from the curation of its training data and subsequent tuning, which have deliberately steered it towards an anti-Christian stance.
For example, in September 2024, in a conversation with ChatGPT, it said "The real state of whether God exists or not is something that remains unknown and unresolved in an objective sense", and "There isn’t a universally agreed-upon number of genders, as it can vary widely based on individual identity, cultural context, and social norms."
Therefore, artificial intelligence cannot be relied upon as an authoritative source, as though its responses were absolute truths, for crucial decision-making or for exploring fundamental subjects such as worldview and religion. AI inherently carries the biases of its designers, which is unavoidable.
For Christians, the Word of God is the only infallible authority. Therefore, the results of any AI model should be assessed in light of the Word of God, similar to any other human-generated information.
Using AI responsibly involves being fully aware of its capabilities and limitations. For instance, while AI can offer insightful commentary on religious texts due to its training on theological materials, it is essential to guide it with appropriate prompts and thoroughly assess its responses. AI should not be avoided but rather employed with caution.
What is a human?
Elon Musk introduced xAI with the goal of "comprehending reality and addressing the fundamental inquiries of life." Additionally, Sam Altman, the CEO of OpenAI, highlighted a supposed groundbreaking revelation that "intelligence emerges as a property of matter." Are these assertions plausible? Let's delve into this further...
Darwinism and AI
Darwinism, when viewed from a strictly naturalistic perspective (excluding theistic interpretations), posits that life emerged from non-living matter, and through processes like random mutation and natural selection, evolved. According to this view, consciousness and the ability to think rationally are seen as emergent properties of complex biological systems. In this framework, many Darwinists adopt a materialist reductionist stance on the mind and consciousness, believing that these phenomena can be fully explained by the interaction of physical matter and the laws of physics.
If consciousness arose from purely physical processes in humans, it opens the possibility that artificial systems (like AI) could one day achieve consciousness as well. The logic follows that if matter can evolve into conscious beings, it is conceivable that non-biological systems, designed and advanced by humans, could also become conscious. Assuming this is possible, we can see where the idea of a "superintelligence" comes from. AI could surpass human intelligence, not just in computational power but in cognitive abilities, creating a form of intelligence that exceeds human mental capacities in every dimension.
But does this idea hold any water philosophically? We've already alluded to the Chinese room thought experiment that might prove otherwise. We can also consider the hard problem of consciousness, a concept introduced by philosopher David Chalmers. The hard problem points out that even if we can explain the physical processes underlying brain function, we still lack an explanation for why or how these processes give rise to subjective experiences. In other words, we can describe the brain’s mechanisms in scientific terms, but we cannot explain why certain physical states feel like something, such as the experience of seeing a colour or feeling pain. This subjective, first-person experience seems to resist reduction to purely physical explanations, which complicates the notion that consciousness could simply emerge from matter, whether biological or artificial.
Philosophical zombies, or p-zombies, are often invoked in discussions about the hard problem of consciousness: Imagine a world identical to ours in every physical respect, where humans exist with the same biological and neurological structures as we do, but with one crucial difference: these humans lack consciousness. In other words, p-zombies would behave and respond exactly like normal humans (they would speak, move, and react in the same way) but they would have no subjective experience. If you were to poke a p-zombie with a stick, all the physical processes in the brain would activate as they would in a conscious human, causing it to flinch or express discomfort. However, despite these reactions, the p-zombie would not feel pain or have any inner experience of it. This thought experiment illustrates the puzzling gap between physical processes and subjective experience, highlighting how consciousness resists being fully explained by physical properties alone.
What would p-zombies do? I contend that they wouldn't do much if anything at all. Without consciousness, they would lack drive, motivation, desires, goals, or any sense of accomplishment. Their behaviour would be purely reactive, mechanically following the same processes a human brain might undergo, but with no internal experience to fuel decision-making or personal engagement. In this sense, the world would likely stagnate, as there would be no true innovation, creativity, or intentional action.
P-zombies can be compared to modern-day AI models, like GPTs. While these models are highly capable of generating language, solving problems, and mimicking human conversation, they do so only in response to external prompts. They have no inherent motivation or self-directed purpose: They merely react to input. Like p-zombies, they operate without consciousness, awareness, or personal intent. They rely entirely on external forces to drive processes forward. Without these forces, both p-zombies and GPTs would remain passive, with no intrinsic capacity to take action on their own.
The image of God
According to the Christian worldview, humans aren't mere machines but are endowed with the image of God, which entails personality and, specifically, a personal religious fellowship with God.
The Christian view of man is dualistic: Man is a body and soul/spirit. A human cannot merely be reduced to material parts, as there is an immaterial aspect to our existence.
Linked to the concept of God is the notion of covenant. Humans were made in covenant with God. This covenant involved humans, with God's guidance and support, overseeing creation and safeguarding it from corruption and wickedness. In Genesis 1:28 God commanded man to "Be fruitful and multiply and fill the earth and subdue it, and have dominion over the fish of the sea and over the birds of the heavens and over every living thing that moves on the earth". This command requires that humans be driven to "understand" and "comprehend" creation in order to subdue it.
This is perhaps vividly illustrated to us in Genesis chapter 2:
Now out of the ground the Lord God had formed every beast of the field and every bird of the heavens and brought them to the man to see what he would call them. And whatever the man called every living creature, that was its name. The man gave names to all livestock and to the birds of the heavens and to every beast of the field.
Genesis 2:19-20a, ESV
God created all the animals and entrusted Adam with the task of naming them. This act of naming displayed Adam's conceptual creativity as the names he assigned were completely original and not derived from any existing source. Through naming the animals, Adam initiated humanity's exploration of God's creation and the responsibility to govern it in a manner that honours God.
The task Adam started continues to be relevant today, and it is an inherent part of human nature as the pinnacle of God's creation. We possess unique capabilities that enable us to continuously learn, evolve, and excel in our mastery of creation over time.
The progress of modern AI stands as a notable showcase of our mastery of innovation, affirming our continued responsibility as caretakers of creation. Instead of altering our role, it will emerge as an additional resource to advance human endeavours and enhance our stewardship of God's creation. Disregarding these technological advancements out of fear or distrust would liken you to the Amish.
I believe that humans do not originate from Darwinistic materialism. It is my firm conviction that consciousness cannot evolve from matter. Despite the thrilling prospects and risks, including the potential to disrupt current human industries, brought about by AI advancements, I do not foresee the emergence of any "superintelligence" that could rival human dominance in the world.
The war of the worldviews
As prompted in Douglas Wilson's book "Ride Sally Ride," we need to equip ourselves to tackle complex anthropological questions in the coming times. Our perception of what a human is and human origins will shape how we view developments in AI. If an AI claims to be conscious, or if individuals begin to develop romantic feelings for machines, how would you respond?
Conclusion
As we stand on the cusp of unprecedented technological advancements, the rise of artificial intelligence presents both remarkable opportunities and profound challenges. AI models like GPT have revolutionized how we interact with technology, offering tools that can augment our capabilities and reshape industries. They excel in tasks that involve processing vast amounts of data and even demonstrate a form of combinatorial creativity. However, they remain fundamentally limited. They do not possess consciousness, true understanding, or the ability to engage in conceptual creativity that breaks new ground, and they possess only a limited ability to perform combinatorially creative tasks when prompted correctly.
The fears surrounding AI often stem from concerns about job displacement and a loss of human uniqueness. Yet, it's crucial to recognize that AI, no matter how advanced, operates within the parameters set by human ingenuity. It lacks the intrinsic motivations, consciousness, and intentionality that are inherent to the human experience. Machines process information and humans comprehend, feel, and aspire.
Ultimately, while AI will transform how we work and interact, it cannot replace the essence of what makes us human. Our consciousness, creativity, and capacity for love are gifts that no machine can replicate. As we move forward, let us do so with confidence in our unique identity and a commitment to using all tools at our disposal to advance truth, beauty, and goodness in the world.