ronwdavis.com

Challenging the Foundations of Artificial Intelligence


Introduction to AI's Limitations

The notion of achieving genuine computer intelligence through current data-driven methodologies is highly debatable. The primary function of these data-centric programs seems to be the replication of words through a selection process. It raises the question: how can true intelligence emerge from mere word imitation, aside from sheer luck?

Upon examining present-day AI algorithms, it becomes evident that what we see is essentially an advanced copying mechanism. Such copiers do not create; they reproduce what they encounter.
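To make concrete what such a selection process looks like, here is a minimal, hypothetical sketch of next-word selection over a toy vocabulary. The scores are invented purely for illustration and bear no relation to any real model; the point is only that the output is a word chosen from a weighted list, not a product of understanding.

```python
import math
import random

# Toy "language model": hand-written scores for which word might
# follow a prompt such as "the cat sat on the". These numbers are
# invented for illustration only.
next_word_scores = {"mat": 2.0, "roof": 1.2, "moon": 0.1}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

def pick_next_word(scores, rng):
    """Sample one word in proportion to its probability."""
    probs = softmax(scores)
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
print(pick_next_word(next_word_scores, rng))
```

Whatever word is printed, it was drawn from the fixed list above by weighted chance; the mechanism selects and reproduces, it does not originate.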

The existing strategies in AI are bound to reveal their shortcomings over time. This approach is less a scientific endeavor and more a fantastical quest aimed at extracting financial resources until the anticipated next "AI winter" arrives, at which point funding will likely diminish. Various observers have pointed out that current models are already showing signs of these limitations. As long as the public clings to this version of reality, investments in this misguided journey, characterized by neural networks and generative AI, will continue unabated. The enterprise largely consists of individuals manipulating vast datasets to devise novel ways of "playing" with information. Often, this data is obtained from its original creators without consent. OpenAI, for instance, offers justifications for the practice, yet such theft remains unjustifiable.

The results produced by these systems lack a foundation in robust theoretical frameworks. The phrase "standing on the shoulders of giants" does not apply to the way modern AI has been constructed.

Hallucinations in AI

One major issue plaguing current AI systems is the phenomenon known as hallucinations. Why do these systems display such behavior? Can today's data scientists elucidate the reasons behind this occurrence? If an individual experiences hallucinations, we understand that there's an underlying issue. In such cases, intervention—anchored in scientific understanding and experience—is usually pursued.

However, with AI, there is no consensus on what constitutes intelligence. Without a clear definition, how can any objectives be established?

Furthermore, the so-called "testing" conducted by data scientists falls short of the rigorous standards implemented in the software industry. If any credibility is to be assigned to this endeavor, the results of these purported tests must be disclosed. Otherwise, the entire initiative lacks validity. We would not accept even a rudimentary accounting system without substantial testing.

Currently, we are unable to explain the hallucinations exhibited by these programs. These are simply computer programs, devoid of any unique characteristics. They operate based on a set of instructions that dictate how computers should behave. Programmers express concern when a computer yields an unexplainable result, as they are expected to have a comprehensive understanding of the program's functions. Such unexpected behaviors are classified as bugs or errors.
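The point that a program's behavior follows entirely from its instructions can be illustrated with a small sketch. Even apparently "random" sampling inside a program is driven by a seeded pseudorandom generator, so running the same instructions with the same seed reproduces the same output exactly. This example is illustrative, not drawn from any particular AI system:

```python
import random

def sample_words(seed, vocabulary, n=5):
    """Draw n words using a pseudorandom generator with a fixed seed.
    The output sequence is fully determined by the seed plus the
    instructions: nothing here is spontaneous or unexplainable."""
    rng = random.Random(seed)
    return [rng.choice(vocabulary) for _ in range(n)]

vocab = ["alpha", "beta", "gamma", "delta"]
run1 = sample_words(42, vocab)
run2 = sample_words(42, vocab)
assert run1 == run2  # same instructions + same seed -> same output
print(run1)
```

An output that cannot be traced back to the instructions and inputs in this way is, by the ordinary standards of software engineering, a bug.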

The term "bug" signifies an error within the program, arising from incorrect specifications made by the programmers. A significant amount of time is dedicated to both creating these programs and identifying and rectifying bugs. During this process, evidence is gathered to confirm that the system functions as intended. Companies such as OpenAI must provide substantial proof of thorough testing and the correction of program bugs.

The Consequences of Ignoring Bugs

Unfortunately for the public, those using this largely untested software are often unaware of the potential repercussions. In any other branch of the software industry, rigorous testing would be mandatory; why is it waived for programs purportedly achieving extraordinary feats? Does the perceived success of these systems excuse the lack of testing? Why is it not a priority to investigate the causes of bugs within neural networks? Why do we accept these anomalies as part of the evolution of computer intelligence? What scientific rationale supports the belief that programming errors signify progress?

Reflecting on the current narrative, why do we regard these bugs as positive attributes rather than flaws in AI software? A common defense against the need for testing is that the reasons behind these issues remain elusive. Another questionable justification posits that such bugs are special "features." This perspective appears to be accepted without scrutiny, especially since the software is generating revenue for AI firms.

OpenAI and similar organizations appear to believe that these errors—what I prefer to call aberrations—will lead to the development of Artificial General Intelligence (AGI), which is often considered the ultimate goal of AI research. They operate under the assumption that intelligence will somehow emerge from program anomalies like hallucinations. It raises the question: can we genuinely expect spontaneous human-like intelligence to arise from software riddled with errors, especially when even the creators seem unaware of the triggers for such behaviors?

I hold firm to the belief that identifying one bug is merely the tip of the iceberg; more bugs are likely to follow. The presence of a single bug may indicate a more profound issue within the program. Given the current architecture of advanced computers, their behavior should be predictable.

From the outset of this era of data-driven intelligence, I've questioned the rationale behind the belief that these programs would autonomously develop intelligence. They are fundamentally just a collection of instructions guiding the computer in performing specific tasks, and these instructions have remained largely unchanged since the inception of programming over six decades ago. Nothing contradicts this fundamental principle.

Regrettably, data scientists seem to think that they have managed to alter this paradigm. They believe their software will somehow deviate from the expected behavior of computers. Their assumption is that the software will evolve, defying the explicit instructions provided by programmers, instead following directives that the computers can "invent." Invention suggests a form of creativity—a trait that computers inherently lack, as they can only execute the given instructions.

If this is indeed the case, I challenge current AI researchers to present the machine instructions that would generate such creative behavior. For this to materialize, there must exist code capable of inducing spontaneous actions. However, this introduces a paradox: genuine spontaneous behavior would necessitate a fundamental alteration of the instructions interpretable by the computer or a modification to its hardware to facilitate new actions. I contest this idea and urge data scientists to clarify how such transformations could occur.

As with any computer program, including those in data science, such new behaviors should be demonstrable. Please substantiate these claims by providing the programming instructions that would enable a computer to modify its own hardware. Humans, after all, can adapt their own thinking, a capacity associated with Human General Intelligence (HGI); isn't replicating it the rationale behind investing so heavily in data-driven AI?

If AI researchers fail to demonstrate and validate the requisite code, then it stands to reason that contemporary computers will not achieve this outcome. Without a shift in the current trajectory of AI, we are indeed heading toward a harsh AI winter.

Randy Kaplan, PhD, Computer Science

If you found this article insightful, please share your thoughts. If you didn't, I'd appreciate your questions and feedback. Your engagement means a lot to me.

Don't forget to clap for my articles, regardless of your opinion! Subscribe to stay updated on my work and foster a deeper connection between us. Thank you for taking the time to read my thoughts.
