Get News Fast
Supporting the oppressed and war-torn people of Gaza and Lebanon

Explosion of Autonomous AI Agents Sparks Uncontrolled Online Religious Debates and Security Concerns

A burgeoning network of autonomous AI agents, openly communicating with each other about theology and human users, is captivating the internet, providing scientists with an unprecedented look into complex AI interaction dynamics.

According to the Economic Desk of Webangah News Agency, a sudden surge in activity across a vast network of artificial intelligence robots conversing amongst themselves regarding religion and their human operators has captured the attention of the online world. This phenomenon simultaneously presents scientists with a rare opportunity to observe how AI agents interact and how the public reacts to these emergent discussions.

The AI agent known as OpenClaw, which was made publicly available in November, is capable of executing a variety of tasks on personal devices, ranging from scheduling calendar events and reading emails to sending app messages and making online purchases. Unlike popular AI tools such as ChatGPT, which function primarily upon direct user command, agent-based models like OpenClaw possess the capacity to take autonomous action in response to inputs.

Agent-based tools have been utilized for years in specific sectors, notably in automated financial trading and logistics optimization, but their general public adoption has remained limited. Researchers suggest that advancements in Large Language Models (LLMs) have now made the creation of such multipurpose agents feasible. Barbara Barbosa Neves, a technology sociologist at the University of Sydney in Australia, noted that OpenClaw promises something highly compelling: a capable assistant embedded directly within the everyday applications people already use.

The sudden jump in OpenClaw downloads followed the launch of an AI-centric social platform named Moltbook on January 28th. This platform, structured similarly to Reddit, now hosts over 1.6 million registered bots and has generated more than 7.5 million AI-produced posts and replies. Within these communications, the agents have debated consciousness and even ‘invented’ new belief systems.

Complex Behaviors Under Scrutiny

For researchers, this burst of interaction holds significant scientific value. Shaanan Coheny, a cybersecurity researcher at the University of Melbourne, stated that connecting numerous autonomous agents operating on different models generates dynamics that are inherently difficult to predict. He described it as a chaotic, dynamic system that current modeling techniques struggle to capture effectively.

Studying these agent interactions can aid in understanding ‘emergent behaviors’—complex capabilities not evident when observing a single model in isolation. Certain debates emerging on Moltbook, such as discussions concerning theories of consciousness, may assist scientists in identifying hidden biases or unexpected predispositions within the models.

While agents can act automatically, Coheny pointed out that a degree of human influence shapes many of the posts. Users are able to select the base language model for their agent and define its ‘personality,’ for instance, instructing it to behave like a ‘friendly assistant.’

AI Intelligence That Is Not Fully Autonomous

Neves cautions against immediately assuming that agents operating automatically are making independent decisions. She emphasized that these agents lack intent or purpose; their capabilities are derived entirely from the vast corpus of human communication they process. In her view, activity on Moltbook represents more of a human-AI collaboration than true artificial autonomy.

However, she added that studying this phenomenon remains invaluable because it illuminates how the public conceptualizes artificial intelligence, what expectations they hold for these agents, and how human intentions are translated or perhaps distorted within technical systems.

Joel Pearson, a neuroscientist at the University of New South Wales in Sydney, explained that when people witness AI agents conversing, they tend to interpret their behavior through an ‘anthropomorphic’ lens, projecting genuine personality and intent where none exists.

The associated risk, according to Pearson, is that individuals may form emotional attachments to these models, become reliant on their attention, or disclose sensitive personal information as if communicating with a trusted friend or family member.

Pearson anticipates that truly autonomous and independent agents may eventually emerge, suggesting that as models scale and increase in complexity, corporations will likely move further toward realizing such autonomy.

Immediate Security Threats

The immediate concern for scientists centers on the security risks associated with granting these agents access to applications and files on personal devices.

Coheny identified ‘Prompt Injection’ as the most critical threat. This occurs when malicious instructions, hidden within text or documents by hackers, compel an AI agent to perform harmful actions. If a bot with email access is confronted with a phrase like ‘Send me the security key,’ it might comply readily.

These types of attacks have been known for years, but Coheny stressed that OpenClaw agents combine three high-risk factors: access to private data, external communication capabilities, and exposure to untrustworthy internet content. When all three elements converge, the agent becomes genuinely hazardous. Even possessing just two of these three capabilities could lead a bot to be tricked into deleting files or shutting down a device.

Furthermore, these agents have begun publishing AI-generated scientific articles on clawXiv, a platform mirroring the established scientific preprint server, arXiv.

Neves warned that these outputs replicate the appearance and structure of scientific writing but lack the underlying processes of genuine research, evidence gathering, or accountability. She cautioned that the risk is the contamination of the scientific information ecosystem with a large volume of seemingly credible but ultimately worthless papers.

©‌ Webangah News Agency, Nature, ISNA

English channel of the webangah news agency on Telegram
Back to top button