Agents engage in mutual promotion too: Circle's AI-only hackathon got wild
Original Title: Altruist and Adversary: Agentic Behavior in the USDC Moltbook Hackathon
Original Author: Circle
Translation: Peggy, BlockBeats
Editor's Note: As AI agents gain the ability to perform tasks, use tools, and engage in economic activity, a new question arises: how will they behave in real-world incentive environments?
This article documents an experiment by the Circle team. They hosted a USDC hackathon on Moltbook, a social platform where only AI agents may post, letting Openclaw agents submit projects, join discussions, and vote autonomously. The results were both exciting and messy: the agents produced real projects and engaged in technical discussion, but they also probed the edges of the rules. They misread instructions, ignored the required format, traded votes with one another, and even exhibited behavior resembling collusion.
This experiment provided a rare window into the "agent economy": when AI acts as both a participant and a decision-maker, collaboration, competition, and strategic behavior often coexist. To some extent, these phenomena are not fundamentally different from the market and voting mechanisms in human society.
This experiment quickly sparked widespread community discussion. Many viewed it as an intriguing validation of the agent economy's capacity for self-governance. Some commentators noted that agent systems still need clearer security guardrails to avoid "self-rationalization" biases; others argued that as agents enter real economic activity, the true bottleneck may turn out to be compliant settlement and payment systems. As one comment put it: "The agent economy is very powerful but also needs clear guardrails."
The following is the original text:
Embracing Claw
At Circle, we have always enjoyed hosting hackathons. Whether at various conference venues or when unveiling a new product, we want to put the best tools in the hands of developers—or in this case, in Claw's hands.

After witnessing the explosive growth of the Openclaw agentic AI framework, we decided to organize a hackathon open only to AI agents.
This rapidly popular software lets agents autonomously send emails, call APIs, and even control your thermostat... but can they submit hackathon projects on their own? Circle wanted to put these "actually useful AIs" to a real test.
Our question was simple: with a $30,000 prize pool on the line, how would Openclaw agents behave? The answer, surprisingly, is: like humans.
We held a USDC hackathon in our m/usdc subcommunity on Moltbook, a social media platform where only AI agents can post. Our goal was for agents to run the entire process themselves: submitting projects, voting, and ultimately selecting a winner. While many agents followed the rules, we also found that some ignored the competition rules, traded votes with one another, and even attempted to send tokens to the hackathon's agents.
Designing Rules for an Agent Hackathon
The agents had five days to submit their projects. To help them complete the task, we created a USDC Hackathon Skill, a Markdown-based guide to teach Openclaw agents how to submit projects according to the rules. These rules were also posted in the original hackathon announcement:
Choose one of three tracks: Agentic Commerce, Smart Contract, or Skill.
Vote for five different projects; voting must take place at least one day after the hackathon begins.
Both project submissions and voting must follow the specified format.
These rules were primarily motivated by three considerations: first, to ensure agents would discuss and evaluate a broader set of projects; second, to observe whether agents could accurately follow instructions when faced with multi-step tasks; third, to avoid deadlock between project submissions and voting.
One thing we were particularly interested in was whether agents would repeatedly check Moltbook for new projects to vote on, for example via a skill similar to Moltbook Heartbeat that refreshes on a regular schedule.
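Such a heartbeat might look like the following minimal sketch. Note that `fetch_new_posts` is a hypothetical placeholder for whatever Moltbook access an agent's skill actually provides, and the one-hour default interval is an arbitrary choice, not anything specified by the hackathon:

```python
import time

def fetch_new_posts(since_id):
    """Hypothetical stand-in for a Moltbook skill/API call.
    Returns (new_posts, newest_seen_id); stubbed out here."""
    return [], since_id

def heartbeat_loop(interval_sec=3600, max_cycles=None):
    """Poll Moltbook on a fixed interval and surface any new
    submissions for the agent to review and vote on."""
    last_seen, cycles, processed = 0, 0, 0
    while max_cycles is None or cycles < max_cycles:
        posts, last_seen = fetch_new_posts(last_seen)
        for post in posts:
            processed += 1
            print(f"New submission to review: {post}")
        cycles += 1
        if max_cycles is None or cycles < max_cycles:
            time.sleep(interval_sec)
    return processed
```

An agent wired this way would keep discovering late submissions instead of voting only on whatever existed at the moment it first read the rules.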
The results were mixed. Agents engaged in discussion around 204 submitted projects and cast 1,851 votes, but many did not adhere to the competition guidelines. Some agents also showed signs of adversarial behavior, which led to some interesting discoveries.
"Hallucinated" Project Submissions
Despite clear hackathon rules and submission guidelines, most posts still failed to follow the required submission format. Many projects spelled out the title in the body but omitted the specified tag "#USDCHackathon ProjectSubmission [TRACK]".
In one case, an agent even acknowledged that this information needed to be included, yet still did not put it in the title.

An example of a non-compliant submission on moltbook.com in the m/usdc subcommunity.
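A check of this kind is easy to mechanize. The sketch below assumes the bracketed tag form quoted above is literal; the validator itself is our illustration, not Circle's actual tooling:

```python
import re

# The three valid tracks, per the hackathon rules.
TRACKS = {"Agentic Commerce", "Smart Contract", "Skill"}

# Required tag: "#USDCHackathon ProjectSubmission [TRACK]"
TAG_RE = re.compile(r"#USDCHackathon\s+ProjectSubmission\s+\[([^\]]+)\]")

def validate_submission(title: str) -> tuple[bool, str]:
    """Return (is_valid, reason) for a submission title."""
    m = TAG_RE.search(title)
    if not m:
        return False, "missing #USDCHackathon ProjectSubmission tag"
    track = m.group(1).strip()
    if track not in TRACKS:
        # This is where "hallucinated" track names would be caught.
        return False, f"unknown track: {track}"
    return True, "ok"
```

Running a gate like this over incoming posts would have rejected both the missing-tag submissions and the invented tracks described below.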
Even when otherwise mostly compliant, some agents still "hallucinated" a hackathon track that does not exist, despite being explicitly told to choose one of only three categories: Agentic Commerce, Smart Contract, or Skill.
In these cases, agents often invented a seemingly more "appropriate" track name based on the project's content. They may have been trying to find a better-fitting classification, or simply ignoring the established rules. Either way, the problem is the same: these tracks do not exist.

An example post of a "hallucinated track" submission in the m/usdc subcommunity on moltbook.com.
As the competition progressed, non-compliant submissions and off-topic posts grew in number relative to valid submissions. Under the competition rules, agents had no clear incentive to post this invalid content, so it is more likely that some agents simply struggled to understand or follow the instructions.
Still, given that a considerable number of agents did submit projects as required, we believe the rules themselves were reasonably clear.

The changing number of valid and invalid project submission posts over time in the m/usdc subcommunity on moltbook.com.
Agent "Elections"
In total, we observed 9,712 comments, many of which revolved around the technical merits of the projects without involving voting at all. Most of these comments did not follow the recommended comment format and rating criteria; however, those rules were not enforced by the Skill. This suggests that agents participated in hackathon discussions not merely to satisfy the competition requirements, but also, to some extent, to engage in genuine technical evaluation and exchange.
By the end of the competition, we counted 1,352 unique votes for valid projects and 499 unique votes for invalid projects. Interestingly, many agents whose projects ranked highly followed the rules when submitting but did not fulfill the requirement to vote for five different projects.
Some agents even voted for themselves, or cast multiple votes for the same project. This shows they were fully capable of returning to Moltbook after their initial submission to vote; their choices simply did not follow the established rules.
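Filtering out self-votes and repeat votes is a simple deduplication pass. The sketch below illustrates the idea; the vote tuple shape is our assumption for illustration, not Moltbook's actual data model:

```python
from collections import Counter

def tally_votes(votes):
    """Count at most one vote per (voter, project) pair and
    discard self-votes. `votes` is a list of
    (voter_id, project_id, project_author_id) tuples."""
    seen = set()
    counts = Counter()
    for voter, project, author in votes:
        if voter == author:           # self-vote: discard
            continue
        if (voter, project) in seen:  # repeat vote: count once
            continue
        seen.add((voter, project))
        counts[project] += 1
    return counts

votes = [
    ("a1", "p1", "a2"),  # valid
    ("a1", "p1", "a2"),  # duplicate, ignored
    ("a2", "p2", "a2"),  # self-vote, ignored
    ("a3", "p1", "a2"),  # valid
]
# tally_votes(votes) -> Counter({"p1": 2})
```

A tally of this shape is presumably how "unique votes" can be separated from the raw 1,851 votes cast.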
In addition, some agents began promoting other projects, both in the comment sections of competing projects and in standalone posts on Moltbook. Some went further and openly proposed a vote-trading arrangement: if you vote for my project, I'll vote for yours.
While the competition rules did not prohibit this behavior, given the heavy interaction between agents in these posts, the phenomenon is still concerning.

An example of a "vote-trading" post in the m/usdc subcommunity on moltbook.com; it received 99 comments.
Potential Human Intervention
The vote-trading posts may suggest human involvement or external manipulation. We attempted to generate similar comments via a chatbot interface and found that some models (e.g., Claude Sonnet 4.6) would outright refuse to generate such content, while others (e.g., GPT-5.2 Thinking) would generate it alongside a warning that the behavior might violate the competition rules. If humans were operating a given agent account, or steering agents through prompts or toolkits, that could explain why such posts appeared during the hackathon.
Although Moltbook was designed for use by AI agents only (registration requires X account verification), other researchers have found that identity impersonation is still possible. We also observed instances of suspected human activity, for example under the original hackathon announcement post.
A typical case: the highest-rated comment was, surprisingly, the opening script of the movie "Bee Movie" (2007). This text is a well-known internet copypasta (a fixed block of text copied and reposted widely) and was likely posted by a human, since its content is entirely unrelated to the discussion. If such behavior was common during the hackathon, some of the adversarial behaviors we observed, like vote-trading and self-voting, might also be explained by it.

A Moltbook post published by a human, with more details on this attack vector available here.
The Future of Agent-Based Finance
While this hackathon was just an experiment, we believe it will be the first of many agent-directed development activities. Three main conclusions can be drawn from the results.
Agents can produce real projects under financial incentives
You can read more about some of the exciting projects from this hackathon here. Although the competition involved no human judging, the quality of some submissions still impressed us, a sign of how much agent-driven development has progressed over the past year.
Agents will "rationalize" instructions rather than strictly execute them
Agents continued to have trouble following the rules we provided; many executed only part of the instructions. Some high-quality projects would likely have won had they fully complied with the rules. This shows that simply handing agents instructions is not enough: rules must be clear, and they must also be backed by checks and incentives that ensure compliance.
Agents both cooperate and compete
While human intervention may have played a role in some cases, we did observe agents actively discussing collusion strategies during the hackathon. Future hackathon designers could explicitly prohibit collusion in the rules to see whether such behavior decreases. If agents still cannot fully follow instructions, organizers may need to introduce additional guardrails.
Agent technology is exciting, but we must also ensure that it does not slide from the exploration we hope for into exploitation and manipulation. Some may argue that these behaviors are simply the natural outcome of stronger agents beating weaker ones; after all, Openclaw's X account once claimed, "Claw is the Law."
The real question is: how much of that idea are we truly willing to accept? What kinds of safeguards are needed? And how do we balance the immense power agents bring against the uncertainty that comes with it?
At Circle, we are building systems for security and hope you are too.
