AI could destroy crypto within five years | Opinion


Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news’ editorial.

AI has been evolving at breakneck speed and is becoming an indispensable tool across many industries, for both business and consumer-facing functions, including finance. Its rapid adoption is largely driven by low barriers to entry: anyone with a computer and an internet connection can interact with AI and build on it. Yet as AI grows, it brings severe risks: AI is already sufficiently advanced to assist human-led hacking through LLMs and AI agents.

Within five years, AI will develop capabilities to hack on another, far more dangerous front: through Artificial General Intelligence. Once AGI is achieved, it won’t be controlled by people. Unless governments and regulators step in now (even then, the chances of controlling it are slim), AGI will escape the confines of its creators and autonomously hack cryptocurrency encryptions… and entire global digital systems.

The rise of artificial intelligence 

AI has seen a dramatic surge in the commercial arena, with AI agents among its most recent iterations. These are finely tuned, complex systems geared to perform specialized tasks and make decisions with minimal human intervention. They are already transforming the tech space and becoming useful everyday tools for many basic human interactions, such as self-driving cars, fraud detection, on-chain trading, and application creation.

The next evolution will be AGI, a complex system that can do everything a human can. It will not need human prompts or intervention; it will simply have real-time autonomy: the ability not only to perform tasks but also to recognise when a task needs performing. This is set to be achieved in the next two years. OpenAI CEO Sam Altman and Elon Musk have made public statements declaring clear roadmaps to achieve AGI; regardless of who gets there first, time will tell whether they become heroes or destroyers of worlds.

Potential evolution of use cases: Friend and foe

Alongside positive innovations, AI is enabling amoral actors. LLMs and AI agents are already being used to assist hacking: LLMs are jailbroken to produce malware and used for targeted scams, such as faking voices in phone calls to trick unsuspecting victims into transferring money. An open-source AI agent can be tasked by malicious actors with finding, tracking, tracing, and infiltrating a target, never stopping until it achieves its goal.

One key vulnerability on the horizon lies in the potential for AGI to break cryptographic encryption. Bitcoin (BTC) encryption hacking is already a growing market, but it is currently limited to the realm of quantum computing, which has an extremely high barrier to entry: it requires assets, resources, and technical capabilities that the average hacker does not possess. To date, crypto encryption hacking has been limited to a small handful of individuals who legally pay quantum computing firms to unlock their wallets and recover funds for a hefty finder's fee.

Encryption hacking will not always be bound by the limits of quantum computing. AGI presents far lower barriers to entry and will leverage technology humanity has not yet discovered.
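To put that barrier to entry in perspective, a back-of-the-envelope sketch shows why classically brute-forcing a Bitcoin private key is hopeless, and why the conversation centres on quantum computing instead. The attacker scale below is a hypothetical assumption chosen for illustration, not a figure from the article:

```python
# Illustrative sketch: why classical brute force cannot break
# Bitcoin key security, leaving quantum approaches as the
# only theorised route today.

SECP256K1_KEYSPACE = 2**256  # size of Bitcoin's private-key space

# Hypothetical attacker: a billion machines, each trying a
# billion keys per second (a wildly generous assumption).
guesses_per_second = 10**9 * 10**9

seconds_per_year = 60 * 60 * 24 * 365
years_to_exhaust = SECP256K1_KEYSPACE / (guesses_per_second * seconds_per_year)

print(f"Years to exhaust the key space: {years_to_exhaust:.2e}")
```

Even under these absurdly favourable assumptions, the search would take on the order of 10^51 years, which is why only a fundamentally different technology, quantum computing today or, as the author argues, AGI tomorrow, changes the picture.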

AI + bad actors = death of crypto

Once AI has advanced to the point where it can break encryption, the number of criminals using this tool will increase dramatically. Usage will extend well beyond innocent crypto investors trying to recover funds: hackers will deploy AI agents sufficiently advanced to target and exploit these weaknesses in our financial systems.

This is particularly worrying because many amoral groups that lack quantum computing capabilities are actively interested in the weaponisation of AI. The North Korean Lazarus Group, infamous for its financial hacks, poses a real danger. We are looking at effects far more devastating than scams and theft: North Korea would use such a tool as a digital weapon, destroying existing financial systems.

Beyond the weaponisation of AI by unfriendly states and actors lies the potential for AGI to go rogue from its creators. Once that happens, one of its first moves, after decentralizing itself across digital networks to become untraceable and avoid being switched off, would be to gather financial resources. The most likely target is the financial markets, specifically crypto. It would try to access digital capital by tracking high-frequency traders, forging accounts, and hacking online bank accounts. It would also break the encryption protecting Bitcoin and all major cryptocurrencies. In the blink of an eye, it could hack every wallet that has ever existed and immediately secure the assets by selling them on-chain for gold, bonds, and fiat currencies, all before any centralised authority could respond.

If an AI can crack all the private keys for crypto wallets, why would it not do so within minutes, stealing and selling all of the Bitcoin and other cryptocurrencies, and then diversifying into other assets? It would, and it will.

Any hope for crypto (and humankind)?

Without proactive counter-strategies, AGI could dismantle cryptocurrencies and very quickly take control of all human finances. As a complex, non-deterministic system, AGI is a technology where the safety rails imposed by governments are most needed; managing these risks requires exceptional cooperation and sophistication across many actors and institutions. Yet this is the first time a world-changing technology has not been spearheaded by government but by private corporations. From the Manhattan Project to microprocessors, the internet, GPS, and digital cameras, these all originated as government-run defense projects that were subsequently commercialized for consumers (except for the bomb). Silicon Valley is at the forefront of AI, placing us in a unique position in history where the scale and speed of distribution may outpace government oversight and prove insufficient to prevent major negative impacts.

AI will transform society, but once AI proves extremely smart and capable, there will be little to stop the ‘wrong hands’ from developing harmful AI tools. AGI starts a countdown until it breaks free from its creators’ chains and moves into the real world. The risks are not exclusive to financial markets; they apply to the whole digital landscape: IoT devices, electrical grids, and much more. Crypto is but a means to an end for AI, a means to gather wealth, purely the financial vessel for the worst-case scenario to unfold.

Zach Burks


Zach Burks, CEO of Mintology, is an accomplished blockchain developer with over a decade of experience in the Ethereum ecosystem. He has progressed the governing principles of Ethereum first-hand through his collaboration with the Ethereum Foundation on improving the ERC-721 standard, the cornerstone standard for all NFTs, and by authoring ERC-2981, the industry-defining on-chain royalties standard. Zach is also the mastermind behind Gasless Minting, which revolutionized the NFT creation process.
