Last week saw one of the crypto industry’s most terrifying hacks in recent memory, threatening not just a single protocol or application, but an untold number of applications that relied on a single infrastructure element. And this could have been avoided with security practices that are second nature in more mature industries.
This happened in the middle of the night, US time, on December 14. That’s when an attacker injected malicious “drainer” code into Ledger’s Connect Kit, a widely used software component maintained by the hardware wallet maker. In the hours before it was patched, the malicious code extracted digital assets from wallets that connected to services via Connect Kit. One commenter, slightly hyperbolically, described the hack as compromising “every Web3 site in the world.”
Fortunately, the damage to crypto users was not as catastrophic as it could have been. But the hack has devastating implications for Ledger itself, especially because it was entirely preventable: an extremely simple code-update monitoring process would have caught it. The fact that the compromised code was first detected by a third-party company, Blockaid, using a version of exactly that kind of update monitoring, rather than by Ledger itself, makes the failure even more damaging.
But similar failures are common in cryptocurrency and blockchain projects, and for similar reasons. Specifically, many crypto projects have immature or underfunded security postures, with efforts focused overwhelmingly on finding vulnerabilities in specific pieces of code.
The Ledger hack shows how limited this approach is, since the vulnerability was not in the code at all. Instead, it was in how the code was managed. To avoid such internal process failures, crypto projects must reorient their security standards around the more rigorous security reviews common in the banking industry, to take a particularly ironic example.
Plumbing problem
Connect Kit acts as a kind of plumbing for an extended universe of decentralized applications. In theory, Connect Kit allows Ledger wallet users to carefully control third-party app access to stored cryptocurrencies using Ledger hardware dongles. Compromising Connect Kit was tantamount to compromising all of these connected services.
This was a new iteration of a classic “supply chain attack,” which gained notoriety with the Russian-backed SolarWinds hack, which also compromised behind-the-scenes infrastructure software and could have caused up to $100 billion in damage to a wide range of businesses and entities in 2020. The Ledger Connect Kit hack was detected and fixed within a few hours, and now appears to have cost users less than half a million dollars in crypto.
But post-mortems of the attack revealed deep problems in the way Ledger managed its software – software whose overriding selling point to users is that it is hyper-secure.
Here’s what happened, at least as far as we currently know. According to Ledger, the initial compromise was a phishing attack that gained access to the accounts of a former Ledger employee. Although it’s impossible to say for sure, it appears that better anti-phishing training might have prevented this first failure in the process.
But even worse, the former employee still had access to a Ledger JavaScript package managed through a third-party service called npm. This was the second process failure: former employees’ access to code should obviously be revoked immediately upon their departure.
But even that was not the deadliest sin. Changes to this npm-hosted JavaScript package were apparently used to update Connect Kit code in real time, without any human review or approval. This was the third process failure, and it is a particularly serious one.
Automatically pulling the latest code from a live repository in this way is often referred to as “loading from a CDN,” or content delivery network. It lets an application be updated quickly, frequently, and without requiring user interaction. But the method, at least as implemented for Connect Kit, also created a major vulnerability, because there was no human oversight to ensure that changes were intentional and official.
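One standard safeguard, sketched here purely as an illustration (the URL, version number, and hash below are placeholders, not Ledger’s actual code), is to pin the CDN-hosted script to an exact version and a known cryptographic hash, so the browser refuses to execute anything a human hasn’t reviewed and approved:

```typescript
// Illustrative sketch only: the URL, version, and hash are placeholders.
// The idea: load an exact, human-reviewed release of a CDN-hosted script
// and let the browser verify its hash before running it.
function loadPinnedConnectScript(): void {
  const script = document.createElement("script");

  // Pin an exact version instead of whatever "latest" happens to resolve to.
  script.src = "https://cdn.example.com/connect-kit/1.1.4/index.umd.js";

  // Subresource integrity: the browser hashes the downloaded file and refuses
  // to run it if the hash doesn't match the value a reviewer committed.
  script.integrity = "sha384-REPLACE_WITH_KNOWN_GOOD_HASH";
  script.crossOrigin = "anonymous";

  script.onerror = () => {
    // A tampered file (or any load failure) surfaces as a loud error rather
    // than silently running attacker-controlled code.
    console.error("Connect script failed to load or failed its integrity check");
  };

  document.head.appendChild(script);
}
```

Even that simple control changes the failure mode: a swapped-out file on the CDN breaks loudly instead of quietly draining wallets, and bumping the pinned version forces a human to look at what changed.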
Once the hacker was inside the JavaScript package on npm, there was effectively nothing standing between them and the code controlling users’ wallets. Lefteris Karapetsas, the Ethereum developer behind Rotki, didn’t pull any punches, calling the use of this over-the-air update method “insane.”
(Some observers, however, have blamed npm itself for failing to implement better version controls natively.)
These are precisely the kinds of deficiencies that a code-only security review would fail to detect because they are not in the code.
Auditing the audits
This is why the language of security “audits,” so frequently invoked by blockchain companies, can sometimes be misleading.
A formal financial audit isn’t just about making sure all of a company’s money is where it’s supposed to be at any given time. Rather, an accounting audit is a comprehensive, end-to-end examination of a company’s overall money management practices. A CPA performing a financial audit doesn’t just review bank statements and revenue figures; they are also required by the AICPA to evaluate “a company’s internal controls and assess fraud risk.”
But a cybersecurity audit does not carry the same comprehensive, formal meaning that it does in accounting. Many security audits boil down mainly to a review of code at a specific point in time, the equivalent of a financial audit that simply examined current bank balances. Code reviews are obviously crucial, but they are only the beginning of true security, not the end.
To truly match the rigor of a financial audit, a cybersecurity review must evaluate a company’s entire development lifecycle through a formal, structured process that ensures nothing slips through the cracks. This involves looking at the different phases of the development lifecycle, including quality assurance, and developing a threat analysis that identifies likely risks. It includes internal security reviews, on topics such as phishing prevention. And it includes a review of change management processes, particularly relevant in Ledger’s case.
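To make that concrete, here is a hedged sketch (the organization, repository, and branch names are hypothetical, not Ledger’s) of one basic change-management control, expressed with GitHub’s Octokit client: requiring reviewed pull requests and passing checks before anything lands on the branch that ships to users.

```typescript
// Hypothetical sketch: org, repo, and branch names are placeholders.
// Enforces that code reaching the release branch has been reviewed by humans
// and has passed automated checks, one concrete piece of change management.
import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

async function enforceReviewPolicy(): Promise<void> {
  await octokit.rest.repos.updateBranchProtection({
    owner: "example-org",
    repo: "connect-kit",
    branch: "main",
    // At least two approving reviews, including a code owner, so no single
    // compromised account can push code that reaches users on its own.
    required_pull_request_reviews: {
      required_approving_review_count: 2,
      require_code_owner_reviews: true,
      dismiss_stale_reviews: true,
    },
    // Automated checks must pass, and the branch must be up to date.
    required_status_checks: { strict: true, contexts: ["ci/tests"] },
    // The same rules apply to administrators.
    enforce_admins: true,
    // No additional per-user push restrictions beyond the rules above.
    restrictions: null,
  });
}

enforceReviewPolicy().catch(console.error);
```

None of this is exotic; it is exactly the kind of control a lifecycle-focused review checks for as a matter of course.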
If there’s a silver lining here, it’s that this doesn’t mean crypto is inherently or fundamentally impossible to secure properly. It can certainly seem that way, with the constant drumbeat of hacks, vulnerabilities, and collapses. But the problem isn’t the blockchain’s unusual architecture: it’s a series of compromises on rigorous, standardized security.
As the crypto industry matures, companies that invest in meeting these standards will reap the rewards in trust and longevity. The rest will be left behind, marred by avoidable failures.
David Schwed, a leading digital asset security expert, is COO of blockchain security company Halborn and former global head of digital asset technology at BNY Mellon. The opinions expressed in commentary pieces on Fortune.com are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.