You’ve heard of multi-sig for cryptocurrency, but could it work for secure software development?

The multi-signature transaction method for cryptocurrency wallets hit the headlines for all the wrong reasons last month, after a vulnerability in Parity Technologies’ multi-sig platform resulted in hundreds of thousands of ether being frozen.

As Parity wrestles with the prospect of implementing a hard fork to enable the recovery of the funds (which amount to around $165 million), you might be forgiven for dismissing multi-sig as a dangerous concept that throws up more risks than benefits.

Individual platform vulnerabilities aside, however, multi-signature schemes have long been used to provide an additional layer of security in the cryptocurrency world. By requiring two or more parties to sign a transaction before it is broadcast to the network, the approach (usually) eliminates any single point of failure.

As cryptocurrency shifts from a niche digital asset to a mainstream currency/speculation tool, it’s likely that the use of multi-signature wallets will also grow. But the multi-sig concept is not confined solely to the blockchain.

According to Joanna Rutkowska, CEO and founder of Invisible Things Lab and the Qubes OS project, multi-sig is an attractive prospect for secure software development – particularly when applied to binary signatures.

Speaking at this year’s Black Hat Europe, which took place in London last week, Rutkowska explained how multi-party signatures could be utilized to improve the security of OS installation images and OS updates, application installers, and browser and firmware updates.

“If a hacker were to target a high-profile individual, they would most likely not look for some remotely exploitable bug, which is often not very reliable,” she said.

“They would rather want to look into how to compromise the update build and distribution process for whatever software this person might be using – the software with the weakest security.”

The binary update build and distribution process involves three parties: developers, vendors, and end users. Developers create the software and push this to the build server. The server then builds the binaries before distributing the update to users.

“Hopefully, the developers not only created the software, but cryptographically signed it,” Rutkowska said. “For some reason, most developers still don’t do this, but I want to focus on the middle step: the build server.

“Usually, there is a cryptokey on the build server that is used to sign the resulting binary. In a normal situation, the software on a user’s computer will check the signature and, if accepted, run the installation.”
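
To make that status quo concrete, the following is a minimal sketch, in Python with the third-party cryptography package, of the single-key model Rutkowska describes: the build server signs the binary with one key, and the user's machine checks that single signature. The key names and payload here are illustrative only, and, as she goes on to point out, a backdoor injected before the signing step would still verify cleanly.

# Minimal sketch of the single-signature model (illustrative names and data)
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- on the build server ---
build_key = Ed25519PrivateKey.generate()        # the single signing key
binary = b"...freshly built update bytes..."    # a backdoor injected before this point
signature = build_key.sign(binary)              # gets signed along with everything else

# --- on the user's machine ---
vendor_public_key = build_key.public_key()      # shipped with the software or installer
try:
    vendor_public_key.verify(signature, binary)
    print("signature OK - installation proceeds")
except InvalidSignature:
    print("signature check failed - update rejected")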

The problem, according to the Invisible Things Lab founder, is that if someone compromises the build server, this presents the perfect opportunity to install a backdoor. We need only look to Piriform’s recent postmortem of the CCleaner attack, which affected more than two million customers, to see the damage that can be caused by such activity.

“There is [currently] no way for the user to verify updates, because if I inject a backdoor before it is signed on the build server, then the verification will succeed,” Rutkowska said.

“What we really would like to have for software updates is to be able to verify that the binary matches the source code. But this usually doesn’t work, for multiple reasons. One reason is that we don’t have access to any software code that is not open-source.

“Even if we had access to the code, the problem is that repeating the build process is very difficult – it’s very resource intensive, and it’s naïve to expect end users to repeat this process. We have build servers for a reason.”

According to Rutkowska, one alternative would be to have multiple build servers run by multiple organizations – ideally in different countries. Each of these servers would take the same source code and verify that the signature had been created by a trusted developer. They would then build the binary and sign it.

“Assuming they build the same binaries, we end up with one binary with multiple signatures,” she said. “For users who might have concerns about the state interfering with the integrity of the build process, this might provide some more assurance.”

By using these multi-signed binaries, Rutkowska said, backdoor injection becomes significantly more difficult, because a hacker would need to compromise more than one of the organizations tasked with building the parallel binaries.
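
In outline, and assuming hypothetical builder names and a 2-of-3 threshold chosen purely for illustration, a client-side check for such multi-signed binaries might look something like the Python sketch below: each trusted organization signs the identical binary, and the update is accepted only if enough distinct, trusted signatures verify.

# Hedged sketch of threshold verification of a multi-signed binary
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-ins for keys held by independent organizations in different countries
builder_keys = {name: Ed25519PrivateKey.generate() for name in ("org-a", "org-b", "org-c")}
TRUSTED_BUILDERS = {name: key.public_key() for name, key in builder_keys.items()}
REQUIRED_SIGNATURES = 2     # e.g. a 2-of-3 policy

binary = b"identical binary produced by every build server"
signatures = {name: key.sign(binary) for name, key in builder_keys.items()}

def verify_multisig(binary, signatures, trusted, threshold):
    """Count valid signatures from distinct trusted builders over this exact binary."""
    valid = 0
    for name, sig in signatures.items():
        public_key = trusted.get(name)
        if public_key is None:
            continue                    # unknown signer: ignore
        try:
            public_key.verify(sig, binary)
            valid += 1
        except InvalidSignature:
            pass                        # bad signature: ignore
    return valid >= threshold

print(verify_multisig(binary, signatures, TRUSTED_BUILDERS, REQUIRED_SIGNATURES))  # True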

The premise is relatively straightforward, so why is this method not being utilized? “As far as I know, nearly no software uses these methods,” said Rutkowska. “Tor might be an exception.

“The problem is that for this scheme to work, the build process must be deterministic. Unfortunately, if you take complex enough source code – essentially any source code these days – and build binaries from it, it will almost always be different because there will be timestamps in the resulting binaries, there will be differences in the number of threads used, and so on.”
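
A toy example of the problem she describes: packaging byte-for-byte identical content at two different moments yields two different artifacts, simply because a timestamp is embedded, whereas pinning the timestamp (as the reproducible-builds community does with conventions such as SOURCE_DATE_EPOCH) makes the output – and therefore its hash and signatures – repeatable. The archive and filenames below are illustrative only.

# Toy illustration: the same payload, archived with different embedded timestamps,
# produces different bytes and therefore different hashes.
import hashlib
import io
import tarfile
import time

def package(payload, mtime):
    """Wrap the same 'binary' payload in a tar archive with a given timestamp."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name="app.bin")
        info.size = len(payload)
        info.mtime = mtime              # embedded timestamp: a classic source of non-determinism
        tar.addfile(info, io.BytesIO(payload))
    return buf.getvalue()

payload = b"identical compiled code"

# Two "builds" a few seconds apart: same input, different output hashes
first = package(payload, mtime=int(time.time()))
second = package(payload, mtime=int(time.time()) + 5)
print(hashlib.sha256(first).hexdigest() == hashlib.sha256(second).hexdigest())   # False

# Normalizing the timestamp (cf. SOURCE_DATE_EPOCH) restores determinism
first = package(payload, mtime=0)
second = package(payload, mtime=0)
print(hashlib.sha256(first).hexdigest() == hashlib.sha256(second).hexdigest())   # True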

Despite the apparent hurdles relating to the non-deterministic nature of binary builds, multi-sig updates could well present a valid alternative for those looking to improve upon the current model, which offers little protection against hackers who are intent on injecting malicious code during the build process.

As it currently stands, however, the multi-sig-for-software concept would require developers to share their coveted source code with other organizations – an unlikely scenario, and one that would instead necessitate something along the lines of homomorphic encryption.