What kinds of products can you audit?
We have expertise in a number of areas, ranging from low-level firmware on hardware devices to native wallets and scripts to cloud and web-based applications. All of these have interfaces ripe with the potential for bugs, and many present risk to users if security takes a backseat. Here are a few examples to give you an idea:
- Desktop or mobile wallets
- Daemons and client applications
- Cloud services
- Miner software and nodes
- Server scripts
- Ethereum smart contracts (the basics)
- Exchanges and web portals
- Web applications and APIs
- Database interactions
- Hardware devices and IoT
- Network ACLs
We have people with experience doing security work in just about every practical environment, language and form factor. Generally, if it accepts input or produces output in any way and isn't a "Hello World" app, it likely contains specific risks that need to be mitigated in one way or another.
What does an audit consist of?
This depends on the package, but at a basic level it involves studying the system to come up with a threat model. From that threat model, we can understand where risk is present and how to protect against the threats posed. Once the threats are understood and the attack surface is evaluated, we review the code to see whether it's hardened against those attacks. When weaknesses are found, they are cataloged and mitigations proposed. During a deep dive, we explore further, ensure security testing is being performed, and look for configuration and environment flaws. If an integrated approach is selected, we'll work side-by-side with your team to get issues fixed in real time and provide guidance and training to reduce the likelihood of the errors recurring in the future. Of course, if our clients don't quite fit into these boxes, we are flexible enough to adjust where needed so you get the most out of the audit.
Most importantly, once an audit is complete and the issues found are mitigated, you can be much more confident in the security of your codebase.
What's the difference between an audit and a bug bounty?
An audit or security review is a more formal process of identifying and mitigating risk in your product. It identifies attack surface, audits the calls and data passing between your code, users and outside components, catches bugs and hardens against them. Bug bounties are great if you've already identified your attack surface and want bug hunters to concentrate on one or two particular areas, but they provide little more than a report at the end of the day and leave fixing things up to you.
So you can think of an audit as a comprehensive bug bounty with higher-quality, actionable deliverables that give you confidence the code is hardened against attacks.
What's the difference between a bug and a security bug?
There are bugs, which may cause something unexpected to happen when your program is running, such as a crash or other malfunction. Security bugs are the subset of those which have an impact on the application's security, such as a crash that actually corrupts memory and allows someone to read or write it in crafty ways to fully compromise the machine. For example, a NULL pointer dereference bug that causes a crash in a client app usually doesn't have a security impact (although there are some exceptions). But if a bug that looks like a simple crash at first turns out, after analysis, to let an attacker change the paths the app takes, hijack control flow or reorganize memory so that arbitrary code can be injected and executed in place of normal operation, then that bug is a vulnerability, or a security bug.
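To make the distinction concrete, here is a minimal, hypothetical sketch in C (not from any real codebase): the first function merely crashes on bad input, while the second lets attacker-controlled data overwrite adjacent memory, which is what turns an ordinary bug into a security bug.

```c
#include <stdio.h>
#include <string.h>

/* Plain bug: dereferencing a NULL pointer crashes the process, which is
 * annoying but usually gives an attacker nothing beyond denial of service. */
void print_name(const char *name) {
    printf("%c\n", name[0]);   /* crashes if name == NULL */
}

/* Security bug: copying attacker-controlled data into a fixed-size stack
 * buffer with no length check corrupts adjacent memory and, with the right
 * payload, can hijack control flow. */
void save_name(const char *name) {
    char buf[16];
    strcpy(buf, name);         /* overflows if strlen(name) >= 16 */
    printf("saved %s\n", buf);
}
```

Both functions misbehave on hostile input, but only the second hands the attacker a lever over the program's memory.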
There are whole areas of the security industry devoted to making exploitation of bugs more of a commodity practice for various legitimate purposes, such as proving a vulnerability's impact and risk to organizations. But as with most things in life, there's a flip side: exploitation can also be used in malicious ways, such as targeted attacks to breach data and malware proliferation.
But I don't have a smart contract, so why would I need an audit?
Faulty contracts that lose money or redirect funds are one thing, but there are many more issues that occur in code than that. If users are running your application or using your website, they are fundamentally taking on risk, whether they realize it or not. How high or low the risk is that their account, computer or otherwise will be compromised is determined by the security effort you have put into your code, because using your product extends their attack surface.
Think about it like this: if I'm riding a bike, I have taken on the risk of the road and other vehicles, as well as how well maintained the bike is. The bike is your product and the road is the Internet. If you make sure the bike is safe to use, you reduce my risk of crashing. And if you include helmets and protective gear for those riding your bike, you help protect them even if someone on the road tries something dodgy or some other unforeseen event occurs.
But we have already turned on SSL on our website...
TLS/SSL only protects against attackers reading or modifying data in transit over the Internet. It does not mitigate any code-based vulnerabilities, which are the ones people use to hack websites, compromise databases or spread malware. These are the types of issues our professionals find and help you fix before attackers do.
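As a minimal, hypothetical illustration (assuming a sqlite3-backed lookup; the function and table names are invented), the query below is injectable no matter how strong the TLS connection carrying the input is, because the flaw is in how the code builds the query:

```c
#include <sqlite3.h>
#include <stdio.h>

/* Vulnerable: user input is pasted straight into the SQL text, so a value
 * like  ' OR '1'='1  rewrites the query, TLS or not. */
int find_user_unsafe(sqlite3 *db, const char *username) {
    char sql[256];
    snprintf(sql, sizeof sql,
             "SELECT 1 FROM users WHERE name = '%s';", username);
    return sqlite3_exec(db, sql, NULL, NULL, NULL) == SQLITE_OK;
}

/* The fix lives in the code, not the transport: bind the value as a
 * parameter so it can never change the query's structure. */
int find_user_safe(sqlite3 *db, const char *username) {
    sqlite3_stmt *stmt;
    if (sqlite3_prepare_v2(db, "SELECT 1 FROM users WHERE name = ?;",
                           -1, &stmt, NULL) != SQLITE_OK)
        return 0;
    sqlite3_bind_text(stmt, 1, username, -1, SQLITE_TRANSIENT);
    int found = (sqlite3_step(stmt) == SQLITE_ROW);
    sqlite3_finalize(stmt);
    return found;
}
```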
Our code is a fork and I think the original project was already audited.
As the word fork implies, new code has been added since, so any previous audit did not cover it or the new environment it may be placed in. And frankly, if someone does find a vulnerability, what you thought offers little consolation and doesn't solve any problems. An audit is action: an investment in the project's security and something that makes a tangible difference.
How much does an audit cost?
We try to make it as simple as possible, but it depends on a couple of things, primarily which package works for you. We know industry prices, but our team is agile in our craft, so we can do a professional job at a fraction of what you would see quoted from a comparable code auditing or penetration testing company.
But my project is just a wallet or local client or XYZ, what security bugs could it possibly have?
Coin wallets have quite a bit going on under the hood and more attack surface than one might think. Clients have to worry about malicious servers too, not just the other way around. Native code creates opportunities to mishandle allocations and memory copies in subtle ways, as the sketch below shows. Web portals, forums and generally anything with pieces of user interaction are notoriously hard to parse and display safely. If networked components are talking locally or across the pond, there are many specific things that must be done correctly to avoid opening up exploitable weaknesses in the process. Hardware devices have their own set of vulnerabilities and gotchas. The point being: if it's written, runs and talks to users or the Internet, it's almost certainly worth having a security expert evaluate it.
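As one hypothetical example of the "malicious server" and "subtle memory copy" points above (the struct and field names are invented, not from any real wallet), consider a client that trusts a length field sent by the server:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical wallet client parsing a length-prefixed reply from a server.
 * A malicious server controls both the declared length and the payload. */
struct reply {
    uint32_t len;
    char body[512];
};

/* Bug: copies however many bytes the server *claims*, so a declared length
 * greater than 512 overflows out->body with attacker-chosen data. */
void parse_reply_unsafe(struct reply *out, const uint8_t *packet, size_t packet_len) {
    (void)packet_len;                                 /* never checked */
    memcpy(&out->len, packet, sizeof out->len);
    memcpy(out->body, packet + sizeof out->len, out->len);
}

/* Safer: clamp to both the buffer size and the bytes actually received. */
void parse_reply_safe(struct reply *out, const uint8_t *packet, size_t packet_len) {
    if (packet_len < sizeof out->len)
        return;
    memcpy(&out->len, packet, sizeof out->len);
    uint32_t n = out->len;
    if (n > sizeof out->body)
        n = sizeof out->body;
    if (n > packet_len - sizeof out->len)
        n = (uint32_t)(packet_len - sizeof out->len);
    memcpy(out->body, packet + sizeof out->len, n);
}
```

A one-line length check is the difference between a protocol quirk and remotely exploitable memory corruption, and that's exactly the kind of detail an audit is there to catch.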