Friday, May 2, 2008

Developing Secure Applications

Security risks in a system can come from a variety of sources: a weakness in how users interact with the system, interdependencies between different components of the system, or the implementation of the system itself. Here, we look at some of the vulnerabilities that can be introduced in the system implementation, resulting in security risks. These vulnerabilities remain the same across different software development paradigms and implementation languages.

1. Never Trust User Input
User input in an application must be validated against the application's input rules. A malformed input, if accepted unchecked, may provide a window of opportunity for an attacker. To illustrate with an example, an input field accepting a person's name should only allow alphabetic and perhaps numeric characters. Allowing special characters such as dollar ($) and star (*) can lead to very nasty attacks depending on the type of application. In many computer languages, special characters have special meanings, which if used inappropriately can compromise the integrity and availability of data.
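As a rough sketch (the exact rules are always application specific), input can be checked against a whitelist pattern before it is ever used:

import re

# Hypothetical whitelist for a name field: letters, plus space,
# hyphen and apostrophe inside the name. Adjust to your own rules.
NAME_PATTERN = re.compile(r"^[A-Za-z][A-Za-z '\-]{0,63}$")

def validate_name(name):
    # Reject anything outside the whitelist outright, rather than
    # trying to strip "bad" characters out of the input.
    if not NAME_PATTERN.match(name):
        raise ValueError("invalid name")
    return name

Whitelisting (accept only what is known good) is generally safer than blacklisting (reject what is known bad), since attackers are endlessly inventive with the characters you forgot to ban.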

2. Be Careful With Output
Output only the information that is necessary for a user to perform a task. To take the most common example, passwords should never be shown on the screen, whether being set for the first time or later used for logging into the system. What data to display on a user interface is highly application dependent, but “need to know” can be used as a general guideline: a user should be allowed access to only the information that he or she needs in order to complete the task, and no more. A security conscious application should validate access control before outputting data.
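A minimal sketch of the idea, with hypothetical names: check access rights before returning data, and withhold fields the caller does not need to see.

def get_employee_record(requester_id, employee_id, records):
    # Hypothetical access rule: a user may fetch only his own record.
    if requester_id != employee_id:
        raise PermissionError("access denied")
    record = records[employee_id]
    # "Need to know": never return sensitive fields, even to the owner.
    return {k: v for k, v in record.items() if k != "password_hash"}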

3. Trust Experts for Crypto Implementation
Many applications use cryptographic libraries to perform various security services. It is best to trust the library implementers with the cryptographic implementation; do not attempt to come up with your own secret algorithms. Many of the open source and proprietary libraries have gone through years of rigor to achieve their current level of trust. This is not to say that such libraries, whether open source or proprietary, have no security holes, but they certainly have (OpenSSL, for example) gone through rigorous reviews and bug fixes over the years. Leave crypto details to the experts and channel your energy into securing the application.
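For example, instead of inventing a password-scrambling scheme, lean on a vetted primitive. This sketch uses PBKDF2 from Python's standard hashlib module; the parameters shown are illustrative, not a recommendation.

import hashlib
import os

def hash_password(password):
    # Fresh random salt per user; never hard-code or reuse salts.
    salt = os.urandom(16)
    # PBKDF2-HMAC-SHA256 from the standard library: a reviewed
    # implementation, not a home-grown algorithm. The iteration
    # count here is illustrative only.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
    return salt, digest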

4. Protect Against Buffer Overflows
Programs manipulate data in memory buffers. When the size of the data being handled exceeds the expected size, internal memory buffers overflow. A skillful attacker can carefully craft the data to exploit a buffer overflow and execute malicious code. Most of these attacks can be automated with tools, and unskilled attackers only need to know how to use them. A typical consequence of a buffer overflow is escalation of privileges (admin access on Windows, root access on Unix/Linux). Once an attacker has the highest level of access on a system, it's a cakewalk to compromise it further.
Buffer overflow problems can be avoided by checking the bounds of the memory region where data is being written. This can either be done by the programmer explicitly or provided as a language feature. A great number of security problems fall into this category, and an application's security is significantly increased by making it buffer overflow proof.
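The check itself is language neutral; here is a minimal sketch of the explicit bounds check described above, written in Python (in C, the same idea means preferring length-limited calls such as snprintf over strcpy):

BUF_SIZE = 256  # size the rest of the code assumes

def copy_into_buffer(data, buf):
    # Explicit bounds check before writing anything.
    if len(data) > len(buf):
        raise ValueError("input larger than buffer")
    buf[:len(data)] = data
    return buf

buf = copy_into_buffer(b"hello", bytearray(BUF_SIZE))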

5. Use Secure Default Configuration
Insecure defaults are a common cause of compromised systems and loss of security. Microsoft has been notorious in the past for shipping software (e.g. Windows) with services running by default that a user may never use. But the bad guys certainly find out about such services and have a good time exploiting them. Things have changed for the better in recent times, and newer software from Microsoft ships with relatively secure default configurations, where the user chooses the services rather than running everything by default. Most applications ship with multiple configurations/services, so using secure defaults is important for all application vendors, not just Microsoft.
Using secure defaults is more difficult in practice than it sounds, mainly because of insecure configurations propagated in earlier generations of software. When a later version of the software suddenly changes the default behavior, an end user's application may stop working or break due to hidden dependencies. Fixing those issues can be time consuming, but it's certainly worth the effort. Application developers should make sure that only the absolute minimum set of services and configuration is provided at setup time, and that the user is prompted to explicitly enable the functionality he wants, rather than it being enabled by default.
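As an illustrative sketch, a setup routine can start from an everything-off baseline and enable only what the user explicitly asks for (the service names here are hypothetical):

# Everything-off baseline: nothing runs unless explicitly enabled.
DEFAULT_CONFIG = {
    "web_ui": False,
    "remote_admin": False,
    "ftp_service": False,
}

def build_config(requested_services):
    config = dict(DEFAULT_CONFIG)
    for service in requested_services:
        if service not in config:
            raise ValueError("unknown service: " + service)
        config[service] = True  # on only by explicit user request
    return config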

6. Force a default password change
Almost all software products ship with either a default password or an empty one. If the default password is not changed at the time of installation, the system is a very easy target for attackers. A well intentioned administrator may simply forget, or push the password change down his to-do list under deadline pressure. Either way, it opens a vulnerability in the system. Therefore, application developers (vendors) must force a change of the default password at installation or first use. Any system administrator worth his salt will prefer to be forced to change the default password, and will appreciate the security posture of the product.
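A sketch of such a first-use gate, with hypothetical names (store stands in for whatever user database the product uses):

def login(username, supplied_password, store):
    # store is a hypothetical user database with these two methods.
    if not store.check_password(username, supplied_password):
        return "denied"
    if store.has_default_password(username):
        # Authenticated, but confined to the password-change screen
        # until the shipped default is replaced.
        return "password_change_required"
    return "ok"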

7. Follow the “less is better” approach
Useful applications come with many features, some commonly used and some obscure. Typically, 80% of the users use 20% of the features. Application developers should follow the mantra of least-feature installation for enhanced security. This is somewhat similar to the principle of using secure default configuration, but goes deeper. When an application is installed, only the minimum required set of components should be installed by default; the user should be asked and given a choice for installation of advanced components. (For an incident involving hidden components, see the story of Sony BMG: http://www.schneier.com/essay-094.html). For most users, the minimum installation will be good enough; advanced users will know what extra features to install. If you are writing network applications, be extra careful: networked applications are by definition exposed to threats, so this principle becomes even more important.

8. Follow the “least privilege” principle
An application should run only with the privileges it requires. Not only is this necessary to protect against intentional misuse by attackers, it also protects against unintentional mistakes committed by users. Within an application, not all of the code needs to run at the same privilege level; depending on the resources used, different code segments may require different privilege levels. The application should be architected and implemented to keep this logical separation. It also provides side benefits in ease of debugging and in adding auditing features; e.g. in a front end application, different access levels can be used for displaying read-only data and read-write data (which could in turn be tied to the user id).
An application that makes calls into external components (DLLs or shared libraries) should grant them restricted permissions for the allowed tasks only. This may require language level or operating system support; use it wherever it is provided.
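On Unix-like systems, a common pattern is to perform the privileged step first and then drop to an unprivileged account for the rest of the run. A minimal sketch (the numeric ids are placeholders):

import os
import socket

def start_server():
    # Privileged step first: binding a port below 1024 needs root.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("0.0.0.0", 443))
    sock.listen(5)
    # Then drop privileges for the rest of the application's life.
    # 65534 is a placeholder for an unprivileged account (nobody/nogroup);
    # note the group must be dropped before the user id.
    os.setgid(65534)
    os.setuid(65534)
    return sock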

9. Have multiple access levels
This is an extension of the “least privilege” principle at the user level: a user should not be given more access rights than he requires. An application needs to define the access control methods it supports. Typically, role based access control is used in enterprise applications. Role based access control means that a user is given access rights to the application based on his role in the organization; e.g. in an employee management system, a regular employee can access only his own records, while a manager may have access to the records of all his subordinates. One popular method to enforce role based access control in a centralized manner uses RADIUS/TACACS/LDAP servers. Administrators of the system should be given clear instructions on how to manage application roles (create, modify, delete etc.), and the application should correctly enforce access to data based on those roles.
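A minimal sketch of the employee/manager rule above (the role store and org chart here are hypothetical in-memory tables; a real deployment would pull them from RADIUS/TACACS/LDAP as noted):

ROLE_OF = {"alice": "manager", "bob": "employee"}  # hypothetical role store
MANAGER_OF = {"bob": "alice"}                      # hypothetical org chart

def can_view_record(requester, employee):
    # An employee may view only his own record; a manager may also
    # view the records of direct subordinates.
    if requester == employee:
        return True
    if ROLE_OF.get(requester) == "manager":
        return MANAGER_OF.get(employee) == requester
    return False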

10. Handle exceptions, errors and failures securely
A lot of code in any application is written to handle error conditions, and this creates security holes if the error handling code does not take proper care to keep the security environment consistent. It is important to leave the system in a valid and consistent security state in the event of errors and exceptions. A common mistake is to reveal too much information about the system under exception conditions. Here are some examples of what should be handled.
An application handling security keys should remove and zero out the keys when an unrecoverable exception occurs. Failing to do so may leave the keys in memory even after the application is no longer using them, and may lead to a compromise of application security. A web interface should not reveal information about the database schema or tables in the backend when illegal queries are submitted. A backend database application should commit only complete transactions and have rollback mechanisms in place to avoid inconsistent data due to partial transactions.
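A sketch of the key-cleanup point: use a finally block so the key material is scrubbed even on an unrecoverable error, and report only a generic message rather than internal details (encrypt here is a hypothetical call):

def process_with_key(key, data):
    # key is a mutable bytearray so it can be overwritten in place.
    try:
        return encrypt(data, key)  # encrypt() is a hypothetical call
    except Exception:
        # Report a generic failure; no schema, stack trace or key bytes.
        raise RuntimeError("internal error, request not processed")
    finally:
        # Scrub the key whether we succeeded or failed. (In Python this
        # is best effort; copies may still exist elsewhere in memory.)
        for i in range(len(key)):
            key[i] = 0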

11. Develop a security testing strategy
It is well known that a bug found during development is the cheapest to fix, and a bug found in the field is the costliest. The same is true for security bugs and vulnerabilities. Not only does a security vulnerability create anxiety in the customer's mind about the security of his systems and data, it also creates poor public perception. Security testing requires a different set of skills and a different mindset than regular functional testing. In order to have confidence in the security of the application, security testing must be included in the project schedule. The security testing team should develop a security test suite along the lines of the functional test suite. This security test suite can be run against regular builds and provides a baseline for the security features. Just as broken functionality is found by functional test suites, broken security is found by security test suites. Once security issues are found, the development team should be sensitive to them and take ownership of the fixes.
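As one hypothetical example, such a suite can assert that malformed input is rejected, runnable here with a framework like pytest (validate_name is the whitelist check sketched in section 1):

import pytest

def test_rejects_special_characters():
    with pytest.raises(ValueError):
        validate_name("Robert'); DROP TABLE students;--")

def test_rejects_oversized_input():
    with pytest.raises(ValueError):
        validate_name("x" * 10000)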

12. Avoid feature creep
The more components and features an application has, the more likely it is that security vulnerabilities get introduced. Even if all components are secure by themselves, it is hard to test all the different ways the components interact; with a large number of components it becomes almost impossible to test every permutation. It also presents a larger attack surface, since the attacker has more things to probe and more ways to try to get around the security. Hence, it is advisable to keep the application as simple as possible and provide only the necessary features.

In this post, I have outlined some general principles for writing secure applications. Along with these, good software engineering practices also add to security. Some of them are worth repeating here: reuse as much code as possible, perform regular code reviews, write unit tests, and most of all, think about security (just as one thinks about functionality) when writing code. Of these, thinking about security is arguably the hardest. Not everyone can be a security expert (and need not be; security awareness is enough for most people), so enforcement of secure coding should be left to a team of security experts. This team can enforce the best security development practices through tools, education seminars, code reviews and any other method that makes sense in a particular environment.
