Security Code Review

Continuing from my previous post.

Security code review is the process of auditing the source code for an application to verify that the proper security controls are present, that they work as intended, and that they have been invoked in all the right places. Code review is a way of ensuring that the application has been developed so as to be “self-defending” in its given environment.

Security code review is a way of verifying that application developers are following secure development practices. A general rule of thumb is that a penetration test should not discover any additional vulnerabilities relating to the developed code after the application has undergone a proper security code review.

A secure code review can be a manual or automated review, each with advantages and disadvantages. In a manual review, an analyst reviews the code line by line, looking for defects and security related flaws. An automated review uses a tool to scan the code and report potential flaws.

The primary objective of a security code review is to ensure the code is not insecure or exploitable and does not leave entry points open for intruders.

Although manual code review is a very time-consuming process, it often brings more value and the advantage of an architectural and business perspective. It allows the reviewer to dive deep into code paths to check for logical errors and for flaws in the design and architecture that most automated tools cannot find. Security issues such as authorization, authentication, and data validation are better detected manually than by some automated tools. However, per MITRE, the maximum number of lines that can be reviewed effectively per day is roughly 3,000.

Automated review helps solve the problems associated with manual review. However, good automated review tools are expensive. Additionally, the technology behind automated tools is only effective at finding certain types of flaws. A single automated tool may be good at finding some issues but unable to detect others. Employing multiple automated tools can mitigate this problem but will still not uncover every issue.

The core areas assessed during a code review are listed below.

  • Authentication
  • Authorization
  • Session management
  • Data validation
  • Error handling
  • Logging
  • Encryption

Additional checks that can be performed during a security code review include:

  • Deprecated features
  • Parameter typecasting
  • Unused variables
  • Input sanitization
  • No hard-coded passwords
  • No sensitive information in the user interface (source comments)
  • No unlimited result sets
  • Don’t hit the database unless needed (DoS protection)
  • User groups and permissions
  • No weak algorithms (MD5, SHA-1, RC4)
  • Explicit changes in configuration files
  • File upload verification
  • Change the session ID after the user has successfully authenticated
  • Secure application design and development
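One of the checks above, changing the session ID after authentication, guards against session fixation. A minimal sketch using Python’s `secrets` module follows; `SessionStore` and its methods are illustrative names, not any framework’s API:

```python
import secrets

class SessionStore:
    """A toy in-memory session store for illustration only."""

    def __init__(self):
        self._sessions = {}

    def create(self, data=None):
        # Cryptographically random, unguessable session ID
        sid = secrets.token_urlsafe(32)
        self._sessions[sid] = dict(data or {})
        return sid

    def rotate(self, old_sid):
        """Issue a fresh session ID after successful login and invalidate
        the old one, so a pre-authentication ID cannot be fixed by an attacker."""
        data = self._sessions.pop(old_sid)
        return self.create(data)
```

In a real framework this corresponds to calling its built-in session-regeneration function immediately after the credentials check passes.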

If costly tools are not an option, one can write scripts to scan the entire code base for the above scenarios. In this situation a regex can be used to find a specific pattern in the code base; once the pattern is found, the surrounding code can be checked for how it is supposed to be handled.

For example, if I want received form data to be handled correctly, I would accept it in an input handler before assigning it to the object variable. This input handler checks the data type, length, and inclusion/exclusion of special characters. Once the criteria are met, the value is assigned to the object and passed to the application’s query-building logic, where this input, along with other inputs and built-in/inherited values, is checked to see whether it formulates a correct query; the query is then built and executed once all checks pass.
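The input-handler idea can be sketched as follows. The field rules and the `validate_field` helper are assumptions for illustration, not a specific framework’s API:

```python
import re

# Illustrative per-field validation rules (assumed, not from any framework)
RULES = {
    "username": {"type": str, "max_len": 32,
                 "pattern": re.compile(r"^[A-Za-z0-9_]+$")},
    "age":      {"type": int, "min": 0, "max": 150},
}

def validate_field(name, raw_value):
    """Check type, length, and allowed characters before the value ever
    reaches query-building logic; raise ValueError on any failure."""
    rule = RULES[name]
    if rule["type"] is int:
        try:
            value = int(raw_value)
        except (TypeError, ValueError):
            raise ValueError(f"{name}: not an integer")
        if not rule["min"] <= value <= rule["max"]:
            raise ValueError(f"{name}: out of range")
        return value
    value = str(raw_value)
    if len(value) > rule["max_len"]:
        raise ValueError(f"{name}: too long")
    if not rule["pattern"].match(value):
        raise ValueError(f"{name}: disallowed characters")
    return value
```

Only values that pass these checks are assigned to the object and forwarded to the query builder.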

In this example, I would write a regex for the specific variable(s) that handle input data and then follow them through the code to ensure they are handled correctly in the script.
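A minimal version of such a scanning script might look like this. The two patterns shown (hard-coded passwords and weak hash calls) are examples only, not a complete rule set:

```python
import re

# Example regex rules for common review findings; extend per code base.
PATTERNS = {
    "hard-coded password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]",
                                      re.IGNORECASE),
    "weak hash (MD5/SHA-1)": re.compile(r"\b(md5|sha1)\s*\(", re.IGNORECASE),
}

def scan_source(text):
    """Return (line_number, finding, line) tuples for every pattern match."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for label, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, label, line.strip()))
    return findings
```

Each hit is a starting point for a manual look at how the flagged value is actually handled.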

Best Practices and Lessons Learned

Understand the developers’ approach. Before starting a secure code review, talk to the developers and understand their approaches to mechanisms like authentication and data validation. Information gathered during this discussion can help jump-start the review and significantly decrease the time a reviewer spends trying to understand the code.

Use multiple techniques. If possible, use both manual and automated techniques for the review because each method will find things that the other doesn’t. In addition, try to use more than one automated tool because the strengths of each differ and complement the others.

Do not assess level of risk. A secure code review should not attempt to make judgments about what is acceptable risk. The review team should report what it finds. The customer uses the program’s approved risk assessment plan to assess risk and decide whether to accept it or not.

Focus on the big picture. When performing a manual review, resist trying to understand the details of every line of code. Instead, gain an understanding of what the code as a whole is doing and then focus the review on important areas, such as functions that handle login or interactions with a database. Leverage automated tools to get details on specific flaws.

Follow up on review points. After a review, hold a follow-up discussion with the development team to help them understand what the findings mean and how to address them.

Stick to the intent of the review. Secure code review is not penetration testing. Review teams should not be allowed to “pen-test” a running version of the code because it can bias the results by giving a false sense of completeness.

References and Note of Thanks

OWASP Code Review Guide v1.1 (OWASP_Code_Review_Guide-V1_1.pdf)


OWASP Top 10 – Cross Site Request Forgery

  • Forces an authorized user to send forged HTTP requests (utilizing the victim’s session data)
  • The victim must be logged in
  • These requests are treated as legitimate by the vulnerable server-side application

Exploitation occurs when un-validated input is accepted, stored in the database, and presented back to a logged-in user upon request.


Mitigations:

  • Unique token in a hidden field (this causes the value to be sent in the message body, not in the request URL)
  • Require the user to re-authenticate before making a sensitive/important request
  • Implement CAPTCHA
  • Mobile SMS/OTP verification
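The hidden-field token above can be sketched with the standard library’s `secrets` and `hmac`; the function names are illustrative, and real frameworks usually provide this built in:

```python
import hmac
import secrets

def issue_csrf_token(session):
    """Generate a per-session token; the server embeds it in a hidden
    form field and keeps a copy in the session."""
    token = secrets.token_hex(32)
    session["csrf_token"] = token
    return token

def verify_csrf_token(session, submitted):
    """Compare the submitted token in constant time; reject the request
    on any mismatch, since a cross-site forger cannot read the token."""
    expected = session.get("csrf_token")
    return bool(expected) and hmac.compare_digest(expected, submitted or "")
```

`hmac.compare_digest` avoids timing side channels when comparing the tokens.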

OWASP Top 10 – XSS

  • Sending untrusted data to the system
  • Sending text-based attack scripts that exploit the interpreter in the browser


Impact:

  • Session hijacking
  • Defacement
  • Insertion of hostile content
  • Redirecting the user


Mitigations:

  • Escape all untrusted data
  • Whitelisting
  • Input validation
  • Server-side validation
  • For rich content, use an auto-sanitization library, e.g. OWASP AntiSamy
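Escaping untrusted data before rendering can be as simple as the standard library’s `html.escape`; the `render_comment` wrapper is a hypothetical example:

```python
import html

def render_comment(comment):
    """Escape user-supplied text so any embedded attack script is
    rendered as inert text instead of being executed by the browser."""
    return "<p>" + html.escape(comment, quote=True) + "</p>"
```

Escaping must match the output context (HTML body, attribute, JavaScript, URL); this covers the HTML-body case only.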

OWASP Top 10 – XML External Entities (XXE)

An XXE attack can cause:

  • Data extraction
  • Remote code execution
  • Internal system scanning
  • Denial of service

Your application is vulnerable if it uses SAML for identity processing, or if your XML processor:

  • Accepts untrusted XML
  • Accepts untrusted XML uploads
  • Inserts untrusted data into XML documents


Mitigations:

  • Sanitize input
  • Use SOAP 1.2 or higher
  • Patch and upgrade the XML processor

Disable XML external entity and DTD processing in all XML parsers in your applications.
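With Python’s standard-library SAX parser, the external-entity features can be switched off explicitly. This is a sketch; for production use, the `defusedxml` package is the usual recommendation:

```python
import io
import xml.sax
from xml.sax.handler import feature_external_ges, feature_external_pes

class _TextHandler(xml.sax.ContentHandler):
    """Collects character data from the parsed document."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def characters(self, content):
        self.chunks.append(content)

def parse_untrusted_xml(data: bytes) -> str:
    """Parse XML with external general and parameter entities disabled,
    closing off the classic XXE vectors."""
    parser = xml.sax.make_parser()
    parser.setFeature(feature_external_ges, False)
    parser.setFeature(feature_external_pes, False)
    handler = _TextHandler()
    parser.setContentHandler(handler)
    parser.parse(io.BytesIO(data))
    return "".join(handler.chunks)
```

The same principle applies in every language: turn off entity and DTD resolution before feeding the parser untrusted input.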

OWASP Top 10 – Sensitive Data Exposure

Data security at rest, in transit, and in the client browser.


Mitigations:

  • Encryption – key rotation, storage, split knowledge
  • Data masking
  • No hard-coded credentials
  • Disable page caching (this comes in handy in case of permission changes)
  • Re-verification of identity and objects
  • No plain-text data exchange
  • No weak algorithms
  • Discard sensitive data (from session/memory/cache, etc.) as soon as possible
  • Preferably encrypt data even when it is in memory (at a performance overhead)
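For the “no weak algorithms” point, password storage in particular should use a slow, salted key-derivation function rather than MD5 or SHA-1. A standard-library sketch with PBKDF2 follows; the iteration count is an assumption and should be tuned to current guidance:

```python
import hashlib
import hmac
import os

# Assumed iteration count; follow current OWASP guidance for your environment.
ITERATIONS = 600_000

def hash_password(password: str):
    """Derive a salted PBKDF2-HMAC-SHA256 hash instead of MD5/SHA-1."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the derivation and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

The random per-password salt and the high iteration count are what make offline cracking expensive, which fast hashes like MD5 cannot provide.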