Writing Secure Code

Michael Howard, David LeBlanc


Covers topics such as the importance of secure systems, threat modeling, canonical representation issues, solving database input, denial-of-service attacks, and security code reviews and checklists.


Mentioned in 15 questions and answers.

I am an IT student, now in my 3rd year at university. Until now we've been studying a lot of subjects related to computers in general (programming, algorithms, computer architecture, maths, etc.).

I am quite sure that nobody can learn everything about security, but there is surely a "minimum" knowledge that every programmer or IT student should have. My question is: what is this minimum knowledge?

Can you suggest some e-books, courses, or anything else that can help me start down this road?

Principles to keep in mind if you want your applications to be secure:

  • Never trust any input!
  • Validate input from all untrusted sources - use whitelists not blacklists (see the sketch after this list)
  • Plan for security from the start - it's not something you can bolt on at the end
  • Keep it simple - complexity increases the likelihood of security holes
  • Keep your attack surface to a minimum
  • Make sure you fail securely
  • Use defence in depth
  • Adhere to the principle of least privilege
  • Use threat modelling
  • Compartmentalize - so your system is not all or nothing
  • Hiding secrets is hard - and secrets hidden in code won't stay secret for long
  • Don't write your own crypto
  • Using crypto doesn't mean you're secure (attackers will look for a weaker link)
  • Be aware of buffer overflows and how to protect against them
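
To make the whitelist principle concrete, here is a minimal sketch in C; the allowed character set and the length limit are illustrative assumptions, not requirements from any standard:

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

// Whitelist validation: accept a name only if every character is in an
// explicitly allowed set and the length is in range. Anything not
// explicitly allowed is rejected, so unexpected input fails securely.
static bool is_valid_name(const char *input)
{
    static const char allowed[] =
        "abcdefghijklmnopqrstuvwxyz"
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        "0123456789_-";

    if (input == NULL)
        return false;                      // never trust any input

    size_t len = strlen(input);
    if (len == 0 || len > 32)              // illustrative length limit
        return false;

    return strspn(input, allowed) == len;  // every character must be whitelisted
}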

There are some excellent books and articles online about making your applications secure:

Train your developers on application security best practices:

Codebashing (paid)

Security Innovation (paid)

Security Compass (paid)

OWASP WebGoat (free)

For general information on security, I highly recommend reading Bruce Schneier. He's got a website, his crypto-gram newsletter, several books, and has done lots of interviews.

I would also get familiar with social engineering (and Kevin Mitnick).

For a good (and pretty entertaining) book on how security plays out in the real world, I would recommend the excellent (although a bit dated) 'The Cuckoo's Egg' by Cliff Stoll.

When I compile my library I switch on -fPIC because I want to be able to compile it as a shared library, but also as a static one.

Using gcc 3.4.4 on cygwin I get this warning on all source files:

-fPIC ignored for target (all code is position independent)

And I really wonder what the point of it is. It tells me that I'm using a switch which has no effect, because what the switch should achieve is already accomplished. Well, that means it's redundant, fine. But what's the point of the warning, and how can I suppress it?

I'm not talking about why using PIC or not, just why it generates that IMO useless warning.

And I really wonder what's the point of it...
I'm not talking about why using PIC or not, just why it generates that IMO useless warning.

That's a good question, and I have not seen a definitive answer. At least one of the GCC devs considers it a pointless warning. Paolo Bonzini called it that in his recent patch Remove pointless -fPIC warning on Windows platforms.

According to Jonathan Wakely on the GCC mailing list at How to suppress "warning: -fPIC ignored for target..." under Cygwin (August 2015):

That warning has been there since well before 2003 (I couldn't be bothered tracing the history back past a file rename in 2003).

And from Alexander Monakov on the same thread (referencing Bonzini's patch):

A patch was proposed just recently to just remove the warning: https://gcc.gnu.org/ml/gcc-patches/2015-08/msg00836.html


Related: Windows has ASLR (address space layout randomization), enabled with the /DYNAMICBASE linker flag. It's optional, but often required as a security gate, meaning all program code must be built with it. If you have an SDLC, then you are probably using ASLR because Microsoft calls it out as a best practice in Writing Secure Code.

The Linux/Unix equivalent for executables is -fPIE when compiling (and -pie when linking).

Under Windows, all DLL code is relocatable. Under Linux/Unix, shared object code can be made relocatable with -fPIC.

-fPIC is a "superset" of -fPIE (with some hand waving). That means -fPIC can be used anywhere you would use -fPIE (but not vice versa).
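
As a concrete sketch, the typical build lines look like this (file and function names are illustrative):

/*
 * Build as a PIC shared library:
 *   gcc -fPIC -shared pic_demo.c -o libpic_demo.so
 * Build as a position-independent executable (the ASLR-friendly analogue):
 *   gcc -fPIE -pie pic_demo.c -o pic_demo
 *
 * On targets where all code is already position independent (such as
 * Cygwin), the -fPIC flag is what triggers the "ignored for target" warning.
 */
#include <stdio.h>

void hello(void)
{
    puts("hello from position-independent code");
}

int main(void)
{
    hello();
    return 0;
}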

As I code, I try to be security-conscious all the time. The problem is that I need to know what to look for and what to prevent.

Is there a list somewhere of the most common (C++) software vulnerabilities and how to avoid them?

What about C++ software for specific uses, e.g. Linux console software or a web application?

Many resources are available; some of note are:

I work on a project known as the Security Development Lifecycle (SDL) project at Microsoft (http://microsoft.com/sdl) - in short it's a set of practices that must be used by product groups before they ship products to help improve security.

Over the last couple of years, we have published a great deal of SDL documentation, as customers ask for more information about what we're doing.

But what I'd like to know is:

  1. What are you doing within your organization to help improve the security of your product?
  2. What works? What doesn't work?
  3. How did you get management to agree to this work?

Thanks.

Honestly, reading your book was a good start. :-)

Responding to your questions:

  1. Crypto is a hobby of mine that I sometimes blog about (e.g. on TLS and AES). After writing my own implementation of AES, I learned enough to know beyond a reasonable doubt that I should never use my own implementation but rather use the ones written by the CryptoAPI and OpenSSL guys.

    • Code reviews where people who are good at security issues are marked as required reviewers.
    • Having an on-site class with labs to raise awareness of the issues mentioned in your book, as well as internal mailing lists discussing new issues.
    • Several folks listen to the Security Now podcast to keep current on what types of issues are out there and what is getting attacked. This indirectly affects design.
  2. Except for an on-site course and buying the code review tool, none of these require management approval.

#include <stdio.h>

int main(void)
{
    char buf[8];
    sprintf(buf, "AAAA%3s", "XXXXXXXX");
    printf("%s\n", buf);
}

what will happen?

The buffer has space for 8 characters, and after "AAAA" and the terminating null only 3 free characters are left; however, "XXXXXXXX" is 8 characters long.

I ran a test with Visual Studio 2008 on Windows 7. As a result, the program printed AAAXXXXXXX, and a run-time error occurred.

what will happen? ...

#include <stdio.h>

int main(void)
{
    char buf[8];
    sprintf(buf, "AAAA%3s", "XXXXXXXX");
    printf("%s\n", buf);
}

On Windows, you are supposed to use sprintf_s. The code should fail an audit, so it should not make it into production. For reference, see Microsoft's Writing Secure Code (Developer Best Practices). In particular, see Chapter 5.


On Linux, if the compiler and platform provides FORTIFY_SOURCE, then the code above should result in a call to abort(). Many modern Linux platforms support it, so I would expect it.

FORTIFY_SOURCE uses "safer" variants of high risk functions like memcpy, strcpy and sprintf. The compiler uses the safer variants when it can deduce the destination buffer size. If the copy would exceed the destination buffer size, then the program calls abort().

To disable FORTIFY_SOURCE for testing, you should compile the program with -U_FORTIFY_SOURCE or -D_FORTIFY_SOURCE=0.


To address @prng's comment regarding portability: strcpy_s, printf_s, sprintf_s and friends were standardized in ISO/IEC TR 24731-1 (and later as the optional Annex K of C11).

If the missing functionality on Linux and glibc is a problem, then you can abstract away the differences due to a crippled glibc with preprocessor macros. Regardless of what Linux and glibc does, the code does not meet minimum standards on the Windows platform.
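
Note also that "%3s" in the original code is a minimum field width, not a maximum, so sprintf still writes all 8 X's. A portable sketch of a bounded rewrite, using standard snprintf or a precision:

#include <stdio.h>

int main(void)
{
    char buf[8];

    /* snprintf never writes more than sizeof(buf) bytes,
       including the terminating null; the output is truncated. */
    snprintf(buf, sizeof(buf), "AAAA%s", "XXXXXXXX");
    printf("%s\n", buf);   /* prints "AAAAXXX" */

    /* Alternatively, a precision caps the string argument itself:
       "%.3s" copies at most 3 characters from "XXXXXXXX". */
    sprintf(buf, "AAAA%.3s", "XXXXXXXX");
    printf("%s\n", buf);   /* prints "AAAAXXX" */

    return 0;
}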

I'm looking to write some quality C code. Can someone point me to some articles, websites, whatever; I need something with examples. I have already seen and read the K&R C book.

But times have changed, and someone must have more to say on quality C code. Another important thing: how do you ensure that you, as a programmer, have written quality C code?

There are a lot of aspects to code quality and tons of articles, books, and blogs about it, but I can recommend these as a beginning:

Code Complete

Writing Secure Code

I am working on a consumer-facing web application built on .NET/C#, with MSSQL as the database.

We have been following general good coding practices to prevent bugs and SQL/JS injection, but none of us are experts on security.

What would be a good checklist for finding out how secure the application we are building really is?

Since you're developing on MS platforms, have you looked at the Security Development Lifecycle?

Michael Howard's Writing Secure Code covers many of these practices as well.

I have a set of Win32 applications that share information using a shared memory segment created with CreateFileMapping() and MapViewOfFile(). One of the applications is a system service; the remainder are started by the logged-in user. On Windows XP, there was no problem. We named our segments “Global\Something” and all was well.

The additional security in Vista (and assumedly Windows 7) appears to prevent this architecture from working. Normal users are not allowed to create (Win32 error 5) objects in the global namespace. The MSDN indicates that if the account has the “create global” privilege then all should be well, but this does not seem to be the case in practice. Also, Vista’s “integrity” features appear to prevent the “low integrity” user processes from accessing the “high integrity” service-created shared memory object. It looks like I should be able to fix this via some magical SetSecurityDescriptorSacl() incantation, but I’m having difficulty learning to speak sacl.

So the question is: What is the proper way of using a shared memory segment between services and normal user processes?

To preempt the easy answer of “just turn off UAC”, we’re in a fairly locked-down environment and that is not a possibility.

Edit: Both the service and the user process need read/write access to the segment.

The simplest way would be to have your service create the shared memory and specify a DACL in CreateFileMapping that grants regular users read access to the shared memory.

Normal users don't have the create global privilege, but services can have this privilege. If you must have your users create the shared memory and then have the service probe it, you could have an IPC scheme where your user code sends a message to the service containing the file mapping handle, and the service would then call DuplicateHandle to get a reference to it. This would require your service to run with the debug privilege.

The simplest way to create a DACL is to use ConvertStringSecurityDescriptorToSecurityDescriptor, which takes a string in a format called SDDL specifying the ACL.

Writing Secure Code contains an excellent chapter on creating DACLs with SDDL.

// Error handling removed for brevity
SECURITY_ATTRIBUTES security;
ZeroMemory(&security, sizeof(security));
security.nLength = sizeof(security);
ConvertStringSecurityDescriptorToSecurityDescriptor(
         L"D:P(A;OICI;GA;;;SY)(A;OICI;GA;;;BA)(A;OICI;GR;;;IU)",
         SDDL_REVISION_1,
         &security.lpSecurityDescriptor,
         NULL);

CreateFileMapping(INVALID_HANDLE_VALUE, &security,
              PAGE_READWRITE, sizeHigh, sizeLow, L"Global\\MyObject");

LocalFree(security.lpSecurityDescriptor);

"D:P(A;OICI;GA;;;SY)(A;OICI;GA;;;BA)(A;OICI;GR;;;IU)" specifies the DACL. D:P means this is a DACL (instead of a SACL . . . you'd rarely use SACL's) followed by several ACE strings which control who gets access. Each one is A (allow) and allows for object and contains inheritance (OICI). The first to grant all access (GA - grant all) to System (SY) and Administrators (BA, built-in administratos). The last grants read (GR) to interactive users (IU), which are users actually logged on to a session.

Once this is done, normal users should be able to call OpenFileMapping to get a handle to the shared mapping, and be able to map it into their process. Since normal users have limited rights on the object, they'll have to be sure to open it and map it for read-access only.
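
A minimal sketch of the user-process side (the object name and the omitted error handling are illustrative, and windows.h is assumed):

// Open the service's mapping read-only and map a view of it.
HANDLE hMap = OpenFileMapping(FILE_MAP_READ, FALSE, L"Global\\MyObject");
if (hMap != NULL)
{
    const void *view = MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, 0);
    if (view != NULL)
    {
        // ... read the shared data ...
        UnmapViewOfFile(view);
    }
    CloseHandle(hMap);
}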

If users need write access, you'd replace GR with GWGR. Note that this isn't secure - a limited user would then be able to modify the shared memory while your service is reading and trying to parse information, resulting in a crash of your service.

In my iOS 4+ app I use AES encryption in several places, and the whole app has to be very secure. In order to do this I have to hard-code several keys in the app, which are then randomly picked when I need to encrypt something...

My question is how to store those private keys. Is it safe to hard-code them using NSString? Or:

#define SecretKeyString @"febd9a24d8b65c1c787d50a4ed3619a9"

If a user jailbreaks an iPhone with this app installed, couldn't he get those hard-coded keys? How can I hide them most effectively?

Thanks for any suggestion...

I recommend reading some articles on security by obfuscation, which is essentially what you are trying to achieve (at least that's what all the recommendations amount to) and which is ultimately not secure.

However, iOS's sandboxing is your first and most effective form of security.

Second, input validation is the next most important security feature your app will need. Having encryption all over means nothing if you don't validate all your input (from user-typed info, to network responses, to app launches via a URL scheme).

In the end, encryption, where it is necessary, is only secure if you do not hard-code the keys (or merely obfuscate hard-coded keys). mprivat is correct: you'll need to use either user-generated data (a login), public key encryption (so only the non-included private key can decrypt), or server-side encryption that uses SSL for transport.

I'd also say that if your secure data is only to be maintained on the device, you should use the keychain API, and in particular make sure that you use the form that requires the user to be logged in for item retrieval.

If you have data that you are encrypting on the device that is decrypted both on the device and on another device (like a server), you have a fundamental architectural flaw. It is quite important that encryption-decryption only ever be client-client (i.e., the user's device only) or client-server (which can be the user's device to the server or the server to the user's device). Mixing the two results in vulnerabilities. I specifically mean the same encryption mechanism here; using separate encryption for client-client vs client-server is fine (and sometimes necessary).

Here's a must-read for those who need to write secure code: http://www.amazon.com/gp/aw/d/0735617228

I am new to LINQ, and this keeps throwing an exception on a null volume field. The file is unpredictable, and it will happen, so I would like to put a 0 in where there is an exception. Any quick and easy way to do it?

var qry =
    from line in File.ReadAllLines("C:\\temp\\T.txt")
    let myRecX = line.Split(',')
    select new myRec()
    {
        price = Convert.ToDecimal(myRecX[0].Replace("price =  ", "")),
        volume = Convert.ToInt32(myRecX[1].Replace("volume =", "")),
        dTime = Convert.ToDateTime(myRecX[2].Replace("timestamp =", ""))
    };

I think there's an issue here beyond the use of LINQ.

In general, it is bad practice to manipulate file data before sanitizing it.

Even though the following question is about the file name (rather than its content), it is a good starting point for understanding the concept of sanitizing input:

C# Sanitize File Name

After all, you yourself say that your code lacks control over the file content, so before calling:

let myRecX = line.Split(',')

I suggest define a private method like:

string SanitizeInputLine(string input) {
  // here do whatever is needed to bring back input to 
  // a valid format in a way that subsequent calls will not
  // fail

  return input;
}

Applying it is straightforward:

let myRecX = SanitizeInputLine(line).Split(',')

As a general rule, never trust input.

Let me quote Chapter 10, "All Input Is Evil!", of Writing Secure Code by Howard/LeBlanc:

...you should never trust data until data is validated. Failure to do so will render your application vulnerable. Or, put another way: all input is evil until proven otherwise.

I am on a project that involves processing financial information, and so I need to write secure ASP.NET pages using C# 2008 (HTTPS etc.).

Can anyone recommend any tutorials that can help me understand more about writing secure ASP.NET apps?

Thanks

There's a whole book on this topic, Dominick Baier's Developing More-Secure Microsoft ASP.NET 2.0 Applications. It is outstanding, and has a ton of features and techniques that you won't find anywhere else, at least not without a lot of digging. I've used this book for web security design on two projects, and I highly recommend it.

EDIT TO ADD: Second recommendation, Writing Secure Code: Practical Strategies and Proven Techniques for Building Secure Applications in a Networked World. While much of the code in this book is about unmanaged code, the sections on understanding good security development practices, threat modeling, etc., really tell you what you need to be thinking about as you design and evaluate your web site's security issues.

I've been coding in C++, Matlab, and similar languages for scientific purposes for quite some time now, but I recently wanted to get into web programming. I've taught myself HTML and CSS and I've dabbled in Javascript, PHP, and mySQL. I would really like to start making more advanced, user-driven websites (if that makes sense - ultimately sites similar to twitter and facebook in functionality), but I am worried that I don't know enough about internet security and vulnerabilities to make sure that the programming decisions I make are secure/safe.

What suggestions do you have, or what information can you offer, that will help me be confident in the security of the code I produce?

If none of this makes sense or you would like some clarification, just ask.

Check out Writing Secure Code by Michael Howard and David LeBlanc from Microsoft Press. It's got a lot of good information on secure coding in general as well as a chapter or two specific to web programming. It's a Microsoft book but most of the ideas translate to whatever language you are working in.

Link to Amazon.

I am new to the ethical hacking world, and one of the most important topics is the stack overflow. Anyway, I coded a vulnerable C program which has a char name[400] statement, and when I run the program with 401 A's it doesn't overflow, but the book I am following says it must overflow, and logic says so too. So what's wrong?

Here's a good example in C showing how a buffer overflow can be used to execute arbitrary code. Its objective is to find an input string that will overwrite a return address causing a target function to be executed.

For a very good explanation of buffer overflows I would recommend chapter 5 of Writing Secure Code 2nd Edition.
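
As a sketch of why one extra byte often goes unnoticed: compilers typically round stack frames up for alignment and may reorder locals, so the 401st byte can land in padding rather than on the saved return address. Overflowing by a wider margin usually makes the corruption visible. A minimal illustration (names are illustrative; with a stack protector enabled you get an abort instead of a silent smash):

#include <stdio.h>
#include <string.h>

static void victim(const char *input)
{
    char name[400];
    strcpy(name, input);        /* no bounds check: the classic mistake */
    printf("copied %zu bytes into a 400-byte buffer\n", strlen(input) + 1);
}

int main(void)
{
    char attack[512];
    memset(attack, 'A', sizeof(attack) - 1);  /* 511 A's, then a null */
    attack[sizeof(attack) - 1] = '\0';
    victim(attack);             /* far past any frame padding: expect a crash
                                   or a stack-smashing abort */
    return 0;
}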


My boss told me to look at the following code and tell him what the potential security vulnerabilities were. I'm not very good at this kind of thing, since I don't think in the way of trying to hack code. All I see is that nothing is declared private, but other than that I just don't know.

#define NAME_SIZE (unsigned char) 255
// user input should contain the user’s name (first name space
// middle initial space last name and a null
// character), and was entered directly by the user.
// Returns the first character in the user input, or -1 if the method failed.
char poor_method(char* user_input, char* first, char *middle, char* last)
{
   char *buffer;
   char length;

   // find first name
   buffer = strtok(user_input, " ");
   if(buffer==0)
   {
        return -1;
   }
   length = strlen(buffer);
   if(length <= NAME_SIZE)
   {
        strcpy(first, buffer);
   }

   // find middle name
   buffer = strtok(NULL, " ");
   if(buffer==0)
   {
        return -1;
   }
   if(middle)
       *middle = buffer[0];

   // find last name
   buffer = strtok(NULL, "\0");
   length = strlen(buffer);
   if(length <= NAME_SIZE)
   {
       strcpy(last, buffer);
   }
   // Check to make sure that all of the user input was used
   buffer = strtok(NULL, "\0");
   if(buffer != NULL)
   {
       return -1;
   }
   return first[0];
}

What security vulnerabilities are there?

Get good at writing secure code

You most likely don't want systems that you are responsible for finding their way onto Bugtraq or CVE. If you don't understand it, be honest with your boss. Tell him you don't understand and you want to work on it. Pick up Writing Secure Code. Read it, learn it, love it. Asking this question on SO and giving your boss the answer definitely doesn't help you in the long run.

Then look at the sample code again :)

We distribute Visual Studio 2010 project files; it's our minimum platform requirement for the MSBuild-style project files, and users are expected to upgrade as necessary. The project also has each configuration set to /arch:SSE2 as a default, which users are likewise expected to change as necessary.

We are experiencing a D9002 dirty compile under Visual Studio 2012 and above due to use of /arch:SSE2 as a minimum platform architecture (related: /arch setting for VS2010 - VS2015). We added the following to our project file to squash the warning:

<!-- Visual Studio 2012 dirty compile due to use of SSE2 -->
<ItemDefinitionGroup Condition="'$(VisualStudioVersion)'>='11.0'">
  <ClCompile>
    <DisableSpecificWarnings>9002;%(DisableSpecificWarnings)</DisableSpecificWarnings>
  </ClCompile>
</ItemDefinitionGroup>

The above code runs fine on Visual Studio 2010. Under Visual Studio 2012 it creates more noise:

1>------ Build started: Project: cryptest, Configuration: Release x64 ------
1>cl : Command line warning D9014: invalid value '9002' for '/wd'; assuming '4999'
1>cl : Command line warning D9002: ignoring unknown option '/arch:SSE2'

Visual Studio complains it does not know what the warning is, and then it issues the warning. You can't make this stuff up...

We have an SDLC and we follow Microsoft's recommendations. We also take a firmer posture and make clean compiles a security gate, because warnings (1) cause user hardships, and (2) generate chatter on the mailing list and reports in the issue tracker.

How do we disable D9002 for VS2012 and above?