Sunday, April 12, 2009

Cross Site Scripting – How Can You Stop It?

Recently, Cross Site Scripting has taken a big toll across the internet. It is really alarming that something so widely ignored over the past few years has become so common and devastating. In this post we shall discuss at least one case study of Cross Site Scripting. This post will also remind you of one very common and simple thought: the power of technology lies in the hands of the people who use it. Whether it is good or bad depends on the user.

Let's revisit the Cross Site Scripting and server compromise attack I faced recently.

26 March 2009 – 10:45 AM

I had built a website for one of my clients with a content management system (CMS) built into it. For editing HTML content in some of the pages, we had used FCK Editor, a very popular editor. Suddenly the client let us know that, somehow, Google had blacklisted our website.

Malware warning

While searching for the name of the website, something like the picture above was displayed in the Google search results; clicking on it led to the page shown in the bottom picture. From the website's traffic account with Google, we learned that our website had been distributing malware, Trojans, and backdoors for a couple of days. What the…

Malware warning page

The client was really upset, as numerous customers were complaining about the incident and were worried about the security of their computers. They immediately called server support and took the website down, putting up an emergency landing page. We were perplexed: McAfee was showing that the website was 100% secure and PCI Compliant.

26 March 2009 – 12:31 PM 

The first thing that occurred to me was that some kind of malware was being downloaded, so there must be some XSS in our website, and I randomly opened some of the web pages in Notepad. What I found was a small script tag at the bottom of the body of some of the pages. Something like:

<script src="…"></script>

(The src URL, which pointed to a remote ken.gif file, has been stripped here.)

So Mister ken.gif was the culprit, I thought. I did a simple search of the website's root folder to find whether there were any more of Mr. Ken out there. The search turned up a few JavaScript files, e.g. jQuery and ThickBox, carrying the same tag. I cleaned them up. I was still not sure how this had happened: the root folder grants write permission only to the ASP.NET machine account, so whoever did this must have had that permission.
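Hunting for other infected files can be automated. The sketch below is hypothetical (the file extensions and the regex are my assumptions, not the exact search I ran): it walks a folder and reports every external script URL referenced in each web file, so injected tags stand out.

```python
import re
from pathlib import Path

# Matches <script ... src="..."> tags and captures the URL.
SCRIPT_SRC = re.compile(r'<script[^>]*\bsrc\s*=\s*["\']([^"\']+)["\']',
                        re.IGNORECASE)

def find_script_sources(root):
    """Map each web file under `root` to the script URLs it references."""
    hits = {}
    for path in Path(root).rglob("*"):
        if path.suffix.lower() not in {".html", ".htm", ".aspx", ".js"}:
            continue
        srcs = SCRIPT_SRC.findall(path.read_text(errors="ignore"))
        if srcs:
            hits[str(path)] = srcs
    return hits
```

Running this over the site root and eyeballing the URLs it prints would have caught all of Mr. Ken's copies in one pass.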

Suddenly I found another file named “con.aspx”. I opened it in Notepad and soon realized that none of the code in it was written by me. This was an alien. I downloaded it and took a small risk: I put the file in my local development environment and executed it. It asked for a password, which I bypassed by editing the code. Then there was quite a scene on my screen. The file was a gem. It was soon displaying all the files in my wwwroot folder along with their permissions. It was displaying the running processes on my machine. It was displaying all open and closed ports on the machine. It was displaying registry values that revealed the name of my machine. I could not take any more. Soon I found out the original project was AspxSpy 1.0, an open source project that was even hosted on CodePlex. You can grab the file there.

Following are some pictures I took:

(Screenshots: AspxSpy showing my IIS configuration.)

19 March 2009 – time unknown

So this tool was like the atom bomb: the good or the evil is the user, not the software itself. The hacker had uploaded the file to the server through a vulnerability in FCK Editor's file upload system and then, using the machine account's permissions, copied it to the root directory. The hacker thus had access to the command prompt, to IIS, and to the registry.

Using this opportunity, the hacker had placed a file named open.bat, along with an svchost.exe, in the C:\WINDOWS\addins folder. The open.bat file was written to inject the script tag into the pages. The svchost.exe had been tampered with, as the server administrator later found out. It soon became clear that the server had been compromised at the root level.

From the IIS logs it was found that the hack was carried out on 19th March, and we had absolutely NO clue at all until Google pointed it out.
So, this was the modus operandi of the attack on the server.

Present Day

After a few days, when we had found out what the attack was all about and how much damage it had actually done, we started thinking about preventive measures. Around the web, Cross Site Scripting had only one suggested preventive measure: check user input for malicious code before displaying it in web pages. But as I told you before, the website we are talking about has a CMS section, so some JavaScript is already being saved legitimately through the HTML editor. A blanket check for malicious code therefore would not work for us.

  • However, we revised the folder permissions on the web server and took a very stern approach, giving write permission only to the specific folders that actually needed it. In the process I found a common tendency among some developers: giving write permission to “Everyone” on the root folder. That is really, really dangerous.
  • We use relative links for all JavaScript sources across our website, so a regex that flags any absolute URL in the src attribute of a script tag could be a probable solution, though we did not try that one.
  • We upgraded FCK Editor to the recent 2.6.4 version with FCKEditor.NET 2.6.3, which performs an authentication check before letting a user upload files or view uploaded files on the server.
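The regex idea in the second bullet could look something like this sketch (the pattern and helper name are illustrative, not code we shipped): since all legitimate script references on the site are relative, any absolute or protocol-relative URL in a script tag's src is flagged as suspect.

```python
import re

# Flags script tags whose src is absolute ("http://...", "https://...")
# or protocol-relative ("//..."); relative paths pass through.
FOREIGN_SCRIPT = re.compile(
    r'<script[^>]*\bsrc\s*=\s*["\'](?:https?:)?//', re.IGNORECASE)

def has_foreign_script(markup):
    """Return True if the markup loads a script from an absolute URL."""
    return bool(FOREIGN_SCRIPT.search(markup))
```

Content saved through the CMS could be run through such a check before it is stored or rendered.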


This attack was an eye opener for me: there are common acts of negligence that developers commit, and hackers exploit those small loopholes. No doubt today's hacking is also coined “Social Engineering”.

Recently I have been studying Cross Site Scripting a lot more, so it will be better if you consider this small article episode one. Soon I will be back with more information. If you have anything you want to share with me, please leave a comment. Let's make our websites digital fortresses.




Saturday, March 7, 2009

Speed up your Visual Studio’s Website Building & Publishing


The problem

Recently, I had been experiencing a very annoying problem with publishing my project. In my company it is common practice to publish the website we are going to deploy to a local folder with the “Single File Assembly” option ON, and then upload the changed files via FTP to the staging or production server. Recently, whenever I tried to publish the website, Visual Studio gave this output after trying to pre-compile:

“Object <436114d4_6fca_4720_9ef7_fcd21e349d9d>/<Z3qAWE53tdTYmJNo8I3Wn6ys_320>.rem  has been disconnected or does not exist at the server.”

(The alphanumeric parts are random, of course, or so it seems.)
I googled the problem and found an interesting fact about its cause. During pre-compilation an object is created, and because the publish takes too long to execute, the object gets garbage collected and becomes unavailable to the process. So the “project building time” was the main culprit here. The project I was working on is pretty large (~6.66 GB), and I never dared to press F5 while testing something, because if I did I had to go for a coffee, and sometimes even after a 15–20 minute break I would come back only to find the build still going on…

The Search for a Solution

Finally, I got frustrated and searched desperately for a solution. One option I could think of was to increase the RAM in my machine, but that required the organization's approval. I have 1 GB of RAM and I am running Windows XP Service Pack 3, which I thought was enough for a machine running XP and Visual Studio 2005. Anyway, I opened the System Properties box by right-clicking on “My Computer” and clicking “Properties” in the context menu, went to the “Performance” section, and selected the radio button that gives performance the highest priority. Moreover, I killed some unnecessary processes from Windows Task Manager, especially the “Search Indexer”. Now my machine was faster than lightning! So I started publishing again.


“Object <some alphanumeric>/<some_alphanumeric_number>.rem  has been disconnected or does not exist at the server.”

Duh! So much for doing so many things. I started searching again. And this time God said “Let there be light…” and, of course, “there was light!”


After a long search I detected the culprit behind the “long building time” mischief: some conflicting dll.refresh files.

Whenever someone adds an assembly reference to an application, a dll.refresh file gets generated automatically. This .refresh file stores the relative path of the original dll from which the reference was made. At build time, Visual Studio checks those dll.refresh files to see whether any updates have been made to the original dll and, if so, automatically updates the referenced copy accordingly. If there are dependent assemblies, Visual Studio copies them to the bin folder as well.

So, to stop Visual Studio from auto-checking for updates, the dll.refresh files can be deleted, which in turn solved my long build time problem. Believe me, the change in build time was drastic.
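The cleanup itself is trivial to script. Here is a minimal sketch (the folder layout is an assumption) that removes every *.dll.refresh file under a project folder while leaving the dlls themselves alone:

```python
from pathlib import Path

def delete_refresh_files(project_root):
    """Delete every *.dll.refresh under project_root; return removed paths."""
    removed = []
    for refresh in Path(project_root).rglob("*.dll.refresh"):
        refresh.unlink()          # the referenced .dll itself is untouched
        removed.append(str(refresh))
    return sorted(removed)
```

Run it against the website folder, rebuild, and compare build times.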

I think this is a good solution to the slowdown problem because, even if the original dll gets updated, you can always re-add the reference, which will update your application. Generally, within short-term projects we do not change third-party dlls very often, so the refresh files can be deleted without hesitation. Of course, if you have referenced a class library of your own that changes every now and then, it is better to keep its refresh file intact. So, what was the problem, and where was the conflict?

Scott Guthrie calls this problem the Dueling Assembly Reference Problem. You can find a wonderfully detailed explanation on his blog here.

Friday, March 6, 2009

How to make your Website McAfee Secured


Recently, after a critical attack on the server where an application was running, one of my clients asked us to make sure the application was secured from every perspective. We chose McAfee to verify the website's security. After a thorough Application Scan, Network Scan, and PCI Scan, McAfee found several vulnerabilities in the website, and I suspect some of them exist in most of the websites we make. So here is a brief list, with the solutions we found, to make a website McAfee Secured and PCI Compliant.

Before we delve into the technical details, I want to clarify what this article covers and what it does not, so as not to create any false expectations. This article is based solely on my experience solving these vulnerabilities, and I have presented them accordingly. As several of the fixes require registry tweaks, make sure to back up the registry first, and change anything at your own risk.

Following are the most important vulnerabilities that were reported by McAfee:

1  Vulnerability 1

1.1. Vulnerability Name:  Microsoft .net Custom Errors Not Set
1.2. Description:

This vulnerability is reported when a custom error page is not set for the application. We generally use web.config to specify the error page, but McAfee does not recognize that.
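For reference, the web.config setting we normally rely on looks like the snippet below (the page names are illustrative placeholders); McAfee's scan ignores it, which is why the IIS-level fix is needed:

```xml
<configuration>
  <system.web>
    <!-- mode="On" hides detailed errors from all visitors -->
    <customErrors mode="On" defaultRedirect="~/Error.aspx">
      <error statusCode="404" redirect="~/NotFound.aspx" />
    </customErrors>
  </system.web>
</configuration>
```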


In order to fix this vulnerability, the error pages must be set from IIS. The steps are as follows:

    • Go to IIS Manager
    • Right click on the project name that has to be McAfee Secured.
    • Click Properties.
    • In the Properties Box, click on the “Custom Errors” tab and the following window appears:


    • Highlight any one of the error types, click “Edit Properties”, and a small window appears where the custom error page can be specified.


    • Click the OK button in both this window and its parent.
    • If a custom page is specified for even one error type, McAfee no longer considers this a vulnerability.

2  Vulnerability 2

2.1  Vulnerability Name:  Allow All Policy in crossdomain.xml
2.2  Description:

If a website hosts a number of Flash videos and other content that other websites embed, crossdomain.xml comes into play. This XML file sets the policy for which domains may use the content. It contains a tag like:
<allow-access-from domain="*"/>
According to McAfee, this poses a serious threat of a hack attack on the website.


It is very confusing how McAfee scans for the file: even if it is removed from the root (wwwroot), as long as a copy still exists anywhere on the server, McAfee finds it and raises an alert. The best solution is to edit every occurrence of the file on the whole server, remove the wildcard stars (*), and list the specific domains from which the videos will be accessed. If the file is not required, it is best to delete it completely.
So the edited file will look something like:
<allow-access-from domain="*.yourdomain.com"/>
(Here the * denotes any prefix of the domain name, e.g. www, and the rule covers all protocols: http or https.)
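For reference, a complete restricted crossdomain.xml could look like this (the domain names are illustrative placeholders):

```xml
<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM
  "http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <!-- only these domains may load our Flash content -->
  <allow-access-from domain="*.yourdomain.com" />
  <allow-access-from domain="partner.example.com" />
</cross-domain-policy>
```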

3  Vulnerability 3

3.1  Vulnerability Name:  Using SSL v2 in secure communications
3.2  Description:

The remote service appears to encrypt traffic using SSL protocol version 2.
Netscape Communications Corporation introduced SSL 2.0 with the launch of Netscape Navigator 1.0 in 1994 and it contains several well-known weaknesses. For example, SSLv2 doesn't provide any protection against man-in-the-middle attacks during the handshake, and uses the same cryptographic keys for message authentication and for encryption.
In Internet Explorer 7, the default HTTPS protocol settings are changed to disable the weaker SSLv2 protocol and to enable the stronger TLSv1 protocol. By default, IE7 users will only negotiate HTTPS connections using SSLv3 or TLSv1. Mozilla Firefox is expected to drop support for SSLv2 in its upcoming versions.
As almost all modern browsers support SSLv3, disabling support for the weaker SSL method should have minimal impact. The following browsers support SSLv3:

    • Internet Explorer 5.5 or higher (PC)
    • Internet Explorer 5.0 or higher (Mac)
    • Netscape 2.0 (Domestic) or higher (PC/Mac)
    • Firefox 0.8 or higher (PC/Mac/Linux)
    • Mozilla 1.7 or higher (PC/Mac/Linux)
    • Camino 0.8 or higher (Mac)
    • Safari 1.0 or higher (Mac)
    • Opera 1.7 or higher (PC/Mac)
    • Omniweb 3.0 or higher (Mac)
    • Konqueror 2.0 or higher (Linux)
      According to an Assessor's Update report, “…it is imperative that an ASV identify the use of SSL 2.0 to transmit cardholder data as a failure.”

SSL-related protocols can run on other service ports as well. Typical ports include 465, 993, 995, 2078, 2083, 2087, 2096, 8443, etc. Each application will have its own configuration options for handling SSL protocols.
To solve this problem, one needs to open the Registry Editor:

    • Click Start, click Run, type regedt32 or type regedit, and then click OK.
    • In Registry Editor, locate the following registry key:
      HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server


    • On the Edit menu, click Add Value.
    • In the Data Type list, click DWORD.


    • In the Value Name box, type Enabled, and then click OK. (The value will automatically be set to 0, i.e. Disabled.)
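The same change can be expressed as a .reg file that you merge into the registry (back the registry up first; the key path is the one located above):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server]
"Enabled"=dword:00000000
```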


4  Vulnerability 4

4.1. Vulnerability Name:  Weak Supported SSL Cipher Suites

4.2.   Description:

The remote host supports the use of SSL ciphers that offer either weak encryption or no encryption at all. This vulnerability applies to all SSL/TLS sessions that carry sensitive information.
PCI defines strong cryptography, for secret-key-based systems, as anything above 80-bit encryption.


The solution to this is very simple but again requires a registry tweak. The steps are as follows:

    • Click Start, click Run, type regedt32 or type regedit, and then click OK.
    • In Registry Editor, locate the following registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers


    • Under the Ciphers key there are several cipher entries.
    • Locate the ciphers that use less than 128-bit encryption.
    • For each of them, create a DWORD value named Enabled set to 0, just as in the previous case.
    • For convenience I have marked them with red arrows in the picture above.
    • A system restart is NOT required for this.

    Now the server is secured.
    The above-mentioned security issues are the major ones that most systems have. Besides these, there may be some easy and minor vulnerabilities, such as:

      • Using robots.txt in the pages (generally inserted by the web marketing team to track user hits).
      • Directory Scanner: common directories are revealed. This can be resolved by URL rewriting and turning “Directory Browsing” off.

    Note: The above fixes require a minor registry tweak, so it is strongly recommended to back up the registry before doing anything. If by any chance something gets messed up, just delete the SCHANNEL key and restart the machine; the key will be regenerated automatically.

Sunday, September 28, 2008

Welcome aboard!

During our long, long journey as developers we come across numerous problems and learn from them. I am going to share my experiences through this blog. As Microsoft has launched its new set of products, like Visual Studio 2008, SQL Server 2008, MVC, IIS7 and many more, it's time to learn, enhance, and be at the forefront of technology. Very soon I am going to give a presentation at my office on IIS7, and I am currently busy gathering as much information as I can on it. ScottGu's blog has been a real help for me, especially the videos of interviews, seminars, MIX, etc. Thanks a lot to him.
As I gather more information on IIS7, I shall keep posting here.