Thursday, March 29, 2012

TCP Data - Section 7

Introduction

Finally, the last page of our incredible TCP Analysis. As most of you would expect, this section is dedicated to the DATA section that follows the TCP Header.


The Data

The following diagram has appeared many times already, but it is worth displaying one final time to highlight the data portion of the packet:

tcp-analysis-section-7-1

You should already be familiar with the procedure followed when the above packet arrives at its destination; however, a short summary is given below to refresh our memory and avoid confusion.

When the above packet arrives at the receiver, a decapsulation process is required to remove each OSI layer's overhead and pass the Data portion to the application waiting for it. As such, when the packet is received in full by the network card, it is handed to the 2nd OSI layer (Datalink) which, after performing a quick error check on the packet, strips the overhead associated with that layer, meaning the yellow blocks are removed.

The remaining portion (the IP header, TCP header and Data), now called an IP Datagram, is passed to the 3rd OSI layer (Network) where another check is performed; if the datagram is found to be error free, the IP header is stripped and the rest (now called a Segment) is passed to the 4th OSI layer.

The TCP protocol (4th OSI layer) accepts the segment and performs its own error check on it. Assuming the segment is found error free, the TCP header is stripped and the remaining data is handed to the upper layers, eventually arriving at the application waiting for it.


Summary

Our in-depth analysis of the TCP protocol has reached its conclusion. After reading all these pages, we are sure you have a much better understanding of the TCP protocol's purpose and processes, and are able to really appreciate its functions.

TCP Options - Section 6

Introduction

The TCP Options (MSS, Window Scaling, Selective Acknowledgements, Timestamps, Nop) are located at the end of the TCP Header, which is also why they are covered last. Thanks to the TCP Options field we have been able to enhance the TCP protocol by introducing new features, or 'addons' as some people like to call them, each defined by its respective RFC.

As data communication continues to become more complex and less tolerant of errors and latency, it was clear that these new features had to be incorporated into the TCP transport to help overcome the problems created by the new links and speeds available.

To give you an example, Window Scaling, mentioned in the previous pages and elaborated here, is possible using the TCP Options field because the original Window field is only 16 bits long, allowing a maximum decimal value of 65,535. Clearly this is far too small when we want to express 'Window size' values in the range of hundreds of thousands, e.g., 400,000 or 950,000.

Before we delve into any details, let's take a look at the TCP Options field:

tcp-analysis-section-6-1

As you can see, the TCP Options field is the sixth section of the TCP Header analysis.

Located at the end of the header and right before the Data section, it allows us to make use of the new enhancements recommended by the engineers who help design the protocols we use in data communications today.


TCP Options

Most of the TCP Options we will be analysing are required to appear only during the initial SYN and SYN/ACK phase of the 3-way-handshake TCP performs to establish a virtual link before transferring any data. Other options, however, can be used at will, during the TCP session.

It is also important to note how the TCP Options consume space at the end of the TCP header. Each option is a whole number of bytes long, and because the header length (Data Offset) field counts the header in 32-bit words, the options area as a whole must end on a 32-bit boundary. If the options in use do not add up to a multiple of 32 bits, the remainder is padded, typically with Nop options or a trailing End-of-Option-List byte. So the total length of the options area is always a multiple of 32 bits: 32, 64, 96 bits and so on.
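As a rough sketch of this padding rule (the helper name and the choice of Nop padding are ours for illustration; real stacks may also use End-of-Option-List bytes), aligning an options area to a 32-bit boundary might look like this:

```python
def pad_options(options: bytes) -> bytes:
    """Pad a TCP options area with Nop (0x01) bytes so its length
    is a multiple of 4 bytes, i.e. the area ends on a 32-bit boundary."""
    remainder = len(options) % 4
    if remainder:
        options += b"\x01" * (4 - remainder)
    return options

# MSS option: kind=2, length=4, value=1460 (two bytes, network byte order)
mss = bytes([2, 4, 1460 >> 8, 1460 & 0xFF])
# Window Scale option: kind=3, length=3, shift count=7
wscale = bytes([3, 3, 7])

padded = pad_options(mss + wscale)  # 7 bytes of options, padded out to 8
```

Here a 4-byte MSS option plus a 3-byte Window Scale option give 7 bytes, so one Nop byte is appended to reach the 8-byte (two 32-bit words) boundary.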

Here's a brief view of the TCP Options we are going to analyse:

  • Maximum Segment Size (MSS)
  • Window Scaling
  • Selective Acknowledgements (SACK)
  • Timestamps
  • Nop

Let's now take a look at the exciting options available and explain the purpose of each one.


Maximum Segment Size (MSS)

The Maximum Segment Size is used to define the largest segment that will be used during a connection between two hosts. As such, you should only see this option used during the SYN and SYN/ACK phase of the 3-way handshake. The MSS TCP Option is 4 bytes (32 bits) long: one byte for the option kind, one for the option length, and two for the MSS value itself.

If you have previously come across the term "MTU", which stands for Maximum Transmission Unit, you will be pleased to know that the MSS helps define the MTU used on the network.

If you're scratching your head because the relationship between the MSS and MTU doesn't make sense yet, or it is not quite clear, don't worry; the following diagram will help you get the big picture:

tcp-analysis-section-6-2

You can see that the Maximum Segment Size covers only the Data portion, while the Maximum Transmission Unit includes the IP Header, the TCP Header and the Data (MSS).

It would also benefit us to recognise the correct terminology that corresponds to each level of the OSI Model: The TCP Header and Data is called a Segment (Layer 4), while the IP Header and the Segment is called an IP Datagram (Layer 3).

Furthermore, regardless of the size of the MTU, the Datalink layer adds a further 18 bytes of overhead. This overhead includes the Source and Destination MAC addresses and the Protocol type, followed by the Frame Check Sequence placed at the end of the frame.

This is also the reason why we can only have a maximum MTU of 1500 bytes. Since the maximum size of an Ethernet II frame is 1518 bytes, subtracting 18 bytes (Datalink overhead) leaves us with 1500 bytes to play with.

TCP usually computes the Maximum Segment Size (MSS) that results in IP Datagrams that match the network MTU. In practice, this means the MSS will have such a value that if we add the IP Header as well, the IP Datagram (IP Header+TCP Header+DATA) would be equal to the network MTU.

If the MSS option is omitted by one or both ends of the connection, then the value of 536 bytes will be used. The MSS value of 536 bytes is defined by RFC 1122 and is calculated by taking the default value of an IP Datagram, 576 bytes, minus the standard length of the IP and TCP Header (40 bytes), which gives us 536 bytes.
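The arithmetic above is easy to check, and the same subtraction gives the familiar Ethernet MSS:

```python
# Default MSS per RFC 1122: default IP datagram size minus standard headers.
DEFAULT_IP_DATAGRAM = 576
IP_HEADER = 20
TCP_HEADER = 20
default_mss = DEFAULT_IP_DATAGRAM - IP_HEADER - TCP_HEADER  # 536 bytes

# The same reasoning applied to a 1500-byte Ethernet MTU:
ETHERNET_MTU = 1500
ethernet_mss = ETHERNET_MTU - IP_HEADER - TCP_HEADER        # 1460 bytes

print(default_mss, ethernet_mss)  # 536 1460
```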

In general, it is very important to use the best possible MSS value for your network, because network performance can be extremely poor if this value is too large or too small. To help you understand why, let's look at a simple example:

If you wanted to transfer 1 byte of data through the network, you would need to create a datagram with 40 bytes of overhead: 20 for the IP Header and 20 for the TCP Header. This means that only 1/41 of your available network bandwidth carries data. The rest is nothing but overhead!

On the other hand, if the MSS is very large, your IP Datagrams will also be very large, meaning they will most probably fail to fit into a single frame if the MTU is too small. They will therefore need to be fragmented, roughly doubling the header overhead.


Window Scaling

We briefly mentioned Window Scaling in the previous section of the TCP analysis, though you will soon discover that this topic is quite broad and requires a great deal of attention.

Having gained a sound understanding of what the Window size field is used for, you can think of Window Scaling as, in essence, an extension of it. Because the largest possible value in the 16-bit Window size field is only 65,535 bytes (64 KB), it was clear that a larger field was required to push the maximum window up to a whopping 1 GB. Thus, Window Scaling was born.

The Window Scale option itself is only 3 bytes long and carries a shift count between 0 and 14. The advertised 16-bit Window value is multiplied by 2 raised to this shift count, so the effective window can be up to 16 + 14 = 30 bits wide, just under 1 GB.
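A quick sketch of the arithmetic (the function name is ours): under RFC 1323 window scaling, the effective window is the advertised 16-bit window shifted left by the negotiated scale factor.

```python
def effective_window(advertised: int, shift: int) -> int:
    """Scaled receive window per RFC 1323: advertised << shift, shift <= 14."""
    if not (0 <= advertised <= 0xFFFF and 0 <= shift <= 14):
        raise ValueError("advertised must fit in 16 bits, shift must be 0..14")
    return advertised << shift

# The largest possible window: 65,535 << 14 = 1,073,725,440 bytes,
# just under 1 GB.
print(effective_window(0xFFFF, 14))  # 1073725440
```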

If you're wondering where on earth someone would use such an extremely large Window size, think again. Window Scaling was created for high-latency, high-bandwidth WAN links where a limited Window size can cause severe performance problems.

To consolidate all these technological terms and numbers, an example would prove to be beneficial:

tcp-analysis-section-6-3

The above example assumes we are using the maximum Window size of 64 KB, and because the WAN link has very high latency, the packets take some time to arrive at their destination, Host B. Due to the high latency, Host A has stopped transmitting: 64 KB of data have been sent and none of it has yet been acknowledged.

At Time = 4, Host B has received the data and sends the long-awaited acknowledgement to Host A so it can continue sending, but the acknowledgement will not arrive until somewhere around Time = 6.

So, from Time = 1 up until Time = 6, Host A is sitting and waiting. You can imagine how poor the performance of this transfer would be in this situation. If we were to transfer a 10 Mb file, it would take hours!
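The stall above is the bandwidth-delay problem in numbers: with a fixed window, throughput can never exceed the window size divided by the round-trip time, no matter how fast the link is. A quick sketch (the 500 ms RTT is an assumed figure for a high-latency WAN link, not from the diagram):

```python
window_bytes = 64 * 1024  # fixed 64 KB window
rtt_seconds = 0.5         # assumed round-trip time for a high-latency WAN link

# At most one full window can be in flight per round trip:
max_throughput_bps = window_bytes / rtt_seconds * 8
print(max_throughput_bps / 1_000_000)  # ~1.05 Mbit/s, regardless of link speed
```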

Let's now consider the same example, using Window Scaling:

tcp-analysis-section-6-4

As you can see, with the use of Window Scaling, the window size has increased to 256 KB! Since the value is quite large, which translates to more data in transit, Host B has already received the first few packets while Host A is still sending the first 256 KB window.

At Time = 2, Host B sends an Acknowledgement to Host A, which is still busy sending data. Host A will receive the Acknowledgement before it finishes the 256 KB window and will therefore continue sending data without pause, since it will soon receive another Acknowledgement from Host B.

Clearly the difference a large window size makes is evident, increasing the network performance and minimising the idle time of the sending host.

The Window Scale option is defined in RFC 1072, which lets a system advertise 30-bit (16 from the original window + 14 from the TCP Options) Window size values, with a maximum buffer size of 1 GB. This option has been clarified and redefined in RFC 1323, which is the specification that all implementations employ today.

Lastly, for those who deal with Cisco routers, it may benefit you to know that you are also able to configure the Window size on Cisco routers running the Cisco IOS v9 and greater. Also, routers with versions 12.2(8)T and above support Window Scaling, which is automatically enabled for Window sizes above 65,535 bytes (64 kb), with a maximum value of 1,073,741,823 bytes (1 GByte)!


Selective Acknowledgments (SACK)

TCP was designed to be a fairly robust protocol but, despite this, it still has several disadvantages, one of which concerns Acknowledgements, and which happens to be the reason Selective Acknowledgements were introduced in RFC 1072.

The problem with good old plain Acknowledgements is that there is no mechanism for a receiver to state "I'm still waiting for bytes 20 through 25, but have received bytes 30 through 35". And if you're wondering whether such a situation is possible, the answer is yes, it is!

If segments arrive out of order and there is a hole in the receiver's queue, then using the 'classical' Acknowledgements TCP supports, the receiver can only say "I've received everything up to byte 20". The sender then needs to recognise that something has gone wrong and resend everything from that point onwards (byte 20).

As you may have concluded, the above situation is totally unacceptable, so a more robust service had to be created, hence Selective Acknowledgments!

Firstly, when a virtual connection is established using the classic 3-way handshake, each host must send a "Selective Acknowledgments Permitted" option in the TCP Options to indicate that it is able to use SACKs. From this point onwards, the SACK option is sent whenever a selective acknowledgment is required.

For example, if we have a Windows98 client that is waiting for byte 4,268, but the SACK option shows that the Windows98 client has also received bytes 7,080 through 8,486, it is obvious that it is missing bytes 4,268 through 7,079, so the server need only resend the missing 2,812 bytes, rather than restarting the entire transfer at byte number 4,268.
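The retransmission range in that example can be computed directly (the exclusive right edge follows the RFC 2018 convention, where a SACK block's second value is the sequence number just past the last received byte):

```python
waiting_for = 4268                   # first byte the receiver is still missing
sack_left, sack_right = 7080, 8487   # SACK block: bytes 7,080..8,486 received

# The sender must retransmit everything between the cumulative ACK point
# and the left edge of the SACK block:
missing_bytes = sack_left - waiting_for
print(missing_bytes)  # 2812 bytes (bytes 4,268 through 7,079)
```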

Lastly, we should note that each SACK block in the TCP Options consists of two sequence-number fields: the start of a contiguous block of received data and the byte just after its end. The reason there are two fields is that the receiver must be able to specify the range of bytes it has received, just like in the example above. In the SACK format used today (RFC 2018, which superseded the original RFC 1072 layout), each of these fields is 32 bits long, so one SACK block occupies 64 bits.


Timestamps

Another aspect of TCP's flow-control and reliability services is the round-trip delivery time a virtual circuit is experiencing. The round-trip delivery time determines how long TCP will wait before attempting to retransmit a segment that has not been acknowledged.

Because every network has unique latency characteristics, TCP has to understand these characteristics in order to set accurate acknowledgment timer threshold values. LANs typically have very low latency times, and as such TCP can use low values for the acknowledgment timers. If a segment is not acknowledged quickly, a sender can retransmit the questionable data quickly, thus minimizing any lost bandwidth and delay.

On the other hand, using a low threshold value on a WAN is sure to cause problems simply because the acknowledgment timers will expire before the data ever reaches the destination.

Therefore, in order for TCP to accurately set the timer threshold value for a virtual circuit, it has to measure the round-trip delivery times for various segments. Finally, it has to monitor additional segments throughout the connection's lifetime to keep up with the changes in the network. This is where the Timestamp option comes into the picture.

Similarly to the majority of the other TCP Options covered here, the Timestamp option must be sent during the 3-way-handshake in order to enable its use during any subsequent segments.

The Timestamps option carries two fields: a Timestamp Value, filled in by the sender, and a Timestamp Echo Reply, in which the receiver echoes back the most recent timestamp it received from the other side (it is set to zero on the initial SYN, before there is anything to echo). Both timestamp fields are 4 bytes long!


Nop

The Nop TCP Option stands for "No Operation" and is used to separate and align the different options within the TCP Options field. How the Nop option is used depends on the operating system. For example, if the MSS and SACK options are used together, Windows XP will usually place two Nops between them, as indicated in the first picture on this page.

Lastly, we should note that the Nop option occupies 1 byte. In our example at the beginning of the page, it occupies 2 bytes since it's used twice. You should also be aware that this field is often examined by hackers trying to fingerprint the remote host's operating system.


Summary

This page covered all the TCP Options that have been introduced to extend the protocol's reliability and performance. While these options are critical in some cases, most users, and even many network administrators, are unaware of their existence. The information provided here is essential for dealing with odd local and WAN network problems that can't be solved by rebooting a server or router :)

The final page to this topic is a summary covering the previous six pages of TCP, as there is little to analyse in the data section of the TCP Segment. It is highly suggested you read it as a recap to help you remember the material covered.

TCP Window Size, Checksum & Urgent Pointer - Section 5

Introduction

Our fifth section contains some very interesting fields that are used by the TCP transport protocol. We see how TCP helps control how much data is transferred per segment, make sure there are no errors in the segment and, lastly, flag our data as urgent, to ensure it gets the priority it requires when leaving the sender and arriving at the recipient.

So let's not waste any time and get right into our analysis!

tcp-analysis-section-5-1

The fifth section we are analysing here occupies a total of 6 bytes in the TCP header.

These values, like most of the fields in the protocol's header, remain constant in size, regardless of the amount of application data.

This means that while the values they contain will change, the total amount of space the fields occupy will not.


The Window Flag

The Window size is considered to be one of the most important flags within the TCP header. This field is used by the receiver to indicate to the sender the amount of data that it is able to accept. Regardless of who the sender or receiver is, the field will always exist and be used.

You will notice that the largest portion of this page is dedicated to the Window size field, and for good reason: it is the key to efficient data transfer and flow control. It truly is amazing, once you start to realise just how much this one field does.

The Window size field uses bytes as its unit. So in our example above, the number 64,240 means 64,240 bytes, or about 62.7 KB (64,240/1024).

The 62.7 KB reflects the amount of data the receiver is able to accept before it must send the sender (the server) a new Window value. When the amount of data transmitted equals the current Window, the sender waits for a new Window value from the receiver, along with an acknowledgement for the data just received.

The above process is required to maintain flawless data transmission and high efficiency. We should note, however, that the Window size is not just a random value, but one calculated using formulas like the one in our example below:

tcp-analysis-section-5-3

In this example, Host A is connected to a Web server via a 10 Mbit link. According to our formula, to calculate the best Window value we need the following information: Bandwidth and Delay. We are aware of the link's bandwidth: 10,000,000 bits (10 Mbits), and we can easily find out the delay by issuing a 'ping' from Host A to the Web server which gives us an average Round Trip Time response (RTT) of 10 milliseconds or 0.01 seconds.

We are then able to use this information to calculate the most efficient Window size (WS):

WS = 10,000,000 x 0.01 => WS = 100,000 bits, or 100,000/8 = 12,500 bytes ≈ 12.2 kbytes

For 10 Mbps bandwidth and a round-trip delay of 0.01 sec, this gives a window size of about 12 kb or nine 1460-byte segments:

tcp-analysis-section-5-4

This should yield maximum throughput on a 10 Mbps LAN, even if the delay is as high as 10 ms because most LANs have round-trip delay of less than a few milliseconds. When bandwidth is lower, more delay can be tolerated for the same fixed window size, so a window size of 12 kb works well at lower speeds, too.
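The bandwidth-delay calculation above generalises easily; here is a small sketch using the figures from the example (the function name is ours):

```python
def ideal_window_bytes(bandwidth_bps: float, rtt_seconds: float) -> float:
    """Bandwidth-delay product: the data in flight during one round trip,
    converted from bits to bytes."""
    return bandwidth_bps * rtt_seconds / 8

# 10 Mbit/s link with a 10 ms round-trip time, as in the example:
print(ideal_window_bytes(10_000_000, 0.01))  # 12500.0 bytes (~12.2 KB)

# The same link at 100 ms RTT would need ten times the window:
print(ideal_window_bytes(10_000_000, 0.1))   # 125000.0 bytes
```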


Windowing - A Form of Flow Control

Apart from the Windowing concept being a key factor for efficient data transmission, it is also a form of flow control, where a host (the receiver) is able to indicate to the other (the sender) how much data it can accept and then wait for further instructions.

The fact is that in almost all cases, the default value of 62 kbytes is used as a Window size. In addition, even though a Window size of 62 kbytes might have been selected by the receiver, the link is constantly monitored for packet losses and delays during the data transfer by both hosts, resulting in small increases or decreases of the original Window size in order to optimise the bandwidth utilisation and data throughput.

This automatic self-correcting mechanism ensures that the two hosts will try to make use of the pipe linking them in the best possible way, but do keep in mind that this is not a guarantee that they will always succeed. This is generally the reason a user is able to manually modify the Window size until the best value is found, and this, as we explained, depends greatly on the link between the hosts and its delay.

In the case where the Window size falls to zero, the remote TCP can send no more data. It must wait until buffer space becomes available and it receives a packet announcing a non-zero Window size.
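On the sockets API side, the window a host can advertise is ultimately bounded by the socket's receive buffer, which can be tuned per socket. A minimal sketch (the kernel may round, double, or clamp the requested value, so treat the printed figure as platform-dependent):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Request a 256 KB receive buffer; this indirectly raises the window
# the TCP stack can advertise for connections on this socket.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)

effective = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(effective)  # platform-dependent: Linux, for example, reports it doubled
sock.close()
```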

Lastly, for those who deal with Cisco routers, you might be interested to know that you are able to configure the Window size on Cisco routers running the Cisco IOS v9 and greater. Routers with versions 12.2(8)T and above support Window Scaling, a feature that's automatically enabled for Window sizes above 65,535, with a maximum value of 1,073,741,823 bytes!

Window Scaling will be dealt with in greater depth on the following page.

On the Server Side: Larger Window Size = More Memory

Most network administrators who have worked with very busy web servers would recall the massive amounts of memory they require. Since we now understand the concept of a 'Window size', we are able to quickly analyse how it affects busy web servers that have thousands of clients connecting to them and requesting data.

When a client connects to a web server, the server is required to reserve a small amount of memory (RAM) aside for the client's session. The amount of required memory is the same amount as the window size and, as we have seen, this value depends on the bandwidth and delay between the client and server.

To give you an idea how the window size affects the server's requirements in memory, let's take an example:

If you had a web server that served 10,000 clients on a local area network (LAN) running at 100 Mbits with a 0.1 second round-trip delay, and wanted maximum performance/efficiency for your file transfers, then according to our formula you would need to allocate a window of 1.25 MB for each client, or 12.5 GB of memory for all your clients! Assuming, of course, that all 10,000 clients are connected to your web server simultaneously.

To support large file transfers in both directions (server to client and vice versa), your server would need [(100,000,000 x 0.1 / 8) x 10,000 x 2] = 25 GB of memory just for the socket buffers!
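The buffer arithmetic from this example, spelled out step by step:

```python
bandwidth_bps = 100_000_000  # 100 Mbit/s LAN
rtt_seconds = 0.1            # round-trip delay
clients = 10_000

# Bandwidth-delay product per client, in bytes:
window_bytes = bandwidth_bps * rtt_seconds / 8   # 1,250,000 bytes = 1.25 MB

# One buffer per direction per client:
total_bytes = window_bytes * clients * 2
print(total_bytes / 1_000_000_000)  # 25.0 GB of socket buffers
```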

So you can see how important it is for clients not to use oversized window values! In fact, the current TCP standard requires that the receiver must be capable of accepting a full window's worth of data at all times. If the receiver over-subscribes its buffer space, it may have to drop an incoming packet. The sender will discover this packet loss and invoke TCP congestion control mechanisms even though the network is not congested.

It is clear that receivers should not over-subscribe buffer space (window size) if they wish to maintain high performance and avoid packet loss.


Checksum Flag

The TCP Checksum field was created to ensure that the data contained in the TCP segment reaches the correct destination and is error-free. For those network gurus wondering how TCP could ensure the segment arrives at the correct destination (IP Address), you will be happy to know that a little more information than just the TCP header is used to calculate the checksum, and, naturally, it includes a portion of the IP Header.

This 'extra' piece of information is called the pseudo-header and we will shortly analyse its contents but, for now, let's view a visual representation of the sections used to calculate the TCP checksum:

tcp-analysis-section-5-5

The above diagram shows you the pseudo header, followed by the TCP header and the data this segment contains. However, once again, be sure to remember that the pseudo header is included in the Checksum calculation to ensure the segment has arrived at the correct receiver.

Let's now take a look how the receiver is able to verify it is the right receiver for the segment it just received by analysing the pseudo header.


The Pseudo Header

The pseudo header is a combination of 5 different fields, used during the calculation of the TCP checksum. It is important to note (and remember!) that the pseudo header is not transmitted to the receiver, but is simply involved in the checksum calculation.

Here are the 5 fields as they are defined by the TCP RFC:

tcp-analysis-section-5-6

When the segment arrives at its destination and is processed through the OSI layers, once the transport layer (Layer 4) is reached, the receiver will recreate the pseudo header in order to recalculate the TCP header checksum and compare the result with the value stored in the segment it has received.

If we assume the segment somehow managed to find its way to a wrong machine, when the pseudo header is recreated, the wrong IP Address will be inserted into the Destination IP Address field and the result will be an incorrect calculated checksum. Therefore, the receiver that wasn't supposed to receive the segment will drop it as it's obviously not meant to be there.

Now you know how the checksum field guarantees that the correct host will receive the packet, or that it will get there without any errors!


However, be sure to keep in mind that even though these mechanisms exist and work wonderfully in theory, when it comes to the practical part, there is a possibility that packets with errors might make their way through to the application!

It's quite amazing, once you sit down and think about it for a minute, that this process happens for every single packet sent and received between hosts that use TCP and UDP (UDP calculates its checksum the same way) as their transport protocol!

Lastly, during the TCP header checksum calculation, the Checksum field itself is set to zero (0), as shown below. The field is zeroed during the calculation on either end because its value is not yet known. Once the value is calculated, it is inserted into the field, replacing the initial zero value.

This is also illustrated in the screen shot below:

tcp-analysis-section-5-7

In summarising the procedure followed when calculating the checksum, the following process occurs, from the sender all the way to the receiver:

The sender prepares the segment that is to be sent to the receiving end. The checksum is set to zero: four zeros in hex (0x0000), or sixteen zeros if you look at it in binary, because the checksum is a 16-bit field.

The checksum is then calculated using the pseudo header, the TCP header and, lastly, the data attached to the segment. The result is stored in the checksum field and the segment is sent!

The segment arrives at the receiver and is processed. When it reaches the 4th OSI layer where the TCP lives, the checksum field is set once again to zero. The receiver will then create its own pseudo header for the segment received by entering its own IP Address in the Destination IP Address field (as shown in the previous diagrams) and makes use of the TCP header and data to calculate the new checksum.

If all is successfully accomplished, the result should be identical to the value the Checksum field contained when the segment arrived. When this occurs, the packet is processed further and the data is handed to the application awaiting it.

If, however, the checksums differ, the segment is discarded (dropped); the sender, receiving no acknowledgement for it, will eventually retransmit it. Whether anything further happens on the receiving side depends on how the TCP stack is implemented in the receiver's operating system.
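The whole procedure can be sketched in a few lines of code. This is a simplified illustration (function names are ours, not any stack's actual code): the pseudo header is prepended only for the calculation and is never transmitted.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, per RFC 1071."""
    if len(data) % 2:
        data += b"\x00"                       # pad odd-length input
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                        # fold the carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def tcp_checksum(src_ip: bytes, dst_ip: bytes, segment: bytes) -> int:
    """Checksum over the pseudo header (source/destination IP, zero byte,
    protocol 6, TCP length) followed by the TCP header and data. The
    segment's own Checksum field must be zeroed before calling this."""
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 6, len(segment))
    return internet_checksum(pseudo + segment)
```

The receiver recomputes the same sum over the received segment with the Checksum field left in place; a result of zero means the segment verified correctly, and any flipped bit (including a wrong destination address in the rebuilt pseudo header) yields a non-zero result.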


The Urgent Pointer

In Section 4 we analysed the TCP Flag options, among which we found the Urgent Pointer (URG) flag. The URG flag in the TCP Flags field allows us to mark a segment's data as 'urgent', while the Urgent Pointer field analysed here specifies where exactly the urgent data ends.

To help you understand this, take a look at the following diagram:

tcp-analysis-section-5-8

You may also be interested to know that the Urgent Pointer can also be used when attacking remote hosts. From the case studies we have analysed, we see that certain applications, which supposedly guard your system from attack attempts, do not properly log attacks when the URG flag is set. One particular application happens to be the famous BlackIce Server v2.9, so beware!

As a final conclusion to this section: if you find yourself capturing thousands of packets in the hope of seeing one with the URG bit set, don't be disappointed if you are unable to catch any! We found it nearly impossible to get our workstation to generate such packets using telnet, http, ftp and other protocols. The easiest option by far is a packet-crafting program that lets you create packets with arbitrary flags and options set.


Summary

While this section was fairly extensive, we have covered some very important parts of the TCP protocol. You now know what a TCP Window is and how to calculate it from your bandwidth and delay.

We also examined the Checksum field, which is used by the receiver to verify that the segment it received is not corrupt, while at the same time checking that it didn't receive the segment accidentally!

Lastly, we examined in great detail the usage of the URG flag and Urgent Pointer field, which are used to define an incoming segment that contains urgent data.

After enjoying such a thorough analysis, we're sure you're ready for more! The next section deals with the TCP Options located at the end of the TCP header.


TCP Flag Options - Section 4

Introduction

As we have seen in the previous pages, some TCP segments carry data while others are simple acknowledgements for previously received data. The popular 3-way handshake utilises the SYNs and ACKs available in the TCP to help complete the connection before data is transferred.

Our conclusion is that each TCP segment has a purpose, and this is determined with the help of the TCP flag options, allowing the sender or receiver to specify which flags should be used so the segment is handled correctly by the other end.

Let's take a look at the TCP flags field to begin our analysis:

tcp-analysis-section-4-1

You can see the 2 flags that are used during the 3-way handshake (SYN, ACK) and data transfers.

As with all flags, a value of '1' means that a particular flag is 'set' or, if you like, is 'on'. In this example, only the "SYN" flag is set, indicating that this is the first segment of a new TCP connection.

In addition to this, each flag is one bit long, and since there are 6 flags, this makes the Flags section 6 bits in total.

You would have to agree that the most popular flags are the "SYN", "ACK" and "FIN", used to establish connections, acknowledge successful segment transfers and, lastly, terminate connections. While the rest of the flags are not as well known, their role and purpose makes them, in some cases, equally important.

We will begin our analysis by examining all six flags, starting from the top, that is, the Urgent Pointer:


1st Flag - Urgent Pointer

The first flag is the Urgent Pointer flag, as shown in the previous screen shot. This flag is used to identify incoming data as 'urgent'. Such incoming segments do not have to wait until the previous segments are consumed by the receiving end but are sent directly and processed immediately.

An Urgent Pointer could be used during a stream of data transfer where a host is sending data to an application running on a remote machine. If a problem appears, the host machine needs to abort the data transfer and stop the data processing on the other end. Under normal circumstances, the abort signal will be sent and queued at the remote machine until all previously sent data is processed, however, in this case, we need the abort signal to be processed immediately.

By setting the abort signal's segment Urgent Pointer flag to '1', the remote machine will not wait till all queued data is processed and then execute the abort. Instead, it will give that specific segment priority, processing it immediately and stopping all further data processing.

If you're finding it hard to understand, consider this real-life example:

At your local post office, hundreds of trucks are unloading bags of letters from all over the world. Because so many trucks enter the post office building, they line up one behind the other, waiting for their turn to unload their bags.

As a result, the queue ends up being quite long. However, a truck with a big red flag suddenly joins the queue and the security officer, whose job it is to make sure no truck skips the queue, sees the red flag and knows it's carrying very important letters that need to reach their destination urgently. Following the procedure for such cases, the security officer signals the truck to skip the queue and drive all the way to the front, giving it priority over the other trucks.

In this example, the trucks represent the segments that arrive at their destination and are queued in the buffer waiting to be processed, while the truck with the red flag is the segment with the Urgent Pointer flag set.

A further point to note is the existence of the Urgent Pointer field. This field is covered in section 5, but we can briefly mention that when the Urgent Pointer flag is set to '1' (that's the flag we are analysing here), the Urgent Pointer field specifies the position in the segment where the urgent data ends.


2nd Flag - ACKnowledgement

The ACKnowledgement flag is used to acknowledge the successful receipt of packets.

If you run a packet sniffer while transferring data using TCP, you will notice that, in most cases, for every packet you send or receive, an ACKnowledgement follows. So if you received a packet from a remote host, your workstation will most probably send one back with the ACK flag set to '1'.

In some cases, where the sender requires only one ACKnowledgement for every 3 packets sent, the receiving end will send a single ACK once the 3rd sequential packet is received. This is also called Windowing and is covered extensively in the pages that follow.


3rd Flag - PUSH

The Push flag, like the Urgent flag, exists to ensure that the data is given the priority it deserves and is processed promptly at the sending or receiving end. This particular flag is used quite frequently at the beginning and end of a data transfer, affecting the way the data is handled at both ends.

When developers create new applications, they must make sure they follow specific guidelines given by the RFC's to ensure that their applications work properly and manage the flow of data in and out of the application layer of the OSI model flawlessly. When used, the Push bit makes sure the data segment is handled correctly and given the appropriate priority at both ends of a virtual connection.

When a host sends its data, it is temporarily queued in the TCP buffer, a special area in memory, until the segment reaches a certain size, and is then sent to the receiver. This design guarantees that the data transfer is as efficient as possible, without wasting time and bandwidth creating multiple small segments, instead combining them into one or more larger ones.

When the segment arrives at the receiving end, it is placed in the TCP incoming buffer before it is passed onto the application layer. The data queued in the incoming buffer will remain there until the other segments arrive and, once this is complete, the data is passed to the application layer that's waiting for it.

While this procedure works well in most cases, there are many instances where this 'queueing' of data is undesirable, because any delay during queuing can cause problems for the waiting application. A simple example would be a TCP stream, e.g. a RealPlayer stream, where data must be sent and processed (by the receiver) immediately to ensure a smooth stream without any cut-offs.

A final point to mention here is that the Push flag is usually set on the last segment of a file to prevent buffer deadlocks. It is also seen when used to send HTTP or other types of requests through a proxy - ensuring the request is handled appropriately and effectively.


4th Flag - Reset (RST) Flag

The reset flag is used when a segment arrives that is not intended for the current connection. In other words, if you were to send a packet to a host in order to establish a connection, and there was no such service waiting to answer at the remote host, then the host would automatically reject your request and then send you a reply with the RST flag set. This indicates that the remote host has reset the connection.

While this might seem very simple and logical, the truth is that this 'feature' is frequently used by hackers to scan hosts for 'open' ports. All modern port scanners are able to detect 'open' or 'listening' ports thanks to the 'reset' function.

The method used to detect these ports is very simple: when attempting to scan a remote host, a valid TCP segment is constructed with the SYN flag set (1) and sent to the target host. If there is no service listening for incoming connections on the specific port, the remote host will reply with the RST and ACK flags set (1), rejecting the attempt. If, on the other hand, there is a service listening on the port, the remote host will reply with a TCP segment that has the SYN and ACK flags set (1). This is, of course, part of the standard 3-way handshake we have covered.

Once the host scanning for open ports receives this segment, it will complete the 3-way handshake and then terminate it using the FIN (see below) flag, and mark the specific port as "active".
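As a rough sketch of the idea (not a real SYN scanner - that would require raw sockets - but the outcome is the same): a full connect() attempt completes the 3-way handshake on an open port, while a closed port's RST surfaces as a 'connection refused' error. The helper name below is our own:

```python
import socket

def probe_port(host, port, timeout=2.0):
    """Try a full TCP handshake; 0 from connect_ex means it completed,
    anything else (e.g. ECONNREFUSED caused by an RST) means it didn't."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return "open" if s.connect_ex((host, port)) == 0 else "closed/filtered"

# Example usage (host and port are placeholders, not real targets):
# print(probe_port("192.0.2.10", 80))
```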


5th Flag - SYNchronisation Flag

The fifth flag contained in the TCP Flags field is perhaps the most well-known flag used in TCP communications. As you might be aware, the SYN flag is initially sent when establishing the classical 3-way handshake between two hosts:

tcp-analysis-section-4-2

In the above diagram, Host A needs to download data from Host B using TCP as its transport protocol. The protocol requires the 3-way handshake to take place so a virtual connection can be established by both ends in order to exchange data.

During the 3-way handshake we are able to count a total of 2 SYN flags transmitted, one by each host. As files are exchanged and new connections created, we will see more SYN flags being sent and received.


6th Flag - FIN Flag

The final flag available is the FIN flag, standing for the word FINished. This flag is used to tear down the virtual connections created using the previous flag (SYN), which is why the FIN flag always appears in the last packets exchanged over a connection.

It is important to note that when a host sends a FIN flag to close a connection, it may continue to receive data until the remote host has also closed the connection, although this occurs only under certain circumstances. Once the connection is torn down by both sides, the buffers set aside on each end for the connection are released.

A normal teardown procedure is depicted below:

tcp-analysis-section-4-3

The above diagram represents an existing connection between Host A and B, where the two hosts are exchanging data. Once the data transfer is complete, Host A sends a packet with the FIN, ACK flags set (STEP 1).

With this packet, Host A is ACKnowledging the previous stream while at the same time initiating the TCP close procedure to kill this connection. At this point, Host A's application will stop receiving any data and will close the connection from this side.

In response to Host A's request to close the connection, Host B will send an ACKnowledgement (STEP 2) back, and also notify its application that the connection is no longer available. Once this is complete, Host B will send its own packet with the FIN, ACK flags set (STEP 3) to close its part of the connection.

If you're wondering why this procedure is required, recall that TCP is a Full Duplex connection, meaning there are two directions of data flow - in our example, from Host A to Host B and vice versa. Each host must therefore close the connection from its own side, which is why both hosts send a FIN flag and each must be ACKnowledged by the other.

Lastly, at STEP 4, Host A will acknowledge the request Host B sent at STEP 3, and the closedown procedure for both sides is now complete!
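The half-close behaviour described above - a host that has sent its FIN can still receive data until the other side closes too - can be observed with a small, self-contained sketch using the standard sockets API (the loopback listener here is just for demonstration):

```python
import socket
import threading

def run_server(srv):
    conn, _ = srv.accept()
    assert conn.recv(1024) == b""        # the client's FIN arrives as EOF
    conn.sendall(b"still sending")       # data still flows server -> client
    conn.close()                         # server now sends its own FIN

srv = socket.socket()
srv.bind(("127.0.0.1", 0))               # loopback, ephemeral port
srv.listen(1)
t = threading.Thread(target=run_server, args=(srv,))
t.start()

cli = socket.socket()
cli.connect(srv.getsockname())
cli.shutdown(socket.SHUT_WR)             # STEP 1: send our FIN, stop writing
data = cli.recv(1024)                    # ...but we can still receive
print(data)                              # b'still sending'
t.join()
cli.close()
srv.close()
```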


Summary

This page dealt with the TCP Flag Options available to make life either more difficult, or easy, depending on how you look at the picture :)

Perhaps the most important information given on this page that is beneficial to remember is the TCP handshake procedure and the fact that TCP is a Full Duplex connection.

The following section will examine the TCP Window size, Checksum and Urgent Pointer fields, all of which are relevant and very important. For this reason we strongly suggest you read through these topics, rather than skip over them.

TCP Header Length Analysis - Section 3

Introduction

The third field under close examination is the TCP Header length. There really isn't that much to say about the Header length other than to explain what it represents and how to interpret its values, but this alone is very important as you will soon see.

Let's take a quick look at the TCP Header length field, noting its position within the TCP structure:

tcp-analysis-section-3-1

You might also have seen the Header length represented as "Data offset" in other packet sniffers or applications. This is virtually the same as the Header length, only with a 'fancier' name.

Analysing the Header length

If you open any networking book that covers the TCP header, you will almost certainly find the following description for this particular field:



"An interger that specifies the length of the segment header measured in 32-bit multiples" (Internetworking with TCP/IP, Douglas E. Comer, p. 204, 1995). This description sounds impressive, but when you look at the packet, you're most likely to scratch your head thinking: what exactly did that mean?

Well, you can cease being confused because we are going to cover it step by step, giving answers to all possible questions you might have. If we don't cover your questions completely, well... there are always our forums to turn to!


Step 1 - What portion is the "Header length"?

Before we dive into analysing the meaning of the values used in this field, which by the way change with every packet, we need to understand which portion of the packet is the "Header length".

tcp-analysis-section-3-2

Looking at the screen shot on the left, the light blue highlighted section shows us the section that's counted towards the Header length value. With this in mind, you can see that the total length of the light blue section (header length) is 28 bytes.

The Header length field is required because of the TCP Options field, which contains various options that might or might not be used. Logically, if no options are used then the header length will be much smaller.

If you take a look at our example, you will notice that 'TCP Options' is equal to 'yes', meaning there are options in this field that are used in this particular connection. We've expanded the section to show the TCP options used, and these are 'Max Segment' and 'SACK OK'. These will be analysed in the pages which follow; for now, though, we are only interested in whether the TCP options are used or not.

As the packet in our screenshot reaches the receiving end, the receiver will read the header length field and know exactly where the data portion starts.

This data will be carried to the layers above, while the TCP header will be stripped and disregarded. In this example, we have no data, which is normal since the packet is initiating a 3-way handshake (Flags, SYN=1), but we will cover that in more depth on the next page.

The main issue requiring our attention deals with the values used for the header length field and learning how to interpret them correctly.


Step 2 - Header Value Analysis

From the screen shot above, we can see our packet sniffer indicating that the field has a value of 7 (hex), and this is interpreted as 28 bytes. To calculate this, take the value of 7 and multiply it by 32 to get the header length in bits, then divide by 8 to convert it to bytes: 7 x 32 = 224 bits, and 224 / 8 = 28 bytes.

Do you recall the definition given at the beginning of this page? "An integer that specifies the length of the segment header measured in 32-bit multiples". This was the formal way of describing these calculations :)

The calculation given is automatically performed by our packet sniffer, which is quite thoughtful, wouldn't you agree? This can be considered, if you like, as an additional 'feature' found on most serious packet sniffers.
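The sniffer's arithmetic is easy to reproduce yourself - a quick sketch of the conversion (the helper name is ours):

```python
def tcp_header_bytes(data_offset):
    """Header length in bytes from the 4-bit Data Offset value:
    the field counts 32-bit words, so multiply by 32 and divide by 8."""
    return data_offset * 32 // 8         # i.e. data_offset * 4

print(tcp_header_bytes(5))   # 20 - minimum header, no options
print(tcp_header_bytes(7))   # 28 - the example above, options present
print(tcp_header_bytes(15))  # 60 - the largest possible TCP header
```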

Below you will find another screen shot from our packet sniffer that shows a portion of the TCP header (left frame) containing the header length field. On the right frame, the packet sniffer shows the packet's contents in hex:

tcp-analysis-section-3-3

By selecting the Header length field on the left, the program automatically highlights the corresponding section and hex value on the right frame. According to the packet sniffer, the hex value '70' is the value for the header length field.

If you recall at the beginning of the page, we mentioned the header length field being 4 bits long. This means that when viewing the value in hex, we should only have one digit or character highlighted, but this isn't the case here because the packet sniffer has incorrectly highlighted the '7' and '0' together, giving us the impression that the field is 8 bits long!

Note: In hex, each character e.g '7' represents 4 bits. This means that on the right frame, only '7' should be highlighted, and not "70". Furthermore, if we were to convert '7' hex to binary, the result would be '0111' (notice the total amount of bits is equal to 4).
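To see the 4-bit field inside that byte for yourself, extract the high nibble - a small sketch:

```python
# The sniffer highlighted the whole hex byte '70', but the Header length
# field is only the high nibble (a single hex digit) of that byte.
byte = 0x70
data_offset = byte >> 4                # high 4 bits -> 7
print(format(data_offset, "04b"))      # '0111' - exactly 4 bits, as noted
print(data_offset * 4)                 # 28-byte header, matching Step 2
```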


Summary

The 'Header length' field is very simple as it contains only a number that allows the receiving end to calculate the number of bytes in the TCP Header. At the same time, it is mandatory because without it there is no way the receiver will know where the data portion begins!

Logically, wherever the TCP header ends, the data begins - this is clear in the screen shots provided on this page. So, if you find yourself analysing packets and trying to figure out where the data starts, all you need to do is find the TCP Header, read the "Header length" value and you can find exactly where the data portion starts!

Next up are the TCP flags that most of us have come across when talking about the famous 3-way handshake and virtual connections TCP creates before exchanging data.

TCP Sequence & Acknowledgement Numbers - Section 2

Introduction

This page will closely examine the Sequence and Acknowledgement numbers. The very purpose of their existence is related directly to the fact that the Internet, and generally most networks, are packet switched (we will explain shortly), and to the fact that we nearly always send and receive data that is larger than the maximum transmission unit (a.k.a MTU - analysed in sections 5 and 6), which is 1500 bytes on most networks.

Let's take a look at the fields we are about to analyse:

tcp-analysis-section-2-1

As you can see, the Sequence number precedes the Acknowledgement number.

We are going to explain how these numbers increment and what they mean, how various operating systems handle them differently and, lastly, how these numbers can become a security hazard for those who require a solid, secure network.









TCP - Connection Oriented Protocol

The Sequence and Acknowledgement fields are two of the many features that help us classify TCP as a connection oriented protocol. As such, when data is sent through a TCP connection, they help the remote hosts keep track of the connection and ensure that no packet has been lost on the way to its destination.

TCP utilizes positive acknowledgments, timeouts and retransmissions to ensure error-free, sequenced delivery of user data. If the retransmission timer expires before an acknowledgment is received, data is retransmitted starting at the byte after the last acknowledged byte in the stream.

A further point worth mentioning is the fact that Sequence numbers are generated differently by each operating system. Using special algorithms (and sometimes weak ones), an operating system will generate these numbers, which are used to track the packets sent or received, and since both the Sequence and Acknowledgement fields are 32 bits long, there are 2^32 = 4,294,967,296 possible values!


Initial Sequence Number (ISN)

When two hosts need to transfer data using the TCP transport protocol, a new connection is created. The host that wishes to initiate the connection generates what is called an Initial Sequence Number (ISN), which is simply the first value placed in the Sequence field we are looking at. The ISN has always been the subject of security issues, as it seems to be a favourite way for hackers to 'hijack' TCP connections.

Believe it or not, hijacking a new TCP connection is something an experienced hacker can alarmingly achieve with very few attempts. The root of this security problem starts with the way the ISN is generated.

Every operating system uses its own algorithm to generate an ISN for every new connection, so all a hacker needs to do is figure out, or rather predict, which algorithm is used by the specific operating system, generate the next predicted sequence number and place it inside a packet that is sent to the other end. If the attacker is successful, the receiving end is fooled and thinks the packet is a valid one coming from the host that initiated the connection.

At the same time, the attacker will launch a flood attack to the host that initiated the TCP connection, keeping it busy so it won't send any packets to the remote host with which it tried to initiate the connection.

Here is a brief illustration of the above-mentioned attack:

tcp-analysis-section-2-2

As described, the hacker must first discover the ISN algorithm by sampling the Initial Sequence Numbers used in all new connections made by Host A. Once this is complete and the hacker knows the algorithm, they are ready to initiate their attack:

tcp-analysis-section-2-3

Timing is critical for the hacker, so they send the first fake packet to the Internet Banking Server while at the same time flooding Host A with garbage data in order to consume the host's bandwidth and resources. By doing so, Host A is unable to cope with the data it's receiving and will not send any packets to the Internet Banking Server.

The fake packet sent to the Internet Banking Server will contain valid headers, meaning it will seem like it originated from Host A's IP Address and will be sent to the correct port the Internet Banking Server is listening to.

There have been numerous reports published online that talk about the method each operating system uses to generate its ISN and how easy or difficult it is to predict. Do not be alarmed to discover that the Windows operating system's ISN algorithm is by far the easiest to predict!

Programs such as 'nmap' will actually test to see how difficult it can be to discover the ISN algorithm used in any operating system. In most cases, hackers will first sample TCP ISN's from the host victim, looking for patterns in the initial sequence numbers chosen by TCP implementations when responding to a connection request. Once a pattern is found it's only a matter of minutes for connections initiated by the host to be hijacked.


Example of Sequence and Acknowledgment Numbers

To help us understand how these newly introduced fields are used to track a connection's packets, an example is given below.

Before we proceed, we should note that you will come across the terms "ACK flag" or "SYN flag"; these terms should not be confused with the Sequence and Acknowledgment numbers as they are different fields within the TCP header. The screen shot below is to help you understand:

tcp-analysis-section-2-4

You can see the Sequence number and Acknowledgement number fields, followed by the TCP Flags to which we're referring.

The TCP Flags (light purple section) will be covered on the pages to come in much greater depth, but because we need to work with them now to help us examine how the Sequence and Acknowledgement numbers work, we are forced to analyse a small portion of them.

To keep things simple, remember that when talking about Sequence and Acknowledgement numbers we are referring to the blue section, while SYN and ACK flags refer to the light purple section.








The next diagram shows the establishment of a new connection to a web server - the Gateway Server. The first three packets are part of the 3-way handshake performed by TCP before any data is transferred between the two hosts, while the small screen shot under the diagram is captured by our packet sniffer:

tcp-analysis-section-2-5

To make sure we understand what is happening here, we will analyse the example step by step.


Step 1

Host A wishes to download a webpage from the Gateway Server. This requires a new connection between the two to be established so Host A sends a packet to the Gateway Server. This packet has the SYN flag set and also contains the ISN generated by Host A's operating system, that is 1293906975. Since Host A is initiating the connection and hasn't received a reply from the Gateway Server, the Acknowledgment number is set to zero (0).

tcp-analysis-section-2-6

In short, Host A is telling the Gateway Server the following: "I'd like to initiate a new connection with you. My Sequence number is 1293906975".


Step 2

The Gateway Server receives Host A's request and generates a reply containing its own generated ISN, that is 3455719727, and the next Sequence number it is expecting from Host A, which is 1293906976. The reply also has the SYN & ACK flags set, acknowledging the previous packet and informing Host A of the Server's own Sequence number.

tcp-analysis-section-2-7

In short, the Gateway Server is telling Host A the following: "I acknowledge your sequence number and am expecting your next packet with sequence number 1293906976. My sequence number is 3455719727".


Step 3

Host A receives the reply and now knows Gateway's sequence number. It generates another packet to complete the connection. This packet has the ACK flag set and also contains the sequence number that it expects the Gateway Server to use next, that is 3455719728.

tcp-analysis-section-2-8

In short, Host A is telling the Gateway Server the following: "I acknowledge your last packet. This packet's sequence number is 1293906976, which is what you're expecting. I'll also be expecting the next packet you send me to have a sequence number of 3455719728".

Now, someone might be expecting the next packet to be sent from the Gateway Server, but this is not the case. You might recall that Host A initiated the connection because it wanted to download a web page from the Gateway Server. Since the 3-way TCP handshake has been completed, a virtual connection between the two now exists and the Gateway Server is ready to listen to Host A's request.

With this in mind, it's now time for Host A to ask for the webpage it wanted, which brings us to step number 4.


Step 4

In this step, Host A generates a packet with some data and sends it to the Gateway Server. The data tells the Gateway Server which webpage it would like sent.

tcp-analysis-section-2-9

Note that the sequence number of the segment in line 4 is the same as in line 3 because the ACK does not occupy sequence number space.

So keep in mind that any packets generated, which are simply acknowledgments (in other words, have only the ACK flag set and contain no data) to previously received packets, never increment the sequence number.
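Using the rules from this walkthrough (a SYN consumes one sequence number, each payload byte consumes one, and a bare ACK consumes none), the numbering can be sketched with a small helper - the function is ours, while the ISNs are the ones from the example:

```python
def next_seq(seq, payload_len, syn=False, fin=False):
    """Next sequence number after a segment: each payload byte consumes
    one number, SYN and FIN consume one each, a bare ACK consumes none."""
    return seq + payload_len + int(syn) + int(fin)

isn_a = 1293906975                       # Host A's ISN (Step 1)
isn_b = 3455719727                       # Gateway Server's ISN (Step 2)

print(next_seq(isn_a, 0, syn=True))      # 1293906976 - acknowledged in Step 2
print(next_seq(isn_b, 0, syn=True))      # 3455719728 - acknowledged in Step 3
print(next_seq(isn_a + 1, 0))            # 1293906976 - Step 3's bare ACK
                                         # consumes nothing, so Step 4 reuses it
```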


Last Notes

There are other important roles that the Sequence and Acknowledgement numbers have during the communication of two hosts. Because segments (or packets) travel in IP datagrams, they can be lost or delivered out of order, so the receiver uses the sequence numbers to reorder the segments. The receiver collects the data from arriving segments and reconstructs an exact copy of the stream being sent.

If we have a closer look at the diagram above, we notice that the TCP Acknowledgement number specifies the sequence number of the next segment expected by the receiver. Simply scroll back to Step 2 and you will see what I mean.


Summary

This page has introduced the Sequence and Acknowledgement fields within the TCP header. We have seen how hackers hijack connections by discovering the algorithms used to produce the ISNs and we examined step by step the way Sequence and Acknowledgement numbers increase.

TCP Source & Destination Port Number - Section 1

Introduction

This section covers one of the most well-known fields in the TCP header, the Source and Destination port numbers. These fields are used to specify the application or services offered on local or remote hosts. We explain the importance and functionality of the TCP source and destination ports, along with plenty of examples.

You will come to understand how important ports are and how they can be used to gain information on remote systems that have been targeted for attacks. We will cover basic and advanced port communications using detailed examples and colourful diagrams, but for now, we will start with some basics to help break down the topic and allow us to progress smoothly into more advanced and complex information.

tcp-analysis-section-1-1

When a host needs to generate a request or send data, it requires some information:

1) IP Address of the desired host to which it wants to send the data or request.

2) Port number to which the data or request should be sent on the remote host. In the case of a request, it allows the sender to specify the service it intends to use. We will analyse this soon.



1) The IP Address is used to uniquely identify the desired host we need to contact. This information is not shown in the above packet because it exists in the IP header section located right above the TCP header we are analysing. If we were to expand the IP header, we would (certainly) find the source and destination IP Address fields in there.

2) The 2nd important aspect, the port number, allows us to identify the service or application our data or request must be sent to, as we have previously stated. When a host, whether it be a simple computer or a dedicated server, offers various services such as http, ftp, telnet, all clients connecting to it must use a port number to choose which particular service they would like to use.

The best way to understand the concept is through examples and there are plenty of them below, so let's take a look at a few, starting from a simple one and then moving towards something slightly more complicated.


Time To Dive Deeper!

Let's consider your web browser for a moment.

When you send a http request to download a webpage, it must be sent to the correct web server in order for it to receive it, process it and allow you to view the page you want. This is achieved by obtaining the correct IP address via DNS resolution and sending the request to the correct port number at the remote machine (web server). The port value, in the case of an http request, is usually 80.

Once your request arrives at the web server, it will check that the packet is indeed for itself. This is done by observing the destination IP Address of the newly received packet. Keep in mind that this particular step is a function of the Network layer.

Once it verifies that the packet is in fact for the local machine, it will process the packet and see that the destination port number is equal to 80. It then realises it should send the data (or request) to the http daemon that's waiting in the background to serve clients:

tcp-analysis-section-1-2

Using this neat method we are able to use the rest of the services offered by the server. So, to use the FTP service, our workstation generates a packet that is directed to the server's IP address, that is 200.0.0.1, but this time with a destination port of 21.

The diagram that follows illustrates this process:

tcp-analysis-section-1-3

By now you should understand the purpose of the destination port and how it allows us to select the services we require from hosts that offer them.
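These well-known destination port numbers are recorded in the operating system's services database, so you can look them up by name yourself - a quick sketch (values come from the local system, e.g. /etc/services on Unix-like hosts):

```python
import socket

# The well-known ports used in the examples above, looked up by
# service name in the system's services database:
for name in ("http", "ftp", "telnet"):
    print(name, socket.getservbyname(name, "tcp"))
```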

For those who noticed, our captured packet at the beginning of this page also shows the existence of another port, the source port, which we are going to take a look at below.


Understanding the Source Port

The source port serves a purpose analogous to that of the destination port, but it is used by the sending host to keep track of new incoming connections and existing data streams.

As most of you are well aware, in TCP/UDP data communications, a host will always provide a destination and source port number. We have already analysed the destination port, and how it allows the host to select the service it requires. The source port is provided to the remote machine, in the case of our example, this is the Internet Server, in order for it to reply to the correct session initiated by the other side.

This is achieved by reversing the destination and source ports. When the host (in our example, Host A) receives this packet, it will identify the packet as a reply to the previous packet it sent:

tcp-analysis-section-1-4

As Host A receives the Internet Server's reply, the Transport layer will notice the reversed ports and recognise it as a response to the previous packet it sent (the one with the green arrow).

The Transport and Session layers keep track of all new connections, established connections and connections that are in the process of being torn down, which explains how Host A remembers that it's expecting a reply from the Internet Server.

Of course the captured packet that's displayed at the beginning of the page shows different port numbers than the ones in these diagrams. In that particular case, the workstation sends a request to its local http proxy server that runs on port 8080, using port 3025 as its source port.
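You can watch the operating system pick ephemeral source ports for yourself - a small self-contained sketch that uses a local listener in place of a real server (addresses are only for the demonstration):

```python
import socket

# Two connections to the same destination get different ephemeral source
# ports, which is how replies are matched to the right session.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(2)

c1, c2 = socket.socket(), socket.socket()
c1.connect(srv.getsockname())
c2.connect(srv.getsockname())

print(c1.getsockname()[1], c2.getsockname()[1])   # two distinct source ports
assert c1.getsockname()[1] != c2.getsockname()[1]

for s in (c1, c2, srv):
    s.close()
```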

We should also note that TCP uses a few more mechanisms to accurately keep track of these connections. The pages to follow will analyse them as well, so don't worry about missing out on any information, just grab some brain food (hhmmm chocolate...), sit back, relax and continue reading!

Tuesday, March 27, 2012

How to limit recipient per message

smtpd_recipient_limit (default 1000) parameter controls how many recipients the SMTP server will take per message delivery request.
You can't restrict this to a To/Cc/Bcc field - it counts all recipients. For that you'd have to use a regular expression in header_checks to arbitrarily limit the length of each header to something reasonable. (We could do this in the web client, though, if someone wants to open an RFE in Bugzilla.)

smtpd_recipient_overshoot_limit (default 1000) - The number of recipients that a remote SMTP client can send in excess of the hard limit specified with smtpd_recipient_limit, before the Postfix SMTP server increments the per-session error count for each excess recipient. "Postfix will 4xx the 'overshoot' addresses so a sending MTA can try them again later."

Then see the smtpd_hard_error_limit (default 20) parameter to know at what number of errors it will disconnect.

So you technically need to consider three values here, which affect both inbound and outbound mail.

(I've heard of an smtpd_extra_recipient_limit but I've never used it / might just be for in queues.)

Then there's the throttling tools:

smtpd_client_recipient_rate_limit (default: 0, no limit) - The maximum number of recipient addresses that an SMTP client may specify in the time interval set via anvil_rate_time_unit (default: 60s - be careful, adjusting this affects other things). Note that this applies "regardless of whether or not Postfix actually accepts those recipients". Clients over the limit will receive a "450 4.7.1 Error: too many recipients from [the.client.ip.address]", and it's up to the client to deliver those recipients at some later time.

It may prove prudent to also adjust:
smtpd_client_connection_rate_limit (default: 0) - the maximal number of connection attempts any client is allowed to make to this service per time unit. The time unit is specified with the anvil_rate_time_unit configuration parameter.
smtpd_client_message_rate_limit (default: 0) - the maximal number of message delivery requests that any client is allowed to make to this service per time unit, regardless of whether or not Postfix actually accepts those messages. The time unit is specified with the anvil_rate_time_unit configuration parameter.

The purpose of these features is to limit abuse rather than to regulate legitimate mail traffic, but some administrators use them that way.

There's also Policyd, which can do sender-based (envelope, SASL, or host/IP) throttling on messages and/or volume per defined time unit, plus recipient rate limiting.
http://www.policyd.org

To adjust:
su - zimbra
postconf -e 'smtpd_recipient_limit = 1000'

To apply the settings:
postfix reload

To check the current settings:
postconf | grep smtpd_recipient_limit
Note: when you're looking this up, smtpd_recipient_limit is not to be confused with the default_destination_recipient_limit parameter, which controls how many recipients a Postfix delivery agent will include with each copy of an email message. If a message exceeds that value, the Postfix queue manager breaks the recipient list into smaller lists and attempts to deliver multiple copies of the message in parallel. So that parameter doesn't really limit the number of addresses; it just breaks them into chunks that are easier for other servers to accept.
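To make the difference concrete, the splitting that default_destination_recipient_limit causes is just a ceiling division. This is illustrative arithmetic, not Postfix source code (50 is the documented default for that parameter):

```shell
#!/bin/bash
# How many message copies the queue manager would create for a long
# recipient list, given the per-copy recipient cap.
DEST_LIMIT=50       # default_destination_recipient_limit (default: 50)
RECIPIENTS=180      # recipients on one message

# Ceiling division: every copy carries up to DEST_LIMIT recipients.
COPIES=$(( (RECIPIENTS + DEST_LIMIT - 1) / DEST_LIMIT ))
echo "delivered as $COPIES copies of up to $DEST_LIMIT recipients each"
```

All 180 recipients still receive the mail; it simply goes out as four parallel copies instead of one.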

How to limit the number of messages per time unit in Postfix

Open the Postfix configuration file:
# nano /etc/postfix/main.cf

and add the following lines:

# Clients that are excluded from the connection and rate counts
# (default: $mynetworks). Leaving this empty means no client is exempt.
smtpd_client_event_limit_exceptions =
# The time unit over which client connection rates and other rates are
# calculated (default: 60s; careful, adjusting this affects other limits).
anvil_rate_time_unit = 60s
# How frequently the server logs peak usage information (default: 600s).
anvil_status_update_time = 120s
# The maximal number of message delivery requests that any client is allowed
# to make to this service per time unit (default: 0 = no limit).
smtpd_client_message_rate_limit = 500

With this configuration a client may send 500 messages per minute; message 501 will be rejected.
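The counting behind that rejection can be sketched like this. It's a toy model of the per-client counter that anvil keeps within one anvil_rate_time_unit window, not actual Postfix code:

```shell
#!/bin/bash
# Within one time window, the first RATE_LIMIT delivery requests pass;
# every request after that would be answered with a 450 "too many" reply.
RATE_LIMIT=500      # smtpd_client_message_rate_limit
accepted=0
rejected=0

for msg in $(seq 1 520); do
  if [ "$accepted" -lt "$RATE_LIMIT" ]; then
    accepted=$(( accepted + 1 ))
  else
    rejected=$(( rejected + 1 ))
  fi
done

echo "accepted=$accepted rejected=$rejected"
```

Once the window expires, the counter resets and the client may send again; a well-behaved client simply retries the rejected messages later.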

Wednesday, March 21, 2012

Firewall ...

1. Master network protocols: you need a solid grasp of the TCP/IP suite and of the properties and characteristics of packets at each layer

2. Master the operating system and the operating mechanisms of the system you are protecting

3. Be able to read logs and analyse the events recorded in the various log formats of the system you want to protect

4. Be able to sniff packets and analyse them

5. Master the firewall application you are using