Whenever you run a relatively large network with multiple hosts connecting to each other, you will have to think about network monitoring and optimization as well as security. Some organisations insist on a very strict firewall policy, implementing network firewalls as well as local firewalls on the operating systems. Having a strict firewall regime in place, ensuring that network traffic between compartments travels via a network firewall and that systems within a compartment can only communicate if allowed by the local firewall on the operating system, is in general a good thing from a security point of view.
From an administrative point of view, this makes things a lot more difficult. Because of this, many organisations will not implement local firewalls on machines and will allow systems within a certain network compartment to talk to each other freely.
Even though it might take a lot more effort and thinking, it is good practice to also ensure local firewalls are in place to tighten security. If you intend to implement this in an existing environment, it might be hard to do so based upon architecture documents alone. Organically grown networks and applications might have established connections over time that are not captured in the enterprise architecture.
To find out what the actual network usage is and how connections are actually made, you will have to start collecting data on network connections. Even if you are not trying to put a stricter firewall regime in place, it is good to understand who talks to whom. Having real-time insight into connections improves awareness and provides options to improve security, remove single points of failure and improve the level of service.
Storing your data for analysis
Whenever you start collecting data for analysis you will have to store it. Some good options to store this type of data are, for example:
- Elasticsearch in combination with Logstash/Beats and Kibana
- Graph databases, where Neo4j is the easiest to use.
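Whichever store you choose, it helps to settle on a record shape early. The sketch below shows one possible JSON document for a single connection event, as you might ship it to Elasticsearch; the field names are an assumption for illustration, not a fixed schema.

```python
import json

# A hypothetical document shape for one logged connection event.
# Field names are an assumption, not a prescribed schema.
connection_event = {
    "timestamp": "2017-12-27T16:43:13",
    "host": "xft83.company.com",
    "direction": "inbound",
    "src_ip": "10.103.11.82",
    "dst_ip": "10.103.11.83",
    "src_port": 50525,
    "dst_port": 5044,
    "protocol": "TCP",
}

# Serialize to JSON, ready to be indexed by your store of choice.
doc = json.dumps(connection_event)
print(doc)
```

Keeping the shape flat like this makes the records easy to aggregate later, regardless of which backend ends up holding them.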
Collecting the data
Regardless of which technology you use to store the information centrally, be it the Elastic Stack or Splunk or any other technology, you will have to capture the connection data first. When you are using iptables on your Oracle Linux instance you can use the standard Linux iptables firewall to do the work. If you use firewalld you also have this option; however, it is not covered in this post.
What needs to be done is to put the firewall in a mode where it logs all new connections, inbound as well as outbound. To do so, you can use the commands below to instruct iptables to log every new connection to the standard log location.
iptables -I OUTPUT -m state --state NEW --protocol tcp -j LOG --log-prefix "New Connection: "
iptables -I INPUT -m state --state NEW --protocol tcp -j LOG --log-prefix "New Connection: "
Understanding the records
Adding the two rules to iptables as shown above will make sure that new connections are logged. You will have to implement a way to collect the log records from all systems in a single location, and you will have to find a way to analyze the data to uncover the connection patterns between systems. However, before you can do this, it is important to understand the log records themselves.
The two examples below show an inbound and an outbound connection in the log files, where a connection was made between xft83.company.com (10.103.11.83) and xft82.company.com (10.103.11.82).
Dec 27 16:43:13 xft83.company.com kernel: New Connection: IN=eth0 OUT= MAC=00:21:f6:01:00:02:00:21:f6:01:00:01:08:00 SRC=10.103.11.82 DST=10.103.11.83 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=6662 DF PROTO=TCP SPT=50525 DPT=5044 WINDOW=14600 RES=0x00 SYN URGP=0
Dec 27 16:45:44 xft83.company.com kernel: New Connection: IN= OUT=eth0 SRC=10.103.11.83 DST=10.103.11.82 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=26870 DF PROTO=TCP SPT=64899 DPT=9200 WINDOW=14600 RES=0x00 SYN URGP=0
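Because the payload of each record is a series of KEY=VALUE pairs (with a few bare flags such as DF and SYN), it is straightforward to parse into a structured form before shipping it off. A minimal sketch, using the first example record above:

```python
def parse_iptables_log(line):
    """Parse the KEY=VALUE pairs of an iptables LOG record into a dict.

    Bare flag tokens without a value (such as DF or SYN) are stored as True.
    """
    fields = {}
    # Only look at the part after the log prefix we configured above.
    _, _, payload = line.partition("New Connection: ")
    for token in payload.split():
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
        else:
            fields[token] = True  # bare flags such as DF or SYN
    return fields

record = parse_iptables_log(
    "Dec 27 16:43:13 xft83.company.com kernel: New Connection: "
    "IN=eth0 OUT= MAC=00:21:f6:01:00:02:00:21:f6:01:00:01:08:00 "
    "SRC=10.103.11.82 DST=10.103.11.83 LEN=60 TOS=0x00 PREC=0x00 "
    "TTL=64 ID=6662 DF PROTO=TCP SPT=50525 DPT=5044 "
    "WINDOW=14600 RES=0x00 SYN URGP=0"
)
print(record["SRC"], record["DPT"])
```

This is the shape of record a shipper like Logstash or Beats would produce with the appropriate grok or dissect configuration; the helper is just an illustration of the parsing logic.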
As you can see, the messages differ a little from each other; however, they hold a number of the same pieces of information. The main difference between the outbound and inbound messages is that MAC is added to all inbound log records to provide information on the MAC address of the sending NIC that is initiating the connection.
- IN : States the NIC used for the inbound connection. This will only be filled in case of an inbound connection.
- OUT : States the NIC used for the outbound connection. This will only be filled in case of an outbound connection.
- MAC : States the MAC address of the NIC used by the sending party. This is only filled in case of an inbound connection.
- SRC : Source IP address of the sending party that initiates the connection.
- DST : Destination IP address of the receiving party for whom the connection is intended.
- LEN : Packet length
- TOS : Type of Service (for packet prioritization)
- PREC : Precedent bits
- TTL : Time to Live
- ID : Packet identifier
- DF : don't fragment (DF) bit
- PROTO : The protocol used. In our example we filter only for TCP protocol based connections
- SPT : Source port number on the source IP address of the sending party that initiates the connection.
- DPT : Destination port number on the Destination IP address of the receiving party for whom the connection is intended
- WINDOW : Size of TCP window
- RES : Reserved bits
- SYN : SYNchronize packet for the TCP 3-Way Handshake
- URGP : Urgent pointer
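The IN/OUT convention described above also gives you a simple way to classify each parsed record as inbound or outbound. A sketch, assuming records have already been parsed into dicts of the fields listed:

```python
def connection_direction(fields):
    """Classify a parsed iptables LOG record as inbound or outbound.

    Relies on the convention described above: IN is filled (and MAC is
    present) for inbound records, OUT is filled for outbound ones.
    """
    if fields.get("IN"):
        return "inbound"
    if fields.get("OUT"):
        return "outbound"
    return "unknown"

# Hypothetical parsed records, mirroring the two log examples.
print(connection_direction({"IN": "eth0", "OUT": "", "MAC": "00:21:f6:01:00:02"}))
print(connection_direction({"IN": "", "OUT": "eth0"}))
```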
Not listed above are, for example, ACK and SYN-ACK, which together with SYN form the TCP 3-way handshake. During the 3-way handshake the following happens:
- Host A sends a TCP SYNchronize packet to Host B
- Host B receives A's SYN
- Host B sends a SYNchronize-ACKnowledgement
- Host A receives B's SYN-ACK
- Host A sends ACKnowledge
- Host B receives ACK.
- TCP socket connection is ESTABLISHED.
This means that if we are looking for the packets sent by the host that initiates a new TCP socket connection, we have to look for packets with the SYN flag.
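In code, that filter is a one-liner over the parsed records; a sketch using hypothetical records (note that with the iptables rules above, only NEW-state packets are logged in the first place, so in practice most records will already carry SYN):

```python
def is_new_connection(fields):
    """True for records carrying the SYN flag, i.e. the packet that
    initiates a new TCP connection."""
    return fields.get("SYN") is True

# Hypothetical parsed records for illustration.
records = [
    {"SRC": "10.103.11.82", "DST": "10.103.11.83", "SYN": True},
    {"SRC": "10.103.11.83", "DST": "10.103.11.82"},  # no SYN flag
]
new_connections = [r for r in records if is_new_connection(r)]
print(len(new_connections))
```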
Wrapping it together
By capturing this information from all servers and storing it in a central location you will have the ability to show how network traffic is flowing in your network. The result might be totally different than what is outlined in the architecture documents, especially in cases where the IT footprint has grown over the years. Having this insight helps in gaining understanding and showcasing potential risks and issues with the way your IT footprint is constructed.
This insight also helps in planning changes, introducing new functionality and improvements, and helps in implementing a stricter security regime without hindering ongoing operations.
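Once the records from all servers sit in one place, the "who talks to whom" picture falls out of a simple aggregation over (source, destination, destination port). A minimal sketch with hypothetical parsed records; in practice these would come from your central store:

```python
from collections import Counter

# Hypothetical records as they might arrive from several hosts after
# central collection; field names follow the iptables LOG output.
records = [
    {"SRC": "10.103.11.82", "DST": "10.103.11.83", "DPT": "5044"},
    {"SRC": "10.103.11.82", "DST": "10.103.11.83", "DPT": "5044"},
    {"SRC": "10.103.11.83", "DST": "10.103.11.82", "DPT": "9200"},
]

# Count new connections per (source, destination, destination port) edge.
edges = Counter((r["SRC"], r["DST"], r["DPT"]) for r in records)
for (src, dst, dpt), count in edges.most_common():
    print(f"{src} -> {dst}:{dpt} ({count} new connections)")
```

Each counted edge is one arrow in your real connection diagram, which you can then compare against what the architecture documents claim.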