In the default QRadar configuration, every appliance comes with pre-configured firewall rules in the OS. For testing purposes you can simply disable the firewall with the command “service iptables stop” and turn it back on with “service iptables start”. But sometimes we need to make permanent changes to the firewall configuration.
To change the firewall rules on your appliance, follow these steps:
- Connect through SSH to the appliance you want to modify;
- Log in using the ‘root’ account;
- Edit one of the following files:
- Add your firewall rules to the file, for example:
- -A INPUT -i eth0 -s x.x.x.x -j ACCEPT
- Save the file with ‘:wq’;
- Run /opt/qradar/bin/iptables_update.pl so your changes take effect;
With those steps, your firewall configuration is changed and will persist across reboots.
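The steps above can be sketched as a short shell session. Since the post doesn't name the rules file, the sketch below uses a scratch file as a stand-in (the real path on your appliance is not shown here and varies by version); the appliance-specific apply step is kept as a comment so the sketch is safe to run anywhere.

```shell
# Sketch of persisting a custom iptables rule. On an actual appliance you
# would edit the QRadar rules file instead of the scratch stand-in below.
CONF=$(mktemp)                                             # stand-in for the real rules file
echo '-A INPUT -i eth0 -s 10.0.0.5 -j ACCEPT' >> "$CONF"   # allow one example source IP
grep -- '-j ACCEPT' "$CONF"                                # sanity check: the rule was saved
# On the appliance, apply the change without a reboot:
# /opt/qradar/bin/iptables_update.pl
```

The key point is that the rule is written to a file (so it survives reboots) and then applied by the helper script, rather than injected with a one-off `iptables` command that would be lost on restart.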
Continuing our post about running commands across the environment, today we’d like to present another very useful and powerful command. Gathering information about appliances and servers can be a painful task, but QRadar provides some handy scripts to make it easy and automated. For example, if you execute the following on your QRadar Console:
[root@MY_RADAR]# /opt/qradar/bin/myver -v
…you’ll get a lot of information about your appliance, such as:
- Appliance type,
- Core version of the system,
- Patch number,
- Whether QRM is enabled,
- Whether the appliance you ran the command on is a Console,
- The IP address,
- The kernel architecture,
- Information about the CPU, the operating system, and whether this is an HA host.
And here’s the trick: to get this information from all your QRadar servers and appliances, you can combine it with the “/opt/qradar/support/all_servers.sh” command presented in another post, and gather this valuable information from all your managed hosts. For example, we can run the command across all the servers and write the result to a text file:
[root@MY_RADAR]# /opt/qradar/support/all_servers.sh “/opt/qradar/bin/myver -v” > /root/info.txt
As you can see, with just one line we can gather information about all our servers and generate a raw report of our QRadar environment. Simple, isn’t it?
Daily maintenance of a small environment can be an easy job, but when the environment grows to the point where we have several appliances, it becomes a tough one. For example, to monitor disk space in an environment with just one appliance, we can simply connect to the QRadar through SSH and run a Linux command such as ‘df -h’, but in a large environment with several appliances this practice would take a lot of time.
In a QRadar distributed environment, the Console acts as a central management point for all the other appliances. In our disk-monitoring example, wouldn’t it be easier if we could run one command on the Console and get information about the whole environment? That’s exactly what the ‘all_servers.sh’ script does. The script is located at /opt/qradar/support/all_servers.sh.
To run the command, you can use the following syntax:
[root@MY_RADAR]# /opt/qradar/support/all_servers.sh ‘COMMAND’
(where COMMAND is the command you want to run on the appliances)
In our disk-monitoring example, we could use:
[root@MY_RADAR]# /opt/qradar/support/all_servers.sh ‘df -h’ > /root/drive_space.txt
This writes the result of the command from all the servers to the file /root/drive_space.txt.
The script can be used for several different purposes: monitoring disk space, monitoring CPU, viewing network configurations, checking logs, and so on. Can you imagine how it could help in your environment? Do you have ideas on how to integrate it with your monitoring systems? Let us know in the comments!
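As one idea along those lines, here is a minimal sketch of a disk-space alert filter you could hang off the output. The 90% threshold and the local `df -h` run are illustrative assumptions; on a Console you would feed the same pipeline from `/opt/qradar/support/all_servers.sh 'df -h'` instead.

```shell
# Sketch: flag any filesystem above a usage threshold (assumed 90% here).
# In df -h output, column 5 is "Use%" and column 6 is the mount point;
# "$5 + 0" strips the trailing % so awk can compare numerically.
THRESHOLD=90
df -h | awk -v t="$THRESHOLD" 'NR > 1 && $5 + 0 > t {print $6 " is at " $5}'
```

Piped from all_servers.sh, this turns the raw per-host `df -h` dump into a short list of only the filesystems that actually need attention.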
— This post was suggested and written by our new collaborator, Tomasz Stankiewicz.
We already discussed how to configure log sources and how to configure QRadar to receive logs. Let’s say everything is ready, you are in front of the customer, and the logs don’t show up. Do you know how to troubleshoot it? Here are some quick troubleshooting tips that can help you in those situations:
- Verify the connectivity between the log source and the QRadar collector:
- You can simply ping from the log source to the collector;
- By default, QRadar’s iptables rules drop pings, so you will need to stop the iptables service on the QRadar collector. You can do this by opening a terminal (or SSH session) on the QRadar and running the following command:
service iptables stop;
- If you cannot even ping the QRadar server from your log source, the issue is in the network;
- Don’t forget to start iptables again after testing, using the following command:
service iptables start;
- Verify the firewalls between the log source and the QRadar:
- The firewalls must allow the ports used for collection. For example, for collecting syslog, the firewalls should allow port 514/UDP;
- If you have no access to the firewall, a simple way to test it is using the telnet command from the log source to the QRadar: telnet [IP] [PORT]. Keep in mind that telnet only exercises TCP, so it cannot truly confirm a UDP port such as 514/UDP;
Example: telnet 10.1.1.1 514
- If the telnet doesn’t work, some firewall is dropping the packets on the specified port, and you should ask for a firewall rule allowing the traffic;
- Verify the traffic arriving at the QRadar collector:
- You can use the tcpdump command on the QRadar to verify whether the packets are being received;
- Syntax: tcpdump -i [INTERFACE] src host [IP-LOGSOURCE] port [PORT]
- Example: tcpdump -i eth0 src host 10.2.2.2 port 514
- If nothing shows up, either a network issue is dropping the packets or the log source is not properly configured;
- Verify the QRadar Logs:
- The QRadar logs are stored in the following folder: /var/log/
- The main log is named qradar.log
- You can simply access and monitor the log using the following command: tail -f /var/log/qradar.log
- You can verify the current EPS using the following command:
tail -f /var/log/qradar.log | grep ‘Events per Second’
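One note on the telnet test above: telnet is often absent on hardened log source servers. Bash’s built-in /dev/tcp pseudo-device gives an equivalent TCP reachability check with no extra tools. Like telnet, it only exercises TCP, so a UDP path such as 514/UDP still has to be confirmed with tcpdump on the collector. The IP and port below are the example values from this post.

```shell
# TCP reachability check without telnet, using bash's /dev/tcp device.
# Only proves TCP connectivity; UDP (e.g. syslog on 514/UDP) needs tcpdump.
HOST=10.1.1.1   # example collector IP from the post
PORT=514
if timeout 3 bash -c "exec 3<>/dev/tcp/$HOST/$PORT" 2>/dev/null; then
    echo "TCP $PORT on $HOST is reachable"
else
    echo "TCP $PORT on $HOST is blocked or closed"
fi
```

The `timeout 3` keeps the check from hanging when a firewall silently drops the packets instead of rejecting them.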
I hope this post helps you troubleshoot collection problems on QRadar. If you have any questions or suggestions, please leave us a comment!