IBM QRadar 7.3.1 was released in early 2018, yet many companies are still running older versions of the tool. One of the most common questions whenever a new release comes out is “Why should I upgrade?”. To answer it, I compiled a list of interesting improvements from the past four major releases, based on the official IBM release notes and several QRadar open mics.
- Index offences by any field, including custom fields
- Customize columns on the Log Activity tab and create custom layouts
- See the average EPS for each log source on the Admin tab and in reports
- Support for if/then/else and CASE statements in AQL queries
- Release of a software version of QRadar
- All logs and flows are compressed when stored
- Paging on searches (improving search performance)
- Change network interface configuration through the web console (IP address, interfaces, bonding, etc)
- Change firewall rules through the web console
- New APIs for QVM and incident retrieval
- Resource restrictions for specific users (searches)
- X-Force is already included in the QRadar subscription
- Reference sets are now domain specific; each client has its own domain set
- Data retention buckets now can be per tenant
- Offence assignment is improved, and the offence screen supports tenants
- Web interface for DSM editor
- AQL support nested queries
- IBM Security Master Console is now included with QRadar, providing a holistic view of the environment
- EPS/FPM is now a shared pool that can be distributed across devices
- QRadar now runs on Red Hat 7.3, which allows LVM for partition management. It also uses systemd for service management, meaning you have to use “systemctl” to manage things in the system, like service start/stop
- Activation keys are no longer necessary; you select the appliance type from a list
- No more limit on log source numbers. The limit is by EPS
- Tenant management is improved; tenants can create their own reference sets and custom properties
- AQL now supports advanced statements, such as session queries, bitwise operators and functions.
- Apps can now be offloaded to an external AppNode
- New interfaces for remote networks and remote services
- The Java deployment editor no longer exists; all deployment management happens through the admin interface
- New login screen, new logos and design.
- New app called Pulse, with very interesting dashboards providing “SOC views” and fancy graphs
- Custom properties can now be based on AQL queries
- Now it is possible to identify if QRadar inverted the flow in the network activity tab
- Minor patch updates no longer cause downtime
- Event collection now runs as a separate service, meaning you can restart just the event collection on a device
- New left side menu, allows creating shortcuts and favourites
- Browser-based notifications
- New “QRadar Deployment Intelligence App” provides a lot of system health information
- Possibility to enforce password policy
- New “QRadar Assistant App” ships with QRadar. It gives tips on how to use the tool, suggests apps, and provides a live feed of the IBM Security Support Twitter account.
- Log source auto-detection can now be controlled, allowing only certain types of log sources to be auto-detected
- Auto-discovery of event properties.
- New Data Storage offering for QRadar: it allows some logs to be collected only, without being parsed by the pipeline (saving EPS). This can be interesting when one of the devices is in debug mode.
- Support to JSON formats in log source extension parsing
- AQL searches can now be targeted at a specific event processor, improving search time
- Geolocation is improved: you can now manually enter the geolocation of IPs in the network hierarchy, so maps are correct.
- New App Developer Center, so people can develop their own apps with the IBM SDK
- Rules can now be triggered by geolocation distance, e.g. “if traffic comes from more than 100 km away”
- The vulnerability manager and risk manager are completely redesigned.
- The incident forensics module supports packet capture and more advanced features
Today I was reading about the new QRadar integration with the IBM Big Data solution. Instead of writing it all down here, I decided to share a very nice video that summarizes the benefits of this integration.
(Part 1) QRadar Basics and Big Data
(Part 2) QRadar BigData Extension:
I hope you guys enjoy the videos. You can also check out more from the author on his YouTube channel.
IBM recently released the new “IBM Security QRadar Certified Deployment Professional” certification, also called “IBM Security QRadar SIEM V7.1 Implementation”. For most people, certifications are just accomplishments to attach to a CV, but the real value of a certification is not the paper itself: it is the studying you do to earn it. Even people who have worked with the product for years discover new features or new ways of working with the solution while studying, and being certified (after proper study) gives you the confidence that you have at least seen all the features of the product and can use the tool at its best.
The new certification (code C2150-196) consists of a 90-minute test with 64 questions covering all phases of a project, from installing the hardware to tuning the rules. As mentioned above, studying and getting certified will give you a broader vision of the product, beyond just the tasks you are used to. The passing score is 70%, high compared to other IBM certifications, and since the exam covers all phases of a project, you should dedicate part of your time to studying the tool.
The best way to prepare for the certification is to explore the tool. Don’t go into the exam having never even logged in to QRadar. Another good source of information is the IBM study guide, which you can find at this link. It basically lists all the topics covered by the certification.
A personal tip: focus on the following categories: differences between the versions (SIEM, Log Manager, etc.); the theory behind offences (how they are generated, how to configure the rules, etc.); interface usage (where to find features, how to do things in the interface, etc.); and solution architecture (components).
Another suggestion, for those who have the budget: go for the IBM classes. I attended two QRadar courses (two years ago) and both were very helpful and practical, filled with useful exercises and hands-on activities. The downside is the price, but usually the company pays for the training. To learn more about the IBM QRadar courses, check this link out.
After going through the study guide (or attending the official training), exploring the tool and practicing the theory, you will be ready for the certification. For more information on how to schedule your exam, visit the official IBM learning center.
In the last post we discussed how to calculate the EPS of our environment. Now let’s discuss how to calculate the required storage size: with the EPS in hand, it becomes much easier to estimate the size of our database. In this scenario we will consider only log storage, not network flow storage.
First of all, we need to understand how data is stored on QRadar. Basically, there are 3 types of data:
- Online live data: all events can be accessed with no latency. The data is not compacted;
- Online compacted data: all events can be accessed, but with a small latency because the data is compacted. The average compression ratio is 10:1;
- Offline data: events cannot be accessed instantly because the data sits on an external backup server. To access it, the user must import the backup into QRadar (or into a QRadar virtual machine) for analysis;
After understanding what each type of data represents, we can calculate the storage based on the requirements of the project. For sizing, we only use the online data; the offline backup is not considered (since it is an external, independent server).
To make the explanation easy, let’s use the following requirements:
[Online Live Data: 7 days; Online Compacted: 180 days; EPS: 2500]
Steps to calculate:
- Calculate how much data is generated each second: multiply the EPS by 300 bytes (the average size of a log event):
In the example: 2,500 x 300 = 750,000 bytes/s ≈ 732.4 KB/s
- With the data rate per second, we can calculate how much data is generated in one day (1 day = 86,400 seconds):
In the example: 732.4 x 86,400 = 63,279,360 KB/day ≈ 61,796.3 MB/day ≈ 60.3 GB/day
- Now that we know how much data is generated in one day, let’s calculate the online live data size (non-compacted):
In the example: 60.3 GB/day x 7 days = 422.1 GB
- Next, let’s calculate the online compacted data. Remember that the average compression ratio is 10:1:
In the example: 180 days – 7 days (online live data) = 173 days
173 days x 60.3 GB/day = 10,431.9 GB
10,431.9 GB x 0.1 (compression ratio) = 1,043.2 GB
- We now have the sizes of the online live and online compacted data; summing both gives the final size:
In the example: 422.1 GB + 1,043.2 GB = 1,465.3 GB ≈ 1.43 TB
Following these basic steps, we get a good approximation of the necessary storage size. A good practice is to provision storage 20% larger than the estimate.
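The steps above can be sketched as a short script. This is just an illustration: the 300-byte average event size, the 10:1 compression ratio, and the function name are assumptions taken from this post, not from any official IBM sizing tool, and the exact arithmetic differs slightly from the hand-rounded figures above.

```python
# Storage sizing sketch following the steps above. The 300-byte average
# event size and the 10:1 compression ratio are this post's assumptions.

AVG_EVENT_BYTES = 300     # assumed average size of a stored log event
COMPRESSION_RATIO = 10    # assumed ~10:1 for online compacted data
SECONDS_PER_DAY = 86400

def storage_gb(eps, live_days, total_retention_days):
    """Return (live_gb, compacted_gb, total_gb) for the given retention."""
    gb_per_day = eps * AVG_EVENT_BYTES * SECONDS_PER_DAY / 1024 ** 3
    live_gb = gb_per_day * live_days
    compacted_gb = gb_per_day * (total_retention_days - live_days) / COMPRESSION_RATIO
    return live_gb, compacted_gb, live_gb + compacted_gb

live, compacted, total = storage_gb(eps=2500, live_days=7, total_retention_days=180)
print(f"Live: {live:.1f} GB, compacted: {compacted:.1f} GB, total: {total:.1f} GB")
```

Applying the 20% safety margin from the text would mean provisioning roughly `total * 1.2` GB.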
Do you have any other experience with storage sizing? Let us know in the comments!
UPDATE: According to one of our readers (see comments), starting from version 7.2.7, stored data is always compressed. So, if you are sizing your environment for the latest QRadar version, you should use only the “compacted data” calculation.
One of the biggest challenges when sizing a QRadar implementation is estimating the events per second (EPS) of the environment, especially because in most cases we don’t have full access to the log sources to determine the EPS precisely. So in this post we will review some tips on how to estimate it.
Determining the EPS of an event source when you have access to the system or its log files:
# Dump the logs to a file and delete all entries older than 24 hours, leaving only the last 24 hours of logs
– If the system generates syslog, follow these steps:
a. Configure the log source to send its logs to any Linux server
b. On the destination Linux server, run: tcpdump -i eth0 'src host SOURCE_IP and dst port 514'
c. Run the command for exactly 24 hours on a regular day and check how many log packets you captured.
# Count the number of logs in the file.
– If there is just one log per line, simply open the file in a text editor and check how many lines it has;
– If the logs are not one per line, take the total size of the file (in bytes) and divide it by 250 (the average size of a log line). Example: a 3 MB file = 3,145,728 bytes / 250 ≈ 12,583 log packets
# Divide the number of logs by 86,400; the result is the EPS of the log source
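If the 24-hour dump already sits on a Linux box, the counting step can be automated. A minimal sketch, assuming a local file path; the 250-byte average line size is the same rough figure used above:

```python
# Estimate EPS from a file containing exactly 24 hours of logs.
import os

SECONDS_PER_DAY = 86400
AVG_LINE_BYTES = 250  # assumed average size of a log line (from the post)

def eps_from_line_count(path):
    """One log per line: count the lines and divide by the seconds in a day."""
    with open(path, "rb") as f:
        line_count = sum(1 for _ in f)
    return line_count / SECONDS_PER_DAY

def eps_from_file_size(path):
    """Logs not one per line: estimate the event count from the file size."""
    return os.path.getsize(path) / AVG_LINE_BYTES / SECONDS_PER_DAY
```

Both helpers are illustrative names, not QRadar features; they simply mirror the manual steps above.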
Determining the EPS without access to logs or the system:
# From my previous experience, a good approximation of EPS is:
| Device type | Estimated EPS |
| --- | --- |
| IIS or Exchange | 10 |
| General Windows server | 2 |
| General Windows workstation | 0.5 |
| DNS or DHCP | 15 |
| IPS, IDS or DAM | 5 |
Calculating the EPS of the whole environment:
# Multiply the number of each device type by its estimated EPS
# Sum the EPS of all device types to get the EPS of your whole environment
Example: 3 core routers + 2 IPS = 3 x 150 + 2 x 5 = 460 EPS
# Remember to always consider at least a 20% margin when buying your license.
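Putting the table and the margin together, the whole-environment estimate is a simple weighted sum. A sketch with illustrative names; the per-device figures are the rough approximations above, and the 150 EPS for a core router is taken from the example (it does not appear in the table):

```python
# Whole-environment EPS estimate from a device inventory.

DEVICE_EPS = {
    "core_router": 150,          # from the example above, not the table
    "iis_or_exchange": 10,
    "windows_server": 2,
    "windows_workstation": 0.5,
    "dns_or_dhcp": 15,
    "ips_ids_dam": 5,
}

def environment_eps(inventory, margin=0.20):
    """Sum device_count * estimated EPS, then apply the licensing margin."""
    base = sum(count * DEVICE_EPS[kind] for kind, count in inventory.items())
    return base, base * (1 + margin)

base, with_margin = environment_eps({"core_router": 3, "ips_ids_dam": 2})
print(f"Estimated EPS: {base:.0f}; license for at least {with_margin:.0f} EPS")
```

These are sizing approximations only; always validate against the real log sources when you can.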
Do you have any other tips for calculating EPS? Let us know in the comments!