FlowTraq's hardware requirements depend heavily on the number of flow records it receives per second. The more flow records FlowTraq must process, the bigger the hardware investment becomes.

To provide full forensic recall, FlowTraq stores every flow record it receives to disk indefinitely, as long as there is room in the database. In addition to storing flow records on disk, FlowTraq Server keeps a memory cache of recently received records. The larger this cache, the more records can be accessed quickly. If your network is very busy, you may need to dedicate more RAM to the server installation, or install multiple server machines in a cluster. If a client requests records that are not in the RAM cache, the FlowTraq server must read them from disk, which takes substantially longer. If older history is kept in a separate archive (an optional configuration), retrieving those records takes longer still.
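As a purely conceptual illustration (this is not FlowTraq code, and all names in it are hypothetical), the lookup order described above can be sketched as a set of storage tiers checked from fastest to slowest:

    # Conceptual sketch of the tiered lookup described above: RAM cache first,
    # then the on-disk database, then the optional archive. Not FlowTraq code;
    # the delays below only model the relative cost of each tier.
    import time

    class Tier:
        def __init__(self, name, records, delay):
            self.name = name
            self.records = records      # flow records held by this tier
            self.delay = delay          # simulated access latency in seconds

        def search(self, src_ip):
            time.sleep(self.delay)
            return [r for r in self.records if r["src"] == src_ip]

    def find_flows(src_ip, tiers):
        """Return matches from the first (fastest) tier that has them."""
        for tier in tiers:
            hits = tier.search(src_ip)
            if hits:
                return tier.name, hits
        return None, []

    ram_cache = Tier("ram", [{"src": "10.0.0.5", "bytes": 1200}], delay=0.001)
    disk      = Tier("disk", [{"src": "10.0.0.9", "bytes": 800}], delay=0.05)
    archive   = Tier("archive", [{"src": "10.0.0.7", "bytes": 64}], delay=0.5)

    print(find_flows("10.0.0.9", [ram_cache, disk, archive]))  # served from disk

Queries answered from the RAM cache return almost immediately; anything that falls through to disk or the archive pays the corresponding latency penalty.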

This full-fidelity approach allows for more powerful analysis and forensic capabilities than traditional flow collectors provide. However, it also means that FlowTraq can be more demanding of the hardware it runs on.

A FlowTraq server handling a 24/7 sustained flow rate of 25,000 updates per second should be configured, at minimum, with an 8-core CPU and 8GB of RAM per core, for a total of 64GB. Disk space configuration should be driven by your required retention period. Full-fidelity retention of 25,000 flow updates per second consumes about 1TB per week; keeping 3 months of flow data from a saturated 10Gbit network therefore takes about 12TB.
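As a rough sizing aid only, the figures above (about 1TB per week at 25,000 updates per second) can be scaled linearly to other rates and retention periods; actual consumption will vary with your traffic mix. The helper below is a hypothetical sketch, not a FlowTraq tool:

    # Rough disk-sizing sketch based on the guideline above:
    # full-fidelity storage at 25,000 updates/second uses roughly 1 TB per week.
    # Assumes linear scaling; real-world usage varies with traffic mix.
    TB_PER_WEEK_AT_25K = 1.0

    def retention_disk_tb(updates_per_second, retention_weeks):
        """Estimate the disk space (in TB) needed for the given rate and retention."""
        return TB_PER_WEEK_AT_25K * (updates_per_second / 25_000) * retention_weeks

    # Example: 25,000 updates/second kept for roughly 3 months (~12 weeks) -> ~12 TB,
    # in line with the estimate above.
    print(round(retention_disk_tb(25_000, 12), 1))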

[Tip]Tip

These hardware guidelines apply both to the vApp-based FlowTraq server and to FlowTraq server daemons installed directly on dedicated physical hardware.


The preceding configurations should be interpreted as guidelines. To determine your requirements, test the software's performance in your network environment. Hardware vendors offer many alternatives that fall into the same general categories, and depending on your needs you may be able to get the job done with less powerful hardware. A smaller configuration will certainly handle 25,000 flow updates per second, but it will take additional time to display graphs and tables, will gracefully handle fewer active connections (including NBI detectors), and in extreme cases may drop flows during periods of high disk usage.

[Note]Note

Remember that most flow exporters send multiple updates per flow, so the updates/second rate is typically 2x-3x the flows/second rate.
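For capacity planning, this means a measured flows/second rate should be multiplied before being compared against the sizing figures above; a minimal illustration:

    # Convert a measured flows/second rate into an expected updates/second range,
    # using the 2x-3x multiplier noted above. Illustration only.
    def expected_update_rate(flows_per_second, low=2.0, high=3.0):
        return flows_per_second * low, flows_per_second * high

    # Example: 10,000 flows/second typically corresponds to 20,000-30,000 updates/second.
    print(expected_update_rate(10_000))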

In demanding environments (such as those with a flow load higher than 25,000 updates per second, many FlowTraq users, or heavy external API/script usage), you may need to run more than one FlowTraq server in a cluster configuration. Clustering automatically balances the processing load over multiple systems and is completely transparent to the user. A cluster of 8 FlowTraq nodes can generally handle a 200,000-update-per-second load at a speed similar to a single node handling 25,000 updates per second. Contact FlowTraq support for guidance.
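As a rough planning aid, the per-node figure above (about 25,000 sustained updates per second per node) can be used to estimate how many cluster nodes a given load requires; treat the sketch below as a starting point and confirm the design with FlowTraq support:

    # Rough cluster-sizing sketch based on the guidance above: one node handles
    # about 25,000 sustained updates/second. Illustration only.
    import math

    UPDATES_PER_NODE = 25_000

    def nodes_needed(sustained_updates_per_second):
        return math.ceil(sustained_updates_per_second / UPDATES_PER_NODE)

    print(nodes_needed(200_000))  # -> 8 nodes, matching the example above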

[Caution]Caution: 32-bit environments

Although FlowTraq will work in a 32-bit environment, we strongly recommend that FlowTraq Server be installed on a 64-bit platform.

On 32-bit platforms, FlowTraq Server can only use approximately 3GB of RAM, which is insufficient for most environments. A 64-bit operating system allows FlowTraq Server to allocate more RAM, which in turn allows for a longer instant-recall history and a higher input flow rate.

Note that to take advantage of a 64-bit platform, both the CPU and the operating system must be 64-bit.

The FlowTraq vApp is a 64-bit system that can be configured to use large quantities of RAM if needed.

[Caution]Caution: shared environments

FlowTraq is a very resource-intensive application, and its performance can be greatly impacted by other tasks and software running in its environment, particularly disk-intensive applications. In a shared virtual environment, it is important to consider carefully the impact of other virtual machines on core and RAM availability, and on disk throughput.