Endpoints in an IT environment generate logs related to access, operations, events, and errors. A security breach on an endpoint could direct attackers to the other components and endpoints it connects with, thereby making endpoint logging and monitoring crucial for IT security. Modern IT environments and applications often have multiple endpoints and tiers of infrastructure with a combination of on-prem servers and cloud services. If several endpoints or components in your IT environment or application infrastructure run on Linux, it makes total sense to centralize all of the logs they generate and analyze them in a single location.
In this article, we’ll take a look at how you could forward your Linux logs and centralize them on Apica.
Why centralize Linux logs?
Unifying log data in a central location is one of the best practices in logging and monitoring. Having logs from all of your distributed data sources in one place enables better correlation of events and cross-analysis. It also lets your system and security administrators sift through log and event data in a single location rather than hunting for the right log to look at when an issue occurs. Log centralization makes it faster to troubleshoot and fix critical issues, identify potential failures and threats, and optimize overall system performance.
Benefits of centralizing Linux logs on Apica
The Apica platform offers numerous benefits for log centralization. Although most Linux-based systems already centralize logs locally using a Syslog daemon, those logs remain on the host unless forwarded to another server. By shipping your Linux logs to Apica, you can:
- Protect your Linux logs from accidental loss or inaccessibility during downtimes by storing them in a separate location.
- Carry out complex searches using regular expressions and advanced queries without consuming the host’s computing resources.
- Store Linux logs for longer durations on any S3-compatible object store and have them fully indexed and searchable.
- Reduce MTTD and MTTR by helping administrators get to root causes and fix issues faster.
- Enable role-based access to Linux logs outside the system that generates them, thereby increasing data security and promoting cross-team collaboration.
- Compare and correlate events across the application and infrastructure layers.
- Augment log data with security events using built-in SIEM rules and detect security-related events across your Linux systems in real time.
On top of these benefits, you also get to visualize your Linux log data and metrics and create alerts for thresholds and events that trigger remediation or other process automation workflows in your ITOM toolkit.
Shipping Linux logs using Fluent Bit
Several Linux log forwarders can ship logs to an external server or endpoint, including Rsyslog, syslog-ng, Logstash, Fluentd, and Fluent Bit. We’ve developed a simple script that installs Fluent Bit on your Linux endpoints, configures Rsyslog, and sets up log forwarding to Apica.
You’ll need access to an Apica instance to try out the following steps. If you do not have access to an Apica instance, you can sign up for a 14-day free trial of Apica SaaS. Alternatively, there are plenty of deployment options to choose from if you’d like to try Apica PaaS. You could go for the free-forever Apica PaaS Community Edition or try our straightforward deployments on Kubernetes, AWS, MicroK8s, or Docker.
To forward your logs to Apica using Fluent Bit, do the following.
- Clone the Apica Installation GitHub repository and navigate to /fluent-bit/linux.
- Access the td-agent-bit.sh script from this folder.
- Make the script executable by running the command: chmod +x td-agent-bit.sh
- Execute the script by running: ./td-agent-bit.sh (see the combined command sketch after this list).
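Putting those steps together, the sequence of commands might look like the following. The repository URL and local path are placeholders here; substitute the actual clone URL and directory from the Apica documentation.
# Clone the Apica installation repository (URL is a placeholder) and move
# into the Fluent Bit Linux folder.
git clone <apica-install-repo-url>
cd <cloned-repo-dir>/fluent-bit/linux

# Make the helper script executable and run it (sudo may be required to
# install packages and modify the Rsyslog configuration).
chmod +x td-agent-bit.sh
sudo ./td-agent-bit.sh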
The script executes and carries out the following tasks:
- Installs Fluent Bit
- Checks your OS versions and updates your sources list, as mentioned in the Fluent Bit documentation.
- Configures Rsyslog to add omfwd, as shown below.
*.* action(type="omfwd"
    queue.type="LinkedList"
    action.resumeRetryCount="-1"
    queue.size="10000"
    queue.saveOnShutdown="on"
    target="127.0.0.1" port="5140" protocol="tcp"
)
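This rule forwards all facilities and severities (*.*) over TCP to 127.0.0.1:5140, where Fluent Bit listens, using an in-memory queue that retries indefinitely and is saved on shutdown. If you edit this rule by hand, you can validate the Rsyslog configuration before restarting the service:
# Dry-run check of the Rsyslog configuration; exits non-zero on syntax errors.
sudo rsyslogd -N1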
The script also places the td-agent-bit.conf file under the default Fluent Bit installation folder, /etc/td-agent-bit.
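For the forwarded messages to be picked up, td-agent-bit.conf needs a syslog input listening on the same port that Rsyslog targets. The script generates this section for you; purely as a sketch, assuming the standard Fluent Bit syslog input plugin and the stock syslog-rfc3164 parser, it would look something like this:
[INPUT]
    Name      syslog
    Mode      tcp
    Listen    127.0.0.1
    Port      5140
    Parser    syslog-rfc3164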
Configure the [OUTPUT] section of the td-agent-bit.conf file based on your Apica cluster, as shown below.
[OUTPUT]
    Name              http
    Match             *
    Host              localhost
    Port              80
    URI               /v1/json_batch
    Format            json
    tls               off
    tls.verify        off
    net.keepalive     off
    compress          gzip
    Header            Authorization Bearer ${Apica_TOKEN}
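In this example, Host, Port, and the TLS settings point to a local Apica instance over plain HTTP; for a remote or SaaS cluster you would typically set Host to your cluster’s ingest endpoint, Port to 443, and turn tls and tls.verify on. Before restarting the services, you can sanity-check that the endpoint is reachable with your token. The snippet below is only a reachability check, not the documented batch payload format:
# Quick reachability check against the ingest endpoint defined in [OUTPUT].
# Replace localhost:80 and $Apica_TOKEN with your cluster's values; any HTTP
# response (rather than a connection error) confirms the endpoint is reachable.
curl -sS -o /dev/null -w "HTTP %{http_code}\n" \
  -X POST "http://localhost:80/v1/json_batch" \
  -H "Authorization: Bearer $Apica_TOKEN" \
  -H "Content-Type: application/json" \
  -d '[]'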
Now that the configuration is complete, run the following commands to start Fluent Bit and restart Rsyslog.
systemctl start td-agent-bit
systemctl restart rsyslog
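To confirm that both services are running and that log events are flowing, you can check their status and emit a test message through Syslog:
# Verify both services came up cleanly.
systemctl status td-agent-bit rsyslog --no-pager

# Emit a test message through Syslog; it should appear in Apica shortly.
logger "apica log forwarding test from $(hostname)"

# If nothing shows up, follow Fluent Bit's service output for delivery errors.
journalctl -u td-agent-bit -f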
Next, access the Logs page on your Apica UI. You should now see your Linux logs being ingested into the Linux:Linux1 namespace.
Conclusion
If your IT environment contains many components that run on Linux, there’s a lot to gain from centralizing your Linux logs on Apica.
Apica is the world’s only unified data platform for real-time monitoring, observability, log aggregation, and analytics, offering infinite storage scale with zero storage tax. Apica ships with a host of integrations and tooling that let you exercise cross-platform, real-time monitoring, observability, and analysis, threat and bug forensics, and process automation, all while leveraging robust built-in security measures, promoting cross-team collaboration, and maintaining regulatory compliance. The use of object storage also means that we neither dictate how much log data you can store or for how long, nor force you to favor logging specific components of your environment over others: you get to log everything and store and manage all your data on your terms.
There’s always the option of building your own log management and analytics stack using open-source tooling. However, this can be more challenging than it appears and may not scale the way you need it to. If you find the benefits and ease of integration compelling, do give Apica SaaS a try. You won’t regret it.