This integration was last tested with Artifactory 7.71.11 and Xray 3.88.12.
Note! You must follow the order of the steps throughout the Splunk configuration.
Install the JFrog Log Analytics Platform app from Splunkbase here!
1. Download the app file from Splunkbase
2. Open the Splunk web console as an administrator
3. From the homepage, click the Manage button with the gear icon (left side of the screen, in the top-right corner of the Apps section)
4. Click on "Install app from file"
5. Select the file downloaded from Splunkbase on your computer
6. If the app is already installed, check "Upgrade app"
7. Click "Upload"
Splunk will prompt for a restart to complete the installation. If Splunk does not restart automatically, do the following:
1. Open Splunk web console as administrator
2. Click on Settings then Server Controls
3. Click on Restart Splunk
Log in to Splunk after the restart completes.
Confirm the app version is the latest available in Splunkbase.
Our integration uses the Splunk HTTP Event Collector (HEC) to send data to Splunk.
You will need to configure the HEC to accept data (enabled) and create a new token. The steps are below.
1. Open Splunk web console as administrator
2. Click on "Settings" and, in the dropdown, select "Indexes"
3. Click on "New Index"
4. Enter Index name as jfrog_splunk
5. Click "Save"
1. Open Splunk web console as administrator
2. Click on "Settings" and, in the dropdown, select "Indexes"
3. Click on "New Index"
4. Enter Index name as jfrog_splunk_metrics
5. Select Index Data Type as Metrics
6. Click "Save"
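If you prefer to script these steps, the same two indexes can be created through Splunk's management REST API instead of the web console. This is a sketch, not part of the official setup: the management URL, credentials, and timeout are placeholder assumptions to adapt to your deployment.

```shell
# Placeholder values -- adjust to your Splunk deployment
SPLUNK_MGMT_URL="https://localhost:8089"
SPLUNK_AUTH="admin:changeme"

# Event index for JFrog platform logs
curl -ksm 5 -u "$SPLUNK_AUTH" "$SPLUNK_MGMT_URL/services/data/indexes" \
  -d name=jfrog_splunk || true

# Metrics index for JFrog platform metrics (datatype=metric)
curl -ksm 5 -u "$SPLUNK_AUTH" "$SPLUNK_MGMT_URL/services/data/indexes" \
  -d name=jfrog_splunk_metrics -d datatype=metric || true
```

Either way, confirm in the web console under Settings > Indexes that both indexes exist before creating the HEC tokens.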
1. Open Splunk web console as administrator
2. Click on "Settings" and, in the dropdown, select "Data inputs"
3. Click on "HTTP Event Collector"
4. Click on "New Token"
5. Enter a "Name" in the textbox
6. (Optional) Enter a "Description" in the textbox
7. Click on the green "Next" button
8. Add "jfrog_splunk" index to store the JFrog platform log data into.
9. Click on the green "Review" button
10. If everything looks correct, click on the green "Done" button
11. Save the generated token value
1. Open Splunk web console as administrator
2. Click on "Settings" and, in the dropdown, select "Data inputs"
3. Click on "HTTP Event Collector"
4. Click on "New Token"
5. Enter a "Name" in the textbox
6. (Optional) Enter a "Description" in the textbox
7. Click on the green "Next" button
8. Add "jfrog_splunk_metrics" index to store the JFrog platform metrics data into.
9. Click on the green "Review" button
10. If everything looks correct, click on the green "Done" button
11. Save the generated token value
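Before wiring up Fluentd, it is worth verifying each saved token with a single test event. The host, port, and token below are placeholders; a healthy HEC answers with {"text":"Success","code":0}.

```shell
# Placeholder values -- substitute your Splunk host, HEC port, and a token saved above
SPLUNK_HEC_HOST="splunk.example.com"
SPLUNK_HEC_PORT="8088"
SPLUNK_HEC_TOKEN="00000000-0000-0000-0000-000000000000"

# Minimal test event aimed at the jfrog_splunk index
PAYLOAD='{"event": "jfrog integration smoke test", "index": "jfrog_splunk"}'

curl -ksm 5 "https://${SPLUNK_HEC_HOST}:${SPLUNK_HEC_PORT}/services/collector/event" \
  -H "Authorization: Splunk ${SPLUNK_HEC_TOKEN}" \
  -d "${PAYLOAD}" || true
```

Repeat with the metrics token if you want to confirm it as well; an "Invalid token" response means the token was copied incorrectly or the HEC is disabled.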
For non-Kubernetes installations, enable metrics in Artifactory by making the following configuration changes to the Artifactory System YAML:
shared:
  metrics:
    enabled: true
artifactory:
  metrics:
    enabled: true
Once this configuration is done and the application is restarted, metrics will be available in OpenMetrics format.
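To spot-check that metrics are being exposed after the restart, you can query the endpoint directly. This is a sketch: the URL and token values are placeholders, and it assumes Artifactory's documented api/v1/metrics endpoint; the output should be standard OpenMetrics lines (# TYPE ... gauge/counter).

```shell
# Placeholder values -- adjust to your deployment
JPD_URL="http://artifactory.example.com"
JFROG_ADMIN_TOKEN="REPLACE_WITH_ACCESS_TOKEN"

# Fetch the OpenMetrics output from Artifactory
curl -ksm 5 -H "Authorization: Bearer ${JFROG_ADMIN_TOKEN}" \
  "${JPD_URL}/artifactory/api/v1/metrics" || true
```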
Metrics are enabled by default in Xray. For Kubernetes-based installations, openMetrics is enabled in the helm install commands listed below
Ensure you have Internet access from the VM. The recommended install is through Fluentd's native OS package installers:
OS | Package Manager | Link |
---|---|---|
CentOS/RHEL | Linux - RPM (YUM) | https://docs.fluentd.org/installation/install-by-rpm |
Debian/Ubuntu | Linux - APT | https://docs.fluentd.org/installation/install-by-deb |
MacOS/Darwin | MacOS - DMG | https://docs.fluentd.org/installation/install-by-dmg |
Windows | Windows - MSI | https://docs.fluentd.org/installation/install-by-msi |
Gem Install** | MacOS & Linux - Gem | https://docs.fluentd.org/installation/install-by-gem |
** For a Gem-based install, a Ruby interpreter has to be set up first. The following is the recommended process to install Ruby:
1. Install Ruby Version Manager (RVM) as described in https://rvm.io/rvm/install#installation-explained; ensure you follow all the on-screen instructions provided to complete the RVM installation
* For installation across users, a sudo-based install is recommended, as described in https://rvm.io/support/troubleshooting#sudo
2. Once the RVM installation is complete, verify it by executing the command 'rvm -v'
3. Install Ruby v2.7.0 or above by executing 'rvm install <ver_num>', e.g. 'rvm install 2.7.5'
4. Verify the Ruby installation with 'ruby -v', and the gem installation with 'gem -v' and 'bundler -v', to ensure all the components are intact
5. With Ruby and Gems in place, the environment is ready for new gems; execute the following gem install commands one after the other to set up the needed ecosystem
'gem install fluentd'
After Fluentd is successfully installed, the following plugins need to be installed:
gem install fluent-plugin-concat
gem install fluent-plugin-splunk-hec
gem install fluent-plugin-jfrog-siem
gem install fluent-plugin-jfrog-metrics
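As an illustration of how these plugins fit together, the output side of a Fluentd configuration for Splunk ends in a match block provided by fluent-plugin-splunk-hec. The sketch below is not the shipped configuration; the `jfrog.**` tag pattern is an assumption, and the actual fluent.conf.<product_name> files from this repo should be used as-is.

```conf
# Hedged sketch of a splunk_hec output section (tag pattern is assumed)
<match jfrog.**>
  @type splunk_hec
  protocol https
  hec_host "#{ENV['SPLUNK_HEC_HOST']}"
  hec_port "#{ENV['SPLUNK_HEC_PORT']}"
  hec_token "#{ENV['SPLUNK_HEC_TOKEN']}"
  index jfrog_splunk
  insecure_ssl "#{ENV['SPLUNK_INSECURE_SSL']}"
</match>
```

Note how the plugin reads its connection settings from the same environment variables described in the next section.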
We rely heavily on environment variables to ensure the correct log files are streamed to your observability dashboards. Ensure that you fill in the .env file with the correct values. Download the .env file from here.
- JF_PRODUCT_DATA_INTERNAL: The environment variable JF_PRODUCT_DATA_INTERNAL must be defined to point to the correct location. For each JFrog service you will find its active log files in the $JFROG_HOME/<product>/var/log directory
- SPLUNK_COM_PROTOCOL: HTTP scheme, http or https
- SPLUNK_HEC_HOST: Splunk Instance URL
- SPLUNK_HEC_PORT: Splunk HEC configured port
- SPLUNK_HEC_TOKEN: Splunk HEC Token for sending logs to Splunk
- SPLUNK_METRICS_HEC_TOKEN: Splunk HEC Token for sending metrics to Splunk
- SPLUNK_INSECURE_SSL: false for test environments only or if http scheme
- SPLUNK_VERIFY_SSL: false for disabling ssl validation (useful for proxy forwarding or bypassing ssl certificate validation)
- SPLUNK_COMPRESS_DATA: true for compressing logs and metrics json payloads on outbound to Splunk
- JPD_URL: Artifactory JPD URL of the format http://<ip_address>
- JPD_ADMIN_USERNAME: Artifactory username for authentication
- JFROG_ADMIN_TOKEN: Artifactory Access Token for authentication
- COMMON_JPD: Set this flag to true only for non-Kubernetes installations, or for installations where the same JPD base URL is used to access both Artifactory and Xray (e.g. https://sample_base_url/artifactory or https://sample_base_url/xray)
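Put together, a filled-in jfrog.env might look like the following. Every value here is illustrative; substitute the hosts, tokens, and paths from your own environment.

```shell
# jfrog.env -- illustrative values only
JF_PRODUCT_DATA_INTERNAL=/var/opt/jfrog/artifactory/var
SPLUNK_COM_PROTOCOL=https
SPLUNK_HEC_HOST=splunk.example.com
SPLUNK_HEC_PORT=8088
SPLUNK_HEC_TOKEN=00000000-0000-0000-0000-000000000000
SPLUNK_METRICS_HEC_TOKEN=11111111-1111-1111-1111-111111111111
SPLUNK_INSECURE_SSL=false
SPLUNK_VERIFY_SSL=true
SPLUNK_COMPRESS_DATA=true
JPD_URL=http://10.0.0.10
JPD_ADMIN_USERNAME=admin
JFROG_ADMIN_TOKEN=REPLACE_WITH_ACCESS_TOKEN
COMMON_JPD=false
```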
Apply the .env file, then run the fluentd wrapper with one argument pointing to the configured fluent.conf.* file.
source jfrog.env
./fluentd $JF_PRODUCT_DATA_INTERNAL/fluent.conf.<product_name>
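To keep the collector running across reboots on systemd-based hosts, the invocation above can be wrapped in a service unit. Everything in this sketch is an assumption to adapt: the binary path, the env-file location, the config path, and the service user.

```ini
# /etc/systemd/system/jfrog-fluentd.service (hypothetical paths throughout)
[Unit]
Description=Fluentd shipping JFrog Artifactory logs to Splunk
After=network-online.target

[Service]
EnvironmentFile=/etc/jfrog/jfrog.env
ExecStart=/usr/sbin/fluentd -c /etc/jfrog/fluent.conf.rt
Restart=on-failure
User=fluentd

[Install]
WantedBy=multi-user.target
```

After placing the file, `systemctl daemon-reload && systemctl enable --now jfrog-fluentd` would start it on boot.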
Note! These steps were not tested to work out of the box on macOS.
In order to run Fluentd as a Docker image to send the logs, violations, and metrics data to Splunk, execute the following commands on the host that runs Docker.
- Check that the Docker installation is functional by executing the commands 'docker version' and 'docker ps'.
- Once the version and processes are listed successfully, build the intended Docker image for Splunk using the Dockerfile:
  - Download the Dockerfile from here to any directory that has write permissions.
- Download the docker.env file needed to run the JFrog/Fluentd Docker images for Splunk:
  - Download docker.env from here to the directory where the Dockerfile was downloaded.
For Splunk as the observability platform, execute these commands to set up the Docker container running the Fluentd installation:
- Execute

  docker build --build-arg SOURCE="JFRT" --build-arg TARGET="SPLUNK" -t <image_name> .

  Command example:

  docker build --build-arg SOURCE="JFRT" --build-arg TARGET="SPLUNK" -t jfrog/fluentd-splunk-rt .

  The above command builds the Docker image.
- Fill in the necessary information in the docker.env file:
  - JF_PRODUCT_DATA_INTERNAL: The environment variable JF_PRODUCT_DATA_INTERNAL must be defined to point to the correct location. For each JFrog service you will find its active log files in the $JFROG_HOME/<product>/var/log directory
  - SPLUNK_COM_PROTOCOL: HTTP scheme, http or https
  - SPLUNK_HEC_HOST: Splunk instance URL
  - SPLUNK_HEC_PORT: Splunk HEC configured port
  - SPLUNK_HEC_TOKEN: Splunk HEC token for sending logs to Splunk
  - SPLUNK_METRICS_HEC_TOKEN: Splunk HEC token for sending metrics to Splunk
  - SPLUNK_INSECURE_SSL: false for test environments only or if http scheme
  - SPLUNK_VERIFY_SSL: false for disabling ssl validation (useful for proxy forwarding or bypassing ssl certificate validation)
  - SPLUNK_COMPRESS_DATA: true for compressing logs and metrics payloads that are sent to Splunk
  - JPD_URL: Artifactory JPD URL of the format http://<ip_address>
  - JPD_ADMIN_USERNAME: Artifactory username for authentication
  - JFROG_ADMIN_TOKEN: Artifactory Access Token for authentication
  - COMMON_JPD: Set this flag to true only for non-Kubernetes installations, or for installations where the same JPD base URL is used to access both Artifactory and Xray (e.g. https://sample_base_url/artifactory or https://sample_base_url/xray)
Execute
docker run -it --name jfrog-fluentd-splunk-rt -v <path_to_logs>:/var/opt/jfrog/artifactory --env-file docker.env <image_name>
The <path_to_logs> should be an absolute path where the Jfrog Artifactory Logs folder resides, i.e for an Docker based Artifactory Installation, ex: /var/opt/jfrog/artifactory/var/logs on the docker host.
Command example
docker run -it --name jfrog-fluentd-splunk-rt -v $JFROG_HOME/artifactory/var/:/var/opt/jfrog/artifactory --env-file docker.env jfrog/fluentd-splunk-rt
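If you prefer Compose over a raw docker run, the same container can be described declaratively. This is a sketch under the same assumptions as the example above: the image name, host log path, and docker.env file are the ones built and downloaded in the previous steps.

```yaml
# docker-compose.yml -- sketch mirroring the docker run example above
services:
  jfrog-fluentd-splunk-rt:
    image: jfrog/fluentd-splunk-rt
    env_file: docker.env
    volumes:
      # host path containing the Artifactory logs (adjust to your installation)
      - ${JFROG_HOME}/artifactory/var:/var/opt/jfrog/artifactory
    restart: unless-stopped
```

Running `docker compose up -d` from the directory holding this file and docker.env would then start the collector in the background.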
The recommended installation for Kubernetes is to utilize the helm chart with the associated values.yaml in this repo.
Product | Example Values File |
---|---|
Artifactory | helm/artifactory-values.yaml |
Artifactory HA | helm/artifactory-ha-values.yaml |
Xray | helm/xray-values.yaml |
Warning
The old docker registry partnership-pts-observability.jfrog.io, which contains older versions of this integration, is now deprecated. We'll keep the existing docker images on this old registry until August 1st, 2024. After that date, this registry will no longer be available. Please helm upgrade your JFrog Kubernetes deployment in order to pull images, as specified in the above helm values files, from the new releases-pts-observability-fluentd.jfrog.io registry. Please do so in order to avoid ImagePullBackOff errors in your deployment once this registry is gone.
Add JFrog Helm repository:
helm repo add jfrog https://charts.jfrog.io
helm repo update
Throughout the example helm installations we'll use jfrog-splunk as the namespace. That said, you can use a different or existing namespace instead by setting the following environment variable:
export INST_NAMESPACE=jfrog-splunk
If you don't have an existing namespace for the deployment, create it and set the kubectl context to use this namespace:
kubectl create namespace $INST_NAMESPACE
kubectl config set-context --current --namespace=$INST_NAMESPACE
Generate a masterKey and joinKey for the installation:
export JOIN_KEY=$(openssl rand -hex 32)
export MASTER_KEY=$(openssl rand -hex 32)
- Skip this step if you already have Artifactory installed. Otherwise, install Artifactory using the command below:

  helm upgrade --install artifactory jfrog/artifactory \
    --set artifactory.masterKey=$MASTER_KEY \
    --set artifactory.joinKey=$JOIN_KEY \
    --set artifactory.metrics.enabled=true \
    -n $INST_NAMESPACE --create-namespace
💡 Metrics collection is disabled by default in Artifactory. Please make sure you follow the above helm upgrade command to enable it by setting artifactory.metrics.enabled=true. For Artifactory versions <= 7.86.x, enable metrics by setting the flag artifactory.openMetrics.enabled=true instead.
- Create a secret for JFrog's admin token (Access Token) using either of the following methods:

  kubectl create secret generic jfrog-admin-token --from-file=token=<path_to_token_file>

  OR

  kubectl create secret generic jfrog-admin-token --from-literal=token=<JFROG_ADMIN_TOKEN>
- For the Artifactory installation, download the .env file from here. Fill in the jfrog_helm.env file with the correct values.
- SPLUNK_COM_PROTOCOL: HTTP Scheme, http or https
- SPLUNK_HEC_HOST: Splunk Instance URL
- SPLUNK_HEC_PORT: Splunk HEC configured port
- SPLUNK_HEC_TOKEN: Splunk HEC Token for sending logs to Splunk
- SPLUNK_METRICS_HEC_TOKEN: Splunk HEC Token for sending metrics to Splunk
- SPLUNK_INSECURE_SSL: false for test environments only or if http scheme
- SPLUNK_VERIFY_SSL: false for disabling ssl validation (useful for proxy forwarding or bypassing ssl certificate validation)
- SPLUNK_COMPRESS_DATA: true for compressing logs and metrics json payloads on outbound to Splunk
- JPD_URL: Artifactory JPD URL of the format http://<ip_address>
- JPD_ADMIN_USERNAME: Artifactory username for authentication
- COMMON_JPD: Set this flag to true only for non-Kubernetes installations, or for installations where the same JPD base URL is used to access both Artifactory and Xray (e.g. https://sample_base_url/artifactory or https://sample_base_url/xray)
Apply the .env file using the command below:
source jfrog_helm.env
- The Postgres password is required to upgrade Artifactory. Run the following command to get the current password:

  POSTGRES_PASSWORD=$(kubectl get secret artifactory-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
- Upgrade the Artifactory installation using the command below:

  helm upgrade --install artifactory jfrog/artifactory \
    --set artifactory.joinKey=$JOIN_KEY \
    --set databaseUpgradeReady=true \
    --set postgresql.postgresqlPassword=$POSTGRES_PASSWORD \
    --set nginx.service.ssloffload=true \
    --set splunk.host=$SPLUNK_HEC_HOST \
    --set splunk.port=$SPLUNK_HEC_PORT \
    --set splunk.logs_token=$SPLUNK_HEC_TOKEN \
    --set splunk.metrics_token=$SPLUNK_METRICS_HEC_TOKEN \
    --set splunk.compress_data=$SPLUNK_COMPRESS_DATA \
    --set splunk.com_protocol=$SPLUNK_COM_PROTOCOL \
    --set splunk.insecure_ssl=$SPLUNK_INSECURE_SSL \
    --set splunk.verify_ssl=$SPLUNK_VERIFY_SSL \
    --set jfrog.observability.jpd_url=$JPD_URL \
    --set jfrog.observability.username=$JPD_ADMIN_USERNAME \
    --set jfrog.observability.common_jpd=$COMMON_JPD \
    -f helm/artifactory-values.yaml \
    -n $INST_NAMESPACE --create-namespace
- For an HA installation, please create a license secret on your cluster prior to installation:

  kubectl create secret generic artifactory-license --from-file=<path_to_license_file>

- Skip this step if you already have Artifactory installed. Otherwise, install Artifactory using the command below:

  helm upgrade --install artifactory-ha jfrog/artifactory-ha \
    --set artifactory.masterKey=$MASTER_KEY \
    --set artifactory.joinKey=$JOIN_KEY \
    --set artifactory.license.secret=artifactory-license \
    --set artifactory.license.dataKey=artifactory.cluster.license \
    --set artifactory.metrics.enabled=true \
    -n $INST_NAMESPACE --create-namespace
💡 Metrics collection is disabled by default in Artifactory HA. Please make sure you follow the above helm upgrade command to enable it by setting artifactory.metrics.enabled=true. For Artifactory versions <= 7.86.x, enable metrics by setting the flag artifactory.openMetrics.enabled=true instead.
- Create a secret for JFrog's admin token (Access Token) using either of the following methods:

  kubectl create secret generic jfrog-admin-token --from-file=token=<path_to_token_file>

  OR

  kubectl create secret generic jfrog-admin-token --from-literal=token=<JFROG_ADMIN_TOKEN>
- Download the .env file from here. Fill in the jfrog_helm.env file with the correct values.
- SPLUNK_COM_PROTOCOL: HTTP Scheme, http or https
- SPLUNK_HEC_HOST: Splunk Instance URL
- SPLUNK_HEC_PORT: Splunk HEC configured port
- SPLUNK_HEC_TOKEN: Splunk HEC Token for sending logs to Splunk
- SPLUNK_METRICS_HEC_TOKEN: Splunk HEC Token for sending metrics to Splunk
- SPLUNK_INSECURE_SSL: false for test environments only or if http scheme
- SPLUNK_VERIFY_SSL: false for disabling ssl validation (useful for proxy forwarding or bypassing ssl certificate validation)
- SPLUNK_COMPRESS_DATA: true for compressing logs and metrics json payloads on outbound to Splunk
- JPD_URL: Artifactory JPD URL of the format http://<ip_address>
- JPD_ADMIN_USERNAME: Artifactory username for authentication
- COMMON_JPD: Set this flag to true only for non-Kubernetes installations, or for installations where the same JPD base URL is used to access both Artifactory and Xray (e.g. https://sample_base_url/artifactory or https://sample_base_url/xray)
Apply the .env files and then run the helm command below
source jfrog_helm.env
- The Postgres password is required to upgrade Artifactory. Run the following command to get the current password:

  POSTGRES_PASSWORD=$(kubectl get secret artifactory-ha-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
- Upgrade the Artifactory HA installation using the command below:

  helm upgrade --install artifactory-ha jfrog/artifactory-ha \
    --set artifactory.joinKey=$JOIN_KEY \
    --set databaseUpgradeReady=true \
    --set postgresql.postgresqlPassword=$POSTGRES_PASSWORD \
    --set nginx.service.ssloffload=true \
    --set splunk.host=$SPLUNK_HEC_HOST \
    --set splunk.port=$SPLUNK_HEC_PORT \
    --set splunk.logs_token=$SPLUNK_HEC_TOKEN \
    --set splunk.metrics_token=$SPLUNK_METRICS_HEC_TOKEN \
    --set splunk.com_protocol=$SPLUNK_COM_PROTOCOL \
    --set splunk.insecure_ssl=$SPLUNK_INSECURE_SSL \
    --set splunk.verify_ssl=$SPLUNK_VERIFY_SSL \
    --set splunk.compress_data=$SPLUNK_COMPRESS_DATA \
    --set jfrog.observability.jpd_url=$JPD_URL \
    --set jfrog.observability.username=$JPD_ADMIN_USERNAME \
    --set jfrog.observability.common_jpd=$COMMON_JPD \
    -f helm/artifactory-ha-values.yaml \
    -n $INST_NAMESPACE --create-namespace
Create a secret for JFrog's admin token (Access Token) using either of the following methods, if it doesn't already exist:
kubectl create secret generic jfrog-admin-token --from-file=token=<path_to_token_file>
OR
kubectl create secret generic jfrog-admin-token --from-literal=token=<JFROG_ADMIN_TOKEN>
For the Xray installation, download the .env file from here. Fill in the jfrog_helm.env file with the correct values.
- SPLUNK_COM_PROTOCOL: HTTP Scheme, http or https
- SPLUNK_HEC_HOST: Splunk Instance URL
- SPLUNK_HEC_PORT: Splunk HEC configured port
- SPLUNK_HEC_TOKEN: Splunk HEC Token for sending logs to Splunk
- SPLUNK_METRICS_HEC_TOKEN: Splunk HEC Token for sending metrics to Splunk
- SPLUNK_INSECURE_SSL: false for test environments only or if http scheme
- SPLUNK_VERIFY_SSL: false for disabling ssl validation (useful for proxy forwarding or bypassing ssl certificate validation)
- SPLUNK_COMPRESS_DATA: true for compressing logs and metrics json payloads on outbound to Splunk
- JPD_URL: Artifactory JPD URL of the format http://<ip_address>
- JPD_ADMIN_USERNAME: Artifactory username for authentication
- JFROG_ADMIN_TOKEN: For security reasons, this value will be pulled from the secret jfrog-admin-token created in the step above
- COMMON_JPD: Set this flag to true only for non-Kubernetes installations, or for installations where the same JPD base URL is used to access both Artifactory and Xray (e.g. https://sample_base_url/artifactory or https://sample_base_url/xray)
Apply the .env files and then run the helm command below
source jfrog_helm.env
Generate a master key for Xray:
export XRAY_MASTER_KEY=$(openssl rand -hex 32)
Use the same joinKey as in the Artifactory installation ($JOIN_KEY) to allow the Xray node to connect to Artifactory successfully.
helm upgrade --install xray jfrog/xray --set xray.jfrogUrl=$JPD_URL \
--set xray.masterKey=$XRAY_MASTER_KEY \
--set xray.joinKey=$JOIN_KEY \
--set splunk.host=$SPLUNK_HEC_HOST \
--set splunk.port=$SPLUNK_HEC_PORT \
--set splunk.logs_token=$SPLUNK_HEC_TOKEN \
--set splunk.metrics_token=$SPLUNK_METRICS_HEC_TOKEN \
--set splunk.com_protocol=$SPLUNK_COM_PROTOCOL \
--set splunk.insecure_ssl=$SPLUNK_INSECURE_SSL \
--set splunk.verify_ssl=$SPLUNK_VERIFY_SSL \
--set splunk.compress_data=$SPLUNK_COMPRESS_DATA \
--set jfrog.observability.jpd_url=$JPD_URL \
--set jfrog.observability.username=$JPD_ADMIN_USERNAME \
--set jfrog.observability.common_jpd=$COMMON_JPD \
-f helm/xray-values.yaml \
-n $INST_NAMESPACE --create-namespace
The JFrog Artifactory dashboard is divided into multiple sections: Application, Audit, Requests, Docker, System Metrics, Heap Metrics, and Connection Metrics.
- Application - This section tracks log volume (information about different log sources) and Artifactory errors over time (bursts of application errors that may otherwise go undetected)
- Audit - This section tracks audit logs, which help you determine who is accessing your Artifactory instance and from where. These can help you track potentially malicious requests or processes (such as CI jobs) using expired credentials.
- Requests - This section tracks HTTP response codes, Top 10 IP addresses for uploads and downloads
- Docker - To monitor Dockerhub pull requests users should have a Dockerhub account either paid or free. Free accounts allow up to 200 pull requests per 6 hour window. Various widgets have been added in the new Docker tab under Artifactory to help monitor your Dockerhub pull requests. An alert is also available to enable if desired that will allow you to send emails or add outbound webhooks through configuration to be notified when you exceed the configurable threshold.
- System Metrics - This section tracks CPU Usage, System Memory and Disk Usage metrics
- Heap Metrics - This section tracks Heap Memory and Garbage Collection
- Connection Metrics - This section tracks Database connections and HTTP Connections
The JFrog Xray dashboard is divided into three sections: Logs, Violations, and Metrics.
- Logs - This section provides a summary of access, service and traffic log volumes associated with Xray. Additionally, customers are also able to track various HTTP response codes, HTTP 500 errors, and log errors for greater operational insight
- Violations - This section provides an aggregated summary of all the license violations and security vulnerabilities found by Xray. Information is segmented by watch policies and rules. Trending information is provided on the type and severity of violations over time, as well as insights on the most frequently occurring CVEs and the top impacted artifacts and components.
- Metrics - This section tracks CPU usage, System Memory, Disk Usage, Heap Memory and Database Connections
Log data from JFrog platform logs is translated to pre-defined Common Information Model (CIM) formats compatible with Splunk. This compatibility enables advanced features where users can search and access JFrog log data through the standard data models. For example:
| datamodel Web Web search
| datamodel Change_Analysis All_Changes search
| datamodel Vulnerabilities Vulnerabilities search
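Building on the datamodel searches above, an accelerated tstats query can aggregate the same data more efficiently; the split-by field below is illustrative, and assumes the CIM Web model is populated by the JFrog request logs.

```
| tstats count from datamodel=Web by Web.status
| rename Web.status as http_status
```

A search like this would summarize JFrog request traffic by HTTP status without scanning raw events.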
To try this integration, Splunk users can create a Splunk instance in Kubernetes with the correct ports open by applying the yaml file:
kubectl apply -f k8s/splunk.yaml
This will create a new Splunk instance that can be used as a demo target for sending JFrog logs, violations, and metrics. Follow the setup steps listed above to see data in the dashboards.
- Fluentd - Fluentd Logging Aggregator/Agent
- Splunk - Splunk Logging Platform
- Splunk HEC - Splunk HEC used to upload data into Splunk