Replace apm-server OTLP endpoint with Otel native processors and Elasticsearch #120

Open · wants to merge 16 commits into `main` · Changes from 15 commits
1 change: 0 additions & 1 deletion .env.override
@@ -17,5 +17,4 @@ KAFKA_SERVICE_DOCKERFILE=./src/kafka/Dockerfile.elastic
# *********************
COLLECTOR_CONTRIB_IMAGE=docker.elastic.co/beats/elastic-agent:8.16.0
OTEL_COLLECTOR_CONFIG=./src/otelcollector/otelcol-elastic-config.yaml
OTEL_COLLECTOR_CONFIG_EXTRAS=./src/otelcollector/otelcol-elastic-config-extras.yaml
ELASTIC_AGENT_OTEL=true
61 changes: 28 additions & 33 deletions .github/README.md
@@ -12,11 +12,11 @@ Additionally, the OpenTelemetry Contrib collector has also been changed to the [

## Docker compose

1. Start a free trial on [Elastic Cloud](https://cloud.elastic.co/) and copy the `endpoint` and `secretToken` from the Elastic APM setup instructions in your Kibana.
1. Open the file `src/otelcollector/otelcol-elastic-config-extras.yaml` in an editor and replace the following two placeholders:
- `YOUR_APM_ENDPOINT_WITHOUT_HTTPS_PREFIX`: your Elastic APM endpoint (*without* `https://` prefix) that *must* also include the port (example: `1234567.apm.us-west2.gcp.elastic-cloud.com:443`).
- `YOUR_APM_SECRET_TOKEN`: your Elastic APM secret token.
1. Start the demo with the following command from the repository's root directory:
1. Start a free trial on [Elastic Cloud](https://cloud.elastic.co/), copy your Elasticsearch `endpoint`, and create an API key in Kibana. These values are used by the [elasticsearch exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/elasticsearchexporter#elasticsearch-exporter) to authenticate and transmit data to your Elasticsearch instance.
2. Open the file `src/otelcollector/otelcol-elastic-config.yaml` in an editor and replace the following two placeholders (a sketch of the relevant exporter section follows these steps):
- `YOUR_ELASTICSEARCH_ENDPOINT`: your Elasticsearch endpoint (*with* `https://` prefix, for example: `https://1234567.us-west2.gcp.elastic-cloud.com:443`).
- `YOUR_ELASTICSEARCH_API_KEY`: your Elasticsearch API key.
3. Start the demo with the following command from the repository's root directory:
```
make start
```
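For reference, the exporter block that holds these two placeholders looks roughly like the sketch below. This is illustrative only: the field names come from the elasticsearch exporter, but the surrounding structure in `otelcol-elastic-config.yaml` may differ.

```
# Illustrative sketch of the elasticsearch exporter section containing the placeholders
# (check otelcol-elastic-config.yaml for the exact structure):
exporters:
  elasticsearch:
    endpoints:
      - YOUR_ELASTICSEARCH_ENDPOINT   # e.g. https://1234567.us-west2.gcp.elastic-cloud.com:443
    api_key: YOUR_ELASTICSEARCH_API_KEY
    logs_dynamic_index:
      enabled: true
    metrics_dynamic_index:
      enabled: true
```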
@@ -27,27 +27,23 @@ Additionally, the OpenTelemetry Contrib collector has also been changed to the [
- Set up [kubectl](https://kubernetes.io/docs/reference/kubectl/).
- Set up [Helm](https://helm.sh/).

### Start the Demo

### Start the Demo (Kubernetes deployment)
1. Set up Elastic Observability on Elastic Cloud.
1. Create a secret in Kubernetes with the following command.
2. Create a secret in Kubernetes with the following command.
```
kubectl create secret generic elastic-secret \
--from-literal=elastic_apm_endpoint='YOUR_APM_ENDPOINT_WITHOUT_HTTPS_PREFIX' \
--from-literal=elastic_apm_secret_token='YOUR_APM_SECRET_TOKEN'
kubectl create secret generic elastic-secret-otel \
--from-literal=elastic_endpoint='YOUR_ELASTICSEARCH_ENDPOINT' \
--from-literal=elastic_api_key='YOUR_ELASTICSEARCH_API_KEY'
```
Don't forget to replace
- `YOUR_APM_ENDPOINT_WITHOUT_HTTPS_PREFIX`: your Elastic APM endpoint (*without* `https://` prefix) that *must* also include the port (example: `1234567.apm.us-west2.gcp.elastic-cloud.com:443`).
- `YOUR_APM_SECRET_TOKEN`: your Elastic APM secret token. Include the `Bearer` or `ApiKey` prefix but not the `Authorization=` part (e.g. `Bearer XXXXXX` or `ApiKey XXXXX`); below is an example:
```
kubectl create secret generic elastic-secret \
--from-literal=elastic_apm_endpoint='12345.apm.us-west2.gcp.elastic-cloud.com:443' \
--from-literal=elastic_apm_secret_token='Bearer 123456789123456YE2'
```
1. Execute the following commands to deploy the OpenTelemetry demo to your Kubernetes cluster:
- `YOUR_ELASTICSEARCH_ENDPOINT`: your Elasticsearch endpoint (*with* `https://` prefix, for example: `https://1234567.us-west2.gcp.elastic-cloud.com:443`).
- `YOUR_ELASTICSEARCH_API_KEY`: your Elasticsearch API key (an equivalent Secret manifest is sketched below).
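If you prefer a declarative approach, a hypothetical Secret manifest equivalent to the command above could look like this sketch (the endpoint and API key values are placeholders; the key names match what the Helm values reference):
```
# Hypothetical equivalent of the kubectl create secret command above,
# applied with `kubectl apply -f`; values are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: elastic-secret-otel
type: Opaque
stringData:
  elastic_endpoint: https://1234567.us-west2.gcp.elastic-cloud.com:443
  elastic_api_key: YOUR_ELASTICSEARCH_API_KEY
```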
3. Execute the following commands to deploy the OpenTelemetry demo to your Kubernetes cluster:
```
# clone this repository
git clone https://github.com/elastic/opentelemetry-demo

# switch to the kubernetes/elastic-helm directory
cd opentelemetry-demo/kubernetes/elastic-helm

@@ -61,30 +57,29 @@
helm install -f deployment.yaml my-otel-demo open-telemetry/opentelemetry-demo
```

#### Kubernetes monitoring
Additionally, this EDOT Collector configuration includes the following components for comprehensive Kubernetes monitoring:
- [K8s Objects Receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/k8sobjectsreceiver): Captures detailed information about Kubernetes objects.
- [K8s Cluster Receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/k8sclusterreceiver): Collects metrics and metadata about the overall cluster state.

This demo already enables cluster-level metrics collection with `clusterMetrics` and
Kubernetes events collection with `kubernetesEvents`, for example:
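The sketch below shows roughly how those presets are enabled in the collector Helm values. The preset names come from the OpenTelemetry Collector Helm chart, but the exact nesting (for example, under an `opentelemetry-collector` block in `deployment.yaml`) is an assumption, so check the file itself:

```
# Illustrative only: preset names from the OpenTelemetry Collector Helm chart;
# the nesting in deployment.yaml may differ.
presets:
  clusterMetrics:
    enabled: true        # adds the k8s_cluster receiver
  kubernetesEvents:
    enabled: true        # adds a k8sobjects receiver watching events
```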
#### Kubernetes monitoring (daemonset)

In order to add Node level metrics collection we can run an additional Otel collector Daemonset with the following:
The `daemonset` EDOT Collector is configured with the following components to monitor node-level metrics and logs, providing detailed insight into individual Kubernetes nodes (a minimal sketch of these receivers follows the list):

1. Create a secret in Kubernetes with the following command.
```
kubectl create secret generic elastic-secret-ds \
--from-literal=elastic_endpoint='YOUR_ELASTICSEARCH_ENDPOINT' \
--from-literal=elastic_api_key='YOUR_ELASTICSEARCH_API_KEY'
```
Don't forget to replace
- `YOUR_ELASTICSEARCH_ENDPOINT`: your Elasticsearch endpoint (*with* `https://` prefix, for example: `https://1234567.us-west2.gcp.elastic-cloud.com:443`).
- `YOUR_ELASTICSEARCH_API_KEY`: your Elasticsearch API key
- [Host Metrics Receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/hostmetrics): Collects system-level metrics such as CPU, memory, and disk usage from the host.
- [Kubelet Stats Receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/kubeletstats): Gathers pod and container metrics directly from the kubelet.
- [Filelog Receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/filelog): Ingests and parses log files from nodes, providing detailed log analysis.
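A minimal sketch of these three receivers is shown below. Intervals, scraper lists, and log paths are illustrative; the authoritative configuration lives in `kubernetes/elastic-helm/daemonset.yaml`.

```
# Minimal, illustrative receiver configuration for node-level monitoring
# (see daemonset.yaml for the real settings):
receivers:
  hostmetrics:
    collection_interval: 10s
    scrapers:
      cpu: {}
      memory: {}
      filesystem: {}
  kubeletstats:
    auth_type: serviceAccount
    endpoint: ${env:K8S_NODE_NAME}:10250
    insecure_skip_verify: true
  filelog:
    include:
      - /var/log/pods/*/*/*.log
    start_at: end
```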

2. Execute the following command to deploy the OpenTelemetry Collector to your Kubernetes cluster, in the same directory `kubernetes/elastic-helm` in this repository.
To deploy the EDOT Collector to your Kubernetes cluster, ensure the `elastic-secret-otel` Kubernetes secret is created (if it doesn't already exist). Then, run the following command from the `kubernetes/elastic-helm` directory in this repository.

```
# deploy the Elastic OpenTelemetry collector distribution through helm install
helm install otel-daemonset open-telemetry/opentelemetry-collector --values daemonset.yaml
```

#### Kubernetes architecture diagram

![Deployment architecture](../kubernetes/elastic-helm/elastic-architecture.png "K8s architecture")

## Explore and analyze the data with Elastic

### Service map
70 changes: 50 additions & 20 deletions kubernetes/elastic-helm/daemonset.yaml
@@ -18,17 +18,20 @@ securityContext:
runAsGroup: 0

extraEnvs:
# Workaround for the "open /mounts" error: https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/35990
- name: HOST_PROC_MOUNTINFO
value: ""
- name: ELASTIC_AGENT_OTEL
value: "true"
- name: ELASTIC_ENDPOINT
valueFrom:
secretKeyRef:
name: elastic-secret-ds
name: elastic-secret-otel
key: elastic_endpoint
- name: ELASTIC_API_KEY
valueFrom:
secretKeyRef:
name: elastic-secret-ds
name: elastic-secret-otel
key: elastic_api_key
- name: K8S_NODE_NAME
valueFrom:
@@ -61,7 +64,7 @@ config:
exporters:
debug:
verbosity: basic
elasticsearch:
elasticsearch/ecs:
endpoints:
- ${env:ELASTIC_ENDPOINT}
api_key: ${env:ELASTIC_API_KEY}
@@ -71,32 +74,55 @@ config:
enabled: true
mapping:
mode: ecs
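# Note added for clarity: two elasticsearch exporters are defined here.
# `elasticsearch/ecs` (above) writes ECS-mapped documents, which the
# elasticinframetrics-processed infrastructure metrics expect, while
# `elasticsearch/otel` (below) keeps the OTel-native mapping for logs and metrics.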
elasticsearch/otel:
endpoints:
- ${env:ELASTIC_ENDPOINT}
api_key: ${env:ELASTIC_API_KEY}
logs_dynamic_index:
enabled: true
metrics_dynamic_index:
enabled: true
mapping:
mode: otel
processors:
batch: {}
elasticinframetrics:
add_system_metrics: true
add_k8s_metrics: true
resourcedetection/eks:
detectors: [env, eks]
drop_original: true
resourcedetection/cluster:
detectors: [env, eks, gcp, aks, k8snode]
timeout: 15s
override: true
k8snode:
auth_type: serviceAccount
eks:
resource_attributes:
k8s.cluster.name:
enabled: true
resourcedetection/gcp:
detectors: [env, gcp]
timeout: 2s
override: true
resource/k8s:
aks:
resource_attributes:
k8s.cluster.name:
enabled: true
resource/k8s: # Resource attributes tailored for services within Kubernetes.
attributes:
- key: service.name
from_attribute: app.label.component
- key: service.name # Set the service.name resource attribute based on the well-known app.kubernetes.io/name label
from_attribute: app.label.name
action: insert
attributes/k8s_logs_dataset:
actions:
- key: data_stream.dataset
value: "kubernetes.container_logs"
- key: service.name # Set the service.name resource attribute based on the k8s.container.name attribute
from_attribute: k8s.container.name
action: insert
- key: app.label.name # Delete app.label.name attribute previously used for service.name
action: delete
- key: service.version # Set the service.version resource attribute based on the well-known app.kubernetes.io/version label
from_attribute: app.label.version
action: insert
- key: app.label.version # Delete app.label.version attribute previously used for service.version
action: delete
resource/hostname:
attributes:
- key: host.name
from_attribute: k8s.node.name
action: upsert
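# Note added for clarity: resource/hostname (above) copies k8s.node.name into
# host.name so node-level data is attributed to the node's host in Elastic.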
attributes/dataset:
actions:
@@ -311,12 +337,16 @@
pipelines:
logs:
receivers: [filelog]
processors: [batch, k8sattributes, resourcedetection/system, resourcedetection/eks, resourcedetection/gcp, resource/demo, resource/k8s, resource/cloud, attributes/k8s_logs_dataset]
exporters: [debug, elasticsearch]
processors: [batch, k8sattributes, resourcedetection/cluster, resource/hostname, resource/demo, resource/k8s, resource/cloud]
exporters: [debug, elasticsearch/otel]
metrics:
receivers: [hostmetrics, kubeletstats]
processors: [batch, k8sattributes, elasticinframetrics, resourcedetection/system, resource/demo, resourcedetection/eks, resourcedetection/gcp, resource/k8s, resource/cloud, attributes/dataset, resource/process]
exporters: [debug, elasticsearch]
processors: [batch, k8sattributes, elasticinframetrics, resourcedetection/cluster, resource/hostname, resource/demo, resource/k8s, resource/cloud, attributes/dataset, resource/process]
exporters: [debug, elasticsearch/ecs]
metrics/otel:
receivers: [kubeletstats]
processors: [batch, k8sattributes, resourcedetection/cluster, resource/hostname, resource/demo, resource/k8s, resource/cloud]
exporters: [debug, elasticsearch/otel]
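# Note added for clarity: the `metrics` pipeline exports ECS-mapped infrastructure
# metrics (via elasticinframetrics), while `metrics/otel` re-exports kubeletstats
# data with OTel-native mapping; logs also use the OTel-native exporter.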
traces: null
telemetry:
metrics: