Quickstart

Icelake is a fully managed European observability data lakehouse. This guide takes you from zero to “I can see my data in Grafana” in under 5 minutes — no servers on your side to run, nothing to install. You talk to Icelake over the standard Prometheus, Loki, and OpenTelemetry endpoints, and Grafana consumes it back out over the same APIs.

Grafana Metrics Drilldown exploring Home Assistant metrics stored in Icelake

Sign up at app.icelake.eu or contact us if your team needs a pilot environment. Accounts are tenant-scoped and include everything — ingest endpoints, admin UI, pgwire SQL, AI MasterMind, dashboards — out of the box.

In the admin UI, open Data In → API Keys → Create key. You’ll get a client ID and a secret starting with ilk_. Copy the secret now — it’s shown exactly once. The same key authenticates every ingestion and query path.

Export them so the rest of this tutorial picks them up:

```shell
export ICELAKE_CLIENT_ID="your-client-id"
export ICELAKE_API_KEY="ilk_..."
```

Any Prometheus agent, OpenTelemetry Collector, Fluent Bit, or curl can talk to Icelake. The shortest possible smoke test is one curl that pushes a log line into the Loki-compatible endpoint:

Push a log to Icelake:

```shell
curl -X POST https://api.icelake.eu/loki/api/v1/push \
  -u "$ICELAKE_CLIENT_ID:$ICELAKE_API_KEY" \
  -H "Content-Type: application/json" \
  --data-binary @- <<EOF
{"streams":[{"stream":{"app":"quickstart"},"values":[["$(($(date +%s) * 1000000000))","Hello from Icelake Quickstart"]]}]}
EOF
```

A 204 No Content means it landed. For real data flows, point your existing tools at the managed endpoints:
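The odd-looking timestamp in the push payload is deliberate: the Loki push format expects each `values` entry as a `["<timestamp>", "<log line>"]` pair, with the timestamp as a string of nanoseconds since the Unix epoch. A minimal sketch of building that payload in shell (the stream label and message mirror the curl example above):

```shell
# Epoch seconds * 1e9 gives the nanosecond-precision timestamp Loki expects.
ts=$(($(date +%s) * 1000000000))

# Assemble the same streams/values shape the curl example pushes.
payload=$(printf '{"streams":[{"stream":{"app":"quickstart"},"values":[["%s","Hello from Icelake Quickstart"]]}]}' "$ts")
echo "$payload"
```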

| Source | Endpoint | Doc |
| --- | --- | --- |
| Prometheus remote write | https://api.icelake.eu/api/v1/prom/push | Prometheus |
| Loki push API | https://api.icelake.eu/loki/api/v1/push | Loki & LogQL |
| OpenTelemetry OTLP HTTP | https://api.icelake.eu/v1/{logs,metrics,traces} | OpenTelemetry |
| MQTT (TTN or your broker) | configured via the admin UI | MQTT |
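For example, the first row maps onto a standard Prometheus `remote_write` block (a sketch: only the URL and the Basic-auth scheme come from this page, the rest is vanilla Prometheus configuration):

```yaml
# prometheus.yml (fragment)
remote_write:
  - url: https://api.icelake.eu/api/v1/prom/push
    basic_auth:
      username: your-client-id   # the Icelake client ID
      password: ilk_...          # the API key secret
```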

All of them authenticate with the same client ID + ilk_… secret, either as HTTP Basic auth or as the pgwire username/password pair.
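As a sketch of the OTLP path, an OpenTelemetry Collector can export over HTTP with the same credentials via the `basicauth` extension (the endpoint comes from the table above; `basicauth` and `otlphttp` are standard Collector components from collector-contrib, not Icelake-specific, and the receiver side is assumed to be configured elsewhere):

```yaml
# OpenTelemetry Collector config (fragment)
extensions:
  basicauth/icelake:
    client_auth:
      username: your-client-id
      password: ilk_...

exporters:
  otlphttp:
    # The exporter appends /v1/logs, /v1/metrics, /v1/traces itself.
    endpoint: https://api.icelake.eu
    auth:
      authenticator: basicauth/icelake

service:
  extensions: [basicauth/icelake]
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [otlphttp]
```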

Already have a Grafana instance (cloud or internal)? Skip to the datasource config below. Otherwise, this docker-compose.yml plus a provisioning file brings up a local Grafana with Icelake pre-configured as both a Prometheus and a Loki datasource:

docker-compose.yml:

```yaml
services:
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3001:3000"
    environment:
      - ICELAKE_CLIENT_ID=${ICELAKE_CLIENT_ID}
      - ICELAKE_API_KEY=${ICELAKE_API_KEY}
    volumes:
      - ./grafana/provisioning:/etc/grafana/provisioning
```

grafana/provisioning/datasources/icelake.yaml:

```yaml
apiVersion: 1
datasources:
  - name: Icelake Metrics
    type: prometheus
    access: proxy
    url: https://api.icelake.eu/prom
    basicAuth: true
    basicAuthUser: ${ICELAKE_CLIENT_ID}
    secureJsonData:
      basicAuthPassword: ${ICELAKE_API_KEY}
    isDefault: true
  - name: Icelake Logs
    type: loki
    access: proxy
    url: https://api.icelake.eu/loki
    basicAuth: true
    basicAuthUser: ${ICELAKE_CLIENT_ID}
    secureJsonData:
      basicAuthPassword: ${ICELAKE_API_KEY}
```

Run docker compose up -d. Grafana picks up the datasources on startup — no click-through config needed.

Open http://localhost:3001, sign in (default admin / admin), and head to Explore → Metrics. Grafana reads labels and series directly from Icelake’s Prometheus-compatible /prom/api/v1/* endpoints, so Metrics Drilldown, ad-hoc PromQL, and saved dashboards all work out of the box. Switch the datasource to Icelake Logs for LogQL against your ingested logs.