Commit 895a669c authored by 徐泽意

update

parent 6f4139fb
apiVersion: v1
appVersion: 1.1.1
description: Apache Storm is a free and open source distributed realtime computation
system.
home: http://storm.apache.org/
icon: http://storm.apache.org/images/logo.png
keywords:
- storm
- zookeeper
maintainers:
- email: jorwalk@gmail.com
name: jorwalk
- email: stackedsax@users.noreply.github.com
name: stackedsax
name: storm
sources:
- https://github.com/apache/storm
version: 1.0.2
approvers:
- jorwalk
- stackedsax
reviewers:
- jorwalk
- stackedsax
# storm
## Storm
[Apache Storm](http://storm.apache.org/) is a free and open source distributed realtime computation system. Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing. Storm is simple, can be used with any programming language, and is a lot of fun to use!
### Prerequisites
This example assumes you have a Kubernetes cluster installed and
running, and that you have installed the `kubectl` command line
tool somewhere in your path. Please see the [getting
started guide](https://kubernetes.io/docs/tutorials/kubernetes-basics/) for installation
instructions for your platform.
### Installing the Chart
To install the chart with the release name `my-storm`:
```bash
$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
$ helm install --name my-storm incubator/storm
```
## Configuration
The following table lists the configurable parameters of the Storm chart and their default values.
### Nimbus
| Parameter | Description | Default |
| --------------------------------- | --------------------------- | ------------------- |
| `nimbus.replicaCount` | Number of replicas | 1 |
| `nimbus.image.repository` | Container image name | storm |
| `nimbus.image.tag` | Container image version | 1.1.1 |
| `nimbus.image.pullPolicy` | The default pull policy | IfNotPresent |
| `nimbus.service.name` | Service name | nimbus |
| `nimbus.service.type` | Service Type | ClusterIP |
| `nimbus.service.port` | Service Port | 6627 |
| `nimbus.resources.limits.cpu` | Compute resources | 100m |
### Supervisor
| Parameter | Description | Default |
| --------------------------------- | --------------------------- | ------------------- |
| `supervisor.replicaCount` | Number of replicas | 3 |
| `supervisor.image.repository` | Container image name | storm |
| `supervisor.image.tag` | Container image version | 1.1.1 |
| `supervisor.image.pullPolicy` | The default pull policy | IfNotPresent |
| `supervisor.service.name` | Service Name | supervisor |
| `supervisor.service.port` | Service Port | 6700 |
| `supervisor.resources.limits.cpu` | Compute resources | 200m |
### User Interface
| Parameter | Description | Default |
| --------------------------------- | --------------------------- | ------------------- |
| `ui.enabled` | Enable the UI | true |
| `ui.replicaCount` | Number of replicas | 1 |
| `ui.image.repository` | Container image name | storm |
| `ui.image.tag` | UI image version | 1.1.1 |
| `ui.image.pullPolicy` | The default pull policy | IfNotPresent |
| `ui.service.type` | UI Service Type | ClusterIP |
| `ui.service.name` | UI service name | ui |
| `ui.service.port` | UI service port | 8080 |
| `ui.resources.limits.cpu` | Compute resources | 100m |
### Zookeeper
| Parameter | Description | Default |
| --------------------------------- | --------------------------- | ------------------- |
| `zookeeper.enabled` | Enable Zookeeper | true |
| `zookeeper.service.name` | Service name | zookeeper |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
```bash
$ helm install --name my-release -f values.yaml incubator/storm
```
> **Tip**: You can use the default [values.yaml](values.yaml) as a reference for all available options.
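As a sketch, the parameters from the tables above can be collected into a custom values file. The file name and the specific override values below are illustrative, not defaults:

```yaml
# my-values.yaml -- illustrative overrides built from the parameter tables above
nimbus:
  replicaCount: 1
  image:
    pullPolicy: IfNotPresent
supervisor:
  replicaCount: 5
ui:
  enabled: true
  service:
    type: NodePort
zookeeper:
  enabled: true
```

You would then install with `helm install --name my-storm -f my-values.yaml incubator/storm`.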
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
apiVersion: v1
appVersion: 3.4.10
description: Centralized service for maintaining configuration information, naming,
providing distributed synchronization, and providing group services.
home: https://zookeeper.apache.org/
icon: https://zookeeper.apache.org/images/zookeeper_small.gif
maintainers:
- email: lachlan.evenson@microsoft.com
name: lachie83
- email: owensk@google.com
name: kow3ns
name: zookeeper
sources:
- https://github.com/apache/zookeeper
- https://github.com/kubernetes/contrib/tree/master/statefulsets/zookeeper
version: 1.3.1
approvers:
- lachie83
- kow3ns
reviewers:
- lachie83
- kow3ns
# incubator/zookeeper
This helm chart provides an implementation of the ZooKeeper [StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) found in Kubernetes Contrib [Zookeeper StatefulSet](https://github.com/kubernetes/contrib/tree/master/statefulsets/zookeeper).
## Prerequisites
* Kubernetes 1.6+
* PersistentVolume support on the underlying infrastructure
* A dynamic provisioner for the PersistentVolumes
* Familiarity with [Apache ZooKeeper 3.4.x](https://zookeeper.apache.org/doc/current/)
## Chart Components
This chart will do the following:
* Create a fixed size ZooKeeper ensemble using a [StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/).
* Create a [PodDisruptionBudget](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-disruption-budget/) so kubectl drain will respect the Quorum size of the ensemble.
* Create a [Headless Service](https://kubernetes.io/docs/concepts/services-networking/service/) to control the domain of the ZooKeeper ensemble.
* Create a Service configured to connect to an available ZooKeeper instance on the configured client port.
* Optionally apply a [Pod Anti-Affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature) to spread the ZooKeeper ensemble across nodes.
* Optionally start JMX Exporter and ZooKeeper Exporter containers inside ZooKeeper pods.
* Optionally create a job which creates ZooKeeper chroots (e.g. `/kafka1`).
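As a sketch, the optional components above map onto values keys used by this chart's templates; the chroot path shown here is only an example:

```yaml
exporters:
  jmx:
    enabled: true    # start the JMX Exporter container in each pod
  zookeeper:
    enabled: true    # start the ZooKeeper Exporter container in each pod
jobs:
  chroots:
    enabled: true    # run the post-install/post-upgrade chroot-creation job
    config:
      create:
      - /kafka1      # chroots to create (example path)
```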
## Installing the Chart
You can install the chart with the release name `zookeeper` as below.
```console
$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
$ helm install --name zookeeper incubator/zookeeper
```
If you do not specify a name, helm will select a name for you.
### Installed Components
You can use `helm status` to view all of the installed components.
```console
$ helm status zookeeper
NAME: zookeeper
LAST DEPLOYED: Wed Apr 11 17:09:48 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1beta1/PodDisruptionBudget
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
zookeeper N/A 1 1 2m
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
zookeeper-headless ClusterIP None <none> 2181/TCP,3888/TCP,2888/TCP 2m
zookeeper ClusterIP 10.98.179.165 <none> 2181/TCP 2m
==> v1beta1/StatefulSet
NAME DESIRED CURRENT AGE
zookeeper 3 3 2m
```
1. `statefulsets/zookeeper` is the StatefulSet created by the chart.
1. `po/zookeeper-<0|1|2>` are the Pods created by the StatefulSet. Each Pod has a single container running a ZooKeeper server.
1. `svc/zookeeper-headless` is the Headless Service used to control the network domain of the ZooKeeper ensemble.
1. `svc/zookeeper` is a Service that can be used by clients to connect to an available ZooKeeper server.
## Configuration
You can specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
```console
$ helm install --name my-release -f values.yaml incubator/zookeeper
```
## Default Values
- You can find all user-configurable settings, their defaults and commentary about them in [values.yaml](values.yaml).
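For example, a minimal override file might look like the following; the specific values are illustrative, see [values.yaml](values.yaml) for the authoritative defaults:

```yaml
replicaCount: 5    # ensemble size
persistence:
  enabled: true
  size: 10Gi       # per-server PersistentVolumeClaim size
env:
  JMXPORT: 1099    # JMX port, also read by the optional JMX exporter config
```

You would then install with `helm install --name my-release -f values.yaml incubator/zookeeper`.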
## Deep Dive
## Image Details
The image used for this chart is based on Ubuntu 16.04 LTS. This image is larger than Alpine or BusyBox, but it provides glibc, rather than uClibc or musl, and a JVM release that is built against it. You can easily convert this chart to run against a smaller image with a JVM that is built against that image's libc. However, as far as we know, no Hadoop vendor supports, or has verified, ZooKeeper running on such a JVM.
## JVM Details
The Java Virtual Machine used for this chart is the OpenJDK JVM 8u111 JRE (headless).
## ZooKeeper Details
The ZooKeeper version is the latest stable version (3.4.10). The distribution is installed into `/opt/zookeeper-3.4.10`. This directory is symbolically linked to `/opt/zookeeper`. Symlinks are created to simulate an RPM installation into `/usr`.
## Failover
You can test failover by killing the leader. Insert a key:
```console
$ kubectl exec zookeeper-0 -- /opt/zookeeper/bin/zkCli.sh create /foo bar;
$ kubectl exec zookeeper-2 -- /opt/zookeeper/bin/zkCli.sh get /foo;
```
Watch existing members:
```console
$ kubectl run --attach bbox --image=busybox --restart=Never -- sh -c 'while true; do for i in 0 1 2; do echo zk-${i} $(echo stats | nc <pod-name>-${i}.<headless-service-name>:2181 | grep Mode); sleep 1; done; done';
zk-2 Mode: follower
zk-0 Mode: follower
zk-1 Mode: leader
zk-2 Mode: follower
```
Delete Pods and wait for the StatefulSet controller to bring them back up:
```console
$ kubectl delete po -l app=zookeeper
$ kubectl get po --watch-only
NAME READY STATUS RESTARTS AGE
zookeeper-0 0/1 Running 0 35s
zookeeper-0 1/1 Running 0 50s
zookeeper-1 0/1 Pending 0 0s
zookeeper-1 0/1 Pending 0 0s
zookeeper-1 0/1 ContainerCreating 0 0s
zookeeper-1 0/1 Running 0 19s
zookeeper-1 1/1 Running 0 40s
zookeeper-2 0/1 Pending 0 0s
zookeeper-2 0/1 Pending 0 0s
zookeeper-2 0/1 ContainerCreating 0 0s
zookeeper-2 0/1 Running 0 19s
zookeeper-2 1/1 Running 0 41s
```
Check the previously inserted key:
```console
$ kubectl exec zookeeper-1 -- /opt/zookeeper/bin/zkCli.sh get /foo
ionid = 0x354887858e80035, negotiated timeout = 30000
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
bar
```
## Scaling
ZooKeeper cannot be safely scaled in versions prior to 3.5.x. This chart currently uses 3.4.x. There are manual procedures for scaling a 3.4.x ensemble, but as noted in the [ZooKeeper 3.5.2 documentation](https://zookeeper.apache.org/doc/r3.5.2-alpha/zookeeperReconfig.html) these procedures require a rolling restart, are known to be error prone, and often result in data loss.
While ZooKeeper 3.5.x does allow for dynamic ensemble reconfiguration (including scaling membership), the current status of the release is still alpha, and 3.5.x is therefore not recommended for production use.
## Limitations
* StatefulSet and PodDisruptionBudget are beta resources.
* Only supports storage options that have backends for persistent volume claims.
Thank you for installing ZooKeeper on your Kubernetes cluster. More information
about ZooKeeper can be found at https://zookeeper.apache.org/doc/current/
Your connection string should look like:
{{ template "zookeeper.fullname" . }}-0.{{ template "zookeeper.fullname" . }}-headless:{{ .Values.service.ports.client.port }},{{ template "zookeeper.fullname" . }}-1.{{ template "zookeeper.fullname" . }}-headless:{{ .Values.service.ports.client.port }},...
You can also use the client service {{ template "zookeeper.fullname" . }}:{{ .Values.service.ports.client.port }} to connect to an available ZooKeeper server.
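For example, for a three-server release named `zookeeper` using the default client port `2181`, the connection string would look like:

```
zookeeper-0.zookeeper-headless:2181,zookeeper-1.zookeeper-headless:2181,zookeeper-2.zookeeper-headless:2181
```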
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "zookeeper.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "zookeeper.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "zookeeper.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- if .Values.exporters.jmx.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-jmx-exporter
labels:
app: {{ template "zookeeper.name" . }}
chart: {{ template "zookeeper.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
data:
config.yml: |-
hostPort: 127.0.0.1:{{ .Values.env.JMXPORT }}
lowercaseOutputName: {{ .Values.exporters.jmx.config.lowercaseOutputName }}
rules:
{{ .Values.exporters.jmx.config.rules | toYaml | indent 6 }}
ssl: false
startDelaySeconds: {{ .Values.exporters.jmx.config.startDelaySeconds }}
{{- end }}
{{- if .Values.jobs.chroots.enabled }}
{{- $root := . }}
{{- $job := .Values.jobs.chroots }}
apiVersion: batch/v1
kind: Job
metadata:
name: {{ template "zookeeper.fullname" . }}-chroots
annotations:
"helm.sh/hook": post-install,post-upgrade
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": hook-succeeded
labels:
app: {{ template "zookeeper.name" . }}
chart: {{ template "zookeeper.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: jobs
job: chroots
spec:
activeDeadlineSeconds: {{ $job.activeDeadlineSeconds }}
backoffLimit: {{ $job.backoffLimit }}
completions: {{ $job.completions }}
parallelism: {{ $job.parallelism }}
template:
metadata:
labels:
app: {{ template "zookeeper.name" . }}
release: {{ .Release.Name }}
component: jobs
job: chroots
spec:
restartPolicy: {{ $job.restartPolicy }}
{{- if .Values.priorityClassName }}
priorityClassName: "{{ .Values.priorityClassName }}"
{{- end }}
containers:
- name: main
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- /bin/bash
- -o
- pipefail
- -euc
{{- $port := .Values.service.ports.client.port }}
- >
sleep 15;
export SERVER={{ template "zookeeper.fullname" $root }}:{{ $port }};
{{- range $job.config.create }}
echo '==> {{ . }}';
echo '====> Create chroot if it does not exist.';
zkCli.sh -server {{ template "zookeeper.fullname" $root }}:{{ $port }} get {{ . }} 2>&1 >/dev/null | grep 'cZxid'
|| zkCli.sh -server {{ template "zookeeper.fullname" $root }}:{{ $port }} create {{ . }} "";
echo '====> Confirm chroot exists.';
zkCli.sh -server {{ template "zookeeper.fullname" $root }}:{{ $port }} get {{ . }} 2>&1 >/dev/null | grep 'cZxid';
echo '====> Chroot exists.';
{{- end }}
env:
{{- range $key, $value := $job.env }}
- name: {{ $key | upper | replace "." "_" }}
value: {{ $value | quote }}
{{- end }}
resources:
{{ toYaml $job.resources | indent 12 }}
{{- end -}}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: {{ template "zookeeper.fullname" . }}
labels:
app: {{ template "zookeeper.name" . }}
chart: {{ template "zookeeper.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: server
spec:
selector:
matchLabels:
app: {{ template "zookeeper.name" . }}
release: {{ .Release.Name }}
component: server
{{ toYaml .Values.podDisruptionBudget | indent 2 }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "zookeeper.fullname" . }}-headless
labels:
app: {{ template "zookeeper.name" . }}
chart: {{ template "zookeeper.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- if .Values.headless.annotations }}
annotations:
{{ .Values.headless.annotations | toYaml | trimSuffix "\n" | indent 4 }}
{{- end }}
spec:
clusterIP: None
ports:
{{- range $key, $port := .Values.ports }}
- name: {{ $key }}
port: {{ $port.containerPort }}
targetPort: {{ $key }}
protocol: {{ $port.protocol }}
{{- end }}
selector:
app: {{ template "zookeeper.name" . }}
release: {{ .Release.Name }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "zookeeper.fullname" . }}
labels:
app: {{ template "zookeeper.name" . }}
chart: {{ template "zookeeper.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
annotations:
{{- with .Values.service.annotations }}
{{ toYaml . | indent 4 }}
{{- end }}
spec:
type: {{ .Values.service.type }}
ports:
{{- range $key, $value := .Values.service.ports }}
- name: {{ $key }}
{{ toYaml $value | indent 6 }}
{{- end }}
selector:
app: {{ template "zookeeper.name" . }}
release: {{ .Release.Name }}
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: {{ template "zookeeper.fullname" . }}
labels:
app: {{ template "zookeeper.name" . }}
chart: {{ template "zookeeper.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: server
spec:
serviceName: {{ template "zookeeper.fullname" . }}-headless
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ template "zookeeper.name" . }}
release: {{ .Release.Name }}
component: server
updateStrategy:
{{ toYaml .Values.updateStrategy | indent 4 }}
template:
metadata:
labels:
app: {{ template "zookeeper.name" . }}
release: {{ .Release.Name }}
component: server
{{- if .Values.podLabels }}
## Custom pod labels
{{- range $key, $value := .Values.podLabels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
annotations:
{{- if .Values.podAnnotations }}
## Custom pod annotations
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
{{- if .Values.schedulerName }}
schedulerName: "{{ .Values.schedulerName }}"
{{- end }}
securityContext:
{{ toYaml .Values.securityContext | indent 8 }}
{{- if .Values.priorityClassName }}
priorityClassName: "{{ .Values.priorityClassName }}"
{{- end }}
containers:
- name: zookeeper
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- with .Values.command }}
command: {{ range . }}
- {{ . | quote }}
{{- end }}
{{- end }}
ports:
{{- range $key, $port := .Values.ports }}
- name: {{ $key }}
{{ toYaml $port | indent 14 }}
{{- end }}
livenessProbe:
{{ toYaml .Values.livenessProbe | indent 12 }}
readinessProbe:
{{ toYaml .Values.readinessProbe | indent 12 }}
env:
- name: ZK_REPLICAS
value: {{ .Values.replicaCount | quote }}
{{- range $key, $value := .Values.env }}
- name: {{ $key | upper | replace "." "_" }}
value: {{ $value | quote }}
{{- end }}
{{- range $secret := .Values.secrets }}
{{- range $key := $secret.keys }}
- name: {{ (print $secret.name "_" $key) | upper }}
valueFrom:
secretKeyRef:
name: {{ $secret.name }}
key: {{ $key }}
{{- end }}
{{- end }}
resources:
{{ toYaml .Values.resources | indent 12 }}
volumeMounts:
- name: data
mountPath: /var/lib/zookeeper
{{- range $secret := .Values.secrets }}
{{- if $secret.mountPath }}
{{- range $key := $secret.keys }}
- name: {{ $.Release.Name }}-{{ $secret.name }}
mountPath: {{ $secret.mountPath }}/{{ $key }}
subPath: {{ $key }}
readOnly: true
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.exporters.jmx.enabled }}
- name: jmx-exporter
image: "{{ .Values.exporters.jmx.image.repository }}:{{ .Values.exporters.jmx.image.tag }}"
imagePullPolicy: {{ .Values.exporters.jmx.image.pullPolicy }}
ports:
{{- range $key, $port := .Values.exporters.jmx.ports }}
- name: {{ $key }}
{{ toYaml $port | indent 14 }}
{{- end }}
livenessProbe:
{{ toYaml .Values.exporters.jmx.livenessProbe | indent 12 }}
readinessProbe:
{{ toYaml .Values.exporters.jmx.readinessProbe | indent 12 }}
env:
- name: SERVICE_PORT
value: {{ .Values.exporters.jmx.ports.jmxxp.containerPort | quote }}
{{- with .Values.exporters.jmx.env }}
{{- range $key, $value := . }}
- name: {{ $key | upper | replace "." "_" }}
value: {{ $value | quote }}
{{- end }}
{{- end }}
resources:
{{ toYaml .Values.exporters.jmx.resources | indent 12 }}
volumeMounts:
- name: config-jmx-exporter
mountPath: /opt/jmx_exporter/config.yml
subPath: config.yml
{{- end }}
{{- if .Values.exporters.zookeeper.enabled }}
- name: zookeeper-exporter
image: "{{ .Values.exporters.zookeeper.image.repository }}:{{ .Values.exporters.zookeeper.image.tag }}"
imagePullPolicy: {{ .Values.exporters.zookeeper.image.pullPolicy }}
args:
- -bind-addr=:{{ .Values.exporters.zookeeper.ports.zookeeperxp.containerPort }}
- -metrics-path={{ .Values.exporters.zookeeper.path }}
- -zookeeper=localhost:{{ .Values.ports.client.containerPort }}
- -log-level={{ .Values.exporters.zookeeper.config.logLevel }}
- -reset-on-scrape={{ .Values.exporters.zookeeper.config.resetOnScrape }}
ports:
{{- range $key, $port := .Values.exporters.zookeeper.ports }}
- name: {{ $key }}
{{ toYaml $port | indent 14 }}
{{- end }}
livenessProbe:
{{ toYaml .Values.exporters.zookeeper.livenessProbe | indent 12 }}
readinessProbe:
{{ toYaml .Values.exporters.zookeeper.readinessProbe | indent 12 }}
env:
{{- range $key, $value := .Values.exporters.zookeeper.env }}
- name: {{ $key | upper | replace "." "_" }}
value: {{ $value | quote }}
{{- end }}
resources:
{{ toYaml .Values.exporters.zookeeper.resources | indent 12 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
volumes:
{{- range .Values.secrets }}
- name: {{ $.Release.Name }}-{{ .name }}
secret:
secretName: {{ .name }}
{{- end }}
{{- if .Values.exporters.jmx.enabled }}
- name: config-jmx-exporter
configMap:
name: {{ .Release.Name }}-jmx-exporter
{{- end }}
{{- if not .Values.persistence.enabled }}
- name: data
emptyDir: {}
{{- end }}
{{- if .Values.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- {{ .Values.persistence.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.size | quote }}
{{- if .Values.persistence.storageClass }}
{{- if (eq "-" .Values.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistence.storageClass }}"
{{- end }}
{{- end }}
{{- end }}
<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<configuration monitorInterval="60" shutdownHook="disable">
<properties>
<property name="pattern">%d{yyyy-MM-dd HH:mm:ss.SSS} %c{1.} %t [%p] %msg%n</property>
</properties>
<appenders>
<Console name="STDOUT" target="SYSTEM_OUT">
<PatternLayout>
<pattern>${pattern}</pattern>
</PatternLayout>
</Console>
<Console name="STDERR" target="SYSTEM_ERR">
<PatternLayout>
<pattern>${pattern}</pattern>
</PatternLayout>
</Console>
<RollingFile name="A1" immediateFlush="false"
fileName="${sys:storm.log.dir}/${sys:logfile.name}"
filePattern="${sys:storm.log.dir}/${sys:logfile.name}.%i.gz">
<PatternLayout>
<pattern>${pattern}</pattern>
</PatternLayout>
<Policies>
<SizeBasedTriggeringPolicy size="100 MB"/> <!-- Or every 100 MB -->
</Policies>
<DefaultRolloverStrategy max="9"/>
</RollingFile>
<RollingFile name="WEB-ACCESS" immediateFlush="false"
fileName="${sys:storm.log.dir}/access-web-${sys:daemon.name}.log"
filePattern="${sys:storm.log.dir}/access-web-${sys:daemon.name}.log.%i.gz">
<PatternLayout>
<pattern>${pattern}</pattern>
</PatternLayout>
<Policies>
<SizeBasedTriggeringPolicy size="100 MB"/> <!-- Or every 100 MB -->
</Policies>
<DefaultRolloverStrategy max="9"/>
</RollingFile>
<RollingFile name="THRIFT-ACCESS" immediateFlush="false"
fileName="${sys:storm.log.dir}/access-${sys:logfile.name}"
filePattern="${sys:storm.log.dir}/access-${sys:logfile.name}.%i.gz">
<PatternLayout>
<pattern>${pattern}</pattern>
</PatternLayout>
<Policies>
<SizeBasedTriggeringPolicy size="100 MB"/> <!-- Or every 100 MB -->
</Policies>
<DefaultRolloverStrategy max="9"/>
</RollingFile>
<RollingFile name="METRICS"
fileName="${sys:storm.log.dir}/${sys:logfile.name}.metrics"
filePattern="${sys:storm.log.dir}/${sys:logfile.name}.metrics.%i.gz">
<PatternLayout>
<pattern>${patternMetrics}</pattern>
</PatternLayout>
<Policies>
<SizeBasedTriggeringPolicy size="2 MB"/>
</Policies>
<DefaultRolloverStrategy max="9"/>
</RollingFile>
<Syslog name="syslog" format="RFC5424" charset="UTF-8" host="localhost" port="514"
protocol="UDP" appName="[${sys:daemon.name}]" mdcId="mdc" includeMDC="true"
facility="LOCAL5" enterpriseNumber="18060" newLine="true" exceptionPattern="%rEx{full}"
messageId="[${sys:user.name}:S0]" id="storm" immediateFlush="true" immediateFail="true"/>
</appenders>
<loggers>
<Logger name="org.apache.storm.logging.filters.AccessLoggingFilter" level="info" additivity="false">
<AppenderRef ref="WEB-ACCESS"/>
<AppenderRef ref="syslog"/>
</Logger>
<Logger name="org.apache.storm.logging.ThriftAccessLogger" level="info" additivity="false">
<AppenderRef ref="THRIFT-ACCESS"/>
<AppenderRef ref="syslog"/>
</Logger>
<Logger name="org.apache.storm.metric.LoggingClusterMetricsConsumer" level="info" additivity="false">
<appender-ref ref="METRICS"/>
</Logger>
<root level="info"> <!-- We log everything -->
<appender-ref ref="STDERR"/>
<appender-ref ref="STDOUT"/>
<appender-ref ref="A1"/>
<appender-ref ref="syslog"/>
</root>
</loggers>
</configuration>
dependencies:
- name: zookeeper
repository: https://kubernetes-charts-incubator.storage.googleapis.com/
version: 1.3.1
digest: sha256:ae6ba70dbd6645a7a9dcea6363c9870bba66d72f385796a523adee41974f6f4d
generated: "2019-06-11T14:40:14.989855-07:00"
dependencies:
- name: zookeeper
version: ~1.3.1
repository: https://kubernetes-charts-incubator.storage.googleapis.com/
condition: zookeeper.enabled
1. Get the Storm UI URL by running these commands:
{{- if contains "NodePort" .Values.ui.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "storm.ui.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.ui.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of it by running 'kubectl get svc -w {{ template "storm.ui.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "storm.ui.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ .Values.ui.service.externalPort }}
{{- else if contains "ClusterIP" .Values.ui.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "storm.ui.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:{{ .Values.ui.service.port }} -n {{ .Release.Namespace }}
{{- end }}
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "storm.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "storm.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "storm.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "storm.nimbus.name" -}}
{{- printf "%s-%s" (include "storm.name" .) .Values.nimbus.service.name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a fully qualified nimbus name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "storm.nimbus.fullname" -}}
{{- $name := default .Chart.Name .Values.nimbus.service.name -}}
{{- printf "%s-%s" (include "storm.fullname" .) $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "storm.supervisor.name" -}}
{{- printf "%s-%s" (include "storm.name" .) .Values.supervisor.service.name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a fully qualified supervisor name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "storm.supervisor.fullname" -}}
{{- $name := default .Chart.Name .Values.supervisor.service.name -}}
{{- printf "%s-%s" (include "storm.fullname" .) $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "storm.ui.name" -}}
{{- printf "%s-%s" (include "storm.name" .) .Values.ui.service.name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a fully qualified ui name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "storm.ui.fullname" -}}
{{- $name := default .Chart.Name .Values.ui.service.name -}}
{{- printf "%s-%s" (include "storm.fullname" .) $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a fully qualified zookeeper name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "storm.zookeeper.fullname" -}}
{{- $name := .Values.zookeeper.service.name -}}
{{- printf "%s-%s" (include "storm.fullname" .) $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "storm.logging.name" -}}
{{- printf "%s-logging" (include "storm.fullname" .) | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Override the zookeeper service name for the zookeeper chart so that both charts reference the same zookeeper service name.
*/}}
{{- define "zookeeper.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s-%s" .Release.Name .Values.stormName $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "storm.nimbus.fullname" . }}
labels:
chart: {{ template "storm.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
data:
storm.yaml: |-
########### These MUST be filled in for a storm configuration
storm.zookeeper.servers:
- {{ template "storm.zookeeper.fullname" . }}
nimbus.seeds:
- {{ template "storm.nimbus.fullname" . }}
storm.local.hostname: {{ template "storm.nimbus.fullname" . }}
storm.log4j2.conf.dir: "/log4j2"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "storm.supervisor.fullname" . }}
labels:
chart: {{ template "storm.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
data:
storm.yaml: |-
########### These MUST be filled in for a storm configuration
storm.zookeeper.servers:
- {{ template "storm.zookeeper.fullname" . }}
nimbus.seeds:
- {{ template "storm.nimbus.fullname" . }}
storm.local.hostname: {{ template "storm.supervisor.fullname" . }}
storm.log4j2.conf.dir: "/log4j2"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "storm.logging.name" . }}
labels:
chart: {{ template "storm.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
data:
{{- $files := .Files }}
{{- range tuple "cluster.xml" "worker.xml" }}
{{ . }}: |-
{{ $files.Get . | indent 4 }}
{{- end }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "storm.ui.fullname" . }}
labels:
chart: {{ template "storm.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
data:
storm.yaml: |-
########### These MUST be filled in for a storm configuration
storm.zookeeper.servers:
- {{ template "storm.zookeeper.fullname" . }}
nimbus.seeds:
- {{ template "storm.nimbus.fullname" . }}
storm.local.hostname: {{ template "storm.ui.fullname" . }}
storm.log4j2.conf.dir: "/log4j2"
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "storm.nimbus.fullname" . }}
labels:
app: {{ template "storm.nimbus.name" . }}
chart: {{ template "storm.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.nimbus.replicaCount }}
selector:
matchLabels:
app: {{ template "storm.nimbus.name" . }}
release: {{ .Release.Name }}
template:
metadata:
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
labels:
app: {{ template "storm.nimbus.name" . }}
release: {{ .Release.Name }}
spec:
initContainers:
- name: init-{{ template "storm.zookeeper.fullname" . }}
image: busybox
command: ["sh", "-c", "until nslookup {{ template "storm.zookeeper.fullname" . }}; do echo waiting for {{ template "storm.zookeeper.fullname" . }}; sleep 2; done;"]
containers:
- name: {{ .Values.nimbus.service.name }}
image: "{{ .Values.nimbus.image.repository }}:{{ .Values.nimbus.image.tag }}"
imagePullPolicy: {{ .Values.nimbus.image.pullPolicy }}
command: ["storm", "nimbus"]
ports:
- containerPort: {{ .Values.nimbus.service.port }}
resources:
{{ toYaml .Values.nimbus.resources | indent 10 }}
volumeMounts:
- mountPath: "/conf"
name: storm-configmap
- mountPath: "/log4j2"
name: storm-logging-config
volumes:
- name: storm-configmap
configMap:
name: {{ template "storm.nimbus.fullname" . }}
- name: storm-logging-config
configMap:
name: {{ template "storm.logging.name" . }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "storm.nimbus.fullname" . }}
labels:
chart: {{ template "storm.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
type: {{ .Values.nimbus.service.type }}
ports:
- port: {{ .Values.nimbus.service.port }}
name: {{ .Values.nimbus.service.name }}
selector:
app: {{ template "storm.nimbus.name" . }}
release: {{ .Release.Name }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "storm.supervisor.fullname" . }}
labels:
app: {{ template "storm.supervisor.name" . }}
chart: {{ template "storm.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.supervisor.replicaCount }}
selector:
matchLabels:
app: {{ template "storm.supervisor.name" . }}
release: {{ .Release.Name }}
template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
      labels:
app: {{ template "storm.supervisor.name" . }}
release: {{ .Release.Name }}
spec:
initContainers:
- name: init-{{ template "storm.zookeeper.fullname" . }}
image: busybox
command: ["sh", "-c", "until nslookup {{ template "storm.zookeeper.fullname" . }}; do echo waiting for {{ template "storm.zookeeper.fullname" . }}; sleep 2; done;"]
- name: init-{{ template "storm.nimbus.fullname" . }}
image: busybox
command: ["sh", "-c", "until nslookup {{ template "storm.nimbus.fullname" . }}; do echo waiting for {{ template "storm.nimbus.fullname" . }}; sleep 2; done;"]
containers:
- name: {{ .Values.supervisor.service.name }}
image: "{{ .Values.supervisor.image.repository }}:{{ .Values.supervisor.image.tag }}"
imagePullPolicy: {{ .Values.supervisor.image.pullPolicy }}
command: ["storm", "supervisor"]
ports:
- containerPort: {{ .Values.supervisor.service.port }}
resources:
{{ toYaml .Values.supervisor.resources | indent 10 }}
volumeMounts:
- mountPath: "/conf"
name: storm-configmap
- mountPath: "/log4j2"
name: storm-logging-config
volumes:
- name: storm-configmap
configMap:
name: {{ template "storm.supervisor.fullname" . }}
- name: storm-logging-config
configMap:
name: {{ template "storm.logging.name" . }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "storm.supervisor.fullname" . }}
labels:
chart: {{ template "storm.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
ports:
- port: {{ .Values.supervisor.service.port }}
name: {{ .Values.supervisor.service.name }}
selector:
app: {{ template "storm.supervisor.name" . }}
release: {{ .Release.Name }}
{{- if .Values.ui.enabled -}}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "storm.ui.fullname" . }}
labels:
app: {{ template "storm.ui.name" . }}
chart: {{ template "storm.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.ui.replicaCount }}
selector:
matchLabels:
app: {{ template "storm.ui.name" . }}
release: {{ .Release.Name }}
template:
metadata:
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
labels:
app: {{ template "storm.ui.name" . }}
release: {{ .Release.Name }}
spec:
initContainers:
- name: init-{{ template "storm.zookeeper.fullname" . }}
image: busybox
command: ["sh", "-c", "until nslookup {{ template "storm.zookeeper.fullname" . }}; do echo waiting for {{ template "storm.zookeeper.fullname" . }}; sleep 2; done;"]
- name: init-{{ template "storm.nimbus.fullname" . }}
image: busybox
command: ["sh", "-c", "until nslookup {{ template "storm.nimbus.fullname" . }}; do echo waiting for {{ template "storm.nimbus.fullname" . }}; sleep 2; done;"]
- name: init-{{ template "storm.supervisor.fullname" . }}
image: busybox
command: ["sh", "-c", "until nslookup {{ template "storm.supervisor.fullname" . }}; do echo waiting for {{ template "storm.supervisor.fullname" . }}; sleep 2; done;"]
containers:
- name: {{ .Values.ui.service.name }}
image: "{{ .Values.ui.image.repository }}:{{ .Values.ui.image.tag }}"
imagePullPolicy: {{ .Values.ui.image.pullPolicy }}
command: ["storm", "ui"]
ports:
- containerPort: {{ .Values.ui.service.port }}
resources:
{{ toYaml .Values.ui.resources | indent 10 }}
volumeMounts:
- mountPath: "/conf"
name: storm-configmap
- mountPath: "/log4j2"
name: storm-logging-config
volumes:
- name: storm-configmap
configMap:
name: {{ template "storm.ui.fullname" . }}
- name: storm-logging-config
configMap:
name: {{ template "storm.logging.name" . }}
{{- end -}}
{{- if .Values.ui.enabled -}}
apiVersion: v1
kind: Service
metadata:
name: {{ template "storm.ui.fullname" . }}
labels:
chart: {{ template "storm.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
type: {{ .Values.ui.service.type }}
ports:
- protocol: TCP
port: {{ .Values.ui.service.port }}
name: {{ .Values.ui.service.name }}
selector:
app: {{ template "storm.ui.name" . }}
release: {{ .Release.Name }}
{{- end -}}
# Default values for storm.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
nameOverride: ""
fullnameOverride: ""
name: storm
enabled: true
nimbus:
replicaCount: 1
image:
repository: registry.cn-qingdao.aliyuncs.com/wod/storm
tag: 1.1.1
pullPolicy: IfNotPresent
service:
name: nimbus
type: ClusterIP
port: 6627
resources:
limits:
cpu: 100m
nodeSelector: {}
tolerations: []
affinity: {}
supervisor:
replicaCount: 3
image:
repository: registry.cn-qingdao.aliyuncs.com/wod/storm
tag: 1.1.1
pullPolicy: IfNotPresent
service:
name: supervisor
port: 6700
resources:
limits:
cpu: 200m
nodeSelector: {}
tolerations: []
affinity: {}
ui:
enabled: true
replicaCount: 1
image:
repository: registry.cn-qingdao.aliyuncs.com/wod/storm
tag: 1.1.1
pullPolicy: IfNotPresent
service:
type: ClusterIP
name: ui
port: 8080
resources:
limits:
cpu: 100m
ingress:
enabled: false
annotations: {}
tls: []
zookeeper:
enabled: true
service:
name: zookeeper
stormName: storm
# Default values for storm.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
nameOverride: ""
fullnameOverride: ""
name: storm
enabled: true
nimbus:
replicaCount: 1
image:
repository: storm
tag: 1.1.1
pullPolicy: IfNotPresent
service:
name: nimbus
type: ClusterIP
port: 6627
resources:
limits:
cpu: 100m
nodeSelector: {}
tolerations: []
affinity: {}
supervisor:
replicaCount: 3
image:
repository: storm
tag: 1.1.1
pullPolicy: IfNotPresent
service:
name: supervisor
port: 6700
resources:
limits:
cpu: 200m
nodeSelector: {}
tolerations: []
affinity: {}
ui:
enabled: true
replicaCount: 1
image:
repository: storm
tag: 1.1.1
pullPolicy: IfNotPresent
service:
type: ClusterIP
name: ui
port: 8080
resources:
limits:
cpu: 100m
ingress:
enabled: false
annotations: {}
tls: []
zookeeper:
enabled: true
service:
name: zookeeper
stormName: storm
<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<configuration monitorInterval="60" shutdownHook="disable">
<properties>
<property name="pattern">%d{yyyy-MM-dd HH:mm:ss.SSS} %c{1.} %t [%p] %msg%n</property>
<property name="patternNoTime">%msg%n</property>
<property name="patternMetrics">%d %-8r %m%n</property>
</properties>
<appenders>
<RollingFile name="A1"
fileName="${sys:workers.artifacts}/${sys:storm.id}/${sys:worker.port}/${sys:logfile.name}"
filePattern="${sys:workers.artifacts}/${sys:storm.id}/${sys:worker.port}/${sys:logfile.name}.%i.gz">
<PatternLayout>
<pattern>${pattern}</pattern>
</PatternLayout>
<Policies>
<SizeBasedTriggeringPolicy size="100 MB"/> <!-- Or every 100 MB -->
</Policies>
<DefaultRolloverStrategy max="9"/>
</RollingFile>
<Console name="STDOUT" target="SYSTEM_OUT">
<PatternLayout>
<pattern>${pattern}</pattern>
</PatternLayout>
</Console>
<Console name="STDERR" target="SYSTEM_ERR">
<PatternLayout>
<pattern>${pattern}</pattern>
</PatternLayout>
</Console>
<RollingFile name="METRICS"
fileName="${sys:workers.artifacts}/${sys:storm.id}/${sys:worker.port}/${sys:logfile.name}.metrics"
filePattern="${sys:workers.artifacts}/${sys:storm.id}/${sys:worker.port}/${sys:logfile.name}.metrics.%i.gz">
<PatternLayout>
<pattern>${patternMetrics}</pattern>
</PatternLayout>
<Policies>
<SizeBasedTriggeringPolicy size="2 MB"/>
</Policies>
<DefaultRolloverStrategy max="9"/>
</RollingFile>
<Syslog name="syslog" format="RFC5424" charset="UTF-8" host="localhost" port="514"
protocol="UDP" appName="[${sys:storm.id}:${sys:worker.port}]" mdcId="mdc" includeMDC="true"
facility="LOCAL5" enterpriseNumber="18060" newLine="true" exceptionPattern="%rEx{full}"
messageId="[${sys:user.name}:${sys:logging.sensitivity}]" id="storm" immediateFail="true" immediateFlush="true"/>
</appenders>
<loggers>
<root level="info"> <!-- We log everything -->
<appender-ref ref="A1"/>
<appender-ref ref="syslog"/>
</root>
<Logger name="org.apache.storm.metric.LoggingMetricsConsumer" level="info" additivity="false">
<appender-ref ref="METRICS"/>
</Logger>
<Logger name="STDERR" level="INFO">
<appender-ref ref="STDERR"/>
<appender-ref ref="syslog"/>
</Logger>
<Logger name="STDOUT" level="INFO">
<appender-ref ref="STDOUT"/>
<appender-ref ref="syslog"/>
</Logger>
</loggers>
</configuration>