Commit ca43aab9 authored by 徐泽意

update
version: 1.0.0
description: logstash with RESTful configuration support
icon: https://www.elastic.co/assets/blt86e4472872eed314/logo-elastic-logstash-lt.svg
home: https://www.github.com/batrako/logstashChart
maintainers:
- email: fsmetar@gmail.com
  name: Iván Alvarez
name: logstash
appVersion: 7.6.0
sources:
- https://www.github.com/batrako/logstashChart
- https://www.elastic.co/products/logstash
## Introduction
Logstash is an open-source data collection engine with real-time pipelining capabilities. It can dynamically unify data from disparate sources and normalize it into the destinations of your choice. While Logstash originally drove innovation in log collection, its capabilities now reach well beyond that: any type of event can be transformed through a rich set of input, filter, and output plugins, simplifying the ingestion process.
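As an illustration of that input/filter/output plugin model (the beats port, grok pattern, and elasticsearch host below are placeholders, not defaults of this chart), a minimal pipeline definition looks like:
```
input {
  beats { port => 5044 }
}
filter {
  grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
}
output {
  elasticsearch { hosts => ["http://elasticsearch:9200"] }
}
```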
## Features
```
1. Full-text search is backed by Elasticsearch, the popular and powerful open-source distributed search engine.
2. Persistent queues provide protection across node failures.
3. An extensible plugin ecosystem offers more than 200 plugins, plus the flexibility to create and contribute your own.
4. Regular transformations can be performed on event fields: fields can be renamed, removed, replaced, and modified.
```
## Prerequisites
```
Kubernetes 1.6+
helm 2.8+
PV support in the underlying infrastructure
```
## Configuration
| Parameter | Description | Default |
| ------------------------- | -------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------- |
| `clusterName` | Logstash identifier | `logstash` |
| `nodePort` | Enables or disables external (NodePort) access to Logstash | `enabled` |
| `environment` | Environment (dev/pre/pro) | `dev` |
| `replicas` | Number of replicas | `1` |
| `configReloadAutomatic` | When enabled, Logstash attempts to reload a pipeline whenever its configuration file is updated | `true` |
| `extraEnvs` | Extra environment variables, appended to the container's `env` | `[]` |
| `secretMounts` | Secrets to mount into the pod | `[]` |
| `image` | Image | `registry.cn-qingdao.aliyuncs.com/wod/logstash` |
| `imageTag` | Image tag | `7.6.0` |
| `imagePullPolicy` | Image pull policy | `IfNotPresent` |
| `lsJavaOpts` | JVM heap size | `-Xmx1g -Xms1g` |
| `resources` | Resource requests and limits for the StatefulSet | `requests.cpu: 100m`<br>`requests.memory: 2Gi`<br>`limits.cpu: 1000m`<br>`limits.memory: 2Gi` |
| `networkHost` | Value for the `host` Logstash setting | `0.0.0.0` |
| `antiAffinityTopologyKey` | By default this prevents multiple Logstash pods from running on the same Kubernetes node | `kubernetes.io/hostname` |
| `antiAffinity` | Set to `hard` to strictly enforce the anti-affinity rule, or `soft` for best-effort | `hard` |
| `podManagementPolicy` | How Kubernetes rolls out the StatefulSet pods (`Parallel` starts them all at once) | `Parallel` |
| `protocol` | Protocol used by the readiness probe | `http` |
| `httpPort` | HTTP port Kubernetes uses for health checks and the Service | `9600` |
| `updateStrategy` | Update strategy | `RollingUpdate` |
| `maxUnavailable` | Max unavailable pods for the pod disruption budget | `1` |
| `fsGroup` | Group ID (GID) for `securityContext.fsGroup`, so the Logstash user can read from the persistent volume | `1000` |
| `terminationGracePeriod` | Termination grace period in seconds | `120` |
| `readinessProbe` | Readiness probe settings | `failureThreshold: 3`<br>`initialDelaySeconds: 10`<br>`periodSeconds: 10`<br>`successThreshold: 3`<br>`timeoutSeconds: 5` |
| `imagePullSecrets` | Secrets used when pulling the image | `[]` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Pod tolerations | `[]` |
| `ingress` | Ingress configuration | `enabled: false` |
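For example, a minimal override file (illustrative values, not chart defaults) that scales the deployment and raises the JVM heap could look like:
```
replicas: 3
lsJavaOpts: "-Xmx2g -Xms2g"
resources:
  requests:
    cpu: "200m"
    memory: "3Gi"
  limits:
    cpu: "2000m"
    memory: "3Gi"
```
Pass it to `helm install` or `helm upgrade` with `-f`.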
## Installation
```
helm install \
  /etc/kubernetes/helm/logstash \
  --name=logstash \
  --namespace=logstash \
  -f /etc/kubernetes/helm/logstash/values.yaml
```
## Uninstall
```
helm delete logstash --purge
```
## Upgrade
```
helm upgrade logstash \
  /etc/kubernetes/helm/logstash \
  --namespace=logstash \
  -f /etc/kubernetes/helm/logstash/values.yaml
```
configInfo:
- name: replicas
  text: Pod count
  type: text
  value: "1"
- name: image
  text: Logstash image
  type: text
  value: "registry.cn-qingdao.aliyuncs.com/wod/logstash"
- name: imageTag
  text: Version
  type: radio
  value: ["7.6.0"]
- name: imageApi
  text: API image
  type: text
  value: "registry.cn-qingdao.aliyuncs.com/wod/logstash-api"
- name: imageTagApi
  text: Version
  type: radio
  value: ["latest"]
- name: lsJavaOpts
  text: JVM heap size
  type: text
  value: "-Xmx1g -Xms1g"
- name: resources
  text: Resource limits
  type: resource
  memory: "2Gi"
  cpu: "1000m"
- name: config.storageClassName
  text: StorageClass name
  type: text
  value: "nfs-client"
- name: config.resources.request.storage
  text: Storage volume size
  type: text
  value: "1Gi"
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "uname" -}}
{{ .Values.clusterName | lower }}-{{ .Values.nodeGroup }}
{{- end -}}
{{- define "appname" -}}
{{- default "logstashapi" .Values.appname | lower | trunc 16 | trimSuffix "-" -}}
{{- end -}}
{{- define "clusterName" -}}
{{- default "cluster" .Values.clusterName | lower -}}
{{- end -}}
{{- define "nodeGroup" -}}
{{- default "group" .Values.nodeGroup | lower -}}
{{- end -}}
{{- define "company" -}}
{{- default "co" .Values.company | lower | trunc 3 | trimSuffix "-" -}}
{{- end -}}
{{- define "environment" -}}
{{- default "dev" .Values.environment | lower | trunc 3 | trimSuffix "-" -}}
{{- end -}}
{{- define "namespace" -}}
{{include "company" . }}-{{include "appname" . }}-{{include "environment" . }}
{{- end -}}
{{- define "claimName" -}}
{{include "appname" . }}-{{include "clusterName" . }}-{{include "nodeGroup" . }}
{{- end -}}
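{{/*
Worked example (derived from the chart's default values.yaml: clusterName
"logstash", nodeGroup "parser", and the fallbacks appname="logstashapi",
company="co", environment="dev") -- the helpers above render as:
  uname     -> logstash-parser
  namespace -> co-logstashapi-dev
  claimName -> logstashapi-logstash-parser
so the headless Service is named "logstash-parser-headless" and the config
PVC "logstashapi-logstash-parser-config".
*/}}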
{{- if (eq .Values.config.dynamicProvision true) }}
kind: PersistentVolume
apiVersion: v1
metadata:
name: {{ template "namespace" . }}-{{ template "claimName" .}}-config
labels:
unique: {{ template "namespace" . }}-{{ template "claimName" .}}-config
spec:
capacity:
storage: {{ .Values.config.resources.request.storage }}
accessModes: {{ .Values.config.accessModes }}
hostPath:
path: "/tmp/logstashapi-config"
storageClassName: {{ .Values.config.storageClassName }}
persistentVolumeReclaimPolicy: Retain
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "claimName" . }}-config
spec:
accessModes: {{ .Values.config.accessModes }}
resources:
requests:
storage: {{ .Values.config.resources.request.storage }}
storageClassName: {{ .Values.config.storageClassName }}
selector:
matchLabels:
unique: {{ template "namespace" . }}-{{ template "claimName" .}}-config
{{- end }}
kind: Service
apiVersion: v1
metadata:
name: {{ template "uname" . }}-headless
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: "{{ template "uname" . }}"
caas_app: "{{ template "appname" . }}"
annotations:
# Create endpoints also if the related pod isn't ready
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  clusterIP: None # This is needed for statefulset hostnames like logstash-parser-0 to resolve
selector:
app: "{{ template "uname" . }}"
ports:
- name: logstashapi
port: {{ .Values.httpPort }}
---
{{- if eq .Values.nodePort "enabled" }}
kind: Service
apiVersion: v1
metadata:
name: "ls-{{ template "environment" . }}-{{ template "appname" . }}{{ .Values.zone }}-svc"
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: "{{ template "uname" . }}"
spec:
type: NodePort
selector:
app: "{{ template "uname" . }}"
ports:
- name: logstashapi
protocol: TCP
port: {{ .Values.httpPort }}
- name: lsapi
protocol: TCP
port: {{ .Values.httpPortApi }}
targetPort: 8080
{{- end }}
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: {{ template "uname" . }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: "{{ template "uname" . }}"
caas_app: "{{ template "appname" . }}"
spec:
serviceName: {{ template "uname" . }}-headless
selector:
matchLabels:
app: "{{ template "uname" . }}"
  replicas: {{ .Values.replicas }}
podManagementPolicy: {{ .Values.podManagementPolicy }}
updateStrategy:
type: {{ .Values.updateStrategy }}
template:
metadata:
name: "{{ template "uname" . }}"
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: "{{ template "uname" . }}"
spec:
securityContext:
fsGroup: {{ .Values.fsGroup }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 6 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- if eq .Values.antiAffinity "hard" }}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- "{{ template "uname" .}}"
topologyKey: {{ .Values.antiAffinityTopologyKey }}
{{- else if eq .Values.antiAffinity "soft" }}
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
topologyKey: {{ .Values.antiAffinityTopologyKey }}
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- "{{ template "uname" . }}"
{{- end }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriod }}
{{- if .Values.secretMounts }}
volumes:
{{- range .Values.secretMounts }}
- name: {{ .name }}
secret:
            secretName: {{ .secretName | default .name }}
{{- end }}
{{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
{{- end }}
initContainers:
- name: init-config
securityContext:
runAsUser: 0
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
volumeMounts:
- name: config
mountPath: /usr/share/logstash/config/
command: ["sh", "-c", "mkdir -p /usr/share/logstash/config; mkdir -p /usr/share/logstash/config/pipeline; mkdir -p /usr/share/logstash/config/patterns; touch /usr/share/logstash/config/logstash.yml; if [ -z \"$(ls -A /usr/share/logstash/config/pipeline/)\" ]; then echo 'input {}\nfilter{}\noutput{}\n' > /usr/share/logstash/config/pipeline/pipeline.conf; echo '- pipeline.id: pipeline_1\n path.config: \"/usr/share/logstash/config/pipeline/pipeline.conf\"' > /usr/share/logstash/config/pipelines.yml; fi; chown -R 1000:1000 /usr/share/logstash"]
resources:
requests:
cpu: "100m"
memory: "500Mi"
limits:
cpu: "200m"
memory: "1Gi"
containers:
- name: "{{ template "name" . }}"
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
readinessProbe:
{{ toYaml .Values.readinessProbe | indent 10 }}
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
                # check that the node api is responding
http () {
local path="${1}"
curl -XGET -s -k --fail {{ .Values.protocol }}://127.0.0.1:{{ .Values.httpPort }}${path}
}
                echo 'Waiting for the logstash api to respond'
http "/_node/stats/jvm" ;
ports:
- name: logstashapi
containerPort: {{ .Values.httpPort }}
resources:
{{ toYaml .Values.resources | indent 10 }}
env:
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: LS_JAVA_OPTS
value: "{{ .Values.lsJavaOpts }}"
- name: CONFIG_RELOAD_AUTOMATIC
value: "{{ .Values.configReloadAutomatic }}"
- name: http.host
value: "0.0.0.0"
{{- if .Values.extraEnvs }}
{{ toYaml .Values.extraEnvs | indent 10 }}
{{- end }}
volumeMounts:
- name: config
mountPath: /usr/share/logstash/config/
{{- range .Values.secretMounts }}
- name: {{ .name }}
mountPath: {{ .path }}
{{- if .subPath }}
subPath: {{ .subPath }}
{{- end }}
{{- end }}
- name: "{{ template "name" . }}-api"
image: "{{ .Values.imageApi }}:{{ .Values.imageTagApi }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
ports:
- containerPort: 8080
name: lsapi
livenessProbe:
httpGet:
scheme: HTTP
path: /v2/health
port: 8080
initialDelaySeconds: 10
periodSeconds: 10
failureThreshold: 3
env:
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: LOGSTASH_CONFIG_PATH
value: "/usr/share/logstash/config"
volumeMounts:
- name: config
mountPath: /usr/share/logstash/config/
volumes:
- name: config
persistentVolumeClaim:
claimName: "{{ template "claimName" . }}-config"
---
clusterName: "logstash"
nodeGroup: "parser"
nodePort: "enabled"
nodePortIp: "auto"
zone: "01"
replicas: 1
minimumMasterNodes: 1
# Extra environment variables to append
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
# - name: MY_ENVIRONMENT_VAR
# value: the_value_goes_here
# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []
# - name: logstash-keystore
# secretName: logstash-keystore
# path: /usr/share/elasticsearch/config/keystore
image: "registry.cn-qingdao.aliyuncs.com/wod/logstash"
imageTag: "7.6.0"
imageApi: "registry.cn-qingdao.aliyuncs.com/wod/logstash-api"
imageTagApi: "latest"
imagePullPolicy: "IfNotPresent"
lsJavaOpts: "-Xmx1g -Xms1g"
configReloadAutomatic: "true"
resources:
requests:
cpu: "100m"
memory: "2Gi"
limits:
cpu: "1000m"
memory: "2Gi"
networkHost: "0.0.0.0"
# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"
# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort"
antiAffinity: "hard"
# The default is to deploy all pods serially. By setting this to parallel all pods are started at
# the same time when bootstrapping the cluster
podManagementPolicy: "Parallel"
protocol: http
httpPort: 9600
httpPortApi: 9500
updateStrategy: RollingUpdate
# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1
# GroupID for the logstash user. The official elastic docker images always use the id of 1000
fsGroup: 1000
# How long to wait for logstash to stop gracefully
terminationGracePeriod: 120
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 3
timeoutSeconds: 5
imagePullSecrets: []
nodeSelector: {}
tolerations: []
# Enabling this will publicly expose your Logstash instance.
# Only enable this if you have security enabled on your cluster
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
path: /
hosts:
- chart-example.local
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
config:
dynamicProvision: true
accessModes: [ "ReadWriteMany" ]
storageClassName: "nfs-client"
resources:
request:
storage: 1Gi
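
# Example overrides (hypothetical names, shown as comments so the defaults
# above stay in effect): each secretMounts entry becomes a volume and
# volumeMount in the StatefulSet, and extraEnvs entries are appended to the
# container's env list.
# extraEnvs:
#   - name: PIPELINE_WORKERS
#     value: "2"
# secretMounts:
#   - name: logstash-keystore
#     path: /usr/share/logstash/config/keystore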