
Load metrics from Prometheus server


Overview

This tutorial loads CoreDNS metrics from a Prometheus server into a tf.data.Dataset, then uses tf.keras for training and inference.

CoreDNS is a DNS server with a focus on service discovery, and is widely deployed as part of Kubernetes clusters. For that reason it is often closely monitored in devops operations.

This tutorial is an example for devops looking to automate their operations through machine learning.

Setup and usage

Install the required tensorflow-io package, and restart the runtime

import os

try:
  %tensorflow_version 2.x
except Exception:
  pass
 
TensorFlow 2.x selected.

pip install tensorflow-io
Requirement already satisfied: tensorflow-io in /usr/local/lib/python3.6/dist-packages (0.12.0)
Requirement already satisfied: tensorflow<2.2.0,>=2.1.0 in /tensorflow-2.1.0/python3.6 (from tensorflow-io) (2.1.0)
Requirement already satisfied: opt-einsum>=2.3.2 in /tensorflow-2.1.0/python3.6 (from tensorflow<2.2.0,>=2.1.0->tensorflow-io) (3.2.0)
Requirement already satisfied: google-pasta>=0.1.6 in /tensorflow-2.1.0/python3.6 (from tensorflow<2.2.0,>=2.1.0->tensorflow-io) (0.1.8)
Requirement already satisfied: tensorflow-estimator<2.2.0,>=2.1.0rc0 in /tensorflow-2.1.0/python3.6 (from tensorflow<2.2.0,>=2.1.0->tensorflow-io) (2.1.0)
Requirement already satisfied: tensorboard<2.2.0,>=2.1.0 in /tensorflow-2.1.0/python3.6 (from tensorflow<2.2.0,>=2.1.0->tensorflow-io) (2.1.0)
Requirement already satisfied: wheel>=0.26; python_version >= "3" in /tensorflow-2.1.0/python3.6 (from tensorflow<2.2.0,>=2.1.0->tensorflow-io) (0.34.2)
Requirement already satisfied: grpcio>=1.8.6 in /tensorflow-2.1.0/python3.6 (from tensorflow<2.2.0,>=2.1.0->tensorflow-io) (1.27.2)
Requirement already satisfied: astor>=0.6.0 in /tensorflow-2.1.0/python3.6 (from tensorflow<2.2.0,>=2.1.0->tensorflow-io) (0.8.1)
Requirement already satisfied: absl-py>=0.7.0 in /tensorflow-2.1.0/python3.6 (from tensorflow<2.2.0,>=2.1.0->tensorflow-io) (0.9.0)
Requirement already satisfied: termcolor>=1.1.0 in /tensorflow-2.1.0/python3.6 (from tensorflow<2.2.0,>=2.1.0->tensorflow-io) (1.1.0)
Requirement already satisfied: numpy<2.0,>=1.16.0 in /tensorflow-2.1.0/python3.6 (from tensorflow<2.2.0,>=2.1.0->tensorflow-io) (1.18.1)
Requirement already satisfied: keras-applications>=1.0.8 in /tensorflow-2.1.0/python3.6 (from tensorflow<2.2.0,>=2.1.0->tensorflow-io) (1.0.8)
Requirement already satisfied: protobuf>=3.8.0 in /tensorflow-2.1.0/python3.6 (from tensorflow<2.2.0,>=2.1.0->tensorflow-io) (3.11.3)
Requirement already satisfied: keras-preprocessing>=1.1.0 in /tensorflow-2.1.0/python3.6 (from tensorflow<2.2.0,>=2.1.0->tensorflow-io) (1.1.0)
Requirement already satisfied: wrapt>=1.11.1 in /tensorflow-2.1.0/python3.6 (from tensorflow<2.2.0,>=2.1.0->tensorflow-io) (1.12.0)
Requirement already satisfied: gast==0.2.2 in /tensorflow-2.1.0/python3.6 (from tensorflow<2.2.0,>=2.1.0->tensorflow-io) (0.2.2)
Requirement already satisfied: scipy==1.4.1; python_version >= "3" in /tensorflow-2.1.0/python3.6 (from tensorflow<2.2.0,>=2.1.0->tensorflow-io) (1.4.1)
Requirement already satisfied: six>=1.12.0 in /tensorflow-2.1.0/python3.6 (from tensorflow<2.2.0,>=2.1.0->tensorflow-io) (1.14.0)
Requirement already satisfied: markdown>=2.6.8 in /tensorflow-2.1.0/python3.6 (from tensorboard<2.2.0,>=2.1.0->tensorflow<2.2.0,>=2.1.0->tensorflow-io) (3.2.1)
Requirement already satisfied: setuptools>=41.0.0 in /tensorflow-2.1.0/python3.6 (from tensorboard<2.2.0,>=2.1.0->tensorflow<2.2.0,>=2.1.0->tensorflow-io) (45.2.0)
Requirement already satisfied: werkzeug>=0.11.15 in /tensorflow-2.1.0/python3.6 (from tensorboard<2.2.0,>=2.1.0->tensorflow<2.2.0,>=2.1.0->tensorflow-io) (1.0.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /tensorflow-2.1.0/python3.6 (from tensorboard<2.2.0,>=2.1.0->tensorflow<2.2.0,>=2.1.0->tensorflow-io) (0.4.1)
Requirement already satisfied: google-auth<2,>=1.6.3 in /tensorflow-2.1.0/python3.6 (from tensorboard<2.2.0,>=2.1.0->tensorflow<2.2.0,>=2.1.0->tensorflow-io) (1.11.2)
Requirement already satisfied: requests<3,>=2.21.0 in /tensorflow-2.1.0/python3.6 (from tensorboard<2.2.0,>=2.1.0->tensorflow<2.2.0,>=2.1.0->tensorflow-io) (2.23.0)
Requirement already satisfied: h5py in /tensorflow-2.1.0/python3.6 (from keras-applications>=1.0.8->tensorflow<2.2.0,>=2.1.0->tensorflow-io) (2.10.0)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /tensorflow-2.1.0/python3.6 (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.2.0,>=2.1.0->tensorflow<2.2.0,>=2.1.0->tensorflow-io) (1.3.0)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /tensorflow-2.1.0/python3.6 (from google-auth<2,>=1.6.3->tensorboard<2.2.0,>=2.1.0->tensorflow<2.2.0,>=2.1.0->tensorflow-io) (0.2.8)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /tensorflow-2.1.0/python3.6 (from google-auth<2,>=1.6.3->tensorboard<2.2.0,>=2.1.0->tensorflow<2.2.0,>=2.1.0->tensorflow-io) (4.0.0)
Requirement already satisfied: rsa<4.1,>=3.1.4 in /tensorflow-2.1.0/python3.6 (from google-auth<2,>=1.6.3->tensorboard<2.2.0,>=2.1.0->tensorflow<2.2.0,>=2.1.0->tensorflow-io) (4.0)
Requirement already satisfied: certifi>=2017.4.17 in /tensorflow-2.1.0/python3.6 (from requests<3,>=2.21.0->tensorboard<2.2.0,>=2.1.0->tensorflow<2.2.0,>=2.1.0->tensorflow-io) (2019.11.28)
Requirement already satisfied: idna<3,>=2.5 in /tensorflow-2.1.0/python3.6 (from requests<3,>=2.21.0->tensorboard<2.2.0,>=2.1.0->tensorflow<2.2.0,>=2.1.0->tensorflow-io) (2.9)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /tensorflow-2.1.0/python3.6 (from requests<3,>=2.21.0->tensorboard<2.2.0,>=2.1.0->tensorflow<2.2.0,>=2.1.0->tensorflow-io) (1.25.8)
Requirement already satisfied: chardet<4,>=3.0.2 in /tensorflow-2.1.0/python3.6 (from requests<3,>=2.21.0->tensorboard<2.2.0,>=2.1.0->tensorflow<2.2.0,>=2.1.0->tensorflow-io) (3.0.4)
Requirement already satisfied: oauthlib>=3.0.0 in /tensorflow-2.1.0/python3.6 (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.2.0,>=2.1.0->tensorflow<2.2.0,>=2.1.0->tensorflow-io) (3.1.0)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /tensorflow-2.1.0/python3.6 (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard<2.2.0,>=2.1.0->tensorflow<2.2.0,>=2.1.0->tensorflow-io) (0.4.8)

 from datetime import datetime

import tensorflow as tf
import tensorflow_io as tfio
 

Install and set up CoreDNS and Prometheus

For demo purposes, a CoreDNS server runs locally with port 9053 open to receive DNS queries and port 9153 (the default) open to expose metrics for scraping. The following is a basic Corefile configuration for CoreDNS, and it is available for download:

 .:9053 {
  prometheus
  whoami
}
 

See CoreDNS's documentation for further details on installation.

 !curl -s -OL https://github.com/coredns/coredns/releases/download/v1.6.7/coredns_1.6.7_linux_amd64.tgz
!tar -xzf coredns_1.6.7_linux_amd64.tgz

!curl -s -OL https://raw.githubusercontent.com/tensorflow/io/master/docs/tutorials/prometheus/Corefile

!cat Corefile
 
.:9053 {
  prometheus
  whoami
}

 # Run `./coredns` as a background process.
# IPython doesn't recognize `&` in inline bash cells.
get_ipython().system_raw('./coredns &')
 

The next step is to set up a Prometheus server and use Prometheus to scrape the CoreDNS metrics exposed on port 9153 above. The prometheus.yml file used for configuration is also available for download:

 !curl -s -OL https://github.com/prometheus/prometheus/releases/download/v2.15.2/prometheus-2.15.2.linux-amd64.tar.gz
!tar -xzf prometheus-2.15.2.linux-amd64.tar.gz --strip-components=1

!curl -s -OL https://raw.githubusercontent.com/tensorflow/io/master/docs/tutorials/prometheus/prometheus.yml

!cat prometheus.yml
 
global:
  scrape_interval:     1s
  evaluation_interval: 1s
alerting:
  alertmanagers:
  - static_configs:
    - targets:
rule_files:
scrape_configs:
- job_name: 'prometheus'
  static_configs:
  - targets: ['localhost:9090']
- job_name: "coredns"
  static_configs:
  - targets: ['localhost:9153']

 # Run `./prometheus` as a background process.
# IPython doesn't recognize `&` in inline bash cells.
get_ipython().system_raw('./prometheus &')
 

To show some activity, the dig command can be used to generate a few DNS queries against the CoreDNS server that has been set up:

!sudo apt-get install -y -qq dnsutils
!dig @127.0.0.1 -p 9053 demo1.example.org

; <<>> DiG 9.11.3-1ubuntu1.11-Ubuntu <<>> @127.0.0.1 -p 9053 demo1.example.org
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 53868
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 3
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 855234f1adcb7a28 (echoed)
;; QUESTION SECTION:
;demo1.example.org.     IN  A

;; ADDITIONAL SECTION:
demo1.example.org.  0   IN  A   127.0.0.1
_udp.demo1.example.org. 0   IN  SRV 0 0 45361 .

;; Query time: 0 msec
;; SERVER: 127.0.0.1#9053(127.0.0.1)
;; WHEN: Tue Mar 03 22:35:20 UTC 2020
;; MSG SIZE  rcvd: 132


!dig @127.0.0.1 -p 9053 demo2.example.org

; <<>> DiG 9.11.3-1ubuntu1.11-Ubuntu <<>> @127.0.0.1 -p 9053 demo2.example.org
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 53163
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 3
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: f18b2ba23e13446d (echoed)
;; QUESTION SECTION:
;demo2.example.org.     IN  A

;; ADDITIONAL SECTION:
demo2.example.org.  0   IN  A   127.0.0.1
_udp.demo2.example.org. 0   IN  SRV 0 0 42194 .

;; Query time: 0 msec
;; SERVER: 127.0.0.1#9053(127.0.0.1)
;; WHEN: Tue Mar 03 22:35:21 UTC 2020
;; MSG SIZE  rcvd: 132


There is now a CoreDNS server whose metrics are scraped by a Prometheus server and are ready to be consumed by TensorFlow.
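
Before creating the Dataset, it can be worth sanity-checking that Prometheus has actually scraped the CoreDNS metrics. The snippet below is a minimal sketch and not part of the original pipeline: it queries the Prometheus HTTP API directly with the requests package (already installed as a TensorFlow dependency), using the same endpoint and metric name as the rest of this tutorial.

import requests

# Ask Prometheus for the latest value of the CoreDNS request counter.
# Assumes the Prometheus server started above is listening on localhost:9090.
resp = requests.get(
    "http://localhost:9090/api/v1/query",
    params={"query": "coredns_dns_request_count_total"})
resp.raise_for_status()

# Each result carries its labels (job, instance, ...) and a [timestamp, value] pair.
for result in resp.json()["data"]["result"]:
  print(result["metric"].get("job"), result["metric"].get("instance"), result["value"])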

Create a Dataset for CoreDNS metrics and use it in TensorFlow

A Dataset for CoreDNS metrics can be created from the Prometheus server with tfio.experimental.IODataset.from_prometheus. At a minimum, two arguments are needed: query is passed to the Prometheus server to select the metrics, and length is the period of time (in seconds) to load into the Dataset.

You could start with "coredns_dns_request_count_total" as the query and "5" (seconds) as the length to create the Dataset below. Since two DNS queries were sent earlier in the tutorial, the metric "coredns_dns_request_count_total" is expected to be "2.0" at the end of the time series:

 dataset = tfio.experimental.IODataset.from_prometheus(
      "coredns_dns_request_count_total", 5, endpoint="http://localhost:9090")


print("Dataset Spec:\n{}\n".format(dataset.element_spec))

print("CoreDNS Time Series:")
for (time, value) in dataset:
  # time is in milliseconds; convert to datetime:
  time = datetime.fromtimestamp(time // 1000)
  print("{}: {}".format(time, value['coredns']['localhost:9153']['coredns_dns_request_count_total']))
 
Dataset Spec:
(TensorSpec(shape=(), dtype=tf.int64, name=None), {'coredns': {'localhost:9153': {'coredns_dns_request_count_total': TensorSpec(shape=(), dtype=tf.float64, name=None)} } })

CoreDNS Time Series:
2020-03-03 22:35:17: 2.0
2020-03-03 22:35:18: 2.0
2020-03-03 22:35:19: 2.0
2020-03-03 22:35:20: 2.0
2020-03-03 22:35:21: 2.0

A further look at the spec of the Dataset:

 (
  TensorSpec(shape=(), dtype=tf.int64, name=None),
  {
    'coredns': {
      'localhost:9153': {
        'coredns_dns_request_count_total': TensorSpec(shape=(), dtype=tf.float64, name=None)
      }
    }
  }
)

 

It is obvious that the dataset consists of (time, values) tuples, where the values field is a python dict expanded into:

 "job_name": {
  "instance_name": {
    "metric_name": value,
  },
}
 

In the above example, 'coredns' is the job name, 'localhost:9153' is the instance name, and 'coredns_dns_request_count_total' is the metric name. Note that depending on the Prometheus query used, multiple jobs/instances/metrics could be returned. This is also the reason a python dict is used in the Dataset's structure.
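
As a minimal sketch (assuming the servers above are still running), a single series can be pulled out of that nested job/instance/metric dict with dataset.map, the same pattern used for training later in this tutorial:

dataset = tfio.experimental.IODataset.from_prometheus(
    "coredns_dns_request_count_total", 5, endpoint="http://localhost:9090")

# Drop the timestamp and keep only one job/instance/metric out of the nested dict.
values = dataset.map(
    lambda time, value: value['coredns']['localhost:9153']['coredns_dns_request_count_total'])

for v in values:
  print(v.numpy())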

Take another query "go_memstats_gc_sys_bytes" as an example. Since both CoreDNS and Prometheus are written in Golang, the "go_memstats_gc_sys_bytes" metric is available for both the "coredns" job and the "prometheus" job:

 dataset = tfio.experimental.IODataset.from_prometheus(
    "go_memstats_gc_sys_bytes", 5, endpoint="http://localhost:9090")

print("Time Series CoreDNS/Prometheus Comparision:")
for (time, value) in dataset:
  # time is milli second, convert to data time:
  time = datetime.fromtimestamp(time // 1000)
  print("{}: {}/{}".format(
      time,
      value['coredns']['localhost:9153']['go_memstats_gc_sys_bytes'],
      value['prometheus']['localhost:9090']['go_memstats_gc_sys_bytes']))
 
Time Series CoreDNS/Prometheus Comparison:
2020-03-03 22:35:17: 2385920.0/2775040.0
2020-03-03 22:35:18: 2385920.0/2775040.0
2020-03-03 22:35:19: 2385920.0/2775040.0
2020-03-03 22:35:20: 2385920.0/2775040.0
2020-03-03 22:35:21: 2385920.0/2775040.0

The created Dataset is now ready to be passed to tf.keras directly for either training or inference.

Use the Dataset for model training

With the metrics Dataset created, it can be passed directly to tf.keras for model training or inference.

For demo purposes, this tutorial uses a very simple LSTM model with 1 feature and 2 steps as input:

 n_steps, n_features = 2, 1
simple_lstm_model = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(8, input_shape=(n_steps, n_features)),
    tf.keras.layers.Dense(1)
])

simple_lstm_model.compile(optimizer='adam', loss='mae')

 

The dataset to be used is the value of 'go_memstats_sys_bytes' for CoreDNS, with 10 samples. However, since a sliding window with window=n_steps and shift=1 is formed, additional samples are needed (for any two consecutive elements, the first is taken as x and the second as y for training). The total is 10 + n_steps - 1 + 1 = 12 seconds; the toy sketch below illustrates the pairing.
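
The following toy sketch (using a plain integer range instead of Prometheus data) shows how window=2 with shift=1 produces overlapping windows, and how shifting the windowed dataset by one yields the (x, y) pairs used for training:

# 0, 1, 2, 3, 4 -> windows [0 1], [1 2], [2 3], [3 4]
toy = tf.data.Dataset.range(5)
toy = toy.window(2, shift=1, drop_remainder=True)
toy = toy.flat_map(lambda w: w.batch(2))

# pair each window (x) with the window one step later (y)
x = toy.take(3)
y = toy.skip(1).take(3)
for a, b in tf.data.Dataset.zip((x, y)):
  print(a.numpy(), "->", b.numpy())  # e.g. [0 1] -> [1 2]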

The data values are also scaled to [0, 1]:

 n_samples = 10

dataset = tfio.experimental.IODataset.from_prometheus(
    "go_memstats_sys_bytes", n_samples + n_steps - 1 + 1, endpoint="http://localhost:9090")

# take go_memstats_sys_bytes from coredns job
dataset = dataset.map(lambda _, v: v['coredns']['localhost:9153']['go_memstats_sys_bytes'])

# find the max value and scale the value to [0, 1]
v_max = dataset.reduce(tf.constant(0.0, tf.float64), tf.math.maximum)
dataset = dataset.map(lambda v: (v / v_max))

# expand the dimension by 1 to fit n_features=1
dataset = dataset.map(lambda v: tf.expand_dims(v, -1))

# take a sliding window
dataset = dataset.window(n_steps, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda d: d.batch(n_steps))


# the first value is x and the next value is y, only take 10 samples
x = dataset.take(n_samples)
y = dataset.skip(1).take(n_samples)

dataset = tf.data.Dataset.zip((x, y))

# pass the final dataset to model.fit for training
simple_lstm_model.fit(dataset.batch(1).repeat(10),  epochs=5, steps_per_epoch=10)
 
Train for 10 steps
Epoch 1/5
10/10 [==============================] - 2s 150ms/step - loss: 0.8484
Epoch 2/5
10/10 [==============================] - 0s 10ms/step - loss: 0.7808
Epoch 3/5
10/10 [==============================] - 0s 10ms/step - loss: 0.7102
Epoch 4/5
10/10 [==============================] - 0s 11ms/step - loss: 0.6359
Epoch 5/5
10/10 [==============================] - 0s 11ms/step - loss: 0.5572

<tensorflow.python.keras.callbacks.History at 0x7f1758f3da90>

The model trained above is not very useful in reality, as the CoreDNS server set up in this tutorial does not carry any workload. However, this is a working pipeline that could be used to load metrics from real production servers. The model could then be improved to solve real-world problems in devops automation.
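
As a final hedged sketch (reusing simple_lstm_model and the zipped dataset from the training cell above), inference could look like the following; Keras ignores the y half of the (x, y) pairs when predicting:

# Predict the next (scaled) go_memstats_sys_bytes value for each sliding window.
predictions = simple_lstm_model.predict(dataset.batch(1))
print(predictions)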