
I am trying to use Cinder volumes on OpenStack as persistent volumes for my pods. As soon as I configure the cloud provider and restart the kubelet, the kubelet fails to get its external ID from the cloud provider:

Unable to construct api.Node object for kubelet: failed to get external ID from cloud provider: Failed to find object

The OpenStack API is reachable via HTTPS using a Comodo certificate. The Comodo CA bundle is installed as a trusted CA on the node. Using curl against the API works without the --insecure and --cacert options.
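For reference, the check that works looks like the following (api.example.de stands in for the redacted Keystone endpoint from the cloud_config below; the Comodo CA bundle is picked up from the system trust store, so no extra TLS flags are needed):

$ curl https://api.example.de:5000/v2.0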

Using Kubernetes 1.1.0-alpha on CentOS 7.

$ sudo journalctl -u kubelet

Oct 01 07:40:26 [4196]: I1001 07:40:26.303887 4196 debugging.go:129]  Content-Length: 1159 
Oct 01 07:40:26 [4196]: I1001 07:40:26.303895 4196 debugging.go:129]  Content-Type: application/json 
Oct 01 07:40:26 [4196]: I1001 07:40:26.303950 4196 request.go:755] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/nodes","resourceVersion":"172921"},"items":[{"metadata":{"name":"192.168.100.80","selfLink":"/api/v1/nodes/192.168.100.80","uid":"b48b4cb9-676f-11e5-8521-fa163ef34ff1","resourceVersion":"172900","creationTimestamp":"2015-09-30T12:35:17Z","labels":{"kubernetes.io/hostname":"192.168.100.80"}},"spec":{"externalID":"192.168.100.80"},"status":{"capacity":{"cpu":"2","memory":"4047500Ki","pods":"40"},"conditions":[{"type":"Ready","status":"Unknown","lastHeartbeatTime":"2015-10-01T07:31:55Z","lastTransitionTime":"2015-10-01T07:32:36Z","reason":"Kubelet stopped posting node status."}],"addresses":[{"type":"LegacyHostIP","address":"192.168.100.80"},{"type":"InternalIP","address":"192.168.100.80"}],"nodeInfo":{"machineID":"dae72fe0cc064eb0b7797f25bfaf69df","systemUUID":"384A8E40-1296-9A42-AD77-445D83BB5888","bootID":"5c7eb3ff-d86f-41f2-b3eb-a39adf313a4f","kernelVersion":"3.10.0-229.14.1.el7.x86_64","osImage":"CentOS Linux 7 (Core)","containerRuntimeVersion":"docker://1.7.1","kubeletVersion":"v1.1.0-alpha.1.390+196f58b9cb25a2","kubeProxyVersion":"v1.1.0-alpha.1.390+196f58b9cb25a2"}}}]} 
Oct 01 07:40:26 [4196]: I1001 07:40:26.475016 4196 request.go:457] Request Body: {"kind":"DeleteOptions","apiVersion":"v1","gracePeriodSeconds":0} 
Oct 01 07:40:26 [4196]: I1001 07:40:26.475148 4196 debugging.go:101] curl -k -v -XDELETE -H "Authorization: Bearer rhARkbozkWcrJyvdLQqF9TNO86KHjOsq" -H "User-Agent: kubelet/v1.1.0 (linux/amd64) kubernetes/196f58b" https://localhost:6443/api/v1/namespaces/kube-system/pods/fluentd-elasticsearch-192.168.100.80 
Oct 01 07:40:26 [4196]: I1001 07:40:26.526794 4196 debugging.go:120] DELETE https://localhost:6443/api/v1/namespaces/kube-system/pods/fluentd-elasticsearch-192.168.100.80 200 OK in 51 milliseconds 
Oct 01 07:40:26 [4196]: I1001 07:40:26.526865 4196 debugging.go:126] Response Headers: 
Oct 01 07:40:26 [4196]: I1001 07:40:26.526897 4196 debugging.go:129]  Content-Type: application/json 
Oct 01 07:40:26 [4196]: I1001 07:40:26.526927 4196 debugging.go:129]  Date: Thu, 01 Oct 2015 07:40:26 GMT 
Oct 01 07:40:26 [4196]: I1001 07:40:26.526957 4196 debugging.go:129]  Content-Length: 1977 
Oct 01 07:40:26 [4196]: I1001 07:40:26.527056 4196 request.go:755] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"fluentd-elasticsearch-192.168.100.80","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/pods/fluentd-elasticsearch-192.168.100.80","uid":"a90941f6-680f-11e5-988c-fa163e94cde4","resourceVersion":"172926","creationTimestamp":"2015-10-01T07:40:17Z","deletionTimestamp":"2015-10-01T07:40:26Z","deletionGracePeriodSeconds":0,"annotations":{"kubernetes.io/config.mirror":"mirror","kubernetes.io/config.seen":"2015-10-01T07:39:43.986114806Z","kubernetes.io/config.source":"file"}},"spec":{"volumes":[{"name":"varlog","hostPath":{"path":"/var/log"}},{"name":"varlibdockercontainers","hostPath":{"path":"/var/lib/docker/containers"}}],"containers":[{"name":"fluentd-elasticsearch","image":"gcr.io/google_containers/fluentd-elasticsearch:1.11","args":["-q"],"resources":{"limits":{"cpu":"100m"},"requests":{"cpu":"100m"}},"volumeMounts":[{"name":"varlog","mountPath":"/var/log"},{"name":"varlibdockercontainers","readOnly":true,"mountPath":"/var/lib/docker/containers"}],"terminationMessagePath":"/dev/termination-log","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","nodeName":"192.168.100.80"},"status":{"phase":"Running","conditions":[{"type":"Ready","status":"True"}],"hostIP":"192.168.100.80","podIP":"172.16.58.24","startTime":"2015-10-01T07:40:17Z","containerStatuses":[{"name":"fluentd-elasticsearch","state":{"running":{"startedAt":"2015-10-01T07:37:23Z"}},"lastState":{"terminated":{"exitCode":137,"startedAt":"2015-10-01T07:23:00Z","finishedAt":"2015-10-01T07:33:17Z","containerID":"docker://1398736fd9b274132721206ccaf89030af5e8e304118d29286aec6b2529395ee"}},"ready":true,"restartCount":1,"image":"gcr.io/google_containers/fluentd-elasticsearch:1.11","imageID":"docker://03ba3d224c2a80600a0b44a9894ac0de5526d36b810b13924e33ada76f1e7406","containerID":"docker://d9ac24c8a0fbceea7c494bce73d56d6ea5f003f1d1b7b8ad3975fc7e3c7679b4"}]}} 
Oct 01 07:40:26 [4196]: I1001 07:40:26.528210 4196 status_manager.go:209] Pod "fluentd-elasticsearch-192.168.100.80" fully terminated and removed from etcd 
Oct 01 07:40:26 [4196]: I1001 07:40:26.675178 4196 debugging.go:101] curl -k -v -XGET -H "User-Agent: kubelet/v1.1.0 (linux/amd64) kubernetes/196f58b" -H "Authorization: Bearer rhARkbozkWcrJyvdLQqF9TNO86KHjOsq" https://localhost:6443/api/v1/services 
Oct 01 07:40:26 [4196]: I1001 07:40:26.710214 4196 debugging.go:120] GET https://localhost:6443/api/v1/services 200 OK in 34 milliseconds 
Oct 01 07:40:26 [4196]: I1001 07:40:26.710249 4196 debugging.go:126] Response Headers: 
Oct 01 07:40:26 [4196]: I1001 07:40:26.710260 4196 debugging.go:129]  Content-Type: application/json 
Oct 01 07:40:26 [4196]: I1001 07:40:26.710270 4196 debugging.go:129]  Date: Thu, 01 Oct 2015 07:40:26 GMT 
Oct 01 07:40:26 [4196]: I1001 07:40:26.710436 4196 request.go:755] Response Body: {"kind":"ServiceList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/services","resourceVersion":"172927"},"items":[{"metadata":{"name":"kubernetes","namespace":"default","selfLink":"/api/v1/namespaces/default/services/kubernetes","uid":"28717019-676b-11e5-afb9-fa163e94cde4","resourceVersion":"18","creationTimestamp":"2015-09-30T12:02:44Z","labels":{"component":"apiserver","provider":"kubernetes"}},"spec":{"ports":[{"protocol":"TCP","port":443,"targetPort":443}],"clusterIP":"10.100.0.1","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"elasticsearch-logging","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/elasticsearch-logging","uid":"833c8df5-676b-11e5-958e-fa163e94cde4","resourceVersion":"153","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"k8s-app":"elasticsearch-logging","kubernetes.io/cluster-service":"true","kubernetes.io/name":"Elasticsearch"}},"spec":{"ports":[{"protocol":"TCP","port":9200,"targetPort":"db"}],"selector":{"k8s-app":"elasticsearch-logging"},"clusterIP":"10.100.3.159","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"kibana-logging","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/kibana-logging","uid":"833043fa-676b-11e5-958e-fa163e94cde4","resourceVersion":"149","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"k8s-app":"kibana-logging","kubernetes.io/cluster-service":"true","kubernetes.io/name":"Kibana"}},"spec":{"ports":[{"protocol":"TCP","port":5601,"targetPort":"ui"}],"selector":{"k8s-app":"kibana-logging"},"clusterIP":"10.100.136.111","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"kube-dns","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/kube-dns","uid":"8319ba13-676b-11e5-958e-fa163e94cde4","resourceVersion":"146","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"k8s-app":"kube-dns 
Oct 01 07:40:26 [4196]: ","kubernetes.io/cluster-service":"true","kubernetes.io/name":"KubeDNS"}},"spec":{"ports":[{"name":"dns","protocol":"UDP","port":53,"targetPort":53},{"name":"dns-tcp","protocol":"TCP","port":53,"targetPort":53}],"selector":{"k8s-app":"kube-dns"},"clusterIP":"10.100.0.10","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"kube-ui","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/kube-ui","uid":"83473271-676b-11e5-958e-fa163e94cde4","resourceVersion":"155","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"k8s-app":"kube-ui","kubernetes.io/cluster-service":"true","kubernetes.io/name":"KubeUI"}},"spec":{"ports":[{"protocol":"TCP","port":80,"targetPort":8080}],"selector":{"k8s-app":"kube-ui"},"clusterIP":"10.100.246.61","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"monitoring-grafana","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/monitoring-grafana","uid":"835da09c-676b-11e5-958e-fa163e94cde4","resourceVersion":"157","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"kubernetes.io/cluster-service":"true","kubernetes.io/name":"Grafana"}},"spec":{"ports":[{"protocol":"TCP","port":80,"targetPort":8080}],"selector":{"k8s-app":"influxGrafana"},"clusterIP":"10.100.207.92","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"monitoring-heapster","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/monitoring-heapster","uid":"83367b90-676b-11e5-958e-fa163e94cde4","resourceVersion":"151","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"kubernetes.io/cluster-service":"true","kubernetes.io/name":"Heapster"}},"spec":{"ports":[{"protocol":"TCP","port":80,"targetPort":8082}],"selector":{"k8s-app":"heapster"},"clusterIP":"10.100.119.4","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"monitoring-influxdb","namespace":"kube-system","selfLink":"/api/v1/names 
Oct 01 07:40:26 [4196]: paces/kube-system/services/monitoring-influxdb","uid":"836c95b8-676b-11e5-958e-fa163e94cde4","resourceVersion":"159","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"kubernetes.io/cluster-service":"true","kubernetes.io/name":"InfluxDB"}},"spec":{"ports":[{"name":"http","protocol":"TCP","port":8083,"targetPort":8083},{"name":"api","protocol":"TCP","port":8086,"targetPort":8086}],"selector":{"k8s-app":"influxGrafana"},"clusterIP":"10.100.101.182","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"reverseproxy","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/reverseproxy","uid":"15e65b7d-6776-11e5-a5d0-fa163e94cde4","resourceVersion":"10994","creationTimestamp":"2015-09-30T13:20:57Z","labels":{"k8s-app":"reverseproxy","kubernetes.io/cluster-service":"true","kubernetes.io/name":"reverseproxy"}},"spec":{"ports":[{"name":"http","protocol":"TCP","port":8181,"targetPort":8181,"nodePort":80},{"name":"https","protocol":"TCP","port":8181,"targetPort":8181,"nodePort":443}],"selector":{"k8s-app":"reverseproxy"},"clusterIP":"10.100.168.84","type":"NodePort","sessionAffinity":"None"},"status":{"loadBalancer":{}}}]} 
Oct 01 07:40:26 [4196]: I1001 07:40:26.875150 4196 debugging.go:101] curl -k -v -XGET -H "User-Agent: kubelet/v1.1.0 (linux/amd64) kubernetes/196f58b" -H "Authorization: Bearer rhARkbozkWcrJyvdLQqF9TNO86KHjOsq" https://localhost:6443/api/v1/watch/nodes?fieldSelector=metadata.name%3D192.168.100.80&resourceVersion=172921 
Oct 01 07:40:26 [4196]: I1001 07:40:26.900981 4196 debugging.go:120] GET https://localhost:6443/api/v1/watch/nodes?fieldSelector=metadata.name%3D192.168.100.80&resourceVersion=172921 200 OK in 25 milliseconds 
Oct 01 07:40:26 [4196]: I1001 07:40:26.901009 4196 debugging.go:126] Response Headers: 
Oct 01 07:40:26 [4196]: I1001 07:40:26.901018 4196 debugging.go:129]  Date: Thu, 01 Oct 2015 07:40:26 GMT 
Oct 01 07:40:27 [4196]: I1001 07:40:27.001744 4196 iowatcher.go:102] Unexpected EOF during watch stream event decoding: unexpected EOF 
Oct 01 07:40:27 [4196]: I1001 07:40:27.002685 4196 reflector.go:294] pkg/client/unversioned/cache/reflector.go:87: Unexpected watch close - watch lasted less than a second and no items received 
Oct 01 07:40:27 [4196]: W1001 07:40:27.002716 4196 reflector.go:224] pkg/client/unversioned/cache/reflector.go:87: watch of *api.Node ended with: very short watch 
Oct 01 07:40:27 [4196]: I1001 07:40:27.075065 4196 debugging.go:101] curl -k -v -XGET -H "User-Agent: kubelet/v1.1.0 (linux/amd64) kubernetes/196f58b" -H "Authorization: Bearer rhARkbozkWcrJyvdLQqF9TNO86KHjOsq" https://localhost:6443/api/v1/watch/services?resourceVersion=172927 
Oct 01 07:40:27 [4196]: I1001 07:40:27.101642 4196 debugging.go:120] GET https://localhost:6443/api/v1/watch/services?resourceVersion=172927 200 OK in 26 milliseconds 
Oct 01 07:40:27 [4196]: I1001 07:40:27.101689 4196 debugging.go:126] Response Headers: 
Oct 01 07:40:27 [4196]: I1001 07:40:27.101705 4196 debugging.go:129]  Date: Thu, 01 Oct 2015 07:40:27 GMT 
Oct 01 07:40:27 [4196]: I1001 07:40:27.104168 4196 openstack.go:164] openstack.Instances() called 
Oct 01 07:40:27 [4196]: I1001 07:40:27.133478 4196 openstack.go:201] Found 8 compute flavors 
Oct 01 07:40:27 [4196]: I1001 07:40:27.133519 4196 openstack.go:202] Claiming to support Instances 
Oct 01 07:40:27 [4196]: E1001 07:40:27.158908 4196 kubelet.go:846] Unable to construct api.Node object for kubelet: failed to get external ID from cloud provider: Failed to find object 
Oct 01 07:40:27 [4196]: I1001 07:40:27.202978 4196 iowatcher.go:102] Unexpected EOF during watch stream event decoding: unexpected EOF 
Oct 01 07:40:27 [4196]: I1001 07:40:27.203110 4196 reflector.go:294] pkg/client/unversioned/cache/reflector.go:87: Unexpected watch close - watch lasted less than a second and no items received 
Oct 01 07:40:27 [4196]: W1001 07:40:27.203136 4196 reflector.go:224] pkg/client/unversioned/cache/reflector.go:87: watch of *api.Service ended with: very short watch 
Oct 01 07:40:27 [4196]: I1001 07:40:27.275208 4196 debugging.go:101] curl -k -v -XGET -H "Authorization: Bearer rhARkbozkWcrJyvdLQqF9TNO86KHjOsq" -H "User-Agent: kubelet/v1.1.0 (linux/amd64) kubernetes/196f58b" https://localhost:6443/api/v1/pods?fieldSelector=spec.nodeName%3D192.168.100.80 
Oct 01 07:40:27 [4196]: I1001 07:40:27.308434 4196 debugging.go:120] GET https://localhost:6443/api/v1/pods?fieldSelector=spec.nodeName%3D192.168.100.80 200 OK in 33 milliseconds 
Oct 01 07:40:27 [4196]: I1001 07:40:27.308464 4196 debugging.go:126] Response Headers: 
Oct 01 07:40:27 [4196]: I1001 07:40:27.308475 4196 debugging.go:129]  Content-Type: application/json 
Oct 01 07:40:27 [4196]: I1001 07:40:27.308484 4196 debugging.go:129]  Date: Thu, 01 Oct 2015 07:40:27 GMT 
Oct 01 07:40:27 [4196]: I1001 07:40:27.308491 4196 debugging.go:129]  Content-Length: 113 
Oct 01 07:40:27 [4196]: I1001 07:40:27.308524 4196 request.go:755] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/pods","resourceVersion":"172941"},"items":[]} 
Oct 01 07:40:27 [4196]: I1001 07:40:27.308719 4196 config.go:252] Setting pods for source api 
Oct 01 07:40:27 [4196]: I1001 07:40:27.308753 4196 kubelet.go:1921] SyncLoop (REMOVE): "fluentd-elasticsearch-192.168.100.80_kube-system" 
Oct 01 07:40:27 [4196]: I1001 07:40:27.308931 4196 volumes.go:100] Used volume plugin "kubernetes.io/host-path" for varlog 
Oct 01 07:40:27 [4196]: I1001 07:40:27.308960 4196 volumes.go:100] Used volume plugin "kubernetes.io/host-path" for varlibdockercontainers 
Oct 01 07:40:27 [4196]: I1001 07:40:27.308977 4196 kubelet.go:2531] Generating status for "fluentd-elasticsearch-192.168.100.80_kube-system" 

$ kubectl version

Client Version: version.Info{Major:"1", Minor:"1+", 
GitVersion:"v1.1.0-alpha.1.390+196f58b9cb25a2", 
GitCommit:"196f58b9cb25a2222c7f9aacd624737910b03acb", 
GitTreeState:"clean"} 

Server Version: version.Info{Major:"1", Minor:"1+", 
GitVersion:"v1.1.0-alpha.1.390+196f58b9cb25a2", 
GitCommit:"196f58b9cb25a2222c7f9aacd624737910b03acb", 
GitTreeState:"clean"} 

$ cat /etc/os-release

NAME="CentOS Linux" 
VERSION="7 (Core)" 
ID="centos" 
ID_LIKE="rhel fedora" 
VERSION_ID="7" 
PRETTY_NAME="CentOS Linux 7 (Core)" 
ANSI_COLOR="0;31" 
CPE_NAME="cpe:/o:centos:centos:7" 
HOME_URL="https://www.centos.org/" 
BUG_REPORT_URL="https://bugs.centos.org/" 

CENTOS_MANTISBT_PROJECT="CentOS-7" 
CENTOS_MANTISBT_PROJECT_VERSION="7" 
REDHAT_SUPPORT_PRODUCT="centos" 
REDHAT_SUPPORT_PRODUCT_VERSION="7" 

$ cat /etc/kubernetes/kubelet

### 
# kubernetes kubelet (node) config 

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces) 
KUBELET_ADDRESS="--address=0.0.0.0" 

# The port for the info server to serve on 
# KUBELET_PORT="--port=10250" 

# You may leave this blank to use the actual hostname 
KUBELET_HOSTNAME="--hostname_override=192.168.100.80" 

# location of the api-server 
KUBELET_API_SERVER="--api_servers=https://localhost:6443" 

# Add your own! 
KUBELET_ARGS="--cluster_dns=10.100.0.10 --cluster_domain=cluster.local --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --config=/etc/kubernetes/manifests --v=9 --cloud-config=/etc/kubernetes/cloud_config --cloud-provider=openstack --machine-id-file=/etc/machine-id" 

$ cat /etc/kubernetes/cloud_config

[Global] 
auth-url=https://api.*******.de:5000/v2.0 
username=username 
password=password 
region=RegionOne 
tenant-id=4ee7b21351d94f2b96d363efe131b833 

Answer


The kubelet is able to reach OpenStack, but it cannot find this node in the list of servers for that tenant and region:

Oct 01 07:40:27 [4196]: I1001 07:40:27.133478 4196 openstack.go:201] Found 8 compute flavors 
Oct 01 07:40:27 [4196]: E1001 07:40:27.158908 4196 kubelet.go:846] Unable to construct api.Node object for kubelet: failed to get external ID from cloud provider: Failed to find object

The node's hostname is used to identify it in the list of servers returned by the cloud provider. It can, however, be overridden with the --hostname_override flag.

In your configuration I see that you have overridden it with an IP address; if that does not match the server name as reported by Nova, you are likely to get this error.
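As a rough sketch of how to verify this (standard nova CLI commands; kube1 is only a placeholder for your actual server name):

$ nova list 
$ nova show kube1 

The Name field reported by Nova is what the OpenStack provider matches against the kubelet node name, so --hostname_override has to be set to exactly that value (or left unset if the node's hostname already matches).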


Thanks for the hint. I tried the private IP (which is what gets used when I don't set a hostname myself), kube1, kube1.novalocal and the instance ID as the hostname. I still got the same error. – maklemenz


Can you post what 'nova show' reports? Based on that we can validate what needs to go into --hostname_override. Another option is to make sure the VM name is set to the output of 'hostname' on the node and not use the override flag at all. –


The VM name did match the hostname. '--hostname-override=${hostname}' worked, but only after upgrading to 'v1.1.0-alpha.1-653-g86b4e77'. Thanks for the help. – maklemenz
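For completeness, a sketch of how the working override might look in /etc/kubernetes/kubelet, assuming the node's hostname and the Nova server name are both kube1 (a placeholder); note that shell expansion such as ${hostname} only happens if the value passes through a shell before reaching the kubelet:

KUBELET_HOSTNAME="--hostname-override=kube1"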
