
I am testing OpenShift Origin v3. I installed it as a Docker container following the instructions. I also deployed all the image streams in roles/openshift_examples/files/examples/image-streams/image-streams-centos7.json.
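
For reference, the setup was roughly along these lines, a sketch following the Origin "run in a Docker container" instructions (the exact docker run flags may differ slightly from the README; the image-streams file is the one from the openshift-ansible checkout mentioned above):

$ sudo docker run -d --name "origin" \
    --privileged --pid=host --net=host \
    -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys \
    -v /var/lib/docker:/var/lib/docker:rw \
    -v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes \
    openshift/origin start

# import the CentOS 7 image streams into the shared "openshift" namespace (needs cluster-admin)
$ oc create -f roles/openshift_examples/files/examples/image-streams/image-streams-centos7.json -n openshift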

Now I am testing the installation by deploying a dummy PHP application from GitHub. I can create the project and the application (sketched below), but the builds are stuck in the "Pending" state. In the Events tab I see lots of messages like this:

"Unable to mount volumes for pod "hello-world-1-build_php1": 
IsLikelyNotMountPoint("/var/lib/origin/openshift.local.volumes/pods/9377d3b4-9887- 
11e5-81fe-00215abe5482/volumes/kubernetes.io~secret/builder-dockercfg-x2ijq- 
push"): file does not exist (5 times in the last 40 seconds)" 

I also tried with a Java Tomcat application Docker image, but got the same error messages. It looks like a Kubernetes configuration problem.
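
For completeness, the project and application were created with the standard oc workflow, roughly like this (the GitHub URL is a placeholder; the project name php1 and build name hello-world-1 are the ones from the events above), and the stuck build can be inspected the same way:

$ oc new-project php1
$ oc new-app https://github.com/<my-user>/hello-world.git   # creates the build config and deployment config
$ oc get builds                                              # hello-world-1 sits in Pending
$ oc describe pod hello-world-1-build                        # shows the same volume-mount events as the console
$ oc get events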

Any ideas?

Thanks for your help.

Update 1: the logs from the origin container show a bit more information about the error:

Unable to mount volumes for pod "deployment-example-2-deploy_test1": IsLikelyNotMountPoint("/var/lib/origin/openshift.local.volumes/pods/70f69f8c-98d3-11e5-8d98-00215abe5482/volumes/kubernetes.io~secret/deployer-token-8cfv8"): file does not exist; skipping pod 
E1202 09:12:24.269145 4396 pod_workers.go:113] Error syncing pod 70f69f8c-98d3-11e5-8d98-00215abe5482, skipping: IsLikelyNotMountPoint("/var/lib/origin/openshift.local.volumes/pods/70f69f8c-98d3-11e5-8d98-00215abe5482/volumes/kubernetes.io~secret/deployer-token-8cfv8"): file does not exist 
W1202 09:12:34.229374 4396 kubelet.go:1690] Orphaned volume "ac11a2b5-9880-11e5-81fe-00215abe5482/builder-dockercfg-va0cl-push" found, tearing down volume 
E1202 09:12:34.287847 4396 kubelet.go:1696] Could not tear down volume "ac11a2b5-9880-11e5-81fe-00215abe5482/builder-dockercfg-va0cl-push": IsLikelyNotMountPoint("/var/lib/origin/openshift.local.volumes/pods/ac11a2b5-9880-11e5-81fe-00215abe5482/volumes/kubernetes.io~secret/builder-dockercfg-va0cl-push"): file does not exist 
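
(For what it's worth, the directory from the error can be listed directly, both on the host and inside the origin container; the pod UID below is taken from the messages above.)

$ ls -la /var/lib/origin/openshift.local.volumes/pods/70f69f8c-98d3-11e5-8d98-00215abe5482/volumes/kubernetes.io~secret/
$ sudo docker exec origin ls -la /var/lib/origin/openshift.local.volumes/pods/70f69f8c-98d3-11e5-8d98-00215abe5482/volumes/kubernetes.io~secret/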

Log entries from the startup of the origin container:

W1202 09:12:13.992293 4396 start_master.go:278] assetConfig.loggingPublicURL: invalid value '', Details: required to view aggregated container logs in the console 
W1202 09:12:13.992442 4396 start_master.go:278] assetConfig.metricsPublicURL: invalid value '', Details: required to view cluster metrics in the console 
I1202 09:12:14.784026 4396 plugins.go:71] No cloud provider specified. 
I1202 09:12:14.981775 4396 start_master.go:388] Starting master on 0.0.0.0:8443 (v1.1-270-ge592c18) 
I1202 09:12:14.981825 4396 start_master.go:389] Public master address is https://192.168.178.55:8443 
I1202 09:12:14.981855 4396 start_master.go:393] Using images from "openshift/origin-<component>:v1.1" 
2015-12-02 09:12:15.574421 I | etcdserver: name = openshift.local 
2015-12-02 09:12:15.574455 I | etcdserver: data dir = openshift.local.etcd 
2015-12-02 09:12:15.574465 I | etcdserver: member dir = openshift.local.etcd/member 
2015-12-02 09:12:15.574472 I | etcdserver: heartbeat = 100ms 
2015-12-02 09:12:15.574480 I | etcdserver: election = 1000ms 
2015-12-02 09:12:15.574489 I | etcdserver: snapshot count = 0 
2015-12-02 09:12:15.574505 I | etcdserver: advertise client URLs = https://192.168.178.55:4001 
2015-12-02 09:12:15.606296 I | etcdserver: restarting member 2041635cb479cd3a in cluster 6a5d0422e654089a at commit index 3846 
2015-12-02 09:12:15.609623 I | raft: 2041635cb479cd3a became follower at term 2 
2015-12-02 09:12:15.609663 I | raft: newRaft 2041635cb479cd3a [peers: [], term: 2, commit: 3846, applied: 0, lastindex: 3846, lastterm: 2] 
2015-12-02 09:12:15.609815 I | etcdserver: set snapshot count to default 10000 
2015-12-02 09:12:15.609829 I | etcdserver: starting server... [version: 2.1.2, cluster version: to_be_decided] 
I1202 09:12:15.611196 4396 etcd.go:68] Started etcd at 192.168.178.55:4001 
2015-12-02 09:12:15.624029 N | etcdserver: added local member 2041635cb479cd3a [https://192.168.178.55:7001] to cluster 6a5d0422e654089a 
2015-12-02 09:12:15.624349 N | etcdserver: set the initial cluster version to 2.1.0 
I1202 09:12:15.645761 4396 run_components.go:181] Using default project node label selector: 
2015-12-02 09:12:17.009875 I | raft: 2041635cb479cd3a is starting a new election at term 2 
2015-12-02 09:12:17.009915 I | raft: 2041635cb479cd3a became candidate at term 3 
2015-12-02 09:12:17.009970 I | raft: 2041635cb479cd3a received vote from 2041635cb479cd3a at term 3 
2015-12-02 09:12:17.009995 I | raft: 2041635cb479cd3a became leader at term 3 
2015-12-02 09:12:17.010011 I | raft: raft.node: 2041635cb479cd3a elected leader 2041635cb479cd3a at term 3 
2015-12-02 09:12:17.059445 I | etcdserver: published {Name:openshift.local ClientURLs:[https://192.168.178.55:4001]} to cluster 6a5d0422e654089a 
W1202 09:12:17.111262 4396 controller.go:290] Resetting endpoints for master service "kubernetes" to &{{ } {kubernetes default c10e12cf-98d0-11e5-8d98-00215abe5482 8 0 2015-12-02 08:43:26 +0000 UTC <nil> <nil> map[] map[]} [{[{192.168.178.55 <nil>}] [] [{https 8443 TCP} {dns 53 UDP} {dns-tcp 53 TCP}]}]} 
I1202 09:12:17.524735 4396 master.go:232] Started Kubernetes API at 0.0.0.0:8443/api/v1 
I1202 09:12:17.524914 4396 master.go:232] Started Kubernetes API Extensions at 0.0.0.0:8443/apis/extensions/v1beta1 
I1202 09:12:17.525038 4396 master.go:232] Started Origin API at 0.0.0.0:8443/oapi/v1 
I1202 09:12:17.525049 4396 master.go:232] Started OAuth2 API at 0.0.0.0:8443/oauth 
I1202 09:12:17.525055 4396 master.go:232] Started Login endpoint at 0.0.0.0:8443/login 
I1202 09:12:17.525061 4396 master.go:232] Started Web Console 0.0.0.0:8443/console/ 
I1202 09:12:17.525067 4396 master.go:232] Started Swagger Schema API at 0.0.0.0:8443/swaggerapi/ 
2015-12-02 09:12:18.523290 I | http: TLS handshake error from 192.168.178.21:50932: EOF 
2015-12-02 09:12:18.537124 I | http: TLS handshake error from 192.168.178.21:50933: EOF 
2015-12-02 09:12:18.549780 I | http: TLS handshake error from 192.168.178.21:50934: EOF 
2015-12-02 09:12:18.556966 I | http: TLS handshake error from 192.168.178.21:50935: EOF 
2015-12-02 09:12:20.117727 I | skydns: ready for queries on cluster.local. for tcp4://0.0.0.0:53 [rcache 0] 
2015-12-02 09:12:20.117804 I | skydns: ready for queries on cluster.local. for udp4://0.0.0.0:53 [rcache 0] 
I1202 09:12:20.217891 4396 run_components.go:176] DNS listening at 0.0.0.0:53 
I1202 09:12:20.225439 4396 start_master.go:519] Controllers starting (*) 
E1202 09:12:20.702335 4396 serviceaccounts_controller.go:218] serviceaccounts "default" already exists 
I1202 09:12:21.505391 4396 nodecontroller.go:133] Sending events to api server. 
I1202 09:12:21.507690 4396 start_master.go:563] Started Kubernetes Controllers 
W1202 09:12:21.944254 4396 nodecontroller.go:572] Missing timestamp for Node intweb3. Assuming now as a timestamp. 
I1202 09:12:21.944570 4396 event.go:216] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"intweb3", UID:"intweb3", APIVersion:"", ResourceVersion:"", FieldPath:""}): reason: 'RegisteredNode' Node intweb3 event: Registered Node intweb3 in NodeController 
I1202 09:12:22.662116 4396 start_node.go:179] Starting a node connected to https://192.168.178.55:8443 
I1202 09:12:22.670163 4396 plugins.go:71] No cloud provider specified. 
I1202 09:12:22.670239 4396 start_node.go:284] Starting node intweb3 (v1.1-270-ge592c18) 
W1202 09:12:22.681308 4396 node.go:121] Error running 'chcon' to set the kubelet volume root directory SELinux context: exit status 1 
I1202 09:12:22.698136 4396 node.go:56] Connecting to Docker at unix:///var/run/docker.sock 
I1202 09:12:22.717904 4396 manager.go:128] cAdvisor running in container: "/docker/f80b92397b6eb9052cf318d7225d21eb66941fcb333f16fe2b0330af629f73dd" 
I1202 09:12:22.932096 4396 fs.go:108] Filesystem partitions: map[/dev/sda1:{mountpoint:/rootfs/boot major:8 minor:1 fsType: blockSize:0} /dev/mapper/intweb3--vg-root:{mountpoint:/rootfs major:252 minor:0 fsType: blockSize:0}] 
I1202 09:12:22.949204 4396 node.go:251] Started Kubernetes Proxy on 0.0.0.0 
I1202 09:12:22.974678 4396 start_master.go:582] Started Origin Controllers 
I1202 09:12:22.999204 4396 machine.go:48] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id" 
I1202 09:12:22.999311 4396 manager.go:163] Machine: {NumCores:2 CpuFrequency:2667000 MemoryCapacity:1010421760 MachineID: SystemUUID:26A5835E-1781-DD11-BBDA-5ABE54820021 BootID:6cbd9dcc-5d4d-414d-96e7-c8a41de013f7 Filesystems:[{Device:/dev/mapper/intweb3--vg-root Capacity:156112113664} {Device:/dev/sda1 Capacity:246755328}] DiskMap:map[252:0:{Name:dm-0 Major:252 Minor:0 Size:158737629184 Scheduler:none} 252:1:{Name:dm-1 Major:252 Minor:1 Size:1044381696 Scheduler:none} 8:0:{Name:sda Major:8 Minor:0 Size:160041885696 Scheduler:deadline}] NetworkDevices:[{Name:eth0 MacAddress:00:21:5a:be:54:82 Speed:1000 Mtu:1500}] Topology:[{Id:0 Memory:1010421760 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1}]} {Id:1 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown} 
I1202 09:12:23.010686 4396 manager.go:169] Version: {KernelVersion:3.19.0-25-generic ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:1.9.1 CadvisorVersion: CadvisorRevision:} 
I1202 09:12:23.011734 4396 server.go:820] Watching apiserver 
I1202 09:12:23.253556 4396 manager.go:191] Setting dockerRoot to /var/lib/docker 
I1202 09:12:23.270558 4396 plugins.go:56] Registering credential provider: .dockercfg 
I1202 09:12:23.363525 4396 server.go:779] Started kubelet 
E1202 09:12:23.363724 4396 kubelet.go:812] Image garbage collection failed: unable to find data for container/
I1202 09:12:23.370771 4396 kubelet.go:833] Running in container "/kubelet" 
I1202 09:12:23.370860 4396 server.go:104] Starting to listen on 0.0.0.0:10250 
I1202 09:12:23.734095 4396 trace.go:57] Trace "decodeNodeList *[]api.ImageStream" (started 2015-12-02 09:12:23.154869743 +0000 UTC): 
[579.19167ms] [579.19167ms] Decoded 1 nodes 
[579.193136ms] [1.466µs] END 
I1202 09:12:23.734149 4396 trace.go:57] Trace "decodeNodeList *[]api.ImageStream" (started 2015-12-02 09:12:23.154865413 +0000 UTC): 
[3.352µs] [3.352µs] Decoding dir /openshift.io/imagestreams/test1 START 
[579.252571ms] [579.249219ms] Decoding dir /openshift.io/imagestreams/test1 END 
[579.255504ms] [2.933µs] Decoded 1 nodes 
[579.257181ms] [1.677µs] END 
I1202 09:12:23.734204 4396 trace.go:57] Trace "List *api.ImageStreamList" (started 2015-12-02 09:12:23.001854335 +0000 UTC): 
[1.676µs] [1.676µs] About to list directory 
[732.327694ms] [732.326018ms] List extracted 
[732.330138ms] [2.444µs] END 
I1202 09:12:23.773150 4396 factory.go:236] Registering Docker factory 
I1202 09:12:23.779446 4396 factory.go:93] Registering Raw factory 
I1202 09:12:24.069082 4396 manager.go:1006] Started watching for new ooms in manager 
I1202 09:12:24.074624 4396 oomparser.go:183] oomparser using systemd 
I1202 09:12:24.111389 4396 kubelet.go:944] Node intweb3 was previously registered 
I1202 09:12:24.112362 4396 manager.go:250] Starting recovery of all containers 
I1202 09:12:24.166309 4396 trace.go:57] Trace "decodeNodeList *[]api.ImageStream" (started 2015-12-02 09:12:23.155013407 +0000 UTC): 
[1.011259672s] [1.011259672s] Decoded 1 nodes 
[1.011261767s] [2.095µs] END 
I1202 09:12:24.166422 4396 trace.go:57] Trace "decodeNodeList *[]api.ImageStream" (started 2015-12-02 09:12:23.155011032 +0000 UTC): 
[1.327µs] [1.327µs] Decoding dir /openshift.io/imagestreams/test1 START 
[1.01138385s] [1.011382523s] Decoding dir /openshift.io/imagestreams/test1 END 
[1.011386853s] [3.003µs] Decoded 1 nodes 
[1.01138839s] [1.537µs] END 
I1202 09:12:24.166561 4396 trace.go:57] Trace "List *api.ImageStreamList" (started 2015-12-02 09:12:23.002949866 +0000 UTC): 
[3.142µs] [3.142µs] About to list etcd node 
[152.060049ms] [152.056907ms] Etcd node listed 
[1.163577016s] [1.011516967s] Node list decoded 
[1.163587911s] [10.895µs] END 
I1202 09:12:24.166656 4396 trace.go:57] Trace "List *api.ImageStreamList" (started 2015-12-02 09:12:23.002947281 +0000 UTC): 
[1.188µs] [1.188µs] About to list directory 
[1.16368555s] [1.163684362s] List extracted 
[1.163687576s] [2.026µs] END 
I1202 09:12:24.196265 4396 manager.go:255] Recovery completed 
I1202 09:12:24.215711 4396 manager.go:118] Starting to sync pod status with apiserver 
I1202 09:12:24.215792 4396 kubelet.go:2056] Starting kubelet main sync loop. 

Update 2:

$ sudo docker -v 
Docker version 1.9.1, build a34a1d5 

$ cat /etc/*release* 
DISTRIB_ID=Ubuntu 
DISTRIB_RELEASE=14.04 
DISTRIB_CODENAME=trusty 
DISTRIB_DESCRIPTION="Ubuntu 14.04.3 LTS" 
NAME="Ubuntu" 
VERSION="14.04.3 LTS, Trusty Tahr" 
ID=ubuntu 
ID_LIKE=debian 
PRETTY_NAME="Ubuntu 14.04.3 LTS" 
VERSION_ID="14.04" 
HOME_URL="http://www.ubuntu.com/" 
SUPPORT_URL="http://help.ubuntu.com/" 
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/" 

@Clayton: Can you share a link to the mount bug that should be fixed in 1.10? Thanks!

What do the logs or the build logs on your container say? Sometimes they show more than the errors in the console's Events tab. – lvthillo

I am hitting the same problem when running Origin in a Docker container. Did you solve it? @capitanbuzz –

Answer

Which operating system and Docker version are you using? There are known issues with volume mounting on older Docker versions: OpenShift/Kube mounts volumes into containers, but when Docker is running OpenShift, the stock Docker version hides access to the root mounts. The Red Hat OSes carry a patch that makes this work, and presumably Docker 1.10 will have the proper fix. The alternative is to download the binary directly and simply start it with sudo openshift start.
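
A minimal sketch of that alternative, assuming the Linux server tarball is downloaded from the Origin releases page and run directly on the host, so the kubelet's secret mounts are not hidden by Docker's mount namespace (the tarball layout may differ between releases):

# download and unpack the release tarball, then start the all-in-one master + node on the host
$ tar -xzf openshift-origin-server-*.tar.gz
$ sudo ./openshift start
# point oc and the web console at https://<host-ip>:8443 as before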

Updated with the info, any ideas? Can you point me to the bugs you mention that should be fixed in 1.10? – capitanbuzz