
OpenShift v3.11 Installation ②

◆Run the install playbook (this installs OpenShift)

[root@all-openshift openshift-ansible]# ansible-playbook -vvvv playbooks/deploy_cluster.yml

~~~~
TASK [openshift_service_catalog : Wait for API Server rollout success] **********************************************************************************************************************************************************************
task path: /root/openshift-ansible/roles/openshift_service_catalog/tasks/start.yml:2
Sunday 26 January 2020  01:25:09 -0800 (0:00:00.321)       1:14:57.452 ********
Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c '/usr/bin/python && sleep 0'

××× It hangs here ×××

/var/log/messages

Jan 26 02:21:36 localhost origin-node: I0126 02:21:36.973382   75005 kuberuntime_manager.go:757] checking backoff for container "controller-manager" in pod "controller-manager-b2mml_kube-service-catalog(eb8dd47e-4023-11ea-9536-000c29a13296)"
Jan 26 02:21:36 localhost origin-node: I0126 02:21:36.973671   75005 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=controller-manager pod=controller-manager-b2mml_kube-service-catalog(eb8dd47e-4023-11ea-9536-000c29a13296)
Jan 26 02:21:36 localhost origin-node: E0126 02:21:36.973741   75005 pod_workers.go:186] Error syncing pod eb8dd47e-4023-11ea-9536-000c29a13296 ("controller-manager-b2mml_kube-service-catalog(eb8dd47e-4023-11ea-9536-000c29a13296)"), skipping: failed to "StartContainer" for "controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=controller-manager pod=controller-manager-b2mml_kube-service-catalog(eb8dd47e-4023-11ea-9536-000c29a13296)"
Jan 26 02:21:46 localhost origin-node: I0126 02:21:46.973290   75005 kuberuntime_manager.go:513] Container {Name:apiserver Image:docker.io/openshift/origin-service-catalog:v3.11 Command:[/usr/bin/service-catalog] Args:[apiserver --storage-type etcd --secure-port 6443 --etcd-servers https://all-openshift:2379 --etcd-cafile /etc/origin/master/master.etcd-ca.crt --etcd-certfile /etc/origin/master/master.etcd-client.crt --etcd-keyfile /etc/origin/master/master.etcd-client.key -v 3 --cors-allowed-origins localhost --enable-admission-plugins NamespaceLifecycle,DefaultServicePlan,ServiceBindingsLifecycle,ServicePlanChangeValidator,BrokerAuthSarCheck --feature-gates OriginatingIdentity=true --feature-gates NamespacedServiceBroker=true] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:6443 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:apiserver-ssl ReadOnly:true MountPath:/var/run/kubernetes-service-catalog SubPath: MountPropagation:<nil>} {Name:etcd-host-cert ReadOnly:true MountPath:/etc/origin/master SubPath: MountPropagation:<nil>} {Name:service-catalog-apiserver-token-m52hb ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz/ready,Port:6443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:1,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jan 26 02:21:46 localhost origin-node: I0126 02:21:46.975273   75005 kuberuntime_manager.go:757] checking backoff for container "apiserver" in pod "apiserver-xvlpn_kube-service-catalog(029331bd-4024-11ea-9536-000c29a13296)"
Jan 26 02:21:46 localhost origin-node: I0126 02:21:46.981614   75005 kuberuntime_manager.go:767] Back-off 2m40s restarting failed container=apiserver pod=apiserver-xvlpn_kube-service-catalog(029331bd-4024-11ea-9536-000c29a13296)
Jan 26 02:21:46 localhost origin-node: E0126 02:21:46.981681   75005 pod_workers.go:186] Error syncing pod 029331bd-4024-11ea-9536-000c29a13296 ("apiserver-xvlpn_kube-service-catalog(029331bd-4024-11ea-9536-000c29a13296)"), skipping: failed to "StartContainer" for "apiserver" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=apiserver pod=apiserver-xvlpn_kube-service-catalog(029331bd-4024-11ea-9536-000c29a13296)"
Jan 26 02:21:51 localhost origin-node: I0126 02:21:51.973414   75005 kuberuntime_manager.go:513] Container {Name:controller-manager Image:docker.io/openshift/origin-service-catalog:v3.11 Command:[/usr/bin/service-catalog] Args:[controller-manager --secure-port 6443 -v 3 --leader-election-namespace kube-service-catalog --leader-elect-resource-lock configmaps --cluster-id-configmap-namespace=kube-service-catalog --broker-relist-interval 5m --feature-gates OriginatingIdentity=true --feature-gates AsyncBindingOperations=true --feature-gates NamespacedServiceBroker=true] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:6443 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:K8S_NAMESPACE Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:service-catalog-ssl ReadOnly:true MountPath:/var/run/kubernetes-service-catalog SubPath: MountPropagation:<nil>} {Name:service-catalog-controller-token-vw7c8 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz/ready,Port:6443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:1,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[KILL MKNOD SETGID SETUID],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jan 26 02:21:51 localhost origin-node: I0126 02:21:51.973813   75005 kuberuntime_manager.go:757] checking backoff for container "controller-manager" in pod "controller-manager-b2mml_kube-service-catalog(eb8dd47e-4023-11ea-9536-000c29a13296)"
Jan 26 02:21:51 localhost origin-node: I0126 02:21:51.974157   75005 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=controller-manager pod=controller-manager-b2mml_kube-service-catalog(eb8dd47e-4023-11ea-9536-000c29a13296)
Jan 26 02:21:51 localhost origin-node: E0126 02:21:51.974208   75005 pod_workers.go:186] Error syncing pod eb8dd47e-4023-11ea-9536-000c29a13296 ("controller-manager-b2mml_kube-service-catalog(eb8dd47e-4023-11ea-9536-000c29a13296)"), skipping: failed to "StartContainer" for "controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=controller-manager pod=controller-manager-b2mml_kube-service-catalog(eb8dd47e-4023-11ea-9536-000c29a13296)"
Jan 26 02:21:58 localhost origin-node: I0126 02:21:58.988726   75005 kuberuntime_manager.go:513] Container {Name:apiserver Image:docker.io/openshift/origin-service-catalog:v3.11 Command:[/usr/bin/service-catalog] Args:[apiserver --storage-type etcd --secure-port 6443 --etcd-servers https://all-openshift:2379 --etcd-cafile /etc/origin/master/master.etcd-ca.crt --etcd-certfile /etc/origin/master/master.etcd-client.crt --etcd-keyfile /etc/origin/master/master.etcd-client.key -v 3 --cors-allowed-origins localhost --enable-admission-plugins NamespaceLifecycle,DefaultServicePlan,ServiceBindingsLifecycle,ServicePlanChangeValidator,BrokerAuthSarCheck --feature-gates OriginatingIdentity=true --feature-gates NamespacedServiceBroker=true] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:6443 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:apiserver-ssl ReadOnly:true MountPath:/var/run/kubernetes-service-catalog SubPath: MountPropagation:<nil>} {Name:etcd-host-cert ReadOnly:true MountPath:/etc/origin/master SubPath: MountPropagation:<nil>} {Name:service-catalog-apiserver-token-m52hb ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz/ready,Port:6443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:1,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
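The back-off messages alone don't say why the containers keep dying. If you want to dig in at this point, the container logs and pod events are the place to look (the pod names here are the ones from the listing further down; substitute whatever oc get pods shows):

# log of the current attempt
oc -n kube-service-catalog logs apiserver-2x2bx
# log of the previous, crashed attempt
oc -n kube-service-catalog logs --previous controller-manager-6vq8j
# probe failures, restart counts, events
oc -n kube-service-catalog describe pod apiserver-2x2bx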

◆Cause?
https://docs.openshift.com/container-platform/3.11/install/configuring_inventory_file.html#configuring-oab-storage

Does it need a PV?
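The linked section is about giving the OpenShift Ansible Broker persistent storage: the broker runs its own etcd, and that etcd wants a PersistentVolume. The docs show an NFS-backed example along these lines (variables as documented there; the values are the doc's illustrations, not from my inventory):

openshift_hosted_etcd_storage_kind=nfs
openshift_hosted_etcd_storage_nfs_options="*(rw,root_squash,sync,no_wdelay)"
openshift_hosted_etcd_storage_nfs_directory=/opt/osev3-etcd
openshift_hosted_etcd_storage_volume_name=etcd-vol2
openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"]
openshift_hosted_etcd_storage_volume_size=1G
openshift_hosted_etcd_storage_labels={'storage': 'etcd'}

Note that the service catalog apiserver in the log above points at the master etcd (--etcd-servers https://all-openshift:2379), so the PV question would concern the broker rather than the catalog apiserver itself.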


◆Checking the status

[root@all-openshift ~]# oc get pods --all-namespaces
NAMESPACE               NAME                                           READY     STATUS             RESTARTS   AGE
default                 docker-registry-1-5cf2r                        1/1       Running            0          56m
default                 registry-console-1-7xjqd                       1/1       Running            0          56m
default                 router-1-pmkw9                                 1/1       Running            0          56m
kube-service-catalog    apiserver-2x2bx                                0/1       CrashLoopBackOff   13         42m   ★
kube-service-catalog    controller-manager-6vq8j                       0/1       CrashLoopBackOff   12         42m   ★
kube-system             master-api-all-openshift                       1/1       Running            0          1h
kube-system             master-controllers-all-openshift               1/1       Running            0          59m
kube-system             master-etcd-all-openshift                      1/1       Running            0          1h
openshift-console       console-67c994c776-5lrj2                       1/1       Running            0          45m
openshift-monitoring    alertmanager-main-0                            3/3       Running            0          40m
openshift-monitoring    alertmanager-main-1                            3/3       Running            0          38m
openshift-monitoring    alertmanager-main-2                            3/3       Running            0          37m
openshift-monitoring    cluster-monitoring-operator-8578656f6f-888ts   1/1       Running            0          55m
openshift-monitoring    grafana-6b9f85786f-c8x6p                       2/2       Running            0          50m
openshift-monitoring    kube-state-metrics-c4f86b5f8-xv79d             3/3       Running            0          36m
openshift-monitoring    node-exporter-q4f5j                            2/2       Running            0          37m
openshift-monitoring    prometheus-k8s-0                               4/4       Running            1          44m
openshift-monitoring    prometheus-k8s-1                               4/4       Running            1          40m
openshift-monitoring    prometheus-operator-6644b8cd54-84skd           1/1       Running            0          51m
openshift-node          sync-lv2lb                                     1/1       Running            0          58m
openshift-sdn           ovs-pkvdq                                      1/1       Running            0          58m
openshift-sdn           sdn-ggdv7                                      1/1       Running            0          58m
openshift-web-console   webconsole-7fc8759f7b-6b4cg                    1/1       Running            0          50m
[root@all-openshift ~]#
[root@all-openshift ~]#
[root@all-openshift ~]# oc -n kube-service-catalog delete pod controller-manager-6vq8j
[root@all-openshift ~]# oc -n kube-service-catalog delete pod apiserver-2x2bx

---------------------------------------------------------------------------------------------------------------------------------------------------

[root@all-openshift ~]# oc get pods --all-namespaces
NAMESPACE               NAME                                           READY     STATUS             RESTARTS   AGE
default                 docker-registry-1-5cf2r                        1/1       Running            0          1h
default                 registry-console-1-7xjqd                       1/1       Running            0          59m
default                 router-1-pmkw9                                 1/1       Running            0          1h
kube-service-catalog    apiserver-xvlpn                                0/1       Running            0          1m    ★
kube-service-catalog    controller-manager-b2mml                       0/1       CrashLoopBackOff   3          1m    ★
kube-system             master-api-all-openshift                       1/1       Running            0          1h
kube-system             master-controllers-all-openshift               1/1       Running            0          1h
kube-system             master-etcd-all-openshift                      1/1       Running            0          1h
openshift-console       console-67c994c776-5lrj2                       1/1       Running            0          48m
openshift-monitoring    alertmanager-main-0                            3/3       Running            0          44m
openshift-monitoring    alertmanager-main-1                            3/3       Running            0          41m
openshift-monitoring    alertmanager-main-2                            3/3       Running            0          41m
openshift-monitoring    cluster-monitoring-operator-8578656f6f-888ts   1/1       Running            0          59m
openshift-monitoring    grafana-6b9f85786f-c8x6p                       2/2       Running            0          54m
openshift-monitoring    kube-state-metrics-c4f86b5f8-xv79d             3/3       Running            0          40m
openshift-monitoring    node-exporter-q4f5j                            2/2       Running            0          41m
openshift-monitoring    prometheus-k8s-0                               4/4       Running            1          47m
openshift-monitoring    prometheus-k8s-1                               4/4       Running            1          44m
openshift-monitoring    prometheus-operator-6644b8cd54-84skd           1/1       Running            0          55m
openshift-node          sync-lv2lb                                     1/1       Running            0          1h
openshift-sdn           ovs-pkvdq                                      1/1       Running            0          1h
openshift-sdn           sdn-ggdv7                                      1/1       Running            0          1h
openshift-web-console   webconsole-7fc8759f7b-6b4cg                    1/1       Running            0          54m
[root@all-openshift ~]#
[root@all-openshift ~]#


Hmm, of course, just deleting the pods gets them recreated...
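Which makes sense: the pods are owned by DaemonSets (see the oc -n kube-service-catalog get all output below), so the controller recreates them the moment they disappear. Actually removing them would mean deleting the owning objects, something like:

oc -n kube-service-catalog delete daemonset apiserver controller-manager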

---------------------------------------------------------------------------------------------------------------------------------------------------


<Inventory file>
openshift_service_catalog_version = v3.11
openshift_enable_service_catalog=false ★ added, let's see if this works
ansible_service_broker_install = false  ★ added, let's see if this works


↓ I can't delete it... how am I supposed to get rid of this...
[root@all-openshift ~]# oc delete project kube-service-catalog
Error from server (Conflict): Operation cannot be fulfilled on namespaces "kube-service-catalog": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system.
[root@all-openshift ~]#
[root@all-openshift ~]# oc delete --force project kube-service-catalog
warning: --force is ignored because --grace-period is not 0.
Error from server (Conflict): Operation cannot be fulfilled on namespaces "kube-service-catalog": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system.
[root@all-openshift ~]#
[root@all-openshift ~]#
[root@all-openshift ~]# oc -n kube-service-catalog get all
NAME                           READY     STATUS             RESTARTS   AGE
pod/apiserver-xvlpn            0/1       Running            4          6m
pod/controller-manager-b2mml   0/1       CrashLoopBackOff   6          6m

NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/apiserver            ClusterIP   172.30.187.195   <none>        443/TCP   51m
service/controller-manager   ClusterIP   172.30.240.235   <none>        443/TCP   50m

NAME                                DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR                         AGE
daemonset.apps/apiserver            1         1         0         1            0           node-role.kubernetes.io/master=true   51m
daemonset.apps/controller-manager   1         1         0         1            0           node-role.kubernetes.io/master=true   50m

NAME                                 HOST/PORT                                                         PATH      SERVICES    PORT      TERMINATION   WILDCARD
route.route.openshift.io/apiserver   apiserver-kube-service-catalog.router.default.svc.cluster.local             apiserver   secure    passthrough   None
[root@all-openshift ~]#
[root@all-openshift ~]#
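That Conflict message means the namespace is stuck in Terminating until its finalizers are cleared. A common workaround, sketched here assuming jq is installed, is to empty spec.finalizers and push the object to the finalize subresource through a local API proxy:

# dump the namespace with its finalizers stripped
oc get namespace kube-service-catalog -o json | jq '.spec.finalizers = []' > ns.json
# open a proxy to the API server (default 127.0.0.1:8001) and PUT it back
oc proxy &
curl -H "Content-Type: application/json" -X PUT --data-binary @ns.json \
    http://127.0.0.1:8001/api/v1/namespaces/kube-service-catalog/finalize

In this case the likely blocker is the crash-looping aggregated API itself; removing its registration (oc delete apiservice v1beta1.servicecatalog.k8s.io) is another option.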

...I really didn't want to have to stop deploy_cluster.yml partway through...
Maybe it will just pick the existing state back up? No, that's not happening, so I'll run it again.
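Concretely, that means another full pass. If the openshift-ansible checkout has the per-component playbook (worth verifying in your clone; the path below is the release-3.11 layout), a targeted run should also work:

# full re-run, same as before
ansible-playbook -vvvv playbooks/deploy_cluster.yml
# or just the service catalog play
ansible-playbook -vvvv playbooks/openshift-service-catalog/config.yml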

At this point you can already log in to the Web Console.
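The console URL is the route in the openshift-console project (the hostname depends on the configured default subdomain):

oc get route console -n openshift-console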

[root@all-openshift ~]# htpasswd -b /etc/origin/master/htpasswd admin redhat
Adding password for user admin
[root@all-openshift ~]#
[root@all-openshift ~]# oc adm policy add-cluster-role-to-user cluster-admin admin
Warning: User 'admin' not found
cluster role "cluster-admin" added: "admin"
[root@all-openshift ~]#
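The "User 'admin' not found" warning is expected with the htpasswd identity provider: the user object is only created on first login, and the cluster role binding is applied by name regardless. To confirm everything took effect, log in as the new user (assuming the default API port 8443):

oc login -u admin -p redhat https://all-openshift:8443
oc whoami
# should work only with cluster-admin
oc get nodes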
