Oracle Cloud Infrastructure - General

Topic

    Joydeepta Bhattacharjee
    Topic posted July 17, 2019 by Joydeepta Bhattacharjee, last edited July 17, 2019, tagged Cloud at Customer, Developer Tools
    Title:
    Need Way out around Oracle OCI OKE kube-dns and service lookup within container
    Summary:
    We have multiple UI and Spring Boot microservices deployed across multiple pods that need to communicate using service names instead of pod IPs, which change with every redeployment.
    Content:

    Once the image is built, versioned, and pushed successfully to the image repository, the next step in the pipeline is to review the YAML kept in Git for the deployment and pod creation, targeted at the specific Kubernetes vendor. The Kubernetes tooling already installed on the cloud VM and connected to the cluster master is then used to sequentially create the pods from the specified Docker image and to provision them behind the default in-built load balancer / Ingress that is part of the Kubernetes setup. A typical Dockerfile to build a Spring Boot API, and a pod and service definition published through kubectl, are sampled out below. In Developer Cloud Service, all the artifacts and build jobs are organised under a typical project home, so the specific user accessing DevCS should have the required role for the above visibility:

     

    The Dockerfile in the Spring Boot project context root, for a PoC that updates a data store in an Oracle Autonomous Database cloud instance, would be:

     

    FROM openjdk:8-jdk-alpine
    VOLUME /tmp
    # Wallet to securely connect to the Oracle Autonomous DB instance from the Docker runtime
    ADD ./src/main/resources/Wallet_KXX /tmp/Wallet_KXX
    COPY target/FirstPOCService-0.0.1-SNAPSHOT.jar app.jar
    ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar","--debug"]
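
    The build-and-push step that precedes this is not shown in the post; a minimal sketch of it, assuming the image goes to OCI Registry (the <region-key>, <tenancy-namespace> and repository path below are placeholders, not values from this project), could look like:

    docker build -t firstspringpocdbimage:1.1 .
    # tag and push to OCIR; all bracketed values are placeholders
    docker tag firstspringpocdbimage:1.1 <region-key>.ocir.io/<tenancy-namespace>/firstspringpocdbimage:1.1
    docker login <region-key>.ocir.io    # username is <tenancy-namespace>/<oci-username>, password is an auth token
    docker push <region-key>.ocir.io/<tenancy-namespace>/firstspringpocdbimage:1.1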

    while the pod YAML, which creates a Service of type LoadBalancer registered with the out-of-the-box load balancing that comes with Kubernetes, is as below.

     

     

     

    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: app-dbboot-deployment
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: app
      template:
        metadata:
          labels:
            app: app
        spec:
          containers:
            - name: appdbtest
              image: ".............../firstspringpocdbimage:1.1"
              imagePullPolicy: "Always"
              ports:
                - containerPort: 8099 # the endpoint is at port 8099 in the container
          imagePullSecrets:
            - name: CCCC
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: springbootapp-db-service
    spec:
      type: LoadBalancer # exposes the service externally through a cloud load balancer
      ports:
      - port: 80
        protocol: TCP
        targetPort: 8099
      selector:
        app: app
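
    With the Service above in place, other pods in the cluster should be able to reach the Spring Boot API by service name on the Service port 80, which forwards to containerPort 8099. A quick in-cluster sanity check could be sketched as below (the /actuator/health path is only an assumption about the application; substitute any endpoint known to exist):

    # hypothetical one-off test pod; busybox wget hits the service by its cluster DNS name
    kubectl run curl-test --rm -it --restart=Never --image=busybox:1.28 -- \
      wget -qO- http://springbootapp-db-service.default.svc.cluster.local:80/actuator/health

    Within the same (default) namespace the short name springbootapp-db-service should also resolve.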

     

     

    Now, the typical steps that are ultimately executed to bring the pods to life, with the DevCS automation, are as below:

     

    This is added as a Unix shell build step running the commands below with kubectl. These are typical commands common to any managed Kubernetes service from any cloud vendor, and they run on Oracle as expected.

     

    echo "" >> $HOME/.oci/config

    echo "pass_phrase=bXXXX" >> $HOME/.oci/config

    cat $HOME/.oci/config

    mkdir -p $HOME/.kube

    oci ce cluster create-kubeconfig --cluster-id ocid.XXXXX--file $HOME/.kube/config --region eu-F>>>

    export KUBECONFIG=$HOME/.kube/config

    kubectl get pods

    kubectl config view

    kubectl get nodes

    kubectl delete service springbootapp-po-service

    kubectl delete deployment app-po-deployment

    kubectl create -f ./PO/po/springboot_db_deploy.yml

    sleep 60

    kubectl get services  springbootapp-po-service

    kubectl get pods
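
    As a side note, the same step could be sketched without the fixed sleep and the delete/create cycle, assuming the deployment and service names from the YAML shown earlier (the 120s timeout is an arbitrary choice):

    kubectl apply -f ./PO/po/springboot_db_deploy.yml
    # wait until the rollout actually completes instead of sleeping for a fixed time
    kubectl rollout status deployment/app-dbboot-deployment --timeout=120s
    kubectl get service springbootapp-db-service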

    Once this runs successfully, the job logs under the specific build list the URL endpoint in the console itself. I would also like to share, further down the line, some experience around publishing an nginx container with an Angular app running on it in the same way, so let's keep going. Now we can look up this service with the external IP, or from within a container through the pod IP; however, I cannot access it through the service name and a host of the form service_name + "." + service_namespace + ".svc.cluster.local":port. Our services are in the default namespace and running as expected. I could verify that kube-dns is running and available, with the command output as

    kubectl get pods --namespace=kube-system
    NAME                                    READY   STATUS    RESTARTS   AGE
    kube-dns-7db5546bc6-9k52z               3/3     Running   18         16d
    kube-dns-7db5546bc6-ljlbr               3/3     Running   18         16d
    kube-dns-autoscaler-7fcbdf46bd-zwmz2    1/1     Running   6          16d
    kube-flannel-ds-5mw8r                   1/1     Running   107        61d
    kube-flannel-ds-c2gcf                   1/1     Running   90         61d
    kube-proxy-5v7nw                        1/1     Running   18         27d
    kube-proxy-pgcdq                        1/1     Running   15         27d
    kubernetes-dashboard-7b96874d59-6cvsq   1/1     Running   6          16d
    proxymux-client-10.0.40.2               1/1     Running   4          9d
    proxymux-client-10.0.40.3               1/1     Running   4          9d
    tiller-deploy-864687d7f-t9wqx           1/1     Running   6          16d
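
    A further cross-check at this point, sketched here only as a suggestion, would be to compare the kube-dns Service ClusterIP with the resolver configured inside one of the application pods (the pod name below is a placeholder):

    kubectl get svc kube-dns -n kube-system                  # ClusterIP that pods should use as their DNS server
    kubectl exec -it <app-pod-name> -- cat /etc/resolv.conf  # the nameserver here should match that ClusterIP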

     

    However, on running the lookup, the following error is seen:

     

    [opc@test ~]$ nslookup springbootapp-requisition-service.default.svc.cluster.local
    Server:         XXXX.XX.XX..254
    Address:        YY.YY.Y4#53

    ** server can't find springbootapp-requisition-service.default.svc.cluster.local: NXDOMAIN
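
    For completeness, since the lookup above appears to be run from the VM shell rather than from inside a pod, a hedged way to narrow this down would be to repeat it from a pod (so it goes through kube-dns rather than the node's own resolver) and to confirm the Service actually has endpoints:

    # one-off debug pod; busybox:1.28 is commonly used because its nslookup behaves predictably
    kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- \
      nslookup springbootapp-requisition-service.default.svc.cluster.local
    # confirm the Service exists in the default namespace and has backing endpoints
    kubectl get svc springbootapp-requisition-service -n default
    kubectl get endpoints springbootapp-requisition-service -n default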

     


    Comment

     

    • Ravi Vittal

      Can you please update the ticket with the output from these two commands:

      kubectl get services  springbootapp-po-service

      kubectl get pods