Containers and Kubernetes


Posts

  • John K
    Deploying a basic node app to the cloud (Answered)
    Topic posted September 6, 2019 by John K (Green Ribbon: 100+ Points), tagged Containers, Docker, Kubernetes
    Title:
    Deploying a basic node app to the cloud
    Content:

    Apologies for the basic question, but I just signed up for a trial cloud account and want to deploy a simple hello-world Node app to the cloud. Even Oracle's documentation seems dated: everything I find online references Application Container Cloud, which I don't see in my list of trial account applications, nor do I see anything related to Container Cloud. Does anyone have a tutorial or how-to doc that I could follow to learn how to deploy a simple Node app? Thanks in advance for pointing me in the right direction.
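
    For illustration only: on current trial accounts the usual route is Oracle Container Engine for Kubernetes (OKE) rather than Application Container Cloud. Once a container image of the app is pushed to a registry, a Deployment plus a LoadBalancer Service is enough to expose it. A minimal sketch, assuming a hypothetical image <registry>/hello-node:1.0 and a Node server listening on port 3000:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-node
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: hello-node
      template:
        metadata:
          labels:
            app: hello-node
        spec:
          containers:
            - name: hello-node
              image: "<registry>/hello-node:1.0" # hypothetical image name
              ports:
                - containerPort: 3000 # assumes the Node app listens on 3000
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: hello-node-service
    spec:
      type: LoadBalancer
      ports:
        - port: 80
          targetPort: 3000
      selector:
        app: hello-node

    Applying this with kubectl create -f hello-node.yml should give the Service an external IP that serves the app on port 80.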

    Image:
  • Joydeepta Bhattacharjee
    Kafka broker in Oracle cloud event hub connectivity through...
    Topic posted August 27, 2019 by Joydeepta Bhattacharjee (Red Ribbon: 250+ Points), tagged Containers, Kubernetes
    Title:
    Kafka broker in Oracle cloud event hub connectivity through bootstrap service
    Summary:
    Microservice to connect a Kafka topic and publish message as part of Oracle event hub cloud
    Content:

    Hi Team,

    Can anyone give me clear information about connecting to a Kafka broker in Oracle Event Hub Cloud? ZooKeeper is embedded, so I am not able to validate whether the connector and brokers are active. When the service tries to connect through the public Internet URL of an Event Hub Cloud - Dedicated instance, it times out.
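
    For a quick connectivity check from outside, the console producer that ships with Apache Kafka can be pointed at the bootstrap URL. The exact security settings depend on how the Event Hub instance is exposed, so the properties below (SASL_SSL with PLAIN credentials) are an assumption to adapt, not confirmed Event Hub configuration:

    # client.properties -- assumed security settings; adjust to the Event Hub instance
    security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
      username="<user>" password="<password>";

    # Smoke-test the bootstrap endpoint with the stock Kafka CLI
    kafka-console-producer.sh \
      --broker-list <bootstrap-host>:<port> \
      --topic <topic-name> \
      --producer.config client.properties

    If this also times out, the problem is likely network-level (the listener is not reachable from the public Internet, or security rules block the port) rather than in the microservice code.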

     

  • Joydeepta Bhattacharjee
    Pod to Pod communication with service name should be...
    Topic posted July 20, 2019 by Joydeepta Bhattacharjee (Red Ribbon: 250+ Points), tagged Containers, Docker, Kubernetes, Tip
    Title:
    Pod to Pod communication with service name should be followed with an Ingress resource to realise a decoupled connection
    Summary:
    Instead of accessing a pod IP, which changes with every deployment, I would like to reach the deployment through the Service created for it, but this is not working in my OCI OKE setup.
    Content:

    kubectl describe services kube-dns --namespace kube-system

     

    Name:              kube-dns
    Namespace:         kube-system
    Labels:            addonmanager.kubernetes.io/mode=Reconcile
                       k8s-app=kube-dns
                       kubernetes.io/cluster-service=true
                       kubernetes.io/name=KubeDNS
    Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                         {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kube-d...
    Selector:          k8s-app=kube-dns
    Type:              ClusterIP
    IP:                10.96.5.5
    Port:              dns  53/UDP
    TargetPort:        53/UDP
    Endpoints:         10.244.1.21:53,10.244.1.23:53
    Port:              dns-tcp  53/TCP
    TargetPort:        53/TCP
    Endpoints:         10.244.1.21:53,10.244.1.23:53
    Session Affinity:  None
    Events:            <none>
    [opc@test ~]$

     

    kubectl describe svc my-api

    [opc@test ~]$ kubectl describe svc springbootapp-demo-service
    Name:                     springbootapp-demo-service
    Namespace:                default
    Labels:                   <none>
    Annotations:              <none>
    Selector:                 app=app
    Type:                     LoadBalancer
    IP:                       10.96.157.177
    LoadBalancer Ingress:     132.145.235.116
    Port:                     <unset>  80/TCP
    TargetPort:               8035/TCP
    NodePort:                 <unset>  30963/TCP
    Endpoints:                10.244.0.26:8035,10.244.0.27:8035,10.244.0.30:8035 + 1 more...
    Session Affinity:         None
    External Traffic Policy:  Cluster
    Events:                   <none>

     

    Now when I exec into a pod (kubectl exec -it **Pod) and wget the other pod's Service by its FQDN, it is not reached. I also started a busybox image to debug the kube-dns networking between pods.

    Exec-ing into the pod gives a shell (kubectl exec -it nodejs-deployment-6bffdcb99c-lf8gn sh), and from there I tried to wget the endpoint below, but it is unreachable even though the IP is looked up.

     wget http://springbootapp-demo-service/demo/test
    Connecting to springbootapp-demo-service(10.96.157.177:8035)

    This has now been fixed by renaming the selector label in the deployment yml to a unique name, since all the deployments are in the default namespace.
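
    For anyone hitting the same issue: the describe output above shows the Service selecting app=app, a label shared by several deployments in the default namespace, so the Service matched pods from other deployments. A minimal sketch of the corrected pairing, using a hypothetical unique label springbootapp-demo:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: springbootapp-demo-deployment
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: springbootapp-demo # unique per deployment
      template:
        metadata:
          labels:
            app: springbootapp-demo
        spec:
          containers:
            - name: springbootapp-demo
              image: "<registry>/<repo>/springbootapp-demo:1.0" # hypothetical image path
              ports:
                - containerPort: 8035
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: springbootapp-demo-service
    spec:
      type: ClusterIP
      ports:
        - port: 80
          targetPort: 8035
      selector:
        app: springbootapp-demo # must match the unique label above

    With unique labels per deployment, wget http://springbootapp-demo-service/demo/test from another pod only reaches the intended endpoints.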

  • Patrick Dizon
    SSH to worker node (Answered)
    Topic posted July 19, 2019 by Patrick Dizon (Green Ribbon: 100+ Points), tagged Containers, Kubernetes
    Title:
    SSH to worker node
    Summary:
    cannot SSH to worker node
    Content:

    Hello everyone

    I created a custom Kubernetes cluster without LBaaS. When I added a node pool I specified a public key value, and after that each node has a public IP address. The problem now is that when I SSH to one of the worker nodes, it displays the following and asks for a password for the opc user.

    Does anyone know what is wrong with what I did?
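
    For comparison, a password prompt for opc usually means the expected key was not offered or not accepted. With the public key supplied to the node pool, a key-only login should look roughly like the sketch below (the key path is an assumption):

    # Use the private key matching the public key given to the node pool
    ssh -i ~/.ssh/oke_worker_key opc@<worker-node-public-ip>

    # -v shows which keys the client offers, to confirm the right one is being sent
    ssh -v -i ~/.ssh/oke_worker_key opc@<worker-node-public-ip>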

     

    Image:
  • Karthik Murthy
    How to preserve Source IP for LoadBalancer Service OKE
    Topic posted July 17, 2019 by Karthik Murthy, tagged Containers, Kubernetes 
    Title:
    How to preserve Source IP for LoadBalancer Service OKE
    Summary:
    Unable to preserve source IP for a Kubernetes service exposed as type 'LoadBalancer'
    Content:

    I have deployed a backend service and the NGINX ingress controller as a LoadBalancer service, as documented in https://docs.cloud.oracle.com/iaas/Content/ContEng/Tasks/contengsettingupingresscontroller.htm

    I would like to know whether there is any way to preserve the source IP at the backend pod, i.e. I expect to see the source IP of my external client when the request reaches the backend pod via the load balancer and the NGINX ingress controller.

    Any help is greatly appreciated!
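
    One standard Kubernetes-level option is setting externalTrafficPolicy: Local on the ingress controller's LoadBalancer Service, which avoids the extra SNAT hop so pods see the client address (at the cost of traffic only being routed to nodes that run an ingress controller pod). A sketch of the relevant Service, with names assumed from the ingress-nginx defaults rather than taken from the original setup:

    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx # name/namespace assumed from the ingress-nginx defaults
      namespace: ingress-nginx
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local # preserve the client source IP
      ports:
        - name: http
          port: 80
          targetPort: 80
        - name: https
          port: 443
          targetPort: 443
      selector:
        app: ingress-nginx

    Whether the OCI load balancer forwards the original client address in this mode is worth verifying with a test request after the change.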

    Version:
    12.7
  • Joydeepta Bhattacharjee
    Kubernetes on Oracle OCI OKE : Quick Developer handouts
    Topic posted July 13, 2019 by Joydeepta Bhattacharjee (Red Ribbon: 250+ Points), tagged Containers, Docker, Kubernetes, Registry, Tip
    Title:
    Kubernetes on Oracle OCI OKE : Quick Developer handouts
    Summary:
    An ongoing effort to derive an ideal reference architecture for Kubernetes-based microservices, with sidecar facilities like a service mesh, for developers and architects to relate to Ops.
    Content:

    In preparation for an Oracle Cloud Infrastructure based microservices platform with Kubernetes as the reference standpoint, this post consolidates the salient steps to build a resilient microservices architecture. We could also explore an Istio sidecar implementation on top of this for better governance and delivery, but the tutorial is focused on the developer angle: hosting a consistent infrastructure, with two worker nodes of reasonable size, as a development environment for multiple containers. Below is a basic block diagram for creating the Kubernetes infrastructure. To understand Istio over OKE in more detail, please refer to the attached links.

    http://www.ateam-oracle.com/istio-on-oke

    https://blogs.oracle.com/cloudnative/monitoring-and-visualization-prometheus-and-grafana

    The installation can also be done with clearly articulated steps using Helm charts, which help bundle and install almost all of the required components.
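
    As an illustration, installing the monitoring pieces mentioned here with a Helm 3 client might look like the following; the chart repositories and release names are assumptions, not part of the original write-up:

    # Add the community chart repositories (verify the URLs against current docs)
    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo add grafana https://grafana.github.io/helm-charts
    helm repo update

    # Install Prometheus and Grafana into a dedicated namespace
    kubectl create namespace monitoring
    helm install prometheus prometheus-community/prometheus --namespace monitoring
    helm install grafana grafana/grafana --namespace monitoring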

    Now, based on our understanding of the current OKE version provisioned, we can draw the reference diagram below of Kubernetes on the OCI stack. The OCI Image Registry contains the built Docker images of the app, which are pulled into the Kubernetes pods at runtime when the containers are brought up. On the left, the control plane is shown as the master node with the API server, the etcd database that stores the runtime state of the pods, and the scheduler; these are internal components shipped with the community Kubernetes release. On the right is the data plane: the worker nodes hosting the application containers and services, managed through the kubectl tool. We have provisioned only two worker nodes in OCI OKE, sized to host a minimal Spring Boot based microservice app. The default Ingress load balancer also ships as part of Kubernetes. The API server built into the master node is responsible for routing requests from outside the cluster to the pods, via the scheduler's placement decisions based on the current snapshot in the etcd store.

    As sidecar features, there can be additional installation and configuration of a service mesh like Istio, or of monitoring tools such as Prometheus and Grafana. These are open-source tools supported by the Kubernetes community; they provide many interesting features and can be covered in due time. Kubernetes has become a breakthrough in container orchestration, with a large set of features and capabilities, and vendors like Azure, Amazon and Oracle have embraced it and are steadily maturing their stacks in a phenomenal diversification. Container orchestration has become very important in the cloud world for horizontal scaling, fault tolerance, automation and more. The OCI native monitoring capability, together with the Kubernetes dashboard for configuring metrics and alerts, is also taking an interesting turn, combining the vendor's bare-metal compute capability with the community-driven Ops stack and many native options. It is advisable to provision both masters and workers across multiple ADs within a region, for the compartment assigned to the customer's OCI account, to get better availability and redundancy. In fact, the Oracle documentation states: “To ensure high availability, Container Engine for Kubernetes creates the Kubernetes Control Plane on multiple Oracle-managed master nodes (distributed across different availability domains in a region where supported).”

    Steps to provision VMs, configure workers and perform other basic installations:

    The documentation link from Oracle is pretty straightforward for creating an OCI OKE cluster:

    https://docs.cloud.oracle.com/iaas/Content/ContEng/Tasks/contengcreatingclusterusingoke.htm

    Steps to Configure the OKE cluster:

    To create a cluster, you must either belong to the tenancy's Administrators group, or belong to a group to which a policy grants the CLUSTER_MANAGE permission.

    Once the cluster is configured with the default setup, we are ready to create and deploy a service instance in a pod through the master node of the OCI OKE cluster. We can leverage Developer Cloud Service to build the code and create the Docker image of the Spring Boot app from a job definition in its CI/CD pipeline. Before that, we also create a deployment (pod) configuration file in .yml, which is what finally gets applied for the dockerized solution.

    Now let's go into the detail of creating the pods: from the OCI navigation on the left, open My Services and connect to the Developer Cloud Service instance provisioned from the console. The current release of Developer Cloud Service has been phenomenal for creating a complete cloud-native pipeline to build both Spring Boot REST APIs and UI services, create images, and then deploy to the OKE clusters configured with it. A typical job in the cloud-native build pipeline has steps, configurable from the console, for docker login, Maven build, docker build, and pushing the image to the Registry. I am not aware whether Oracle also allows integrating a third-party registry, which would be needed for interoperability between cloud vendors.
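
    For orientation, the commands behind those build steps look roughly like this when pushing to the OCI Registry; the region key, tenancy namespace and repository name are placeholders:

    # Log in to OCIR with <tenancy-namespace>/<username> and an auth token as the password
    docker login <region-key>.ocir.io -u '<tenancy-namespace>/<username>'

    # Build, tag and push the Spring Boot image
    docker build -t <region-key>.ocir.io/<tenancy-namespace>/firstspringpocdbimage:1.1 .
    docker push <region-key>.ocir.io/<tenancy-namespace>/firstspringpocdbimage:1.1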

    Once the image is built, pushed successfully and versioned in the image repository, the next step in the pipeline is to review the YAML in Git for the deployment and pod creation specific to the target Kubernetes vendor. The Kubernetes tooling already installed in the cloud VM connected to the master is then used to create the pods from the specific Docker image and expose them through the default load balancer / Ingress that comes with the Kubernetes setup. A typical Dockerfile to build a Spring Boot API, and the pod and service definitions published through kubectl, are sampled below. In Developer Cloud Service, all artifacts and build jobs are organised under a project home, so the user accessing DevCS needs the appropriate role for the visibility described above:

    The Dockerfile in the Spring Boot project context root, for a PoC that updates a data store in Oracle Autonomous Database Cloud, would be:

    FROM openjdk:8-jdk-alpine
    VOLUME /tmp
    # Wallet to securely connect to the Oracle Autonomous Database instance from the Docker runtime
    ADD ./src/main/resources/Wallet_KXX /tmp/Wallet_KXX
    COPY target/FirstPOCService-0.0.1-SNAPSHOT.jar app.jar
    ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar","--debug"]
    

    while the pod YAML that creates a Service of type LoadBalancer, registered with the out-of-the-box load balancing shipped with Kubernetes, is as below.

     

    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: app-dbboot-deployment
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: app
      template:
        metadata:
          labels:
            app: app
        spec:
          containers:
            - name: appdbtest
              image: ".............../firstspringpocdbimage:1.1"
              imagePullPolicy: "Always"
              ports:
                - containerPort: 8099 # The application listens on port 8099 in the container
          imagePullSecrets:
            - name: CCCC
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: springbootapp-db-service
    spec:
      type: LoadBalancer # Exposes the service through an external load balancer
      ports:
      - port: 80
        protocol: TCP
        targetPort: 8099
      selector:
        app: app
    
    
    

    Now, the typical steps executed to finally bring the pods to life, with the DevCS automation, are as below:

    echo "" >> $HOME/.oci/config
    echo "pass_phrase=bXXXX" >> $HOME/.oci/config
    cat $HOME/.oci/config
    mkdir -p $HOME/.kube
    oci ce cluster create-kubeconfig --cluster-id ocid.XXXXX --file $HOME/.kube/config --region eu-F>>>
    export KUBECONFIG=$HOME/.kube/config
    kubectl get pods
    kubectl config view
    kubectl get nodes
    kubectl delete service springbootapp-po-service
    kubectl delete deployment app-po-deployment
    kubectl create -f ./PO/po/springboot_db_deploy.yml
    sleep 60
    kubectl get services  springbootapp-po-service
    kubectl get pods

    These commands would be added as a Unix shell build step running kubectl. They are typical commands, common to any managed Kubernetes service from any cloud vendor, and they run on Oracle as expected.

    ------------------------------

    Once this runs successfully, the job logs under the specific build list the URL endpoint in the console itself, so we don't really need to log in to the OCI CLI and deal with black-screen syndrome. I would also like to share, later on, some experience around publishing an NGINX container running an Angular app in the same way. In the meantime, those who are interested are welcome to contribute further and to share the challenges they face adopting Kubernetes on the Oracle stack.

    Further, we have now used an NGINX ingress in front; using the service name created when the pod is deployed removes the dependency on IPs for communication between pods and services. We would recommend creating the pod deployment with a Service of type ClusterIP and decoupling it from the OCI native load balancer. Instead, we would run an NGINX image as a reverse proxy in a pod in front and configure the routing rules to the different backend pods there. Optionally we can define a host in the Ingress resource and edit /etc/hosts or /etc/resolv.conf, or update the Oracle public DNS, to expose it as something like test.Client_domain.com. I would like to preserve this blog link for all of us, as it can help in future.
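
    To make that concrete, a sketch of such an Ingress resource routing a hostname to the ClusterIP service; the host and path are placeholders, and extensions/v1beta1 matches the cluster versions discussed here:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: springbootapp-demo-ingress
      annotations:
        kubernetes.io/ingress.class: nginx # route through the ingress-nginx controller
    spec:
      rules:
        - host: test.Client_domain.com # placeholder host; map it in DNS or /etc/hosts
          http:
            paths:
              - path: /demo
                backend:
                  serviceName: springbootapp-demo-service # the ClusterIP service behind the proxy
                  servicePort: 80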

     

     

    Image:
  • Patrick Dizon
    SSH to worker nodes
    Topic posted July 9, 2019 by Patrick Dizon (Green Ribbon: 100+ Points), tagged Kubernetes
    Title:
    SSH to worker nodes
    Summary:
    SSH to worker nodes
    Content:

    Hello everyone

    I created a Kubernetes cluster using the quick create option and added a public SSH key. But after creating the cluster, none of the worker nodes have public IPs. I read in the documentation that if I add a public key when creating the cluster, the worker nodes will have public IPs.

    Is there something that I missed or am I doing something wrong?

  • enrique ortiz
    kubernetes get ddl
    Topic posted June 17, 2019 by enrique ortiz (Green Ribbon: 100+ Points), tagged Kubernetes, Tip
    Title:
    kubernetes get ddl
    Content:

    Hi all,

    I want to know if there is a way to create a script (Terraform, Ansible) from a manually created Kubernetes cluster, in order to recreate it in another compartment.
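
    There is no export button that I know of, but a starting point is to dump the existing cluster and node pool settings with the OCI CLI and transcribe them into Terraform or Ansible definitions; a sketch with placeholder OCIDs:

    # Dump the cluster definition (networking, Kubernetes version, options)
    oci ce cluster get --cluster-id <cluster-ocid>

    # List the node pools with their shapes, images and sizes
    oci ce node-pool list --compartment-id <compartment-ocid> --cluster-id <cluster-ocid>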

     

    thanks

  • enrique ortiz
    Kubernetes flannel
    Topic posted June 3, 2019 by enrique ortiz (Green Ribbon: 100+ Points), tagged Kubernetes
    Title:
    Kubernetes flannel
    Content:

    Hi all,

    I created a Kubernetes cluster using Oracle Cloud.

    The Kubernetes cluster is version 1.11.5. When we stop and start the cluster nodes and deploy our application, it gives us this error:

    Warning  FailedCreatePodSandBox  33m  kubelet, 130.*.*.*  (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "52e46a26c7a0fcad6b3baaf38119da28b618add1bcef42d6139075f845458026" network for pod "<pod name>": NetworkPlugin cni failed to set up pod "< pod name > " network: open /run/flannel/subnet.env: no such file or directory

    and all network connections to the database stop working. I read that in order to fix this I need to create the flannel network. How can I fix these errors on the cloud Kubernetes instances?
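
    A common first step, sketched here as a suggestion rather than a confirmed fix, is to check and recreate the flannel DaemonSet pods after the node restart so they rewrite /run/flannel/subnet.env; the label selector is the usual kube-flannel one and is an assumption for OKE:

    # Check the flannel pods after the nodes come back (adjust the label if it differs)
    kubectl -n kube-system get pods -l app=flannel -o wide

    # Recreate them so they regenerate /run/flannel/subnet.env on each node
    kubectl -n kube-system delete pods -l app=flannel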

    thanks

    Version:
    V1.11.5
  • Olivier Maurice
    Security problem on FSS hosted volume
    Topic posted May 29, 2019 by Olivier Maurice (Red Ribbon: 250+ Points), tagged Kubernetes
    Title:
    Security problem on FSS hosted volume
    Summary:
    Some pods give a security problem when accessing FSS hosted exports
    Content:

    Hi,

    Not new to Kubernetes but also not an expert. The setting: a Kubernetes cluster (OKE) with the storage behind the PV and PVC residing on File Storage Service (FSS).

    When making a deployment based on Alpine, I can perfectly mount and use the volume in the pod.

    However, when switching to some more meaningful stuff, say MySQL or my latest try Prometheus, I just cannot make it fly. None of these containers can work with the export. In all cases the PV and PVC are bound.

    This is something security-related, but I just can't figure it out. I have tried squashing root, or all users, to 1 or to something in the 65K range; nothing seemed to help.
    I also defined a security context at pod level, to no avail. I am missing something, but it is clear I do not know what.

     

    What I have in place:

    Storageclass

    kind: StorageClass
    apiVersion: storage.k8s.io/v1beta1
    metadata:
      name: oci-fss
    provisioner: oracle.com/oci-fss
    parameters:
      mntTargetId: ocid1.mounttarget.oc1.eu_frankfurt_1.aaaa...aa
    

    PV

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: prometheus-pv
      namespace: monitoring
      labels:
        app: prometheus
    spec:
      storageClassName: oci-fss
      capacity:
        storage: 100Gi
      accessModes:
        - ReadWriteMany
      mountOptions:
        - nosuid
      persistentVolumeReclaimPolicy: Delete
      nfs:
        # Replace this with the IP of your FSS file system in OCI
        server: 10.100.0.3
        # Replace this with the Path of your FSS file system in OCI
        path: "/k8s-prometheus"
        readOnly: false
    

     

    PVC
    
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: prometheus-pvc
      namespace: monitoring
    spec:
      storageClassName: oci-fss
      accessModes:
        - ReadWriteMany
      resources:
        requests:
        # Although storage is provided here it is not used for FSS file systems
          storage: 100Gi
      selector:
        matchLabels:
          app: prometheus
    

     

    Deployment

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: prometheus-deployment
      namespace: monitoring
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: prometheus-server
        spec:
          containers:
            - name: prometheus
              image: prom/prometheus:v2.2.1
              args:
                - "--config.file=/etc/prometheus/prometheus.yml"
                - "--storage.tsdb.path=/prometheus/"
              ports:
                - containerPort: 9090
              volumeMounts:
                - name: prometheus-config-volume
                  mountPath: /etc/prometheus/
                - name: prometheus-storage-volume
                  mountPath: /prometheus/
          volumes:
            - name: prometheus-config-volume
              configMap:
                defaultMode: 420
                name: prometheus-server-conf
            - name: prometheus-storage-volume
              persistentVolumeClaim:
                claimName: prometheus-pvc
                readOnly: false
    

    Log output

    level=error ts=2019-05-29T07:17:48.980589701Z caller=main.go:582 err="Opening storage failed open DB in /prometheus/: open /prometheus/199323036: permission denied"
    
    level=info ts=2019-05-29T07:17:48.980731276Z caller=main.go:584 msg="See you next time!"
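
    For what it is worth, this "permission denied" pattern is typical of the Prometheus container, which runs as the unprivileged user nobody (uid 65534), writing to an export owned by root. One thing worth trying, as a sketch rather than a verified fix and assuming the FSS export still allows root access, is an initContainer in the pod template above that hands the data directory over to that uid before Prometheus starts:

          initContainers: # added under spec.template.spec of the Deployment above
            - name: fix-permissions
              image: busybox
              command: ["sh", "-c", "chown -R 65534:65534 /prometheus"]
              volumeMounts:
                - name: prometheus-storage-volume
                  mountPath: /prometheus/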
    
     
    Thanks for your ideas!
     
    Olivier
    Version:
    Kubernetes v1.11.5-3