Containers and Kubernetes

Comments

  • Kumar Dhanagopal

    On your local host, please change the permissions on your private key (.ssh/id_myk8s_rsa) to 600, and then try SSHing to the node.

    chmod 600 .ssh/id_myk8s_rsa

  • Kumar Dhanagopal

    I suggest creating the K8s cluster in a VCN that has a different CIDR range than the VCN that's used for the DB. That way, you can use local VCN peering to enable the K8s cluster nodes to communicate with the DB using its private address, instead of routing over the public internet.

    See https://docs.cloud.oracle.com/iaas/Content/Network/Tasks/localVCNpeering.htm.
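
    For example, here's a minimal sketch of the peering setup with the OCI Python SDK (all OCIDs are placeholders, and I'm assuming the VirtualNetworkClient local-peering-gateway calls; you still need route rules and security rules on both sides):

    import oci

    # assumes a standard ~/.oci/config profile with permission to manage both VCNs
    config = oci.config.from_file('~/.oci/config')
    vcn_client = oci.core.VirtualNetworkClient(config)

    # create a local peering gateway (LPG) in each VCN; all OCIDs are placeholders
    k8s_lpg = vcn_client.create_local_peering_gateway(
        oci.core.models.CreateLocalPeeringGatewayDetails(
            compartment_id="YOUR_COMP_ID", vcn_id="K8S_VCN_ID",
            display_name="k8s-to-db")).data
    db_lpg = vcn_client.create_local_peering_gateway(
        oci.core.models.CreateLocalPeeringGatewayDetails(
            compartment_id="YOUR_COMP_ID", vcn_id="DB_VCN_ID",
            display_name="db-to-k8s")).data

    # connect the two gateways; you still need route rules pointing each
    # subnet at its LPG, plus security rules for the DB listener port
    vcn_client.connect_local_peering_gateways(
        k8s_lpg.id,
        oci.core.models.ConnectLocalPeeringGatewaysDetails(peer_id=db_lpg.id))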

  • Kumar Dhanagopal

    Per this doc: Using the Console to create a 'Quick Cluster' with Default Settings...

    Note that because worker nodes in a 'quick cluster' are in private subnets, you cannot use SSH to access them directly (see Connecting to Worker Nodes in Private Subnets Using SSH).


  • Joydeepta Bhattacharjee

    Let's request that the Kubernetes SMEs at Oracle comment on the best practices and tools that help with service resiliency, discovery, fault tolerance, auto scaling, and communication standards between multiple APIs and from the UI. I am requesting that Oracle experts contribute, because topics such as CQRS, gRPC communication, and messaging are discussed at length but not standardized from an Oracle OKE perspective.

  • Joydeepta Bhattacharjee

    I am posting a handout for using DevCS to build and deploy to OKE clusters and to monitor them, but I am still not clear on how to adopt CQRS or a better decoupled communication pattern between microservices. Looking for comments.

  • Jairo Rojas Mendez

    My understanding is that you don't have to open TCP 1521 access to everybody. This blog post was showing the easiest (and least secure) way to connect the two services.

    You should be able to tighten the DBCS security by limiting incoming 1521 traffic to the public IP addresses of your containers.
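
    For example, a rough sketch with the OCI Python SDK (the OCIDs and the container IP are placeholders; note that this call replaces the security list's ingress rules, so in practice merge it with the rules you want to keep):

    import oci

    config = oci.config.from_file('~/.oci/config')
    vcn_client = oci.core.VirtualNetworkClient(config)

    # allow incoming 1521 traffic only from a specific container's public IP
    rule = oci.core.models.IngressSecurityRule(
        protocol="6",  # TCP
        source="203.0.113.10/32",  # placeholder: your container's public IP
        tcp_options=oci.core.models.TcpOptions(
            destination_port_range=oci.core.models.PortRange(min=1521, max=1521)))

    # caution: this replaces every ingress rule on the security list
    vcn_client.update_security_list(
        "DB_SECURITY_LIST_ID",
        oci.core.models.UpdateSecurityListDetails(ingress_security_rules=[rule]))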

    Regards

  • Jon-Eric Eliker

    There may be some third-party tools that provide this end-to-end capability, but I cannot say for certain. Otherwise, let me describe what I have done to achieve what you are asking: I have used the Python SDK for OCI to read the existing cluster information and write the corresponding Terraform files.

    For a simple example of using this method for a compartment, consider this...

    import oci
    
    config = oci.config.from_file('~/.oci/config')
    iam_client = oci.identity.IdentityClient(config)
    comp = iam_client.get_compartment(compartment_id = "YOUR_COMP_ID").data
    with open("./comp.tf", "w") as f:
      f.write('resource "oci_identity_compartment" "comp1" {\n')
      f.write('  compartment_id = "{0}"\n'.format(comp.compartment_id))  # note: this is the parent (containing) compartment, not the compartment itself
      f.write('  description = "{0}"\n'.format(comp.description))
      f.write('  name = "{0}"\n'.format(comp.name))
      #I am ignoring tags here
      f.write('}\n')
    

    With a little more work, you can make the script accept YOUR_COMP_ID as a parameter so you can reuse it to "reverse engineer" any compartment. Another option would be to loop through all compartments using something like this:

    import oci
    
    config = oci.config.from_file('~/.oci/config')
    iam_client = oci.identity.IdentityClient(config)
    comps = iam_client.list_compartments(compartment_id = "YOUR_TENANCY_ID").data  # note: I'm ignoring pagination, so only the first page is listed
    # open the file once, outside the loop, so each compartment is appended
    # rather than overwriting the previous one
    with open("./comp.tf", "w") as f:
      for i, comp in enumerate(comps, start=1):
        f.write('resource "oci_identity_compartment" "comp{0}" {{\n'.format(i))
        f.write('  compartment_id = "{0}"\n'.format(comp.compartment_id))  # note: this is the parent (containing) compartment, not the compartment itself
        f.write('  description = "{0}"\n'.format(comp.description))
        f.write('  name = "{0}"\n'.format(comp.name))
        f.write('}\n\n')
    

    This same process can be used to create a Terraform file based on any OCI resource including k8s clusters.

    See this page for the OCI Python SDK documentation about k8s resources ("container engine" resources in OCI terms): https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/api/container_engine.html

    See these pages for Terraform documentation on k8s in OCI: https://www.terraform.io/docs/providers/oci/r/containerengine_cluster.html and https://www.terraform.io/docs/providers/oci/r/containerengine_node_pool.html
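
    For instance, a sketch of the same reverse-engineering approach for a cluster (I'm only covering a few attributes here; the field names mirror the oci_containerengine_cluster Terraform resource):

    import oci

    config = oci.config.from_file('~/.oci/config')
    ce_client = oci.container_engine.ContainerEngineClient(config)

    # read an existing cluster and emit a matching Terraform resource
    cluster = ce_client.get_cluster("YOUR_CLUSTER_ID").data
    with open("./cluster.tf", "w") as f:
      f.write('resource "oci_containerengine_cluster" "cluster1" {\n')
      f.write('  compartment_id = "{0}"\n'.format(cluster.compartment_id))
      f.write('  kubernetes_version = "{0}"\n'.format(cluster.kubernetes_version))
      f.write('  name = "{0}"\n'.format(cluster.name))
      f.write('  vcn_id = "{0}"\n'.format(cluster.vcn_id))
      # options, endpoints, and node pools are ignored here
      f.write('}\n')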

    Jon-Eric
    Mythics, Inc.

  • Olivier Maurice

    Hi,

    I already bumped into this page, but some of these discussions are a little over my head... :)

    Now, one thing I realized is that I start with an empty share and I assume that the software in the pod creates the directories on the share. I guess part of the problem lies there.

    I am not using an operator at this point, only fiddling with the basic 'building blocks'. I will come back with my findings after creating the directories upfront on the share.


    Thanks already for the feedback,

    Olivier

  • Mike Raab

    You might look at this; it may be related:

    https://github.com/coreos/prometheus-operator/issues/830

  • Joydeepta Bhattacharjee

    Thank you. I am also trying to explore Axon, an open-source CQRS framework, to deploy on it, but I am facing challenges with finding the right strategy. Can you give some insight?

  • javier mugueta

    Hi,

    In my opinion, you don't need to look to MongoDB or MySQL or anything else for microservice persistence. With Oracle Database you can store, retrieve, and update JSON easily, you can adopt the "one database per microservice" pattern, you can mix SQL and NoSQL syntax, and you can implement a put/get mechanism; the database service is elastic, scalable, and fault tolerant, it is easy to connect to and disconnect from your microservice pods, and there are drivers for Node, Go, Java, and more...

    Regarding event-driven matters, I think any existing Kafka implementation is a good choice because of its performance, which lets you manage high volumes of messages with near-zero latency (at least if you implement the choreography in robust, fast languages such as Go). And again, if you want or need to persist Kafka messages for a period of time, Oracle Database is a very good option thanks to the high number of IOPS it supports.
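
    For example, a minimal sketch of that put/get mechanism using the SODA API in cx_Oracle (the credentials and DSN are placeholders; assumes an Oracle client and database recent enough to support SODA):

    import cx_Oracle

    # placeholder connection details
    conn = cx_Oracle.connect("user", "password", "dbhost/service_name")
    conn.autocommit = True
    soda = conn.getSodaDatabase()

    # "one collection per aggregate": put a JSON document, then get it back by key
    orders = soda.createCollection("orders")  # opens the collection if it already exists
    doc = orders.insertOneAndGet(soda.createDocument({"orderId": 1, "status": "NEW"}))
    fetched = orders.find().key(doc.key).getOne().getContent()
    print(fetched)  # {'orderId': 1, 'status': 'NEW'}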

    Regards 

  • Olivier Maurice

    Hi,

    For anyone looking for this in the future: the combination of OKE and File Storage allows you to share volumes between the OKE worker nodes.

    Olivier

  • Saravanan Jothilingam

    Hi Vikram,

    Where can I get basic documentation on Docker image creation for beginners?


    Thanks.

  • Alex_D_Oracle
  • Shivin Vijai

    One of my use cases is:

    I have application clusters in one node pool and a UI testing framework in another pool.

    From my portal, there may be "n" customers, each with their own application repo and testing repo, and I need to create pools for each on demand.
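
    A rough sketch of that on-demand pool creation with the OCI Python SDK (all OCIDs, plus the image, shape, and version, are placeholders, and the exact fields vary by SDK version):

    import oci

    config = oci.config.from_file('~/.oci/config')
    ce_client = oci.container_engine.ContainerEngineClient(config)

    def create_customer_pool(customer_name):
      # one node pool per customer, created on demand; OCIDs are placeholders
      details = oci.container_engine.models.CreateNodePoolDetails(
          compartment_id="YOUR_COMP_ID",
          cluster_id="YOUR_CLUSTER_ID",
          name="pool-{0}".format(customer_name),
          kubernetes_version="v1.13.5",
          node_shape="VM.Standard2.1",
          node_image_name="Oracle-Linux-7.6",
          subnet_ids=["WORKER_SUBNET_ID"],
          quantity_per_subnet=1)
      # returns immediately; poll the work request to know when the pool is ready
      return ce_client.create_node_pool(details)

    response = create_customer_pool("customer1")
    print(response.headers.get("opc-work-request-id"))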