Based on customer input: can I scale worker nodes in and out in OKE?
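For context, OKE node pools can be scaled in and out from the console or the OCI CLI; a minimal sketch, assuming the oci ce node-pool update command and that your CLI version exposes a --size parameter (verify both against your CLI help, as the exact flag has changed across versions):

# scale the node pool to three worker nodes (flag name is an assumption to verify)
oci ce node-pool update --node-pool-id <node-pool-ocid> --size 3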
Hi,
I was reading up on connecting to DBCS from a container in OCCS, and I see it requires opening connections to DBCS from everywhere (the rule uses "PUBLIC-INTERNET" as the Source and "DB" as the Destination, on TCP port 1521).
This means the DB accepts connections from anywhere, which doesn't sound like the ideal setup to me...
Is there a plan to add OCCS to other Oracle Cloud services as a "source"? That is, a way to allow connections to my DBCS instance only from OCCS containers and not from the public internet?
Or is OCCS for now something completely separate from the other Oracle Cloud services, so that we shouldn't expect "intra-Oracle-Cloud" rules for communication between services?
The question could be asked in both places, DBCS and here, but as you are the last to join the party... (as far as I know).
I am trying to create an app using the Kubernetes Dashboard deployment method. It prompts me to enter the following:
Enter the URL of a public image on any registry, or a private image hosted on Docker Hub or Google Container Registry.
Does anyone know of any sample container images (like a hello-world or nginx image that would display a sample "Hello World" page in the browser) that can be pulled from any Oracle repositories?
Please suggest.
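For what it's worth, any public image works in that prompt; a minimal sketch using the public nginx image from Docker Hub (deployment and service names are just placeholders), driven with kubectl rather than the dashboard:

# deploy the public nginx image and expose it through a load balancer
kubectl create deployment hello-nginx --image=nginx
kubectl expose deployment hello-nginx --port=80 --type=LoadBalancer
kubectl get service hello-nginx   # note the external IP, then open it in a browser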
In preparation for an Oracle Cloud Infrastructure based microservices platform with Kubernetes as the reference point, this post consolidates the salient steps to build a resilient microservices architecture. We could also explore an Istio sidecar implementation on top of this for better governance and delivery, but this tutorial focuses on the developer angle: hosting a consistent infrastructure for a development environment with two worker nodes of reasonable size to run multiple containers. Below is a basic block diagram for creating a Kubernetes infrastructure. To understand Istio over OKE in more detail, please refer to the attached links:
http://www.ateam-oracle.com/istio-on-oke
https://blogs.oracle.com/cloudnative/monitoring-and-visualization-prometheus-and-grafana
The installation can also be done with clearly articulated steps using Helm charts, which help bundle and install almost all the required components.
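As an illustration, a minimal sketch of the Helm route, assuming Helm 3 and the prometheus-community chart repository (repository URL and chart name are assumptions to verify):

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
# installs Prometheus, Grafana and related exporters into a monitoring namespace
helm install monitoring prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace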
Now, based on the current version of OKE provisioned, we can draw the reference diagram below for Kubernetes on the OCI stack. The OCI Image Registry contains the built Docker images of the application, which are pulled into the Kubernetes pods at runtime when the containers are brought up. On the left, the control plane is shown as the master node, with the API server, the etcd database that stores runtime information about the pods, and the scheduler; these internal components ship as part of the community Kubernetes release. On the right-hand side is the data plane: the worker nodes that host the application containers and services, managed with the kubectl tool. We have provisioned only two worker nodes in OCI OKE, with optimum sizing to host a minimal Spring Boot based microservice app. The default Ingress/load balancer also ships as part of Kubernetes. The API server, built into the master node, is responsible for routing requests from outside the cluster to the pods via the scheduler, which selects nodes at runtime based on the current snapshot in the etcd store.
As part of the sidecar features, there could be additional installation and configuration of a service mesh like Istio, or of monitoring tools like Prometheus and Grafana. These are open-source tools supported by the Kubernetes community and provide many interesting features, which can be covered in due time. Kubernetes has become a breakthrough for container orchestration, with a large set of features and capabilities, and vendors like Azure, Amazon and Oracle have started embracing it and steadily maturing their stacks in a phenomenal diversification. Container orchestration has become very important in the cloud world for horizontal scaling, fault tolerance, automation and more. The OCI native monitoring capability, together with the Kubernetes dashboard for configuring metrics and alerts, is also an interesting way to combine the vendor's bare metal compute capability with the community-driven ops stack and its many native options. It is advisable to provision both the master and the workers across multiple ADs within a region, in the compartment assigned to the customer's OCI account, for better availability and redundancy. In fact, as the Oracle documentation states: "To ensure high availability, Container Engine for Kubernetes creates the Kubernetes Control Plane on multiple Oracle-managed master nodes (distributed across different availability domains in a region where supported)."
Steps to provision VMs, configure workers and perform other basic installations:
The documentation from Oracle is pretty straightforward for creating an OCI OKE cluster:
https://docs.cloud.oracle.com/iaas/Content/ContEng/Tasks/contengcreatingclusterusingoke.htm
Steps to Configure the OKE cluster:
To create a cluster, you must either belong to the tenancy's Administrators group, or belong to a group to which a policy grants the CLUSTER_MANAGE permission.
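For example, a policy statement along the following lines grants that permission (group and compartment names are placeholders):

Allow group <k8s-admins-group> to manage cluster-family in compartment <compartment-name>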
Once the cluster is configured with all the default setup, we are ready to create and deploy a service instance in a pod through the master node of the OCI OKE cluster. We can leverage Developer Cloud Service to build the code and create the Docker image of the Spring Boot application from a job definition in a CI/CD pipeline in Developer Cloud. Before that, we also create a deployment (pod definition) configuration file in .yml, which is finally loaded as part of the dockerized solution.
Now let's go into detail on creating the pods. From My Services in the OCI navigation on the left, connect to the Developer Cloud Service instance provisioned from the console. The current release of Developer Cloud Service has been phenomenal for creating a complete native pipeline to build both Spring Boot REST API and UI services, create images, and then deploy to the OKE clusters configured with it through its cloud-native job definitions. A typical job in the build pipeline has steps, configurable from the console, for docker login, Maven build, docker build, and pushing the image to the registry. I am not aware whether Oracle also allows integrating a third-party registry, which would be needed for interoperability between cloud vendors.
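Roughly, the login/build/push steps that such a job runs look like the following; a minimal sketch, assuming Oracle Cloud Infrastructure Registry (OCIR) as the target, with placeholder region key, tenancy namespace and user (the password is an auth token):

# log in to OCIR using an auth token as the password
docker login <region-key>.ocir.io -u '<tenancy-namespace>/<username>'
# build the Spring Boot jar, then build and push the image
mvn clean package
docker build -t <region-key>.ocir.io/<tenancy-namespace>/firstspringpocdbimage:1.1 .
docker push <region-key>.ocir.io/<tenancy-namespace>/firstspringpocdbimage:1.1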
Once the image is built, pushed and versioned successfully in the image repository, the next step in the pipeline is to review the YAML in Git for the deployment and pod creation specific to the target Kubernetes vendor. The Kubernetes tooling already installed on the cloud VM, connected to the master, is then used to sequentially create the pods from the specific Docker image at runtime and register them with the default inbuilt load balancer/Ingress that ships as part of the Kubernetes setup. A typical Dockerfile to build a Spring Boot API, and the pod and service definitions published through kubectl, are sampled below. In Developer Cloud Service all the artifacts and build jobs are organised under a project home, so the user accessing DevCS needs the appropriate role for the visibility described above:
The Dockerfile in the Spring Boot project context root, for a PoC that updates a data store in Oracle Autonomous Database Cloud, would be:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
# Wallet used to securely connect to the Oracle Autonomous DB instance from the container runtime
ADD ./src/main/resources/Wallet_KXX /tmp/Wallet_KXX
COPY target/FirstPOCService-0.0.1-SNAPSHOT.jar app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar","--debug"]
while the pod YAML, which creates a service of type LoadBalancer to register with the out-of-the-box Ingress in Kubernetes, is as below.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: app-dbboot-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: appdbtest
        image: ".............../firstspringpocdbimage:1.1"
        imagePullPolicy: "Always"
        ports:
        - containerPort: 8099 # the container listens on port 8099; the service below maps port 80 to it
      imagePullSecrets:
      - name: CCCC
---
apiVersion: v1
kind: Service
metadata:
  name: springbootapp-db-service
spec:
  type: LoadBalancer # exposes the service externally through a provisioned load balancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8099
  selector:
    app: app
Now, the typical steps that are ultimately executed to bring the pods to life, with the mighty DevCS automation, are as below:
echo "" >> $HOME/.oci/config
echo "pass_phrase=bXXXX" >> $HOME/.oci/config
cat $HOME/.oci/config
mkdir -p $HOME/.kube
oci ce cluster create-kubeconfig --cluster-id ocid.XXXXX --file $HOME/.kube/config --region eu-F>>>
export KUBECONFIG=$HOME/.kube/config
kubectl get pods
kubectl config view
kubectl get nodes
kubectl delete service springbootapp-po-service
kubectl delete deployment app-po-deployment
kubectl create -f ./PO/po/springboot_db_deploy.yml
sleep 60
kubectl get services springbootapp-po-service
kubectl get pods
These commands are added as a Unix shell build step and run against the cluster with kubectl. They are typical commands, common to any managed Kubernetes service from any cloud vendor, and they run gracefully on Oracle as expected.
Once this runs successfully, the job logs under the specific build list the URL endpoint in the console itself, so we don't really need to log in to the OCI CLI and deal with black-screen syndrome. Down the line I would also like to share some experience around publishing an nginx container running an Angular app in the same way. Meanwhile, those who are interested are welcome to contribute further or share the challenges they face in adopting this Kubernetes stack from Oracle.
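For reference, the same endpoint can also be read back from the cluster with kubectl when needed; a small sketch:

kubectl get service springbootapp-db-service
# or just the load balancer's public IP:
kubectl get service springbootapp-db-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}'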
Further, we have now used an NGINX ingress/load balancer, and by using the service name created when the pod is deployed we can remove the dependency on IP addresses for communication between pods and services. We would recommend creating the pod deployment with a service of type ClusterIP and decoupling it from the OCI native load balancer. Instead, we download an NGINX image to act as a reverse proxy in a pod in front, and configure the routing rules to the different backend pods from there. Optionally we can define a host in the Ingress resource and edit the local hosts file/resolver configuration, or update the Oracle public DNS, to expose it as something like test.Client_domain.com. I would like to preserve this blog link for all of us, which can help in future.
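A minimal sketch of that approach, assuming an NGINX ingress controller is already installed in the cluster, a Kubernetes version with the networking.k8s.io/v1 Ingress API, and the hypothetical host test.Client_domain.com:

apiVersion: v1
kind: Service
metadata:
  name: springbootapp-db-service
spec:
  type: ClusterIP            # internal only; no OCI load balancer is created for this service
  ports:
  - port: 80
    targetPort: 8099
  selector:
    app: app
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: springbootapp-ingress
spec:
  ingressClassName: nginx    # assumes the NGINX ingress controller
  rules:
  - host: test.Client_domain.com    # hypothetical host; map it in DNS or the local hosts file
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: springbootapp-db-service
            port:
              number: 80

With this in place the backend service is reachable only inside the cluster, and external traffic flows solely through the ingress controller.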
With microservices being so evident, and Oracle finally coming up with a host of Docker images for SOA 12c and subsequently OCI OKE, we have been overwhelmed with various problem statements about designing an effective microservices design pattern. With this note, I would like to explore the path of event-based mechanisms, and rely less on orchestrating the pre-conceived steps we were accustomed to in the traditional monolithic integration space.
Challenges in the Problem Space #
1. Breaking the domain down into DDD business entities and developing CRUD operations, which eventually form the basis for commands
2. Storing the updates, with large business data, in a self-contained data store for that business context or domain
3. Publishing events that reflect application changes, to decouple subscribers and listeners
4. Maintaining atomicity between the action and the event generated and published, allowing compensation on failure
5. Keeping queries in sync with changes to the event store in near real time, given the time lag involved
6. Complexity of forming queries across disparate microservice data stores, and the choice between materialized views and newer document-based storage such as MongoDB, which supports relational + document structures
Implementation of CQRS #
It seems that, as the traditional Oracle Cloud BOM does not include NoSQL or document-driven storage such as MongoDB for custom queries, we have to rely on materialized views.
A Spring Boot API-driven framework would use its repository beans to update the event records in MySQL.
The trigger that updates the Order table should also manage the transaction that posts to the event store and publishes the events, without relying on 2PC.
The application layer may implement a façade with a dispatcher and router to efficiently route the events to the event store.
The same dispatcher may be used to serve queries from the materialized view to the presentation layer, to consumers or subscribers, or perhaps to a reporting and analytics engine.
With all these considerations in mind, can we suggest a reference architecture, with some framework from Oracle, to implement the pattern efficiently?
These points will become more evident as the time has come to engineer microservices that lead change and adapt to the business needs of a connected world of streams, bots, devices and legacy COTS that will more or less have to co-exist, at least for a while. We also have to look at OKE infrastructure support for event-based messaging and storage, and build a standard architecture to achieve resiliency.
kubectl describe services kube-dns --namespace kube-system
Name: kube-dns
Namespace: kube-system
Labels: addonmanager.kubernetes.io/mode=Reconcile
k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kube-d...
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.96.5.5
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 10.244.1.21:53,10.244.1.23:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 10.244.1.21:53,10.244.1.23:53
Session Affinity: None
Events: <none>
[opc@test ~]$
Then I ran kubectl describe svc against my API service:
[opc@test ~]$ kubectl describe svc springbootapp-demo-service
Name: springbootapp-demo-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=app
Type: LoadBalancer
IP: 10.96.157.177
LoadBalancer Ingress: 132.145.235.116
Port: <unset> 80/TCP
TargetPort: 8035/TCP
NodePort: <unset> 30963/TCP
Endpoints: 10.244.0.26:8035,10.244.0.27:8035,10.244.0.30:8035 + 1 more...
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Now when I exec into a pod (kubectl exec -it <pod>) and wget the other pod's service by FQDN, it is not reached. I also ran a busybox image to debug the kube-dns networking between the pods.
Exec into the pod takes me to a prompt (kubectl exec -it nodejs-deployment-6bffdcb99c-lf8gn sh), and I tried to wget the dummy endpoint below, but it is unreachable even though the IP is looked up.
wget http://springbootapp-demo-service/demo/test
Connecting to springbootapp-demo-service(10.96.157.177:8035)
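One way to narrow this down is to resolve and probe the service from a throwaway pod; a minimal sketch (busybox:1.28 is used because its nslookup behaves well inside clusters):

kubectl run dns-debug --rm -it --image=busybox:1.28 --restart=Never -- \
  nslookup springbootapp-demo-service.default.svc.cluster.local
# probe through the service port (80), which forwards to targetPort 8035
kubectl run http-debug --rm -it --image=busybox:1.28 --restart=Never -- \
  wget -qO- http://springbootapp-demo-service.default.svc.cluster.local/demo/test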
This has now been fixed by renaming the selector label in the deployment yml to a unique name, since the deployments are all in the default namespace.
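In other words, each deployment/service pair should carry its own label value instead of the shared app: app; a minimal, illustrative sketch (names, image and label values are placeholders, not the original manifests):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-web            # unique per deployment instead of the shared "app: app"
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
      - name: web
        image: nginx           # placeholder image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-web-service
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: demo-web              # must match the deployment's pod labels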
Hi all,
I want to know if there is a way to create a script (Terraform, Ansible) from a manually created Kubernetes cluster, in order to recreate it in another compartment.
Thanks
Hi,
Use case: a POD that only has 1 replica, but with a PVC attached. In this particular case the nodepool has 2 worker nodes, 1 in each AD.
In the yaml for the PVC you have to state the AD you want the storage to be created in.
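For reference, this is roughly what the AD pinning looks like in the claim; a minimal sketch, assuming the legacy "oci" block-volume storage class, with an illustrative AD label value:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  storageClassName: "oci"      # legacy OKE block-volume storage class (assumption)
  selector:
    matchLabels:
      failure-domain.beta.kubernetes.io/zone: "PHX-AD-1"   # illustrative AD value
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi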
Now I am wondering: if somehow the pod gets moved from the worker node in AD1 to the worker node in AD2, does all the data contained on the PVC follow to the other AD?
On GKE (Google) you can provision something like a 'regional' persistent volume, spanning different zones. Is anything like this available on OKE?
And what about the availability of the data across the ADs?
Is it an option to look at the File Storage Service to share the storage between the worker nodes? But how do you tell OKE to pipe the claim to the File Storage Service?
Thanks for your input,
Olivier
Can I run Fn on OKE?