

    Robin Chatterjee
    Topic posted June 10, 2019 by Robin Chatterjee, tagged DBaaS, DBCS
    Is there any way to get Guaranteed IOPS in DBCS apart from dense io/high io
    IOPS sizing in DBCS

    Hi. As per the Oracle documentation, the IOPS provided per block volume scales at a rate of around 60 IOPS per GB, up to a ceiling of 25,000 IOPS, which is hit at 500 GB.
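    As a minimal sketch of that per-volume model (the 60 IOPS/GB rate and 25,000 IOPS cap are the figures quoted above; the function name is made up for illustration):

```python
# Sketch of the per-block-volume IOPS model described above
# (assumed figures: 60 IOPS per GB, capped at 25,000 IOPS per volume).

IOPS_PER_GB = 60
MAX_IOPS_PER_VOLUME = 25_000

def volume_iops(size_gb: float) -> float:
    """Estimated IOPS for a single block volume of the given size."""
    return min(size_gb * IOPS_PER_GB, MAX_IOPS_PER_VOLUME)

print(volume_iops(500))     # a 500 GB volume already reaches the 25,000 IOPS cap
print(volume_iops(32_768))  # a 32 TB volume gets no more IOPS than a 500 GB one
```

    This is what makes the cap unintuitive: past 500 GB, extra capacity buys no extra IOPS on a single volume.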

    When running a database on IaaS I can control the number of block volumes I use in order to maximize IOPS, i.e. I use an ASM disk size of 500 GB.

    But is there any way to do this in DBCS? Given that a disk can grow to 32 TB in size, and a 32 TB disk has the same IOPS as a 500 GB disk, is there some algorithm in DBCS which prevents IOPS starvation by going with a more sensible size? I could not find any way to set this when creating a DB system, and it's not clear to me whether DBCS uses a sensible size for each ASM volume when, for example, creating a two-node RAC cluster. Previously, when 2 TB was the largest block volume size, this was much more reasonable.






    • Simon Law

      It's not just about storage IOPS: the network bandwidth of your VM also affects IOPS and MBps. A DBCS VM has the same IOPS characteristics as a Compute VM, so check out the different VM shapes and their network bandwidth.

      • Robin Chatterjee

        By the way, does OCI actually connect to storage over VNICs, or is it over InfiniBand? Given that all the hardware is supposed to support InfiniBand, and that the iSCSI interface is independent of the public interface, I suspect the storage bandwidth has no relationship to the NIC bandwidth, though I am not sure this is published anywhere.

        At least in OCI Classic I believe it was supposed to match the Cloud@Customer setup, and there the storage traffic is all over InfiniBand, not Ethernet.


    • Robin Chatterjee

      I would assume network bandwidth would have an impact on throughput, i.e. MBps, but I am not sure how it would relate to IOPS.


      Actually, the issue I have is that Oracle caps the IOPS per block volume at 25,000, which is not intuitive. In IaaS, i.e. normal Compute, we have the option of choosing the number and size of each block volume to maximise IOPS: for example, using a 500 GB volume size we can attach 32 such volumes for 16 TB of raw storage with 800K IOPS. If we layer ASM on top with triple redundancy we still get roughly 5.3 TB usable (the mirroring cuts write IOPS by 3 but preserves read IOPS).
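      The many-small-volumes argument above can be sketched as follows (figures are the ones from this thread; the function is illustrative, not any Oracle API):

```python
# Sketch of the IaaS sizing argument: many small volumes, each at the
# per-volume IOPS cap, beat one large volume of the same raw capacity.
# Assumed figures from the thread: 60 IOPS/GB, 25,000 IOPS cap per volume.

IOPS_PER_GB = 60
MAX_IOPS_PER_VOLUME = 25_000

def aggregate_iops(volume_size_gb: int, volume_count: int) -> int:
    """Total IOPS across identically sized block volumes striped together."""
    per_volume = min(volume_size_gb * IOPS_PER_GB, MAX_IOPS_PER_VOLUME)
    return per_volume * volume_count

# 32 x 500 GB volumes: 16 TB raw, 32 * 25,000 = 800,000 IOPS
print(aggregate_iops(500, 32))

# One 16 TB volume: same raw capacity, but only 25,000 IOPS
print(aggregate_iops(16_384, 1))
```

      This is exactly the control that is missing in DBCS: the volume count and size are chosen for you.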

      The problem is that with DBCS the number of block volumes is unknown; it is beyond our control. All we know is that there must be at least 3, as the storage is triply mirrored, and given the documentation:

      For Oracle Cloud Infrastructure:

      • When creating a deployment: you can create a database of up to 9600 GB (9.3 TB) with backups to both cloud and local storage or up to 16 TB with backups to cloud storage only or no backups.

      • By adding more storage: 28 scale-up operations, each of up to 16 TB, are supported. Thus, the deployment can accommodate a database of up to 158 TB with backups to both cloud and local storage or up to 386 TB with backups to cloud storage only or no backups. However, if you need databases of such large sizes, you should consider using Oracle Database Exadata Cloud Service instead of Oracle Database Cloud Service.

      This does not make it clear what the size of each disk would be...


      Oracle Cloud Infrastructure supports 32 block storage volumes attached to a compute node, of which 4 are used when the database deployment is created. Thus, you have 28 opportunities to scale up storage.

      In each scale-up operation, you can create a storage volume of 50 GB to 16384 GB (16 TB) in 50 GB increments. The deployment is put into Maintenance status during the operation.
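      The attachment arithmetic in the quoted passage works out as follows (the 32-attachment and 4-volume figures come from the quote; the reason the documented database-size ceilings are lower than the raw maximum is my guess):

```python
# Rough arithmetic behind the quoted documentation (assumed: 32 block volume
# attachments per compute node, 4 consumed when the deployment is created).

MAX_ATTACHMENTS = 32
USED_AT_CREATION = 4

scale_up_ops = MAX_ATTACHMENTS - USED_AT_CREATION
print(scale_up_ops, "scale-up opportunities")  # 28, as the docs say

# Each scale-up can add up to 16,384 GB; the raw maximum sits well above the
# documented database-size ceilings (158 TB / 386 TB), presumably because
# some of that storage goes to backups, redo, and mirroring.
max_added_gb = scale_up_ops * 16_384
print(max_added_gb // 1024, "TB of additional raw storage at most")  # 448 TB
```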


      So from this I am surmising that the ASM disk size is 50 GB and that each block volume added is being sliced up into 50 GB partitions. The problem is that the volume sizes are variable.
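      If that surmise is right, the slicing is simple integer arithmetic (the 50 GB figure is the one surmised above, not anything Oracle documents as the ASM disk size):

```python
# Sketch of the surmise above: if ASM disks are a fixed 50 GB, any scale-up
# volume (sized in 50 GB increments per the docs) slices evenly into ASM disks.

ASM_DISK_GB = 50  # assumed fixed ASM disk size, per the surmise above

def asm_disks_in_volume(volume_gb: int) -> int:
    """How many 50 GB ASM disks a scale-up volume would be carved into."""
    assert volume_gb % ASM_DISK_GB == 0, "scale-up sizes come in 50 GB increments"
    return volume_gb // ASM_DISK_GB

print(asm_disks_in_volume(500))    # 10 ASM disks
print(asm_disks_in_volume(1_000))  # 20 ASM disks
```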