Adjusting the size (cpu, mem) of a full install

We want to add the ability to shrink resource requirements for the deployments that make up our install. I’m sure others have done the same.

How do people typically do this? Do they make every deployment individually configurable (cpu and memory overrides), or do they have a single field that lets them request, say, half the size of the entire install? And how have you seen people present this in the admin console UI?

Hello :wave: I think my answer here depends on your requirements and the experience you want for your end customers. If you want them to be able to override and set the cpu/memory limit values directly, that’s a straightforward path: you could provide an Advanced Settings section where you allow them to set the amount of cpu/memory available. If instead you want to abstract away the need for the user to set specific cpu/memory values and base them on some other criteria the user selects, that will require some additional templating and configuration on your end.
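
For the first path, here is a rough sketch of what that Advanced Settings group could look like in config.yaml — the group name, item names, and defaults are just placeholders for illustration, not from a real install:

    apiVersion: kots.io/v1beta1
    kind: Config
    metadata:
      name: my-app-config
    spec:
      groups:
        - name: advanced_settings
          title: Advanced Settings
          description: Optional overrides for resource requests and limits.
          items:
            - name: my_service_cpu_limit
              title: MyService CPU Limit
              help_text: CPU limit for the my-service pods, e.g. 500m or 2.
              type: text
              default: "1"
            - name: my_service_memory_limit
              title: MyService Memory Limit
              help_text: Memory limit for the my-service pods, e.g. 512Mi or 2Gi.
              type: text
              default: "1Gi"

Those values can then be passed straight through to the chart’s resource requests/limits the same way the replicas example below passes through a ConfigOption.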

I haven’t seen an exact example of this from our other customers, but I have seen others use a ConfigOption that determines whether the application should be deployed as HA or not. When HA is enabled, more than one replica of each service is deployed, and the same flag could in theory adjust the cpu/memory limits on the backend as well.

config.yaml

      # these items live under spec.groups[].items in the KOTS Config spec
      - name: is_ha
        type: bool
        title: HA Environment
        help_text: Select if the deployment is a cluster and has multiple nodes.
        default: "0"
      - name: my_service_replicas
        title: Number of MyServices Replicas
        help_text: Enter the number of pods in the replica set
        type: text
        required: true
        default: "3"
        when: '{{repl ConfigOptionEquals "is_ha" "1"}}'

kots-helm-values.yaml

    my-service:
      replicas: '{{repl if ConfigOptionEquals `is_ha` `1`}}{{repl ConfigOption `my_service_replicas` }}{{repl else}}1{{repl end}}'
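
And to follow through on the “change their cpu/memory limits on the backend as well” part, the same flag can drive resource values in that same values block. This is a sketch only — it assumes the chart exposes a resources block under my-service, and the cpu/memory numbers are made up:

    my-service:
      resources:
        limits:
          # HA installs get larger limits; single-node installs keep the smaller defaults
          cpu: '{{repl if ConfigOptionEquals `is_ha` `1`}}2{{repl else}}500m{{repl end}}'
          memory: '{{repl if ConfigOptionEquals `is_ha` `1`}}4Gi{{repl else}}1Gi{{repl end}}'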

So you could have a Config Option that says “shrink to half”, or something more tangible to the environment and the user’s requirements: “HA enabled”, “dataset size”, “which models are in use”, etc.
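
As one example, a single sizing option could be a select_one item, with the resource values templated off of it the same way the is_ha example above templates replicas. The install_size name and tier titles here are made up for illustration:

      - name: install_size
        title: Install Size
        help_text: Controls the cpu/memory requests and limits for all services.
        type: select_one
        default: standard
        items:
          - name: small
            title: Small (half-size)
          - name: standard
            title: Standard

Then in the values you would check ConfigOptionEquals "install_size" "small" to pick the smaller set of requests/limits.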

Just a few ideas there. There’s nothing wrong at all with passing the cpu/memory requests/limits values through directly if you think your end customers will need that.