[vmware, ssvm] Scale down of ssvm #6042
Conversation
@blueorangutan package

@borisstoyanov a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.

Packaging result: ✔️ el7 ✔️ el8 ✔️ debian ✔️ suse15. SL-JID 2708

@blueorangutan package

@sureshanaparti a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.

Packaging result: ✔️ el7 ✔️ el8 ✔️ debian ✔️ suse15. SL-JID 2709

@blueorangutan test centos7 vmware-67u3

@nvazquez a Trillian-Jenkins test job (centos7 mgmt + vmware-67u3) has been kicked to run smoke tests

Trillian test result (tid-3432)
borisstoyanov
left a comment
LGTM, manually checked that the scale-down mechanism works fine. This should keep the system VMs up until they are destroyed explicitly.
Description
This PR addresses the SSVM scaling issue noticed on VMware.
When there are active commands, i.e. the number of entries in the `cmd_exec_log` table, and the SSVM standby capacity (`secstorage.capacity.standby`) is higher than the maximum sessions an SSVM can handle (`secstorage.session.max`), the number of SSVMs is scaled up. However, when there aren't any active commands, logic is added to scale down the SSVMs.

Fixes: #6038
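The scaling decision described above can be sketched roughly as follows. This is a minimal illustration, not CloudStack code; the parameter names (`active_cmds`, `running_ssvms`, `standby_capacity`, `session_max`) are stand-ins for the `cmd_exec_log` entry count and the `secstorage.capacity.standby` / `secstorage.session.max` settings:

```python
def ssvm_scaling_action(active_cmds: int, running_ssvms: int,
                        standby_capacity: int, session_max: int) -> str:
    """Decide whether to scale the SSVM pool up, down, or leave it alone.

    active_cmds      -- number of entries in cmd_exec_log (in-flight commands)
    running_ssvms    -- SSVMs currently running in the zone
    standby_capacity -- secstorage.capacity.standby
    session_max      -- secstorage.session.max (sessions one SSVM can handle)
    """
    # Sessions we want headroom for: in-flight work plus the standby buffer.
    required = active_cmds + standby_capacity
    # Sessions the current fleet can serve.
    available = running_ssvms * session_max

    if required > available:
        return "scale-up"      # not enough capacity: launch another SSVM
    # Behaviour added by this PR (sketched): with no active commands,
    # shrink the pool back, but always keep at least one SSVM running.
    if active_cmds == 0 and running_ssvms > 1:
        return "scale-down"
    return "no-op"
```

For example, a zone with two SSVMs and no in-flight commands would now be scaled back down to one, whereas the previous behaviour only ever scaled up.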
Types of changes
Bug Severity
Screenshots (if appropriate):
How Has This Been Tested?