How does the vCPU align with the CPU? (VM with more cores than the CPU has)
I have a VM running on vSphere 6.5 which has 24 vCPUs.

The server has two physical CPUs (Xeon E5-2699 v4), each with 22 cores, and hyperthreading is enabled.

How exactly are the vCPUs scheduled onto the physical CPUs? Would it be better to reduce the vCPU count to 22 so that the VM could run on one physical CPU, or would vSphere use multiple physical CPUs in this case anyway?
vmware-vsphere central-processing-unit vcpu
asked Feb 22 at 7:47
user3235860
4 Answers
A single VM should never have more virtual CPUs than the host has logical cores.

With hyperthreading enabled you have 44 logical cores, so your configuration fits within that limit. However, actual performance depends heavily on how many other VMs are running on that host. Keep in mind how the CPU scheduler of the ESXi host works: for each scheduling cycle it waits until a logical core is available for every virtual CPU of the VM. In your case it will wait until 24 logical cores are free before the VM can run. With many other VMs on the host, this can lead to high CPU ready time and a very slow VM.

Personally, I try to keep VMs at 8 vCPUs or fewer. If you can, scale your VMs out rather than up.

Another consideration: given the current state of mitigations against Spectre and Meltdown attacks, it is generally recommended to disable hyperthreading, because doing so reduces the possible attack surface. If you disable hyperthreading, your 24-vCPU configuration will most likely no longer be usable.
answered Feb 22 at 8:21
Gerald Schneider
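To check whether this kind of co-scheduling wait is actually hurting the VM, watch its CPU ready time. Below is a minimal Python sketch of the commonly used conversion from vCenter's cpu.ready.summation counter (milliseconds of ready time accumulated per sampling interval) to a percentage; the 20-second interval matches the real-time performance charts, while the sample value and the 5% threshold are illustrative assumptions, not official limits.

```python
# Sketch: convert a "cpu.ready.summation" sample (ms of ready time
# accumulated over one sampling interval) into a CPU-ready percentage.

def cpu_ready_percent(ready_ms: float, interval_s: int = 20, vcpus: int = 1) -> float:
    """Percentage of the interval the VM spent waiting for free logical cores."""
    return ready_ms / (interval_s * 1000.0 * vcpus) * 100.0

# Example: 4400 ms of accumulated ready time across 24 vCPUs in one
# 20-second real-time interval (made-up numbers).
pct = cpu_ready_percent(4400, vcpus=24)
print(f"CPU ready: {pct:.2f}% per vCPU")  # ~0.92%; sustained values above ~5% usually hurt
```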
I can't think of a situation where you'd want a single VM to have more vCPUs allocated than there are physical cores in the server.

Benchmark your workload with the current VM configuration, then see what happens as you gradually lower the number of vCPUs. Take note both of execution speed for your workload and of actual CPU usage on the host/VM from the hypervisor's perspective, rather than from that of the guest OS.

Usually when setting up VMs it's beneficial to start with a rather low number of vCPUs and then work your way up until the performance increase flattens out. For many workloads you don't necessarily need to stick to even numbers of vCPUs, though there are exceptions to this principle. Again, a good test run should show how your application deals with its environment.

answered Feb 22 at 8:09
Mikael H
My current situation is an appliance which comes in specific sizes (in this case vCenter as an X-Large instance).
– user3235860
Feb 22 at 8:13
Is it for a lab or for a production environment? In production you should probably stick to a supported configuration (which includes the hardware on which the machine runs), but if it's for lab use or for a test environment it should be possible to turn down the number of vCPUs a notch after deploying the appliance. Again - if you actually need that kind of power, it's probable you'll get less overhead and better total performance by not exceeding your number of physical cores.
– Mikael H
Feb 22 at 8:21
This two-socket box has 44 cores. It is, however, a bad idea to cross NUMA nodes, especially when you could reduce the vCPU count or get a processor with enough cores.
– John Mahowald
Feb 23 at 14:43
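Below is a minimal sketch of the benchmarking loop suggested in this answer: time the same workload inside the guest, reconfigure the vCPU count, and repeat. The stress-ng command is a placeholder assumption; substitute your real workload.

```python
# Hypothetical timing harness: run the workload several times and report
# the median wall time. Re-run after each vCPU reconfiguration and compare.
import statistics
import subprocess
import time

def time_workload(cmd: list[str], runs: int = 3) -> float:
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

cmd = ["stress-ng", "--cpu", "0", "--timeout", "30s"]  # assumed workload command
print(f"median wall time: {time_workload(cmd):.1f}s")
```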
As per the VMware Configuration Maximums (https://configmax.vmware.com/) you can have up to 32 vCPUs per physical core, but according to best practices you should not assign more cores than you actually have.

Keep in mind, though, that you can limit, reserve, and prioritize CPU according to your workloads and needs.

You can read another answer posted about the same topic here.

answered Feb 22 at 9:15
Sir Lou
Regarding the best practice not to assign more cores than you actually have: does this refer to logical or physical cores?
– user3235860
Feb 22 at 10:28
It refers to logical cores; you can read more here: techiessphere.com/2016/02/…
– Sir Lou
Feb 23 at 11:47
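As a back-of-the-envelope illustration of that rule of thumb, the sketch below compares the total vCPUs assigned on a host against its logical-core count; the VM inventory is invented for the example.

```python
# Consolidation sanity check: total assigned vCPUs vs. the host's logical cores.
HOST_SOCKETS, CORES_PER_SOCKET, HYPERTHREADING = 2, 22, True
logical_cores = HOST_SOCKETS * CORES_PER_SOCKET * (2 if HYPERTHREADING else 1)

vms = {"app01": 24, "db01": 8, "web01": 4, "web02": 4}  # name -> vCPUs (made up)
total_vcpus = sum(vms.values())

print(f"logical cores: {logical_cores}, assigned vCPUs: {total_vcpus}")
print(f"oversubscription ratio: {total_vcpus / logical_cores:.2f}x")
for name, vcpus in vms.items():
    if vcpus > logical_cores:
        print(f"{name}: has more vCPUs than the host has logical cores")
```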
1) Hyperthreaded cores aren't real cores and shouldn't be counted as such. Estimates vary, but I've seen figures suggesting that enabling hyperthreading gives you as little as 10-30% additional performance in vSphere.

2) Assigning more vCPUs to a VM should always be considered carefully, especially at higher numbers. The reason (drastically simplified) is that the resource scheduler has to find a time slot with enough free cores to execute all of the VM's vCPUs simultaneously. On a simplified, hyper-unrealistic example host with, say, 10 cores and 10 VMs with 2 vCPUs each, you'd have 5 VMs waiting (i.e. halted) half the time and 5 VMs executing, alternating between the two states. This is alright, since all VMs are getting CPU time and everything is dandy. Now we introduce an 11th VM with 10 vCPUs. Suddenly you have 10 VMs waiting while the big VM gets its work done, then 5 of them execute, and then the other 5. So now your VMs are running 33% of the time instead of 50%. In a complex environment, allocating relatively huge numbers of vCPUs can lower performance, especially if the VM doesn't run anything that can actually use all of them.

3) My personal best practice is to never give a VM more than half the logical cores of one single processor, which is usually quite a sane number with Xeon processors anyhow. This avoids depending too much on HT "cores" and also makes the VM fit on a single processor, making the scheduler's job easier.

There's also the concept of NUMA nodes to take into account: if you give a VM more vCPUs than a single processor in the host can provide, you're basically forcing vSphere to split the VM across 2 NUMA nodes, making memory access slower, since not all memory will be local to either processor.

There's a lot more magic behind how vSphere schedules VM resources, and what I wrote above is hugely simplified, but these guidelines have served me well for almost a decade.
answered 6 hours ago
Stuggi
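The all-or-nothing story in point 2 can be reproduced with a toy round-robin simulation, assuming a strict co-scheduler and the made-up figures from the example (10 cores, ten 2-vCPU VMs, one 10-vCPU VM). Real ESXi uses relaxed co-scheduling, so this only illustrates the trend, not actual scheduler behavior.

```python
from collections import deque

def simulate(cores: int, vm_vcpus: dict[str, int], slots: int = 3000) -> dict[str, float]:
    """Fraction of time slots each VM runs under strict all-or-nothing co-scheduling."""
    ran = dict.fromkeys(vm_vcpus, 0)
    queue = deque(vm_vcpus)              # round-robin order
    for _ in range(slots):
        free = cores
        scheduled = []
        for vm in queue:                 # greedy, in queue order
            if vm_vcpus[vm] <= free:     # all vCPUs must fit at once
                free -= vm_vcpus[vm]
                scheduled.append(vm)
        for vm in scheduled:             # VMs that ran go to the back
            ran[vm] += 1
            queue.remove(vm)
            queue.append(vm)
    return {vm: ran[vm] / slots for vm in vm_vcpus}

small = {f"vm{i}": 2 for i in range(10)}
print(simulate(10, small)["vm0"])                  # ~0.50: every VM runs half the time
print(simulate(10, {**small, "big": 10})["vm0"])   # ~0.33: everyone drops to a third
```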