Is there any way to prevent Storage Spaces Direct from automatically adding disks?
Got a problem here on a 2016 Windows Server Failover Cluster (WSFC) hosting an SQL Failover Cluster Instance (FCI) employing Storage Spaces Direct (S2D). On each server, after successful initial creation, S2D automatically added an otherwise unused RAID volume to the storage pool (although S2D cannot be created on RAID volumes and absolutely insists on unraided disks). Now it's broken, as far as I could figure out, due to exactly that. As a consequence, the virtual disk is offline, taking the whole cluster down with it. It won't come back online, due to a missing cluster network resource. The disks in question can be retired but not removed. Virtual disk repair does not run, and the cluster compatibility test claims an invalid configuration.
This is a new setup, so I could simply delete the virtual disk, the cluster, or even the servers and start over. But before we go into production, I need to make sure this never happens again. A system that shoots itself in the virtual knee, grinding to a crashing halt just by needlessly and wrongly adding an unsupported disk, is no platform we can deploy. So primarily I need a way to prevent this from happening, rather than to repair it now. My guess is that preventing an S2D setup from grabbing more disks than it was created on would do the trick. The cost of potentially more manual interaction during a real disk replacement is negligible compared to the clusterf... we have here. Much as I have browsed the documentation so far, however, I cannot find any way to control that. Unless I'm missing something, neither Set-StoragePool, Set-VirtualDisk nor Set-Volume offers any parameter to that effect.
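For what it's worth, this is roughly how one can scan those cmdlets' parameter lists for anything that sounds auto-pool-related (a quick sketch, nothing more):

foreach ($cmd in 'Set-StoragePool', 'Set-VirtualDisk', 'Set-Volume') {
    # List any parameter whose name hints at pooling or automatic behavior
    $hits = (Get-Command $cmd).Parameters.Keys | Where-Object { $_ -match 'Auto|Pool' }
    '{0}: {1}' -f $cmd, ($hits -join ', ')
}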
Any help or hint would be greatly appreciated.
Following are just more details on the above: We have two HPE DL380 Gen9 servers, doubly connected to each other via RDMA-capable 10 GbE and via 1 GbE to the client network. Each features a RAID controller (HP ???) and a simple HBA controller (HP ???), since S2D absolutely requires and works only on directly attached, unraided disks. The storage configuration comprises an OS-RAID on the RAID controller, a Files-RAID on the RAID controller, and the set of directly attached disks on the HBA intended for S2D.
I set up two Windows Server 2016 Datacenter Edition installations on the OS-RAIDs, installed the WSFC feature, ran and passed the cluster compatibility test including the S2D option, created the cluster without storage, added a file share witness (on a separate machine), and enabled S2D on the storage pool, which automatically comprised all of the unraided disks. On top of that pool I created a virtual disk of the mirror type and formatted it with NTFS, since this is supposed to be the file system of choice for an SQL FCI installation.
I then installed SQL Server 2016 Standard Edition as an FCI on that cluster, imported a database, and tested it all. Everything was fine. The database was right there and faster than ever. Forced as well as automatic failover was a breeze. Everything looked good.
The next day we tried to make use of the remaining Files-RAID. The first thing was to change the RAID level, as we didn't like the pre-configuration. Shortly after deleting the pre-configured RAID volume and building a new one (on each server), we discovered that the cluster was down. From what I could figure out so far, the pre-configured Files-RAID volume had in the meantime been automatically added to the pool, and since we had just deleted it, it was now missing from the pool. While I checked, I found the new Files-RAID, while still being created, already shown as a physical drive of the pool as well. So the pool now included two RAID volumes on each server, one of which didn't even exist. These volumes (but not their member disks) are listed by Get-PhysicalDisk along with the actual physical disks on the HBA; I'm not sure whether that's normal. The pool itself is still online and doesn't complain; the virtual disk, however, is not simply degraded for missing disks, but completely offline (and so is, in consequence, the whole cluster).
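A quick way to see which "physical disks" the pool currently thinks it has, including their bus types, is something like this (a sketch; selecting by -IsPrimordial $false assumes the S2D pool is the only non-primordial pool):

# List pool members with bus type - the RAID-backed volumes stand out here
Get-StoragePool -IsPrimordial $false |
    Get-PhysicalDisk |
    Sort-Object Model |
    Format-Table FriendlyName, BusType, MediaType, OperationalStatus, Usage, Size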
I was able to retire those physical disks (i.e. those which are actually the RAID volumes), and they are now marked as retired. But they are still in the pool, and I cannot remove them just now; trying to do so fails. A Repair-VirtualDisk should rebuild the virtual disk to a proper state on just the remaining disks (I went by this: https://social.technet.microsoft.com/Forums/windows/en-US/dbbf317b-80d2-4992-b5a9-20b83526a9c2/storage-spaces-remove-physical-disk?forum=winserver8gen), but this job finishes immediately, "successful" of course, with no effect whatsoever.
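The retire-and-repair sequence from that thread looks roughly like this (reconstructed; the friendly names are placeholders, not the actual device names):

# Mark the pool members that are really RAID volumes as retired
Set-PhysicalDisk -FriendlyName 'LOGICAL VOLUME' -Usage Retired

# Ask Storage Spaces to rebuild the virtual disk onto the remaining disks
Repair-VirtualDisk -FriendlyName 'VDisk01'
Get-StorageJob   # the repair job reports success immediately, with no effect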
Trying to switch the virtual disk back online fails, stating that a cluster network resource is unavailable. As far as I understand, this could only refer to the (available) storage pool, since the missing disks are no cluster resources. The pool shows no errors to fix. Running the cluster compatibility test claims a configuration not suited for a cluster.
I cannot find any part left that would budge another inch; the whole thing looks deadlocked for good. Any ideas on how to prevent a running WSFC from f...ing itself up this way?
I did not encounter any error message I found particularly enlightening, and I didn't want to bloat the page even more by posting all of them. If anyone wants any specific detail, just let me know.
Thanks a lot for your time, guys!
Karsten
Update as requested by Mr. Raspberry
windows-server-2016 failovercluster storage-spaces
asked May 16 '17 at 17:28, edited May 18 '17 at 17:50 – Karsten Köpnick
Could you please share a list of your drives and their bus types? PowerShell command:
Get-PhysicalDisk -CanPool $true | Sort Model | ft FriendlyName, BusType, CanPool, OperationalStatus, HealthStatus, Usage, Size
Also, is there any chance you made a mistake when reconfiguring the Files-RAID and assigned an S2D drive to the new RAID?
– Mr. Raspberry
May 17 '17 at 13:03
What's the point of S2D + SQL Server? Why spend money on unlimited licensed VMs if you don't plan on (and actually can't be) running any? SQL Server 2016 can do AlwaysOn Basic AG even with Standard Edition, and you can save a HUGE amount of money by just using Windows Server 2016 Standard. docs.microsoft.com/en-us/sql/database-engine/…
– BaronSamedi1958
May 18 '17 at 17:09
@Mr. Raspberry: I updated the entry with the list of physical disks. Please note that I left out "-CanPool $true", as none of them is poolable.
– Karsten Köpnick
May 18 '17 at 17:52
@KarstenKöpnick: Well, I would suggest you consider SQL Server AlwaysOn FCI + StarWind Virtual SAN Free. This configuration would do the job better in your case of a 2-node cluster, for less cost, and is much easier to deploy and manage, with no such issues. starwindsoftware.com/…
– Mr. Raspberry
May 19 '17 at 14:47
"S2D was the way to go it seemed" Well... Good luck with that :)
– BaronSamedi1958
May 19 '17 at 22:06
2 Answers
Yes, you can disable the auto-pooling behavior. The experience is not great, but it's certainly doable and supported. The setting name and example cmdlet syntax are in the Settings section of this public doc:
https://technet.microsoft.com/en-us/windows-server-docs/failover-clustering/health-service-overview
Essentially, run this as Administrator:
Get-StorageSubSystem Cluster* | Set-StorageHealthSetting -Name "System.Storage.PhysicalDisk.AutoPool.Enabled" -Value False
Hope this helps! – Cosmos (@cosmosdarwin), Microsoft PM
answered May 18 '17 at 0:31 – Cosmos Darwin
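With auto-pooling disabled, a replacement disk has to be joined to the pool by hand. A minimal sketch of that manual step, assuming a new disk that reports as poolable ("S2D on Cluster1" below is a placeholder for the pool's real friendly name):

# Confirm the Health Service setting took effect
Get-StorageSubSystem Cluster* |
    Get-StorageHealthSetting -Name "System.Storage.PhysicalDisk.AutoPool.Enabled"

# Add the replacement disk manually; check Get-StoragePool for the actual pool name
$new = Get-PhysicalDisk -CanPool $true
Add-PhysicalDisk -StoragePoolFriendlyName "S2D on Cluster1" -PhysicalDisks $new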
@CosmosDarwin: Thanks! Looks like that could do the trick. I need to read a bit more into the depths of it and understand the implications; then I'll give it a try and report.
– Karsten Köpnick
May 18 '17 at 18:03
@CosmosDarwin: Thanks a lot. I finally had the chance to delve deeper into the topic to find out about potential repercussions. As far as I can tell, with that option disabled, the only consequence is that disks will have to be added to the pool manually with an Add-PhysicalDisk command, which is a fine trade-off. I could not find any indication of other complications or disadvantages, so I will give this a try. I just need to document the necessity of manually adding disks in case of a replacement. I will report the results.
– Karsten Köpnick
Jun 12 '17 at 14:12
Reporting the results: I'd like to add that I could not gather any real-life experience with this approach. It was decided to add a disk enclosure and use that instead of S2D. Disk replacements in a RAID that size are a frequent task, and the requirement of having someone with sufficient expertise around at any time to perform a PowerShell intervention, even a documented one, for a simple disk swap was seen as a show-stopper. Looking at it that way, I totally agree. So we re-installed using the enclosure and have had no problems since. Thank you all for your kind and expert help.
– Karsten Köpnick
Nov 28 '17 at 15:36
The workaround I've found for this problem is to change the bus type of the RAID volumes or disks from one of the supported types to an unsupported one.
You will have to identify the controller driver in Device Manager, then open the registry and find the driver name at the location below:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SmartPqi\Parameters
In my case, I changed the registry value that corresponds to SAS to RAID:
"BusType" = 0x00000008 (RAID) instead of 0x0000000a (SAS)
Then reboot the machine.
After this change, the storage pool shows up under the Windows Storage subsystem instead of Clustered Storage Spaces.
Please be careful if you want to apply this type of workaround, as it's not a validated solution and might expose your production environment to high risk.
answered 6 hours ago – DragosT (new contributor)
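A rough PowerShell equivalent of that registry edit, assuming the same SmartPqi driver key (substitute your own controller's driver name) and taking a backup first:

$params = 'HKLM:\SYSTEM\CurrentControlSet\Services\SmartPqi\Parameters'

# Back up the key before changing anything
reg export 'HKLM\SYSTEM\CurrentControlSet\Services\SmartPqi\Parameters' C:\smartpqi-backup.reg /y

# 0x0A = SAS (a bus type S2D will pool); 0x08 = RAID (which S2D ignores)
New-ItemProperty -Path $params -Name 'BusType' -PropertyType DWord -Value 0x08 -Force

Restart-Computer   # the new bus type is only picked up after a reboot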