phongle123

FREENAS and Virtualbox how to raid?


Recommended Posts

Posted · Original Poster

Edit: the new question is further down in this thread.

---------------------------------------------------------------------------------------------------------------------------------

OLD

I received my hardware and set up FreeNAS under Oracle VM VirtualBox, but I'm confused about how to set up RAID 5 properly through FreeNAS. I don't think setting it up in Disk Management is the correct way.

I am running Windows Server 2016 and virtualizing FreeNAS with VirtualBox.

Setting up a virtual disk only lets me go up to 2TB, while I want to RAID 5x8TB drives. When I select the 2TB option it also doesn't let me pick a drive, so how do I know which drive I'm partitioning?

I'm a little confused here. When I partition these drives into the VM, I essentially lose the capacity as if the drives were full, because it's allocated to the VM. Does this prevent them from being used normally, for example by Plex?

 

@dalekphalm, @djdwosk97

 

39 minutes ago, BubblyCharizard said:

Use Unraid... VirtualBox runs on top of your OS, and will most definitely not run anywhere near as well for a storage server.

No. Use Hyper-V. It comes free with the operating system and doesn't have a stupid drive-based license limit :)
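
For what it's worth, Hyper-V can also hand a whole physical disk to a VM (a "pass-through disk"). A rough PowerShell sketch - the VM name, paths and disk number here are made up for illustration:

```powershell
# Create the VM (names/paths are placeholders - adjust to taste)
New-VM -Name "FreeNAS" -MemoryStartupBytes 8GB -Generation 2 `
    -NewVHDPath "D:\VMs\FreeNAS\boot.vhdx" -NewVHDSizeBytes 16GB

# A physical disk must be Offline on the host before Hyper-V will attach it
Set-Disk -Number 2 -IsOffline $true

# Attach the raw disk to the VM's SCSI controller
Add-VMHardDiskDrive -VMName "FreeNAS" -ControllerType SCSI -DiskNumber 2
```

Bear in mind the guest still won't see SMART data through a pass-through disk, so FreeNAS's drive-health monitoring is limited compared to a real HBA.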




FreeNAS isn't made to be virtualized; however, you can do it. The major problem is that FreeNAS wants direct access to your hard drives, and VirtualBox does not provide direct access. The best you can do is create multiple virtual disks and store each virtual disk on a physical disk. This isn't recommended at all and you will have a lot of overhead. I would only do this for testing/learning, never for production.

I don't know if Hyper-V supports direct access (hardware passthrough); I haven't used it before.

Your best bet is to install FreeNAS on bare metal. However, if you use a hypervisor that supports passthrough, you could get a PCIe SATA card and pass that through to FreeNAS.

It gets complicated very fast.

6 hours ago, phongle123 said:

I received my hardware and set up FreeNAS under Oracle VM VirtualBox, but I'm confused about how to set up RAID 5 properly through FreeNAS. I don't think setting it up in Disk Management is the correct way.

I am running Windows Server 2016 and virtualizing FreeNAS with VirtualBox.

Setting up a virtual disk only lets me go up to 2TB, while I want to RAID 5x8TB drives. When I select the 2TB option it also doesn't let me pick a drive, so how do I know which drive I'm partitioning?

I'm a little confused here. When I partition these drives into the VM, I essentially lose the capacity as if the drives were full, because it's allocated to the VM. Does this prevent them from being used normally, for example by Plex?

 

@dalekphalm, @djdwosk97

 

ESXi supports direct hardware passthrough (PCIe passthrough), though you need to make sure that your CPU/motherboard supports the required protocol (VT-d, I believe).

 

The base version is free and works well - I run FreeNAS as a VM with it.

 

What OS are you running VirtualBox on?

 

Also, FreeNAS doesn't use "RAID" in the traditional sense (though it's effectively very similar). FreeNAS uses the ZFS filesystem, which has its own parity-based RAID-like system. RAID 5 in ZFS is called RAIDZ1, so you need to create a RAIDZ1 ZFS pool. But FreeNAS needs direct access to the drives to do that effectively.
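
To put rough numbers on what RAIDZ1 would give you with the 5x 8TB drives here (a back-of-envelope sketch; ZFS metadata overhead shaves off a bit more):

```powershell
# One drive's worth of space goes to parity, leaving ~4 drives usable.
# "8 TB" is decimal; Windows/FreeNAS report capacity in binary units (TiB).
$usableBytes = (5 - 1) * 8e12                    # ~32 TB (decimal) usable
$usableTiB   = $usableBytes / [math]::Pow(2, 40)
"{0:N2} TiB usable" -f $usableTiB                # ~29.10 TiB as reported by the OS
```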

 

ESXi + FreeNAS + an HBA works very very well.

 

However, unless you plan on running a lot of VMs, you may want to run FreeNAS itself on the bare hardware (likewise if your hardware doesn't support PCIe passthrough, or if you don't have an HBA, etc.), as @Catsrules suggested.

 

FreeNAS itself can run VMs, though it's more complicated than setting up VMs on something like ESXi, and I'm told performance isn't as good.

5 hours ago, Catsrules said:

FreeNAS isn't made to be virtualized; however, you can do it. The major problem is that FreeNAS wants direct access to your hard drives, and VirtualBox does not provide direct access. The best you can do is create multiple virtual disks and store each virtual disk on a physical disk. This isn't recommended at all and you will have a lot of overhead. I would only do this for testing/learning, never for production.

I don't know if Hyper-V supports direct access (hardware passthrough); I haven't used it before.

Your best bet is to install FreeNAS on bare metal. However, if you use a hypervisor that supports passthrough, you could get a PCIe SATA card and pass that through to FreeNAS.

It gets complicated very fast.

 

Posted · Original Poster
3 hours ago, dalekphalm said:

What OS are you running VirtualBox on?

I'm running this on Windows Server 2016. From the previous thread that you posted in, I planned on running a gaming server on this as well as a NAS. I am weighing Server 2016 + FreeNAS in a VM against Server 2016 with Storage Spaces. Running on non-ECC RAM.

 

 

3 hours ago, dalekphalm said:

The base version is free and works well - I run FreeNAS as a VM with it.

+

3 hours ago, Catsrules said:

FreeNAS isn't made to be virtualized.

In my previous thread about ITX, I was told that while virtualizing FreeNAS v9 or below wasn't recommended, it works well with v10 and v11 of FreeNAS. Since the Z370 boards don't support ECC memory, I will be using non-ECC RAM.

 

3 hours ago, Catsrules said:

Hyper-V supports direct access

*I think this is something that can be done with Windows Storage Spaces. I already have Windows Server 2016, which includes Storage Spaces. What are the real benefits of running ZFS on FreeNAS rather than NTFS on Windows Storage Spaces (aside from ZFS scrubbing), given that I will be using non-ECC RAM?

 

9 hours ago, Catsrules said:

Your best bet is to install FreeNAS on bare metal. However, if you use a hypervisor that supports passthrough, you could get a PCIe SATA card and pass that through to FreeNAS.

I didn't state this in the initial thread because I talked about it in a previous thread: I want to set up a gaming server that stores variables and distributes and collects them in real time for mobile applications. That was the reason for running FreeNAS in a VM, for NAS duties and Plex. I've been told by everyone that hardware RAID is terrible unless I buy a bunch of other stuff too, so software RAID is my go-to option.

 

10 hours ago, BubblyCharizard said:

Use Unraid... VirtualBox runs on top of your OS, and will most definitely not run anywhere near as well for a storage server.

So are you saying Unraid > Storage Spaces > FreeNAS in a VM? I have Windows Server 2016 so I can run a gaming server, which I did not state in my OP since I just wanted to figure out how to RAID 5 my 5x8TB drives with VirtualBox + FreeNAS.

8 minutes ago, phongle123 said:

I'm running this on Windows Server 2016. From the previous thread that you posted in, I planned on running a gaming server on this as well as a NAS. I am weighing Server 2016 + FreeNAS in a VM against Server 2016 with Storage Spaces. Running on non-ECC RAM.

+

In my previous thread about ITX, I was told that while virtualizing FreeNAS v9 or below wasn't recommended, it works well with v10 and v11 of FreeNAS. Since the Z370 boards don't support ECC memory, I will be using non-ECC RAM.

 

*I think this is something that can be done with Windows Storage Spaces. I already have Windows Server 2016, which includes Storage Spaces. What are the real benefits of running ZFS on FreeNAS rather than NTFS on Windows Storage Spaces (aside from ZFS scrubbing), given that I will be using non-ECC RAM?

 

I didn't state this in the initial thread because I talked about it in a previous thread: I want to set up a gaming server that stores variables and distributes and collects them in real time for mobile applications. That was the reason for running FreeNAS in a VM, for NAS duties and Plex. I've been told by everyone that hardware RAID is terrible unless I buy a bunch of other stuff too, so software RAID is my go-to option.

 

So are you saying Unraid > Storage Spaces > FreeNAS in a VM? I have Windows Server 2016 so I can run a gaming server, which I did not state in my OP since I just wanted to figure out how to RAID 5 my 5x8TB drives with VirtualBox + FreeNAS.

If you're using Windows Server 2016 as your base OS, I'd highly recommend using a combination of Storage Spaces and Hyper-V (for any VM needs), and ditching FreeNAS/unRAID, etc., altogether.

 

The main benefit of ZFS over NTFS is that it is highly resistant to bit-rot and data corruption, due to scrubbing and the way parity is calculated. It also makes monitoring drive health extremely easy.

Posted · Original Poster
5 minutes ago, dalekphalm said:

The main benefit of ZFS over NTFS is that it is highly resistant to bit-rot and data corruption, due to scrubbing and the way parity is calculated.

Running RAID with NTFS - is it the same as, for example, running a single NTFS-formatted SATA drive as a non-primary drive? Or does RAID have a higher chance of data corruption than simply using one drive by itself as internal/external secondary (non-OS) storage?

Just now, phongle123 said:

Running RAID with NTFS - is it the same as, for example, running a single NTFS-formatted SATA drive as a non-primary drive? Or does RAID have a higher chance of data corruption than simply using one drive by itself as internal/external secondary (non-OS) storage?

RAID has a higher chance of data corruption.... kind of.


Basically, data corruption can happen with the same probability in every scenario; a single HDD has some baseline chance of corruption. In particular, we are worried about bit rot, which is when a bit randomly flips state (changes from 0 to 1, or vice versa). This usually happens due to cosmic background radiation or intense local EMI/RFI, which can cause a sector on the magnetic disk to flip polarity.

 

Anyway, why is that a problem? With a single HDD, it's not, really. With most types of files, if a single bit flipped you might not even notice; a JPEG, for example, might have a single pixel change colour, or show some minor artifacting.

 

The problem with hardware RAID is during a rebuild: if a drive fails, the rebuild relies on all the remaining data being pristine (not corrupted), because the lost data is reconstructed from the parity calculations combined with the rest of the data on the good drives.

 

If a bit has flipped, this can cause the rebuild to fail, because the parity calculations won't match up properly, causing an unrecoverable read error (URE). (UREs can happen for other reasons too, such as a bad sector, mechanical failure, etc.)
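
To put rough numbers on that, assuming the common consumer-drive spec of one URE per 10^14 bits read (real drives often do better, but it shows why big RAID5 rebuilds are nerve-wracking):

```powershell
# Rebuilding one failed 8 TB drive in a 5-drive RAID5 reads the other four in full
$bitsRead     = 4 * 8e12 * 8       # four 8 TB drives, 8 bits per byte
$expectedUREs = $bitsRead / 1e14   # spec-sheet rate: 1 URE per 1e14 bits read
$expectedUREs                      # ~2.56 - you'd statistically expect UREs mid-rebuild
```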

 

ZFS scrubs the data on a regular basis, comparing all data against the parity data; if it detects a mismatch, it rebuilds the file from parity. If you add ECC RAM into the mix, it simply increases overall reliability further (though it's by no means necessary).

 

I believe that Storage Spaces (using ReFS, anyway, not NTFS) will also scrub the data on a regular basis.
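
If you want to poke at that yourself, ReFS integrity streams can be inspected and toggled per file or folder with the Storage module cmdlets (the paths here are placeholders; this only works on an ReFS volume):

```powershell
Get-FileIntegrity -FileName 'D:\Media\movie.mkv'      # shows whether checksumming is enabled
Set-FileIntegrity -FileName 'D:\Media' -Enable $true  # new files under this folder inherit it
```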

 

I believe some hardware RAID cards scrub the data as well, but I'm not familiar with that process and couldn't tell you which models support it.

Posted · Original Poster
27 minutes ago, dalekphalm said:

ZFS scrubs the data on a regular basis, comparing all data against the parity data; if it detects a mismatch, it rebuilds the file from parity. If you add ECC RAM into the mix, it simply increases overall reliability further (though it's by no means necessary).

I believe that Storage Spaces (using ReFS, anyway, not NTFS) will also scrub the data on a regular basis.

So ZFS on FreeNAS and ReFS in Storage Spaces do the same thing in terms of data integrity? In that case, is there any reason for me to even pick FreeNAS?

 

Do you know what FreeNAS can do that Storage Spaces cannot (or vice versa) that's necessary for my scenario?

27 minutes ago, dalekphalm said:

I believe some hardware RAID cards scrub the data as well, but I'm not familiar with that process and couldn't tell you which models support it.

I was looking and they started from around $300, so that's not necessary, since software RAID does the same thing. Initially I thought that with software RAID, if the CPU randomly shut down I'd lose data, because it's not hardware-bound and the software would cut off immediately when the CPU turned off.

Just now, phongle123 said:

So ZFS on FreeNAS and ReFS in Storage Spaces do the same thing in terms of data integrity? In that case, is there any reason for me to even pick FreeNAS?

Do you know what FreeNAS can do that Storage Spaces cannot (or vice versa) that's necessary for my scenario?

I was looking and they started from around $300, so that's not necessary, since software RAID does the same thing. Initially I thought that with software RAID, if the CPU randomly shut down I'd lose data, because it's not hardware-bound and the software would cut off immediately when the CPU turned off.

There are a bunch of nuanced differences between them.


For example, with ZFS you are in 100% control of when scrubs happen. In Storage Spaces it's 100% automated; you have no control. It happens, yes, but many people have experienced performance problems when a scrub runs in the middle of a busy period (e.g. in a home server environment, a scrub kicks off while you're trying to stream some 1080p Blu-ray rips through Plex).
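
To be fair, you can at least see when Windows plans to run its ReFS scrubber, since it's driven by a scheduled task. A quick peek (the task path is what Server 2016 uses; verify it on your install):

```powershell
Get-ScheduledTask -TaskPath "\Microsoft\Windows\Data Integrity Scan\" |
    Get-ScheduledTaskInfo    # LastRunTime / NextRunTime of the scrub tasks
```

But there's no first-class "scrub now" command the way `zpool scrub poolname` works in ZFS.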

 

Here's a good overview. It's for ZFS on Linux, but ZFS is ZFS - FreeNAS just provides a nicer web UI for managing ZFS than Linux does:

https://brismuth.com/zfs-on-linux-vs-windows-storage-spaces-with-refs-902d3746f47a

 

That's another benefit of FreeNAS: you can manage the server remotely, easily and quickly, with a web browser. You almost never need to access the physical machine or its console. Windows can be managed remotely with administrator tools, but that's a separate application and requires some configuration; it's not as easy to manage as FreeNAS.

 

In terms of "Software RAID" vs Hardware RAID, the first thing to understand is that "Software RAID" means many things.

 

It can mean onboard Intel motherboard RAID ("FakeRAID", as it used to be called). This looks like hardware RAID because it's configured in the BIOS, but it's basically software RAID, because the CPU does all the controlling and number crunching.

 

It can also mean any software package that is installed (or built into the OS) and provides RAID-style arrays. Linux mdadm is software RAID, for example; ZFS RAIDZ and Windows Storage Spaces are also essentially software RAID.

 

Hardware RAID (real hardware RAID) has an onboard processor (an RoC - RAID-on-Chip - usually a PowerPC or ARM SoC), onboard RAM, and an onboard battery. These combine to form a battery-protected cache, so that if hardware fails while writes are pending (files sitting in the cache), the RAID card keeps the cache alive until the computer turns back on; once back online, it dumps the writes from the cache to the disks.

Software RAID of course cannot do that. However, if you protect the system with a UPS (uninterruptible power supply - basically a power bar with a big-ass battery), the risk is pretty small. You'd have to have actual hardware die for it to be an issue, and in that case you probably don't care about the lost writes.

Posted · Original Poster
34 minutes ago, dalekphalm said:

It can mean onboard Intel motherboard RAID ("FakeRAID", as it used to be called). This looks like hardware RAID because it's configured in the BIOS, but it's basically software RAID, because the CPU does all the controlling and number crunching.

So I ended up buying this board: https://asrock.com/mb/Intel/Z370M-ITXac/

It supports RAID 0, RAID 1, RAID 5 and RAID 10.

And it seems like I'm going with Windows Storage Spaces.

Do I configure the RAID in the BIOS (even though, like you said, it's software and not hardware), in Disk Management, or is there some sort of RAID tool within Storage Spaces?

1 hour ago, phongle123 said:

So I ended up buying this board: https://asrock.com/mb/Intel/Z370M-ITXac/

It supports RAID 0, RAID 1, RAID 5 and RAID 10.

And it seems like I'm going with Windows Storage Spaces.

Do I configure the RAID in the BIOS (even though, like you said, it's software and not hardware), in Disk Management, or is there some sort of RAID tool within Storage Spaces?

If Storage Spaces is anything like ZFS, you don't want to use hardware RAID at all. Just give Windows direct access to each disk and Storage Spaces can handle it. From my understanding, Storage Spaces and ZFS are both basically software RAID.
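
A quick way to confirm Windows really does see the raw disks (only blank, non-pooled disks are eligible), sketched in PowerShell:

```powershell
Get-PhysicalDisk | Select-Object FriendlyName, Size, BusType, CanPool
```

If CanPool reads True for all five 8TB drives, Storage Spaces can take it from there.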

Posted · Original Poster
2 hours ago, Catsrules said:

Just give Windows direct access to each disk and Storage Spaces can handle it.

What do you mean by this? If the drives are visible in My PC, doesn't Windows already have access to them? Wouldn't I still need to set up RAID to merge them together? I don't think any of the three I mentioned are hardware RAID.

Posted · Best Answer
18 minutes ago, phongle123 said:

What do you mean by this? If the drives are visible in My PC, doesn't Windows already have access to them? Wouldn't I still need to set up RAID to merge them together? I don't think any of the three I mentioned are hardware RAID.

Don't set up any RAID in the BIOS or on a RAID card, because that hides the actual physical disks from the OS.
You should be able to merge the drives together using Windows Storage Spaces.

 

Basically, you don't need a RAID card or RAID on the motherboard, because Storage Spaces does its own RAID in software. Hopefully that makes some sense; I am not very good at explaining things sometimes.

Posted · Original Poster
6 minutes ago, Catsrules said:

Don't set up any RAID in the BIOS or on a RAID card, because that hides the actual physical disks from the OS.
You should be able to merge the drives together using Windows Storage Spaces.

Basically, you don't need a RAID card or RAID on the motherboard, because Storage Spaces does its own RAID in software. Hopefully that makes some sense; I am not very good at explaining things sometimes.

Alright, thank you guys. I will see how it goes and report back if I have any problems.

4 hours ago, phongle123 said:

So I ended up buying this board: https://asrock.com/mb/Intel/Z370M-ITXac/

It supports RAID 0, RAID 1, RAID 5 and RAID 10.

And it seems like I'm going with Windows Storage Spaces.

Do I configure the RAID in the BIOS (even though, like you said, it's software and not hardware), in Disk Management, or is there some sort of RAID tool within Storage Spaces?

As @Catsrules says, you just install the drives as you normally would. Do NOT use the RAID options in the BIOS, as those will create an Intel RAID array, and Storage Spaces would be unable to take proper advantage of the disks.

 

Instead, just connect each drive to a SATA port and boot into Windows. Then use Windows Storage Spaces to create the array. You simply decide what kind of Storage Spaces array you want: Mirrored (RAID 1 equivalent), Single Parity (RAID 5 equivalent), Dual Parity (RAID 6 equivalent), etc.

 

You can find guides online if you need them, but it should be fairly straightforward.
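
If you'd rather script it than click through the UI, the PowerShell equivalent is roughly this (the pool name is made up; sanity-check the disk list first):

```powershell
# Gather every disk that's eligible for pooling and build one pool from them
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "MediaPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks
```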

Posted · Original Poster
16 minutes ago, dalekphalm said:

As @Catsrules says, you just install the drives as you normally would. Do NOT use the RAID options in the BIOS, as those will create an Intel RAID array, and Storage Spaces would be unable to take proper advantage of the disks.

Instead, just connect each drive to a SATA port and boot into Windows. Then use Windows Storage Spaces to create the array. You simply decide what kind of Storage Spaces array you want: Mirrored (RAID 1 equivalent), Single Parity (RAID 5 equivalent), Dual Parity (RAID 6 equivalent), etc.

You can find guides online if you need them, but it should be fairly straightforward.

Ah yes, thank you, that's exactly what I was asking. Since there were multiple ways to create the RAID, I didn't know which one I was supposed to use.

Posted · Original Poster

@Catsrules, @dalekphalm

I've hit a bit of a snag. Unlike Windows 10, Storage Spaces no longer lives under Control Panel; it's now in Server Manager. My problem is with the 2TB limit.

What I did:

1) Disk Management

2) Clicked on one of my 8TB Drives

3) Action - Create VHD

4) ...

  • Set to VHDX,
  • Set to Fixed,
  • Set amount of GB, 
  • Saved the Virtual Disk inside of its respective drive

5) Saved

6) In Server Manager, the drive shows 7.21TB (total) - 5.21TB (used, even though nothing is on it) = 2TB (free), as if 2TB were the limit.

That's even though I set it to VHDX, which has a 64TB limit, instead of VHD with its 2TB limit.

So I'm not able to get the full 7.21TB.

 

-----------------------------------------------------------------------

Am I supposed to, for each 8TB drive (7.21TB usable), create four virtual disks of 2TB + 2TB + 2TB + 1.21TB, then in Server Manager combine the four into a single 7.21TB volume? Do this five times and make them into a storage pool? I don't think that's possible, though, and even if it were, I don't think the redundancy would work out very well.

-----------------------------------------------------------------------

EDIT: After about 70 tries of doing the exact same procedure, I finally got one of them to work. It turned out as 7.21TB (total) - 130MB (used) = 7.21TB (free).

But this is just one drive out of the five.

7 hours ago, phongle123 said:

@Catsrules, @dalekphalm

I've hit a bit of a snag. Unlike Windows 10, Storage Spaces no longer lives under Control Panel; it's now in Server Manager. My problem is with the 2TB limit.

What I did:

1) Disk Management

2) Clicked on one of my 8TB Drives

3) Action - Create VHD

4) ...

  • Set to VHDX,
  • Set to Fixed,
  • Set amount of GB, 
  • Saved the Virtual Disk inside of its respective drive

5) Saved

6) In Server Manager, the drive shows 7.21TB (total) - 5.21TB (used, even though nothing is on it) = 2TB (free), as if 2TB were the limit.

That's even though I set it to VHDX, which has a 64TB limit, instead of VHD with its 2TB limit.

So I'm not able to get the full 7.21TB.

 

-----------------------------------------------------------------------

Am I supposed to, for each 8TB drive (7.21TB usable), create four virtual disks of 2TB + 2TB + 2TB + 1.21TB, then in Server Manager combine the four into a single 7.21TB volume? Do this five times and make them into a storage pool? I don't think that's possible, though, and even if it were, I don't think the redundancy would work out very well.

-----------------------------------------------------------------------

EDIT: After about 70 tries of doing the exact same procedure, I finally got one of them to work. It turned out as 7.21TB (total) - 130MB (used) = 7.21TB (free).

But this is just one drive out of the five.

VHDs are virtual hard drives. Those are for VMs, not for Storage Spaces.

 

I get the feeling you're in the wrong section of Windows.

 

What version of Windows are you running?
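
One side note while you check: a hard 2TB ceiling is the classic symptom of an MBR partition table, so it's worth confirming the disks are GPT. Roughly (the disk number is a placeholder, and Initialize-Disk only works on a blank disk):

```powershell
Get-Disk | Select-Object Number, FriendlyName, PartitionStyle   # look for MBR vs GPT
Initialize-Disk -Number 2 -PartitionStyle GPT
```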

Posted · Original Poster
3 hours ago, dalekphalm said:

VHDs are virtual hard drives. Those are for VMs, not for Storage Spaces.

 

I get the feeling you're in the wrong section of Windows.

 

What version of Windows are you running?

Windows Server 2016. But when I use this method, the disk shows up under Physical Hard Disks instead of Virtual Hard Disks in Storage Spaces in Server Manager. Both the Virtual and Physical Hard Disk sections exist, and when I create the VHDX it appears only under Physical, not under Virtual.

7 minutes ago, phongle123 said:

Windows Server 2016. But when I use this method, the disk shows up under Physical Hard Disks instead of Virtual Hard Disks in Storage Spaces in Server Manager. Both the Virtual and Physical Hard Disk sections exist, and when I create the VHDX it appears only under Physical, not under Virtual.

Okay, so here's a guide that I found that walks through the steps.

 

This guide is actually a bit more advanced than you need, but you can ignore the latter half of it.

https://nedimmehic.org/2017/05/01/how-to-configure-storage-spaces-and-tiered-storage-windows-server-2016/

 

First, you need to create a "Storage Pool", which is the collection of all of the HDDs that you want to include in the array.

 

Once you've done that, you create a "Virtual Disk" (which, in Storage Spaces, is different from a VHDX file, the virtual hard disk format used by VMs). During this process, you select the type of resiliency:

Simple (RAID 0 - no resiliency/no redundancy)

Mirror (RAID 1 - mirrors the data across the drives so they all have the same data)

Parity (RAID 5/6, etc. - allows you to select how many parity drives you want to use)

 

If you have 5x 8TB drives, I'd suggest a single Parity virtual disk.

 

You CAN do setups like RAID 10/50/60, but you have to create a nested system, which is more advanced and, I think, unnecessary here.

 

Basically, follow the guide until you get to the point where it says "Tiered Storage". Tiered Storage basically allows you to create SSD caches and other more advanced configurations that I think are unnecessary for you for now.
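
For reference, the scripted version of that same step, continuing the earlier pool sketch (names are placeholders; "Single Parity" corresponds to PhysicalDiskRedundancy 1):

```powershell
# Carve a single-parity space out of the pool, then bring it online as an ReFS volume
New-VirtualDisk -StoragePoolFriendlyName "MediaPool" -FriendlyName "ParitySpace" `
    -ResiliencySettingName Parity -PhysicalDiskRedundancy 1 `
    -ProvisioningType Fixed -UseMaximumSize

Get-VirtualDisk -FriendlyName "ParitySpace" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel "Media"
```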

Posted · Original Poster
1 hour ago, dalekphalm said:

Okay, so here's a guide that I found that walks through the steps.

 

This guide is actually a bit more advanced than you need, but you can ignore the latter half of it.

https://nedimmehic.org/2017/05/01/how-to-configure-storage-spaces-and-tiered-storage-windows-server-2016/

 

First, you need to create a "Storage Pool", which is the collection of all of the HDDs that you want to include in the array.

 

This is where my problem begins: unless I do that VHD method, the drives don't even show up under Physical Hard Disks.

 

[Attached screenshot: Untitled22.jpg]

Just now, phongle123 said:

This is where my problem begins: unless I do that VHD method, the drives don't even show up under Physical Hard Disks.

[Attached screenshot: Untitled22.jpg]

Hmm, I actually ran into that issue once; I've totally forgotten how I resolved it.

 

Have you tried just formatting the drives and assigning each of them a drive letter, then going through the steps above? See if that makes a difference.

 

@leadeater, are you familiar enough with Storage Spaces to know why the drives don't show up?
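
One thing worth checking in the meantime: disks that already carry partitions or volumes aren't pool-eligible, and Windows will tell you why it's refusing each one:

```powershell
Get-PhysicalDisk | Select-Object FriendlyName, CanPool, CannotPoolReason

# If the reason is leftover partitions ("Insufficient Capacity" is common),
# wiping the disk usually clears it. This DESTROYS everything on that disk;
# the disk number is a placeholder - check Get-Disk first.
Clear-Disk -Number 4 -RemoveData -RemoveOEM
```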

Posted · Original Poster
1 minute ago, dalekphalm said:

Hmm, I actually ran into that issue once; I've totally forgotten how I resolved it.

Have you tried just formatting the drives and assigning each of them a drive letter, then going through the steps above? See if that makes a difference.

@leadeater, are you familiar enough with Storage Spaces to know why the drives don't show up?

I have tried a lot of things; I'll list them.

1) In My PC, I tried formatting them as ReFS, since you said Storage Spaces uses ReFS. I don't know if I should have formatted them as NTFS instead?

2) In Disk Management, I...

  • Initialized them so they all became unallocated
  • Then I did (to all of them): right click, New Simple Volume, assigned a letter, formatted as ReFS

3) I right-clicked them in Server Manager and chose Reset, making them unallocated again.

 

The only way I've gotten them to appear under "Physical Disks" is via the Disk Management VHDX method.

Link to post
Share on other sites
