
I'm trying to replace an old disk in a JBOD that I inherited. I bought exactly the same model, but it reports a different capacity (a different number of disk sectors), which prevents me from resilvering onto the new disk.

I don't see any difference besides the date of manufacture. What could cause this?

New disk:

$ smartctl -a /dev/sdfk
User Capacity:        7,865,536,647,168 bytes [7.86 TB]
Vendor:               HGST
Product:              HUH728080AL5204
Revision:             NE00
Manufactured in week 13 of year 2016

$ fdisk -l /dev/sdfk
Disk /dev/sdfk: 7865.5 GB, 7865536647168 bytes, 15362376264 sectors
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Any other disk in JBOD:

$ smartctl -a /dev/sdfl
User Capacity:        8,001,563,222,016 bytes [8.00 TB]
Vendor:               HGST
Product:              HUH728080AL5204
Revision:             C7J0
Manufactured in week 39 of year 2015

$ fdisk -l /dev/sdfl
Disk /dev/sdfl: 8001.6 GB, 8001563222016 bytes, 15628053168 sectors
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
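
For what it's worth, multiplying the sector counts by the 512-byte logical sector size reproduces the byte figures above; the new disk comes up short by about 136 GB (~1.7%). Plain shell arithmetic on the numbers from the outputs above:

$ echo $((15362376264 * 512))                    # new disk (sdfk)
7865536647168
$ echo $((15628053168 * 512))                    # existing disk (sdfl)
8001563222016
$ echo $(((15628053168 - 15362376264) * 512))    # shortfall in bytes
136026574848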

Edit

As requested, the hdparm output seems to be the same for both disks:

$ hdparm -N /dev/sdfk
/dev/sdfk:
SG_IO: bad/missing sense data, sb[]:  72 05 20 00 00 00 00 34 00 0a 00 
00 00 00 00 00 00 00 00 00 01 0a 00 00 00 00 00 00 00 00 00 00
SG_IO: bad/missing sense data, sb[]:  72 05 20 00 00 00 00 34 00 0a 00 
00 00 00 00 00 00 00 00 00 01 0a 00 00 00 00 00 00 00 00 00 00

$ hdparm -N /dev/sdfl
/dev/sdfl:
SG_IO: bad/missing sense data, sb[]:  72 05 20 00 00 00 00 34 00 0a 00 
00 00 00 00 00 00 00 00 00 01 0a 00 00 00 00 00 00 00 00 00 00
SG_IO: bad/missing sense data, sb[]:  72 05 20 00 00 00 00 34 00 0a 00 
00 00 00 00 00 00 00 00 00 01 0a 00 00 00 00 00 00 00 00 00 00
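
If I'm decoding the sense data correctly (72h descriptor-format sense, sense key 5h Illegal Request, ASC/ASCQ 20h/00h Invalid Command Operation Code), the drives are simply rejecting hdparm's ATA-specific HPA query (-N), which is expected on SAS disks, so it's probably unrelated to the capacity problem. On the SCSI side, the reported capacity and logical block length can be read with sg_readcap from sg3_utils instead (assuming it's installed), e.g.:

$ sg_readcap --long /dev/sdfk
$ sg_readcap --long /dev/sdfl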
  • I mean the short answer is that HGST probably changed something in their production or something... I doubt we can answer it. You could call them and complain, not sure that it would really help much though.
    – Zoredache
    Commented Dec 15, 2018 at 20:58
  • @Zoredache: :/ Yeah that's what I'm inclined to believe.
    – elleciel
    Commented Dec 15, 2018 at 21:04
  • @MichaelHampton There's about 88 disks in the JBOD and I seem to be getting the identical error message on all of them. There's probably something to look into here but I'm not sure if it's the root of my problem since the storage pool built on these disks seems to be completely functional?
    – elleciel
    Commented Dec 15, 2018 at 21:08
  • So my thinking right now is that you bought a used or "refurbished" drive, not a new one, and for some reason it is showing significantly less space than advertised. This could be because it's defective. It could also be because its previous owner did something to it to cause it to act this way. In either case I would RMA the drive. Commented Dec 15, 2018 at 21:08
  • @MichaelHampton Sounds good x2. I agree with the assessment. Will keep this question open for a few more hours in case anyone else has insights, but otherwise will accept a short answer from either you or Zoredache.
    – elleciel
    Commented Dec 15, 2018 at 21:10

2 Answers


Having recently encountered this myself, I can definitively say it's because of sector sizing.

I have a similar model to yours, HUH721008AL5204, and this is the fdisk report before reformatting.

Disk /dev/sde: 7.15 TiB, 7865536647168 bytes, 15362376264 sectors
Disk model: HUH721008AL5204 
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

And this is after.

Disk /dev/sde: 7.28 TiB, 8001563222016 bytes, 1953506646 sectors
Disk model: HUH721008AL5204 
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

To switch from 512-byte sectors to 4096-byte sectors, I used the command below. It typically only works with SAS drives, not SATA. Some SATA drives do support changing the sector size using hdparm or vendor-supplied tools, but I've always had bad luck with that.

sudo sg_format --size=4096 --format --fmtpinfo=0  /dev/sde
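
The low-level format takes a long time on an 8 TB drive. Once it finishes, re-check the geometry and then do the replace; a rough sketch, with a hypothetical pool name ("tank") and an old-device placeholder:

sudo fdisk -l /dev/sde                         # should now show 4096-byte logical sectors and the full 8.0 TB
sudo zpool replace tank <old-disk> /dev/sde    # resilver onto the reformatted disk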

The two disks have the same manufacturer (HGST) and model number, but different firmware revisions.

Look here for a similar example:

https://csrc.nist.gov/csrc/media/projects/cryptographic-module-validation-program/documents/security-policies/140sp2326.pdf

The Ultrastar C10K1800 is available in several models that vary by storage capacity and block size. Table 1 enumerates the models and characteristics and includes the hardware and firmware versions.

[Table 1 from the linked document: Ultrastar C10K1800 models with their capacities, block sizes, and hardware/firmware versions]

  • A firmware revision alone should not account for such a large discrepancy in size. Commented Dec 16, 2018 at 0:21
  • @MichaelHampton The newer firmware may include some over provisioning. I notice that the discrepancy is ~2% of the total space available. Commented Dec 19, 2018 at 20:43
  • @duct_tape_coder Yes, it's about 2%, which is several orders of magnitude too high a difference. Commented Dec 19, 2018 at 20:46
  • @MichaelHampton With Samsung SSDs, the default over-provisioning is 10%. However, I just looked up this drive and it's pure HDD (not even hybrid) so it shouldn't have over-provisioning. Possibly the drive SMART has identified failed areas and marked the 265,676,904 sectors as bad? I would contact HGST for more information. They might have had performance issues and downgraded the drive specs in the firmware. This could also be a bad firmware revision and there could be a firmware flash available. Commented Dec 19, 2018 at 21:16
