[Cialug] Bad hd in raid?
Wyatt Davis
cialug@cialug.org
Wed, 20 Apr 2005 15:29:35 -0500
Thanks, I'll try that. At the moment I'm using badblocks -svn /dev/hde
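For reference, my rough plan once badblocks finishes (untested, just pieced together from man smartctl, so correct me if I have the flags wrong):

  smartctl -t long /dev/hde      # start the extended (long) self-test
  smartctl -l selftest /dev/hde  # read the self-test log, only after the test is done
  smartctl -H /dev/hde           # overall health assessment

I'm also logging the badblocks output with -o, e.g. badblocks -svn -o hde-bad.txt /dev/hde (hde-bad.txt is just a filename I picked). Per your note, I'll wait for the self-test to finish before querying it.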
This is from dmesg:
hde: attached ide-disk driver.
hde: host protected area => 1
hde: 156301488 sectors (80026 MB) w/8192KiB Cache, CHS=9729/255/63, UDMA(100)
hdf: attached ide-disk driver.
hdf: host protected area => 1
hdf: 156301488 sectors (80026 MB) w/8192KiB Cache, CHS=155061/16/63, UDMA(100)
hdg: attached ide-disk driver.
hdg: host protected area => 1
hdg: 156301488 sectors (80026 MB) w/8192KiB Cache, CHS=155061/16/63, UDMA(100)
hdh: attached ide-disk driver.
hdh: host protected area => 1
hdh: 156301488 sectors (80026 MB) w/8192KiB Cache, CHS=155061/16/63, UDMA(100)
I noticed the CHS= value on hde is different from all of the other drives. Is that a problem?
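(For what it's worth, doing the multiplication, both geometries come out to roughly the same size: 9729 x 255 x 63 = 156,296,385 sectors for hde versus 155061 x 16 x 63 = 156,301,488 for the others, and dmesg reports 156301488 sectors for all four drives. So my guess is it's just a different CHS translation rather than an actual capacity difference, but I'd appreciate confirmation.)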
Also this:
Partition check:
hde: unknown partition table
hdf: hdf1
hdg: [PTBL] [9729/255/63] hdg1
hdh: hdh1
And this:
raid5: md0, not all disks are operational -- trying to recover array
raid5: allocated 4330kB for md0
raid5: raid level 5 set md0 active with 3 out of 4 devices, algorithm 2
RAID5 conf printout:
--- rd:4 wd:3 fd:1
disk 0, s:0, o:0, n:0 rd:0 us:1 dev:[dev 00:00]
disk 1, s:0, o:1, n:1 rd:1 us:1 dev:hdf1
disk 2, s:0, o:1, n:2 rd:2 us:1 dev:hdg1
disk 3, s:0, o:1, n:3 rd:3 us:1 dev:hdh1
The filesystem on md0 is reiserfs, if that helps.
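Assuming badblocks and the SMART test come back clean, my plan is to hot-add the partition back into md0 with something like:

  raidhotadd /dev/md0 /dev/hde1

(that's the raidtools command; if mdadm is installed, mdadm /dev/md0 --add /dev/hde1 should be the equivalent). I haven't run either yet. One thing that worries me is the "hde: unknown partition table" line above, so I'll verify hde1 still exists with fdisk -l /dev/hde before trying.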
Thanks,
Wyatt
On 4/20/05, Josh More <MoreJ@alliancetechnologies.net> wrote:
>
> You can use smartctl to force a test and check it afterwards.
> I will not supply a guide on how to do so, as the steps depend
> on the type of drive you have and what kind of tests you run, and
> I don't want to lead you down the wrong path.
>
> However, man smartctl is fairly readable, so it's not too hard.
> Just know that reading a test's status stops the test,
> so be sure to wait until the test is done before querying it.
>
> --
> -Josh More, RHCE, CISSP
> morej@alliancetechnologies.net
> 515-245-7701
>
>
>
> >>>oddvector@gmail.com 04/20/05 1:59 pm >>>
>
> I have a disk that likes to drop out of my raid array. Here is my
> current mdstat.
>
> Personalities : [linear] [raid0] [raid1] [raid5]
> read_ahead 1024 sectors
> md0 : active raid5 hdh1[3] hdg1[2] hdf1[1]
> 234444288 blocks level 5, 128k chunk, algorithm 2 [4/3] [_UUU]
>
> unused devices: <none>
>
> I'm running software raid 5 on a Slackware 10.1 box. hde is the
> culprit drive. The last time I had this drive drop out of the raid I
> was able to add it back in with no problems. I'm also not seeing any
> read/write errors in my log files.
>
> Is there any way for me to check this disk before I stick it back into
> the raid? If it was going bad, shouldn't there be errors in my log
> files? Should I just replace this disk?
>
> Cialug mailing list
> Cialug@cialug.org
> http://cialug.org/mailman/listinfo/cialug
>