[gpfsug-discuss] get free space in GSS
Jan-Frode Myklebust
janfrode at tanso.net
Sun Jul 9 17:45:32 BST 2017
You had it here:
[root at server ~]# mmlsrecoverygroup BB1RGL -L
                      declustered
 recovery group         arrays     vdisks  pdisks  format version
 -----------------    -----------  ------  ------  --------------
 BB1RGL                          3      18     119         4.2.0.1

 declustered   needs                            replace               scrub     background activity
    array     service  vdisks  pdisks  spares  threshold  free space  duration  task   progress  priority
 -----------  -------  ------  ------  ------  ---------  ----------  --------  -------------------------
 LOG          no            1       3     0,0          1     558 GiB   14 days  scrub       51%  low
 DA1          no           11      58    2,31          2      12 GiB   14 days  scrub       78%  low
 DA2          no            6      58    2,31          2    4096 MiB   14 days  scrub       10%  low
12 GiB free in DA1, and 4096 MiB free in DA2, but effectively you'll get less once you
add a raidCode to the vdisk. The best way to use it is to simply not specify a
size for the vdisk, so the maximum possible size will be used.
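For example, a minimal stanza sketch (untested; the vdisk name, diskUsage and pool
below are only placeholders for illustration, so match them to your own naming and
pool layout). Leaving out the size= attribute lets mmcrvdisk take the maximum
possible size from the declustered array:

    # hypothetical stanza -- adjust names, usage and pool to your setup
    %vdisk: vdiskName=BB1RGL_GPFS6_DATA1
      rg=BB1RGL
      da=DA1
      blocksize=1m
      raidCode=8+2p
      diskUsage=dataOnly
      pool=data

and then create it with:

    mmcrvdisk -F vdisk6.stanza

where vdisk6.stanza is whatever file you saved the stanza in.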
-jf
Sun 9 Jul 2017 at 14:26, atmane khiredine <a.khiredine at meteo.dz> wrote:
> Thank you very much for replying. I cannot find the free space.
>
> Here is the output of mmlsrecoverygroup
>
> [root at server1 ~]#mmlsrecoverygroup
>
>                      declustered
>                       arrays with
>  recovery group        vdisks     vdisks  servers
>  ------------------  -----------  ------  -------
>  BB1RGL                        3      18  server1,server2
>  BB1RGR                        3      18  server2,server1
> --------------------------------------------------------------
> [root at server ~]# mmlsrecoverygroup BB1RGL -L
>
>                       declustered
>  recovery group         arrays     vdisks  pdisks  format version
>  -----------------    -----------  ------  ------  --------------
>  BB1RGL                          3      18     119         4.2.0.1
>
>  declustered   needs                            replace               scrub     background activity
>     array     service  vdisks  pdisks  spares  threshold  free space  duration  task   progress  priority
>  -----------  -------  ------  ------  ------  ---------  ----------  --------  -------------------------
>  LOG          no            1       3     0,0          1     558 GiB   14 days  scrub       51%  low
>  DA1          no           11      58    2,31          2      12 GiB   14 days  scrub       78%  low
>  DA2          no            6      58    2,31          2    4096 MiB   14 days  scrub       10%  low
>
>                                          declustered                          checksum
>  vdisk               RAID code             array      vdisk size  block size  granularity  state  remarks
>  ------------------  -----------------  -----------  ----------  ----------  -----------  -----  -------
>  gss0_logtip         3WayReplication    LOG             128 MiB       1 MiB          512  ok     logTip
>  gss0_loghome        4WayReplication    DA1              40 GiB       1 MiB          512  ok     log
>  BB1RGL_GPFS4_META1  4WayReplication    DA1             451 GiB       1 MiB       32 KiB  ok
>  BB1RGL_GPFS4_DATA1  8+2p               DA1            5133 GiB       1 MiB       32 KiB  ok
>  BB1RGL_GPFS1_META1  4WayReplication    DA1             451 GiB       1 MiB       32 KiB  ok
>  BB1RGL_GPFS1_DATA1  8+2p               DA1              12 TiB       1 MiB       32 KiB  ok
>  BB1RGL_GPFS3_META1  4WayReplication    DA1             451 GiB       1 MiB       32 KiB  ok
>  BB1RGL_GPFS3_DATA1  8+2p               DA1              12 TiB       1 MiB       32 KiB  ok
>  BB1RGL_GPFS2_META1  4WayReplication    DA1             451 GiB       1 MiB       32 KiB  ok
>  BB1RGL_GPFS2_DATA1  8+2p               DA1              13 TiB       2 MiB       32 KiB  ok
>  BB1RGL_GPFS2_META2  4WayReplication    DA2             451 GiB       1 MiB       32 KiB  ok
>  BB1RGL_GPFS2_DATA2  8+2p               DA2              13 TiB       2 MiB       32 KiB  ok
>  BB1RGL_GPFS1_META2  4WayReplication    DA2             451 GiB       1 MiB       32 KiB  ok
>  BB1RGL_GPFS1_DATA2  8+2p               DA2              12 TiB       1 MiB       32 KiB  ok
>  BB1RGL_GPFS5_META1  4WayReplication    DA1             750 GiB       1 MiB       32 KiB  ok
>  BB1RGL_GPFS5_DATA1  8+2p               DA1              70 TiB      16 MiB       32 KiB  ok
>  BB1RGL_GPFS5_META2  4WayReplication    DA2             750 GiB       1 MiB       32 KiB  ok
>  BB1RGL_GPFS5_DATA2  8+2p               DA2              90 TiB      16 MiB       32 KiB  ok
>
>  config data         declustered array  VCD spares  actual rebuild spare space  remarks
>  ------------------  -----------------  ----------  --------------------------  ----------------
>  rebuild space       DA1                        31                    34 pdisk
>  rebuild space       DA2                        31                    35 pdisk
>
>
>  config data         max disk group fault tolerance  actual disk group fault tolerance  remarks
>  ------------------  ------------------------------  ---------------------------------  ------------------------
>  rg descriptor       1 enclosure + 1 drawer          1 enclosure + 1 drawer             limiting fault tolerance
>  system index        2 enclosure                     1 enclosure + 1 drawer             limited by rg descriptor
>
>  vdisk               max disk group fault tolerance  actual disk group fault tolerance  remarks
>  ------------------  ------------------------------  ---------------------------------  ------------------------
>  gss0_logtip         2 enclosure                     1 enclosure + 1 drawer             limited by rg descriptor
>  gss0_loghome        1 enclosure + 1 drawer          1 enclosure + 1 drawer
>  BB1RGL_GPFS4_META1  1 enclosure + 1 drawer          1 enclosure + 1 drawer
>  BB1RGL_GPFS4_DATA1  2 drawer                        2 drawer
>  BB1RGL_GPFS1_META1  1 enclosure + 1 drawer          1 enclosure + 1 drawer
>  BB1RGL_GPFS1_DATA1  2 drawer                        2 drawer
>  BB1RGL_GPFS3_META1  1 enclosure + 1 drawer          1 enclosure + 1 drawer
>  BB1RGL_GPFS3_DATA1  2 drawer                        2 drawer
>  BB1RGL_GPFS2_META1  1 enclosure + 1 drawer          1 enclosure + 1 drawer
>  BB1RGL_GPFS2_DATA1  2 drawer                        2 drawer
>  BB1RGL_GPFS2_META2  3 enclosure                     1 enclosure + 1 drawer             limited by rg descriptor
>  BB1RGL_GPFS2_DATA2  2 drawer                        2 drawer
>  BB1RGL_GPFS1_META2  3 enclosure                     1 enclosure + 1 drawer             limited by rg descriptor
>  BB1RGL_GPFS1_DATA2  2 drawer                        2 drawer
>  BB1RGL_GPFS5_META1  1 enclosure + 1 drawer          1 enclosure + 1 drawer
>  BB1RGL_GPFS5_DATA1  2 drawer                        2 drawer
>  BB1RGL_GPFS5_META2  3 enclosure                     1 enclosure + 1 drawer             limited by rg descriptor
>  BB1RGL_GPFS5_DATA2  2 drawer                        2 drawer
>
> active recovery group server servers
> ----------------------------------------------- -------
> server1 server1,server2
>
>
> Atmane Khiredine
> HPC System Administrator | Office National de la Météorologie
> Tel: +213 21 50 73 93 # 303 | Fax: +213 21 50 79 40 | E-mail:
> a.khiredine at meteo.dz
> ________________________________
> From: Laurence Horrocks-Barlow [laurence at qsplace.co.uk]
> Sent: Sunday, 9 July 2017 09:58
> To: gpfsug main discussion list; atmane khiredine;
> gpfsug-discuss at spectrumscale.org
> Subject: Re: [gpfsug-discuss] get free space in GSS
>
> You can check the recovery groups to see if there is any remaining space.
>
> I don't have access to my test system to confirm the syntax, but if memory
> serves:
>
> Run mmlsrecoverygroup to get a list of all the recovery groups, then:
>
> mmlsrecoverygroup <YOURRECOVERYGROUP> -L
>
> This will list all your declustered arrays and their free space.
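> If you want it for all recovery groups in one go, something along these lines
> should work (a rough, untested sketch; the grep just pulls out the LOG/DA rows
> that carry the "free space" column, and the recovery group names are the ones
> from your output, so swap in your own):
>
>     for rg in BB1RGL BB1RGR; do
>         echo "=== $rg ==="
>         # only the declustered array rows (LOG, DA1, DA2, ...) show free space
>         mmlsrecoverygroup $rg -L | grep -E '^ *(LOG|DA[0-9]+) '
>     done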
>
> There might be another method, however this way has always worked well for
> me.
>
> -- Lauz
>
>
>
> On 9 July 2017 09:00:07 BST, Atmane <a.khiredine at meteo.dz> wrote:
>
> Dear all,
>
> My name is Khiredine Atmane and I am an HPC system administrator at the
> National Office of Meteorology, Algeria. We have a GSS24 running
> gss2.5.10.3-3b and gpfs-4.2.0.3.
>
> GSS configuration: 4 enclosures, 6 SSDs, 1 empty slot, 239 disks total, 0
> NVRAM partitions
>
> disks = 3 TB
> SSDs = 200 GB
> df -h
> Filesystem      Size  Used  Avail  Use%  Mounted on
> /dev/gpfs1       49T   18T    31T   38%  /gpfs1
> /dev/gpfs2       53T   13T    40T   25%  /gpfs2
> /dev/gpfs3       25T  4.9T    20T   21%  /gpfs3
> /dev/gpfs4       11T  133M    11T    1%  /gpfs4
> /dev/gpfs5      323T   34T   290T   11%  /gpfs5
>
> The total is 461 TB.
>
> I think we should have more space than this.
> Could anyone recommend how to troubleshoot this and find out how much free
> space is left in the GSS?
> How do I find the available space?
> Thank you!
>
> Atmane
>
>
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>