[gpfsug-discuss] mmdf vs. df
Grunenberg, Renar
Renar.Grunenberg at huk-coburg.de
Tue Jul 31 14:03:52 BST 2018
Hello all,
a question about what is happening here:
We are on GPFS 5.0.1.1 and host a TSM server cluster. A colleague of mine wanted to add new NSDs to grow his TSM storage pool (FILE device class volumes). The tsmpool file system had 45 TB of space before the change and 128 TB afterwards. We then created new 50 GB TSM volumes with the DEFINE VOLUME command, but the command failed after allocating 89 TB.
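For reference, the disks were added along the usual mmcrnsd/mmadddisk lines; a minimal sketch (the stanza contents, device path and server names below are illustrative placeholders, not our exact configuration):

# disk.stanza -- one stanza per new 8 TB data disk (placeholder values)
%nsd:
  device=/dev/mapper/lun_c4g8d_016
  nsd=nsd_c4g8d_tsmpool_016
  servers=node_a,node_b
  usage=dataOnly
  failureGroup=1
  pool=data01

mmcrnsd -F disk.stanza             # create the NSDs from the stanza file
mmadddisk tsmpool -F disk.stanza   # add them to the mounted file system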
Here are the relevant outputs:
[root@node_a tsmpool]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
tsmpool gpfs 128T 128T 44G 100% /gpfs/tsmpool
[root@node_a tsmpool]# mmdf tsmpool --block-size auto
disk                    disk size  failure  holds     holds            free                 free
name                               group    metadata  data     in full blocks         in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 839.99 GB)
nsd_r2g8f_tsmpool_001 100G 0 Yes No 88G ( 88%) 10.4M ( 0%)
nsd_c4g8f_tsmpool_001 100G 1 Yes No 88G ( 88%) 10.4M ( 0%)
nsd_g4_tsmpool 256M 2 No No 0 ( 0%) 0 ( 0%)
------------- -------------------- -------------------
(pool total) 200.2G 176G ( 88%) 20.8M ( 0%)
Disks in storage pool: data01 (Maximum disk size allowed is 133.50 TB)
nsd_r2g8d_tsmpool_016 8T 0 No Yes 3.208T ( 40%) 7.867M ( 0%)
nsd_r2g8d_tsmpool_015 8T 0 No Yes 3.205T ( 40%) 7.867M ( 0%)
nsd_r2g8d_tsmpool_014 8T 0 No Yes 3.208T ( 40%) 7.867M ( 0%)
nsd_r2g8d_tsmpool_013 8T 0 No Yes 3.206T ( 40%) 7.867M ( 0%)
nsd_r2g8d_tsmpool_012 8T 0 No Yes 3.208T ( 40%) 7.867M ( 0%)
nsd_r2g8d_tsmpool_011 8T 0 No Yes 3.205T ( 40%) 7.867M ( 0%)
nsd_r2g8d_tsmpool_001 8T 0 No Yes 1.48G ( 0%) 14.49M ( 0%)
nsd_r2g8d_tsmpool_002 8T 0 No Yes 1.582G ( 0%) 16.12M ( 0%)
nsd_r2g8d_tsmpool_003 8T 0 No Yes 1.801G ( 0%) 14.7M ( 0%)
nsd_r2g8d_tsmpool_004 8T 0 No Yes 1.629G ( 0%) 15.21M ( 0%)
nsd_r2g8d_tsmpool_005 8T 0 No Yes 1.609G ( 0%) 14.22M ( 0%)
nsd_r2g8d_tsmpool_006 8T 0 No Yes 1.453G ( 0%) 17.4M ( 0%)
nsd_r2g8d_tsmpool_010 8T 0 No Yes 3.208T ( 40%) 7.867M ( 0%)
nsd_r2g8d_tsmpool_009 8T 0 No Yes 3.197T ( 40%) 7.867M ( 0%)
nsd_r2g8d_tsmpool_007 8T 0 No Yes 3.194T ( 40%) 7.875M ( 0%)
nsd_r2g8d_tsmpool_008 8T 0 No Yes 3.195T ( 40%) 7.867M ( 0%)
nsd_c4g8d_tsmpool_016 8T 1 No Yes 3.195T ( 40%) 7.867M ( 0%)
nsd_c4g8d_tsmpool_006 8T 1 No Yes 888M ( 0%) 21.63M ( 0%)
nsd_c4g8d_tsmpool_005 8T 1 No Yes 996M ( 0%) 18.22M ( 0%)
nsd_c4g8d_tsmpool_004 8T 1 No Yes 920M ( 0%) 11.21M ( 0%)
nsd_c4g8d_tsmpool_003 8T 1 No Yes 984M ( 0%) 14.7M ( 0%)
nsd_c4g8d_tsmpool_002 8T 1 No Yes 1.082G ( 0%) 11.89M ( 0%)
nsd_c4g8d_tsmpool_001 8T 1 No Yes 1.035G ( 0%) 14.49M ( 0%)
nsd_c4g8d_tsmpool_007 8T 1 No Yes 3.281T ( 41%) 7.867M ( 0%)
nsd_c4g8d_tsmpool_008 8T 1 No Yes 3.199T ( 40%) 7.867M ( 0%)
nsd_c4g8d_tsmpool_009 8T 1 No Yes 3.195T ( 40%) 7.867M ( 0%)
nsd_c4g8d_tsmpool_010 8T 1 No Yes 3.195T ( 40%) 7.867M ( 0%)
nsd_c4g8d_tsmpool_011 8T 1 No Yes 3.195T ( 40%) 7.867M ( 0%)
nsd_c4g8d_tsmpool_012 8T 1 No Yes 3.195T ( 40%) 7.867M ( 0%)
nsd_c4g8d_tsmpool_013 8T 1 No Yes 3.195T ( 40%) 7.867M ( 0%)
nsd_c4g8d_tsmpool_014 8T 1 No Yes 3.195T ( 40%) 7.875M ( 0%)
nsd_c4g8d_tsmpool_015 8T 1 No Yes 3.194T ( 40%) 7.867M ( 0%)
------------- -------------------- -------------------
(pool total) 256T 64.09T ( 25%) 341.6M ( 0%)
============= ==================== ===================
(data) 256T 64.09T ( 25%) 341.6M ( 0%)
(metadata) 200G 176G ( 88%) 20.8M ( 0%)
============= ==================== ===================
(total) 256.2T 64.26T ( 25%) 362.4M ( 0%)
In GPFS we still had space, but the df output above seems to be wrong, and that makes TSM unhappy.
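To see exactly what the OS is being told (as opposed to GPFS's own per-disk accounting), the raw statfs() numbers can be put next to mmdf; a small sketch (stat -f is the GNU coreutils tool, not a GPFS command):

stat -f /gpfs/tsmpool            # raw statfs() block counts, i.e. what df is given
df -h /gpfs/tsmpool              # the same numbers, human-readable
mmdf tsmpool --block-size auto   # GPFS's per-disk view for comparison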
If we manually write a 50 GB file into this file system, like this:
[root@sap00733 tsmpool]# dd if=/dev/zero of=/gpfs/tsmpool/output bs=2M count=25600
25600+0 records in
25600+0 records out
53687091200 bytes (54 GB) copied, 30.2908 s, 1.8 GB/s
we now see this at the df level:
[root@sap00733 tsmpool]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
tsmpool gpfs 128T 96T 33T 75% /gpfs/tsmpool
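The whole anomaly can be reproduced in one go; a minimal sketch of the test we effectively ran (same path and size as above):

#!/bin/sh
# Show free space before, during and after a large write on the affected mount.
FS=/gpfs/tsmpool
df -h "$FS"                                         # before: only ~44G available
dd if=/dev/zero of="$FS/output" bs=2M count=25600   # write a 50 GB file
df -h "$FS"                                         # now: ~33T available
rm "$FS/output"
df -h "$FS"                                         # after the delete: back to ~44G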
If we delete the file, we again see only the 44G of free space from the first output.
The OS df interface seems to be broken here. I should also mention that we use some ignore parameters:
root@node_a(rhel7.4)> mmfsadm dump config | grep ignore
ignoreNonDioInstCount 0
! ignorePrefetchLUNCount 1
ignoreReplicaSpaceOnStat 0
ignoreReplicationForQuota 0
! ignoreReplicationOnStatfs 1
ignoreSync 0
The file system has the -S relatime option set.
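For completeness, the replication and atime settings can be verified per file system, since the statfs numbers should depend on them; a sketch (the mmlsfs flags are standard, but whether ignoreReplicationOnStatfs can be changed via mmchconfig on 5.0.1 is an assumption on my part):

mmlsfs tsmpool -m -M -r -R   # default/max metadata and data replication factors
mmlsfs tsmpool -S            # atime setting (relatime here)
# Assumption: if ignoreReplicationOnStatfs turns out to be the culprit,
# it may be toggled like this:
mmchconfig ignoreReplicationOnStatfs=no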
Is there a known bug here? Any hints on this?
Regards, Renar
Renar Grunenberg
IT Department - Operations
HUK-COBURG
Bahnhofsplatz
96444 Coburg
Phone: 09561 96-44110
Fax: 09561 96-44104
E-Mail: Renar.Grunenberg@huk-coburg.de
Internet: www.huk.de
________________________________
HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg
Registry court: Coburg HRB 100; Tax no. 9212/101/00021
Registered office: Bahnhofsplatz, 96444 Coburg
Chairman of the Supervisory Board: Prof. Dr. Heinrich R. Schradin.
Board of Management: Klaus-Jürgen Heitmann (Spokesman), Stefan Gronbach, Dr. Hans Olav Herøy, Dr. Jörg Rheinländer (deputy), Sarah Rössler, Daniel Thomas.
________________________________
This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden.
________________________________