[gpfsug-discuss] gpfsug-discuss Digest, Vol 78, Issue 6
Lo Re Giuseppe
lore at cscs.ch
Tue Jul 3 09:05:41 BST 2018
Dear Eric,
thanks a lot for this information.
And what about the gpfs_vfs metric group?
What is the difference between, for example,
"gpfs_fis_read_calls" and "gpfs_vfs_read"?
Again, I see the second one typically being higher than the first one.
In addition gpfs_vfs_read is not related to a specific file system...
[root@ela5 ~]# mmperfmon query gpfs_fis_read_calls -n1 -b 60
Legend:
1: ela5.cscs.ch|GPFSFilesystemAPI|durand.cscs.ch|store|gpfs_fis_read_calls
2: ela5.cscs.ch|GPFSFilesystemAPI|por.login.cscs.ch|apps|gpfs_fis_read_calls
3: ela5.cscs.ch|GPFSFilesystemAPI|por.login.cscs.ch|project|gpfs_fis_read_calls
4: ela5.cscs.ch|GPFSFilesystemAPI|por.login.cscs.ch|users|gpfs_fis_read_calls
Row Timestamp gpfs_fis_read_calls gpfs_fis_read_calls gpfs_fis_read_calls gpfs_fis_read_calls
1 2018-07-03-10:03:00 0 0 7274 0
[root@ela5 ~]# mmperfmon query gpfs_vfs_read -n1 -b 60
Legend:
1: ela5.cscs.ch|GPFSVFS|gpfs_vfs_read
Row Timestamp gpfs_vfs_read
1 2018-07-03-10:03:00 45123
Cheers,
Giuseppe
***********************************************************************
Giuseppe Lo Re
CSCS - Swiss National Supercomputing Center
Via Trevano 131
CH-6900 Lugano (TI) Tel: + 41 (0)91 610 8225
Switzerland Email: giuseppe.lore at cscs.ch
***********************************************************************
Hello Giuseppe,
Following was my attempt to answer a similar question some months ago.
When reading about the different viewpoints of the Zimon sensors, please
note that gpfs_fis_bytes_read is a metric provided by the GPFSFileSystemAPI
sensor, while gpfs_fs_bytes_read is a metric provided by the GPFSFileSystem
sensor. Therefore, gpfs_fis_bytes_read reflects application reads, while
gpfs_fs_bytes_read reflects NSD reads.
The GPFSFileSystemAPI and GPFSNodeAPI sensor metrics are from the point of
view of "applications" in the sense that they provide stats about I/O
requests made to files in GPFS file systems from user level applications
using POSIX interfaces like open(), close(), read(), write(), etc.
This is in contrast to similarly named sensors without the "API" suffix,
like GPFSFilesystem and GPFSNode. Those sensors provide stats about I/O
requests made by the GPFS code to NSDs (disks) making up GPFS file systems.
The relationship between application I/O and disk I/O might or might not be
obvious. Consider some examples. An application that starts sequentially
reading a file might, at least initially, cause more disk I/O than expected
because GPFS has decided to prefetch data. An application write() might
not immediately cause the writing of disk blocks, due to the operation of
the pagepool. Ultimately, application write()s might cause twice as much
data written to disk due to the replication factor of the file system.
Application I/O concerns itself with user data; disk I/O might have to
occur to handle the user data and associated file system metadata (like
inodes and indirect blocks).
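The replication example above can be made concrete with a little arithmetic. The sketch below is purely illustrative (the function name and metadata term are hypothetical, not any GPFS or Zimon API): it shows why disk bytes written can be a multiple of application bytes written.

```python
# Hypothetical illustration (not a GPFS API): how application-level write
# traffic can map to a larger amount of disk I/O, per the examples above.

def expected_disk_write_bytes(app_bytes, replication_factor, metadata_bytes=0):
    """Rough lower bound on disk bytes written for a given amount of
    application data: each user byte is written once per replica, plus
    whatever metadata (inodes, indirect blocks) the file system updates."""
    return app_bytes * replication_factor + metadata_bytes

# With 2-way data replication, a 1 MiB application write ultimately
# produces at least 2 MiB of disk writes for the user data alone.
app_bytes = 1024 * 1024
print(expected_disk_write_bytes(app_bytes, replication_factor=2))  # 2097152
```

The same reasoning runs in the other direction for reads: prefetching can make disk reads temporarily exceed application reads, so neither metric family is simply "wrong" when they disagree.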
The difference between GPFSFileSystemAPI and GPFSNodeAPI: GPFSFileSystemAPI
reports stats for application I/O per filesystem per node; GPFSNodeAPI
reports application I/O stats per node. Similarly, GPFSFilesystem reports
stats for disk I/O per filesystem per node; GPFSNode reports disk I/O stats
per node.
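One would therefore expect the per-node "API" figure to be roughly the sum of the per-filesystem "API" figures on that node. A minimal sketch of that relationship, using the numbers from the mmperfmon output above (the dictionary and variable names are illustrative, not Zimon identifiers):

```python
# Hypothetical sketch: GPFSFileSystemAPI reports application I/O per
# (node, filesystem); GPFSNodeAPI is the same traffic rolled up per node.

per_fs_reads = {        # gpfs_fis_read_calls on node ela5, per filesystem
    "store": 0,
    "apps": 0,
    "project": 7274,
    "users": 0,
}

# The per-node value should be the sum over all mounted file systems.
node_api_reads = sum(per_fs_reads.values())
print(node_api_reads)  # 7274
```

Note this aggregation holds between GPFSFileSystemAPI and GPFSNodeAPI; it does not by itself explain the larger GPFSVFS numbers, which are counted at a different layer.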
Eric M. Agar
agar at us.ibm.com
IBM Spectrum Scale Level 2
Software Defined Infrastructure, IBM Systems
From: Kristy Kallback-Rose <kkr at lbl.gov>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: 07/02/2018 10:06 AM
Subject: Re: [gpfsug-discuss] Zimon metrics details
Sent by: gpfsug-discuss-bounces at spectrumscale.org
+1
Would love to see more detailed descriptions on Zimon metrics.
Sent from my iPhone
On Jul 2, 2018, at 6:50 AM, Lo Re Giuseppe <lore at cscs.ch> wrote:
Hi everybody,
I am extracting the Zimon performance data and uploading them to our
elasticsearch cluster.
Now that I have the mechanism in place, it's time to understand what I
am actually uploading ;)
Maybe this has already been asked: where can I find a (as detailed as
possible) explanation of the different Zimon metrics?
The Spectrum Scale problem determination guide doesn't spend more than
half a line on each.
In particular I would like to understand the difference between these
ones:
- gpfs_fs_bytes_read
- gpfs_fis_bytes_read
The second typically gives higher values than the first one.
Thanks for any hint.
Regards,
Giuseppe
***********************************************************************
Giuseppe Lo Re
CSCS - Swiss National Supercomputing Center
Via Trevano 131
CH-6900 Lugano (TI) Tel: + 41 (0)91 610 8225
Switzerland Email:
giuseppe.lore at cscs.ch
***********************************************************************
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
End of gpfsug-discuss Digest, Vol 78, Issue 6
*********************************************