[gpfsug-discuss] Tuning: single client, single thread, small files - native Scale vs NFS

valdis.kletnieks at vt.edu
Tue Oct 16 01:42:14 BST 2018


On Mon, 15 Oct 2018 18:34:50 -0400, "Kumaran Rajaram" said:

> 1. >> When writing to GPFS directly I'm able to write ~1800 files / second in a test setup.
> >> This is roughly the same on the protocol nodes (NSD client), as well as
> >> on the ESS IO nodes (NSD server).
>
> 2. >> When writing to the NFS export on the protocol node itself (to avoid
> >> any network effects) I'm only able to write ~230 files / second.

> IMHO #2, writing to the NFS export on the protocol node, should be the same as #1.
> The protocol node is also an NSD client, and when you write from a protocol node, it
> will use the NSD protocol to write to the ESS IO nodes. In #1, you cite seeing
> ~1800 files/sec from the protocol node and in #2 you cite seeing ~230 files/sec, which
> seem to contradict each other.

I think he means this:

1) ssh nsd_server
2) cd /gpfs/filesystem/testarea
3) (whomp out 1800 files/sec)
4) mount -t nfs localhost:/gpfs/filesystem/testarea /mnt/test
5) cd /mnt/test
6) Watch the same test struggle to hit 230.

Indicating the issue is in going from NFS to GPFS, not in GPFS itself.
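
For what it's worth, something like the following untested sketch is the sort of
single-threaded small-file comparison being described (the two paths are the ones
from the steps above; the file count and 4 KB file size are arbitrary assumptions):

    #!/bin/bash
    # Create N small files single-threaded and report files/sec,
    # once on the native GPFS path and once on the loopback NFS mount.
    N=5000
    for DIR in /gpfs/filesystem/testarea /mnt/test; do
        mkdir -p "$DIR/smallfile-test"
        START=$(date +%s.%N)
        for i in $(seq 1 $N); do
            # ~4 KB per file, no fsync
            head -c 4096 /dev/zero > "$DIR/smallfile-test/file.$i"
        done
        END=$(date +%s.%N)
        printf "%s: %.0f files/sec\n" "$DIR" \
            "$(echo "$N / ($END - $START)" | bc -l)"
    done

Same client, same backing filesystem; the only difference is whether the writes
go through the NFS server layer.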

(For what it's worth, we've had issues with Ganesha as well...)

