[gpfsug-discuss] Confusing I/O Behavior

Chris Hoffman cphoffma at uoregon.edu
Tue Apr 10 17:18:49 BST 2018


Hi Stumped,


Is this MPI job on one machine? Multiple nodes? Are the tiny writes going to the same file or to different ones?


Chris

________________________________
From: gpfsug-discuss-bounces at spectrumscale.org <gpfsug-discuss-bounces at spectrumscale.org> on behalf of Knister, Aaron S. (GSFC-606.2)[COMPUTER SCIENCE CORP] <aaron.s.knister at nasa.gov>
Sent: Tuesday, April 10, 2018 9:00 AM
To: gpfsug main discussion list
Subject: [gpfsug-discuss] Confusing I/O Behavior

I hate admitting this but I've found something that's got me stumped.

We have a user running an MPI job on the system. Each rank opens up several output files to which it writes ASCII debug information. The net result across several hundred ranks is an absolute smattering of teeny tiny I/O requests to the underlying disks, which they don't appreciate. Performance plummets. The I/O requests are 30 to 80 bytes in size. What I don't understand is why these write requests aren't getting batched up into larger write requests before they reach the underlying disks.
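
To make the pattern concrete, here's a rough sketch of the kind of per-rank behavior described above (the file name, message text, and loop count are made up; the point is just lots of tiny unbuffered write() calls and no fsync):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical per-rank debug log; the real app opens several such files. */
        int fd = open("debug.rank0.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0)
            return 1;

        for (int i = 0; i < 100000; i++) {
            char line[96];
            /* One short ASCII line (~30-80 bytes) per write(), no stdio buffering. */
            int n = snprintf(line, sizeof(line), "rank 0 step %d: value looks ok\n", i);
            write(fd, line, (size_t)n);
        }

        close(fd);
        return 0;
    }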

If I do something like "dd if=/dev/zero of=foo bs=8k" on a node, I see that the nasty unaligned 8k I/O requests are batched up into nice 1M I/O requests before they hit the NSD.
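
(For what it's worth, a handy way to see the request sizes that actually reach the NSDs, assuming your code level has it, is the per-node I/O history, e.g.:

    mmdiag --iohist

run on an NSD server while the workload is going.)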

As best I can tell, the application isn't doing any fsyncs and isn't doing direct I/O to these files.
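
(If it helps, one way to double-check that, assuming the ranks can be traced, is to attach something like

    strace -f -e trace=openat,write,fsync,fdatasync -p <pid of a rank>

and look for fsync/fdatasync calls or O_DIRECT in the open flags.)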

Can anyone explain why seemingly very similar I/O workloads appear to result in well-formed NSD I/O in one case and awful I/O in another?

Thanks!

-Stumped

