[gpfsug-discuss] Preliminary conclusion: single client, single thread, small files - native Scale vs NFS
Tomer Perry
TOMP at il.ibm.com
Wed Oct 17 14:41:03 BST 2018
Hi,
Without going into too much detail, AFAIR, ONTAP integrates NVRAM into the
NFS write cache (as it was developed as a NAS product).
ONTAP sets the STABLE bit, which in effect tells the client "hey, I have
no write cache at all, everything is written to stable storage - thus,
don't bother with COMMIT (sync) commands - they are meaningless".
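(For reference: the "STABLE bit" is the stable_how field of the NFSv3
WRITE call and the committed field of its reply, defined in RFC 1813.
A server that answers every WRITE with FILE_SYNC is telling the client
the data is already on stable storage, so no follow-up COMMIT is needed.
A C rendering of the relevant RFC 1813 definitions, trimmed to the
fields discussed here:

enum stable_how {
    UNSTABLE  = 0,   /* server may cache the write; client must COMMIT later */
    DATA_SYNC = 1,   /* file data is on stable storage, metadata may lag     */
    FILE_SYNC = 2    /* data and metadata are stable; COMMIT is redundant    */
};

struct WRITE3resok {
    /* count, write verifier and wcc_data fields omitted here */
    enum stable_how committed;  /* the guarantee the server actually gives;
                                   an NVRAM-backed filer can always answer
                                   FILE_SYNC, which is the behaviour
                                   described above */
};
)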
Regards,
Tomer Perry
Scalable I/O Development (Spectrum Scale)
email: tomp at il.ibm.com
1 Azrieli Center, Tel Aviv 67021, Israel
Global Tel: +1 720 3422758
Israel Tel: +972 3 9188625
Mobile: +972 52 2554625
From: "Keigo Matsubara" <MKEIGO at jp.ibm.com>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: 17/10/2018 16:35
Subject: Re: [gpfsug-discuss] Preliminary conclusion: single
client, single thread, small files - native Scale vs NFS
Sent by: gpfsug-discuss-bounces at spectrumscale.org
I also wonder how many products actually exploit NFS async mode to improve
I/O performance at the risk of file system inconsistency:
gpfsug-discuss-bounces at spectrumscale.org wrote on 2018/10/17 22:26:52:
> Using this option usually improves performance, but at
> the cost that an unclean server restart (i.e. a crash) can cause
> data to be lost or corrupted."
For instance, NetApp (at the very least the FAS 3220 running Data ONTAP
8.1.2p4 7-mode, which I tested with) forcibly *promotes* async mode to
sync mode.
Promoting means that even if the NFS client requests the async mount mode,
the NFS server ignores the request and allows only the sync mount mode.
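To make the client-side effect concrete, here is a minimal sketch (plain
C over POSIX; /mnt/nfs/testfile is a hypothetical path on an NFS mount)
of where the sync/async distinction bites for small-file workloads:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/nfs/testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    const char buf[] = "small-file payload";
    if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) {
        /* The client may only have cached this; over NFSv3 the WRITE can
         * go out as UNSTABLE and sit in the server's volatile cache. */
        perror("write");
        return 1;
    }

    /* fsync() forces the client to flush dirty pages and send a COMMIT.
     * Against a sync (FILE_SYNC-only) server such as the filer above,
     * the data is already stable and the commit costs little extra;
     * against an async server the FILE_SYNC reply is a promise the
     * server cannot keep if it crashes before destaging its cache. */
    if (fsync(fd) < 0) { perror("fsync"); return 1; }

    return close(fd) < 0 ? 1 : 0;
}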
Best Regards,
---
Keigo Matsubara, Storage Solutions Client Technical Specialist, IBM Japan
TEL: +81-50-3150-0595, T/L: 6205-0595
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss