[gpfsug-discuss] HAWC (Highly available write cache)

Sven Oehme oehmes at gmail.com
Mon Aug 1 19:49:37 BST 2016


When you say 'synchronous write', what do you mean by that?
If you are talking about direct I/O (the O_DIRECT flag), those writes don't
go through the HAWC data path; that is by design.
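To make the distinction concrete, here is a minimal sketch on a plain Linux box (file names are illustrative): a small buffered write forced to disk with fsync is the kind of synchronous write HAWC can harden in the recovery log, while a write issued with O_DIRECT bypasses the page cache and, as noted above, the HAWC data path.

```shell
# Buffered synchronous write: goes through the page cache, then fsync
# forces it to stable storage. On GPFS, small writes of this kind are
# candidates for HAWC.
dd if=/dev/zero of=buffered.bin bs=4k count=1 conv=fsync 2>/dev/null

# Direct I/O write: O_DIRECT bypasses the page cache entirely, so on
# GPFS it would go straight to the data disks, not via HAWC.
dd if=/dev/zero of=direct.bin bs=4k count=1 oflag=direct 2>/dev/null

ls -l buffered.bin direct.bin
```

Whether an application's "synchronous writes" fall in the first or the second category is exactly the question Sven is raising below.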

sven

On Mon, Aug 1, 2016 at 11:36 AM, Tejas Rao <raot at bnl.gov> wrote:

> I have enabled write cache (HAWC) by running the below commands. The
> recovery logs are supposedly placed in the replicated system metadata pool
> (SSDs). I do not have a "system.log" pool as it is only needed if recovery
> logs are stored on the client nodes.
>
> mmchfs gpfs01 --write-cache-threshold 64K
> mmchfs gpfs01 -L 1024M
> mmchconfig logPingPongSector=no
>
> I have recycled the daemon on all nodes in the cluster (including the NSD
> nodes).
>
> I still see small synchronous writes (4K) from the clients going to the
> data drives (data pool). I am checking this by looking at "mmdiag --iohist"
> output. Should they not be going to the system pool?
>
> Do I need to do something else? How can I confirm that HAWC is working as
> advertised?
>
> Thanks.
>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
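One way to approach the verification question asked above, sketched under assumptions: the file system name gpfs01 comes from this thread, the mmlsfs option letters are assumed to mirror the mmchfs options used to set the values, and the "logData" buffer type and grep pattern are assumptions about mmdiag --iohist output, so treat this as a starting point rather than a definitive procedure.

# 1. Confirm the HAWC threshold and recovery log size took effect.
mmlsfs gpfs01 --write-cache-threshold
mmlsfs gpfs01 -L

# 2. From a client, generate small synchronous (but NOT O_DIRECT)
#    writes, e.g. 4K writes flushed with fsync:
dd if=/dev/zero of=/gpfs01/hawc-test bs=4k count=256 conv=fsync

# 3. While the writes run, watch the I/O history. If HAWC is active,
#    the small writes should appear as recovery-log writes against the
#    system-pool NSDs rather than as 4K data writes to the data pool.
mmdiag --iohist | grep -i log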
