[gpfsug-discuss] Using HAWC (write cache)

Simon Thompson (Research Computing - IT Services) S.J.Thompson at bham.ac.uk
Thu Aug 27 15:17:19 BST 2015


Oh yeah, I see what you mean. I've just been looking at another cluster with LROC drives, and they have all disappeared. They are still listed in mmlsnsd, but mmdiag --lroc shows the drive as "NULL"/Idle.
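For anyone wanting to spot-check this across a cluster, something along these lines should do it (a sketch, assuming mmdsh can reach all the nodes; the grep pattern matches the status line in the mmdiag output further down this thread):

# Print the LROC device/status line from every node
mmdsh -N all /usr/lpp/mmfs/bin/mmdiag --lroc | grep 'LROC Device'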

Simon

From: "Oesterlin, Robert" <Robert.Oesterlin at nuance.com>
Reply-To: gpfsug main discussion list <gpfsug-discuss at gpfsug.org>
Date: Wednesday, 26 August 2015 13:27
To: gpfsug main discussion list <gpfsug-discuss at gpfsug.org>
Subject: Re: [gpfsug-discuss] Using HAWC (write cache)

Yep, mine do too, initially. It seems that after a number of days they get marked as removed. In any case, IBM confirmed it. So… tread lightly.
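Given that, it's probably worth polling the state rather than trusting it. A minimal sketch (node names are placeholders; assumes passwordless ssh to the LROC clients):

#!/bin/bash
# Warn if any LROC device is no longer in the Running state.
for node in client1 client2; do
    line=$(ssh "$node" /usr/lpp/mmfs/bin/mmdiag --lroc | grep 'LROC Device')
    echo "$line" | grep -q 'status Running' \
        || echo "WARNING: LROC not Running on $node: $line"
done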

Bob Oesterlin
Sr Storage Engineer, Nuance Communications
507-269-0413


From: <gpfsug-discuss-bounces at gpfsug.org> on behalf of "Simon Thompson (Research Computing - IT Services)"
Reply-To: gpfsug main discussion list
Date: Wednesday, August 26, 2015 at 7:23 AM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Using HAWC (write cache)

Hmm, the ones I created this morning (on a client node) seem to be working:

mmdiag --lroc

=== mmdiag: lroc ===
LROC Device(s): '0A1E017755DD7808#/dev/sdb1;' status Running
Cache inodes 1 dirs 1 data 1  Config: maxFile 0 stubFile 0
Max capacity: 190732 MB, currently in use: 4582 MB
Statistics from: Tue Aug 25 14:54:52 2015

Total objects stored 4927 (4605 MB) recalled 81 (55 MB)
      objects failed to store 467 failed to recall 1 failed to inval 0
      objects queried 0 (0 MB) not found 0 = 0.00 %
      objects invalidated 548 (490 MB)

This node was running GPFS 4.1.1-1.
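For reference, an LROC device is defined as an NSD with usage=localCache; a sketch of a stanza file (device, NSD and node names here are examples, not the actual ones used above):

# lroc.stanza -- example names only
%nsd:
  device=/dev/sdb1
  nsd=client1_lroc01
  servers=client1
  usage=localCache

mmcrnsd -F lroc.stanza

After that, the daemon on the named node should pick the device up as local read-only cache; a restart of GPFS on that node may be needed before it shows in mmdiag --lroc.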

Simon

