[gpfsug-discuss] HAWC and LROC

Sven Oehme oehmes at gmail.com
Sat Nov 5 16:17:52 GMT 2016


Yes and no :)

While Olaf is right that it needs two independent block devices, partitions are
just fine. So one could in fact have a 200 GB SSD as the boot device and
partition it, let's say (see the sketch below the list):

30 GB   OS
20 GB   HAWC
150 GB  LROC
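
A minimal sketch of what that carve-up could look like with parted, assuming
the SSD shows up as /dev/sdb (the device name and exact offsets are
illustrative, not from this thread):

    # label the hypothetical /dev/sdb and create three partitions
    parted -s /dev/sdb mklabel gpt
    parted -s /dev/sdb mkpart os 1MiB 30GiB       # OS / boot
    parted -s /dev/sdb mkpart hawc 30GiB 50GiB    # HAWC log device
    parted -s /dev/sdb mkpart lroc 50GiB 100%     # LROC cache device

GPFS would then see /dev/sdb2 and /dev/sdb3 as the two independent block
devices Olaf mentioned.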

You have to keep in mind that LROC and HAWC have two very different
requirements on the 'device'. If you lose HAWC, you lose one copy of critical
data (that's why the log needs to be replicated); if you lose LROC, you only
lose cached data that is stored somewhere else. So the recommendation is to
use somewhat reliable 'devices' for HAWC, while for LROC simple consumer-grade
SSDs are fine.
So if you use one device for both, it should be reliable.
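
For reference, the two roles are also defined differently on the GPFS side.
A rough sketch of the stanzas and commands involved (node names, NSD names,
devices, file names and the threshold value are made up for illustration;
check the Spectrum Scale documentation for your release before using this):

    # LROC: local, non-replicated read cache NSD on the client node
    %nsd: nsd=lroc_client01 device=/dev/sdb3 servers=client01 usage=localCache

    # HAWC: recovery-log devices placed in the system.log pool, on two
    # nodes/failure groups so the log copies can be replicated
    %nsd: nsd=hawc_client01 device=/dev/sdb2 servers=client01 pool=system.log failureGroup=1
    %nsd: nsd=hawc_client02 device=/dev/sdb2 servers=client02 pool=system.log failureGroup=2

    mmcrnsd -F /tmp/lroc_stanzas                   # LROC NSDs are not added to a file system
    mmadddisk <fsname> -F /tmp/hawc_stanzas        # HAWC disks join the system.log pool
    mmchfs <fsname> --write-cache-threshold 64K    # enable HAWC for small synchronous writes

Once it is running, mmdiag --lroc on the client shows whether the LROC device
is actually being used.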

Sven

On Sat, Nov 5, 2016, 6:40 AM Olaf Weiser <olaf.weiser at de.ibm.com> wrote:

> You can use both - HAWC, LROC - on the same node... but you need dedicated,
> independent block devices...
> In addition, for HAWC you could consider replication and use two devices,
> even across two nodes...
>
> Sent from IBM Verse
>
> leslie elliott --- [gpfsug-discuss] HAWC and LROC ---
>
> From: "leslie elliott" <leslie.james.elliott at gmail.com>
> To: "gpfsug main discussion list" <gpfsug-discuss at spectrumscale.org>
> Date: Sat 05.11.2016 02:09
> Subject: [gpfsug-discuss] HAWC and LROC
> ------------------------------
>
>
> Hi, I am curious if anyone has run these together on a client and whether
> it helped.
>
> We would like to have these functions out at the client to optimise compute
> I/O in a couple of special cases.
>
> Can both exist at the same time on the same non-volatile hardware, or do the
> two functions need independent devices?
>
> And what would be the process to disestablish them on the clients once the
> requirement is satisfied?
>
> thanks
>
> leslie
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>

