[gpfsug-discuss] LROC
Matt Weil
mweil at wustl.edu
Thu Dec 29 17:23:11 GMT 2016
Thanks all, I see it using the LROC now.
On 12/29/16 11:06 AM, Sven Oehme wrote:
> First, good that the problem is at least solved. It would be great if
> you could open a PMR so this gets properly fixed; the daemon shouldn't
> segfault, but rather print a message that the device is too big.
>
> On the caching: LROC only gets used when you run out of pagepool or
> when you run out of full file objects. So what benchmark or test did
> you run to push data into LROC?
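> For example, a simple way to generate that kind of traffic is to re-read
> a data set larger than the pagepool and then check the counters again; a
> rough, untested sketch (the path and file count are placeholders):
>
>     # create ~100 x 1 GiB files in the GPFS file system (placeholder path)
>     mkdir -p /gpfs/fs0/lroctest
>     for i in $(seq 1 100); do
>         dd if=/dev/zero of=/gpfs/fs0/lroctest/file.$i bs=1M count=1024
>     done
>     # read everything back twice; once the working set no longer fits in
>     # the pagepool, evicted data should start landing on the LROC device
>     # and the second pass should show recalls
>     for pass in 1 2; do
>         for i in $(seq 1 100); do
>             dd if=/gpfs/fs0/lroctest/file.$i of=/dev/null bs=1M
>         done
>     done
>     mmdiag --lroc    # "Total objects stored/recalled" should now be non-zero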
>
> sven
>
>
> On Thu, Dec 29, 2016 at 5:41 PM Matt Weil <mweil at wustl.edu> wrote:
>
> After a restart it still doesn't seem to be in use.
>
>> [root@ces1 ~]# mmdiag --lroc
>>
>>
>> === mmdiag: lroc ===
>> LROC Device(s): '0A6403AA5865389E#/dev/nvme0n1;' status Running
>> Cache inodes 1 dirs 1 data 1  Config: maxFile 1073741824 stubFile 1073741824
>> Max capacity: 1526184 MB, currently in use: 0 MB
>> Statistics from: Thu Dec 29 10:35:32 2016
>>
>>
>> Total objects stored 0 (0 MB) recalled 0 (0 MB)
>> objects failed to store 0 failed to recall 0 failed to inval 0
>> objects queried 0 (0 MB) not found 0 = 0.00 %
>> objects invalidated 0 (0 MB)
>
> On 12/29/16 10:28 AM, Matt Weil wrote:
>>
>> Wow, that was it.
>>
>>> mmdiag --lroc
>>>
>>> === mmdiag: lroc ===
>>> LROC Device(s): '0A6403AA5865389E#/dev/nvme0n1;' status Running
>>> Cache inodes 1 dirs 1 data 1  Config: maxFile 1073741824 stubFile 1073741824
>>> Max capacity: 1526184 MB, currently in use: 0 MB
>>> Statistics from: Thu Dec 29 10:08:58 2016
>> It is not caching, however. I will restart GPFS to see if that makes
>> it start working.
>>
>> On 12/29/16 10:18 AM, Matt Weil wrote:
>>>
>>>
>>>
>>> On 12/29/16 10:09 AM, Sven Oehme wrote:
>>>> I agree, that is a very long name. Given this is an NVMe device it
>>>> should show up as /dev/nvmeXYZ, so I suggest reporting exactly that
>>>> in nsddevices and retrying.
>>>> I vaguely remember we have some fixed-length device name limitation,
>>>> but I don't remember what the length is, so this would be my first
>>>> guess too: the long name is causing trouble.
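>>>> A minimal, untested sketch of such an nsddevices user exit, modeled on
>>>> /usr/lpp/mmfs/samples/nsddevices.sample (the "generic" device type and
>>>> the return-code convention are assumptions; check the comments in the
>>>> sample for the exact semantics):
>>>>
>>>>     #!/bin/ksh
>>>>     # /var/mmfs/etc/nsddevices : user exit that lists block devices for
>>>>     # GPFS to consider as NSDs, one "name type" pair per line, with the
>>>>     # device name given relative to /dev
>>>>     for dev in /dev/nvme*n1 ; do
>>>>         [ -b "$dev" ] && echo "${dev#/dev/} generic"
>>>>     done
>>>>     # return 1 so that GPFS also continues with its built-in device
>>>>     # discovery for the remaining devices (per the sample script;
>>>>     # return 0 would bypass the built-in discovery)
>>>>     return 1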
>>> I will try that. I was attempting to avoid writing a custom udev rule
>>> for those, and to keep the names persistent; RHEL 7 has a default rule
>>> that makes a symlink in /dev/disk/by-id.
>>> 0 lrwxrwxrwx 1 root root 13 Dec 29 10:08
>>> nvme-Dell_Express_Flash_NVMe_SM1715_1.6TB_SFF_______S29GNYAH200016
>>> -> ../../nvme0n1
>>> 0 lrwxrwxrwx 1 root root 13 Dec 27 11:20
>>> nvme-Dell_Express_Flash_NVMe_SM1715_1.6TB_SFF_______S29GNYAH300161
>>> -> ../../nvme1n1
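>>> For comparison, the custom rule I was trying to avoid would only be a
>>> couple of lines keyed on the drive serial, something like this
>>> (hypothetical and untested; the exact udev attribute the serial shows
>>> up under can vary with the kernel/udev version):
>>>
>>>     # /etc/udev/rules.d/99-lroc-nvme.rules (hypothetical)
>>>     KERNEL=="nvme?n?", ATTRS{serial}=="S29GNYAH200016", SYMLINK+="lroc0"
>>>     KERNEL=="nvme?n?", ATTRS{serial}=="S29GNYAH300161", SYMLINK+="lroc1"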
>>>>
>>>>
>>>> On Thu, Dec 29, 2016 at 5:02 PM Aaron Knister <aaron.s.knister at nasa.gov> wrote:
>>>>
>>>> Interesting. Thanks Matt. I admit I'm somewhat grasping at straws here.
>>>>
>>>> That's a *really* long device path (and nested too), I wonder if
>>>> that's causing issues.
>>>>
>>>> What does a "tspreparedisk -S" show on that node?
>>>>
>>>> Also, what does your nsddevices script look like? I'm wondering if
>>>> you could have it give back "/dev/dm-XXX" paths instead of
>>>> "/dev/disk/by-id" paths, if that would help things here.
>>>>
>>>> -Aaron
>>>>
>>>> On 12/29/16 10:57 AM, Matt Weil wrote:
>>>> >
>>>> >
>>>> >> ro_cache_S29GNYAH200016 0A6403AA586531E1
>>>> >>   /dev/disk/by-id/nvme-Dell_Express_Flash_NVMe_SM1715_1.6TB_SFF_______S29GNYAH200016
>>>> >>   dmm ces1.gsc.wustl.edu server node
>>>> >
>>>> >
>>>> > On 12/28/16 5:19 PM, Aaron Knister wrote:
>>>> >> mmlsnsd -X | grep 0A6403AA58641546
>>>> >
>>>>
>>>> --
>>>> Aaron Knister
>>>> NASA Center for Climate Simulation (Code 606.2)
>>>> Goddard Space Flight Center
>>>> (301) 286-2776
>>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss