[gpfsug-discuss] Online data migration tool

Daniel Kidger daniel.kidger at uk.ibm.com
Thu Dec 21 12:21:59 GMT 2017


My suggestion is that it is better not to think of the performance as coming from having more than 32 sub-blocks, but rather from having smaller sub-blocks. The fact that there are now more of them in, say, a 4MB blocksize filesystem is just a side effect.
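To make that concrete, here is a quick back-of-the-envelope sketch in Python. The 32KiB file and 16MB block figures come from the thread below; the 8KiB sub-block used for the 5.x case is an assumed illustrative value, not an official number:

```python
# Illustrative sketch: disk space consumed by one small file when the
# sub-block is the minimum allocation unit. The 8 KiB sub-block for the
# 5.x-style case is an assumption for illustration only.

KIB = 1024
MIB = 1024 * KIB

def allocated(file_size, subblock_size):
    """Round a file's size up to whole sub-blocks (minimum allocation unit)."""
    subblocks = -(-file_size // subblock_size)  # ceiling division
    return subblocks * subblock_size

block_size = 16 * MIB
file_size = 32 * KIB

old_subblock = block_size // 32   # pre-5.0: sub-block is always 1/32 of a block = 512 KiB
new_subblock = 8 * KIB            # smaller sub-block (assumed example value)

print(allocated(file_size, old_subblock) // KIB)  # 512 (KiB consumed)
print(allocated(file_size, new_subblock) // KIB)  # 32 (KiB consumed)
```

Same file, same block size: only the sub-block size changed, and the allocation (and the padding that has to travel to the NSD server) shrinks accordingly.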

Daniel


Dr Daniel Kidger 
IBM Technical Sales Specialist
Software Defined Solution Sales

+44 (0)7818 522 266
daniel.kidger at uk.ibm.com

> On 19 Dec 2017, at 21:32, Aaron Knister <aaron.s.knister at nasa.gov> wrote:
> 
> Thanks, Sven. Understood!
> 
>> On 12/19/17 3:20 PM, Sven Oehme wrote:
>> Hi,
>> 
>> the zero padding was never promoted into a GA stream; it was an
>> experiment to prove we were on the right track when we eliminated the
>> overhead from client to NSD Server, but it also showed that this alone
>> is not good enough. the work for the client is the same compared to the
>> >32 subblocks code, but the NSD Server has more work, as it can't pack
>> as many subblocks (and therefore files) into larger blocks, so you need
>> to do more writes to store the same number of files.
>> that's why there is the additional substantial improvement when we then
>> went to >32 subblocks.
>> 
>> sven
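To put rough numbers on the packing effect Sven describes above (same assumed figures as elsewhere in the thread: 32KiB files and a 16MB block size):

```python
# Rough illustration of sub-block packing: how many 32 KiB files the NSD
# server can fit into one full 16 MiB block write. Pre-5.0 each file pads
# out a 512 KiB sub-block; with sub-blocks no larger than the file, a file
# occupies only the space it actually needs.

KIB, MIB = 1024, 1024 * 1024
block_sz, file_sz = 16 * MIB, 32 * KIB

files_per_block_old = block_sz // (512 * KIB)  # one file per 512 KiB sub-block
files_per_block_new = block_sz // file_sz      # packed at their real size

print(files_per_block_old)  # 32
print(files_per_block_new)  # 512
```

Sixteen times more files per large I/O is what turns a small-file-create workload into a bandwidth workload, per the CORAL presentation.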
>> 
>> On Mon, Dec 18, 2017 at 9:13 PM Knister, Aaron S. (GSFC-606.2)[COMPUTER
>> SCIENCE CORP] <aaron.s.knister at nasa.gov> wrote:
>> 
>>    Thanks Sven! That makes sense to me and is what I thought was the
>>    case, which is why I was confused when I saw the reply to the thread
>>    that said the >32 subblocks code had no performance impact. 
>> 
>>    A couple more questions for you: in your presentation there’s a
>>    benchmark that shows the file create performance without the zero
>>    padding. Since you mention this is done for security reasons, was
>>    that feature ever promoted to a GA Scale release? I’m also wondering
>>    if you could explain the performance difference between the no
>>    zero padding code and the >32 subblock code, since given your
>>    example of 32K files and 16MB block size I figure both cases ought
>>    to write the same amount to disk. 
>> 
>>    Thanks!
>> 
>>    -Aaron
>> 
>> 
>> 
>> 
>> 
>>    On December 15, 2017 at 18:07:23 EST, Sven Oehme
>>    <oehmes at gmail.com> wrote:
>>>    i thought i answered that already, but maybe i just thought about
>>>    answering it and then forgot about it :-D
>>> 
>>>    so yes, more than 32 subblocks per block significantly increases
>>>    the performance of filesystems with small files. for the sake of
>>>    argument let's say 32k files, in a large block filesystem, again
>>>    for the sake of argument with a 16MB block size. 
>>> 
>>>    you probably ask why ? 
>>> 
>>>    if you create a file and write 32k into it in a pre-5.0.0 version
>>>    16 MB filesystem, your client actually doesn't write 32k to the NSD
>>>    Server, it writes 512k, because that's the subblock size and we
>>>    need to write the full subblock (for security reasons). so first
>>>    you waste significant memory on the client to cache that zero
>>>    padding, you waste network bandwidth, and you waste NSD Server
>>>    cache because you store it there too. this means you overrun the
>>>    cache more quickly, which means you start doing read/modify/writes
>>>    earlier on all your nice large raid tracks... i guess you get the
>>>    story by now. 
>>> 
>>>    in fact, if you have a good raid code that can drive a lot of
>>>    bandwidth out of individual drives, like a GNR system, you get
>>>    more performance for small file writes the larger your blocksize
>>>    is, because we can 'pack' more files into larger i/os and
>>>    therefore turn a small file create workload into a bandwidth
>>>    workload, essentially exactly what we did and what i demonstrated
>>>    in the CORAL presentation. 
>>> 
>>>    hope that makes this crystal clear now. 
>>> 
>>>    sven
>>> 
>>> 
>>> 
>>>    On Fri, Dec 15, 2017 at 10:47 PM Aaron Knister
>>>    <aaron.s.knister at nasa.gov> wrote:
>>> 
>>>        Thanks, Alex. I'm all too familiar with the trade-offs
>>>        between large blocks and small files, and we do use pretty
>>>        robust SSD storage for our metadata. We support a wide range
>>>        of workloads, and we have some folks with many small (<1M)
>>>        files and other folks with many large (>256MB) files.
>>> 
>>>        My point in this thread is that IBM has said over and over
>>>        again in presentations that there is a significant performance
>>>        gain with the >32 subblocks code on filesystems with large
>>>        block sizes (although to your point I'm not clear on exactly
>>>        what large means, since I didn't define large in this context).
>>>        Therefore, given that the >32 subblock code gives a significant
>>>        performance gain, one could reasonably assume that having a
>>>        filesystem with >32 subblocks is required to see this gain
>>>        (rather than just running the >32 subblocks code on an fs
>>>        without >32 subblocks).
>>> 
>>>        This led me to ask about a migration tool, because in my mind
>>>        if there's a performance gain from having >32 subblocks on the
>>>        FS I'd like that feature, and having to manually copy tens of
>>>        PB to new hardware to get this performance boost is
>>>        unacceptable. However, IBM can't seem to make up their mind
>>>        about whether or not the >32 subblocks code *actually*
>>>        provides a performance increase. This seems like a pretty
>>>        straightforward question.
>>> 
>>>        -Aaron
>>> 
>>>>        On 12/15/17 3:48 PM, Alex Chekholko wrote:
>>>> Hey Aaron,
>>>> 
>>>> Can you define your sizes for "large blocks" and "small
>>>        files"?  If you
>>>> dial one up and the other down, your performance will be
>>>        worse.  And in
>>>> any case it's a pathological corner case so it shouldn't
>>>        matter much for
>>>> your workflow, unless you've designed your system with the
>>>        wrong values.
>>>> 
>>>> For example, for bioinformatics workloads, I prefer to use 256KB
>>>> filesystem block size, and I'd consider 4MB+ to be "large
>>>        block size",
>>>> which would make the filesystem obviously unsuitable for
>>>        processing
>>>> millions of 8KB files.
>>>> 
>>>> You can make a histogram of file sizes in your existing
>>>        filesystems and
>>>> then make your subblock size (1/32 of block size) on the
>>>        smaller end of
>>>> that.   Also definitely use the "small file in inode"
>>>        feature and put
>>>> your metadata on SSD.
>>>> 
>>>> Regards,
>>>> Alex
>>>> 
>>>> On Fri, Dec 15, 2017 at 11:49 AM, Aaron Knister
>>>> <aaron.s.knister at nasa.gov> wrote:
>>>> 
>>>>      Thanks, Bill.
>>>> 
>>>>      I still don't feel like I've got a clear answer from IBM,
>>>>      and frankly the core issue of the lack of a migration tool was
>>>>      totally dodged.
>>>> 
>>>>      Again in Sven's presentation from SSUG @ SC17
>>>>      (http://files.gpfsug.org/presentations/2017/SC17/SC17-UG-CORAL_V3.pdf)
>>>>      he mentions "It has a significant performance penalty for
>>>>      small files in large block size filesystems" and he
>>>>      demonstrates that with several mdtest runs (which show the
>>>>      effect with and without the >32 subblocks code):
>>>> 
>>>> 
>>>>      4.2.1 base code - SUMMARY: (of 3 iterations)
>>>>      File creation : Mean = 2237.644
>>>> 
>>>>      zero-end-of-file-padding (4.2.2 + ifdef for zero
>>>        padding):  SUMMARY: (of
>>>>      3 iterations)
>>>>      File creation : Mean = 12866.842
>>>> 
>>>>      more sub blocks per block (4.2.2 + morethan32subblock code):
>>>>      File creation : Mean = 40316.721
>>>> 
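For reference, the speedups implied by those three file-create means (simple arithmetic on the numbers quoted above):

```python
# Speedups implied by the mdtest file-create means quoted above
# (creates/second, mean of 3 iterations).
base = 2237.644            # 4.2.1 base code
zero_pad = 12866.842       # 4.2.2 + ifdef for zero padding
more_subblocks = 40316.721 # 4.2.2 + morethan32subblock code

print(round(zero_pad / base, 1))            # 5.8 (from removing zero padding)
print(round(more_subblocks / base, 1))      # 18.0 (from >32 subblocks, overall)
print(round(more_subblocks / zero_pad, 1))  # 3.1 (additional gain over no-padding)
```

So per these numbers the >32 subblocks code is roughly a further 3x on top of what dropping the zero padding alone bought.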
>>>>      Can someone (ideally Sven) give me a straight answer as to
>>>>      whether or not the >32 subblock code actually makes a
>>>>      performance difference for small files in large block
>>>>      filesystems? And if not, help me understand why his slides and
>>>>      provided benchmark data have consistently indicated it does?
>>>> 
>>>>      -Aaron
>>>> 
>>>>      On 12/1/17 11:44 AM, Bill Hartner wrote:
>>>>      > ESS GL4 4u106 w/ 10 TB drives - same HW on which Sven
>>>>      > reported some of the results @ the user group meeting.
>>>>      >
>>>>      > -Bill
>>>>      >
>>>>      > Bill Hartner
>>>>      > IBM Systems
>>>>      > Scalable I/O Development
>>>>      > Austin, Texas
>>>>      > bhartner at us.ibm.com
>>>>      > home office 512-784-0980
>>>>      >
>>>>      >
>>>>      >
>>>>      > From: Jan-Frode Myklebust <janfrode at tanso.net>
>>>>      > To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
>>>>      > Date: 12/01/2017 06:53 AM
>>>>      > Subject: Re: [gpfsug-discuss] Online data migration tool
>>>>      > Sent by: gpfsug-discuss-bounces at spectrumscale.org
>>>>      >
>>>>      >
>>>>    
>>>         ------------------------------------------------------------------------
>>>>      >
>>>>      >
>>>>      >
>>>>      > Bill, could you say something about what the
>>>        metadata-storage here was?
>>>>      > ESS/NL-SAS/3way replication?
>>>>      >
>>>>      > I just asked about this in the internal slack channel
>>>        #scale-help today..
>>>>      >
>>>>      >
>>>>      >
>>>>      > -jf
>>>>      >
>>>>      > Fri, 1 Dec 2017 at 13:44, Bill Hartner
>>>>      > <bhartner at us.ibm.com> wrote:
>>>>      >
>>>>      >     > "It has a significant performance penalty for
>>>        small files in
>>>>      large
>>>>      >     > block size filesystems"
>>>>      >
>>>>      >     Aaron,
>>>>      >
>>>>      >     Below are mdtest results for a test we ran for
>>>        CORAL - file
>>>>      size was
>>>>      >     32k.
>>>>      >
>>>>      >     We have not gone back and rerun the test on a file
>>>>      >     system formatted without >32 subblocks. We'll do that at
>>>>      >     some point...
>>>>      >
>>>>      >     -Bill
>>>>      >
>>>>      >     -- started at 10/28/2017 17:51:38 --
>>>>      >
>>>>      >     mdtest-1.9.3 was launched with 228 total task(s)
>>>        on 12 node(s)
>>>>      >     Command line used: /tmp/mdtest-binary-dir/mdtest -d
>>>>      >     /ibm/fs2-16m-10/mdtest-60000 -i 3 -n 294912 -w
>>>        32768 -C -F -r
>>>>      -p 360
>>>>      >     -u -y
>>>>      >     Path: /ibm/fs2-16m-10
>>>>      >     FS: 128.1 TiB Used FS: 0.3% Inodes: 476.8 Mi Used
>>>        Inodes: 0.0%
>>>>      >
>>>>      >     228 tasks, 67239936 files
>>>>      >
>>>>      >     SUMMARY: (of 3 iterations)
>>>>      >     Operation Max Min Mean Std Dev
>>>>      >     --------- --- --- ---- -------
>>>>      >     File creation : 51953.498 50558.517 51423.221 616.643
>>>>      >     File stat : 0.000 0.000 0.000 0.000
>>>>      >     File read : 0.000 0.000 0.000 0.000
>>>>      >     File removal : 96746.376 92149.535 94658.774 1900.187
>>>>      >     Tree creation : 1.588 0.070 0.599 0.700
>>>>      >     Tree removal : 0.213 0.034 0.097 0.082
>>>>      >
>>>>      >     -- finished at 10/28/2017 19:51:54 --
>>>>      >
>>>>      >     Bill Hartner
>>>>      >     IBM Systems
>>>>      >     Scalable I/O Development
>>>>      >     Austin, Texas
>>>>      >     bhartner at us.ibm.com
>>>>      >     home office 512-784-0980
>>>>      >
>>>>      >     gpfsug-discuss-bounces at spectrumscale.org wrote on
>>>>      >     11/29/2017 04:41:48 PM:
>>>>      >
>>>>      >     > From: Aaron Knister <aaron.knister at gmail.com>
>>>>      >
>>>>      >
>>>>      >     > To: gpfsug main discussion list
>>>>      >     > <gpfsug-discuss at spectrumscale.org>
>>>>      >
>>>>      >     > Date: 11/29/2017 04:42 PM
>>>>      >
>>>>      >
>>>>      >     > Subject: Re: [gpfsug-discuss] Online data
>>>        migration tool
>>>>      >     > Sent by: gpfsug-discuss-bounces at spectrumscale.org
>>>>      >
>>>>      >     >
>>>>      >
>>>>      >     > Thanks, Nikhil. Most of that was consistent with my
>>>>      >     > understanding, however I was under the impression that
>>>>      >     > the >32 subblocks code is required to achieve the
>>>>      >     > touted 50k file creates/second that Sven has talked
>>>>      >     > about a bunch of times:
>>>>      >     >
>>>>      >     >
>>>>      >     > http://files.gpfsug.org/presentations/2017/Manchester/08_Research_Topics.pdf
>>>>      >     >
>>>>      >     > http://files.gpfsug.org/presentations/2017/Ehningen/31_-_SSUG17DE_-_Sven_Oehme_-_News_from_Research.pdf
>>>>      >     >
>>>>      >     > http://files.gpfsug.org/presentations/2016/SC16/12_-_Sven_Oehme_Dean_Hildebrand_-_News_from_IBM_Research.pdf
>>>>      >
>>>>      >
>>>>      >     > from those presentations regarding >32 subblocks:
>>>>      >     >
>>>>      >     > "It has a significant performance penalty for
>>>        small files in large
>>>>      >     > block size filesystems"
>>>>      >
>>>>      >     > although I'm not clear on the specific definition of
>>>>      >     > "large". Many filesystems I encounter only have a 1M
>>>>      >     > block size, so it may not matter there, although that
>>>>      >     > same presentation clearly shows the benefit of larger
>>>>      >     > block sizes, which is yet *another* thing for which a
>>>>      >     > migration tool would be helpful.
>>>>      >
>>>>      >     > -Aaron
>>>>      >     >
>>>>      >     > On Wed, Nov 29, 2017 at 2:08 PM, Nikhil Khandelwal
>>>>      >     > <nikhilk at us.ibm.com> wrote:
>>>>      >
>>>>      >     > Hi,
>>>>      >     >
>>>>      >     > I would like to clarify the migration path to 5.0.0
>>>>      >     > from 4.X.X clusters.
>>>>      >     > For all Spectrum Scale clusters that are
>>>        currently at 4.X.X,
>>>>      it is
>>>>      >     > possible to migrate to 5.0.0 with no offline
>>>        data migration
>>>>      and no
>>>>      >     > need to move data. Once these clusters are at
>>>        5.0.0, they will
>>>>      >     > benefit from the performance improvements, new
>>>        features (such as
>>>>      >     > file audit logging), and various enhancements
>>>        that are
>>>>      included in
>>>>      >     5.0.0.
>>>>      >     >
>>>>      >     > That being said, there is one enhancement that
>>>        will not be
>>>>      applied
>>>>      >     > to these clusters, and that is the increased
>>>        number of
>>>>      sub-blocks
>>>>      >     > per block for small file allocation. This means
>>>        that for file
>>>>      >     > systems with a large block size and a lot of
>>>        small files, the
>>>>      >     > overall space utilization will be the same as it
>>>>      >     > currently is in 4.X.X.
>>>>      >     > Since file systems created at 4.X.X and earlier
>>>        used a block
>>>>      size
>>>>      >     > that kept this allocation in mind, there should
>>>        be very little
>>>>      >     > impact on existing file systems.
>>>>      >     >
>>>>      >     > Outside of that one particular function, the
>>>        remainder of the
>>>>      >     > performance improvements, metadata improvements,
>>>        updated
>>>>      >     > compatibility, new functionality, and all of the
>>>        other
>>>>      enhancements
>>>>      >     > will be immediately available to you once you
>>>        complete the
>>>>      upgrade
>>>>      >     > to 5.0.0 -- with no need to reformat, move data,
>>>        or take
>>>>      your data
>>>>      >     offline.
>>>>      >     >
>>>>      >     > I hope that clarifies things a little and makes
>>>        the upgrade path
>>>>      >     > more accessible.
>>>>      >     >
>>>>      >     > Please let me know if there are any other
>>>        questions or concerns.
>>>>      >     >
>>>>      >     > Thank you,
>>>>      >     > Nikhil Khandelwal
>>>>      >     > Spectrum Scale Development
>>>>      >     > Client Adoption
>>>>      >     >
>>>>      >     > _______________________________________________
>>>>      >     > gpfsug-discuss mailing list
>>>>      >     > gpfsug-discuss at spectrumscale.org
>>>>      >     > http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>>>>      >
>>>>      >
>>>>      >
>>>>      >
>>>>      >
>>>>      >
>>>> 
>>>>      --
>>>>      Aaron Knister
>>>>      NASA Center for Climate Simulation (Code 606.2)
>>>>      Goddard Space Flight Center
>>>>      (301) 286-2776
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>> 
>>>        --
>>>        Aaron Knister
>>>        NASA Center for Climate Simulation (Code 606.2)
>>>        Goddard Space Flight Center
>>>        (301) 286-2776
>>> 
>> 
>> 
>> 
>> 
> 
> -- 
> Aaron Knister
> NASA Center for Climate Simulation (Code 606.2)
> Goddard Space Flight Center
> (301) 286-2776
> 
Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
