[gpfsug-discuss] Singularity + GPFS

Yugendra Guvvala yguvvala at cambridgecomputer.com
Thu Apr 26 15:53:58 BST 2018


I am interested in learning about this too, so please include me when sending a direct mail.

Thanks,
Yugi

> On Apr 26, 2018, at 10:51 AM, Oesterlin, Robert <Robert.Oesterlin at nuance.com> wrote:
> 
> Hi Lohit, Nathan
>  
> Would you be willing to share some more details about your setup? We are just getting started here and I would like to hear about what your configuration looks like. Direct email to me is fine, thanks.
>  
> Bob Oesterlin
> Sr Principal Storage Engineer, Nuance
>  
>  
> From: <gpfsug-discuss-bounces at spectrumscale.org> on behalf of "valleru at cbio.mskcc.org" <valleru at cbio.mskcc.org>
> Reply-To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Date: Thursday, April 26, 2018 at 9:45 AM
> To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Subject: [EXTERNAL] Re: [gpfsug-discuss] Singularity + GPFS
>  
> We do run Singularity + GPFS, on our production HPC clusters.
> Most of the time things are fine without any issues.
>  
> However, I do see a significant performance loss when running some applications in Singularity containers on GPFS.
>  
> As of now, the applications that have severe performance issues with Singularity on GPFS seem to be affected because of mmap I/O (deep learning applications).
> When I run the same applications on bare metal, they show a huge difference in GPFS I/O compared to running in Singularity containers.
> I have yet to raise a PMR about this with IBM.
> I have not seen performance degradation for any other kind of I/O, but I am not sure.
> 
> Regards,
> Lohit
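The mmap gap Lohit describes can be checked independently of any deep learning framework. The sketch below is a minimal comparison, not the poster's actual workload: it reads the same file once with plain read() calls and once through mmap, so the two throughput numbers can be compared on bare metal versus inside a Singularity container. The file path and sizes are placeholders; point it at a file on the GPFS mount for a real test.

```python
# Minimal sketch: buffered read() vs mmap read throughput on one file.
# The temp file below is a placeholder -- for a real comparison, set PATH
# to a file on the GPFS filesystem and run on the host and in the container.
import mmap
import os
import tempfile
import time

CHUNK = 4 * 1024 * 1024  # 4 MiB per read


def buffered_mbps(path: str) -> float:
    """Throughput (MB/s) of reading the file with plain read() calls."""
    total = 0
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        while (data := f.read(CHUNK)):
            total += len(data)
    return total / (time.perf_counter() - t0) / 1e6


def mmap_mbps(path: str) -> float:
    """Throughput (MB/s) of reading the same file via mmap."""
    t0 = time.perf_counter()
    with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ) as mm:
        size = len(mm)
        for off in range(0, size, CHUNK):
            _ = mm[off:off + CHUNK]  # slicing forces the pages in
    return size / (time.perf_counter() - t0) / 1e6


# PATH is a stand-in: replace with a real file on your GPFS mount.
with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(os.urandom(16 * 1024 * 1024))
    PATH = tf.name

print(f"buffered: {buffered_mbps(PATH):8.1f} MB/s")
print(f"mmap:     {mmap_mbps(PATH):8.1f} MB/s")
os.unlink(PATH)
```

Running the same script on the host and inside the container (with the GPFS path bind-mounted) gives a first indication of whether the slowdown is specific to the mmap path.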
> 
> On Apr 26, 2018, 10:35 AM -0400, Nathan Harper <nathan.harper at cfms.org.uk>, wrote:
> 
> We are running on a test system at the moment, and haven't run into any issues yet, but so far it's only been 'hello world' and running FIO.
>  
> I'm interested to hear about experience with MPI-IO within Singularity.
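For anyone repeating Nathan's FIO testing, a hypothetical job file along these lines exercises both ordinary buffered reads and mmap'd reads against a GPFS mount in one run; the directory, size, and block size are placeholders to adjust for your setup. Run it once on the host and once inside the Singularity container and compare.

```ini
; Hypothetical fio job file -- directory/size/bs are placeholders.
[global]
directory=/gpfs/fio-test   ; a directory on the GPFS filesystem
size=4g
bs=1m
rw=read

[buffered-read]
ioengine=sync

[mmap-read]
stonewall                  ; run after buffered-read finishes
ioengine=mmap
```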
>  
> On 26 April 2018 at 15:20, Oesterlin, Robert <Robert.Oesterlin at nuance.com> wrote:
> Anyone (including IBM) doing any work in this area? I would appreciate hearing from you.
>  
> Bob Oesterlin
> Sr Principal Storage Engineer, Nuance
>  
> 
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
> 
> 
> 
>  
> --
> Nathan Harper // IT Systems Lead
>  
> 
> 
> e: nathan.harper at cfms.org.uk   t: 0117 906 1104  m:  0787 551 0891  w: www.cfms.org.uk  
> CFMS Services Ltd // Bristol & Bath Science Park // Dirac Crescent // Emersons Green // Bristol // BS16 7FR 
>  
> CFMS Services Ltd is registered in England and Wales No 05742022 - a subsidiary of CFMS Ltd 
> CFMS Services Ltd registered office // 43 Queens Square // Bristol // BS1 4QP