[gpfsug-discuss] A GPFS newbie

Jez Tucker Jez.Tucker at rushes.co.uk
Tue Aug 7 12:32:55 BST 2012


The HPC folks should probably step in here.
Not having such a large system myself, I'll just point you at the GPFS planning documentation:
https://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.cluster.gpfs.v3r5.gpfs300.doc/bl1ins_complan.htm
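For the NSD side of the question, a minimal sketch of what a GPFS 3.5 NSD stanza file and the create commands look like. All names here (device paths, server names) are hypothetical, and the 4M block size is only an assumption that tends to suit streaming reads of large files; it is not a recommendation for your DDN gear:

```
# nsd.stanza -- one stanza per LUN (names are illustrative)
%nsd:
  device=/dev/mapper/ddn_lun01
  nsd=nsd01
  servers=nsdserver1,nsdserver2
  usage=dataAndMetadata
  failureGroup=1
  pool=system

# Define the NSDs, then create a file system from the same stanza file:
#   mmcrnsd -F nsd.stanza
#   mmcrfs gpfs1 -F nsd.stanza -B 4M -T /gpfs
```

In practice you would spread LUNs across failure groups and NSD server pairs so that reads fan out over all servers; the planning page above covers the trade-offs.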


> -----Original Message-----
> From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-
> bounces at gpfsug.org] On Behalf Of Robert Esnouf
> Sent: 07 August 2012 12:10
> To: gpfsug main discussion list
> Subject: [gpfsug-discuss] A GPFS newbie
> 
> 
> Dear GPFS users,
> 
> Please excuse what is possibly a naive question from a not-yet GPFS admin.
> We are seriously considering GPFS to provide storage for our compute
> clusters. We are probably looking at about 600-900TB served into 2000+
> Linux cores over InfiniBand.
> DDN SFA10K and SFA12K seem like good fits. Our domain-specific need is
> high I/O rates from multiple readers (100-1000) all accessing parts of the
> same set of 1000-5000 large files (typically 30GB BAM files, for those in the
> know). We could easily sustain read rates of 5-10GB/s or more if the system
> would cope.
> 
> My question is how should we go about configuring the number and
> specifications of the NSDs? Are there any good rules of thumb? And are
> there any folk out there using GPFS for high I/O rates like this in a similar
> setup who would be happy to have their brains/experiences picked?
> 
> Thanks in advance and best wishes,
> Robert Esnouf
> 
> --
> 
> Dr. Robert Esnouf,
> University Research Lecturer
> and Head of Research Computing,
> Wellcome Trust Centre for Human Genetics, Old Road Campus, Roosevelt
> Drive, Oxford OX3 7BN, UK
> 
> Emails: robert at strubi.ox.ac.uk   Tel: (+44) - 1865 - 287783
>     and robert at well.ox.ac.uk     Fax: (+44) - 1865 - 287547
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
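As a back-of-envelope sketch of the read-rate requirement quoted above (the per-link and per-controller figures are assumptions for illustration, not vendor numbers):

```python
import math

# Figures from the post: 5-10 GB/s aggregate read rate target.
target_read_rate_gbs = 10.0

# Assumed usable bandwidth of one QDR InfiniBand link per NSD server (GB/s).
qdr_ib_usable_gbs = 3.2

# Minimum NSD servers needed just to carry the target rate over IB.
min_nsd_servers = math.ceil(target_read_rate_gbs / qdr_ib_usable_gbs)
print(min_nsd_servers)  # 4
```

The real server count would also depend on controller throughput, failover headroom, and metadata load, so this only bounds the network side from below.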