[gpfsug-discuss] gpfs performance monitoring

Salvatore Di Nardo sdinardo at ebi.ac.uk
Thu Sep 4 14:25:09 BST 2014


> >> Yes, we have dedicated vdisks for metadata, but they are in the same
> >> declustered arrays/recovery groups, so they share the same spindles.
>
> That's IMHO not a good approach. Metadata operations are small and
> random; data I/O is large and streaming.
>
> Just think of a highway full of large trucks while you try to reach
> your destination on a fast bike: you will be blocked. You have the
> same problem at your destination; if many large trucks are unloading
> their cargo, there is no time for somebody with a small parcel.
>
> That's the same reason why you should not access tape storage and disk
> storage via the same FC adapter (streaming I/O vs. random/small I/O).
>
> So even without your current problem and motivation for measuring, I
> would strongly suggest having at least dedicated SSDs for metadata
> and, if possible, even dedicated NSD servers for the metadata,
> meaning a dedicated path for your data and a dedicated path for your
> metadata.
>
> All from a users point of view
> Hajo
>
That's where I was puzzled too. GSS is a GPFS appliance and came 
configured this way. The official GSS documentation also suggests 
creating separate vdisks for data and metadata, but in the same 
declustered arrays. I always felt this was a strange choice, especially 
considering that metadata requires a very small amount of space, so a 
few SSDs could do the trick.
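
For reference, on a conventional (non-GSS) cluster the separation Hajo 
describes would be done with NSD stanzas at mmcrnsd time. A minimal 
sketch, assuming hypothetical device names, NSD servers, failure groups 
and pool names (/dev/sdb, nsdsrv01, etc. are placeholders, not from our 
setup):

    # Dedicated SSDs carry metadata only; metadataOnly NSDs must
    # live in the system pool.
    %nsd: device=/dev/sdb
      nsd=ssd_meta_01
      servers=nsdsrv01,nsdsrv02
      usage=metadataOnly
      failureGroup=1
      pool=system

    # Spinning disks carry bulk data in a separate storage pool
    # (a file placement policy is then needed to route file data
    # into the non-system pool).
    %nsd: device=/dev/sdc
      nsd=sata_data_01
      servers=nsdsrv03,nsdsrv04
      usage=dataOnly
      failureGroup=2
      pool=data

    # Create the NSDs from the stanza file.
    mmcrnsd -F nsd.stanza

On GSS the analogous knob is the diskUsage=metadataOnly attribute in the 
mmcrvdisk stanza, but since the appliance builds those vdisks in the 
same declustered arrays as the data vdisks, the metadata still ends up 
on the same spindles.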
