[gpfsug-discuss] Policy scan against billion files for ILM/HSM

Zachary Giles zgiles at gmail.com
Tue Apr 11 05:49:10 BST 2017


It's definitely doable, and these days not too hard. Flash for
metadata is the key.
The basics of it are:
* Latest GPFS for performance benefits.
* A few tens of TBs of flash (or more!) set up in a good design:
lots of SAS, well-balanced RAID that can consume the flash fully,
tuned for IOPS, and available in parallel from multiple servers.
* Tune up mmapplypolicy with -g somewhere-on-gpfs; --choice-algorithm
fast; -a, -m, and -n set to reasonable values (roughly the number of
cores on the servers); -A to ~1000. (See the sketch after this list.)
* Test first on a smaller fileset to confirm you like it. -I test
should work well and be around the same speed minus the migration
phase.
* Then throw ~8 well-tuned InfiniBand-attached nodes at it using -N.
If they're the same as the NSD servers serving the flash, even better.
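
For the curious, here's a minimal sketch of what such an invocation
could look like. The filesystem name (gpfs0), node names, thread
counts, and work-directory paths are placeholders I made up, and the
trivial list-everything policy is only there so "-I test" has rules to
evaluate; check the mmapplypolicy man page for your release before
copying anything.

# A trivial list-everything policy so "-I test" has rules to evaluate.
cat > /tmp/listall.pol <<'EOF'
RULE EXTERNAL LIST 'allfiles' EXEC ''
RULE 'list-all' LIST 'allfiles'
EOF

# Dry-run scan: 8 helper nodes (assumed here to be the NSD servers),
# shared work directory on GPFS, fast choice algorithm, per-node
# thread counts near the core count, and lots of inode buckets.
mmapplypolicy gpfs0 -P /tmp/listall.pol -I test \
  -N nsd01,nsd02,nsd03,nsd04,nsd05,nsd06,nsd07,nsd08 \
  -g /gpfs/gpfs0/.policytmp -s /tmp \
  --choice-algorithm fast \
  -a 16 -m 16 -n 16 -A 1000 \
  -f /gpfs/gpfs0/.policytmp/listall

Once the timing and the chosen-file counts from the test run look
sane, swap in the real migration rules and drop -I test.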

Should be able to do a billion files in 5-30 minutes depending on the
idiosyncrasies of the above choices. Even 60 minutes isn't bad and is
quite respectable if less gear is used or if the system is busy while
the policy is running.
Parallel metadata, it's a beautiful thing.



On Tue, Apr 11, 2017 at 12:29 AM, Masanori Mitsugi
<mitsugi at linux.vnet.ibm.com> wrote:
> Hello,
>
> Does anyone have experience to do mmapplypolicy against billion files for
> ILM/HSM?
>
> Currently I'm planning/designing
>
> * 1 Scale filesystem (5-10 PB)
> * 10-20 filesets which includes 1 billion files each
>
> And our biggest concern is "How long does it take for an mmapplypolicy
> policy scan against a billion files?"
>
> I know it depends on how the policy is written,
> but I have no experience with billion-file policy scans,
> so I'd like to know the order of time (minutes/hours/days...).
>
> It would be helpful if anyone with experience scanning such a large
> number of files could let me know any considerations or points for policy design.
>
> --
> Masanori Mitsugi
> mitsugi at linux.vnet.ibm.com
>



-- 
Zach Giles
zgiles at gmail.com


