[gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters

Dean Hildebrand dhildeb at us.ibm.com
Fri Apr 10 18:45:14 BST 2015


Hi Zach,

The summary is that GPFS is being integrated much more across the
portfolio...

With GPFS itself, the video below demonstrates the ESS/GSS GUI and
monitoring feature that is in the product today.  Moving forward, as you
can probably see, there is a push within IBM to move GPFS toward
software-defined storage, which includes features such as the GUI as well:
https://www.youtube.com/watch?v=Mv9Sn-VYoGU

Dean




From:	Zachary Giles <zgiles at gmail.com>
To:	gpfsug main discussion list <gpfsug-discuss at gpfsug.org>
Date:	04/10/2015 08:27 AM
Subject:	Re: [gpfsug-discuss] Monitoring capacity and health status for
            a multitude of GPFS clusters
Sent by:	gpfsug-discuss-bounces at gpfsug.org



Christian:
Interesting and thanks for the latest news.

May I ask: Is there an intent, moving forward, that TPC and/or other Tivoli
products will be a required part of GPFS?
The concern I have is that GPFS is pretty straightforward at the moment and
has very logical requirements to operate (min servers, quorum, etc),
whereas many IBM products require two or three more servers just to manage
the servers managing the service... too much. It would be nice to make
sure, going forward, that the core of GPFS can still function without
additional web servers, Java, a suite of middleware, and a handful of DB2
instances... :)

-Zach



On Fri, Apr 10, 2015 at 7:24 AM, Christian Bolik <BOLIK at de.ibm.com> wrote:

  Just wanted to let you know that recently GPFS support has been added to
  TPC, which is IBM's Tivoli Storage Productivity Center (soon to be
  renamed to IBM Spectrum Control). As of now, TPC allows GPFS
  administrators to get answers to the following questions, across any
  number of GPFS clusters which have been added to TPC:

  - Which of my clusters are running out of free space?
  - Which of my clusters or nodes have a health problem?
  - Which file systems and pools are running out of capacity?
  - Which file systems are mounted on which nodes?
  - How much space is occupied by snapshots? Are there any very old,
  potentially obsolete ones?
  - Which quotas are close to being exceeded or have already been exceeded?
  - Which filesets are close to running out of free inodes?
  - Which NSDs are at risk of becoming unavailable, or are unavailable?
  - Are the volumes backing my NSDs performing OK?
  - Are all nodes fulfilling critical roles in the cluster up and running?
  - How can I be notified when nodes go offline or file systems fill up
  beyond a threshold?
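
  As an aside on the last point: TPC handles this kind of alerting through
  its own GUI, but the basic check is easy to sketch without any middleware.
  The snippet below is a minimal, hedged illustration only (not part of TPC
  or GPFS); it reads `df -P`-style output and flags file systems above an
  assumed threshold of 80%, the sort of check one could run from cron on
  any node where the file systems are mounted.

  ```shell
  #!/bin/sh
  # Hypothetical sketch: flag file systems whose usage exceeds a threshold.
  # check_usage reads `df -P`-style output on stdin; the threshold value
  # is an assumption for illustration, not a TPC or GPFS default.
  check_usage() {
      awk -v limit="$1" 'NR > 1 {
          used = $5; sub(/%/, "", used)          # strip the trailing % sign
          if (used + 0 > limit)
              printf "WARNING: %s at %s%% on %s\n", $1, used, $6
      }'
  }

  # Typical use on a live node (GPFS mounts appear in df like any other):
  df -P | check_usage 80
  ```

  A real deployment would mail or log the warnings rather than print them,
  but the point stands: the core check needs only the shell and awk.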

  There's a short 6-minute video available on YouTube which shows how TPC
  helps answer these questions:
  https://www.youtube.com/watch?v=8Esk5U_cYw8&feature=youtu.be

  For more information about TPC, please check out the product wiki on
  developerWorks: http://ibm.co/1adWNFK

  Thanks,
  Christian Bolik
  IBM Storage Software Development


  _______________________________________________
  gpfsug-discuss mailing list
  gpfsug-discuss at gpfsug.org
  http://gpfsug.org/mailman/listinfo/gpfsug-discuss



--
Zach Giles
zgiles at gmail.com

