[gpfsug-discuss] Not recommended, but why not?

Buterbaugh, Kevin L Kevin.Buterbaugh at Vanderbilt.Edu
Fri May 4 16:56:44 BST 2018


Hi Anderson,

Thanks for the response … however, the scenario you describe below wouldn’t impact us.  We have 8 NSD servers and they can easily provide the needed performance to native GPFS clients.  We could also take a downtime if we ever did need to expand in the manner described below.

In fact, one of the things that’s kinda surprising to me is that upgrading the SMB portion of CES requires a downtime.  Let’s just say that I know for a fact that sernet-samba can be done rolling / live.

Kevin

On May 4, 2018, at 10:52 AM, Anderson Ferreira Nobre <anobre at br.ibm.com<mailto:anobre at br.ibm.com>> wrote:

Hi Kevin,

I think one of the reasons is if you need to add or remove nodes from cluster you will start to face the constrains of this kind of solution. Let's say you have a cluster with two nodes  and share the same set of LUNs through SAN. And for some reason you need to add more two nodes that are NSD Servers and Protocol nodes. For the new nodes become NSD Servers, you will have to redistribute the NSD disks among four nodes. But for you do that you will have to umount the filesystems. And for you umount the filesystems you would need to stop protocol services. At the end you will realize that a simple task like that is disrruptive. You won't be able to do online.


Abraços / Regards / Saludos,

Anderson Nobre
AIX & Power Consultant
Master Certified IT Specialist
IBM Systems Hardware Client Technical Team – IBM Systems Lab Services

[community_general_lab_services]

________________________________

Phone: 55-19-2132-4317
E-mail: anobre at br.ibm.com<mailto:anobre at br.ibm.com>     [IBM]


----- Original message -----
From: "Buterbaugh, Kevin L" <Kevin.Buterbaugh at Vanderbilt.Edu<mailto:Kevin.Buterbaugh at Vanderbilt.Edu>>
Sent by: gpfsug-discuss-bounces at spectrumscale.org<mailto:gpfsug-discuss-bounces at spectrumscale.org>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org<mailto:gpfsug-discuss at spectrumscale.org>>
Cc:
Subject: [gpfsug-discuss] Not recommended, but why not?
Date: Fri, May 4, 2018 12:39 PM

Hi All,

In doing some research, I have come across numerous places (IBM docs, DeveloperWorks posts, etc.) where it is stated that it is not recommended to run CES on NSD servers … but I’ve not found any detailed explanation of why not.

I understand that CES, especially if you enable SMB, can be a resource hog.  But if I size the servers appropriately … say, late model boxes with 2 x 8 core CPU’s, 256 GB RAM, 10 GbE networking … is there any reason why I still should not combine the two?

To answer the question of why I would want to … simple, server licenses.

Thanks…

Kevin

—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
Kevin.Buterbaugh at vanderbilt.edu<mailto:Kevin.Buterbaugh at vanderbilt.edu> - (615)875-9633


_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org<http://spectrumscale.org>
http://gpfsug.org/mailman/listinfo/gpfsug-discuss<https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C2b0fc12c4dc24aa1f7fb08d5b1d70c9e%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C1%7C636610459542553835&sdata=8aArQLzU5q%2BySqHcoQ3SI420XzP08ICph7F18G7C4pw%3D&reserved=0>


_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org<http://spectrumscale.org>
https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C2b0fc12c4dc24aa1f7fb08d5b1d70c9e%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C1%7C636610459542553835&sdata=8aArQLzU5q%2BySqHcoQ3SI420XzP08ICph7F18G7C4pw%3D&reserved=0

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://gpfsug.org/pipermail/gpfsug-discuss_gpfsug.org/attachments/20180504/8808efe3/attachment.htm>


More information about the gpfsug-discuss mailing list