Advantages and disadvantages of multiple instances on a cluster

Discussion in 'microsoft.public.sqlserver.clustering' started by YaHozna, May 14, 2009.

  1. YaHozna

    YaHozna Guest

    Looking for advice rather than the solution to a specific question here.

    I'm trying to make the case for creating a number of additional instances on
    my company's existing 2-node active/passive SQL Server 2005 cluster.
    Currently there is a single instance overloaded (to my mind, anyway) with 92
    databases (excluding the system databases). I'm sure I read somewhere that
    the recommended limit was 25.

    So I figured that I would recommend the creation of a number of additional
    instances to host separately in-house databases, third-party databases,
    SharePoint databases (we're adopting SP big-time at the moment) and a test
    instance (currently databases are developed on a test server then moved to
    the live cluster - would be useful to have a test instance on the live
    cluster). Naturally, however, I'm being asked to justify that recommendation
    in terms of increased efficiency both of my time as DBA and of the cluster
    itself. Apologies for the somewhat lengthy preamble by the way.

    So what then are the pros and cons? I've mentioned having fewer databases in
    any one instance, which I guess makes administration easier, at least in
    terms of managing scheduled jobs, etc. Are there technical advantages as
    well (or indeed disadvantages)? I figure memory management could be more
    effective - i.e. the individual instances could each be apportioned the
    optimum amount of memory. Security might also be improved. By the way, the
    cluster nodes are quad-core Intels running Windows Server 2003. Does this
    mean that the multi-threading capabilities of such cores will mean better
    management of, for example, locking and blocking, or even more efficient
    running of scheduled jobs?
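
    On the memory point, each instance can indeed be capped individually. A
    minimal sketch of apportioning memory per instance with `sp_configure`
    (the option names are real; the figures are hypothetical - size them to
    your nodes):

```sql
-- Run against each instance in turn, with instance-appropriate values.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Floor and ceiling for this instance's buffer pool, in MB.
EXEC sp_configure 'min server memory (MB)', 1024;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
```

    On a cluster it matters that the max values of all instances sum to less
    than the physical RAM of a single node, since every instance may end up on
    one node after a failover.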

    And what, if any, advantages/disadvantages are there in terms of I/O?
    Databases are all held off-server on FAS storage in this case. Will having
    several instances instead of one mean better performance, poorer performance,
    no change?

    I'm sure there are lots of other things I haven't mentioned or indeed
    thought of. I would certainly welcome any and all advice, points of view,
    comments, etc.


    YaHozna, May 14, 2009

  2. Uri Dimant

    Uri Dimant Guest

    We have a clustered environment with two nodes where one node hosts >150
    databases plus all the SharePoint databases. I do not have any performance
    issues; in a perfect world I would move SharePoint onto a separate server
    (not just another instance).
    As you probably know, each instance will consume its own memory and disk
    resources. Also, what if one instance 'freezes' and the machine needs to be
    restarted? That means another instance has to go down for some time...
    On a production server I can't come up with a good reason to use named
    instances. Production is usually something you don't want to mess with, and
    you wouldn't want to have two servers running on the same machine.

    I think the biggest advantage of multiple instances is for those
    third-party applications that require elevated SQL Server privileges,
    where you do not want to commingle applications on the same instance (I
    think that is your case, right?)
    Uri Dimant, May 14, 2009

  3. You can install one more named clustered instance on your server (but you
    need one more shared disk) and then divide your databases between the two
    instances. For example, say you have a two-node cluster with nodes A and B.
    Host 46 of your 92 databases on the default instance, with node A active
    and node B passive, and host the remaining 46 databases on the named
    instance, with node B active and node A passive. This configuration will
    give you both high availability and scalability, but it may increase the
    administration burden.
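
    If you do split them, each database has to be moved onto the new named
    instance. A minimal backup/restore sketch - the database name, backup
    share, and file paths here are made up for illustration:

```sql
-- On the current (default) instance:
BACKUP DATABASE SalesDB TO DISK = N'\\fileserver\backups\SalesDB.bak';

-- On the new named instance (e.g. SERVER\INST2), restoring onto that
-- instance's own shared-disk resources:
RESTORE DATABASE SalesDB
    FROM DISK = N'\\fileserver\backups\SalesDB.bak'
    WITH MOVE 'SalesDB'     TO N'S:\Data\SalesDB.mdf',
         MOVE 'SalesDB_log' TO N'T:\Logs\SalesDB_log.ldf';
```

    Detach/attach works too, but backup/restore lets you leave the original
    database in place until the move is verified.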

    Manpreet Singh
    MCITP – DB Admin, DB Dev SQL Server 2005
    MCTS – MOSS 2007, SQL server 2005
    Manpreet Singh, May 19, 2009
  4. frankm

    frankm Guest

    Just my 2 cents ... from a 10,000-foot view.

    If designed correctly you can have multiple instances / virtual servers
    running in a 2-node cluster. You can add nodes at will (up-front design for
    expansion is essential, but you can't do a rolling upgrade from 2003 to
    2008). Even though you have a 2-node cluster you can still run virtual
    servers on both boxes. You could have a balance of virtual servers running
    across the nodes, and when one node fails all of the virtual servers will
    run on one node; you could see performance issues then, but with the
    reliability of hardware these days it would be an infrequent situation.

    Just having 2 servers in a cluster doesn't make it HA (High Availability).
    Dual pathing, dual controllers, ECC memory, hot spares, RAID 10, dual NICs,
    and a dual-pathed network all contribute to an HA moniker.

    There's a difference between instances and virtual servers. A virtual
    server lives in its own resource group and can only live on one node of
    the cluster at a time (2003: 8 nodes max; 2008: 16). You could have one
    SQL Server instance per virtual server, or you could have many - multiple
    instances can reside in one resource group / virtual server.
    Plain instances run basically next to each other on a server: they share
    the same virtual server name, are part of a single resource group, and
    must move together from node to node. Keep in mind that there can still
    only be 1 default instance and any additional ones must be named
    instances; they can be in different resource groups / virtual servers,
    but the 1-default rule still holds.
    So if you put multiple instances in one virtual server, you will not be
    able to separate the databases easily if you find they don't play well
    together.

    You should keep in mind that each database has its own character. You
    could have 200 Citrix data stores and not have a problem, while one single
    200 GB SharePoint content database storing BLOBs with 400 concurrent users
    could give you great grief. The bottom line is that databases can have
    very different or very similar requirements; either can bite you, with the
    latter having more grief potential. Don't mix heavy hitters on the same
    disks (arrays, drives, etc.).

    Your disk subsystem, along with array design, is essential to good
    performance. If you do a lot of writes on RAID 5 (or 6) and you have X
    number of high-transaction OLTP databases, you may run into trouble. You
    could have several high-read databases on a 10-disk RAID 10 and still
    have problems. Don't let a SAN engineer tell you that you can't overdrive
    a SAN backend; I've seen it more than once and had great pain over it. If
    you are using something like an HP MSA or Dell PowerVault, make sure it is
    fully redundant and that you design the arrays correctly. Always use hot
    spares.
    You will by necessity need completely separate disks (drive letters, or
    mount points in 2005/2008) for each virtual server. You won't have a
    choice, so design accordingly.
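
    If you want to see which databases are actually hammering the disks before
    you lay out the arrays, SQL Server 2005 exposes cumulative per-file I/O
    statistics. A quick sketch using the built-in `sys.fn_virtualfilestats`
    function (nothing here is assumed beyond its documented columns):

```sql
-- Cumulative reads, writes, and I/O stall time per database file since
-- the instance last started. High IoStallMS flags the files (and hence
-- the arrays) under the most pressure.
SELECT  DB_NAME(vfs.DbId) AS database_name,
        vfs.FileId,
        vfs.NumberReads,
        vfs.NumberWrites,
        vfs.IoStallMS        -- total ms spent waiting on I/O for this file
FROM    sys.fn_virtualfilestats(NULL, NULL) AS vfs
ORDER BY vfs.IoStallMS DESC;
```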

    Also, multiple instances need their own space to run: memory, processor,
    etc. Get boxes that are big enough to handle it. I've seen great results
    from dual and quad cores. If you plan on more than a couple of instances,
    get as much memory as possible; compared to 10 years ago it's cheap and
    very helpful.
    If your databases are heavily proceduralized (using primarily stored
    procedures), you may want to look at 64-bit, as it gives a larger address
    space for the procedure cache. If everything is ad hoc, 64-bit will still
    work, but 32-bit with AWE may do basically as well; you just won't see as
    big an advantage from converting to stored procedures later.

    It's all about bottlenecks: how many things are vying for the same
    resources at the same time.

    whew - all in one breath
    frankm, May 19, 2009
  5. YaHozna

    YaHozna Guest

    Uri, many apologies for taking so long to reply.

    Thank you very much indeed for the feedback. Yes, it does seem to be the
    case that having multiple instances is not necessarily the best way
    forward, despite my initial assumptions. I had thought it would be easier
    from an admin point of view; however, other more important things may
    suffer as a result - e.g. memory.

    Also, since I currently have an Active-Passive setup I think moving to
    Active-Active might be a better option.


    YaHozna, May 26, 2009
  6. YaHozna

    YaHozna Guest

    Manpreet, many apologies for taking so long to reply.

    Thank you very much indeed for the feedback. Your thinking is pretty much
    in tune with my own, although the general consensus would seem to be that
    instead of additional instances on the same hardware I should perhaps
    think about moving my current Active-Passive configuration to an
    Active-Active one.


    YaHozna, May 26, 2009
  7. YaHozna

    YaHozna Guest

    Frank, many apologies for taking so long to reply.

    Thank you very much indeed for the feedback. Very much more valuable than 2
    cents I would say :)

    The general consensus, from answers given to this question here and on other
    newsgroups would seem to be that it would be more beneficial to adopt an
    Active-Active configuration in place of the current Active-Passive one.
    Databases could then be spread across the 2 nodes. Would you tend to agree
    with this?

    I'm certainly also looking at virtualisation as it's a technology that I use
    both at PC level and at SQL Server level, albeit for hosting SSRS instances.
    Naturally there are additional licensing costs involved with that though.

    Anyway, you've certainly given me lots to think about. Many thanks again.


    YaHozna, May 26, 2009
  8. frankm

    frankm Guest

    Having A/A (not really current terminology, but it's easier) is the most
    cost-effective option.

    The only thing you have to look out for is the fact that all virtual
    servers may wind up on the same node at some time. You have to balance the
    SQL Server resources across the nodes with the idea that they will wind up
    on the same node eventually.
    A big one is if you are running AWE on your 32-bit SQL Servers. The way it
    works is that SQL Server's AWE memory is allocated at startup. If an
    instance fails over from node SQLB to node SQLA and its AWE memory
    allocation is too big for the remaining space, the SQLB instance will
    revert to standard SQL Server memory management instead of AWE.
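
    With AWE in the mix, the usual mitigation is to cap each instance so that
    the caps still fit together on one node. A sketch per 32-bit instance
    (the option names are real; the sizes are hypothetical):

```sql
-- AWE memory is allocated at startup and is not dynamic, so the
-- 'awe enabled' setting only takes effect after a service restart.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
-- Cap so that the max values of all instances sum to less than one
-- node's physical RAM, leaving headroom for the OS.
EXEC sp_configure 'max server memory (MB)', 6144;
RECONFIGURE;
```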

    I would be careful about virtualizing any high-I/O, high-throughput SQL
    Server (or any other RDBMS).
    Disk issues are the first big thing. Second is the way SQL Server does its
    own thread scheduling (UMS - User Mode Scheduler).
    The slicing and dicing of processor resources at the host can cause major
    performance issues.
    frankm, May 26, 2009
