OpsCenter 5.2 cannot connect to multi-dc cluster


We have two data centers (192.X.X.X and 10.X.X.X) between which gossip (port 7001) is possible, but Thrift and the native protocol are not. OpsCenter runs on a node in the first data center (192.X.X.X).
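
To rule out a firewall change, this port policy can be double-checked from the OpsCenter node with a plain TCP probe. A minimal Python sketch (port 9042 for the native protocol is an assumption based on the Cassandra default; 10.0.0.1 is one of the remote nodes):

import socket

def tcp_check(host, port, timeout=5):
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        socket.create_connection((host, port), timeout).close()
        return True
    except socket.error:
        return False

# Probing the other data center from the OpsCenter node:
print(tcp_check('10.0.0.1', 7001))  # gossip port from our setup: True
print(tcp_check('10.0.0.1', 9042))  # native protocol (default port, assumed): False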



After updating from OpsCenter 5.1.3 to OpsCenter 5.2.0 on CentOS 6.6, the dashboard only shows "Cannot Connect to Cluster".



The opscenterd.log file shows repeated attempts to connect to the cluster.



Each attempt begins with a connection to a seed node:




2015-08-10 11:52:04+0200 [Cluster_01] DEBUG: Connecting to cluster, contact points: ['192.168.0.100', '192.168.0.101']; protocol version: 2
2015-08-10 11:52:04+0200 [] DEBUG: Host 192.168.0.100 is now marked up
2015-08-10 11:52:04+0200 [] DEBUG: Host 192.168.0.101 is now marked up
2015-08-10 11:52:04+0200 [Cluster_01] DEBUG: [control connection] Opening new connection to 192.168.0.100
2015-08-10 11:52:04+0200 [] INFO: Starting factory
2015-08-10 11:52:04+0200 [Cluster_01] DEBUG: [control connection] Established new connection , registering watchers and refreshing schema and topology
2015-08-10 11:52:05+0200 [Cluster_01] DEBUG: [control connection] Refreshing node list and token map using preloaded results


The following part is repeated for each node in the other data center, and also for each node in the local data center that is not in the list of seed nodes:




2015-08-10 11:52:05+0200 [Cluster_01] DEBUG: [control connection] Found new host to connect to: 10.0.0.1
2015-08-10 11:52:05+0200 [Cluster_01] INFO: New Cassandra host 10.0.0.1 discovered
2015-08-10 11:52:05+0200 [Cluster_01] DEBUG: Handling new host 10.0.0.1 and notifying listeners
2015-08-10 11:52:05+0200 [Cluster_01] DEBUG: Not adding connection pool for new host 10.0.0.1 because the load balancing policy has marked it as IGNORED
2015-08-10 11:52:05+0200 [] DEBUG: Host 10.0.0.1 is now marked up
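
The IGNORED message is exactly what we expect from our configuration. OpsCenter's internals are not documented, but the log lines match the DataStax Python driver, where our settings would correspond to a DCAwareRoundRobinPolicy. A standalone sketch of that behavior (using the cassandra-driver package is an assumption on our part):

from cassandra.cluster import Cluster
from cassandra.policies import DCAwareRoundRobinPolicy

# Stay in DC1 and ignore all hosts in remote data centers completely.
policy = DCAwareRoundRobinPolicy(local_dc='DC1', used_hosts_per_remote_dc=0)

cluster = Cluster(
    contact_points=['192.168.0.100', '192.168.0.101'],
    load_balancing_policy=policy,
    protocol_version=2,
)
session = cluster.connect()

# Remote hosts should come back as HostDistance.IGNORED, so no
# connection pool is ever created for them.
for host in cluster.metadata.all_hosts():
    print(host.address, policy.distance(host))

cluster.shutdown()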


The log continues until the control connection is closed:




2015-08-10 11:52:05+0200 [Cluster_01] DEBUG: [control connection] Finished fetching ring info
2015-08-10 11:52:05+0200 [Cluster_01] DEBUG: [control connection] Rebuilding token map due to topology changes
2015-08-10 11:52:05+0200 [Cluster_01] DEBUG: [control connection] Attempting to use preloaded results for schema agreement
2015-08-10 11:52:05+0200 [Cluster_01] DEBUG: [control connection] Schemas match
2015-08-10 11:52:05+0200 [Cluster_01] DEBUG: [control connection] user types table not found
2015-08-10 11:52:05+0200 [Cluster_01] DEBUG: [control connection] Fetched schema, rebuilding metadata
2015-08-10 11:52:05+0200 [Cluster_01] DEBUG: Control connection created
2015-08-10 11:52:05+0200 [] DEBUG: Initializing new connection pool for host 192.168.0.100
2015-08-10 11:52:05+0200 [] INFO: Starting factory
2015-08-10 11:52:05+0200 [] INFO: Starting factory
2015-08-10 11:52:05+0200 [] DEBUG: Finished initializing new connection pool for host 192.168.0.100
2015-08-10 11:52:05+0200 [Cluster_01] DEBUG: Added pool for host 192.168.0.100 to session
2015-08-10 11:52:05+0200 [Cluster_01] DEBUG: Shutting down Cluster Scheduler
2015-08-10 11:52:05+0200 [Cluster_01] DEBUG: Not executing scheduled task due to Scheduler shutdown
2015-08-10 11:52:05+0200 [Cluster_01] DEBUG: Shutting down control connection
2015-08-10 11:52:05+0200 [] DEBUG: Closing connection (46700368) to 192.168.0.100
2015-08-10 11:52:05+0200 [] DEBUG: Closed socket to 192.168.0.100
2015-08-10 11:52:05+0200 [] DEBUG: Closing connection (44407568) to 192.168.0.100
2015-08-10 11:52:05+0200 [] DEBUG: Closed socket to 192.168.0.100
2015-08-10 11:52:05+0200 [] DEBUG: Connect lost: [Failure instance: Traceback (failure with no frames): : Connection was closed cleanly.
]
2015-08-10 11:52:05+0200 [] DEBUG: Closing connection (47567568) to 192.168.0.100
2015-08-10 11:52:05+0200 [] INFO: Stopping factory
2015-08-10 11:52:05+0200 [] DEBUG: Closed socket to 192.168.0.100
2015-08-10 11:52:05+0200 [] DEBUG: Connect lost: [Failure instance: Traceback (failure with no frames): : Connection was closed cleanly.
]
2015-08-10 11:52:05+0200 [] INFO: Stopping factory
2015-08-10 11:52:05+0200 [] DEBUG: Connect lost: [Failure instance: Traceback (failure with no frames): : Connection was closed cleanly.
]
2015-08-10 11:52:05+0200 [] INFO: Stopping factory


Then something strange happens: a connection attempt is made to the first node in the other data center:




2015-08-10 11:52:05+0200 [Cluster_01] DEBUG: Connecting to cluster, contact points: ['10.0.0.1']; protocol version: 2
2015-08-10 11:52:05+0200 [] DEBUG: Host 10.0.0.1 is now marked up
2015-08-10 11:52:05+0200 [Cluster_01] DEBUG: [control connection] Opening new connection to 10.0.0.1
2015-08-10 11:52:05+0200 [] INFO: Starting factory
2015-08-10 11:52:07+0200 [] TRACE: Sending heartbeat.
2015-08-10 11:52:10+0200 [Cluster_01] WARN: [control connection] Error connecting to 10.0.0.1: errors=Timed out creating connection, last_host=None
2015-08-10 11:52:10+0200 [Cluster_01] ERROR: Control connection failed to connect, shutting down Cluster: ('Unable to connect to any servers', {'10.0.0.1': OperationTimedOut('errors=Timed out creating connection, last_host=None',)})
2015-08-10 11:52:10+0200 [Cluster_01] DEBUG: Shutting down Cluster Scheduler
2015-08-10 11:52:10+0200 [Cluster_01] DEBUG: Shutting down control connection
2015-08-10 11:52:10+0200 [Cluster_01] DEBUG: Not executing scheduled task due to Scheduler shutdown
2015-08-10 11:52:10+0200 [] WARN: No cassandra connection available for hostlist ['192.168.0.100', '192.168.0.101'] . Retrying.


This fails, of course, because we do not allow clients to communicate across data centers.
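
That failing attempt is easy to reproduce in isolation by pointing the driver directly at the remote node (again assuming the cassandra-driver package; the timeout mirrors the OperationTimedOut in the log above):

from cassandra.cluster import Cluster, NoHostAvailable

# The same direct cross-DC control connection the log shows:
cluster = Cluster(contact_points=['10.0.0.1'], protocol_version=2)
try:
    cluster.connect()
except NoHostAvailable as exc:
    # With the native protocol blocked between data centers this
    # times out, just like the second connection attempt above.
    print(exc.errors)
finally:
    cluster.shutdown()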



Even with this cluster configuration, OpsCenter still tries to connect to the other (wrong) data center:




[cassandra]
seed_hosts = 192.168.0.100,192.168.0.101
username = opscenter
password = XXX
local_dc_pref = DC1
used_hosts_per_remote_dc = 0


This setup worked without problems for all versions of OpsCenter before 5.2.0. Is it a new requirement that all nodes must be reachable via the native protocol from the OpsCenter host? Can I tell OpsCenter to connect only to its local data center?

      opscenter

      asked Aug 10 '15 at 11:14
Severin Leonhardt

          1 Answer
I can confirm your bug; it is tracked as OPSC-6299 (sorry, there is no public bug tracker, but this ID can be used in communications with DataStax or in future ticket references).



The short of it is that your configuration is valid and OpsCenter should be respecting that load balancing policy, but in this case there's a bug.

                answered Aug 11 '15 at 20:47
Dio