Cassandra Cluster Setup

Setting up a Cassandra cluster is straightforward; in short, it comes down to four steps (a rough sketch follows the list):

(1) Install and configure the JRE on every node

(2) Deploy the Cassandra binary distribution

(3) Edit the Cassandra configuration file

(4) Start Cassandra
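
A rough sketch of steps (2)-(4) on a single node; the tarball name matches the 4.0.0 release used later in this post, while the download location and install path are assumptions:

# (2) deploy the binaries (install path /home/hadoop is an assumption)
tar -zxvf apache-cassandra-4.0.0-bin.tar.gz -C /home/hadoop/
cd /home/hadoop/apache-cassandra-4.0.0
# (3) edit the cluster / seed / address settings
vi conf/cassandra.yaml
# (4) start the node
bin/cassandra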

1. Preparation:

192.168.159.150  ---cassandra1

192.168.159.151  ---cassandra2

192.168.159.152  ---cassandra3
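
Optionally, map these hostnames to the IPs in /etc/hosts on every server, purely for convenience (the rest of this post uses the raw IPs):

# /etc/hosts on all three servers
192.168.159.150  cassandra1
192.168.159.151  cassandra2
192.168.159.152  cassandra3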

[hadoop@datax bin]$ java -version
java version "1.8.0_333"
Java(TM) SE Runtime Environment (build 1.8.0_333-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.333-b02, mixed mode)
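
To confirm the JRE on all three nodes at once, a loop like the one below works, assuming passwordless SSH is set up for the hadoop user (an assumption; it is not covered in this post):

for h in 192.168.159.150 192.168.159.151 192.168.159.152; do
    echo "== $h =="
    ssh hadoop@"$h" 'java -version'
done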

The basic environment setup is omitted here; building the cluster mainly comes down to a handful of settings in conf/cassandra.yaml, such as the seed nodes.

 

[hadoop@datax bin]$ grep '^[^#|^$]' ../conf/cassandra.yaml
cluster_name: 'AndyXi Cluster'
num_tokens: 16 # number of virtual-node tokens this node will own in the ring
allocate_tokens_for_local_replication_factor: 3
hinted_handoff_enabled: true
max_hint_window_in_ms: 10800000 # 3 hours
hinted_handoff_throttle_in_kb: 1024
max_hints_delivery_threads: 2
hints_flush_period_in_ms: 10000
max_hints_file_size_in_mb: 128
batchlog_replay_throttle_in_kb: 1024
authenticator: AllowAllAuthenticator
authorizer: AllowAllAuthorizer
role_manager: CassandraRoleManager
network_authorizer: AllowAllNetworkAuthorizer
roles_validity_in_ms: 2000
permissions_validity_in_ms: 2000
credentials_validity_in_ms: 2000
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
cdc_enabled: false
disk_failure_policy: stop
commit_failure_policy: stop
prepared_statements_cache_size_mb:
key_cache_size_in_mb:
key_cache_save_period: 14400
row_cache_size_in_mb: 0
row_cache_save_period: 0
counter_cache_size_in_mb:
counter_cache_save_period: 7200
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
commitlog_segment_size_in_mb: 32
seed_provider:
    # Addresses of hosts that are deemed contact points.
    # Cassandra nodes use this list of hosts to find each other and learn
    # the topology of the ring. You must change this if you are running
    # multiple nodes!
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          # seeds is actually a comma-delimited list of addresses.
          # Ex: "<ip1>,<ip2>,<ip3>"
          - seeds: "192.168.159.151,192.168.159.152"
concurrent_reads: 32
concurrent_writes: 32
concurrent_counter_writes: 32
concurrent_materialized_view_writes: 32
memtable_allocation_type: heap_buffers
index_summary_capacity_in_mb:
index_summary_resize_interval_in_minutes: 60
trickle_fsync: false
trickle_fsync_interval_in_kb: 10240
storage_port: 7000
ssl_storage_port: 7001
listen_address: 192.168.159.150
start_native_transport: true
native_transport_port: 9042
native_transport_allow_older_protocols: true
rpc_address: 192.168.159.150
rpc_keepalive: true
incremental_backups: false
snapshot_before_compaction: false
auto_snapshot: true
snapshot_links_per_second: 0
column_index_size_in_kb: 64
column_index_cache_size_in_kb: 2
concurrent_materialized_view_builders: 1
compaction_throughput_mb_per_sec: 64
sstable_preemptive_open_interval_in_mb: 50
read_request_timeout_in_ms: 5000
range_request_timeout_in_ms: 10000
write_request_timeout_in_ms: 2000
counter_write_request_timeout_in_ms: 5000
cas_contention_timeout_in_ms: 1000
truncate_request_timeout_in_ms: 60000
request_timeout_in_ms: 10000
slow_query_log_timeout_in_ms: 500
endpoint_snitch: SimpleSnitch
dynamic_snitch_update_interval_in_ms: 100
dynamic_snitch_reset_interval_in_ms: 600000
dynamic_snitch_badness_threshold: 1.0
server_encryption_options:
    # On outbound connections, determine which type of peers to securely connect to.
    # The available options are :
    # none : Do not encrypt outgoing connections
    # dc : Encrypt connections to peers in other datacenters but not within datacenters
    # rack : Encrypt connections to peers in other racks but not within racks
    # all : Always use encrypted connections
    internode_encryption: none
    # When set to true, encrypted and unencrypted connections are allowed on the storage_port
    # This should _only be true_ while in unencrypted or transitional operation
    # optional defaults to true if internode_encryption is none
    # optional: true
    # If enabled, will open up an encrypted listening socket on ssl_storage_port. Should only be used
    # during upgrade to 4.0; otherwise, set to false.
    enable_legacy_ssl_storage_port: false
    # Set to a valid keystore if internode_encryption is dc, rack or all
    keystore: conf/.keystore
    keystore_password: cassandra
    # Verify peer server certificates
    require_client_auth: false
    # Set to a valid trustore if require_client_auth is true
    truststore: conf/.truststore
    truststore_password: cassandra
    # Verify that the host name in the certificate matches the connected host
    require_endpoint_verification: false
    # More advanced defaults:
    # protocol: TLS
    # store_type: JKS
    # cipher_suites: [
    # TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
    # TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
    # TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA,
    # TLS_RSA_WITH_AES_256_CBC_SHA
    # ]
client_encryption_options:
    # Enable client-to-server encryption
    enabled: false
    # When set to true, encrypted and unencrypted connections are allowed on the native_transport_port
    # This should _only be true_ while in unencrypted or transitional operation
    # optional defaults to true when enabled is false, and false when enabled is true.
    # optional: true
    # Set keystore and keystore_password to valid keystores if enabled is true
    keystore: conf/.keystore
    keystore_password: cassandra
    # Verify client certificates
    require_client_auth: false
    # Set trustore and truststore_password if require_client_auth is true
    # truststore: conf/.truststore
    # truststore_password: cassandra
    # More advanced defaults:
    # protocol: TLS
    # store_type: JKS
    # cipher_suites: [
    # TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
    # TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
    # TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA,
    # TLS_RSA_WITH_AES_256_CBC_SHA
    # ]
internode_compression: dc
inter_dc_tcp_nodelay: false
tracetype_query_ttl: 86400
tracetype_repair_ttl: 604800
enable_user_defined_functions: false
enable_scripted_user_defined_functions: false
windows_timer_interval: 1
transparent_data_encryption_options:
    enabled: false
    chunk_length_kb: 64
    cipher: AES/CBC/PKCS5Padding
    key_alias: testing:1
    # CBC IV length for AES needs to be 16 bytes (which is also the default size)
    # iv_length: 16
    key_provider:
      - class_name: org.apache.cassandra.security.JKSKeyProvider
        parameters:
          - keystore: conf/.keystore
            keystore_password: cassandra
            store_type: JCEKS
            key_password: cassandra
tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000
replica_filtering_protection:
    # These thresholds exist to limit the damage severely out-of-date replicas can cause during these
    # queries. They limit the number of rows from all replicas individual index and filtering queries
    # can materialize on-heap to return correct results at the desired read consistency level.
    #
    # "cached_replica_rows_warn_threshold" is the per-query threshold at which a warning will be logged.
    # "cached_replica_rows_fail_threshold" is the per-query threshold at which the query will fail.
    #
    # These thresholds may also be adjusted at runtime using the StorageService mbean.
    #
    # If the failure threshold is breached, it is likely that either the current page/fetch size
    # is too large or one or more replicas is severely out-of-sync and in need of repair.
    cached_rows_warn_threshold: 2000
    cached_rows_fail_threshold: 32000
batch_size_warn_threshold_in_kb: 5
batch_size_fail_threshold_in_kb: 50
unlogged_batch_across_partitions_warn_threshold: 10
compaction_large_partition_warning_threshold_mb: 100
audit_logging_options:
    enabled: false
    logger:
      - class_name: BinAuditLogger
    # audit_logs_dir:
    # included_keyspaces:
    # excluded_keyspaces: system, system_schema, system_virtual_schema
    # included_categories:
    # excluded_categories:
    # included_users:
    # excluded_users:
    # roll_cycle: HOURLY
    # block: true
    # max_queue_weight: 268435456 # 256 MiB
    # max_log_size: 17179869184 # 16 GiB
    ## archive command is "/path/to/script.sh %path" where %path is replaced with the file being rolled:
    # archive_command:
    # max_archive_retries: 10
    # log_dir:
    # roll_cycle: HOURLY
    # block: true
    # max_queue_weight: 268435456 # 256 MiB
    # max_log_size: 17179869184 # 16 GiB
    ## archive command is "/path/to/script.sh %path" where %path is replaced with the file being rolled:
    # archive_command:
    # max_archive_retries: 10
diagnostic_events_enabled: false
repaired_data_tracking_for_range_reads_enabled: false
repaired_data_tracking_for_partition_reads_enabled: false
report_unconfirmed_repaired_data_mismatches: false
enable_materialized_views: false
enable_sasi_indexes: false
enable_transient_replication: false
enable_drop_compact_storage: false

2. Configuration changes

The other two nodes use the same configuration: leave the seeds list as it is, and change the address fields to each server's own static IP (a per-node summary is sketched below).
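
For reference, a minimal sketch of what actually differs per node, based on the three IPs above; everything else in cassandra.yaml stays identical:

# cassandra1 (192.168.159.150)
listen_address: 192.168.159.150
rpc_address: 192.168.159.150
# cassandra2 (192.168.159.151)
listen_address: 192.168.159.151
rpc_address: 192.168.159.151
# cassandra3 (192.168.159.152)
listen_address: 192.168.159.152
rpc_address: 192.168.159.152
# identical on all three nodes
cluster_name: 'AndyXi Cluster'
seeds: "192.168.159.151,192.168.159.152"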

3. Startup

Run bin/cassandra on each node to start the server process, for example:
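
A sketch, assuming the same install path as in the deployment step above:

cd /home/hadoop/apache-cassandra-4.0.0/bin   # path is an assumption
./cassandra                    # starts the daemon in the background
# ./cassandra -f               # alternatively, stay in the foreground for debugging
tail -f ../logs/system.log     # watch the startup and gossip messages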

4. Errors

Example 1:

 

WARN [main] 2022-12-22 14:19:03,077 SystemKeyspace.java:1130 - No host ID found, created 7ad63e8c-2692-4782-b73d-7e5a0ee2c287 (Note: This should happen exactly once per node).
INFO [Messaging-EventLoop-3-1] 2022-12-22 14:19:03,219 NoSpamLogger.java:92 - /192.168.159.151:7000->/192.168.159.152:7000-URGENT_MESSAGES-[no-channel] failed to connect
io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: /192.168.159.152:7000
Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused
at io.netty.channel.unix.Errors.throwConnectException(Errors.java:124)
at io.netty.channel.unix.Socket.finishConnect(Socket.java:251)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:673)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:650)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:530)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:470)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:750)
INFO [Messaging-EventLoop-3-1] 2022-12-22 14:19:03,673 InboundConnectionInitiator.java:464 - /192.168.159.150:7000(/192.168.159.150:44688)->/192.168.159.151:7000-URGENT_MESSAGES-2be24ed1 messaging connection established, version = 12, framing = CRC, encryption = unencrypted
INFO [Messaging-EventLoop-3-1] 2022-12-22 14:19:03,729 OutboundConnection.java:1150 - /192.168.159.151:7000(/192.168.159.151:54214)->/192.168.159.150:7000-URGENT_MESSAGES-94e652c8 successfully connected, version = 12, framing = CRC, encryption = unencrypted
INFO [Messaging-EventLoop-3-1] 2022-12-22 14:19:33,296 NoSpamLogger.java:92 - /192.168.159.151:7000->/192.168.159.152:7000-URGENT_MESSAGES-[no-channel] failed to connect
io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: /192.168.159.152:7000
Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused
at io.netty.channel.unix.Errors.throwConnectException(Errors.java:124)
at io.netty.channel.unix.Socket.finishConnect(Socket.java:251)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:673)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:650)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:530)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:470)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:750)

 

If you hit this, check the firewall: connections to port 7000 on 192.168.159.152 were being refused. Since my machines sit on an isolated internal network, I simply ran iptables -F to flush all rules, which resolved it.
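
iptables -F is fine in an isolated lab, but it flushes every rule. A more targeted alternative, assuming firewalld (adjust for your distribution), is to open only the ports this cassandra.yaml uses:

# 7000 storage_port (inter-node), 7001 ssl_storage_port, 9042 native transport, 7199 JMX (nodetool)
sudo firewall-cmd --permanent --add-port=7000/tcp --add-port=7001/tcp
sudo firewall-cmd --permanent --add-port=9042/tcp --add-port=7199/tcp
sudo firewall-cmd --reload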

Verify connectivity between the nodes (the original screenshots are omitted; a quick check is sketched below):
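
A minimal sketch of the check, assuming nc (netcat) is available; telnet or any port scanner works just as well:

# from cassandra1, confirm the inter-node and client ports on the other nodes are reachable
for h in 192.168.159.151 192.168.159.152; do
    nc -zv "$h" 7000    # storage_port (inter-node messaging)
    nc -zv "$h" 9042    # native transport (cqlsh / drivers)
done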

 

 

Fix any mistyped configuration on the affected node (the original screenshots are omitted):

 

 

Resolving the nodetool status error:

 

[hadoop@datax2 bin]$ nodetool status
nodetool: Failed to connect to '127.0.0.1:7199' - URISyntaxException: 'Malformed IPv6 address at index 7: rmi://[127.0.0.1]:7199'.

Root cause

The URL parser used by JNDI providers, including RMI (which JMX relies on), was hardened in Oracle Java 8u331 and later (this cluster runs 8u333); it now only allows square brackets around IPv6 addresses (JDK-8278972).

Running nodetool under these newer Java releases therefore fails, because the host in the RMI URL is wrapped in square brackets (from the NodeProbe.java class):

private static final String fmtUrl = "service:jmx:rmi:///jndi/rmi://[%s]:%d/jmxrmi";

Workarounds

Option 1 - pass the "legacy" URL-parsing flag when running nodetool, for example:

$ nodetool -Dcom.sun.jndi.rmiURLParsing=legacy status

Option 2 - specify the host as an IPv4-mapped IPv6 address, for example:

$ nodetool -h ::FFFF:127.0.0.1 status

Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address          Load       Tokens  Owns (effective)  Host ID                               Rack
UN  192.168.159.151  69.06 KiB  16      64.7%             7ad63e8c-2692-4782-b73d-7e5a0ee2c287  rack1
UN  192.168.159.152  74.11 KiB  16      59.3%             8bcce62b-7fe0-43b3-b504-40f71189bae5  rack1
UN  192.168.159.150  74.04 KiB  16      76.0%             222f91f3-76ca-454b-bd18-1058aff2e43d  rack1

 

Check the runtime status of a specific node (-h takes the IP, -p takes the JMX port):

[hadoop@datax3 bin]$ nodetool -h ::FFFF:127.0.0.1 -p7199 info
ID : 8bcce62b-7fe0-43b3-b504-40f71189bae5
Gossip active : true
Native Transport active: true
Load : 238.84 KiB
Generation No : 1671772230
Uptime (seconds) : 79222
Heap Memory (MB) : 202.82 / 1014.00
Off Heap Memory (MB) : 0.00
Data Center : datacenter1
Rack : rack1
Exceptions : 0
Key Cache : entries 22, size 2.09 KiB, capacity 50 MiB, 99 hits, 124 requests, 0.798 recent hit rate, 14400 save period in seconds
Row Cache : entries 0, size 0 bytes, capacity 0 bytes, 0 hits, 0 requests, NaN recent hit rate, 0 save period in seconds
Counter Cache : entries 0, size 0 bytes, capacity 25 MiB, 0 hits, 0 requests, NaN recent hit rate, 7200 save period in seconds
Percent Repaired : 100.0%
Token : (invoke with -T/--tokens to see all 16 tokens)

View per-table (column family) statistics or the current thread-pool activity:

[hadoop@datax3 bin]$ nodetool -h ::FFFF:127.0.0.1 -p7199 cfstats
[hadoop@datax3 bin]$ nodetool -h ::FFFF:127.0.0.1 -p7199 tpstats

 

 

Option 3 - upgrade

The issue is fixed in Apache Cassandra 3.0.27, 3.11.13, 4.0.4 and 4.1 (CASSANDRA-17581); my cluster happens to be on 4.0.0, which is why it is affected.

Regardless of the workaround used, cqlsh confirms the cluster accepts client connections:

 

[hadoop@datax2 bin]$ cqlsh 192.168.159.151
Connected to AndyXi Cluster at 192.168.159.151:9042
[cqlsh 6.0.0 | Cassandra 4.0.0 | CQL spec 3.4.5 | Native protocol v5]
Use HELP for help.
cqlsh>
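
From here a quick smoke test can be run in cqlsh. The keyspace and table below are made-up examples; SimpleStrategy with replication_factor 3 is simply a reasonable choice for this three-node, SimpleSnitch setup:

cqlsh> CREATE KEYSPACE IF NOT EXISTS demo WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
cqlsh> CREATE TABLE IF NOT EXISTS demo.kv (k text PRIMARY KEY, v text);
cqlsh> INSERT INTO demo.kv (k, v) VALUES ('hello', 'cassandra');
cqlsh> SELECT * FROM demo.kv;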

View the token (hash) ring of the cluster (the original screenshot is omitted; the equivalent command is shown below):
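
nodetool ring lists every token, its owning node, and the ownership percentages; the same IPv4-mapped-address workaround applies:

[hadoop@datax2 bin]$ nodetool -h ::FFFF:127.0.0.1 -p 7199 ring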

 
