TiDB Single-Node Deployment on a Virtual Machine
A single-node environment is intended only for hands-on practice, testing, and development verification.
Environment: VMware Workstation + CentOS 7.5
1. Download the software (about 170 MB), extract it, and move it into place:
# wget http://download.pingcap.org/tidb-latest-linux-amd64.tar.gz
# tar -xzf tidb-latest-linux-amd64.tar.gz -C /usr/local/
# mv /usr/local/tidb-latest-linux-amd64/ /usr/local/tidb
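Optionally, confirm the binaries are in place before going further. A quick check (the -V flag is assumed here from the tools' usual command-line conventions; --version should also work):
# /usr/local/tidb/pd-server -V
# /usr/local/tidb/tidb-server -V
# /usr/local/tidb/tikv-server -V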
2. Plan and create the data directories:
# mkdir -p /data/tidb/{pddata,tikvdata}
3. Start PD:
# /usr/local/tidb/pd-server --data-dir=/data/tidb/pddata/ &
[1] 4486
[root@node1 tidb]# 2018/09/05 02:50:16.549 util.go:56: [info] Welcome to Placement Driver (PD).
2018/09/05 02:50:16.549 util.go:57: [info] Release Version: v2.1.0-rc.1-14-g0d99adc
2018/09/05 02:50:16.549 util.go:58: [info] Git Commit Hash: 0d99adce94dd87fa3ab162830dd379fe167ec71a
2018/09/05 02:50:16.549 util.go:59: [info] Git Branch: master
2018/09/05 02:50:16.549 util.go:60: [info] UTC Build Time: 2018-09-05 09:00:03
2018/09/05 02:50:16.549 metricutil.go:79: [info] disable Prometheus push client
2018/09/05 02:50:16.549 server.go:103: [info] PD config - {
"client-urls": "http://127.0.0.1:2379",
"peer-urls": "http://127.0.0.1:2380",
"advertise-client-urls": "http://127.0.0.1:2379",
"advertise-peer-urls": "http://127.0.0.1:2380",
"name": "pd",
"data-dir": "/data/tidb/pddata/",
"initial-cluster": "pd=http://127.0.0.1:2380",
"initial-cluster-state": "new",
"join": "",
"lease": 3,
"log": {
"level": "",
"format": "text",
"disable-timestamp": false,
"file": {
"filename": "",
"log-rotate": true,
"max-size": 0,
"max-days": 0,
"max-backups": 0
}
},
"log-file": "",
"log-level": "",
"tso-save-interval": "3s",
"metric": {
"job": "pd",
"address": "",
"interval": "0s"
},
"schedule": {
"max-snapshot-count": 3,
"max-pending-peer-count": 16,
"max-merge-region-size": 20,
"max-merge-region-keys": 200000,
"split-merge-interval": "1h0m0s",
"patrol-region-interval": "100ms",
"max-store-down-time": "30m0s",
"leader-schedule-limit": 4,
"region-schedule-limit": 4,
"replica-schedule-limit": 8,
"merge-schedule-limit": 8,
"tolerant-size-ratio": 5,
"low-space-ratio": 0.8,
"high-space-ratio": 0.6,
"disable-raft-learner": "false",
"disable-remove-down-replica": "false",
"disable-replace-offline-replica": "false",
"disable-make-up-replica": "false",
"disable-remove-extra-replica": "false",
"disable-location-replacement": "false",
"disable-namespace-relocation": "false",
"schedulers-v2": [
{
"type": "balance-region",
"args": null,
"disable": false
},
{
"type": "balance-leader",
"args": null,
"disable": false
},
{
"type": "hot-region",
"args": null,
"disable": false
},
{
"type": "label",
"args": null,
"disable": false
}
]
},
"replication": {
"max-replicas": 3,
"location-labels": ""
},
"namespace": {},
"cluster-version": "0.0.0",
"quota-backend-bytes": "0 B",
"auto-compaction-mode": "periodic",
"auto-compaction-retention-v2": "1h",
"TickInterval": "500ms",
"ElectionInterval": "3s",
"PreVote": true,
"security": {
"cacert-path": "",
"cert-path": "",
"key-path": ""
},
"label-property": null,
"WarningMsgs": null,
"namespace-classifier": "table"
}
2018/09/05 02:50:16.550 server.go:135: [info] start embed etcd
2018/09/05 02:50:16.550 log.go:86: [info] embed: [pprof is enabled under /debug/pprof]
2018/09/05 02:50:16.551 systime_mon.go:24: [info] start system time monitor
2018/09/05 02:50:16.570 log.go:86: [info] etcdserver: [name = pd]
2018/09/05 02:50:16.570 log.go:86: [info] etcdserver: [data dir = /data/tidb/pddata/]
2018/09/05 02:50:16.570 log.go:86: [info] etcdserver: [member dir = /data/tidb/pddata/member]
2018/09/05 02:50:16.570 log.go:86: [info] etcdserver: [heartbeat = 500ms]
2018/09/05 02:50:16.570 log.go:86: [info] etcdserver: [election = 3000ms]
2018/09/05 02:50:16.570 log.go:86: [info] etcdserver: [snapshot count = 100000]
2018/09/05 02:50:16.570 log.go:86: [info] etcdserver: [advertise client URLs = http://127.0.0.1:2379]
2018/09/05 02:50:16.570 log.go:86: [info] etcdserver: [initial advertise peer URLs = http://127.0.0.1:2380]
2018/09/05 02:50:16.570 log.go:86: [info] etcdserver: [initial cluster = pd=http://127.0.0.1:2380]
2018/09/05 02:50:16.575 log.go:86: [info] etcdserver: [starting member b71f75320dc06a6c in cluster 1c45a069f3a1d796]
{"level":"info","ts":1536087016.5758674,"caller":"raft/raft.go:656","msg":"b71f75320dc06a6c became follower at term 0"}
{"level":"info","ts":1536087016.5759113,"caller":"raft/raft.go:364","msg":"newRaft b71f75320dc06a6c [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
{"level":"info","ts":1536087016.57592,"caller":"raft/raft.go:656","msg":"b71f75320dc06a6c became follower at term 1"}
2018/09/05 02:50:16.592 log.go:82: [warning] auth: [simple token is not cryptographically signed]
2018/09/05 02:50:16.594 log.go:86: [info] embed: [b71f75320dc06a6c starting with cors ["*"]]
2018/09/05 02:50:16.594 log.go:86: [info] embed: [b71f75320dc06a6c starting with host whitelist ["*"]]
2018/09/05 02:50:16.594 log.go:86: [info] etcdserver: [starting server... [version: 3.3.0+git, cluster version: to_be_decided]]
2018/09/05 02:50:16.595 log.go:86: [info] etcdserver: [b71f75320dc06a6c as single-node; fast-forwarding 5 ticks (election ticks 6)]
2018/09/05 02:50:16.595 log.go:86: [info] etcdserver/membership: [added member b71f75320dc06a6c [http://127.0.0.1:2380] to cluster 1c45a069f3a1d796]
2018/09/05 02:50:16.596 log.go:86: [info] embed: [listening for peers on 127.0.0.1:2380]
{"level":"info","ts":1536087017.0773394,"caller":"raft/raft.go:857","msg":"b71f75320dc06a6c is starting a new election at term 1"}
{"level":"info","ts":1536087017.0773728,"caller":"raft/raft.go:684","msg":"b71f75320dc06a6c became pre-candidate at term 1"}
{"level":"info","ts":1536087017.0773876,"caller":"raft/raft.go:755","msg":"b71f75320dc06a6c received MsgPreVoteResp from b71f75320dc06a6c at term 1"}
{"level":"info","ts":1536087017.0773978,"caller":"raft/raft.go:669","msg":"b71f75320dc06a6c became candidate at term 2"}
{"level":"info","ts":1536087017.0774035,"caller":"raft/raft.go:755","msg":"b71f75320dc06a6c received MsgVoteResp from b71f75320dc06a6c at term 2"}
{"level":"info","ts":1536087017.0774126,"caller":"raft/raft.go:712","msg":"b71f75320dc06a6c became leader at term 2"}
{"level":"info","ts":1536087017.07742,"caller":"raft/node.go:306","msg":"raft.node: b71f75320dc06a6c elected leader b71f75320dc06a6c at term 2"}
2018/09/05 02:50:17.077 log.go:86: [info] etcdserver: [published {Name:pd ClientURLs:[http://127.0.0.1:2379]} to cluster 1c45a069f3a1d796]
2018/09/05 02:50:17.077 log.go:86: [info] embed: [ready to serve client requests]
2018/09/05 02:50:17.077 log.go:86: [info] etcdserver: [setting up the initial cluster version to 3.3]
2018/09/05 02:50:17.077 log.go:84: [info] embed: [serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!]
2018/09/05 02:50:17.077 log.go:84: [info] etcdserver/membership: [set the initial cluster version to 3.3]
2018/09/05 02:50:17.078 server.go:162: [info] create etcd v3 client with endpoints [http://127.0.0.1:2379]
2018/09/05 02:50:17.078 log.go:86: [info] etcdserver/api: [enabled capabilities for version 3.3]
2018/09/05 02:50:17.080 server.go:201: [info] init cluster id 6597443502776081340
2018/09/05 02:50:17.080 namespace_classifier.go:438: [info] load 0 namespacesInfo cost 176.109µs
2018/09/05 02:50:17.082 tso.go:104: [info] sync and save timestamp: last 0001-01-01 00:00:00 +0000 UTC save 2018-09-05 02:50:20.082110968 +0800 CST m=+3.558018392 next 2018-09-05 02:50:17.082110968 +0800 CST m=+0.558018392
2018/09/05 02:50:17.082 leader.go:263: [info] cluster version is 0.0.0
2018/09/05 02:50:17.082 leader.go:264: [info] PD cluster leader pd is ready to serve
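PD is now the cluster leader. Before starting TiKV, it is worth a quick sanity check that PD answers on its client URL, for example via its HTTP members API (a minimal sketch, assuming the default client URL shown in the config dump above):
# curl http://127.0.0.1:2379/pd/api/v1/members
If PD is healthy, this returns JSON describing the single member "pd" with its client and peer URLs.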
4. Start TiKV:
# /usr/local/tidb/tikv-server --pd="127.0.0.1:2379" --store=/data/tidb/tikvdata/ &
[2] 4532
[root@node1 tidb]# 2018/09/05 02:52:56.885 INFO mod.rs:26: Welcome to TiKV.
Release Version: 2.1.0-rc.1
Git Commit Hash: 355b8fcbfc8a89f426445ab94defd6053c5685de
Git Commit Branch: master
UTC Build Time: 2018-09-05 07:32:14
Rust Version: rustc 1.29.0-nightly (4f3c7a472 2018-07-17)
2018/09/05 02:52:56.899 INFO config.rs:142: no advertise-addr is specified, fall back to addr.
2018/09/05 02:52:56.900 INFO tikv-server.rs:392: using config: {
"log-level": "info",
"log-file": "",
"log-rotation-timespan": "24h",
"readpool": {
"storage": {
"high-concurrency": 4,
"normal-concurrency": 4,
"low-concurrency": 4,
"max-tasks-per-worker-high": 2000,
"max-tasks-per-worker-normal": 2000,
"max-tasks-per-worker-low": 2000,
"stack-size": "10MB"
},
"coprocessor": {
"high-concurrency": 8,
"normal-concurrency": 8,
"low-concurrency": 8,
"max-tasks-per-worker-high": 2000,
"max-tasks-per-worker-normal": 2000,
"max-tasks-per-worker-low": 2000,
"stack-size": "10MB"
}
},
"server": {
"addr": "127.0.0.1:20160",
"advertise-addr": "127.0.0.1:20160",
"grpc-compression-type": "none",
"grpc-concurrency": 4,
"grpc-concurrent-stream": 1024,
"grpc-raft-conn-num": 10,
"grpc-stream-initial-window-size": "2MB",
"grpc-keepalive-time": "10s",
"grpc-keepalive-timeout": "3s",
"concurrent-send-snap-limit": 32,
"concurrent-recv-snap-limit": 32,
"end-point-recursion-limit": 1000,
"end-point-stream-channel-size": 8,
"end-point-batch-row-limit": 64,
"end-point-stream-batch-row-limit": 128,
"end-point-request-max-handle-duration": "1m",
"snap-max-write-bytes-per-sec": "100MB",
"snap-max-total-size": "0KB",
"labels": {}
},
"storage": {
"data-dir": "/data/tidb/tikvdata",
"gc-ratio-threshold": 1.1,
"max-key-size": 4096,
"scheduler-notify-capacity": 10240,
"scheduler-concurrency": 2048000,
"scheduler-worker-pool-size": 4,
"scheduler-pending-write-threshold": "100MB"
},
"pd": {
"endpoints": [
"127.0.0.1:2379"
]
},
"metric": {
"interval": "15s",
"address": "",
"job": "tikv"
},
"raftstore": {
"sync-log": true,
"prevote": true,
"raftdb-path": "/data/tidb/tikvdata/raft",
"capacity": "0KB",
"raft-base-tick-interval": "1s",
"raft-heartbeat-ticks": 2,
"raft-election-timeout-ticks": 10,
"raft-min-election-timeout-ticks": 10,
"raft-max-election-timeout-ticks": 20,
"raft-max-size-per-msg": "1MB",
"raft-max-inflight-msgs": 256,
"raft-entry-max-size": "8MB",
"raft-log-gc-tick-interval": "10s",
"raft-log-gc-threshold": 50,
"raft-log-gc-count-limit": 73728,
"raft-log-gc-size-limit": "72MB",
"split-region-check-tick-interval": "10s",
"region-split-check-diff": "6MB",
"region-compact-check-interval": "5m",
"clean-stale-peer-delay": "11m",
"region-compact-check-step": 100,
"region-compact-min-tombstones": 10000,
"region-compact-tombstones-percent": 30,
"pd-heartbeat-tick-interval": "1m",
"pd-store-heartbeat-tick-interval": "10s",
"snap-mgr-gc-tick-interval": "1m",
"snap-gc-timeout": "4h",
"lock-cf-compact-interval": "10m",
"lock-cf-compact-bytes-threshold": "256MB",
"notify-capacity": 40960,
"messages-per-tick": 4096,
"max-peer-down-duration": "5m",
"max-leader-missing-duration": "2h",
"abnormal-leader-missing-duration": "10m",
"peer-stale-state-check-interval": "5m",
"leader-transfer-max-log-lag": 10,
"snap-apply-batch-size": "10MB",
"consistency-check-interval": "0s",
"report-region-flow-interval": "1m",
"raft-store-max-leader-lease": "9s",
"right-derive-when-split": true,
"allow-remove-leader": false,
"merge-max-log-gap": 10,
"merge-check-tick-interval": "10s",
"use-delete-range": false,
"cleanup-import-sst-interval": "10m",
"local-read-batch-size": 1024
},
"coprocessor": {
"split-region-on-table": true,
"batch-split-limit": 10,
"region-max-size": "144MB",
"region-split-size": "96MB",
"region-max-keys": 1440000,
"region-split-keys": 960000
},
"rocksdb": {
"wal-recovery-mode": 2,
"wal-dir": "",
"wal-ttl-seconds": 0,
"wal-size-limit": "0KB",
"max-total-wal-size": "4GB",
"max-background-jobs": 6,
"max-manifest-file-size": "20MB",
"create-if-missing": true,
"max-open-files": 40960,
"enable-statistics": true,
"stats-dump-period": "10m",
"compaction-readahead-size": "0KB",
"info-log-max-size": "1GB",
"info-log-roll-time": "0s",
"info-log-keep-log-file-num": 10,
"info-log-dir": "",
"rate-bytes-per-sec": "0KB",
"bytes-per-sync": "1MB",
"wal-bytes-per-sync": "512KB",
"max-sub-compactions": 1,
"writable-file-max-buffer-size": "1MB",
"use-direct-io-for-flush-and-compaction": false,
"enable-pipelined-write": true,
"defaultcf": {
"block-size": "64KB",
"block-cache-size": "747MB",
"disable-block-cache": false,
"cache-index-and-filter-blocks": true,
"pin-l0-filter-and-index-blocks": true,
"use-bloom-filter": true,
"whole-key-filtering": true,
"bloom-filter-bits-per-key": 10,
"block-based-bloom-filter": false,
"read-amp-bytes-per-bit": 0,
"compression-per-level": [
"no",
"no",
"lz4",
"lz4",
"lz4",
"zstd",
"zstd"
],
"write-buffer-size": "128MB",
"max-write-buffer-number": 5,
"min-write-buffer-number-to-merge": 1,
"max-bytes-for-level-base": "512MB",
"target-file-size-base": "8MB",
"level0-file-num-compaction-trigger": 4,
"level0-slowdown-writes-trigger": 20,
"level0-stop-writes-trigger": 36,
"max-compaction-bytes": "2GB",
"compaction-pri": 3,
"dynamic-level-bytes": true,
"num-levels": 7,
"max-bytes-for-level-multiplier": 10,
"compaction-style": 0,
"disable-auto-compactions": false,
"soft-pending-compaction-bytes-limit": "64GB",
"hard-pending-compaction-bytes-limit": "256GB"
},
"writecf": {
"block-size": "64KB",
"block-cache-size": "448MB",
"disable-block-cache": false,
"cache-index-and-filter-blocks": true,
"pin-l0-filter-and-index-blocks": true,
"use-bloom-filter": true,
"whole-key-filtering": false,
"bloom-filter-bits-per-key": 10,
"block-based-bloom-filter": false,
"read-amp-bytes-per-bit": 0,
"compression-per-level": [
"no",
"no",
"lz4",
"lz4",
"lz4",
"zstd",
"zstd"
],
"write-buffer-size": "128MB",
"max-write-buffer-number": 5,
"min-write-buffer-number-to-merge": 1,
"max-bytes-for-level-base": "512MB",
"target-file-size-base": "8MB",
"level0-file-num-compaction-trigger": 4,
"level0-slowdown-writes-trigger": 20,
"level0-stop-writes-trigger": 36,
"max-compaction-bytes": "2GB",
"compaction-pri": 3,
"dynamic-level-bytes": true,
"num-levels": 7,
"max-bytes-for-level-multiplier": 10,
"compaction-style": 0,
"disable-auto-compactions": false,
"soft-pending-compaction-bytes-limit": "64GB",
"hard-pending-compaction-bytes-limit": "256GB"
},
"lockcf": {
"block-size": "16KB",
"block-cache-size": "256MB",
"disable-block-cache": false,
"cache-index-and-filter-blocks": true,
"pin-l0-filter-and-index-blocks": true,
"use-bloom-filter": true,
"whole-key-filtering": true,
"bloom-filter-bits-per-key": 10,
"block-based-bloom-filter": false,
"read-amp-bytes-per-bit": 0,
"compression-per-level": [
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"write-buffer-size": "128MB",
"max-write-buffer-number": 5,
"min-write-buffer-number-to-merge": 1,
"max-bytes-for-level-base": "128MB",
"target-file-size-base": "8MB",
"level0-file-num-compaction-trigger": 1,
"level0-slowdown-writes-trigger": 20,
"level0-stop-writes-trigger": 36,
"max-compaction-bytes": "2GB",
"compaction-pri": 0,
"dynamic-level-bytes": true,
"num-levels": 7,
"max-bytes-for-level-multiplier": 10,
"compaction-style": 0,
"disable-auto-compactions": false,
"soft-pending-compaction-bytes-limit": "64GB",
"hard-pending-compaction-bytes-limit": "256GB"
},
"raftcf": {
"block-size": "16KB",
"block-cache-size": "128MB",
"disable-block-cache": false,
"cache-index-and-filter-blocks": true,
"pin-l0-filter-and-index-blocks": true,
"use-bloom-filter": true,
"whole-key-filtering": true,
"bloom-filter-bits-per-key": 10,
"block-based-bloom-filter": false,
"read-amp-bytes-per-bit": 0,
"compression-per-level": [
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"write-buffer-size": "128MB",
"max-write-buffer-number": 5,
"min-write-buffer-number-to-merge": 1,
"max-bytes-for-level-base": "128MB",
"target-file-size-base": "8MB",
"level0-file-num-compaction-trigger": 1,
"level0-slowdown-writes-trigger": 20,
"level0-stop-writes-trigger": 36,
"max-compaction-bytes": "2GB",
"compaction-pri": 0,
"dynamic-level-bytes": true,
"num-levels": 7,
"max-bytes-for-level-multiplier": 10,
"compaction-style": 0,
"disable-auto-compactions": false,
"soft-pending-compaction-bytes-limit": "64GB",
"hard-pending-compaction-bytes-limit": "256GB"
}
},
"raftdb": {
"wal-recovery-mode": 2,
"wal-dir": "",
"wal-ttl-seconds": 0,
"wal-size-limit": "0KB",
"max-total-wal-size": "4GB",
"max-manifest-file-size": "20MB",
"create-if-missing": true,
"max-open-files": 40960,
"enable-statistics": true,
"stats-dump-period": "10m",
"compaction-readahead-size": "0KB",
"info-log-max-size": "1GB",
"info-log-roll-time": "0s",
"info-log-keep-log-file-num": 10,
"info-log-dir": "",
"max-sub-compactions": 1,
"writable-file-max-buffer-size": "1MB",
"use-direct-io-for-flush-and-compaction": false,
"enable-pipelined-write": true,
"allow-concurrent-memtable-write": false,
"bytes-per-sync": "1MB",
"wal-bytes-per-sync": "512KB",
"defaultcf": {
"block-size": "64KB",
"block-cache-size": "256MB",
"disable-block-cache": false,
"cache-index-and-filter-blocks": true,
"pin-l0-filter-and-index-blocks": true,
"use-bloom-filter": false,
"whole-key-filtering": true,
"bloom-filter-bits-per-key": 10,
"block-based-bloom-filter": false,
"read-amp-bytes-per-bit": 0,
"compression-per-level": [
"no",
"no",
"lz4",
"lz4",
"lz4",
"zstd",
"zstd"
],
"write-buffer-size": "128MB",
"max-write-buffer-number": 5,
"min-write-buffer-number-to-merge": 1,
"max-bytes-for-level-base": "512MB",
"target-file-size-base": "8MB",
"level0-file-num-compaction-trigger": 4,
"level0-slowdown-writes-trigger": 20,
"level0-stop-writes-trigger": 36,
"max-compaction-bytes": "2GB",
"compaction-pri": 0,
"dynamic-level-bytes": true,
"num-levels": 7,
"max-bytes-for-level-multiplier": 10,
"compaction-style": 0,
"disable-auto-compactions": false,
"soft-pending-compaction-bytes-limit": "64GB",
"hard-pending-compaction-bytes-limit": "256GB"
}
},
"security": {
"ca-path": "",
"cert-path": "",
"key-path": ""
},
"import": {
"import-dir": "/tmp/tikv/import",
"num-threads": 8,
"num-import-jobs": 8,
"num-import-sst-jobs": 2,
"max-prepare-duration": "5m",
"region-split-size": "96MB",
"stream-channel-window": 128,
"max-open-engines": 8
}
}
2018/09/05 02:52:56.900 WARN tikv-server.rs:87: Limit("kernel parameters net.core.somaxconn got 128, expect 32768")
2018/09/05 02:52:56.900 WARN tikv-server.rs:87: Limit("kernel parameters net.ipv4.tcp_syncookies got 1, expect 0")
2018/09/05 02:52:56.900 WARN tikv-server.rs:87: Limit("kernel parameters vm.swappiness got 30, expect 0")
2018/09/05 02:52:56.900 INFO config.rs:824: data_path: "/data/tidb/tikvdata", mount fs info: FsInfo { tp: "xfs", opts: "rw,relatime,attr2,inode64,noquota", mnt_dir: "/data", fsname: "/dev/mapper/centos-data" }
2018/09/05 02:52:56.900 WARN config.rs:827: "/data/tidb/tikvdata" not on SSD device
2018/09/05 02:52:56.900 INFO config.rs:824: data_path: "/data/tidb/tikvdata/raft", mount fs info: FsInfo { tp: "xfs", opts: "rw,relatime,attr2,inode64,noquota", mnt_dir: "/data", fsname: "/dev/mapper/centos-data" }
2018/09/05 02:52:56.900 WARN config.rs:827: "/data/tidb/tikvdata/raft" not on SSD device
2018/09/05 02:52:56.900 WARN setup.rs:166: environment variable `TZ` is missing, using `/etc/localtime`
2018/09/05 02:52:56.900 INFO util.rs:406: connecting to PD endpoint: "127.0.0.1:2379"
2018/09/05 02:52:57.009 INFO util.rs:406: connecting to PD endpoint: "http://127.0.0.1:2379"
2018/09/05 02:52:57.009 INFO util.rs:406: connecting to PD endpoint: "http://127.0.0.1:2379"
2018/09/05 02:52:57.011 INFO util.rs:465: connected to PD leader "http://127.0.0.1:2379"
2018/09/05 02:52:57.011 INFO util.rs:394: All PD endpoints are consistent: ["127.0.0.1:2379"]
2018/09/05 02:52:57.092 INFO tikv-server.rs:415: connect to PD cluster 6597443502776081340
2018/09/05 02:52:57.731 INFO mod.rs:341: starting working thread: addr-resolver
2018/09/05 02:52:59.729 INFO mod.rs:421: storage RaftKv started.
2018/09/05 02:52:59.968 INFO server.rs:98: listening on 127.0.0.1:20160
2018/09/05 02:52:59.969 id.go:90: [info] idAllocator allocates a new id: 1000
2018/09/05 02:52:59.969 INFO node.rs:229: alloc store id 1
2018/09/05 02:52:59.971 INFO node.rs:242: alloc first region id 2 for cluster 6597443502776081340, store 1
2018/09/05 02:52:59.971 INFO node.rs:247: alloc first peer id 3 for first region 2
2018/09/05 02:52:59.974 server.go:333: [info] try to bootstrap raft cluster 6597443502776081340 with header:<cluster_id:6597443502776081340 > store:<id:1 address:"127.0.0.1:20160" version:"2.1.0-rc.1" > region:<id:2 region_epoch:<conf_ver:1 version:1 > peers:<id:3 store_id:1 > >
2018/09/05 02:52:59.975 server.go:390: [info] bootstrap cluster 6597443502776081340 ok
2018/09/05 02:52:59.975 cluster_info.go:72: [info] load 1 stores cost 329.537µs
2018/09/05 02:52:59.975 cluster_info.go:78: [info] load 1 regions cost 115.76µs
2018/09/05 02:52:59.975 namespace_classifier.go:438: [info] load 0 namespacesInfo cost 114.338µs
2018/09/05 02:52:59.976 coordinator.go:208: [info] coordinator: Start collect cluster information
2018/09/05 02:52:59.977 INFO node.rs:300: bootstrap cluster 6597443502776081340 ok
2018/09/05 02:52:59.979 grpc_service.go:210: [info] put store ok - id:1 address:"127.0.0.1:20160" version:"2.1.0-rc.1"
2018/09/05 02:52:59.980 cluster_info.go:107: [info] cluster version changed from 0.0.0 to 2.1.0-rc.1
2018/09/05 02:52:59.981 INFO node.rs:338: start raft store 1 thread
2018/09/05 02:52:59.981 INFO peer.rs:298: [region 2] create peer with id 3
2018/09/05 02:52:59.981 INFO <unknown>:833: [region 2] 3 became follower at term 5
2018/09/05 02:52:59.981 INFO <unknown>:433: [region 2] 3 newRaft [peers: [3], term: 5, commit: 5, applied: 5, last_index: 5, last_term: 5]
2018/09/05 02:52:59.981 INFO <unknown>:1120: [region 2] 3 is starting a new election at term 5
2018/09/05 02:52:59.981 INFO <unknown>:865: [region 2] 3 became pre-candidate at term 5
2018/09/05 02:52:59.981 INFO <unknown>:950: [region 2] 3 received MsgRequestPreVoteResponse from 3 at term 5
2018/09/05 02:52:59.982 INFO <unknown>:848: [region 2] 3 became candidate at term 6
2018/09/05 02:52:59.982 INFO <unknown>:950: [region 2] 3 received MsgRequestVoteResponse from 3 at term 6
2018/09/05 02:52:59.982 INFO <unknown>:891: [region 2] 3 became leader at term 6
2018/09/05 02:52:59.982 INFO store.rs:283: [store 1] starts with 1 regions, including 0 tombstones, 0 applying regions and 0 merging regions, takes 256.27µs
2018/09/05 02:52:59.982 INFO store.rs:338: [store 1] cleans up 2 ranges garbage data, takes 2.959µs
2018/09/05 02:52:59.982 INFO mod.rs:341: starting working thread: split-check
2018/09/05 02:52:59.982 INFO mod.rs:341: starting working thread: snapshot-worker
2018/09/05 02:52:59.982 INFO mod.rs:341: starting working thread: raft-gc-worker
2018/09/05 02:52:59.982 INFO mod.rs:341: starting working thread: compact-worker
2018/09/05 02:52:59.982 INFO future.rs:142: starting working thread: pd-worker
2018/09/05 02:52:59.982 INFO mod.rs:341: starting working thread: consistency-check
2018/09/05 02:52:59.982 INFO mod.rs:341: starting working thread: cleanup-sst
2018/09/05 02:52:59.982 INFO mod.rs:341: starting working thread: apply-worker
2018/09/05 02:52:59.982 INFO mod.rs:341: starting working thread: local-reader
2018/09/05 02:52:59.982 INFO tikv-server.rs:225: start storage
2018/09/05 02:53:00.696 INFO mod.rs:341: starting working thread: storage-scheduler
2018/09/05 02:53:00.696 INFO mod.rs:341: starting working thread: gc-worker
2018/09/05 02:53:00.696 INFO mod.rs:341: starting working thread: end-point-worker
2018/09/05 02:53:00.696 INFO mod.rs:341: starting working thread: snap-handler
2018/09/05 02:53:00.777 INFO server.rs:170: TiKV is ready to serve
2018/09/05 02:53:02.331 INFO util.rs:84: heartbeat receiver is refreshed.
2018/09/05 02:53:02.331 INFO client.rs:306: heartbeat sender is refreshed.
2018/09/05 02:53:02.331 ERRO client.rs:325: failed to send heartbeat: Grpc(RpcFinished(Some(RpcStatus { status: Ok, details: None })))
2018/09/05 02:53:02.331 ERRO util.rs:297: request failed: Grpc(RpcFinished(Some(RpcStatus { status: Ok, details: None }))), retry
2018/09/05 02:53:02.332 ERRO util.rs:297: request failed: Other(SendError("...")), retry
2018/09/05 02:53:02.332 ERRO util.rs:297: request failed: Other(SendError("...")), retry
2018/09/05 02:53:02.332 WARN util.rs:243: updating PD client, block the tokio core
2018/09/05 02:53:02.332 INFO util.rs:406: connecting to PD endpoint: "http://127.0.0.1:2379"
2018/09/05 02:53:02.334 INFO util.rs:406: connecting to PD endpoint: "http://127.0.0.1:2379"
2018/09/05 02:53:02.337 INFO util.rs:465: connected to PD leader "http://127.0.0.1:2379"
2018/09/05 02:53:02.337 WARN util.rs:186: heartbeat sender and receiver are stale, refreshing..
2018/09/05 02:53:02.337 WARN util.rs:205: updating PD client done, spent 5.512839ms
2018/09/05 02:53:02.337 INFO client.rs:306: heartbeat sender is refreshed.
2018/09/05 02:53:02.338 INFO util.rs:84: heartbeat receiver is refreshed.
2018/09/05 02:53:02.976 coordinator.go:211: [info] coordinator: Cluster information is prepared
2018/09/05 02:53:02.976 coordinator.go:221: [info] coordinator: Run scheduler
2018/09/05 02:53:02.976 coordinator.go:236: [info] create scheduler balance-region-scheduler
2018/09/05 02:53:02.976 coordinator.go:236: [info] create scheduler balance-leader-scheduler
2018/09/05 02:53:02.976 coordinator.go:236: [info] create scheduler balance-hot-region-scheduler
2018/09/05 02:53:02.976 coordinator.go:236: [info] create scheduler label-scheduler
2018/09/05 02:53:02.977 coordinator.go:121: [info] coordinator: start patrol regions
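Note the three Limit warnings near the top of the TiKV output: the kernel parameters do not match what TiKV expects for production. On a test VM they are harmless, but they can be silenced with sysctl, using the exact values the warnings ask for (add them to /etc/sysctl.conf to persist across reboots):
# sysctl -w net.core.somaxconn=32768
# sysctl -w net.ipv4.tcp_syncookies=0
# sysctl -w vm.swappiness=0
You can also confirm that the store (store id 1 in the log above) registered with PD, e.g. through PD's HTTP API:
# curl http://127.0.0.1:2379/pd/api/v1/stores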
5. Start TiDB:
# /usr/local/tidb/tidb-server --store=tikv --path="127.0.0.1:2379" &
2018/09/05 02:55:50.231 INFO apply.rs:2171: [region 30] 31 register to apply delegates at term 6
2018/09/05 02:55:50.268 manager.go:240: [warning] [ddl] /tidb/ddl/fg/owner ownerManager 183e4581-c9b1-4aac-8a17-460689e8cdb0 isn't the owner
2018/09/05 02:55:50.268 delete_range.go:104: [info] [ddl] closing delRange session pool
2018/09/05 02:55:50.268 ddl_worker.go:112: [info] [ddl-worker 1, tp general] close DDL worker
2018/09/05 02:55:50.268 delete_range.go:104: [info] [ddl] closing delRange session pool
2018/09/05 02:55:50.268 ddl_worker.go:112: [info] [ddl-worker 2, tp add index] close DDL worker
2018/09/05 02:55:50.268 ddl.go:398: [info] [ddl] closing DDL:183e4581-c9b1-4aac-8a17-460689e8cdb0 takes time 968.756µs
2018/09/05 02:55:50.268 ddl.go:351: [info] [ddl] stop DDL:183e4581-c9b1-4aac-8a17-460689e8cdb0
2018/09/05 02:55:50.269 manager.go:251: [info] [ddl] /tidb/ddl/fg/owner ownerManager 183e4581-c9b1-4aac-8a17-460689e8cdb0 break campaign loop, revoke err <nil>
2018/09/05 02:55:50.269 domain.go:413: [info] [domain] close
2018/09/05 02:55:50.269 tidb.go:63: [info] store tikv-6597443502776081340 new domain, ddl lease 45s, stats lease 3000000000
2018/09/05 02:55:50.270 ddl.go:358: [info] [ddl] start DDL:db53a9fb-60af-4525-b338-ee4b31e274a1, run worker true
2018/09/05 02:55:50.270 ddl_worker.go:83: [info] [ddl] start delRangeManager OK, with emulator: false
2018/09/05 02:55:50.270 ddl_worker.go:83: [info] [ddl] start delRangeManager OK, with emulator: false
2018/09/05 02:55:50.270 ddl_worker.go:118: [info] [ddl-worker 4, tp add index] start DDL worker
2018/09/05 02:55:50.270 ddl_worker.go:118: [info] [ddl-worker 3, tp general] start DDL worker
2018/09/05 02:55:50.272 manager.go:275: [info] [ddl] /tidb/ddl/fg/owner ownerManager db53a9fb-60af-4525-b338-ee4b31e274a1, owner is db53a9fb-60af-4525-b338-ee4b31e274a1
2018/09/05 02:55:50.283 domain.go:119: [info] [ddl] full load InfoSchema from version 0 to 15, in 7.30164ms
2018/09/05 02:55:50.283 domain.go:316: [info] [ddl] full load and reset schema validator.
2018/09/05 02:55:50.733 gc_worker.go:131: [info] [gc worker] 59697c376680001 start.
2018/09/05 02:55:50.735 manager.go:275: [info] [stats] /tidb/stats/owner ownerManager db53a9fb-60af-4525-b338-ee4b31e274a1, owner is db53a9fb-60af-4525-b338-ee4b31e274a1
2018/09/05 02:55:50.765 gc_worker.go:235: [info] [gc worker] leaderTick on 59697c376680001: another gc job has just finished. skipped.
2018/09/05 02:55:50.766 server.go:194: [warning] Secure connection is NOT ENABLED
2018/09/05 02:55:50.766 server.go:161: [info] Server is running MySQL Protocol at [0.0.0.0:4000]
2018/09/05 02:55:50.766 http_status.go:86: [info] Listening on :10080 for status and metrics report.
2018/09/05 02:55:51.231 domain.go:669: [info] [stats] init stats info takes 499.075327ms
2018/09/05 02:55:51.832 INFO apply.rs:904: [region 4] 5 execute admin command cmd_type: CompactLog compact_log {compact_index: 117 compact_term: 6} at [term: 6, index: 119]
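The log above shows tidb-server serving the MySQL protocol on 0.0.0.0:4000 and status/metrics on :10080. A quick liveness check against the status port (the /status path is assumed from TiDB's HTTP status interface):
# curl http://127.0.0.1:10080/status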
6. Log in:
# mysql -h 127.0.0.1 -P 4000 -u root
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.7.10-TiDB-v2.1.0-rc.1-59-g6e3bbe8 MySQL Community Server (Apache License 2.0)
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MySQL [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| INFORMATION_SCHEMA |
| PERFORMANCE_SCHEMA |
| mysql |
| test |
+--------------------+
4 rows in set (0.001 sec)
MySQL [(none)]> create database wuhan;
Query OK, 0 rows affected (0.074 sec)
MySQL [(none)]>
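As a quick smoke test of the new database (the table name and columns below are only illustrative):
MySQL [(none)]> use wuhan;
MySQL [wuhan]> create table t1 (id int primary key, name varchar(20));
MySQL [wuhan]> insert into t1 values (1, 'tidb');
MySQL [wuhan]> select * from t1;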
7. Check the version:
MySQL [(none)]> select version();
+-------------------------------------+
| version() |
+-------------------------------------+
| 5.7.10-TiDB-v2.1.0-rc.1-59-g6e3bbe8 |
+-------------------------------------+
1 row in set (0.001 sec)
MySQL [(none)]> select tidb_version()\G
*************************** 1. row ***************************
tidb_version(): Release Version: v2.1.0-rc.1-59-g6e3bbe8
Git Commit Hash: 6e3bbe8e35aaebfcef9628efc8486435c984f534
Git Branch: master
UTC Build Time: 2018-09-05 04:51:18
GoVersion: go version go1.11 linux/amd64
Race Enabled: false
TiKV Min Version: 2.1.0-alpha.1-ff3dd160846b7d1aed9079c389fc188f7f5ea13e
Check Table Before Drop: false
1 row in set (0.000 sec)
Check ports and processes:
# netstat -nultp | grep -E 'pd|tikv|tidb'
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 4486/pd-server
tcp 0 0 127.0.0.1:2380 0.0.0.0:* LISTEN 4486/pd-server
tcp6 0 0 :::10080 :::* LISTEN 4680/tidb-server
tcp6 0 0 :::4000 :::* LISTEN 4680/tidb-server
tcp6 0 0 127.0.0.1:20160 :::* LISTEN 4532/tikv-server
tcp6 0 0 :::80 :::* LISTEN 856/httpd
tcp6 0 0 :::443 :::* LISTEN 856/httpd
# ps -ef | grep -E 'pd-server|tikv-server|tidb-server'
root 4486 4163 1 02:50 pts/0 00:00:44 /usr/local/tidb/pd-server --data-dir=/data/tidb/pddata/
root 4532 4163 0 02:52 pts/0 00:00:12 /usr/local/tidb/tikv-server --pd=127.0.0.1:2379 --store=/data/tidb/tikvdata/
root 4680 4163 0 02:55 pts/0 00:00:13 /usr/local/tidb/tidb-server --store=tikv --path=127.0.0.1:2379
To shut everything down, you can kill the processes:
kill -9 `ps -ef | grep -E 'pd-server|tikv-server|tidb-server'|grep -v grep | awk '{print $2}'`
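kill -9 is fine for a scratch VM, but a cleaner shutdown is to send the default SIGTERM and stop the components in reverse dependency order, tidb-server first and pd-server last, for example:
# pkill tidb-server
# pkill tikv-server
# pkill pd-server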
Summary: deploying single-node TiDB is quite simple. Plan the directories well, and the whole procedure boils down to the following steps:
#mkdir -p /data/tidb/{pddata,tikvdata}
#pd-server --data-dir=/data/tidb/pddata/ &
#tikv-server --pd="127.0.0.1:2379" --store=/data/tidb/tikvdata/ &
#tidb-server --store=tikv --path="127.0.0.1:2379" &
#mysql -h 127.0.0.1 -P 4000 -u root
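If you repeat this often, the same sequence can be wrapped in a small script (a sketch; the sleep intervals are arbitrary and only give each component time to come up before the next one connects):
#!/bin/bash
# start-tidb.sh -- start a single-node TiDB stack in dependency order
TIDB_HOME=/usr/local/tidb
mkdir -p /data/tidb/{pddata,tikvdata}
$TIDB_HOME/pd-server --data-dir=/data/tidb/pddata/ &
sleep 5   # give PD time to elect itself leader
$TIDB_HOME/tikv-server --pd="127.0.0.1:2379" --store=/data/tidb/tikvdata/ &
sleep 5   # give TiKV time to register its store with PD
$TIDB_HOME/tidb-server --store=tikv --path="127.0.0.1:2379" &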