Setting up GitLab & Jenkins

1. Set up MySQL

docker run --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=12345 -d mysql:5.7 --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci

Username/password: root/12345

The password can also be recovered with docker inspect [container id]:

"Env": [
            "MYSQL_ROOT_PASSWORD=12345",
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "GOSU_VERSION=1.7",
            "MYSQL_MAJOR=5.7",
            "MYSQL_VERSION=5.7.28-1debian9"
        ],
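If only the password itself is needed, it can be cut out of the inspect output with a small sed filter. The snippet below simulates one Env line so it can run standalone; on a live host you would pipe `docker inspect mysql` through the same filter.

```shell
# One Env line as emitted by `docker inspect` (simulated here so the filter runs standalone)
env_line='            "MYSQL_ROOT_PASSWORD=12345",'

# Keep only the value between '=' and the closing quote
password=$(printf '%s\n' "$env_line" | sed -n 's/.*MYSQL_ROOT_PASSWORD=\([^"]*\)".*/\1/p')
echo "$password"
```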

A MySQL instance started in Docker cannot be reached from remote machines unless the port is published at startup, hence the -p 3306:3306 mapping above.

git config --global user.email "794182189@qq.com"

2. Set up GitLab

Install GitLab on CentOS 7 from the official GitLab Docker image.

References: Docker Hub, GitLab Docker images
docker run --detach \
--hostname 192.168.0.3 \
--publish 443:443 --publish 80:80 --publish 8022:22 \
--env GITLAB_OMNIBUS_CONFIG="external_url 'http://192.168.0.3/'; gitlab_rails['gitlab_shell_ssh_port'] = 8022;" \
--name gitlab \
--restart always \
--volume /srv/gitlab/config:/etc/gitlab \
--volume /srv/gitlab/logs:/var/log/gitlab \
--volume /srv/gitlab/data:/var/opt/gitlab \
gitlab/gitlab-ce:latest

3. Set up Jenkins

References: jenkins, docker-jenkins, docker-jenkins-README.md

docker run -p 8080:8080 -p 50000:50000 -d --name jenkins jenkins/jenkins:lts

Initial Jenkins password (from /var/jenkins_home/secrets/initialAdminPassword): admin / c84649faf5b3470499691a621d8d90d7

Administrator account: root/12345678

Jenkinsfile:

pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                echo 'Building...'
            }
        }

        stage('Test') {
            steps {
                echo 'Testing...'
            }
        }

        stage('Deploy') {
            steps {
                echo 'Deploying...'
            }
        }
    }
}

Take advantage of Redis's built-in key expiration: when an order is placed, write the order id into Redis with a 30-minute TTL. When the key expires, check the order status and, if the order is still unpaid, handle it accordingly (e.g. cancel it).

Enabling Redis key-expiration notifications

Edit the event notification setting in the Redis configuration file redis.conf: look for the notify-keyspace-events option. If it is missing, add notify-keyspace-events Ex; if it already has a value, append Ex. The flag characters are:

K: keyspace events, published with the __keyspace@<db>__ prefix
E: keyevent events, published with the __keyevent@<db>__ prefix
g: generic, non-type-specific commands such as DEL, EXPIRE, RENAME
$: string commands
l: list commands
s: set commands
h: hash commands
z: sorted-set commands
x: expired events, fired when a key expires and is deleted
e: evicted events, fired when a key is deleted under the maxmemory policy
A: alias for "g$lshzxe", so "AKE" covers every event
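As a quick sanity check before editing redis.conf, a tiny shell helper (my own sketch, not part of Redis) can confirm that a flag string only uses the classes listed above:

```shell
# Print "ok" if every character of $1 is a valid notify-keyspace-events class,
# "invalid" otherwise (empty strings are rejected too).
check_flags() {
  case "$1" in
    ''|*[!KEg\$lshzxeA]*) echo invalid ;;
    *) echo ok ;;
  esac
}

check_flags Ex    # the setting used in this article
check_flags AKE   # every event
check_flags Exp   # typo: "p" is not a class
```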

Testing with redis-cli:

Open a redis-cli session and subscribe to key-expiration events on db 0:
127.0.0.1:6379> PSUBSCRIBE __keyevent@0__:expired
Reading messages... (press Ctrl-C to quit)
1) "psubscribe"
2) "__keyevent@0__:expired"
3) (integer) 1

In a second redis-cli session, set a key with a short TTL:
127.0.0.1:6379> setex test_key 3 test_value

Back in the first session, the expired key test_key is delivered, but the expired value test_value is not (expiration events carry only the key name):
127.0.0.1:6379> PSUBSCRIBE __keyevent@0__:expired
Reading messages... (press Ctrl-C to quit)
1) "psubscribe"
2) "__keyevent@0__:expired"
3) (integer) 1
1) "pmessage"
2) "__keyevent@0__:expired"
3) "__keyevent@0__:expired"
4) "test_key"
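Because the event carries only the key name, a common workaround (an assumption here, not something from the setup above) is to store the payload under a second, non-expiring key and encode the order id in the expiring key's name, so the listener can recover it:

```shell
# Hypothetical naming convention: the TTL key is empty, the payload lives in a data key.
order_id="20190717001"
ttl_key="order:expire:${order_id}"    # would be created with: SETEX order:expire:<id> 1800 ""
data_key="order:data:${order_id}"     # would be created with: SET order:data:<id> "<order json>"

# In the expiration listener, recover the order id from the expired key name:
expired_key="$ttl_key"
order_from_event="${expired_key#order:expire:}"
echo "$order_from_event"
```

The listener then looks up `$data_key` for the full payload before deciding whether to cancel the order.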

Using it from Spring Boot

1. Add the dependency to pom.xml:
<!-- redis -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

2. Define the configuration class RedisListenerConfig:

import com.example.wendy.controller.RedisKeyExpirationListener;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.listener.KeyExpirationEventMessageListener;
import org.springframework.data.redis.listener.RedisMessageListenerContainer;

/**
 * @Author: zy
 * @Date: 2019/7/17 18:23
 */
@Configuration
@ComponentScan("com.example.wendy.controller")
public class RedisListenerConfig {

    @Bean
    RedisMessageListenerContainer listenerContainer(RedisConnectionFactory connectionFactory) {
        RedisMessageListenerContainer listenerContainer = new RedisMessageListenerContainer();
        listenerContainer.setConnectionFactory(connectionFactory);
        return listenerContainer;
    }

    @Bean
    KeyExpirationEventMessageListener redisKeyExpirationListener(RedisMessageListenerContainer listenerContainer) {
        return new RedisKeyExpirationListener(listenerContainer);
    }

}

3. Define the listener by extending KeyExpirationEventMessageListener. Its source shows that it subscribes to the expiration events of every db, i.e. the pattern __keyevent@*__:expired:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.data.redis.connection.Message;
import org.springframework.data.redis.listener.KeyExpirationEventMessageListener;
import org.springframework.data.redis.listener.RedisMessageListenerContainer;

/**
 * @Author: zy
 * Listens for the expiration events of every db: __keyevent@*__:expired
 * @Date: 2019/7/17/0017 17:56
 */
public class RedisKeyExpirationListener extends KeyExpirationEventMessageListener {

    private Logger logger = LoggerFactory.getLogger(this.getClass());

    public RedisKeyExpirationListener(RedisMessageListenerContainer listenerContainer) {
        super(listenerContainer);
    }

    /**
     * Handle Redis key-expiration events.
     *
     * @param message the expiration event; its body is the expired key
     * @param pattern the subscription pattern that matched
     */
    @Override
    public void onMessage(Message message, byte[] pattern) {
        // The message body is the key that just expired
        String expiredKey = message.toString();

        System.out.println("Key expired: " + expiredKey);
    }
}

How to dedicate Kubernetes nodes to an Elasticsearch cluster

1. Taint and label the target nodes:
kubectl taint node node03 baas.yonghui.cn/elasticsearch-cluster=true:NoSchedule
kubectl taint node node04 baas.yonghui.cn/elasticsearch-cluster=true:NoSchedule
kubectl taint node node05 baas.yonghui.cn/elasticsearch-cluster=true:NoSchedule

kubectl label node node03 baas.yonghui.cn/elasticsearch-cluster=true
kubectl label node node04 baas.yonghui.cn/elasticsearch-cluster=true
kubectl label node node05 baas.yonghui.cn/elasticsearch-cluster=true
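The six commands above follow a single pattern, so for longer node lists they can be generated with a loop. This dry run only prints the commands; drop the echo to execute them:

```shell
for n in node03 node04 node05; do
  echo "kubectl taint node $n baas.yonghui.cn/elasticsearch-cluster=true:NoSchedule"
  echo "kubectl label node $n baas.yonghui.cn/elasticsearch-cluster=true"
done
```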

2. Deploy the Elasticsearch cluster onto the dedicated nodes:
cd backend/elasticsearch/deploy/kubernetes-elasticsearch-cluster/stateful/
kubectl create -f ../es-discovery-svc.yaml
kubectl create -f ../es-svc.yaml
kubectl create -f ../es-master.yaml
kubectl create -f ../es-ingest-svc.yaml
kubectl create -f ../es-ingest.yaml
kubectl create -f es-data-stateful.yaml


Important notes:

(1) The Elasticsearch pods need an init-container running in privileged mode so it can set some VM options. For that, the kubelet must run with --allow-privileged, otherwise the init-container will fail to start.

(2) By default, ES_JAVA_OPTS is set to -Xms256m -Xmx256m. This is a very low value, but many users (e.g. minikube users) were seeing pods killed for lack of host memory. It can be changed in the deployment descriptors available in the repository.

(3) Currently the pod descriptors use emptyDir for storing data in each data-node container. This keeps things simple and should be adapted to one's storage needs.

(4) The stateful directory contains an example that deploys the data nodes as a StatefulSet, using volumeClaimTemplates to provision persistent storage for each pod.

(5) By default, PROCESSORS is set to 1. For some deployments this may not be enough, especially at startup. Adjust resources.limits.cpu and/or livenessProbe accordingly if required; note that resources.limits.cpu must be an integer.

3. Remove the Elasticsearch cluster from the dedicated nodes:
cd backend/elasticsearch/deploy/kubernetes-elasticsearch-cluster/stateful/
kubectl delete -f ../es-discovery-svc.yaml
kubectl delete -f ../es-svc.yaml
kubectl delete -f ../es-master.yaml
kubectl delete -f ../es-ingest-svc.yaml
kubectl delete -f ../es-ingest.yaml
kubectl delete -f es-data-stateful.yaml

4. Remove the taints and labels from the nodes:
kubectl taint node node03 baas.yonghui.cn/elasticsearch-cluster-
kubectl taint node node04 baas.yonghui.cn/elasticsearch-cluster-
kubectl taint node node05 baas.yonghui.cn/elasticsearch-cluster-

kubectl label node node03 baas.yonghui.cn/elasticsearch-cluster-
kubectl label node node04 baas.yonghui.cn/elasticsearch-cluster-
kubectl label node node05 baas.yonghui.cn/elasticsearch-cluster-

References:

https://cloud.tencent.com/info/21f27eb131873f979d6275f085dfabdc.html

Bootstrapping Kubernetes

I. Preparation

1. Check the OS version:

cat /etc/redhat-release

2. Update the system:

yum update -y

3. Set the hostname:

hostnamectl set-hostname master

4. Add host entries on every machine:

vi /etc/hosts

10.0.90.217 master
10.0.90.218 node01
10.0.90.219 node02

5. Disable the firewall:

systemctl disable firewalld.service
systemctl stop firewalld.service

6. Disable NetworkManager:

systemctl disable NetworkManager.service
systemctl stop NetworkManager.service

7. Check the OS version again:

cat /etc/redhat-release

8. Disable SELinux:

getenforce
setenforce 0

To disable it permanently:

vi /etc/selinux/config
SELINUX=disabled

9. Configure the bridge netfilter sysctls.

  • Disabling SELinux by running setenforce 0 is required to allow containers to access the host filesystem, which is required by pod networks for example. You have to do this until SELinux support is improved in the kubelet.

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

  • Reload the sysctl settings from all system configuration files:

sysctl --system

10. Turn off all swap devices temporarily:

swapoff -a

  • To disable swap permanently, comment out the swap entry in /etc/fstab:

vi /etc/fstab

# /dev/mapper/sysvg-swaplv swap
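Instead of editing /etc/fstab by hand, the swap line can be commented out with the usual sed one-liner. The sketch below runs it as a dry run on a sample line; on a real host the equivalent would be `sed -i '/\sswap\s/s/^[^#]/#&/' /etc/fstab` (GNU sed):

```shell
# Sample fstab entry; the filter prefixes '#' to any uncommented line mentioning swap
fstab_line='/dev/mapper/sysvg-swaplv swap swap defaults 0 0'
echo "$fstab_line" | sed '/\sswap\s/s/^[^#]/#&/'
```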

II. Install Docker from yum

1. Configure the repository:

vi /etc/yum.repos.d/docker-main.repo

[docker-main-repo]
name=Docker main Repository
baseurl=http://mirrors.aliyun.com/docker-engine/yum/repo/main/centos/7
enabled=1
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/docker-engine/yum/gpg

2. Query the versions available in the repo:

yum list docker-engine-selinux --showduplicates | sort -r

3. Install Docker:

yum install -y docker-engine-selinux-17.03.1.ce-1.el7.centos

Query the docker-engine versions:

yum list docker-engine --showduplicates | sort -r

Install:

yum install -y docker-engine-17.03.1.ce-1.el7.centos

4. Start Docker and enable it at boot:

systemctl enable docker.service
systemctl start docker.service

III. Install Kubernetes

1. Configure the repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Check the available Kubernetes versions:

yum list kubelet --showduplicates | sort -r
yum list kubeadm --showduplicates | sort -r
yum list kubectl --showduplicates | sort -r

3. Install Kubernetes:

yum install -y kubelet-1.11.0-0 kubeadm-1.11.0-0 kubectl-1.11.0-0

4. Start the kubelet and enable it at boot:

systemctl enable kubelet.service
systemctl start kubelet.service

5. Configure the master node (change advertiseAddress to the master's IP):

vi /kubeadm/kubeadm-master.config

apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.0
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
api:
  advertiseAddress: 10.0.90.217

controllerManagerExtraArgs:
  node-monitor-grace-period: 10s
  pod-eviction-timeout: 10s

networking:
  podSubnet: 10.211.0.0/16
  serviceSubnet: 10.96.0.0/16

kubeProxy:
  config:
    mode: iptables

Check the kubelet drop-in, then set the extra kubelet arguments:

cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

vi /etc/sysconfig/kubelet

KUBELET_EXTRA_ARGS="--cgroup-driver=cgroupfs --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"

6. Restart the kubelet:

systemctl daemon-reload
systemctl restart kubelet.service

7. Pull the images:

kubeadm config images pull --config kubeadm-master.config

  • If that cannot run for whatever reason, pull the images manually on every node:

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.11.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.1.3

  • Also on every node:

docker pull quay.io/calico/typha:v0.7.4
docker pull quay.io/calico/node:v3.1.3
docker pull quay.io/calico/cni:v3.1.3

  • Run the same pulls on the remaining masters.

8. Initialize the cluster:

kubeadm init --config kubeadm/kubeadm-master.config

  • Save the output of kubeadm init; the token and discovery hash at the end will be needed when joining nodes:
I0910 02:21:32.639267  127417 feature_gate.go:230] feature gates: &{map[]}
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0910 02:21:32.662821 127417 kernel_validator.go:81] Validating kernel version
I0910 02:21:32.662876 127417 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [server01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.170.128]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [server01 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [server01 localhost] and IPs [172.16.170.128 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 37.004312 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node server01 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node server01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "server01" as an annotation
[bootstraptoken] using token: rbwvyz.pv9yna3slu6y0gg4
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 172.16.170.128:6443 --token rbwvyz.pv9yna3slu6y0gg4 --discovery-token-ca-cert-hash sha256:884bad27ceb666fdd0ee58dc291a7d361348b449e2d55a1d29ab09386df07265

9. Copy the Calico manifests to the master:

scp calico.yaml root@10.0.90.217:~
scp rbac-kdd.yaml root@10.0.90.217:~

10. Create the kubeconfig (run in the kubeadm directory):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

11. Create the pod network:

kubectl create -f kubeadm/calico/rbac-kdd.yaml
kubectl create -f kubeadm/calico/calico.yaml

12. Join the worker nodes:

kubeadm join 172.16.170.128:6443 --token rbwvyz.pv9yna3slu6y0gg4 --discovery-token-ca-cert-hash sha256:884bad27ceb666fdd0ee58dc291a7d361348b449e2d55a1d29ab09386df07265

Adding nodes after a kubeadm-generated token has expired

  • A token is valid for 24 hours by default; once expired it can no longer be used, so generate a new one:

kubeadm token create

  • Get the sha256 hash of the CA certificate:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
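With the fresh token and the CA hash in hand, the join command for the new node is assembled exactly like the one printed by kubeadm init. The values below are the ones from the init output earlier; substitute your own:

```shell
apiserver="172.16.170.128:6443"
token="rbwvyz.pv9yna3slu6y0gg4"   # output of: kubeadm token create
ca_hash="sha256:884bad27ceb666fdd0ee58dc291a7d361348b449e2d55a1d29ab09386df07265"   # output of the openssl pipeline

echo "kubeadm join ${apiserver} --token ${token} --discovery-token-ca-cert-hash ${ca_hash}"
```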

13. Install NFS (reusing the master node):

yum install -y nfs-utils rpcbind

  • vi /etc/exports

/data 10.0.90.217(rw,async,no_root_squash)

14. Enable and start the services:

[root@server ~]# systemctl start rpcbind
[root@server ~]# systemctl start nfs-server
[root@server ~]# systemctl enable nfs-server
[root@server ~]# systemctl enable rpcbind

On the worker nodes:

yum install -y nfs-utils

15. Mount test:

showmount -e 172.16.0.250
mount -t nfs 10.0.90.217:/data /mnt

  • Check the mount:

df -h

  • Unmount:

umount 10.0.90.217:/data

16. Add NFS dynamic-provisioning support

  • Copy the sources from host 154:

scp external-storage.tar.gz root@10.0.90.217:~

(Kubernetes has no built-in NFS provisioner; external-storage.tar.gz provides one.)

  • Unpack:

tar -zxvf external-storage.tar.gz

  • Enter the deploy directory:

cd external-storage/nfs-client/deploy

  • Apply the manifests:

kubectl create -f rbac.yaml
kubectl create -f deployment.yaml
kubectl create -f class.yaml

17. Configure Docker

  • cat /etc/docker/daemon.json
    (pushed from host 154 to every node)

scp /etc/docker/daemon.json root@10.0.90.218:/etc/docker/

  • On the master:

kubectl create -f class.yaml
kubectl create -f deployment.yaml

18. Set up CephFS:

cd /sources/external-storage/ceph/cephfs
kubectl create -f deploy/rbac
cd example
kubectl create -f class.yaml

VII. Cleaning up a broken Docker environment (updated 2018-11-06)
rm -rf /etc/yum.repos.d/docker-ce.repo
yum remove -y docker-ce docker-ce-cli container-selinux
yum clean all
rm -rf /var/cache/yum/
yum makecache fast
rm -rf /var/lib/docker/

VIII. Tuning host parameters for Kubernetes

1. Making the ulimit -n change permanent:

cat <<EOF >  /etc/security/limits.d/k8s.conf
* hard nofile 65536
* soft nofile 65536
EOF

References:
https://www.aliyun.com/jiaocheng/208647.html
https://www.aliyun.com/jiaocheng/124954.html
https://www.cnblogs.com/xuexiaohun/articles/6233430.html
https://my.oschina.net/jxcdwangtao/blog/1621106

2. Kernel parameters adjusted via sysctl:

Here is the complete parameter list:

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

vm.swappiness = 0
net.ipv4.ip_forward = 1

net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_tw_recycle = 1

net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_tw_reuse = 1
EOF

sysctl --system

Or, applying only the parameters relevant to the performance tests:
sysctl -w net.ipv4.tcp_timestamps=1
sysctl -w net.ipv4.tcp_tw_recycle=1
sysctl -w net.ipv4.tcp_tw_reuse=1

IX. Timezone settings for Docker containers

Using CentOS 7.5.1804 as the container host, here is how to set the timezone correctly both in plain Docker containers and in Kubernetes pods.

Set the correct timezone on the node (CentOS 7.5.1804):

timedatectl list-timezones                # list available timezones
timedatectl set-timezone Asia/Shanghai    # set the timezone to Asia/Shanghai
timedatectl status | grep 'Time zone'     # show the current timezone

Note: this sets the correct timezone on the machine hosting the Docker containers and Kubernetes pods, and must be done first.

Syncing a Docker container's timezone with the node

docker run -it -v /etc/localtime:/etc/localtime --rm alpine:3.8 /bin/sh

Note the -v parameter in the command above.
Verified to work the same way with alpine:3.8, centos:7.5.1804 and ubuntu:16.04.

Syncing a Kubernetes pod's timezone with the node

apiVersion: v1
kind: Pod
metadata:
  name: alpine
  labels:
    app: alpine
spec:
  containers:
  - image: alpine:3.8
    imagePullPolicy: IfNotPresent
    name: alpine
    command:
    - sleep
    - "3600"
    volumeMounts:
    - name: tz-config
      mountPath: /etc/localtime
      readOnly: true
  restartPolicy: Always
  volumes:
  - name: tz-config
    hostPath:
      path: /etc/localtime

Note the volumes and volumeMounts sections of the YAML above.
Verified to work the same way with alpine:3.8, centos:7.5.1804 and ubuntu:16.04.

Baking the timezone into the image (works for both Docker containers and Kubernetes pods)

This approach changes the default timezone at image-build time.

Alpine Linux 3.8 Dockerfile example:

FROM alpine:3.8

MAINTAINER wangxin_0611@126.com

RUN apk --no-cache add tzdata && \
    ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

Ubuntu Linux 16.04 Dockerfile example:

FROM ubuntu:16.04

MAINTAINER wangxin_0611@126.com

RUN apt-get update && \
    apt-get install -y tzdata && \
    ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

CentOS Linux 7.5.1804 Dockerfile example:

FROM centos:7.5.1804

MAINTAINER wangxin_0611@126.com

RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
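To check that a timezone setting actually took effect, compare the UTC offset; the same one-liner works on the node and inside a container (assuming the tzdata files are installed):

```shell
# Asia/Shanghai is UTC+8 year-round (no DST), so the offset must be +0800
TZ=Asia/Shanghai date +%z
```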

References:

Changing Docker's default timezone

[Kubernetes] Keeping a pod's timezone in sync with its node

Chinese Mahjong: the world's earliest blockchain project

(Reposted) Chinese Mahjong: the world's earliest blockchain project

Player A initiates a request: "I want to play mahjong, let's set up a game." That is equivalent to creating a block, and the block is broadcast to B, C, D and everyone else watching.

During the game the four players keep drawing and discarding tiles, which can be understood as mining. What do they mine with? The mahjong table is the mining rig and the four players are the miners. Any miner who assembles the right winning combination out of the 144 tiles wins the hand; think of the 144 tiles as a string of hash values, and the effort of reaching a winning hand as hash power. Winning the hand means the correct hash has been found and a reward is due: each player pays C the corresponding chips. On a blockchain, that reward would be bitcoin or some other cryptocurrency.

Why do the other three pay C voluntarily? Because everyone automatically reached consensus: C really did win, and everyone recorded the result, including the family members betting on the side. Going back on it is not an option; otherwise word gets out that your character is bad and nobody will ever play with you again, and in this game the circle matters.

Looking closer, when consensus is reached no intermediary or third party steps in to rule that C has won, and the rewards are handed to C directly rather than through a middleman: every transaction is peer to peer, which is decentralization. The players (miners) each record the results of round one: C won big with a self-drawn Thirteen Orphans, and B claimed a kong on A's East Wind. Once the record is complete, a full block has been produced. Remember, though, this is only round one, a single node on the whole chain; after the eight rounds mentioned at the start there are eight nodes (blocks), and the eight blocks linked together form a complete ledger. That is the blockchain. Since every player keeps a copy of the ledger, it is a distributed ledger, and its purpose is to stop anyone from tampering with the records: at the end of the night, who won and who lost is beyond dispute.

Stopping VS Code from wrapping Vue tag attributes when formatting

Platform: Windows

VS Code version: v1.31.1

I. Formatting Vue files

Open File > Preferences > Settings in VS Code.

Find the Vetur section and set vetur.format.defaultFormatter.html to js-beautify-html.

This requires the Vetur extension to be installed.

II. Keeping tag attributes on one line

1. Open "Edit in settings.json".

2. Add the following:

{
    "workbench.colorTheme": "Visual Studio Light",
    "workbench.startupEditor": "newUntitledFile",
    "explorer.confirmDelete": false,
    "vetur.format.defaultFormatterOptions": {
        "js-beautify-html": {
            "wrap_attributes": "auto"
        },
        "prettyhtml": {
            "printWidth": 100,
            "singleQuote": false,
            "wrapAttributes": false,
            "sortAttributes": false
        }
    }
}

Reference: Vetur

Implementing logrotate for the systemd journal

1. Use journalctl, the client tool provided by systemd-journald:

journalctl --vacuum-size=10M
journalctl --vacuum-time=1days


From man journalctl:
--vacuum-size=, --vacuum-time=
Removes archived journal files until the disk space they use falls below the specified size (specified with the usual "K", "M", "G", "T" suffixes), or
all journal files contain no data older than the specified timespan (specified with the usual "s", "min", "h", "days", "months", "weeks", "years"
suffixes). Note that running --vacuum-size= has only indirect effect on the output shown by --disk-usage as the latter includes active journal files,
while the former only operates on archived journal files. --vacuum-size= and --vacuum-time= may be combined in a single invocation to enforce both a
size and time limit on the archived journal files.


References:

http://blog.51cto.com/molewan/2043336
https://www.lulinux.com/archives/3135
https://blog.csdn.net/zstack_org/article/details/56274966
https://hk.saowen.com/a/e87c9aea9fda48e01d04a56a0bf87b31917eaa1be7ed8ef14df8c805d2fc54b1
https://blog.selectel.com/managing-logging-systemd/
https://www.digitalocean.com/community/tutorials/how-to-use-journalctl-to-view-and-manipulate-systemd-logs
https://huataihuang.gitbooks.io/cloud-atlas/os/linux/redhat/system_administration/systemd/systemd_clear_journalctl.html
http://www.jinbuguo.com/systemd/journalctl.html
https://www.kancloud.cn/wizardforcel/vbird-linux-basic-4e/152347

2. Cap the disk space systemd-journald may use, via its configuration file /etc/systemd/journald.conf (restart the service afterwards with systemctl restart systemd-journald):

...
SystemMaxUse=50M
...


References:

https://huataihuang.gitbooks.io/cloud-atlas/os/linux/redhat/system_administration/systemd/systemd_clear_journalctl.html
https://www.digitalocean.com/community/tutorials/how-to-use-journalctl-to-view-and-manipulate-systemd-logs
http://os.51cto.com/art/201405/440886.htm
Passing a list parameter through Swagger

The endpoint:
@ApiOperation(value = "修正履约率")
@GetMapping(value = "/correctPerformance")
R<Boolean> correctPerformance(@RequestParam(value = "orderList") List<String> orderList);

Add the @RequestParam(value = "orderList") annotation.
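With @RequestParam binding a List<String>, Swagger (like any HTTP client) sends the parameter repeated once per element. The loop below builds the resulting query string; the path comes from the snippet above, the order ids are made up:

```shell
base="/correctPerformance"
orders="A001 A002 A003"   # hypothetical order ids

# Repeat orderList=<id> once per element, joined with '&'
qs=""
for o in $orders; do
  qs="${qs:+$qs&}orderList=$o"
done
echo "${base}?${qs}"
```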