Background: 
We deployed an application in a Kubernetes cluster and load-tested it with JMeter at roughly 300 requests per second (about 18,000 requests per minute collected in Elasticsearch). During the test, errors showed up in nginx's error log.
CPU and memory were nowhere near saturated, though. A web search turned up a blog post describing an almost identical environment, where php-fpm also listens on a unix socket:
Reference: http://www.bubuko.com/infodetail-3600189.html
Fixing it: 
Raise net.core.somaxconn 
First check the value inside the nginx-php container:
bash-5.0# cat /proc/sys/net/core/somaxconn
128
  
Then check somaxconn on a randomly picked worker node:
root@ap-shanghai-k8s-node-1:~# cat /proc/sys/net/core/somaxconn
32768
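The gap between 128 and 32768 matters because the kernel silently caps every `listen()` backlog at `net.core.somaxconn`. A small sketch of the effect (assuming a Linux host with `/proc` mounted, and nginx's default Linux backlog of 511):

```shell
#!/bin/sh
# net.core.somaxconn caps the accept-queue length of any listening socket.
# With the container default of 128, nginx's implicit backlog of 511 is
# silently truncated to 128, and overflow connections are dropped under load.
somaxconn=$(cat /proc/sys/net/core/somaxconn)
requested=511   # nginx's default listen backlog on Linux
if [ "$requested" -lt "$somaxconn" ]; then
    effective=$requested
else
    effective=$somaxconn
fi
echo "somaxconn=$somaxconn requested=$requested effective=$effective"
```

So raising somaxconn alone is not enough if the application asks for a smaller backlog, and a large application backlog is useless while somaxconn stays at 128.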
  
Note: this is a TKE cluster, and all parameters are at their defaults; nothing had been tuned. Here is the application's original manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: paper-miniprogram
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: paper-miniprogram
  template:
    metadata:
      labels:
        app: paper-miniprogram
    spec:
      containers:
        - name: paper-miniprogram
          image: ccr.ccs.tencentyun.com/xxxx/paper-miniprogram:{data}
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "1024M"
              cpu: "1000m"
            limits:
              memory: "1024M"
              cpu: "1000m" 
      imagePullSecrets:                                              
        - name: tencent
---
apiVersion: v1
kind: Service
metadata:
  name: paper-miniprogram
  labels:
    app: paper-miniprogram
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: paper-miniprogram
  
The change: add an initContainers entry that raises somaxconn before the app container starts:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: paper-miniprogram
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: paper-miniprogram
  template:
    metadata:
      labels:
        app: paper-miniprogram
    spec:
      containers:
        - name: paper-miniprogram
          image: ccr.ccs.tencentyun.com/xxxx/paper-miniprogram:{data}
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "1024M"
              cpu: "1000m"
            limits:
              memory: "1024M"
              cpu: "1000m" 
      initContainers:
      - image: busybox
        command:
        - sh
        - -c
        - echo 1000 > /proc/sys/net/core/somaxconn
        imagePullPolicy: Always
        name: setsysctl
        securityContext:
          privileged: true
      imagePullSecrets:                                              
        - name: tencent
---
apiVersion: v1
kind: Service
metadata:
  name: paper-miniprogram
  labels:
    app: paper-miniprogram
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: paper-miniprogram
  
Raising php-fpm's listen.backlog 
First check the kernel's net.ipv4.tcp_max_syn_backlog:

cat /proc/sys/net/ipv4/tcp_max_syn_backlog
# or list everything backlog-related:
sysctl -a | grep backlog
  
Then look at php-fpm's listen.backlog: it defaults to 511, so I left it alone for now. Note that if you do raise it, you would also need privileged mode to raise net.ipv4.tcp_max_syn_backlog inside the container to match.
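For reference, the directive lives in php-fpm's pool configuration (the path is image-dependent; /usr/local/etc/php-fpm.d/www.conf is typical for the official php images, and the socket path below is an illustrative assumption):

```ini
; www.conf — php-fpm pool settings (path varies by image)
listen = /run/php-fpm.sock
; Backlog passed to listen(); defaults to 511 here.
; It is still capped by the kernel's net.core.somaxconn.
listen.backlog = 511
```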
The official docs on sysctls 
Kubernetes documents sysctl usage here: https://kubernetes.io/zh/docs/tasks/administer-cluster/sysctl-cluster/
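That page also describes a less invasive alternative to the privileged init container: setting namespaced sysctls through the pod's securityContext. A sketch of what that would look like for this deployment (note that net.core.somaxconn is namespaced but classified as unsafe, so each node's kubelet must first allow it via `--allowed-unsafe-sysctls=net.core.somaxconn`):

```yaml
# Pod template fragment: set the sysctl without privileged mode
spec:
  securityContext:
    sysctls:
    - name: net.core.somaxconn
      value: "1000"
```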
The downside of this approach: 
Privileged mode brings its own security problems; I'd rather not run pods in privileged mode.
What I think is a better approach: 
- The Grafana dashboards show pod resource utilization is not actually that high, so tune the resource limits sensibly first.
- Enable HPA (horizontal pod autoscaling).
- All in all, I'd rather keep the default net.core.somaxconn=128 and rely on a higher replica count to handle the load; that is also more in line with how containers are meant to be used.
- The key point: the widespread assumption that simply adding resources raises concurrency is wrong; parameter tuning matters more.
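As a sketch of the HPA route (autoscaling/v2 API; the replica bounds and the 70% CPU target are arbitrary examples, not values from the original test):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: paper-miniprogram
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: paper-miniprogram
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

Because the deployment already sets CPU requests, the utilization target has a well-defined baseline to scale against.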
On php-fpm: unix socket vs TCP 
See this Zhihu article: https://zhuanlan.zhihu.com/p/83958307
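If the unix socket turns out to be the bottleneck, switching to TCP amounts to one nginx change (the socket path and port here are illustrative assumptions, and php-fpm's `listen` directive must be changed to match):

```nginx
# nginx: hand PHP requests to php-fpm over loopback TCP instead of a unix socket
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;   # was: unix:/run/php-fpm.sock
    fastcgi_index index.php;
    include fastcgi_params;
}
```

The usual trade-off: a unix socket is slightly faster, but loopback TCP tends to degrade more gracefully under very high concurrency.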
Some configurations for reference: 
https://github.com/gaoxt/blog/issues/9
https://blog.csdn.net/pcyph/article/details/46513521