
istio-pod-network-controller's Introduction

This project has been superseded by https://github.com/istio/cni

Istio Pod Network Controller

Controller to manage Istio Pod Network

Overview

This controller emulates the functionality of the Istio proxy init container, modifying a pod's iptables rules so that the Istio proxy sidecar properly intercepts connections.

The primary benefit of this controller is that it helps alleviate a security concern with Istio, which requires pods within the mesh to run as privileged. Instead, privileged actions are performed by the controller rather than by pods deployed by regular users. On OpenShift, this avoids the privileged Security Context Constraint in favor of a more restrictive policy, such as nonroot.

How this works

This controller is deployed as a DaemonSet that runs on each node. Each pod deployed by the DaemonSet manages the pods scheduled to its own node.

As new pods destined for the Istio mesh are created, the controller modifies iptables rules on the node so that the pod can join the mesh. Finally, the controller annotates the pod to indicate that it has been successfully initialized.

A pod will be initialized if its namespace is annotated with istio-pod-network-controller/initialize: true or if the pod itself carries that annotation. The logic works the same way as the istio-injection: enabled label.
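The namespace-or-pod gating described above can be sketched in a few lines of Go. The maps stand in for the ObjectMeta.Annotations of the namespace and the pod, and the function name is hypothetical rather than the controller's actual code:

```go
package main

import "fmt"

const initializeAnnotation = "istio-pod-network-controller/initialize"

// shouldInitialize reports whether a pod should be initialized: either
// its namespace or the pod itself must carry the
// istio-pod-network-controller/initialize: "true" annotation.
func shouldInitialize(nsAnnotations, podAnnotations map[string]string) bool {
	return nsAnnotations[initializeAnnotation] == "true" ||
		podAnnotations[initializeAnnotation] == "true"
}

func main() {
	ns := map[string]string{initializeAnnotation: "true"}
	pod := map[string]string{}
	fmt.Println(shouldInitialize(ns, pod)) // prints true: namespace-level opt-in
}
```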

Installation on Kubernetes

Starting Kubernetes

If you don't have a Kubernetes cluster available, run this command to start a minikube instance large enough to host Istio:

minikube start --memory=8192 --cpus=2 --kubernetes-version=v1.10.0 \
    --extra-config=controller-manager.cluster-signing-cert-file="/var/lib/localkube/certs/ca.crt" \
    --extra-config=controller-manager.cluster-signing-key-file="/var/lib/localkube/certs/ca.key"

If you want to run minikube with the CRI-O container runtime, run the following:

minikube start --memory=8192 --cpus=2 --kubernetes-version=v1.10.0 \
    --extra-config=controller-manager.cluster-signing-cert-file="/var/lib/localkube/certs/ca.crt" \
    --extra-config=controller-manager.cluster-signing-key-file="/var/lib/localkube/certs/ca.key" \
    --network-plugin=cni \
    --container-runtime=cri-o \
    --bootstrapper=kubeadm

Install Istio

Run the following to install Istio:

kubectl create namespace istio-system
kubectl apply -f examples/istio-demo.yaml -n istio-system

Install istio-pod-network-controller

Run the following to install istio-pod-network-controller:

helm template -n istio-pod-network-controller ./chart/istio-pod-network-controller | kubectl apply -f -

If you are using CRI-O, run the following:

helm template -n istio-pod-network-controller --set containerRuntime=crio ./chart/istio-pod-network-controller | kubectl apply -f -

Testing with automatic sidecar injection

Execute the following commands:

kubectl create namespace bookinfo
kubectl label namespace bookinfo istio-injection=enabled
kubectl annotate namespace bookinfo istio-pod-network-controller/initialize=true
kubectl apply -f examples/bookinfo.yaml -n bookinfo

Installation on OpenShift

Starting OpenShift

If you don't have an OpenShift cluster available, run this command to start a minishift instance large enough to host Istio:

minishift start --ocp-tag=v3.9.40 --vm-driver=kvm \
    --cpus=2 --memory=8192 --skip-registration

Install Istio

oc adm new-project istio-system --node-selector=""
oc adm policy add-scc-to-user anyuid -z istio-ingress-service-account -n istio-system
oc adm policy add-scc-to-user anyuid -z default -n istio-system
oc adm policy add-scc-to-user anyuid -z prometheus -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-egressgateway-service-account -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-citadel-service-account -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-ingressgateway-service-account -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-cleanup-old-ca-service-account -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-mixer-post-install-account -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-mixer-service-account -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-pilot-service-account -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-sidecar-injector-service-account -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-galley-service-account -n istio-system
oc apply -f examples/istio-demo.yaml -n istio-system
oc expose svc istio-ingressgateway -n istio-system
oc expose svc servicegraph -n istio-system
oc expose svc grafana -n istio-system
oc expose svc prometheus -n istio-system
oc expose svc tracing -n istio-system

Install istio-pod-network-controller

The istio-pod-network-controller is to be installed in the istio-system namespace along with the other Istio components.

To install the istio-pod-network-controller, execute the following commands:

helm template -n istio-pod-network-controller --set kubernetesDistribution=OpenShift ./chart/istio-pod-network-controller | oc apply -f -

Testing with the bookinfo Application

To demonstrate the functionality of the istio-pod-network-controller, let's use the classic bookinfo application.

Testing with manual sidecar injection

Execute the following commands:

oc new-project bookinfo
oc annotate namespace bookinfo istio-pod-network-controller/initialize=true
oc adm policy add-scc-to-user anyuid -z default -n bookinfo
oc apply -f <(istioctl kube-inject -f examples/bookinfo.yaml) -n bookinfo
oc expose svc productpage -n bookinfo

Building

Instructions for building this project can be found here

istio-pod-network-controller's People

Contributors

raffaelespazzoli, sabre1041


istio-pod-network-controller's Issues

Can't use `traffic.sidecar.istio.io/includeOutboundIPRanges` and `traffic.sidecar.istio.io/excludeOutboundIPRanges` options.

Issue:

To restrict Istio service mesh external traffic, there are options such as traffic.sidecar.istio.io/includeOutboundIPRanges and traffic.sidecar.istio.io/excludeOutboundIPRanges. The problem with istio-pod-network-controller is that it reads these values from labels rather than annotations, and label values cannot contain the / character.

https://github.com/sabre1041/istio-pod-network-controller/blob/master/pkg/handler/handler.go#L290

func getIncludedOutboundCidrs(pod *corev1.Pod) string {
	if includeCidrs, ok := pod.ObjectMeta.Labels[IncludeCidrsAnnotation]; ok {
		return includeCidrs
	} else {
		return "*"
	}
}

Steps to reproduce this issue

  • Create a deployment with one of the labels.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: debug-deploy
spec:
  template:
    metadata:
      annotations:
        # Change the following line to use the arn of interest
        iam.amazonaws.com/role:  <arn-role>
      labels:
        traffic.sidecar.istio.io/includeOutboundIPRanges: "<cird-ranges>[100.72.0.0./13,100.86.0.0/11]"
        app: debug-app
    spec:
      containers:
      - name: debug-app
        image: debug-app:latest
        command: ["bash", "-c", "sleep infinity"]
  • Apply the above deployment definition, then you get an error
The Deployment "debug-tools" is invalid: 
* metadata.labels: Invalid value: "100.72.0.0./13,100.86.0.0/11": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue',  or 'my_value',  or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')
* spec.selector.matchLabels: Invalid value: "100.72.0.0./13,100.86.0.0/11": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue',  or 'my_value',  or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')
* spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"traffic.sidecar.istio.io/includeOutboundIPRanges":"100.72.0.0./13,100.86.0.0/11", "app":"debug-tools"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: invalid label selector.

Did I miss something here, or is this a bug?

I am happy to raise a PR to read these values from annotations rather than labels.
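A hypothetical sketch of that change, using a plain map in place of pod.ObjectMeta.Annotations; annotation values, unlike label values, may legally contain the / character. This illustrates the suggested fix, not a patch from the repository:

```go
package main

import "fmt"

const includeCidrsAnnotation = "traffic.sidecar.istio.io/includeOutboundIPRanges"

// getIncludedOutboundCidrs reads the CIDR list from the pod's
// annotations instead of its labels, falling back to "*" (capture all
// outbound traffic) when the annotation is absent.
func getIncludedOutboundCidrs(annotations map[string]string) string {
	if includeCidrs, ok := annotations[includeCidrsAnnotation]; ok {
		return includeCidrs
	}
	return "*"
}

func main() {
	fmt.Println(getIncludedOutboundCidrs(map[string]string{
		includeCidrsAnnotation: "100.72.0.0/13,100.86.0.0/11",
	}))
}
```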

Failed to get pidID : unable to find pod main pid

On OpenShift 3.11 alpha, the pod-network-controller throws this error when a workload is deployed:

time="2018-10-10T10:09:41Z" level=info msg="Go Version: go1.9.4"
time="2018-10-10T10:09:41Z" level=info msg="Go OS/Arch: linux/amd64"
time="2018-10-10T10:09:41Z" level=info msg="operator-sdk Version: 0.0.5+git"
time="2018-10-10T10:09:41Z" level=info msg="Managing Pods Running on Node: ns536048"
time="2018-10-10T10:10:55Z" level=error msg="Failed to get pidID : unable to find pod main pid"
time="2018-10-10T10:10:55Z" level=error msg="Failed to process pod : unable to find pod main pid"
time="2018-10-10T10:10:55Z" level=error msg="error syncing key (test/test-1-5zvts): unable to find pod main pid"
time="2018-10-10T10:10:55Z" level=error msg="Failed to get pidID : unable to find pod main pid"
time="2018-10-10T10:10:55Z" level=error msg="Failed to process pod : unable to find pod main pid"
time="2018-10-10T10:10:55Z" level=error msg="error syncing key (test/test-1-5zvts): unable to find pod main pid"
time="2018-10-10T10:10:55Z" level=error msg="Failed to get pidID : unable to find pod main pid"
time="2018-10-10T10:10:55Z" level=error msg="Failed to process pod : unable to find pod main pid"
time="2018-10-10T10:10:55Z" level=error msg="error syncing key (test/test-1-5zvts): unable to find pod main pid"
time="2018-10-10T10:10:55Z" level=error msg="Failed to get pidID : unable to find pod main pid"
time="2018-10-10T10:10:55Z" level=error msg="Failed to process pod : unable to find pod main pid"
time="2018-10-10T10:10:55Z" level=error msg="error syncing key (test/test-1-5zvts): unable to find pod main pid"
time="2018-10-10T10:10:56Z" level=error msg="Failed to get pidID : unable to find pod main pid"
time="2018-10-10T10:10:56Z" level=error msg="Failed to process pod : unable to find pod main pid"
time="2018-10-10T10:10:56Z" level=error msg="error syncing key (test/test-1-5zvts): unable to find pod main pid"
time="2018-10-10T10:10:56Z" level=error msg="Failed to get pidID : unable to find pod main pid"
time="2018-10-10T10:10:56Z" level=error msg="Failed to process pod : unable to find pod main pid"
time="2018-10-10T10:10:56Z" level=error msg="error syncing key (test/test-1-5zvts): unable to find pod main pid"
time="2018-10-10T10:10:56Z" level=error msg="Failed to get pidID : unable to find pod main pid"
time="2018-10-10T10:10:56Z" level=error msg="Failed to process pod : unable to find pod main pid"
time="2018-10-10T10:10:56Z" level=error msg="error syncing key (test/test-1-5zvts): unable to find pod main pid"
time="2018-10-10T10:10:57Z" level=info msg="Pod test-1-5zvts previously initialized, ignoring"
time="2018-10-10T10:11:02Z" level=info msg="Pod test-1-5zvts previously initialized, ignoring"

Can't build image with current dockerfile

I get the following error when building the image with the current Dockerfile:

package golang is not installed
The command '/bin/sh -c yum repolist > /dev/null &&     yum-config-manager --enable rhel-7-server-optional-rpms --enable rhel-7-server-extras-rpms &&     yum clean all &&     rpm --import https://mirror.go-repo.io/centos/RPM-GPG-KEY-GO-REPO &&     curl -s https://mirror.go-repo.io/centos/go-repo.repo | tee /etc/yum.repos.d/go-repo.repo &     INSTALL_PKGS="golang iptables iproute git runc" &&     yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS &&     rpm -V $INSTALL_PKGS &&     mkdir -p ${GOPATH}/{bin,src} &&     curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh &&     cd /opt/app-root/go/src/github.com/sabre1041/istio-pod-network-controller &&     cp bin/istio-iptables.sh /usr/local/bin/ &&     dep ensure -vendor-only &&    go build -o bin/istio-pod-network-controller -v cmd/istio-pod-network-controller/main.go &&     mv bin/istio-pod-network-controller /usr/local/bin &&     rm -rf ${GOPATH} &&     REMOVE_PKGS="golang git" &&     yum remove -y $REMOVE_PKGS &&     yum clean all &&     rm -rf /var/cache/yum &&     VERSION="v1.11.1" &&     curl -L -o /root/crictl-$VERSION-linux-amd64.tar.gz https://github.com/kubernetes-incubator/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz &&    tar zxvf /root/crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin &&     rm -f crictl-$VERSION-linux-amd64.tar.gz &&     curl -L -o /usr/bin/jq https://github.com/stedolan/jq/releases/download/jq-1.5/jq-linux64 &&     chmod +x /usr/bin/jq' returned a non-zero code: 1

It seems that golang can no longer be installed with yum.
Could we build the binary outside the Docker build process and copy it into the final image?

init container blocks Deployment of Pod

Following the installation steps on an already-working OpenShift 3.11 cluster with Istio, adding the istio-pod-network-controller/initialize: true annotation to a DeploymentConfig blocks and breaks pod deployment.

OpenShift scc error with Invalid value: 1337

After successfully installing Istio and istio-pod-network-controller, enabling auto-injection in ns-1, and adding the "istio-pod-network-controller/initialize: true" annotation to ns-1, applying a Deployment produces the error below. Any suggestions? Thanks!
By the way, "oc adm policy add-scc-to-user anyuid -z default -n ns-1" had already been executed.

OpenShift version : 3.11
istio version: 1.0.3

16s 16s 1 two-v1-5c9cd6c4cb.157858270bc4fa1b ReplicaSet Warning FailedCreate replicaset-controller Error creating: pods "two-v1-5c9cd6c4cb-lpfpl" is forbidden: unable to validate against any pod security policy: [spec.containers[1].securityContext.securityContext.runAsUser: Invalid value: 1337: must be in the ranges: [1000240000, 1000249999]]

web-only installation

It should be possible to install this project without cloning the repo:

  • update all file references to URLs (all commands should support them)
  • add pointers on installing minishift, minikube, and Helm, where appropriate, in the docs
  • update the template to use the automatically built quay.io image

configurable list of excluded pod annotations

Right now, builder and deployer pods are automatically excluded; this is hardwired in the code. The list of annotations that cause a pod to be excluded should instead be passed as configuration, perhaps as a comma-separated list of key/value pairs.
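One way the proposed configuration could work is sketched below. The comma-separated key=value format, the function names, and the example annotation keys are all illustrative assumptions, not existing controller options:

```go
package main

import (
	"fmt"
	"strings"
)

// parseExclusions parses a comma-separated list of annotation key=value
// pairs (e.g. passed via a flag or environment variable) into a map.
func parseExclusions(config string) map[string]string {
	exclusions := map[string]string{}
	for _, pair := range strings.Split(config, ",") {
		if kv := strings.SplitN(strings.TrimSpace(pair), "=", 2); len(kv) == 2 {
			exclusions[kv[0]] = kv[1]
		}
	}
	return exclusions
}

// isExcluded reports whether a pod's annotations match any configured
// exclusion pair.
func isExcluded(podAnnotations, exclusions map[string]string) bool {
	for k, v := range exclusions {
		if podAnnotations[k] == v {
			return true
		}
	}
	return false
}

func main() {
	// Hypothetical configuration excluding pods that carry a marker annotation.
	ex := parseExclusions("example.io/skip-mesh=true")
	fmt.Println(isExcluded(map[string]string{"example.io/skip-mesh": "true"}, ex)) // prints true
}
```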
