How to Implement Istio in Multicloud and Multicluster (GKE/AKS)

Today, most applications follow a microservices architecture, with individual services spread across public clouds and multiple Kubernetes clusters. Since messages between services travel over the internet, securing your data in transit is critical. You don’t want a malicious actor to read and record data-in-transit (packet sniffing), impersonate a trusted party in the middle of the communication (IP spoofing), or mount a DoS attack such as bandwidth flooding or connection flooding.

Security should always be built in layers to achieve defense in depth. When software engineers develop containerised applications, they need to think about security at the Code, Container, Cluster, and Cloud levels (read about the 4 C’s of container security).

So in this article, we will explain how you can mitigate these vulnerabilities by securing microservice communication across multiple clouds and clusters using the open-source Istio service mesh. We use two different Kubernetes clusters- GKE and AKS- deploy an application in each, and ensure they talk to each other over secure channels. If you want to know more, read about mTLS and certificate rotation with Istio.

Prerequisites

  1. Ready-to-use GKE (primary) and AKS (remote/secondary) clusters
  2. Environment variables configured for both clusters
  3. A terminal with kubectl access to the primary and remote/secondary clusters
  4. The manifest files from the IMESH GitHub repo
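The commands throughout this article pass $env:GKE and $env:AKS to kubectl and istioctl. These are assumed to be environment variables holding each cluster’s --context flag (the article’s syntax is PowerShell). A minimal sketch with hypothetical context names:

```shell
# Hypothetical kubectl context names -- list yours with: kubectl config get-contexts
# PowerShell equivalent: $env:GKE = "--context=gke_my-project_us-central1-a_cluster-gke"
GKE="--context=gke_my-project_us-central1-a_cluster-gke"
AKS="--context=cluster-aks"

# Every kubectl/istioctl command in the article then expands like:
echo kubectl $GKE get nodes
```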

Watch the video: implementing Istio in multicluster Kubernetes

If you prefer to learn by video, the following walkthrough covers securing multicluster apps with Istio:

Steps

There are six steps to implement Istio in a multicloud setup, deploy services, and then enforce mTLS and L4/L7 authorization.

  1. Install and configure Istio in GKE
  2. Configure the remote cluster- AKS
  3. Allow Istio in GKE to access the remote cluster
  4. Deploy applications in each cluster and validate mTLS
  5. Implement L4 authorization policy using Istio
  6. Implement L7 authorization policy using Istio

Step 1: Install and Configure Istio in the primary cluster (GKE) 

The goal of steps 1 to 3 is to configure Istio in both clusters- GKE and AKS- so that apps in each cluster can talk to each other through an east-west gateway. Please refer to the image below of the Istio configuration we are trying to achieve.

High level Istio configuration for multicluster

Step 1.1: Configure the Istio operator

We will use the following IstioOperator yaml to define the desired state of the Istio components. We will treat GKE as the primary cluster, name the service mesh ‘mesh1’, and call the primary cluster’s GKE data center network ‘network1’.

Refer to the yaml file below; you can also download it from Git.

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    pilot:
      env:
        EXTERNAL_ISTIOD: true
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster-gke
      network: network1
      proxy:
        privileged: true

In the above file, we have done two things:

  • Set the flag EXTERNAL_ISTIOD to ‘true’ to allow the Istio control plane to handle remote clusters
  • Set the flag proxy->privileged to ‘true’ to get root access to the proxy container. (Note: this is NOT ideal for a production implementation. You can reach out to IMESH Istio support for production support.)

Step 1.2: Install Istio using Istio Operator

Execute the following command to install Istio:

istioctl install $env:GKE -f <<Istio Operator file name>>

You will observe that the Istio core, Istiod and Ingress gateways are installed.

Step 1.3: Install Istio east-west gateway 

We will use the Istio operator to install a gateway in GKE that can handle traffic coming from outside the cluster- from AKS. We have named this gateway istio-eastwestgateway.

Note: Using the Istio operator, we are installing an east-west gateway (slightly different from a normal ingress controller, which acts as an API gateway for external clients). Once the east-west gateway is installed, we will create a Gateway resource to link to it, and later create virtual services to make sure the gateway resource in GKE listens to AKS on certain ports.

You can refer to the east-west-gateway-cluster-gke.yaml file on Git or the code below:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: eastwest
spec:
  revision: ""
  profile: empty
  components:
    ingressGateways:
      - name: istio-eastwestgateway
        label:
          istio: eastwestgateway
          app: istio-eastwestgateway
          topology.istio.io/network: network1
        enabled: true
        k8s:
          env:
            # traffic through this gateway should be routed inside the network
            - name: ISTIO_META_REQUESTED_NETWORK_VIEW
              value: network1
          service:
            ports:
              - name: status-port
                port: 15021
                targetPort: 15021
              - name: tls
                port: 15443
                targetPort: 15443
              - name: tls-istiod
                port: 15012
                targetPort: 15012
              - name: tls-webhook
                port: 15017
                targetPort: 15017
  values:
    gateways:
      istio-ingressgateway:
        injectionTemplate: gateway
    global:
      network: network1

Note: the east-west gateway file can also be generated using the command below:

samples/multicluster/gen-eastwest-gateway.sh --network network1

Install the ingress gateway using the following command:

istioctl install $env:GKE -f <<ingress gateway file name>>

Ingress istio-eastwestgateway will be active now.

Step 1.4: Set up the east-west gateway to allow the remote cluster (AKS) to access GKE

Execute the following command to find the external IP of the ingress gateway istio-eastwestgateway. Copy it; we will use it while configuring Istio in the remote cluster (step 2.2).

kubectl get svc -n istio-system $env:GKE

We will then create ports to receive external traffic from AKS into GKE through the gateway.

Note: Since the east-west gateway IP is public, for a production implementation we suggest security measures to protect it, such as HTTPS, firewall rules, certificates, etc.

Create two yaml files of kind Gateway to expose Istiod and the services in GKE to AKS.

Apply the expose-istiod.yaml and expose-services.yaml files in the istio-system namespace.

Declaration of expose-istiod.yaml file below:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istiod-gateway
  namespace: istio-system
spec:
  selector:
    istio: eastwestgateway
  servers:
    - port:
        name: tls-istiod
        number: 15012
        protocol: tls
      tls:
        mode: PASSTHROUGH       
      hosts:
        - "*"
    - port:
        name: tls-istiodwebhook
        number: 15017
        protocol: tls
      tls:
        mode: PASSTHROUGH         
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istiod-vs
  namespace: istio-system
spec:
  hosts:
  - "*"
  gateways:
  - istiod-gateway
  tls:
  - match:
    - port: 15012
      sniHosts:
      - "*"
    route:
    - destination:
        host: istiod.istio-system.svc.cluster.local
        port:
          number: 15012
  - match:
    - port: 15017
      sniHosts:
      - "*"
    route:
    - destination:
        host: istiod.istio-system.svc.cluster.local
        port:
          number: 443

Declaration of expose-services.yaml file below:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cross-network-gateway
  namespace: istio-system
spec:
  selector:
    istio: eastwestgateway
  servers:
    - port:
        number: 15443
        name: tls
        protocol: TLS
      tls:
        mode: AUTO_PASSTHROUGH
      hosts:
        - "*.local"

Run the following commands to deploy these two files and allow cross-cluster communication:

kubectl apply $env:GKE -f .\expose-istiod.yaml 
kubectl apply $env:GKE -f .\expose-services.yaml

Step 2: Configure the remote cluster (AKS)

Step 2.1: Label and annotate the istio-system namespace in the AKS

You need to label and annotate the istio-system namespace so that istiod knows the Istio control plane for this remote cluster is ‘cluster-gke’, the primary cluster. You can do so by applying the namespace manifest below (I have named the file cluster-aks-remote-namespace-prep.yaml).

apiVersion: v1
kind: Namespace
metadata:
  name: istio-system
  labels:
    topology.istio.io/network: network2
  annotations:
    topology.istio.io/controlPlaneClusters: cluster-gke

Step 2.2: Use the east-west gateway of GKE while configuring Istio in AKS

I have used the cluster-aks-remote.yaml file to set up Istio in AKS. Use the IP of the east-west gateway of the GKE cluster as the value of remotePilotAddress in the yaml file.

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: remote
  values:
    istiodRemote:
      injectionPath: /inject/cluster/cluster-aks/net/network2
    global:
      remotePilotAddress: <replace with ip of east-west gateway of primary cluster>
      proxy:
        privileged: true

Step 2.3: Install Istio using the Istio operator in AKS

Use the following command to install cluster-aks-remote.yaml:

istioctl install $env:AKS -f .\cluster-aks-remote.yaml

Step 3: Allow Istio in GKE to access the API server of AKS

This step is crucial: it allows the Istio control plane to access the API server of AKS so it can perform its core activities, such as service discovery and patching the webhooks. The idea is to create a remote secret and apply it in the primary cluster, GKE.

Step 3.1: Create remote cluster secrets 

Use the following command to generate the remote secret for the remote cluster (AKS) and store it in a yaml file:

istioctl x create-remote-secret $env:AKS --name=cluster-aks > apiserver-creds-aks.yaml

The output file apiserver-creds-aks.yaml will look something like below:
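The generated file wraps a kubeconfig for the AKS API server in a Kubernetes Secret that istiod watches. A rough sketch (certificate and token data elided, names following the command above):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: istio-remote-secret-cluster-aks
  namespace: istio-system
  labels:
    istio/multiCluster: "true"
  annotations:
    networking.istio.io/cluster: cluster-aks
type: Opaque
stringData:
  cluster-aks: |
    # kubeconfig granting istiod read access to the AKS API server
    # (clusters, users, and contexts entries elided)
```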

Step 3.2: Apply the remote cluster secrets in primary cluster (GKE)

Use the following command to implement the secrets in GKE so that it can access the API server of AKS. 

kubectl apply $env:GKE -f .\apiserver-creds-aks.yaml

Note: Apply the remote credentials first to connect the two clusters, and only then create the east-west gateway and expose the services in the remote cluster; otherwise there will be errors.

Step 3.3: Install east-west ingress gateway in remote cluster AKS

Use the command to install east-west ingress gateway controllers in AKS.

istioctl install $env:AKS -f east-west-gateway-cluster-aks.yaml

After the controller is installed, we will create a Gateway resource to link with the east-west gateway in the remote cluster by applying the following command:

kubectl apply $env:AKS -f .\expose-services.yaml 

Step 4:  Deploy application into primary and remote Kubernetes clusters in Istio service mesh

Step 4.1: Deploy a service and a deployment into each cluster- GKE and AKS

We will deploy the service in each cluster, then deploy version 1 of the helloworld Deployment in GKE and version 2 in AKS. The idea is to see how two services in different clusters communicate with each other through the gateway.

Links to demo-service.yaml, demo-deployment-v1.yaml and demo-deployment-v2.yaml.
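As a reference, here is a minimal sketch of what demo-service.yaml might contain, modeled on Istio’s helloworld sample (the multi-cluster namespace and port 5000 are assumptions based on the exec and tcpdump commands later in this article):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  namespace: multi-cluster
  labels:
    app: helloworld
spec:
  ports:
  - port: 5000
    name: http
  selector:
    app: helloworld
```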

Use the following commands to deploy services and deployments into each cluster.

kubectl apply $env:GKE -f .\demo-service.yaml

kubectl apply $env:AKS -f .\demo-service.yaml

kubectl apply $env:GKE -f .\demo-deployment-v1.yaml

kubectl apply $env:AKS -f .\demo-deployment-v2.yaml

Step 4.2: Deploy another service to send requests to the hello service in GKE and AKS

Git Link to sleep-deployment-cluster-gke.yaml  and sleep-deployment-cluster-aks.yaml

kubectl apply $env:GKE -f .\sleep-deployment-cluster-gke.yaml

kubectl apply $env:AKS -f .\sleep-deployment-cluster-aks.yaml

Step 4.3: Get into one of the ‘sleep’ service pods and request the hello service

Run the following command to enter one of the ‘sleep’ service pods:

kubectl exec -it <<sleep service pod name in gke>> $env:GKE -n multi-cluster -- sh

Request the hello service from the pod. 

curl helloworld/hello
multicluster service to service communication with Istio

Similarly, you can also verify the communication by entering into the pod of ‘sleep’ service in AKS. 

Step 4.4: Verify if communications are secured with mTLS 

You can verify that the communication between services in the multicluster setup is encrypted by dumping TCP/IP packets in the Envoy proxy container. Use the command below to enter the Envoy proxy container:

kubectl exec -it <<helloworld deployment-v1-pod name>> -c istio-proxy -n <<namespace>> -- sh

Run the following command to dump TCP/IP packets.

sudo tcpdump -nA port 5000

You would see an output like the below:

tcp dump of Envoy proxy logs

You can see that all the packets exchanged between the two services across clusters are encrypted with mTLS.
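By default Istio runs mTLS in permissive mode, so plaintext traffic from workloads outside the mesh is still accepted. If you want the mesh to reject any non-mTLS traffic, you could additionally apply a mesh-wide PeerAuthentication policy such as this sketch (not part of the original setup; apply it via the primary cluster):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system  # the root namespace makes the policy mesh-wide
spec:
  mtls:
    mode: STRICT
```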

Step 5:  Apply L4 authorization policies to multicluster communication with Istio

To apply granular policies, such as restricting which services can access a given service, you can use Istio authorization policies.

Step 5.1: Create and deploy an Istio L4 authorization policy 

You can refer to the following helloworld-policy to create your authorization policy, or check it out on Git. The objectives of the policy are:

  • Allow deployment-v1 to be accessed only from the sleep service in the remote cluster (i.e., from AKS). If we send a request from the sleep pod in AKS to the helloworld service, we should get responses from both deployment-v1 and deployment-v2.
  • Deny deployment-v1 access from any other service in the mesh. If a pod in GKE requests the helloworld service, responses should come only from deployment-v2 pods.

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: helloworld-policy
  namespace: multi-cluster
spec:
  selector:
    matchLabels:
      version: v1
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/multi-cluster/sa/sleep-aks"]

Deploy the authorization policy with the command below:

kubectl apply -f .\demo-authorization.yaml

Step 5.2: Verify the L4 authorization policy implementation

After you apply the L4 policy, verify it by entering the ‘sleep’ service pods in GKE and AKS and curling the helloworld service. You will see that deployment-v1 can be accessed only from the sleep service in AKS; access from GKE will throw an RBAC denied error. Refer to the screenshots below.

Istio L4 authorization policy verification logs part-1

Access from GKE will throw an error.

Istio L4 authorization policy verification logs part-2

Step 6:  Apply L7 authorization policies to multicluster communication with Istio

Now you can apply L7 authorization policies to create rules on HTTP traffic. Below is an example of an L7 policy that allows only the HEAD method and blocks all other kinds of API access. The idea is to allow requests to deployment-v1 from the sleep service in AKS only when the HTTP request uses the HEAD method.

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: helloworld-policy
  namespace: multi-cluster
spec:
  selector:
    matchLabels:
      version: v1
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/multi-cluster/sa/sleep-aks"]
    to:
    - operation:
        methods: ["HEAD"]

Once you apply the L7 policy, you can validate the traffic using logs, which will look like the screenshot below. Note: if you access the deployment-v1 service from AKS with a plain curl (a GET request), it will fail. But if you use the HEAD method with curl -I helloworld/hello, you will get the response HTTP/1.1 200 OK.

Istio L7 authorization policy verification logs

That’s the end of securing a multicloud and multicluster application using Istio.


Conclusion

If you want to implement Istio in large enterprises with numerous microservices across public or private cloud or VMs, then IMESH can help you. We ensure Istio performs optimally with guaranteed SLAs. 

Contact us for enterprise Istio support today. 

Ravi Verma

Ravi is the CTO of IMESH. A technology visionary, Ravi brings 12+ years of experience in software development and cloud architecture for enterprise software. He has led R&D divisions at Samsung and GE Healthcare and architected high-performance, secure, and scalable systems for Baxter and Aricent. His passion and interest lie in networking and security. Ravi frequently discusses open-source technologies such as Kubernetes, Istio, and Envoy Proxy from the CNCF landscape.
