Configuring AWS API Gateway with Istio on a kops cluster
AWS API Gateway integrates straightforwardly with an Amazon EKS cluster. But if you want to integrate it with a self-managed Kubernetes cluster on AWS, you will face some challenges; the integration is not straightforward. And if Istio is enabled in your cluster, it gets more complex. Still, you can integrate everything with your cluster, Istio included, without much trouble. Here I assume there is a kops Kubernetes cluster on AWS with an Istio-enabled namespace, and that we want to integrate AWS API Gateway with our service with minimum effort.
At the beginning you have to decide on the ingress router. Istio's ingress generally creates a Classic Load Balancer (CLB), which is not well supported by AWS API Gateway, so the integration will be challenging. But it is pretty simple with an AWS ALB (Application Load Balancer) or NLB (Network Load Balancer).
If we deploy Istio in our cluster with full capability, an ingress router is automatically created backed by a CLB. Later, we usually pair a Gateway resource with it if we want to use the Istio ingress as the entry point for north-south traffic. In the first phase we will replace this CLB with an AWS second-generation Layer 4 NLB. Here is the procedure.
First, you need to attach a policy to the master role so it is able to provision a Network Load Balancer. Create a policy document in IAM with the following JSON:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "kopsK8sNLBMasterPermsRestrictive",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeVpcs",
        "elasticloadbalancing:AddTags",
        "elasticloadbalancing:CreateListener",
        "elasticloadbalancing:CreateTargetGroup",
        "elasticloadbalancing:DeleteListener",
        "elasticloadbalancing:DeleteTargetGroup",
        "elasticloadbalancing:DescribeListeners",
        "elasticloadbalancing:DescribeLoadBalancerPolicies",
        "elasticloadbalancing:DescribeTargetGroups",
        "elasticloadbalancing:DescribeTargetHealth",
        "elasticloadbalancing:ModifyListener",
        "elasticloadbalancing:ModifyTargetGroup",
        "elasticloadbalancing:RegisterTargets",
        "elasticloadbalancing:SetLoadBalancerPoliciesOfListener"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeVpcs",
        "ec2:DescribeRegions"
      ],
      "Resource": "*"
    }
  ]
}
Click Review policy, fill in all fields, and click Create policy.

Then click Roles, select your masters role, and click Attach policy to attach this policy to your master nodes.
Then edit the existing ingress router Service and add the following annotations:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
service.beta.kubernetes.io/aws-load-balancer-type: nlb
Remember, we create an internal load balancer here because all traffic will be forwarded via API Gateway; we do not need any other entry point for this demonstration. An external-facing LB can also be created by setting the first annotation to "false". Save and apply the configuration, and a private NLB will be provisioned for you, pointing at the Istio ingress router. Awesome.
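As a sketch, the patched Service could look like the following. The name, namespace, selector, and ports here assume a stock Istio install and are illustrative; your cluster's istio-ingressgateway Service may differ, so only the two annotations are the essential change:

```yaml
# Sketch only: the Istio ingress gateway Service with the two NLB
# annotations added. Name/namespace/ports assume a default Istio install.
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
    # Provision an internal (private) NLB; set to "false" for internet-facing
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    istio: ingressgateway
  ports:
    - name: http2
      port: 80          # port the NLB listens on
      targetPort: 8080  # illustrative; use your gateway pod's actual port
```

In practice you would apply only the annotation change to the existing Service (for example with kubectl edit) rather than recreating it, so the cloud controller swaps the CLB for an NLB in place.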
(The following portion is adapted from the AWS guideline.)
Now it is time to configure API Gateway. You can create an API Gateway API with private integration to provide your customers access to HTTP/HTTPS resources within your Amazon Virtual Private Cloud (Amazon VPC).
When a client calls the API, API Gateway connects to the Network Load Balancer through the pre-configured VPC link. A VPC link is encapsulated by an API Gateway resource of VpcLink. It is responsible for forwarding API method requests to the VPC resources and returning backend responses to the caller. To an API developer, a VpcLink is functionally equivalent to an integration endpoint.
To create an API with private integration, you must create a new VpcLink, or choose an existing one, that is connected to a Network Load Balancer targeting the desired VPC resources. You must have appropriate permissions to create and manage a VpcLink. You then set up an API method and integrate it with the VpcLink by setting either HTTP or HTTP_PROXY as the integration type, setting VPC_LINK as the integration connection type, and setting the VpcLink identifier on the integration connectionId.
To create the VPC link, do the following first:
- From the primary navigation pane, choose VPC links and then choose Create.
- Choose VPC link for REST APIs.
- Enter a name, and optionally, a description for your VPC link.
- Choose a Network Load Balancer from the Target NLB drop-down list.
- The Network Load Balancer must already exist in the same Region as your API for it to appear in the list. In our case, the Istio NLB setup above already took care of this.
- Choose Create to start creating the VPC link.
The initial response returns a VpcLink resource representation with the VPC link ID and a PENDING status, because the operation is asynchronous and takes about 2-4 minutes to complete. Upon successful completion, the status is AVAILABLE. In the meantime, you can proceed to create the API.
Now choose APIs from the primary navigation pane and then choose Create API to create a new API of either an edge-optimized or regional endpoint type. For the root resource (/), choose Create Method from the Actions drop-down menu, and then choose GET.
In the / GET — Setup pane, initialize the API method integration as follows:
- Choose VPC Link for Integration type.
- Choose Use Proxy Integration.
- From the Method drop-down list, choose GET as the integration method.
- From the VPC Link drop-down list, choose [Use Stage Variables] and type ${stageVariables.vpcLinkId} in the text box below. We will define the vpcLinkId stage variable after deploying the API to a stage and set its value to the ID of the VpcLink.
- Type a URL, for example, http://aws.companyname.ai, for Endpoint URL. Here, the host name (for example, aws.companyname.ai) is used to set the Host header of the integration request.
- Leave the Use Default Timeout selection as-is, unless you want to customize the integration timeouts.
- Choose Save to finish setting up the integration.
- With the proxy integration, the API is ready for deployment. Otherwise, you need to proceed to set up the appropriate method responses and integration responses.
- From the Actions drop-down menu, choose Deploy API and then choose a new or existing stage to deploy the API.
- Note the resulting Invoke URL. You need it to invoke the API. Before doing that, you must set up the vpcLinkId stage variable.

In the Stage Editor, choose the Stage Variables tab and choose Add Stage Variable.
- Under the Name column, type vpcLinkId.
- Under the Value column, type the ID of the VPC link, for example, gix6s7.
- Choose the check-mark icon to save this stage variable.
Using the stage variable, you can easily switch between different VPC links for the API by changing the stage variable value. This completes creating the API. You can test invoking it as with any other integration.
Now, if you want to add another path, do the following:
1. From Resources, create a new resource and provide a path name, for example /bar.
2. From the Actions drop-down, select Create Method again.
3. Provide the full path in the Endpoint URL option, for example http://aws.companyname.ai/bar.
4. Deploy the API again with the stage variable.
Now the AWS part is done. You still have to add the path-based routing in a VirtualService. Here is my example:
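A VirtualService along these lines would do it. The resource name, host, gateway name, and service ports below are assumptions for illustration; quote and echo are the two example services described next:

```yaml
# Sketch: path-based routing to the two example services.
# Host, gateway name, and ports are assumptions; adjust to your setup.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: apigw-routes
spec:
  hosts:
    - aws.companyname.ai
  gateways:
    - apigw-gateway        # hypothetical Gateway resource name
  http:
    - match:
        - uri:
            prefix: /bar   # /bar traffic goes to the echo service
      route:
        - destination:
            host: echo
            port:
              number: 80
    - route:               # everything else (including /) goes to quote
        - destination:
            host: quote
            port:
              number: 80
```

Note that the more specific /bar match is listed first, since Istio evaluates HTTP routes in order and the catch-all route would otherwise shadow it.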

Here, if the path is only /, traffic will be routed to the quote service. If the path contains /bar, it will be routed to the echo service.
Moreover, you also need a Gateway here:
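A minimal Gateway bound to the default Istio ingress gateway might look like this; the resource name and host are assumptions matching the earlier example, and plain HTTP on port 80 is used since API Gateway fronts the traffic:

```yaml
# Sketch: Gateway exposing the API Gateway host on the Istio ingress.
# Name and host are assumptions; add TLS config if you terminate HTTPS here.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: apigw-gateway
spec:
  selector:
    istio: ingressgateway   # binds to the default ingress gateway pods
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - aws.companyname.ai
```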

Congratulations, you are done with the minimum required setup. Now browse the workload using the Invoke URL provided by API Gateway.