Configure a custom domain name for your Amazon MSK cluster

Amazon Managed Streaming for Apache Kafka (Amazon MSK) is a fully managed service that lets you build and run applications that use Apache Kafka to process streaming data. It runs open-source versions of Apache Kafka. This means existing applications, tooling, and plugins from partners and the Apache Kafka community are supported without requiring changes to application code.

Customers use Amazon MSK for real-time data sharing with their end customers, who could be internal teams or third parties. These end customers manage Kafka clients, which are deployed in AWS, other managed cloud providers, or on premises. When migrating from self-managed Apache Kafka to Amazon MSK, or when moving clients between MSK clusters, customers want to avoid reconfiguring Kafka clients to use a different Domain Name System (DNS) name. Therefore, it's important to have a custom domain name for the MSK cluster that the clients can communicate with. Having a custom domain name also makes the disaster recovery (DR) process easier, because clients don't need to change the MSK bootstrap address when a new cluster is created or a client connection needs to be redirected to a DR AWS Region.

MSK clusters use AWS-generated DNS names that are unique for each cluster, containing the broker ID, the MSK cluster name, two service-generated subdomains, and the AWS Region, ending with amazonaws.com. The following figure illustrates this naming format.

MSK brokers use the same DNS name for the certificate used for Transport Layer Security (TLS) connections. The DNS name used by clients with TLS encrypted authentication mechanisms must match the primary Common Name (CN) or a Subject Alternative Name (SAN) of the certificate presented by the MSK broker, to avoid hostname validation errors.

The solution discussed in this post provides a way for you to use a custom domain name for clients to connect to their MSK clusters when using SASL/SCRAM (Simple Authentication and Security Layer/Salted Challenge Response Mechanism) authentication only.

Solution overview

Network Load Balancers (NLBs) are a popular addition to the Amazon MSK architecture, along with AWS PrivateLink, as a way to expose connectivity to an MSK cluster from other virtual private clouds (VPCs). For more details, see How Goldman Sachs builds cross-account connectivity to their Amazon MSK clusters with AWS PrivateLink. In this post, we run through how to use an NLB to enable the use of a custom domain name with Amazon MSK when using SASL/SCRAM authentication.

The following diagram shows all the components used by the solution.

SASL/SCRAM uses TLS to encrypt the Kafka protocol traffic between the client and the Kafka broker. To use a custom domain name, the client needs to be presented with a server certificate matching that custom domain name. As of this writing, it isn't possible to modify the certificate used by the MSK brokers, so this solution uses an NLB to sit between the client and the MSK brokers.

An NLB works at the connection layer (Layer 4) and routes TCP or UDP protocol traffic. It doesn't validate the application data being sent, and simply forwards the Kafka protocol traffic. The NLB provides the ability to use a TLS listener, where a certificate is imported into AWS Certificate Manager (ACM) and associated with the listener, enabling TLS negotiation between the client and the NLB. The NLB performs a separate TLS negotiation between itself and the MSK brokers. This NLB TLS negotiation to the target works exactly the same regardless of whether certificates are signed by a public or private Certificate Authority (CA).

For the client to resolve DNS queries for the custom domain, an Amazon Route 53 private hosted zone is used to host the DNS records, and is associated with the client's VPC to enable DNS resolution from the Route 53 VPC resolver.

Kafka listeners and advertised listeners

Kafka listeners (listeners) are the lists of addresses that Kafka binds to for listening. A Kafka listener consists of a hostname or IP, port, and protocol: <protocol>://<hostname>:<port>.

The Kafka client uses the bootstrap address to connect to one of the brokers in the cluster and issues a metadata request. The broker provides a metadata response containing the address information of each broker that the client needs in order to connect and talk to those brokers. Advertised listeners (advertised.listeners) is a configuration option that determines the addresses Kafka clients use to connect to the brokers. By default, an advertised listener is not set. After it's set, Kafka clients use the advertised listeners instead of listeners to obtain the connection information for brokers.
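To illustrate the difference, on a self-managed Apache Kafka broker the two settings might look like the following server.properties fragment. This is only an illustration with placeholder names; on Amazon MSK the listeners are managed for you, and only advertised.listeners can be altered, as shown later in this post.

# listeners: the addresses the broker binds to, one per security protocol
listeners=SASL_SSL://0.0.0.0:9096
# advertised.listeners: the addresses returned to clients in metadata responses;
# when unset, clients fall back to the listeners value
advertised.listeners=SASL_SSL://b-1.example.com:9001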

When Amazon MSK multi-VPC private connectivity is enabled, AWS sets the advertised.listeners configuration option to include the Amazon MSK multi-VPC DNS alias.

MSK brokers use the listener configuration to tell clients the DNS names to use to connect to the individual brokers for each authentication type enabled. Therefore, when clients are directed to use the custom domain name, you need to set a custom advertised listener for the SASL/SCRAM authentication protocol. Advertised listeners are unique to each broker; the cluster won't start if multiple brokers have the same advertised listener address.

Kafka bootstrap process and setup options

A Kafka client uses the bootstrap addresses to get the metadata from the MSK cluster, which in response provides the broker hostnames and ports (the listeners information by default, or the advertised listeners if configured) that the client needs to connect to for subsequent requests. Using this information, the client connects to the appropriate broker for the topic or partition that it needs to send to or fetch from. The following diagram shows the default bootstrap and topic or partition connectivity between a Kafka client and an MSK broker.
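For example, a client configuration only carries the bootstrap address; the addresses of the individual brokers used for subsequent connections come from the metadata response. The following is an illustrative snippet with placeholder values, not the exact file used later in this post.

# client.properties (illustrative)
# Only the bootstrap address is configured here; the per-broker addresses used
# for produce and fetch connections are discovered from the metadata response.
bootstrap.servers=b-1.mskcluster.xxxxx.yy.kafka.region.amazonaws.com:9096
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512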

You have two options when using a custom domain name with Amazon MSK.

Option 1: Only a bootstrap connection through an NLB

You can use a custom domain name only for the bootstrap connection, where the advertised listeners are not set, so the client is directed to the default AWS cluster DNS name. This option is useful when the Kafka client has direct network connectivity to both the NLB and the MSK broker's elastic network interface (ENI). The following diagram illustrates this setup.

No changes are required to the MSK brokers, and the Kafka client has the custom domain set as the bootstrap address. The Kafka client uses the custom domain bootstrap address to send a get metadata request to the NLB. The NLB forwards the Kafka protocol traffic received from the Kafka client to a healthy MSK broker's ENI. That broker responds with metadata where only listeners is set, containing the default MSK cluster DNS name for each broker. The Kafka client then uses the default MSK cluster DNS name for the appropriate broker and connects to that broker's ENI.

Option 2: All connections through an NLB

Alternatively, you can use a custom domain name for the bootstrap and the brokers, where the custom domain name for each broker is set in the advertised listeners configuration. You must use this option when Kafka clients don't have direct network connectivity to the MSK brokers' ENIs, for example when Kafka clients need to use an NLB, AWS PrivateLink, or Amazon MSK multi-VPC endpoints to connect to an MSK cluster. The following diagram illustrates this setup.

The advertised listeners are set to use the custom domain name, and the Kafka client has the custom domain set as the bootstrap address. The Kafka client uses the custom domain bootstrap address to send a get metadata request, which is directed to the NLB. The NLB forwards the Kafka protocol traffic received from the Kafka client to a healthy MSK broker's ENI. That broker responds with metadata where advertised listeners is set. The Kafka client then uses the custom domain name for the appropriate broker, which directs the connection to the NLB on the port set for that broker. The NLB forwards the Kafka protocol traffic to that broker.

Network Load Balancer

The following diagram illustrates the NLB port and target configuration. A TLS listener on port 9000 is used for bootstrap connections, with all MSK brokers set as targets. The listener uses the TLS target type with target port 9096. A TLS listener port is used to represent each broker in the MSK cluster. In this post, there are three brokers in the MSK cluster, with TLS 9001 representing broker 1, up to TLS 9003 representing broker 3.

For all TLS listeners on the NLB, a single imported certificate with the domain name bootstrap.example.com is attached to the NLB. bootstrap.example.com is used as the Common Name (CN) so that the certificate is valid for the bootstrap address, and Subject Alternative Names (SANs) are set for all broker DNS names. If the certificate is issued by a private CA, clients need to import the root and intermediate CA certificates into their trust store. If the certificate is issued by a public CA, the root and intermediate CA certificates will already be in the default trust store.

The following table shows the required NLB configuration.

NLB Listener Type | NLB Listener Port | Certificate | NLB Target Type | NLB Targets
TLS | 9000 | bootstrap.example.com | TLS | All broker ENIs
TLS | 9001 | bootstrap.example.com | TLS | Broker 1
TLS | 9002 | bootstrap.example.com | TLS | Broker 2
TLS | 9003 | bootstrap.example.com | TLS | Broker 3

Domain Name System

For this post, a Route 53 private hosted zone is used to host the DNS records for the custom domain, in this case example.com. The private hosted zone is associated with the Amazon MSK VPC to enable DNS resolution for the client that's launched in the same VPC. If your client is in a different VPC than the MSK cluster, you need to associate the private hosted zone with that client's VPC.

The Route 53 private hosted zone is not a required part of the solution. The most important part is that the client can perform DNS resolution against the custom domain and get the required responses. You can instead use your organization's existing DNS, a Route 53 public hosted zone, a Route 53 inbound resolver to resolve Route 53 private hosted zones from outside of AWS, or an alternative DNS solution.

The following figure shows the DNS records used by the client to resolve to the NLB. We use bootstrap for the initial client connection, and use b-1, b-2, and b-3 to reference each broker's name.

The following table lists the DNS records required for a three-broker MSK cluster when using a Route 53 private or public hosted zone.

Record | Record Type | Value
bootstrap | A | NLB alias
b-1 | A | NLB alias
b-2 | A | NLB alias
b-3 | A | NLB alias

The following table lists the DNS records required for a three-broker MSK cluster when using other DNS solutions.

Record | Record Type | Value
bootstrap | CNAME | NLB DNS A record (for example, name-id.elb.region.amazonaws.com)
b-1 | CNAME | NLB DNS A record
b-2 | CNAME | NLB DNS A record
b-3 | CNAME | NLB DNS A record

In the following sections, we go through the steps to configure a custom domain name for your MSK cluster and clients connecting with the custom domain.

Prerequisites

To deploy the solution, you need the following prerequisites:

Launch the CloudFormation template

Complete the following steps to deploy the CloudFormation template:

  1. Choose Launch Stack.

  2. Provide the stack name as msk-custom-domain.
  3. For MSKClientUserName, enter the user name of the secret used for SASL/SCRAM authentication with Amazon MSK.
  4. For MSKClientUserPassword, enter the password of the secret used for SASL/SCRAM authentication with Amazon MSK.
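If you prefer the AWS CLI over the Launch Stack button, the equivalent call looks roughly like the following sketch. The template file name is an assumption; substitute the template referenced by the Launch Stack link.

# Hypothetical CLI equivalent of the Launch Stack button; the template file
# name is a placeholder for the template distributed with this post.
aws cloudformation create-stack \
  --stack-name msk-custom-domain \
  --template-body file://msk-custom-domain.yaml \
  --capabilities CAPABILITY_IAM \
  --parameters ParameterKey=MSKClientUserName,ParameterValue=<user_name> \
               ParameterKey=MSKClientUserPassword,ParameterValue=<password>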

The CloudFormation template will deploy the following resources:

Set up the EC2 instance

Complete the following steps to configure your EC2 instance:

  1. On the Amazon EC2 console, connect to the instance msk-custom-domain-KafkaClientInstance1 using Session Manager, a capability of AWS Systems Manager.
  2. Switch to ec2-user:
  3. Run the following commands to configure the SASL/SCRAM client properties, create Kafka access control lists (ACLs), and create a topic named customer:
    . ./cloudformation_outputs.sh 
    aws configure set region $REGION 
    export BS=$(aws kafka get-bootstrap-brokers --cluster-arn ${MSKClusterArn} | jq -r '.BootstrapBrokerStringSaslScram') 
    export ZOOKEEPER=$(aws kafka describe-cluster --cluster-arn $MSKClusterArn | jq -r '.ClusterInfo.ZookeeperConnectString')
    ./configure_sasl_scram_properties_and_kafka_acl.sh

Create a certificate

For this post, we use self-signed certificates. However, it's recommended to use either a public certificate or a certificate signed by your organization's private key infrastructure (PKI).

In the event you’re are utilizing an AWS personal CA for the personal key infrastructure, consult with Creating a non-public CA for directions to create and set up a non-public CA.

Use the OpenSSL command to create a self-signed certificate. Modify the following command, adding the country code, state, city, and company:

SSLCONFIG="[req]
prompt = no
distinguished_name = req_distinguished_name
x509_extensions = v3_ca

[req_distinguished_name]
C = <<Country_Code>>
ST = <<State>>
L = <<City>>
O = <<Company>>
OU = 
emailAddress = 
CN = bootstrap.example.com

[v3_ca]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
subjectAltName = @alternate_names

[alternate_names]
DNS.1 = bootstrap.example.com
DNS.2 = b-1.example.com
DNS.3 = b-2.example.com
DNS.4 = b-3.example.com
"

openssl req -x509 -newkey rsa:2048 -days 365 -nodes \
    -config <(echo "$SSLCONFIG") \
    -keyout msk-custom-domain-pvt-key.pem \
    -out msk-custom-domain-certificate.pem

You can check the created certificate using the following command:

openssl x509 -text -noout -in msk-custom-domain-certificate.pem

Import the certificate to ACM

To use the self-signed certificate for the solution, you need to import the certificate to ACM:

export CertificateARN=$(aws acm import-certificate --certificate file://msk-custom-domain-certificate.pem --private-key file://msk-custom-domain-pvt-key.pem | jq -r '.CertificateArn')

echo $CertificateARN

After it's imported, you can see the certificate in ACM.

Import the certificate to the Kafka client trust store

For the client to validate the server SSL certificate during the TLS handshake, you need to import the self-signed certificate to the client's trust store.

  1. Run the following command to use the JVM trust store to create your client trust store:
    cp /usr/lib/jvm/jre-1.8.0-openjdk/lib/security/cacerts /home/ec2-user/kafka.client.truststore.jks 
    chmod 700 kafka.client.truststore.jks

  2. Import the self-signed certificate to the trust store using the following command. Provide the keystore password as changeit.
    /usr/lib/jvm/jre-1.8.0-openjdk/bin/keytool -import \
    	-trustcacerts \
    	-noprompt \
    	-alias msk-cert \
    	-file msk-custom-domain-certificate.pem \
    	-keystore kafka.client.truststore.jks

  3. You must include the trust store location in the config properties used by Kafka clients to enable certificate validation:
    echo 'ssl.truststore.location=/home/ec2-user/kafka.client.truststore.jks' >> /home/ec2-user/kafka/config/client_sasl.properties
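At this point, the client properties file should contain entries along the lines of the following sketch. The exact contents are produced by the configure script run earlier; the user name and password placeholders correspond to the secret created by the CloudFormation stack.

    # /home/ec2-user/kafka/config/client_sasl.properties (illustrative contents)
    security.protocol=SASL_SSL
    sasl.mechanism=SCRAM-SHA-512
    sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="<MSKClientUserName>" password="<MSKClientUserPassword>";
    ssl.truststore.location=/home/ec2-user/kafka.client.truststore.jks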

Set up DNS resolution for clients within the VPC

To set up DNS resolution for clients, create a private hosted zone for the domain and associate the hosted zone with the VPC where the client is deployed:

aws route53 create-hosted-zone \
--name example.com \
--caller-reference "msk-custom-domain" \
--hosted-zone-config Comment="Private Hosted Zone for MSK",PrivateZone=true \
--vpc VPCRegion=$REGION,VPCId=$MSKVPCId

export HostedZoneId=$(aws route53 list-hosted-zones-by-vpc --vpc-id $MSKVPCId --vpc-region $REGION | jq -r '.HostedZoneSummaries[0].HostedZoneId')

Create EC2 target groups

Target groups route requests to individual registered targets, such as EC2 instances, using the protocol and port number that you specify. You can register a target with multiple target groups, and you can register multiple targets to one target group.

For this post, you need four target groups: one for each broker instance, and one that will point to all the brokers and will be used by clients for Amazon MSK connection bootstrapping.

The target groups will receive traffic on port 9096 (SASL/SCRAM authentication) and will be associated with the Amazon MSK VPC:

aws elbv2 create-target-group \
    --name b-all-bootstrap \
    --protocol TLS \
    --port 9096 \
    --target-type ip \
    --vpc-id $MSKVPCId

aws elbv2 create-target-group \
    --name b-1 \
    --protocol TLS \
    --port 9096 \
    --target-type ip \
    --vpc-id $MSKVPCId

aws elbv2 create-target-group \
    --name b-2 \
    --protocol TLS \
    --port 9096 \
    --target-type ip \
    --vpc-id $MSKVPCId

aws elbv2 create-target-group \
    --name b-3 \
    --protocol TLS \
    --port 9096 \
    --target-type ip \
    --vpc-id $MSKVPCId

Register target groups with MSK broker IPs

You must associate each target group with the broker instance (target) in the MSK cluster so that traffic going through the target group can be routed to the individual broker instance.

Complete the following steps:

  1. Get the MSK broker hostnames:
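A simple way to list them, assuming the $BS variable exported during the EC2 instance setup still holds the SASL/SCRAM bootstrap broker string, is the following sketch:

    # $BS was exported earlier from get-bootstrap-brokers; print one broker hostname per line
    echo $BS | tr ',' '\n' | cut -d: -f1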

This should show the brokers, which are part of the bootstrap address. The hostname of broker 1 looks like the following code:

b-1.mskcustomdomaincluster.xxxxx.yy.kafka.region.amazonaws.com

To get the hostname of the other brokers in the cluster, replace b-1 with values like b-2, b-3, and so on. For example, if you have six brokers in the cluster, you'll have six broker hostnames from b-1 to b-6.

  2. To get the IP address of the individual brokers, use the nslookup command:
nslookup b-1.mskcustomdomaincluster.xxxxx.yy.kafka.region.amazonaws.com
Server:   172.16.0.2
Address:  172.16.0.2#53

Non-authoritative answer:
Name:     b-1.mskcustomdomaincluster.xxxxx.yy.kafka.region.amazonaws.com
Address:  172.16.1.225

  3. Modify the following commands with the IP addresses of each broker to create environment variables that will be used later:
export B1=<<b-1_IP_Address>> 
export B2=<<b-2_IP_Address>> 
export B3=<<b-3_IP_Address>>

Next, you need to register the broker IPs with the target groups. For broker b-1, you'll register the IP address with target group b-1.

  4. Provide the target group name b-1 to get the target group ARN, then register the broker IP address with the target group:
export TARGET_GROUP_B_1_ARN=$(aws elbv2 describe-target-groups --names b-1 | jq -r '.TargetGroups[0].TargetGroupArn')

aws elbv2 register-targets \
--target-group-arn ${TARGET_GROUP_B_1_ARN} \
--targets Id=$B1

  5. Repeat the steps of obtaining the IP address from the other broker hostnames and register the IP address with the corresponding target group for brokers b-2 and b-3:
B-2
export TARGET_GROUP_B_2_ARN=$(aws elbv2 describe-target-groups --names b-2 | jq -r '.TargetGroups[0].TargetGroupArn')

aws elbv2 register-targets \
    --target-group-arn ${TARGET_GROUP_B_2_ARN} \
    --targets Id=$B2
B-3
export TARGET_GROUP_B_3_ARN=$(aws elbv2 describe-target-groups --names b-3 | jq -r '.TargetGroups[0].TargetGroupArn')

aws elbv2 register-targets \
    --target-group-arn ${TARGET_GROUP_B_3_ARN} \
    --targets Id=$B3

  6. Register all three broker IP addresses with the target group b-all-bootstrap. This target group will be used to route traffic for the Amazon MSK client connection bootstrap process.
export TARGET_GROUP_B_ALL_ARN=$(aws elbv2 describe-target-groups --names b-all-bootstrap | jq -r '.TargetGroups[0].TargetGroupArn')

aws elbv2 register-targets \
--target-group-arn ${TARGET_GROUP_B_ALL_ARN} \
--targets Id=$B1 Id=$B2 Id=$B3
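Optionally, you can confirm that all three broker IPs are registered in the bootstrap target group. This check isn't part of the original steps; targets report healthy only after the NLB and its listeners are created in the next section.

# Optional verification: list registered targets and their health state.
aws elbv2 describe-target-health \
  --target-group-arn ${TARGET_GROUP_B_ALL_ARN} \
  --query 'TargetHealthDescriptions[].[Target.Id,TargetHealth.State]' \
  --output table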

Set up NLB listeners

Now that you have the target groups created and the certificate imported, you're ready to create the NLB and listeners.

Create the NLB with the following code:

aws elbv2 create-load-balancer \
--name msk-nlb-internal \
--scheme internal \
--type network \
--subnets $MSKVPCPrivateSubnet1 $MSKVPCPrivateSubnet2 $MSKVPCPrivateSubnet3 \
--security-groups $NLBSecurityGroupId

export NLB_ARN=$(aws elbv2 describe-load-balancers --names msk-nlb-internal | jq -r '.LoadBalancers[0].LoadBalancerArn')

Next, you configure the listeners that will be used by the clients to communicate with the MSK cluster. You must create four listeners, one for each target group, on ports 9000–9003. The following table lists the listener configurations.

Protocol | Port | Certificate | NLB Target Type | NLB Targets
TLS | 9000 | bootstrap.example.com | TLS | b-all-bootstrap
TLS | 9001 | bootstrap.example.com | TLS | b-1
TLS | 9002 | bootstrap.example.com | TLS | b-2
TLS | 9003 | bootstrap.example.com | TLS | b-3

Use the following code for port 9000:

aws elbv2 create-listener \
--load-balancer-arn $NLB_ARN \
--protocol TLS \
--port 9000 \
--certificates CertificateArn=$CertificateARN \
--ssl-policy ELBSecurityPolicy-TLS13-1-2-2021-06 \
--default-actions Type=forward,TargetGroupArn=$TARGET_GROUP_B_ALL_ARN

Use the following code for port 9001:

aws elbv2 create-listener \
--load-balancer-arn $NLB_ARN \
--protocol TLS \
--port 9001 \
--certificates CertificateArn=$CertificateARN \
--ssl-policy ELBSecurityPolicy-TLS13-1-2-2021-06 \
--default-actions Type=forward,TargetGroupArn=$TARGET_GROUP_B_1_ARN

Use the following code for port 9002:

aws elbv2 create-listener \
--load-balancer-arn $NLB_ARN \
--protocol TLS \
--port 9002 \
--certificates CertificateArn=$CertificateARN \
--ssl-policy ELBSecurityPolicy-TLS13-1-2-2021-06 \
--default-actions Type=forward,TargetGroupArn=$TARGET_GROUP_B_2_ARN

Use the following code for port 9003:

aws elbv2 create-listener \
--load-balancer-arn $NLB_ARN \
--protocol TLS \
--port 9003 \
--certificates CertificateArn=$CertificateARN \
--ssl-policy ELBSecurityPolicy-TLS13-1-2-2021-06 \
--default-actions Type=forward,TargetGroupArn=$TARGET_GROUP_B_3_ARN

Enable cross-zone load balancing

By default, cross-zone load balancing is disabled on NLBs. When disabled, each load balancer node distributes traffic only to healthy targets in the same Availability Zone. For example, requests that come into the load balancer node in Availability Zone A will only be forwarded to a healthy target in Availability Zone A. If the only healthy target or the only registered target associated with an NLB listener is in a different Availability Zone than the load balancer node receiving the traffic, the traffic is dropped.

Because the NLB's bootstrap listener is associated with a target group that has all brokers registered across multiple Availability Zones, Route 53 will respond to DNS queries against the NLB DNS name with the IP addresses of NLB ENIs in Availability Zones with healthy targets.

When the Kafka client tries to connect to a broker through that broker's listener on the NLB, there can be a noticeable delay in receiving a response from the broker, because the client tries to connect using all of the IPs returned by Route 53.

Enabling cross-zone load balancing distributes the traffic across the registered targets in all Availability Zones.

aws elbv2 modify-load-balancer-attributes --load-balancer-arn $NLB_ARN --attributes Key=load_balancing.cross_zone.enabled,Value=true

Create DNS A records in a private hosted zone

Create DNS A records to route the traffic to the Network Load Balancer. The following table lists the records.

Record | Record Type | Value
bootstrap | A | NLB alias
b-1 | A | NLB alias
b-2 | A | NLB alias
b-3 | A | NLB alias

Alias record types will be used, so you need the NLB's DNS name and hosted zone ID:

export NLB_DNS=$(aws elbv2 describe-load-balancers --names msk-nlb-internal | jq -r '.LoadBalancers[0].DNSName')

export NLB_ZoneId=$(aws elbv2 describe-load-balancers --names msk-nlb-internal | jq -r '.LoadBalancers[0].CanonicalHostedZoneId')

Create the bootstrap record, and then repeat this command to create the b-1, b-2, and b-3 records, modifying the Name field (or use the scripted alternative shown after the command):

aws route53 change-resource-record-sets \
--hosted-zone-id $HostedZoneId \
--change-batch file://<(cat << EOF
{
   "Comment": "Create bootstrap record",
   "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
         "Name": "bootstrap.example.com",
         "Type": "A",
         "AliasTarget": {
            "HostedZoneId": "$NLB_ZoneId",
            "DNSName": "$NLB_DNS",
            "EvaluateTargetHealth": true
         }
      }
   }]
}
EOF
)
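As an alternative to editing and rerunning the command by hand for each broker, the three broker records can be created in a small loop. This is a sketch that reuses the same variables and alias target as the bootstrap record.

# Create the b-1, b-2, and b-3 alias records in one pass.
for RECORD in b-1 b-2 b-3; do
aws route53 change-resource-record-sets \
  --hosted-zone-id $HostedZoneId \
  --change-batch file://<(cat << EOF
{
   "Comment": "Create ${RECORD} record",
   "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
         "Name": "${RECORD}.example.com",
         "Type": "A",
         "AliasTarget": {
            "HostedZoneId": "$NLB_ZoneId",
            "DNSName": "$NLB_DNS",
            "EvaluateTargetHealth": true
         }
      }
   }]
}
EOF
)
done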

Optionally, to optimize cross-zone data transfer costs, you can set b-1, b-2, and b-3 to the IP address of the NLB ENI that's in the same Availability Zone as each broker. For example, if b-2 is using an IP address in subnet 172.16.2.0/24, which is in Availability Zone A, you should use the NLB ENI in that same Availability Zone as the value for the DNS record.

The steps so far cover using a custom domain name for bootstrap connectivity only (Option 1). If all Kafka traffic needs to go through the NLB, as discussed in Option 2, continue to the next section to set up advertised listeners.

Configure the advertised listener in the MSK cluster

To get the listener details for broker 1, you provide entity-type as brokers and entity-name as 1 for the broker ID:

/home/ec2-user/kafka/bin/kafka-configs.sh --bootstrap-server $BS \
--entity-type brokers \
--entity-name 1 \
--command-config ~/kafka/config/client_sasl.properties \
--all \
--describe | grep 'listeners=CLIENT_SASL_SCRAM'

You'll get an output like the following:

listeners=CLIENT_SASL_SCRAM://b-1.mskcustomdomaincluster.XXXX.yy.kafka.region.amazonaws.com:9096,CLIENT_SECURE://b-1.mskcustomdomaincluster.XXXX.yy.kafka.region.amazonaws.com:9094,REPLICATION://b-1.mskcustomdomaincluster.XXXX.yy.kafka.region.amazonaws.com:9093,REPLICATION_SECURE://b-1.mskcustomdomaincluster.XXXX.yy.kafka.region.amazonaws.com:9095 sensitive=false synonyms={STATIC_BROKER_CONFIG:listeners=CLIENT_SASL_SCRAM://b-1.mskcustomdomaincluster.XXXX.yy.kafka.region.amazonaws.com:9096,CLIENT_SECURE://b-1.mskcustomdomaincluster.XXXX.yy.kafka.region.amazonaws.com:9094,REPLICATION://b-1.mskcustomdomaincluster.XXXX.yy.kafka.region.amazonaws.com:9093,REPLICATION_SECURE://b-1.mskcustomdomaincluster.XXXX.yy.kafka.region.amazonaws.com:9095}

Going forward, clients will connect through the custom domain name. Therefore, you need to configure the advertised listeners with the custom domain hostname and port. To do this, copy the listener details and change the CLIENT_SASL_SCRAM listener to b-1.example.com:9001.

While you're configuring the advertised listener, you also need to preserve the information about the other listener types in the advertised listener, because inter-broker communications also use the addresses in the advertised listener.

Based on our configuration, the advertised listener for broker 1 will look like the following code, with everything after sensitive=false removed:

CLIENT_SASL_SCRAM://b-1.example.com:9001,REPLICATION://b-1-internal.mskcustomdomaincluster.xxxxxx.yy.kafka.region.amazonaws.com:9093,REPLICATION_SECURE://b-1-internal.mskcustomdomaincluster.xxxxxx.yy.kafka.region.amazonaws.com:9095

Modify the following command as follows:

  • <<BROKER_NUMBER>> – Set to the broker ID being modified (for example, 1 for broker 1)
  • <<PORT_NUMBER>> – Set to the port number corresponding to the broker ID (for example, 9001 for broker 1)
  • <<REPLICATION_DNS_NAME>> – Set to the DNS name for the REPLICATION listener
  • <<REPLICATION_SECURE_DNS_NAME>> – Set to the DNS name for the REPLICATION_SECURE listener
/home/ec2-user/kafka/bin/kafka-configs.sh --alter \
--bootstrap-server $BS \
--entity-type brokers \
--entity-name <<BROKER_NUMBER>> \
--command-config ~/kafka/config/client_sasl.properties \
--add-config advertised.listeners=[CLIENT_SASL_SCRAM://b-<<BROKER_NUMBER>>.example.com:<<PORT_NUMBER>>,REPLICATION://<<REPLICATION_DNS_NAME>>:9093,REPLICATION_SECURE://<<REPLICATION_SECURE_DNS_NAME>>:9095]

The command should look something like the following example:

/home/ec2-user/kafka/bin/kafka-configs.sh --alter \
--bootstrap-server $BS \
--entity-type brokers \
--entity-name 1 \
--command-config ~/kafka/config/client_sasl.properties \
--add-config advertised.listeners=[CLIENT_SASL_SCRAM://b-1.example.com:9001,REPLICATION://b-1-internal.mskcustomdomaincluster.xxxxxx.yy.kafka.region.amazonaws.com:9093,REPLICATION_SECURE://b-1-internal.mskcustomdomaincluster.xxxxxx.yy.kafka.region.amazonaws.com:9095]

Run the command to add the advertised listener for broker 1.

You must get the listener details for the other brokers and configure advertised.listeners for each.
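One possible way to pull the current listener details for the remaining brokers in one pass is the following sketch; the alter command still needs to be run per broker with that broker's own REPLICATION and REPLICATION_SECURE names and its NLB port (9002 for broker 2, 9003 for broker 3).

# Show the CLIENT_SASL_SCRAM listener details for brokers 2 and 3.
for ID in 2 3; do
  /home/ec2-user/kafka/bin/kafka-configs.sh --bootstrap-server $BS \
    --entity-type brokers \
    --entity-name $ID \
    --command-config ~/kafka/config/client_sasl.properties \
    --all --describe | grep 'listeners=CLIENT_SASL_SCRAM'
done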

Test the setup

Set the bootstrap address to the custom domain. This is the A record created in the private hosted zone.

export BS=bootstrap.example.com:9000

List the MSK topics using the custom domain bootstrap address:

/home/ec2-user/kafka/bin/kafka-topics.sh --list \
--bootstrap-server $BS \
--command-config=/home/ec2-user/kafka/config/client_sasl.properties

You should see the topic customer.
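To go one step further than listing topics, you can produce and consume a test message through the custom domain. This is an optional check, assuming the ACLs created earlier allow the SASL/SCRAM user to write to and read from the customer topic.

# Produce a single test message via the custom domain bootstrap address.
echo "test-message" | /home/ec2-user/kafka/bin/kafka-console-producer.sh \
  --bootstrap-server $BS \
  --producer.config /home/ec2-user/kafka/config/client_sasl.properties \
  --topic customer

# Read it back; --max-messages 1 exits after the first record.
/home/ec2-user/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server $BS \
  --consumer.config /home/ec2-user/kafka/config/client_sasl.properties \
  --topic customer --from-beginning --max-messages 1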

Clean up

To stop incurring costs, it's recommended to manually delete the private hosted zone, NLB, target groups, and imported certificate in ACM. Also, delete the CloudFormation stack to remove any resources provisioned by CloudFormation.

Use the following code to manually delete the aforementioned resources:

aws route53 change-resource-record-sets \
  --hosted-zone-id $HostedZoneId \
  --change-batch file://<(cat << EOF
{
  "Changes": [
    {
      "Action": "DELETE",
      "ResourceRecordSet": {
        "Name": "bootstrap.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "$NLB_ZoneId",
          "DNSName": "$NLB_DNS",
          "EvaluateTargetHealth": true
        }
      }
    }
  ]
}
EOF
)
    
aws route53 change-resource-record-sets \
  --hosted-zone-id $HostedZoneId \
  --change-batch file://<(cat << EOF
{
  "Changes": [
    {
      "Action": "DELETE",
      "ResourceRecordSet": {
        "Name": "b-1.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "$NLB_ZoneId",
          "DNSName": "$NLB_DNS",
          "EvaluateTargetHealth": true
        }
      }
    }
  ]
}
EOF
)
    
aws route53 change-resource-record-sets \
  --hosted-zone-id $HostedZoneId \
  --change-batch file://<(cat << EOF
{
  "Changes": [
    {
      "Action": "DELETE",
      "ResourceRecordSet": {
        "Name": "b-2.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "$NLB_ZoneId",
          "DNSName": "$NLB_DNS",
          "EvaluateTargetHealth": true
        }
      }
    }
  ]
}
EOF
)
    
aws route53 change-resource-record-sets \
  --hosted-zone-id $HostedZoneId \
  --change-batch file://<(cat << EOF
{
  "Changes": [
    {
      "Action": "DELETE",
      "ResourceRecordSet": {
        "Name": "b-3.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "$NLB_ZoneId",
          "DNSName": "$NLB_DNS",
          "EvaluateTargetHealth": true
        }
      }
    }
  ]
}
EOF
)
    
aws route53 delete-hosted-zone --id $HostedZoneId
aws elbv2 delete-load-balancer --load-balancer-arn $NLB_ARN
aws elbv2 delete-target-group --target-group-arn $TARGET_GROUP_B_ALL_ARN
aws elbv2 delete-target-group --target-group-arn $TARGET_GROUP_B_1_ARN
aws elbv2 delete-target-group --target-group-arn $TARGET_GROUP_B_2_ARN
aws elbv2 delete-target-group --target-group-arn $TARGET_GROUP_B_3_ARN

You must wait up to 5 minutes for the NLB deletion to complete before deleting the certificate:

aws acm delete-certificate --certificate-arn $CertificateARN

Now you can delete the CloudFormation stack.

Summary

This post explained how you can use an NLB, Route 53, and the advertised listener configuration option in Amazon MSK to support custom domain names with MSK clusters when using SASL/SCRAM authentication. You can use this solution to keep your existing Kafka bootstrap DNS name and reduce or remove the need to change client applications because of a migration, a recovery process, or multi-cluster high availability. You can also use this solution to place the MSK bootstrap and broker names under your custom domain, enabling you to keep DNS names consistent with your naming convention (for example, msk.prod.example.com).

Try the solution out for yourself, and leave your questions and feedback in the comments section.


About the Authors

Subham Rakshit is a Senior Streaming Solutions Architect for Analytics at AWS based in the UK. He works with customers to design and build streaming architectures so they can get value from analyzing their streaming data. His two little daughters keep him occupied most of the time outside work, and he loves solving jigsaw puzzles with them. Connect with him on LinkedIn.

Mark Taylor is a Senior Technical Account Manager at Amazon Web Services, working with enterprise customers to implement best practices, optimize AWS usage, and address business challenges. Prior to joining AWS, Mark spent over 16 years in networking roles across industries, including healthcare, government, education, and payments. Mark lives in Folkestone, England, with his wife and two dogs. Outside of work, he enjoys watching and playing football, watching movies, playing board games, and traveling.
