Amazon EMR 7.1 runtime for Apache Spark and Iceberg can run Spark workloads 2.7 times faster than Apache Spark 3.5.1 and Iceberg 1.5.2

In this post, we explore the performance benefits of using the Amazon EMR runtime for Apache Spark and Apache Iceberg compared to running the same workloads with open source Spark 3.5.1 on Iceberg tables. Iceberg is a popular open source high-performance format for large analytic tables. Our benchmarks demonstrate that Amazon EMR can run TPC-DS 3 TB workloads 2.7 times faster, reducing the runtime from 1.548 hours to 0.564 hours. Additionally, cost efficiency improves by 2.2 times, with the total cost decreasing from $16.09 to $7.23 when using Amazon Elastic Compute Cloud (Amazon EC2) On-Demand r5d.4xlarge instances, providing observable gains for data processing tasks.

The Amazon EMR runtime for Apache Spark offers a high-performance runtime environment while maintaining 100% API compatibility with open source Spark and the Iceberg table format. In Run Apache Spark 3.5.1 workloads 4.5 times faster with Amazon EMR runtime for Apache Spark, we detailed some of the optimizations, showing a runtime improvement of 4.5 times and 2.8 times better price-performance compared to open source Spark 3.5.1 on the TPC-DS 3 TB benchmark. However, many of the optimizations are geared towards DataSource V1, whereas Iceberg uses Spark DataSource V2. Recognizing this, we have focused on migrating some of the existing optimizations in the EMR runtime for Spark to DataSource V2 and introducing Iceberg-specific enhancements. These improvements build on top of the Spark runtime improvements in query planning, physical plan operators, and optimizations with Amazon Simple Storage Service (Amazon S3) and the Java runtime. We have added eight new optimizations incrementally since the Amazon EMR 6.15 release in 2023; they are present in Amazon EMR 7.1 and turned on by default. Some of the improvements include the following (a short configuration sketch follows the list):

  • Optimizing DataSource V2 in Spark:
    • Dynamic filtering on non-partitioned columns
    • Removing redundant broadcast hash joins
    • Partial hash aggregate pushdowns
    • Bloom filter-based joins
  • Iceberg-specific enhancements:
    • Data prefetch
    • Support for file size-based estimations
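
These optimizations are controlled through ordinary Spark configuration and require no code changes. As a minimal PySpark sketch (the flag below is the one the benchmark job later in this post passes via --conf; the app name is arbitrary), the Iceberg data prefetch enhancement can be toggled at session creation:

from pyspark.sql import SparkSession

# Minimal sketch: the EMR-specific Iceberg optimizations are on by default in
# Amazon EMR 7.1. The data prefetch enhancement, for example, is governed by
# the flag below; set it to "false" to measure a run without it.
spark = (
    SparkSession.builder
    .appName("iceberg-data-prefetch-toggle")
    .config("spark.sql.iceberg.data-prefetch.enabled", "true")
    .getOrCreate()
)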

Amazon EMR on EC2, Amazon EMR Serverless, Amazon EMR on Amazon EKS, and Amazon EMR on AWS Outposts all use the optimized runtimes. Refer to Working with Apache Iceberg in Amazon EMR and Best practices for optimizing Apache Iceberg workloads for more details.

Benchmark results for Amazon EMR 7.1 vs. open source Spark 3.5.1 and Iceberg 1.5.2

To assess the Spark engine’s performance with the Iceberg table format, we performed benchmark tests using the 3 TB TPC-DS dataset, version 2.13 (our results derived from the TPC-DS dataset are not directly comparable to the official TPC-DS results due to setup differences). Benchmark tests for the EMR runtime for Spark and Iceberg were performed on Amazon EMR 7.1 clusters with Spark 3.5.0 and Iceberg 1.4.3-amzn-0, and open source Spark 3.5.1 and Iceberg 1.5.2 were deployed on EC2 clusters designated for the open source runs.

The setup instructions and technical details are available in our GitHub repository. To minimize the influence of external catalogs like AWS Glue and Hive, we used the Hadoop catalog for the Iceberg tables. This uses the underlying file system, specifically Amazon S3, as the catalog. We can define this setup by configuring the property spark.sql.catalog.<catalog_name>.type. The fact tables used the default partitioning by the date column, with the number of partitions per table varying from 200 to 2,100. No precalculated statistics were used for these tables.
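
For illustration, the following PySpark sketch wires up a Hadoop catalog named local, mirroring the catalog properties passed to the benchmark jobs later in this post. The bucket and warehouse path are placeholders, and the Iceberg runtime jars are assumed to be on the classpath:

from pyspark.sql import SparkSession

# Minimal sketch of a filesystem-backed (Hadoop) Iceberg catalog. The catalog
# name "local" and its properties mirror the benchmark commands in this post.
spark = (
    SparkSession.builder
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.local.type", "hadoop")  # file system as the catalog
    .config("spark.sql.catalog.local.warehouse", "s3://<bucket>/<warehouse_path>/")
    .config("spark.sql.defaultCatalog", "local")
    .getOrCreate()
)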

We ran a total of 104 SparkSQL queries in three sequential rounds, and the average runtime of each query across these rounds was taken for comparison. The average runtime for the three rounds on Amazon EMR 7.1 with Iceberg enabled was 0.56 hours, demonstrating a 2.7-fold speed increase compared to open source Spark 3.5.1 and Iceberg 1.5.2. The following figure presents the total runtimes in seconds.

The following table summarizes the metrics.

Metric Amazon EMR 7.1 on EC2 Open Source Spark 3.5.1 and Iceberg 1.5.2
Average runtime in seconds 2033.17 5575.19
Geometric mean over queries in seconds 10.13153 20.34651
Cost* $7.23 $16.09

*Detailed cost estimates are discussed later in this post.

The following chart demonstrates the per-query performance improvement of Amazon EMR 7.1 relative to open source Spark 3.5.1 and Iceberg 1.5.2. The extent of the speedup varies from one query to another, ranging from 9.6 times faster for q93 to 1.04 times faster for q34, with Amazon EMR outperforming open source Spark on Iceberg tables. The horizontal axis arranges the TPC-DS 3 TB benchmark queries in descending order based on the performance improvement seen with Amazon EMR, and the vertical axis depicts the magnitude of this speedup.

Cost comparison

Our benchmark provides the total runtime and geometric mean data to assess the performance of Spark and Iceberg in a complex, real-world decision support scenario. For additional insight, we also examine the cost aspect. We calculate cost estimates using formulas that account for EC2 On-Demand instances, Amazon Elastic Block Store (Amazon EBS) volumes, and Amazon EMR expenses; a worked example follows the list.

  • Amazon EC2 cost (includes SSD cost) = number of instances * r5d.4xlarge hourly rate * job runtime in hours
    • r5d.4xlarge hourly rate = $1.152 per hour
  • Root Amazon EBS cost = number of instances * Amazon EBS per GB-hour rate * root EBS volume size * job runtime in hours
  • Amazon EMR cost = number of instances * r5d.4xlarge Amazon EMR rate * job runtime in hours
    • r5d.4xlarge Amazon EMR rate = $0.27 per hour
  • Total cost = Amazon EC2 cost + root Amazon EBS cost + Amazon EMR cost
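
As a worked example, the following Python sketch reproduces the Amazon EMR 7.1 figures in the cost table below. The EC2 and EMR hourly rates come from the formulas above; the EBS GB-hour rate is an assumption derived from the gp2 price of $0.10 per GB-month (roughly 730 hours per month), so verify current pricing for your Region.

# Worked cost example for the Amazon EMR 7.1 run (9 instances, 0.564 hours).
instances = 9
runtime_hours = 0.564
ec2_hourly_rate = 1.152        # r5d.4xlarge On-Demand, USD per hour
emr_hourly_rate = 0.27         # r5d.4xlarge Amazon EMR rate, USD per hour
ebs_gb = 20                    # root EBS volume size per instance, GB
ebs_gb_hour_rate = 0.10 / 730  # assumed gp2 rate converted to USD per GB-hour

ec2_cost = instances * ec2_hourly_rate * runtime_hours            # ~= $5.85
ebs_cost = instances * ebs_gb_hour_rate * ebs_gb * runtime_hours  # ~= $0.01
emr_cost = instances * emr_hourly_rate * runtime_hours            # ~= $1.37
total = ec2_cost + ebs_cost + emr_cost
print(f"Total: ${total:.2f}")  # Total: $7.23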

The calculations reveal that the Amazon EMR 7.1 benchmark yields a 2.2-fold cost efficiency improvement over open source Spark 3.5.1 and Iceberg 1.5.2 for running the benchmark job.

Metric Amazon EMR 7.1 Open Source Spark 3.5.1 and Iceberg 1.5.2
Runtime in hours 0.564 1.548
Number of EC2 instances 9 9
Amazon EBS size 20 GB 20 GB
Amazon EC2 cost $5.85 $16.05
Amazon EBS cost $0.01 $0.04
Amazon EMR cost $1.37 $0
Total cost $7.23 $16.09
Cost savings Amazon EMR 7.1 is 2.2 times better Baseline

In addition to the time-based metrics discussed so far, data from Spark event logs shows that Amazon EMR 7.1 scanned approximately 3.4 times less data from Amazon S3 and 4.1 times fewer files than the open source version in the TPC-DS 3 TB benchmark. This reduction in Amazon S3 data scanning contributes directly to cost savings for Amazon EMR workloads.

Run open source Spark benchmarks on Iceberg tables

We used separate EC2 clusters, each equipped with nine r5d.4xlarge instances, for testing both open source Spark 3.5.1 and Iceberg 1.5.2 and Amazon EMR 7.1. The primary node was equipped with 16 vCPU and 128 GB of memory, and the eight worker nodes together had 128 vCPU and 1024 GB of memory. We ran tests using the Amazon EMR default settings to showcase the typical user experience and minimally adjusted the settings of Spark and Iceberg to maintain a balanced comparison.

The following table summarizes the Amazon EC2 configurations for the primary node and eight worker nodes of type r5d.4xlarge.

EC2 Instance vCPU Memory (GiB) Instance Storage (GB) EBS Root Volume (GB)
r5d.4xlarge 16 128 2 x 300 NVMe SSD 20 GB

Prerequisites

The following prerequisites are required to run the benchmarks:

  1. Using the instructions in the emr-spark-benchmark GitHub repo, set up the TPC-DS source data in your S3 bucket and on your local computer.
  2. Build the benchmark application following the steps provided in Steps to build spark-benchmark-assembly application and copy the benchmark application to your S3 bucket. Alternatively, copy spark-benchmark-assembly-3.5.1.jar to your S3 bucket.
  3. Create Iceberg tables from the TPC-DS source data. Follow the instructions on GitHub to create Iceberg tables using the Hadoop catalog. For example, the following code uses an EMR 7.1 cluster with Iceberg enabled to create the tables:
aws emr add-steps --cluster-id <cluster-id> --steps Type=Spark,Name="Create Iceberg Tables",\
Args=[--class,com.amazonaws.eks.tpcds.CreateIcebergTables,\
--conf,spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions,\
--conf,spark.sql.catalog.hadoop_catalog=org.apache.iceberg.spark.SparkCatalog,\
--conf,spark.sql.catalog.hadoop_catalog.type=hadoop,\
--conf,spark.sql.catalog.hadoop_catalog.warehouse=s3://<bucket>/<warehouse_path>/,\
--conf,spark.sql.catalog.hadoop_catalog.io-impl=org.apache.iceberg.aws.s3.S3FileIO,\
s3://<bucket>/<jar_location>/spark-benchmark-assembly-3.5.1.jar,\
s3://blogpost-sparkoneks-us-east-1/blog/BLOG_TPCDS-TEST-3T-partitioned/,\
/home/hadoop/tpcds-kit/tools,parquet,3000,true,<database_name>,true,true],ActionOnFailure=CONTINUE \
--region <aws-region>

Note the Hadoop catalog warehouse location and database name from the preceding step. We use the same tables to run benchmarks with Amazon EMR 7.1 and open source Spark and Iceberg.

This benchmark application is built from the branch tpcds-v2.13_iceberg. If you’re building a new benchmark application, switch to the correct branch after downloading the source code from the GitHub repo.

Create and configure a YARN cluster on Amazon EC2

To compare Iceberg performance between Amazon EMR on Amazon EC2 and open source Spark on Amazon EC2, follow the instructions in the emr-spark-benchmark GitHub repo to create an open source Spark cluster on Amazon EC2 using Flintrock with eight worker nodes.

Based on the cluster selection for this test, we use the Spark configurations shown in the spark-submit command in the following section.

Run the TPC-DS benchmark with Apache Spark 3.5.1 and Iceberg 1.5.2

Complete the following steps to run the TPC-DS benchmark:

  1. Log in to the open source cluster primary node using flintrock login $CLUSTER_NAME.
  2. Submit your Spark job:
    1. Choose the correct Iceberg catalog warehouse location and database that contains the created Iceberg tables.
    2. The results are created in s3://<YOUR_S3_BUCKET>/benchmark_run.
    3. You can monitor progress in /media/ephemeral0/spark_run.log.
spark-submit \
--master yarn \
--deploy-mode client \
--class com.amazonaws.eks.tpcds.BenchmarkSQL \
--conf spark.driver.cores=4 \
--conf spark.driver.memory=10g \
--conf spark.executor.cores=16 \
--conf spark.executor.memory=100g \
--conf spark.executor.instances=8 \
--conf spark.network.timeout=2000 \
--conf spark.executor.heartbeatInterval=300s \
--conf spark.dynamicAllocation.enabled=false \
--conf spark.shuffle.service.enabled=false \
--conf spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.InstanceProfileCredentialsProvider \
--conf spark.hadoop.fs.s3.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
--conf spark.jars.packages=org.apache.hadoop:hadoop-aws:3.3.4,org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.2,org.apache.iceberg:iceberg-aws-bundle:1.5.2 \
--conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
--conf spark.sql.catalog.local=org.apache.iceberg.spark.SparkCatalog \
--conf spark.sql.catalog.local.type=hadoop \
--conf spark.sql.catalog.local.warehouse=s3a://<YOUR_S3_BUCKET>/<warehouse_path>/ \
--conf spark.sql.defaultCatalog=local \
--conf spark.sql.catalog.local.io-impl=org.apache.iceberg.aws.s3.S3FileIO \
spark-benchmark-assembly-3.5.1.jar \
s3://<YOUR_S3_BUCKET>/benchmark_run 3000 1 false \
q1-v2.13,q10-v2.13,q11-v2.13,q12-v2.13,q13-v2.13,q14a-v2.13,q14b-v2.13,q15-v2.13,q16-v2.13,\
q17-v2.13,q18-v2.13,q19-v2.13,q2-v2.13,q20-v2.13,q21-v2.13,q22-v2.13,q23a-v2.13,q23b-v2.13,\
q24a-v2.13,q24b-v2.13,q25-v2.13,q26-v2.13,q27-v2.13,q28-v2.13,q29-v2.13,q3-v2.13,q30-v2.13,\
q31-v2.13,q32-v2.13,q33-v2.13,q34-v2.13,q35-v2.13,q36-v2.13,q37-v2.13,q38-v2.13,q39a-v2.13,\
q39b-v2.13,q4-v2.13,q40-v2.13,q41-v2.13,q42-v2.13,q43-v2.13,q44-v2.13,q45-v2.13,q46-v2.13,\
q47-v2.13,q48-v2.13,q49-v2.13,q5-v2.13,q50-v2.13,q51-v2.13,q52-v2.13,q53-v2.13,q54-v2.13,\
q55-v2.13,q56-v2.13,q57-v2.13,q58-v2.13,q59-v2.13,q6-v2.13,q60-v2.13,q61-v2.13,q62-v2.13,\
q63-v2.13,q64-v2.13,q65-v2.13,q66-v2.13,q67-v2.13,q68-v2.13,q69-v2.13,q7-v2.13,q70-v2.13,\
q71-v2.13,q72-v2.13,q73-v2.13,q74-v2.13,q75-v2.13,q76-v2.13,q77-v2.13,q78-v2.13,q79-v2.13,\
q8-v2.13,q80-v2.13,q81-v2.13,q82-v2.13,q83-v2.13,q84-v2.13,q85-v2.13,q86-v2.13,q87-v2.13,\
q88-v2.13,q89-v2.13,q9-v2.13,q90-v2.13,q91-v2.13,q92-v2.13,q93-v2.13,q94-v2.13,q95-v2.13,\
q96-v2.13,q97-v2.13,q98-v2.13,q99-v2.13,ss_max-v2.13 \
true <database> > /media/ephemeral0/spark_run.log 2>&1 &!

Summarize the results

After the Spark job finishes, retrieve the test result file from the output S3 bucket at s3://<YOUR_S3_BUCKET>/benchmark_run/timestamp=xxxx/summary.csv/xxx.csv. This can be done either through the Amazon S3 console by navigating to the specified bucket location or by using the AWS Command Line Interface (AWS CLI). The Spark benchmark application organizes the data by creating a timestamp folder and placing a summary file within a folder labeled summary.csv. The output CSV files contain four columns without headers:

  • Query name
  • Median time
  • Minimum time
  • Maximum time

With the data from three separate test runs with one iteration each time, we can calculate the average and geometric mean of the benchmark runtimes.
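
The following Python sketch illustrates that aggregation. It is a minimal example that assumes the three summary.csv outputs have been downloaded locally under hypothetical file names, and it uses the median column of each file.

import csv
from statistics import geometric_mean

# Hypothetical local copies of the three summary.csv benchmark outputs.
runs = ["run1_summary.csv", "run2_summary.csv", "run3_summary.csv"]

# Collect the median runtime of each query from every run (columns are:
# query name, median time, minimum time, maximum time; no header row).
medians = {}
for path in runs:
    with open(path, newline="") as f:
        for name, median, _minimum, _maximum in csv.reader(f):
            medians.setdefault(name, []).append(float(median))

# Average each query across the three runs, then summarize.
avg_per_query = {q: sum(t) / len(t) for q, t in medians.items()}
print("Total runtime (s):", round(sum(avg_per_query.values()), 2))
print("Geometric mean (s):", round(geometric_mean(avg_per_query.values()), 5))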

Run the TPC-DS benchmark with the EMR runtime for Spark

Most of the instructions are similar to Steps to run Spark Benchmarking, with a few Iceberg-specific details.

Prerequisites

Complete the following prerequisite steps:

  1. Run aws configure to configure the AWS CLI shell to point to the benchmarking AWS account. Refer to Configure the AWS CLI for instructions.
  2. Upload the benchmark application JAR file to Amazon S3.

Deploy the EMR cluster and run the benchmark job

Complete the following steps to run the benchmark job:

  1. Use the AWS CLI command as shown in Deploy EMR on EC2 Cluster and run benchmark job to spin up an EMR on EC2 cluster. Make sure to enable Iceberg. See Create an Iceberg cluster for more details. Choose the correct Amazon EMR version, root volume size, and the same resource configuration as the open source Flintrock setup. Refer to create-cluster for a detailed description of the AWS CLI options.
  2. Store the cluster ID from the response. We need it for the next step.
  3. Submit the benchmark job in Amazon EMR using add-steps from the AWS CLI:
    1. Replace <cluster-id> with the cluster ID from Step 2.
    2. The benchmark application is at s3://<your-bucket>/spark-benchmark-assembly-3.5.1.jar.
    3. Choose the correct Iceberg catalog warehouse location and database that contains the created Iceberg tables. This should be the same as the one used for the open source TPC-DS benchmark run.
    4. The results will be in s3://<your-bucket>/benchmark_run.
aws emr add-steps --cluster-id <cluster-id> \
--steps Type=Spark,Name="SPARK Iceberg EMR TPCDS Benchmark Job",\
Args=[--class,com.amazonaws.eks.tpcds.BenchmarkSQL,\
--conf,spark.driver.cores=4,\
--conf,spark.driver.memory=10g,\
--conf,spark.executor.cores=16,\
--conf,spark.executor.memory=100g,\
--conf,spark.executor.instances=8,\
--conf,spark.network.timeout=2000,\
--conf,spark.executor.heartbeatInterval=300s,\
--conf,spark.dynamicAllocation.enabled=false,\
--conf,spark.shuffle.service.enabled=false,\
--conf,spark.sql.iceberg.data-prefetch.enabled=true,\
--conf,spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions,\
--conf,spark.sql.catalog.local=org.apache.iceberg.spark.SparkCatalog,\
--conf,spark.sql.catalog.local.type=hadoop,\
--conf,spark.sql.catalog.local.warehouse=s3://<your-bucket>/<warehouse-path>,\
--conf,spark.sql.defaultCatalog=local,\
--conf,spark.sql.catalog.local.io-impl=org.apache.iceberg.aws.s3.S3FileIO,\
s3://<your-bucket>/spark-benchmark-assembly-3.5.1.jar,\
s3://<your-bucket>/benchmark_run,3000,1,false,\
'q1-v2.13,q10-v2.13,q11-v2.13,q12-v2.13,q13-v2.13,q14a-v2.13,
q14b-v2.13,q15-v2.13,q16-v2.13,q17-v2.13,q18-v2.13,q19-v2.13,
q2-v2.13,q20-v2.13,q21-v2.13,q22-v2.13,q23a-v2.13,q23b-v2.13,
q24a-v2.13,q24b-v2.13,q25-v2.13,q26-v2.13,q27-v2.13,q28-v2.13,
q29-v2.13,q3-v2.13,q30-v2.13,q31-v2.13,q32-v2.13,q33-v2.13,
q34-v2.13,q35-v2.13,q36-v2.13,q37-v2.13,q38-v2.13,q39a-v2.13,
q39b-v2.13,q4-v2.13,q40-v2.13,q41-v2.13,q42-v2.13,q43-v2.13,
q44-v2.13,q45-v2.13,q46-v2.13,q47-v2.13,q48-v2.13,q49-v2.13,
q5-v2.13,q50-v2.13,q51-v2.13,q52-v2.13,q53-v2.13,q54-v2.13,
q55-v2.13,q56-v2.13,q57-v2.13,q58-v2.13,q59-v2.13,q6-v2.13,
q60-v2.13,q61-v2.13,q62-v2.13,q63-v2.13,q64-v2.13,q65-v2.13,
q66-v2.13,q67-v2.13,q68-v2.13,q69-v2.13,q7-v2.13,q70-v2.13,
q71-v2.13,q72-v2.13,q73-v2.13,q74-v2.13,q75-v2.13,q76-v2.13,
q77-v2.13,q78-v2.13,q79-v2.13,q8-v2.13,q80-v2.13,q81-v2.13,
q82-v2.13,q83-v2.13,q84-v2.13,q85-v2.13,q86-v2.13,q87-v2.13,
q88-v2.13,q89-v2.13,q9-v2.13,q90-v2.13,q91-v2.13,q92-v2.13,
q93-v2.13,q94-v2.13,q95-v2.13,q96-v2.13,q97-v2.13,q98-v2.13,
q99-v2.13,ss_max-v2.13',true,<database>],ActionOnFailure=CONTINUE \
--region <aws-region>

Summarize the results

After the step completes, you can see the summarized benchmark result at s3://<YOUR_S3_BUCKET>/benchmark_run/timestamp=xxxx/summary.csv/xxx.csv, in the same way as the previous run, and compute the average and geometric mean of the query runtimes.

Clean up

To prevent any future charges, delete the resources you created by following the instructions provided in the Cleanup section of the GitHub repository.

Summary

Amazon EMR is continuously improving the EMR runtime for Spark when used with Iceberg tables, achieving performance that is 2.7 times faster than open source Spark 3.5.1 and Iceberg 1.5.2 on TPC-DS 3 TB, v2.13. We encourage you to keep up to date with the latest Amazon EMR releases to fully benefit from ongoing performance improvements.

To stay informed, subscribe to the AWS Big Data Blog’s RSS feed, where you’ll find updates on the EMR runtime for Spark and Iceberg, as well as tips on configuration best practices and tuning recommendations.


About the authors

Hari Kishore Chaparala is a software development engineer for Amazon EMR at Amazon Web Services.

Udit Mehrotra is an Engineering Manager for Amazon EMR at Amazon Web Services.
