Apache Spark component

The Apache Spark component is available starting from Camel 2.17.
This documentation page covers the Apache Spark component for Apache Camel. The main purpose of the Spark integration with Camel is to provide a bridge between Camel connectors and Spark tasks. In particular, the Camel connector provides a way to route messages from various transports, dynamically choose a task to execute, use the incoming message as input data for that task, and finally deliver the results of the execution back to the Camel pipeline.

Supported architectural styles

The Spark component can be used as a driver application deployed into an application server (or executed as a fat jar). It can also be submitted as a job directly into the Spark cluster. While the Spark component is primarily designed to work as a long-running job serving as a bridge between the Spark cluster and the other endpoints, you can also use it as a fire-once short job.

Running Spark in OSGi servers

Currently the Spark component doesn't support execution in an OSGi container. Spark has been designed to be executed as a fat jar, usually submitted as a job to a cluster. For those reasons running Spark in an OSGi server is at least challenging and is not supported by Camel either.

URI format

Currently the Spark component supports only producers - it is intended to invoke a Spark job and return results. You can call RDD, data frame or Hive SQL jobs.
Spark URI format
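A minimal sketch of the general endpoint scheme, covering the three job types mentioned above:

```
spark:{rdd|dataframe|hive}
```

The job type after the spark: prefix selects whether the producer runs an RDD callback, a DataFrame callback or a Hive SQL query; the remaining options are specific to each job type and are shown in the sections below.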
RDD jobs
To invoke an RDD job, use the following URI:

Spark RDD producer

Where the rdd option refers to the name of an RDD instance stored in the Camel registry, while rddCallback refers to an implementation of the org.apache.camel.component.spark.RddCallback interface (also looked up in the registry). The RDD callback provides a single method used to apply incoming messages against the given RDD; the result of the computation is saved as the body of the exchange.

Spark RDD callback

The following snippet demonstrates how to send a message as an input to the job and return results:

Calling spark job

The RDD callback for the snippet above, registered as a Spring bean, could look as follows:

Spark RDD callback

The RDD definition in Spring could look as follows:

Spark RDD definition
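The listings referenced by the captions above are sketched below. This is a minimal sketch, assuming the spark:rdd endpoint with rdd and rddCallback options, the org.apache.camel.component.spark.RddCallback interface and Spring Java configuration; bean names such as myRdd and countLinesContaining and the testrdd.txt path are illustrative.

```
spark:rdd?rdd=#myRdd&rddCallback=#countLinesContaining
```

The callback interface exposes a single method that receives the RDD together with the payloads extracted from the incoming message, roughly:

```java
public interface RddCallback<T> {
    T onRdd(JavaRDDLike rdd, Object... payloads);
}
```

Wiring the RDD and the callback as Spring beans, and calling the job through a ProducerTemplate, could then look as follows:

```java
import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.component.spark.RddCallback;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaRDDLike;
import org.apache.spark.api.java.JavaSparkContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RddJobConfiguration {

    // Spark RDD definition: a text file loaded as an RDD of lines.
    @Bean
    JavaRDDLike myRdd(JavaSparkContext sparkContext) {
        return sparkContext.textFile("testrdd.txt");
    }

    // Spark RDD callback: counts the lines containing the pattern passed in
    // as the Camel message body.
    @Bean
    RddCallback<Long> countLinesContaining() {
        return new RddCallback<Long>() {
            @Override
            @SuppressWarnings("unchecked")
            public Long onRdd(JavaRDDLike rdd, Object... payloads) {
                String pattern = (String) payloads[0];
                JavaRDD<String> lines = (JavaRDD<String>) rdd; // myRdd above is a text-file RDD
                return lines.filter(line -> line.contains(pattern)).count();
            }
        };
    }

    // Calling the Spark job: the message body becomes the callback payload and
    // the callback result is returned as the out message body.
    public static long countErrorLines(CamelContext camelContext) {
        ProducerTemplate template = camelContext.createProducerTemplate();
        return template.requestBody(
                "spark:rdd?rdd=#myRdd&rddCallback=#countLinesContaining",
                "ERROR", long.class);
    }
}
```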
RDD jobs options
Void RDD callbacks

If your RDD callback doesn't return any value back to the Camel pipeline, you can either return null or use the VoidRddCallback base class:

Void RDD callback definition

Converting RDD callbacks

If you know what type of input data will be sent to the RDD callback, you can use the ConvertingRddCallback base class and let Camel automatically convert incoming messages before passing them to the callback:

Converting RDD callback definition

Annotated RDD callbacks

Probably the easiest way to work with RDD callbacks is to provide a class with a method marked with the @RddCallback annotation:

Annotated RDD callback definition

If you pass the CamelContext to the annotated RDD callback factory method, the created callback will be able to convert incoming payloads to match the parameters of the annotated method:

Body conversions for annotated RDD callbacks
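Hedged sketches of the three callback styles described above follow; they assume the VoidRddCallback and ConvertingRddCallback base classes, the @RddCallback annotation and its annotatedRddCallback factory method shipped with camel-spark, as well as illustrative bean, class and path names.

```java
import org.apache.camel.CamelContext;
import org.apache.camel.component.spark.ConvertingRddCallback;
import org.apache.camel.component.spark.RddCallback;
import org.apache.camel.component.spark.VoidRddCallback;
import org.apache.spark.api.java.JavaRDDLike;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RddCallbackStyles {

    // Void RDD callback: performs a side effect only; nothing is returned to
    // the Camel pipeline.
    @Bean
    RddCallback<Void> saveRdd() {
        return new VoidRddCallback() {
            @Override
            public void doOnRdd(JavaRDDLike rdd, Object... payloads) {
                rdd.saveAsTextFile("/tmp/rdd-output");
            }
        };
    }

    // Converting RDD callback: the declared payload types let Camel convert the
    // incoming payloads (for example the String "10") before the callback runs.
    @Bean
    RddCallback<Long> multiplyLineCount(CamelContext camelContext) {
        return new ConvertingRddCallback<Long>(camelContext, int.class, int.class) {
            @Override
            public Long doOnRdd(JavaRDDLike rdd, Object... payloads) {
                return rdd.count() * (int) payloads[0] * (int) payloads[1];
            }
        };
    }
}
```

An annotated RDD callback keeps the transformation in a plain class; passing the CamelContext to the factory method enables the body conversions mentioned above:

```java
import static org.apache.camel.component.spark.annotations.AnnotatedRddCallback.annotatedRddCallback;

import java.util.Arrays;
import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.component.spark.RddCallback;
import org.apache.spark.api.java.JavaRDD;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AnnotatedCallbackConfiguration {

    // Annotated RDD callback definition: the extra method parameters are filled
    // from the payloads sent through Camel.
    public static class MyTransformation {

        @org.apache.camel.component.spark.annotations.RddCallback
        public long countLines(JavaRDD<String> textFile, int first, int second) {
            return textFile.count() * first * second;
        }
    }

    // Passing the CamelContext to the factory method enables payload conversion.
    @Bean
    RddCallback<Long> rddCallback(CamelContext camelContext) {
        return annotatedRddCallback(new MyTransformation(), camelContext);
    }

    // Body conversions for annotated RDD callbacks: the first payload is already
    // an int, the second is a String that Camel converts before the call.
    public static long callJob(CamelContext camelContext) {
        ProducerTemplate template = camelContext.createProducerTemplate();
        return template.requestBody("spark:rdd?rdd=#myRdd&rddCallback=#rddCallback",
                Arrays.asList(10, "10"), long.class);
    }
}
```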
DataFrame jobs
Instead of working with RDDs, the Spark component can work with DataFrames as well. To invoke a DataFrame job, use the following URI:

Spark DataFrame producer

Where the dataFrame option refers to the name of a DataFrame instance stored in the Camel registry, while dataFrameCallback refers to an implementation of the org.apache.camel.component.spark.DataFrameCallback interface (also looked up in the registry). The DataFrame callback provides a single method used to apply incoming messages against the given DataFrame; the result of the computation is saved as the body of the exchange.

Spark DataFrame callback

The following snippet demonstrates how to send a message as an input to a job and return results:

Calling spark job

The DataFrame callback for the snippet above, registered as a Spring bean, could look as follows:

Spark DataFrame callback

The DataFrame definition in Spring could look as follows:

Spark DataFrame definition
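A hedged sketch of the DataFrame variant follows, assuming the spark:dataframe endpoint with dataFrame and dataFrameCallback options and the org.apache.camel.component.spark.DataFrameCallback interface. The sketch uses the Spark 2.x Dataset<Row> type; with the Spark 1.x versions originally bundled with Camel 2.17 the callback parameter would be a DataFrame instead. Bean names such as cars and findCarWithModel and the JSON path are illustrative.

```
spark:dataframe?dataFrame=#cars&dataFrameCallback=#findCarWithModel
```

```java
import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.component.spark.DataFrameCallback;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.hive.HiveContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataFrameJobConfiguration {

    // Spark DataFrame definition: cars loaded from a JSON file and registered
    // as a temporary table.
    @Bean
    Dataset<Row> cars(HiveContext hiveContext) {
        Dataset<Row> jsonCars = hiveContext.read().json("/var/data/cars.json");
        jsonCars.registerTempTable("cars");
        return jsonCars;
    }

    // Spark DataFrame callback: counts the cars whose model matches the
    // message body.
    @Bean
    DataFrameCallback<Long> findCarWithModel() {
        return new DataFrameCallback<Long>() {
            @Override
            public Long onDataFrame(Dataset<Row> dataFrame, Object... payloads) {
                String model = (String) payloads[0];
                return dataFrame.where(dataFrame.col("model").eqNullSafe(model)).count();
            }
        };
    }

    // Calling the DataFrame job with the model name as the message body.
    public static long countCarsOfModel(CamelContext camelContext, String model) {
        ProducerTemplate template = camelContext.createProducerTemplate();
        return template.requestBody(
                "spark:dataframe?dataFrame=#cars&dataFrameCallback=#findCarWithModel",
                model, long.class);
    }
}
```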
DataFrame jobs options
Hive jobs

Instead of working with RDDs or DataFrames, the Spark component can also receive Hive SQL queries as payloads. To send a Hive query to the Spark component, use the following URI:

Spark Hive producer

The following snippet demonstrates how to send a message as an input to a job and return results:

Calling spark job

The table we want to execute the query against should be registered in a HiveContext before we query it. For example, in Spring such a registration could look as follows:

Spark DataFrame definition
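A hedged sketch of a Hive job follows, assuming the plain spark:hive endpoint and a collect option that switches between returning the collected rows and only their count; the cars table, the JSON path and the bean names are illustrative, and the registration bean mirrors the one from the DataFrame sketch above.

```
spark:hive
```

```java
import java.util.List;
import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.hive.HiveContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HiveJobConfiguration {

    // Register the table in the HiveContext before querying it.
    @Bean
    Dataset<Row> cars(HiveContext hiveContext) {
        Dataset<Row> jsonCars = hiveContext.read().json("/var/data/cars.json");
        jsonCars.registerTempTable("cars");
        return jsonCars;
    }

    // Calling the Hive job: the message body is the Hive SQL query to execute.
    public static void queryCars(CamelContext camelContext) {
        ProducerTemplate template = camelContext.createProducerTemplate();

        // Collect the matching rows into the exchange body...
        List<?> rows = template.requestBody("spark:hive", "SELECT * FROM cars", List.class);

        // ...or, with collect=false, return only the number of matching rows.
        long carsCount = template.requestBody("spark:hive?collect=false",
                "SELECT * FROM cars", Long.class);

        System.out.println(rows.size() + " rows collected, count=" + carsCount);
    }
}
```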
Hive jobs options