Exam 203 back-end services Spark

From MillerSql.com

Revision as of 19:47, 15 November 2024

The second back-end service is Spark.

Languages supported in Spark include Python, Scala, Java, SQL, and C#.

To run Spark code in Synapse Studio, first create a Spark pool in the '''Manage''' - '''Apache Spark pools''' tab. Then, in the '''Develop''' tab, create a new Notebook and attach the Spark pool you created to it. Then paste the following code into a cell and run it:

<pre>
%%pyspark
df = spark.read.load('abfss://files@datalakexxxxxxx.dfs.core.windows.net/product_data/products.csv', format='csv'
## If header exists uncomment line below
##, header=True
)
display(df.limit(10))
</pre>
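For intuition about what the commented-out <code>header=True</code> option changes: with a header, the first row of the CSV supplies the column names; without it, Spark assigns positional names such as <code>_c0</code>, <code>_c1</code>. A plain-Python sketch of that behaviour (the sample rows are made up, not taken from the real products.csv):

```python
import csv
import io

# Hypothetical sample mimicking the shape of products.csv
raw = """ProductID,ProductName,Category,ListPrice
771,Mountain-100,Mountain Bikes,3399.99
772,Road-150,Road Bikes,3578.27"""

def read_rows(text, header=True):
    """Mimic the effect of the CSV header option on column naming."""
    rows = list(csv.reader(io.StringIO(text)))
    if header:
        # First row supplies the column names
        names, data = rows[0], rows[1:]
    else:
        # Fall back to positional names, like Spark's _c0, _c1, ...
        names = [f"_c{i}" for i in range(len(rows[0]))]
        data = rows
    return names, data

names, data = read_rows(raw, header=True)
print(names)      # ['ProductID', 'ProductName', 'Category', 'ListPrice']
print(len(data))  # 2
```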

Note that the first time it runs it will take several minutes to complete, because the Spark pool has to start up.

To aggregate the data, for example counting the number of products in each category:

<pre>
df_counts = df.groupby(df.Category).count()
display(df_counts)
</pre>
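Conceptually, <code>df.groupby(df.Category).count()</code> just counts rows per distinct Category value, distributed across the Spark pool. The same aggregation in plain Python (with made-up sample categories, purely for illustration):

```python
from collections import Counter

# Hypothetical Category column values (made-up sample, not real data)
categories = [
    "Mountain Bikes", "Road Bikes", "Mountain Bikes",
    "Touring Bikes", "Road Bikes",
]

# Counter does locally what groupby(Category).count() does across partitions
counts = Counter(categories)
print(counts["Mountain Bikes"])  # 2
```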