Exam DP203 SQL Management

From MillerSql.com

Revision as of 20:57, 18 November 2024

== SQL Pool Management ==

In SQL pools, the unit of scale is the Data Warehouse Unit (DWU), an abstraction of compute power.

You can scale a Synapse SQL dedicated pool through:
# '''Azure portal''': click "Scale", then drag the slider
# '''Synapse Studio''': right-click the pool, select "Scale", then drag the slider
# '''TSQL''': ALTER DATABASE mySampleDataWarehouse MODIFY (SERVICE_OBJECTIVE = 'DW300c')
# '''PowerShell''': Set-AzSqlDatabase -ResourceGroupName "resourcegroupname" -DatabaseName "mySampleDataWarehouse" -ServerName "sqlpoolservername" -RequestedServiceObjectiveName "DW300c"
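Both the TSQL and PowerShell options take a service-objective name such as 'DW300c'. As a minimal sketch of how a deployment script might generate and validate such a statement (the helper name is hypothetical and the objective list is only a representative subset, not an authoritative Azure list):

```python
# Hypothetical helper that builds the ALTER DATABASE scale statement shown
# above. VALID_OBJECTIVES is a representative subset of Gen2 service
# objectives, not an exhaustive or authoritative list.

VALID_OBJECTIVES = {"DW100c", "DW200c", "DW300c", "DW500c", "DW1000c", "DW3000c"}

def build_scale_statement(database, service_objective):
    """Return the T-SQL statement that scales `database` to `service_objective`."""
    if service_objective not in VALID_OBJECTIVES:
        raise ValueError(f"Unknown service objective: {service_objective}")
    # Bracket-quote the database name, as T-SQL identifiers allow.
    return (f"ALTER DATABASE [{database}] "
            f"MODIFY (SERVICE_OBJECTIVE = '{service_objective}')")
```

Validating the objective name up front means a typo fails in the script rather than at the server.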

== Spark Pools ==

To do the same with Spark pools, you can click "Scale" in Synapse Studio.

== Spark Pool Autoscale ==

"Auto Scale Settings" in the Azure portal, or "scale" in Synapse Studio.

AutoScale collects the following metrics as it runs:

{| class="wikitable"
!Metric
!Description
|-
|Total Pending CPU
|The total number of cores required to start execution of all pending nodes.
|-
|Total Pending Memory
|The total memory (in MB) required to start execution of all pending nodes.
|-
|Total Free CPU
|The sum of all unused cores on the active nodes.
|-
|Total Free Memory
|The sum of unused memory (in MB) on the active nodes.
|-
|Used Memory per Node
|The load on a node. A node with 10 GB of memory in use is considered to be under more load than a worker with 2 GB of used memory.
|}
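A rough sketch of how per-node figures could roll up into the aggregate metrics above (the node field names here are assumptions for illustration, not Azure's internal schema):

```python
# Illustrative roll-up of the autoscale metrics from per-node figures.
# The node fields (required_cores, required_memory_mb, free_cores,
# free_memory_mb, used_memory_mb) are assumptions for this sketch.

def collect_metrics(active_nodes, pending_nodes):
    """Aggregate the five autoscale metrics from lists of node dicts."""
    return {
        "total_pending_cpu": sum(n["required_cores"] for n in pending_nodes),
        "total_pending_memory_mb": sum(n["required_memory_mb"] for n in pending_nodes),
        "total_free_cpu": sum(n["free_cores"] for n in active_nodes),
        "total_free_memory_mb": sum(n["free_memory_mb"] for n in active_nodes),
        # Higher used memory on a node means it is under more load.
        "used_memory_per_node_mb": [n["used_memory_mb"] for n in active_nodes],
    }
```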

The following conditions then trigger a scale-up or scale-down:

{| class="wikitable"
!Scale-up
!Scale-down
|-
|Total pending CPU is greater than total free CPU for more than 1 minute.
|Total pending CPU is less than total free CPU for more than 2 minutes.
|-
|Total pending memory is greater than total free memory for more than 1 minute.
|Total pending memory is less than total free memory for more than 2 minutes.
|}
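The scale-up and scale-down rules can be sketched as a simple decision function. This is only an illustration of the table's logic, assuming one metric snapshot that has persisted for a given number of minutes; it is not the actual Azure autoscale implementation.

```python
# Sketch of the Spark pool autoscale decision rules. Metric names mirror
# the table above; the 1- and 2-minute persistence windows are modelled
# as a single `condition_minutes` argument for simplicity.

def autoscale_decision(pending_cpu, free_cpu,
                       pending_memory_mb, free_memory_mb,
                       condition_minutes):
    """Return 'scale-up', 'scale-down', or 'hold' for one metric snapshot."""
    # Scale-up: pending CPU or memory exceeds free capacity for > 1 minute.
    if condition_minutes > 1 and (pending_cpu > free_cpu or
                                  pending_memory_mb > free_memory_mb):
        return "scale-up"
    # Scale-down: pending CPU and memory stay below free capacity for > 2 minutes.
    if condition_minutes > 2 and (pending_cpu < free_cpu and
                                  pending_memory_mb < free_memory_mb):
        return "scale-down"
    return "hold"
```

Note the asymmetry in the rules above: scaling up reacts after 1 minute, while scaling down waits 2 minutes, which biases the pool toward keeping capacity available.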