Friday, April 19, 2024

Tuning SQL Server

Note that diagnosing a query plan in MS SQL Server is not hugely different to examining Spark query plans. Some operations are conceptually the same.

Similarly, putting indexed columns into a function removes any benefit they may bring, just as it defeats Spark's predicate pushdowns. In both, queries must be sargable [Brent Ozar's blog]. SARGs are search arguments and "a sargable predicate is one of form (or which can be put into the form) column comparison-operator value". [SO]
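
For example, wrapping an indexed date column in a function hides it from the index, whereas rewriting the same filter as a range on the bare column lets the optimizer seek. A minimal sketch (dbo.Orders, OrderID and OrderDate are invented names):

-- Non-sargable: the function on OrderDate stops the index being used
select OrderID from dbo.Orders where year(OrderDate) = 2024

-- Sargable: the same filter expressed as a range on the bare column
select OrderID from dbo.Orders
where OrderDate >= '2024-01-01' and OrderDate < '2025-01-01'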

Query Plans

"You shouldn't rely too much on cost percentages in execution plans" [SO] says SQL Server luminary, Paul White. 

Heap Scan

Before we look at the scan aspect, what is a heap? "A heap is a table without a clustered index... One or more nonclustered indexes can be created on tables stored as a heap."

What's the advantage of a heap? "Because data is inserted without enforcing a strict order, the insert operation is usually faster than the equivalent insert into a clustered index." 

There are many disadvantages to a heap, notably, "Do not use a heap when the data is frequently grouped together." [Microsoft]
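
A table created without a clustered index is automatically a heap, and you can confirm this in sys.indexes, where index_id 0 means heap and index_id 1 means clustered index. A sketch with an invented table:

-- No primary key or clustered index, so this table is stored as a heap
create table dbo.Staging (Id int, Payload varchar(100))

-- Expect one row with index_id = 0 and type_desc = 'HEAP'
select object_name(object_id) as table_name, index_id, type_desc
from sys.indexes
where object_id = object_id('dbo.Staging')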

Nested Loop Joins

"The database usually performs this matching step in a nested-loops join by indexed lookups on the join key, repeated for each row from the driving table" [1] The DB filters as it goes.

Hash Joins

"Based on table and index statistics, the cost-based optmizer estimates which of these two independent tables will retyurn fewer rows after discarding filtered rows. It chooses to has the complete results from that singe-table query...

"It then executes the larger query ... returning the driving rowset. As eadh rows exits this step, the database executes thte same hash function in its join key and uses the hash-function result to go directly to the corresponding hash bucket for the other rowset. When it reaches the right hash bucket, the database searches the tiny list of rows in that bucket to find matches."

The catch with this approach is you "hope those hash buckets fit entirely in memory, but if necessary, the database temporarily dedicates scratch disk space to hold the buckets... A large prehashed rowset could require unexpected disk scratch space, performing poorly and possibly even running out of space."

"It is the most memory-intensive of any of the joins" [SO].

Sort-merge Joins

Spark does exactly this. This is where it "reads the two tables independently but, instead of matching rows by hashing, it presorts rowsets on the join key and merges the sorted lists... Once the two lists are sorted, the matching process is fast but presorting lists is expensive and nonrobust." [1] For this reason, hash joins are usually preferred: they have all the same advantages without the expensive, nonrobust presort.
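
And once more with a hint; you should see explicit sort operators on either side of the merge unless an index already delivers the rows in key order:

-- Force a sort-merge join on the same invented tables
select c.Name, o.OrderDate
from dbo.Customers c
join dbo.Orders o on o.CustomerID = c.CustomerID
option (merge join)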

In the event of a nonrobust query, SQL Server may throw error 701, "There is insufficient memory to run this query" [Microsoft docs].
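
If you suspect hashes and sorts are fighting over memory, the memory-grant DMV shows which queries are holding, or still waiting for, large grants. For example:

-- Sessions still waiting for their memory grant have grant_time = null
select session_id, requested_memory_kb, granted_memory_kb, grant_time
from sys.dm_exec_query_memory_grants
order by requested_memory_kb desc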

Indices

You can see all the indexes by executing: 

select * from sys.indexes
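
The raw view only gives object IDs, so a join to resolve names makes the output far more readable. Something like:

-- Resolve object IDs to table names and show each index's type
select object_name(i.object_id) as table_name, i.name as index_name, i.type_desc
from sys.indexes i
join sys.tables t on t.object_id = i.object_id
order by table_name, i.index_id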

But remember that "cost-based optimizers often do a terrible job if they do not have statistics on all the tables and indexes involved in the query. It is therefore imperative to maintain statistics on tables and indexes reliably; this includes regenerating statistics anytime table volumes change much or anytime tables or indexes are rebuilt." [1]
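
In SQL Server that maintenance is a one-liner, per table or across the whole database (the table name here is invented):

-- Rebuild statistics for one table, reading every row rather than sampling
update statistics dbo.Orders with fullscan

-- Or refresh statistics across the whole database
exec sp_updatestats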

Clustered vs. Non-Clustered Indexes

Clustered indexes actually change where rows are stored on disk. As a result, "there can be only one clustered index per table" [Microsoft]. Non-clustered indexes, by contrast, are just pointers to the row data.

Adding a clustered index can be an intense operation that is measured in minutes or even hours. For instance, adding a clustered index to a table of 765k rows and about 20 columns that are dates and varchars (13 columns totalling a size of about 1,800) takes about 15 minutes on a 12-core Azure SQL Server. But this one index reduced the TotalSubtreeCost from c. 131k to 71k.
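
Creating one looks like the sketch below (names invented). Since the rows are physically rewritten in key order, this is where those minutes or hours go:

-- Physically reorders the table's rows by OrderDate; only one allowed per table
create clustered index ix_orders_orderdate on dbo.Orders (OrderDate)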

Bit-map indexes

"Each stored value of a bit-mapped index points to what amounts to a list of yes/no bits that map to the whole list of table rows. These bit strings are easy to AND and OR together with other bit strings of other bit-mapped indexes... The big catch is that such bit strings are expensive to maintain in sync with frequently changing table contents... Bit-mapped indexes work best for tables that are mostly read-only... The best scenario for success is precisely the data-warehouse scenario for which bit-mapped indexes were designed." [1]

Columnstore

SQL Server seems to be stealing ideas from other big data tools as it now allows columnar storage. "Columnstore indexes are the standard for storing and querying large data warehousing fact tables." [Microsoft] Adding this to one of my tables made the cost drop two orders of magnitude... but the query still took over an hour before I killed it. I guess you should never judge a query by its cost [Brent Ozar].

[Aside: I eventually made the query work in a reasonable if not stellar duration by dropping a clustered index and having only the columnstore index, not the two together as I previously had.]
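
A sketch of that final arrangement, reusing the invented names from above:

-- Drop the rowstore clustered index, then store the whole table as a columnstore
drop index ix_orders_orderdate on dbo.Orders
create clustered columnstore index cci_orders on dbo.Orders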

[Finally, some nice people on Discord gave some links to their favourite authors for all things SQL. They are Kimberly Tripp, Itzik Ben-Gan, Niko Neugebauer, and Paul White.]

[1] SQL Tuning, Dan Tow

Saturday, April 6, 2024

When adding more CPUs does not help distressed CPUs

This is an interesting problem on Discourse where the symptoms belie the cause. Here, a very beefy Spark cluster is taking a long time to process (admittedly) a large amount of data. However, it's the CPUs that are getting hammered.

Insanely high CPU usage

The temptation at this point is to add more CPU resources but this won't help much.

When a Spark job that is not computationally intensive is using large amounts of CPU, there's an obvious suspect. Let's check the time spent in garbage collection:


Insanely large GC Times

Shuffle per worker seems modest but look at those GC Times. In a five-hour job, nearly two hours is spent just garbage collecting.

And this is something of a surprise to people new to Spark. Sure, it delivers on its promise to process more data than can fit in memory, but if you want it to be performant, you need to give it as much memory as possible.

Friday, April 5, 2024

Network Adventures in Azure Databricks

My Azure Databricks cluster could not see one of my Blob containers although it could see others in the same subscription. The error in Databricks looked something like this: 

ExecutionError: An error occurred while calling o380.ls.
: Status code: -1 error code: null error message: java.net.SocketTimeoutException: connect timed outjava.net.SocketTimeoutException: connect timed out
        at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.executeHttpOperation(AbfsRestOperation.java:423)
        at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.completeExecute(AbfsRestOperation.java:274)
        at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.lambda$execute$0(AbfsRestOperation.java:214)
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDurationOfInvocation(IOStatisticsBinding.java:464)
...

My first suspicion was that they were in different resource groups and that this could explain things.

Resource groups
"Resource groups are units of deployment in ARM [Azure Resource Manager]. 
"They are containers grouping multiple resource instances in a security and management boundary. 
"A resource group is uniquely named in a subscription. 
"Resources can be provisioned on different Azure regions and yet belong to the same resource group. 
"Resource groups provide additional services to all the resources within them. Resource groups provide metadata services, such as tagging, which enables the categorization of resources; the policy-based management of resources; RBAC; the protection of resources from accidental deletion or updates; and more... 
"They have a security boundary, and users that don't have access to a resource group cannot access resources contained within it.  Every resource instance needs to be part of a resource group; otherwise, it cannot be deployed." [Azure for Architects]
That last paragraph is interesting because I can access the container I want via the Azure portal. So, a friendly sysadmin suggested I was barking up the wrong tree and instead looked at:

Virtual Networks
"A VNet is required to host a virtual machine. It provides a secure communication mechanism between Azure resources so that they can connect to each other. 
"The VNets provide internal IP addresses to the resources, facilitate access and connectivity to other resources (including virtual machines on the same virtual network), route requests, and provide connectivity to other networks. 
"A virtual network is contained within a resource group and is hosted within a region, for example, West Europe. It cannot span multiple regions but can span all datacenters within a region, which means we can span virtual networks across multiple Availability Zones in a region. For connectivity across regions, virtual networks can be connected using VNet-to-VNet connectivity." [Azure for Architects]
Nothing obvious here. Both Databricks and the container were on the same network. However, they weren't on the same subnet.

Network Security Groups
"Subnets provide isolation within a virtual network. They can also provide a security boundary. Network security groups (NSGs) can be associated with subnets, thereby restricting or allowing specific access to IP addresses and ports. Application components with separate security and accessibility requirements should be placed within separate subnets." [Azure for Architects]
And this proved to be the problem. Databricks and the container are on the same virtual network but not the same subnet and there was an NSG blocking communication between these subnets.

Note that changes can take a few minutes to propagate, sometimes faster but sometimes slower. My sysadmin says he has seen it take up to an hour.

AWS Real Estate

Just some notes I've made playing around with AWS real estate.

ECS
Amazon's offering that scales Docker containers. Whereas EC2 is simply a remote VM, ECS is a "logical grouping of EC2 machines" [SO].

Fargate
Is a serverless version of EC2 [SO].
 
Kinesis
A proprietary Amazon Kafka replacement. While Kafka writes data locally, Kinesis uses a quorum of shards.

MSK
Amazon also offers a hosted Kafka solution called MSK (Managed Streaming for Apache Kafka).

Lambda
Runs containers (like Docker) that exist for up to 15 minutes and whose storage is ephemeral.

Glue
A little like Hive. It has crawlers: batch jobs that compile metadata, thus doing some of the job of Hive's metastore. In fact, you can configure the metastore that Spark uses to have Glue as its backing store. See the EMR entry below.

EMR
EMR is AWS's MapReduce tool on which we can run Spark. "You can configure Hive to use the AWS Glue Data Catalog as its metastore." [docs] If you want to run Spark locally but still take advantage of Glue, follow these instructions.

Athena
Athena is AWS's hosted Trino offering. You can make data in S3 buckets available to Athena by using Glue crawlers.

Step Functions
AWS's orchestration of different services within Amazon's cloud.

CodePipeline
...is AWS's CI/CD offering.

Databases
DynamoDB is a key/value store and Aurora is a distributed relational DB.