Saturday, November 13, 2010

Tuning Sybase

A debate is raging in the office over which DB query is the most expensive. Since it's the weekend, time to fire up Fedora in VMWare and tinker with my Sybase installation.

First, you have to set up the DB to collect resource statistics. Do this with:

[sybase@vmwareFedoraII ~]$ isql -Usa -SVMWAREFEDORAII
1> exec sp_configure "allow resource limits", 1
2> go

You need to restart the server, so:

1> shutdown
2> go
Server SHUTDOWN by request.

and restart:

[root@vmwareFedoraII henryp]# /etc/rc.d/init.d/sybase start


[sybase@vmwareFedoraII ~]$ isql -Usa -SVMWAREFEDORAII
1> use henryp
2> go
1> set statistics io on
2> go
Total actual I/O cost for this command: 0.
Total writes for this command: 0
1> set showplan on
2> select * from tranche t , ioi i where i.fk_tranche =
Table: tranche (t) scan count 1, logical reads: (regular=1 apf=0 total=1),
physical reads: (regular=1 apf=0 total=1), apf IOs used=0

Table: ioi (i) scan count 2, logical reads: (regular=18 apf=0 total=18),
physical reads: (regular=8 apf=1 total=9), apf IOs used=1

Total actual I/O cost for this command: 288.

Total writes for this command: 0

Execute the same select again and you get:

Table: tranche (t) scan count 1, logical reads: (regular=1 apf=0 total=1),
physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Table: ioi (i) scan count 2, logical reads: (regular=18 apf=0 total=18),
physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Total actual I/O cost for this command: 38.

Total writes for this command: 0

Notice how the physical reads are reduced the second time around and, indeed, the actual I/O cost is an order of magnitude less.

Presumably, this is because the DB is caching this data. Restarting the server clears this cache:

[root@vmwareFedoraII henryp]# /etc/rc.d/init.d/sybase restart
Stopping sybase service: [ OK ]
Starting sybase service: [ OK ]

And running the query again with I/O statistics on gives the same actual I/O cost as when we first ran the query.

Let's see the showplan again without actually executing it:

1> set noexec on
2> select * from tranche t , ioi i where i.fk_tranche =
3> go

Total estimated I/O cost for statement 2 (at line 2): 731.

The I/O costs are a unitless number (see the Administrator's Guide to Sybase ASE 15).

Why the difference?

"The estimated cost is higher than the actual cost if some pages are already in the cache or if the statistics are incorrect. The estimated cost may be lower than the actual cost if the optimizer chooses 16K I/O, and some of the pages are in 2K cache pools, which requires many 2K I/Os. Also, if a big join forces the cache to flush its pages back to disk, repeated access may require repeated physical I/Os."

So, by a circuitous route, back to why some people in the team were seeing different estimated I/O costs for the same query. Actual costs can change for a variety of reasons - but estimated? Well, I don't know for sure, but I was also using an IDE to run some of these queries and just happened to limit the number of rows since I didn't want to print out thousands of lines. Showplan knows this and changes the estimate accordingly. Gotcha.
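If the tool speaks JDBC, capping the rows is a one-liner via Statement.setMaxRows(). This is only an illustrative sketch: it assumes Sybase's jConnect driver, the connection details and credentials are invented, the join column (t.pk) is a guess since the post truncates the query, and it needs a live server to run.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RowLimitDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical jConnect URL and credentials - not from the post.
        Connection conn = DriverManager.getConnection(
                "jdbc:sybase:Tds:vmwareFedoraII:5000/henryp", "sa", "");
        Statement stmt = conn.createStatement();

        // Many IDEs quietly do this so they don't fetch thousands of
        // lines; the server can then plan (and estimate) for a much
        // smaller result set, changing the showplan estimate.
        stmt.setMaxRows(100);

        // t.pk is a placeholder: the post truncates the predicate.
        ResultSet rs = stmt.executeQuery(
                "select * from tranche t, ioi i where i.fk_tranche = t.pk");
        while (rs.next()) {
            // at most 100 rows arrive here
        }
        rs.close();
        stmt.close();
        conn.close();
    }
}
```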

Of course, if the data changes then the estimate will change too.

Sunday, November 7, 2010

ByteArrayInputStream is not terribly efficient...

A question came my way on Friday: if Java New I/O (NIO) is so efficient, why doesn't using it to read a big XML file into memory and wrapping the result in a ByteArrayInputStream make parsing faster?

The answer lies in what ByteArrayInputStream actually does. It's nothing more than a thin wrapper around a byte[] that allows calling code to think it is dealing with an InputStream. But each call to read(byte[], int, int) copies a fragment of the array into the caller's buffer (so that the caller cannot modify the underlying array).

The problem is that by the time you've reached the end, you've copied the whole array. With large amounts of data, that means a lot of memory being allocated and garbage collected, and you'll soon see that the savings of NIO have been more than offset by the activity of ByteArrayInputStream.
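A small demo of that per-read copy, using nothing beyond java.io: each read() call copies a fragment of the backing array into the caller's buffer, so by end-of-stream the whole array has been copied once more.

```java
import java.io.ByteArrayInputStream;

public class CopyDemo {
    public static void main(String[] args) {
        byte[] data = new byte[1024 * 1024]; // 1 MB already in memory
        ByteArrayInputStream in = new ByteArrayInputStream(data);

        byte[] chunk = new byte[8192];
        long copied = 0;
        int n;
        // Each read() arraycopy()s a fragment of 'data' into 'chunk';
        // by the time read() returns -1, every byte has been copied.
        while ((n = in.read(chunk, 0, chunk.length)) != -1) {
            copied += n;
        }
        System.out.println(copied); // equals data.length: 1048576
    }
}
```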

After all, one of the benefits of NIO is minimizing copying. From Ron Hitchens's excellent Java NIO:

"[in normal Java IO] The disk controller writes the data directly into a kernel memory buffer by DMA [Direct Memory Access] without further assistance from the main CPU. Once the disk controller finishes filling the buffer, the kernel copies the data from the temporary buffer in kernel space to the buffer specified by the process when it requested the read() operation... [C]opying from kernel space to the final user buffer seems like extra work."

(Java NIO, pp. 13-14).

"By mapping a kernel space address to the same physical address as a virtual address in user space [as with NIO], the DMA hardware (which can access only physical memory addresses) can fill a buffer that is simultaneously visible to both the kernel and a user space process."

(ibid, p15).

Wednesday, November 3, 2010

Unfair's Fair

This conundrum was on our internal mailing lists:

Take a ReentrantReadWriteLock. One thread calls readLock() to get the read lock and then calls lock() on it.

A second thread calls writeLock() and also calls lock() on the result. The first thread still holds the read lock, so the write lock blocks. No surprises there.

But then a third thread also gets a read lock, calls lock() on it, and it too blocks - even though it is a read lock.

Surprised? Why should a read block when a fellow read holds the lock? Doesn't the JavaDoc for the ReadWriteLock interface say:

"The read lock may be held simultaneously by multiple reader threads"


Well, this is a special case. Yes, usually if a read lock is being held then other reading threads can lock even if one or more writer threads are attempting to take the write lock. This is true irrespective of whether the ReentrantReadWriteLock is fair or not.

The special case is this: if a read lock is held but the next bid for the lock comes from an attempt to attain a write lock, subsequent read locks will block and wait for the write operation to complete.
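The special case above can be reproduced in a few lines. This is a sketch, not a rigorous test: the sleep and timeout values are arbitrary, and it relies on the timed tryLock() going through the same queue check as lock() does.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadBlocksDemo {
    public static void main(String[] args) throws Exception {
        ReentrantReadWriteLock rwl = new ReentrantReadWriteLock(); // non-fair
        rwl.readLock().lock(); // thread 1 (main): hold the read lock

        // Thread 2: a writer queues up and blocks behind the read lock.
        // Daemon, so the JVM can exit while it is still parked.
        Thread writer = new Thread(() -> rwl.writeLock().lock());
        writer.setDaemon(true);
        writer.start();
        Thread.sleep(500); // give the writer time to reach the queue head

        // Thread 3: a fresh reader now blocks too, because the queued
        // writer makes readerShouldBlock() return true.
        final boolean[] acquired = new boolean[1];
        Thread reader = new Thread(() -> {
            try {
                acquired[0] = rwl.readLock().tryLock(500, TimeUnit.MILLISECONDS);
            } catch (InterruptedException ignored) { }
        });
        reader.start();
        reader.join();
        System.out.println("second reader acquired: " + acquired[0]);
    }
}
```

The second reader times out and reports false, even though only a read lock is held.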

You won't find this in the JavaDocs but in a small comment in the ReentrantReadWriteLock.NonfairSync code. I found this snippet in Sun's 1.6.0_05 implementation while stepping through the code to diagnose this behaviour:

final boolean readerShouldBlock(Thread current) {
    /* As a heuristic to avoid indefinite writer starvation,
     * block if the thread that momentarily appears to be head
     * of queue, if one exists, is a waiting writer. This is
     * only a probabilistic effect since a new reader will not
     * block if there is a waiting writer behind other enabled
     * readers that have not yet drained from the queue.
     */
    return apparentlyFirstQueuedIsExclusive();
}


This may or may not be the case on other JVMs.