Hello Nigel.
Shale is so much more practical than a lawn, but a lawn has its charm.
It’s not an old wives’ tale, although N. Wirth just uses it to illustrate how quicksort operates; he uses (low + high) div 2 like you do. However, Professor Erik D. Demaine of MIT highly praised the random partitioning of quicksort, then remarked that he didn’t show it, but that the students were supposed to have seen it in the book: Cormen, “Introduction to Algorithms”. I have read through the handout for the quicksort lecture, and it wasn’t there either. The philosophy behind it is described by Wirth, however: things fall into place faster when they are swapped over a longer distance.
I really shouldn’t speculate too much on the advantages. With random partitioning, I guess the elements are put into the right partition faster, due to the longer swapping distance. The call tree will be more unbalanced because of the differing sizes of the partitions, but maybe some middle operations are saved, for all I know.
There is of course also the fact that, however small the random number generator is, it takes something to outperform (low + high) div 2.
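For comparison, a random-pivot variant could look like the sketch below; this is my guess at what the randomized version discussed in the Cormen book amounts to (I don’t have it in front of me either), with the same scan-and-swap partition but a pivot index drawn uniformly from the current range. The names are my own.

```python
import random

def random_quicksort(a, low=0, high=None):
    """Quicksort with a uniformly random pivot instead of the middle element."""
    if high is None:
        high = len(a) - 1
    if low >= high:
        return
    pivot = a[random.randint(low, high)]  # random pivot: the only change
    i, j = low, high
    while i <= j:
        while a[i] < pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i += 1
            j -= 1
    random_quicksort(a, low, j)
    random_quicksort(a, i, high)
```

Note that picking the pivot costs one call to the generator per partition step, which is the overhead mentioned above.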
Have a nice evening.
Edit
I changed the partitioning algorithm to one that swaps over longer distances (Wirth’s), and now the time has dropped by yet another 50% (for 1000 elements).
It may also be that they had a “tutorial version” of quicksort, using the partitioning of a dataset with a random pivot as an introduction to the subject; I won’t know until I have the Cormen book in front of me.