Hello
This is an implementation of getMilliSec for Snow Leopard
I need to time scripts outside of Script Debugger, since its monitoring of data structures can penalize some solutions, while solutions that encapsulate their data structures escape that overhead and get an unfairly good time. I never had an OS 9 system, but getMilliSec always seemed like a good solution to me where timing is concerned.
This is by no means a perfect solution, and I guarantee nothing more than a rather coarse measurement tool, suitable for coarse measurements only. But it should be able to show you which of two alternatives is faster, with a tolerance of about a hundredth of a second.
INSTALLATION
Thanks to Adam Bell:
First of all, you should remove GetMilliSec.osax from any Scripting Additions folder.
If it is installed, it will be searched first in the “name space” of AppleScript and will rewrite calls to the getMilliSec() handler into the osax command GetMilliSec, leaving you in a position where this snippet won’t work.
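If you want to check whether a stray GetMilliSec.osax is still around, a quick sketch like the one below should do; it only looks in the usual Scripting Additions locations, so adjust the paths if you have it installed somewhere else:

-- optional sanity check (a sketch, not part of the timing code):
-- lists any GetMilliSec osax in the usual Scripting Additions folders
do shell script "ls /Library/ScriptingAdditions ~/Library/ScriptingAdditions 2>/dev/null | grep -i getmillisec || echo 'not found'"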
You need to download and install timetools 0.2.1 by Andre Berg for this script to work.
You can download timetools here.
When timetools is installed at the path of your choice, you should hardcode that path in getMilliSec().
The command-line switch we use for timetools is -ums, which gives us the uptime in milliseconds.
Save the script with the hardcoded path.
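Before wiring the path into the handler, you can verify both the path and the -ums switch with a one-liner like the one below; the path is just the one I use in the handler, so substitute your own:

-- sanity check: should return the machine's uptime in milliseconds as a string
-- (the path is an example; replace it with your hardcoded path to timetools)
do shell script "/usr/local/opt/timetools -ums"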
USAGE
Copy the contents of the script below (both the getMilliSec() handler and the timeTools script object) into the script you want to time.
Make sure the calibrate statement is run before you take any timings, as it calibrates getMilliSec().
timeTools's calibrate() -- initiates calculation of the overhead of getMilliSec()

-- some example code
display dialog (timeTools's overhead as text)
set b to getMilliSec()
-- the code you want to time goes here
set c to getMilliSec()
display dialog (c - b) as text

-- the code you need to include
on getMilliSec()
	-- change the path below to your hardcoded path to timetools
	set res to ((do shell script "/usr/local/opt/timetools -ums") as integer) - (my timeTools's overhead)
	return res
end getMilliSec

script timeTools
	property overhead : 0

	on calibrate()
		set my overhead to 0
		set a to getMilliSec() -- the first call loads things into memory and takes longer, so discard it
		set a to getMilliSec()
		set my overhead to (getMilliSec() - a)
	end calibrate
end script
EXAMPLE
I thought I should include an example of using this with Nigel Garvey’s “lotsa method” for timing.
The code is taken from Post #4 in this thread.
-- © Nigel Garvey, customized by me in order to use the new getMilliSec()

timeTools's calibrate() -- initiates calculation of the overhead of getMilliSec()
main()

on main()
	set lotsa to 500
	-- Any other preliminary values here.

	-- Dummy loop to absorb a small observed
	-- time handicap in the first repeat.
	repeat lotsa times
	end repeat

	-- Test 1.
	set t to getMilliSec()
	repeat lotsa times
		-- First test code or handler call here.
	end repeat
	set t1 to ((getMilliSec()) - t) / 1000

	-- Test 2.
	set t to getMilliSec()
	repeat lotsa times
		-- Second test code or handler call here.
	end repeat
	set t2 to ((getMilliSec()) - t) / 1000

	-- More test loops here if required.

	-- Timings.
	{t1, t2, t1 / t2} --> {0.028, 0.029, 0.965517241379}
end main

-- the code you need to include
on getMilliSec()
	-- change the path below to your hardcoded path to timetools
	set res to ((do shell script "/usr/local/opt/timetools -ums") as integer) - (my timeTools's overhead)
	return res
end getMilliSec

script timeTools
	property overhead : 0

	on calibrate()
		set my overhead to 0
		set a to getMilliSec() -- the first call loads things into memory and takes longer, so discard it
		set a to getMilliSec()
		set my overhead to (getMilliSec() - a)
	end calibrate
end script
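And just to make the example concrete, here is a purely hypothetical way of filling in the two test loops, for instance comparing building a list with set end of against concatenation; the comparison is mine, not part of Nigel's original, and it assumes the getMilliSec() handler and the timeTools script above are included and calibrated:

-- hypothetical test bodies, for illustration only
set lotsa to 500

-- Test 1: build a list with set end of
set aList to {}
set t to getMilliSec()
repeat with i from 1 to lotsa
	set end of aList to i
end repeat
set t1 to ((getMilliSec()) - t) / 1000

-- Test 2: build a list by concatenation
set aList to {}
set t to getMilliSec()
repeat with i from 1 to lotsa
	set aList to aList & i
end repeat
set t2 to ((getMilliSec()) - t) / 1000

{t1, t2}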
Best Regards
McUsr