Hi,
I know that scripting additions exist for accurately estimating the running time of your scripts, but sometimes it is convenient to have a (less accurate) way to measure the time needed to execute some code without installing any extensions. Below is a script that I have been using successfully for some time, so I think it is ready to be shared.
The script provides a benchmark() handler that takes three arguments: the handler or script object whose performance should be measured; the number of times it should be executed; and a tolerance for the time measurement. Besides the measured time (in milliseconds), benchmark() also computes an estimate of the error on that measurement, so you get an idea of how accurate it is. Here is an example of how the script can be used:
-- Load the Benchmark script (which is assumed to be in the same folder as this one)
tell application "Finder" to set scriptPath to (container of (path to me)) as alias
set B to run script (((scriptPath as text) & "Benchmark.applescript") as alias)
-- Run benchmarks!
B's benchmark(test1, 600, "10%")
B's benchmark(Test2, 5, 15 / 100)
B's benchmark(test3, 150, missing value) -- Use missing value if you do not want to specify a tolerance
B's benchmarkResultAsText()
on test1()
    delay 1 / 60 -- ≈ 16.7 ms
end test1

script Test2
    set bigList to {}
    repeat with n from 1 to 5000
        copy n to the end of bigList
    end repeat
end script

on test3()
    script Wrapped -- This script must be enclosed in a handler so that its properties are re-initialized every time it is run
        property bigList : {}
        set bigListRef to a reference to bigList
        repeat with n from 1 to 10000
            copy n to the end of bigListRef
        end repeat
    end script
    run Wrapped
end test3
For each benchmark, the resulting report shows the average time per run, its absolute error in milliseconds, and the relative uncertainty of the measurement.
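As a sanity check on the error estimate, consider test1 above: each run delays about 1/60 s (≈16.7 ms), so 600 repetitions take roughly 10 s. With the 1-second resolution of "time of (current date)", the relative error is then about 100 × 1/10 = 10%, which is exactly the requested tolerance. The same arithmetic, sketched with illustrative numbers (the 10-second figure is an assumption for the sake of the example, not a measurement):

```applescript
set kTimeResolution to 1 -- seconds; resolution of "time of (current date)"
set numRepeat to 600
set elapsedTime to 10 -- seconds; ≈ 600 × (1/60) s, assumed for illustration
set avgTime to (elapsedTime / numRepeat) * 1000 -- ≈ 16.7 ms per run
set avgError to 100 * kTimeResolution / elapsedTime -- ≈ 10 %
set avgDelta to 1000 * kTimeResolution / numRepeat -- ≈ 1.7 ms
```

Note that halving the tolerance requires doubling the total running time; this is why benchmark() below computes a minimum duration from the tolerance and refuses to report a result when the measured time falls below it.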
And here is the code of the Benchmark script:
property kTimeResolution : 1 -- in seconds
property pBenchmarkReport : {}
(*
Benchmark a piece of code by running it several times and measuring the average time per run.
@param theCode: a handler or a script
@param numRepeat: the number of times the code should be run
@param theUncertainty: the tolerated error on the measurement (e.g., "5%" or 0.05)
@return the average time (in milliseconds) per execution of the code, or missing value if the measurement failed
*)
on benchmark(theCode, numRepeat, theUncertainty)
    local minimumDuration, startTime, elapsedTime, avgTime, avgError, avgDelta
    if theUncertainty is missing value then
        set theUncertainty to 1 -- 100%
    else if class of theUncertainty is text then -- Assumes it is a percentage, e.g., "10%"
        set theUncertainty to ((text 1 thru -2 of theUncertainty) as real) / 100
    end if
    if theCode's class is handler then
        script Wrapper
            property exec : theCode
            property name : "Handler"
            exec()
        end script
    else if theCode's class is script then
        set Wrapper to theCode
    else
        set the end of pBenchmarkReport to {what:theCode's class, avg:missing value, delta:missing value, uncertainty:theUncertainty}
        return missing value
    end if
    set minimumDuration to kTimeResolution / theUncertainty
    -- Start benchmark! --------------------------
    set startTime to time of (current date)
    repeat numRepeat times -- The overhead of the repeat and run commands is considered unimportant
        run Wrapper
    end repeat
    set elapsedTime to (time of (current date)) - startTime
    ----------------------------------------------
    if elapsedTime < minimumDuration then
        set the end of pBenchmarkReport to {what:Wrapper's name, avg:missing value, delta:missing value, uncertainty:theUncertainty}
        return missing value
    end if
    set avgTime to (elapsedTime / numRepeat) * 1000 -- ms
    -- To get an error < e% using a clock with resolution r, the duration of the measured event must be > 100r/e.
    -- Therefore, if the event lasts (approximately) t seconds, the error can be estimated as e ≈ 100r/t %.
    -- For "time of (current date)", r = 1 s.
    set avgError to 100 * kTimeResolution / elapsedTime -- %
    set avgDelta to 1000 * kTimeResolution / numRepeat -- = avgTime * avgError / 100, in ms
    set the end of pBenchmarkReport to {what:Wrapper's name, avg:avgTime, delta:avgDelta, uncertainty:avgError}
    return avgTime
end benchmark
on benchmarkReset()
    set pBenchmarkReport to {}
end benchmarkReset

on benchmarkResult()
    return pBenchmarkReport
end benchmarkResult
on benchmarkResultAsText()
    local theReport, theText, tid
    set theReport to {}
    repeat with r in pBenchmarkReport
        if r's what's class is not text then
            set the end of theReport to "WARNING: an object of class " & (r's what as text) & " cannot be benchmarked."
        else if r's avg is missing value then
            set the end of theReport to (r's what) & "'s performance could not be measured within the given error threshold (" & ((round (100 * (r's uncertainty))) as text) & "%). Try increasing the number of repetitions."
        else
            set the end of theReport to (r's what) & " took " & (rround(r's avg, 1) as text) & "±" & (rround(r's delta, 1) as text) & " ms (uncertainty: " & (rround(r's uncertainty, 1) as text) & "%)"
        end if
    end repeat
    set tid to AppleScript's text item delimiters
    set AppleScript's text item delimiters to linefeed
    set theText to theReport as text
    set AppleScript's text item delimiters to tid
    return theText
end benchmarkResultAsText
-- Round number x to the specified number of decimal digits
on rround(x, digits)
    if digits ≤ 0 then
        round x rounding to nearest
    else
        set quantum to 1.0 / (round (10 ^ digits))
        (round x / quantum) * quantum
    end if
end rround
on run
    set pBenchmarkReport to {}
    return me
end run
Happy benchmarking!