words of string or text item delimiters

You’re right. When I change the string variable into another string at the end of the loop, the code is significantly slower (about as fast as Nigel’s). So, yes, it’s totally misleading.
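For anyone following along, here is a minimal sketch of the kind of change described above (the variable name is hypothetical); the point is only that building a genuinely new string at the end of each pass stops AppleScript from reusing the previous result:

repeat 1000 times
	set isoString to cur_date as «class isot» as string
	-- rebuilding the string forces real work on every pass, so the
	-- loop now times the coercion rather than a cached value
	set isoString to isoString & "x"
end repeat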

I hope ‘current date’ results aren’t cached for long. :confused:

Tried to eliminate as much caching as possible. Couldn’t eliminate the variables from the text item delimiters method. I’m getting pretty consistent results:

run script (do shell script "python -c 'import time; print time.time()'") -- dummy warm-up run
set t1 to run script (do shell script "python -c 'import time; print time.time()'")
set t2 to run script (do shell script "python -c 'import time; print time.time()'")
set time_calib to t2 - t1
--
set timings to {}
set cur_date to (current date)
--
set t1 to run script (do shell script "python -c 'import time; print time.time()'")
-- Method 1: text range references on the «class isot» (ISO 8601) string
repeat 1000 times
	tell (cur_date as «class isot» as string)
		text 1 thru 4 & text 6 thru 7 & text 9 thru 13 & text 15 thru 16 & text 18 thru 19
	end tell
end repeat
--
set t2 to run script (do shell script "python -c 'import time; print time.time()'")
set time_diff to t2 - t1 - time_calib
set end of timings to time_diff

set t1 to run script (do shell script "python -c 'import time; print time.time()'")
-- Method 2: text item delimiters (strip ":" and "-")
repeat 1000 times
	set isotDateString to cur_date as «class isot» as string
	set {TID, text item delimiters} to {text item delimiters, {":", "-"}}
	set isotItems to text items of isotDateString
	set text item delimiters to ""
	set trimmedIsotDateString to isotItems as text
	set text item delimiters to TID
end repeat
--
set t2 to run script (do shell script "python -c 'import time; print time.time()'")
set time_diff to t2 - t1 - time_calib
set end of timings to time_diff

set t1 to run script (do shell script "python -c 'import time; print time.time()'")
-- Method 3: arithmetic on the date's components
repeat 1000 times
	tell cur_date
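	-- 1000000 + HHMMSS-as-a-number always has seven digits; "text 2 thru -1" drops the leading "1", leaving zero-padded HHMMSS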
		(((its year) * 10000 + (its month) * 100 + (its day)) as text) & "T" & text 2 thru -1 of ((1000000 + (its hours) * 10000 + (its minutes) * 100 + (its seconds)) as text)
	end tell
end repeat
--
set t2 to run script (do shell script "python -c 'import time; print time.time()'")
set time_diff to t2 - t1 - time_calib
set end of timings to time_diff

return timings

Interesting. I really should switch the order of the different methods in the run loop.

What’s interesting to me is that there is almost no difference. It’s funny to see how inconsistent timings are in AppleScript. So it’s more a matter of developer preference than of one technique beating another.

Yep. FWIW, this also comes in about the middle of the range of the others, allowing for the fact that it includes the creation of the date:

script theTimer
	set theFormatter to current application's NSDateFormatter's new()
	theFormatter's setDateFormat_("yyyyMMdd'T'HHmmss")
	set theDate to current application's NSDate's |date|() -- start time; also the date being formatted
	repeat 1000 times
		theFormatter's stringFromDate_(theDate)
	end repeat
	set y to current application's NSDate's |date|() -- end time
	return y's timeIntervalSinceDate_(theDate)
end script

tell application "ASObjC Runner"
	set z to run the script {theTimer} with response
end tell
return z

I guess its garbage collection must come into play, especially with all the repeats in these loops. But the variations can certainly be very dramatic. And yet the ASObjC method varies by more like ±5%, so presumably something else is at play.

Does this mean you can’t go wrong? :slight_smile:

:lol: You can even go wrong with the fastest solution; it’s not a given that fastest is always the best choice. My point is that there is a huge gray area between a line of AppleScript code and the actual processing done by the processor. In this largely gray area none of us knows for sure how everything works, and the only way to find our best solution is simply trial and error. That leads me to another point: even when you think you’re right, you can still turn out to be wrong. :o A simple drawback of such high-level, user-oriented programming languages.

That’s true about one method being better than another. In this example, the text item delimiters method might be better if you use a search/replace subroutine throughout your script, as sketched below. In other situations you might actually want date formatting. The method using text range references is very specific to the job that needs to be done.
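For illustration, here is a minimal sketch of such a reusable search/replace handler built on text item delimiters (the handler name and arguments are just for this example); passed a list of delimiters, it also covers the date trimming above:

on replaceText(searchStrings, replacementString, sourceText)
	-- save the delimiters, split on the search strings, rejoin with
	-- the replacement, then restore the saved delimiters
	set {TID, text item delimiters} to {text item delimiters, searchStrings}
	set theItems to text items of sourceText
	set text item delimiters to replacementString
	set newText to theItems as text
	set text item delimiters to TID
	return newText
end replaceText

replaceText({":", "-"}, "", "2013-10-28T12:34:56") --> "20131028T123456"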

Indeed. Given that the two fastest solutions rely on a quirk (the coercion via «class isot» to string rather than to text), you could argue that, in the interest of future-proofing, they’re not a good choice. It’s not as though they offer significant performance gains.

Surely that applies to all languages :stuck_out_tongue:

Correct, but for the record: the lower the language, the less it will fool you :slight_smile:

Hello.

Another facet of it: the lower the level of the language, the more code you’ll have to rework if your design was wrong, sloppily produced, or otherwise faulty. :slight_smile:

One of the reasons for loving AppleScript is that prototyping comes very cheap. You may stumble over quirks now and then, but in the long run it is a great time saver to make your (general you) mistakes in AppleScript and not in C, for instance. :smiley:

The lower the language and/or the bigger the program, the more you’ll write the workflows and architecture down on paper before writing a line of code. There are programmers who start writing their code immediately in index.php, the main function, or the run handler, but that can’t be blamed on the language; it’s the programmer who starts wrong.

Hello.

I guess you basically agree with me that sometimes you just don’t know enough about the problem up front, and the hours of research can be many. In such situations, I prefer to start prototyping right away, writing one to throw away, since that seems to me the most efficient approach anyway. Often when you (or at least I) start out to write something, it turns out in hindsight to be the wrong solution to the problem, or the problem wasn’t the right one after all.

Many times you aren’t aware of such situations before you have coded a solution and actually run it. Under such circumstances, it is a lot cheaper to throw away a prototype.

Of course, if you are basically dead sure that you are getting the right solution to the right problem, and you have everything else leveled out, then there is no reason to prototype at all. :slight_smile:

AppleScript is a very good tool for prototyping, whether it is for laying out a class hierarchy in Objective-C or for simulating parameter passing in a cluster of functions that is to be implemented in C.

I believe ASObjC has the same advantages when figuring out how a Cocoa app should work. There is room for play and mistakes, which all in all makes for better code and apps.

The thing that Shane and I mentioned is not about how well structured your application design is. You can use a method to tackle a problem that seems to fit right, but is it truly the best way?

For instance: listing a folder can be done in more than four different ways, but which is the best? Using the Finder, because it’s Mac OS X’s default file browser? Using System Events, because of its speed? Using the deprecated, but fastest and shortest, list folder command? Or using do shell script to list the folder at the UNIX file-system level? (All four are sketched below.) If we answered that question the same way we approached the best date-format solution in this topic, we would answer “Use the (deprecated) list folder command, because it’s the fastest.” But is it really the best solution?
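A minimal sketch of those four approaches, using the desktop as a hypothetical target folder:

set theFolder to path to desktop

-- 1. Finder: Mac OS X's default file browser, one Apple event round trip
tell application "Finder"
	set finderNames to name of every item of folder (theFolder as text)
end tell

-- 2. System Events: usually noticeably faster than the Finder
tell application "System Events"
	set seNames to name of every disk item of folder (POSIX path of theFolder)
end tell

-- 3. list folder: the shortest and fastest, but deprecated
set lfNames to list folder theFolder without invisibles

-- 4. do shell script: listing at the file-system level with ls
set shellNames to paragraphs of (do shell script "ls " & quoted form of POSIX path of theFolder)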

The best solution is one that always works for the user.

First of all, it must work, and this depends on the user base: if the user is on a system where the command isn’t deprecated, fine. (And you’ll get an opportunity to upgrade. :)) Second of all, the solution must be fast enough for the user to bother using it. Example: having the Finder list files by label index just doesn’t work well, due to the slowness of that particular query, so we can simply say it doesn’t work; mdfind, on the other hand, returns from the equivalent query promptly (both are sketched below). Thirdly, the solution shouldn’t slow down overall system performance, as a solution that issues a zillion Apple events to the Finder or any other app does.
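To make that example concrete, here is a hedged sketch of the two label queries; it assumes the Spotlight attribute kMDItemFSLabel and that its values match the Finder’s label index (2 is just an example):

-- slow: the Finder grinds through this kind of query
tell application "Finder"
	set redFiles to every file of folder ((path to desktop) as text) whose label index is 2
end tell

-- fast: Spotlight answers the equivalent query promptly
set redPaths to paragraphs of (do shell script "mdfind -onlyin " & quoted form of POSIX path of (path to desktop) & " 'kMDItemFSLabel == 2'")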

My point is that, for a solution to be the best or “good enough”, it must work like it should, return results quickly enough (but not necessarily as fast as possible), and not leave any other app “hanging” in the process.

Then it is good to have a menagerie of different creatures to pick from, so that one can find a solution that qualifies for the problem at hand.

It wasn’t really a question :wink: But thanks for the input

Hello.

It really wasn’t an answer, just my own thoughts on the subject of “best” solution. :slight_smile:

There is also a lot more to it than mere planning: with the lower-level languages one must pay a lot more attention to detail, of course. (Not teaching you, just summarizing.) So not only do you work in smaller chunks that require a lot more rework, you also really have to pay attention to the details.

My point is that higher-level languages like AppleScript sure have their limitations in many respects, but they are unsurpassed with regard to prototyping when you are not totally sure what you are building, since the cost of rework is just a fraction. The second point is that there are a lot of peculiarities and dark corners in the lower-level languages as well.

Anyway, regarding the C languages, this fine slide show, which I hope you’ll enjoy, shows off some details of C/C++, though it skips floating-point arithmetic. (You probably know everything in there, but at least I found it worth skimming. :wink:)

Nice paper, McUsr. Before I reply I would like to say that there is nothing wrong with AppleScript in the first place; the stuff it should do, it does well. And I don’t want to turn this into AppleScript versus C. But C is a good example of a language that’s still considered weakly typed and high performance, with no runtime in it.

But that nice slide show about C does indeed back me up in my previous posts. The thing is that when someone is willing to read a good book about C from cover to cover, he doesn’t need that slide show. It’s all documented how code optimizers (GCC’s, for instance) work and what their default settings are. On that note, it’s better to use initialized variables in C, especially when working with multiple dialects and compilers; it’s better that your code works on multiple dialects and compilers than to know all the ins and outs of one compiler and its dialect. Back on topic: my point was that all the behaviour mentioned in the slide show is covered in the books about C, and that doesn’t apply to the ins and outs of AppleScript and its implicit runtime. While the slide show demonstrates how you could go “wrong” with such simple things, there is still documentation on how you should do it; in AppleScript it’s sometimes a guess.

From PLCs up to AppleScript, here is something I’ve learned through the years: a high-level language means more difficulty with less writing, while a low-level language means less difficulty with more writing. AppleScript is a language that gets you started very easily, true. But were I to teach everything about C and its compilers, or everything about AppleScript and its runtime, compilers, managers and events, I know I could do it for C in a single day; I’m not sure about AppleScript.

Hello.

These are not arguments against you; they are my own thoughts.

My first point is that the less you know about a problem area, or the less tangible your idea/vision is, the greater the benefit of stepping up in the language hierarchy, because iterating is so much cheaper.

And that saving outweighs the problems with the high-level language (but your problem must be solvable in the language :slight_smile: ). An example: say you use a table of functions with pointers to char, and later you figure out that you’d be far better off with a function that returns a table with pointers to char. That is something that sometimes happens between idea and realization, and it’s just an example to make a point.

You could argue that that would never happen with better planning, and it would indeed be totally unacceptable if you used, say, a waterfall methodology. I for sure prefer to work the way Kernighan and Ritchie proposed: get an idea, implement it, see if it works, hone it if necessary, and elaborate on it if you must.
The biggest benefit of working that way is of course the psychological state you are in while developing. Three weeks of planning or more, then starting to code, and then realizing that the idea wasn’t so good after all, can be pretty devastating.

What I have written above is in the context of the aforementioned “unexplored territories”; it doesn’t apply to making something you already know a whole lot about. A calculator, for instance, is something you know up front how it should work, and how it shouldn’t. So unless your idea contains something very radical, you should be fine using a traditional development methodology and a low-level language, rather than an iterative approach.

And it isn’t as though everything in the C languages is so well defined, because you have third-party libraries to relate to as well; I’ll just mention creating and using type-safe libraries here.

And I am a pragmatist, and I think you are as well; the only truth with regard to computer science is that, in some context, most of it is a lie.

I agree, but first we were talking about throwing away complete scripts, now just functions. That is what I do: once you have everything prepped together, you can simply replace objects or functions with better versions.

But again, I wasn’t talking about “wrong” or “bad” programming in an obvious way. The thing is that when everything works fine, it can still be wrong without your knowing. Because there is a greater unknown area in high-level languages than in low-level languages, the chance of that happening in a higher-level language is higher.