I saw this and got an idea.
What if someone wrote an app which turns any Unix command into an OSAX or something similar? Would it be faster then?
The overhead isn’t that big if you ask me, and if there is a problem, then you can always mount a RAM disk and copy the stuff you want over to it during login.
The overhead of having Osaxen is bigger in other ways, as you suddenly get lots of names that conflict with handlers and variables. Nowadays Osaxen are, or should be, more complex creatures with regard to thread safety too. There are also security restrictions on what an Osax can and cannot do. This is not an issue with the shell tools.
And besides that, the tools that are included with Mac OS X are there on every user’s machine, so there is no need for extra installs. Now, if the Osaxen were included with the installation of Mac OS X, it would be a whole different story.
Osaxen are dead.
Scripting Additions use the outdated Carbon technology.
Nowadays scriptable faceless apps like System Events or Shane’s Runner are the state of the art.
What about optimizing the shell command? I’ve seen a lot of repetitive shell commands that could all be merged into one command. I think the problems with do shell script come from inefficient scripts.
A small example:
repeat 1000 times
do shell script "echo $HOME >/dev/null"
end repeat
will take up to 3 to 4 seconds on my machine. I always keep saying that once you’re in the shell, you’d better stay there as long as possible. So I would write the code above as:
do shell script "for ((c=0;c<1000;c++))
do
echo $HOME >/dev/null
done"
Look how fast it is now.
Compare the code above.
Honestly, 0.4 milliseconds isn’t that long, and the repeat example is both more readable and more debuggable. I’ll fork out 0.4 milliseconds for that. But when you repeat that many times, you should of course code it like you do! :) Even if it takes 10 milliseconds or more on a much slower machine, it still isn’t that much.
That is also my rationale for not duplicating Unix commands in an Osax: it wouldn’t make much difference nowadays, but it would bloat the system. Osaxen with functionality you otherwise can’t get, fine! But not duplicating something that is already there. And if someone’s hardware is very old and this is a problem, they can make a RAM disk with the tools they need and gain the performance. The tools are small, so they don’t lose much RAM.
But I think cirno was aware of the small amount of overhead; I think he just meant what I posted. The problem is that when your script contains many do shell script commands, it will eventually slow down, mainly because of the overhead of the do shell script command itself. Why else would the TS ask this question? So saying that 2.5 ms isn’t that long doesn’t answer his question. My answer is that repetitive do shell script commands should, when possible, be merged into one command to get rid of this overhead.
I agree, but I still don’t see the need for Osaxen.
A great, coarse way to identify any bottlenecks, by the way, is to time them like this.
Very simple, and very fast to implement.
set t1 to (current date)
repeat 1000 times
do shell script "exec echo $HOME >/dev/null"
end repeat
log " " & (current date) - t1
By having newlines embedded in it, your second, fast example is as a matter of fact quite readable.
But this is better still
do shell script "
for ((c=0;c<1000;c++)) ; do
echo $HOME >/dev/null
done"
Okay, I believe you are all right.
What do “c=0” and “c++” do?
Thanks
I’ll answer for him:
in “for ((c=0;c<1000;c++))”:
c=0: sets the initial value to 0 (that is the initialization clause for the loop variable; you can actually have several).
c<1000: is the condition under which the loop keeps executing.
c++: is post-incrementation of the counter c by 1, after the body of the loop has executed.
If you want the full explanation, open a Terminal window, enter man bash on the command line, then press “/”, paste in “expr1 ; expr2 ; expr3” and hit return.
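Here is a rough sketch of the three clauses in action, with made-up values (run with bash, since the (( … )) arithmetic syntax is a bashism):

```shell
# c=0 runs once before the loop; c<5 is checked before every pass;
# c++ runs after every pass of the body.
bash -c '
total=0
for ((c=0; c<5; c++)); do
  total=$((total + c))
done
echo "$total"
'
```

It sums 0 through 4 and prints 10.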
Me too
thanks
At the moment I find it hard to make relatively complex shell scripts work within do shell script. I think I must be using some bash features the do shell script command isn’t happy about.
But at the moment, I can sit for hours just to make, say, 10 lines work. Programmer time counts for something as well.
I guess this will change. But getting the whatnots and the escapes right is a hairy experience!
The first step is of course thorough testing as a normal shell script before the conversion.
Actually, getting something like this into a do shell script is hard, and this isn’t even starting to be complex.
It is actually so hard that I don’t bother to do it. I feel I have tried everything to convert it into a do shell script, which would be a nice way to publish it. For now, I’ll either use System Events, or send along the shell script file and add some tests for its presence in a do shell script.
-------------------------------------------------------------------------------------------------------------
#!/bin/bash
# climb <the folder to find> <last folder upwards to look in> <the start point>
# It looks for a folder ($1) upwards from a starting dir ($3), and stops as
# soon as it has found it, or when it reaches the stop folder ($2), which is
# the highest level we'll search.
wantedfol=$1
treeroot=$2
start=$3
cd "$start" || exit 1
# if the folder is already in the start dir, remember where we are
[ -d "$wantedfol" ] && foundit=$(pwd)
while [ ! -d "$wantedfol" ]; do
    cd ..
    probe=$(pwd)
    if [ "$probe" = "$treeroot" ]; then
        # We have reached the final directory to search in
        break
    fi
    if [ -d "$wantedfol" ]; then
        foundit=$probe
        break
    fi
done
if [ -z "$foundit" ]; then
    # Nothing found; fall back to the tree root
    echo "$treeroot"
else
    echo "$foundit"
fi
-------------------------------------------------------------------------------------------------------------
So in cases like that, or until I learn how to cope, I’ll be happy with AppleScript counterparts that are made in a fraction of the time.
to hgRepoPath for aPath against pathForHighestLevel
	# finds the path to the nearest repository upwards in the folder structure.
	local tids, probeItems, probeItemCount, probePath, notfound
	set {tids, AppleScript's text item delimiters} to {AppleScript's text item delimiters, "/"}
	set probeItems to text items of aPath
	set {AppleScript's text item delimiters, notfound, probePath, probeItemCount} to {tids, true, aPath, length of probeItems}
	tell application id "sevs"
		repeat while notfound
			if exists disk item (probePath & "/.hg") then
				set notfound to false
				exit repeat
			else
				set probePath to items 1 thru (probeItemCount - 1) of probeItems
				set {tids, AppleScript's text item delimiters} to {AppleScript's text item delimiters, "/"}
				set probePath to probePath as text
				set {AppleScript's text item delimiters, probeItemCount} to {tids, probeItemCount - 1}
			end if
			if probePath is in pathForHighestLevel then exit repeat
			# the highest level we are looking in.
		end repeat
	end tell
	if notfound then return null
	return probePath
end hgRepoPath
The loop constructs, at least while with a test, are like handling nitro!
I think, but I loathe it too much at the moment to test it, that the solution is to use an awk here document to do the looping in.
The thing with the loops in shell is that they are processes, with their own input streams; when you then start using functions like pwd or $(pwd), for instance, something either gets deliberately confused, or just confused.
Sed here documents work fine, and multi-line shell scripts too, as long as you don’t do too much. I think awk will be nice as a layer for doing multiline shell scripts in here documents, as you then reduce the chance of confusion and mingling of the streams.
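A sketch of that idea, with made-up sample input: let awk do all the looping instead of the shell, so no shell-level control flow or command substitution is involved.

```shell
# All looping happens inside awk's END block, not in the shell:
# collect every line, then print them back in reverse order.
printf 'alpha\nbeta\ngamma\n' | awk '
{ line[NR] = $0 }               # remember each input line by number
END {
    for (i = NR; i >= 1; i--)   # loop inside awk, not in the shell
        print i ": " line[i]
}'
```

The shell only sets up the pipe; awk owns the iteration.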
An integral part of BSD indeed…
Edit
Maybe it is that when the script containing the multiline do shell script is saved as a compiled script, and not as text (.applescript), and the shell script then contains some syntax error, the do shell script gets corrupted internally, and you just have to start over in a new script, because it isn’t enough to just remove the mistakes and try again.
Maybe this works better with text (.applescript) scripts.
One thing I have figured out: when assigning variables, those, or many of those, are supposed to be terminated with a newline, so you don’t want trailing blanks at the end of the line when assigning in a multiline shell script.
That may have been my major flaw so far!
This is a working example. It only looks good if it is saved as .applescript with Unix linefeeds, if you don’t have Script Debugger; but even then, Unix linefeeds are recommended!
set theRest to do shell script "{
# climb up a subtree searching for a folder that contains a subfolder
a=" & quoted form of ((POSIX path of (path to home folder)) & "/bin/source") & "
cd $a
# start location for our search
while [ ! -d $a/.hg ]
# current dir doesn't have a .hg folder
do
cd ..
a=`pwd`
if [ $a = / ]
# we get out of the loop if root is reached
then
break
fi
done
echo $a
}"
I think that the debugger in bash is good enough. The difference is that, like Xcode, modern IDEs have debug and release builds. With bash it’s just the way you write: the more compressed your code, the less information the debugger gives. If you write the bash way (without all the ‘;’), which you seem to prefer (something about readability), you get less detailed debug information. That’s why I wrote the repeat loop in a sort of ‘debug’ mode.
Depends, in my opinion. I, including my employees, can write hundreds of lines in an hour, and sometimes only one line in an hour. Some code needs more attention than other code. Time is indeed money, but clients hate buggy, and therefore inefficient, software more than anything else. It not only leaves an unprofessional imprint of your company, but your client will also look for other software. You’re always balancing the pros against the cons.
The most common mistake is assuming that strings in bash work like strings in other programming languages. There is a substitution mode which can be turned on and off in 3 different ways, which is very uncommon in regular languages.
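A minimal sketch of those three ways (the variable name is made up):

```shell
name=world
echo 'Hello $name'   # single quotes: substitution off, $name stays literal
echo "Hello $name"   # double quotes: substitution on, prints "Hello world"
echo Hello \$name    # backslash: escapes just that one $, stays literal
```

The same `$name` produces literal text or the variable's value depending purely on the quoting around it.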
When scripts get bigger and bigger I leave them in an .sh file, because then they become complete scripts on their own rather than a part of your AppleScript. However, you can still use AppleScript as a script launcher and controller, by executing a complete script in the background, giving some feedback to the user, and letting the user cancel the process.
Funny, I think the shell script is more readable than the AppleScript code.
That’s command substitution. The backtick form `command` is old bash and deprecated, therefore use $(command) instead. It’s a great way to execute a command and put its output stream in a variable. You can indeed substitute a loop as well and grab its output, but that won’t happen by default; you have to code it yourself.
TIP: when using the cd command, the shell variable $PWD will be set; I would use that instead of command substitution.
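A quick sketch of both points together, using the root directory as a neutral example:

```shell
cd /
here=`pwd`       # old backtick form of command substitution, deprecated
also=$(pwd)      # modern $( ) form, same result, nests cleanly
echo "$here $also $PWD"   # $PWD is updated by cd itself, no extra command needed
```

All three expressions yield the same path; $PWD just gets there without spawning anything.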
The beauty of utilities like AWK and Sed is that they can create great one-liners and are built for fast text processing. Remember that all those utilities were created to solve problems when existing software wasn’t up to the task at that moment. Looping wasn’t one of the issues, but text recognition was the problem, with bash having only limited string functions; those you find in Sed and AWK, and those tools are built for it. Using software for something it wasn’t built for is only asking for problems later. All of them, shell script included, have their pros and cons, and imo you should use the one whose pros match your demands best and whose cons matter least. When I’m processing files I’m very comfortable with bash, while text processing is very easy in an AWK script.
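For instance, a hypothetical one-liner in that spirit, swapping the first two comma-separated fields of some made-up sample data:

```shell
# -F, splits on commas; OFS=, puts them back; the body swaps $1 and $2.
printf 'one,two,three\nfour,five,six\n' |
    awk -F, -v OFS=, '{ tmp = $1; $1 = $2; $2 = tmp; print }'
```

Doing the same field juggling in pure bash string operations would take far more code.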
Err… let me rephrase that:
But at the moment, I can sit for hours just to make, say, 10 lines work with the do shell script command. Programmer time counts for something as well.
You missed the do in do shell script.
I tell you what, I have had some experiences lately where I got error messages from GNU sh!
I have pushed it far with here documents. It appears that line endings within the do shell script get translated into returns during the parsing by the sh command. So the fix is to staple the script together with linefeeds, like this. This pertains to here documents, where the shell also started to interpret the comment in the sed script that was a here document within that shell script.
For posterity, here is the working version of just that! Thanks to Nigel, who discovered what was going on.
So this is maybe easy when you are seasoned with the combination of shell scripts and do shell script, but it is not very pleasant, and very time consuming in the beginning, before you have figured it all out.
set recList to paragraphs of (do shell script "{ cat " & theLog & " ; echo ; } |sed -n -e '
/^[[:alnum:]]\\{1,\\}/ {" & linefeed & "
:start" & linefeed & "
h" & linefeed & "
s/^\\([[:space:]]*\\)\\([^,]\\{1,\\},[^,]\\{1,\\},[^,]\\{1,\\}\\)\\(,\\)\\(.*\\)/\\2/p" & linefeed & "
t prune" & linefeed & "
n" & linefeed & "
b start" & linefeed & "
:prune" & linefeed & "
g" & linefeed & "
s/^\\([[:space:]]*\\)\\([^,]\\{1,\\},[^,]\\{1,\\},[^,]\\{1,\\}\\)\\(,\\)\\(.*\\)/\\4/" & linefeed & "
b start" & linefeed & "
}" & linefeed & "' -e 's/\\([^,]\\{1,\\}\\),[[:space:]]*\\([^,]\\{1,\\}\\),[[:space:]]*\\([^,]\\{1,\\}\\)/\\2,\\1,\\3/p' ")
Maybe I am going beyond the intentions of do shell script, but until I am comfortable with how this works, it feels like a shaky experience. The line endings the shell is dealing with are turned into returns when it parses the text from the AppleScript, even if they were linefeeds to begin with.
How did I miss the do? I was referring to post #12 and not post #13, which is a shell script, not a do shell script.
do shell script “…” without altering line endings will solve the return problem. Also, you probably need to use /bin/echo with the -n option to suppress the trailing linefeed.
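The -n effect is easy to see in a plain shell by counting bytes (a toy example):

```shell
# Plain echo appends a newline; /bin/echo -n suppresses it.
/bin/echo    abc | wc -c    # counts a, b, c plus the trailing newline
/bin/echo -n abc | wc -c    # counts just a, b, c
```

Three bytes instead of four: the trailing linefeed is gone, which is exactly what you want when handing a value back to AppleScript.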
I don’t get your sed script; it only shows me how difficult Sed is, rather than shell scripting.
Hello.
I really addressed the problems with the do shell script command, not with shell scripting. I can’t see where you found that.
The problem was trailing whitespace I hadn’t figured out yet at post #12; I hadn’t been able to convert the script yet.
I wouldn’t differentiate between sed scripting and shell scripting, as sed is a shell tool, meant to work in that environment; but hopefully it is only sed that has the quirks.
I think altering line endings applies to what comes into and goes out of a do shell script, not what happens in between.
Small stuff is easy to get to work:
set i to "McUsr"
set mresult to (do shell script "
me=" & i & "
cat <<%
I am $me
%")
set because to "because it is interesting!"
set theRest to (do shell script "(
# commenting works, but not in the middle of streams \\
echo
cat <<<" & "'Another concept
I have wondered about!'" & " \\
|nl \\
|sed -n 's/^[[:space:]]\\{1,\\}//p'
# I can even comment inline some places but not in the middle of a stream \\
echo " & because & " \\
# line after line with comments \\
echo )
")
And I like the look of this when it blends in perfectly, inline shell script, but there are some caveats there that have so far been very hard to find.
The above works for signalling line endings in a normal shell script and in a here document; it just doesn’t work with multi-line sed scripts from the command line. If I were to use normal here documents with sed, I’d better resort to gsed. I guess sed is not the only tool that will trip at some point.