I'm very, very new to the AppleScript language. I have been trying to make my own personal backup solution in Real Studio, but have found that using the RS copy function freezes my machine. I was pointed to a plugin from Monkeybread Software (MBS) which works great, but at a cost (£145.00 for the licence plus plugin).
My question is: can I achieve the following in AppleScript rather than spending a lot of money?
I want my app to calculate the number of files it needs to copy and return that number.
I then want the app to count the completed items as it copies them.
Well, the guys from CCC (Carbon Copy Cloner) started with an AppleScript that uses ditto and bless (bless to make a volume bootable). So basically, yes, it is possible with AppleScript.
The first issue you will encounter is that AppleScript is single-threaded. Your code is procedural and can't dispatch work that returns to your object later. What you need is process backgrounding, while keeping track of the whole process yourself. You can iterate through the given folder and copy each file with ditto in the background.
When you want to go to the next level you can script running AppleScripts (you have to save your script as an application). The applet then works more like stop-and-go: when a copy has finished, the background process calls back into your script, which behaves more like a thread with a callback function.
For the progress indicator there are some hacks out there, but you could also take a look at making an AppleScriptObjC script with Script Editor (Lion and higher), where you can build GUI elements on demand.
Of course, once you decide to use AppleScriptObjC you can use an NSTask object to run shell commands in the background.
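For instance, here is a minimal AppleScriptObjC sketch (the handler name and the ditto call are just for illustration, and newer systems want the "use framework" line) that fires off ditto through NSTask so the copy does not block the script:

use framework "Foundation"

on runDittoInBackground(sourcePath, destinationPath)
	-- build the task: /usr/bin/ditto <source> <destination>
	set dittoTask to current application's NSTask's alloc()'s init()
	dittoTask's setLaunchPath:"/usr/bin/ditto"
	dittoTask's setArguments:{sourcePath, destinationPath}
	dittoTask's |launch|() -- returns immediately, the copy keeps running
	return dittoTask -- keep the reference so you can poll isRunning() later
end runDittoInBackground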
I know it's great to make your own tools and learn AppleScript. But I must say there are quite a number of good backup applications that can probably do what you want without spending a lot of money. I use SmartBackup. It's small, has good options, is reasonably fast, and does things like keep versions. Only US $15.
Maybe use that right away to do your backups while you learn enough coding to really customize a solution that does exactly what you want. Backups = good.
Don't forget Time Machine. IMHO the best thing Apple ever integrated into Mac OS X. So you can at least rescue your data if something goes awry. (And the burglar doesn't steal the hard drive as well.)
I have managed very well with Time Machine. It isn't easy to get at exactly what you want, but the data is there if you can find it, and that is a good start. (Provided you use it!)
15 bucks for a solution that works isn't that much, a lot less than 150 pounds! But I suspect the OP wants to "roll his own".
It is A Good Thing for those who would not be bothered to work out a decent backup scheme.
But not to be relied upon. A few days ago I discovered it had deleted its oldest backup - on a disk just 20% full.
No backup app should do that!
I can't understand why it did that if it really was the oldest one. If it had thinned out backups in between, so you only kept one per day for the most recent days, I would understand, but not deleting the oldest one.
Oh well, I'll grant you're right about backup schemes, but as with all things in life: the fewer parts, the fewer parts that can break.
It is also important never to fill a hard drive beyond around 80%, to avoid it crashing. I usually stop at that point anyway; maybe I'll wait a little longer with my 2 TB drive.
The reason for this is that hard drives sometimes "reshuffle" data, and the partition table can get corrupted in the process, which is nothing you ever want to happen to your backup drive!
Like I said, AppleScript is single-threaded. While the copy takes place, AppleScript waits until the event has completed. So how would you run a 'du' while AppleScript is waiting? That was the whole point of my previous post.
StefanK's progress bar should be fully able to display the progress as you copy files from one place to another; the caveat, however, is that you have to copy one file at a time.
The scheme is as follows:
CONTEXT:
You have an app with a shell script in its belly, and a main script that runs in a loop and displays the progress while the shell script copies the files.
The first thing to do is count the files; you then calculate what fraction or percentage of the total a single file represents.
This will be your increment interval for the progress bar.
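A rough sketch of that counting step (the folder path and variable names here are just examples):

set sourceFolder to "/Users/yourname/Documents" -- example source folder
set fileCount to (do shell script "find " & quoted form of sourceFolder & " -type f | wc -l | tr -d ' '") as integer
set incrementPerFile to 100 / fileCount -- how much of the bar one copied file is worth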
You execute the belly script with a do shell script so that it runs in the background:
do shell script "scriptinappsbelly& >/dev/null 2>&1" ”< note first ampersand to send it to background proc.
The belly script then copies one file at a time in a loop; at the end of each pass it creates a marker file ("certainFile") somewhere.
Something like this:
#!/bin/bash
# Copy the files one at a time, creating a marker file after each copy.
# The applet deletes the marker once it has updated the progress bar.
cd toTheFolderYouAreCopyingFrom
for i in * ; do
	# wait until the applet has removed the previous marker
	while test -e certainFile ; do
		sleep 1
	done
	cp "$i" "$destinationFolder"
	touch certainFile
done
Now, further down in the AppleScript that fired off the shell script (in the main.scpt of the applet), you do something like:
set fc to 0
repeat while fc < maxFilesToCopy
	if exists certainFile then -- pseudo: check for the marker file, e.g. via a shell "test -e"
		-- update the progress bar here
		set fc to fc + 1
		-- delete certainFile, so the belly script continues with the next file
	else
		delay 1
	end if
end repeat
You don't need ignoring application responses. When you put a process in the background, its stdout is still connected to its shell. To start a process completely detached from the do shell script, you need to redirect its stdout and stderr elsewhere.
The correct way to put a process in the background would be:
do shell script "system_profiler &>/dev/null &"
I've used system_profiler as an example because it's a 'long' process. I've used /dev/null to send the output to the system's black hole. To redirect to two different files you could use something like:
do shell script "ditto /User/Documents /Volumes/ExternalDisk/Users/Documents 1>/var/log/backup/messages.log 2>/var/log/backup/error.log &"
Just to clarify: ignoring application responses affects only applications which send Apple event responses to received Apple events.
Commands like do shell script do not send or receive Apple events at all.
A similar case: with timeout affects only commands that send Apple events.
The most important C hack is missing from that paper: the hack that Objective-C totally relies on. It's the void pointer, the only pointer that is compatible with any other pointer type.
Well, that is a kind of disastrous hack in plain C (ANSI C / C99) nowadays, at least when you call something like void *func(void) (a function that returns a pointer to void) without a declaration in scope: the void * result will be truncated to int, ignoring 64-bit addressing and squeezing the address into a 32-bit int. And it gets even worse: the compiler may not catch it if you have "compatible constructs" receiving the (void *) value.
But of course, if this works in Objective-C then it is all good; I haven't used it. I think this idiom isn't a hack, more a convention.
The nicest thing in the article was being reminded that I can declare a function like this (C99): int mf(int param[static 1]) { … } and expect the compiler to catch a NULL pointer passed as the parameter.
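A small sketch of that idiom (function and variable names made up for the example); a compiler that honours [static 1] can warn when a literal NULL is passed:

#include <stdio.h>

/* C99: param must point to at least one int, so passing NULL is invalid */
int mf(int param[static 1])
{
	return param[0] * 2;
}

int main(void)
{
	int x = 21;
	printf("%d\n", mf(&x)); /* fine */
	/* mf(NULL); */         /* a compiler that checks [static 1] flags this call */
	return 0;
}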
You can read about it in the comments section aside from the link on the main page.
Well, I'm not talking about functions that return a pointer to void; those get caught during compilation. What I mean is functions that accept void pointers as their argument.
Nope, void * is a pointer to nothing, which means it is typeless; int has always been a type. However, GCC will let you do arithmetic on a void pointer (as an extension it treats it like a pointer to bytes), which is definitely not ISO or ANSI C. You'll notice when working with compilers other than GCC.
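A tiny illustration of that GCC extension (strict ISO C compilers reject the marked line):

#include <stdio.h>

int main(void)
{
	char buffer[8] = "abcdefg";
	void *p = buffer;
	p = p + 4;                  /* GCC extension: void * arithmetic behaves like char * */
	printf("%c\n", *(char *)p); /* prints 'e' */
	return 0;
}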
My view of this is that (especially with pre-ANSI compilers, no matter what brand) a void * used to be treated like an int, which worked just fine as long as we didn't step into address ranges demanding addresses larger than an int.
Therefore the compilers used an unsigned int as the storage class behind the scenes for a void *.
If the compiler doesn't see any declaration of the function you call, it assumes int as the return type of the object code it has no declaration for.
When you, on the other hand, use the function as if it returned a (void *) and assume you'll get back, say, a 64-bit pointer, you may get only 16 bits. The real fun starts, of course, when you use that result to call another undeclared function,
say first a malloc, then using the result as a parameter to realloc. And even more fun if that doesn't crash at once.
And that is why such code may now behave erratically. It really has nothing to do with gcc, although clang for all I know addresses this. I have had both Borland and Watcom compilers that I am sure worked on that principle, (void *)/int (at least on a semantic level), because back then there were segment:offset issues to worry about.
Edit
Let me retry explaining it better.
Scenario: you know how to call malloc, so you don't bother to include stdlib.h, nor do you bother casting the result to anything other than (void *).
The compiler doesn't see a declaration of malloc, so it sends out a warning, which you ignore.
Context: the default return type of an undeclared function is int.
Result of the malloc call: an int, which you believe is a 64-bit address but which in reality is a 16-bit int.
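Sketched as code (deliberately broken, just to illustrate the scenario; the exact widths depend on the platform):

/* note: stdlib.h deliberately NOT included */
int main(void)
{
	/* an old compiler silently assumes: int malloc(); */
	char *buf = (char *) malloc(100);
	/* the returned address was squeezed through an int on its way back, so the
	   upper bits may already be gone; using buf, or handing it on to an equally
	   undeclared realloc, then misbehaves or crashes */
	buf[0] = 'x';
	return 0;
}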
Now I understand; some misunderstanding here in translation. For the record: a void * can act as a generic pointer; before void * existed (pre-ANSI), different types were used as generic pointers instead, like int * and char *. That's what you mean, right?