AppleScript itself is the boss. It loads your data into memory when you request it, and it releases it when your app quits or when you overwrite the relevant data.
If you read a 15MB text file and hold it in a variable called “x”, AppleScript will assign 15MB of memory to the identifier “x”. So, if you now set “x” to something else (e.g., the integer 5), those 15MB will be released and replaced with the number of bytes needed for the new value of “x” (in OS X, the system will reclaim them if needed; otherwise they remain “idle” and still assigned to your app).
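For instance (just a sketch; the file path below is only a placeholder):

set x to read file "Macintosh HD:Users:me:big.txt" -- x now holds the whole file as text
set x to 5 -- the text is no longer referenced, so its memory can be given back (or kept idle for your app)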
There are various errors related to memory management: stack overflow, internal table overflow, out of memory, and so on. They are typically triggered by data overload, infinite loops, runaway recursion, etc.
So, you should avoid certain bad ideas, such as reading a 45GB text file or building a list with (2 ^ 16) members. It’s not only unsafe, but also very slow. AppleScript was born for Application Intercommunication, which means it was not conceived for high-speed or complex operations, nor for large amounts of data.
For such needs, you should get help from something else, such as third-party scriptable applications, scripting additions, *nix tools through “do shell script”, etc.
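For instance, you can let grep scan a huge log and hand AppleScript only the result (a rough sketch; the path and pattern are only placeholders):

set bigLog to "/Users/me/big_log.txt"
-- grep does the scanning; AppleScript only receives the count ("; exit 0" keeps do shell script from erroring when nothing matches)
set hitCount to do shell script "grep -c 'foo' " & quoted form of bigLog & " ; exit 0"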
And for some particular tasks, you can design your own memory-aware script. For example, you can write a search/replace script that reads files in 200KB chunks (instead of all at once).
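Something along these lines could be the skeleton of such a script (just a sketch; the path and chunk size are placeholders):

set chunkSize to 200 * 1024 -- 200KB per read
set theFile to open for access file "Macintosh HD:Users:me:big.txt"
try
	set fileSize to get eof theFile
	set chunkStart to 1
	repeat while chunkStart ≤ fileSize
		set bytesToRead to chunkSize
		if (chunkStart + bytesToRead - 1) > fileSize then set bytesToRead to fileSize - chunkStart + 1
		set theChunk to read theFile from chunkStart for bytesToRead
		-- do the search/replace on theChunk here (a real script would overlap chunks so a match split across two chunks isn't missed)
		set chunkStart to chunkStart + bytesToRead
	end repeat
end try
close access theFile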
If you still want to manage your script’s memory, to a limited extent, you can use local declarations, empty variables when they are no longer needed (a simple “set x to 0” will do the job), or overwrite variables on the fly when you can. E.g.:
set x to read fileOf10MB
set x to text 1 thru (offset of "foo" in x) of x
--> instead of "set y to text 1 thru..."
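And to illustrate the local declarations mentioned above, a minimal sketch (the handler name and parameter are hypothetical):

on extractFoo(someFile)
	local x -- x exists only while the handler runs
	set x to read someFile
	-- when the handler returns, x is released instead of persisting as a top-level variable or property
	return text 1 thru (offset of "foo" in x) of x
end extractFoo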
But anyway, your script’s performance is still linked to the way AppleScript manages the data.