object --+
         |
CacheAbstract --+
                |
               CacheInRam
RAM-based caching

This is implemented as a global (per-process, shared by all threads) dictionary. A mutex-lock mechanism avoids conflicts.
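A hedged sketch of the idea described above: one module-level dictionary shared by every thread in the process, guarded by a single lock. The names `storage`, `locker`, and `cache_ram`, and the 300-second default expiry, are assumptions for illustration, not the actual web2py source.

```python
import threading
import time

# Illustrative only: a per-process cache dictionary protected by a mutex.
storage = {}               # global, shared by all threads in the process
locker = threading.Lock()  # mutex avoiding conflicts on `storage`

def cache_ram(key, func, time_expire=300, now=None):
    """Return the cached value for `key`, recomputing with `func()` if the
    entry is missing or older than `time_expire` seconds."""
    if now is None:
        now = time.time()
    with locker:
        item = storage.get(key)
        if item is not None and now - item[0] < time_expire:
            return item[1]
    value = func()  # compute outside the lock, since func may be slow
    with locker:
        storage[key] = (now, value)
    return value
```

Note that, as discussed below, such a cache stores only a reference to the value, not a copy.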
Class Variables

locker = thread.allocate_lock()

meta_storage =
Parameters
----------
request: the global request object
Clears the cache of all keys that match the provided regular expression. If no regular expression is provided, it clears all entries in the cache.

Parameters
----------
regex: if provided, only keys matching the regex will be cleared; otherwise all keys are cleared.
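A minimal sketch of regex-based clearing under the cache lock. The module-level `storage`/`locker` names and the use of `re.match` semantics are assumptions for illustration, not the real implementation.

```python
import re
import threading

# Hypothetical cache contents and lock, for illustration only.
storage = {'user:1': 'a', 'user:2': 'b', 'page:home': 'c'}
locker = threading.Lock()

def clear(regex=None):
    """Remove every cached key matching `regex`; with no regex, remove all."""
    with locker:
        if regex is None:
            storage.clear()
        else:
            pattern = re.compile(regex)
            # materialize the key list first, so we can delete while iterating
            for key in [k for k in storage if pattern.match(k)]:
                del storage[key]

clear('user:.*')  # removes 'user:1' and 'user:2', keeps 'page:home'
```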
Attention! cache.ram does not copy the cached object; it just stores a reference to it. It turns out that deepcopying the object has some problems:

1) it would break backward compatibility;
2) it would be limiting, because people may want to cache live objects;
3) it would only work if we deepcopied on both storage and retrieval, which would make things slow.

Anyway, you can deepcopy explicitly in the function generating the value to be cached.
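To illustrate the explicit-deepcopy advice: since only a reference is cached, mutating a returned object would also mutate the cached one, so the generating (or retrieving) function can deepcopy it first. The `cached` object and `get_rows` helper here are hypothetical.

```python
import copy

# Pretend this mutable object lives in the cache (only a reference is stored).
cached = {'rows': [1, 2, 3]}

def get_rows():
    # Return a deep copy so callers cannot corrupt the cached original.
    return copy.deepcopy(cached)

rows = get_rows()
rows['rows'].append(4)             # mutates the copy only
assert cached['rows'] == [1, 2, 3]  # the cached original is untouched
```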
Increments the cached value for the given key by the amount in value.

Parameters
----------
key: key for the cached object to be incremented
value: amount of the increment (defaults to 1, can be negative)
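The increment operation described above can be sketched as follows, assuming the same hypothetical module-level `storage` dictionary and lock; taking the lock makes the read-modify-write atomic across threads.

```python
import threading

# Hypothetical cache storage and mutex, for illustration only.
storage = {}
locker = threading.Lock()

def increment(key, value=1):
    """Atomically add `value` to the entry for `key`, creating it at `value`
    if missing, and return the new total."""
    with locker:
        new_total = storage.get(key, 0) + value
        storage[key] = new_total
        return new_total

increment('hits')      # -> 1
increment('hits', 5)   # -> 6
increment('hits', -2)  # -> 4 (negative increments are allowed)
```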
Generated by Epydoc 3.0.1 on Wed Feb 3 10:53:18 2010 (http://epydoc.sourceforge.net)