Package web2py :: Package gluon :: Module cache :: Class CacheInRam

Class CacheInRam


   object --+    
            |    
CacheAbstract --+
                |
               CacheInRam

RAM-based caching

This is implemented as a global (per-process, shared by all threads) dictionary. A mutex-lock mechanism avoids conflicts.
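The mechanism described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not web2py's actual implementation; the names `storage`, `locker`, and `ram_cache` are illustrative only.

```python
import threading
import time

# One module-level dictionary per process, shared by all threads,
# with a single lock serializing access to it.
storage = {}
locker = threading.Lock()

def ram_cache(key, f, time_expire=300):
    """Return the value cached under `key`; call f() to (re)compute it
    when the entry is missing or older than `time_expire` seconds."""
    now = time.time()
    with locker:
        item = storage.get(key)
        if item is not None and now - item[0] < time_expire:
            return item[1]
    value = f()  # compute outside the lock so other threads are not blocked
    with locker:
        storage[key] = (now, value)
    return value
```

A second call with the same key within the expiry window returns the cached value without invoking the function again.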

Instance Methods
 
__init__(self, request=None)
Parameters...
 
clear(self, regex=None)
Clears the cache of all keys that match the provided regular expression.
 
__call__(self, key, f, time_expire=300)
Attention! cache.ram does not copy the cached object.
 
increment(self, key, value=1)
Increments the cached value for the given key by the amount in value

Inherited from CacheAbstract (private): _clear

Inherited from object: __delattr__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __str__

Class Variables
  locker = thread.allocate_lock()
  meta_storage = {}

Inherited from CacheAbstract: cache_stats_name

Properties

Inherited from object: __class__

Method Details

__init__(self, request=None)
(Constructor)


Parameters
----------
request:
    the global request object

Overrides: object.__init__
(inherited documentation)

clear(self, regex=None)


Clears the cache of all keys that match the provided regular expression.
If no regular expression is provided, it clears all entries in cache.

Parameters
----------
regex:
    if provided, only keys matching the regex will be cleared.
    Otherwise all keys are cleared.

Overrides: CacheAbstract.clear
(inherited documentation)
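The behavior of clear() can be sketched as follows. This is a hedged, self-contained illustration, not web2py's code; in particular, anchored matching via `re.match` is an assumption about the real implementation.

```python
import re
import threading

# Illustrative stand-ins for the shared cache dictionary and its lock.
storage = {'user:1': 'a', 'user:2': 'b', 'page:1': 'c'}
locker = threading.Lock()

def clear(regex=None):
    """Drop every key whose name matches `regex`; drop all keys
    when no regex is given."""
    with locker:
        if regex is None:
            storage.clear()
        else:
            pattern = re.compile(regex)
            for key in [k for k in storage if pattern.match(k)]:
                del storage[key]
```

Namespacing keys with a common prefix (e.g. `user:`) makes it easy to invalidate a whole group of entries with one regex.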

__call__(self, key, f, time_expire=300)
(Call operator)


Attention! cache.ram does not copy the cached object; it only stores a reference to it. It turns out that deep-copying the object would be problematic: 1) it would break backward compatibility; 2) it would be limiting, because people may want to cache live objects; 3) it would only work if we deep-copied on both storage and retrieval, which would make things slow. If you need a copy, you can deepcopy explicitly in the function generating the value to be cached.

Overrides: CacheAbstract.__call__
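The caveat above can be demonstrated with a toy cache. This is a self-contained illustration, not web2py's API: `cache_ram` and `_cache` are hypothetical names standing in for the reference-storing behavior described.

```python
import copy

# A toy reference-storing cache, mimicking the caveat above.
_cache = {}

def cache_ram(key, f):
    if key not in _cache:
        _cache[key] = f()
    return _cache[key]

rows = cache_ram('rows', lambda: [1, 2, 3])
rows.append(4)  # mutates the very object held by the cache

# A later lookup sees the mutation; the generating function is not
# even called, because the key is already cached.
rows_again = cache_ram('rows', lambda: [])

# To store an independent snapshot, deepcopy explicitly in the
# function generating the value to be cached:
snapshot = cache_ram('snapshot', lambda: copy.deepcopy(rows))
```

After this runs, further mutations of `rows` do not affect the cached `snapshot`, because the deep copy was taken at storage time.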

increment(self, key, value=1)


Increments the cached value for the given key by the amount in value.

Parameters
----------
key:
    key for the cached object to be incremented
value:
    amount of the increment (defaults to 1, can be negative)

Overrides: CacheAbstract.increment
(inherited documentation)
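A lock-guarded increment can be sketched as below. This is a hedged illustration under stated assumptions, not web2py's code: the `_storage`/`_locker` names are illustrative, and returning the new value is an assumption about the real method.

```python
import threading

# Illustrative stand-ins for the shared cache dictionary and its lock.
_storage = {}
_locker = threading.Lock()

def increment(key, value=1):
    """Add `value` to the counter stored under `key` (creating it at 0
    if missing) and return the new total.  The lock makes the
    read-modify-write step atomic across threads."""
    with _locker:
        new_value = _storage.get(key, 0) + value
        _storage[key] = new_value
    return new_value
```

Because `value` can be negative, the same method serves as a decrement.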