Docs for cache.ram.__call__


Description


<type 'instancemethod'>

Attention! cache.ram does not copy the cached object; it only stores a
reference to it. It turns out that deepcopying the object has some problems:

- it would break backward compatibility
- it would be limiting, because people may want to cache live objects
- it would only work if we deepcopied on both storage and retrieval, which
  would make things slow

Anyway, you can deepcopy explicitly in the function generating the value
to be cached.

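For example, here is a minimal sketch, assuming the usual web2py controller
environment where cache is already defined, and a hypothetical load_data()
helper standing in for your own code. Deepcopying inside the generating
function makes the cached value an independent snapshot of the live object
it was built from:

    import copy

    def get_report():
        # cache.ram stores only a reference, so deepcopy inside the
        # generating lambda: the cached value is decoupled from whatever
        # load_data() returned. The entry expires after 60 seconds.
        return cache.ram('report',
                         lambda: copy.deepcopy(load_data()),
                         time_expire=60)

Note that every cache hit still returns the same reference; if callers may
mutate the returned value, deepcopy the result of cache.ram(...) as well.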

Attributes


cache.ram.__call__.__call__ <type 'method-wrapper'> belongs to class <type 'method-wrapper'>
x.__call__(...) <==> x(...)

cache.ram.__call__.__class__ <type 'type'> extends (<type 'object'>,) belongs to class <type 'type'>
instancemethod(function, instance, class) Create an instance method object.

cache.ram.__call__.__cmp__ <type 'method-wrapper'> belongs to class <type 'method-wrapper'>
x.__cmp__(y) <==> cmp(x,y)

cache.ram.__call__.__delattr__ <type 'method-wrapper'> belongs to class <type 'method-wrapper'>
x.__delattr__('name') <==> del x.name

cache.ram.__call__.__doc__ <type 'str'> belongs to class <type 'str'>
str(object='') -> string Return a nice string representation of the object. If the argument is a string, the return value is the same object.

cache.ram.__call__.__format__ <type 'builtin_function_or_method'> belongs to class <type 'builtin_function_or_method'>
default object formatter

cache.ram.__call__.__func__ <type 'function'> belongs to class <type 'function'>
Attention! cache.ram does not copy the cached object; it only stores a reference to it. It turns out that deepcopying the object has some problems: it would break backward compatibility; it would be limiting, because people may want to cache live objects; it would only work if we deepcopied on both storage and retrieval, which would make things slow. Anyway, you can deepcopy explicitly in the function generating the value to be cached.

cache.ram.__call__.__get__ <type 'method-wrapper'> belongs to class <type 'method-wrapper'>
descr.__get__(obj[, type]) -> value

cache.ram.__call__.__getattribute__ <type 'method-wrapper'> belongs to class <type 'method-wrapper'>
x.__getattribute__('name') <==> x.name

cache.ram.__call__.__hash__ <type 'method-wrapper'> belongs to class <type 'method-wrapper'>
x.__hash__() <==> hash(x)

cache.ram.__call__.__init__ <type 'method-wrapper'> belongs to class <type 'method-wrapper'>
x.__init__(...) initializes x; see help(type(x)) for signature

cache.ram.__call__.__new__ <type 'builtin_function_or_method'> belongs to class <type 'builtin_function_or_method'>
T.__new__(S, ...) -> a new object with type S, a subtype of T

cache.ram.__call__.__reduce__ <type 'builtin_function_or_method'> belongs to class <type 'builtin_function_or_method'>
helper for pickle

cache.ram.__call__.__reduce_ex__ <type 'builtin_function_or_method'> belongs to class <type 'builtin_function_or_method'>
helper for pickle

cache.ram.__call__.__repr__ <type 'method-wrapper'> belongs to class <type 'method-wrapper'>
x.__repr__() <==> repr(x)

cache.ram.__call__.__self__ <class 'gluon.cache.CacheInRam'> belongs to class <class 'gluon.cache.CacheInRam'>
Ram-based caching. This is implemented as a global (per process, shared by all threads) dictionary. A mutex-lock mechanism avoids conflicts.

cache.ram.__call__.__setattr__ <type 'method-wrapper'> belongs to class <type 'method-wrapper'>
x.__setattr__('name', value) <==> x.name = value

cache.ram.__call__.__sizeof__ <type 'builtin_function_or_method'> belongs to class <type 'builtin_function_or_method'>
__sizeof__() -> int size of object in memory, in bytes

cache.ram.__call__.__str__ <type 'method-wrapper'> belongs to class <type 'method-wrapper'>
x.__str__() <==> str(x)

cache.ram.__call__.__subclasshook__ <type 'builtin_function_or_method'> belongs to class <type 'builtin_function_or_method'>
Abstract classes can override this to customize issubclass(). This is invoked early on by abc.ABCMeta.__subclasscheck__(). It should return True, False or NotImplemented. If it returns NotImplemented, the normal algorithm is used. Otherwise, it overrides the normal algorithm (and the outcome is cached).

cache.ram.__call__.im_class <type 'type'> extends (<class 'gluon.cache.CacheAbstract'>,) belongs to class <type 'type'>
Ram-based caching. This is implemented as a global (per process, shared by all threads) dictionary. A mutex-lock mechanism avoids conflicts.

cache.ram.__call__.im_func <type 'function'> belongs to class <type 'function'>
Attention! cache.ram does not copy the cached object; it only stores a reference to it. It turns out that deepcopying the object has some problems: it would break backward compatibility; it would be limiting, because people may want to cache live objects; it would only work if we deepcopied on both storage and retrieval, which would make things slow. Anyway, you can deepcopy explicitly in the function generating the value to be cached.

cache.ram.__call__.im_self <class 'gluon.cache.CacheInRam'> belongs to class <class 'gluon.cache.CacheInRam'>
Ram-based caching. This is implemented as a global (per process, shared by all threads) dictionary. A mutex-lock mechanism avoids conflicts.
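
For intuition, the "global dictionary plus mutex" pattern described above can
be sketched as follows. This is only an illustration of the general technique,
not the actual gluon.cache.CacheInRam implementation:

    import threading
    import time

    _storage = {}                # one dict per process, shared by all threads
    _lock = threading.Lock()     # serializes access to avoid conflicts

    def ram_cache(key, f, time_expire=300):
        # Return the cached value for key if it is still fresh; otherwise
        # call f(), store (timestamp, value) under the lock, and return it.
        with _lock:
            item = _storage.get(key)
            if item is not None and time.time() - item[0] < time_expire:
                return item[1]
            value = f()
            _storage[key] = (time.time(), value)
            return value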