Docs for cache.ram.__class__
Description
<type 'type'> extends (<class 'gluon.cache.CacheAbstract'>,)
|
Attributes
cache.ram.__class__.__call__ |
<type 'instancemethod'>
belongs to class <type 'instancemethod'>
Attention! cache.ram does not copy the cached object; it only stores a reference to it. It turns out that deepcopying the object has some problems:
- it would break backward compatibility
- it would be limiting, because people may want to cache live objects
- it would only work if we deepcopied on both storage and retrieval, which would make things slow
Anyway, you can deepcopy explicitly in the function generating the value to be cached. |
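Because only a reference is stored, a caller that mutates the returned object also mutates the cached entry, and a cached live object keeps tracking later changes. The sketch below illustrates both the pitfall and the docstring's remedy with a minimal dict-based stand-in (`MiniRamCache` is hypothetical, not web2py's actual API):

```python
import copy

class MiniRamCache:
    """Toy stand-in for a reference-storing RAM cache (hypothetical)."""
    def __init__(self):
        self.storage = {}

    def __call__(self, key, f):
        # Like cache.ram, store and hand back a *reference* to the value.
        if key not in self.storage:
            self.storage[key] = f()
        return self.storage[key]

cache = MiniRamCache()
live = {'count': 0}

# Pitfall: the cache aliases the live object, so later mutations leak in.
aliased = cache('alias', lambda: live)
live['count'] = 5
assert cache('alias', lambda: live)['count'] == 5  # cached entry changed too

# Remedy from the docstring: deepcopy inside the value-generating function,
# so the cache holds a snapshot that is decoupled from the live object.
snapshot = cache('snap', lambda: copy.deepcopy(live))
live['count'] = 99
assert cache('snap', lambda: copy.deepcopy(live))['count'] == 5
```

The same reasoning explains why web2py leaves the copy to the caller: copying on every store and retrieval would slow down the common case where the cached value is never mutated.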
cache.ram.__class__.__class__ |
<type 'type'> extends (<type 'object'>,)
belongs to class <type 'type'>
type(object) -> the object's type
type(name, bases, dict) -> a new type |
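The two calling forms from the `type` docstring can be sketched in plain Python (reprs shown are Python 3's; this dump itself comes from Python 2):

```python
# One-argument form: report an object's type.
t = type(3)            # the int type

# Three-argument form: build a new class dynamically from
# a name, a tuple of bases, and a namespace dict.
Point = type('Point', (object,), {'x': 0, 'y': 0})
p = Point()
print(t, Point.__name__, p.x)
```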
cache.ram.__class__.__delattr__ |
<type 'wrapper_descriptor'>
belongs to class <type 'wrapper_descriptor'>
x.__delattr__('name') <==> del x.name |
cache.ram.__class__.__dict__ |
<type 'dictproxy'>
belongs to class <type 'dictproxy'>
|
cache.ram.__class__.__doc__ |
<type 'str'>
belongs to class <type 'str'>
str(object='') -> string
Return a nice string representation of the object. If the argument is a string, the return value is the same object. |
cache.ram.__class__.__format__ |
<type 'method_descriptor'>
belongs to class <type 'method_descriptor'>
default object formatter |
cache.ram.__class__.__getattribute__ |
<type 'wrapper_descriptor'>
belongs to class <type 'wrapper_descriptor'>
x.__getattribute__('name') <==> x.name |
cache.ram.__class__.__hash__ |
<type 'wrapper_descriptor'>
belongs to class <type 'wrapper_descriptor'>
x.__hash__() <==> hash(x) |
cache.ram.__class__.__init__ |
<type 'instancemethod'>
belongs to class <type 'instancemethod'>
|
cache.ram.__class__.__module__ |
<type 'str'>
belongs to class <type 'str'>
str(object='') -> string
Return a nice string representation of the object. If the argument is a string, the return value is the same object. |
cache.ram.__class__.__new__ |
<type 'builtin_function_or_method'>
belongs to class <type 'builtin_function_or_method'>
T.__new__(S, ...) -> a new object with type S, a subtype of T |
cache.ram.__class__.__reduce__ |
<type 'method_descriptor'>
belongs to class <type 'method_descriptor'>
helper for pickle |
cache.ram.__class__.__reduce_ex__ |
<type 'method_descriptor'>
belongs to class <type 'method_descriptor'>
helper for pickle |
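Both `__reduce__` and `__reduce_ex__` exist to support pickling: `pickle` calls `__reduce_ex__` internally to obtain a recipe for reconstructing the object. A minimal round-trip shows the machinery in use:

```python
import pickle

data = {'k': [1, 2, 3]}
# pickle.dumps invokes data.__reduce_ex__(protocol) under the hood
# to serialize the object to bytes.
blob = pickle.dumps(data)
restored = pickle.loads(blob)
print(restored == data)  # an equal but distinct object comes back
```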
cache.ram.__class__.__repr__ |
<type 'wrapper_descriptor'>
belongs to class <type 'wrapper_descriptor'>
x.__repr__() <==> repr(x) |
cache.ram.__class__.__setattr__ |
<type 'wrapper_descriptor'>
belongs to class <type 'wrapper_descriptor'>
x.__setattr__('name', value) <==> x.name = value |
cache.ram.__class__.__sizeof__ |
<type 'method_descriptor'>
belongs to class <type 'method_descriptor'>
__sizeof__() -> int
size of object in memory, in bytes |
cache.ram.__class__.__str__ |
<type 'wrapper_descriptor'>
belongs to class <type 'wrapper_descriptor'>
x.__str__() <==> str(x) |
cache.ram.__class__.__subclasshook__ |
<type 'builtin_function_or_method'>
belongs to class <type 'builtin_function_or_method'>
Abstract classes can override this to customize issubclass(). This is invoked early on by abc.ABCMeta.__subclasscheck__(). It should return True, False or NotImplemented. If it returns NotImplemented, the normal algorithm is used. Otherwise, it overrides the normal algorithm (and the outcome is cached). |
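The `__subclasshook__` mechanism described above is what lets an abstract base class accept "virtual" subclasses that never inherit from it. A minimal sketch (the `Sized`/`Bag` names are illustrative, not from gluon):

```python
import abc

class Sized(abc.ABC):
    @classmethod
    def __subclasshook__(cls, C):
        # Invoked by abc.ABCMeta.__subclasscheck__: treat any class
        # that defines __len__ anywhere in its MRO as a subclass.
        if cls is Sized:
            return any('__len__' in B.__dict__ for B in C.__mro__)
        # Defer to the normal algorithm for subclasses of Sized.
        return NotImplemented

class Bag:
    def __len__(self):
        return 0

print(issubclass(Bag, Sized))  # True, despite no inheritance link
print(issubclass(int, Sized))  # False: int defines no __len__
```

As the docstring notes, the outcome is cached by `ABCMeta`, so the hook runs at most once per (class, candidate) pair.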
cache.ram.__class__.__weakref__ |
<type 'getset_descriptor'>
belongs to class <type 'getset_descriptor'>
list of weak references to the object (if defined) |
cache.ram.__class__._clear |
<type 'instancemethod'>
belongs to class <type 'instancemethod'>
Auxiliary function called by `clear` to search and clear cache entries |
cache.ram.__class__.cache_stats_name |
<type 'str'>
belongs to class <type 'str'>
str(object='') -> string
Return a nice string representation of the object. If the argument is a string, the return value is the same object. |
cache.ram.__class__.clear |
<type 'instancemethod'>
belongs to class <type 'instancemethod'>
|
cache.ram.__class__.increment |
<type 'instancemethod'>
belongs to class <type 'instancemethod'>
|
cache.ram.__class__.initialize |
<type 'instancemethod'>
belongs to class <type 'instancemethod'>
|
cache.ram.__class__.locker |
<type 'thread.lock'>
belongs to class <type 'thread.lock'>
A lock object is a synchronization primitive. To create a lock, call the PyThread_allocate_lock() function. Methods are:
acquire() -- lock the lock, possibly blocking until it can be obtained
release() -- unlock the lock
locked() -- test whether the lock is currently locked
A lock is not owned by the thread that locked it; another thread may unlock it. A thread attempting to lock a lock that it has already locked will block until another thread unlocks it. Deadlocks may ensue. |
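The `locker` attribute serializes access to the shared in-RAM storage. The acquire/release protocol described above can be sketched with `threading.Lock` (the Python 3 equivalent of a Python 2 `thread.lock`), guarding a shared counter against lost updates:

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        # "with lock" calls acquire() on entry and release() on exit,
        # so the read-increment-write below is never interleaved.
        with lock:
            counter += 1

threads = [threading.Thread(target=bump, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: every increment was serialized by the lock
```

web2py uses the same pattern around its cache dictionary, since multiple request threads may read and write entries concurrently.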
cache.ram.__class__.max_ram_utilization |
<type 'NoneType'>
belongs to class <type 'NoneType'>
|
cache.ram.__class__.meta_storage |
<type 'dict'>
belongs to class <type 'dict'>
dict() -> new empty dictionary
dict(mapping) -> new dictionary initialized from a mapping object's (key, value) pairs
dict(iterable) -> new dictionary initialized as if via: d = {}; for k, v in iterable: d[k] = v
dict(**kwargs) -> new dictionary initialized with the name=value pairs in the keyword argument list. For example: dict(one=1, two=2) |
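The four constructor forms from the `dict` docstring, side by side:

```python
a = dict()                           # empty dictionary
b = dict({'one': 1, 'two': 2})       # from a mapping
c = dict([('one', 1), ('two', 2)])   # from an iterable of key/value pairs
d = dict(one=1, two=2)               # from keyword arguments
print(b == c == d)                   # all three forms build the same dict
```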
cache.ram.__class__.stats |
<type 'dict'>
belongs to class <type 'dict'>
dict() -> new empty dictionary
dict(mapping) -> new dictionary initialized from a mapping object's (key, value) pairs
dict(iterable) -> new dictionary initialized as if via: d = {}; for k, v in iterable: d[k] = v
dict(**kwargs) -> new dictionary initialized with the name=value pairs in the keyword argument list. For example: dict(one=1, two=2) |