Function Caching in Python
Function caching is a technique used to store the results of expensive function calls and reuse those results when the same inputs occur again. Python's functools module provides a built-in way to cache function results using the lru_cache decorator.
Using lru_cache
The lru_cache decorator caches the results of a function based on its inputs. It uses a Least Recently Used (LRU) caching strategy to manage the cache size:
from functools import lru_cache

# Apply the lru_cache decorator
@lru_cache(maxsize=128)
def fib(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fib(n - 1) + fib(n - 2)

# Call the cached function
print(fib(10))
In this example, the Fibonacci function results are cached, so repeated calls with the same input are faster.
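You can watch the cache at work via the cache_info() method that lru_cache adds to the wrapped function. A minimal sketch (the exact hit/miss counts below assume a fresh cache):

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(10)
# cache_info() reports hits, misses, maxsize, and the current cache size
print(fib.cache_info())  # e.g. CacheInfo(hits=8, misses=11, maxsize=128, currsize=11)
```

Without the cache, fib(10) would recompute the same subproblems exponentially many times; with it, each value of n is computed exactly once.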
Parameters of lru_cache
- maxsize: Defines the maximum size of the cache. If the cache exceeds this size, the least recently used items are discarded. Setting it to None disables the LRU feature and allows the cache to grow without bound.
- typed: If set to True, arguments of different types will be cached separately. For example, f(3) and f(3.0) will be treated as distinct calls with separate cache entries.
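Both parameters can be seen in one short sketch (f here is a hypothetical function used only for illustration):

```python
from functools import lru_cache

# maxsize=None: unbounded cache; typed=True: int and float keys kept separate
@lru_cache(maxsize=None, typed=True)
def f(x):
    return x * 2

f(3)    # cached under the int key 3
f(3.0)  # cached separately under the float key 3.0
print(f.cache_info().currsize)  # 2 distinct entries
```

With typed=False (the default), f(3) and f(3.0) would share a single cache entry because 3 == 3.0.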
Benefits of Function Caching
- Performance Improvement: Function caching can significantly improve the performance of your program by reducing the need to recompute results for the same inputs.
- Resource Efficiency: Reduces the consumption of computational resources for functions that are called multiple times with the same arguments.
- Ease of Use: The lru_cache decorator is easy to apply and requires minimal code changes to implement caching.
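The performance benefit is easy to measure. A minimal sketch comparing a cached and an uncached recursive Fibonacci (absolute timings will vary by machine; only the relative difference matters):

```python
import time
from functools import lru_cache

def fib_plain(n):
    # No caching: exponential number of recursive calls
    if n < 2:
        return n
    return fib_plain(n - 1) + fib_plain(n - 2)

@lru_cache(maxsize=128)
def fib_cached(n):
    # Cached: each n is computed once
    if n < 2:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)

start = time.perf_counter()
fib_plain(30)
plain_time = time.perf_counter() - start

start = time.perf_counter()
fib_cached(30)
cached_time = time.perf_counter() - start

print(f"uncached: {plain_time:.4f}s, cached: {cached_time:.6f}s")
```

The cached version runs in linear time in n, while the uncached version is exponential, so the gap widens rapidly as n grows.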