Cache in Python!

Exploring Python's functools: Cache, Cached Property, and LRU Cache

Python's functools module is a treasure trove for functional programming enthusiasts, offering tools to make your code more efficient and elegant. In this post, we'll dive into three powerful decorators that optimize performance by storing the results of expensive computations: cache, cached_property, and lru_cache. Whether you're speeding up recursive algorithms or simplifying class-based calculations, these tools have you covered. Let's explore each one with clear explanations and practical examples.
1. cache: Simple, Unbounded Memoization

The cache decorator is a lightweight way to memoize function results, storing them for reuse when the same inputs occur again. It's like a sticky note for your function's outputs: no recomputation needed!
How It Works
- What it does: Stores function results in an unbounded dictionary, using arguments as keys.
- When to use it: Ideal for pure functions (same input, same output) that are computationally expensive.
- Key feature: It's equivalent to lru_cache(maxsize=None), but faster due to its simplicity (it skips the bookkeeping needed for eviction).
Example
```python
from functools import cache

@cache
def factorial(n):
    # Pure recursive function: same input always yields the same output
    return n * factorial(n - 1) if n else 1

print(factorial(10))  # Computes: 3628800
print(factorial(10))  # Returns cached result, no recomputation
```
Why It’s Awesome
- Speed: Avoids redundant calculations, making recursive functions like factorial lightning-fast.
- Simplicity: No configuration needed—just slap on the decorator.
- Caveat: The cache grows indefinitely, so monitor memory usage for functions with many unique inputs.
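Because cache is implemented as lru_cache(maxsize=None), the wrapper also exposes cache_info() and cache_clear(), which you can use to watch the cache grow and to reset it when memory matters. A minimal sketch (square is a made-up function for illustration):

```python
from functools import cache

@cache
def square(n):
    return n * n

# 1000 unique inputs -> 1000 cached entries, and the cache never shrinks
for i in range(1000):
    square(i)
print(square.cache_info().currsize)  # 1000

# Reset the cache if memory becomes a concern
square.cache_clear()
print(square.cache_info().currsize)  # 0
```

Sprinkling a cache_clear() at a natural boundary (e.g., between batches) is a simple way to keep an otherwise unbounded cache in check.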
2. cached_property: One-Time Property Computation

The cached_property decorator transforms a class method into a property that computes its value once and caches it for the instance. Think of it as a lazy-loaded attribute that sticks around.
How It Works
- What it does: Runs the method the first time it’s accessed, caches the result as an instance attribute, and returns the cached value on subsequent calls.
- When to use it: Perfect for expensive calculations in classes whose results don't change after the first computation.
- Key feature: Works only with instance methods (requires self).
Example
```python
from functools import cached_property

class Circle:
    def __init__(self, radius):
        self.radius = radius

    @cached_property
    def area(self):
        print("Computing area")
        return 3.14159 * self.radius ** 2

c = Circle(5)
print(c.area)  # Prints: Computing area, then 78.53975
print(c.area)  # Prints: 78.53975 (cached, no recomputation)
```
Why It’s Awesome
- Efficiency: Computes only once per instance, saving CPU cycles.
- Clean code: Reads like a property (c.area vs. c.area()), blending seamlessly into class design.
- Caveat: The cached value is stored as a plain instance attribute, so it can be overwritten (e.g., c.area = 0) and is not invalidated if the underlying data changes; use it for data that stays immutable after the first access.
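Since the cached value lives in the instance's __dict__, deleting the attribute forces a recomputation on the next access. A quick sketch (redefining the Circle class from the example above, minus the print, so the snippet is self-contained):

```python
from functools import cached_property

class Circle:
    def __init__(self, radius):
        self.radius = radius

    @cached_property
    def area(self):
        return 3.14159 * self.radius ** 2

c = Circle(5)
print(c.area)    # 78.53975, computed and cached in c.__dict__

c.radius = 10    # changing the input does NOT invalidate the cache
print(c.area)    # still 78.53975 (stale!)

del c.area       # drop the cached value...
print(c.area)    # ...so the next access recomputes: 314.159
```

This works because cached_property is a non-data descriptor: once the value is in the instance dict, attribute lookup finds it there, and del simply removes that entry.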
3. lru_cache: Flexible, Bounded Memoization

The lru_cache decorator is the heavy hitter of memoization, offering a Least Recently Used (LRU) cache with a configurable size. It's thread-safe and packed with introspection features, making it a go-to for optimizing complex functions.
How It Works
- What it does: Caches up to maxsize results, evicting the least recently used entry when full. Supports a typed option to treat different types (e.g., 3 vs. 3.0) as distinct keys.
- When to use it: Great for recursive algorithms, dynamic programming, or any function with repetitive calls.
- Key feature: Provides cache_info() to inspect hits, misses, and cache size.
Example
```python
from functools import lru_cache

@lru_cache(maxsize=32)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # Computes: 55
print(fib.cache_info())  # Shows: CacheInfo(hits=8, misses=11, maxsize=32, currsize=11)
```
Why It’s Awesome
- Control: Set maxsize to balance memory and performance (use None for unbounded, like cache).
- Thread-safe: The cache's internal state stays consistent under concurrent calls (though the wrapped function may still run more than once for the same arguments if threads race).
- Debugging: cache_info() helps you tune performance by revealing cache effectiveness.
- Caveat: Avoid using it with functions that have side effects, as the cache assumes deterministic outputs; also note that the same cached object is returned to every caller, so be careful with mutable return values.
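The typed option mentioned above is easiest to see in action: with typed=True, 3 and 3.0 get separate cache entries even though they compare equal. A small sketch (half is a made-up function for illustration):

```python
from functools import lru_cache

@lru_cache(maxsize=32, typed=True)
def half(x):
    return x / 2

half(3)      # int key -> one cache entry
half(3.0)    # float key -> a second, distinct entry

print(half.cache_info())  # misses=2, hits=0: 3 and 3.0 were cached separately
```

With the default typed=False, those two calls would share a single entry (one miss, one hit), since 3 == 3.0.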