# LFU Cache

LeetCode 460. LFU Cache (Hard): https://leetcode.com/problems/lfu-cache/

• Description in short:
Design and implement a data structure for a Least Frequently Used (LFU) cache.

Implement the `LFUCache` class:

- `LFUCache(int capacity)` initializes the object with the capacity of the data structure.
- `int get(int key)` gets the value of the key if the key exists in the cache; otherwise, returns -1.
- `void put(int key, int value)` sets or inserts the value if the key is not already present. When the cache reaches its capacity, it should invalidate the least frequently used item before inserting a new item. For this problem, when there is a tie (i.e., two or more keys with the same frequency), the least recently used key is evicted.

Notice that the number of times an item is used is the number of calls to the `get` and `put` functions for that item since it was inserted. This number is set to zero when the item is removed.

Follow up: could you do both operations in O(1) time complexity?

Example 1:

```
Input
["LFUCache", "put", "put", "get", "put", "get", "get", "put", "get", "get", "get"]
[[2], [1, 1], [2, 2], [1], [3, 3], [2], [3], [4, 4], [1], [3], [4]]
Output
[null, null, null, 1, null, -1, 3, null, -1, 3, 4]
```
• Diagram:
```
capacity=2
most recent <----------> least recent
most frequent <----------> least frequent
(assume left side is the most recent and most frequent side)

put(1, 1):  |k=1,v=1,f=1|___|
put(2, 2):  |k=2,v=2,f=1|k=1,v=1,f=1|
get(1):     |k=1,v=1,f=2|k=2,v=2,f=1|  return 1
put(3, 3):  |k=1,v=1,f=2|k=3,v=3,f=1|  evicts k=2,v=2,f=1
get(2):     |k=1,v=1,f=2|k=3,v=3,f=1|  return -1 (not found)
get(3):     |k=3,v=3,f=2|k=1,v=1,f=2|  return 3
put(4, 4):  |k=3,v=3,f=2|k=4,v=4,f=1|  evicts k=1,v=1,f=2
get(1):     |k=3,v=3,f=2|k=4,v=4,f=1|  return -1 (not found)
get(3):     |k=3,v=3,f=3|k=4,v=4,f=1|  return 3
get(4):     |k=3,v=3,f=3|k=4,v=4,f=2|  return 4
```
• Thought:

Review how the LRU Cache was solved, and add frequency tracking on top of that design.

```
lru_cache = {cache_key: cache_value}
cache_key   = key
cache_value = doubly linked list node

doubly linked list: node (key, value, pre, next)

         cache_k=1  cache_k=2  cache_k=3  cache_k=4  cache_k=5
         cache_v=n1 cache_v=n2 cache_v=n3 cache_v=n4 cache_v=n5
             |          |          |          |          |
head <-> node1 <--> node2 <--> node3 <--> node4 <--> node5 <-> tail
         k=1        k=2        k=3        k=4        k=5
         v=1        v=2        v=3        v=4        v=5
         pre=head   pre=n1     pre=n2     pre=n3     pre=n4
         next=n2    next=n3    next=n4    next=n5    next=tail
        *fre=2     *fre=2     *fre=1     *fre=1     *fre=1
         |                                           |
most recently used                       least recently used
add new node here                        remove node here
```

Any time a new node is added to this single list, it must be spliced into the middle of the list at the position matching its frequency. Finding that position requires traversing the list, so adding or removing in the middle costs O(N) time.
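To make that O(N) cost concrete, here is a small illustrative sketch (the `insert_by_frequency` helper is my own, not part of the final solution) of keeping a single list ordered by frequency: every insert has to scan for its spot.

```python
class Node:
    # Doubly linked list node; sentinels (head/tail) carry fre=None.
    def __init__(self, key=None, value=None, fre=None):
        self.key, self.value, self.fre = key, value, fre
        self.pre = self.next = None

def insert_by_frequency(head, tail, node):
    # Walk from the most-frequent end past every node whose frequency
    # is higher than the new node's -- this walk is the O(N) part.
    cur = head.next
    while cur is not tail and cur.fre > node.fre:
        cur = cur.next
    # Splice the new node in front of `cur`, so among equal frequencies
    # the newest node sits closest to the head (most recently used).
    pre = cur.pre
    pre.next = node
    node.pre = pre
    node.next = cur
    cur.pre = node

# Build an empty list, then insert three nodes out of order.
head, tail = Node(), Node()
head.next, tail.pre = tail, head
for key, fre in [(2, 1), (1, 3), (3, 2)]:
    insert_by_frequency(head, tail, Node(key, key, fre))

order = []
cur = head.next
while cur is not tail:
    order.append((cur.key, cur.fre))
    cur = cur.next
print(order)  # → [(1, 3), (3, 2), (2, 1)], frequencies sorted high to low
```

The order comes out right, but only because each insert walks the list; the solution below removes that walk by giving each frequency its own list.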

• Solution: use two dictionaries. One maps each key to its node; the other maps each frequency to a doubly linked list of the nodes at that frequency.
```
lfu_cache = {cache_key: cache_value}
cache_key   = key
cache_value = doubly linked list node (key, value, frequency, pre, next)

fre_dll = {fre_dll_key: fre_dll_value}
fre_dll_key   = frequency N
fre_dll_value = doubly linked list at frequency N

frequency N:
  head <------> node1 <------> node2 <------> node3 <------> tail
                key=1          key=2          key=3
                value=1        value=2        value=3
                frequency=N    frequency=N    frequency=N
                pre=head       pre=node1     pre=node2
                next=node2     next=node3    next=tail
                |                             |
        most recently used           least recently used
        add new node here            remove node here

frequency N-1:
  head <------> node1 <------> node2 <------> node3 <------> tail
......
frequency 1:
  head <------> node1 <------> node2 <------> node3 <------> tail
```
• Code:

List Node:

```python
class ListNode:
    def __init__(self, key, value, fre=None, pre=None, next=None):
        self.key = key
        self.value = value
        self.fre = fre
        self.pre = pre
        self.next = next
```

Doubly Linked List: (see the diagram in the LRU Cache section)

```python
class DLinkedList:
    def __init__(self):
        self.head = ListNode(None, None)
        self.tail = ListNode(None, None)
        self.head.next = self.tail
        self.tail.pre = self.head

    def remove_node(self, node):
        pre_n = node.pre
        next_n = node.next

        pre_n.next = next_n
        next_n.pre = pre_n

    def add_to_head(self, node):
        next_n = self.head.next

        self.head.next = node
        node.pre = self.head
        node.next = next_n
        next_n.pre = node

    def remove_from_tail(self):
        node = self.tail.pre
        pre_n = node.pre

        pre_n.next = self.tail
        self.tail.pre = pre_n

        return node
```
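Before wiring these classes into the cache, a quick sanity check may help; this little harness is my own addition, not part of the solution, and it repeats `ListNode` and `DLinkedList` inside the snippet so it runs on its own:

```python
# ListNode and DLinkedList repeated from the blocks above, so this
# check is self-contained.
class ListNode:
    def __init__(self, key, value, fre=None, pre=None, next=None):
        self.key = key
        self.value = value
        self.fre = fre
        self.pre = pre
        self.next = next

class DLinkedList:
    def __init__(self):
        self.head = ListNode(None, None)
        self.tail = ListNode(None, None)
        self.head.next = self.tail
        self.tail.pre = self.head

    def remove_node(self, node):
        node.pre.next = node.next
        node.next.pre = node.pre

    def add_to_head(self, node):
        next_n = self.head.next
        self.head.next = node
        node.pre = self.head
        node.next = next_n
        next_n.pre = node

    def remove_from_tail(self):
        node = self.tail.pre
        node.pre.next = self.tail
        self.tail.pre = node.pre
        return node

dll = DLinkedList()
a, b, c = ListNode(1, 1), ListNode(2, 2), ListNode(3, 3)
for n in (a, b, c):
    dll.add_to_head(n)               # list: head <-> c <-> b <-> a <-> tail
assert dll.remove_from_tail() is a   # oldest node comes off the tail
dll.remove_node(b)                   # removing a known node is O(1)
assert dll.head.next is c and c.next is dll.tail
```

Every operation only touches a constant number of pointers, which is what makes the overall cache O(1).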

LFU Cache:

```python
class LFUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache_dict = {}  # key -> ListNode
        self.fre_dll = {}     # frequency -> DLinkedList of nodes at that frequency
        self.min_fre = 0      # smallest frequency currently in the cache

    def get_size(self):
        return len(self.cache_dict)

    def increase_fre(self, node):
        # Move the node out of its current frequency list...
        fre = node.fre
        dll = self.fre_dll[fre]
        dll.remove_node(node)

        # If that list just became empty and held the minimum frequency,
        # the minimum frequency moves up by one.
        if self.min_fre == fre and dll.head.next is dll.tail:
            self.min_fre = fre + 1

        # ...and add it to the head (most recent) of the list for fre + 1.
        node.fre = fre + 1
        if node.fre not in self.fre_dll:
            self.fre_dll[node.fre] = DLinkedList()
        self.fre_dll[node.fre].add_to_head(node)

    def get(self, key: int) -> int:
        # if the key exists, bump its frequency and return its value
        if key not in self.cache_dict:
            return -1

        node = self.cache_dict[key]
        self.increase_fre(node)
        return node.value

    def put(self, key: int, value: int) -> None:
        if self.capacity <= 0:
            return

        if key in self.cache_dict:
            # existing key: update the value and bump the frequency
            node = self.cache_dict[key]
            node.value = value
            self.increase_fre(node)
            return

        if self.get_size() >= self.capacity:
            # evict the least recently used node of the minimum frequency
            evicted = self.fre_dll[self.min_fre].remove_from_tail()
            del self.cache_dict[evicted.key]

        # new keys always start at frequency 1
        node = ListNode(key, value, fre=1)
        self.cache_dict[key] = node
        if 1 not in self.fre_dll:
            self.fre_dll[1] = DLinkedList()
        self.fre_dll[1].add_to_head(node)
        self.min_fre = 1
```
• Other solution: using Python ordered dictionary

Note: this is not the solution the interviewer is looking for. See details in the LRU Cache section.
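For reference, a minimal sketch of that variant. It assumes one plain dict for key → (value, frequency) plus a `defaultdict(OrderedDict)` mapping each frequency to its keys in recency order, so eviction pops the oldest key of the minimum frequency; the class and attribute names here are my own.

```python
from collections import defaultdict, OrderedDict

class LFUCacheOD:
    def __init__(self, capacity):
        self.capacity = capacity
        self.key_map = {}                    # key -> (value, frequency)
        self.fre_map = defaultdict(OrderedDict)  # frequency -> keys, oldest first
        self.min_fre = 0

    def _touch(self, key):
        # Move a key from its frequency bucket to the next one up.
        value, fre = self.key_map[key]
        del self.fre_map[fre][key]
        if not self.fre_map[fre]:
            del self.fre_map[fre]
            if self.min_fre == fre:
                self.min_fre = fre + 1
        self.fre_map[fre + 1][key] = None    # newest entries go to the end
        self.key_map[key] = (value, fre + 1)

    def get(self, key):
        if key not in self.key_map:
            return -1
        self._touch(key)
        return self.key_map[key][0]

    def put(self, key, value):
        if self.capacity <= 0:
            return
        if key in self.key_map:
            self._touch(key)
            self.key_map[key] = (value, self.key_map[key][1])
            return
        if len(self.key_map) >= self.capacity:
            # popitem(last=False) removes the least recently used key
            # among those with the minimum frequency.
            old_key, _ = self.fre_map[self.min_fre].popitem(last=False)
            del self.key_map[old_key]
            if not self.fre_map[self.min_fre]:
                del self.fre_map[self.min_fre]
        self.key_map[key] = (value, 1)
        self.fre_map[1][key] = None
        self.min_fre = 1
```

Running Example 1 against this class reproduces the expected outputs, but it leans on `OrderedDict` doing the recency bookkeeping for you, which is exactly why an interviewer usually wants the doubly-linked-list version instead.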
