Group 1: Basic Frequency Counting & Existence Checks
This group introduces the most fundamental use of hash maps: counting element occurrences or checking for their existence. Problems start with simple counting and progress to using maps to track state across more complex data structures like a 2D grid.
# | Problem | Link | Concept | Description of Difficulty Increase | ✓ |
---|---|---|---|---|---|
1 | Find All Duplicates in an Array | Link | Frequency Counting | Base case: Use a hash map to count integer occurrences and identify those that appear twice. | |
2 | Rabbits in Forest | Link | Frequency Counting | Adds a layer of logic on top of frequency counts. You must interpret the counts according to the problem's grouping rules. | |
3 | Bulls and Cows | Link | Frequency Counting | Requires two passes or a single clever pass. Introduces managing two types of counts (bulls vs. cows) based on character and position. | |
4 | K-diff Pairs in an Array | Link | Frequency Counting | Builds on counting by requiring a second pass over the map's keys to find pairs that satisfy a specific condition (difference of k). | |
5 | Valid Sudoku | Link | Existence Check (Hash Set) | Applies the existence check pattern to a 2D grid. Requires managing 27 different sets (9 rows, 9 columns, 9 boxes) simultaneously. | |
6 | Set Matrix Zeroes | Link | State Tracking (Hash Set) | Uses hash sets not for counting, but to store the rows and columns that need to be modified, separating the "finding" and "setting" phases. | |
7 | Subdomain Visit Count | Link | Frequency Counting | Adds significant string parsing logic. You must first process the input strings into subdomains before applying the counting pattern. | |
8 | Finding the Users Active Minutes | Link | Frequency Counting | Involves a nested data structure: a hash map where each value is a hash set, to count unique minutes per user. | |
9 | Display Table of Food Orders in a Restaurant | Link | Frequency Counting | Requires managing nested counts (table → food item → count). Adds the complexity of collecting all keys for table headers and sorting. | |
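The base pattern for this group (problem 1 above) can be sketched in a few lines of Python. This is a minimal hash-map version; the function name `find_duplicates` is my own, and the real problem also admits an O(1)-extra-space trick that this sketch deliberately skips:

```python
from collections import Counter

def find_duplicates(nums):
    """Return every value that appears exactly twice in nums."""
    counts = Counter(nums)  # value -> number of occurrences
    return [value for value, count in counts.items() if count == 2]
```

Every other problem in the group layers extra bookkeeping (two count types, nested maps, string parsing) on top of this same count-then-interpret skeleton.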
Group 2: Grouping by Canonical Representation
These problems use a hash map to group items. The key to the map is a “canonical” (standard) representation of an item, and the value is a list of all items that map to that key.
# | Problem | Link | Concept | Description of Difficulty Increase | ✓ |
---|---|---|---|---|---|
10 | Group Anagrams | Link | Grouping | Foundational grouping problem. The canonical key is the sorted version of a string. | |
11 | Find and Replace Pattern | Link | Grouping / Pattern Matching | Introduces a more complex canonical key. You need to normalize each word into a numeric pattern (e.g., 'abb' → '122') to group them. | |
12 | Groups of Special-Equivalent Strings | Link | Grouping | A variation on the canonical key concept. The key must represent the counts of characters at both even and odd indices separately. | |
13 | Determine if Two Strings Are Close | Link | Frequency Map Comparison | Instead of grouping, you create two frequency maps and compare them based on two conditions: same keys and same frequency values. | |
14 | Word Subsets | Link | Frequency Map Comparison | Requires creating a single "max frequency" map from one list of words and then comparing each word from another list against it. | |
15 | Find Duplicate File in System | Link | Grouping | Adds string parsing to extract file content, which serves as the canonical key for grouping file paths. | |
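The canonical-key idea is easiest to see in Group Anagrams (problem 10), where the key is simply the word's sorted letters. A minimal sketch, with `group_anagrams` as an assumed name:

```python
from collections import defaultdict

def group_anagrams(words):
    """Group words that are anagrams of one another."""
    groups = defaultdict(list)  # canonical key -> all words sharing it
    for word in words:
        key = "".join(sorted(word))  # sorted letters are the canonical form
        groups[key].append(word)
    return list(groups.values())
```

The later problems in the group only change how the key is computed (a numeric pattern, even/odd index counts, file contents); the map-of-lists skeleton stays the same.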
Group 3: Frequency Counting with Sorting or Greedy Logic
Here, the hash map is the first step in a two-step process. After counting frequencies, you must apply a sorting or greedy algorithm to the counts to find the answer.
# | Problem | Link | Concept | Description of Difficulty Increase | ✓ |
---|---|---|---|---|---|
16 | Sort Characters by Frequency | Link | Frequency Count & Sort | Base case: Count character frequencies, then sort the characters based on those counts to build the result string. | |
17 | Top K Frequent Elements | Link | Frequency Count & Heap/Sort | Similar to the previous, but introduces finding the "top K" elements, making a Min-Heap or Quickselect an efficient follow-up step. | |
18 | Top K Frequent Words | Link | Frequency Count & Heap/Sort | A "boring variation" that adds complexity to the sorting logic: you must handle ties using lexicographical order. | |
19 | Reduce Array Size to the Half | Link | Frequency Count & Greedy | Introduces a greedy approach. After counting, you sort the frequencies and greedily remove the largest counts to solve the problem. | |
20 | Least Number of Unique Integers after K Removals | Link | Frequency Count & Greedy | Similar greedy logic, but now you remove the smallest counts first. Reinforces the pattern of operating on sorted frequencies. | |
21 | Hand of Straights | Link | Frequency Count & Greedy | Requires more complex greedy logic. After counting, you must check for consecutive sequences, often by iterating through sorted keys. | |
22 | Divide Array in Sets of K Consecutive Numbers | Link | Frequency Count & Greedy | A "boring variation" of Hand of Straights. The underlying logic is identical. | |
23 | Reorganize String | Link | Frequency Count & Greedy Heap | The greedy logic becomes more complex, requiring a Max-Heap to always pick the most frequent character that is not adjacent to the last. | |
24 | Distant Barcodes | Link | Frequency Count & Greedy Heap | A "boring variation" of Reorganize String with the same underlying greedy heap-based algorithm. | |
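The count-then-select step (problem 17) can be sketched with a `Counter` feeding a heap. Note this simple version does not handle the lexicographic tie-breaking that Top K Frequent Words adds; `top_k_frequent` is an assumed name:

```python
import heapq
from collections import Counter

def top_k_frequent(nums, k):
    """Return the k most frequent values, most frequent first."""
    counts = Counter(nums)
    # nlargest selects keys ranked by their frequency in O(n log k)
    return heapq.nlargest(k, counts.keys(), key=counts.get)
```

The greedy problems later in the group replace the heap selection with a pass over the sorted frequencies (largest-first or smallest-first, depending on the goal).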
Group 4: Modulo Arithmetic & Pair/Tuple Counting
This set of problems combines frequency maps with modulo arithmetic or number theory to find pairs or combinations that satisfy a specific sum or divisibility property.
# | Problem | Link | Concept | Description of Difficulty Increase | ✓ |
---|---|---|---|---|---|
25 | Pairs of Songs with Total Durations Divisible by 60 | Link | Hashing with Modulo | Foundational problem for this pattern. Use a map to store counts of time % 60 to find complementary pairs. | |
26 | Check if Array Pairs Are Divisible by k | Link | Hashing with Modulo | A "boring variation" of the previous problem, generalizing the logic from a fixed 60 to a variable k. | |
27 | 4Sum II | Link | Hashing Pairs (Two-Sum) | Applies the two-sum pattern on a larger scale. You pre-compute and store sums of pairs from two arrays in a map to find -(c+d). | |
28 | Number of Pairs of Interchangeable Rectangles | Link | Hashing with Ratios | Uses a map to count occurrences of a canonical key, which is the reduced fraction (ratio) of width to height. | |
29 | Count Good Meals | Link | Hashing Pairs (Two-Sum) | A variation of Two-Sum where the target isn't fixed. For each number, you must check for complements for all possible powers of two. | |
30 | Tuple with Same Product | Link | Hashing Pairs | Uses a hash map to count the frequency of products of pairs. The final calculation requires combinatorial logic (nC2). | |
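The foundational problem (25) shows the modulo-complement trick: for each duration, count how many earlier durations have the complementary remainder. A sketch under the assumed name `num_pairs_divisible_by_60`:

```python
from collections import Counter

def num_pairs_divisible_by_60(times):
    """Count pairs (i, j), i < j, with (times[i] + times[j]) % 60 == 0."""
    remainder_counts = Counter()
    pairs = 0
    for t in times:
        r = t % 60
        # an earlier song with remainder (60 - r) % 60 completes a pair;
        # the extra % 60 folds the r == 0 case onto itself
        pairs += remainder_counts[(60 - r) % 60]
        remainder_counts[r] += 1
    return pairs
```

Problem 26 is literally this with 60 replaced by k; the rest of the group swaps the remainder key for a pair sum, a reduced ratio, or a product.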
Group 5: Sliding Window with Hash Maps
In these problems, a hash map is used to maintain the state of a “sliding window” as it moves across an array or string. This is a powerful and common pattern.
# | Problem | Link | Concept | Description of Difficulty Increase | ✓ |
---|---|---|---|---|---|
31 | Longest Substring Without Repeating Characters | Link | Sliding Window | The classic introduction to using a map in a sliding window to track characters and their last seen positions. | |
32 | Fruit Into Baskets | Link | Sliding Window | A slight variation where the window's constraint is having at most 2 distinct character types (fruits). | |
33 | Permutation in String | Link | Sliding Window (Fixed) | Introduces a fixed-size window. Requires two maps: one for the target string and one for the current window, which you compare. | |
34 | Find All Anagrams in a String | Link | Sliding Window (Fixed) | A "boring variation" of the previous problem. Instead of returning true/false, you collect all starting indices of valid anagrams. | |
35 | Maximum Erasure Value | Link | Sliding Window | Combines the unique-element window from "Longest Substring" with calculating a sum, rather than just the length of the window. | |
36 | Longest Repeating Character Replacement | Link | Sliding Window | Introduces a more complex window condition: window_length - max_frequency <= k. Requires efficiently tracking the max frequency. | |
37 | Find the Longest Substring Containing Vowels in Even Counts | Link | Sliding Window & Bitmask | Advanced pattern combining a hash map with bitmasking. The map stores mask → first_index to find subarrays with a target parity state. | |
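The classic opener (problem 31) shows the map's role: remember each character's last position so the left edge can jump forward past duplicates. A sketch, with `longest_unique_substring` as an assumed name:

```python
def longest_unique_substring(s):
    """Length of the longest substring of s with no repeated character."""
    last_seen = {}  # char -> most recent index where it appeared
    left = 0        # left edge of the current window
    best = 0
    for right, ch in enumerate(s):
        if ch in last_seen and last_seen[ch] >= left:
            left = last_seen[ch] + 1  # jump past the previous occurrence
        last_seen[ch] = right
        best = max(best, right - left + 1)
    return best
```

The variations change only the window invariant: at most two distinct keys, matching frequency maps, or the `window_length - max_frequency <= k` condition.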
Group 6: Prefix Sum with Hash Maps
This pattern is used to efficiently find subarrays or subsequences that sum to a specific target. A hash map stores prefix sums and their indices, allowing for O(1) lookups.
# | Problem | Link | Concept | Description of Difficulty Increase | ✓ |
---|---|---|---|---|---|
38 | Subarray Sum Equals K | Link | Prefix Sum | The canonical problem for this pattern. You store counts of prefix sums in a map to find current_sum - k. | |
39 | Binary Subarrays with Sum | Link | Prefix Sum | A "boring variation" of Subarray Sum Equals K, cementing the core concept. | |
40 | Count Number of Nice Subarrays | Link | Prefix Sum | A re-framing of the same problem. You must first transform the input (odd=1, even=0) to see it as a standard subarray sum problem. | |
41 | Contiguous Array | Link | Prefix Sum | Another re-framing. By converting 0s to -1s, the problem becomes finding the longest subarray with a sum of 0. | |
42 | Subarray Sums Divisible by K | Link | Prefix Sum with Modulo | Combines the prefix sum pattern with modulo arithmetic. The map stores counts of prefix_sum % k. | |
43 | Continuous Subarray Sum | Link | Prefix Sum with Modulo | A variation of the previous problem that requires checking for a subarray length of at least 2. | |
44 | Minimum Operations to Reduce X to Zero | Link | Prefix Sum | Requires an insightful re-framing: the problem is equivalent to finding the longest middle subarray that sums to total_sum - x. | |
45 | Remove Zero Sum Consecutive Nodes from Linked List | Link | Prefix Sum on Linked List | Applies the prefix sum pattern to a linked list. The map stores prefix_sum → node, requiring pointer manipulation to remove sublists. | |
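The canonical problem (38) fits in a few lines: a subarray sums to k exactly when two prefix sums differ by k, so we count earlier prefixes equal to current_sum - k. A sketch, with `subarray_sum` as an assumed name:

```python
from collections import defaultdict

def subarray_sum(nums, k):
    """Count contiguous subarrays of nums that sum to exactly k."""
    prefix_counts = defaultdict(int)
    prefix_counts[0] = 1  # the empty prefix, so subarrays starting at 0 count
    running = 0
    total = 0
    for x in nums:
        running += x
        # each earlier prefix equal to running - k ends a valid subarray here
        total += prefix_counts[running - k]
        prefix_counts[running] += 1
    return total
```

Every other problem in the group is this skeleton with a different key (prefix_sum % k), a transformed input (odds as 1s, zeros as -1s), or a different map value (an index or a node instead of a count).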
Group 7: Graph/Tree Traversal with Hashing
In graph and tree problems, hash maps are essential for tracking visited nodes (to avoid cycles) or for mapping old nodes to new nodes when creating a deep copy.
# | Problem | Link | Concept | Description of Difficulty Increase | ✓ |
---|---|---|---|---|---|
46 | Employee Importance | Link | Graph Traversal (DFS/BFS) | Base case: Use a hash map to build an adjacency list (id → Employee) for easy lookups, then perform a simple traversal (DFS or BFS) to sum values. | |
47 | Linked List Cycle II | Link | Traversal & Visited Set | Uses a hash set to keep track of visited nodes in a linked list. The first repeated node found is the start of the cycle. | |
48 | Clone Graph | Link | Graph Traversal & Cloning | The canonical graph cloning problem. A hash map (old_node → new_node) is crucial for tracking visited/cloned nodes to avoid infinite loops and re-creations. | |
49 | Copy List with Random Pointer | Link | Graph Traversal & Cloning | A "boring variation" of Clone Graph applied to a linked list with an extra random pointer, reinforcing the old_node → new_node map pattern. | |
50 | Open the Lock | Link | BFS & Visited Set | A classic state-space search using BFS. A hash set is essential to store visited states (lock combinations) to prevent redundant exploration. | |
51 | Minimum Genetic Mutation | Link | BFS & Visited Set | A "boring variation" of Open the Lock. The states are gene strings instead of numbers, but the BFS with a visited set pattern is identical. | |
52 | Construct Binary Tree from Preorder and Inorder Traversal | Link | Recursion & Index Lookup | A hash map is used as a critical optimization to store value → index for the inorder traversal, allowing O(1) lookup of the root's position. | |
53 | Construct Binary Tree from Inorder and Postorder Traversal | Link | Recursion & Index Lookup | A "boring variation" of the previous problem, using the same inorder map but with postorder traversal logic. | |
54 | Restore the Array from Adjacent Pairs | Link | Graph Traversal | Requires building a graph (adjacency list via hash map) from pairs, finding an endpoint (a node with one neighbor), and then traversing to reconstruct. | |
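The old_node → new_node pattern from Clone Graph (problem 48) can be sketched with a BFS; the map doubles as both the visited set and the lookup for already-created copies. The `Node` class here is a stand-in for the problem's node type:

```python
from collections import deque

class Node:
    def __init__(self, val):
        self.val = val
        self.neighbors = []

def clone_graph(start):
    """Deep-copy an undirected graph reachable from start."""
    if start is None:
        return None
    clones = {start: Node(start.val)}  # original node -> its copy
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nb in node.neighbors:
            if nb not in clones:       # first visit: create the copy
                clones[nb] = Node(nb.val)
                queue.append(nb)
            clones[node].neighbors.append(clones[nb])
    return clones[start]
```

Copy List with Random Pointer (problem 49) uses the exact same map; Open the Lock and Minimum Genetic Mutation keep only the visited-set half of the idea.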
Group 8: Dynamic Programming with Hash Map Memoization
In these problems, a hash map is used as a “memo” to store the results of previously computed subproblems, which is the core of top-down dynamic programming.
# | Problem | Link | Concept | Description of Difficulty Increase | ✓ |
---|---|---|---|---|---|
55 | Word Break | Link | DP with Memo | Classic DP problem. A hash set provides O(1) dictionary lookups, and a hash map can be used for memoization (index → boolean) to store subproblem results. | |
56 | Delete and Earn | Link | DP with Memo | The first step is using a hash map to aggregate points for each number. The problem then becomes a "House Robber" variation on the sorted keys of the map. | |
57 | Longest Arithmetic Subsequence of Given Difference | Link | DP with Memo | Introduces DP where the map state is value → length. For each num, the length of the sequence is 1 + map[num - difference]. | |
58 | Longest String Chain | Link | DP with Memo | The DP state is word → longest_chain_length. Requires sorting the words by length and building the solution by checking predecessors of each word. | |
59 | Binary Trees with Factors | Link | DP with Memo | The DP state is number → count_of_trees. For each number, you iterate through its factors to calculate the total combinations. | |
60 | Longest Arithmetic Subsequence | Link | DP with Memo | The DP state is more complex, requiring a nested map: dp[index][difference]. A map of maps (map<int, map<int, int>>) is a natural fit for this. | |
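Word Break (problem 55) shows both hash-map roles at once: a set for O(1) dictionary lookups and a memo keyed by start index. In this Python sketch the memo is `functools.lru_cache`, which is itself a hash-map-backed cache; a hand-rolled dict would work identically:

```python
from functools import lru_cache

def word_break(s, word_dict):
    """True if s can be segmented into words from word_dict."""
    words = set(word_dict)  # hash set for O(1) membership checks

    @lru_cache(maxsize=None)  # memoizes the answer per start index
    def can_break(start):
        if start == len(s):
            return True
        return any(
            s[start:end] in words and can_break(end)
            for end in range(start + 1, len(s) + 1)
        )

    return can_break(0)
```

The rest of the group varies only the memo's key and value: a number, a word, or a nested (index, difference) pair instead of a start index.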
Group 9: Custom Data Structure Design
This group contains problems that require you to design and implement a class with specific methods and time complexities. A hash map is almost always the core component.
# | Problem | Link | Concept | Description of Difficulty Increase | ✓ |
---|---|---|---|---|---|
61 | Encode and Decode TinyURL | Link | Design | The simplest design problem. Requires two hash maps for forward and reverse lookups (long → short and short → long). | |
62 | Insert Delete GetRandom O(1) | Link | Design | A classic design problem combining a hash map (value → index) with an array for O(1) average time complexity on all operations. | |
63 | Time-Based Key-Value Store | Link | Design | Introduces combining a hash map with another structure. The map's value is a sorted list of (timestamp, value) pairs, requiring binary search for get. | |
64 | Design Underground System | Link | Design | Requires multiple hash maps to track different states: one for active check-ins and another to aggregate travel times between stations. | |
65 | Snapshot Array | Link | Design | A clever design for versioning. Each array index contains a map (snap_id → value) or a sorted list to allow efficient historical lookups. | |
66 | LRU Cache | Link | Design | A canonical design problem requiring a combination of a hash map (for O(1) lookups) and a doubly linked list (to manage recency). | |
67 | Design Twitter | Link | Design | A more complex design involving multiple hash maps (userId → follows, userId → tweets) and merging sorted lists or using a heap to generate the feed. | |
68 | Stock Price Fluctuation | Link | Design | Combines a hash map for current prices with a sorted data structure (like a sorted map or two heaps) to efficiently track min and max prices. | |
69 | Detect Squares | Link | Design | Requires storing point frequencies in a map. The count method involves iterating through existing points to find valid squares. | |
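The map-plus-array combination behind Insert Delete GetRandom O(1) (problem 62) is worth internalizing: deletion stays O(1) by swapping the last array element into the removed slot. A sketch; method names follow Python convention rather than the exact LeetCode signatures:

```python
import random

class RandomizedSet:
    def __init__(self):
        self.values = []    # dense array enabling O(1) uniform random choice
        self.index_of = {}  # value -> its current position in self.values

    def insert(self, val):
        if val in self.index_of:
            return False
        self.index_of[val] = len(self.values)
        self.values.append(val)
        return True

    def remove(self, val):
        if val not in self.index_of:
            return False
        i = self.index_of.pop(val)
        last = self.values.pop()
        if i < len(self.values):       # fill the hole with the old last element
            self.values[i] = last
            self.index_of[last] = i
        return True

    def get_random(self):
        return random.choice(self.values)
```

The heavier designs in the group (LRU Cache, Stock Price Fluctuation) follow the same recipe: pair the hash map with whatever second structure supplies the ordering the map lacks.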
Group 10: Clever Hashing & Miscellaneous Problems
This final group includes problems where the hash map is used in a particularly clever way, as a minor utility in a larger algorithm, or for complex state tracking.
# | Problem | Link | Concept | Description of Difficulty Increase | ✓ |
---|---|---|---|---|---|
70 | Longest Consecutive Sequence | Link | Clever Hashing (Set) | A classic problem solved with a clever trick: use a hash set for O(1) lookups and only start counting a sequence from its true starting number. | |
71 | Letter Combinations of a Phone Number | Link | Backtracking & Lookup | The hash map is used as a simple utility (a lookup table for digit → letters). The core logic is backtracking, not hashing. | |
72 | Brick Wall | Link | Clever Hashing | The key insight is to use a hash map to count the frequency of "gaps" between bricks. The answer is total_rows - max_gap_frequency. | |
73 | Fraction to Recurring Decimal | Link | Simulation & State | A complex simulation of long division. The hash map is critical for detecting a cycle by storing remainder → position. | |
74 | Prison Cells After N Days | Link | Simulation & Cycle Detection | Since the state space is small, cycles must occur. A hash map (state → day) is used to detect the cycle and calculate the final state. | |
75 | Smallest Integer Divisible by K | Link | Math & Cycle Detection | Similar to the above, this problem uses a hash set to detect when a remainder (mod k) repeats, which indicates the sequence will never terminate. | |
76 | Partition Labels | Link | Greedy & Pre-computation | A two-pass greedy solution. The first pass uses a hash map to pre-compute the last occurrence index of every character. | |
77 | Task Scheduler | Link | Greedy & Math | The hash map is used for initial frequency counting. The core logic is a mathematical or greedy approach to calculate idle time. |
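The trick in Longest Consecutive Sequence (problem 70) deserves a sketch: only begin counting from a number whose predecessor is absent, so each element is visited O(1) times overall. `longest_consecutive` is an assumed name:

```python
def longest_consecutive(nums):
    """Length of the longest run of consecutive integers in nums."""
    values = set(nums)  # O(1) membership checks
    best = 0
    for v in values:
        if v - 1 in values:
            continue  # v is not the start of a run; skip it
        length = 1
        while v + length in values:
            length += 1
        best = max(best, length)
    return best
```

Without the `v - 1` check the inner while loop re-walks every run from every member, degrading to O(n²); with it, the whole scan is O(n) on average.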