Hashmap Patterns in LeetCode Problems

Group 1: Basic Frequency Counting & Existence Checks

This group introduces the most fundamental use of hash maps: counting element occurrences or checking for their existence. Problems start with simple counting and progress to using maps to track state across more complex data structures like a 2D grid.

| # | Problem | Link | Concept | Description of Difficulty Increase |
|---|---------|------|---------|------------------------------------|
| 1 | Find All Duplicates in an Array | Link | Frequency Counting | Base case: use a hash map to count integer occurrences and identify those that appear twice. |
| 2 | Rabbits in Forest | Link | Frequency Counting | Adds a layer of logic on top of frequency counts. You must interpret the counts according to the problem’s grouping rules. |
| 3 | Bulls and Cows | Link | Frequency Counting | Requires two passes or a single clever pass. Introduces managing two types of counts (bulls vs. cows) based on character and position. |
| 4 | K-diff Pairs in an Array | Link | Frequency Counting | Builds on counting by requiring a second pass over the map’s keys to find pairs that satisfy a specific condition (difference of k). |
| 5 | Valid Sudoku | Link | Existence Check (Hash Set) | Applies the existence check pattern to a 2D grid. Requires managing 27 different sets (9 rows, 9 columns, 9 boxes) simultaneously. |
| 6 | Set Matrix Zeroes | Link | State Tracking (Hash Set) | Uses hash sets not for counting, but to store the rows and columns that need to be modified, separating the “finding” and “setting” phases. |
| 7 | Subdomain Visit Count | Link | Frequency Counting | Adds significant string parsing logic. You must first process the input strings into subdomains before applying the counting pattern. |
| 8 | Finding the Users Active Minutes | Link | Frequency Counting | Involves a nested data structure: a hash map where each value is a hash set, to count unique minutes per user. |
| 9 | Display Table of Food Orders in a Restaurant | Link | Frequency Counting | Requires managing nested counts (table -> food item -> count). Adds complexity of collecting all keys for table headers and sorting. |
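The base case of this group (problem 1, Find All Duplicates in an Array) can be sketched in a few lines. The function name and list-based return are illustrative choices, not the required LeetCode signature:

```python
from collections import Counter

def find_duplicates(nums):
    """Build a frequency map, then report values seen exactly twice."""
    counts = Counter(nums)  # value -> number of occurrences
    return [v for v, c in counts.items() if c == 2]
```

The same count-then-interpret shape carries through the rest of the group; only the interpretation of the counts changes.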

Group 2: Grouping by Canonical Representation

These problems use a hash map to group items. The key to the map is a “canonical” (standard) representation of an item, and the value is a list of all items that map to that key.

| # | Problem | Link | Concept | Description of Difficulty Increase |
|---|---------|------|---------|------------------------------------|
| 10 | Group Anagrams | Link | Grouping | Foundational grouping problem. The canonical key is the sorted version of a string. |
| 11 | Find and Replace Pattern | Link | Grouping / Pattern Matching | Introduces a more complex canonical key. You need to normalize each word into a numeric pattern (e.g., ‘abb’ -> ‘122’) to group them. |
| 12 | Groups of Special-Equivalent Strings | Link | Grouping | A variation on the canonical key concept. The key must represent the counts of characters at both even and odd indices separately. |
| 13 | Determine if Two Strings Are Close | Link | Frequency Map Comparison | Instead of grouping, you create two frequency maps and compare them based on two conditions: same keys and same frequency values. |
| 14 | Word Subsets | Link | Frequency Map Comparison | Requires creating a single “max frequency” map from one list of words and then comparing each word from another list against it. |
| 15 | Find Duplicate File in System | Link | Grouping | Adds string parsing to extract file content, which serves as the canonical key for grouping file paths. |
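The foundational problem here (10, Group Anagrams) shows the pattern in its purest form: compute a canonical key, then append to that key's bucket. A minimal sketch, with `group_anagrams` as an illustrative name:

```python
from collections import defaultdict

def group_anagrams(words):
    """Group words whose canonical key (their sorted letters) matches."""
    groups = defaultdict(list)  # canonical key -> list of original words
    for w in words:
        groups["".join(sorted(w))].append(w)
    return list(groups.values())
```

The harder problems in this group change only the key function, never the bucketing step.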

Group 3: Frequency Counting with Sorting or Greedy Logic

Here, the hash map is the first step in a two-step process. After counting frequencies, you must apply a sorting or greedy algorithm to the counts to find the answer.

| # | Problem | Link | Concept | Description of Difficulty Increase |
|---|---------|------|---------|------------------------------------|
| 16 | Sort Characters by Frequency | Link | Frequency Count & Sort | Base case: count character frequencies, then sort the characters based on those counts to build the result string. |
| 17 | Top K Frequent Elements | Link | Frequency Count & Heap/Sort | Similar to the previous, but introduces finding the “top K” elements, making a Min-Heap or Quickselect an efficient follow-up step. |
| 18 | Top K Frequent Words | Link | Frequency Count & Heap/Sort | A “boring variation” that adds complexity to the sorting logic: you must handle ties using lexicographical order. |
| 19 | Reduce Array Size to the Half | Link | Frequency Count & Greedy | Introduces a greedy approach. After counting, you sort the frequencies and greedily remove the largest counts to solve the problem. |
| 20 | Least Number of Unique Integers after K Removals | Link | Frequency Count & Greedy | Similar greedy logic, but now you remove the smallest counts first. Reinforces the pattern of operating on sorted frequencies. |
| 21 | Hand of Straights | Link | Frequency Count & Greedy | Requires more complex greedy logic. After counting, you must check for consecutive sequences, often by iterating through sorted keys. |
| 22 | Divide Array in Sets of K Consecutive Numbers | Link | Frequency Count & Greedy | A “boring variation” of Hand of Straights. The underlying logic is identical. |
| 23 | Reorganize String | Link | Frequency Count & Greedy Heap | The greedy logic becomes more complex, requiring a Max-Heap to always pick the most frequent character that is not adjacent to the last. |
| 24 | Distant Barcodes | Link | Frequency Count & Greedy Heap | A “boring variation” of Reorganize String with the same underlying greedy heap-based algorithm. |
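The two-step shape of this group (count, then sort the counts) is visible in the base case, problem 16 (Sort Characters by Frequency). A minimal sketch; note that the order among characters with equal counts is unspecified by the problem:

```python
from collections import Counter

def frequency_sort(s):
    """Count characters, then emit them in descending frequency order."""
    counts = Counter(s)  # char -> occurrence count
    # most_common() returns (char, count) pairs sorted by count descending
    return "".join(ch * c for ch, c in counts.most_common())
```

The greedy problems later in the group keep step one identical and replace the sort in step two with a heap or a sweep over sorted keys.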

Group 4: Modulo Arithmetic & Pair/Tuple Counting

This set of problems combines frequency maps with modulo arithmetic or number theory to find pairs or combinations that satisfy a specific sum or divisibility property.

| # | Problem | Link | Concept | Description of Difficulty Increase |
|---|---------|------|---------|------------------------------------|
| 25 | Pairs of Songs with Total Durations Divisible by 60 | Link | Hashing with Modulo | Foundational problem for this pattern. Use a map to store counts of `time % 60` to find complementary pairs. |
| 26 | Check if Array Pairs Are Divisible by k | Link | Hashing with Modulo | A “boring variation” of the previous problem, generalizing the logic from a fixed 60 to a variable k. |
| 27 | 4Sum II | Link | Hashing Pairs (Two-Sum) | Applies the two-sum pattern on a larger scale. You pre-compute and store sums of pairs from two arrays in a map to find `-(c + d)`. |
| 28 | Number of Pairs of Interchangeable Rectangles | Link | Hashing with Ratios | Uses a map to count occurrences of a canonical key, which is the reduced fraction (ratio) of width to height. |
| 29 | Count Good Meals | Link | Hashing Pairs (Two-Sum) | A variation of Two-Sum where the target isn’t fixed. For each number, you must check for complements for all possible powers of two. |
| 30 | Tuple with Same Product | Link | Hashing Pairs | Uses a hash map to count the frequency of products of pairs. The final calculation requires combinatorial logic (nC2). |
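The foundational problem (25, Pairs of Songs with Total Durations Divisible by 60) shows the core trick: count remainders as you go and look up each element's complement among the remainders seen so far. A minimal sketch with an illustrative function name:

```python
from collections import defaultdict

def num_pairs_divisible_by_60(times):
    """Count pairs (i, j), i < j, with (times[i] + times[j]) % 60 == 0."""
    seen = defaultdict(int)  # remainder mod 60 -> count seen so far
    pairs = 0
    for t in times:
        r = t % 60
        pairs += seen[(60 - r) % 60]  # complements already in the map
        seen[r] += 1
    return pairs
```

Replacing the hard-coded 60 with a parameter `k` yields the generalization used in problem 26.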

Group 5: Sliding Window with Hash Maps

In these problems, a hash map is used to maintain the state of a “sliding window” as it moves across an array or string. This is a powerful and common pattern.

| # | Problem | Link | Concept | Description of Difficulty Increase |
|---|---------|------|---------|------------------------------------|
| 31 | Longest Substring Without Repeating Characters | Link | Sliding Window | The classic introduction to using a map in a sliding window to track characters and their last seen positions. |
| 32 | Fruit Into Baskets | Link | Sliding Window | A slight variation where the window’s constraint is having at most 2 distinct character types (fruits). |
| 33 | Permutation in String | Link | Sliding Window (Fixed) | Introduces a fixed-size window. Requires two maps: one for the target string and one for the current window, which you compare. |
| 34 | Find All Anagrams in a String | Link | Sliding Window (Fixed) | A “boring variation” of the previous problem. Instead of returning true/false, you collect all starting indices of valid anagrams. |
| 35 | Maximum Erasure Value | Link | Sliding Window | Combines the unique-element window from “Longest Substring” with calculating a sum, rather than just the length of the window. |
| 36 | Longest Repeating Character Replacement | Link | Sliding Window | Introduces a more complex window condition: `window_length - max_frequency <= k`. Requires efficiently tracking the max frequency. |
| 37 | Find the Longest Substring Containing Vowels in Even Counts | Link | Sliding Window & Bitmask | Advanced pattern combining a hash map with bitmasking. The map stores `mask -> first_index` to find subarrays with a target parity state. |
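The classic introduction to this pattern (problem 31, Longest Substring Without Repeating Characters) illustrates the map's role: it remembers each character's last seen index so the left edge of the window can jump past repeats. A minimal sketch:

```python
def length_of_longest_substring(s):
    """Sliding window; the map records each char's most recent index."""
    last = {}  # char -> last index at which it was seen
    best = left = 0
    for right, ch in enumerate(s):
        if ch in last and last[ch] >= left:
            left = last[ch] + 1  # jump the window past the repeat
        last[ch] = right
        best = max(best, right - left + 1)
    return best
```

The later problems in the group keep this window mechanics and change only the invariant the map enforces.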

Group 6: Prefix Sum with Hash Maps

This pattern is used to efficiently find subarrays or subsequences that sum to a specific target. A hash map stores prefix sums and their indices, allowing for O(1) lookups.

| # | Problem | Link | Concept | Description of Difficulty Increase |
|---|---------|------|---------|------------------------------------|
| 38 | Subarray Sum Equals K | Link | Prefix Sum | The canonical problem for this pattern. You store counts of prefix sums in a map to find `current_sum - k`. |
| 39 | Binary Subarrays with Sum | Link | Prefix Sum | A “boring variation” of Subarray Sum Equals K, cementing the core concept. |
| 40 | Count Number of Nice Subarrays | Link | Prefix Sum | A re-framing of the same problem. You must first transform the input (odd=1, even=0) to see it as a standard subarray sum problem. |
| 41 | Contiguous Array | Link | Prefix Sum | Another re-framing. By converting 0s to -1s, the problem becomes finding the longest subarray with a sum of 0. |
| 42 | Subarray Sums Divisible by K | Link | Prefix Sum with Modulo | Combines the prefix sum pattern with modulo arithmetic. The map stores counts of `prefix_sum % k`. |
| 43 | Continuous Subarray Sum | Link | Prefix Sum with Modulo | A variation of the previous problem that requires checking for a subarray length of at least 2. |
| 44 | Minimum Operations to Reduce X to Zero | Link | Prefix Sum | Requires an insightful re-framing: the problem is equivalent to finding the longest middle subarray that sums to `total_sum - x`. |
| 45 | Remove Zero Sum Consecutive Nodes from Linked List | Link | Prefix Sum on Linked List | Applies the prefix sum pattern to a linked list. The map stores `prefix_sum -> node`, requiring pointer manipulation to remove sublists. |
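The canonical problem (38, Subarray Sum Equals K) captures the whole pattern: a running total plus a map from prefix sum to how many times it has occurred, seeded with the empty prefix. A minimal sketch:

```python
from collections import defaultdict

def subarray_sum(nums, k):
    """Count subarrays summing to k using prefix-sum frequencies."""
    seen = defaultdict(int)  # prefix sum -> number of times seen
    seen[0] = 1              # the empty prefix
    total = count = 0
    for x in nums:
        total += x
        count += seen[total - k]  # earlier prefixes that complete a k-sum
        seen[total] += 1
    return count
```

The re-framings in this group transform the input first (odd/even to 1/0, 0s to -1s) and then run exactly this loop.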

Group 7: Graph/Tree Traversal with Hashing

In graph and tree problems, hash maps are essential for tracking visited nodes (to avoid cycles) or for mapping old nodes to new nodes when creating a deep copy.

| # | Problem | Link | Concept | Description of Difficulty Increase |
|---|---------|------|---------|------------------------------------|
| 46 | Employee Importance | Link | Graph Traversal (DFS/BFS) | Base case: use a hash map to build an adjacency list (`id -> Employee`) for easy lookups, then perform a simple traversal (DFS or BFS) to sum values. |
| 47 | Linked List Cycle II | Link | Traversal & Visited Set | Uses a hash set to keep track of visited nodes in a linked list. The first repeated node found is the start of the cycle. |
| 48 | Clone Graph | Link | Graph Traversal & Cloning | The canonical graph cloning problem. A hash map (`old_node -> new_node`) is crucial for tracking visited/cloned nodes to avoid infinite loops and re-creations. |
| 49 | Copy List with Random Pointer | Link | Graph Traversal & Cloning | A “boring variation” of Clone Graph applied to a linked list with an extra random pointer, reinforcing the `old_node -> new_node` map pattern. |
| 50 | Open the Lock | Link | BFS & Visited Set | A classic state-space search using BFS. A hash set is essential to store visited states (lock combinations) to prevent redundant exploration. |
| 51 | Minimum Genetic Mutation | Link | BFS & Visited Set | A “boring variation” of Open the Lock. The states are gene strings instead of numbers, but the BFS with a visited set pattern is identical. |
| 52 | Construct Binary Tree from Preorder and Inorder Traversal | Link | Recursion & Index Lookup | A hash map is used as a critical optimization to store `value -> index` for the inorder traversal, allowing O(1) lookup of the root’s position. |
| 53 | Construct Binary Tree from Inorder and Postorder Traversal | Link | Recursion & Index Lookup | A “boring variation” of the previous problem, using the same inorder map but with postorder traversal logic. |
| 54 | Restore the Array from Adjacent Pairs | Link | Graph Traversal | Requires building a graph (adjacency list via hash map) from pairs, finding an endpoint (a node with one neighbor), and then traversing to reconstruct. |
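The canonical cloning problem (48, Clone Graph) shows why the `old_node -> new_node` map matters: registering the clone *before* recursing into neighbors is what breaks cycles. A minimal sketch with a simplified `Node` class (the real problem supplies its own):

```python
class Node:
    def __init__(self, val):
        self.val = val
        self.neighbors = []

def clone_graph(node):
    """Deep-copy a graph; the old->new map doubles as the visited set."""
    if node is None:
        return None
    clones = {}  # original node -> its clone

    def dfs(cur):
        if cur in clones:
            return clones[cur]
        copy = Node(cur.val)
        clones[cur] = copy  # register before recursing, so cycles terminate
        copy.neighbors = [dfs(nb) for nb in cur.neighbors]
        return copy

    return dfs(node)
```

The visited-set problems in this group (47, 50, 51) use the same map or set purely for membership, dropping the cloning half.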

Group 8: Dynamic Programming with Hash Map Memoization

In these problems, a hash map is used as a “memo” to store the results of previously computed subproblems, which is the core of top-down dynamic programming.

| # | Problem | Link | Concept | Description of Difficulty Increase |
|---|---------|------|---------|------------------------------------|
| 55 | Word Break | Link | DP with Memo | Classic DP problem. A hash set provides O(1) dictionary lookups, and a hash map can be used for memoization (`index -> boolean`) to store subproblem results. |
| 56 | Delete and Earn | Link | DP with Memo | The first step is using a hash map to aggregate points for each number. The problem then becomes a “House Robber” variation on the sorted keys of the map. |
| 57 | Longest Arithmetic Subsequence of Given Difference | Link | DP with Memo | Introduces DP where the map state is `value -> length`. For each num, the length of the sequence is `1 + map[num - difference]`. |
| 58 | Longest String Chain | Link | DP with Memo | The DP state is `word -> longest_chain_length`. Requires sorting the words by length and building the solution by checking predecessors of each word. |
| 59 | Binary Trees with Factors | Link | DP with Memo | The DP state is `number -> count_of_trees`. For each number, you iterate through its factors to calculate the total combinations. |
| 60 | Longest Arithmetic Subsequence | Link | DP with Memo | The DP state is more complex, requiring a nested map: `dp[index][difference]`. A map of maps (`map<int, map<int, int>>`) is a natural fit for this. |
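Problem 57 (Longest Arithmetic Subsequence of Given Difference) is the cleanest illustration of a hash map as the DP table itself: the state `value -> length` described in the table translates directly into a dictionary. A minimal sketch:

```python
def longest_subsequence(arr, difference):
    """dp[v] = length of the longest valid subsequence ending at value v."""
    dp = {}
    for num in arr:
        # extend the subsequence ending at num - difference, if any
        dp[num] = dp.get(num - difference, 0) + 1
    return max(dp.values())
```

Because values (not indices) key the map, the lookup `dp[num - difference]` replaces what would otherwise be an O(n) scan per element.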

Group 9: Custom Data Structure Design

This group contains problems that require you to design and implement a class with specific methods and time complexities. A hash map is almost always the core component.

| # | Problem | Link | Concept | Description of Difficulty Increase |
|---|---------|------|---------|------------------------------------|
| 61 | Encode and Decode TinyURL | Link | Design | The simplest design problem. Requires two hash maps for forward and reverse lookups (`long -> short` and `short -> long`). |
| 62 | Insert Delete GetRandom O(1) | Link | Design | A classic design problem combining a hash map (`value -> index`) with an array for O(1) average time complexity on all operations. |
| 63 | Time-Based Key-Value Store | Link | Design | Introduces combining a hash map with another structure. The map’s value is a sorted list of (timestamp, value) pairs, requiring binary search for get. |
| 64 | Design Underground System | Link | Design | Requires multiple hash maps to track different states: one for active check-ins and another to aggregate travel times between stations. |
| 65 | Snapshot Array | Link | Design | A clever design for versioning. Each array index contains a map (`snap_id -> value`) or a sorted list to allow efficient historical lookups. |
| 66 | LRU Cache | Link | Design | A canonical design problem requiring a combination of a hash map (for O(1) lookups) and a doubly linked list (to manage recency). |
| 67 | Design Twitter | Link | Design | A more complex design involving multiple hash maps (`userId -> follows`, `userId -> tweets`) and merging sorted lists or using a heap to generate the feed. |
| 68 | Stock Price Fluctuation | Link | Design | Combines a hash map for current prices with a sorted data structure (like a sorted map or two heaps) to efficiently track min and max prices. |
| 69 | Detect Squares | Link | Design | Requires storing point frequencies in a map. The count method involves iterating through existing points to find valid squares. |
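Problem 62 (Insert Delete GetRandom O(1)) is a good representative of the map-plus-array combination this group relies on: the array gives O(1) random access, and the map locates any value's slot so removal can swap it with the last element. A minimal sketch (method names follow Python convention rather than LeetCode's camelCase):

```python
import random

class RandomizedSet:
    """Hash map (value -> index) plus array: O(1) avg insert/remove/random."""

    def __init__(self):
        self.index = {}   # value -> position in self.values
        self.values = []  # dense array of current values

    def insert(self, val):
        if val in self.index:
            return False
        self.index[val] = len(self.values)
        self.values.append(val)
        return True

    def remove(self, val):
        if val not in self.index:
            return False
        i, last = self.index[val], self.values[-1]
        self.values[i] = last  # fill the hole with the last element
        self.index[last] = i
        self.values.pop()
        del self.index[val]
        return True

    def get_random(self):
        return random.choice(self.values)
```

The swap-with-last trick in `remove` is the crux: popping from the end keeps the array dense without shifting elements.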

Group 10: Clever Hashing & Miscellaneous Problems

This final group includes problems where the hash map is used in a particularly clever way, as a minor utility in a larger algorithm, or for complex state tracking.

| # | Problem | Link | Concept | Description of Difficulty Increase |
|---|---------|------|---------|------------------------------------|
| 70 | Longest Consecutive Sequence | Link | Clever Hashing (Set) | A classic problem solved with a clever trick: use a hash set for O(1) lookups and only start counting a sequence from its true starting number. |
| 71 | Letter Combinations of a Phone Number | Link | Backtracking & Lookup | The hash map is used as a simple utility (a lookup table for `digit -> letters`). The core logic is backtracking, not hashing. |
| 72 | Brick Wall | Link | Clever Hashing | The key insight is to use a hash map to count the frequency of “gaps” between bricks. The answer is `total_rows - max_gap_frequency`. |
| 73 | Fraction to Recurring Decimal | Link | Simulation & State | A complex simulation of long division. The hash map is critical for detecting a cycle by storing `remainder -> position`. |
| 74 | Prison Cells After N Days | Link | Simulation & Cycle Detection | Since the state space is small, cycles must occur. A hash map (`state -> day`) is used to detect the cycle and calculate the final state. |
| 75 | Smallest Integer Divisible by K | Link | Math & Cycle Detection | Similar to the above, this problem uses a hash set to detect when a `remainder % k` repeats, which indicates a non-terminating sequence. |
| 76 | Partition Labels | Link | Greedy & Pre-computation | A two-pass greedy solution. The first pass uses a hash map to pre-compute the last occurrence index of every character. |
| 77 | Task Scheduler | Link | Greedy & Math | The hash map is used for initial frequency counting. The core logic is a mathematical or greedy approach to calculate idle time. |
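The "clever trick" in problem 70 (Longest Consecutive Sequence) is worth seeing in code: a number only begins a run if its predecessor is absent from the set, so each element is visited O(1) times overall despite the inner loop. A minimal sketch:

```python
def longest_consecutive(nums):
    """O(n): only start counting a run from its true starting number."""
    seen = set(nums)
    best = 0
    for n in seen:
        if n - 1 not in seen:  # n is the start of a run
            length = 1
            while n + length in seen:
                length += 1
            best = max(best, length)
    return best
```

Without the `n - 1 not in seen` guard the same runs would be rescanned from every member, degrading to O(n²) in the worst case.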