Path: blob/aarch64-shenandoah-jdk8u272-b10/jdk/src/share/classes/java/util/HashMap.java
/*
 * Copyright (c) 1997, 2017, Oracle and/or its affiliates. All rights reserved.
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation.  Oracle designates this
 * particular file as subject to the "Classpath" exception as provided
 * by Oracle in the LICENSE file that accompanied this code.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
 * or visit www.oracle.com if you need additional information or have any
 * questions.
 */

package java.util;

import java.io.IOException;
import java.io.InvalidObjectException;
import java.io.Serializable;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import sun.misc.SharedSecrets;

/**
 * Hash table based implementation of the <tt>Map</tt> interface.  This
 * implementation provides all of the optional map operations, and permits
 * <tt>null</tt> values and the <tt>null</tt> key.  (The <tt>HashMap</tt>
 * class is roughly equivalent to <tt>Hashtable</tt>, except that it is
 * unsynchronized and permits nulls.)
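An aside on the paragraph above: the null-friendliness and the Hashtable contrast are easy to observe directly. A minimal standalone sketch (the class name `NullKeyDemo` is ours, not part of the JDK):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (not JDK source): HashMap accepts one null key and
// any number of null values, unlike Hashtable, which rejects both.
public class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<>();
        m.put(null, 1);     // the single null key is permitted
        m.put("a", null);   // null values are permitted too
        System.out.println(m.get(null));        // 1
        System.out.println(m.containsKey("a")); // true, even though the value is null
        System.out.println(m.size());           // 2
    }
}
```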
 * This class makes no guarantees as to
 * the order of the map; in particular, it does not guarantee that the order
 * will remain constant over time.
 *
 * <p>This implementation provides constant-time performance for the basic
 * operations (<tt>get</tt> and <tt>put</tt>), assuming the hash function
 * disperses the elements properly among the buckets.  Iteration over
 * collection views requires time proportional to the "capacity" of the
 * <tt>HashMap</tt> instance (the number of buckets) plus its size (the number
 * of key-value mappings).  Thus, it's very important not to set the initial
 * capacity too high (or the load factor too low) if iteration performance is
 * important.
 *
 * <p>An instance of <tt>HashMap</tt> has two parameters that affect its
 * performance: <i>initial capacity</i> and <i>load factor</i>.  The
 * <i>capacity</i> is the number of buckets in the hash table, and the initial
 * capacity is simply the capacity at the time the hash table is created.  The
 * <i>load factor</i> is a measure of how full the hash table is allowed to
 * get before its capacity is automatically increased.  When the number of
 * entries in the hash table exceeds the product of the load factor and the
 * current capacity, the hash table is <i>rehashed</i> (that is, internal data
 * structures are rebuilt) so that the hash table has approximately twice the
 * number of buckets.
 *
 * <p>As a general rule, the default load factor (.75) offers a good
 * tradeoff between time and space costs.  Higher values decrease the
 * space overhead but increase the lookup cost (reflected in most of
 * the operations of the <tt>HashMap</tt> class, including
 * <tt>get</tt> and <tt>put</tt>).  The expected number of entries in
 * the map and its load factor should be taken into account when
 * setting its initial capacity, so as to minimize the number of
 * rehash operations.
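The sizing guidance above reduces to a small rule of thumb: request an initial capacity strictly greater than expected-entries / load-factor. A sketch, under the default load factor of 0.75 (the helper name `capacityFor` is hypothetical, not a JDK API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the javadoc's sizing guidance; `capacityFor` is our own
// hypothetical helper, not part of the JDK.
public class PresizeDemo {
    // Smallest capacity request that holds `expected` mappings without
    // ever crossing the resize threshold at the default 0.75 load factor.
    static int capacityFor(int expected) {
        return (int) (expected / 0.75f) + 1;
    }

    public static void main(String[] args) {
        Map<Integer, Integer> m = new HashMap<>(capacityFor(1000));
        for (int i = 0; i < 1000; i++)
            m.put(i, i); // table is allocated once, then never rehashed here
        System.out.println(capacityFor(1000)); // 1334
    }
}
```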
 * If the initial capacity is greater than the
 * maximum number of entries divided by the load factor, no rehash
 * operations will ever occur.
 *
 * <p>If many mappings are to be stored in a <tt>HashMap</tt>
 * instance, creating it with a sufficiently large capacity will allow
 * the mappings to be stored more efficiently than letting it perform
 * automatic rehashing as needed to grow the table.  Note that using
 * many keys with the same {@code hashCode()} is a sure way to slow
 * down performance of any hash table. To ameliorate impact, when keys
 * are {@link Comparable}, this class may use comparison order among
 * keys to help break ties.
 *
 * <p><strong>Note that this implementation is not synchronized.</strong>
 * If multiple threads access a hash map concurrently, and at least one of
 * the threads modifies the map structurally, it <i>must</i> be
 * synchronized externally.  (A structural modification is any operation
 * that adds or deletes one or more mappings; merely changing the value
 * associated with a key that an instance already contains is not a
 * structural modification.)  This is typically accomplished by
 * synchronizing on some object that naturally encapsulates the map.
 *
 * If no such object exists, the map should be "wrapped" using the
 * {@link Collections#synchronizedMap Collections.synchronizedMap}
 * method.  This is best done at creation time, to prevent accidental
 * unsynchronized access to the map:<pre>
 *   Map m = Collections.synchronizedMap(new HashMap(...));</pre>
 *
 * <p>The iterators returned by all of this class's "collection view methods"
 * are <i>fail-fast</i>: if the map is structurally modified at any time after
 * the iterator is created, in any way except through the iterator's own
 * <tt>remove</tt> method, the iterator will throw a
 * {@link ConcurrentModificationException}.
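A small illustration of the fail-fast contract just described (class name ours; bear in mind the exception is best-effort, as the javadoc goes on to say, so programs must not rely on it):

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Sketch: structural modification mid-iteration trips the fail-fast check;
// the iterator's own remove() is the supported removal path.
public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<>();
        m.put("a", 1);
        m.put("b", 2);
        boolean threw = false;
        try {
            for (String k : m.keySet())
                m.remove(k);              // structural modification during iteration
        } catch (ConcurrentModificationException e) {
            threw = true;                 // fail-fast, on a best-effort basis
        }
        System.out.println(threw);        // true (with this implementation)
        // The supported way to remove while iterating:
        for (Iterator<String> it = m.keySet().iterator(); it.hasNext(); ) {
            it.next();
            it.remove();
        }
        System.out.println(m.isEmpty());  // true
    }
}
```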
 * Thus, in the face of concurrent
 * modification, the iterator fails quickly and cleanly, rather than risking
 * arbitrary, non-deterministic behavior at an undetermined time in the
 * future.
 *
 * <p>Note that the fail-fast behavior of an iterator cannot be guaranteed
 * as it is, generally speaking, impossible to make any hard guarantees in the
 * presence of unsynchronized concurrent modification.  Fail-fast iterators
 * throw <tt>ConcurrentModificationException</tt> on a best-effort basis.
 * Therefore, it would be wrong to write a program that depended on this
 * exception for its correctness: <i>the fail-fast behavior of iterators
 * should be used only to detect bugs.</i>
 *
 * <p>This class is a member of the
 * <a href="{@docRoot}/../technotes/guides/collections/index.html">
 * Java Collections Framework</a>.
 *
 * @param <K> the type of keys maintained by this map
 * @param <V> the type of mapped values
 *
 * @author  Doug Lea
 * @author  Josh Bloch
 * @author  Arthur van Hoff
 * @author  Neal Gafter
 * @see     Object#hashCode()
 * @see     Collection
 * @see     Map
 * @see     TreeMap
 * @see     Hashtable
 * @since   1.2
 */
public class HashMap<K,V> extends AbstractMap<K,V>
    implements Map<K,V>, Cloneable, Serializable {

    private static final long serialVersionUID = 362498820763181265L;

    /*
     * Implementation notes.
     *
     * This map usually acts as a binned (bucketed) hash table, but
     * when bins get too large, they are transformed into bins of
     * TreeNodes, each structured similarly to those in
     * java.util.TreeMap. Most methods try to use normal bins, but
     * relay to TreeNode methods when applicable (simply by checking
     * instanceof a node).  Bins of TreeNodes may be traversed and
     * used like any others, but additionally support faster lookup
     * when overpopulated. However, since the vast majority of bins in
     * normal use are not overpopulated, checking for existence of
     * tree bins may be delayed in the course of table methods.
     *
     * Tree bins (i.e., bins whose elements are all TreeNodes) are
     * ordered primarily by hashCode, but in the case of ties, if two
     * elements are of the same "class C implements Comparable<C>"
     * type, then their compareTo method is used for ordering. (We
     * conservatively check generic types via reflection to validate
     * this -- see method comparableClassFor).  The added complexity
     * of tree bins is worthwhile in providing worst-case O(log n)
     * operations when keys either have distinct hashes or are
     * orderable. Thus, performance degrades gracefully under
     * accidental or malicious usages in which hashCode() methods
     * return values that are poorly distributed, as well as those in
     * which many keys share a hashCode, so long as they are also
     * Comparable. (If neither of these apply, we may waste about a
     * factor of two in time and space compared to taking no
     * precautions. But the only known cases stem from poor user
     * programming practices that are already so slow that this makes
     * little difference.)
     *
     * Because TreeNodes are about twice the size of regular nodes, we
     * use them only when bins contain enough nodes to warrant use
     * (see TREEIFY_THRESHOLD). And when they become too small (due to
     * removal or resizing) they are converted back to plain bins.  In
     * usages with well-distributed user hashCodes, tree bins are
     * rarely used.  Ideally, under random hashCodes, the frequency of
     * nodes in bins follows a Poisson distribution
     * (http://en.wikipedia.org/wiki/Poisson_distribution) with a
     * parameter of about 0.5 on average for the default resizing
     * threshold of 0.75, although with a large variance because of
     * resizing granularity. Ignoring variance, the expected
     * occurrences of list size k are (exp(-0.5) * pow(0.5, k) /
     * factorial(k)).
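The probabilities in the table that follows in the comment can be reproduced directly from that formula. A standalone sketch (not JDK code) that recomputes the Poisson(0.5) bin-size distribution:

```java
// Recomputes the expected bin-size table from the implementation notes:
// P[k] = exp(-0.5) * 0.5^k / k!, built incrementally to avoid factorials.
public class PoissonBins {
    public static void main(String[] args) {
        double p = Math.exp(-0.5); // P[0] = exp(-lambda), lambda = 0.5
        double sum = 0.0;
        for (int k = 0; k <= 8; k++) {
            System.out.printf("%d: %.8f%n", k, p);
            sum += p;
            p *= 0.5 / (k + 1);    // P[k+1] = P[k] * lambda / (k + 1)
        }
        // Tail beyond k = 8: "less than 1 in ten million"
        System.out.printf("more: %.10f%n", 1.0 - sum);
    }
}
```

The first printed line matches the comment's `0: 0.60653066`, and the remaining terms fall off by roughly a factor of `0.5 / (k + 1)` each step.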
     * The first values are:
     *
     * 0:    0.60653066
     * 1:    0.30326533
     * 2:    0.07581633
     * 3:    0.01263606
     * 4:    0.00157952
     * 5:    0.00015795
     * 6:    0.00001316
     * 7:    0.00000094
     * 8:    0.00000006
     * more: less than 1 in ten million
     *
     * The root of a tree bin is normally its first node.  However,
     * sometimes (currently only upon Iterator.remove), the root might
     * be elsewhere, but can be recovered following parent links
     * (method TreeNode.root()).
     *
     * All applicable internal methods accept a hash code as an
     * argument (as normally supplied from a public method), allowing
     * them to call each other without recomputing user hashCodes.
     * Most internal methods also accept a "tab" argument, that is
     * normally the current table, but may be a new or old one when
     * resizing or converting.
     *
     * When bin lists are treeified, split, or untreeified, we keep
     * them in the same relative access/traversal order (i.e., field
     * Node.next) to better preserve locality, and to slightly
     * simplify handling of splits and traversals that invoke
     * iterator.remove. When using comparators on insertion, to keep a
     * total ordering (or as close as is required here) across
     * rebalancings, we compare classes and identityHashCodes as
     * tie-breakers.
     *
     * The use and transitions among plain vs tree modes is
     * complicated by the existence of subclass LinkedHashMap. See
     * below for hook methods defined to be invoked upon insertion,
     * removal and access that allow LinkedHashMap internals to
     * otherwise remain independent of these mechanics. (This also
     * requires that a map instance be passed to some utility methods
     * that may create new nodes.)
     *
     * The concurrent-programming-like SSA-based coding style helps
     * avoid aliasing errors amid all of the twisty pointer operations.
     */

    /**
     * The default initial capacity - MUST be a power of two.
     */
    static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

    /**
     * The maximum capacity, used if a higher value is implicitly specified
     * by either of the constructors with arguments.
     * MUST be a power of two <= 1<<30.
     */
    static final int MAXIMUM_CAPACITY = 1 << 30;

    /**
     * The load factor used when none specified in constructor.
     */
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    /**
     * The bin count threshold for using a tree rather than list for a
     * bin.  Bins are converted to trees when adding an element to a
     * bin with at least this many nodes. The value must be greater
     * than 2 and should be at least 8 to mesh with assumptions in
     * tree removal about conversion back to plain bins upon
     * shrinkage.
     */
    static final int TREEIFY_THRESHOLD = 8;

    /**
     * The bin count threshold for untreeifying a (split) bin during a
     * resize operation. Should be less than TREEIFY_THRESHOLD, and at
     * most 6 to mesh with shrinkage detection under removal.
     */
    static final int UNTREEIFY_THRESHOLD = 6;

    /**
     * The smallest table capacity for which bins may be treeified.
     * (Otherwise the table is resized if too many nodes in a bin.)
     * Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts
     * between resizing and treeification thresholds.
     */
    static final int MIN_TREEIFY_CAPACITY = 64;

    /**
     * Basic hash bin node, used for most entries.  (See below for
     * TreeNode subclass, and in LinkedHashMap for its Entry subclass.)
     */
    static class Node<K,V> implements Map.Entry<K,V> {
        final int hash;
        final K key;
        V value;
        Node<K,V> next;

        Node(int hash, K key, V value, Node<K,V> next) {
            this.hash = hash;
            this.key = key;
            this.value = value;
            this.next = next;
        }

        public final K getKey()        { return key; }
        public final V getValue()      { return value; }
        public final String toString() { return key + "=" + value; }

        public final int hashCode() {
            return Objects.hashCode(key) ^ Objects.hashCode(value);
        }

        public final V setValue(V newValue) {
            V oldValue = value;
            value = newValue;
            return oldValue;
        }

        public final boolean equals(Object o) {
            if (o == this)
                return true;
            if (o instanceof Map.Entry) {
                Map.Entry<?,?> e = (Map.Entry<?,?>)o;
                if (Objects.equals(key, e.getKey()) &&
                    Objects.equals(value, e.getValue()))
                    return true;
            }
            return false;
        }
    }

    /* ---------------- Static utilities -------------- */

    /**
     * Computes key.hashCode() and spreads (XORs) higher bits of hash
     * to lower.  Because the table uses power-of-two masking, sets of
     * hashes that vary only in bits above the current mask will
     * always collide. (Among known examples are sets of Float keys
     * holding consecutive whole numbers in small tables.)  So we
     * apply a transform that spreads the impact of higher bits
     * downward. There is a tradeoff between speed, utility, and
     * quality of bit-spreading.
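The collision effect described above is easy to observe. This standalone sketch copies the spreading transform used by hash() and contrasts it with raw masking on two hashes that differ only above the mask (class and variable names are ours):

```java
// Sketch: why HashMap XORs the high half of the hash into the low half.
// With a 16-slot table, the index mask is 0b1111; two hashes that differ
// only in higher bits would otherwise land in the same bucket.
public class HashSpread {
    // Same transform as HashMap.hash(Object).
    static int spread(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int mask = 16 - 1;                 // table length 16, power of two
        int h1 = 0x10000, h2 = 0x20000;    // differ only above bit 15
        System.out.println((h1 & mask) == (h2 & mask));                         // true: raw masking collides
        System.out.println(((h1 ^ (h1 >>> 16)) & mask) == ((h2 ^ (h2 >>> 16)) & mask)); // false: spread separates them
    }
}
```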
     * Because many common sets of hashes
     * are already reasonably distributed (so don't benefit from
     * spreading), and because we use trees to handle large sets of
     * collisions in bins, we just XOR some shifted bits in the
     * cheapest possible way to reduce systematic lossage, as well as
     * to incorporate impact of the highest bits that would otherwise
     * never be used in index calculations because of table bounds.
     */
    static final int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    /**
     * Returns x's Class if it is of the form "class C implements
     * Comparable<C>", else null.
     */
    static Class<?> comparableClassFor(Object x) {
        if (x instanceof Comparable) {
            Class<?> c; Type[] ts, as; Type t; ParameterizedType p;
            if ((c = x.getClass()) == String.class) // bypass checks
                return c;
            if ((ts = c.getGenericInterfaces()) != null) {
                for (int i = 0; i < ts.length; ++i) {
                    if (((t = ts[i]) instanceof ParameterizedType) &&
                        ((p = (ParameterizedType)t).getRawType() ==
                         Comparable.class) &&
                        (as = p.getActualTypeArguments()) != null &&
                        as.length == 1 && as[0] == c) // type arg is c
                        return c;
                }
            }
        }
        return null;
    }

    /**
     * Returns k.compareTo(x) if x matches kc (k's screened comparable
     * class), else 0.
     */
    @SuppressWarnings({"rawtypes","unchecked"}) // for cast to Comparable
    static int compareComparables(Class<?> kc, Object k, Object x) {
        return (x == null || x.getClass() != kc ? 0 :
                ((Comparable)k).compareTo(x));
    }

    /**
     * Returns a power of two size for the given target capacity.
     */
    static final int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    /* ---------------- Fields -------------- */

    /**
     * The table, initialized on first use, and resized as
     * necessary.
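That power-of-two invariant is what tableSizeFor establishes: it smears the highest set bit of `cap - 1` downward and adds one, rounding any request up to the next power of two. A standalone copy of the smear (not the JDK class itself) to show the rounding behavior:

```java
// Sketch mirroring HashMap.tableSizeFor: round a requested capacity up
// to the next power of two, clamped to [1, 1 << 30].
public class TableSize {
    static int tableSizeFor(int cap) {
        int n = cap - 1;       // so exact powers of two are not doubled
        n |= n >>> 1;          // smear the highest set bit...
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;         // ...down through all lower bits
        return (n < 0) ? 1 : (n >= (1 << 30)) ? (1 << 30) : n + 1;
    }

    public static void main(String[] args) {
        System.out.println(tableSizeFor(17)); // 32
        System.out.println(tableSizeFor(16)); // 16: exact powers stay put
        System.out.println(tableSizeFor(0));  // 1
    }
}
```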
     * When allocated, length is always a power of two.
     * (We also tolerate length zero in some operations to allow
     * bootstrapping mechanics that are currently not needed.)
     */
    transient Node<K,V>[] table;

    /**
     * Holds cached entrySet(). Note that AbstractMap fields are used
     * for keySet() and values().
     */
    transient Set<Map.Entry<K,V>> entrySet;

    /**
     * The number of key-value mappings contained in this map.
     */
    transient int size;

    /**
     * The number of times this HashMap has been structurally modified.
     * Structural modifications are those that change the number of mappings in
     * the HashMap or otherwise modify its internal structure (e.g.,
     * rehash).  This field is used to make iterators on Collection-views of
     * the HashMap fail-fast.  (See ConcurrentModificationException).
     */
    transient int modCount;

    /**
     * The next size value at which to resize (capacity * load factor).
     *
     * @serial
     */
    // (The javadoc description is true upon serialization.
    // Additionally, if the table array has not been allocated, this
    // field holds the initial array capacity, or zero signifying
    // DEFAULT_INITIAL_CAPACITY.)
    int threshold;

    /**
     * The load factor for the hash table.
     *
     * @serial
     */
    final float loadFactor;

    /* ---------------- Public operations -------------- */

    /**
     * Constructs an empty <tt>HashMap</tt> with the specified initial
     * capacity and load factor.
     *
     * @param  initialCapacity the initial capacity
     * @param  loadFactor      the load factor
     * @throws IllegalArgumentException if the initial capacity is negative
     *         or the load factor is nonpositive
     */
    public HashMap(int initialCapacity, float loadFactor) {
        if (initialCapacity < 0)
            throw new IllegalArgumentException("Illegal initial capacity: " +
                                               initialCapacity);
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new IllegalArgumentException("Illegal load factor: " +
                                               loadFactor);
        this.loadFactor = loadFactor;
        this.threshold = tableSizeFor(initialCapacity);
    }

    /**
     * Constructs an empty <tt>HashMap</tt> with the specified initial
     * capacity and the default load factor (0.75).
     *
     * @param  initialCapacity the initial capacity.
     * @throws IllegalArgumentException if the initial capacity is negative.
     */
    public HashMap(int initialCapacity) {
        this(initialCapacity, DEFAULT_LOAD_FACTOR);
    }

    /**
     * Constructs an empty <tt>HashMap</tt> with the default initial capacity
     * (16) and the default load factor (0.75).
     */
    public HashMap() {
        this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
    }

    /**
     * Constructs a new <tt>HashMap</tt> with the same mappings as the
     * specified <tt>Map</tt>.  The <tt>HashMap</tt> is created with
     * default load factor (0.75) and an initial capacity sufficient to
     * hold the mappings in the specified <tt>Map</tt>.
     *
     * @param   m the map whose mappings are to be placed in this map
     * @throws  NullPointerException if the specified map is null
     */
    public HashMap(Map<? extends K, ? extends V> m) {
        this.loadFactor = DEFAULT_LOAD_FACTOR;
        putMapEntries(m, false);
    }

    /**
     * Implements Map.putAll and Map constructor.
     *
     * @param m the map
     * @param evict false when initially constructing this map, else
     * true (relayed to method afterNodeInsertion).
     */
    final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict) {
        int s = m.size();
        if (s > 0) {
            if (table == null) { // pre-size
                float ft = ((float)s / loadFactor) + 1.0F;
                int t = ((ft < (float)MAXIMUM_CAPACITY) ?
                         (int)ft : MAXIMUM_CAPACITY);
                if (t > threshold)
                    threshold = tableSizeFor(t);
            }
            else if (s > threshold)
                resize();
            for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) {
                K key = e.getKey();
                V value = e.getValue();
                putVal(hash(key), key, value, false, evict);
            }
        }
    }

    /**
     * Returns the number of key-value mappings in this map.
     *
     * @return the number of key-value mappings in this map
     */
    public int size() {
        return size;
    }

    /**
     * Returns <tt>true</tt> if this map contains no key-value mappings.
     *
     * @return <tt>true</tt> if this map contains no key-value mappings
     */
    public boolean isEmpty() {
        return size == 0;
    }

    /**
     * Returns the value to which the specified key is mapped,
     * or {@code null} if this map contains no mapping for the key.
     *
     * <p>More formally, if this map contains a mapping from a key
     * {@code k} to a value {@code v} such that {@code (key==null ? k==null :
     * key.equals(k))}, then this method returns {@code v}; otherwise
     * it returns {@code null}.  (There can be at most one such mapping.)
     *
     * <p>A return value of {@code null} does not <i>necessarily</i>
     * indicate that the map contains no mapping for the key; it's also
     * possible that the map explicitly maps the key to {@code null}.
     * The {@link #containsKey containsKey} operation may be used to
     * distinguish these two cases.
     *
     * @see #put(Object, Object)
     */
    public V get(Object key) {
        Node<K,V> e;
        return (e = getNode(hash(key), key)) == null ? null : e.value;
    }

    /**
     * Implements Map.get and related methods.
     *
     * @param hash hash for key
     * @param key the key
     * @return the node, or null if none
     */
    final Node<K,V> getNode(int hash, Object key) {
        Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
        if ((tab = table) != null && (n = tab.length) > 0 &&
            (first = tab[(n - 1) & hash]) != null) {
            if (first.hash == hash && // always check first node
                ((k = first.key) == key || (key != null && key.equals(k))))
                return first;
            if ((e = first.next) != null) {
                if (first instanceof TreeNode)
                    return ((TreeNode<K,V>)first).getTreeNode(hash, key);
                do {
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k))))
                        return e;
                } while ((e = e.next) != null);
            }
        }
        return null;
    }

    /**
     * Returns <tt>true</tt> if this map contains a mapping for the
     * specified key.
     *
     * @param   key   The key whose presence in this map is to be tested
     * @return <tt>true</tt> if this map contains a mapping for the specified
     * key.
     */
    public boolean containsKey(Object key) {
        return getNode(hash(key), key) != null;
    }

    /**
     * Associates the specified value with the specified key in this map.
     * If the map previously contained a mapping for the key, the old
     * value is replaced.
     *
     * @param key key with which the specified value is to be associated
     * @param value value to be associated with the specified key
     * @return the previous value associated with <tt>key</tt>, or
     *         <tt>null</tt> if there was no mapping for <tt>key</tt>.
     *         (A <tt>null</tt> return can also indicate that the map
     *         previously associated <tt>null</tt> with <tt>key</tt>.)
     */
    public V put(K key, V value) {
        return putVal(hash(key), key, value, false, true);
    }

    /**
     * Implements Map.put and related methods.
     *
     * @param hash hash for key
     * @param key the key
     * @param value the value to put
     * @param onlyIfAbsent if true, don't change existing value
     * @param evict if false, the table is in creation mode.
     * @return previous value, or null if none
     */
    final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
                   boolean evict) {
        Node<K,V>[] tab; Node<K,V> p; int n, i;
        if ((tab = table) == null || (n = tab.length) == 0)
            n = (tab = resize()).length;
        if ((p = tab[i = (n - 1) & hash]) == null)
            tab[i] = newNode(hash, key, value, null);
        else {
            Node<K,V> e; K k;
            if (p.hash == hash &&
                ((k = p.key) == key || (key != null && key.equals(k))))
                e = p;
            else if (p instanceof TreeNode)
                e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
            else {
                for (int binCount = 0; ; ++binCount) {
                    if ((e = p.next) == null) {
                        p.next = newNode(hash, key, value, null);
                        if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                            treeifyBin(tab, hash);
                        break;
                    }
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k))))
                        break;
                    p = e;
                }
            }
            if (e != null) { // existing mapping for key
                V oldValue = e.value;
                if (!onlyIfAbsent || oldValue == null)
                    e.value = value;
                afterNodeAccess(e);
                return oldValue;
            }
        }
        ++modCount;
        if (++size > threshold)
            resize();
        afterNodeInsertion(evict);
        return null;
    }

    /**
     * Initializes or doubles table size.  If null, allocates in
     * accord with initial capacity target held in field threshold.
     * Otherwise, because we are using power-of-two expansion, the
     * elements from each bin must either stay at same index, or move
     * with a power of two offset in the new table.
     *
     * @return the table
     */
    final Node<K,V>[] resize() {
        Node<K,V>[] oldTab = table;
        int oldCap = (oldTab == null) ? 0 : oldTab.length;
        int oldThr = threshold;
        int newCap, newThr = 0;
        if (oldCap > 0) {
            if (oldCap >= MAXIMUM_CAPACITY) {
                threshold = Integer.MAX_VALUE;
                return oldTab;
            }
            else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                     oldCap >= DEFAULT_INITIAL_CAPACITY)
                newThr = oldThr << 1; // double threshold
        }
        else if (oldThr > 0) // initial capacity was placed in threshold
            newCap = oldThr;
        else {               // zero initial threshold signifies using defaults
            newCap = DEFAULT_INITIAL_CAPACITY;
            newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
        }
        if (newThr == 0) {
            float ft = (float)newCap * loadFactor;
            newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                      (int)ft : Integer.MAX_VALUE);
        }
        threshold = newThr;
        @SuppressWarnings({"rawtypes","unchecked"})
        Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
        table = newTab;
        if (oldTab != null) {
            for (int j = 0; j < oldCap; ++j) {
                Node<K,V> e;
                if ((e = oldTab[j]) != null) {
                    oldTab[j] = null;
                    if (e.next == null)
                        newTab[e.hash & (newCap - 1)] = e;
                    else if (e instanceof TreeNode)
                        ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                    else { // preserve order
                        Node<K,V> loHead = null, loTail = null;
                        Node<K,V> hiHead = null, hiTail = null;
                        Node<K,V> next;
                        do {
                            next = e.next;
                            if ((e.hash & oldCap) == 0) {
                                if (loTail == null)
                                    loHead = e;
                                else
                                    loTail.next = e;
                                loTail = e;
                            }
                            else {
                                if (hiTail == null)
                                    hiHead = e;
                                else
                                    hiTail.next = e;
                                hiTail = e;
                            }
                        } while ((e = next) != null);
                        if (loTail != null) {
                            loTail.next = null;
                            newTab[j] = loHead;
                        }
                        if (hiTail != null) {
                            hiTail.next = null;
                            newTab[j + oldCap] = hiHead;
                        }
                    }
                }
            }
        }
        return newTab;
    }

    /**
     * Replaces all linked nodes in bin at index for given hash unless
     * table is too small, in which case resizes instead.
     */
    final void treeifyBin(Node<K,V>[] tab, int hash) {
        int n, index; Node<K,V> e;
        if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
            resize();
        else if ((e = tab[index = (n - 1) & hash]) != null) {
            TreeNode<K,V> hd = null, tl = null;
            do {
                TreeNode<K,V> p = replacementTreeNode(e, null);
                if (tl == null)
                    hd = p;
                else {
                    p.prev = tl;
                    tl.next = p;
                }
                tl = p;
            } while ((e = e.next) != null);
            if ((tab[index] = hd) != null)
                hd.treeify(tab);
        }
    }

    /**
     * Copies all of the mappings from the specified map to this map.
     * These mappings will replace any mappings that this map had for
     * any of the keys currently in the specified map.
     *
     * @param m mappings to be stored in this map
     * @throws NullPointerException if the specified map is null
     */
    public void putAll(Map<? extends K, ? extends V> m) {
        putMapEntries(m, true);
    }

    /**
     * Removes the mapping for the specified key from this map if present.
     *
     * @param  key key whose mapping is to be removed from the map
     * @return the previous value associated with <tt>key</tt>, or
     *         <tt>null</tt> if there was no mapping for <tt>key</tt>.
     *         (A <tt>null</tt> return can also indicate that the map
     *         previously associated <tt>null</tt> with <tt>key</tt>.)
     */
    public V remove(Object key) {
        Node<K,V> e;
        return (e = removeNode(hash(key), key, null, false, true)) == null ?
            null : e.value;
    }

    /**
     * Implements Map.remove and related methods.
     *
     * @param hash hash for key
     * @param key the key
     * @param value the value to match if matchValue, else ignored
     * @param matchValue if true only remove if value is equal
     * @param movable if false do not move other nodes while removing
     * @return the node, or null if none
     */
    final Node<K,V> removeNode(int hash, Object key, Object value,
                               boolean matchValue, boolean movable) {
        Node<K,V>[] tab; Node<K,V> p; int n, index;
        if ((tab = table) != null && (n = tab.length) > 0 &&
            (p = tab[index = (n - 1) & hash]) != null) {
            Node<K,V> node = null, e; K k; V v;
            if (p.hash == hash &&
                ((k = p.key) == key || (key != null && key.equals(k))))
                node = p;
            else if ((e = p.next) != null) {
                if (p instanceof TreeNode)
                    node = ((TreeNode<K,V>)p).getTreeNode(hash, key);
                else {
                    do {
                        if (e.hash == hash &&
                            ((k = e.key) == key ||
                             (key != null && key.equals(k)))) {
                            node = e;
                            break;
                        }
                        p = e;
                    } while ((e = e.next) != null);
                }
            }
            if (node != null && (!matchValue || (v = node.value) == value ||
                                 (value != null && value.equals(v)))) {
                if (node instanceof TreeNode)
                    ((TreeNode<K,V>)node).removeTreeNode(this, tab, movable);
                else if (node == p)
                    tab[index] = node.next;
                else
                    p.next = node.next;
                ++modCount;
                --size;
                afterNodeRemoval(node);
                return node;
            }
        }
        return null;
    }

    /**
     * Removes all of the mappings from this map.
     * The map will be empty after this call returns.
     */
    public void clear() {
        Node<K,V>[] tab;
        modCount++;
        if ((tab = table) != null && size > 0) {
            size = 0;
            for (int i = 0; i < tab.length; ++i)
                tab[i] = null;
        }
    }

    /**
     * Returns <tt>true</tt> if this map maps one or more keys to the
     * specified value.
     *
     * @param value value whose presence in this map is to be tested
     * @return <tt>true</tt> if this map maps one or more keys to the
     *         specified value
     */
    public boolean containsValue(Object value) {
        Node<K,V>[] tab; V v;
        if ((tab = table) != null && size > 0) {
            for (int i = 0; i < tab.length; ++i) {
                for (Node<K,V> e = tab[i]; e != null; e = e.next) {
                    if ((v = e.value) == value ||
                        (value != null && value.equals(v)))
                        return true;
                }
            }
        }
        return false;
    }

    /**
     * Returns a {@link Set} view of the keys contained in this map.
     * The set is backed by the map, so changes to the map are
     * reflected in the set, and vice-versa.  If the map is modified
     * while an iteration over the set is in progress (except through
     * the iterator's own <tt>remove</tt> operation), the results of
     * the iteration are undefined.
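A short illustration of the two-way backing just described (a sketch with our own names, not JDK code): removal through the view removes the mapping, and new mappings appear in the view.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch: keySet() is a live view, not a snapshot.
public class KeySetViewDemo {
    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<>();
        m.put("a", 1);
        m.put("b", 2);
        Set<String> ks = m.keySet();
        ks.remove("a");                          // removal through the view...
        System.out.println(m.containsKey("a"));  // false: ...hits the map
        m.put("c", 3);                           // and map changes...
        System.out.println(ks.contains("c"));    // true: ...show up in the view
    }
}
```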
     * The set supports element removal,
     * which removes the corresponding mapping from the map, via the
     * <tt>Iterator.remove</tt>, <tt>Set.remove</tt>,
     * <tt>removeAll</tt>, <tt>retainAll</tt>, and <tt>clear</tt>
     * operations.  It does not support the <tt>add</tt> or <tt>addAll</tt>
     * operations.
     *
     * @return a set view of the keys contained in this map
     */
    public Set<K> keySet() {
        Set<K> ks = keySet;
        if (ks == null) {
            ks = new KeySet();
            keySet = ks;
        }
        return ks;
    }

    final class KeySet extends AbstractSet<K> {
        public final int size()                 { return size; }
        public final void clear()               { HashMap.this.clear(); }
        public final Iterator<K> iterator()     { return new KeyIterator(); }
        public final boolean contains(Object o) { return containsKey(o); }
        public final boolean remove(Object key) {
            return removeNode(hash(key), key, null, false, true) != null;
        }
        public final Spliterator<K> spliterator() {
            return new KeySpliterator<>(HashMap.this, 0, -1, 0, 0);
        }
        public final void forEach(Consumer<? super K> action) {
            Node<K,V>[] tab;
            if (action == null)
                throw new NullPointerException();
            if (size > 0 && (tab = table) != null) {
                int mc = modCount;
                for (int i = 0; i < tab.length; ++i) {
                    for (Node<K,V> e = tab[i]; e != null; e = e.next)
                        action.accept(e.key);
                }
                if (modCount != mc)
                    throw new ConcurrentModificationException();
            }
        }
    }

    /**
     * Returns a {@link Collection} view of the values contained in this map.
     * The collection is backed by the map, so changes to the map are
     * reflected in the collection, and vice-versa.  If the map is
     * modified while an iteration over the collection is in progress
     * (except through the iterator's own <tt>remove</tt> operation),
     * the results of the iteration are undefined.  The collection
     * supports element removal, which removes the corresponding
     * mapping from the map, via the <tt>Iterator.remove</tt>,
     * <tt>Collection.remove</tt>, <tt>removeAll</tt>,
     * <tt>retainAll</tt> and <tt>clear</tt> operations.  It does not
     * support the <tt>add</tt> or <tt>addAll</tt> operations.
     *
     * @return a view of the values contained in this map
     */
    public Collection<V> values() {
        Collection<V> vs = values;
        if (vs == null) {
            vs = new Values();
            values = vs;
        }
        return vs;
    }

    final class Values extends AbstractCollection<V> {
        public final int size()                 { return size; }
        public final void clear()               { HashMap.this.clear(); }
        public final Iterator<V> iterator()     { return new ValueIterator(); }
        public final boolean contains(Object o) { return containsValue(o); }
        public final Spliterator<V> spliterator() {
            return new ValueSpliterator<>(HashMap.this, 0, -1, 0, 0);
        }
        public final void forEach(Consumer<? super V> action) {
            Node<K,V>[] tab;
            if (action == null)
                throw new NullPointerException();
            if (size > 0 && (tab = table) != null) {
                int mc = modCount;
                for (int i = 0; i < tab.length; ++i) {
                    for (Node<K,V> e = tab[i]; e != null; e = e.next)
                        action.accept(e.value);
                }
                if (modCount != mc)
                    throw new ConcurrentModificationException();
            }
        }
    }

    /**
     * Returns a {@link Set} view of the mappings contained in this map.
     * The set is backed by the map, so changes to the map are
     * reflected in the set, and vice-versa.  If the map is modified
     * while an iteration over the set is in progress (except through
     * the iterator's own <tt>remove</tt> operation, or through the
     * <tt>setValue</tt> operation on a map entry returned by the
     * iterator) the results of the iteration are undefined.  The set
     * supports element removal, which removes the corresponding
     * mapping from the map, via the <tt>Iterator.remove</tt>,
     * <tt>Set.remove</tt>, <tt>removeAll</tt>, <tt>retainAll</tt> and
     * <tt>clear</tt> operations.  It does not support the
     * <tt>add</tt> or <tt>addAll</tt> operations.
     *
     * @return a set view of the mappings contained in this map
     */
    public Set<Map.Entry<K,V>> entrySet() {
        Set<Map.Entry<K,V>> es;
        return (es = entrySet) == null ? (entrySet = new EntrySet()) : es;
    }

    final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
        public final int size()                 { return size; }
        public final void clear()               { HashMap.this.clear(); }
        public final Iterator<Map.Entry<K,V>> iterator() {
            return new EntryIterator();
        }
        public final boolean contains(Object o) {
            if (!(o instanceof Map.Entry))
                return false;
            Map.Entry<?,?> e = (Map.Entry<?,?>) o;
            Object key = e.getKey();
            Node<K,V> candidate = getNode(hash(key), key);
            return candidate != null && candidate.equals(e);
        }
        public final boolean remove(Object o) {
            if (o instanceof Map.Entry) {
                Map.Entry<?,?> e = (Map.Entry<?,?>) o;
                Object key = e.getKey();
                Object value = e.getValue();
                return removeNode(hash(key), key, value, true, true) != null;
            }
            return false;
        }
        public final Spliterator<Map.Entry<K,V>> spliterator() {
            return new EntrySpliterator<>(HashMap.this, 0, -1, 0, 0);
        }
        public final void forEach(Consumer<?
super Map.Entry<K,V>> action) {1036Node<K,V>[] tab;1037if (action == null)1038throw new NullPointerException();1039if (size > 0 && (tab = table) != null) {1040int mc = modCount;1041for (int i = 0; i < tab.length; ++i) {1042for (Node<K,V> e = tab[i]; e != null; e = e.next)1043action.accept(e);1044}1045if (modCount != mc)1046throw new ConcurrentModificationException();1047}1048}1049}10501051// Overrides of JDK8 Map extension methods10521053@Override1054public V getOrDefault(Object key, V defaultValue) {1055Node<K,V> e;1056return (e = getNode(hash(key), key)) == null ? defaultValue : e.value;1057}10581059@Override1060public V putIfAbsent(K key, V value) {1061return putVal(hash(key), key, value, true, true);1062}10631064@Override1065public boolean remove(Object key, Object value) {1066return removeNode(hash(key), key, value, true, true) != null;1067}10681069@Override1070public boolean replace(K key, V oldValue, V newValue) {1071Node<K,V> e; V v;1072if ((e = getNode(hash(key), key)) != null &&1073((v = e.value) == oldValue || (v != null && v.equals(oldValue)))) {1074e.value = newValue;1075afterNodeAccess(e);1076return true;1077}1078return false;1079}10801081@Override1082public V replace(K key, V value) {1083Node<K,V> e;1084if ((e = getNode(hash(key), key)) != null) {1085V oldValue = e.value;1086e.value = value;1087afterNodeAccess(e);1088return oldValue;1089}1090return null;1091}10921093@Override1094public V computeIfAbsent(K key,1095Function<? super K, ? 
extends V> mappingFunction) {1096if (mappingFunction == null)1097throw new NullPointerException();1098int hash = hash(key);1099Node<K,V>[] tab; Node<K,V> first; int n, i;1100int binCount = 0;1101TreeNode<K,V> t = null;1102Node<K,V> old = null;1103if (size > threshold || (tab = table) == null ||1104(n = tab.length) == 0)1105n = (tab = resize()).length;1106if ((first = tab[i = (n - 1) & hash]) != null) {1107if (first instanceof TreeNode)1108old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);1109else {1110Node<K,V> e = first; K k;1111do {1112if (e.hash == hash &&1113((k = e.key) == key || (key != null && key.equals(k)))) {1114old = e;1115break;1116}1117++binCount;1118} while ((e = e.next) != null);1119}1120V oldValue;1121if (old != null && (oldValue = old.value) != null) {1122afterNodeAccess(old);1123return oldValue;1124}1125}1126V v = mappingFunction.apply(key);1127if (v == null) {1128return null;1129} else if (old != null) {1130old.value = v;1131afterNodeAccess(old);1132return v;1133}1134else if (t != null)1135t.putTreeVal(this, tab, hash, key, v);1136else {1137tab[i] = newNode(hash, key, v, first);1138if (binCount >= TREEIFY_THRESHOLD - 1)1139treeifyBin(tab, hash);1140}1141++modCount;1142++size;1143afterNodeInsertion(true);1144return v;1145}11461147public V computeIfPresent(K key,1148BiFunction<? super K, ? super V, ? extends V> remappingFunction) {1149if (remappingFunction == null)1150throw new NullPointerException();1151Node<K,V> e; V oldValue;1152int hash = hash(key);1153if ((e = getNode(hash, key)) != null &&1154(oldValue = e.value) != null) {1155V v = remappingFunction.apply(key, oldValue);1156if (v != null) {1157e.value = v;1158afterNodeAccess(e);1159return v;1160}1161else1162removeNode(hash, key, null, false, true);1163}1164return null;1165}11661167@Override1168public V compute(K key,1169BiFunction<? super K, ? super V, ? 
extends V> remappingFunction) {1170if (remappingFunction == null)1171throw new NullPointerException();1172int hash = hash(key);1173Node<K,V>[] tab; Node<K,V> first; int n, i;1174int binCount = 0;1175TreeNode<K,V> t = null;1176Node<K,V> old = null;1177if (size > threshold || (tab = table) == null ||1178(n = tab.length) == 0)1179n = (tab = resize()).length;1180if ((first = tab[i = (n - 1) & hash]) != null) {1181if (first instanceof TreeNode)1182old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);1183else {1184Node<K,V> e = first; K k;1185do {1186if (e.hash == hash &&1187((k = e.key) == key || (key != null && key.equals(k)))) {1188old = e;1189break;1190}1191++binCount;1192} while ((e = e.next) != null);1193}1194}1195V oldValue = (old == null) ? null : old.value;1196V v = remappingFunction.apply(key, oldValue);1197if (old != null) {1198if (v != null) {1199old.value = v;1200afterNodeAccess(old);1201}1202else1203removeNode(hash, key, null, false, true);1204}1205else if (v != null) {1206if (t != null)1207t.putTreeVal(this, tab, hash, key, v);1208else {1209tab[i] = newNode(hash, key, v, first);1210if (binCount >= TREEIFY_THRESHOLD - 1)1211treeifyBin(tab, hash);1212}1213++modCount;1214++size;1215afterNodeInsertion(true);1216}1217return v;1218}12191220@Override1221public V merge(K key, V value,1222BiFunction<? super V, ? super V, ? 
extends V> remappingFunction) {1223if (value == null)1224throw new NullPointerException();1225if (remappingFunction == null)1226throw new NullPointerException();1227int hash = hash(key);1228Node<K,V>[] tab; Node<K,V> first; int n, i;1229int binCount = 0;1230TreeNode<K,V> t = null;1231Node<K,V> old = null;1232if (size > threshold || (tab = table) == null ||1233(n = tab.length) == 0)1234n = (tab = resize()).length;1235if ((first = tab[i = (n - 1) & hash]) != null) {1236if (first instanceof TreeNode)1237old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);1238else {1239Node<K,V> e = first; K k;1240do {1241if (e.hash == hash &&1242((k = e.key) == key || (key != null && key.equals(k)))) {1243old = e;1244break;1245}1246++binCount;1247} while ((e = e.next) != null);1248}1249}1250if (old != null) {1251V v;1252if (old.value != null)1253v = remappingFunction.apply(old.value, value);1254else1255v = value;1256if (v != null) {1257old.value = v;1258afterNodeAccess(old);1259}1260else1261removeNode(hash, key, null, false, true);1262return v;1263}1264if (value != null) {1265if (t != null)1266t.putTreeVal(this, tab, hash, key, value);1267else {1268tab[i] = newNode(hash, key, value, first);1269if (binCount >= TREEIFY_THRESHOLD - 1)1270treeifyBin(tab, hash);1271}1272++modCount;1273++size;1274afterNodeInsertion(true);1275}1276return value;1277}12781279@Override1280public void forEach(BiConsumer<? super K, ? super V> action) {1281Node<K,V>[] tab;1282if (action == null)1283throw new NullPointerException();1284if (size > 0 && (tab = table) != null) {1285int mc = modCount;1286for (int i = 0; i < tab.length; ++i) {1287for (Node<K,V> e = tab[i]; e != null; e = e.next)1288action.accept(e.key, e.value);1289}1290if (modCount != mc)1291throw new ConcurrentModificationException();1292}1293}12941295@Override1296public void replaceAll(BiFunction<? super K, ? super V, ? 
extends V> function) {1297Node<K,V>[] tab;1298if (function == null)1299throw new NullPointerException();1300if (size > 0 && (tab = table) != null) {1301int mc = modCount;1302for (int i = 0; i < tab.length; ++i) {1303for (Node<K,V> e = tab[i]; e != null; e = e.next) {1304e.value = function.apply(e.key, e.value);1305}1306}1307if (modCount != mc)1308throw new ConcurrentModificationException();1309}1310}13111312/* ------------------------------------------------------------ */1313// Cloning and serialization13141315/**1316* Returns a shallow copy of this <tt>HashMap</tt> instance: the keys and1317* values themselves are not cloned.1318*1319* @return a shallow copy of this map1320*/1321@SuppressWarnings("unchecked")1322@Override1323public Object clone() {1324HashMap<K,V> result;1325try {1326result = (HashMap<K,V>)super.clone();1327} catch (CloneNotSupportedException e) {1328// this shouldn't happen, since we are Cloneable1329throw new InternalError(e);1330}1331result.reinitialize();1332result.putMapEntries(this, false);1333return result;1334}13351336// These methods are also used when serializing HashSets1337final float loadFactor() { return loadFactor; }1338final int capacity() {1339return (table != null) ? table.length :1340(threshold > 0) ? threshold :1341DEFAULT_INITIAL_CAPACITY;1342}13431344/**1345* Save the state of the <tt>HashMap</tt> instance to a stream (i.e.,1346* serialize it).1347*1348* @serialData The <i>capacity</i> of the HashMap (the length of the1349* bucket array) is emitted (int), followed by the1350* <i>size</i> (an int, the number of key-value1351* mappings), followed by the key (Object) and value (Object)1352* for each key-value mapping. 
The key-value mappings are1353* emitted in no particular order.1354*/1355private void writeObject(java.io.ObjectOutputStream s)1356throws IOException {1357int buckets = capacity();1358// Write out the threshold, loadfactor, and any hidden stuff1359s.defaultWriteObject();1360s.writeInt(buckets);1361s.writeInt(size);1362internalWriteEntries(s);1363}13641365/**1366* Reconstitutes this map from a stream (that is, deserializes it).1367* @param s the stream1368* @throws ClassNotFoundException if the class of a serialized object1369* could not be found1370* @throws IOException if an I/O error occurs1371*/1372private void readObject(java.io.ObjectInputStream s)1373throws IOException, ClassNotFoundException {1374// Read in the threshold (ignored), loadfactor, and any hidden stuff1375s.defaultReadObject();1376reinitialize();1377if (loadFactor <= 0 || Float.isNaN(loadFactor))1378throw new InvalidObjectException("Illegal load factor: " +1379loadFactor);1380s.readInt(); // Read and ignore number of buckets1381int mappings = s.readInt(); // Read number of mappings (size)1382if (mappings < 0)1383throw new InvalidObjectException("Illegal mappings count: " +1384mappings);1385else if (mappings > 0) { // (if zero, use defaults)1386// Size the table using given load factor only if within1387// range of 0.25...4.01388float lf = Math.min(Math.max(0.25f, loadFactor), 4.0f);1389float fc = (float)mappings / lf + 1.0f;1390int cap = ((fc < DEFAULT_INITIAL_CAPACITY) ?1391DEFAULT_INITIAL_CAPACITY :1392(fc >= MAXIMUM_CAPACITY) ?1393MAXIMUM_CAPACITY :1394tableSizeFor((int)fc));1395float ft = (float)cap * lf;1396threshold = ((cap < MAXIMUM_CAPACITY && ft < MAXIMUM_CAPACITY) ?1397(int)ft : Integer.MAX_VALUE);13981399// Check Map.Entry[].class since it's the nearest public type to1400// what we're actually creating.1401SharedSecrets.getJavaOISAccess().checkArray(s, Map.Entry[].class, cap);1402@SuppressWarnings({"rawtypes","unchecked"})1403Node<K,V>[] tab = (Node<K,V>[])new Node[cap];1404table = 
tab;14051406// Read the keys and values, and put the mappings in the HashMap1407for (int i = 0; i < mappings; i++) {1408@SuppressWarnings("unchecked")1409K key = (K) s.readObject();1410@SuppressWarnings("unchecked")1411V value = (V) s.readObject();1412putVal(hash(key), key, value, false, false);1413}1414}1415}14161417/* ------------------------------------------------------------ */1418// iterators14191420abstract class HashIterator {1421Node<K,V> next; // next entry to return1422Node<K,V> current; // current entry1423int expectedModCount; // for fast-fail1424int index; // current slot14251426HashIterator() {1427expectedModCount = modCount;1428Node<K,V>[] t = table;1429current = next = null;1430index = 0;1431if (t != null && size > 0) { // advance to first entry1432do {} while (index < t.length && (next = t[index++]) == null);1433}1434}14351436public final boolean hasNext() {1437return next != null;1438}14391440final Node<K,V> nextNode() {1441Node<K,V>[] t;1442Node<K,V> e = next;1443if (modCount != expectedModCount)1444throw new ConcurrentModificationException();1445if (e == null)1446throw new NoSuchElementException();1447if ((next = (current = e).next) == null && (t = table) != null) {1448do {} while (index < t.length && (next = t[index++]) == null);1449}1450return e;1451}14521453public final void remove() {1454Node<K,V> p = current;1455if (p == null)1456throw new IllegalStateException();1457if (modCount != expectedModCount)1458throw new ConcurrentModificationException();1459current = null;1460K key = p.key;1461removeNode(hash(key), key, null, false, false);1462expectedModCount = modCount;1463}1464}14651466final class KeyIterator extends HashIterator1467implements Iterator<K> {1468public final K next() { return nextNode().key; }1469}14701471final class ValueIterator extends HashIterator1472implements Iterator<V> {1473public final V next() { return nextNode().value; }1474}14751476final class EntryIterator extends HashIterator1477implements Iterator<Map.Entry<K,V>> 
{1478public final Map.Entry<K,V> next() { return nextNode(); }1479}14801481/* ------------------------------------------------------------ */1482// spliterators14831484static class HashMapSpliterator<K,V> {1485final HashMap<K,V> map;1486Node<K,V> current; // current node1487int index; // current index, modified on advance/split1488int fence; // one past last index1489int est; // size estimate1490int expectedModCount; // for comodification checks14911492HashMapSpliterator(HashMap<K,V> m, int origin,1493int fence, int est,1494int expectedModCount) {1495this.map = m;1496this.index = origin;1497this.fence = fence;1498this.est = est;1499this.expectedModCount = expectedModCount;1500}15011502final int getFence() { // initialize fence and size on first use1503int hi;1504if ((hi = fence) < 0) {1505HashMap<K,V> m = map;1506est = m.size;1507expectedModCount = m.modCount;1508Node<K,V>[] tab = m.table;1509hi = fence = (tab == null) ? 0 : tab.length;1510}1511return hi;1512}15131514public final long estimateSize() {1515getFence(); // force init1516return (long) est;1517}1518}15191520static final class KeySpliterator<K,V>1521extends HashMapSpliterator<K,V>1522implements Spliterator<K> {1523KeySpliterator(HashMap<K,V> m, int origin, int fence, int est,1524int expectedModCount) {1525super(m, origin, fence, est, expectedModCount);1526}15271528public KeySpliterator<K,V> trySplit() {1529int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;1530return (lo >= mid || current != null) ? null :1531new KeySpliterator<>(map, lo, index = mid, est >>>= 1,1532expectedModCount);1533}15341535public void forEachRemaining(Consumer<? super K> action) {1536int i, hi, mc;1537if (action == null)1538throw new NullPointerException();1539HashMap<K,V> m = map;1540Node<K,V>[] tab = m.table;1541if ((hi = fence) < 0) {1542mc = expectedModCount = m.modCount;1543hi = fence = (tab == null) ? 
0 : tab.length;1544}1545else1546mc = expectedModCount;1547if (tab != null && tab.length >= hi &&1548(i = index) >= 0 && (i < (index = hi) || current != null)) {1549Node<K,V> p = current;1550current = null;1551do {1552if (p == null)1553p = tab[i++];1554else {1555action.accept(p.key);1556p = p.next;1557}1558} while (p != null || i < hi);1559if (m.modCount != mc)1560throw new ConcurrentModificationException();1561}1562}15631564public boolean tryAdvance(Consumer<? super K> action) {1565int hi;1566if (action == null)1567throw new NullPointerException();1568Node<K,V>[] tab = map.table;1569if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {1570while (current != null || index < hi) {1571if (current == null)1572current = tab[index++];1573else {1574K k = current.key;1575current = current.next;1576action.accept(k);1577if (map.modCount != expectedModCount)1578throw new ConcurrentModificationException();1579return true;1580}1581}1582}1583return false;1584}15851586public int characteristics() {1587return (fence < 0 || est == map.size ? Spliterator.SIZED : 0) |1588Spliterator.DISTINCT;1589}1590}15911592static final class ValueSpliterator<K,V>1593extends HashMapSpliterator<K,V>1594implements Spliterator<V> {1595ValueSpliterator(HashMap<K,V> m, int origin, int fence, int est,1596int expectedModCount) {1597super(m, origin, fence, est, expectedModCount);1598}15991600public ValueSpliterator<K,V> trySplit() {1601int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;1602return (lo >= mid || current != null) ? null :1603new ValueSpliterator<>(map, lo, index = mid, est >>>= 1,1604expectedModCount);1605}16061607public void forEachRemaining(Consumer<? super V> action) {1608int i, hi, mc;1609if (action == null)1610throw new NullPointerException();1611HashMap<K,V> m = map;1612Node<K,V>[] tab = m.table;1613if ((hi = fence) < 0) {1614mc = expectedModCount = m.modCount;1615hi = fence = (tab == null) ? 
0 : tab.length;1616}1617else1618mc = expectedModCount;1619if (tab != null && tab.length >= hi &&1620(i = index) >= 0 && (i < (index = hi) || current != null)) {1621Node<K,V> p = current;1622current = null;1623do {1624if (p == null)1625p = tab[i++];1626else {1627action.accept(p.value);1628p = p.next;1629}1630} while (p != null || i < hi);1631if (m.modCount != mc)1632throw new ConcurrentModificationException();1633}1634}16351636public boolean tryAdvance(Consumer<? super V> action) {1637int hi;1638if (action == null)1639throw new NullPointerException();1640Node<K,V>[] tab = map.table;1641if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {1642while (current != null || index < hi) {1643if (current == null)1644current = tab[index++];1645else {1646V v = current.value;1647current = current.next;1648action.accept(v);1649if (map.modCount != expectedModCount)1650throw new ConcurrentModificationException();1651return true;1652}1653}1654}1655return false;1656}16571658public int characteristics() {1659return (fence < 0 || est == map.size ? Spliterator.SIZED : 0);1660}1661}16621663static final class EntrySpliterator<K,V>1664extends HashMapSpliterator<K,V>1665implements Spliterator<Map.Entry<K,V>> {1666EntrySpliterator(HashMap<K,V> m, int origin, int fence, int est,1667int expectedModCount) {1668super(m, origin, fence, est, expectedModCount);1669}16701671public EntrySpliterator<K,V> trySplit() {1672int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;1673return (lo >= mid || current != null) ? null :1674new EntrySpliterator<>(map, lo, index = mid, est >>>= 1,1675expectedModCount);1676}16771678public void forEachRemaining(Consumer<? super Map.Entry<K,V>> action) {1679int i, hi, mc;1680if (action == null)1681throw new NullPointerException();1682HashMap<K,V> m = map;1683Node<K,V>[] tab = m.table;1684if ((hi = fence) < 0) {1685mc = expectedModCount = m.modCount;1686hi = fence = (tab == null) ? 
0 : tab.length;1687}1688else1689mc = expectedModCount;1690if (tab != null && tab.length >= hi &&1691(i = index) >= 0 && (i < (index = hi) || current != null)) {1692Node<K,V> p = current;1693current = null;1694do {1695if (p == null)1696p = tab[i++];1697else {1698action.accept(p);1699p = p.next;1700}1701} while (p != null || i < hi);1702if (m.modCount != mc)1703throw new ConcurrentModificationException();1704}1705}17061707public boolean tryAdvance(Consumer<? super Map.Entry<K,V>> action) {1708int hi;1709if (action == null)1710throw new NullPointerException();1711Node<K,V>[] tab = map.table;1712if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {1713while (current != null || index < hi) {1714if (current == null)1715current = tab[index++];1716else {1717Node<K,V> e = current;1718current = current.next;1719action.accept(e);1720if (map.modCount != expectedModCount)1721throw new ConcurrentModificationException();1722return true;1723}1724}1725}1726return false;1727}17281729public int characteristics() {1730return (fence < 0 || est == map.size ? 
Spliterator.SIZED : 0) |1731Spliterator.DISTINCT;1732}1733}17341735/* ------------------------------------------------------------ */1736// LinkedHashMap support173717381739/*1740* The following package-protected methods are designed to be1741* overridden by LinkedHashMap, but not by any other subclass.1742* Nearly all other internal methods are also package-protected1743* but are declared final, so can be used by LinkedHashMap, view1744* classes, and HashSet.1745*/17461747// Create a regular (non-tree) node1748Node<K,V> newNode(int hash, K key, V value, Node<K,V> next) {1749return new Node<>(hash, key, value, next);1750}17511752// For conversion from TreeNodes to plain nodes1753Node<K,V> replacementNode(Node<K,V> p, Node<K,V> next) {1754return new Node<>(p.hash, p.key, p.value, next);1755}17561757// Create a tree bin node1758TreeNode<K,V> newTreeNode(int hash, K key, V value, Node<K,V> next) {1759return new TreeNode<>(hash, key, value, next);1760}17611762// For treeifyBin1763TreeNode<K,V> replacementTreeNode(Node<K,V> p, Node<K,V> next) {1764return new TreeNode<>(p.hash, p.key, p.value, next);1765}17661767/**1768* Reset to initial default state. 
Called by clone and readObject.1769*/1770void reinitialize() {1771table = null;1772entrySet = null;1773keySet = null;1774values = null;1775modCount = 0;1776threshold = 0;1777size = 0;1778}17791780// Callbacks to allow LinkedHashMap post-actions1781void afterNodeAccess(Node<K,V> p) { }1782void afterNodeInsertion(boolean evict) { }1783void afterNodeRemoval(Node<K,V> p) { }17841785// Called only from writeObject, to ensure compatible ordering.1786void internalWriteEntries(java.io.ObjectOutputStream s) throws IOException {1787Node<K,V>[] tab;1788if (size > 0 && (tab = table) != null) {1789for (int i = 0; i < tab.length; ++i) {1790for (Node<K,V> e = tab[i]; e != null; e = e.next) {1791s.writeObject(e.key);1792s.writeObject(e.value);1793}1794}1795}1796}17971798/* ------------------------------------------------------------ */1799// Tree bins18001801/**1802* Entry for Tree bins. Extends LinkedHashMap.Entry (which in turn1803* extends Node) so can be used as extension of either regular or1804* linked node.1805*/1806static final class TreeNode<K,V> extends LinkedHashMap.Entry<K,V> {1807TreeNode<K,V> parent; // red-black tree links1808TreeNode<K,V> left;1809TreeNode<K,V> right;1810TreeNode<K,V> prev; // needed to unlink next upon deletion1811boolean red;1812TreeNode(int hash, K key, V val, Node<K,V> next) {1813super(hash, key, val, next);1814}18151816/**1817* Returns root of tree containing this node.1818*/1819final TreeNode<K,V> root() {1820for (TreeNode<K,V> r = this, p;;) {1821if ((p = r.parent) == null)1822return r;1823r = p;1824}1825}18261827/**1828* Ensures that the given root is the first node of its bin.1829*/1830static <K,V> void moveRootToFront(Node<K,V>[] tab, TreeNode<K,V> root) {1831int n;1832if (root != null && tab != null && (n = tab.length) > 0) {1833int index = (n - 1) & root.hash;1834TreeNode<K,V> first = (TreeNode<K,V>)tab[index];1835if (root != first) {1836Node<K,V> rn;1837tab[index] = root;1838TreeNode<K,V> rp = root.prev;1839if ((rn = root.next) != 
null)1840((TreeNode<K,V>)rn).prev = rp;1841if (rp != null)1842rp.next = rn;1843if (first != null)1844first.prev = root;1845root.next = first;1846root.prev = null;1847}1848assert checkInvariants(root);1849}1850}18511852/**1853* Finds the node starting at root p with the given hash and key.1854* The kc argument caches comparableClassFor(key) upon first use1855* comparing keys.1856*/1857final TreeNode<K,V> find(int h, Object k, Class<?> kc) {1858TreeNode<K,V> p = this;1859do {1860int ph, dir; K pk;1861TreeNode<K,V> pl = p.left, pr = p.right, q;1862if ((ph = p.hash) > h)1863p = pl;1864else if (ph < h)1865p = pr;1866else if ((pk = p.key) == k || (k != null && k.equals(pk)))1867return p;1868else if (pl == null)1869p = pr;1870else if (pr == null)1871p = pl;1872else if ((kc != null ||1873(kc = comparableClassFor(k)) != null) &&1874(dir = compareComparables(kc, k, pk)) != 0)1875p = (dir < 0) ? pl : pr;1876else if ((q = pr.find(h, k, kc)) != null)1877return q;1878else1879p = pl;1880} while (p != null);1881return null;1882}18831884/**1885* Calls find for root node.1886*/1887final TreeNode<K,V> getTreeNode(int h, Object k) {1888return ((parent != null) ? root() : this).find(h, k, null);1889}18901891/**1892* Tie-breaking utility for ordering insertions when equal1893* hashCodes and non-comparable. We don't require a total1894* order, just a consistent insertion rule to maintain1895* equivalence across rebalancings. 
Tie-breaking further than1896* necessary simplifies testing a bit.1897*/1898static int tieBreakOrder(Object a, Object b) {1899int d;1900if (a == null || b == null ||1901(d = a.getClass().getName().1902compareTo(b.getClass().getName())) == 0)1903d = (System.identityHashCode(a) <= System.identityHashCode(b) ?1904-1 : 1);1905return d;1906}19071908/**1909* Forms tree of the nodes linked from this node.1910*/1911final void treeify(Node<K,V>[] tab) {1912TreeNode<K,V> root = null;1913for (TreeNode<K,V> x = this, next; x != null; x = next) {1914next = (TreeNode<K,V>)x.next;1915x.left = x.right = null;1916if (root == null) {1917x.parent = null;1918x.red = false;1919root = x;1920}1921else {1922K k = x.key;1923int h = x.hash;1924Class<?> kc = null;1925for (TreeNode<K,V> p = root;;) {1926int dir, ph;1927K pk = p.key;1928if ((ph = p.hash) > h)1929dir = -1;1930else if (ph < h)1931dir = 1;1932else if ((kc == null &&1933(kc = comparableClassFor(k)) == null) ||1934(dir = compareComparables(kc, k, pk)) == 0)1935dir = tieBreakOrder(k, pk);19361937TreeNode<K,V> xp = p;1938if ((p = (dir <= 0) ? p.left : p.right) == null) {1939x.parent = xp;1940if (dir <= 0)1941xp.left = x;1942else1943xp.right = x;1944root = balanceInsertion(root, x);1945break;1946}1947}1948}1949}1950moveRootToFront(tab, root);1951}19521953/**1954* Returns a list of non-TreeNodes replacing those linked from1955* this node.1956*/1957final Node<K,V> untreeify(HashMap<K,V> map) {1958Node<K,V> hd = null, tl = null;1959for (Node<K,V> q = this; q != null; q = q.next) {1960Node<K,V> p = map.replacementNode(q, null);1961if (tl == null)1962hd = p;1963else1964tl.next = p;1965tl = p;1966}1967return hd;1968}19691970/**1971* Tree version of putVal.1972*/1973final TreeNode<K,V> putTreeVal(HashMap<K,V> map, Node<K,V>[] tab,1974int h, K k, V v) {1975Class<?> kc = null;1976boolean searched = false;1977TreeNode<K,V> root = (parent != null) ? 
root() : this;
            for (TreeNode<K,V> p = root;;) {
                int dir, ph; K pk;
                if ((ph = p.hash) > h)
                    dir = -1;
                else if (ph < h)
                    dir = 1;
                else if ((pk = p.key) == k || (k != null && k.equals(pk)))
                    return p;
                else if ((kc == null &&
                          (kc = comparableClassFor(k)) == null) ||
                         (dir = compareComparables(kc, k, pk)) == 0) {
                    if (!searched) {
                        TreeNode<K,V> q, ch;
                        searched = true;
                        if (((ch = p.left) != null &&
                             (q = ch.find(h, k, kc)) != null) ||
                            ((ch = p.right) != null &&
                             (q = ch.find(h, k, kc)) != null))
                            return q;
                    }
                    dir = tieBreakOrder(k, pk);
                }

                TreeNode<K,V> xp = p;
                if ((p = (dir <= 0) ? p.left : p.right) == null) {
                    Node<K,V> xpn = xp.next;
                    TreeNode<K,V> x = map.newTreeNode(h, k, v, xpn);
                    if (dir <= 0)
                        xp.left = x;
                    else
                        xp.right = x;
                    xp.next = x;
                    x.parent = x.prev = xp;
                    if (xpn != null)
                        ((TreeNode<K,V>)xpn).prev = x;
                    moveRootToFront(tab, balanceInsertion(root, x));
                    return null;
                }
            }
        }

        /**
         * Removes the given node, that must be present before this call.
         * This is messier than typical red-black deletion code because we
         * cannot swap the contents of an interior node with a leaf
         * successor that is pinned by "next" pointers that are accessible
         * independently during traversal. So instead we swap the tree
         * linkages. If the current tree appears to have too few nodes,
         * the bin is converted back to a plain bin. (The test triggers
         * somewhere between 2 and 6 nodes, depending on tree structure).
         */
        final void removeTreeNode(HashMap<K,V> map, Node<K,V>[] tab,
                                  boolean movable) {
            int n;
            if (tab == null || (n = tab.length) == 0)
                return;
            int index = (n - 1) & hash;
            TreeNode<K,V> first = (TreeNode<K,V>)tab[index], root = first, rl;
            TreeNode<K,V> succ = (TreeNode<K,V>)next, pred = prev;
            if (pred == null)
                tab[index] = first = succ;
            else
                pred.next = succ;
            if (succ != null)
                succ.prev = pred;
            if (first == null)
                return;
            if (root.parent != null)
                root = root.root();
            if (root == null
                || (movable
                    && (root.right == null
                        || (rl = root.left) == null
                        || rl.left == null))) {
                tab[index] = first.untreeify(map);  // too small
                return;
            }
            TreeNode<K,V> p = this, pl = left, pr = right, replacement;
            if (pl != null && pr != null) {
                TreeNode<K,V> s = pr, sl;
                while ((sl = s.left) != null) // find successor
                    s = sl;
                boolean c = s.red; s.red = p.red; p.red = c; // swap colors
                TreeNode<K,V> sr = s.right;
                TreeNode<K,V> pp = p.parent;
                if (s == pr) { // p was s's direct parent
                    p.parent = s;
                    s.right = p;
                }
                else {
                    TreeNode<K,V> sp = s.parent;
                    if ((p.parent = sp) != null) {
                        if (s == sp.left)
                            sp.left = p;
                        else
                            sp.right = p;
                    }
                    if ((s.right = pr) != null)
                        pr.parent = s;
                }
                p.left = null;
                if ((p.right = sr) != null)
                    sr.parent = p;
                if ((s.left = pl) != null)
                    pl.parent = s;
                if ((s.parent = pp) == null)
                    root = s;
                else if (p == pp.left)
                    pp.left = s;
                else
                    pp.right = s;
                if (sr != null)
                    replacement = sr;
                else
                    replacement = p;
            }
            else if (pl != null)
                replacement = pl;
            else if (pr != null)
                replacement = pr;
            else
                replacement = p;
            if (replacement != p) {
                TreeNode<K,V> pp = replacement.parent = p.parent;
                if (pp == null)
                    root = replacement;
                else if (p == pp.left)
                    pp.left = replacement;
                else
                    pp.right = replacement;
                p.left = p.right = p.parent = null;
            }

            TreeNode<K,V> r = p.red ? root : balanceDeletion(root, replacement);

            if (replacement == p) {  // detach
                TreeNode<K,V> pp = p.parent;
                p.parent = null;
                if (pp != null) {
                    if (p == pp.left)
                        pp.left = null;
                    else if (p == pp.right)
                        pp.right = null;
                }
            }
            if (movable)
                moveRootToFront(tab, r);
        }

        /**
         * Splits nodes in a tree bin into lower and upper tree bins,
         * or untreeifies if now too small. Called only from resize;
         * see above discussion about split bits and indices.
         *
         * @param map the map
         * @param tab the table for recording bin heads
         * @param index the index of the table being split
         * @param bit the bit of hash to split on
         */
        final void split(HashMap<K,V> map, Node<K,V>[] tab, int index, int bit) {
            TreeNode<K,V> b = this;
            // Relink into lo and hi lists, preserving order
            TreeNode<K,V> loHead = null, loTail = null;
            TreeNode<K,V> hiHead = null, hiTail = null;
            int lc = 0, hc = 0;
            for (TreeNode<K,V> e = b, next; e != null; e = next) {
                next = (TreeNode<K,V>)e.next;
                e.next = null;
                if ((e.hash & bit) == 0) {
                    if ((e.prev = loTail) == null)
                        loHead = e;
                    else
                        loTail.next = e;
                    loTail = e;
                    ++lc;
                }
                else {
                    if ((e.prev = hiTail) == null)
                        hiHead = e;
                    else
                        hiTail.next = e;
                    hiTail = e;
                    ++hc;
                }
            }

            if (loHead != null) {
                if (lc <= UNTREEIFY_THRESHOLD)
                    tab[index] = loHead.untreeify(map);
                else {
                    tab[index] = loHead;
                    if (hiHead != null) // (else is already treeified)
                        loHead.treeify(tab);
                }
            }
            if (hiHead != null) {
                if (hc <= UNTREEIFY_THRESHOLD)
                    tab[index + bit] = hiHead.untreeify(map);
                else {
                    tab[index + bit] = hiHead;
                    if (loHead != null)
                        hiHead.treeify(tab);
                }
            }
        }

        /* ------------------------------------------------------------ */
        // Red-black tree methods, all adapted from CLR

        static <K,V> TreeNode<K,V> rotateLeft(TreeNode<K,V> root,
                                              TreeNode<K,V> p) {
            TreeNode<K,V> r, pp, rl;
            if (p != null && (r = p.right) != null) {
                if ((rl = p.right = r.left) != null)
                    rl.parent = p;
                if ((pp = r.parent = p.parent) == null)
                    (root = r).red = false;
                else if (pp.left == p)
                    pp.left = r;
                else
                    pp.right = r;
                r.left = p;
                p.parent = r;
            }
            return root;
        }

        static <K,V> TreeNode<K,V> rotateRight(TreeNode<K,V> root,
                                               TreeNode<K,V> p) {
            TreeNode<K,V> l, pp, lr;
            if (p != null && (l = p.left) != null) {
                if ((lr = p.left = l.right) != null)
                    lr.parent = p;
                if ((pp = l.parent = p.parent) == null)
                    (root = l).red = false;
                else if (pp.right == p)
                    pp.right = l;
                else
                    pp.left = l;
                l.right = p;
                p.parent = l;
            }
            return root;
        }

        static <K,V> TreeNode<K,V> balanceInsertion(TreeNode<K,V> root,
                                                    TreeNode<K,V> x) {
            x.red = true;
            for (TreeNode<K,V> xp, xpp, xppl, xppr;;) {
                if ((xp = x.parent) == null) {
                    x.red = false;
                    return x;
                }
                else if (!xp.red || (xpp = xp.parent) == null)
                    return root;
                if (xp == (xppl = xpp.left)) {
                    if ((xppr = xpp.right) != null && xppr.red) {
                        xppr.red = false;
                        xp.red = false;
                        xpp.red = true;
                        x = xpp;
                    }
                    else {
                        if (x == xp.right) {
                            root = rotateLeft(root, x = xp);
                            xpp = (xp = x.parent) == null ? null : xp.parent;
                        }
                        if (xp != null) {
                            xp.red = false;
                            if (xpp != null) {
                                xpp.red = true;
                                root = rotateRight(root, xpp);
                            }
                        }
                    }
                }
                else {
                    if (xppl != null && xppl.red) {
                        xppl.red = false;
                        xp.red = false;
                        xpp.red = true;
                        x = xpp;
                    }
                    else {
                        if (x == xp.left) {
                            root = rotateRight(root, x = xp);
                            xpp = (xp = x.parent) == null ? null : xp.parent;
                        }
                        if (xp != null) {
                            xp.red = false;
                            if (xpp != null) {
                                xpp.red = true;
                                root = rotateLeft(root, xpp);
                            }
                        }
                    }
                }
            }
        }

        static <K,V> TreeNode<K,V> balanceDeletion(TreeNode<K,V> root,
                                                   TreeNode<K,V> x) {
            for (TreeNode<K,V> xp, xpl, xpr;;) {
                if (x == null || x == root)
                    return root;
                else if ((xp = x.parent) == null) {
                    x.red = false;
                    return x;
                }
                else if (x.red) {
                    x.red = false;
                    return root;
                }
                else if ((xpl = xp.left) == x) {
                    if ((xpr = xp.right) != null && xpr.red) {
                        xpr.red = false;
                        xp.red = true;
                        root = rotateLeft(root, xp);
                        xpr = (xp = x.parent) == null ? null : xp.right;
                    }
                    if (xpr == null)
                        x = xp;
                    else {
                        TreeNode<K,V> sl = xpr.left, sr = xpr.right;
                        if ((sr == null || !sr.red) &&
                            (sl == null || !sl.red)) {
                            xpr.red = true;
                            x = xp;
                        }
                        else {
                            if (sr == null || !sr.red) {
                                if (sl != null)
                                    sl.red = false;
                                xpr.red = true;
                                root = rotateRight(root, xpr);
                                xpr = (xp = x.parent) == null ?
                                    null : xp.right;
                            }
                            if (xpr != null) {
                                xpr.red = (xp == null) ? false : xp.red;
                                if ((sr = xpr.right) != null)
                                    sr.red = false;
                            }
                            if (xp != null) {
                                xp.red = false;
                                root = rotateLeft(root, xp);
                            }
                            x = root;
                        }
                    }
                }
                else { // symmetric
                    if (xpl != null && xpl.red) {
                        xpl.red = false;
                        xp.red = true;
                        root = rotateRight(root, xp);
                        xpl = (xp = x.parent) == null ? null : xp.left;
                    }
                    if (xpl == null)
                        x = xp;
                    else {
                        TreeNode<K,V> sl = xpl.left, sr = xpl.right;
                        if ((sl == null || !sl.red) &&
                            (sr == null || !sr.red)) {
                            xpl.red = true;
                            x = xp;
                        }
                        else {
                            if (sl == null || !sl.red) {
                                if (sr != null)
                                    sr.red = false;
                                xpl.red = true;
                                root = rotateLeft(root, xpl);
                                xpl = (xp = x.parent) == null ?
                                    null : xp.left;
                            }
                            if (xpl != null) {
                                xpl.red = (xp == null) ? false : xp.red;
                                if ((sl = xpl.left) != null)
                                    sl.red = false;
                            }
                            if (xp != null) {
                                xp.red = false;
                                root = rotateRight(root, xp);
                            }
                            x = root;
                        }
                    }
                }
            }
        }

        /**
         * Recursive invariant check
         */
        static <K,V> boolean checkInvariants(TreeNode<K,V> t) {
            TreeNode<K,V> tp = t.parent, tl = t.left, tr = t.right,
                tb = t.prev, tn = (TreeNode<K,V>)t.next;
            if (tb != null && tb.next != t)
                return false;
            if (tn != null && tn.prev != t)
                return false;
            if (tp != null && t != tp.left && t != tp.right)
                return false;
            if (tl != null && (tl.parent != t || tl.hash > t.hash))
                return false;
            if (tr != null && (tr.parent != t || tr.hash < t.hash))
                return false;
            if (t.red && tl != null && tl.red && tr != null && tr.red)
                return false;
            if (tl != null && !checkInvariants(tl))
                return false;
            if (tr != null && !checkInvariants(tr))
                return false;
            return true;
        }
    }

}