Path: blob/master/src/java.base/share/classes/java/util/HashMap.java
67794 views
/*1* Copyright (c) 1997, 2021, Oracle and/or its affiliates. All rights reserved.2* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.3*4* This code is free software; you can redistribute it and/or modify it5* under the terms of the GNU General Public License version 2 only, as6* published by the Free Software Foundation. Oracle designates this7* particular file as subject to the "Classpath" exception as provided8* by Oracle in the LICENSE file that accompanied this code.9*10* This code is distributed in the hope that it will be useful, but WITHOUT11* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or12* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License13* version 2 for more details (a copy is included in the LICENSE file that14* accompanied this code).15*16* You should have received a copy of the GNU General Public License version17* 2 along with this work; if not, write to the Free Software Foundation,18* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.19*20* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA21* or visit www.oracle.com if you need additional information or have any22* questions.23*/2425package java.util;2627import java.io.IOException;28import java.io.InvalidObjectException;29import java.io.ObjectInputStream;30import java.io.Serializable;31import java.lang.reflect.ParameterizedType;32import java.lang.reflect.Type;33import java.util.function.BiConsumer;34import java.util.function.BiFunction;35import java.util.function.Consumer;36import java.util.function.Function;37import jdk.internal.access.SharedSecrets;3839/**40* Hash table based implementation of the {@code Map} interface. This41* implementation provides all of the optional map operations, and permits42* {@code null} values and the {@code null} key. (The {@code HashMap}43* class is roughly equivalent to {@code Hashtable}, except that it is44* unsynchronized and permits nulls.) This class makes no guarantees as to45* the order of the map; in particular, it does not guarantee that the order46* will remain constant over time.47*48* <p>This implementation provides constant-time performance for the basic49* operations ({@code get} and {@code put}), assuming the hash function50* disperses the elements properly among the buckets. Iteration over51* collection views requires time proportional to the "capacity" of the52* {@code HashMap} instance (the number of buckets) plus its size (the number53* of key-value mappings). Thus, it's very important not to set the initial54* capacity too high (or the load factor too low) if iteration performance is55* important.56*57* <p>An instance of {@code HashMap} has two parameters that affect its58* performance: <i>initial capacity</i> and <i>load factor</i>. The59* <i>capacity</i> is the number of buckets in the hash table, and the initial60* capacity is simply the capacity at the time the hash table is created. The61* <i>load factor</i> is a measure of how full the hash table is allowed to62* get before its capacity is automatically increased. When the number of63* entries in the hash table exceeds the product of the load factor and the64* current capacity, the hash table is <i>rehashed</i> (that is, internal data65* structures are rebuilt) so that the hash table has approximately twice the66* number of buckets.67*68* <p>As a general rule, the default load factor (.75) offers a good69* tradeoff between time and space costs. Higher values decrease the70* space overhead but increase the lookup cost (reflected in most of71* the operations of the {@code HashMap} class, including72* {@code get} and {@code put}). The expected number of entries in73* the map and its load factor should be taken into account when74* setting its initial capacity, so as to minimize the number of75* rehash operations. If the initial capacity is greater than the76* maximum number of entries divided by the load factor, no rehash77* operations will ever occur.78*79* <p>If many mappings are to be stored in a {@code HashMap}80* instance, creating it with a sufficiently large capacity will allow81* the mappings to be stored more efficiently than letting it perform82* automatic rehashing as needed to grow the table. Note that using83* many keys with the same {@code hashCode()} is a sure way to slow84* down performance of any hash table. To ameliorate impact, when keys85* are {@link Comparable}, this class may use comparison order among86* keys to help break ties.87*88* <p><strong>Note that this implementation is not synchronized.</strong>89* If multiple threads access a hash map concurrently, and at least one of90* the threads modifies the map structurally, it <i>must</i> be91* synchronized externally. (A structural modification is any operation92* that adds or deletes one or more mappings; merely changing the value93* associated with a key that an instance already contains is not a94* structural modification.) This is typically accomplished by95* synchronizing on some object that naturally encapsulates the map.96*97* If no such object exists, the map should be "wrapped" using the98* {@link Collections#synchronizedMap Collections.synchronizedMap}99* method. This is best done at creation time, to prevent accidental100* unsynchronized access to the map:<pre>101* Map m = Collections.synchronizedMap(new HashMap(...));</pre>102*103* <p>The iterators returned by all of this class's "collection view methods"104* are <i>fail-fast</i>: if the map is structurally modified at any time after105* the iterator is created, in any way except through the iterator's own106* {@code remove} method, the iterator will throw a107* {@link ConcurrentModificationException}. Thus, in the face of concurrent108* modification, the iterator fails quickly and cleanly, rather than risking109* arbitrary, non-deterministic behavior at an undetermined time in the110* future.111*112* <p>Note that the fail-fast behavior of an iterator cannot be guaranteed113* as it is, generally speaking, impossible to make any hard guarantees in the114* presence of unsynchronized concurrent modification. Fail-fast iterators115* throw {@code ConcurrentModificationException} on a best-effort basis.116* Therefore, it would be wrong to write a program that depended on this117* exception for its correctness: <i>the fail-fast behavior of iterators118* should be used only to detect bugs.</i>119*120* <p>This class is a member of the121* <a href="{@docRoot}/java.base/java/util/package-summary.html#CollectionsFramework">122* Java Collections Framework</a>.123*124* @param <K> the type of keys maintained by this map125* @param <V> the type of mapped values126*127* @author Doug Lea128* @author Josh Bloch129* @author Arthur van Hoff130* @author Neal Gafter131* @see Object#hashCode()132* @see Collection133* @see Map134* @see TreeMap135* @see Hashtable136* @since 1.2137*/138public class HashMap<K,V> extends AbstractMap<K,V>139implements Map<K,V>, Cloneable, Serializable {140141@java.io.Serial142private static final long serialVersionUID = 362498820763181265L;143144/*145* Implementation notes.146*147* This map usually acts as a binned (bucketed) hash table, but148* when bins get too large, they are transformed into bins of149* TreeNodes, each structured similarly to those in150* java.util.TreeMap. Most methods try to use normal bins, but151* relay to TreeNode methods when applicable (simply by checking152* instanceof a node). Bins of TreeNodes may be traversed and153* used like any others, but additionally support faster lookup154* when overpopulated. However, since the vast majority of bins in155* normal use are not overpopulated, checking for existence of156* tree bins may be delayed in the course of table methods.157*158* Tree bins (i.e., bins whose elements are all TreeNodes) are159* ordered primarily by hashCode, but in the case of ties, if two160* elements are of the same "class C implements Comparable<C>",161* type then their compareTo method is used for ordering. (We162* conservatively check generic types via reflection to validate163* this -- see method comparableClassFor). The added complexity164* of tree bins is worthwhile in providing worst-case O(log n)165* operations when keys either have distinct hashes or are166* orderable, Thus, performance degrades gracefully under167* accidental or malicious usages in which hashCode() methods168* return values that are poorly distributed, as well as those in169* which many keys share a hashCode, so long as they are also170* Comparable. (If neither of these apply, we may waste about a171* factor of two in time and space compared to taking no172* precautions. But the only known cases stem from poor user173* programming practices that are already so slow that this makes174* little difference.)175*176* Because TreeNodes are about twice the size of regular nodes, we177* use them only when bins contain enough nodes to warrant use178* (see TREEIFY_THRESHOLD). And when they become too small (due to179* removal or resizing) they are converted back to plain bins. In180* usages with well-distributed user hashCodes, tree bins are181* rarely used. Ideally, under random hashCodes, the frequency of182* nodes in bins follows a Poisson distribution183* (http://en.wikipedia.org/wiki/Poisson_distribution) with a184* parameter of about 0.5 on average for the default resizing185* threshold of 0.75, although with a large variance because of186* resizing granularity. Ignoring variance, the expected187* occurrences of list size k are (exp(-0.5) * pow(0.5, k) /188* factorial(k)). The first values are:189*190* 0: 0.60653066191* 1: 0.30326533192* 2: 0.07581633193* 3: 0.01263606194* 4: 0.00157952195* 5: 0.00015795196* 6: 0.00001316197* 7: 0.00000094198* 8: 0.00000006199* more: less than 1 in ten million200*201* The root of a tree bin is normally its first node. However,202* sometimes (currently only upon Iterator.remove), the root might203* be elsewhere, but can be recovered following parent links204* (method TreeNode.root()).205*206* All applicable internal methods accept a hash code as an207* argument (as normally supplied from a public method), allowing208* them to call each other without recomputing user hashCodes.209* Most internal methods also accept a "tab" argument, that is210* normally the current table, but may be a new or old one when211* resizing or converting.212*213* When bin lists are treeified, split, or untreeified, we keep214* them in the same relative access/traversal order (i.e., field215* Node.next) to better preserve locality, and to slightly216* simplify handling of splits and traversals that invoke217* iterator.remove. When using comparators on insertion, to keep a218* total ordering (or as close as is required here) across219* rebalancings, we compare classes and identityHashCodes as220* tie-breakers.221*222* The use and transitions among plain vs tree modes is223* complicated by the existence of subclass LinkedHashMap. See224* below for hook methods defined to be invoked upon insertion,225* removal and access that allow LinkedHashMap internals to226* otherwise remain independent of these mechanics. (This also227* requires that a map instance be passed to some utility methods228* that may create new nodes.)229*230* The concurrent-programming-like SSA-based coding style helps231* avoid aliasing errors amid all of the twisty pointer operations.232*/233234/**235* The default initial capacity - MUST be a power of two.236*/237static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16238239/**240* The maximum capacity, used if a higher value is implicitly specified241* by either of the constructors with arguments.242* MUST be a power of two <= 1<<30.243*/244static final int MAXIMUM_CAPACITY = 1 << 30;245246/**247* The load factor used when none specified in constructor.248*/249static final float DEFAULT_LOAD_FACTOR = 0.75f;250251/**252* The bin count threshold for using a tree rather than list for a253* bin. Bins are converted to trees when adding an element to a254* bin with at least this many nodes. The value must be greater255* than 2 and should be at least 8 to mesh with assumptions in256* tree removal about conversion back to plain bins upon257* shrinkage.258*/259static final int TREEIFY_THRESHOLD = 8;260261/**262* The bin count threshold for untreeifying a (split) bin during a263* resize operation. Should be less than TREEIFY_THRESHOLD, and at264* most 6 to mesh with shrinkage detection under removal.265*/266static final int UNTREEIFY_THRESHOLD = 6;267268/**269* The smallest table capacity for which bins may be treeified.270* (Otherwise the table is resized if too many nodes in a bin.)271* Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts272* between resizing and treeification thresholds.273*/274static final int MIN_TREEIFY_CAPACITY = 64;275276/**277* Basic hash bin node, used for most entries. (See below for278* TreeNode subclass, and in LinkedHashMap for its Entry subclass.)279*/280static class Node<K,V> implements Map.Entry<K,V> {281final int hash;282final K key;283V value;284Node<K,V> next;285286Node(int hash, K key, V value, Node<K,V> next) {287this.hash = hash;288this.key = key;289this.value = value;290this.next = next;291}292293public final K getKey() { return key; }294public final V getValue() { return value; }295public final String toString() { return key + "=" + value; }296297public final int hashCode() {298return Objects.hashCode(key) ^ Objects.hashCode(value);299}300301public final V setValue(V newValue) {302V oldValue = value;303value = newValue;304return oldValue;305}306307public final boolean equals(Object o) {308if (o == this)309return true;310311return o instanceof Map.Entry<?, ?> e312&& Objects.equals(key, e.getKey())313&& Objects.equals(value, e.getValue());314}315}316317/* ---------------- Static utilities -------------- */318319/**320* Computes key.hashCode() and spreads (XORs) higher bits of hash321* to lower. Because the table uses power-of-two masking, sets of322* hashes that vary only in bits above the current mask will323* always collide. (Among known examples are sets of Float keys324* holding consecutive whole numbers in small tables.) So we325* apply a transform that spreads the impact of higher bits326* downward. There is a tradeoff between speed, utility, and327* quality of bit-spreading. Because many common sets of hashes328* are already reasonably distributed (so don't benefit from329* spreading), and because we use trees to handle large sets of330* collisions in bins, we just XOR some shifted bits in the331* cheapest possible way to reduce systematic lossage, as well as332* to incorporate impact of the highest bits that would otherwise333* never be used in index calculations because of table bounds.334*/335static final int hash(Object key) {336int h;337return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);338}339340/**341* Returns x's Class if it is of the form "class C implements342* Comparable<C>", else null.343*/344static Class<?> comparableClassFor(Object x) {345if (x instanceof Comparable) {346Class<?> c; Type[] ts, as; ParameterizedType p;347if ((c = x.getClass()) == String.class) // bypass checks348return c;349if ((ts = c.getGenericInterfaces()) != null) {350for (Type t : ts) {351if ((t instanceof ParameterizedType) &&352((p = (ParameterizedType) t).getRawType() ==353Comparable.class) &&354(as = p.getActualTypeArguments()) != null &&355as.length == 1 && as[0] == c) // type arg is c356return c;357}358}359}360return null;361}362363/**364* Returns k.compareTo(x) if x matches kc (k's screened comparable365* class), else 0.366*/367@SuppressWarnings({"rawtypes","unchecked"}) // for cast to Comparable368static int compareComparables(Class<?> kc, Object k, Object x) {369return (x == null || x.getClass() != kc ? 0 :370((Comparable)k).compareTo(x));371}372373/**374* Returns a power of two size for the given target capacity.375*/376static final int tableSizeFor(int cap) {377int n = -1 >>> Integer.numberOfLeadingZeros(cap - 1);378return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;379}380381/* ---------------- Fields -------------- */382383/**384* The table, initialized on first use, and resized as385* necessary. When allocated, length is always a power of two.386* (We also tolerate length zero in some operations to allow387* bootstrapping mechanics that are currently not needed.)388*/389transient Node<K,V>[] table;390391/**392* Holds cached entrySet(). Note that AbstractMap fields are used393* for keySet() and values().394*/395transient Set<Map.Entry<K,V>> entrySet;396397/**398* The number of key-value mappings contained in this map.399*/400transient int size;401402/**403* The number of times this HashMap has been structurally modified404* Structural modifications are those that change the number of mappings in405* the HashMap or otherwise modify its internal structure (e.g.,406* rehash). This field is used to make iterators on Collection-views of407* the HashMap fail-fast. (See ConcurrentModificationException).408*/409transient int modCount;410411/**412* The next size value at which to resize (capacity * load factor).413*414* @serial415*/416// (The javadoc description is true upon serialization.417// Additionally, if the table array has not been allocated, this418// field holds the initial array capacity, or zero signifying419// DEFAULT_INITIAL_CAPACITY.)420int threshold;421422/**423* The load factor for the hash table.424*425* @serial426*/427final float loadFactor;428429/* ---------------- Public operations -------------- */430431/**432* Constructs an empty {@code HashMap} with the specified initial433* capacity and load factor.434*435* @param initialCapacity the initial capacity436* @param loadFactor the load factor437* @throws IllegalArgumentException if the initial capacity is negative438* or the load factor is nonpositive439*/440public HashMap(int initialCapacity, float loadFactor) {441if (initialCapacity < 0)442throw new IllegalArgumentException("Illegal initial capacity: " +443initialCapacity);444if (initialCapacity > MAXIMUM_CAPACITY)445initialCapacity = MAXIMUM_CAPACITY;446if (loadFactor <= 0 || Float.isNaN(loadFactor))447throw new IllegalArgumentException("Illegal load factor: " +448loadFactor);449this.loadFactor = loadFactor;450this.threshold = tableSizeFor(initialCapacity);451}452453/**454* Constructs an empty {@code HashMap} with the specified initial455* capacity and the default load factor (0.75).456*457* @param initialCapacity the initial capacity.458* @throws IllegalArgumentException if the initial capacity is negative.459*/460public HashMap(int initialCapacity) {461this(initialCapacity, DEFAULT_LOAD_FACTOR);462}463464/**465* Constructs an empty {@code HashMap} with the default initial capacity466* (16) and the default load factor (0.75).467*/468public HashMap() {469this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted470}471472/**473* Constructs a new {@code HashMap} with the same mappings as the474* specified {@code Map}. The {@code HashMap} is created with475* default load factor (0.75) and an initial capacity sufficient to476* hold the mappings in the specified {@code Map}.477*478* @param m the map whose mappings are to be placed in this map479* @throws NullPointerException if the specified map is null480*/481public HashMap(Map<? extends K, ? extends V> m) {482this.loadFactor = DEFAULT_LOAD_FACTOR;483putMapEntries(m, false);484}485486/**487* Implements Map.putAll and Map constructor.488*489* @param m the map490* @param evict false when initially constructing this map, else491* true (relayed to method afterNodeInsertion).492*/493final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict) {494int s = m.size();495if (s > 0) {496if (table == null) { // pre-size497float ft = ((float)s / loadFactor) + 1.0F;498int t = ((ft < (float)MAXIMUM_CAPACITY) ?499(int)ft : MAXIMUM_CAPACITY);500if (t > threshold)501threshold = tableSizeFor(t);502} else {503// Because of linked-list bucket constraints, we cannot504// expand all at once, but can reduce total resize505// effort by repeated doubling now vs later506while (s > threshold && table.length < MAXIMUM_CAPACITY)507resize();508}509510for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) {511K key = e.getKey();512V value = e.getValue();513putVal(hash(key), key, value, false, evict);514}515}516}517518/**519* Returns the number of key-value mappings in this map.520*521* @return the number of key-value mappings in this map522*/523public int size() {524return size;525}526527/**528* Returns {@code true} if this map contains no key-value mappings.529*530* @return {@code true} if this map contains no key-value mappings531*/532public boolean isEmpty() {533return size == 0;534}535536/**537* Returns the value to which the specified key is mapped,538* or {@code null} if this map contains no mapping for the key.539*540* <p>More formally, if this map contains a mapping from a key541* {@code k} to a value {@code v} such that {@code (key==null ? k==null :542* key.equals(k))}, then this method returns {@code v}; otherwise543* it returns {@code null}. (There can be at most one such mapping.)544*545* <p>A return value of {@code null} does not <i>necessarily</i>546* indicate that the map contains no mapping for the key; it's also547* possible that the map explicitly maps the key to {@code null}.548* The {@link #containsKey containsKey} operation may be used to549* distinguish these two cases.550*551* @see #put(Object, Object)552*/553public V get(Object key) {554Node<K,V> e;555return (e = getNode(key)) == null ? null : e.value;556}557558/**559* Implements Map.get and related methods.560*561* @param key the key562* @return the node, or null if none563*/564final Node<K,V> getNode(Object key) {565Node<K,V>[] tab; Node<K,V> first, e; int n, hash; K k;566if ((tab = table) != null && (n = tab.length) > 0 &&567(first = tab[(n - 1) & (hash = hash(key))]) != null) {568if (first.hash == hash && // always check first node569((k = first.key) == key || (key != null && key.equals(k))))570return first;571if ((e = first.next) != null) {572if (first instanceof TreeNode)573return ((TreeNode<K,V>)first).getTreeNode(hash, key);574do {575if (e.hash == hash &&576((k = e.key) == key || (key != null && key.equals(k))))577return e;578} while ((e = e.next) != null);579}580}581return null;582}583584/**585* Returns {@code true} if this map contains a mapping for the586* specified key.587*588* @param key The key whose presence in this map is to be tested589* @return {@code true} if this map contains a mapping for the specified590* key.591*/592public boolean containsKey(Object key) {593return getNode(key) != null;594}595596/**597* Associates the specified value with the specified key in this map.598* If the map previously contained a mapping for the key, the old599* value is replaced.600*601* @param key key with which the specified value is to be associated602* @param value value to be associated with the specified key603* @return the previous value associated with {@code key}, or604* {@code null} if there was no mapping for {@code key}.605* (A {@code null} return can also indicate that the map606* previously associated {@code null} with {@code key}.)607*/608public V put(K key, V value) {609return putVal(hash(key), key, value, false, true);610}611612/**613* Implements Map.put and related methods.614*615* @param hash hash for key616* @param key the key617* @param value the value to put618* @param onlyIfAbsent if true, don't change existing value619* @param evict if false, the table is in creation mode.620* @return previous value, or null if none621*/622final V putVal(int hash, K key, V value, boolean onlyIfAbsent,623boolean evict) {624Node<K,V>[] tab; Node<K,V> p; int n, i;625if ((tab = table) == null || (n = tab.length) == 0)626n = (tab = resize()).length;627if ((p = tab[i = (n - 1) & hash]) == null)628tab[i] = newNode(hash, key, value, null);629else {630Node<K,V> e; K k;631if (p.hash == hash &&632((k = p.key) == key || (key != null && key.equals(k))))633e = p;634else if (p instanceof TreeNode)635e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);636else {637for (int binCount = 0; ; ++binCount) {638if ((e = p.next) == null) {639p.next = newNode(hash, key, value, null);640if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st641treeifyBin(tab, hash);642break;643}644if (e.hash == hash &&645((k = e.key) == key || (key != null && key.equals(k))))646break;647p = e;648}649}650if (e != null) { // existing mapping for key651V oldValue = e.value;652if (!onlyIfAbsent || oldValue == null)653e.value = value;654afterNodeAccess(e);655return oldValue;656}657}658++modCount;659if (++size > threshold)660resize();661afterNodeInsertion(evict);662return null;663}664665/**666* Initializes or doubles table size. If null, allocates in667* accord with initial capacity target held in field threshold.668* Otherwise, because we are using power-of-two expansion, the669* elements from each bin must either stay at same index, or move670* with a power of two offset in the new table.671*672* @return the table673*/674final Node<K,V>[] resize() {675Node<K,V>[] oldTab = table;676int oldCap = (oldTab == null) ? 0 : oldTab.length;677int oldThr = threshold;678int newCap, newThr = 0;679if (oldCap > 0) {680if (oldCap >= MAXIMUM_CAPACITY) {681threshold = Integer.MAX_VALUE;682return oldTab;683}684else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&685oldCap >= DEFAULT_INITIAL_CAPACITY)686newThr = oldThr << 1; // double threshold687}688else if (oldThr > 0) // initial capacity was placed in threshold689newCap = oldThr;690else { // zero initial threshold signifies using defaults691newCap = DEFAULT_INITIAL_CAPACITY;692newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);693}694if (newThr == 0) {695float ft = (float)newCap * loadFactor;696newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?697(int)ft : Integer.MAX_VALUE);698}699threshold = newThr;700@SuppressWarnings({"rawtypes","unchecked"})701Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];702table = newTab;703if (oldTab != null) {704for (int j = 0; j < oldCap; ++j) {705Node<K,V> e;706if ((e = oldTab[j]) != null) {707oldTab[j] = null;708if (e.next == null)709newTab[e.hash & (newCap - 1)] = e;710else if (e instanceof TreeNode)711((TreeNode<K,V>)e).split(this, newTab, j, oldCap);712else { // preserve order713Node<K,V> loHead = null, loTail = null;714Node<K,V> hiHead = null, hiTail = null;715Node<K,V> next;716do {717next = e.next;718if ((e.hash & oldCap) == 0) {719if (loTail == null)720loHead = e;721else722loTail.next = e;723loTail = e;724}725else {726if (hiTail == null)727hiHead = e;728else729hiTail.next = e;730hiTail = e;731}732} while ((e = next) != null);733if (loTail != null) {734loTail.next = null;735newTab[j] = loHead;736}737if (hiTail != null) {738hiTail.next = null;739newTab[j + oldCap] = hiHead;740}741}742}743}744}745return newTab;746}747748/**749* Replaces all linked nodes in bin at index for given hash unless750* table is too small, in which case resizes instead.751*/752final void treeifyBin(Node<K,V>[] tab, int hash) {753int n, index; Node<K,V> e;754if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)755resize();756else if ((e = tab[index = (n - 1) & hash]) != null) {757TreeNode<K,V> hd = null, tl = null;758do {759TreeNode<K,V> p = replacementTreeNode(e, null);760if (tl == null)761hd = p;762else {763p.prev = tl;764tl.next = p;765}766tl = p;767} while ((e = e.next) != null);768if ((tab[index] = hd) != null)769hd.treeify(tab);770}771}772773/**774* Copies all of the mappings from the specified map to this map.775* These mappings will replace any mappings that this map had for776* any of the keys currently in the specified map.777*778* @param m mappings to be stored in this map779* @throws NullPointerException if the specified map is null780*/781public void putAll(Map<? extends K, ? extends V> m) {782putMapEntries(m, true);783}784785/**786* Removes the mapping for the specified key from this map if present.787*788* @param key key whose mapping is to be removed from the map789* @return the previous value associated with {@code key}, or790* {@code null} if there was no mapping for {@code key}.791* (A {@code null} return can also indicate that the map792* previously associated {@code null} with {@code key}.)793*/794public V remove(Object key) {795Node<K,V> e;796return (e = removeNode(hash(key), key, null, false, true)) == null ?797null : e.value;798}799800/**801* Implements Map.remove and related methods.802*803* @param hash hash for key804* @param key the key805* @param value the value to match if matchValue, else ignored806* @param matchValue if true only remove if value is equal807* @param movable if false do not move other nodes while removing808* @return the node, or null if none809*/810final Node<K,V> removeNode(int hash, Object key, Object value,811boolean matchValue, boolean movable) {812Node<K,V>[] tab; Node<K,V> p; int n, index;813if ((tab = table) != null && (n = tab.length) > 0 &&814(p = tab[index = (n - 1) & hash]) != null) {815Node<K,V> node = null, e; K k; V v;816if (p.hash == hash &&817((k = p.key) == key || (key != null && key.equals(k))))818node = p;819else if ((e = p.next) != null) {820if (p instanceof TreeNode)821node = ((TreeNode<K,V>)p).getTreeNode(hash, key);822else {823do {824if (e.hash == hash &&825((k = e.key) == key ||826(key != null && key.equals(k)))) {827node = e;828break;829}830p = e;831} while ((e = e.next) != null);832}833}834if (node != null && (!matchValue || (v = node.value) == value ||835(value != null && value.equals(v)))) {836if (node instanceof TreeNode)837((TreeNode<K,V>)node).removeTreeNode(this, tab, movable);838else if (node == p)839tab[index] = node.next;840else841p.next = node.next;842++modCount;843--size;844afterNodeRemoval(node);845return node;846}847}848return null;849}850851/**852* Removes all of the mappings from this map.853* The map will be empty after this call returns.854*/855public void clear() {856Node<K,V>[] tab;857modCount++;858if ((tab = table) != null && size > 0) {859size = 0;860for (int i = 0; i < tab.length; ++i)861tab[i] = null;862}863}864865/**866* Returns {@code true} if this map maps one or more keys to the867* specified value.868*869* @param value value whose presence in this map is to be tested870* @return {@code true} if this map maps one or more keys to the871* specified value872*/873public boolean containsValue(Object value) {874Node<K,V>[] tab; V v;875if ((tab = table) != null && size > 0) {876for (Node<K,V> e : tab) {877for (; e != null; e = e.next) {878if ((v = e.value) == value ||879(value != null && value.equals(v)))880return true;881}882}883}884return false;885}886887/**888* Returns a {@link Set} view of the keys contained in this map.889* The set is backed by the map, so changes to the map are890* reflected in the set, and vice-versa. If the map is modified891* while an iteration over the set is in progress (except through892* the iterator's own {@code remove} operation), the results of893* the iteration are undefined. The set supports element removal,894* which removes the corresponding mapping from the map, via the895* {@code Iterator.remove}, {@code Set.remove},896* {@code removeAll}, {@code retainAll}, and {@code clear}897* operations. It does not support the {@code add} or {@code addAll}898* operations.899*900* @return a set view of the keys contained in this map901*/902public Set<K> keySet() {903Set<K> ks = keySet;904if (ks == null) {905ks = new KeySet();906keySet = ks;907}908return ks;909}910911/**912* Prepares the array for {@link Collection#toArray(Object[])} implementation.913* If supplied array is smaller than this map size, a new array is allocated.914* If supplied array is bigger than this map size, a null is written at size index.915*916* @param a an original array passed to {@code toArray()} method917* @param <T> type of array elements918* @return an array ready to be filled and returned from {@code toArray()} method.919*/920@SuppressWarnings("unchecked")921final <T> T[] prepareArray(T[] a) {922int size = this.size;923if (a.length < size) {924return (T[]) java.lang.reflect.Array925.newInstance(a.getClass().getComponentType(), size);926}927if (a.length > size) {928a[size] = null;929}930return a;931}932933/**934* Fills an array with this map keys and returns it. This method assumes935* that input array is big enough to fit all the keys. Use936* {@link #prepareArray(Object[])} to ensure this.937*938* @param a an array to fill939* @param <T> type of array elements940* @return supplied array941*/942<T> T[] keysToArray(T[] a) {943Object[] r = a;944Node<K,V>[] tab;945int idx = 0;946if (size > 0 && (tab = table) != null) {947for (Node<K,V> e : tab) {948for (; e != null; e = e.next) {949r[idx++] = e.key;950}951}952}953return a;954}955956/**957* Fills an array with this map values and returns it. This method assumes958* that input array is big enough to fit all the values. Use959* {@link #prepareArray(Object[])} to ensure this.960*961* @param a an array to fill962* @param <T> type of array elements963* @return supplied array964*/965<T> T[] valuesToArray(T[] a) {966Object[] r = a;967Node<K,V>[] tab;968int idx = 0;969if (size > 0 && (tab = table) != null) {970for (Node<K,V> e : tab) {971for (; e != null; e = e.next) {972r[idx++] = e.value;973}974}975}976return a;977}978979final class KeySet extends AbstractSet<K> {980public final int size() { return size; }981public final void clear() { HashMap.this.clear(); }982public final Iterator<K> iterator() { return new KeyIterator(); }983public final boolean contains(Object o) { return containsKey(o); }984public final boolean remove(Object key) {985return removeNode(hash(key), key, null, false, true) != null;986}987public final Spliterator<K> spliterator() {988return new KeySpliterator<>(HashMap.this, 0, -1, 0, 0);989}990991public Object[] toArray() {992return keysToArray(new Object[size]);993}994995public <T> T[] toArray(T[] a) {996return keysToArray(prepareArray(a));997}998999public final void forEach(Consumer<? super K> action) {1000Node<K,V>[] tab;1001if (action == null)1002throw new NullPointerException();1003if (size > 0 && (tab = table) != null) {1004int mc = modCount;1005for (Node<K,V> e : tab) {1006for (; e != null; e = e.next)1007action.accept(e.key);1008}1009if (modCount != mc)1010throw new ConcurrentModificationException();1011}1012}1013}10141015/**1016* Returns a {@link Collection} view of the values contained in this map.1017* The collection is backed by the map, so changes to the map are1018* reflected in the collection, and vice-versa. If the map is1019* modified while an iteration over the collection is in progress1020* (except through the iterator's own {@code remove} operation),1021* the results of the iteration are undefined. The collection1022* supports element removal, which removes the corresponding1023* mapping from the map, via the {@code Iterator.remove},1024* {@code Collection.remove}, {@code removeAll},1025* {@code retainAll} and {@code clear} operations. It does not1026* support the {@code add} or {@code addAll} operations.1027*1028* @return a view of the values contained in this map1029*/1030public Collection<V> values() {1031Collection<V> vs = values;1032if (vs == null) {1033vs = new Values();1034values = vs;1035}1036return vs;1037}10381039final class Values extends AbstractCollection<V> {1040public final int size() { return size; }1041public final void clear() { HashMap.this.clear(); }1042public final Iterator<V> iterator() { return new ValueIterator(); }1043public final boolean contains(Object o) { return containsValue(o); }1044public final Spliterator<V> spliterator() {1045return new ValueSpliterator<>(HashMap.this, 0, -1, 0, 0);1046}10471048public Object[] toArray() {1049return valuesToArray(new Object[size]);1050}10511052public <T> T[] toArray(T[] a) {1053return valuesToArray(prepareArray(a));1054}10551056public final void forEach(Consumer<? super V> action) {1057Node<K,V>[] tab;1058if (action == null)1059throw new NullPointerException();1060if (size > 0 && (tab = table) != null) {1061int mc = modCount;1062for (Node<K,V> e : tab) {1063for (; e != null; e = e.next)1064action.accept(e.value);1065}1066if (modCount != mc)1067throw new ConcurrentModificationException();1068}1069}1070}10711072/**1073* Returns a {@link Set} view of the mappings contained in this map.1074* The set is backed by the map, so changes to the map are1075* reflected in the set, and vice-versa. If the map is modified1076* while an iteration over the set is in progress (except through1077* the iterator's own {@code remove} operation, or through the1078* {@code setValue} operation on a map entry returned by the1079* iterator) the results of the iteration are undefined. The set1080* supports element removal, which removes the corresponding1081* mapping from the map, via the {@code Iterator.remove},1082* {@code Set.remove}, {@code removeAll}, {@code retainAll} and1083* {@code clear} operations. It does not support the1084* {@code add} or {@code addAll} operations.1085*1086* @return a set view of the mappings contained in this map1087*/1088public Set<Map.Entry<K,V>> entrySet() {1089Set<Map.Entry<K,V>> es;1090return (es = entrySet) == null ? (entrySet = new EntrySet()) : es;1091}10921093final class EntrySet extends AbstractSet<Map.Entry<K,V>> {1094public final int size() { return size; }1095public final void clear() { HashMap.this.clear(); }1096public final Iterator<Map.Entry<K,V>> iterator() {1097return new EntryIterator();1098}1099public final boolean contains(Object o) {1100if (!(o instanceof Map.Entry<?, ?> e))1101return false;1102Object key = e.getKey();1103Node<K,V> candidate = getNode(key);1104return candidate != null && candidate.equals(e);1105}1106public final boolean remove(Object o) {1107if (o instanceof Map.Entry<?, ?> e) {1108Object key = e.getKey();1109Object value = e.getValue();1110return removeNode(hash(key), key, value, true, true) != null;1111}1112return false;1113}1114public final Spliterator<Map.Entry<K,V>> spliterator() {1115return new EntrySpliterator<>(HashMap.this, 0, -1, 0, 0);1116}1117public final void forEach(Consumer<? super Map.Entry<K,V>> action) {1118Node<K,V>[] tab;1119if (action == null)1120throw new NullPointerException();1121if (size > 0 && (tab = table) != null) {1122int mc = modCount;1123for (Node<K,V> e : tab) {1124for (; e != null; e = e.next)1125action.accept(e);1126}1127if (modCount != mc)1128throw new ConcurrentModificationException();1129}1130}1131}11321133// Overrides of JDK8 Map extension methods11341135@Override1136public V getOrDefault(Object key, V defaultValue) {1137Node<K,V> e;1138return (e = getNode(key)) == null ? defaultValue : e.value;1139}11401141@Override1142public V putIfAbsent(K key, V value) {1143return putVal(hash(key), key, value, true, true);1144}11451146@Override1147public boolean remove(Object key, Object value) {1148return removeNode(hash(key), key, value, true, true) != null;1149}11501151@Override1152public boolean replace(K key, V oldValue, V newValue) {1153Node<K,V> e; V v;1154if ((e = getNode(key)) != null &&1155((v = e.value) == oldValue || (v != null && v.equals(oldValue)))) {1156e.value = newValue;1157afterNodeAccess(e);1158return true;1159}1160return false;1161}11621163@Override1164public V replace(K key, V value) {1165Node<K,V> e;1166if ((e = getNode(key)) != null) {1167V oldValue = e.value;1168e.value = value;1169afterNodeAccess(e);1170return oldValue;1171}1172return null;1173}11741175/**1176* {@inheritDoc}1177*1178* <p>This method will, on a best-effort basis, throw a1179* {@link ConcurrentModificationException} if it is detected that the1180* mapping function modifies this map during computation.1181*1182* @throws ConcurrentModificationException if it is detected that the1183* mapping function modified this map1184*/1185@Override1186public V computeIfAbsent(K key,1187Function<? super K, ? extends V> mappingFunction) {1188if (mappingFunction == null)1189throw new NullPointerException();1190int hash = hash(key);1191Node<K,V>[] tab; Node<K,V> first; int n, i;1192int binCount = 0;1193TreeNode<K,V> t = null;1194Node<K,V> old = null;1195if (size > threshold || (tab = table) == null ||1196(n = tab.length) == 0)1197n = (tab = resize()).length;1198if ((first = tab[i = (n - 1) & hash]) != null) {1199if (first instanceof TreeNode)1200old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);1201else {1202Node<K,V> e = first; K k;1203do {1204if (e.hash == hash &&1205((k = e.key) == key || (key != null && key.equals(k)))) {1206old = e;1207break;1208}1209++binCount;1210} while ((e = e.next) != null);1211}1212V oldValue;1213if (old != null && (oldValue = old.value) != null) {1214afterNodeAccess(old);1215return oldValue;1216}1217}1218int mc = modCount;1219V v = mappingFunction.apply(key);1220if (mc != modCount) { throw new ConcurrentModificationException(); }1221if (v == null) {1222return null;1223} else if (old != null) {1224old.value = v;1225afterNodeAccess(old);1226return v;1227}1228else if (t != null)1229t.putTreeVal(this, tab, hash, key, v);1230else {1231tab[i] = newNode(hash, key, v, first);1232if (binCount >= TREEIFY_THRESHOLD - 1)1233treeifyBin(tab, hash);1234}1235modCount = mc + 1;1236++size;1237afterNodeInsertion(true);1238return v;1239}12401241/**1242* {@inheritDoc}1243*1244* <p>This method will, on a best-effort basis, throw a1245* {@link ConcurrentModificationException} if it is detected that the1246* remapping function modifies this map during computation.1247*1248* @throws ConcurrentModificationException if it is detected that the1249* remapping function modified this map1250*/1251@Override1252public V computeIfPresent(K key,1253BiFunction<? super K, ? super V, ? extends V> remappingFunction) {1254if (remappingFunction == null)1255throw new NullPointerException();1256Node<K,V> e; V oldValue;1257if ((e = getNode(key)) != null &&1258(oldValue = e.value) != null) {1259int mc = modCount;1260V v = remappingFunction.apply(key, oldValue);1261if (mc != modCount) { throw new ConcurrentModificationException(); }1262if (v != null) {1263e.value = v;1264afterNodeAccess(e);1265return v;1266}1267else {1268int hash = hash(key);1269removeNode(hash, key, null, false, true);1270}1271}1272return null;1273}12741275/**1276* {@inheritDoc}1277*1278* <p>This method will, on a best-effort basis, throw a1279* {@link ConcurrentModificationException} if it is detected that the1280* remapping function modifies this map during computation.1281*1282* @throws ConcurrentModificationException if it is detected that the1283* remapping function modified this map1284*/1285@Override1286public V compute(K key,1287BiFunction<? super K, ? super V, ? extends V> remappingFunction) {1288if (remappingFunction == null)1289throw new NullPointerException();1290int hash = hash(key);1291Node<K,V>[] tab; Node<K,V> first; int n, i;1292int binCount = 0;1293TreeNode<K,V> t = null;1294Node<K,V> old = null;1295if (size > threshold || (tab = table) == null ||1296(n = tab.length) == 0)1297n = (tab = resize()).length;1298if ((first = tab[i = (n - 1) & hash]) != null) {1299if (first instanceof TreeNode)1300old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);1301else {1302Node<K,V> e = first; K k;1303do {1304if (e.hash == hash &&1305((k = e.key) == key || (key != null && key.equals(k)))) {1306old = e;1307break;1308}1309++binCount;1310} while ((e = e.next) != null);1311}1312}1313V oldValue = (old == null) ? null : old.value;1314int mc = modCount;1315V v = remappingFunction.apply(key, oldValue);1316if (mc != modCount) { throw new ConcurrentModificationException(); }1317if (old != null) {1318if (v != null) {1319old.value = v;1320afterNodeAccess(old);1321}1322else1323removeNode(hash, key, null, false, true);1324}1325else if (v != null) {1326if (t != null)1327t.putTreeVal(this, tab, hash, key, v);1328else {1329tab[i] = newNode(hash, key, v, first);1330if (binCount >= TREEIFY_THRESHOLD - 1)1331treeifyBin(tab, hash);1332}1333modCount = mc + 1;1334++size;1335afterNodeInsertion(true);1336}1337return v;1338}13391340/**1341* {@inheritDoc}1342*1343* <p>This method will, on a best-effort basis, throw a1344* {@link ConcurrentModificationException} if it is detected that the1345* remapping function modifies this map during computation.1346*1347* @throws ConcurrentModificationException if it is detected that the1348* remapping function modified this map1349*/1350@Override1351public V merge(K key, V value,1352BiFunction<? super V, ? super V, ? extends V> remappingFunction) {1353if (value == null || remappingFunction == null)1354throw new NullPointerException();1355int hash = hash(key);1356Node<K,V>[] tab; Node<K,V> first; int n, i;1357int binCount = 0;1358TreeNode<K,V> t = null;1359Node<K,V> old = null;1360if (size > threshold || (tab = table) == null ||1361(n = tab.length) == 0)1362n = (tab = resize()).length;1363if ((first = tab[i = (n - 1) & hash]) != null) {1364if (first instanceof TreeNode)1365old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);1366else {1367Node<K,V> e = first; K k;1368do {1369if (e.hash == hash &&1370((k = e.key) == key || (key != null && key.equals(k)))) {1371old = e;1372break;1373}1374++binCount;1375} while ((e = e.next) != null);1376}1377}1378if (old != null) {1379V v;1380if (old.value != null) {1381int mc = modCount;1382v = remappingFunction.apply(old.value, value);1383if (mc != modCount) {1384throw new ConcurrentModificationException();1385}1386} else {1387v = value;1388}1389if (v != null) {1390old.value = v;1391afterNodeAccess(old);1392}1393else1394removeNode(hash, key, null, false, true);1395return v;1396} else {1397if (t != null)1398t.putTreeVal(this, tab, hash, key, value);1399else {1400tab[i] = newNode(hash, key, value, first);1401if (binCount >= TREEIFY_THRESHOLD - 1)1402treeifyBin(tab, hash);1403}1404++modCount;1405++size;1406afterNodeInsertion(true);1407return value;1408}1409}14101411@Override1412public void forEach(BiConsumer<? super K, ? super V> action) {1413Node<K,V>[] tab;1414if (action == null)1415throw new NullPointerException();1416if (size > 0 && (tab = table) != null) {1417int mc = modCount;1418for (Node<K,V> e : tab) {1419for (; e != null; e = e.next)1420action.accept(e.key, e.value);1421}1422if (modCount != mc)1423throw new ConcurrentModificationException();1424}1425}14261427@Override1428public void replaceAll(BiFunction<? super K, ? super V, ? extends V> function) {1429Node<K,V>[] tab;1430if (function == null)1431throw new NullPointerException();1432if (size > 0 && (tab = table) != null) {1433int mc = modCount;1434for (Node<K,V> e : tab) {1435for (; e != null; e = e.next) {1436e.value = function.apply(e.key, e.value);1437}1438}1439if (modCount != mc)1440throw new ConcurrentModificationException();1441}1442}14431444/* ------------------------------------------------------------ */1445// Cloning and serialization14461447/**1448* Returns a shallow copy of this {@code HashMap} instance: the keys and1449* values themselves are not cloned.1450*1451* @return a shallow copy of this map1452*/1453@SuppressWarnings("unchecked")1454@Override1455public Object clone() {1456HashMap<K,V> result;1457try {1458result = (HashMap<K,V>)super.clone();1459} catch (CloneNotSupportedException e) {1460// this shouldn't happen, since we are Cloneable1461throw new InternalError(e);1462}1463result.reinitialize();1464result.putMapEntries(this, false);1465return result;1466}14671468// These methods are also used when serializing HashSets1469final float loadFactor() { return loadFactor; }1470final int capacity() {1471return (table != null) ? table.length :1472(threshold > 0) ? threshold :1473DEFAULT_INITIAL_CAPACITY;1474}14751476/**1477* Saves this map to a stream (that is, serializes it).1478*1479* @param s the stream1480* @throws IOException if an I/O error occurs1481* @serialData The <i>capacity</i> of the HashMap (the length of the1482* bucket array) is emitted (int), followed by the1483* <i>size</i> (an int, the number of key-value1484* mappings), followed by the key (Object) and value (Object)1485* for each key-value mapping. The key-value mappings are1486* emitted in no particular order.1487*/1488@java.io.Serial1489private void writeObject(java.io.ObjectOutputStream s)1490throws IOException {1491int buckets = capacity();1492// Write out the threshold, loadfactor, and any hidden stuff1493s.defaultWriteObject();1494s.writeInt(buckets);1495s.writeInt(size);1496internalWriteEntries(s);1497}14981499/**1500* Reconstitutes this map from a stream (that is, deserializes it).1501* @param s the stream1502* @throws ClassNotFoundException if the class of a serialized object1503* could not be found1504* @throws IOException if an I/O error occurs1505*/1506@java.io.Serial1507private void readObject(ObjectInputStream s)1508throws IOException, ClassNotFoundException {15091510ObjectInputStream.GetField fields = s.readFields();15111512// Read loadFactor (ignore threshold)1513float lf = fields.get("loadFactor", 0.75f);1514if (lf <= 0 || Float.isNaN(lf))1515throw new InvalidObjectException("Illegal load factor: " + lf);15161517lf = Math.min(Math.max(0.25f, lf), 4.0f);1518HashMap.UnsafeHolder.putLoadFactor(this, lf);15191520reinitialize();15211522s.readInt(); // Read and ignore number of buckets1523int mappings = s.readInt(); // Read number of mappings (size)1524if (mappings < 0) {1525throw new InvalidObjectException("Illegal mappings count: " + mappings);1526} else if (mappings == 0) {1527// use defaults1528} else if (mappings > 0) {1529float fc = (float)mappings / lf + 1.0f;1530int cap = ((fc < DEFAULT_INITIAL_CAPACITY) ?1531DEFAULT_INITIAL_CAPACITY :1532(fc >= MAXIMUM_CAPACITY) ?1533MAXIMUM_CAPACITY :1534tableSizeFor((int)fc));1535float ft = (float)cap * lf;1536threshold = ((cap < MAXIMUM_CAPACITY && ft < MAXIMUM_CAPACITY) ?1537(int)ft : Integer.MAX_VALUE);15381539// Check Map.Entry[].class since it's the nearest public type to1540// what we're actually creating.1541SharedSecrets.getJavaObjectInputStreamAccess().checkArray(s, Map.Entry[].class, cap);1542@SuppressWarnings({"rawtypes","unchecked"})1543Node<K,V>[] tab = (Node<K,V>[])new Node[cap];1544table = tab;15451546// Read the keys and values, and put the mappings in the HashMap1547for (int i = 0; i < mappings; i++) {1548@SuppressWarnings("unchecked")1549K key = (K) s.readObject();1550@SuppressWarnings("unchecked")1551V value = (V) s.readObject();1552putVal(hash(key), key, value, false, false);1553}1554}1555}15561557// Support for resetting final field during deserializing1558private static final class UnsafeHolder {1559private UnsafeHolder() { throw new InternalError(); }1560private static final jdk.internal.misc.Unsafe unsafe1561= jdk.internal.misc.Unsafe.getUnsafe();1562private static final long LF_OFFSET1563= unsafe.objectFieldOffset(HashMap.class, "loadFactor");1564static void putLoadFactor(HashMap<?, ?> map, float lf) {1565unsafe.putFloat(map, LF_OFFSET, lf);1566}1567}15681569/* ------------------------------------------------------------ */1570// iterators15711572abstract class HashIterator {1573Node<K,V> next; // next entry to return1574Node<K,V> current; // current entry1575int expectedModCount; // for fast-fail1576int index; // current slot15771578HashIterator() {1579expectedModCount = modCount;1580Node<K,V>[] t = table;1581current = next = null;1582index = 0;1583if (t != null && size > 0) { // advance to first entry1584do {} while (index < t.length && (next = t[index++]) == null);1585}1586}15871588public final boolean hasNext() {1589return next != null;1590}15911592final Node<K,V> nextNode() {1593Node<K,V>[] t;1594Node<K,V> e = next;1595if (modCount != expectedModCount)1596throw new ConcurrentModificationException();1597if (e == null)1598throw new NoSuchElementException();1599if ((next = (current = e).next) == null && (t = table) != null) {1600do {} while (index < t.length && (next = t[index++]) == null);1601}1602return e;1603}16041605public final void remove() {1606Node<K,V> p = current;1607if (p == null)1608throw new IllegalStateException();1609if (modCount != expectedModCount)1610throw new ConcurrentModificationException();1611current = null;1612removeNode(p.hash, p.key, null, false, false);1613expectedModCount = modCount;1614}1615}16161617final class KeyIterator extends HashIterator1618implements Iterator<K> {1619public final K next() { return nextNode().key; }1620}16211622final class ValueIterator extends HashIterator1623implements Iterator<V> {1624public final V next() { return nextNode().value; }1625}16261627final class EntryIterator extends HashIterator1628implements Iterator<Map.Entry<K,V>> {1629public final Map.Entry<K,V> next() { return nextNode(); }1630}16311632/* ------------------------------------------------------------ */1633// spliterators16341635static class HashMapSpliterator<K,V> {1636final HashMap<K,V> map;1637Node<K,V> current; // current node1638int index; // current index, modified on advance/split1639int fence; // one past last index1640int est; // size estimate1641int expectedModCount; // for comodification checks16421643HashMapSpliterator(HashMap<K,V> m, int origin,1644int fence, int est,1645int expectedModCount) {1646this.map = m;1647this.index = origin;1648this.fence = fence;1649this.est = est;1650this.expectedModCount = expectedModCount;1651}16521653final int getFence() { // initialize fence and size on first use1654int hi;1655if ((hi = fence) < 0) {1656HashMap<K,V> m = map;1657est = m.size;1658expectedModCount = m.modCount;1659Node<K,V>[] tab = m.table;1660hi = fence = (tab == null) ? 0 : tab.length;1661}1662return hi;1663}16641665public final long estimateSize() {1666getFence(); // force init1667return (long) est;1668}1669}16701671static final class KeySpliterator<K,V>1672extends HashMapSpliterator<K,V>1673implements Spliterator<K> {1674KeySpliterator(HashMap<K,V> m, int origin, int fence, int est,1675int expectedModCount) {1676super(m, origin, fence, est, expectedModCount);1677}16781679public KeySpliterator<K,V> trySplit() {1680int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;1681return (lo >= mid || current != null) ? null :1682new KeySpliterator<>(map, lo, index = mid, est >>>= 1,1683expectedModCount);1684}16851686public void forEachRemaining(Consumer<? super K> action) {1687int i, hi, mc;1688if (action == null)1689throw new NullPointerException();1690HashMap<K,V> m = map;1691Node<K,V>[] tab = m.table;1692if ((hi = fence) < 0) {1693mc = expectedModCount = m.modCount;1694hi = fence = (tab == null) ? 0 : tab.length;1695}1696else1697mc = expectedModCount;1698if (tab != null && tab.length >= hi &&1699(i = index) >= 0 && (i < (index = hi) || current != null)) {1700Node<K,V> p = current;1701current = null;1702do {1703if (p == null)1704p = tab[i++];1705else {1706action.accept(p.key);1707p = p.next;1708}1709} while (p != null || i < hi);1710if (m.modCount != mc)1711throw new ConcurrentModificationException();1712}1713}17141715public boolean tryAdvance(Consumer<? super K> action) {1716int hi;1717if (action == null)1718throw new NullPointerException();1719Node<K,V>[] tab = map.table;1720if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {1721while (current != null || index < hi) {1722if (current == null)1723current = tab[index++];1724else {1725K k = current.key;1726current = current.next;1727action.accept(k);1728if (map.modCount != expectedModCount)1729throw new ConcurrentModificationException();1730return true;1731}1732}1733}1734return false;1735}17361737public int characteristics() {1738return (fence < 0 || est == map.size ? Spliterator.SIZED : 0) |1739Spliterator.DISTINCT;1740}1741}17421743static final class ValueSpliterator<K,V>1744extends HashMapSpliterator<K,V>1745implements Spliterator<V> {1746ValueSpliterator(HashMap<K,V> m, int origin, int fence, int est,1747int expectedModCount) {1748super(m, origin, fence, est, expectedModCount);1749}17501751public ValueSpliterator<K,V> trySplit() {1752int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;1753return (lo >= mid || current != null) ? null :1754new ValueSpliterator<>(map, lo, index = mid, est >>>= 1,1755expectedModCount);1756}17571758public void forEachRemaining(Consumer<? super V> action) {1759int i, hi, mc;1760if (action == null)1761throw new NullPointerException();1762HashMap<K,V> m = map;1763Node<K,V>[] tab = m.table;1764if ((hi = fence) < 0) {1765mc = expectedModCount = m.modCount;1766hi = fence = (tab == null) ? 0 : tab.length;1767}1768else1769mc = expectedModCount;1770if (tab != null && tab.length >= hi &&1771(i = index) >= 0 && (i < (index = hi) || current != null)) {1772Node<K,V> p = current;1773current = null;1774do {1775if (p == null)1776p = tab[i++];1777else {1778action.accept(p.value);1779p = p.next;1780}1781} while (p != null || i < hi);1782if (m.modCount != mc)1783throw new ConcurrentModificationException();1784}1785}17861787public boolean tryAdvance(Consumer<? super V> action) {1788int hi;1789if (action == null)1790throw new NullPointerException();1791Node<K,V>[] tab = map.table;1792if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {1793while (current != null || index < hi) {1794if (current == null)1795current = tab[index++];1796else {1797V v = current.value;1798current = current.next;1799action.accept(v);1800if (map.modCount != expectedModCount)1801throw new ConcurrentModificationException();1802return true;1803}1804}1805}1806return false;1807}18081809public int characteristics() {1810return (fence < 0 || est == map.size ? Spliterator.SIZED : 0);1811}1812}18131814static final class EntrySpliterator<K,V>1815extends HashMapSpliterator<K,V>1816implements Spliterator<Map.Entry<K,V>> {1817EntrySpliterator(HashMap<K,V> m, int origin, int fence, int est,1818int expectedModCount) {1819super(m, origin, fence, est, expectedModCount);1820}18211822public EntrySpliterator<K,V> trySplit() {1823int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;1824return (lo >= mid || current != null) ? null :1825new EntrySpliterator<>(map, lo, index = mid, est >>>= 1,1826expectedModCount);1827}18281829public void forEachRemaining(Consumer<? super Map.Entry<K,V>> action) {1830int i, hi, mc;1831if (action == null)1832throw new NullPointerException();1833HashMap<K,V> m = map;1834Node<K,V>[] tab = m.table;1835if ((hi = fence) < 0) {1836mc = expectedModCount = m.modCount;1837hi = fence = (tab == null) ? 0 : tab.length;1838}1839else1840mc = expectedModCount;1841if (tab != null && tab.length >= hi &&1842(i = index) >= 0 && (i < (index = hi) || current != null)) {1843Node<K,V> p = current;1844current = null;1845do {1846if (p == null)1847p = tab[i++];1848else {1849action.accept(p);1850p = p.next;1851}1852} while (p != null || i < hi);1853if (m.modCount != mc)1854throw new ConcurrentModificationException();1855}1856}18571858public boolean tryAdvance(Consumer<? super Map.Entry<K,V>> action) {1859int hi;1860if (action == null)1861throw new NullPointerException();1862Node<K,V>[] tab = map.table;1863if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {1864while (current != null || index < hi) {1865if (current == null)1866current = tab[index++];1867else {1868Node<K,V> e = current;1869current = current.next;1870action.accept(e);1871if (map.modCount != expectedModCount)1872throw new ConcurrentModificationException();1873return true;1874}1875}1876}1877return false;1878}18791880public int characteristics() {1881return (fence < 0 || est == map.size ? Spliterator.SIZED : 0) |1882Spliterator.DISTINCT;1883}1884}18851886/* ------------------------------------------------------------ */1887// LinkedHashMap support188818891890/*1891* The following package-protected methods are designed to be1892* overridden by LinkedHashMap, but not by any other subclass.1893* Nearly all other internal methods are also package-protected1894* but are declared final, so can be used by LinkedHashMap, view1895* classes, and HashSet.1896*/18971898// Create a regular (non-tree) node1899Node<K,V> newNode(int hash, K key, V value, Node<K,V> next) {1900return new Node<>(hash, key, value, next);1901}19021903// For conversion from TreeNodes to plain nodes1904Node<K,V> replacementNode(Node<K,V> p, Node<K,V> next) {1905return new Node<>(p.hash, p.key, p.value, next);1906}19071908// Create a tree bin node1909TreeNode<K,V> newTreeNode(int hash, K key, V value, Node<K,V> next) {1910return new TreeNode<>(hash, key, value, next);1911}19121913// For treeifyBin1914TreeNode<K,V> replacementTreeNode(Node<K,V> p, Node<K,V> next) {1915return new TreeNode<>(p.hash, p.key, p.value, next);1916}19171918/**1919* Reset to initial default state. Called by clone and readObject.1920*/1921void reinitialize() {1922table = null;1923entrySet = null;1924keySet = null;1925values = null;1926modCount = 0;1927threshold = 0;1928size = 0;1929}19301931// Callbacks to allow LinkedHashMap post-actions1932void afterNodeAccess(Node<K,V> p) { }1933void afterNodeInsertion(boolean evict) { }1934void afterNodeRemoval(Node<K,V> p) { }19351936// Called only from writeObject, to ensure compatible ordering.1937void internalWriteEntries(java.io.ObjectOutputStream s) throws IOException {1938Node<K,V>[] tab;1939if (size > 0 && (tab = table) != null) {1940for (Node<K,V> e : tab) {1941for (; e != null; e = e.next) {1942s.writeObject(e.key);1943s.writeObject(e.value);1944}1945}1946}1947}19481949/* ------------------------------------------------------------ */1950// Tree bins19511952/**1953* Entry for Tree bins. Extends LinkedHashMap.Entry (which in turn1954* extends Node) so can be used as extension of either regular or1955* linked node.1956*/1957static final class TreeNode<K,V> extends LinkedHashMap.Entry<K,V> {1958TreeNode<K,V> parent; // red-black tree links1959TreeNode<K,V> left;1960TreeNode<K,V> right;1961TreeNode<K,V> prev; // needed to unlink next upon deletion1962boolean red;1963TreeNode(int hash, K key, V val, Node<K,V> next) {1964super(hash, key, val, next);1965}19661967/**1968* Returns root of tree containing this node.1969*/1970final TreeNode<K,V> root() {1971for (TreeNode<K,V> r = this, p;;) {1972if ((p = r.parent) == null)1973return r;1974r = p;1975}1976}19771978/**1979* Ensures that the given root is the first node of its bin.1980*/1981static <K,V> void moveRootToFront(Node<K,V>[] tab, TreeNode<K,V> root) {1982int n;1983if (root != null && tab != null && (n = tab.length) > 0) {1984int index = (n - 1) & root.hash;1985TreeNode<K,V> first = (TreeNode<K,V>)tab[index];1986if (root != first) {1987Node<K,V> rn;1988tab[index] = root;1989TreeNode<K,V> rp = root.prev;1990if ((rn = root.next) != null)1991((TreeNode<K,V>)rn).prev = rp;1992if (rp != null)1993rp.next = rn;1994if (first != null)1995first.prev = root;1996root.next = first;1997root.prev = null;1998}1999assert checkInvariants(root);2000}2001}20022003/**2004* Finds the node starting at root p with the given hash and key.2005* The kc argument caches comparableClassFor(key) upon first use2006* comparing keys.2007*/2008final TreeNode<K,V> find(int h, Object k, Class<?> kc) {2009TreeNode<K,V> p = this;2010do {2011int ph, dir; K pk;2012TreeNode<K,V> pl = p.left, pr = p.right, q;2013if ((ph = p.hash) > h)2014p = pl;2015else if (ph < h)2016p = pr;2017else if ((pk = p.key) == k || (k != null && k.equals(pk)))2018return p;2019else if (pl == null)2020p = pr;2021else if (pr == null)2022p = pl;2023else if ((kc != null ||2024(kc = comparableClassFor(k)) != null) &&2025(dir = compareComparables(kc, k, pk)) != 0)2026p = (dir < 0) ? pl : pr;2027else if ((q = pr.find(h, k, kc)) != null)2028return q;2029else2030p = pl;2031} while (p != null);2032return null;2033}20342035/**2036* Calls find for root node.2037*/2038final TreeNode<K,V> getTreeNode(int h, Object k) {2039return ((parent != null) ? root() : this).find(h, k, null);2040}20412042/**2043* Tie-breaking utility for ordering insertions when equal2044* hashCodes and non-comparable. We don't require a total2045* order, just a consistent insertion rule to maintain2046* equivalence across rebalancings. Tie-breaking further than2047* necessary simplifies testing a bit.2048*/2049static int tieBreakOrder(Object a, Object b) {2050int d;2051if (a == null || b == null ||2052(d = a.getClass().getName().2053compareTo(b.getClass().getName())) == 0)2054d = (System.identityHashCode(a) <= System.identityHashCode(b) ?2055-1 : 1);2056return d;2057}20582059/**2060* Forms tree of the nodes linked from this node.2061*/2062final void treeify(Node<K,V>[] tab) {2063TreeNode<K,V> root = null;2064for (TreeNode<K,V> x = this, next; x != null; x = next) {2065next = (TreeNode<K,V>)x.next;2066x.left = x.right = null;2067if (root == null) {2068x.parent = null;2069x.red = false;2070root = x;2071}2072else {2073K k = x.key;2074int h = x.hash;2075Class<?> kc = null;2076for (TreeNode<K,V> p = root;;) {2077int dir, ph;2078K pk = p.key;2079if ((ph = p.hash) > h)2080dir = -1;2081else if (ph < h)2082dir = 1;2083else if ((kc == null &&2084(kc = comparableClassFor(k)) == null) ||2085(dir = compareComparables(kc, k, pk)) == 0)2086dir = tieBreakOrder(k, pk);20872088TreeNode<K,V> xp = p;2089if ((p = (dir <= 0) ? p.left : p.right) == null) {2090x.parent = xp;2091if (dir <= 0)2092xp.left = x;2093else2094xp.right = x;2095root = balanceInsertion(root, x);2096break;2097}2098}2099}2100}2101moveRootToFront(tab, root);2102}21032104/**2105* Returns a list of non-TreeNodes replacing those linked from2106* this node.2107*/2108final Node<K,V> untreeify(HashMap<K,V> map) {2109Node<K,V> hd = null, tl = null;2110for (Node<K,V> q = this; q != null; q = q.next) {2111Node<K,V> p = map.replacementNode(q, null);2112if (tl == null)2113hd = p;2114else2115tl.next = p;2116tl = p;2117}2118return hd;2119}21202121/**2122* Tree version of putVal.2123*/2124final TreeNode<K,V> putTreeVal(HashMap<K,V> map, Node<K,V>[] tab,2125int h, K k, V v) {2126Class<?> kc = null;2127boolean searched = false;2128TreeNode<K,V> root = (parent != null) ? root() : this;2129for (TreeNode<K,V> p = root;;) {2130int dir, ph; K pk;2131if ((ph = p.hash) > h)2132dir = -1;2133else if (ph < h)2134dir = 1;2135else if ((pk = p.key) == k || (k != null && k.equals(pk)))2136return p;2137else if ((kc == null &&2138(kc = comparableClassFor(k)) == null) ||2139(dir = compareComparables(kc, k, pk)) == 0) {2140if (!searched) {2141TreeNode<K,V> q, ch;2142searched = true;2143if (((ch = p.left) != null &&2144(q = ch.find(h, k, kc)) != null) ||2145((ch = p.right) != null &&2146(q = ch.find(h, k, kc)) != null))2147return q;2148}2149dir = tieBreakOrder(k, pk);2150}21512152TreeNode<K,V> xp = p;2153if ((p = (dir <= 0) ? p.left : p.right) == null) {2154Node<K,V> xpn = xp.next;2155TreeNode<K,V> x = map.newTreeNode(h, k, v, xpn);2156if (dir <= 0)2157xp.left = x;2158else2159xp.right = x;2160xp.next = x;2161x.parent = x.prev = xp;2162if (xpn != null)2163((TreeNode<K,V>)xpn).prev = x;2164moveRootToFront(tab, balanceInsertion(root, x));2165return null;2166}2167}2168}21692170/**2171* Removes the given node, that must be present before this call.2172* This is messier than typical red-black deletion code because we2173* cannot swap the contents of an interior node with a leaf2174* successor that is pinned by "next" pointers that are accessible2175* independently during traversal. So instead we swap the tree2176* linkages. If the current tree appears to have too few nodes,2177* the bin is converted back to a plain bin. (The test triggers2178* somewhere between 2 and 6 nodes, depending on tree structure).2179*/2180final void removeTreeNode(HashMap<K,V> map, Node<K,V>[] tab,2181boolean movable) {2182int n;2183if (tab == null || (n = tab.length) == 0)2184return;2185int index = (n - 1) & hash;2186TreeNode<K,V> first = (TreeNode<K,V>)tab[index], root = first, rl;2187TreeNode<K,V> succ = (TreeNode<K,V>)next, pred = prev;2188if (pred == null)2189tab[index] = first = succ;2190else2191pred.next = succ;2192if (succ != null)2193succ.prev = pred;2194if (first == null)2195return;2196if (root.parent != null)2197root = root.root();2198if (root == null2199|| (movable2200&& (root.right == null2201|| (rl = root.left) == null2202|| rl.left == null))) {2203tab[index] = first.untreeify(map); // too small2204return;2205}2206TreeNode<K,V> p = this, pl = left, pr = right, replacement;2207if (pl != null && pr != null) {2208TreeNode<K,V> s = pr, sl;2209while ((sl = s.left) != null) // find successor2210s = sl;2211boolean c = s.red; s.red = p.red; p.red = c; // swap colors2212TreeNode<K,V> sr = s.right;2213TreeNode<K,V> pp = p.parent;2214if (s == pr) { // p was s's direct parent2215p.parent = s;2216s.right = p;2217}2218else {2219TreeNode<K,V> sp = s.parent;2220if ((p.parent = sp) != null) {2221if (s == sp.left)2222sp.left = p;2223else2224sp.right = p;2225}2226if ((s.right = pr) != null)2227pr.parent = s;2228}2229p.left = null;2230if ((p.right = sr) != null)2231sr.parent = p;2232if ((s.left = pl) != null)2233pl.parent = s;2234if ((s.parent = pp) == null)2235root = s;2236else if (p == pp.left)2237pp.left = s;2238else2239pp.right = s;2240if (sr != null)2241replacement = sr;2242else2243replacement = p;2244}2245else if (pl != null)2246replacement = pl;2247else if (pr != null)2248replacement = pr;2249else2250replacement = p;2251if (replacement != p) {2252TreeNode<K,V> pp = replacement.parent = p.parent;2253if (pp == null)2254(root = replacement).red = false;2255else if (p == pp.left)2256pp.left = replacement;2257else2258pp.right = replacement;2259p.left = p.right = p.parent = null;2260}22612262TreeNode<K,V> r = p.red ? root : balanceDeletion(root, replacement);22632264if (replacement == p) { // detach2265TreeNode<K,V> pp = p.parent;2266p.parent = null;2267if (pp != null) {2268if (p == pp.left)2269pp.left = null;2270else if (p == pp.right)2271pp.right = null;2272}2273}2274if (movable)2275moveRootToFront(tab, r);2276}22772278/**2279* Splits nodes in a tree bin into lower and upper tree bins,2280* or untreeifies if now too small. Called only from resize;2281* see above discussion about split bits and indices.2282*2283* @param map the map2284* @param tab the table for recording bin heads2285* @param index the index of the table being split2286* @param bit the bit of hash to split on2287*/2288final void split(HashMap<K,V> map, Node<K,V>[] tab, int index, int bit) {2289TreeNode<K,V> b = this;2290// Relink into lo and hi lists, preserving order2291TreeNode<K,V> loHead = null, loTail = null;2292TreeNode<K,V> hiHead = null, hiTail = null;2293int lc = 0, hc = 0;2294for (TreeNode<K,V> e = b, next; e != null; e = next) {2295next = (TreeNode<K,V>)e.next;2296e.next = null;2297if ((e.hash & bit) == 0) {2298if ((e.prev = loTail) == null)2299loHead = e;2300else2301loTail.next = e;2302loTail = e;2303++lc;2304}2305else {2306if ((e.prev = hiTail) == null)2307hiHead = e;2308else2309hiTail.next = e;2310hiTail = e;2311++hc;2312}2313}23142315if (loHead != null) {2316if (lc <= UNTREEIFY_THRESHOLD)2317tab[index] = loHead.untreeify(map);2318else {2319tab[index] = loHead;2320if (hiHead != null) // (else is already treeified)2321loHead.treeify(tab);2322}2323}2324if (hiHead != null) {2325if (hc <= UNTREEIFY_THRESHOLD)2326tab[index + bit] = hiHead.untreeify(map);2327else {2328tab[index + bit] = hiHead;2329if (loHead != null)2330hiHead.treeify(tab);2331}2332}2333}23342335/* ------------------------------------------------------------ */2336// Red-black tree methods, all adapted from CLR23372338static <K,V> TreeNode<K,V> rotateLeft(TreeNode<K,V> root,2339TreeNode<K,V> p) {2340TreeNode<K,V> r, pp, rl;2341if (p != null && (r = p.right) != null) {2342if ((rl = p.right = r.left) != null)2343rl.parent = p;2344if ((pp = r.parent = p.parent) == null)2345(root = r).red = false;2346else if (pp.left == p)2347pp.left = r;2348else2349pp.right = r;2350r.left = p;2351p.parent = r;2352}2353return root;2354}23552356static <K,V> TreeNode<K,V> rotateRight(TreeNode<K,V> root,2357TreeNode<K,V> p) {2358TreeNode<K,V> l, pp, lr;2359if (p != null && (l = p.left) != null) {2360if ((lr = p.left = l.right) != null)2361lr.parent = p;2362if ((pp = l.parent = p.parent) == null)2363(root = l).red = false;2364else if (pp.right == p)2365pp.right = l;2366else2367pp.left = l;2368l.right = p;2369p.parent = l;2370}2371return root;2372}23732374static <K,V> TreeNode<K,V> balanceInsertion(TreeNode<K,V> root,2375TreeNode<K,V> x) {2376x.red = true;2377for (TreeNode<K,V> xp, xpp, xppl, xppr;;) {2378if ((xp = x.parent) == null) {2379x.red = false;2380return x;2381}2382else if (!xp.red || (xpp = xp.parent) == null)2383return root;2384if (xp == (xppl = xpp.left)) {2385if ((xppr = xpp.right) != null && xppr.red) {2386xppr.red = false;2387xp.red = false;2388xpp.red = true;2389x = xpp;2390}2391else {2392if (x == xp.right) {2393root = rotateLeft(root, x = xp);2394xpp = (xp = x.parent) == null ? null : xp.parent;2395}2396if (xp != null) {2397xp.red = false;2398if (xpp != null) {2399xpp.red = true;2400root = rotateRight(root, xpp);2401}2402}2403}2404}2405else {2406if (xppl != null && xppl.red) {2407xppl.red = false;2408xp.red = false;2409xpp.red = true;2410x = xpp;2411}2412else {2413if (x == xp.left) {2414root = rotateRight(root, x = xp);2415xpp = (xp = x.parent) == null ? null : xp.parent;2416}2417if (xp != null) {2418xp.red = false;2419if (xpp != null) {2420xpp.red = true;2421root = rotateLeft(root, xpp);2422}2423}2424}2425}2426}2427}24282429static <K,V> TreeNode<K,V> balanceDeletion(TreeNode<K,V> root,2430TreeNode<K,V> x) {2431for (TreeNode<K,V> xp, xpl, xpr;;) {2432if (x == null || x == root)2433return root;2434else if ((xp = x.parent) == null) {2435x.red = false;2436return x;2437}2438else if (x.red) {2439x.red = false;2440return root;2441}2442else if ((xpl = xp.left) == x) {2443if ((xpr = xp.right) != null && xpr.red) {2444xpr.red = false;2445xp.red = true;2446root = rotateLeft(root, xp);2447xpr = (xp = x.parent) == null ? null : xp.right;2448}2449if (xpr == null)2450x = xp;2451else {2452TreeNode<K,V> sl = xpr.left, sr = xpr.right;2453if ((sr == null || !sr.red) &&2454(sl == null || !sl.red)) {2455xpr.red = true;2456x = xp;2457}2458else {2459if (sr == null || !sr.red) {2460if (sl != null)2461sl.red = false;2462xpr.red = true;2463root = rotateRight(root, xpr);2464xpr = (xp = x.parent) == null ?2465null : xp.right;2466}2467if (xpr != null) {2468xpr.red = (xp == null) ? false : xp.red;2469if ((sr = xpr.right) != null)2470sr.red = false;2471}2472if (xp != null) {2473xp.red = false;2474root = rotateLeft(root, xp);2475}2476x = root;2477}2478}2479}2480else { // symmetric2481if (xpl != null && xpl.red) {2482xpl.red = false;2483xp.red = true;2484root = rotateRight(root, xp);2485xpl = (xp = x.parent) == null ? null : xp.left;2486}2487if (xpl == null)2488x = xp;2489else {2490TreeNode<K,V> sl = xpl.left, sr = xpl.right;2491if ((sl == null || !sl.red) &&2492(sr == null || !sr.red)) {2493xpl.red = true;2494x = xp;2495}2496else {2497if (sl == null || !sl.red) {2498if (sr != null)2499sr.red = false;2500xpl.red = true;2501root = rotateLeft(root, xpl);2502xpl = (xp = x.parent) == null ?2503null : xp.left;2504}2505if (xpl != null) {2506xpl.red = (xp == null) ? false : xp.red;2507if ((sl = xpl.left) != null)2508sl.red = false;2509}2510if (xp != null) {2511xp.red = false;2512root = rotateRight(root, xp);2513}2514x = root;2515}2516}2517}2518}2519}25202521/**2522* Recursive invariant check2523*/2524static <K,V> boolean checkInvariants(TreeNode<K,V> t) {2525TreeNode<K,V> tp = t.parent, tl = t.left, tr = t.right,2526tb = t.prev, tn = (TreeNode<K,V>)t.next;2527if (tb != null && tb.next != t)2528return false;2529if (tn != null && tn.prev != t)2530return false;2531if (tp != null && t != tp.left && t != tp.right)2532return false;2533if (tl != null && (tl.parent != t || tl.hash > t.hash))2534return false;2535if (tr != null && (tr.parent != t || tr.hash < t.hash))2536return false;2537if (t.red && tl != null && tl.red && tr != null && tr.red)2538return false;2539if (tl != null && !checkInvariants(tl))2540return false;2541if (tr != null && !checkInvariants(tr))2542return false;2543return true;2544}2545}25462547}254825492550