Today I read through the source code of HashSet and found that it is essentially a wrapper around a HashMap: each constructor simply constructs the internal HashMap.
public HashSet() {
    map = new HashMap<>();
}

public HashSet(Collection<? extends E> c) {
    // Size the backing map so that copying in c's elements should not force a resize.
    map = new HashMap<>(Math.max((int) (c.size()/.75f) + 1, 16));
    addAll(c);
}

public HashSet(int initialCapacity, float loadFactor) {
    map = new HashMap<>(initialCapacity, loadFactor);
}

public HashSet(int initialCapacity) {
    map = new HashMap<>(initialCapacity);
}

// Package-private; used only by LinkedHashSet. The unused dummy parameter
// merely distinguishes this overload from HashSet(int, float).
HashSet(int initialCapacity, float loadFactor, boolean dummy) {
    map = new LinkedHashMap<>(initialCapacity, loadFactor);
}
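A side note on that second constructor: with the default load factor of 0.75, a capacity of c.size()/0.75 + 1 keeps the copied-in elements below the resize threshold, and 16 acts as a floor. A quick sketch of the arithmetic (the sample sizes are my own):

public class CtorSizingDemo {
    public static void main(String[] args) {
        // Small collection: (int) (3 / .75f) + 1 == 5, so the floor of 16 wins.
        System.out.println(Math.max((int) (3 / .75f) + 1, 16));   // 16
        // Larger collection: (int) (100 / .75f) + 1 == 134, which wins over 16.
        System.out.println(Math.max((int) (100 / .75f) + 1, 16)); // 134
    }
}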
Adding and removing elements likewise just delegates to the internal HashMap:
public boolean add(E e) {
    return map.put(e, PRESENT) == null;
}

public boolean remove(Object o) {
    return map.remove(o) == PRESENT;
}

public void clear() {
    map.clear();
}
The value PRESENT is a single static final Object, shared as the dummy value for every key:
private static final Object PRESENT = new Object();
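Because every key maps to this same PRESENT instance, the return values of map.put and map.remove translate directly into Set semantics. A quick demonstration:

import java.util.HashSet;

public class AddRemoveDemo {
    public static void main(String[] args) {
        HashSet<String> set = new HashSet<>();
        // map.put returns null for a new key, so add reports true:
        System.out.println(set.add("a"));    // true
        // put returns the old value (PRESENT) for an existing key -> false:
        System.out.println(set.add("a"));    // false
        // map.remove returns PRESENT if the key existed -> true:
        System.out.println(set.remove("a")); // true
        System.out.println(set.remove("a")); // false, nothing was mapped
    }
}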
Meanwhile, the field map that actually holds all the data is declared transient:
private transient HashMap<E,Object> map;
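transient tells default serialization to skip the field entirely. A minimal demonstration of the effect (the Holder class here is my own, purely for illustration):

import java.io.*;

public class TransientDemo {
    static class Holder implements Serializable {
        String kept = "kept";
        transient String skipped = "skipped";
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(new Holder());
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray()))) {
            Holder h = (Holder) in.readObject();
            System.out.println(h.kept);    // kept
            System.out.println(h.skipped); // null -- the transient field was never written
        }
    }
}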
Since all of HashSet's data lives in that skipped map, wouldn't serializing a HashSet lose its contents? With that question in mind, reading on through HashSet reveals two more private methods:
private void writeObject(java.io.ObjectOutputStream s)
    throws java.io.IOException {
    // Write out any hidden serialization magic
    s.defaultWriteObject();

    // Write out HashMap capacity and load factor
    s.writeInt(map.capacity());
    s.writeFloat(map.loadFactor());

    // Write out size
    s.writeInt(map.size());

    // Write out all elements in the proper order.
    for (E e : map.keySet())
        s.writeObject(e);
}
private void readObject(java.io.ObjectInputStream s)
    throws java.io.IOException, ClassNotFoundException {
    // Read in any hidden serialization magic
    s.defaultReadObject();

    // Read capacity and verify non-negative.
    int capacity = s.readInt();
    if (capacity < 0) {
        throw new InvalidObjectException("Illegal capacity: " + capacity);
    }

    // Read load factor and verify positive and non NaN.
    float loadFactor = s.readFloat();
    if (loadFactor <= 0 || Float.isNaN(loadFactor)) {
        throw new InvalidObjectException("Illegal load factor: " + loadFactor);
    }

    // Read size and verify non-negative.
    int size = s.readInt();
    if (size < 0) {
        throw new InvalidObjectException("Illegal size: " + size);
    }

    // Set the capacity according to the size and load factor ensuring that
    // the HashMap is at least 25% full but clamping to maximum capacity.
    capacity = (int) Math.min(size * Math.min(1 / loadFactor, 4.0f),
            HashMap.MAXIMUM_CAPACITY);

    // Constructing the backing map will lazily create an array when the first element is
    // added, so check it before construction. Call HashMap.tableSizeFor to compute the
    // actual allocation size. Check Map.Entry[].class since it's the nearest public type to
    // what is actually created.
    SharedSecrets.getJavaOISAccess()
                 .checkArray(s, Map.Entry[].class, HashMap.tableSizeFor(capacity));

    // Create backing HashMap
    map = (((HashSet<?>)this) instanceof LinkedHashSet ?
           new LinkedHashMap<E,Object>(capacity, loadFactor) :
           new HashMap<E,Object>(capacity, loadFactor));

    // Read in all elements in the proper order.
    for (int i = 0; i < size; i++) {
        @SuppressWarnings("unchecked")
        E e = (E) s.readObject();
        map.put(e, PRESENT);
    }
}
Now the picture is clear: when a HashSet is serialized, its writeObject is invoked and writes out the elements held in the internal HashMap; on deserialization, readObject rebuilds that HashMap and repopulates it. So there is indeed no need to serialize the internal HashMap field itself.
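To convince myself the round trip really preserves the data, here is a quick check of my own (not JDK code):

import java.io.*;
import java.util.HashSet;

public class RoundTrip {
    public static void main(String[] args) throws Exception {
        HashSet<String> original = new HashSet<>();
        original.add("x");
        original.add("y");

        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(original); // triggers HashSet.writeObject via reflection
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray()))) {
            @SuppressWarnings("unchecked")
            HashSet<String> copy = (HashSet<String>) in.readObject(); // triggers readObject
            System.out.println(copy.equals(original)); // true -- no data lost
        }
    }
}

The copy comes back equal even though map was never written by the default mechanism. To see how the private writeObject is actually invoked, debug into ObjectOutputStream and land in writeSerialData: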
private void writeSerialData(Object obj, ObjectStreamClass desc)
    throws IOException
{
    ObjectStreamClass.ClassDataSlot[] slots = desc.getClassDataLayout();
    for (int i = 0; i < slots.length; i++) {
        ObjectStreamClass slotDesc = slots[i].desc;
        if (slotDesc.hasWriteObjectMethod()) {
            PutFieldImpl oldPut = curPut;
            curPut = null;
            SerialCallbackContext oldContext = curContext;
            if (extendedDebugInfo) {
                debugInfoStack.push(
                    "custom writeObject data (class \"" +
                    slotDesc.getName() + "\")");
            }
            try {
                curContext = new SerialCallbackContext(obj, slotDesc);
                bout.setBlockDataMode(true);
                slotDesc.invokeWriteObject(obj, this);
                bout.setBlockDataMode(false);
                bout.writeByte(TC_ENDBLOCKDATA);
            } finally {
                curContext.setUsed();
                curContext = oldContext;
                if (extendedDebugInfo) {
                    debugInfoStack.pop();
                }
            }
            curPut = oldPut;
        } else {
            defaultWriteFields(obj, slotDesc);
        }
    }
}
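The branch on hasWriteObjectMethod is the crux: a class that declares its own private writeObject gets it invoked reflectively, while anything else falls through to defaultWriteFields. A small sketch of both paths (the Plain and Custom classes below are mine, for illustration):

import java.io.*;

public class DispatchDemo {
    // No writeObject declared -> serialized via defaultWriteFields.
    static class Plain implements Serializable {
        int x = 1;
    }

    // Declares writeObject -> ObjectOutputStream invokes it reflectively.
    static class Custom implements Serializable {
        int x = 1;

        private void writeObject(ObjectOutputStream s) throws IOException {
            System.out.println("Custom.writeObject called");
            s.defaultWriteObject();
        }
    }

    public static void main(String[] args) throws Exception {
        ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream());
        out.writeObject(new Plain());  // silent: default field writing
        out.writeObject(new Custom()); // prints: Custom.writeObject called
        out.close();
    }
}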
Stepping into slotDesc.invokeWriteObject(obj, this):
void invokeWriteObject(Object obj, ObjectOutputStream out)
    throws IOException, UnsupportedOperationException
{
    requireInitialized();
    if (writeObjectMethod != null) {
        try {
            writeObjectMethod.invoke(obj, new Object[]{ out });
        } catch (InvocationTargetException ex) {
            Throwable th = ex.getTargetException();
            if (th instanceof IOException) {
                throw (IOException) th;
            } else {
                throwMiscException(th);
            }
        } catch (IllegalAccessException ex) {
            // should not occur, as access checks have been suppressed
            throw new InternalError(ex);
        }
    } else {
        throw new UnsupportedOperationException();
    }
}
Here writeObjectMethod is simply the Method object for the method named writeObject on HashSet, looked up when the class descriptor is built:
writeObjectMethod = getPrivateMethod(cl, "writeObject",
                                     new Class<?>[] { ObjectOutputStream.class },
                                     Void.TYPE);
readObjectMethod = getPrivateMethod(cl, "readObject",
                                    new Class<?>[] { ObjectInputStream.class },
                                    Void.TYPE);
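getPrivateMethod is a JDK-internal helper; conceptually it amounts to a getDeclaredMethod lookup plus suppressing access checks, roughly like the sketch below (my approximation, not the JDK's actual implementation):

import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class PrivateLookup {
    static class Sample implements java.io.Serializable {
        private void writeObject(java.io.ObjectOutputStream s)
                throws java.io.IOException {
            s.defaultWriteObject();
        }
    }

    // Rough approximation of ObjectStreamClass.getPrivateMethod: find a method
    // declared on cl itself, require it to be private and void, and make it
    // invocable despite the access modifier.
    static Method findPrivateMethod(Class<?> cl, String name, Class<?>... argTypes) {
        try {
            Method m = cl.getDeclaredMethod(name, argTypes);
            if (m.getReturnType() != Void.TYPE || !Modifier.isPrivate(m.getModifiers())) {
                return null;
            }
            m.setAccessible(true); // suppress access checks, as the JDK does
            return m;
        } catch (NoSuchMethodException e) {
            return null; // the class declares no such method
        }
    }

    public static void main(String[] args) {
        System.out.println(findPrivateMethod(Sample.class, "writeObject",
                java.io.ObjectOutputStream.class)); // Sample's private writeObject
        System.out.println(findPrivateMethod(Object.class, "writeObject",
                java.io.ObjectOutputStream.class)); // null -- not declared
    }
}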
Stepping further into writeObjectMethod.invoke(obj, new Object[]{ out }):
public Object invoke(Object obj, Object... args)
    throws IllegalAccessException, IllegalArgumentException,
           InvocationTargetException
{
    if (!override) {
        if (!Reflection.quickCheckMemberAccess(clazz, modifiers)) {
            Class<?> caller = Reflection.getCallerClass();
            checkAccess(caller, clazz, obj, modifiers);
        }
    }
    MethodAccessor ma = methodAccessor; // read volatile
    if (ma == null) {
        ma = acquireMethodAccessor();
    }
    return ma.invoke(obj, args);
}
So it is now obvious: serialization reflectively invokes HashSet's writeObject, and deserialization likewise reflectively invokes HashSet's readObject; the read path mirrors what we just traced, so I won't repeat it.
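As a final sanity check, it is easy to confirm from user code that HashSet really declares these private hooks (a quick probe of my own):

import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.HashSet;

public class HookProbe {
    public static void main(String[] args) throws Exception {
        Method w = HashSet.class.getDeclaredMethod("writeObject", ObjectOutputStream.class);
        Method r = HashSet.class.getDeclaredMethod("readObject", ObjectInputStream.class);
        // Both exist, are private, and return void -- exactly what
        // ObjectStreamClass.getPrivateMethod looks for.
        System.out.println(Modifier.isPrivate(w.getModifiers())); // true
        System.out.println(Modifier.isPrivate(r.getModifiers())); // true
    }
}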