How Java's ArrayList.add is implemented

ArrayList is one of the most frequently used List implementations. The implementation of boolean add(E e) is fairly direct:

```java
public boolean add(E e) {
    ensureCapacityInternal(size + 1);  // Increments modCount!!
    elementData[size++] = e;
    return true;
}
```

Sometimes we also use void add(int index, E element) to insert an element at a specified index. The JDK implements it as:

```java
public void add(int index, E element) {
    rangeCheckForAdd(index);
    ensureCapacityInternal(size + 1);  // Increments modCount!!
    System.arraycopy(elementData, index, elementData, index + 1,
                     size - index);
    elementData[index] = element;
    size++;
}
```

This one does slightly more work: it ensures the elementData array has enough capacity, shifts every element from index through the end of the array one slot to the right, and finally assigns the new element to position index.
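The effect of that shift is easy to see from the list's contents; a minimal sketch (the class name AddDemo is mine):

```java
import java.util.ArrayList;
import java.util.List;

public class AddDemo {
    public static List<String> buildExample() {
        List<String> list = new ArrayList<>();
        list.add("a");      // [a]
        list.add("b");      // [a, b]
        // Insert at index 1: "b" is shifted one slot to the right.
        list.add(1, "x");   // [a, x, b]
        return list;
    }

    public static void main(String[] args) {
        System.out.println(buildExample());  // [a, x, b]
    }
}
```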

For a long time I assumed that System.arraycopy, as a native method, was implemented in C++ as a call to the underlying memcpy: simple, direct, and with no efficiency concerns.

But reading the OpenJDK source today, I found that is not the whole story.

Take OpenJDK 8u60 as an example. In objArrayKlass.cpp:

```cpp
void ObjArrayKlass::copy_array(arrayOop s, int src_pos, arrayOop d,
                               int dst_pos, int length, TRAPS) {
  assert(s->is_objArray(), "must be obj array");

  if (!d->is_objArray()) {
    THROW(vmSymbols::java_lang_ArrayStoreException());
  }

  // Check is all offsets and lengths are non negative
  if (src_pos < 0 || dst_pos < 0 || length < 0) {
    THROW(vmSymbols::java_lang_ArrayIndexOutOfBoundsException());
  }
  // Check if the ranges are valid
  if ( (((unsigned int) length + (unsigned int) src_pos) > (unsigned int) s->length())
    || (((unsigned int) length + (unsigned int) dst_pos) > (unsigned int) d->length()) ) {
    THROW(vmSymbols::java_lang_ArrayIndexOutOfBoundsException());
  }

  // Special case. Boundary cases must be checked first
  // This allows the following call: copy_array(s, s.length(), d.length(), 0).
  // This is correct, since the position is supposed to be an 'in between point', i.e., s.length(),
  // points to the right of the last element.
  if (length==0) {
    return;
  }
  if (UseCompressedOops) {
    narrowOop* const src = objArrayOop(s)->obj_at_addr<narrowOop>(src_pos);
    narrowOop* const dst = objArrayOop(d)->obj_at_addr<narrowOop>(dst_pos);
    do_copy<narrowOop>(s, src, d, dst, length, CHECK);
  } else {
    oop* const src = objArrayOop(s)->obj_at_addr<oop>(src_pos);
    oop* const dst = objArrayOop(d)->obj_at_addr<oop>(dst_pos);
    do_copy<oop> (s, src, d, dst, length, CHECK);
  }
}
```

(Note: the template arguments in angle brackets were stripped by the page extraction; I have restored them to match the OpenJDK source.)

As you can see, after copy_array performs all these checks, the actual copying happens in do_copy, which is implemented as follows:

```cpp
// Either oop or narrowOop depending on UseCompressedOops.
template <class T> void ObjArrayKlass::do_copy(arrayOop s, T* src,
                                               arrayOop d, T* dst, int length, TRAPS) {

  BarrierSet* bs = Universe::heap()->barrier_set();
  // For performance reasons, we assume we are that the write barrier we
  // are using has optimized modes for arrays of references. At least one
  // of the asserts below will fail if this is not the case.
  assert(bs->has_write_ref_array_opt(), "Barrier set must have ref array opt");
  assert(bs->has_write_ref_array_pre_opt(), "For pre-barrier as well.");

  if (s == d) {
    // since source and destination are equal we do not need conversion checks.
    assert(length > 0, "sanity check");
    bs->write_ref_array_pre(dst, length);
    Copy::conjoint_oops_atomic(src, dst, length);
  } else {
    // We have to make sure all elements conform to the destination array
    Klass* bound = ObjArrayKlass::cast(d->klass())->element_klass();
    Klass* stype = ObjArrayKlass::cast(s->klass())->element_klass();
    if (stype == bound || stype->is_subtype_of(bound)) {
      // elements are guaranteed to be subtypes, so no check necessary
      bs->write_ref_array_pre(dst, length);
      Copy::conjoint_oops_atomic(src, dst, length);
    } else {
      // slow case: need individual subtype checks
      // note: don't use obj_at_put below because it includes a redundant store check
      T* from = src;
      T* end = from + length;
      for (T* p = dst; from < end; from++, p++) {
        // XXX this is going to be slow.
        T element = *from;
        // even slower now
        bool element_is_null = oopDesc::is_null(element);
        oop new_val = element_is_null ? oop(NULL)
                                      : oopDesc::decode_heap_oop_not_null(element);
        if (element_is_null ||
            (new_val->klass())->is_subtype_of(bound)) {
          bs->write_ref_field_pre(p, new_val);
          *p = element;
        } else {
          // We must do a barrier to cover the partial copy.
          const size_t pd = pointer_delta(p, dst, (size_t)heapOopSize);
          // pointer delta is scaled to number of elements (length field in
          // objArrayOop) which we assume is 32 bit.
          assert(pd == (size_t)(int)pd, "length field overflow");
          bs->write_ref_array((HeapWord*)dst, pd);
          THROW(vmSymbols::java_lang_ArrayStoreException());
          return;
        }
      }
    }
  }
  bs->write_ref_array((HeapWord*)dst, length);
}
```

So once the heap's write barriers are in place, in the slow case (when each element needs an individual subtype check) the elements are moved one at a time in a for loop. Considerably more work than I had imagined.
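That per-element store check in the slow path is observable from plain Java: copying from an Object[] into a String[] must check every element, and the copy stops at the first incompatible one, leaving a partial result (the class name StoreCheckDemo is mine):

```java
import java.util.Arrays;

public class StoreCheckDemo {
    // Returns the contents of dst after an arraycopy that fails part-way through.
    public static String[] partialCopy() {
        Object[] src = {"a", "b", Integer.valueOf(42), "c"};
        String[] dst = new String[4];
        try {
            // An Object[] may hold values that do not fit into a String[],
            // so element types must be checked one by one here.
            System.arraycopy(src, 0, dst, 0, src.length);
        } catch (ArrayStoreException expected) {
            // "a" and "b" were already copied before the check failed on 42.
        }
        return dst;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(partialCopy()));  // [a, b, null, null]
    }
}
```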

If we take m elements and insert them one by one, each at a given position, into an ArrayList of length n using ArrayList.add(int, E), the complexity is O(m*n), or O(m*(m+n)) counting the already-inserted elements that also get shifted; so if neither m nor n is small, this is indeed inefficient.

A faster approach is to allocate an array or ArrayList of length m+n, write the m elements to be inserted at their given positions, and fill the remaining slots, in order, with the elements of the original length-n list. That brings the time complexity down to O(m+n).
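A sketch of that single-pass merge, assuming the given positions are sorted ascending and refer to indices in the final list (the helper insertAll is hypothetical, written for illustration):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class BatchInsert {
    // Inserts items.get(i) at position positions[i] of the result list in one
    // O(m + n) pass, instead of m separate add(int, E) calls that each shift
    // the tail of the array.
    public static <E> List<E> insertAll(List<E> base, int[] positions, List<E> items) {
        List<E> result = new ArrayList<>(base.size() + items.size());
        int bi = 0, ii = 0;
        for (int pos = 0; pos < base.size() + items.size(); pos++) {
            if (ii < items.size() && positions[ii] == pos) {
                result.add(items.get(ii++));  // one of the m new elements
            } else {
                result.add(base.get(bi++));   // next element of the original list
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> base = Arrays.asList(1, 2, 3);
        List<Integer> merged = insertAll(base, new int[]{1, 3}, Arrays.asList(10, 20));
        System.out.println(merged);  // [1, 10, 2, 20, 3]
    }
}
```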

Also, the implementations above both call ensureCapacityInternal(int). The core of this capacity-ensuring logic lives in grow:

```java
private void grow(int minCapacity) {
    // overflow-conscious code
    int oldCapacity = elementData.length;
    int newCapacity = oldCapacity + (oldCapacity >> 1);
    if (newCapacity - minCapacity < 0)
        newCapacity = minCapacity;
    if (newCapacity - MAX_ARRAY_SIZE > 0)
        newCapacity = hugeCapacity(minCapacity);
    // minCapacity is usually close to size, so this is a win:
    elementData = Arrays.copyOf(elementData, newCapacity);
}
```

As is well known, for efficiency reasons ArrayList does not grow to exactly the requested capacity minCapacity. The new capacity is computed as follows: if the requested capacity is larger than 1.5 times the current capacity, reallocate to exactly the requested capacity; otherwise grow to 1.5 times the current capacity (capped, of course, so it cannot exceed Integer.MAX_VALUE). oldCapacity + (oldCapacity >> 1) is just 1.5 times the current capacity, equivalent to (int) (oldCapacity * 1.5); but since it uses only a shift and an add instead of floating-point arithmetic, it is clearly cheaper.
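The growth rule is easy to replay outside the JDK; a small sketch (10 is ArrayList's default initial capacity; the class name GrowthDemo is mine):

```java
import java.util.ArrayList;
import java.util.List;

public class GrowthDemo {
    // Replays ArrayList's growth rule: newCapacity = old + old/2,
    // recording the capacity after each resize.
    public static List<Integer> capacitySteps(int initial, int steps) {
        List<Integer> caps = new ArrayList<>();
        int cap = initial;
        for (int i = 0; i < steps; i++) {
            caps.add(cap);
            cap = cap + (cap >> 1);  // same as (int)(cap * 1.5), shift only
        }
        return caps;
    }

    public static void main(String[] args) {
        // Starting from the default capacity of 10:
        System.out.println(capacitySteps(10, 5));  // [10, 15, 22, 33, 49]
    }
}
```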

So when elements are added to an ArrayList one at a time, the capacity grows by a factor of 1.5 whenever it runs out. As for the choice of 1.5: perhaps doubling was felt to grow too fast, or perhaps the number is backed by experimental data.

As for the Arrays.copyOf call that appears in this code: it allocates a new array and copies elementData into it; if the new array is longer, the tail is filled with null, and if it is shorter, the copy is truncated. Under the hood it, too, calls System.arraycopy.
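Both behaviors are easy to confirm with a minimal sketch:

```java
import java.util.Arrays;

public class CopyOfDemo {
    public static void main(String[] args) {
        String[] src = {"a", "b", "c"};
        // Longer target: the tail is padded with null.
        String[] longer = Arrays.copyOf(src, 5);
        System.out.println(Arrays.toString(longer));   // [a, b, c, null, null]
        // Shorter target: the copy is truncated.
        String[] shorter = Arrays.copyOf(src, 2);
        System.out.println(Arrays.toString(shorter));  // [a, b]
    }
}
```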