During data migration, Core Data sets up two stacks, one for the source store and one for the destination store. Core Data then fetches objects from the source stack and inserts the corresponding objects into the destination stack. Note that Core Data must re-create many of the objects in the new stack.

Overview
In iOS, a persistent store is bound to a particular managed object model, so when the store and the model no longer match, a migration is required. There are two areas where you get default functionality and hooks for customizing the default behavior:
- when detecting version skew and initiating the migration process;
- when performing the migration itself.
Performing a migration requires two Core Data stacks, both created automatically for you: one for the source store and one for the destination store. The stack-to-stack copy is carried out in three stages.

Requirements for the Migration Process
Migration of a persistent store is performed by an instance of NSMigrationManager. To perform the migration, the migration manager requires several things:
- The managed object model for the destination store. This is the persistent store coordinator's model.
- A managed object model that can open the existing (source) store.
- Most importantly, a mapping model that defines how to transform the source into the destination. (If you use lightweight migration, a mapping model is not required; see "Lightweight Migration.")
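Assuming you already have the source and destination models and the store URLs in hand, kicking off a migration with NSMigrationManager might look like the following sketch (the function name and the use of SQLite stores are assumptions for illustration):

```swift
import CoreData

func migrate(sourceModel: NSManagedObjectModel,
             destinationModel: NSManagedObjectModel,
             from sourceURL: URL,
             to destinationURL: URL) throws {
    // Look up a mapping model in the main bundle that maps
    // sourceModel to destinationModel. (Not needed for lightweight migration.)
    guard let mapping = NSMappingModel(from: nil,
                                       forSourceModel: sourceModel,
                                       destinationModel: destinationModel) else {
        fatalError("No mapping model found for these two models")
    }

    // The migration manager drives the three-stage copy described below.
    let manager = NSMigrationManager(sourceModel: sourceModel,
                                     destinationModel: destinationModel)
    try manager.migrateStore(from: sourceURL,
                             sourceType: NSSQLiteStoreType,
                             options: nil,
                             with: mapping,
                             toDestinationURL: destinationURL,
                             destinationType: NSSQLiteStoreType,
                             destinationOptions: nil)
}
```

Passing nil for the bundles argument of NSMappingModel searches the main bundle for a suitable mapping model.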
In addition, you can customize the migration policy for individual entities, as illustrated in Figure 4-1 below.

Customizing the Entity Migration Policy
If a migration only adds a few attributes, there is no need for a custom policy. It is in more complex cases that you create a subclass of NSEntityMigrationPolicy to customize the migration, for example:
- You have a Person entity that contains address information, and you want to split the address out into a separate Address entity while ensuring uniqueness of each address.
- You want to convert an attribute from a string to a binary representation.
To customize a migration policy, you subclass NSEntityMigrationPolicy and override the relevant methods; see "Three-Stage Migration" below.
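As a sketch of the second case above, a custom policy might override the instance-creation method to convert a string attribute to binary data. The entity, attribute, and class names here are hypothetical:

```swift
import CoreData

// Hypothetical policy that converts a string attribute "notes" in the
// source entity to a binary "notesData" attribute in the destination entity.
class StringToBinaryPolicy: NSEntityMigrationPolicy {
    override func createDestinationInstances(forSource sInstance: NSManagedObject,
                                             in mapping: NSEntityMapping,
                                             manager: NSMigrationManager) throws {
        // Create the destination object in the destination stack.
        let dInstance = NSEntityDescription.insertNewObject(
            forEntityName: mapping.destinationEntityName!,
            into: manager.destinationContext)

        // Copy and convert attributes; relationships are handled in stage two.
        if let notes = sInstance.value(forKey: "notes") as? String {
            dInstance.setValue(notes.data(using: .utf8), forKey: "notesData")
        }

        // Record the association so later stages can find this object.
        manager.associateSourceInstance(sInstance,
                                        withDestinationInstance: dInstance,
                                        for: mapping)
    }
}
```

Calling associateSourceInstance(_:withDestinationInstance:for:) is what populates the association tables the migration manager uses in the later stages.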

Three-Stage Migration
The migration process itself takes place in three stages. It uses a copy of the source and destination models in which the validation rules are disabled and the class of all entities is changed to NSManagedObject.
To perform the migration, Core Data sets up two stacks, one for the source store and one for the destination store. Core Data then processes each entity mapping in the mapping model in turn. It fetches objects of the current entity into the source stack, creates the corresponding objects in the destination stack, then recreates relationships between destination objects in a second stage, before finally applying validation constraints in the final stage.
Before a cycle starts, the entity migration policy responsible for the current entity is sent a beginEntityMapping:manager:error: message. You can override this method to perform any initialization the policy requires. The process then proceeds as follows:
- Create destination instances based on source instances.
  At the beginning of this phase, the entity migration policy is sent a createDestinationInstancesForSourceInstance:entityMapping:manager:error: message; at the end it is sent an endInstanceCreationForEntityMapping:manager:error: message.
  In this stage, only attributes (not relationships) are set in the destination objects.
  Instances of the source entity are fetched. For each instance, appropriate instances of the destination entity are created (typically there is only one) and their attributes populated (for trivial cases, name = $source.name). A record is kept of the instances per entity mapping, since this may be useful in the second stage.
- Recreate relationships.
  At the beginning of this phase, the entity migration policy is sent a createRelationshipsForDestinationInstance:entityMapping:manager:error: message; at the end it is sent an endRelationshipCreationForEntityMapping:manager:error: message.
  For each entity mapping (in order), and for each destination instance created in the first stage, any relationships are recreated.
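The second stage could be customized along these lines. The entity, relationship, and mapping names are hypothetical; the lookup relies on the association tables the manager maintains between source and destination instances:

```swift
import CoreData

// Hypothetical policy that recreates a to-one "address" relationship on a
// migrated Person, using the associations recorded in stage one.
class PersonPolicy: NSEntityMigrationPolicy {
    override func createRelationships(forDestination dInstance: NSManagedObject,
                                      in mapping: NSEntityMapping,
                                      manager: NSMigrationManager) throws {
        // Find the source instance that produced this destination instance.
        let sources = manager.sourceInstances(forEntityMappingName: mapping.name,
                                              destinationInstances: [dInstance])
        guard let sInstance = sources.first,
              let sAddress = sInstance.value(forKey: "address") as? NSManagedObject
        else { return }

        // Map the source's related object to its migrated counterpart,
        // assuming an entity mapping named "AddressToAddress" exists.
        let dAddresses = manager.destinationInstances(
            forEntityMappingName: "AddressToAddress",
            sourceInstances: [sAddress])
        dInstance.setValue(dAddresses.first, forKey: "address")
    }
}
```

The destinationInstances(forEntityMappingName:sourceInstances:) lookup works only for objects whose associations were recorded during the first stage.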
- Validate and save.
  In this phase, the entity migration policy is sent a performCustomValidationForEntityMapping:manager:error: message.
  Validation rules in the destination model are applied to ensure data integrity and consistency, and then the store is saved.
At the end of the cycle, the entity migration policy is sent an endEntityMapping:manager:error: message. You can override this method to perform any clean-up the policy needs to do.
Note that Core Data cannot simply fetch objects into the source stack and insert them into the destination stack; the objects must be re-created in the new stack. Core Data maintains "association tables" that tell it which object in the destination store is the migrated version of which object in the source store, and vice versa. Moreover, because it does not have a means to flush the contexts it is working with, you may accumulate many objects in the migration manager as the migration progresses. If this presents a significant memory overhead and hence gives rise to performance problems, you can customize the process as described in "Multiple Passes—Dealing With Large Datasets."