# ODBC Export/Import cheatsheet using [drincw](https://github.com/fau-CDI/drincw)

## Remarks:

- This was not really tested with explicit Entity References (ER); the ER handling described here is an educated guess about how it should work.
- It was tested by converting all ER paths of the old pathbuilder (PB) and their fields to plaintext. These were then resolved by the link formatter in the new system.

## Steps:

1. Download the [drincw](https://github.com/fau-CDI/drincw) executables or build them from source (see the sketch below).
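
If you prefer building from source, the following is a minimal sketch. It assumes a recent Go toolchain and that the tools are main packages under `cmd/` in the repository; check the drincw README for the authoritative build instructions.

```
# Minimal build-from-source sketch (assumes Go is installed and that the
# tools live under cmd/ in the repository).
git clone https://github.com/fau-CDI/drincw
cd drincw
go build ./cmd/n2j      # produces ./n2j
go build ./cmd/makeodbc # produces ./makeodbc
```
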
2. Export the triplestore of the old system as `.nq`, for example using the [GraphDB](https://www.ontotext.com/products/graphdb/) workbench.
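
As an alternative to the workbench export dialog, the dump can also be fetched over GraphDB's RDF4J-style REST API; the server URL and repository name below are placeholders for your own installation.

```
# Placeholder server URL and repository name; adjust to your installation.
curl -H "Accept: application/n-quads" \
     "http://localhost:7200/repositories/old-wisski/statements?infer=false" \
     -o triplestore_dump.nq
```
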
3. Export the Pathbuilder of the old system as `.xml`:
- In case you want to use the plaintext + WissKI Link Formatter approach, change all ER paths to plaintext.

4. Create an SQLite dump using [n2j](https://github.com/fau-CDI/drincw#n2j---a-wisski-viewer):
- use `./n2j -sqlite dump.sqlite path_to_old_pb.xml triplestore_dump.nq`
- you can specify the separator for fields with multiple values with the `-sqlite-delimiter` flag (see the example below)
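
For example, to separate multiple values with a semicolon (an arbitrary choice), pass the flag before the positional arguments, just like `-sqlite` above:

```
# Same dump command as above, with an explicit delimiter for multi-valued fields.
./n2j -sqlite dump.sqlite -sqlite-delimiter ";" path_to_old_pb.xml triplestore_dump.nq
```
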
5. Import the dump into [phpMyAdmin](https://www.phpmyadmin.net/):
- Open the dump with an SQLite viewer, e.g. https://sqlitebrowser.org/dl/
- Export the desired tables as `.csv` files with the viewer (or with the `sqlite3` command line, see below)
- Import them using phpMyAdmin
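
If you prefer the command line over a GUI viewer, the `sqlite3` shell can export a table to CSV directly; the table name below is a placeholder for one of the tables in your dump.

```
# "bundle_table" is a placeholder; repeat for every table you want to import.
sqlite3 -header -csv dump.sqlite "SELECT * FROM bundle_table;" > bundle_table.csv
```
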
6. Export the Pathbuilder of the new system:
- make sure you temporarily convert fields that store URIs to a plaintext field, e.g. Image
- make sure all paths in the pathbuilder have a field assigned

7. Use [makeodbc](https://github.com/fau-CDI/drincw#makeodbc---generating-an-odbc) to generate a selector file for the new pathbuilder:

```
./makeodbc -dump-selectors path_to_new_pb.xml > selectors.json
```
8. Modify the selector file in a text editor of your choice:
- generally the entries of the selector file look like this:

```json
"bundle_machine_name": {
  // name of the corresponding DB table
  "table": "table_name",
  // defines the import order;
  // if not specified it defaults to 0,
  // tables with lower numbers are imported first
  "order": "0",
  // currently does nothing
  "id": "uri",
  "distinct": true,
  // paths of the bundle
  "fields": {
    // use `column` to reference a column in "table"
    "field_machine_name": "`column` column_name",
    // use `join` to reference columns from other tables, where myKey
    // is the column name of the key of "table" and theirKey is
    // the column name of the key of the "external_table";
    // usually these are from sub-bundles
    "other_field_machine_name": "`join` external_column_name `from` external_table on myKey theirKey"
  }
}
```
- create a mapping that maps the table columns to the correct bundles and field names of the new PB (see the example below)
- (This is a guess, could be wrong!!) You may have to combine the field sections of bundles in order to retain the correct Entity References. <br/>
E.g.: Bundle1 references Bundle2 via an explicit Entity Reference (not a Sub-Bundle!).
Then you would have to cut and paste the fields from Bundle2 into Bundle1 and treat them as a Sub-Bundle.
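
To make the selector syntax more concrete, here is a small, entirely hypothetical mapping: a bundle `bundle_object` is read from a table `objects`, one field comes from a column of that table, and another is joined in from a second table `persons`. All bundle, field, table and column names are placeholders for the machine names of your new PB and the tables of your dump.

```json
"bundle_object": {
  "table": "objects",
  "order": "0",
  "id": "uri",
  "distinct": true,
  "fields": {
    // plain column of the "objects" table
    "field_title": "`column` title",
    // joined in from the "persons" table via the key columns
    "field_creator": "`join` name `from` persons on creator_id id"
  }
}
```
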
9. Use [makeodbc](https://github.com/fau-CDI/drincw#makeodbc---generating-an-odbc) to generate an ODBC import file:

```
./makeodbc -load-selectors selectors.json path_to_new_pb.xml > import.xml
```
10. You may need to adjust the import order depending on the structure of your data. This can be necessary for highly interlinked data, e.g. classification trees, where it is advisable to import the data starting from the root. You can do this by adding an `ORDER BY` clause to the relevant table's `<append>` section in the import file:

```
<append>... ORDER BY `table_name`.`column_name`</append>
```
11. Sometimes several fields should be combined into a single new field. This can be done with the `CONCAT` and `IFNULL` functions:

```
CONCAT(IFNULL(bundle__alte_maske_1.field__alte_maske_63, ""), IFNULL(bundle__alte_maske_1.field__alte_maske_44, "")) as objektgeschichte
```
12. Import the data using the created `import.xml` file:
- Enable the `WissKI ODBC Import` module by navigating to the `Manage` → `Extend` section of the WissKI.
- After enabling the module, navigate to `Configuration` → `WissKI ODBC Import`.
- Click `File upload` and select the `import.xml` file.
- Click `Save` at the bottom of the page and pray everything works.