Migrate your on-premises Database to Oracle Cloud Infrastructure Database System
Part 2: Data Pump
Author: Donatien MBADI OUM, Consultant
1. Introduction
The first part presented an overview and considerations for migrating an Oracle database to Oracle Cloud Infrastructure. Note that Oracle databases on Oracle Cloud are created only with the Multitenant architecture, and all databases are encrypted by default using Transparent Data Encryption (TDE). The keystore password is set to the value of the administration password that is specified when the database system is created from the cloud console. For more information, see here.
There are several methods to migrate to Oracle Cloud. In this part, we will put the following Data Pump methods into practice:
- Data Pump Conventional Export/Import
- Data Pump Full Transportable
- Data Pump Transportable Tablespace
- Data Pump Transportable Tablespace with RMAN CONVERT
Before migration, check that all the options installed on the source database, whether non-CDB or PDB, are available on the target cloud CDB; otherwise you will need to remove those options on the source database before migration.
2. What is Data Pump
Oracle Data Pump enables you to move data and metadata quickly from one database to another. It is a successor to the original export/import utilities, but it was built from scratch and has a completely different architecture.
Data Pump has been available since Oracle Database 10g, and the latest versions, such as 19c, are compatible with any earlier version back to 10g.
Data Pump supports migrating databases across platforms with different endian formats, to a different database architecture, or between different release versions of the database software.
Data Pump supports encryption of data before writing it to the dump file. It also supports parallelism to accelerate the export and import of large amounts of data.
Data Pump supports compression of the output file to save disk space, and it supports operations over the network without needing intermediate storage.
Data Pump supports remapping tables, schemas, and tablespaces during an import operation.
Data Pump supports filtering the objects that are exported and imported, based on object names and object types.
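As an illustrative sketch, several of these capabilities can be combined in a single export command; the connect string, schema name, and degree of parallelism below are assumptions, not taken from the labs in this article:

```shell
# Hypothetical expdp run combining parallelism, compression, and filtering.
# parallel=4 with the %U wildcard lets each worker write its own dump file;
# compression=all requires the Advanced Compression option.
expdp system@orcl \
  directory=dpdump \
  schemas=hr \
  exclude=statistics \
  parallel=4 \
  compression=all \
  dumpfile=exp_hr_%U.dmp \
  logfile=exp_hr.log
```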
3. Data Pump migration in practice
3.1. Conventional Data Pump Export/Import
In this lab, we will migrate an 11g non-CDB database on an on-premises machine to an OCI pluggable database running on 19c. The following steps can be applied to a non-CDB or PDB on 12c or later.
The workflow that we will follow is as below:
Database migration workflow using conventional Data Pump Export/Import
Note: If you are migrating to Oracle Autonomous Database, then you would copy the dump files to OCI Object Storage instead of the local database machine. You can simply use the web-based cloud console or the OCI command line to put the dump files on Object Storage.
Prerequisites:
- The OCI Database System has already been created
- The connection from on-premises to the OCI node is configured using the public key
- The security list is correctly configured to allow SSH and SQL*Net connectivity
For more information see here
3.1.1. Make directory on the on-premises machine
$ . 11g_env
$ mkdir -p /home/oracle/dumps
SQL> create directory dpdump as '/home/oracle/dumps';
3.1.2. Perform export from on-premises
$ expdp system/oracle@onprem:1521/db11g directory=dpdump schemas=hr flashback_time=systimestamp dumpfile=exp_hr.dmp logfile=exp_hr.log
3.1.3. Make directory on the OCI DB System node
$ mkdir -p /u01/app/oracle/dumps
3.1.4. scp the dump files from on-premises to the OCI DB System node
$ scp -i /home/oracle/.ssh/id_rsa exp_hr.dmp oracle@db_system_node_ip:/u01/app/oracle/dumps/
The DB System node IP address can be found through the Cloud console.
3.1.5. Create a directory on OCI
SQL> create directory dpdump as '/u01/app/oracle/dumps';
3.1.6. Create tablespace on OCI (optional)
SQL> create tablespace datatbs datafile '+DATA' size 256M autoextend on;
3.1.7. Run impdp
§ Checking the host and service name registered on the listener
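The original does not show the command used for this check; a typical way to do it (an assumption) is to query the local listener on the DB System node:

```shell
# List the services registered with the listener on the OCI DB System node;
# the PDB service name should appear in the output
lsnrctl status
lsnrctl services
```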
§ Perform impdp
$ impdp system/****************@10.0.0.109:1521/pdb1.sub10061528440.vcntestmig.oraclevcn.com directory=dpdump dumpfile=exp_hr.dmp schemas=HR remap_schema=HR:HR11GMIG remap_tablespace=EXAMPLE:DATATBS logfile=imp_hr.log
…
…
Note: It’s possible to have some errors that can be fixed after
the migration.
3.1.8. Testing
Now we can run some tests:
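For example, a quick sanity check on the imported schema could look like this (a sketch; the HR11GMIG name comes from the remap_schema clause used above, and the actual counts depend on your data):

```sql
-- Confirm the remapped schema exists and its objects were imported
select username, default_tablespace from dba_users where username = 'HR11GMIG';
select object_type, count(*) from dba_objects
where owner = 'HR11GMIG' group by object_type;
select count(*) from hr11gmig.employees;
```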
3.2. Data Pump Transportable Tablespace
The transportable tablespace method is generally much faster than a conventional export/import of the same data, because the data files containing all of the actual data are simply copied to the destination location. You use Data Pump only to transfer the metadata of the tablespace objects to the new database. You can only use this method if the on-premises platform is little endian and the database character sets of your on-premises database and Oracle Database Service database are compatible.
In this lab, we will migrate a 12c PDB on an on-premises machine to an OCI database running on 19c. The following steps can be applied to a non-CDB or PDB on 12c or later.
The workflow that we will follow is as below:
Database migration workflow using Data Pump Transportable Tablespace
This method becomes really complex because you have to transport one tablespace at a time, and it requires a complicated set of steps to move the user and application metadata needed to effectively use the tables based on the data files in the destination database. That is why Oracle came up with the Full Transportable strategy.
3.3. Data Pump Full Transportable
Oracle combined speed and usability in the form of the Full Transportable Export/Import feature. This method has the ability to move the metadata over a database link and to accomplish a full database migration with a single import command.
The workflow that we will follow is as below:
Database migration workflow using Data Pump Full Transportable
3.3.1. Check Data Pump directory (cleanup)
$ cd /home/oracle/dumps/
$ rm -f *
3.3.2. Let's create another two tablespaces on the 12c pluggable database
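The SQL for this step is not shown in the original; based on the datafile names copied to OCI later (datadf01.dbf and datadf02.dbf), it would look something like this (the tablespace names DATATBS1 and DATATBS2 are assumptions):

```sql
-- Assumed tablespace names; the datafiles match those transferred to OCI later
create tablespace datatbs1
  datafile '/opt/app/oracle/oradata/cdb12c/pdb12c_1/datadf01.dbf' size 100M;
create tablespace datatbs2
  datafile '/opt/app/oracle/oradata/cdb12c/pdb12c_1/datadf02.dbf' size 100M;
```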
3.3.3. Make tablespaces read only
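All user tablespaces being transported must be read only before the export; as a sketch (the tablespace names are assumptions inferred from the datafiles copied later):

```sql
-- Transported tablespaces must be read only during a transportable export
alter tablespace example21 read only;
alter tablespace datatbs1 read only;
alter tablespace datatbs2 read only;
```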
3.3.4. Perform expdp
$ expdp system/Oracle21@onprem:1525/PDB12C_1 full=y transportable=always directory=dpdump dumpfile=exp_full_tts.dmp logfile=exp_full_tts.log exclude=tablespace:"IN('USERS')"
…
…
3.3.5. Clean up the dump directory on OCI
3.3.6. scp the dump file and datafiles to the OCI node
scp -i /home/oracle/.ssh/id_rsa /home/oracle/dumps/exp_full_tts.dmp oracle@140.238.133.0:/u01/app/oracle/dumps/
scp -i /home/oracle/.ssh/id_rsa /opt/app/oracle/oradata/cdb12c/pdb12c_1/EXAMPLE21.DBF /opt/app/oracle/oradata/cdb12c/pdb12c_1/datadf01.dbf /opt/app/oracle/oradata/cdb12c/pdb12c_1/datadf02.dbf oracle@140.238.133.0:/u01/app/oracle/dumps/
o We are going to create another pluggable database, PDB2, using the OCI console
o Check that the new service is registered with the listener
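The command is not shown in the original; one way to check (an assumption) is from the OCI node:

```shell
# Confirm the PDB2 service is registered with the listener
lsnrctl status | grep -i pdb2
```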
3.3.7. Copy datafiles from filesystem to ASM on OCI (optional)
§ Checking the ASM datafile directory
§ Copy the datafiles to +DATA
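The original omits the commands for these two substeps; a sketch using ASMCMD (an assumption: the disk group path matches the values passed to transport_datafiles in the import below) is:

```shell
# Inspect the ASM datafile directory, then copy the datafiles into +DATA
asmcmd ls +DATA/DBTESTM_YYZ1VH/datafile/
asmcmd cp /u01/app/oracle/dumps/EXAMPLE21.DBF +DATA/DBTESTM_YYZ1VH/datafile/EXAMPLE21.DBF
asmcmd cp /u01/app/oracle/dumps/datadf01.dbf +DATA/DBTESTM_YYZ1VH/datafile/datadf01.dbf
asmcmd cp /u01/app/oracle/dumps/datadf02.dbf +DATA/DBTESTM_YYZ1VH/datafile/datadf02.dbf
```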
3.3.8. Create related users and grant privileges and roles on OCI
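The exact statements are not shown in the original; as a hypothetical example (the user name and password are placeholders), a schema owning transported objects could be created like this:

```sql
-- Placeholder schema name and password; adapt to the users in your source PDB
create user appowner identified by "Placeholder#123";
grant connect, resource to appowner;
```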
3.3.9. Run impdp on OCI
impdp system/Momentum-Tech_1234@10.0.0.109:1521/pdb2.sub10061528440.vcntestmig.oraclevcn.com directory=dpdump dumpfile=exp_full_tts.dmp full=y transport_datafiles='+data/DBTESTM_YYZ1VH/datafile/EXAMPLE21.DBF','+data/DBTESTM_YYZ1VH/datafile/datadf01.dbf','+data/DBTESTM_YYZ1VH/datafile/datadf02.dbf'
3.3.10. Login and check data
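For example (a sketch; object owners and counts depend on your data):

```sql
-- On PDB2: verify the transported tablespaces and datafiles are present
select tablespace_name, status from dba_tablespaces;
select file_name from dba_data_files;
select owner, count(*) from dba_objects
where owner not in ('SYS','SYSTEM') group by owner;
```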
3.4. RMAN CONVERT Transportable Tablespace with Data Pump
You can use this method only if the database character sets of your on-premises database and the database service database are compatible.
It’s similar to the Data Pump Transportable Tablespace method, with the addition of the RMAN CONVERT command to enable transport between platforms with different endianness.
The workflow that we will follow is as below:
Database migration workflow using RMAN CONVERT Transportable Tablespace
In this lab, we are going to migrate an Oracle database running on 19c as a pluggable database named PDB1 on a Windows machine to the OCI Database service pluggable database PDB3. All the data is owned by the schema OT in the OT_TBS tablespace.
o Data
o Checking Endianness
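Endianness can be checked on both the source and destination databases with V$TRANSPORTABLE_PLATFORM, for example:

```sql
-- List the endian format of every supported platform
select platform_id, platform_name, endian_format
from v$transportable_platform order by platform_id;
-- Identify the current database's platform
select platform_name from v$database;
```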
3.5. Create the Data Pump directory on the on-premises machine
SQL> create directory dpdump_dir as 'c:\dumps';
o Make tablespace READ ONLY on on-premises
SQL> alter tablespace ot_tbs read only;
3.6. Perform expdp
C:\> expdp system/oracle@localhost:1521/pdb1 directory=dpdump_dir TRANSPORT_TABLESPACES=OT_TBS TRANSPORT_FULL_CHECK=YES dumpfile=exp_ot_tts.dmp logfile=exp_ot_tts.log
3.7. Perform RMAN CONVERT
RMAN> CONVERT TABLESPACE OT_TBS TO PLATFORM 'Linux x86 64-bit' FORMAT 'c:\rman_df\%U';
3.8. scp the dump file and datafiles to OCI
scp -i '.\Users\Donatien MBADI\.ssh\id_rsa' C:\dumps\EXP_OT_TTS.DMP oracle@140.238.133.0:/u01/app/oracle/dumps/
scp -i '.\Users\Donatien MBADI\.ssh\id_rsa' C:\rman_df\DATA_D-CDB01_I-1643732499_TS-OT_TBS_FNO-29_011A0PI6 oracle@140.238.133.0:/u01/app/oracle/dumps/
o Copy the datafile to ASM on OCI (using the ASMCMD cp command)
ASMCMD> cp /u01/app/oracle/dumps/DATA_D-CDB01_I-1643732499_TS-OT_TBS_FNO-29_011A0PI6 +DATA/DBTESTM_YYZ1VH/EAC891AD5094F9EAE0536D00000A7FB9/DATAFILE/ot_df01.dbf
o Create a new PDB on OCI
SQL> create pluggable database PDB3
  admin user admin identified by Oracle12
  file_name_convert = ('+DATA','+DATA');
3.9. Create the user on the destination database service database
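The statements are not shown in the original; for a transportable tablespace import the owning schema must exist before the metadata import, for example (the password is a placeholder):

```sql
-- The OT schema must exist on PDB3 before importing the OT_TBS metadata
create user ot identified by "Placeholder#123";
grant connect, resource to ot;
```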
3.10. Run impdp with TRANSPORT_DATAFILES
impdp system/Momentum-Tech_1234@10.0.0.109:1521/pdb3.sub10061528440.vcntestmig.oraclevcn.com directory=dpdump_dir dumpfile=EXP_OT_TTS.DMP TRANSPORT_DATAFILES='+DATA/DBTESTM_YYZ1VH/EAC891AD5094F9EAE0536D00000A7FB9/DATAFILE/ot_df01.dbf'
3.11. Tests
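As a sketch (counts depend on your data), a final check could be:

```sql
-- On PDB3: confirm OT_TBS was plugged in, then make it read write again
select tablespace_name, status from dba_tablespaces
where tablespace_name = 'OT_TBS';
select object_type, count(*) from dba_objects
where owner = 'OT' group by object_type;
alter tablespace ot_tbs read write;
```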