Another example of an oracle data file being encrypted and restored

A customer's Oracle database file was encrypted with the suffix .z33m8rvi; the content of the ransom note txt file is as follows:
[screenshot]


The encrypted data file is:
[screenshot]


By analyzing the file, we judged that in this encryption scenario all of the data in the data file could be recovered. After the file was recovered, the database opened directly and the data was exported normally.
[screenshot]


We have extensive experience recovering similarly encrypted data files. If you run into this kind of encryption and need it resolved, you can contact us for support: E-Mail: chf.dba@gmail.com

.lockbit encrypted database recovery

In this case, an Oracle database was encrypted with the suffix .lockbit
[screenshot]


Restore-My-Files.txt file content:
[screenshot]


Analysis of the encrypted files confirmed that in each file only part of the data blocks had been destroyed.
[screenshot]


Through low-level analysis and recovery, the undamaged blocks in each file can be recovered, minimizing the customer's loss from the encryption.
[screenshot]


We have extensive experience recovering similarly encrypted data files. If you run into this kind of encryption and need it resolved, you can contact us for support: E-Mail: chf.dba@gmail.com

[squadhack@email.tg].Devos Recovery

Recently, a friend reported that a hospital's database files had been encrypted as: .id[FC1CFDC9-2700].[squadhack@email.tg].Devos
[screenshot]


The info.txt note the attackers left reads:

!!!All of your files are encrypted!!!
To decrypt them send e-mail to this address: squadhack@email.tg

Analysis of the damage to the encrypted files
[screenshot]


Not much data was destroyed by the encryption, but because the business is special and holds a lot of XML-type data, the best approach was to open the database and then export the data, analyzing the damaged parts at the low level. The database was opened successfully and the loss was minimized (individually damaged tables were processed separately by rowid/pk).
[screenshot]
[screenshot]


We have extensive experience recovering similarly encrypted data files. If you run into this kind of encryption and need it resolved, you can contact us for support: E-Mail: chf.dba@gmail.com

mysql ibd file encrypted and recovered

Another case: a MySQL ibd file was encrypted by ransomware and we received a recovery request.
[screenshot]


Analysis showed that only part of the file was encrypted. In theory, the undamaged portion of the data can be recovered through low-level analysis.
[screenshot]


Data recovered through processing:
[screenshot]


This once again proves that we can also achieve a good recovery when MySQL files are encrypted by ransomware.
If the innodb_file_per_table parameter is false (the default before version 5.6), recovery must go through the ibdata file. If innodb_file_per_table is true, each individual ibd file must be recovered separately. If your database (Oracle, MySQL, SQL Server) has unfortunately been encrypted by Bitcoin ransomware, you can contact us at chf.dba@gmail.com for professional decryption and recovery services.

oracle asm drop pdb recovery

Recently we analyzed and developed a new recovery method for data files lost from ASM disks, using the COD (Continuing Operations Directory) and ACD (Active Change Directory). Changes to files in an ASM disk group (creation, deletion, extension, shrinking, etc.) leave traces in the COD and ACD, much as data changes inside an Oracle database are recorded in redo and undo. By analyzing them, you can determine how a file's extents were distributed in the disk group and thereby recover the data file. To test this, we create a tablespace, insert data, drop the tablespace (deleting its files at the same time), and then analyze the relevant COD and ACD records to recover the data files.
Create tablespace

SQL> create tablespace xifenfei datafile '+data' size 1G;

Tablespace created.

SQL> alter tablespace xifenfei add datafile '+data' size 128M autoextend on;

Tablespace altered.

Create mock table and insert data

SQL> create table t_xifenfei tablespace xifenfei as
  2  select * from dba_objects;

Table created.

SQL> insert into t_xifenfei select * from t_xifenfei;

73013 rows created.

…………

SQL> insert into t_xifenfei select * from t_xifenfei;

18691328 rows created.

SQL> COMMIT;

Commit complete.

SQL> SELECT COUNT(1) FROM T_XIFENFEI;

  COUNT(1)
----------
  37382656

SQL> alter system checkpoint;

System altered.

SQL> select bytes/1024/1024/1024,TABLESPACE_NAME FROM USER_SEGMENTS  where segment_name='T_XIFENFEI';

BYTES/1024/1024/1024 TABLESPACE_NAME
-------------------- ------------------------------
          5.56738281 XIFENFEI

drop tablespace

SQL> drop tablespace xifenfei including contents and datafiles;

Tablespace dropped.

View alert log information

2020-04-23T18:23:43.088997-04:00
drop tablespace xifenfei including contents and datafiles
2020-04-23T18:23:46.226654-04:00
Deleted Oracle managed file +DATA/ORA18C/DATAFILE/xifenfei.262.1035571131
Deleted Oracle managed file +DATA/ORA18C/DATAFILE/xifenfei.263.1038507123
Completed: drop tablespace xifenfei including contents and datafiles

Here we can see that the ASM file numbers of the two deleted data files are 262 and 263. To restore the tablespace data, the data files must be recovered first. Since the files have been deleted, we need to work out how their data was distributed inside ASM; if that extent mapping relationship can be found, the files can be restored. First, try to find the file's extent mapping directly.
Try reading the extent map directly

[root@rac18c2 ~]#  kfed read /dev/xifenfei-sdb  aus=4194304 blkn=0|grep f1b1locn
kfdhdb.f1b1locn:                     10 ; 0x0d4: 0x0000000a
[root@rac18c2 ~]#  kfed read /dev/xifenfei-sdb  aus=4194304 aun=10 blkn=262|more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            4 ; 0x002: KFBTYP_FILEDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                     262 ; 0x004: blk=262
kfbh.block.obj:                       1 ; 0x008: file=1
kfbh.check:                  4132734069 ; 0x00c: 0xf6548475
kfbh.fcn.base:                     6741 ; 0x010: 0x00001a55
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfffdb.node.incarn:          1035571132 ; 0x000: A=0 NUMM=0x1edcc7de
kfffdb.node.frlist.number:          264 ; 0x004: 0x00000108
kfffdb.node.frlist.incarn:            0 ; 0x008: A=0 NUMM=0x0
kfffdb.hibytes:                       0 ; 0x00c: 0x00000000
kfffdb.lobytes:              1073750016 ; 0x010: 0x40002000
kfffdb.xtntcnt:                       0 ; 0x014: 0x00000000
kfffdb.xtnteof:                     257 ; 0x018: 0x00000101
kfffdb.blkSize:                    8192 ; 0x01c: 0x00002000
kfffdb.flags:                         1 ; 0x020: O=1 S=0 S=0 D=0 C=0 I=0 R=0 A=0
kfffdb.fileType:                      2 ; 0x021: 0x02
…………
kfffdb.mxshad:                        0 ; 0x498: 0x0000
kfffdb.mxprnt:                        0 ; 0x49a: 0x0000
kfffdb.fmtBlks:                  131073 ; 0x49c: 0x00020001
kfffde[0].xptr.au:           4294967295 ; 0x4a0: 0xffffffff
kfffde[0].xptr.disk:              65535 ; 0x4a4: 0xffff
kfffde[0].xptr.flags:                 0 ; 0x4a6: L=0 E=0 D=0 S=0 R=0 I=0
kfffde[0].xptr.chk:                  42 ; 0x4a7: 0x2a
kfffde[1].xptr.au:           4294967295 ; 0x4a8: 0xffffffff
kfffde[1].xptr.disk:              65535 ; 0x4ac: 0xffff
kfffde[1].xptr.flags:                 0 ; 0x4ae: L=0 E=0 D=0 S=0 R=0 I=0
kfffde[1].xptr.chk:                  42 ; 0x4af: 0x2a

The kfffdb.blkSize of 8192 suggests this was probably a data file, but the au and disk fields in kfffde are all set to 0xf (0xffffffff / 0xffff), indicating that the file's direct extent mapping table has been cleared, to say nothing of the indirect extent allocation mapping table. In other words, this path is a dead end. Changing tack: since the extent mapping of a file deleted from ASM is cleared, can the relevant data be found through the corresponding ACD records? Analyzing the ACD, we found records around the time of the drop, but they show that the direct and indirect extent allocations were also zeroed out, so the file cannot be recovered along these lines either.
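For reference, the byte offset that kfed reads like the ones above actually target can be computed from the AU number and block number. This is a sketch assuming the 4 MB AU size passed as aus=4194304 and the standard 4 KB ASM metadata block size:

```python
# Sketch: byte offset inside an ASM disk for a metadata block addressed by
# (allocation unit number, block number), as in the kfed reads above.
# au_size must match the disk group (4 MB here); ASM metadata blocks are 4 KB.
AU_SIZE = 4 * 1024 * 1024      # aus=4194304 in the kfed commands
METADATA_BLOCK = 4096

def kfed_offset(aun, blkn, au_size=AU_SIZE):
    return aun * au_size + blkn * METADATA_BLOCK

# The file directory (f1b1locn said file 1 starts at AU 10) was read with
# aun=10 blkn=262, i.e. at this byte offset on /dev/xifenfei-sdb:
offset = kfed_offset(10, 262)
```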
Try acd to restore extent map

kfracdb2.lge[2].bcd[2].kfbl.blk:    262 ; 0x1cc: blk=262
kfracdb2.lge[2].bcd[2].kfbl.obj:      1 ; 0x1d0: file=1
kfracdb2.lge[2].bcd[2].kfcn.base:  6216 ; 0x1d4: 0x00001848
kfracdb2.lge[2].bcd[2].kfcn.wrap:     0 ; 0x1d8: 0x00000000
kfracdb2.lge[2].bcd[2].oplen:        20 ; 0x1dc: 0x0014
kfracdb2.lge[2].bcd[2].blkIndex:    262 ; 0x1de: 0x0106
kfracdb2.lge[2].bcd[2].flags:        28 ; 0x1e0: F=0 N=0 F=1 L=1 V=1 A=0 C=0 R=0
kfracdb2.lge[2].bcd[2].opcode:      162 ; 0x1e2: 0x00a2
kfracdb2.lge[2].bcd[2].kfbtyp:        4 ; 0x1e4: KFBTYP_FILEDIR
kfracdb2.lge[2].bcd[2].redund:       17 ; 0x1e5: SCHE=0x1 NUMB=0x1
kfracdb2.lge[2].bcd[2].pad:       63903 ; 0x1e6: 0xf99f
kfracdb2.lge[2].bcd[2].KFFFD_PEXT.xtntcnt:0 ; 0x1e8: 0x00000000
kfracdb2.lge[2].bcd[2].KFFFD_PEXT.xtntblk:0 ; 0x1ec: 0x0000
kfracdb2.lge[2].bcd[2].KFFFD_PEXT.xnum:0 ; 0x1ee: 0x0000
kfracdb2.lge[2].bcd[2].KFFFD_PEXT.xcnt:1 ; 0x1f0: 0x0001
kfracdb2.lge[2].bcd[2].KFFFD_PEXT.setflg:0 ; 0x1f2: 0x00
kfracdb2.lge[2].bcd[2].KFFFD_PEXT.flags:0 ; 0x1f3: O=0 S=0 S=0 D=0 C=0 I=0 R=0 A=0
kfracdb2.lge[2].bcd[2].KFFFD_PEXT.xptr[0].au:4294967292 ; 0x1f4: 0xfffffffc
kfracdb2.lge[2].bcd[2].KFFFD_PEXT.xptr[0].disk:0 ; 0x1f8: 0x0000
kfracdb2.lge[2].bcd[2].KFFFD_PEXT.xptr[0].flags:0 ; 0x1fa: L=0 E=0 D=0 S=0 R=0 I=0
kfracdb2.lge[2].bcd[2].KFFFD_PEXT.xptr[0].chk:41 ; 0x1fb: 0x29
kfracdb2.lge[2].bcd[2].au[0]:        10 ; 0x1fc: 0x0000000a
kfracdb2.lge[2].bcd[2].disks[0]:      0 ; 0x200: 0x0000


kfracdb2.lge[20].bcd[1].kfbl.blk:2147483648 ; 0xe54: blk=0 (indirect)
kfracdb2.lge[20].bcd[1].kfbl.obj:   262 ; 0xe58: file=262
kfracdb2.lge[20].bcd[1].kfcn.base: 3280 ; 0xe5c: 0x00000cd0
kfracdb2.lge[20].bcd[1].kfcn.wrap:    0 ; 0xe60: 0x00000000
kfracdb2.lge[20].bcd[1].oplen:       16 ; 0xe64: 0x0010
kfracdb2.lge[20].bcd[1].blkIndex:     0 ; 0xe66: 0x0000
kfracdb2.lge[20].bcd[1].flags:       28 ; 0xe68: F=0 N=0 F=1 L=1 V=1 A=0 C=0 R=0
kfracdb2.lge[20].bcd[1].opcode:     163 ; 0xe6a: 0x00a3
kfracdb2.lge[20].bcd[1].kfbtyp:      12 ; 0xe6c: KFBTYP_INDIRECT
kfracdb2.lge[20].bcd[1].redund:      17 ; 0xe6d: SCHE=0x1 NUMB=0x1
kfracdb2.lge[20].bcd[1].pad:      63903 ; 0xe6e: 0xf99f
kfracdb2.lge[20].bcd[1].KFFIX_PEXT.xtntblk:0 ; 0xe70: 0x0000
kfracdb2.lge[20].bcd[1].KFFIX_PEXT.xnum:0 ; 0xe72: 0x0000
kfracdb2.lge[20].bcd[1].KFFIX_PEXT.xcnt:1 ; 0xe74: 0x0001
kfracdb2.lge[20].bcd[1].KFFIX_PEXT.ub2spare:0 ; 0xe76: 0x0000
kfracdb2.lge[20].bcd[1].KFFIX_PEXT.xptr[0].au:4294967292 ; 0xe78: 0xfffffffc
kfracdb2.lge[20].bcd[1].KFFIX_PEXT.xptr[0].disk:0 ; 0xe7c: 0x0000
kfracdb2.lge[20].bcd[1].KFFIX_PEXT.xptr[0].flags:0 ; 0xe7e: L=0 E=0 D=0 S=0 R=0 I=0
kfracdb2.lge[20].bcd[1].KFFIX_PEXT.xptr[0].chk:41 ; 0xe7f: 0x29
kfracdb2.lge[20].bcd[1].au[0]:      296 ; 0xe80: 0x00000128
kfracdb2.lge[20].bcd[1].disks[0]:     0 ; 0xe84: 0x0000

So it is also not feasible to find the data file's extent allocation directly from the ACD records written when the file was deleted. However, by analyzing further ACD blocks, I finally found the records of the original extent allocation:

kfracdb2.lge[21].bcd[0].kfbl.blk:     2 ; 0x918: blk=2
kfracdb2.lge[21].bcd[0].kfbl.obj:2147483648 ; 0x91c: disk=0
kfracdb2.lge[21].bcd[0].kfcn.base: 2820 ; 0x920: 0x00000b04
kfracdb2.lge[21].bcd[0].kfcn.wrap:    0 ; 0x924: 0x00000000
kfracdb2.lge[21].bcd[0].oplen:       28 ; 0x928: 0x001c
kfracdb2.lge[21].bcd[0].blkIndex:     2 ; 0x92a: 0x0002
kfracdb2.lge[21].bcd[0].flags:       28 ; 0x92c: F=0 N=0 F=1 L=1 V=1 A=0 C=0 R=0
kfracdb2.lge[21].bcd[0].opcode:      73 ; 0x92e: 0x0049
kfracdb2.lge[21].bcd[0].kfbtyp:       3 ; 0x930: KFBTYP_ALLOCTBL
kfracdb2.lge[21].bcd[0].redund:      18 ; 0x931: SCHE=0x1 NUMB=0x2
kfracdb2.lge[21].bcd[0].pad:      63903 ; 0x932: 0xf99f
kfracdb2.lge[21].bcd[0].KFDAT_ALLOC2.vaa.curidx:2416 ; 0x934: 0x0970
kfracdb2.lge[21].bcd[0].KFDAT_ALLOC2.vaa.nxtidx:8 ; 0x936: 0x0008
kfracdb2.lge[21].bcd[0].KFDAT_ALLOC2.vaa.prvidx:8 ; 0x938: 0x0008
kfracdb2.lge[21].bcd[0].KFDAT_ALLOC2.vaa.asz:0 ; 0x93a: KFDASZ_1X
kfracdb2.lge[21].bcd[0].KFDAT_ALLOC2.vaa.frag:0 ; 0x93b: 0x00
kfracdb2.lge[21].bcd[0].KFDAT_ALLOC2.vaa.total:0 ; 0x93c: 0x0000
kfracdb2.lge[21].bcd[0].KFDAT_ALLOC2.vaa.free:0 ; 0x93e: 0x0000
kfracdb2.lge[21].bcd[0].KFDAT_ALLOC2.vaa.fnum:262 ; 0x940: 0x00000106
kfracdb2.lge[21].bcd[0].KFDAT_ALLOC2.vaa.xnum:0 ; 0x944: 0x00000000
kfracdb2.lge[21].bcd[0].KFDAT_ALLOC2.vaa.flags:8388608 ; 0x948: 0x00800000
kfracdb2.lge[21].bcd[0].KFDAT_ALLOC2.lxnum:3 ; 0x94c: 0x03
kfracdb2.lge[21].bcd[0].KFDAT_ALLOC2.spare1:0 ; 0x94d: 0x00
kfracdb2.lge[21].bcd[0].KFDAT_ALLOC2.spare2:0 ; 0x94e: 0x0000
kfracdb2.lge[21].bcd[0].au[0]:        0 ; 0x950: 0x00000000
kfracdb2.lge[21].bcd[0].au[1]:       11 ; 0x954: 0x0000000b
kfracdb2.lge[21].bcd[0].disks[0]:     0 ; 0x958: 0x0000
kfracdb2.lge[21].bcd[0].disks[1]:     0 ; 0x95a: 0x0000
kfracdb2.lge[21].bcd[1].kfbl.blk:   262 ; 0x95c: blk=262
kfracdb2.lge[21].bcd[1].kfbl.obj:     1 ; 0x960: file=1
kfracdb2.lge[21].bcd[1].kfcn.base: 3018 ; 0x964: 0x00000bca
kfracdb2.lge[21].bcd[1].kfcn.wrap:    0 ; 0x968: 0x00000000
kfracdb2.lge[21].bcd[1].oplen:       20 ; 0x96c: 0x0014
kfracdb2.lge[21].bcd[1].blkIndex:   262 ; 0x96e: 0x0106
kfracdb2.lge[21].bcd[1].flags:       28 ; 0x970: F=0 N=0 F=1 L=1 V=1 A=0 C=0 R=0
kfracdb2.lge[21].bcd[1].opcode:     162 ; 0x972: 0x00a2
kfracdb2.lge[21].bcd[1].kfbtyp:       4 ; 0x974: KFBTYP_FILEDIR
kfracdb2.lge[21].bcd[1].redund:      17 ; 0x975: SCHE=0x1 NUMB=0x1
kfracdb2.lge[21].bcd[1].pad:      63903 ; 0x976: 0xf99f
kfracdb2.lge[21].bcd[1].KFFFD_PEXT.xtntcnt:0 ; 0x978: 0x00000000
kfracdb2.lge[21].bcd[1].KFFFD_PEXT.xtntblk:0 ; 0x97c: 0x0000
kfracdb2.lge[21].bcd[1].KFFFD_PEXT.xnum:0 ; 0x97e: 0x0000
kfracdb2.lge[21].bcd[1].KFFFD_PEXT.xcnt:1 ; 0x980: 0x0001
kfracdb2.lge[21].bcd[1].KFFFD_PEXT.setflg:0 ; 0x982: 0x00
kfracdb2.lge[21].bcd[1].KFFFD_PEXT.flags:0 ; 0x983: O=0 S=0 S=0 D=0 C=0 I=0 R=0 A=0
kfracdb2.lge[21].bcd[1].KFFFD_PEXT.xptr[0].au:297 ; 0x984: 0x00000129
kfracdb2.lge[21].bcd[1].KFFFD_PEXT.xptr[0].disk:0 ; 0x988: 0x0000
kfracdb2.lge[21].bcd[1].KFFFD_PEXT.xptr[0].flags:0 ; 0x98a: L=0 E=0 D=0 S=0 R=0 I=0
kfracdb2.lge[21].bcd[1].KFFFD_PEXT.xptr[0].chk:2 ; 0x98b: 0x02
kfracdb2.lge[21].bcd[1].au[0]:       10 ; 0x98c: 0x0000000a
kfracdb2.lge[21].bcd[1].disks[0]:     0 ; 0x990: 0x0000
kfracdb2.lge[21].bcd[2].kfbl.blk:     2 ; 0x994: blk=2
kfracdb2.lge[21].bcd[2].kfbl.obj:     4 ; 0x998: file=4
kfracdb2.lge[21].bcd[2].kfcn.base: 3019 ; 0x99c: 0x00000bcb
kfracdb2.lge[21].bcd[2].kfcn.wrap:    0 ; 0x9a0: 0x00000000
kfracdb2.lge[21].bcd[2].oplen:        8 ; 0x9a4: 0x0008
kfracdb2.lge[21].bcd[2].blkIndex:     2 ; 0x9a6: 0x0002
kfracdb2.lge[21].bcd[2].flags:       28 ; 0x9a8: F=0 N=0 F=1 L=1 V=1 A=0 C=0 R=0
kfracdb2.lge[21].bcd[2].opcode:     211 ; 0x9aa: 0x00d3
kfracdb2.lge[21].bcd[2].kfbtyp:      16 ; 0x9ac: KFBTYP_COD_DATA
kfracdb2.lge[21].bcd[2].redund:      17 ; 0x9ad: SCHE=0x1 NUMB=0x1
kfracdb2.lge[21].bcd[2].pad:      63903 ; 0x9ae: 0xf99f
kfracdb2.lge[21].bcd[2].KFRCOD_DATA.offset:60 ; 0x9b0: 0x003c
kfracdb2.lge[21].bcd[2].KFRCOD_DATA.length:4 ; 0x9b2: 0x0004
kfracdb2.lge[21].bcd[2].KFRCOD_DATA.data[0]:1 ; 0x9b4: 0x01
kfracdb2.lge[21].bcd[2].KFRCOD_DATA.data[1]:0 ; 0x9b5: 0x00
kfracdb2.lge[21].bcd[2].KFRCOD_DATA.data[2]:0 ; 0x9b6: 0x00
kfracdb2.lge[21].bcd[2].KFRCOD_DATA.data[3]:0 ; 0x9b7: 0x00
kfracdb2.lge[21].bcd[2].au[0]:       16 ; 0x9b8: 0x00000010
kfracdb2.lge[21].bcd[2].disks[0]:     0 ; 0x9bc: 0x0000

From records like this, all of the AU extent-allocation records for the file number are collected in a summarization pass, dd commands are generated from them, and the file is then reassembled:
[screenshot]
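The summarization step can be sketched as follows. This is a simplified illustration: the disk path and the single-extent list come from this case's output (extent 0 of file 262 was at disk 0, AU 297 per xptr[0].au above), a 4 MB AU is assumed, and a real reassembly must handle multiple disks and trim the final extent to the file size.

```python
# Sketch: turn an ordered extent list recovered from the ACD, one
# (disk_index, au_number) pair per 4 MB extent, into dd commands that
# reassemble the data file AU by AU.
AU_SIZE = 4 * 1024 * 1024

def dd_commands(extents, disk_paths, outfile):
    cmds = []
    for seq, (disk, au) in enumerate(extents):
        cmds.append(
            "dd if=%s of=%s bs=%d count=1 skip=%d seek=%d conv=notrunc"
            % (disk_paths[disk], outfile, AU_SIZE, au, seq)
        )
    return cmds

cmds = dd_commands([(0, 297)], ["/dev/xifenfei-sdb"], "262.dbf")
```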


dbv check of the recovered files

[oracle@rac18c2 tmp]$ dbv file=262.dbf

DBVERIFY: Release 18.0.0.0.0 - Production on Tue Apr 28 09:45:37 2020

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

DBVERIFY - Verification starting : FILE = /tmp/262.dbf


DBVERIFY - Verification complete

Total Pages Examined         : 131072
Total Pages Processed (Data) : 123400
Total Pages Failing   (Data) : 0
Total Pages Processed (Index): 0
Total Pages Failing   (Index): 0
Total Pages Processed (Other): 631
Total Pages Processed (Seg)  : 0
Total Pages Failing   (Seg)  : 0
Total Pages Empty            : 7041
Total Pages Marked Corrupt   : 0
Total Pages Influx           : 0
Total Pages Encrypted        : 0
Highest block SCN            : 10001146011 (2.1411211419)

[oracle@rac18c2 tmp]$ dbv file=263.dbf

DBVERIFY: Release 18.0.0.0.0 - Production on Tue Apr 28 09:51:05 2020

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

DBVERIFY - Verification starting : FILE = /tmp/263.dbf



DBVERIFY - Verification complete

Total Pages Examined         : 643584
Total Pages Processed (Data) : 595146
Total Pages Failing   (Data) : 0
Total Pages Processed (Index): 0
Total Pages Failing   (Index): 0
Total Pages Processed (Other): 821
Total Pages Processed (Seg)  : 0
Total Pages Failing   (Seg)  : 0
Total Pages Empty            : 36865
Total Pages Marked Corrupt   : 0
Total Pages Influx           : 0
Total Pages Encrypted        : 0
Highest block SCN            : 10001153042 (2.1411218450)
[oracle@rac18c2 tmp]$ 

dul confirms the data recovered in the file

DUL>  scan database
start scan database in parallel 1...
scan database completed.
DUL> sample all segment 
start get segment info: data_obj#: 74635
finish get segment info: data_obj#: 74635
guess col def: 22
write segment info: 74635, 1, 8, 22
write sample rows: 10000
DUL> unload object 74635
 2020-04-24 22:32:11 unloading table segment 74635...
 2020-04-24 22:35:36 unloaded 37382656 rows.
DUL>

Comparing the dbv block counts with the actual data, the data recovered this way is completely normal: data from files deleted in ASM can be recovered without resorting to low-level fragment scanning. In some special cases this method, combined with low-level fragment recovery, can achieve an even better result. It also works very well for the fairly typical case of a dropped Oracle PDB (which cannot be recovered directly by low-level fragment scanning, because multiple data files share the same file numbers).
Similar article reference:
asm disk header complete damage recovery
ASM does not start normally, use dd to retrieve the data file
Asm disk group operation leads to data file loss recovery
If you have unfortunately had ASM data files deleted or lost, or accidentally dropped a PDB or similar, and need recovery, you can contact us: E-Mail: chf.dba@gmail.com

sql server drop table recovery

A customer contacted us. Due to a misoperation, a drop table statement that should have been executed on the standby database was run against the production database instead, and the production table was dropped.
[screenshot]


The customer provided the relevant data files for analysis and recovery; because no other operations were performed after the table was dropped, the recovery result was very good.
[screenshot]


[screenshot]


If a SQL Server database suffers a misoperation (drop, truncate, delete of a table, etc.), write activity should be stopped as soon as possible: for example, copy the data files offline and halt all database operations, so that further writes do not overwrite the deleted data and degrade the recovery. If you need recovery, you can contact us: chf.dba@gmail.com

heronpiston@pm.me encrypted recovery

A friend contacted us: an Oracle dmp file had been encrypted with the suffix .id[CA4D3CB9-2696].[heronpiston@pm.me].HER

f:\temp>dir *.HER
 Volume in drive F is Local Disk
 Volume Serial Number is 928E-A028

 Directory of f:\temp

2020-03-18  03:55     1,992,802,578 HX_2020-03-16.DMP.id[CA4D3CB9-2696].[heronpiston@pm.me].HER
2020-03-18  03:55            51,314 hx_2020-03-16.log.id[CA4D3CB9-2696].[heronpiston@pm.me].HER
               2 File(s)  1,992,853,892 bytes
               0 Dir(s)  376,749,625,344 bytes free

Analysis found that the first 0.25 MB of the file had been blanked out.
[screenshot]


In cases like this, a fairly good database-level recovery can be attempted.
[screenshot]


For Oracle, SQL Server, MySQL and other databases encrypted in this way, there is no need to contact the hacker; we can provide database-level recovery support.

Rezcrypt@cock.li Encrypted Database Recovery

Some friends reported that their system was encrypted with a similar suffix: .Email=[Rezcrypt@cock.li]ID=[EOPw1aiueARdMy4].KRONOS

f:\temp>dir xifenfei_DB*
 Volume in drive F is Local Disk
 Volume Serial Number is 928E-A028

 Directory of f:\temp

2020-03-04  10:37     4,312,465,408 xifenfei_DB.ldf.Email=[Rezcrypt@cock.li]ID=[EOPw1aiueARdMy4].KRONOS
2020-03-04  10:41     5,445,189,632 xifenfei_DB.mdf.Email=[Rezcrypt@cock.li]ID=[EOPw1aiueARdMy4].KRONOS
               2 File(s)  9,757,655,040 bytes
               0 Dir(s)  376,749,887,488 bytes free

Analysis determined that the file is partially encrypted
[screenshot]


Through low-level processing, the data outside the damaged blocks was recovered perfectly.
[screenshot]


For this kind of file encryption, we can in principle achieve a good recovery at the database level (mainly Oracle and SQL Server).

expdp dmp damaged by encryption and recovered

A friend's Oracle database dmp backup was encrypted, with the suffix .DMP.voyager. Analysis found that about 2 MB of the file had been encrypted.
[screenshot]


It can be seen here that the dmp file was exported with expdp (expdp essentially stores its metadata in XML form, while exp stores data directly in binary form), so the recoverable tables can be analyzed with tools.
Analyze the dmp file with a tool

CPFL> OPEN F:\BaiduNetdisk\KINGDEE85GH_2020-03-17.DMP.voyager
TABLE_NAME START_POS DATA_BYTE
-------------------------------------------------- --------------- ---------------
KINGDEE85GH.T_WFD_PROCESSDEF 116300288 648396035
KINGDEE85GH.T_DYN_DYNAMICCONFIGURE 864710656 181453794
KINGDEE85GH.T_RPTS_STORAGEFILEDATA 1078767616 21548951
KINGDEE85GH.T_BOT_RULESEGMENT 1100324864 10372516
KINGDEE85GH.T_LOG_APP 1110712320 12603573
KINGDEE85GH.T_PM_PERMITEM 1123336192 7282412
KINGDEE85GH.T_PM_USERORGPERM 1130635264 6692320
KINGDEE85GH.T_DYN_APPSOLUTION 1137336320 801697
KINGDEE85GH.T_PM_MAINMENUITEM 1138155520 3573943
KINGDEE85GH.T_PM_PERMUIGROUP 1141751808 2159245
KINGDEE85GH.T_SYS_ENTITYREF 1143922688 4183869
KINGDEE85GH.T_PM_ROLEPERM 1148116992 2758960
KINGDEE85GH.T_BAS_SYSMENUITEM 1150885888 3304627
KINGDEE85GH.T_JP_PAGE 1154211840 3019174
…………
KINGDEE85GH.T_XT_CHECKTIME 1212776448 41
KINGDEE85GH.T_XT_SYNCHTIME 1212784640 41
SYSTEM.SYS_EXPORT_SCHEMA_02 1212792832 215423380
-------------------------------------------------- --------------- ---------------
Scanned Find 895 segments.

From this you can basically determine that a little over 100 MB of data was lost; the rest can theoretically be recovered.
Create a user

SQL> create user KINGDEE85GH identified by oracle;

User created.

SQL> grant dba to KINGDEE85GH;

Grant succeeded.

unexpdp the data (automatically create the table and import its data)

CPFL> unexpdp table KINGDEE85GH.T_WFD_PROCESSDEF

unexpdp table: KINGDEE85GH.T_WFD_PROCESSDEF storage (START_POSITION:116300288 DATA_BYTE:748396035)
824 rows unexpdp

View recovery results
[screenshot]
[screenshot]


If your Oracle expdp dmp is encrypted or damaged and cannot be imported into the database normally, you can contact us to restore it: chf.dba@gmail.com
If your Oracle dmp was exported with exp, you can also contact us to process it; see:
exp dmp file corruption recovery
oracle dmp is encrypted and restored

Oracle 19c failure recovery

Some customers contacted us: their Oracle 19c database would not start normally after an abnormal power failure. After a series of recovery attempts the problem remained unsolved, and they asked us for support. Through our Oracle Database Recovery Check Script, we obtained the following information about the current database:
The database version is 19c, with the 19.5.0.0.191015 (30125133) patch installed
[screenshot]
[screenshot]


The database uses PDBs
[screenshot]


After the database starts successfully, it crashes again after a short while:

2020-03-10T01:44:41.018032+08:00
Pluggable database RACBAK opened read write
2020-03-10T01:44:41.018996+08:00
Pluggable database RAC opened read write
2020-03-10T01:44:51.244050+08:00
Completed: ALTER PLUGGABLE DATABASE ALL OPEN
Starting background process CJQ0
Completed: ALTER DATABASE OPEN
2020-03-10T01:44:51.317085+08:00
CJQ0 started with pid=224, OS id=32581 
2020-03-10T01:44:56.067043+08:00
Errors in file /opt/oracle/diag/rdbms/XFF/XFF/trace/XFF_j001_32588.trc  (incident=1095281) (PDBNAME=RAC):
ORA-00600: internal error code, arguments: [4193], [27733], [27754], [], [], [], [], [], [], [], [], []
RAC(4):Incident details in: /opt/oracle/diag/rdbms/XFF/XFF/incident/incdir_1095281/XFF_j001_32588_i1095281.trc
RAC(4):Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
2020-03-10T01:44:56.073112+08:00
RAC(4):*****************************************************************
RAC(4):An internal routine has requested a dump of selected redo.
RAC(4):This usually happens following a specific internal error, when
RAC(4):analysis of the redo logs will help Oracle Support with the
RAC(4):diagnosis.
RAC(4):It is recommended that you retain all the redo logs generated (by
RAC(4):all the instances) during the past 12 hours, in case additional
RAC(4):redo dumps are required to help with the diagnosis.
RAC(4):*****************************************************************
2020-03-10T01:44:56.079228+08:00
Errors in file /opt/oracle/diag/rdbms/XFF/XFF/trace/XFF_j002_32590.trc  (incident=1095289) (PDBNAME=RAC):
ORA-00600: internal error code, arguments: [4193], [2633], [2638], [], [], [], [], [], [], [], [], []
RAC(4):Incident details in: /opt/oracle/diag/rdbms/XFF/XFF/incident/incdir_1095289/XFF_j002_32590_i1095289.trc
RAC(4):Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
2020-03-10T01:44:56.085068+08:00
RAC(4):*****************************************************************
RAC(4):An internal routine has requested a dump of selected redo.
RAC(4):This usually happens following a specific internal error, when
RAC(4):analysis of the redo logs will help Oracle Support with the
RAC(4):diagnosis.
RAC(4):It is recommended that you retain all the redo logs generated (by
RAC(4):all the instances) during the past 12 hours, in case additional
RAC(4):redo dumps are required to help with the diagnosis.
RAC(4):*****************************************************************
2020-03-10T01:44:56.115765+08:00
Errors in file /opt/oracle/diag/rdbms/XFF/XFF/trace/XFF_j004_32594.trc  (incident=1095305) (PDBNAME=RAC):
ORA-00600: internal error code, arguments: [4193], [63532], [63537], [], [], [], [], [], [], [], [], []
RAC(4):Incident details in: /opt/oracle/diag/rdbms/XFF/XFF/incident/incdir_1095305/XFF_j004_32594_i1095305.trc
RAC(4):Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
2020-03-10T01:46:48.202213+08:00
RAC(4):Recovery of Online Redo Log: Thread 1 Group 2 Seq 2 Reading mem 0
RAC(4):  Mem# 0: /opt/oracle/oradata/XFF/redo02.log
RAC(4):Block recovery completed at rba 0.0.0, scn 0x0000000d3675e48e
RAC(4):DDE: Problem Key 'ORA 600 [4193]' was completely flood controlled (0x6)
Further messages for this problem key will be suppressed for up to 10 minutes
2020-03-10T01:46:48.384040+08:00
Errors in file /opt/oracle/diag/rdbms/XFF/XFF/trace/XFF_clmn_31741.trc:
ORA-00600: internal error code, arguments: [4193], [27733], [27754], [], [], [], [], [], [], [], [], []
Errors in file /opt/oracle/diag/rdbms/XFF/XFF/trace/XFF_clmn_31741.trc  (incident=1093505) (PDBNAME=CDB$ROOT):
ORA-501 [] [] [] [] [] [] [] [] [] [] [] []
Incident details in: /opt/oracle/diag/rdbms/XFF/XFF/incident/incdir_1093505/XFF_clmn_31741_i1093505.trc
2020-03-10T01:46:49.264624+08:00
USER (ospid: 31741): terminating the instance due to ORA error 501
2020-03-10T01:46:49.280664+08:00
System state dump requested by (instance=1, osid=31741 (CLMN)), summary=[abnormal instance termination].
System State dumped to trace file /opt/oracle/diag/rdbms/XFF/XFF/trace/XFF_diag_31759.trc
2020-03-10T01:46:53.156926+08:00
ORA-00501: CLMN process terminated with error
2020-03-10T01:46:53.157103+08:00
Errors in file /opt/oracle/diag/rdbms/XFF/XFF/trace/XFF_diag_31759.trc:
ORA-00501: CLMN process terminated with error
2020-03-10T01:46:53.157211+08:00
Dumping diagnostic data in directory=[cdmp_20200310014649], requested by (instance=1, osid=31741 (CLMN)), 
summary=[abnormal instance termination].

Judging from the error information, after the database opens (specifically, after PDB 4 opens), ORA-600 [4193] errors are reported, then the CLMN process fails and the whole database crashes. In this type of failure, because PDBs are in use and the PDB's undo is corrupted, the database crashes shortly after startup. Special handling of the affected PDB's undo can prevent the database from crashing after startup.
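As a sketch of the kind of special handling this involves, here is a pfile fragment along the lines of the classic workaround for ORA-600 [4193] undo corruption. It is illustrative only, not a prescription: the segment name below is hypothetical, the real corrupt segments must be identified from the incident trace files, and the exact steps depend on the actual corruption.

```
# Hedged sketch of a pfile workaround for ORA-600 [4193] undo corruption.
# The rollback segment name is a hypothetical placeholder; take the real
# names from the incident trace files before using anything like this.
undo_management = MANUAL
_corrupted_rollback_segments = ('_SYSSMU1_1234567890$')
```

With the instance started this way, the affected PDB's undo tablespace can be dropped and recreated, after which undo_management is set back to AUTO and the database restarted normally.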