Deleted My Files Accidentally

garuda-inxi:

System:
  Kernel: 6.5.9-zen2-1-zen arch: x86_64 bits: 64 compiler: gcc v: 13.2.1
    clocksource: tsc available: acpi_pm
    parameters: BOOT_IMAGE=/@/boot/vmlinuz-linux-zen
    root=UUID=5d412073-2912-4323-b384-c15f3f937b40 rw rootflags=subvol=@
    quiet quiet rd.udev.log_priority=3 vt.global_cursor_default=0 loglevel=3
    ibt=off
  Desktop: GNOME v: 45.0 tk: GTK v: 3.24.38 wm: gnome-shell dm: GDM
    v: 45.0.1 Distro: Garuda Linux base: Arch Linux
Machine:
  Type: Laptop System: HP product: HP Pavilion Gaming Laptop 15-dk2xxx
    v: Type1ProductConfigId serial: <superuser required> Chassis: type: 10
    serial: <superuser required>
  Mobo: HP model: 88E5 v: 76.25 serial: <superuser required> UEFI: Insyde
    v: F.03 date: 04/20/2021
Battery:
  ID-1: BAT1 charge: 41.7 Wh (100.0%) condition: 41.7/52.5 Wh (79.4%)
    volts: 12.8 min: 11.6 model: Hewlett-Packard PABAS0241231 type: Li-ion
    serial: <filter> status: full
CPU:
  Info: model: 11th Gen Intel Core i5-11300H bits: 64 type: MT MCP
    arch: Tiger Lake gen: core 11 level: v4 note: check built: 2020
    process: Intel 10nm family: 6 model-id: 0x8C (140) stepping: 1
    microcode: 0xAC
  Topology: cpus: 1x cores: 4 tpc: 2 threads: 8 smt: enabled cache:
    L1: 320 KiB desc: d-4x48 KiB; i-4x32 KiB L2: 5 MiB desc: 4x1.2 MiB L3: 8 MiB
    desc: 1x8 MiB
  Speed (MHz): avg: 1000 high: 3535 min/max: 400/4400 scaling:
    driver: intel_pstate governor: powersave cores: 1: 400 2: 400 3: 400 4: 401
    5: 400 6: 3535 7: 1651 8: 818 bogomips: 49766
  Flags: avx avx2 ht lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx
  Vulnerabilities: <filter>
Graphics:
  Device-1: Intel TigerLake-LP GT2 [Iris Xe Graphics] vendor: Hewlett-Packard
    driver: i915 v: kernel arch: Gen-12.1 process: Intel 10nm built: 2020-21
    ports: active: eDP-1 empty: DP-1, DP-2, DP-3, DP-4, HDMI-A-1
    bus-ID: 0000:00:02.0 chip-ID: 8086:9a49 class-ID: 0300
  Device-2: NVIDIA GA107M [GeForce RTX 3050 Mobile] vendor: Hewlett-Packard
    driver: nvidia v: 535.113.01 alternate: nouveau,nvidia_drm non-free: 535.xx+
    status: current (as of 2023-09) arch: Ampere code: GAxxx
    process: TSMC n7 (7nm) built: 2020-23 bus-ID: 0000:01:00.0
    chip-ID: 10de:25a2 class-ID: 0302
  Device-3: Chicony [] driver: uvcvideo type: USB rev: 2.0 speed: 480 Mb/s
    lanes: 1 mode: 2.0 bus-ID: 3-5:2 chip-ID: 04f2:b627 class-ID: 0e02
  Display: x11 server: X.Org v: 21.1.9 with: Xwayland v: 23.2.2
    compositor: gnome-shell driver: X: loaded: modesetting
    alternate: fbdev,intel,vesa dri: iris gpu: i915 display-ID: :0 screens: 1
  Screen-1: 0 s-res: 1920x1080 s-dpi: 96 s-size: 508x285mm (20.00x11.22")
    s-diag: 582mm (22.93")
  Monitor-1: eDP-1 model: BOE Display 0x094a built: 2020 res: 1920x1080
    hz: 60 dpi: 142 gamma: 1.2 size: 344x194mm (13.54x7.64") diag: 395mm (15.5")
    ratio: 16:9 modes: 1920x1080
  API: Vulkan v: 1.3.269 layers: 7 device: 0 type: integrated-gpu name: Intel
    Xe Graphics (TGL GT2) driver: mesa intel v: 23.2.1-arch1.2
    device-ID: 8086:9a49 surfaces: xcb,xlib device: 1 type: discrete-gpu
    name: NVIDIA GeForce RTX 3050 Laptop GPU driver: nvidia v: 535.113.01
    device-ID: 10de:25a2 surfaces: xcb,xlib device: 2 type: cpu name: llvmpipe
    (LLVM 16.0.6 256 bits) driver: mesa llvmpipe v: 23.2.1-arch1.2 (LLVM
    16.0.6) device-ID: 10005:0000 surfaces: xcb,xlib
  API: OpenGL Message: Unable to show GL data. glxinfo is missing.
Audio:
  Device-1: Intel Tiger Lake-LP Smart Sound Audio vendor: Hewlett-Packard
    driver: snd_hda_intel v: kernel alternate: snd_sof_pci_intel_tgl
    bus-ID: 0000:00:1f.3 chip-ID: 8086:a0c8 class-ID: 0403
  API: ALSA v: k6.5.9-zen2-1-zen status: kernel-api tools: N/A
  Server-1: sndiod v: N/A status: off tools: aucat,midicat,sndioctl
  Server-2: PipeWire v: 0.3.83 status: active with: 1: pipewire-pulse
    status: active 2: wireplumber status: active 3: pipewire-alsa type: plugin
    4: pw-jack type: plugin tools: pactl,pw-cat,pw-cli,wpctl
Network:
  Device-1: Realtek RTL8111/8168/8411 PCI Express Gigabit Ethernet
    vendor: Hewlett-Packard driver: r8169 v: kernel port: 4000
    bus-ID: 0000:02:00.0 chip-ID: 10ec:8168 class-ID: 0200
  IF: eno1 state: down mac: <filter>
  Device-2: Realtek vendor: Hewlett-Packard driver: rtw89_8852ae v: kernel
    modules: rtw_8852ae port: 3000 bus-ID: 0000:03:00.0 chip-ID: 10ec:a85a
    class-ID: 0280
  IF: wlo1 state: up mac: <filter>
Bluetooth:
  Device-1: Realtek [] driver: btusb v: 0.8 type: USB rev: 1.0 speed: 12 Mb/s
    lanes: 1 mode: 1.1 bus-ID: 3-10:3 chip-ID: 0bda:385a class-ID: e001
    serial: <filter>
  Report: btmgmt ID: hci0 rfk-id: 0 state: down bt-service: disabled
    rfk-block: hardware: no software: yes address: N/A
RAID:
  Hardware-1: Intel Volume Management Device NVMe RAID Controller driver: vmd
    v: 0.6 port: N/A bus-ID: 0000:00:0e.0 chip-ID: 8086:9a0b rev: class-ID: 0104
Drives:
  Local Storage: total: 505.58 GiB used: 143.08 GiB (28.3%)
  SMART Message: Required tool smartctl not installed. Check --recommends
  ID-1: /dev/nvme0n1 maj-min: 259:0 vendor: Samsung
    model: MZVLQ512HALU-000H1 size: 476.94 GiB block-size: physical: 512 B
    logical: 512 B speed: 31.6 Gb/s lanes: 4 tech: SSD serial: <filter>
    fw-rev: HPS4NFXV temp: 41.9 C scheme: GPT
  ID-2: /dev/sda maj-min: 8:0 vendor: SanDisk model: Ultra size: 28.64 GiB
    block-size: physical: 512 B logical: 512 B type: USB rev: 3.0 spd: 5 Gb/s
    lanes: 1 mode: 3.2 gen-1x1 tech: N/A serial: <filter> fw-rev: 1.00
    scheme: MBR
Partition:
  ID-1: / raw-size: 142.97 GiB size: 142.97 GiB (100.00%)
    used: 138.53 GiB (96.9%) fs: btrfs dev: /dev/nvme0n1p9 maj-min: 259:9
  ID-2: /boot/efi raw-size: 325 MiB size: 324.3 MiB (99.80%)
    used: 580 KiB (0.2%) fs: vfat dev: /dev/nvme0n1p8 maj-min: 259:8
  ID-3: /home raw-size: 142.97 GiB size: 142.97 GiB (100.00%)
    used: 138.53 GiB (96.9%) fs: btrfs dev: /dev/nvme0n1p9 maj-min: 259:9
  ID-4: /var/log raw-size: 142.97 GiB size: 142.97 GiB (100.00%)
    used: 138.53 GiB (96.9%) fs: btrfs dev: /dev/nvme0n1p9 maj-min: 259:9
  ID-5: /var/tmp raw-size: 142.97 GiB size: 142.97 GiB (100.00%)
    used: 138.53 GiB (96.9%) fs: btrfs dev: /dev/nvme0n1p9 maj-min: 259:9
Swap:
  Kernel: swappiness: 133 (default 60) cache-pressure: 100 (default) zswap: no
  ID-1: swap-1 type: zram size: 7.51 GiB used: 3.01 GiB (40.1%)
    priority: 100 comp: zstd avail: lzo,lzo-rle,lz4,lz4hc,842 max-streams: 8
    dev: /dev/zram0
  ID-2: swap-2 type: partition size: 10.74 GiB used: 0 KiB (0.0%)
    priority: -2 dev: /dev/nvme0n1p6 maj-min: 259:6
Sensors:
  System Temperatures: cpu: 43.0 C mobo: N/A
  Fan Speeds (rpm): cpu: 0 fan-2: 0
Info:
  Processes: 362 Uptime: 8h 57m wakeups: 51789 Memory: total: 8 GiB
  available: 7.51 GiB used: 3.41 GiB (45.5%) Init: systemd v: 254
  default: graphical tool: systemctl Compilers: gcc: 13.2.1 alt: 12
  clang: 16.0.6 Packages: 2151 pm: pacman pkgs: 2111 libs: 502
  tools: pamac,paru pm: flatpak pkgs: 40 Shell: fish v: 3.6.1
  running-in: alacritty inxi: 3.3.30
Garuda (2.6.17-1):
  System install date:     2023-06-13
  Last full system update: 2023-11-01
  Is partially upgraded:   Yes
  Relevant software:       snapper NetworkManager mkinitcpio nvidia-dkms
  Windows dual boot:       Probably (Run as root to verify)
  Failed units:

Guys, I really f****ed up. I feel so bad and sad inside, and I'm slowly melting.

I LOST FILES IN MY PROJECTS FOLDER, WHERE I USUALLY CREATE MY RUST AND PYTHON PROJECTS/MODULES.

I did something stupid with fdupes, blindly following ChatGPT's advice, to free up some space on my drive by removing dupes. Stupid me, thinking the program would actually understand the difference between one main.rs file and another main.rs file from a different module or even directory.
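For anyone reading later, the trap is roughly this progression (the exact command I ran isn't shown above, and ~/Projects is just an example path):

  fdupes -r ~/Projects          # read-only: list duplicate sets recursively
  fdupes -r -d ~/Projects       # deletes, but prompts for which copy to keep
  fdupes -r -d -N ~/Projects    # destructive: silently keeps only the first file of each set

fdupes matches by content, so two identical main.rs files in different projects count as duplicates.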

So anyway, here I am, banging my head on the keyboard, trying to figure out how to use testdisk to recover those files.

And no, I tried snapper, but it does not work (as it seems, regrettably, to only keep the / root directory files and important configuration files intact, but not the subdirectories).
So how am I doing? Do I have hope, or am I hopeless?

So far I have reached the step where I created an image.dd from inside the specific subdirectory I'm hoping to recover the files from. But I guess I'm not smart enough either, because I didn't realize that this step only creates a backup image of the data, and doesn't actually recover any files. -.-
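(For clarity: whether it's made through TestDisk's own image-creation menu or with dd, image.dd is just a raw copy of the data for recovery tools to work on later. Something like this on the command line, with the partition taken from the inxi output above and the target mount point assumed:)

  # Write the image to a *different* drive than the one being recovered.
  sudo dd if=/dev/nvme0n1p9 of=/mnt/external/image.dd bs=4M status=progress
  sudo testdisk /mnt/external/image.dd    # run the recovery tool against the image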

I’m really frustrated :(((((((((((((((

Please advise me.

Given that I have only very limited space in my current partition, I imagine someone here will advise me to bring in another hard drive, or boot Linux from another drive and run 'testdisk' from there! Preferably one over 500 GB (or only 141 GB, the size of the partition allocated on the machine where I'm facing this problem).

Is that really the only option I have? Bring in another hard drive and recover everything, including the whole partition, onto it?

I only want to recover files under $HOME/Documents, as those were the ones affected. Recovering the whole partition seems too radical.

Whoogle said so.

Use a live ISO, install testdisk, but back up the disk first.
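Something like this from the live session (a sketch; the device and mount point are assumptions, and ddrescue copes with read errors more gracefully than plain dd):

  sudo pacman -S testdisk ddrescue                  # on an Arch-based live ISO
  sudo ddrescue -d /dev/nvme0n1p9 /mnt/backup/part.img /mnt/backup/part.map
  sudo photorec /mnt/backup/part.img                # then recover from the image, not the disk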

I am sorry to say, as always, that this forum is not a general technical support forum.

Snapper saves your root subvolume, not the home folder, unless you change that.
But it is not a good idea to do this. Make separate backups instead.


I don't have much space, or spare drives, so doing backups was not possible for me. We're talking tens of GBs, and more! :frowning:
Guess I'll have to compromise and get another HDD for the freaking backup.

Thank you for your response.

Noted on the forum. But how do I move this to a general technical support forum?

Apologies for the confusion; I thought this would be specifically about using testdisk to recover files on Garuda Linux as the use case, and therefore not such a general topic.

SSD. That’s what I’m doing. And rsync.
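Roughly like this, for what it's worth (a sketch; /mnt/ssd is an assumed mount point):

  rsync -aHAX --delete --info=progress2 ~/Documents/ /mnt/ssd/backup/Documents/

The trailing slash on the source means "copy the contents of Documents", so nothing gets nested one level too deep on the target.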

Just wanna ask: do you have git snapshots of your projects? If you do, then recovering the deleted files should be possible with git itself, provided you have not deleted/messed up the .git directory of your projects as well.
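If the .git directory is intact, the recovery is a sketch like this, run inside the project (git restore needs git 2.23 or newer; older versions use git checkout -- . instead):

  git status       # deleted tracked files show up as pending deletions
  git restore .    # restore everything tracked at HEAD into the working tree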


That is Google.


Noted. So what's the protocol now? Should I close this forum page?

I also believe that most editors keep backups of the files you are editing, so check that as well. Just in case, you might be able to recover some files that way.
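Worth a look along these lines (a sketch; the patterns and the Neovim path are common defaults, not guarantees, and they vary by editor):

  find ~ \( -name '*~' -o -name '*.swp' -o -name '*.bak' \) 2>/dev/null
  ls ~/.local/share/nvim/swap/ 2>/dev/null    # Neovim's default swap directory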


It messed up the .git directory, as far as I can observe. The .git dir exists, but the subdirectories of each folder under it are empty. I'm guessing fdupes perceived them as dupes across all the other folders and directories it systematically compared.
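Still, before giving up on git entirely, it may be worth checking whether any loose objects survived (a sketch, run inside a project directory; the object hash is a placeholder):

  git fsck --lost-found    # copies any dangling objects into .git/lost-found/
  git show <object-hash>   # inspect a recovered blob or commit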

That's quite the tough luck, man… :sweat: Honestly, I have faced what you are facing right now. I almost had a heart attack (almost, because I was lucky and managed to revert with git). Sadly, recovering files is not an easy process. The recovery tools will detect and copy every file they can find on your partition to your secondary drive, and this will take a ton of time, plus a lot of sorting afterwards to pick your real files out of the garbage.

Just maintain a GitHub or GitLab backup after this, is all I can say…


It will take snapshots of any subvolume you want. The default configuration takes snapshots of the root subvolume only, but it's pretty easy to set it up to take snapshots of the other subvolumes (in your case maybe for next time :grimacing:).
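A minimal sketch of that setup (the config name "home" is arbitrary; run once, then snapper's timers take over):

  sudo snapper -c home create-config /home
  sudo snapper -c home create --description "manual baseline"
  sudo snapper -c home list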

If recovering the data is very important, you should immediately shut down your computer and completely disconnect the drive, then take it to a specialist to recover as much data as can be salvaged. As long as the drive is connected and you are continuing to interact with it (even just looking through directories or making copies), the chances of being able to recover the lost data will continue to diminish moment by moment as those old sectors are overwritten with new disk activity.

Learning to use data recovery tools is better done in your spare time when you have a testing environment and non-crucial data on the line. If this stuff is important you should go get help, and practice with data recovery tools another day.


Be prepared for a tremendous amount of renaming pictures, videos, PDFs, etc., then eliminating all the duplicates. Plus, recovery applications are liable to recover every single file that has never been overwritten.

Welcome to a World of Pain

I’ve been there.


Maybe this thread can help:

https://www.reddit.com/r/linux/comments/ypa22z/btrfsundelete_a_simple_script_for_recovering/
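That script is a wrapper around btrfs restore, which can pull deleted files out of an unmounted btrfs filesystem. A rough sketch of the underlying commands, run from a live session (the device, target mount point, subvolume name, and username are all assumptions, not verified against this system):

  # Dry run (-D) first: lists what can still be found without writing anything.
  sudo btrfs restore -iv -D /dev/nvme0n1p9 /mnt/external/restored/
  # Then restore only the Documents subtree to a different drive.
  sudo btrfs restore -iv --path-regex '^/(|@home(|/user(|/Documents(|/.*))))$' \
      /dev/nvme0n1p9 /mnt/external/restored/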


Maybe install that utility on a different disk; otherwise it may overwrite an important document on the very disk you're trying to recover from. Besides, it's Reddit.

I think it's normal for people to nearly shit themselves when something like this happens, and to panic immediately. I sure did the first time, and maybe the second.

It's wonderful that utilities like these exist. It would be better if a big, red WARNING made them unnecessary in the first place.

They erect warning signs along dangerous roads and cliffs, and people obey them. I guess personal, important data (what some folks call their 'lives') is not treated the same way.

For anyone wondering, here's my take: it takes a lot less time and energy to secure my data than it takes to try to restore it. I've got several thousand 'fnnnnnnn' document, text, PDF, and picture files from the last attempt at recovering my data. Many, many duplicates.

I was lucky to be quick enough to isolate the disk and didn’t really lose anything. It’s all still there, but I don’t think I’ll ever have the time to really straighten it out unless I go one-by-one through several thousand files. And I ain’t got that kinda time.

My bottom-of-the-line desktop isn’t all that good and spending more money on this 2018/19 machine is like putting a $100 saddle on a $50 horse, but I just bought a Western Digital Blue 1TB SSD to mirror everything to my old 1TB HDD. It doesn’t make this 'puter worth any more, but what price does one put on memories?

To anyone wondering: Take the time, spend the money, just do it.


I was planning on cleaning my files first, and then exporting them to some external hard drive later.

Exporting them as-is feels like I'm gonna procrastinate, just thinking about how I'd have to clean and alter them later, after I've already exported them to some backup place.

Some of them are useless to me; some of them make me iffy and make me wonder "if I should even back up that file, or if it isn't worth it".

Backing them up is a whole process on its own that I would rather think about only once, and then forget.

I already have such a mess of files jumbled around on one of my old external backup WD HDDs, and it makes me crazy thinking about how I'm even going to sort everything out by relevance and delete the things that may be duplicates (in content, not in file name, which is even trickier to sort out, but I do that on occasion, when and where it matters).

My 1 TB drive is already almost full, and I use it with another machine, as it holds a partition with another machine's install on it, and it would be rather inconvenient to have that corrupted or deleted (that install is mostly for gaming).

And I do take extra measures to ensure I have contingency plans in case anything gets "f*ed up", even though I do not fully rely on or trust them. These contingency plans have worked for me in the past, though they are not working for me at the current moment.

Shit happens.


I promise to post an update on the things I tried to solve my issue, even if they don't work, as I have been pretty active trying many different methods (including some that were mentioned here in this forum).

And if I do get my files restored, I promise to describe the process I used to restore them, so that people may benefit from it in the future.

I hope that's allowed on this forum.

I still have a few thousand duplicates, mostly PDFs & JPGs. It has been a couple of years, and I think I used PhotoRec, but I'm far from sure. One way to semi-eliminate duplicates is to sort by size; another would be by 'bitness', and there are probably others. But I'm lazy.
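For the less lazy, hashing semi-automates the sort-by-content idea (a sketch using GNU coreutils; a SHA-256 digest is 64 hex characters, hence -w64):

  # Print groups of byte-identical files, separated by blank lines.
  find . -type f -exec sha256sum {} + | sort | uniq -w64 --all-repeated=separate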

PhotoRec (or whatever it was) even recovered files that had been deleted aeons ago, from a disk that had been formatted several times since. That's scary! :wink:
