Hi all,

I currently have a Linux install on an old 256GB SATA SSD that I inherited. It was originally used as a swap drive in someone else's RAID server for about 7 years; then it was given to me, and I have been running my own Linux install on it for about 5 years.

About a year ago, I acquired a new computer with an NVMe SSD. It originally ran Windows, but I dropped in my SSD with my Linux install, installed GRUB on the NVMe SSD, and booted from the old SSD.

I am mildly concerned that, with this SSD being so old, it could eventually crap out on me. I remember that being a topic of discussion when SSDs first hit the market (i.e. when the one I am using was made). So I was thinking of wiping the 1TB NVMe SSD that is currently unused in this computer and migrating my install to it. Now, I know I could copy my whole disk with dd and then expand the partition to make use of the extra space. But I was wondering if I could also change the filesystem to something that has snapshots (such as btrfs).

Is it possible to do this, or, to change filesystems, do I need to create a new Linux install and copy over all the files I want to keep?

  • Max-P@lemmy.max-p.me · 1 year ago

    Make the new filesystem, rsync the old SSD to the new one (use rsync -ax to copy everything properly; also add -H if you use hardlinks), update the fstab UUID, regenerate the GRUB configuration, and you're good to go.
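
    As a sketch, the copy step looks roughly like this (on a real migration the source and destination would be mountpoints like /mnt/old and /mnt/new; the demo below uses throwaway directories so nothing real is touched):

    ```shell
    #!/bin/sh
    # Sketch of the rsync copy step. On a real migration OLD/NEW would be
    # the mounted old root and the new filesystem; here they are temp dirs.
    OLD=$(mktemp -d)
    NEW=$(mktemp -d)

    # Fake a tiny root filesystem containing a hardlinked pair of files.
    echo "UUID=old-uuid / ext4 defaults 0 1" > "$OLD/fstab"
    ln "$OLD/fstab" "$OLD/fstab.link"

    # -a: archive mode (recursion, permissions, times, symlinks)
    # -x: don't cross filesystem boundaries (skips /proc, /sys, other mounts)
    # -H: preserve hardlinks
    rsync -axH "$OLD/" "$NEW/"

    ls "$NEW"
    ```

    The trailing slashes matter: "$OLD/" copies the contents of the source into the destination rather than creating a nested directory inside it.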

    I have a 10-year-old install that's survived moving across several disks and computers; it works just fine.

    • phx@lemmy.ca · 1 year ago

      Don’t forget to change the fstab filesystem type when updating the UUID as well (yes, I’ve made this oops before).
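
      For example, a root entry in /etc/fstab needs both fields changed (the UUIDs here are made up; run blkid to get the real one):

      ```
      # before (old SSD, ext4)
      UUID=1111-old-1111  /  ext4   errors=remount-ro  0  1
      # after (new NVMe, btrfs) - btrfs doesn't use fsck, so the last field is 0
      UUID=2222-new-2222  /  btrfs  defaults  0  0
      ```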

    • Dandroid@dandroid.app (OP) · 1 year ago

      This is likely what I will do now that I have given it some thought. This will bring over all of my installed apt and snap packages, right? And both package managers will be aware of them and know how to update them from there?

      I have the NVMe prepped. It has a fresh Ubuntu install of the same version, but on btrfs. I could probably even snapshot it before I get started to make sure I can roll back and try again if I fuck up. And worst case, I can just reinstall the OS on that partition, as it wouldn't touch my existing install. It feels pretty safe to try. The worst thing that can go wrong is I waste my time.
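
      For reference, a pre-migration snapshot is a one-liner on btrfs (a sketch, assuming / is a btrfs subvolume; the snapshot name is arbitrary):

      ```shell
      # Read-only snapshot of the fresh install, to roll back to if needed.
      sudo btrfs subvolume snapshot -r / /.before-migration
      # Verify it shows up:
      sudo btrfs subvolume list /
      ```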

    • vrt3@feddit.nl · 1 year ago (edited)

      -x (alias --one-file-system) means “don’t cross filesystem boundaries”; is that what you meant? Or did you mean -X | --xattrs?

      Edited because I wrote some things before that were incorrect.

  • socphoenix@midwest.social · 1 year ago

    Getting Linux to boot on a different filesystem and drive would take a lot of changes. It would be much faster to install a fresh copy of Linux on the NVMe drive and copy the files over from the SSD post-install, before decommissioning the old drive.

    • Max-P@lemmy.max-p.me · 1 year ago

      It’s really not that bad; unlike Windows, you can pretty much just rsync the data over, update fstab, and it’s good to go.

    • exi@feddit.de · 1 year ago

      I disagree; you usually just need to get /boot and your EFI setup right on the new disk, rsync everything over, fix any references to the old disks in /etc/fstab and maybe your GRUB config, and you're done. I have done this migration more than 10 times over the years, onto different filesystems, partition layouts, and RAID configurations, and it's never been particularly hard.

        • exi@feddit.de · 1 year ago (edited)

          Most of the time, it’s enough to copy the whole EFI partition to the new machine and update whatever boot entries are in there to point to the right new partitions.

          In case of a switch to something like zfs, it’s a bit more involved and you need to boot a live Linux, chroot into the new “/” with /boot mounted and /dev, /proc, /sys bind mounted into the chroot.

          Then you can run the distro-appropriate command to reinstall/update GRUB into the EFI partition, and it will usually take care of adding the right drivers.
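
          A sketch of that procedure from a live USB (device names are placeholders; the GRUB commands shown are the Debian/Ubuntu flavor, other distros differ):

          ```shell
          # Mount the new root and the EFI partition (adjust devices to yours).
          mount /dev/nvme0n1p2 /mnt
          mount /dev/nvme0n1p1 /mnt/boot/efi

          # Bind-mount the virtual filesystems the chroot needs, then enter it.
          for d in dev proc sys; do mount --bind "/$d" "/mnt/$d"; done
          chroot /mnt /bin/bash

          # Inside the chroot: reinstall GRUB and regenerate its config.
          grub-install --target=x86_64-efi --efi-directory=/boot/efi
          update-grub
          ```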

      • socphoenix@midwest.social · 1 year ago

        That’s true if everything is supported by the current kernel. I might just be very out of touch/out of date here, but is btrfs built into the kernel? I was thinking he’d need a different kernel/loaded modules for it.

        • exi@feddit.de · 1 year ago

          Btrfs has been in the mainline kernel since 2.6.29; that’s 14 years ago, my friend 😃

          It’s been included in every major distro for a long, long time.

          • socphoenix@midwest.social · 1 year ago

            Well dang, it’s been a while since I tried it then! I keep hearing in comments how it’s unstable, so I tend to assume it’s fairly new, even when I should know better lol