As a full-time desktop Linux user since 1999 (the actual year of the Linux desktop, I swear), I wish all you Windows folks the best of luck on your next clean install 👍

…and Happy 30th Birthday “New Technology” File System!

  • Rakust · 223 points · 1 year ago

    How do you know when someone uses linux?

    Don’t worry, they’ll tell you

      • @laylawashere44@lemmy.blahaj.zone · 43 points · 1 year ago

        Comment by someone who hasn’t used Windows in an age. When was the last time you rebooted because you had installed new software? When was the last time you ran random code from a forum post to make software work? Because this Windows user doesn’t remember ever doing that.

          • @herrvogel@lemmy.world · 28 points · 1 year ago (edited)

            Many Linux package managers themselves tell you you should reboot your system after updates, especially if the update has touched system packages. You can definitely run into problems that will leave you scratching your head if you don’t.
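
            For anyone who wants to check this programmatically: on Debian/Ubuntu-style systems the update tooling drops a flag file when a newly installed package wants a reboot. A minimal Python sketch, assuming that layout:

            from pathlib import Path

            # Debian/Ubuntu convention: packages that want a reboot (kernel, libc, ...)
            # cause these flag files to be created during the update.
            flag = Path("/var/run/reboot-required")
            pkgs = Path("/var/run/reboot-required.pkgs")

            if flag.exists():
                print("Reboot recommended.")
                if pkgs.exists():
                    print("Requested by:", pkgs.read_text().strip())
            else:
                print("No reboot flag set.")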

          • pacoboyd · 27 points · 1 year ago

            *nix systems are not immune to needing reboots after updates. I work as an escalation engineer for an IT support firm, and our support teams that do *nix updates without reboots have DEFINITELY been the cause of some hard-to-find issues. We’ll often review environment changes first thing during an engagement, only to find that the issue came from some update three months ago where the team never rebooted to validate that the new config was good. Not gonna argue that in general it’s more stable and usually requires fewer reboots, but it’s certainly not the answer to every Windows pitfall.

            • @havokdj@lemmy.world · 3 points · 1 year ago

              The only time you truly need to reboot is when you update your kernel.

              The solution to this problem is live-patching. Not really a game changer with consumer electronics because they don’t have to use ECC, but with servers that can take upwards of 10 minutes to reboot, it is a game changer.
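
              A rough way to see whether that kernel reboot is actually pending is to compare the running kernel against what the package manager has installed. A naive Python sketch (Debian/Ubuntu /boot layout assumed; the version comparison is deliberately simplistic):

              import os
              from pathlib import Path

              running = os.uname().release  # e.g. "6.1.0-18-amd64"

              # Kernel images installed by the package manager (Debian/Ubuntu layout assumed).
              installed = sorted(p.name[len("vmlinuz-"):] for p in Path("/boot").glob("vmlinuz-*"))

              # Lexicographic sort is good enough for a sketch; a real tool would parse
              # the version fields properly (e.g. 6.10 would sort before 6.9 here).
              newest = installed[-1] if installed else running
              if newest != running:
                  print(f"Running {running}, newest installed is {newest}: reboot or live-patch needed")
              else:
                  print(f"Running the newest installed kernel ({running}); nothing pending")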

              • @UnsafePantomime@lemmy.world · 6 points · 1 year ago

                We have an Ubuntu machine at work with an NVIDIA GPU we use for CUDA. Every time CUDA has an update, CUDA throws obtuse errors until reboot.

                To say that only kernel updates require a reboot is naive.

                • @havokdj@lemmy.world · 4 points · 1 year ago

                  Damn, yeah, I didn’t think of that either. Alright, scratch what I said. The point still stands that you very rarely need to reboot outside of scenarios involving very critical processes such as these, which depend on what work you do with the machine.

                  It’s been a long, slow night and morning, and I was half awake when I said that. Hell, I’m still half awake now; just disregard anything I’ve said.

              • @rambaroo@lemmy.world · 6 points · 1 year ago

                This isn’t true. I had to reboot Debian the other day to take an update to dbus, which is not part of the kernel.

            • @bremen15@feddit.de · -7 points · 1 year ago

              Seems to be sloppy engineering. We ran a huge multi-site operation on Linux and did not need to.

        • @Weirdfish@lemmy.world · 4 points · 1 year ago

          A couple days ago, but I have a company issued remote managed windows laptop, and I get zero say in the matter.

          At least once a month my system forces me to do a reboot for updates.

          I can tell it to wait, but I can not tell it to stop.

        • @heimchen@discuss.tchncs.de · 3 points · 1 year ago

          Yesterday, the laptop speakers on one of my family members’ computers stopped working. After an hour of clicking through legacy UI trying to fix it (Lenovo Yoga 730, if someone could help me), I gave up and plugged in my Linux boot USB to test whether it was a driver issue or something. I misclicked in the boot menu and had to wait half an hour for a random Windows update (which I had not started, because I use the physical button to turn it off; with Windows 11, turning off the computer via software requires so much mouse movement).

      • pacoboyd · 25 points · 1 year ago (edited)

        Haven’t used Windows in a while, huh?

        Edit: Just to clarify, I run a LOT of operating systems in my lab: RHEL, Debian, Ubuntu (several LTS flavors), TrueNAS, Unraid, RancherOS, ESXi, Windows Server 2003 through 2022, Windows 10, Windows 11.

        My latest headless Steam box with Windows 11, based on an AMD 5600G, basically reboots about as fast as I can retype my password in RDP.

          • @avapa@lemmy.world · 12 points · 1 year ago

            Probably a gaming PC (as he mentioned Steam) without a display connected to it that’s used for game streaming using Parsec or other software like Sunshine. By the way, if you want to try that setup yourself make sure you get a dummy plug (HDMI or DisplayPort) for the GPU as Windows doesn’t really allow video capture if no display is detected.

            • pacoboyd · 3 points · 1 year ago (edited)

              This, thanks. I just use Steam Link though; works well for my needs.

      • @Audbol@lemmy.world · 11 points · 1 year ago

        And boy do you guys ever talk about Windows… Like constantly. Go on any Linux subreddit or community and 8 of the top 10 posts will mention Windows.

      • @Crozekiel@lemmy.zip · 3 points · 1 year ago

        Omg. This hits home. I think Linux has prompted/asked me to reboot one time since I installed it two months ago. Windows wants you to reboot every time you change anything. I didn’t realize how insanely often it asks until I had something to compare it to.

        I got a friend trying Linux for the first time and they asked for some help picking software to install, like which office suite or photo app etc… They just instinctively rebooted after everything they did, like it was a Pavlovian response, lol.

        • @hamsterkill@lemmy.sdf.org · 4 points · 1 year ago

          This will vary by distro. Arch for example expects (but doesn’t ask) you to reboot quite often since their packages are “bleeding edge” and update the kernel often.

  • @Fylkir@lemmy.sdf.org · 81 points · 1 year ago

    The last update to NTFS was in 2004.

    The fact that ReFS doesn’t even support all the features NTFS does is pathetic.

    • @deranger@lemmy.world · 32 points · 1 year ago

      Genuine question, not being sarcastic.

      What’s the benefit to the average end user to modernizing NTFS?

      Sure, I love having btrfs on my NAS for all the features it brings, but I’m not a normal person. What significant changes that would affect your average user does NTFS require to modernize it?

      I just see it as an “if it’s not broken” type thing. I can’t say I’ve ever given the slightest care about what filesystem my computer was running until I got into NAS/backups, which itself was a good 10 years after I got into building PCs. The way I see it, it doesn’t really matter when I’m reinstalling every few years and have backups elsewhere.

      • @vividspecter@lemm.ee · 40 points · 1 year ago (edited)

        • Near instantaneous snapshots and rollback (would help with system restore etc)
        • Compression that uses a modern algorithm
        • Checking for silent corruption, so users know if their files are no longer correct

        I’d add built-in multi-device support (think RAID and drive pooling), but that might be beyond the “average” user (which is always a vague term, and I feel there are many types of users within that average). E.g. users who mod their games can benefit from snapshots and/or reflink copies, which let them back up their game dirs without taking up any additional space beyond the changes that the mods add.
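
        For anyone curious what a reflink copy actually is: it’s a clone that shares the original file’s extents on disk until one side is modified. A minimal Python sketch of what cp --reflink does on Linux, using the FICLONE ioctl (the constant below is the usual value on x86-64, and this only succeeds on reflink-capable filesystems such as btrfs or XFS):

        import fcntl

        FICLONE = 0x40049409  # ioctl request number for FICLONE on typical Linux builds

        def reflink_copy(src: str, dst: str) -> None:
            # Clone src to dst so both share the same extents (copy-on-write).
            # Raises OSError (e.g. EOPNOTSUPP) on filesystems without reflink support.
            with open(src, "rb") as s, open(dst, "wb") as d:
                fcntl.ioctl(d.fileno(), FICLONE, s.fileno())

        # Example: "copying" a multi-gigabyte game file this way takes no extra space;
        # only blocks later changed by mods or saves get newly allocated.
        # reflink_copy("game/assets.pak", "backup/assets.pak")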

        • @deranger@lemmy.world · 3 points · 1 year ago

          I agree all those are nice things to have, and things I’d want to see in an update. Now how can you sell those features to management? How do these improve the experience for the everyday end user?

          I’d say the snapshots feature could be a major selling point. Windows needs a good backup/restore solution.

          It just seems like potentially a ton of work to satisfy the needs of “people who think about filesystems”, which is an extremely small subset of users. I can see how it might be hard to get the manpower and resources needed to rework the Windows default filesystem.

          I really have no clue how much work it takes though, so it’s just speculation on my end. I’m just curious; on one hand, I do see where NTFS is way behind, but on the other… who cares? I’ve somehow made it past 20 years of building Windows PCs without really caring what filesystem I’ve used, from 95 all the way to 11.

          • @vividspecter@lemm.ee · 2 points · 1 year ago (edited)

            I’m not sure you need to sell it to actual users. A lot of the benefits of an advanced filesystem could be delivered by the OS itself, almost transparently. All of the features I mentioned could be managed by Windows, with only minimal changes to the UI. Even reflink copies could just be a Control Panel option that Explorer then uses by default (the equivalent of cp --reflink=auto on Linux). And from the OS side, deduplication would help a lot on Windows, given all of the DLL bundling and the weird shit they have to do to maintain legacy compatibility, and that’s no small thing given how space-inefficient modern Windows installs have become.

            It would be some work to upgrade it (maybe a lot, given how ancient Windows is and how much legacy-compatibility cruft it carries), but it would eventually make the system more reliable and more space-efficient.

            But yeah, there are challenges. I’m mainly speaking in terms of btrfs, which would take some time to port to Windows (although there is a third-party driver, I suspect they’d want to handle it themselves), but they’ll probably want to use their own ReFS, and I haven’t really investigated it seriously, so I can’t say how ready that is for prime time. But given that it’s being included as an option in some enterprise/server editions of Windows, maybe it will show up in consumer editions soon anyway (as much as I’d prefer something more open and widely supported, at least it’s a step forward on Windows).

      • @Fylkir@lemmy.sdf.org · 19 points · 1 year ago

        At the very least, better filesystem-level compression support. A somewhat common use case might be people who use emulators. Both the Wii U and PS3 are consoles where major emulators just use a folder on your filesystem. I know a lot of emulator users who are non-technical to the point that they don’t have “show hidden files and folders” enabled.

        Also, your average person wouldn’t necessarily need checksums, but having them built into the filesystem would lead to more reliability overall.
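
        To make that concrete: a checksumming filesystem is essentially doing this bookkeeping for you on every write and read. A hand-rolled Python sketch of the same idea (the paths in the usage comments are just illustrative):

        import hashlib
        import json
        from pathlib import Path

        def sha256_of(path: Path) -> str:
            h = hashlib.sha256()
            with path.open("rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        def build_manifest(root: Path) -> dict:
            # Record a checksum for every file under root.
            return {str(p): sha256_of(p) for p in root.rglob("*") if p.is_file()}

        def verify(manifest: dict) -> None:
            # Any mismatch means the bytes on disk changed silently since the manifest was made.
            for name, digest in manifest.items():
                if sha256_of(Path(name)) != digest:
                    print(f"CORRUPTED: {name}")

        # Example usage (illustrative paths):
        # manifest = build_manifest(Path("D:/Photos"))
        # Path("photos.manifest.json").write_text(json.dumps(manifest))
        # verify(manifest)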

    • @Exec@pawb.social · 24 points · 1 year ago

      Nope, long paths have been supported since 8.1 or 10, but you have to enable it yourself because very old apps can break.

      • @eco@lemmy.world · 10 points · 1 year ago

        Furthermore, apps using the Unicode versions of the API functions (which all apps should have been doing for a couple of decades now) get a maximum path length of roughly 32,767 characters.
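
        Concretely, the longer limit applies when the path is passed in extended-length form, i.e. prefixed with \\?\. A quick Python sketch to try on Windows (the folder names are made up, and the path has to be absolute for the prefix to work):

        import os

        # Build a deliberately deep path, well past the legacy 260-character MAX_PATH.
        deep = os.path.abspath(os.path.join("C:\\temp", *(["really_long_folder_name"] * 15)))
        print(len(deep))  # comfortably over 260

        # The \\?\ prefix tells the Unicode (-W) file APIs to skip legacy MAX_PATH handling.
        long_path = "\\\\?\\" + deep

        os.makedirs(long_path, exist_ok=True)
        with open(os.path.join(long_path, "hello.txt"), "w") as f:
            f.write("this file lives past the 260-character mark")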

    • @sorenant@lemmy.world · 5 points · 1 year ago (edited)

      Are you writing paragraphs for folder/file names? That’s one “issue” I’ve never had a problem with.

      Maybe enterprises need a solution for it but that’s a very different use case from most end users.

      Improvements are always welcome but saying it’s “ridiculously short” makes the problem sound worse than it is.

      • Tekchip · 33 points · 1 year ago

        I think they mean the full path length. As in you can’t nest folders too deep or the total path length hits a limit. Not individual folder name limits.

      • @RagingNerdoholic@lemmy.ca · 24 points · 1 year ago (edited)

        File paths. Not just the filename, the entire directory path, including the filename. It’s way too easy to run up against the limit if you’re actually organized.

        • @Serinus@lemmy.ml · 7 points · 1 year ago

          It might be 255 characters for the entire path?

          I’ve run into it at work where I don’t get to choose many elements. Thanks “My Name - OneDrive” and people who insist on embedding file information into filenames.

          • @chinpokomon@lemmy.world · 2 points · 1 year ago

            The limit was 260. The OS and the filesystem support more. You have to enable a registry key, and apps need to have a manifest which says they understand file paths longer than 260 characters. So while it hasn’t been a limitation for a while, as long as apps were coded assuming the shorter path limit it will continue to be a problem. There needs to be a conversion mechanism like Windows 95 had, so that apps could continue to use short file names. Internally the app could use short path names while the rest of the OS was no longer held back.

        • @motorwerks@sopuli.xyz · -1 points · 1 year ago

          You like diving 12 folders deep to find the file you’re after? I feel like there’s better, more efficient ways to be organized using metadata, but maybe I’m wrong.

          • @d3Xt3r@lemmy.world · 15 points · 1 year ago

            Not OP, but I occasionally come across this issue at work, where some user complains that they are unable to access a file/folder because of the limit. You often find this in medium-large organisations with many regions and divisions and departments etc. Usually they would create a shortcut to their team/project’s folder space so they don’t have to manually navigate to it each time. The folder structure might be quite nested, but it’s organized logically; it makes sense. Better than dumping millions of files into a single folder.

            Anyways, this isn’t actually an NTFS limit, but a Windows API limit. There’s even a registry value[1] you can change to lift the limit, but the problem is that it can crash legacy programs or lead to unexpected behavior, so large organisations (like ours) shy away from the change.

            1. https://learn.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation?tabs=registry
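
            If you just want to check how a given machine is configured, that same value can be read with Python’s standard winreg module (Windows-only; changing it requires admin rights):

            import winreg

            KEY = r"SYSTEM\CurrentControlSet\Control\FileSystem"

            # 1 = Win32 long paths enabled system-wide (apps still need a manifest entry);
            # 0 or missing = classic 260-character MAX_PATH behaviour.
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
                try:
                    value, _ = winreg.QueryValueEx(key, "LongPathsEnabled")
                except FileNotFoundError:
                    value = 0

            print("Long paths enabled" if value == 1 else "Long paths disabled (260-char limit)")
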
          • RiskableOP · 9 points · 1 year ago

            C:\Users\axexandriaanastasiachristianson\Downloads\some_git_repo\src\...

            You run into the file path limit all the fucking time if you’re a developer at an organization that enforces full-name usernames.

              • @bighi@lemmy.world · 11 points · 1 year ago

                People have been talking about the real problem from the beginning of the thread: small character limit on file paths.

                • @lolcatnip@reddthat.com · 1 point · 1 year ago (edited)

                  I would be pissed if they made me use such a ridiculously long login name at work. Mine is twelve characters and that’s already a pain in the ass (but it’s a huge company and I have a really common name, so I guess all the shorter variations were already taken).

                  Edit: Also, I checked it’s really very simple to enable 32kb paths in recent versions of Windows.

  • @tony@lemmy.hoyle.me.uk · 41 points · 1 year ago

    You want your filesystems to be old and stable. It’s new filesystems you want to view with suspicion… not battle tested.

    • @olutukko@lemmy.world · 10 points · 1 year ago

      I wouldn’t really say so. Of course it’s not a good idea to take the absolute latest filesystem as your daily driver, since it’s probably not bug-proof yet, but you also don’t want to use something extremely old just because it’s been tested much more, because then you’re just trading away performance and features for nothing. For example, ext4 is extremely reliable and its stable version is 15 years newer than NTFS.

      • @dgilluly@lemmy.world · 3 points · 1 year ago

        I’m a client-side technician working in a predominantly Windows environment for the last 8 going on 9 years.

        Out of all the issues I have seen on Windows, filesystem issues are rather low on that list in terms of prevalence, as I don’t recall one that isn’t explainable by hardware failure or an interrupted write. Not saying it doesn’t happen, or that ext4 is bad or anything, but I don’t work in Linux all that much, so me saying I’ve never had an issue with ext4 isn’t worth the same, because I don’t have nearly the same amount of experience.

        Also ext came about in 1992, so 31 years so far to hash out the bugs is no small amount of time. Especially in terms of computing.

    • BrooklynMan · 9 points · 1 year ago

      NFTS: you invest all of your data into it, and it grows and grows until it suddenly disappears as you discover it was a scam all along.

  • @InvaderDJ@lemmy.world · 17 points · 1 year ago

    It is weird to me that Microsoft hasn’t updated the file system in so long. They were going to with Longhorn/Vista, but that failed and it seems like they’ve been gun-shy ever since.

    • ultratiem · 15 points · 1 year ago

      You don’t sound like you were around for the Windows Vista/Longhorn development days, when they promised a successor to NTFS and then, over the course of the next couple of years, bailed on that (and nearly every other promise they made).

      WinFS: https://www.zdnet.com/article/bill-gates-biggest-microsoft-product-regret-winfs/

      And FWIW, they are developing ReFS, which looks like it will finally supplant NTFS, but given MS’ business model, don’t expect NTFS to ever really disappear.

      • @InvaderDJ@lemmy.world · 5 points · 1 year ago

        Yeah, I definitely was. I think that gave them PTSD or something, because they haven’t even tried to make moderate changes to NTFS since. And besides ReFS, which I hadn’t heard about until this thread, they haven’t even done something as minor as giving you an option to use a different file system like ext4.

    • @MystikIncarnate@lemmy.ca · 7 points · 1 year ago

      NTFS has evolved over the years, but the base structure is mostly unchanged. Things have changed, but not the name. I think they’ve been using NTFS v3 for a while now…

      • @InvaderDJ@lemmy.world · 3 points · 1 year ago

        Yeah, that’s what I mean. There have been small changes, but nothing major and if the other poster was right, even minor changes haven’t been made since 2004.

        Meanwhile Apple has come out with APFS and *nix variants have multiple file systems, each more modern than NTFS.

        It is weird to me. Here’s hoping ReFS or some other file system comes out.

        • @MystikIncarnate@lemmy.ca · 5 points · 1 year ago

          ReFS is out. But only specific revisions of Windows, notably Windows server, can use it for specific use cases.

          I tried setting up ReFS on a disk for a cluster of Hyper-V systems… I couldn’t, because they were using a cluster-shared DAS, and in that version of Windows Server (or of ReFS) there was no support for clustered access to the FS. It should otherwise have worked; it just seems a bit incomplete at the moment. If I had been using it for CIFS access from a single server, then yeah, it probably would have been fine; it was just the clustered direct access that wasn’t yet supported.

          Windows desktop is unlikely to get ReFS support until the FS is more mature, and it’s likely that will be limited to non-OS disks for a while.

          It’s pretty far along right now; it’s just that MS isn’t going to pop open any champagne until the FS can hold its own as a direct replacement and upgrade from NTFS, with all the capabilities and features required (and more).

          I’ll note that the vast majority of systems running some kind of *nix are generally using either ext2 or ext3, where ext3 is essentially just ext2 with journaling (which is something NTFS has, AFAIK), and ext2 is just as old as NTFS.

          We can argue and complain all we want, but these are tried and true, battle tested file systems that do the job adequately for the demands of systems, both in the past, and now. They do one fairly simple thing… Organizing data on disk into files and directories, and enabling that data to be written, updated, read from, and otherwise retrieved when needed.

          I know in IT we don’t go by the saying “if it’s not broken don’t fix it”, since all of us have horror stories of when you don’t fix something that’s not broken and something very bad happens… But I would say that systems like ext2/3 and NTFS have achieved the coveted goal of RFC 1925, rule 12: In protocol design, perfection has been reached not when there is nothing left to add, but when there is nothing left to take away.

          There’s no fat in these file systems. Everything in them generally exists for good reason, the fs is stable and does the required job.

          Does that mean we should pack it up, we’ll never need another fs again? Absolutely not. We will hit the hard upper limits of what these file systems can do, eventually; probably fairly soon, but that doesn’t mean that either is bad simply because they are old.

    • @elscallr@lemmy.world · 5 points · 1 year ago

      It is weird to me that Microsoft hasn’t updated the file system in so long.

      Honest question: why? NTFS isn’t great, it isn’t terrible, it’s functional. I don’t really spend any time thinking about my filesystem. I like having symbolic links on my Linux boxes, but aside from that I just want it to work, and NTFS does.

    • @chinpokomon@lemmy.world · 4 points · 1 year ago

      WinFS wasn’t a replacement for NTFS as much as it was a supplement. Documents could be broken apart into atomic pieces, like an embedded image, and each piece would be indexed on its own. Those pieces were kept in something more like a SQL database, more like using binary blobs in SharePoint Portal, but that database was still written to disk on an NTFS partition, as I recall. WinFS was responsible for bringing those pieces back together to represent a complete document if you were transferring it to a non-WinFS filesystem or to a different system altogether. It wasn’t a new filesystem as much as it was a database with a filesystem driver.

    • @Psythik@monyet.cc · 8 points · 1 year ago

      What the hell ever happened with ReiserFS (or whatever it was called)? It was supposed to be used in Vista, and then it just never was.

      • @HR_Pufnstuf@lemmy.world · 2 points · 1 year ago

        XFS is more like ext3 or ext4 than ZFS. It has no COW or snapshots, although it is very performant and can handle very large volumes. It’s a pretty good all-around filesystem. I trust it more than ext4, but you also can’t shrink it like you can ext4.

  • @TheOldRepublic@lemmy.world · 15 points · 1 year ago

    I use both. I like Linux better, even more since W10. It’s spyware, crap, all those nasty things. But hey, I’m a PC gamer and, sadly enough, my games (80% of them) all get funky in Linux (Wine, PlayOnLinux… I tried it all), so I guess I’m stuck with the crap. But again, Linux is far better and superior.

    • @orangeboats@lemmy.world · 8 points · 1 year ago

      Modern Linux systems are slowly moving toward Btrfs at least… which is pretty young compared to ext4 and NTFS.

    • @zerbey@lemmy.world · 8 points · 1 year ago (edited)

      XFS, the default filesystem in Red Hat, is about as old as NTFS; it was released in 1994.

      I’ll say this, the previous admin of one of the Linux servers I support set up RAID-0 striping for the main data slice (must have been dropped on their head as a child or something). Two drives, and one of the drives developed bad sectors, but I was still able to recover 95% of the data before it shit the bed completely. So, XFS is apparently quite resilient, or I got lucky.

  • @Secret300@lemmy.world · 13 points · 1 year ago

    This might sound ignorant, but that’s ’cause I am. Why doesn’t Windows just use ext4, btrfs, XFS, or something else open source? They wouldn’t have to worry about developing it, so it’d be a load off their chest, and they could get really good features that even NTFS doesn’t have. Well, maybe not with ext4, but with btrfs.

    • @salient_one@lemmy.villa-straylight.social · 41 points · 1 year ago (edited)

      Microsoft really, really hated open source some time ago. Now they seem to have embraced it; however, some still think that might be an attempt to EEE (embrace, extend, extinguish).

      Still, I suppose Microsoft doesn’t think replacing the Windows default filesystem is a sound investment at this point even if the political resistance to such a change is, supposedly, gone.

      • @Secret300@lemmy.world · 6 points · 1 year ago

        Idk who was dumb enough to upvote this, but NTFS hasn’t been great. That’s why they’re making a replacement called ReFS.

    • nicman24 · 4 points · 1 year ago

      There is an open-source btrfs kernel driver for Windows, and a userspace one for ext4.

  • @RunningInRVA@lemmy.world · 4 points · 1 year ago

    Does NTFS allow for merging disks into a single partition? Apple was able to do this by combining a larger HDD with a smaller SSD into a single virtual HFS+ volume.

    • @d3Xt3r@lemmy.world · 13 points · 1 year ago (edited)

      Yep. You need to convert the disk into a “dynamic disk” (no data loss btw) and then you can create a “spanned volume” across the disks. You can also create a striped volume for performance, which is basically RAID 0.

      But apparently dynamic disks are now deprecated and Microsoft wants you to use “storage spaces” instead, which is basically RAID and not just simple spanned volumes. The problem with this, IIRC, is that you’ll need at least two extra drives (in addition to the drive where Windows is installed).

      • @Spotlight7573@lemmy.world · 4 points · 1 year ago

        I don’t think a spanned volume is quite what they were after. I’m pretty sure macOS uses the SSD part as a cache and it’s used mainly for increasing the performance of the relatively slow but large capacity HDD. Nowadays though you might as well just go with all SSD in most cases if performance matters.