Amazingly screwed-up installation experience



  • @ais523 said:

    Then how could the same user possibly install two copies of the application at once? Half the reason you'd want to customize the install paths is to do that.
    Work off your executable path, though most applications don't bother supporting multiple system-level installs on the same machine (they don't make much sense with multiple users, where data has to be kept per-user), and instead support profiles (which are pretty simple to implement - just use a separate subfolder in %APPDATA% and key in registry for each profile).
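
    A minimal sketch of that profile layout, purely as illustration (Python; the app name, profile name, and registry key mentioned below are made-up placeholders, not anything from the thread):

        import os
        from pathlib import Path

        def profile_dir(app="MyApp", profile="default"):
            # Each profile gets its own subfolder under the per-user %APPDATA%;
            # a matching key such as HKCU\Software\MyApp\Profiles\default would
            # record which profile the executable should load.
            root = Path(os.environ["APPDATA"]) / app / "profiles" / profile
            root.mkdir(parents=True, exist_ok=True)
            return root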


  • Discourse touched me in a no-no place

    @ender said:

    @ais523 said:
    Then how could the same user possibly install two copies of the application at once? Half the reason you'd want to customize the install paths is to do that.
    Work off your executable path, though most applications don't bother supporting multiple system-level installs on the same machine (they don't make much sense with multiple users, where data has to be kept per-user), and instead support profiles (which are pretty simple to implement - just use a separate subfolder in %APPDATA% and key in registry for each profile).
    Something I've probably missed upthread - or it's not been raised - what about multiple versions of the same software?



    Since it was in a previous life (and quite a few versions of Windows ago, so maybe not as relevant to the topic), I'll name and shame one such that had both problems - Citect (SCADA software for plant/machinery monitoring); I seem to recall needing at one time some 6 or 7 versions installed since either

    1. different customers' projects used different versions [my point] and while you could probably open a V4 project in V5, once you had done so, you were buggered if you wanted to open in V4 again, and
    2. even the same version couldn't handle multiple (different customers') projects at the same time ['profiles' in the context of what other people are talking about here].


    Utter, utter PITA - especially the second since I seem to recall needing to come up with some hack to bugger around with registry settings or some-such just to be able to 'change projects'.

    I get the impression it was basically sold as a single project system with little in the way of support for software houses requiring front-ends for multiple customers.

    I've no idea what it's like since Schneider bought them out - or even if it still exists in any form - since that happened two years after I resigned from that job and stopped using it.


  • @PJH said:

    @ender said:
    @ais523 said:
    Then how could the same user possibly install two copies of the application at once? Half the reason you'd want to customize the install paths is to do that.
    Work off your executable path, though most applications don't bother supporting multiple system-level installs on the same machine (they don't make much sense with multiple users, where data has to be kept per-user), and instead support profiles (which are pretty simple to implement - just use a separate subfolder in %APPDATA% and key in registry for each profile).
    Something I've probably missed upthread - or it's not been raised - what about multiple versions of the same software?

    I don't think it was specifically mentioned, but that is what I assumed, without even realizing I was assuming it, when ais523 mentioned installing two copies of an application. That is why I install multiple copies of a single application; I'm having a hard time thinking of another reason for doing so. (No doubt I'll have a duh moment when someone posts other good and should-be-obvious reasons.)



  • Just version your AppData stuff, and write an importer to import from the old version to the new. So your folder is like "AppData/DumbApp/1.25/shit" and you also have "AppData/DumbApp/1.4/more_shit".
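
    A rough sketch of that layout plus the one-time import, assuming the settings are plain files that can simply be copied forward (Python; the app name and version strings are just the examples above):

        import os
        import shutil
        from pathlib import Path

        APP_ROOT = Path(os.environ["APPDATA"]) / "DumbApp"
        CURRENT = APP_ROOT / "1.4"
        PREVIOUS = APP_ROOT / "1.25"

        def settings_dir():
            # First run of the new version: import (here, just copy) the old
            # version's settings, then use the versioned folder from then on.
            if not CURRENT.exists() and PREVIOUS.exists():
                shutil.copytree(PREVIOUS, CURRENT)
            CURRENT.mkdir(parents=True, exist_ok=True)
            return CURRENT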

    However doing this is dumb and I hate you.


  • Discourse touched me in a no-no place

    @HardwareGeek said:

    @PJH said:
    Something I've probably missed upthread - or it's not been raised - what about multiple versions of the same software?

    I don't think it was specifically mentioned, but that is what I assumed, without even realizing I was assuming it, when ais523 mentioned installing two copies of an application. That is why I install multiple copies of a single application; I'm having a hard time thinking of another reason for doing so. (No doubt I'll have a duh moment when someone posts other good and should-be-obvious reasons.)
    The other reason for having multiple installations of an application is to have enhanced isolation between them; putting each in its own VM is just an extreme form of this.



  • @blakeyrat said:

    @joe.edwards said:
    He's trying to support "portable installations" I think (where you install on a USB drive and the settings are carried along with it). Most applications I've seen that support this scenario ask the user at installation if they want a normal or portable installation.

    Oh. Then his insane rambling makes some tiny sort of sense. Imagine how much less confused we'd all be if he'd typed like 4 words to actually COMMUNICATE WHAT PROBLEM HE WAS TRYING TO SOLVE.

     

    My problem is: allow the user to install the application wherever they want, and have it still work, with multiple installs in independent locations being 100% independent of each other.

    Reasons a user might want to do this (all of which have actually come up):

    • They don't have permission to write AppData (or don't want to write AppData), but nonetheless want to run the program. If the program looks for its data files in AppData, it won't find them, because the user didn't have permission to write there.
    • They're installing the application for other people to access remotely; and they also want to use the application locally. The remote use is heavily locked down, for security reasons; it doesn't have permission to even read any of the known folders, and thus cannot be installed in any of them. The local use is not locked down, it's just a "normal install". Both are the same version of the software.
    • The user is working on local modifications to the software (perhaps for their own amusement), some of which involve recompiling it. They also want an unmodified version of the software, to compare. (As far as I know, the usual way to achieve this is for the locally modified version to be stored in the equivalent of the Documents folder, and not touch any of the system install paths.) The "have one version of the data and registry keys" method would work for this purpose (although not for the others), but only if the user set their own version number (ensuring it didn't clash with any "official" version numbers). Things get radically more complex if the user wants two separate locally modified versions, with unrelated modifications (this has also actually happened).

    Reasons that haven't come up for me, but are plausible (and have been mentioned by others upthread):

    • The application is stored on a USB stick, and moved about between computers that aren't connected to the same network. Its configuration should also be on that same USB stick, or it won't be able to find it if it's moved to a different computer.

    And I guess the real reason is that this functionality is standard in installers, or at least I thought it was. Are you implying that when I use one of those installers that allows editing of the install path, if I install the application a second time in a second location, the first installation will stop working? That removes most of the point of customizing the install path in the first place.

    BTW, when I was talking about writing the paths into the executable, I meant that the installer modified the executable to know where it had been installed. DEP has nothing to do with that, because the executable isn't running while it's being modified. And when I was talking about running two versions of the application, I mean two different versions; it's also supported, and entirely normal, to open the same executable twice.

    @blakeyrat said:

    Anyway, the best/only way to handle this is to get the path to your own executable (not as trivial as it seems) and have a folder called something like "thisapp_config" at the same level in the filesystem and create your own little mini-AppData and mini-Registry in there in the form of files.

    I want to store in AppData (and the Registry if necessary, although that makes it impossible for the user to uninstall without a specific uninstall program or a lot of searching around and guessing), if the program itself is in Program Files; a default install should look default. (And I'm aware of how hard it is to get the path to your own executable, although I was hoping that it would at least be possible.)



  • @ais523 said:

    They don't have permission to write AppData

    This guy's computer is broken. No point supporting him.

    @ais523 said:

    (or don't want to write AppData)

    This guy's a retard. No point supporting him. (This guy's like the guy who disables JavaScript, then crows everywhere about how websites don't work with JS turned off. Those are the worst people.)

    @ais523 said:

    They're installing the application for other people to access remotely; and they also want to use the application locally. The remote use is heavily locked down, for security reasons; it doesn't have permission to even read any of the known folders, and thus cannot be installed in any of them. The local use is not locked down, it's just a "normal install". Both are the same version of the software.

    I don't even know what this one means. Access remotely how? Over a network share? In that case, the rules are no different-- the only catch is it might be more difficult to find the application's path (no disk letter necessarily, and some APIs have issues with \\ paths), and once found it's a lot more likely the appname_settings folder is inaccessible.

    But if computer B runs a program from computer A over a network drive, and that program asks for named folders, it gets computer B's named folders. So I don't see how computer A could possibly "lock it down" to the point where it can't use named folders, unless the same admin is in charge of both computers and set it up that way-- then we're back to the first "the computer is broken" reply.

    If you mean access remotely as in "using Remote Desktop", then there's no issue-- the remoter has their own user account which works like any other independent user account.

    And BTW, we're two use-cases in, and both of them are fucking ridiculous. If these actually came up in support tickets, then your product is used by a lot of morons who don't understand how computers work. I would love, love, to hear this moron's ridiculous explanation for making AppData read-only. I can only guess it involves magical invisible elves who whisper to him.

    @ais523 said:

    The user is working on local modifications to the software (perhaps for their own amusement), some of which involve recompiling it. They also want an unmodified version of the software, to compare. (As far as I know, the usual way to achieve this is for the locally modified version to be stored in the equivalent of the Documents folder, and not touch any of the system install paths.) The "have one version of the data and registry keys" method would work for this purpose (although not for the others), but only if the user set their own version number (ensuring it didn't clash with any "official" version numbers). Things get radically more complex if the user wants two separate locally modified versions, with unrelated modifications (this has also actually happened).

    The method of doing "portable installs" I recommended works for this. Hell, with that method you can make any install a portable install just by adding the myapp_settings folder.

    @ais523 said:

    The application is stored on a USB stick, and moved about between computers that aren't connected to the same network. Its configuration should also be on that same USB stick, or it won't be able to find it if it's moved to a different computer.

    Ditto.

    @ais523 said:

    And I guess the real reason is that this functionality is standard in installers, or at least I thought it was.

    And the basis of this thinking is... what? Have you ever actually tried it?

    @ais523 said:

    Are you implying that when I use one of those installers that allows editing of the install path, if I install the application a second time in a second location, the first installation will stop working?

    Typically if you run an installer for an application that's already installed, the only options you get are "repair" or "remove". Have you seriously never actually tried this? And you write software? Do you... do you even own a computer? Or do you just write software on a legal pad and fax it to Bermuda for someone to type in?

    Did it ever occur to you to maybe learn how a computer works before writing software for one?

    @ais523 said:

    That removes most of the point of customizing the install path in the first place.

    Not really.

    @ais523 said:

    BTW, when I was talking about writing the paths into the executable, I meant that the installer modified the executable to know where it had been installed. DEP has nothing to do with that, because the executable isn't running while it's being modified. And when I was talking about running two versions of the application, I mean two different versions; it's also supported, and entirely normal, to open the same executable twice.

    Yes, well, you're a fucking terrible communicator. And writing a hard-coded path in an executable is such a fucking awful idea.

    @ais523 said:

    I want to store in AppData (and the Registry if necessary, although that makes it impossible for the user to uninstall without a specific uninstall program or a lot of searching around and guessing), if the program itself is in Program Files; a default install should look default. (And I'm aware of how hard it is to get the path to your own executable, although I was hoping that it would at least be possible.)

    Look. Your installer has a fucking check box. The check box reads "portable install". If the user selects that when the application is installed, the installer writes a folder called "myapp_settings" into the same folder as the .exe.

    When your .exe boots, the first thing it does is look for "myapp_settings". If it finds it, it creates its own virtual AppData and virtual Registry inside this path and uses those virtual paths instead of the real AppData or Registry. (Making a compatible wrapper to write Registry entries to either place I leave as an exercise to you.) If the "myapp_settings" folder isn't present, the app uses the normal AppData or Registry path like a normal app written by a normal person normally.
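
    A minimal sketch of that startup check (Python for illustration; "myapp_settings" is the placeholder name from the post, and a real app would wrap registry access behind the same switch):

        import os
        import sys
        from pathlib import Path

        def config_root(app="myapp"):
            # A folder sitting next to the executable marks a portable install.
            exe_dir = Path(sys.argv[0]).resolve().parent
            portable = exe_dir / f"{app}_settings"
            if portable.is_dir():
                return portable  # mini-AppData/mini-Registry live in here
            # Otherwise behave like a normal install and use per-user AppData.
            return Path(os.environ["APPDATA"]) / app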

    This isn't rocket science.



  • @blakeyrat said:

    @ais523 said:
    They don't have permission to write AppData

    This guy's computer is broken. No point supporting him.

     

    I thought only Administrators could write there? Or did I pick the wrong folder? I was talking about the known folder in which programs store their read-only data files.

    @blakeyrat said:

    @ais523 said:
    They're installing the application for other people
    to access remotely; and they also want to use the application locally.
    The remote use is heavily locked down, for security reasons; it doesn't
    have permission to even read any of the known folders, and thus cannot
    be installed in any of them. The local use is not locked down, it's just
    a "normal install". Both are the same version of the software.

    I don't even know what this one means. Access remotely how? Over a network share?

    Most commonly, over a remote-desktop-like functionality, except that the target user is highly locked down and doesn't have permissions to anything other than the files used by one application. (Sometimes it'll be a web interface instead, that's in a separate executable written for the purpose.) I wouldn't want random people to be able to potentially read any registry entry in HKLM if they found an exploit in the application, or see the files used by the other programs I have installed, etc.. The idea is to be able to host a system that lets other people use your application remotely (basically, SaaS/cloud usage of it), while maintaining some sort of security.

    @blakeyrat said:

    If you mean access remotely as in "using Remote Desktop", then there's no issue-- the remoter has their own user account which works like any other independent user account.

    I guess this is the fundamental disconnect between our points of view. The remoter has an account that has very, very low permissions. Not even enough to make any sort of basic use of the system. When such systems are set up on Linux, they frequently don't have read access to such fundamental folders as /usr or /etc. Are you saying that on Windows, I can't lock down an account so far that it cannot read Program Files or the registry, perhaps for the purpose of allowing people to remote into it to run one specific program?

    @blakeyrat said:

    @ais523 said:
    And I guess the real reason is that this functionality is standard in installers, or at least I thought it was.

    And the basis of this thinking is... what? Have you ever actually tried it?

    Yes, but only on Linux, which is my primary system, and it works as I expected there. --prefix is the #1 most commonly used option to the configure/make/make install process for an install from source, and usually the very first feature request on any install system that doesn't have it. (DESTDIR is also quite common; that's used for a workflow of "install into directory X, but tell the executable that it's installed in directory Y", which has various uses for build system automation.) Part of the reason is that standard Linux systems have two hierarchies into which programs are commonly installed: /usr which is managed by the operating system, and /usr/local that is managed by the system administrator (and never touched via the equivalent of Add/Remove programs), so the ability to specify which one you want to install into is kind-of critical.

    I assumed that the functionality would also be standard on Windows.

    @blakeyrat said:

    Typically if you run an installer for an application that's already installed, the only options you get are "repair" or "remove". Have you seriously never actually tried this? And you write software? Do you... do you even own a computer? Or do you just write software on a legal pad and fax it to Bermuda for someone to type in?

    I own a computer, but rarely boot it into Windows, which is partly why I'm asking about common Windows practice; the situation of actually installing a program on Windows has come up incredibly rarely for me (and the few times it has, the installer often clearly isn't well-behaved for a standard Windows installer, e.g. one of them couldn't handle paths with spaces even though "Program Files" has a space in it specifically to ensure that installers handle paths with spaces in).

    On Linux, if you run an installer on an application that's already installed, it'll do a repair; if you change the installation directory, it will do a second install, into that chosen directory (while leaving the original directory intact). If you want to uninstall it, you run the uninstaller instead (which is normally the same program as the installer, but run a different way, either with different command-line options or using the "uninstall" command in a GUI whose purpose is to run installers and uninstallers). Or, if you installed it into a directory of its own, you can (if the installer was correctly written) just delete that directory instead, because it didn't touch anything outside that directory.

    If you're unlucky, and have a badly written installer, it'll get the path handling wrong and touch things in directories it's not supposed to. This is, ofc, a fun source of WTFs in its own right (e.g. there is a program whose purpose is to attach a debugger to an installer in order to log every path it touches, and uses the information to generate a matching uninstaller; the WTF is not the program itself, but the fact that it's necessary). 

    @blakeyrat said:

    Look. Your installer has a fucking check box. The check box reads "portable install". If the user selects that when the application is installed, the installer writes a folder called "myapp_settings" into the same folder as the .exe.

    When your .exe boots, the first thing it does is look for "myapp_settings". If it finds it, it creates its own virtual AppData and virtual Registry inside this path and uses those virtual paths instead of the real AppData or Registry. (Making a compatible wrapper to write Registry entries to either place I leave as an exercise to you.) If the "myapp_settings" folder isn't present, the app uses the normal AppData or Registry path like a normal app written by a normal person normally.

     

    OK, I think this almost works. The main way in which I see it failing is that it requires the executable to be on a writable disk, but systems on which people want executables on non-writable disks are rare on Windows(?) and so it probably won't be a problem in practice?



  • @ais523 said:

    I thought only Administrators could write there? Or did I pick the wrong folder? I was talking about the known folder in which programs store their read-only data files.

    What huh?

    Programs store their read-only data files in Program Files, generally. You want them in a place where every user can access them, and since they're read-only it's harmless to have them in Program Files.

    Roaming AppData is for read-write configuration files. (Roaming means these files can "roam" across an Active Directory domain to another physical PC if needed.) Each user has their own Roaming AppData.

    Normal (Local) AppData is for read-write caching, basically files which if deleted won't impact your program. (The perfect example of this is a web browser's cache. You don't want it to roam over the network, and deleting the files is harmless.) Each user has their own Local AppData.

    EDIT: to clarify, the reason you can't put read-only files created at install-time in AppData is because AppData is per-user and at the time you install your app you don't know which users will require the file. You could write it in every existing user's AppData (if permissions allow), but as soon as a new user were created your program would be broken for them.
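
    For what it's worth, a quick way to see where those per-user folders land, using the environment variables Windows sets ("MyApp" is a made-up name):

        import os
        from pathlib import Path

        APP = "MyApp"

        # Roaming: per-user settings that follow the profile to other machines
        # on a domain, e.g. C:\Users\<name>\AppData\Roaming\MyApp
        roaming = Path(os.environ["APPDATA"]) / APP

        # Local ("normal") AppData: per-user caches that stay on this machine,
        # e.g. C:\Users\<name>\AppData\Local\MyApp
        local = Path(os.environ["LOCALAPPDATA"]) / APP

        # Read-only files shipped with the program belong under Program Files,
        # next to the executable, not in either of these.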

    @ais523 said:

    Most commonly, over a remote-desktop-like functionality, except that the target user is highly locked down and doesn't have permissions to anything other than the files used by one application.

    What does this MEAN!

    So it's Remote Desktop-like, but it's not Remote Desktop. So what is it? Knowing that is kind of critical to being able to answer the question. For example, if you're using VNC instead of Remote Desktop, then the remote user does not get their own individual user account, and all remote users would be sharing the same settings.

    @ais523 said:

    (Sometimes it'll be a web interface instead, that's in a separate executable written for the purpose.)

    Now I'm even more confused. If it's running in a web server, it's a completely different app you need to handle like any other web application.

    @ais523 said:

    I wouldn't want random people to be able to potentially read any registry entry in HKLM if they found an exploit in the application, or see the files used by the other programs I have installed, etc.. The idea is to be able to host a system that lets other people use your application remotely (basically, SaaS/cloud usage of it), while maintaining some sort of security.

    Remote Desktop is by far the best solution for this. A simple screensharer like VNC would be terrible if you want to maintain security, and not only because it's open source and probably buggy as shit.

    @ais523 said:

    I guess this is the fundamental disconnect between our points of view. The remoter has an account that has very, very low permissions. Not even enough to make any sort of basic use of the system. When such systems are set up on Linux, they frequently don't have read access to such fundamental folders as /usr or /etc. Are you saying that on Windows, I can't lock down an account so far that it cannot read Program Files or the registry, perhaps for the purpose of allowing people to remote into it to run one specific program?

    Can you? Yes. You can lock down an account so it doesn't even have access to Program Files or the Registry. If Remote Desktop can log in under those circumstances (unlikely), then it would be able to run an application you provide that doesn't require access to either of those things. But that setup would be extraordinarily unusual.

    How would I do it? Set up your template "new user" account so it has read-only access to Program Files and Registry (which is probably required for someone to remote in anyway), but give it a read-only desktop that only has access to your application. Also add your application to Startup, so it'll automatically start when someone logs in. Then you've pretty much made a kiosk, but you can still add new users as-needed. If someone causes trouble (and I think the worst they could possibly do here is fill up disk... which you can prevent with a strict quota), blow their account away and remake it.

    Of course the normal solution here would be to not make it a desktop application in the first place. Why isn't it a web app, if this is what you want?

    @ais523 said:

    Yes, but only on Linux, which is my primary system, and it works as I expected there.

    Linux doesn't even have installers.

    @ais523 said:

    --prefix is the #1 most commonly used option to the configure/make/make install process for an install from source, and usually the very first feature request on any install system that doesn't have it.

    Same with install location.

    @ais523 said:

    (DESTDIR is also quite common; that's used for a workflow of "install into directory X, but tell the executable that it's installed in directory Y", which has various uses for build system automation.) Part of the reason is that standard Linux systems have two hierarchies into which programs are commonly installed: /usr which is managed by the operating system, and /usr/local that is managed by the system administrator (and never touched via the equivalent of Add/Remove programs), so the ability to specify which one you want to install into is kind-of critical.

    Right, and Windows has System/System32 which is used by the OS and Program Files which is for stuff installed by the user/admin. The difference is that you can't install into System/System32 because why the fuck would you be able to? Why would this be something you need to tell the OS at install time? If the OS manages the programs in /usr doesn't it already fucking know that? It can just set the permissions when the OS is installed to disallow it! (And also: why the fuck would the "programs managed by the admin/user" be in a subfolder of "programs managed by the OS"... how does that make any sense?)

    @ais523 said:

    I assumed that the functionality would also be standard on Windows.

    It is. It's just done automatically so you never have to think about it.

    @ais523 said:

    I own a computer, but rarely boot it into Windows, which is partly why I'm asking about common Windows practice; the situation of actually installing a program on Windows has come up incredibly rarely for me (and the few times it has, the installer often clearly isn't well-behaved for a standard Windows installer, e.g. one of them couldn't handle paths with spaces even though "Program Files" has a space in it specifically to ensure that installers handle paths with spaces in).

    So if you know it's broken, fix it.

    Hey answer a riddle for me. If you go into your stupid-ass Linux build process and set the dash-dash-prefix to something like "my app" does it fail there too? I bet it does. I bet it's fucking broken on its native OS. But I'm too lazy to get a VM and try it.

    @ais523 said:

    On Linux, if you run an installer on an application that's already installed, it'll do a repair; if you change the installation directory, it will do a second install, into that chosen directory (while leaving the original directory intact). If you want to uninstall it, you run the uninstaller instead (which is normally the same program as the installer, but run a different way, either with different command-line options or using the "uninstall" command in a GUI whose purpose is to run installers and uninstallers).

    Linux doesn't even have installers. You're the first Linux user I've ever seen to even talk about installers, in a way other than telling me how much better package managers are and how I'm so stupid for not appreciating package managers and package managers solve all problems ever.

    @ais523 said:

    Or, if you installed it into a directory of its own, you can (if the installer was correctly written) just delete that directory instead, because it didn't touch anything outside that directory.

    Really? How is that possible, if Linux is a multi-user system? And doesn't have any kind of non-filesystem based way of saving configuration?

    @ais523 said:

    OK, I think this almost works. The main way in which I see it failing is that it requires the executable to be on a writable disk, but systems on which people want executables on non-writable disks are rare on Windows(?) and so it probably won't be a problem in practice?

    If someone goes out of their way to make a portable install, you pretty much just have to assume they kind of know what they're doing. I mean, you are 100% correct.

    I think you can set up Windows to open USB drives read-only, and in that environment your portable app installed on the USB drive wouldn't be able to save its settings when the user was done with it. Pretty much all you can do about that is give them a little message saying so.



  • @blakeyrat said:

    @ais523 said:
    I thought only Administrators could write there? Or did I pick the wrong folder? I was talking about the known folder in which programs store their read-only data files.

    What huh?

    Programs store their read-only data files in Program Files, generally. You want them in a place where every user can access them, and since they're read-only it's harmless to have them in Program Files.

    Roaming AppData is for read-write configuration files. (Roaming means these files can "roam" across an Active Directory domain to another physical PC if needed.) Each user has their own Roaming AppData.

    Normal (Local) AppData is for read-write caching, basically files which if deleted won't impact your program. (The perfect example of this is a web browser's cache. You don't want it to roam over the network, and deleting the files is harmless.) Each user has their own Local AppData.

     

    Ah right, sorry, in that case I just got the folders muddled. The reason for the confusion is that on Linux, there are separate known-location-equivalents for executables (which depend on the architecture), and read-only data files (which don't), both of which are read-only and centralized. The reason is that if you, say, have a 64-bit and 32-bit version of the same program, they can then share their data. Or you can share the read-only data files over NFS even if you have a bunch of programs running different architectures.

    I agree with your clarification, and am sorry about the disconnect. I assumed that AppData was the equivalent of /usr/share, when it seems it's actually the equivalent of /home/username/.cache (non-roaming) or /home/username/.share (roaming). And /usr/share and /usr/bin are both lumped together as Program Files, it seems.

    @blakeyrat said:

    @ais523 said:

    Most commonly, over a remote-desktop-like functionality, except that the target user is highly locked down and doesn't have permissions to anything other than the files used by one application.

    What does this MEAN!

    So it's Remote Desktop-like, but it's not Remote Desktop. So what is it? Knowing that is kind of critical to being able to answer the question. For example, if you're using VNC instead of Remote Desktop, then the remote user does not get their own individual user account, and all remote users would be sharing the same settings.

    In my case, this is a command-line program, and the program in question is a service that accepts commands from arbitrary people over the Internet, provides them to the program's command line, and then sends the results back across the Internet. This is effectively a command-line Remote Desktop. (You could even use telnet for the purpose, in fact. Just imagine it's telnet except vaguely secure.) The service has the same permissions as the program; it runs under an account that basically doesn't have permissions to access anything, all it can do is run one program. (I know that Windows' permissions system is entirely capable of implementing this.) Being a service, it's running from boot time, so has no separate "log in" stage, and thus the login process doesn't need to be able to complete normally.
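
    One way to read that description, as a very rough sketch (Python; the program path and port are invented, and a real service would add authentication and run under the locked-down account):

        import socketserver
        import subprocess

        PROGRAM = r"C:\LockedDown\app.exe"  # hypothetical path to the wrapped CLI program

        class CommandHandler(socketserver.StreamRequestHandler):
            def handle(self):
                # Read one command per line, hand it to the program's command
                # line, and send whatever it prints back over the connection.
                for line in self.rfile:
                    args = line.decode(errors="replace").split()
                    result = subprocess.run([PROGRAM, *args],
                                            capture_output=True, text=True)
                    self.wfile.write(result.stdout.encode())

        if __name__ == "__main__":
            with socketserver.TCPServer(("0.0.0.0", 4000), CommandHandler) as server:
                server.serve_forever()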

    @blakeyrat said:

    @ais523 said:
    (Sometimes it'll be a web interface instead, that's in a separate executable written for the purpose.)

    Now I'm even more confused. If it's running in a web server, it's a completely different app you need to handle like any other web application.

    A service with a web interface is just a webserver. So it's much the same as a service with a network interface.

    @blakeyrat said:

    Right, and Windows has System/System32 which is used by the OS and Program Files which is for stuff installed by the user/admin. The difference is that you can't install into System/System32 because why the fuck would you be able to? Why would this be something you need to tell the OS at install time? If the OS manages the programs in /usr doesn't it already fucking know that? It can just set the permissions when the OS is installed to disallow it! (And also: why the fuck would the "programs managed by the admin/user" be in a subfolder of "programs managed by the OS"... how does that make any sense?)

    I think there's been a communication problem here again. On Linux distributions, you normally install via package managers (which, basically, download and run the installer for you), which is part of the distribution (which corresponds to the OS; Linux the kernel has no concept of package managers, but it's only one component of the distribution). These install into the main /usr hierarchy. You can also install programs by either running the installer by hand, or (if you feel like it) copying files into the /usr/local hierarchy manually; in both cases, the files end up in /usr/local (which is important, because otherwise the package manager might overwrite them).

    As for why /usr/local is a subdirectory of /usr, this is indeed a WTF (although not as much of a WTF as the reason the directory is called /usr in the first place). It's worked around in practice (the package manager knows that /usr/local is special and doesn't touch it), but it would definitely be possible to do better. (There was something of a campaign to use /opt for the purpose instead, but that led to some sort of bureaucratic process with the result that nobody really knows what /opt is for, and in my experience, it normally seems to be used for programs that were ported directly from Windows and nothing else.) The way in which the home file directories start with dots is also a WTF, and again for stupid historical reasons (Windows' approach, in putting things like AppData in the directory below Documents, is a lot more sensible).

    @blakeyrat said:

    So if you know it's broken, fix it.

    I'm talking about other people's installers here, which are broken on Windows. (What this thread is notionally about, in fact.)

    @blakeyrat said:

    Hey answer a riddle for me. If you go into your stupid-ass Linux build process and set the dash-dash-prefix to something like "my app" does it fail there too? I bet it does. I bet it's fucking broken on its native OS. But I'm too lazy to get a VM and try it.

    I did just try. The installer in autoconf/automake is the most commonly used one on Linux; it handles spaces in directory names just fine (I just installed a program to a directory with spaces in). However, the programs that are actually installed often themselves have problems in such a configuration (for instance, failing to escape them correctly in command-line arguments when invoking other programs via the equivalent of Windows' CreateProcess). That said, it did fail when I tried a double quote in the directory name instead (although double quotes are illegal in filenames on Windows, so all Windows programs are either perfect or terrible at handling them depending on your point of view), and it's easy for programs using the installer to misconfigure it in a way that it fails to handle spaces (by not telling it how it wants them escaped).

    My own system that I'm working on did indeed fail (it handles the spaces itself just fine, so it works in simple cases, but cannot correctly communicate them to custom build steps defined by the application; this is close, but not perfect, and not good enough). I view this as a bug that needs fixing; thank you for drawing it to my attention. It makes sense to fix this at the same time as implementing Windows support.
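
    The class of bug being described, in miniature (Python; the path is invented, and this is not ais523's actual build system):

        import subprocess

        exe = r"C:\Program Files\Some Tool\tool.exe"  # hypothetical path with a space

        # Fragile: one big command string means every space and quote has to be
        # escaped by hand, which is exactly where custom build steps go wrong.
        #   subprocess.run(f'"{exe}" --input "my file.txt"', shell=True)

        # Robust: pass the arguments as a list, so the process-spawning API
        # (CreateProcess on Windows, execve on Linux) receives them unmangled.
        subprocess.run([exe, "--input", "my file.txt"])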

    @blakeyrat said:

    Linux doesn't even have installers. You're the first Linux user I've ever seen to even talk about installers, in a way other than telling me how much better package managers are and how I'm so stupid for not appreciating package managers and package managers solve all problems ever.

    The package managers work via either a) downloading and running the installer; or b) running the installer on the distributor's system, recording the results, then downloading the results and copying them into the appropriate places. (The latter method is why you'd want to install in directory X, telling the program you're actually installing in directory Y; directory X is used to record the results of the installation.) So Linux users rarely deal with installers directly. People doing packaging for Linux, however, have to write the installers (if the installer is well-behaved, the package manager doesn't need much information; you can just tell it "this installer works like every other installer, do what you normally do" and it will work).

    WRT package managers solving every problem ever, they don't; however, what they do do, is cause those problems to become someone else's (the team who administers the package database). In a way, this is solving problems from the user's point of view. Having seen what happens when they go wrong (so far, twice in seven years), I can say they're no magical tool for making the problems not exist in the first place, and recovering from such a problem is almost as hard as it would be on Windows (slightly easier, in that all the source information is on someone else's server and so there's a limit to how badly you can screw things up).

    @blakeyrat said:

    @ais523 said:
    Or, if you installed it into a directory of its own, you can (if the installer was correctly written) just delete that directory instead, because it didn't touch anything outside that directory.

    Really? How is that possible, if Linux is a multi-user system? And doesn't have any kind of non-filesystem based way of saving configuration?

    The configuration would be installed into the same "directory of its own", in that situation.

    @blakeyrat said:


    @ais523 said:

    OK, I think this almost works. The main way in which I see it failing is that it requires the executable to be on a writable disk, but systems on which people want executables on non-writable disks are rare on Windows(?) and so it probably won't be a problem in practice?

    If someone goes out of their way to make a portable install, you pretty much just have to assume they kind of know what they're doing. I mean, you are 100% correct.

    I don't use it myself, but apparently several people use a W^X system for disks on Linux (W^X normally refers to the practice of making sure that memory is never simultaneously writable and executable, which I believe is called DEP on Windows?); each disk either contains executable files, or writable files, but never both (and this is enforced at the filesystem level). You temporarily overwrite the "writable" flag on the executables disk to do an install, but otherwise leave it the same. Obviously, in such a case, the configuration would be on a different disk to the executable, in which case it would be awkward to put them in parallel directories (you could do it with Linux bind mounts or Windows junctions, but probably wouldn't want to). It's that sort of configuration I had in mind when I made my comment. (As well as just --prefix, most installers allow you to specify specific directories for executables, configuration, etc., so as to be able to support that practice, although that's not as ubiquitous as --prefix.)



  • @ais523 said:

    Ah right, sorry, in that case I just got the folders muddled. The reason for the confusion is that on Linux, there are separate known-location-equivalents for executables (which depend on the architecture), and read-only data files (which don't), both of which are read-only and centralized. The reason is that if you, say, have a 64-bit and 32-bit version of the same program, they can then share their data. Or you can share the read-only data files over NFS even if you have a bunch of programs running different architectures.

    That's an interesting idea, but Microsoft makes the (IMO pretty solid) assumption that you'd only ever have either the 32-bit or the 64-bit version installed, not both. (The one exception I can think of is older versions of IE-- I'm not sure how they solved this, but I wager they just copied the same data files into both the 32-bit and 64-bit folders.)

    @ais523 said:

    I agree with your clarification, and am sorry about the disconnect. I assumed that AppData was the equivalent of /usr/share, when it seems it's actually the equivalent of /home/username/.cache (non-roaming) or /home/username/.share (roaming). And /usr/share and /usr/bin are both lumped together as Program Files, it seems.

    You gotta look at it the other way around. The Windows folder system is MUCH more comprehensive than the Linux one-- there are tons of Windows named folders that don't have Linux equivalents, and at least one that's used entirely differently (the user's "home" folder.) If you insist on equating every Windows known folder to one in Linux, you're going to produce buggy, broken, awful software-- the better solution is to do what Mono does, and artificially impose Windows' system on Linux.

    @ais523 said:

    I agree with your clarification, and am sorry about the disconnect. I assumed that AppData was the equivalent of /usr/share, when it seems it's actually the equivalent of /home/username/.cache (non-roaming) or /home/username/.share (roaming). And /usr/share and /usr/bin are both lumped together as Program Files, it seems.

    The whole idea of putting all .exes in a directory divorced from any contextual information about what program they are, or what associated files they have, is such a terrible idea I can't imagine that even Linux users are a fan of it. The Windows system kind of sucks, too, honestly... the Mac (both Classic, using "resource forks", and OS X, using application bundles) approach is much better.

    @ais523 said:

    (Windows' approach, in putting things like AppData in the directory below Documents, is a lot more sensible).

    AppData isn't below Documents, assuming by "below" you mean "inside". At least not by default.

    The important thing to remember about Windows named folders is that they could be anywhere, on any drive, on any network. And the only way to find out where they are for the current login is to ask the OS.

    @ais523 said:

    In my case, this is a command-line program, and the program in question is a service that accepts commands from arbitrary people over the Internet, provides them to the program's command line, and then sends the results back across the Internet. This is effectively a command-line Remote Desktop. (You could even use telnet for the purpose, in fact. Just imagine it's telnet except vaguely secure.) The service has the same permissions as the program; it runs under an account that basically doesn't have permissions to access anything, all it can do is run one program. (I know that Windows' permissions system is entirely capable of implementing this.) Being a service, it's running from boot time, so has no separate "log in" stage, and thus the login process doesn't need to be able to complete normally.

    Windows has admittedly poor support for running CLI programs over a network without using something like Remote Desktop to set up a graphical remote first. Windows wasn't built to support CLI programs well, because they were rightly considered arcane and obsolete even in 1995 when this UI was first being designed.

    Going back in time and telling them about the current Linux developer holdouts still building these shitty UIs in 2014 would probably cause all their monocles to pop out.

    @ais523 said:

    My own system that I'm working on did indeed fail (it handles the spaces itself just fine, so it works in simple cases, but cannot correctly communicate them to custom build steps defined by the application; this is close, but not perfect, and not good enough). I view this as a bug that needs fixing; thank you for drawing it to my attention. It makes sense to fix this at the same time as implementing Windows support.

    Woot, I win my head-bet.

    @ais523 said:

    WRT package managers solving every problem ever, they don't; however, what they do do, is cause those problems to become someone else's (the team who administers the package database). In a way, this is solving problems from the user's point of view.

    ... unless the user's problem is, "I want to install a program that the package manager group hasn't heard of yet". Which is basically the most important problem that needs solving. (Given, the Windows solution sucks also, but.)

    @ais523 said:

    The configuration would be installed into the same "directory of its own", in that situation.

    My point is that each user who logged in and ran the program would have generated files in their own user folders that simply deleting the folder containing the application wouldn't be able to remove.

    Again, it's not as if Windows or OS X solves that problem, either. So Linux is no worse in that regard. Just pointing out that saying you can uninstall an entire application just by deleting its folder is pretty misleading.

    @ais523 said:

    I don't use it myself, but apparently several people use a W^X system for disks on Linux (W^X normally refers to the practice of making sure that memory is never simultaneously writable and executable, which I believe is called DEP on Windows?); each disk either contains executable files, or writable files, but never both (and this is enforced at the filesystem level). You temporarily overwrite the "writable" flag on the executables disk to do an install, but otherwise leave it the same. Obviously, in such a case, the configuration would be on a different disk to the executable, in which case it would be awkward to put them in parallel directories (you could do it with Linux bind mounts or Windows junctions, but probably wouldn't want to). It's that sort of configuration I had in mind when I made my comment. (As well as just --prefix, most installers allow you to specify specific directories for executables, configuration, etc., so as to be able to support that practice, although that's not as ubiquitous as --prefix.)

    Sounds like overcomplicated crap. But there's no law that said the Linux version of your portable install needs to run the same way as the Windows version. And since you're a shitty programmer who doesn't even know how a computer works, and didn't bother spending any time making your application usable, it's not like people will expect it to be non-broken.



  • @ais523 said:

    Not even enough to make any sort of basic use of the system. When such systems are set up on Linux, they frequently don't have read access to such fundamental folders as /usr or /etc.

    Um.. what? If you can't read /usr or /etc then you can't do anything.



  • @blakeyrat said:

    Just pointing out that saying you can uninstall an entire application just by deleting its folder is pretty misleading.

    It's really misleading. The standard for Linux/Unix is to scatter application files all over the filesystem, not unlike a demented Johnny Appleseed. So any application that follows the standard is going to have files in a half-dozen-plus locations.



  • @morbiuswilters said:

    @ais523 said:
    Not even enough to make any sort of basic use of the system. When such systems are set up on Linux, they frequently don't have read access to such fundamental folders as /usr or /etc.

    Um.. what? If you can't read /usr or /etc then you can't do anything.

    I think that was the point. This login is allowed to run /foo/bar/baz -option blat and absolutely nothing else. I'm guessing it can't even give that command line to a shell to process any metacharacters in -option blat.



  • @blakeyrat said:

    The important thing to remember about Windows named folders is that they could be anywhere, on any drive, on any network.
    Same applies to any folder in the Unix filesystem tree. System administrators can mount anything anywhere.



  • @flabdablet said:

    Same applies to any folder in the Unix filesystem tree. System administrators can mount anything anywhere.

    Then why the fuck you be hard-coding the path!? As we've already discussed.



  • @blakeyrat said:

    You gotta look at it the other way around. The Windows folder system is MUCH more comprehensive than the Linux one-- there are tons of Windows named folders that don't have Linux equivalents, and at least one that's used entirely differently (the user's "home" folder.) If you insist on equating every Windows known folder to one in Linux, you're going to produce buggy, broken, awful software-- the better solution is to do what Mono does, and artificially impose Windows' system on Linux.
     

    I wasn't insisting that they had to be the same. I was simply pointing out that the fact that they were different was the reason for my confusion. (There is, indeed, a greater range of known folders on Windows than Linux.)

    @blakeyrat said:

    The whole idea of putting all .exes in a directory divorced from any contextual information about what program they are, or what associated files they have, is such a terrible idea I can't imagine that even Linux users are a fan of it.

    It can be a problem, indeed; there have even been Linux distros that aim to change it even though it means fighting the expectations of every program in existence. The package managers try to work around that problem (I can type "dpkg -S /usr/bin/cat", be told it belongs to "coreutils", then type "dpkg -L coreutils" and get a list of all the files associated with it), but that's no help when the program you want isn't in the repositories. It does mean you rarely have the wrong value for $PATH, but that's about the best thing you can say about it.

    Incidentally, just for fun, I tried to do the same thing through the package manager's GUI, and couldn't figure out how. This might say something about the relative discoverability of interfaces, or possibly not. Most likely they didn't implement it, on the grounds that the functionality was available through the CLI already and most people would look for it there rather than the GUI. Meanwhile, both those commands were on the first page of help output from the CLI.

    @blakeyrat said:

    AppData isn't below Documents, assuming by "below" you mean "inside". At least not by default.

    I meant "outside", and used entirely the wrong word. And I was talking about the default locations, indeed. My point is that the Windows default hierarchy, in which we have /Users/blakeyrat/AppData and /Users/blakeyrat/Documents, is more sensible than the Linux default hierarchy, in which we have /home/ais523/.share and /home/ais523, with the convention of using a dot on the filename to imply that it isn't "really" part of the home directory. (And that convention came about because the earliest versions of ls, the equivalent of dir from DOS, ignored files starting with dots in order to not list the . and .. entries that referred to the current and parent directories, so people thought it'd be a great name for directories that weren't really user-visible. As I said, quite something of a WTF.) Interestingly, Ubuntu has now created a /home/ais523/Documents. I'm not sure what their intended use is for it.

    @blakeyrat said:

    Windows has admittedly poor support for running CLI programs over a network without using something like Remote Desktop to set up a graphical remote first. Windows wasn't built to support CLI programs well, because they were rightly considered arcane and obsolete even in 1995 when this UI was first being designed.

    Going back in time and telling them about the current Linux developer holdouts still building these shitty UIs in 2014 would probably cause all their monocles to pop out.

     

    My program actually has a GUI as well as a CLI, but the users who want to use it remotely wanted to use a CLI, so there we are.

    @blakeyrat said:

    ... unless the user's problem is, "I want to install a program that the package manager group hasn't heard of yet". Which is basically the most important problem that needs solving. (Given, the Windows solution sucks also, but.)

    Yeah, this is definitely a problem. The common answer is "you run the installer by hand, setting it to install into /usr/local, and hope that a) the installer works, and b) you don't have to install so much manually that you get things like filename clashes in /usr/local/lib when two programs both try to install the same library". Oh, and "c) oh, we forgot to mention that you have to download and manually install all the dependencies first". This seems to be strictly worse than Windows, which suffers from problems a) and b) but normally doesn't suffer so much with c).

    There are a bunch of workarounds for a) (there are programs that sandbox installers to make sure that they're behaving correctly, for instance), but b) and c) are entirely left to the team managing the package manager (those sandboxing programs can detect the existence of filename clashes, but can't do anything to solve them), and you're in trouble if the program you want to install isn't there. Incidentally, the existence of problem a), on Linux and Windows, is part of the reason I'm trying so hard to get my installer to work correctly.

    @blakeyrat said:

    My point is that each user who logged in and ran the program would have generated files in their own user folders that simply deleting the folder containing the application wouldn't be able to remove.

    Often no; it rather depends on what the program is. I once had to find, download, and install a program to concatenate PDFs within 15 minutes to meet a deadline (thankfully, it was in the package manager, although I probably wouldn't have made it). That program doesn't write anything to my user folder (apart from the resulting concatenated PDF, if you ask it to put it there, and you probably will), because it has no reason to; it's not like it has any per-user configuration to worry about. Incidentally, it's usual for Linux uninstallers to not delete configuration files by default anyway (often they have an option to do so, but it's not the default). Often I choose to leave the configuration around in case I ever install the program again. When the option's there, this isn't a problem, but some don't support this at all, and in general, uninstalling things is really hard on Linux because people don't test their uninstallers. (I'm annoyed about this. That said, I haven't had time to write the matching uninstaller to my installer yet, but when I do, I'll make sure it works.)

    @blakeyrat said:

    Sounds like overcomplicated crap. But there's no law that said the Linux version of your portable install needs to run the same way as the Windows version.

    Indeed. I'd like to make them work similarly, if possible; it means you only have to maintain one set of documentation, and that if a bug appears on one system you know to look for it on the other too, and so on. However, conforming to the expectations of the OS is more important than that, at least in a default setup.



  • @blakeyrat said:

    Then why the fuck you be hard-coding the path!?
    Because you can mount anything you like on any hard coded path. As we've already discussed, a hardcoded directory pathname in a Unix program is functionally very close to a hardcoded CSIDL constant in a Windows program.
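
    To make that comparison concrete, here's a sketch (the paths, the app name and the CSIDL choice are all illustrative, not taken from any program in this thread): the Unix build bakes in a pathname and trusts the admin to mount or symlink whatever they like there, while the Windows build bakes in a constant and asks the shell where that constant currently lives.

        #include <stdio.h>

        #ifdef _WIN32
        #include <windows.h>
        #include <shlobj.h>               /* link with shell32.lib */
        #include <wchar.h>

        /* Hardcoded *constant*: the shell decides where it actually lives. */
        static void show_config_dir(void)
        {
            wchar_t path[MAX_PATH];
            if (SHGetFolderPathW(NULL, CSIDL_APPDATA, NULL,
                                 SHGFP_TYPE_CURRENT, path) == S_OK)
                wprintf(L"config lives under: %ls\\myapp\n", path);
        }
        #else
        /* Hardcoded *pathname*: the admin decides what is mounted or
           symlinked there. */
        static void show_config_dir(void)
        {
            printf("config lives under: %s\n", "/etc/myapp");
        }
        #endif

        int main(void)
        {
            show_config_dir();
            return 0;
        }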



  • @HardwareGeek said:

    I think that was the point. This login is allowed to run /foo/bar/baz -option blat and absolutely nothing else. I'm guessing it can't even give that command line to a shell to process any metacharacters in -option blat.

    But it still needs to read libraries in /usr, unless the whole thing is statically-linked. And statically-linking with C/C++ on glibc is a big no-no, so the only way this would work is if the application was written in Go.



  • @ais523 said:

    Incidentally, just for fun, I tried to do the same thing through the package manager's GUI, and couldn't figure out how. This might say something about the relative discoverability of interfaces, or possibly not.

    It could also just mean that FOSS people are awful at building GUIs, just like they're awful at building everything else.



  •  @morbiuswilters said:

    @HardwareGeek said:
    I think that was the point. This login is allowed to run /foo/bar/baz -option blat and absolutely nothing else. I'm guessing it can't even give that command line to a shell to process any metacharacters in -option blat.

    But it still needs to read libraries in /usr, unless the whole thing is statically-linked. And statically-linking with C/C++ on glibc is a big no-no, so the only way this would work is if the application was written in Go.

    I put copies of all the libraries it needs, and only those libraries, in a folder it can read (and tell it where to find them). It can't read /usr.
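
    (One common way to arrange that, not necessarily exactly what was done here, is a tiny launcher that points the dynamic linker at the private library directory before exec'ing the real binary; all the paths below are made up:)

        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        /* Hypothetical launcher: /opt/myapp/lib holds private copies of every
           shared library the real binary needs, so nothing is looked up in
           /usr at run time. */
        int main(int argc, char **argv)
        {
            (void)argc;
            if (setenv("LD_LIBRARY_PATH", "/opt/myapp/lib", 1) != 0) {
                perror("setenv");
                return 1;
            }
            argv[0] = "/opt/myapp/bin/myapp.real";  /* the actual executable */
            execv(argv[0], argv);
            perror("execv");                        /* reached only on failure */
            return 1;
        }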



  • @flabdablet said:

    Because you can mount anything you like on any hard coded path.

    Tell us again about the UnionFS, George...

    @flabdablet said:

    As we've already discussed, a hardcoded directory pathname in a Unix program is functionally very close to a hardcoded CSIDL constant in a Windows program.

    That really doesn't make any sense. I think you know that. You can't install Linux programs to any location without resorting to weird tricks. Windows installers let you customize the location of installs.



  • @ais523 said:

     @morbiuswilters said:

    @HardwareGeek said:
    I think that was the point. This login is allowed to run /foo/bar/baz -option blat and absolutely nothing else. I'm guessing it can't even give that command line to a shell to process any metacharacters in -option blat.

    But it still needs to read libraries in /usr, unless the whole thing is statically-linked. And statically-linking with C/C++ on glibc is a big no-no, so the only way this would work is if the application was written in Go.

    I put copies of all the libraries it needs, and only those libraries, in a folder it can read (and tell it where to find them). It can't read /usr.

    Are you running it chrooted? Otherwise, that sounds really bizarre and hack-y.


  • Considered Harmful

    @ais523 said:

     @morbiuswilters said:

    @HardwareGeek said:
    I think that was the point. This login is allowed to run /foo/bar/baz -option blat and absolutely nothing else. I'm guessing it can't even give that command line to a shell to process any metacharacters in -option blat.

    But it still needs to read libraries in /usr, unless the whole thing is statically-linked. And statically-linking with C/C++ on glibc is a big no-no, so the only way this would work is if the application was written in Go.

    I put copies of all the libraries it needs, and only those libraries, in a folder it can read (and tell it where to find them). It can't read /usr.


    You can always chroot it into its own private sandbox where it thinks it has full root access.
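
    A bare-bones version of that sandbox looks something like this (a sketch; /srv/jail and the uid/gid are made up, the jail must already contain the binary and its libraries, and a real setup would be more careful about dropping privileges):

        #define _DEFAULT_SOURCE
        #include <stdio.h>
        #include <unistd.h>

        /* Must run as root: chroot() is privileged. */
        int main(void)
        {
            if (chroot("/srv/jail") != 0) { perror("chroot"); return 1; }
            if (chdir("/") != 0)          { perror("chdir");  return 1; }

            /* Drop root before running anything inside the jail. */
            if (setgid(1000) != 0 || setuid(1000) != 0) {
                perror("drop privileges");
                return 1;
            }

            /* From here on, "/" is /srv/jail: the program sees only what was
               copied in, and /usr on the host is unreachable. */
            execl("/bin/app", "app", (char *)NULL);
            perror("execl");
            return 1;
        }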



  • @morbiuswilters said:

    Are you running it chrooted? Otherwise, that sounds really bizarre and hack-y.
     

    Yes.



  • @ais523 said:

    Interestingly, Ubuntu has now created a /home/ais523/Documents. I'm not sure what their intended use is for it.

    Any distro shipping the XDG user directories (xdg-user-dirs) will have ~/Documents/ (and ~/Downloads/, ~/Music/, ~/Pictures/ and ~/Videos/ too). This strikes me as a tidier arrangement than throwing everything in ~/ willy-nilly, and I was quite pleased to see Vista picking up the same layout (My Music, My Pictures and My Videos were all subfolders of My Documents by default in Windows versions up to XP, but Vista defaults to having them side by side inside %USERPROFILE%).

    The tidying influence has certainly flowed the other way as well: a lot of per-user config can now be found consolidated in ~/.config/appname/ rather than being scattered around in ~/.*, which makes ~/.config pretty much equivalent to %APPDATA% (which, by default, now goes in %USERPROFILE%\AppData\Roaming). Similarly, ~/.cache/ works much like %LOCALAPPDATA% (typically found at %USERPROFILE%\AppData\Local).
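
    (The lookup rule for that directory, per the XDG Base Directory spec, is simple enough to show; "myapp" below is just a placeholder:)

        #include <stdio.h>
        #include <stdlib.h>

        /* Resolve the per-user config directory for a hypothetical "myapp",
           following the XDG Base Directory rule: $XDG_CONFIG_HOME if set,
           otherwise $HOME/.config. */
        int main(void)
        {
            char path[4096];
            const char *xdg  = getenv("XDG_CONFIG_HOME");
            const char *home = getenv("HOME");

            if (xdg && *xdg)
                snprintf(path, sizeof path, "%s/myapp", xdg);
            else
                snprintf(path, sizeof path, "%s/.config/myapp",
                         home ? home : ".");

            printf("config dir: %s\n", path);
            return 0;
        }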



  • @morbiuswilters said:

    @flabdablet said:
    As we've already discussed, a hardcoded directory pathname in a Unix program is functionally very close to a hardcoded CSIDL constant in a Windows program.

    That really doesn't make any sense. I think you know that. You can't install Linux programs to any location without resorting to weird tricks. Windows installers let you customize the location of installs.

    This part of the discussion is about finding config and saved state information, not executables. Keep up or fuck off.



  • @ais523 said:

    @morbiuswilters said:

    Are you running it chrooted? Otherwise, that sounds really bizarre and hack-y.
     

    Yes.

    Okay, that sounds less insane than what I was first suspecting.



  • @flabdablet said:

    This part of the discussion is about finding config and saved state information, not executables.

    Which Windows also does better, by having an API for config settings rather than puking up a mish-mash of text files in various formats all over the filesystem.

    I know that you are old and that you are afraid of new things, but I assure you that joining us in using the technologies of the late twentieth century will not hurt you. No, Windows will not support your punch-card reader, but where we're going you won't need it.
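
    (The API in question being the registry, where reading a setting is a single call; the key and value names below are made up:)

        #include <windows.h>
        #include <stdio.h>

        /* Read one (made-up) string setting from HKCU with a single call.
           RegGetValueW is Vista and later; link with advapi32.lib
           (MinGW: -ladvapi32). */
        int main(void)
        {
            wchar_t buf[256];
            DWORD size = sizeof buf;

            LSTATUS rc = RegGetValueW(HKEY_CURRENT_USER,
                                      L"Software\\ExampleVendor\\ExampleApp",
                                      L"InstallDir",
                                      RRF_RT_REG_SZ, NULL, buf, &size);
            if (rc == ERROR_SUCCESS)
                wprintf(L"InstallDir = %ls\n", buf);
            else
                fprintf(stderr, "RegGetValueW failed: %ld\n", (long)rc);
            return rc == ERROR_SUCCESS ? 0 : 1;
        }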



  • @flabdablet said:

    The tidying influence has certainly flowed the other way as well: a lot of per-user config can now be found consolidated in ~/.config/appname/ rather than being scattered around in ~/.*
    Except for all the applications that don't use it yet. I have ~/.config/appname for exactly three applications, and still have approximately 97000 ~/.appnamerc files.


  • Considered Harmful

    @morbiuswilters said:

    but I assure you that joining us in using the technologies of the late twentieth century will not hurt you.

    Phew, that's reassuring and comforting and...
    @morbiuswilters said:
    where we're going you won't need it.

    screams and wets self



  • @blakeyrat said:

    Programs store their read-only data files in Program Files, generally.
    Actually, the recommended directory for data is the common AppData folder, which (by default) is read-only to normal users, and writeable to administrators (but some installers make this folder read-write for everybody, to be able to store per-machine data while running as unprivileged users).



  • @morbiuswilters said:

    Windows installers let you customize the location of installs.
    ...unless the program uses assistive technologies, in which case it must be installed to Program Files (it simply won't work otherwise).



  • @ender said:

    @blakeyrat said:
    Programs store their read-only data files in Program Files, generally.
    Actually, the recommended directory for data is the common AppData folder, which (by default) is read-only to normal users, and writeable to administrators (but some installers make this folder read-write for everybody, to be able to store per-machine data while running as unprivileged users).
     

    Huh, so I was right the first time, and blakeyrat was just telling me I was wrong?

    What's the directory for common files that are writeable to normal users?

    As a bonus question (one that's quite close to my use case, but not that important), is there any way to set permissions on a file so that normal users can only read and write it through a specific executable? (On Linux, this is both possible and sufficiently difficult and nonintuitive to set up that some major distros have screwed it up.)
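
    (The way I've seen this done on Linux is the classic setgid-group trick used for things like game score files: the data file is writable only by a dedicated group, and the one executable allowed to touch it runs setgid to that group. A sketch of the one-off setup step, with entirely made-up names:)

        #include <grp.h>
        #include <stdio.h>
        #include <sys/stat.h>
        #include <unistd.h>

        /* One-off setup, run as root; every name here is hypothetical.
           Afterwards only /usr/games/myapp can write the score file, because
           it alone runs with the "myappdata" group. */
        int main(void)
        {
            struct group *grp = getgrnam("myappdata");
            if (!grp) { fprintf(stderr, "group myappdata missing\n"); return 1; }

            /* Score file: root:myappdata, group-writable, no world access. */
            if (chown("/var/games/myapp.scores", 0, grp->gr_gid) != 0 ||
                chmod("/var/games/myapp.scores", 0660) != 0) {
                perror("score file"); return 1;
            }

            /* Executable: setgid myappdata, so it gains the group at run time. */
            if (chown("/usr/games/myapp", 0, grp->gr_gid) != 0 ||
                chmod("/usr/games/myapp", 02755) != 0) {
                perror("executable"); return 1;
            }
            return 0;
        }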


  • Considered Harmful

    @ais523 said:

    @ender said:

    @blakeyrat said:
    Programs store their read-only data files in Program Files, generally.
    Actually, the recommended directory for data is the common AppData folder, which (by default) is read-only to normal users, and writeable to administrators (but some installers make this folder read-write for everybody, to be able to store per-machine data while running as unprivileged users).
     

    Huh, so I was right the first time, and blakeyrat was just telling me I was wrong?


    I think that's ProgramData (found by default at C:\ProgramData in Windows 7). AppData is user-writable and non-shared, as Blakey said; ProgramData is shared and, I think, not writable by ordinary users. (There is a "C:\Users\All Users" symlink I see that's pointing to C:\ProgramData in Win7.)



  • @ais523 said:

    @ender said:

    @blakeyrat said:
    Programs store their read-only data files in Program Files, generally.
    Actually, the recommended directory for data is the common AppData folder, which (by default) is read-only to normal users, and writeable to administrators (but some installers make this folder read-write for everybody, to be able to store per-machine data while running as unprivileged users).
     

    Huh, so I was right the first time, and blakeyrat was just telling me I was wrong?

    What's the directory for common files that are writeable to normal users?

    I think ender is referring to \Users\All Users\AppData vs. \Users\SomeSpecificUser\AppData. The latter is writable by SomeSpecificUser, while the former is only writable by administrators.
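
    In API terms (a sketch using the Vista-era known-folder IDs rather than the older CSIDLs; error handling kept minimal), FOLDERID_RoamingAppData is the per-user, user-writable one, and FOLDERID_ProgramData is the shared, admin-writable one:

        #include <windows.h>
        #include <shlobj.h>
        #include <knownfolders.h>
        #include <wchar.h>

        /* Vista+ API; link with shell32.lib, ole32.lib and uuid.lib
           (MinGW: -lshell32 -lole32 -luuid). */
        static void show(const wchar_t *label, const KNOWNFOLDERID *id)
        {
            PWSTR path = NULL;
            if (SUCCEEDED(SHGetKnownFolderPath(id, 0, NULL, &path))) {
                wprintf(L"%ls: %ls\n", label, path);
                CoTaskMemFree(path);
            }
        }

        int main(void)
        {
            show(L"per-user, writable by that user", &FOLDERID_RoamingAppData);
            show(L"per-machine, admin-writable    ", &FOLDERID_ProgramData);
            return 0;
        }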



  • @joe.edwards said:

    There is a "C:\Users\All Users" symlink I see that's pointing to C:\ProgramData in Win7.
    Huh, I just learned something. Explorer gives no indication at all that this is anything other than a normal folder, unless you specifically look at its properties. I'd never had any reason to do that before.



  • @HardwareGeek said:

    @joe.edwards said:
    There is a "C:\Users\All Users" symlink I see that's pointing to C:\ProgramData in Win7.
    Huh, I just learned something. Explorer gives no indication at all that this is anything other than a normal folder, unless you specifically look at its properties. I'd never had any reason to do that before.

    That's there to help create a pre-Vista-compatible filesystem tree for the benefit of braindead applications that hard-code special folder paths on Windows because hey, that worked in Windows 95. There's a bunch of junction points as well: C:\ProgramData\Application Data is a junction to C:\ProgramData; C:\ProgramData\Start Menu is a junction to C:\ProgramData\Microsoft\Windows\Start Menu and so forth.

    The junction points all have Hidden and System attributes to discourage users from fiddling with them, but even if you set Explorer to show you hidden and system files you still can't browse into them as they all have Deny List Folder for Everyone NTFS access control entries. Remove those and a bunch of Windows utilities (including Windows Backup, tragically) act as if the disk is infinitely large due to the C:\ProgramData\Application Data -> C:\ProgramData pathname loop.

    More potential pathname loops exist inside each user profile: C:\Users\%USERNAME%\AppData\Local\Application Data is a junction back to C:\Users\%USERNAME%\AppData\Local, to allow C:\Documents and Settings\%USERNAME%\Local Settings\Application Data and C:\Documents and Settings\%USERNAME%\Local Settings\Temp to be aliases for C:\Users\%USERNAME%\AppData\Local and C:\Users\%USERNAME%\AppData\Local\Temp respectively (the pre-Vista default filesystem tree doesn't nest Temp inside Application Data).

    The HS attributes and the Deny List Folder ACEs apply only to the junction points themselves, not to subfolders, so if you browse explicitly to e.g. C:\Documents and Settings\%USERNAME%\Local Settings\Temp or C:\Documents and Settings\%USERNAME%\Local Settings\Microsoft it will work.

    The whole arrangement is exactly the kind of fragile and farcical link-spaghetti kludge that Morbs seems to think could only possibly occur under Unix, made necessary by the kind of nothing-has-changed-for-two-decades backward compatibility that Blakey seems to think could only possibly occur under Unix. If you ever have to fix a broken one, the tool you need is JunctionBox.
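
    (If you ever need to tell one of those junctions apart from a real folder programmatically (Explorer certainly won't tell you), the reparse-point attribute is the thing to check. A quick sketch:)

        #include <windows.h>
        #include <stdio.h>

        /* Report whether a path is a reparse point (junction/symlink) or an
           ordinary file/folder.  Try it on "C:\Users\All Users" or on
           "C:\ProgramData\Application Data". */
        int main(int argc, char **argv)
        {
            const char *path = argc > 1 ? argv[1] : "C:\\Users\\All Users";
            DWORD attrs = GetFileAttributesA(path);

            if (attrs == INVALID_FILE_ATTRIBUTES) {
                fprintf(stderr, "cannot read attributes of %s\n", path);
                return 1;
            }
            printf("%s is %s\n", path,
                   (attrs & FILE_ATTRIBUTE_REPARSE_POINT)
                       ? "a reparse point (junction or symlink)"
                       : "an ordinary file or folder");
            return 0;
        }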



  • @flabdablet said:

    The whole arrangement is exactly the kind of fragile and farcical link-spaghetti kludge that Morbs seems to think could only possibly occur under Unix, made necessary by the kind of nothing-has-changed-for-two-decades backward compatibility that Blakey seems to think could only possibly occur under Unix.

    Bullshit. The worst crime would be the OS changing something to break the user's applications. Like Linux and OS X do all the fucking time.

    And virtually none of those kludges exist to help applications written to follow the OS contract. Microsoft's only "problem" is that third-party developers never followed the goddamned API contract. Even though those applications just happened to work by accident before.

    You know a software company is good when they spent thousands of man-hours repairing other people's shitty broken software just so their users aren't subjected to the breakage. The problem with Linux is nobody does the boring work-- guess what, Microsoft does nothing but the boring work.



  • @blakeyrat said:

    The problem with Linux is nobody does the boring work-- guess what, Microsoft does nothing but the boring work.

    Exactly.



  • @blakeyrat said:

    @flabdablet said:
    The whole arrangement is exactly the kind of fragile and farcical link-spaghetti kludge that Morbs seems to think could only possibly occur under Unix, made necessary by the kind of nothing-has-changed-for-two-decades backward compatibility that Blakey seems to think could only possibly occur under Unix.

    Bullshit. The worst crime would be the OS changing something to break the user's applications. Like Linux and OS X do all the fucking time.

    And virtually none of those kludges exist to help applications written to follow the OS contract. Microsoft's only "problem" is that third-party developers never followed the goddamned API contract. Even though those applications just happened to work by accident before.

    You know a software company is good when they spent thousands of man-hours repairing other people's shitty broken software just so their users aren't subjected to the breakage. The problem with Linux is nobody does the boring work-- guess what, Microsoft does nothing but the boring work.

    Bullshit. Linux doesn't need to fix software they broke because they don't break the software in the first place. For example, any program compiled with glibc 1.93 (from 17 years ago) will run exactly the same with glibc 2.19 (the current version). Linuxes don't randomly move the /home, /usr, /usr/bin, etc. directories on a whim.
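
    The mechanism behind that is symbol versioning: newer glibc keeps exporting the old symbol versions, so old binaries keep resolving against them. You can even pin a symbol to an old version deliberately from C; the sketch below uses the x86-64 baseline version node, so treat the exact node name as illustrative:

        #include <stdio.h>
        #include <string.h>

        /* GCC/glibc-specific: force our memcpy reference to bind to the
           pre-2.14 version that newer glibc still exports for old binaries.
           GLIBC_2.2.5 is the x86-64 baseline node; other architectures use
           different node names.  (gcc may inline small memcpy calls; build
           with -fno-builtin-memcpy and inspect the binary with objdump -T to
           see the versioned reference.) */
        __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

        int main(void)
        {
            char dst[16];
            memcpy(dst, "versioned", 10);
            puts(dst);
            return 0;
        }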



  • @Ben L. said:

    Bullshit.

    Oh no, Ben L, you gonna debate like the big boys instead of just posting idiotic joke images from Dwarf Fortress? Bring it on.

    @Ben L. said:

    Linux doesn't need to fix software they broke because they don't break the software in the first place.

    If and only if you have the source code to the application. It also helps that Linux's shitty-ass API is like 17 commands instead of 1700, like Windows'.

    Since Linux is just the kernel, you can do that tongue-twisting logic Linux users do where the meaning of the word "Linux" changes ten times in a single paragraph-- I'm assuming you're pulling that bullshit and when you say "Linux" in that sentence you mean the kernel specifically. Which is great, the kernel has a compatible API, the problem is nobody can do anything with only the kernel.

    @Ben L. said:

    For example, any program compiled with glibc 1.93 (from 17 years ago) will run exactly the same with glibc 2.19 (the current version).

    Without recompilation? I don't believe you.

    Even if I did, what useful program can be written using only glibc? How the fuck does it draw windows? Are you going to try and tell me that GUI code from 1997 runs in 2014 without recompilation? Even though I know for a fact that both Gnome and KDE have been rewritten from scratch since then? (And Unity didn't even fucking exist.)

    So let me modify that: I do believe you, but I also believe this only applies to the most trivial program imaginable.

    @Ben L. said:

    Linuxes don't randomly move the /home, /usr, /usr/bin, etc. directories on a whim.

    No; but they do do it (haha! doodoo!) when they release a new distro, which is like every fucking week. So, sure, once Redhat has the folder structure in-place, it doesn't change. Ok, sure, conceded. The problem is, there's 137 other distros all of which have their own, completely different, folder structure... and your poor software has to run on all of them somehow. (Made even more difficult by the fact that there's no Linux API you can call to tell you where those folders are.)

    So the Linux community "solves" this problem by inventing the package manager, and literally every distro has to have a couple guys toil every day grabbing packages intended for other distros, changing all the paths around, then re-packaging them for their own distro. In an error-prone manual process. Rinse and repeat every time there's even the tiniest of revision to the program.

    This is the vaunted Linux "efficiency" they talk about-- hundreds of people spending thousands of man-hours doing something that would be completely unnecessary if the idiots who developed the OS had spent an extra 8 hours doing it correctly in the first place. (And fucking Linux was started in 1991. It's a baby, OS-wise. It had plenty of good examples to crib ideas from-- in 1991, Mac software had been finding folder locations using the Gestalt() API call for 7 years already.)

    Oh and if the software is closed source well-- sorry fuck you. It's not going into the repository because how can we change the fucking hard-coded paths if we don't have access to the source code? Man, having an API to grab those paths from the OS sure would be handy here, wouldn't it?


  • Considered Harmful

    @blakeyrat said:

    Even if I did, what useful program can be written using only glibc? How the fuck does it draw windows? Are you going to try and tell me that GUI code from 1997 runs in 2014 without recompilation? Even though I know for a fact that both Gnome and KDE have been rewritten from scratch since then? (And Unity didn't even fucking exist.)
    The X11 protocol is from 1987, so everything since then is compatible. That's like saying IE6 couldn't load a site served from IIS8. Sure, IIS8 didn't exist when IE6 came out, but both use the HTTP 1.1 protocol. (And yes, GUI on Linux is a client-server model.)


  • Discourse touched me in a no-no place

    @joe.edwards said:

    The X11 protocol is from 1987, so everything since then is compatible. That's like saying IE6 couldn't load a site served from IIS8. Sure, IIS8 didn't exist when IE6 came out, but both use the HTTP 1.1 protocol. (And yes, GUI on Linux is a client-server model.)
    That's "compatible" only up to a point. There are some horrendous details that changed along the way in terms of what programs expected of hardware availability. (The very early X11 systems were almost all monochrome, and the set of fonts used has changed massively over the years. Hardware is so much better now.)
    @blakeyrat said:
    Even if I did, what useful program can be written using only glibc? How the fuck does it draw windows?
    If you can open a socket, you can talk X11 — it's just a network protocol after all — to a display server and draw windows that way. I've even done it (just to prove I could). We use a different library on top merely because dealing with all the marshalling of messages about drawing (and the converse of receiving messages about events) gets real tedious very quickly; sticking to just glibc for GUIs would be like declaring that you hate your face and promptly chopping your nose off. But the dependencies of the main library for talking the X11 protocol are very minimal. (They might even be just the C library; I've not checked for a few years now.)

    Most application code uses a toolkit to abstract the details, such as GTK or Qt. Those may even be able to run with Windows as the host OS. (You've typically got to be a little careful to make that happen — some sorts of assumptions are unsafe, such as whether there's a single filesystem root or many of them — but most apps don't really care too much.)
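
    To put a number on "very minimal" above: the classic Xlib hello-window is a couple of dozen lines and links against nothing but libX11 (plus, transitively, the C library and a couple of tiny protocol helpers). A sketch, for anyone who has never seen one:

        #include <X11/Xlib.h>
        #include <stdio.h>

        /* Open a window and wait for a keypress, using raw Xlib only.
           Compile with: cc hello_x11.c -lX11 */
        int main(void)
        {
            Display *dpy = XOpenDisplay(NULL);
            if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

            int screen = DefaultScreen(dpy);
            Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                             10, 10, 320, 200, 1,
                                             BlackPixel(dpy, screen),
                                             WhitePixel(dpy, screen));
            XSelectInput(dpy, win, ExposureMask | KeyPressMask);
            XMapWindow(dpy, win);

            for (;;) {
                XEvent ev;
                XNextEvent(dpy, &ev);
                if (ev.type == KeyPress)
                    break;
            }
            XCloseDisplay(dpy);
            return 0;
        }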



  • Since all the other points blakeyrat made have already been refuted:

    @blakeyrat said:

    @Ben L. said:
    Linux doesn't need to fix software they broke because they don't break the software in the first place.

    If and only if you have the source code to the application. It also helps that Linux's shitty-ass API is like 17 commands instead of 1700, like Windows'.

    Since Linux is just the kernel, you can do that tongue-twisting logic Linux users do where the meaning of the word "Linux" changes ten times in a single paragraph-- I'm assuming you're pulling that bullshit and when you say "Linux" in that sentence you mean the kernel specifically. Which is great, the kernel has a compatible API, the problem is nobody can do anything with only the kernel.

    Well, if you can declare Linux to be "The Linux Kernel" then I can declare Windows to be "The Windows Kernel". But nobody uses the Windows kernel without using Windows, just like nobody uses the Linux kernel without using a Linux. Which is why your entire defense of "Linux only has 17 functions in its standard library but Windows has eleventy billion" makes no sense.

    This page lists 1281 functions in glibc, but I can't find a list of the functions defined in the Windows C standard library.

    And no, you don't need to recompile your program to use a different version of glibc. Unless you change the code of your program, obviously.



  • @blakeyrat said:

    debate

    That's a pretty strong word for what usually happens on this forum.


  • Discourse touched me in a no-no place

    @Ben L. said:

    I can't find a list of the functions defined in the Windows C standard library.
    It's a fairly long list (which I've never seen the whole of, since I just do a web search for whichever particular function I've found by reading someone else's sample code) and you have to work out which libraries count as interfaces to Windows and which count as something built on top.



  • @mikeTheLiar said:

    @blakeyrat said:
    debate

    That's a pretty strong word for what usually happens on this forum.
    Yeah, that's more mass debate.

