Friends don't let friends use EAV



  • Reasons to hate Firefox:

    1) It is by far the most buggy and least standards-complete of all current browsers (yes, including IE9). Firefox has been back-sliding since version 4. Firefox is the only browser I've ever had to file a bug against because they didn't read the fucking DOM documentation.

    2) They are the last to adopt any positive change by any other browser. They still don't have process separation, for example, which Microsoft even with their crazy backwards compatibility burden managed years ago.

    3) Mozilla is full of insane people who spend all their time doing insane things instead of, say, making Thunderbird not suck shit. Their crazy release schedule (which they just now, now in version 10, fixed so it won't constantly break plug-ins!) and continuing war against network admins (MSI installer? Central management? Fuck you!) is proof of that.

    Reasons to hate Chrome:

    1) The UI is thoughtful and clean, unless you enter the dev tools in which case it turns into a labyrinthine nightmare of non-standard, non-working controls and mysteriously random layout decisions.

    2) Unlike Mozilla, the Chromium project doesn't read their own bug tracker at all. Find a bug in Chrome? Post it. Three weeks later, nag someone to actually look at it. Then it turns out it's been fixed in the pre-release for a month because someone else just fixed it without submitting a bug. Fuuuck.*

    3) Unlike every other browser on Earth, Chrome doesn't handle .rss links in any intelligent way whatsoever. It doesn't have an RSS reader (which is fine), but it also doesn't send them to your OS default RSS reader. It literally just shows the XML, as if you gave a shit what a .rss file looked like.

    4) (And this one's brand new) Unless I'm mistaken, it's giving special preference to Google Account cookies over all other cookies-- if I log into Gmail and Twitter, then restart Chrome, when Chrome re-opens I'll still be logged into Gmail but I will no longer be logged into Twitter. I told both of those sites to use session cookies, so why is my Gmail session still active? This behavior happens on my work computer as well, even though I've never "logged into Chrome" on my work computer. (Either Chrome is, behind my back, automatically logging me into Google when it boots, or it's saving a session cookie longer than a session if and only if it's a Google session cookie. Both possibilities stink.)

    * This, minus the "someone else fixed it" part, is the standard Open Source bug tracking experience, but it still pisses me off.



  • @blakeyrat said:

    Reasons to hate Firefox:

    1) It is by far the most buggy and least standards-complete of all current browsers (yes, including IE9). Firefox has been back-sliding since version 4. Firefox is the only browser I've ever had to file a bug against because they didn't read the fucking DOM documentation.

    2) They are the last to adopt any positive change by any other browser. They still don't have process separation, for example, which Microsoft even with their crazy backwards compatibility burden managed years ago.

    3) Mozilla is full of insane people who spend all their time doing insane things instead of, say, making Thunderbird not suck shit. Their crazy release schedule (which they just now, now in version 10, fixed so it won't constantly break plug-ins!) and continuing war against network admins (MSI installer? Central management? Fuck you!) is proof of that.

    Reasons to hate Chrome:

    1) The UI is thoughtful and clean, unless you enter the dev tools in which case it turns into a labyrinthine nightmare of non-standard, non-working controls and mysteriously random layout decisions.

    2) Unlike Mozilla, the Chromium project doesn't read their own bug tracker at all. Find a bug in Chrome? Post it. Three weeks later, nag someone to actually look at it. Then it turns out it's been fixed in the pre-release for a month because someone else just fixed it without submitting a bug. Fuuuck.*

    3) Unlike every other browser on Earth, Chrome doesn't handle .rss links in any intelligent way whatsoever. It doesn't have an RSS reader (which is fine), but it also doesn't send them to your OS default RSS reader. It literally just shows the XML, as if you gave a shit what a .rss file looked like.

    4) (And this one's brand new) Unless I'm mistaken, it's giving special preference to Google Account cookies over all other cookies-- if I log into Gmail and Twitter, then restart Chrome, when Chrome re-opens I'll still be logged into Gmail but I will no longer be logged into Twitter. I told both of those sites to use session cookies, so why is my Gmail session still active? This behavior happens on my work computer as well, even though I've never "logged into Chrome" on my work computer. (Either Chrome is, behind my back, automatically logging me into Google when it boots, or it's saving a session cookie longer than a session if and only if it's a Google session cookie. Both possibilities stink.)

    * This, minus the "someone else fixed it" part, is the standard Open Source bug tracking experience, but it still pisses me off.

    Yep.



  • @toon said:

    @TheCPUWizard said:

    Referring to technologies as "crap" clearly shows that one is not sufficiently informed to make an unbiased judgement.

    No, it doesn't. It just means one is of the opinion that non-relational database management systems are crap. The fact that your opinion differs doesn't mean the other party didn't think theirs through. Maybe you're the one who's biased in favor of non-relational DBMSes.

    Now you are back-pedaling, changing your statement from declaring it crap to simply having an opinion (as quoted above). My opinion very well may be biased on any given subject, but that is simply opinion. Your previous statement was phrased in an inherent and absolute form, which is what I responded to, and I stand by my response.



  • @TheCPUWizard said:

    @toon said:
    @TheCPUWizard said:

    Referring to technologies as "crap" clearly shows that one is not sufficiently informed to make an unbiased judgement.

    No, it doesn't. It just means one is of the opinion that non-relational database management systems are crap. The fact that your opinion differs doesn't mean the other party didn't think theirs through. Maybe you're the one who's biased in favor of non-relational DBMSes.

    Now you are back-pedaling, changing your statement from declaring it crap to simply having an opinion (as quoted above). My opinion very well may be biased on any given subject, but that is simply opinion. Your previous statement was phrased in an inherent and absolute form, which is what I responded to, and I stand by my response.

    Again, sorry about the "high horse" thing. That was needlessly impolite. But unless I'm much mistaken, my previous statement in this thread was:

    @toon said:

    Well, we, being on a programming forum, would say that, wouldn't we? Not that I think you're wrong, but I do think it's a point a higher-up decision maker might actually count in favor of a "dynamic structure".

    Could you do me a favor, and explain to me what's so inherent and absolute about that? I genuinely don't see it. In fact, I genuinely don't see the relevance to referring to technologies as "crap", either. The conclusion I draw from that is this: either I'm a moron for not understanding my own statements, or someone isn't really paying attention to what's being said, but drawing conclusions about it anyway. I'd like to pick option B, and would like to add that the words "clearly shows" sound pretty definite and factual to me. So I reckon this is at the very least a pot-and-kettle type deal.



  • @blakeyrat said:

    [very long lists which take up lots of space]

    I'd honestly never thought of them that way. I think you're absolutely right... But I should admit I deliberately said something like "version of IE I've seen so far" because I've actually never worked with 9. I hear it's pretty good, compared to other versions. It seems like when Chrome came along folks finally felt compelled to make decent browsers instead of slow bloated monsters.



  • @TheCPUWizard said:

    @toon said:
    @TheCPUWizard said:

    Referring to technologies as "crap" clearly shows that one is not sufficiently informed to make an unbiased judgement.

    No, it doesn't. It just means one is of the opinion that non-relational database management systems are crap. The fact that your opinion differs doesn't mean the other party didn't think theirs through. Maybe you're the one who's biased in favor of non-relational DBMSes.

    Now you are back-pedaling, changing your statement from declaring it crap to simply having an opinion (as quoted above). My opinion very well may be biased on any given subject, but that is simply opinion. Your previous statement was phrased in an inherent and absolute form, which is what I responded to, and I stand by my response.

    I just realized, you are not even the person I was responding to regarding the use of the word "crap" in the specific quoted context....



  • @blakeyrat said:

    2) They are the last to adopt any positive change by any other browser.
    They're also the first to adopt any negative change by any other browser (like Chrome's "status bars are for losers").

     

    More on the topic: if I understood correctly, EAV is the developer's way of saying "Yo dawg, I heard you like databases", i.e. soft-coding the DB?



  • @Medinoc said:

    @blakeyrat said:
    2) They are the last to adopt any positive change by any other browser.
    They're also the first to adopt any negative change by any other browser (like Chrome's "status bars are for losers").

    Chrome has a status bar. You must be using some weird parallel reality Chrome.



  • @Medinoc said:

    They're also the first to adopt any negative change by any other browser (like Chrome's "status bars are for losers").

    What went missing for you after the status bar was removed?


  • ♿ (Parody)

    @blakeyrat said:

    @Medinoc said:
    @blakeyrat said:
    2) They are the last to adopt any positive change by any other browser.
    They're also the first to adopt any negative change by any other browser (like Chrome's "status bars are for losers").

    Chrome has a status bar. You must be using some weird parallel reality Chrome.

    Maybe it started that way, but it hasn't for a long time, at least not by default. Could you post a screenshot? Instead of displaying hrefs of links I hover over in a status bar, I get a tooltip-like thing in the lower left corner of the window that starts at a certain length, and after a second (or something) it expands to the full length for long URLs. Otherwise, the bottom of the window is simply the border of the main window frame...a thin blue line separating my html viewing from the rest of my desktop.

    Which is what I've seen lately in FF (even though it appears to have a status bar), though they manage to get it wrong and not do the auto-expand thing, so you can't properly see long URLs like that.

    That's in chrome 18.0.1025.168 and FF 12.0.



  • @boomzilla said:

    Which is what I've seen lately in FF (even though it appears to have a status bar), though they manage to get it wrong and not do the auto-expand thing, so you can't properly see long URLs like that.

    So it does – the address is truncated with an ellipsis in the centre if it's longer than expected. It appears that the maximum length of the address thingy is half the width of the window. Personally that's never been a problem for me – maybe it would be if I was developing something with obscenely long URLs, but then there's View Source if I needed to figure out why my heinous URL abuse was failing. For everyone else's URLs, if they're so long that they have to be truncated for display, then they're going to be incomprehensible garbage anyway.

    I don't feel the Firefox hate. I have the tabs on top, and no add-ons bar, but I did put the menu bar back, and I have the bookmarks toolbar inside the menu bar as I've always done.



  • @boomzilla said:

    @blakeyrat said:
    @Medinoc said:
    @blakeyrat said:
    2) They are the last to adopt any positive change by any other browser.
    They're also the first to adopt any negative change by any other browser (like Chrome's "status bars are for losers").

    Chrome has a status bar. You must be using some weird parallel reality Chrome.

    Maybe it started that way, but it hasn't for a long time, at least not by default.

    @boomzilla said:

    Instead of displaying hrefs of links I hover over in a status bar, I get a tooltip-like thing in the lower left corner of the window that starts at a certain length, and after a second (or something) it expands to the full length for long URLs.

    Oh so you already fucking knew it had a status bar, you're just being a dick. I see. Once again: fuck off and die.

    @boomzilla said:

    Otherwise, the bottom of the window is simply the border of the main window frame...a thin blue line separating my html viewing from the rest of my desktop.

    I love your desire to waste pixels showing something that's blank 98% of the time and useless the other 2%. You're an idiot. Chrome has the right idea.


  • ♿ (Parody)

    @blakeyrat said:

    @boomzilla said:
    Instead of displaying hrefs of links I hover over in a status bar, I get a tooltip-like thing in the lower left corner of the window that starts at a certain length, and after a second (or something) it expands to the full length for long URLs.

    look ma, no status bar!


    Oh so you already fucking knew it had a status bar, you're just being a dick. I see.

    You have the strangest way of admitting that you were wrong, because your picture clearly shows the tooltip thing I mentioned, but then you start talking about a status bar again.

    @blakeyrat said:

    @boomzilla said:
    Otherwise, the bottom of the window is simply the border of the main window frame...a thin blue line separating my html viewing from the rest of my desktop.

    I love your desire to waste pixels showing something that's blank 98% of the time and useless the other 2%. You're an idiot. Chrome has the right idea.

    What desire did I have to waste pixels? I didn't offer any opinion on Chrome's implementation, and I actually like the way Chrome does it (which is partly why I ragged on FF for not getting it right). But that doesn't make it a status bar, either.

    I guess this is just another demonstration of your inability to follow a conversation, then.


  • ♿ (Parody)

    @Daniel Beardsmore said:

    So it does – the address is truncated with an ellipsis in the centre if it's longer than expected. It appears that the maximum length of the address thingy is half the width of the window. Personally that's never been a problem for me – maybe it would be if I was developing something with obscenely long URLs, but then there's View Source if I needed to figure out why my heinous URL abuse was failing. For everyone else's URLs, if they're so long that they have to be truncated for display, then they're going to be incomprehensible garbage anyway.

    I don't use FF too much, but the place where this has irritated me is on URLs that pass lots of stuff in the query string...in particular, that's how reports are called up in our Enterprise Reporting software. I often find it useful to look at that, and the part I want to see inevitably gets cut off. Also, lots of CMSes end up with long urls, especially when they end up using the title or the first sentence to build the permalink.

    Actually, looking at it again, it looks like Chrome just truncates the URL wherever the window cuts off, while FF starts at either end and puts an ellipsis in the middle if it runs out of room. I suppose either way is reasonable, except that FF's method tends to cut off the part I'm actually interested in. But I guess that's just my poor circumstances, so now I have less FF hate.



  • @toon said:

    @blakeyrat said:
    [very long lists which take up lots of space]

    I'd honestly never thought of them that way. I think you're absolutely right... But I should admit I deliberately said something like "version of IE I've seen so far" because I've actually never worked with 9. I hear it's pretty good, compared to other versions. It seems like when Chrome came along folks finally felt compelled to make decent browsers instead of slow bloated monsters.

    When I last used IE9, which is probably more than a year ago now, it had ClearType enabled by default, with no way to turn it off. I hate ClearType.

    Looks like they still haven't changed it.



  • @boomzilla said:

    You have the strangest way of admitting that you were wrong, because your picture clearly shows the tooltip thing I mentioned, but then you start talking about a status bar again.

    This argument does not amuse me. Regardless, I think the "tooltip thing" basically does what a status bar does (go ahead and click a link and watch the loading status display there) and it's not in my way all of the time. The auto-hide status bar is pretty much identical for me in Chrome 18 and FF 12.

    Some of my favorite FF extensions (Colorzilla, ShowIP) add their own panel at the bottom, but it kind of sucks because so much of it is unused.


  • ♿ (Parody)

    @morbiuswilters said:

    @boomzilla said:
    You have the strangest way of admitting that you were wrong, because your picture clearly shows the tooltip thing I mentioned, but then you start talking about a status bar again.

    This argument does not amuse me.

    Boo hoo.

    @morbiuswilters said:

    Regardless, I think the "tooltip thing" basically does what a status bar does (go ahead and click a link and watch the loading status display there) and it's not in my way all of the time.

    Exactly my point.



  • @Medinoc said:

    More on the topic, if I understood correctly EAV is the developer's way of saying "Yo dawg, I heard you like databases", e.g. soft-coding the DB ?

     

     It's a much less bad approach than others that have been tried, for the reasons stated.

     



  • @TheCPUWizard said:

    @blakeyrat said:
    On a related note, I love blowing SQL neophyte minds by demonstrating that you can join tables to themselves. For some reason, wherever idiots learn SQL, they never mention that.
    Agreed that it will be a mind-blower for many (not just neophytes; I have seen people with a few years' experience not realize this). What I find "problematic" is a solution for arbitrarily deep recursions of this type, especially when there are "trees" involved. This is not to say that it "can't" be done, merely that it is not always easy to get the desired results (especially in an efficient manner when filtering across layers is involved).

    Actually, a nested set model works quite well if you're dealing with a lot of read-only queries of the 'aggregate everything beneath x where y=z' nature. It's even possible to write queries that perform 'aggregate everything beneath x where for all ancestors y=z' with reasonably sane amounts of resulting SQL.
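
    For anyone who hasn't run into the nested set model: "aggregate everything beneath x" is just a range test on the left/right bounds, and the query is itself a self-join. A minimal sketch (Python/sqlite3 here; the `node` table and its `lft`/`rgt`/`qty` columns are hypothetical, not from any post above):

```python
import sqlite3

# Toy tree encoded as a nested set (lft/rgt bounds):
#   food(1,8) -> fruit(2,5) -> apple(3,4)
#             -> meat(6,7)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE node (name TEXT, lft INT, rgt INT, qty INT)")
conn.executemany("INSERT INTO node VALUES (?, ?, ?, ?)", [
    ("food", 1, 8, 0), ("fruit", 2, 5, 10),
    ("apple", 3, 4, 5), ("meat", 6, 7, 3),
])

# "Aggregate everything beneath x": every descendant of 'food' has its
# lft/rgt strictly inside food's (lft, rgt) interval -- a single self-join.
total = conn.execute("""
    SELECT SUM(child.qty)
    FROM node AS parent
    JOIN node AS child ON child.lft > parent.lft AND child.rgt < parent.rgt
    WHERE parent.name = 'food'
""").fetchone()[0]
print(total)  # prints 18 (fruit 10 + apple 5 + meat 3)
```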

    If you're dealing with query abstractions like the IQueryable query providers for LINQ-to-Entities or LINQ-to-SQL in C#, then you can also get extremely readable results by simply creating custom operations on entities that implement a specific interface for nested sets (say INestedSet). Like the following simple operation that filters a set to those elements that share a common specified ancestor:

    public static IQueryable<TSource> NestedIn<TSource>(this IQueryable<TSource> source, TSource ancestor, uint maxDepth) where TSource : INestedSet
    {
      return source
        .Where( el => ancestor.Left < el.Left )
        .Where( el => ancestor.Right > el.Right )
        .Where( el => el.Depth - ancestor.Depth <= maxDepth );
    }
    

    Or let's do something slightly more complex and filter into groups separated by ancestor:

    public static IQueryable<IGrouping<TSource,TSource>> NestedIn<TSource>(this IQueryable<TSource> source, IQueryable<TSource> ancestors, uint maxDepth) where TSource : INestedSet
    {
      return ancestors
        .SelectMany( ancestor =>
          source
            .Where( el => ancestor.Left < el.Left )
            .Where( el => ancestor.Right > el.Right )
            .Where( el => el.Depth - ancestor.Depth <= maxDepth )
            .Select( el => new { Key = ancestor, Value = el })
        )
        .GroupBy( tuple => tuple.Key, tuple => tuple.Value );
    }
    


  • @toon said:

    @blakeyrat said:
    @toon said:
    I haven't met a Firefox or Chrome hater in my life

    Haven't been on this site long, have you.

    I don't believe I've ever met any of you! Seriously though, no, I haven't. If I had to venture a guess, I'd say people are Firefox haters because it's OSS?

     

    Well, I wouldn't say I hate Firefox myself, but the fact that it often consumes more memory than most triple-A games to show a few fucking web pages is damned ridiculous. Paired with the fact that for every single version since (I think) 2.0 they mention that "we made it use memory better"; meanwhile, here I am with FF, 2 tabs, and it's using 1.5GB of its address space.

    For the most part it's between Chrome and Firefox for me, and I just find Firefox to be slightly less annoying overall. That's on my Linux machine, where IE is not an option. On my Win7 machine it's between Firefox and IE, and I basically use Firefox until it accumulates so much garbage it has to be restarted, then swap over to IE until I remember that FF has all my passwords saved.

     Open source is fine, but hardly any of the people who shill it have even looked at the source of an OSS application; fewer have even a basic understanding of what is going on, and fewer still would be able to do anything more advanced than change the application icon. It's like that person who took Chromium or Chrome or whatever it was and made the "more secure" version, which basically just hard-coded what were already default options and made them unchangeable. So many of the people going "Open SOURCE YEAH!" about it clearly didn't even look at the source to see the trivial, meaningless changes that were made, so evidently there is no intrinsic advantage there. The way I see it, when it comes down to evilz-proprietary-versus-OSS comparisons, if somebody has to fall back on "but... but... X is open source", then that tells me they have flat-out run out of actual merits for X and are scraping the bottom of the barrel. Or they are stupid. Like the idiots trying to convince people that GIMP can replace Photoshop, for whom typically both are the case.



  • @BC_Programmer said:

    ...and it's using 1.5GB of it's Address Space...

    So what? Aside from an almost imperceptible increase in power consumption, why shouldn't the RAM in the machine be used? Additionally, even if the address space is allocated, there is nothing (unless there is information you haven't posted) to confirm that it is committed.

    A few years ago, I had "fun" with a little utility application. I wrote it (because I wanted to be sadistic towards the particular user) so that it allocated 1 page of memory every 128MB in the address space. The actual footprint was tiny, but boy did it look huge on various resource monitors. [This technique may have negative consequences if you are running an ancient 32-bit OS...]
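
    The same "huge on the monitor, tiny in reality" effect is easy to reproduce on Linux, where demand paging means an anonymous mapping costs address space but almost no physical memory until it's touched. A sketch (Python, Linux-only because it reads /proc; on Windows you would reserve with VirtualAlloc instead):

```python
import mmap, re

def rss_kb():
    # Resident set size of the current process, per the kernel (Linux-only).
    with open("/proc/self/status") as f:
        return int(re.search(r"VmRSS:\s+(\d+) kB", f.read()).group(1))

before = rss_kb()
# Map 512 MB of anonymous memory. Thanks to demand paging, no physical
# pages back it yet, so it looks huge in the virtual-size column of a
# resource monitor while barely moving the resident footprint.
big = mmap.mmap(-1, 512 * 1024 * 1024)
after = rss_kb()
print(f"RSS grew by only {after - before} kB for a 512 MB mapping")

big[0] = 0x41   # touching a page is what finally makes it resident
big.close()
```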



  • @TheCPUWizard said:

    @BC_Programmer said:
    ...and it's using 1.5GB of it's Address Space...

    So what? Aside from an almost imperceptible increase in power consumption, why shouldn't the RAM in the machine be used? Additionally, even if the address space is allocated, there is nothing (unless there is information you haven't posted) to confirm that it is committed.

    A few years ago, I had "fun" with a little utility application. I wrote it (because I wanted to be sadistic towards the particular user) so that it allocated 1 page of memory every 128MB in the address space. The actual footprint was tiny, but boy did it look huge on various resource monitors. [This technique may have negative consequences if you are running an ancient 32-bit OS...]

    He's not saying it has 1.5GB in its address space, he's saying it's actually using 1.5GB. And I can confirm that Firefox certainly can use that much memory for a small number of tabs (Chrome isn't much better...). The reason a web browser shouldn't be using 1.5GB to render a small number of pages is because: 1) it's fucking ridiculous on its surface; 2) it inhibits the ability to run on memory-constrained systems (quite a few people only have 1 or 2GB); and 3) even on my 16GB laptop, I've run out of memory (this was with maybe 90 Chrome tabs open... about 1GB was just the Flash plugin).

    Aside: what is the difference between allocated and committed in Windows? In Linux, it essentially means the same thing.



  • @morbiuswilters said:

    Aside: what is the difference between allocated and committed in Windows? In Linux, it essentially means the same thing.

    The difference is in the virtual memory allocation scheme used by the kernel. Allocated memory in the virtual address space is memory that you've told the kernel you'll need at some time in the future. You can then commit one or more pages of said memory to actually take it into use. You can define your memory usage scheme up front and then put it to use piece by piece, as required. Potentially this enables the kernel to take memory allocation into consideration at a much earlier stage and optimize to reduce page faults and the like (or to pre-emptively expand the page file).

    See: http://msdn.microsoft.com/en-us/library/windows/desktop/aa366803%28v=vs.85%29.aspx

    This is what the API says and what you can rely on. However, the Windows kernel itself seems to currently be doing things a bit differently when it comes to the actual implementation: the kernel will still not actually reserve the memory of a committed page for use until your process first accesses an address in that page. (This means it's a good practice to always 'touch' each page of virtual memory when you're preparing to run real-time code, e.g. during the asset loading stage of a video game, and pay the cost for the actual reservation up front.)
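
    The reserve/commit split described above has a rough POSIX analogue: mmap with PROT_NONE carves out address space ("reserve"), and mprotect makes pages usable ("commit"). A sketch via ctypes (assuming Linux; PROT_NONE isn't exported by Python's mmap module, so it's defined by hand):

```python
import ctypes, mmap

libc = ctypes.CDLL(None, use_errno=True)
libc.mmap.restype = ctypes.c_void_p
libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                      ctypes.c_int, ctypes.c_int, ctypes.c_long]
libc.mprotect.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int]

PROT_NONE = 0                         # POSIX value; not exported by the mmap module
MAP_FAILED = ctypes.c_void_p(-1).value
PAGE = mmap.PAGESIZE

# "Reserve" 64 pages: the addresses are spoken for, but any access would fault.
addr = libc.mmap(None, 64 * PAGE, PROT_NONE,
                 mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS, -1, 0)
assert addr not in (None, MAP_FAILED)

# "Commit" just the first page by making it readable and writable.
assert libc.mprotect(addr, PAGE, mmap.PROT_READ | mmap.PROT_WRITE) == 0

# The committed page is now usable; the other 63 are still reserved-only.
ctypes.memset(addr, 0x41, PAGE)
first = (ctypes.c_char * PAGE).from_address(addr)
print(first[0])  # prints b'A'
```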



  • @Ragnax said:

    @morbiuswilters said:
    Aside: what is the difference between allocated and committed in Windows? In Linux, it essentially means the same thing.

    The difference is in the virtual memory allocation scheme used by the kernel. Allocated memory in the virtual address space is memory that you've told the kernel you'll need at some time in the future. You can then commit one or more pages of said memory to actually take it into use. You can define your memory usage scheme up front and then put it to use piece by piece, as required. Potentially this enables the kernel to take memory allocation into consideration at a much earlier stage and optimize to reduce page faults and the like (or to pre-emptively expand the page file).

    See: http://msdn.microsoft.com/en-us/library/windows/desktop/aa366803%28v=vs.85%29.aspx

    This is what the API says and what you can rely on. However, the Windows kernel itself seems to currently be doing things a bit differently when it comes to the actual implementation: the kernel will still not actually reserve the memory of a committed page for use until your process first accesses an address in that page. (This means it's a good practice to always 'touch' each page of virtual memory when you're preparing to run real-time code, e.g. during the asset loading stage of a video game, and pay the cost for the actual reservation up front.)

    Yup, that is a good explanation. So for people using a "real" 64-bit operating system, the 1.5GB mentioned previously is approximately 0.000017% of the total address space, provided the original statement of "allocated" is correct. As I said before, I do not use Firefox, so I do not have any clue what percentage was actually committed; but that is immaterial when discussing the impact of allocation.



  • @TheCPUWizard said:

    Yup, that is a good explanation.

    So committing does.. nothing?

    @TheCPUWizard said:

    So for people using a "real" 64-bit operating system, the 1.5GB mentioned previously is approximately 0.000017% of the total address space, provided the original statement of "allocated" is correct.

    He didn't say "allocated", he said "using". Nobody said anything about allocation until you brought it up. And as I already said, I can confirm that Firefox is completely capable of growing to consume 1.5GB of memory with only a few tabs.



  • Firefox using 1.5GB RAM? So what, I somehow managed to get Opera to eat 33GB:
    Oops



  • @morbiuswilters said:

    @TheCPUWizard said:
    Yup, that is a good explanation.

    So committing does.. nothing?

    @TheCPUWizard said:

    So for people using a "real" 64-bit operating system, the 1.5GB mentioned previously is approximately 0.000017% of the total address space, provided the original statement of "allocated" is correct.

    He didn't say "allocated", he said "using". Nobody said anything about allocation until you brought it up. And as I already said, I can confirm that Firefox is completely capable of growing to consume 1.5GB of memory with only a few tabs.

    From an address-space perspective, as soon as it is "allocated" it is "used" (i.e. those addresses cannot be "used" for anything else). In a 32-bit process, simply allocate 1.9GB of virtual address space, but do not do anything with it, so that it remains non-committed. Your program will blow up with "out of memory" as soon as it tries to allocate anything past 100MB, even though there are plenty of resources available. So I stand by my association of the two terms.



  • @TheCPUWizard said:

    @morbiuswilters said:

    @TheCPUWizard said:
    Yup, that is a good explanation.

    So committing does.. nothing?

    @TheCPUWizard said:

    So for people using a "real" 64-bit operating system, the 1.5GB mentioned previously is approximately 0.000017% of the total address space, provided the original statement of "allocated" is correct.

    He didn't say "allocated", he said "using". Nobody said anything about allocation until you brought it up. And as I already said, I can confirm that Firefox is completely capable of growing to consume 1.5GB of memory with only a few tabs.

    From an address-space perspective, as soon as it is "allocated" it is "used" (i.e. those addresses cannot be "used" for anything else). In a 32-bit process, simply allocate 1.9GB of virtual address space, but do not do anything with it, so that it remains non-committed. Your program will blow up with "out of memory" as soon as it tries to allocate anything past 100MB, even though there are plenty of resources available. So I stand by my association of the two terms.

    Oh boy, more pointless dickweedery from TheCPUWizard... who the fuck would've guessed it? Sure, "used" can be ambiguous, but not in this case. Even if you thought BC_Programmer didn't know the difference between allocated memory and committed memory (although, apparently, committing does nothing), I helpfully confirmed for you that it's entirely possible for Firefox to use that much physical memory.

    Let's recap: you don't even use Firefox, so you don't know what the fuck you're talking about. Instead of accepting that, you picked a fight over the ambiguity of "used" just so you could hear yourself talk about virtual memory. Great.

