My heart bleeds


  • Considered Harmful

    @aristurtle said:

    Also, because std::shared_ptr and std::unique_ptr are in the standard library, you can't laugh in a C++ weenie's face when he starts talking about "RAII". (You need to wait until he starts saying that shared_ptr is "better than garbage collection").

    My favorite was when I was looking for try...finally functionality, and all I could find was a bunch of Flavor Aid drinkers on Stack Overflow trying to convince me that RAII was superior.


  • Considered Harmful

    @aristurtle said:

    @bstorer said:

    No question. I just needed a C++11-specific example. At least they tried to plug half the leaks with std library things this time, instead of the typical strategy of just adding more keywords. The C++ committee should go ahead and write out a list of reserved words for future additions. Or maybe it would be simpler to list the names which won't be used as keywords in the future. Safe names list: "foo, bar, FFDOEDDIS_ddhs_RHDIS_eeeee"

    I'm gonna disagree with you on this one. The ridiculous "virtual void foo() = 0;" syntax should have been a new reserved word from the beginning.  The less they pretend to be compatible with C code (they never really were) the better off they are.


    Read The Design and Evolution of C++. They wanted a new keyword, but could not push it through the bureaucratic machine.


  • Considered Harmful

    @bstorer said:

    @aristurtle said:

    @bstorer said:

    @aristurtle said:

    Yeah, C++11 adds them (#include <cstdint>). In general, C++11 smooths out some of the more obvious failures of the standard library, but the fundamental problems with the language are still there. It's like spraypainting over a rust spot on your car.

     

    C++11 is the devil. The whole auto/decltype thing is a solution to a problem which should have never existed.
     

    But you can actually pass the filename to a std::fstream as a std::string now! And std::vectors can be created with an initializer list, so they aren't openly inferior to the C feature that they supposedly replace!

    Also, because std::shared_ptr and std::unique_ptr are in the standard library, you can't laugh in a C++ weenie's face when he starts talking about "RAII". (You need to wait until he starts saying that shared_ptr is "better than garbage collection").

     

    Hooray! Problems solved, then!

    For me, the C++ reference standard will always be whatever compiles using Borland Turbo C++ 3.0. Anything since is just window-dressing on an already-perfect language.

    Remember when you had to declare that a pointer was near or far? Those were the days.


  • @joe.edwards said:

    @bstorer said:

    For me, the C++ reference standard will always be whatever compiles using Borland Turbo C++ 3.0. Anything since is just window-dressing on an already-perfect language.


    Remember when you had to declare that a pointer was near or far? Those were the days.
    IIRC, you only had to declare pointers as far; near was the default.



  • @blakeyrat said:

    The dumbfucks who wrote this library built their own malloc()/free() implementation because it was "too slow" on some platforms-- this idiocy had the effect of defeating the protection that some OSes had already written to prevent this exact type of bug. Because malloc() is "too slow". On some platforms.

    Edit: Oh it's even worse. OpenSSL *relies* on a bug in its own malloc() function-- it frees memory, then immediately remallocs it, then assumes the new memory is the same as the old. Awesome.

    The really sad thing is that writing your own malloc() implementation isn't even that uncommon in the FOSS world. It's almost a rite of passage for any large-ish project.



  • @joe.edwards said:

    Read The Design and Evolution of C++. They wanted a new keyword, but could not push it through the bureaucratic machine.
     

    "Push it through the bureaucratic machine" sounds like an awesome new tortue method...



  • @El_Heffe said:

    @anotherusername said:

    @blakeyrat said:
    Edit: Oh it's even worse. OpenSSL relies on a bug in its own malloc() function-- it frees memory, then immediately remallocs it, then assumes the new memory is the same as the old. Awesome.

    Returning a pointer to memory which already contains information is not a bug. Returning a pointer to memory with a predictable value is not a bug either. malloc() does not securely erase the memory before it returns the pointer, and makes no guarantees about randomness of its contents.

    Code that assumes it knows which memory locations malloc() is going to return (and thus, what information will still be stored there) has a bug, but malloc() does not have a bug.

    The "real" malloc() doesn't have a bug, but, they fellt that malloc() was "too slow" so they wrote their own buggy version to use instead.

     

    No. The bug was releasing memory before it was needed. That has nothing to do with a bug in malloc itself. Some undocumented behaviour of their version of malloc was allowing their bug to go unnoticed, but it's hardly malloc's fault that they explicitly released some memory before they were finished with it.
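    Concretely, the pattern under discussion looks something like this (a made-up sketch, not OpenSSL's actual code):

    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    int main() {
        char *buf = static_cast<char *>(std::malloc(64));
        if (!buf) return 1;
        std::strcpy(buf, "secret key material");

        std::free(buf);                                      // released before we were done with it
        char *again = static_cast<char *>(std::malloc(64));

        // With a freelist-style allocator, 'again' is very often the same block,
        // still holding the old bytes -- exactly the behaviour the code ended up
        // depending on. With a hardened system malloc it may be a different (or
        // poisoned) block, and the assumption falls apart.
        std::printf("%s\n", again);                          // undefined behaviour: reading indeterminate bytes
        std::free(again);
        return 0;
    }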



  • @anotherusername said:

    @El_Heffe said:

    @anotherusername said:

    @blakeyrat said:
    Edit: Oh it's even worse. OpenSSL *relies* on a bug in its own malloc() function-- it frees memory, then immediately remallocs it, then assumes the new memory is the same as the old. Awesome.

    Returning a pointer to memory which already contains information is not a bug. Returning a pointer to memory with a predictable value is not a bug either. malloc() does not securely erase the memory before it returns the pointer, and makes no guarantees about randomness of its contents.

    Code that assumes it knows which memory locations malloc() is going to return (and thus, what information will still be stored there) has a bug, but malloc() does not have a bug.

    The "real" malloc() doesn't have a bug, but, they fellt that malloc() was "too slow" so they wrote their own buggy version to use instead.

     

    No. The bug was releasing memory before it was needed. That has nothing to do with a bug in malloc itself. Some undocumented behaviour of their version of malloc was allowing their bug to go unnoticed, but it's hardly malloc's fault that they explicitly released some memory before they were finished with it.
     

     

    The best part was how they had a compile-time option to use the system malloc instead of their hand-rolled one, and if you used that option the code wouldn't work.

    Although I don't like the whole malloc digression, really, because it will lead people to say "It's not C's fault, they're just doing crazy crap like hand-rolling malloc! Why, if they were using the system malloc like a sane developer would, and if the system malloc used ASLR, then they'd probably segfault instead of leaking sensitive data, except for the times when they randomly wouldn't segfault and would leak anyway."

     



  • @anotherusername said:

    @El_Heffe said:

    @anotherusername said:

    @blakeyrat said:
    Edit: Oh it's even worse. OpenSSL relies on a bug in its own malloc() function-- it frees memory, then immediately remallocs it, then assumes the new memory is the same as the old. Awesome.

    Returning a pointer to memory which already contains information is not a bug. Returning a pointer to memory with a predictable value is not a bug either. malloc() does not securely erase the memory before it returns the pointer, and makes no guarantees about randomness of its contents.

    Code that assumes it knows which memory locations malloc() is going to return (and thus, what information will still be stored there) has a bug, but malloc() does not have a bug.

    The "real" malloc() doesn't have a bug, but, they fellt that malloc() was "too slow" so they wrote their own buggy version to use instead.

     

    No. The bug was releasing memory before it was needed. That has nothing to do with a bug in malloc itself. Some undocumented behaviour of their version of malloc was allowing their bug to go unnoticed, but it's hardly malloc's fault that they explicitly released some memory before they were finished with it.

    Why are you acting like their custom version of malloc() is some kind of distinct third-party here? They wrote a crappy version of malloc() and then they mis-used it. Both were fuck-ups.

    What I find particularly amusing is that back in the day DJB was always writing his own memory management routines to work around the fact that there were so many different implementations of malloc() in the wild, so many of which were buggy and insecure.


  • ♿ (Parody)

    @aristurtle said:

    The best part was how they had a compile-time option to use the system malloc instead of their hand-rolled one, and if you used that option the code wouldn't work.

    Although I don't like the whole malloc digression, really, because it will lead people to say "It's not C's fault, they're just doing crazy crap like hand-rolling malloc! Why, if they were using the system malloc like a sane developer would, and if the system malloc used ASLR, then they'd probably segfault instead of leaking sensitive data, except for the times when they randomly wouldn't segfault and would leak anyway."

    Also, their memory management foiled things like valgrind, which are designed to find memory issues. I've done things where I cached memory for particular purposes (after profiling, etc, and determining that it made a difference). But I kept the ability to use "normal" free/malloc so I could find memory issues like this. And I tested it regularly, whether I suspected a problem or not.



  • @boomzilla said:

    I've done things where I cached memory for particular purposes (after profiling, etc, and determining that it made a difference). But I kept the ability to use "normal" free/malloc so I could find memory issues like this. And I tested it regularly, whether I suspected a problem or not.

    But if you're using malloc when testing with valgrind but your own memory management (in places) for production, what's the point of using valgrind on those pieces in the first place?

    Edit: Not criticizing, just asking. I'm personally not seeing a reason to test something that's only used for testing. If it were me, I'd just suppress the Valgrind errors from the custom bit of memory management code.



  • @morbiuswilters said:

    @boomzilla said:
    I've done things where I cached memory for particular purposes (after profiling, etc, and determining that it made a difference). But I kept the ability to use "normal" free/malloc so I could find memory issues like this. And I tested it regularly, whether I suspected a problem or not.

    But if you're using malloc when testing with valgrind but your own memory management (in places) for production, what's the point of using valgrind on those pieces in the first place?

    Edit: Not criticizing, just asking. I'm personally not seeing a reason to test something that's only used for testing. If it were me, I'd just suppress the Valgrind errors from the custom bit of memory management code.

    Valgrind even has macros that you can use to tell it about your custom mallocs. So you can just tell it how to use your shit.
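    Something like this (a sketch; the pool internals are stand-ins, but VALGRIND_MALLOCLIKE_BLOCK and VALGRIND_FREELIKE_BLOCK are the real client requests from valgrind/valgrind.h):

    #include <cstddef>
    #include <cstdlib>
    #include <valgrind/valgrind.h>

    // Stand-ins for whatever the custom allocator actually does internally:
    void *take_from_pool(std::size_t size) { return std::malloc(size); }
    void  return_to_pool(void *p)          { std::free(p); }

    void *pool_alloc(std::size_t size) {
        void *p = take_from_pool(size);
        // Tell memcheck to treat this block as if malloc() had returned it
        // (no redzone bytes, contents not zeroed).
        VALGRIND_MALLOCLIKE_BLOCK(p, size, 0, 0);
        return p;
    }

    void pool_free(void *p) {
        // ...and as if free() had taken it back, so use-after-free is caught.
        VALGRIND_FREELIKE_BLOCK(p, 0);
        return_to_pool(p);
    }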


  • ♿ (Parody)

    @morbiuswilters said:

    @boomzilla said:
    I've done things where I cached memory for particular purposes (after profiling, etc, and determining that it made a difference). But I kept the ability to use "normal" free/malloc so I could find memory issues like this. And I tested it regularly, whether I suspected a problem or not.

    But if you're using malloc when testing with valgrind but your own memory management (in places) for production, what's the point of using valgrind on those pieces in the first place?

    Edit: Not criticizing, just asking. I'm personally not seeing a reason to test something that's only used for testing. If it were me, I'd just suppress the Valgrind errors from the custom bit of memory management code.

    If I'm caching memory, I might not realize that I'm using some memory after freeing it, since from the OS' point of view, I haven't freed it. I don't do anything that has the security profile of openssl, but it could still lead to some nasty bugs.

    So, basically, I have functions that are either just macros for malloc / free, or functions that call malloc / free, plus some of their own caching. I suppose I could learn the valgrind macro stuff to which DescentJS referred.
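    Roughly this shape, if that helps (illustrative names, not my actual code):

    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    constexpr std::size_t kBlockSize = 256;        // the one size this cache handles

    #ifdef USE_PLAIN_MALLOC
    // Test builds: everything goes straight to the system allocator,
    // so valgrind sees every allocation and every free.
    inline void *block_alloc()       { return std::malloc(kBlockSize); }
    inline void  block_free(void *p) { std::free(p); }
    #else
    // Normal builds: recycle blocks through a little cache.
    inline std::vector<void *> &cache() { static std::vector<void *> c; return c; }

    inline void *block_alloc() {
        if (!cache().empty()) {
            void *p = cache().back();
            cache().pop_back();
            return p;            // recycled block: never actually freed, so a
        }                        // use-after-"free" here is invisible to valgrind
        return std::malloc(kBlockSize);
    }
    inline void block_free(void *p) { cache().push_back(p); }
    #endif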



  • @boomzilla said:

    @morbiuswilters said:
    @boomzilla said:
    I've done things where I cached memory for particular purposes (after profiling, etc, and determining that it made a difference). But I kept the ability to use "normal" free/malloc so I could find memory issues like this. And I tested it regularly, whether I suspected a problem or not.

    But if you're using malloc when testing with valgrind but your own memory management (in places) for production, what's the point of using valgrind on those pieces in the first place?

    Edit: Not criticizing, just asking. I'm personally not seeing a reason to test something that's only used for testing. If it were me, I'd just suppress the Valgrind errors from the custom bit of memory management code.

    If I'm caching memory, I might not realize that I'm using some memory after freeing it, since from the OS' point of view, I haven't freed it. I don't do anything that has the security profile of openssl, but it could still lead to some nasty bugs.

    So, basically, I have functions that are either just macros for malloc / free, or functions that call malloc / free, plus some of their own caching. I suppose I could learn the valgrind macro stuff to which DescentJS referred.

    Ah, I see. My experience with Valgrind is not extensive; the one time I really needed it to find a memory leak it was unable to. (The leak wasn't actually in my code but in a popular FOSS project my code linked against. I finally just gave up and accepted the fact the daemon is gonna sigsegv every few hours. The supervising process always restarts the daemon so the impact is minimal.)


  • ♿ (Parody)

    @morbiuswilters said:

    Ah, I see. My experience with Valgrind is not extensive; the one time I really needed it to find a memory leak it was unable to. (The leak wasn't actually in my code but in a popular FOSS project my code linked against. I finally just gave up and accepted the fact the daemon is gonna sigsegv every few hours. The supervising process always restarts the daemon so the impact is minimal.)

    Sadly, it's an indispensable tool if you're using C/C++. Even sadder, there's nothing as convenient that works on Windows.



  • @boomzilla said:

    @morbiuswilters said:
    Ah, I see. My experience with Valgrind is not extensive; the one time I really needed it to find a memory leak it was unable to. (The leak wasn't actually in my code but in a popular FOSS project my code linked against. I finally just gave up and accepted the fact the daemon is gonna sigsegv every few hours. The supervising process always restarts the daemon so the impact is minimal.)

    Sadly, it's an indispensable tool if you're using C/C++. Even sadder, there's nothing as convenient that works on Windows.

    I do C development, but like I said it's never really been helpful. The one time I had a memory leak, it wasn't able to help in tracking it down.


  • Discourse touched me in a no-no place

    @morbiuswilters said:

    I do C development, but like I said it's never really been helpful. The one time I had a memory leak, it wasn't able to help in tracking it down.
    That might be indicative of a bigger problem such as code having pointers to stuff which should be dead but isn't. Hunting those things is hard; it's not formally a memory leak, but rather a screwup in the semantics of one's own code.

    I work a fair bit with an OSS project which has custom memory allocators, but it doesn't rely on them as a crutch: it works with the straight system allocator too (and so is valgrind-able). Its custom allocators are one for “normal” debug mode (which scribbles over memory on deallocation so that it becomes effectively impossible to use a piece of memory after free()) and another for threaded mode which maintains separate memory pools for each thread (because the system memory allocator can't make as many assumptions about how memory is shared between threads — it doesn't know as much about the application as we do — forcing it to have a global lock that turned out to be a significant bottleneck). Merely having a custom allocator is not a WTF. It's having normal code that doesn't work with the system allocator and only works with your custom one that stinks.
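    The debug allocator boils down to something like this (a paraphrased sketch, not the project's actual source):

    #include <cstdlib>
    #include <cstring>

    struct BlockHeader { std::size_t size; };

    void *debug_alloc(std::size_t size) {
        BlockHeader *h =
            static_cast<BlockHeader *>(std::malloc(sizeof(BlockHeader) + size));
        if (!h) return nullptr;
        h->size = size;
        return h + 1;                       // hand back the payload after the header
    }

    void debug_free(void *p) {
        if (!p) return;
        BlockHeader *h = static_cast<BlockHeader *>(p) - 1;
        std::memset(p, 0xA5, h->size);      // scribble over the payload so any
        std::free(h);                       // use-after-free reads obvious garbage
    }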


  • ♿ (Parody)

    @morbiuswilters said:

    @boomzilla said:
    @morbiuswilters said:
    Ah, I see. My experience with Valgrind is not extensive; the one time I really needed it to find a memory leak it was unable to. (The leak wasn't actually in my code but in a popular FOSS project my code linked against. I finally just gave up and accepted the fact the daemon is gonna sigsegv every few hours. The supervising process always restarts the daemon so the impact is minimal.)

    Sadly, it's an indispensable tool if you're using C/C++. Even sadder, there's nothing as convenient that works on Windows.

    I do C development, but like I said it's never really been helpful. The one time I had a memory leak, it wasn't able to help in tracking it down.

    Actually, you said valgrind helped you track it to a library. /pedanticdickweed

    To each his own. Some people can't live without a GUI debugger.



  • @dkf said:

    That might be indicative of a bigger problem such as code having pointers to stuff which should be dead but isn't. Hunting those things is hard; it's not formally a memory leak, but rather a screwup in the semantics of one's own code.

    Yeah, maybe. As I said, it was a popular FOSS project that actually had the leak; my code was golden.

    @dkf said:

    and another for threaded mode which maintains separate memory pools for each thread (because the system memory allocator can't make as many assumptions about how memory is shared between threads — it doesn't know as much about the application as we do — forcing it to have a global lock that turned out to be a significant bottleneck). Merely having a custom allocator is not a WTF.

    Yeah, I've used tcmalloc, too.

    @dkf said:

    Merely having a custom allocator is not a WTF.

    I honestly think that in this day and age, it is. If you're writing your own memory allocator (or even using a third-party one), that says something about the maturity of the platform as a whole.



  • @boomzilla said:

    Actually, you said valgrind helped you track it to a library. /pedanticdickweed

    Actually, no, I discovered the leak was in the other code afterwards. (It was actually a daemon, not a library. My code was a dynamically-loaded library.)

    @boomzilla said:

    To each his own. Some people can't live without a GUI debugger.

    I can, but I realize it's asinine. I did all of my debugging on that project with gdb running in another terminal. It was annoying, too, because it had tens of thousands of threads running. Still, I can recognize there are much better tools out there.



  • @morbiuswilters said:

    my code was golden.
    Of course it was. We all believe you, right guys?



  • @morbiuswilters said:

    @dkf said:
    Merely having a custom allocator is not a WTF.

    I honestly think that in this day and age, it is. If you're writing your own memory allocator (or even using a third-party one), that says something about the maturity of the platform as a whole.

    Not really.  I can totally understand the part about the system's allocator being too slow.

    Delphi's FastMM 4 is, from the tests I've seen, the fastest memory allocator around.  Not "for Delphi," it's the fastest, period.  It's written in a mixture of Delphi and hand-tuned ASM.

    Unfortunately, this means that it's only available on platforms that support the assembler.  When they came out with the 64-bit compiler, they had to rely on the OS's allocator until FastMM got ported to x64.  A lot of programs ran approximately 50x slower (not exaggerating!) than their 32-bit counterparts until the new allocator was ready.


  • Discourse touched me in a no-no place

    @morbiuswilters said:

    @dkf said:
    and another for threaded mode which maintains separate memory pools for each thread (because the system memory allocator can't make as many assumptions about how memory is shared between threads — it doesn't know as much about the application as we do — forcing it to have a global lock that turned out to be a significant bottleneck). Merely having a custom allocator is not a WTF.
    Yeah, I've used tcmalloc, too.
    The key is that this particular code is able to assume that each free() always happens in the thread that did the malloc(), which allows you to completely skip locking on allocation (except when increasing the size of a per-thread pool, but that's comparatively rare). If the code is allocation-heavy and multi-threaded, it's a gigantic win (getting to the nirvana of lock-free code is hard, but worthwhile) yet it's not something that a generic malloc implementation is ever going to be allowed to get away with. Other styles of handling multi-threading can't work that way at all. Different fundamental assumptions.

    tcmalloc is a bit slower since it tries to handle transfer of ownership between threads, which is a very gnarly case.
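    In sketch form, the fast path looks something like this (nothing like the real code, and only valid while the free-on-the-allocating-thread rule holds):

    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    constexpr std::size_t kBlock = 128;

    thread_local std::vector<void *> tl_pool;   // this thread's recycled blocks

    void *pool_alloc() {
        if (!tl_pool.empty()) {                 // fast path: no lock, no atomics
            void *p = tl_pool.back();
            tl_pool.pop_back();
            return p;
        }
        return std::malloc(kBlock);             // slow path: grow via the system allocator
    }

    void pool_free(void *p) {
        tl_pool.push_back(p);                   // only correct if this is the thread
    }                                           // that allocated p in the first place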



  • @HardwareGeek said:

    @morbiuswilters said:
    my code was golden.
    Of course it was. We all believe you, right guys?

    Hey, it worked and the memory leak was somewhere else! And it mostly used the stock memory allocator because it easily handled tens of thousands of requests/sec, so further optimization seemed a waste of my time.



  • @Mason Wheeler said:

    @morbiuswilters said:

    @dkf said:
    Merely having a custom allocator is not a WTF.

    I honestly think that in this day and age, it is. If you're writing your own memory allocator (or even using a third-party one), that says something about the maturity of the platform as a whole.

    Not really.  I can totally understand the part about the system's allocator being too slow.

    Delphi's FastMM 4 is, from the tests I've seen, the fastest memory allocator around.  Not "for Delphi," it's the fastest, period.  It's written in a mixture of Delphi and hand-tuned ASM.

    Unfortunately, this means that it's only available on platforms that support the assembler.  When they came out with the 64-bit compiler, they had to rely on the OS's allocator until FastMM got ported to x64.  A lot of programs ran approximately 50x slower (not exaggerating!) than their 32-bit counterparts until the new allocator was ready.

    I don't think you understood me. The fact that Delphi (a platform) has its own memory allocator is not a WTF. If you were to write your own on top of that, it would be. My point is that the custom allocator business in C/Unix strikes me as more evidence the platform is stuck in a rut, spinning its wheels over the same crap over and over again. I mean, it's 2014 and OpenSSL still maintains a custom-written allocator. Memory allocation is, like, Step One and yet so many C/Unix guys insist on re-inventing the wheel. And, yeah, sometimes it leads to big performance gains, which is a WTF in its own right.

    This shouldn't be happening. To me, it's proof of sclerosis within the industry (particularly the FOSS/Unix side of things).



  • @dkf said:

    @morbiuswilters said:
    @dkf said:
    and another for threaded mode which maintains separate memory pools for each thread (because the system memory allocator can't make as many assumptions about how memory is shared between threads — it doesn't know as much about the application as we do — forcing it to have a global lock that turned out to be a significant bottleneck). Merely having a custom allocator is not a WTF.
    Yeah, I've used tcmalloc, too.
    The key is that this particular code is able to assume that each free() always happens in the thread that did the malloc(), which allows you to completely skip locking on allocation (except when increasing the size of a per-thread pool, but that's comparatively rare). If the code is allocation-heavy and multi-threaded, it's a gigantic win (getting to the nirvana of lock-free code is hard, but worthwhile) yet it's not something that a generic malloc implementation is ever going to be allowed to get away with. Other styles of handling multi-threading can't work that way at all. Different fundamental assumptions.

    tcmalloc is a bit slower since it tries to handle transfer of ownership between threads, which is a very gnarly case.

    I would once again make the case that if you're doing lots of memory allocation in a heavily-threaded application, something is probably awry. We have had managed languages for decades now. No, they won't always be as fast as C, but processor time is cheap.

    As for tcmalloc, I see no reason why the built-in malloc couldn't support that functionality through a compile-time directive. It seems to me there's something really broken with the development model if the built-in malloc is so slow in certain common situations and we end up with all of these third-party band-aids trying to overcome that.



  • Completely off-topic and apropos of absolutely nothing whatsoever, here are some photos of Rosie O'Donnell from her recent vacay to the Caymans. As you can see, she's wearing a bikini which leaves nothing to the imagination while she enjoys the complimentary all-you-can-eat ham bar that's located right inside the pool. How's that for glamorous?




  • @morbiuswilters said:

    Completely off-topic and apropos of absolutely nothing whatsoever, here are some photos of Rosie O'Donnell from her recent vacay to the Caymans. As you can see, she's wearing a bikini which leaves nothing to the imagination while she enjoys the complimentary all-you-can-eat ham bar that's located right inside the pool. How's that for glamorous?


    snip sexy, sexy photos

    Hey, guys, did you see this awesome comic about Heartbleed?



  • Trolleybus Mechanic

    @bstorer said:

    Hey, guys, did you see this awesome comic about Heartbleed?
     

    Pshaw, please. "User Friendly" did it first:




  • @bstorer said:

    Hey, guys, did you see this awesome comic about Heartbleed?


    snip really huge fucking image

    Why are your images so massive?



  • @Lorne Kates said:

    Ahhh, that takes me back. User Friendly was Rosie O'Donnell riding a Sybian back when Randall Munroe was just a twinkle in the bumbling abortionist's face shield.



  •  @joe.edwards said:

    @aristurtle said:
    Also, because std::shared_ptr and std::unique_ptr are in the standard library, you can't laugh in a C++ weenie's face when he starts talking about "RAII". (You need to wait until he starts saying that shared_ptr is "better than garbage collection").
    My favorite was when I was looking for try...finally functionality, and all I could find was a bunch of Flavor Aid drinkers on Stack Overflow trying to convince me that RAII was superior.

     Just curious, what was your use case that required try...finally semantics?


  • Considered Harmful

    @Groaner said:

     @joe.edwards said:

    @aristurtle said:
    Also, because std::shared_ptr and std::unique_ptr are in the standard library, you can't laugh in a C++ weenie's face when he starts talking about "RAII". (You need to wait until he starts saying that shared_ptr is "better than garbage collection").

    My favorite was when I was looking for try...finally functionality, and all I could find was a bunch of Flavor Aid drinkers on Stack Overflow trying to convince me that RAII was superior.

     Just curious, what was your use case that required try...finally semantics?

    Shitty third party API that had its own special alloc and free functions. I don't dispute RAII would be viable here, but it's more effort than I should need to spend. I was more amused that they were trying to convince me that the more involved approach was somehow conceptually superior. No, it's yet another hackaround for a shitty language missing basic functionality that would make "the right way" and "the easy way" the same thing.
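    For the record, the blessed RAII version looks something like this (sketch only; ThirdPartyAlloc / ThirdPartyFree stand in for the API's real functions):

    #include <cstddef>
    #include <memory>

    extern "C" void *ThirdPartyAlloc(std::size_t size);   // stand-ins for the API's
    extern "C" void  ThirdPartyFree(void *p);              // special alloc/free pair

    void UseIt() {
        std::unique_ptr<void, decltype(&ThirdPartyFree)>
            handle(ThirdPartyAlloc(128), &ThirdPartyFree);
        // ... do things with handle.get() ...
    }   // ThirdPartyFree runs here, even if the code above throws

    It works, but it's still ceremony to get back what try...finally hands you for free.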



  • @morbiuswilters said:

    while she enjoys the complimentary all-you-can-eat ham bar that's located right inside the pool.
    Damn celebrities have all the fun.



  • @El_Heffe said:

    @morbiuswilters said:

    while she enjoys the complimentary all-you-can-eat ham bar that's located right inside the pool.
    Damn celebrities have all the fun.

    Caymanian cabana boys? Highest suicide rates in the world.



  • @Groaner said:

     @joe.edwards said:

    @aristurtle said:
    Also, because std::shared_ptr and std::unique_ptr are in the standard library, you can't laugh in a C++ weenie's face when he starts talking about "RAII". (You need to wait until he starts saying that shared_ptr is "better than garbage collection").
    My favorite was when I was looking for try...finally functionality, and all I could find was a bunch of Flavor Aid drinkers on Stack Overflow trying to convince me that RAII was superior.

    Just curious, what was your use case that required try...finally semantics?

    They're all over the place. Any time you need a guaranteed reversible state change, try/finally is what you're looking for.

    For example, is it possible to express the following simple Delphi concept in C++ without being forced to create an entirely new class just to enable RAII?

    Bookmark := MyDataset.CurrentBookmark;  //save current location
    MyDataset.DisableControls; //disable updates of all linked GUI controls
    try
       //iterate over each row and do something with the data
       MyDataset.First;
       while not MyDataset.EOF do
       begin
          ProcessRecord(MyDataset);
          MyDataset.Next;
       end;
    finally
       MyDataset.EnableControls; //re-enable updating of linked controls;
       MyDataset.Bookmark := Bookmark; //restore previous location
    end;

    This isn't a task for RAII; there's no memory or other resources to be released. Can it be done in C++?



  • @Mason Wheeler said:

    @Groaner said:

     @joe.edwards said:

    @aristurtle said:
    Also, because std::shared_ptr and std::unique_ptr are in the standard library, you can't laugh in a C++ weenie's face when he starts talking about "RAII". (You need to wait until he starts saying that shared_ptr is "better than garbage collection").

    My favorite was when I was looking for try...finally functionality, and all I could find was a bunch of Flavor Aid drinkers on Stack Overflow trying to convince me that RAII was superior.

    Just curious, what was your use case that required try...finally semantics?

    They're all over the place. Any time you need a guaranteed reversible state change, try/finally is what you're looking for.

    For example, is it possible to express the following simple Delphi concept in C++ without being forced to create an entirely new class just to enable RAII?

    Bookmark := MyDataset.CurrentBookmark;  //save current location
    MyDataset.DisableControls; //disable updates of all linked GUI controls
    try
       //iterate over each row and do something with the data
       MyDataset.First;
       while not MyDataset.EOF do
       begin
          ProcessRecord(MyDataset);
          MyDataset.Next;
       end;
    finally
       MyDataset.EnableControls; //re-enable updating of linked controls;
       MyDataset.Bookmark := Bookmark; //restore previous location
    end;

    This isn't a task for RAII; there's no memory or other resources to be released. Can it be done in C++?

    Yes, it can be done because you didn't use goto/exit/return inside of the try...finally block, so you didn't need to use finally at all.



  • @Mason Wheeler said:

    Can it be done in C++?

    Sure. The basic idea is to create a utility class which calls a user-defined function in its destructor, e.g. ScopeGuard 2.0 or a smart pointer with custom deleter. This is a lot easier with C++11 lambdas. Hide the boilerplate code behind a macro (maybe even overload operator<< to get rid of the parentheses) and you have a viable alternative for try/finally.

    tl;dr: Here's the code (mostly copy&paste with some modifications):

    #include <cassert>
    #include <utility>

    // Wraps a callable and invokes it from the destructor.
    template< typename t >
    class sentry {
        t o;
        sentry( sentry && ) = delete;
        sentry( sentry const & ) = delete;
        void operator=( sentry && ) = delete;
        void operator=( sentry const & ) = delete;
    public:
        sentry( t in_o ) : o( ::std::move( in_o ) ) {}
        ~sentry() noexcept {
            try {
                o();
            } catch(...) {
                assert(false && "'finally' must not throw!");
            }
        }
    };

    // Helper so the macro below can deduce the lambda's type via operator<<.
    struct create_sentry {
        template< typename t >
        sentry< t > operator<<( t o ) { return { ::std::move( o ) }; }
    };

    // Glues __LINE__ onto a name so each 'finally' gets a unique guard variable.
    #define CONCATENATE_DIRECT(s1, s2) s1##s2
    #define CONCATENATE(s1, s2) CONCATENATE_DIRECT(s1, s2)
    #define ANONYMOUS_VARIABLE(str) CONCATENATE(str, __LINE__)
    #define finally const auto& ANONYMOUS_VARIABLE(_finally_) = create_sentry() << [&]()

    With that code hidden in a utility header somewhere, your example could be written as:

    void Process() {
        auto Bookmark = MyDataset.CurrentBookmark; // save current location
        MyDataset.DisableControls(); // disable updates of all linked GUI controls
        finally {
            MyDataset.EnableControls(); // re-enable updating of linked controls
            MyDataset.Bookmark = Bookmark; // restore previous location
        };
        // iterate over each row and do something with the data
        for (MyDataset.First(); !MyDataset.EOF(); MyDataset.Next())
            ProcessRecord(MyDataset);
    }

  • Discourse touched me in a no-no place

    @anotherusername said:

    Yes, it can be done because you didn't use goto/exit/return inside of the try...finally block, so you didn't need to use finally at all.
    You don't know if there are any exceptions about.



  • @fatbull said:

    @Mason Wheeler said:
    Can it be done in C++?

    Sure. The basic idea is to create a utility class which calls a user-defined function in its destructor, e.g. ScopeGuard 2.0 or a smart pointer with custom deleter. This is a lot easier with C++11 lambdas. Hide the boilerplate code behind a macro (maybe even overload operator<< to get rid of the parentheses) and you have a viable alternative for try/finally.

    Wow.  Just... wow.

    So first you ignore my fundamental anti-gross-hack requirement: without being forced to create an entirely new class just to enable RAII.

    Then you throw in nearly 30 lines of macro- and template-heavy code that's going to make debugging this, if it's ever necessary, a nightmare.  And oh by the way, you seriously changed the semantics.  There's no rule anywhere that forbids raising an exception inside a finally clause.  Why did you introduce one?

    Then you implement the whole thing with a lambda, which means it will never work on any non-latest-and-greatest compiler.

    Then you change the flow of the code: putting the finally between the setup block and the main body, when it executes after.  That's confusing and difficult to read.

    Then you change the semantics again, by using RAII, which (assuming error-free execution) will cause the finally lambda to execute when the macro-mess goes out of scope at the end of the function, instead of executing immediately after the main body finishes.  This is a meaningful difference if there is extra code after the finally block.

    It's a nice try, I'll grant you that, but it doesn't work, because C++ is a deficient language in this regard.  RAII is a classic abstraction inversion: it uses the try/finally mechanism to implement RAII, but does not provide support for the try/finally construct, forcing developers who need it to re-implement it (badly) on top of RAII.

     



  • @dkf said:

    @anotherusername said:
    Yes, it can be done because you didn't use goto/exit/return inside of the try...finally block, so you didn't need to use finally at all.
    You don't know if there are any exceptions about.
     

    Yeah.  For those unfamiliar with the construct, the semantics of try/finally are that if you enter the main body, the code inside the finally block is guaranteed to execute as the stack unwinds, whether execution proceeds normally, or an exception is raised inside the main body.  Even if a control-flow construct such as break, continue, or exit (aka return in C) is used from within the main body that would cause execution to pass outside the try block, the finally clause is guaranteed to execute.  It's illegal to use a goto to jump into or out of a try block, though I'm not sure why anyone would want to even try in this day and age.



  • @Mason Wheeler said:

    It's illegal to use a goto to jump into or out of a try block, though I'm not sure why anyone would want to even try in this day and age.
    You must be new here.


  • Discourse touched me in a no-no place

    @Mason Wheeler said:

    Then you change the semantics again, by using RAII, which (assuming error-free execution) will cause the finally lambda to execute when the macro-mess goes out of scope at the end of the function, instead of executing immediately after the main body finishes. This is a meaningful difference if there is extra code after the finally block.
    While I'd agree with the majority of your points, the magical variable used to implement the hack goes out of scope at the end of the containing block (and is destroyed at that point, running the “finally” clause); you're not constrained to waiting to the end of the function.

    But it otherwise scares the horses something rotten.
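    To be concrete (using fatbull's finally macro from above and the same MyDataset example; sketch only):

    void Process() {
        {
            auto Bookmark = MyDataset.CurrentBookmark;
            MyDataset.DisableControls();
            finally {
                MyDataset.EnableControls();
                MyDataset.Bookmark = Bookmark;
            };
            for (MyDataset.First(); !MyDataset.EOF(); MyDataset.Next())
                ProcessRecord(MyDataset);
        }   // guard destroyed here: controls re-enabled, bookmark restored
        // ...anything after this point runs with the controls back on...
    }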



  • @morbiuswilters said:

    @bstorer said:

    Hey, guys, did you see this awesome comic about Heartbleed?


    snip really huge fucking image

    Why are your images so massive?

    Because I set width to 100%. Wouldn't want anyone to miss it.



  • @Mason Wheeler said:

    So first you ignore my fundamental anti-gross-hack requirement: without being forced to create an entirely new class just to enable RAII.

    Hmm, I thought "entirely new" meant you didn't want to write an entirely new class each time you need a try/finally clause, not that you wanted no new class at all.

    @Mason Wheeler said:

    Then you throw in nearly 30 lines of macro- and template-heavy code that's going to make debugging this, if it's ever necessary, a nightmare.  And oh by the way, you seriously changed the semantics.  There's no rule anywhere that forbids raising an exception inside a finally clause.  Why did you introduce one?

    [17.9] How can I handle a destructor that fails?

    @Mason Wheeler said:

    Then you implement the whole thing with a lambda, which means it will never work on any non-latest-and-greatest compiler.

    So what? Are we supposed to wait another 15 years before we actually use C++11?

    @Mason Wheeler said:

    Then you change the flow of the code: putting the finally between the setup block and the main body, when it executes after.  That's confusing and difficult to read.

    It can't be more confusing than normal destructors getting called "out of thin air"... Actually, I like that the "do" and "undo" parts are close together. I could get used to this.

    @Mason Wheeler said:
    It's a nice try, I'll grant you that, but it doesn't work, because C++ is a deficient language in this regard.  RAII is a classic abstraction inversion: it uses the try/finally mechanism to implement RAII, but does not provide support for the try/finally construct, forcing developers who need it to re-implement it (badly) on top of RAII.

    Hey, it's a hack, and it's far from perfect, but it works (within reason).



  • @fatbull said:

    @Mason Wheeler said:
    Then you throw in nearly 30 lines of macro- and template-heavy code that's going to make debugging this, if it's ever necessary, a nightmare.  And oh by the way, you seriously changed the semantics.  There's no rule anywhere that forbids raising an exception inside a finally clause.  Why did you introduce one?

    [17.9] How can I handle a destructor that fails?

    You linked to the wrong C++ guide there.

     



  • @Mason Wheeler said:

    Yeah.  For those unfamiliar with the construct, the semantics of try/finally are that if you enter the main body, the code inside the finally block is guaranteed to execute as the stack unwinds, whether execution proceeds normally, or an exception is raised inside the main body.  Even if a control-flow construct such as break, continue, or exit (aka return in C) is used from within the main body that would cause execution to pass outside the try block, the finally clause is guaranteed to execute.  It's illegal to use a goto to jump into or out of a try block, though I'm not sure why anyone would want to even try in this day and age.

     

    If you're worried about subsequent calls throwing (without catching them), why not do something like this:

    Bookmark = MyDataset.GetCurrentBookmark();  //save current location
    MyDataset.DisableControls(); //disable updates of all linked GUI controls

    try
    {
       //iterate over each row and do something with the data
       MyDataset.First();
       while(!MyDataset.EOF())
       {
          ProcessRecord(MyDataset.Get());
          MyDataset.Next();
       }
    }
    catch (const std::exception&)
    {
       // handle or log the exception here
    }

    MyDataset.EnableControls(); //re-enable updating of linked controls;
    MyDataset.Bookmark = Bookmark; //restore previous location

    For early exits with break/continue/return, it becomes problematic in that you would have to repeat the cleanup code, but that can be addressed by having a single exit point.  Many coding standards require single exit points anyhow, but I'm of the opinion that exceptions to that rule should be allowed on a case-by-case basis.  If it becomes too awkward to have a single exit point, you can of course extract the initialization/cleanup code to methods (i.e. creating a SaveBookmark() and RestoreBookmark()).  Creating a new class seems like overkill.

    Also, I seem to remember Alex having an opinion on relying on finally.

     



  • @Groaner said:

    If you're worried about subsequent calls throwing (without catching them), why not do something like this:

    [snip]

    I already tried suggesting that and was met with a mumbled something about exceptions.



  • @Groaner said:

    If you're worried about subsequent calls throwing (without catching them), why not do something like this:

    You've changed the actual behaviour of the code so that it no longer propagates exceptions.



  • @Salamander said:

    @Groaner said:

    If you're worried about subsequent calls throwing (without catching them), why not do something like this:

    You've changed the actual behaviour of the code so that it no longer propagates exceptions.

     

    Fair point, but I would argue that it would be appropriate for the block where the exception is raised to be the block that handles it.  I could see an argument for letting it bubble up to a caller five calls above (e.g. a generic exception logger or a generic message box, perhaps), but that doesn't seem as clean a separation of concerns.  Would the caller be able to do anything meaningful to rectify the situation without access to variables that long went out of scope?

    In the example Mason gave, I assume we're looking at some sort of database grid.  What sort of exceptions might one expect to be raised from iterating through a recordset?  ConnectionLostException?  InvalidCursorException?  GenericSQLException? (or whatever you want to call them).  If your DB connection is dropped, the most graceful solution would be to try and reconnect (with limits on reconnection attempts, etc.), wouldn't it? That's likely going to necessitate refreshing the whole grid, as it would be assumed dirty and unreliable now that we're disconnected.  If you kept the bookmark before the connection was dropped, you could possibly return to the same spot once the connection was reestablished (assuming the row wasn't deleted in the interim). 

    If, on the other hand, an exception is thrown because of invalid SQL, there's not much one can do except log the error, or ask the user to send an error report.  So the recovery method becomes different, and that's a good hint that there should be different ways to process different exceptions.  If there's a common error reporting form, show it in catch blocks where it's relevant.  If there are common pathways to recover from similar or different exceptions, call those in the catch block where applicable.
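    In code, that's all I'm getting at (sketch only; ConnectionLostException, GenericSQLException, Reconnect(), RefreshGrid() and ReportError() are the made-up names from above):

    try
    {
       //iterate over each row and do something with the data
       MyDataset.First();
       while (!MyDataset.EOF())
       {
          ProcessRecord(MyDataset.Get());
          MyDataset.Next();
       }
    }
    catch (const ConnectionLostException&)
    {
       Reconnect();      // recoverable: retry the connection (with limits)...
       RefreshGrid();    // ...then refresh the grid, which is now stale
    }
    catch (const GenericSQLException& e)
    {
       ReportError(e);   // not recoverable here: log it or ask the user to send a report
    }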

     

