My heart bleeds



  • @too_many_usernames said:

    Nice soapbox, but how would you pull this off? If none of the numerous alternatives to C have managed to knock it off its throne, then what would it take?

    I was hoping the old Unix users would all die and/or retire, but now the problem is that they're passing their idiotic, wrongheaded philosophies on to the next generation, and now there are MORE of those idiots around, not fewer.

    Now? I don't know what it would take to stem the tide... these idiots are so delusional they actually think their development process produces superior software against all evidence.

    @too_many_usernames said:

    From my standpoint: make me want to give up C.

    If the last 20 years of following the IT news hasn't made you want to give up C, nothing will. You're like the guy who posts the crazy conspiracy theory about 9/11 then says: "convince me I'm wrong!" It can't be done, the brain is already too damaged.

    @too_many_usernames said:

    I don't want a language that changes every year by adding (or worse) changing language features - new libraries are OK, but don't change the language.

    So stagnation is a feature, to you. What do you drive to work? A 1966 VW Bug? Because you know those newer cars keep CHANGING THINGS!!!

    @too_many_usernames said:

    Don't make me rely on runtime libraries, but provide them. And have them simple, without dependencies on fifty other libraries. And provide them with fully tested code, and make the test plans and results available to me.

    These are all bitches about the shitty open source Unix culture I was mentioning above. You don't have any of these problems if you're on .net.

    @too_many_usernames said:

    Don't make me tied to a specific vendor for binary tools. Support all the processors I need to support, native code (256kB RAM is a luxury on my micros, most only have 64kB, usually only 2-4MB FLASH; heck I've only got 1024 bytes of FLASH for boot code on one of my modules!), and supporting real-time operation, and I've got to be running app code in less than 500ms after a reset, so no crazy initialization requirements.

    PASCAL can meet those requirements, and is a better language than C. Yeah. I said it. I stand by it.



  • @too_many_usernames said:

    I don't want a language that changes every year by adding (or worse) changing language features - new libraries are OK, but don't change the language.

    "Hey, it's a pile of shit, but at least it's a pile of shit that's not going anywhere!"

    Actually, that's not even true, C has changed several times and sometimes in big ways..

    @too_many_usernames said:

    Don't make me rely on runtime libraries, but provide them.

    Um.. what? What useful work are you going to get done without a library?

    @too_many_usernames said:

    And have them simple, without dependencies on fifty other libraries.

    Okay, you just lost me. C requires tons of libraries to do anything non-trivial. And then the dependencies, my God.. I can't even tell you how many different versions of libc are scattered throughout my system. It's a fucking nightmare.



  • @morbiuswilters said:

    Okay, you just lost me. C requires tons of libraries to do anything non-trivial. And then the dependencies, my God.. I can't even tell you how many different versions of libc are scattered throughout my system. It's a fucking nightmare.
     

    I guess maybe there are two different flavors of "C" programming out there: C programming on the desktop, which I agree is pretty atrocious, and C programming on very small important devices that basically make society work (cars, machines, airplanes, power plants, etc.).

    I don't think I've ever used libc in my desktop programming, and I sure as hell haven't used it in my embedded programming.  And I'm not saying "no libraries"...

    And yes, while C has changed, it has generally been by adding things - it has never fundamentally changed the way something works, like, say, Perl or Python or PHP.  I don't think I've ever seen anything in C be deprecated, or the meaning of an operator change.  I am of course ignoring things like pragmas and all that other undefined crap.

    @blakeyrat said:

    So stagnation is a feature, to you. What do you drive to work? A 1966 VW Bug? Because you know those newer cars keep CHANGING THINGS!!!

    Modern cars have improved, yes, but if they had changed the way computer languages have, every time you bought a new car you'd have to learn a new user interface, speak a different language, and alternate which seat had the steering wheel.

    @blakeyrat said:

    Pascal

    My experience was that while Pascal doesn't let you hurt yourself, it makes it very hard to do useful things.  And I'm not aware of that many modern Pascal compilers. I might have to go look that one up...there would be some nice things like strict typing in my line of work...

    @blakeyrat said:

    .Net

    If they have a .Net compiler for MPC5xxx processors, I'd be willing to try it.  The almighty google can't seem to find anything promising (there appears to be a German company called HILF! that might offer something...)

    Trust me, I have to fight against the things people do in C all the time - if there was a better tool, that would save me time and help my company make more money, we'd use it.

    It goes back to the insurance idea - the pitfalls of C code in the wild are a tragedy of the commons, and no company is willing to take on all that expense themselves. Instead they buy a small insurance policy for data losses or whatever, or hope the government bails everyone out.  The only solution (sadly) is going to be something like the EPA of software.... which I don't think any of us really wants.



  • @Mason Wheeler said:

    Again, do you think the guy who wrote this update was an idiot?  I don't; the rest of the code shows he understands what he's doing.  No, he's a very smart, talented developer... who made a mistake.  To err is human, and it's really that simple.  The problem is that the C language doesn't take that into account, and so is fundamentally at odds with reality itself.  And when the tasks being performed are so unforgiving, when the consequences of failure are so high, that's unforgivable.  C needs to die, and its entire family along with it.  It's 25 years overdue for its funeral.
     

    Admittedly, the guy who wrote the update probably wasn't an idiot. I may have been overly critical. My point though is that the problem isn't C. It's that a mistake, which is fairly obvious to someone who knows what they are doing, was allowed to enter the final code base. Fixing the language to be less flexible is not the way to solve that. It just restricts what can be done. Which has both benefits and detriments.

    As I said, I don't write code in C. I wouldn't choose it for anything I do write code in because the benefits of having really low level memory access are not something I need in the stuff I write code for. But I don't think it's possible to say that any and all "secure" code projects wouldn't be better off with that sort of low level memory access. That's a choice that needs to be made individually for each project, with the risks and benefits assessed and judged for that project.

    One of the things that people know when writing C code is that it's really easy to fuck up. This doesn't mean that you should never write C code, it just means that you need more processes in place to detect and fix fuckups.



  • @blakeyrat said:

    PASCAL can meet those requirements, and is a better language than C. Yeah. I said it. I stand by it.


    Fuck you. I learned how to code with Pascal and C is fucking magnitudes better. I can't remember why right now since I haven't written a line of Pascal in 12 years, but I DO remember the sheer goddamn relief when I switched to writing in C/C++.



  • @too_many_usernames said:

    it has never fundamentally changed the way something works, like, say, Perl or Python or PHP.

    Or like type promotions in C.. oh wait.

    @too_many_usernames said:

    I don't think I've ever seen anything in C be deprecated

    VLAs.

    @too_many_usernames said:

    Modern cars have improved, yes, but if they have changed the way computer languages have, every time you bought a new car you'd have to learn a new user interface, speak a different language, and alternate which seat had the steering wheel.

    But you're not the driver of the car, you're the mechanic. You work with internals; you can't expect technology never to advance. And obviously the internals of cars have changed significantly since the 60s.



  • @Snooder said:

    @Mason Wheeler said:

    Again, do you think the guy who wrote this update was an idiot?  I don't; the rest of the code shows he understands what he's doing.  No, he's a very smart, talented developer... who made a mistake.  To err is human, and it's really that simple.  The problem is that the C language doesn't take that into account, and so is fundamentally at odds with reality itself.  And when the tasks being performed are so unforgiving, when the consequences of failure are so high, that's unforgivable.  C needs to die, and its entire family along with it.  It's 25 years overdue for its funeral.
     

    Admittedly, the guy who wrote the update probably wasn't an idiot. I may have been overly critical. My point though is that the problem isn't C. It's that a mistake, which is fairly obvious to someone who knows what they are doing, was allowed to enter the final code base. Fixing the language to be less flexible is not the way to solve that. It just restricts what can be done. Which has both benefits and detriments.

    As I said, I don't write code in C. I wouldn't choose it for anything I do write code in because the benefits of having really low level memory access are not something I need in the stuff I write code for. But I don't think it's possible to say that any and all "secure" code projects wouldn't be better off with that sort of low level memory access. That's a choice that needs to be made individually for each project, with the risks and benefits assessed and judged for that project.

     

    No it isn't.  If your project isn't designed with security in mind as the highest consideration, trumping other factors such as low-level memory access, then it is not a secure project.  This is doubly true in computer software, where everything implicitly relies on invariants and a single exploit anywhere in the program can have far-reaching consequences.

    If it is not secure, it is not secure, period.  And that means, again, that C has to die.

     



  • @Snooder said:

    @blakeyrat said:
    PASCAL can meet those requirements, and is a better language than C. Yeah. I said it. I stand by it.


    Fuck you. I learned how to code with Pascal and C is fucking magnitudes better. I can't remember why right now since I haven't written a line of Pascal in 12 years, but I DO remember the sheer goddamn relief when I switched to writing in C/C++.
     

    Based on what?  I learned to code with C, and I still remember the relief and clarity I gained from learning Pascal several years later, where all these concepts started to make sense.  Pascal is magnitudes better than any C-family system.  At my last job I knew one Delphi developer who could literally out-perform an entire team of C++ coders, because all those ridiculous language-level gotchas simply don't exist for him, and so they don't bog him down.

     



  • @Mason Wheeler said:

    If it is not secure, it is not secure, period.  And that means, again, that C has to die.

    You might as well just say computing itself should die then, because there is no way to really make it secure, especially when you have the concept of networking; remote access is always insecure compared to local access, so remote access should die, right?

    @morbs said:
    Various stuff about C

    Uh... C type promotion rules were more about the standard standardizing previously undefined behavior. Also, VLAs were added in C11; I haven't been able to find anything about them being removed. But perhaps my Google-foo isn't up to par.

    I will grant, though, that the amount of things C leaves "undefined" or "implementation defined" do cause problems. And I really wish C did have fixed-size data types.



  • @boomzilla said:

    @blakeyrat said:
    The universe contains gamma ray bursts which can, in seconds, entirely sterilize entire solar systems of life, at least all forms of life we're aware of.

    Also, magnetars!

    Magnetars are so badass. The magnetic field itself could have a mass density something like 10,000 times greater than lead's.



  • @too_many_usernames said:

    You might as well just say computing itself should die then, because there is no way to really make it secure, especially when you have the concept of networking; remote access is always insecure compared to local access, so remote access should die, right?

    Come on, you're better than this. C adds a lot more overhead for writing secure software (shit like integer overflows, buffer overflows, memory-allocation screwiness) than other languages do. That it's harder to write secure software in C compared to other languages means we're going to keep having bugs like this until C is dead. Does that mean software written in other languages won't have bugs? Of course not, but it's harder in C which means people of the same talent and with the same resources will write buggier software. I don't see any way around that.
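
    Just to make the class of bug concrete, here's a minimal sketch of the kind of missing bounds check we're talking about (a hypothetical handle_heartbeat(), names and layout made up, not the actual OpenSSL code):

        #include <cstdint>
        #include <cstdlib>
        #include <cstring>

        // Hypothetical handler: 'payload_len' comes straight off the wire, so
        // nothing stops it from being far larger than what was actually received.
        void handle_heartbeat(const uint8_t *packet, size_t packet_len) {
            (void)packet_len;  // only the fixed version below would consult this
            uint16_t payload_len = (packet[0] << 8) | packet[1];   // attacker-controlled
            uint8_t *reply = static_cast<uint8_t *>(malloc(payload_len));
            if (!reply) return;

            // BUG: copies payload_len bytes even if the packet only held a handful,
            // echoing back whatever happens to live after 'packet' in memory.
            memcpy(reply, packet + 2, payload_len);

            // The fix is one bounds check the language never forces you to write:
            // if (packet_len < 2 || payload_len > packet_len - 2) { free(reply); return; }

            free(reply);
        }

    The language compiles both versions without a peep; only one of them leaks your private keys.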

    @too_many_usernames said:

    Uh... C type promotion rules were more about the standard standardizing previously undefined behavior.

    I'm pretty sure the rules changed at some point, but it's far enough back that I don't recall in what standard.

    @too_many_usernames said:

    Also, VLAs were added in C11

    They were added as mandatory in C99 but made optional in C11. That means they were more-or-less taken out because hardly any compilers bothered implementing them (except those nutjobs at the FSF).

    @too_many_usernames said:

    And I really wish C did have fixed-size data types.

    Anymore I try to use stuff like int8_t or uint8_t as much as possible. I avoid the ambiguously-sized types like the plague.
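
    For anyone following along at home, that's all the fixed-width stuff amounts to (a minimal sketch; the values here are made up):

        #include <cstdint>
        #include <cstdio>

        int main() {
            uint8_t flags   = 0xFF;   // always exactly 8 bits, on every target
            int32_t counter = -42;    // always exactly 32 bits
            // Plain 'int' and 'long' can be 16, 32, or 64 bits depending on the
            // compiler and platform, which is exactly the ambiguity being avoided.
            std::printf("flags=%d counter=%d sizeof(long)=%zu\n",
                        (int)flags, (int)counter, sizeof(long));
            return 0;
        }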



  • @morbiuswilters said:

    @too_many_usernames said:
    And I really wish C did have fixed-size data types.

    Anymore I try to use stuff like int8_t or uint8_t as much as possible. I avoid the ambiguously-sized types like the plague.

    ...which is great, since you know about them.  But why do you know about them?  Dollars to donuts it's because at some point, you didn't know them, and you wrote something using the default types--the intuitive ones that look like they should be right--and got burned by it, and had to learn the difference.

    When the default behavior built into the language, the stuff that intuitively looks right, is something to be "avoided like the plague," that tells you something about the quality of the language itself.  That's the principal reason I'm not a fan of Lisp either.  Have a look at Paul Graham's On Lisp sometime.  When the language's most outspoken proponent goes and talks to all the muggles about how much Lisp simplifies things for you, but then feels that he needs to spend the start of his coding book for people actually interested in Lisp going over all sorts of things that look intuitive but will actually screw you up if you don't do them this other, more involved way... it's kind of hard to take the first claim seriously.



  • The universe *is* inherently unsuitable for life. We've known that since 400 BCE. Did you miss the memo?


  • :belt_onion:

    @jes said:

    Did no-one else notice the claim that the bug means:

    ** Leaked secret keys allows the attacker to decrypt any past and future traffic to the protected services ...

    I wonder what happened to the concept of ephemeral session keys and perfect forward secrecy. Or maybe the author of this site felt that a lot more FUD was needed to make a large problem appear to be a few orders of magnitude bigger.

    I was so put off by that patently nonsensical statement that I found it hard to take any of the rest of the page seriously, my initial reaction was to ignore it as being simply the product of massive ignorance on the author's part.

     

    The statement on that Web site that really makes me see red is "The combined market share of just those two [Apache and nginx] out of the active sites on the Internet was over 66% according to Netcraft's April 2014 Web Server Survey." This sentence has resulted in every freaking news outlet reporting that two-thirds of the Internet is affected by the bug. For God's sake, MSN quoted Lifehacker as a source of this in an article I read this morning. For fuck's sake. Actual data from mass tests indicate that the actual percentage is more like 10% -- of the Web sites that actually use SSL at all -- and almost none of those are well-known sites (except Yahoo!).

    Granted that that data was from 5 p.m. yesterday and a number of system operators probably spent a few hours upgrading when they got in in the morning (e.g. Wikipedia was listed as not vulnerable but that's because they force-updated everyone via Puppet and expired all session tokens). Yawn. The hysteria is still completely unwarranted. A huge number of sites are on 0.9.8e (a very old but a very stable release); in fact, that's the latest release available on a lot of distributions. Pretty much the only people who upgraded did it for SNI or (as is the case for a couple of my clients) for TLS 1.2 (generally due to federal regulations).



  • @Mason Wheeler said:

    @Snooder said:

    @blakeyrat said:
    PASCAL can meet those requirements, and is a better language than C. Yeah. I said it. I stand by it.


    Fuck you. I learned how to code with Pascal and C is fucking magnitudes better. I can't remember why right now since I haven't written a line of Pascal in 12 years, but I DO remember the sheer goddamn relief when I switched to writing in C/C++.
     

    Based on what?  I learned to code with C, and I still remember the relief and clarity I gained from learning Pascal several years later, where all these concepts started to make sense.  Pascal is magnitudes better than any C-family system.  At my last job I knew one Delphi developer who could literally out-perform an entire team of C++ coders, because all those ridiculous language-level gotchas simply don't exist for him, and so they don't bog him down.

     

    I have a searing hatred for all Wirth languages, so I disagree with that point. Or, perhaps I think it can be applied to many languages: learning them will elucidate things about programming you hadn't even realized you didn't understand.

    C++ was my first language, and at one time I knew it inside and out. I have a copy of D&E sitting upstairs on my bookshelf. Now, though, I'd vomit with rage long before I managed to complete any meaningful code. Sweet zombie Jesus is it a clunky language hell-bent on getting in your way. It's like the Clippy of programming languages. "It looks like you're writing yet another terrible graphical front-end for cdrdao. Let's use the dynamic_cast operator on this void pointer."



  • @too_many_usernames said:

    @Buttembly Coder said:

    I think the flying comparison is basically perfect. Just as someone with little experience flying should not fly others, someone with little experience dealing with C's pointer fuckery should not write code intended for public use.
     

    Yes, I agree with this.

    @Mason Wheeler said:

    C needs to die, and its entire family along with it.  It's 25 years overdue for its funeral.

    Nice soapbox, but how would you pull this off? If none of the numerous alternatives to C have managed to knock it off its throne, then what would it take? Perhaps it would indeed be legal pressure, but that sounds "artificial" to me.

    From my standpoint: make me want to give up C.  I don't want a language that changes every year by adding (or worse) changing language features - new libraries are OK, but don't change the language. Don't make me rely on runtime libraries, but provide them. And have them simple, without dependencies on fifty other libraries. And provide them with fully tested code, and make the test plans and results available to me. Don't make me tied to a specific vendor for binary tools. Support all the processors I need to support, native code (256kB RAM is a luxury on my micros, most only have 64kB, usually only 2-4MB FLASH; heck I've only got 1024 bytes of FLASH for boot code on one of my modules!), and supporting real-time operation, and I've got to be running app code in less than 500ms after a reset, so no crazy initialization requirements.

    Give me those things in another language, and I'll look at it. Other languages do have some of those things, but not all of them.

    Until then, C is still the best option.

     

    Yes, the best option for an embedded application. However, I do believe Mason Wheeler was describing the use of C in regard to web server applications?

     

     



  • On tonight's The Tonight Show Starring Jimmy Fallon, Jimmy Fallon called heartbleed "a virus" and suggested changing your password to something more secure, like "12347".

    That second half I can understand as being a joke, but the first half is like saying there's a feral cat colony in your back yard and then pointing to a dog.



  • @too_many_usernames said:

    @Mason Wheeler said:
    If it is not secure, it is not secure, period.  And that means, again, that C has to die.

    You might as well just say computing itself should die then, because there is no way to really make it secure, especially when you have the concept of networking; remote access is always insecure compared to local access, so remote access should die, right?

    There's a difference between something being ((insecure) in general) and something being (not (secure in general)).



  • @Mason Wheeler said:

    @Snooder said:

    @Mason Wheeler said:

    Again, do you think the guy who wrote this update was an idiot?  I don't; the rest of the code shows he understands what he's doing.  No, he's a very smart, talented developer... who made a mistake.  To err is human, and it's really that simple.  The problem is that the C language doesn't take that into account, and so is fundamentally at odds with reality itself.  And when the tasks being performed are so unforgiving, when the consequences of failure are so high, that's unforgivable.  C needs to die, and its entire family along with it.  It's 25 years overdue for its funeral.
     

    Admittedly, the guy who wrote the update probably wasn't an idiot. I may have been overly critical. My point though is that the problem isn't C. It's that a mistake, which is fairly obvious to someone who knows what they are doing, was allowed to enter the final code base. Fixing the language to be less flexible is not the way to solve that. It just restricts what can be done. Which has both benefits and detriments.

    As I said, I don't write code in C. I wouldn't choose it for anything I do write code in because the benefits of having really low level memory access are not something I need in the stuff I write code for. But I don't think it's possible to say that any and all "secure" code projects wouldn't be better off with that sort of low level memory access. That's a choice that needs to be made individually for each project, with the risks and benefits assessed and judged for that project.

     

    No it isn't.  If your project isn't designed with security in mind as the highest consideration, trumping other factors such as low-level memory access, then it is not a secure project.  This is doubly true in computer software, where everything implicitly relies on invariants and a single exploit anywhere in the program can have far-reaching consequences.

    If it is not secure, it is not secure, period.  And that means, again, that C has to die.

     

     

    Actually, all FOSS projects are more secure than you think.

    http://www.scmagazine.com/open-source-software-is-more-secure-than-you-think/article/315374/

    Don't forget that users have the opportunity to evaluate and critique the actual code – not just how it works, but how it was written to work. Because of the nature of the open source community and the fear of losing credibility, developers take great caution in releasing code with their name on it.

     



  • @Helix said:

    Actually
     

    How do you do.



  • @too_many_usernames said:

    From my standpoint: make me want to give up C.  I don't want a language that changes every year by adding (or worse) changing language features - new libraries are OK, but don't change the language. Don't make me rely on runtime libraries, but provide them. And have them simple, without dependencies on fifty other libraries. And provide them with fully tested code, and make the test plans and results available to me. Don't make me tied to a specific vendor for binary tools. Support all the processors I need to support, native code (256kB RAM is a luxury on my micros, most only have 64kB, usually only 2-4MB FLASH; heck I've only got 1024 bytes of FLASH for boot code on one of my modules!), and supporting real-time operation, and I've got to be running app code in less than 500ms after a reset, so no crazy initialization requirements.

    I think if you're working at a level as low as that, C and C++ will continue to be your only options. However, if we had Cisco IP phones running Java a decade ago, and people can watch HD video on Linux on a $50 Raspberry Pi board, then surely such ridiculously low hardware specs should be dying out too.

     



  • Fuck, I'm not saying "C is demonspawn and should not be used anywhere ever again". What I am saying is, "C is the wrong tool for the majority of FOSS projects it's used in". Embedded programming? Well fuck, your hardware constraints limit your language choice. But a security-critical project like OpenSSL that's gonna be run on full-blown web servers? Why would you not use a language that is secure by default?

    @morbiuswilters said:

    @The_Assimilator said:
    Even C++ would be a step up, at least that would bring OpenSSL into the late 20th century.

    You had me until you said "C++". Sorry, no, C++ would be 100x the clusterfuck this would be.

    But, yeah, the default in so much FOSS is just to use C. The wages of C are laughably-destructive bugs like this one. shrug FOSS should be using a better platform, like .NET, but then they wouldn't be sticking it to The Man which is the entire reason FOSS exists in the first place.

    I'm trying to be realistic here, getting FOSStards onto a completely different language is going to be next to impossible because of the aforementioned Sticking It To The Man willy-waving issues. But at least if we can get them onto C++, which is more secure (IF USED CORRECTLY*) than plain-jane C, it'll be an improvement.

    @boomzilla said:

    ... portability is the real reason why we can't get away from C.

    C is portable only because many idiots have spent many many hours of their exceptionally wretched lives rewriting the core libraries and compilers to work on their $PLATFORM of choice. Time and effort that would have been far better spent porting C code to a better language, or just fucking rewriting it from scratch in said better language. It's not portable because it was designed to be, or because it was designed well.



  • @LoremIpsumDolorSitAmet said:

    However, if we had Cisco IP phones running Java a decade ago, and people can watch HD video on Linux on a $50 Raspberry Pi board, then surely such ridiculously low hardware specs should be dying out too.
     

    All specs aren't created equal - there are quite a few reasons why "hard*" embedded systems like automotive have lower "PC" specs (memory, storage, clock speed) than things like the Raspberry Pi, including but not limited to: 10 or 20 year field life (as opposed to 2-5), surviving a -40 to 125 degree** Celsius environment with only passive cooling, surviving crazy voltage spikes, fairly tight radiated and conducted noise requirements, vibration requirements, etc.  So the hardware specs are not really as "low" as you think - they are just focused on different design parameters.

    *I'd classify "hard" embedded systems as special-purpose computers, generally with an RTOS, whereas "soft" embedded systems are those that are really headless or minimal-interface Linux or Windows systems; things that are really general-purpose computers but in an embedded context.

    **Most consumer electronics are only rated up to 85C, some are up to 105.



  • @The_Assimilator said:

    Fuck, I'm not saying "C is demonspawn and should not be used anywhere ever again". What I am saying is, "C is the wrong tool for the majority of FOSS projects it's used in". Embedded programming? Well fuck, your hardware constraints limit your language choice. But a security-critical project like OpenSSL that's gonna be run on full-blown web servers? Why would you not use a language that is secure by default?

    @morbiuswilters said:

    @The_Assimilator said:
    Even C++ would be a step up, at least that would bring OpenSSL into the late 20th century.

    You had me until you said "C++". Sorry, no, C++ would be 100x the clusterfuck this would be.

    But, yeah, the default in so much FOSS is just to use C. The wages of C are laughably-destructive bugs like this one. shrug FOSS should be using a better platform, like .NET, but then they wouldn't be sticking it to The Man which is the entire reason FOSS exists in the first place.

    I'm trying to be realistic here, getting FOSStards onto a completely different language is going to be next to impossible because of the aforementioned Sticking It To The Man willy-waving issues. But at least if we can get them onto C++, which is more secure (IF USED CORRECTLY*) than plain-jane C, it'll be an improvement.

     

    C++ is not more secure than C, though; that's the point. Don't give me that "if used correctly" line, either; if C was used correctly this bug wouldn't have happened.

    Hey, let's use C++ correctly. Copy the contents of one std::vector to another std::vector.  We'll call them "src" and "dest". Correct way is to use std::copy, right? std::copy(src.begin(), src.end(), dest.begin()). What happens if dest is shorter than src? Well, it's "undefined", but you know what it'll do in practice: std::copy don't care, it'll just crap all over whatever is after that array that std::vector maintains as a private member.
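
    Boiled down to a toy example (a minimal sketch, nothing from any real codebase), that looks like:

        #include <algorithm>
        #include <cstdio>
        #include <vector>

        int main() {
            std::vector<int> src  = {1, 2, 3, 4, 5};
            std::vector<int> dest(2);   // too small for the copy below

            // "Correct" modern C++: no raw pointers, no manual lengths...
            // ...and still undefined behavior, because std::copy assumes the
            // destination range is big enough. It would happily write past
            // dest's two elements into whatever memory follows them.
            // std::copy(src.begin(), src.end(), dest.begin());

            // The version that doesn't scribble over the heap:
            dest.resize(src.size());
            std::copy(src.begin(), src.end(), dest.begin());
            std::printf("%d %d %d %d %d\n", dest[0], dest[1], dest[2], dest[3], dest[4]);
            return 0;
        }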

    I mean, sometimes performance might be critical: I don't care if your video game or something equally unimportant crashes every hour or so. But for security-critical software we need to stop using languages that don't have some semblance of memory safety. I don't leave my motorcycle helmet off to save weight, you know?

     


  • ♿ (Parody)

    @The_Assimilator said:

    @boomzilla said:
    ... portability is the real reason why we can't get away from C.

    C is portable only because many idiots have spent many many hours of their exceptionally wretched lives rewriting the core libraries and compilers to work on their $PLATFORM of choice. Time and effort that would have been far better spent porting C code to a better language, or just fucking rewriting it from scratch in said better language. It's not portable because it was designed to be, or because it was designed well.

    Well, duh. But that's my point. Who's going to spend a similar amount of time doing all that with some other language? The fact is that if you use C, you have the ability to compile for just about any target. I guess C++ comes close to that, but that's not necessarily a step forward. There's simply no momentum that way. I don't think the dominance of C was a conscious choice, and I don't think any such conscious choice for anything new is really possible. Even MS keeps to C/C++ for most of its stuff. What language would you choose? Short of flamethrower executions, how would you get cooperation? And if you come back with a blakey-like exhortation that you're just talking about the ideal state of things, you probably deserve that flamethrower execution.

    I haven't been keeping up with the new C++ standards. Have they come up with something comparable to C's stdint.h (intptr_t, uint8_t, etc.) yet?



  • @boomzilla said:

    I haven't been keeping up with the new C++ standards. Have they come up with something comparable to C's stdint.h (intptr_t, uint8_t, etc.) yet?

     

    Yeah, C++11 adds them (#include <cstdint>). In general, C++11 smooths out some of the more obvious failures of the standard library, but the fundamental problems with the language are still there. It's like spraypainting over a rust spot on your car.

     

    edit: Another big problem is that C++ is basically the Hotel California. C, at least, has the advantage (?) that you can call into C code from pretty much any language. C++, on the other hand, you can't reliably call even from C++ code that's been built with a different compiler.
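
    The usual escape hatch is to hide the C++ behind a plain C interface. A minimal sketch, with a made-up widget type (nothing from any real library):

        // widget.h -- hypothetical C++ object hidden behind a C interface,
        // because the C ABI is stable across compilers and the C++ ABI is not.
        #ifdef __cplusplus
        extern "C" {
        #endif

        typedef struct widget widget;      // opaque handle; callers never see the C++ type
        widget *widget_create(int size);
        void    widget_destroy(widget *w);

        #ifdef __cplusplus
        }
        #endif

        // widget.cpp -- the C++ lives here, out of sight of other compilers and languages.
        struct widget { int size; };

        extern "C" widget *widget_create(int size) { return new widget{size}; }
        extern "C" void    widget_destroy(widget *w) { delete w; }

    Anything that can call into C -- which is basically everything -- can then use it, no matter which compiler built the C++ side.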


  • :belt_onion:

    @LoremIpsumDolorSitAmet said:

    However, if we had Cisco IP phones running Java a decade ago, and people can watch HD video on Linux on a $50 Raspberry Pi board, then surely such ridiculously low hardware specs should be dying out too.
     

    Speaking of Java, I'll elaborate on a tag from above: About 99.9% of the infrastructure with which I work wasn't affected. That's because it's pretty much all Java-based. Now, you can say a lot about Java (and I do), but as far as the managed language arguments in this thread: Remember the last time Java had a security vulnerability like this applicable to server deployments? Especially one due to the lack of a bounds check? Yeah, I don't either.



  • @Helix said:

    Don't forget that users have the opportunity to evaluate and critique the actual code –
    not just how it works, but how it was written to work. Because of the
    nature of the open source community and the fear of losing credibility,
    developers take great caution in releasing code with their name on it.

    Yes, that is the theory.

    But we are talking here about the practice.

    ACTUALLY GUYZ!!!!! IT TURNS OUT SLASHDOT IS BETTER THAN YOU THINK GUYZZZZ!!!! HEY GUYZZZZ!!!



  • @heterodox said:

    Speaking of Java, I'll elaborate on a tag from above: About 99.9% of the infrastructure with which I work wasn't affected. That's because it's pretty much all Java-based. Now, you can say a lot about Java (and I do), but as far as the managed language arguments in this thread: Remember the last time Java had a security vulnerability like this applicable to server deployments? Especially one due to the lack of a bounds check? Yeah, I don't either.

    If a Java-based web app is available over SSL, then it's the web server's responsibility to provide that service. Many Java deployments are vulnerable to HeartBleed, especially since a large portion of people who choose Java also choose open source operating systems and web servers. Corporate guys running WebSphere on Z-series hardware are fine, but Java can't claim that victory.

    Also, Java has more security bugs than pretty much anything else. The only thing that's prevented Java from causing a problem this big is that no one uses Java for system level code.

    C#'s libraries, compiler, and much of the runtime are written in C#. Java's runtime and compiler are written in C (at least the ones most people use are). That's why there is a security fix for Java every two weeks.



  • The answer to some of the idiots above mouth-farting about portability is: don't fucking compile it at all. 99.999999999999% of software doesn't get (or need!) the speed advantage of compilation, and code that hasn't been compiled yet is a shitload more portable.



  • @aristurtle said:

    Yeah, C++11 adds them (#include <cstdint>). In general, C++11 smooths out some of the more obvious failures of the standard library, but the fundamental problems with the language are still there. It's like spraypainting over a rust spot on your car.

     

    C++11 is the devil. The whole auto/decltype thing is a solution to a problem which should have never existed.



  • @bstorer said:

    @aristurtle said:

    Yeah, C++11 adds them (#include <cstdint>). In general, C++11 smooths out some of the more obvious failures of the standard library, but the fundamental problems with the language are still there. It's like spraypainting over a rust spot on your car.

     

    C++11 is the devil. The whole auto/decltype thing is a solution to a problem which should have never existed.
     

    But you can actually pass the filename to a std::fstream as a std::string now! And std::vectors can be created with an initializer list, so they aren't openly inferior to the C feature that they supposedly replace!

    Also, because std::shared_ptr and std::unique_ptr are in the standard library, you can't laugh in a C++ weenie's face when he starts talking about "RAII". (You need to wait until he starts saying that shared_ptr is "better than garbage collection").
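
    For the record, the stuff being grudgingly celebrated there looks roughly like this (a minimal C++11 sketch; the file name and values are made up):

        #include <fstream>
        #include <memory>
        #include <string>
        #include <vector>

        int main() {
            std::string path = "notes.txt";
            std::ofstream out(path);            // C++11: fstreams finally take std::string
            out << "hello\n";

            std::vector<int> v = {1, 2, 3};     // C++11: initializer lists for containers

            auto shared = std::make_shared<std::vector<int>>(v);  // ref-counted copy
            std::unique_ptr<int> owned(new int(42));              // single-owner RAII
            // Both are released automatically when they go out of scope -- no delete,
            // no finally blocks, which is the whole RAII sales pitch.
            return 0;
        }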

     



  • @bstorer said:

    @aristurtle said:

    Yeah, C++11 adds them (#include <cstdint>). In general, C++11 smooths out some of the more obvious failures of the standard library, but the fundamental problems with the language are still there. It's like spraypainting over a rust spot on your car.

    C++11 is the devil. The whole auto/decltype thing is a solution to a problem which should have never existed.

    You can say that about half of the C++ features added since the initial release: they're solutions to specific C-language or C++-language issues, rather than actual innovative language features.  For example:

    @Joel Spolsky said:

    C++ string classes are supposed to let you pretend that strings are first-class data. They try to abstract away the fact that strings are hard and let you act as if they were as easy as integers. Almost all C++ string classes overload the + operator so you can write s + "bar" to concatenate. But you know what? No matter how hard they try, there is no C++ string class on Earth that will let you type "foo" + "bar", because string literals in C++ are always char*'s, never strings. The abstraction has sprung a leak that the language doesn't let you plug. (Amusingly, the history of the evolution of C++ over time can be described as a history of trying to plug the leaks in the string abstraction. Why they couldn't just add a native string class to the language itself eludes me at the moment.)

     -- The Law of Leaky Abstractions
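
    The leak he's describing, in miniature (a hypothetical snippet, not from the book):

        #include <string>

        int main() {
            std::string a = std::string("foo") + "bar";   // fine: std::string + char*
            // std::string b = "foo" + "bar";             // won't compile: both operands
            //                                            // are char arrays/pointers, and
            //                                            // there's no operator+ for them
            std::string c = "foo" "bar";                  // adjacent-literal concatenation,
                                                          // handled by the compiler itself
            (void)a; (void)c;
            return 0;
        }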

     



  • @Mason Wheeler said:

    @bstorer said:

    @aristurtle said:

    Yeah, C++11 adds them (#include <cstdint>). In general, C++11 smooths out some of the more obvious failures of the standard library, but the fundamental problems with the language are still there. It's like spraypainting over a rust spot on your car.

    C++11 is the devil. The whole auto/decltype thing is a solution to a problem which should have never existed.

    You can say that about half of the C++ features added since the initial release: they're solutions to specific C-language or C++-language issues, rather than actual innovative language features.

    No question. I just needed a C++11-specific example. At least they tried to plug half the leaks with std library things this time, instead of the typical strategy of just adding more keywords. The C++ committee should go ahead and write out a list of reserved words for future additions. Or maybe it would be simpler to list the names which won't be used as keywords in the future. Safe names list: "foo, bar, FFDOEDDIS_ddhs_RHDIS_eeeee"



  • @bstorer said:

    No question. I just needed a C++11-specific example. At least they tried to plug half the leaks with std library things this time, instead of the typical strategy of just adding more keywords. The C++ committee should go ahead and write out a list of reserved words for future additions. Or maybe it would be simpler to list the names which won't be used as keywords in the future. Safe names list: "foo, bar, FFDOEDDIS_ddhs_RHDIS_eeeee"

    I'm gonna disagree with you on this one. The ridiculous "virtual void foo() = 0;" syntax should have been a new reserved word from the beginning.  The less they pretend to be compatible with C code (they never really were) the better off they are.


  • ♿ (Parody)

    @blakeyrat said:

    The answer to some of the idiots above mouth-farting about portability is: don't fucking compile it at all. 99.999999999999% of software doesn't get (or need!) the speed advantage of compilation, and code that hasn't been compiled yet is a shitload more portable.

    It's so hard to tell when blakey is missing the point seriously or ironically. It's interpreters all the way down!



  • @aristurtle said:

    @bstorer said:

    @aristurtle said:

    Yeah, C++11 adds them (#include <cstdint>). In general, C++11 smooths out some of the more obvious failures of the standard library, but the fundamental problems with the language are still there. It's like spraypainting over a rust spot on your car.

     

    C++11 is the devil. The whole auto/decltype thing is a solution to a problem which should have never existed.
     

    But you can actually pass the filename to a std::fstream as a std::string now! And std::vectors can be created with an initializer list, so they aren't openly inferior to the C feature that they supposedly replace!

    Also, because std::shared_ptr and std::unique_ptr are in the standard library, you can't laugh in a C++ weenie's face when he starts talking about "RAII". (You need to wait until he starts saying that shared_ptr is "better than garbage collection").

     

    Hooray! Problems solved, then!

    For me, the C++ reference standard will always be whatever compiles using Borland Turbo C++ 3.0. Anything since is just window-dressing on an already-perfect language.


  • :belt_onion:

    @Jaime said:

    If a Java-based web app is available over SSL, then it's the web server's responsibility to provide that service. Many Java deployments are vulnerable to HeartBleed, especially since a large portion of people who choose Java also choose open source operating systems and web servers. Corporate guys running WebSphere on Z-series hardware are fine, but Java can't claim that victory.

    The hell are you talking about? The applications are made available via an application server (e.g. Jetty, Tomcat, WebLogic) and those tend to use the JSSE implementation because why the hell would they reinvent the wheel, especially by interfacing with OpenSSL via JNI or something? That'd be insane.

    Now, if you're using a Web server as a reverse proxy to the application server, I'm not at all saying that's a bad thing, but that's not Java's problem. You're back where you started. As for the JVM being written in C, a.) that's not always the case, and b.) there's been a push for years to move more and more of Hotspot into Java and minimize use of native code.



  • @heterodox said:

    The hell are you talking about? The applications are made available via an application server (e.g. Jetty, Tomcat, WebLogic) and those tend to use the JSSE implementation because why the hell would they reinvent the wheel, especially by interfacing with OpenSSL via JNI or something? That'd be insane.

    If you are talking about middle-tier and back end services, then the whole claim of them not being compromised is meaningless since they aren't even exposed to the Internet. I thought, since you were participating in this conversation about HeartBleed and the underlying causes for the bug, that you were actually talking about code that was in some way related to this - web app code.


  • :belt_onion:

    @Jaime said:

    If you are talking about middle-tier and back end services, then the whole claim of them not being compromised is meaningless since they aren't even exposed to the Internet. I thought, since you were participating in this conversation about HeartBleed and the underlying causes for the bug, that you were actually talking about code that was in some way related to this - web app code.
     

    You misunderstood me. I was using JSSE as an example of an SSL implementation written in a managed language, comparing its history of vulnerabilities with that of implementations like OpenSSL. See also: The entire discussion above which was talking about the languages used in SSL implementations rather than about those used in Web servers or Web services.



  • The dumbfucks who wrote this library built their own malloc()/free() implementation because it was "too slow" on some platforms-- this idiocy had the effect of defeating the protection that some OSes had already written to prevent this exact type of bug. Because malloc() is "too slow". On some platforms.

    Edit: Oh it's even worse. OpenSSL *relies* on a bug in its own malloc() function-- it frees memory, then immediately remallocs it and assumes the new memory is the same as the old. Awesome.
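
    In miniature, the pattern being described looks something like this (a made-up toy allocator, not the actual OpenSSL freelist code):

        #include <cstdlib>
        #include <cstring>

        // Toy single-slot "fast" allocator: hands back the most recently freed
        // block instead of going to the real malloc()/free(). Size bookkeeping
        // is omitted, which is part of the point.
        static void *last_freed = nullptr;

        void *fast_malloc(size_t n) {
            if (last_freed != nullptr) {
                void *p = last_freed;
                last_freed = nullptr;
                return p;                 // old contents still there, untouched
            }
            return malloc(n);
        }

        void fast_free(void *p) {
            last_freed = p;               // never actually returned to the system
        }

        int main() {
            char *a = static_cast<char *>(fast_malloc(64));
            std::strcpy(a, "secret key material");
            fast_free(a);

            // The caller "knows" it gets the same block back with the old bytes
            // intact, and the hardening a real free()/malloc() round trip might
            // trigger (poisoning, unmapping, guard pages) never gets a chance to run.
            char *b = static_cast<char *>(fast_malloc(64));
            (void)b;   // b still points at "secret key material"
            return 0;
        }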



  • @blakeyrat said:

    The dumbfucks who wrote this library built their own malloc()/free() implementation because it was "too slow" on some platforms-- this idiocy had the effect of defeating the protection that some OSes had already written to prevent this exact type of bug. Because malloc() is "too slow". On some platforms.

    Edit: Oh it's even worse. OpenSSL *relies* on a bug in its own malloc() function-- it frees memory, then immediately remallocs it and assumes the new memory is the same as the old. Awesome.

    Didn't OpenSSL have that thing where a Debian maintainer removed some code because it accessed memory before it was written to and then it made OpenSSL only generate 16 different "random" numbers or something?



  • @Ben L. said:

    @blakeyrat said:
    The dumbfucks who wrote this library built their own malloc()/free() implementation because it was "too slow" on some platforms-- this idiocy had the effect of defeating the protection that some OSes had already written to prevent this exact type of bug. Because malloc() is "too slow". On some platforms.

    Edit: Oh it's even worse. OpenSSL *relies* on a bug in its own malloc() function-- it frees memory, then immediately remallocs it and assumes the new memory is the same as the old. Awesome.

    Didn't OpenSSL have that thing where a Debian maintainer removed some code because it accessed memory before it was written to and then it made OpenSSL only generate 16 different "random" numbers or something?

     

    Yeah, to suppress a Valgrind warning. (It was 32,768 possible random numbers, but of course that's even worse; with 16 you have a chance of actually noticing).

     





  • One thing that has been missed in a lot of discussions of the impact of this bug is that the damage could have been contained with a little defense in depth. For example, memory dumps should have been a lot less useful if sensitive data were handled with one of the many available techniques. Example: SecureString. There are a million other coding patterns out there that handle allocated memory securely; it seems they aren't used all that often, given that people were grabbing private keys and passwords from fragmented memory dumps.
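
    For the non-.NET crowd, the same idea is just "scrub it before you let go of it." A minimal sketch with a hypothetical secure_wipe()/SecretBuffer, not any particular library's API:

        #include <cstddef>
        #include <vector>

        // Hypothetical helper: zero a buffer through a volatile pointer so the
        // compiler can't optimize the wipe away as a dead store.
        void secure_wipe(void *p, std::size_t n) {
            volatile unsigned char *vp = static_cast<volatile unsigned char *>(p);
            while (n--) *vp++ = 0;
        }

        // Hypothetical wrapper: key material lives in here and gets scrubbed the
        // moment the object goes out of scope, so a later memory disclosure has
        // that much less to find.
        struct SecretBuffer {
            std::vector<unsigned char> bytes;
            explicit SecretBuffer(std::size_t n) : bytes(n) {}
            ~SecretBuffer() { secure_wipe(bytes.data(), bytes.size()); }
        };

        int main() {
            SecretBuffer key(32);   // pretend this gets filled with a private key
            // ... use key.bytes ...
            return 0;
        }   // wiped here, before the allocator ever hands the block out again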



  • @Jaime said:

    One thing that has been missed in a lot of discussions of the impact of this bug is that the damage could have been contained with a little defense in depth. For example, memory dumps should have been a lot less useful if sensitive data were handled with one of the many available techniques. Example: SecureString. There are a million other coding patterns out there that handle allocated memory securely; it seems they aren't used all that often, given that people were grabbing private keys and passwords from fragmented memory dumps.


    I can think of at least one reason a server running OpenSSL is not likely to use .NET's SecureString...



  • @Buttembly Coder said:

    @Jaime said:
    ... one of the many available techniques. Example: SecureString. There are a million other ...

    I can think of at least one reason a server running OpenSSL is not likely to use .NET's SecureString...

    If only I would have thought of that possibility and instead used SecureString as just an example and pointed out that there were many alternatives...


  • @blakeyrat said:

    Edit: Oh it's even worse. OpenSSL relies on a bug in its own malloc() function-- it frees memory, then immediately remallocs it and assumes the new memory is the same as the old. Awesome.

    Returning a pointer to memory which already contains information is not a bug. Returning a pointer to memory with a predictable value is not a bug either. malloc() does not securely erase the memory before it returns the pointer, and makes no guarantees about randomness of its contents.

    Code that assumes it knows which memory locations malloc() is going to return (and thus, what information will still be stored there) has a bug, but malloc() does not have a bug.



  • @anotherusername said:

    @blakeyrat said:
    Edit: Oh it's even worse. OpenSSL *relies* on a bug in its own malloc() function-- it frees memory, then immediately remallocs it and assumes the new memory is the same as the old. Awesome.

    Returning a pointer to memory which already contains information is not a bug. Returning a pointer to memory with a predictable value is not a bug either. malloc() does not securely erase the memory before it returns the pointer, and makes no guarantees about randomness of its contents.

    Code that assumes it knows which memory locations malloc() is going to return (and thus, what information will still be stored there) has a bug, but malloc() does not have a bug.

    The "real" malloc() doesn't have a bug, but, they fellt that malloc() was "too slow" so they wrote their own buggy version to use instead.

     


  • Considered Harmful

    @blakeyrat said:

    The answer to some of the idiots above mouth-farting about portability is: don't fucking compile it at all. 99.999999999999% of software doesn't get (or need!) the speed advantage of compilation, and code that hasn't been compiled yet is a shitload more portable.

    Speaking of mouth-farting, would you care to back that baseless statistic up with some data?

    I recognize that it's hyperbole, but you can't just make a bold claim like that and not have anything to back it up with, right?

