Make sure it's initialized!



  • @Sutherlands said:

    Quoting fail.  I'm not sure what you're saying, exactly, but you make me look at it and say "this isn't even a valid scenario, since it will dispose of the object and you won't be able to use it."

    Disposing an object is nothing more than calling a method on that object. There is nothing inherent in this that means "you won't be able to use it", unless the developer of the class in question has explicitly coded in functionality to detect the call to Dispose and do something special.



  • @TheCPUWizard said:

    @Sutherlands said:

    Quoting fail.  I'm not sure what you're saying, exactly, but you make me look at it and say "this isn't even a valid scenario, since it will dispose of the object and you won't be able to use it."

    Disposing an object is nothing more than calling a method on that object. There is nothing inherent in this that means "you won't be able to use it", unless the developer of the class in question has explicitly coded in functionality to detect the call to Dispose and do something special.

    I understand what you're saying, but I would think that the vast majority of IDisposable objects cannot be used, or are not in a valid state, after they are disposed.  So it may be valid code in very specific circumstances.


  • @TheCPUWizard said:

    @Sutherlands said:

    Quoting fail.  I'm not sure what you're saying, exactly, but you make me look at it and say "this isn't even a valid scenario, since it will dispose of the object and you won't be able to use it."

    Disposing an object is nothing more than calling a method on that object. There is nothing inherent in this that means "you won't be able to use it", unless the developer of the class in question has explicitly coded in functionality to detect the call to Dispose and do something special.

    ...which is the exact semantics that IDisposable demands. If you call Dispose() on, say, a Stream, then calling any of its instance methods will cause an ObjectDisposedException to be thrown. If an object doesn't have any special dispose logic, then it shouldn't even have a Dispose() method.

    To paraphrase Blakeyrat, this shit isn't hard, people.



  • @pkmnfrk said:

    @TheCPUWizard said:

    @Sutherlands said:

    Quoting fail.  I'm not sure what you're saying, exactly, but you make me look at it and say "this isn't even a valid scenario, since it will dispose of the object and you won't be able to use it."

    Disposing an object is nothing more than calling a method on that object. There is nothing inherent in this that means "you won't be able to use it", unless the developer of the class in question has explicitly coded in functionality to detect the call to Dispose and do something special.

    ...which is the exact semantics that IDisposable demands. If you call Dispose() on, say, a Stream, then calling any of its instance methods will cause an ObjectDisposedException to be thrown. If an object doesn't have any special dispose logic, then it shouldn't even have a Dispose() method.

    To paraphrase Blakeyrat, this shit isn't hard, people.

    There is nothing that demands those Semantics. The only requirement (in this respect) is that Dispose "releases valuable resources". An object is free to re-create these resources, and many objects (including BCL objects) maintain some properties that are valid even after Dispose.

    This is one of the reasons why so many people consider the Disposable Pattern fatally flawed.
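
    A small sketch illustrating both claims, using MemoryStream: most members throw ObjectDisposedException after Dispose, yet ToArray() is documented to keep working after the stream is closed. (This is only an illustration, not code from the thread.)

    using System;
    using System.IO;

    class DisposeSemanticsSketch
    {
        static void Main()
        {
            var ms = new MemoryStream();
            ms.WriteByte(42);
            ms.Dispose();

            try
            {
                ms.WriteByte(1);                    // most members now throw
            }
            catch (ObjectDisposedException)
            {
                Console.WriteLine("WriteByte after Dispose: ObjectDisposedException");
            }

            // ...but not everything is dead: MemoryStream.ToArray() is documented
            // to keep working after the stream has been closed.
            Console.WriteLine(ms.ToArray().Length); // 1
        }
    }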


  • :belt_onion:

    @Sutherlands said:

    @bjolling said:

    So you still have declared your recordset outside the try block that handles the dispose. Only you have introduced 2 try blocks now.

    ... I don't give a flying crap what the IL does, or whether it's the same as what you wrote.... Are you dense?
    You sure sound like a competent programmer. I'd rather be called dense than spouting that kind of crap.



  • @bjolling said:

    @Sutherlands said:

    @bjolling said:

    So you still have declared your recordset outside the try block that handles the dispose. Only you have introduced 2 try blocks now.

    ... I don't give a flying crap what the IL does, or whether it's the same as what you wrote.... Are you dense?
    You sure sound like a competent programmer. I'd rather be called dense than spouting that kind of crap.
    Ah, taking a quote out of context and then deriding someone for it.  Good job!

  • :belt_onion:

    @Sutherlands said:

    @bjolling said:

    @Sutherlands said:

    @bjolling said:

    So you still have declared your recordset outside the try block that handles the dispose. Only you have introduced 2 try blocks now.

    ... I don't give a flying crap what the IL does, or whether it's the same as what you wrote.... Are you dense?
    You sure sound like a competent programmer. I'd rather be called dense than spouting that kind of crap.
    Ah, taking a quote out of context and then deriding someone for it.  Good job!
    Thanks. At least it's better than claiming that you don't need to care about the IL that your code generates. But seriously, what important part of your quote did I leave out? I was specifically responding to the "I don't care what the IL does" part

     



  • @bjolling said:

    @Sutherlands said:

    @bjolling said:

    @Sutherlands said:

    @bjolling said:

    So you still have declared your recordset outside the try block that handles the dispose. Only you have introduced 2 try blocks now.

    ... I don't give a flying crap what the IL does, or whether it's the same as what you wrote.... Are you dense?
    You sure sound like a competent programmer. I'd rather be called dense than spouting that kind of crap.
    Ah, taking a quote out of context and then deriding someone for it.  Good job!
    Thanks. At least it's better than claiming that you don't need to care about the IL that your code generates. But seriously, what important part of your quote did I leave out? I was specifically responding to the "I don't care what the IL does" part

     

    I think that safely qualifies as "trivia most developers don't need to know".



  • @morbiuswilters said:

    @bjolling said:

    @Sutherlands said:

    @bjolling said:

    @Sutherlands said:

    @bjolling said:

    So you still have declared your recordset outside the try block that handles the dispose. Only you have introduced 2 try blocks now.

    ... I don't give a flying crap what the IL does, or whether it's the same as what you wrote.... Are you dense?

    You sure sound like a competent programmer. I'd rather be called dense than spouting that kind of crap.
    Ah, taking a quote out of context and then deriding someone for it.  Good job!
    Thanks. At least it's better than claiming that you don't need to care about the IL that your code generates. But seriously, what important part of your quote did I leave out? I was specifically responding to the "I don't care what the IL does" part

    I think that safely qualifies as "trivia most developers don't need to know".

    I disagree. I have full expectations that any "non Junior" is familiar with key elements of IL. It comprises about 10%-15% of my screening [enough to matter, but not enough to "fail" if everything else is excellent]



  • @TheCPUWizard said:

    @morbiuswilters said:
    @bjolling said:

    @Sutherlands said:

    @bjolling said:

    @Sutherlands said:

    @bjolling said:

    So you still have declared your recordset outside the try block that handles the dispose. Only you have introduced 2 try blocks now.

    ... I don't give a flying crap what the IL does, or whether it's the same as what you wrote.... Are you dense?

    You sure sound like a competent programmer. I'd rather be called dense than spouting that kind of crap.
    Ah, taking a quote out of context and then deriding someone for it.  Good job!
    Thanks. At least it's better than claiming that you don't need to care about the IL that your code generates. But seriously, what important part of your quote did I leave out? I was specifically responding to the "I don't care what the IL does" part

    I think that safely qualifies as "trivia most developers don't need to know".

    I disagree. I have full expectations that any "non Junior" is familiar with key elements of IL. It comprises about 10%-15% of my screening [enough to matter, but not enough to "fail" if everything else is excellent]

    Maybe it's really different in .NET, but "internals of the JVM/other runtime" are something 95% of developers never need to know (and most of them don't).



  • @TheCPUWizard said:

    @pkmnfrk said:
    @TheCPUWizard said:

    @Sutherlands said:

    Quoting fail.  I'm not sure what you're saying, exactly, but you make me look at it and say "this isn't even a valid scenario, since it will dispose of the object and you won't be able to use it."

    Disposing an object is nothing more than calling a method on that object. There is nothing inherent in this that means "you won't be able to use it", unless the developer of the class in question has explicitly coded in functionality to detect the call to Dispose and do something special.

    ...which is the exact semantics that IDisposable demands. If you call Dispose() on, say, a Stream, then calling any of its instance methods will cause an ObjectDisposedException to be thrown. If an object doesn't have any special dispose logic, then it shouldn't even have a Dispose() method.

    To paraphrase Blakeyrat, this shit isn't hard, people.

    There is nothing that demands those Semantics. The only requirement (in this respect) is that Dispose "releases valuable resources". An object is free to re-create these resources, and many objects (including BCL objects) maintain some properties that are valid even after Dispose.

    This is one of the reasons why so many people consider the Disposable Pattern fatally flawed.

    Are you arguing that it actually makes sense to return disposed objects, or is this just pedantic dickweedery?

    Edit: I re-read the thread again and there were a lot less posts talking about returning from inside a using than I remember. So, uh, nevermind.



  • @bjolling said:

    I was specifically responding to the "I don't care what the IL does" part

    Let me summarize the conversation for you.

    KattMan: "You need to declare the record set[sic] before the try in order to ... dispose."

    Sutherlands: No you don't, here's code

    bjolling: using is syntactic sugar.  The IL is the same as <>. (therefore) You have declared your recordset outside of a try.

    Sutherlands: I don't care if the IL is the same, I didn't declare a recordset outside of a try.

    bjolling: OMG YOU DON'T CARE ABOUT IL.

     

    No... that's not what I said.  You can't take code, say "that's the same IL as this code!" and then claim that I wrote something (a variable declaration outside of a try) that I didn't.  I never made a blanket statement that I don't care about IL.  After your fail, I even reiterated what the point of my code was.


  • :belt_onion:

    @Sutherlands said:

    @bjolling said:

    I was specifically responding to the "I don't care what the IL does" part

    Let me summarize the conversation for you.

    KattMan: "You need to declare the record set[sic] before the try in order to ... dispose."

    Sutherlands: No you don't, here's code

    bjolling: using is syntactic sugar.  The IL is the same as <>. (therefore) You have declared your recordset outside of a try.

    Sutherlands: I don't care if the IL is the same, I didn't declare a recordset outside of a try.

    bjolling: OMG you claim YOU DON'T CARE ABOUT IL.

     

    No... that's not what I said.  You can't take code, say "that's the same IL as this code!" and then claim that I wrote something (a variable declaration outside of a try) that I didn't.  I never made a blanket statement that I don't care about IL.  After your fail, I even reiterated what the point of my code was.

    minor FTFY

    It's like you are proud you can do addition without using "plus" after discovering that 5 minus (-1) equals 6 and then calling me dense for telling you that in the background it is still an addition

     


  • :belt_onion:

    @morbiuswilters said:

    @TheCPUWizard said:
    @morbiuswilters said:
    @bjolling said:

    ..snip..

    I think that safely qualifies as "trivia most developers don't need to know".

    I disagree. I have full expectations that any "non Junior" is familiar with key elements of IL. It comprises about 10%-15% of my screening [enough to matter, but not enough to "fail" if everything else is excellent]

    Maybe it's really different in .NET, but "internals of the JVM/other runtime" are something 95% of developers never need to know (and most of them don't).

    Actually I have to agree with Morbs. If you just strive to be a competent programmer or developer, it really is just trivia. It only starts becoming important if you have the ambition to become technical project lead or software architect. Only then do things like garbage collection and other run-time stuff start coming into the picture.

    I once had a really bad interviewing experience when applying for a junior/medior .NET role where the interviewer kept pounding me with questions like second generation and garbage collection until I literally stopped caring about getting the position

     



  • @bjolling said:

    It's like you are proud you can do addition without using "plus" after discovering that 5 minus (-1) equals 6 and then calling me dense for telling you that in the background it is still an addition

    No, it's like I am proud that I can write good code and then calling you dense for telling me that it generates the same IL as bad code.



  • @bjolling said:

    It's like you are proud you can do subtraction by writing 5-1 and then calling me dense for telling you that in the background it is still doing an addition of a negative number.

    You know what, I take it back, it's a decent analogy, you just got some things mixed up.  I FTFY.


  • @bjolling said:

    @morbiuswilters said:

    @TheCPUWizard said:
    @morbiuswilters said:
    @bjolling said:

    ..snip..

    I think that safely qualifies as "trivia most developers don't need to know".

    I disagree. I have full expectations that any "non Junior" is familiar with key elements of IL. It comprises about 10%-15% of my screening [enough to matter, but not enough to "fail" if everything else is excellent]

    Maybe it's really different in .NET, but "internals of the JVM/other runtime" are something 95% of developers never need to know (and most of them don't).

    Actually I have to agree with Morbs. If you just strive to be a competent programmer or developer, it really is just trivia. It only starts becoming important if you have the ambition to become technical project lead or software architect. Only then do things like garbage collection and other run-time stuff start coming into the picture.

    I once had a really bad interviewing experience when applying for a junior/medior .NET role where the interviewer kept pounding me with questions like second generation and garbage collection until I literally stopped caring about getting the position

    I think it is largely based on what one considers "Junior".

    I DEFINITELY expect someone to be able to draw a diagram of what happens in memory during each of the GC phases for each generation and for each GC type - unless they are applying for a position with "under one year of .NET experience".

    In my environments, one remains in "Junior" status until around 3-5 years of experience (and this presumes well-rounded experience, not 5 years of doing webforms or any other single technology). Then one is "Regular"/"MidLevel" until they have a good understanding of the entire environment (not knowing it off the top of their head, but at least being able to have a starting point, knowing where to go for the authoritative answer, and judging if that answer is reasonable). This usually takes another 3-5 years, and requires that the person actually be in an environment that uses these advanced capabilities.

    I realize there are lots of situations where what I consider "Junior" is acceptable for most of the staff, but that is not the environment I have typically worked in for many years (decades).

    As a concrete example, last week we had a situation where a bad (not in the desired state) object was being passed into a third-party library. It is a realtime system, so adding breakpoints would not work. Since it was third party, adding diagnostic statements was not practical either.

    The quickest solution was to assign one of my developers to intercept the load of the assembly, use reflection to find all methods that took this specific type as a parameter, and IL-REWRITE the methods to invoke a custom hook that could check the state. It took him about 20 minutes to write and test, and we found the problem within 10 minutes.

    In this case the technique was pretty much necessary, in other scenarios the other means would have likely sufficed. But consider how many hours people will often spend tracking down a bug and not even know there is a methodology that could nail it in 30 minutes...when time is money (and being based out of NYC with the high costs, and "competition" from other areas with significantly lower costs - this is crucial).
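
    The IL rewriting itself needs extra tooling, but the "find every method that takes this type as a parameter" step is plain reflection. A rough sketch, with the assembly and type names made up for illustration:

    using System;
    using System.Linq;
    using System.Reflection;

    class FindConsumers
    {
        // Lists every method in the given assembly that accepts 'suspectType'
        // as a parameter -- the candidates one would then instrument.
        static void Main()
        {
            Assembly asm = Assembly.Load("ThirdParty.Library");      // hypothetical assembly name
            Type suspectType = asm.GetType("ThirdParty.BadState");   // hypothetical type name

            var consumers =
                from type in asm.GetTypes()
                from method in type.GetMethods(BindingFlags.Public | BindingFlags.NonPublic |
                                               BindingFlags.Instance | BindingFlags.Static)
                where method.GetParameters().Any(p => p.ParameterType == suspectType)
                select type.FullName + "." + method.Name;

            foreach (var name in consumers)
                Console.WriteLine(name);
        }
    }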



  • A simple question (related to items discussed above)... You have a loop that allocates objects and puts them in a HUGE List<T> with Add(). Later in your code you need two things:

    1) A sorted version of the list (which will be used first), based on some custom sorter.

    2) A copy of the list which will be used many times in a foreach (this will be used later)

     Should you make the copy from the original list (which means keeping the original around longer, or making the copy earlier), or from the sorted list (items are much closer to their first usage), or does it not matter at all?

    [This is a paraphrase of one of my general C# developer questions used during interviews] 
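
    A rough sketch of the two orderings the question contrasts; the element type and comparer are placeholders, and the question itself (object lifetime and locality) is deliberately left unanswered here.

    using System;
    using System.Collections.Generic;

    class ListCopyQuestion
    {
        // Option 1: copy from the original list, then sort the original.
        static List<int> CopyThenSort(List<int> huge)
        {
            var copy = new List<int>(huge);
            huge.Sort((a, b) => b.CompareTo(a));    // stand-in for "some custom sorter"
            return copy;
        }

        // Option 2: sort first, then copy from the sorted list, so the copy
        // is created closer to the point where it is first used.
        static List<int> SortThenCopy(List<int> huge)
        {
            huge.Sort((a, b) => b.CompareTo(a));
            return new List<int>(huge);
        }

        static void Main()
        {
            var huge = new List<int>();
            for (int i = 0; i < 5000000; i++)
                huge.Add(i);                        // the HUGE list built with Add()

            Console.WriteLine(CopyThenSort(huge).Count);
        }
    }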


  • :belt_onion:

    @Sutherlands said:

    @bjolling said:

    It's like you are proud you can do addition without using "plus" after discovering that 5 minus (-1) equals 6 and then calling me dense for telling you that in the background it is still an addition

    No, it's like I am proud that I can write good code and then calling you dense for telling me that it generates the same IL as bad code.

    Yes and no. 'using' corresponds to [code]try { ... } finally { dispose } [/code] so it means that there is an expensive try block that just lets exceptions fall through.

    But if you do want to catch the exceptions thrown from inside a 'using', you need to write (see also this code sample)

    try { 
      using ( recordset ) { ... } 
    } catch ( Exception ex ) { 
      Log.Exception(ex); 
    } 

    But because 'using' is only syntactic sugar, this in reality is exactly the same as writing: 

    try {
      new recordset
      try { ... } 
      finally { dispose recordset }  
    } catch ( Exception ex ) { 
      Log.Exception(ex); 
    }

    So your nice little trick is using two expensive try blocks for absolutely no benefit. The optimal way is as follows

    new recordset
    try { ... } 
    catch ( Exception ex ) { 
      Log.Exception(ex);
      throw ;
    }
    finally { dispose recordset } 

    The only point I wanted to make is that I expect my developers to understand this and care about this. You claim to write good code but still you post example code with a using block inside a try/catch. But it's always easier to call people 'dense' and 'fail', isn't it?
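
    For reference, a compilable rendering of that last form, using DataSet (as in the benchmark further down) and Console logging as stand-ins:

    using System;
    using System.Data;

    class OptimalDisposePattern
    {
        static void Process()
        {
            var recordset = new DataSet();       // created outside any try block
            try
            {
                recordset.Clear();               // the actual work
            }
            catch (Exception ex)
            {
                Console.Error.WriteLine(ex);     // log, then rethrow unchanged
                throw;
            }
            finally
            {
                recordset.Dispose();             // released on every code path
            }
        }

        static void Main()
        {
            Process();
        }
    }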



  • @bjolling said:

     so it means that there is an expensive try block that just lets exceptions fall through.

    The only point I wanted to make is that I expect my developers to understand this and care about this. You claim to write good code but still you post example code with a using block inside a try/catch. But it's always easier to call people 'dense' and 'fail', isn't it?

    You claim it is expensive...yet do not QUANTIFY the cost.  There is no argument that it will add clock cycles, but I think we all agree that this will only be an issue in a very rare set of circumstances, and that readability is key. A using block (while merely sugar) is detected by many style analysis tools, where the equivalent try/finally (with or without the desired catch) will not be. Therefore by "optimizing" the code, you may make the code harder to maintain.

     ps: I am a performance junkie (partly by requirements, partly by personality) and have spent many hours doing analysis of .NET...I know the answer to the question above....Just remember the JIT does most of the optimizations, so looking at IL in this case does NOT give the answer...


  • :belt_onion:

    @TheCPUWizard said:

    @bjolling said:

     so it means that there is an expensive try block that just lets exceptions fall through.

    The only point I wanted to make is that I expect my developers to understand this and care about this. You claim to write good code but still you post example code with a using block inside a try/catch. But it's always easier to call people 'dense' and 'fail', isn't it?

    You claim it is expensive...yet do not QUANTIFY the cost.  There is no argument that it will add clock cycles, but I think we all agree that this will only be an issue in a very rare set of circumstances, and that readability is key. A using block (while merely sugar) is detected by many style analysis tools, where the equivalent try/finally (with or without the desired catch) will not be. Therefore by "optimizing" the code, you may make the code harder to maintain.

     ps: I am a performance junkie (partly by requirements, partly by personality) and have spent many hours doing analysis of .NET...I know the answer to the question above....Just remember the JIT does most of the optimizations, so looking at IL in this case does NOT give the answer...

    Well, it is 2am around here and the Visual Studio on the laptop that I'm currently using runs inside a VM. I'll never be able to get a decent quantification like that. I've always been told there is a performance hit from using a try/catch. But this StackOverflow question seems to agree with you that the cost is small.

    But I don't agree that try/catch/finally{dispose} is harder to understand/maintain than try{ using }catch.

    BTW: is that "our" DaveK that replied to it?

     



  • I'll simply echo what TheCPUWizard says and provide this link for you to read.  I don't particularly care what you find easier to read, because I don't work with you.  My team uses 'using' to take care of disposable objects because it's clear and maintainable.  Also - "But it's always easier to call people 'dense' and 'fail', isn't it?" - I'm not sure if you were deliberately trying to push my buttons when you came into this thread, but saying "Not sure what you are trying to prove" irks me, especially when followed up by something about how what I wrote is "the same" as something else... when it's not...



  • @TheCPUWizard said:

    I DEFINITELY expect someone to be able to draw a diagram of what happens in memory during each of the GC phases for each generation and for each GC type - unless they are applying for a position with "under one year of .NET experience".

    If you ever have to think for even a millisecond about this, you've failed. If you only hire developers who know this kind of pointless minutiae that hopeless geeks like you masturbate over, then you're probably excluding brilliant developers in the process.

    The correct answer to "draw a diagram of what happens in memory during each of the garbage collection phases" is "who gives a fuck? Let's write some software!" Garbage collection is a solved problem. It might have been interesting a couple of decades ago; it's an implementation detail now.

    Who gives a fuck, go write some software.

    While I'm fucking ranting:

    @TheCPUWizard said:

    but that is not the environment I have typically worked in for many years (decades)..

    Look, CPUWizard, we all know you love to brag about how many decades you've been in the industry, about being a New Yorker, about seeing some prototype Intel CPU, that you owned a business, and all that other stupid pointless bullshit that you seem to imagine impresses people but in fact only makes them think you do nothing but stroke your own ego all fucking day long...

    But saying that you've been in the industry for decades is not necessarily a good thing. People who have been in the industry for decades are the reason I had to find a fucking C# library to read .ini files today. Today, meaning, "in the year 2012, fully 17 years after .ini files were made obsolete." Fuck those dinosaurs.



  • In response to Blakeyrant (I refuse to quote people who cannot post rationally or feel they must use foul language):

    IF GC is a "solved problem", then how come:  over $300K of my company's revenue in 2011 came from clients who had developed software that did not perform, and the root cause was invariably a lack of understanding of GC [This number has been a rough constant as a 20% percentage of business over the past 6 years]????

    You say "Let's write some software!"... I say "Let's write some quality software".  One of my pet peeves are people who still write applications that will not scale to make use of the multiple cores [woohoo, Kiefer has shipped with 32 cores, in limited quantities - March 2012].

    As to "stroking my own ego"...I dont see it that way at all. If someone is going to read something, then knowing something about the author puts it in context. My comments about being a New Yorker (if you actually read them) are about the difficulties the location brings, and not something to brag about at at...

    FWIW: There are multiple C# libraries readily available for ini files, so finding one should not have taken more than a few minutes. The native primitives are still part of the supported Windows API (not deprecated even in Windows 8), although I agree they should not typically be part of a modern (last 15 years) application.
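
    The native primitives referred to here are GetPrivateProfileString and friends; a minimal P/Invoke sketch (the wrapper name and buffer size are arbitrary choices):

    using System.Runtime.InteropServices;
    using System.Text;

    static class IniFile
    {
        // The classic Win32 .ini primitive, still exported by kernel32.
        [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        private static extern int GetPrivateProfileString(
            string section, string key, string defaultValue,
            StringBuilder result, int size, string filePath);

        public static string Read(string path, string section, string key, string fallback)
        {
            var buffer = new StringBuilder(512);
            GetPrivateProfileString(section, key, fallback, buffer, buffer.Capacity, path);
            return buffer.ToString();
        }
    }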



  • @TheCPUWizard said:

    In response to Blakeyrant (I refuse to quote people who cannot post rationally or feel they must use foul language):

    Wow see what you did there. My name is "Blakeyrat" and you added an N to make it "Blakeyrant". Like, you're combining my name with the fact that it was a rant. And you even bolded the N so that we could all see how clever you were. That's just awesome.

    @TheCPUWizard said:

    IF GC is a "solved problem", then how come:  over $300K of my company's revenue in 2011 came from clients who had developed software that did not perform, and the root cause was invariably a lack of understanding of GC [This number has been a rough constant as a 20% percentage of business over the past 6 years]????

    Because your clients are idiots????????????????

    @TheCPUWizard said:

    As to "stroking my own ego"...I dont see it that way at all.

    Yes, well, that's great when you're talking to yourself, but there are other people here reading your crap.

    @TheCPUWizard said:

    If someone is going to read something, then knowing something about the author puts it in context. My comments about being a New Yorker (if you actually read them) are about the difficulties the location brings, and not something to brag about at all...

    I hate to break this to you, but everybody hates New Yorkers. Including other New Yorkers. You guys are the fucking worst. The financial collapse, 2008? All your fucking fault. Die in a fire.

    @TheCPUWizard said:

    FWIW: There are multiple C# libraries readily available for ini files, so finding one should not have taken more than a few minutes.

    Yes, because remember when I was ranting about how many hours it took me to insta-- oh wait I didn't spend more than a few minutes on it. And I just love adding random open source shit-libraries as dependencies because some other developer (probably a New Yorker with decades of experience) is a fucking asshat. My software is worse because he is an idiot. Fuck that noise.

    @TheCPUWizard said:

    The native primitives are still part of the supported Windows API (not deprecated even in Windows 8), although I agree they should not typically be part of a modern (last 15 years) application.

    Wow. You're exactly the type of dinosaur I'm referring to. I love the weasel word, "typically", because now you can say, "well I typically say not to use them but in this case...". No. No weasel words. Fuck that noise.

    .ini files are fucking obsolete. They're trash. If you use them, you're trash. You need to leave the industry and get a job picking up trash. Or recycling. I boldfaced this paragraph because I think that "trash talk" is pretty damn clever, as is the pun in this sentence.



  • Blakey...I can only wish that every post you make online shows up at every interview you go to for the rest of your life...it would save lots of people tons of grief...



  • @TheCPUWizard said:

    Blakey...I can only wish that every post you make online shows up at every interview you go to for the rest of your life...it would save lots of people tons of grief...

    I wish I had a unicorn.



  • @blakeyrat said:

    I wish I had a unicorn.

    You did, but some jackass declared it in a using block and now it's gone.


  • :belt_onion:

    @Sutherlands said:

    I'll simply echo what TheCPUWizard says and provide this link for you to read.  I don't particularly care what you find easier to read, because I don't work with you.  My team uses 'using' to take care of disposable objects because it's clear and maintainable.  Also - "But it's always easier to call people 'dense' and 'fail', isn't it?" - I'm not sure if you were deliberately trying to push my buttons when you came into this thread, but saying "Not sure what you are trying to prove" irks me, especially when followed up by something about how what I wrote is "the same" as something else... when it's not...

    At first I just wanted to point out that by putting 'using' inside a try/catch, you'd effectively have two try blocks in IL. Since you put this in a reply to show how to dispose without putting the 'new' inside a 'try', I felt it necessary to point out that in your code your 'new' was still inside a 'try'. Your example code did not prove your point. That's why I said I was not sure what you were trying to prove.

    I have nothing against 'using' and I use it all the time, except putting it inside a try block feels counter-intuitive to me. Don't echo TheCPUWizard -- if that was your point all along, you could have said it before he joined the discussion. But you didn't, so it wasn't

    That said, I followed TheCPUWizard's advice and tried to quantify the cost of a try block. This is the first run of my test application

    Warming up
    Iterations: 500000
    NewRecordset                  : 241,957 ms
    NewRecordsetUsing             : 240,467 ms
    NewRecordsetTryUsing          : 239,920 ms
    NewRecordsetTryTryFinallyCatch: 242,363 ms
    NewRecordsetTryCatchFinally   : 241,064 ms
    

    This is the second run of my test application

    Warming up
    Iterations: 500000
    NewRecordset                  : 250,339 ms
    NewRecordsetUsing             : 249,909 ms
    NewRecordsetTryUsing          : 244,398 ms
    NewRecordsetTryTryFinallyCatch: 243,899 ms
    NewRecordsetTryCatchFinally   : 242,196 ms
    

    I think the names are descriptive enough. But anyway:

    • NewRecordset -- Just doing a new DataSet, call the clear function, test for null and dispose
    • NewRecordsetUsing -- Call the clear function inside a using(new DataSet) block.
    • NewRecordsetTryUsing -- like previous but wrapped inside a try/catch. The code that you proposed at first
    • NewRecordsetTryTryFinallyCatch -- unrolled the try/using/catch into try/try/finally{dispose}/catch
    • NewRecordsetTryCatchFinally -- removed the redundant try from previous line, ending up with try/catch/finally{dispose}

    I've executed it several times in a row and there seems to be no clear winner. Funnily enough, having a try block is not slower than having no try block at all. I guess the rule is "exceptions are expensive when used to control the flow of an application, compared to doing it with an if statement". Someone else can test that.
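
    bjolling's harness isn't shown, so the following is only a guess at its shape: a minimal Stopwatch comparison of two of the cases, using DataSet as above.

    using System;
    using System.Data;
    using System.Diagnostics;

    class TryCostHarness
    {
        const int Iterations = 500000;

        static void NewRecordsetUsing()
        {
            using (var ds = new DataSet())
            {
                ds.Clear();
            }
        }

        static void NewRecordsetTryCatchFinally()
        {
            var ds = new DataSet();
            try
            {
                ds.Clear();
            }
            catch (Exception)
            {
                throw;
            }
            finally
            {
                ds.Dispose();
            }
        }

        static void Measure(string name, Action body)
        {
            body();                                 // warm up / force JIT
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < Iterations; i++)
                body();
            sw.Stop();
            Console.WriteLine("{0,-30}: {1:F3} ms", name, sw.Elapsed.TotalMilliseconds);
        }

        static void Main()
        {
            Measure("NewRecordsetUsing", NewRecordsetUsing);
            Measure("NewRecordsetTryCatchFinally", NewRecordsetTryCatchFinally);
        }
    }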



  • @bjolling said:

    <snip>

    That said, I followed TheCPUWizards advice and tried to quantify the cost of a try block. This is the first run of my test application

    <snip>
    I've executed it several times in a row and there seems to be no clear winner. Funnily enough, having a try block is not slower than having no try block at all. I guess the rule is "exceptions are expensive when used to control the flow of an application, compared to doing it with an if statement". Someone else can test that.

    First, thanks for taking the time to run and publish your results. What you have clearly shown is that the "cost" of try/catch is sufficiently small that it cannot be measured, because the variances in timing (some caused by .NET, some by Windows) are larger. While it takes a good amount of work to prove, I can add the following:

    * Consecutive "try { try { " is much less expensive than "try { somecode; try {" because there is no difference in the state at the start of the try blocks.

    * "finally" without any catch blocks is an optimized case, further reducing the cost. So if you inner block had also contained "catch" statements, the cost would be higher (using does not contain any catch blocks - so you case is good for analysis of the original point).

    * "successful" (non-throwing) code paths are MUCH cheaper than throwing code paths. So worst case timing would have the inner code atually throwing an exception.

    * The compiler (in this case the JIT) is allowed to re-order operations (in optimized builds without a debugger attached), provided that they do not produce visible effects. This is key in many cases.

    These are all things which have an impact on a program [the topic of which programs this impact is significant for is a different one..and also IMHO quite interesting]. Yet they are all things which cannot be determined at the language level, requiring at least IL knowledge, and in many cases knowledge of the native code generated by the JIT.
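
    A quick way to see the third point (throwing vs. non-throwing paths) for yourself; the helper method is just an illustration and the numbers will vary by machine and build settings.

    using System;
    using System.Diagnostics;

    class ThrowingPathCost
    {
        const int Iterations = 100000;

        static int MaybeThrow(bool shouldThrow)
        {
            if (shouldThrow)
                throw new InvalidOperationException();
            return 1;
        }

        static double Time(bool shouldThrow)
        {
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < Iterations; i++)
            {
                try
                {
                    MaybeThrow(shouldThrow);
                }
                catch (InvalidOperationException)
                {
                    // swallowed purely so the loop keeps running
                }
            }
            sw.Stop();
            return sw.Elapsed.TotalMilliseconds;
        }

        static void Main()
        {
            Console.WriteLine("non-throwing path: {0:F3} ms", Time(false));
            Console.WriteLine("throwing path    : {0:F3} ms", Time(true));
        }
    }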



  • @TheCPUWizard said:

    These are all things which have an impact on a program [the topic of which programs this impact is significant for is a different one..and also IMHO quite interesting]. Yet they are all things which cannot be determined at the language level, requiring at least IL knowledge, and in many cases knowledge of the native code generated by the JIT.

    They are also all things you shouldn't give a shit about.



  • @blakeyrat said:

    @TheCPUWizard said:
    These are all things which have an impact on a program [the topic of which programs this impact is significant for is a different one..and also IMHO quite interesting]. Yet they are all things which cannot be determined at the language level, requiring at least IL knowledge, and in many cases knowledge of the native code generated by the JIT.

    They are also all things you shouldn't give a **** about.

    I am sure bosses/employers/clients would love to hear that ("I don't give a **** that the code I wrote for you is useless") when their program is failing to meet performance requirements [see: which programs this impact is significant for in my post above].  Many HFT programs directly translate microseconds of execution speed to potentially thousands of dollars (per execution). Many industrial control applications translate these same timeframes into safety-related effects (equipment damage, personal injury).

    Even many "conventional business programs" suffer when scaled to significant size (tens of thousands of transactions per second). In these cases, "scale-out" may be an option, but that introduces additional concerns about synchronization which may make things worse (or at least much more expensive)



  • I thought you didn't quote people who used gasp curse words! That was a short-lived policy! INSTANT BLAKEYRAT VICTORY.

    @TheCPUWizard said:

    I am sure bosses/employers/clients would love to hear that ("I don't give a **** that the code I wrote for you is useless") when their program is failing to meet performance requirements [see: which programs this impact is significant for in my post above].  Many HFT programs directly translate microseconds of execution speed to potentially thousands of dollars (per execution). Many industrial control applications translate these same timeframes into safety-related effects (equipment damage, personal injury).

    When it becomes a problem, then it becomes a problem. That's not the fucking point. The point is you don't need to know this kind of pointless minutiae now, and you certainly don't need to make it a hiring requirement. You do need at least one guy who can learn it when it comes up, but it won't come up, so it's a waste of time to spend time wanking over it now. You're significantly better off hiring the guy who writes bulletproof code by default, or hiring the guy who is an expert in UI and user testing or, in a lot of cases, hiring the guy who's an expert in cloud computing. When and if the performance of the CLR becomes a problem (and, again, it won't), I'm sure any of those valuable people can research and solve the problem.

    If you're that one guy who needs to know it, fine. But don't pretend it's useful general knowledge for programmers, because the entire point of garbage collection and the CLR is to shield the programmer from having to think about any of this shit at all, ever. So stop thinking about it and build some software.

    All that time you're spending wanking around with these pointless experiments is time you could have been spending on additional QA to smooth out the rough bits, or on user testing to make sure your application isn't a wide-awake nightmare to use. (Which, from my experience, most applications overly concerned with performance are*.) Look, you disagree with me: fine, I get that.

    Most programmers probably disagree with me. That's because they don't think about what it takes to create great software because they don't give a shit.

    Oh, here's a *gasp* curse word so you don't quote this post: fucktacular shitorama.

    *) Wow! Look how fast that video encoder is! It encodes videos 2 hours faster than the competition! Of course it takes 4 hours to learn the fucking arcane UI, so your net gain is -2 hours.



  • @TheCPUWizard said:

    @blakeyrat said:

    @TheCPUWizard said:
    These are all things which have an impact on a program [the topic of which programs this impact is significant for is a different one..and also IMHO quite interesting]. Yet they are all things which cannot be determined at the language level, requiring at least IL knowledge, and in many cases knowledge of the native code generated by the JIT.

    They are also all things you shouldn't give a **** about.

    I am sure bosses/employers/clients would love to hear that ("I don't give a **** that the code I wrote for you is useless") when their program is failing to meet performance requirements [see: which programs this impact is significant for in my post above].  Many HFT programs directly translate microseconds of execution speed to potentially thousands of dollars (per execution). Many industrial control applications translate these same timeframes into safety-related effects (equipment damage, personal injury).

    Even many "conventional business programs" suffer when scaled to significant size (tens of thousands of transactions per second). In these cases, "scale-out" may be an option, but that introduces additional concerns about synchronization which may make things worse (or at least much more expensive)

    His point might be that empirical evidence is good enough; we don't need to know why. Using blocks are obviously a good idea because it's really hard to forget to Dispose when you use it. If someone has a suspicion that Using causes a performance problem, then it's pretty easy to whip up an A/B test to see if Using does or doesn't incur a performance penalty. The time spent thinking why could better be spent on your next problem. If we run into an edge case, we'll hire you to give us a hand. Only a very small fraction of programmers really need to be able to handle the corner cases where things that normally don't cause performance problem do.

    Also, corner case optimization is actually bad for most of us. It's pretty common for the next version of a framework class to have performance improvements. If we spent a bunch of time implementing custom disposability for a class due to a performance problem, it's likely that we wouldn't benefit from the improvements of the next version. That would either cause our code to be slower than naive code or cause a lot of effort re-working it. The best case I can think of is all of the patterns of keeping a global database connection that were popular before the mid 1990s. After Microsoft introduced connection pooling, suddenly the most performant pattern was to create as late as possible and destroy as early as possible.
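
    The connection-pooling pattern described here, sketched with SqlConnection; the connection string, table and query are placeholders.

    using System;
    using System.Data.SqlClient;

    class PooledConnectionSketch
    {
        // Placeholder connection string; pooling is on by default in ADO.NET.
        const string ConnectionString =
            "Server=.;Database=Example;Integrated Security=true";

        static int CountOrders()
        {
            using (var conn = new SqlConnection(ConnectionString))
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
            {
                conn.Open();                 // cheap: grabs a pooled physical connection
                return (int)cmd.ExecuteScalar();
            }                                // Dispose returns the connection to the pool
        }

        static void Main()
        {
            Console.WriteLine(CountOrders());
        }
    }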



  • @Jaime said:

    @TheCPUWizard said:

    @blakeyrat said:

    @TheCPUWizard said:
    These are all things which have an impact on a program [the topic of which programs this impact is significant for is a different one..and also IMHO quite interesting]. Yet they are all things which cannot be determined at the language level, requiring at least IL knowledge, and in many cases knowledge of the native code generated by the JIT.

    They are also all things you shouldn't give a **** about.

    I am sure bosses/employers/clients would love to hear that ("I don't give a **** that the code I wrote for you is useless") when their program is failing to meet performance requirements [see: which programs this impact is significant for in my post above].  Many HFT programs directly translate microseconds of execution speed to potentially thousands of dollars (per execution). Many industrial control applications translate these same timeframes into safety-related effects (equipment damage, personal injury).

    Even many "conventional business programs" suffer when scaled to significant size (tens of thousands of transactions per second). In these cases, "scale-out" may be an option, but that introduces additional concerns about synchronization which may make things worse (or at least much more expensive)

    His point might be that empirical evidence is good enough; we don't need to know why. Using blocks are obviously a good idea because it's really hard to forget to Dispose when you use it. If someone has a suspicion that Using causes a performance problem, then it's pretty easy to whip up an A/B test to see if Using does or doesn't incur a performance penalty. The time spent thinking why could better be spent on your next problem. If we run into an edge case, we'll hire you to give us a hand. Only a very small fraction of programmers really need to be able to handle the corner cases where things that normally don't cause performance problem do.

    Also, corner case optimization is actually bad for most of us. It's pretty common for the next version of a framework class to have performance improvements. If we spent a bunch of time implementing custom disposability for a class due to a performance problem, it's likely that we wouldn't benefit from the improvements of the next version. That would either cause our code to be slower than naive code or cause a lot of effort re-working it. The best case I can think of is all of the patterns of keeping a global database connection that were popular before the mid 1990s. After Microsoft introduced connection pooling, suddenly the most performant pattern was to create as late as possible and destroy as early as possible.

    Jaime,

     I don't disagree with anything you posted; in fact I agree with most of it.  As a general rule (as a statistical percentage of programs) everything you say is true.  "Premature" or "over" optimization is at best counter-productive, and at worst a disaster.  My point is that as soon as there is a single valid case where such things do matter, a blanket/absolute statement is incorrect - unless qualified. Blakeyrat's statement was "you shouldn't". If this is taken to mean me personally, then it would indeed have the ramifications I mentioned. If it is taken to mean "the reader of this post" (a common usage of the word "you" in online postings), then it would have to apply to every potential reader (the "you") under every circumstance.

    Now my own experience shows that these conditions do occur. If someone can conclusively prove that they have *never* occurred, then I will concede - but that is inherently impossible without having first-hand knowledge of every program that has been written.

    My bigger concern (and a core professional belief) is that more of these situations exist than the average programmer realizes. This means that developers are creating problematic code without even realizing it at the time it is written, resulting in issues that are much harder to remediate later (sometimes causing a complete rewrite - although this is rare).  Wouldn't it be better if developers were aware that a) These Conditions Exist, b) The Situations where they are Applicable, and c) That there are Designs that can avoid these issues, when necessary [even if the person in question does not have the direct knowledge to implement such designs]?????



  • @TheCPUWizard said:

    Wouldn't it be better if developers were aware that a) These Conditions Exist, b) The Situations where they are Applicable, and c) That there are Designs that can avoid these issues, when necessary [even if the person in question does not have the direct knowledge to implement such designs]?????

    Yes.

    However, most programmers are only capable of specializing in so many things. Half of the cargo-cult programming out there came from performance optimization. Most of the worst sins made by programmers were made in the name of performance. Also, most developers will take any distraction they can get to avoid dealing with "soft skills" like usability, proper naming, and even proper architecture. It's often beneficial to steer programmers away from performance tweaking until they have matured to the point where they ignore your guidance and do the right thing.

    So, I generally tell people to ignore performance issues until they know better than to listen to me. As long as they listen to me, they are either novices or cargo-cultists and can't be trusted making performance optimizations, so they are right to listen to me. As soon as they can present a strong, well thought out argument for what they want to do, it's time to let them go.

    BTW, my world is very different from yours. I can't buy a server small enough to have real performance problems with the workloads I have.



  • @blakeyrat said:

    .ini files are fucking obsolete. They're trash. If you use them, you're trash. You need to leave the industry and get a job picking up trash. Or recycling. I boldfaced this paragraph because I think that "trash talk" is pretty damn clever, as is the pun in this sentence.
    In the Javascript thread, you posted [url=http://blogs.msdn.com/b/oldnewthing/archive/2007/11/26/6523907.aspx]this little gem[/url] and I never replied to it.  However, since this subject has reared its ugly head again . . .

    Problems with the Registry [url=http://www.virtualdub.org/blog/pivot/entry.php?id=312]here[/url], [url=http://www.codinghorror.com/blog/2007/08/was-the-windows-registry-a-good-idea.html]here[/url], and even an alternative to the Registry [url=http://msdn.microsoft.com/en-us/library/k4s6c3a0.aspx]here[/url], where Microsoft is suggesting a feature in .Net to create custom configuration files.  (It has been years since I programmed in .Net and can't attest to the usefulness of this feature; nevertheless, Microsoft documents a feature here that they suggest using for application and user settings which is in direct opposition to the "store it in the Registry" mantra.)  Finally, I have had Registry corruption cause an unrecoverable workstation that required a reinstall; I've never had a corrupted .cfg file under . . . let's just say "other operating systems" . . . cause me to not be able to get it back up and running, even if I had to boot into rescue/emergency mode.

    Now, based on what I can see I will agree that Microsoft's implementation of text-based configuration files is shit.  They've not updated the interface to process them, leading to the limitations that existed since Win 3 (probably 2, but I had no experience with that).  There's no reason an updated interface couldn't provide better .ini handling that could handle the problems that you believe exist.  UNIX-y OSes have used config files for decades.  The Registry is but 17 years old and is only used by one common OS.  If the idea was really so much better, I'd have expected other OSes to adopt something similar.  But they haven't.
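
    The .NET alternative linked above boils down to the configuration-file API; a minimal sketch reading an appSettings value (the key name is made up):

    using System;
    using System.Configuration;   // add a reference to System.Configuration.dll

    class ConfigFileSketch
    {
        static void Main()
        {
            // Reads <appSettings><add key="CacheSize" value="200" /></appSettings>
            // from the application's .config file -- no Registry involved.
            string raw = ConfigurationManager.AppSettings["CacheSize"];
            int cacheSize = raw == null ? 50 : int.Parse(raw);
            Console.WriteLine("CacheSize = " + cacheSize);
        }
    }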

     



  • @blakeyrat said:

    *) Wow! Look how fast that video encoder is! It encodes videos 2 hours faster than the competition! Of course it takes 4 hours to learn the fucking arcane UI, so your net gain is -2 hours.
    It's a one-time cost.  Once the UI is understood, the program will encode each video two hours faster than the competition and there's no additional learning curve required.  If in all other aspects it was superior to the competition, I'd still purchase TheCPUWizard's product and take the temporary hit during training.



  • @Jaime said:

    @TheCPUWizard said:
    Wouldn't it be better if developers were aware that a) These Conditions Exist, b) The Situations where they are Applicable, and c) That there are Designs that can avoid these issues, when necessary [even if the person in question does not have the direct knowledge to implement such designs]?????

    Yes.

    However, most programmers are only capable of specializing in so many things. Half of the cargo-cult programming out there came from performance optimization. Most of the worst sins made by programmers were made in the name of performance. Also, most developers will take any distraction they can get to avoid dealing with "soft skills" like usability, proper naming, and even proper architecture. It's often beneficial to steer programmers away from performance tweaking until they have matured to the point where they ignore your guidance and do the right thing.

    So, I generally tell people to ignore performance issues until they know better than to listen to me. As long as they listen to me, they are either novices or cargo-cultists and can't be trusted making performance optimizations, so they are right to listen to me. As soon as they can present a strong, well thought out argument for what they want to do, it's time to let them go.

    BTW, my world is very different from yours. I can't buy a server small enough to have real performance problems with the workloads I have.

     100% agreement. What I think makes software development so interesting is that you can get a fairly large group together and have each person still say "my world is very different from yours" to at least half of the people in the room. When people follow blindly (e.g. "you shouldn't care" without knowing anything about the actual situation), then cargo cultism is almost a certainty, with all of the problems that entails. But when people listen and learn from each other, adapting what applies, and knowing what to ignore, it seems that everyone ends up learning more.



  • @nonpartisan said:

    Problems with the Registry here, here, and even an alternative to the Registry here, where Microsoft is suggesting a feature in .Net to create custom configuration files. (It has been years since I programmed in .Net and can't attest to the usefulness of this feature; nevertheless, Microsoft documents a feature here that they suggest using for application and user settings which is in direct opposition to the "store it in the Registry" mantra.)

    Are you trying to imply that I said the Registry doesn't have any problems? Or... what is the point of this paragraph?

    @nonpartisan said:

    Now, based on what I can see I will agree that Microsoft's implementation of text-based configuration files is shit. They've not updated the interface to process them, leading to the limitations that existed since Win 3 (probably 2, but I had no experience with that).

    That's because they've been deprecated since Windows 3. Duh?

    @nonpartisan said:

    There's no reason an updated interface couldn't provide better .ini handling that could handle the problems that you believe exist.

    Raymond lists several reasons in the link I gave you. You should actually read it. If we're going to discuss the article, it would be nice if you demonstrated that you at least skimmed it briefly.

    @nonpartisan said:

    UNIX-y OSes have used config files for decades.

    Yeah, and that's why it's so popular for corporate networks-- oh wait it's not. Because the Registry provides all those fancy handy-dandy things like Group Policies that large centralized networks need so badly.

    Yes, UNIX-y OSes have used config files for decades. UNIX-y OSes have also sucked ass for decades, and they've gone decades without a substantial increase in user-base. So obviously citing that is, shall we say, not a good idea.

    @nonpartisan said:

    If the idea was really so much better, I'd have expected other OSes to adopt something similar. But they haven't.

    First of all, yes they have. Netware had a Registry, for example, although I'm not sure if the most recent version still does. Strangely, Netware was also very popular for managing large corporate networks-- I'm noticing a pattern here!

    Secondly, while it's true that other OSes haven't adopted something similar to the Registry, it's also true that no other OS has features the Registry enables. CAR ANALOGY: "your car doesn't have a dashboard readout for the TPMS!" "That's because my car doesn't have a TPMS."

    Here's a fun challenge for you: you're a network administrator, and you have a network full of Macs. How do you force them to "require password on wake from sleep?" In Windows, it's trivial. In OS X? The most advanced UNIX-esque OS? Well... you can do it at install time, I guess, but if you don't do it then, then you're fucked-- you're back to the 1995 solution of walking to every single workstation, changing the setting, then locking the user account from changing it back. EFFICIENT!

    What you need to point out to me, when saying the Registry is oh so horrible, is an OS that manages to have all of the features Windows has while simultaneously not having a Registry. That thing doesn't exist. Or if it does, please show it to me, because I'd love to see it.



  • @TheCPUWizard said:

    When people follow blindly (e.g. "you shouldn't care" without knowing anything about the actual situation), then cargo cultism is almost a certainty, with all of the problems that entails.

    Except if you really agreed with Jaime, you'd realize "you shouldn't care" is the default position to take when it comes to optimization, for the reasons Jaime discusses. You're trying to have it both ways, buddy, and it's just not gonna happen, ok?

    I already conceded that there is, in that 0.01% of cases, some value in knowing when to dig into this stuff for performance optimization. But:
    1) If you ever reached that case, you've probably already failed somewhere else in your app (it's freakin' 2012, we all know to scale wide, not high, right?)
    2) If you do implement your "optimizations", your code becomes much more fragile and likely to break and requires more maintenance in the long run
    3) In the time it takes you to figure out your optimization and get the code humming, computer hardware got 2 times faster and it's no longer an issue-- buying a new server, even a really beefy one, is cheaper than 3 months of an employee's time



  • @blakeyrat said:

    Here's a fun challenge for you: you're a network administrator, and you have a network full of Macs. How do you force them to "require password on wake from sleep?" In Windows, it's trivial. In OS X? The most advanced UNIX-esque OS? Well... you can do it at install time, I guess, but if you don't do it then, then you're fucked-- you're back to the 1995 solution of walking to every single workstation, changing the setting, then locking the user account from changing it back. EFFICIENT!

    Guess you don't use Macs in a corporate environment much, do you????  (I do very, very little with Apple products, but even I know this)... There are plenty of solutions; there is absolutely no need for "the 1995 solution of walking to every single workstation".  One system that I know is used in a couple of big (>1K machines) corporate environments: http://www.jamfsoftware.com/
    "



  • @blakeyrat said:

    @TheCPUWizard said:
    When people follow blindly (e.g. "you shouldn't care" without knowing anything about the actual situation) then cargo cultism is almost a certainty, with all of the problems that entails.

    Except if you really agreed with Jamie, you'd realize "you shouldn't care" is the default position to take when it comes to optimization, for the reasons Jamie discusses. You're trying to have it both ways, buddy, and it's just not gonna happen, ok?

    I already conceded that there is, in that 0.01% of cases, some value in knowing when to dig into this stuff for performance optimization. But:
    1) If you ever reached that case, you've probably already failed somewhere else in your app (it's freakin' 2012, we all know to scale wide, not high, right?)
    2) If you do implement your "optimizations", your code becomes much more fragile and likely to break and requires more maintenance in the long run
    3) In the time it takes you to figure out your optimization and get the code humming, computer hardware got 2 times faster and it's no longer an issue-- buying a new server, even a really beefy one, is cheaper than 3 months of an employee's time

    There is a world of difference between "not caring" (or "not knowing") and having sufficient knowledge to know that something is not applicable.  I have never said that developers need to DO anything in the vast majority of cases.  Even taking your percentage, there are an estimated 20-35 million applications written per year, so this means there are thousands of cases per year. (I would place the percentage between 0.5% and 2%.)

    a) Try to "scale wide" when you have power constraints, or space/weight constraints....yes, these apply to .NET applications, in fact there was a great MSDN article about 18 months ago (http://msdn.microsoft.com/en-us/magazine/gg232761.aspx)

    b) If your code fails to meet requirements, then it is a fail. So, yes, programs which are "pushing the envelope" will be more difficult to develop and test.  In my experience, however, this has NOT translated into "more fragile", "more likely to break" or "requires more maintenance", provided that the ongoing support is done by people with knowledge, skills and experience equivalent to those of the people who developed the original code.

    c) If one has already invested the time "to figure out your optimization and get the code humming", then there is very little time required (hours or days, worst case weeks) to apply this to future work. Additionally, the improvement continues as the hardware gets faster (it has been quite some time since pure CPU time has actually gotten "2 times faster" at a given price point). Even once the hardware is faster, the optimized software will still be faster than the non-optimized version; this can often be the opportunity to add new features while still meeting performance obligations.



  • @nonpartisan said:

    here

    Wow Atwood's a dumbass and so are his readers.

    1) He doesn't mention a single positive aspect of the Registry, probably because he's entirely ignorant of why it's there or what features it enables. He could have researched this by talking to-- oh wait research? Hahaha!
    2) In the comments he blames the Registry for a driver bug, in the paraphrased words of one comment, "maybe the Registry gets all the blame because that's where buggy software keeps all their buggy settings".

    The readers are even dumber:

    3) Registry functions are kept in the Microsoft namespace because they're specific to Microsoft OSes. .net is cross-platform, dumbshits. (Atwood piles on to this retarded point.)
    4) .net doesn't use the Registry for ... the exact same reason as point 3. Not because .net developers "hate" the Registry, or whatever fake-ass reason they're assuming, but because .net is cross-platform and therefore can't rely on the Registry even existing at runtime. (See the sketch below for what that Windows-only API looks like.)
    5) One genius proposed replacing the Registry with a database. What the fuck do you think the Registry is, numbnuts!? (Presumably he means "relational database", but he's still a retard. And he doesn't even explain how a relational database would behave any differently than the Registry behaves now. I guess it would be slightly slower.)

    Jesus. For once I'd like to read a Jeff Atwood article that didn't leave me feeling dumber than before I started reading. How the hell does that guy have a following at all?
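
    For what it's worth, here is a minimal sketch of what points 3 and 4 look like in practice: the Registry API lives under the Microsoft.Win32 namespace rather than System.*, precisely because it is Windows-only. The key path and value name below are made up purely for illustration.

    ```csharp
    // Minimal sketch: reading and writing a value through the Windows-specific
    // Registry API, which lives in Microsoft.Win32 (not System.*).
    // The key path and value name are hypothetical, chosen only for illustration.
    using System;
    using Microsoft.Win32;

    class RegistrySketch
    {
        static void Main()
        {
            const string keyPath = @"Software\ExampleVendor\ExampleApp"; // hypothetical key

            // Write a DWORD value under HKEY_CURRENT_USER.
            using (RegistryKey key = Registry.CurrentUser.CreateSubKey(keyPath))
            {
                key.SetValue("RequirePasswordOnWake", 1, RegistryValueKind.DWord);
            }

            // Read it back, falling back to a default if it is missing.
            using (RegistryKey key = Registry.CurrentUser.OpenSubKey(keyPath))
            {
                object value = (key != null) ? key.GetValue("RequirePasswordOnWake", 0) : 0;
                Console.WriteLine("RequirePasswordOnWake = {0}", value);
            }
        }
    }
    ```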



  • @TheCPUWizard said:

    Guess you don't use Macs in a corporate environment much do you????

    Not in the last 5 years!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

    It seems that Apple has finally caught up to what Microsoft was providing in 1995. Kudos to them.



  • @TheCPUWizard said:

    a) Try to "scale wide" when you have power constraints, or space/weight constraints....yes, these apply to .NET applications, in fact there was a great MSDN article about 18 months ago (http://msdn.microsoft.com/en-us/magazine/gg232761.aspx)

    What does that article have to do with optimizing an application? I mean the entire article's point is, "hey this device is very power constrained, let's make it a dumb client of a beefy server elsewhere", which is duh.

    Did you post the wrong link? Or what?

    @TheCPUWizard said:

    provided that the ongoing support is done by people with equivilant knowledge, skills and experience as the people who developed the original code.

    You are an extremely optimistic person.

    @TheCPUWizard said:

    (it has been quite some time since pure CPU time has actually gotten "2 times faster" at a given price point).

    Spoken by a man who has never used Amazon Web Services.



  • @blakeyrat said:

    @TheCPUWizard said:
    Guess you don't use Macs in a corporate environment much do you????

    Not in the last 5 years!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

    It seems that Apple has finally caught up to what Microsoft was providing in 1995. Kudos to them.

    Typical response... first you start with a challenge in the present tense: "Here's a fun challenge for you: you're a network administrator," [in case you are unaware, "you're" is a contraction of "you are", not "you were"]...

    Then you try to beg off by claiming no knowledge because you haven't used them in 5 years... when the product family I linked to has been around for 10 years (+/- a few months), meaning it was an available tool for (approx.) 5 years when, by your own statement, you last worked with Macs in a corporate environment.

    But coming from you (based on your posts on this site), such a narrow-minded view is quite expected.



  • @blakeyrat said:

    @TheCPUWizard said:
    a) Try to "scale wide" when you have power constraints, or space/weight constraints....yes, these apply to .NET applications, in fact there was a great MSDN article about 18 months ago (http://msdn.microsoft.com/en-us/magazine/gg232761.aspx)

    What does that article have to do with optimizing an application? I mean the entire article's point is, "hey this device is very power constrained, let's make it a dumb client of a beefy server elsewhere", which is duh.

    Did you post the wrong link? Or what?

    @TheCPUWizard said:

    provided that the ongoing support is done by people with equivilant knowledge, skills and experience as the people who developed the original code.

    You are an extremely optimistic person.

    @TheCPUWizard said:

    (it has been quite some time since pure CPU time has actually gotten "2 times faster" at a given price point).

    Spoken by a man who has never used Amazon Web Services.

    Double-checked the link... it is the intended article... my point was: how would you "scale out", "add another server", etc., for a device mounted on the handlebars of a bicycle?!?!?!?!?!?!

    Also, I have used AWS quite frequently. While there are conditions where it is fantastic, there are limitations, and the price point is not as good as it may seem for a wide variety of situations. If your needs are intermittent (or periodic), then yes, the ability to spin up instances can provide a much lower cost than building local hardware. But there have actually been a fair number of companies (mainly those doing computational research) that are finding the long-term costs are not so great, and have moved back to their own (upgraded) datacenters.


  • :belt_onion:

    @blakeyrat said:

    If you're that one guy who needs to know it, fine. But don't pretend it's useful general knowledge for programmers, because the entire point of garbage collection and the CLR is to shield the programmer from having to think about any of this shit at all, ever. So stop thinking about it and build some software.

    All that time you're spending wanking around with these pointless experiments is time you could have been spending on additional QA to smooth out the rough bits, or on user testing to make sure your application isn't a wide-awake nightmare to use. (Which, from my experience, most applications overly concerned with performance are*.) Look, you disagree with me: fine, I get that.

    I can only speak for myself, but I investigated because I find it an interesting topic. Now that I know it's not an issue, I can stop worrying about it and nest try, using, catch and finally up the wazoo (for suitable values of wazoo) whenever that is more readable, without fretting about the performance impact.

    I wrote this test code during some idle time on the weekend, so I wouldn't have been doing QA anyway. And as an added bonus I learned about the Stopwatch class in .NET. Time well spent over a rainy weekend.
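
    Not the actual test code, but a minimal sketch of that kind of weekend experiment, assuming plain try/finally nesting and System.Diagnostics.Stopwatch for timing (the iteration count and nesting depth are arbitrary):

    ```csharp
    // Rough micro-benchmark sketch: compare a loop wrapped in nested try/finally
    // blocks against the same work with no exception-handling frames at all.
    using System;
    using System.Diagnostics;

    class TryFinallyTiming
    {
        const int Iterations = 10000000;

        static void Main()
        {
            // Warm up the JIT so the timed runs don't include compilation cost.
            NestedTryFinally(1000);
            PlainWork(1000);

            Stopwatch sw = Stopwatch.StartNew();
            NestedTryFinally(Iterations);
            sw.Stop();
            Console.WriteLine("Nested try/finally: {0} ms", sw.ElapsedMilliseconds);

            sw = Stopwatch.StartNew();
            PlainWork(Iterations);
            sw.Stop();
            Console.WriteLine("Plain loop:         {0} ms", sw.ElapsedMilliseconds);
        }

        static int NestedTryFinally(int iterations)
        {
            int total = 0;
            for (int i = 0; i < iterations; i++)
            {
                try
                {
                    try
                    {
                        total += i;
                    }
                    finally
                    {
                        total ^= 1;
                    }
                }
                finally
                {
                    total ^= 1;
                }
            }
            return total;
        }

        static int PlainWork(int iterations)
        {
            int total = 0;
            for (int i = 0; i < iterations; i++)
            {
                total += i;
                total ^= 1;
                total ^= 1;
            }
            return total;
        }
    }
    ```

    If the two numbers come out in the same ballpark on your machine, that supports the "not an issue in the no-exception path" conclusion above.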



  • @blakeyrat said:

    If you're that one guy who needs to know it, fine. But don't pretend it's useful general knowledge for programmers, because the entire point of garbage collection and the CLR is to shield the programmer from having to think about any of this shit at all, ever. So stop thinking about it and build some software.

    So true. Except the .NET GC isn't OK; it is not a solved problem. It actually causes problems. And knowing how it works? Well, in my experience, that didn't help much.
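
    As an aside, the runtime does at least expose some of its GC book-keeping to managed code. Here is a minimal sketch (the allocation loop is purely illustrative) of the sort of thing you can measure when collections do become a visible problem:

    ```csharp
    // Minimal sketch: inspecting GC collection counts and managed heap size
    // around a burst of short-lived allocations. The workload is arbitrary.
    using System;
    using System.Runtime;

    class GcPeek
    {
        static void Main()
        {
            Console.WriteLine("Latency mode: {0}", GCSettings.LatencyMode);

            long before = GC.GetTotalMemory(false);

            // Churn through some short-lived allocations to trigger collections.
            for (int i = 0; i < 100000; i++)
            {
                byte[] buffer = new byte[1024];
                buffer[0] = (byte)i;
            }

            long after = GC.GetTotalMemory(false);

            Console.WriteLine("Gen 0 collections: {0}", GC.CollectionCount(0));
            Console.WriteLine("Gen 1 collections: {0}", GC.CollectionCount(1));
            Console.WriteLine("Gen 2 collections: {0}", GC.CollectionCount(2));
            Console.WriteLine("Managed heap delta: {0} bytes", after - before);
        }
    }
    ```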

