Going to the Google's walled garden with Go



  • Nota bene: you have to write a public parameterless constructor in order to do so (well, one is created by default if you do not add any other constructor, but when you use injection via the constructor...).

    @BernieTheBernie That's new. One of the fruit sorting challenges @apapadimoulis put up had a constructor that always threw, and although plenty of submissions had workarounds the "one little trick" he was looking for was Activator.CreateInstance precisely because it did not require or run constructors.



  • So I've never used Go. Based on these code samples, it... does it... is it just me, or... is it... it looks kind of like... the way people use it, kind of reminds me of... PHP???

    EDIT: oh, hmm, yep. That sounds familiar.

    f2f52e8e-3efa-4649-b941-df97c06d6acd-image.png


  • Considered Harmful

    @TwelveBaud said in Going to the Google's walled garden with Go:

    the "one little trick" he was looking for

    one-weird-trick (1).png



  • @error Top tier thread derailing!



  • @konnichimade said in Going to the Google's walled garden with Go:

    So I've never used Go. Based on these code samples, it... does it... is it just me, or... is it... it looks kind of like... the way people use it, kind of reminds me of... PHP???

    EDIT: oh, hmm, yep. That sounds familiar.

    f2f52e8e-3efa-4649-b941-df97c06d6acd-image.png

    Well, the people who committed Go definitely did want to design a language. A better C, even. But what they did was lack a concept. They just sorta piled up features until it was good enough to program some things in and then said that's enough and they won't add any more to keep it simple and stupid.

    Back when I considered whether I should learn that language, I noted the general by-value semantics—except arrays—and built-in hash maps (so they don't miss generics that much), thought it was inconsistent as hell, and gave it a pass.
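    For what it's worth, in Go it is the fixed-size arrays that copy by value; slices are the ones that alias a shared backing array. A minimal sketch (function names are made up) of the two behaviours:

```go
package main

import "fmt"

// firstAfterArrayCopy: fixed-size arrays are values, so assignment
// copies all the elements and the original is untouched.
func firstAfterArrayCopy() int {
	arr := [3]int{1, 2, 3}
	cp := arr
	cp[0] = 99
	return arr[0] // still 1
}

// firstAfterSliceCopy: a slice is a small header (pointer, len, cap);
// assignment copies only the header, so both slices see the same elements.
func firstAfterSliceCopy() int {
	sl := []int{1, 2, 3}
	alias := sl
	alias[0] = 99
	return sl[0] // now 99
}

func main() {
	fmt.Println(firstAfterArrayCopy(), firstAfterSliceCopy()) // 1 99
}
```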



  • @Bulb

    sorta piled up features until it was good enough to program some things in

    Yep, that certainly describes PHP and JavaScript. As the OP pointed out, I just assumed Go would be "good" because Google made it.

    general by-value semantics—except arrays

    Argh gragh why can't languages just pick one or the other and call it a day. I'm fine with "generally by-value" if I don't always have to remember which things are different!

    Edit: from the article:

    You can, at great cost, write extremely careful Go code that stays far away from stringly-typed values and constantly checks invariants — you just get no help from the compiler whatsoever.

    Once again, it sounds like they went to great effort to recreate PHP.
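    The article's complaint is easy to reproduce in a few lines: a hypothetical stringly-typed dispatcher like the one below compiles fine with a typo in the argument and only goes wrong at runtime.

```go
package main

import "fmt"

// handle is a made-up stringly-typed dispatcher: nothing stops a
// caller from passing a misspelled or entirely unrelated string.
func handle(status string) string {
	switch status {
	case "ok":
		return "proceed"
	case "error":
		return "abort"
	default:
		// typos like "okk" land here, at runtime, with no compiler help
		return "unknown"
	}
}

func main() {
	fmt.Println(handle("ok"))  // proceed
	fmt.Println(handle("okk")) // unknown: the typo compiled fine
}
```

    Defining a dedicated type (type status int with iota constants) helps a little, though Go's untyped constants still convert implicitly, so the protection stays shallow.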


  • Considered Harmful

    @topspin said in Going to the Google's walled garden with Go:

    @Mason_Wheeler you’re ignoring the context of the question. :surprised-pikachu:

    Now you're ignoring the context of the question :pendant:



  • @konnichimade so once they catch up to PHP 7 where you can get help with type invariants Go will be good?



  • There is some weirdass case where the Go slice has a backing array which is sometimes by value and sometimes by reference. There is some very fucky logic behind it that can easily trip up programmers used to languages that don’t suck donkey dick. I don’t remember the issue off the top of my head, but I’ll post it soon if someone doesn’t beat me to it.
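    This is probably the append gotcha: append reuses the slice's backing array while there is spare capacity, so writes through the result alias the original, and then it silently stops aliasing once it has to reallocate. A sketch (appendAliases is a made-up name):

```go
package main

import "fmt"

// appendAliases demonstrates append sometimes sharing and sometimes
// copying the backing array, depending on spare capacity.
func appendAliases() (inCap, afterGrow int) {
	a := make([]int, 3, 4) // len 3, cap 4: one element of slack
	b := append(a, 10)     // fits in cap: b reuses a's backing array
	b[0] = 99
	inCap = a[0] // 99: the write through b is visible through a

	c := append(b, 20) // exceeds cap: append allocates a fresh array
	c[0] = -1
	afterGrow = b[0] // still 99: c no longer shares storage with b
	return inCap, afterGrow
}

func main() {
	fmt.Println(appendAliases()) // 99 99
}
```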



  • @konnichimade said in Going to the Google's walled garden with Go:

    @Bulb

    general by-value semantics—except arrays

    Argh gragh why can't languages just pick one or the other and call it a day. I'm fine with "generally by-value" if I don't always have to remember which things are different!

    I think they had two conflicting desires: 1. put in few features, so there is only one way to do it and the language takes less time to learn, and 2. don't put in any easy-looking operations that would take significant time at runtime, because that's something they hated (for all the wrong reasons, probably) about C++.

    Structs are usually small, and in Go assignment always copies them bytewise, with no user-definable copy constructor that could do anything expensive, but arrays are ‘usually’ large so they don't want implicit allocation and copy in assignment…

    I think Rust does it right: anything that is a reference always has the & in the type name, even if there is no corresponding value (trait refs, as you need references for runtime polymorphism), and customized copying is possible, but it uses explicit .clone() so you know where such a thing may be happening when reading the code. But Rust does have a concept.

    Also note that the bytewise copy means you can still copy structs that you'd better not, because they reference external resources.
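    That last hazard shows up in Go as soon as a struct holds a slice (or file handle, mutex, etc.): assignment copies the slice header bytewise, so both copies still point at the same underlying bytes. A small sketch with a hypothetical record type:

```go
package main

import "fmt"

// record is a hypothetical struct owning a byte buffer.
type record struct {
	name string
	data []byte
}

// shallowCopy shows that struct assignment duplicates the slice header
// but not the bytes it points at, so the copies alias each other.
func shallowCopy() byte {
	a := record{name: "a", data: []byte{1, 2, 3}}
	b := a        // bytewise copy of the struct
	b.data[0] = 9 // writes into the buffer a still points at
	return a.data[0]
}

func main() {
	fmt.Println(shallowCopy()) // 9
}
```

    go vet's copylocks check catches some of these cases (e.g. copying a struct containing a sync.Mutex), but the compiler itself stays silent.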



  • @konnichimade said in Going to the Google's walled garden with Go:

    I just assumed Go would be "good" because Google made it.

    :laugh-harder: You assumed ... poorly.


  • Discourse touched me in a no-no place

    @Bulb said in Going to the Google's walled garden with Go:

    Structs are usually small, and in Go assignment always copies them bytewise, with no user-definable copy constructor that could do anything expensive, but arrays are ‘usually’ large so they don't want implicit allocation and copy in assignment…

    No hybrid structures then, such as you can do with C using a flexible array member? Those are useful.



  • @dkf That would take more than three words to describe! Can't have such difficult stuff in a nice and easy language like Go.</:tro-pop:>

    You can only have those in C and Rust, and while in Rust the type itself is safe, you can't safely create it as you need to construct the reference from_raw_parts.


  • Discourse touched me in a no-no place

    @Bulb said in Going to the Google's walled garden with Go:

    @dkf That would take more than three words to describe! Can't have such difficult stuff in a nice and easy language like Go.</:tro-pop:>

    You can only have those in C and Rust, and while in Rust the type itself is safe, you can't safely create it as you need to construct the reference from_raw_parts.

    In theory, you can have them in C++ too, except the standard totally broke the capability by allowing implementations to add arbitrary unknowable extra padding when doing placement array new, including at the start. Because you obviously wanted that. :headdesk:



  • @dkf I never dug into that mess, but I believe it's because destructors. Basically if your type is not trivially destructible, the delete[] needs to know how many objects to delete, but you are not telling it, so the new[] has to write that information somewhere, and the only place is the buffer you are constructing into.

    The only way to do it is to have a char[], construct and destruct the objects with std::allocator—which does have the count argument—and do the casting with a getter on each access.


  • Discourse touched me in a no-no place

    @Bulb Yes... but there is no way to know how much space to (over-)allocate. That makes actually using the capability impossible. (It isn't a problem with C as that's content to give you uninitialised memory.)



  • @dkf Indeed. Array new and delete is a half-baked feature that was eventually replaced by the allocator.



  • @Arantor said in Going to the Google's walled garden with Go:

    @konnichimade so once they catch up to PHP 7 where you can get help with type invariants Go will be good?

    Catching up to PHP is not a way to get good.



  • @dkf Fixed, apparently: https://www.open-std.org/jtc1/sc22/wg21/docs/cwg_defects.html#2382

    Placement new isn't allowed to add padding anymore, if I understand that correctly. Current standard (draft) also says as much:

    That argument shall be no less than the size of the object being created; it may be greater than the size of the object being created only if the object is an array and the allocation function is not a non-allocating form ([new.delete.placement]).

    @Bulb said in Going to the Google's walled garden with Go:

    Basically if your type is not trivially destructible, the delete[] needs to know how many objects to delete, but you are not telling it, so the new[] has to write that information somewhere, and the only place is the buffer you are constructing into.

    There's no (user callable) "placement delete", and you can't use the normal delete[] for objects constructed with array-placement-new. The array-placement-delete operator exists only to deal with the case where a constructor throws during array-placement-new (spec). At that point, the implementation knows how many objects were constructed anyway.

    Edit: More relevant link to the standard.



  • @cvi said in Going to the Google's walled garden with Go:

    There's no (user callable) "placement delete"

    True. 🧠💨 on my part. There is only the explicit destructor call (x->~Type()) and you have to call it for each element, so … you can call placement new for each element too and ignore that array placement new ever existed in the first place. Which is what std::allocator does anyway.



  • @Bulb Yeah, pretty much.

    FWIW, there's std::destroy and std::destroy_n, which invoke the destructor for multiple objects.



  • @cvi … it looks like it was moved out of the std::allocator.


  • Discourse touched me in a no-no place

    @cvi Of course, the real fun in all this is that almost all cases where you'd want to use it, you'd also want the contained elements to only have trivial constructors and destructors; you're using it to model a particular memory layout, not something with complicated objects in the mix.

    C++: can't decide whether it wants to be a low-level or a high-level language, so tries to do both and leaves nobody happy.



  • @dkf There are plenty of cases where you want to use the placement new and destructor calls with non-trivial objects—basically all the collection types. And some might even use flexible arrays so you can have one allocation with a node header and content.



  • @dkf said in Going to the Google's walled garden with Go:

    C++: can't decide whether it wants to be a low-level or a high-level language, so tries to do both and leaves nobody happy.

    If it just tried to do both, but admitted they are different cases, it would be fine. But it tries to pretend they are the same case and that makes it a mess. See:



  • @dkf said in Going to the Google's walled garden with Go:

    C++: can't decide whether it wants to be a low-level or a high-level language, so tries to do both and leaves nobody happy.

    Yeah, I can see that complaint. From my POV, it's probably a bit more nuanced. I do want a language that does both "low" and "high" level stuff. C++'s problem is more that in some aspects it seems low-level, when it really isn't.

    Then again, I don't really see any other language stepping into that space. Rust seemed/seems interesting, but it's been going down a different route. I think it's trying to solve some real-world problems, just not really the ones that I'm having. Not sure I have seen anything else in the space.



  • @cvi What problems are you having for which Rust does not do at least a bit better than C++? Besides shared libraries.



  • @Bulb Let's assume that that statement is true - I don't have the same amount of experience with Rust as with C++, so I'm not going to argue too much with the details. If you have used Rust significantly, you know those much better than I do.

    That said, based on the post above, I could probably argue that Rust's move mechanics have some advantages over C++'s ones. They also seem a lot less flexible - e.g. how do you implement stuff like small-storage optimization in strings/vectors/... efficiently? But ultimately, most of these seem largely insignificant in the whole, and don't warrant switching languages (and the whole mess that this brings with it).

    I don't have much use for Rust's main feature (memory safety/management). What little Rust code I've seen in my field seems to use unsafe fairly liberally (often a bit too much, but still). Memory management/safety isn't a bottleneck IME - I don't spend significant time debugging such or dealing with those problems (they are mostly trivial). Most of the effort goes into solving issues on the "algorithmic" level. Rust doesn't help with that. (The other one is concurrency in either shared memory or distributed systems. Rust doesn't appear to help with that either.)

    There are a few other minor-ish things as well. Most APIs and systems I deal with are first class citizens in C/C++, but second class in Rust. Dealing with custom bindings isn't a huge deal, but I'd rather not if I don't have to. Also CUDA et al. Rust has a single implementation (AFAIK), and doesn't have a spec (with the single implementation being the reference standard). That implementation relies on LLVM, which can be a bit of a mixed bag.

    That said, I think there are a few nice things I remember off the top of my head

    • Static reflection (I think Rust has this?)
    • Dealing explicitly with stuff like integer over-/underflow. Rust's way of dealing with it is ... awkward, but at least it doesn't bake the behaviour into signedness. 😠
    • rust-gpu. Don't know if it's mainline, don't know if it's any good, but dammit, this is the one thing that is closest to making me reconsider my stance towards Rust.

    In my eyes it's pretty much a wash between the two in general. Whatever minor differences exist are way offset by the fact that I have a lot more experience with C++. (And a good chunk of existing code.)



  • @cvi said in Going to the Google's walled garden with Go:

    @Bulb Let's assume that that statement is true - I don't have the same amount of experience with Rust as with C++, so I'm not going to argue too much with the details. If you have used Rust significantly, you know those much better than I do.

    No, I didn't use it much either unfortunately.

    That said, based on the post above, I could probably argue that Rust's move mechanics have some advantages over C++'s ones. They also seem a lot less flexible - e.g. how do you implement stuff like small-storage optimization in strings/vectors/... efficiently?

    Well, you use a different trade-off: you have to check whether the content is embedded on access (using an enum, probably), but then you don't need to check whether the pointer needs updating on move.
    And you can always convert to a slice and then it's irrelevant.

    But ultimately, most of these seem largely insignificant in the whole, and don't warrant switching languages (and the whole mess that this brings with it).

    Rust is working hard on integration with C and C++, because there are a lot of projects that would like to start using it but are big and can't just rewrite everything. Yes, that's of course a good reason for staying with C++. It isn't that no language is working to cover the space, though; it is simply institutional inertia.

    I don't have much use for Rust's main feature (memory safety/management). What little Rust code I've seen in my field seems to use unsafe fairly liberally (often a bit too much, but still). Memory management/safety isn't a bottleneck IME - I don't spend significant time debugging such or dealing with those problems (they are mostly trivial). Most of the effort goes into solving issues on the "algorithmic" level. Rust doesn't help with that. (The other one is concurrency in either shared memory or distributed systems. Rust doesn't appear to help with that either.)

    Rust does help with concurrency. The borrow checker, together with the automatic marker traits Send and Sync, is there to check that anything you share between threads is either read-only, wrapped in a mutex, or otherwise handles concurrent access internally.

    It is a big advantage of the ownership tracking over managed memory (which is otherwise definitely easier to work with) that it can also prevent data races.

    There are a few other minor-ish things as well. Most APIs and systems I deal with are first class citizens in C/C++, but second class in Rust. Dealing with custom bindings isn't a huge deal, but I'd rather not if I don't have to. Also CUDA et al. Rust has a single implementation (AFAIK), and doesn't have a spec (with the single implementation being the reference standard). That implementation relies on LLVM, which can be a bit of a mixed bag.

    There are people working towards those things, but it takes time and of course Rust is much younger.

    That said, I think there are a few nice things I remember off the top of my head

    • Static reflection (I think Rust has this?)

    Sort of but not really. It has powerful macros, but they see the token tree before types are resolved, which limits the uses somewhat.

    • Dealing explicitly with stuff like integer over-/underflow. Rust's way of dealing with it is ... awkward, but at least it doesn't bake the behaviour into signedness. 😠

    There are too many options, so any way of dealing with it will probably be awkward in some aspect, unfortunately.

    • rust-gpu. Don't know if it's mainline, don't know if it's any good, but dammit, this is the one thing that is closest to making me reconsider my stance towards Rust.

    I am … not even sure which rust-gpu. There are several projects on that front but nothing seems really finished yet.

    In my eyes it's pretty much a wash between the two in general. Whatever minor differences exist are way offset by the fact that I have a lot more experience with C++. (And a good chunk of existing code.)

    That's fine and good. But you said it's been going down a different route, so I wanted to know which route it is you consider insufficiently covered.


  • Discourse touched me in a no-no place

    @Bulb said in Going to the Google's walled garden with Go:

    The borrow checker, together with the automatic marker traits Send and Sync, is there to check that anything you share between threads is either read-only, wrapped in a mutex, or otherwise handles concurrent access internally.

    The big problem I have with those is that they don't support either the hardware I'm targeting or the execution model I use. The hardware doesn't have a strong memory synchronization model at all, so mutexes need to use a special hardware memory unit, and as a consequence are either very limited in number (most of the hardware resources are reserved for OS use) or astoundingly expensive (when they're built on top of the remaining free resources).

    The execution model is TDMA, i.e., based on time slots; in your time slot you have free access, and outside it you mustn't touch it at all. (TDMA only works in real-time programming.) We also have memory usage patterns that are pretty horrible to model in many languages, due to the use of concatenated VLAs (because that optimises for how the DMA engine works). Oh, and we have hardware interrupts.

    I've also no idea if Rust supports fixed-point arithmetic; I've simply not checked that (I know that clang used to not support it, but apparently does somewhat now, but validating an additional compiler is annoying when we know that gcc works fine).

    By the time I allow for all that "unsafe" (or at least not analysable by the standard compiler) code, I really suspect that Rust wouldn't be offering me any help at all.

    (The code that doesn't run on the special hardware uses Python and Java. That's a totally different battle.)



  • @dkf That's really unusual hardware, though. It might still be possible to build an abstraction on top of it in which the compiler could protect you when writing the higher level code, but it may or may not be worth the trouble depending on how much higher level code there is.


  • Discourse touched me in a no-no place

    @Bulb said in Going to the Google's walled garden with Go:

    it may or may not be worth the trouble depending on how much higher level code there is

    It's almost certainly not worth it. There are other constraints (especially a very small amount of memory for executable code) that make abstractions particularly costly; there's nothing zero-cost about anything at all on that system other than simply avoiding putting code or data there in the first place.



  • @dkf “Zero-cost abstractions” mean they are compile-time only and compile down to no instructions. Sometimes it's possible. But it's quite likely that in your case, which is rather special, sticking with C is just the sensible thing to do anyway.


  • Discourse touched me in a no-no place

    @Bulb said in Going to the Google's walled garden with Go:

    “Zero-cost abstractions” mean they are compile-time only and compile down to no instructions.

    The usual interpretation is that they don't expand the size of the data or significantly increase the number of instructions normally executed. Minimising the size of the TEXT segment is usually a non-goal, but on embedded platforms (and ones like ours that look a lot like that) it's potentially absolutely crucial. You simply can't fit all that much code in 32kB, so the code that's there really needs to pull its weight.



  • @dkf The usual interpretation is that they are ‘as efficient as if you wrote it by hand’, and for your platform that includes adding no instructions even on cold paths. And the Rust borrow-checker is compile-time only—once the lifetimes are checked, the next compiler pass discards them and code generation does not even know they existed.



  • @Bulb said in Going to the Google's walled garden with Go:

    Rust does help with concurrency. The borrow checker, together with the automatic marker traits Send and Sync, is there to check that anything you share between threads is either read-only, wrapped in a mutex, or otherwise handles concurrent access internally.

    The (interesting) stuff that I write would probably fall under the "handles concurrent access internally" category. E.g. manipulating tree/graph structures, where synchronization is a mix of algorithmic guarantees, stuff like manual atomics / CAS, and relying on the underlying HW guarantees. This is where C++ may seem low-level, but isn't really (and some of the stuff is technically in UB land, but we can deal with that.)

    @Bulb said in Going to the Google's walled garden with Go:

    I am … not even sure which rust-gpu. There are several projects on that front but nothing seems really finished yet.

    This seems to be the one I've looked at previously.

    @Bulb said in Going to the Google's walled garden with Go:

    But you said it's been going down a different route, so I wanted to know which route it is you consider insufficiently covered.

    Fair. To clarify, Rust seems to be going down the route of really worrying about (memory) safety. I'm not particularly concerned about that, which means that I'm "meh" about one of its main features. It's not making significant strides (IMO) in other areas, such as parallelism/concurrency/distributed stuff or memory layout, nor is it doing anything particularly interesting for heterogeneous systems or even just for dealing with many (slightly) different target systems. Doesn't mean Rust can't do well in this context, just that there's nothing ground-breaking about Rust in this context (that I have seen).



  • @cvi It's doing async in the concurrency domain. Which isn't groundbreaking and appears to be finally coming to C++ as well. It is applying the borrow checker to the parallelism domain, which is quite ground-breaking. Memory layout and heterogeneous, not much indeed.


  • Discourse touched me in a no-no place

    @Bulb said in Going to the Google's walled garden with Go:

    the Rust borrow-checker is compile-time only

    I've seen what LLVM can do with reference counting memory allocation when you've got an object that provably doesn't escape, turning it into something as efficient as you'd write by hand with manually-determined scoping. It's pretty impressive stuff even knowing how it manages it.



  • @dkf The borrow checker isn't even anything like that. The borrow checker has a declaration of which scope the reference must not escape, and either it is clear at compile time that the reference does not escape (and the appropriate destructor is inserted as needed), or it cannot be proven and the code does not compile. There is no need to derive anything.

    You are still managing the memory explicitly (with the help of destructors), like in C++; it's just the compiler checking that you stay within provably sound patterns.



  • @Arantor Touché.

