• I remember reading this about 20 years ago, but it's out of date now since it was based on the C++2003 ISO standard. There's an updated alternative now, the C++ Core Guidelines, a living document by Bjarne Stroustrup and Herb Sutter which is focused on C++17 and C++20.

    But is that the one that the US military uses?

    Yes, it's one of a number of standards they use in addition to others like MISRA C++, CERT C++, etc.
    They definitely do not, however, use the outdated JSF standard anymore.

    A lot of people have the notion that US software is always terribly out of date, like still using COBOL or Fortran or something. The reality is that the vast majority of software written by USGov is very modern. They have strict security requirements that older code simply cannot meet.

    To be fair, there's a modern Fortran 2023 standard, and people use it, if in an HPC numerics niche. Since F90, later Fortran added array-language-inspired ops, so it's quite different from the F77 people still picture.

    Actually the latest COBOL standard seems to be 2023 too. But, well, COBOL.

    Sure, these things happen. But they happen less frequently in the government, not more.

    I’d actually love to see some data on that. I’m not sure how it’s been in the last decade since TEFCA but it seemed like every small medical practice in the USA tottered along on antiquated “Y2K+1” systems forever. “B-HIPAA” systems, barely hipaa compliant.

    Keep in mind that private industry has sometimes just as much inertia, and they’re spending their own money. Don’t touch what ain’t broke was the motto for a lot of systems.

    You're not going to find hard data on how many government projects use X version of Y language, but the executive orders and directives from CISA/DISA/DoDIIS requiring software to meet modern security standards are all public.

    That's the great thing about standards. There are so many to choose from.

    Are you working in that sector or where do you know that from? A "living" document (and in this case crowd sourced) is usually not a good basis for development in highly regulated industries.

    I'm not the same person and I'm not in that exact industry, but I'm a DoD contract SW engineer and we also have living documents. DoD/Military is trying to become more "agile" and along with that comes things like constantly updating standards. (I put agile in quotes cause it's more like pretend agile...)

    As for how the standards impact code, any new code written has to match the living document that sprint. Previous code is left alone unless someone has to go back to make changes, then it's updated as part of that ticket/issue.

    That being said, the standards don't change that often, even as a living document.

    pretend agile

    No worries, same as private industry.

    edit: just realized this is in r/programming not r/aviation, lol. I spend more time in the latter.

    Agile requirements but waterfall deadlines hip hip hooray

    I'm not a programmer so I'm not sure how I ended up here but agile requirements sound like a nightmare.

    Agile is a way of building software by iteration. Instead of creating a global plan and following it to the end, after each period of time you check the software and decide the next step. It makes it easier to build complex software and helps produce software that's actually useful.

    But agile is often misunderstood and very badly applied.

    Basically, still waterfall but nobody wants to attend CCB meetings. lol

    Gonna add to this. I work in this space as well. As stated above, agile is a "loose" interpretation. Typically there are requirements passed down as part of these contracts, but then the work is expected to be completed in an "agile" fashion. Closer to agile is the R&D or Minimum Viable Product work, but once development is far enough along, requirements will be written to match what the product does to "formalize" the deliverables.

    They are pushing more open standards as well, which allows the various departments to yank contracts from underperformers and grant them to other contractors. This is an attempt to get rid of the "sunk cost fallacy"; however, some contractors like Lockheed try to slide their bids in under everyone else, with the exception that their solution remains proprietary. So, take that for what you will; money still talks.

    Honestly, working in defense really shows you how shifty some of these corps are. There are definitely better ones than others, but I honestly think the protection of the US shouldn't be gamified.

    How dare you yearn for anything beyond c89????

    Pretend agile is just waterfall but with even less time to do things because it makes it look faster and thus.... Agile.

    I usually call it waterfall agile lol

    It's living in the sense that it's a git repo, with strict controls. Stroustrup or Herb Sutter have to approve any additions via pull requests.

    I haven't read it in quite some time, so maybe it has sufficiently matured by now, but when I was more involved (6+ years ago?) it was in pretty poor condition, with placeholders, inconsistencies, rules that were not explained, rules that could not be enforced, enforcement sections that (taken verbatim) produced nonsensical warnings, and so on. And IIRC it had more maintainers than just those two. Certainly a good collection of guidelines, but IMHO not a good rule document - and the focus was certainly not on hard real-time and/or safety-critical software.

    Probably still better than a 20-year-old standard

  • 4.4.2 AVR 9 if I can't have emojis in my Comments I don't want it 😤

    Or even UI!

    Nice landing! 👍😃

    Ejection detected 🚀👎🥴

    Low fuel! 😬⛽

    Missile lock detected! 👀 😳

    W-we running wowy on fuew!! 😰⛽💦

    When the Black box is recovered, and logs are read

    • 💥

    Flares deployed! 💃💃💃

    Missile missed! 😮‍💨

    Is for me 👉👈🥺

    🚨 You're absolutely right, and you've identified a critical problem — the plane is out of fuel. No mechanical problems, no gaps in communication, just good-old-fashioned resource scarcity.

    did you write this by hand? Because holy fuck

    Unfortunately yes. It's poisoned me

    10/10 the rage I felt in an instant could have fueled the plane for another hot minute

    lmao this is the best one of these i've seen in a while

    "But no worries, you got this! Say the word and I eject you from the plane! ⏏️✈️💥"

    the words are either "33698854213999" or "01189998819991197253"

    God damn pattern recognition, of course I know that number...

    It took me many years, but most of the numbers in that sketch are all the world's emergency codes rolled into one. There's 999, 911, 119, etc.

    Don't forget the question on the end.

    "Would you like help identifying a nearby landing area or activating the ejector seat?"

    Altitude 😨 Altitude 😨

    Sucks that we don't have a "retard" emoji.

    I don't care about the comments. I want to be able to have those emojis in the string literals (and then have those string literals printed on the heads up displays). We need the pilots to see:

    Low fuel! 😬⛽

    and

    Nice landing! 👍😃

    You can still have emojis in your commit messages though, also makes the git blame a bit more entertaining

    Afghan farmer detected😈

  • One of the easiest ways to understand the logic behind those rules is that there is no memory allocation after program initialization.

    Just imagine how much easier it is to write code where all bits and pieces have a limited count: your loops always iterate over the same counts, and even when you're not at the limits, there is no chance of stepping into a different address because of it.

    You can also see how much less you would use pointers in this scenario.

    Also, keep in mind that these apps are very static. You don't have another sensor added to the machine in the middle of a flight, or even between flights. You don't have to worry about another target being added to the tracking list. The system allows for a max of X targets, the memory structures are preset for that, and that's it...

    Just imagine how much easier it is to write code where all bits and pieces have limited count

    I wouldn't say it's easier. It avoids a lot of critical mistakes, at the cost of losing the convenience offered by dynamic memory allocation in general. In practice it often means putting a lot of thought into overcoming this limitation, and a lot of effort into finding an acceptable middle ground between accuracy, speed, and memory real estate to do pretty complex stuff.

    In practice it often means putting a lot of thought into overcoming this limitation

    It's a different approach. You don't have this dynamic world where each object may or may not exist and you need to herd the cats constantly.

    It's always the same objects in the same places; you just change the state of those objects to "unused" or "used" or whatever their state needs to be (which is done anyway in most dynamic code).

    Think about it in terms of microcontroller routines, where you don't deal with a custom number of motors/sensors etc. They just exist and either feed data or not (data is zero, in simplified form).

    A lot of complexity disappears. Instead you have this static landscape of objects. I would say it's simpler.

    Another thing that helps when you have lots of remote, fielded units is not having any dynamic memory even at startup, so that the addresses of things are in the linker map. It's boring, but you can then sometimes correlate a crash dump with an address a bit more easily.

    Indeed. Not sure if it's done this way (some race conditions probably still exist, so things get shuffled in memory between restarts), but if it's done in that sequential way, it would work like that.

  • Didn't ever watch a video of hers before; it popped up in my feed last night, and literally three or four minutes in, immediate subscribe. Her video was killer.

    She has way more production value in her videos than you'd expect given the view/sub count.

    Her subject matter is pretty niche, even if she does an amazing job of explaining complicated subjects. I just stumbled upon her videos the other day, don’t know why YouTube’s algorithm coughed it up, but I’m glad they did. I looked into her background, and holy smokes is she accomplished.

    I remember seeing her roasting sorting algorithms probably close to when it was released, and I loved the aesthetic, with all the computers in the background and such, even when she was below 100k subs. So it's great to see her in my recommended videos sometimes and still be treated to that.

    https://www.youtube.com/watch?v=u0aoByec99Q

    Looks like she crossed 100k in Jul 2024 and 10k in Nov 2023, so yeah, in a video from 2 years ago she was around 10k. I didn't realize she was over 300k now; the production value is more _expected_ at that scale, but I'd still expect her to grow.

    The background was giving me Technology Connections vibes, in a good way

    She's all over my feed recently too, I wonder if The Algorithm changed in some significant way.

    I think the nerd aviation / coding / tech content overlap area is big and she just hit a good mix of all of that

    My first thought as well

    That was good, but I don't follow the exception stuff. She says the reason not to use them is to do with timing, but it didn't seem like timing was the issue with the crash? It seemed to me like there was some logic difference between the two versions, and it wasn't explained what it was.

    They make for unpredictable flows, they add overhead constraints, they make complete testing nearly impossible, and they can result in unforeseen execution states.

    Error handling is required - you just can't typically use c++ exception handling in safety critical environments.

    Yeah, that's the kind of explanation that I'd have expected her to give for them being forbidden, along with something like "and when we write the version without exceptions, it's a lot more obvious that there's a bug here".

  • I'm surprised vertical tab made it onto their short list of approved characters. It doesn't seem like a very useful character when writing C++ source code.

    Well that wasn't as enlightening as I'd hoped. 😂

    He doesn't use vertical tab. He doesn't know of any uses for it. Does that mean it could be removed?

    No the fuck it doesn't! PowerPoint uses it internally for line breaks.

    Please don't repeat nonsense like "I don't need it so no one needs it"!

    If there's PowerPoint running in flight systems, then I'm taking the train

    He's not talking about deleting byte 11 from the universe, he just wants to remove the escape sequence from newer languages. \x0b is still right there if you need it badly enough.

  • When are they rewriting the F-35 in Rust?

    All military hardware turns to rust if you leave it outside long enough. The Russians were actually way ahead of the US on adopting it.

    Is that why they’re struggling in Ukraine? Too many mixed ecosystems with rust?

    Memory problems would explain why they dug trenches around Chernobyl.

    I know they’ve lost a lot of their sea based systems the last couple years too.

    "Cyka blyat Dmitri! You got the lifetimes wrong again in the unsafe block!"

    Also under the DoD, DARPA has a "TRACTOR" program: TRanslating All C TO Rust. Haven't heard much about it since it was announced, oh, a year or so ago, though.

    I wonder if it would make sense to convert C to unsafe Rust, and just slowly rewrite it over time to make it safe. hm.

    I think that's largely what the existing c2rust system does. It results in a lot of weird code, especially around integers. I'm not entirely sure how valuable people find it as opposed to rewriting components in Rust and gluing them back together with the C FFI.

    Hm. Well rust needs different design patterns. Not sure how well that would work.

    The funny thing is the DoD already has their own high reliability language everyone hates: ADA.

    Ada, not ADA. It's named after Ada Lovelace, and isn't an acronym.

    (and of course not everyone hates it 😀)

    I coded in ada for 19 years..... I really miss it. Phenomenal language.

    Ada hasn't really been in use for the past couple decades. There's a common rumor that it's required in the DoD because of its safety, but it's just not true. It's also not what I would call safe these days.

    Yeah, I get the feeling Ada mostly comes up as a diversion along the lines of "but I don't wanna learn Rust!" or "a-ha! the security nerds have tried this before, I'll have you know!"; at best it's just trivia.

    For whatever reasons, Ada never really caught on; Rust is in use in pretty much all the megacorps these days, and it's in both the Linux and Windows kernels, etc, etc. Google have found that it not only significantly lowers the defect rate, but also significantly lowers the time spent in review and the rollback rate. That sounds like something DOD coders and their bosses would be interested in trying out, too.

    And sure, Rust isn't everyone's cup of tea, but then neither have C++ or C been; they seem to remain mostly in use in niches where they haven't had any real challengers.

    I think Ada was just too early. Rust was in the right place at the right time just as the mainstream (as opposed to aerospace etc) systems programming community was finally starting to take memory safety and correctness more seriously. And even though it shouldn't really matter, I'm fairly sure that the C-like vs Pascal-like syntax has made a difference in people's willingness to adopt.

    Yeah, I think too early is a factor too, but I don't really know. I learned to program just barely on this side of Y2K, and for me Ada has always been something from the past, never really a thing of the present.

    So I can believe that it never got a good online open source ecosystem, buuut I haven't actually looked it up, because again, my impression is that it's an also-ran from way-back-when, and I'm not that much into programming language history. I couldn't tell you the first thing about SNOBOL or PL/I or the like, either.

    And even though it shouldn't really matter, I'm fairly sure that the C-like vs Pascal-like syntax has made a difference in people's willingness to adopt.

    Yeah, I think those of us who have some experience with alternate syntax families tend to underestimate the sentiments of the majority of programmers when it comes to that. All the most common languages are somewhat descended from ALGOL, and even then from the curly-brace-and-semicolon branch of the ALGOL family tree. Python, Ruby, bash and so on are mild outliers these days, even though the if…fi syntax comes straight outta ALGOL.

    Picking a Pascal-ish syntax probably made a lot of sense back when Pascal was popular, though. They had no way of knowing that Pascal would be going away the way that it did, any more than the designers of Python and JS could know that by 2025 people would be adding type hints and trying to statically typecheck their languages.

    There's a common rumor that it's required in the DoD

    It was actually required for a while. The main reason people think this rule is still in place is that the DOD planned to enforce it when it commissioned the development of Ada in the first place, and the history lessons never get to the part where they got distracted and gave up.

    I promise you Ada is still alive and well inside defense companies. DoD doesn't mandate it be used for everything, but there are a number of systems that are still in use written in Ada that would be obscenely cost prohibitive to rewrite.

    I promise you Ada is still alive and well

    In the same sense as COBOL is "alive and well", sure.

    DoD doesn't mandate it be used for everything

    I doubt there are any DoD mandates for Ada at this point. "Not everything" is like saying that Socrates was killed over a decade ago. It's technically true, but wildly misrepresents the situation.

    That isn't really true - it was definitely used more in the past but it still sees use in new safety critical or embedded projects - see https://www.adacore.com/industries for example. Nvidia uses SPARK (a subset of Ada suited for formal verification) for some firmware, so there are definitely new users.

    Yeah, in the same sense that COBOL or Fortran are still in use.

    Nvidia is rewriting firmware in COBOL?

    They obviously won’t rewrite in rust because rewriting source code for a fighter jet in a new language is objectively insane (I realize you’re joking). But it’s very likely new such projects will be written in rust one day. It’s expected that rust will catch up to C++ in terms of we projects within 5-10 years. So maybe double that before it starts making its way into critical defense tech projects. So like 10-20 years.

    Having participated in different reviews involving significant C/C++ codebases that generate significant revenue, I can say with pretty high confidence that it will be way more than 20 years before you see significant Rust adoption.

    The cost overruns on the rewrites, as well as the financial penalties resulting from missed timelines and scope, have all but soured the perception of Rust among senior and executive leadership. Secondarily, new projects (NPIs) are cheaper to bid on when reusing the existing established code base. Nobody can deliver "new stuff" in Rust at the price point that is expected of them.

    If times were booming then companies could pour in billions to rewrite on the side (not tied with any significant bids). Times are getting hard, so that isn't an option in many cases. This economic situation will slow down adoption.

    AWS occupies a niche in the sense that they have nearly limitless capital to burn. Of course they're going to have a different experience than folks who don't have a regularly recurring stream of high-margin capital to work with.

    It's not the Rust projects that sour leadership opinion. It's the rewrites, which, like any software rewrite, come in over time and over budget. You could've written the thing in any other language and it also would have come in over time and over budget. The rewrites missing their mark is the reason why senior and executive leaders have soured on Rust.

    Right now, the competitors that aren't trying to pursue a Rust rewrite are winning the bids, because they can get to market faster and cheaper by reusing their legacy C/C++ code bases. This is why even "net new stuff" isn't going to be Rust for a while. No amount of personally maintained crates is going to change that. The problem is the proprietary trade-secret code that is never going to be in a publicly available crate.

    Absolutely!

    Sorry I didn't understand your point completely then.

    It is the rewrite itself, not the rewrite in Rust, that is the issue.

    I appreciate you taking the time.

    But, taking the short term view only works for so long. Another company that puts in the time to build up the infrastructure eventually shows up and says, hey, we can do it in a vastly safer language instead of one that our own government warns against using for critical software.

    And, that company will not have to spend endless man-hours doing what a compiler can do vastly better, and concentrate on the actual logical correctness of the system.

    The reality of business is that it is all a giant casino. The sad truth is that the pioneer of an innovation is statistically not the survivor that is ultimately successful. There are more failures that get bought out or taken over for pennies than there are unicorns that pioneered and succeeded.

    For the defense industry, while its use of C/C++ is certainly not as bulletproof as Rust, the industry's practical application of C/C++ sees far fewer issues (by an order of magnitude) than other industries' use of C/C++. The practical benefit of Rust isn't as pronounced, which lends further skepticism about its ROI.

    A Rust rewrite has to be delivered with less money than its C/C++ counterpart, which likely has 20+ years of accumulation. This is a defensive moat, competitively. It isn't going to be unseated by a Rust upstart without shenanigans like a missile turning around and blowing up the station that fired it in the first place. The biggest problem isn't memory safety; it's the heuristics that have been honed and perfected over 20+ years.

    As much as the government wants you to use rust, it isn't willing to write a check for 20 years of investment consolidated into a shorter time-frame just to get a rust rewrite.

    For those who go broke trying to do so, the industry giants will just inherit their work for pennies. In the end the giants, and anyone who didn't pursue innovation, get the results of it at a discount, while the innovators are left with nothing. It is all a giant casino at the end of the day.

    In part it depends on the industry and how cut-throat it is. In some industries these innovators can make some money by being bought out; in other industries they are crushed and forced to sell for pennies.

    But all that effort required to use C++ safely has significant cost. Good developers aren't cheap, and time is money. When you can automatically remove whole classes of bugs that are both the biggest concern and the most time consuming to try to prevent, that will be a significant competitive advantage.

    And as C++ continues to die, it will get more expensive to continue to use it. There will be fewer and fewer good developers interested in maintaining legacy code bases. The tool companies will be less and less interested in pushing it forward for fewer and fewer users. That isn't going to be an issue now, but in 10 to 15 years it likely will start becoming significant. And that's not a long time in terms of code bases of this sort.

    And of course it may not be a 're'-write, it may just be a write. Everyone always acts like all this existing C++ code has to be rewritten by the people who own it. But in a lot of cases, those people will just be left sitting by the side of the road and other folks will build new systems from scratch that don't have all of the costs and compromises of rewriting an existing large code base, and who want to move the state of the art forward.

    Maybe that only works for new projects, but the future sure does tend to go on for a long time.

    It’s expected that rust will catch up to C++ in terms of we projects within 5-10 years.

    What is a "we project"?

    Good luck retraining all those C and C++ engineers to write rust. I like rust, but having programmed C and C++ for so long the syntax is very unintuitive for us.

    I’m always surprised when people get so attached to syntax. It’s far from the hardest thing to learn about a new language.

    It’s far from the hardest thing to learn about a new language

    This is C++ tho, a language with a specification thousands of pages long

    Change of any kind is a nightmare on large development teams; people are resistant to change. Even if it doesn't make any rational sense, it's just something that's true in real-world development.

    the explanation of floats is the same as how i understood floats when i finally took the time to actually try.

    Why?

    It's like spoken language. You get used to it, you memorize some patterns in it, and then you just build with those bricks.

    Changing to another syntax is like clearing the cache in a CPU. Expensive. Not many people like to put effort into something they could avoid.

    It's also something that some of us (I am us) struggle with. I'm fine with all the other concepts of programming, but syntax rarely stays in my head, and this is compounded by never having the luxury of spending a significant amount of time concentrating on one language. Mix this with enforced organisational coding styles for a given language and you have a recipe for just not getting it.

    I dare say this is one of the few good use cases for LLMs, turning my pseudocode into actual code with all the appropriate syntactical sugar.

    I sense you aren't an older programmer; I mean one with less than 10 years of full-time programming.

    The aspect I mentioned is that your code becomes a set of repeatable phrases, with a very specific pattern for a given language and a higher-level pattern for a given framework/library set.

    Yes, if you hop from project to project and do all sorts of apps, then you will not get that syntax lock, but you will also not code that much in comparison to a person who works with the same code base for longer.

    Imagine being an Oracle database engine developer, or a Linux kernel maintainer who creates a specific part of the kernel, or a Linux GUI maintainer (core KDE/GNOME/Wayland/X11, whatever).

    You are with that code for years. You stay consistent with specific, well-tested phrases, and the syntax becomes ingrained in your brain.

    Now you jump to another language and it forces you to use a different notation ( https://en.wikipedia.org/wiki/Conditional_(computer_programming) ). It may only be the bracketing, but it's enough to make a mistake and get the block wrong due to muscle memory etc.

    I am a strong opponent of LLM use, and from what I see, older senior programmers don't value it either, with the exception of cases like "make me code iterating over a folder structure and finding files matching this pattern", and then adjusting the poop the way it is desired, probably rewriting most of it with proper variable names and small detail touches here and there. This does not lift the burden of remembering the syntax of the language currently in use.

    Especially when Rust and C++ are so similar. The complaint could've made sense if Rust had ML or Erlang syntax.

    But I guess for the people who get hung up on syntax over actual language semantics, even slight molehills of syntax changes seem like mountains.

    Language shifts with large differences, e.g. Python and Golang, make it easier to flip the brain over, since your pattern-matching habits are obviously wrong.

    What do you do when things are so similar? It's pretty easy to get crosswise.

    Based on the number of people who seem to be comfortable with both C++ and Rust, it seems to not really be a common complaint?

    I think I'd be more wary of homographs—the differences in semantics are the interesting differences IMO. Syntax errors are more in the same category as typos; largely trivial to detect and fix, at least in the C++ → Rust direction.

    My day job is C++ but I love Rust and I don't feel like I get them mixed up. On the contrary, some Rust patterns do translate and improve my C++ code.

    And of course, by that argument, C++ would have been rejected as well. The revolutionaries inevitably become the conservatives.

    The syntax is job security. You're in effect asking them to abandon their job security. For better or worse.

    If syntax is your job security then you will be replaced by an LLM.

    I disliked Rust initially because of that, but didnt want to admit it

    You should want to learn it, it's awesome.

    When someone pays me to lol

    I have the opposite experience. About 10% of the good C and possibly fewer of the good C++ programmers around me would avoid Rust if given a chance.

    About 100% will complain about it though.

    That's more because C and C++'s syntax is unintuitive. Go, Rust, Zig, Kotlin, TypeScript, Python etc. all look fairly similar.

    I don't think syntax is the issue at all. I'd expect any C or C++ programmer to be able to pick up Rust. If Python and JavaScript devs can start Rust and be successful in it, so can C and C++ devs.

    TBH, I was scared of Rust because everyone said it's difficult. Turns out, with some C++ background it's not that hard to wrap your head around, but nobody tells you that. Not that I can ever hope to write high quality library level code but that is hard in C++ as well.

    They might do. It just takes time, as they move slowly. Many of those guidelines would not be necessary with Rust, because they are enforced by the language itself, and Rust also provides better ergonomics for error handling and other things.

    I used to write embedded C for safety-critical systems and was pushing for this 7 years ago. The answer was "well, C works, doesn't it?". And on top of that, you're usually reusing a framework that was written in C, specialised for your company's needs. It will probably take a new start-up to make the jump; I think Anduril are dipping their toes in the water.

    No way. DoD software changing anything in their process is an uphill battle. Anything currently written in C/C++ will remain that way. I can see new programs adopting Rust, though, if it's a brand-new code base.

    DoD software changing anything in their process is an uphill battle.

    DoD software is required to be secure, and as a result, sees a lot of maintenance updates, and even somewhat frequent rewrites, to maintain compliance.

    Some of the functions will be, but much of the F-35 runs on Green Hills INTEGRITY which has its own compiler and dev environment that only supports C, C++, and Ada.

  • 4.27 Fault Handling AV Rule 208 C++ exceptions shall not be used (i.e. throw, catch and try shall not be used.)

    How is it they handle exceptions/error handling then?

    I work for one of the big defense contractors, primarily on helicopters and mostly in C, but when it comes to C++, there's absolutely no use of the STL. We don't write or use code that ever throws. No RTTI, templates are discouraged, little use of inheritance. It's a very different kind of C++. So there are no C++ exceptions, period.

    For kernel/OS-type errors/faults, e.g. you tried to divide by zero, the RTOS will catch that, report it to our error/fault manager, and then we'll restart the partition the error occurred in, if it's something that truly can't be recovered from.

    However this kind of safety critical code is tested according to DO178C DAL A so generally speaking those kinds of errors would be detected long before then.

    "AV Rule 102 Template tests shall be created to cover all actual template instantiations"

    I can envision the programmers screaming as all their time savers are taken away from them...

    Oh yeah. There's a lot of quality-of-life features we just don't have access to.

    In my particular area, we also have to follow the FACE Technical Standard, which limits us to C99 and C++03 only. There's a lot of nice features I'd love to have but can't because of that.

    Why discouraged use of templates?

    Because templates create a lot of code behind the scenes.

    In DO178C, particularly DAL A, every single line of code must be traceable to both high- and low-level requirements. You need full coverage for every line of code, and MC/DC testing as well, where you verify every possible condition. When you use templates, the compiler is going to generate all that code for all the various template instantiations.

    That's a lot of hidden code that now has to be tested and verified to DO178C. It's just a lot more code paths, which makes your DO178C certification that much more difficult and expensive.

    It also can give static analyzers a harder time.

    So in general, not banned, but you need a good reason to want to use them. At least in my software domain.
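    To make the cost concrete, here's a minimal sketch (hypothetical code, not from any real flight program): one small template with three branches turns into three separately generated functions the moment it's instantiated with three types, and each instantiation is its own code that DO-178C coverage and MC/DC analysis must account for.

    ```cpp
    #include <cassert>
    #include <cstdint>

    // One template in the source...
    template <typename T>
    T clamp_to(T value, T lo, T hi) {
        if (value < lo) return lo;   // path 1
        if (value > hi) return hi;   // path 2
        return value;                // path 3
    }

    int main() {
        // ...but each distinct instantiation below is a separate generated
        // function whose branches all need their own coverage and testing.
        assert(clamp_to<int32_t>(5, 0, 3) == 3);
        assert(clamp_to<float>(-1.0f, 0.0f, 3.0f) == 0.0f);
        assert(clamp_to<uint8_t>(2, 0, 3) == 2);
        return 0;
    }
    ```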

    Because templates create a lot of code behind the scenes.

    We use templates in the government, and our security requirements are even higher.

    I like to hear you guys are using C, izz best

    Usually through something similar to this. i.e. explicit return values.
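    A minimal sketch of that explicit-return-value style (the enum and function names here are made up for illustration): the status travels in the return value, the result in an out-parameter, and nothing ever throws.

    ```cpp
    #include <cassert>
    #include <cstdint>

    // Hypothetical status codes; real projects define their own enums.
    enum class Status : int32_t { Ok = 0, DivByZero = 1 };

    // The result goes through an out-parameter; the return value carries
    // the status, which the caller is expected to check every time.
    Status safe_divide(int32_t num, int32_t den, int32_t& out) {
        if (den == 0) {
            return Status::DivByZero;  // no throw: report via the status code
        }
        out = num / den;
        return Status::Ok;
    }

    int main() {
        int32_t result = 0;
        assert(safe_divide(10, 2, result) == Status::Ok);
        assert(result == 5);
        assert(safe_divide(10, 0, result) == Status::DivByZero);
        return 0;
    }
    ```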

    Not allowing for exceptions. Ever. If possible.

    In modern programming it is possible that a network endpoint becomes unavailable, so you try to work around that by reconnecting, buffering data until the endpoint is available, etc.

    But that can be done with proper use of return values, or just coded in a way that if the connection is lost, all data and state are tossed away and a reconnect is attempted. Different logic/approach.

    Watched it and it resonates so well, as I wrote C++ code in the 90s. The memory pre-allocation was a necessity since we worked with very little memory back then, and recursion was a nice idea but not in a production environment. We also never used exception handling but a catch-all error return code - you do need to test all the function input params to make sure they're within range though, and it's a pain.

    Thanks for the link! I am a subscriber now. 😊

    It's confusing how you ask a valid question but get zero meaningful answers except the one referencing abseil. Given it's over 20 years old, I'd assume they use C-like error codes; nowadays you might use std::expected, but both contain lots of foot guns, sadly.

    Exceptions, like panics in Rust, are something to avoid/control because you want complete control over your code paths and allocations. I don't work in this area, but I assume they just return error codes from functions.

    Exceptions/panics are nice to have but are bad from a reliability standpoint. The recent Cloudflare outage was caused by Rust's analog to exceptions. The panic brought down a large chunk of the internet. It was good that it panicked, because it prevented, in that case, heap corruption. But obviously the panic itself caused huge reliability issues, which is something that I'm sure flight systems don't want. Both for flight systems and something like Cloudflare, stack-allocated error objects + handling them are better than out-of-band exceptions/panics.

  • I feel like OP might've watched the most recent LaurieWired video this week

  • People shit on C++ so much online, you'd think it's obsolete, but it's still used in mission-critical software to this day. Redditors would have you thinking that all of them would be using Rust instead lmao.

    You can write mission critical things in assembly or even binary.

    Everything in IT is about tradeoffs. I personally guarantee you that you could write this in Rust as well, but since you are purposely avoiding a large chunk of the language (memory allocation), the main benefits of Rust would simply not materialize.

    That's not the case for 99.9% of programming, though. If I can write code more quickly that is safer and more ergonomic (which, overall, Rust is), then C++ is obsolete.

    I would argue that the benefits of Rust go far beyond dynamic memory allocation. Just because you don't dynamically allocate memory doesn't mean you don't have lots of other problems that Rust makes far easier to deal with.

    So often the argument about C++ vs Rust comes down to memory and thread safety, and those are big deals, but there's SO many ways that it's superior to C++.

    Probably; I've never written Rust in my whole life :) so I've mostly heard about the memory allocation.

    I was just arguing about a single point that is both known to me and would be important enough for me to not use C++ in favour of Rust.

    Amongst others:

    1. Destructive move, by itself a huge win
    2. Immutable by default
    3. Strong built in slice and range support
    4. UTF8 strings
    5. Pattern matching
    6. Sum types
    7. Strong support for value types
    8. Automatic error propagation without exceptions
    9. A lot of functionality type stuff that really works
    10. No unsafe automatic conversions
    11. Enums are first class citizens
    12. Lots of convenient ways to avoid mutability at a workaday level (loops, match blocks, and scopes can all return a value, and the functional stuff helps a lot as well.)

    And a good number of others that I'm too fried at the moment to dredge up. A lot of C++ folks always chime in and say, but we have this one or that one, but they are always weak shadows of the Rust implementation because they are after the fact add-ons, where in Rust they are fundamentally supported.

    That was supposed to be 'A lot of FUNCTIONAL type stuff', not functionality type stuff. Words is hard, bro.

    It's not about allocations as much as it is about ownership - not having multiple mutable references to the same memory block, for example. That's still valid if you have a static memory map, I suppose, though I don't have much experience coding without a heap.

    but since you are purposely avoiding a large chunk of the language (memory allocation) then the main benefits of rust would simply not materialize

    What benefits of Rust are tied to memory allocation? That sounds just ... not right. In fact you have crates like heapless that are wildly popular in embedded use, for instance, that allow for containers to be used without any dynamic allocation whatsoever.

    One main benefit of Rust in safety-critical contexts is that the compiler enforces memory safety via ownership and borrowing (and thread safety via the Send and Sync traits); memory safety, however, is orthogonal to memory allocation. It applies just the same to static memory.

    In fact Rust’s designers made sure that the language doesn’t require implicit dynamic allocation even in contexts where C++ does, most notably async closures.

    It's still used because it was already used for a long time. Claiming a language is still relevant because of an installed code base is fine, but that's not a valid argument that it's still the best choice, particularly moving forward.

    I've written probably as much C++ as anyone here, and I'd NEVER use it if Rust was an option, ever.

    You can write safe software in any language if you spend a lot of resources on it. Most people don't want to go that far, and Rust is a good choice for that wide middle ground.

    The language itself has changed a lot since 2005. Yes, you can still shoot yourself in the foot with C++, but it's also possible to write much safer code.

    This comment is extremely, extremely dumb. Mission-critical and realtime systems demand far more than what even Rust alone provides. It has nothing to do with C++ or Rust. Even Rust requires standards for realtime and mission-critical systems, and those standards would look similar to C++'s and C's, such as avoiding allocations or controlling them to fit certain bounds, or banning panics/exceptions.

    With that said, Rust DOES solve most of the issues of C and C++. That much is a fact, whether or not your limited worldview agrees with it.

    Notice how it takes detailed programming standards like these to prevent the kinds of mistakes that C-family languages are known for.

    The purpose of languages like Rust is to make some of these standards unnecessary because some of the things you can get wrong in older languages will either not compile at all or require actively acknowledging that you're doing unsafe things which keeps the surface area for those classes of problems constrained to those unsafe areas.

    If we could wave a magic wand and get this document converted perfectly to an equivalent standard for Rust it would almost certainly be shorter.

  • C++ exceptions shall not be used (i.e. throw, catch and try shall not be used.)

    That's reassuring.

    Edit: I'm being serious. I don't trust anyone who uses exceptions in their cpp code.

    Definitely not sarcastic. Every well-designed cpp codebase I've seen prohibits the use of exceptions at the compiler level.

    While I do see their value, working with a higher-level language I've come to think of checked exceptions as a complete mistake. Exceptions should be exceptional. "File not found" or "file malformed" is *not* an exception; this is a normal execution path that shouldn't effectively be goto'd to a catch.

    That being said, it's a losing battle.

    I use exceptions to handle virtual CPU interrupts/exceptions in my VM.

    They're literally the most appropriate tool for the job, since I always want to unwind to the VM's entry point (or to virtual handler), exceptions are rare, and there'd be needless overhead with a ton of branches checking for very rare exception cases everywhere.

    Are you writing your own hypervisor? If you're not using traps/interrupts, you're doing something very unusual. Traps are not the same as C++ exceptions.

    Are you writing your own hypervisor?

    No. It's fully a software VM, emulating MIPS32r6. It isn't going through a hypervisor as the requirements don't allow for it (the intent is to allow thousands of VMs to run concurrently. A full hypervisor for each would have a lot of overhead). It would also make portability much more difficult - it can run even in your browser as asm.js (and probably WASM) via Emscripten.

    If you're not using traps/interrupts, you're doing something very unusual. Traps are not the same as C++ exceptions.

    Whenever the emulated CPU throws an exception (such as RI, AdEL, AdES, TRP, etc), it is emitted as a C++ exception, so that it unwinds to the tick entry point. These exceptions can come from many places (interpreter, JIT, random internal functions that can except due to accessing memory or such), but they all go to the same place. Implementing them as C++ exceptions makes sense.

    The dirty point is that when a guest program exits, it is also implemented as a C++ exception, but I'm fine with that as that's a very rare case still (no more than once per execution of a single VM) and follows the same logic as a CPU exception.

    There is some weird logic at JIT boundaries to assist with passing exceptions across it as it's very difficult to write unwind logic for JITs portably, though.

    If I weren't using exceptions, I'd need a ton of branches everywhere to check for CPU exception state - a very rare circumstance. Using exceptions internally in this case keeps the code a bit smaller and reduces overhead a bit.

    Why do you want to unwind to the main entrypoint? Don't you just need to do some bookkeeping and then resume execution?

    No, the VM doesn't care about the exception. Worst case, the guest has no exception handlers, in which case it returns as an error to whoever called tick (they can do what they will with it). Best case, the guest has exception handlers... but setting up the calls for that state specifically is done at a much higher level than anything else as well - the entry point is as good as any.

    If we just resumed execution, we would just infinitely get such exceptions. The PC doesn't change on exception, nor is CPU state mutated. So, we'd just get the same exception again.

    If the caller wants to handle the exception in some way (changing virtual memory mappings, changing registers, or something) and then resume, they can... but that's outside the purview of the VM. Guest handlers, as said, also need to be (well, should be) called from a higher level than the JIT or interpreter as well. I could make calling into it that way from the JIT possible, but I don't think that the complexity would be worthwhile.

    For the most part, CPU exceptions are rare, potentially-resumable error states to the VM (just like on a CPU).


    Edit:

    When the VM is set up, users call into it via tick, where they can optionally specify how many ticks to execute prior to returning. State is maintained. tick(0) runs until an error or a breakpoint (gdb/lldb breakpoints are special and are handled internally). tick(10) executes 10 cycles and then returns. If there's an error, it also returns how many cycles ran.

    The user can freely inspect or mutate VM state while it isn't running.

    The internal exception system is just a shortcut to pass these error states up to the top without needing a lot of state-handling branches. I'd just be unwinding anyways with more steps. There's a complication at the JIT boundaries, but those exist with C-style error states as well.
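    The pattern described above can be sketched roughly like this (all names here are hypothetical, invented for illustration, not the poster's actual code): a guest CPU fault deep inside the interpreter is modeled as a C++ exception that unwinds straight to the single tick entry point, where it is turned back into an ordinary error value for the caller.

    ```cpp
    #include <cassert>
    #include <cstdint>

    // Hypothetical fault kinds, loosely modeled on MIPS exception codes.
    enum class GuestFault : uint8_t { None, AdEL, RI };

    // Thrown from deep inside the interpreter/JIT helpers.
    struct CpuException {
        GuestFault fault;
    };

    uint32_t load_word(uint32_t addr) {
        if (addr % 4 != 0) throw CpuException{GuestFault::AdEL};  // misaligned load
        return 0;  // a real VM would read guest memory here
    }

    // tick() is the single unwind target: every fault lands here and is
    // reported as a plain error value, with guest PC/state left untouched.
    GuestFault tick(uint32_t addr) {
        try {
            load_word(addr);
            return GuestFault::None;
        } catch (const CpuException& e) {
            return e.fault;
        }
    }

    int main() {
        assert(tick(4) == GuestFault::None);   // aligned load: no fault
        assert(tick(3) == GuestFault::AdEL);   // misaligned: fault surfaces at tick
        return 0;
    }
    ```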

    Yeah even the C++ committee seems to understand this. So they came up with std::expected, the dollar store version of Rust's Result.
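    For anyone who hasn't seen it, a minimal sketch of std::expected (requires C++23; the parser here is a made-up example): the error travels in-band as a value, with no throw/catch anywhere.

    ```cpp
    // Requires C++23 for <expected>.
    #include <cassert>
    #include <expected>
    #include <string>

    enum class ParseError { Empty, NotANumber };

    // Like Rust's Result<i32, ParseError>: success and failure are both
    // ordinary return values the caller must inspect.
    std::expected<int, ParseError> parse_int(const std::string& s) {
        if (s.empty()) return std::unexpected(ParseError::Empty);
        int value = 0;
        for (char c : s) {
            if (c < '0' || c > '9') return std::unexpected(ParseError::NotANumber);
            value = value * 10 + (c - '0');
        }
        return value;
    }

    int main() {
        assert(parse_int("42").value() == 42);
        assert(!parse_int("4x").has_value());
        assert(parse_int("").error() == ParseError::Empty);
        return 0;
    }
    ```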

    Edit: I'm being serious. I don't trust anyone who uses exceptions in their cpp code.

    I’m curious, how do you communicate failure of a constructor?

    Factory or builder pattern.

    Ah ok, your C++ will look a lot like Rust then. ;)
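    A quick sketch of that factory approach (the class and its bounds are hypothetical, just for illustration): the constructor is private so a half-valid object can never exist, and the static factory reports failure through std::optional instead of throwing.

    ```cpp
    #include <cassert>
    #include <optional>

    class Altitude {
    public:
        // Failure is signaled by std::nullopt rather than an exception.
        static std::optional<Altitude> create(int feet) {
            if (feet < 0 || feet > 60000) return std::nullopt;  // reject out-of-range
            return Altitude(feet);
        }
        int feet() const { return feet_; }

    private:
        explicit Altitude(int feet) : feet_(feet) {}  // private: only create() builds one
        int feet_;
    };

    int main() {
        assert(Altitude::create(35000).has_value());
        assert(Altitude::create(35000)->feet() == 35000);
        assert(!Altitude::create(-5).has_value());  // invalid input never constructs
        return 0;
    }
    ```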

  • Ah, quite reminds me of ...

    They Write the Right Stuff .. uhm, alas, looks like that's now paywalled or the like ... but one can read an earlier version on The Internet Archive / Wayback Machine.

  • It's good and valid that code used in such a dangerous thing is so strict, but god, working under this looks like torture. How much do they pay the devs willing to put up with such strictness?

    I wrote C++ code in the early 90s to drive DRAM burn-in ovens. While not as life-threatening as JSF code, a load of over-cooked DRAM chips is not a good thing. Memory was restrictive, so we used a lot of pre-allocated arrays, which helped in memory overflow/out-of-memory situations, and did not use exception handling, but handled out-of-bounds values with an error return.

    I certainly don’t find it restrictive to code within the constraints.

    It's honestly surprisingly freeing when you really hard-limit yourself on some of these things. You just develop new patterns and stick to them. Some things become more annoying, but others become easier.

  • Now the question is, how do they follow it. I didn't read it through, but assume that some of it can't be automatically checked.

    Anything that can’t be automated should be checked for via peer review prior to merging branches to the mainline.

    In practice, some poor bastard has to sit with the coding standard and review the codebase prior to an audit or release because things always get through.

    I'm guessing multi-tier peer review and testing.

  • Is there a publicly available tool that can check for conformance/compliance with this standard when given your own code?

  • This page intentionally left blank

    You know shit's serious when you have to guard against printer errors.

  • How to say Liskov without saying Liskov:

    If D is a subtype of B, then instances of type D will function transparently in any context in which instances of type B can exist. Thus it follows that all base class unit-level test cases must be inherited by the test plan for derived classes. That is, derived classes must at least successfully pass the test cases applicable to their base classes.

  • AV Rule 43 is highly controversial.

  • Is there a C version for space and aerial vehicles?

  • Why don't these standards mention anything about concurrent programming? Seems like a major source of issues that would require a lot of careful standards to address properly.

    Maybe it was too ridiculous to even mention it? Multi-threading was still a novelty in the PC space back in 2005.

    In embedded maybe, not in the PC space in general. OS/2 had threads in 1988, and Windows NT had them whenever it came out, which looks to be 1993.

    To solve those issues, don’t use it.

    Yeah or use Rust, where passing things that are not thread safe across thread barriers is a compile-time error.

  • A few years ago I would have found this post fascinating. However hearing those jets constantly, the psychological torment of wondering whether that was an explosion or just breaking the sound barrier, seeing people being killed constantly by these things changes your perspective on that. These things are killing machines that are being weaponized against civilians, I am very much not interested in the engineering practices involved in building them, nor should anyone be.

  • Good old AV Rule 70, no Friends or your code stinks

  • It's funny that the document looks like the default styling for Microsoft Word in 2003.

  • Object-oriented design and implementation generally support desirable coupling and cohesion characteristics. The design principles behind OO techniques lead to data cohesion within modules.

    That’s not a given. When I observe actual code, OO seems to have relatively little influence over coupling and cohesion. And predictably, applying it blindly tends to hurt coupling and cohesion. Especially inheritance, which thickens interface boundaries, the exact opposite of what you want to reduce coupling.

    Clean interfaces between modules enable the modules to be loosely coupled.

    Small interfaces between modules. By which I mean relatively small: as John Ousterhout aptly said, classes should be deep: small interface, significant intrinsic functionality behind it. There’s little point giving a class a tiny interface if behind the scenes it’s a mere redirection. The only justification I could find for these so far was compatibility layers.

    Moreover, data encapsulation and data protection mechanisms provide a means to help enforce the coupling and cohesion goals

    To some extent. In a team with some careless people. At least they did not specifically point out private data members. They’re probably aware that a pointer to implementation provides even better isolation, and helps preserve binary compatibility (ABI stability).