Tanveer Salim

The Problem with Rust


Outline

  1. The US White House has recommended Rust to replace C/C++ codebases.

    1. Primary Motivation: Most codebase vulnerabilities are memory safety bugs.

  2. The Problem with Switching to Rust

    1. In a Nutshell: Far too many codebases we rely on have been written in C/C++ for decades.

      1. Cryptographic Libraries: The most difficult to write.

        1. Some hope: Rustls

      2. DNS Software: Even the US Government Hosts Its Own DNS Root Servers

        1. GNU/Linux is mostly written in C and the Linux Project is struggling to apply Rust

          1. Rust for Linux is an attempt to publish Linux kernel code as Rust.

          2. Primary motivation: Memory safety issues as White House points out.

          3. Problem #1: Most Linux Kernel Developers don't know Rust

          4. Problem #2: Rewriting drivers and documentation wastes developer time.

    2. Development Teams are struggling to switch to Rust from C, including GNU/Linux

      1. Switching from C to Rust is a big ask. It's one of the things you should never do.

      2. Switching to Rust for Linux would require developers to rewrite drivers--wasting time.

        1. Wedson Almeida, a Microsoft Software Engineer, quit the Rust for Linux project from frustration with nontechnical issues such as these.

      3. A lot of Linux Documentation has to be rewritten from scratch--no software engineer gets paid just for documentation work.

        1. Rust's API system works very differently from C's.

      4. All major operating systems, commercial or not, are built on C/C++.

  3. AI is Not Your Savior

    1. AI is not yet advanced enough to audit codebases up to that level.

      1. Human errors in logic would be difficult for the AI to catch.

      2. Too Many false positives.

      3. A report by Snyk finds 56.4% of respondents admit to finding flaws in AI code suggestions.

        1. Most AI-generated answers to coding questions were horribly wrong in the first place.

        2. Believe it or not: most developers are incapable of spotting AI insecure suggestions.

        3. The topic of using AI for source code audits deserves its own video.

      4. Disclaimer: All I am saying is that it's not ready yet.

  4. No One Audits Open Source Software

  5. Cost-Effective Solution: Formal Verification of Software for Security

The US White House Recommends Rust to Replace C/C++. Why? Before I explain, I will quickly review the biggest issues the US Federal Government faces in software security and how we got where we are today.


On February 26, 2024, the US White House published a report explaining the severity of memory-based vulnerabilities in US spacecraft. As the report points out, for over 35 years the same kinds of memory vulnerabilities have left US space and aircraft vulnerable to exploitation by foreign actors.


US machines have repeatedly faced memory exploits that took down entire networks: the Morris Worm of 1988, the Slammer Worm of 2003, the Heartbleed vulnerability in OpenSSL in 2014, and the recent BLASTPASS exploit chain in 2023.


What's interesting is that the US Federal Government is most concerned about protecting spacecraft from cyberattackers. Not what's already on land. Now let's take a step back here. Why is that?


If you read documents such as the USSF Commercial Space Strategy and the US Annual Threat Assessment Report of 2024, you will notice the US frequently mentions both Russia and China as threats to US spacecraft.


The United States Federal Government is most concerned that Russia and China, two nations whose military forces rival that of the US, will become the dominant military force in outer space. If you look at the job postings of companies such as SpaceX and Northrop Grumman, you will notice these companies work with the US government to help protect the secrecy of military intelligence--a must-have to win any war.


Nations such as Russia and China have the manpower and military intelligence to disrupt or forcibly take down US satellites that play this crucial role.


And the fact that the US is complaining about memory exploits is a clear giveaway that memory bugs continue to be one of the biggest security vulnerabilities the US federal government is forced to deal with. According to research reported by the US White House and companies including Microsoft, up to 70% of reported vulnerabilities in CVE databases are memory bugs.


If the US is so concerned about these bugs, why are we still relying on the same old coding languages that let such vulnerabilities crop up so easily? The answer has to do with legacy support. The C programming language was and still is the de facto standard for building production-ready operating systems--the software that allows the hardware and applications of your machine to work together to complete all your tasks. Since C became the de facto standard, software that quickly performed critical tasks--like managing your spacecraft--was also written in C. Your operating system, whether it's Windows, GNU/Linux, or a variant of BSD including macOS, is still built on this technology!


As the US White House points out, the US and other nations have been using C to write our computer networking stacks, our operating systems, and our cryptographic libraries. So although it is very easy to accidentally create memory bugs in languages such as C/C++, it is very hard to forego the speed benefits and legacy support they bring.


In response to this issue, the CISA has released a report titled "The Case for Memory Safe Roadmaps" that counsels software development teams on making their codebases more secure. To be honest, when I read this report, most of its recommendations were conceptual guidelines--leaving it to managers to figure out how best to follow the advice for their project.


Some of the advice the CISA recommends is, unfortunately, broadly worded.


Since the effectiveness of the advice given in the Roadmap is hard to quantify, it will be difficult to estimate how effective it really will be.


Most product managers probably will not be able to put the advice into action.


One of the worst pieces of advice the article gave was to bring university CS education up to date with modern software security.


If you are a CS student or have a degree in CS, when was the last time your instructors educated you in software security? That's right: NEVER.


Even the writers of the Roadmap are aware of this unfortunate fact. Is it any wonder that there has been little to no improvement in software security? We are not taking the time to train younger software developers in the secure coding habits that would prevent the software bugs the US White House complained about this past year!


The CISA Roadmap and the US White House claim the most promising defense in the meantime is using a memory-safe language such as Rust. The compilers of these languages are designed to catch such bugs or make them altogether impractical. To the CISA's credit, languages such as Rust are designed to prevent four recurring problems:


  1. Software Crashes Due to Memory Vulnerabilities: It is common for applications written in unsafe languages such as C/C++ to crash due to stack overflow errors.

  2. Memory Bugs: Remember that 70% of reported vulnerabilities are memory bugs. So fewer memory bugs means less time wasted finding and resolving such bugs.

  3. Arbitrary Code Execution: Memory bugs can allow cyberattackers to perform unauthorized execution of their software on victim machines. Once an attacker succeeds in this, they can often execute any program they want on your machine--including gaining a root-access shell.

  4. Concurrency Vulnerabilities: Deadlocks can hang servers, and race conditions can corrupt data or crash them.
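To make the memory-safety contrast concrete, here is a minimal Rust sketch; the function name `checked_get` is my own, for illustration. Where a C indexing bug silently reads adjacent memory, Rust either refuses to compile the offending code or fails safely at runtime.

```rust
// Bounds-checked lookup: where C's `arr[i]` would silently read
// adjacent memory, Rust's `slice::get` returns None for a bad index.
fn checked_get(data: &[i32], i: usize) -> Option<i32> {
    data.get(i).copied()
}

fn main() {
    let data = vec![10, 20, 30];
    assert_eq!(checked_get(&data, 1), Some(20)); // valid index
    assert_eq!(checked_get(&data, 99), None);    // no crash, no corruption

    // Use-after-free is a *compile-time* error. If uncommented, the
    // borrow checker rejects the lines below (error[E0505]):
    // let r = &data[0];
    // drop(data);
    // println!("{}", r);

    println!("memory-safety checks passed");
}
```

Direct indexing (`data[99]`) is still caught, but with a deterministic panic instead of silent memory corruption.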


    All of these are valid points the CISA has brought up.
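The concurrency point deserves its own sketch. In the hypothetical counter below, Rust will not even compile an unsynchronized shared mutation across threads; the type system forces `Arc` (shared ownership) plus `Mutex` (mutual exclusion), ruling out the data race by construction.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Increment a shared counter from several threads. Without Arc + Mutex
// the compiler rejects the program, so the C-style data race cannot
// be written by accident.
fn parallel_count(n_threads: usize, per_thread: usize) -> usize {
    let counter = Arc::new(Mutex::new(0usize));
    let mut handles = Vec::new();
    for _ in 0..n_threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..per_thread {
                *counter.lock().unwrap() += 1; // exclusive access enforced
            }
        }));
    }
    for handle in handles {
        handle.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    // With a plain shared int, this pattern is a classic race in C;
    // here the total is always exact.
    assert_eq!(parallel_count(4, 1_000), 4_000);
}
```

One caveat worth noting: Rust's guarantees cover data races, not deadlocks; acquiring locks in inconsistent orders can still hang a program.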


    Problems with Rust


    That said, there is a bunch of advice the CISA gives that I find unrealistic. Even the CISA report admits C/C++ are unfortunately the most common languages. Our operating systems, compilers, DNS software, reverse proxies, the Internet, and our cryptographic software continue to be written in C/C++. And that's why most employers want candidates trained in C/C++. However, those same companies often pay no attention to secure coding practices for these languages, such as the CERT C Coding Standard, since learning them is a major time commitment. The CISA admits neither most universities nor most businesses train their people in secure coding. However, the CISA's recommendation alone will not change universities' minds about their curriculum. As long as universities can boast their graduates get hired at prestigious companies, that's good enough for them.


    The CISA also admits most professors trained in C/C++ would have to accept the burden of learning a new memory-safe language such as Rust. It takes years to be proficient enough at a language to satisfy industry demands.


    GNU/Linux Continues to Struggle to Switch to Rust


    To further illustrate why asking developers to switch to Rust will be difficult, let's consider how attempts to integrate Rust code into GNU/Linux are going.


    The GNU/Linux project is the largest free software project in the world. Throughout its history the GNU/Linux kernel has been written in C, just like any other operating system. Recently GNU/Linux developers have attempted to introduce Rust kernel code.


    Rust for Linux


    Rust for Linux is the official name of the community project to add support for the Rust language in the GNU/Linux kernel. As the US White House recommends, this is being done to reduce the kernel's attack surface. However, most kernel developers do not know Rust and are, for now at least, not willing to learn it. Veteran C developers have become so insistent that the kernel continue to be developed in C that Linus Torvalds admits the Rust vs. C debate has become a political one. Linus suggests this may be because Rust is a more complex language to grasp than C--which is famous as a terse yet powerful language.


    Linux Developers Quit in Frustration


    The language wars for GNU/Linux have become so heated that even prominent developers have quit. Wedson Almeida, a Microsoft engineer, quit the project after getting tired of other Linux developers' reluctance to make the change to Rust. A fellow developer, Asahi Lina, observes that some developers seem unconcerned with the security benefits Rust offers and emotionally attached to the language they grew up coding in.


    Reinventing the Documentation


    A hidden downside developers have discovered in switching to Rust is the need to revise documentation that has already been published. Switching languages changes how you approach solving a problem--this in turn changes how the code works and, therefore, how to explain how the code works. Josef Bacik, a developer on the Linux virtual filesystem, had to revise 65 pages of documentation on the filesystem.


    There is a major problem with that--no one can earn a living just writing documentation. If you want to earn a living on the GNU/Linux project--code! This is why GNU/Linux kernel documentation was already esoteric before the transition to Rust began. And now, with the switch to Rust, developers are challenged to decipher legacy Linux code and document it more lucidly for incoming Rust developers. This is why Joel Spolsky said YOU SHOULD NEVER REWRITE YOUR CODE BASE FROM SCRATCH--EVER!!! To understand why, look at Netscape--it cost them their business.


    As Joel wisely points out, it's not that the code itself is a mess--it's that it is naturally harder to read someone else's code than to write your own from scratch. Although that's annoying, it is better to take the time to read someone else's code and tweak it than to rewrite it from scratch. The legacy code may be annoying--but it has been tested in the wild--something new code cannot easily boast.


    I will simply quote what Joel said here:


    "When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes.


    Years of programming work.


    It’s important to remember that when you start from scratch there is absolutely no reason to believe that you are going to do a better job than you did the first time...you’re just going to make most of the old mistakes again, and introduce some new problems that weren’t in the original version."


    --Joel Spolsky


    Since it's mentally painful to reread someone else's legacy code--someone who may or may not be available for contact--we can now start to see why many developers are resisting Rust like a firewall--it's a hassle to learn how someone else thinks.


    Rust Still Deserves a Place in GNU/Linux


    Allow me to be clear: I am not saying Rust should not exist in GNU/Linux. It should, to reduce the attack surface as the US White House recommends. But thinking that most of the GNU/Linux legacy code should be replaced with Rust code is a dangerous mistake--it could cost Linux its legacy.


    AI is Not Your Savior


    Recently software developers have begun using AI tools to solve software issues for them. For now, AI is not yet advanced enough to audit our source code for us. It's just a tool to help an experienced auditor do their job. I am not saying it will never be able to perform source audits independently! But for now, AI gives far too many false reports of insecure code to be efficient. What's tricky about using AI is that computers are engineered to comply with Boolean logic. But real humans make irrational decisions. It's a part of being human!


    The AI Market Hype is Real


    Despite AI's imperfections, the AI market hype is real. In a statistical survey by Snyk, many developers reported using AI tools to help audit their source code. Unfortunately, people trust the AI too much. In Snyk's survey of 535 teams, 97% of teams attempted to use AI tools. The allure is simple: AI tools advertise less development time to launch software-based products.


    But as they say--there is an inverse relationship between speed and security. 91.6% of the respondents in Snyk's survey admitted the AI tools gave wrong advice in some cases.


    This is why respected developer Q&A forums such as Stack Overflow have banned answers generated by tools such as ChatGPT: for now, the accuracy of AI tools is too low to be safe to use. Researchers from prestigious universities such as Stanford and New York University have drawn similar conclusions.


    Laziness Rules the Day


    Despite scientific reports such as these, Snyk reports that 75.8% of people interviewed actually believe AI-generated code is more secure than hand-written code. And the ironic truth is that the respondents in Snyk's survey believe this even though 56.4% of them admit to identifying flaws in code suggested by the AI. Companies acknowledge this bias and have standardized policies restricting the extent to which developer staff may rely on AI tech. Yet even worse--79.9% of developers often break their company's policy: laziness and development speed trump good security sense. Yes, the average public has no software security sense. Do not trust popular opinion when it comes down to your digital security. You. Will. Get. Shot!


    At this point AI is simply allowing developers to finish their code faster--to say nothing of whether the code is correct, let alone secure.


    No One Audits Open-Source Software


    If you are reading this you may have heard of the free software or open source software movement. A common advertised benefit of open-source software is that since the software is available to the public it has received much more scrutiny than closed-source software.


    I am sorry to break it to you: that's not true! Chapter 4 of Building Secure Software admits that just because a software repository is open source does not mean people will take the time to review its source code.


    The book sums up the problem with open source software better than I can:


    "Because of the economic incentives, people often trust that projects that are open source are likely to have received scrutiny, and are therefore secure. When developers write open source software, they usually believe that people are diligently checking the security of their code, and, for this reason, come to believe it is becoming more and more secure over time. Unfortunately, this bad reasoning causes people to spend less time worrying about security in the first place. (Sounds suspiciously like a penetrate-and-patch problem, doesn’t it?)" -- Building Secure Software, Chapter 4


    Usually the people who take the time to read your code do it for one reason: to tweak the code to do what they want--and that doesn't mean they cared about security the entire time.


    Remember, buffer overflows continue to plague codebases--source-available or not--to this day!


    Cost-Effective Solution: Formal Verification of Software for Security


    By now you should understand the severity of the situation: far too much software that the world relies on is notoriously insecure and would be too burdensome or risky to rewrite in a memory-safe language. Your developers will most likely not be willing to take the time and energy to learn the new memory-safe language of the decade, even though it would make the project more secure, because that's a BIG ASK. You can learn the syntax of any language. Getting the logic across is the hard part. Your development team will most likely not audit the codebase--or even care about security in the first place. AI is not as effective for software security as advertised, for now. And just because you slapped a free software license on your code does not grant it the status "secure".


    So what do we do?


    Formal Verification of Software


    Software developers are increasingly applying formal verification to ensure their source code is secure. Formal verification in software development is the act of mathematically proving one's software is correct. Its history in modern software development begins with the invention of the ML (Meta Language) programming language. As the author Michael R. Clarkson points out, ML was invented for proving mathematical theorems. ML influenced several languages--OCaml being the most used in industry.


    OCaml and Coq for Formal Verification of Software Security


    Software developers have used OCaml to develop tools that prove software is secure against common exploits. Companies such as Amazon, Cloudflare, Google, and others have written programs to formally verify their software is secure. Cryptographic developers rely on formal verification tools to ensure their cryptographic software is safe from cracking.


    One OCaml project known as MirageOS challenged hackers around the world to hack its unikernel implementation--promising a stash of Bitcoin hidden in the unikernel as a reward. No hacker ever succeeded in breaking through the formally verified software. Formal verification can automatically audit a code base for security flaws, similar to how a compiler detects syntax errors in a code base.


    Formal verification is perhaps the only practical scientific technique for fully or partially automating the audit of legacy code bases. The list of industries that have used it in commercial projects is a testament to that.


    The OCaml language is currently a foundation for developing software that can be mathematically proven to be secure. It is the language that Coq, a famous proof assistant for mathematical theorems, is written in. Many tech companies have used tools such as Coq and OCaml to verify their software is secure against known attacks.
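To give a flavor of what "mathematically proving software correct" looks like in practice, here is a toy machine-checked proof. I sketch it in Lean rather than Coq (the two proof assistants are comparable for this purpose), and the names `clampIndex` and `clampIndex_in_bounds` are my own:

```lean
-- A tiny verified property: clamping an index into a nonempty buffer
-- never produces an out-of-bounds value. The proof is checked by the
-- machine, not by a human auditor.
def clampIndex (i n : Nat) : Nat := min i (n - 1)

theorem clampIndex_in_bounds (i n : Nat) (h : 0 < n) :
    clampIndex i n < n := by
  unfold clampIndex
  omega
```

Once such a theorem is stated, the proof assistant refuses to accept the development until the property actually holds--which is the sense in which formal verification "audits" code automatically.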


    Jane Street is one of the most famous advocates of OCaml. TrustInSoft is a company that offers software to audit your C/C++ code for common security bugs. Cryptographic developers use formal verification tools such as Coq (itself written in OCaml) or the Z3 SMT solver to verify their cryptographic software is secure. Letting formal verification tools do the dirty work of finding security bugs saves everyone the time and hassle of auditing their codebases manually.


    Remember, your organization's code base grows larger with time, not smaller. So it's unreasonable to think the burden on developers of reviewing source code for security gets easier with time. Software tools designed to help verify that software is secure are a reasonable solution. There are a few downsides to this. Few people invest the time and energy to learn formal verification. Yet companies continue to invest in such talent--so there is a growing market for it. It is my suspicion that people will ignore the White House's advice to switch to memory-safe languages--Rust does not account for the fact that your organization already has a pile of legacy code that is difficult to replace. A more viable solution is to build software tools that verify such software is secure.





