Migrating away from Rust
deadmoney.gg | 731 points by rc00 a year ago
Another failed game project in Rust. This is sad.
I've been writing a metaverse client in Rust for almost five years now, which is too long.[1] Someone else set out to do something similar in C#/Unity and had something going in less than two years. This is discouraging.
Ecosystem problems:
The Rust 3D game dev user base is tiny.
Nobody ever wrote an AAA title in Rust. Nobody has really pushed the performance issues. I find myself having to break too much new ground, trying to get things to work that others doing first-person shooters should have solved years ago.
The lower levels are buggy and have a lot of churn
The stack I use is Rend3/Egui/Winit/Wgpu/Vulkan. Except for Vulkan, they've all had hard-to-find bugs. There just aren't enough users to wring out the bugs.
Also, too many different crates want to own the event loop.
These crates also get "refactored" every few months, with breaking API changes, which breaks the stack for months at a time until everyone gets back in sync.
Language problems:
Back-references are difficult
"A owns B, and B can find A" is a frequently needed pattern, and one that's hard to do in Rust. It can be done with Rc and Arc, but it's a bit unwieldy to set up and adds run-time overhead.
There are three common workarounds:
- Architect the data structures so that you don't need back-references. This is a clean solution but is hard. Sometimes it won't work at all.
- Put everything in a Vec and use indices as references. This has most of the problems of raw pointers, except that you can't get memory corruption outside the Vec. You lose most of Rust's safety. When I've had to chase down difficult bugs in crates written by others, three times it's been due to errors in this workaround.
- Use "unsafe". Usually bad. On the two occasions I've had to use a debugger on Rust code, it's been because someone used "unsafe" and botched it.
Rust needs a coherent way to do single ownership with back references. I've made some proposals on this, but they require much more checking machinery at compile time and better design. Basic concept: works like "Rc::Weak" and "upgrade", with compile-time checking for overlapping upgrade scopes to ensure no "upgrade" ever fails.
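For readers who haven't tried it, here is a minimal sketch of the Rc/Weak workaround being described; the Node type and its fields are purely illustrative:

    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    // Illustrative node type: the parent owns its children, and each child
    // points back to its parent through a Weak reference.
    struct Node {
        name: String,
        parent: Weak<RefCell<Node>>,      // back-reference; does not keep the parent alive
        children: Vec<Rc<RefCell<Node>>>, // owning references
    }

    fn main() {
        let parent = Rc::new(RefCell::new(Node {
            name: "parent".into(),
            parent: Weak::new(),
            children: Vec::new(),
        }));
        let child = Rc::new(RefCell::new(Node {
            name: "child".into(),
            parent: Rc::downgrade(&parent), // set up the back-reference
            children: Vec::new(),
        }));
        parent.borrow_mut().children.push(Rc::clone(&child));

        // Following the back-reference needs upgrade(), which can fail at run time.
        if let Some(p) = child.borrow().parent.upgrade() {
            println!("child's parent is {}", p.borrow().name);
        }
    }

The Weak back-reference avoids a reference-count cycle, but every traversal pays for the run-time counts and an upgrade() that can return None, which is exactly the failure mode the compile-time checking proposal above would rule out.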
"Is-a" relationships are difficult
Rust traits are not objects. Traits cannot have associated data. Nor are they a good mechanism for constructing object hierarchies. People keep trying to do that, though, and the results are ugly.
I'll caveat my remarks with this: although I have studied the Rust specification, I have not written a line of Rust code.
I was quite intrigued with the borrow checker, and set about learning about it. While D cannot be retrofitted with a borrow checker, it can be enhanced with it. A borrow checker has nothing tying it to the Rust syntax, so it should work.
So I implemented a borrow checker for D, and it is enabled by adding the `@live` annotation for a function, which turns on the borrow checker for that function. There are no syntax or semantic changes to the language, other than laying on a borrow checker.
Yes, it does data flow analysis, has semantic scopes, yup. It issues errors in the right places, although the error messages are rather basic.
In my personal coding style, I have gravitated towards following the borrow checker rules. I like it. But it doesn't work for everything.
It reminds me of OOP. OOP was sold as the answer to every programming problem. Many OOP languages appeared. But, eventually, things died down and OOP became just another tool in the toolbox. D and C++ support OOP, too.
I predict that over time the borrow checker will become just another tool in the toolbox, and it'll be used for algorithms and data structures where it makes sense, and other methods will be used where it doesn't.
I've been around to see a lot of fashions in programming, which is most likely why D is a bit of a polyglot language :-/
I can also say confidently that the #1 method to combat memory safety errors is array bounds checking. The #2 method is guaranteed initialization of variables. The #3 is stop doing pointer arithmetic (use arrays and ref's instead).
The language can nail that down for you (D does). What's left are memory allocation errors. Garbage collection fixes that.
As discussed multiple times, I see automatic resource management (written this way on purpose), coupled with effects/linear/affine/dependent types for low-level coding, as the way to go.
At least until we get AI driven systems good enough to generate straight binaries.
Rust is to be celebrated for bringing affine types into the mainstream, but it doesn't need to be the only way; productivity and performance can coexist in the same language.
The way Ada, D, Swift, Chapel, Linear Haskell, and OCaml (with effects and modes) are being improved already shows the way forward.
Then there is the whole field of formal verification and dependently typed languages, but that goes even beyond Rust in what most mainstream developers are willing to learn, and the development experience is still quite rough.
So in D, is it now natural to mix borrow checking and garbage collection? I think some kind of "gradual memory management" is the holy grail, but like gradual typing, there are technical problems
The issue is the boundary between the 2 styles/idioms -- e.g. between typed code and untyped code, you have either expensive runtime checks, or you have unsoundness
---
So I wonder if these styles of D are more like separate languages for different programs? Or are they integrated somehow?
Compared with GC, borrow checking affects every function signature
Compared with manual memory management, GC also affects every function signature.
IIRC the boundary between the standard library and programs was an issue -- i.e. does your stdlib use GC, and does your program use GC? There are 4 different combinations there
The problem is that GC is a global algorithm, i.e. heap integrity is a global property of a program, not a local one.
Likewise, type safety is a global property of a program
---
(good discussion of what programs are good for the borrow checking style -- stateless straight-line code seems to benefit most -- https://news.ycombinator.com/item?id=34410187)
> So in D, is it now natural to mix borrow checking and garbage collection?
I think "natural" is a bit loaded, there is native support in the frontend for doing both. You have to go out of your way to annotate functions with @live and it is still experimental(https://dlang.org/spec/ob.html). The garbage collection is natural and happens if you do nothing, but you can turn it off with proper annotations like @nogc(https://dlang.org/spec/function.html#nogc-functions) or using betterC(https://dlang.org/spec/betterc.html). There is also @safe, @system and @trusted(https://dlang.org/spec/memory-safe-d.html).
So natural is a stretch at the moment, but you can use all kinds of different techniques, what is needed is more community and library standardization around some solutions.
> is it now natural to mix borrow checking and garbage collection?
D is as memory safe as Rust is, when you use the garbage collector to allocate/free memory. If you don't use the GC in D, then there's a risk from:
* double frees
* memory leaks
* not pairing the allocation with free'ing
Those are what the borrow checker handles. In other words, with D there is no point in using the borrow checker if one is using D's GC for memory management.
You can mix and match using the GC or manual memory allocation however it makes sense for your program. It is normal for D programmers to use both.
> D is as memory safe as Rust is, when you use the garbage collector to allocate/free memory.
Does D also protect against data races?
(I couldn't find an obvious answer after a bit of research, but I might have overlooked something)
> "gradual memory management" is the holy grail
I don't think gradual types are as much of a holy grail as you make them out to be. In gradual typing, if I recall correctly, there was a large overhead when communicating between typed and untyped parts.
But further, let's say gradual memory management is perfect: you still have to keep in mind the costs of having GC + borrow checking.
First, rather than focusing on perfecting GC or borrow checking, you divert your focus.
Second, you introduce an ecosystem split, with some libraries supporting GC and others supporting non-GC. E.g. if you make games in C# and want to be careful about avoiding the GC, good luck finding fast enough non-GC libraries.
Not all languages using a GC are designed that way, where libraries are dependent on it. For example, in V (Vlang), none of their libraries need the GC. They also can freely mix memory management methods. There is no ecosystem split, but rather preferences.
I agree with you.
For me Rust was amazing for writing things like concurrency code. But it slowed me down significantly in tasks I would do in, say, C# or even C++. It feels like the perfect language for game engines, compilers, low-level libraries... but I wasn't too happy writing more complex game code in it using Bevy.
And you make a good point, it's the same for OOP, which is amazing for e.g. writing plugins but when shoehorned into things it's not good at, it also kills my joy.
> I can also say confidently that the #1 method to combat memory safety errors is array bounds checking. The #2 method is guaranteed initialization of variables. The #3 is stop doing pointer arithmetic (use arrays and ref's instead).
#4: safer unions/enums. I do hope D gets tagged unions/pattern matching sometime in the future. I know about std.sumtype, but that's nowhere close to what Rust offers.
Doing a sumtype is on the agenda and we'll get to it.
BTW, D has always had quite an advanced pattern matcher for types.
https://dlang.org/spec/expression.html#is_expression
https://dlang.org/spec/template.html#template_type_parameter...
Another language with such features, including sumtypes, is V (Vlang)[1].
[1] https://github.com/vlang/v/blob/master/doc/docs.md#sum-types
> So I implemented a borrow checker for D...
D's implementation of a borrow checker, is very intriguing, in terms of possibilities and putting it back into the context of a tool and not the "be all, end all".
> I can also say confidently that the #1 method to combat memory safety errors is array bounds checking. The #2 method is guaranteed initialization of variables. The #3 is stop doing pointer arithmetic (use arrays and ref's instead).
This speaks volumes from such an experienced and accomplished programmer.
Hey, thank you for spreading the joy of the borrow checker beyond Rust; awesome stuff, sounds very interesting, challenging, and useful!
One question that came to mind as a single-track-Rust-mind kind of person: in D generally or in your experience specifically, when you find that the borrow checker doesn't work for a data structure, what is the alternative memory management strategy that you choose usually? Is it garbage collection, or manual memory management without a borrow checker?
Cheers!
Personally, I frankly do not need the borrow checker. I have been writing manual memory management code for so long I have simply internalized how to avoid having problems with it. I've been called arrogant for saying this, but it's true.
But I still like the borrow checker style of programming because it makes the code easier to understand.
I find it convenient in the D compiler implementation to use the GC for the AST memory management, as the algorithms that manipulate it are easier if they needn't concern themselves with memory management. A borrow checker approach doesn't fit it comfortably, either.
Many of the data structures persist to the end of the program, as a compiler is a batch program. No memory management strategy is even necessary for those.
> I can also say confidently that the #1 method to combat memory safety errors is array bounds checking. The #2 method is guaranteed initialization of variables. The #3 is stop doing pointer arithmetic (use arrays and ref's instead).
I think these are generally considered table stakes in a modern programming language? That's why people are/were excited by the borrow checker, as data races are the next prominent source of memory corruption, and one that is especially annoying to debug.
I saw a good talk, though I don't remember the name, that went over the array-index approach. It correctly pointed out that by then, you're basically recreating your own pointers without any of the guarantees rust, or even C++ smart pointers, provide.
> It correctly pointed out that by then, you're basically recreating your own pointers without any of the guarantees rust, or even C++ smart pointers, provide.
I've gone back and forth on this, myself.
I wrote a custom b-tree implementation in Rust for a project I've been working on. I use my own implementation because I need it to be an order-statistic tree, and I need internal run length encoding. The original version of my b-tree works just like how you'd implement it in C. Each internal node / leaf is a raw allocation on the heap.
Because leaves need to point back up the tree, there's unsafe everywhere, and a lot of raw pointers. I ended up with separate Cursor and CursorMut structs which held different kinds of references to the tree itself. Trying to avoid duplicating code for those two cursor types added a lot of complex types and trait magic. The implementation works, and it's fast. But it's horrible to work with, and it never passed MIRI's strict checks. Also, Rust has really bad syntax for interacting with raw pointers.
Recently I rewrote the b-tree to simply use a vec of internal nodes, and a vec of leaves. References became array indexes (integers). The resulting code is completely safe Rust. It's significantly simpler to read and work with - there's way less abstraction going on. I think it's about 40% less code. Benchmarks show it's about 25% faster than the raw pointer version. (I don't know why - but I suspect the reason is better cache locality.)
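For anyone who hasn't seen the pattern, a rough sketch of what the vec-plus-indices layout looks like; the field names here are made up and the real tree is of course more involved:

    // Hypothetical, simplified shape of the vec-backed tree described above;
    // real node layouts would also carry keys, counts for order statistics, etc.
    type NodeIdx = u32; // 32-bit indices keep the structs small
    type LeafIdx = u32;

    enum Child {
        Node(NodeIdx),
        Leaf(LeafIdx),
    }

    struct Internal {
        parent: Option<NodeIdx>, // back-reference is just another index
        children: Vec<Child>,
    }

    struct Leaf {
        parent: NodeIdx,
        values: Vec<(u64, char)>, // stand-in for the real entries
    }

    struct Tree {
        nodes: Vec<Internal>,
        leaves: Vec<Leaf>,
    }

    impl Tree {
        fn leaf(&self, idx: LeafIdx) -> &Leaf {
            // Bounds-checked: an out-of-range index panics instead of corrupting
            // memory. A *stale* index, however, silently reads whatever currently
            // occupies that slot - the use-after-free analogue discussed below.
            &self.leaves[idx as usize]
        }
    }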
I think this is indeed peak rust.
It doesn't feel like it, but using an array-index style still preserves many of rust's memory safety guarantees because all array lookups are bounds checked. What it doesn't protect you from is use-after-free bugs.
Interestingly, I think this style would also be significantly more performant in GC languages like javascript and C#, because a single array-of-objects is much simpler for the garbage collector to keep track of than a graph of nodes & leaves which all reference one another. Food for thought!
>Benchmarks show its about 25% faster than the raw pointer version. (I don't know why - but I suspect the reason is due to better cache locality.)
Cache locality matters, but so does having less allocator pressure. Use 32-bit unsigned ints as indices, and you get improvements on that as well.
>The original version of my b-tree works just like how you'd implement it in C. Each internal node / leaf is a raw allocations on the heap.
I'd always try to avoid that type of allocation pattern in C++, FWIW :-).
> Recently I rewrote the b-tree to simply use a vec of internal nodes
Doesn't this also require you to correctly and efficiently implement (equivalents of C's) malloc() and free()? IIUC your requirements are more constrained, in that malloc() will only ever be called with a single block size, meaning you could just maintain a stack of free indices -- though if tree nodes are comparable in size to integers this increases memory usage by a significant fraction.
(I just checked and Rust has unions, but they require unsafe. So, on pain of unsafe, you could implement a "traditional" freelist-based allocator that stores the index of the next free block in-place inside the node.)
Depends on if you need to allocate/deallocate nodes. If you construct the tree once and don’t modify it thereafter you don’t need to. If you do need to modify and alloc/dealloc nodes you can use a bitmap to track free/occupied slots which is very fast (find first set + bitmanip) and has minuscule overhead even for integer sized elements.
Yeah, or just store all freed nodes in a linked list. Eg, have a pointer / index from the root to the first unused (free) node, and in that node store a pointer to the next one and so on. This is pretty trivial to implement.
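A sketch of that free-list idea, assuming the nodes live in a Vec; this version uses a safe enum for slots rather than the union mentioned above, which costs a discriminant per slot but needs no unsafe:

    // Sketch of an index-based free list over a Vec of slots; NONE is a
    // sentinel meaning "no next free slot".
    const NONE: u32 = u32::MAX;

    enum Slot<T> {
        Occupied(T),
        Free { next_free: u32 }, // index of the next free slot, stored in place
    }

    struct Pool<T> {
        slots: Vec<Slot<T>>,
        free_head: u32,
    }

    impl<T> Pool<T> {
        fn new() -> Self {
            Pool { slots: Vec::new(), free_head: NONE }
        }

        fn alloc(&mut self, value: T) -> u32 {
            if self.free_head == NONE {
                self.slots.push(Slot::Occupied(value));
                return (self.slots.len() - 1) as u32;
            }
            let idx = self.free_head;
            self.free_head = match &self.slots[idx as usize] {
                Slot::Free { next_free } => *next_free,
                Slot::Occupied(_) => unreachable!("free list corrupted"),
            };
            self.slots[idx as usize] = Slot::Occupied(value);
            idx
        }

        fn free(&mut self, idx: u32) {
            // Note: nothing stops a stale index from being freed twice here;
            // that is exactly the class of bug discussed elsewhere in the thread.
            self.slots[idx as usize] = Slot::Free { next_free: self.free_head };
            self.free_head = idx;
        }
    }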
In my case, inserts and read operations vastly outnumber deletes. So much so that in all of my testing, I never saw a leaf node which could be freed anyway. (Leaves store ~32 values, and there were no cases where all of a leaf's values actually get deleted). I decided to just leak nodes if it ever happens in real life.
The algorithm processes data in batches then frees everything. So worst case, it just has slightly higher peak memory usage while processing. A fine trade in this case given it let me remove ~200 lines of code - and any bugs that might have been lurking in them.
GC languages like C# don't need these tricks, because C# is feature-rich enough to do C++-style low-level programming, and has value types.
Having gone full-in on this approach before, with some good success, it still feels wrong to me today. Contiguous storage may work for reasonable numbers of elements, but it's potentially blocking a huge contiguous chunk of address space especially for large numbers of elements.
I probably say this because I still have to maintain 32-bit binaries (only 2G of address space), but it can potentially be problematic even on 64-bit machines (typically 256 TB of address space), especially if the data structure should be a reusable container with an unknown number of instances. If you don't know a reasonable upper bound of elements beforehand, you have to reallocate later, or drastically over-reserve from the start. The former removes a pointer stability guarantee, the latter is uneconomical; it may even be uneconomical on 64-bit depending on how many instances of the data structure you plan to have. And having to reallocate when overflowing the preallocated space makes operations less deterministic with regards to execution time.
> Having gone full-in on this approach before, with some good success, it still feels wrong to me today. Contiguous storage may work for reasonable numbers of elements, but it's potentially blocking a huge contiguous chunk of address space especially for large numbers of elements.
That makes sense. If my btree was gigabytes in size, I might rethink the approach for a number of reasons. But in my case, even for quite large input, the data structure never gets more than a few megabytes in size. Thats small enough that resizing the vec has a negligible performance impact.
It helps that my btree stores its contents using lossless internal run-length encoding. Eg, if I have values like this:
{key: 5, value: 'a'}
{key: 6, value: 'a'}
{key: 7, value: 'a'}
Then I store them like this: {key: [5..8), value: 'a'}
In my use case, this compaction decreases the size of the data structure by about 20x. There's some overhead in joining and splitting values - but it's easily worth it.
> What it doesn't protect you from is use-after-free bugs.
Yes. I've found that problem in index-allocated code.
Also, when you do this, you need an allocator for the indexes. I've found bugs in those.
Could std::rc::Weak solve the backreference problem?
Weak is very helpful in preventing ownership loops which prevent deallocation. Weak plus RefCell lets you do back pointers cleanly. You call ".borrow()" to get access to the data protected by a RefCell. The run-time borrow panics if someone else is using the data item. This prevents two mutable references to the same data, which is what Rust requires.
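To make the run-time check concrete, a small illustrative example (not from any particular codebase) of the borrow scopes involved:

    use std::cell::RefCell;

    fn main() {
        let data = RefCell::new(vec![1, 2, 3]);

        let reader = data.borrow();         // run-time-checked shared borrow
        // let writer = data.borrow_mut();  // uncommenting this line panics:
        //                                  // "already borrowed: BorrowMutError"
        println!("first = {}", reader[0]);
        drop(reader);                       // the borrow scope ends here

        let mut writer = data.borrow_mut(); // fine now: no overlapping borrow
        writer.push(4);
    }

The static analysis described below amounts to proving, at compile time, that no two such borrow scopes for the same RefCell can ever overlap.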
Static analysis could potentially check for those potential panics at compile time. If that was implemented, the run time check, and the potential for a panic, would go away. It's not hard to check, provided that all borrows have limited scope. You just have to determine, conservatively, that no two borrow scopes for the same thing overlap.
If you had that check, it would be possible to have something that behaves like RefCell, but is checked entirely at compile time. Then you know you're free of potential double-borrow panics.
I started a discussion on this on a Rust forum. A problem is that you have to perform that check after template expansion, and the Rust compiler is not set up to do global analysis after template expansion. This idea needs further development.
This check belongs to the same set of checks which prevent deadlocking a mutex against itself. There's been some work on Rust static deadlock analysis, but it's still a research topic.
I didn't consider that. Looking at how weak references work, that might work. It would reduce the need for raw pointers and unsafe code. But in exchange, it would add 16 bytes of overhead to every node in my data structure. That's pure overhead - since the reference count of all nodes should always be exactly 1.
However, I'm not sure what the implications are around mutability. I use a Cursor struct which stores a reference to a specific leaf node in the tree. Cursors can walk forward in the tree (cursor.next_entry()). The tree can also be modified at the cursor location (cursor.insert(item)). Modifying the tree via the cursor also updates some metadata all the way up from the leaf to the root.
If the cursor stored a Rc<Leaf> or Weak<Leaf>, I couldn't mutate the leaf item because rc.get_mut() returns None if there are other strong or weak pointers pointing to the node. (And that will always be the case!). Maybe I could use a Rc<Cell<Leaf>>? But then my pointers down the tree would need the same, and pointers up would be Weak<Cell<Leaf>> I guess? I have a headache just thinking about it.
Using Rc + Weak would mean less unsafe code, worse performance, and code that's even harder to read and reason about. I don't have an intuitive sense of what the performance hit would be. And it might not be possible to implement this at all, because of mutability rules.
Switching to an array improved performance, removed all unsafe code and reduced complexity across the board. Cursors got significantly simpler - because they just store an array index. (And inserting becomes cursor.insert(item, &mut tree) - which is simple and easy to reason about.)
I really think the Vec<Node> / Vec<Leaf> approach is the best choice here. If I were writing this again, this is how I'd approach it from the start.
Consider copy-pasting the code from Rc/Weak, then tweaking it to suit your needs (reduce the overhead).
I have done this before with stdlib stuff like io::Cursor and was pretty happy with the result.
One can also use this array-index approach in C++, utilize the `at` methods, and have "memory safety guarantees", no?
> What it doesn't protect you from is use-after-free bugs.
How about using hash maps/hash tables/dictionaries/however it's called in Rust? You could generate unique IDs for the elements rather than using vector indices.
But Unity game objects are the same way: you allocate them when they spawn into the scene, and you deallocate them when they despawn. Accessing them after you destroyed them throws an exception. This is exactly the same as entity IDs! The GC doesn't buy you much, other than memory safety, which you can get in other ways (e.g. generational indices, like Bevy does).
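Bevy's actual types differ, but the generational-index idea is roughly this; an illustrative sketch, not Bevy's API:

    // Illustrative generational-index sketch; names are made up.
    #[derive(Clone, Copy, PartialEq, Eq, Debug)]
    struct EntityId {
        index: u32,
        generation: u32,
    }

    struct Slot<T> {
        generation: u32,
        value: Option<T>,
    }

    struct Arena<T> {
        slots: Vec<Slot<T>>,
    }

    impl<T> Arena<T> {
        fn new() -> Self {
            Arena { slots: Vec::new() }
        }

        fn spawn(&mut self, value: T) -> EntityId {
            // For brevity this always appends; a real arena would reuse free slots.
            self.slots.push(Slot { generation: 0, value: Some(value) });
            EntityId { index: (self.slots.len() - 1) as u32, generation: 0 }
        }

        fn despawn(&mut self, id: EntityId) {
            if let Some(slot) = self.slots.get_mut(id.index as usize) {
                if slot.generation == id.generation {
                    slot.value = None;
                    slot.generation += 1; // any EntityId held elsewhere is now stale
                }
            }
        }

        // Returns None for despawned or stale ids instead of handing back
        // whatever new entity happens to reuse the slot.
        fn get(&self, id: EntityId) -> Option<&T> {
            let slot = self.slots.get(id.index as usize)?;
            if slot.generation == id.generation {
                slot.value.as_ref()
            } else {
                None
            }
        }
    }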
But in rust you have to fight the borrow checker a lot, and sometimes concede, with complex referential stuff. I say this as someone who writes a good bit of rust and enjoys doing so.
I just don't, and even less often with game logic which tends to be rather simple in terms of the data structures needed. In my experience, the ownership and borrowing rules are in no way an impediment to game development. That doesn't invalidate your experience, of course, but it doesn't match mine.
That's a good comment.
The difference is that I'm writing a metaverse client, not a game. A metaverse client is a rare beast about halfway between an MMO client and a web browser. It has to do most of the graphical things a 3D MMO client does. But it gets all its assets and gameplay instructions from a server.
From a dev perspective, this means you're not making changes to gameplay by recompiling the client. You make changes to objects in the live world while you're connected to the server. So client compile times (I'm currently at about 1 minute 20 seconds for a recompile in release mode) aren't a big issue.
Most of the level and content building machinery of Bevy or Unity or Unreal Engine is thus irrelevant. The important parts needed for performance are down at the graphics level. Those all exist for Rust, but they're all at the My First Renderer level. They don't utilize the concurrency of Vulkan or multiple CPUs. When you get to a non-trivial world, you need that. Tiny Glade is nice, but it works because it's tiny.
What does matter is high performance and reliability while content is coming in at a high rate and changing. Anything can change at any time, but usually doesn't. So cache type optimizations are important, as is multithreading to handle the content flood. Content is constantly coming in, being displayed, and then discarded as the user moves around the big world. All that dynamism requires more complex data structures than a game that loads everything at startup.
Rust's "fearless multiprogramming" is a huge win for performance. I have about 20 threads running, and many are doing quite different things. That would be a horror to debug in C++. In Rust, it's not hard.
(There's a school of thought that says that fast, general purpose renderers are impossible. Each game should have its own renderer. Or you go all the way to a full game engine and integrate gameplay control and the scene graph with the renderer. Once the scene graph gets big enough that (lights x objects) becomes too large to do by brute force, the renderer level needs to cull based on position and size, which means at least a minimal scene graph with a spatial data structure. So now there's an abstraction layering problem - the rendering level needs to see the scene graph. No one in Rust land has solved this problem efficiently. Thus, none of the four available low-level renderers scale well.
I don't think it's impossible, just moderately difficult. I'm currently looking at how to do this efficiently, with some combination of lambdas which access the scene graph passed into the renderer, and caches. I really wish someone else had solved this generic problem, though. I'm a user of renderers, not a rendering expert.)
Meta blew $40 billion on this problem and produced a dud virtual world, but some nice headsets. Improbable blew upwards of $400 million and produced a limited, expensive-to-run system. Metaverses are hard, but not that hard. If you blow some of the basic architectural decisions, though, you never recover.
The dependency injection framework provided by Bevy also particularly elides a lot of the problems with borrow checking that users might run into and encourages writing data oriented code that generally is favorable to borrow checking anyway.
This is a valid point. I've played a little with Bevy and liked it. I have also not written a triple-A game in Rust, with any engine, but I'm extrapolating the mess that might show up once you have to start using lots of other libraries; Bevy isn't really a batteries-included engine so this probably becomes necessary. Doubly so if e.g. you generate bindings to the C++ physics library you've already licensed and work with.
These are all solvable problems, but in reality, it's very hard to write a good business case for being the one to solve them. Most of the cost accrues to you and most of the benefit to the commons. Unless a corporate actor decides to write a major new engine in Rust or use Bevy as the base for the same, or unless a whole lot of indie devs and part-time hackers arduously work all this out, it's not worth the trouble if you're approaching it from the perspective of a studio with severe limitations on both funding and time.
Thankfully my studio has given me time to be able to submit a lot of upstream code to Bevy. I do agree that there's a bootstrapping problem here and I'm glad that I'm in a situation where I can help out. I'm not the only one; there are a handful of startups and small studios that are doing the same.
Given my experience with Bevy this doesn't happen very often, if ever.
The only challenge is not having an ecosystem with ready made everything like you do in "batteries included" frameworks. You are basically building a game engine and a game at the same time.
We need a commercial engine in Rust or a decade of OSS work. But what features will be considered standard in Unreal Engine 2035?
Nobody is going to be writing code in 2035
Long bet: people are going to write much more code in 2035 than today. It's just going to be very different.
(For the record, software development now has nothing to do with how it looked when I started in 2003; plenty of things have revolutionized the way we write code (especially GitHub) and made us at least an order of magnitude more productive. Yet the number of developers has skyrocketed. I don't expect this trend to stop; AI is yet another productivity boost in an industry that has already faced a lot of them in recent times.)
> fight the borrow checker
I see this and I am reminded of when I had to fight 0-indexing while cutting my teeth in C, for class.
I wonder why no one complains about 0-indexing anymore. Isn't it weird how you have to go from 0 to length - 1, and implement algorithms differently than in a math book?
The ground floor in lifts isn't "1", it is "G". Same thing.
Country dependent. Like there are 1-based indexing languages (Lua, Matlab, et al)
And others like the Pascal lineage (Pascal, Object Pascal, Extended Pascal, Modula-2, Ada, Oberon, ...) have flexible bounds: they can be whatever numeric subranges we feel like using, or enumeration values.
Not in Lua.
Most languages have abstractions for iterating over an array so that you don’t need to use 0 or length-1 these days
Because the math books are the ones being weird. https://www.cs.utexas.edu/~EWD/transcriptions/EWD08xx/EWD831...
Maths books aren't being weird. They are counting the way most people learn to count: one apple, two apples, three apples. You don't start with the zeroth apple, one apple, two apples, and then respond that the set of apples contains three apples.
But computers are not actually counting array elements, it's more accurate to compare array indexing with distance measurement. The pointer (memory address) puts you at the start of the array, so the first element is right there under your feet (i.e. index 0). The other elements are found by measuring how far away from the start they are:
element_position = start + index*element_size
> But computers are not actually counting array elements
Sure. But from most general to least general, it goes:
Counting on fingers >>>> Math Equations >> Computer algorithms
The more general ones influence the less general ones because that's where most people start.
I believe it’s a practicality to simplify pointer arithmetic
Yes, but why does no one here talk about fighting 0-based indices? Or about switching to Lua because 0-based indices are hard?
Am I the only person that remembers how hard it was to wrap your head around numbers starting at 0, rather than 1?
I find indices starting from zero much easier, especially when index/pointer arithmetic is involved, like converting between pixel or voxel indices and coordinates, or indexing into ring buffers. 1-based indexing is one of the reasons I eventually abandoned Mathematica, because it got way too cumbersome.
So the reason why you don't see many people fighting 0-indexing is because they actually prefer it.
> 0 indices are hard?
I started out with BASIC and Fortran, which use 1 based indices. Going to C was a small bump in the road getting used to that, and then it's Fortran which is the oddball.
Most oldschool BASIC dialects (including the original Dartmouth IIRC) use 0-based indices, though. It's the worst of both worlds, where something like:
DIM a(10)
actually declares an array of 11 elements, with indices from 0 to 10 inclusive.
I believe it was QBASIC that first borrowed the ability to define ranges explicitly from Pascal, so that we could do:
DIM a(1 TO 10)
etc. to avoid the "zero element tax".
Interesting path. I went BASIC, then Pascal, and then C in college. Honestly it was such a mind twist.
Yes, I think you are. The challenges people describe with Rust look more difficult than remembering to start from 0 instead of 1…
I don't think so. One-based numbering is, barring a few particular (spoken) languages, the default. You have to change your counting strategies when going from the regular world to 0-based indices.
Maybe you had the luck of learning a 0-based language first. Then most of the rest were a smooth ride.
My point is you forgot how hard it is because it's now muscle memory (if you need a recap of the difficulty, learn a language with arbitrary array indexing and set your first array index to something exciting like 5 or -6). It also means that if you are "fighting the borrow checker" you are still at the pre-"muscle memory" stage of learning Rust.