Hm, this implementation seems allergic to passing types by value, which would eliminate half of the allocations. It also makes the mistake of being mutable-first, and provides some fundamentally-inefficient operations.
The main mistake it makes, in common with most string implementations, is to provide only a single type, rather than a series of mostly-compatible types that can be used generically in common contexts but differ in ways that sometimes matter: ownership, lifetime, representation, etc.
> It also makes the mistake of being mutable-first
Is mutability not part of the point of having a string buffer? Wouldn't the corresponding immutable type just be a string?
"Buffer" just means it is used between input and output. It does not imply mutability, and many buffers indeed only take their state at construction time and are not mutable.
In my experience, the only functions a mutable string buffer needs to provide are "append string (or to-string-able)" and "undo that append" (which mostly comes up in list-like contexts, e.g. to remove a final comma); for everything else you can convert to an immutable string first.
(theoretically there might be a "split and clobber" function like `strtok`, but in my experience it isn't that useful once your APIs actually take a buffer class).
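To make that concrete, here's a minimal sketch of such an interface (the names are hypothetical, not from the implementation under discussion, and error handling is omitted for brevity):

#include <stdlib.h>
#include <string.h>

/* Minimal sketch of the append / undo-append interface described above.
   Use as: Buf b = {0}; */
typedef struct {
    char   *data;
    size_t  len;
    size_t  cap;
} Buf;

/* Append a string, growing as needed; returns the old length, which
   doubles as an undo token. */
size_t buf_append(Buf *b, const char *s) {
    size_t old = b->len, n = strlen(s);
    if (b->len + n + 1 > b->cap) {
        size_t cap = b->cap ? b->cap : 16;
        while (cap < b->len + n + 1) cap *= 2;
        b->data = realloc(b->data, cap);   /* unchecked: sketch only */
        b->cap = cap;
    }
    memcpy(b->data + b->len, s, n + 1);
    b->len += n;
    return old;
}

/* "Undo that append": roll back to a previous length, e.g. to drop the
   final comma after a join loop. */
void buf_truncate(Buf *b, size_t old_len) {
    if (old_len < b->len) {
        b->len = old_len;
        b->data[b->len] = '\0';
    }
}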
Considering the functions from this implementation, they can be divided as follows:
Lifetime methods: init, free, clear

Immutable methods: print, index_of, match_all, split

Mutable methods: append, prepend (inefficient!), remove, replace
I've already mentioned `append`, and I suppose I can grant `prepend` for symmetry (though note that immutable strings do provide some sort of `concatenate`, with its own efficiency concerns). Immutable strings ubiquitously provide `replace` (and `remove` is just `replace` with an empty string), and those are much safer and easier to use. There are also a lot of common operations not provided here, and the ones that are provided fail to accept `StringBuffer` input.
How would you recommend doing that sort of "subtyping"? _Generic and macros?
Yup. It's a lot saner in C++, but people who refuse to use C++ for political reasons can do it the ugly way using C11 or GNU C.
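A rough sketch of what that dispatch can look like in C11; the types and helpers here are made up for illustration:

#include <stdio.h>
#include <string.h>

/* Two of the "mostly-compatible types" from upthread. */
typedef struct { const char *p; size_t len; } StrView;  /* borrowed */
typedef struct { char *p; size_t len, cap; } StrBuf;    /* owned    */

static size_t view_len(StrView s)      { return s.len; }
static size_t buf_len(const StrBuf *b) { return b->len; }
static size_t cstr_len(const char *s)  { return strlen(s); }

/* One generic "length" usable with any of the compatible types. */
#define str_len(x) _Generic((x),          \
        StrView:        view_len,         \
        StrBuf *:       buf_len,          \
        const StrBuf *: buf_len,          \
        char *:         cstr_len,         \
        const char *:   cstr_len)(x)

int main(void) {
    StrView v = { "hello", 5 };
    printf("%zu %zu\n", str_len(v), str_len("worlds!"));
    return 0;
}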
"political reasons"?
I switched from C++ to C because C++ is too complex and dealing with this complexity was stealing my time. I would not call this a "political reason".
They even downvote people who suggest C++ :-). Doing this in C is such a colossal waste of time and energy, not to mention the bugs it'll introduce. Sigh!
Following that argument, c++ is also a colossal waste of time and energy and bugs when compared with Rust :D.
Eventually, when Rust finally catches up with the C++ ecosystem, including being used in industry standards like Khronos APIs, CUDA, console devkits, and HPC and HFT standards.
Until then, the choice is pretty much between C and C++, and the latter provides a much saner and safer alternative than a language that keeps pretending to be a portable macro assembler.
Trolling about the choice of implementation language from a throwaway account is worth downvotes, yes. Doing a given task in a given language, simply for the sake of having it done in that language, is a legitimate endeavour, and having someone document (from personal experience) why it's difficult in that language is real content worth discussion. Choosing a better language is very much not a goal here.
> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
I wonder how an LLM would rate this code.
> I wonder how an LLM would rate this code.
I dunno how it rates it now, but if this link: https://www.reddit.com/r/programming/comments/1m3dg0l/making... gets used for future training, future LLMs might make good suggestions for cleaning it up.
new_capacity *= 2;
A better growth factor is 1.5: https://stackoverflow.com/questions/1100311/what-is-the-idea...
This seems clever if we forget that it runs on a real machine. Once we remember there's a real machine we run into discontinuities and suddenly "Naive doubling" works best for a lot of cases.
At the small end, tiny allocations are inefficient on the real machine, so rather than 1, 2, 3, 5, 8, 12, 18, 27, 41 it turns out we should probably start with 16 or even 64 bytes.
Then at the large end the "clever" reuse is a nonsense because of virtual memory, so that 1598, 2397, 3596 doesn't work out as cleanly as 1024, 2048, 4096
Folly has a growable array type which talks a big game on this 1.5 factor, but as I indicated above they quietly disable this "optimisation" for both small and large sizes in the code itself.
Of course YMMV, it is entirely possible that some particular application gets a noticeable speedup from a hand rolled 1.8x growth factor, or from starting at size 18 or whatever.
Also depends what we're trying to optimize. If we're trying to optimize for space then using a constant is a bad idea: Consider if we have 2Gi elements in the array: We have to grow it to 3Gi, but we may only need to add a few additional elements. That's pretty much a whole Gi of wasted space.
Clearly we don't just want a blanket constant, but the growth factor should be a function of the current length - decreasing as the size of the array grows.
For space optimization an ideal growth factor is 1+1/√(length). In the above example, with a 2 Gi array, we would grow it by only 64 Ki elements. Obviously this results in many more allocations, so we would only use this technique when optimizing for space rather than time.
We don't want to be messing around with square roots, and ideally, we want arrays to always be a multiple of some power of 2, so the trick is to approximate the square root:
inline int64_t approx_sqrt(int64_t length) {
    /* 1LL, not 1: the shift count can exceed 31 for large lengths.
       Precondition: length > 0, since __builtin_clzll(0) is undefined. */
    return 1LL << (64 - __builtin_clzll(length)) / 2;
    // if C23, use stdc_first_leading_one_ull() from <stdbit.h>
}

inline int64_t new_length(int64_t length) {
    if (length == 0) return 1;
    return length + approx_sqrt(length);
}
Some examples - for all powers of 2 between UINT16_MAX and UINT32_MAX, the old and new lengths:

old length: 2^16 -> new length: 0x00010100 (growth: 2^8)
old length: 2^17 -> new length: 0x00020200 (growth: 2^9)
old length: 2^18 -> new length: 0x00040200 (growth: 2^9)
old length: 2^19 -> new length: 0x00080400 (growth: 2^10)
old length: 2^20 -> new length: 0x00100400 (growth: 2^10)
old length: 2^21 -> new length: 0x00200800 (growth: 2^11)
old length: 2^22 -> new length: 0x00400800 (growth: 2^11)
old length: 2^23 -> new length: 0x00801000 (growth: 2^12)
old length: 2^24 -> new length: 0x01001000 (growth: 2^12)
old length: 2^25 -> new length: 0x02002000 (growth: 2^13)
old length: 2^26 -> new length: 0x04002000 (growth: 2^13)
old length: 2^27 -> new length: 0x08004000 (growth: 2^14)
old length: 2^28 -> new length: 0x10004000 (growth: 2^14)
old length: 2^29 -> new length: 0x20008000 (growth: 2^15)
old length: 2^30 -> new length: 0x40008000 (growth: 2^15)
old length: 2^31 -> new length: 0x80010000 (growth: 2^16)
This is the growth rate used in Resizable Arrays in Optimal Time and Space [1], but they don't use a single array with reallocation - instead they have an array of arrays, where growing the array appends an element to an index block which points to a data block of `approx_sqrt(length)` size, and the existing data blocks are all reused. The index block may require reallocation.

[1]: https://cs.uwaterloo.ca/research/tr/1999/09/CS-99-09.pdf
> For space optimization an ideal growth factor is 1+1/√(length)
> return length + approx_sqrt(length);
They don't seem the same. Should one take the root or its reciprocal?
They're the same in theory. The factor is the amount the original length is multiplied by (in place of *2 or *1.5 constants), and obviously x/√x = √x.
length * (1 + 1/√(length)) = length + length/√(length) = length + √(length)
Addition is cheaper than multiplication, and since we're approximating the sqrt we probably want to avoid calculating its reciprocal and multiplying. Our actual implementation is an approximation of 1+1/√(length). Maybe I could've worded it a bit better.
More detail is in the RAOTS paper. It organizes the data blocks into so-called logical "superblocks" (which have no material representation); each superblock is ~n data blocks, each of length n. In practice it's not exactly this, because when we have an odd number of bits, say 5, we have to split it either 2:3 or 3:2 (for the log2 of num_blocks:block_size). The paper chooses the former (same as above). We could potentially do it the other way, with more blocks of smaller average size, but it would be a bit more awkward to index into the data blocks, since you'd have to test for odd/even - which we don't do above, because of the /2 integer division - and it would make the index block larger, which we'd want to avoid because we have to reallocate it.
So in the approach taken, avg_block_size > √(length) and num_blocks < √(length), but avg_block_size * num_blocks == length still holds, and avg_block_size ≈ √(length) ≈ num_blocks.
If we're reallocating rather than using an index block, replace `num_blocks` with number of times we have to call the allocator.
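Putting the description above together, a hypothetical sketch of that index-block growth (names are mine, and it's simplified relative to the paper):

#include <stdint.h>
#include <stdlib.h>

/* Growing appends a new data block of ~sqrt(length) elements, so only
   the small index block ever gets reallocated; existing data never moves. */
static int64_t approx_sqrt(int64_t length) {   /* repeated from above */
    return 1LL << (64 - __builtin_clzll(length)) / 2;
}

typedef struct {
    int64_t **blocks;      /* index block: pointers to data blocks */
    int64_t   num_blocks;
    int64_t   length;      /* total element capacity across blocks */
} raots_array;

int raots_grow(raots_array *a) {
    int64_t block_len = a->length ? approx_sqrt(a->length) : 1;
    int64_t **idx = realloc(a->blocks,
                            (a->num_blocks + 1) * sizeof *idx);
    if (!idx) return -1;
    a->blocks = idx;
    idx[a->num_blocks] = malloc(block_len * sizeof(int64_t));
    if (!idx[a->num_blocks]) return -1;
    a->num_blocks++;
    a->length += block_len;
    return 0;
}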
Yep. And probably use tcmalloc or jemalloc (deprecated?) too. Most OS sbrk/libc malloc implementations are better than they used to be, but some programs, once profiled, can see increased performance from tuning one of the nonstandard allocators. YMMV. Test, profile, and experiment.
I remember reading (decades ago) an extensive article in Software Practice and Experience reaching the same conclusion.
Or, like Python shows there, 1.25x + k, which can be better (faster growth and less memory wasted) than both.
1.25 of what? Do you mean 2.25*k == 9*k/4?
They say 1.25 + k; in this context k will be some constant - suppose it's 16 bytes.
Thus you might see a curve like 13, 32, 56, 86 - aggressive to start but then much gentler. Because it's so gentle it gets the re-use upside for medium-sized allocations but incurs a lot more copying; I can imagine that in Python that might be a good trade.
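For concreteness, that 1.25x + k curve is just the following (assuming the k = 16 reading of the comment above, not necessarily CPython's exact formula):

#include <stddef.h>

/* Starting from 13, this reproduces 13, 32, 56, 86, ... exactly
   (integer division included). */
size_t grow_1_25_plus_k(size_t cap) {
    return cap + cap / 4 + 16;
}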
There is a way for this not to terminate:
while (new_capacity < required) {
new_capacity *= 2;
}
1. All variables are unsigned (due to being size_t), so we don't worry about overflow UB; the multiplication wraps.
2. new_capacity * 2 always produces an even number, whether it wraps or not.
3. Suppose required is SIZE_MAX, the highest value of size_t; note that this is an odd number.
4. Therefore new_capacity * 2 is always < required; the loop does not terminate.
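One way to close that hole, sketched (names are mine, not from the post): refuse to grow once doubling would wrap, instead of looping forever.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

static bool grow_capacity(size_t *capacity, size_t required) {
    size_t new_capacity = *capacity ? *capacity : 16;
    while (new_capacity < required) {
        if (new_capacity > SIZE_MAX / 2)
            return false;   /* doubling would wrap around */
        new_capacity *= 2;
    }
    *capacity = new_capacity;
    return true;
}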
It can be refactored into creating a buffer primitive of void* buf, size_t capacity, size_t refcount. Then the string can be implemented using CoW logic on a buffer plus a size_t length. Read-only references to substrings become cheap, and copying is done whenever there's a modification or realloc can't grow the underlying buffer.
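A rough sketch of that split (all names illustrative):

#include <stddef.h>

/* Strings share a refcounted buffer; any mutation while refcount > 1
   copies to a fresh buffer first. */
typedef struct {
    void  *buf;
    size_t capacity;
    size_t refcount;
} Buffer;

typedef struct {
    Buffer *buffer;   /* shared, copy-on-write */
    size_t  offset;   /* substring views just point into the buffer */
    size_t  length;
} String;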
What I don't like is that some functions take as arguments a mix of StringBuffers and regular C strings. This is confusing. For example, why this:
void StringBuffer_replace(StringBuffer *buf,
const char *original,
const char *update,
size_t from);
instead of this:

void StringBuffer_replace(StringBuffer *buf,
                          const StringBuffer *original,
                          const StringBuffer *update,
                          size_t from);
It's odd how it has error reporting in some areas (alloc, split can return NULL if allocation fails), but not others (append, prepend have a void return type but might require allocation internally).
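For consistency you'd want every operation that can allocate to be fallible; something along these lines (a hypothetical status enum, not the library's actual API):

/* Forward declaration just for this sketch. */
typedef struct StringBuffer StringBuffer;

typedef enum { SB_OK, SB_NOMEM } sb_status;

/* Both may allocate internally, so both report failure the same way. */
sb_status StringBuffer_append(StringBuffer *buf, const char *str);
sb_status StringBuffer_prepend(StringBuffer *buf, const char *str);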
You might be interested in https://github.com/antirez/sds
neat, i like it, has some of the same ideas i've used in my string packages
but i did see a place to shave a byte in the sds data struct. The null terminator is a wasted field, that byte (or int) should be used to store the amount of free space left in the buffer (as a proxy for strlen). When there is no space left in the buffer, the free space value will be.... a very convenient 0 heheh
hey, OP said he wants to be a better C programmer!
> The null terminator is a wasted field
I think that would break its "Compatible with normal C string functions" feature.
nooooo you don't understand. when the buffer is not full, the string will be zero terminated "in buffer" (which is how it works as is anyway). when the buffer is full, the "free count" at the end will do double duty, both as a zero count and a zero terminator.
But "normal C string functions" don't know about the "free count" byte, right? So it wouldn't be updated... unless I'm misunderstanding something.
I'm assuming he's talking about this specific small string optimization: https://www.youtube.com/watch?v=kPR8h4-qZdk&t=409s
just watched, yes, that is the same optimization
normal c string functions don't know about any of this package's improvements; I'm not sure you understand what the package does.
+--------+-------------------------------+-----------+
| Header | Binary safe C alike string... | Null term |
+--------+-------------------------------+-----------+
         |
         `-> Pointer returned to the user.
his trick is to create a struct with fields in the header for extra information about the string, and then a string buffer also in the struct. but on instantiation, instead of returning the address of the struct/header, he returns the address of the string buffer, so it can be passed to strlen and return the right answer, or to open and open the right file, all compatible-like. but if you call "methods" on the package, they know that there is a header with struct fields below the string buffer, and they can obtain those fields and update them if need be.
He doesn't document that in more detail in the initial part of the spec/readme, but an obvious thing to add in the header would be a strlen, so you'd know where to append without counting through the string. But without doing something like that, there is no reason to have a header. Normal string functions can "handle" these strings, but they can't update the header information. I'm just extending that concept to the byte at the end also.
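For anyone who hasn't seen it, the core of the trick sketched in a few lines (simplified - real sds picks among several header sizes, and these names are mine):

#include <stddef.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    size_t len;    /* current string length  */
    size_t cap;    /* usable buffer capacity */
    char   buf[];  /* the user's pointer points here */
} hdr_t;

char *str_new(const char *init) {
    size_t len = strlen(init);
    hdr_t *h = malloc(sizeof *h + len + 1);
    if (!h) return NULL;
    h->len = len;
    h->cap = len;
    memcpy(h->buf, init, len + 1);   /* keeps the '\0': strlen() etc. work */
    return h->buf;                   /* looks like a plain C string */
}

/* "Methods" walk back from the user pointer to recover the header. */
static hdr_t *hdr_of(char *s) {
    return (hdr_t *)(s - offsetof(hdr_t, buf));
}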
this type of thing falls into what the soulless ginger freaks call UB and want to eliminate.
(soulless ginger freaks? a combination of "rust colored" and https://www.youtube.com/watch?v=EY39fkmqKBM )
> instead of returning the address of the struct
Yes I'm pretty sure I understand this part.
> an obvious thing to add in the header would be a strlen
The length is already in the header from what I can tell: https://github.com/antirez/sds/blob/master/sds.h#L64
But my point was that if something like your "free count" byte existed at the end, I would think it couldn't be relied upon, because functions such as s*printf that might truncate don't know about that field, and you don't want later "methods" to rely on a field that hasn't been updated and then run off the end.
And from what I can tell from the link above, there isn't actually a "free count" defined anywhere in the struct, the buffer appears to be at the end of the struct, with no extra fields after it.
Maybe I'm misunderstanding something?
you misunderstood what i said about the strlen field, but we agree, yes, it's in the header where it belongs.
I explained how returning the address of the string buffer instead of the address of the struct would give you a C compatible string that you could pass to other C library functions. If those functions are "readonly" wrt the string, everything is copasetic.
if those string functions update/write the c-string (which is in the buffer) the strlen in the header will now be wrong. That has nothing to do with my suggestion, and it's already "broken" in that way you point out. My "string free bytes field" suggestion will also be broken by an operation like that, so my suggestion does not make this data structure worse than it already is wrt compatibility with C library functions.
However, that strlen and free bytes problem can be managed (no worse than with C standard strings themselves), and strlen and/or free bytes are useful features that make some other things easier, so overall it's a win.
I was basing my response off of this:
> i did see a place to shave a byte in the sds data struct. The null terminator is a wasted field
I'm still not sure what byte in the struct you're talking about removing... because I don't see an actual null terminator field.
the word "null term" appears in the ascii art diagram, that's where the null terminator is. the strlen field is in the portion labelled header.
the strlen field can be moved to where the words "null term" appear, except with a changed semantic of "bytes remaining", so it will go to zero at the right time. now you have a single entity, "bytes remaining", instead of two entities, "strlen" and "null", giving a small storage saving. (there is an additional null terminator most of the time, right at the end of the string, but this doesn't take up any storage, because that byte is not used for anything else anyway.)
over and out.
> the word "null term" appears in the ascii art diagram
Yes but it does not appear anywhere in the struct that I can see... I would love to be proven wrong though.
the string needs a null terminator to be C-string compatible. the trick of putting a count at the end that turns into a null terminator at the right moment will save a byte, regardless of how it is labelled
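If I've followed the proposal, a struct-level sketch of it looks like this for a fixed 16-byte slot (an illustration, not sds's actual layout; it's essentially the libc++ small-string trick from the talk linked above):

/* While the string is short, it is '\0'-terminated in-buffer and the
   last byte holds the free count; when the string fills the buffer
   exactly, the free count becomes 0 and reads as the terminator. */
typedef struct {
    char buf[15];
    unsigned char free_bytes;   /* 15 - len; reads as '\0' when full */
} small_str;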
By the way, I have no power here, but you people are ubiquitous; at least one of you is on the C committee.
Hereby I propose an addition to C:
namespace something { }

which preprocesses to prefixing something_ onto every function inside.

class someclass { }

which preprocesses every function inside to someclass_fn1(someclass* as_first_parameter, ...).

And of course the final syntax sugar:

cl* c;
c->fn1(a, b)
I mean this would make C much easier, as we already code object-oriented in it, but the amount of preprocessing and unreadability that has to live in headers is simply brain exhausting.

I would rather focus on solving the main problem than reinvent the wheel. Just use C++ if perf is critical, which gives you all these things for free. In this day and age the reasons for using C as your main language should be almost zero.