Caution with these functions: in most cases you need to check not only the error code, but also the `ptr` in the result.
Otherwise you end up with `to_int("0x1234") == 0` instead of the expected `std::nullopt`, because these functions return success if they matched a number at the beginning of the string.
How can this be the ~5th iteration of a very widespread use case and still contain a footgun?
The API looks like it's following best practice, with a specific result type that also carries specific error information, and yet that's not enough: you still end up with edge cases where things look like they're fine when they aren't.
I suppose the reason is that sometimes you want to parse more than just the number. For example, numbers separated by commas. In that case you will have to call the function repeatedly from your own parsing routine and advance the pointer.
Yes, if you are parsing in-place numbers in an existing string you do not necessarily know a priori where the number ends. You could have extra logic to look ahead to the end of the number before passing it to from_chars, but a) it would require an additional pass and b) you could end up replicating part of the implementation of from_chars.
from_chars is supposed to be a low-level routine; it must be possible to use it correctly, but it is not necessarily the most ergonomic.
I get that this is a super low-level API, but still: my expectation of an API that parses a buffer with a length into a number, and that has a specific enum for error cases as its return type, would be that when asked to parse "9not a number" I get an error response, not "the number you asked me to parse is 9 and everything went well".
The whole reason for returning a result enum would be so I don't forget to also check somewhere else whether something could possibly have gone wrong.
I wonder why it is called `from_chars` and not `to_number` or similar. It's obvious what you are converting from, because you have to write down the argument `from_chars(someString)`, but you don't see what is coming out.
As you indicate, you do see what you’re putting in, and that includes an argument holding a reference to the result of the conversion.
What’s coming out is a std::from_chars_result: a status code plus an indicator of how much of the data was consumed.
What to name this depends on how you see this function. As part of a family of functions on number types, from_chars is a good name. As part of a family of functions on strings, to_int/to_long/etc. are good names. As freestanding functions, chars_to_int (ugly, IMO) or parse_int (with parse sort of implying taking a string) are options.
I can see why they went for from_chars. Implementations will be more dependent on the output type than on character pointers, it’s more likely there will be more integral types in the future than that there will be a different way to specify character sequences, and it means adding a single function name.
Maybe “number” is too ambiguous because they’d have to define that “in this context a number means a float or integer type only.” The C++ standard also includes support for complex numbers.
I saw another commenter explain that it’s passed by reference, but I agree with you. The C++ Core Guidelines even mention that it’s better to use raw pointers (or pass by value and return a value) in cases like this to make the intent clear.
A pointer parameter can be null and it doesn't make sense for this parameter to be null, so IMO a reference is the better choice here.
A non-const reference is just as clear a signal that the parameter may be modified as a non-const pointer. If there's no modification const ref should be used.
It's about clarity of intent at the call site. Passing by mutable ref looks like `foo`, same as passing by value, but passing mutability of a value by pointer is textually readably different: `&foo`. That's the purpose of the pass by pointer style.
You could choose to textually "tag" passing by mutable ref by passing `&foo` but this can rub people the wrong way, just like chaining pointer outvars with `&out_foo`.
If you want clarity of intent define dummy in and out macros but please don't make clean reference-taking APIs a mess by turning them into pointers for no good reason
In theory a from_chars with an optional output parameter could be useful to validate that the next field is a number and/or discard it without needing to parse it; it might even be worth optimizing for that case.
In the function signature, value is a reference. You can think of a reference as a pointer that points to a fixed location but has value semantics.
So you can pass value into the function call and the function can assign to it, just as it could to data pointed to by a pointer.
&value would be a pointer to that integer. Instead, it's using references, which also use the & character. References use different syntax, but they're like pointers that can't be reassigned. Examples:
int x = 0;
int &r = x;
r += 1;
assert(x == 1);
int y = 5;
r = y; // compiles, but does NOT rebind r: it assigns y's value to x (references can't be reseated)
void inc(int &r) { r += 1; }
int x = 0;
inc(x);
assert(x == 1);
The equivalent using pointers, like in C:
int x = 0;
int *p = &x;
*p += 1;
assert(x == 1);
int y;
p = &y; // fine: unlike a reference, a pointer can be reassigned
void inc(int *p) { *p += 1; }
int x = 0;
inc(&x);
assert(x == 1);
std::from_chars / std::to_chars are explicitly made to only operate in the C locale, so basically ASCII texts. It's not meant for parsing user-input strings but rather text protocols with maximal efficiency (and locale support prevents that).
E.g. "۱۲۳٤" isn't, as far as I know, a valid number to put in a JSON file, HTTP Content-Length, or CSS property. Maybe it'd be OK in a CSV, but realistically, have you ever seen a big-data CSV encoded in non-ASCII numbers?
If your CSV is defined to contain straight, unparsed user input it's wrong no matter the context.
If it's defined to contain numbers then if at some point between [user input] and [csv output] you don't have a pass where the value is parsed, validated and converted to one of your programming language's actual number data types before being passed to your CSV writer, then your code is wrong.
std::expected is new in C++23; std::from_chars was introduced in C++17. Obviously, 2023 features were not available in 2017. Changing the return type now would break everybody's code for very little benefit; you can easily wrap std::from_chars.
For example, with Python: breaking changes are more common, and everyone complains about how much they have to go and fix every time something changes.
Damned if you do, damned if you don't. Either have good backwards compatibility but sloppy older parts of the language - or have a 'cleaner' overall language at the cost of crushing the community every time you want to change anything.
I'm fully aware, just pointing out that C++ is particularly afflicted with backward compatibility issues; far more than other languages.
"The Design and Evolution of C++" gives the impression that even back in the 80s, major concessions were being made in the name of compatibility. At that time it was with C; now it's with previous versions of C++.
Then you would have to use 2 names: the primary name and the wrapper name. What would they be? Using 2 names wastes more of the namespace, and will confuse people. If the wrapper name isn't from_chars, then people's code will break when upgrading.
They could add an overload like std::expected<T, std::errc> from_chars(std::string_view). That way, since the input arguments are different, there'd be no issues about overload resolution.
Whoever introduced the rule to automatically delete :: in titles on a hacker site should be made to rethink their decisions. It's a silly rule. It should go.
I am sure the intern who wrote a rule for the Apple VoiceOver speech synthesizer to special-case the number 18 being pronounced in English while the synth voice is set to German imagined they had a good reason at the time as well. However, that doesn't make their decision less stupid. "Vierzehn Uhr", "Fünfzehn Uhr", "Sechzehn Uhr", "Siebzehn Uhr", "Eighteen Uhr".
It's not a list of ways to convert strings to numbers, it's a list of string conversion functions (i.e. including the other direction). to_string is also listed there.
Wasn’t the old stuff good enough? Why do we need new methods? In short: because from_chars is low-level, and offers the best possible performance.
That sounds like marketing BS, especially when most likely these functions just call into, or are implemented nearly identically to, the old C functions, which already "offer the best possible performance".
I did some benchmarks, and the new routines are blazing fast![...]around 4.5x faster than stoi, 2.2x faster than atoi and almost 50x faster than istringstream
Are you sure that wasn't because the compiler decided to optimise away the function directly? I can believe it being faster than istringstream, since that has a ton of additional overhead.
After all, the source is here if you want it straight from the horse's mouth:
Apart from various nuts-and-bolts optimizations (e.g. not using locales, better cache friendliness, etc.) it also uses a novel algorithm which is an order of magnitude quicker for many floating-point tasks (https://github.com/ulfjack/ryu).
If you actually want to learn about this, then watch the video I linked earlier.
Integers are simple to parse, but from_chars is a great improvement when parsing floats. It's more standardized on different platforms than the old solutions (no need to worry about the locale, for example whether to use comma or dot as decimals separator) but also has more reliable performance in different compilers. The most advanced approaches to parsing floats can be surprisingly much faster than intermediately advanced approaches. The library used by GCC since version 12 (and also used by Chrome) claims to be 4 - 10 times faster than old strtod implementations:
I agree with some of this, and the author could've made a better case for from/to_chars:
- Afaik stoi and friends depend on the locale, so it's not hard to believe this introduced additional overhead. The implicit locale dependency is also often very surprising.
- std::stoi only accepts std::string as input, so you're forced to allocate a string to use it. std::from_chars does not.
- from/to_chars don't throw. As far as I know not throwing won't affect performance by itself, but it does mean you can use these functions in environments where exceptions are disabled.
A few months ago I optimized the parsing of a file and did some micro benchmarks.
I observed a similar speed-up compared to stoi and atoi (didn't bother to look at stringstream). Others already commented that it's probably due to not supporting locales.
For the sake of example: a "locale-aware" number conversion routine would be the worst possible choice for parsing incoming network traffic. Beyond the performance concerns, there's the significant semantic difference in number formatting across cultures: different conventions for the decimal or thousands separator easily lead to subtle data errors or even security concerns.
Lastly, having simple and narrowly specified conversion routines allows one to create a small subset of the C++ standard library fit for constrained environments like embedded systems.
I get that. However, then they should name the function accordingly and put highly visible disclaimers in the documentation. Something like "from_ascii" instead of "from_chars". Also the documentation, including this blog post, should be very clear that this function is only suitable for parsing machine-to-machine communications and should never be used for human input data. There is clearly a place for this type of function, but this blog post miscommunicates it in a potentially harmful way. When I read the post I presumed that this was a replacement for atoi() even though it had a confusing "non-locale" bullet point.
Did you verify their claims or are you just calling BS and that's it? The new functions are in fact much faster than their C equivalent (and yes, I did verify that).
Your original claim "I've not checked but this guy, and by extension the C++ standards committee who worked on this new API, are probably full of shit" was pretty extraordinary.
Look at the compiler-generated instructions yourself if you don't believe the source that I linked; in the cases I've seen, all the extra new stuff just adds another layer on top of existing functions, so if the new functions are fast, the underlying ones must necessarily be at least as fast.
The standards committee's purpose is to justify their own existence by coming up with new stuff all the time. Of course they're going to try to spin it as better in some way.
It compiles from source, can be better inlined, and benefits from dead-code elimination when you don't use an unusual radix.
It also doesn't do locale-based things.
> Not surprisingly, under all those layers of abstraction-hell, there's just a regular accumulation loop.
Your dismissive answer sounds so much like that of a typical old-C-style programmer who underestimates by two orders of magnitude what compiler inlining can do.
Abstraction, genericity and inlining on a function like from_chars are exactly what you want.
In my experience, inlining only looks great in microbenchmarks but is absolutely horrible for cache usage and causes other things to become slower.
For small functions inlining is almost always preferable because (1) the prefetcher actually loves that and (2) a cache miss due to a mispredicted jump is way more costly than anything a bit of bloat will ever cost you.