
Doesn't the singleton example contain an error?

On one hand, the author says that instantiating the singleton and storing it is not safe because there is no synchronization, but on the other hand he says that the null check before it is safe because the mutex introduces synchronization.

My understanding would be that the mutex protects this whole section and that the atomic is not necessary at all in this case.

Please correct me if I'm wrong.


It looks correct to me.

Consider the code without the initial null check optimization. Everything occurs under the mutex in this case, so no atomics are needed -- it's fully serialized. (That is, all memory operations which occur within the mutex will be visible to any other thread which subsequently acquires the mutex.)

Now add in the initial null check. Because it occurs outside the mutex, it has no ordering established with anything that occurs under the mutex. Besides the fact that p itself must be atomic to avoid a data race with itself (a torn read), you need some way to ensure that the App object is visible if p was found to be set. Since the mutex isn't touched here, it doesn't provide any help. You need to establish that relationship using p itself. Hence the additional release-acquire sequence on p.

Btw a simple mental model for mutexes is that of the spin lock, acquired using an acquire-release operation (e.g. test-and-set), released using a release operation (e.g. a plain write). There's no other "magic" that happens in a mutex to make it special memory-order-wise. Neither mutexes nor memory ordering in general "force" things out to memory or anything like that.
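
For concreteness, here's a rough sketch of the pattern being described (untested, assuming C++11 std::atomic/std::mutex; "App" and "p" are just the names used in this thread):

    #include <atomic>
    #include <mutex>

    struct App { /* ... */ };

    std::atomic<App*> p{nullptr};
    std::mutex m;

    App* instance() {
        // Fast path: the acquire pairs with the release below, so seeing a
        // non-null pointer also means seeing the fully constructed App.
        App* tmp = p.load(std::memory_order_acquire);
        if (tmp == nullptr) {
            std::lock_guard<std::mutex> lock(m);
            // Re-check under the mutex; another thread may have won the race.
            // The mutex provides the ordering here, so relaxed is enough.
            tmp = p.load(std::memory_order_relaxed);
            if (tmp == nullptr) {
                tmp = new App();
                // Publish: the release makes the constructor's writes visible
                // to any thread whose acquire load sees this store.
                p.store(tmp, std::memory_order_release);
            }
        }
        return tmp;
    }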


This paper will help a lot: https://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf

tl;dr: Remember that the "new" operator does two operations: it allocates memory and then fills in the memory with the data (runs the constructor). Now, if you have two threads (A & B):

- A allocates memory, assigns the (now non-null) pointer, and then gets pre-empted before the constructor runs

- B will pass the null pointer check and attempt to use the non-filled in memory block
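
In pseudocode, the broken pattern looks roughly like this (illustrative names, no particular locking API implied):

    // Broken double-checked locking: the pointer can become non-null
    // before App's constructor has finished running.
    if (p == nullptr) {            // B can observe a non-null p here...
        mutex.lock();
        if (p == nullptr) {
            p = new App();         // ...if A's allocate / assign / construct
        }                          // steps are reordered or interrupted
        mutex.unlock();
    }
    return p;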


Thanks for the explanation!

That's the part I missed: the first null check can pass when the memory has been allocated but the constructor hasn't been executed yet.


I guess Matsumoto should have charged for Ruby. I'm sure Mike Perham and Derek Kraan would have been glad to pay for that, and that the Ruby community would be in strong and healthy shape.


People need to eat, man; doing something for free is a luxury not many possess. We can see how messed up the financial situation is for a lot of open source devs, and their lives would be better if they were charging money instead of subsisting off grants and donations.

I don't even necessarily disagree with your point that without being free those things wouldn't have taken off, but we need to find a way to strike a balance in the developer community.

Sidekiq having a free version and an enterprise version walks an okay middle line imo.


I personally try to spend money on, or use the ad-supported version of, anything that is open source, just to help someone else eat. It really hit home for me about five years ago when the author of a WoW addon[0] couldn't develop anymore because of his financial and life situation.

So many communities across the web rely on people putting in their spare hours for free just to enjoy things. Whether it's spreadsheets in Eve, Addons and Weak Auras in WoW, forum analysis posts, or whatever goes on in the depths of pvpoke, so much free labor underpins massive parts of the world today.

I would love something that I could donate x money to per month and then, based on usage, have it dole out to all the content providers, with perhaps a minimum per month. It just seems daunting to do that as a) not a crypto scheme and b) across all the various creator landscapes.

[0]https://www.polygon.com/2018/9/25/17901552/world-of-warcraft...


Well, Sidekiq is free to use. It's only the Pro version that he charges for, and the free version's code is open source.

I don't see the problem in having that kind of business model; it still allows the community to thrive and offers enterprises a way to have premium support.

Plus it allows him to invest more time in maintaining the free version.


I have no problem paying for the Pro version, but one of its marketing pitches is "enhanced reliability", which is a wild marketing spin on "the free version will lose jobs in fairly common scenarios".

In Sidekiq without super_fetch (a paid feature), any jobs in progress when a worker crashes are lost forever. If a worker merely encounters an exception, the job will be put back on the queue and retried, but a crash means the job is lost.

Again, no problem paying for Pro, but I would prefer a little more transparency on how big a gap that is.


I wish this was prominently documented. Most people new to Sidekiq have no idea that the job will be lost forever if you simply hard-kill the worker. I have seen a couple of instances where the team had Sidekiq Pro, but they had not enabled reliable fetch because they were unaware of this problem.


The free version acts exactly like Resque, the previous market leader in Ruby background jobs. If it was good enough reliability for GitHub and Shopify to use for years, it was good enough for Sidekiq OSS too.

Here's Resque literally using `lpop`, which is destructive and will lose jobs:

https://github.com/resque/resque/blob/7623b8dfbdd0a07eb04b19...


> If it was good enough reliability for GitHub and Shopify to use for years, it was good enough for Sidekiq OSS too.

Great point, and thanks for chiming in. I wonder if containerization has made this more painful (due to cgroups and OOMs). The comments here are basically some people saying it's never been a problem for them and some people saying they encounter it a lot (in containerized environments) and have had to add mitigations.

Either way, my observation is a lot of people not paying for Sidekiq Pro should. I hope you can agree with that.


When we used Sidekiq in production, not only did I never see crashes that lost us jobs, but there are also ways to protect yourself from that. I highly recommend writing your jobs to be idempotent.


Idempotence doesn't solve this problem. The jobs are all idempotent. The problem is that jobs will never be retried if a crash occurs.

This doesn't happen at a high rate, but it happens more than zero times per week for us. We pay for Sidekiq Pro and have superfetch enabled so we are protected. If we didn't do so we'd need to create some additional infra to detect jobs that were never properly run and re-run them.


Or install an open-source gem[1] that recreates the functionality using the same Redis rpoplpush[2] command.

[1] https://gitlab.com/gitlab-org/ruby/gems/sidekiq-reliable-fet...

[2] https://redis.io/commands/rpoplpush/#pattern-reliable-queue


Fair enough about idempotence.

I'm still confused about what you're saying though. You're saying that the language of "enhanced reliability" doesn't reflect losing 2 jobs out of roughly 350 million over a week (50M/day, from your other comment)?

And that if you didn't pay for the service, you'd have to add some checks to make up for this?

That all seems incredibly reasonable to me.


Crashes are under your control though. They’re not caused by sidekiq. And you could always add your own crash recovery logic, as you say. To me that makes it a reasonable candidate for a pro feature.

It’s hard to get this right though. No matter where the line gets drawn, free users will complain that they don’t get everything for free.


How are crashes under your control? Again they aren't talking about uncaught exceptions, but crashes. So maybe the server gets unplugged, the network disconnects, etc.


To me 'crash' means any unexpected termination, whether it's caused by an uncaught exception, OOM, or hardware/network issues.

I guess you can say that hardware issues on your host aren't under your control, but it's under your control to find a host that doesn't have these issues. And not even a full-on ACID database is going to be 100% reliable if you yank the power cord at the wrong moment.


I hope my tone doesn't come across as rude or too argumentative, but I think your understanding is a bit inaccurate.

> it's under your control to find a host that doesn't have these issues

All hosts will have these issues, the only question is how often. If you need 100% consistency, then you can't use the free Sidekiq. Personally, I've never needed Sidekiq pro (as these kinds of crashes are extremely rare). But this will depend on your scale and use case.

> And not even a full-on ACID database is going to be 100% reliable if you yank the power cord at the wrong moment

This is only true if there are bugs in the DB, or some underlying disk corruption happens. The whole point of an ACID database is that it's atomic, durable, and consistent, even in the worst-case scenario. If a power failure corrupted my SQL database I would feel very betrayed by the database.


It wouldn’t be corrupted, but in-flight transactions could fail to commit, just like queued jobs can be lost with sidekiq. The failure modes are similar.

I take your point that at a certain scale, hardware failure is inevitable, but if you’re running that many servers, you can afford sidekiq’s enterprise plan. It’s not something that will realistically happen if you’re just running like 20 instances on AWS. It’s perfectly reasonable to charge extra for something only large organizations with huge infrastructure budgets need.


For sure, I agree with you.

I would say that queued jobs being lost is different from an in-flight transaction being auto-rolled-back, but it's not a super important distinction. Like others have said, I think Sidekiq really nailed the free vs premium features and its success is evidence of that.


Jobs may crash due to VM issues or OOM problems. The more common cause of "orphans" is when the VM restarts and jobs can't finish during the shutdown period.


How often do your workers crash? I rely heavily on Sidekiq and don't think I see this very often, if ever.


We process around 50M sidekiq jobs a day across a few hundred workers on a heavily autoscaled infrastructure.

Over the past week there were 2 jobs that would have been lost if not for superfetch.

It's not a ton, but it's not zero. And when it comes to data durability the difference between zero and not zero is usually all that matters.

Edit for additional color: One of the most common crashes we'll see is OutOfMemory. We run in a containerized environment and if a rogue job uses too much memory (or a deploy drastically changes our memory footprint) the container will be killed. In that scenario, the job is not placed back into the queue. SuperFetch is able to recover them, albeit with really loose guarantees around "when".


Let me get this straight, you're complaining about eight 9s of reliability?

50,000,000 * 7 = 350,000,000

2 / 350,000,000 = 0.000000005714286

1 - (2 / 350,000,000) = 0.999999994285714 = 99.999999%

> It's not a ton, but it's not zero. And when it comes to data durability the difference between zero and not zero is usually all that matters.

If your system isn't resilient to 2 in 350,000,000 jobs failing I think there is something wrong with your system.


This isn't about 2 in 350,000,000 jobs failing. It's about 2 jobs disappearing entirely.

It's not reliability we're talking about, it's about durability. For reference, S3 has eleven 9s of durability.

Every major queuing system solves this problem. RabbitMQ uses unacknowledged messages which are pinned to a tcp connection, so when that connection drops before acknowledging them they get picked up by another worker. SQS uses visibility timeouts, where if the message hasn't been successfully processed within a time frame it's made available to other workers. Sidekiq free edition chooses not to solve it. And that's a fine stance for a free product, but just one I wish was made clearer.


If you want to focus on durability then I think your complaint makes even less sense. Somehow I doubt S3 is primarily backed by Redis.

I think it's fair to assume that something backed by Redis is not durable by default because that's not what Redis is known for, whereas the other options you listed are known for their resiliency and durability. I wouldn't view Sidekiq as a similar product to RabbitMQ and SQS.

Also, Sidekiq Pro uses more advanced Redis features to enable super_fetch, which reinforces the assumption that by default Redis is not durable: https://www.bigbinary.com/blog/increase-reliability-of-backg....


It's not uncommon to lose jobs in Sidekiq if you heavily rely on it and have a lot of jobs running. If using the free version for mission-critical jobs, I usually run that task as a cron job to ensure that it will be retried if the job is lost.

I have in the past monitored how many jobs were lost and, although a small percentage, it was still a recurring thing.


In containerized environments it may happen more often, due to OOM kills, or if you leverage autoscalers and have long-running Sidekiq jobs whose runtime exceeds the configured grace period for shutting down a container during a downscale, so the process is eventually terminated without prejudice.

OOM kills are particularly pernicious as they can get into a vicious cycle of retry-killed-retry loops. The individual job causing the OOM isn't that important (we will identify it, log it and noop it); it's the blast-radius effect on other Sidekiq threads that matters (we use up to 20 threads on some of our workers), so you want to be able to recover and re-run any jobs that are innocent victims of a misbehaving job.


Exactly why we refuse to use Sidekiq. “Hey, you have to pay to guarantee your jobs won’t just vanish”.

No thanks.


This is a very bad take. From an OSS perspective, languages can attract large communities of contributors and corporate sponsors because of their broad appeal and utility; specialized libraries will have more trouble doing both and may need alternate models to sustain themselves. From a business perspective, Mike offers not only a free version but also a paid enterprise version that comes with support from Mike and his team, which is something you can't get from a language owner unless you outright hire them or they run a consultancy.


To be fair to Mike, Sidekiq is absolutely free. He sells an enterprise version for money, that comes with support.


And he only started because companies were telling him it would be easier if they could just pay him.


Anyone who has a free thing: sell something that produces an expensable invoice or receipt. It makes it easy to use company funds to pay for company work. "Buy me a coffee" buttons don't really cut it.


This is great. Reminds me of the Derek Sivers post [1], "Don't start a business until people are asking you to."

1. https://sive.rs/asking


Sorry, this is bogus.

"If I had asked people what they wanted, they would have said faster horses." or whatever the quote is.

There are tons of businesses selling products that are ideas brought to life from scratching their own itch or simply a desire to make something and put it out there.

Personally, I am one of those people who started a business based on an idea with no validation before launching it.

I built it in its entirety, then went to places and people who I expected would want it and low-and-behold, I am making a living doing it.


> There are tons of businesses selling products that are ideas brought to life from scratching their own itch or simply a desire to make something and put it out there.

I'd bet money that the number of businesses which fail because they boil down to "a solution in search of a problem" is vastly larger than the number of businesses that succeeded despite performing "no validation".

That said, "making something" and "starting a business" are two different things. I would challenge you to point out where in the post he argues against making something, especially for the reasons you mention.

> I am one of those people who started a business based on an idea with no validation before launching it. I built it in its entirety, then went to places and people who I expected would want it and low-and-behold, I am making a living doing it.

Consider the possibility that you're in the minority there, and that you succeeded despite performing no validation.

> Sorry, this is bogus.

So if it doesn't apply to you personally, or in all cases, then you dismiss it as "bogus", full stop? Are you in the habit of doing this often?

Also, for future reference, it's "lo-and-behold", not "low-and-behold". [1]

1. https://en.wiktionary.org/wiki/lo_and_behold


> That said, "making something" and "starting a business" are two different things. I would challenge you to point out where in the post he argues against making something, especially for the reasons you mention.

The author states directly, "Don’t make a website or an app. Don’t build a system".

Starting a business doesn't have to follow any real "formula". It can be a step in the beginning, middle, or end of a process. Yes, you should have the ingredients for a cake before you bake it, but you don't need to find someone who will eat your cake before you bake one or offer it for sale and you sure as shit don't need to "find real people whose problem you can solve. You listen deeply to find their dream scenario."

> Consider the possibility that you're in the minority there, and that you succeeded despite performing no validation, rather than because of it.

While I never claimed it was due to not performing validation, we can clarify that yes, it is not due to this, it is despite not doing it. Validation itself can be found in successful sales or other means, it does not need to be done pre-market and there are many examples of this, aside from the horses quote I provided.

And yes, I, in the same way the author declares it, declare it as bogus because these rules do not need to be followed in the way the author claims.

The post is riddled with questionable content, IMO, made to hit the wannapreneur market.

I don't dare tell someone how to start a business. It's their business, it's their journey, they should do it how they want.


Hilarious take given the fact that there are plenty of orgs making an order of magnitude more than Mike is that rely on Ruby.


Not by Matz, but there are paid versions of Ruby out there.


(In Canada) The max my bank offers is 10 years, and it comes at a premium. The standard I see around me are 5 years.


Let's say that, hypothetically, I lead a project for a very big customer that ended up being used to commit what was (arguably) a crime against humanity.

Should I put this on resume and would it be in poor taste to quantify the results?


I'm not sure why you'd be willing to work on such a project but then be concerned about putting it on a CV. Assuming something I don't understand, you could talk about the impact it had on your employer without going into details of what it allowed them to do.


> I'm not sure why you'd be willing to work on such a project but then be concerned about putting it on a CV.

He just said:

>> that ended up being used to commit what was (arguably) a crime against humanity.

What about that makes you think that he knew what it would be used for?

"I'm the civil engineer who designed a large and complicated factory for the mass baking of ceramic products."

...

"I did not know the next government would start a world war and use them to gas Jews."

You seem to assume that everyone has perfect foresight. I assure you, even you don't have perfect foresight.


Woah there Betsy, I wasn't accusing anyone of anything, I literally didn't understand the comment as well as you apparently did.

So the second part of my comment stands. In the case of such a civil engineer, I'd focus on "Designed factory and rolled out process allowing organisation to achieve strategic objective to increase rate of baking ceramic products by 25%".


I’m not comparing this to a crime against humanity or making value judgments either way.

But if someone told me that they were a senior SRE at MindGeek, I’m inclined to hire them.


> val x: Pair<String, Pair<String, String>>

This just describes the structure. Structure without intent is useless.

Comments and out-of-code documentation both have the same problems: blind spots and rot.

That is not to say they are useless, but they are rarely if ever sufficient.

In the end, the most complete and trustworthy source of truth for the code is the code itself.

Help others by having your code describe your intent. Using appropriate variable names and appropriate types _helps_ to keep displayed intent in sync with reality.

Another point to consider is that for the same intent the structure may need to change, to evolve. Proper typing can insulate the code from those changes.


> Comments and out-of-code documentation both have the same problems: blind spots and rot

This has nothing to do with the usefulness of comments (which is what I'm promoting). And no, comments by themselves aren't sufficient for a system to do things. However, code by itself isn't sufficient to express intent. Code in isolation only serves to express a past *interpretation* of intent (not necessarily error free).

> In the end, the most complete and trustworthy source of truth for the code is the code itself.

For implementation I would agree. But not for intent.

Both versions of the code presented originally only give us the structure. The latter gives us some names. But neither gives us intent.

> Another point to consider is that for the same intent the structure may need to change, to evolve. Proper typing can insulate the code from those changes.

I think I can agree with this in some cases and not in others. Basically it boils down to whether a new type is really warranted. My opinion is that new types are warranted if there is some functionality that is unique to that type. That's hard to say in this example because neither of the examples actually gave us intent or context. But if it's the case that we're only going to be doing "String" things and "Pair" things, then I don't think it's warranted to invent new types to do the things the existing types already handle. It's extra abstraction and indirection.

As another commenter pointed out as well, a nitpick about the single letter variable name. It's something done in both of the original examples. It's bad in both but it's glaringly obvious in the shorter example. But changing the names here also really doesn't help us with the intent of the code. A good comment/description would.


Unless you live in Quebec - then Google disabled this option to avoid complying with the local child protection law.


Which is also an issue in the author's example, no? So the point would be moot.


Hey! What's up?


Telling someone to "man up" is a telltale sign of toxic masculinity.


Why? Isn't it the opposite? Doesn't it refer to the best of what is masculine?


> memset(data, 0, LENGTH);

> // ...

> T data[LENGTH];

I'm not sure how important it is in practice, but I'm pretty sure you don't zero out the whole array for sizeof(T) > 1.

Anyhow, memsetting to 0 a complex type is... not something I'd recommend in most cases.
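
To spell out the bug (illustrative only; data/LENGTH as in the snippet above, and assuming sizeof(T) > 1):

    T data[LENGTH];

    memset(data, 0, LENGTH);             // zeroes LENGTH *bytes*, i.e. only the
                                         // first LENGTH / sizeof(T) elements

    memset(data, 0, sizeof(data));       // zeroes the whole array, but only
                                         // sensible for trivially copyable T

    std::fill(data, data + LENGTH, T()); // <algorithm>; works for any T that is
                                         // default-constructible and assignable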


Ouch, yeah. They need std::fill(). This is the kind of thing that makes you lose faith in a C++ library. (Also, what's with the volatile private variables?!)


The use of volatile is typical here. It allows the ring buffer to be used from interrupts, as long as you have one reader and one writer at a time.

I haven't checked the code for correctness, but in a typical ring buffer implementation intended to be used in interrupts, you would make the read and write pos volatile.

To write, you put the value in the array, and then advance the write position. To read, you copy a value out, and then advance the read position. Volatile ensures that if a read is interrupted by a write or vice versa, the entire operation is still atomic. Without volatile, the compiler has more freedom to reorder memory access.
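
As a sketch of that structure (illustrative only, single producer / single consumer; whether volatile alone is enough is exactly what the replies below argue about):

    template <typename T, unsigned N>
    class RingBuffer {
        T buf[N];
        volatile unsigned read_pos;   // advanced only by the reader
        volatile unsigned write_pos;  // advanced only by the writer
    public:
        RingBuffer() : read_pos(0), write_pos(0) {}

        bool push(const T& v) {            // e.g. called from an ISR
            unsigned next = (write_pos + 1U) % N;
            if (next == read_pos) return false;       // full
            buf[write_pos] = v;            // write the element first...
            write_pos = next;              // ...then advance the position
            return true;
        }

        bool pop(T& out) {                 // e.g. called from the main loop
            if (read_pos == write_pos) return false;  // empty
            out = buf[read_pos];           // copy the element out first...
            read_pos = (read_pos + 1U) % N;   // ...then free the slot
            return true;
        }
    };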


> The use of volatile is typical here.

This is definitely true. C++ programmers typically do this.

> It allows the ring buffer to be used from interrupts

Unfortunately "allows" here is telling us about the programmers not the hardware. The programmers see this and figure eh, I don't really understand this but somebody wrote "volatile" so I guess they knew what they were doing.

> Volatile ensures that if a read is interrupted by a write or vice versa, the entire operation is still atomic

If you want atomic operations you need to use atomic operations not volatile ones. C++ 98 doesn't provide standardized atomics, so you would need to find out on each target what (if anything) you're required to do to get atomic behaviour.

The volatile keyword turns accesses into explicit memory reads and writes. This is what you need if you're a device driver, because your "memory" access might really not be to RAM at all. If the compiler elides a series of repeating writes to the CGA card because they don't seem to be needed after analysing the program, the effect is that the screen is blank and the program's purpose was not fulfilled.

This keyword does not mean "I want a Sequentially Consistent memory model across all my code, except somehow still very fast". That's not a thing.

Volatile accesses for things that are clearly just RAM are a code smell. As a result the volatile keyword in C and C++ is usually a code smell.


Right. Another reason not to confine ourselves to C++98.

While the abstract machine is not allowed to reorder volatile writes, the compiler is NOT obliged to emit instructions forcing the actual hardware not to reorder writes. Thus, regular cache behavior can turn your carefully ordered sequence of volatile writes into a bunch of local cache operations followed by a single writeback bus transaction.

If you are coding to a microcontroller, its cache hardware might be simple enough that this can't happen. Or, you might be able (and need!) to initialize a memory controller, at startup, to give a chosen memory address range simpler write semantics, e.g. "write-through".

But atomics are the cleanest way to express things at the source level.
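
A sketch of that with C++11 atomics (illustrative, not taken from the library under discussion): the element access is ordered against a release store of the index, and the matching acquire load on the other side makes it visible.

    #include <atomic>

    template <typename T, unsigned N>
    class RingBuffer {
        T buf[N];
        std::atomic<unsigned> read_pos{0};
        std::atomic<unsigned> write_pos{0};
    public:
        bool push(const T& v) {
            unsigned w = write_pos.load(std::memory_order_relaxed);
            unsigned next = (w + 1U) % N;
            // acquire: the reader has finished copying out of this slot
            if (next == read_pos.load(std::memory_order_acquire)) return false;
            buf[w] = v;
            // release: the element write above is visible to an acquire load
            write_pos.store(next, std::memory_order_release);
            return true;
        }

        bool pop(T& out) {
            unsigned r = read_pos.load(std::memory_order_relaxed);
            // acquire: pairs with the writer's release store
            if (r == write_pos.load(std::memory_order_acquire)) return false;
            out = buf[r];
            // release: the copy above happens before the slot is reused
            read_pos.store((r + 1U) % N, std::memory_order_release);
            return true;
        }
    };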


> Unfortunately "allows" here is telling us about the programmers not the hardware. The programmers see this and figure eh, I don't really understand this but somebody wrote "volatile" so I guess they knew what they were doing.

So... you're saying that whether a piece of code is correct is something that's more complicated than just noticing that it has the word "volatile" written somewhere? This is super obvious.

> C++ 98 doesn't provide standardized atomics, so you would need to find out on each target what (if anything) you're required to do to get atomic behaviour.

Yes.

But you don't need all of your operations to be atomic in order to get atomic behavior for an operation. It turns out that for a common ring buffer, with one producer and one consumer, you only need the read/write position to be read/written atomically, and operations on buffer data must be ordered wrt. operations on the read/write positions.

This is commonly achieved with "volatile" on single-core systems.

> This keyword does not mean "I want a Sequentially Consistent memory model across all my code, except somehow still very fast". That's not a thing.

You get ordering of the volatile operations with respect to other volatile operations, from the perspective of one CPU core. That is sometimes all you need.

The point that "some people don't understand what volatile means" is not germane.

> Volatile accesses for things that are clearly just RAM are a code smell. As a result the volatile keyword in C and C++ is usually a code smell.

This is a quite extreme viewpoint. I can't agree with it.

Volatile is, yes, overused and abused. Or at least it was. However, if you want to write a ring buffer and use it on a single-core processor in an embedded environment, you can write the whole thing in old-school C90 or C++98, and the only real question you have about your environment is whether the operations on read/write pointers will tear.

It is rare, at the very least, to find a CPU where reads and writes to an int will tear.


> It turns out that for a common ring buffer, with one producer and one consumer, you only need the read/write position to be read/written atomically, and operations on buffer data must be ordered wrt. operations on the read/write positions.

But again, putting a volatile modifier on the read/write position did not achieve this.

Most of what you thought you wanted "volatile" for here is either just always the behaviour for aligned primitive types anyway (on platforms like x86 with a relatively strong consistency guarantee at a low level) or still unsafe even after you sprinkled volatile around (on ARM platforms with relaxed memory access rules, unless your compiler specifically gives different behaviour for volatile)

> This is a quite extreme viewpoint. I can't agree with it.

It's the conventional viewpoint by now; even the C++ Core Guidelines explicitly tell you not to do this, e.g. CP.200.

But if you want an extreme viewpoint not yet widely accepted by say the C++ Standards Committee, try this:

Neither volatile nor atomic should exist as type qualifiers. A "volatile 32-bit integer" isn't a thing, and neither is an "atomic 32-bit integer". The operations exist and they're important in low-level programming, but the types are a fiction‡.

Rust's model here is much closer. You can do volatile memory access in Rust, but you can't have "volatile" types, they're not a thing. You must explicitly call an (inlined) function to read whatever primitive data you need out of a memory address or write it to the address. As a result, programmers who need volatile memory access are far more likely to remain clear about what they're doing and why, not just sprinkling "volatile" keywords and hoping it'll do what they meant.

‡ If you want some real C++ horror, why is volatile constexpr a thing? Literally the committee's excuse is maybe this could be useful. They don't have a concrete example of use, but hey, why not throw things we don't need into the standard, it's not as though C++ is a bloated language with far too many edge cases already...


Unfortunately the compiler can still reorder nonvolatile operations across volatile ones, so unless the buffer is also volatile, it is not as useful as one would expect.


Right. Volatile very rarely means what people using it imagine it means. It very much does not mean Do What I Mean.

E.g. compilers routinely elide writes to stack variables declared volatile—or even atomic—that they "know" are not aliased. So e.g. if an interrupt routine might look at things in your stack frame, you need to use asm to force it not to fool with writes there. Volatile and atomic don't help you, there, because the stack frame is special.

LTO builds can expose what had been invisible operations in other TUs (".o" files) to the optimizer. So that might demand especial care.


To be pedantic, technically they can't elide, reorder or coalesce accesses to volatile objects any more than they can any other form of I/O, even if it is a local variable whose address is never taken.

Of course how accesses on the abstract machine maps to the actual hardware is implementation defined, although most will document to translate them to plain movs.

And implementations have bugs of course; here [1] gcc removing a volatile access to an otherwise unused volatile parameter is considered a bug, even when the parameter is actually passed via register (in this case I would say the standard is underspecified).

You are absolutely correct regarding non-volatile atomics though.

[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=71793


It is notable that this report is against gcc-5, not assigned, and has not been looked at in five years.

There certainly are people who think volatile should mean something for stack variables, but the compiler people very much Do Not Care. So, by the technical wording of Standards, volatile means the same for all variables, in actual compilers (with certain exceptions) it does not.


Well, Richard Biener certainly counts as a compiler person (they are heavily involved with the GCC middle end), and they seem to care, although not enough to fix it, as the issue is fairly academic in this case.


This is why volatile function parameters are deprecated in C++20. It's basically nonsensical to begin with.

It sounds like the author of the code expected that the argument would get a memory location on the stack somewhere, but this behavior isn't actually mandated by the standard. If a variable is placed in a register, then "reading" it is a no-op.


Interesting that they are deprecated because volatile locals have specific guarantees with regard to longjmp.


Assuming pre-C++20 semantics, which were anyway compiler specific.

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p115...


With one exception, the relevant details aren't really compiler-specific, and C++20 keeps the relevant parts... if you perform two operations on volatile values, the operations can't be reordered, because the spec says so.

> Accesses through volatile glvalues are evaluated strictly according to the rules of the abstract machine.

Not the best, clearest, most useful part of the C++ spec but the core concept is there... if you perform operation A on a volatile object, then perform operation B on a volatile object, the emitted code must perform A before B. To my knowledge, this part of the C++ standard hasn't even changed wording in C++20, and isn't deprecated either.

The compiler-specific parts are things like:

1. Is this a compiler fence? (If no, the buffer in a ring buffer must also be volatile.)

2. Is this a memory fence?

3. Is there some platform-specific way in which operations on volatile objects are different?

The part you do need to know is if the operation will tear.


Thanks for putting it better than my short remark.

In addition, some of the complaints regarding the new changes might be addressed in C++23.

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p213...


Do you know which of these proposed changes is actually happening? I don't think all of them are, right?


You can find the C++ draft by searching for "n4861", if you're not the kind of person who wants to pay for (or has institutional access to) the final version of the spec.

The draft lists ++ / += of volatile deprecated, lists volatile function parameters and return types as deprecated, but does not mention deprecation of volatile member functions (or I didn't find it).

Keep in mind that the standard does change between draft and finalization, and I've been bitten by this before (one draft of C is missing library functions present in the final standard).


> the standard does change between draft and finalization

Interesting. This topic turned up 2 months ago [0] and I was assured that the differences between the last draft and the final document were guaranteed to be insubstantial things like formatting tweaks. You're saying this is definitely not the case in practice?

[0] https://news.ycombinator.com/item?id=26684368


The LaTeX sources of the C++ standard are on GitHub:

https://github.com/cplusplus/draft/tree/c+%2B20

I assume that the C++20 branch actually contains the final version, but you'll have to generate the pdf yourself.


Thanks!


But the ”writing” bool in the example in the README is not declared volatile. A bit weird...


Oh I see. Didn't realize they're assuming 1 reader/1 writer. Thanks!


N.B. I'm writing my comments as I am reading your code. Please, don't take offense from my criticism, it is meant to be constructive, albeit concise.

    /*
     * @brief Retrieve a continuous block of
     *        valid buffered data.
     * @param num_reads_requested How many reads are required.
     * @param skip Whether to increment the read position
     *             automatically, (false for manual skip control)
     * @return A block of items containing the maximum
     *         number the buffer can provide at this time.
     */
    Block<T> Read(unsigned int num_reads_requested)

Where is skip?

    if (buffer_full)
    {
        /*
         * Tried to append a value while the buffer is full.
         */
        overrun_flag = true;
    }
    else
    {
        /*
         * Buffer isn't full yet, write to the curr write position
         * and increment it by 1.
         */
        overrun_flag = false;
        data[write_position] = value;
        write_position = (write_position + 1U) % LENGTH;
    }

You don't write in the case of an "overrun". Isn't that one of the most interesting properties of a ring buffer? It seems that your implementation is specific to your use case (buffering to SD cards?). I don't think your _current_ implementation is appropriate for a _general_ purpose ring buffer, mostly because of API concerns. It may be interesting to emphasize this part in your doc.

    reads_to_end = LENGTH - read_position;
    req_surpasses_buffer_end = num_reads_requested > reads_to_end;

    if (req_surpasses_buffer_end)
    {
        /*
         * If the block requested exceeds the buffer end. Then
         * return a block that reaches the end and no more.
         */
        block.SetStart(&(data[read_position]));
        block.SetLength(reads_to_end);
    }
    else
    {
        /*
         * If the block requested does not exceed 0
         * then return a block that reaches the number of reads required.
         */
        block.SetStart(&(data[read_position]));
        block.SetLength(num_reads_requested);
    }

Maybe:

    reads_to_end = LENGTH - read_position;
    eff_reads = (num_reads_requested > reads_to_end) ? reads_to_end : num_reads_requested;
    // or: eff_reads = std::min(num_reads_requested, reads_to_end)
    block.SetStart(&(data[read_position]));
    block.SetLength(eff_reads);

Still, I understand the need to be very explicit in an embedded context.

Same principle for ``if (!bridges_zero)`` (the ``else`` case)

