Post by Peter Olcott
Post by Paul
Post by Peter Olcott
Post by Paul
If a read-modify-write instruction is involved, that is
atomic enough to complete without being snipped in half.
The benefit of using an actual RMW-type instruction is
that the processor takes care of making it atomic.
The processor won't allow an RMW to be split in half.
http://en.wikipedia.org/wiki/Read-modify-write
Paul
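To make that concrete: if your compiler supports C11, an atomic
increment (a classic RMW) can be written portably with
<stdatomic.h>. This is only a sketch; the counter and function
names are made up for illustration:

    #include <stdatomic.h>
    #include <stdio.h>

    /* Shared counter. atomic_fetch_add compiles down to an atomic
       RMW instruction (e.g. LOCK XADD on x86), so a concurrent
       increment can never be split in half. */
    static atomic_int counter = 0;

    void bump(void)
    {
        atomic_fetch_add(&counter, 1);
    }

    int main(void)
    {
        bump();
        printf("counter = %d\n", atomic_load(&counter));
        return 0;
    }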
Great, now I have a fast way to trigger my preemptive
scheduling.
Will it work the way that you suggest on all modern
hardware?
I don't know when it was introduced.
Maybe your programming environment has a primitive that
takes care of the details?
No, not at all. The reason I asked this question is so
that I would know how far I can go in enhancing the
behavior of the operating system.
As long as one process reading a memory location never
mangles another process's write to that location, I am good
to go, even if the read may return mangled data when it
occurs at the exact same moment as the write. It would be
even better if the read were never mangled either. It sounds
like you are saying that I can count on both of these
behaviors.
I don't think you have anything to worry about with respect
to memory-based operations.
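If you want those guarantees spelled out in the source rather
than relying on aligned word-sized accesses happening to be
atomic, C11 atomics make them explicit. A minimal sketch,
assuming a C11 compiler (the names are illustrative):

    #include <stdatomic.h>

    static atomic_int shared_value;

    /* Writer: atomic_store guarantees the new value is never
       observed half-written by a concurrent reader. */
    void writer(int v)
    {
        atomic_store(&shared_value, v);
    }

    /* Reader: atomic_load always returns a whole value, never
       a mangled mix of old and new bytes. */
    int reader(void)
    {
        return atomic_load(&shared_value);
    }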
If you want to create a semaphore, there might be a construct
in the language you're using which makes that possible using
only the high-level language. If it isn't supported directly
by the language, or by a library provided for such things, then
you can inject assembler code into your high-level source and
do it that way. Otherwise, there might not be a direct mapping
between a line of C/C++ code and, say, a Test-and-Set instruction
on the x86 processor. People can and do inject assembler into
their programs, but when doing so they need a damn good reason
(think of the portability issues). If there is already
a high-level primitive, a library somewhere that does such
things, then someone else has already solved the portability
issues for you.
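For instance, C11's atomic_flag is exactly such a primitive: the
standard requires it to be lock-free, it typically compiles to a
Test-and-Set style instruction, and the portability problem is
already solved for you. A minimal spinlock sketch, assuming a
C11 compiler (acquire/release are made-up names):

    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;

    /* Spin until the flag was previously clear; the
       test-and-set itself is a single atomic RMW operation. */
    void acquire(void)
    {
        while (atomic_flag_test_and_set(&lock))
            ;  /* busy-wait */
    }

    void release(void)
    {
        atomic_flag_clear(&lock);
    }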
Paul