CO538 Anonymous Questions and Answers Keyword Index |
This page provides a keyword index to questions and answers. Clicking on a keyword will take you to a page containing all questions and answers for that keyword, grouped by year.
To submit a question, use the anonymous questions page. You may find the keyword index and/or top-level index useful for locating past questions and answers.
Keyword reference for fair-alt
2011 |
Submission reference: IN2145
In the exam are we expected to implement fair alting if the question doesn't explicitly ask it?
Nope – if fair-alting is required, it would be asked for (though perhaps indirectly).
Keywords: fair-alt
2009 |
Submission reference: IN1846
I'm trying to get my fork process to FAIR ALT over its left and right input channels. I know how to do this with an array of channels, but I'm not sure how to do it with 2 channels not in an array. Can you give any hints?
See Question 62 (2007).
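In brief, the trick that answer describes is to unroll the fork's service loop once and use two PRI ALTs with opposing priorities. A minimal sketch (assuming the usual BOOL signal channels, where each philosopher sends one signal to pick the fork up and another to put it down):

WHILE TRUE
  BOOL any:
  SEQ
    PRI ALT                   -- left has priority this time round
      left ? any              -- picked up by left phil
        left ? any            -- put down by left phil
      right ? any             -- picked up by right phil
        right ? any           -- put down by right phil
    PRI ALT                   -- right has priority this time round
      right ? any             -- picked up by right phil
        right ? any           -- put down by right phil
      left ? any              -- picked up by left phil
        left ? any            -- put down by left phil

The full discussion (including doing this with a channel array abbreviation, or with a shared channel) is in the 2007 answer below.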
Keywords: fair-alt
Submission reference: IN1830
Could I clarify that the only use of a shared channel is that you get a free FIFO mechanism for handling requests? As opposed to implementing a FAIR ALT – this doesn't actually exist, does it? Numerous channels per se aren't bad – it's not as if we are going to run out of channels? Or is it to declutter things?
Thanks,
A shared channel writing-end means that the writing processes are FIFO queued to do their writing on that channel – which is fair. If the other (reading-)end is not shared, then the single reader there will automatically provide a fair service to all writers.
The above is simpler than having a number of classic channels (with non-shared ends) from all the writers to a single reader, with the reader having to be programmed with fair-alting logic to provide a fair service. There are times when we have to do this though – e.g. when the protocols on the channels have to be different (although this can usually be overcome with variant protocols and variant protocol inheritance) or when fair service for some channels has to be balanced with prioritised service for others.
You are right that occam-pi does not support the notion of FAIR ALT as a language mechanism. That would not actually make sense, since the notion of fairness is only relevant when many executions of the ALT take place. The question of fairness for a single ALT does not arise. If fairness were introduced at the language level, it would have to be in terms of a loop that services a set of channels – e.g.
FAIR SERVER i = 0 FOR SIZE in?
  in[i] ? x
    ...  deal with x
which would mean something like:
VAL INT n IS SIZE in?:
INITIAL INT favourite IS 0:
WHILE TRUE
  PRI ALT i = favourite FOR n
    VAL INT i.mod.n IS i\n:
    in[i.mod.n] ? x
      SEQ
        ...  deal with x
        favourite := i.mod.n + 1    -- channel just serviced becomes least favoured
where we have used classical fair alting logic (see Question 62 (2007)).
FIFO servicing of channel inputs is not the only benefit of shared channels. The reading end of a channel may also be shared, which provides for fair distribution of messages from a single writer. And both ends of a channel may be shared, which provides a fair brokerage service for putting reader and writer processes in touch.
As you say, numerous channels are not necessarily to be avoided. After all, each one only requires one word of storage (to hold a reference to a waiting partner – or nil if no process is waiting). But if you can replace an array of channels with one shared channel, that does tend to simplify the picture and associated logic.
Keywords: q7, fair-alt, shared-channel
2008 |
Submission reference: IN1763
I'm a little unsure about question 2b from the 2008 exam paper. It asks for a plex process to multiplex two input channels fairly into one output channel. Would it be safe to assume the input channels are from an array and so write the process something like this:
PROC plex ([]CHAN INT in?, CHAN INT out!)
  WHILE TRUE
    ALT i = 0 FOR 2
      INT x:
      in[i] ? x
        out ! x
:
The process is fair as it loops round, inputting from each of the two channels in turn.
Or would a better answer be to use a PRI ALT and use its guaranteed unfairness as a way of guaranteeing fairness?
The code you have there is not fair. It relies on the implementation of ALT being fair, which may not be the case. ALT makes an arbitrary decision, which may be random or fair ... but may not be either. With current implementations, an ALT is neither random nor fair.
Your code is, therefore, not fair. Neither does it "loop around, inputting from each of the two channels in turn". Expanding out your replicated ALT, your code is equivalent to:
PROC plex ([]CHAN INT in?, CHAN INT out!)
  WHILE TRUE
    ALT
      INT x:
      in[0] ? x
        out ! x
      INT x:
      in[1] ? x
        out ! x
:
which, of course, is exactly the same as:
PROC plex ([]CHAN INT in?, CHAN INT out!)
  WHILE TRUE
    ALT
      INT x:
      in[1] ? x
        out ! x
      INT x:
      in[0] ? x
        out ! x
:
A suitable solution to the two-channel case (and others) does use the guaranteed unfairness of the PRI ALT to construct a fair solution. This can be found in the answer to Question 62 (2007).
Keywords: past-exams, fair-alt
Submission reference: IN1759
Hello there, I noticed in the 2007 exam paper there was a question on fair servicing / starvation-free processes. Would we need to use the notion of shared channels for a question like this, or would you accept answers using a fair-alt?
If you're referring to question 1(f), then this is really asking about the fair-alt. The question states "servicing of input channels", which suggests something about the insides of a process whose interface is already defined (multiple channels). So a shared-channel would not be a good answer, unless you can explain (correctly) how shared channels handle fairness. I don't think we've taught you how that works though! [Added later: yes, we have! See Question 62 (2007), about half-way through the answer.]
Keywords: past-exams, shared-channel, fair-alt
2007 |
Submission reference: IN1419
I'm just looking over last year's exam papers, and a question about fair alting came up.
I didn't think we covered fair alting, at least code-wise. I understand what one is, but I haven't seen the code to do so. Was this in the slides? I can't see it.
Fair alting is how a process provides a fair service to multiple client processes competing for its attention. Writing a process that waits on an ALT (over input channels from its clients) at the start of its cycle does not guarantee fairness. Selection between multiple ready guards in an ALT is arbitrary, so some clients may be starved of service if they have competitors who always happen to get chosen. Changing the ALT to a PRI ALT guarantees unfairness, with guards (clients) lower in priority order always losing out to any higher up that happen to be ready.
In classical occam, fair alting is a simple technique that uses the guaranteed unfairness of prioritised alting to provide a fair service – we just switch priority ordering each time, giving the guard serviced last time the lowest priority in the next round.
[In occam-pi, however, the need for fair alting has been superseded by the mechanisms of SHARED channel-ends (specifically, shared output-ends) and protocol inheritance. More later.]
To fair alt between just two channels, unroll the server cycle loop once and use a PRI ALT twice with opposing priorities:
WHILE running
  SEQ
    PRI ALT
      a ? ...
        ...  deal with the 'a' message
      b ? ...
        ...  deal with the 'b' message
    PRI ALT
      b ? ...
        ...  deal with the 'b' message
      a ? ...
        ...  deal with the 'a' message
Now, if a message is pending on the 'b' channel, it will either be taken next (if the server process is at the second PRI ALT in its cycle or nothing is pending on the 'a' channel) or it will be taken after at most one 'a' channel message (if the server is at its first PRI ALT and something was pending on 'a'). Similarly, any message pending on 'a' will be taken either next or the time after that. Neither channel can be starved and we can calculate worst case response times by the server for incoming messages (if we know maximum times for dealing with them).
For example, the fork process in the Dining Philosophers' college can be modified as above to guarantee no philosopher starves to death because of ever-greedy colleagues.
[Fair servicing was mentioned several times during the course. For the purposes of this year's (2008) exam, the trivial solution above is all you need to have discovered – though you might as well just read further on to find the fair.fork.0 process that actually applies the above modification. The rest of this answer – on the fair servicing of arrays of channels, fair servicing via a single channel with a shared output-end, and reflections on mutexes (i.e. "mutually exclusive" locks), counting semaphores and Java monitors – is for interest only. Reading and following the examples below, however, will give you more practice in the ideas of occam-pi concurrency and deepen your understanding – which always helps with exam preparation. :)]
If there are three channels, this approach is no harder but gets a little tedious:
WHILE running
  SEQ
    PRI ALT
      a ? ...
        ...  deal with the 'a' message
      b ? ...
        ...  deal with the 'b' message
      c ? ...
        ...  deal with the 'c' message
    PRI ALT
      b ? ...
        ...  deal with the 'b' message
      c ? ...
        ...  deal with the 'c' message
      a ? ...
        ...  deal with the 'a' message
    PRI ALT
      c ? ...
        ...  deal with the 'c' message
      a ? ...
        ...  deal with the 'a' message
      b ? ...
        ...  deal with the 'b' message
The code is exploding! This can be ameliorated by placing the code responding to each source of message inside a PROC, so that the above responses are single line PROC calls. But it's still tedious and gets worse as the number of guards gets larger!
If the server is servicing an array of channels (rather than a set of individual ones), we can wrap the algorithm up neatly, without any repeated code:
PROC fair.server.0 ([]CHAN SOME.PROTOCOL in?, ...)
  ...  local state variables
  SEQ
    ...  initialise local state
    INITIAL BOOL running IS TRUE:
    INITIAL INT favourite IS 0:
    WHILE running
      SEQ
        PRI ALT i = favourite FOR SIZE in?
          VAL INT actual.i IS i \ (SIZE in?):
          ...  more local variables
          in[actual.i] ? ...
            ...  deal with this message
        favourite := (favourite + 1) \ (SIZE in?)
:
On its first cycle, 'favourite' is 0 and the channel priority is 0 .. (n-1), where n is the number of channels (SIZE in?). The channel with index 'favourite' always has highest priority and that moves round each cycle. So, in the second cycle, channel priority order is 1 .. (n-1) 0. Then, 2 .. (n-1) 0 1. Then, 3 .. (n-1) 0 1 2. Each channel will have highest priority once every n cycles. At worst, a pending message on any channel will be serviced within n cycles – and earlier if not all the other channels are pending.
This round-robin prioritisation is not as fair as it could be though. It's possible for channel 7 (say) to have a pending message and for channel 6 to be serviced 7 times in a row before channel 7 is served (e.g. when 'favourite' is 0, channels 0 through 5 have nothing and channel 6 is always ready!).
To fix this, don't round-robin the value of 'favourite'. Instead, as mentioned in the second paragraph of this answer, switch 'favourite' to one after the index serviced last time (so that the channel just serviced becomes the least favoured in the next cycle) – a simple modification:
PROC fair.server.1 ([]CHAN SOME.PROTOCOL in?, ...)
  ...  local state variables
  SEQ
    ...  initialise local state
    INITIAL BOOL running IS TRUE:
    INITIAL INT favourite IS 0:
    WHILE running
      PRI ALT i = favourite FOR SIZE in?
        VAL INT actual.i IS i \ (SIZE in?):
        ...  more local variables
        in[actual.i] ? ...
          SEQ
            ...  deal with this message
            favourite := actual.i + 1
:
Now, once any channel (say index 5) gets serviced, every other channel that is pending will be serviced before channel 5 gets serviced again. Of course, if no other channels are pending, channel 5 can get serviced again immediately should it become ready. Whatever happens, no pending channel will see another channel serviced twice before it gets serviced – and that's fair!
Having said this, a SHARED channel in occam-pi eliminates the need for these tricks. Simply let all clients of the server share the output-end of a single channel to the server. The server just sees a normal (unshared) channel input-end:
PROC fair.server.2 (CHAN SOME.PROTOCOL in?, ...)
  ...  local state variables
  INITIAL BOOL running IS TRUE:
  SEQ
    ...  initialise rest of local state
    WHILE running
      ...  more local variables
      SEQ
        in ? ...
        ...  deal with this message
:
which is a lot simpler! The clients see the SHARED output-end to this service channel and simply CLAIM it before use. Competing clients will be queued and serviced in FIFO order – and that's fair!
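For illustration, a client's side of this arrangement might look like the following sketch (the channel name to.server! and what gets sent on it are placeholders, not part of the server code above):

PROC client (SHARED CHAN SOME.PROTOCOL to.server!, ...)
  WHILE TRUE
    SEQ
      ...  do some local work
      CLAIM to.server!        -- join the FIFO queue of waiting writers
        to.server ! ...       -- send one request while holding the claim
: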
The only downside (to fair.server.1 and fair.server.2) is that all clients must speak the same channel protocol. However, with protocol inheritance, the service channel can carry the union of distinct protocols for each client (but all these protocols must carry a CASE tag):
PROC fair.server.3 (CHAN SOME.PROTOCOL in?, ...)
  ...  local state variables
  INITIAL BOOL running IS TRUE:
  SEQ
    ...  initialise rest of local state
    WHILE running
      ...  more local variables
      in ? CASE
        ...  deal with all message variants
:
Note: the fair alting servers (fair.server.0 and fair.server.1) may process further messages from the client just serviced (or, indeed, any other client they choose) in the block of code labelled: "... deal with this message". That is not allowed for the shared channel servers (fair.server.2 and fair.server.3), since client messages seeking the server's attention all share the one channel and any further messages from a client will be queued behind other clients' messages – so they are not available for immediate input. The effect can be managed, however, by providing a separate channel on which to receive subsequent messages.
[Note on the above note: an assumption is that clients of the shared channel servers only send one message per CLAIM of the shared writing-end. If they held on to that CLAIM and sent more than one message, all messages sent by that client within that CLAIM block would be grouped together and received in sequence by the shared channel server, without being interleaved with messages from other clients. However, for many applications the client must relinquish its CLAIM after sending one message and, then, the problem (and solution) in the above note apply. See the philosopher.0 process below for an example of this.]
An example is the fork process from the Dining Philosophers system. As suggested earlier, this can be made to service its competing philosophers fairly as follows:
PROC fair.fork.0 (CHAN BOOL left?, right?)
  WHILE TRUE
    BOOL any:
    SEQ
      PRI ALT
        left ? any             -- picked up by left phil
          left ? any           -- put down by left phil
        right ? any            -- picked up by right phil
          right ? any          -- put down by right phil
      PRI ALT
        right ? any            -- picked up by right phil
          right ? any          -- put down by right phil
        left ? any             -- picked up by left phil
          left ? any           -- put down by left phil
:
which is neat enough! Doing this via a shared channel (on which both philosophers try to pick it up) means that we need a separate channel for the release – as opposed to the above, where each philosopher uses the same channel for pick-up and release. The code for this server becomes trivial:
PROC fair.fork.1 (CHAN BOOL pick.up?, put.down?)
  WHILE TRUE
    BOOL any:
    SEQ
      pick.up ? any      -- get picked up by someone
      put.down ? any     -- get put down by that same someone
:
Note: this fair.fork.1 process is, in fact, a general (and fair) mutex, sometimes known as a binary semaphore. So, the channels pick.up and put.down might be better renamed, respectively, acquire and release.
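With that renaming, the process (a minimal sketch, identical to fair.fork.1 apart from the channel names) would be:

PROC mutex (CHAN BOOL acquire?, release?)
  WHILE TRUE
    BOOL any:
    SEQ
      acquire ? any      -- grant the lock to the first queued claimer
      release ? any      -- wait for that holder to give it back
: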
With fair.fork.1, the code for the clients (the philosophers) is slightly harder. For one thing, they now need two channels (both shared with the other philosophers) to each fork – instead of one. And they have to make a CLAIM on those channels before they can use them. Here it is:
PROC philosopher.0 (SHARED CHAN BOOL pick.up.left!, put.down.left!,
                    SHARED CHAN BOOL pick.up.right!, put.down.right!, ...)
  BOOL any:
  WHILE TRUE
    SEQ
      ...  thinking
      ...  hungry, get past security
      PAR
        CLAIM pick.up.left!          -- pick up
          pick.up.left ! any         -- forks
        CLAIM pick.up.right!         -- in
          pick.up.right ! any        -- parallel
      ...  eating
      PAR
        CLAIM put.down.left!         -- put down
          put.down.left ! any        -- forks
        CLAIM put.down.right!        -- in
          put.down.right ! any       -- parallel
      ...  leaving, let security know
:
[Note: it's because we insist that the philosopher picks up its forks in parallel that it must relinquish its CLAIM on the pick-up channels immediately after using them – it needs to know when both events have happened (so that it can eat) and, for that, both parallel actions (including the claims) have to terminate.]
A final thought: if we didn't mind programming the philosophers to pick up their forks serially (and, with the security guard refusing entry to all five at the same time, this is safe), we could do things even more simply. CLAIMing a shared channel-end grabs a mutex lock implemented by the occam-pi run-time. We could just use CLAIMs on those shared channel-ends as pre-requisites for eating and never actually send messages over the channels at all:
PROC philosopher.1 (SHARED CHAN BOOL pick.up.left!, put.down.left!,
                    SHARED CHAN BOOL pick.up.right!, put.down.right!, ...)
  WHILE TRUE
    SEQ
      ...  thinking
      ...  hungry, get past security
      CLAIM pick.up.left!
        CLAIM pick.up.right!
          ...  eating
      ...  leaving, let security know
:
Making each CLAIM models grabbing the fork – only one philosopher can succeed at a time. Exiting the CLAIM block models putting the fork back – another philosopher may now grab it. We now don't need any fork processes at all – just the pick-up/put-down channels, whose receiving ends may be safely left dangling (since nothing ever gets sent down them!). The above is, however, slightly clever programming and that can quickly lead to tears.
Note that making a CLAIM on a SHARED channel-end and not using it in the claim block is exactly the semantics Java offers from its monitor concept:
Java:                                 occam-pi:

  synchronized (someObject) {           CLAIM some.channel!
    ... stuff                             ... stuff  (don't use some.channel!)
  }
However, nested monitor locks – like nested channel claims – do not always give us what we want. Hence, from Java 1.5 onwards, Java gives us a java.util.concurrent.Semaphore class whose acquire and release methods give more flexible orderings of use (and, of course, more ways for them to be mis-used) than monitor locks. The fair.fork.1 process above (which should be renamed to mutex) gives us the same flexibility.
For interest, here is a fair counting semaphore process in occam-pi:
--* This is a counting semaphore providing a fair acquisition service.
-- The other ends of the acquire and release channels must be SHARED
-- amongst all user processes.  To acquire this semaphore, CLAIM the
-- acquire channel and send TRUE down it.  To release, CLAIM the release
-- channel and send TRUE.
--
-- Warning: this semaphore must first be acquired before being released.
-- Releasing a semaphore before acquiring it will lead to deadlock.
-- Forgetting to release an acquired semaphore will also lead to deadlock.
--
-- @param n The number of processes allowed to hold this semaphore
--   at the same time (must be greater than zero).
-- @param acquire The channel to acquire this semaphore.
-- @param release The channel to release this semaphore.
--
PROC semaphore (VAL INT n, CHAN BOOL acquire?, release?)
  INITIAL INT count IS n:
  WHILE TRUE
    BOOL any:
    ALT
      (count > 0) & acquire ? any
        count := count - 1
      release ? any
        count := count + 1
:
Finally, note that the security guard in the Dining Philosophers' college can be implemented with the above semaphore, initialised with a count of 4.
Keywords: fair-alt, shared-channel
Referrers: Question 30 (2009), Question 45 (2009), Question 72 (2008), Question 74 (2008), Question 68 (2007)
2006 |
Submission reference: IN1008
Can I just check that something like:
ALT
  Guard
    Code
  ALT i = 0 FOR 5
    Code
  ALT i = 0 FOR 5
    Code
gives a 1/15 chance of selection to each element, or whether it's 1/3, then 1/5 for each of the bits of code in the second, if that makes sense?
The above code flattens into an ALT over 11 guarded processes. Assuming all were ready, this does not give each guarded process a 1/11 chance of selection! That selection is arbitrary. One way to do it would be to choose a ready guard at random. But computing random numbers is expensive! A simpler and legal way to make the selection is to choose the first – or maybe the last. With an ALT, we can say nothing about the likelihood of selection when more than one guard is ready.
See the answer to Question 43 (2006).
Submission reference: IN1007
I'm having some trouble with my replicated ALT. My display process contains:
WHILE TRUE
  ALT i = 0 FOR 5
    phils[i] ? n[i]
      SEQ
        pause (50000)    -- to be replaced by random delays in phil proc
        out.string ("Philospher ", 0, out!)
        out.int (i, 0, out!)
        ...  more stuff
The idea is that it'll pick an arbitrary channel from the phils array, but when I run this it always seems to pick 0 for i. I tried replacing the ALT with an alternative block of code using a SEQ to ensure all the channels were producing output, which they are.
I've no idea why you have a delay in response to a message from one of the philosophers?!! The delay is to model a thinking or eating period in the life of a philosopher – so that (i.e. the philosopher process) is where the delay should be invoked. The code we write in occam-pi really is process-oriented: code that relates to the behaviour of a process is always executed by that process.
[Aside: note that the same statement does not apply to object-orientation. Code that relates to the behaviour of an object might be executed by all sorts of threads of control. Unfortunately, this means that to understand/write the code for one method, we have to understand/write the code for all other methods that operate on the fields used by the original method and might be executed by concurrent threads. Such logic is impossible to scale to the management of complex behaviour.]
Assuming you have no delays in your philosopher code, the behaviour you describe can be explained. At the start of each loop, every philosopher is trying to make a report – their think and eat times are zero. The ALT sees all channels with messages pending and it chooses one arbitrarily. Please note that this does not mean randomly or fairly! Always choosing channel index 0 is quite arbitrary and perfectly legal – you cannot complain. You need to invoke delays, ideally for random amounts of time within some defined limits, in the philosopher process to model its thinking and eating times. Then, there will not generally be messages pending on all report channels each time the display process starts its loop – and the ALT will cope just fine.
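For example, a thinking or eating delay inside the philosopher process might be coded with a timer along these lines (a minimal sketch; the value 50000 and the name delay are illustrative, and ideally the delay would be randomised within some limits):

VAL INT delay IS 50000:          -- chosen thinking/eating time (ideally randomised)
TIMER tim:
INT t:
SEQ
  tim ? t                        -- read the current time
  tim ? AFTER t PLUS delay       -- wait until 'delay' timer ticks have passed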
By the way, why does your code only receive reports from the philosophers? You may find it much easier to receive reports from all players (the philosophers, forks and security guard) in one replicated ALT (over an array of 11 report channels).
Note: sometimes we really do need to service an array of ALT guards (e.g. channel inputs) fairly – for example, when the server is under stress from the continued pushing of messages by very active clients. In this case, we cannot use a simple replicated ALT. We could try a replicated PRI ALT, but that guarantees unfairness – only channel 0 will be serviced. However, from guaranteed unfairness, we can obtain fairness. A PRI ALT gives priority to its first guard that's ready – that's first in the listed order, not in time of readiness. Find a way to arrange that the guard serviced last time is the last in the PRI ALT order next time. Please try to work this out – it's very simple once you see it! The technique is known as fair alting. It guarantees that no pending message has to wait more than (n - 1) cycles of a server process loop before it is taken. This gives us worst case response times in a stressed real-time system.
Keywords: q7, fair-alt, process-orientation, object-orientation
Referrers: Question 44 (2006), Question 51 (2006)
2003 |
I'm trying to do a fair-ALT, but there are 11 channels to ALT over and that's a lot of code! I was wondering: is there a way of looping around FOR statements? What I mean is, if the first time you write, say,
ALT i = 0 FOR 5
is there anyway so that the second time I can write
ALT i = 1 FOR 5
but have it so that the 5th iteration loops back to channel 0 (i.e. the one I started on the first time). Sorry if that's unclear.
I'm not sure what you're asking here -- at a guess you're confused as to the operation of a replicated-ALT. One key idea is that it is a replicator, not a loop as such. Writing:
ALT i = 0 FOR 5
  in[i] ? x
    P(x)
is equivalent to:
ALT
  in[0] ? x
    P(x)
  in[1] ? x
    P(x)
  in[2] ? x
    P(x)
  in[3] ? x
    P(x)
  in[4] ? x
    P(x)
For more information, look in the course notes that describe the ALT, around slide 5-29. The code for a fair-ALT is also given explicitly on slide 10-23, as you have been told many times! Also check out other related anonymous questions, particularly those on alternation and the fair-alt.
2002 |
Hi, it has been strongly suggested by our seminar leader that we use protocols for the state report channels to the display process. The reports disclose any change in state from the philosophers, forks and the security guard. Obviously you need to fair multiplex all the philosophers together and all the forks??? However, how can you multiplex three different protocols together??? You can't, can you? If not, then I guess we write some display process with three input channels, one for each protocol, and work that way.
Fair multiplex the philosopher reports down to one channel. Same for the forks. That leaves 3 channels, with different protocols, coming to the display process – say a, b and c. To fair-alt those:
WHILE TRUE
  SEQ
    PRI ALT
      a ? ...
        ...  react to the a message
      b ? ...
        ...  react to the b message
      c ? ...
        ...  react to the c message
    PRI ALT
      b ? ...
        ...  react to the b message
      c ? ...
        ...  react to the c message
      a ? ...
        ...  react to the a message
    PRI ALT
      c ? ...
        ...  react to the c message
      a ? ...
        ...  react to the a message
      b ? ...
        ...  react to the b message
Bit repetitive writing the reaction codes thrice ... so make a PROC for each reaction ... just invoke that PROC thrice.
However, simpler-but-more-dangerous is just to have one protocol that all philosophers, forks and the security guard use - that's 11 channels, all with the same super-protocol, coming out of the college. Plug those into the display and simply fair-alt that!
It's unsafe to use one protocol for all reports - since a fork, for example, could send a philosopher report by mistake! This is the sort of thing that can happen somewhere down the life-cycle of a product during maintenance. However, for this exercise, you will not lose marks for doing so ... so long as, of course, you don't make any mistakes that this approach allows!
2001 |
I am having trouble modifying the fork process in the dining philosophers' question. What I want to do is basically have a fair ALTing process going over the two input lines, left and right.
I have already implemented a fair ALTing process in my animate process, but the difference there was that all the input lines were called the same name -- i.e. state[0], state[1] etc., so implementing it there was easy from the definition given in the notes. The problem with the fork process is that the input lines have two distinct names, and so the definition of fair ALTing does not translate as obviously.
What I have tried to do is give them the same name, so in the fork header I have put '[]CHAN OF BOOL in', but I could not get it to work this way after many attempts. My question is: is this the correct way to be going about altering fork?
There are two ways. One is to create a channel array abbreviation for the two scalar channels - i.e.
[2]CHAN OF BOOL c IS [left, right]:
Normal anti-aliasing rules now apply - you can't use the names left and right anymore (because you have the new names, c[0] and c[1], for them). But you now have an array of channels and the usual fair ALTing coding (see slide 10-23) can now be applied.
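For example, a minimal sketch of applying that fair-alting coding to the abbreviated array (using the fork's BOOL signals, and rotating a 'favourite' index as on slide 10-23 and in the fair.fork.0 idea above; the variable names are illustrative):

[2]CHAN OF BOOL c IS [left, right]:
INT favourite:
SEQ
  favourite := 0
  WHILE TRUE
    PRI ALT i = favourite FOR 2
      BOOL any:
      c[i \ 2] ? any                    -- picked up by one philosopher
        SEQ
          c[i \ 2] ? any                -- put down by that same philosopher
          favourite := (i \ 2) + 1      -- that side drops to lowest priority next time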
Second way: don't be clever and just write:
WHILE TRUE
  SEQ
    PRI ALT              -- first give left priority
      left ? ..
        ..
      right ? ..
        ..
    PRI ALT              -- then give right priority
      right ? ..
        ..
      left ? ..
        ..
This work is licensed under a Creative Commons Attribution-Share Alike 3.0 Unported License. |