CO538 Anonymous Questions and Answers
This page lists the various questions and answers. To submit a question, use the anonymous questions page. You may find the keyword index and/or top-level index useful for locating past questions and answers.
We have taken the liberty of making some minor typographical corrections to some of the questions as originally put. Although most of the questions here will have been submitted anonymously, this page also serves to answer some questions of general interest to those on the course.
When asking questions, please do not simply paste in your code and ask what is wrong with it, as these types of question are hard to answer in a public forum (resulting in either no answer, or an answer which will only be of use to the person asking the question). In many cases, a question about code can be formulated in general terms. For example, if you have an IF block that results in an 'incorrect indentation' error, you might ask: "what is the correct syntax for an occam IF statement?". Questions of this form are easier to answer, and are meaningful to all.
Questions that are not suitable for these public pages (i.e. those that contain assignment-specific code), should be mailed to your seminar-leader.
Submission reference: IN1422
Is any of the content of the 'Mobiles' slides examinable? I don't think they were covered in lectures but I did see some information in there about Rendezvous which I think was covered. Thanks for any help.
No - the content of the 'Mobiles' slides is not examinable. They were included for interest only, so you could see some other things in the language.
The material on Extended Rendezvous is in the 'shared-etc' slides, was covered in the lectures and is examinable. In the 'Mobiles' slides, that material is repeated, but with more extensive examples (supporting distributed occam-pi). Only the material presented in 'shared-etc' is part of this module.
Submission reference: IN1419
I'm just looking over last year's exam papers, and a question about fair alting came up. I didn't think we covered fair alting, at least code-wise. I understand what one is, but I haven't seen the code for it. Was this in the slides? I can't see it.
Fair alting is how a process provides a fair service to multiple client processes competing for its attention. Writing a process that waits on an ALT (over input channels from its clients) at the start of its cycle does not guarantee fairness. Selection between multiple ready guards in an ALT is arbitrary, so some clients may be starved of service if they have competitors who always happen to get chosen. Changing the ALT to a PRI ALT guarantees unfairness, with guards (clients) lower in priority order always losing out to any higher up that happen to be ready.

In classical occam, fair alting is a simple technique that uses the guaranteed unfairness of prioritised alting to provide a fair service – we just switch priority ordering each time, giving the guard serviced last time the lowest priority in the next round.
[In occam-pi, however, the need for fair alting has been superseded by the mechanisms of SHARED channel-ends (specifically, shared output-ends) and protocol inheritance. More later.]

To fair alt between just two channels, unroll the server cycle loop once and use a PRI ALT twice, with opposing priorities:
  WHILE running
    SEQ
      PRI ALT
        a ? ...
          ... deal with the 'a' message
        b ? ...
          ... deal with the 'b' message
      PRI ALT
        b ? ...
          ... deal with the 'b' message
        a ? ...
          ... deal with the 'a' message
Now, if a message is pending on the 'b' channel, it will either be taken next (if the server process is at the second PRI ALT in its cycle or nothing is pending on the 'a' channel) or it will be taken after at most one 'a' channel message (if the server is at its first PRI ALT and something was pending on 'a'). Similarly, any message pending on 'a' will be taken either next or the time after that. Neither channel can be starved and we can calculate worst case response times by the server for incoming messages (if we know maximum times for dealing with them).
For example, the fork process in the Dining Philosophers' college can be modified as above to guarantee that no philosopher starves to death because of ever-greedy colleagues.
[Fair servicing was mentioned several times during the course. For this (2008) year's exam purposes, the trivial solution above is all you need to have discovered – though you might as well just read further on to find the fair.fork.0 process that actually applies the above modification. The rest of this answer – on the fair servicing of arrays of channels, fair servicing via a single channel with a shared output-end, and reflections on mutexes (i.e. "mutually exclusive" locks), counting semaphores and Java monitors – is for interest only. Reading and following the examples below, however, will give you more practice in the ideas of occam-pi concurrency and deepen your understanding – which always helps with exam preparation, :).]
If there are three channels, this approach is no harder but gets a little tedious:
  WHILE running
    SEQ
      PRI ALT
        a ? ...
          ... deal with the 'a' message
        b ? ...
          ... deal with the 'b' message
        c ? ...
          ... deal with the 'c' message
      PRI ALT
        b ? ...
          ... deal with the 'b' message
        c ? ...
          ... deal with the 'c' message
        a ? ...
          ... deal with the 'a' message
      PRI ALT
        c ? ...
          ... deal with the 'c' message
        a ? ...
          ... deal with the 'a' message
        b ? ...
          ... deal with the 'b' message
The code is exploding! This can be ameliorated by placing the code responding to each source of message inside a PROC, so that the above responses are single-line PROC calls. But it's still tedious and gets worse as the number of guards gets larger!
If the server is servicing an array of channels (rather than a set of individual ones), we can wrap the algorithm up neatly, without any repeated code:
PROC fair.server.0 ([]CHAN SOME.PROTOCOL in?, ...)
  ... local state variables
  SEQ
    ... initialise local state
    INITIAL BOOL running IS TRUE:
    INITIAL INT favourite IS 0:
    WHILE running
      SEQ
        PRI ALT i = favourite FOR SIZE in?
          VAL INT actual.i IS i \ (SIZE in?):
          ... more local variables
          in[actual.i] ? ...
            ... deal with this message
        favourite := (favourite + 1) \ (SIZE in?)
:
On its first cycle, 'favourite' is 0 and the channel priority order is 0 .. (n-1), where n is the number of channels (SIZE in?). The channel with index 'favourite' always has highest priority and that moves round each cycle. So, in the second cycle, channel priority order is 1 .. (n-1) 0. Then, 2 .. (n-1) 0 1. Then, 3 .. (n-1) 0 1 2. Each channel will have highest priority once every n cycles. At worst, a pending message on any channel will be serviced within n cycles – and earlier if not all the other channels are pending.
This round-robin prioritisation is not as fair as it could be though. It's possible for channel 7 (say) to have a pending message and for channel 6 to be serviced 7 times in a row before channel 7 is served (e.g. when 'favourite' is 0, channels 0 through 5 have nothing and channel 6 is always ready!).
To fix this, don't round-robin the value of 'favourite'. Instead, as mentioned in the second paragraph of this answer, switch 'favourite' to one after the index serviced last time (which channel becomes least favoured in the next cycle) – a simple modification:
PROC fair.server.1 ([]CHAN SOME.PROTOCOL in?, ...)
  ... local state variables
  SEQ
    ... initialise local state
    INITIAL BOOL running IS TRUE:
    INITIAL INT favourite IS 0:
    WHILE running
      PRI ALT i = favourite FOR SIZE in?
        VAL INT actual.i IS i \ (SIZE in?):
        ... more local variables
        in[actual.i] ? ...
          SEQ
            ... deal with this message
            favourite := actual.i + 1
:
Now, once any channel (say index 5) gets serviced, every other channel that is pending will be serviced before channel 5 gets serviced again. Of course, if no other channels are pending, channel 5 can get serviced again immediately should it become ready. Whatever happens, no pending channel will see another channel serviced twice before it gets serviced – and that's fair!
Having said this, a SHARED channel in occam-pi eliminates the need for these tricks. Simply let all clients of the server share the output-end of a single channel to the server. The server just sees a normal (unshared) channel input-end:
PROC fair.server.2 (CHAN SOME.PROTOCOL in?, ...)
  ... local state variables
  INITIAL BOOL running IS TRUE:
  SEQ
    ... initialise rest of local state
    WHILE running
      ... more local variables
      SEQ
        in ? ...
        ... deal with this message
:
which is a lot simpler! The clients see the SHARED output-end to this service channel and simply CLAIM it before use. Competing clients will be queued and serviced in FIFO order – and that's fair!
The only downside (to fair.server.1 and fair.server.2) is that all clients must speak the same channel protocol. However, with protocol inheritance, the service channel can carry the union of distinct protocols for each client (but all these protocols must carry a CASE tag):
PROC fair.server.3 (CHAN SOME.PROTOCOL in?, ...)
  ... local state variables
  INITIAL BOOL running IS TRUE:
  SEQ
    ... initialise rest of local state
    WHILE running
      ... more local variables
      in ? CASE
        ... deal with all message variants
:
Note: the fair alting servers (fair.server.0 and fair.server.1) may process further messages from their clients (or, indeed, any other client they choose) in the block of code labelled: "... deal with this message". That is not allowed for the shared channel servers (fair.server.2 and fair.server.3), since client messages seeking the server's attention all share the one channel and any further messages from a client will be queued behind other clients' messages – so they are not available for immediate input. The effect can be managed, however, by providing a separate channel on which to receive subsequent messages.
[Note on the above note: an assumption is that clients of the shared channel servers only send one message per CLAIM of its shared writing-end. If they held on to that CLAIM and sent more than one message, all messages sent by that client within that CLAIM block will be grouped together and received in sequence by the shared channel server without being interleaved with messages from other clients. However, for many applications the client must relinquish its CLAIM after sending one message and, then, the problem (and solution) in the above note apply. See the philosopher.0 process below for an example of this.]
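For example (with a hypothetical shared channel and protocol – the names to.server, first, second, x and y are made up for illustration), a client wanting the server to receive two of its messages back-to-back could hold its CLAIM across both sends:

```occam
CLAIM to.server!            -- hypothetical shared writing-end
  SEQ
    to.server ! first; x    -- both messages arrive at the server ...
    to.server ! second; y   -- ... with no other client's messages in between
```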
An example is the fork process from the Dining Philosophers system. As suggested earlier, this can be made to service its competing philosophers fairly by:
PROC fair.fork.0 (CHAN BOOL left?, right?)
  WHILE TRUE
    BOOL any:
    SEQ
      PRI ALT
        left ? any               -- picked up by left phil
          left ? any             -- put down by left phil
        right ? any              -- picked up by right phil
          right ? any            -- put down by right phil
      PRI ALT
        right ? any              -- picked up by right phil
          right ? any            -- put down by right phil
        left ? any               -- picked up by left phil
          left ? any             -- put down by left phil
:
which is neat enough! Doing this via a shared channel (on which both philosophers try to pick up the fork) means that we need a separate channel for the release – as opposed to the above, where each philosopher uses the same channel for pick up and release. The code for this server becomes trivial:
PROC fair.fork.1 (CHAN BOOL pick.up?, put.down?)
  WHILE TRUE
    BOOL any:
    SEQ
      pick.up ? any              -- get picked up by someone
      put.down ? any             -- get put down by that same someone
:
Note: this fair.fork.1 process is, in fact, a general (and fair) mutex, sometimes known as a binary semaphore. So, the channels pick.up and put.down might be better renamed, respectively, acquire and release.
With fair.fork.1, the code for the clients (the philosophers) is slightly harder. For one thing, they now need two channels (both shared with the other philosophers) to each fork – instead of one. And they have to make a CLAIM on those channels before they can use them. Here it is:
PROC philosopher.0 (SHARED CHAN BOOL pick.up.left!, put.down.left!,
                    SHARED CHAN BOOL pick.up.right!, put.down.right!, ...)
  WHILE TRUE
    SEQ
      ... thinking
      ... hungry, get past security
      PAR
        CLAIM pick.up.left!           -- pick up
          pick.up.left ! TRUE         -- forks
        CLAIM pick.up.right!          -- in
          pick.up.right ! TRUE        -- parallel
      ... eating
      PAR
        CLAIM put.down.left!          -- put down
          put.down.left ! TRUE        -- forks
        CLAIM put.down.right!         -- in
          put.down.right ! TRUE       -- parallel
      ... leaving, let security know
:
[Note: it's because we insist that the philosopher picks up its forks in parallel that it must relinquish its CLAIM on the pick-up channels immediately after using them – it needs to know when both events have happened (so that it can eat) and, for that, both parallel actions (including the claims) have to terminate.]
A final thought: if we didn't mind programming the philosophers to pick up their forks serially (and, with the security guard refusing entry to all five at the same time, this is safe), we could do things even more simply. CLAIMing a shared channel-end grabs a mutex lock implemented by the occam-pi run-time. We could just use CLAIMs on those shared channel-ends as pre-requisites for eating and never actually send messages over the channels at all:
PROC philosopher.1 (SHARED CHAN BOOL pick.up.left!, put.down.left!,
                    SHARED CHAN BOOL pick.up.right!, put.down.right!, ...)
  WHILE TRUE
    SEQ
      ... thinking
      ... hungry, get past security
      CLAIM pick.up.left!
        CLAIM pick.up.right!
          ... eating
      ... leaving, let security know
:
Making each CLAIM models grabbing the fork – only one philosopher can succeed at a time. Exiting the CLAIM block models putting the fork back – another philosopher may now grab it. We now don't need any fork processes at all – just the pick-up/put-down channels, whose receiving ends may be safely left dangling (since nothing ever gets sent down them!). The above is, however, slightly clever programming and that can quickly lead to tears.

Note that making a CLAIM on a SHARED channel-end and not using it in the claim block is exactly the semantics Java offers from its monitor concept:
  Java:                           occam-pi:

  synchronized (someObject) {     CLAIM some.channel!
    ... stuff                       ... stuff (don't use some.channel!)
  }
However, nested monitor locks – like nested channel claims – do not always give us what we want. Hence, from Java 1.5 onwards, Java gives us a java.util.concurrent.Semaphore class whose acquire and release methods give more flexible orderings of use (and, of course, more ways for them to be mis-used) than monitor locks. The fair.fork.1 process above (which should be renamed to mutex) gives us the same flexibility.
For interest, here is a fair counting semaphore process in occam-pi:
--* This is a counting semaphore providing a fair acquisition service.
-- The other ends of the acquire and release channels must be SHARED
-- amongst all user processes. To acquire this semaphore, CLAIM the
-- acquire channel and send TRUE down it. To release, CLAIM the release
-- channel and send TRUE.
--
-- Warning: this semaphore must first be acquired before being released.
-- Releasing a semaphore before acquiring it will lead to deadlock.
-- Forgetting to release an acquired semaphore will also lead to deadlock.
--
-- @param n The number of processes allowed to hold this semaphore
--   at the same time (must be greater than zero).
-- @param acquire The channel to acquire this semaphore.
-- @param release The channel to release this semaphore.
--
PROC semaphore (VAL INT n, CHAN BOOL acquire?, release?)
  INITIAL INT count IS n:
  WHILE TRUE
    BOOL any:
    ALT
      (count > 0) & acquire ? any
        count := count - 1
      release ? any
        count := count + 1
:
Finally, note that the security guard in the Dining Philosophers' college can be implemented with the above semaphore, initialised with a count of 4.
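For instance, it might be wired in like this – a sketch only, with folds standing for philosopher processes that take the shared writing-ends as parameters:

```occam
SHARED! CHAN BOOL acquire, release:   -- writing-ends shared by all philosophers
PAR
  semaphore (4, acquire?, release?)   -- the security guard (count 4)
  PAR i = 0 FOR 5
    ... a philosopher, CLAIMing acquire! (and sending TRUE) before picking up
    ... its forks, and CLAIMing release! (sending TRUE) after putting them down
```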
Keywords: fair-alt , shared-channel
Referrers: Question 30 (2009) , Question 45 (2009) , Question 72 (2008) , Question 74 (2008) , Question 68 (2007)
Submission reference: IN1429
Can you give examples of real-time systems that combine hard and soft real-time concepts, with a brief description of the functionality of one of those systems?
The course did cover issues dealing with time (e.g. TIMER declarations, the occam-pi model of time, timeouts and timeout guards in ALTs). You need to be comfortable about programming loops that have time-periodic cycles (e.g. a process that outputs once per second how many inputs it received in the previous second).
However, this (2008) year we did not consider the common problems of real-time systems: how to calculate worst case response times, how to design systems so that such things can be calculated and minimised, and the concepts of hard and soft real-time. So, these things are not examinable.
For information, a hard real-time system is one whose specified deadlines must always be met – for example, collision detection systems in autonomous vehicles or the control avionics in a fly-by-wire airplane. If those deadlines get missed, the system crashes – literally!
A soft real-time system is one whose specified deadlines should be met most of the time – but only lead to temporary inconvenience when missed, with the system continuing to work and recovering to full functionality in time. Examples include: delivery of video data to a phone, processing bar-scanner readings at a supermarket checkout station, processing web forms for e-commerce.
An example of a system having elements of both: an avionics system combining the presentation of air-flow data to the head-up display for the pilot (soft) with the computation of wing surface movements, based on that air-flow data and other things, to keep the aircraft in stable flight (hard).
Keywords: timers
Referrers: Question 68 (2007)
Submission reference: IN1450
In the lecture on protocols, at slide 55, what is the purpose of j? It is never used.
It's only there because, currently, occam-pi insists on there being a named replication variable and initial value (in a replicated SEQ, PAR, IF and ALT) – even if it's not used. We might remove that insistence in the future – so that on that slide 55 we could write:
SEQ FOR length
  INT x:
  SEQ
    in ? x
    out[i] ! x
When I get a moment, I'll add that to the list of occam enhancement proposals, :).
Keywords: replicator
Submission reference: IN1451
Hi, I am using the Transterpreter for Windows (Java based IDE, version 1.4). I am trying to compile simple code taken from Peter Welch's slides. One example is:
PROC numbers (CHAN INT out!)
  INT n:
  SEQ
    n := 0
    WHILE TRUE
      SEQ
        out ! n
        n := n + 1
:
After compilation, it gives the warning:
Compiling: Test5.occ
Not enough parameters in top level.
Expected 3 parameters, ("CHAN BYTE" "CHAN BYTE" "CHAN BYTE")
Given: 1, ("CHAN INT")
skroc: Couldn't slink one or more files.
skroc exited with error code: 1
Why is it asking for three parameters? Please help. Do occam programs have some starting and ending methods, like in Java and C:
public static void main () {}

void main (void) {}
We did not tell you about separate compilation of individual processes in this module. We only told you how to compile main occam-pi programs.
In the first terminal class of the module, we showed you lots of main occam-pi programs (from your examples folder, also on raptor, in \courses\co631\examples\). The structure of main programs is explained in the README.txt file in that folder.
An occam-pi main program consists of a sequence of declarations (e.g. VAL constants, PROTOCOLs, TYPEs, PROCs, etc.). The last declaration must be a PROC with the header:
PROC foo (CHAN BYTE keyboard?, screen!, error!)
It is this last PROC, known as the top level process, that the system runs from the compiled code (and corresponds to the Java and C main). The PROC and channel parameter names are the programmer's choice – but the header structure must be as above.
It looks like you were trying to compile a file containing only the PROC numbers quoted in your question. That is not a main program and will not compile, with the compiler complaining about the lack of a top level PROC with the three mandated channel parameters.
Every exercise, example and assessment program in the module had this structure.
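For instance, the following is a complete (if tiny) main program – a minimal sketch, assuming the course library (course.module), which provides out.string:

```occam
#INCLUDE "course.module"

PROC main (CHAN BYTE keyboard?, screen!, error!)
  out.string ("Hello world!*n", 0, screen!)
:
```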
Keywords: main
Submission reference: IN1452
On the 2006 exam paper, in question 1(d), the closest answer is that it is a variant protocol, but the CASE is missing. Is it a typing error? Or am I missing something here?
You are quite right. There was a typing error in the question and the CASE line was missing from the PROTOCOL declaration. We hope not to repeat that!
See Question 104 (2006) for information on question 1(e) from the same paper. Note: the year numbers for these anon Q&As correspond to the start of the academic year – this year's questions are labelled (2007). Note also: the typing error in the 2006 question was not noticed by the question setter, checker, external examiner, nor any students taking the exam! Question 104 (2006) was answered almost a year after the exam and the typo (repeated in that answer) had still not been spotted. We apologise.
Keywords: protocol
Submission reference: IN1454
Regarding 2006 exam question 2a: I have created a protocol which has only a boolean type in it. The process, as I have implemented it, makes good use of that protocol. Did you expect anything more specific? I didn't find it useful or helpful to add more things to it. When the student requests a drink, it sends a TRUE along the protocol and, when the drink is ready, the bar sends a TRUE to the student using the protocol.
You are quite correct. Boolean channels are all that are needed for a student to request a beer and for the bar to supply it. And only TRUEs need be sent. Nothing else is needed here.
Keywords: exams
Submission reference: IN1453
Regarding time: suppose t is the current time, from a TIMER, and t1 is t PLUS 100. For 100ms, (t AFTER t1) will be FALSE. As soon as the condition becomes TRUE, if I want to increase their difference again by 100, am I going to add 101 or 100? I know it's a bit silly, but I have been playing around with the maths and it seems logical to add 101 instead of 100.
occam-pi TIMERs work in microseconds (us), not milliseconds (ms). This is just to set your question in the right units and does not otherwise affect the point you raise.
Suppose we have:
SEQ
  tim ? t
  tim ? AFTER t PLUS 0
where tim is a TIMER and t is an INT. The last line does not complete until the time on tim is AFTER what was recorded the line before. Depending how far it was through the hardware timer's microsecond when that time was recorded, the last line may have to wait: if the timer has not moved on by (at least) one microsecond, the last line will have to wait.
Suppose we have:
SEQ
  tim ? t
  tim ? AFTER t PLUS n
where n is a non-negative INT. In this case, the last line will wait until the timer has moved on by (at least) (n + 1) microseconds (from the time recorded in the previous line).

[If n were -1, the last line will not have to wait (since the time on the timer will be AFTER one microsecond before the previous line executed).]
As you have observed, this is a little subtle (and that's bad!). It would be more logical if occam-pi had an AFTER.OR.EQUAL operator, that related to >= in the same way as AFTER relates to >. Then, we could have:
Then, we could have:
SEQ
  tim ? t
  tim ? AFTER.OR.EQUAL t PLUS n
In which case, the last line would wait until the timer had moved on by (at least) n microseconds (from the time recorded in the previous line). But there is no AFTER.OR.EQUAL operator!
For practical applications, however, this microsecond discrepancy will not matter. To set up a regular series of timeouts, evenly spaced every n microseconds, the correct template is:
TIMER tim:
INT timeout:
SEQ
  tim ? timeout
  timeout := timeout PLUS n
  WHILE <some condition>
    SEQ
      tim ? AFTER timeout
      timeout := timeout PLUS n     -- set up next timeout value
      ... do some stuff
Notice that an actual time value is only recorded once – before the loop is even entered. Inside the loop, the values in timeout advance in exact increments of n microseconds – i.e. the timeouts in the series are always n microseconds apart from each other.
The result of having to write AFTER, rather than AFTER.OR.EQUAL, is that that series is always one microsecond after the series we might have been expecting (because the initialisation of that original timeout value, before loop entry, was one microsecond after what we might have thought).
Of course, if an incorrect template were used – such as:
TIMER tim:
INT timeout:
WHILE <some condition>
  SEQ
    tim ? timeout
    tim ? AFTER timeout PLUS n
    ... do some stuff
then timeouts of (n + 1) microseconds from the start of each loop are being set. The actual series of timeouts will be separated by (n + 1) microseconds together with however long it takes to do the rest of the stuff in the loop. This will not be a regular series!
For information, here is a correct template for setting up a regular series of timeouts (separated by n microseconds) in a process that also has to deal with other events:
TIMER tim:
INT timeout:
SEQ
  tim ? timeout
  timeout := timeout PLUS n
  WHILE <some condition>
    PRI ALT
      tim ? AFTER timeout
        SEQ
          timeout := timeout PLUS n     -- set up next timeout value
          ... response to the timeout
      <another guard>
        ... response to this other guard
      <yet another guard>
        ... response to this other guard
where we have given the timeout highest priority (over the other events). Of course, some applications may need to give other events higher priority ... or even deal with them fairly (see Question 62 (2007)).
In all the above analysis, we are assuming perfect timers and that the process being considered is running on a sufficiently fast processor (that can complete whatever else the process has to do before its next timeout is due). In practice, that won't always happen – especially when there are other processes sharing that same processor. Dealing with this takes us into the realm of hard and soft real-time systems, which was not included in this year's material (see Question 63 (2007)).
For interest however, you may note that, with the correct template, if a timeout is missed (because the process didn't get around to the timeout line in time), the timeout line will not delay anything – i.e. the response will be late ... but gets done as soon as possible. The next set timeout will still be for the original series schedule. So long as the processor can get through its response a bit quicker, the process will get back on schedule. This is what is needed for many soft real-time systems. At exceptionally busy times, some deadlines may be missed ... but it gets back on schedule as soon as the busy period ends. Note the emphasis on exceptionally.
Summary: the line:

tim ? AFTER timeout

does not terminate until the time value in the timer tim is AFTER the value of timeout. This means that it is guaranteed to wait until the time is (at least) one microsecond after timeout. In practice, it may have to wait longer (for the run-time kernel to get around to scheduling this process to run on some processor core).
If we wanted the guarantee to be for it to wait until the time is (at least) timeout, we should write:
tim ? AFTER (timeout MINUS 1)
Because the guarantees have that "(at least)" in them, there is not much need for the above subtlety. Guaranteeing to wait until (at least) one microsecond after timeout is also a guarantee to wait until (at least) after timeout.
Keywords: timers
Referrers: Question 72 (2007)
Submission reference: IN1455
Regarding the 2006 past paper, question 4b: if the period passes without any incoming data, do you want us to constantly send FALSE on the output channel? Also, I do know the watchdog process; is it correct if I implement it differently? Also, do you want us to know every single process implemented during the course? Because watchdog was implemented as an example in a lecture.
I think you must mean question 3b from the 2006 paper?
Assuming you do mean that question, no.
The first time no input arrives within period microseconds of the last input, monitor.2 must send TRUE down its broken! channel. If there is never any more input, this process does nothing. But if another input does arrive, this process sends FALSE down its broken! channel and deals with the arrived data as normal.
The watchdog process suggested to help the implementation of monitor.2 is similar to, but not the same as, the one discussed in the course slides (which repeatedly signals on its panic! channel for each period in which data fails to arrive). Yes, you may implement monitor.2 any way you like – using a watchdog process was only a hint.
No, you don't have to know every single process implemented during the course! You do have to understand enough of the ideas to be able to design and implement whatever processes are needed. If some of these are the same as, or similar to, processes shown in the course, familiarity will help. If you do not understand the ideas well enough, rote learning of the processes presented in the slides will not help.
Keywords: exams
Referrers: Question 72 (2007)
Submission reference: IN1456
Regarding 3b from the 2006 paper: I have set monitor1 as a subprocess of monitor2. Then the values from the gather are read into the integers angle and value. How can I set the values of a protocol type? Then send it to monitor1, and then monitor1 outputs it on the report channel.
When you say monitor1 in your question, I'm assuming you mean monitor. There is no monitor1 mentioned in this question. Yes, the hint suggests implementing monitor.2 as a pipeline consisting of a watchdog process and monitor – i.e. monitor is a sub-process of monitor.2.
To read from the gather? channel into integer variables angle and distance:
gather ? angle; distance
To write to the report! channel from integer variables angle and distance:
report ! angle; distance
The above will have been necessary for answering part (a) of this question. I don't understand your question: "How can i set the values of a protocol type?". I hope the above answers it.
I also don't understand: "Then send it to monitor1 and then monitor1 it outputs on the report channel"?
I would put a watchdog process first in the pipeline inside monitor.2; the output from watchdog being the input of monitor.
Hope that helps!
Keywords: exams
Referrers: Question 72 (2007)
Submission reference: IN1457
Is there going to be somewhere to give feedback on this module? Thanks.
Apologies for not setting this up sooner. A form for providing feedback is now linked from the top of the Co631 module page.
Keywords: feedback
Submission reference: IN1458
Reference to Question 69 (2007) and Question 70 (2007): monitor.2 accepts a channel gather?. Using an ALT, I check if there is data on that channel:

gather ? angle; value

If there is, how can I pass the angle and value to the subprocess monitor? I think that the solution is like:
CHAN R.DATA input.to.monitor2 IS gather:
To pass on angle and value to the subprocess monitor, simply output them. Your suggestion of using an abbreviation to rename the gather channel to something with a plausible sounding name won't help.
OK – here's a solution. First the monitor.2 process, implemented by a pipeline of watchdog then monitor:
         -----------------------------------------------------
         |                                                   |
         |     --------------           -------------        |
  gather |     |            |     c     |           |        | report
  --->---|-----|  watchdog  |------->---|  monitor  |--------|--->---
         |     |  (period)  |           |           |        |
         |     |            |           |           |        |
         |     --------------           -------------        |
         |           |                                       |
         |           |         monitor.2 (period)            |
         -----------------------------------------------------
                     |
                     | broken
                     v
which is written:
PROC monitor.2 (VAL INT period, CHAN R.DATA gather?, report!,
                CHAN BOOL broken!)
  CHAN R.DATA c:
  PAR
    watchdog (period, gather?, c!, broken!)
    monitor (c?, report!)
:
The question says: "to look out for a delay of period microseconds or more from the last input". This does not imply dividing time into a regular series of time slices (as given by the correct templates in Question 68 (2007)). Instead, we need the incorrect version that calculates the timeout relative to the timing of the last input (as asked by this question).
So, the watchdog can be coded:
  PROC watchdog (VAL INT period, CHAN R.DATA gather?, report!,
                 CHAN BOOL broken!)
    WHILE TRUE
      TIMER tim:
      INT timeout:
      SEQ
        tim ? timeout
        timeout := timeout PLUS period
        INT angle, distance:
        ALT
          gather ? angle; distance       -- data arrived in time
            report ! angle; distance     -- assume no delay here
          tim ? AFTER timeout
            SEQ
              broken ! TRUE              -- report supply failure
              gather ? angle; distance   -- wait for supply to resume
              report ! angle; distance   -- forward as normal
              broken ! FALSE             -- report supply OK again
  :
Strictly speaking, we should follow the golden rule and eliminate needless serialisation in the above response to the timeout:
          tim ? AFTER timeout
            SEQ
              PAR
                broken ! TRUE              -- report supply failure
                gather ? angle; distance   -- wait for supply to resume
              PAR
                report ! angle; distance   -- forward as normal
                broken ! FALSE             -- report supply OK again
  :
but marks would not have been lost for not doing that.
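For contrast, the regular-timeslice shape reads the timer once before the loop and then advances the timeout by period each cycle, so timing drift does not accumulate. A sketch (process and channel names here are made up for illustration):

  PROC ticker (VAL INT period, CHAN BOOL tick!)
    TIMER tim:
    INT timeout:
    SEQ
      tim ? timeout                        -- read the clock once, before the loop
      WHILE TRUE
        SEQ
          timeout := timeout PLUS period   -- next deadline: no drift accumulates
          tim ? AFTER timeout              -- wait until that absolute time
          tick ! TRUE                      -- do the periodic work
  :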
Keywords: exams
Submission reference: IN1459
I know what the symbol ? means. Can you explain to me what the symbol ?? means? Also, at the end of the module we were taught some Java multithreaded features. Are they examinable? I have never come across them in past papers.
The ?? introduces an extended rendezvous block – see slides 52-61 of the shared-etc slides.
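For example, here is a minimal sketch (process and channel names are made up): with ??, the outputting process stays blocked until the whole indented block following the extended input has finished, so tap can forward each value without anyone being able to observe extra buffering in the pipeline:

  PROC tap (CHAN INT in?, out!)
    WHILE TRUE
      INT x:
      in ?? x            -- extended input: the sender remains blocked ...
        out ! x          -- ... until this extended action has completed
  :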
As announced at the end of the lectures, the JCSP/Java material is not examinable.
Keywords: exams , extended-rendezvous
Submission reference: IN1460
If we have:
  DATA TYPE FOO
    RECORD
      [20]INT data:
  :

  CHAN [20]INT c:
  [20]INT x:
can we use casting, so that c transfers an element of type FOO? I.e. is the code:
  c ? FOO x
correct?
Your data type FOO is not cast compatible with [20]INT, so neither can be cast into the other. In occam-pi, casting is a special operator used (mostly) to change the representation of data values between types. For example, to change from a REAL32 to an INT, we cast and the internal representation changes dramatically.
However, if we define a new type directly from an existing type:
  DATA TYPE BAR IS [20]INT:
then we can cast between BAR and [20]INT variables freely. In this case, no change in internal representation is needed.
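For example (a sketch; the demo process and its names are mine):

  DATA TYPE BAR IS [20]INT:

  PROC demo (CHAN BAR out!)
    [20]INT x:
    SEQ
      SEQ i = 0 FOR 20
        x[i] := i
      out ! BAR x        -- cast: same representation, just a new type
  :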
The result of casting is a new piece of data. So your line:
  c ? FOO x
where x is a [20]INT and channel c carries [20]INT, makes no sense. It works without the (illegal) attempt to cast with the FOO operator.
If you need to turn your x data from a [20]INT into a FOO, there is a way – but not by casting! If two types have representations of exactly the same size, they can be RETYPESd into each other. This is described in the occam-pi reference wiki, which is linked from the Co631 module web page. Take a look at Section 10.2 "Retyping". We did not cover this during the lectures, so it is not examinable. I'll probably include it in future years, just after abbreviations in the shared-etc slides.
For information, since your FOO obviously has the same representation size as [20]INT, retyping is allowed between them. To retype your x, RETYPES it:
  FOO x RETYPES x:
  ...  for the duration of this block, x has the type FOO
Retyping is like abbreviating: all the rules about aliasing carry over, along with VAL retyping (like VAL abbreviations). So, if we only needed to use x as a FOO and not change it:
  VAL FOO x RETYPES x:
  ...  x has the type FOO here and it's a VAL
The retyping can go the other way. If f is a FOO variable, then:
  [20]INT f RETYPES f:
  ...  for the duration of this block, f has the type [20]INT
and:
  VAL [20]INT f RETYPES f:
  ...  f has the type [20]INT here and it's a VAL
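Retyping between different types of the same size works too – for instance, to inspect the raw bit pattern of a REAL32 (a sketch; the process name is mine):

  PROC bits.of (VAL REAL32 r, CHAN INT32 out!)
    VAL INT32 n RETYPES r:   -- same 32-bit size, so RETYPES is legal
    out ! n                  -- the raw IEEE bit pattern of r
  :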
Keywords: data-type , cast , retypes
Submission reference: IN1461
Can processes return a value without the use of a channel?
Sure. If a PROC has reference (i.e. non-VAL) data parameters and (if) it terminates, whatever has been assigned to those parameters will actually have been assigned to the variables passed as arguments to the call.
If a PROC only has data parameters (both reference and VAL) and is generally used in SEQuence with other processes, it is being used like a procedure (in Pascal), a function (in C) or a method (in Java).
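For example, a PROC used like a procedure, returning its result through a reference parameter (the names here are mine):

  PROC add (VAL INT a, b, INT result)
    result := a + b
  :

  PROC demo (CHAN INT out!)
    INT sum:
    SEQ
      add (3, 4, sum)    -- on termination, sum holds 7
      out ! sum
  :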
Keywords: parameter , reference , value
This work is licensed under a Creative Commons Attribution-Share Alike 3.0 Unported License. |