AU612814B2 – Data processing system – Google Patents
Data processing system
Publication number: AU612814B2
Authority: AU (Australia)
Prior art keywords: bus, data, address, asserted, clock cycle
Prior art date: 1988-04-09
Legal status: Ceased
Application number: AU32537/89A
Other versions: AU3253789A (en)
Inventor: Geoffrey Poskitt
Current Assignee: Fujitsu Services Ltd
Original Assignee: Fujitsu Services Ltd
Priority date: 1988-04-09
Filing date: 1989-04-07
Publication date: 1991-07-18
1989-04-07: Application filed by Fujitsu Services Ltd
1989-10-12: Publication of AU3253789A
1991-07-18: Application granted; publication of AU612814B2
2009-04-07: Anticipated expiration
Status: Ceased
Classifications
G—PHYSICS
G06—COMPUTING; CALCULATING OR COUNTING
G06F—ELECTRIC DIGITAL DATA PROCESSING
G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
G06F13/38—Information transfer, e.g. on bus
G06F13/42—Bus transfer protocol, e.g. handshake; Synchronisation
G06F13/4204—Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus
G06F13/4234—Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus being a memory bus
G06F13/4243—Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus being a memory bus with synchronous protocol
G—PHYSICS
G06—COMPUTING; CALCULATING OR COUNTING
G06F—ELECTRIC DIGITAL DATA PROCESSING
G06F12/00—Accessing, addressing or allocating within memory systems or architectures
G06F12/02—Addressing or allocation; Relocation
G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
G06F12/0815—Cache consistency protocols
G06F12/0831—Cache consistency protocols using a bus scheme, e.g. with bus monitoring or watching means
Description
COMMONWEALTH OF AUSTRALIA
PATENTS ACT 1952-69
COMPLETE SPECIFICATION (ORIGINAL)

Name of Applicant: INTERNATIONAL COMPUTERS LIMITED
Address of Applicant: ICL House, Putney, London, SW15 1SW, England
Actual Inventor: GEOFFREY POSKITT
Address for Service: EDWD. WATERS & SONS, 50 QUEEN STREET, MELBOURNE, AUSTRALIA, 3000
Complete Specification for the invention entitled: DATA PROCESSING SYSTEM. The following statement is a full description of this invention, including the best method of performing it known to us:
C1108 DATA PROCESSING SYSTEM.
Background to the invention.
This invention relates to data processing systems. More specifically, the invention is concerned with a data processing system comprising a plurality of units interconnected by a bus, for information transfer between the units. The units may, for example, include data processing units and memory units.
It is conventional in such a system to rely on an acknowledgement signal to indicate that information sent over the bus has been accepted by the receiving unit or units. A disadvantage of this, however, is that it means that each information transfer is essentially a two-way process: information must first propagate from the sender to the receiver, and the acknowledgement signal must then propagate back from the receiver to the sender. Thus, the minimum time for each transfer is at least twice the bus propagation delay time. This restricts the rate of information transfer over the bus.
An object of the present invention is to avoid this limitation inherent in the use of acknowledgement signals.
Summary of the invention.
According to the invention there is provided a data processing system comprising a plurality of units interconnected by a bus for information transfer between the units, wherein the bus carries an ADDRESS WAIT signal indicating that at least one of the units is unable to accept an address from the bus, and a separate DATA WAIT signal indicating that at least one of the units is unable to accept data from the bus, and wherein when a unit sends data or an address on the bus, it holds the data or address on the bus for one clock cycle only, without waiting for any acknowledgement, unless a DATA WAIT signal or an ADDRESS WAIT signal, as the case may be, is present, in which case it holds the data or address on the bus for as long as the DATA WAIT signal or ADDRESS WAIT signal is present.
It can be seen that, provided the receiving unit is free to accept the information, each transfer of information occupies just one clock beat, and this clock beat can be chosen to be equal to, or not substantially greater than, the time delay for one-way propagation down the bus. There is no need to wait for an acknowledgement to propagate back to the sender. When the receiving unit is not free to accept information, it will produce a WAIT signal which will delay subsequent transfers; however, as will be shown, these WAIT states can frequently be overlapped with other operations, and so do not affect the performance.
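The sender's rule can be sketched in a few lines of Python (an illustrative model only; the function name and cycle-counting convention are not from the specification):

```python
def cycles_held(wait_samples):
    """Number of clock cycles a sender holds an item (data or an
    address) on the bus.

    wait_samples: the relevant WAIT line (DATA WAIT or ADDRESS WAIT)
    as sampled at successive clock beats while the item is on the bus.
    The item is held for one cycle, extended by one cycle for every
    beat at which the WAIT line is still asserted.
    """
    held = 1
    for wait in wait_samples:
        if not wait:
            break
        held += 1
    return held
```

With no WAIT asserted the transfer costs a single beat; a receiver that keeps WAIT asserted for two beats stretches the same transfer to three.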
Brief description of the drawings.
One data processing system in accordance with the invention will now be described by way of example with reference to the accompanying drawings.
Figure 1 is an overall block diagram of the system.
Figure 2 shows a processing module in more detail.
Figure 3 shows bus control logic forming part of the processing module.
Figures 4, 5 and 6 are flow charts showing the operation of the bus control logic.
Figures 7 and 8 are timing diagrams illustrating an example of operation of the system.
Description of an embodiment of the invention.
Referring to Figure 1, the data processing system comprises a plurality of data processing modules 10, and a plurality of memory modules 11, interconnected by a high-speed bus 12.
In operation, any one of the processing modules 10 can acquire ownership of the bus for the purpose of initiating a bus transaction, e.g. a read or write over the bus 12 to the memory modules 11. Ownership of the bus is acquired by a bus arbitration scheme as follows.
Each of the modules has a priority dependent upon its slot position on the bus 12, the slots being arranged in decreasing order of priority from left to right as viewed in Figure 1.
Each of the modules has a bus request output line BRQOUT and three bus request input lines BRQIN1-3.
BRQIN1 is connected to BRQOUT of the module immediately to the left of the module (i.e. to the next higher priority module). Similarly, BRQIN2 and 3 are connected to BRQOUT of the modules 2 and 3 slot positions to the left. In each module, BRQOUT is asserted (i.e. driven to the voltage level representing the "true" logic state) if the module is making a request for ownership of the bus, or if any of the three input signals BRQIN1-3 is asserted.
A module wins ownership of the bus if none of the three higher priority requests BRQIN1-3 is asserted, and if the bus is not already owned, as indicated by a signal BOWN on a line common to all the modules.
When a module wins ownership of the bus, it asserts the signal BOWN, so as to inhibit any further requests until it is ready to relinquish the bus.
Thus, it can be seen that if two or more modules simultaneously request ownership of the bus, ownership will be granted to the highest priority one of those modules. Once the bus has been acquired by a module, it may not be taken away by a subsequent request from a higher priority module.
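The daisy-chained request scheme above can be sketched as follows (a hedged model: the function names and the `requesting` flag are illustrative; only BRQIN1-3, BRQOUT and BOWN come from the text):

```python
def brqout(requesting, brqin):
    """BRQOUT of a module: asserted if the module itself requests
    the bus, or if any of its three bus request inputs BRQIN1-3
    (driven by the three next-higher-priority slots) is asserted."""
    return requesting or any(brqin)

def wins_bus(requesting, brqin, bown):
    """A module wins ownership only if it is requesting, none of the
    higher-priority requests BRQIN1-3 is asserted, and the bus is
    not already owned (BOWN false)."""
    return requesting and not any(brqin) and not bown
```

If two modules request simultaneously, the lower-priority one sees the other's request on one of its BRQIN inputs and loses, which matches the behaviour described.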
High-speed bus.
The high-speed bus 12 comprises the following lines.
Information transfer lines AD0-31, D32-63.
During address cycles, AD0-31 carry a 32-bit memory address, while lines D32-63 are unused. During data cycles, AD0-31 and D32-63 carry a 64-bit data word, consisting of eight 8-bit bytes. Alternatively, AD0-31 can be used to carry a 32-bit data word, with D32-63 being unused.
Bus qualifier lines Q0-Q7.
During address cycles, the qualifier lines carry control information indicating, for example, whether this is a read or a write transaction, and whether the data word length is 32 or 64 bits. During data cycles, in a write transaction, the lines Q0-Q7 carry byte validity signals, to indicate which of the 8 bytes in the data word are valid and are to be written into the memory.
Bus control lines:

AS (address strobe) This is asserted by the module that currently owns the bus, when it places an address on lines AD0-31 and a qualifier on lines Q0-7.
AW (address wait) This is asserted by the receiver of an address, if it is not currently free to accept another address. In particular, AW is asserted by a memory module if it has just accepted an address and has not yet dealt with it.
DS (data strobe) This is asserted by a module when it places data on the lines AD0-31 and D32-63. In particular, it is asserted by a processing module when it places data on the bus in a data write transaction, and by a memory module when it places data on the bus in a data read transaction.
DW (data wait) This is asserted by a receiver of data if it is not currently free to accept data. For example, DW is asserted by a memory module if it is not free to accept data because it is currently performing an internal refresh cycle.
Miscellaneous lines:

CLK (Clock) This carries a clock signal. The clock cycle is determined by the time required for one-way propagation of signals over the bus 12: that is, the turn-on delay of the sending unit, plus the bus propagation and settling delay, plus the set-up time for the receiving unit. Preferably, the clock cycle is substantially equal to, or not substantially greater than, this one-way propagation time. In the present example, the clock frequency is 16.67 MHz, i.e. the clock cycle is 60 nanoseconds. All bus cycles are timed from this clock. All signals are asserted on the high speed bus at the positive edge of this clock, and are clocked into receiving modules on this edge.
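The stated frequency and period are consistent with each other, as a quick check shows:

```python
clock_hz = 16.67e6            # clock frequency given in the text
cycle_ns = 1e9 / clock_hz     # corresponding clock period in nanoseconds
print(round(cycle_ns, 1))     # about 60 ns, as stated
```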
RST (Reset) This is a synchronous reset signal for all the modules.
SRD (Shared) This is used, as will be described, to allow one module to indicate that the data item currently being read by another module is shared.
IVN (Intervene) This is used, as will be described, to allow a processing module, or the I/O module, to indicate that it holds a more up-to-date copy of data that another module is attempting to read from the memory.
MERR (Memory Error) This signal is asserted by a memory module when it detects an error on performing a read.
BERR (Bus error) This is asserted if BOWN persists for more than 5 microseconds.
It should be noted that the lines AW, DW and SRD are common to all the modules, and each carries the logical OR function of signals placed on it by all the modules. In particular, AW or DW is asserted if any one of the modules has asserted it, e.g. if any one of the memory modules is not free to accept an address or data.
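The behaviour of these common lines is a wired-OR, which can be modelled in one line (illustrative only):

```python
def common_line(driven_values):
    """Value seen on a common line such as AW, DW or SRD: the logical
    OR of the values driven onto it by all modules, so the line is
    asserted if any one module asserts it."""
    return any(driven_values)
```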
Processing Modules.
Referring now to Figure 2, this shows one of the processing modules 10 in more detail.
The module interfaces with the high-speed bus 12 by way of the following registers.
DARXSD Data send and receive register.
This is a 64-bit register, and is connected to the bus lines AD0-31 and D32-63.
QREG Qualifier register.
This is an 8-bit register, connected to the bus qualifier lines Q0-Q7.
ADSD Address sender register.
This is a 32-bit register, connected to the bus lines AD0-31.
ADRX Address receiver register.
This is a 32-bit register, connected to the bus lines AD0-31.
The registers DARXSD, ADSD and ADRX are connected to the respective bus lines by way of a two-way buffer register 19.
The processing module includes a data processing unit 20, which may be a known 32-bit microprocessor.
The processing module also contains a cache 21.
This is a relatively small, fast-access store, compared with the main memory (the memory modules 11), and holds data copied from the main memory, for rapid access by the processing unit 20. The cache is a set-associative cache, and is addressed by a virtual address VA from the processing unit 20. The cache contains 4K lines of data, each line holding 32 bytes (four 64-bit words).
Each line of the cache has status bits, defining one of four states as follows:

INVALID
SHARED NON-DIRTY
EXCLUSIVE NON-DIRTY
EXCLUSIVE DIRTY.
"SHARED" means that the line of data is also present in the cache of at least one other module, while "EXCLUSIVE" means that the line is not present in any other cache.
"DIRTY" means that this line has been written to since being copied from the main memory, while "NON-DIRTY" means that the line has not been written to.
The processing module 10 also includes a memory management unit (MMU) 22, which translates the virtual address VA into a physical address PA, which can then be applied to the bus 12 by way of the register ADSD, so as to address the main memory.
When the processing unit 20 requires to access data for reading or writing, it applies the virtual address VA of the data to the cache 21, so as to access the corresponding line of data in the cache. If the required data is present in the cache line, a hit is scored; otherwise a miss is scored. The action of the cache is as follows.
Read hit: in this case, the data can be accessed immediately from the cache. The status of the cache line is not changed.

Read miss: in this case, the required data must be fetched from the main store, and loaded into the cache, overwriting the existing line of the cache. If the existing line is in the EXCLUSIVE DIRTY state, it must first be copied back to the main memory, so as to ensure that the most up-to-date copy of the data is preserved. This is achieved by means of a block write transaction over the high speed bus. The required data is then fetched from the main memory, by means of a block read transaction over the high speed bus, and loaded into the cache. The status of the new block is set either to SHARED NON-DIRTY or EXCLUSIVE NON-DIRTY, according to whether or not this line is already present in the cache of another processing module, as indicated by the SRD line.
Write hit: if the current status of the cache line is EXCLUSIVE NON-DIRTY, the data is written into the cache, and the status is set to EXCLUSIVE DIRTY. If the status is already EXCLUSIVE DIRTY, the write proceeds without delay and there is no state change. If the cache line status is SHARED NON-DIRTY, then the physical address of the line is broadcast over the bus to the other processing modules, so that they can invalidate the corresponding line in their caches, to ensure cache coherency. This is referred to as a broadcast invalidate operation. The data is written into the cache and the cache line status set to EXCLUSIVE DIRTY.
Write miss: in this case, the cache follows the sequence for read miss described above, followed by the sequence for write hit.
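The cache rules above can be collected into a single sketch. This is an illustrative model only: the short state codes, the function name and the returned operation strings are hypothetical, and the bus transactions are returned rather than performed.

```python
# The four line states; the codes are hypothetical, the state names
# are those used in the text.
INVALID, SHARED_ND, EXCL_ND, EXCL_D = "I", "SND", "END", "ED"

def cache_access(op, hit, state, shared_elsewhere=False):
    """Return (bus transactions issued, new line state) for one access.

    op: "read" or "write"; hit: whether the data is in the line;
    shared_elsewhere: the SRD line value observed during a refill.
    """
    ops = []
    if not hit:
        if state == EXCL_D:
            ops.append("block write")      # copy the dirty victim back first
        ops.append("block read")           # refill the line from main memory
        state = SHARED_ND if shared_elsewhere else EXCL_ND
    if op == "write":
        if state == SHARED_ND:
            ops.append("broadcast invalidate")
        state = EXCL_D                     # the line has now been written to
    return ops, state
```

Note that a write miss falls through both branches: the read-miss refill sequence followed by the write-hit sequence, exactly as the text describes.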
Each processing module 10 includes a snoop logic unit 23, whose purpose is to ensure coherency between the contents of the cache 21 and the caches in the other processing modules. The snoop logic unit 23 is an associative memory which stores as tags the physical addresses of all the data currently resident in the cache 21. The snoop logic receives all the physical addresses appearing on the high speed bus from the other processing modules, by way of the register ADRX, and compares each received address with the stored physical address tags. If the received address matches any of the stored physical addresses, the snoop logic generates the corresponding virtual address, and applies it to the cache 21 so as to access the corresponding line of data.
The operation of the snoop logic unit 23 is as follows. If the snoop logic detects a match during a broadcast invalidate operation by another processing module, it sets the status of the addressed cache line to INVALID. This ensures cache coherency.
If the snoop logic detects a match during a block read transaction by another processing module, it asserts DW and AW so as to temporarily freeze the read transaction. It then checks the status of the data line in the cache 21. When it has ascertained the status, it de-asserts DW and AW, to allow the read transaction to continue. At the same time, if the status is SHARED NON-DIRTY or EXCLUSIVE NON-DIRTY, the snoop logic asserts the SRD line so as to inform the other processing module that the data in question is also present in the cache in this processing module. The cache line status is set to SHARED NON-DIRTY. If, on the other hand, the status of the cache line is EXCLUSIVE DIRTY, the snoop logic initiates an INTERVENTION operation, to be described in more detail below. This causes the block read transaction to be temporarily suspended, while the data line is copied back to the main store. The cache line status is changed to SHARED NON-DIRTY. The block read transaction is then allowed to continue. This ensures that the other processing module reads the most up-to-date copy of the data.
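The snoop responses just described reduce to a small decision table. In this sketch the signal handling is collapsed into a returned tuple, and the state codes and names are illustrative, not taken from the figures:

```python
INVALID, SHARED_ND, EXCL_ND, EXCL_D = "I", "SND", "END", "ED"

def snoop(operation, line_state):
    """React to a matching address snooped from the high-speed bus.

    operation: "broadcast invalidate" or "block read" issued by
    another module. Returns (assert_srd, intervene, new_line_state).
    """
    if operation == "broadcast invalidate":
        return False, False, INVALID       # drop our now-stale copy
    if operation == "block read":
        if line_state in (SHARED_ND, EXCL_ND):
            return True, False, SHARED_ND  # clean copy: just signal "shared"
        if line_state == EXCL_D:
            return False, True, SHARED_ND  # dirty copy: intervene, copy back
    return False, False, line_state        # no action needed
```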
Bus control logic.
Referring now to Figure 3, each processing module also includes bus control logic 30, which controls the transactions over the high speed bus 12.

As shown, the bus control logic 30 receives the signals CLK, BRQIN1-3, BOWN, AW, DW, AS, DS, IVN, SRD, BERR and MERR from the high-speed bus. The control logic also generates load signals for the registers ADSD, ADRX, DARXSD and QREG, and a direction control signal DIR for the buffer 19, to control whether that buffer is sending or receiving data on the address and data lines. The control logic also generates the input signals for QREG.
The control logic 30 also receives signals PREQ and PSTATUS from the associated processing unit 20. PREQ indicates that the processing unit requires the control logic to initiate an action, such as for example a block read transaction. PSTATUS indicates the nature of the required action. When the control logic 30 has completed the requested action, it returns a signal PDONE to the processing unit.
Similarly, the control logic 30 receives signals SREQ and SSTATUS from the snoop logic 23. SREQ indicates that the snoop logic requires the control logic to initiate an action, such as an INTERVENTION operation. SSTATUS indicates the nature of the required action. When the control logic has completed the requested action, it returns a signal SDONE to the snoop logic.
The bus control logic 30 comprises a state machine, having a number of internal states. Transitions between these states are governed by the values of the input signals, and the output signals are determined by the current state. The operation of the bus control logic is as follows.
Block read.
A block read transaction reads a block of four data words D0-D3 from the main memory, over the high-speed bus. The block is used to refill a line of the cache 21.
Referring to Figure 4, this shows the sequence of states of the bus control logic 30 for a block read transaction. The action in each state is as follows.
REQUEST. In this state, the control logic loads the physical address PA from the MMU 22 into the ADSD register, and loads control bits specifying the transaction type into QREG. At the same time, it asserts BRQOUT, requesting ownership of the high-speed bus.
At each beat of CLK, the following condition is tested: BRQIN1 OR BRQIN2 OR BRQIN3 OR BOWN.
While this condition is true, the control logic remains in the REQUEST state. When the condition becomes false, the control logic goes on to its next state.
Thus, it can be seen that the control logic remains in the REQUEST state until there are no requests from any higher priority modules, and the bus is not owned by any other module.
SEND ADDRESS. The bus has now been won, and so the control logic de-asserts BRQOUT and asserts BOWN.
At the same time it switches the buffer register 19 to its send condition, so as to place the contents of the register ADSD on to the address lines AD0-31. The contents of QREG are sent on Q0-7. The address strobe signal AS is asserted.
At each clock beat, the condition of the address wait line AW is tested. While AW is true, the control logic remains in this state. If AW is false, the control logic goes on to its next state.
Thus, the address is held on the bus until any address wait AW from another module has been withdrawn.
WAIT FOR D0. The address has now been sent, and so the address strobe AS is de-asserted. At the same time, the buffer 19 is switched to its receive condition, so as to remove the address from the bus and to allow the voltage of the address lines to float. Similarly, QREG is switched to allow the qualifier lines Q0-7 to float.
The state of the SRD line is tested. If the snoop logic in any other module has found a match with the address sent over the bus, it will have asserted SRD as described above, to indicate that the data block is shared with at least one other module. Thus, if the control logic finds that SRD is true, it sets the status of the currently addressed line of the cache 21 (which will receive the data) to SHARED NON-DIRTY. Otherwise, if SRD is not asserted, the cache line status is set to EXCLUSIVE NON-DIRTY.
At the same time, the condition DS AND NOT DW AND NOT IVN is monitored. If this condition is true, a 64-bit data word is loaded into the data register DARXSD from the bus lines AD0-31 and D32-63. This is the first data word D0 of the four-word block.
At each beat of clock CLK, the following condition is tested: BERR OR (DS AND NOT DW AND NOT IVN). Whilst this condition is false, the control logic remains in this state. When the condition becomes true, the controller goes on to the next state.
Thus, the controller remains in this state until either a data strobe DS is received, without DW or IVN being asserted, or a bus error is detected.
WAIT FOR D1-D3. The next three states are similar to the WAIT FOR D0 state, except that the SRD line is not tested in these states, and the cache line status is not changed.
RELEASE BUS. The transaction is now finished, and so BOWN is de-asserted to release the bus.
If BERR or MERR was asserted, the processing unit will perform an error-handling action.
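The per-state exit conditions of the block read sequence (Figure 4, as described above) can be collected into one predicate. This is a sketch: the sampled signal set is passed as a dict of booleans, and only the conditions stated in the text are modelled.

```python
def block_read_leaves(state, s):
    """True if the block-read state machine leaves `state` at this
    clock beat, given the sampled bus signals s (dict of booleans)."""
    if state == "REQUEST":
        # Stay while any higher-priority request, or BOWN, is asserted.
        return not (s["BRQIN1"] or s["BRQIN2"] or s["BRQIN3"] or s["BOWN"])
    if state == "SEND ADDRESS":
        return not s["AW"]       # hold the address while AW is asserted
    if state.startswith("WAIT FOR D"):
        # Leave on a bus error, or on a data strobe with neither
        # DW nor IVN asserted.
        return s["BERR"] or (s["DS"] and not s["DW"] and not s["IVN"])
    return True                  # RELEASE BUS lasts a single cycle
```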
Block write.
A block write transaction writes a block of four data words D0-D3 from the processing module to the main memory, over the high speed bus. This is used to copy data back from the cache 21 to the main memory before it is overwritten in the cache.
Referring to Figure 5, this is a flow chart showing the sequence of states of the bus control logic 30 for a block write transaction. The action in each state is as follows.
REQUEST. This is the same as for the block read transaction.
SEND ADDRESS. This is the same as for the block read transaction, except that in this case the first data word D0 is loaded from the cache into the data register DARXSD to get it ready to send, and QREG is loaded with the required validity bits for the data.
SEND D0. The control logic de-asserts AS and asserts DS. At the same time, the buffer 19 is set to its send condition, so as to place the first data word on the bus lines AD0-31, D32-63. The contents of QREG are sent on Q0-7. The next data word D1 is then loaded into the data register DARXSD to get it ready to send, while the first word D0 is still in the buffer 19.
At each clock beat, DW is tested. If DW is true, the control logic remains in this state. If DW is false, it goes on to the next state.
Thus it can be seen that the first data word D0 is placed on the bus with a data strobe. If the data wait DW is asserted, the data is held there until DW is removed.
SEND D1-D3. These states are similar to the SEND D0 state, and cause the second to fourth words of the block to be sent.
RELEASE BUS. The transaction is now complete, and so BOWN is de-asserted to release the bus, and DS is de-asserted. At the same time, the buffer 19 is switched to its receive condition, so as to remove the data from the bus, allowing the lines AD0-31, D32-63 to float. Similarly, QREG is switched to allow Q0-7 to float.
INTERVENTION.
When the snoop logic in one module detects that another processor is attempting to read a stale (i.e. out-of-date) block of data from the main memory, it instructs the control logic 30 to perform an INTERVENTION action as follows.
ASSERT IVN. In this state, IVN is asserted. The effect of this is to temporarily suspend the block read transaction of the other processor. At the next clock beat, the control logic goes on to the next state.
GET DATA READY. The first word D0 of the up-to-date data is loaded from the cache into the data register DARXSD, and the corresponding validity bits are set in QREG. However, the data is not placed on the bus yet; the control logic waits an extra cycle to give the memory module that was addressed by the block read transaction time to get off the bus. At the next clock beat, the control logic goes on to the next state.
SEND D0-D3. These states are similar to those in the block write transaction. This causes the up-to-date data block to be written into the currently addressed location of the main memory.
DE-ASSERT IVN. Finally, the intervention signal IVN is de-asserted, the DS line is de-asserted, and the bus lines AD0-31, D32-63 and Q0-7 are allowed to float. This allows the interrupted block read transaction to continue. The transaction will now read the up-to-date value of the data from the main memory.
Examples of operation.
Referring now to Figure 7, this is a timing diagram showing a normal block write transaction.
In the first clock period, BRQOUT is asserted by a processing module to request bus ownership.
In the second clock period, the bus has been won by the requesting module, and BOWN is asserted.
AS is asserted so as to strobe the address on the bus.
This address is received by the memory modules into their own address receiver registers.
Each memory module now asserts AW for one clock period, while it is handling the received address.
At the same time, the processing module places the first data word D0 on the bus and asserts DS.
In the next three clock periods, the other three data words are placed on the bus.
Finally, the processing module de-asserts BOWN and DS, to terminate the transaction.
It should be noted that, although the address wait AW is asserted for one clock period, this does not in fact cause any hold-up in this case, since the address wait is overlapped with the data, and is de-asserted before the next address would be sent over the bus.
Figure 8 also shows a block write transaction.
However, in this case, it is assumed that the addressed memory module is busy, performing an internal refresh cycle, and is thus not able to accept the data. The memory module therefore asserts DW. This causes the processing module that initiated the block write to hold the first data word D0 on the bus.
When the memory module is ready to accept the data, it de-asserts DW. As a result the processing module is able to remove the first data word D0 at the end of the clock period, and to place the other three data words on the bus in the next three clock periods.
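The cost of the data phase in Figures 7 and 8 can be counted directly: each of the four words occupies one clock period, plus any periods for which DW holds it on the bus. This is a sketch of that accounting, not a bus simulator, and the per-word stall counts are illustrative.

```python
def data_phase_cycles(dw_stalls):
    """Clock periods needed to send the four words D0-D3 of a block
    write: one period per word, plus the periods DW stalls each word.

    dw_stalls: per-word stall counts; Figure 8's scenario stalls D0
    only, while the later words go out in consecutive periods.
    """
    assert len(dw_stalls) == 4
    return sum(1 + stall for stall in dw_stalls)
```

data_phase_cycles([0, 0, 0, 0]) gives the unimpeded four periods of Figure 7; a one-period DW stall on D0 gives five.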
Claims (8)
2. A system according to Claim 1 wherein at least one of the units is a data processing unit, and at least one of the units is a memory unit.
3. A system according to Claim 1 or 2 wherein said clock cycle is substantially equal to the time required for one-way propagation over the bus.
4. A system according to Claim 1 or 2 wherein said clock cycle is not substantially greater than the time required for one-way propagation over the bus.
5. A system according to any preceding claim wherein at least one of the units has at least first, second and third control states, and comprises: means operative in the first control state for applying an address to the bus and then proceeding to the second control state at the next clock cycle unless the ADDRESS WAIT signal is asserted, means operative in the second control state for applying data to the bus and then proceeding to the third control state at the next clock cycle unless the DATA WAIT signal is asserted, and means operative in the third control state for applying further data to the bus.
6. A method of transferring a series of information items over a bus between a plurality of functional units at least some of which can act as senders of information and at least some of which can act as receivers of information, the method comprising the steps: causing a receiver to assert a DATA WAIT signal if it is unable to receive data from the bus, and to assert an ADDRESS WAIT signal if it is unable to receive an address from the bus, dividing the operation of the bus into clock cycles, in the absence of a DATA WAIT or ADDRESS WAIT signal, sending data or an address over the bus from a sender to a receiver in a single clock cycle, without waiting for any acknowledgement from the receiver, and if a DATA WAIT or ADDRESS WAIT signal is asserted, placing the sender into a WAIT state, to hold up subsequent transfers of data or addresses as the case may be.
7. A method according to Claim 6 wherein each clock cycle is substantially equal to the time required for one-way propagation over the bus.
8. A method according to Claim 6 wherein each clock cycle is not substantially greater than the time required for one-way propagation over the bus.
9. A data processing system substantially as hereinbefore described with reference to the accompanying drawings.
AU32537/89A (priority 1988-04-09, filed 1989-04-07): Data processing system. Status: Ceased. Granted publication: AU612814B2 (en).
Applications Claiming Priority (2)

GB888808353A / GB8808353D0 (en), priority date 1988-04-09, filed 1988-04-09: Data processing system
GB8808353, priority date 1988-04-09
Publications (2)

AU3253789A (en), published 1989-10-12
AU612814B2 (en), granted, published 1991-07-18
Family ID: 10634876
Family Applications (1)

AU32537/89A / AU612814B2 (en), priority date 1988-04-09, filed 1989-04-07: Data processing system. Status: Ceased.
Country Status (6)

US: US5151979A (en)
EP: EP0344886B1 (en)
AU: AU612814B2 (en)
DE: DE68900708D1 (en)
GB: GB8808353D0 (en)
ZA: ZA892189B (en)
Families Citing this family (16)

* Cited by examiner, † Cited by third party

JPH03210649A (en) *, priority 1990-01-12, published 1991-09-13, Fujitsu Ltd: Microcomputer and its bus cycle control method
US5313621A (en) *, priority 1990-05-18, published 1994-05-17, Zilog, Inc.: Programmable wait states generator for a microprocessor and computer system utilizing it
US5426765A (en) *, priority 1991-08-30, published 1995-06-20, Compaq Computer Corporation: Multiprocessor cache arbitration
US5269005A (en) *, priority 1991-09-17, published 1993-12-07, NCR Corporation: Method and apparatus for transferring data within a computer system
US5524212A (en) *, priority 1992-04-27, published 1996-06-04, University of Washington: Multiprocessor system with write generate method for updating cache
US5339440A (en) *, priority 1992-08-21, published 1994-08-16, Hewlett-Packard Co.: Wait state mechanism for a high speed bus which allows the bus to continue running a preset number of cycles after a bus wait is requested
CA2109043A1 (en) *, priority 1993-01-29, published 1994-07-30, Charles R. Moore: System and method for transferring data between multiple buses
EP0692764B1 (en) *, priority 1994-06-17, published 2000-08-09, Advanced Micro Devices, Inc.: Memory throttle for PCI master
US5634076A (en) *, priority 1994-10-04, published 1997-05-27, Analog Devices, Inc.: DMA controller responsive to transition of a request signal between first state and second state and maintaining of second state for controlling data transfer
US5873114A (en) *, priority 1995-08-18, published 1999-02-16, Advanced Micro Devices, Inc.: Integrated processor and memory control unit including refresh queue logic for refreshing DRAM during idle cycles
US5896543A (en) *, priority 1996-01-25, published 1999-04-20, Analog Devices, Inc.: Digital signal processor architecture
US5954811A (en) *, priority 1996-01-25, published 1999-09-21, Analog Devices, Inc.: Digital signal processor architecture
DE69733011T2 (en), priority 1997-06-27, published 2005-09-29, Bull S.A.: Interface bridge between a system bus and a local bus for controlling at least one slave device, such as a ROM memory
US6002882A (en) *, priority 1997-11-03, published 1999-12-14, Analog Devices, Inc.: Bidirectional communication port for digital signal processor
US6061779A (en) *, priority 1998-01-16, published 2000-05-09, Analog Devices, Inc.: Digital signal processor having data alignment buffer for performing unaligned data accesses
US6996016B2 (en), priority 2003-09-30, published 2006-02-07, Infineon Technologies AG: Echo clock on memory system having wait information
Citations (2)
* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
US4045782A (en) * | 1976-03-29 | 1977-08-30 | The Warner & Swasey Company | Microprogrammed processor system having external memory
EP0140751A2 (en) * | 1983-09-22 | 1985-05-08 | Digital Equipment Corporation | Cache invalidation mechanism for multiprocessor systems
Family Cites Families (5)
* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
US4484267A (en) * | 1981-12-30 | 1984-11-20 | International Business Machines Corporation | Cache sharing control in a multiprocessor
US4807109A (en) * | 1983-11-25 | 1989-02-21 | Intel Corporation | High speed synchronous/asynchronous local bus and data transfer method
US4908749A (en) * | 1985-11-15 | 1990-03-13 | Data General Corporation | System for controlling access to computer bus having address phase and data phase by prolonging the generation of request signal
JPH0619760B2 (en) * | 1986-04-23 | 1994-03-16 | NEC Corporation | Information processing equipment
US4961140A (en) * | 1988-06-29 | 1990-10-02 | International Business Machines Corporation | Apparatus and method for extending a parallel synchronous data and message bus
Date | Country | Application | Publication | Status
1988-04-09 | GB | GB888808353A | GB8808353D0 | active, Pending
1989-03-16 | EP | EP89302588A | EP0344886B1 | not active, Expired – Lifetime
1989-03-16 | DE | DE8989302588T | DE68900708D1 | not active, Expired – Fee Related
1989-03-20 | US | US07/325,785 | US5151979A | not active, Expired – Lifetime
1989-03-22 | ZA | ZA892189A | ZA892189B | unknown
1989-04-07 | AU | AU32537/89A | AU612814B2 | not active, Ceased
Also Published As
Publication Number | Publication Date
AU3253789A (en) | 1989-10-12
GB8808353D0 (en) | 1988-05-11
ZA892189B (en) | 1989-11-29
EP0344886B1 (en) | 1992-01-15
EP0344886A1 (en) | 1989-12-06
DE68900708D1 (en) | 1992-02-27
US5151979A (en) | 1992-09-29
Similar Documents
Publication | Publication Date | Title
KR0154533B1 (en) | 1998-11-16 | Data processor
US5353415A (en) | 1994-10-04 | Method and apparatus for concurrency of bus operations
US5463753A (en) | 1995-10-31 | Method and apparatus for reducing non-snoop window of a cache controller by delaying host bus grant signal to the cache controller
AU612814B2 (en) | 1991-07-18 | Data processing system
US6405271B1 (en) | 2002-06-11 | Data flow control mechanism for a bus supporting two-and three-agent transactions
EP0748481B1 (en) | 2003-10-15 | Highly pipelined bus architecture
US5787486A (en) | 1998-07-28 | Bus protocol for locked cycle cache hit
US6405291B1 (en) | 2002-06-11 | Predictive snooping of cache memory for master-initiated accesses
US5802560A (en) | 1998-09-01 | Multibus cached memory system
US5283886A (en) | 1994-02-01 | Multiprocessor cache system having three states for generating invalidating signals upon write accesses
JP3067112B2 (en) | 2000-07-17 | How to reload lazy push into copy back data cache
KR100228940B1 (en) | 1999-11-01 | Method for maintaining memory coherency in a computer system having a cache
US5561783A (en) | 1996-10-01 | Dynamic cache coherency method and apparatus using both write-back and write-through operations
EP0288649A1 (en) | 1988-11-02 | Memory control subsystem
WO1994008297A9 (en) | 1994-05-26 | Method and apparatus for concurrency of bus operations
US5918069A (en) | 1999-06-29 | System for simultaneously writing back cached data via first bus and transferring cached data to second bus when read request is cached and dirty
EP0303648B1 (en) | 1995-08-30 | Central processor unit for digital data processing system including cache management mechanism
US5822756A (en) | 1998-10-13 | Microprocessor cache memory way prediction based on the way of a previous memory read
EP1041492A2 (en) | 2000-10-04 | Method and system for optimizing of peripheral component interconnect (PCI) bus transfers
JPH06318174A (en) | 1994-11-15 | Cache memory system and method for performing cache for subset of data stored in main memory
KR100322223B1 (en) | 2002-03-08 | Memory controller with queue and snoop tables
US5860113A (en) | 1999-01-12 | System for using a dirty bit with a cache memory
US5649232A (en) | 1997-07-15 | Structure and method for multiple-level read buffer supporting optimal throttled read operations by regulating transfer rate
WO1997004392A1 (en) | 1997-02-06 | Shared cache memory device
US5699540A (en) | 1997-12-16 | Pseudo-concurrent access to a cached shared resource
Legal Events
Date | Code | Title
2003-11-06 | MK14 | Patent ceased section 143(a) (annual fees not paid) or expired