AU631420B2 – Processing of memory access exceptions with pre-fetched instructions within the instruction pipeline of a memory system based digital computer
Processing of memory access exceptions with pre-fetched instructions within the instruction pipeline of a memory system based digital computer
Publication number
AU631420B2
Authority
AU
Australia
Prior art keywords
memory
unit
instruction
fault
memory access
Prior art date
1989-02-03
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU53943/90A
Other versions
AU5394390A
Inventor
Mark A. Firstenberg
David B. Fite
Tryggve Fossum
Dwight P. Manley
Michael M. Mckeon
John E. Murray
Ronald M. Salett
David A. Webb Jr.
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Digital Equipment Corp
Original Assignee
Digital Equipment Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
1989-02-03
Filing date
1990-04-27
Publication date
1992-11-26
1990-04-27: Application filed by Digital Equipment Corp
1991-12-19: Publication of AU5394390A
1992-11-26: Application granted
1992-11-26: Publication of AU631420B2
2010-04-27: Anticipated expiration
Status: Ceased
Classifications
G—PHYSICS
G06—COMPUTING; CALCULATING OR COUNTING
G06F—ELECTRIC DIGITAL DATA PROCESSING
G06F9/00—Arrangements for program control, e.g. control units
G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
G06F9/38—Concurrent instruction execution, e.g. pipeline, look ahead
G06F9/3861—Recovery, e.g. branch miss-prediction, exception handling
G06F9/3865—Recovery, e.g. branch miss-prediction, exception handling using deferred exception handling, e.g. exception flags
G—PHYSICS
G06—COMPUTING; CALCULATING OR COUNTING
G06F—ELECTRIC DIGITAL DATA PROCESSING
G06F11/00—Error detection; Error correction; Monitoring
G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
G06F11/0703—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
G06F11/0706—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment
G06F11/073—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment in a memory management context, e.g. virtual memory or cache management
G—PHYSICS
G06—COMPUTING; CALCULATING OR COUNTING
G06F—ELECTRIC DIGITAL DATA PROCESSING
G06F11/00—Error detection; Error correction; Monitoring
G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
G06F11/0703—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
G06F11/0766—Error or fault reporting or storing
G06F11/0772—Means for error signaling, e.g. using interrupts, exception flags, dedicated error registers
G—PHYSICS
G06—COMPUTING; CALCULATING OR COUNTING
G06F—ELECTRIC DIGITAL DATA PROCESSING
G06F12/00—Accessing, addressing or allocating within memory systems or architectures
G06F12/02—Addressing or allocation; Relocation
G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
G06F12/10—Address translation
G06F12/1027—Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
G06F12/1045—Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] associated with a data cache
Abstract
A technique for processing memory access exceptions along with pre-fetched instructions in a pipelined instruction processing computer system is based upon the concept of pipelining exception information along with other parts of the instruction being executed. In response to the detection of access exceptions at a pipeline stage, corresponding fault information is generated and transferred along the pipeline. The fault information is acted upon only when the instruction reaches the execution stage (20) of the pipeline. Each stage of the instruction pipeline is ported into the front end of a memory unit (16) adapted to perform the virtual-to-physical address translation, each port being provided with means for storing virtual addresses accompanying an instruction as well as means for storing corresponding fault information. When a memory access exception is encountered at the front end of the memory unit, the fault information generated therefrom is loaded into the storage means and the port is prevented from accepting further references.
Description
S F Ref: 128516
FORM
COMMONWEALTH OF AUSTRALIA
PATENTS ACT 1952
COMPLETE SPECIFICATION
(ORIGINAL)
FOR OFFICE USE:
Class:
Int Class:
Complete Specification Lodged:
Accepted:
Published:
Priority:
Related Art:
Name and Address of Applicant: Digital Equipment Corporation, 111 Powdermill Road, Maynard, Massachusetts 01754-1418, UNITED STATES OF AMERICA
Address for Service: Spruson & Ferguson, Patent Attorneys, Level 33 St Martins Tower, 31 Market Street, Sydney, New South Wales, 2000, Australia
Complete Specification for the invention entitled: Processing of Memory Access Exceptions with Pre-Fetched Instructions within the Instruction Pipeline of a Memory System Based Digital Computer
The following statement is a full description of this invention, including the best method of performing it known to me/us:
ABSTRACT
A technique for processing memory access exceptions along with pre-fetched instructions in a pipelined instruction processing computer system is based upon the concept of pipelining exception information along with other parts of the instruction being executed. In response to the detection of access exceptions at a pipeline stage, corresponding fault information is generated and transferred along the pipeline. The fault information is acted upon only when the instruction reaches the execution stage of the pipeline. Each stage of the instruction pipeline is ported into the front end of a memory unit adapted to perform the virtual-to-physical address translation, each port being provided with means for storing virtual addresses accompanying an instruction as well as means for storing corresponding fault information. When a memory access exception is encountered at the front end of the memory unit, the fault information generated therefrom is loaded into the storage means and the port is prevented from accepting further references.
PROCESSING OF MEMORY ACCESS EXCEPTIONS WITH PRE-FETCHED INSTRUCTIONS WITHIN THE INSTRUCTION PIPELINE OF A VIRTUAL MEMORY SYSTEM-BASED DIGITAL COMPUTER

The present application discloses certain aspects of a computing system that is further described in the following Australian patent applications and United States patents: Evans et al., AN INTERFACE BETWEEN A SYSTEM CONTROL UNIT AND A SERVICE PROCESSING UNIT OF A DIGITAL COMPUTER, Serial No. 53954/90, filed April 27, 1990, and issued on Sept. 8, 1992 as U.S. Patent 5,146,564; Arnold et al., METHOD AND APPARATUS FOR INTERFACING A SYSTEM CONTROL UNIT FOR A MULTIPROCESSOR SYSTEM WITH THE CENTRAL PROCESSING UNITS, Serial No. 53949/90, filed April 27, 1990; Gagliardo et al., METHOD AND MEANS FOR INTERFACING A SYSTEM CONTROL UNIT FOR A MULTI-PROCESSOR SYSTEM WITH THE SYSTEM MAIN MEMORY, Serial No. 53938/90, filed April 27, 1990; D. Fite et al., DECODING MULTIPLE SPECIFIERS IN A VARIABLE LENGTH INSTRUCTION ARCHITECTURE, Serial No. 53939/90, filed April 27, 1990, and issued in September 1992 as U.S. Patent 5,148,528; D. Fite et al., VIRTUAL INSTRUCTION CACHE REFILL ALGORITHM, Serial No. 53940/90, filed April 27, 1990, and issued on May 12, 1992 as U.S. Patent 5,113,515; Murray et al., PIPELINE PROCESSING OF REGISTER AND REGISTER MODIFYING SPECIFIERS WITHIN THE SAME INSTRUCTION, Serial No. 53955/90, filed April 27, 1990; Murray et al., MULTIPLE INSTRUCTION PREPROCESSING SYSTEM WITH DATA DEPENDENCY RESOLUTION FOR DIGITAL COMPUTERS, Serial No. 53936/90, filed April 27, 1990, and issued on August 25, 1992 as U.S. Patent 5,142,631; D. Fite et al., BRANCH PREDICTION, Serial No. 53937/90, filed April 27, 1990, and issued on August 25, 1992 as U.S. Patent 5,142,634; Fossum et al., PIPELINED FLOATING POINT ADDER FOR DIGITAL COMPUTER, Serial No. 53948/90, filed April 27, 1990, and issued as U.S. Patent 4,994,996 on Feb. 19, 1991; Grundmann et al., SELF TIMED REGISTER FILE, Serial No. 53941/90, filed April 27, 1990, and issued as U.S. Patent 5,107,462 on April 21, 1992; Beaven et al., METHOD AND APPARATUS FOR DETECTING AND CORRECTING ERRORS IN A PIPELINED COMPUTER SYSTEM, Serial No. 53945/90, filed April 27, 1990, and issued as U.S. Patent 4,982,402 on Jan. 1, 1991; Flynn et al., METHOD AND MEANS FOR ARBITRATING COMMUNICATION REQUESTS USING A SYSTEM CONTROL UNIT IN A MULTI-PROCESSOR SYSTEM, Serial No. 53946/90, filed April 27, 1990; E. Fite et al., CONTROL OF MULTIPLE FUNCTION UNITS WITH PARALLEL OPERATION IN A MICROCODED EXECUTION UNIT, Serial No. 53951/90, filed April 27, 1990, and issued on November 19, 1991 as U.S. Patent 5,067,069; Hetherington et al., METHOD AND APPARATUS FOR CONTROLLING THE CONVERSION OF VIRTUAL TO PHYSICAL MEMORY ADDRESSES IN A DIGITAL COMPUTER SYSTEM, Serial No. 53950/90, filed April 27, 1990; Hetherington et al., WRITE BACK BUFFER WITH ERROR CORRECTING CAPABILITIES, Serial No. 53934/90, filed April 27, 1990, and issued as U.S. Patent 4,995,041 on Feb. 19, 1991; Chinnaswamy et al., MODULAR CROSSBAR INTERCONNECTION NETWORK FOR DATA TRANSACTIONS BETWEEN SYSTEM UNITS IN A MULTI-PROCESSOR SYSTEM, Serial No. 53933/90, filed April 27, 1990, and issued as U.S. Patent 4,968,977 on Nov. 6, 1990; Polzin et al., METHOD AND APPARATUS FOR INTERFACING A SYSTEM CONTROL UNIT FOR A MULTI-PROCESSOR SYSTEM WITH INPUT/OUTPUT UNITS, Serial No. 53953/90, filed April 27, 1990, and issued as U.S. Patent 4,965,793 on Oct. 23, 1990; and Gagliardo et al., MEMORY CONFIGURATION FOR USE WITH MEANS FOR INTERFACING A SYSTEM CONTROL UNIT FOR A MULTI-PROCESSOR SYSTEM WITH THE SYSTEM MAIN MEMORY, Serial No. 53942/90, filed April 27, 1990, and issued as U.S. Patent 5,043,874 on August 27, 1991.
This invention relates generally to digital computers based on the virtual memory system. More particularly, this invention relates to a technique for the processing of memory access exceptions along with pre-fetched instructions within the instruction pipeline of a pipelined instruction processing computer system.
A computer system using virtual memory is capable of recognizing a large number of addresses (more than 4 billion addresses for a 32-bit computer) defined within a virtual address space. The actual physical main memory of the computer is substantially smaller and yet the system is capable of processing data whose addresses are scattered through the address space. Such capabilities are provided by means of sophisticated memory management techniques which permit a program to be executed under the presumption that a large part of the virtual address space is actually available, thereby providing users with the illusion of a much larger main memory address space than is actually available. By the use of memory mapping and the translation of logical to physical addresses, the virtual memory system provides the computer with contiguous logical memory on non-contiguous physical storage.
Virtual memory systems are generally based on the concept of memory blocking, using a combination of either statically or dynamically partitioning a linear array of memory into smaller memory regions and a block address mapping system on the basis of which virtual addresses are translated into block locations and displacements within the block. The mapping process from virtual to physical addressing is typically accomplished by means of a block mapping table which holds an entry containing the block address in memory for each physical memory block and, for variable-size blocked memory systems, the size of the memory block. In such a blocked virtual memory scheme, all physical blocks are of the same size to facilitate the interchanging of block locations in order that a virtual memory block may be placed at any of the physical block locations in memory. Each block of memory is referred to as a memory page, and not all of the virtual pages are resident in primary memory at any one time. Instead, some means of secondary storage, usually disk, is used to hold the remainder of the pages.
Mapping or translation from virtual to real (physical) addresses in a paged memory system is performed by the use of page tables for each major region of virtual address space that is actively used. The page table is a virtually contiguous array of page table entries, each of which is a long word representing the physical mapping for one virtual page. Translation from a virtual to a physical address is then performed by simply using the virtual page number as an index into the page table from the given page table base address. The page table, among other things, includes a field indicative of whether a memory page is physically located in primary or secondary memory. Memory management and execution logic are used to translate the program's virtual addresses into physical addresses, to store programs and related data in convenient locations (either in main memory or auxiliary memory), and to procure into main memory required data or program segments.
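As a rough illustration of the indexing step just described, the following C sketch translates a virtual address through a single flat page table; the page size, type names and field layout are assumptions made for illustration, not details taken from the patent.

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SIZE   512u                 /* assumed page size            */
#define PAGE_SHIFT  9u

typedef struct {
    uint32_t pfn;        /* page frame number of the physical page      */
    bool     valid;      /* page is resident in primary memory          */
} PageTableEntry;

/* Translate a virtual address by indexing the page table with the
 * virtual page number and appending the byte offset within the page.   */
bool translate(const PageTableEntry *page_table, uint32_t table_len,
               uint32_t vaddr, uint32_t *paddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;       /* virtual page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1u);  /* displacement        */

    if (vpn >= table_len || !page_table[vpn].valid)
        return false;                 /* length violation or page fault */

    *paddr = (page_table[vpn].pfn << PAGE_SHIFT) | offset;
    return true;
}
```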
In a virtual memory computer system of the above kind, programs access physical memory and input/output devices by generating virtual addresses which are subsequently translated into physical addresses by using parts of the virtual address to index into a page table and fetch the corresponding page table entry (PTE). The PTE typically contains information about access privileges, creation of physical addresses, and bits indicative of the modification and validity status of the address. The PTE also contains status bits which are used by the system software to handle access exceptions, such as those occurring when an address page is not resident in memory. The operating system thus provides an image of physical memory which can be accessed by a user without any reference to memory resource location.
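A minimal PTE layout consistent with the fields named above might look like the following; the bit widths and ordering are assumed for the sketch and are not the actual VAX encoding.

```c
#include <stdint.h>

/* Illustrative PTE fields; widths and order are assumptions, not the
 * real VAX layout.                                                      */
typedef struct {
    uint32_t pfn        : 21;  /* page frame number (physical mapping)   */
    uint32_t modified   : 1;   /* page has been written                  */
    uint32_t protection : 4;   /* access privileges per processor mode   */
    uint32_t software   : 5;   /* status bits reserved to system software */
    uint32_t valid      : 1;   /* page is resident in primary memory     */
} Pte;
```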
As a result of the translation process, the operating system subsequently either grants or denies access to addressed segments of memory. If a memory access request is granted, the corresponding memory operation proceeds to conclusion. On the other hand, if an access request is denied, the program execution process is halted and instead an exception handler routine is executed.
When an instruction is to be executed, the virtual memory system hardware generates a virtual address corresponding to the instruction and relays it to the system memory unit along with a request for a memory access operation such as read or write. Translation means provided within the memory unit compute the physical address corresponding to the virtual address and the requested memory access operation is executed if the translation process has been successful. If the translation from the virtual address to the physical address is for some reason found to be unsuccessful, the memory unit returns a signal to the instruction processor which causes the initiation of a memory access exception instead of continuing with program execution.
‘Jo Although the technique of halting program execution o0 0 upon detection of access exceptions is conceptually straight forward, its application in high performance computers, which typically use multi-processing along with pipelined instruction execution, can be fairly complicated and problematic. High performance computers 0* are generally based on the concept of multi-processing at 0 o system level by utilizing a plurality of central processor units to execute a defined task through appropriate problem decomposition. The multi-processing operation is further complimented by the process of pipelining so that computer instructions are divided into a series of smaller and simpler operations which are subsequently executed in a pipeline fashion by several dedicated function units optimized for specific purposes.
High speed and extensive connectivity and redundancy are provided in such systems by the use of parallel paths to mass storage and other devices through multiple I/O buses.
Detection and processing of memory access exceptions are complicated in high performance computer systems because the entire sequence of operations required for executing instructions is pipelined. A typical example is the "VAX" brand family of computers from Digital Equipment Corporation, 111 Powdermill Road, Maynard, MA 01754. The instruction pipeline for the VAX 8600 model computer is described in detail by Fossum et al., in an article entitled "An Overview Of The VAX 8600 System", Digital Technical Journal, Number 1, August 1985, pp. 8-23. As described therein, high performance pipelining uses separate pipeline stages for each of the different stages of operation involved in the execution of instructions. The pipeline stages typically include instruction fetching, instruction decoding, operand address generation, operand fetching, instruction execution, and result storage. Processing of memory access exceptions is difficult because several instructions may be active at any one time. In addition, each instruction may activate several memory references: instruction reads, operand reads, operand writes, address reads, and string reads. Further, each of these operations is likely to be performed by different hardware segments at different stages in the instruction pipeline.
Any time a memory reference is made as part of executing an instruction along the instruction pipeline, the address translation process takes place in order to generate physical addresses from virtual addresses provided by the instruction. At each of these translation stages, there is a possibility that a memory access exception may occur. The problem is compounded when the computer system is geared to pre-fetching, while a particular instruction is being acted upon, instructions and operands which are anticipated to be required for execution of subsequent stages of the instruction. If all necessary access exceptions are acted upon at the time the exceptions are detected, the result is that the pipeline quickly stalls when interdependent operation stages are halted to resolve access violations resultant therefrom. Accordingly, a direct conflict exists between achieving high speed, pipelined instruction processing and the relatively low speed, sequential processing that results when related memory access exceptions are concurrently implemented.

It is thus exceedingly critical that memory exceptions occurring within the pipeline stages be handled in such a way as to avoid the stalling of the instruction pipeline by disruption of other pipeline stages. In the VAX architecture, for instance, this problem is approached by a protocol which insures that exceptions which occur in the reading of memory as part of pre-fetching instructions do not disrupt the execution of previously issued instructions.
An exception handling scheme is provided that is based upon the concept of pipelining exception information along with other parts of the instruction being executed. According to an important feature of this invention, exception information generated at a pipeline stage is transferred along the pipeline and is acted upon only when the instruction reaches the execution stage of the pipeline. Accordingly, exception handling routines need only be invoked if the exception information is found to be valid and existent at the execution stage. The complicated and time consuming process of resorting to exception handling routines at each stage of the instruction execution pipeline where an exception is found to exist is eliminated. A major advantage resulting from such a scheme is that, if the instruction stream is altered before an instruction accompanied by an associated exception reaches the execution stage, the exception condition can be dispensed with along with the rest of the instruction.

According to a preferred embodiment of this invention, the above scheme is implemented by porting each stage of the instruction pipeline into the front end of a memory unit adapted to perform the virtual to physical translations. The back end of the memory unit is adapted to utilize the physical address produced by the front end to access addressed data in main memory or in cache memories. Each port provided at the front end of the memory unit for a pipeline stage is also provided with means for storing virtual addresses accompanying an instruction as well as means for storing "fault" information concerning detected exceptions. This exception information is acted upon by the system software only at the execution stage. When a memory access exception is encountered in the front end of the memory unit, the fault information generated therefrom is loaded into the storage means and the port is prevented from accepting further references. However, ports corresponding to other pipeline stages are retained as active for receiving memory references. This arrangement permits instructions further along the pipeline to be completed without being disrupted by faults or exceptions encountered in preceding stages of the instruction stream.
According to another feature of this invention, means are provided for synchronizing the exception handling process to the instruction execution process.
Faults located in the pipeline stages before an instruction passes the issue stage are pipelined along with (or instead of) data and control words derived from instruction pre-processing and operand pre-fetching. The fault pipeline is checked at the point when this data and control is required for issuing an instruction at the execution stage; if a fault or exception is found, an exception is initiated instead of issuing the instruction. In this manner, all instructions existing in the pipeline stages following the issue stage are allowed to complete without any obstruction.
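One simple way to model that issue-time check is to carry a fault tag alongside the data and control words for each queued instruction, as in the following sketch; the entry layout, function names and the example in main() are assumptions made for illustration.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Data and control derived from instruction pre-processing, carried
 * down the pipeline together with any fault detected while producing
 * them.                                                                 */
typedef struct {
    uint32_t dispatch_addr;   /* microcode dispatch address              */
    bool     fault_pending;   /* fault tag pipelined with the data       */
} IssueEntry;

static void execute(const IssueEntry *e)
{
    printf("issue instruction, dispatch %#x\n", (unsigned)e->dispatch_addr);
}

static void initiate_exception(void)
{
    printf("exception initiated instead of issue\n");
}

/* The fault tag is examined only when the entry is needed for issue:
 * a tagged entry starts an exception, while everything already past
 * the issue stage is left to complete undisturbed.                      */
static void issue(const IssueEntry *e)
{
    if (e->fault_pending)
        initiate_exception();
    else
        execute(e);
}

int main(void)
{
    IssueEntry ok    = { 0x1200u, false };
    IssueEntry fault = { 0x1284u, true  };
    issue(&ok);
    issue(&fault);
    return 0;
}
```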
According to another important feature of this invention, destination operand addresses for memory write operations are calculated in the operand processing unit stage of the pipeline and subsequently passed on to the memory unit for translation. Corresponding write operations are usually postponed because data is not available until after the execution stage, and the translated destination addresses are stored within a write queue and are subsequently paired with the corresponding data received following the execution stage. When instructions are being retired, it is imperative that the memory destinations to be written be defined at that point. Because the destination addresses have been pre-translated, it becomes possible for instructions to be conveniently retired at or following the execution stage as long as corresponding valid entries exist in the data write queues.
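The write queue behaviour just described can be sketched as below: pre-translated destination addresses are enqueued early and later paired with result data arriving from the execution stage. The queue depth and helper names are illustrative assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

#define WQ_DEPTH 8                        /* assumed queue depth         */

typedef struct {
    uint32_t phys_addr;                   /* pre-translated destination  */
    bool     valid;
} WriteQueueEntry;

typedef struct {
    WriteQueueEntry entry[WQ_DEPTH];
    unsigned head, tail;
} WriteQueue;

/* The destination address is translated and queued while the
 * instruction is still executing.                                       */
static void wq_push_address(WriteQueue *q, uint32_t phys_addr)
{
    q->entry[q->tail % WQ_DEPTH] = (WriteQueueEntry){ phys_addr, true };
    q->tail++;
}

/* After execution, the result datum is paired with the oldest queued
 * address, and the write to cache or memory can proceed.                */
static bool wq_pair_data(WriteQueue *q, uint32_t data,
                         uint32_t *phys_addr_out, uint32_t *data_out)
{
    if (q->head == q->tail)
        return false;                     /* no pending destination      */
    *phys_addr_out = q->entry[q->head % WQ_DEPTH].phys_addr;
    *data_out      = data;
    q->head++;
    return true;
}
```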
In summary, the pipeline stages put out memory references as required and the task to be performed by the pipeline stage is completed in the usual manner if the address translation corresponding to the memory reference is successful; in this case, data relevant to the pipeline stage is used to execute the instruction eventually at the execution stage. However, if the address translation corresponding to the memory reference is unsuccessful, fault information is generated and the port corresponding to the pipeline stage in the memory unit is designated as closed. The fault information generated by the memory unit is propagated through the pipeline and is eventually used at the execution stage as a basis for invoking a fault handler routine on the basis of the fault or exception information.
Other objects and advantages of the invention will become apparent upon reading the following detailed description and upon reference to the drawings in which:

FIG. 1 is a simplified block diagram of a pipelined virtual memory-based computer system adapted to the scheme for processing memory access exceptions according to this invention.
FIG. 2 is an illustration of the various pipeline stages involved in executing a typical instruction.
FIG. 3 is a block diagram illustrating the functional blocks involved in the translation of virtual to physical addresses using the translation buffer shown in FIG. 1.
FIG. 4 is a more detailed block diagrammatic representation of the organization of the translation buffer and the translation buffer fix-up unit shown in FIG. 3.
FIG. 5 is a block diagram illustrating the generation of fault information according to the exception handling scheme of this invention.
FIG. 6 is a simplified flowchart illustrating the sequence of operations involved in the generation of fault information and related codes and parameters.
FIG. 7 is a simplified flowchart illustrating the operations involved in detecting and responding to fault information, according to this invention.
While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that it is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.
Referring now to FIG. 1, there is shown a top level block diagram of a pipelined virtual memory-based computer system 10 which uses one or more central processing units (CPUs) 12 and is configured to permit simultaneous, parallel operation of all system CPUs by permitting them to share a common main memory 14 for the system. In a practical implementation, up to four CPUs may be operated simultaneously in such a system by efficiently sharing the main memory 14. In accordance with the concept of pipelining, the CPU 12 is a non-homogeneous processor which includes a set of special-purpose functional units dedicated to and optimized for performing specific tasks into which individual instructions are split prior to execution.
According to the pipelining technique, each basic operation (such as addition, multiplication, etc.) is broken down into a number of independent stages, quite analogous to the manner in which a manufacturing assembly line is organized. Assuming that each stage requires "t" seconds for completion, an operand pair finishes with a stage each t seconds and is subsequently passed on to the next stage, allowing a new operand pair to begin. In the case of an instruction requiring, for example, four independent stages (as in the case of a floating-point addition operation requiring the four separate stages of exponent subtraction, mantissa alignment, mantissa add, and result normalization) a time period of 4t seconds is required from beginning to end of the execution process.
However, what is significant is that a new result can be produced every t seconds. In other words, as each dedicated stage proceeds with executing the task that is allotted to it, the subject instruction moves closer to being completed. At the final stage in the pipeline, each time a task is completed with the passing of a system cycle, a new result signifying the completion of an instruction is produced. Although such a pipeline generally takes longer than normal to perform a single operation, the pipeline is capable of executing a much larger number of operations in the same amount of time if a sequence of similar operations is to be performed.
In general, the execution of an instruction may be broken down into the following discrete stages: instruction fetch, instruction decode, operand fetch, instruction execute, and result storage. It is also possible for these independent stages to be overlapped in some fashion so that the overall instruction throughput may be increased. According to a scheme of this type, the results of each pipeline stage are transferred to the next stage on the basis of a common system clock. For example, during a first clock cycle, an instruction is fetched by a functional unit dedicated to instruction fetching. During the second clock cycle, the fetched instruction is transferred to the instruction decode stage where a dedicated functional unit decodes the instruction; at the same time, the instruction fetch stage remains active and proceeds with fetching the subsequent instruction through the instruction fetch unit. In the following clock cycle, the result generated by each pipeline stage is shifted to the next stage of the pipeline while at the same time fetching another new instruction. This process continues until the final stage of the pipeline is activated, at which point the pipeline is filled. Subsequently, an instruction is completed by the final stage at the end of each subsequent clock cycle as long as new instructions continue to be fetched by the first pipeline stage.
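The clock-by-clock overlap described above can be illustrated with a toy simulation in which each stage hands its instruction to the next stage on every cycle; the five stage names follow the text, while the loop itself is only an illustrative assumption.

```c
#include <stdio.h>

#define STAGES 5
static const char *stage_name[STAGES] =
    { "fetch", "decode", "operand", "execute", "store" };

int main(void)
{
    int in_stage[STAGES] = { 0 };      /* instruction number, 0 = empty  */
    int next_instr = 1;

    /* On each clock tick every stage passes its instruction to the next
     * stage and the fetch stage accepts a new one; once the pipeline is
     * full, one instruction completes per cycle.                         */
    for (int cycle = 1; cycle <= 8; cycle++) {
        for (int s = STAGES - 1; s > 0; s--)
            in_stage[s] = in_stage[s - 1];
        in_stage[0] = next_instr++;

        printf("cycle %d:", cycle);
        for (int s = 0; s < STAGES; s++) {
            if (in_stage[s])
                printf("  %s=I%d", stage_name[s], in_stage[s]);
            else
                printf("  %s=--", stage_name[s]);
        }
        printf("\n");
    }
    return 0;
}
```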
Returning now to FIG. 1, each CPU 12 is essentially partitioned into at least three independent functional units: the memory access unit 16 (the M-Unit), the instruction unit 18 (the I-Unit), and the execution unit 20 (the E-Unit).
The M-Unit 16 provides the CPU interface to memory, I/O and other CPU units and, in particular, serves as means for accepting virtual memory references, translating the references into physical addresses, and initiating accesses to memory data, either in main memory 14 through appropriate interface means or within a local cache.
In the illustrative system of FIG. 1, the M-Unit 16 includes a main cache 22 which permits the instruction and execution units 18 and 20 to access and process data at a much faster rate than permitted by the normal access time of the main memory 14. The main cache temporarily retains data (typically, the most recently used instructions and data items) that the processor is likely to require in executing current operations. The cache interprets memory addresses by using an associative memory map which defines a correspondence between requested address locations and cache contents. The system operates by inhibiting requests to main memory and supplying data requested by the processor from the cache if the requested data item is found to exist within the cache. The main memory 14 is accessed only when a requested data item is absent from the cache 22, in which case the data is fetched from the system memory and then supplied to the requesting unit. In short, the cache 22 operates on the phenomenon of locality in programs and provides a window into the system main memory and permits high-speed access to data references with spatial and temporal locality.
The main cache 22 includes means for storing selected pre-defined blocks of data elements, means for receiving memory access requests via a translation buffer 24 in order to access specified data elements, means for checking whether or not a specified data element exists within the block of memory stored in the cache, and means operative when data for the block including a specified data element is not within the cache for retrieving the specified block of data from the main memory 14 and storing it in the cache 22. Each time a requested data element is not found to be present within the cache 22, the entire block of data containing the data element is obtained from main memory 14. The next time the functional units of the processor request a data element from memory, the principle of locality dictates that the requested data element will most likely be found in the memory block which includes the previously addressed data element. Since the cache 22 will be accessed at a much higher rate than main memory 14, it becomes possible for the main memory to have a proportionally slower access time than the cache without substantially degrading system performance. Consequently, the main memory 14 may be comprised of slower and less expensive memory elements.
The translation buffer 24 is a high speed associative memory which stores the most recently used virtual-to-physical address translations. In a virtual memory system of the type being discussed here, a reference to a single virtual address can produce several memory references before the desired memory information becomes available. The translation buffer 24, however, simplifies the translation process by reducing the translation of a virtual address to the corresponding physical address to merely searching for a "hit" in the buffer.
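A small, fully associative model of such a translation buffer is sketched below; the entry count and the round-robin replacement are assumptions, and a miss is simply reported so that a fix-up path (described later in the text) can supply the conversion.

```c
#include <stdint.h>
#include <stdbool.h>

#define TB_ENTRIES 32u                 /* assumed buffer size            */
#define PAGE_SHIFT 9u

typedef struct {
    uint32_t vpn;                      /* virtual page number (the tag)  */
    uint32_t pfn;                      /* cached physical frame number   */
    bool     valid;
} TbEntry;

typedef struct {
    TbEntry  entry[TB_ENTRIES];
    unsigned next_victim;              /* trivial round-robin replacement */
} TranslationBuffer;

/* Look for a hit on the virtual page; on a hit the physical address is
 * formed immediately, on a miss the caller must fetch the PTE through
 * the fix-up path and then call tb_fill().                              */
bool tb_lookup(const TranslationBuffer *tb, uint32_t vaddr, uint32_t *paddr)
{
    uint32_t vpn = vaddr >> PAGE_SHIFT;
    for (unsigned i = 0; i < TB_ENTRIES; i++) {
        if (tb->entry[i].valid && tb->entry[i].vpn == vpn) {
            *paddr = (tb->entry[i].pfn << PAGE_SHIFT) |
                     (vaddr & ((1u << PAGE_SHIFT) - 1u));
            return true;               /* hit                            */
        }
    }
    return false;                      /* miss: fix-up unit takes over   */
}

void tb_fill(TranslationBuffer *tb, uint32_t vpn, uint32_t pfn)
{
    tb->entry[tb->next_victim] = (TbEntry){ vpn, pfn, true };
    tb->next_victim = (tb->next_victim + 1) % TB_ENTRIES;
}
```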
For the purpose of transmitting commands and input data to the computer system of FIG. 1 and for receiving data output from the system, an I/O bus 25 is linked to the main memory 14 and the main cache 22.
The I-Unit 18 includes a program counter 26 and an instruction cache 28 for fetching instructions from the main cache 22. The program counter 26 preferably addresses virtual memory locations rather than the physical memory locations of the main memory 14 and the cache 22. It is hence required that the virtual address put out by the program counter 26 be translated into the corresponding physical address of the main memory 14 before required instructions may be retrieved. This translation is accomplished by the translation buffer 24 in the M-Unit 16. The contents of the program counter 26 are transferred to the M-Unit 16 where the translation buffer 24 performs the address conversion. Subsequently, the required instruction is retrieved from its physical memory location either in the cache 22 or the main memory 14 and delivered on data return lines to the instruction cache 28. The organization and operation of the cache 22 and the translation buffer 24 are further described in chapter 11 of Levy and Eckhouse, Jr., Computer Programming and Architecture, The VAX-11, Digital Equipment Corporation, pp. 351-368 (1980).
The instruction cache 28 generally has pre-stored instructions at the addresses specified by the program counter 26. The cache 28 is preferably arranged to receive and transmit instruction data in blocks of multiple data bytes in such a way that the memory addresses for the blocks are specified by specified bits in the address provided by the PC 26. The addressed instructions are then available immediately for transfer into an instruction buffer (I-Buf) 30 which essentially acts as a data latch for receiving instruction data on the basis of the clocking action of the system clock.
From the I-Buf 30, the instructions are fed to an instruction decoder 32 which decodes both the operation codes (op-codes) and the specifiers which accompany the instructions. An operand processing unit (OPU) 33 produces memory or register addresses for the operands or evaluates the operand directly from the instruction stream in the case of literals. Register addresses and literals are supplied to the E-Unit 20. The addresses produced by the OPU 33 are also virtual and may represent virtual addresses for memory source (read) and destination (write) operands. In the case of memory read operands, the OPU 33 delivers these virtual addresses to the M-Unit 16 for translation into physical addresses. The physical memory locations designated by the translation process are then accessed to fetch the operands for the memory source operands.
In the case of memory write operations, the data that is to be written does not become available until the execution of the instruction has been completed and it accordingly is required that the write address be stored until the data to be written becomes available. However, the translation of the virtual address of the destination to the corresponding physical address may be completed during the time required for executing the instruction.
In addition, the OPU 33 may be used to advantage in increasing the rate of execution of instructions by pre-processing multiple instruction specifiers during the time an instruction is being executed. In order to accommodate these factors, the M-Unit 16 is provided with a write queue arrangement 34 which is disposed between the translation buffer 24 and the main cache 22. The write queue arrangement 34 essentially retains the translated address until the E-Unit 20 completes the instruction and relays the resulting data to the M-Unit 16, where it is paired with the stored write address and subsequently written into the cache 22 at the memory location specified by the translated physical address. A detailed description of a preferred write queue arrangement is provided in the above referenced co-pending D. Fite et al. United States Patent Application No. 306,767, filed February 3, 1989, entitled "Method And Apparatus For Resolving A Variable Number Of Potential Memory Access Conflicts In A Pipelined Computer System", which is also owned by the assignee of the present application.
In the case of an instruction requiring a memory read operation, the translation buffer 24 directly provides the physical address for an operand of the read instruction. Temporary storage means 36 are provided in the M-Unit 16 for storage of translated addresses prior to their being used by the main cache 22 to access identified memory locations and deliver data stored therein to the E-Unit 20 via appropriate data return lines. Multiplexer and de-multiplexer units, respectively designated as 38 and 40, are provided in the memory unit 16 for selection of either the temporary storage unit 36 or the write queue 34 for interchange of translated addresses between the main cache 22 and the translation buffer 24.
In each instruction, the first byte contains the op-code and the following bytes are the operand specifiers to be decoded. The first byte of each specifier indicates the addressing mode for that specifier. This byte is usually broken into halves, with one half specifying the addressing mode and the other half specifying the register to be used for addressing.
The instructions preferably have a variable length, and various types of specifiers may be used with the same op-code. A typical arrangement of this type is disclosed in Strecker et al., U.S. Patent No. 4,241,397, issued December 23, 1980.
The first step in processing the instructions is to decode the "opcode" portions of the instruction. The first portion of each instruction consists of its opcode, which specifies the operation to be performed in the instruction, the number of specifiers, and the type of each specifier. The decoding is done using a table-look-up technique in the instruction decoder 32. The instruction decoder finds a microcode starting address for executing the instruction in a look-up table and passes the starting address to the E-Unit 20. Later, the E-Unit performs the specified operation by executing pre-stored microcode, beginning at the indicated starting address.
Also, the decoder determines where source-operand and destination-operand specifiers occur in the instruction and passes these specifiers to the OPU 33 for pre-processing prior to execution of the instruction.
The look-up table is organized as an array of multiple blocks, each having multiple entries. Each entry in the look-up table can be addressed by its block and entry index. The opcode byte addresses the block, and a pointer from an execution point counter (indicating the position of the current specifier in the instruction) selects a particular entry in the block. The selected entry specifies the data context (byte, word, etc.), data type (address, integer, etc.) and accessing mode (read, write, modify, etc.) for each specifier.
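A toy version of that two-level look-up is shown below; the table dimensions, enumerated attributes and entry fields are invented for illustration and do not reproduce the actual VAX decode tables.

```c
#include <stdint.h>

/* Illustrative specifier attributes held in each table entry.           */
typedef enum { CTX_BYTE, CTX_WORD, CTX_LONG }      DataContext;
typedef enum { TYPE_INTEGER, TYPE_ADDRESS }        DataType;
typedef enum { ACC_READ, ACC_WRITE, ACC_MODIFY }   AccessMode;

typedef struct {
    DataContext context;
    DataType    type;
    AccessMode  access;
    uint16_t    microcode_start;   /* dispatch address passed to E-Unit  */
} DecodeEntry;

#define MAX_SPECIFIERS 6           /* assumed maximum entries per block  */

/* One block per opcode; the execution point counter (position of the
 * current specifier) selects the entry within the block.                */
static DecodeEntry decode_table[256][MAX_SPECIFIERS];

const DecodeEntry *decode_lookup(uint8_t opcode, unsigned exec_point)
{
    return &decode_table[opcode][exec_point % MAX_SPECIFIERS];
}
```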
After an instruction has been decoded, the OPU 33 parses the operand specifiers and computes their effective addresses; this process involves reading GPRs and possibly modifying the GPR contents by [...] writes the result into the destination identified by the destination pointer for that instruction. The OPU 33 also produces a specifier signal based on the opcode in each instruction.
Each time an instruction is passed to the E-Unit, the I-Unit sends a microcode dispatch address and a set of pointers for the locations in the E-Unit register file where the source operands can be found, and the location where the results are to be stored. Within the E-Unit, a set of buffer-based queues 42 includes a fork queue for storing the microcode dispatch address, a source pointer queue for storing the source-operand locations, and a destination pointer queue for storing the destination location. Each of these queues is a FIFO buffer capable of holding the data for multiple instructions.
The E-Unit 20 also includes a source operand list 44, which is stored in a multi-ported register file that also contains a copy of the GPRs. Thus, entries in the source pointer queue will either point to GPR locations for register operands, or point to the source list for memory and literal operands. Both the M-Unit 16 and the I-Unit 18 write entries in the source list 44, and the E-Unit 20 reads operands out of the source list as needed to execute the instructions. For executing instructions, the E-Unit 20 includes an instruction issue unit 46, a microcode execution unit 48, an arithmetic and logic unit (ALU) 50, and an instruction retire unit 52.
According to an important feature of this invention, each pipeline stage is provided with a port into the front end of the M-Unit. This arrangement allows memory access requests processed by the M-Unit to be flagged conveniently as to the particular pipeline stage which initiated the request. Accordingly, a port associated with a memory access request which produces an exception can be isolated easily and deactivated or prevented from accepting further memory access requests from its associated pipeline stage until the exception has been appropriately acted upon.
In FIG. 1, the front end is represented by the translation buffer 24 which, as shown, has ports for receiving memory access requests from appropriate stages of the pipeline which are disposed in the I-Unit 18 and the E-Unit 20. In particular, the I-Buf 30 is linked through the instruction cache 28 to a corresponding front-end port 24A on the M-Unit 16. The OPU 33 is linked to its corresponding front-end port 24B and a front-end port 24C is provided for the E-Unit stages. It will be apparent that other discrete ports may be provided for distinct stages of the pipeline which generate memory access requests, and the representation of ports in FIG. 1 is merely intended for illustrative purposes and not as a limitation.
The various pipeline stages involved in executing a typical instruction will now be described with reference to FIG. 2. As discussed above, in a pipelined processor the processor's instruction fetch hardware may be fetching one instruction while other hardware is decoding the operation code of a second instruction, fetching the operands of a third instruction, executing a fourth instruction, and storing the processed data of a fifth instruction. FIG. 2 illustrates a pipeline for a typical instruction such as: ADDL3 R0,B^12(R1),R2.
This is a long-word addition using the displacement mode of addressing.
In the first stage of the pipelined execution of this instruction, the program counter (the PC 26 in FIG. 1) of the instruction is created. This is usually accomplished either by incrementing the program counter from the previous instruction, or by using the target address of a branch instruction. The PC is then used to access the instruction cache 28 in the second stage of the pipeline.
In the third stage of the pipeline, the instruction data is available from the cache 22 for use by the instruction decoder 32, or to be loaded into the instruction buffer 30. The instruction decoder 32 decodes the opcode and the three specifiers in a single cycle, as will be described in more detail below. The operand addresses R0 and R2 are passed to the ALU unit 50, and the operand R1 is sent to the OPU 33 along with the byte displacement at the end of the decode cycle.
In stage 4, the operand unit 33 reads the contents of its GPR register file at location R1, adds that value to the specified displacement (in this case 12), and sends the resulting address to the translation buffer 24 in the M-Unit 16, along with an OP READ request, at the end of the address generation stage.
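For the displacement-mode specifier B^12(R1) used in this example, the address generation step amounts to adding the sign-extended byte displacement to the contents of the base register; the helper below is a schematic restatement of that arithmetic with invented names.

```c
#include <stdint.h>

/* Effective address for a byte-displacement specifier such as B^12(R1):
 * the 8-bit displacement is sign-extended and added to the contents of
 * the base register read from the GPR file.                             */
uint32_t displacement_ea(const uint32_t gpr[16], unsigned reg, int8_t disp)
{
    return gpr[reg] + (uint32_t)(int32_t)disp;
}

/* Example: with R1 = 0x1000 and a displacement of 12, the operand
 * address sent to the translation buffer is 0x100C.                     */
```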
In stage 5, the M-Unit 16 selects the address generated in stage 4 for execution. Using the translation buffer 24, the M-Unit 16 translates the virtual address to a physical address during the address translation stage. It is at this stage that any faults resulting from the address translation are detected and corresponding fault information generated. According to this invention, the resulting fault information is placed in storage and a pertinent segment of the fault information is passed along with the results generated by the current stage to be acted upon at a later stage. The generation of fault information and its eventual use will be described below in detail. The physical address is then used to address the cache 22, which is read in stage 6 of the pipeline.
In stage 7 of the pipeline, the instruction is issued to the ALU 50 which adds the two operands and sends the result to the retire unit 52. It will be noted that during stage 4, the register values for R0 and R2, and a pointer to the source list location for the memory data, are sent to the E-Unit and stored in the pointer queues. Then during the cache read stage, the E-Unit looks for the two source operands in the source list. In this particular example it finds only the register data in R0, but at the end of this stage the memory data arrives and is substituted for the invalidated read-out of the register file. Thus both operands are available in the instruction execution stage. Instruction execution essentially involves the stages of instruction issuance followed by actual execution using designated operands.
According to this invention, the data resulting from the completion of prior stages of the pipeline is checked for the presence of fault information at the execution stage. If any fault indication is detected, further fault information, previously stored when the fault was originally detected, is recalled and an exception handling routine is invoked, as will be explained below.
In the retire stage 8 of the pipeline, the result data is paired with the next entry in the retire queue.
Although several functional E-Units can be busy at the same time, only one instruction can be retired in a single cycle.
In the last stage 9 of the illustrative pipeline, the data is written into the GPR portion of the register files in both the E-Unit 20 and the I-Unit 18.
In accordance with this invention, memory access requests are lodged by those stages in the instruction pipeline which require virtual-to-physical memory address translation. These requests are lodged at the corresponding port provided on the front end of the M-Unit. The virtual addresses associated with lodged memory access requests are processed to determine the presence of a predefined set of memory access violations.
If no violation is found to exist, the memory access request is granted and the associated memory operation completed in a normal manner. However, if a violation is found to exist, the associated virtual address is stored along with a code identifying the particular type of access violation that is encountered. A fault signal indicative of the presence of a violation is then generated and the information contained therein is propagated along with the resulting data relayed along subsequent pipeline stages to the E-Unit. When this data is required by the E-Unit in order to execute an instruction, the data is checked by the E-Unit for the presence of the fault signal. If the signal is found to exist, the fault address and code previously stored in the M-Unit are retrieved and a corresponding predefined exception handler routine is invoked.
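Putting these pieces together, the execution-stage behaviour just described can be modelled roughly as follows: a one-bit fault signal travels with the data, while the full fault record stays latched in the M-Unit until the E-Unit asks for it. All names, the fault codes and the handler dispatch are illustrative assumptions.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

typedef enum { FAULT_NONE, FAULT_ACCESS_VIOLATION, FAULT_TNV } FaultCode;

/* Fault record latched in the M-Unit port when the violation occurred.  */
typedef struct {
    uint32_t  faulting_va;
    FaultCode code;
} FaultRecord;

/* Result handed down the pipeline: the data plus a one-bit fault flag.  */
typedef struct {
    uint32_t value;
    bool     fault_signal;
} PipelineResult;

static void exception_handler(const FaultRecord *f)
{
    printf("access exception: code=%d va=%#x\n",
           (int)f->code, (unsigned)f->faulting_va);
}

/* At the execution stage the fault flag is checked only when the data
 * is actually needed; a set flag retrieves the stored record and hands
 * control to the handler instead of executing the instruction.          */
static void execute_or_fault(const PipelineResult *r, const FaultRecord *latched)
{
    if (r->fault_signal)
        exception_handler(latched);
    else
        printf("execute with operand %#x\n", (unsigned)r->value);
}

int main(void)
{
    FaultRecord    latched = { 0x7FFE1000u, FAULT_ACCESS_VIOLATION };
    PipelineResult ok      = { 42u, false };
    PipelineResult bad     = { 0u,  true  };
    execute_or_fault(&ok,  &latched);
    execute_or_fault(&bad, &latched);
    return 0;
}
```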
The virtual addresses accompanying lodged memory access requests can be originated as a result of the operation of the translation buffer 24 (FIG. 1) when it is operating with a localized cache of recently used virtual-to-physical conversions, or the action of an associated translation buffer fix-up unit adapted to handling address translation when the localized translation buffer cache does not contain required address conversions. A detailed description of the functional composition of the translation buffer 24, an associated fix-up unit, and operational details pertinent thereto is provided in the above-identified Hetherington U.S. Patent Application Serial No. 306,544, filed February 3, 1989, entitled "Method And Apparatus For Controlling The Conversion Of Virtual To Physical Memory Addresses In A Digital Computer System", incorporated herein by reference, which is also owned by the assignee of the present application. To facilitate the understanding of the present invention, the operation of the translation buffer and the fix-up unit is briefly described below with reference to FIGS. 3 and 4.
Referring first to FIG. 3, the operation of the translation buffer 24 is described in greater detail.
The translation buffer 24 is connected to receive virtual addresses from five different sources. Three of these sources are external to the memory access unit 16 and are, hereafter, generally referred to as external. The remaining two sources are controlled from within the memory access unit 16 and are, hereafter, generally referred to as internal. These internal registers are used during translation buffer "misses" to retrieve the virtual-to-physical translation from memory and place it in the translation buffer 24.
The external sources include the I-buffer 30, which is part of the I-Unit 18 and is responsible for delivering instruction pre-fetch addresses; the OPU 33, which delivers operand pre-fetch addresses; and the E-Unit 20, which delivers implicit operand addresses.
The action of the translation buffer 24 is independent of the particular external address being processed, as all addresses are handled identically.
Each of these external sources is delivered to the inputs of a multiplexer 54 which controllably delivers the selected input to the translation buffer 24. The translation buffer 24 compares the received virtual address to a cache 55 of recently used virtual-to-physical address conversions. If a match is found, the translation buffer 24 selects the corresponding physical address and delivers it to the cache 22. There is no need to access the cache 22 to fetch the virtual-to-physical translation since it is already present in the translation buffer cache 55 by virtue of its earlier use. In this respect, the translation buffer 24 greatly enhances processor speed by reducing the number of accesses to memory.
However, the translation buffer cache 55 contains only a small number of the virtual-to-physical translations. Thus, it is possible that the virtual address currently being translated is not present in the translation buffer cache 55. When this happens, it is necessary to retrieve the conversion from memory and place it in the translation buffer cache 55, so that the virtual-to-physical conversion can be completed.
The virtual address delivered by the selected external source is also delivered to a translation buffer fixup unit (TB Fixup) 56. As its name implies, TB Fixup 56 is primarily dedicated to retrieving those conversions not present in the translation buffer cache 55 and placing them in the translation buffer 24. The particular operation of the TB Fixup 56 is controlled by the type of memory access currently being processed. To understand this distinction, it is first necessary to explain the configuration of virtual memory.
Virtual address space is actually broken into several functional regions or segments. First, virtual address space is divided into two halves called system space and process space. Process space is again broken into the program (P0) and control (P1) regions. Each region has a collection of all of the virtual-to-physical address translations for that region of memory. These translations are collectively referred to as page tables while the individual translations are referred to as page table entries (PTE). Each region has its own page table and is defined by two registers: a base register containing the page table starting address and a length register containing the number of page table entries in the table.
The virtual address is a binary number, 32 bits in length, with the two high-order bits defining the regions of memory. For example, bit 31 defines system and process space. A one in this position indicates system space while a zero identifies process space. Bit 30 further defines the two process regions (P0, P1). The
high-address half of process space is the control region (P1) while the low-address half is occupied by the program region (P0).
The high-address half of the address space is called system space because it is shared by all programs in the system and the operating system runs in this region.
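For illustration only, the division just described can be expressed as a short C fragment. The enumeration names below are invented for the example; the only facts taken from the description are the meanings of bits 31 and 30.

    #include <stdint.h>

    /* Illustrative region names; the description above only speaks of the
     * program (P0) and control (P1) regions and of system space. */
    enum va_region { VA_REGION_P0, VA_REGION_P1, VA_REGION_SYSTEM };

    /* Bit 31 selects system space; within process space, bit 30 selects the
     * control region (P1, high-address half) or the program region (P0,
     * low-address half). */
    static enum va_region va_region(uint32_t va)
    {
        if (va & 0x80000000u)                        /* bit 31 = 1: system space */
            return VA_REGION_SYSTEM;
        return (va & 0x40000000u) ? VA_REGION_P1     /* bit 30 = 1: control (P1) */
                                  : VA_REGION_P0;    /* bit 30 = 0: program (P0) */
    }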
There is only one page table for system space, the system page table (SPT), that translates all system space references. SPT is described by its two hardware registers, the system base register (SBR) and the system length register (SLR). These registers are loaded by the software when the operating system is booted. It is important to note that SPT must be referenced by physical addresses, since there can be no virtual-to-physical address conversion without the page table itself. For example, in order for the conversion process to begin, the physical address of at least the SPT must be known to retrieve the virtual-to-physical conversions.
The low-address half of memory is called process space because, unlike system space, process space is unique to each program in the system. Each program has its own page table for its program and control regions, and they are defined by the corresponding base and length registers (P0BR, P1BR, P0LR, and P1LR). For example, different programs referencing the same process space virtual address will access different physical memory locations. Accordingly, the process page tables are
referenced by virtual, rather than physical, memory addresses. Thus, any conversion of process space virtual addresses must first locate the physical memory location of the process page table. These addresses are available in the physical memory locations of the system page table (SPT). It can be seen that while a conversion of a
system memory reference can be accomplished in a single step, a process memory reference must perform two conversions: first, the conversion of the process base register address and, second, the conversion of the virtual address itself.
The program region (P0) contains user programs, thereby providing the zero-based virtual address space into which programs expect to be loaded. Conversely, the control region (P1) accommodates the user mode stack of the process. The operating system can also use the control region to contain protected process-specific data and code, as well as the stacks for the higher access modes.
Referring, once again, to FIG. 3, TB Fixup 56 receives the virtual address from the multiplexer 50 and uses bits 30 and 31 to determine the virtual memory region being accessed. The designated region is used to determine which base register should be used to locate the corresponding page table. For a system memory reference, the page table address computed from SBR is a physical address and can be delivered directly to the memory access unit cache 22, where the corresponding virtual-to-physical conversion is stored. However, it is only necessary to fix the translation buffer 24 when a "miss" occurs. Accordingly, the translation buffer 24 delivers a miss signal to the TB Fixup 56 to allow the computed address to be delivered to the cache 22. In the event of a TB "miss", the conversion is retrieved from the cache 22 and stored in the translation buffer cache 55. Thus, the immediately subsequent comparison of the translation buffer cache 55 to the pending virtual address must necessarily result in a "hit". Therefore, TB Fixup 56 temporarily asserts control over the
translation buffer 24 to update the translation buffer cache 55, whereby the pending conversion is altered from a "miss" to a "hit" and the virtual-to-physical translation is completed.
Conversely, where the virtual memory region being accessed corresponds to the process region, the address computed from either of the process base registers P0BR, P1BR is a virtual address. This virtual address cannot be delivered to the cache 22, but must first be converted to a physical address. Of course, conversion of virtual to physical addresses is normally accomplished by the translation buffer 24. Since the translation buffer 24 is currently stalled, waiting for TB Fixup 56 to update its cache 55, TB Fixup 56 can assert control over the translation buffer to perform this virtual-to-physical conversion. TB Fixup 56 delivers the computed virtual address of the process page table to an internal register 57 in response to the translation buffer "miss". A multiplexer 58 is selected by TB Fixup 56 to deliver the contents of the internal register 57 to an input of the multiplexer 54. TB Fixup 56 also operates to select the output of the multiplexer 58 as the input to the multiplexer 54. It can be seen that a translation buffer "miss" on a process memory reference results in the computed virtual address of the process page table being delivered to the translation buffer 24 for a virtual-to-physical conversion. Thus, a "hit" in the translation buffer 24 at this time results in the physical address being delivered directly to the cache 22 by the translation buffer 24.
It is also possible that a second translation buffer "miss" will result on the address contained in the internal register 57. TB Fixup 56 can also correct this second "miss". The fixup routine is identical to that used to retrieve the PTE for a system reference "miss".
The retrieved PTE is stored in the translation buffer cache and is used to form the physical address of the virtual-to-physical translation on a subsequent pass through the translation buffer 24.
A sequencer 59 also receives input from TB Fixup 56 over the same bus as the internal register 57. The sequencer 59 is employed during multi-precision operations where it is necessary to read multiple contiguous bytes from memory. The sequencer 59 increments the address and delivers it to the multiplexer 58. TB Fixup 56 controls which of the multiplexer inputs are selected to deliver consecutively, first, the internal register address, and second, the incremented address of the sequencer. All bytes of a multi-byte operation are accessed in this manner.
Referring now to FIG. 4, a detailed block diagram of the translation buffer 24 and TB Fixup 56 is shown. The translation buffer 24 maintains the cache 55 of recently used PTEs. The 32-bit virtual address is received by the translation buffer 24, and bits 31 and 17:09 are used as pointers for the 1024 memory locations of the cache 55. Each memory location in the cache 55 has a 13-bit tag corresponding to bits 30:18 of the virtual address.
These tags are indicated as A0 through A1024 and are collectively referred to as the page translation directory. The lower 512 tags correspond to process memory references and the upper 512 tags correspond to system memory references.
The purpose of separating the tags into process and system tags is to allow the operating system to quickly invalidate only those tags associated with the current program when a context switch occurs. For example, if these entries are not invalidated when another program begins to execute, this next program could access the wrong physical memory location by generating a process virtual address which had been previously translated by the prior program. Thus, rather than the translation buffer 24 detecting a "miss", a tag match will result based on the virtual-to-physical translation of the prior program.
The system tags need not be invalidated after a context change since all processes share system space and the virtual-to-physical translations will be identical for each process. Consequently, a system memory reference from any program will access the same physical memory location, so there is no need to flush the system tags. Bit 31 separates the process tags from the system tags.
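The benefit of that separation on a context switch can be sketched as follows; the valid-flag array is merely a stand-in for whatever invalidation state the hardware keeps, and is an assumption of the example rather than part of the embodiment.

    #include <stdbool.h>

    #define TB_ENTRIES 1024u                 /* locations of the page translation directory */

    static bool tb_tag_valid[TB_ENTRIES];    /* illustrative per-location valid flag        */

    /* On a context switch, only the process half of the directory needs to be
     * invalidated.  Because virtual address bit 31 forms the high bit of the
     * pointer, process references (bit 31 = 0) occupy entries 0..511, while the
     * system entries 512..1023 are shared by all processes and remain valid.  */
    static void flush_process_tags(void)
    {
        for (unsigned i = 0; i < TB_ENTRIES / 2; i++)
            tb_tag_valid[i] = false;
    }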
Associated with each of the tags A0 through A1024 is a 32-bit data field containing the corresponding PTE, indicated as B0 through B1024. The PTE includes the physical page frame number PFN at bits 24:00, a valid bit at bit 31, a protection field at bits 30:27, and a modify bit at bit 26. These PTEs are collectively referred to as the page translation store.
A comparator 60 receives bits 30:18 of the virtual address and the 13-bit tag corresponding to the pointer.
A match between these values indicates that the PTE corresponding to the virtual address is located in the page translation store at the location corresponding to the tag bits. Bits 29:09 of the selected PTE are delivered to one input of a register 62. The other input to the register 62 is formed from bits 08:00 of the virtual address. In other words, the corresponding physical page is appended to the byte offset of the virtual address, forming the actual physical address.
This physical address is maintained in the register 62 which is clocked by an inverted pulse from the comparator 60. Thus, the calculated physical address is delivered to the cache 22 only if the comparator 60 indicates that a “hit” was found in the page translation directory.
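A minimal C model of the lookup just described is given below. The bit positions (pointer from bit 31 and bits 17:09, tag from bits 30:18, byte offset from bits 08:00) are taken from the description; the struct layout and the width assumed for the page frame number are assumptions of the example only.

    #include <stdbool.h>
    #include <stdint.h>

    struct tb_entry {                         /* one location of cache 55               */
        bool     valid;
        uint16_t tag;                         /* 13-bit tag: virtual address bits 30:18 */
        uint32_t pte;                         /* 32-bit PTE (page translation store)    */
    };

    /* Probe the translation buffer with a 32-bit virtual address.  The pointer
     * into the 1024-entry cache is formed from bit 31 and bits 17:09, the stored
     * tag is compared against bits 30:18, and on a "hit" the page frame taken
     * from the PTE is appended to the 9-bit byte offset to form the physical
     * address delivered to the cache 22. */
    static bool tb_lookup(const struct tb_entry tb[1024], uint32_t va, uint32_t *pa)
    {
        unsigned index = ((va >> 31) << 9) | ((va >> 9) & 0x1ffu);
        uint16_t tag   = (uint16_t)((va >> 18) & 0x1fffu);

        if (!tb[index].valid || tb[index].tag != tag)
            return false;                     /* "miss": TB Fixup 56 must intervene     */

        uint32_t pfn = tb[index].pte & 0x001fffffu;   /* assumed 21-bit frame number    */
        *pa = (pfn << 9) | (va & 0x1ffu);     /* frame number appended to byte offset   */
        return true;
    }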
At the same time, the VPN (bits 29:09) is delivered to the TB Fixup 56. A state machine 66 controls the operation of the TB Fixup 56 in response to control inputs from the comparator 60 (TB miss), and bits 31:30 of the virtual address. The state machine 66 responds to the TB miss signal by calculating the address of the desired PTE. In general, the PTE is determined by adding the VPN and the address of the base register.
The status of bits 31:30 determines the particular base register that is used for computing the PTE. As discussed previously, there are three separate areas of memory, each having its own unique page table and base address. The state machine 66 interprets the bits 31:30 and delivers a control signal to the select input of a multiplexer 68, whereby the appropriate base register can be selected. The inputs to the multiplexer 68 are connected to a series of six registers 70 containing the base and length registers (P0BR, P1BR, SBR, P0LR, P1LR, and SLR) of each area of memory.
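The computation performed by the state machine 66, the multiplexer 68 and the adder 74 in the simple case can be sketched as below; the structure holding the six registers 70 is an artifact of the example, and the scaling by four reflects the longword alignment of PTEs noted later in the description.

    #include <stdint.h>

    /* Contents of the six base and length registers 70. */
    struct base_length_regs {
        uint32_t p0br, p0lr;      /* program region base / length */
        uint32_t p1br, p1lr;      /* control region base / length */
        uint32_t sbr,  slr;       /* system space base / length   */
    };

    /* Address of the PTE for a faulting virtual address: the base register
     * selected by bits 31:30 plus the virtual page number (bits 29:09) scaled
     * by the four-byte PTE size.  For system space the result is a physical
     * address; for P0/P1 it is itself a virtual address that must be sent back
     * through the translation buffer, as described in the text. */
    static uint32_t pte_address(const struct base_length_regs *r, uint32_t va)
    {
        uint32_t vpn  = (va >> 9) & 0x1fffffu;               /* bits 29:09 */
        uint32_t base = (va & 0x80000000u) ? r->sbr
                      : (va & 0x40000000u) ? r->p1br
                      :                      r->p0br;
        return base + 4u * vpn;
    }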
The state machine 66 also controls the select inputs of a multiplexer 72 which is linked to the outputs of a series of three address registers forming a register set 73 adapted to temporarily store virtual addresses corresponding to different types of TB “miss” states.
More specifically, a "port miss" register is provided for
storing the virtual address corresponding to a first "miss" in processor memory, a "fix-up miss" register is provided for storing the virtual address corresponding to a "miss" resulting from a successive reference to the translation buffer from the fix-up unit, and a "delay miss" register is provided for storing the virtual address corresponding to a "miss" which requires a delayed fix-up action. The response of the fix-up unit to these stored virtual addresses will be described below.
During any "miss" in system memory or a first "miss" in process memory, the state machine 66 selects the port miss address input to the multiplexer 72, which contains bits 21:2 of the virtual address. The lowest order two bits are not needed since the PTE is stored in the cache 22 on longword alignment (4 bytes). The multiplexers 68, 72 deliver their outputs to an adder 74 where they are combined to form the address of the PTE. The address is delivered to an arbitration unit 75 or to the cache 22.
Along with the address, the state machine 66 delivers request signals to either the arbitration unit or the cache 22, depending upon whether the calculated address is a physical or virtual address. The request signals act to enable one of the arbitration unit 75 and cache 22. For example, an address calculated from a process base register is a virtual address and cannot be delivered to the cache 22, but must undergo a virtual-to-physical translation in the translation buffer 24. Accordingly, the state machine 66 delivers the request signal to the arbitration unit 75. The arbitration unit 75 corresponds to the multiplexer 54, shown in FIG. 3, and operates to deliver the signals from the external registers or the internal registers based upon a priority scheme. The internal registers, sequencer 59 and internal register 57, have the highest priority.
Thus, when the state machine 66 delivers the request signal to the arbitration unit 75, the internal registers are selected over the external registers to allow the TB Fixup routine to proceed without conflict from the external registers.
Conversely, an address calculated from a system base register is a physical address and can be delivered directly to the cache 22 to retrieve the desired PTE.
The PTE is retrieved from memory and delivered to a register 76. Bits 30:18 of the corresponding virtual address are delivered to a register 78. The contents of the registers 76, 78 are stored at the locations indicated by the pointer, so as to update the translation buffer cache 55 with the most recently used virtual-to-physical translation.
There is a possibility that the second reference to the translation buffer 24, during a process memory "miss", will also result in a "miss". TB Fixup 56 is capable of handling this double "miss". The state machine 66 recognizes the double miss condition when the second consecutive "miss" signal is received from the comparator 60. The state machine 66 selects the system base register via the multiplexer 68 and the fixup miss address via the multiplexer 72. The port miss address register remains loaded with the original virtual address which resulted in the first "miss". The adder 74 combines these selected signals to arrive at the physical system address of the process base register. Since this is a system memory reference, the address identifies a physical memory location and can be delivered directly to the cache 22 along with the cache enable signal. Here the process is substantially identical to an original system memory reference, and the cache 22 will respond by delivering the PTE stored at the identified address to the translation buffer cache 55. Thus, when the external register is again selected by the arbitration unit, the translation buffer 24 will necessarily "hit" on the virtual-to-physical translation.
According to the translation buffer fix-up routine, before the TB Fixup 56 calculates the PTE address, a fault check is performed to determine if the virtual address has violated a length parameter of the page table. More simply stated, the number of available pages in an area of memory is known, and a virtual page that is greater than the number of pages in memory must be the result of a system error. The adder 74 is used to make this comparison. The state machine 66 configures the adder 74 to perform a 2's complement subtraction by inverting the inputs from the multiplexer 72 and enabling the carry in bit. For this process, the two lowest order bits are necessary for the calculation, so rather than selecting the port miss address input, the state machine selects the delay miss address input to the multiplexer 72 to retrieve bits 21:0 of the virtual address.
The state machine 66 also selects the length register 70 corresponding to the area of memory being translated. Thus, by subtracting the virtual address from the known length of the page table, a negative result indicates that the virtual address is attempting to access a nonexistent PTE. Alternatively, a positive result indicates no length violation exists and the fixup process is allowed to proceed.
The state machine 66 monitors this process via the carry out bit of the adder 74. If the carry out bit is asserted, the result is negative and a fault command is issued to the E-Unit 20.
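In software terms the length check amounts to the comparison sketched below, on the assumption that valid page numbers run from zero up to the count held in the length register; the hardware obtains the same answer through the 2's complement subtraction on the adder 74 described above.

    #include <stdbool.h>
    #include <stdint.h>

    /* Length check performed before the fix-up proceeds: the virtual page
     * number (bits 29:09) must fall within the number of page table entries
     * recorded in the length register for the region being translated. */
    static bool length_violation(uint32_t va, uint32_t length_register)
    {
        uint32_t vpn = (va >> 9) & 0x1fffffu;
        return vpn >= length_register;        /* beyond the table: nonexistent PTE */
    }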
In addition to correcting "misses" in the translation buffer 24, the TB Fixup 56 also aids in retrieving data during multi-precision operations. These multi-precision instructions require access to multiple memory locations even though only a single memory location is identified in the instruction. Thus, while the first memory reference is passed to the translation buffer 24, TB Fixup 56 calculates the next sequential address and delivers it to the sequencer 59. The virtual address is delivered to the zero input of the multiplexer 72 and selected by the state machine 66. At the same time, a constant, having a value of four, is located at the zero input of the multiplexer 68 and is selected by the state machine 66. Therefore, the output of the adder 74 is the virtual address of the next longword needed for the multi-precision instruction. This address is delivered to the arbitration unit 75, where it takes priority over the external registers and is translated to a physical address by the translation buffer 24.
Finally, the process for loading the base and length registers 70 is controlled by the E-Unit 20 during the initialization phase of the CPU. The E-Unit 20 provides a 4-bit tag address and an enable signal to a decoder 80. The decoder 80 responds by enabling the corresponding register 70 to input the data present in the virtual address. The process is repeated for each of the base and length registers 70 until all of the registers have been loaded with the appropriate data.
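The initialization path can be modelled by the small routine below; which tag value selects which of the registers 70 is not stated in the text, so the index-to-register mapping here is purely illustrative.

    #include <stdint.h>

    /* The six base and length registers 70, indexed here by the 4-bit tag
     * supplied by the E-Unit; the ordering is an assumption of the example. */
    static uint32_t regs70[6];    /* e.g. P0BR, P1BR, SBR, P0LR, P1LR, SLR */

    /* Model of decoder 80: when the enable signal accompanies a tag, latch the
     * value presented on the virtual address lines into the selected register. */
    static void load_base_length_register(unsigned tag, int enable, uint32_t va_lines)
    {
        if (enable && tag < 6u)
            regs70[tag] = va_lines;
    }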
Referring now to FIG. 5, there is shown a block diagram illustrating a preferred arrangement for
generation of fault information according to the exception handling scheme of this invention. As shown therein, virtual addresses associated with memory access requests that are received at the individual ports on the front end of the M-Unit (reference numeral 16 in FIG. 1) are accepted by an arbitration unit 92 through communication links 93, 94, and 95, which correspond to the OPU 33 (see FIG. 1), the I-Unit 18, and the E-Unit 20, respectively. The arbitration unit 92 is adapted to select the address corresponding to one of the three external ports that are defined at the M-Unit on the basis of a predefined priority scheme. As discussed above with reference to the operation of the translation buffer and the translation buffer fix-up unit, the virtual addresses being processed are preferably 32-bit addresses generated by those stages in the instruction execution pipeline which require access to memory. One of the three virtual addresses received at its input ports is selected for processing by the arbitration unit 92 and put out as the translation buffer request address that is used subsequently to access address segments of memory. The arbitration unit 92 essentially functions to relay signals from the external or the internal virtual address sources on the basis of a priority scheme which provides the internal sources with a higher priority than the external sources.
It should be noted that the virtual addresses received at the arbitration unit may originate directly from the external sources as a result of a translation buffer "hit" operation, or through the internal sources as a result of the action of the translation buffer fix-up unit subsequent to a "miss" operation during the process of translating virtual addresses to corresponding physical addresses in the
system memory. The particular virtual address selected by the arbitration unit is relayed to a protection check unit 96 which processes the accepted virtual address to determine the presence of a predefined set of memory access violations. According to a preferred embodiment, the protection check unit 96 is adapted to monitor the presence of at least the five types of memory access violations that are listed below in Table A.
TABLE A

TYPE OF VIOLATION         ASSERTED BIT IN FAULT CODE
ACCESS MODE               1
INVALID TRANSLATION       2
LENGTH
INVALID PPTE              3
MODIFY                    5
If it is found that the virtual address in question corresponds to one of the predefined memory access violations, a fault signal 96A, indicative of the presence of a violation, is generated. The fault signal 96A is preferably in the form of a single bit flag which is added to and passed along with the data relayed along subsequent pipeline stages until it eventually reaches the execution stage in the E-Unit.
In order to perform the protection check, the check unit 96 receives a 32-bit PTE 98 which corresponds to the virtual address being processed and is generated as a result of the virtual-to-physical address translation process discussed above in detail. It will be recalled that the PTE includes the physical page frame number PFN at bits 20:00, a "valid" bit at bit 31, a protection field comprising bits 30:27, and a "modify" bit at bit 26.
In functional terms, the protection check unit 96 is a state machine adapted to check the status of specified bits in the 32-bit PTE 98 in order to determine the presence of corresponding predefined memory access violations. More specifically, bit 31 of the PTE 98 is checked to see whether or not it is asserted. As described above, bit 31, when asserted, represents a valid bit signifying that the corresponding page number is resident in memory; when bit 31 is not asserted, the virtual address corresponding to that PTE does not have a corresponding valid translation. If the check unit 96 finds that bit 31 of the PTE is not asserted, it generates the fault signal 96A, indicating the presence of an access violation.
The protection check unit 96 also checks the modify bit in the bit information represented by the PTE 98. If that bit is not asserted, it is an indication that the particular page in memory referenced by the PTE does not have write access, and signifies the presence of a memory access violation.
The check unit 96 also monitors the PTE 98 for length violations to determine if the virtual address accompanying a memory access request is attempting to access a nonexistent PTE, as described in detail above with reference to FIG. 4. Again, if a length violation is found to exist, the fault signal 96A is generated.
Similarly, the PTE 98 is also checked to see if an invalid entry is being referred to in the page table for the process section of system memory; this means that the process PTE or PPTE is invalid. If the result is positive, the fault signal 96A is generated.
Another type of memory access violation that is recognized by the protection check unit 96 is a mode access violation based upon a memory access request that transcends the current mode in which the processor unit is operating. For instance, a memory access request may originate during operation of the system in the user mode and yet address memory segments in the supervisory mode; it is imperative that such requests be identified as access violations. In order to accomplish this, the check unit 96 is provided with a two-bit code 97 representing the mode under which the system is being operated at the time that the protection check is being performed.
In accordance with this invention, the detection of a predefined memory access violation is followed by the generation of a fault code identifying the particular kind of violation that has been detected. More specifically, a separate bit in a fault code field is designated for each of the predefined access violations listed above in Table A. For instance, bit 1 of the 5-bit fault code is set if a mode access violation is found, bit 2 is asserted if an invalid translation violation exists, bit 3 is asserted if a PPTE violation exists, and bit 5 is asserted if a modify violation is detected. A 5-bit fault code is generated for each of the external virtual address sources and stored separately in corresponding 5-bit code registers. More specifically, a register set 100 is provided in which a register 101 is adapted to receive and store the fault code generated in correspondence to the virtual address originating from the E-Unit. Similarly, a register 102 is provided for storing the fault code corresponding to the OPU, and another 5-bit register 103 is provided for storing the fault code corresponding to the I-Unit.
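The checks and the fault code format can be summarised by the sketch below. The bit assignments follow the statement above, reading "bit n" as bit position n; the position of the length violation bit is not given in the text and is an assumption, as is the restriction of the modify-bit test to write requests. The mode and length decisions depend on machinery modelled elsewhere, so their outcomes are passed in as flags.

    #include <stdbool.h>
    #include <stdint.h>

    /* Fault code bits; the LENGTH position is assumed. */
    #define FAULT_ACCESS_MODE   (1u << 1)
    #define FAULT_INVALID_XLATE (1u << 2)
    #define FAULT_INVALID_PPTE  (1u << 3)
    #define FAULT_LENGTH        (1u << 4)     /* assumed position */
    #define FAULT_MODIFY        (1u << 5)

    /* Sketch of the checks attributed to protection check unit 96.  The PTE
     * bit positions follow the description: valid bit at bit 31, modify bit
     * at bit 26.  A nonzero return corresponds to asserting fault signal 96A
     * together with the matching fault code bits. */
    static uint32_t protection_check(uint32_t pte, bool is_write,
                                     bool ppte_invalid, bool length_ok, bool mode_ok)
    {
        uint32_t fault = 0;

        if (!(pte & (1u << 31)))              /* no valid translation             */
            fault |= FAULT_INVALID_XLATE;
        if (is_write && !(pte & (1u << 26)))  /* page lacks write access          */
            fault |= FAULT_MODIFY;
        if (ppte_invalid)                     /* invalid process page table entry */
            fault |= FAULT_INVALID_PPTE;
        if (!length_ok)                       /* virtual page beyond page table   */
            fault |= FAULT_LENGTH;
        if (!mode_ok)                         /* request transcends current mode  */
            fault |= FAULT_ACCESS_MODE;

        return fault;
    }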
At the same time that an access violation is detected by the protection check unit 96 and the corresponding fault signal 96A generated, the virtual address corresponding to the violation is relayed as a translation buffer request 99 and is subsequently stored in a corresponding register of register set 104. The register set includes a 32-bit address register 105 for storing the virtual address, generated by the E-Unit, which is determined by the check unit to represent a memory access violation. Similarly, address register 106 is provided for storing the 32-bit virtual address generated by the I-Unit, and a 32-bit register 107 is provided for storing the virtual address corresponding to the OPU.
The fault address information stored in registers 105-107 is linked to a multiplexer 108. The MUX 108 is provided with a select signal designated as the fault priority signal 109, which determines the order in which stored fault addresses are channeled out in case more than one fault address is found to be active in the address registers when the stored fault information is subsequently recalled. The fault address information and the fault code information that is stored in the register sets 100 and 104 is retained within the M-Unit until the E-Unit requests the transfer of the fault parameters.
Such a request is originated when the instruction is acted upon by the E-Unit at the execution stage and a check for the presence of the 1-bit fault signal is found to be positive.
According to a feature of this invention, fault information generated in response to memory access violations which are initiated by the E-Unit is designated as taking precedence over fault information generated by the OPU which, in turn, is designated as having a higher priority than I-Unit faults. In effect, faults are processed in the order of execution dictated by the pipeline stages in the E-Unit. The reason for placing E-Unit faults at the highest level in the fault priority scheme is that completion of operations in the E-Unit pipeline stages is indispensable to execution of a current instruction. This is not the case with the OPU stage, which is adapted to the processing of pre-fetched instructions and operands which are not essential to the completion of a current instruction. The fault priority signal is preferably a 2-bit control signal which selects the E-Unit address register data as the output of MUX 108 if valid fault addresses exist simultaneously in the E- and I-Unit address registers 105 and 106, respectively.
On a similar basis, the fault code information stored in the registers 101-103 is fed to a MUX 110 which generates, on the basis of the same fault priority signal that is fed to MUX 108, an output 111 representing one of the three 5-bit codes input to it. The 32-bit fault address generated from MUX 108 is combined with the fault code generated by MUX 110 at a third MUX 111 so that, in effect, the fault address and the corresponding fault code constitute the fault data that is relayed out to the E-Unit as the fault parameters requested by the E-Unit when a fault indication is detected at the execution stage. It should be noted that the fault parameters are relayed along the same lines that are normally used to transfer data from the M-Unit to the E-Unit.
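The retention and prioritised recall of the fault parameters can be sketched as follows; the per-source structure and the "active" flag are artifacts of the example, while the priority order (E-Unit over OPU over I-Unit) is the one stated above.

    #include <stdbool.h>
    #include <stdint.h>

    enum fault_source { SRC_EUNIT, SRC_OPU, SRC_IUNIT, SRC_COUNT };

    /* Fault parameters held per external source in the M-Unit, corresponding
     * to the 5-bit code registers 101-103 and the address registers 105-107. */
    struct fault_regs {
        bool     active[SRC_COUNT];
        uint8_t  code[SRC_COUNT];
        uint32_t address[SRC_COUNT];
    };

    /* Return the fault parameters to hand to the E-Unit when more than one
     * source has an active fault: E-Unit faults take precedence over OPU
     * faults, which take precedence over I-Unit faults, mirroring the fault
     * priority signal that steers MUXes 108 and 110. */
    static bool select_fault(const struct fault_regs *r,
                             uint8_t *code, uint32_t *address)
    {
        static const enum fault_source order[SRC_COUNT] = { SRC_EUNIT, SRC_OPU, SRC_IUNIT };

        for (unsigned i = 0; i < SRC_COUNT; i++) {
            enum fault_source s = order[i];
            if (r->active[s]) {
                *code = r->code[s];
                *address = r->address[s];
                return true;
            }
        }
        return false;                         /* no fault parameters pending */
    }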
Referring now to FIG. 6, there is shown a flowchart 120 embodying the procedural steps undergone by the system in the identification and generation of memory access exceptions or faults. At step 121, a memory access request corresponding to one of the external source ports provided on the M-Unit is selected for processing. Subsequently, at step 122 the virtual address corresponding to the selected memory access request is processed and undergoes the virtual-to-physical address translation.
At step 123, the protection check is performed upon the information (in particular, the PTE) generated as a result of the translation process. The checking procedure detects the presence of the predefined set of memory access faults. A determination is made at step 123A as to whether or not any memory access violations or faults exist. If the answer at step 123A is positive, step 124 is accessed. If no memory access violation is detected pursuant to the check performed at step 123A, the system automatically continues with the pipelined processing of other memory access requests at step 131.
At the next step 124, the particular type of access fault is identified and the corresponding fault code generated by the protection check unit. Subsequently, at step 125, the external port originating the request leading to the detected memory access violation is deactivated on the M-Unit. Next, at step 126, a determination is made as to whether or not the requested memory access corresponds to a read operation.
If the answer at step 126 is found to be in the affirmative, step 127 is reached where the fault signal indicative of the presence of a memory access violation is generated by asserting a fault bit which is propagated along the pipeline with the results of the memory access request. At step 128, the fault parameters, including the virtual fault address and the fault code generated as a result of the protection checking process, are stored in corresponding registers in the M-Unit.
At the subsequent step 129, the read operation is performed upon the translated physical address corresponding to the virtual address being currently processed. At step 130, the read data is propagated in combination with the asserted fault bit along the succeeding pipeline stages until the data is stored within the source list 44 (see FIG. 1) within the E-Unit.
Subsequently, at step 131, the system continues with the pipelined processing of memory access requests related to those M-Unit front-end ports which have not been affected by the deactivation performed at step 125.
If the answer at step 126 is found to be in the negative, it is an indication that the memory access request corresponds to a write operation, and step 132 is accessed where the fault signal is generated by asserting the fault bit. At step 133, the fault information comprising the virtual fault address and the fault code identifying the port originating the request is stored as fault parameters for subsequent use. At step 134, the translated physical address is stored along with the asserted fault bit in the write queue arrangement 34 (see FIG. 1) instead of being propagated along the succeeding pipeline stages.
The actual write operation has to be postponed in this manner because the data that has to be written is not available until after the execution stage. When the data is in fact available, the translated destination addresses which are stored within the write queue are paired with the corresponding data received from the E-Unit. Accordingly, instructions may be conveniently retired by writing the E-Unit data at the corresponding pre-translated destination address, thereby saving the time that would otherwise be required in performing the virtual-to-physical address translation at this point.
Following the execution of step 134, the system accesses step 131, where the pipelined processing of other outstanding memory access requests from ports which have not been deactivated at step 125 is continued.
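A compact model of the write path is given below; the queue entry layout, the helper name and the stand-in for the actual memory write are assumptions of the example, but the essential point follows the description: the pre-translated destination address and the fault bit wait in the queue until the data exists.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* One entry of a write queue modelled after arrangement 34: the physical
     * destination address is translated and queued before the data exists,
     * and the asserted fault bit, if any, rides along with it. */
    struct write_queue_entry {
        uint32_t physical_address;
        bool     fault;
    };

    /* At retirement, pair the queued address with the data produced by the
     * E-Unit.  If the fault bit is set the write must not be performed; the
     * caller instead recalls the stored fault parameters and traps. */
    static bool retire_write(const struct write_queue_entry *e, uint32_t data)
    {
        if (e->fault)
            return false;

        /* Stand-in for the memory write at the pre-translated address. */
        printf("write 0x%08x -> PA 0x%08x\n", (unsigned)data, (unsigned)e->physical_address);
        return true;
    }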
Referring now to FIG. 7, there is shown a flowchart illustrating the sequence of operations involved in detecting the presence of and responding to fault information. As shown therein, the detection and response process for memory read operations is initiated at step 141. At step 142, the read data representing the source operand is retrieved from the source list 44 (FIG.
1) in the E-Unit 20. At step 143, the fault bit associated with the stored data is examined and a determination is made at step 144 as to whether or not the fault bit is asserted. If the fault bit is not found to be asserted, step 145 is accessed where the instruction is executed in a normal fashion using the data retrieved from the source list as the operand for the read operation.
However, if the answer at step 144 is found to be positive, that is, the fault bit is indeed asserted, step
146 is accessed where the micro-engine of the system is trapped so that execution of the micro-code which controls the pipelined instruction execution process is halted. Subsequently, at step 147, the E-Unit accesses the M-Unit 16 and requests that the corresponding fault parameters stored in the fault address registers and the fault code registers inside the M-Unit be transmitted over to the E-Unit. Upon receiving the fault parameters from the M-Unit, the fault code is decoded by the E-Unit in a conventional manner and a corresponding one of a set of predefined trap routines is invoked for processing the particular type of memory access violation that has been detected. The definition of such trap routines for handling access violations is conventional and accordingly will not be described here in detail.
It should be noted that the trapping of the system micro-engine at step 146 occurs only if an instruction which has resulted in a memory access violation has reached the execution stage in the E-Unit and the corresponding read or write data is absolutely essential to execution of the instruction. The exception processing scheme of this invention accordingly provides a distinct advantage over conventional techniques where trap routines are invoked in response to access violations at the point in the pipeline stage where the violation is detected. With such schemes, the processing activities of all succeeding pipeline stages are disrupted. In addition, such conventional fault processing schemes result in substantial wastage of time because trap routines get invoked even for those operations which are eventually cancelled prior to execution of the instruction as a result of events occurring at succeeding stages of the pipeline. These problems are avoided by the present invention because the stored fault parameters are recalled only at the point of final execution of the instruction, so that trap routines are invoked only when it is essential that the memory access exception be processed prior to executing the instruction.
In the flowchart of FIG. 7, the processing of fault information in the case of memory write operations is initiated at step 150 and is followed, at step 151, by the retrieval of write data that is generated by the E-Unit and which needs to be written into the segment of memory whose address has been previously stored in the M-Unit. At step 152, the corresponding address entry in the write queue arrangement 34 (see FIG. 1) is retrieved from the M-Unit. Subsequently, at step 153, the fault bit also stored in association with the write queue entry is examined. At step 154, a determination is made as to whether or not the fault bit has been asserted. If the answer is found to be in the negative, the system accesses step 155 where execution of the instruction is proceeded with in a normal fashion by using the retrieved address entry from the write queue as the destination operand. However, if the fault bit is indeed found to be asserted at step 154, the system executes steps 146, 147, 148, and 149 in the manner identical to that used with the processing of memory read operations. More specifically, the system micro-engine is trapped, the fault parameters previously stored in the M-Unit are recalled, the fault code included therein is decoded, and the corresponding trap routine invoked in order to process the fault.
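The execution-stage response that is common to the read and write paths can be sketched as follows; the trap routine names are placeholders, since the text leaves the definition of the trap routines to conventional practice, and the fault code positions repeat the assumptions made earlier.

    #include <stdint.h>
    #include <stdio.h>

    #define FAULT_ACCESS_MODE   (1u << 1)
    #define FAULT_INVALID_XLATE (1u << 2)
    #define FAULT_INVALID_PPTE  (1u << 3)
    #define FAULT_LENGTH        (1u << 4)     /* assumed position */
    #define FAULT_MODIFY        (1u << 5)

    /* Placeholder trap routines; their real definition is conventional. */
    static void trap_access_mode(uint32_t va)   { printf("mode fault at 0x%08x\n", (unsigned)va); }
    static void trap_invalid_xlate(uint32_t va) { printf("translation fault at 0x%08x\n", (unsigned)va); }
    static void trap_other(uint32_t va)         { printf("memory management fault at 0x%08x\n", (unsigned)va); }

    /* Invoked only when an operand or write-queue entry carrying an asserted
     * fault bit reaches the execution stage: with the micro-engine halted and
     * the fault parameters recalled from the M-Unit, decode the fault code
     * and dispatch to the matching trap routine. */
    static void handle_fault_at_execution(uint8_t fault_code, uint32_t fault_address)
    {
        if (fault_code & FAULT_ACCESS_MODE)
            trap_access_mode(fault_address);
        else if (fault_code & FAULT_INVALID_XLATE)
            trap_invalid_xlate(fault_address);
        else
            trap_other(fault_address);        /* length, PPTE or modify faults */
    }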
The exception handling scheme described above is particularly adapted to efficiently handling exception information encountered in the processing of memory references which may or may not be required for subsequent instruction execution even when the instruction is completed (as is the case for execution of variable length bit field operations specified by base, position, and size operands). Where an address translation problem results in the generation of exception information only for the base address while the corresponding real field address has no access violation, the port corresponding to the pipeline stage where the exception is encountered is not deactivated. Instead, the exception information is pipelined onto the execution stage where a determination of the real field is made.
If the field is not found to be active at that point, the fault information is dismissed and the real field data is fetched. With this arrangement, other operand data can be pre-fetched in the meantime without any flow problems, since the instruction flow does not change direction.
Claims (12)
1. A method of processing memory access exceptions encountered during pipelined instruction execution in a virtual memory-based computer system, said instruction execution being performed by using an instruction pipeline having a plurality of pipeline stages, each of said pipeline stages being dedicated to performing a predefined one of several tasks into which an instruction is divided, said computer including an instruction unit (I-Unit) for fetching and decoding instructions and fetching instruction operands, an execution unit (E-Unit) for performing specified operations upon instruction operands, a system memory indexed by physical address and a memory unit (M-Unit) including a translation buffer for converting virtual addresses delivered by the I- and E-Units to physical addresses within the system memory, said pipeline stages including an execution stage in the E-Unit, and preceding pipeline stages in the E- and I-Units which require virtual-to-physical address translations prior to instruction execution in the execution stage in order for one of the preceding pipeline stages to send valid results of said address translation down said instruction pipeline and through an intermediate one of the preceding pipeline stages which processes said valid results before said valid results reach said execution stage, said method comprising the steps of: receiving memory access requests, including memory read or write operations, from said preceding pipeline stages in said E- and I-Units which require virtual-to-physical address translations; using said translation buffer to translate virtual addresses accompanying a memory access request into corresponding physical addresses; checking said virtual addresses and results of said address translation to determine the presence of one or more of a predefined set of memory access violations; generating fault information and invalid results in response to said presence of one or more of the predefined set of memory access violations; pipelining selected segments of said fault information and said invalid results along said instruction pipeline from said preceding pipeline stages to said execution stage; and detecting the presence of said pipelined segments of fault information when said pipelined segments of fault information reach said execution stage, and in response thereto invoking a predefined exception handler routine corresponding to the memory access violation associated with said detected fault information so that the invocation of said exception handler routine is delayed past the time that said one of said preceding pipeline stages sends said invalid results down said instruction pipeline and past the time that said invalid results pass through said intermediate one of said preceding pipeline stages.
2. The exception processing method of claim 1 wherein memory access requests from pipeline stages within said E- and I-Units are accepted by the M-Unit at separate ports defined in correspondence to said pipeline stages and wherein the port corresponding to the pipeline stage initiating a memory access request which results in a memory access violation is prevented from accepting further memory access requests.
3. The exception processing method of claim 2 wherein said fault information includes flag information indicative of the presence of a memory access violation, a fault code identifying the type of memory access violation detected, and a fault address corresponding to the virtual address associated with said detected violation.
4. The exception processing method of claim 3 wherein said selected segments of said fault information consist of said flag information, and the fault code and fault address are stored within said M-Unit.
5. A method of processing memory access exceptions during the operation of a pipelined computer system; said computer system having a memory unit, an instruction unit and an execution unit interconnected to form an instruction pipeline for processing instructions; said instruction unit including means for fetching and decoding instructions to obtain operation codes and source and destination operand specifiers, and means for fetching source operands specified by said source operand specifiers; said execution unit including means for performing operations specified by said operation codes upon said source operands, means for fetching additional operands and means for retiring results of said operations; said memory unit including means for performing virtual-to-physical address translation, a first port connected to said means for fetching instructions, a second port connected to said means for fetching source operands, and a third port connected to said means for fetching additional operands; said method comprising the steps of: sensing when memory access requests by said means for fetching instructions and means for fetching source operands cause a memory access violation, and in response to said memory access violation generating fault information and inhibiting the processing of additional memory requests from the respective means for fetching having caused the memory access violation; pipelining from said memory unit to said execution unit fault information about faults generated by said means for fetching instructions and means for fetching source operands, the fault information being pipelined in parallel with the instruction pipeline, said execution unit also receiving fault information about faults generated by said means for fetching additional operands, and in response to receiving the fault information, said execution unit invoking a predefined exception handling routine corresponding to the memory access violation associated with the received fault information so that the initiation of exception handling to resolve memory access violations caused by the fetching of instructions and the fetching of source operands is delayed until the fault information being pipelined in parallel with the instruction pipeline is received by the execution unit.
6. The method of claim 5 wherein the fault information generated in step includes flag information indicative of the presence of a memory access violation, a fault code identifying the type of memory access violation detected, and a fault address corresponding to the virtual address associated with said detected violation.
7. The method of claim 6 wherein the only portion of said fault information generated in step that is pipelined or passed to the execution unit in step is said flag information, and the fault code and fault address are stored within said memory access unit.
8. The method of claim 5 wherein said execution unit in step invokes said exception handling routine when the instruction having caused the fault would have been issued in the absence of the fault.
9. A pipelined computer system comprising a memory unit, an instruction unit and an execution unit interconnected to form an instruction pipeline for processing instructions; said instruction unit including means for fetching and decoding instructions to obtain operation codes and source and destination operand specifiers, and means for fetching source operands specified by said source operand specifiers; said execution unit including means for performing operations specified by said operation codes upon said source operands, means for fetching additional operands and means for retiring results of said operations; said memory unit including means for performing virtual-to-physical address translation, a first port connected to said means for fetching instructions in said instruction unit, a second port connected to said means for fetching source operands in said instruction unit, and a third port connected to said means for fetching additional operands in said execution unit; said memory unit further including means for sensing when memory access requests by said means for fetching instructions and means for fetching source operands cause a memory access violation, and means responsive to the sensing of a memory access violation for generating fault information and inhibiting the processing of additional memory requests from the respective means for fetching having caused the memory access violation; said computer system further including means for pipelining from said memory unit to said execution unit fault information about faults generated by said means for fetching instructions and means for fetching source operands, the fault information being pipelined in parallel with the instruction pipeline, said memory unit also being connected to said execution unit for passing fault information about faults generated by said means for fetching additional operands; and said execution unit further comprising means responsive to the received fault information for invoking a predefined exception handling routine so that the initiation of exception handling to resolve the memory access violations caused by the fetching of instructions and the fetching of source operands is delayed until the fault information being pipelined in parallel with the instruction pipeline is received by the execution unit.
10. The computer system of claim 9 wherein said means for generating includes means for generating flag information indicative of the presence of a memory access violation, a fault code identifying the type of memory access violation detected, and a fault address corresponding to the virtual address associated with said detected violation.
11. The computer system of claim 10 wherein said memory access unit includes means for storing said fault code and said fault address, and wherein said means for pipelining includes means for pipelining said flag information.
12. The computer system substantially as described herein with reference to the drawings. DATED this TWENTY SECOND day of SEPTEMBER 1992 Digital Equipment Corporation Patent Attorneys for the Applicant SPRUSON FERGUSON
AU53943/90A
1989-02-03
1990-04-27
Processing of memory access exceptions with pre-fetched instructions within the instruction pipeline of a memory system based digital computer
Ceased
AU631420B2
(en)
Applications Claiming Priority (1)
Application Number
Priority Date
Filing Date
Title
US07/306,866
US4985825A
(en)
1989-02-03
1989-02-03
System for delaying processing of memory access exceptions until the execution stage of an instruction pipeline of a virtual memory system based digital computer
Publications (2)
Publication Number
Publication Date
AU5394390A
AU5394390A
(en)
1991-12-19
AU631420B2
true
AU631420B2
(en)
1992-11-26
Family
ID=23187213
Family Applications (1)
Application Number
Title
Priority Date
Filing Date
AU53943/90A
Ceased
AU631420B2
(en)
1989-02-03
1990-04-27
Processing of memory access exceptions with pre-fetched instructions within the instruction pipeline of a memory system based digital computer
Country Status (7)
Country
Link
US
(1)
US4985825A
(en)
EP
(1)
EP0381470B1
(en)
JP
(1)
JPH02234248A
(en)
AT
(1)
ATE158423T1
(en)
AU
(1)
AU631420B2
(en)
CA
(1)
CA1323701C
(en)
DE
(1)
DE69031433T2
(en)
Families Citing this family (60)
* Cited by examiner, † Cited by third party
Publication number
Priority date
Publication date
Assignee
Title
US5297263A
(en)
*
1987-07-17
1994-03-22
Mitsubishi Denki Kabushiki Kaisha
Microprocessor with pipeline system having exception processing features
US5197133A
(en)
*
1988-12-19
1993-03-23
Bull Hn Information Systems Inc.
Control store addressing from multiple sources
US5075844A
(en)
*
1989-05-24
1991-12-24
Tandem Computers Incorporated
Paired instruction processor precise exception handling mechanism
US5329629A
(en)
*
1989-07-03
1994-07-12
Tandem Computers Incorporated
Apparatus and method for reading, writing, and refreshing memory with direct virtual or physical access
JP2504235B2
(en)
*
1989-11-16
1996-06-05
三菱電機株式会社
Data processing device
JPH03185530A
(en)
*
1989-12-14
1991-08-13
Mitsubishi Electric Corp
Data processor
US5546551A
(en)
*
1990-02-14
1996-08-13
Intel Corporation
Method and circuitry for saving and restoring status information in a pipelined computer
US5450564A
(en)
*
1990-05-04
1995-09-12
Unisys Corporation
Method and apparatus for cache memory access with separate fetch and store queues
JP2570466B2
(en)
*
1990-05-18
1997-01-08
日本電気株式会社
Information processing device
CA2045789A1
(en)
*
1990-06-29
1991-12-30
Richard Lee Sites
Granularity hint for translation buffer in high performance processor
US5251310A
(en)
*
1990-06-29
1993-10-05
Digital Equipment Corporation
Method and apparatus for exchanging blocks of information between a cache memory and a main memory
GB9114513D0
(en)
*
1991-07-04
1991-08-21
Univ Manchester
Condition detection in asynchronous pipelines
US5493687A
(en)
1991-07-08
1996-02-20
Seiko Epson Corporation
RISC microprocessor architecture implementing multiple typed register sets
US5539911A
(en)
*
1991-07-08
1996-07-23
Seiko Epson Corporation
High-performance, superscalar-based computer system with out-of-order instruction execution
US5961629A
(en)
*
1991-07-08
1999-10-05
Seiko Epson Corporation
High performance, superscalar-based computer system with out-of-order instruction execution
US5438668A
(en)
1992-03-31
1995-08-01
Seiko Epson Corporation
System and method for extraction, alignment and decoding of CISC instructions into a nano-instruction bucket for execution by a RISC computer
EP0636256B1
(en)
*
1992-03-31
1997-06-04
Seiko Epson Corporation
Superscalar risc processor instruction scheduling
JP3637920B2
(en)
1992-05-01
2005-04-13
セイコーエプソン株式会社
System and method for retirement of instructions in a superscaler microprocessor
JPH0667980A
(en)
*
1992-05-12
1994-03-11
Unisys Corp
Cache logic system for optimizing access to four- block cache memory and method for preventing double mistakes in access to high-speed cache memory of main frame computer
WO1994008287A1
(en)
*
1992-09-29
1994-04-14
Seiko Epson Corporation
System and method for handling load and/or store operations in a superscalar microprocessor
US6735685B1
(en)
*
1992-09-29
2004-05-11
Seiko Epson Corporation
System and method for handling load and/or store operations in a superscalar microprocessor
US5628021A
(en)
*
1992-12-31
1997-05-06
Seiko Epson Corporation
System and method for assigning tags to control instruction processing in a superscalar processor
WO1994016384A1
(en)
1992-12-31
1994-07-21
Seiko Epson Corporation
System and method for register renaming
US5630149A
(en)
*
1993-10-18
1997-05-13
Cyrix Corporation
Pipelined processor with register renaming hardware to accommodate multiple size registers
US5740398A
(en)
*
1993-10-18
1998-04-14
Cyrix Corporation
Program order sequencing of data in a microprocessor with write buffer
US5471598A
(en)
*
1993-10-18
1995-11-28
Cyrix Corporation
Data dependency detection and handling in a microprocessor with write buffer
US5615402A
(en)
*
1993-10-18
1997-03-25
Cyrix Corporation
Unified write buffer having information identifying whether the address belongs to a first write operand or a second write operand having an extra wide latch
US6219773B1
(en)
1993-10-18
2001-04-17
Via-Cyrix, Inc.
System and method of retiring misaligned write operands from a write buffer
SG48907A1
(en)
*
1993-12-01
1998-05-18
Intel Corp
Exception handling in a processor that performs speculative out-of-order instruction execution
DE4434895C2
(en)
*
1993-12-23
1998-12-24
Hewlett Packard Co
Method and device for handling exceptional conditions
US5555399A
(en)
*
1994-07-07
1996-09-10
International Business Machines Corporation
Dynamic idle list size processing in a virtual memory management operating system
US5640526A
(en)
*
1994-12-21
1997-06-17
International Business Machines Corporation
Superscaler instruction pipeline having boundary indentification logic for variable length instructions
US6643765B1
(en)
1995-08-16
2003-11-04
Microunity Systems Engineering, Inc.
Programmable processor with group floating point operations
US5933651A
(en)
*
1995-09-29
1999-08-03
Matsushita Electric Works, Ltd.
Programmable controller
US6101590A
(en)
1995-10-10
2000-08-08
Micro Unity Systems Engineering, Inc.
Virtual memory system with local and global virtual address translation
US5778208A
(en)
*
1995-12-18
1998-07-07
International Business Machines Corporation
Flexible pipeline for interlock removal
US5802573A
(en)
*
1996-02-26
1998-09-01
International Business Machines Corp.
Method and system for detecting the issuance and completion of processor instructions
US6061773A
(en)
*
1996-05-03
2000-05-09
Digital Equipment Corporation
Virtual memory system with page table space separating a private space and a shared space in a virtual memory
JP3849951B2
(en)
*
1997-02-27
2006-11-22
Hitachi, Ltd.
Main memory shared multiprocessor
US6219758B1
(en)
*
1998-03-24
2001-04-17
International Business Machines Corporation
False exception for cancelled delayed requests
US6233668B1
(en)
1999-10-27
2001-05-15
Compaq Computer Corporation
Concurrent page tables
US6766440B1
(en)
*
2000-02-18
2004-07-20
Texas Instruments Incorporated
Microprocessor with conditional cross path stall to minimize CPU cycle time length
US6859897B2
(en)
*
2000-03-02
2005-02-22
Texas Instruments Incorporated
Range based detection of memory access
JP4522548B2
(en)
*
2000-03-10
2010-08-11
Fujitsu Frontech Limited
Access monitoring device and access monitoring method
DE10108107A1
(en)
*
2001-02-21
2002-08-29
Philips Corp Intellectual Pty
Circuit arrangement and method for detecting an access violation in a microcontroller arrangement
US7310800B2
(en)
*
2001-02-28
2007-12-18
Safenet, Inc.
Method and system for patching ROM code
US7684447B2
(en)
*
2004-12-23
2010-03-23
Agilent Technologies, Inc.
Sequencer and method for sequencing
US7752427B2
(en)
*
2005-12-09
2010-07-06
Atmel Corporation
Stack underflow debug with sticky base
US20080181210A1
(en)
*
2007-01-31
2008-07-31
Finisar Corporation
Processing register values in multi-process chip architectures
US9507725B2
(en)
*
2012-12-28
2016-11-29
Intel Corporation
Store forwarding for data caches
US20140189246A1
(en)
*
2012-12-31
2014-07-03
Bin Xing
Measuring applications loaded in secure enclaves at runtime
KR101978984B1
(en)
*
2013-05-14
2019-05-17
Electronics and Telecommunications Research Institute
Apparatus and method for detecting fault of processor
US10061675B2
(en)
*
2013-07-15
2018-08-28
Texas Instruments Incorporated
Streaming engine with deferred exception reporting
US9311508B2
(en)
*
2013-12-27
2016-04-12
Intel Corporation
Processors, methods, systems, and instructions to change addresses of pages of secure enclaves
US9672354B2
(en)
*
2014-08-18
2017-06-06
Bitdefender IPR Management Ltd.
Systems and methods for exposing a result of a current processor instruction upon exiting a virtual machine
US20160085695A1
(en)
2014-09-24
2016-03-24
Intel Corporation
Memory initialization in a protected region
US10528353B2
(en)
2016-05-24
2020-01-07
International Business Machines Corporation
Generating a mask vector for determining a processor instruction address using an instruction tag in a multi-slice processor
US10248555B2
(en)
2016-05-31
2019-04-02
International Business Machines Corporation
Managing an effective address table in a multi-slice processor
US10467008B2
(en)
*
2016-05-31
2019-11-05
International Business Machines Corporation
Identifying an effective address (EA) using an interrupt instruction tag (ITAG) in a multi-slice processor
US10747679B1
(en)
*
2017-12-11
2020-08-18
Amazon Technologies, Inc.
Indexing a memory region
Family Cites Families (8)
* Cited by examiner, † Cited by third party
Publication number
Priority date
Publication date
Assignee
Title
GB1443777A
(en)
*
1973-07-19
1976-07-28
Int Computers Ltd
Data processing apparatus
JPS6028015B2
(en)
*
1980-08-28
1985-07-02
NEC Corporation
Information processing equipment
JPS57185545A
(en)
*
1981-05-11
1982-11-15
Hitachi Ltd
Information processor
US4757445A
(en)
*
1983-09-12
1988-07-12
Motorola, Inc.
Method and apparatus for validating prefetched instruction
US4710866A
(en)
*
1983-09-12
1987-12-01
Motorola, Inc.
Method and apparatus for validating prefetched instruction
DE3369015D1
(en)
*
1983-09-16
1987-02-12
Ibm Deutschland
Arrangement in the command circuit of a pipeline processor for instruction interrupt and report
US5063497A
(en)
*
1987-07-01
1991-11-05
Digital Equipment Corporation
Apparatus and method for recovering from missing page faults in vector data processing operations
US4875160A
(en)
*
1988-07-20
1989-10-17
Digital Equipment Corporation
Method for implementing synchronous pipeline exception recovery
1989
1989-02-03
US
US07/306,866
patent/US4985825A/en
not_active
Expired – Lifetime
1989-09-19
CA
CA000611918A
patent/CA1323701C/en
not_active
Expired – Fee Related
1990
1990-01-16
JP
JP2007008A
patent/JPH02234248A/en
active
Granted
1990-01-31
EP
EP90301002A
patent/EP0381470B1/en
not_active
Expired – Lifetime
1990-01-31
DE
DE69031433T
patent/DE69031433T2/en
not_active
Expired – Lifetime
1990-01-31
AT
AT90301002T
patent/ATE158423T1/en
not_active
IP Right Cessation
1990-04-27
AU
AU53943/90A
patent/AU631420B2/en
not_active
Ceased
Also Published As
Publication number
Publication date
DE69031433D1
(en)
1997-10-23
AU5394390A
(en)
1991-12-19
US4985825A
(en)
1991-01-15
EP0381470B1
(en)
1997-09-17
ATE158423T1
(en)
1997-10-15
JPH02234248A
(en)
1990-09-17
JPH0526219B2
(en)
1993-04-15
EP0381470A2
(en)
1990-08-08
EP0381470A3
(en)
1992-11-19
CA1323701C
(en)
1993-10-26
DE69031433T2
(en)
1998-04-16
Similar Documents
Publication
Publication Date
Title
AU631420B2
(en)
1992-11-26
Processing of memory access exceptions with pre-fetched instructions within the instruction pipeline of a memory system based digital computer
CA1325288C
(en)
1993-12-14
Method and apparatus for controlling the conversion of virtual to physical memory addresses in a digital computer system
US5113515A
(en)
1992-05-12
Virtual instruction cache system using length responsive decoded instruction shifting and merging with prefetch buffer outputs to fill instruction buffer
EP0391517B1
(en)
1997-07-30
Method and apparatus for ordering and queueing multiple memory access requests
AU632324B2
(en)
1992-12-24
Multiple instruction preprocessing system with data dependency resolution
US5517651A
(en)
1996-05-14
Method and apparatus for loading a segment register in a microprocessor capable of operating in multiple modes
US5125083A
(en)
1992-06-23
Method and apparatus for resolving a variable number of potential memory access conflicts in a pipelined computer system
EP0380859B1
(en)
1997-12-29
Method of preprocessing multiple instructions
EP0465321B1
(en)
1997-08-13
Ensuring data integrity in multiprocessor or pipelined processor system
JP2618175B2
(en)
1997-06-11
History table of virtual address translation prediction for cache access
US6216200B1
(en)
2001-04-10
Address queue
KR100303673B1
(en)
2001-09-24
Forwarding store instruction result to load instruction with reduced stall or flushing by effective/real data address bytes matching
US5249286A
(en)
1993-09-28
Selectively locking memory locations within a microprocessor's on-chip cache
US5019965A
(en)
1991-05-28
Method and apparatus for increasing the data storage rate of a computer system having a predefined data path width
US4763250A
(en)
1988-08-09
Paged memory management unit having variable number of translation table levels
JPH1074166A
(en)
1998-03-17
Multilevel dynamic set prediction method and device
JP2000250810A
(en)
2000-09-14
Method, processor and system for executing load instruction
WO1996012231A1
(en)
1996-04-25
A translation buffer for detecting and preventing conflicting virtual addresses from being stored therein
CN113412473A
(en)
2021-09-17
Directed interrupts for multi-level virtualization with interrupt tables
CN113424150A
(en)
2021-09-21
Directed interrupt virtualization with run indicator
US6901540B1
(en)
2005-05-31
TLB parity error recovery
JP3045952B2
(en)
2000-05-29
Fully associative address translator
US6338128B1
(en)
2002-01-08
System and method for invalidating an entry in a translation unit
US5758141A
(en)
1998-05-26
Method and system for selective support of non-architected instructions within a superscalar processor system utilizing a special access bit within a machine state register
IE901525A1
(en)
1991-11-06
Processing of memory access exceptions with pre-fetched instructions within the instruction pipeline of a memory system based digital computer
None