What are Architecture Questions on a Technical Interview? Architecture questions asked in IT company interviews are always an interesting topic. You can prepare with theoretical materials and still discover there is something you have never heard about. Computer science questions are always difficult to master, because everything grows so fast and keeps getting more complex.
1. Explain what is DMA?
DMA (Direct Memory Access) is a feature that allows hardware subsystems such as disk controllers, network cards, and graphics cards to read from and write to main memory independently of the CPU. The CPU sets up the transfer (source, destination, and length), continues with other work while the DMA controller moves the data, and is notified by an interrupt when the transfer completes. This frees the CPU from copying I/O data word by word and significantly improves throughput for I/O-heavy workloads.
2. What is pipelining?
Pipelining is a technique in which instruction processing is divided into stages, so that several instructions can be in different stages of execution at the same time. Each stage performs one step (for example fetch, decode, or execute), and instructions move through the stages in sequence; with n stages, up to n operations can be in flight at once. Pipelining is used to increase the throughput of the processor: each individual instruction takes just as long, but once the pipeline is full, one instruction can complete every cycle.
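As a rough sketch of the throughput gain (the five-stage count and the ideal no-stall assumption are illustrative, not from the text):

```python
def pipelined_cycles(n_instructions, n_stages):
    """Ideal k-stage pipeline with no stalls: the first instruction takes
    k cycles, then one instruction completes every cycle, so n instructions
    take k + (n - 1) cycles total."""
    return n_stages + n_instructions - 1

def unpipelined_cycles(n_instructions, n_stages):
    """Without pipelining, every instruction passes through all k stages serially."""
    return n_stages * n_instructions

print(pipelined_cycles(100, 5))    # 104
print(unpipelined_cycles(100, 5))  # 500
```

With 100 instructions and 5 stages, pipelining finishes in 104 cycles instead of 500, close to a 5x throughput improvement.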
3. What are superscalar and VLIW machines?
As superscalar machines become more complex, the difficulty of scheduling instruction issue grows. Another way of looking at superscalar machines is as dynamic instruction schedulers: the hardware decides on the fly which instructions to execute in parallel, out of order, and so on. An alternative approach is to have the compiler do this beforehand, that is, to statically schedule execution. This is the basic concept behind Very Long Instruction Word (VLIW) machines.
4. What is cache?
Cache is a component that transparently stores data so that future requests for that data can be served faster. The data stored in a cache might be values that have been computed earlier or duplicates of original values that are stored elsewhere. If requested data is contained in the cache (cache hit), the request can be served by simply reading the cache, which is comparatively fast. Otherwise (cache miss), the data has to be recomputed or fetched from its original storage location, which is comparatively slow. Hence, the more requests can be served from the cache, the faster the overall system performs. Caching is often considered a performance-enhancement tool rather than a way to store application data. If you spend significant server resources accessing the same data repeatedly, use caching instead. Caching data can bring huge performance benefits, so whenever you find that you need frequent access to data that doesn't change often, cache it in the cache object and your application's performance will improve.
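On the software side, the "cache it" advice can be sketched in Python with a memoizing cache; the squaring function is a stand-in for a slow recomputation or remote fetch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fetch(key):
    # stand-in for an expensive recomputation or a database/remote fetch
    return key * key

fetch(12)                       # cache miss: computed and stored
fetch(12)                       # cache hit: served straight from the cache
print(fetch.cache_info().hits)  # 1
```

The second call never runs the function body, which is exactly the hit/miss behavior described above.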
5. What is cache coherency and how is it eliminated?
Cache coherence (or cache coherency) refers to a number of ways to make sure all the caches of a resource hold the same data, and that the data in the caches makes sense (called data integrity). Cache coherence is a special case of memory coherence. Problems can arise when there are many caches of a common memory resource: data in one cache may no longer make sense, or one cache may no longer hold the same data as the others. A common case where the problem occurs is the caches of CPUs in a multiprocessing system. Coherence is typically maintained in hardware by a coherence protocol, either bus snooping (each cache observes bus traffic and invalidates or updates its copies, as in the MESI protocol) or a directory-based scheme that tracks which caches hold each line.
6. What is write back and write through caches?
A write-back cache is a caching method in which modifications to data in the cache are not copied to the cache source until absolutely necessary. A write-through cache performs all write operations in parallel: data is written to main memory and the L1 cache simultaneously. Write-back caching yields somewhat better performance than write-through caching because it reduces the number of write operations to main memory. With this performance improvement comes a slight risk that data may be lost if the system crashes.
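A minimal sketch of the two policies, using a dict as the backing store (the class names and structure are invented for illustration):

```python
class WriteThroughCache:
    """Every write goes to both the cache and the backing store immediately."""
    def __init__(self, backing):
        self.backing, self.lines = backing, {}
    def write(self, addr, value):
        self.lines[addr] = value
        self.backing[addr] = value          # written through on every store

class WriteBackCache:
    """Writes stay in the cache (marked dirty) until the line is flushed/evicted."""
    def __init__(self, backing):
        self.backing, self.lines, self.dirty = backing, {}, set()
    def write(self, addr, value):
        self.lines[addr] = value
        self.dirty.add(addr)                # backing store not touched yet
    def flush(self):
        for addr in self.dirty:
            self.backing[addr] = self.lines[addr]
        self.dirty.clear()

mem = {}
wb = WriteBackCache(mem)
wb.write(0x10, 7)
print(0x10 in mem)   # False: main memory is stale until the flush
wb.flush()
print(mem[0x10])     # 7
```

The window between `write` and `flush` is exactly where the crash-loss risk mentioned above lives.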
7. What are different pipelining hazards and how are they eliminated?
A pipeline hazard is a situation that prevents the next instruction from executing in its designated clock cycle. There are three kinds: structural hazards (two instructions need the same hardware resource at the same time), data hazards (an instruction depends on the result of an earlier instruction that is still in the pipeline), and control hazards (the pipeline does not yet know the outcome of a branch). They are handled by stalling the pipeline (inserting bubbles), forwarding or bypassing results between stages, duplicating hardware resources, and branch prediction.
8. What are different stages of a pipe?
There are two types of pipelines. In an instruction pipeline, the different stages of instruction fetch and execution are handled along the stages of the pipeline. In an arithmetic pipeline, the different stages of an arithmetic operation are handled along the stages of the pipeline.
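The classic five-stage instruction pipeline (IF, ID, EX, MEM, WB; the stage names are the standard RISC ones, assumed here rather than taken from the text) can be visualized cycle by cycle:

```python
STAGES = ["IF", "ID", "EX", "MEM", "WB"]   # fetch, decode, execute, memory, write-back

def pipeline_diagram(n_instructions):
    """Cycle-by-cycle occupancy: instruction i enters stage s at cycle i + s."""
    rows = []
    for i in range(n_instructions):
        rows.append(["  "] * i + STAGES + ["  "] * (n_instructions - 1 - i))
    return rows

for row in pipeline_diagram(3):
    print(" ".join(row))
```

Each row is one instruction; reading a column shows that every stage works on a different instruction in the same cycle, which is the essence of pipelining.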
9. Explain more about branch prediction in controlling the control hazards.
A control hazard arises because the pipeline fetches new instructions before a branch is resolved. A branch predictor guesses the outcome (taken or not taken) and the target address, so that fetching can continue without stalling. Static prediction always guesses the same way (for example, backward branches taken, forward branches not taken); dynamic prediction uses hardware tables of recent branch history, such as 2-bit saturating counters and branch target buffers. On a correct prediction the pipeline loses no cycles; on a misprediction the wrongly fetched instructions are squashed and fetch restarts at the correct address, paying a misprediction penalty.
10. Give examples of data hazards with pseudo codes.
A hazard is an error in the operation of the microcontroller, caused by the simultaneous execution of multiple stages in a pipelined processor. There are three types of hazards: Data hazards, control hazards, and structural hazards. Here is more: https://en.wikibooks.org/wiki/Microprocessor_Design/Hazards
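Since the question asks for pseudocode, here is a small sketch classifying the three data-hazard kinds between two instructions; the encoding of an instruction as a (destination, sources) tuple is my own:

```python
def classify_hazard(i1, i2):
    """Classify the dependence of a later instruction i2 on an earlier i1.
    Each instruction is (dest_register, {source_registers})."""
    d1, s1 = i1
    d2, s2 = i2
    if d1 in s2:
        return "RAW"   # read after write: true dependence
    if d2 in s1:
        return "WAR"   # write after read: anti-dependence
    if d1 == d2:
        return "WAW"   # write after write: output dependence
    return None        # no data hazard

# i1: r1 = r2 + r3 ; i2: r4 = r1 + r5  -> i2 reads r1 before i1 has written it back
print(classify_hazard(("r1", {"r2", "r3"}), ("r4", {"r1", "r5"})))  # RAW
# i1: r4 = r1 + r5 ; i2: r1 = r2 + r3  -> i2 overwrites r1 that i1 still reads
print(classify_hazard(("r4", {"r1", "r5"}), ("r1", {"r2", "r3"})))  # WAR
```

RAW hazards are resolved by forwarding or stalls; WAR and WAW only matter with out-of-order execution and are removed by register renaming.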
11. How do you calculate the number of sets given its way and size in a cache?
A cache in the primary storage hierarchy contains cache lines that are grouped into sets. If each set contains k lines then we say that the cache is k-way associative. The number of sets is therefore the total cache size divided by (line size × k).
A data request has an address specifying the location of the requested data. Each cache-line sized chunk of data from the lower level can only be placed into one set. The set that it can be placed into depends on its address. This mapping between addresses and sets must have an easy, fast implementation. The fastest implementation involves using just a portion of the address to select the set. When this is done, a request address is broken up into three parts:
- An offset part identifies a particular location within a cache line.
- A set part identifies the set that contains the requested data.
- A tag part must be saved in each cache line along with its data to distinguish different addresses that could be placed in the set.
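The formula and the three-way address split above can be sketched like this (the 32 KiB / 64-byte / 8-way figures are example values, not from the text):

```python
import math

def num_sets(cache_size, line_size, ways):
    """Number of sets = total cache size / (line size * associativity)."""
    return cache_size // (line_size * ways)

def split_address(addr, line_size, sets):
    """Break a request address into (tag, set index, offset) parts."""
    offset_bits = int(math.log2(line_size))
    set_bits = int(math.log2(sets))
    offset = addr & (line_size - 1)
    set_index = (addr >> offset_bits) & (sets - 1)
    tag = addr >> (offset_bits + set_bits)
    return tag, set_index, offset

sets = num_sets(32 * 1024, 64, 8)   # 32 KiB cache, 64-byte lines, 8-way
print(sets)                          # 64
print(split_address(0x12345678, 64, sets))
```

With 64-byte lines and 64 sets, the low 6 bits are the offset, the next 6 bits select the set, and the remaining high bits are the tag.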
12. How is a block found in a cache?
Each place in the cache records a block's tag (as well as its data). Of course a place in the cache may be unoccupied, so each place usually maintains a valid bit. To find a block in the cache: 1. Use the index field of the block address to determine the place (or set of places). 2. For that place (or each place in the set), check that the valid bit is set and compare the tag with that of the block address; this can be done in parallel for all places in a set.
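Step 2 above, sketched over a single set with lines as (valid, tag, data) tuples; the loop is sequential here, but hardware compares all ways of the set in parallel:

```python
def lookup(cache_set, tag):
    """Search one set for a valid line whose tag matches the request."""
    for line in cache_set:            # each line is (valid_bit, tag, data)
        valid, line_tag, data = line
        if valid and line_tag == tag:
            return data               # cache hit
    return None                       # cache miss

s = [(True, 0x1A, "blockA"), (False, 0x2B, "stale"), (True, 0x2B, "blockB")]
print(lookup(s, 0x2B))  # blockB: the second way matches the tag but is invalid
print(lookup(s, 0x99))  # None: miss
```

Note that the invalid line with tag 0x2B is correctly skipped, which is exactly what the valid bit is for.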
13. Scoreboard analysis.
Scoreboarding is a centralized method, used in the CDC 6600 computer, for dynamically scheduling a pipeline so that the instructions can execute out of order when there are no conflicts and the hardware is available. In a scoreboard, the data dependencies of every instruction are logged. Instructions are released only when the scoreboard determines that there are no conflicts with previously issued and incomplete instructions. If an instruction is stalled because it is unsafe to continue, the scoreboard monitors the flow of executing instructions until all dependencies have been resolved before the stalled instruction is issued.
14. What is miss penalty and give your own ideas to eliminate it?
The fraction or percentage of accesses that result in a hit is called the hit rate. The fraction or percentage of accesses that result in a miss is called the miss rate. hit rate + miss rate = 1.0 (100%). The difference between lower-level access time and cache access time is called the miss penalty. It can be reduced by adding further cache levels between the cache and main memory, returning the critical word first and restarting the CPU early, using write buffers so that reads do not wait behind writes, and using non-blocking caches that keep serving hits while a miss is outstanding.
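These quantities combine into the standard average memory access time formula (AMAT is a textbook metric, not defined in the text above; the numbers are illustrative):

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time = hit time + miss rate * miss penalty."""
    return hit_time + miss_rate * miss_penalty

# 1-cycle hit, 5% miss rate, 100-cycle miss penalty
print(amat(1, 0.05, 100))  # about 6 cycles per access on average
```

Even a small miss rate dominates the average when the miss penalty is large, which is why the techniques above attack the penalty directly.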
15. How do you improve the cache performance?
1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.
CPU time = (CPU execution clock cycles + Memory stall clock cycles) x Clock cycle time
Memory stall clock cycles = Reads x Read miss rate x Read miss penalty + Writes x Write miss rate x Write miss penalty
Memory stall clock cycles = Memory accesses x Miss rate x Miss penalty
CPU time = IC x (CPI_execution + Memory accesses per instruction x Miss rate x Miss penalty) x Clock cycle time, where hits are included in CPI_execution
Misses per instruction = Memory accesses per instruction x Miss rate
CPU time = IC x (CPI_execution + Misses per instruction x Miss penalty) x Clock cycle time
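The last formula, sketched with made-up example numbers (all parameter values below are illustrative):

```python
def cpu_time(ic, cpi_execution, misses_per_instr, miss_penalty, clock_cycle_time):
    """CPU time = IC x (CPI_execution + misses/instr x miss penalty) x cycle time."""
    return ic * (cpi_execution + misses_per_instr * miss_penalty) * clock_cycle_time

# 1M instructions, base CPI 1.0, 1.5 memory accesses per instruction,
# 2% miss rate, 50-cycle miss penalty, 1 ns clock cycle
misses_per_instr = 1.5 * 0.02
print(cpu_time(1_000_000, 1.0, misses_per_instr, 50, 1e-9))  # about 0.0025 s
```

Note that memory stalls here add 1.5 cycles per instruction to a base CPI of 1.0, i.e. the cache misses more than double execution time, which is the point of the formulas.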
Check these articles for more information: http://ece-research.unm.edu/jimp/611/slides/chap5_2.html, https://www.cs.duke.edu/courses/fall06/cps220/lectures/PPT/lect12.pdf
16. Different addressing modes.
Addressing modes are an aspect of the instruction set architecture in most central processing unit (CPU) designs. The various addressing modes that are defined in a given instruction set architecture define how machine language instructions in that architecture identify the operand (or operands) of each instruction. An addressing mode specifies how to calculate the effective memory address of an operand by using information held in registers and/or constants contained within a machine instruction or elsewhere.
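As one concrete case, x86-style indexed addressing computes the effective address as base + index x scale + displacement (the register values below are invented for illustration):

```python
def effective_address(base, index, scale, displacement):
    """x86-style indexed addressing: EA = base + index * scale + displacement."""
    return base + index * scale + displacement

# Typical array element access: the base register holds the array start,
# the index register the element number, and scale the element size in bytes.
print(effective_address(0x1000, 3, 8, 0))  # 0x1018 = 4120
```

Simpler modes fall out as special cases: register-indirect is base alone, and direct addressing is displacement alone.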
17. Computer arithmetic with two's complements.
The two's complement of a binary number is defined as the value obtained by subtracting the number from a large power of two (specifically, from 2^N for an N-bit two's complement). The two's complement of the number then behaves like the negative of the original number in most arithmetic, and it can coexist with positive numbers in a natural way. A two's-complement system or two's-complement arithmetic is a system in which negative numbers are represented by the two's complement of the absolute value; this system is the most common method of representing signed integers on computers. In such a system, a number is negated (converted from positive to negative or vice versa) by computing its two's complement. An N-bit two's-complement numeral system can represent every integer in the range −2^(N−1) to +2^(N−1) − 1.
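The definition can be checked directly; a small sketch of computing a two's complement and reading an N-bit word back as a signed value:

```python
def twos_complement(x, n_bits):
    """Two's complement of x in an n-bit word: 2**n_bits - x (mod 2**n_bits)."""
    return (2**n_bits - x) % 2**n_bits

def to_signed(u, n_bits):
    """Reinterpret an n-bit unsigned value as a signed two's-complement integer."""
    return u - 2**n_bits if u >= 2**(n_bits - 1) else u

print(twos_complement(1, 8))  # 255: 0b11111111 represents -1 in 8 bits
print(to_signed(255, 8))      # -1
print(to_signed(127, 8))      # 127: the 8-bit range is -128 .. 127
```

Negating twice returns the original value, which is why addition and subtraction work uniformly on signed and unsigned bit patterns.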
18. About hardware and software interrupts.
Hardware interrupt: each CPU has external interrupt lines. External devices like the keyboard, mouse, and other controllers can send signals to the CPU asynchronously on these lines. Software interrupt: an interrupt generated within a processor by executing an instruction. Software interrupts are often used to implement system calls, because they perform a subroutine call together with a CPU ring-level change.
19. What is bus contention and how do you eliminate it?
Bus contention occurs when more than one device attempts to drive the shared bus simultaneously. It can be reduced by bus arbitration, which grants the bus to one master at a time, and by using a hierarchical bus architecture that keeps local traffic off the main bus.
20. What is aliasing?
In computing, aliasing describes a situation in which a data location in memory can be accessed through different symbolic names in the program. Thus, modifying the data through one name implicitly modifies the values associated with all aliased names, which may not be expected by the programmer. As a result, aliasing makes it particularly difficult to understand, analyze, and optimize programs. Alias analyses aim to compute useful information about aliasing in programs.
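The same effect shows up directly in Python, where two names can refer to one list object:

```python
a = [1, 2, 3]
b = a          # b aliases the same list object as a
b[0] = 99      # modifying the data through one name...
print(a[0])    # 99: ...is visible through the other name
```

This is exactly the situation described above: the write through `b` implicitly changes what `a` sees, because both names denote one memory location.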
21. What is the difference between a latch and a flip flop?
A latch is level-triggered: while its enable signal is active, the output follows the input (the latch is "transparent"). A flip-flop is edge-triggered: it samples its input and changes its output only on a clock edge. Because of this, flip-flops are the standard storage element in synchronous designs, while latches are used where transparency is wanted or as the building blocks of flip-flops.
22. What is the race around condition? How can it be overcome?
The race-around condition occurs in a level-triggered JK flip-flop when J = K = 1 and the clock pulse is longer than the propagation delay of the flip-flop: the output toggles repeatedly for as long as the clock stays high, so the final state is unpredictable. It can be overcome by using a master-slave JK flip-flop or an edge-triggered flip-flop, or by keeping the clock pulse width shorter than the propagation delay.
23. What are the types of memory management?
Swapping and demand paging are the main memory management types. Memory management is the act of managing computer memory. In its simpler forms, this involves providing ways to allocate portions of memory to programs at their request, and freeing them for reuse when no longer needed. The management of main memory is critical to the computer system.