
Computer Architecture Questions on Technical Interview

What are architecture questions on a technical interview? Architecture questions that can be asked at interviews in IT companies are always an interesting topic. You can prepare with theoretical materials and still discover that there is something you have never heard about. Computer science questions are always difficult to manage, because the field grows so fast and becomes more complex.

1. Explain what DMA (Direct Memory Access) is.

DMA (Direct Memory Access) is a capability that lets hardware subsystems such as disk controllers, network cards, and graphics adapters transfer data to and from main memory without involving the CPU in every byte of the transfer. The CPU programs the DMA controller with the source, destination, and length of the transfer, continues doing other work while the transfer runs, and is notified by an interrupt when it completes. This offloads the CPU and greatly improves throughput for bulk data movement.

2. What is pipelining?

Pipelining is a technique in which instruction processing is split into stages, and different instructions occupy different stages at the same time. Each stage performs one part of the work, so with n stages up to n instructions can be in flight simultaneously. Pipelining is used to increase the throughput of the processor: once the pipeline is full, one instruction can complete every cycle, even though the latency of each individual instruction does not improve. A rough cycle-count comparison is sketched below. Pipelining FAQ.
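A minimal sketch (not from the article) comparing cycle counts for purely sequential execution and a classic five-stage pipeline, assuming one stage per clock cycle and no hazards or stalls:

```python
def sequential_cycles(num_instructions: int, num_stages: int = 5) -> int:
    # Without pipelining, every instruction occupies the datapath for all stages.
    return num_instructions * num_stages


def pipelined_cycles(num_instructions: int, num_stages: int = 5) -> int:
    # With pipelining, the first instruction takes num_stages cycles to fill
    # the pipe; each later instruction completes one cycle after the previous.
    return num_stages + (num_instructions - 1)


if __name__ == "__main__":
    n = 100
    print("sequential:", sequential_cycles(n))  # 500 cycles
    print("pipelined: ", pipelined_cycles(n))   # 104 cycles
```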

3. What are superscalar and VLIW machines?

As superscalar machines become more complex, the difficulty of scheduling instruction issue grows. One way of looking at superscalar machines is as dynamic instruction schedulers: the hardware decides on the fly which instructions to execute in parallel, out of order, and so on. An alternative approach is to have the compiler do this beforehand, that is, to schedule execution statically. This is the basic concept behind Very Long Instruction Word (VLIW) machines.

4. What is cache?

Cache is a component that transparently stores data so that future requests for that data can be served faster. The data stored within a cache might be values that have been computed earlier or duplicates of original values stored elsewhere. If requested data is contained in the cache (cache hit), the request can be served by simply reading the cache, which is comparatively fast. Otherwise (cache miss), the data has to be recomputed or fetched from its original storage location, which is comparatively slow. Hence, the more requests that can be served from the cache, the faster the overall system performs. Caching is often considered a performance-enhancement tool rather than a way to store application data. If you spend server resources accessing the same data repeatedly, use caching instead. Caching data can bring huge performance benefits, so whenever you find that you frequently need data that does not change often, cache it and your application's performance will improve. A minimal sketch of the hit/miss behaviour follows.
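An illustrative sketch only (the function names are invented for this example and do not come from any particular framework): a tiny in-memory cache that serves repeated requests for an expensive computation.

```python
import time

_cache = {}  # maps inputs to previously computed results


def expensive_compute(x: int) -> int:
    time.sleep(0.1)  # stand-in for a slow recomputation or remote fetch
    return x * x


def cached_compute(x: int) -> int:
    if x in _cache:               # cache hit: served straight from memory
        return _cache[x]
    value = expensive_compute(x)  # cache miss: recompute, then store
    _cache[x] = value
    return value


if __name__ == "__main__":
    cached_compute(7)  # miss, takes ~0.1 s
    cached_compute(7)  # hit, effectively instant
```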

5. What is cache coherency and how is it maintained?

Cache coherence (or cache coherency) refers to the mechanisms used to make sure all caches of a shared resource hold consistent data, so that the data in the caches makes sense (data integrity). Cache coherence is a special case of memory coherence. Problems arise when there are many caches of a common memory resource: data in one cache may become stale, or one cache may no longer hold the same value as the others. The common case is the per-CPU caches in a multiprocessing system, which are kept consistent with coherence protocols such as snooping or directory-based schemes.


6. What are write-back and write-through caches?

A write-back cache is a caching method in which modifications to data in the cache are not copied to the backing store until absolutely necessary. A write-through cache performs all write operations in parallel: data is written to main memory and to the L1 cache simultaneously. Write-back caching yields somewhat better performance than write-through caching because it reduces the number of write operations to main memory. With this performance improvement comes a slight risk that data may be lost if the system crashes before modified lines are written back. A behavioural sketch of the two policies follows.
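A hypothetical sketch of the two write policies over dict-based stand-ins for the cache and main memory; real hardware tracks dirty bits per cache line rather than per address, so this only shows the difference in when memory gets updated.

```python
class WriteThroughCache:
    def __init__(self, memory: dict):
        self.memory = memory
        self.lines = {}

    def write(self, addr, value):
        self.lines[addr] = value
        self.memory[addr] = value   # every write also goes to main memory


class WriteBackCache:
    def __init__(self, memory: dict):
        self.memory = memory
        self.lines = {}
        self.dirty = set()

    def write(self, addr, value):
        self.lines[addr] = value
        self.dirty.add(addr)        # memory is updated only when the line is evicted

    def evict(self, addr):
        if addr in self.dirty:      # write the modified line back to memory
            self.memory[addr] = self.lines[addr]
            self.dirty.discard(addr)
        self.lines.pop(addr, None)
```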

7. What are different pipelining hazards and how are they eliminated?

A pipeline hazard is a situation that prevents the next instruction in the stream from executing during its designated clock cycle. There are three kinds: structural hazards (two instructions need the same hardware resource at the same time), data hazards (an instruction depends on the result of an earlier instruction that has not yet been produced), and control hazards (the pipeline does not yet know the outcome or target of a branch). They are handled by stalling the pipeline (inserting bubbles), operand forwarding/bypassing, duplicating hardware resources, compiler instruction scheduling, and branch prediction.

8. What are different stages of a pipe?

There are two broad types of pipelines. In an instruction pipeline, the stages of instruction fetch and execution are handled in a pipeline; the classic five stages are instruction fetch (IF), instruction decode (ID), execute (EX), memory access (MEM), and write-back (WB). In an arithmetic pipeline, the stages of an arithmetic operation (for example, floating-point addition) are handled along the stages of a pipeline.

9. Explain how branch prediction is used to handle control hazards.

Branch prediction lets a processor that performs pipelined execution guess the outcome and target of a branch before it is resolved, so instruction fetch does not have to stall on every branch (a control hazard). For call/return sequences this is typically done with return address storage: when a call instruction is predicted, the prediction hardware records the expected return address; when the matching return is predicted, that stored address is sent out as the branch prediction address and instructions are fetched speculatively from it. When the branch or return instruction is actually executed, the predicted address is compared with the real return address; if they differ, the speculatively fetched instructions are discarded and the prediction storage is repaired from the architecturally correct state. Conditional branches are predicted with history-based structures such as two-bit saturating counters and branch target buffers; a toy predictor of that kind is sketched below. Control hazards.
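A minimal sketch of a two-bit saturating-counter predictor, one common dynamic branch-prediction scheme. It is not the return-address mechanism described above, just the simplest illustration of "predict, then correct on the actual outcome".

```python
class TwoBitPredictor:
    """States 0 and 1 predict not-taken; states 2 and 3 predict taken."""

    def __init__(self):
        self.state = 1

    def predict(self) -> bool:
        return self.state >= 2

    def update(self, taken: bool) -> None:
        # Saturating counter: move toward 3 on taken, toward 0 on not-taken.
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)


if __name__ == "__main__":
    predictor = TwoBitPredictor()
    hits = 0
    for taken in [True, True, True, False, True, True]:  # a mostly-taken branch
        hits += predictor.predict() == taken
        predictor.update(taken)
    print(hits, "of 6 predictions correct")  # 4 of 6
```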

10. Give examples of data hazards with pseudo code.

A hazard is a condition that disturbs the smooth operation of a pipelined processor, caused by instructions overlapping in the pipeline. There are three types of hazards: data hazards, control hazards, and structural hazards. Data hazards come in three flavours, named after the ordering that must be preserved: read-after-write (RAW), write-after-read (WAR), and write-after-write (WAW). A pseudo-code sketch is given below. Here is more: https://en.wikibooks.org/wiki/Microprocessor_Design/Hazards
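A sketch that detects RAW, WAR, and WAW hazards between adjacent instructions in a toy three-address pseudo code; the instruction encoding below is invented for this example.

```python
# Each instruction is (destination register, (source registers)).
program = [
    ("r1", ("r2", "r3")),  # ADD r1, r2, r3
    ("r4", ("r1", "r5")),  # SUB r4, r1, r5  -> reads r1 right after it is written (RAW)
    ("r1", ("r6", "r7")),  # MUL r1, r6, r7  -> rewrites r1 read by the previous instruction (WAR)
]


def hazards_between(first, second):
    dest1, srcs1 = first
    dest2, srcs2 = second
    found = []
    if dest1 in srcs2:
        found.append("RAW")  # read after write
    if dest2 in srcs1:
        found.append("WAR")  # write after read
    if dest1 == dest2:
        found.append("WAW")  # write after write
    return found


for i in range(len(program) - 1):
    print(f"instr {i} -> instr {i + 1}:", hazards_between(program[i], program[i + 1]))
```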


11. How do you calculate the number of sets given its way and size in a cache?

A cache in the primary storage hierarchy contains cache lines that are grouped into sets. If each set contains k lines, we say that the cache is k-way associative. The number of sets is therefore the cache size divided by the product of the line size and the associativity: sets = cache size / (line size × ways). A small calculation is sketched after the list below.
A data request has an address specifying the location of the requested data. Each cache-line-sized chunk of data from the lower level can only be placed into one set; which set it goes into depends on its address. This mapping between addresses and sets must have an easy, fast implementation. The fastest implementation uses just a portion of the address to select the set. When this is done, a request address is broken up into three parts:

  • An offset part identifies a particular location within a cache line.
  • A set part identifies the set that contains the requested data.
  • A tag part must be saved in each cache line along with its data to distinguish different addresses that could be placed in the set.
See http://www.d.umn.edu/~gshute/arch/cache-addressing.xhtml for details.
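A sketch of the arithmetic described above, assuming the cache size, line size, and associativity are all powers of two; the function name and the example numbers are illustrative.

```python
def cache_geometry(cache_bytes: int, line_bytes: int, ways: int, addr_bits: int = 32):
    num_lines = cache_bytes // line_bytes
    num_sets = num_lines // ways               # sets = size / (line size x ways)
    offset_bits = line_bytes.bit_length() - 1  # log2(line size)
    index_bits = num_sets.bit_length() - 1     # log2(number of sets)
    tag_bits = addr_bits - index_bits - offset_bits
    return num_sets, offset_bits, index_bits, tag_bits


# Example: a 32 KB, 4-way set-associative cache with 64-byte lines on a
# 32-bit address -> 128 sets, 6 offset bits, 7 index bits, 19 tag bits.
print(cache_geometry(32 * 1024, 64, 4))
```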
12. How is a block found in a cache?

Each place in the cache records a block's tag as well as its data. A place may be unoccupied, so each place usually also maintains a valid bit. To find a block in the cache: 1. Use the index bits of the block address to determine the place (or set of places). 2. For that place (or each place in the set), check that the valid bit is set and compare the stored tag with the tag of the block address; this can be done in parallel for all places in a set.

13. Scoreboard analysis.

Scoreboarding is a centralized method, used in the CDC 6600 computer, for dynamically scheduling a pipeline so that the instructions can execute out of order when there are no conflicts and the hardware is available. In a scoreboard, the data dependencies of every instruction are logged. Instructions are released only when the scoreboard determines that there are no conflicts with previously issued and incomplete instructions. If an instruction is stalled because it is unsafe to continue, the scoreboard monitors the flow of executing instructions until all dependencies have been resolved before the stalled instruction is issued.


14. What is the miss penalty? Give your own ideas to reduce it.

The fraction (or percentage) of accesses that result in a hit is called the hit rate, and the fraction that result in a miss is called the miss rate; hit rate + miss rate = 1.0 (100%). The difference between the lower-level access time and the cache access time is called the miss penalty. The miss penalty can be reduced with multi-level caches (an L1 miss that hits in L2 is much cheaper than going to main memory), victim caches, hardware prefetching, returning the critical word first, and giving read misses priority over writes. The sketch below shows how the miss penalty enters the average memory access time.
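A sketch using the standard average-memory-access-time formula (AMAT = hit time + miss rate × miss penalty); the formula is textbook material rather than something stated in the article, and the numbers are made up.

```python
def amat(hit_time: float, miss_rate: float, miss_penalty: float) -> float:
    # Average memory access time, in clock cycles.
    return hit_time + miss_rate * miss_penalty


# 1-cycle hit, 5% miss rate, 100-cycle miss penalty -> 6 cycles on average.
print(amat(hit_time=1.0, miss_rate=0.05, miss_penalty=100.0))
```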

15. How do you improve cache performance?

1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.

CPU time = (CPU execution clock cycles + Memory stall clock cycles) × Clock cycle time
Memory stall clock cycles = Reads × Read miss rate × Read miss penalty + Writes × Write miss rate × Write miss penalty
Memory stall clock cycles = Memory accesses × Miss rate × Miss penalty

CPU time = IC × (CPI_execution + Memory accesses per instruction × Miss rate × Miss penalty) × Clock cycle time (hits are included in CPI_execution)
Misses per instruction = Memory accesses per instruction × Miss rate
CPU time = IC × (CPI_execution + Misses per instruction × Miss penalty) × Clock cycle time

Check these articles for more information: http://ece-research.unm.edu/jimp/611/slides/chap5_2.html, https://www.cs.duke.edu/courses/fall06/cps220/lectures/PPT/lect12.pdf. A worked example that plugs numbers into the formulas above follows.
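A sketch that plugs made-up numbers into the CPU-time formula above; the parameter names simply mirror the terms in the formula.

```python
def cpu_time(ic, cpi_execution, mem_accesses_per_instr, miss_rate, miss_penalty, clock_cycle_time):
    misses_per_instr = mem_accesses_per_instr * miss_rate
    cpi_total = cpi_execution + misses_per_instr * miss_penalty  # memory stalls added to base CPI
    return ic * cpi_total * clock_cycle_time


# 10^9 instructions, base CPI 1.2, 1.3 memory accesses per instruction,
# 2% miss rate, 100-cycle miss penalty, 1 ns clock -> 3.8 seconds.
print(cpu_time(1e9, 1.2, 1.3, 0.02, 100, 1e-9), "seconds")
```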

16. Different addressing modes.

Addressing modes are an aspect of the instruction set architecture in most central processing unit (CPU) designs. The various addressing modes that are defined in a given instruction set architecture define how machine language instructions in that architecture identify the operand (or operands) of each instruction. An addressing mode specifies how to calculate the effective memory address of an operand by using information held in registers and/or constants contained within a machine instruction or elsewhere. Addressing modes
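A sketch of effective-address calculation for a few common addressing modes; the mode names and register layout are illustrative, and real instruction sets define many more variants.

```python
def effective_address(mode: str, registers: dict, operand: int = 0, reg: str = None) -> int:
    if mode == "direct":
        return operand                    # the operand is the full memory address
    if mode == "register_indirect":
        return registers[reg]             # the address is held in a register
    if mode == "base_plus_displacement":
        return registers[reg] + operand   # register contents plus a constant offset
    if mode == "pc_relative":
        return registers["pc"] + operand  # offset from the program counter
    raise ValueError(f"unknown addressing mode: {mode}")


regs = {"r1": 0x1000, "pc": 0x4000}
print(hex(effective_address("base_plus_displacement", regs, 0x20, "r1")))  # 0x1020
print(hex(effective_address("pc_relative", regs, -0x10)))                  # 0x3ff0
```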

17. Computer arithmetic with two's complement.

The two's complement of a binary number is defined as the value obtained by subtracting the number from a large power of two (specifically, from 2^N for an N-bit two's complement). The two's complement of the number then behaves like the negative of the original number in most arithmetic, and it can coexist with positive numbers in a natural way. A two's-complement system or two's-complement arithmetic is a system in which negative numbers are represented by the two's complement of the absolute value; this system is the most common method of representing signed integers on computers. In such a system, a number is negated (converted from positive to negative or vice versa) by computing its two's complement. An N-bit two's-complement numeral system can represent every integer in the range −2^(N−1) to +2^(N−1)−1.
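A sketch of the definition above in code: the two's complement of an N-bit value is computed by subtracting from 2^N, and the resulting bit pattern behaves like the negative number under ordinary modular arithmetic.

```python
def twos_complement(value: int, bits: int) -> int:
    # The bit pattern that represents -value in an N-bit two's-complement system.
    return (2**bits - value) % (2**bits)


def to_signed(pattern: int, bits: int) -> int:
    # Interpret an N-bit pattern as a signed two's-complement integer.
    return pattern - 2**bits if pattern >= 2 ** (bits - 1) else pattern


if __name__ == "__main__":
    neg5 = twos_complement(5, 8)           # 0b11111011 == 251
    print(bin(neg5), to_signed(neg5, 8))   # -5
    print(to_signed((neg5 + 7) % 256, 8))  # -5 + 7 == 2: ordinary addition just works
```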

18. About hardware and software interrupts.

Hardware interrupt: each CPU has external interrupt lines; external devices like the keyboard, mouse, and other controllers can send signals to the CPU asynchronously over them. Software interrupt: an interrupt generated within the processor by executing an instruction. Software interrupts are often used to implement system calls, because they perform a subroutine call together with a CPU ring (privilege) level change. Interrupts

19. What is bus contention and how do you eliminate it?

Bus contention occurs when more than one device (for example, several memory modules or bus masters) attempts to use the shared bus simultaneously. It can be reduced by bus arbitration and by using a hierarchical bus architecture.

20. What is aliasing?

In computing, aliasing describes a situation in which a data location in memory can be accessed through different symbolic names in the program. Modifying the data through one name implicitly modifies the values associated with all aliased names, which may not be expected by the programmer. As a result, aliasing makes it particularly difficult to understand, analyze, and optimize programs. Alias analyses aim to compute useful information about aliasing in programs. Aliasing (computing)

21. What is the difference between a latch and a flip flop?

The difference between a latch and a flip-flop is that a latch does not have a clock signal, whereas a flip-flop always does. Latches are level-sensitive and asynchronous: the output changes very soon after the input changes, as long as the enable is active. A flip-flop is the synchronous, edge-sensitive version of the latch: it samples its input only on a clock edge. A latch is sensitive to glitches on its enable pin, whereas a flip-flop is immune to glitches between clock edges. Latches take fewer gates (and less power) to implement than flip-flops, and they are faster. This is how the outputs differ: the output of a latch follows the data input while it is enabled, whereas with a flip-flop there is a delay of up to one clock cycle before the output updates. A behavioural sketch of both devices follows. Read more: Difference between flip-flops & latches | Answerbag http://www.answerbag.com/q_view/436819
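A behavioural sketch (plain Python classes, invented for this example) that contrasts a level-sensitive D latch with an edge-triggered D flip-flop.

```python
class DLatch:
    """Level-sensitive: transparent whenever enable is high."""

    def __init__(self):
        self.q = 0

    def tick(self, enable: int, d: int) -> int:
        if enable:              # output follows d while enable is high
            self.q = d
        return self.q


class DFlipFlop:
    """Edge-triggered: captures d only on the rising edge of the clock."""

    def __init__(self):
        self.q = 0
        self._last_clk = 0

    def tick(self, clk: int, d: int) -> int:
        if clk and not self._last_clk:  # rising edge detected
            self.q = d
        self._last_clk = clk
        return self.q


latch, flop = DLatch(), DFlipFlop()
print(latch.tick(1, 1), flop.tick(1, 1))  # 1 1  (both capture on the first rising level)
print(latch.tick(1, 0), flop.tick(1, 0))  # 0 1  (latch follows d; flop holds until the next edge)
```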

22. What is the race around condition? How can it be overcome?

The race-around condition occurs in a level-triggered JK flip-flop when both inputs are 1 and the clock pulse stays high for longer than the propagation delay of the flip-flop: the output then toggles repeatedly (races around) within a single clock pulse, so its final value is unpredictable. It can be overcome by using a master-slave JK flip-flop or an edge-triggered flip-flop, or by keeping the clock pulse width shorter than the propagation delay.

23. What are the types of memory management?

Swapping, paging (including demand paging), and segmentation are the main memory management techniques. Memory management is the act of managing computer memory. In its simpler forms, this involves providing ways to allocate portions of memory to programs at their request, and freeing them for reuse when no longer needed. The management of main memory is critical to the computer system. The Memory Management Reference.

References

Roadmap To Microsoft
https://www.cs.duke.edu/courses/fall06/cps220/lectures/PPT/lect12.pdf
Very nice book about Computer Architecture Questions on Technical Interview

Top 30 questions you should ask the interviewer  

Interview Questions for Senior and Mid Software Engineers 

http://cs.stackexchange.com/questions/13356/how-to-calculate-the-tag-index-and-offset-fields-of-different-caches 

http://www.d.umn.edu/~gshute/arch/cache-addressing.xhtml

Summary

A technical interview can be very stressful for candidates, but you can be well prepared for all the tricky questions with additional materials. This article helps you find answers to complex topics in a tech interview, especially architecture questions. If you have better answers to some of the questions, please add them in the comments below this article.

Basic Concepts:

  1. What is computer architecture?
  2. Explain the difference between RISC and CISC architectures.
  3. What are Harvard and von Neumann architectures, and how do they differ?
  4. Explain the concept of pipelining in CPUs.
  5. What are little endian and big endian formats?

Memory Management:

  1. Explain the difference between stack and heap memory.
  2. What is virtual memory, and how does it work?
  3. Describe cache memory and its importance in computer architecture.
  4. How does a CPU deal with cache misses?
  5. Explain paging and segmentation in memory management.

Processing Units:

  1. What is the difference between a CPU, GPU, and TPU?
  2. Explain the concept of parallel processing.
  3. What are SIMD and MIMD in the context of parallel computing?
  4. Describe the role of the Arithmetic Logic Unit (ALU).
  5. How do branch predictors work in modern CPUs?

Performance and Optimization:

  1. What factors affect the performance of a processor?
  2. Explain the term 'clock speed' and its impact on CPU performance.
  3. How can you reduce the bottleneck in a computer system?
  4. Discuss the concept of instruction-level parallelism.
  5. What is speculative execution in CPUs?

I/O Systems and Buses:

  1. Describe the role of buses in computer architecture.
  2. What is Direct Memory Access (DMA), and why is it used?
  3. Explain the differences between parallel and serial transmission.
  4. What are interrupts, and how are they handled in computer systems?
  5. Discuss the purpose and types of I/O scheduling algorithms.

Advanced Topics:

  1. What are superscalar processors, and how do they improve performance?
  2. Explain multithreading in the context of CPUs.
  3. What is out-of-order execution in CPU design?
  4. Discuss the challenges and solutions for power management in modern processors.
  5. What is the role of firmware, such as BIOS or UEFI, in a computer system?

When preparing for these questions:

  • Understand the basic concepts thoroughly, as they often serve as a foundation for more complex discussions.
  • Practice explaining complex topics in simple terms, as this demonstrates your understanding and communication skills.
  • Stay updated on current trends and advancements in computer architecture.
  • If you have experience with specific architectures or systems, be prepared to discuss those in detail.

Remember, technical interviews can also involve practical problems or case studies, so be ready to apply your knowledge to real-world scenarios.
