Sunday, April 26, 2015

Core Java Interview Questions

What is the difference between HashMap and Hashtable ?
  • Hashtable is synchronized and thread-safe, while HashMap is not. It is better to synchronize a HashMap externally using Collections.synchronizedMap(hashMap) or to use a ConcurrentMap implementation (see the sketch after this list).
  • HashMap allows one null key and any number of null values, while Hashtable does not allow null keys or null values.
  • HashMap values are iterated using an Iterator. Hashtable is the only class other than Vector which also uses an Enumeration to iterate over its values.
  • The iterator of HashMap is fail-fast, while the enumeration of Hashtable is not.
  • HashMap is much faster and uses less memory than Hashtable, as the former is unsynchronized.
  • Hashtable is a subclass of the Dictionary class, which is obsolete as of JDK 1.7 and is not used anymore. HashMap, on the other hand, is a subclass of AbstractMap; both implement the Map interface.
  • HashMap does not guarantee that the order of the map will remain constant over time.
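
A minimal sketch of the null-handling and synchronization differences (class and variable names are illustrative):

import java.util.Collections;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class MapDifferenceDemo {
    public static void main(String[] args) {
        Map<String, String> hashMap = new HashMap<>();
        hashMap.put(null, "value");          // one null key is allowed
        hashMap.put("key", null);            // null values are allowed

        // Wrapping a HashMap for thread safety instead of using Hashtable
        Map<String, String> syncMap = Collections.synchronizedMap(hashMap);
        System.out.println(syncMap.size());

        Map<String, String> hashtable = new Hashtable<>();
        try {
            hashtable.put(null, "value");    // Hashtable rejects null keys (and null values)
        } catch (NullPointerException e) {
            System.out.println("Hashtable does not allow null keys: " + e);
        }
    }
}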

Difference between Java Enumeration and Iterator ?
Both Iterator and Enumeration allow traversing the elements of a collection. An Enumeration does not allow any modification of the collection during traversal. An Iterator, on the other hand, allows the caller to remove elements from the underlying collection during iteration with its remove() method. The Iterator also has shorter method names.
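
A small sketch of both traversal styles (variable names are illustrative):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Enumeration;
import java.util.Iterator;
import java.util.List;
import java.util.Vector;

public class TraversalDemo {
    public static void main(String[] args) {
        List<Integer> numbers = new ArrayList<>(Arrays.asList(1, 2, 3, 4));
        Iterator<Integer> it = numbers.iterator();
        while (it.hasNext()) {
            if (it.next() % 2 == 0) {
                it.remove();                          // safe structural modification during iteration
            }
        }
        System.out.println(numbers);                  // [1, 3]

        Enumeration<Integer> en = new Vector<>(numbers).elements();
        while (en.hasMoreElements()) {
            System.out.println(en.nextElement());     // read-only traversal, no remove()
        }
    }
}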

Can static methods be overridden ?
No. Static methods and fields are associated with the class, not with its objects. Overriding depends on having an instance of a class: the point of polymorphism is that objects of the subclasses will have different behaviors for the same methods defined in the superclass. Since a static method is not associated with any instance of a class, the overriding concept is not applicable. Further, we can define class A with method a(), and class B which extends A also with method a(). Since B.a() has the same name as the method in the parent class, it hides A.a(); the compiler uses the declared (compile-time) type of the reference to determine which method to run, as shown below.
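
A minimal sketch of method hiding, assuming the classes A and B described above:

class A {
    static String a() { return "A.a()"; }
}

class B extends A {
    static String a() { return "B.a()"; }    // hides A.a(), does not override it
}

public class HidingDemo {
    public static void main(String[] args) {
        A ref = new B();
        // Resolved against the declared type A, not the runtime type B
        System.out.println(ref.a());          // prints "A.a()" (calling statics via a reference is discouraged)
        System.out.println(B.a());            // prints "B.a()"
    }
}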

When should we use intern method of String ?
A pool of strings, initially empty, is maintained privately by the class String. When the intern() method is invoked, if the pool already contains a string equal to this String object as determined by the equals() method, then the string from the pool is returned. Otherwise, this String object is added to the pool and a reference to it is returned. All literal strings and string-valued constant expressions are interned automatically, so the intern() method is mainly useful for Strings constructed with new String().
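
A small sketch illustrating the effect of intern() (variable names are illustrative):

public class InternDemo {
    public static void main(String[] args) {
        String literal = "hello";                 // placed in the string pool
        String constructed = new String("hello"); // separate object on the heap

        System.out.println(literal == constructed);          // false, different objects
        System.out.println(literal == constructed.intern()); // true, intern() returns the pooled instance
        System.out.println(literal.equals(constructed));     // true, same character content
    }
}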

Why String is Immutable in Java ?
String objects are cached in the String pool, which is maintained privately by the String class for better performance. When a string is created, if an equal string already exists in the pool, the reference to the existing string is returned instead of a new object being created.
Since a cached String literal is shared between multiple clients, there is always a security risk that one client's action would affect all other clients. Strings are also widely used for network connections, opening files, etc., and mutable strings could pose a serious security threat; hence String is immutable. The class is also made final so that no one can compromise the invariants of String (immutability, caching, hashcode, etc.) by overriding its behavior. Because a String is immutable, it is thread-safe, and it can cache its hashcode instead of calculating it every time, for better performance, e.g. when used as a key in a HashMap.

What is the difference between LinkedList and ArrayList ?
LinkedList and ArrayList are two different implementations of the List interface. LinkedList implements it with a doubly-linked list, while ArrayList implements it with a dynamically resizing array. LinkedList allows constant-time insertions and removals through iterators, but only sequential access to elements; finding a position in a LinkedList takes time proportional to the size of the list. ArrayList allows fast random read access to any element in constant time, although adding or removing elements anywhere but at the end requires shifting all the later elements over, either to make an opening or to fill the gap. Also, if more elements are added than the capacity of the underlying array, a new array (1.5 times the size) is allocated and the old array is copied into it, so adding to an ArrayList is O(n) in the worst case, though amortized constant time.
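
A brief sketch of the typical access patterns (names are illustrative; the complexity comments reflect the description above):

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.ListIterator;

public class ListDemo {
    public static void main(String[] args) {
        List<Integer> arrayList = new ArrayList<>();
        List<Integer> linkedList = new LinkedList<>();

        for (int i = 0; i < 5; i++) {
            arrayList.add(i);                 // amortized O(1), may trigger a resize and copy
            linkedList.add(i);                // O(1), new node linked at the tail
        }

        System.out.println(arrayList.get(3)); // O(1) random access via the backing array

        // Constant-time insertion through an iterator already positioned in the list
        ListIterator<Integer> it = linkedList.listIterator(3);  // reaching the position is O(n)
        it.add(99);
        System.out.println(linkedList);
    }
}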

How does Java Garbage Collection work ?
The operating system allocates the heap in advance to be managed by the JVM while the program is running. Object creation is faster because global synchronization with the operating system is not needed for every single object. An allocation claims some portion of a memory array and moves the offset pointer forward for the next allocation.

The Java garbage collector tracks the live objects, and everything else is designated as garbage. When an object is no longer used, the garbage collector reclaims the underlying memory and reuses it for future object allocations. All objects are allocated on the heap area managed by the JVM. As long as an object is being referenced, the JVM considers it alive. Once an object is no longer referenced, and therefore not reachable by the application code, the garbage collector removes it and reclaims the unused memory. Hence any instance that cannot be reached by a live thread, as well as circularly referenced instances that cannot be reached from anywhere else, is handled by the garbage collector.

Every object tree must have one or more root objects. Special objects called garbage-collection roots are always reachable, and so is any object that has a garbage-collection root at its own root.
The garbage collector usually runs a mark-and-sweep algorithm, which carries out a two-step process. The algorithm traverses all object references, starting with the GC roots, and marks every object found as alive. All of the heap memory that is not occupied by marked objects is then reclaimed: it is simply marked as free, essentially swept free of unused objects.
Garbage collection thus removes the cause of classic memory leaks: unreachable-but-not-deleted objects in memory.

The garbage collection process cannot be forced, but the JVM can be requested to initiate it using System.gc() or Runtime.getRuntime().gc(). The JVM can ignore such a request, as garbage collection is not guaranteed to run when requested.

Before evicting an instance and reclaiming its memory, the Java garbage collector invokes the finalize() method of that instance so it gets a chance to free up any resources it holds. Though finalize() is guaranteed to be invoked before the memory is reclaimed, no order or time is specified. The order between multiple instances cannot be predetermined; finalizations can even happen in parallel. Programs should therefore not presume any order between instances when reclaiming resources in the finalize() method.

Any uncaught exception thrown during the finalize process is silently ignored, and the finalization of that instance is cancelled. The finalize() method is never called more than once on an object by the JVM.
Garbage collection runs behind the application in a daemon thread initiated by the JVM.

There are different types of references in Java
-----------------------------------------------------------------------------------------
Reference                       Garbage Collection
-----------------------------------------------------------------------------------------
Strong Reference           Not eligible for garbage collection
Soft Reference               Garbage collection possible but will be done as a last option
Weak Reference             Eligible for Garbage Collection
Phantom Reference        Eligible for Garbage Collection
-----------------------------------------------------------------------------------------
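
A minimal sketch of the four reference types (names are illustrative; whether the weak and soft referents are actually cleared depends on the JVM and available memory):

import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceDemo {
    public static void main(String[] args) {
        Object strong = new Object();                         // strong reference, never collected while reachable

        SoftReference<Object> soft = new SoftReference<>(new Object());   // cleared only under memory pressure
        WeakReference<Object> weak = new WeakReference<>(new Object());   // cleared at the next GC cycle

        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        PhantomReference<Object> phantom =
                new PhantomReference<>(new Object(), queue);  // get() always returns null; used to track reclamation

        System.gc();                                          // request (not force) a collection
        System.out.println("weak ref after GC: " + weak.get());  // likely null
        System.out.println("soft ref after GC: " + soft.get());  // usually still present
        System.out.println("strong ref still alive: " + strong);
    }
}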
When a GC happens it is necessary to completely pause the threads in an application while collection occurs. This is known as Stop The World. For most applications long pauses are not acceptable, so it is important to tune the garbage collector to keep the impact of collections acceptable for the application.

Most applications have a high volume of short-lived objects. Analyzing all objects in an application during a GC would be slow and time consuming, so it makes sense to separate the short-lived objects so that they can be collected quickly. Hence the new generation of the heap is split into an Eden space and Survivor spaces. The New Generation also helps to reduce the impact of fragmentation.

Eden Space: All new objects are placed in eden space.  When it becomes full, a minor GC occurs.  All objects that are still referenced are then promoted to a survivor space.

Survivor spaces: Each GC of the New Generation increments the age of objects in the survivor space. When an object has survived a sufficient number of minor GCs (the threshold normally starts at 15) it is promoted to the Old Generation. Some implementations use two survivor spaces, a From space and a To space. During each collection these swap roles, with all promoted Eden objects and surviving objects moving to the To space, leaving From empty.

Objects that survive long enough in the survivor spaces of the New Generation are promoted to the Old Generation, which is usually much larger than the New Generation. When a GC occurs in the old generation it is known as a full GC. Full GCs are also stop-the-world and tend to take longer, which is why most JVM tuning effort is spent here.

When should we use a volatile variable ?
In a multi-threaded environment, changes made by one thread to an instance variable are not guaranteed to be visible to other threads in the absence of any synchronizers; this is where the volatile variable comes into the picture. The value of a volatile variable is always stored in main memory, so every time a thread accesses the variable, even just for reading, it is refreshed from main memory rather than read from a locally cached copy. Volatile variables therefore also carry a performance cost.
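
A minimal sketch of the common volatile stop-flag idiom (class and field names are illustrative):

public class VolatileFlagDemo {
    // Without volatile, the worker thread might never see the updated value of 'running'
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy work; each read of 'running' sees the latest value from main memory
            }
            System.out.println("Worker stopped");
        });
        worker.start();

        Thread.sleep(100);
        running = false;        // the write is immediately visible to the worker thread
        worker.join();
    }
}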

What is the difference between yield() and sleep() ?
The yield() method causes the current thread to move from the running state to the ready-to-run state, thus relinquishing the CPU and giving other threads in the ready-to-run state a chance to run. It only guarantees that the current thread will relinquish the CPU; it says nothing about which other thread will get the CPU. The current thread is at the mercy of the thread scheduler as to when it will run again, and it is possible for the same thread to get the CPU back immediately. A call to yield() does not affect any locks that the thread might hold.

The call to sleep() causes the currently running thread to pause its execution and move into the sleeping state. The sleep() method does not relinquish any lock that the thread might hold. The thread sleeps for the amount of time specified in the argument and then transitions to the ready-to-run state, where it takes its turn to run again.

What is the difference between wait(), notify() and notifyAll() ?
The wait and notify methods provide means of communication between threads that synchronize on the same object.
The wait() method causes the thread to release the lock and go into a suspended (waiting-for-notification) state. The thread becomes active again only when notify() or notifyAll() is called on the same object.

The notify() method will notify only one thread, while notifyAll() will notify all threads waiting on that monitor or lock. When notify() is called, only one of the waiting threads is woken, but it is not guaranteed which one, as that depends on the JVM implementation. The notifyAll() method, on the other hand, wakes up all the threads waiting on the lock, but all woken threads must still compete for the lock before executing the remaining code. Hence wait() is usually called within a loop: if multiple threads are woken up, the thread that gets the lock first executes and may reset the waiting condition, which forces the subsequent threads to wait again. On being notified, a waiting thread first moves into the blocked-for-lock-acquisition state to acquire the lock on the object, and then into the ready-to-run state. When the notified thread executes, the call to wait() returns and the thread continues with its remaining execution.

In order to call the wait(), notify() or notifyAll() methods in Java, we must hold the lock on the object on which we are calling the method. If wait(), notify() or notifyAll() is not called from a synchronized context, an IllegalMonitorStateException is thrown.

The wait() method releases the lock prior to waiting and reacquires it prior to returning. Hence the lock must be used to ensure that checking and setting the condition are atomic, which can only be achieved by using a synchronized method or block. Further, even after the waiting thread is woken up by a call to notify(), it will not be able to proceed until the notifying thread relinquishes the lock on the object.
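
A minimal sketch of the wait-in-a-loop pattern described above (class and field names are illustrative):

public class WaitNotifyDemo {
    private final Object lock = new Object();
    private boolean dataReady = false;

    public void produce() {
        synchronized (lock) {              // must hold the monitor to call notify()
            dataReady = true;
            lock.notifyAll();              // wake all waiting threads; they still compete for the lock
        }
    }

    public void consume() throws InterruptedException {
        synchronized (lock) {              // must hold the monitor to call wait()
            while (!dataReady) {           // wait in a loop to guard against spurious wakeups
                lock.wait();               // releases the lock, reacquires it before returning
            }
            dataReady = false;             // reset the condition for the next round
        }
    }
}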

What is the difference between Runnable and Callable interface ?
The Callable interface is similar to Runnable, in that both are designed for classes whose instances are potentially executed by another thread. A Runnable, however, does not return a result and cannot throw a checked exception. The Runnable interface declares the run() method to define the task, while the Callable interface uses the call() method.
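
A small sketch contrasting the two interfaces, using an ExecutorService to run them (names are illustrative):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableDemo {
    public static void main(String[] args) throws Exception {
        Runnable task = () -> System.out.println("Runnable: no result, no checked exception");

        Callable<Integer> computation = () -> {
            // call() may return a value and may throw a checked exception
            return 6 * 7;
        };

        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.execute(task);
        Future<Integer> result = pool.submit(computation);
        System.out.println("Callable returned: " + result.get());  // blocks until the result is ready
        pool.shutdown();
    }
}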

When is join() method used ?
A thread can invoke join() on another thread in order to wait for that thread to complete its execution before continuing. The calling thread waits until the other thread completes, or until the waiting thread times out or is interrupted.
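
A minimal sketch (names are illustrative):

public class JoinDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            for (int i = 0; i < 3; i++) {
                System.out.println("worker step " + i);
            }
        });
        worker.start();
        worker.join(1000);    // wait up to one second for the worker to finish, unless interrupted
        System.out.println("main continues after the worker completes or the wait times out");
    }
}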

Why should we use ReentrantLock instead of synchronized ?
ReentrantLock supports lock polling and interruptible lock waits that support time-outs. ReentrantLock also supports a configurable fairness policy, allowing more flexible thread scheduling. Timed and polled lock acquisition allows us to regain control if we cannot acquire all the required locks: we release the ones already acquired and retry. The tryLock() method makes an attempt to acquire the locks, and if it cannot, the already acquired locks are released (thus preventing deadlock) and the attempt is retried. ReentrantLock also supports non-block-structured locking: unlike intrinsic locks, which are always released in the same block in which they were acquired regardless of how control exits the block, a ReentrantLock can be acquired in one method and released in another. Interruptible lock acquisition allows us to try to acquire a lock while remaining responsive to interruption, so the thread can immediately react to an interrupt signal sent from another thread; this makes it possible to cancel threads that are stuck waiting for a lock. ReentrantLock provides options to create a non-fair or a fair lock. With fair locking, threads can acquire the lock only in the order in which they requested it, whereas an unfair lock allows a thread to acquire the lock out of turn, jumping the queue and taking the lock when it becomes available.
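
A minimal sketch of timed, interruptible lock acquisition with tryLock() (class and method names are illustrative):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockDemo {
    private final ReentrantLock lock = new ReentrantLock(true);  // 'true' requests a fair lock

    public boolean doWork() throws InterruptedException {
        // Timed, interruptible lock acquisition: give up instead of blocking forever
        if (lock.tryLock(500, TimeUnit.MILLISECONDS)) {
            try {
                // critical section
                return true;
            } finally {
                lock.unlock();               // always release in a finally block
            }
        }
        return false;                        // could not acquire in time; caller may back off and retry
    }
}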

What is a Daemon thread ?
A daemon thread is a thread whose execution state is not considered when the JVM decides whether it should stop. The JVM stops when all user threads (in contrast to daemon threads) are terminated. A user thread cannot be converted into a daemon thread once it has been started: invoking thread.setDaemon(true) on an already running thread causes an IllegalThreadStateException.
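
A minimal sketch (names are illustrative):

public class DaemonDemo {
    public static void main(String[] args) {
        Thread daemon = new Thread(() -> {
            while (true) {
                // background housekeeping; the JVM will not wait for this thread to finish
            }
        });
        daemon.setDaemon(true);   // must be called before start(), otherwise IllegalThreadStateException
        daemon.start();
        System.out.println("main exits, the JVM stops, and the daemon thread dies with it");
    }
}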

Can a constructor be synchronized ?
No, a constructor cannot be synchronized; using the synchronized keyword on a constructor is a syntax error. The reason is that only the constructing thread should have access to the object while it is being constructed, so locking it is unnecessary.

Is it possible to check whether a thread holds a lock on the given object ?
The class java.lang.Thread provides the static method Thread.holdsLock(Object) that returns true if the current thread holds the lock on the object, given as argument to the method invocation.

What is the purpose of the class java.lang.ThreadLocal ?
As memory is shared between different threads, ThreadLocal provides a way to store and retrieve values for each thread separately. Implementations of ThreadLocal store and retrieve the values for each thread independently in the same instance of ThreadLocal. Instances of ThreadLocal can be used to transport information throughout the application without the need to pass them from method to method.
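
A minimal sketch, assuming a hypothetical per-thread request id:

public class ThreadLocalDemo {
    // Each thread gets its own independently initialized copy of the value
    private static final ThreadLocal<Integer> requestId = ThreadLocal.withInitial(() -> 0);

    public static void main(String[] args) {
        Runnable task = () -> {
            requestId.set((int) (Math.random() * 1000));
            System.out.println(Thread.currentThread().getName() + " sees id " + requestId.get());
            requestId.remove();   // avoid leaking values when threads are pooled and reused
        };
        new Thread(task, "t1").start();
        new Thread(task, "t2").start();
    }
}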

What is Compare-And-Swap and which Java classes use it ?
Compare-And-Swap (CAS) means that the processor provides an atomic instruction that updates a memory location only if its current value equals an expected value. CAS operations are used to avoid locking: a thread tries to update a value by providing the value it last read together with the new value to the CAS operation. If another thread has updated the value in the meantime, the expected value no longer matches the current value and the update fails; the thread then reads the new value and tries again. Hence blocking synchronization is replaced by optimistic spin-waiting. The java.util.concurrent.atomic package provides classes such as AtomicInteger and AtomicBoolean, which internally use CAS operations, for example to implement atomic increments.
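
A small sketch using AtomicInteger (the printed values assume the single-threaded sequence shown):

import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);

        counter.incrementAndGet();                 // internally retries a CAS loop until it succeeds

        // Explicit compare-and-set: succeeds only if the current value is still 1
        boolean updated = counter.compareAndSet(1, 5);
        System.out.println(updated + " -> " + counter.get());    // true -> 5

        // An attempt with a stale expected value fails instead of blocking
        System.out.println(counter.compareAndSet(1, 9));          // false
    }
}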

What is a Semaphore ?
A semaphore is a data structure that manages a set of permits that have to be acquired by competing threads. Semaphores are used to control how many threads access a critical section or resource simultaneously. The constructor of java.util.concurrent.Semaphore takes the initial number of permits the threads compete for. Each invocation of its acquire() method tries to obtain one of the available permits; the no-argument acquire() blocks until the next permit becomes available. When a thread has finished its work on the critical resource, it can release the permit by invoking release() on the Semaphore instance. Semaphores are useful for implementing resource pools such as database connection pools.
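
A minimal sketch of a permit-bounded resource, e.g. a hypothetical connection pool:

import java.util.concurrent.Semaphore;

public class BoundedResource {
    private final Semaphore permits = new Semaphore(3);   // at most 3 threads use the resource at once

    public void use() throws InterruptedException {
        permits.acquire();                // blocks until a permit is available
        try {
            // work with the shared resource, e.g. a pooled database connection
        } finally {
            permits.release();            // always hand the permit back
        }
    }
}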

What is a CountDownLatch ?
The CountDownLatch class provides a synchronization aid that can be used to implement scenarios in which threads have to wait until some other threads have reached the same state, so that they can all proceed. This is done by providing a synchronized counter that is decremented until it reaches zero. Having reached zero, the CountDownLatch instance lets all waiting threads proceed. It can be used either to let all threads start at a given point in time, by using the value 1 for the counter, or to wait until a number of threads have finished. In the latter case the counter is initialized with the number of threads, and each thread that finishes its work counts the latch down by one.
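
A minimal sketch of the wait-for-N-workers case (names are illustrative):

import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        CountDownLatch done = new CountDownLatch(workers);   // initialized with the number of workers

        for (int i = 0; i < workers; i++) {
            new Thread(() -> {
                // do some work ...
                done.countDown();          // each finished worker counts the latch down by one
            }).start();
        }

        done.await();                      // the main thread blocks until the count reaches zero
        System.out.println("all workers finished");
    }
}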

What is the difference between a CountDownLatch and a CyclicBarrier ?
Both classes maintain internally a counter that is decremented by different threads. The threads wait until the internal counter reaches the value zero and proceed from there on. But in contrast to the CountDownLatch the class CyclicBarrier resets the internal value back to the initial value once the value reaches zero. As the name indicates instances of CyclicBarrier can therefore be used to implement use cases where threads have to wait on each other again and again.
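
A minimal sketch in which the same barrier is reused across rounds (names are illustrative):

import java.util.concurrent.CyclicBarrier;

public class BarrierDemo {
    public static void main(String[] args) {
        // The barrier resets automatically, so the same 3 threads can meet at it round after round
        CyclicBarrier barrier = new CyclicBarrier(3, () -> System.out.println("round complete"));

        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                try {
                    for (int round = 0; round < 2; round++) {
                        // phase of work for this round ...
                        barrier.await();   // wait until all 3 threads reach this point
                    }
                } catch (Exception e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}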
